\begin{document} \newtheorem{theorem}[subsection]{Theorem} \newtheorem{proposition}[subsection]{Proposition} \newtheorem{lemma}[subsection]{Lemma} \newtheorem{corollary}[subsection]{Corollary} \newtheorem{conjecture}[subsection]{Conjecture} \newtheorem{prop}[subsection]{Proposition} \numberwithin{equation}{section} \newcommand{\ensuremath{\mathbb R}}{\ensuremath{\mathbb R}} \newcommand{\ensuremath{\mathbb C}}{\ensuremath{\mathbb C}} \newcommand{\mathrm{d}}{\mathrm{d}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{\mathbb{H}}{\mathbb{H}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\mathfrak{M}}{\mathfrak{M}} \newcommand{\mathfrak{m}}{\mathfrak{m}} \newcommand{\mathfrak{S}}{\mathfrak{S}} \newcommand{\ensuremath{\mathfrak A}}{\ensuremath{\mathfrak A}} \newcommand{\ensuremath{\mathbb N}}{\ensuremath{\mathbb N}} \newcommand{\ensuremath{\mathbb Q}}{\ensuremath{\mathbb Q}} \newcommand{\half}{\tfrac{1}{2}} \newcommand{f\times \chi}{f\times \chi} \newcommand{\mathop{{\sum}^{\star}}}{\mathop{{\sum}^{\star}}} \newcommand{\chi \bmod q}{\chi \bmod q} \newcommand{\chi \bmod db}{\chi \bmod db} \newcommand{\chi \bmod d}{\chi \bmod d} \newcommand{\text{sym}^2}{\text{sym}^2} \newcommand{\hhalf}{\tfrac{1}{2}} \newcommand{\sumstar}{\sideset{}{^*}\sum} \newcommand{\sumprime}{\sideset{}{'}\sum} \newcommand{\sumprimeprime}{\sideset{}{''}\sum} \newcommand{\ensuremath{\negthickspace \negthickspace \negthickspace \pmod}}{\ensuremath{\negthickspace \negthickspace \negthickspace \pmod}} \newcommand{\V}{V\left(f\times \chirac{nm}{q^2}\right)} \newcommand{\mathop{{\sum}^{\dagger}}}{\mathop{{\sum}^{\dagger}}} \newcommand{\ensuremath{\mathbb Z}}{\ensuremath{\mathbb Z}} \newcommand{\leg}[2]{\left(f\times \chirac{#1}{#2}\right)} \newcommand{\mu_{\omega}}{\mu_{\omega}} \newcommand{\sumflat}{\sideset{}{^f\times \chilat}\sum} \newcommand{\tfrac12}{\tfrac12} \newcommand{\lambda}{\lambdabda} \title[Mean squares of real character sums]{Mean squares of quadratic twists of the M\"obius function} \author[P. Gao]{Peng Gao} \address{School of Mathematical Sciences, Beihang University, Beijing 100191, China} \email{[email protected]} \author[L. Zhao]{Liangyi Zhao} \address{School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia} \email{[email protected]} \begin{abstract} In this paper, we evaluate asymptotically the sum \[ \sum_{d \leq X} \left( \sum_{n \leq Y} \mu(n)\leg {8d}{n} \right)^2, \] where $\leg {8d}{n}$ is the Kronecker symbol and $d$ runs over positive, odd, square-free integers. \end{abstract} \maketitle \noindent {\bf Mathematics Subject Classification (2010)}: 11N37, 11L05, 11L40 \newline \noindent {\bf Keywords}: mean square, quadratic Dirichlet character, M\"obius function \section{Introduction} Let $\mu$ denote the M\"obius function and the corresponding Mertens function $M(x)$ is defined to be \begin{align*} M(x) = \sum_{n \leq x}\mu(n). \end{align*} The size of $M(x)$ is inextricably connected with the Riemann hypothesis (RH). It is known (see \cite{Sound09-1}) that RH is equivalent to \begin{align} \label{Mbound} M(x) \ll x^{1/2+\varepsilon}, \end{align} for any $\varepsilon>0$. \newline There have been a number of subsequent refinements of the bounds in \eqref{Mbound}, all under RH. In \cite{Landau24}, E. 
Landau proved that \eqref{Mbound} is valid with $\varepsilon \ll \log \log \log x/\log \log x$. This bound was improved to $\varepsilon \ll 1/\log \log x$ by E. C. Titchmarsh \cite{Titchmarsh27}, to $\varepsilon \ll (\log x)^{-22/61}$ by H. Maier and H. L. Montgomery \cite{MM09} and by K. Soundararajan \cite{Sound09-1} to \begin{align*} M(x) \ll x^{1/2}\exp \big ((\log x)^{1/2}(\log \log \log x)^{14} \big ). \end{align*} The power $(\log \log \log x)^{14}$ in the above expression has been improved to $(\log \log \log x)^{5/2+\varepsilon}$ for any $\varepsilon >0$ by M. Balazard and A. de Roton in \cite{BR08} upon refining the method of Soundararajan. \newline One may consider more generally the sum with the M\"obius function twisted by a Dirichlet character $\chi$ modulo $q$. More precisely, we define \begin{align*} M(x, \chi) = \sum_{n \leq x}\mu(n)\chi(n). \end{align*} Similar to the relation between $M(x)$ and RH, the size of $M(x, \chi)$ is related to the generalized Riemann hypothesis (GRH) of the corresponding Dirichlet $L$-function $L(s, \chi)$. It follows from Perron’s formula that GRH implies that \begin{align} \label{Mchibound} M(x, \chi) \ll x^{1/2+\varepsilon}, \end{align} for any $\varepsilon >0$. Conversely, \eqref{Mchibound} gives, via partial summation, the convergence of the Dirichlet series of $1/L(s, \chi)$ for any $s > 1/2$, and therefore GRH for $L(s, \chi)$. While studying sums of the M\"obius function in arithmetic progressions, L. Ye \cite{Ye} established that under GRH, uniformly for $q$ and $x$, \begin{align*} M(x, \chi) \ll x^{1/2}\exp \big ((\log x)^{3/5+o(1)}\big ). \end{align*} This improved an earlier result of K. Halupczok and B. Suger \cite[Lemma 1,2]{H&S13}. Moreover, it follows from a general result of H. Maier and A. Sankaranarayanan \cite{M&S16} on multiplicative M\"obius-like functions that $\varepsilon \ll 1/\log \log x$ in \eqref{Mchibound} under GRH, which is comparable to the above mentioned result of Titchmarsh \cite{Titchmarsh27} on $M(x)$. \newline As noted in \cite{MM09}, the behavior of $M(x)$ depends both on the distribution of $|\zeta'( \rho)|$ as $\rho = 1/2 + i\gamma$ runs over the non-trivial zeros of the Riemann zeta function $\zeta(s)$ (under RH), and on the linear independence of the $\gamma$. This makes it difficult to predict the behavior of $M(x)$ in any finer way. For example, a well-known conjecture of Mertens claiming that $|M(x)| \leq \sqrt{x}$ was disproved by A. M. Odlyzko and H. J. J. te Riele \cite{O&R}. In connection to this, one also has the weak Mertens conjecture which asserts that \begin{align*} \int\limits^X_2 \Big ( f\times \chirac {M(x)}{x} \Big )^2 \mathrm{d} x \ll \log X. \end{align*} In \cite[Theorem 3]{Ng04}, N. Ng proved that as $X \rightarrow \infty$, for some constant $c$, \begin{align} \label{Msquareest} \int\limits^X_2 \Big ( f\times \chirac {M(x)}{x} \Big )^2 \mathrm{d} x \sim c\log X, \end{align} provided that one assumes RH and \begin{align*} \sum_{0<\Im(\rho)\le T}|\zeta'(\rho)|^{-2} \ll T. \end{align*} One may interpret \eqref{Msquareest} as a mean square type of estimation for $M(x)$ and in this situation one is able to evaluate the average asymptotically. We are thus motivated to seek for other mean square estimations involving the M\"obius function and it is the aim of our paper to study one such case. \newline To state our result, we write $\chi_{d}$ for the Kronecker symbol $\leg {d}{\cdot}$ and note that if $d$ is odd and square-free, $\chi_{8d}$ is a primitive Dirichlet character. 
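To illustrate the quantities just introduced, the twisted sums $M(Y, \chi_{8d})$ and their mean square over odd, square-free $d \leq X$ can be computed by brute force when $X$ and $Y$ are small. The following Python sketch is purely illustrative and not part of the proof; the helper routines \texttt{mobius}, \texttt{kronecker} and \texttt{M\_twisted} are elementary and ad hoc.
\begin{verbatim}
# Illustrative brute-force computation (not used in the proof):
# M(Y, chi_{8d}) = sum_{n <= Y} mu(n) * (8d/n), then summed in square
# over odd, square-free d <= X.

def mobius(n):
    """Moebius function mu(n) by trial division."""
    if n == 1:
        return 1
    m, p, k = n, 2, 0
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:      # p^2 | n, so mu(n) = 0
                return 0
            k += 1
        p += 1
    if m > 1:
        k += 1
    return -1 if k % 2 else 1

def kronecker(a, n):
    """Kronecker symbol (a/n) for n >= 1 (here a = 8d > 0)."""
    result = 1
    while n % 2 == 0:           # factor the 2's out of n
        n //= 2
        if a % 2 == 0:
            return 0
        if a % 8 in (3, 5):
            result = -result
    a %= n                      # n is now odd: Jacobi-symbol loop
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a             # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def M_twisted(Y, d):
    return sum(mobius(n) * kronecker(8 * d, n) for n in range(1, Y + 1))

X, Y = 100, 100
mean_square = sum(M_twisted(Y, d) ** 2
                  for d in range(1, X + 1, 2) if mobius(d) != 0)
print("sum over d <= X of M(Y, chi_8d)^2 =", mean_square)
\end{verbatim}
Such a direct computation is of course only feasible for small parameters; our interest below is in the asymptotic behavior of this mean square.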
We are interested in the following sum \begin{align*} S(X,Y)=\sumstar_{\substack {0< d \leq X}} M(Y, \chi_{8d} )^2, \end{align*} where the asterisk indicates that $d$ runs over odd and square-free integers. \newline We may view $S(X,Y)$ as a mean square expression involving $M(Y, \chi_{8d})$ and one expects an asymptotic expression for it. In fact, it is not difficult to obtain one if $Y^2<X$ using the P\'olya-Vinogradov inequality to control the contribution of the off-diagonal terms. The situation is more intriguing for larger $Y$'s, especially if $X$ and $Y$ are of comparable size. For instance, the sum \begin{align*} \sum_{\substack {m \leq X \\ (m, 2)=1}}\sum_{\substack {n \leq Y \\ (n, 2)=1}} \leg {m}{n} \end{align*} can be evaluated asymptotically if $Y=o(X/\log X)$ or $X=o(Y/\log Y)$ using the P\'olya-Vinogradov inequality. In \cite{CFS}, J. B. Conrey, D. W. Farmer and K. Soundararajan applied a Poisson summation formula developed by Soundararajan in \cite{sound1} to obtain an asymptotic formula for the other ranges. We also note here that extensions and generalizations of this problem were studied by the authors in \cites{G&Zhao2019, G&Zhao2020, G&Zhao2022}. \newline In studying $S(X,Y)$, we shall also utlize the Poisson summation formula given in \cite{sound1} as well as the techniques developed by K. Soundararajan and M. P. Young \cite{S&Y} in their work on the second moment of quadratic twists of modular $L$-functions. For technical reasons, we consider smoothed sums instead. We thus fix two non-negative, smooth functions $\Phi(x), W(x)$ that are compactly supported on ${\ensuremath{\mathbb R}}_+=(0,\infty)$. Set \begin{align} \label{SXYPW} S(X,Y; \Phi, W)=\sumstar_{\substack {d}} \Big(\sum_{n} \mu(n)\chi_{8d}(n)\Phi \Big(f\times \chirac nX \Big)\Big)^2 W \Big(f\times \chirac d{X} \Big). \end{align} We shall evaluate $S(X,Y; \Phi, W)$ asymptotically as follows. \begin{theorem} \label{meansquare} With the notation as above and assuming the truth of GRH, for large $X$ and $Y$, we have \begin{align} \label{S} \begin{split} S(X,Y; \Phi, W)=& f\times \chirac{4}{\pi^2} XY \widetilde{h}_1(1,1) Z_2(1) + O\left( X^{1/2+\varepsilon}Y^{3/2+\varepsilon}+XY^{1/2+\varepsilon} \right), \end{split} \end{align} where $\widetilde{h}_1(1,1)$ is given in \eqref{h1half} and the function $Z_2(u)$ is defined in \eqref{eq:Z(u,v)}. \end{theorem} One checks that \eqref{S} gives a valid asymptotic formula if $Y \ll X^{1-\varepsilon}$ for any $\varepsilon>0$. \section{Preliminaries} \label{sec 2} We gather first a few auxiliary results necessary in the proof of Theorem \ref{meansquare} in this section. \subsection{Gauss sums} \label{section:Gauss} For all odd integers $k$ and all integers $m$, define the Gauss-type sums $G_m(k)$, as in \cite[Sect. 2.2]{sound1}, \begin{align} \label{G} G_m(k)= \left( f\times \chirac {1-i}{2}+\left( f\times \chirac {-1}{k} \right)f\times \chirac {1+i}{2}\right)\sum_{a \ensuremath{\negthickspace \negthickspace \negthickspace \pmod}{k}}\left( f\times \chirac {a}{k} \right) e \left( f\times \chirac {am}{k} \right), \quad \mbox{where} \quad e(x) = \exp (2 \pi i x) . \end{align} Let $\varphi(m)$ be the Euler totient of $m$. Our next result is taken from \cite[Lemma 2.3]{sound1} and evaluates $G_m(k)$. \begin{lemma} \label{lem1} If $(k_1,k_2)=1$ then $G_m(k_1k_2)=G_m(k_1)G_m(k_2)$. Suppose that $p^a$ is the largest power of $p$ dividing $m$ (put $a=\infty$ if $m=0$). 
Then for $b \geq 1$ we have \begin{equation*} \label{011} G_m(p^b)= \left\{\begin{array}{cl} 0 & \mbox{if $b\leq a$ is odd}, \\ \varphi(p^b) & \mbox{if $b\leq a$ is even}, \\ -p^a & \mbox{if $b=a+1$ is even}, \\ (f\times \chirac {m/p^a}{p})p^a\sqrt{p} & \mbox{if $b=a+1$ is odd}, \\ 0 & \mbox{if $b \geq a+2$}. \end{array}\right. \end{equation*} \end{lemma} \subsection{Poisson Summation} For any smooth function $F$, we write $\hat{F}$ for the Fourier transform of $F$ and we define \begin{equation} \label{tildedef} \widetilde{F}(\xi)=f\times \chirac {1+i}{2}\hat{F}(\xi)+f\times \chirac {1-i}{2}\hat{F}(-\xi)=\int\limits^{\infty}_{-\infty}\left(\cos(2\pi \xi x)+\sin(2\pi \xi x) \right)F(x) \mathrm{d} x. \end{equation} We note the following Poisson summation formula from \cite[Lemma 2.6]{sound1}. \begin{lemma} \label{lem2} Let $W$ be a smooth function compactly supported on ${\ensuremath{\mathbb R}}_+$. We have, for any odd integer $n$, \begin{equation*} \label{013} \sum_{(d,2)=1}\left( f\times \chirac {d}{n} \right) W\left( f\times \chirac {d}{X} \right)=f\times \chirac {X}{2n}\left( f\times \chirac {2}{n} \right) \sum_k(-1)^kG_k(n)\widetilde{W}\left( f\times \chirac {kX}{2n} \right), \end{equation*} where $\widetilde{W}$ is defined in \eqref{tildedef} and $G_k(n)$ is defined in \eqref{G}. \end{lemma} \subsection{Upper bounds for $|L(s, \chi)|^{-1}$} From \cite[Theorem 5.19]{iwakow}, we deduce the following. \begin{lemma} \label{lem:Linversebound} Assume the truth of GRH. For any Dirichlet character $\chi$ modulo $q$ and any $\varepsilon > 0$, we have \begin{equation*} \label{eq:H-B} \big|L(\tfrac 12+\varepsilon+it, \chi)\big|^{-1} \ll \big(q(1+|t|)\big)^{\varepsilon}, \end{equation*} where the implied constant depends on $\varepsilon$ alone. \end{lemma} \subsection{Analytical behaviors of some Dirichlet Series} We define for any square-free $k_1$, \begin{equation} \label{eq:Z} Z(\alpha,\beta,\gamma;q,k_1) = \sum_{k_2=1}^{\infty} \sum_{(n_1,2q)=1} \sum_{(n_2,2q)=1} f\times \chirac{\mu(n_1)\mu(n_2)}{n_1^{\alpha} n_2^{\beta} k_2^{2\gamma}} f\times \chirac{G_{k_1 k_2^2}(n_1 n_2)}{n_1 n_2}, \end{equation} where $G_m(k)$ be defined as in \eqref{G}. Note first that Lemma \ref{lem1} implies that $Z(\alpha,\beta,\gamma;q,k_1)$ converges absolutely when $\Re(\alpha)$, $\Re(\beta)$, and $\Re(\gamma)$ are all strictly greater than $\tfrac 12$. We write $L_c(s, \chi)$ for the Euler product of $L(s, \chi)$ with the factors from $p | c$ removed. Our next lemma describes the analytical behavior of $Z$. \begin{lemma} \label{lemma:Z} The function $Z(\alpha,\beta,\gamma;q,k_1)$ defined in \eqref{eq:Z} may be written as \begin{align} \label{Z2def} L_q(\tfrac 12+\alpha, \chi_{k_1})^{-1}L_q(\tfrac 12+\beta,\chi_{k_1})^{-1}Z_{2}(\alpha,\beta,\gamma;q, k_1), \end{align} where $Z_{2}(\alpha,\beta,\gamma;q,k_1)$ is a function uniformly bounded in the region $\Re(\gamma) \ge f\times \chirac 12+\varepsilon$, $\Re(\alpha), \Re(\beta) \geq \varepsilon$ for any $\varepsilon >0$. \end{lemma} \begin{proof} We deduce from Lemma \ref{lem1} that the summand in \eqref{eq:Z} is jointly multiplicative in terms of $n_1, n_2$, and $k_2$, so that we can express $Z(\alpha,\beta,\gamma;q, k_1)$ as an Euler product over all primes $p$. It suffices to match each Euler factor at $p$ for $Z(\alpha,\beta,\gamma;q,k_1)$ with the corresponding factor in \eqref{Z2def}. 
\newline The contribution of such an Euler factor for the generic case with $p \nmid 2q k_1$ is \begin{equation*} \sum_{k_2, n_1, n_2} f\times \chirac{\mu(n_1)\mu(n_2)}{p^{n_1 \alpha + n_2 \beta +2k_2 \gamma}}f\times \chirac{ G_{k_1 p^{2k_2}}(p^{n_1 + n_2})}{p^{n_1+n_2} }. \end{equation*} If $\Re(\gamma) \ge \tfrac 12+\varepsilon$, $\Re(\alpha)$, $\Re(\beta)\ge \varepsilon$, Lemma \ref{lem1} implies that the contribution from the terms $k_2\ge 1$ is $\ll 1/p^{1+2\varepsilon}$ and the contribution of the term $k_2=0$ is $1-\chi_{k_1}(p) (p^{-1/2-\alpha}+p^{-1/2-\beta})$. This calculation readily implies that this Euler factor for $Z$ matches the corresponding one in \eqref{Z2def}. \newline Similarly, when $\Re(\gamma)\ge f\times \chirac 12+\varepsilon$ and $\Re(\alpha)$, $\Re(\beta)\ge \varepsilon$, Lemma \ref{lem1} implies that the Euler factor for $p | k_1$ but $p \nmid 2q$ equals $$ 1-f\times \chirac{1}{p^{1+\alpha+\beta}} + O\left( f\times \chirac{1}{p^{1+\varepsilon}}\right)=1+O\left( f\times \chirac{1}{p^{1+\varepsilon}}\right). $$ Lastly, the corresponding Euler factor for the case $p|2q$ is $(1-p^{-2\gamma})^{-1}= 1+O(1/p^{1+2\varepsilon})$. The assertion of the lemma now follows from these computations. \end{proof} \section{Proof of Theorem \ref{meansquare}} \label{sec 3} \subsection{Decomposition of ${\mathcal S}(X,Y; \Phi, W)$} \label{section:mainprop} Expanding the square in \eqref{SXYPW} allows us to recast ${\mathcal S}(X,Y; \Phi, W)$ as \[ S(h):= \sumstar_{d} \sum_{n_1} \sum_{n_2} \chi_{8d}(n_1n_2)\mu(n_1)\mu(n_2) h(d,n_1,n_2), \] where $h(x,y,z)=W(f\times \chirac xX)\Phi \left(f\times \chirac {y}{Y} \right )\Phi \left(f\times \chirac {z}{Y} \right )$ is a smooth function on ${\ensuremath{\mathbb R}}_+^3$. We apply the M\"{o}bius inversion to remove the square-free condition on $d$ to obtain that, for an appropriate parameter $Z$ to be chosen later, \begin{eqnarray*} S(h) = \Big(\sum_{\substack{a \leq Z \\ (a,2)=1}} + \sum_{\substack{a > Z \\ (a,2)=1}} \Big) \mu(a) \sum_{(d,2)=1} \sum_{(n_1,a)=1} \sum_{(n_2,a)=1} \chi_{8d}(n_1n_2)\mu(n_1)\mu(n_2)h(da^2, n_1, n_2) =:S_1(h)+ S_2(h), \quad \mbox{say}. \end{eqnarray*} \subsection{Estimating $S_2(h)$} \label{section:S2} We first estimate $S_2(h)$. To this end, writing $d=b^2 \ell$ with $\ell$ square-free, and grouping terms according to $c=ab$, we deduce \begin{equation} \label{eq:S21} S_2(h) = \sum_{(c,2)=1} \sum_{\substack{a > Z \\ a|c}} \mu(a) \sumstar_{\ell} \sum_{(n_1,c)=1} \sum_{(n_2,c)=1} \chi_{8\ell}(n_1 n_2)\mu(n_1)\mu(n_2) h(c^2 \ell, n_1, n_2). \end{equation} Applying Mellin transforms in the variables $n_1$ and $n_2$ yeilds that the inner triple sum over $\ell$, $n_1$, and $n_2$ in \eqref{eq:S21} is \begin{equation} \label{eq:S22} f\times \chirac{1}{(2\pi i)^2} \int\limits_{(1+\varepsilon)} \int\limits_{(1+\varepsilon)} \sumstar_{\ell} {\check h}(c^2 \ell; u, v) \sum_{\substack{n_1, n_2 \\ (n_1n_2, c)=1}} f\times \chirac{\chi_{8\ell}(n_1) \chi_{8\ell}(n_2)\mu(n_1)\mu(n_2) }{n_1^{u}n_2^{v} } \mathrm{d} u \, \mathrm{d} v, \end{equation} where \begin{equation*} {\check h}(x;u,v) = \int\limits_0^{\infty} \int\limits_0^{\infty} h(x,y,z) y^u z^v f\times \chirac{\mathrm{d} y}{y} f\times \chirac{\mathrm{d} z}{z}. \end{equation*} Now integration by parts gives that for $\Re(u)$, $\Re(v) >0$ and any positive integers $A_j$, $1 \leq j \leq 3$, \begin{equation} \label{eq:3.12} {\check h}(x;u,v) \ll \left( 1 + f\times \chirac{x}{X} \right)^{-A_1} f\times \chirac{Y^{\Re(u)+\Re(v)}}{|uv|(1+|u|)^{A_2} (1 + |v|)^{A_3}}. 
\end{equation} Note that the sum over $n_1$ and $n_2$ in \eqref{eq:S22} equals $L^{-1}_c(u,\chi_{8\ell}) L^{-1}_c(v, \chi_{8\ell})$ and we can thus move the lines of integration in \eqref{eq:S22} to $\Re(u)=\Re(v)=1/2+\varepsilon$ without encountering any poles under GRH. Moreover, \begin{align} \label{Linversebound} |L^{-1}_c(u, \chi_{8\ell})L^{-1}_c(v,\chi_{8\ell})| \le d(c)^2 ( |L^{-1}(u, \chi_{8\ell})|^2 + |L^{-1}(v, \chi_{8\ell})|^2), \end{align} where $d(c)$ denotes the value of the divisor function at $c$. \newline We now apply \eqref{eq:3.12} with $A_2=A_3=1$ and $A_1$ sufficiently large and Lemma \ref{lem:Linversebound} to get that the expression in \eqref{eq:S22} is \begin{equation*} \ll d(c)^2 Y^{1+2\varepsilon} \int\limits_{-\infty}^{\infty} (1+|t|)^{-2} \sumstar_{\ell} \left(1 + f\times \chirac{c^2 \ell}{X}\right)^{-A_1} \Big|L(\tfrac 12+\varepsilon +it, \chi_{8\ell})\Big|^{-2} \ \mathrm{d} t \ll d(c)^2 (XY)^{1+\varepsilon} /c^2. \end{equation*} We conclude from the above estimation and \eqref{eq:S21} that \begin{align} \label{S2} S_2(h) \ll (XY)^{1 + \varepsilon} Z^{-1+ \varepsilon}. \end{align} \subsection{Estimating $S_1(h)$, the main term} We evaluate $S_1(h)$ now. Write for brevity $C = \cos$ and $S = \sin$. We then apply the Poisson summation formula, Lemma \ref{lem2}, to deduce that \begin{align} \label{eq:S1} S_1(h)= f\times \chirac{X}{2} \sum_{\substack{a \leq Z \\ (a,2)=1}} f\times \chirac{\mu(a)}{a^2} \sum_{k \in \ensuremath{\mathbb Z}} \sum_{(n_1,2a)=1} \sum_{(n_2,2a)=1} f\times \chirac{(-1)^kG_k(n_1 n_2)\mu(n_1)\mu(n_2)}{n_1 n_2}\int\limits_0^{\infty} h(xX, n_1, n_2) (C + S)\leg{2\pi k xX}{2n_1 n_2 a^2} \mathrm{d} x. \end{align} Let $S_{1,0}(h)$ for the terms in \eqref{eq:S1} with $k=0$. Note that $$ \sum_{\substack{a \leq Z \\ (a,2n_1n_2)=1}} f\times \chirac{\mu(a)}{a^2} = f\times \chirac{1}{\zeta(2)} \prod_{p|2n_1n_2} \left( 1-f\times \chirac{1}{p^2}\right)^{-1} +O(Z^{-1}) = f\times \chirac{8}{\pi^2}\prod_{p|n_1n_2} \left(1-f\times \chirac{1}{p^2}\right)^{-1}+ O(Z^{-1}) . $$ Moreover, with $\square$ denoting a perfect square, Lemma \ref{lem1} implies that $G_0(m) = \varphi(m)$ if $m = \square$, and is zero otherwise. Thus, upon setting $h_1(y,z) = \int_{\mathbb{R}_+} h(xX,y,z) \ \mathrm{d} x$, we infer that \begin{align} \label{S10first} S_{1,0}(h) = f\times \chirac{4X}{\pi^2} \sum_{\substack{(n_1 n_2,2)=1 \\ n_1 n_2 = \square}}\mu(n_1)\mu(n_2) \prod_{p|n_1n_2} \left( f\times \chirac{p}{p+1}\right) h_1\left( n_1, n_2\right) + O \Big( f\times \chirac XZ \sum_{\substack{(n_1 n_2,2)=1 \\ n_1 n_2 = \square}} \Big|\mu(n_1)\mu(n_2)h_1(n_1,n_2) \Big |\Big). \end{align} Mark that the definition of $h$ implies that $h_1 \ll 1$ and $h_1=0$ unless both $n_1$ and $n_2$ are $\ll Y$. Furthermore, if $n_1$, $n_2$ are square-free, then $n_1n_2=\square$ implies that $n_1=n_2$. Consequently, the sum in the $O$-term in \eqref{S10first} is $\ll Y$ and \begin{align*} S_{1,0}(h) = f\times \chirac{4X}{\pi^2} \sum_{\substack{(n,2)=1}} \prod_{p|n} \left( f\times \chirac{p}{p+1}\right) \mu^2(n) h_1\left( n, n \right) + O\left( f\times \chirac {XY}Z \right). \end{align*} We now apply the Mellin transform to recast $h_1(n,n)$ as \[ h_1(n,n) = f\times \chirac{1}{2\pi i} \int\limits_{(2)} f\times \chirac{Y^u}{n^u}\widetilde{h}_1(u,u) \mathrm{d} u, \quad \mbox{where} \quad \widetilde{h}_1(u,u) = \int\limits_{\ensuremath{\mathbb R}_{+}} h_1(yY,yY) y^{u} f\times \chirac{\mathrm{d} y}{y}. 
\] Similar to \eqref{eq:3.12}, we have that for $\Re(u)>0$ and any integer $B \geq 0$, \begin{equation} \label{h1bound} \widetilde{h}_1(u,u) \ll f\times \chirac{1}{|u|(1+|u|)^{B}}. \end{equation} Now we can rewrite $S_{1,0}$ as \begin{equation} \label{eqn:4.4} S_{1,0}(h)= f\times \chirac{4X}{\pi^2} f\times \chirac{1}{2\pi i} \int\limits_{(2)} Y^u \widetilde{h}_1(u,u) Z(u) \mathrm{d} u + O\left( f\times \chirac {XY}Z \right), \quad \mbox{where} \quad Z(u) = \sum_{\substack{(n,2)=1 }} f\times \chirac{\mu^2(n)}{n^{u}}\prod_{p|n} \left( f\times \chirac{p}{p+1}\right). \end{equation} We compute the Euler factors of $Z(u)$ to see that \begin{align} \label{eq:Z(u,v)} & Z(u)=\prod_{p >2} \left( 1+ f\times \chirac p{p+1} \cdot f\times \chirac {1}{p^{u}} \right) =: \zeta(u)Z_2(u), \end{align} where $Z_2(u)$ converges absolutely in the region $\Re(u) \geq f\times \chirac 12+\varepsilon$ for any $\varepsilon>0$. \newline Moving the line of integration in \eqref{eqn:4.4} to $\Re(u) = f\times \chirac 12+\varepsilon$, we encounter a simple pole at $u=1$ whose residue gives rise to the main term $$ f\times \chirac{4}{\pi^2} XY \widetilde{h}_1(1,1) Z_2(1). $$ Now to estimate the integral on the $f\times \chirac 12+\varepsilon$ line, we apply the functional equation for $\zeta(s)$ (see \cite[\S 8]{Da}) and Stirling's formula, together with the convexity bound for $\zeta(s)$, rendering \begin{align*} \zeta(s) \ll \begin{cases} 1 \qquad & \Re(s) >1,\\ (1+|s|)^{(1-\Re(s))/2} \qquad & 0< \Re(s) <1,\\ (1+|s|)^{1/2-\Re(s)} \qquad & \Re(s) \leq 0. \end{cases} \end{align*} The above and \eqref{h1bound} with $B=1$ enable us to gather that the integral on the $f\times \chirac 12+\varepsilon$ line contributes $\ll XY^{1/2+\varepsilon}$. One can easily check here that the Lindel\"of hypothesis, a consequence of GRH whose truth we assume, does not lead to a better bound. Now the above discussion, together with \eqref{eqn:4.4}, implies that \begin{align} \label{S10} S_{1,0}(h) = f\times \chirac{4}{\pi^2} XY \widetilde{h}_1(1,1) Z_2(1) + O\left( f\times \chirac {XY}Z+XY^{1/2+\varepsilon} \right). \end{align} Here we note that \begin{align} \label{h1half} \widetilde{h}_1(1,1)=\int\limits_{\ensuremath{\mathbb R}}W(x) \mathrm{d} x \left (\int\limits_{\ensuremath{\mathbb R}} \Phi (y) \mathrm{d} y \right )^2. \end{align} \subsection{Estimating $S_1(h)$, the $k \neq 0$ terms} \label{section:3.3} Let $S_3(h)$ denote the contribution to $S_1(h)$ from the terms with $k \neq 0$ in \eqref{eq:S1}. Let $f$ be a smooth function on $\ensuremath{\mathbb R}_+$ with rapid decay at infinity and $f$ itself and all its derivatives have finite limits as $x\to 0^+$. We consider the transform given by \begin{equation*} \widehat{f}_{CS}(y) := \int\limits_0^{\infty} f(x) CS(2\pi xy) \mathrm{d} x, \end{equation*} where $CS$ stands for either the cosine or the sine function. It is shown in \cite[Sec. 3.3]{S&Y} that \begin{equation*} \widehat{f}_{CS}(y) = f\times \chirac{1}{2\pi i} \int\limits_{(1/2)} \widetilde{f}(1-s) \Gamma(s) CS\left(f\times \chirac{\text{sgn}(y) \pi s}{2}\right) (2\pi |y|)^{-s} \mathrm{d} s. 
\end{equation*} Applying the above transform, we deduce that \begin{align} \label{eq:3.30} \begin{split} \int\limits_0^{\infty} h \left(Xx, n_1, n_2 \right) (C + S)\leg{2\pi k xX}{2n_1 n_2 a^2} \mathrm{d} x = f\times \chirac{1}{2\pi i X} \int\limits_{(\varepsilon)} \check{h}\left(1-s; n_1, n_2 \right) \leg{n_1 n_2 a^2}{\pi |k| }^{s} \Gamma(s) (C + \text{sgn}(k)S)\left(f\times \chirac{\pi s}{2} \right) \mathrm{d} s, \end{split} \end{align} where \begin{equation*} \label{eq:3.31} \check{h}(s;y,z) = \int\limits_0^{\infty} h(x,y,z) x^s f\times \chirac{\mathrm{d} x}{x}. \end{equation*} Taking the Mellin transforms in the variables $n_1$ and $n_2$, the right-hand side of \eqref{eq:3.30} equals \begin{equation*} f\times \chirac 1X \leg{1}{2\pi i}^3 \int\limits_{(1)} \int\limits_{(1)} \int\limits_{(\varepsilon)} \widetilde{h}\left(1-s,u,v \right) f\times \chirac{1}{n_1^{u} n_2^{v}} \leg{n_1 n_2 a^2}{\pi |k| }^{s} \Gamma(s) (C + \text{sgn}(k)S)\left(f\times \chirac{\pi s}{2} \right) \mathrm{d} s\, \mathrm{d} u \, \mathrm{d} v, \end{equation*} where \begin{equation*} \widetilde{h}(s,u,v) = \int_{\ensuremath{\mathbb R}_{+}^{3}} h(x,y,z) x^{s} y^{u} z^{v} f\times \chirac{\mathrm{d} x}{x} f\times \chirac{\mathrm{d} y}{y} f\times \chirac{\mathrm{d} z}{ z}. \end{equation*} Integrating by parts implies that for $\Re(s)$, $\Re(u)$, $\Re(v) >0$ and any integers $E_j \geq 0$, $1 \leq j \leq 3$, \begin{equation} \label{eq:h} |\widetilde {h}(s,u,v)| \ll f\times \chirac{X^{\Re(s)} Y^{\Re(u)+\Re(v)}}{|uvs| (1+|s|)^{E_1} (1+|u|)^{E_2} (1 + |v|)^{E_3}}. \end{equation} Applying the above bound in \eqref{eq:S1} leads to \begin{equation} \label{eq:S31} \begin{split} S_3(h) = f\times \chirac{1}{2} \sum_{\substack{a \leq Z \\ (a,2)=1}} & f\times \chirac{\mu(a)}{a^2} \sum_{k \neq 0} \sum_{(n_1,2a)=1} \sum_{(n_2,2a)=1} f\times \chirac{(-1)^kG_{k}(n_1 n_2)\mu(n_1)\mu(n_2)}{n_1 n_2} \\ & \times \leg{1}{2\pi i}^3 \int\limits_{(\varepsilon)} \int\limits_{(\varepsilon)} \int\limits_{(\varepsilon)} \widetilde{h}\left(1-s,u,v \right) f\times \chirac{1}{n_1^{u} n_2^{v}} \leg{n_1 n_2 a^2}{\pi |k| }^{s} \Gamma(s) (C + \text{sgn}(k)S)\left(f\times \chirac{\pi s}{2} \right) \mathrm{d} s \, \mathrm{d} u \, \mathrm{d} v. \end{split} \end{equation} Note that by \eqref{eq:h} and the estimation (see \cite[p. 1107]{S&Y}), \begin{equation} \label{Gammabound} \Big| \Gamma(s) (C \pm S)\Big( f\times \chirac {\pi s}{2} \Big) \Big| \ll |s|^{\Re(s)-1/2}, \end{equation} the integral over $s$ in \eqref{eq:S31} may be taken over any vertical lines between $0$ and $1$ and the integrals over $u, v$ in \eqref{eq:S31} may be taken over any vertical lines between $0$ and $2$. \newline Hence we arrive at \begin{equation} \label{eq:S31-1} \begin{split} S_3(h) = f\times \chirac{1}{2} \sum_{\substack{a \leq Z \\ (a,2)=1}} & f\times \chirac{\mu(a)}{a^2}\leg{1}{2\pi i}^3 \int\limits_{(\varepsilon)} \int\limits_{(\varepsilon)} \int\limits_{(\varepsilon)} \widetilde{h}\left(1-s,u,v \right) \sum_{k \neq 0} \sum_{(n_1,2a)=1} \\ & \times \sum_{(n_2,2a)=1} f\times \chirac{(-1)^kG_{k}(n_1 n_2)\mu(n_1)\mu(n_2)}{n_1 n_2} f\times \chirac{1}{n_1^{u} n_2^{v}} \leg{n_1 n_2 a^2}{\pi |k| }^{s} \Gamma(s) (C + \text{sgn}(k)S)\left(f\times \chirac{\pi s}{2} \right) \mathrm{d} s \, \mathrm{d} u \, \mathrm{d} v. \end{split} \end{equation} Now, we write $k = \iota k_1k^2_2$ with $\iota \in \{ \pm 1 \}$ and $k_1>0$ square-free. We write $f(k)=G_{\iota k}(n_1n_2)/|k|^{s}$. 
It follows from \cite[(5.15)]{Young2} that \begin{align*} \sum^{\infty}_{k=1}(-1)^{k} f(k)= (2^{1-2s}-1)\sumstar_{\substack{k_1 \geq 1 \\ (k_1, 2) = 1}} \sum^{\infty}_{k_2=1}f(k_1k^2_2) +\sumstar_{\substack{k_1 \geq 1 \\ 2| k_1}}\sum^{\infty}_{k_2=1}f(k_1k^2_2). \end{align*} We apply the above relation to recast the expression given in \eqref{eq:S31-1} for $S_3(h)$ as \begin{align} \label{S3decomp} S_3(h) &= \sum_{\iota =\pm 1} (S^{\iota}_{3,1}(h)+S^{\iota}_{3,2}(h)), \end{align} where \begin{align*} S^{\iota}_{3,1}(h) =& f\times \chirac{1}{2} \sum_{\substack{a \leq Z \\ (a,2)=1}} f\times \chirac{\mu(a)}{a^2} \sumstar_{\substack{k_1 \geq 1 \\ (k_1, 2) = 1}} \leg{1}{2\pi i}^3 \\ & \times \int\limits_{(1+2\varepsilon)} \int\limits_{(1+2\varepsilon)} \int\limits_{(1/2+\varepsilon)} \widetilde{h}\left(1-s,u,v \right) \Gamma(s) (2^{1-2s}-1) (C + \iota S)\left(f\times \chirac{\pi s}{2} \right)\leg{a^2}{\pi k_1 }^{s}Z(u-s, v-s, s;a, \iota k_1) \mathrm{d} s \, \mathrm{d} u \, \mathrm{d} v, \\ S^{\iota}_{3,2}(h) =& f\times \chirac{1}{2} \sum_{\substack{a \leq Z \\ (a,2)=1}} f\times \chirac{\mu(a)}{a^2} \sumstar_{\substack{k_1 \geq 1 \\ 2| k_1}} \leg{1}{2\pi i}^3 \\ & \times \int\limits_{(1+2\varepsilon)} \int\limits_{(1+2\varepsilon)} \int\limits_{(1/2+\varepsilon)} \widetilde{h}\left(1-s,u,v \right) \Gamma(s) (C + \iota S)\left(f\times \chirac{\pi s}{2} \right)\leg{a^2}{\pi k_1 }^{s}Z(u-s, v-s, s;a, \iota k_1) \mathrm{d} s \, \mathrm{d} u \, \mathrm{d} v. \end{align*} Here the function $Z$ is defined in \eqref{eq:Z}. We make a change of variables to rewrite $S^{\iota}_{3,1}(h)$ as \begin{multline*} S^{\iota}_{3,1}(h) = f\times \chirac{1}{2} \sum_{\substack{a \leq Z \\ (a,2)=1}} f\times \chirac{\mu(a)}{a^2} \sumstar_{\substack{k_1 \geq 1 \\ (k_1, 2) = 1}} \left(f\times \chirac{1}{2\pi i}\right)^3 \int\limits_{(1/2+\varepsilon)} \int\limits_{(1/2+\varepsilon)} \int\limits_{(1/2+\varepsilon)} {\widetilde h}(1-s,u+s,v+s)\Gamma(s) (2^{1-2s}-1) \\ \hskip 1in \times (C + \iota S) \left( f\times \chirac{\pi s}{2}\right) \left(f\times \chirac{a^2}{\pi k_1}\right)^s Z(u, v, s;a, \iota k_1) \mathrm{d} s \, \mathrm{d} u \, \mathrm{d} v. \end{multline*} We split the sum over $k_1$ into two terms according to whether $k_1 \le K$ or not, with $K$ to be optimized later. If $k_1 \le K$, we move the lines of integration to $\Re(s)= c_1$ for some $1/2<c_1<1$, $\Re(u)=\Re(v)=\varepsilon$. Otherwise, we move the lines of integration to $\Re(s)=c_2$ for some $c_2>1$, $\Re(u)=\Re(v)=\varepsilon$. We encounter no poles in either case. Applying Lemma \ref{lemma:Z} and the bound in \eqref{Linversebound} yields \begin{equation*} Z(u, v,s;a, \iota k_1) \ll |L^{-1}_a(\tfrac 12+u, \chi_{\iota k_1}) L^{-1}_a(\tfrac 12+v, \chi_{\iota k_1})| \ll d^2 (a) \left(|L^{-1}(\tfrac 12+u, \chi_{\iota k_1})|^2 + |L^{-1}(\tfrac 12+v, \chi_{\iota k_1})|^2\right). \end{equation*} The above and \eqref{eq:h} with $E_1=E_2=E_3=1$, together with \eqref{Gammabound} and the symmetry in $u$ and $v$ give that the terms with $k_1 \le K$ contribute \begin{equation} \label{eq:firstbd} \begin{split} \ll X^{1-c_1} & Y^{2c_1+2\varepsilon} \sum_{a\le Z} f\times \chirac{d(a)}{a^{2-2c_1}} \\ & \int\limits_{(c_1)} \int\limits_{(\varepsilon)}\int\limits_{(\varepsilon)} \sumstar_{1 \leq k_1\le K}f\times \chirac{1}{k_1^{c_1}} |L(\tfrac 12+u, \chi_{\iota k_1})|^{-2} f\times \chirac{ |s|^{\Re(s)-1/2} \mathrm{d} u \, \mathrm{d} v \, \mathrm{d} s}{|1-s|(1+|1-s|)|u+s|(1+|u+s|)|v+s|(1+|v+s|)}. 
\end{split} \end{equation} We further apply Lemma \ref{lem:Linversebound} to get \begin{align*} \sumstar_{1 \leq k_1 \le K}\frac{1}{k_1^{c_1}} |L(\tfrac 12+u, \chi_{\iota k_1})|^{-2} \ll K^{1-c_1+\varepsilon}(1+|t|)^{\varepsilon} \ll K^{1-c_1+\varepsilon}\left ((1+|u+s|)^{\varepsilon}+|s|^{\varepsilon} \right ). \end{align*} Applying the above in \eqref{eq:firstbd}, we infer that the terms with $k_1 \le K$ contribute \begin{align*} \ll X^{1-c_1} Y^{2c_1+2\varepsilon} K^{1-c_1+\varepsilon}Z^{2c_1-1+\varepsilon}. \end{align*} Similarly, the contribution from the complementary terms with $k_1 >K$ is \begin{align*} \ll X^{1-c_2} Y^{2c_2+\varepsilon} K^{1-c_2+\varepsilon}Z^{2c_2-1+\varepsilon}. \end{align*} We now balance these contributions by setting $K=Y^2Z^2/X$ so that \begin{align*} X^{1-c_1} Y^{2c_1} K^{1-c_1}Z^{2c_1-1}= X^{1-c_2} Y^{2c_2} K^{1-c_2}Z^{2c_2-1}. \end{align*} Now taking $c_1=1/2+\varepsilon$ yields the bound \begin{align*} S^{\iota}_{3,1}(h) \ll (XYZ)^{\varepsilon}Y^2Z. \end{align*} Note that $S^{\iota}_{3,2}(h)$ satisfies the above upper bound as well. Hence, we conclude from \eqref{S2}, \eqref{S10}, \eqref{S3decomp} and the above that \begin{align*} S(h) = \frac{4}{\pi^2} XY \widetilde{h}_1(1,1) Z_2(1) + O\left( (XY)^{1 + \varepsilon} Z^{-1+ \varepsilon}+XY^{1/2+\varepsilon}+(XYZ)^{\varepsilon}Y^2Z \right). \end{align*} Now \eqref{S} follows upon setting $Z=(X/Y)^{1/2}$, completing the proof of Theorem \ref{meansquare}. \vspace*{.5cm} \noindent{\bf Acknowledgments.} P. G. was supported in part by NSFC Grant 11871082 and L. Z. by the Faculty Silverstar Grant PS65447 at the University of New South Wales. The authors would also like to thank the anonymous referee for his/her careful and prompt inspection of the paper. \end{document}
\begin{document} \title{ Atomic\--frequency\--comb quantum memory via piecewise adiabatic passage} \author{J.~L.~Rubio}\ensuremath{\left|e\right\rangle}mail[]{[email protected]} \affiliation{Departament de F\'{\i}sica, Universitat Aut\`{o}noma de Barcelona, E-08193 Bellaterra, Spain} \author{D.~Viscor} \affiliation{Departament de F\'{\i}sica, Universitat Aut\`{o}noma de Barcelona, E-08193 Bellaterra, Spain} \author{J.~Mompart} \affiliation{Departament de F\'{\i}sica, Universitat Aut\`{o}noma de Barcelona, E-08193 Bellaterra, Spain} \author{V.~Ahufinger} \affiliation{Departament de F\'{\i}sica, Universitat Aut\`{o}noma de Barcelona, E-08193 Bellaterra, Spain} \date{\today} \begin{abstract} In this work, we propose a method to create an atomic frequency comb (AFC) in hot atomic vapors using the piecewise adiabatic passage (PAP) technique. Due to the Doppler effect, the trains of pulses used for PAP give rise to a velocity-dependent transfer of the atomic population from the initial state to the target one, thus forming a velocity comb whose periodicity depends not only on the repetition rate of the applied pulses but also on the specific atomic transitions considered. We highlight the advantages of using this transfer technique with respect to standard methods and discuss, in particular, its application to store a single telecom photon in an AFC quantum memory using a high density Ba atomic vapor. \ensuremath{\left|e\right\rangle}nd{abstract} \pacs{03.67.Hk, 32.80.Qk, 42.50.Md} \maketitle \section{INTRODUCTION} \label{sec:INTRODUCTION} The ability to process flying qubits or strings of flying qubits, and specifically the control of light-matter interfaces capable of storing these qubits and retrieve them on demand for subsequent use, {\it{i.e.,~}} quantum memories (QM) \cite{Lvovsky'09}, are key elements for quantum communications \cite{Gisin'07}. Several QM-protocols have been proposed based, for instance, on electromagnetically induced transparency (EIT) \cite{Fleischhauer'00}, controlled reversible inhomogeneous broadening (CRIB) \cite{Moiseev'01,Nilsson'04} and atomic frequency combs (AFC) \cite{Afzelius'09,Afzelius'10,Chaneliere'10,Timoney'12, Riedmatten'08, Zheng'15} (see \cite{Heshami'16} for a review of the most relevant QM protocols), the latter being the most suitable for a multimode storage \cite{Usmani'10,Jobez'16} as the number of modes that can be stored is independent of the optical depth. AFC based QMs have experienced an enormous progress in the last years, achieving spin-wave storage for on-demand retrieval \cite{Gundogan'15,Timoney'12,Yang'18}, high-fidelity multiplexing \cite{Sinclair'14}, optimized efficiencies \cite{Chaneliere'10,Amari'10,Bonarota'10}, and telecom wavelength operation \cite{Gundougan'13,Saglamyurek'15,Jin'15,Farrera'16}. Typically, the AFC is generated in a static inhomogeneously broadened optical transition in a rare-earth ion doped crystal (REIC) at cryogenic temperatures by means of optical pumping techniques. In this approach, a large number of pulses with temporal spacing $T_{int}$ are repeatedly sent to generate the frequency grating with a detuning spacing $2\pi/T_{int}$. When a signal photon enters the crystal, it is completely absorbed as a single atomic excitation delocalized over the atoms forming the AFC. Subsequently, due to the frequency-to-time conjugation properties, the re-emission of the signal (echo) occurs at a time $T_{int}$. 
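The rephasing at $t=T_{int}$ can be illustrated with a simple toy model: if the absorbed excitation is distributed over comb teeth at detunings $\delta_j = j\,2\pi/T_{int}$, the collective emission amplitude $\sum_j e^{-i\delta_j t}$ returns in phase precisely at multiples of $T_{int}$. The following Python sketch (ours, with arbitrary parameters, equal tooth weights, infinitely narrow teeth, and no decay) makes this explicit.
\begin{verbatim}
# Toy model of the frequency-to-time conjugation behind the AFC echo.
# Idealized assumptions: equal tooth weights, delta-like teeth, no decay.
import numpy as np

T_int = 0.2e-6                            # tooth spacing in time (arbitrary)
dw = 2 * np.pi / T_int                    # comb spacing in detuning
teeth = np.arange(-10, 11)                # 21 comb teeth
t = np.linspace(0.0, 1.5 * T_int, 4000)

# collective coherence ~ sum_j exp(-i * delta_j * t)
amp = np.exp(-1j * np.outer(t, teeth * dw)).sum(axis=1)
signal = np.abs(amp) ** 2                 # re-emitted intensity (arb. units)

window = t > 0.2 * T_int                  # skip the prompt peak at t = 0
t_echo = t[window][np.argmax(signal[window])]
print(f"echo at t = {t_echo:.3e} s, expected T_int = {T_int:.3e} s")
\end{verbatim}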
The ground and the excited states involved to generate the AFC system are usually split in several hyperfine sublevels and one or more of them are used as auxiliary levels for population transfer. Since those different transitions are hidden within the large inhomogeneous broadening, it is first necessary to apply a hole burning (distillation) process in which a wide spectral transmission window is created. Then, the atomic population grating is obtained by optical pumping, which requires many cycles of excitation and de-excitation \cite{Timoney'12}. In other cases, the AFC is directly generated by a combination of $\pi$-pulses that coherently transfer population at different frequencies \cite{Rippe'05}. This approach requires highly accurate control of the pulses intensities. Here, we propose an alternative way to produce an AFC using the piecewise adiabatic passage (PAP) \cite{Shapiro'07} technique. PAP is the piecewise version of the well-known stimulated Raman adiabatic passage (STIRAP) \cite{Bergmann'98} technique. PAP transfers the population between the two internal ground states of an atomic $\Lambda$-system by an accumulative coherent excitation using two trains of pump and dump pulses. We discuss the implementation of this technique in a hot atomic vapor in which a velocity comb (VC) acting as the atomic grating is generated. Some proposals concerning the generation of VCs by means of a train of short pulses have already been proposed, for instance, to map an optical frequency comb (OFC) into a velocity-selective population transfer between hyperfine levels in Rb \cite{Ban'06}, to achieve Doppler cooling in a two-level atomic system \cite{Ilinova'11}, and to study the coherent control of the accumulative effects in the coherence in a cascade configuration \cite{Felinto'04}. In our case, the two optical frequency combs of the two PAP trains of pulses selectively transfer the population in a $\Lambda$-type atomic system. Thus, the two-photon resonance condition in a Doppler broadened media is used to generate a velocity comb-like AFC, in order to subsequently store and retrieve a single-photon pulse. We will consider the mapping of the propagating photon into a storage state different from the ones used for the creation of the velocity comb. The final state where the photon is mapped can be accessed either by a direct one-photon absorption, in which case the retrieval time is predetermined, or by a two-photon process such that the retrieval time can be selected to be the first or any of the subsequent echos. An AFC generated via PAP has several advantages: (i) since only a single PAP cycle is needed to complete the transfer of population, the number of required pulses is drastically reduced compared with standard methods \cite{Timoney'12}, (ii) the process exhibits robustness with respect to intensity fluctuations, and (iii) the spacing between the AFC peaks depends not only on the temporal spacing of the PAP pulse train, but also on the ratio between the frequencies of single- and two-photon transitions of the $\Lambda$-type system. Thus, the retrieval time for the QM can be larger compared with other AFC-based QMs generated by a train of pulses with the same temporal spacing. The article is organized as follows. In Sec \ref{sec:MODEL} we describe the physical system under consideration and the mechanism and conditions to generate an AFC via PAP in an atomic vapor. In Sec. 
\ref{sec:SIMULATION}, we show numerical simulations for the AFC and use it to investigate the storage and retrieval of a single telecom photon in a Ba atomic vapor. Finally, in Sec. \ref{sec:CONCLUSIONS} we summarize the results and present the conclusions. \section{PHYSICAL MODEL} \label{sec:MODEL} \subsection{Velocity Comb (VC)} \label{sec:VelocityCombVC} The physical system under consideration consists of an atomic gas in a vapor cell interacting with two co-propagating trains of coherent pulses. The gas is characterized by a Maxwell--Boltzmann velocity distribution, \begin{equation} \label{eq:MB} f(v)={\rm f}ac{\varrho}{\sqrt{2\pi}\ensuremath{\left|e\right\rangle}ta}\ensuremath{\left|e\right\rangle}xp\left(-{\rm f}ac{v^2}{2\ensuremath{\left|e\right\rangle}ta^2}\right), \ensuremath{\left|e\right\rangle}nd{equation} where $v$ is the velocity component along the propagation direction of the fields being $v>0\,(v<0)$ for atoms approaching to (moving away from) the fields, $\ensuremath{\left|e\right\rangle}ta=\sqrt{k_{B}T/m}$ is the velocity standard deviation, $\varrho$ is the total atomic density, $m$ is the atomic mass, $k_{B}$ is the Boltzmann constant, and $T$ is the absolute temperature. Each atom of the gas is modeled as a $\Lambda$-type system formed by two ground states \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle} and \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}, and an excited state \ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle} (see Fig.~\ref{f:fig1}). The population decay rate from \ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle} to \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle} (\ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}) is $\gamma_{21}$ ($\gamma_{23}$), and the Bohr frequency of the optical transition \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle}$\leftrightarrow$\ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle} (\ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}$\leftrightarrow$\ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle}) is $\omega_{12}$ ($\omega_{32}$). The atoms are initially in state \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle}. A train of $N$ pump (dump) pulses with Rabi frequency $\Omega_{p}(t)=\boldsymbol{\mu}_{12}\cdot\boldsymbol{E}_{p}(t)/\hbar$ [$\Omega_{d}(t)=\boldsymbol{\mu}_{32}\cdot\boldsymbol{E}_{d}(t)/\hbar$] couples the \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle}$\leftrightarrow$\ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle} (\ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}$\leftrightarrow$\ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle}) optical transition with nominal detuning, {\it{i.e.,~}} for atoms at rest, $\ensuremath{\left|D\right\rangle}elta_{p}^{0}=\omega_{p}^{0}-\omega_{12}$ ($\ensuremath{\left|D\right\rangle}elta_{d}^{0}=\omega_{d}^{0}-\omega_{32}$). 
Here, $\boldsymbol{\mu}_{12}$ ($\boldsymbol{\mu}_{32}$) is the electric dipole moment vector of the \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle}$\leftrightarrow$\ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle} (\ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}$\leftrightarrow$\ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle}) transition, $\boldsymbol{E}_{p}(t)$ [$\boldsymbol{E}_{d}(t)$] is the slowly varying envelope of the pump (dump) electric field vector while $\omega_{p}^{0}$ ($\omega_{d}^{0}$) is the carrier frequency, and $\hbar$ is the reduced Planck constant. Due to the Doppler effect, the pump and dump detunings will be shifted by \begin{subequations} \label{eq:DopplerDelta} \begin{align} \ensuremath{\left|D\right\rangle}elta_{p}(v) &= \ensuremath{\left|D\right\rangle}elta^{0}_{p} + \omega^{0}_{p}v/c, \\ \ensuremath{\left|D\right\rangle}elta_{d}(v) &= \ensuremath{\left|D\right\rangle}elta^{0}_{d} + \omega^{0}_{d}v/c, \ensuremath{\left|e\right\rangle}nd{align} \ensuremath{\left|e\right\rangle}nd{subequations} respectively. Our aim is to create an AFC by means of the PAP technique, which is the piecewise version of STIRAP. In STIRAP, two single pulses, the pump and the dump, couple the two optical transitions of a $\Lambda$-type system, fulfilling the two-photon resonance condition $\ensuremath{\left|D\right\rangle}elta^{0}_{p}=\ensuremath{\left|D\right\rangle}elta^{0}_{d}$. Under this condition, one of the eigenstates of the Hamiltonian, the so-called dark state, takes the form \begin{equation} \label{eq:darkstate} \ensuremath{\left|D\right\rangle}=\cos\theta(t) \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle}-\sin\theta(t) \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}, \ensuremath{\left|e\right\rangle}nd{equation} where $\theta(t)=\arctan[\Omega_{p}(t)/\Omega_{d}(t)]$. Therefore, by smoothly varying the value of $\theta$ from $0$ to $\pi/2$ one can efficiently transfer the atomic population from state \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle} to state \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}, adiabatically following the dark state. The desired variation of the mixing angle $\theta$ is achieved by coupling the pulses with the atoms in the so called counterintuitive sequence, {\it{i.e.,~}} if the atoms are initially in \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle} the sequence consists of applying first the dump and with a certain delay in time $\tau$ the pump. To avoid the coupling between \ensuremath{\left|D\right\rangle}\, and the other eigenstates of the system, the required global adiabatic condition reads $\Omega\tau>10\pi/\sqrt{2}$ for optimally delayed Gaussian pulses \cite{Bergmann'98}, where $\Omega^2=\Omega_{p0}^2+\Omega_{d0}^2$, being $\Omega_{p0}$ ($\Omega_{d0}$) the peak value of the pump (dump) Rabi frequency. \begin{figure}[ht] { \includegraphics[width=0.9\columnwidth]{fig1.pdf} } \caption{ Scheme of the $\Lambda$-type system modeling an atom at rest, initially in \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle}, interacting with the pump and dump trains of pulses with temporal spacing $T_{int}$. In the frequency domain, the fields correspond to an OFC with a frequency separation of $2\pi/T_{int}$ around its nominal frequency. See the main text for the definition of the parameters. 
}\label{f:fig1} \ensuremath{\left|e\right\rangle}nd{figure} In PAP, the pump and the dump pulses are replaced by two trains of pulses separated by an inter-pulse period, $T_{\rm int}$, and with an envelope that follows the temporal sequence of STIRAP (Fig.~\ref{f:fig1}). As it has been previously reported \cite{Shapiro'07,Shapiro'08}, the population is transferred from \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle} to \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle} by accumulation of the coherence as long as the inter-pulse period is shorter than the atoms decoherence time. We consider the following temporal profiles for the pulse trains: \begin{subequations}\label{eq:fields} \begin{eqnarray} \Omega_p(t)&=&\Omega_{p0}\,e^{-(t-\tau)^2/2\sigma_{e}^2} \sum_{l=0}^{N-1}\,e^{-(t-l T_{ int})^2/2\sigma^2}, \label{eq:fieldP}\\ \Omega_d(t)&=&\Omega_{d0}\,e^{-t^2/2\sigma_{e}^2} \sum_{l=0}^{N-1}\,e^{-(t-l T_{int})^2/2\sigma^2}. \label{eq:fieldD} \ensuremath{\left|e\right\rangle}nd{eqnarray} \ensuremath{\left|e\right\rangle}nd{subequations} Each of these expressions corresponds to a sum of $N$ narrow Gaussian pulses of width $\sigma$, separated by $T_{int}$, and modulated by a Gaussian envelope of width $\sigma_e$. The pump and dump individual pulses are coincident in time while the corresponding envelopes are shifted with respect to each other by a time $\tau=(N-1)T_{int}$, during which the PAP takes place (see Fig.~\ref{f:fig1}). In what follows, we will use for simplicity $\Omega_{0}=\Omega_{p0}=\Omega_{d0}$. Standard STIRAP requires to fulfill the two-photon resonance condition, which for a Doppler-broadened medium is equivalent to set $\ensuremath{\left|D\right\rangle}elta_{p}(v)=\ensuremath{\left|D\right\rangle}elta_{d}(v)$. However, in the case of a train of pulses one has to look for the accumulation of the coherence between two consecutive pulses. In particular, this effect has been studied in Ref. \cite{Felinto'04} for a cascade configuration. In our scheme, the conditions for constructive interference in the accumulation of the coherences are given by $\ensuremath{\left|D\right\rangle}elta_p(v)T_{int}=j2\pi$ and $\ensuremath{\left|D\right\rangle}elta_d(v)T_{ int}=k2\pi$, with $j,k \in \mathds{Z}$. Taking the difference between these two expressions, the two-photon resonance condition leads to \begin{equation} \ensuremath{\left|D\right\rangle}elta_p(v) -\ensuremath{\left|D\right\rangle}elta_d(v)= (j-k)\ensuremath{\left|D\right\rangle}elta\omega, \label{eq:condition} \ensuremath{\left|e\right\rangle}nd{equation} with $\ensuremath{\left|D\right\rangle}elta\omega=2\pi/T_{\rm int}$. Using the definition for the Doppler shifted detunings, \refeqs{eq:DopplerDelta}, one obtains the velocity $v_{2ph}$ required for an atom to be transferred from \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle} to \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}: \begin{equation} v_{2ph} = {\rm f}ac{(j-k)\ensuremath{\left|D\right\rangle}elta\omega}{\omega_{13}}c, \label{eq:v2ph} \ensuremath{\left|e\right\rangle}nd{equation} where we have assumed the nominal two-photon resonance condition $\ensuremath{\left|D\right\rangle}elta^{0}_{p} =\ensuremath{\left|D\right\rangle}elta^{0}_{d}$ and defined $\omega_{13}=\omega_{12}-\omega_{32}$. One can obtain the same expression, \refeq{eq:v2ph}, through energy-conservation arguments as follows. 
In the reference frame of an atom moving with velocity $v$, each train of pulses in frequency domain corresponds to an optical frequency comb (OFC) with detunings \begin{subequations} \label{eq:DetuningsDoppler} \begin{align} \ensuremath{\left|D\right\rangle}elta_{p}^{n}(v)&=\omega_{p}^{n}(v)-\omega_{12}, \\ \ensuremath{\left|D\right\rangle}elta_{d}^{m}(v)&=\omega_{d}^{m}(v)-\omega_{32}. \ensuremath{\left|e\right\rangle}nd{align} \ensuremath{\left|e\right\rangle}nd{subequations} Here the frequencies $\omega_{p}^{n}(v)=\omega_{p}^{0}(1+v/c)+n\ensuremath{\left|D\right\rangle}elta\omega$, and $\omega_{d}^{m}(v)=\omega_{d}^{0}(1+v/c)+ m\ensuremath{\left|D\right\rangle}elta\omega$ correspond to the different harmonics of the OFC, and $n$ and $m$ are the integer indices for each harmonic $(0, \pm 1,\pm 2,...)$, see Fig.~\ref{f:fig1}. Therefore, transfer of atomic population from \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle} to \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle} will only be achieved by the simultaneous absorption of a pump photon and the stimulated emission of a dump one, such that their frequency difference matches the energy gained by the atom, {\it{i.e.,~}} \begin{equation} \ensuremath{\left|D\right\rangle}elta_{p}^{n}(v)=\ensuremath{\left|D\right\rangle}elta_{d}^{m}(v). \label{eq:2phRes} \ensuremath{\left|e\right\rangle}nd{equation} This is the generalization of the two-photon resonance condition for pairs of harmonics of the two OFCs. It is easy to see that combining \refeqs{eq:DetuningsDoppler} with \refeq{eq:2phRes} and assuming $\ensuremath{\left|D\right\rangle}elta^{0}_{p} =\ensuremath{\left|D\right\rangle}elta^{0}_{d}$ we recover \refeq{eq:v2ph} by identifying $j-k=m-n$. Thus, every value of $v_{2ph}$, {\it{i.e.,~}} every peak of the VC, is the result of the contribution of all the harmonics of both OFC fulfilling \refeq{eq:condition} and sharing the same value $j-k$. Note that such a VC, governed by \refeq{eq:v2ph}, is not well defined for $\omega_{13}=0$, {\it{i.e.,~}} for degenerate ground states. To physically understand this point, we should notice that in this situation the atoms, regardless of their velocity, see two OFCs that perfectly overlap in Fourier space since the field frequencies will have the same Doppler shift. Thus, a single possible match between harmonic indices occurs, $j=k$, which corresponds to $v_{2ph}=0$. At the end of the PAP process, we expect to obtain a VC of population transferred to \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}, $\rho_{33}(v)$, with peaks centered at velocity values given by \refeq{eq:v2ph}. \begin{figure}[ht] { \includegraphics[width=1\columnwidth]{fig2.pdf} } \caption{Atomic VC via PAP generated using nominal detunings (a) $\ensuremath{\left|D\right\rangle}elta^{0}=0$, (b) $\ensuremath{\left|D\right\rangle}elta^{0}=2\pi\cdot63.7\,{\rm MHz}$, and (c) $\ensuremath{\left|D\right\rangle}elta^{0}=2\pi\cdot127.4\,{\rm MHz}$. Here $\rho_{33}$ is weighted by the Maxwell--Boltzmann velocity distribution with $\ensuremath{\left|e\right\rangle}ta=350$ m/s and rescaled to show the single atom probability instead of the atomic density. See text for the rest of parameters. 
} \label{f:fig2} \ensuremath{\left|e\right\rangle}nd{figure} Finally, we have to take into account that for values of velocities different from those given in \ensuremath{\left|e\right\rangle}qref{eq:v2ph} the population can also be transferred to \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle} by spontaneous emission after the absorption of one pump photon. This undesired effect will decrease the contrast of the velocity peaks around the central value $v=0$ distorting the comb. We can diminish this effect by shifting the nominal detuning $\ensuremath{\left|D\right\rangle}elta^{0}(=\ensuremath{\left|D\right\rangle}elta_{p}^{0}=\ensuremath{\left|D\right\rangle}elta_{d}^{0})$ out of resonance. Then, population decaying from \ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle} to \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle} occurs for velocity classes far from $v=0$. A numerical example is shown in Fig.~\ref{f:fig2} for three different nominal detunings (a) $\ensuremath{\left|D\right\rangle}elta^{0}=0$, (b) $\ensuremath{\left|D\right\rangle}elta^{0}=2\pi\cdot63.7\,{\rm MHz}$, and (c) $\ensuremath{\left|D\right\rangle}elta^{0}=2\pi\cdot127.4\,{\rm MHz}$, using, in all cases, $N=18$ pump-dump pulses, $\omega_{32}=2\pi\cdot637$ THz, $\omega_{12}=2.5\,\omega_{32}$, $\Omega_{0}=2\pi\cdot80$ MHz, $\sigma=5$ ns, $T_{int}=0.7$ $\mu$s, and $\gamma_{21}=\gamma_{23}=10$ $\mu s^{-1}$. \subsection{Atomic Frequency Comb (AFC)} \label{sec:AtomicFrequencyCombAFC} \subsubsection{AFC Configurations} Once it is generated, the VC will translate into an absorption grating for a single photon pulse, $\mathcal{E}$, copropagating with the pump and dump laser pulses, coupling any transition involving state \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}. Two different configurations for the storage process are shown in Fig.~\ref{f:fig3}. In the simplest configuration, one-photon AFC, a single photon directly couples the dipole allowed transition \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}$\leftrightarrow$\ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle} [See Fig.~\ref{f:fig3}(a)], where \ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle} can be degenerate with state \ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle} or have any other energy. A second possibility is the two-photon based AFC where the single photon is mapped into the \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}$\leftrightarrow$\ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle} transition via an off-resonant two-photon Raman process where, in this case, \ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle} is a metastable (long-lived) state [See Fig.~\ref{f:fig3}(b)]. The latter can allow, in principle, for longer memory lifetimes only limited by the finite width of the comb teeth and the atomic motion since the effect of the spontaneous emission from the excited states is avoided. In the remaining of this section, we will focus only on the one-photon AFC configuration, although the results here obtained also apply for the two-photon configuration, provided that the state \ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle} can be adiabatically eliminated as we will discuss in Section \ref{sec:SIMULATION}. 
We assume that, for atoms at rest, the signal photon is resonant with this transition, {\it{i.e.,~}} $\delta^0\ensuremath{\left|e\right\rangle}quiv\omega^0-\omega_{34}=0$, with $\omega^0$ being the central frequency of the photon wave-packet and $\omega_{34}$ the frequency of the transition \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}$\leftrightarrow$\ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle}. Therefore, for moving atoms, the corresponding detuning reads $\delta(v)=\omega^0v/c=\omega_{34}v/c$. \begin{figure}[ht] { \includegraphics[width=0.8\columnwidth]{fig3.pdf} } \caption{Schematics of two AFC configurations to store a single photon. The signal photon $\mathcal{E}$ is stored in the \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}$\leftrightarrow$\ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle} transition via a (a) one-photon or (b) two-photon process using a control field $\Omega_{c}$.} \label{f:fig3} \ensuremath{\left|e\right\rangle}nd{figure} \subsubsection{AFC Parameters} In order to characterize the AFC, we model the density of atoms in state \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}, after the transfer from \ensuremath{\left|e\right\rangle}nsuremath{\left|1\right\rangle} via PAP, with a sum of Gaussian functions, modulated by a Gaussian envelope of width $\Gamma$. Expressed as a function of the detuning $\delta$, it takes the form: \begin{equation} \label{eq:AFC} \rho_{33}(\delta)= e^{-\delta^2/2\Gamma^2}\sum_{j=-\infty}^{\infty}e^{-(4\ln{2})(\delta-j\ensuremath{\left|D\right\rangle}elta\delta)^{2}/\varpi^{2}}. \ensuremath{\left|e\right\rangle}nd{equation} Here $\ensuremath{\left|D\right\rangle}elta\delta$ is the peak separation, while $\varpi$ is the FWHM of the AFC peaks. A numerical example of the generation of the PAP train of pulses and of the corresponding AFC atomic spectral distribution is shown in Figs.~\ref{f:fig4}(a) and ~\ref{f:fig4}(b), respectively, with $N=16$ pump-dump pulses, $\omega_{34}=\omega_{32}=2\pi\cdot637$ THz, $\omega_{12}=2.5\,\omega_{32}$, $\Omega_{0}=2\pi\cdot151$ MHz, $\sigma=6.2$ ns, $T_{int}=0.17$ $\mu$s, $\ensuremath{\left|D\right\rangle}elta^{0}=2\pi\cdot 360$~MHz, and $\gamma_{21}=\gamma_{23}=10$ $\mu s^{-1}$. Next, we discuss the figures of merit involved in the PAP-based AFC and derive their analytical expressions. \paragraph{Bandwidth.} The usual limitation for the comb width is given by the width of the Maxwell--Boltzmann distribution. However, here we consider only cases in which the comb width is narrower than the velocity standard deviation $\ensuremath{\left|e\right\rangle}ta$, so the Doppler width does not affect the comb bandwidth. To obtain an expression for the AFC bandwidth, we consider first the bandwidth of the respective OFCs for every train of pulses, which is given by the frequency-time Fourier relations, $1/\sigma$ (see Appendix A.1). Secondly, due to the Doppler effect, in terms of velocities, the pump and dump OFCs will have modified envelope bandwidths $c/\omega_{12}\sigma$ and $c/\omega_{32}\sigma$, respectively (see Appendix A.2). Moreover, the centers of these OFC depend on the atomic velocity. Thus, only those atoms with velocities producing a significant overlap between the two OFC will be able to interact with both fields. 
The overlap between the OFC envelopes (see Appendix A.2), in terms of the transition frequency $\omega_{34}$, has a bandwidth
\begin{align} \label{eq:delta12} \Gamma= &\frac{\sqrt{2}}{\sigma}\xi, \end{align}
where $\xi\equiv\omega_{34}/\omega_{13}$. For the AFC of the example in Fig.~\ref{f:fig4}, the analytical (numerical) value is $\Gamma=2\pi\cdot24.19$ MHz ($2\pi\cdot25.46$ MHz).
\begin{figure}[ht] { \includegraphics[width=1\columnwidth]{fig4.pdf} } \caption{Numerical example of (a) the temporal sequence of a piecewise adiabatic passage (PAP) with $N=16$ simultaneous dump (blue lines) and pump (red lines) pulses with temporal inter-pulse spacing $T_{int}$ and pulse width $\sigma$, which generates (b) an atomic frequency comb (AFC) with $\rm N_{c}$ peaks, bandwidth $\Gamma$, detuning peak separation $\Delta\delta$, and peak FWHM $\varpi$. Here $\eta=350$ m/s and $\rho_{33}(\delta)$ is rescaled to show the single-atom probability instead of the atomic density. See text for the parameter values.} \label{f:fig4} \end{figure}
Expression (\ref{eq:delta12}) gives the bandwidth of the AFC with a Gaussian shape. We should mention that, when increasing $\Omega_{0}$ or the number of pulses $N$, the adiabaticity of PAP increases and the height of the peaks in the comb rises up to its maximum value, given by the Maxwell--Boltzmann distribution (\ref{eq:MB}). When this occurs, the comb acquires a flat-top profile and expression (\ref{eq:delta12}) no longer provides a good estimate of the comb bandwidth.
\paragraph{Peak separation.} Due to the condition given in \refeq{eq:v2ph} for the atoms to be transferred into state \ensuremath{\left|3\right\rangle}, the absorption peaks will be found at detunings fulfilling $\delta_{2ph}=\omega_{34}v_{2ph}/c$ and, consequently, they will be separated in frequency by
\begin{equation} \label{eq:Deltadelta} \Delta\delta=\xi\Delta\omega. \end{equation}
Note that the $\xi$ factor in the last expression, accounting for the asymmetry in the optical transition frequencies, determines the peak separation of the VC and, in turn, of the AFC. For instance, if $\omega_{32}=\omega_{34}=\omega_{12}/3$, then $\Delta\delta=\Delta\omega/2$, which reduces by half the peak separation that one would have in a conventional AFC. For the AFC of the example shown in Fig.~\ref{f:fig4}, the analytical (numerical) value is $\Delta\delta=2\pi\cdot3.91$ MHz ($2\pi\cdot3.95$ MHz).
\paragraph{Number of peaks.} Taking $2\sqrt{2\pi}\Gamma$ for the base of the Gaussian comb, and assuming a large number of peaks, we can estimate the total number of peaks ${\rm N_c}$ forming the AFC from ${\rm N_{c}}\Delta\delta= 2\sqrt{2\pi}\Gamma$. Using \eqref{eq:delta12} and \eqref{eq:Deltadelta} we find that
\begin{eqnarray} \label{eq:Nc} {\rm N_{c}}=\cfrac{2T_{int}}{\sqrt{\pi}\sigma}, \end{eqnarray}
which depends only on the PAP parameters.
The ability to control the number of peaks of the AFC is an interesting feature, since this quantity determines the number of temporal modes that can be stored when the comb is used as a quantum memory \cite{Afzelius'09}. For the AFC of the example of Fig.~\ref{f:fig4}, the analytical (numerical) value is $\rm{N_{c}}=25.7$ peaks ($25$ peaks).
\paragraph{Peak width.} The FWHM of each individual tooth of the AFC is given by the transfer efficiency under conditions of quasi-two-photon resonance. Based on the expression for the velocity peak width obtained in STIRAP for a Doppler-broadened medium and on the relationship between the PAP and STIRAP processes (see Appendix B), one can estimate the width of each peak as
\begin{align} \label{eq:FWHMFrequency} \varpi=\frac{\sqrt{\pi}\Omega_{0}^2\sigma\xi}{4\Delta^{0}T_{int}}, \end{align}
for $|\Delta^{0}|>\Omega_{0}\sqrt{\omega_{32}/\omega_{12}}$. For the AFC of the example in Fig.~\ref{f:fig4}, the analytical and numerical values are $\varpi=2\pi\cdot0.684$ MHz and $\varpi=2\pi\cdot0.891$ MHz, respectively. The discrepancy is due to the fact that expression (\ref{eq:FWHMFrequency}) has been derived neglecting spontaneous emission, in the limit of large detunings, for combs with a large number of peaks, and assuming perfect Gaussian profiles.
\paragraph{Finesse.} One of the important parameters for estimating the quality of an AFC is the finesse of the comb, $\mathcal{F}\equiv\Delta\delta/\varpi$. In terms of the system parameters, it can be written as
\begin{align} \label{eq:finesse} \mathcal{F}=\frac{8\sqrt{\pi}\Delta^{0}}{\Omega_{0}^2\sigma}. \end{align}
According to this expression, the finesse of the comb does not depend on the number of pulses, but it can be varied by adjusting the remaining PAP parameters ($\Omega_{0},\sigma,\Delta^{0}$). For the AFC of the example in Fig.~\ref{f:fig4}, the analytical and numerical values are $\mathcal{F}=5.72$ and $\mathcal{F}=4.43$, respectively. The discrepancy comes from the assumptions made in the estimation of the peak width $\varpi$, commented on above. In what follows, we summarize the differences between the AFC created via PAP and a conventional AFC generated by a train of pulses with the same pulse temporal spacing: (i) The form of the frequency peaks is not affected by the temporal shape of the individual pulses. This is because our comb is generated via PAP, whose efficiency is insensitive to the shape of the pulses, provided that their envelopes follow the counterintuitive sequence of STIRAP. (ii) Performing PAP more adiabatically, e.g., by increasing $\Omega_{0}$ or $N$, increases the height of the peaks and thus enhances the optical depth of the $\ensuremath{\left|3\right\rangle}\leftrightarrow\ensuremath{\left|4\right\rangle}$ transition. (iii) The peak frequency spacing, $\Delta\delta$, is proportional to $2\pi/T_{int}$ but also to the ratio of the transition frequencies of the $\Lambda$-type system and the storage transition, via the $\xi$ factor.
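The figures of merit above are simple closed-form functions of the PAP parameters. As a purely illustrative aid (not part of the derivation; the variable names are ours), the following short Python sketch evaluates Eqs.~\eqref{eq:delta12}, \eqref{eq:Deltadelta}, \eqref{eq:Nc}, \eqref{eq:FWHMFrequency}, and \eqref{eq:finesse} for the parameter values quoted for the example of Fig.~\ref{f:fig4}; the printed values should be close to the analytical estimates given in the text, with small differences possibly arising from rounding and from how the pulse width is defined.
\begin{verbatim}
import math

# PAP parameters quoted in the text for the example of Fig. 4
sigma  = 6.2e-9                  # individual pulse width (s)
T_int  = 0.17e-6                 # inter-pulse spacing (s)
Omega0 = 2*math.pi*151e6         # peak Rabi frequency (rad/s)
Delta0 = 2*math.pi*360e6         # nominal detuning (rad/s)
w32    = 2*math.pi*637e12        # dump transition frequency (rad/s)
w12    = 2.5*w32                 # pump transition frequency (rad/s)
w34    = w32                     # storage transition (w34 = w32 in this example)
xi     = w34/(w12 - w32)         # asymmetry factor xi = w34/w13

Gamma  = math.sqrt(2)*xi/sigma                                   # Eq. (eq:delta12)
Ddelta = xi*2*math.pi/T_int                                      # Eq. (eq:Deltadelta)
N_c    = 2*T_int/(math.sqrt(math.pi)*sigma)                      # Eq. (eq:Nc)
varpi  = math.sqrt(math.pi)*Omega0**2*sigma*xi/(4*Delta0*T_int)  # Eq. (eq:FWHMFrequency)
F      = Ddelta/varpi                                            # finesse, Eq. (eq:finesse)

print("Gamma  = 2pi x %.2f MHz" % (Gamma/(2*math.pi)/1e6))   # ~24.2
print("Ddelta = 2pi x %.2f MHz" % (Ddelta/(2*math.pi)/1e6))  # ~3.92
print("N_c    = %.1f peaks"     % N_c)
print("varpi  = 2pi x %.3f MHz" % (varpi/(2*math.pi)/1e6))   # ~0.68
print("F      = %.2f"           % F)                         # ~5.7
\end{verbatim}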
\subsection{AFC-based QM} \label{sec:QuantumMemoryQM} From \eqref{eq:Deltadelta}, one can derive the expression for the retrieval time $\widetilde{\mathcal{T}}$ in an AFC-based QM via PAP as
\begin{eqnarray} \label{eq:TN1} \widetilde{\mathcal{T}}&=&\cfrac{T_{int}}{\xi}=\cfrac{\omega_{13}}{\omega_{34}}\,T_{int}, \end{eqnarray}
where $T_{int}=2\pi/\Delta\omega$ coincides with the retrieval time of the conventional AFC-based QM. As previously commented, \refeq{eq:TN1} implies that, when our AFC proposal is used as a quantum memory, the retrieval time also depends on the specific atomic system chosen, through $\xi$. Next, we discuss the PAP parameters and the frequencies of the transitions involved in order to optimize the QM and satisfy the assumptions required in our model. On the one hand, in order to achieve high storage efficiencies a large optical depth is needed. In turn, the higher the finesse, the better the reemission due to the rephasing of the atoms. However, it is easy to show that the optical depth of the medium is effectively reduced by a factor $1/\mathcal{F}$. Therefore, a compromise between a large optical depth and the reemission efficiency is required. It has been shown that, for a backward retrieval scheme and in the absence of spontaneous emission, the memory efficiency reads $\eta\simeq(1-e^{-OD/\mathcal{F}})^2e^{-7/\mathcal{F}^{2}}$ \cite{Afzelius'09}, where $OD$ is the optical depth of the medium. Thus, for large optical depths, the efficiency is above 90\% for $\mathcal{F}\gtrsim10$. Using \refeq{eq:finesse} with $\Gamma\gg\Delta\delta$, this condition for the finesse leads to
\begin{align} \label{eq:PAPoptimal} \frac{\Omega^{2}_{0}\sigma}{\Delta^{0}}\lesssim\frac{4}{5}\sqrt{\pi}\approx1.4. \end{align}
From this expression it is clear that, in order to obtain a large finesse for a given $\Omega_{0}$ and $\sigma$, we must increase the nominal detuning. This is even clearer in Fig.~\ref{f:fig2}, where the widths of the velocity peaks decrease for increasing detunings. Moreover, to avoid spontaneous emission for the central velocity classes, a necessary requirement is a nominal detuning larger than the comb bandwidth itself, given in \refeq{eq:delta12}. On the other hand, if we define the ratio between the transition frequencies of the $\Lambda$-system as $r\equiv\omega_{12}/\omega_{32}$, such that $\xi=(\omega_{34}/\omega_{32})/(r-1)$, one can see that our AFC is not available for degenerate ground states, {\it{i.e.,~}} $r=1$, since, according to \eqref{eq:Deltadelta}, $\Delta\delta\rightarrow\infty$, which is consistent with the discussion about the velocities given in \eqref{eq:v2ph} in Section~\ref{sec:VelocityCombVC} when $\omega_{13}=0$.
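The efficiency--finesse trade-off discussed above can be checked with a few lines of arithmetic. The sketch below (ours, for orientation only; the optical depth $OD=50$ and the sampled finesse values are arbitrary choices) evaluates the quoted backward-retrieval efficiency formula and translates the requirement $\mathcal{F}\gtrsim10$ into the bound of \refeq{eq:PAPoptimal}:
\begin{verbatim}
import math

def afc_efficiency(OD, F):
    # backward-retrieval AFC efficiency quoted in the text (no spontaneous emission)
    return (1.0 - math.exp(-OD/F))**2 * math.exp(-7.0/F**2)

for F in (2, 5, 10, 20):                 # fixed large optical depth, OD = 50
    print("F = %2d -> efficiency = %.2f" % (F, afc_efficiency(50, F)))

# Finesse F = 8 sqrt(pi) Delta0/(Omega0^2 sigma) >= 10  (Eq. (eq:finesse))
# is equivalent to  Omega0^2 sigma / Delta0 <= 0.8 sqrt(pi):
print("bound on Omega0^2 sigma / Delta0: %.2f" % (0.8*math.sqrt(math.pi)))
\end{verbatim}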
Furthermore, according to \refeq{eq:TN1}, to obtain longer retrieval times than in conventional AFC-based QMs one needs $\xi<1$, that is,
\begin{equation} \label{eq:ratio} \omega_{34}<\omega_{13}, \end{equation}
or, equivalently, $r>\omega_{34}/\omega_{32}+1$, which becomes simply $r>2$ if \ensuremath{\left|2\right\rangle} and \ensuremath{\left|4\right\rangle} are degenerate. To summarize, \refeq{eq:PAPoptimal} and \refeq{eq:ratio} give the conditions to implement an AFC-based QM via PAP that improves some features of conventional AFC-based QMs.
\section{NUMERICAL EXAMPLE OF A PAP-based AFC-QM} \label{sec:SIMULATION} In this section, we show a numerical example of the proposed AFC-based QM via PAP. We consider Ba atoms \cite{Jitschin'80}, although other atomic species with different energy-level schemes could be chosen. The reasons for this choice may be summarized as follows: (i) Alkaline-earth metals such as calcium or barium feature $\Lambda$-type systems with a large asymmetry between the pump and Stokes transitions, formed by $S$, $P$, and $D$ states, where the latter are, very often, long-lived metastable states. This configuration allows us to fulfill Eq.~\eqref{eq:ratio}; (ii) They also have transitions in the telecommunication range ($\sim1.5\,\mu$m), which are of special interest for long-distance quantum communication \cite{Rielander'16,Rancic'17}; and (iii) Alkaline-earth metals can be prepared in vapor cells or hollow cathode lamps capable of reaching high atomic densities \cite{Fang'17}. For the simulation of the VC creation we consider the $\ensuremath{\left|1\right\rangle}=6s^2\,{^1}S_0\leftrightarrow\ensuremath{\left|2\right\rangle}=6s6p\,{^1}P_1$ and $\ensuremath{\left|2\right\rangle}\leftrightarrow\ensuremath{\left|3\right\rangle}=6s5d\,{^1}D_2$ transitions of Ba, see Fig.~\ref{f:fig5} (left frame). Since in this system the spontaneous emission from the excited state \ensuremath{\left|2\right\rangle} occurs predominantly towards the initial ground state \ensuremath{\left|1\right\rangle}, the number of atoms that can reach the final state \ensuremath{\left|3\right\rangle} via spontaneous decay is very limited. Moreover, the $\ensuremath{\left|2\right\rangle}\leftrightarrow\ensuremath{\left|3\right\rangle}$ transition is already in the telecom range (1500.4 nm). For this reason, to implement the QM (once the VC has been created), one could consider a single telecom photon coupling the $\ensuremath{\left|2\right\rangle}\leftrightarrow\ensuremath{\left|3\right\rangle}$ transition. However, in order to circumvent the spontaneous emission from the excited state $\ensuremath{\left|2\right\rangle}$, the photon will be stored in the atomic coherence between state \ensuremath{\left|3\right\rangle} and the long-lived state $\ensuremath{\left|4\right\rangle}=6s5d\,{^3}D_2$ through an off-resonant two-photon Raman process [Fig.~\ref{f:fig5} (right frame)].
\begin{figure}[ht] { \includegraphics[width=0.8\columnwidth]{fig5.pdf} } \caption{Relevant energy-levels for Ba with the Rabi frequencies and detunings involved in the VC creation via PAP (left frame) and the single telecom photon QM (right frame). See the main text for the definition of the parameters. } \label{f:fig5} \ensuremath{\left|e\right\rangle}nd{figure} First, for the simulation of the AFC preparation we use the density matrix equations and parameters for a Ba vapor (see transition properties in, e.g., \cite{DammalapatiThesis}) at 800~ºC (typical in hollow cathode lamps \cite{Dammalapati'09,Araujo'08,Fang'17}) and a density of $\varrho=2.5\times10^{20}$~at/m$^3$. The decay rates from the excited state to the ground and metastable states are $\gamma_{21} = 1.19\times10^{8}$ s$^{-1}$ and $\gamma_{23} = 0.25\times10^{6}$ s$^{-1}$ (the decay to state $\ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle}$ is $\gamma_{24}\simeq\gamma_{23}/100$ so we can safely neglect it), respectively, while the transition frequencies are $\omega_{32}=2\pi\cdot200$ THz and $\omega_{12}=2\pi\cdot540$ THz, so $\omega_{13}=2\pi\cdot340$ THz. The effective storage transition $\ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}\leftrightarrow\ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle}$ has a frequency $\omega_{34}\ensuremath{\left|e\right\rangle}quiv\omega_{42}-\omega_{32}=2\pi\cdot65.35$ THz, where $\omega_{42}=2\pi\cdot270$ THz (1130.6 nm). With these values we obtain $\xi=0.19$ and $r=2.7$, satisfying condition \ensuremath{\left|e\right\rangle}qref{eq:ratio}. We perform numerical simulations using different number of pulses, Rabi frequencies, and nominal detunings. In all cases the pulses of the train are separated by $T_{int}=689$~ns and have a duration $\sigma=T_{int}/64=10.77$~ns. The envelope duration, $\sigma_{e}=\tau/(2\sqrt{2\ln2})$, is chosen depending on the number of pulses and the nominal detunings are such that the two-photon resonance condition is satisfied for the central velocity class $\ensuremath{\left|D\right\rangle}elta^{0}(=\ensuremath{\left|D\right\rangle}elta_{p}^{0}=\ensuremath{\left|D\right\rangle}elta_{d}^{0})$. For these parameters, the estimated peak separation is $\ensuremath{\left|D\right\rangle}elta\delta=2\pi\cdot0.28$~MHz, corresponding to a retrieval time of $\widetilde{\mathcal{T}}=3.6\,\mu$s. After the comb creation, we simulate the propagation of a signal photon coupling transition $\ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle}\leftrightarrow\ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}$, see Fig.~\ref{f:fig5}, through a medium of length $L=2$ cm. We consider that at the input of the medium, the photon wavepacket can be described with a slowly varying Gaussian envelope \begin{equation} \label{Eini} \mathcal{E}(z=0,t)=\cfrac{1}{\sqrt{\tau_{p}\sqrt{\pi}}}\,e^{-(t-t_{c})^2/(2\tau_{p}^2)}, \ensuremath{\left|e\right\rangle}nd{equation} normalized such that $\int{\left|\mathcal{E}(0,t)\right|^2dt=1}$. Here, $\tau_{p}=0.3\,\mu$s is the duration of the pulse, centered at $t_{c}=4\tau_{p}$. Since the excited state \ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle} has a short lifetime, we consider the signal photon being tuned out of resonance, with detuning $\delta_{s}^{0}$, and it is coupled via a strong control field, of Rabi frequency $\Omega_{c}$, to the $\ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle}$ metastable state in a two-photon Raman configuration. 
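Before turning to the propagation equations, the comb parameters expected for this Ba level scheme can be checked with a few lines of arithmetic. The sketch below (ours, for illustration only; it uses just the frequencies and pulse spacing quoted above and is not part of the simulation code) recovers the asymmetry factor, the frequency ratio, the peak separation, and the retrieval time:
\begin{verbatim}
import math

TPI = 2*math.pi
w32 = TPI*200e12        # |2> <-> |3> transition (telecom, 1500.4 nm)
w12 = TPI*540e12        # |1> <-> |2> transition
w34 = TPI*65.35e12      # effective storage transition |3> <-> |4>, as quoted
w13 = w12 - w32         # Raman (two-photon) transition, 2pi x 340 THz
T_int = 689e-9          # inter-pulse spacing (s)

xi = w34/w13                      # ~0.19
r  = w12/w32                      # ~2.7 > w34/w32 + 1, so Eq. (eq:ratio) holds
Ddelta = xi*TPI/T_int             # peak separation, Eq. (eq:Deltadelta)
T_ret  = T_int/xi                 # retrieval time, Eq. (eq:TN1)

print("xi = %.2f, r = %.2f" % (xi, r))
print("Ddelta = 2pi x %.2f MHz" % (Ddelta/TPI/1e6))   # ~0.28 MHz
print("retrieval time = %.1f us" % (T_ret*1e6))       # ~3.6 us
\end{verbatim}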
In this configuration, the harmful effect of the spontaneous emission is reduced and the signal photon is effectively stored in the $\ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}\leftrightarrow\ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle}$ atomic coherence. Thus, by adiabatically eliminating the excited state, the equations describing the propagation of the signal photon and the slowly-varying spin wave amplitudes, $\mathcal{E}$ and $\mathcal{S}$, respectively, in the photon co-moving frame $t\rightarrow t-z/c$ \cite{Gorshkov'07} read \begin{align} \partial_{z}\mathcal{E}(z,t)=&-i{\rm f}ac{g^2\varrho}{c}\int{{\rm f}ac{\rho_{33}(v)}{\delta_{s}(v)+i\gamma}d\delta}\mathcal{E}(z,t) \nonumber \\ &-i{\rm f}ac{g\sqrt{\varrho}}{c}\Omega_{c}\int{{\rm f}ac{\rho_{33}(v)}{\delta_{s}(v)+i\gamma}\mathcal{S}(z,t,\delta)d\delta}, \label{eq:PhotProp} \\ \partial_{t}\mathcal{S}(z,t,\delta)=&i\left\{\delta(v)-i{\rm Im}\left[{\rm f}ac{\Omega_{c}^2}{\delta_{s}(v)+i\gamma}\right]\right\}\mathcal{S}(z,t,\delta) \nonumber \\ &-i{\rm f}ac{g\sqrt{\varrho}\Omega_{c}}{\delta_{s}(v)+i\gamma}\mathcal{E}(z,t). \label{eq:SpinEvol} \ensuremath{\left|e\right\rangle}nd{align} Here $c$ is the speed of light in vacuum, $g=\mu_{23}\sqrt{{\rm f}ac{\omega_{32}}{2\ensuremath{\left|e\right\rangle}psilon_{0}\hbar}}$ the photon-atom coupling constant, with $\ensuremath{\left|e\right\rangle}psilon_{0}$ the vacuum electric permittivity, $\gamma$ is the coherence decay rate (which we assume $\gamma\simeq\gamma_{21}/2$, since $\gamma_{23}$ is negligible), and $\delta(v)\ensuremath{\left|e\right\rangle}quiv\delta_{s}(v)-\delta_{c}(v)-{\rm Re}\{\Omega_{c}^2/[\delta_{s}(v)+i\gamma]\}$ the effective two-photon detuning, with $\delta_{s}(v)=\delta_{s}^{0}+{\rm f}ac{v}{c}\omega_{32}$ and $\delta_{c}(v)=\delta^{0}_{c}+{\rm f}ac{v}{c}\omega_{42}$, the signal and control detunings, respectively, including the Doppler shifts. In order for the adiabatic elimination of state \ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle} to hold, we must impose a large detuning compared to the other frequency scales of the system, thus we choose $\delta_{s}^{0}\simeq-2\pi\cdot380.38$ MHz and $\Omega_{c}=2\pi\cdot15.20$ MHz. The control detuning is set to be on two--photon resonance with the probe detuning and to compensate the AC-Stark shift $\delta^{0}_{c}=\delta_{s}^{0}-\Omega_{c}^2/\delta_{s}^{0}$. With this choice of parameters the effective detuning in Eq.~(\ref{eq:SpinEvol}) is proportional to the frequency difference between the signal and control transitions $\delta(v)\simeq\omega_{34}v/c$, and the atoms are on quasi two-photon resonance with the fields when $v=0$. For convenience, we perform a backward retrieval protocol \cite{Afzelius'09} for which the retrieved field is always maximum at $z=0$. \begin{figure}[ht] { \includegraphics[width=1\columnwidth]{fig6.pdf} } \caption{(a) Comb distribution $\rho_{33}/\varrho$ (black line) as a function of the detuning $\delta(v)$, together with the input photon spectral distribution seen by the atoms (red line). (b) Temporal evolution of the photon intensity at $z=0$ (solid line) and $z=L$ (dashed). The retrieved pulse in the backward direction is found at $\widetilde{\mathcal{T}}\simeq3.6\,\mu$s respect to the input pulse. (c) Scaled photon intensity as a function of position and time. 
The simulation is performed with an AFC comb created using $N=40$ pulses, $\Omega_0=2\pi\cdot51.72$ MHz, and $\ensuremath{\left|D\right\rangle}elta^{0}\simeq2\pi\cdot129.35$ MHz (see text for the rest of parameters). } \label{f:fig6} \ensuremath{\left|e\right\rangle}nd{figure} The simulation is performed by discretizing the space and time variables, and considering a large enough number of velocity classes to ensure the convergence of the results. The integration time is $t_f=2t_c+\widetilde{\mathcal{T}}$. For the simulation we choose only a section of the comb, but still larger than the incident photon bandwidth, see Fig.~\ref{f:fig6}(a). For this figure, the parameters used to create the comb are $N=40$ pulses, $\Omega_0=2\pi\cdot51.72$ MHz, and $\ensuremath{\left|D\right\rangle}elta^{0}=8.75/\sigma=2\pi\cdot129.35$ MHz, resulting in a comb finesse of $\mathcal{F}=5.9$ (the peak separation is $\ensuremath{\left|D\right\rangle}elta\delta =2\pi\cdot0.28$ MHz and the peak width $\varpi=2\pi\cdot47.27$ kHz). In Figs.~\ref{f:fig6}(b,c) we plot the scaled intensity, $I(z,t)\ensuremath{\left|e\right\rangle}quiv\left|\mathcal{E}(z,t)\right|^2/\left|\mathcal{E}(0,t_{c})\right|^2$, of the signal photon. In Fig.~\ref{f:fig6}(b) the solid line corresponds to $I_{0}(t)\ensuremath{\left|e\right\rangle}quiv I(0,t)$, where the left and right Gaussian profiles indicate the intensity of the incident and backward-retrieved photons, respectively. Moreover we show the transmitted intensity of the incident photon $I_{L}(t)\ensuremath{\left|e\right\rangle}quiv I(L,t)$ with a dashed line. In Fig.~\ref{f:fig6}(c) we show the photon scaled intensity during the full propagation, as a function of time and space. For this example, we obtain that the incoming photon is absorbed into the medium with a storage efficiency of $\ensuremath{\left|e\right\rangle}ta_{s}\ensuremath{\left|e\right\rangle}quiv 1-\int^{\dv{t_f/2}}_{0}{\left|\mathcal{E}(L,t)\right|^2dt}=93.4\%$, and that the reemission time occurs around $\widetilde{\mathcal{T}}\ensuremath{\left|e\right\rangle}quiv T_{int}\omega_{13}/\omega_{34}=3.6\,\mu$s respect to the input pulse, as expected from \refeq{eq:TN1} . The retrieval efficiency is $\ensuremath{\left|e\right\rangle}ta_{r}\ensuremath{\left|e\right\rangle}quiv\int^{t_f}_{t_f/2}{\left|\mathcal{E}(0,t)\right|^2dt}=41.3\%$ (typical experimentally reported AFC efficiencies are around 15-20\% \cite{Rielander'14}), limited partially due to imperfect comb profile and a low effective optical depth, which causes part of the photon wavepacket to leak at the medium output. \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{fig7.pdf} \caption{(a,c,e) Storage and (b,d,f) retrieval efficiency for the signal photon as a function of the nominal detuning used to produce the AFC for pulse number $N=10$ (red lines-crosses), $N=20$ (blue lines-plus signs), $N=30$ (yellow lines-circles), and $N=40$ (green lines-squares), and Rabi frequency (a,b) $\Omega_{0}=2\pi\cdot27.85$ MHz, (c,d) $\Omega_{0}=2\pi\cdot39.78$ MHz, and (e,f) $\Omega_{0}=2\pi\cdot51.72$ MHz. The rest of parameters are as in Fig.~\ref{f:fig5}. 
The vertical dashed lines correspond to the values given by \refeq{eq:umbral} (a,c,e) and \refeq{eq:PAPoptimal} (b,d,f).} \label{f:fig7} \ensuremath{\left|e\right\rangle}nd{figure} In Fig.~\ref{f:fig7} we show the storage (a,c,e), and retrieval (b,d,f) efficiencies as a function of the nominal detuning imposed during the AFC creation, for different number of pulses $N=10$ (red lines-crosses), $N=20$ (blue lines-plus signs), $N=30$ (yellow lines-circles), and $N=40$ (green lines-squares), and three Rabi frequencies for the pump and dump fields (a,b) $\Omega_{0}=2\pi\cdot27.85$ MHz, (c,d) $\Omega_{0}=2\pi\cdot39.78$ MHz, and (e,f) $\Omega_{0}=2\pi\cdot51.72$ MHz. In the first column [Figs.~\ref{f:fig7} (a,c,e)], the storage efficiency $\ensuremath{\left|e\right\rangle}ta_{s}$ slightly changes for small values of $\ensuremath{\left|D\right\rangle}elta^{0}$ and then experiences a pronounced decrease for increasing values of the detuning. This behavior can be understood by considering that there is a value of the detuning given by \refeq{eq:umbral} in Appendix B, {\it{i.e.,~}} $\ensuremath{\left|D\right\rangle}elta^{0}/\Omega_{0}=\sqrt{\omega_{32}/\omega_{13}}$ (indicated by vertical dashed lines), above which the width of the absorption peak changes from being approximately constant to asymptotically decrease with $\ensuremath{\left|D\right\rangle}elta^{0}/\Omega_{0}$. In the second column [Figs.~\ref{f:fig7} (b,d,f)] we observe maxima values for the retrieval efficiency $\ensuremath{\left|e\right\rangle}ta_{r}$. As demonstrated in \cite{Afzelius'09}, there is an optimal finesse ($\gtrsim10$ for large optical depths) which maximizes the retrieval efficiency. This is consistent with the observed maxima, which coincide approximately with the value given by condition (\ref{eq:PAPoptimal}), represented as vertical dashed lines. We observe that, for large detunings, a large number of pulses (or Rabi frequency) improves the efficiency since it increases the height of the comb peaks (and hence the optical depth). On the contrary, in the region of small values of the detuning, we observe that for a low number of pulses or small Rabi frequencies, the efficiency is slightly higher. This is because in this region the population transfer via spontaneous decay is more significant. Therefore, the larger the number of pulses or the Rabi frequency the larger the amount of undesired population is transferred to \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle} from state \ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle}. It is worth to mention that even with the mentioned limitations, the large atomic densities achievable in hot vapors provide, in general, large efficiencies compared to, {\it{e.g.,~}} cold atoms QMs. In particular, for the parameters used in our example, the efficiencies are larger than the state-of-the-art AFCs in REICs. Moreover, the number of pulses used to achieve such efficiencies is remarkably lower than for conventional AFC creation methods. Nevertheless, we must point out here that the efficiencies depend strongly on the Rabi frequencies used. For instance, the transition $\ensuremath{\left|e\right\rangle}nsuremath{\left|2\right\rangle}\leftrightarrow\ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle}$ exhibits a weak dipole moment, which would require large intensities for the driving laser. In particular, the value of the Rabi frequency $\Omega_{c}$ considered in Fig.~\ref{f:fig6} corresponds to an intensity of $\sim2.4$ W/cm$^2$. 
This requirement could be reduced by, {\it{e.g.,~}} tailoring the temporal shape of the $\Omega_{c}$ to have a better matching with the single photon, but this lies out of the scope of this work. We should also note that the retrieval time could be partially controlled by switching off the control field during the storage interval. Differently from the typical spinwave AFC \cite{Gundogan'15,Timoney'12,Yang'18}, in our case the evolution of the coherence for each atom does not stop during the storage of the photon, but it evolves with a phase depending on the atom velocity due to the Doppler shift $v\omega_{34}/c$. Therefore, at the moment of rephasing, if the control field is switched off the atoms cannot reemit the photon. Thus, one could wait until a subsequent rephasing to turn on the control field and retrieve the photon at a time multiple of the original retrieval time (2nd, 3rd, 4th... echo). In any case, since state \ensuremath{\left|e\right\rangle}nsuremath{\left|4\right\rangle} is metastable this would allow to store the photon for times only limited by atomic motion. Note that the latter has not been considered in this work since the numerical simulations presented here do not aim at reproducing a realistic experiment but at guiding experimentalists in the basic steps to implement our protocol. In any case, however, an increasing number of experiments are now focused on implementing QMs in hot vapors \cite{Reim'10,Reim'11,Hosseini'11,Finkelstein'18,Kaczmarek'18,Guo'18} in which the detrimental effects of atomic motion are avoided, e.g., by using buffer gases, cell coatings, specific Raman configurations, and short enough signal photons. \section{CONCLUSIONS} \label{sec:CONCLUSIONS} In this work, we have first studied the implementation of a VC and, eventually, an AFC in hot atomic vapors by using the PAP technique to, later one, discuss its application for QMs. We have shown that by using this technique a reduced number of pulses, compared to standard methods, is enough to create a well defined AFC, whose properties not only depend on the applied trains of pulses, but also on the specific atomic transition frequencies considered. In particular, the peak separation of the comb can be several times smaller than the one that would be obtained with conventional excitation techniques. Moreover, due to the adiabatic following of a dark-state involved in the PAP technique, the resulting combs are robust under intensity fluctuations of the pulses. We have derived analytical expressions for the comb characteristic parameters such as the bandwidth, peak separation, number of peaks, peak width and finesse, and determined the optimal conditions for its application in QMs. In particular, we have studied the implementation of this technique in a high density Ba atomic vapor for the storage and retrieval of single photons at the telecom range. Finally, although this technique has been discussed for hot vapors in which the Doppler effect is exploited, it could also be implemented in other $\Lambda$-type systems with a large enough inhomogeneous broadening in the two-photon transition due to, e.g., inhomogeneous magnetic fields. \section*{ACKNOWLEDGMENTS} \label{sec:ACKNOWLEDGMENTS} The authors gratefully acknowledge financial support through the Ministerio de Economía y Competitividad (MINECO) (FIS2014-57460-P, FIS2017-86530-P) and from the Generalitat de Catalunya (SGR2017-1646). 
\section*{Appendix A} \label{sec:AppendixA} \subsection*{\textbf{A.1} OFC bandwidth and height} \label{sec:AppendixA1} We consider a train of $N$ decreasing pulses like the one in \refeq{eq:fieldD}. If we assume that the individual pulses, of width $\sigma$, are much shorter than the Gaussian envelope, of width $\sigma_{e}$, the train can be approximated by
\begin{equation} \label{eq:ApendixA1} \Omega(t)=\Omega_{0}\sum_{n=0}^{N-1}\Omega_{n}\,e^{-(t-n{T_{int}})^{2}/2\sigma^2}, \end{equation}
where $\Omega_{n} =e^{-n^2T_{int}^2/2\sigma_{e}^{2}}$ and $T_{int}$ is the separation between two consecutive pulses. The corresponding OFC is given by the absolute value of the Fourier transform of \refeq{eq:ApendixA1}, defined as $\tilde{\Omega}(\omega)\equiv\int_{-\infty}^{+\infty}\Omega(t)e^{-i\omega t}\,dt$, which reads
\begin{equation} \label{eq:ApendixA2} \tilde{\Omega}(\omega)=\sigma\Omega_{0}e^{-\omega^2\sigma^2/2}\sum_{n=0}^{N-1}\Omega_{n}e^{inT_{int}\omega}, \end{equation}
where the envelope has a bandwidth of $1/\sigma$. The maxima of the OFC peaks correspond to the frequencies $\omega=k2\pi/T_{int}$, with $k\in\mathds{Z}$, since for those values the summation in \refeq{eq:ApendixA2} takes its maximum value, $\sum_{n=0}^{N-1}\Omega_{n}$, which increases with $N$. Thus, the height of the OFC increases with $N$ and is proportional to $\Omega_{0}$. The same results are obtained for a train of $N$ increasing pulses like the one given by \refeq{eq:fieldP}.
\subsection*{\textbf{A.2} AFC bandwidth} \label{sec:AppendixA2} To obtain an expression for the bandwidth of the AFC, we proceed as follows. From \eqref{eq:ApendixA2}, the envelopes of the pump and dump OFCs for an atom at rest are two Gaussians whose centers are given by the sum of the corresponding transition frequency and nominal detuning,
\begin{align} S_{p,d}(\omega)\propto\exp{\left[-\frac{\left(\omega-(\omega_{12,32}+\Delta_{p,d}^{0})\right)^2}{2/\sigma^2}\right]}, \end{align}
where $\sigma$ is the width of the field pulses. Further, for a moving atom, the Doppler shift changes the detuning according to \refeq{eq:DopplerDelta}, so the previous expression reads
\begin{align} S_{p,d}(\omega)\propto\exp{\left[-\cfrac{\left(v-c\,\cfrac{\omega-\omega_{12,32}-\Delta_{p,d}^{0}}{\omega_{p,d}^{0}}\right)^2}{2\left(\cfrac{c}{\sigma\omega_{p,d}^{0}}\right)^2}\right]}, \end{align}
whose respective bandwidth in velocity is $c/(\sigma\omega_{p,d}^{0})$. We note that each spectral distribution is displaced differently depending on the atomic velocity. Thus, only the atoms that experience a non-zero overlap between the two fields, $|\int{S_{p}(\omega)S_{d}(\omega)}\,d\omega|$, can be transferred to state \ensuremath{\left|3\right\rangle} via PAP. It is easy to see that such an overlap takes the form of a Gaussian with a width $\sqrt{2}c/(\sigma\omega_{13})$, which determines the range of velocities that allows the atoms to interact with both fields. Finally, in terms of the storage transition frequency $\omega_{34}$, we recover the bandwidth of the AFC in \refeq{eq:delta12}, $\Gamma=\sqrt{2}\omega_{34}/(\sigma\omega_{13})=\sqrt{2}\xi/\sigma$.
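The comb structure predicted by Eq.~(\ref{eq:ApendixA2}) is easy to visualize numerically. The following sketch (ours, purely illustrative; the envelope width $\sigma_e$ chosen below is an assumption, not a value from the text) samples the truncated train of Eq.~(\ref{eq:ApendixA1}) on a time grid and takes its discrete Fourier transform, showing teeth spaced by $1/T_{int}$ in ordinary frequency under an envelope of width $\sim1/\sigma$:
\begin{verbatim}
import numpy as np

N, sigma, T_int = 16, 6.2e-9, 0.17e-6            # pulse number, width, spacing
sigma_e = (N - 1)*T_int/2.0                      # envelope width: an assumption
t = np.arange(-1e-6, 4e-6, 0.2e-9)               # time grid (s)

# Truncated pulse train of Eq. (eq:ApendixA1), amplitude in arbitrary units
Omega = sum(np.exp(-(n*T_int)**2/(2*sigma_e**2)) *
            np.exp(-(t - n*T_int)**2/(2*sigma**2)) for n in range(N))

spec = np.abs(np.fft.rfft(Omega))                # |Fourier transform| = the OFC
freq = np.fft.rfftfreq(t.size, d=t[1] - t[0])    # ordinary frequency (Hz)

# The first comb tooth away from zero frequency sits near 1/T_int ~ 5.9 MHz,
# while the comb envelope decays on a scale ~ 1/(2 pi sigma) ~ 26 MHz.
band = (freq > 3e6) & (freq < 9e6)
print("first tooth at %.2f MHz" % (freq[band][np.argmax(spec[band])]/1e6))
\end{verbatim}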
\section*{Appendix B} \label{sec:AppendixB} \begin{figure*}[ht] { \includegraphics[width=1\textwidth]{fig8.pdf} } \caption{Density plot of $\rho_{33}$ via STIRAP as a function of the atomic velocity $v$ and the ratio $\ensuremath{\left|D\right\rangle}elta^{0}/\Omega_{0}$. Note that the region in which the population is transfered to \ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}, named \textit{Optimal Zone}, is bounded by the curves ${\rm V_{s,p}^{\pm}}$. The values for the frequencies, couplings, and decays are the same as in Fig.~\ref{f:fig4}. } \label{f:fig8} \ensuremath{\left|e\right\rangle}nd{figure*} To obtain the expression for the width of the peaks of the AFC, i.e., \refeq{eq:FWHMFrequency}, we proceed in two steps as follows. \textit{I. Frequency peak for STIRAP}. In STIRAP, a $\Lambda$-system as the one shown in Fig.~\ref{f:fig1} (and following the same notation and definitions for clarity) is shined by a pair of pulses, pump and dump, which are sent in a counterintuitive temporal manner. In order for STIRAP to properly work, the pump and dump fields have to fulfill the so called two-photon resonance condition, i.e., $\ensuremath{\left|D\right\rangle}elta^0_{p}-\ensuremath{\left|D\right\rangle}elta^0_{d}=0$. Beyond this condition, it can be shown that the final population $\rho_ {33}$ after applying STIRAP is distributed in a two-photon resonance window \cite{Vitanov'10}. Similarly, for a Doppler broadened medium the velocity of an atom $v$ produces a differential Doppler shift in the pump and dump transitions, which leads to an inhomogeneous broadening in the two-photon resonance. Analogously as in \cite{Rubio'16}, we have rewritten the STIRAP Hamiltonian in the dark/dressed states basis, obtaining the so-called \textit{Optimal Zone} (OZ) of parameters $\ensuremath{\left|D\right\rangle}elta^{0} $, $\Omega_{0} $, and $v$ for which STIRAP works \cite{Rubio'20}. In particular, for an atom with velocity $v$ along the propagation direction of the fields, this zone is bounded by the curves: \begin{widetext} \begin{subequations} \label{V} \begin{eqnarray} \label{Vs} {\rm Vs^{\pm}}\rightarrow\;\;\;\;\;v &=& \cfrac{c}{2\omega_{12}\omega_{13}}\left\{\pm\sqrt{\omega_{13}\left[(\ensuremath{\left|D\right\rangle}elta^{0})^2\omega_{13}+\Omega_{0}^2\omega_{12}\right]}-\omega_{13}\ensuremath{\left|D\right\rangle}elta^{0}\right\}, \\ \label{Vp} {\rm Vp^{\pm}}\rightarrow\;\;\;\;\;v &=& \cfrac{c}{2\omega_{32}\omega_{13}}\left\{\pm\sqrt{\omega_{13}\left[(\ensuremath{\left|D\right\rangle}elta^{0})^2\omega_{13}-\Omega_{0}^2\omega_{32}\right]}-\omega_{13}\ensuremath{\left|D\right\rangle}elta^{0}\right\}. \ensuremath{\left|e\right\rangle}nd{eqnarray} \ensuremath{\left|e\right\rangle}nd{subequations} \ensuremath{\left|e\right\rangle}nd{widetext} Fig.~\ref{f:fig8} shows the contour plot of the numerically calculated population $\rho_{33}$ after the STIRAP process as a function of the atomic velocity $v$, and the ratio $\ensuremath{\left|D\right\rangle}elta^{0}/\Omega_{0}$, where the pump and dump pulses coincide with the envelopes of the corresponding trains of pulses used in PAP. We also consider the same atomic frequencies as in the example shown in Fig.~\ref{f:fig8}. 
The region where the population is transferred to state $\ensuremath{\left|e\right\rangle}nsuremath{\left|3\right\rangle}$ practically coincides with the region enclosed by the curve ${\rm Vs^{\pm}}$ and ${\rm Vp^{\pm}}$, being \begin{equation} \label{eq:umbral} \cfrac{\left|\ensuremath{\left|D\right\rangle}elta^{0}\right|}{\Omega_{0}}=\sqrt{\cfrac{\omega_{32}}{\omega_{13}}}, \ensuremath{\left|e\right\rangle}nd{equation} the points at which the $\rm Vp^{\pm}$ curves change from being real to complex valued. Note that these points delimit two zones with different growing behaviors for the base of the curve $\rho_{33}(v)$ as a function of $\ensuremath{\left|D\right\rangle}elta^{0}/\Omega_{0}$. Thus, the FWHM of $\rho_{33}(v)$ will be given by \begin{subequations} \label{eq:Ranges} \begin{eqnarray} \label{eq:Ranges1} W_{ss}&= {\rm f}ac{1}{2}\left({\rm Vs^{+}-Vs^{-}}\right)\;\;{\rm for}\;\cfrac{\left|\ensuremath{\left|D\right\rangle}elta^{0}\right|}{\Omega_{0}}<\sqrt{\cfrac{\omega_{32}}{\omega_{13}}}, \\ \label{eq:Ranges2} W_{sp}&= {\rm f}ac{1}{2}\left({\rm Vs^{+}-Vp^{+}}\right)\;\;{\rm for}\;\cfrac{\left|\ensuremath{\left|D\right\rangle}elta^{0}\right|}{\Omega_{0}}>\sqrt{\cfrac{\omega_{32}}{\omega_{13}}}, \ensuremath{\left|e\right\rangle}nd{eqnarray} \ensuremath{\left|e\right\rangle}nd{subequations} using that ${\rm Vs^{+}-Vp^{+}}\approx{\rm Vp^{-}-Vs^{-}}$. Arrows in Fig.~\ref{f:fig8} indicate the width of the base of the curve for the parameters used in Fig.~\ref{f:fig9}. Let us consider that the nominal detuning $\ensuremath{\left|D\right\rangle}elta^{0}$ is large enough to neglect the effect of spontaneous emission, which is a requirement to perform our AFC as discussed in the main text. Then, assuming that condition \refeq{eq:Ranges2} is fulfilled, combining it with Eqs.~(\ref{Vs}) and (\ref{Vp}), and using the first order of the Taylor expansion in $\sqrt{1 \pm x}$ with $x=\left|\omega_{12,32}\,\Omega_{0}^2/\omega_{13}(\ensuremath{\left|D\right\rangle}elta^{0})^2\right|\ll1$, we obtain that the FWHM of the curve $\rho_{33}(v)$ for which STIRAP is successfully performed reads: \begin{equation} \label{eq:FWHMVelocityAP} W_{sp}=\cfrac{\Omega_{0}^2\,c}{4\omega_{13}\ensuremath{\left|D\right\rangle}elta^{0}}. \ensuremath{\left|e\right\rangle}nd{equation} In terms of the frequencies, the last expression corresponds to a FWHM for $\rho_{33}(\delta)$ of \begin{equation} \label{eq:FWHMFrequencyAP} \varpi_{\rm STIRAP}=\cfrac{\Omega_{0}^2\,\xi}{4\ensuremath{\left|D\right\rangle}elta^{0}}, \ensuremath{\left|e\right\rangle}nd{equation} where $\xi=\omega_{34}/\omega_{13}$. \textit{II. Relation between PAP and STIRAP}. It has been shown that PAP is equivalent to STIRAP \cite{Shapiro'07} due to the fact that the global evolution operators are the same, as long as each individual pulse of the PAP trains does not significantly change the population. As the global evolution of the wavefunction is equal for both processes, we consider that the temporal integrals for all the elements of the density matrix are also the same (we neglect incoherent processes). In particular, \begin{equation} \label{eq:c3} \int_{0}^{t_{f}}\left|c^{\rm PAP}_{3}(t)\right|^{2}dt=\int_{0}^{t_{f}}\left|c^{\rm STIRAP}_{3}(t)\right|^{2}dt, \ensuremath{\left|e\right\rangle}nd{equation} where $c_{3}$ is the probability amplitude of state \ket{3}. 
Considering Parseval's theorem about the unitarity of the Fourier transform, we conclude \begin{equation} \label{eq:ApA2} \int_{-\infty}^{+\infty}\!\rho_{33}^{\rm PAP}(\delta)\,d\delta=\int_{-\infty}^{+\infty}\!\rho_{33}^{\rm STIRAP}(\delta)\,d\delta. \ensuremath{\left|e\right\rangle}nd{equation} where $\rho_{33}(\delta)=\left|c_{3}(\delta)\right|^2$. \begin{figure}[ht] { \includegraphics[width=1\columnwidth]{fig9.pdf} } \caption{Modeling of the $\rho_{33}(\delta)$ profile for (a) STIRAP and (b) PAP with rectangles of the same area. We can reconstruct the rectangle of (a) by combining conveniently the ${\rm N_{c}}$ peaks of the comb (b) as shown in (c). This allows us to relate the widths of the STIRAP and PAP peaks (see text). } \label{f:fig9} \ensuremath{\left|e\right\rangle}nd{figure} The last expression means that the total area under the curve $\rho_{33}^{\rm STIRAP}(\delta)$ of STIRAP and the sum of the areas of the PAP comb peaks has to be equal. From \ensuremath{\left|e\right\rangle}qref{eq:ApA2} we can obtain a relation between the width $\varpi_{\rm STIRAP}$ and the width of the comb peaks $\varpi_{\rm PAP}$ by proceeding as follows. First, we assume that the curve $\rho_{33}(\delta)$ for STIRAP and for the $\rm{N_{c}}$ peaks of the PAP comb have Gaussian profiles. Then, we approximate the integral of each Gaussian by the area of a rectangle with the same height as the peak and a base equal to the corresponding FWHM, $\varpi_{\rm STIRAP}$ for STIRAP [see Fig. \ref{f:fig9}(a)] and $\varpi_{\rm PAP}$ for the PAP peaks [see Fig. \ref{f:fig9}(b)]. In the figures, the maximum heights are taken to be 1 for simplicity without lack of generality. Secondly, we consider the Gaussian envelope of the PAP comb [blue dashed curve in Fig. \ref{f:fig9}(b)]. Using the properties of a Gaussian, it is easy to see that the rectangle $ABCD$, being $\overline{DC}=\sqrt{2\pi}\Gamma$, is divided into two regions with approximately equal areas: the area of the region under the curve, containing the right half of the peaks, and the area of the region over the curve. This allows us to place the left half of the rectangles, in orange in Fig.~\ref{f:fig9}(b), on top of the right half ones, inside the region $ABCD$ over the curve. Next, getting rid of the spaces between rectangles, a new one is constructed [see Fig.~\ref{f:fig9}(c)] with a base equal to $({\rm N_{c}}/2)\varpi_{\rm PAP}$, assuming that ${\rm N_{c}\gg1}$. According to \refeq{eq:ApA2}, the bases of the rectangles of Fig.~\ref{f:fig9}(a) and of Fig.~\ref{f:fig9}(c) must be the same. Therefore, using \ensuremath{\left|e\right\rangle}qref{eq:Nc}, we obtain \begin{equation} \label{eq:FWHMFrequencyPAP} \varpi_{\rm PAP}=\cfrac{\sqrt{\pi}\sigma}{T_{int}}\,\varpi_{\rm STIRAP}. \ensuremath{\left|e\right\rangle}nd{equation} Finally, casting \ensuremath{\left|e\right\rangle}qref{eq:FWHMFrequencyAP} into the last expression, we obtain \ensuremath{\left|e\right\rangle}qref{eq:FWHMFrequency}. \ensuremath{\left|e\right\rangle}nd{document}
\begin{document} \title[Topological torsion elements via natural density]{Topological torsion elements via natural density and a quest for solution of Armacost like problem} \mbox{supp }bseteqjclass[2010]{Primary: 11B05, Secondary: 22B05, 40A05} \keywords{ Circle group, natural density, Topological $s$-torsion element, statistical convergence, $s$-characterized subgroup, arithmetic sequence} \author{Pratulananda Das} \address{Department of Mathematics, Jadavpur University, Kolkata-700032, India} \email {[email protected]} \author{Ayan Ghosh} \address{Department of Mathematics, Jadavpur University, Kolkata-700032, India} \email {[email protected]} \begin{abstract} One can use the number theoretic idea of the notion of natural density \contite{B1} to define topological s-torsion elements (which form the statistically characterized subgroups, recently developed in \contite{DPK}) extending Armacost's idea of topological torsion elements. We follow in the line of Armacost who had posed the famous classical problem for "description of topological torsion elements" of the circle group. In this note we consider the natural density version of Armacost's problem and present a complete description of topological s-torsion elements in terms of the support, for all arithmetic sequences which also provides the solution of Problem 6.10 posed in \contite{DPK} . \end{abstract} \maketitle \section{Introduction and the background} Throughout ${\mathbb R}$, ${\mathbb Q}$, ${\mathbb Z}$ and ${\mathbb N}$ will stand for the set of all real numbers, the set of all rational numbers, the set of all integers and the set of all natural numbers respectively. The first three are equipped with their usual abelian group structure, The circle group ${\mathbb T}$ is identified with the quotient group ${\mathbb R}/{\mathbb Z}$ of ${\mathbb R}$ endowed with its usual compact topology. For $x\in{\mathbb R}$ we denote by $\{x\}$ the distance from the integers i.e. the difference $x - [x]$. Recall that an element $x$ of an abelian group $X$ is torsion if there exists $k \in \mathbb{N}$ such that $kx = 0$ (more specifically called $k$-torsion in this case). An element $x$ of an abelian topological group $G$ is \contite{B}: \begin{itemize} \item[(i)] {\em topologically torsion} if $n!x \rightarrow 0;$ \item[(ii)] {\em topologically $p$-torsion}, for a prime $p$, if $p^nx \rightarrow 0.$ \end{itemize} It is obvious that any $p$-torsion element is topologically $p$-torsion. Armacost \contite{A} defined the subgroups $$ X_p = \{x \in X: p^nx \rightarrow 0\} \ ~\mbox{and} ~ \ X! = \{x \in X: n!x \rightarrow 0\}$$ of an abelian topological group $X$, and started their investigation, in particular description of the elements forming the subgroups. Note that the above two notions are just special cases of the following general notion considered in (Section 4.4.2, \contite{DPS}). \begin{definition} Let $(a_n)$ be a sequence of integers. An element $x$ in an abelian topological group $G$ is called {\em (topological) $\underline{a}$-torsion element} if $a_nx \rightarrow 0$. \end{definition} Further $$ t_{(a_n)}({\mathbb T}) := \{x\in {\mathbb T}: a_nx \to 0\mbox{ in } {\mathbb T}\}. $$ of ${\mathbb T}$ is called {\em a characterized} $($by $(a_n))$ {\em subgroup} of ${\mathbb T}$, the term {\em characterized} appearing much later, coined in \contite{BDS}. (a) Let $p$ be a prime. For the sequence $(a_n)$, defined by $a_n = p^n$ for every $n$, obviously $t_{(p^n)}({\mathbb T})$ contains the Pr\" ufer group ${\mathbb Z}(p^\infty)$. 
Armacost \cite{A} proved that $t_{(p^n)}({\mathbb T})$ simply coincides with ${\mathbb Z}(p^\infty)$, and that $x$ is a topologically $p$-torsion element if and only if $supp(x)$ (defined later) is finite. (b) Armacost \cite{A} posed the problem of describing the group ${\mathbb T}! = t_{(n!)}({\mathbb T})$. It was resolved independently and almost simultaneously in \cite[Chap. 4]{DPS} and by J.-P. Borel \cite{Bo2}. In particular, in both of the above-mentioned instances, the sequences of integers concerned are arithmetic sequences. Recall that a sequence of positive integers $(a_n)$ is called an arithmetic sequence if $$ 1<a_1<a_2<a_3< \ldots <a_n< \ldots \ ~ \ \mbox{ and } a_n|a_{n+1} \ ~ \ \mbox{ for every } n\in{\mathbb N} . $$ As has been seen, some of the most interesting cases studied are the topological $\underline{a}$-torsion elements characterized by arithmetic sequences. The results of Armacost \cite{A} and Borel \cite{Bo2} were considered in full generality, for arbitrary arithmetic sequences, in \cite{DD} and then with more clarity in \cite{DI1}, where topological $\underline{a}$-torsion elements were completely described for a major class of arithmetic sequences. However, in a certain sense, the condition $a_nx \to 0 $ seems somewhat too restrictive: observe, for example, that only countably many elements of ${\mathbb T}$ are topologically $p$-torsion, even though the sequence $(a_n) = (p^n)$ is not too dense (it is a geometric progression, so it has exponential growth). Motivated by this observation, we intend to consider a modified definition using a more general notion of convergence, as follows. For $m,n\in\mathbb{N}$ with $m\leq n$, let $[m, n]$ denote the set $\{m, m+1, m+2,...,n\}$. By $|A|$ we denote the cardinality of a set $A$. The lower and the upper natural densities of $A \subset \mathbb{N}$ are defined by \cite{B1} $$ \underline{d}(A)=\displaystyle{\liminf_{n\to\infty}}\frac{|A\cap [0,n-1]|}{n} ~~\mbox{and}~~ \overline{d}(A)=\displaystyle{\limsup_{n\to\infty}}\frac{|A\cap [0,n-1]|}{n}. $$ If $\underline{d}(A)=\overline{d}(A)$, we say that the natural density of $A$ exists, and it is denoted by $d(A)$ \cite{B1} (see \cite{S1, SZ} for related works). For two subsets $A, B$ of ${\mathbb N}$, we will write $A \subseteq^d B$ if $d(A\setminus B) = 0$ and $A =^d B$ if $d(A\triangle B) = 0$. The idea of natural density was later used to define the notion of statistical convergence (\cite{F, St}, see also \cite{C, S, Fr}). \begin{definition}\label{Def1} A sequence of real numbers $(x_n)$ is said to converge to a real number $x_0$ statistically if for any ${\varepsilon} > 0$, $d(\{n \in \mathbb{N}: |x_n - x_0| \geq {\varepsilon}\}) = 0$. \end{definition} Over the years, the notion of statistical convergence has been extended to general topological spaces using open neighborhoods \cite{MK}, and a great deal of work has been done on it, primarily because it extends the notion of usual convergence very naturally, preserving many of the basic properties while at the same time including more sequences under its purview, thus paving the way for interesting applications (see, for example, \cite{BDK, C, C2}). The following two characterizations of statistical convergence will be of much help in our investigations.
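Before stating them, we note that Definition \ref{Def1}, applied to the sequences $(\{a_nx\})$ considered here, is easy to explore numerically. The following small Python sketch (ours, purely illustrative; the sample points $x=1/3$ and $x=2^{-10}$, the truncation at $2000$ terms, and the threshold $\varepsilon=0.1$ are arbitrary choices) estimates the density of indices at which $\{a_nx\}$ stays away from $0$ for the sequence $a_n=2^n$:
\begin{verbatim}
from fractions import Fraction

def circle_norm(x):
    # distance from x to the nearest integer, i.e. the quantity {x} used in the text
    f = x - int(x)
    if f < 0:
        f += 1
    return min(f, 1 - f)

def bad_density(x, a, N, eps):
    # counting quotient |{n <= N : {a_n x} >= eps}| / N, an estimate of the density
    bad = sum(1 for n in range(1, N + 1) if circle_norm(a(n)*x) >= eps)
    return bad / N

a = lambda n: 2**n                                    # the arithmetic sequence a_n = 2^n
print(bad_density(Fraction(1, 3),     a, 2000, 0.1))  # 1.0: 2^n/3 stays at distance 1/3
print(bad_density(Fraction(1, 2**10), a, 2000, 0.1))  # ~0: only finitely many bad n
\end{verbatim}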
\begin{theorem} \contite{S} For a sequence of real numbers $(x_n)$, $x_n \to x_0$ statistically if and only if there exists a subset A of $ {\mathbb N}$ of asymptotic density 1, such that $\displaystyle{\lim_{n \in A}} x_n = x_0$. \end{theorem} \begin{lemma}\label{suf1} (Folklore) A sequence $(x_n)$ converges to $\xi\in{\mathbb R}$ {\em statistically} if and only if for any $A\mbox{supp }bseteqseteq{\mathbb N}$ with $\overline{d}(A)>0$, there exists an infinite $A'\mbox{supp }bseteqseteq A $ such that $\lim\limits_{n\in A'} x_n = \xi $. \end{lemma} Under the circumstances, it seems very natural to use the notion of statistical convergence to relax the condition $a_nx \to 0 $, i.e. we will consider the situation when $a_nx \to 0$ statistically (rephrasing Definition 1.2, in our context, this means that for every ${\varepsilon} > 0$ there exists a subset A of $ {\mathbb N}$ of asymptotic density 0, such that $\{a_nx\} < {\varepsilon}$ for every $n \not \in A$). \begin{definition} Let $(a_n)$ be a sequence of integers. An element $x$ in an abelian topological group $G$ is called {\em topological $s_{\underline{a}}$-torsion element (only topological $s$-torsion when the given sequence $(a_n)$ is fixed)} if $a_nx \to 0\ \mbox{ statistically in }$ $G$. \end{definition} It has already been observed in the recent article \contite{DPK} that the topological $s$-torsion elements of ${\mathbb T}$ form a proper Borel subgroup of ${\mathbb T}$ of cardinality $\mathfrak c$ for any arithmetic sequence $(a_n)$ which was named as an {\em $s$-characterized (by $(a_n)$) subgroup of ${\mathbb T}$} denoted by $t_{(a_n)}^s ({\mathbb T})$ i.e. $t^s_{(a_n)}({\mathbb T}) := \{x\in {\mathbb T}: a_nx \to 0\ \mbox{ statistically in }\ {\mathbb T}\}$. In this note we intend to present a complete description of topological $s$-torsion elements of ${\mathbb T}$ (given in Theorem 2.1 and Theorem 2.2) which in turn would answer the following open problem posed in \contite{DPK}:\\ \textbf{Problem 6.10.} Let $(a_n)$ be an arithmetic sequence such that $q_n>2$ for infinitely many $n$. Does there exist a characterization of the elements of the subgroup $t^s_{(a_n)}({\mathbb T})$ only in terms of the support. Before proceeding to our main result we present below certain basic definitions, notations and results which will be needed in the next section. \begin{definition} An arithmetic sequence of integers $(a_n)$ is called \begin{itemize} \item[(i)] $q$-bounded if the sequence of ratios $(q_n)$ (where $q_n=\frac{a_n}{a_{n-1}}$) is bounded. \item[(ii)] $q$-divergent if the sequence of ratios $(q_n)$ diverges to $\infty$. \end{itemize} \end{definition} \begin{lemma}\contite{DI1} For any arithmetic sequence $(a_n)$ and $x\in{\mathbb T}$, we can build a unique sequence of integers $(c_n)$, where $0\leq c_n<q_n$, such that \begin{equation}\label{canrep} x=\mbox{supp }m\limits_{n=1}^{\infty}\frac{c_n}{a_n} \end{equation} and $c_n<q_n-1$ for infinitely many $n$. \end{lemma} For $x\in{\mathbb T}$ with canonical representation (\ref{canrep}), we define $supp(x)=\{n\in{\mathbb N}\ : \ c_n\neq0\}$ and $supp_q(x)=\{n\in{\mathbb N}\ : \ c_n=q_n-1\}$. Clearly $supp_q(x) \mbox{supp }bseteqseteq supp(x)$. Now we are going to introduce some equations which will be used repeatedly in our next section. Let $(a_n)$ be an arithmetic sequence and $x\in{\mathbb T}$ has canonical representation (\ref{canrep}). 
Then we have, for all non-negative integers $k$,
\begin{equation}\label{eq1} i) \ ~ \ q_n\cdot q_{n+1}\ldots q_{n+k} \ = \ \frac{a_n}{a_{n-1}} \cdot \frac{a_{n+1}}{a_n} \ldots \frac{a_{n+k}}{a_{n+k-1}} \ = \ \frac{a_{n+k}}{a_{n-1}}. \end{equation}
$$ ii) \ ~ \ \{a_{n-1}x\}\ = \ \sum\limits_{i=n}^{\infty} \frac{c_i}{a_i} \cdot a_{n-1} \ = \ \Big(c_n\cdot\frac{a_{n-1}}{a_n} + c_{n+1}\cdot\frac{a_{n-1}}{a_{n+1}}+ \ldots\Big)$$
\begin{equation}\label{eq2} = \ \frac{c_n}{q_n}+\frac{c_{n+1}}{q_n q_{n+1}} + \ldots + \frac{c_{n+k}}{q_n \cdot q_{n+1} \ldots q_{n+k}} + \sum\limits_{i=k+1}^{\infty} \frac{c_{n+i}}{q_n \cdot q_{n+1} \ldots q_{n+i}}. \end{equation}
$$ iii) \ ~ \ \{a_{n+k}x\} \ = \ a_{n+k}\Big(\frac{c_{n+k+1}}{a_{n+k+1}}+\frac{c_{n+k+2}}{a_{n+k+2}}+ \ldots\Big) \ = \ \Big(\frac{c_{n+k+1}}{q_{n+k+1}}+\frac{c_{n+k+2}}{q_{n+k+1}\cdot q_{n+k+2}}+\ldots\Big) $$
\begin{equation}\label{eq3} = \ q_n \cdot q_{n+1} \ldots q_{n+k} \sum\limits_{i=k+1}^{\infty} \frac{c_{n+i}}{q_n \cdot q_{n+1} \ldots q_{n+i}}. \end{equation}
From equations (\ref{eq3}) and (\ref{eq2}), we get
\begin{equation}\label{eq4} iv) \ ~ \ \{a_{n-1}x\} \ = \ \frac{c_n}{q_n} + \frac{c_{n+1}}{q_n\cdot q_{n+1}} + \ldots + \frac{c_{n+k}}{q_n\cdot q_{n+1} \ldots q_{n+k}}+ \frac{\{a_{n+k}x\}}{q_n\cdot q_{n+1} \ldots q_{n+k}}. \end{equation}
For all $n\in{\mathbb N}$ and $k\in{\mathbb N}\cup \{0\}$, we define
\begin{equation}\label{eq0} \sigma_{n,k}\ =\ \frac{c_n}{q_n} + \frac{c_{n+1}}{q_n\cdot q_{n+1}} + \ldots + \frac{c_{n+k}}{q_n\cdot q_{n+1} \ldots q_{n+k}}. \end{equation}
Therefore, equation (\ref{eq4}) becomes
\begin{equation}\label{eq5} v) \ ~ \ \{a_{n-1}x\}= \sigma_{n,k}+\frac{\{a_{n+k}x\}}{q_n\cdot q_{n+1} \ldots q_{n+k}}. \end{equation}
Further, putting $k=0$ in equation (\ref{eq4}), we finally obtain
\begin{equation}\label{eq6} vi) \ ~ \ \{a_{n-1}x\}=\frac{c_n}{q_n}+\frac{\{a_nx\}}{q_n}. \end{equation}
Let $a = (a_n)$ be a given arithmetic sequence. Now for any $B\subseteq{\mathbb N}$ with $d(B)>0$, let $t_{(a_B)} ({\mathbb T}) = \{ x\in{\mathbb T} : \ \lim\limits_{n\in B} a_nx=0$ in ${\mathbb T} \}$ and $t_{(a_B)}^s ({\mathbb T}) = \{ x\in{\mathbb T} : \ \lim\limits_{n\in B'} a_nx=0$ in ${\mathbb T}\ $ for some $B'\subseteq B$ with $d(B\setminus B')=0\}$. Therefore, for all $B \subseteq {\mathbb N}$ with $\overline{d}(B)>0$, we have $t_{(a_n)}^s ({\mathbb T}) \subseteq t_{(a_B)}^s ({\mathbb T})$ and $t_{(a_n)}^s ({\mathbb T}) = \bigcap\limits_{B\in [{\mathbb N}]^{\aleph_0} \ \& \ \overline{d}(B)>0} t_{(a_B)}^s ({\mathbb T})$.
\begin{lemma}\label{nec} If $B\subseteq{\mathbb N}$ with $\overline{d}(B)>0$ and $x\in t_{(a_{B-1})}({\mathbb T})$ (where $B-1 = \{k-1: k \in B\}$), then the following hold: \\ i) If $B\subseteq^d supp(x)$ and $B$ is $q$-bounded, then $B\subseteq^d supp_q(x)$ and there exists $B'\subseteq B$ with $d(B\setminus B')=0$ such that $\lim\limits_{n\in B'} \{a_{n-1}x\}=1$ in ${\mathbb R}$.\\ ii) If $d(B\cap supp(x))=0$, then there exists $B'\subseteq B$ with $d(B\setminus B')=0$ such that $\lim\limits_{n\in B'} \{a_{n-1}x\}=0$ in ${\mathbb R}$. \end{lemma} \begin{proof} i) Let $q=1+\max\limits_{n\in B} \{q_n\}$ and $B'=B\cap supp(x)$.
Since $B'\mbox{supp }bseteqseteq B$ and $d(B\setminus supp(x))=0$, we get $d(B\setminus B')=d(B\setminus (B\contap supp(x)))=d(B\setminus supp(x))=0$.\\ Therefore, $$ \{a_{n-1}x\}\geq \frac{c_n}{q_n} > \frac{1}{q} \ ~ \ \forall \ n\in B' \mbox{ (Since $c_n\geq 1$ for all $n\in B'$) .} $$ But as $x\in t_{(a_{B-1})}({\mathbb T})$, thus we can conclude that $ \lim\limits_{n\in B'} \{a_{n-1}x\}=1$ in ${\mathbb R}$. \\ Therefore, \begin{eqnarray*} 1-\frac{1}{q_n} &<& 1- \frac{1}{q} <\{a_{n-1}x\} = \frac{c_n}{q_n} + \frac{\{a_nx\}}{q_n}\\ &<& \frac{c_n+1}{q_n} \ ~ \ \mbox{for almost all} \ n\in B' \mbox{ (From equation (\ref{eq6})).} \end{eqnarray*} $$ {\mathbb R}ightarrow q_n -1<c_n+1 \mbox{ i.e. } c_n>q_n-2 \ \mbox{for almost all }\ n\in B'. $$ Hence, $c_n=q_n-1$ for almost all $n\in B'$ i.e. $B'\mbox{supp }bseteqseteq^* supp_q(x)$, which implies $B\mbox{supp }bseteqseteq^d supp_q(x)$.\\ ii) Let $B'=B\contap({\mathbb N}\setminus supp(x))$. Since $B'\mbox{supp }bseteqseteq B$ and $d(B\contap supp(x))=0$, we get $d(B\setminus B')=d(B\setminus (B\contap ({\mathbb N}\setminus supp(x))))=d(B\setminus (B\setminus supp(x)))=d(B\contap supp(x))=0$. Now, from equation (\ref{eq6}), we have $$ \{a_{n-1}x\}=0+\frac{\{a_nx\}}{q_n} <\frac{1}{2} \ \forall\ n\in B' \mbox{ (Since $c_n=0 \ \forall \ n\in B'$). } $$ Then in view of the fact that $x\in t_{(a_{B-1})}({\mathbb T})$, we must have, $\lim\limits_{n\in B'} \{a_{n-1}x\}=0$ in ${\mathbb R}$. \end{proof} \begin{lemma}\label{suf2} Let $(a_n)$ be an arithmetic sequence. Consider $A\mbox{supp }bseteqseteq{\mathbb N}$ with $\overline{d}(A)>0$ where $A$ is not $q$-bounded. If $\nexists$ any $q$-bounded subset $A'\mbox{supp }bseteqseteq A$ with $\overline{d}(A')>0$ then $\exists$ a $q$-divergent set $B\mbox{supp }bseteqseteq A$ with $d(A\setminus B)=0$. \end{lemma} \begin{proof} Let $A_m=\{n\in A\ : \ q_n=m\}$ for some $m\in {\mathbb N}\setminus \{1\}$. Since $\nexists$ any $q$-bounded set $A'\mbox{supp }bseteqseteq A$ with $\overline{d}(A')>0$, we conclude that $d(A_m)=0$ for all $m\in{\mathbb N}\setminus \{1\}$. If there exists $k\in{\mathbb N}$ such that $A_m$ is finite for all $m>k$, then setting $B=\bigcup\limits_{m=k+1}^{\infty} A_m$, it is easy to observe that $B$ is $q$-divergent with $d(A\setminus B)=0$ (since $d(\bigcup\limits_{m=2}^k A_m)=0$). Otherwise without any loss of generality, we can assume that $A_m$ is infinite for all $m\in{\mathbb N}\setminus \{1\}$. Now, considering the sequence $(A_m)_{m\in{\mathbb N}}$ of density zero sets, one can find $C \mbox{supp }bseteqset {\mathbb N}$ with $d(C)=0$ such that $A_m\setminus C$ is finite for all $m\in{\mathbb N}\setminus\{1\}$ (for explicit construction of such a set, see \contite{S}, also the existence follows from the fact that $\mathcal{I}_d =\{A \mbox{supp }bseteqset {\mathbb N}: d(A) = 0\}$ is a $P$-ideal, see \contite{C}). Let, $B=A\setminus C$. Since $d(C)=0$, we conclude that $d(A\setminus B)=0$. Consider any $l\in{\mathbb N}\setminus \{1\}$. Let, $n_l= \ max\ \{n:\ n\in A_m\setminus C$ and $m\leq l\}$. Now, for all $n\in B$ with $n>n_l$, we have $q_n>l$. Since $l$ was taken arbitrarily, it follows that $B$ is $q$-divergent. \end{proof} \section{Main results} \begin{theorem}\label{mainth} Let, $(a_n)$ be an arithmetic sequence and $x\in{\mathbb T}$. Then $x$ is a topological $s$-torsion element (i.e. 
$x\in t_{(a_n)}^s ({\mathbb T})$) if and only if either $d(supp(x))=0$ or if $\ \overline{d}(supp(x))>0$, then for all $A\mbox{supp }bseteqseteq{\mathbb N}$ with $\overline{d}(A)>0$ the following holds: \begin{itemize} \item[(a)] If $A$ is $q$-bounded, then: \begin{itemize} \item[(a1)] If $A\mbox{supp }bseteqseteq^d supp(x)$, then $A+1\mbox{supp }bseteqseteq^d supp(x), \ A\mbox{supp }bseteqseteq^d supp_q(x)$ and there exists $A'\mbox{supp }bseteqseteq A$ with $d(A\setminus A')=0$ such that $\lim\limits_{n\in A'} \frac{c_{n+1}+1}{q_{n+1}}=1$ in ${\mathbb R}$. Moreover, if $A+1$ is $q$-bounded, then $A+1\mbox{supp }bseteqseteq^d supp_q(x)$. \item[(a2)] If $d(A\contap supp(x))=0$, then there exists $A'\mbox{supp }bseteqseteq A$ with $d(A\setminus A')=0$ such that $\lim\limits_{n\in A'} \frac{c_{n+1}}{q_{n+1}}=0$ in ${\mathbb R}$. Moreover, if $A+1$ is $q$-bounded, then $d((A+1)\contap supp(x))=0$ as well. \end{itemize} \item[(b)] If $A$ is $q$-divergent, then $\lim\limits_{n\in B} \frac{c_n}{q_n}=0$ in ${\mathbb T}$ for some $B\mbox{supp }bseteqseteq A$ with $d(A\setminus B)=0$. \end{itemize} \end{theorem} \begin{proof} {\bf Necessity}: Suppose $\overline{d}(supp(x))>0$ and $x\in t^s_{(a_n)}({\mathbb T})$. Therefore, there exists $M \mbox{supp }bseteqseteq {\mathbb N}$ with $d(M) = 1$ such that \begin{equation}\label{M} \lim\limits_{n\in M} \{a_{n-1}x\}=0 \mbox{ in ${\mathbb T}$.} \end{equation} Consider any $A\mbox{supp }bseteqseteq{\mathbb N}$ with $\overline{d}(A)>0$. We take $B=M\contap A$. Then $B\mbox{supp }bseteqseteq A$ and $d(A\setminus B)=d(A\contap ({\mathbb N}\setminus M))=0$. As $B\mbox{supp }bseteqseteq M$, from equation (\ref{M}), we get $\lim\limits_{n\in B} \{a_{n-1}x\}=0$ in ${\mathbb T}$. Consequently, there exists $B\mbox{supp }bseteqseteq A$ with $d(A\setminus B)=0$ such that $x\in t_{(a_{B-1})}({\mathbb T})$. \noindent\textbf{(a)} Suppose first that $A$ is $q$-bounded. The following two cases can arise:\\ \textbf{(a1)} First, suppose $A\mbox{supp }bseteqseteq^d supp(x)$. Then $B\mbox{supp }bseteqseteq A$ is $q$-bounded and $B\mbox{supp }bseteqseteq^d supp(x)$. Since, $x\in t_{(a_{B-1})}({\mathbb T})$ and $B$ is $q$-bounded, from Lemma \ref{nec} we conclude that $B\mbox{supp }bseteqseteq^d supp_q(x)$ and $\lim\limits_{n\in A'} \{a_{n-1}x\}=1$ in ${\mathbb R}$, where $A'\mbox{supp }bseteqseteq B$ with $d(B\setminus A')=0$. \\ Therefore, from equation (\ref{eq6}) \begin{eqnarray*} 1 \ &=& \ \lim\limits_{n\in A'}(\frac{c_n}{q_n}+\frac{\{a_nx\}}{q_n})\ = \ \lim\limits_{n\in A'}(\frac{q_n-1+\{a_nx\}}{q_n}) \\ &=& \lim\limits_{n\in A'}(1-\frac{1-\{a_nx\}}{q_n}) {\mathbb R}ightarrow \ \lim\limits_{n\in A'} \frac{1-\{a_nx\}}{q_n}=0. \end{eqnarray*} Hence, we get \begin{equation}\label{B1} \lim\limits_{n\in A'} \{a_nx\}=1 \mbox{ (Since, $A'\mbox{supp }bseteqseteq B$ is $q$-bounded)}. \end{equation} Now from the definition of canonical representation (\ref{canrep}), $c_{n+1}\leq q_{n+1}-1$ for all $n\in{\mathbb N}$. Again from equation (\ref{eq6}), we have $$ \{a_nx\}=\frac{c_{n+1}}{q_{n+1}}+\frac{\{a_{n+1}x\}}{q_{n+1}}<\frac{c_{n+1}+1}{q_{n+1}}\leq 1. $$ Hence from equation (\ref{B1}), it follows that \begin{equation}\label{B2} 1 \ = \ \lim\limits_{n\in A'} \{a_nx\}\ \leq\ \lim\limits_{n\in A'}\frac{c_{n+1}+1}{q_{n+1}}\ \leq\ 1 \mbox{ \ ~ \ ~ \ ~ \ i.e. } \lim\limits_{n\in A'}\frac{c_{n+1}+1}{q_{n+1}}\ = \ 1 \end{equation} Now, $q_{n+1}\geq 2$ for all $n\in{\mathbb N}$. From equation (\ref{B2}), we can observe that $c_{n+1}+1 > 1$ ( i.e. $c_{n+1}\neq0$ ) for almost all $n\in A'$. 
Which implies $A'+1\mbox{supp }bseteqseteq^* supp(x)$. Since, $d(B\setminus A')=0$, we obtain $B+1\mbox{supp }bseteqseteq^d supp(x)$. As $B\mbox{supp }bseteqseteq A$ and $d(A\setminus B)=0$, we must have $A+1\mbox{supp }bseteqseteq^d supp(x)$, $A\mbox{supp }bseteqseteq^d supp_q(x)$ and $\lim\limits_{n\in A'} \frac{c_{n+1}+1}{q_{n+1}}\ = \ 1$ for some $A'\mbox{supp }bseteqseteq A$ where $d(A\setminus A')=0$. If $A+1$ is $q$-bounded, proceeding as in the first part of the proof, we get $A+1\mbox{supp }bseteqseteq^d supp_q(x)$.\\ \textbf{(a2)} Now let $d(A\contap supp(x))=0$. Since $B\mbox{supp }bseteqseteq A$, we must have $d(B\contap supp(x))=0$. Then from Lemma \ref{nec}, we can conclude that $\lim\limits_{n\in A'} \{a_{n-1}x\}=0$ in ${\mathbb R}$ for some $A'\mbox{supp }bseteqseteq B$ with $d(B\setminus A')=0$. Therefore putting $k=1$ in equation (\ref{eq0}) and equation (\ref{eq5}), we get $$ \lim\limits_{n\in A'} (\frac{c_n}{q_n} + \frac{c_{n+1}}{q_n q_{n+1}} + \frac{\{a_{n+1}x\}}{q_n q_{n+1}}) \ = \ \lim\limits_{n\in A'} \{a_{n-1}x\} \ = \ 0 $$ \begin{equation}\label{B3} {\mathbb R}ightarrow \ \lim\limits_{n\in A'} \frac{c_{n+1}}{q_n q_{n+1}} \ = \ \lim\limits_{n\in A'} \frac{\{a_{n+1}x\}}{q_n q_{n+1}}\ = \ 0 \mbox{ (Since $c_n, \{a_nx\} \geq 0$ and $q_n>0$ )}. \end{equation} Now as $A'\mbox{supp }bseteqseteq B$ is $q$-bounded, equation (\ref{B3}) implies that $\lim\limits_{n\in A'} \frac{c_{n+1}}{q_{n+1}}=0$ in ${\mathbb R}$, where $A'\mbox{supp }bseteqseteq B\mbox{supp }bseteqseteq A$ and $d(A\setminus A')=0$. Moreover, if $A+1$ is $q$-bounded, then vanishing of the last limit implies that $(A'+1)\contap supp(x)$ is finite. Thus $d((A+1)\contap supp(x))=0$ (Since, $d((A+1)\setminus (A'+1))=d(A\setminus A')=0$). \\ \noindent\textbf{(b)} Suppose $A$ is $q$-divergent i.e. $\lim\limits_{n\in A} q_n = \infty$. Then from equation (\ref{eq6}), we get $\lim\limits_{n\in B} (\frac{c_n}{q_n}+\frac{\{a_nx\}}{q_n}) = \lim\limits_{n\in B} \{a_{n-1}x\} = 0 \mbox{ in } {\mathbb T} ~\mbox{for some}~ B\mbox{supp }bseteqseteq A ~\mbox{ with}~ d(A\setminus B)=0$\\ $${\mathbb R}ightarrow \ \lim\limits_{n\in B} \frac{c_n}{q_n} = 0 \mbox{ in } {\mathbb T} \ \mbox{ (Since, $\{a_nx\}<1$ and $\lim\limits_{n\in B} q_n = \infty)$}. $$ \begin{claim}\label{remark1} Before proving the sufficiency of the conditions, we need to reformulate the necessary conditions in a stronger iterated version. For any $A\in [{\mathbb N}]^{\aleph_0}$ and $k\in{\mathbb N}\contup\{0\}$, we define $L_k(A)=\bigcup\limits_{i=0}^{k} (A+i)$. Now putting $k=k+1$ in equation (\ref{eq0}), we obtain \begin{equation}\label{sigma} \sigma_{n,k+1}=\sigma_{n,k}+\frac{c_{n+k+1}}{q_n q_{n+1} \ldots q_{n+k+1}}. \end{equation} Therefore, from equation (\ref{eq5}) and equation (\ref{sigma}), it follows that \begin{equation}\label{split} \{a_{n-1}x\}=\sigma_{n,k+1}+\frac{\{a_{n+k+1}x\}}{q_n q_{n+1} \ldots q_{n+k+1}} \ = \ \sigma_{n,k} + \frac{c_{n+k+1}}{q_n q_{n+1} \ldots q_{n+k+1}}+ \frac{\{a_{n+k+1}x\}}{q_n q_{n+1} \ldots q_{n+k+1}} \end{equation} \begin{equation}\label{convergence} {\mathbb R}ightarrow \ \sigma_{n,k}\leq \ \{a_{n-1}x\} \ < \ \sigma_{n,k} + \frac{c_{n+k+1}}{q_n q_{n+1} \ldots q_{n+k+1}} + \frac{1}{2^{(k+2)}}. \end{equation} \\ Let $x\in{\mathbb T}$ has canonical representation (\ref{canrep}) such that (a) and (b) of Theorem \ref{mainth} hold. Let $A\mbox{supp }bseteqseteq {\mathbb N}$ be $q$-bounded with $\overline{d}(A)>0$. 
If $L_k(A)$ is $q$-bounded for some $k\in{\mathbb N}\contup\{0\}$, then the following hold: \begin{itemize} \item[(i)] If $A\mbox{supp }bseteqseteq^d supp(x)$, then $L_k(A)\mbox{supp }bseteqseteq^d supp_q(x)$ and $\lim\limits_{n\in A'+k+1} \frac{c_n+1}{q_n}=1$ in ${\mathbb R}$ for some $A'\mbox{supp }bseteqseteq A$ with $d(A\setminus A')=0$. Therefore there exists $n_k\in{\mathbb N}$ such that for all $n\in A'$ with $n\geq n_k$, \begin{equation}\label{sigmarec} \sigma_{n,k}=1-\frac{1}{q_n q_{n+1} \ldots q_{n+k}} \geq 1-\frac{1}{2^{k+1}}. \end{equation} Moreover if $A+k+1$ is $q$-divergent, then \begin{equation}\label{divergerec} \lim\limits_{n\in A+k+1} \frac{c_n}{q_n} \ = \ \lim\limits_{n\in A} \frac{c_{n+k+1}}{q_{n+k+1}} \ = \ 1 \mbox{ in } {\mathbb R}. \end{equation} \item[(ii)] If $d(A\contap supp(x))=0$, then $d(L_k(A)\contap supp(x))=0$ and $\lim\limits_{n\in A'} \frac{c_{n+k+1}}{q_{n+k+1}}=0$ in ${\mathbb R}$ for some $A'\mbox{supp }bseteqseteq A$ and $d(A\setminus A')=0$. \end{itemize} \end{claim} \noindent{\bf Sufficiency}: If $d(supp(x))=0$, then from \contite[Theorem 4.3]{DPK} it readily follows that $x\in t^s_{(a_n)}({\mathbb T})$. So let $\overline{d}(supp(x))>0$ and $supp(x)$ satisfy conditions $(a)$ and $(b)$. To show that $x\in t^s_{(a_n)}({\mathbb T})$, in view of Lemma \ref{suf1} it is sufficient to check the convergence criterion: for all $A\mbox{supp }bseteqseteq{\mathbb N}$ with $\overline{d}(A)>0$, there exists $B'\mbox{supp }bseteqseteq A$ such that $\lim\limits_{n\in B'} a_{n-1}x=0$ in ${\mathbb T}$. Indeed without any loss of generality, we can assume that either $d(A\contap supp(x))=0$ or $A\mbox{supp }bseteqseteq^d supp(x)$.\\ \noindent{\bf Case (i)}: First let $A$ be $q$-bounded. {\bf Subcase (i$_a$)}: Let us first assume that $L_k(A)$ is $q$-bounded for all $k\in{\mathbb N}\contup\{0\}$. Let $\varepsilon>0$ be given. Choose $k\in{\mathbb N}$ such that $\frac{1}{2^{k+1}}<\varepsilon$. $\ast~~$ Let $A\mbox{supp }bseteqseteq^d supp(x)$. Then, from (i) of Claim \ref{remark1}, $L_k(A)\mbox{supp }bseteqseteq^d supp_q(x)$. Therefore, there exists $B'\mbox{supp }bseteqseteq A$ such that for all $n\in B'$, $$ \sigma_{n,k} =1-\frac{1}{q_n q_{n+1} \ldots q_{n+k}}\geq \ 1-\frac{1}{2^{k+1}}\ > \ 1-\varepsilon $$ $$ {\mathbb R}ightarrow \ 1-\varepsilon<\sigma_{n,k}\ \leq \ \{a_{n-1}x\}\ < 1 \ ~ \ \forall\ n\in B' \mbox{ (From equation (\ref {convergence})).} $$ $\ast~~$ Let $d(A\contap supp(x))=0$. Then, from (ii) of Claim \ref{remark1}, $d(L_k(A)\contap supp(x))=0$ and $\lim\limits_{n\in B} \frac{c_{n+k+1}}{q_{n+k+1}}=0$ in ${\mathbb R}$ for some $B\mbox{supp }bseteqseteq A$ with $d(A\setminus B)=0$. So, there exists $B'\mbox{supp }bseteqseteq B$ such that $\sigma_{n,k}=0$ and $\frac{c_{n+k+1}}{q_{n+k+1}}<\varepsilon$ for all $n\in B'$. Therefore, from equation (\ref{convergence}), we get $$ \{a_{n-1}x\} < \ \sigma_{n,k} + \frac{c_{n+k+1}}{q_n q_{n+1} \ldots q_{n+k+1}} + \frac{1}{2^{(k+2)}}\ < 2\varepsilon \ \forall \ n\in B'. $$ Thus in both cases, we have $\lim\limits_{n\in B'} \{a_{n-1}x\}=0$ in ${\mathbb T}$ for some $B'\mbox{supp }bseteqseteq A$, as required.\\ {\bf Subcase (i$_b$)}: We assume that there exists an integer $k\geq0$ such that $A+k+1$ is not $q$-bounded but $A+i$ is $q$-bounded for all $i=0,1,2,\ldots ,k$. If there exists an $A'\mbox{supp }bseteqseteq A$ such that $\overline{d}(A')>0$ and $A'+k+1$ is $q$-bounded, then without any loss of generality we can start with $A'$ in place of $A$. 
If this process does not terminate after finitely many steps then we can conclude that there exists $B\mbox{supp }bseteqseteq A$ with $\overline{d}(B)>0$ such that $L_k(B)$ is $q$-bounded for all $k\in{\mathbb N}$. Consequently, we can consider $B$ in place of $A$ and proceed as in Subcase (i$_a$). Now let us consider the case when there does not exist any $A'\mbox{supp }bseteqseteq A$ such that $\overline{d}(A')>0$ and $A'+k+1$ is $q$-bounded. Therefore from Lemma \ref{suf2}, there exists $B\mbox{supp }bseteqseteq A$ with $d(A\setminus B)=0$ such that $B+k+1$ is $q$-divergent i.e. $\lim\limits_{n\in B} q_{n+k+1} =\infty$. Clearly $L_k(B)$ is $q$-bounded. Further more \begin{equation}\label{sufeq0} \lim\limits_{n\in B} \frac{\{a_{n+k+1}x\}}{q_n q_{n+1} \ldots q_{n+k+1}} \ \leq \ \lim\limits_{n\in B} \frac{1}{q_{n+k+1}}=0. \end{equation} Therefore, from equation (\ref{split}) and equation (\ref{sufeq0}), we get \begin{eqnarray*}\label{sufeq1} \lim\limits_{n\in B} \{a_{n-1}x\} \ &=& \ \lim\limits_{n\in B} \sigma_{n,k} + \lim\limits_{n\in B} \frac{c_{n+k+1}}{q_n q_{n+1} \ldots q_{n+k+1}}+ \lim\limits_{n\in B} \frac{\{a_{n+k+1}x\}}{q_n q_{n+1}\ldots q_{n+k+1}} \\ &=& \lim\limits_{n\in B} \sigma_{n,k} + \lim\limits_{n\in B} \frac{c_{n+k+1}}{q_n q_{n+1} \ldots q_{n+k+1}}. \end{eqnarray*} $\ast~~$ Let $A\mbox{supp }bseteqseteq^d supp(x)$. Therefore $B\mbox{supp }bseteqseteq^d supp(x)$. Consequently from equation (\ref{sigmarec}) of Claim \ref{remark1} and equation (\ref{sufeq1}), we get \begin{eqnarray*} \lim\limits_{n\in B'} \{a_{n-1}x\} \ &=& \ \lim\limits_{n\in B'}(1-\frac{1}{q_n q_{n+1}\ldots q_{n+k}}+\frac{c_{n+k+1}}{q_n q_{n+1}\ldots q_{n+k+1}})\\ &=& \ \lim\limits_{n\in B'}(1+\frac{1}{q_n q_{n+1}\ldots q_{n+k}}\contdot (\frac{c_{n+k+1}}{q_{n+k+1}}-1)) \ = \ 1 \end{eqnarray*} for some $B'\mbox{supp }bseteqseteq B$ with $d(B\setminus B')=0$. $\ast~~$ Next let $d(A\contap supp(x))=0$. Then there exists $B\mbox{supp }bseteqseteq A$ such that $\sigma_{n,k}=0$ for all $n\in B$. Subsequently from (ii) of Claim \ref{remark1} and equation (\ref{sufeq1}), we have $$ \lim\limits_{n\in B'} \{a_{n-1}x\} \ = \ \lim\limits_{n\in B'} \frac{c_{n+k+1}}{q_n q_{n+1}\ldots q_{n+k+1}} \ \leq \ \lim\limits_{n\in B'} \frac{c_{n+k+1}}{q_{n+k+1}}\ =0 $$ for some $B'\mbox{supp }bseteqseteq B$ with $d(B\setminus B')=0$. Thus in both cases, we again obtain that $\lim\limits_{n\in B'} \{a_{n-1}x\}=0$ in ${\mathbb T}$ for some $B'\mbox{supp }bseteqseteq A$. \noindent{\bf Case (ii)}: We assume that $A$ is not $q$-bounded. If there exists $A'\mbox{supp }bseteqseteq A$ such that $\overline{d}(A')>0$ and $A'$ is $q$-bounded then we can proceed as in Case (i) and consider $A'$ in place of $A$. So, let us assume that there does not exist any $A'\mbox{supp }bseteqseteq A$ such that $\overline{d}(A')>0$ and $A'$ is $q$-bounded. Then from Lemma \ref{suf2}, there exists $B\mbox{supp }bseteqseteq A$ with $d(A\setminus B)=0$ such that $B$ is $q$-divergent i.e. $\lim\limits_{n\in B} q_n=\infty$. From hypothesis, we have $\lim\limits_{n\in B'} \frac{c_n}{q_n} = 0$ in ${\mathbb T}$ for some $B'\mbox{supp }bseteqseteq B$ with $d(B\setminus B')=0$. 
Therefore, from equation (\ref{eq6}), we obtain $$ \lim\limits_{n\in B'}\{a_{n-1}x\} \ = \ \lim\limits_{n\in B'}\Big(\frac{c_n}{q_n}+\frac{\{a_nx\}}{q_n}\Big) \ =0 \mbox{ in ${\mathbb T}$ (since $\lim\limits_{n\in B'} \frac{\{a_nx\}}{q_n} \leq \lim\limits_{n\in B'} \frac{1}{q_n} \ =0$).} $$ Hence in all cases we can conclude that for any $A\subseteq {\mathbb N}$ with $\overline{d}(A)>0$, there exists $B'\subseteq A$ such that $\lim\limits_{n\in B'} \{a_{n-1}x\}=0$ in ${\mathbb T}$. This shows that $x\in t^s_{(a_n)}({\mathbb T})$, i.e. $x$ is a topological $s$-torsion element of ${\mathbb T}$. \end{proof}
\begin{remark}\label{remark} Since $c_n=0$ for all $n\not\in supp(x)$, it is sufficient to consider only subsets of $supp(x)$ in item $(b)$ of Theorem \ref{mainth}. \end{remark}
Theorem 2.1 thus provides the complete solution of the open problem Problem 6.10 posed in \cite{DPK}. In the remaining part of the article we follow the line of investigation of \cite{DI1}, which shows that in certain circumstances one can obtain more simplified equivalent solutions of Problem 6.10. Before proceeding further, let us recall the following notion of ``splitting'' sequences, which was considered in \cite{DI1}.
\begin{definition}\cite[Definition 3.10]{DI1} A sequence $(q_n)$ of natural numbers has the splitting property if there exists a partition ${\mathbb N}= B\cup I$ such that the following statements hold: \begin{itemize} \item[(a)] $B$ and $I$ are either empty or infinite; \item[(b)] $I$ is $q$-divergent, in case $I$ is infinite; \item[(c)] $B$ is $q$-bounded, in case $B$ is infinite. \end{itemize} Here, $B$ and $I$ witness the splitting property for $(q_n)$; $B$ and $I$ are uniquely determined up to a finite set. \end{definition}
\begin{proposition}\label{oldsplit}\cite[Proposition 3.11]{DI1} A sequence $(q_n)$ has the splitting property if and only if there exists a natural number $M$ such that the set $\{n\in{\mathbb N}: \ q_n\in [M,m]\}$ is finite for every $m>M$. \end{proposition}
As a natural consequence, we can think of generalizing the idea of a splitting sequence using natural density.
\begin{definition}\label{dsplitdef} We say that a sequence $(q_n)$ of natural numbers has the $d$-splitting property if there exists a partition ${\mathbb N}=B\cup D$ such that the following statements hold: \begin{itemize} \item[(a)] $B$ and $D$ are either empty or satisfy $\overline{d}(B),\overline{d}(D)>0$. \item[(b)] If $\overline{d}(B)>0$, then there exists $B'\subseteq {\mathbb N}$ with $d(B\triangle B')=0$ such that $B'$ is $q$-bounded. \item[(c)] If $\overline{d}(D)>0$, then there exists $D'\subseteq {\mathbb N}$ with $d(D\triangle D')=0$ such that $D'$ is $q$-divergent. \end{itemize} Here, $B$ and $D$ witness the $d$-splitting property for $(q_n)$, and $B$ and $D$ are uniquely determined up to a set of density zero (i.e. if $B_1\cup D_1$ is another partition of ${\mathbb N}$ witnessing the $d$-splitting property for $(q_n)$, then $B_1=^d B$ and $D_1=^d D$). \end{definition}
\begin{proposition}\label{newsplit} A sequence $(q_n)$ has the $d$-splitting property if and only if there exists a natural number $M$ such that $d(\{n\in{\mathbb N}: \ q_n\in [M,m]\})=0$ for every $m>M$. \end{proposition}
\begin{proof} We assume that $(q_n)$ has the $d$-splitting property. Now two cases can arise: \begin{itemize} \item[*] At first, we consider $B=\emptyset$.
Then there exists a $D'\mbox{supp }bseteqseteq{\mathbb N}$ with $d(D')=1$ such that $D'$ is $q$-divergent. Take any $m\in {\mathbb N}$. Since $D'$ is $q$-divergent, there exists an $n_m\in{\mathbb N}$ such that $q_n>m$ for all $n>n_m$ and $n\in D'$. We set $M=1$. Then it is evident that for all $m>M$ \begin{eqnarray*}\label{dsplitseq} \overline{d}(\{n\in{\mathbb N}: q_n\in[M,m]\}) & \leq& \overline{d}(\{n\in D': q_n\in[M,m]\} + \overline{d}({\mathbb N}\setminus D')\\ & \leq& \overline{d}(\{n\in D': n\leq n_m\})=0. \end{eqnarray*} \item[*] Let $B\neq\emptyset$. Then we have $\overline{d}(B)>0$ and consequently there exists a $B'\mbox{supp }bseteqseteq {\mathbb N}$ with $d(B\triangle B')=0$ such that $B'$ is $q$-bounded. In this case, we set $M=1+ \max\limits_{n\in B'}\{q_n\}$. Therefore, for any $m>M$, we obtain \begin{eqnarray*} &&\overline{d}(\{n\in{\mathbb N}: q_n\in[M,m]\})\\ & \leq& \overline{d}(\{n\in B': q_n\in[M,m]\}) + \overline{d}(B\setminus B')\\ &+& \overline{d}(\{n\in D': q_n\in[M,m]\})+\overline{d}(D\setminus D')\\ &=& \overline{d}(\{n\in D': q_n\in[M,m]\})=0 \mbox{ (From equation (\ref{dsplitseq})).} \end{eqnarray*} \end{itemize} Conversely, let there exists a natural number $M$ such that $d(\{n\in{\mathbb N}:q_n\in[M,m]\})=0$ for all $m>M$. We set $B'=\{n\in{\mathbb N}:q_n\in[1,M-1]\}$ and $D'={\mathbb N}\setminus B'$. \begin{itemize} \item[*] If $\overline{d}(B')>0$ and $d(D')=0$, then we take $B={\mathbb N}$ and $D=\emptyset$. \item[*] If $d(B')=0$ and $\overline{d}(D')>0$, then we take $D={\mathbb N}$ and $B=\emptyset$. \item[*] If $\overline{d}(B')>0$ and $\overline{d}(D')=>0$, then we take $B=B'$ and $D=D'$. \end{itemize} Clearly, $B$ and $D$ witness the $d$-splitting property for the sequence $(q_n)$. \end{proof} From Proposition \ref{oldsplit} and Proposition \ref{newsplit}, it is obvious that every splitting sequence is a $d$-splitting sequence. However the converse is not necessarily true, nor it is true that every subset of ${\mathbb N}$ has the $d$-splitting property (an example not having splitting property was given in Example 3.12 \contite{DI1} but one must take into consideration that a non-splitting sequence can still be $d$-splitting). \begin{example} Let $A_1=\{n\in{\mathbb N}: n=k^2$ for some $k\in{\mathbb N}\}$, $A_2=\{n\in{\mathbb N}:n=k^2+1$ for some $k\in{\mathbb N}\}\setminus A_1$, $\ldots$ , $A_{i+1}=\{n\in{\mathbb N}:n=k^2+i$ for some $k\in{\mathbb N}\}\setminus \bigcup\limits_{j=1}^{i} A_j$. Take any $n\in{\mathbb N}$. One can find a $k\in{\mathbb N}$ such that $k^2\leq n< (k+1)^2$. So we can write $n=k^2+i$ for some $i\in{\mathbb N}\contup \{0\}$ i.e. $n\in A_i$. Therefore, ${\mathbb N}=\bigcup\limits_{i=1}^{\infty} A_i$ i.e. $(A_i)_{i\in{\mathbb N}}$ forms a partition of ${\mathbb N}$. For each $m\in{\mathbb N}$, we now define $q_n=m$ for all $n\in A_m$. Clearly, for $m,M\in{\mathbb N}$ and $m>M$, we have $\{n\in{\mathbb N}:q_n\in[M,m]\}=\bigcup\limits_{i=M}^{m} A_i$. Since $d(A_i)=0$ for all $i\in{\mathbb N}$, we get $d(\{n:q_n\in[M,m]\})=0$ for all $m,M\in{\mathbb N}$ and $m>M$. Therefore, from Proposition \ref{newsplit}, $(q_n)$ is a $d$-splitting sequence. But, we can observe that $\{n:q_n\in[M,m]\}$ cannot be finite for any $m,M\in{\mathbb N}$ and $m>M$ (since $A_i$ is infinite for all $i\in{\mathbb N}$). Therefore, from Proposition \ref{oldsplit}, $(q_n)$ is not a splitting sequence. \end{example} \begin{example} Let us define $q_n=\{i\in{\mathbb N}:n=2^{i-1}(2k-1)$ for some $k\in{\mathbb N}\}$. Let $A_i=\{n\in{\mathbb N}:q_n=i\}$. 
From the construction it is evident that $d(A_i)=\frac{1}{2^i}$ i.e. $d(A_i)>0$ for all $i\in{\mathbb N}$ and ${\mathbb N}=\bigcup\limits_{i=1}^{\infty} A_i$. Now for any $m,M\in{\mathbb N}$ with $m>M$, observe that $d(\{n\in{\mathbb N}:q_n\in[M,m]\})=d(\bigcup\limits_{i=M}^{m} A_i)>0$. Therefore, from Proposition \ref{newsplit}, $(q_n)$ is not a $d$-splitting sequence. \end{example} Motivated by these two examples, we present below equivalent conditions for a sequence to be splitting or $d$-splitting (or in other words, equivalent formulations of Proposition \ref{oldsplit} and Proposition \ref{newsplit}). \begin{proposition} Let $(q_n)$ be a sequence of natural numbers. For all $i\in{\mathbb N}$, we define $A_i=\{n:q_n=i\}$. Then \begin{itemize} \item[(i)] $(q_n)$ is a splitting sequence if and only if there does not exist a subsequence $(A_{n_k})_{k\in{\mathbb N}}$ of $(A_n)$ such that $A_{n_k}$ is infinite for all $k\in{\mathbb N}$: \item[(ii)] $(q_n)$ is a $d$-splitting sequence if and only if there does not exist a subsequence $(A_{n_k})_{k\in{\mathbb N}}$ of $(A_n)$ such that $\overline{d}(A_{n_k})>0$ for all $k\in{\mathbb N}$. \end{itemize} \end{proposition} For the next result we will use the following notations. Let $(a_n)$ be an arithmetic sequence and $x\in{\mathbb T}$ with canonical representation (\ref{canrep}). Assume that the sequence of ratios $(q_n)$ has the $d$-splitting property which means that there exists a partition ${\mathbb N}=B\contup D$ such that $(a), (b)$ and $(c)$ of Definition \ref{dsplitdef} hold. We will write $B^S(x)=B\contap supp(x)$, $B^N(x)=B\contap ({\mathbb N}\setminus supp(x))$ and $D^S(x)=D\contap supp(x)$. From Remark \ref{remark}, it follows that $D\contap ({\mathbb N}\setminus supp(x))$ does not play any role in Theorem \ref{mainth}. Note that if $B,D \neq \emptyset$, then there exists $B'\mbox{supp }bseteqseteq {\mathbb N}$ with $d(B\triangle B')=0$ and $D'\mbox{supp }bseteqseteq {\mathbb N}$ with $d(D\triangle D')=0$ such that $B'^S(x),B'^N(x)$ are $q$-bounded while $D'^S(x)$ is $q$-divergent. Our next result is a characterization of a topological $s$-torsion element, when the sequence of ratios $(q_n)$ has the $d$-splitting property. \begin{theorem}\label{splitcoro} Let $(a_n)$ be an arithmetic sequence and $x\in{\mathbb T}$ has canonical representation \ref{canrep}. If the sequence of ratios $(q_n)$ has the $d$-splitting property, then $x$ is a topological $s$-torsion element i.e. $x\in t^s_{(a_n)}({\mathbb T})$ if and only if the following conditions hold: \begin{itemize} \item[(i)] $B^S(x)+1\mbox{supp }bseteqseteq^d supp(x), \ B^S(x)\mbox{supp }bseteqseteq^d supp_q(x)$, and if $\overline{d}(B^S(x))>0$ then $\lim\limits_{n\in B_1^S(x)} \frac{c_{n+1}+1}{q_{n+1}} =1$ in ${\mathbb R}$, where $B_1\mbox{supp }bseteqseteq B$ with $d(B\setminus B_1)=0$. \item[(ii)] If $\overline{d}(B^N(x))>0$, then $\lim\limits_{n\in B_1^N(x)} \frac{c_{n+1}}{q_{n+1}}=0$ in ${\mathbb R}$, where $B_1\mbox{supp }bseteqseteq B$ with $d(B\setminus B_1)=0$. \item[(iii)] If $\overline{d}(D^S(x))>0$, then $\lim\limits_{n\in D_1^S(x)} \frac{c_n}{q_n}=0$ in ${\mathbb T}$, where $D_1\mbox{supp }bseteqseteq D$ with $d(D\setminus D_1)=0$. \end{itemize} \end{theorem} \begin{proof} {\bf Necessity:} Let $x\in t^s_{(a_n)}({\mathbb T})$. Observe that (a) and (b) of Theorem \ref{mainth} hold. \\ \begin{itemize} \item[(i)] If $d(B^S(x)=0$, then there is nothing to prove. So, we consider the case when $\overline{d}(B^Sx)>0$. 
Now, there exists a $B'\mbox{supp }bseteqseteq {\mathbb N}$ with $d(B\triangle B')=0$ such that $B'$ is $q$-bounded. Since $d(B\triangle B')=0$, we get $B'^S(x)\mbox{supp }bseteqseteq^d supp(x)$. Therefore, taking $A=B'^S(x)$ in Theorem \ref{mainth} and applying (a1), we get $B'^S(x)+1\mbox{supp }bseteqseteq^d supp(x), \ B'^S(x)\mbox{supp }bseteqseteq^d supp_q(x)$ and $\lim\limits_{n\in B'^S_1(x)} \frac{c_{n+1}+1}{q_{n+1}} =1$ in ${\mathbb R}$, where $B'_1\mbox{supp }bseteqseteq B'$ and $d(B'\setminus B'_1)=0$. \\ Again, since $d(B\triangle B')=0$, we finally get $B^S(x)+1\mbox{supp }bseteqseteq^d supp(x), \ B^S(x)\mbox{supp }bseteqseteq^d supp_q(x)$ and $\lim\limits_{n\in B_1^S(x)} \frac{c_{n+1}+1}{q_{n+1}} =1$ in ${\mathbb R}$, where $B_1=(B\contap B'_1)\mbox{supp }bseteqseteq B$ with $d(B\setminus B_1)=0$. \\ \item[(ii)] Let $\overline{d}(B^N(x))>0$. Since $d(B^N(x)\contap supp(x))=0$, applying (a2) of Theorem \ref{mainth} to $A=B'^N(x)$, we get $\lim\limits_{n\in B_1^N(x)} \frac{c_{n+1}}{q_{n+1}}=0$ in ${\mathbb R}$, where $B_1\mbox{supp }bseteqseteq B$ and $d(B\setminus B_1)=0$.\\ \item[(iii)] Let $\overline{d}(D^S(x))>0$. Since there exists $D'\mbox{supp }bseteqseteq {\mathbb N}$ with $d(D\triangle D')=0$ such that $D'$ is $q$-divergent, applying (b) of Theorem \ref{mainth} to $A=D'^S(x)$ (As, $d(D\triangle D')=0 \ {\mathbb R}ightarrow\ \overline{d}(D'^S(x))=\overline{d}(D^S(x))>0$), we get $\lim\limits_{n\in D_1^S(x)} \frac{c_n}{q_n}=0$ in ${\mathbb T}$, where $D_1\mbox{supp }bseteqseteq D$ and $d(D\setminus D_1)=0$. \end{itemize} {\bf Sufficiency:} Let the conditions hold. It suffices to show that the conditions of Theorem \ref{mainth} hold. If $d(supp(x))=0$ then there is nothing to prove. So let us assume that $\overline{d}(supp(x))>0$. Consider any $A\mbox{supp }bseteqseteq N$ with $\overline{d}(A)>0$. \\ \begin{itemize} \item[(a)] First suppose that $A$ is $q$-bounded. \\ \begin{itemize} \item[(a1)] Let $A\mbox{supp }bseteqseteq^d supp(x)$. Since $A$ is $q$-bounded, $A\mbox{supp }bseteqseteq^d B$. Therefore, $A\mbox{supp }bseteqseteq^d B^S(x)$ and we get $\overline{d}(B^S(x)>0$. By (i), we have $B^S(x)+1\mbox{supp }bseteqseteq^d supp(x), \ B^S(x)\mbox{supp }bseteqseteq^d supp_q(x)$ and $\lim\limits_{n\in B_1^S(x)} \frac{c_{n+1}+1}{q_{n+1}} =1$ in ${\mathbb R}$, where $B_1\mbox{supp }bseteqseteq B$ and $d(B\setminus B_1)=0$. Again, since $A\mbox{supp }bseteqseteq^d B^S(x)$, we get $A+1\mbox{supp }bseteqseteq^d supp(x), \ A\mbox{supp }bseteqseteq^d supp_q(x)$ and $\lim\limits_{n\in A'} \frac{c_{n+1}+1}{q_{n+1}}=1$ in ${\mathbb R}$, where $A'=A\contap B_1 \mbox{supp }bseteqseteq A$ and $d(A\setminus A')=0$. \item[(a2)] Now, let $d(A\contap supp(x))=0$. Since $A$ is $q$-bounded, $A\mbox{supp }bseteqseteq^d B$. Therefore, $A\mbox{supp }bseteqseteq^d B^N(x)$ and we get $\overline{d}(B^N(x))>0$. By (ii), we have $\lim\limits_{n\in B_1^N(x)} \frac{c_{n+1}}{q_{n+1}}=0$ in ${\mathbb R}$, where $B_1\mbox{supp }bseteqseteq B$ and $d(B\setminus B_1)=0$. Now, taking $A'=A\contap B_1$, we get $\lim\limits_{n\in A'} \frac{c_{n+1}}{q_{n+1}}=0$ in ${\mathbb R}$, where $A'\mbox{supp }bseteqseteq A$ and $d(A\setminus A')=0$ ( Since, $d(A\setminus A')=d(A\setminus B_1)=d(A\setminus B)=0$). \\ \end{itemize} \item[(b)] Let us now assume that $A$ is $q$-divergent. Then we have $A\mbox{supp }bseteqseteq^d D$. From Remark \ref{remark}, without any loss of generality we can assume that $A\mbox{supp }bseteqseteq supp(x)$. Therefore, $A\mbox{supp }bseteqseteq^d D^S(x)$ and we get $\overline{d}(D^S(x))>0$. 
By (iii), we have $\lim\limits_{n\in D_1^S(x)} \frac{c_n}{q_n}=0$ in ${\mathbb T}$, where $D_1\mbox{supp }bseteqseteq D$ and $d(D\setminus D_1)=0$. Now, taking $A'=A\contap D_1$, we get $\lim\limits_{n\in A'} \frac{c_n}{q_n}=0$ in ${\mathbb T}$, where $A'\mbox{supp }bseteqseteq A$ and $d(A\setminus A')=0$ ( Since, $d(A\setminus A')=d(A\setminus D_1)=d(A\setminus D)=0$). Therefore, from theorem \ref{mainth}, we can conclude that $x\in t^s_{(a_n)}({\mathbb T})$. \end{itemize} \end{proof} In particular, one can obtain simpler characterizations of topological $s$-torsion elements when $supp(x)$ is either $q$-bounded or $q$-divergent for the given arithmetic sequence. \begin{corollary}\label{supboucoro} If $supp(x)$ is $q$-bounded, then $x\in t^s_{(a_n)}({\mathbb T})$ if and only if the following statements hold: \begin{itemize} \item[(i)] $d((supp(x)+1)\setminus supp(x))=0$, and \item[(ii)] $ \ d(supp(x)\setminus supp_q(x))=0$. \end{itemize} \end{corollary} \begin{proof} Let $x\in t^s_{(a_n)}({\mathbb T})$. If $d(supp(x))=0$ then there is nothing to prove. So, assume that $\overline{d}(supp(x))>0$. Since $supp(x)$ is $q$-bounded and $\overline{d}(supp(x))>0$, we set $A=supp(x)$. Therefore from item (a1) of Theorem \ref{mainth}, we get $supp(x)+1\mbox{supp }bseteqseteq^d supp(x)$ and $supp(x)\mbox{supp }bseteqseteq^d supp_q(x)$. Thus, we have (i) $d((supp(x)+1)\setminus supp(x))=0$, and (ii) $ \ d(supp(x)\setminus supp_q(x))=0$. In order to prove the sufficiency of the conditions, if possible, suppose that there is a $x\in t^s_{(a_n)}({\mathbb T})$ for which (i) does not hold i.e $\overline{d}((supp(x)+1)\setminus supp(x))>0$. Since, $\overline{d}(supp(x)+1)=\overline{d}(supp(x))$, we must have $\overline{d}(supp(x))>0$. Now, taking $A=supp(x)$ and applying item (a1) of Theorem \ref{mainth}, we get $A+1\mbox{supp }bseteqseteq^d supp(x)$ i.e. $d(supp(x)+1\setminus supp(x))=0$ \ ~ \ $-$ which is a contradiction. Therefore (i) holds true. Now, let us consider that (ii) does not hold i.e. $\overline{d}(supp(x)\setminus supp_q(x))>0$ but $x\in t^s_{(a_n)}({\mathbb T})$. Set $A=supp(x)\setminus supp_q(x)$ and $q=1+\max\limits_{n\in supp(x)} \{q_n\}$. Consequently, from equation (\ref{eq6}), we obtain $$ \frac{1}{q}< \lim\limits_{n\in A} \frac{c_n}{q_n}\leq \lim\limits_{n\in A} \{a_{n-1}x\} <\lim\limits_{n\in A} \frac{c_n+1}{q_n}\leq \lim\limits_{n\in A} \frac{q_n-1}{q_n}=1- \lim\limits_{n\in A}\frac{1}{q_n}<1-\frac{1}{q} $$ $$ {\mathbb R}ightarrow \ \lim\limits_{n\in A} \{a_{n-1}x\} \neq 0 \mbox{ in ${\mathbb T}$ for some $\overline{d}(A)>0$} $$ $-$ Which is a contradiction. Therefore (ii) holds true. \end{proof} \begin{corollary}\label{supdivcoro} If $supp(x)$ is $q$-divergent, then $x\in t^s_{(a_n)}({\mathbb T})$ if and only if the following statements hold: \begin{itemize} \item[(i)] $\lim\limits_{n\in D'} \frac{c_n}{q_n}=0$ in ${\mathbb T}$ for some $D'\mbox{supp }bseteqseteq supp(x)$ with $d(supp(x)\setminus D')=0$; and \item[(ii)] For every $D\mbox{supp }bseteqseteq^d supp(x)$ such that $D-1$ is $q$-bounded, $\lim\limits_{n\in D'} \frac{c_n}{q_n}=0$ in ${\mathbb R}$, where $D'\mbox{supp }bseteqseteq D$ and $d(D\setminus D')=0$. \end{itemize} \end{corollary} \begin{proof} First, let $x\in t^s_{(a_n)}({\mathbb T})$. If $d(supp(x))=0$, then there is nothing to prove. So, let us assume that $\overline{d}(supp(x))>0$. Now, taking $A=supp(x)$ and applying item (b) of Theorem \ref{mainth}, we can conclude that (i) holds true. 
Next let us suppose that $A=D-1$ is $q$-bounded for some $D\mbox{supp }bseteqseteq supp(x)$. If $d(A)=d(D)=0$, then there is nothing to prove. Therefore, we can assume $\overline{d}(A)>0$. Since, $supp(x)$ is $q$-divergent, we have $d(A\contap supp(x))=0$. Now, applying item (a2) of Theorem \ref{mainth}, we have $\lim\limits_{n\in A'}\frac{c_{n+1}}{q_{n+1}}=0$ in ${\mathbb R}$ for some $A'\mbox{supp }bseteqseteq A$ with $d(A\setminus A')=0$. Putting $D'=A'+1$, we get $\lim\limits_{n\in D'} \frac{c_n}{q_n}=0$ in ${\mathbb R}$, where $D'\mbox{supp }bseteqseteq D$ and $d(D\setminus D')=0$. Conversely, let us assume that the conditions hold. To prove that $x\in t^s_{(a_n)}({\mathbb T})$, we need to show (a) and (b) of Theorem \ref{mainth} hold. Since (b) follows from (i), it is sufficient to show only (a). If $d(supp(x))=0$, then $x\in t^s_{(a_n)}({\mathbb T})$. So, assume that $\overline{d}(supp(x))>0$. Now, take any $A\mbox{supp }bseteqseteq N$ with $\overline{d}(A)>0$. If $A$ is $q$-bounded, then $d(supp(x)\contap A)=0$. Therefore, we need to prove only (a2). If $d((A+1)\contap supp(x))=0$, then taking $A'+1=(A+1)\setminus supp(x)$, we get $\lim\limits_{n\in A'} \frac{c_{n+1}}{q_{n+1}}=0$ in ${\mathbb R}$, where $A'\mbox{supp }bseteqseteq A$ with $d(A\setminus A')=0$. Now considering the situation when $\overline{d}((A+1)\contap supp(x))>0$, taking $D=(A+1)\contap supp(x)$ and applying (ii), we get $\lim\limits_{n\in D'}\frac{c_n}{q_n}=0$ in ${\mathbb R}$ for some $D'\mbox{supp }bseteqseteq D$ with $d(D\setminus D')=0$. Thus, putting $A'=D'-1$ in a similar manner, we obtain that $\lim\limits_{n\in A'} \frac{c_{n+1}}{q_{n+1}}=0$ in ${\mathbb R}$ for some $A'\mbox{supp }bseteqseteq A$ with $d(A\setminus A')=0$. Therefore, (a2) holds and we finally have $x\in t^s_{(a_n)}({\mathbb T})$. \end{proof} Following observations follow from our main results, giving certain particular cases of an element of ${\mathbb T}$ being or not being a topological $s$-torsion element.\\ $\bullet$ If $supp(x)$ is $q$-divergent and $\lim\limits_{n\in A} \frac{c_n}{q_n} =0$ in ${\mathbb R}$ for some $A\mbox{supp }bseteqseteq supp(x)$ with $d(supp(x)\setminus A)=0$, then $x$ is a topological $s$-torsion element of ${\mathbb T}$. $\bullet$ Suppose $x\in{\mathbb T}$ has canonical representation \ref{canrep} with $q$-divergent support. If $d(supp(x)\setminus \{n\in supp(x): (c_n)$ is bounded$\})=0$, then $x$ is a topological $s$-torsion element of ${\mathbb T}$. $\bullet$ Suppose $A$ is $q$-divergent and $d(A)=1$. Then $x$ is a topological $s$-torsion element of ${\mathbb T}$ if and only if $\lim\limits_{n\in D'} \frac{c_n}{q_n}=0$ in ${\mathbb T}$ for some $D'\mbox{supp }bseteqseteq supp(x)$ with $d(supp(x)\setminus D')=0$. $\bullet$ Let $(a_n)$ be an arithmetic sequence and $x\in{\mathbb T}$ be such that \begin{itemize} \item[(i)] $supp(x)=\bigcup\limits_{n=1}^{\infty}[p_n,r_n]$, $p_n,r_n\in{\mathbb N}$, $p_n\leq r_n < p_{n+1}$ for all $n\in {\mathbb N}$; \item[(ii)] there exist $l\in{\mathbb N}$ such that for all $n\in{\mathbb N}$, $|r_n-p_n|\leq l$ and $|p_{n+1}-r_n|\leq l$; \item[(iii)] $supp(x)$ is $q$-bounded. \end{itemize} Then $x$ is not a topological $s$-torsion element of ${\mathbb T}$. $\bullet$ Let $(a_n)$ be an arithmetic sequence and $x\in{\mathbb T}$ be such that \begin{itemize} \item[(i)] $\overline{d}(supp(x))>0$ and $supp(x)$ is $q$-divergent; \item[(ii)] for all $n\in supp(x)$, $\frac{c_n}{q_n}\in [r_1,r_2]$, where $0<r_1,r_2<1$. 
\end{itemize} Then $x$ is not a topological $s$-torsion element of ${\mathbb T}$.\\ \noindent\textbf{Acknowledgement:} The second author is also thankful to the CSIR for granting Junior Research Fellowship during the tenure of which this work was done.\\ \end{document}
\begin{document}
\renewcommand{\arraystretch}{1.5}
\newcommand{\fib}{\text{fib\,}}
\newcommand{\FDP}{\text{FDP\,}}
\title{M\"obius functions of higher rank and Dirichlet series}
\author{Masato Kobayashi}
\date{\today}
\thanks{[email protected]}
\subjclass[2010]{Primary:11A25;\,Secondary:11M32}
\keywords{arithmetic function, cyclotomic polynomial, Dirichlet series, M\"obius function, Riemann zeta function.}
\address{Masato Kobayashi\\ Department of Engineering\\ Kanagawa University, 3-27-1 Rokkaku-bashi, Yokohama 221-8686, Japan.}
\maketitle
\begin{abstract} We introduce M\"obius functions of higher rank, a new class of arithmetic functions, so that the classical M\"obius function is of rank 2. With this idea, we evaluate Dirichlet series involving the reciprocal squares of all $r$-free numbers. For the proofs, the Riemann zeta function and cyclotomic polynomials play a key role. \end{abstract}
\tableofcontents
\newpage
\newcommand{\Set}[2]{\ensuremath{\left\{{#1}\,\middle|\,{#2}\right\}}}
\newcommand{\set}[1]{\ensuremath{\left\{{#1}\right\}}}
\section{Introduction}
\subsection{Classical M\"obius and zeta functions}
The \emph{M\"obius function} plays an important role in number theory. Its definition is simple: $\mu(1)=1$ and
\[ \mu(n)=\begin{cases} (-1)^{k}&\text{$n=p_{1}\cdots p_{k}$, primes $p_{j}$ all distinct,}\\ 0&\text{$p^{2}\,|\,n$ for some prime $p$.}\\ \end{cases} \]
The \emph{Riemann zeta function} is another important object in number theory. It is an analytic function of a complex variable $s$ (with a pole at 1):
\[ \zeta(s)=\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^{s}}, \quad \text{Re}{(s)}>1. \]
It has an infinite product (known as the \emph{Euler product}) expression:
\[ \zeta(s)=\displaystyle\prod_{p:\text{prime}}(1-p^{-s})^{-1}, \quad \text{Re}{(s)}>1. \]
See Titchmarsh \cite{tit} for more details. Table \ref{zeven} shows the zeta values at positive even integers up to 20.
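As a quick numerical illustration (a minimal sketch of ours, not part of the original exposition; plain Python is assumed), one may compare a truncation of the series with a truncation of the Euler product at $s=2$; both approach $\zeta(2)=\pi^{2}/6\approx 1.6449$.
\begin{verbatim}
# Sketch only: truncated series vs. truncated Euler product for zeta(2).
from math import pi

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

N = 100_000
series = sum(1.0 / n ** 2 for n in range(1, N + 1))
euler = 1.0
for p in primes_up_to(N):
    euler *= 1.0 / (1.0 - p ** -2)

print(series, euler, pi ** 2 / 6)   # all three are close to 1.644934...
\end{verbatim}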
\newcommand{\dep}[1]{{\vrule width 0pt height 0pt depth #1}}
{\renewcommand{\arraystretch}{2.3}
\begin{table}
\caption{zeta values at even positive integers} \label{zeven}
\begin{center}
\begin{tabular}{c|c||c|c}
$2n$&$\zeta(2n)$&$2n$&$\zeta(2n)$\\\hline
2&$\frac{\pi^{2}}{6}$&12&\dep{14pt}$\frac{691\pi^{12}}{638512875}$\\\hline
4&$\frac{\pi^{4}}{90}$&14&\dep{14pt}$\frac{2\pi^{14}}{18243225}$\\\hline
6&$\frac{\pi^{6}}{945}$&16&\dep{14pt}$\frac{3617\pi^{16}}{325641566250}$\\\hline
8&$\frac{\pi^{8}}{9450}$&18&\dep{14pt}$\frac{43867\pi^{18}}{38979295480125}$\\\hline
10&$\frac{\pi^{10}}{93555}$&20&\dep{14pt}$\frac{174611\pi^{20}}{1531329465290625}$
\end{tabular}
\end{center}
\end{table}
}
There is a deep relation between the M\"obius and zeta functions; we can ``invert'' $\zeta(s)$:
\[ \frac{1}{\zeta(s)}=\displaystyle\sum_{n=1}^{\infty}\frac{\mu(n)}{n^{s}}, \quad \textnormal{Re}{(s)}>1. \]
For instance, when $s=2$, we obtain the inverse of Euler's evaluation $\zeta(2)=\pi^{2}/6$ as
\[ \left(1+\frac{1}{2^{2}}+\frac{1}{3^{2}}+\frac{1}{4^{2}}+\frac{1}{5^{2}}+\cdots\right)^{-1} = \frac{1}{\zeta(2)}= \displaystyle\sum_{n=1}^{\infty}\frac{\mu(n)}{n^{2}} =1-\frac{1}{2^{2}}-\frac{1}{3^{2}}-\frac{1}{5^{2}}+\cdots. \]
\subsection{Main results}
In this article, we introduce M\"obius functions of higher rank, a new class of arithmetic functions, so that the classical M\"obius function is of rank 2.
\begin{center} \begin{minipage}[c][100pt]{300pt} \xymatrix@=9mm{ *+[F]{\text{\,\strut classical M\"{o}bius function $\mu=\mu_{2}$}\,} \ar@{->}_-*\txt{}[d]\\ *+[F]{\text{\,\strut M\"{o}bius functions of higher rank $\mu_{r}$ ($r=1, 2, \dots, \infty$)\, }}\\ } \end{minipage} \end{center}
For a positive integer $r$, we say that $n$ is \emph{$r$-free} if there is no prime $p$ such that $p^{r}$ divides $n$; thus, 2-free means square-free and 3-free means cube-free, as usually said. We will see that $\mu_{r}$ is similar to $\mu=\mu_{2}$: $\mu_{r}(n)\neq 0$ if and only if $n$ is $r$-free (Section \ref{sec3}). The main result of this article is the evaluation of several Dirichlet series
\[ \sum_{ n:r\textnormal{-free}} \frac{\mu_{r}(n)}{n^{s}} \]
with $r\in \{3, 4, 5\}$ and $s\in\{2, 3\}$.
\begin{thm}[$s=2$] The following equalities hold: \begin{quote} \begin{enumerate} \item $\displaystyle\sum_{ \substack {n: \textnormal{3-free} } } \frac{\mu_{3}(n)}{n^{2}}=\frac{45045}{691\pi^{4}}.$ \item $\displaystyle\sum_{ \substack {n: \textnormal{4-free} } } \frac{\mu_{4}(n)}{n^{2}}=\frac{630}{\pi^{6}}.
$ \itemtem $ \partialisplaystyle\sum_{ \subseteqstack {n: \textxtnormal{5-free} } } \fracacf{\itemnftyu_{5}(n)}{n^{2}}=\fracacf{1091215125}{174611\pi^{8}} $. \betam{e}nd{enumerate} \betam{e}nd{quote} \betam{e}nd{thm} \betaegin{enumerate}gin{thm}[$s=3$] \[ \k{\partialisplaystyle\sum_{ \subseteqstack {n: \textxtnormal{3-free} } } \fracacf{1}{n^{3}} } \k{\partialisplaystyle\sum_{ \subseteqstack {n: \textxtnormal{3-free} } } \fracacf{\itemnftyu_{3}(n)}{n^{3}}} =\fracacf{41247931725}{43867\pi^{12}}. \] \betam{e}nd{thm} Here, $\itemnftyu_{3}, \itemnftyu_{4}, \itemnftyu_{5}$ are the M\"obius functions of rank 3, 4, 5 respectively. For the proofs, it is key to understand interactions of the following three concepts: \betaegin{enumerate}gin{quote} \betaegin{enumerate}gin{itemize} \itemtem Euler product for Riemann zeta function \itemtem M\"obius functions of higher rank \itemtem Cyclotomic polynomials \betam{e}nd{itemize} \betam{e}nd{quote} We will give these details later. Additional results: it is possible to generalize the ``M\"obius inversion formula" to higher rank: for $n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}$, the prime factorization of $n$, let \[ m_{*}(n)=|\{j\,|\,m_{j}\betam{e}quiv 3, \textnormal{4 mod 6}\}|. \] \betaegin{enumerate}gin{thm}[M\"obius inversion of rank 3] \[ \k{\partialisplaystyle\sum_{n=1}^{\itemnftyug}\fracacf{\itemnftyu_{3}(n)}{n^{s}}} ^{-1}= \sum_{ \subseteqstack {n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}\\ m_{j}\nuot\betam{e}quiv 2, \textnormal{5 mod }6 } } \fracacf{(-1)^{m_*(n)}}{n^{s}}. \] \betam{e}nd{thm} As a by-product, we get a new expression of $\pii$: \betaegin{enumerate}gin{align*} \pi&= \k{\fracacf{45045}{691} \k{1+\fracacf{1}{2^{2}}+\fracacf{1}{3^{2}}+\fracacf{1}{5^{2}}+\fracacf{1}{6^{2}} +\fracacf{1}{7^{2}}-\fracacf{1}{8^{2}}+\fracacf{1}{10^{2}}+\cdots}}^{1/4}. \betam{e}nd{align*} \subseteqsection{Notation} \betaegin{enumerate}gin{itemize} \itemtem Let $\itemnftyathbf{N}$ denote the set of positive integers. In addition, $\itemnftyathbf{N}^{2}$ means the set of square numbers $\{1^{2}, 2^{2}, 3^{2}, \partialots\}$. \itemtem Often, writing \[ n= p_{1}^{m_{1}}\cdots p_{k}^{m_{k}} \] means the factorization of $n$ into \betam{e}h{distinct} prime numbers ($p_{i}\nue p_{j}$ for $i\nue j$) with each $m_{j}$ positive unless otherwise specified. \itemtem $d\,|\,n$ means $d$ divides $n$. \itemtem $\partiali\prod_{p}$ indicates an infinite product over all primes $p$. \betam{e}nd{itemize} \section{Preliminaries} Let us begin with recalling some fundamental definitions and facts on arithmetic functions; you can find this topic in a standard textbook on number theory as Apostol \circte{apostol}. We thus omit most of the proofs here. \subseteqsection{Arithmetic functions} An \betam{e}h{arithmetic function} is a map \[ f:\itemnftyathbf{N}\to\itemnftyathbb{C}.\] \betaegin{enumerate}gin{ex}\hlinef \betaegin{enumerate}gin{itemize} \itemtem {\betaf M\"obius function:} \[ \itemnftyu(n)=\betaegin{enumerate}gin{cases} 1&n=1,\\ (-1)^{k}&\text{$n=p_{1}\cdots p_{k}$, primes $p_{j}$ all distinct,}\\ 0&\text{$p^{2}\,|\,n$ for some prime $p$.}\\ \betam{e}nd{cases} \] \itemtem{\betaf Omega function:} For $n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}$ with $p_{j}$ primes, $$\Omegaega(n)=m_{1}+\cdots+m_{k}.$$ \itemtem {\betaf Liouville function:} $\lamanglem(n)=(-1)^{\Omegaega(n)}$. \itemtem {\betaf Characteristic function:} For a subset $A\subseteq \itemnftyathbf{N}$, \[\chi_{A}(n)=\betaegin{enumerate}gin{cases} 1&n\itemn A,\\ 0&n\nuot\itemn A. 
\betam{e}nd{cases} \] In particular, $|\itemnftyu(n)|$ is a characteristic function of the set of 2-free numbers. \itemtem {\betaf Constant function:} $1(n)=1$ for all $n$. \itemtem {\betaf unit function:} $u(n)=\betaegin{enumerate}gin{cases} 1&n=1,\\ 0&n\nue1. \betam{e}nd{cases} $ \betam{e}nd{itemize} \betam{e}nd{ex} We say that an arithmetic function $f$ is \betam{e}h{multiplicative} if $f(1)=1$ and \[ f(mn)=f(m)f(n) \textxt{ \quad whenever \,\, $\gcd(m, n)=1$.} \] It is easy to check that $\itemnftyu, \lamanglem, 1, u$ are all multiplicative. {\renewcommand{\ensuremath{\bm{a}}rraystretch}{1.5} \betaegin{enumerate}gin{table} \caption{Arithmetic functions} \betaegin{enumerate}gin{center} \betaegin{enumerate}gin{tabular}{c|ccccccccccccccccccc} $n$&1&2&3&4&5&6&7&8&9&10&$\cdots$\\\hlineline $\itemnftyu(n)$&1&$-1$&$-1$&0&$-1$&1&$-1$&0&0&1&$\cdots$\\ $\Omegaegaga(n)$&0&1&1&2&1&2&1&3&2&2&$\cdots$\\ $\lamanglem(n)$&1&$-1$&$-1$&1&$-1$&1&$-1$&$-1$&1&1&$\cdots$\\ $1(n)$&1&1&1&1&1&1&1&1&1&1&$\cdots$\\ $u(n)$&1&0&0&0&0&0&0&0&0&0&$\cdots$\\ \betam{e}nd{tabular} \betam{e}nd{center} \betam{e}nd{table} } \subseteqsection{Dirichlet series} For two arithmetic functions $f$ and $g$, define the \betam{e}h{Dirichlet product} $f*g$ by \[ (f*g)(n)=\partialisplaystyle\sum_{d\,|\,n}f(d)g\k{\fracacf{n}{d}}. \] The unit function $u$ satisfies \[ f*u=u*f=f \] for all arithmetic functions $f$. If $f*g=g*f=u$, then we write $g=f^{-1}$ and call it the \betam{e}h{Dirichlet inverse} of $f$; assuming $f(1)\nue0$, there exists $f^{-1}$. \betaegin{enumerate}gin{fact} Let $f$ and $g$ be arithmetic functions. Suppose they are multiplicative. Then, so are $f*g$ and $f^{-1}$. \betam{e}nd{fact} In this way, multiplicative functions form a group and $u$ is indeed a group-theoretic unit. \betaegin{enumerate}gin{rmk} If multiplicative functions $f, g$ satisfy \betaegin{enumerate}gin{quote} $f(p^{m})=g(p^{m})$ \,\, for all primes $p$ and $m\ge 1$, \betam{e}nd{quote} then $f(n)=g(n)$ for all $n\itemn\itemnftyathbf{N}$. Hence, to determine a multiplicative function, it is enough to know values only at prime powers. \betam{e}nd{rmk} A \betam{e}h{Dirichlet series} for $f$ is a series in the form \[ \partialisplaystyle\sum_{n=1}^{\itemnftyug}\fracacf{f(n)}{n^{s}} \] for a complex number $s$ (in this article, we deal with only $s=2, 3$ and convergent series). Riemann zeta function is an example of such series with $f(n)=1(n)=1$. Observe that \[ \k{\sum_{n=1}^{\itemnftyug}\fracacf{f(n)}{n^{s}}} \k{\sum_{n=1}^{\itemnftyug}\fracacf{g(n)}{n^{s}}} =\sum_{n=1}^{\itemnftyug}\fracacf{(f*g)(n)}{n^{s}} \] for all $f, g$. Then classical results \[ (\itemnftyu*1)(n)= \partialisplaystyle\sum_{d\,|\,n}\itemnftyu(d)= \betaegin{enumerate}gin{cases} 1&n=1,\\ 0&n\nue1 \betam{e}nd{cases} \] and \[ (\lamanglem*1)(n)=\chi_{\itemnftyathbf{N}^{2}}(n)= \betaegin{enumerate}gin{cases} 1&n=N^{2} \textxt{ for some } N,\\ 0&\textxt{otherwise} \betam{e}nd{cases} \] imply the following: \betaegin{enumerate}gin{fact}\hlinefill \betaegin{enumerate}gin{quote} \betaegin{enumerate}gin{enumerate} \itemtem $ \k{\partialisplaystyle\sum_{n=1}^{\itemnftyug}\fracacf{\itemnftyu(n)}{n^{s}}} \k{\partialisplaystyle\sum_{n=1}^{\itemnftyug}\fracacf{1}{n^{s}}} =1. $\partialep{20pt} \itemtem $\k{\partialisplaystyle\sum_{n=1}^{\itemnftyug}\fracacf{\lamanglem(n)}{n^{s}}} \k{\partialisplaystyle\partialisplaystyle\sum_{n=1}^{\itemnftyug}\fracacf{1}{n^{s}}} =\partialisplaystyle\sum_{n:\textxtnormal{\,square}}\fracacf{1}{n^{s}}= \sum_{N=1}^{\itemnftyug}\fracacf{1}{(N^{2})^{s}}=\zeta(2s). 
$ \betam{e}nd{enumerate} \betam{e}nd{quote} \betam{e}nd{fact} As a consequence, when $s=2$, we have \[ \partialisplaystyle\sum_{n=1}^{\itemnftyug}\fracacf{\itemnftyu(n)}{n^{2}}= \fracacf{1}{\zeta(2)}=\fracacf{6}{\pi^{2}} \quad \textxt{and} \quad \partialisplaystyle\sum_{n=1}^{\itemnftyug}\fracacf{\lamanglemn}{n^{2}}= \fracacf{\zeta(4)}{\zetaeta(2)}=\fracacf{\pi^{2}}{15}. \] Once we introduce the M\"obius functions of higher rank $\itemnftyu_{r}$ in the next section, we can regard these as extremal cases at $r=2$ and $r=\itemnftyug$ (as shown in Table \ref{mres}): \[ \partialisplaystyle\sum_{n:\textnormal{2-free}}\fracacf{\itemnftyu_{2}(n)}{n^{2}}=\fracacf{6}{\pi^{2}} \quad \textnormal{and} \quad \partialisplaystyle\sum_{n:\itemnftyug\textnormal{-free}}\fracacf{\itemnftyu_{\itemnftyug}(n)}{n^{2}}= \fracacf{\pi^{2}}{15}. \] {\renewcommand{\ensuremath{\bm{a}}rraystretch}{2.5} \betaegin{enumerate}gin{table} \caption{Main results $(s=2)$} \lamanglebel{mres} \betaegin{enumerate}gin{center} \betaegin{enumerate}gin{tabular}{c|ccccccc}\hline series& value&zeta expression&factor of Euler product\\\hline \partialep{20pt}$\partialisplaystyle\sum_{ \subseteqstack {n:\textxtnormal{2-free} } }\fracacf{\itemnftyu_{2}(n)}{n^{2}}$ & $\fracacf{6}{\pi^{2}}$&$\fracacf{1}{\zeta(2)}$&$1-p^{-2}$\\\hline \partialep{20pt}$\partialisplaystyle\sum_{ \subseteqstack {n:\textxtnormal{3-free} } }\fracacf{\itemnftyu_{3}(n)}{n^{2}}$ & $\fracacf{45045}{691\pi^{4}}$&$\fracacf{\zeta(4)\zeta(6)}{\zeta(2)\zeta(12)}$&$1-p^{-2}+p^{-4}$\\\hline \partialep{20pt}$\partialisplaystyle\sum_{ \subseteqstack {n:\textxtnormal{4-free} } }\fracacf{\itemnftyu_{4}(n)}{n^{2}}$ & $\fracacf{630}{\pi^{6}}$&$\fracacf{\zeta(4)}{\zeta(2)\zeta(8)}$&$1-p^{-2}+p^{-4}-p^{-6}$\\\hline \partialep{20pt}$\partialisplaystyle\sum_{ \subseteqstack {n:\textxtnormal{5-free} } }\fracacf{\itemnftyu_{5}(n)}{n^{2}}$ & $\fracacf{1091215125}{174611\pi^{8}}$&$\fracacf{\zeta(4)\zeta(10)}{\zeta(2)\zeta(20)}$&$1-p^{-2}+p^{-4}-p^{-6}+p^{-8}$\\\hline $\cdots$&&$\cdots$& \\\hline \partialep{20pt} $\partialisplaystyle\sum_{ \subseteqstack {n:\textxtnormal{$\itemnftyug$-free} } }\fracacf{\itemnftyu_{\itemnftyug}(n)}{n^{2}}$ & $\fracacf{\pi^{2}}{15}$&$\fracacf{\zeta(4)}{\zeta(2)}$&$1-p^{-2}+p^{-4}-p^{-6}+p^{-8}-\cdots$\\\hline \betam{e}nd{tabular} \betam{e}nd{center} \betam{e}nd{table}} \section{M\"obius functions of higher rank} \lamanglebel{sec3} For each natural number $r$ or $``r=\itemnftyug"$, define an arithmetic function \[ \itemnftyu_{r}:\itemnftyathbf{N}\to\{-1, 0, 1\}\] by \[ \itemnftyu_{r}(n)= \betaegin{enumerate}gin{cases} 1&n=1,\\ (-1)^{m_{1}+\cdots+m_{k}}& n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}, \textxt{all } m_{j}< r,\\ 0&\textxt{$p^{r}\,|\,n$ for some prime $p$}.\\ \betam{e}nd{cases} \] For $r=\itemnftyug$, we understand that $m_{j}< \itemnftyug$ always holds and $p^{\itemnftyug}\,|\,n$ never happens. \betaegin{enumerate}gin{ex} $r=1$: This is just the unit function. \[ \itemnftyu_{1}(n)=u(n)=\betaegin{enumerate}gin{cases} 1&n=1,\\ 0&n\nue 1 \textxt{\,\,\,(that is, $p^{1}|n$ for some prime $p$)}. \betam{e}nd{cases} \] $r=2$: the classical M\"obius function. \[ \itemnftyu_{2}(n)=\itemnftyu(n)=\betaegin{enumerate}gin{cases} (-1)^{k}&\text{$n=p_{1}p_{2}\cdots p_{k}$},\\ 0&\textxt{$p^{2}\,|\,n$ for some prime $p$}. \betam{e}nd{cases} \] $r=3$: \[ \itemnftyu_{3}(n)=\betaegin{enumerate}gin{cases} (-1)^{m_{1}+\cdots+m_{k}}&\text{$n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}$, $m_{j}<3$},\\ 0&\textxt{$p^{3}\,|\,n$ for some prime $p$}. 
\betam{e}nd{cases} \] $r=\itemnftyug$: the Liouville function. \[ \itemnftyu_{\itemnftyug}(n)=\lamanglem(n)= (-1)^{m_{1}+\cdots+m_{k}}\quad \text{$n= p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}$}. \] \betam{e}nd{ex} \betaegin{enumerate}gin{defn} All together, we call $\{\itemnftyu_{r}\}_{r=1}^{\itemnftyug}$ the \betam{e}h{M\"obius functions of higher rank}. \betam{e}nd{defn} It follows by definition \[ \itemnftyu_{r}(p^{m})=\betaegin{enumerate}gin{cases} (-1)^{m}&m<r,\\ 0&m\ge r \betam{e}nd{cases} \] for a prime $p$ and $m\ge1$. Observe that each $\itemnftyu_{r}$ is multiplicative; in particular, $|\itemnftyu_{r}|$ is a characteristic function of $r$-free numbers. We have already seen that \[ \itemnftyu*1=u \textxt{\quad and \quad} \lamanglem*1=\chi_{\itemnftyathbf{N}^{2}}. \] Now understand this as $\itemnftyu_{2}*1=u$ and $\itemnftyu_{\itemnftyug}*1=\chi_{\itemnftyathbf{N}^{2}}$. A natural question is: what is $\itemnftyu_{r}*1$ for $3 \lame r<\itemnftyug$? Since $\itemnftyu_{r}$ and $1$ are both multiplicative, so is $\itemnftyu_{r}*1$. Now let us see what $(\itemnftyu_{r}*1)(p^{m})$ is. \betaegin{enumerate}gin{prop} Let $r\ge 3$ and $m\ge 1$.\\ If $m<r$, then \[ (\itemnftyur*1)(p^{m})= \betaegin{enumerate}gin{cases} 1& \textxtnormal{$m$ even},\\ 0& \textxtnormal{$m$ odd}. \betam{e}nd{cases} \] \renewcommand{\textnormal}{\textxtnormal} If $m\ge r$, then \[ (\itemnftyur*1)(p^{m})=\betaegin{enumerate}gin{cases} 1&\textnormal{$r$ odd},\\ 0&\textnormal{$r$ even}. \betam{e}nd{cases} \]\betam{e}nd{prop} \betaegin{enumerate}gin{proof} Suppose $m<r$. Then \betaegin{enumerate}gin{align*} (\itemnftyur*1)(p^{m})&=\sum_{d\,|\,p^{m}}\itemnftyu_{r}(d) \\&=\itemnftyu_{r}(1)+\itemnftyur(p)+\itemnftyur(p^{2})+\cdots+\itemnftyur(p^{m}) \\&=1+(-1)+1+\cdots+(-1)^{m} \\&=\betaegin{enumerate}gin{cases} 1& \textnormal{$m$ even},\\ 0& \textnormal{$m$ odd}. \betam{e}nd{cases} \betam{e}nd{align*} If $m\ge r$, then \betaegin{enumerate}gin{align*} (\itemnftyur*1)(p^{m})&=\sum_{d\,|\,p^{m}}\itemnftyu_{r}(d) \\&=\itemnftyu_{r}(1)+\itemnftyur(p)+\itemnftyur(p^{2})+\cdots+\itemnftyur(p^{m}) \\&=\itemnftyu_{r}(1)+\itemnftyur(p)+\itemnftyur(p^{2})+\cdots+\itemnftyur(p^{r-1})+0+\cdots+0 \\&=1+(-1)+1+\cdots+(-1)^{r-1} \\&=\betaegin{enumerate}gin{cases} 1&\textnormal{$r$ odd},\\ 0&\textnormal{$r$ even}. \betam{e}nd{cases} \betam{e}nd{align*} \betam{e}nd{proof} Consequently, for $n= p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}$, the integer \[ (\itemnftyu_{r}*1)(n)=(\itemnftyur*1)(p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}) =(\itemnftyur*1)(p_{1}^{m_{1}})\cdots (\itemnftyur*1)(p_{k}^{m_{k}}) \] is 1 if and only if all of factors $(\itemnftyur*1)(p_{j}^{m_{j}})$ are 1. Otherwise, i.e., $(\itemnftyur*1)(p_{j}^{m_{j}})=0$ for some $j$, it is $0$. This naturally leads to an interpretation of $\itemnftyu_{r}*1$ as a characteristic function of some set as follows. For each $r\ge3$, define $M_{r}$, a subset of $\itemnftyathbf{N}$: \betaegin{enumerate}gin{itemize} \itemtem $r$ odd or $r=\itemnftyug$: square numbers. \[ M_{3}=M_{5}=\cdots=M_{\itemnftyug}= =\itemnftyathbf{N}^{2}\,(= \mathfrak{S}et{n\itemn\itemnftyathbf{N}}{n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}, m_{j} \textxt{ all even} }). \] \itemtem $r$ even: ranked square numbers. \[ M_{r}=\mathfrak{S}et{n\itemn\itemnftyathbf{N}}{n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}, m_{j} \textxt{ all even}, m_{j}<r }. \] \betam{e}nd{itemize} The sets $M_{r}$'s ($r$ even) are increasing: \[ M_{4}\subseteqset M_{6}\subseteqset M_{8}\subseteqset\cdots\subseteqset M_{\itemnftyug}=\itemnftyathbf{N}^{2}. 
\begin{ex}
\begin{align*}
M_{4}&=\{n\in\mathbf{N} : n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}},\ m_{j}\ \text{all even},\ m_{j}<4\}\\
&=\{n\in\mathbf{N} : n=1 \mbox{ or } n=p_{1}^{2}\cdots p_{k}^{2}\}\\
&=\{1, 4, 9, 25, 36, 49, 100, 121, 169, \dots\}.
\end{align*}
\end{ex}
\begin{prop}
Let $M_{1}=\mathbf{N}$ and $M_{2}=\{1\}$. Then, for each $r\in\mathbf{N}\cup\{\infty\}$, the Dirichlet product $\mu_{r}*1$ is the characteristic function of the set $M_{r}$:
\[
(\mu_{r}*1)(n)=
\begin{cases}
1 & n\in M_{r},\\
0 & n\notin M_{r}.
\end{cases}
\]
\end{prop}
{\renewcommand{\arraystretch}{1.25}
\begin{table}
\caption{$\mu_{3}$ and $\mu_{3}*1$}
\begin{center}
\begin{tabular}{c|ccccccccccc}
$n$ & 1&2&3&4&5&6&7&8&9&10&$\cdots$\\\hline
$\mu_{3}(n)$ & $1$&$-1$&$-1$&$1$&$-1$&$1$&$-1$&$0$&$1$&$1$&$\cdots$\\
$(\mu_{3}*1)(n)$ & $1$&$0$&$0$&$1$&$0$&$0$&$0$&$1$&$1$&$0$&$\cdots$\\
\end{tabular}
\end{center}
\end{table}}
\begin{prop}[M\"obius functions of higher rank and zeta]
For $r\ge 1$ and $\operatorname{Re}(s)>1$, we have
\[
\left(\sum_{n=1}^{\infty}\frac{\mu_{r}(n)}{n^{s}}\right)\zeta(s)
=\sum_{n\in M_{r}}\frac{1}{n^{s}}.
\]
\end{prop}
\begin{proof}
This statement is equivalent to $\mu_{r}*1=\chi_{M_{r}}$.
\end{proof}
For clarity, we sometimes prefer to write
\[
\sum_{n=1}^{\infty}\frac{\mu_{r}(n)}{n^{s}}
=\sum_{n:\,\text{$r$-free}}\frac{\mu_{r}(n)}{n^{s}}.
\]
In the next section, we will compute such sums for $s=2$.
\section{Main Theorems}
Before going into the main theorems, we briefly recall, for convenience, an important family of polynomials in number theory.
\subsection{Cyclotomic polynomials}
The \emph{cyclotomic polynomial} for $n$ is
\[
\Phi_{n}(x)=\prod_{\substack{1\le k\le n\\ \gcd(k,n)=1}}\left(x-e^{2\pi i k/n}\right).
\]
This is indeed a polynomial with integer coefficients.
\begin{ex}
\[
\Phi_{1}(x)=x-1,\quad \Phi_{2}(x)=x+1,\quad\text{and}\quad \Phi_{3}(x)=x^{2}+x+1.
\]
\end{ex}
An important relation to the M\"obius function is:
\begin{fact}
\[
\Phi_{n}(x)=\prod_{d\mid n}\left(x^{d}-1\right)^{\mu\left(\frac{n}{d}\right)}.
\]
\end{fact}
The exponents are $0, \pm 1$, so that $\Phi_{n}(x)$ (and $\Phi_{n}(x)^{-1}$ as well) is a product of factors $(x^{d}-1)^{\pm 1}$. Note that a factor $x^{d}-1$ looks like ``$1-p^{-s}$'' in the Euler product of $\zeta(s)$; this idea will play a key role in the proofs below.
\subsection{Theorem ($s=2$)}
We are now ready to compute the three series in the middle of Table \ref{mres}.
\begin{thm}\label{mth1}
\begin{quote}
\begin{enumerate}
\item $\displaystyle\sum_{n:\,\text{3-free}}\frac{\mu_{3}(n)}{n^{2}}=\frac{45045}{691\pi^{4}}.$
\item $\displaystyle\sum_{n:\,\text{4-free}}\frac{\mu_{4}(n)}{n^{2}}=\frac{630}{\pi^{6}}.$
\item $\displaystyle\sum_{n:\,\text{5-free}}\frac{\mu_{5}(n)}{n^{2}}=\frac{1091215125}{174611\pi^{8}}.$
\end{enumerate}
\end{quote}
\end{thm}
\begin{proof}[Proof of \textnormal{(1)}]
Note that
\begin{align*}
\Phi_{12}(x)&=(x-1)^{\mu(12)}\left(x^{2}-1\right)^{\mu(6)}\left(x^{3}-1\right)^{\mu(4)}\left(x^{4}-1\right)^{\mu(3)}\left(x^{6}-1\right)^{\mu(2)}\left(x^{12}-1\right)^{\mu(1)}\\
&=\frac{\left(1-x^{2}\right)\left(1-x^{12}\right)}{\left(1-x^{4}\right)\left(1-x^{6}\right)}
=1-x^{2}+x^{4}.
\end{align*}
Then we have
\begin{align*}
\sum_{n:\,\text{3-free}}\frac{\mu_{3}(n)}{n^{2}}&=
1+\sum_{0<m_{j}<3}\frac{(-1)^{m_{1}+\cdots+m_{k}}}{\left(p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}\right)^{2}}
=\prod_{p}\left(1-p^{-2}+p^{-4}\right)\\
&=\prod_{p}\frac{\left(1-p^{-2}\right)\left(1-p^{-12}\right)}{\left(1-p^{-4}\right)\left(1-p^{-6}\right)}
=\frac{\zeta(4)\zeta(6)}{\zeta(2)\zeta(12)}\\
&=\frac{\pi^{4}}{90}\cdot\frac{\pi^{6}}{945}\cdot\frac{6}{\pi^{2}}\cdot\frac{638512875}{691\pi^{12}}
=\frac{45045}{691\pi^{4}}.
\end{align*}
\end{proof}
\begin{proof}[Proof of \textnormal{(2)}]
Since
\[
1-x^{2}+x^{4}-x^{6}=\frac{\left(1-x^{2}\right)\left(1-x^{8}\right)}{1-x^{4}},
\]
we have
\begin{align*}
\sum_{n:\,\text{4-free}}\frac{\mu_{4}(n)}{n^{2}}&=
1+\sum_{0<m_{j}<4}\frac{(-1)^{m_{1}+\cdots+m_{k}}}{\left(p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}\right)^{2}}
=\prod_{p}\left(1-p^{-2}+p^{-4}-p^{-6}\right)\\
&=\prod_{p}\frac{\left(1-p^{-2}\right)\left(1-p^{-8}\right)}{1-p^{-4}}
=\frac{\zeta(4)}{\zeta(2)\zeta(8)}
=\frac{\pi^{4}}{90}\cdot\frac{6}{\pi^{2}}\cdot\frac{9450}{\pi^{8}}
=\frac{630}{\pi^{6}}.
\end{align*}
\end{proof}
\begin{proof}[Proof of \textnormal{(3)}]
The idea is quite similar. From the cyclotomic polynomial
\[
\Phi_{20}(x)=\frac{\left(1-x^{2}\right)\left(1-x^{20}\right)}{\left(1-x^{4}\right)\left(1-x^{10}\right)}
=1-x^{2}+x^{4}-x^{6}+x^{8},
\]
we obtain
\begin{align*}
\sum_{n:\,\text{5-free}}\frac{\mu_{5}(n)}{n^{2}}&=
1+\sum_{0<m_{j}<5}\frac{(-1)^{m_{1}+\cdots+m_{k}}}{\left(p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}\right)^{2}}
=\prod_{p}\left(1-p^{-2}+p^{-4}-p^{-6}+p^{-8}\right)\\
&=\prod_{p}\frac{\left(1-p^{-2}\right)\left(1-p^{-20}\right)}{\left(1-p^{-4}\right)\left(1-p^{-10}\right)}
=\frac{\zeta(4)\zeta(10)}{\zeta(2)\zeta(20)}\\
&=\frac{\pi^{4}}{90}\cdot\frac{\pi^{10}}{93555}\cdot\frac{6}{\pi^{2}}\cdot\frac{1531329465290625}{174611\pi^{20}}
=\frac{1091215125}{174611\pi^{8}}.
\end{align*}
\end{proof}
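As a quick numerical sanity check of Theorem \ref{mth1} (ours, purely illustrative; the truncation level is arbitrary), one may compare partial sums of the three series with the closed forms; they agree to roughly four decimal places.
\begin{verbatim}
# Sketch (ours): compare truncated series with the closed forms of Theorem 1.
from math import pi
from sympy import factorint

def mu_r(n, r):
    exponents = factorint(n).values()
    if any(m >= r for m in exponents):
        return 0
    return (-1) ** sum(exponents)

closed = {3: 45045 / (691 * pi**4),
          4: 630 / pi**6,
          5: 1091215125 / (174611 * pi**8)}
N = 50000
for r, target in closed.items():
    partial = sum(mu_r(n, r) / n**2 for n in range(1, N + 1))
    print(f"r={r}:  truncated {partial:.5f}   closed form {target:.5f}")
\end{verbatim}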
\subsection{M\"obius inversion of rank 3}
Recall that $\mu_{2}^{-1}=1$. Thus, the equality
\[
\left(\sum_{n=1}^{\infty}\frac{\mu(n)}{n^{s}}\right)^{-1}
=\sum_{n=1}^{\infty}\frac{1}{n^{s}}
\]
can be regarded as ``M\"obius inversion of rank 2''. Here we consider the case of rank 3. For the prime factorization $n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}$ of $n$ into distinct primes, let
\[
m_{*}(n)=\left|\{j : m_{j}\equiv 3, 4 \pmod 6\}\right|.
\]
\begin{thm}[M\"obius inversion of rank 3]
\[
\left(\sum_{n=1}^{\infty}\frac{\mu_{3}(n)}{n^{s}}\right)^{-1}
=\sum_{\substack{n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}\\ m_{j}\not\equiv 2,5 \pmod 6}}
\frac{(-1)^{m_{*}(n)}}{n^{s}}.
\]
\end{thm}
\begin{proof}
We know that
\[
\sum_{n=1}^{\infty}\frac{\mu_{3}(n)}{n^{s}}
=\sum_{n:\,\text{3-free}}\frac{\mu_{3}(n)}{n^{s}}
=\prod_{p}\left(1-\left(p^{-1}\right)^{s}+\left(p^{-2}\right)^{s}\right).
\]
Now the idea is to find the inverse (as a formal power series) of $1-x+x^{2}$:
\begin{align*}
\left(1-x+x^{2}\right)^{-1}&=\Phi_{6}(x)^{-1}
=\frac{\left(1-x^{2}\right)\left(1-x^{3}\right)}{(1-x)\left(1-x^{6}\right)}
=\frac{1+x-x^{3}-x^{4}}{1-x^{6}}
=\sum_{i=0}^{\infty}x^{6i}\left(1+x-x^{3}-x^{4}\right).
\end{align*}
That is,
\[
\sum_{n=1}^{\infty}\frac{\mu_{3}^{-1}(n)}{n^{s}}
=\prod_{p}\left(\sum_{i=0}^{\infty}p^{-6si}\left(1+p^{-s}-p^{-3s}-p^{-4s}\right)\right),
\]
and reading off the coefficient of $p^{-ms}$ at each prime gives
\[
\sum_{n=1}^{\infty}\frac{\mu_{3}^{-1}(n)}{n^{s}}
=\sum_{\substack{n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}\\ m_{j}\not\equiv 2,5 \pmod 6}}
\frac{(-1)^{m_{*}(n)}}{n^{s}}.
\]
{\renewcommand{\arraystretch}{1.5}
\begin{table}
\caption{$\mu_{3}(n)$ and $\mu_{3}^{-1}(n)$}
\begin{center}
\begin{tabular}{c|ccccccccccc}
$n$ & 1&2&3&4&5&6&7&8&9&10&$\cdots$\\\hline
$\mu_{3}(n)$ & $1$&$-1$&$-1$&$1$&$-1$&$1$&$-1$&$0$&$1$&$1$&$\cdots$\\
$\mu_{3}^{-1}(n)$ & $1$&$1$&$1$&$0$&$1$&$1$&$1$&$-1$&$0$&$1$&$\cdots$\\
\end{tabular}
\end{center}
\end{table}}
\end{proof}
\begin{cor}
\[
\mu_{3}^{-1}\left(p^{m}\right)=
\begin{cases}
1 & m\equiv 0,1 \pmod 6,\\
0 & m\equiv 2,5 \pmod 6,\\
-1 & m\equiv 3,4 \pmod 6.
\end{cases}
\]
\end{cor}
\begin{cor}
\[
\sum_{n=1}^{\infty}\frac{\mu_{3}^{-1}(n)}{n^{2}}=\frac{691\pi^{4}}{45045}.
\]
\end{cor}
\begin{proof}
This is the reciprocal of the sum
\[
\sum_{n=1}^{\infty}\frac{\mu_{3}(n)}{n^{2}}
=\sum_{n:\,\text{3-free}}\frac{\mu_{3}(n)}{n^{2}}
=\frac{45045}{691\pi^{4}}
\]
proved in Theorem \ref{mth1}.
\end{proof}
Put differently, this gives a new kind of infinite series expression for $\pi$ in terms of $\mu_{3}^{-1}$:
\[
\pi=\left(\frac{45045}{691}\left(1+\frac{1}{2^{2}}+\frac{1}{3^{2}}+\frac{1}{5^{2}}+\frac{1}{6^{2}}
+\frac{1}{7^{2}}-\frac{1}{8^{2}}+\frac{1}{10^{2}}+\cdots\right)\right)^{1/4}.
\]
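The corollaries invite a small numerical experiment. The following sketch (ours, purely illustrative) builds $\mu_{3}^{-1}$ from the congruence conditions of the first corollary and compares the truncated series with $691\pi^{4}/45045$.
\begin{verbatim}
# Sketch (ours): mu_3^{-1} is multiplicative, with mu_3^{-1}(p^m) given by m mod 6
# (see the corollary above); compare the truncated series with 691*pi^4/45045.
from math import pi
from sympy import factorint

TABLE = {0: 1, 1: 1, 2: 0, 3: -1, 4: -1, 5: 0}

def mu3_inv(n):
    value = 1
    for m in factorint(n).values():
        value *= TABLE[m % 6]
    return value

N = 50000
partial = sum(mu3_inv(n) / n**2 for n in range(1, N + 1))
print(f"truncated series: {partial:.5f}")
print(f"691*pi^4/45045 : {691 * pi**4 / 45045:.5f}")
\end{verbatim}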
\subsection{Theorem ($s=3$)}
We evaluated several Dirichlet series at $s=2$, so that many zeta values at even integers $\{\zeta(2n)\}$ appeared. On the one hand, the exact values of $\zeta(2n)$ are known in terms of \emph{Bernoulli numbers}; on the other hand, much less is known about $\{\zeta(2n+1)\}$.
\begin{itemize}
\item Ap\'{e}ry \cite{apery} proved in 1979 that $\zeta(3)$ is irrational.
\item More recently, Zudilin \cite{zudilin} proved that at least one of $\zeta(5), \zeta(7), \zeta(9), \zeta(11)$ is irrational.
\item The exact value of $\zeta(2n+1)$ is not known for any $n\ge 1$.
\end{itemize}
However, our method is helpful for understanding certain \emph{relations} among particular series involving $\{\zeta(2n+1)\}$. Let us see an example with $s=3$, involving $\zeta(3)$ and $\zeta(9)$.
\begin{lem}\label{lemma}
\begin{quote}
\begin{enumerate}
\item $\displaystyle\sum_{n:\,\text{3-free}}\frac{1}{n^{3}}=\frac{\zeta(3)}{\zeta(9)}$.
\item $\displaystyle\sum_{n:\,\text{3-free}}\frac{\mu_{3}(n)}{n^{3}}=\frac{\zeta(6)\zeta(9)}{\zeta(3)\zeta(18)}$.
\end{enumerate}
\end{quote}
\end{lem}
\begin{proof}
Take the cyclotomic polynomials
\begin{align*}
\Phi_{9}(x)&=\frac{1-x^{9}}{1-x^{3}}=1+x^{3}+x^{6} \mbox{ and }\\
\Phi_{18}(x)&=\frac{\left(1-x^{3}\right)\left(1-x^{18}\right)}{\left(1-x^{6}\right)\left(1-x^{9}\right)}
=1-x^{3}+x^{6}.
\end{align*}
Then
\[
\sum_{n:\,\text{3-free}}\frac{1}{n^{3}}
=\prod_{p}\left(1+\left(p^{-1}\right)^{3}+\left(p^{-2}\right)^{3}\right)
=\prod_{p}\frac{1-p^{-9}}{1-p^{-3}}=\frac{\zeta(3)}{\zeta(9)}
\quad\text{and}
\]
\[
\sum_{n:\,\text{3-free}}\frac{\mu_{3}(n)}{n^{3}}
=\prod_{p}\left(1-\left(p^{-1}\right)^{3}+\left(p^{-2}\right)^{3}\right)
=\prod_{p}\frac{\left(1-p^{-3}\right)\left(1-p^{-18}\right)}{\left(1-p^{-6}\right)\left(1-p^{-9}\right)}
=\frac{\zeta(6)\zeta(9)}{\zeta(3)\zeta(18)}.
\]
\end{proof}
\begin{thm}
$\displaystyle
\left(\sum_{n:\,\text{3-free}}\frac{1}{n^{3}}\right)
\left(\sum_{n:\,\text{3-free}}\frac{\mu_{3}(n)}{n^{3}}\right)
=\frac{41247931725}{43867\pi^{12}}.$
\end{thm}
\begin{proof}
Thanks to Lemma \ref{lemma}, the left-hand side is
\[
\frac{\zeta(6)}{\zeta(18)}
=\frac{\pi^{6}}{945}\cdot\frac{38979295480125}{43867\pi^{18}}
=\frac{41247931725}{43867\pi^{12}}.
\]
\end{proof}
\section{Final remarks}
\subsection{Lambert series}
Here we record some of our results in a slightly different form. For a sequence $a_{n}$ of integers, its \emph{Lambert series} is the formal power series
\[
\sum_{n=1}^{\infty}\frac{a_{n}x^{n}}{1-x^{n}}.
\]
Assume that $a_{n}=f(n)$ for some arithmetic function $f$. The coefficient of $x^{N}$ in $\displaystyle\sum_{n=1}^{\infty}\frac{f(n)x^{n}}{1-x^{n}}$ is exactly $\displaystyle\sum_{d\mid N}f(d)$, that is, $(f*1)(N)$.
\begin{cor}
For each $r\in\mathbf{N}\cup\{\infty\}$, we have
\[
\sum_{n=1}^{\infty}\frac{\mu_{r}(n)x^{n}}{1-x^{n}}=\sum_{n\in M_{r}}x^{n}.
\]
In particular, for $r=\infty$,
\[
\sum_{n=1}^{\infty}\frac{\lambda(n)x^{n}}{1-x^{n}}=\sum_{n=1}^{\infty}x^{n^{2}}
=x+x^{4}+x^{9}+x^{16}+\cdots,
\]
while for odd finite $r$ the right-hand side runs over the set $M_{r}$ described in Section \ref{sec3}; for instance, for $r=3$ it runs over the powerful numbers.
\end{cor}
\begin{cor}
For $r$ even, we have
\[
\sum_{n=1}^{\infty}\frac{\mu_{r}(n)x^{n}}{1-x^{n}}
=\sum_{\substack{n=p_{1}^{m_{1}}\cdots p_{k}^{m_{k}}\\ m_{j}<r,\ m_{j}\ \text{even}}}x^{n}.
\]
\end{cor}
For example, for $r=4$,
\[
\frac{\mu_{4}(1)x}{1-x}+\frac{\mu_{4}(2)x^{2}}{1-x^{2}}+\frac{\mu_{4}(3)x^{3}}{1-x^{3}}+\frac{\mu_{4}(4)x^{4}}{1-x^{4}}+\frac{\mu_{4}(5)x^{5}}{1-x^{5}}+\cdots
= x+x^{4}+x^{9}+x^{25}+x^{36}+x^{49}+x^{100}+x^{121}+x^{169}+\cdots.
\]
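The following sketch (ours, illustrative only) expands the Lambert series of $\mu_{r}$ as a power series up to $x^{50}$ and lists the exponents that occur, matching the sets $M_{r}$ described above.
\begin{verbatim}
# Sketch (ours): expand sum_n mu_r(n) x^n/(1-x^n) up to x^50 and list the exponents
# with coefficient 1; for r = 3 these are the powerful numbers, for r = 4 the set M_4.
from sympy import factorint

def mu_r(n, r):
    exponents = factorint(n).values()
    if any(m >= r for m in exponents):
        return 0
    return (-1) ** sum(exponents)

def lambert_coefficients(r, N=50):
    coeff = [0] * (N + 1)
    for n in range(1, N + 1):
        c = mu_r(n, r)
        for multiple in range(n, N + 1, n):      # x^n/(1 - x^n) = x^n + x^{2n} + ...
            coeff[multiple] += c
    return coeff

for r in (3, 4):
    c = lambert_coefficients(r)
    print(f"r = {r}:", [n for n in range(1, len(c)) if c[n] == 1])
\end{verbatim}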
\subsection{Future research}
We leave several ideas here for our future research.
\begin{enumerate}
\item We expect that there are many more results on the Dirichlet series
\[
\sum_{n:\,\text{$r$-free}}\frac{f(n)}{n^{s}}, \quad r, s\ge 2, \quad f\in\{\mu_{r},\ \mu_{r}*1,\ \mu_{r}^{-1},\ 1\}.
\]
\item Suppose a multiplicative arithmetic function $f$ satisfies
\[
f\left(p^{m}\right)=f\left(q^{m}\right) \quad\text{for all primes $p, q$.}
\]
The \emph{Bell series} of such an $f$ is the formal power series
\[
B_{f}(x)=\sum_{m=0}^{\infty}f\left(p^{m}\right)x^{m};
\]
for instance, $B_{\mu}(x)=1-x$. Say $f$ is \emph{cyclotomic} if $B_{f}(x)=\Phi_{n}(x)$ for some $n$, and \emph{inverse cyclotomic} if $B_{f}(x)=\Phi_{n}(x)^{-1}$ for some $n$. Study the series $\displaystyle\sum_{n}\frac{f(n)}{n^{s}}$ for functions of this class.
\item Describe the details of $\mu_{r}^{-1}$ for $r\ge 4$.
\end{enumerate}
\begin{thebibliography}{99}
\bibitem{apery} R. Ap\'{e}ry, Irrationalit\'{e} de $\zeta(2)$ et $\zeta(3)$, Ast\'{e}risque {\bf 61} (1979), 11--13.
\bibitem{apostol} T. Apostol, Introduction to Analytic Number Theory, Springer-Verlag, 1976.
\bibitem{tit} E. C. Titchmarsh, The Theory of the Riemann Zeta-Function, Oxford University Press, 1987.
\bibitem{zudilin} V. V. Zudilin, One of the numbers $\zeta(5)$, $\zeta(7)$, $\zeta(9)$, $\zeta(11)$ is irrational, Russian Mathematical Surveys {\bf 56} (2001), 774--776.
\end{thebibliography}
\newpage
\end{document}
\begin{document} \title{A classification of harmonic Maass forms} \alphauthor{Kathrin Bringmann} \alphaddress{Mathematical Institute\\University of Cologne\\ Weyertal 86-90 \\ 50931 Cologne \\Gammaermany} \varepsilonmail{[email protected]} \alphauthor{Stephen Kudla} \alphaddress{Department of Mathematics\\University of Toronto\\40 St. George St. BA6290\\Toronto, Ontario, M5S 2E4\\mathbb{C}anada} \varepsilonmail{[email protected]} \begin{abstract} We give a classification of the Harish-Chandra modules generated by the pullback to ${\tilde{e}xt {\rm SL}}_2(\mathbb{R})$ of harmonic Maass forms for congruence subgroups of ${\tilde{e}xt {\rm SL}}_2(\mathbb{Z})$ with exponential growth allowed at the cusps. We assume that the weight is integral but include vector-valued forms. Due to the weak growth condition, these modules do not need to be irreducible. Elementary Lie algebra considerations imply that there are 9 possibilities, and we show, by giving explicit examples, that all of them arise from harmonic Maass forms. Finally, we briefly discuss the case of forms that are not harmonic but rather are annihilated by a power of the Laplacian, where much more complicated Harish-Chandra modules can arise. We hope that our classification will prove useful in understanding harmonic Maass forms from a representation theoretic perspective and that it will illustrate in the simplest case the phenomenon of extensions occurring in the space of automorphic forms. \varepsilonnd{abstract} \maketitle \centerline{\bf } { \section{Introduction and statement of results} The standard definition of a weight $k$ modular form $f$, say for ${\tilde{e}xt {\rm SL}}_2(\mathbb{Z})$ or a subgroup $\Gammaamma$ of finite index in ${\tilde{e}xt {\rm SL}}_2(\mathbb{Z})$, requires that $f$ is holomorphic at the cusps. If this condition is relaxed to allow poles at the cusps, the resulting {\it weakly holomorphic} modular forms can have negative weight and the associated $\Gammaamma$-invariant function $\tilde{f}$ on $\Gammaamma\backslash G$, $G:={\tilde{e}xt {\rm SL}}_2(\mathbb{R})$, does not need to be square-integrable. By further relaxing the holomorphicity condition by allowing functions which are annihilated by the Laplace operator, one arrives at harmonic Maass forms -- see Section~\ref{section2} for the precise definition. Such functions have proved to be of considerable interest and importance, as they arise, for example, in the study of mock modular forms, in the constructions of Borcherds forms, in Bruinier's construction of Green functions for divisors on orthogonal Shimura varieties, as incoherent Eisenstein series. In a number of cases, vector-valued forms are involved, and so we include such forms in our discussion. Suppose that $f$ is a harmonic Maass form of integral weight $k$ and let $\tilde{f}$ be the corresponding function on $G$, cf. (\ref{lifttoG}) below. In the scalar-valued case, it is left invariant under $\Gammaamma$, and, in general, it satisfies the equivariance property (\ref{gamma.inv}) below. The group $G$, and hence its complexified Lie algebra, acts on such functions by right translations. Let $M(\tilde{f})$ be the $(\mathfrak g,K)$-module generated by $\tilde{f}$, where $\mathfrak g := \tilde{e}xt{\rm Lie}(G)_\mathbb{C}$ is the complexified Lie algebra of $G$ and $K:=\tilde{e}xt{\rm SO}(2)$ is the standard maximal compact subgroup. 
Our first result, Theorem~\ref{abstractMf}, is a classification of the possible $(\mathfrak g,K)$-modules that could arise as $M(\tilde{f})$'s, based on considerations of the structure of a cyclic $(\mathfrak g,K)$-module generated by a ``harmonic'' vector. The resulting indecomposable modules are built up from the various irreducible $(\mathfrak g,K)$-modules occurring as constituents of the principal series for $G$ at points of reduciblity. It turns out that there are $9$ possibilities. Most are subquotients of the reducible principal series but several are not, cf. Corollary~\ref{sub-quotes}. The proof of this result is an easy exercise and the list of possibilities has a simple reformulation in terms of the behavior of $f$ under the classical raising and lowering operators, cf. Remark~\ref{remark4}. Note that some of the possible $(\mathfrak g,K)$-modules associated to harmonic Maass forms were described in the earlier work of Schulze-Pillot (Proposition 3 of \cite{schulze-pillot}) where he assumed\footnote{In the case $k=1$, the principal series, $V(1)$ in his notation, is a direct sum $\tilde{e}xt{\rm LDS}^+(0)\oplus \tilde{e}xt{\rm LDS}^-(0)$, in our notation. So his indecomposable in this case should be taken to mean the indecomposable {II}(b) in our Theorem~\ref{abstractMf} rather than $V(1)$.} that $f$ has weight $k \le 1$. Our second and main result shows that every possibility listed in Theorem~\ref{abstractMf} actually arises as an $M(\tilde{f})$ for some harmonic Maass form $f$. We prove this in Section~\ref{section.examples}, by giving explicit examples for each case. Here it should be noted that it is essential to consider vector-valued forms, since it is shown that certain cases cannot occur non-trivially for scalar-valued forms. In fact, in realizing our list, we only use the symmetric tensor representations $(\rho_m,\mathcal P_m)$ of $G$ on the space $\mathcal P_m$ of polynomials of degree at most $m\in\mathbb{N}_0$, cf. (\ref{poly-rep}) below. For example, the $\mathcal P_m$-valued function defined via $$e_{r,m-r}(\tau)(X) := \frac{(-1)^{m-r}}{r!}\,v^{r-m} \, (X-\tau)^r(X-\overline{\tau})^{m-r},\qquad 0\le r\le m,$$ for $\tau=u+iv\in \mathbb{H}$, satisfies $$e_{r,m-r}(\mathfrak gamma\tau) = (c\tau+d)^{m-2r}\,\rho_m(\mathfrak gamma)\,e_{r,m-r}(\tau),$$ for all $\mathfrak gamma=\left(\begin{smallmatrix} a&b\\ c&d \varepsilonnd{smallmatrix}\right)\in G$, and thus the holomorphic function $e_{m,0}$ is a harmonic Maass form of weight $-m$ and type $\rho_m$ for any $\Gammaamma$. The space $$M(\widetilde{e_{m,0}}) = \tilde{e}xt{\rm span}\left\{\widetilde{e_{m,0}}, \widetilde{e_{m-1,1}} \partialots, \widetilde{e_{1,m-1}}, \widetilde{e_{0,m}} \right\},$$ realizes the finite dimensional $(\mathfrak g,K)$-module of dimension $m+1$, case {I}(a) on our list. As already observed in \cite{schulze-pillot}, the only finite dimensional representation which can occur for scalar-valued forms is the trivial representation. A related case involves a generalization of the (non-holomophic) weight $2$ Eisenstein series $E_2^*$ defined in \varepsilonqref{E2star}. This case is treated in \cite{PSS}, however, for the reader's convenience, we give all details. The $(\mathfrak g,K)$-module associated to $E_2^*$ is a non-split extension with the weight $2$ holomorphic discrete series representation as quotient and the trivial representation as submodule. 
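For the reader who wishes to experiment, the transformation law of $e_{r,m-r}$ quoted above is easy to test numerically; the following Python sketch (ours, purely illustrative; the helper name \texttt{e\_poly} and the chosen sample points are not standard) evaluates both sides at a few points, with $\rho_m$ acting as in (\ref{poly-rep}) below.
\begin{verbatim}
# Sketch (ours, illustrative only): numerical check of
#   e_{r,m-r}(gamma tau) = (c tau + d)^{m-2r} rho_m(gamma) e_{r,m-r}(tau)
# at sample points X, with rho_m(gamma) p(X) = (-cX+a)^m p((dX-b)/(-cX+a)).
import math

def e_poly(r, m, tau, X):
    v = tau.imag
    return ((-1) ** (m - r) / math.factorial(r)) * v ** (r - m) \
           * (X - tau) ** r * (X - tau.conjugate()) ** (m - r)

a, b, c, d = 2, 1, 1, 1                    # a matrix in SL_2(Z), det = 1
m, tau = 5, 0.3 + 0.7j
gamma_tau = (a * tau + b) / (c * tau + d)

for r in range(m + 1):
    for X in (0.2 + 0.1j, -1.3 + 2.2j):
        lhs = e_poly(r, m, gamma_tau, X)
        Y = (d * X - b) / (-c * X + a)     # argument of p inside rho_m(gamma) p(X)
        rhs = (c * tau + d) ** (m - 2 * r) * (-c * X + a) ** m * e_poly(r, m, tau, Y)
        assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
print("transformation law verified numerically for m = 5, r = 0,...,5")
\end{verbatim}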
For any $m\in\mathbb{N}_0$, there is a $\mathcal P_m$-valued function $$E^*_{m+2}(\tau):= \sum_{r=0}^m \frac{1}{r+1} {m\choose r}\,e_{r,m-r}(\tau)\,R^r\,E^*_2(\tau),$$ where $R^r=R^r_k:=R_{k+2(r-1)}\circ\ldots\circ R_{k+2}\circ R_k$ is the $r$-fold application of the Maass raising operator \begin{equation}\label{raising} R=R_k:= 2i\frac{\varpiartial}{\varpiartial \tau} +\frac{k}{v}. \varepsilonnd{equation} It satisfies $$ L_{m+2}\,E^*_{m+2} = \frac{3}{\varpii} \,e_{0,m} $$ with the Maass lowering operator \begin{equation}\label{lowering} L=L_k:=-2iv^2\frac{\varpiartial}{\varpiartial\overline{\tau}}. \varepsilonnd{equation} Since $e_{0,m}$ is annihilated by $R_{m}$ and the Laplace operator $\Delta_k$ (defined in \varepsilonqref{Laplace} below) satisfies $\Delta_k = -R_m\circ L_{m+2}$, $E^*_{m+2}$ is a harmonic Maass form of weight $k=m+2$. The associated $(\mathfrak g,K)$-module is a non-split extension with the weight $k$ holomorphic discrete series representation as quotient and the finite-dimensional space $M(\widetilde{e_{m,0}})$ as submodule; this is case {III}(b) in our list. Note that the functions $R^r\,E^*_2$ occurring in the components of $E^*_{m+2}$ lie in the spaces of nearly holomorphic modular forms in Theorem~4.2 of \cite{PSS}. Case {II}(b) in our list is a non-split extension with the holomorphic weight $1$ limit of discrete series representation as quotient and anti-holomorphic weight $1$ limit of discrete series representation as submodule, cf. (\ref{tiny.ext}) below. This is precisely the $(\mathfrak g,K)$-module associated to the central derivative of the incoherent Eisenstein series of weight $1$ introduced in \cite{kry.tiny}. This weight $1$ harmonic Maass form can be viewed as the ``modular completion'' of the generating series for arithmetic degrees of special $0$-cycles on the moduli space of CM elliptic curves, loc.\!\,cit., and hence is typical of a class of such forms whose holomorphic part is related to arithmetic. Other such examples occur, sometimes only conjecturally, for example in \cite{kry.faltings,kry.book}, and in the work of Duke-Li \cite{duke.li}. Analogous phenomena have emerged the in the theory of $p$-adic modular forms \cite{darmon.lauder.rotger}. \begin{remark} In this paper, we do not include of half-integral weight forms for which a similar classification could be made, cf. \cite{schulze-pillot} for a discussion. \varepsilonnd{remark} Finally, we note that the harmonicity condition $\Delta_kf=0$ is essential to our classification, since it implies that the $K$-types occur in $M(\tilde{f})$ with multiplicity $1$. In Section~\ref{section.related}, we show that more complicated $(\mathfrak g,K)$-modules can show up if one only requires that $\Delta_k^\varepsilonll f=0$ for some $\varepsilonll\in\mathbb{N}_0$. A simple, but arithmetically interesting example is given by the weight $0$ non-holomorphic modular form \begin{equation}\label{Kronecker-fun} \varpihi(\tau) := -\frac16\,\log\big(|\Delta(\tau)|^2\,v^{12}\big), \varepsilonnd{equation} which arises in the Kronecker limit formula, where $\Delta$ is the weight $12$ modular discriminant. This form is annihilated by $\Delta_0^2$ and the associated $(\mathfrak g,K)$-module has the following picture: $$ \begin{matrix} \tilde{e}xt{\small\bf Figure 1. 
The $(\mathfrak g,K)$-module for the Kronecker limit formula\qquad\quad}\\ \xymatrix{ {}&{}&{}&{}&{\bullet}\alphar[dl]_{L_0}\alphar[dr]^{R_0}&{}&{}&{}&{}&{}\\ \partialots\alphar@/^/[r]^{R_{-8}}&{\circ} \alphar@/^/[l]^{L_{-6}}\alphar@/^/[r]^{R_{-6}}&{\circ}\alphar@/^/[l]^{L_{-4}} \alphar@/^/[r]^{R_{-4}}&{\circ}\alphar[dr]_{R_{-2}}\alphar@/^/[l]^{L_{-2}}& {}&{\circ}\alphar[dl]^{L_2}\alphar@/^/[r]^{R_2}&{\circ}\alphar@/^/[r]^{R_4}\alphar@/^/[l]^{L_{4}}&\alphar@/^/[l]^{L_{6}}{\circ}\alphar@/^/[r]^{R_6}&\partialots \alphar@/^/[l]^{L_{8}}\\ {}&{}&{}&{}&{\odot}&{}&{}&{}&{}& } \varepsilonnd{matrix} $$ An explanation is given in Section~\ref{section.related}. In summary, we hope that our classification will prove useful in understanding harmonic Maass forms from a representation theoretic perspective and that it will provide an elementary motivating example\footnote{In fact, this development is already underway, cf. Remark~\ref{rem10} in section ~\ref{section.related} for a brief discussion.} for the study of extensions occurring in the space of automorphic forms, including those with weaker than traditional growth conditions, and their analogues for more general groups. In addition, we hope that it will serve as an accessible introduction to this point of view. {\bf Thanks:} This project was begun during a visit by the second author to the University of Cologne in of 2014 and completed during visits to Oberwolfach, TU Darmstadt, and ETH, Z\"urich, in the summer of 2016. He would like to thank these institutions for their support and stimulating working environments. The research of the first author is supported by the Alfried Krupp Prize for Young University Teachers of the Krupp foundation and the research leading to these results receives funding from the European Research Council under the European Unions Seventh Framework Programme (FP/2007-2013) / ERC Grant agreement n. 335220 - AQSER. The authors thank Dan Bump, Stephan Ehlen, Olav Richter, Rainer Schulze-Pillot, and Martin Westerholt-Raum for useful comments on an earlier version of this paper. \section{Harmonic Maass form}\label{section2} Let $(\rho,V)$ be a finite dimensional complex representation of a subgroup $\Gammaamma$ of finite index in ${\tilde{e}xt {\rm SL}}_2(\mathbb{Z})$, and, for simplicity, suppose that $k$ is an integer. By a {\it harmonic Maass form of weight} $k$ and {\it type} $(\rho,V)$, we mean a smooth function $f :\mathbb{H} \rightarrow V$ satisfying the following conditions, \cite{bruinier.funke}: \vskip -15pt \begin{enumerate} \item For $\mathfrak gamma=\left(\begin{smallmatrix}a&b\\c&d\varepsilonnd{smallmatrix}\right)\in\Gammaamma$ $$ f (\mathfrak gamma\tau) = j(\mathfrak gamma,\tau)^k \,\rho(\mathfrak gamma)\, f (\tau), $$ where $j(\mathfrak gamma,\tau):=c\tau+d$. \item We have $$ \Delta_k f = 0, $$ with the {\it hyperbolic Laplacian} in weight $k$ \begin{equation}\label{Laplace} \Delta_k:= -v^2\,\left(\frac{\varpiartial^2}{\varpiartial u^2} + \frac{\varpiartial^2}{\varpiartial v^2}\right) + i k\,v \left(\frac{\varpiartial}{\varpiartial u} + i \frac{\varpiartial}{\varpiartial v}\right). \varepsilonnd{equation} \item There exists a constant $B>0$ such that $$f(\tau) = O\left(e^{B v}\right) \qquad \tilde{e}xt{as $v\rightarrow \infty$, uniformly in $u$}.$$ A similar condition holds at all cusps of $\Gammaamma$. \varepsilonnd{enumerate} We denote this space by $H_k^{\tilde{e}xt{mg}}(\Gammaamma,\rho)$. If the representation $(\rho,V)$ is one-dimensional, i.e. 
is given by a character $\chi:\Gammaamma\rightarrow \mathbb{C}^\times$, we abbreviate this to $H_k^{\tilde{e}xt{mg}}(\Gammaamma,\chi)$ or $H_k^{\tilde{e}xt{mg}}(\Gammaamma)$, if $\chi$ is trivial or if we do not want to be explicit about $\chi$. We refer to functions in this space as {\it scalar-valued Maass forms}. A special subspace of $H_k^{\tilde{e}xt{mg}}(\Gammaamma,\rho)$ consists of those forms, for which there exists a polynomial $P_f(\tau)\in V[q^{-1}]$ such that \[ f(\tau)-P_f(\tau)=O\left(e^{-\varepsilon v}\right) \] as $v\to\infty$ for some $\varepsilon>0$, and similarly at the other cusps. We denote this space by $H_k(\Gammaamma, \rho)$ and refer to $P_f(\tau)$ as the principal part of $f$ (at the given cusp). \begin{remark} Note that a holomorphic function satisfying these conditions can have poles at the cusps, i.e., is a weakly holomorphic modular form in the usual, somewhat unfortunate, terminology. Harmonic Maass forms with exponential growth at the cusps as allowed by (3) are sometimes referred to as ``harmonic weak Maass forms''. \varepsilonnd{remark} A scalar-valued Maass form $f$ of weight $k \in \mathbb{Z}\setminus\{1\}$ has a Fourier expansion of the form (see (3.2a) and (3.2b) of \cite{bruinier.funke}) \begin{equation}\label{weak-Fourier} f(\tau)=f^+(\tau)+f^-(\tau) \varepsilonnd{equation} where the {\it holomorphic part of $f$} is given by \begin{equation}\label{ppart} f^+(\tau)=\sum_{n\in\frac{1}{N}\mathbb{Z} \alphatop n\mathfrak gg-\infty} c_f^+(n)\,q^n, \varepsilonnd{equation} for some $N\in \mathbb{N}$, whereas, for $k\ne1$, its {\it non-holomorphic part} is given by \begin{equation}\label{mpart1} f^-(\tau)=c_f^- (0)\,v^{1-k}+\sum_{\substack{n\in\frac{1}{N}\mathbb{Z}\setminus\{0\} \\ n\ll \infty}} c_f^-(n)\,W_k( 4\varpii n v)\,q^n. \varepsilonnd{equation} Here, for $x\in \mathbb{R}$, $W_k(x)$ is the real-valued incomplete gamma function\footnote{We sometimes write $\beta_k(x)$ in place of $W_k(-x/2)$, as this notation occurs in many places in the literature.}, defined as in \cite{BDE}, Section 2.2, by \begin{equation}\label{def-Wk} W_k(x) = \operatorname{Re}\big(\,\Gammaamma(1-k,-2x)\,\big) =\Gammaamma(1-k,-2x) + \begin{cases} \frac{(-1)^{1-k} \varpii i}{(k-1)!}&\tilde{e}xt{for $x>0$,}\\ 0&\tilde{e}xt{for $x<0$,} \varepsilonnd{cases} \varepsilonnd{equation} with $$\Gammaamma(s,x) := \int_{x}^\infty e^{-t}\,t^{s-1}\,dt. $$ For $k=1$, the non-holomorphic part $f^-(\tau)$ has the same shape but with the term $v^{1-k}$ in (\ref{mpart1}) replaced by $-\log(v)$. Note that the constraints on $n$ in the sums, $n\mathfrak gg -\infty$ in $f^+(\tau)$ and $n\ll \infty$ in $f^-(\tau)$, are a consequence of the growth condition (3) and the asymptotics of $\Gammaamma(s,x)$. The subspace $H_k(\Gammaamma)$ may be characterized as those elements of $H_k^{\operatorname{mg}}(\Gammaamma)$ for which $c_f^-(n)=0$ for $n\mathfrak ge 0$. For $f\in H_k(\Gammaamma)$, we have \begin{equation}\label{mpart2} f^-(\tau)=\sum_{\substack{n\in\frac{1}{N}\mathbb{Z} \\ n<0}} c_f^-(n)\,\Gammaamma\left(1-k,4\varpii |n| v\right)q^{n}. \varepsilonnd{equation} We define the ``flipped space'' \begin{equation}\label{sharp} H_k^\sharp\left(\Gammaamma\right):=\left\{f\in H_k^{\operatorname{mg}}\left(\Gammaamma\right): c_f^+(n)=0\tilde{e}xt{ for }n<0\right\}. \varepsilonnd{equation} \begin{remark}\label{Fourier-remark} The Fourier expansion in the case of vector-valued forms of type $(\rho,V)$ is more subtle. 
Suppose that $\Gammaamma={\tilde{e}xt {\rm SL}}_2(\mathbb{Z})$, so that we have $$f(\tau+1) = \rho\left(\begin{pmatrix} 1&1\\0&1\varepsilonnd{pmatrix}\right)\,f(\tau).$$ If the space $V$ admits a basis of eigenvectors of $\rho(\left(\begin{smallmatrix} 1&1\\0&1\varepsilonnd{smallmatrix}\right))$, then corresponding components of $f$ have a Fourier expansion as in (\ref{weak-Fourier}). This is for example the case if $(\rho,V)$ is a Weil representation as in \cite{bruinier.funke}. On the other hand, if $(\rho,V)$ is a symmetric tensor representation, say realized on a space of polynomials as in (\ref{poly-rep}), then there is no such basis and the Fourier series of $f$ must be defined by procedure of \cite{kuga.shimura}, which we now describe. Suppose that the representation $(\rho,V)$ is the restriction to ${\tilde{e}xt {\rm SL}}_2(\mathbb{Z})$ of a holomorphic representation of ${\tilde{e}xt {\rm SL}}_2(\mathbb{C})$. Letting $$f^*(\tau) := \rho\left(\begin{pmatrix} 1&\tau\\0&1\varepsilonnd{pmatrix}\right)^{-1}\,f(\tau),$$ we have $$ f^*(\tau+1) = \rho\left(\begin{pmatrix} 1&\tau+1\\0&1\varepsilonnd{pmatrix}\right)^{-1}\,\rho\left(\begin{pmatrix} 1&1\\0&1\varepsilonnd{pmatrix}\right)\,f(\tau)=f^*(\tau).$$ Thus $f^*$ has a Fourier expansion. The twist from $f$ to $f^*$ alters the action of the Maass operators, \varepsilonqref{raising} and \varepsilonqref{lowering} however, and this is responsible for the occurrence of different phenomena for vector-valued forms in certain cases, specifically cases I(a) and III(b) of Theorem~\ref{abstractMf}. \varepsilonnd{remark} Harmonic Maass forms relate in many ways to classical (weakly holomorphic) modular forms. To state the first of these connections, define the Bruinier-Funke {\it $\xi$-operator} \begin{equation*} \xi_k:= 2iv^k \overline{\frac{\varpiartial}{\varpiartial\overline{\tau}}}. \varepsilonnd{equation*} Note that \begin{equation*} \xi_k f=v^{k-2}\overline{L_{k}f}=R_{-k}\left(v^k \overline{f}\right) \varepsilonnd{equation*} from which one can easily conclude that $\xi_k: H^{\operatorname{mg}}_k \left(\Gammaamma,\rho\right) \to M_{2-k}^!(\Gammaamma,\bar\rho)$, where $\bar\rho$ is the representation of $\Gammaamma$ on $V$ defined by $$\bar\rho(\mathfrak gamma)v = \overline{\rho(\mathfrak gamma)\bar v}.$$ Here we need to assume that $V$ is defined over $\mathbb{R}$, i.e., that complex conjugation $\bar{}:V\rightarrow V$ is defined. Of course this is true for spaces of complex-valued functions, e.g., polynomials or group algebras as in \cite{bruinier.funke}. We also note that, if $f$ has weight $k$, then $v^k\,\overline{f(\tau)}$ has weight $-k$ and \begin{equation*} L_{-k}^a\left(v^k\,\overline{f(\tau)}\right) = v^{k+2a} \,\overline{R_k^af(\tau)}, \qquad R_{-k}^a\left(v^k\,\overline{f(\tau)}\right) = v^{k-2a}\overline{L_k^a f(\tau)} . \varepsilonnd{equation*} Now assume that $f$ is scalar-valued. Writing the Fourier expansion of $f$ as in \varepsilonqref{weak-Fourier}, we have \begin{equation}\label{xiact} \xi_kf(\tau) =(1-k)\overline{c_f^-(0)}-(4\varpii)^{1-k} \sum_{\substack{n\in\frac1N\mathbb{Z}\setminus\{0\}\\n\mathfrak gg -\infty}}\overline{\frac{c_f^-(-n)}{n^{k-1}}}q^n. \varepsilonnd{equation} while, for $k=1$, we have \begin{equation}\label{xiact} \xi_1f(\tau) =-\overline{c_f^-(0)}-\sum_{\substack{n\in\frac1N\mathbb{Z}\setminus\{0\}\\n\mathfrak gg -\infty}}\overline{c_f^-(-n)}\,q^n. 
\varepsilonnd{equation} Using this operator, $H_k(\Gammaamma)$ may be characterized as those elements in $H_k^{\tilde{e}xt{mg}}(\Gammaamma)$ which map to cusp forms under $\xi_k$. A further operator which relates harmonic Maass forms to (weakly) holomorphic modular forms is given by iterated differentiation. Suppose that $k\in-\mathbb{N}_0$ and let $$ D^{1-k}:=\left(\frac{1}{2\varpii i} \frac{\varpiartial}{\varpiartial \tau}\right)^{1-k}. $$ Using Bol's identity, \cite{bol, bump.choie}, \begin{equation}\label{classic-bol} D^{1-k}=(-4\varpii)^{k-1}R_k^{1-k}, \varepsilonnd{equation} one can show that $D^{1-k}:H_k^{\operatorname{mg}}\left(\Gammaamma,\rho\right)\to M_{2-k}^{!}\left(\Gammaamma,\rho\right)$. For $f$ scalar-valued, the operator $D^{1-k}$ acts on \varepsilonqref{weak-Fourier} as \begin{equation}\label{D-action} D^{1-k}f(\tau)=(1-k)!(4\varpii)^{k-1}c_f^-(0)+\sum_{\substack{n\in\frac1N\mathbb{Z}\setminus\{0\}\\n\mathfrak gg -\infty}}\frac{c_f^+(n)}{n^{k-1}}q^n. \varepsilonnd{equation} Using this operator, the space \varepsilonqref{sharp} may be characterized as $$ H^\sharp_k\left(\Gammaamma\right)=\left\{f\in H_k^{\operatorname{mg}}\left(\Gammaamma\right):D^{1-k}(f)\in S_{2-k}\left(\Gammaamma\right)\right\}. $$ Finally, we also require an operator, which ``flips'' the two spaces $H_k(\Gammaamma)$ and $H_k^\sharp(\Gammaamma)$. To be more precise, define for $f\in H_k^{\operatorname{mg}}(\Gammaamma)$ the {\it flip of $f$} \begin{equation*} \mathfrak{F}_k:=\frac{v^{-k}}{(-k)!}\overline{R_k^{-k}}. \varepsilonnd{equation*} The flipping operator $\mathfrak{F}_k$ satisfies \begin{align}\label{involution} \mathfrak{F}_k:&H_k\left(\Gammaamma\right)\to H_k^\sharp\left(\Gammaamma\right),~ H_k^\sharp\left(\Gammaamma\right)\to H_k\left(\Gammaamma\right),\notag\\ &\qquad\mathfrak{F}_k\circ\mathfrak{F}_k(f)=f. \varepsilonnd{align} Moreover we have \begin{align} \xi_k \circ\mathfrak{F}_k &=-\frac{(-4\varpii)^{1-k}}{(-k)!}D^{1-k}, \label{xiflip}\\ D^{1-k}\circ \mathfrak{F}_k &=\frac{(-k)!}{(4\varpii)^{1-k}}\xi_k.\label{Dflip} \varepsilonnd{align} Natural harmonic Maass forms can be given via Poincar\'e series. For simplicity, we restrict to scalar-valued forms. We now describe the general construction. Let \begin{equation*} \mathbb{P}_k(\varphi;\tau) := \sum_{\mathfrak gamma =\left(\begin{smallmatrix} a & b \\ c & d \varepsilonnd{smallmatrix}\right)\in\Gammaamma_\infty\setminus{\tilde{e}xt {\rm SL}}_2(\mathbb{Z})}\varphi \mathbb Big|_k\mathfrak gamma(\tau), \varepsilonnd{equation*} where $\Gammaamma_\infty:=\{\varpim\left(\begin{smallmatrix}1&n\\0&1\varepsilonnd{smallmatrix}\right):n\in\mathbb{Z}\}$, $\big|_k$ is the usual weight $k$ slash operator and $\varphi : \mathbb{H} \rightarrow \mathbb{C}$ satisfies \begin{equation*} \varphi(\tau) = O(v^{2-k+\varepsilon})\qquad (\varepsilon>0). \varepsilonnd{equation*} For $k<0$, let \begin{equation*} F_{k,m}(\tau) := \mathbb{P}_k\left(\varphi_{k,m}\right), \varepsilonnd{equation*} with $(e(u):=e^{2\varpii iu})$ \begin{equation*} \varphi_{k,m}(\tau) := \frac{(-\operatorname{sgn}(m))^{1-k}}{(1-k)!}\left(4\varpii |m|v\right)^{-\frac{k}{2}}M_{\operatorname{sgn}(m)\frac{k}{2},\frac{1-k}{2}}\left(4\varpii|m|v\right)e(mu) \varepsilonnd{equation*} with $M_{\mu,\nu}$ the $M$-Whittaker function. These functions give rise to harmonic Maass forms (see e.g. \cite{Br, Ni}). To be more precise, for $m>0$, we have \[ F_{k,m} \in H_k,\quad F_{k,-m} \in H^{\sharp}_k. 
\] \section{The $(\mathfrak g,K)$-module defined by $f$.} As usual, for $g\in G = {\tilde{e}xt {\rm SL}}_2(\mathbb{R})$ and $f\in H_k^{\tilde{e}xt{mg}}(\Gammaamma,\rho)$, we let \begin{equation}\label{lifttoG} \tilde{f}(g) := j(g,i)^{-k}\, f (g(i)), \varepsilonnd{equation} so that $\tilde{f}:G \rightarrow V$ satisfies \begin{align}\label{gamma.inv} \tilde{f}(\mathfrak gamma g) &= \rho(\mathfrak gamma) \tilde{f}(g) \qquad\tilde{e}xt{for all $\mathfrak gamma\in\Gammaamma$},\\ \label{K-type} \tilde{f}(g k_\theta) &= e^{i k \theta}\,\tilde{f}(g)\qquad\tilde{e}xt{for all $k_\theta := \left(\begin{smallmatrix} \cos(\theta)&\sin(\theta)\\-\sin(\theta)&\cos(\theta)\varepsilonnd{smallmatrix}\right) \in K=\tilde{e}xt{\rm SO}(2)$,} \varepsilonnd{align} and the growth condition \begin{equation}\label{growth.cond} \tilde{f}(g) = O\left(e^{Bv}\right), \qquad g(i) = u+iv, \varepsilonnd{equation} for a constant $B>0$, uniformly in $u$, as $v\rightarrow \infty$. Again, similar conditions hold at all cusps of $\Gammaamma$. As a good reference for this construction as well as the calculations to follow is \cite{verdier}, see also Chapter~1 of \cite{vogan.book}. Let $$ H := i \begin{pmatrix} 0&-1\\1&0\varepsilonnd{pmatrix}, \qquad X_+ := \frac12 \begin{pmatrix} 1&i \\ i&-1\varepsilonnd{pmatrix}, \qquad X_- := \frac12 \begin{pmatrix} 1&-i \\ -i&-1\varepsilonnd{pmatrix}, $$ be the standard basis for the complexified Lie algebra $\mathfrak g$ of $G$. Note that $H$ spans $\mathfrak k$, the complexified Lie algebra of $K$, and that condition (\ref{K-type}) on $\tilde{f}$ is equivalent to the condition $H \tilde{f} = k \tilde{f}$. Recall that, if $\tilde{f}$ is the lift of $f $ to $G$, then $X_+ \tilde{f}$ is the lift of $R_k f $ and $X_-\tilde{f}$ is the lift of $L_kf $, where $L_k$ and $R_k$ are defined in \varepsilonqref{raising} and \varepsilonqref{lowering}, respectively. The Casimir operator is the element of the universal enveloping algebra $U(\mathfrak g)$ of $\mathfrak g$ defined by \begin{equation}\label{casimir} C:= H^2 + 2 X_+ X_- + 2 X_- X_+ = (H-1)^2 + 4 X_+X_--1 = (H+1)^2 + 4 X_-X_+-1. \varepsilonnd{equation} Note that $C$ spans the center of $U(\mathfrak g)$ and acts by a scalar in any irreducible $(\mathfrak g,K)$-module. Using the second expression for $C$, it is immediate that the condition $\Delta_kf=0$ is equivalent to \begin{equation}\label{Casimir.cond} C\tilde{f} = \left(\left(k-1\right)^2-1\right)\tilde{f}. \varepsilonnd{equation} Note that, for $k\in -\mathbb{N}_0$, the flipping operator lifts to $$\widetilde{\mathfrak{F}_kf} =\frac{1}{(-k)!}\,\overline{X_+^{-k}\tilde{f}},$$ and the identity (\ref{involution}) amounts to $$X_-^{-k}\,X_+^{-k}\,\tilde{f} = (-k)!^2\,\tilde f.$$ Let $A(G,V)$ be the space of all $K$-finite\footnote{A function in this space is a finite linear combination of functions satisfying (\ref{K-type}) for various weights $k$.}, $C^\infty$-functions on $G$, which are valued in $V$, and let $A(G,V;\Gammaamma)$ be the subspace such that (\ref{gamma.inv}) holds. Note that this subspace is a $(\mathfrak g,K)$ submodule. For a harmonic Maass form $f$ of weight $k$, the corresponding function $\tilde{f}$ on $G$ lies in $A(G,V;\Gammaamma)$ and satisfies conditions (\ref{K-type}) and (\ref{Casimir.cond}), as well as the growth condition (\ref{growth.cond}), which we do not use for the moment. We want to describe the $(\mathfrak g, K)$-submodule $M(\tilde{f})$ of $A(G,V)$ generated by $\tilde{f}$. 
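The equivalence of $\Delta_k f=0$ with (\ref{Casimir.cond}) ultimately rests on the factorization $\Delta_k=-R_{k-2}\circ L_k$ (equivalently $\Delta_k=-R_{m}\circ L_{m+2}$ for $k=m+2$, as used in the introduction). The following sketch (ours, illustrative only; it assumes SymPy is available, and the names \texttt{L\_op}, \texttt{R\_op} are ours) verifies this identity symbolically for a generic smooth function of $\tau=u+iv$.
\begin{verbatim}
# Sketch (ours): symbolic check, with tau = u + iv, that Delta_k = -R_{k-2} o L_k,
# where Delta_k, R_k, L_k are the operators (Laplace), (raising), (lowering) above.
import sympy as sp

u, v, k = sp.symbols("u v k", real=True)
f = sp.Function("f")(u, v)

d_tau    = lambda g: (sp.diff(g, u) - sp.I * sp.diff(g, v)) / 2   # d/d tau
d_taubar = lambda g: (sp.diff(g, u) + sp.I * sp.diff(g, v)) / 2   # d/d tau-bar

L_op = lambda g: -2 * sp.I * v**2 * d_taubar(g)                   # lowering L_k
R_op = lambda wt, g: 2 * sp.I * d_tau(g) + wt / v * g             # raising R_wt

Delta_k = (-v**2 * (sp.diff(f, u, 2) + sp.diff(f, v, 2))
           + sp.I * k * v * (sp.diff(f, u) + sp.I * sp.diff(f, v)))

print(sp.simplify(Delta_k + R_op(k - 2, L_op(f))))                # expected output: 0
\end{verbatim}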
For $j= k\varpim2r\in k+2\mathbb{Z}$, $r\in \mathbb{N}_0$, let \begin{equation}\label{first-up-down} \tilde{f}_j := X_\varpim^r\,\tilde{f}. \varepsilonnd{equation} The following fact is classically well-known. \begin{proposition} The $(\mathfrak g,K)$-submodule $M(\tilde{f})$ of $A(G,V)$ generated by $\tilde{f}= \tilde{f}_k$ is spanned by the $\tilde{f}_j$. Moreover, for $r\in\mathbb{N}$, \begin{align} X_- \tilde{f}_{k+2r} &= r(1-k-r)\,\tilde{f}_{k+2(r-1)},\quad{\rm and}\label{movedown}\\ X_+ \tilde{f}_{k-2r} &= -(r-1)(r-k)\tilde{f}_{k-2(r-1)}.\notag \varepsilonnd{align} In particular, the following vanishing always occurs: \begin{align} X_+\tilde{f}_{k-2} &=0, \notag\\ X_- \tilde{f}_{2-k} &=0 &&\tilde{e}xt{if $k<1$ {\rm \tilde{e}xt{\quad (so that $r = 1-k >0$ in (\ref{first-up-down})})}},\label{autovan2}\\ X_+\tilde{f}_{-k}&=0 &&\tilde{e}xt{if $k>0$ {\rm \tilde{e}xt{\quad (so that $r = k>0$ in (\ref{first-up-down})})}}.\label{autovan3} \varepsilonnd{align} \varepsilonnd{proposition} \begin{proof} For convenience, we set $\nu= 1-k$. By (\ref{casimir}), we have \begin{align*} 4 X_+X_- & = C - (H-1)^2 +1,\\ 4 X_-X_+ &= C- (H+1)^2+1, \varepsilonnd{align*} and hence, recalling that $C$ is in the center of the enveloping algebra, \begin{align} 4 X_+X_- \,\tilde{f}_j &= \left(\left(\nu^2-1\right) -(j-1)^2+1\right)\,\tilde{f}_j, \label{multi-one-1}\\ 4 X_-X_+ \,\tilde{f}_{j-2}&= \left(\left(\nu^2-1\right) -(j-1)^2+1\right)\,\tilde{f}_{j-2}. \label{multi-one-2} \varepsilonnd{align} Now for $j= k+2r>k$, the second equation gives us \[ X_- \tilde{f}_{j} = \frac14\left(\left(\nu^2-1\right) -(2r-\nu)^2+1\right) \tilde{f}_{j-2},\quad\tilde{e}xt{i.\,e.,}\quad X_- \tilde{f}_{k+2r} = r(\nu-r)\, \tilde{f}_{k+2(r-1)}. \] Similarly, for $r>0$, the first equation yields $$ X_+ \tilde{f}_{k-2r} = -(r-1)(\nu+r-1) \tilde{f}_{k-2(r-1)}. $$ \varepsilonnd{proof} \begin{remark}\label{K-type-remark} Note that it is precisely the harmonicity condition (\ref{Casimir.cond}) which yields the relations (\ref{multi-one-1}) and (\ref{multi-one-2}). These imply that the $K$-types in $M(\tilde{f})$ have multiplicity $1$, a fact which is crucial to our determination of this module. If condition (\ref{Casimir.cond}) is relaxed, higher multiplicities and more complicated things can occur as shown by some examples in Section~\ref{section.related}. \varepsilonnd{remark} \begin{remark} It is useful to note that for $k<1$ and $\nu=1-k$, the vanishing $X_- \tilde{f}_{2-k} =0$ in (\ref{autovan2}) implies that, for a holomorphic form $f$, \begin{equation}\label{trick} R^\nu_k f (\tau) =\left(2 i \frac{\varpiartial}{\varpiartial \tau}\right)^{\nu}f(\tau), \varepsilonnd{equation} i.e., Bol's identity (\ref{classic-bol}). \varepsilonnd{remark} \section{Some standard $(\mathfrak g,K)$ modules} Before continuing our analysis, we review the structure of the reducible principal series representations of $G$. This provides the irreducible $(\mathfrak g,K)$-modules from which $M(f)$ is built. A nice reference for this material is Chapter~1 of \cite{vogan.book}. 
Here we use the notation $$m(a) := \begin{pmatrix} a&{0}\\{0}&a^{-1}\varepsilonnd{pmatrix}, \qquad n(b) := \begin{pmatrix} 1&b\\{0}&1\varepsilonnd{pmatrix}, \quad\tilde{e}xt{and}\quad k_\theta = \begin{pmatrix} \cos(\theta)&\sin(\theta)\\-\sin(\theta)&\cos(\theta)\varepsilonnd{pmatrix}.$$ For $\varepsilon\in \{0,1\}$ and $\nu\in \mathbb{C}$, let $I^{\tilde{e}xt{\rm sm}}(\varepsilon,\nu)$, be the principal series representation of $G$ given by right multiplication on the space of smooth functions\footnote{Hence the superscript `sm'.} $\varpihi$ such that $$ \varpihi(n(b) m(a) g) = \operatorname{sgn}(a)^\varepsilon\,|a|^{\nu+1}\,\varpihi(g). $$ Let $I(\varepsilon,\nu)$ be the corresponding $(\mathfrak g,K)$-module of $K$-finite functions. For $j \varepsilonquiv \varepsilon\varpimod{2}$, let $\varpihi_j\in I(\varepsilon,\nu)$ be the function such that \begin{equation}\label{ind-K-type} \varpihi_j(k_\theta) = e^{j \theta i}. \varepsilonnd{equation} These functions give a basis for $I(\varepsilon,\nu)$, and and easy calculation shows that \begin{equation}\label{PS.action} X_{\varpim} \varpihi_j = \frac12(\nu+1\varpim j)\varpihi_{j\varpim2}. \varepsilonnd{equation} From this it is immediate that \begin{equation} C \varpihi_j =\mathbb Big((j-1)^2 + (\nu+1-j) (\nu+1+j-2)-1\mathbb Big)\varpihi_j = \left(\nu^2-1\right) \varpihi_j. \varepsilonnd{equation} The $(\mathfrak g,K)$-module $I(\varepsilon,\nu)$ is irreducible unless $\nu$ is an integer and $\nu-1\varepsilonquiv \varepsilon\varpimod{2}$. In that case, $I(\varepsilon,\nu)$ is not irreducible and $\nu$ determines $\varepsilon$, so we write simply ${I(\nu) = I(\varepsilon,\nu)}$. The structure of $I(\nu)$ is well-known and can easily be derived from (\ref{PS.action}). \begin{enumerate} \item If $\nu>0$, then there are two irreducible submodules, the holomorphic (resp. anti-holomorphic) discrete series representation $$ \tilde{e}xt{\rm DS}^{+}(\nu) = [\varpihi_{\nu+1}, \varpihi_{\nu+3}, \partialots, ],\qquad(\tilde{e}xt{resp. } \tilde{e}xt{\rm DS}^-(\nu) = [\partialots, \varpihi_{-\nu-3},\varpihi_{-\nu-1}]) $$ with lowest (resp. highest) weight $k$ (resp. $-k$), and a unique irreducible quotient \begin{equation}\label{Is0-decomp} \tilde{e}xt{\rm FD}(\nu) = [\varpihi_{-\nu+1}, \partialots, \varpihi_{\nu-1}] \varpimod{ \tilde{e}xt{\rm DS}^+(\nu)\oplus \tilde{e}xt{\rm DS}^-(\nu)} \varepsilonnd{equation} so that there is an exact sequence $$0\longrightarrow \tilde{e}xt{\rm DS}^+(\nu)\oplus \tilde{e}xt{\rm DS}^-(\nu) \longrightarrow I(\nu) \longrightarrow \tilde{e}xt{\rm FD}(\nu)\longrightarrow 0.$$ In particular, $\tilde{e}xt{\rm FD}(\nu)$ is the finite dimensional representation of $G$ of dimension $\nu$. \item If $\nu<0$, then there is a unique irreducible submodule $$ \tilde{e}xt{\rm FD}(-\nu) = [\varpihi_{\nu+1}, \partialots, \varpihi_{-\nu-1}], $$ and two irreducible quotients, $\tilde{e}xt{\rm DS}^{\varpim}(-\nu)$, and so an exact sequence $$0\longrightarrow \tilde{e}xt{\rm FD}(-\nu)\longrightarrow I(\mu) \longrightarrow \tilde{e}xt{\rm DS}^+(-\nu)\oplus \tilde{e}xt{\rm DS}^-(-\nu)\longrightarrow 0.$$ \item If $\nu=0$, there are two irreducible summands, the limits of discrete series $$ \tilde{e}xt{\rm LDS}^+(0) = [\varpihi_1,\varpihi_3,\partialots]\qquad\tilde{e}xt{and}\qquad \tilde{e}xt{\rm LDS}^-(0) = [\partialots, \varpihi_{-3}, \varpihi_{-1}], $$ with lowest (resp. highest) weight $1$ (resp. $-1$). \varepsilonnd{enumerate} In each of the cases, we have, with $k=1-\nu$, $$ X_+^r \varpihi_k = r! 
\,\varpihi_{k+2r},\quad\tilde{e}xt{and}\quad X_-^r\varpihi_k = \nu(\nu+1)(\nu+2) \cdots (\nu+r-1)\,\varpihi_{k-2r}, $$ and, an easy calculation gives $$ X_- X_+^r \varpihi_k = r(\nu-r) \, X_+^{r-1}\,\varpihi_k, \quad\tilde{e}xt{and}\quad X_+X_-^r\varpihi_k = -(r-1)(\nu+r-1)\,X_-^{r-1}\varpihi_k. $$ \section{A Classification} Now returning to the subspace $M(\tilde{f})$ of $A(G,V)$ generated by $\tilde{f}=\tilde{f}_k$, we distinguish the three cases $k<1$, $k=1$, and $k>1$. The transition equations (\ref{movedown})--(\ref{autovan3}) then show when it is possible to move up and down among the $\tilde{f}_j$'s and hence reveal the possible module structures for $M(\tilde{f})$ as an abstract $(\mathfrak g,K)$-module. It turns out that there are 9 cases. Recall that $\nu=1-k$. \begin{theorem}\label{abstractMf}\mathfrak{}fill\break {\bf I.} Suppose that $k<1$, so that $\nu> 0$. Then the structure of $M(\tilde{f})$ is determined by the functions $\tilde{f}_{k-2}$ and $\tilde{f}_{k+2\nu} = \tilde{f}_{2-k}$. More precisely, $M(\tilde{f})$ is isomorphic to \begin{enumerate} \item[(a)] $\tilde{e}xt{\rm FD}(\nu)$ if $\tilde{f}_{k-2}=0$ and $\tilde{f}_{k+2\nu}=0$, \item[(b)] $I(\nu)/\tilde{e}xt{\rm DS}^-(\nu)$ if $\tilde{f}_{k-2}= 0$ and $\tilde{f}_{k+2\nu}\ne0$, so that there is an exact sequence $$0\longrightarrow \tilde{e}xt{\rm DS}^+(\nu)\longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm FD}(\nu)\longrightarrow 0,$$ \item[(c)] $I(\nu)/\tilde{e}xt{\rm DS}^+(\nu)$ if $\tilde{f}_{k-2}\ne 0$ and $\tilde{f}_{k+2\nu}=0$, so that there is an exact sequence $$0\longrightarrow \tilde{e}xt{\rm DS}^-(\nu)\longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm FD}(\nu)\longrightarrow 0,$$ \item[(d)] $I(\nu)$ if $\tilde{f}_{k-2}\ne 0$ and $\tilde{f}_{k+2\nu}\ne 0$, so that there is an exact sequence $$0\longrightarrow \tilde{e}xt{\rm DS}^+(\nu)\oplus \tilde{e}xt{\rm DS}^-(\nu)\longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm FD}(\nu)\longrightarrow 0.$$ \item[] Moreover, the sequences in cases {\rm(b)}, {\rm(c)}, and {\rm(d)} are not split. \varepsilonnd{enumerate} {\bf II.} Suppose that $k=1$ so that $\nu=0$. \begin{enumerate} \item[(a)] If $\tilde{f}_{-1}=0$, then $M(\tilde{f}) \simeq \tilde{e}xt{\rm LDS}^+(0)$. \item[(b)] If $\tilde{f}_{-1}\ne 0$, then there is a non-split extension $$ 0\longrightarrow \tilde{e}xt{\rm LDS}^-(0) \longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm LDS}^+(0)\longrightarrow 0. $$ \varepsilonnd{enumerate} {\bf III.} Suppose that $k>1$ so that $\nu<0$. Then the structure of $M(\tilde{f})$ is determined by the functions $\tilde{f}_{k-2}$ and $\tilde{f}_{-k}$. \begin{enumerate} \item[(a)] If $\tilde{f}_{k-2}=0$, then $M(\tilde{f})\simeq \tilde{e}xt{\rm DS}^+(-\nu)$. \item[(b)] If $\tilde{f}_{k-2}\ne 0$ and $\tilde{f}_{-k}=0$, then there is a non-split extension $$ 0\longrightarrow \tilde{e}xt{\rm FD}(-\nu)\longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm DS}^+(-\nu)\longrightarrow 0 $$ so that $M(\tilde{f})$ is isomorphic to the submodule of $I(\nu)$ generated by $\varpihi_k$. 
\item[(c)] If $\tilde{f}_{k-2}\ne 0$ and $\tilde{f}_{-k}\ne 0$, then there is a (socle) composition series of length $3$, $$ F^2M\left(\tilde{f}\right) \subset F^1M\left(\tilde{f}\right) \subset M\left(\tilde{f}\right), $$ with $F^2M(\tilde{f}) \simeq \tilde{e}xt{\rm DS}^-(-\nu)$, $F^1M(\tilde{f})/F^2M(\tilde{f}) \simeq \tilde{e}xt{\rm FD}(-\nu)$, and $M(\tilde{f}) /F^1M(\tilde{f}) \simeq \tilde{e}xt{\rm DS}^+(-\nu)$. \varepsilonnd{enumerate} \varepsilonnd{theorem} Theorem \ref{abstractMf} immediately implies \begin{corollary}\label{sub-quotes} \begin{enumerate} \item[(i)] If $k<1$, i.e., in cases I(a)-(d), $M(\tilde{f})$ is isomorphic to a quotient of $I(\nu)$. \item[(ii)] In cases III(a) and III(b), where $k>1$, and in case II(a), where $k=1$, $M(\tilde{f})$ is isomorphic to a subquotient of $I(\nu)$. \item[(iii)] In cases III(c), where $k>1$, and II(b), where $k=1$, $M(\tilde{f})$ is not isomorphic to a subquotient of $I(\nu)$. \varepsilonnd{enumerate} \varepsilonnd{corollary} \begin{remark}\label{remark4} For convenience, we summarize what the various cases amount to in classical language. Suppose that $f $ is a harmonic Maass form of weight $k$, and, for $r\in\mathbb{N}_0$, set $f _{k+2r} := R_{k}^rf $ (resp. $f _{k-2r}:= L_k^rf $) for the image of $f $ under the $r$-fold application of the raising (resp. lowering) operator. \begin{enumerate} \item[\bf I.] Here $k<1$. The subcases correspond to the following: \begin{enumerate} \item[(a)] $L_kf=f _{k-2}=0$ and $R_k^{1-k}f=f_{2-k}=0$, \item[(b)] $L_kf=f_{k-2}= 0$ and $R_k^{1-k}f=f_{2-k}\ne0$, \item[(c)] $L_kf=f_{k-2}\ne 0$ and $R_k^{1-k}f = f_{2-k}=0$, \item[(d)] $L_kf = f_{k-2}\ne 0$ and $R_k^{1-k}f=f_{k+2\nu}\ne 0$. \varepsilonnd{enumerate} \item[\bf II.] Here $k=1$. The subcases correspond to the following: \begin{enumerate} \item[(a)] $L_1f=0$, i.\,e., $f$ is a weakly holomorphic modular form of weight $1$, \item[(b)] $L_1f \ne 0$, i.\,e., $\xi_1f$ is a weakly holomorphic modular form of weight $1$. \varepsilonnd{enumerate} \item[\bf III.] Here $k>1$. The subcases correspond to the following: \begin{enumerate} \item[(a)] $L_kf=f_{k-2}=0$, i.\,e., $f$ is a weakly holomorphic modular form of weight $k$, \item[(b)] $L_kf=f_{k-2}\ne 0$ and $L_k^{k}f=f_{-k}=0$. i.\,e., $L_k^{k-1}f$ is holomorphic of weight $2-k$, \item[(c)] $L_kf=f_{k-2}\ne 0$ and $L_k^{k}f= f_{-k}\ne 0$. \varepsilonnd{enumerate} \varepsilonnd{enumerate} \varepsilonnd{remark} Of course, the cases listed in the theorem are simply possibilities. Our goal is to determine which of them can actually arise as subspaces of $A(G,V;\Gammaamma)$ and for which $(\rho,V)$. Of course, the cases I(a) and II(a), where $k\in\mathbb{N}$, arise as irreducible submodules $M(\tilde{f})$ generated by the lifts to $G$ of holomorphic cusp forms of weight $k$. On the other hand, all other cases involve non-split extensions and hence cannot occur as subspaces of $L^2(\Gammaamma\backslash G)$. In fact, we show the following: \begin{theorem} All possible cases enumerated in Theorem~\ref{abstractMf} arise as $M(\tilde{f})$'s. \varepsilonnd{theorem} \begin{remark} This result is proved in Section~\ref{section.examples} by explicit construction. It is shown there, that certain cases can only occur for $(\rho,V)$ with $\partialim (V)>1$. \varepsilonnd{remark} \section{Examples}\label{section.examples} In this section, we provide examples for each of the possibilities for $M(\tilde{f})$ listed in Theorem~\ref{abstractMf}. 
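Before turning to the constructions, it may help to see the trichotomy of Theorem~\ref{abstractMf} directly from the transition coefficients in (\ref{movedown}): their zeros, tabulated by the small sketch below (ours, purely illustrative), are exactly what produces the vanishings (\ref{autovan2}) and (\ref{autovan3}) and hence the case distinction $k<1$, $k=1$, $k>1$.
\begin{verbatim}
# Sketch (ours): zeros of the transition coefficients in (movedown) as the weight k varies.
def down_coeff(k, r):    # X_- f_{k+2r} = r(1-k-r) f_{k+2(r-1)},    r >= 1
    return r * (1 - k - r)

def up_coeff(k, r):      # X_+ f_{k-2r} = -(r-1)(r-k) f_{k-2(r-1)}, r >= 1
    return -(r - 1) * (r - k)

for k in (-3, 1, 4):
    down_zeros = [r for r in range(1, 12) if down_coeff(k, r) == 0]
    up_zeros = [r for r in range(1, 12) if up_coeff(k, r) == 0]
    print(f"k = {k:+d}:  X_- vanishes at r = {down_zeros},  X_+ vanishes at r = {up_zeros}")
\end{verbatim}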
\subsection{Case I: $k<1$} \subsubsection{Case I(a)} In this case, we want an automorphic realization of the finite-dimensional representation $\tilde{e}xt{\rm FD}(\nu)$ of dimension $\nu$. For $k=0$ and $\nu=1$ the constant function gives a trivial example for the one-dimensional space $M(\tilde{f})=\tilde{e}xt{\rm FD}(1)$. Moreover, as remarked by Schulze-Pillot \cite{schulze-pillot}, this is the only possibility of a finite-dimensional $M(\tilde{f})$ if the representation $(\rho,V)$ is a character, i.e., for scalar-valued modular forms. For convenience of the reader, we provide a proof of this statement, which follows the approach of Schulze-Pillot. \begin{lemma}\label{lemma6.5} Suppose that $k\le 0$ and that $f\in H_k^{\rm{mg}}(\Gammaamma)$ is a scalar-valued harmonic Maass form with $M(\tilde{f}) = \tilde{e}xt{\rm FD}(\nu)$. Then $f $ is a constant and $\nu=1$. \varepsilonnd{lemma} \begin{proof} The condition $L_kf=0$ implies that $f$ is holomorphic. Then the condition $D^\nu f=0$ implies that $f$ is a polynomial. Since $f$ is invariant under $\tau\mapsto \tau+1/N$, $f$ must be constant. \varepsilonnd{proof} For vector-valued forms, each $\tilde{e}xt{\rm FD}(\nu)$ can occur, as shown by the following elementary construction, cf. \cite{verdier}. For $k\in -\mathbb{N}_0$, let $m= -k$ and let $\mathbb{C}al P_m$ be the space of polynomials of degree at most $m$ in the variable $X$. The group ${\tilde{e}xt {\rm SL}}_2(\mathbb{R})$ acts of $\mathbb{C}al P_m$ via $\left(\mathfrak gamma:= \left(\begin{smallmatrix} a & b \\ c & d \varepsilonnd{smallmatrix}\right) \right)$ \begin{equation}\label{poly-rep} \rho_m(\mathfrak gamma)p(X) = (-c X+a)^{m} p\left(\frac{d X-b}{-c X+a}\right). \varepsilonnd{equation} We abbreviate $\rho:=\rho_m$. Following \cite{verdier}, for an integer $r$ with $0\le r\le m$, define the function $e_{r,m-r}:\mathbb{H} \rightarrow \mathbb{C}al P_m$ by \begin{align*} e_{r,m-r}(\tau)(X) :&= \frac{(-1)^{m-r}}{r!}\,v^{r-m}\,\partialet\begin{pmatrix} X&\tau\\ 1 &1\varepsilonnd{pmatrix}^{r}\, \partialet\begin{pmatrix} X&\overline{\tau}\\ 1&1\varepsilonnd{pmatrix}^{m-r}\\ &= \frac{(-1)^{m-r}}{r!}\,v^{r-m}\,(X-\tau)^r \,(X-\overline{\tau})^{m-r}. \varepsilonnd{align*} Then, for any $\mathfrak gamma\in {\tilde{e}xt {\rm SL}}_2(\mathbb{R})$, \begin{equation}\label{any-Gamma} e_{r,m-r}(\mathfrak gamma\tau) = (c\tau+d)^{m-2r}\,\rho(\mathfrak gamma)\,e_{r,m-r}(\tau), \varepsilonnd{equation} so that $e_{r,m-r}$ has weight $m-2r$. The holomorphic function $e_{m,0}$ is a harmonic Maass form of weight $-m$ and type $\rho_m$. The corresponding functions $\tilde{e}_{r,m-r}$ on $G$ are given by \begin{align*} \tilde{e}_{r,m-r}(g)(X) &= \frac{(-1)^{m-r}}{r!}\,j(g,i)^r \, j(g,-i)^{m-r}\, \partialet \begin{pmatrix} X&g(i)\\ 1&1\varepsilonnd{pmatrix}^{r}\,\partialet \begin{pmatrix} X&g(-i)\\ 1&1\varepsilonnd{pmatrix}^{m-r}. \varepsilonnd{align*} Let $$ \varpihi(g) := \partialet\left( \begin{pmatrix} X\\1\varepsilonnd{pmatrix}\!, g\begin{pmatrix} \varpim i\\1\varepsilonnd{pmatrix}\right)^r. $$ If $A \in \mathfrak g_0$, the real Lie algebra of $G$, then $$ A \varpihi(g) = r\,\partialet\left( \begin{pmatrix} X\\ 1\varepsilonnd{pmatrix}, g\begin{pmatrix} \varpim i\\1\varepsilonnd{pmatrix}\right)^{r-1}\,\partialet\left( \begin{pmatrix} X\\ 1\varepsilonnd{pmatrix}, A\begin{pmatrix} \varpim i\\1\varepsilonnd{pmatrix}\right), $$ and the same formula holds for $A\in \mathfrak g$, the complexification of $\mathfrak g_0$, by linearity. 
In particular, since $$ X^{+} \begin{pmatrix} i\\1\varepsilonnd{pmatrix} = -\begin{pmatrix} -i\\1\varepsilonnd{pmatrix}, \quad{\tilde{e}xt{and}}\quad X^{+} \begin{pmatrix} -i\\1\varepsilonnd{pmatrix} = 0, $$ we see that \begin{align*} X_+ \tilde{e}_{r,m-r} = \tilde{e}_{r-1,m-r+1},\qquad X_+ \tilde{e}_{0,m}= 0, \quad \tilde{e}xt{ and} \\ X_- \tilde{e}_{r,m-r} = (r+1)\,(m-r)\,\tilde{e}_{r+1,m-r-1}, \qquad X_- \tilde{e}_{m,0} = 0. \varepsilonnd{align*} The classical functions $e_{r,m-r}$ behave in the same way under raising and lowering as the $\tilde{e}_{r,m-r}$ behave under $X^{\varpim}$, viz \begin{equation*} L_{m-2r} \,e_{r,m-r} = (r+1)\,(m-r)\,e_{r+1,m-r-1},\qquad R_{m-2r} \,e_{r,m-r}= e_{r-1,m-r+1}. \varepsilonnd{equation*} In particular, \begin{equation*} L_{-m}e_{m,0}=0,\qquad R_{m}e_{0,m} =0. \varepsilonnd{equation*} These formulas are easily checked by a classical calculation as well. In this way, we obtain a realization of $\tilde{e}xt{\rm FD}(\nu)$ in the space $A(G,\mathbb{C}al P_m;\Gammaamma)$, where $m=\nu-1$ and for any $\Gammaamma \subset {\tilde{e}xt {\rm SL}}_2(\mathbb{R})$; the transformation law under $\Gammaamma$ follows from (\ref{any-Gamma}). Finally, we note that, under the flipping operator, $$\mathfrak{F}_{-m}e_{m,0} = (-1)^m\,e_{m,0}.$$ \subsubsection{Case I(b)} Any weakly holomorphic modular form $f$ of weight $k<1$ gives an extension of the form $$0\longrightarrow \tilde{e}xt{\rm DS}^+(\nu)\longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm FD}(\nu)\longrightarrow 0,$$ where $\nu=1-k$, as usual. The submodule is generated by $R_k^{\nu}f$, which is again holomorphic by either (\ref{autovan2}) or (\ref{trick}). For example, for the form $f = 1/\Delta$ of weight $-12$, where $\Delta$ is the usual cusp form of weight $12$ for ${\tilde{e}xt {\rm SL}}_2(\mathbb{Z})$, we have $\nu=13$ and, recalling (\ref{trick}), $$h=R^{13}_{-12} f = (2i)^{13}\,f^{(13)}$$ is a weakly holomorphic form of weight $14$. Then the extension is $$0\longrightarrow M\left(\tilde h\right) \longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm FD}(13)\longrightarrow 0.$$ As an even simpler example, we can take $f = j$, the modular $j$-invariant, with $k=0$ and $\nu=1$. Then we have $h:= R_0f= 2 i \,j'$ and we get an extension $$0\longrightarrow M\left(\tilde h\right) \longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm FD}(1)\longrightarrow 0$$ with the trivial representation as quotient. \subsubsection{Case I(c)} Let $F\in M_k^{!}\backslash\{0\}$ and set $G:=\mathfrak{F}_kF$. If $k=0$, we assume that $F$ is not a constant. We compute, using \varepsilonqref{xiflip}, \begin{align*} \xi_kG =-\frac{(-4\varpii)^{1-k}}{(-k)!}D^{1-k}F\neq 0 \varepsilonnd{align*} since $F\neq0$ (resp. $F$ is non-constant for $k=0$). Moreover, by \varepsilonqref{Dflip}, \begin{align*} D^{1-k}G =\frac{(-k)!}{(4\varpii)^{1-k}}\xi_k F =0 \varepsilonnd{align*} since $F\in M_k^!$. This gives the claim. 
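\begin{remark}
The raising and lowering identities for the $e_{r,m-r}$ recorded in Case I(a) are also easy to confirm symbolically. The following minimal SymPy sketch does so for $m=3$; it assumes the standard normalizations $R_k=2i\partial_\tau+k/v$ and $L_k=-2iv^{2}\partial_{\overline{\tau}}$ of the raising and lowering operators, which are consistent with the identities displayed above.
\begin{verbatim}
import sympy as sp

u, X = sp.symbols('u X', real=True)
v = sp.symbols('v', positive=True)
tau, taub = u + sp.I*v, u - sp.I*v

def e(r, m):
    # e_{r,m-r}(tau)(X), as defined in Case I(a)
    return sp.Integer(-1)**(m - r)/sp.factorial(r) * v**(r - m) \
           * (X - tau)**r * (X - taub)**(m - r)

d_tau  = lambda f: (sp.diff(f, u) - sp.I*sp.diff(f, v))/2   # d/d(tau)
d_taub = lambda f: (sp.diff(f, u) + sp.I*sp.diff(f, v))/2   # d/d(bar tau)
R = lambda k, f: 2*sp.I*d_tau(f) + sp.Integer(k)/v*f        # raising, weight k
L = lambda k, f: -2*sp.I*v**2*d_taub(f)                     # lowering, weight k

m = 3
for r in range(m + 1):
    k = m - 2*r                                  # e_{r,m-r} has weight m-2r
    low = L(k, e(r, m)) - ((r + 1)*(m - r)*e(r + 1, m) if r < m else 0)
    rai = R(k, e(r, m)) - (e(r - 1, m) if r > 0 else 0)
    assert sp.simplify(low) == 0 and sp.simplify(rai) == 0
print("raising/lowering identities verified for m =", m)
\end{verbatim}
\end{remark}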
These last two cases, say for $\frac{1}{\Delta}$ and its flip $\mathfrak{F}_{-12}\frac{1}{\Delta}$, can be pictured as follows: For case I(b) \begin{equation}\label{5.6} \xymatrix{ {}&{\bullet} \alphar@/^/[r]^{R_{-12}}&{\circ}\alphar@/^/[l]^{L_{-10}} \alphar@/^/[r]^{R_{-10}}&\ \partialots\ \alphar@/^/[l]^{L_{-8}}\alphar@/^/[r]^{R_8}& {\circ}\alphar@/^/[r]^{R_{10}}\alphar@/^/[l]^{L_{10}}&\alphar@/^/[l]^{L_{12}}{\circledast}\alphar@/^/[r]^{R_{12}}& \odot\alphar@/^/[r]^{R_{14}}&{\circ}\alphar@/^/[r]^{R_{16}}\alphar@/^/[l]^{L_{16}}&{\circ}\alphar@/^/[l]^{L_{18}}\alphar@/^/[r]^{R_{18}}&\cdots\alphar@/^/[l]^{L_{20}} }, \varepsilonnd{equation} where $\bullet$ indicates $\frac{1}{\Delta}$, $\circledast$ indicates the form $R_{-12}^{12}\frac{1}{\Delta}$ of weight $12$, and $\odot$ indicates the weight $14$ holomorphic form $h$. The omitted arrows are zero. Applying the flip yields the example for case I(c): \begin{equation}\label{5.7} \xymatrix{ \partialots\alphar@/^/[r]^{R_{-20}}&{\circ}\alphar@/^/[r]^{R_{-18}}\alphar@/^/[l]^{L_{-18}}&{\circ}\alphar@/^/[r]^{R_{-16}}\alphar@/^/[l]^{L_{-16}}&{\breve{\odot}}\alphar@/^/[l]^{L_{-14}}& {\breve\circledast} \alphar@/^/[r]^{R_{-12}}\alphar@/^/[l]^{L_{-12}} &{\circ}\alphar@/^/[l]^{L_{-10}} \alphar@/^/[r]^{R_{-10}}&\ \partialots\ \alphar@/^/[l]^{L_{-8}}\alphar@/^/[r]^{R_8}& {\circ}\alphar@/^/[r]^{R_{10}}\alphar@/^/[l]^{L_{10}}&\alphar@/^/[l]^{L_{12}}{\breve\bullet} }, \varepsilonnd{equation} where $\breve\bullet$ indicates the form $v^{12}\overline{\frac{1}{\Delta}}$ of weight $12$, $\breve{\circledast}$ indicates the harmonic form $v^{-12}\overline{R_{-12}^{12}\frac{1}{\Delta}} = \mathfrak{F}_{-12}\frac{1}{\Delta}$, and $\breve{\odot}$ indicates the weight $-14$ anti-holomorphic form $v^{14}\bar h$. Again, the omitted arrows are zero. Here is should be noted that the location of the harmonic Maass form in the diagram changes under the flip, from $\bullet$ in (\ref{5.6}) to $\breve\circledast$ in (\ref{5.7}). This is due to the harmonicity requirement, i.e., that the harmonic Maass form is annihilated by the composition $R_{r-2}\circ L_k$. \subsubsection{Case I(d)} Here we want to realize a copy of the $(\mathfrak g,K)$-module $I(\nu)$ for $\nu=1-k$ and $k<1$ in the space of automorphic forms. The simplest example is given by adding a weakly holomorphic form and its flip. To be more precise, let $F\in M_k^!\backslash\{0\}$ and set $G:=F+\mathfrak{F}_k(F)$. Then, by \varepsilonqref{xiflip} and the fact that $F\in M_k^!$, \begin{align*} \xi_kG =-\frac{(-4\varpii)^{1-k}}{(-k)!}D^{1-k}F\neq 0. \varepsilonnd{align*} Moreover, using \varepsilonqref{Dflip} yields \begin{align*} D^{1-k}G=D^{1-k}F\neq0. \varepsilonnd{align*} A second example can be constructed using Eisenstein series and we refer to \cite{kudla.yang.eis} as a convenient reference. As in Section 3 of loc.cit., for $r\in2\mathbb{Z}$, we let \begin{equation}\label{def-Eis} E_{r,s}(\tau) = \sum_{\mathfrak gamma=\left(\begin{smallmatrix}a&b\\c&d\varepsilonnd{smallmatrix}\right)\in \Gammaamma_{\infty}\backslash {\tilde{e}xt {\rm SL}}_2(\mathbb{Z})} (c\tau+d)^{-r} \, \operatorname{Im}(\mathfrak gamma\tau)^{\frac12(s+1-r)}. \varepsilonnd{equation} This series is absolutely convergent for ${\rm Re}(s)>1$ and defines an automorphic form of weight $r$. 
Its Fourier series is given by Proposition~3.1 of loc.cit.: \begin{align*} E_{r,s}(\tau) &= v^\beta + v^{\beta-s}\,2\varpii i^r \frac{2^{-s}\Gammaamma(s)}{\Gammaamma(\alphalpha)\Gammaamma(\beta)}\,\frac{\zeta(s)}{\zeta(s+1)} \\ &\quad+\frac{i^r (2\varpii)^{s+1}\,v^\beta}{\Gammaamma(\alpha)\,\zeta(s+1)} \sum_{m=1}^\infty \sigma_s(m)\, \Phisi(\beta,s+1;4\varpii m v)\,q^m \\ &\quad+ \frac{i^r (2\varpii)^{s+1}\,v^\beta}{\Gammaamma(\beta)\,\zeta(s+1)}\sum_{m=1}^\infty \sigma_s(m)\, \Phisi(\alphalpha,s+1;4\varpii m v)\,\overline{q^m}, \varepsilonnd{align*} where $\Gammaamma(s)$ is the usual Gamma function, $\zeta(s)$ the Riemann zeta-function, \[ \alphalpha:= \frac12(s+1+r),\qquad \beta:=\frac12(s+1-r),\qquad \sigma_s(m):= \sum_{d\mid m} d^s, \] and $\Phisi(a,b;z)$ is the confluent hypergeometric function, given by $$\Phisi(a,b;z) := \frac{1}{\Gammaamma(a)}\int_0^\infty e^{-zt}\,(1+t)^{b-a-1}\,t^{a-1}\,dt.$$ Note that we take $\Phisi(0,b;z)=1$ (cf. \cite{kudla.yang.eis}, the equation after (2.20)). Recall that the principal series representation\footnote{Here we write $s$ for $\nu$ and take $\varepsilon=0$.} $I(s)$ is spanned by the functions $\varpihi_r(s)$, defined in (\ref{ind-K-type}). For ${\rm Re}(s)>1$, we obtain a linear map \begin{equation*} \widetilde{E(s)}: I(s) \longrightarrow A(G;\Gammaamma), \qquad \varpihi_r \mapsto \widetilde{E_{r,s}}, \varepsilonnd{equation*} and this map is $(\mathfrak g,K)$-intertwining. Thus, by (\ref{PS.action}), \begin{align} L_rE_{r,s} &= \frac12(s+1-r) E_{r-2,s},\label{lower-Er} \quad \tilde{e}xt{and}\\ R_rE_{r,s} &= \frac12(s+1+r)\,E_{r+2,s}.\label{raise-Er} \varepsilonnd{align} In particular, set $\varepsilonll=-k$ with $k<0$ and let $s_0 = 1+\varepsilonll=\nu$. Then we obtain a $(\mathfrak g,K)$-intertwining map $$\widetilde{E_{s_0}}: I(s_0) \longrightarrow A(G;\Gammaamma), \qquad \varpihi_r \mapsto \widetilde{E_{r,s_0}}.$$ In fact, this map is injective. To see this, note that the constant term of $E_{r,s_0}$ equals $$v^{1+\frac12(\varepsilonll-r)} + v^{-\frac12(\varepsilonll+r)}\, \frac{2\varpii i^r\,2^{-\varepsilonll-1}\Gammaamma(\varepsilonll+1)}{\Gammaamma\left(1+\frac12(\varepsilonll+r)\right)\Gammaamma\left(1+\frac12(\varepsilonll-r)\right)}\,\frac{\zeta(\varepsilonll+1)}{\zeta(\varepsilonll+2)},$$ where, if $|r|\mathfrak ge \varepsilonll+2$, the second term is zero due to the pole in the denominator. Thus $E_{r,s_0}\ne 0$ and these functions are linearly independent since they have distinct weights. The function $$ f(\tau) := E_{k,1-k}(\tau) $$ is the harmonic Maass form of weight $k$ for ${\tilde{e}xt {\rm SL}}_2(\mathbb{Z})$. Its Fourier expansion is given by \begin{align*} E_{\varepsilonll+1,-\varepsilonll}(\tau) &= v^{\varepsilonll+1} + 2\varpii\, i^{\varepsilonll} \, 2^{-\varepsilonll-1}\,\frac{\zeta(\varepsilonll+1)}{\zeta(\varepsilonll+2)}\\ \noalign{ } {}&\qquad\qquad + \frac{i^{\varepsilonll} (2\varpii)^{\varepsilonll+2}\,v^{\varepsilonll+1}}{\zeta(\varepsilonll+2)}\sum_{m=1}^\infty \sigma_{\varepsilonll+1}(m)\, \Phisi(\varepsilonll+1,\varepsilonll+2;4\varpii m v)\,q^m\\ \noalign{ } {}&\qquad\qquad + \frac{i^{\varepsilonll} (2\varpii)^{\varepsilonll+2}\,v^{\varepsilonll+1}}{\Gammaamma(\varepsilonll+1)\,\zeta(\varepsilonll+2)}\sum_{m=1}^\infty \sigma_{\varepsilonll+1}(m)\, \Phisi(1,\varepsilonll+2;4\varpii m v)\,\overline{q^m}. 
\varepsilonnd{align*} But we have \begin{align*} \Phisi(\varepsilonll+1,\varepsilonll+2;z) &= \frac{1}{\Gammaamma(\varepsilonll+1)}\int_0^\infty e^{-z t}\,t^\varepsilonll\,dt = z^{-\varepsilonll-1},\qquad\tilde{e}xt{and}\\ \Phisi(1,\varepsilonll+2;z) &= e^{z}\,\int_1^\infty e^{-zt} \,t^\varepsilonll\,dt =z^{-\varepsilonll-1}\,e^{z}\,\int_z^\infty e^{-t} \,t^\varepsilonll\,dt = z^{-\varepsilonll-1}\,e^{z}\,\Gammaamma(\varepsilonll+1,z). \varepsilonnd{align*} Thus \begin{align*} E_{\varepsilonll+1,-\varepsilonll}(\tau) &= v^{\varepsilonll+1} +2\varpii\, 2^{-\varepsilonll-1}\, i^{\varepsilonll} \,\frac{\zeta(\varepsilonll+1)}{\zeta(\varepsilonll+2)} +\frac{2\varpii \,2^{-\varepsilonll-1}\,i^{\varepsilonll} }{\zeta(\varepsilonll+2)}\sum_{m=1}^\infty \sigma_{-\varepsilonll-1}(m)\, q^m\\ \noalign{ } {}&\qquad\qquad + \frac{2\varpii \,2^{-\varepsilonll-1}\,i^{\varepsilonll} }{\Gammaamma(\varepsilonll+1)\,\zeta(\varepsilonll+2)}\sum_{m=1}^\infty \sigma_{-\varepsilonll-1}(m)\, \,\beta_{-\varepsilonll}(4\varpii m v)\,q^{-m}, \varepsilonnd{align*} as in (\ref{weak-Fourier}). The image of this series under $\xi_k= \xi_{-\varepsilonll}$ is $1-k$ times the holomorphic Eisenstein series $E_{2-k}(\tau)$ of weight $2-k$, while its image under $R_k^{1-k}$ is a non-zero multiple of $E_{2-k}(\tau)$. \\ We omit the case $k=0$.\\ \begin{remark} One could simply define, for $k\in -2\mathbb{N}$, \begin{align*} \mathcal{Q}_k(\tau) :=\sum_{\mathfrak gamma\in\Gammaamma_\infty\backslash {\tilde{e}xt {\rm SL}}_2(\mathbb{Z})}v^{1-k}\mathbb Big|_k\mathfrak gamma. \varepsilonnd{align*} Then $\mathcal{Q}_k\in H_k^\tilde{e}xt{mg}$ and \begin{align*} \xi_k\left(\mathcal{Q}_k\right) &=(1-k)E_{2-k},\\ D^{1-k}\left(\mathcal{Q}_k\right) &=-(4\varpii)^{1-k}(1-k)!E_{2-k}. \varepsilonnd{align*} However, we included the description including an $s$-parameter as this adds an extra perspective described in Section~\ref{section.related}. \varepsilonnd{remark} \subsection{Case II: $k=1$} \subsubsection{Case II(a)} Any weakly holomorphic modular form $f $ of weight $1$ gives an example.\\ \subsubsection{Case II(b)} In this case we want to realize the extension of $(\mathfrak g,K)$-modules \begin{equation}\label{tiny.ext} 0\longrightarrow \tilde{e}xt{\rm LDS}^-(0) \longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm LDS}^+(0)\longrightarrow 0, \varepsilonnd{equation} which, in classical language, is equivalent to finding a harmonic Maass form $f $ of weight $1$ whose image under $\xi_1$ is a holomorphic modular form of weight $1$. An example for this case is given by the derivatives of incoherent Eisenstein series of weight $1$, constructed in \cite{kry.tiny}. To describe this, we fix an imaginary quadratic field $\bm{k}$ of prime discriminant $-D$ with $D\varepsilonquiv 3\varpimod{4}$ and $D>3$. Then define the pair of weight $1$ Eisenstein series by \begin{align*} E^{\varpim}_s(\tau): &= v^{\frac{s}2}\sum_{\mathfrak gamma=\left(\begin{smallmatrix} a & b\\ c & d \varepsilonnd{smallmatrix}\right)\in \Gammaamma_\infty\backslash {\tilde{e}xt {\rm SL}}_2(\mathbb{Z})} \Phi_D^{\varpim}(\mathfrak gamma)(c\tau+d)^{-1}|c\tau+d|^{-s},\\ \noalign{\noindent where} \Phi_D^{\varpim}(\mathfrak gamma) :&=\begin{cases} \chi_D(a)&\tilde{e}xt{if $D\mid c$,}\\ \varpim i D^{-\frac12}\,\chi_D(c)&\tilde{e}xt{if $(c,D)=1$,} \varepsilonnd{cases} \varepsilonnd{align*} with $\chi_D $ the quadratic character $D$ associated to $\bm{k}$. 
These series are absolutely convergent for ${\rm Re}(s)>1$ and have a meromorphic continuation to the whole $s$-plane. Let $L(s,\chi_D)$ be the usual Dirichlet L-series associated to $\chi_D$. Then, the analytic continuations in $s$ of the normalized series $$ \widehat{E}^{\varpim}_s(\tau) := \left(\frac{D}{\varpii}\right)^{\frac{s+1}2}\Gammaamma\left(\frac{s+1}2\right)\,L(s,\chi_D)\,E^{\varpim}_s(\tau) $$ satisfies the functional equation $$ \widehat{E}^{\varpim}_{-s}= \varpim \widehat{E}^{\varpim}_s. $$ We refer to $\widehat{E}^{+}_s$ (resp. $\widehat{E}^{-}_s$) as the {\it coherent} (resp. {\it incoherent}) Eisenstein series associated to $\bm{k}$ (see \cite{kry.tiny}). For the coherent Eisenstein series $\widehat{E}^{+}_s$, we have $$ \frac12\, \widehat{E}^{+}_0(\tau) = h_{\bm{k}} + 2 \sum_{n=1}^\infty \rho(n)\,q^n = \sum_{\frak a} \vartheta_{\frak{a}}(\tau), $$ where the sum runs over representatives $\frak a$ for the ideal classes, $h_{\bm{k}}$ is the class number of $\bm{k}$, and $\rho(n)$ is the number of integral ideals of norm $n$. Here $$ \vartheta_{\frak{a}}(\tau):= \sum_{n\in \frak a} q^{\frac{N(n)}{N(\frak a)}} $$ is Hecke's weight $1$ theta series for the ideal class of the fractional ideal $\frak a$. Due to the functional equation, the incoherent series $\widehat{E}^-_s$ vanishes at $s=0$, so we instead consider the function $$ \varpihi(\tau) := \frac12\,\frac{\partial}{\partial s}\left[ \widehat{E}^-_{s}(\tau)\right]_{s=0} =: a_v(0) + \sum_{n=-\infty}^1 a_v(n) \,q^{n} + \sum_{n=1}^\infty a(n)\, q^n. $$ According\footnote{Note that we have changed the sign compared to \cite{kry.tiny}.} to Theorem 1 of \cite{kry.tiny}, we have $$ a_v(n) =\begin{cases} -2\log(D)\,(\tilde{e}xt{\rm ord}_D(n)+1)\rho(n) - 2\sum_{p\ne D} \log(p)\,(\tilde{e}xt{\rm ord}_p(n)+1)\,\rho\left(\frac{n}{p}\right)&\tilde{e}xt{for $n>0$},\\ h_{\bm{k}}\, \left(\log(D) + \frac12\frac{\Lambda'(1,\chi_D)}{\Lambda(1,\chi_D)} +\log(v)\right) &\tilde{e}xt{for $n=0$},\\ -2\,\rho(-n)\,\beta_1(4\varpii |n|v)&\tilde{e}xt{for $n<0$}, \varepsilonnd{cases} $$ where $\Lambda(s,\chi_D) := \varpii^{-\frac12(s+1)}\,\Gammaamma(\frac12(s+1))\,L(s,\chi_D)$, and $\beta_1(x)=\Gammaamma(0,x)$ is the incomplete $\Gammaamma$-function. Here we slightly abuse notation and write $a_v(n) = a(n)$ if $n>0$. Applying the $\xi$-operator, we obtain $$ \xi_1\varpihi= \frac12\, \widehat{E}_0^+. $$ Thus, taking the function $f (\tau) = \varpihi(\tau)$ on $\mathbb{H}$ with corresponding function $\tilde{f}(g) = j(g,i)^{-1}\,\varpihi(g(i))$ on $G$, we obtain the desired extension (\ref{tiny.ext}) of $(\mathfrak g,K)$ modules, where the submodule $\tilde{e}xt{\rm LDS}^-(0)$ is generated by the function $$ j(g,i)\,\frac12\, \overline{\widehat{E}_0^+\left(g(i)\right)}. $$ \subsection{Case III: $k>1$} \subsubsection{Case III(a)} The function $\tilde{f}$ on $G$ corresponding to any holomorphic cusp form $f $ of weight $k$ generates a copy of $\tilde{e}xt{\rm DS}^+(\nu)$ and hence provides an example for this case. \\ \subsubsection{Case III(b)} For $k=2$, let \begin{equation}\label{E2star} f (\tau) = E_2^\alphast(\tau) := 1 - 24\sum_{n=1}^\infty \sum_{d\vert n}d\,q^n-\frac{3}{\varpii v} , \varepsilonnd{equation} be the classical non-holomorphic Eisenstein series of weight $2$. 
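\begin{remark}
As a quick numerical sanity check on (\ref{E2star}) (a sketch, not needed for the argument), one can verify that $E_2^\ast$ transforms with weight $2$ under $\tau\mapsto -1/\tau$ by truncating the $q$-expansion:
\begin{verbatim}
import cmath
from math import pi

def sigma1(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def E2_star(tau, N=200):
    # truncation of (E2star): 1 - 24 sum_n sigma_1(n) q^n - 3/(pi v)
    q = cmath.exp(2j*pi*tau)
    return 1 - 24*sum(sigma1(n)*q**n for n in range(1, N)) - 3/(pi*tau.imag)

tau = complex(0.3, 0.8)
# close to 0, up to truncation and rounding error
print(abs(E2_star(-1/tau) - tau**2 * E2_star(tau)))
\end{verbatim}
\end{remark}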
Then $L_2E^*_2 = \frac{3}{\varpii}$ and the $(\frak g,K)$-module $M(\tilde{f})$ generated by the corresponding $\tilde{f}$ gives an extension $$0\longrightarrow \tilde{e}xt{\rm FD}(1) \longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm DS}^+(1)\longrightarrow 0,$$ with the trivial representation $\tilde{e}xt{\rm FD}(1)$ as a submodule and the holomorphic discrete series $\tilde{e}xt{\rm DS}^+(1)$ of weight $2$ as quotient. This example was discussed in connection with the theory of nearly holomorphic modular forms in \cite{PSS}.\\ In general, in case III(b), we want an extension $$0\longrightarrow \tilde{e}xt{\rm FD}(\nu) \longrightarrow M\left(\tilde{f}\right) \longrightarrow \tilde{e}xt{\rm DS}^+(\nu)\longrightarrow 0,$$ and thus, in classical language, a harmonic Maass form $f $ of weight $k = \nu+1>2$ such that $\xi_kf\ne 0$ but $R_{2-k}^{k-1} \xi_kf =0$, so that $\xi_kf$ generates the ``finite dimensional'' piece. This cannot happen for scalar-valued forms of weight $k>2$. Indeed, the form $h=\xi_kf $ is a weakly holomorphic form of weight $2-k <0$. As in the proof of Lemma~\ref{lemma6.5}, the vanishing of $R_{2-k}^\nu h$ implies that $h$ is a constant and hence $h=0$ if $k>2$. Thus, in the scalar-valued case, only case III(b) with $\nu=1$, i.e., the $E_2^*$ example, can occur. However, in the vector-valued case, we can construct more examples as follows. As in the introduction, define a polynomial-valued function \begin{equation}\label{Ek} E^*_{m+2}(\tau):= \sum_{r=0}^m \frac{1}{r+1} {m\choose r}\,e_{r,m-r}(\tau)\,R^r\,E^*_2(\tau), \varepsilonnd{equation} a linear combination of products of the polynomials $e_{r,m-r}(\tau)$ and the $R^r\,E^*_2(\tau)$'s. Here and in the following, we often omit the subscript on the raising and lowering operators to lighten the notation. \begin{proposition}\label{prop-of-Ek} The polynomial-valued function $E^*_{m+2}$ satisfies $$ E^*_{m+2}(\mathfrak gamma\tau) = j(\mathfrak gamma,\tau)^{m+2}\,\rho_m(\mathfrak gamma)\,E^*_{m+2}(\tau). $$ Moreover $$ L_{m+2}\,E^*_{m+2} = \frac{3}{\varpii} \,e_{0,m}. $$ \varepsilonnd{proposition} \begin{proof} For the transformation law, we have \begin{align*} E^*_{m+2}(\mathfrak gamma\tau) &= \sum_{r=0}^m \frac{1}{r+1} {m\choose r}\,e_{r,m-r}(\mathfrak gamma\tau)\,R^r\,E^*_2(\mathfrak gamma\tau)\\ \noalign{ } {}&=\sum_{r=0}^m \frac{1}{r+1} {m\choose r}\,j(\mathfrak gamma,\tau)^{m-2r}\rho_m(\mathfrak gamma)e_{r,m-r}(\tau)\,j(\mathfrak gamma,\tau)^{2+2r}\,R^r\,E^*_2(\tau)\\ \noalign{ } {}&=j(\mathfrak gamma,\tau)^{m+2}\,\rho_m(\mathfrak gamma)\,E^*_{m+2}(\tau), \varepsilonnd{align*} as claimed. Next, applying the lowering operator and using the fact that $$ R_{k-2}L_k = L_{k+2}R_k+k, $$ we have $$ L\,R^r \,E^*_2 = -2r R^{r-1}\,E^*_2 + RLR^{r-1}\,E^*_2, $$ and hence $$ LR^r\,E_2^* = -r(r+1) R^{r-1}\,E_2^*. $$ Then, for $r\in\mathbb{N}$, \[ L_{m+2}\big(\,e_{r,m-r}\,R^r\,E^*_2\,\big)= (r+1)(m-r)\,e_{r+1,m-r-1} \,R^r\,E^*_2 - r(r+1)\,e_{r,m-r}\,R^{r-1}\,E^*_2, \] while, $$ L_{m+2}\big(\,e_{0,m}\,E^*_2\,\big) = m \,e_{1,m-1}\,E^*_2 + \frac{3}{\varpii}\,e_{0,m}. $$ For $r\in\mathbb{N}_0$, the coefficient of $e_{r+1,m-r-1}(\tau) \,R^r\,E^*_2(\tau)$ in $L_{m+2}(\,e_{r,m-r}(\tau)\,R^r\,E^*_2(\tau)\,)$ equals \begin{align*} \frac{1}{r+1} {m\choose r}(r+1)(m-r) - \frac{1}{r+2} {m\choose r+1}(r+1)(r+2). \varepsilonnd{align*} This vanishes for $0\le r<m$ and the claimed lowering identity follows. 
\varepsilonnd{proof} Here is the picture of the corresponding $(\mathfrak g,K)$-module: $$ \xymatrix{ {}&{\scriptstyle\ominus} \alphar@/^/[r]^{R_{2-k}}&{\circ}\alphar@/^/[l]^{L_{4-k}} \alphar@/^/[r]^{R_{4-k}}&\ \cdots\ \alphar@/^/[l]^{L_{6-k}}\alphar@/^/[r]^{R_{k-6}}& \alphar@/^/[l]^{L_{k-4}}{\circ}\alphar@/^/[r]^{R_{k-4}}& {\scriptstyle\oplus}\alphar@/^/[l]^{L_{k-2}}&{\bullet}\alphar@/^/[r]^{R_{k}}\alphar@/^/[l]^{L_{k}}&{\circ}\alphar@/^/[l]^{L_{k+2}}\alphar@/^/[r]^{R_{k+2}}&\cdots\alphar@/^/[l]^{L_{k+4}} } $$ where $\bullet$ indicates $E^*_{k}$, ${\scriptstyle\oplus}$ indicates $e_{0,m}$ and ${\scriptstyle\ominus}$ indicates $e_{m,0}$. Note that in all of these pictures there are unspecified, but non-zero, transition constants. \subsubsection{Case III (c)} A natural construction of positive weight harmonic Maass forms goes through sesquiharmonic Maass forms. They satisfy condition (1) and (3) of harmonic Maass forms but condition (2) is replaced by \begin{equation}\label{Dk2} \Delta_{k,2}f=0\qquad\tilde{e}xt{with}\qquad \Delta_{k,2}:= -\xi_k\circ\xi_{2-k}\circ\xi_k. \varepsilonnd{equation} Let $H_{k,2}^{\rm{mg}}$ be the space of sesqui-harmonic Maass forms. A way to construct sesquiharmonic forms is to differentiate, with respect to an additional parameter as we do for weight $1$ in Subsection 5.2.2. In weight $3/2$ this has been done by Duke, Imamoglu, and Toth \cite{DIT} in the context of finding a preimage under the $\xi$-operator of the Hirzebruch-Zagier Eisenstein series. Let \[ H_{k,2}^{\sharp}:=\left\{f\in H_{k,2}:\xi_k(f)\in H_{2-k}^{\sharp}\right\}. \] In \cite{BDR} it was shown that the map \[ \xi_k: H_{k, 2}^{\sharp}\to H_{2-k}^{\sharp} \] is surjective. Note that a similar construction can be carried out for all forms mapping to $H_{2-k}$, as we show below. We use Poincar\'e series and differentiate with respect to an extra $s$-parameter. To be more precise let for $m\in\mathbb{Z}\setminus\{0\}$ \begin{equation*} \mathbb{F}_{k,m}(\tau) := \mathbb{P}_k\left(\varpisi_{k,m}\right) \varepsilonnd{equation*} with \begin{equation*} \varpisi_{k,m}(\tau):=\left[\frac{\varpiartial}{\varpiartial s}\mathcal{M}_{k,s}(4\varpii mv)\right]_{s=\frac{k}{2}}e(mu), \varepsilonnd{equation*} where \[ \mathcal{M}_{k, s}(w):=|w|^{-\frac{k}{2}} M_{\operatorname{sgn}(w)\frac{k}{2}, s-\frac12}(|w|). \] Then $\mathbb{F}_{k,m}\in H_{k,2}$ and, with same constant $c_{k, m}\neq 0$, \begin{equation*} \xi_k\mathbb{F}_{k,m}=c_{k, m} F_{2-k,-m}. \varepsilonnd{equation*} Now consider in particular \begin{equation*} f := \mathbb{F}_{4,1}. \varepsilonnd{equation*} Then \begin{equation*} \xi_4f = c_{4, 1}F_{-2,-1} \in H_{-2}. \varepsilonnd{equation*} However, since the space of dual weight, $S_4 =\{0\}$, $\xi_4f \in M^{!}_{-2}$, and thus $f\in H^{\mathrm{mg}}_4$. \section{Some related examples}\label{section.related} It is natural to consider automorphic forms that are not harmonic but are annihilated by a power of the weight $k$ Laplacian, i.e., with condition (2) replaced by \begin{equation}\label{higher-power-harmonic} \Delta_k^\varepsilonll f=0, \varepsilonnd{equation} for some $\varepsilonll\in\mathbb{N}$. In particular for harmonic Maass forms we have $\varepsilonll=1$ and for sesqui-harmonic forms $\varepsilonll=2$. Now, if the minimal power of $\Delta_k$ annihilating $f$ is greater than $1$, the relations (\ref{multi-one-1}) and (\ref{multi-one-2}) need not hold and, as a result, the $K$-types in $M(\tilde{f})$ can occur with higher multiplicity. Much more complicated $(\mathfrak g,K)$-modules can arise. 
Here we give a couple of examples, leaving a more systematic analysis to another occasion. First, we consider the Eisenstein series of weight $0$ given by (\ref{def-Eis}) with $r=0$. It has a simple pole at $s=1$ and Laurent expansion $$2\,\zeta(s+1)\,E_{r,0}(\tau) = \frac{2\varpii}{s-1}+ 2\varpii\,\big(\,\mathfrak gamma-\log(2))+\frac{\varpii}{2}\,\varpihi(\tau) + O(s-1),$$ where, by the Kronecker limit formula \cite{lang,siegel}, is as defined in \varepsilonqref{Kronecker-fun}. An easy calculation shows that $$L_0\varpihi(\tau) = \frac{\varpii}3\,v^2\, \overline{E_2^*(\tau)},\quad\tilde{e}xt{and}\quad R_0\varpihi(\tau) = \frac{\varpii}3\,E_2^*(\tau).$$ Thus $$\Delta_0\varpihi = L_2R_0\varpihi= 1,$$ so that $\Delta_0^2\varpihi=0$. Actually, since $\Delta_0\varpihi$ is already annihilated by $R_0$, $\varpihi$ is annihilated by $\Delta_{0,2}$ as defined in \varepsilonqref{Dk2} and thus is sesquiharmonic. The $(\mathfrak g,K)$-module generated by $\widetilde{\varpihi}$ contains the trivial representation of $K$ with multiplicity $2$ and all other even weights with multiplicity $1$. This $(\mathfrak g,K)$-module is pictured in Figure 1 at the end of the introduction, where $\bullet$ indicates the function $\varpihi$, $\odot$ indicates the constant function, and the omitted arrows $R_0$ and $L_0$ originating from $\odot$ are zero. This $(\mathfrak g,K)$-module has a socle filtration with the trivial representation as maximal semi-simple submodule, the direct sum of the weight $2$ holomorphic and anti-holomorphic discrete series as intermediate subquotient, and the trivial representation as the unique irreducible quotient. A little more generally, consider the Laurent expansion of the Eisenstein series (\ref{def-Eis}) at any point $s_0$ with ${\rm Re}(s_0)>1$, $$E_{\varepsilonll,s}(\tau) =\sum_r A_{r,\varepsilonll,s_0}(\tau)\,(s-s_0)^r .$$ Note that for $r<0$, $A_{r,\varepsilonll,s_0}(\tau)=0$ for all $\varepsilonll$, since $E_{\varepsilonll,s}(\tau)$ has no pole at $\tau=s_0$ in the half-plane ${\rm Re}(s_0)>1$. Let $\mathcal E_r(s_0)$ be the span of the functions $A_{r,\varepsilonll,s_0}$ for $t\le r$ and $\varepsilonll\in 2\mathbb{Z}$ and let $\widetilde{\mathcal E_r(s_0)}$ be the span of their lifts $\widetilde{A_{r,\varepsilonll,s_0}}$ to $G$ via (\ref{lifttoG}). \begin{proposition}\label{surlinmap} The surjective linear map defined by \begin{equation*} \varpisi_r(s_0): I(s_0) \longrightarrow \widetilde{\mathcal E_r(s_0)}\bigg/ \widetilde{\mathcal E_{r-1}(s_0)},\qquad \varpihi_\varepsilonll \mapsto \widetilde{A_{r,\varepsilonll,s_0}} \varepsilonnd{equation*} is equivariant for the action of $(\mathfrak g,K)$. Moreover, this map is an isomorphism. \varepsilonnd{proposition} \begin{remark} Proposition \ref{surlinmap} yields that the space $\widetilde{\mathcal E_r(s_0)}$ has a $(\mathfrak g,K)$-invariant filtration with $I(s_0)$ as subquotients. The interesting case is if $s_0=k-1$ for an even integer $k>2$, so that the structure of $I(s_0)$ is given by (\ref{Is0-decomp}). The quotients $\widetilde{\mathcal E_r(s_0)}/\widetilde{\mathcal E_{r-j}(s_0)}$ for $j>1$ reveal a more complicated extension of $(\mathfrak g,K)$-modules due to the relations (\ref{Laurent-lower}) and (\ref{Laurent-raise}). The situation is illustrated in Figure 1 below. 
\varepsilonnd{remark} \begin{proof}[Proof of Proposition \ref{surlinmap}] Relations (\ref{lower-Er}) and (\ref{raise-Er}) imply that \begin{align} L_\varepsilonll A_{r,\varepsilonll,s_0} &= \frac12(s_0+1-\varepsilonll)\,A_{r,\varepsilonll-2,s_0} + \frac12\,A_{r-1,\varepsilonll-2,s_0}\quad\tilde{e}xt{and}\label{Laurent-lower}\\ R_\varepsilonll A_{r,\varepsilonll,s_0} &= \frac12(s_0+1+\varepsilonll)\,A_{r,\varepsilonll+2,s_0} + \frac12\,A_{r-1,\varepsilonll+2,s_0}.\label{Laurent-raise} \varepsilonnd{align} There are analogous relations for the action of $X_\varpim$ on the $\widetilde{A_{r,\varepsilonll,s_0}}$'s. Moreover, we have $$\Delta_\varepsilonll E_{\varepsilonll,s} = \frac14\,(s+1-\varepsilonll)(s+1+\varepsilonll-2)\,E_{\varepsilonll,s},$$ and hence \begin{equation}\label{delta-rel} \Delta_\varepsilonll A_{r,\varepsilonll,s_0}=\frac14(s_0+1-\varepsilonll)(s_0+1+\varepsilonll-2)\,A_{r,\varepsilonll,s_0} + \frac12 s_0\,A_{r-1,\varepsilonll,s_0} + \frac14\,A_{r-2,\varepsilonll,s_0}. \varepsilonnd{equation} Now suppose that $s_0 = k-1$ so that $A_{0,k,s_0}\ne0$ is the standard weight $k$ holomorphic Eisenstein series. For any $r\in\mathbb{N}_0$, relation (\ref{delta-rel}) then gives $$ \Delta_k A_{r,k,s_0} = \frac12 s_0\,A_{r-1,k,s_0} + \frac14\,A_{r-2,k,s_0}. $$ Hence $$\Delta_k^r \,A_{r,k,s_0} = 2^{-r} s_0^r\,A_{0,k,s_0}\ne 0\quad{\rm and}\quad \Delta_k^{r+1}A_{r,k,s_0}=0.$$ In particular, the functions $A_{r,k,s_0}$ of weight $k$ are linearly independent as $r$ varies and give examples of functions satisfying (\ref{higher-power-harmonic}). Moreover, it follows that $A_{r,k}\notin \mathcal E_{r-1}(s_0)$ so that the map $\varpisi_r(s_0)$ is non-zero. Also note that, by the raising relation above, $$R_{k-2}A_{r,k-2,s_0} = (k-1)\,A_{r,k,s_0} + \frac12\,A_{r-1,k,s_0},$$ so that the image of $\varpihi_{k-2}$ under $\varpisi_r(s_0)$ is also non-zero. By equivariance, it follows that $\varpisi_r(\varpihi_{2-k})\ne0$. On the other hand, again by (\ref{delta-rel}), we have $$(\Delta_{-k}+k)A_{r,-k,s_0} = \frac12 s_0\,A_{r-1,-k,s_0} + \frac14\,A_{r-2,-k,s_0},$$ so that \begin{align*} (\Delta_{-k}+k)^rA_{r,-k,s_0} &= 2^{-r}\,s_0^r\,A_{0,-k,s_0}\quad\tilde{e}xt{and} \\ (\Delta_{-k}+k)^{r+1}A_{r,-k,s_0} &= 0. \varepsilonnd{align*} But $$A_{0,-k,s_0}(\tau) = E_{-k,s_0}(\tau) = v^k\sum_{\left(\begin{smallmatrix} a & b\\ c & d \varepsilonnd{smallmatrix}\right)\in \Gammaamma_{\infty}\backslash {\tilde{e}xt {\rm SL}}_2(\mathbb{Z})} (c\overline{\tau}+d)^{-k},$$ so we again conclude that the $A_{r,-k,s_0}$'s are linearly independent as $r\in\mathbb{N}_0$ varies. This implies that $\varpisi_r(\varpihi_{-k})\ne0$ and thus that $\varpisi_r(s_0)$ is an isomorphism. \varepsilonnd{proof} \varepsilonject The following picture summarizes the structure that arises: $$ \begin{matrix}\tilde{e}xt{\small\bf Figure 2. 
The $(\mathfrak g,K)$-module for the Taylor coefficients $A_{r,\varepsilonll,s_0}$}\\ \noalign{ } \tilde{e}xt{\bf of $E_{\varepsilonll,s}(\tau)$ at $s_0=k-1$} \varepsilonnd{matrix} $$ $$ \xymatrix{ \vdots&&&&&\vdots&&&&&&\vdots\\ \partialots&{\circ}\alphar@/^/[l] \alphar@/^/[r]&{\scriptstyle\odot}\alphar@/^/[l]\alphar[rd]^{R_{-k}}&{\scriptstyle\ominus}\alphar[l]_{L_{2-k}} \alphar@/^/[r]&{\circ}\alphar@/^/[l] \alphar@/^/[r]&\ \partialots\ \alphar@/^/[l]\alphar@/^/[r]& \alphar@/^/[l]{\circ}\alphar@/^/[r]& {\scriptstyle\oplus}\alphar[r]^{R_{k-2}}\alphar@/^/[l]&{\bullet}\alphar@/^/[r]\alphar[ld]_{L_k}&{\circ}\alphar@/^/[l]\alphar@/^/[r]&\partialots\alphar@/^/[l]&\tilde{e}xt{$\scriptstyle r=2$}\\ \partialots&{\circ}\alphar@/^/[l] \alphar@/^/[r]&{\scriptstyle\odot}\alphar@/^/[l]\alphar[rd]^{R_{-k}}&{\scriptstyle\ominus}\alphar[l] \alphar@/^/[r]&{\circ}\alphar@/^/[l] \alphar@/^/[r]&\ \partialots\ \alphar@/^/[l]\alphar@/^/[r]& \alphar@/^/[l]{\circ}\alphar@/^/[r]& {\scriptstyle\oplus}\alphar@/^/[l]\alphar[r]&{\bullet}\alphar@/^/[r]\alphar[ld]_{L_k}&{\circ}\alphar@/^/[l]\alphar@/^/[r]&\partialots\alphar@/^/[l]&\tilde{e}xt{$\scriptstyle r=1$}\\ \partialots&{\circ}\alphar@/^/[l] \alphar@/^/[r]&{\scriptstyle\odot}\alphar@/^/[l]&{\scriptstyle\ominus}\alphar[l] \alphar@/^/[r]&{\circ}\alphar@/^/[l] \alphar@/^/[r]&\ \partialots\ \alphar@/^/[l]\alphar@/^/[r]& \alphar@/^/[l]{\circ}\alphar@/^/[r]& {\scriptstyle\oplus}\alphar[r]\alphar@/^/[l]&{\circledast}\alphar@/^/[r]&{\circ}\alphar@/^/[l]\alphar@/^/[r]&\partialots\alphar@/^/[l]&\tilde{e}xt{$\scriptstyle r=0$} } $$ Here the element in the $\varepsilonll$th column and $r$th row is $A_{r,\varepsilonll,s_0}$. The element $\circledast$ is the standard weight $k$ holomorphic Eisenstein series $A_{0,k,s_0}$. The elements denoted with $\bullet$ (resp. $\odot$) have weight $k$ (resp. $-k$) and are annihilated by powers of $\Delta_k$ (resp. $\Delta_{-k}+k$). The arrows along rows with $r>0$ indicate maps defined up to a scalar factor and an element of the same weight lying in the row below the target, cf. (\ref{Laurent-lower}) and (\ref{Laurent-raise}). The diagonal arrows and the arrows in the $r=0$ row are maps involving non-zero scaling factors. Thus, for example, the $(\mathfrak g,K)$-module generated by $A_{r,k,s_0}$, which is annihilated by $\Delta_k^6$ but no smaller power, has the holomorphic discrete series $\tilde{e}xt{\rm DS}^+(k-1)$ of weight $k$ as unique irreducible quotient. It has as constituents $\tilde{e}xt{\rm DS}^+(k-1)$ with multiplicity $6$, $\tilde{e}xt{\rm DS}^-(k-1)$ with multiplicity $5$ and $\tilde{e}xt{\rm FD}(k-1)$ with multiplicity $5$. \begin{remark}\label{rem10} It is clear that the structures we discuss here for harmonic and other Maass forms on ${\tilde{e}xt {\rm SL}}_2(\mathbb{R})$ have natural generalizations to automorphic forms on other reductive groups, the basic point being that a weakening of the cuspidal or square integrable growth conditions allow indecomposable but not irreducible Harish-Chandra modules to occur as archimedean components. The nearly holomorphic modular forms introduced by Shimura, \cite{shimura.nh-1,shimura.nh-2}, and widely studied since are among the important examples. Recently, in the case of $\tilde{e}xt{Sp}_g(\mathbb{R})$, for genus $g=2$, Pitale, Saha, and Schmidt \cite{PSS.siegel} used the Harish-Chandra modules associated to such forms to prove a structure theorem for them. 
Also for $\tilde{e}xt{Sp}_2(\mathbb{R})$, Westerholt-Raum \cite{raum.siegel} explained the use of Harish-Chandra modules to provide examples of harmonic weak Siegel Maass forms and their holomorphic parts, Siegel mock modular forms. We make no attempt to give a systematic review of these and related developments and apologize in advance for the many references omitted as a result. \varepsilonnd{remark} \begin{thebibliography}{ACD} \bibitem{bol} G. Bol, \varepsilonmph{Invarianten linearer Differentialgleichungen}. Abh. Math. Sem. Univ. Hamburg, {\bf 16}, 128 (1949). \bibitem{BDR} K. Bringmann, N. Diamantis, and M. Raum, {\it Mock period functions, sesquiharmonic Maass forms and non-critical values of $L$-functions}, Advances in Math. {\bf 233} (2012), 971--1013. \bibitem{BDE} K. Bringmann, N. Diamantis, and S. Ehlen, {\it Regularized inner products and errors of modularity}, preprint (2016), arXiv:1603.03056v1. \bibitem{Br} J. Bruinier {\it Borcherds products on $O(2,l)$ and Chern classes of Heegner divisors}, Lecture Notes in Mathematics {\bf 1780}, Springer Verlag (2002) \bibitem{bruinier.funke} J. Bruinier and J. Funke, \tilde{e}xtsl{Two geometric theta lifts}, Duke Math. J. {\bf 125} (2004), 45--90. \bibitem{bump.choie} D. Bump and Y. Choie, \varepsilonmph{Derivatives of modular forms of negative weight}. Pure Appl. Math. Q. {\bf 2} (2006), 111--133. \bibitem{DLMF} Digital Library of Mathematical Functions, http://dlmf.nist.gov/ \bibitem{darmon.lauder.rotger} H. Darmon, A. Lauder, and V. Rotger, \tilde{e}xtsl{Overconvergent generalized eigenforms of weight $1$ and class fields of real quadratic fields,} Advances in Math. {\bf 283} (2015), 130--142. \bibitem{DIT} W. Duke, O. Imamoglu, and A. Toth, {\it Cycle integrals of the $j$-function and mock modular forms}, Ann. of Math {\bf 173} (2011), 947--981. \bibitem{duke.li} W. Duke and Y. Li, \varepsilonmph{Harmonic Maass forms of weight $1$}, Duke Math. J. {\bf 164} (2015), 39--113. \bibitem{harris.threelemmas} M. Harris, \tilde{e}xtsl{A note on three lemmas of Shimura}, Duke Math. J. {\bf 46} (1979), 871--879. \bibitem{kudla.yang.eis} S. Kudla and T. Yang, \varepsilonmph{Eisenstein series for $\tilde{e}xt{SL}(2)$,} Science China {\bf 53} (2010), 2275--2316. \bibitem{kry.tiny} S. Kudla, M. Rapoport, and T. Yang, \tilde{e}xtsl{On the derivative of an Eisenstein series of weight $1$}, IMRN (1999), no.7. 347--385. \bibitem{kry.faltings} S. Kudla, \tilde{e}xtsl{Derivatives of Eisenstein series and Faltings heights,} Compositio Math., {\bf 140} (2004), 887--951. \bibitem{kry.book} S. Kudla, Modular forms and special cycles on Shimura curves, Annals of Math. Studies, {\bf 161}, Princeton University Press, 2006. \bibitem{kuga.shimura} M. Kuga and G. Shimura, \tilde{e}xtsl{On vector differential forms attached to automorphic forms}, J. Math. Soc. Japan {\bf 12} (1960), 258--270. \bibitem{lang} S. Lang, Elliptic Functions, second ed., Graduate Texts in Math. 112, Springer-Verlag, New York, 1987. \bibitem{Ni} D. Niebur, \varepsilonmph{A class of nonanalytic automorphic functions}, Nagoya Math. J. \tilde{e}xtbf{52} (1973), 133--145 \bibitem{PSS} A. Pitale, A. Saha, R. Schmidt, \tilde{e}xtsl{Representations of $\mathrm{SL}_2(\mathbb{R})$ and nearly holomorphic modular forms,} Surikaisekikenkyusho Kokyuroku (Research Institute for Mathematical Sciences, Kyoto University) {\bf 1973} (2015), 141-154. 
\bibitem{PSS.siegel} \bysame, \textsl{Lowest weight modules of $\text{Sp}_4(\mathbb{R})$ and nearly holomorphic Siegel modular forms}, preprint (2015), arXiv:1501.00524v2.
\bibitem{schulze-pillot} R. Schulze-Pillot, \textsl{Weak Maass forms and $(\mathfrak g,K)$-modules}, Ramanujan J. {\bf 26} (2011), 437--445.
\bibitem{shimura.nh-1} G. Shimura, \textsl{On some arithmetic properties of modular forms of one and several variables}, Ann. of Math. {\bf 102} (1975), 491--515.
\bibitem{shimura.nh-2} G. Shimura, \textsl{Nearly holomorphic functions on Hermitian symmetric spaces}, Math. Annalen {\bf 278} (1987), 1--28.
\bibitem{siegel} C. Siegel, Advanced Analytic Number Theory, 2nd ed., Tata Institute of Fundamental Research Studies in Mathematics, vol. 9, Tata Institute of Fundamental Research, Bombay, 1980.
\bibitem{verdier} J. Verdier, \textsl{Sur les int\'egrales attach\'ees aux formes automorphes [d'apr\`es Goro Shimura]}, S\'eminaire Bourbaki, {\bf 6} n$^o$ 216 (1960-1961), 149--175, $\langle$http://eudml.org/doc/109609$\rangle$.
\bibitem{vogan.book} D. Vogan, \textsl{Representations of Real Reductive Lie Groups}, Progress in Math., Vol. 15, Birkh\"auser, Boston, 1981.
\bibitem{raum.siegel} M. Westerholt-Raum, \textsl{Harmonic weak Siegel Maass forms I}, preprint (2015), arXiv:1502.00557.
\end{thebibliography}
\end{document}
\begin{document}

\title{Large deviations of a forced velocity-jump process with a Hamilton-Jacobi approach}
\maketitle

\begin{abstract}
We study the dispersion of a particle whose motion dynamics can be described by a forced velocity-jump process. To investigate large deviations results, we study the Chapman-Kolmogorov equation of this process in the hyperbolic scaling $(t,x,v)\to(t/\varepsilon,x/\varepsilon,v)$ and then perform a Hopf-Cole transform, which gives us a kinetic equation for a potential. We prove the convergence of this potential to the solution of a Hamilton-Jacobi equation. The hamiltonian can have a $C^1$ singularity, as was previously observed in similar studies. This is a preliminary work before studying spreading results for more realistic processes.
\end{abstract}

\noindent{\bf Key-words:} Kinetic equations, Hamilton-Jacobi equations, large deviations, perturbed test function method, Piecewise Deterministic Markov Process.\\
\noindent{\bf AMS Class. No:} {35Q92, 45K05, 35D40, 35F21}

\section{Introduction}

In this paper, we study the dispersion in $\mathbb{R}^d$ of a particle whose motion dynamics is described by the following piecewise deterministic Markov process (PDMP). During the so-called ``run phase'' (\emph{i.e.} the deterministic part), the particle moves in $\mathbb{R}^d$ and is subject to a force whose intensity and direction are given by the vector $\Gamma$, which only depends on the instantaneous velocity of the particle. Therefore, its position $\mathcal{X}_s$ and velocity $\mathcal{V}_s$ at time $s$ are given by the following system of ODEs:
\begin{equation*}
\begin{cases}
\overset{\cdot}{\mathcal{X}}_s=\mathcal{V}_s,\\
\overset{\cdot}{\mathcal{V}}_s=\Gamma(\mathcal{V}_s).
\end{cases}
\end{equation*}
We shall call the measure space $(V,\nu)$ the set of admissible velocities. After a random exponential time with mean 1, a ``tumble'' occurs: the particle chooses a new velocity at random in the space $V$, independently of its last velocity. The law of the velocity redistribution process is given by the probability density function $M$. The particle then enters a new run phase, which again lasts for a random exponential time of parameter 1, and so on. The Kolmogorov equation of this process is the following conservative kinetic equation:
\begin{equation}\label{eq1}
\partial_t f +v\cdot \nabla_x f + \mathrm{div}_v\left( \Gamma f \right)= M(v)\rho - f,\quad (t,x,v)\in \mathbb{R}_+\times\mathbb{R}^d\times V,
\end{equation}
where $\rho$ is the macroscopic density of $f$:
\begin{equation*}
\rho(t,x):=\int_V f(t,x,v)d\nu(v).
\end{equation*}
We assume that $\Gamma\cdot \mathrm{d}\overrightarrow{S}$ is the null measure on $\partial V$, where $\overrightarrow{S}(v)$ is the normal vector to $\partial V$ at the point $v\in \partial V$. This condition guarantees, by Ostrogradsky's theorem, that the total mass of the system is conserved. We also assume that $V$ is a compact manifold of $\mathbb{R}^d$ with a (possibly empty) boundary. In the case where the boundary of $V$ is not empty, for every function $g$ from $V$ to $\mathbb{R}$, we define (when possible) $\Gamma\cdot \nabla_v g$ on $\partial V$ as follows:
\begin{equation*}
(\Gamma\cdot \nabla_v g)(w)=\left. \frac{d}{ds} g(\gamma_s)\right\vert_{s=0},
\end{equation*}
for a curve $\gamma$ in $V$ such that $\gamma(0)=w$ and $\overset{\cdot}{\gamma_s}=\Gamma(\gamma_s)$ for all $s$ in $[-\delta,\delta]$, for some $\delta>0$.
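\begin{rem}
The process just described is straightforward to simulate. The following minimal sketch uses an illustrative one-dimensional choice of the data, namely $V=[-1,1]$ with uniform redistribution law $M$ and force term $\Gamma(v)=-c\,v(1-v^2)$ with $c=1/2$, so that $\Gamma$ vanishes on $\partial V$ and satisfies the structural assumptions made below; none of these choices is imposed by the text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
c = 0.5
Gamma = lambda v: -c*v*(1.0 - v**2)          # vanishes at v = -1 and v = 1

def sample_path(t_final, dt=1e-3):
    # one trajectory of the forced velocity-jump process up to time t_final
    x, v, t = 0.0, rng.uniform(-1.0, 1.0), 0.0
    t_jump = rng.exponential(1.0)            # next tumbling time (rate-1 clock)
    while t < t_final:
        if t >= t_jump:                      # tumble: redraw the velocity from M
            v = rng.uniform(-1.0, 1.0)
            t_jump += rng.exponential(1.0)
        x += v*dt                            # run phase (explicit Euler step):
        v += Gamma(v)*dt                     #   dX = V ds,  dV = Gamma(V) ds
        t += dt
    return x

positions = np.array([sample_path(20.0) for _ in range(100)])
print("mean:", positions.mean(), "std:", positions.std())
\end{verbatim}
\end{rem}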
For a function $G$ from $V$ to $\mathbb{R}^d$, we define $\mathrm{div}_v (\Gammaamma G)$ on $\partial V$ (when possible) as \begin{equation*} \mathrm{div}_v (\Gammaamma G)(w)= \sum_i(\Gammaamma \nablabla_v G_i)(w). \end{equation*} The function $M\in C^0(V)$ is assumed to satisfy \begin{equation}\label{infM} \underset{v\in V}{\mathrm{min}}\,M(v)>0. \end{equation} The so-called \em force term \em $\Gammaamma$ is a lipschitz-continuous function of $v$. We assume that there exists $\alpha>0$ such that \begin{equation}\label{divgamma} 1+\mathrm{div}\Gammaamma\left(v\right)\geq\alpha>0,\quad\forall v\in V. \end{equation} We introduce the flow of $-\Gammaamma$: \begin{equation}\label{flowgamma} \begin{cases} \overset{\cdot}{\phi^v_s}=-\Gammaamma\left(\phi^v_s\right),\\ \phi^v_0=v, \end{cases} \end{equation} We assume that it satisfies a Poincar\'e-Bendixson condition in the sense that, for all $v\in V$, the limit set of orbit of $v$ is either a zero of $-\Gammaamma$ or a periodic orbit of $-\Gammaamma$. In other words, \begin{equation}\label{Poincare} \forall v \in V,\quad \exists w_0 \in V\mbox{ and }T\geq0\mbox{ such that }\phi^{w_0}_{T}=w_0\mbox{ and } \bigcap_{t>0}\overline{\left\{ \phi^v_s,\: s\geq t \right\}}=\left\{ \phi^{w_0}_s,\: 0\leq s \leq T \right\}. \end{equation} Finally, we assume the following mixing property: \begin{equation}\label{mixing} \forall v \in V,\;\exists w(v)\in V\;\mbox{ such that }\;\forall F\in C^0(V,\mathbb{R}),\quad\underset{t\to+\infty}{\mathrm{lim}}\;\frac{1}{t}\int_0^t F(\phi^v_s)ds=F(w). \end{equation} Note that, thanks to the Poincar\'e -Bendixson condition (\ref{Poincare}), we already get the existence of a $w(v)$ in the convex hull of $V$ such that $\frac{1}{t}\int_{[0,t]}F(\phi^v_s)ds\to F(w)$. Here, we assume furthermore that this "representative" of $v$ can be chosen in $V$, even when $V$ is not convex. In order to study large deviations results for this process, we use the method of geometric optics \cite{evans_pde_1987,freidlin_geometric_1986}. We study the rescaled function $f^\varepsilon(t,x,v):=f\left(\frac{t}{\varepsilon},\frac{x}{\varepsilon},v\right)$, which satisfies \begin{equation*} \partial_t f^{\varepsilon}+v\cdot \nabla f^{\varepsilon} +\frac{1}{\varepsilon}\mathrm{div}_v\left(\Gamma f^{\varepsilon}\right)=\frac{1}{\varepsilon}\left(M(v)\rho^{\varepsilon}-f^{\varepsilon}\right),\quad(t,x,v)\in\mathbb{R}_+\times\mathbb{R}^d\times V. \end{equation*} The function $f^{\varepsilon}$ quickly relaxes towards $\tilde{M}$, the solution of \begin{equation}\label{tm} \begin{cases} \mathrm{div}_v\left(\Gamma(v) \tilde{M}(v)\right)= M(v)\int_V \tilde{M}(v')d\nu(v')-\tilde{M}(v),\\ \int_V \tilde{M}(v')d\nu(v')=1. \end{cases} \end{equation} We introduce the following WKB ansatz \begin{equation*} \varphi^{\varepsilon}(t,x,v):=-\varepsilon \mathrm{log}\left(\frac{f^{\varepsilon}(t,x,v)}{\tilde{M}}\right),\quad\mbox{or equivalently,}\quad f^{\varepsilon}(t,x,v)=\tilde{M} e^{-\frac{\varphi^{\varepsilon}(t,x,v)}{\varepsilon}}. \end{equation*} Then, $\varphi^{\varepsilon}$ satisfies \begin{equation}\label{main} \partial_t \varphi^{\varepsilon}+v\cdot \nabla \varphi^{\varepsilon} +\frac{\Gamma}{\varepsilon}\cdot \nabla_v \varphi^{\varepsilon}=\frac{M}{\tilde{M}}\int_V\tilde{M}(v')\left(1-e^{\frac{\varphi^{\varepsilon}-\varphi'^{\varepsilon}}{\varepsilon}}\right)d\nu(v'), \end{equation} where (here and until the end) $\varphi'^{\varepsilon}$ stands for $\varphi^{\varepsilon}(t,x,v')$. 
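Indeed, \eqref{main} follows by a direct computation: inserting the ansatz $f^{\varepsilon}=\tilde{M}e^{-\varphi^{\varepsilon}/\varepsilon}$ into the equation satisfied by $f^{\varepsilon}$, the transport part yields $-\frac{1}{\varepsilon}\tilde{M}e^{-\varphi^{\varepsilon}/\varepsilon}\left(\partial_t \varphi^{\varepsilon}+v\cdot\nabla_x\varphi^{\varepsilon}\right)$, while
\begin{equation*}
\frac{1}{\varepsilon}\,\mathrm{div}_v\left(\Gamma \tilde{M}e^{-\frac{\varphi^{\varepsilon}}{\varepsilon}}\right)=\frac{1}{\varepsilon}\,\mathrm{div}_v\left(\Gamma \tilde{M}\right)e^{-\frac{\varphi^{\varepsilon}}{\varepsilon}}-\frac{1}{\varepsilon^{2}}\,\tilde{M}e^{-\frac{\varphi^{\varepsilon}}{\varepsilon}}\,\Gamma\cdot\nabla_v \varphi^{\varepsilon}.
\end{equation*}
Since $\mathrm{div}_v(\Gamma\tilde{M})=M-\tilde{M}$ by (\ref{tm}), the first term on the right of this identity, moved to the other side of the equation, turns the collision term $\frac{1}{\varepsilon}\left(M\rho^{\varepsilon}-f^{\varepsilon}\right)$ into $\frac{M}{\varepsilon}\left(\int_V\tilde{M}(v')e^{-\varphi'^{\varepsilon}/\varepsilon}d\nu(v')-e^{-\varphi^{\varepsilon}/\varepsilon}\right)$; dividing by $-\frac{1}{\varepsilon}\tilde{M}e^{-\varphi^{\varepsilon}/\varepsilon}$ and using $\int_V\tilde{M}\,d\nu=1$ then gives \eqref{main}.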
The main result of this paper is that $(\varphi^{\varepsilon})_\varepsilon$ converges to the viscosity solution of some Hamilton-Jacobi equation.

\subsection*{Motivations and earlier related works}

The motivation of this work comes from the study of concentration waves in bacterial colonies of \emph{Escherichia coli}. Kinetic models have been proposed to study the run-and-tumble motion of the bacterium at the mesoscopic scale in \cite{alt_biased_1980,stroock_stochastic_1974}. More recently, it has been established that these kinetic models are more accurate than their diffusion approximations in describing the speed of a colony of bacteria in a channel of nutrient \cite{saragosti_directional_2011}. This has raised some interest in the study of front propagation in kinetic models driven by a chemotactic effect \cite{calvez_chemotactic_2016} but also by a growth effect \cite{bouin_hamilton-jacobi_2015,bouin_propagation_2015,bouin_spreading_2017}. Our goal is to explore those studies further, by considering kinetic equations with a force term, in view of studying the propagation of biological species under an effect of the environment (one could think, for example, of the fluid resistance of water for bacteria). A physically relevant force term may not satisfy all the assumptions of the present paper, mostly because of (\ref{divgamma}), but our result and methods can be adapted to different force terms. Therefore, our study should be considered as a preliminary work before studying more realistic models.

When $\Gamma\equiv 0$, a convergence result for $(\varphi^\varepsilon)_\varepsilon$ already exists. The question was originally solved in \cite{bouin_kinetic_2012} by Bouin and Calvez, who proved convergence of $(\varphi^\varepsilon)_\varepsilon$ to the solution of a Hamilton-Jacobi equation with an implicitly defined hamiltonian. Their result, however, only holds in dimension 1, since in higher dimensions the implicit equation defining the hamiltonian may not have a solution. It was then generalized to higher dimensions by the author in \cite{caillerie_large_2017-1}. The proof relied on the establishment of uniform (with respect to $\varepsilon$) \emph{a priori} bounds on the potential $\varphi^\varepsilon$, which may not hold in our situation. If one requires that $\mathrm{div}\Gamma=0$, the proof of \cite{caillerie_large_2017-1} can be adapted to our situation since one can establish those \emph{a priori} bounds (see \cite{caillerie_stochastic_2017}, Chapter 3).

When the velocity set is unbounded and $\Gamma\equiv 0$, one observes an acceleration of the front of propagation, which highlights the difference between the kinetic model and its diffusion approximation. Due to this acceleration, the hyperbolic scaling is no longer the right one to follow the front. In the special case where $M$ is Gaussian, and for the scaling $(t,x,v)\to(\frac{t}{\varepsilon},\frac{x}{\varepsilon^{3/2}},\frac{v}{\varepsilon^{1/2}})$, the Hamilton-Jacobi limit was performed by Bouin, Calvez, Grenier and Nadin in \cite{bouin_large_2016}.

As was previously mentioned, spreading can also be driven by a growth effect. Propagation in a similar model, without the force term but with a reaction term of KPP type, was investigated by Bouin, Calvez and Nadin in \cite{bouin_propagation_2015}. They established the existence of travelling wave solutions in the one-dimensional case. Interestingly enough, the speed of propagation they obtained differed from the KPP speed obtained in the diffusion approximation.
Their result was generalized to the higher velocity dimension case by Bouin and the author in \cite{bouin_spreading_2017}. In the present paper, we will use the method of geometric optics \cite{evans_pde_1987,freidlin_geometric_1986}, the half-relaxed limits method of Barles and Perthame \cite{barles_exit_1988} and the perturbed test function method of Evans \cite{evans_perturbed_1989} in a similar fashion as in \cite{bouin_spreading_2017,bouin_hamiltonjacobi_2015}. The Hamilton-Jacobi framework can also be used in other various situations involving population dynamics (not necessarily structured by velocity) \cite{bouin_invasion_2012,bouin_hamiltonjacobi_2015,mirrahimi_time_2015,mirrahimi_asymptotic_2015,gandon_hamiltonjacobi_2017}. \subsection*{Main result} To identify a candidate for the limit, let us assume formally that this limit $\varphi^0:=\underset{\varepsilon\to0}{\mathrm{lim}}\,\varphi^{\varepsilon}$ is independent of the velocity variable and that the convergence speed is of order 1 in $\varepsilon$, which would mean that there exists a function $\eta$ such that \begin{equation}\label{hilbert} \varphi^{\varepsilon}(t,x,v)=\varphi^0(t,x)+\varepsilon \eta(t,x,v)+\mathcal{O}(\varepsilon^2). \end{equation} Plugging (\ref{hilbert}) into \Cref{main}, we get formally at the order $\varepsilon^0$ \begin{equation}\label{prespec} \partial_t \varphi^0 +v\cdot \nabla_x \varphi^0+\Gamma\cdot\nabla_v \eta = \frac{M}{\tilde{M}}\int_V \tilde{M}'\left(1-e^{\eta-\eta'}\right)d\nu(v'). \end{equation} When $t$ and $x$ are fixed, (\ref{prespec}) is a differential equation in the variable $v$. Let us set $p:=\nabla_x\varphi^0(t,x)$, $H:=-\partial_t \varphi^0(t,x)$ and $Q(v):=e^{-\eta(v)}$. Then, $Q$ satisfies \begin{equation}\label{probspec} \begin{cases} HQ(v)=\left(v\cdot p - \frac{M(v)}{\tilde{M}(v)}\right)Q(v)-\Gamma(v)\cdot\nabla_v Q(v)+\frac{M(v)}{\tilde{M}(v)}\int_V \tilde{M}'Q'd\nu(v'),&v\in V,\\ Q>0. \end{cases} \end{equation} For fixed $p$, this is a spectral problem where $Q$ and $H$ are viewed as an eigenvector and the associated eigenvalue. We will discuss the resolution of this spectral problem in \Cref{identification}. This resolution motivates the introduction of the following hamiltonian. \begin{definition}\label{defiprincipale} For all $p\in\mathbb{R}^d$, we set \begin{equation}\label{hamiltonian} \mathcal{H}(p):=\mathrm{inf}\left\{ H\in\mathbb{R},\quad \int_V\tilde{M}(v')Q_{p,H}(v')d\nu(v') \leq 1 \right\}, \end{equation} where \begin{equation}\label{defQ} Q_{p,H}\left(v\right):=\int_0^{+\infty}\frac{M(\phi^v_t)}{\tilde{M}(\phi^v_t)}\mathrm{exp}\left(-\int_0^t\left(\frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+H-\phi^v_s\cdot p\right)ds\right)dt, \end{equation} and $\phi$ is the flow of $-\Gammaamma$: \begin{equation}\label{flow} \begin{cases} \overset{\cdot}{\phi^v_s}=-\Gammaamma\left(\phi^v_s\right),\\ \phi^v_0=v. \end{cases} \end{equation} \end{definition} As in \cite{coville_singular_2013-1}, this spectral problem may not have a solution in $C^1(V)$ and one may need to solve it in the set of positive measures (one can refer to \cite{caillerie_large_2017-1} where a similar situation occurs). 
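\begin{rem}
To make \Cref{defiprincipale} concrete, consider the force-free case $\Gamma\equiv 0$, which satisfies (\ref{divgamma}), (\ref{Poincare}) and (\ref{mixing}). Then $\tilde{M}=M$ by (\ref{tm}), the flow (\ref{flow}) is trivial, and (\ref{defQ}) reduces to $Q_{p,H}(v)=\left(1+H-v\cdot p\right)^{-1}$ whenever $1+H-v\cdot p>0$, so that (\ref{hamiltonian}) becomes the implicit dispersion relation of \cite{bouin_kinetic_2012}. The following minimal numerical sketch, with the illustrative choice $V=[-1,1]$, $d\nu=dv$ and $M\equiv 1/2$ (not imposed by the text), evaluates $\mathcal{H}(p)$ by exploiting the monotonicity in $H$; for a general force term the same recipe applies, with $Q_{p,H}$ computed by integrating along the flow of $-\Gamma$.
\begin{verbatim}
from scipy.integrate import quad
from scipy.optimize import brentq

def I(H, p):
    # I(H) = int_V M(v) Q_{p,H}(v) dnu(v) with M = 1/2 on V = [-1,1];
    # continuous and decreasing in H
    return quad(lambda v: 0.5/(1.0 + H - v*p), -1.0, 1.0)[0]

def hamiltonian(p, H_max=50.0):
    # H(p) = inf { H : I(H) <= 1 }; here I(H) -> +infinity as H -> |p| - 1,
    # so the infimum is the unique root of I(H) = 1.
    H_min = abs(p) - 1.0 + 1e-6      # keeps 1 + H - v*p positive on V
    return brentq(lambda H: I(H, p) - 1.0, H_min, H_max)

for p in (0.0, 0.5, 1.0, 2.0):
    print(p, hamiltonian(p))         # H(0) = 0; H(p) increases with |p|
\end{verbatim}
\end{rem}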
Therefore, let us define the so-called singular set of $M$ and $\Gamma$, that is, the set where the spectral problem has no solution in $C^1(V)$:
\begin{definition}
We call ``singular set of $M$ and $\Gamma$'' the set
\begin{equation}\label{sing}
\mathrm{Sing}\left(M,\Gamma\right):=\left\{p\in\mathbb{R}^d,\; \left\{H\in \mathbb{R},\;1<\int_V\tilde{M}'Q'_{p,H}d\nu(v')<+\infty \right\}=\emptyset \right\}.
\end{equation}
\end{definition}
Let us now state our main result.
\begin{theorem}\label{thm}
Let us assume that (\ref{infM}), (\ref{divgamma}), (\ref{Poincare}) and (\ref{mixing}) hold and that $\tilde{M}$ satisfies (\ref{tm}). Let $\varphi^{\varepsilon}$ satisfy \Cref{main}. Let us assume furthermore that the initial condition is well-prepared: $\varphi^{\varepsilon}(0,x,v):=\varphi_0(x)\geq 0$. Then, the function $\varphi^{\varepsilon}$ converges locally uniformly toward some function $\varphi^0$ which is independent of $v$. Moreover, $\varphi^0$ is the viscosity solution of the Hamilton-Jacobi equation
\begin{equation}\label{HJ}
\begin{cases}
\partial_t \varphi^0+\mathcal{H}\left(\nabla_x \varphi^0\right)=0,& (t,x)\in \mathbb{R}_+^*\times \mathbb{R}^d,\\
\varphi^0(0,\cdot)=\varphi_0,
\end{cases}
\end{equation}
where $\mathcal{H}$ is defined as in \Cref{defiprincipale}.
\end{theorem}
\begin{rem}
The sufficient conditions on $\mathcal{H}$ that guarantee the uniqueness of the solution of the Hamilton-Jacobi equation (\ref{HJ}) will be proven in \Cref{lipschitz}.
\end{rem}
The rest of the paper is organized as follows: in Section 2, we describe how we obtain the hamiltonian and prove some results that we will use later on. In particular, we solve the spectral problem \eqref{probspec}. Section 3 is devoted to the proof of \Cref{thm}.

\section{Identification of the hamiltonian}\label{identification}

\subsection{Positivity and boundedness of $\tilde{M}$}

As a first step, we will prove that $0<\tilde{M}<+\infty$.
\begin{lemma}\label{borneinf}
Let us assume that (\ref{infM}) and (\ref{divgamma}) hold and let $\tilde{M}$ satisfy (\ref{tm}). Then, $\underset{v\in V}{\mathrm{min}}\,\tilde{M}(v)>0$.
\end{lemma}
\begin{proof}
Let $v_{\mathrm{min}}$ be a point where $\tilde{M}$ reaches its minimum. Suppose first that $v_{\mathrm{min}}$ is in the interior of $V$. Then, $\nabla_v \tilde{M}(v_{\mathrm{min}})=0$, thus
\begin{equation*}
\tilde{M}(v_{\mathrm{min}})\mathrm{div} \Gamma (v_{\mathrm{min}})=M(v_{\mathrm{min}})-\tilde{M}(v_{\mathrm{min}}),
\end{equation*}
which implies
\begin{equation*}
\tilde{M}(v_{\mathrm{min}})=\frac{M(v_{\mathrm{min}})}{1+\mathrm{div} \Gamma(v_{\mathrm{min}})}\geq \frac{\mathrm{min}\,M}{1+\mathrm{div} \Gamma(v_{\mathrm{min}})}>0.
\end{equation*}
If $v_{\mathrm{min}}\in \partial V$, we can conclude likewise. Indeed, if $\Gamma(v_{\mathrm{min}})=0$, then the result is trivial. If $v_{\mathrm{min}}\in \partial V$ and $\Gamma(v_{\mathrm{min}})\neq 0$, since $\Gamma(v)\cdot d\overrightarrow{S}(v)=0$ for all $v\in \partial V$, there exists $v_0\in V$, $v_1 \in V$ and $\delta>0$ such that
\begin{equation*}
\begin{cases}
\phi^{v_{\mathrm{min}}}_s\in V,& \forall s\in [-\delta,\delta],\\
\phi^{v_{\mathrm{min}}}_{-\delta} = v_0,\\
\phi^{v_{\mathrm{min}}}_{\delta}=v_1.
\end{cases}
\end{equation*}
The extremal property of $v_{\mathrm{min}}$ now implies that
\begin{equation*}
\Gamma(v_{\mathrm{min}})\cdot \nabla_v \tilde{M}(v_{\mathrm{min}})=-\left. \frac{d}{ds}\tilde{M}(\phi^{v_{\mathrm{min}}}_s)\right\vert_{s=0}=0.
\end{equation*}
\end{proof}

\begin{lemma}\label{Lemmma}
Let us assume that (\ref{divgamma}) holds. Then,
\begin{equation*}
\underset{v\in V}{\mathrm{sup}}\,\tilde{M}(v)<+\infty.
\end{equation*}
\end{lemma}
\begin{proof}
We use the same ideas as in \Cref{borneinf} to prove that
\begin{equation*}
\tilde{M}(v_{\mathrm{max}})=\frac{M(v_{\mathrm{max}})}{1+\mathrm{div}\Gamma(v_{\mathrm{max}})}\leq \frac{\left\Vert M \right\Vert_{\infty}}{\alpha},
\end{equation*}
thanks to (\ref{divgamma}).
\end{proof}

\subsection{The spectral problem}\label{subpart1}

Here, we discuss the resolution of the spectral problem, that is: for all $p\in\mathbb{R}^d$, find $H$ and a function $Q>0$ such that
\begin{equation*}
HQ(v)=\left(v\cdot p - \frac{M(v)}{\tilde{M}(v)}\right)Q(v)-\Gamma(v)\cdot\nabla_v Q(v)+\frac{M(v)}{\tilde{M}(v)}\int_V \tilde{M}'Q'd\nu(v')
\end{equation*}
holds for all $v\in V$. To find such a solution, we use the method of characteristics: let us define $\phi$ as the flow of $-\Gamma$ (see (\ref{flow})). Then, we have
\begin{align*}
\frac{d}{ds}&\left(Q(\phi^v_s)\mathrm{exp}\left( -\int_0^s \left( \frac{M(\phi^v_{\sigma})}{\tilde{M}(\phi^v_\sigma)}+H-\phi^v_\sigma \cdot p \right) d\sigma\right)\right)\\
&=-\mathrm{exp}\left( -\int_0^s \left( \frac{M(\phi^v_\sigma)}{\tilde{M}(\phi^v_\sigma)}+H-\phi^v_\sigma \cdot p \right) d\sigma\right)\left[\left( \frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+H-\phi^v_s \cdot p \right)Q(\phi^v_s)+\Gamma(\phi^v_s)\cdot \nabla_v Q(\phi^v_s)\right]\\
&=-\mathrm{exp}\left( -\int_0^s \left( \frac{M(\phi^v_\sigma)}{\tilde{M}(\phi^v_\sigma)}+H-\phi^v_\sigma \cdot p \right) d\sigma\right)\frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}\int_V \tilde{M}'Q'd\nu(v').
\end{align*}
Suppose that
\begin{equation*}
\underset{s\to +\infty}{\mathrm{lim}}\,\mathrm{exp}\left( -\int_0^s \left( \frac{M(\phi^v_\sigma)}{\tilde{M}(\phi^v_\sigma)}+H-\phi^v_\sigma \cdot p \right) d\sigma\right)=0.
\end{equation*}
Then, integrating between $0$ and $+\infty$ gives
\begin{equation}\label{vecteurpropre}
Q(v)=\int_0^{+\infty}\frac{M(\phi^v_t)}{\tilde{M}(\phi^v_t)}\mathrm{exp}\left( -\int_0^t \left( \frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+H-\phi^v_s \cdot p \right) ds\right)dt\int_V \tilde{M}'Q'd\nu(v').
\end{equation}
Integrating \Cref{vecteurpropre} against $\tilde{M}$ finally gives
\begin{equation*}
1=\int_V\tilde{M}(v)\int_0^{+\infty}\frac{M(\phi^v_t)}{\tilde{M}(\phi^v_t)}\mathrm{exp}\left( -\int_0^t \left( \frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+H-\phi^v_s \cdot p \right) ds\right)dt\,d\nu(v).
\end{equation*}
In other words, solving the spectral problem is equivalent to finding $H\in\mathbb{R}$ such that $\int_V \tilde{M}'Q'_{p,H}d\nu(v')=1$ holds, where
\begin{equation*}
Q_{p,H}(v)=\int_0^{+\infty}\frac{M(\phi^v_t)}{\tilde{M}(\phi^v_t)}\mathrm{exp}\left( -\int_0^t \left( \frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+H-\phi^v_s \cdot p \right) ds\right)dt.
\end{equation*}
It is straightforward to check that $H\mapsto \int_V \tilde{M}'Q'_{p,H}d\nu(v')$ is monotonically decreasing and continuous. As a result, if there exists $H$ such that $\int_V \tilde{M}'Q'_{p,H}d\nu(v')=1$, then such an $H$ is unique. Let us recall the definition of our hamiltonian:
\begin{equation*}
\mathcal{H}(p):=\mathrm{inf}\left\{ H\in\mathbb{R},\quad \int_V\tilde{M}(v')Q_{p,H}(v')d\nu(v') \leq 1 \right\}.
\end{equation*} \begin{proposition}\label{resolution} Resolution of the spectral problem in $C^1(V)$ \begin{enumerate} \item[(i)] If $p\in \mathrm{Sing}(M,\Gammaamma)^c$, then $\int_V \tilde{M}'Q'_{p,\mathcal{H}(p)}d\nu(v')=1$, \em i.e. \em the couple $(Q_{p,\mathcal{H}(p)},\mathcal{H}(p))$ is a solution to the spectral problem (\ref{probspec}). \item[(ii)] If $p\in \mathrm{Sing}(M,\Gammaamma)$, then $\underset{v\in V}{\mathrm{sup}}\,Q_{p,\mathcal{H}(p)}=+\infty$, \em i.e. \em there is no solution of the spectral problem (\ref{probspec}) in $C^1(V)$. \end{enumerate} \end{proposition} \begin{proof} (i) Let $p\in \mathrm{Sing}(M,\Gammaamma)^c$. By definition, there exists $H_0\in\mathbb{R}$ such that $\int_V \tilde{M}'Q'_{p,H_0}d\nu'>1$. By continuity and monotonicity of $H\mapsto\int_V \tilde{M}'Q'_{p,H}d\nu'$, this means that, for all $H_0< H <\mathcal{H}(p)$, \begin{equation*} +\infty>\int_V \tilde{M}'Q'_{p,H_0}d\nu'>\int_V \tilde{M}'Q'_{p,H}d\nu'>1, \end{equation*} the last inequality being true by definition since $H<\mathcal{H}(p)$ (recall \eqref{hamiltonian}). Finally, \begin{equation*} 1\geq \int_V \tilde{M}'Q'_{p,\mathcal{H}(p)}d\nu'=\underset{H\searrow \mathcal{H}(p)}{\mathrm{lim}}\int_V \tilde{M}'Q'_{p,H}d\nu'\geq 1, \end{equation*} which proves (i).\\ (ii) Suppose that $p\in \mathrm{Sing}(M,\Gammaamma)$ and that $Q_{p,\mathcal{H}(p)}$ is bounded. We let $\delta>0$. Then, $\mathrm{sup}_{v\in V}Q_{p,\mathcal{H}(p)-\delta}=+\infty$. Indeed, in the opposite case $Q_{p,\mathcal{H}(p)-\delta}$ is bounded and hence, integrable on $V$ which is not possible since $p\in \mathrm{Sing}(M,\Gammaamma)$. Where defined, the function $\mathcal{Z}_\delta:=Q_{p,\mathcal{H}(p)-\delta}-Q_{p,\mathcal{H}(p)}$ satisfies \begin{equation*} \left(\frac{M(v)}{\tilde{M}(v)}+\mathcal{H}(p)-\delta-v\cdot p\right)\mathcal{Z}_{\delta}+\Gammaamma\cdot \nablabla_v \mathcal{Z}_{\delta}=\delta Q_{p,\mathcal{H}(p)}\leq \delta \left\Vert Q_{p,\mathcal{H}(p)} \right\Vert_{\infty}. \end{equation*} By the method of characteristics, this implies that \begin{eqnarray*} \mathcal{Z}_{\delta}(v)&\leq &\delta \left\Vert Q_{p,\mathcal{H}(p)} \right\Vert_{\infty}\int_0^{+\infty}\mathrm{exp}\left(-\int_0^t \left(\frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+\mathcal{H}(p)-\delta - \phi^v_s\cdot p\right)ds\right)dt\\ &=& \delta \left\Vert Q_{p,\mathcal{H}(p)} \right\Vert_{\infty} \frac{\tilde{M}(v)}{M(v)}Q_{p,\mathcal{H}(p)}(v)\\ &\leq& \delta\left\Vert Q_{p,\mathcal{H}(p)} \right\Vert_{\infty}\frac{\mathrm{max}\,\tilde{M}}{\mathrm{min}\,M}Q_{p,\mathcal{H}(p)-\delta}(v), \end{eqnarray*} for all $v$ where $Q_{p,\mathcal{H}(p)-\delta}(v)<+\infty$. Hence, \begin{equation*} Q_{p,\mathcal{H}(p)}(v)\geq \left(1-\delta\left\Vert Q_{p,\mathcal{H}(p)} \right\Vert_{\infty}\frac{\mathrm{max}\,\tilde{M}}{\mathrm{min}\,M}\right) Q_{p,\mathcal{H}(p)-\delta}(v). \end{equation*} Since $Q_{\mathcal{H}(p)}$ is bounded and $\mathrm{sup}_{v\in V}Q_{p,\mathcal{H}(p)-\delta}=+\infty$, this is absurd. \end{proof} We now discuss the resolution of the spectral problem in the set of positive measures. \begin{lemma}\label{utile} Let $p\in \mathrm{Sing}(M,\Gammaamma)^c$ and $v\in V\setminus\mathcal{D}(Q_{p,\mathcal{H}(p)})$, \em i.e. \em $Q_{p,\mathcal{H}(p)}(v)=+\infty$. Thanks to the Poincar\'e-Bendixson condition (\ref{Poincare}), either $\phi^v_t$ converges to some $v_0\in V$ or the limit set of $(\phi^v_t)_t$ is the periodic orbit of some $v_0\in V$. Either way, $Q_{p,\mathcal{H}(p)}(v_0)=+\infty$. 
\end{lemma} \begin{proof} The result holds since \begin{equation*} \underset{t\to +\infty}{\mathrm{lim}}\,\frac{1}{t}\int_0^t \left(\frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+\mathcal{H}(p)-\phi^v_s \cdot p\right)ds=\underset{t\to +\infty}{\mathrm{lim}}\,\frac{1}{t}\int_0^t \left(\frac{M(\phi^{v_0}_s)}{\tilde{M}(\phi^{v_0}_s)}+\mathcal{H}(p)-\phi^{v_0}_s \cdot p\right)ds, \end{equation*} which implies that \begin{equation*} \mathrm{exp}\left(-\int_0^t \left(\frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+\mathcal{H}(p)-\phi^v_s \cdot p\right)ds\right)\underset{t\to+\infty}{\sim}\mathrm{exp}\left(-\int_0^t \left(\frac{M(\phi^{v_0}_s)}{\tilde{M}(\phi^{v_0}_s)}+\mathcal{H}(p)-\phi^{v_0}_s \cdot p\right)ds\right). \end{equation*} \end{proof} \begin{lemma}\label{utile2} Let now $v_0$ be defined as in \Cref{utile} and let $w(v_0)\in V$ be the vector defined after the mixing property (\ref{mixing}). Then, $\frac{M(w)}{\tilde{M}(w)}+\mathcal{H}(p)-w\cdot p=0$. \end{lemma} \begin{proof} Let us assume that $\frac{M(w)}{\tilde{M}(w)}+\mathcal{H}(p)-w\cdot p=\delta>0$. Then, \begin{equation*} \mathrm{exp}\left(-\int_0^t \left(\frac{M(\phi^{v_0}_s)}{\tilde{M}(\phi^{v_0}_s)}+\mathcal{H}(p)-\phi^{v_0}_s \cdot p\right)ds\right)\underset{t\to +\infty}{\sim}e^{-\delta t}, \end{equation*} hence $Q_{p,\mathcal{H}(p)}(v_0)=\int_{[0,+\infty)}\frac{M(\phi^{v_0}_t)}{\tilde{M}(\phi^{v_0}_t)}\mathrm{exp}\left(-\int_{[0,t]}\left(\frac{M(\phi^{v_0}_s)}{\tilde{M}(\phi^{v_0}_s)}+\mathcal{H}(p)-\phi^{v_0}_s \cdot p\right)ds\right)dt<+\infty$, which is absurd after \Cref{utile}. \end{proof} \begin{proposition}\label{inthesenseofd} Resolution of the problem in the set of positive measures. Let $p\in \mathrm{Sing}(M,\Gamma)$. Let $v_0$ be defined after \Cref{utile}. \begin{enumerate} \item[(i)] If $v_0$ is such that $\Gamma(v_0)=0$ and $\frac{M(v_0)}{\tilde{M}(v_0)}+\mathcal{H}(p)-v_0\cdot p=0$, then the measure $\mu:=\delta_{v_0}$, where $\delta_{v_0}$ is the Dirac mass at $v_0$, satisfies \begin{equation}\label{distribution} \left(\frac{M}{\tilde{M}}+\mathcal{H}(p)-v\cdot p\right)Q + \Gamma\cdot \nabla_v Q =0 \end{equation} in the sense of distributions. \item[(ii)] If $v_0 \in V$ belongs to a periodic orbit of period $T$ and $\phi^{v_0}_t\in V\setminus \mathcal{D}(Q_{p,\mathcal{H}(p)})$ for all $0\leq t \leq T$, then the uniform probability measure $\mu$ on the set $\left\{ \phi^{v_0}_t,\; 0\leq t \leq T \right\}$ satisfies (\ref{distribution}) in the sense of distributions. \item[(iii)] Either way, the positive measure $\tilde{\mu}:=Q_{p,\mathcal{H}(p)}d\nu+\left(1-\int_V Q'_{p,\mathcal{H}(p)}d\nu'\right)\mu$ associated with the eigenvalue $\mathcal{H}(p)$ is a solution to the spectral problem (\ref{probspec}). \end{enumerate} \end{proposition} \begin{proof} (i) This is trivial since \begin{equation*} \left(\frac{M(v_0)}{\tilde{M}(v_0)}+\mathcal{H}(p)-\phi^{v_0}_t\cdot p\right)\psi(v_0)+\Gamma(v_0)\cdot \nabla_v \psi(v_0)=0\times \psi(v_0) + 0\cdot \nabla_v \psi(v_0)=0, \end{equation*} for all $\psi \in C^{\infty}(V)$. (ii) Let $\psi \in C^{\infty}(V)$. Then, \begin{eqnarray*} \int_{\left\{ \phi^{v_0}_t,\; 0\leq t \leq T \right\}}\left(\frac{M'}{\tilde{M}'}+\mathcal{H}(p)-v'\cdot p\right)\psi'd\mu' +\int_{\left\{ \phi^{v_0}_t,\; 0\leq t \leq T \right\}}\Gamma'\cdot \nabla_v \psi'd\mu'\\ =\int_0^T \left(\frac{M(\phi^{v_0}_t)}{\tilde{M}(\phi^{v_0}_t)}+\mathcal{H}(p)-\phi^{v_0}_t\cdot p\right)\psi(\phi^{v_0}_t)dt+\int_0^T \Gamma(\phi^{v_0}_t)\cdot \nabla_v \psi(\phi^{v_0}_t)dt.
\end{eqnarray*} Now, \begin{equation*} \int_0^T \Gamma(\phi^{v_0}_t)\cdot \nabla_v \psi(\phi^{v_0}_t)dt=-\int_0^T \overset{\cdot}{\phi^{v_0}_t}\cdot \nabla_v \psi(\phi^{v_0}_t)dt=-\left[\psi(\phi^{v_0}_t) \right]_{t=0}^{t=T}=0, \end{equation*} since $\phi^{v_0}$ is periodic with period $T$. Moreover, since \begin{equation*} \frac{1}{nT}\int_0^{nT}\left(\frac{M(\phi^{v_0}_t)}{\tilde{M}(\phi^{v_0}_t)}+\mathcal{H}(p)-\phi^{v_0}_t\cdot p\right)dt=\frac{n}{nT}\int_{0}^T\left(\frac{M(\phi^{v_0}_t)}{\tilde{M}(\phi^{v_0}_t)}+\mathcal{H}(p)-\phi^{v_0}_t\cdot p\right)dt, \end{equation*} for all $n\in \mathbb{N}$, and \begin{equation*} \frac{1}{nT}\int_0^{nT}\left(\frac{M(\phi^{v_0}_t)}{\tilde{M}(\phi^{v_0}_t)}+\mathcal{H}(p)-\phi^{v_0}_t\cdot p\right)dt\underset{n\to +\infty}{\longrightarrow}\frac{M(w)}{\tilde{M}(w)}+\mathcal{H}(p)-w\cdot p, \end{equation*} we finally get \begin{equation*} \int_0^T \left(\frac{M(\phi^{v_0}_t)}{\tilde{M}(\phi^{v_0}_t)}+\mathcal{H}(p)-\phi^{v_0}_t\cdot p\right)\psi(\phi^{v_0}_t)dt=T\left(\frac{M(w)}{\tilde{M}(w)}+\mathcal{H}(p)-w\cdot p\right)\psi(w)=0, \end{equation*} which proves (ii). (iii) This is straightforward since $\mu$ solves \Cref{distribution} and since \begin{equation*} \left(\frac{M}{\tilde{M}}+\mathcal{H}(p)-v\cdot p\right)Q_{p,\mathcal{H}(p)} +\Gamma\cdot \nabla_v Q_{p,\mathcal{H}(p)} = \frac{M}{\tilde{M}}, \end{equation*} for all $v\in \mathcal{D}(Q_{p,\mathcal{H}(p)})$. \end{proof} \begin{rem} We do not necessarily have uniqueness of the spectral problem (\ref{probspec}) in the set of positive measures as there might exist several points in $V\setminus \mathcal{D}(Q_{p,\mathcal{H}(p)})$ which do not belong to the same orbit. However, we will only use the solutions of \Cref{resolution} and \Cref{inthesenseofd} in the rest of the paper. We will use the perturbed test function method of Evans \cite{evans_perturbed_1989} to build a sub- and a super-solution of the Hamilton-Jacobi equation \eqref{HJ}. When $p\in \mathrm{Sing}(M,\Gamma)^c$, we will use the $C^1$ solution of \Cref{resolution} to build the perturbed test function in question. It is worth mentioning that when $p\in \mathrm{Sing}(M,\Gamma)$, we will use the solution in the set of positive measures given by \Cref{inthesenseofd}. However, we will only use the regular part $Q_{p,\mathcal{H}(p)}$ in the super-solution procedure, whereas we will only use the singular part $\mu$ in the sub-solution procedure. \end{rem} \subsection{Examples} Such a hamiltonian has already been studied in less general settings. Here are two examples taken from \cite{bouin_kinetic_2012,caillerie_large_2017-1} and \cite{caillerie_stochastic_2017}. \begin{example} \textbf{The special case $\Gamma\equiv 0$.} \\ Suppose that $V$ is a compact set such that $0$ belongs to the interior of the convex hull of $V$. In this case, it is straightforward to check that $M$ is a solution to \eqref{tm}, hence $\tilde{M}=M$. Moreover, $\mathrm{Sing}(M,\Gamma)=\left\{ p\in \mathbb{R}^d,\; \int_V \frac{M(v')}{\mu(p)-v'\cdot p}d\nu(v')\leq 1 \right\}$, where $\mu(p):= \mathrm{max}_{v\in V}\left\{v\cdot p\right\}$. The hamiltonian is then defined by: \begin{equation}\label{bouincalvez} \int_V \frac{M(v)}{1+\mathcal{H}(p)-v\cdot p}d\nu(v)=1,\quad \mbox{if }p\notin \mathrm{Sing}(M,\Gamma), \end{equation} \begin{equation}\label{caillerie} \mathcal{H}(p)=\mu(p)-1,\quad \mbox{if }p\in \mathrm{Sing}(M,\Gamma).
\end{equation} When $V=[-1,1]$ and $M\equiv \frac{1}{2}$, then $\mathrm{Sing}(M,\Gamma)=\emptyset$ and $\mathcal{H}(p)=\frac{p-\mathrm{tanh}\,p}{\mathrm{tanh}\,p}$ for all $p\in \mathbb{R}$. Indeed, with $d\nu(v)=dv$, \eqref{bouincalvez} reads $\int_{-1}^{1}\frac{1/2}{1+\mathcal{H}(p)-vp}\,dv=1$, that is, $\frac{1}{2p}\,\mathrm{log}\left(\frac{1+\mathcal{H}(p)+p}{1+\mathcal{H}(p)-p}\right)=1$ for $p\neq 0$, whence $1+\mathcal{H}(p)=\frac{p}{\mathrm{tanh}\,p}$. \end{example} One can refer to \cite{bouin_kinetic_2012,caillerie_large_2017-1} for more details. Let us emphasize that the hamiltonian \eqref{bouincalvez}-\eqref{caillerie} is consistent with ours. Indeed, when $\Gamma\equiv 0$, then $\tilde{M}\equiv M$ and $\phi^v_s=v$, for all $s$, hence $\displaystyle{\int_V \tilde{M}(v)\int_0^{+\infty}\frac{M(\phi^v_t)}{\tilde{M}(\phi^v_t)}\mathrm{exp}\left(-\int_0^t \left(\frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+\mathcal{H}(p)-\phi^v_s\cdot p\right)ds \right)dtd\nu(v)}$ \begin{equation*} = \int_V M(v)\int_0^{+\infty}\mathrm{exp}\left(-\int_0^t \left(\frac{M(v)}{M(v)}+\mathcal{H}(p)-v\cdot p\right)ds \right)dtd\nu(v) \end{equation*} \begin{flushright} $\displaystyle{=\int_V \frac{M(v)}{1+\mathcal{H}(p)-v\cdot p}d\nu(v)}.$\\ \par\end{flushright} It is also straightforward to check that the hamiltonian from \cite{bouin_kinetic_2012,caillerie_large_2017-1} and ours coincide on $\mathrm{Sing}(M,\Gamma)$. \begin{example}\label{chap3} Let $d=3$ and let $V$ be the unit sphere, which we parametrize with the usual spherical coordinates: $V=\left\{ (\theta,\varphi)\in [0,2\pi]\times[0,\pi] \right\}$, $v(\theta,\varphi):=(\mathrm{sin}(\varphi)\mathrm{cos}(\theta),\mathrm{sin}(\varphi)\mathrm{sin}(\theta),\mathrm{cos}\varphi)$, and let $M$ depend only on $\varphi$ and $\Gamma(\theta,\varphi)=(\mathrm{sin}\varphi,0)$. If $p\notin \mathrm{Sing}(M,\Gamma)$, then $\mathcal{H}(p)$ is implicitly defined by \begin{equation*} \int_V M(\varphi)\int_0^{+\infty}\mathrm{exp}\left(-\int_0^t \left(1+\mathcal{H}(p)-v(\theta-s,\varphi)\cdot p\right)ds\right)dt\frac{\mathrm{sin}(\varphi)d\theta d\varphi}{4\pi}=1, \end{equation*} and if $p\in \mathrm{Sing}(M,\Gamma)$, then $\mathcal{H}(p)=\left\vert p\cdot e_3 \right\vert-1$, where $e_3=(0,0,1)$. \end{example} One can find a proof of this result in \cite{caillerie_stochastic_2017}, Chapter 3. Let us emphasize that the addition of the force term is a singular perturbation in our Hamilton-Jacobi framework since the hamiltonian of \Cref{chap3} is different, at least on $\mathrm{Sing}(M,\Gamma)$, from the one obtained when $\Gamma\equiv 0$. \subsection{Properties of the hamiltonian} \begin{proposition}\label{lipschitz} The hamiltonian has the following properties: \begin{enumerate} \item[(i)] $0\in \mathrm{Sing}(M,\Gamma)^c$ and $\mathcal{H}(0)=0$. \item[(ii)] $\mathcal{H}$ is continuous on $\mathbb{R}^d$ and $C^1$ on $\mathbb{R}^d\setminus\mathrm{Sing}(M,\Gamma)$. \item[(iii)] $\mathcal{H}$ is Lipschitz-continuous. \end{enumerate} \end{proposition} \begin{proof} (i) This result is trivial once one notices that \begin{equation*} \int_V \tilde{M}(v) \int_0^{+\infty}\frac{M(\phi^v_t)}{\tilde{M}(\phi^v_t)}\mathrm{exp}\left(-\int_0^t \frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}ds\right)dtd\nu(v)=\int_V \tilde{M}(v)\frac{M(v)}{\tilde{M}(v)}d\nu(v)=1. \end{equation*} (ii) On $\mathrm{Sing}(M,\Gamma)^c$, the function $\mathcal{H}$ is implicitly defined by the relation \begin{equation}\label{implicit1} \int_V \tilde{M}(v)Q_{p,\mathcal{H}(p)}(v)d\nu(v)=1.
\end{equation} On $\overset{\circ}{\mathrm{Sing}(M,\Gamma)}$, $\mathcal{H}(p)$ is implicitly defined by the relation \begin{equation}\label{implicit2} \int_V \tilde{M}(v)Q_{p,\mathcal{H}(p)}(v)d\nu(v)-\underset{H\in \mathfrak{B}(p)}{\mathrm{max}}\left\{ \int_V \tilde{M}(v)Q_{p,H}(v)d\nu(v) \right\}=0, \end{equation} where $\mathfrak{B}(p)=\left\{ H\in \mathbb{R},\quad\int_V \tilde{M}(v)Q_{p,H}(v)d\nu(v)<+\infty\right\}$. Hence, by the implicit function theorem, $\mathcal{H}$ is $C^1$ on $\overset{\circ}{\mathrm{Sing}(M,\Gamma)}\cup \mathrm{Sing}(M,\Gamma)^c=\mathbb{R}^d\setminus \partial\mathrm{Sing}(M,\Gamma)$. Moreover, since for all $p\in \partial\mathrm{Sing}(M,\Gamma)$, \begin{equation*} \underset{H\in\mathfrak{B}(p)}{\mathrm{max}}\left\{ \int_V \tilde{M}(v)Q_{p,H}(v)d\nu(v) \right\}=1, \end{equation*} we also conclude that $\mathcal{H}$ is continuous on $\mathbb{R}^d$. (iii) Differentiating \eqref{implicit1} and \eqref{implicit2} with respect to $p$ and recalling \eqref{defQ}, we get for all $p\in \mathbb{R}^d\setminus \partial \mathrm{Sing}(M,\Gamma)$, \begin{equation*} \int_V \tilde{M}(v)\int_0^{+\infty} \frac{M(\phi^v_t)}{\tilde{M}(\phi^v_t)} \left(\int_0^t \left(\nabla\mathcal{H}(p)-\phi^v_s\right)ds\right)\mathrm{exp}\left( -\int_0^t \left(\frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+\mathcal{H}(p)-\phi^v_s\cdot p\right)ds \right)dtd\nu(v)=0. \end{equation*} Hence, $\left\Vert \nabla \mathcal{H} \right\Vert_{\infty}\leq \underset{v\in V}{\mathrm{sup}}\;\left\vert v \right\vert$ on $\mathbb{R}^d\setminus \partial \mathrm{Sing}(M,\Gamma)$, from which we deduce that $\mathcal{H}$ is Lipschitz-continuous. \end{proof} \section{Convergence to the Hamilton-Jacobi limit} \subsection{A priori estimates}\label{apriori} \begin{proposition}\label{aprioriprop} Let us assume that (\ref{infM}) and (\ref{divgamma}) hold and that $\tilde{M}$ satisfies (\ref{tm}). Let $\varphi^{\varepsilon}$ satisfy \Cref{main}. Let us assume that the initial condition is well-prepared: $\varphi^{\varepsilon}(0,x,v)=\varphi_0(x)\geq 0$. Then, $\varphi^{\varepsilon}$ is uniformly bounded with respect to $x$, $v$, and $\varepsilon$. More precisely, for all $0\leq t \leq T$, \begin{equation} 0\leq \varphi^{\varepsilon}(t,\cdot,\cdot) \leq \left\Vert \varphi_0 \right\Vert_{\infty} + \frac{\mathrm{max}\,M}{\mathrm{min}\,\tilde{M}}T. \end{equation} \end{proposition} \begin{proof} Let us recall that, under assumptions (\ref{infM}) and (\ref{divgamma}), the results of \Cref{borneinf} and \Cref{Lemmma} hold. Let $\left(\mathcal{X}^{\varepsilon},\mathcal{V}^\varepsilon\right)$ be the characteristics associated with (\ref{main}): \begin{equation*} \begin{cases} \overset{\cdot}{\mathcal{X}_{s,t}^{x,v}}=\mathcal{V}_{s,t}^{x,v},\\ \mathcal{X}^{x,v}_{t,t}=x,\\ \overset{\cdot}{\mathcal{V}_{s,t}^{x,v}}=\frac{\Gamma\left(\mathcal{V}_{s,t}^{x,v}\right)}{\varepsilon},\\ \mathcal{V}_{t,t}^{x,v}=v. \end{cases} \end{equation*} Here, we dropped the $\varepsilon$ superscript for readability reasons.
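Note that, by the definition of $\phi$ as the flow of $-\Gamma$ (see (\ref{flow})), the velocity characteristic is simply this flow rescaled in time: $\mathcal{V}_{s,t}^{x,v}=\phi^{v}_{(t-s)/\varepsilon}$ for all $0\leq s\leq t$, since both sides are equal to $v$ at $s=t$ and solve the same differential equation in $s$. In particular, $\mathcal{V}_{s,t}^{x,v}\in V$ for all $0\leq s\leq t$.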
Using the method of characteristics, we get the following relation \begin{eqnarray*} \varphi^{\varepsilon}(t,x,v)&=&\varphi_0(\mathcal{X}^{x,v}_{0,t})\\ &+&\int_0^t\frac{M\left(\mathcal{V}_{s,t}^{x,v}\right)}{\tilde{M}\left(\mathcal{V}_{s,t}^{x,v}\right)}\int_V\tilde{M}(v')\left(1-\mathrm{exp}\left(\frac{\varphi^\varepsilon\left(s,\mathcal{X}^{x,v}_{s,t},\mathcal{V}^{x,v}_{s,t}\right)-\varphi^{\varepsilon}\left(s,\mathcal{X}^{x,v}_{s,t},v'\right)}{\varepsilon}\right)\right)d\nu(v')ds\\ & \leq & \varphi_0(\mathcal{X}^{x,v}_{0,t})+ \int_0^t\frac{M\left(\mathcal{V}_{s,t}^{x,v}\right)}{\tilde{M}\left(\mathcal{V}_{s,t}^{x,v}\right)}\int_V\tilde{M}(v')d\nu(v')ds\\ & \leq & \left\Vert \varphi_0\right\Vert_{\infty} + \int_0^T \frac{\mathrm{max}\,M}{\mathrm{min}\,\tilde{M}}\int_V \tilde{M}(v')d\nu(v')ds= \left\Vert \varphi_0 \right\Vert_{\infty} + \frac{\mathrm{max}\,M}{\mathrm{min}\,\tilde{M}}T, \end{eqnarray*} so we have an upper bound on $\varphi^\varepsilon$. We get the lower bound by noticing that $0$ trivially satisfies \eqref{main}. \end{proof} \subsection{Proof of \Cref{thm}} In this section, we prove \Cref{thm} using the half-relaxed limits method of Barles and Perthame \cite{barles_exit_1988} in the same spirit as in \cite{bouin_spreading_2017}. Additionally, we use the perturbed test function method of Evans \cite{evans_perturbed_1989}, following the same ideas as in \cite{bouin_kinetic_2012,bouin_spreading_2017,caillerie_large_2017-1}. Thanks to \Cref{aprioriprop}, the sequence $(\varphi^\varepsilon)_{\varepsilon}$ is uniformly bounded in $L^\infty$ with respect to $\varepsilon$. We can thus define its lower and upper semi-continuous envelopes: \begin{equation}\label{envelopes} \varphi^*(t,x,v) = \limsup_{\substack{\varepsilon \to 0\\(s,y,w) \to (t,x,v)}} \varphi^\varepsilon(s,y,w), \qquad \varphi_*(t,x,v) = \liminf_{\substack{\varepsilon \to 0\\(s,y,w) \to (t,x,v)}} \varphi^\varepsilon(s,y,w). \end{equation} We will prove that $\varphi^*$ and $\varphi_*$ are respectively a sub- and a super-solution of the Hamilton-Jacobi equation. In order to do that, we need to prove that neither function depends on the velocity variable. For this, we will use a proof similar to that of \cite{bouin_spreading_2017}. We write it here for the sake of completeness. \begin{lemma}\label{indep} Both $\varphi^*$ and $\varphi_*$ are constant with respect to the velocity variable on $\mathbb{R}^*_+\times\mathbb{R}^d$. \end{lemma} \begin{proof} Let $(t^0,x^0,v^0)\in\mathbb{R}^*_+\times\mathbb{R}^d\times V$ and $\psi\in C^1\left(\mathbb{R}^*_+\times\mathbb{R}^d\times V\right)$ be a test function such that $\varphi^*-\psi$ has a strict local maximum at $(t^0,x^0,v^0)$. Then, there exists a sequence $(t^\varepsilon,x^\varepsilon,v^\varepsilon)$ such that $\varphi^\varepsilon-\psi$ attains a local maximum at $(t^\varepsilon,x^\varepsilon,v^\varepsilon)$ and such that $(t^\varepsilon,x^\varepsilon,v^\varepsilon)\to(t^0,x^0,v^0)$. Thus, $\mathrm{lim}_{\varepsilon\to0}\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon)=\varphi^*(t^0,x^0,v^0)$. Moreover, at point $(t^\varepsilon,x^\varepsilon,v^\varepsilon)$, we have: \begin{equation*} \partial_t \psi + v^{\varepsilon} \cdot \nabla_x \psi +\frac{\Gamma(v^\varepsilon)}{\varepsilon}\cdot \nabla_v \psi = \frac{M(v^\varepsilon)}{\tilde{M}(v^\varepsilon)}\int_V \tilde{M}(v')\left(1- e^{\frac{\varphi^{\varepsilon} -\varphi^{\varepsilon'}}{\varepsilon}} \right)d\nu(v').
\end{equation*} From this, and using the fact that \begin{equation*} 0<\mathrm{min}\,M\leq M\leq \mathrm{max}\,M<+\infty, \end{equation*} \begin{equation*} 0<\mathrm{min}\,\tilde{M}\leq \tilde{M}\leq \mathrm{max}\,\tilde{M}<+\infty, \end{equation*} we deduce that $\varepsilon\int_{V'} \tilde{M}(v') e^{\frac{\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon) -\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v')}{\varepsilon}} d\nu(v')$ is uniformly bounded for all $V'\subset V$. By the Jensen inequality, \begin{equation*} \varepsilon \mathrm{exp}\left(\frac{1}{\varepsilon\left\vert V' \right\vert_{\tilde{M}}}\int_{V'}\tilde{M}(v')\left(\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon)-\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v')\right)d\nu(v')\right)\leq \frac{\varepsilon}{\left\vert V' \right\vert_{\tilde{M}}} \int_{V'}\tilde{M}(v')e^{\frac{\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon) -\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v')}{\varepsilon}} d\nu(v'), \end{equation*} where $\left\vert V' \right\vert_{\tilde{M}}:=\mathrm{div}splaystyle{\int_{V'}\tilde{M}(v')d\nu(v')}$. We deduce that \begin{equation*} \underset{\varepsilon\to0}{\mathrm{lim}\,\mathrm{sup}} \int_{V'}\tilde{M}(v')\left(\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon)-\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v')\right)d\nu(v') \leq0 \end{equation*} We write \begin{align*} \int_{V'} \tilde{M}(v')\left(\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v') \right) d\nu(v') &= \int_{V'} \tilde{M}(v')\left[\left( \varphi^{\varepsilon}(v^{\varepsilon}) - \psi(v^{\varepsilon})\right) - \left( \varphi^{\varepsilon}(v') - \psi(v') \right) + \left( \psi(v^{\varepsilon}) - \psi(v')\right)\right] d\nu(v')\\ &= \int_{V'} \tilde{M}(v')\left[\left( \varphi^{\varepsilon}(v^{\varepsilon}) - \psi(v^{\varepsilon})\right) - \left( \varphi^{\varepsilon}(v') - \psi(v') \right) \right] d\nu(v')\\ & + \int_{V'} \tilde{M}(v') \left( \psi(v^{\varepsilon}) - \psi(v')\right) d\nu(v') \end{align*} We can thus use the Fatou Lemma, together with $- \limsup_{\varepsilon \to 0} \varphi^{\varepsilon}(t^{\varepsilon},x^{\varepsilon},v') \geq - \varphi^*(t^0,x^0,v')$ to get \begin{align*} \left(\int_{V'} \tilde{M}(v')d\nu(v')\right) \varphi^*(v^0) - \int_{V'}\tilde{M}(v')\varphi^*(v') d\nu(v') & = \int_{V'} \tilde{M}(v')\left(\varphi^*(v^0) -\varphi^*(v') \right) d\nu(v') \\ &\leq \int_{V'} \tilde{M}(v') \liminf_{\varepsilon \to 0} \left(\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v') \right) d\nu(v') \\ &\leq \liminf_{\varepsilon \to 0} \left( \int_{V'} \tilde{M}(v') \left(\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v') \right) d\nu(v') \right)\\ &\leq \limsup_{\varepsilon \to 0} \left( \int_{V'} \tilde{M}(v') \left(\varphi^{\varepsilon}(v^{\varepsilon}) -\varphi^{\varepsilon}(v') \right) d\nu(v') \right)\\ &\leq 0, \end{align*} We shall deduce, since the latter is true for any $\vert V' \vert$ that \begin{equation*} \varphi^*(t^0,x^0,v^0) \leq \inf_{V} \varphi^*(t^0,x^0,\cdot) \end{equation*} and thus $\varphi^*$ is constant in velocity. To prove that $\varphi_*$ is constant with respect to the velocity variable, we use the same technique with a test function $\psi$ such that $\varphi^\varepsilon-\psi$ has a local strict minimum at $(t^0,x^0,v^0)$. 
\end{proof} We shall now prove the following fact \begin{prop}\label{viscosity} Let $\varphi^\varepsilon$ be a solution of (\ref{main}) and let $\varphi^*$ and $\varphi_*$ be defined by (\ref{envelopes}). \begin{enumerate}\label{prop:semilimits} \item[(i)] The function $\varphi_*$ is a viscosity super-solution to the Hamilton-Jacobi equation \eqref{HJ} on $\mathbb{R}_+^* \times \mathbb{R}^n$. \item[(ii)] The function $\varphi^*$ is a viscosity sub-solution to the Hamilton-Jacobi equation \eqref{HJ} on $\mathbb{R}_+^* \times \mathbb{R}^n$. \end{enumerate} \end{prop} \begin{proof} (i) Let $\psi$ be a test function such that $\varphi_*-\psi$ has a local minimum at point $(t^0,x^0)\in\mathbb{R}_+^*\times\mathbb{R}^d$. We set $p^0:=\nabla_x \psi(t^0,x^0)$. For all $H\geq\mathcal{H}(p^0)$, let us define $\psi^{\varepsilon}_H:=\psi+\varepsilon\eta_H$, where $\eta_H:=-\mathrm{log}(Q_{p^0,H})$ and \begin{equation}\label{QQ} Q_{p^0,H}(v):=\int_0^{+\infty}\frac{M(\phi^v_t)}{\tilde{M}(\phi^v_t)}\mathrm{exp}\left(-\int_0^t \left(\frac{M(\phi^v_s)}{\tilde{M}(\phi^v_s)}+H-\phi^v_s\cdot p^0\right)ds\right)dt, \quad\forall v\in V. \end{equation} For all $H>\mathcal{H}(p^0)$, by construction of $\eta_H$, we have \begin{equation*} \int_V \tilde{M}'e^{-\eta'_H}d\nu(v')= \int_V \tilde{M}'Q'_{p^0,H}d\nu(v')<\int_V \tilde{M}'Q'_{p^0,\mathcal{H}(p^0)}d\nu(v')= 1,\quad\mbox{if }p^0\notin\mathrm{Sing}(M,\Gammaamma), \end{equation*} or \begin{equation*} \int_V \tilde{M}'e^{-\eta'_H}d\nu(v')= \int_V \tilde{M}'Q'_{p^0,H}d\nu(v')<\int_V \tilde{M}'Q'_{p^0,\mathcal{H}(p^0)}d\nu(v')\leq 1,\quad\mbox{if }p^0\in\mathrm{Sing}(M,\Gammaamma). \end{equation*} Moreover, $Q_{p^0,H}\in C^1(V)$ and \begin{equation}\label{QQQ} Q_{p^0,H}\left(\frac{M(v)}{\tilde{M}(v)}+H-v\cdot p^0\right)+\Gamma\cdot \nabla_v Q_{p^0,H}=\frac{M(v)}{\tilde{M}(v)},\quad \forall v\in V. \end{equation} By uniform convergence of $\psi_H^\varepsilon$ toward $\psi$ and by the definition of $\varphi_*$, the function $\varphi^\varepsilon-\psi^\varepsilon_H$ has a local minimum located at a point $(t^\varepsilon,x^\varepsilon,v^\varepsilon)\in\mathbb{R}_+^*\times\mathbb{R}^d\times V$, satisfying $t^\varepsilon\to t^0$ and $x^\varepsilon \to x^0$. The extremal property of $(t^\varepsilon,x^\varepsilon,v^\varepsilon)$ implies that \begin{equation*} \partial_t\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon)=\partial_t\psi^{\varepsilon}_H(t^\varepsilon,x^\varepsilon,v^\varepsilon),\quad \nablabla_x\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon)=\nablabla_x\psi^{\varepsilon}_H(t^\varepsilon,x^\varepsilon,v^\varepsilon). \end{equation*} Moreover, we have \begin{equation*} \Gammaamma(v^\varepsilon)\cdot\nablabla_v\varphi^{\varepsilon}(t^\varepsilon,x^\varepsilon,v^\varepsilon)=\Gammaamma(v^\varepsilon)\cdot\nablabla_v\psi^{\varepsilon}_H(t^\varepsilon,x^\varepsilon,v^\varepsilon). \end{equation*} Indeed, if $v^\varepsilon\in \overset{\circ}{V}$ or $\Gammaamma(v^\varepsilon)=0$, then the result is trivial. If $v^\varepsilon\in \partial V$ and $\Gammaamma(v^\varepsilon)\neq 0$, since $\Gammaamma(v)\cdot d\overrightarrow{S}(v)=0$ for all $v\in \partial V$, there exists $v_0\in V$, $v_1 \in V$ and $\delta>0$ such that \begin{equation*} \begin{cases} \phi^{v^\varepsilon}_s\in V,& \forall s\in [-\delta,\delta],\\ \phi^{v^\varepsilon}_{-\delta} = v_0,\\ \phi^{v^\varepsilon}_{\delta}=v_1. 
\end{cases} \end{equation*} The extremal property of $(t^\varepsilon,x^\varepsilon,v^\varepsilon)$ now implies that \begin{equation*} \Gammaamma(v^\varepsilon)\cdot \nablabla_v(\varphi^\varepsilon-\psi^\varepsilon_H)(t^\varepsilon,x^\varepsilon,v^\varepsilon)=-\left. \frac{d}{ds}\left(\varphi^\varepsilon-\psi^\varepsilon_H\right)(t^\varepsilon,x^\varepsilon,\phi^{v^\varepsilon}_s)\right\vert_{s=0}=0. \end{equation*} Finally, since $V$ is a compact set, we know that there exists $v^*\in V$ and a subsequence of $(v^\varepsilon)_\varepsilon$, which we will not relabel, such that $v^\varepsilon\to v^*$. At point $(t^\varepsilon,x^\varepsilon,v^\varepsilon)$, we have: \begin{eqnarray} \partial_t \psi + v^\varepsilon \cdot \nablabla_x \psi +\Gammaamma(v^\varepsilon)\cdot \nabla_v \eta_H & = & \partial_t \psi^\varepsilon_H + v^\varepsilon\cdot \nabla_x \psi^\varepsilon_H +\frac{\Gammaamma(v^\varepsilon)}{\varepsilon}\cdot \nabla_v \psi^\varepsilon_H \nonumber \\ &\geq&\partial_t \varphi^\varepsilon + v^\varepsilon \cdot \nabla_x \varphi^\varepsilon +\frac{\Gamma(v^\varepsilon)}{\varepsilon}\cdot \nablabla_v \varphi^\varepsilon \label{inneqa}\\ &=&\frac{M(v^\varepsilon)}{\tilde{M}(v^\varepsilon)}\left(1-\int_V \tilde{M}'e^{\frac{\varphi^\varepsilon-\varphi^{\varepsilon'}}{\varepsilon}}d\nu(v')\right).\nonumber \end{eqnarray} By the minimal property of $(t^\varepsilon,x^\varepsilon,v^\varepsilon)$, we can estimate the right-hand side of the last equation, such that \begin{eqnarray} \partial_t \psi+v^\varepsilon\cdot\nabla_x\psi+\Gamma(v^\varepsilon)\cdot\nabla_v \eta_H & \geq& \frac{M(v^\varepsilon)}{\tilde{M}(v^\varepsilon)}\left(1-\int_V \tilde{M}(v')e^{\eta_H(v^\varepsilon)-\eta_H(v')}d\nu(v')\right)\label{inneqb}\\ &\geq & \frac{M(v^\varepsilon)}{\tilde{M}(v^\varepsilon)}\left(1-e^{\eta_H(v^\varepsilon)}\right)\label{inneqc}\\ &=&\frac{M(v^\varepsilon)}{\tilde{M}(v^\varepsilon)}\left(1-\frac{1}{Q_{p^0,H}(v^\varepsilon)}\right),\nonumber \end{eqnarray} so we have at point $(t^\varepsilon,x^\varepsilon,v^\varepsilon)$, \begin{equation*} Q_{p^0,H}(v^\varepsilon)\left(\frac{M(v^\varepsilon)}{\tilde{M}(v^\varepsilon)}-\partial_t \psi-v^\varepsilon\cdot \nablabla_x\psi\right)+\Gammaamma(v^\varepsilon)\cdot\nablabla_v Q_{p^0,H}(v^\varepsilon)\leq \frac{M(v^\varepsilon)}{\tilde{M}(v^\varepsilon)}. \end{equation*} Taking the limit $\varepsilon\to 0$, we get at point $(t^0,x^0,v^*)$, \begin{equation}\label{finQQ} Q_{p^0,H}(v^*)\left(\frac{M(v^*)}{\tilde{M}(v^*)}-\partial_t \psi-v^*\cdot p^0\right)+\Gammaamma(v^*)\cdot\nablabla_v Q_{p^0,H}(v^*)\leq \frac{M(v^*)}{\tilde{M}(v^*)}. \end{equation} Combining \eqref{QQQ} and \eqref{finQQ} at $v=v^*$, we get \begin{equation*} \partial_t \psi(t^0,x^0)+H \geq 0. \end{equation*} Since this is true for any $H>\mathcal{H}(p^0)$, we finally have \begin{equation*} \partial_t \psi(t^0,x^0)+\mathcal{H}(p^0) \geq 0, \end{equation*} which proves (i). (ii) Let $\psi$ be a test function such that $\varphi^*-\psi$ has a global strict maximum at a point $(t^0,x^0)\in\mathbb{R}_+^*\times\mathbb{R}^d$. We still denote $p^0=\nabla_x \psi(t^0,x^0)$. {\bf \# First case: $p^0 \notin\mathrm{Sing}\,\left(M,\Gammaamma\right)$} Then, from the very definition of $\mathrm{Sing}(M,\Gammaamma)$ (check \Cref{defiprincipale}), there exists $H_0<\mathcal{H}(p^0)$ such that, for all $H_0<H<\mathcal{H}(p^0)$, \begin{equation}\label{thanksto} +\infty>\int_V \tilde{M}'Q'_{p^0,H}d\nu(v')>\int_V \tilde{M}'Q'_{p^0,\mathcal{H}(p^0)}d\nu(v')=1, \end{equation} using the same notation as earlier. 
We can then conclude using the same arguments as in the proof of (i). We emphasize that the estimates (\ref{inneqa}) and (\ref{inneqb}) are reversed in this ``maximum'' case and that (\ref{inneqc}) is reversed thanks to (\ref{thanksto}). {\bf \# Second case: $p^0 \in\mathrm{Sing}\,\left(M,\Gamma\right)$} Thanks to \Cref{utile}, there exists $v_0\in V$ such that $Q_{p,\mathcal{H}(p)}(v_0)=+\infty$ and that either $v_0$ is a fixed point of the flow of $-\Gamma$, \em i.e. \em $\Gamma(v_0)=0$, or $v_0$ belongs to a periodic orbit of the flow. Suppose that $v_0$ is a fixed point. Then, after \Cref{utile2}, we have \begin{equation}\label{fatal} \frac{M(v_0)}{\tilde{M}(v_0)}+\mathcal{H}(p)-v_0\cdot p=0. \end{equation} Moreover, the function $(t,x)\mapsto \varphi^\varepsilon(t,x,v_0) - \psi(t,x)$ has a local maximum at a point $(t^\varepsilon,x^\varepsilon)$ and, by definition of $\varphi^*$, we have $t^\varepsilon\to t^0$ and $x^\varepsilon \to x^0$. By the maximal property of $(t^\varepsilon,x^\varepsilon)$, we have at point $(t^\varepsilon,x^\varepsilon,v_0)$, \begin{eqnarray*} \partial_t \psi (t^\varepsilon,x^\varepsilon)+v_0 \cdot \nabla_x \psi(t^\varepsilon,x^\varepsilon) + 0&=&\partial_t \psi (t^\varepsilon,x^\varepsilon)+v_0 \cdot \nabla_x \psi(t^\varepsilon,x^\varepsilon) + \frac{\Gamma(v_0)}{\varepsilon}\cdot \nabla_v \varphi^\varepsilon(t^\varepsilon,x^\varepsilon,v_0)\\ &=&\frac{M(v_0)}{\tilde{M}(v_0)}\int_V \tilde{M}' \left(1-e^{\frac{\varphi^\varepsilon-\varphi'^\varepsilon}{\varepsilon}}\right)d\nu'\\ &\leq&\frac{M(v_0)}{\tilde{M}(v_0)}. \end{eqnarray*} Taking the limit $\varepsilon\to0$ and recalling (\ref{fatal}), we get \begin{equation*} \partial_t \psi(t^0,x^0)+\mathcal{H}\left(\nabla_x\psi(t^0,x^0)\right)\leq 0, \end{equation*} which proves that $\varphi^*$ is a viscosity sub-solution of (\ref{HJ}). Suppose now that $v_0$ belongs to a periodic orbit. At point $(t,x,\phi^{v_0}_s)$, we have \begin{equation*} \partial_t \varphi^\varepsilon(\phi^{v_0}_s)+\phi^{v_0}_s\cdot \nabla_x \varphi^\varepsilon(\phi^{v_0}_s)+\frac{\Gamma(\phi^{v_0}_s)}{\varepsilon}\cdot \nabla_v \varphi^\varepsilon(\phi^{v_0}_s)\leq\frac{M(\phi^{v_0}_s)}{\tilde{M}(\phi^{v_0}_s)}. \end{equation*} Applying $\underset{t\to +\infty}{\mathrm{lim}}\,\displaystyle{\frac{1}{t}\int_0^t (\cdot)\, ds}$ to the latter expression gives \begin{equation*} \partial_t\varphi^\varepsilon(t,x,w)+w\cdot \nabla_x \varphi^\varepsilon(t,x,w)\leq \frac{M(w)}{\tilde{M}(w)}, \end{equation*} where $w(v_0)$ is the representative of the orbit defined by the mixing property (\ref{mixing}). Indeed, \begin{eqnarray*} \frac{1}{t}\int_0^t \frac{\Gamma(\phi^{v_0}_s)}{\varepsilon}\cdot \nabla_v \varphi^\varepsilon(\phi^{v_0}_s)ds=-\frac{1}{t}\int_0^t \frac{\overset{\cdot}{\phi^{v_0}_s}}{\varepsilon}\cdot \nabla_v \varphi^\varepsilon(\phi^{v_0}_s)ds =-\frac{\varphi^\varepsilon(\phi^{v_0}_t)-\varphi^\varepsilon(\phi^{v_0}_0)}{t\varepsilon}\underset{t\to+\infty}{\longrightarrow}0. \end{eqnarray*} After \Cref{utile2}, we know that $\frac{M(w)}{\tilde{M}(w)}+\mathcal{H}(p)-w\cdot p=0$, so we can conclude as in the previous case by considering the function $(t,x)\mapsto \varphi^\varepsilon(t,x,w) - \psi(t,x)$. \end{proof} We can now conclude the proof of \Cref{thm}.
\begin{proof}[{\bf Proof of \Cref{thm}}] We refer to Section 4.4.5 in \cite{barles_solutions_1994} and Theorem B.1 in \cite{evans_pde_1989} for arguments giving strong uniqueness (which means that there exists a comparison principle for sub- and super-solutions) of \Cref{HJ} in the viscosity sense. We emphasize that the Lipschitz continuity proven in \Cref{lipschitz} is sufficient for these results. Thanks to \Cref{viscosity}, as $\varphi^*$ and $\varphi_*$ are respectively a sub- and a super-solution of the Hamilton-Jacobi \Cref{HJ}, the comparison principle yields $\varphi^*\leq \varphi_*$. However, from their definitions, it is clear that $\varphi^*\geq \varphi_*$. Hence, the function $\varphi^0:=\varphi^*=\varphi_*$ is the viscosity solution of \Cref{HJ} and $(\varphi^\varepsilon)_\varepsilon$ converges uniformly locally as $\varepsilon\to0$ to $\varphi^0$, which concludes the proof. \end{proof} \end{document}
\begin{document} \title{Properness Without Elementaricity} {\it Abstract.} We present reasons for developing a theory of forcing notions which satisfy the properness demand for countable models which are not necessarily elementary submodels of some $({\cal H}(\chi),\in)$. This leads to forcing notions which are ``reasonably" definable. We present two specific properties materializing this intuition: nep (non-elementary properness) and snep (Souslin non-elementary properness). For this we consider candidates (countable models to which the definition applies), and the older Souslin proper. A major theme here is ``preservation by iteration'', but we also show a dichotomy: if such forcing notions preserve the positiveness of the set of old reals for some naturally define c.c.c.\ ideals, then they preserve the positiveness of any old positive set. We also prove that (among such forcing notions) the only one commuting with Cohen is Cohen itself. \setcounter{section}{-1} \stepcounter{section} \subsection*{Annotated Content}\ \noindent {\bf Section 0:\quad Introduction}\quad We present reasons for developing the theory of forcing notions which satisfy the properness demand for countable models which are not necessarily elementary submodels of some $({\cal H}(\chi),\in)$. This will lead us to forcing notions which are ``reasonably" definable. \noindent {\bf Section 1:\quad Basic definitions}\quad We present two specific properties materializing this intuition: nep (non-elementary properness) and snep (Souslin non-elementary properness). For this we consider candidates (countable models to which the definition applies), and the older Souslin proper. \noindent {\bf Section 2:\quad Connections between the basic definitions}\quad We point out various implications (snep implies nep, etc.). We also point out how much the properties are absolute. \noindent {\bf Section 3:\quad There are examples}\quad We point out that not just the reasonably definable forcing notions in use fit our framework, but that all the general theorems of Ros{\l}anowski Shelah \cite{RoSh:470}, which prove properness, actually prove the stronger properties introduced earlier. \noindent {\bf Section 4:\quad Preservation under iteration: first round}\quad First we address a point we ignored earlier (it was not needed, but is certainly part of our expectations). In the definition of ``$q$ is $(N,{\Bbb Q})$-generic" predensity of each ${\cal I}\in{\rm pd}(N,{\Bbb Q})$ was originally designed to enable us to say things on $N[\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}]$, i.e.\ $N[G_{{\Bbb Q}}] \cap{\cal H}(\chi)^{{\bf V}}=N$, but we should be careful saying what we intend by $N[G_{{\Bbb Q}}]$ now, so we replace it by $N\langle\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\rangle$. The preservation theorem \ref{2.5} says that CS iterations of nep forcing notions have the main property of nep. For this we define $p^{\langle\langle N\rangle \rangle}$ if $N \models$`` $p \in{\rm Lim}(\bar{\Bbb Q})$ ''. We also define and should consider (\ref{2.3A}) the ``$K$-absolute nep". \noindent {\bf Section 5:\quad True preservation theorems}\quad We consider two closure operations of nep forcing notions $({\rm cl}_1,{\rm cl}_2)$, investigate what is preserved and what is gained and prove a general preservation theorem (\ref{4.9}). This is done for the ``straight" version of nep. 
\noindent {\bf Section 6:\quad When a real is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over ${\bf V}$}\quad We define the class ${\cal K}$ of pairs $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$, in particular when $\mathunderaccent\tilde-3 {\eta}$ is the generic real for ${\Bbb Q}$, and how nice is the subforcing ${\Bbb Q}'$ of ${\Bbb Q}$ generated by $\mathunderaccent\tilde-3 {\eta}$. \noindent {\bf Section 7:\quad Preserving a little implies preserving much} \quad We are interested in the preservation of the property (of forcing notions) ``retaining positiveness modulo the ideal derived from a c.c.c. nep forcing notion'', e.g.\ being non-null (by forcing notions which are not necessarily c.c.c.). In \cite[Ch.VI,\S1,\S2,\S3, Ch.XVIII,\S3]{Sh:f} this is dealt with but mainly in the limit case. Our main aim is to show that for ``nice'' enough forcing notion we have a dichotomy (which implies preservation under e.g.\ CS iterations (of proper forcing) of the property above) retaining the positiveness of ${}^{\textstyle \omega}\omega$ (or in general every positive Borel set) implies retaining the positiveness of any $X \subseteq {}^{\textstyle \omega}\omega$. \noindent {\bf Section 8:\quad Non-symmetry} \quad We start to investigate for c.c.c. nep forcing: when does ``if $\eta_0$ is $({\Bbb Q}_0,\mathunderaccent\tilde-3 {\eta}_0)$-generic over $N$ and $\eta_1$ is $({\Bbb Q}_1, \mathunderaccent\tilde-3 {\eta}_1)$-generic over $N[\eta_0]$ then $\mathunderaccent\tilde-3 {\eta}_1$ is $({\mathunderaccent\tilde-3 {\Bbb Q}}_0, \mathunderaccent\tilde-3 {\eta}_0)$-generic over $N[\eta_1]$''? This property is known for Cohen reals and random reals above. \noindent {\bf Section 9:\quad Poor Cohen commute only with himself} \quad We prove that commuting with Cohen is quite rare. In fact, c.c.c. Souslin forcing which adds $\mathunderaccent\tilde-3 {\eta}$ which is (absolutely) nowhere essentially Cohen does not commute with Cohen. So such forcing makes the set of old reals meagre. \noindent {\bf Section 10:\quad Some c.c.c.\ nep forcing notions are not nice} \quad We define such forcing notions which are not essentially Cohen as long as $\aleph_1$ is not too large in ${\bf L}$. This shows that ``c.c.c. Souslin'' cannot be outright replaced by ``absolutely c.c.c. nep''. \noindent {\bf Section 11:\quad Preservation of ``no dominating real''} \quad We would like to strengthen the main conclusion of \S7, (that retaining of positiveness is preserved by composition) of nice forcing notions (i.e. if each separately has it, then so does its composition) to additional natural ideals, mainly the one mentioned in the title, which does not flatly fall into the context of \S7. Though \ref{9.2} contains a counterexample, we prove it for ``nice'' enough forcing notions. \noindent {\bf Section 12:\quad Open problems} \quad We formulate several open questions. \stepcounter{section} \subsection*{\quad 0. Introduction} The thema of \cite{Sh:b}, \cite{Sh:f} is: \begin{thesis} \label{0.1} It is good to have general theory of forcings, particularly for iterated forcing. \end{thesis} Some years ago, Judah asked me a question (on inequalities on cardinal invariants of the continuum). 
Looking for a forcing proof, we arrived at the following question: \begin{question} Would it not be nice to have a theory of forcing notions ${\Bbb Q}$ such that: \begin{enumerate} \item[$(\oplus)$] {\em if} ${\Bbb Q}\in N\subseteq ({\cal H}(\chi),\in)$, $N$ a countable model of ${\rm ZFC}^-$ and $p\in N\cap{\Bbb Q}$, {\em then} there is $q\in{\Bbb Q}$ which is $(N,{\Bbb Q})$-generic. \end{enumerate} \end{question} Note the absence of $\prec$ (i.e.\ $N$ is just a submodel of $({\cal H}(\chi), \in)$), which is the difference between this property and ``properness'', and is alluded to in the name of this paper. This evolved into ``Souslin proper forcing'' (see \ref{0.7}) in Judah and Shelah \cite{JdSh:292}, which was continued in Goldstern Judah \cite{GoJu}. There are still some additional desirable properties (absent there): \begin{enumerate} \item[(a)] many ``nicely defined'' forcing notions do not satisfy ``Souslin proper'', in fact not so esoteric ones: the Sacks forcing, the Laver forcing; \item[(b)] actual preservation by CS iteration was not proved, just the desired conclusion $(\oplus)$ holds for ${\Bbb P}_\alpha$ when $\langle {\Bbb P}_i, {\mathunderaccent\tilde-3 {\Bbb Q}}_j:i \leq\alpha,j<\alpha\rangle$ is a countable support iteration and $i< \alpha\quad \Rightarrow\quad \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_i}$`` ${\mathunderaccent\tilde-3 {\Bbb Q}}_i$ is a Souslin proper forcing notion''; \item[(c)] to prove for such forcing notions better preservation theorems when we add properties in addition to properness. \end{enumerate} Martin Goldstern asked me some years ago about the inadequacy of Souslin proper from clause (a). I suggested a version of the definitions here, and this was preliminarily announced in Goldstern \cite{Go}. The intention here is to include forcing notions with ``nice definition'' (not ones constructed by diagonalization like Baumgartner's ``all $\aleph_1$-dense sets of reals are isomorphic'' \cite{B4} or the forcing notions constructed for the oracle c.c.c., see \cite[Ch.IV]{Sh:b}, or forcing notions defined from an ultrafilter). Note that our treatment (nep/snep) in a sense stands between \cite{Sh:f} and Ros{\l}anowski Shelah \cite{RoSh:470}. In \cite{Sh:f} we like to have theorems on iterations $\bar{{\Bbb Q}}$, mainly CS, getting results on the whole ${\rm Lim}(\bar{{\Bbb Q}})$ from assumptions on each ${\Bbb Q}_i$, but with no closer look at ${\Bbb Q}_i$ -- by intention, as we would like to cover as much as we can. In Ros{\l}anowski Shelah \cite{RoSh:470} we deal with forcing notions which are quite concrete, usually built from countably many finite ``creatures'' (still, relative to specific forcing this is quite general). Here, our forcing notions are definable, but not in so specific a way as in \cite{RoSh:470}, which still provides examples (all are included), and the theorems are quite parallel to \cite{Sh:f}. So we are solving the ``equations'' $x/$``theory of manifolds'' = theory of proper forcing \cite{Sh:b},\cite{Sh:f}/general topology = theory of forcing based on creatures \cite{RoSh:470}/theory of manifolds in ${\Bbb R}^3$. \begin{thesis} \label{0.1B} ``Nice'' forcing notions which are proved to be proper normally satisfy (even by the same proof) the stronger demands defined in the next section. \end{thesis} \noindent{\bf History:}\qquad The paper is based on the author's lectures at Rutgers University in Fall 1996, which probably results in too many explanations. Answering Goldstern's question was mentioned above.
A version of \S8 (on non-symmetry) was done in Spring of '95 aiming at the symmetry question, and the rest in the Summer and Fall of '96. I thank the audience of the lectures for their remarks and mainly Andrzej Ros{\l}anowski for correcting the paper. \noindent{\bf Notation:}\qquad We try to keep our notation standard and compatible with that of classical textbooks on Set Theory (like Bartoszy\'nski Judah \cite{BaJu95} or Jech \cite{J}). However in forcing we keep the older tradition that {\em a stronger condition is the larger one}. For a regular cardinal $\chi$, ${\cal H}(\chi)$ stands for the family of sets which are hereditarily of size less than $\chi$. The collection of all sets which are hereditarily countable relatively to $\kappa$ is denoted by ${\cal H}_{{<}\aleph_1}(\kappa)$.\\ ${\rm Tc}^{\rm ord}(x)$ is defined by induction on ${\rm rk}(x)=\gamma$ as follows: if $\gamma =0$ then ${\rm Tc}^{\rm ord}(x)=x\cup \{x\}$, if $\gamma > 0$ then ${\rm Tc}^{\rm ord}(x)=x \cup \bigcup \{{\rm Tc}^{\rm ord}(y):y \in x,y \mbox{ not an ordinal}\} \cup \{x\}$.\\ So ${\cal H}_{{<}\aleph_1}(\kappa)=\{x\in{\cal H}(\kappa):{\rm Tc}^{\rm ord} (x)$ is countable$\/\}$.\\ We say that a set $M\subseteq{\cal H}(\chi)$ is ${\rm ord}$--transitive if \[x\in M\ \&\ x\mbox{ is not an ordinal}\quad\Rightarrow\quad x\subseteq M.\] \begin{notation} We will keep the following rules for our notation: \begin{enumerate} \item $\alpha,\beta,\gamma,\delta,\xi,\zeta, i,j\ldots$ will denote ordinals, \item $\theta,\kappa,\lambda,\mu,\chi\ldots$ will stand for cardinal numbers, $\theta \le \kappa$ if not said otherwise, \item a tilde indicates that we are dealing with a name for an object in forcing extension (like $\mathunderaccent\tilde-3 {x}$), \item a bar above a name indicates that the object is a sequence, usually $\bar{X}$ will be $\langle X_i: i<\ell g\/(\bar{X})\rangle$, where $\ell g\/(\bar{X})$ denotes the length of $\bar{X}$, \item For two sequences $\eta,\nu$ we write $\nu\vartriangleleft\eta$ whenever $\nu$ is a proper initial segment of $\eta$, and $\nu\trianglelefteq\eta$ when either $\nu\vartriangleleft\eta$ or $\nu=\eta$. The length of a sequence $\eta$ is denoted by $\ell g\/(\eta)$. \item A {\em tree} is a family of finite sequences closed under initial segments. For a tree $T$ the family of all $\omega$--branches through $T$ is denoted by $\lim(T)$. \item The Cantor space ${}^{\textstyle \omega}2$ and the Baire space ${}^{\textstyle \omega}\omega$ are the spaces of all functions from $\omega$ to $2$, $\omega$, respectively, equipped with natural (Polish) topology. \item The fix ``version'' ${\rm ZFC}^-_*$ should be such that the forcing theorem holds and for any large enough $\chi$, the set of $({\frak B},\bar{\varphi}, \theta)$--candidates (defined in \ref{0.1C}) is cofinal in $\{N:N\subseteq ({\cal H}(\chi),\in)\}$ and whatever we should use (fully see \ref{0.9}). \item $\frak C$, $\frak B\ldots$ will denote models (with some countable vocabulary). For a model $\frak C$, its universe is denoted $|{\frak C}|$ and its cardinality is $\|{\frak C}\|$. Usually $\frak C$'s universe is an ordinal $\alpha({\frak C})$ and $\kappa({\frak B})\subseteq |{\frak B}|\subseteq {\cal H}_{{<}\aleph_1}(\kappa({\frak B}))$, $\kappa({\frak B})$ a cardinal. \item $K$ will denote a family of forcing notions including the trivial one (so a $K$--forcing extension of ${\bf V}$ is ${\bf V}[G]$ when $G\subseteq {\Bbb P}\in K$ is generic over ${\bf V}$) and $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! 
\vdash }_{{\Bbb P}}$`` ${\mathunderaccent\tilde-3 {\Bbb Q}}\in K^{{\bf V}^{{\Bbb P}}}$ ''$\quad \Rightarrow \quad {\Bbb P}*{\mathunderaccent\tilde-3 {\Bbb Q}}\in K$. Usually $K$ is the class of (set) forcing notions. \end{enumerate} \end{notation} \stepcounter{section} \subsection*{\quad 1. Basic definitions} Let us try to analyze the situation. Our intuition is that: looking at ${\Bbb Q}$ inside $N$ we can construct a generic condition $q$ for $N$, but if $N\nprec ({\cal H}(\chi),\in)$, ${\Bbb Q}\cap N$ might be arbitrary. So let ${\Bbb Q}$ be a definition. What is the meaning of, say, $N\models$``$r\in {\Bbb Q}$''? It is $N\models$``$r$ satisfies $\varphi_0(-)$" for a suitable $\varphi_0$. It seems quite compelling to demand that inside $N$ we can say in some sense ``$r\in {\Bbb Q}$'', and as we would like to have \[q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }\mbox{`` }\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\cap{\Bbb Q}^N \mbox{ is a subset of }{\Bbb Q}^N \mbox{ generic over $N$ ''},\] we should demand \begin{enumerate} \item[$(*)_1$] $N\models$`` $r\in {\Bbb Q}$ '' implies ${\bf V}\models$`` $r\in {\Bbb Q}$ ''. \end{enumerate} So $\varphi_0$ (the definition of the set of members of ${\Bbb Q}$) should have this amount of absoluteness. Similarly we would like to have: \begin{enumerate} \item[$(*)_2$] if $N\models$`` $p_1\le_{{\Bbb Q}} p_2$ '' and $p_2\in G_{{\Bbb Q}}$ then $p_1 \in G_{{\Bbb Q}}$. \end{enumerate} So we would like to have a $\varphi_1$ (or $<^{\varphi_1}$) (the definition of the partial order of ${\Bbb Q}$) and to have this upward absoluteness for $\varphi_1$. But before we define this notion of properness without elementaricity, we should define the class of models to which it applies. We may have put in this section the ``straight nep'' (see \ref{4.8}) and/or ``absolute nep'' (see \ref{2.3A}). {\em Advice:\ } The reader may concentrate on the case of local correct explicit simply good and nep forcing notions which are normal (see Definitions \ref{0.1C}, \ref{0.2}(11), \ref{0.2}(2), \ref{0.2}(5), \ref{0.2}(1),(4), \ref{0.9}(3), \ref{0.9}(4), respectively). When we consider ``preservation by iteration'', it is natural to define the following: \begin{definition} \label{0.1C} \begin{enumerate} \item Let ${\rm ZFC}^-_*$ be a fixed version of set theory e.g.\ ${\rm ZFC}^- + ``\beth_7$ exists'' which may speak on $\frak C$ (or see more in the end of this section), and let ${\frak C}$ be a fixed model with countable vocabulary (say $\subseteq {\cal H}(\aleph_0)$) and universe an ordinal $\alpha = \alpha_*({\frak C})$ and let $\Delta$ be a fixed set of first order formulas in the vocabulary of ${\frak C}$ (closed under subformulas normally). Let ${\frak B}$ denote another such model (not fixed) but we may allow the universe to satisfy $\kappa({\frak B}) \subseteq |{\frak B}| \subseteq {\cal H}_{<\aleph_1}(\kappa({\frak B}))$ for some cardinal\footnote{We do not fix the order between $\alpha_*({\frak C})$ and $\kappa({\frak B})$, but there is no loss if we assume that $\theta\geq\alpha_*({\frak C})$, $\theta\geq\kappa({ \frak B})$.} $\kappa({\frak B})$. 
\item We say that $N$ is a class $({\frak B},{\bf p},\theta)$--candidate if: \begin{enumerate} \item[(a)] $N\subseteq ({\cal H}(\chi),\in)$ for some $\chi$, \item[(b)] $N$ is countable, \item[(c)] $N$ is a model of ${\rm ZFC}^-_*$, \item[(d)] ${\frak C}\in N$, ${\bf p}\in N$, ${\frak B} \in N$ (but see below), \item[(e)] ${\frak B}{\,|\grave{}\,} N {\rm pr}ec_\Delta {\frak B}$ {\em but}\footnote{so if $\Delta = \Delta_1=\{{\rm ex}ists y \psi:\psi \mbox{ is q.f.}\}$, ${\frak C},{\frak B}$ have Skolem functions, we have\\ {\bf (e)'} \quad ${\frak C} {\,|\grave{}\,} N {\rm pr}ec {\frak C}$, ${\frak B} {\,|\grave{}\,} N {\rm pr}ec {\frak B}$;\\ can use {\bf (e)'} instead of {\bf (e)}} for transparency we treat ${\frak B}$ as relations of $N$ and ${\frak B}{\,|\grave{}\,} N$ are their interpretations in $N$; so we allow $|{\frak B}|\cap |N|\setminus |{\frak B} {\,|\grave{}\,} N|\neq\emptyset$, and $\tau({\frak B})$, the vocabulary of $\frak B$, belongs to $N$, but $N\models$``$x\in {\frak B}$'' $\Rightarrow\ x\in {\frak B}$. Similarly for ${\frak C}$ (this is less essential), (see \ref{1.10}(3)), \item[(f)] if $N\models$``$\alpha$ is an ordinal $<\theta$, or $\le |{\frak C}|$ (that is $\alpha_*({\frak C})$ or $\le \kappa({\frak B})$'' then $\alpha$ is an ordinal, \item[(g)] if $N \models$``$x$ is countable'' then $x \subseteq N$, \item[(h)] if $N\models$``$x$ is an ordinal'' then $x$ is an ordinal. \end{enumerate} \item We omit the ``class'' if additionally \begin{enumerate} \item[(i)] ${\bf p} =\bar{\varphi}$ is a tuple of formulas, $\varphi_0=\varphi_0 (x)$ and in $N$, $\varphi_0(x)$ defines a set\footnote{This is normally the forcing notion ${\Bbb Q}$.}. \end{enumerate} \item We add the adjective ``semi'' if we omit clause (b) (the countability demand). \item If ${\bf p}$ is absent (or clear from context) we may omit it, similarly $\theta$ when $\theta = \aleph_0$ or clear from the context. We tend to ``forget" to mention ${\frak C}$ (e.g.\ demand ${\frak B}$ expands it). \item We say that a formula $\varphi$ is upward absolute for (or from) class $({\frak B},\bold p,\theta)$--candidates when: if $N_1$ is a class $({\frak B}, {\bf p},\theta)$--candidate, $N_1 \models \varphi[\bar x]$, and $N_2$ is a class $({\frak B},{\bf p},\theta)$--candidate or is $({\cal H}(\chi),\in)$ for $\chi$ large enough, and $N_1$ is a set or just a class of $N_2$, {\em then} $N_2 \models \varphi[\bar x]$. We say above ``through (class) $({\frak B},\bold p,\theta)$--candidates'' if $N_2$ is demanded to be a (class) $({\frak B},\bold p,\theta)$--candidate. Note that we can omit ${\cal H}(\chi)$ in the correct case (see \ref{0.2}(11)). If ${\frak B},{\bf p},\theta$ are clear from the context, we may forget to say ``for class $({\frak B},{\bf p},\theta)$--candidates''. \item We say that $\varphi$ defines $X$ absolutely through $({\frak B},\bold p,\theta)$--candidates if \begin{enumerate} \item[$(\alpha)$] $\varphi=\varphi(x)$ is upward absolute through $({\frak B}, \bold p,\theta)$--candidates, \item[$(\beta)$] $X=\bigcup\{X^N: N$ is a $({\frak B},\bold p,\theta )$--candidate$\/\}$,\quad where $X^N=\{x\in N:N\models \varphi(x)\}$. \end{enumerate} If only clause $(\alpha)$ holds then we add ``weakly''. \end{enumerate} \end{definition} \begin{discussion} \label{1.1A} Should we prefer $|{\frak B}| = \alpha$ an ordinal or $|{\frak B}|\subseteq {\cal H}_{< \aleph_1}(\alpha)$? The former is more convenient when we ``collapse $N$ over $\kappa \cup \theta$'' (see \ref{1.10}). 
Also then we can fix the universe whereas $|{\frak B}| = {\cal H}_{< \aleph_1}(\alpha)$ is less reasonable as it is less absolute. On the other hand, when we would like to prove preservation by iteration the second is more useful (see \S5). To have the best of both we adopt the somewhat unnatural meaning of ${\frak B} {\,|\grave{}\,} N {\rm pr}ec_\Delta {\frak B}$ in clause (e) of Definition \ref{0.1C}.\\ We may have forgotten sometimes to write $\|{\frak B}\|$ instead of $\kappa = \kappa({\frak B})$. In some cases, we may omit the demand (h) in the definition \ref{0.1C} of $({\frak B},{\bf p},\theta)$--candidates (and then calling them ``impolite candidates''), but still we should demand then that \[N\models\mbox{`` $x$ is an ordinal from $\frak B$ or $\frak C$''}\quad \Rightarrow\quad\mbox{$x$ is an ordinal,}\] and we should change ``ordinal collapse'' appropriately. However, there is no reason to attend the ``impolite'' company here. \end{discussion} This motivates: \begin{definition} \label{0.2} \begin{enumerate} \item Let $\bar{\varphi}=\langle\varphi_0,\varphi_1\rangle$ and ${\frak B}$ be a model as in \ref{0.1C}, $\kappa=\kappa({\frak B})$, and of countable vocabulary, say $\subseteq {\cal H}(\aleph_0)$. We say that $\bar{\varphi}$ or $(\bar{\varphi},{\frak B})$ is a temporary $(\kappa,\theta)$--definition, or $({\frak B},\theta)$--definition, of a nep-forcing notion\footnote{so in the normal case (see \ref{0.9}(4), \ref{0.11}), $\bar{\varphi}$ defines ${\Bbb Q}$} ${\Bbb Q}$ if, in ${\bf V}$: \begin{enumerate} \item[(a)] $\varphi_0$ defines the set of elements of ${\Bbb Q}$ and $\varphi_0$ is upward absolute from $({\frak B},\bar{\varphi},\theta)$--candidates, \item[(b)] $\varphi_1$ defines the partial (or quasi) ordering of ${\Bbb Q}$, also in every $({\frak B},\bar{\varphi},\theta)$--candidate, and $\varphi_1$ is upward absolute from $({\frak B},\bar{\varphi},\theta)$--candidates, \item[(c)] if $N$ is a $({\frak B},\bar{\varphi},\theta)$--candidate and $p \in {\Bbb Q}^N$, {\em then} there is $q \in {\Bbb Q}$ such that $p \le^{{\Bbb Q}} q$ and \[q \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }\mbox{`` } \mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\cap {\Bbb Q}^N \mbox{ is a subset of }{\Bbb Q}^N \mbox{ generic over } N\mbox{ ''}\] where, of course, ${\Bbb Q}^N = \{p:N \models \varphi_0(p)\}$. \end{enumerate} \item We add the adjective ``explicitly'' if $\bar{\varphi}=\langle\varphi_0, \varphi_1,\varphi_2\rangle$ and additionally \begin{enumerate} \item[(b)$^+$] we add: $\varphi_2$ is an $(\omega+1)$-place relation, upward absolute through $({\frak B},\bar{\varphi},\theta)$--candidates and $\varphi_2( \langle p_i:i \leq\omega\rangle)\ \Rightarrow\ \mbox{``}\{p_i:i\le\omega\} \subseteq {\Bbb Q}$ and $\{p_i:i<\omega\}$ is predense above $p_\omega$'', not just in ${\bf V}$ but in every ${\Bbb Q}$--candidate (which, if ${\Bbb Q}$ is correct, implies the case in ${\bf V}$); in this situation we say: $\{p_i:i<\omega\}$ is explicitly predense above $p_\omega$, \item[(c)$^+$] we add: if $N \models\mbox{``}{\cal I} \subseteq{\Bbb Q}$ is dense open'' (or just predense) (so ${\cal I} \in N$) {\em then} for some list $\langle p_i:i<\omega\rangle$ of ${\cal I} \cap N$ we have $\varphi_2(\langle p_i:i < \omega\rangle{}^\frown\!\langle q \rangle)$. \end{enumerate} \item For a class $({\frak B},\bar{\varphi},\theta)$--candidate $N$ we let ${\rm pd}(N,{\Bbb Q})={\rm pd}_{{\Bbb Q}}(N)=\{{\cal I}:{\cal I}$ is a class of $N$ (i.e. 
defined in $N$ by a first order formula with parameters from $N$) and is a predense subset of ${\Bbb Q}^N\}$. If $N$ is a candidate, it is $\{ {\cal I} \in N:N \models$``${\cal I}$ is predense''$\}$. \item We replace ``temporary'' by $K$ if the relevant proposition holds not only in ${\bf V}$ but in any forcing extension of ${\bf V}$ by a forcing notion ${\Bbb P}\in K$. If $K$ is understood from the context (normally: all forcing notions we will use in that application) we may omit it. \item We say that $(\bar{\varphi},{\frak B})$ is simply [explicitly] $K$--$(\kappa,\theta)$--definition of a nep--forcing notion ${\Bbb Q}$, {\em if}: \begin{enumerate} \item[$(\alpha)$] $(\bar{\varphi},{\frak B})$ is [explicitly] $K$--definition of a nep-forcing notion ${\Bbb Q}$, \item[$(\beta)$] ${\Bbb Q}\subseteq {\cal H}_{<_{\aleph_1}}(\theta)$; i.e.\ ${\Bbb P} \in K$ implies $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$ ``if $\varphi_0(x)$ then $x \in {\cal H}_{<_{ \aleph_1}}(\theta)$'', \item[$(\gamma)$] ${\frak B},\kappa,\theta$ are the only parameters of $\bar{\varphi}$ (meaning there are no others, but even ${\frak B},\kappa, \theta$ do not necessarily appear). \end{enumerate} \item We add ``very simply'' {\em if} in addition: \begin{enumerate} \item[$(\delta)$] ${\Bbb Q} \subseteq {}^\omega\theta$. \end{enumerate} \item We may say ``${\Bbb Q}$ is a nep-forcing notion", ``$N$ is a ${\Bbb Q}$-candidate'' abusing notion. If not clear, we write ${\Bbb Q}^{\bar{\varphi}}$ or $({\Bbb Q}^{\bar{\varphi}})^{{\bf V}}$. If not said otherwise, $\Delta$ is the set of first order formulas. Inversely, we write $({\frak B},\bar{\varphi},\theta)= ({\frak B}^{{\Bbb Q}},\bar{\varphi}^{{\Bbb Q}},\theta^{{\Bbb Q}})$ and ${\rm ZFC}^{{\Bbb Q}}$ for the relevant ${\rm ZFC}^-_*$. \item We say ``${\cal I} \subseteq {\Bbb Q}^N$ is explicitly predense over $p_\omega$'' if $\varphi_2(\langle p_i:i\le\omega\rangle)$ for some list $\{p_i:i<\omega\}$ of ${\cal I}$. \item We add the adjective ``class'' if we allow ourselves (in clauses (b), (c) of part (1) and (c)$^+$ of part (2)) class $({\frak B},\bar{\varphi}, \theta)$--candidates $N$; so in clauses (c), (c)$^+$, ${\cal I}$ is a class of $N$; i.e.\ first order definable with parameters from $N$, and use the weak version of absoluteness. If we use $({\frak B},{\bf p},\theta)$ we mean $\bar{\varphi}$ is an initial segment of ${\bf p}$. \item We say $({\frak B},\bar{\varphi},\theta)$ (or abusing notation, ${\Bbb Q}$) is class=set if every class $({\frak B},\bar{\varphi},\theta)$--candidate is a $({\frak B},\bar{\varphi},\theta)$--candidate. \item In \ref{0.2}(1) we add the adjective ``correctly'' (and we say that $({\frak B},\bar{\varphi},\theta)$ is {\em correct}) if, for a large enough regular cardinal $\chi$: \begin{enumerate} \item[(a)] the formula $\varphi_0$ defines the set of members of ${\Bbb Q}$ absolutely through $({\frak B},\bar{\varphi},\theta)$--candidates, that is \[{\Bbb Q}=\bigcup\{{\Bbb Q}^N:N\mbox{ is a }({\frak B},\bar{\varphi},\theta) \mbox{--candidate }\},\] ${\Bbb Q}^N=\{x: N\models\varphi_0(x)\}$, \item[(b)] the formula $\varphi_1$ defines the quasi order of ${\Bbb Q}$ absolutely through $({\frak B},\bar{\varphi},\theta)$--candidates, that is $\leq_{{\Bbb Q}}= \bigcup\{(\leq_{\Bbb Q})^N: N$ is a $({\frak B},\bar{\varphi},\theta )$--candidate$\/\}$, $(\leq_{\Bbb Q})^N=\{(p,q): N \models\varphi_1(p,q)\}$. \end{enumerate} Similarly when we add ``explicitly''. 
So in those cases we can ignore ${\cal H}(\chi)\models\varphi_\ell(x)$ and just ask for satisfaction in suitable candidates. (Note: correct is less relevant to snep.) \end{enumerate} \end{definition} \noindent{\bf Convention:}\quad We may say ``${\Bbb Q}$ is $\dots$'' when we mean ``$({\frak B},\bar{\varphi},\theta)$ is $\ldots$'' or ``$({\frak B},{\bf p},\theta)$ is $\ldots$''. \noindent{\bf Remark:}\quad The main case for us is candidates (not class ones), etc.; still mostly we can use the class version of nep. Also we can play with various free choices. \begin{discussion} \label{0.3} 1)\quad Note: if $x\in{\cal I}\in N$ and $N \models$``${\cal I}\subseteq{\Bbb Q}$'', possibly $x\notin{\Bbb Q}$, so those $x$ are not relevant (though, e.g., the ordinals $\alpha< \kappa({\frak B})$ have a special role). \noindent 2)\quad We think of using a CS iteration $\bar{{\Bbb Q}}=\langle{\Bbb P}_i, {\mathunderaccent\tilde-3 {\Bbb Q}}_i:i<\delta\rangle$, where each ${\mathunderaccent\tilde-3 {\Bbb Q}}_i$ has a definition $\bar{\varphi}^i$, and we would like to prove things about ${\Bbb P}_\alpha$ for $\alpha\le\delta$. So the relevant family $K_i$ of forcing notions we really should consider for $\bar{\varphi}^i$ is $\{{\Bbb P}_\beta/{\Bbb P}_i:\beta\in [i,\delta)\}$; at least this holds almost always (maybe we can look for help in other extensions). \noindent 3)\quad Note that a significant fraction of the proper forcing notions related to the reals which appear in iterated forcing are forcing notions called ``nice'' above. The proof that they are proper usually gives more, and we think that they will be included, even by the same proof. \noindent 4)\quad If $K$ is trivial (i.e.\ has only the trivial forcing notion as a member), this means we can replace it by ``temporary''. \noindent 5)\quad See also \ref{2.3A} for ``$K$-absolutely''. \noindent 6)\quad Note a crucial point in Definition \ref{0.2}: the relation ``$\{p_n:n < \omega\}$ is predense above $p$'' is not demanded to be absolute; only a ``dense'' family of cases of it is demanded (we also allow other basic relations, e.g.\ $q\notin{\Bbb Q}$, to be non-absolute, but those are less crucial). This change may seem technical, but it is central: it makes the difference between leaving out quite a few natural examples and including all those we have in mind. \noindent 7)\quad Note that in clause (c) of \ref{0.2}(1) we mean: \quad $G\cap {\Bbb Q}^N$ is directed (by $\leq^{{\Bbb Q}^N}$, not only by $\leq^{{\Bbb Q}}$) and $G\cap N\cap {\cal I}\neq\emptyset$ for ${\cal I}\in {\rm pd}(N,{\Bbb Q})$. \noindent 8)\quad Note that the demand described in 7) above almost implies ``incompatibility is upward absolute from $N$'', but not quite. \end{discussion} Let us consider a more restrictive class, where the absoluteness holds for more concrete reasons, the usual ones for upward absoluteness: the relations are $\Sigma^1_1$ or, more generally, $\kappa$--Souslin.
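For orientation let us recall the classical notion (the notation agrees with Definition \ref{0.4} below): a set $A\subseteq {}^\omega\theta$ is $\kappa$--Souslin if for some tree $T\subseteq {}^{\omega>}(\theta\times\kappa)$ (i.e.\ a non-empty set of finite sequences closed under initial segments) we have
\[A={\rm proj}(\lim(T))=\{\nu\in {}^\omega\theta:(\exists\eta\in {}^\omega\kappa)(\nu*\eta\in\lim(T))\},\]
where $\nu*\eta=\langle(\nu(n),\eta(n)):n<\omega\rangle$. The $\Sigma^1_1$ sets are exactly the $\aleph_0$--Souslin ones, and membership in such a set is upward absolute between transitive models containing $T$, since a witness $\eta$ persists.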
\begin{definition} \label{0.4} \begin{enumerate} \item We say that $\bar T$ is a temporary $(\kappa,\theta)$--definition of a snep--forcing notion ${\Bbb Q}$ if: \begin{enumerate} \item[(a)] $\bar{T}=\langle T_0,T_1\rangle$ where $T_0 \subseteq {}^{\omega >}(\kappa\times\kappa)$ and $T_1\subseteq {}^{\omega >}(\kappa\times\kappa \times \kappa)$ are trees (i.e.\ closed under initial segments, non-empty) and $\theta\le\kappa$, \item[(b)] the set of elements of ${\Bbb Q}$ is \[\begin{array}{ll} {\rm proj}_0(T_0)\stackrel{\rm def}{=}\{\nu\in {}^\omega\theta:&\mbox{for some } \eta \in {}^\omega \kappa \mbox{ we have}\\ \ &\nu * \eta \stackrel{\rm def}{=} \langle(\nu(n),\eta(n)):n <\omega\rangle\in \lim(T_0)\}, \end{array}\] \item[(c)] the partial order of ${\Bbb Q}$, $\{(p_0,p_1):{\Bbb Q}\models p_0 \le p_1\}$, is \[\begin{array}{ll} {\rm proj}_1(T_1)\stackrel{\rm def}{=}&\{(\nu_0,\nu_1):\nu_0,\nu_1\in {\Bbb Q} \mbox{ and for some }\eta\in{}^{\omega}\kappa \mbox{ we have}\\ \ &\quad\nu_0 * \nu_1 * \eta \stackrel{\rm def}{=}\langle (\nu_0(n),\nu_1(n), \eta(n)):n<\omega\rangle\in\lim(T_1)\}, \end{array}\] \item[(d)] for a large enough regular cardinal $\chi$, if $N\subseteq ({\cal H}(\chi),\in)$ is a $({\frak B}_{\bar{T}},\bar{T},\theta)$--candidate and $\kappa\in N$, $\bar{T}\in N$, $p\in {\Bbb Q}^N$ {\em then} there is $q\in {\Bbb Q}$ such that $p \le^{{\Bbb Q}} q$ and \[q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }\mbox{`` }\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\cap {\Bbb Q}^N \mbox{ is a generic subset of } {\Bbb Q}^N \mbox{ over } N\mbox{ ''},\] where ${\frak B}_{\bar{T}}$ is the model with universe $\kappa$ and the relations $T_0 \cap {}^n(\kappa\times\kappa)$, $T_1\cap {}^n(\kappa \times\kappa\times\kappa)$ for $n<\omega$. \end{enumerate} \item We add ``explicitly'' if $\bar{T}=\langle T_0,T_1,T_2\rangle$ and we add \begin{enumerate} \item[(a)$^+$] also $T_2 \subseteq {}^{\omega >}(\theta\times\theta\times \kappa)$ and we let \end{enumerate} \[\begin{array}{ll} {\rm proj}_2(T_2)\stackrel{\rm def}{=}\big\{\langle\nu_i:i \le\omega\rangle:& \mbox{for some }\eta\in {}^\omega \kappa \mbox{ we have }\nu*\nu_\omega*\eta \in \lim(T_2)\\ \ &\mbox{where }\nu={\rm code}(\langle\nu_\ell:\ell<\omega\rangle)\mbox{ is the member}\\ \ &\mbox{of } {}^\omega \theta\mbox{ satisfying } \nu\big(\binom{\ell+k+1}{2} + \ell \big)=\nu_\ell(k)\big\} \end{array}\] and $\langle \nu_i:i \le\omega\rangle\in{\rm proj}_2(T_2)$ implies $\{\nu_i:i \le \omega\} \subseteq {\Bbb Q}$ (even in candidates; the natural case is that witnesses are coded). \begin{enumerate} \item[(d)$^+$] we add: $q$ is $\bar{T}$--explicitly $(N,{\Bbb Q})$--generic, which means that\\ {\em if} $N \models$``${\cal I}$ is a dense open subset of ${\Bbb Q}$''\\ {\em then} for some list $\langle p_n:n<\omega\rangle$ of ${\cal I}\cap N$ we have $\langle p_n:n<\omega\rangle{}^\frown\!\langle q \rangle\in{\rm proj}_2 (T_2)$, \item[(e)$^+$] if $\nu_i\in{\Bbb Q}$ for $i\le\omega$ and for some $\eta\in {}^\omega \kappa$ we have ${\rm code}(\nu_0,\nu_1,\ldots)*\nu_\omega*\eta\in \lim(T_2)$ {\em then} $\{\nu_0,\nu_1,\ldots\} \subseteq {\Bbb Q}$ is predense above $\nu_\omega$ (and this holds in candidates too). \end{enumerate} \item We will also say ``${\Bbb Q}$ is a snep-forcing notion'', ``$N$ is a ${\Bbb Q}$--candidate'', etc. \item We say $\eta$ is a witness for $\nu\in{\Bbb Q}$ if $\nu*\eta\in\lim(T_0)$; similarly for $T_1,T_2$.
We say that ${\cal I}$ is explicitly predense over $p_\omega$ if ${\rm code}(\langle p_i:i \le \omega\rangle)\in{\rm proj}_2(T_2)$ for some list $\{p_i:i<\omega\}$ of ${\cal I}$. \end{enumerate} \end{definition} \begin{remark} \label{0.5} In clause (a)$^+$ we would like ${\rm proj}_2(T_2)$ to be an $(\omega + 1)$-place relation on ${\Bbb Q}$, but we do not want the first coordinate to give too much information, so we use the above coding; it is in no way special. Note: we do not want to have one coordinate giving $\langle \nu_\ell(0): \ell<\omega\rangle$. Another possible coding is ${\rm code}(\nu_0,\nu_1,\ldots)\cong\langle\langle \nu_\ell {\,|\grave{}\,} i:\ell \le i \rangle:i < \omega \rangle$, so $T \subseteq {}^{\omega>}({}^{\omega>}({}^{\omega>}\theta)\times\theta\times\kappa)$. \end{remark} \begin{proposition} \label{added14A} Assume that $\bar{T}$ is in ${\bf V}$ a temporary $(\kappa,\theta)$--definition of a snep forcing notion which we call ${\Bbb Q}$. Let ${\bf V}'$ be a transitive class of ${\bf V}$ containing $\bar{T}$. Then: \begin{enumerate} \item also in ${\bf V}'$, ${\Bbb Q}$ is snep, \item if ${\bf V}'\models$``$p\in{\Bbb Q}$'' then ${\bf V}\models$``$p\in{\Bbb Q}$'', \item if ${\bf V}'\models$``$p\leq^{{\Bbb Q}} q$'' then ${\bf V}\models$``$p\leq^{{\Bbb Q}} q$'', \item if in ${\bf V}'$, the model $N$ is a $({\frak B}_{\bar{T}}, \bar{\varphi}_{\bar{T}},\kappa_{\bar{T}})$--candidate then also in ${\bf V}$, $N$ is a $({\frak B}_{\bar{T}},\bar{\varphi}_{\bar{T}}, \kappa_{\bar{T}})$--candidate. \QED \end{enumerate} \end{proposition} \begin{definition} \label{0.5A} \begin{enumerate} \item Let ${\Bbb Q}$ be explicitly snep. We add the adjective ``local'' if in the ``properness clause'', i.e.\ \ref{0.4}(2)(d)$^+$, we can add: \begin{enumerate} \item[$(\otimes)$] the witnesses for ``$q\in{\Bbb Q}$'', ``$\langle p^{\cal I}_n: n<\omega\rangle$ is ${\Bbb Q}$--explicitly predense above $q$'' are from ${}^\omega (N\cap\kappa)$. \end{enumerate} \item Let ${\Bbb Q}$ be explicitly nep. We add the adjective ``$K$-local'' if in the ``properness clause'', i.e.\ \ref{0.2}(2)(b)$^+$, we can add:\quad for each candidate $N$ which is ${\rm ord}$--transitive we have \begin{enumerate} \item[$(\oplus)$] for some $K$--extension $N^+$ of $N$, it is a ${\Bbb Q}$--candidate (in particular a model of ${\rm ZFC}^-_*$) and $N^+\models$ ``${\Bbb Q}^N$ is countable'' and $q\in N^+$, $N^+\models$ ``$p\le^{{\Bbb Q}} q$ and for each ${\cal I}\in{\rm pd}(N,{\Bbb Q})$, ${\cal I}^N$ is explicitly predense over $q$''. \end{enumerate} (Note that ${\frak B}{\,|\grave{}\,} N^+={\frak B}{\,|\grave{}\,} N$.) If $K$ is the family of set forcing notions, or is constant and understood from the context, we may omit it. \end{enumerate} \end{definition} \begin{discussion} \label{0.6} 1)\quad Couldn't we fix $\theta=\omega$? Well, if we would like to have the result ``the limit of a CS iteration $\bar{{\Bbb Q}}$ of such forcing notions is such a forcing notion'', we normally need $\theta\ge\ell g\/(\bar{{\Bbb Q}})$. Also $\kappa>\aleph_0$ is good for including $\Pi^1_2$--relations. \noindent 2)\quad In ``Souslin proper'' (starting with \cite{JdSh:292}) the demands were: \end{discussion} \begin{definition} \label{0.7} A forcing notion ${\Bbb Q}$ is Souslin proper if it is proper and: the relations ``$x \in {\Bbb Q}$'', ``$x \le^{{\Bbb Q}} y$'' are $\Sigma^1_1$ and ``the notion of incompatibility in ${\Bbb Q}$'' is $\Sigma^1_1$ (where, of course, the compatibility relation is $\Sigma^1_1$).
\end{definition} This makes ``$\{p_n:n < \omega\}$ is predense over $p_\omega$'' a $\Pi^1_2$--property, hence an $\aleph_1$-Souslin one. So we can get the ``explicitly'' cheaply, {\em however} possibly increasing $\kappa$. Note that for a Souslin proper forcing notion ${\Bbb Q}$, also $p \in {\Bbb Q}^N \Leftrightarrow p \in {\Bbb Q}\ \&\ p\in N$ and similarly for $p \le^{{\Bbb Q}} q$. \centerline {$* \qquad * \qquad *$} If you would like to be more pedantic about ${\rm ZFC}^-_*$, look at the following definition. Normally there is no problem in having ${\rm ZFC}^-_*$ as required. \begin{definition} \label{0.9} \begin{enumerate} \item We say ${\rm ZFC}^-_*$ is a $K$--good version [with parameter ${\frak C}$, possibly ``for $({\frak B},{\bf p},\theta)$'' for ${\frak B},{\bf p},\theta$ as in \ref{0.2} from the relevant family] if: \begin{enumerate} \item[(a)] it contains ${\rm ZC}^-$, i.e.\ Zermelo set theory without power set [and the axioms may speak about ${\frak C}$], \item[(b)] ${\frak C}$ is a model with countable vocabulary (given as a well ordered sequence, so ${\frak C}$ is an individual constant in the theory ${\rm ZFC}^-_*$) and universe $|{\frak C}|$ is an ordinal $\alpha({\frak C})$, \item[(c)] for every $\chi$ large enough, if $X \subseteq {\cal H}(\chi)$ is countable then for some countable $N \subseteq ({\cal H}(\chi),\in)$, $N \models {\rm ZFC}^-_*$, $X\subseteq N$ and \[x \in N\ \&\ N\models \mbox{``}|x|=\aleph_0\mbox{''}\quad \Rightarrow\quad x \subseteq N\] and ${\frak C} \in N$ and ${\frak C}{\,|\grave{}\,} (N\cap |{\frak C}|)\prec{\frak C}$ (this can be weakened to a submodel or to $\prec_{\Delta}$; we do not lose much as we can expand by Skolem functions); in the ``for $({\frak B},{\bf p},\theta)$'' version we add ``$N$ is a $({\frak B},{\bf p},\theta)$--candidate'', \item[(d)] ${\rm ZFC}^-_*$ satisfies the forcing theorem\footnote{for \ref{6.5} we need:\quad if ${\Bbb P},{\Bbb Q}$ are forcing notions, $\mathunderaccent\tilde-3 {G}$ is a ${\Bbb P}$--name for a subset of ${\Bbb Q}$ such that $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }$`` $\mathunderaccent\tilde-3 {G}$ is a generic subset of ${\Bbb Q}$ '', and $q\in{\Bbb Q}\ \Rightarrow\ \not\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$q\notin\mathunderaccent\tilde-3 {G}$'' then for some ${\Bbb Q}$--name $\mathunderaccent\tilde-3 {{\Bbb R}}$ of a forcing notion, ${\Bbb Q}*\mathunderaccent\tilde-3 {{\Bbb R}}$ and ${\Bbb P}$ are equivalent} (see e.g.\ \cite[Ch. I]{Sh:f}) at least for forcing notions in $K$, \item[(e)] those properties are preserved by forcing notions in $K$ (if ${\Bbb P} \in K$ and $G \subseteq {\Bbb P}$ is generic over ${\bf V}$ then $K^{{\bf V}[G]}$ will be interpreted as $\{{\mathunderaccent\tilde-3 {\Bbb Q}}[G]:{\Bbb P}*{\mathunderaccent\tilde-3 {\Bbb Q}}\in K\}$). \end{enumerate} \item If $K$ is the class of all (set) forcing notions, we may omit it. \item We say ${\rm ZFC}^-_*$ is normal if for $\chi$ large enough any countable $N\prec ({\cal H}(\chi),\in)$ to which ${\frak C}$ belongs is O.K. (for clause (1)(c) above).
\item We say ${\rm ZFC}^-_*$ is semi-normal for $({\frak B},{\bf p},\theta)$ if for $\chi$ large enough, for any countable $N{\rm pr}ec({\cal H}(\chi),\in)$ (to which appropriate ${\bf p},{\frak C},{\frak B},\theta(\in {\cal H}(\chi)$) belong), for some ${\Bbb Q}\in N$ such that $N \models$``${\Bbb Q}$ is a forcing notion'' we have: \begin{enumerate} \item[$(*)$] {\em if} $N'$ is countable $N\subseteq N'\subseteq({\cal H}( \chi),\in)$, $N' \cap \chi = N \cap \chi$ and \[(\forall x)[N' \models\mbox{``}x\mbox{ is countable ''}\quad\Rightarrow\quad x \subseteq N'],\] and $N'$ is a generic extension of $N$ for ${\Bbb Q}^N$ {\em then} $N'$ is $({\frak B},{\bf p},\theta)$--candidate and ${\Bbb Q}^{N'} {\,|\grave{}\,} N = {\Bbb Q} {\,|\grave{}\,} N$, $\varphi^{N'}_2 {\,|\grave{}\,} N = \varphi^N_2 {\,|\grave{}\,} N$. \end{enumerate} We say ``$K$--semi-normal'' if we demand $N\models{\Bbb Q}\in K$. \item We say ${\rm ZFC}^-_*$ is weakly normal for $({\frak B},{\bf p},\theta)$ if clause (c) of part (1) holds. \item In parts (4), (5) we can replace $({\frak B},{\bf p},\theta)$ by a family of such triples meaning $N$ is a candidate for all of them. \item In parts (4), (5), (6) if $({\frak B},{\bf p},\theta)=({\frak B}^{{\Bbb Q}}, \bar{\varphi}^{{\Bbb Q}},\theta^{{\Bbb Q}})$ we may replace $({\frak B},{\bf p},\theta)$ by ${\Bbb Q}$. \end{enumerate} \end{definition} \begin{discussion} \label{0.10} 1)\quad What are the points of parameters? E.g.\ we may have $\kappa^*$ an Erd\"os cardinal, ${\frak C}$ codes every $A \in {\cal H}(\chi)$ for each $\chi<\kappa^*$, ${\rm ZFC}^-_* = {\rm ZFC}^- +$ ``$\kappa^*$ is an Erd\"os cardinal ${\frak C}$ as above'', $K =$ the class of forcing notions of cardinality $< \kappa^*$. Then we have stronger absoluteness results to play with. \noindent 2)\quad On the other hand, we may use ${\rm ZFC}^-_* = {\rm ZFC}^- + (\forall r \in {}^\omega 2)(r^\#$ exists) + ``$\beth_7$ exists''. This is a good version if ${\bf V} \models (\forall r \in {}^\omega 2)(r^\#$ exists) so we can e.g.\ weaken the definition snep (or Souslin-proper or Souslin-c.c.c). \noindent 3)\quad What is the point of semi-normal? E.g.\ if we would like ${\rm ZFC}^-_*\vdash{\rm CH}$, whereas in ${\bf V}$ the Continuum Hypothesis fails. But as we have said in the beginning, the normal case is usually enough. \end{discussion} \begin{proposition} \label{0.11} \begin{enumerate} \item Assume ${\rm ZFC}^-_*$ is $\{\emptyset\}$--good. Then the clause (c)$^+$ of \ref{0.2}(2) is equivalent to clause (c) + $(*)$, where \begin{enumerate} \item[$(*)$] {\em if} $p\in{\Bbb Q}$ and ${\cal I}_n$ is predense over $p$ (for $n<\omega$), each ${\cal I}_n$ is countable,\\ {\em then} for some $q$, $p\leq q\in{\Bbb Q}$, and for some $p^n_\ell\in{\cal I}_n$ for $n<\omega$, $\ell<\omega$ we have $\varphi_2(\langle p^n_\ell:\ell<\omega \rangle{}^\frown\! q)$ \end{enumerate} (this is an obvious abusing of notation, we mean that this holds in some candidate). \item If ${\rm ZFC}^-_*$ is normal for $({\frak B},{\bf p},\theta)$ {\em then} in Definition \ref{0.2}(1),(2) there is no difference between ``absolutely through'' and ``weakly absolutely''. \QED \end{enumerate} \end{proposition} \begin{proposition} \label{0.12} \begin{enumerate} \item Assume ${\bf V}_1\subseteq{\bf V}_2$ (so ${\bf V}_1$ is a transitive class of ${\bf V}_2$ containing the ordinals, ${\frak C},{\frak B},\theta,{\bf p},\in{\bf V}_1$). If ${\rm ZFC}^-_*$ is temporarily good {\em then} also in ${\bf V}_1$ it is temporarily good. 
\item If co-$(\kappa+\theta)$-Souslin relations are downward absolute (from ${\bf V}_2$ to ${\bf V}_1$) then also inverse holds. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} By Shoenfield--Levy absoluteness. \QED \stepcounter{section} \subsection*{\quad 2. Connections between the basic definitions} We first give the most transparent implications: we can omit ``explicitly'' and we can replace snep by nep (this is \ref{1.1}) and the model ${\frak B}$ can be expanded, $\kappa,\theta$ increased, (see \ref{1.2}). Then we note that if $\kappa \ge \theta + \aleph_1$ and we are in the simple nep case, we can get from nep to snep because saying ``there is a countable model $N \subseteq ({\cal H}(\chi),\in)$ such that $\ldots$'' can be expressed as a $\kappa$--Souslin relation (see \ref{1.3}) and comment on the non-simple case. Then we discuss how the absoluteness lemmas help us to change the universe (in \ref{1.5}), to get the case with a class $K$ from the case of temporarily (\ref{1.6}) and to get explicit case from snep or from Souslin proper (in \ref{1.7}). \begin{proposition} \label{1.1} \begin{enumerate} \item If $(\bar{\varphi},{\frak B})$ is explicitly a $K$--definition of a nep-forcing notion ${\Bbb Q}$, {\em then} $\bar{\varphi}{\,|\grave{}\,} 2$ is a $K$--definition of a nep-forcing notion ${\Bbb Q}$. \item If $\bar{T}$ is explicitly a $K$-definition of a snep-forcing notion ${\Bbb Q}$, {\em then} $(\bar{T}{\,|\grave{}\,} 2)$ is a $K$--definition of an snep-forcing notion ${\Bbb Q}$. \item If $\bar{T}$ is [explicitly] a $K$--$(\kappa,\theta)$--definition of a snep-forcing notion ${\Bbb Q}$, and ${\frak B}$ any model with universe $\kappa$ coding the $T_\ell$'s and $\varphi_\ell$ is defined as ${\rm proj}_\ell(T_\ell)$, {\em then} $(\bar{\varphi},{\frak B})$ is very simply [explicitly] $K$--$(\kappa,\theta)$--definition of a nep forcing notion ${\Bbb Q}$ (and let ${\frak B}={\frak B}_{\bar{T}}$, $\bar{\varphi}=\bar{\varphi}_{\bar{T}}$). \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Read the definitions. \QED \begin{proposition} \label{1.2} \begin{enumerate} \item If $(\bar \varphi,{\frak B})$ is [explicitly] a $K$--definition of a nep-forcing notion and ${\frak B}$ is definable in ${\frak B}'$ (and $\Delta$ is $L_{\omega,\omega}$, or change $\Delta$ accordingly to the interpretation), {\em then} $(\bar{\varphi},{\frak B}')$ is [explicitly] a $K$-definition of a nep-forcing notion; moreover, if ${\frak B}$ is the only parameter of the $\varphi_\ell$, we can replace it by ${\frak B}'$ (changing trivially the $\varphi_\ell$'s). \item Similarly we can increase $\kappa$ and $\theta$ and add ``simply'' (to the assumption and the conclusion); we may also add ``very simply''. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Straight. \QED A converse to \ref{1.1}(1)+(2) is \begin{proposition} \label{1.3} \begin{enumerate} \item Assume that $\kappa'=\kappa+\theta+\aleph_1+\|{\frak B}\|$ and \begin{enumerate} \item[$(\oplus)$] $(\bar{\varphi},{\frak B})$ is a correct very simple [explicit] $K$--$(\kappa,\theta)$--definition of a nep forcing notion ${\Bbb Q}$. \end{enumerate} {\em Then} some $\bar{T}$ is an [explicitly] $K$--$(\kappa',\theta)$--definition of a snep forcing-notion ${\Bbb Q}$ (the same ${\Bbb Q}$). \item If $\kappa=\theta=\kappa'=\aleph_0$ we get a similar result with the $\varphi_\ell$ being $\Pi^1_2$-sets. 
\item If in clause $(\oplus)$ of \ref{1.3}(1) we replace very simple by simple (so we weaken ${\Bbb Q}\subseteq {}^\omega \theta$ to ${\Bbb Q}\subseteq {\cal H}_{< \aleph_1}(\theta)$), {\em then} part (1) still holds for some ${\Bbb Q}'$ isomorphic to ${\Bbb Q}$. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} 1)\quad This is, by now, totally straight; still we present the case of $\varphi_0$ for part (1) for completeness. If in Definition \ref{0.1C}(2), clause (e), we use $\prec$, let $\langle\psi^1_n(y,x_0,\dotsc,x_{n-1}):n < \omega\rangle$ list the first order formulas in the vocabulary of ${\frak B}$ in the variables $\{y,x_\ell:\ell<\omega\}$ (so in $\psi^1_n$ no $x_\ell$, $\ell\ge n$ appears, but some $x_\ell$, $\ell<n$ may not appear); if we use $\prec_\Delta$ let it list subformulas of members of $\Delta$. Similarly $\langle \psi^2_n(y,x_0,\dotsc,x_{n-1}):4 \le n<\omega\rangle$ for the vocabulary of set theory. Let us define $T_0$ by defining a set of $\omega$-sequences $Y_0$, and then we will let $T_0=\{\rho{\,|\grave{}\,} n:\rho\in Y_0 \mbox{ and }n<\omega\}$. For $\alpha<\omega_1$ let $\{\beta_{\alpha,\ell}:\ell <\omega\}$ list $\{\beta:\beta \le \alpha\}$. Now let $Y_0$ be the set of $\omega$-sequences $\rho \in {}^\omega(\theta \times\kappa')$ such that for some $({\frak B},\bar{\varphi},\theta )$--candidate $N \subseteq ({\cal H}(\chi),\in)$ (so ${\frak B},\theta,\kappa$ belong to $N$) and some list $\langle a_n:n < \omega \rangle$ of the members of $N$ we have:\quad $\rho = \nu * \eta$; i.e.\ $\rho(n) = (\nu(n),\eta(n))$ and \begin{enumerate} \item[(i)] $a_0 = {\frak B}$, $a_1 = \theta$, $a_2 = \kappa$, $a_3 = \nu$, \item[(ii)] $\{n:N \models a_n \in \kappa'\}=\{\eta(8n+1):0<n<\omega\}$, \item[(iii)] every $\eta(8n+2)$ is a countable ordinal such that: \[N \models\mbox{`` }{\rm rk}(a_n) < {\rm rk}(a_m)\mbox{''\quad iff\quad}\eta(8n+2)< \eta(8m+2) <\aleph_1\le\kappa',\] \item[(iv)] if ${\frak B}\models (\exists y)\psi^1_n(y,a_0,\ldots,a_{n-1})$ then ${\frak B} \models \psi^1_n[a_{\eta(8(n+1)+3)},a_0,\ldots,a_{n-1}]$, \item[(v)] $N\models \varphi_0[\nu]$; i.e.\ $N \models \varphi_0[a_3]$, \item[(vi)] $N\models$`` $a_\ell\in a_m$ ''\quad iff\quad $\eta(8(\binom{\ell +m+1}{2}+ \ell)+4)=0$, \item[(vii)] if $n \ge 4$ and $N \models (\exists y)\psi^2_n(y,a_0,\ldots, a_{n-1})$ then $N \models \psi^2_n[a_{\eta(8n+5)},a_0,\ldots,a_{n-1}]$ and $\eta(8n+6)= 1$, \item[(viii)] if $N \models$``$a_n$ is a countable ordinal'' and $a_k = \beta_{a_n,\ell}$ then $\eta(8(\binom{\ell+n+1}{2}+\ell)+7)=k$. \end{enumerate} Let $T_0 = \{\rho {\,|\grave{}\,} n:\rho \in Y_0,n < \omega\}$. \begin{claim} \label{1.3A} \begin{enumerate} \item $Y_0$ is a closed subset of ${}^\omega(\theta \times \kappa')$. \item ${\Bbb Q}=\{\nu\in {}^\omega \theta:(\exists \eta)(\eta\in {}^\omega\kappa' \ \&\ \nu * \eta\in Y_0\ (=\lim(T_0)))\}={\rm proj}_0(T_0)$. \end{enumerate} \end{claim} \noindent{\em Proof of the claim:}\qquad 1)\quad Given $\nu*\eta\in\lim(T_0)$, we can define a model $N'$ with set of elements, say, $\{a'_n:n < \omega\}$ by clause (vi); it is a model of ${\rm ZFC}^-_*$ by clause (vii) (and the demand $N \models {\rm ZFC}^-_*$), and it is well founded by clause (iii) (and the earlier information). We start to define an embedding $h$ of $N'$ into ${\cal H}(\chi)$ and we put $h(a'_0) = {\frak B}$, $h(a'_1) = \theta$, $h(a'_2) = \kappa$ and $h(a'_n) = \eta(8n+1)$ if $N' \models a'_n \in a'_2$, $n>0$.
Then let $h(a'_3) \in {}^\omega \theta$ be such that $h(a'_3)(\ell)=\gamma$ iff, letting $n$ be such that $\psi^2_n\equiv [y = x_3(\ell)]$, so necessarily $N'\models$``$a'_3(\ell) = a'_{\eta(8n+5)}$'', we have $\eta(8(n+5)+1)=\gamma$ (see clause (vii)). Lastly we define $h(a'_n)$ for the other $a'_n$ by induction on ${\rm rk}^{N'}( a'_n)$; note that we can then assign dummy elements so that ${\rm Rang}(h)\cap \kappa = \{\eta(8(n+1)+1):n < \omega\}$. The model $h[N']$ above should be built in such a way that it is ${\rm ord}$--transitive. This (and clause (viii)) will ensure that clause (g) of the demand in \ref{0.1C}(2) is satisfied. Note that, actually, the coding (of candidates) which we use above does not change when passing to the ${\rm ord}$--collapse. \noindent 2)\quad This should be clear from the above, noting: $p \in {\Bbb Q}$ iff for some $N$ as above, $N \models\varphi_0(p)$ [as $\Leftarrow$ holds by the definition and $\Rightarrow$ holds as there are countable $N \prec ({\cal H}(\chi),\in)$ to which $p,{\frak B},\theta,\kappa$ belong]. This finishes the proof of the claim and so the first part of the proposition. \noindent (2), (3)\quad Easy. \QED$_{\ref{1.3}}$ What do we do if in \ref{1.3} we omit ``the only parameters of $\bar \varphi$ are ${\frak B},\theta,\kappa$''? Well, the role of ${\frak B}$ is assumed by the transitive closure of $\langle\bar{\varphi},{\frak B}, \theta,\kappa\rangle$, which we can then map onto some $\kappa^*\ge\kappa$. \begin{proposition} \label{1.5} \begin{enumerate} \item Assume ${\rm ZFC}^-_*$ is $\emptyset$-normal for $({\frak B},\bar{\varphi}, \theta)$, and, in ${\bf V}$, $\bar{\varphi}$ is a $({\frak B},\theta)$--definition of an [explicit] nep forcing notion. Then we get ``correctly''. \item Assume $\bar{\varphi}$ is a $K$--$({\frak B},\theta)$--definition of a nep-forcing notion ${\Bbb Q}$ (the ``nep'' part is not really needed). Let ${\bf V}'$ be a transitive class of ${\bf V}$ such that \begin{enumerate} \item[(i)] $\bar{\varphi}$ and ${\frak B}$ belong to ${\bf V}'$ (and of course ${\frak C}$), \item[(ii)] the family of $({\frak B},\bar{\varphi},\theta)$--candidates is unbounded in ${\bf V}'$, moreover \item[(iii)] for $\chi$ large enough, in ${\bf V}$ (or just in ${\bf V}'$) the set \[\{N\subseteq ({\cal H}(\chi),\in):N\mbox{ is a $({\frak B},\bar{\varphi}, \theta)$--candidate }\}\] is stationary, or at least \item[(iii)$^-$] ${\bf V}'\models$``$\varphi_\ell(\bar x)$'' implies that for unboundedly many $({\frak B},\bar{\varphi},\theta)$--candidates $N$ (in ${\bf V}$), $N\models$``$\varphi_\ell(\bar{x})$'', which in other words says that in the universe ${\bf V}'$, $({\frak B}, \bar{\varphi},\theta)$ is a correct definition of an [explicitly] nep forcing notion (see part (1) above and Definition \ref{0.2}(11)).
\end{enumerate} \item If in (2) we add ``explicitly'' {\em then} \begin{enumerate} \item[(d)] if ${\bf V}'\models \varphi_2(\langle p_i:i \le \omega \rangle)$ then ${\bf V}\models \varphi_2(\langle p_i:i \le \omega \rangle)$, \item[(e)] if in ${\bf V}'$, $N$ is a $(\bar{\varphi},{\frak B})$--candidate and $q$ is explicitly $(N,{\Bbb Q})$-generic {\em then} this holds in ${\bf V}$. \end{enumerate} \item If in (2) we add ``$\bar{\varphi}$ is a temporary explicit correct $({\frak B},\theta)$--definition of a nep forcing notion'' (in ${\bf V}$) {\em then} also in ${\bf V}'$, $\bar{\varphi}$ is a temporary explicit correct $({\frak B},\theta)$--definition of a nep-forcing notion, provided that \begin{enumerate} \item[$(*)_3$] $\kappa=\theta = \aleph_0$ {\em or} $\kappa=\aleph_0$ and $([\theta]^{\le \aleph_0})^{{\bf V}'}$ is cofinal in $([\theta]^{\le \aleph_0})^{ {\bf V}_1}$ {\em or} (there are large enough cardinals to guarantee) any co--$(\kappa + \theta + \aleph_1)$--Souslin relation in ${\bf V}'$ is upward absolute to ${\bf V}_1$. \end{enumerate} \item If in (2) we add $(*)_4$ below and we add ``local'' to the assumption, {\em then} also in ${\bf V}'$, $\bar \varphi$ is a temporary explicit $({\frak B}, \theta)$--definition of a local nep-forcing notion, where \begin{enumerate} \item[$(*)_4$] $([\kappa \cup \theta]^{\le \aleph_0})^{{\bf V}'}$ is cofinal in $([\kappa \cup \theta]^{ \le \aleph_0},\subseteq)^{{\bf V}}$. \end{enumerate} \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} 1)\quad Straight. \noindent 2)\quad There are two implications implicit in \ref{1.5}(2) concerning the versions of clause (iii). Let \[\begin{array}{lr} S_\chi\stackrel{\rm def}{=}\{N:&N\in {\bf V},\ N\mbox{ is a countable submodel of }({\cal H}(\chi),\in)^{{\bf V}}\ \ \\ \ &\mbox{and $N$ is a }({\frak B},\bar{\varphi},\theta)\mbox{--candidate }\} \end{array}\] and let \[\begin{array}{lr} S'_\chi\stackrel{\rm def}{=}\{N:&N\in {\bf V}',\ N\mbox{ is a countable submodel of }({\cal H}(\chi),\in)^{{\bf V}'}\ \ \\ \ &\mbox{and $N$ is a }({\frak B},\bar{\varphi},\theta)\mbox{--candidate }\}. \end{array}\] Let $<^*_\chi\in {\bf V}$ be a well ordering of ${\cal H}(\chi)$ and let \[\begin{array}{lr} C\stackrel{\rm def}{=}\{N:&N\in {\bf V},\ N\mbox{ is a countable elementary submodel of }\ \ \\ \ &({\cal H}(\chi)^{\bf V},\in,{\bf V}\cap{\cal H}(\chi),<^*_\chi)\mbox{ to which $({\frak B},\bar{\varphi},\theta)$ belongs }\}. \end{array}\] \noindent{\em If clause (iii) for ${\bf V}$ then clause (iii) for ${\bf V}'$.}\qquad Why? Just observe that \begin{enumerate} \item[$(\bigstar)_1$] in ${\bf V}$:\quad $C$ is a club of $[{\cal H}(\chi)]^{\leq \aleph_0}$ and $\{N\cap{\bf V}': N\in C\}$ is a club of $[{\cal H}(\chi)^{{\bf V}'}]^{ \leq\aleph_0}$. \end{enumerate} Now suppose that, in ${\bf V}'$, $C'$ is a club of $[{\cal H}(\chi)^{{\bf V}'}]^{\leq\aleph_0}$ and we should prove $C'\cap S'_\chi\neq\emptyset$ (say for some model ${\frak B}'\in {\bf V}'$ with countable vocabulary, with universe ${\cal H}(\chi)^{{\bf V}'}$ and with Skolem functions, $C'=\{N:\ N\prec{\frak B}'$ countable $\}$\/). As $S'_\chi$ is stationary in ${\bf V}$, also $C_1=\{N\in C: N\cap{\bf V}'\in C'\}$ is a club of $[{\cal H}(\chi)^{{\bf V}'}]^{\leq\aleph_0}$ in ${\bf V}$. Hence there is $N\in S_\chi\cap C_1$. Now, $N\cap {\bf V}'$ is almost a member of $C'\cap S'_\chi$: it satisfies the requirements in the definitions of $C'$ and $S'_\chi$. But $N\cap{\bf V}'$ is a countable subset of ${\cal H}(\chi)^{{\bf V}'}$, so by Shoenfield--Levy absoluteness it exists.
\noindent{\em If clause (iii) for ${\bf V}'$ then clause (iii) for ${\bf V}$.}\qquad Work in ${\bf V}'$. So let $x\in{\Bbb Q}$ and let $\chi$ be large enough such that $({\cal H}(\chi),\in)\prec_{\Sigma_n}{\bf V}'$ for $n$ large enough. The set \[\begin{array}{lr} C^*\stackrel{\rm def}{=}\{N:&N\in {\bf V},\ N\mbox{ is a countable elementary submodel of }\ \ \\ \ &{\cal H}(\chi)\mbox{ to which $x,{\frak B},\bar{\varphi},\theta,{\frak C}$ belong }\} \end{array}\] is a club of $[{\cal H}(\chi)]^{\leq\aleph_0}$, hence has non-empty intersection with any stationary subset of $[{\cal H}(\chi)]^{\leq\aleph_0}$. In particular, by the assumption, there is a $({\frak B},\bar{\varphi},\theta)$--candidate $N\in C^*$. So $N\prec({\cal H}(\chi),\in)$ and $x,{\frak B},\bar{\varphi},\theta, {\frak C}\in N$. So \[{\bf V}'\models\varphi_0(x)\quad\Rightarrow\quad({\cal H}(\chi),\in)\models \varphi_0(x)\quad\Rightarrow\quad N\models\varphi_0(x)\quad\Rightarrow\quad x\in{\Bbb Q}^N.\] \noindent 3)\quad Straight. \noindent 4)\quad Suppose that \[{\bf V}'\models\mbox{`` }N \mbox{ is a }(\bar{\varphi},{\frak B})\mbox{--candidate and } p \in {\Bbb Q}^N\mbox{ ''}.\] In ${\bf V}'$, let $\langle {\cal I}_n:n<\omega\rangle$ list the ${\cal I}$ such that $N\models$``${\cal I}$ is a predense subset of ${\Bbb Q}$''. We know (by \ref{1.5}(2)(c)) that $N$ is a candidate in ${\bf V}$. Hence, in ${\bf V}$, there are $q,\langle p^n_\ell:\ell<\omega,n<\omega\rangle$ such that: \begin{enumerate} \item[(i)] $\langle p^n_\ell:\ell<\omega\rangle$ lists ${\cal I}_n \cap N$, \item[(ii)] $p\le^{{\Bbb Q}} q \in {\Bbb Q}$, \item[(iii)] $\varphi_2(\langle p^n_\ell:\ell<\omega\rangle{}^\frown\!\langle q \rangle)$ for each $n<\omega$. \end{enumerate} So there is a $({\frak B},\bar{\varphi},\theta)$--candidate $N_1$ such that $N\in N_1$, $\langle\langle p^n_\ell:\ell<\omega\rangle: n<\omega\rangle$, $q$ and $\langle {\cal I}_n: n<\omega\rangle$ belong to $N_1$, and $N_1\models$``$p \leq^{{\Bbb Q}} q$'', and $N_1\models\varphi_2(\langle p^n_\ell:\ell<\omega\rangle {}^\frown\! \langle q\rangle)$ for $n<\omega$ (by ``correct''). It is enough to find such $N_1\in{\bf V}'$, which follows from \ref{0.11}.\\ (We use an amount of downward absoluteness which holds as ${\bf V}'$ is a transitive class including enough ordinals.) \noindent 5)\quad Similar proof. \QED$_{\ref{1.5}}$ \begin{proposition} \label{1.6} \begin{enumerate} \item Assume $\bar{T}$ is an explicit temporary $(\kappa,\theta)$--definition of a snep--forcing notion ${\Bbb Q}$. For any extension ${\bf V}_1$ of ${\bf V}$, this still holds if $(*)_3$ of \ref{1.5}(4) above holds. So we can replace ``temporary'' by $K=$ the class of all set forcing notions. \item Assume $(\bar \varphi,{\frak B})$ is a simple explicit temporary $(\kappa,\theta)$--definition of a nep--forcing notion ${\Bbb Q}$. For any extension ${\bf V}_1$ of ${\bf V}$ this still holds in ${\bf V}_1$ if $(*)_3$ of \ref{1.5}(4) holds. So we can add/replace ``temporary'' by the class $K$ of all forcing notions preserving ``$([\theta]^{\le \aleph_0})^{{\bf V}}$ is cofinal in $([\theta]^{\le \aleph_0})^{{\bf V}_1}$''. \item Assume $(\bar{\varphi},{\frak B})$ is a local explicit temporary $(\kappa,\theta)$--definition of a nep forcing notion ${\Bbb Q}$. {\em Then} for any extension ${\bf V}_1$ of ${\bf V}$ this still holds, provided that: \begin{enumerate} \item[$(*)_4$]\quad $([\kappa\cup\theta]^{\leq\aleph_0})^{{\bf V}}$ is cofinal in $([\kappa\cup\theta]^{\leq\aleph_0},\subseteq)^{{\bf V}_1}$.
\end{enumerate} \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} 1), 2)\quad Left to the reader (and similar to the proof of part (3)). \noindent 3)\quad Let $\theta_1=\kappa+\theta$, and let $a\in [\theta_1]^{ \aleph_0}$, and consider the statement \begin{enumerate} \item[$\boxtimes_a$] if $N$ is a $({\frak B},\bar{\varphi},\theta )$--candidate satisfying $N\cap\theta_1\subseteq a$ and $p\in{\Bbb Q}^N$ (i.e.\ $N \models\varphi_0[p]$),\\ {\em then} there are $N'$, a generic extension of $N$ (so have the same ordinals and $N$ is a class of $N'$) which is a $({\frak B},\bar{\varphi}, \theta)$--candidate such that $N'\models$``${\cal P}(\theta)^N$ is countable'' and \[N'\models\mbox{`` }({\rm ex}ists q)[q \in{\Bbb Q}\ \&\ q \mbox{ is explicitly } (N \cap {\cal P}({\Bbb Q}),{\Bbb Q})\mbox{--generic] ''}.\] \end{enumerate} Note:\quad for $[x\in N'\wedge N'\models$``$x$ is countable''\quad$\Rightarrow \quad x\subseteq N']$ just use a suitable collapse. Now, only $(N',c)_{c \in N}/\cong$ and $(N,\alpha)_{\alpha \in a}/\cong$ and $(N',N,\alpha)_{\alpha \in a}$ are important and we can code $N$ as a subset of $a$ (as all three are countable). Thus the statement is essentially \[\begin{array}{r} (\forall N)[(N \mbox{ is not well founded (or not } {\frak B} {\,|\grave{}\,} (N \cap a) {\rm pr}ec_\Delta {\frak B}, \mbox{ etc}.) \vee\\ \vee ({\rm ex}ists N')(N' \mbox{ as above})]. \end{array}\] So it is $\Pi^1_2$, hence it is absolute from ${\bf V}$ to ${\bf V}_1$. Now, both in ${\bf V}'$ and in ${\bf V}_1$ the statement ``${\Bbb Q}$ is simply, locally, explicitly nep'' is equivalent to $(\forall a \in [\theta_1]^{\aleph_0})\boxtimes_a$, which is equivalent to ${\cal S}=\{a\in [\theta_1]^{\aleph_0}:\boxtimes_a\}$ is cofinal in $[\theta_1]^{\aleph_0}$. But by the previous paragraph ${\cal S}[{\bf V}] \subseteq {\cal S}[{\bf V}_1]$. Now $(*)_4$ gives the needed implication. \QED$_{\ref{1.6}}$ \begin{proposition} \label{1.7} \begin{enumerate} \item Assume $\bar{T}$ is a temporarily $(\kappa,\theta)$--definition of a snep--forcing notion ${\Bbb Q}$. If $(*)$ below holds, {\em then} we can find a tree $T_2 \subseteq {}^{\omega >}(\theta \times\theta \times \kappa')$ such that $\bar{T}{}^\frown\!\langle T_2 \rangle$ is an explicit temporary $(\kappa,\theta)$--definition of a snep-forcing notion ${\Bbb Q}$, where \begin{enumerate} \item[$(*)$] $\kappa=\theta=\aleph_0$, $\kappa' = \aleph_1$ or enough absoluteness. \end{enumerate} \item If ${\Bbb Q}$ (i.e.\ $\langle \varphi_0,\varphi_1 \rangle$) is a Souslin proper forcing notion (see \ref{0.7}) and ${\frak B}$ codes the parameter (so has universe $\kappa=\aleph_0$ and let $\theta = \aleph_0$), {\em then} $(\bar{\varphi},{\frak B})$ is a simple explicit temporary $(\kappa,\theta)$--definition of the nep-forcing notion ${\Bbb Q}$. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} 1)\quad The question is to express ``$\{p_n:n < \omega\}$ is predense above $q$'' which is equivalent to \[(\forall \nu \in {}^\omega \theta)[\varphi_0(\nu)\ \&\ \varphi_1(q,\nu)\ \Rightarrow\ ({\rm ex}ists\nu'\in {}^\omega \theta)(\bigvee_n \varphi_1(p_n,\nu')\ \&\ \varphi_1(\nu,\nu'))].\] So, as $\kappa = \theta = \aleph_0$, this is a $\Pi^1_2$-formula and hence it is $\aleph_1$-Souslin. \noindent 2)\quad Similarly (for $\varphi_2$ being upward absolute note that the relation is now $\Pi^1_1$ and $\Pi^1_1$ formulas are upward absolute). 
\QED$_{\ref{1.7}}$ \begin{definition} \label{1.9} Assume that $(\bar{\varphi},{\frak B})$ is a temporary $(\kappa,\theta)$--definition of a nep forcing notion ${\Bbb Q}$, and $N$ is a ${\Bbb Q}$--candidate. We say that a condition $q'\in{\Bbb Q}$ is essentially explicitly $(N,{\Bbb Q})$-generic if for some candidate $N'$, $N \subseteq N'$, $N \in N'$, $q'$ is explicitly $(N',{\Bbb Q})$-generic and for some $q_0\in {\Bbb Q}^{N'}$, $q_0 \le^{{\Bbb Q}} q'$ and $N' \models$``$q_0$ is $(N,{\Bbb Q})$-generic''. \end{definition} Note: if ${\Bbb Q}$ is a snep-forcing for $\bar{T}$, this relation is $(\kappa + \theta+\aleph_1)$--Souslin, too. \begin{proposition} \label{1.8} Assume ${\Bbb Q}$ is a correct explicitly nep-forcing notion, say by $(\bar{\varphi}, {\frak B})$. If $q$ is $(N,{\Bbb Q})$-generic, {\em then} for some $q'$ we have \[q \le q' \in {\Bbb Q}\quad \mbox{and}\quad q' \mbox{ is essentially explicitly } (N,{\Bbb Q})\mbox{-generic}.\] \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Let $\varphi_2(\langle p_n^{\cal I}:n<\omega\rangle,q)$ hold for some list $\langle p_n^{\cal I}: n<\omega\rangle$ of ${\cal I}\in{\rm pd}(N,{\Bbb Q})$. For ${\cal I}\in{\rm pd}(N,{\Bbb Q})$ let $N_{\cal I}$ be a ${\Bbb Q}$--candidate such that $N_{\cal I}\models\varphi_2(\langle p_n^{\cal I}:n<\omega\rangle,q)$. Let $N' \subseteq {\cal H}(\chi)$ be a countable ${\Bbb Q}$--candidate satisfying \[\{N,q\}\cup\{N_{\cal I}:{\cal I}\in{\rm pd}(N,{\Bbb Q})\}\in N'.\] By our assumptions there is $q'$ such that: $q\le q'\in{\Bbb Q}$ and $q'$ is explicitly $(N',{\Bbb Q})$-generic. \QED \begin{proposition} \label{1.10} \begin{enumerate} \item If $N$ is a ${\frak B}$--candidate, so in particular \[[N \models\mbox{``}\alpha<\kappa\vee \alpha<\theta\mbox{''}]\ \Rightarrow\ \alpha\in\kappa \vee \alpha\in\theta,\] and $|{\frak B}|$ is an ordinal, {\em then} there is a unique $N' = \mbox{ MosCol}_{\kappa,\theta}(N)$ and $f$ such that \begin{enumerate} \item[(a)] $f$ is an isomorphism from $N$ onto $N'$, \item[(b)] $f{\,|\grave{}\,} (N \cap \kappa) = {\rm id}$, $f(\kappa)=\kappa$ and $f {\,|\grave{}\,} (N \cap \theta)={\rm id}$, $f(\theta)=\theta$, \item[(c)] if $x\in N \backslash(\kappa+1)\backslash(\theta +1)$ then $f(x)= \{f(y):N \models$``$y \in x$''$\}$, \item[(d)] $N'$ is a ${\frak B}$--candidate. \end{enumerate} \item Note that if $N \models$``$x \in {\cal H}_{< \aleph_1}(\kappa)\cup {\cal H}_{< \aleph_1}(\theta)$'' then $f(x) = x$. \item If $|{\frak B}|$ is not an ordinal (so $\kappa \subseteq |{\frak B}| \subseteq {\cal H}_{< \aleph_1}(\kappa)$), then $N'$ is still a $({\frak B}, \bar{\varphi},\theta)$--candidate, using the ``but'' of clause (e) of Definition \ref{0.1C}(2). \end{enumerate} \end{proposition} \begin{fact} \label{1.11} In the definition of nep (or snep) in the ``properness'' clause, it is enough to restrict ourselves to a family $\bf I$ of predense subsets of ${\Bbb Q}^N$ such that: \[\begin{array}{l} \mbox{if }{\cal I}\in{\rm pd}(N,{\Bbb Q})\\ \mbox{then for some }{\cal J}\in{\bf I}\mbox{ we have }(\forall p \in {\cal I} \cap N)({\rm ex}ists q \in {\cal J})(N \models p \le^{{\Bbb Q}} q). \end{array}\] \end{fact} \begin{proposition} \label{2.11} \begin{enumerate} \item Assume $\bar{T}$ defines an explicit $(\kappa,\theta)$--snep forcing notion. Let $\bar{\varphi}=\bar{\varphi}_T$, ${\frak B}={\frak B}_{\bar{T}}$ (see \ref{1.1}(3)). If ${\Bbb Q}^{\bar{T}}$ is local then ${\Bbb Q}^{\bar{\varphi}}$ is local, in fact in Definition \ref{0.5A}(2). 
\item If (${\rm ZFC}^-_*$ is $K$--good and) ${\rm ZFC}^-_*$ says that $({\frak B},\bar{\varphi},\theta)$ is explicitly nep, and $\bar{\varphi}$ is correct then $({\frak B}^{\bar{\varphi}},\bar{\varphi},\theta)$ is explicitly nep and local. \QED \end{enumerate} \end{proposition} Moving from nep to snep (and inversely) we may ask what occurs to ``local". It is usually preserved. \stepcounter{section} \subsection*{\quad 3. There are examples} In this section we show that a large family of natural forcing notions satisfies our definition. Later we will deal with preservation theorems but to get nicer results we better ``doctor" the forcing notions, but this is delayed to the next section. \noindent In fact all the theorems of Ros{\l}anowski Shelah \cite{RoSh:470}, which were designed to prove properness, actually give one notion or another from \S1 here (confirming the thesis \ref{0.1B} of \S0). We will state them without giving the definitions from \cite{RoSh:470} and give a proof of (hopefully) well known specific cases, indicating why it works. \begin{lemma} [Ros{\l}anowski Shelah \cite{RoSh:470}] \label{2.1} \begin{enumerate} \item Suppose that ${\Bbb Q}$ is a forcing notion of one of the following types: \begin{enumerate} \item[(a)] ${\Bbb Q}^{\rm tree}_e(K,\Sigma)$ for some finitary tree-creating pair $(K,\Sigma)$, where $e=1$ and $(K,\Sigma)$ is 2-big {\em or} $e=0$ and $(K,\Sigma)$ is t-omittory (see \cite[\S2.3]{RoSh:470}; so e.g.\ this covers the Sacks forcing notion), \item[(b)] ${\Bbb Q}^*_{{\rm s}\infty}(K,\Sigma)$ for some finitary creating pair $(K,\Sigma)$ which is growing, condensed and of the AB--type or omittory, of the AB$^+$--type and satisfies $\oplus_0,\oplus_3$ of \cite[4.3.8]{RoSh:470} (see \cite[\S3.4]{RoSh:470}; this captures the Blass--Shelah forcing notion of \cite{BsSh:242}), \item[(c)] ${\Bbb Q}^*_{{\rm w}\infty}(K,\Sigma)$ for some finitary creating pair which captures singletons (see \cite[\S2.1]{RoSh:470}) \item[(d)] ${\Bbb Q}^*_f(K,\Sigma)$ for some finitary, 2-big creating pair $(K,\Sigma)$ with the Halving Property which is either simple or gluing and an $H$-fast function $f:\omega\times\omega\longrightarrow\omega$ (see \cite[\S2.2]{RoSh:470}). \end{enumerate} {\em Then} ${\Bbb Q}$ is an explicit $\aleph_0$--snep forcing notion, moreover, it is local. \item Assume that ${\Bbb Q}$ is a forcing notion of one of the following types: \begin{enumerate} \item[(a)] ${\Bbb Q}^{\rm tree}_e(K,\Sigma)$ for $e<3$ and a tree-creating pair $(K,\Sigma)$, which is bounded if $e=2$ (see \cite[\S2.3]{RoSh:470}; this includes the Laver forcing notion), \item[(b)] ${\Bbb Q}^*_\infty(K,\Sigma)$ for a finitary growing creating pair $(K,\Sigma)$ (see \cite[\S2.1]{RoSh:470}; this covers the Mathias forcing notion). \end{enumerate} {\em Then} ${\Bbb Q}$ is an explicit $\aleph_0$--nep forcing notion, moreover, it is local. \end{enumerate} \end{lemma} \noindent{\sc Proof} \hspace{0.2in} Let $N$ be a ${\Bbb Q}$-candidate and $p\in{\Bbb Q}^N$. Let $\langle {\cal J}_n: n<\omega\rangle$ list $\{{\cal J}:N\models ``{\cal J}\subseteq{\Bbb Q}$ is open dense''$\}$. Then there is a sequence $\langle (p_n,{\cal I}_n):n<\omega \rangle$ such that $p_n,{\cal I}_n\in N$, $N\models p_n\le p_{n+1}$, ${\cal I}_n \subseteq {\cal J}_n$ is a countable set, $\langle p_n:n<\omega\rangle$ has an upper bound in ${\Bbb Q}$ and ${\cal I}_n$ is predense above $p_{n+1}$, moreover, in an explicit way as described below (see the respective subsections in \cite{RoSh:470}). 
Moreover, \begin{quotation} \noindent in part (1), cases (a)+(c), ${\cal I}_n$ is finite and, moreover, we can say ``${\cal I}_n$ is predense above $p_{n+1}$'' in a Borel way. \end{quotation} For the Sacks forcing notion:\quad for some $k<\omega$, ${\cal I}_n = \{p^{[\eta]}_{n+1}:\eta\in p_{n+1},\ell g\/(\eta)=k\}$, so ${\cal I}_n$ corresponds to a front of $p_{n+1}$, which necessarily is finite. This property serves as $\varphi_2$ (compare with the more detailed description for the Laver forcing below). In part (1), case (b) (e.g.\ the Blass--Shelah forcing notion), ${\cal I}_n$ is countable. We do not know which level will be activated, but if we use $n$, then we get into ${\cal I}_n$; so ${\cal I}_n$ is countable, but the property is Borel, not $\Pi^1_1$. Now, in part (2), ${\cal I}_n$ is countable and again it corresponds to some front $A$ of $p_{n+1}$ in an appropriate sense. So ${\cal I}_n=\{p^{[\eta]}_{ n+1}:\eta \in A\}$, but to say ``$A$ is a front'' is $\Pi^1_1$ (in some instances of 2(a) we have $e$-thick antichains instead of fronts, but the complexity is the same). Recall that for a subtree $T \subseteq {}^{\omega >}\omega$, a set $A \subseteq T$ is a front of $T$ if \[(\forall\eta\in\lim(T))(\exists n)(\eta {\,|\grave{}\,} n \in A)\] (usually members of $A$ are pairwise incomparable). \noindent Specifically, for the Laver forcing notion, we can guarantee ${\cal I}_n = \{p^{[\eta]}_{n+1}:\eta\in A\}$, where $A$ is a front of $p_{n+1}$. Now, being a front is a $\Pi^1_1$--sentence (see the definition above), which is upward absolute, and this is our choice for $\varphi_2$. Let us write this formula in a more explicit way (for the case of the Laver forcing notion): \[\begin{array}{l} \varphi_2(\langle p_i:i \le \omega \rangle) \equiv \mbox{ each }p_i \mbox{ is a Laver condition and}\\ \qquad\bigwedge\limits_{i\in \omega}(\exists! \eta)(\eta \in p_\omega\ \&\ p_{2i}=p^{[\eta]}_\omega)\\ \mbox{[denote this unique $\eta$ by $\eta_i$]}\quad \mbox{ and}\\ \qquad\bigwedge\limits_{i\neq j} \eta_i \ntrianglelefteq \eta_j \mbox{ (incomparable) }\ \&\ (\forall\rho\in\lim(p_\omega))(\bigvee\limits_n \bigvee\limits_m \rho {\,|\grave{}\,} n = \eta_m)\\ \mbox{[this is: $\{p_i:i\in\omega\}$ is explicitly predense above $p_\omega$]}. \end{array}\] So it is $\Pi^1_1$ (of course, $\Sigma^1_2$ is okay, too). \QED$_{\ref{2.1}}$ Note that even for the Sacks forcing notion, ``$p,q$ are incompatible'' is complete $\Pi^1_1$. So ``$\{p_n:n \in \omega\}$ is predense above $p$'' will be $\Pi^1_2$. For Laver forcing we cannot do better. Now, generally $\Pi^1_2$ is not upward absolute from countable submodels, whereas $\Pi^1_1$ is. \begin{proposition} \label{2.1B} All the forcing notions ${\Bbb Q}$ defined in \cite{RoSh:470}, \cite{RoSh:628} are correct, and we can use ${\rm ZFC}^-_*={\rm ZC}^-$, which is good and normal (see \ref{0.9}). Also the relation ``$p,q$ are incompatible members of ${\Bbb Q}$'' is upward absolute from ${\Bbb Q}$--candidates (as well as $p \in {\Bbb Q}$, $p \notin {\Bbb Q}$, $p \le q$, and ``$p,q$ are compatible''). \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Check. \QED \stepcounter{section} \subsection*{\quad 4. Preservation under iteration: first round} We give here one variant of the preservation theorem, but for it we need some preliminary clarification. We have said ``there is $q$ which is $(N,{\Bbb Q})$-generic''; i.e.\ $q \mathrel {{\vrule height 6.9pt depth -0.1pt}\!
\vdash }_{{\Bbb Q}}$`` $\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\cap {\Bbb Q}^N$ is a generic subset of ${\Bbb Q}^N$ over $N$ ''. Note that we have said ${\Bbb Q}^N$ and not ${\Bbb Q}\cap N$ as we intended to demand $N\models$``$r\in{\Bbb Q}$''\quad $\Rightarrow\quad {\bf V} \models$``$r \in {\Bbb Q}$'' rather than $r\in N\ \ \Rightarrow\ \ [N \models$``$r \in {\Bbb Q}$''$\ \Leftrightarrow\ {\bf V}\models$``$r \in{\Bbb Q}$''] (the version we use is, of course, weaker and so better). Now, to use Definition \ref{0.2}(1) we usually use $N[G_Q]$ (e.g.\ when iterating). But what is $N[G]$ here? In fact, what is the connection between $N\models$``$\mathunderaccent\tilde-3 {\tau}$ is a ${\Bbb Q}$--name'' and ${\bf V}\models$``$\mathunderaccent\tilde-3 {\tau}$ is a ${\Bbb Q}$--name''? Because $[x \in Y \in N \nRightarrow x \in N]$, none of the implications holds. For our purpose, the usual $N[G]=\{\mathunderaccent\tilde-3 {\tau}[G]:\mathunderaccent\tilde-3 {\tau}\in N \mbox{ is a }{\Bbb Q}\mbox{-name}\}$ is not appropriate as it is not clear where being a ${\Bbb Q}$-name is defined. We use $N\langle G \rangle$ which is $N[G\cap{\Bbb Q}^N]$ when we disregard objects in ${\bf V}\backslash N$. Of course, if the models are $\subseteq {\cal H}_{<\aleph_1}(\kappa\cup\theta)$ life is easier; but we may lose $N \models{\rm ZFC}^-_*$. We then prove (in \ref{2.5}) the first version of preservation by CS iteration. We aim at proving only that ${\Bbb P}_\alpha={\rm Lim}(\bar{{\Bbb Q}})$ satisfies the main clause, i.e.\ clause (c) of Definition \ref{0.2} (but did not say that ${\Bbb P}_\alpha$ is nep itself). For this we need again to define what is $N\langle G\rangle$ for $N$ which is not necessarily a candidate. The second treatment (in \S5) depends just on Definition \ref{2.3} from this section. \begin{definition} \label{2.3} \begin{enumerate} \item Assume $N\models$``${\Bbb Q}$ is a nep-forcing notion'' and $G\subseteq{\Bbb Q}^N$ is generic over $N$. We define $N\langle G\rangle=N\langle G\cap{\Bbb Q}^N\rangle$ ``ignoring ${\bf V}$" and letting ${\frak B}^{N \langle G\rangle}={\frak B}^N$ for the relevant ${\frak B}$. In details, \[N \langle G \rangle\stackrel{\rm def}{=}\{\mathunderaccent\tilde-3 {\tau}^N\langle G\rangle: N \models\mbox{``}\mathunderaccent\tilde-3 {\tau}\mbox{ is a ${\Bbb Q}$--name''}\},\] where $\mathunderaccent\tilde-3 {\tau}^N\langle G\rangle$ is defined by induction on ${\rm rk}^N( \mathunderaccent\tilde-3 {\tau})$ (see e.g.\ \cite[Ch.I]{Sh:f}): \begin{enumerate} \item[(a)] if for some $p\in G\cap {\Bbb Q}^N$ and $x \in N$ we have $N\models[p \mathrel {{\vrule height 6.9pt depth -0.1pt}\! 
\vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\tau} = x$''] then $\mathunderaccent\tilde-3 {\tau}^N\langle G\rangle=x$, \item[(b)] if not (a) then necessarily $N\models$``$\mathunderaccent\tilde-3 {\tau}$ has the form $\{(p_i,\mathunderaccent\tilde-3 {\tau}_i):i<i^*\}$, $p_i \in {\Bbb Q}$, $\mathunderaccent\tilde-3 {\tau}_i$ a ${\Bbb Q}$--name of rank $<{\rm rk}(\mathunderaccent\tilde-3 {\tau})$''; now we let \end{enumerate} \[\mathunderaccent\tilde-3 {\tau}^N\langle G\rangle=\{(\mathunderaccent\tilde-3 {\tau}')^N\langle G\rangle: \mathunderaccent\tilde-3 { \tau}'\in N\mbox{ and for some }p \in G \cap {\Bbb Q}^N\mbox{ we have }(p, \mathunderaccent\tilde-3 {\tau}')\in\mathunderaccent\tilde-3 {\tau}\}.\] \item If $N \models$``$\mathunderaccent\tilde-3 {\tau}$ is a ${\Bbb Q}$--name'' we define a ${\Bbb Q}$--name $\mathunderaccent\tilde-3 {\tau}^{\langle N\rangle}$ as follows: \begin{enumerate} \item[(a)] if $N\models$``$\mathunderaccent\tilde-3 {\tau}=\check{x}$'', $x \in N$, we let $\mathunderaccent\tilde-3 {\tau}^{\langle N\rangle}=\check{x}$ (see e.g.\ \cite[Ch.I]{Sh:f}), \item[(b)] if $N\models$``$\mathunderaccent\tilde-3 {\tau}=\{(p_i,\mathunderaccent\tilde-3 {\tau}_i):i<i^*\}$, where $p_i\in{\Bbb Q}$, $\mathunderaccent\tilde-3 {\tau}_i$ a ${\Bbb Q}$--name of rank $<{\rm rk}(\mathunderaccent\tilde-3 {\tau})$'' then \[\mathunderaccent\tilde-3 {\tau}^{\langle N\rangle}=\big\{(p,(\mathunderaccent\tilde-3 {\tau}')^{\langle N\rangle}): N\models\mbox{``}(p,\mathunderaccent\tilde-3 {\tau}')\in \mathunderaccent\tilde-3 {\tau}\mbox{''}\big\}.\] \end{enumerate} \item We say ``$q$ is $\langle N,{\Bbb Q}\rangle$--generic'' if $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$ \mathunderaccent\tilde-3 {G}_{{\Bbb Q}} \cap{\Bbb Q}^N$ is a subset of $({\Bbb Q}^N,<^N_{{\Bbb Q}})$ generic over $N$''. \end{enumerate} \end{definition} \begin{definition} \label{2.3A} \begin{enumerate} \item In Definition \ref{0.2}(1) replacing ``temporarily'' by ``$K$--absolutely'' means \begin{enumerate} \item[(a)] if ${\bf V}_1$ is a $K$--extension of ${\bf V}$ (i.e.\ a generic extension of ${\bf V}$ by a forcing notion from $K^{{\bf V}}$) {\em then} \begin{enumerate} \item[(i)] ${\bf V}\models$``$x\in{\Bbb Q}^{\bar{\varphi}}\mbox{''}\quad\Rightarrow \quad{\bf V}_1\models$``$x\in {\Bbb Q}^{\bar{\varphi}}$'', \item[(ii)] ${\bf V}\models$``$x<^{{\Bbb Q}^{\bar{\varphi}}} y\mbox{''}\quad \Rightarrow\quad{\bf V}_1\models$``$x<^{{\Bbb Q}^{\bar{\varphi}}} y$'', \item[(iii)] in the explicit case we have a similar demand for $\varphi_2$; otherwise, if $N$ is a ${\Bbb Q}^{\bar{\varphi}}$--candidate in ${\bf V}$, $q\in {\Bbb Q}^{\bar{\varphi}}$ is $\langle N,{\Bbb Q} \rangle$--generic (see \ref{2.3}(3)) in ${\bf V}$ {\em then} $q$ is $\langle N,{\Bbb Q} \rangle$--generic in ${\bf V}_1$, \end{enumerate} \item[(b)] if ${\bf V}_1$ is a $K$--extension, {\em then} the relevant part of Definition \ref{0.2} and clause (a) here holds in ${\bf V}_1$, \item[(c)] if ${\bf V}_{\ell +1}$ is a $K$--extension of ${\bf V}_\ell$ for $\ell\in \{0,1,2\}$, ${\bf V}_0 = {\bf V}$, {\em then} ${\bf V}_3$ is a $K$--extension of ${\bf V}_1$. \end{enumerate} \item We omit $K$ when we mean: any set forcing. \end{enumerate} \end{definition} One can make ``absolutely nep'' the main case. Note that (a)(i) + (ii), and also (a)(iii), are automatic for explicitly snep.
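Indeed, for an explicitly snep ${\Bbb Q}$, defined from $\bar{T}$, the upward absoluteness is just the persistence of Souslin witnesses: the trees $T_0,T_1,T_2$ are the same in ${\bf V}$ and in ${\bf V}_1$, so e.g.
\[{\bf V}\models\mbox{``}\nu*\eta\in\lim(T_0)\mbox{''}\quad\Rightarrow\quad {\bf V}_1\models\mbox{``}\nu*\eta\in\lim(T_0)\mbox{''},\]
and the same witness $\eta$ shows ${\bf V}_1\models$``$\nu\in{\Bbb Q}$''; similarly for $T_1$ (the order) and, in the explicit case, for $T_2$ (i.e.\ $\varphi_2$).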
\begin{definition} \label{2.3B} \begin{enumerate} \item We say ${\rm ZFC}^-_*$ is nice to $\chi_1$ if $\chi_1$ is a constant in ${\frak C}$, ${\rm ZFC}^-_*$ says $\chi_1$ is a strong limit and ${\rm ZFC}^-_*$ is preserved by forcing with forcing notions of cardinality $<\chi_1$. \item We say ${\Bbb Q}$ is nice (or ${\rm ZFC}^-_*$ is nice to ${\Bbb Q}$) if for some $\chi_1$, ${\rm ZFC}^-_*$ is nice to $\chi_1$ and it says ${\Bbb Q} \in {\cal H}(\chi_1)$. \end{enumerate} \end{definition} \begin{proposition} \label{2.4} If $N$ is a ${\Bbb Q}$--candidate, ${\Bbb Q}$ is a nep-forcing notion, $G_{{\Bbb Q}} \subseteq {\Bbb Q}$ is generic over ${\bf V}$ and $G_{{\Bbb Q}} \cap{\Bbb Q}^N$ is generic over $N$ then: \begin{enumerate} \item[(a)] $N\models$``$\mathunderaccent\tilde-3 {\tau}$ is a ${\Bbb Q}$-name'' implies $\mathunderaccent\tilde-3 {\tau}^N \langle G\rangle=\mathunderaccent\tilde-3 {\tau}^{\langle N\rangle}[G]$, \item[(b)] $N\langle G\rangle$ is a model of ${\rm ZFC}^-_*$ and moreover it is a ${\Bbb Q}$--candidate and is a forcing extension of $N$, provided that the forcing theorem applies, i.e.\ ${\rm ZFC}^-_*$ is $K$--good, ${\Bbb Q}\in K$ (see Definition \ref{0.9}), \item[(c)] $N\langle G\rangle\cap\kappa=N\cap\kappa$, $N\langle G\rangle \cap \theta = N\cap\theta$. \QED \end{enumerate} \end{proposition} \noindent{\bf Remark:}\quad It seems that usually (but not in general) we have: \[({\cal H}_{< \aleph_1}(\kappa))^{{\bf V}[G]} \cap N \langle G \rangle={\cal H}_{<\aleph_1}(\kappa)^{{\bf V}}\cap N\quad\mbox{ and}\] \[({\cal H}_{< \aleph_1}(\theta))^{{\bf V}[G]}\cap N\langle G\rangle=({\cal H}_{< \aleph_1}(\theta))^{{\bf V}} \cap N.\] \begin{proposition} \label{2.5} Assume \begin{enumerate} \item[(a)] $\bar{{\Bbb Q}}=\langle {\Bbb P}_i,{\mathunderaccent\tilde-3 {\Bbb Q}}_j:i \le\alpha,j<\alpha\rangle$ is a CS iteration, \item[(b)] for each $i<\alpha$ \[\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_i}\mbox{``}(\bar{\varphi}_i,{\frak B}_i)\mbox{ is a temporary $(\kappa_i,\theta_i)$--definition of a nep-forcing notion ${\mathunderaccent\tilde-3 {\Bbb Q}}_i$''}\] and the only parameter of $\bar{\varphi}_i$ is ${\frak B}_i$, so we are demanding $\langle(\bar \varphi_i,{\frak B}_i):i<\alpha\rangle \in {\bf V}$, \item[(c)] ${\frak B}$ is a model with universe $\alpha^*$, or including $\alpha^*$ and included in ${\cal H}_{<\aleph_1}(\alpha^*)$, where $\alpha^* \ge\alpha$, $\alpha^*\ge\kappa_i =\kappa({\frak B}_i)$, ${\frak B}$ codes $\langle({\frak B}_i,\bar{\varphi}_i):i<\alpha\rangle$ and the functions $\alpha -1,\alpha+1$. \end{enumerate} We can use a vocabulary $\subseteq \{P_{n,m}:n,m < \omega\}$ where $P_{n,m}$ is an $n$-place predicate to code $\langle {\frak B}^i:i<\alpha\rangle$: let $P^{\frak B}_{n+1,2m}=\{\langle i,x_1,\ldots,x_n \rangle:\langle x_1,\ldots, x_n \rangle \in P^{{\frak B}_i}_{n,m}\}$, $P_{2,1} = \{(\alpha,\alpha +1): \alpha +1 < \alpha^*\}$ (and $\Delta$ is the set of first order formulas). \noindent{\em Then}:\quad if $N\subseteq ({\cal H}(\chi),\in)$ is a ${\frak B}$--candidate, $p\in{\Bbb P}_\alpha\cap N$, then for some condition $q$, $p\le q\in {\Bbb P}_\alpha$ and $q$ is $\langle N,{\Bbb P}_\alpha \rangle$--generic, a notion which is defined below (in particular ${\Bbb P}_\alpha$ is defined from ${\frak B}$). \end{proposition} \begin{definition} \label{2.6} Under the assumptions of \ref{2.5}, in $N$ we have a definition of the countable support iteration $\bar{{\Bbb Q}}=\langle {\Bbb P}_i,{\mathunderaccent\tilde-3 {\Bbb Q}}_j:i\le\alpha,j< \alpha\rangle$.
We define by induction on $j\in N\cap(\alpha+1)$ when $q\in {\Bbb P}_j$ is $\langle N,{\Bbb P}_j \rangle$--generic: \begin{enumerate} \item[$(\circledast)$] if $q\in G_j\subseteq{\Bbb P}_j$ and $G_j$ is generic over ${\bf V}$ {\em then} $G^{\langle N\rangle}_j$ is a generic subset of ${\Bbb P}^N_j$ over $N$, where \[G^{\langle N\rangle}_j\stackrel{\rm def}{=}\{p:N\models{``}p\in {\Bbb P}_j \mbox{''\ \ and\ \ } p^{\langle\langle N\rangle\rangle} \in G_j\},\] where $p^{\langle\langle N\rangle\rangle}$ is a function with domain ${\rm Dom}(p)^N$, and $p^{\langle\langle N\rangle\rangle}(\gamma)$ is the following ${\Bbb P}_\gamma$--name:\quad {\em if} $p(\gamma)^{\langle N \langle\mathunderaccent\tilde-3 {G}_\gamma\cap N\rangle\rangle}\in {\mathunderaccent\tilde-3 {\Bbb Q}}_\gamma$, {\em then} it is $p(\gamma)$; {\em if} not, {\em then} it is $\emptyset_{{\mathunderaccent\tilde-3 {\Bbb Q}}_\gamma}$. \end{enumerate} \end{definition} \begin{remark} \label{2.7} The major weakness is that ${\Bbb P}_\alpha$ is not proved to be in some of our classes (nep or snep). We get the ``original property'' without the ``support team'', i.e.\ the ${\mathunderaccent\tilde-3 {\Bbb Q}}_i$ are nep, but on ${\Bbb P}_\alpha$ we just say it satisfies the main part of nep. A minor weakness is that ${\frak B}_i$ is not allowed to be a ${\Bbb P}_i$--name in any way. In the later theorems, we use ${\Bbb P}'_\alpha \subseteq {\Bbb P}_\alpha$ consisting of ``hereditarily countable'' names. Note: inside $N$, if ``$N\models p\in{\Bbb P}_\alpha$'' then ${\rm Dom}(p)\in [\alpha]^{\le \aleph_0}$ in $N$'s sense hence (see Definition \ref{0.1C}(2)), ${\rm Dom}(p)\subseteq N$ and similarly the names are actually from $N$; members outside $N$ do not count, they may not be in ${\Bbb P}_\alpha$ at all. \end{remark} {\noindent{\sc Proof of \ref{2.5}} \hspace{0.2in}} We imitate the proof of the preservation of properness. So we prove by induction on $j\in (\alpha+1)\cap N$ that: \begin{enumerate} \item[$(*)_j$] if $i\in j\cap N$, $q$ is $(N,{\Bbb P}_i)$--generic, and $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_i}$``$(p {\,|\grave{}\,} i)^{\langle N \rangle}\in\mathunderaccent\tilde-3 {G}_{{\Bbb P}_i}$'' {\em then} we can find a condition $r\in {\Bbb P}_j$ such that $r{\,|\grave{}\,} i=q$, and $r \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_j}$``$(p {\,|\grave{}\,} j)^{\langle N \rangle}\in \mathunderaccent\tilde-3 {G}_{{\Bbb P}_j}$'', $r$ is $(N,{\Bbb P}_j)$--generic, and ${\rm Dom}(r)\setminus i\subseteq N$. \end{enumerate} \noindent{\sc Case 0}:\quad $j=0$.\\ Left to the reader. \noindent{\sc Case 1}:\quad $j = j_1+1$.\\ So $j_1 \in N$ (why? use $P_{2,1}$ and \ref{0.2}(2)(e)), and by the inductive hypothesis and the form of the conclusion without loss of generality $i =j_1$. Let $q \in G_i\subseteq {\Bbb P}_i$, $G_i$ generic over ${\bf V}$. So $N\langle G^{\langle N \rangle}_i \rangle\cap\alpha^*=N\cap\alpha^*$ (by \ref{2.4}), and hence ${\frak B}{\,|\grave{}\,} N\langle G^{\langle N \rangle}_i \rangle = {\frak B} {\,|\grave{}\,} N \prec {\frak B}$. But $i\in N$, so this applies to ${\frak B}_i$, too. So ${\bf V}[G_i]\models$``$N \langle G^{\langle N \rangle}_i \rangle$ is a ${\frak B}_i$--candidate''. Also $N\langle G^{\langle N\rangle}_i\rangle \models$``$p(i)^{\langle N\langle G^{\langle N\rangle}_i\rangle\rangle}\in {\Bbb Q}_i$'' because $G^{\langle N\rangle}_i$ is a generic subset of $P^N_i=\{x:x \in N$, $N\models$``$x\in {\Bbb P}_i$''$\}$ over $N$ and use the property of ${\Bbb Q}_i$.
\noindent{\sc Case 3}:\quad $j$ is a limit ordinal.\\ As in the proof for properness (see \cite[Ch.III, 3.2]{Sh:f}). \QED$_{\ref{2.5}}$ \noindent{\bf Remark:}\quad Note that if $N \models$``$w$ is a subset of $\alpha$'' then we can deal with ${\Bbb P}_w$, as in \S5. \stepcounter{section} \subsection*{\quad 5. True preservation theorems} Let us recall that ${\Bbb Q}$ is nep if ``$p\in{\Bbb Q}$'', ``$p\le_{{\Bbb Q}} q$'' are defined by upward absolute formulas for models $N$ which are $({\frak B}^{{\Bbb Q}},\bar{\varphi}^{{\Bbb Q}},\theta^{{\Bbb Q}})$--candidates; i.e.\ $N\subseteq({\cal H}(\chi),\in)$ countable, ${\frak B}^{{\Bbb Q}}\in N$ a model on some $\kappa$, ${\frak B}^{{\Bbb Q}}{\,|\grave{}\,} N\prec_\Delta {\frak B}^{{\Bbb Q}}$, $N$ a model of ${\rm ZFC}^-_*$ and for each such model we have the properness condition. Usually ${\Bbb Q}\subseteq {}^\omega \theta$, or ${\cal H}_{< \aleph_1}(\theta)$ or so. We would like to prove that CS iteration preserves ``being nep'', but CS may give ``too large'' names of conditions (of ${\mathunderaccent\tilde-3 {\Bbb Q}}_i$, $i>0$) depending say on large maximal antichains (of ${\Bbb P}_i$). Note: if ${\Bbb Q}_0$ is not c.c.c. normally it has a maximal antichain which is not absolutely so; start with a perfect set of pairwise incompatible elements and extend it to a maximal antichain. Then whenever a real is added, the maximality is lost. Finally, c.c.c. is normally lost in ${\Bbb P}_\omega$. So we will revise our iteration so that we consider only hereditarily countable names. But in the iteration, trying to prove a case of properness for a candidate $N$ and $p \in{\Bbb P}^N_{\alpha+1}$, considering $q\in{\Bbb P}_\alpha$ which is $\langle N, {\Bbb P}^N_\alpha\rangle$--generic, we know that in ${\bf V}[G_{{\Bbb P}_\alpha}]$ (if $q\in G_{{\Bbb P}_\alpha}$), there is $q'\in {\mathunderaccent\tilde-3 {\Bbb Q}}_\alpha[G_{{\Bbb P}_\alpha}]$ which is $\langle N[G_{{\Bbb P}_\alpha}],{\Bbb Q}_\alpha[G_\alpha]\rangle$--generic. But under present circumstances, we have no idea where to look for $q'$, so no way to make a name of it, $\mathunderaccent\tilde-3 {q}'$, which is hereditarily countable, without increasing $q \in{\Bbb P}_\alpha$. Except when ${\Bbb Q}$ is local (see \ref{0.5A}), of course; it is not unreasonable to assume it but we prefer not to, and even then we just have to look for it in, essentially, a copy of the set of reals. The solution is to increase ${\Bbb Q}_i$ insubstantially so that we will exactly have the right element $q'$: \[p(\alpha)\ \&\ \bigwedge_{{\cal I}\in {\rm pd}_{{\Bbb Q}}(N)}\,\,\bigvee_{r\in {\cal I} \cap N} r,\] as explained below. We give two variants. \begin{notation} \label{4.0A} Let ${\rm pd}_{{\Bbb Q}}(N)={\rm pd}(N,{\Bbb Q})=\{{\cal I}:N\models$``${\cal I}$ is a predense subset of ${\Bbb Q}$''$\}$ and ${\cal I}[N] = {\cal I}^N={\cal I}\cap N$. \end{notation} \begin{definition} \label{4.1} Let ${\Bbb Q}$ be an explicitly nep-forcing notion.
Then we define ${\Bbb Q}'={\rm cl}({\Bbb Q})$ as follows: \begin{enumerate} \item[(a)] the set of elements is \[{\Bbb Q}\cup\big\{p\ \&\ \bigwedge_{{\cal I} \in {\rm pd}_{{\Bbb Q}}(N)}\,\,\bigvee_{r\in {\cal I} \cap N} r:\ \ p \in {{\Bbb Q}}^N \mbox{ and }N \mbox{ is a }{\Bbb Q}\mbox{--candidate} \big\}\] [we are assuming no incidental identifications] and, in any reasonable way, code them, if ${\Bbb Q}$ is simple, as members of ${\cal H}_{< \aleph_1}(\theta)$; for snep (or very simple) ${\Bbb Q}$ we work slightly more to code them as members of ${}^\omega \theta$ (pedantically, easier in ${}^\omega(\theta + \omega)$), \item[(b)] the order $\le^{{\Bbb Q}'}$ is given by $q_1 \le^{{\Bbb Q}'} q_2$ if and only if one of the following occurs: \begin{enumerate} \item[$(\alpha)$] $q_1,q_2 \in {\Bbb Q}$, $q_1 \le^{{\Bbb Q}} q_2$, \item[$(\beta)$] $q_1 \in {\Bbb Q}$, $q_2 = p\ \&\ \biggl(\bigwedge\limits_{{\cal I} \in{\rm pd}_Q(N)}\, \bigvee\limits_{r\in {\cal I}\cap N} r \biggr)$ and $q_1 \le^{{\Bbb Q}} p$, \item[$(\gamma)$] $q_1 = p\ \&\ \biggl(\bigwedge\limits_{{\cal I}\in{\rm pd}(N,Q)} \,\bigvee\limits_{r\in {\cal I}\cap N} r\biggr)$ and $q_2\in{\Bbb Q}$, $p \le^{{\Bbb Q}} q_2$ and if ${\cal I}\in{\rm pd}(N,{\Bbb Q})$ then \[\begin{array}{r} (\exists q' \in {\Bbb Q})(\exists\langle p_n:n<\omega\rangle)(q'\le^{{\Bbb Q}} q_2\ \&\ \varphi^{\Bbb Q}_2(\ldots, p_n, \ldots, q')\ \&\quad \\ \{p_n:n<\omega\}\mbox{ lists }{\cal I}\cap N) \end{array}\] \item[$(\delta)$] $q_\ell=p_\ell \ \&\ \biggl(\bigwedge\limits_{{\cal I}\in {\rm pd}(N,Q)} \,\bigvee\limits_{r\in {\cal I}\cap N} r\biggr)$ (for $\ell =1,2$) and: $q_1 = q_2$ {\em or} $q_1 \le p_2$ by clause $(\gamma)$. \end{enumerate} \end{enumerate} \end{definition} \noindent{\bf Remark:}\quad In \cite{Sh:f}, for a hereditarily countable name, instead of \[p\ \&\ \bigwedge\limits_{{\cal I} \in {\rm pd}_{{\Bbb Q}}(N)}\,\bigvee\limits_{r\in {\cal I}\cap N} r\] we use the first member of ${\Bbb Q}_i$ which forces this. Simpler, but when we ask whether this element is $\le q$ (for some $q \in{\Bbb Q}$) we run into uncountable antichains. \begin{proposition} \label{4.2} \begin{enumerate} \item Assume ${\Bbb Q}$ is explicitly nep. {\em Then}: \begin{enumerate} \item[(a)] in Definition \ref{4.1}, ${\Bbb Q}'$ is a (quasi) order, \item[(b)] $\le^{{\Bbb Q}'}{\,|\grave{}\,}{\Bbb Q}=\le^{{\Bbb Q}}$, \item[(c)] ${\Bbb Q}$ is a dense subset of ${\Bbb Q}'$. \end{enumerate} \item Assume in addition: \begin{enumerate} \item[$(\boxtimes_2)$] ${\Bbb Q}$ is explicitly nep in every ${\Bbb Q}$--candidate. \end{enumerate} {\em Then}: \begin{enumerate} \item[(d)] if $N$ is a ${\Bbb Q}$--candidate, $N\models$``$p\in{\Bbb Q}'$'', then for some $q\in N$ we have $N\models$``$p\le^{{\Bbb Q}'}q \ \&\ q \in {\Bbb Q}$'', \item[(e)] ${\Bbb Q}'$ is explicitly nep (with the same ${\frak B}^{{\Bbb Q}}$ and parameters). \end{enumerate} \item Assume in addition \begin{enumerate} \item[$(\boxtimes_3)$] for any ${\Bbb Q}$--candidate $N$, if $N'$ is a generic extension of $N$ for the forcing notion ${\rm Levy}(\aleph_0,|{\cal P}({\Bbb Q})|^N)$, {\em then} $N'$ is a ${\Bbb Q}$--candidate. \end{enumerate} {\em Then} we can add \begin{enumerate} \item[(e)$^+$] ${\Bbb Q}'$ is explicitly local nep (see Definition \ref{0.5A}). \end{enumerate} \item We can replace above (in the assumption and conclusion) nep by snep, or nep by simple nep.
\end{enumerate} \end{proposition} \begin{remark} \label{4.2A} The definition of ``local'' (in \ref{0.5A}) and the statement $(\boxtimes_3)$ in \ref{4.2}(3) can be handled a little differently. We can (in \ref{0.5A}(2)) demand less of $N'$ (not that it is a ${\Bbb Q}$--candidate, just that it has some of its main properties), and in $(\boxtimes_3)$ of \ref{4.2}(3) that ${\rm ZFC}^-_*$ says that ${\cal H}(\theta)$ is a set (so has a cardinality) and is a ${\Bbb Q}$--candidate. So we may consider having ${\rm ZFC}^-_\ell$ for several $\ell$'s, ${\rm ZFC}^*_\ell$ speaks on $\chi_0>\ldots>\chi_{\ell -1}$ and the generic extensions of a model of ${\rm ZFC}^*_{\ell +1}$ for ${\rm Levy}(\aleph_0,\chi_\ell)$ are models of ${\rm ZFC}^-_\ell$. Similar remarks hold for \S7. But, as we can deal with the nice case (see Definition \ref{2.3B}), we may start with a countable $N\prec({\cal H}(\beth_\omega),\in)$ (or even better $({\cal H}(\beth_{\omega_1}),\in)$ so that ``countable depth can be absorbed''); we ignore this in our main presentation. Does $(\boxtimes_3)$ of \ref{4.2}(3) occur at all? Let $G$ be a subset of ${\rm Levy}(\aleph_0,|{\cal P}({\Bbb Q})|^N)$ generic over $N$. Then $N'\stackrel{\rm def}{=}N\langle G\rangle$ is a ${\Bbb Q}$--candidate. \end{remark} {\noindent{\sc Proof of \ref{4.2}} \hspace{0.2in}} 1) {\bf Clause (a)}:\quad Assume $q_1 \le q_2 \le q_3$; we have $2^3=8$ cases according to truth values of $q_i\in {\Bbb Q}$: \noindent{\sc Case (A)}:\qquad $q_1,q_2,q_3\in {\Bbb Q}$.\\ Trivial. \noindent{\sc Case (B)}:\qquad $q_1,q_2\in {\Bbb Q}$, $q_3\notin{\Bbb Q}$.\\ Check. \noindent{\sc Case (C)}:\qquad $q_1\notin{\Bbb Q}$, $q_2,q_3\in{\Bbb Q}$.\\ Check. \noindent{\sc Case (D)}:\qquad $q_1\in{\Bbb Q}$, $q_2\notin{\Bbb Q}$, $q_3\in{\Bbb Q}$.\\ Then $q_2=p_2 \ \&\ \bigwedge\limits_{{\cal I} \in{\rm pd}(N,{\Bbb Q})}\, \bigvee\limits_{r\in {\cal I}\cap N} r$ and $q_1\le^{{\Bbb Q}} p_2$ (by \ref{4.1}(b)$(\beta)$) and $p_2\le^{{\Bbb Q}} q_3$ (by \ref{4.1}(b)$(\gamma)$). Hence $q_1 \le^{{\Bbb Q}} q_3$ follows. \noindent{\sc Case (E)}:\qquad $q_1\in{\Bbb Q}$, $q_2\notin{\Bbb Q}$, $q_3\notin{\Bbb Q}$.\\ Let $q_\ell=p_\ell\ \&\ \bigwedge\limits_{{\cal I}\in{\rm pd}(N,Q)}\, \bigvee\limits_{r\in {\cal I}\cap N} r$ for $\ell = 2,3$. So $q_1\le^{{\Bbb Q}} p_2$ (see \ref{4.1}(b)$(\beta)$) and $p_2\le^{{\Bbb Q}} p_3$ (see \ref{4.1}(b)$(\gamma),(\delta)$). Hence $q_1\le^{{\Bbb Q}} p_3$ (as $\le^{{\Bbb Q}}$ is transitive) and so $q_1\le q_3$ (see \ref{4.1}(b)$(\beta)$). \noindent{\sc Case (F)}:\qquad $q_1\notin{\Bbb Q}$, $q_2\notin{\Bbb Q}$, $q_3\in{\Bbb Q}$.\\ Let $q_\ell=p_\ell \ \&\ \bigwedge\limits_{{\cal I}\in{\rm pd}(N,{\Bbb Q})}\, \bigvee\limits_{r\in {\cal I}\cap N} r$ for $\ell = 1,2$ and suppose that $q_1\neq q_2$ (otherwise trivial). Then, by \ref{4.1}(b)$(\delta)$, $q_1 \le p_2$ and by \ref{4.1}(b)$(\gamma)$, $p_2\le q_3$ so by the previous case (C), $q_1 \le q_3$ as required. \noindent{\sc Case (G)}:\qquad $q_1\notin{\Bbb Q}$, $q_2\in{\Bbb Q}$, $q_3\notin{\Bbb Q}$.\\ Let $q_\ell=p_\ell\ \&\ \bigwedge\limits_{{\cal I}\in{\rm pd}(N,{\Bbb Q})}\, \bigvee\limits_{r\in{\cal I}\cap N} r$ for $\ell=1,3$. Now, by \ref{4.1}(b)$(\beta)$, $q_2\le p_3$ and by the previous case (C), $q_1 \le p_3$ and hence, by \ref{4.1}(b)$(\delta)$, $q_1 \le q_3$ as required. \noindent{\sc Case (H)}:\qquad $\bigwedge\limits_\ell q_\ell\notin {\Bbb Q}$.\\ Let $q_\ell=p_\ell\ \&\ \bigwedge\limits_{{\cal I}\in {\rm pd}(N,{\Bbb Q})}\, \bigvee\limits_{r\in {\cal I}\cap N} r$.
If $q_1=q_2$ or $q_2=q_3$ then the conclusion is totally trivial. So assume not. Thus \[\begin{array}{ll} q_1\le p_2&\quad\mbox{(by clause }(\delta)\mbox{, via the case defined in }(\gamma))\\ q_2\le p_3&\quad\mbox{(by clause }(\delta)). \end{array}\] Hence $p_2\le p_3$ (see clause $(\gamma)$), so ``a previous case'' applies. This finishes the proof of clause (a). \noindent{\bf Clause (b)}:\qquad Totally trivial. \noindent{\bf Clause (c)}:\qquad Let $q\in{\Bbb Q}'$; if $q\in{\Bbb Q}$ then there is nothing to do; otherwise for some ${\Bbb Q}$--candidate $N$ we have $q=p\ \&\ \bigwedge\limits_{{\cal I}\in{\rm pd}(N,{\Bbb Q})}\,\bigvee\limits_{r\in {\cal I}\cap N} r$ and use nep (i.e.\ clause (c) of \ref{0.2}(1)) on the ${\Bbb Q}$--candidate $N$. \noindent 2) Assume $(\boxtimes_2)$. \noindent{\bf Clause (d)}:\qquad Proved inside the proof of clause (e). \noindent{\bf Clause (e)}:\qquad More pedantically we have to define \[\varphi^{{\Bbb Q}'}_0,\varphi^{{\Bbb Q}'}_1,\varphi^{{\Bbb Q}'}_2,{\frak B}^{{\Bbb Q}'}, \theta^{{\Bbb Q}'}\] and then prove the required demands for ${\Bbb Q}'$--candidates. We let ${\frak B}^{{\Bbb Q}'}={\frak B}^{{\Bbb Q}}$, $\theta^{{\Bbb Q}'}=\theta^{{\Bbb Q}}$, the formulas will be different, but with the same parameters. So the ${\Bbb Q}'$--candidates are the ${\Bbb Q}$--candidates. What is $\varphi^{{\Bbb Q}'}_0$? It is \[\begin{array}{r} \varphi^{{\Bbb Q}}_0(x) \vee\mbox{`` }x \mbox{ has the form }\quad p \ \&\ \bigwedge\limits_{{\cal I}\in{\rm pd}_{{\Bbb Q}}(M)}\,\bigvee\limits_{r\in{\cal I}\cap M} r,\qquad \mbox{where}\\ M \mbox{ is a }{\Bbb Q} \mbox{-candidate (so countable) and }\varphi^{{\Bbb Q}}_0(p) \mbox{ ''}. \end{array}\] Clearly $\varphi^{{\Bbb Q}'}_0$ defines ${\Bbb Q}'$ through ${\Bbb Q}'$--candidates. Note that if $N$ is a ${\Bbb Q}'$--candidate and $N\models$``$M$ is a countable ${\Bbb Q}$--candidate'', then we have $M\subseteq N$, and if $M\models$``$x$ is countable'', then $x\subseteq M\subseteq N$; so $M$ is really a ${\Bbb Q}$--candidate. Consequently, $\varphi^{{\Bbb Q}'}_0$ is upward absolute for ${\Bbb Q}'$--candidates and it defines ${\Bbb Q}'$. So clause (a) of Definition \ref{0.2}(1) holds. Now we pay our debt proving clause (d). Let $N$ be a ${\Bbb Q}'$--candidate and $N \models$``$p\in{\Bbb Q}'$'', i.e.\ $N \models\varphi^{{\Bbb Q}'}_0(p)$. By the definition of ${\Bbb Q}'$, either $N\models$``$p\in{\Bbb Q}$'' and we are done, or for some $p',M\in N$ we have \[N\models\mbox{``}M\mbox{ is a }{\Bbb Q}' \mbox{--candidate, $p'\in{\Bbb Q}^M$, and } p= \bigl(p'\ \&\ \bigwedge\limits_{{\cal I}\in{\rm pd}(M,{\Bbb Q})}\,\bigvee\limits_{r \in{\cal I}\cap M} r\bigr)\mbox{''}.\] By the assumption $(\boxtimes_2)$, for some $q\in{\Bbb Q}^N$ we have $N \models$`` $q$ is explicitly $\langle M,{\Bbb Q}\rangle$--generic'' and $N\models$``$p'\le^{{\Bbb Q}} q$''. Then for some $\langle\langle r_{{\cal I},\ell}:\ell<\omega\rangle: {\cal I}\in {\rm pd}(M,{\Bbb Q})\rangle\in N$ we have:\quad $N\models$``$\{r_{{\cal I},\ell}:\ell<\omega\}$ enumerates ${\cal I}\cap M$'' and $N\models$``$\varphi^{{\Bbb Q}}_2(r_{{\cal I},0},r_{{\cal I},1},\ldots,q)$''. Now it follows from the definition of ${\Bbb Q}'$ that $N\models$``$p\le^{{\Bbb Q}'}q$'', so $q$ is as required. What is $\varphi^{{\Bbb Q}'}_1$? Just write the definition of $p\le^{{\Bbb Q}'}q$ from clause (b) of \ref{4.1}. Clearly also $\varphi^{{\Bbb Q}'}_1$ is upward absolute for ${\Bbb Q}'$--candidates and it defines the partial order of ${\Bbb Q}'$ (even in ${\Bbb Q}'$--candidates).
So clause (b) of Definition \ref{0.2}(1) holds. What is $\varphi^{{\Bbb Q}'}_2$? Let it be: \begin{enumerate} \item[\ ]$\varphi^{{\Bbb Q}'}_2(p_0,p_1,\ldots,p_\omega)\stackrel{\rm def}{=}$\\ ``there are $M,p,q$ such that:\quad $M$ is a ${\Bbb Q}'$--candidate and $p\in{\Bbb Q}^M$ and $q = \bigl(p\ \&\ \bigwedge\limits_{{\cal I} \in{\rm pd}(M,{\Bbb Q})}\, \bigvee\limits_{r\in {\cal I}\cap M} r\bigr)$ and $q\le^{{\Bbb Q}'} p_\omega$ and for some ${\cal J}\in{\rm pd}(M,{\Bbb Q})$, if $r\in {\cal J}\cap M$ then there is $\ell$ such that $p_\ell\le^{{\Bbb Q}'} r$''. \end{enumerate} To show that $\varphi^{{\Bbb Q}'}_2$ is upward absolute for ${\Bbb Q}'$--candidates suppose that $N$ is a ${\Bbb Q}'$--candidate and $N\models\varphi^{{\Bbb Q}'}_2(p_0, p_1,\ldots,p_\omega)$ and let $M,p,q$ witness it. Then, in $N$, $M$ is a ${\Bbb Q}'$--candidate, so $p\in{\Bbb Q}$, $q\in{\Bbb Q}'$ and for some ${\cal J}\in{\rm pd}(M, {\Bbb Q})$ we have: \begin{quotation} if $r\in{\cal J}\cap M$, then there is $\ell$ such that $p_\ell\le^{{\Bbb Q}'}r$. \end{quotation} By the known upward absoluteness all those statements hold in ${\bf V}$, too. Assume now that $\varphi^{{\Bbb Q}'}_2(p_0,p_1,\ldots,p_\omega)$ holds as witnessed by $M,p,q$ and ${\cal J}\in {\rm pd}(M,{\Bbb Q})$. Suppose $q'\geq p_\omega$ and we may assume that $q'\in{\Bbb Q}$ (by (1)(c)). Then $q\leq q'$ and (by clause $(\gamma)$ of the definition of $\leq^{{\Bbb Q}'}$) we have $q''\in{\Bbb Q}$ such that $q''\leq q'$ and $\varphi^{{\Bbb Q}}_2(r_0,r_1,\ldots,q'')$ for some list $\{r_n:n<\omega\}$ of ${\cal J}\cap M$. Thus ${\cal J}\cap M$ is predense (in ${\Bbb Q}$) above $q''$ and we find $r\in {\cal J}\cap M$ such that $r,q'$ are compatible. But now, there is $\ell<\omega$ such that $p_\ell\leq^{{\Bbb Q}'} r$, so necessarily $p_\ell,q'$ are compatible (in ${\Bbb Q}'$). This shows \ref{0.2}(2)(b)$^+$. Let us turn to clause (c)$^+$ of Definition \ref{0.2}(2). So suppose that $N$ is a ${\Bbb Q}'$--candidate and $p\in{\Bbb Q}'\cap N$. By clause (d), there is $p'$ such that $N\models$``$p\le^{{\Bbb Q}'}p' \ \&\ p'\in{\Bbb Q}$''. Let $q=p'\ \&\ \bigwedge\limits_{{\cal I} \in{\rm pd}(N,{\Bbb Q})}\,\bigvee\limits_{r\in {\cal I}\cap N} r$; clearly $q \in{\Bbb Q}'$ and ${\Bbb Q}'\models$``$p'\le q$''. Hence, by \ref{4.2}(1)(a), we know ${\Bbb Q}'\models$``$p \le q$''. For ${\cal J}\in{\rm pd}(N, {\Bbb Q}')$ let \[{\cal J}'=\{q\in{\Bbb Q}^N:N\models\mbox{``} q \mbox{ is above some member of ${\cal J}$ in }{\Bbb Q}'\mbox{ ''}\}.\] Note that if ${\cal J}\in{\rm pd}(N,{\Bbb Q}')$ then ${\cal J}'\in{\rm pd}(N,{\Bbb Q})$, and so ${\cal J}'\cap N$ is predense above $q$. Moreover, $(\forall r\in {\cal J} \cap N)(\exists r'\in {\cal J}'\cap N)(r\le^{{\Bbb Q}'} r')$. So let ${\cal J}\in{\rm pd}(N,{\Bbb Q}')$ and let $\langle p_n:n<\omega\rangle$ be an enumeration of ${\cal J}\cap N$. It should be clear that $\varphi^{{\Bbb Q}'}_2(p_0,p_1,\ldots, q)$ holds as witnessed by $N,p',q$ and ${\cal J}'$. \noindent 3) Compared to (e) of \ref{4.2}(2) we also have to prove (e)$^+$, i.e.\ strengthen clause (c)$^+$ of Definition \ref{0.2}(1) by $(*)$ of Definition \ref{0.5A}(2). Let $N^+$ be a generic extension of a ${\Bbb Q}'$--candidate $N$ by the forcing notion ${\rm Levy}(\aleph_0,|{\cal P}({\Bbb Q})|^N)$. Clearly for every $p \in {\Bbb Q}^N$, the condition \[p\ \&\ \bigwedge\limits_{{\cal I}\in {\rm pd}(N,{\Bbb Q})}\,\bigvee\limits_{r\in {\cal I}\cap N} r\] belongs to $N^+$. So by the proof of clause (c)$^+$ of Definition \ref{0.2}(1) in the proof of (e) above, we are done.
\noindent 4) Similar proof. $\square_{\ref{4.2}}$ \begin{discussion} If we prefer not to use \ref{4.2}, we may try the following Definition \ref{4.3}. Note that there: ${\rm cl}_1({\Bbb Q})$ cannot serve as a forcing notion as it contains ``false'', ${\rm cl}_2({\Bbb Q})$ is the reasonable restriction, and ${\rm cl}_3({\Bbb Q})$ has the same elements but a more ``explicit'' quasi order. We do not define a quasi order on ${\rm cl}_1({\Bbb Q})$, but it is natural to use the one of ${\rm cl}_2({\Bbb Q})$ adding:\quad $\psi\leq\varphi$ if $\varphi\in{\rm cl}_1({\Bbb Q}) \setminus{\rm cl}_2({\Bbb Q})$. There is no harm in also allowing $\neg$ (the negation) in the definition of ${\rm cl}_1({\Bbb Q})$. The previous ${\rm cl}({\Bbb Q})$ is close to ${\rm cl}_3({\Bbb Q})$. \end{discussion} \begin{definition} \label{4.3} Let ${\Bbb Q}$ be a forcing notion. \begin{enumerate} \item Let ${\rm cl}_1({\Bbb Q})$ be the closure of the set ${\Bbb Q}$ by conjunctions and disjunctions over sequences of members of length $\le \omega$ [we may add: and $\neg$ (the negation)]; wlog there are no incidental identifications and ${\Bbb Q} \subseteq{\rm cl}_1({\Bbb Q})$. \item For a generic $G\subseteq{\Bbb Q}$ over ${\bf V}$ and $\psi\in{\rm cl}_1({\Bbb Q})$ let $\psi[G]$ be the truth value of $\psi$ under $G$ where for $\psi=p\in{\Bbb Q}$, $\psi[G]$ is the truth value of $p\in G$. (We will use ${\frak t}$ for ``truth''.) \item $\hat{{\Bbb Q}}={\rm cl}_2({\Bbb Q})=\{\psi\in{\rm cl}_1({\Bbb Q}):\mbox{ for some } p\in{\Bbb Q} \mbox{ we have } p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }$``$\psi[\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}]={\frak t}$''$\}$, ordered by: \[\psi_1 \le^{\hat{{\Bbb Q}}}\psi_2\ \Leftrightarrow\ (\forall p \in {\Bbb Q}) [p \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}\mbox{``}\psi_2[\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}]={\frak t}\mbox{''}\ \ \Rightarrow\ \ p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}\mbox{``}\psi_1[\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}]={\frak t}\mbox{''}].\] \item Let ${\Bbb Q}$ be explicitly nep. We let ${\rm cl}_3({\Bbb Q})$ be the following forcing notion: \begin{enumerate} \item[(a)] the set of elements is ${\rm cl}_2({\Bbb Q})$, \item[(b)] the order $\leq^{\hat{{\Bbb Q}}}_3=\leq^{\Bbb Q}_3=\leq^{{\rm cl}_3({\Bbb Q})}_3=\leq_{{\rm cl}_3({\Bbb Q})}$ is the transitive closure of $\le^{\hat{{\Bbb Q}}}_0$ which is defined by $\psi_1\le^{\hat{{\Bbb Q}}}_0\psi_2$\quad{\em iff}\quad one of the following occurs \begin{enumerate} \item[(i)] $\psi_1,\psi_2\in{\Bbb Q}$ and $\psi_1\le^{{\Bbb Q}} \psi_2$, \item[(ii)] $\psi_1$ is a conjunct of $\psi_2$ (meaning: $\psi_1=\psi_2$ or $\psi_2=\bigwedge\limits_{n<\alpha}\psi_{2,n}$, and $\psi_1\in\{\psi_{2,n}: n<\alpha\}$), \item[(iii)] $\psi_2\in{\Bbb Q}$ and there is a ${\Bbb Q}$-candidate $M$ such that $p, \psi_1\in M$, $p\in {\Bbb Q}^M$, $p\le^{{\Bbb Q}}\psi_2$, $\psi_2$ is explicitly $\langle M,{\Bbb Q}\rangle$--generic and $M\models$``$p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }\psi_1[\mathunderaccent\tilde-3 {G}_{{\Bbb Q}} ]={\frak t}$'' and if $q\in{\Bbb Q}$ is a conjunct of $\psi_1$ then $M \models$``$q\le^{{\Bbb Q}} p$''. \end{enumerate} \end{enumerate} \end{enumerate} \end{definition} \begin{proposition} \label{4.4} \begin{enumerate} \item ${\Bbb Q}\subseteq\hat{{\Bbb Q}}$, $\leq^{\hat{{\Bbb Q}}}$ is a quasi order, and $\le^{\hat{{\Bbb Q}}}{\,|\grave{}\,}{\Bbb Q} =\{(p,q):q\mathrel {{\vrule height 6.9pt depth -0.1pt}\!
\vdash }_{{\Bbb Q}}$``$p\in\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}$''$\}$, so if ${\Bbb Q}$ is separative then $\le^{\hat{{\Bbb Q}}}{\,|\grave{}\,} {\Bbb Q}=\le^{{\Bbb Q}}$; and ${\Bbb Q}$ is a dense subset of $\hat{{\Bbb Q}}$. \item Assume ${\Bbb Q}$ is temporarily explicitly nep. Then: \begin{enumerate} \item[(a)] ${\Bbb Q}\subseteq{\rm cl}_3({\Bbb Q})$ and $\le^{{\Bbb Q}}_3{\,|\grave{}\,}{\Bbb Q}\supseteq \le^{{\Bbb Q}}$ and $\leq^{{\Bbb Q}}_3\subseteq\leq^{\hat{{\Bbb Q}}}$, \item[(b)] ${\Bbb Q}$ is a dense subset of ${\rm cl}_3({\Bbb Q})$. \end{enumerate} \item Assume in addition \begin{enumerate} \item[$(\circledast_3)$] ${\Bbb Q}$ is correctly explicitly nep in ${\bf V}$ and in every ${\Bbb Q}$--candidate. \end{enumerate} Then \begin{enumerate} \item[(d)] if $N$ is a ${\Bbb Q}$--candidate and $N\models$``$p\in{\rm cl}_3({\Bbb Q})$'' {\em then} for some $q\in N$ we have $N\models$``$p\leq_{{\rm cl}_3({\Bbb Q})} q\ \&\ q\in{\Bbb Q}$'', \item[(e)] ${\rm cl}_3({\Bbb Q})$ is explicitly nep and correct. \end{enumerate} \item Assume in addition \begin{enumerate} \item[$(\circledast_4)$] for any ${\Bbb Q}$--candidate $N$, if $N'$ is a generic extension of $N$ for the forcing notion ${\rm Levy}(\aleph_0,|{\cal P}({\Bbb Q})|^N)$, {\em then} $N'$ is a ${\Bbb Q}$--candidate. \end{enumerate} {\em Then} we can add \begin{enumerate} \item[(e)$^+$] ${\rm cl}_3({\Bbb Q})$ is explicitly local nep (see Definition \ref{0.5A}). \end{enumerate} \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Straight, e.g. \noindent {\bf (2) Clause (b):}\quad Assume $\psi\in{\rm cl}_3({\Bbb Q})$, so $\psi\in {\rm cl}_2({\Bbb Q})$ and for some $p\in{\Bbb Q}$ we have $p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\psi[\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}]={\frak t}$''. There is a ${\Bbb Q}$--candidate $M$ to which $p$ and $\psi$ belong (as ${\rm ZFC}^-_*$ is $\emptyset$--good). Let $q$ be explicitly $(M,{\Bbb Q})$--generic, and ${\Bbb Q}\models p\leq q$. So, by clause (iii) of \ref{4.3}(4)(b), we have ${\rm cl}_3({\Bbb Q})\models$``$\psi\leq q$'', as required. \noindent {\bf (3) Clause (e):}\quad Let $\varphi^3_0(x)$ say that there is a ${\Bbb Q}$--candidate $M$ such that $M\models$``$x\in{\rm cl}_3({\Bbb Q})$''. Let $\varphi^3_1(x,y)$ say the definition of $\le^{{\Bbb Q}}_3$. Lastly, $\varphi^3_2 (\langle x_i:i \le\omega\rangle)$ says that for some $\langle y_i:i \le\omega \rangle$ we have \[\varphi^{{\Bbb Q}}_2(\langle y_i:i \le\omega\rangle),\quad y_\omega\le^{{\Bbb Q}}_3 x_\omega\quad\mbox{(i.e.\ $\varphi^3_1(y_\omega,x_\omega)$) and}\quad \bigwedge_{i<\omega}\,\bigvee_{j<\omega}x_j \le^{{\Bbb Q}}_3 y_i.\] \QED \begin{remark} \label{4.5} Instead of using ${\rm cl}({\Bbb Q})$ from \ref{4.1} above we can have in $\bar{\varphi}$ a function which from an $\omega$--list of the elements of $N$ and from $p$ computes an element of ${\Bbb Q}$ having the role of $p\ \&\ \bigwedge\limits_{{\cal I}\in{\rm pd}(N,{\Bbb Q})}\,\bigvee\limits_{r\in{\cal I}\cap N} r$. The choice does not seem to matter. \end{remark} \begin{definition} \label{4.6} For a forcing notion ${\Bbb P}$ and a cardinal (or ordinal) $\kappa$, we define what is an ${\rm hc}$-$\kappa$-${\Bbb P}$--name (here ${\rm hc}$ stands for hereditarily countable), and for this we define by induction on $\zeta<\omega_1$ what is such a name of depth $\le \zeta$. \noindent{$\zeta = 0$}:\qquad It is $\alpha$, that is $\check{\alpha}$, for some $\alpha < \kappa$.
\noindent{$\zeta > 0$}:\qquad It has the form $\mathunderaccent\tilde-3 {\tau}=\{\langle p_i, \mathunderaccent\tilde-3 {\tau}_i\rangle:i<i^*\}$, where $i^* < \omega_1$, $p_i\in{\rm cl}_1({\Bbb P})$ from Definition \ref{4.3}(1) and $\mathunderaccent\tilde-3 {\tau}_i$ an ${\rm hc}$-$\kappa$-${\Bbb P}$--name of some depth $<\zeta$; that is for $G\subseteq {\Bbb P}$ generic over ${\bf V}$, we let $\mathunderaccent\tilde-3 {\tau}[G]=\{\mathunderaccent\tilde-3 {\tau}_i[G]:p_i[G]={\frak t}\}$. \noindent An ${\rm hc}$-$\kappa$-${\Bbb P}$--name is an ${\rm hc}$-$\kappa$-${\Bbb P}$--name of some depth $< \omega_1$. An ${\rm hc}$-$\kappa$-${\Bbb P}$--name $\mathunderaccent\tilde-3 {\tau}$ has depth $\zeta$ if it has depth $\le \zeta$, but not $\le \xi$ for $\xi < \zeta$. \end{definition} \noindent{\bf Remark:}\quad Why did we use $p\in{\rm cl}_1({\Bbb Q})$ and not $p\in {\rm cl}_3({\Bbb Q})$? As the membership in ${\rm cl}_1({\Bbb Q})$ is easier to define. \begin{proposition} \label{4.7} \begin{enumerate} \item If $\mathunderaccent\tilde-3 \tau$ is an $hc$-$\kappa$-${\Bbb P}$--name and $G\subseteq{\Bbb P}$ is generic over ${\bf V}$ {\em then} $\mathunderaccent\tilde-3 {\tau}[G]\in {\cal H}_{< \aleph_1}( \kappa)$. If in addition ${\Bbb P}\subseteq {\cal H}_{< \aleph_1}(\kappa)$ then $\mathunderaccent\tilde-3 {\tau}\in {\cal H}_{<\aleph_1}(\kappa)$. \item Let $\varphi(x_0,\ldots,x_{n-1})$ be a first order formula and $\mathunderaccent\tilde-3 {\tau}_0,\ldots,\mathunderaccent\tilde-3 {\tau}_{n-1}$ be ${\rm hc}$-$\kappa$-${\Bbb P}$--names. Then there is $p\in{\rm cl}_1({\Bbb P})$ such that for every $G\subseteq{\Bbb P}$ generic over ${\bf V}$: \[\bigl(\bigcup_{\ell < n} {\rm Tc}^{\rm ord}(\mathunderaccent\tilde-3 {\tau}_\ell[G]),\in\bigr) \models\varphi(\mathunderaccent\tilde-3 {\tau}_0[G],\ldots,\mathunderaccent\tilde-3 {\tau}_{n-1}[G])\quad\mbox{\rm iff }\quad p[G] ={\frak t}.\] \item The set of ${\rm hc}$-$\kappa$-${\Bbb P}$--names is closed under the following operations: \begin{enumerate} \item[(a)] difference, \item[(b)] union and intersection of two, finitely many and even countably many, \item[(c)] definition by cases:\quad for $p_n\in{\rm cl}_1({\Bbb P})$ and ${\rm hc}$-$\kappa$-${\Bbb P}$--names $\mathunderaccent\tilde-3 {\tau}_n$ (for $n<\omega$) there is a ${\rm hc}$-$\kappa$-${\Bbb P}$--name $\mathunderaccent\tilde-3 {\tau}$ such that for a generic $G\subseteq {\Bbb Q}$ over ${\bf V}$ we have \[\mathunderaccent\tilde-3 {\tau}[G]\mbox{ is}:\quad\begin{array}{ll} \mathunderaccent\tilde-3 {\tau_n}[G] &\mbox{if } p_n[G]={\frak t}\ \&\ \bigwedge\limits_{\ell<n} \neg p_\ell[G]={\frak t}\\ \emptyset &\mbox{if }\bigwedge\limits_{\ell<\omega}\neg p_\ell[G]={\frak t}. 
\end{array}\] \end{enumerate} \end{enumerate} \end{proposition} \begin{definition} \label{4.8} \begin{enumerate} \item A forcing notion ${\Bbb Q}$ (or $\bar{\varphi}$) is temporarily, explicitly straight $(\kappa,\theta)$--nep for ${\frak B}$ if: the old conditions from Definition \ref{0.2}(1),(2) (for explicitly $(\kappa,\theta)$--nep) hold, but possibly ${\frak B} \subseteq {\cal H}_{< \aleph_1}(\kappa)$; and \begin{enumerate} \item[(d)] ${\Bbb Q} \subseteq {\cal H}_{<\aleph_1}(\theta)$ (i.e.\ ${\Bbb Q}$ is simple) and $\aleph_1+\theta\le \kappa$, \item[(e)] for $\ell<3$ the formula $\varphi^{{\Bbb Q}}_\ell(\bar{x})$ is of the form \[(\exists t)[t\in {\cal H}_{<\aleph_1}(\kappa)\ \&\ t={\rm Tc}^{\rm ord}(t)\ \&\ t\cap\omega_1\mbox{ is an ordinal}\ \&\ \psi^{{\Bbb Q}}_\ell(\bar{x},t)],\] where in the formula $\psi^{{\Bbb Q}}_\ell$ the quantifiers are of the form $(\exists s \in t)$ and the atomic formulas are ``$x \in y$'', ``$x$ is an ordinal'' and those of ${\frak B}^{{\Bbb Q}}$. \end{enumerate} \item In clause (e) of part (1), we call such $t$ an explicit witness for $\varphi^{{\Bbb Q}}_\ell(\bar{x})$. We call $t$ a weak witness if for every ${\Bbb Q}$--candidate $N$ with $\bar{x}\in N$ and $t\in N$ we have $N\models\varphi^{{\Bbb Q}}_\ell(\bar{x})$. We call it a witness if: \begin{enumerate} \item[(i)] $\ell=0$ and it is an explicit witness, or \item[(ii)] $\ell=1$ (so $\bar{x}=\langle x_0,x_1\rangle$) and $t$ gives $k$, $y_0,\ldots,y_k$, $t_0,\ldots,t_{k-1}$, $s_0,\ldots,s_k$ such that:\quad $s_\ell$ explicitly witnesses $\varphi_0(y_\ell)$, $t_\ell$ explicitly witnesses $y_\ell\le^{{\Bbb Q}}y_{\ell+1}$ and $y_0=x_0$, $y_k=x_1$ (so $y_\ell\in t$, $s_\ell \in t$, $x_\ell \in t$), \item[(iii)] $\ell=2$ (so $\bar{x}=\langle x_i:i\le\omega\rangle$) and $t$ gives $\langle y_i:i \le\omega\rangle$, $\langle k_i:i<\omega\rangle$, $\langle s_i:i\le\omega+1\rangle$ such that $s_\omega$ is a witness to $y_\omega \le x_\omega$, $s_{\omega +1}$ is an explicit witness to $\varphi^{{\Bbb Q}}_2(\langle y_i:i \le\omega\rangle)$, $s_i$ is a witness to $x_i \le^{{\Bbb Q}} y_{k_i}$ (so also they all belong to $t$, as well as witnesses to $x_i,y_j \in {\Bbb Q}$). \end{enumerate} \end{enumerate} \end{definition} \begin{proposition} \label{4.8A} \begin{enumerate} \item Assume ${\Bbb Q}$ is temporarily explicitly straight $(\kappa,\theta)$--nep for ${\frak B}$. {\em Then} ${\Bbb Q}$ is temporarily simply explicitly $(\kappa,\theta)$--nep for ${\frak B}$. Sufficient conditions for ``$K$--absolutely'' are as in \S2. \item Assume ${\Bbb Q}$ is temporarily correctly simply explicitly $(\kappa, \theta)$--nep for ${\frak B}$ and $\theta+\aleph_1\leq\kappa$. {\em Then} ${\Bbb Q}$ is temporarily straight explicitly $(\kappa,\theta)$--nep for ${\frak B}$ and is correct. \end{enumerate} \noindent {\em [Nevertheless, ``simple'' and ``straight'' are distinct as properties of $({\frak B},\bar{\varphi},\theta)$, i.e.\ the point is changing $\bar{\varphi}$.]} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Straight. \QED \begin{defthe} \label{4.9} By induction on $\alpha$ we define and prove the following situations: \begin{enumerate} \item[(A)] {\em [Definition]}\quad $\bar{{\Bbb Q}}=\langle({\Bbb P}_i,{\mathunderaccent\tilde-3 {\Bbb Q}}_i,\bar{\varphi}_i,\mathunderaccent\tilde-3 {{\frak B}}_i,\kappa_i,\theta_i):i<\alpha\rangle$ is a nep--CS--iteration. \item[(B)] {\em [Definition]}\quad $\kappa^{\bar{{\Bbb Q}}}=\kappa[\bar{{\Bbb Q}}]$, in short $\kappa^\alpha$ abusing notation.
\item[(C)] {\em [Definition]}\quad We define ${\frak B}^\alpha={\frak B}^{\bar{{\Bbb Q}}}$. \item[(D)] {\em [Definition]}\quad ${\rm Lim}(\bar{{\Bbb Q}})={\Bbb P}_\alpha$ and ${\Bbb P}_{\alpha,w}$ for any set $w$ of ordinals $<\alpha$ for $\bar{{\Bbb Q}}$ as above. \item[(E)] {\em [Claim]}\quad If $\bar{{\Bbb Q}}$ is a nep--CS--iteration, and $\alpha=\ell g\/(\bar{{\Bbb Q}})$, {\em then} $\bar{{\Bbb Q}}{\,|\grave{}\,}\beta$ is a nep--CS--iteration (for $\beta<\alpha$), ${\rm Lim}(\bar{{\Bbb Q}}{\,|\grave{}\,}\beta)={\Bbb P}_\beta$ and ${\Bbb P}_\beta\subseteq {\cal H}_{< \aleph_1}(\kappa_\beta)$. \item[(F)] {\em [Claim]}\quad For $\bar{{\Bbb Q}}$ as in (A), a ${\frak B}^\alpha$--candidate $N$, $\gamma\le\beta\le\alpha$ and $p,q\in{\Bbb P}_\beta$: \begin{enumerate} \item[(a)] $p$ is a function with domain a countable subset of $\beta$ (pedantically see clause (D)), \item[(b)] ${\Bbb P}_\beta$ is a forcing notion (i.e.\ a quasi order) satisfying (d) + (e) of \ref{4.8} and (a), (b), (b)$^+$ of \ref{0.2}(1),(2), \item[(c)] $p{\,|\grave{}\,}\gamma\in{\Bbb P}_\gamma$ and ${\Bbb P}_\beta\models$``$p{\,|\grave{}\,}\gamma \le p$'', \item[(d)] ${\Bbb P}_\gamma\models$``$p{\,|\grave{}\,}\gamma\le q$'' implies ${\Bbb P}_\beta \models$``$p\le (q\cup p {\,|\grave{}\,} [\gamma,\beta))$'', \item[(e)] ${\Bbb P}_\gamma\subseteq{\Bbb P}_\beta$ and even ${\Bbb P}_\gamma\lesdot{\Bbb P}_\beta$, \item[(f)] $p\in{\Bbb P}_\beta$\quad iff\quad $p$ is a function with domain $\in [\beta]^{\le \aleph_0}$ and \[\zeta\in{\rm Dom}(p)\quad \Rightarrow\quad p{\,|\grave{}\,}\zeta\in{\Bbb P}_{\zeta+1}.\] \end{enumerate} \item[(G)] {\em [Definition]}\quad For a ${\frak B}^\alpha$--candidate $N$ and $w,\beta,\gamma$ such that $N\models$``$w\subseteq\alpha$'', $\gamma<\beta \le \alpha$ and $\beta,\gamma\in (w \cup \{\alpha\})\cap N$, and $q\in{\Bbb P}_\beta$, $p\in N$ such that $N\models$``$p \in{\Bbb P}_\beta$'' and $q{\,|\grave{}\,}\gamma$ is $(N, {\Bbb P}_\gamma)$--generic we define when $q$ is $[\gamma,\beta)$--canonically $(N,{\Bbb P}_\beta,w)$--generic above $p$. \item[(H)] {\em [Theorem]}\quad If $q\in {\Bbb P}_\beta$ is $[\gamma,\beta)$--canonically $(N,{\Bbb P}_\beta,w)$--generic above $p$, {\em then} $q$ is $(N, {\Bbb P}_\beta)$--generic and $p \le q$. \item[(I)] {\em [Theorem]}\quad ${\Bbb P}_\alpha$ is explicitly straight correct $\kappa^\alpha$--nep for $\bar{\varphi},{\frak B}^\alpha$. \item[(J)] {\em [Theorem]}\quad For any $\kappa\ge\kappa^\alpha$, \[\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_\alpha}\mbox{`` }({\cal H}_{<\aleph_1}(\kappa))^{{\bf V}[{\Bbb P}_\alpha ]}=\{\mathunderaccent\tilde-3 {\tau}[\mathunderaccent\tilde-3 {G}_{{\Bbb P}_\alpha}]:\mathunderaccent\tilde-3 {\tau}\mbox{ is an ${\rm hc}$--$\kappa$--${\Bbb P}_\alpha$--name }\}\mbox{''}.\] \end{enumerate} \end{defthe} Let us carry out the clauses one by one.
\noindent{\sc Clause (A)}, Definition:\qquad $\bar{{\Bbb Q}}=\langle({\Bbb P}_i, {\mathunderaccent\tilde-3 {\Bbb Q}}_i,\bar{\varphi}_i,\mathunderaccent\tilde-3 {{\frak B}}_i,\kappa_i,\theta_i):i<\alpha\rangle$ is a nep--CS--iteration if: \begin{enumerate} \item[$(\alpha)$] $\beta<\alpha\quad\Rightarrow\quad\bar{{\Bbb Q}}{\,|\grave{}\,}\beta$ is a nep--CS--iteration, \item[$(\beta)$] if $\alpha=\beta+1$ then \begin{enumerate} \item[(i)] ${\Bbb P}_\beta={\rm Lim}(\bar{{\Bbb Q}}{\,|\grave{}\,}\beta)$ (use clause (D)) \item[(ii)] $\bar{\varphi}_\beta=\langle\varphi_{\beta,\ell}:\ell<3\rangle$ is formally as in the definition of nep (the substantial demand is in (v) below, but the parameter ${\frak B}_\beta$ is a name!) \item[(iii)] $\kappa_\beta,\theta_\beta$ are infinite cardinals (or ordinals) \item[(iv)] $\mathunderaccent\tilde-3 {{\frak B}}_\beta$ is a ${\Bbb P}_\beta$--name of a model with universe $\kappa_\beta$ or even ${\cal H}_{< \aleph_1}(\kappa_\beta)$, whose vocabulary is a fixed countable one $\tau_0\subseteq {\cal H}(\aleph_0)$, but for each atomic formula $\psi(x_0,\ldots,x_{n-1})$ and $\alpha_0,\ldots, \alpha_{n-1}<\kappa_\beta$ the name of the truth value of $\mathunderaccent\tilde-3 {{\frak B}}_\beta \models$``$\psi(\alpha_0,\ldots,\alpha_{n-1})$'' is an ${\rm hc}$--$\kappa$-${\Bbb P}_\beta$--name (i.e.\ is defined by one $p=p^\beta_{\psi^*}\in{\rm cl}_1({\Bbb P}_\beta)$) \item[(v)] $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_\beta}$``${\mathunderaccent\tilde-3 {\Bbb Q}}_\beta$ defined by $\bar{\varphi}_\beta$ is temporarily straight explicitly $(\kappa_\beta, \theta_\beta)$--nep as witnessed by $\mathunderaccent\tilde-3 {{\frak B}}_\beta$ and ${\rm ZFC}^-_*$ is good (hence is correct, see \ref{4.8A}(1))''. \end{enumerate} \end{enumerate} \noindent{\sc Clause (B)}, Definition:\qquad We define $\kappa^\alpha=\sup[\{\kappa_i:i< \alpha\}\cup\{\alpha\}]$ (of course, if the result is an ordinal we can replace it by its cardinality, coding it assuming the $\kappa_i$'s are cardinals; remember that $\kappa_i \ge \theta_i$). \noindent{\sc Clause (C)}, Definition:\qquad We define ${\frak B}^\alpha={\frak B}^{\bar{{\Bbb Q}}}$, a model with universe $\subseteq {\cal H}_{< \aleph_1}(\kappa^\alpha)$ (or just $\kappa^\alpha$) and the usual vocabulary such that \begin{enumerate} \item[$(*)$] ${\frak B}^\alpha$ codes (by its relations) $\alpha,\{(\beta, \bar{\varphi}_\beta,\kappa_\beta,\theta_\beta):\beta<\alpha\}$ and $\langle \mathunderaccent\tilde-3 {{\frak B}}_\beta:\beta<\alpha\rangle$; i.e.\ for every atomic formula in the vocabulary $\tau_0$ (so of $\mathunderaccent\tilde-3 {{\frak B}}_\beta$), $\psi=\psi(x_0, \ldots,x_{n-1})$, for some function symbol $F_\psi$ we have:\quad if $\alpha_\ell<\kappa_\beta$ for $\ell<n$ then $F_{\psi(\bar x)}(\beta; \alpha_0,\ldots,\alpha_{n-1})$ is $p^\beta_{\psi(\alpha_0,\ldots,\alpha_{n-1})}$ (see clause (A)(iv)) and \begin{quotation} if the ${\frak B}$'s are on $\kappa$, we have also $F_{\psi,\ell}$, functions of ${\frak B}^\alpha$ such that:\\ if $\alpha_\ell<\kappa_\beta$ for $\ell < n$ then $\{F_\psi(\beta,\ell, \alpha_0,\ldots,\alpha_{n-1}):\ell<\omega\}$ lists the ordinals in ${\rm Tc}^{\rm ord}(p^\beta_{\psi(\alpha_0,\ldots,\alpha_{n-1})})$ (the condition in ${\Bbb P}_\beta$ saying$\ldots$) and $F'_\psi$ codes how $p$ was gotten from them (so we need $\kappa^\alpha\ge\omega_1$).
\end{quotation} \end{enumerate} So in any case \begin{enumerate} \item[$(**)$] if $N$ is a ${\frak B}^\alpha$--candidate, and $\beta\in\alpha \cap N$ then $N$ is a ${\frak B}^\beta$--candidate. \end{enumerate} \noindent{\sc Clause (D)}, Definition: \noindent{\bf Case 1}:\quad If $\alpha = 0$ then ${\Bbb P}_\alpha=\{\emptyset\}$. \noindent{\bf Case 2}:\quad If $\alpha=\beta+1$ then \[\begin{array}{ll} {\Bbb P}_\alpha=\bigl\{p:&p\mbox{ is a function, }{\rm Dom}(p)\subseteq\alpha,\ p{\,|\grave{}\,} \beta\in{\Bbb P}_\beta\mbox{ and if }\beta\in{\rm Dom}(p)\\ \ &\mbox{then for some } r=r_{p,\beta}\in {\rm cl}_1({\Bbb P}_\beta)\mbox{ determined by $p$ we have:}\\ \ &p(\beta)\mbox{ is defined by cases:}\\ \ &\mbox{\bf if } r[\mathunderaccent\tilde-3 {G}_{{\Bbb P}_\beta}]={\frak t},\mbox{ it is in }{\rm cl}({\mathunderaccent\tilde-3 {\Bbb Q}}_\beta),\mbox{ and an explicit witness }\\ \ &\mbox{is provided (say $p[\beta]$ codes it and having } r[\mathunderaccent\tilde-3 {G}_{{\Bbb P}_\beta}]={\frak t}\mbox{ says so)},\\ \ &\mbox{{\bf if} not }, p(\beta)\mbox{ is }\emptyset=\emptyset_{{\mathunderaccent\tilde-3 {\Bbb Q}}_\beta}= \min({\mathunderaccent\tilde-3 {\Bbb Q}}_\beta)\bigr\}. \end{array}\] Pedantically, $p \in {\Bbb P}_\alpha$ if and only if $p$ has the form $p'\cup\{ \langle\beta,\ell,x_\ell\rangle:\ell<3\}$ where $p'\in{\Bbb P}_\beta$, $x_0\in{\rm cl}_1 ({\Bbb P}_\beta)$, $x_1,x_2$ are ${\rm hc}$--$\kappa$--${\Bbb P}_\beta$--names of members of ${\cal H}_{<\aleph_1}(\theta_\beta)$ and $x_0$ is the truth value of ``$x_2[ G_\beta]$ is a witness to $x_1[G_\beta]\in\theta_\beta$''. \noindent{\bf Case 3}:\quad If $\alpha$ is limit, then \[{\Bbb P}_\alpha=\{p:p\mbox{ is a function, }{\rm Dom}(p)\in [\alpha]^{\le\aleph_0} \mbox{ and }\beta<\alpha\quad\Rightarrow\quad p{\,|\grave{}\,}\beta\in{\Bbb P}_\beta\}.\] \noindent{\em The order}:\\ For $\alpha = 0$ nothing to do.\\ For $\alpha$ limit: $p\le q$ if and only if $\bigwedge\limits_{\beta<\alpha} {\Bbb P}_\beta\models$``$p{\,|\grave{}\,}\beta\le q{\,|\grave{}\,}\beta$'' (equivalently: $\bigwedge\limits_{\beta\in{\rm Dom}(p)}{\Bbb P}_{\beta+1}\models$``$p{\,|\grave{}\,}(\beta + 1)\le q{\,|\grave{}\,}(\beta+1)$''), (see (C)).\\ For $\alpha=\beta + 1$:\quad the order is the transitive closure of the following cases: \begin{enumerate} \item[$(\alpha)$] $p\in{\Bbb P}_\beta$, $q\in{\Bbb P}_\alpha$, ${\Bbb P}_\beta\models$``$p \le q {\,|\grave{}\,} \beta$'', \item[$(\beta)$] $p(\beta)=q(\beta)$ and ${\Bbb P}_\beta\models$``$p{\,|\grave{}\,}\beta \le q {\,|\grave{}\,} \beta$'', \item[$(\gamma)$] $p{\,|\grave{}\,}\beta=q{\,|\grave{}\,}\beta$ and there is a ${\frak B}^\alpha$--candidate $N$ such that $q{\,|\grave{}\,}\beta$ is $[0,\beta)$--canonically $(N,{\Bbb P}_\beta)$--generic above $p'{\,|\grave{}\,}\beta$, ${\Bbb P}_\beta\models$``$p'{\,|\grave{}\,} \beta\le q{\,|\grave{}\,}\beta$'', $p'\in{\Bbb P}^N_\alpha$ and \[N\models\mbox{`` }p'{\,|\grave{}\,}\beta\mathrel {{\vrule height 6.9pt depth -0.1pt}\!
\vdash } N[\mathunderaccent\tilde-3 {G}_\beta]\models[{\rm cl}( {\mathunderaccent\tilde-3 {\Bbb Q}}_\beta)\models p(\beta)\le p'(\beta)\mbox{ and }p'(\beta)\in{\mathunderaccent\tilde-3 {\Bbb Q}}_\beta] \mbox{ ''}\] and $q(\beta)$ is canonically generic for $({\Bbb Q}_\beta,N[\mathunderaccent\tilde-3 {G}_\beta])$ above $p$, i.e.\ is \[p'(\beta) \ \&\ \bigwedge_{{\scriptstyle {\cal I}\in{\rm pd}(N,{\Bbb P}_\alpha)} \atop {\scriptstyle (\forall r \in {\cal I})r(\beta)\in{\mathunderaccent\tilde-3 {\Bbb Q}}_\beta}}\bigvee \{r(\beta):N\models\mbox{`` }r \in {\cal I}\mbox{ '' and } r{\,|\grave{}\,}\beta\in \mathunderaccent\tilde-3 {G}_\beta\}\] if $q{\,|\grave{}\,}\beta\in\mathunderaccent\tilde-3 {G}_{{\Bbb P}_\beta}$ and $p'(\beta)$ if $q{\,|\grave{}\,}\beta\notin \mathunderaccent\tilde-3 {G}_{{\Bbb P}_\beta}$. \end{enumerate} Lastly, ${\Bbb P}_{\alpha,w}=\{p\in{\Bbb P}_\alpha:{\rm Dom}(p)\subseteq w\}$ for $w\subseteq \alpha$ (note that if $w \subseteq\beta\le\alpha$ we get the same forcing notion). \noindent{\sc Clause (E)}, Claim:\qquad Trivial. \noindent{\sc Clause (F)}, Claim:\qquad Subclauses (a) and (c)--(f) are trivial. \noindent{\bf Subclause (b)}:\quad Here we should be careful as we do not ask just that the order is forced but there is a ${\rm hc}$ witness; as we ask for a witness and not explicit witness (see Definition \ref{4.8}) this is okay. See more in the proof of clause (I). \noindent{\sc Clause (G)}, Definition: \noindent{\bf Case 1}:\quad For $\beta<\alpha$ note that $N$ is also a ${\frak B}^\beta$--candidate and use the definition for $\bar{{\Bbb Q}}{\,|\grave{}\,}\beta$. \noindent{\bf Case 2}:\quad If $\gamma=\beta=\alpha$ -- trivial. \noindent{\bf Case 3}:\quad For $\beta=\alpha$, $\alpha = 0$ -- trivial. \noindent{\bf Case 4}:\quad For $\gamma<\beta=\alpha$ and $\beta=\beta'+1$, $\beta'\notin w$ -- trivial. \noindent{\bf Case 5}:\quad Suppose $\gamma<\beta=\alpha$, $\alpha=\beta'+ 1$, $\beta' \in w$.\\ Then:\quad $q {\,|\grave{}\,}\beta'$ is $[\gamma,\beta')$--canonically $(N,{\Bbb P}_{\beta'}, w)$--generic and for some $\mathunderaccent\tilde-3 {\tau}$, \[\begin{array}{ll} N\models&\mbox{`` }\mathunderaccent\tilde-3 {\tau}\mbox{ is a ${\rm hc}$--$\kappa_{\beta'}$--${\Bbb P}_{\beta' \cap w}$--name of a member of }{\mathunderaccent\tilde-3 {\Bbb Q}}_{\beta'}\\ \ &\mbox{which is above $p(\beta')$ (which is in ${\rm cl}({\mathunderaccent\tilde-3 {\Bbb Q}}_{\beta'})$!) and is in }N\langle \mathunderaccent\tilde-3 {G}_{{\Bbb P}_{\beta' \cap w}}\rangle\mbox{ ''} \end{array}\] and \[q(\beta')=\mathunderaccent\tilde-3 {\tau} \ \&\ \bigwedge_{{\scriptstyle {\cal I} \in{\rm pd}(N, {\Bbb P}_{\alpha,w})}\atop {\scriptstyle (\forall r \in {\cal I})r(\beta')\in {\mathunderaccent\tilde-3 {\Bbb Q}}_{\beta'}}}\bigvee\{r(\beta'):N\models\mbox{`` }r\in {\cal I}\mbox{ '' and }r {\,|\grave{}\,}\beta'\in \mathunderaccent\tilde-3 {G}_{{\Bbb P}_{\beta'}}\}.\] \noindent{\bf Case 6}:\quad $\gamma<\beta=\alpha$, $\beta$ a limit.\\ Say that diagonalization was used. \noindent{\sc Clause (H)}, Theorem:\qquad Prove by induction. \noindent{\sc Clause (I)}, Theorem:\qquad We have defined ${\frak B}^\alpha$ and $\kappa^\alpha$ (so $\theta^\alpha=\kappa^\alpha$). The formulas $\varphi^{{\Bbb P}_\alpha}_\ell$ ($\ell<3$) are implicitly defined (in the induction). Why is $\varphi^{{\Bbb P}_\alpha}_0$ absolute enough?
As the demand on $p(\beta)$ above says that $r_{p{\,|\grave{}\,} (\beta+1),\beta}$, the witness for $p(\beta)\in {\rm cl}({\mathunderaccent\tilde-3 {\Bbb Q}}_\beta)$, is such that $r[\mathunderaccent\tilde-3 {G}_{{\Bbb P}_\beta}]={\frak t}$ gives all the required information. Why is $\varphi^{{\Bbb P}_\alpha}_1$ absolute enough? Because the canonical genericity is about $\varphi_2$, and the properness requirement (see clause (G)) fits. Now one proves by induction on $\beta \le \alpha$: \begin{enumerate} \item[$(\otimes)$] {\em if}\ $N$ is a ${\frak B}^\alpha$--candidate, $w\in N$, $N\models$``$w \subseteq \alpha$'', $\gamma_0\le\gamma_1\le\beta$, $\{\gamma_0,\gamma_1,\beta\}\subseteq (\alpha+1)\cap N\cap w$, $p\in {\Bbb P}^N_\beta$, $q\in{\Bbb P}_{\gamma_1}$, $p{\,|\grave{}\,}\gamma_1\le q$, $q$ is $[\gamma_0, \gamma_1)$--canonically $(N,{\Bbb P}_{\gamma_1},w)$--generic \noindent {\em then}\ we can find $q^+$ such that: \begin{enumerate} \item[$(\alpha)$] $q^+\in{\Bbb P}_\beta$, $q^+{\,|\grave{}\,}\gamma_1=q$, \item[$(\beta)$] $p\le q^+$, \item[$(\gamma)$] $q^+$ is $[\gamma_1,\beta)$--canonically $(N,{\Bbb P}_\beta,w)$--generic. \end{enumerate} \end{enumerate} \noindent{\sc Clause (J)}, Theorem:\qquad Straight. \QED$_{\ref{4.9}}$ \begin{proposition} \label{4.10} The iteration in \ref{4.9} is equivalent to the CS iteration. More formally, assume \[\bar{{\Bbb Q}}=\langle({\Bbb P}_i,{\mathunderaccent\tilde-3 {\Bbb Q}}_i,\bar{\varphi}_i,\mathunderaccent\tilde-3 {{\frak B}}_i,\kappa_i, \theta_i):i<\alpha\rangle\mbox{ is a CS--nep iteration}.\] We can define $\bar{{\Bbb Q}}'=\langle{\Bbb P}'_i,{\mathunderaccent\tilde-3 {\Bbb Q}}'_i:i<\alpha\rangle$ and $\langle F_i:i<\alpha\rangle$ such that \begin{enumerate} \item[(a)] $\bar{{\Bbb Q}}'$ is a CS iteration, \item[(b)] $F_i$ is a mapping from ${\Bbb P}_i$ into ${\Bbb P}'_i$, \item[(c)] $j<i\quad\Rightarrow\quad F_j = F_i{\,|\grave{}\,}{\Bbb P}_j$, \item[(d)] $F_i$ is an embedding of ${\Bbb P}_i$ into ${\Bbb P}'_i$ with dense range, \item[(e)] ${\mathunderaccent\tilde-3 {\Bbb Q}}_i$ is mapped by $F_i$ to ${\mathunderaccent\tilde-3 {\Bbb Q}}'_i$. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Straight. \QED \begin{proposition} \label{4.11} In the context of \ref{4.10}: \begin{enumerate} \item Assume that each $\mathunderaccent\tilde-3 {{\frak B}}_\beta$ is essentially a real; i.e.\ $\kappa_\beta=\omega$ and if $R$ is in the vocabulary of $\mathunderaccent\tilde-3 {{\frak B}}_\beta$ then $R^{\mathunderaccent\tilde-3 {{\frak B}}_\beta}\subseteq {}^{n(R)}\omega$. If $\alpha<\omega_1$ then so is ${\frak B}^\alpha$. (If $\alpha\ge\omega_1$ we get weaker results). \item Assume that $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_\beta}$`` the universe of ${\frak B}_\beta$ is $\kappa_\beta$ ''. Then we can make ``${\frak B}_\beta$ has universe $\kappa^\beta$'', coding the $p^\beta_\psi$'s. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Left to the reader. \QED \begin{remark} \label{4.12} 1)\quad Note that \ref{4.9}, \ref{4.10} (and \ref{4.11}(4)) say something even for $\alpha = 1$ so it speaks on ${\rm cl}({\Bbb Q}_0)={\Bbb P}_1$ (or ${\rm cl}_3({\Bbb Q}_0)={\Bbb P}_1)$. \noindent 2)\quad Concerning \ref{4.11} note that if $\kappa({\frak B})\geq \omega_1$, the difference between nep and snep is not large, so the case $\alpha<\omega_1$ has special interest.
\noindent 3)\quad In \ref{4.9}, \ref{4.10}, we can replace the use of ${\Bbb Q}'= {\rm cl}({\Bbb Q})$ from \ref{4.1} by ${\rm cl}_3({\Bbb Q})$ from Definition \ref{4.3} (using \ref{4.4}). \noindent 4)\quad We can derive a theorem on local in \ref{4.10}, but for strong enough ${\rm ZFC}^-_*$, then any follows. \end{remark} Of course, we can get forcing axioms. \begin{proposition} \label{4.14} \begin{enumerate} \item Assume for simplicity that ${\bf V}\models 2^{\aleph_0}=\aleph_1\ \&\ 2^{\aleph_1}=\aleph_2$. Then for some proper $\aleph_2$--c.c. forcing notion ${\Bbb P}$ of cardinality $\aleph_2$ we have in ${\bf V}^{{\Bbb P}}$: \begin{enumerate} \item[$(\oplus)$] ${\rm Ax}_{\omega_1}[(\aleph_1,\aleph_1)\mbox{--nep}]$: \qquad {\em if}\ ${\Bbb Q}$ is a $(\kappa,\theta)$--nep forcing notion, $\kappa,\theta\le\aleph_1$ and ${\cal I}_i$ is a dense subset of ${\Bbb Q}$ for $i<\omega_1$ and $\mathunderaccent\tilde-3 {S}_i$ as a ${\Bbb Q}$--name of stationary subset of $\omega_1$ for $i<i(*) \le\omega_1$,\\ {\em then}\ for some directed $G\subseteq{\Bbb Q}$ we have:\quad $i<\omega_1 \Rightarrow G\cap {\cal I}_i \ne\emptyset$ and \[\mathunderaccent\tilde-3 {S}_i[G]\stackrel{\rm def}{=}\{\zeta<\omega_1:\mbox{for some } q\in G \mbox{ we have } q \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}\mbox{`` }\zeta\in\mathunderaccent\tilde-3 {S}_i\mbox{ ''}\}\] is a stationary subset of $\omega_1$. \end{enumerate} \item We can demand that ${\Bbb P}$ is explicitly $(\aleph_2,\aleph_2)$--nep provided that in $(\oplus)$ we add ``explicitly simply'' to the requirements on ${\Bbb Q}$. \item In parts 1) and 2), we can strengthen $(\oplus)$ to ${\rm AX}_{\omega_1} [\mbox{nep}]$. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Straight (as failure of ``${\Bbb Q}$, i.e.\ $\bar{\varphi}$, is nep'' is preserved when extending the universe). \QED \begin{proposition} \label{4.15} We can generalize the definitions and claims so far by: \begin{enumerate} \item[(a)] a forcing notion ${\Bbb Q}$ is $({\Bbb Q},\le,\le_{{\rm pr}},\emptyset_{{\Bbb Q}})$, where $\le_{{\rm pr}}$ is a quasi order, $p \le_{{\rm pr}} q \Rightarrow p \le q$ and $\emptyset_{{\Bbb Q}}$ the minimal element; \item[(b)] in the definition of nep in addition to $\varphi_1$ we have $\varphi_{1,{\rm pr}}$ defining $\le_{{\rm pr}}$, which is upward absolute from ${\Bbb Q}$--candidates, and in Definition \ref{0.2}(2)(c) we strengthen $p \le q$ to $p\le_{{\rm pr}} q$; \item[(c)] the definition of CS iteration $\langle{\Bbb P}_i,{\mathunderaccent\tilde-3 {\Bbb Q}}_i:i<\alpha \rangle$ is modified in one of the following ways: \begin{enumerate} \item[$(\alpha)$] ${\Bbb P}_i=\bigl\{p:p$ is a function, ${\rm Dom}(p)$ is a countable subset of $i$, $j\in{\rm Dom}(p) \Rightarrow \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_j}$``$p(j)\in{\mathunderaccent\tilde-3 {\Bbb Q}}_j$'' and the set $\{j\in{\rm Dom}(p):\ \not\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_j}$``$\emptyset_{{\mathunderaccent\tilde-3 {\Bbb Q}}_j}\le_{{\rm pr}} p(j)$''$\}$ is finite $\bigr\}$, with the order $p \le q$ if and only if $j\in{\rm Dom}(p)\Rightarrow q{\,|\grave{}\,} j\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash } p(j) \le^{{\mathunderaccent\tilde-3 {\Bbb Q}}_j} q(j)$ and the set $\{j\in{\rm Dom}(p):q{\,|\grave{}\,} j\not\mathrel {{\vrule height 6.9pt depth -0.1pt}\! 
\vdash }_{{\Bbb P}_j}$``$p(j)\le^{{\mathunderaccent\tilde-3 {\Bbb Q}}_j}_{{\rm pr}} q(j)$''$\}$ is finite; \noindent (if each $\le^{{\mathunderaccent\tilde-3 {\Bbb Q}}_j}_{{\rm pr}}$ is equality, this is FS iteration) \item[$(\beta)$] ${\Bbb P}_i=\bigl\{p:p$ is a function, ${\rm Dom}(p)$ is a countable subset of $i$, $j\in{\rm Dom}(p)\ \Rightarrow\ \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_j}$``$p(j)\in{\mathunderaccent\tilde-3 {\Bbb Q}}_j$'' $\bigr\}$, with the order $p \le q$ if and only if $j\in{\rm Dom}(p)\Rightarrow q{\,|\grave{}\,} j\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_j}$``$p (j)\le^{{\mathunderaccent\tilde-3 {\Bbb Q}}_j}q(j)$'' and $\{j\in{\rm Dom}(p):q{\,|\grave{}\,} j\not\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_j}$``$p(j) \le^{{\mathunderaccent\tilde-3 {\Bbb Q}}_j}_{{\rm pr}} q(j)$''$\}$ is finite; \end{enumerate} \item[(d)] similarly for the CS-nep iteration. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Left to the reader. \QED \stepcounter{section} \subsection*{\quad 6. When a real is $({\Bbb Q},\eta)$--generic over ${\bf V}$} \begin{definition} \label{5.1} \begin{enumerate} \item We say that $({\Bbb Q},\bar{W})$ is a temporary $({\frak B},\theta,\sigma, \tau)$--pair if for some ${\Bbb Q}$--name $\mathunderaccent\tilde-3 {\eta}$ the following conditions are satisfied: \begin{enumerate} \item[(a)] ${\Bbb Q}$ is a nep-forcing notion for $({\frak B},\bar{\varphi}, \theta)$; possibly ${\frak B}$ expands ${\frak B}^{{\Bbb Q}}$, \item[(b)] $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\in{}^\sigma\tau$'', \item[(c)] $\bar{W}=\langle W_n:n<\sigma\rangle$, \item[(d)] for each $n < \sigma$, $W_n\subseteq\{(p,\alpha):p \in{\Bbb Q},\alpha <\tau\}$, \item[(e)] if $(p_\ell,\alpha_\ell)\in W_n$ for $\ell =1,2$ and $\alpha_1, \alpha_2$ are not equal, then $p_1,p_2$ are incompatible in ${\Bbb Q}$, \item[(f)] for each $n<\sigma$ the set ${\cal I}_n = {\cal I}_n[\bar W] \stackrel{\rm def}{=}\{p:(\exists \alpha)[(p,\alpha)\in W_n]\}$ is a predense subset of ${\Bbb Q}$, \item[(g)] so $\tau=\tau[\bar{W}]=\tau[{\Bbb Q},\bar{W}]$ and (abusing notation) let $\sigma= \sigma[\bar{W}]=\sigma[{\Bbb Q},\bar{W}]$. \end{enumerate} \item For $({\Bbb Q},\bar{W})$ as above, $\mathunderaccent\tilde-3 {\eta}=\mathunderaccent\tilde-3 {\eta}[\bar{W}]= \mathunderaccent\tilde-3 {\eta}[{\Bbb Q},\bar{W}]$ is the ${\Bbb Q}$--name \[\{(p,(n,\alpha)):n<\sigma\mbox{ and }(p,\alpha)\in W_n\},\] so that for generic $G\subseteq{\Bbb Q}$ we have $\mathunderaccent\tilde-3 {\eta}[G](n)=\alpha$ iff $(\exists p\in G)((p,\alpha)\in W_n)$. \item We replace ``temporary'' by ``$K$'' if this (specifically the demand (f)) holds in any $K$--extension. \item We may write $({\Bbb Q},\mathunderaccent\tilde-3 {\eta}),\bar{W}=\bar{W}^{\mathunderaccent\tilde-3 {\eta}}$ abusing notation. If we omit ${\frak B}$ we mean ${\frak B} = {\frak B}^{{\Bbb Q}}$. If $\tau=\aleph_0$ we may omit it; if $\tau=\sigma=\aleph_0$ we may omit them; if $\theta=\sigma=\tau=\aleph_0$, we may write $\kappa$. \item We say that $\mathunderaccent\tilde-3 {\eta}[{\Bbb Q},\bar{W}]$ is a temporarily generic real (or function) for ${\Bbb Q}$ if for no distinct $G_1,G_2 \subseteq {\Bbb Q}$ generic over ${\bf V}$ do we have $\mathunderaccent\tilde-3 {\eta}[G_1]=\mathunderaccent\tilde-3 {\eta}[G_2]$. \item Instead of $({\Bbb Q},\bar{W})$ we may write $(({\frak B}^{{\Bbb Q}}, \bar{\varphi}^{{\Bbb Q}},\theta^{{\Bbb Q}}),\bar{W})$ (or with $\mathunderaccent\tilde-3 {\eta}$ instead of $\bar{W}$).
\end{enumerate} \end{definition} \begin{definition} \label{5.2A} \begin{enumerate} \item Let ${\cal K}_{\kappa,\theta,\sigma,\tau}$ be the class of all $({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})$ which are temporary $({\frak B},\theta,\sigma,\tau)$--pairs for some ${\frak B}$ with $\kappa({\frak B})\le\kappa,\|{\frak B}\|\le\kappa$. \item Let $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ be a temporary $(\kappa,\theta)$--pair (actually more accurately write $({\frak B},\bar{\varphi},\theta)$, $\bar{W}$); so $\sigma=\tau=\aleph_0$. Let $N$ be a ${\Bbb Q}$--candidate and $\eta\in{}^{\textstyle \omega}\omega$. We say that $\eta$ is a $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic real over $N$\ {\em if}\ for some $G\subseteq {\Bbb Q}^N$ which is generic over $N$ we have $\eta=\mathunderaccent\tilde-3 {\eta}[G]$. \item We say that $\mathunderaccent\tilde-3 {\eta}=\mathunderaccent\tilde-3 {\eta}[\bar{W}]$ is hereditarily countable if each $W_n$ is countable (note: the generic reals of the forcing notions from \cite{RoSh:470} are like that, but for our purpose just ``absolute enough'' suffices). \end{enumerate} \end{definition} \begin{definition} \label{5.2B} \begin{enumerate} \item $({\Bbb Q},\bar{W})$ is a temporary explicitly $({\frak B},\theta,\sigma, \tau)$--pair (or nep pair) if for some ${\Bbb Q}$--name $\mathunderaccent\tilde-3 {\eta}$ we have: \begin{enumerate} \item[(a)] ${\Bbb Q}$ is an explicit nep forcing notion for $({\frak B}, \bar{\varphi},\theta)$, \item[(b)] $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\in {}^\sigma \tau$'', \item[(c)] $\bar{W}=\langle\psi_{\alpha,\zeta}:\alpha<\sigma,\zeta<\tau \rangle$, \item[(d)] $\psi_{\alpha,\zeta}\in{\rm cl}_1({\Bbb Q})$ for $\alpha<\sigma$, $\zeta< \tau$, \item[(e)] $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$`` $\mathunderaccent\tilde-3 {\eta}(\alpha)=\zeta$ iff $\psi_{\alpha, \zeta}[\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}]={\frak t}$ ''. \end{enumerate} \item In this case $\mathunderaccent\tilde-3 {\eta}=\mathunderaccent\tilde-3 {\eta}[\bar{W}]=\mathunderaccent\tilde-3 {\eta}[{\Bbb Q}, \bar{W}]$ is the ${\Bbb Q}$--name above (it is unique). Abusing notation we may write $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ instead $({\Bbb Q},\bar{W})$ and then let $\bar{W}= \bar{W}[\mathunderaccent\tilde-3 {\eta}]=\bar{W}[{\Bbb Q},\mathunderaccent\tilde-3 {\eta}]$. \item We introduce the notions from \ref{5.1}(3)--(6) for the current case with almost no changes. \end{enumerate} \end{definition} \begin{definition} \label{5.2C} ${\cal K}^{{\rm ex}}_{\kappa,\theta,\sigma,\tau}=\{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})\in K_{\kappa, \theta,\sigma,\tau}:({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ is temporarily explicitly $({\frak B},\theta,\sigma,\tau)$--pair for some model ${\frak B}$ with $\kappa({\frak B}) \le \kappa,\|{\frak B}\| \le \kappa\}$. \end{definition} \begin{proposition} \label{5.3A} Assume that: \begin{enumerate} \item[(a)] ${\Bbb Q}$ is an explicitly nep forcing notion which satisfies the c.c.c. \item[(b)] $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\in {}^\sigma \omega$'' and (for $\alpha<\sigma$ and $m<\omega$) $\psi_{\alpha,m}\in{\rm cl}_1({\Bbb Q})$ are such that \[\mathrel {{\vrule height 6.9pt depth -0.1pt}\! 
\vdash }_{{\Bbb Q}}\mbox{`` }\mathunderaccent\tilde-3 {\eta}(\alpha)=m\ \mbox{ iff }\ \psi_{\alpha,m} [\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}]={\frak t}\mbox{ ''}\] \item[(c)] ${\Bbb Q}'\stackrel{\rm def}{=}B_2({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ is the following suborder of ${\rm cl}_2({\Bbb Q})$: \[\begin{array}{ll} \{p\in{\rm cl}_2({\Bbb Q}):&p\mbox{ is generated by the }\psi_{\alpha,m}\mbox{ i.e.\ it belongs to the closure of }\\ \ &\{\psi_{\alpha,m}:\alpha<\sigma,m<\omega\}\mbox{ under $\neg, \bigwedge\limits_{i<\gamma}$ for $\gamma<\omega_1$ in }{\rm cl}_2({\Bbb Q})\} \end{array}\] (i.e.\ it is the quasi order $\le^{{\Bbb Q}}_2$ restricted to this set). \end{enumerate} {\em Then}: \begin{enumerate} \item ${\Bbb Q}'\lesdot{\rm cl}_2({\Bbb Q})$ and $\mathunderaccent\tilde-3 {\eta}\in {}^\sigma \omega$ is a generic function for ${\Bbb Q}'$. \item Assume additionally that \begin{enumerate} \item[$(*)$] if $M$ is a ${\Bbb Q}$--candidate, $M\models$``${\cal I}$ is a maximal antichain of ${\Bbb Q}$'',\\ then ${\cal I}^M$ is a maximal antichain of ${\Bbb Q}$. \end{enumerate} Then we also have \begin{enumerate} \item[$(\alpha)$] ${\Bbb Q}'$ is $(\kappa,\theta)$-nep c.c.c. forcing notion, \item[$(\beta)$] if ${\Bbb Q}$ is simple, then ${\Bbb Q}'$ is simple, \item[$(\gamma)$] if ${\Bbb Q}$ is $K$--local, then ${\Bbb Q}'$ is $K$--local. \end{enumerate} \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Straight. \QED Now the hypothesis $(*)$ in \ref{5.3A}(2) is undesirable, so we use $B_3({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})$ (see \ref{5.3B}(c) below), which has a suitable quasi order. \begin{proposition} \label{5.3B} Assume that: \begin{enumerate} \item[(a)] ${\Bbb Q}$ is explicitly nep forcing notion which satisfies the c.c.c. \item[(b)] $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\in {}^\sigma \omega$'' and $\psi_{\alpha,m}\in{\rm cl}_2({\Bbb Q})$ are such that \[\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}\mbox{`` }\mathunderaccent\tilde-3 {\eta}(\alpha)=m\ \mbox{ iff }\ \psi_{\alpha,m} [\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}]={\frak t}\mbox{ ''},\] \item[(c)] ${\Bbb Q}'\stackrel{\rm def}=B_3({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ is a forcing notion defined as follows:\\ the set of elements is like $B_2({\Bbb Q},\eta)$; i.e.\ it is the closure of $\{\psi_{\alpha,m}:\alpha<\sigma,m<\omega\}$ under $\neg,\bigwedge\limits_{i < \gamma}$ for $\gamma<\omega_1$ inside ${\rm cl}_2({\Bbb Q})$;\\ the quasi order $\le_3=\le^{B_3({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}_3$ is $\leq^{{\rm cl}_3({\Bbb Q})}$ restricted to $B_3({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$, \item[(d)] the statements $(\circledast_3)$ and $(\circledast_4)$ of \ref{4.4} hold. \end{enumerate} Then: \begin{enumerate} \item[$(\alpha)$] ${\Bbb Q}'$ is essentially a suborder of ${\rm cl}_2({\Bbb Q})$; i.e.\ $\psi \in{\Bbb Q}'\ \Rightarrow\ \psi\in{\rm cl}_2({\Bbb Q})$, and for $\psi_1,\psi_2\in {\Bbb Q}'$ we have:\quad $\psi_1\leq_3 \psi_2\ \Leftrightarrow\ \psi_1\leq^{{\rm cl}_3( {\Bbb Q})}\psi_2$, \item[$(\beta)$] $\mathunderaccent\tilde-3 {\eta}$ is a ${\Bbb Q}'$--name, $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! 
\vdash }_{{\Bbb Q}'}$``$\mathunderaccent\tilde-3 {\eta}\in {}^\sigma\omega$'' and $\mathunderaccent\tilde-3 {\eta}$ is a generic function for ${\Bbb Q}'$, \item[$(\gamma)$] ${\Bbb Q}'$ is an explicitly nep c.c.c.\ forcing notion with ${\frak B}^{{\Bbb Q}'}={\frak B}^{{\Bbb Q}}$, $\bar{\varphi}^{{\Bbb Q}'}=\bar{\varphi}^{B_3({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})}$, $\theta^{{\Bbb Q}'}=\theta^{{\Bbb Q}}$, \item[$(\gamma)^+$] each forcing extension of ${\bf V}$ which preserves the assumption (a) (hence also (b)) preserves $(\gamma)$, \item[$(\delta)$] if ${\Bbb Q}$ is simple (or straight) then ${\Bbb Q}'$ is simple (or straight), \item[$(\varepsilon)$] if ${\Bbb Q}$ is $K$--local, then ${\Bbb Q}'$ is $K$--local. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Straight. \QED \begin{proposition} \label{5.3C} In \ref{5.1}--\ref{5.3B} above, we can replace ${\Bbb Q}$ by ${\Bbb Q}{\,|\grave{}\,}\{p\in{\Bbb Q}:p \ge q\}$ preserving the properties of $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$. \QED \end{proposition} \begin{fact} \label{5.5A} If ${\Bbb Q}$ is simply correctly nep for $K$, ${\Bbb Q}$ is in ${\bf V}$, and ${\bf V}_1$ is a $K$--extension of ${\bf V}$ {\em then} \begin{enumerate} \item[(i)] in ${\bf V}_1$, ${\Bbb Q}^{{\bf V}}\le_{ic}{\Bbb Q}^{{\bf V}_1}$ (see \cite[Ch.IV]{Sh:f}), i.e.\ for $p,q \in {\bf V}$, ``$p\in{\Bbb Q}$'', ``$p \le q$'', ``$\neg(p\le q)$'', ``$p,q$ compatible'', ``$p,q$ incompatible'' are preserved from ${\bf V}$ to ${\bf V}_1$, \item[(ii)] for $p,p_n\in{\bf V}$ the statements ``$p\notin{\Bbb Q}$'' and ``${\cal I}= \{p_n:n<\omega\}$ is predense above $p$ in ${\Bbb Q}$'' are preserved from ${\bf V}$ to ${\bf V}_1$, \item[(iii)] if ${\Bbb Q}$ satisfies the c.c.c.\ then in clause (ii) above we can omit the countability of ${\cal I}$. \end{enumerate} \end{fact} \noindent{\sc Proof} \hspace{0.2in} Straight, for example: ``$p,q$ are incompatible'' iff there is no ${\Bbb Q}$--candidate $M$ such that \[M\models\mbox{`` }p,q\mbox{ have a common $\leq_{{\Bbb Q}}$--upper bound ''}.\] So by Shoenfield--Levy absoluteness, if this holds in ${\bf V}$, it holds in ${\bf V}_1$. \noindent (ii) Similarly. \noindent (iii) Follows (and is repeated in \ref{6.8}). \QED \begin{proposition} \label{5.12} Let $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ be a temporarily explicitly nep pair. Assume $N$ is a ${\Bbb Q}$--candidate. If $N\models$``$\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over a ${\Bbb Q}$--candidate $M$'', {\em then} $\eta^*$ is a $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic real over $M$. \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Straight. \QED \begin{proposition} \label{5.13} Assume that: \begin{enumerate} \item[(a)] ${\Bbb Q}$ is explicitly nep, \item[(b)] ${\Bbb Q}$ is c.c.c.; moreover, it satisfies the c.c.c.\ in every ${\Bbb Q}$--candidate, \item[(c)] incompatibility in ${\Bbb Q}$ is upward absolute from ${\Bbb Q}$--candidates (but see \ref{5.5A}), \item[(d)] $\mathunderaccent\tilde-3 {\eta}$ is a ${\rm hc}$--$\kappa({\frak B}^{{\Bbb Q}})$--${\Bbb Q}$--name of a member of ${}^{\textstyle \omega}\omega$ defined from ${\frak B}^{{\Bbb Q}}$ (so we demand this in every ${\Bbb Q}$--candidate).
\end{enumerate} Furthermore, suppose that \begin{enumerate} \item[(A)] $N_1,N_2$ are ${\Bbb Q}$--candidates, $N_2$ is a generic extension of $N_1$ for a forcing notion ${\Bbb R}$, (so ${\frak B}^{N_2}={\frak B}^{N_1}$ and $N_1\models$``${\Bbb R}$ is a forcing notion''), \item[(B)] $N_1\models$`` for every countable $X \subseteq{\Bbb Q}$ and $n<\omega$ there is a ${\Bbb Q}$-candidate $N_0 {\rm pr}ec_{\Sigma_n} N_1$ to which $X$ and ${\Bbb R}$ belong'', \item[(C)] $\eta^*\in{}^{\textstyle \omega}\omega$ is a $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic real over $N_2$. \end{enumerate} {\em Then}\ $\eta^*\in{}^{\textstyle \omega}\omega$ is a $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic real over $N_1$. \end{proposition} \begin{remark} \label{5.13A} \begin{enumerate} \item In (B), we can replace $X$ by ``a maximal antichain of ${\Bbb Q}, \mathunderaccent\tilde-3 {\eta}$''. \item Clearly we can replace ``maximal antichain'' by ``predense set'' or ``predense set over $p$'' (note ${\cal I}^{N_2}={\cal I}^{N_1}$ as $N_2 = N^{{\Bbb R}}_1$). \item We can weaken ``$N_0 {\rm pr}ec_{\Sigma_n} N_1$'' in clause (B). \end{enumerate} \end{remark} {\noindent{\sc Proof of \ref{5.13}} \hspace{0.2in}} Clearly it suffices to prove that (assuming (a)-(d),(A),(B) and (C)): \begin{enumerate} \item[$(*)$] {\em if}\quad $N_1\models$``${\cal I}$ is a maximal antichain of ${\Bbb Q}$'',\\ {\em then}\quad ${\cal I}^{N_1}={\cal I}^{N_2}$ and $N_2\models$``${\cal I}$ is a maximal antichain of ${\Bbb Q}$''. \end{enumerate} Assume that this fails for ${\cal I}$. Then some $r\in{\Bbb R}$ forces this failure (in $N_1$). By assumption (b), in $N_1$ the set ${\cal I}^{N_1}$ is countable so let $N_1 \models$``${\cal I}=\{p_n:n<\alpha\}$'', where $\alpha\le\omega$. Let $n<\omega$ be large enough. By clause (B) in $N_1$ there is a ${\Bbb Q}$--candidate $N_0$ to which ${\cal I}$ and $r$ and ${\Bbb R}$ belong and $N_0{\rm pr}ec_{\Sigma_n}N_1$. Since \[\begin{array}{r} N_1\models{``}({\rm ex}ists r\in{\Bbb R})[r\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb R}}\mbox{``}{\cal I}\mbox{ is not a maximal antichain of ${\Bbb Q}$}\quad\\ \mbox{(and $N_1[\mathunderaccent\tilde-3 {G}_{{\Bbb R}}]$ is a ${\Bbb Q}$--candidate)'']''}, \end{array}\] there is $r_0\in{\Bbb R}\cap N_0$ such that \[\begin{array}{r} N_0\models\mbox{``}[r_0\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb R}}\mbox{``${\cal I}$ is not a maximal antichain of ${\Bbb Q}$}\quad\\ \mbox{(and $N_0[\mathunderaccent\tilde-3 {G}_{{\Bbb R}}]$ is a ${\Bbb Q}$--candidate)]''}. \end{array}\] Now, as $N_1$ satisfies enough set theory and $N_1$ ``thinks'' that $N_0$ is countable and ${\Bbb R}^{N_0}$ is a forcing notion in $N_0$, there is in $N_1$ a subset $G'_{{\Bbb R}}$ of ${\Bbb R}\cap N_0={\Bbb R}^{N_0}$ generic over $N_0$ to which $r_0$ belongs. So in $N_0[G'_{{\Bbb R}}]$ there is $p\in{\Bbb Q}^{N_0[G'_{{\Bbb R}}]}$ incompatible (in ${\Bbb Q}^{N_0[G'_{{\Bbb R}}]}$) with each $p_n$. By the assumption (c) this holds in $N_1$, contradiction to the choice of ${\cal I}$ (see $(*)$). 
\QED$_{\ref{5.13}}$ \begin{definition} \label{5.14} \begin{enumerate} \item We say that $\bar{\varphi}$ or $(\bar{\varphi},{\frak B})$ is a temporary $(\kappa,\theta)$-definition of a strong c.c.c.--nep forcing notion ${\Bbb Q}$ if: \begin{enumerate} \item[(a)] $\varphi_0$ defines the set of elements of ${\Bbb Q}$ and $\varphi_0$ is upward absolute from $({\frak B},\bar{\varphi},\theta)$--candidates, \item[(b)] $\varphi_1$ defines the partial ordering of ${\Bbb Q}$ (even in $({\frak B},\bar{\varphi},\theta)$--candidates) and $\varphi_1$ is upward absolute from $({\frak B},\bar{\varphi},\theta)$--candidates, \item[(c)] for any $({\frak B},\bar{\varphi},\theta)$--candidate $N$, if $N \models$``${\cal I}\subseteq{\Bbb Q}$ is predense'', {\em then} also in ${\bf V}$, ${\cal I}^N$ is a predense subset of ${\Bbb Q}$. \end{enumerate} \item We say that $\bar{\varphi}$ or $(\bar{\varphi},{\frak B})$ is a temporarily [explicitly] $(\kappa,\theta)$--definition of a c.c.c.--nep forcing notion ${\Bbb Q}$ if \begin{enumerate} \item[$(\alpha)$] it is a temporary [explicitly] $(\kappa,\theta)$-definition of a nep forcing notion, \item[$(\beta)$] for every ${\Bbb Q}$--candidate $N$ we have $N\models$``${\Bbb Q}$ satisfies the c.c.c.''. \end{enumerate} \item The variants are defined as usual. \end{enumerate} \end{definition} \begin{proposition} \label{5.15} \begin{enumerate} \item If ${\Bbb Q}$ is a strongly c.c.c.--nep forcing notion and $N_1\subseteq N_2$ are ${\Bbb Q}$--candidates, {\em then} every $\eta$ which is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N_2$ is also $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N_1$. \item If ${\rm ZFC}^-_*$ is normal and ${\Bbb Q}$ is temporarily c.c.c.--nep {\em then} ${\Bbb Q}$ satisfies the c.c.c. \end{enumerate} \end{proposition} \begin{comment} \label{6.14} We can spell out various absoluteness statements, e.g. \begin{enumerate} \item If ${\Bbb Q}$ is simple nep, c.c.c.\ and ``$\langle p_n:n<\omega\rangle$ is predense'' has the form $(\exists t\in {\cal H}_{<\aleph_1}(\kappa+\theta)) [t\models\ldots]$ (e.g.\ $\kappa^{{\Bbb Q}}=\omega$ and it is $\Pi^1_2$) then predensity of countable sets is preserved in any forcing extension. \item Note that strong c.c.c.--nep (from \ref{5.15}(1)) does not imply c.c.c.--nep (from \ref{5.15}(2)). But if ${\rm ZFC}^-_{**}\vdash{\rm ZFC}^-_*$ and ${\rm ZFC}^-_{**}$ says that ${\rm ZFC}^-_*$ is normal and ${\Bbb Q}$ is strongly c.c.c.--nep for ${\rm ZFC}^-_*$, then ${\Bbb Q}$ is c.c.c.--nep for ${\rm ZFC}^-_*$. \end{enumerate} \end{comment} \stepcounter{section} \subsection*{\quad 7. Preserving a little implies preserving much} Our main intention is to show that, for example, if a ``nice'' forcing notion ${\Bbb P}$ satisfies $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$({}^{\textstyle \omega}2)^{{\bf V}}$ is not null'', {\em then} it preserves ``$X\subseteq{}^{\textstyle \omega}2\ \ (X\in{\bf V})$ is not null''. By Goldstern--Shelah (\cite[Ch.XVIII, 3.11]{Sh:f}), if a Souslin proper forcing preserves ``$({}^{\textstyle \omega}\omega)^{{\bf V}}$ is non-meagre'' then it preserves ``$X\subseteq {}^{\textstyle \omega}\omega$ is non-meagre'' and more (in a way suitable for the preservation theorems there). The main question not resolved there was: is this special to Cohen forcing (which is a way of speaking about ``non-meagre''), or does it hold for nice c.c.c.\ forcing notions in general; in particular, does a similar theorem hold for ``non-null'' instead of ``non-meagre''?
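(For orientation: if ${\Bbb Q}$ is random real forcing and $\mathunderaccent\tilde-3 {\eta}$ its canonical generic real, then for a Borel set $A$ we have $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\notin A$'' if and only if $A$ is null, and similarly with ``meagre'' for Cohen forcing; so ``non-null'' and ``non-meagre'' correspond to the ideals $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ introduced in Definition \ref{6.1} below.)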
Though there have been doubts about it, we succeed in doing it here. In fact, we do it even for a wider family of forcing notions, but then we have to work more in the proof. See \S11 for a generalization. The reader may concentrate on the case that ${\Bbb Q}$ is strongly c.c.c.\ nep and ${\Bbb P},{\Bbb Q}$ are explicitly $\aleph_0$--nep and simple. It is natural to assume that $\mathunderaccent\tilde-3 {\eta}$ is a generic real for ${\Bbb Q}$, but we do not require this when it is not used. \begin{convention} \label{6.0} \begin{enumerate} \item ${\Bbb Q}$ is an explicitly nep forcing notion. \item $\mathunderaccent\tilde-3 {\eta}\in {}^{\textstyle \omega}\omega$ is a hereditarily countable ${\Bbb Q}$-name which is ${\frak B}$--definable. \end{enumerate} \end{convention} We would like to preserve something like: ``$x$ is ${\Bbb Q}$--generic over $N$''. \begin{definition} \label{6.1} \begin{enumerate} \item $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}\stackrel{\rm def}{=}\{A\in{\rm Borel}({}^{\textstyle \omega}\omega):\ \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\notin A$''$\}$ (it is an ideal on the Boolean algebra of Borel subsets of ${}^{\textstyle \omega}\omega$). \item $I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ is the ideal generated by $I_{({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})}$ on ${\cal P}({}^{\textstyle \omega}\omega)$. (So for $A\in{\rm Borel}({}^{\textstyle \omega}\omega)$ we have: $A \in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}\ \Leftrightarrow\ A\in I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$.) Let \[\begin{array}{ll} \hspace{-0.5cm}I^{\rm dx}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}\stackrel{\rm def}{=}\{X\subseteq {}^{\textstyle \omega}\omega:&\mbox{for a dense set of $q\in {\Bbb Q}$, for some Borel set $B\subseteq {}^{\textstyle \omega}\omega$,}\\ &\mbox{we have $X\subseteq B$ and $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }$``$\mathunderaccent\tilde-3 {\eta}\notin B$''}\}. \end{array}\] \item For an ideal $I$ (on Borel sets, respectively), the family of $I$--positive (Borel, respectively) sets is denoted by $I^+$.\\ (Thus, for a Borel subset $A$ of ${}^{\textstyle \omega}\omega$, $A\in I^+_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$\ {\em iff}\ there is $q\in{\Bbb Q}$ such that $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\in A$''.) \end{enumerate} \end{definition} \begin{definition} \label{6.3} \begin{enumerate} \item A forcing notion ${\Bbb P}$ is $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving if for every Borel set $A$ \[A\in (I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+\quad \Rightarrow\quad \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}\mbox{`` }A^{{\bf V}}\in (I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+\mbox{''}\] ($A^{{\bf V}}$ means: the same set, i.e.\ $A \cap{\bf V}$). \item ${\Bbb P}$ is strongly $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving if for all $X \subseteq{}^{\textstyle \omega}\omega$ (i.e.\ not only Borel sets) \[X\in (I^{{\rm dx}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+\quad \Rightarrow\quad \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}} \mbox{``}X\in (I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+\mbox{''}.\] [See \ref{6.4}(7) for ${\Bbb Q}$ which is c.c.c.]
\item We say that a forcing notion ${\Bbb P}$ is weakly $I_{({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})}$--preserving if $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$`` $({}^{\textstyle \omega}\omega)^{{\bf V}}\in (I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+$ ''. \item ${\Bbb P}$ is super--$I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving if for all $X\subseteq{}^{\textstyle \omega}\omega$ we have: \[X\in(I^{\rm dx}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+\quad\Rightarrow\quad \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}} X^{\bf V}\in (I^{\rm dx}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+.\] \end{enumerate} \end{definition} \begin{proposition} \label{6.4} \begin{enumerate} \item $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ is an $\aleph_1$--complete ideal (in fact, if $\langle A_i:i\le\alpha\rangle\in{\bf V}$, each $A_i\in{\rm Borel}({}^{\textstyle \omega}\omega)$ and $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$A^{{\bf V}[\mathunderaccent\tilde-3 {G}]}_\alpha\subseteq\bigcup\limits_{i<\alpha} A^{{\bf V}[\mathunderaccent\tilde-3 {G}]}_i$'' and $A_i\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ for $i<\alpha$ then $A_\alpha\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$). \item If $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ is not trivial (i.e.\ $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\notin ({}^{\textstyle \omega}\omega)^{{\bf V}}$), {\em then} singletons belong to $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$. \item ${}^{\textstyle \omega}\omega\notin I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$. \item Assume (${\rm ZFC}^-_*$ is $K$--good and) ${\Bbb Q}$ is correct. If in ${\bf V}$, $X \in I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ and ${\Bbb P}\in K$, {\em then}\ in ${\bf V}^{{\Bbb P}}$ still $X\in I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ (but see later). \item Assume (${\rm ZFC}^-_*$ is $K$--good, particularly (c) of \ref{0.9} and) ${\Bbb Q}$ is correct. If, in ${\bf V}$, $B$ is a Borel subset of ${}^{\textstyle \omega}\omega$ from $I_{( {\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ and ${\bf V}_1={\bf V}^{{\Bbb P}}$ then also ${\bf V}_1\models$``$B\in I_{{\Bbb Q}, \mathunderaccent\tilde-3 {\eta}}$''. \item $I^{\rm ex}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$, $I^{\rm dx}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ are ideals of ${\cal P}({}^{\textstyle \omega}\omega)$ and \[I^{\rm ex}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}{\,|\grave{}\,}(\mbox{the family of Borel sets})= I^{\rm dx}_{( {\Bbb Q},\mathunderaccent\tilde-3 {\eta})}{\,|\grave{}\,}(\mbox{the family of Borel sets}).\] \item If ${\Bbb Q}$ satisfies the c.c.c.\ then $I^{\rm dx}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ is generated by $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$, so equal to $I^{\rm ex}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$. \item $I^{\rm ex}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ is $\aleph_1$--complete. \item If for some stationary $S\subseteq [\chi]^{\aleph_0}$, ${\Bbb Q}$ is $S$--proper then $I^{\rm dx}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ is $\aleph_1$--complete. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} We will prove parts 5) and 4) only, the rest is left to the reader. \noindent 5)\quad First work in ${\bf V}^{\Bbb P}$. 
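[That is, we have to show that in ${\bf V}^{{\Bbb P}}$ we still have $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\notin B$''.]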
If the conclusion fails then for some $q\in{\Bbb Q}$ we have $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }$``$\mathunderaccent\tilde-3 {\eta}\in B$''. So there is a ${\Bbb Q}$--candidate $M$ to which $q,B$ (i.e.\ the code of $B$) belong. There is $q'$ such that $q\leq q'$ and $q'$ is $(M,{\Bbb Q})$--generic. Now for every $G\subseteq{\Bbb Q}$ generic over ${\bf V}^{\Bbb P}$ to which $q'$ belongs, $\mathunderaccent\tilde-3 {\eta}[G]\in B^{{\bf V}^{\Bbb P}[G]}$. By absoluteness, also $M\langle G\rangle\models\mathunderaccent\tilde-3 {\eta}\langle G\cap{\Bbb Q}^M \rangle\in B^{M\langle G\rangle}$ and hence (by the forcing theorem) for some $p\in G\cap{\Bbb Q}^M$ we have $M\models [p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{\Bbb Q}\mbox{``}\mathunderaccent\tilde-3 {\eta}\in B \mbox{''}]$. Now, returning to ${\bf V}$, by Shoenfield--Levy absoluteness there are such $M',p'$ in ${\bf V}$. Let $p''$ be $(M',{\Bbb Q})$--generic, $p'\leq^{\Bbb Q} p''$. So similarly to the above, $p''\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\in B$'', contradicting $B\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ in ${\bf V}$. \noindent 4)\quad As $X\in I^{\rm ex}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$, clearly for some Borel set $B\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ we have $X\subseteq B$. By part (5), also in ${\bf V}^{\Bbb P}$ we have $B\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ and trivially $X\subseteq B^{\bf V}\subseteq B^{{\bf V}^{\Bbb P}}$. \QED \begin{proposition} \label{6.4A} \begin{enumerate} \item If a forcing notion ${\Bbb P}$ is $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving, {\em then} ${\Bbb P}$ is weakly $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving. \item If ${\Bbb P}$ is strongly $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving, {\em then} ${\Bbb P}$ is $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving. \item Assume that ${\Bbb Q}$ satisfies the c.c.c.\ and $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ is homogeneous (see $(\circledast)$ below). Then:\quad ${\Bbb P}$ is $I_{({\Bbb Q},\mathunderaccent\tilde-3 { \eta})}$--preserving iff ${\Bbb P}$ is weakly $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving, where \begin{enumerate} \item[$(\circledast)$] $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ is homogeneous if: for any (Borel) sets $B_1,B_2\in (I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+$ we can find a Borel set $B_1'\subseteq B_1$ with $B_1'\in (I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+$ and a Borel function $F$ from $B_1'$ into $B_2$ such that \begin{enumerate} \item[$(\alpha)$] for every Borel set $A\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$, $F^{-1}[ A\cap B_2]\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$, \item[$(\beta)$] this is absolute (or at least it holds also in ${\bf V}^{\Bbb P}$). \end{enumerate} \end{enumerate} \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} 3)\quad By part (1) it suffices to show ``non--preserving'' assuming ``not weakly preserving''. So there are $p,B^*,\mathunderaccent\tilde-3 {A},\mathunderaccent\tilde-3 {q}$ such that $B^*\in (I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+$ is a Borel subset of ${}^{\textstyle \omega}\omega$ and \[\begin{array}{ll} p\mathrel {{\vrule height 6.9pt depth -0.1pt}\!
\vdash }_{{\Bbb P}}\mbox{``}&(a)\ \mathunderaccent\tilde-3 {A}\mbox{ is a Borel set}\\ \ &(b)\ \mathunderaccent\tilde-3 {q}\in{\Bbb Q}\mbox{ (in ${\bf V}^{\Bbb P}$!)}\\ \ &(c)\ \mathunderaccent\tilde-3 {q}\mbox{ witnesses }\mathunderaccent\tilde-3 {A}\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})},\mbox{ that is }\mathunderaccent\tilde-3 {q}\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}\mbox{``}\mathunderaccent\tilde-3 {\eta}\notin \mathunderaccent\tilde-3 {A}\mbox{''}\\ \ &(d)\ \nu\in\mathunderaccent\tilde-3 {A}\mbox{ for every }\nu\in (B^*)^{\bf V}.\ \mbox{ ''} \end{array}\] Let \[\begin{array}{ll} {\cal J}=\{B:&B\in (I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+,\mbox{ so a Borel subset of }{}^{\textstyle \omega}\omega, \mbox{ and}\\ \ &\mbox{for some Borel one-to-one function $F$ from $B$ to $B^*$ we have}\\ \ &\mbox{$F$ is absolutely $(I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+$--preserving }\}. \end{array}\] Choose a maximal family $\{B_i:i<i^*\}\subseteq{\cal J}$ such that $i\neq j\ \Rightarrow\ B_i\cap B_j\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$. As ${\Bbb Q}$ satisfies the c.c.c.\ necessarily $i^*<\omega_1$, so wlog $i^*\leq\omega$. By the assumption, ${}^{\textstyle \omega}\omega\setminus\bigcup\limits_{i<i^*} B_i\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$. Let $F_i$ witness that $B_i\in {\cal J}$. Let \[\mathunderaccent\tilde-3 {A}_i=\{\eta\in{}^{\textstyle \omega}\omega:\eta\in B_i\ \mbox{ and }\ F_i(\eta)\in\mathunderaccent\tilde-3 {A}\}.\] Then $\mathunderaccent\tilde-3 {A}_i$ is a Borel subset of ${}^{\textstyle \omega}\omega$ and $p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$\mathunderaccent\tilde-3 {A}_i\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$'' as $p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$\mathunderaccent\tilde-3 {A}\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$''. Hence \[p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }\mbox{`` }\bigcup_{i<i^*}\mathunderaccent\tilde-3 {A}_i\cup ({}^{\textstyle \omega}\omega\setminus\bigcup_{i< i^*})\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}\mbox{ ''}\] (call this set $\mathunderaccent\tilde-3 {A}^*$). Now \[p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}\mbox{`` }({}^{\textstyle \omega}\omega)^{{\bf V}}=({}^{\textstyle \omega}\omega\setminus\bigcup_{i<i^*}B_i)^{\bf V} \cup\bigcup_{i<i^*} B_i^{\bf V}\subseteq ({}^{\textstyle \omega}\omega\setminus \bigcup_{i<i^*} B_i)\cup \bigcup_{i<i^*} A_i\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}\mbox{ ''}\] so we are done. \QED \noindent{\bf Comment}:\qquad 1)\quad It is easy to find a forcing notion ${\Bbb P}$ which is $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving, but not strongly $I_{({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})}$--preserving, e.g.\ for ${\Bbb Q} =$ Cohen (see \ref{6.6} below). However, for sufficiently nice forcing notion ${\Bbb P}$, ``$I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta}) }$--preserving'' and ``strongly $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$-preserving'' coincide, as we will see in \ref{6.5}. (Parallel to the phenomenon that for ``nice'' sets, CH holds). 
\noindent 2)\quad It is even easier to find a weakly $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta}) }$--preserving forcing notion ${\Bbb P}$ which is not $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta}) }$--preserving.\\ Assume that for $\ell<2$ we have $({\Bbb Q}_\ell,\mathunderaccent\tilde-3 {\eta}_\ell)$ as in \ref{6.0}, e.g.\ ${\Bbb Q}_0$ is Cohen forcing, ${\Bbb Q}_1$ is random real forcing. Let ${\Bbb Q} =\{\emptyset\}\cup\bigcup\limits_{\ell<2}\{\ell\}\times{\Bbb Q}_\ell$, $\emptyset$ minimal, $(\ell_1,q_1) \le (\ell_2,q_2)$ iff $\ell_1 = \ell_2$ and ${\Bbb Q}_\ell\models q_1 \le q_2$. We define a ${\Bbb Q}$--name $\mathunderaccent\tilde-3 {\eta}$ by defining for a generic $G\subseteq{\Bbb Q}$ over ${\bf V}$: \[\mathunderaccent\tilde-3 {\eta}[G]\mbox{ is }\begin{array}{ll} \langle 0\rangle{}^\frown\!(\eta_0[G_0])&\mbox{if }\{0\}\times{\Bbb Q}_0\cap G\ne \emptyset,\mbox{ and }G_0=\{q\in{\Bbb Q}_0:(0,q)\in G\}\\ \langle 1\rangle{}^\frown\!(\eta_1[G_1])&\mbox{if }\{1\}\times{\Bbb Q}_1\cap G\ne \emptyset,\mbox{ and }G_1=\{q\in{\Bbb Q}_1:(1,q)\in G\}. \end{array}\] Then usually (and certainly for our choice) we get a counterexample. \begin{proposition} \label{6.6x} Assume that $A$ is a Borel subset (better: a definition of a Borel subset) of ${}^{\textstyle \omega}\omega$, $M$ is a ${\Bbb Q}$--candidate (so $\mathunderaccent\tilde-3 {\eta}\in M$, i.e.\ $\langle \psi_{\alpha,m}:\alpha<\omega,m<\omega\rangle\in M$) and $A\in M$ (i.e.\ the definition). Further, suppose that $q\in {\Bbb Q}^M$ is such that $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\in A$''. Then \begin{enumerate} \item[$(\alpha)$] $M\models$``$q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}\mathunderaccent\tilde-3 {\eta}\in A$'', \item[$(\beta)$] there is $\eta\in A$ which is a $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic real over $M$. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} As for $(\alpha)$, if it fails then for some $q'\in{\Bbb Q}^M$, we have \[M\models\mbox{`` }q \le^{{\Bbb Q}} q'\mbox{ and }q'\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}\mathunderaccent\tilde-3 {\eta} \notin A\mbox{ ''},\] and let $r\in{\Bbb Q}$ be $\langle M,{\Bbb Q}\rangle$--generic above $q'$. So if $G$ is a subset of ${\Bbb Q}$ generic over ${\bf V}$ to which $r$ belongs then $q'\in G$ and $G\cap{\Bbb Q}^M$ is a subset of ${\Bbb Q}^M$ generic over $M$ to which $q'$ belongs. Hence $M\langle G\rangle\models$``$\mathunderaccent\tilde-3 {\eta}[G\cap{\Bbb Q}^M]\notin A$'' and $\mathunderaccent\tilde-3 {\eta}[G\cap Q^M]\in{}^{\textstyle \omega}\omega$. By absoluteness also ${\bf V}[G]\models \mathunderaccent\tilde-3 {\eta}[G\cap{\Bbb Q}^M]\notin A$ and $\mathunderaccent\tilde-3 {\eta}[G\cap{\Bbb Q}^M]\in{}^{\textstyle \omega}\omega$. But as $\mathunderaccent\tilde-3 {\eta}\in M$ clearly $\mathunderaccent\tilde-3 {\eta}[G\cap{\Bbb Q}^M]=\mathunderaccent\tilde-3 {\eta}[G]$ and as $q'\in G$ also $q\in G$, so we get contradiction to $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\in A$''. By clause $(\alpha)$ clause $(\beta)$ is easy: we can find a subset $G\in{\bf V}$ of ${\Bbb Q}^N$ to which $q$ belongs which is generic over $M$. So $\mathunderaccent\tilde-3 {\eta}[G] \in{}^{\textstyle \omega}\omega$ and it belongs to $A$ as $M\models$``$q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}\mathunderaccent\tilde-3 {\eta}\in A$''. 
\QED$_{\ref{6.6x}}$ \begin{proposition} \label{6.7} Assume ${\Bbb Q}$ is correct and satisfies the c.c.c. The following conditions are equivalent for a set $X\subseteq{}^{\textstyle \omega}\omega$: \begin{enumerate} \item[(A)] $X\in I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$, \item[(B)] for some $\rho\in{}^{\textstyle \omega}2$, for every ${\Bbb Q}$--candidate $N$ to which $\rho$ belongs there is no $\eta\in X$ which is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N$, \item[(C)] for every $p\in{\Bbb Q}$ for some ${\Bbb Q}$--candidate $N$ such that $p\in {\Bbb Q}^N$, there is no $\eta\in X$ which is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N$. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} (A)$\ \Rightarrow\ $(B):\quad So assume (A), i.e.\ $X\in I^{{\rm ex}}_{({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})}$. Then for some Borel set $A\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$ we have $X\subseteq A$. Let $\rho\in{}^{\textstyle \omega}2$ code $A$. Since $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\notin A^{{\bf V}[\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}]}$'', it follows from \ref{6.6x} that \begin{enumerate} \item[(*)] for any ${\Bbb Q}$--candidate $N$ to which $\rho$ belongs there is no $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic real $\eta$ over $N$ which belongs to $X$ (or even just to $A$). \end{enumerate} \noindent (B)$\ \Rightarrow\ $(C):\quad Easy as $Q$ is correct. \noindent (C)$\ \Rightarrow\ $(A):\quad Assume (C). Let \[\begin{array}{ll} {\cal I}=\{p\in{\Bbb Q}:&\mbox{for some Borel subset } A=A_p\mbox{ of }{}^{\textstyle \omega}\omega\\ \ &\mbox{we have }p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }\mbox{`` }\mathunderaccent\tilde-3 {\eta}\notin A_p\mbox{ '' and } X\subseteq A_p\}. \end{array}\] Suppose first that ${\cal I}$ is predense in ${\Bbb Q}$. Clearly it is open, so we can find a maximal antichain ${\cal J}$ of ${\Bbb Q}$ such that ${\cal J}\subseteq {\cal I}$. As ${\Bbb Q}$ satisfies the c.c.c., necessarily ${\cal J}$ is countable. So $A\stackrel{\rm def}{=}\bigcap\limits_{p\in {\cal J}} A_p$ is a Borel subset of ${}^{\textstyle \omega}\omega$ (as ${\cal J}$ is countable) and it includes $X$ (as each $A_p$ does). Moreover, since ${\cal J}$ is a maximal antichain of ${\Bbb Q}$ (and $p\in {\cal J}\ \Rightarrow\ p\in {\cal I}\ \Rightarrow\ p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}} $``$\mathunderaccent\tilde-3 {\eta}\notin A_p\mbox{''}\ \Rightarrow\ p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta} \notin A$'') we have $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\eta\notin A$''. Consequently (A) holds. Suppose now that ${\cal I}$ is not predense in ${\Bbb Q}$ and let $p^*\in{\Bbb Q}$ exemplifies it, i.e.\ it is incompatible with every member of ${\cal I}$. Let $N$ be a ${\Bbb Q}$--candidate to which belongs some $\rho$ given by the assumption (C) for $p^*$. Thus $p^*\in{\Bbb Q}^N$ and no $\eta\in X$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta })$--generic over $N$. Let $q$ be a member of ${\Bbb Q}$ which is above $p^*$ and is $\langle N,{\Bbb Q}^N\rangle$--generic (i.e.\ $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }$``$G^P\cap{\Bbb Q}^N$ is generic over $N$''). Let $A\stackrel{\rm def}{=}\{\eta\in{}^{\textstyle \omega}\omega:\eta$ is not $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N\}$. 
Now \begin{enumerate} \item[(a)] $A$ is a Borel subset of ${}^{\textstyle \omega}\omega$ and $X\subseteq A$ \noindent (why? as $N$ is countable), \item[(b)] $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\notin A^{{\bf V}[G_{{\Bbb Q}}]}$'' \noindent (why? by the definition of $A$), \item[(c)] $q\in {\cal I}$ \noindent (why? by (a)+(b)). \end{enumerate} Thus $p^*\le q\in {\cal I}$ and we get contradiction to the choice of $p^*$. \QED$_{\ref{6.7}}$ \begin{theorem} \label{6.5} Assume that: \begin{enumerate} \item[(a)] ${\Bbb Q}$, $\mathunderaccent\tilde-3 {\eta}$ are as above (see\ref{6.0}), and ${\Bbb Q}$ is correct, \item[(b)] ${\Bbb P}$ is nep-forcing notion with respect to {\em our fixed} version ${\rm ZFC}^-_*$, \item[(c)] ${\Bbb P}$ is $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving, \item[(d)] ${\rm ZFC}^-_{**}$ is a stronger version of set theory including clauses (i)--(v) below for some $\chi_1 <\chi_2$, \begin{enumerate} \item[(i)] $({\cal H}(\chi_2),\in)$ is a (well defined) model of ${\rm ZFC}^-_*$, \item[(ii)] (a), (b) and (c) (with ${\frak B}^{{\Bbb P}}$, ${\frak B}^{{\Bbb Q}}$, $\mathunderaccent\tilde-3 {\eta}$ as individual constants), \item[(iii)] ${\Bbb Q},{\Bbb P}\in {\cal H}(\chi_1)$ and $({\cal H}(\chi_2),\in)$ is a semi ${\Bbb P}$--candidate and a semi ${\Bbb Q}$-candidate with $({\frak B}^{{\Bbb P}})$ interpreted as $({\frak B}^{{\Bbb P}})^N{\,|\grave{}\,} {\cal H} (\chi_2)^N$ and similarly for ${\Bbb Q}$, so (natural to assume) ${\frak B}^{{\Bbb P}}$, ${\frak B}^{{\Bbb Q}}\in {\cal H}(\chi_2)$, (remember, ``semi'' means omitting the countability demand) \item[(iv)] forcing of cardinality $<\chi_1$ preserves the properties (i), (ii), (iii), and $\chi_1$ is a strong limit cardinal, \item[(v)] forcing by ${\Bbb P}$ preserves ``${\cal I}$ is a predense subset of ${\Bbb Q}$'' (follows if ${\Bbb Q}$ satisfies the c.c.c.\ by \ref{5.5A}(ii)). \end{enumerate} \end{enumerate} {\em Then}: \begin{enumerate} \item[$(\alpha)$] {\em if}, additionally, \begin{enumerate} \item[(e)] ${\rm ZFC}^-_{**}$ is normal (see Definition\ref{0.9}(3)) \end{enumerate} {\em then} ${\Bbb P}$ is strongly $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving, \item[$(\beta)$] {\em if}\ $N$ is a ${\Bbb P}$--candidate (and ${\Bbb Q}$--candidate) and moreover it is a model of ${\rm ZFC}^-_{**}$ and $N\models$``$p\in{\Bbb P}$'' and $\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N$,\\ {\em then}\ for some $q$ we have: \begin{enumerate} \item[(i)] $p\le q$ and $q\in{\Bbb P}$, \item[(ii)] $q$ is $\langle N,{\Bbb P}\rangle$--generic; i.e.\ $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{ {\Bbb P}}$``$\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap{\Bbb P}^N$ is generic over $N$'' (see \ref{2.3}), \item[(iii)] $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$`` $\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N[{\Bbb P}^N\cap\mathunderaccent\tilde-3 {G}_{{\Bbb P}}]$ ''. \end{enumerate} \item[$(\alpha)^+$] We can strengthen the conclusion of $(\alpha)$ to ``${\Bbb P}$ is super--$I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving''. \end{enumerate} \end{theorem} \begin{remark} \label{7.8A} 1)\quad We consider, for a nep forcing notion ${\Bbb Q}$ \begin{enumerate} \item[$(*)_1$] ${\Bbb Q}$ satisfies the c.c.c. 
\end{enumerate} We also consider \begin{enumerate} \item[$(*)_2$] being a predense subset (or just a maximal antichain) of ${\Bbb Q}$ is $K$--absolute. \end{enumerate} By the results of the previous section, $(*)_1\ \Rightarrow\ (*)_2$ under reasonable conditions. You may wonder whether $(*)_2\ \Rightarrow\ (*)_1$, but by the examples in section 11 the answer is no. \noindent 2)\quad Note that in $(\alpha), (\alpha)^+$ we can use weak normality if ${\Bbb Q}$ satisfies the c.c.c., see \ref{6.8}. We do not use ``${\Bbb P}$ is explicitly nep'' so we do not demand it. \end{remark} Before we prove the theorem, let us give an example of a forcing notion failing the conclusion and see why we can often simplify the assumptions. \begin{example} \label{6.6} Start with ${\bf V}_0$. Let $\bar{s}=\langle s_i:i<\omega_1\rangle$ be a sequence of random reals, forced by the measure algebra on ${}^{\omega_1}({}^{\textstyle \omega}2)$. Let ${\bf V}_1={\bf V}_0[\bar{s}]$, ${\bf V}_2={\bf V}_1[r]$, $r$ a Cohen real over ${\bf V}_1$, and \[\begin{array}{ll} {\bf V}_3={\bf V}_2[\bar{t}]&\mbox{ where }\bar{t}=\langle t_i:i<\omega_1\rangle \mbox{ is a sequence of random reals}\\ \ &\mbox{forced by the measure algebra}. \end{array}\] Then in ${\bf V}_3$ (in fact, already in ${\bf V}_2$), $\{s_i:i<\omega_1\}$ is a null set, whereas $\{t_i:i<\omega_1\}$ is not null. But $\bar{t}$ is also generic for the measure algebra over ${\bf V}_1$. So ${\bf V}'_2={\bf V}_1[\bar{t}]$ is a generic extension of ${\bf V}_1$. We have ${\bf V}_3={\bf V}'_2[r]$, where $r$ is generic for some algebra, more specifically for \[{\Bbb R}\stackrel{\rm def}{=}(\mbox{Cohen $*$ measure algebra adding }\bar{t})/ \bar{t}.\] So in ${\bf V}'_2$ the sets $\bar{t}$ and $\bar{s}$ are not null and ${\Bbb R}$ makes $\bar{s}$ null, but not $\bar{t}$. How can ${\Bbb R}$ do that? ${\Bbb R}$ uses $\langle t_i:i<\omega_1\rangle$ in its definition, so it is not ``nice'' enough.\QED$_{\ref{6.6}}$ \end{example} \noindent{\bf Remark}\qquad In the proof of \ref{6.5}, of course, we may assume $N\prec ({\cal H}(\chi),\in)$ if $({\cal H}(\chi),\in)\models {\rm ZFC}^-_{**}$, as this normally holds. In $(\alpha)$ the use of such $N$ does not matter. In $(\beta)$ it slightly weakens the conclusion. Now, $(\alpha)$ is our original aim. But $(\beta)$ is both needed for $(\alpha)$ and a step towards preservation theorems (as in \cite{Sh:f}). So typically $N$ is an elementary submodel of an appropriate ${\cal H}(\chi)$. {\noindent{\sc Proof of \ref{6.5}}\hspace{0.2in}} {\bf Clause $(\alpha)$}: \qquad To prove $(\alpha)$ we will use $(\beta)$. So let $X\subseteq{}^{\textstyle \omega}\omega$, $X\in (I^{{\rm dx}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+$. Then there is a condition $q^*\in{\Bbb Q}$ such that \begin{enumerate} \item[$(*)_1$] for no condition $q\ge q^*$ and no Borel subset $B$ of ${}^{\textstyle \omega}\omega$ do we have:\quad $X\subseteq B$ and $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{\Bbb Q}$``$\mathunderaccent\tilde-3 {\eta}\notin B$''. \end{enumerate} Let $\chi$ be large enough.
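[``Large enough'' means in particular that ${\Bbb P},{\Bbb Q},{\frak B}^{{\Bbb P}},{\frak B}^{{\Bbb Q}},q^*$ belong to ${\cal H}(\chi)$.]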
We can find $N\subseteq ({\cal H}(\chi),\in)$ as in $(\beta)$, moreover $N\prec ({\cal H}(\chi),\in)$, a model of ${\rm ZFC}^-_{**}$ (and so a ${\Bbb P}$--candidate and a ${\Bbb Q}$--candidate) [it exists because, by clause (e) of the assumptions, ${\rm ZFC}^-_{**}$ is normal, so for $\chi$ large enough any countable $N\prec ({\cal H}(\chi),\in)$ to which ${\frak C},{\frak B}^{{\Bbb Q}}, {\frak B}^{{\Bbb P}}$ belong is a model of ${\rm ZFC}^-_{**}$ and is a ${\Bbb P}$--candidate and a ${\Bbb Q}$--candidate, as required]. Towards a contradiction, assume $p^*\in{\Bbb P}$ and $p^*\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$X\in I^{{\rm dx}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$''. So for some ${\Bbb P}$--name $\mathunderaccent\tilde-3 {A}$ we have \[p^*\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}\mbox{`` }\mathunderaccent\tilde-3 {A}\mbox{ is a Borel subset of }{}^{\textstyle \omega}\omega,\ X \subseteq\mathunderaccent\tilde-3 {A}\mbox{ and }\mathunderaccent\tilde-3 {A}\in I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})},\mbox{ i.e. }\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}\mathunderaccent\tilde-3 {\eta}\notin\mathunderaccent\tilde-3 {A}\mbox{ ''.}\] Without loss of generality the name $\mathunderaccent\tilde-3 {A}$ is hereditarily countable and $\mathunderaccent\tilde-3 {A},p^*,q^*$ belong to $N$. In ${\bf V}$, let \[\begin{array}{ll} B=\{\eta\in{}^{\textstyle \omega}\omega:&\eta\mbox{ is a $({\Bbb Q}^{\geq q^*},\mathunderaccent\tilde-3 {\eta})$--generic real over $N$, which means:}\\ \ &\eta=\mathunderaccent\tilde-3 {\eta}[G]\mbox{ for some $G\subseteq{\Bbb Q}^N$ generic over }N\\ \ &\mbox{such that }q^*\in G\} \end{array}\] Clearly, it is an analytic set (if $\mathunderaccent\tilde-3 {\eta}$ is a generic real then it is even Borel; both hold, as ``$\mathunderaccent\tilde-3 {\eta}$ is a generic real for ${\Bbb Q}$'' follows from ${\rm ZFC}^-_{**}$). So $B=\bigcup\limits_{i<\omega_1}B_i$, where each $B_i$ is Borel. Let $q\in{\Bbb Q}$ be $\langle N,{\Bbb Q}\rangle$--generic and $q^*\leq q$. Then $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {\eta}\in B$'' and hence wlog for some $i<\omega_1$ we have $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{\Bbb Q}$``$\mathunderaccent\tilde-3 {\eta}\in B_i$''. Since $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{\Bbb Q}$``$\mathunderaccent\tilde-3 {\eta}\notin ({}^{\textstyle \omega}\omega\setminus B_i)$'' (as $q^*\leq q$ and $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{\Bbb Q}$``$\mathunderaccent\tilde-3 {\eta}\in B_i$''), we may apply $(*)_1$ to the set ${}^{\textstyle \omega}\omega\setminus B_i$ to conclude that $X\not\subseteq{}^{\textstyle \omega}\omega\setminus B_i$. Take $\eta^*\in X\cap B_i$ (so it is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N$). So by clause $(\beta)$ (proved below), there is a condition $p\in{\Bbb P}$, $p\ge p^*$, which is $\langle N,{\Bbb P}\rangle$--generic (i.e.\ it forces that $\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap{\Bbb P}^N$ is generic over $N$, not necessarily $\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap N$) and such that \[p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{\Bbb P}\mbox{`` }\eta^*\mbox{ is }({\Bbb Q},\mathunderaccent\tilde-3 {\eta})\mbox{--generic over } N[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^N]\mbox{ ''.}\] Choose $G_{{\Bbb P}}\subseteq{\Bbb P}$, generic over ${\bf V}$, such that $p\in G_{{\Bbb P}}$.
In ${\bf V}[G_{{\Bbb P}}]$, $N[G_{{\Bbb P}}\cap{\Bbb P}^N]$ is a generic extension of $N$ (for ${\Bbb P}^N$!), a ${\Bbb Q}$ candidate, and $\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over it. As $p^*\leq p\in G_{{\Bbb P}}$, clearly if $G_{{\Bbb Q}}\subseteq{\Bbb Q}^{{\bf V}[G_{ {\Bbb P}}]}$ is generic over ${\bf V}[G_{{\Bbb P}}]$ then $\mathunderaccent\tilde-3 {\eta}[G_{{\Bbb Q}}]\notin\mathunderaccent\tilde-3 {A} [G_{{\Bbb P}}]$. But $N[G_{{\Bbb P}}\cap {\Bbb P}^N]{\rm pr}ec({\cal H}(\chi)^{{\bf V}[G_{{\Bbb P}}]},\in)$, so $N[G_{{\Bbb P}}\cap {\Bbb P}^N]$ satisfies the parallel statement. Since $\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N[G_{{\Bbb P}}\cap{\Bbb P}^N]$, it cannot belong to $\mathunderaccent\tilde-3 {A}[G_{{\Bbb P}}\cap {\Bbb P}^N]$. But easily $\mathunderaccent\tilde-3 {A}[G_{{\Bbb P}}\cap{\Bbb P}^N]= \mathunderaccent\tilde-3 {A}[G_{{\Bbb P}}]$ and hence, by absoluteness, $\eta^*\in X\subseteq \mathunderaccent\tilde-3 {A}[G_{{\Bbb P}}]$, a contradiction. This ends the proof of \ref{6.5}, clause $(\alpha)$. \noindent {\bf Clause $(\alpha)^+$}:\qquad Like the proof of clause $(\alpha)$. We start like there but now we choose functions $r^*,A^*, {\cal I}$ such that \begin{enumerate} \item[$(*)_2$] ${\rm Dom}(r^*)={\rm Dom}({\cal I})$ is the set of all hereditarily countable canonical ${\Bbb P}$--names for elements of ${\Bbb Q}$ (so it is a member of ${\cal H}_{<\aleph_1}(\kappa({\Bbb P})+\kappa({\Bbb Q}))$), and ${\rm Dom}(A^*)=\{(p,\mathunderaccent\tilde-3 {q}): p\in {\cal I}(\mathunderaccent\tilde-3 {q}),\ \mathunderaccent\tilde-3 {q}\in{\rm Dom}(r^*)\}$, \item[$(*)_3$] for each $\mathunderaccent\tilde-3 {q}\in{\rm Dom}(r^*)={\rm Dom}({\cal I})$, ${\cal I}(\mathunderaccent\tilde-3 {q})$ is a predense subset of ${\Bbb P}$ such that for each $p\in{\cal I}(\mathunderaccent\tilde-3 {q})$ we have: \[\begin{array}{ll} p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}\mbox{``}&A^*(\mathunderaccent\tilde-3 {q})\mbox{ is a Borel subset of }{}^{\textstyle \omega}\omega\mbox{ ''},\\ p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}&[r^*(\mathunderaccent\tilde-3 {q})\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}\mbox{`` }\mathunderaccent\tilde-3 {\eta}\notin A^* (\mathunderaccent\tilde-3 {q})\mbox{ ''}],\\ p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}\mbox{``}&X\subseteq A^*(\mathunderaccent\tilde-3 {q})\mbox{ ''}. \end{array}\] \end{enumerate} Without loss of generality, the set $X$, and the functions $r^*,A^*,{\cal I}$ belong to $N$. We choose conditions $q\in{\Bbb Q}$, $p\in{\Bbb P}$ and a real $\eta^*\in X$ and a generic filter $G_{\Bbb P}\subseteq{\Bbb P}$ over ${\bf V}$ in a similar manner as in clause $(\alpha)$. We note that \[\mathunderaccent\tilde-3 {q}\in{\rm Dom}(r^*)\cap N\quad \Rightarrow\quad N\cap{\cal I}(\mathunderaccent\tilde-3 {q})\cap G_{\Bbb P}\neq\emptyset,\] so say $p[\mathunderaccent\tilde-3 {q}]\in G_{\Bbb P}\cap N$. Since $\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N[G_{\Bbb P}\cap{\Bbb P}^N]$, there is $G^*\subseteq {\Bbb Q}^{N[G_{\Bbb P}\cap {\Bbb P}^N]}$ generic over $N$ such that $\eta^*=\mathunderaccent\tilde-3 {\eta}[G^*]$. By the choice of $r^*,A^*$ there is $\mathunderaccent\tilde-3 {q}\in N\cap{\rm Dom}(r^*)$ such that $r^*[\mathunderaccent\tilde-3 {q}][G_{\Bbb P}\cap {\Bbb P}^N]\in G^*$. 
Now, $A=A^*(p[\mathunderaccent\tilde-3 {q}],\mathunderaccent\tilde-3 {q})\in N[G_{\Bbb P}\cap {\Bbb P}^N]$ is a Borel subset of ${}^{\textstyle \omega}\omega$ and $N[G_{\Bbb P}\cap{\Bbb P}^N] \models$``$\mathunderaccent\tilde-3 {\eta}\notin A$'', hence $N[G_{\Bbb P}\cap {\Bbb P}^N]\models$``$\mathunderaccent\tilde-3 { \eta}[G^*]\notin A$. But \[N[G_{\Bbb P}]=N[G_{\Bbb P}\cap{\Bbb P}^N]\models\mbox{`` }X\setminus A=\emptyset\mbox{ ''},\] contradicting $\eta^*=\mathunderaccent\tilde-3 {\eta}[G^*]\in X\setminus A$. \noindent {\bf Clause $(\beta)$}:\qquad So $N,\eta^*,{\Bbb Q},{\Bbb P},p$ are given. Let $N_1=N[G^*]$ be a generic extension of $N$ by a subset $G^*$ of ${\Bbb Q}^N$ generic over $N$ and such that $\eta^*=\mathunderaccent\tilde-3 {\eta}[G]$ (see\ref{5.2A}). Now choose (in $N$) a model $M{\rm pr}ec ({\cal H}(\chi_2),\in)^N$ such that \begin{enumerate} \item[(i)] ${\Bbb P},{\Bbb Q},\mathunderaccent\tilde-3 {\eta},p\in M$, \item[(ii)] ${\Bbb Q}^N\subseteq M$ and ${\Bbb P}^N\subseteq M$, \item[(iii)] the family of maximal antichains of ${\Bbb P}$ and of ${\Bbb Q}$ from $N$ are included in $M$, \item[(iv)] $M\in N$, moreover $M\in {\cal H}(\chi_2)^N$, \item[(v)] $M\models$``forcing by ${\Bbb P}$ preserves predensity of subsets of ${\Bbb Q}$'' \end{enumerate} [Why is clause (v) possible? As $N{\,|\grave{}\,} {\cal H}(\chi_1)$ inherits clause (v) of (d) of the assumptions]. Hence, by assumption (d), \begin{enumerate} \item[$(\bigotimes)$] $M$ is a ${\Bbb P}$-candidate and a ${\Bbb Q}$-candidate and \[N\models\mbox{`` }M\mbox{ is a semi ${\Bbb P}$-candidate and semi ${\Bbb Q}$-candidate ''}.\] \end{enumerate} Let ${\Bbb R}={\rm Levy}(\aleph_0,|M|)$. In ${\bf V}$ let $G_{{\Bbb R}}\subseteq {\Bbb R}$ be generic over $N_1=N[G^*]$ (note that as $N_1$ is countable, clearly $G_{{\Bbb R}}$ exists) and let $N_2=N_1[G_{{\Bbb R}}]$ (note that it too is a ${\Bbb P}$-candidate and a ${\Bbb Q}$-candidate). Note: $\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $M$ too and $G^*$ is a subset of ${\Bbb Q}^M$ generic over $M$ (by clauses (ii) + (iii)) and ${\Bbb Q}^N={\Bbb Q}^M$, ${\Bbb P}^N={\Bbb P}^M$ (note that in $N_2$ the model $M$ is countable). Now we ask the following question: \begin{quotation} Is there $q\in{\Bbb P}^{N_2}$ such that \noindent $N_2\models$`` $p\le^{{\Bbb P}} q$, $q$ is $(M,{\Bbb P}^M)$--generic and $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $M[G_{{\Bbb P}}\cap {\Bbb P}^M]$'' ''? \end{quotation} Depending on the answer, we consider two cases. \noindent{\bf Case 1}:\quad The answer is ``yes''. \\ Choose $q'\in {\Bbb P}$, $q'\ge q$, $q'$ is $(N_2,P^{N_2})$-generic. Then we have \[\begin{array}{ll} q'\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}\mbox{``}&\mbox{in }{\bf V}[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}],\ \mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^{N_2}\mbox{ is generic over }N_2,\ p,q\in \mathunderaccent\tilde-3 {G}_{{\Bbb P}},\mbox{ and}\\ \ &\mbox{in }N_2[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^{N_2}],\ \eta^*\mbox{ is $({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})$--generic over }M[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap{\Bbb P}^M],\\ \ &\mbox{hence also over }N[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^N]\mbox{ ''.} \end{array}\] [Why does $q'$ force this? 
As: \begin{enumerate} \item[(A)] ``$\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^{N_2}$ is generic over $N_2$'' holds because $q'$ is $(N_2,{\Bbb P}^{N_2})$--generic; \item[(B)] ``$p,q\in\mathunderaccent\tilde-3 {G}_{{\Bbb P}}$'' holds as $p\le q\le q'\in G_{{\Bbb P}}$ (forced by $q'$!); \item[(C)] ``in $N_2[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^{N_2}]$, $\eta^*$ is $({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})$--generic over $M[G_{{\Bbb P}}\cap {\Bbb P}^M]$'' holds because of the choice of $q$ (i.e.\ the assumption of the case and as $q\in G_{\Bbb P}$); \item[(D)] ``$\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N[G_{{\Bbb P}}\cap {\Bbb P}^N]$ for ${\Bbb Q}$'' holds by clause (C) above and clause (iii) of the choice of $M$.] \end{enumerate} By absoluteness we can omit the ``in $N_2[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap{\Bbb P}^M]$'', i.e.\ $q'\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^N]$''. So $q'$ is as required. \noindent{\bf Case 2}:\quad The answer is ``no''. \\ Let $\psi(x)$ be the following statement: \begin{quotation} there is no $q$ such that:\\ $q\in {\Bbb P}$, ${\Bbb P}\models$``$p\le q$'', $q$ is $(M,{\Bbb P}^M)$--generic and $q \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$x$ is a $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic real over $M[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^M]$''. \end{quotation} So $\psi$ is a first order formula in set theory, all parameters are in $N_1= N[G^*]\subseteq N_2=N[G^*][G_{{\Bbb R}}]$, and by the assumption of the case \[N[G^*][G_{{\Bbb R}}]\models\psi[\eta^*].\] As $G_{{\Bbb R}} \subseteq{\Bbb R}$ is generic over $N[G^*]$ for ${\Bbb R}$, necessarily (by the forcing theorem), for some $r\in G_{{\Bbb R}}$ \[N[G^*]\models\mbox{`` }r\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb R}}\psi[\eta^*]\mbox{ ''.}\] Since ${\Bbb R}$ is homogeneous we may assume that $r=\emptyset$. So necessarily, for some $q\in G^*\subseteq {\Bbb Q}^N={\Bbb Q}^M$ we have \[N\models\bigl(q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}} [r\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb R}}\psi(\mathunderaccent\tilde-3 {\eta}[\mathunderaccent\tilde-3 {G}_{{\Bbb Q}} ])]\bigr).\] Now ${\Bbb R}\in N$ (as its members are finite sets of pairs of ordinals), so \begin{enumerate} \item[$(\otimes)$] $N\models\bigl((q,r)\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}\times {\Bbb R}}\psi( \mathunderaccent\tilde-3 {\eta}[\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}])\bigr)$. \end{enumerate} Next, $N[G_{{\Bbb R}}]$ is a generic extension of $N$, which is a model of ${\rm ZFC}^-_{**}$, by a ``small'' forcing, so $N[G_{{\Bbb R}}]$ satisfies (i), (ii) and (iii) of clause (d) of the assumptions. Note that $N\models$``$M$ is a semi ${\Bbb Q}$-candidate and a semi ${\Bbb P}$--candidate'' (see clause (d)(iii) of the assumptions and the choice of $M$), so also $N[G_{{\Bbb R}}]$ satisfies this. Moreover, $N[G_{\Bbb R}]\models$``$M$ is countable'' (as ${\Bbb R}={\rm Levy}(\aleph_0,|M|)$), so $N[G_{\Bbb R}]\models$``$M$ is a ${\Bbb Q}$-candidate and a ${\Bbb P}$--candidate''.
Hence by assumption (d)(ii) there are $p_1,\eta^\otimes,G^\otimes_{\Bbb Q}\in N[G_{{\Bbb R}}]$ such that: \[\begin{array}{ll} N[G_{{\Bbb R}}]\models\mbox{``}&p_1\in{\Bbb P},\ p\le^{{\Bbb P}} p_1,\ p_1\mbox{ is $(M, {\Bbb P}^M)$--generic and}\\ \ &p_1\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}[\eta^\otimes\mbox{ is a $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--real over $M[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^M]$ satisfying }q]\\ \ &\mbox{moreover, }p_1\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{\Bbb P}\mbox{`` }\eta^\otimes=\mathunderaccent\tilde-3 {\eta}[ G^\otimes_{\Bbb Q}]\mbox{ '', and}\\ \ &G^\otimes_{\Bbb Q}\subseteq {\Bbb Q}^{M[G_{\Bbb P}\cap{\Bbb P}^M]}\mbox{ is a generic set over $M$ such that $q\in G^\otimes_{\Bbb Q}$ ''}. \end{array}\] [Here we use the following:\quad if $G\subseteq{\Bbb P}^{N[G_{\Bbb R}]}$ is generic over $N[G_{\Bbb R}]$ then $N[G_{\Bbb R}]\langle G\rangle$ is a ${\Bbb Q}$--candidate (apply clause (iv) of the assumption (d) to ${\Bbb R}*\mathunderaccent\tilde-3 {{\Bbb P}}$).] It follows from clause (d)(v) of the choice of $M$ that \begin{quotation} $G^\otimes_{{\Bbb Q}}\cap {\Bbb Q}^M$ is generic over $M$. \end{quotation} Let $\mathunderaccent\tilde-3 {p}^1,\mathunderaccent\tilde-3 {\eta}^\otimes,\mathunderaccent\tilde-3 {G}^\otimes_{\Bbb Q}\in N$ be ${\Bbb R}$--names such that $\mathunderaccent\tilde-3 {\eta}^\otimes[G_{{\Bbb R}}]=\eta^\otimes$, $G^\otimes_{\Bbb Q}= \mathunderaccent\tilde-3 {G}^\otimes_{\Bbb Q}[G_{\Bbb R}]$ and $\mathunderaccent\tilde-3 {p}^1[G_{{\Bbb R}}]=p_1$, and without loss of generality in $N$ we have \[\begin{array}{ll} r\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb R}}\mbox{``}&\mathunderaccent\tilde-3 {\eta}^\otimes\mbox{ is a $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic real over $M$ satisfying $q$ and }\mathunderaccent\tilde-3 {p}^1\in {\Bbb P}\mbox{ and}\\ \ &\mathunderaccent\tilde-3 {p}^1\mbox{ forces $(\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}})$ that $\mathunderaccent\tilde-3 {\eta}^\otimes$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over }M[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^M],\\ \ &\mathunderaccent\tilde-3 {G}^\otimes_{\Bbb Q}\mbox{ is a subset of ${\Bbb Q}^M$ generic over ${\bf V}$ ''.} \end{array}\] Let $G'_{{\Bbb R}}\subseteq {\Bbb R}$ be generic over $N[G_{{\Bbb R}}]$, to which $r$ belongs, so $N[G_{{\Bbb R}}][G'_{{\Bbb R}}]$ is a forcing extension of $N[G_{{\Bbb R}}]$ so both are generic extensions of $N$ by a small forcing. Now $\mathunderaccent\tilde-3 {G}^\otimes_{\Bbb Q}$ is essentially a complete embedding of ${\Bbb Q}{\,|\grave{}\,}(\ge q)$ into ${\Bbb R}$ (by basic forcing theory, see the footnote to \ref{0.9}(1)(d); and we can use the value for 0 of the function $\bigcup\{f: f\in G_{{\Bbb R}}\}$ to choose $q'$, $q\le q'\in {\Bbb Q}^N$). Hence, for some ${\Bbb Q}$--name $\mathunderaccent\tilde-3 {{\Bbb R}}^*$ we have $({\Bbb Q}{\,|\grave{}\,}(\ge q))*\mathunderaccent\tilde-3 {{\Bbb R}}^*$ is ${\Bbb R}$, so $G_{{\Bbb R}}=G^\otimes_{{\Bbb Q}} * G_{{\Bbb R}^*}$ for some $G_{{\Bbb R}^*}\in {\bf V}[G_{{\Bbb R}}]$, where ${\Bbb R}^*=\mathunderaccent\tilde-3 {{\Bbb R}}^*[G^\otimes_{{\Bbb Q}}]$, $\mathunderaccent\tilde-3 {{\Bbb R}}^*$ a ${\Bbb Q}$--name. 
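[Let us recall, in the form used here, the standard two--step factoring of a forcing notion through a complete suborder: if ${\Bbb Q}_0\lesdot{\Bbb R}$ (identifying ${\Bbb Q}_0$ with its image under the complete embedding), then ${\Bbb R}$ is equivalent to the two--step iteration ${\Bbb Q}_0*({\Bbb R}/\mathunderaccent\tilde-3 {G}_{{\Bbb Q}_0})$, where
\[{\Bbb R}/G_{{\Bbb Q}_0}=\{r\in{\Bbb R}:r\mbox{ is compatible in ${\Bbb R}$ with every member of }G_{{\Bbb Q}_0}\},\]
and every generic $G_{{\Bbb R}}\subseteq{\Bbb R}$ factors as $G_{{\Bbb Q}_0}*G_{{\Bbb R}/G_{{\Bbb Q}_0}}$ with $G_{{\Bbb Q}_0}=G_{{\Bbb R}}\cap{\Bbb Q}_0$; this is how $\mathunderaccent\tilde-3 {{\Bbb R}}^*$ and $G_{{\Bbb R}^*}$ above are obtained.]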
So we can represent $N[G_{{\Bbb R}}][G'_{{\Bbb R}}]$ also as $N^3\stackrel{\rm def}{=}N[ G^\otimes_{{\Bbb Q}}][G'_{{\Bbb R}}][G_{{\Bbb R}^*}]$; i.e.\ forcing first with ${\Bbb Q}{\,|\grave{}\,} (\geq q)$, then with ${\Bbb R}$, lastly with $\mathunderaccent\tilde-3 {{\Bbb R}}^*[G^\otimes_{{\Bbb Q}}]$. Now let $N^2\stackrel{\rm def}{=} N[ G^\otimes_{{\Bbb Q}}][G'_{{\Bbb R}}]$, so $N^2$ is a generic extension of $N$ and $N^3$ is a generic extension of $N^2$ (both by ``small'' forcing), and in $N^3$ we have $p^1$ and $\eta^\otimes$ and $G^\otimes_{\Bbb Q}$. But $G^\otimes_{{\Bbb Q}}\times G'_{{\Bbb R}}$ is a generic subset of $({\Bbb Q}^N{\,|\grave{}\,}\geq q)\times {\Bbb R}$ over $N$, so essentially a generic (over $N$) subset of ${\Bbb Q}^N\times {\Bbb R}$ to which $(q,r)$ belongs, hence (by $(\otimes)$ above) $N^2\models\psi(\mathunderaccent\tilde-3 {\eta}[G^\otimes_{{\Bbb Q}}])$. Therefore there is no $p'\in N^2$ such that\footnote{of course, we can use a weaker demand on $G^\otimes_{\Bbb Q}$}: \begin{enumerate} \item[$(\boxtimes)$] \quad $N^2\models[p'\in {\Bbb P}$, $p\le p'$, $p'\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P} }$``$\mathunderaccent\tilde-3 {\eta}[G^\otimes_{{\Bbb Q}}]$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $M[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap {\Bbb P}^M]$'']. \end{enumerate} In $N^2$ we can find a countable $M'{\rm pr}ec ({\cal H}(\chi_2)^{N^2},\in)$ to which ${\Bbb P}$, ${\Bbb Q}$, $\mathunderaccent\tilde-3 {\eta}^\otimes$, $\mathunderaccent\tilde-3 {p}^1$, $\mathunderaccent\tilde-3 {{\Bbb R}}^*$, and $G^\otimes_{{\Bbb Q}}$, $G'_{{\Bbb R}}$, $M$ belong (so $N^2\models$`` $M'$ is countable and is a ${\Bbb P}$--candidate and a ${\Bbb Q}$-candidate'') and $M''=M'{\,|\grave{}\,} N\in N$. In $N^2$ we can find $G'_{\mathunderaccent\tilde-3 {{\Bbb R}}^*[G^\otimes_{{\Bbb Q}}]}\subseteq\mathunderaccent\tilde-3 {{\Bbb R}}^*[ G^\otimes_{{\Bbb Q}}]\cap M'$ which is generic over $M'$. In $M^3=M''[G^\otimes_{\Bbb Q}] [G'_{\mathunderaccent\tilde-3 {{\Bbb R}}^*[G^\otimes_{{\Bbb Q}}]}]$, again by the forcing theorem, there is $p'$ as required in $(\boxtimes)$ above. So $M^3$ is a ${\Bbb P}$--candidate inside $N^2$, hence there is $p'_1$ such that $N^2\models$``$p'\le p'_1$ and $p'_1$ is $(M^3,{\Bbb P})$--generic''. By the amount of absoluteness we require (moving up from $M^3$ to $N^2$) this $p'_1$ can serve in $N^2$ for $(\boxtimes)$, contradiction to the previous assertion. \QED$_{\ref{6.5}}$ \begin{proposition} \label{6.8} Assume (a),(b),(c) and (d) of \ref{6.5} and \begin{enumerate} \item[(e)] ${\rm ZFC}^-_{**}$ is weakly normal, \item[(f)] ${\Bbb Q}$ is c.c.c. and simple (for simplicity) and correct. \end{enumerate} {\em Then} ${\Bbb P}$ is strongly $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving. \end{proposition} \noindent{\sc Proof} \hspace{0.2in} First note that $I^{\rm ex}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}=I^{\rm dx}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$. Now, if the conclusion fails as witnessed by a set $X$, then, by \ref{6.7}, the statements (A), (B), (C) of \ref{6.7} fail. Hence, by $\neg$(A), $X\in (I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+$ and $p\in{\Bbb P}$ and a ${\Bbb P}$--name $\mathunderaccent\tilde-3 {y}$ such that \[\begin{array}{ll} p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! 
\vdash }_{{\Bbb P}}\mbox{``}&\mathunderaccent\tilde-3 {y}\in {\cal H}_{<\aleph_1}({\Bbb Q})\mbox{ and for no ${\Bbb Q}$-candidate $M$ such that $\mathunderaccent\tilde-3 {y}\in M$}\\
\ &\mbox{there is }\nu\in X\mbox{ which is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over }M[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}]\mbox{ ''.}
\end{array}\]
As we can increase $p$, without loss of generality $\mathunderaccent\tilde-3 {y}$ is a hereditarily countable ${\Bbb P}$--name. As ${\rm ZFC}^-_{**}$ is weakly normal we can find a model $N$ of ${\rm ZFC}^-_{**}$ which is a ${\Bbb P}$--candidate and a ${\Bbb Q}$--candidate and to which $p,\mathunderaccent\tilde-3 {y}$ belong. Let $\eta^*\in X\subseteq {}^{\textstyle \omega}\omega$ (in ${\bf V}$) be $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N$ (it exists by the negation of (B) of \ref{6.7}). By $(\beta)$ of \ref{6.5} there is $q\in{\Bbb P}$ such that $p\le q$, $q$ is $\langle N,{\Bbb P}\rangle$-generic and $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{ {\Bbb P}}$``$\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}]$'', a contradiction. \QED
\begin{proposition}
\label{6.9}
Assume (a), (b), (c) of \ref{6.5}. Let ${\Bbb P},{\Bbb Q}$ be normal and assume that forcing with ${\Bbb P}$ preserves ``${\cal I}\subseteq{\Bbb Q}$ is predense''. Then
\begin{enumerate}
\item[$(\alpha)'$] ${\Bbb P}$ is strongly $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving,
\item[$(\beta)$] for $\chi$ large enough, if $N\prec ({\cal H}(\chi),\in)$ is countable (and ${\frak C},{\frak B}^{{\Bbb Q}},{\frak B}^{{\Bbb P}},\mathunderaccent\tilde-3 {\eta}\in N$) and $N\models$``$p\in{\Bbb P}$'' and $\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N$ then for some $q$ we have
\begin{enumerate}
\item[(i)] $p\le q$, $q\in{\Bbb P}$,
\item[(ii)] $q$ is $\langle N,{\Bbb P}\rangle$--generic; i.e.\ $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap{\Bbb P}^N$ is generic over $N$'',
\item[(iii)] $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$\eta^*$ is $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic over $N[{\Bbb P}^N\cap G_{{\Bbb P}}]$''.
\end{enumerate}
\end{enumerate}
\end{proposition}
\noindent{\sc Proof} \hspace{0.2in} Let $\chi_1$ be a large enough strong limit, $\chi_2=\beth_\omega( \chi_1)$ and $\chi_3=\beth_\omega(\chi_2)$, and repeat the proof of \ref{6.5} using $N\prec ({\cal H}(\chi_3),\in)$ to which ${\frak C},{\frak B}^{{\Bbb Q}},{\frak B}^{{\Bbb P}}$ and ${\Bbb P},{\Bbb Q},\theta,\chi_1,\chi_2$ belong.
\QED$_{\ref{6.9}}$ \begin{proposition} \label{6.10} Assume (a),(b),(c) of \ref{6.5} and \begin{enumerate} \item[(d)$'$] ${\rm ZFC}^-_{**}$ is a version of set theory including, for some $\chi_1 <\chi_2$ \begin{enumerate} \item[(i)] $({\cal H}(\chi_2),\in)$ is a (well defined) model of ${\rm ZFC}^-_*$, \item[(ii)] (a) and (b) (with ${\frak B}^{{\Bbb P}},{\frak B}^{{\Bbb Q}},\mathunderaccent\tilde-3 {\eta}$ as individual constants) and (c), \item[(iii)] ${\Bbb Q},{\Bbb P}\in {\cal H}(\chi_1)$ and $({\cal H}(\chi_2),\in)$ is a ${\Bbb P}$--candidate and a ${\Bbb Q}$--candidate with ${\frak B}^{{\Bbb P}}$ interpreted as $({\frak B}^{{\Bbb P}})^N{\,|\grave{}\,} {\cal H}(\chi_2)$ and similarly for ${\Bbb Q}$, so \[\mbox{`` }{\frak B}^{{\Bbb P}},{\frak B}^{{\Bbb Q}}\in {\cal H}(\chi_2)\mbox{ ''},\] \item[(iv)] forcing with ${\Bbb P}$ preserves being a ${\Bbb Q}$--candidate, \item[(v)] ${\Bbb Q}$ satisfies the c.c.c.\ and being incompatible in ${\Bbb Q}$ is upward absolute from ${\Bbb Q}$--candidates. \end{enumerate} \end{enumerate} {\em Then} for models of ${\rm ZFC}^-_{**}$, forcing with ${\Bbb P}$ preserves ``${\cal I}$ is a predense subset of ${\Bbb Q}$'' (i.e.\ (d)(v) of\ref{6.5}). \end{proposition} \noindent{\bf Remark:}\qquad 1)\quad When ${\rm ZFC}^-_{**}$ is normal, this applies to \ref{9.6}.\\ 2)\quad Compare with \ref{5.5A}(iii). Here we have redundant assumptions as we have a use for \ref{6.5} in mind. \noindent{\sc Proof} \hspace{0.2in} Let $N$ be a model of ${\rm ZFC}^-_{**}$ and let $N\models$``${\cal I}$ is a predense subset of ${\Bbb Q}$''. As $N\models$``${\Bbb Q}$ satisfies the c.c.c.'' we can find in $N$ a set ${\cal J}\subseteq\{q:N\models({\rm ex}ists p \in {\cal I})(p\le q\in {\Bbb Q})\}$ such that \[N\models\mbox{`` }{\cal J}\mbox{ is countable, say $\{p_n:n <\omega\}$, and ${\cal J}$ is predense in ${\Bbb Q}$ ''.}\] Toward contradiction assume $p^*\in {\Bbb Q}^N$ and $\mathunderaccent\tilde-3 {r}\in N$ are such that \[N\models\mbox{`` }p^*\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}\mathunderaccent\tilde-3 {r}\in{\Bbb Q}\mbox{ is incompatible with every $p_n$ ''.}\] We can replace $N$ by ${\cal H}(\chi_2)^N$. Let $M\in N$ be such that \[\begin{array}{ll} N\models&\mbox{`` }M{\rm pr}ec ({\cal H}(\chi_2)^N,\in)\mbox{ is countable and}\\ \ &{\Bbb P},{\Bbb Q},{\frak B}^{{\Bbb P}},{\frak B}^{{\Bbb Q}},\langle p_n:n <\omega\rangle,p^*, \mathunderaccent\tilde-3 {r}\in M\mbox{ ''.} \end{array}\] In $N$ we can find $G\subseteq {\Bbb P}^N$ generic over $M$, so $M$ inherits from ${\cal H}(\chi_2)$ the property $M[G]$ is a ${\Bbb Q}$--candidate and also $M[G] \models$``$\mathunderaccent\tilde-3 {r}[G],p_n$ are incompatible in ${\Bbb Q}$''. So $\mathunderaccent\tilde-3 {r}[G]\in {\Bbb Q}^N$ contradicts the choice of ${\cal J}$. \QED \begin{conclusion} For \ref{6.5}$(\beta)$ to hold, we can omit clause (v) of (e) there if we add: \begin{enumerate} \item[(g)] ${\Bbb Q}$ satisfies the c.c.c. in ${\Bbb Q}$-candidates and being incompatible in ${\Bbb Q}$ is upward absolute from ${\Bbb Q}$--candidates. \end{enumerate} \end{conclusion} We can conclude (phrased for simplicity for strongly c.c.c. nep). \begin{conclusion} \label{6.11} Assume that \begin{enumerate} \item[(a)] ${\Bbb Q}$ is strongly c.c.c. explicitly nep (see Definition \ref{5.14}) and simple and correct, \item[(b)] $\mathunderaccent\tilde-3 {\eta}\in {}^{\textstyle \omega}\omega$ generic for ${\Bbb Q}$, a hereditarily countable ${\Bbb Q}$--name. 
\end{enumerate} If ${\Bbb P}_0$ is nep, $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving and $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}_0}$``$\mathunderaccent\tilde-3 {{\Bbb P}}_1$ is nep, $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving'' then ${\Bbb P}_0 * \mathunderaccent\tilde-3 {{\Bbb P}}_1$ is (nep and) $I_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})}$--preserving. \QED \end{conclusion} The reader may ask: what about $\omega$ limits (etc)? We shall address these problems in the continuation \cite{Sh:F264}. \stepcounter{section} \subsection*{\quad 8. Non-symmetry} The following hypothesis \ref{7.1} will be assumed in this and the next section, though for the end (including the main theorems \ref{8.7}--\ref{8.10}) we assume snep (i.e.\ \ref{7.1A}). \begin{hypothesis} \label{7.1} ${\Bbb Q}$ is correct c.c.c.\ simple, strongly c.c.c. nep, $\mathunderaccent\tilde-3 {\eta}$ is a hereditarily countable name of a generic real, i.e.\ $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})\in{\cal K}$ (see Definition \ref{5.2A}) for ${\rm ZFC}^-_*$ and ${\rm ZFC}^-_*$ (and the properties above) are preserved by a forcing of cardinality $<\bar{\chi}$, $|{\Bbb Q}|^{ \aleph_0}<\bar{\chi}$, for ${\Bbb Q}$-candidates. \end{hypothesis} \begin{hypothesis} \label{7.1A} Like \ref{7.1} with snep. \end{hypothesis} \begin{definition} \label{7.2} Let ${\Bbb Q},\mathunderaccent\tilde-3 {\eta}$ be as in \ref{7.1} and let $\alpha$ be an ordinal. \begin{enumerate} \item Let ${\Bbb Q}^{[\alpha]}$ be ${\Bbb P}_\alpha$, where $\langle{\Bbb P}_i,{\mathunderaccent\tilde-3 {\Bbb Q}}_j:i\le \alpha,j<\alpha\rangle$ is a FS iteration and ${\mathunderaccent\tilde-3 {\Bbb Q}}_j= {\Bbb Q}^{{\bf V}[{\Bbb P}_j]}$. \item We let $\mathunderaccent\tilde-3 {\eta}^{[\alpha]}$ be $\langle\mathunderaccent\tilde-3 {\eta}_\ell:\ell<\alpha \rangle$, where $\mathunderaccent\tilde-3 {\eta}_\ell$ is $\mathunderaccent\tilde-3 {\eta}$ ``copied to ${\mathunderaccent\tilde-3 {\Bbb Q}}_\ell$'' (see \ref{7.3}(1) below). \item $({\Bbb Q}^{\langle\alpha\rangle},\mathunderaccent\tilde-3 {\eta}^{\langle\alpha\rangle})$ is defined similarly as an FS product. \item For a finite set $u\subseteq\alpha$ we define $F=F^{\alpha,u}_{\Bbb Q}:{\Bbb Q} \longrightarrow {\Bbb Q}^{[\alpha]}$ by $F(p)=\bar{p}$, where $\bar{p}=\langle p_\ell:\ell<\alpha\rangle$, $p_\ell=p$ if $\ell\in u$ and $p_i=\emptyset_{\Bbb Q}$ otherwise. \item The FS iteration $\bar{{\Bbb Q}}=\langle{\Bbb P}_i,{\mathunderaccent\tilde-3 {\Bbb Q}}_j,\mathunderaccent\tilde-3 {\eta}_j:i\le \alpha,j<i\rangle$ of neps means $({\mathunderaccent\tilde-3 {\Bbb Q}}_j,\mathunderaccent\tilde-3 {\eta}_j)\in{\cal K}$. \noindent We write $\mathunderaccent\tilde-3 {\eta}$ to mean $\mathunderaccent\tilde-3 {\eta}_j=F^{\alpha,\{j\}}_{{\Bbb Q}} (\mathunderaccent\tilde-3 {\eta})$. \end{enumerate} \end{definition} \begin{proposition} \label{7.3} \begin{enumerate} \item In Definition \ref{7.2}(4), for finite $u \subseteq\alpha$, $F=F^{\alpha, u}_{\Bbb Q}$ is a complete ($\lesdot$) embedding, as ``$p \le q$'', ``$p,q$ compatible'', ``$p,q$ incompatible'', ``$\langle p_n:n<\omega\rangle$ is predense set above $q$'' are upward absolute from ${\Bbb Q}$--candidates (holds as ${\Bbb Q}$ is strongly c.c.c.\ by \ref{7.1}). So $\mathunderaccent\tilde-3 {\eta}_\ell$ is $F^{\alpha, \{\alpha\}}_{{\Bbb Q}}(\mathunderaccent\tilde-3 {\eta})$ if $\alpha \in u$. \item ${\Bbb Q}^{[\alpha]}$ satisfies the c.c.c. \item Same holds for ${\Bbb Q}^{\langle \alpha \rangle}$. 
\item $({\Bbb Q}^{[\alpha]},\mathunderaccent\tilde-3 {\eta}^{[\alpha]})$ for $\alpha<\omega_1$ are as in \ref{7.1}, too.
\end{enumerate}
\end{proposition}
\noindent{\sc Proof} \hspace{0.2in} For example:
\noindent 3)\quad It is enough to prove it for finite $\alpha$, and this we prove by induction on $\alpha$; so assume $\alpha=n+1$. For the c.c.c.\ use ``incompatibility is absolute'' for forcing by ${\Bbb Q}^{\langle n\rangle}$, so we can use the last phrase in \ref{7.1}.
\noindent 4)\quad The main point here is the strong c.c.c., so let $N$ be a ${\Bbb Q}$-candidate (and $\alpha+1\subseteq N$) and
\[N\models\mbox{`` }{\cal I}\subseteq{\Bbb Q}^{[\alpha]}\mbox{ is predense ''}.\]
Let $G^{[\alpha]}\subseteq {\Bbb Q}^{[\alpha]}$ be generic over ${\bf V}$ and for $\beta \leq\alpha$, $G^{[\beta]}=G^{[\alpha]}\cap{\Bbb Q}^{[\beta]}$. We show by induction on $\beta$ that $G^{[\beta]}\cap ({\Bbb Q}^{[\beta]})^N$ is a generic subset of $({\Bbb Q}^{[\beta]})^N$ over $N\langle G^{[\beta]}\rangle$. \QED
\begin{definition}
\label{7.4}
\begin{enumerate}
\item We say that ${\Bbb Q}$ is $[n]$--symmetric if:
\begin{quotation}
{\em if}\ $\langle\eta^*_\ell:\ell<n\rangle$ is generic for $\langle{\Bbb P}_\ell, {\mathunderaccent\tilde-3 {\Bbb Q}}_\ell,\mathunderaccent\tilde-3 {\eta}_\ell:\ell<n\rangle$ and $\sigma$ is a permutation of $\{0,\ldots,n-1\}$\\
{\em then}\ $\langle\eta^*_{\sigma(\ell)}:\ell<n\rangle$ is generic for $\langle {\Bbb P}_\ell,{\mathunderaccent\tilde-3 {\Bbb Q}}_\ell,\mathunderaccent\tilde-3 {\eta}_\ell:\ell<n\rangle$.
\end{quotation}
\item If $({\Bbb Q}',\mathunderaccent\tilde-3 {\eta}')$, $({\Bbb Q}'',\mathunderaccent\tilde-3 {\eta}'')$ are as in \ref{7.1}, we say that they commute if:
\begin{quotation}
{\em if}\ $r'$ is $({\Bbb Q}',\mathunderaccent\tilde-3 {\eta}')$--generic over ${\bf V}$ and $r''$ is $({\Bbb Q}'',\mathunderaccent\tilde-3 {\eta}'')$--generic over ${\bf V}[r']$\\
{\em then}\ $r'$ is $({\Bbb Q}',\mathunderaccent\tilde-3 {\eta}')$--generic over ${\bf V}[r'']$
\end{quotation}
(note that ``$r''$ is $({\Bbb Q}'',\mathunderaccent\tilde-3 {\eta}'')$--generic over ${\bf V}$'' is always true by \ref{5.3B}).
\item For $({\Bbb Q}',\mathunderaccent\tilde-3 {\eta}')$, $({\Bbb Q}'',\mathunderaccent\tilde-3 {\eta}'')$ we say that they weakly commute if $({\Bbb Q}'{\,|\grave{}\,}(\ge q'),\mathunderaccent\tilde-3 {\eta}')$, $({\Bbb Q}''{\,|\grave{}\,}(\ge q''), \mathunderaccent\tilde-3 {\eta}'')$ commute for some $q'\in{\Bbb Q}'$ and $q''\in{\Bbb Q}''$.
\end{enumerate}
\end{definition}
\begin{proposition}
\label{7.5}
\begin{enumerate}
\item ``Commute'' is a symmetric relation.
\item For $n \ge 2$ we have:\\
${\Bbb Q}$ is $[n]$--symmetric \quad iff\\
${\Bbb Q},{\Bbb Q}^{[n-1]}$ commute and ${\Bbb Q}$ is $[n-1]$--symmetric \quad iff\\
${\Bbb Q}$ is $[2]$-symmetric.
\item If ${\Bbb P},{\Bbb Q}^{[n]}$ commute, $m \le n$ then ${\Bbb P},{\Bbb Q}^{[m]}$ commute. Similarly, if ${\Bbb P},{\Bbb Q}$ commute and ${\Bbb Q}'\lesdot{\Bbb Q}$, then ${\Bbb P},{\Bbb Q}'$ commute.
\item In part 3) we can replace $[-]$ by $\langle - \rangle$.
\item If $({\Bbb Q}',\mathunderaccent\tilde-3 {\eta}')$, $({\Bbb Q}'',\mathunderaccent\tilde-3 {\eta}'')$ weakly commute and ${\Bbb Q}',{\Bbb Q}''$ are homogeneous, then they commute.
\end{enumerate}
\end{proposition}
\noindent{\sc Proof} \hspace{0.2in} 1)\quad Let $({\Bbb Q}',\mathunderaccent\tilde-3 {\eta}')$, $({\Bbb Q}'',\mathunderaccent\tilde-3 {\eta}'')$ be as in \ref{7.1}.
Then ``$({\Bbb Q}',\mathunderaccent\tilde-3 {\eta}')$, $({\Bbb Q}'',\mathunderaccent\tilde-3 {\eta}'')$ commute'' says ${\Bbb Q}'*\mathunderaccent\tilde-3 {{\Bbb Q}}''={\Bbb Q}''*\mathunderaccent\tilde-3 {{\Bbb Q}}'$, which is symmetric. \noindent 2)\quad For the second ``iff'', use ``the permutations $\pi_\ell= (\ell,\ell +1)$ for $\ell < n$ generate the group of permutations of $\{0,\ldots,n-1\}$''. \QED$_{\ref{7.5}}$ \begin{proposition} \label{7.6} \begin{enumerate} \item If ${\Bbb Q}^{[\omega]}$ and ${\rm Cohen}$ do not commute, {\em then} for some $n< \omega$, ${\Bbb Q}^{[n]}$ and ${\rm Cohen}$ do not commute.\\ (The inverse holds by \ref{7.5}(3), second phrase.) \item If ${\Bbb Q}^{\langle\omega\rangle}$ and ${\rm Cohen}$ do not commute, {\em then} for some $n<\omega$, ${\Bbb Q}^{\langle n\rangle}$ and ${\rm Cohen}$ do not commute. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} 1)\quad Since ${\rm Cohen}$ and ${\Bbb Q}^{[\omega]}$ do not commute, there is a ${\Bbb Q}^{[\omega]}$--name $\mathunderaccent\tilde-3 {{\cal I}}$ of a dense open subset of ${\rm Cohen}$ (i.e.\ of $({}^{\omega >}2,\vartriangleleft)$) such that for some condition $(p,\mathunderaccent\tilde-3 {q})\in{\rm Cohen}*\mathunderaccent\tilde-3 {{\Bbb Q}}^{[\omega]}$ we have \[(p,\mathunderaccent\tilde-3 {q})\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }\mbox{`` }\mathunderaccent\tilde-3 {\eta}^{{\rm Cohen}}\mbox{ has no initial segment in }\mathunderaccent\tilde-3 {{\cal I}}\mbox{ ''}.\] Without loss of generality for some $n^*<\omega$ we have $p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\rm Cohen}}$``${\rm Dom}(\mathunderaccent\tilde-3 {q})\subseteq\{0,\ldots,n^*-1\}$''. Let $\mathunderaccent\tilde-3 {{\cal I}}'$ be the ${\Bbb Q}^{[n^*]}$--name for the following set: \[\{\eta\in {}^{\omega >}2:\mbox{for some } p \in{\Bbb Q}^{[\omega]},\ p{\,|\grave{}\,} n^* \in \mathunderaccent\tilde-3 {G}_{{\Bbb Q}^{[n^*]}}\mbox{ and }p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}^{[\omega]}}\mbox{``}\eta \in \mathunderaccent\tilde-3 {\cal I}\mbox{''}\}.\] It should be clear that $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}^{[n^*]}}$``$\mathunderaccent\tilde-3 {\cal I}'$ is a dense open subset of $({}^{\omega>}2,\vartriangleleft)$''. Now we ask the following question. \begin{quotation} Does $(p,\mathunderaccent\tilde-3 {q})\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\rm Cohen}*{\mathunderaccent\tilde-3 {\Bbb Q}}^{[n^*]}}$`` $\mathunderaccent\tilde-3 {\eta}^{{\rm Cohen}}{\,|\grave{}\,} n \notin \mathunderaccent\tilde-3 {{\cal I}}$ for each $n<\omega$ ''? \end{quotation} If yes, we have gotten the desired conclusion (i.e.\ ${\rm Cohen}$ and ${\mathunderaccent\tilde-3 {\Bbb Q}}^{[n^*]}$ do not commute). If not, for some $(p',\mathunderaccent\tilde-3 {q}')$ such that $(p,\mathunderaccent\tilde-3 {q})\le (p',\mathunderaccent\tilde-3 {q}')\in{\rm Cohen}*{\mathunderaccent\tilde-3 {\Bbb Q}}^{[n^*]}$ and for some $n<\omega$ we have: \[(p',\mathunderaccent\tilde-3 {q}')\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\rm Cohen}*{\mathunderaccent\tilde-3 {\Bbb Q}}^{[n^*]}}\mbox{`` }\mathunderaccent\tilde-3 {\eta}^{\rm Cohen}{\,|\grave{}\,} n=\eta\in\mathunderaccent\tilde-3 {{\cal I}}'\mbox{ ''}.\] Without loss of generality, for some $p\in({\Bbb Q}^{[\omega]})^{\bf V}$ we have $(p', q')\mathrel {{\vrule height 6.9pt depth -0.1pt}\! 
\vdash }$``$p{\,|\grave{}\,} n^*\in G_{{\rm Cohen}*{\mathunderaccent\tilde-3 {\Bbb Q}}^{[n^*]}}$'' and $p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }$``$\eta \in \mathunderaccent\tilde-3 {{\cal I}}$''. Then $(p',q'\cup p{\,|\grave{}\,} [n^*,\omega))$ forces (in ${\rm Cohen}*{\mathunderaccent\tilde-3 {\Bbb Q}}^{[\omega]}$) that $\mathunderaccent\tilde-3 {\eta}^{\rm Cohen}{\,|\grave{}\,} n=\eta\in \mathunderaccent\tilde-3 {{\cal I}}$, a contradiction. \noindent 2)\quad Similarly. \QED$_{\ref{7.6}}$ \begin{proposition} \label{7.7} \begin{enumerate} \item If ${\Bbb Q}^{[n]}$ and ${\rm Cohen}$ do not commute (${\Bbb Q}$ as before), then ${\Bbb Q}$ and ${\rm Cohen}$ do not commute (both ``absolute"). \item The following conditions are equivalent: \begin{enumerate} \item[(i)] ${\Bbb Q}$ commutes with Cohen, \item[(ii)] $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$({}^{\textstyle \omega}2)^{\bf V}$ is not meagre'', \item[(iii)] $(\forall A)[{\bf V}\models$``$A\subseteq{}^{\textstyle \omega}2$ non-meagre'' $\ \ \Rightarrow\ \ \mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$A$ is non-meagre''$]$ \end{enumerate} (all ``absolutely'', i.e.\ not only in the present universe but in its generic extensions too). \item We can replace ${\rm Cohen}$ by others to which \ref{6.5} applies and are homogeneous (see \ref{6.4A}). \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} 1)\quad Assume toward contradiction that $Q$ and ${\rm Cohen}$ commute (absolutely). Let $\eta\in{}^{\textstyle \omega}2$ be a Cohen real over ${\bf V}$. Let $G_\ell \subseteq{\Bbb Q}^{{\bf V}[G_0,\ldots,G_{\ell-1},\eta]}$ be generic over ${\bf V}[G_0,\ldots, G_{\ell-1},\eta]$ for $\ell<n$, and let $\eta_\ell=\mathunderaccent\tilde-3 {\eta}[G_\ell]$. We now prove by induction on $\ell$, that $\eta$ is a Cohen real over ${\bf V}[G_0, \ldots,G_{\ell -1}]$. The induction step is by the assumption ``${\Bbb Q}$ and ${\rm Cohen}$ commute''. The net result is that $\eta$ is a Cohen real over ${\bf V}[\eta_0,\ldots,\eta_{n-1}]$, contradicting the assumption. \noindent 2)\quad The second clause implies the third by \ref{6.5}. The third clause implies the second trivially. Let us argue that the implication $(i)\ \Rightarrow\ (ii)$ holds. Add $\aleph_1$ Cohen reals $\{\eta_i:i<\omega_1\}$ and then force by ${\Bbb Q}$. Let $G_{\Bbb Q}\subseteq {\Bbb Q}^{{\bf V}[\langle \eta_i:i<\omega_1\rangle]}$ be generic over ${\bf V}$, and $\mathunderaccent\tilde-3 {\eta}=\mathunderaccent\tilde-3 {\eta}_{{\Bbb Q}}[G_{{\Bbb Q}}]$. Then (i) implies that for every $j<\omega_1$ we have: $\eta_j$ is Cohen over ${\bf V}[\langle\eta_i:i< \omega_1,i\ne j\rangle,\eta]$. Hence in ${\bf V}[\langle\eta_i:i<\omega_1\rangle, \eta]= {\bf V}[\langle\eta_i:i<\omega_1\rangle][G_{{\Bbb Q}}]$, the set $\{\eta_i:i< \omega_1\}$ is not meagre and consequently (ii) holds. Lastly, assume (iii) and let $\nu\in {}^\omega|{\Bbb Q}|$ be generic for ${\rm Levy}(\aleph_0,|{\Bbb Q}|)$. Let $\eta$ be $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$--generic real over ${\bf V}[\nu]$. By (ii), we can find in ${\bf V}[\nu]$ a real $\rho\in {}^{\textstyle \omega}2$ which is in no meagre set from ${\bf V}[\eta]$ (note that there are countably many such meagre sets from the point of view of ${\bf V}[\nu]$). Now we easily finish. \noindent 3)\quad Same proof. \QED$_{\ref{7.7}}$ \stepcounter{section} \subsection*{\quad 9. 
Poor Cohen commutes only with himself} \begin{definition} \label{8.1} \begin{enumerate} \item We say a ${\Bbb Q}$--name $\mathunderaccent\tilde-3 {x}$ of a subset of some countable $a^*\in {\bf V}$ is [somewhere] essentially Cohen if $B_2({\Bbb Q},\mathunderaccent\tilde-3 {x})$ is [somewhere] essentially countable; i.e.\ [above some $p$] has countable density. \item We say $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})\in {\cal K}^{\neg c}$ (a non-Cohen pair) if: \begin{enumerate} \item[(a)] $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ is as in \ref{7.1A}, \item[(b)] $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ (see Definition \ref{5.3A}) is nowhere essentially Cohen (i.e.\ above every condition). \end{enumerate} \end{enumerate} \end{definition} \begin{hypothesis} \label{8.2} $\chi$ is regular large enough cardinal, and $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})\in{\cal K}^{\neg c}$ will be fixed as in \ref{8.1}, and ${\rm ZFC}^-_*$ is normal (see Definition \ref{0.9}). \end{hypothesis} \begin{definition} \label{8.2A} \begin{enumerate} \item ${\cal D}={\cal D}_{\le \aleph_0}({\cal H}(\chi))$ is the filter of clubs on $[{\cal H}(\chi)]^{\le \aleph_0}$. \item ${\cal C}_0=\{a:a{\rm pr}ec ({\cal H}(\chi),\in)$ is countable, and $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})\in a$ (i.e.\ their definitions) so is a ${\Bbb Q}$--candidate$\}$. \end{enumerate} \end{definition} \begin{definition} \label{8.2B} We say that $q\in{\Bbb Q}$ is strong on $a\in {\cal C}_0$ if: \begin{enumerate} \item[$(\circledast)_{a,q}$] the set $\{p\in a\cap{\Bbb Q}$: $p,q$ are incompatible in ${\Bbb Q}\}$ is dense in the (quasi) order ${\Bbb Q}\cap a$. \end{enumerate} \end{definition} \begin{proposition} \label{8.3} \begin{enumerate} \item For every $a\in {\cal C}_0$ there is $q\in{\Bbb Q}$ which is strong on $a$. \item Moreover, for every $p\in{\Bbb Q}$ and $a\in {\cal C}_0$ there is $q$ strong on $a$ such that $p\le^{{\Bbb Q}} q$. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Clearly $\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\cap a$ is a ${\Bbb Q}$--name of a countable subset of an old set ${\Bbb Q}\cap a$, so it can be considered as a real. Note that \begin{enumerate} \item[$(*)_1$] $\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\cap a$ is not somewhere essentially Cohen. \end{enumerate} Why? We can restrict ourselves to be above some fix $p\in{\Bbb Q}$. From $\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\cap a$ we can compute $\mathunderaccent\tilde-3 {\eta}$ (as $\mathunderaccent\tilde-3 {\eta}\in a$, i.e.\ the relevant maximal antichains belong to $a$), so $\mathunderaccent\tilde-3 {\eta}$ can be considered a $B_2[{\Bbb Q},\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\cap a]$--name. But ``any (name of) a real in an essentially Cohen forcing notion is essentially Cohen itself'', so $\mathunderaccent\tilde-3 {\eta}$ is essentially Cohen ${\Bbb Q}$--name, contradicting Hypothesis \ref{8.2}. Consequently, $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\cap a$ is not a generic subset of ${\Bbb Q}{\,|\grave{}\,} a$ (over ${\bf V}$)'' and hence $p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_Q$``$\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}\cap a$ is not a generic subset of ${\Bbb Q}{\,|\grave{}\,} a$ (over ${\bf V}$)''. 
Thus there are $q$ and ${\cal I}$ such that: \begin{enumerate} \item[(i)] $p \le q \in {\Bbb Q}$, \item[(ii)] ${\cal I}\subseteq{\Bbb Q}\cap a$ is a dense open subset of $Q{\,|\grave{}\,} a$, \item[(iii)] $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}$ is disjoint to ${\cal I}$''. \end{enumerate} But this means that \begin{enumerate} \item[$(*)_2$] $q$ is incompatible with every $r \in {\cal I}$. \end{enumerate} [Why? Otherwise $q\not\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$r\notin\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}$''.]\\ So $\{r\in a\cap{\Bbb Q}:q,r$ incompatible (in ${\Bbb Q}$)$\}$ is a subset of ${\Bbb Q}\cap a$ including ${\cal I}$ hence it is dense in ${\Bbb Q}{\,|\grave{}\,} a$. \QED$_{\ref{8.3}}$ \begin{choice} \label{8.3A} We choose $\bar{p}=\langle p_a:a\in {\cal C}_0\rangle$ such that $p_a\in{\Bbb Q}$ is strong on $a$ (possible by \ref{8.3}). \end{choice} \begin{definition} \label{8.4} \begin{enumerate} \item For $R \subseteq {\Bbb Q}$ let $A[R]\stackrel{\rm def}{=}\{a\in {\cal C}_0: p_a\in R\}$. \item ${\cal D}_{\bar{p}}={\cal D}_{{\Bbb Q},\bar{p}}\stackrel{\rm def}{=}\{R \subseteq{\Bbb Q}:A[R]\in{\cal D}\}$.\\ The family of ${\cal D}_{\bar{p}}$--positive sets will be denoted ${\cal D}^+_{\bar{p}}$ (so for a set $S\subseteq {\Bbb Q}$, $S\in {\cal D}^+_{\bar{p}}$ iff $R\cap S\neq\emptyset$ for each $R\in {\cal D}_{\bar{p}}$). \item For $R\subseteq{\Bbb Q}$ and $q\in{\Bbb Q}$ let $R[q]\stackrel{\rm def}{=}\{p\in R:p,q$ are incompatible in $Q\}$ (so $R[q]$ is in a sense the orthogonal complement of $q$ inside $R$). \end{enumerate} \end{definition} \begin{fact} \label{8.4A} \begin{enumerate} \item ${\cal D}_{\bar{p}}$ is an $\aleph_1$--complete filter on ${\Bbb Q}$. \item For $R\subseteq{\Bbb Q}$ we have $R\in {\cal D}^+_{\bar{p}}\ \Leftrightarrow \ A[R]\in {\cal D}^+$. \end{enumerate} \end{fact} \begin{proposition} \label{8.5} If $R\in {\cal D}^+_{\bar{p}}$ then the set \[R^\otimes\stackrel{\rm def}{=}\{q\in{\Bbb Q}:R[q]\in {\cal D}^+_{\bar{p}}\}\] is dense in ${\Bbb Q}$. \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Assume not, so for some $q^* \in Q$ we have \begin{enumerate} \item[$(*)_1$] there is no $q\in{\Bbb Q}$ such that $q^*\le q\in{\Bbb Q}\ \&\ R[q]\in {\cal D}^+_{\bar{p}}$. \end{enumerate} Thus \[\begin{array}{lll} q^*\le q\in{\Bbb Q}&\Rightarrow &R[q]=\emptyset\mbox{ mod }{\cal D}_{\bar{p}}\ \ \Rightarrow\ \ A[R[q]]=\emptyset\mbox{ mod } {\cal D}\\ \ &\Rightarrow&\mbox{for some club }{\cal C}_q \subseteq {\cal C}_0\mbox{ of } [{\cal H}(\chi)]^{\le \aleph_0} \mbox{ we have} \\ \ &\ &(\forall a \in {\cal C}_q)[p_a \notin R[q], \mbox{ i.e.\ $p_a,q$ are compatible}]. \end{array}\] Let ${\cal C}^*=\{a\in {\cal C}_0:q^*\in a\mbox{ and }(\forall q)[q^*\le q\in a \cap{\Bbb Q}\ \Rightarrow\ a\in {\cal C}_q]\}$. As each ${\cal C}_q$ is a club of $[{\cal H}(\chi)]^{\le \aleph_0}$ clearly ${\cal C}^*$ (as a diagonal intersection) is a club of $[{\cal H}(\chi)]^{\le \aleph_0}$, i.e.\ ${\cal C}^* \in {\cal D}$. Since $R\in {\cal D}^+_{\bar{p}}$ we have $A[R]\in {\cal D}^+$, so together with the previous sentence we know that there is $a^*\in A[R]\cap{\cal C}^*$. By the choice of $\bar{p}$ (see \ref{8.3A}, and Definition \ref{8.2B}) as $q^*\in a^*\cap {\Bbb Q}$ (see the choice of ${\cal C}^*$) for some $q$ we have: \[q^* \le q\in a^*\quad\mbox{ and }\quad p_{a^*},q\mbox{ are incompatible}.\] Now this contradicts ``$a^*\in C_q$''. 
\QED$_{\ref{8.5}}$
\begin{definition}
\label{8.6}
Assume $\chi_1=(2^\chi)^+$ (so ${\cal H}(\chi)\in {\cal H}(\chi_1)$) and $N$ is a countable elementary submodel of $({\cal H}(\chi_1),\in)$ to which $\chi,{\Bbb Q},\bar{p}$ belong (so ${\cal D}_{\bar{p}} \in N$). Further, assume that ${\Bbb Q}$ is snep.
\begin{enumerate}
\item We let ${\rm Cohen}_N={\rm Cohen}_{N,{\Bbb Q}}$ be $({\cal D}^+_{{\Bbb Q},\bar{p}}, \supseteq){\,|\grave{}\,} N$ (so this is a countable atomless forcing notion and hence equivalent to Cohen forcing).
\item If $G_N\in\mbox{ Gen}(N,{\cal D}^+_{{\Bbb Q},\bar{p}})\stackrel{\rm def}{=} \{G:G\subseteq{\rm Cohen}_N\mbox{ is generic for }(N,({\cal D}^+_{{\Bbb Q},\bar{p}}, \supseteq){\,|\grave{}\,} N)\}$ (possibly in a universe ${\bf V}'$ extending ${\bf V}$) {\em then} let $\mathunderaccent\tilde-3 {p}_N[G_N]$ be the sequence (i.e.\ in ${}^{\textstyle \omega}\omega$ or just a member of ${}^\omega\theta({\Bbb Q})$) such that for each $\ell<\omega$ and $\gamma$
\[(\mathunderaccent\tilde-3 {p}_N[G_N])(\ell)=\gamma\quad \Leftrightarrow\quad (\exists R\in G_N)( \forall p\in R)[p(\ell)=\gamma].\]
\end{enumerate}
\end{definition}
\begin{proposition}
\label{8.7}
Assume \ref{7.1A} and, additionally, ${\Bbb Q}$ is Souslin c.c.c.\ (i.e.\ the incompatibility relation is $\Sigma^1_1$). If $\chi_1,N$ and $G\in\mbox{ Gen} (N,{\cal D}^+_{{\Bbb Q},\bar{p}})$ are as in \ref{8.6} (so $G$ is possibly in some generic extension ${\bf V}_1$ of ${\bf V}$ but ${\rm Cohen}_N$ is from ${\bf V}$) {\em then}
\begin{enumerate}
\item[(a)] $\mathunderaccent\tilde-3 {p}_N[G]$ is an $\omega$--sequence (i.e.\ for each $\ell$ there is one and only one $\gamma$),
\item[(b)] $\mathunderaccent\tilde-3 {p}_N[G]\in{\Bbb Q}$,
\item[(c)] $\mathunderaccent\tilde-3 {p}_N[G]$ is strong on $N{\,|\grave{}\,} {\cal H}(\chi)$ (which belongs to ${\cal C}_0$).
\end{enumerate}
\end{proposition}
\noindent{\sc Proof} \hspace{0.2in} For every $p\in{\Bbb Q}$ there is $\nu_p\in{}^{\textstyle \omega}\omega$ which witnesses $p\in{\Bbb Q}$, i.e.\ $p*\nu_p\in\lim(T^{{\Bbb Q}}_0)$. So choose such a function $p\mapsto\nu_p$. Now in ${\bf V}$, for $n<\omega$ the function $p_a\mapsto (p_a{\,|\grave{}\,} n,\nu_{p_a} {\,|\grave{}\,} n)$ is a mapping from $\{p_a:a \in {\cal C}_0\}\in {\cal D}_{\bar{p}}$ with countable range. Since ${\cal D}_{\bar{p}}$ is $\aleph_1$--complete:
\begin{enumerate}
\item[$(*)_1$] in ${\bf V}$, if $R\in {\cal D}^+_{\bar{p}}$ and $n<\omega$ then for some $R'\subseteq R$ and $(\eta^n,\nu^n)$ we have
\[R'\in {\cal D}^+_{\bar{p}}\quad\mbox{ and }\quad(\forall p\in R')[(p{\,|\grave{}\,} n, \nu_p {\,|\grave{}\,} n)=(\eta^n,\nu^n)].\]
\end{enumerate}
This is inherited by $N$, hence $\mathunderaccent\tilde-3 {p}_N[G]$ satisfies clauses (a), (b) (in fact
\[\mathunderaccent\tilde-3 {\nu}[G]=\bigcup\{\nu^*:\mbox{ for some } n<\omega\mbox{ and } R\in G \mbox{ we have } (\forall p\in R)[\nu_p{\,|\grave{}\,} n=\nu^*]\}\]
is a witness for $\mathunderaccent\tilde-3 {p}_N[G]\in{\Bbb Q}$). Also for each $q\in{\Bbb Q}\cap N$ the set
\[\begin{array}{ll}
{\cal J}_q=\bigl\{R\in{\cal D}^+_{\bar{p}}:&\mbox{for some }q'\in{\Bbb Q} \mbox{ stronger than $q$ we have:}\\
\ &(\forall p \in R)[p,q' \mbox{ are incompatible (in ${\Bbb Q}$)}]\bigr\}
\end{array}\]
is a dense subset of $({\cal D}^+_{\bar{p}},\supseteq)$ (remember $p_a$ is strong on $a$; use Fodor's lemma). Clearly it belongs to $N$, so by the demand on $G$ we know that $G\cap {\cal J}_q \ne\emptyset$.
Choose $R_q\in G\cap {\cal J}_q$ and let $q'\in {\Bbb Q}\cap N$ witness it, so
\[R_q\in {\cal D}^+_{\bar{p}}\cap N\quad\mbox{ and }\quad(\forall p\in R_q) [p,q'\mbox{ are incompatible}].\]
Now ``incompatible in ${\Bbb Q}$'' is a $\Sigma^1_1$--relation (belonging to $N$) hence, as above, $\mathunderaccent\tilde-3 {p}_N[G],q'$ are incompatible. As $q$ was any member of ${\Bbb Q}\cap N$ we have finished proving clause (c). \QED$_{\ref{8.7}}$
\begin{proposition}
\label{8.8}
Assume \ref{7.1A} and let ${\Bbb Q}$ be Souslin c.c.c. Then ${\Bbb Q}^{[\omega]}$ (see \ref{7.2}) and ${\rm Cohen}$ do not commute.
\end{proposition}
\noindent{\sc Proof} \hspace{0.2in} Assume that ${\Bbb Q}^{[\omega]}$ and ${\rm Cohen}$ do commute. Let $\chi$ be large enough and let $N \prec ({\cal H}(\chi),\in)$ be countable such that $({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})\in N$ (as in \ref{8.6}). Now we can interpret a Cohen real $\nu$ (over ${\bf V}$) as a subset of ${\cal D}^+_{\bar{p}}\cap N$ called $g_\nu$. Thus it is ${\rm Cohen}_{N,{\Bbb Q}}$--generic over ${\bf V}$ so $\mathunderaccent\tilde-3 {p}_{N}[g_\nu]$ is well defined, and it belongs to ${\Bbb Q}^{{\bf V}[\nu]}$ (by \ref{8.7}). Moreover, in ${\bf V}[\nu]$ we have:
\[\{q\in{\Bbb Q}^N:q,\mathunderaccent\tilde-3 {p}_N[g_\nu]\mbox{ are incompatible}\}\mbox{ is dense in } {\Bbb Q}^N.\]
Let $\langle \eta_\ell:\ell<\omega\rangle$ be generic for $({\Bbb Q}^{[\omega]}, \mathunderaccent\tilde-3 {\eta}^{[\omega]})$ and let $\nu$ be Cohen generic over ${\bf V}[\langle \eta_\ell:\ell<\omega\rangle]$. For each $\ell$, clearly $\eta_\ell$ is $({\Bbb Q}, \mathunderaccent\tilde-3 {\eta})$--generic over ${\bf V}$, so let $\eta_\ell=\mathunderaccent\tilde-3 {\eta}[G_\ell]$, where $G_\ell\subseteq{\Bbb Q}$ is generic over ${\bf V}$. Clearly $G_\ell\cap N$ is a subset of ${\Bbb Q}^N$ generic over ${\bf V}$ (by ``${\Bbb Q}$ is strongly c.c.c.''). So $\langle G_\ell\cap N,g_\nu\rangle$ is a subset of ${\Bbb Q}^N*({\cal D}^+_{ \bar{p}}\cap N,\supseteq)$ generic over $N$. By \ref{8.7}, for any $q\in{\Bbb Q}^N$ and $R\in({\cal D}^+_{\bar{p}}\cap N)$, for some $R'\subseteq R$ and $q'$ we have $R'\in ({\cal D}^+_{\bar{p}}\cap N)$, $N\models$``$q\leq q'\in{\Bbb Q}$'' and
\[N\models(\forall p\in R')(p,q'\mbox{ are incompatible\/}).\]
So look at the set
\[\{(q,R)\in{\Bbb Q}^N\times({\cal D}^+_{\bar{p}}\cap N): (\forall p\in R)(p, q\mbox{ are incompatible\/})\}\]
-- there is $(q,R)\in (G_\ell\cap N)\times g_\nu$ which belongs to it. Hence, as in \ref{8.7}, for each $\ell$, $\mathunderaccent\tilde-3 {p}_N[g_\nu]$ is incompatible with some $q\in G_{{\Bbb Q}}[\eta_\ell]$. By the assumption that the forcing notions commute we know that $\langle \eta_\ell:\ell<\omega\rangle$ is generic for $({\Bbb Q}^{[\omega]},\mathunderaccent\tilde-3 {\eta}^{ [\omega]})$ over ${\bf V}[\nu]$. Necessarily (by FS + genericity) for some $\ell$ we have $F^{\omega,\{\ell\}}_{{\Bbb Q}}(\mathunderaccent\tilde-3 {p}_N[g_\nu])\in G_{{\Bbb Q}^{[\omega]}}[\langle \eta_\ell:\ell<\omega\rangle]$; a contradiction. \QED$_{\ref{8.8}}$
\begin{conclusion}
\label{8.9}
Assume \ref{7.1A} and let ${\Bbb Q}$ be Souslin c.c.c. Then $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})$ does not commute with ${\rm Cohen}$ (even above any $q\in{\Bbb Q}$).
\end{conclusion}
\noindent{\sc Proof} \hspace{0.2in} If we restrict ourselves above $q_0\in{\Bbb Q}$, Hypothesis \ref{8.2} still holds, so we can ignore this. By \ref{8.8}, $({\Bbb Q}^{[\omega]}, \mathunderaccent\tilde-3 {\eta}^{[\omega]})$ does not commute with ${\rm Cohen}$.
So by \ref{7.6} we have that, for some $n$, $({\Bbb Q}^{[n]},\mathunderaccent\tilde-3 {\eta}^{[n]})$ does not commute with ${\rm Cohen}$ and by \ref{7.7} we finish. \QED$_{\ref{8.9}}$ \begin{proposition} \label{8.9A} If ${\Bbb Q}$ is Souslin c.c.c.\ then for suitable ${\rm ZFC}^-_*$, ${\Bbb Q}$ satisfies \ref{7.1A}. \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Let $\rho\in{}^{\textstyle \omega}2$ be the real parameter in the definition of ${\Bbb Q}$. Let ${\rm ZFC}^-_*$ say: \begin{enumerate} \item[(a)] ZC (i.e.\ the axioms of Zermelo satisfied by $({\cal H}( \beth_\omega),\in)$), \item[(b)] ${\Bbb Q}$ (defined from $\rho$ which is an individual constant) satisfies the c.c.c. \item[(c)] for each $n<\omega$, generic extensions for forcing notions of cardinality $\le\beth_\omega$ preserve (b) (and, of course (a)). \end{enumerate} Now the desired properties are easy. \QED$_{\ref{8.9A}}$ \begin{conclusion} \label{8.10} If ${\Bbb Q}$ is a Souslin c.c.c.\ forcing notion which is not ${}^{\textstyle \omega}\omega$--bounding (say $p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }$`` there is an unbounded $\mathunderaccent\tilde-3 {\eta}\in{}^{\textstyle \omega}\omega$ ''), but adds an essentially non-Cohen real {\em then} ${\Bbb Q}$ does not commute with itself. \end{conclusion} \noindent{\sc Proof} \hspace{0.2in} By \cite{Sh:480}, ${\Bbb Q}$ adds a Cohen real; now by the assumptions, for some ${\Bbb Q}$--name $\mathunderaccent\tilde-3 {\eta}$, $({\Bbb Q},\mathunderaccent\tilde-3 {\eta})\in{\cal K}^{\neg c}$. By \ref{8.9} we know that ${\Bbb Q}$ and Cohen do not commute, so by \ref{7.5}(3) we are done. \QED$_{\ref{8.10}}$ \begin{conclusion} \label{8.11} If ${\Bbb Q}$ is a Souslin c.c.c.\ forcing notion adding a non-Cohen real, {\em then} the forcing by ${\Bbb Q}$ makes the old reals meagre. \end{conclusion} \stepcounter{section} \subsection*{\quad 10. Some c.c.c.\ nep forcing notions are not nice} We may wonder can we replace the assumption ``${\Bbb Q}$ is Souslin c.c.c.'' by weaker one in \S8 and in \cite{Sh:480}. We review limitations and then see how much we can weaken it. \begin{proposition} \label{9.1} Assume that $\eta^* \in{}^{\textstyle \omega}2$ and $\aleph_1=\aleph^{{\bf L}[\eta^*]}_1$. Then there is a definition of a forcing notion ${\Bbb Q}$ (i.e.\ $\bar{\varphi}$) such that \begin{enumerate} \item[(a)] the definition is $\Sigma^1_1$ (with parameter $\eta^*$), so $p \in{\Bbb Q}$, $p \le^{{\Bbb Q}} q$, ``$p,q$ incompatible'', ``$\{p_n:n<\omega\}\subseteq a$ is a maximal antichain of ${\Bbb Q}$'' are preserved by forcing extensions, \item[(b)] ${\Bbb Q}$ is c.c.c.\ (even in a forcing extension; even $\sigma$--centered), \item[(c)] there is ${\Bbb Q}$--name $\mathunderaccent\tilde-3 {\eta}$ of a generic for ${\Bbb Q}$, \item[(d)] $\mathunderaccent\tilde-3 {\eta}$ is not essentially Cohen (preserved by extensions not collapsing $\aleph_1$), in fact has cardinality $\aleph_1$, \item[(e)] ${\Bbb Q}$ commutes with ${\rm Cohen}$, \item[(f)] ${\Bbb Q}$ is nep (though not Souslin c.c.c.). 
\end{enumerate}
\end{proposition}
\noindent{\sc Proof} \hspace{0.2in} A condition $p$ in ${\Bbb Q}$ is a quadruple $\langle E_p,X_p,u_p,w_p \rangle$ consisting of:\quad a 2-place relation $E_p$ on $\omega$, a subset $X_p$ of $\omega$, a finite subset $u_p$ of $X_p$ and a finite subset $w_p$ of $\omega$ such that:
\begin{quotation}
$N_p\stackrel{\rm def}{=}(\omega,E_p)$ is a model of ${\rm ZFC}^- + {\bf V}={\bf L}$ (let $<^{N_p}_*$ be the canonical ordering of $N_p$, we do not require well foundedness) such that:
\end{quotation}
\[\begin{array}{l}
(N_p,X_p)\models\mbox{`` }(\alpha)\ \mbox{every } x\in X_p\mbox{ is an infinite subset of }\omega,\\
\quad(\beta)\ \mbox{if } x\ne y\mbox{ are from } X_p \mbox{ then }x \cap y \mbox{ is finite},\\
\quad(\gamma)\ \mbox{if } x\in X_p\mbox{ then there is no } y\mbox{ satisfying} \\
\ y<^{N_p}_* x\ \&\ (\forall z \in X_p)(z <^{N_p}_* x\Rightarrow z\cap y \mbox{ finite})\ \&\ y \mbox{ an infinite subset of }\omega,\\
\quad(\delta)\ \bigwedge\limits_{n<\omega} (\forall z_1\ldots z_n\in X_p)\bigl( \bigwedge\limits_{\ell =1}^n z_\ell <^{N_p}_* x \Rightarrow (\exists^\infty m < \omega)(m \notin x \cup\bigcup\limits_{\ell =1}^n z_\ell) \bigr)\mbox{ ''.}
\end{array}\]
The order is defined by:\qquad $p \le q$ if and only if one of the following occurs:
\begin{enumerate}
\item[(A)] $p=q$,
\item[(B)] there are $Y\subseteq\omega$ and $a\in N_q$ and $f\in {}^{ \textstyle Y}\omega$ such that:
\begin{enumerate}
\item[(i)] $[x\in Y\ \&\ N_p \models y\in x]\quad \Rightarrow\quad y \in Y$,
\item[(ii)] $[N_p\models$``${\rm rk}(x)=y$'', $y\in Y]\quad\Rightarrow\quad x\in Y$,
\item[(iii)] $N_p{\,|\grave{}\,} Y$ is a model of $({\rm ZFC}^-+{\bf V}={\bf L})$,
\item[(iv)] the set $\{x:N_p \models$``$x$ an ordinal'', $x \notin Y\}$ has no first element,
\item[(v)] $N_q\models$``$a$ is a transitive set'',
\item[(vi)] $f$ is an isomorphism from $N_p{\,|\grave{}\,} Y$ onto $N_q{\,|\grave{}\,}\{b:N_q \models b\in a\}$,
\item[(vii)] $f$ maps $X_p$ onto $X_q{\,|\grave{}\,}{\rm Rang}(f)$,
\item[(viii)] $f$ maps $u_p \cap Y$ into $u_q \cap{\rm Rang}(f)$,
\item[(ix)] $w_p\subseteq w_q$,
\item[(x)] if $n\in w_q\backslash w_p$ and $x\in f(u_p)$ then $N_q \models$``the $n$-th natural number does not belong to $x$''.
\end{enumerate}
\end{enumerate}
The reader can now check the rest (note that $\mathunderaccent\tilde-3 {w}=\bigcup\{w_p:p\in\mathunderaccent\tilde-3 {G}_{{\Bbb Q}} \}$ is forced to be an infinite subset of $\omega$ almost disjoint from every $A \in X^*$, $X^*$ a reasonably defined MAD family in ${\bf L}$); see more details in the proof of \ref{9.3}. $\QED_{\ref{9.1}}$
\begin{proposition}
\label{9.2}
Assume ${\bf V}={\bf L}$. There is ${\Bbb Q}={\Bbb Q}_0 * {\mathunderaccent\tilde-3 {\Bbb Q}}_1$ such that:
\begin{enumerate}
\item[(a)] ${\Bbb Q}_0$ is nep, c.c.c., not adding a dominating real,
\item[(b)] $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}_0}$``${\mathunderaccent\tilde-3 {\Bbb Q}}_1$ is nep, c.c.c. (even Souslin c.c.c.), not adding a dominating real'',
\item[(c)] ${\Bbb Q}$ adds a dominating real,
\item[(d)] in fact, ${\Bbb Q}_0$ is the Cohen forcing (so in any ${\bf V}_1$ it is c.c.c., strongly c.c.c., correct, very simple nep (and snep), and it is really absolute, i.e.\ it is the same in ${\bf V}_1$ and ${\bf V}$, and its definition uses no parameters),
\item[(e)] moreover, ${\Bbb Q}_1$ is defined in ${\bf L}$, really absolute, and in any ${\bf V}_1$ it is c.c.c., strongly c.c.c.\ nep (and even snep).
In ${\bf V}_1$, ${\Bbb Q}_1$ adds a dominating real iff $({}^{\textstyle \omega}\omega)^{\bf L}$ is a dominating family in ${\bf V}_1$. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Let ${\Bbb Q}_0$ be ${\rm Cohen}$. We shall define ${\Bbb Q}_1$ in a similar manner as ${\Bbb Q}$ in the proof of \ref{9.1}. A condition in ${\mathunderaccent\tilde-3 {\Bbb Q}}_1$ is a triple $\langle E_p,u_p,w_p\rangle$ such that $E_p$ is a 2-place relation on $\omega$, $u_p$ is a finite subset of $\omega$ and $w_p$ is a finite function from a subset of $\omega$ to $\omega$ and: \begin{quotation} $N_p\stackrel{\rm def}{=}(\omega,E_p)$ is a model of ${\rm ZFC}^- + {\bf V}={\bf L}$ (let $<^{N_p}_*$ be the canonical ordering of $N_p$, we do not require well foundedness); so in formulas we use $\in$. \end{quotation} [What is the intended meaning of a condition $p$? Let \[M_p=N_p{\,|\grave{}\,}\{x: ({\rm Tc}(x)^{N_p}, E_p{\,|\grave{}\,} {\rm Tc}(x)^{N_p})\mbox{ is well founded\/}\},\] where ${\rm Tc}(x)$ is the transitive closure of $x$. Let $M_p'$ be the Mostowski collapse of $M_p$, $h_p:M_p\longrightarrow M_p'$ be the isomorphism. Now, $p$ gives us information on the function $\mathunderaccent\tilde-3 {w}=\bigcup\{w_p:p\in \mathunderaccent\tilde-3 {G}\}$ from $\omega$ to $\omega$, it says: $\mathunderaccent\tilde-3 {w}$ extends the function $w_p$ and if $x\in M_p\cap u_p$ is a function from $\omega$ to $\omega$ then for every natural number $n\notin{\rm Dom}(w_p)$ we have $x(n)\leq\mathunderaccent\tilde-3 {w}(n)$. Note that $h_p(x)$ is a function from $\omega$ to $\omega$ iff $M_p\models$`` $x$ is a function from $\omega$ to $\omega$ '' iff $N_p\models$`` $x$ is a function from $\omega$ to $\omega$ ''.]\\ The order is defined by:\qquad $p \le q$ if and only if one of the following occurs: \begin{enumerate} \item[(A)] $p=q$, \item[(B)] there are $Y \subseteq \omega$ and $a\in N_q$ and $f\in {}^{ \textstyle Y}\omega$ such that \begin{enumerate} \item[(i)] $[x\in Y \ \&\ N_p\models y\in x]\quad \Rightarrow\quad y\in Y$, \item[(ii)] $[N_p\models$``${\rm rk}(x)= y$'' $\ \&\ y\in Y]\quad\Rightarrow \quad x\in Y$, \item[(iii)] $N_p{\,|\grave{}\,} Y$ is a model of $({\rm ZFC}^- + {\bf V}={\bf L})$, \item[(iv)] the set $\{x:N_p \models$ ``$x$ an ordinal'', $x \notin Y\}$ has no first element (by $E_p$), \item[(v)] $N_q\models$``$a$ is a transitive set'', \item[(vi)] $f$ is an isomorphism from $N_p{\,|\grave{}\,} Y$ onto $N_q{\,|\grave{}\,} \{b: N_q \models b\in a\}$, \item[(vii)] $f$ maps $u_p\cap Y$ into $u_q\cap{\rm Rang}(f)$, \item[(viii)] $w_p\subseteq w_q$, \item[(ix)] if $n\in{\rm Dom}(w^q)\backslash{\rm Dom}(w^p)$ and $x\in u_p$, $N_p \models$``$x$ is a function from the natural numbers to the natural numbers'' and $x^*=f(x)$ {\em then} $N_q \models$``if $y$ is the $n$-th natural number then $w^q(y) > x(y)$''. \end{enumerate} \end{enumerate} Clearly ${\Bbb Q}$ is equivalent to ${\Bbb Q}'=(\mbox{the Hechler forcing})^{\bf L}$, just let us define, for $p\in{\Bbb Q}_1$, $g(p)=(w^p,F^p)$ where $F^p=\{h_p(x): x\in M_p\}$. Now, $g$ is onto ${\Bbb Q}'$ and \[\begin{array}{ll} {\Bbb Q}_1\models p\leq q\quad\Rightarrow\ &{\Bbb Q}'\models g(p)\leq g(q)\quad \Rightarrow\\ \ &\neg({\rm ex}ists p')(p\leq^{\Bbb Q} p'\ \&\ p',q\mbox{ are incompatible in }{\Bbb Q}). \end{array}\] The rest is left to the reader. 
\QED$_{\ref{9.2}}$
\begin{proposition}
\label{9.3}
\begin{enumerate}
\item Assume that:
\begin{enumerate}
\item[(a)] $\bar{\varphi}=(\varphi_0(x),\varphi_1(x,y))$ defines, in any model of ${\rm ZFC}^-_*$, a forcing notion ${\Bbb Q}_{\bar{\varphi}}$ with parameters from ${\bf L}_{\omega_1}$,
\item[(b)] for every $\beta<\omega_1$ such that ${\bf L}_\beta\models {\rm ZFC}^-_*$, for every $x,y\in {\bf L}_\beta$ we have:
\[[x\in{\Bbb Q}^{{\bf L}_\beta}_{\bar{\varphi}}\ \Leftrightarrow\ x\in {\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{\varphi}}]\quad\mbox{and}\quad [x< y\mbox{ in }{\Bbb Q}^{{\bf L}_\beta}_{\bar{\varphi}}\ \Leftrightarrow\ x < y\mbox{ in }{\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{\varphi}}],\]
\item[(c)] for unboundedly many $\alpha<\omega_1$ we have ${\bf L}_\alpha \models {\rm ZFC}^-_*$,
\item[(d)] any two compatible members of ${\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{ \varphi}}$ have a lub,
\item[(e)] like (b) for compatibility and for existence of lub.
\end{enumerate}
{\em Then}\ there is an $\aleph_0$--snep forcing notion ${\Bbb Q}$ equivalent to ${\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{\varphi}}$: the $\Sigma^1_1$ (i.e.\ Souslin) relations have just the real parameters of $\bar{\varphi}$.
\item We can use a real parameter $\rho$ and replace ${\bf L}_\alpha$ by ${\bf L}_\alpha[\rho]$.
\end{enumerate}
\end{proposition}
\noindent{\sc Proof} \hspace{0.2in} It is similar to the proof of \ref{9.1}. Let ${\Bbb Q}$ be the set of quadruples $p=(E_p,n_p,\bar{\alpha}_p,\bar{a}_p)$ such that:
\begin{enumerate}
\item[$(\alpha)$] $E_p$ is a two-place relation on $\omega$,
\item[$(\beta)$] $N_p\stackrel{\rm def}{=}(\omega,E_p)$ is a model of ${\rm ZFC}^-_* +{\bf V}={\bf L}$,
\item[$(\gamma)$] for some $n=n_p$ we have
\[\bar{\alpha}_p=\langle \alpha_{p,\ell}:\ell<n\rangle,\qquad \bar{a}_p= \langle a_{p,\ell}:\ell<n\rangle,\]
\item[$(\delta)$] $N_p\models$``$\alpha_{p,\ell}$ is an ordinal, $a_{p,\ell} \in {\bf L}_{\alpha_{p,\ell}}$, ${\bf L}_{\alpha_{p,\ell}}\models{\rm ZFC}^-_*$, and for $k\leq\ell<n$ we have ${\bf L}_{\alpha_{p,\ell}}\models\varphi_0(a_{p, k})$, and $\alpha_{p,\ell}<\alpha_{p,\ell +1}$ when $\ell+1<n$'',\quad and
\item[$(\varepsilon)$] if $m\leq k\leq\ell<n$ then $N_p\models$`` ${\bf L}_{ \alpha_{p,\ell}}\models\varphi_1(a_{p,m},a_{p,k})$ ''.
\end{enumerate}
The order is given by:\qquad $p_0 \leq_{{\Bbb Q}} p_1$ if and only if ($p_0,p_1\in {\Bbb Q}$ and) for some $Y_0,Y_1 \subseteq \omega$ and $f$ we have:
\begin{enumerate}
\item[(i)] for $\ell=0,1$:\quad $Y_\ell$ is an $E_{p_\ell}$--transitive subset of $N_{p_\ell}$,
\[(\forall x\in N_{p_\ell})(x\in Y_\ell\equiv{\rm rk}^{N_{p_\ell}}(x)\in Y_\ell),\]
\item[(ii)] $f$ is an isomorphism from $N_{p_0}{\,|\grave{}\,} Y_0$ onto $N_{p_1}{\,|\grave{}\,} Y_1$,
\item[(iii)] in $\{x\in N_{p_0}:N_{p_0}\models$``$x$ is an ordinal'', $x\notin Y_0\}$ there is no $E_{p_0}$--minimal element,
\item[(iv)] $f$ maps $\{\alpha_{p_0,\ell}:\ell<n_{p_0}\}\cap Y_0$ into $\{\alpha_{p_1,\ell}:\ell<n_{p_1}\}\cap Y_1$,
\item[(v)] if $f(\alpha_{p_0,k})=\alpha_{p_1,m}$ then $N_{p_1}\models$`` ${\bf L}_{\alpha_{p_1,m}}\models\varphi_1(f(a_{p_0,k}),a_{p_1,m})$ ''.
\end{enumerate}
\begin{claim}
\label{factA}
${\Bbb Q}$ is a quasi order.
\end{claim}
\noindent{\em Proof of the claim:}\qquad Check. Now define $M_p,h_p,M_p'$ as in the proof of \ref{9.2}.
\begin{claim}
\label{factC}
The set
\[{\Bbb Q}'\stackrel{\rm def}{=}\{p\in{\Bbb Q}:N_p\mbox{ is well founded, }n_p>0\}\]
is dense in ${\Bbb Q}$.
\end{claim}
\noindent{\em Proof of the claim:}\qquad Check.
Define $g:{\Bbb Q}'\longrightarrow{\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{\varphi}}$ by $g(p)=h_p(a_{p,n_p-1})$.
\begin{claim}
\label{factE}
$g$ is really a function from ${\Bbb Q}'$ onto ${\Bbb Q}^{{\bf L}_{\omega_1}}_{ \bar{\varphi}}$ and
\[\begin{array}{l}
p_0\leq_{{\Bbb Q}} p_1\quad\Rightarrow\quad {\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{\varphi}} \models g(p_0)\leq g(p_1)\quad\Rightarrow\\
\mbox{[if }p_1\leq_{\Bbb Q} p_2\mbox{ then for some $p_3$ we have $p_2 \leq_{\Bbb Q} p_3$ and $p_0\leq_{\Bbb Q} p_3$].}
\end{array}\]
\end{claim}
\noindent{\em Proof of the claim:}\qquad The first implication is immediate (by clause (v) in the definition of $\leq_{\Bbb Q}$). For the second implication assume ${\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{\varphi}}\models g(p_0)\leq g(p_1)$ and let $p_1\leq_{\Bbb Q} p_2$. For $\ell=0,1,2$ let
\[n_\ell=\min\{n: n=n_{p_\ell}\mbox{ or }(n<n_{p_\ell}\mbox{ and }\alpha_{p_\ell,n} \notin M_{p_\ell})\}.\]
Let $p_3$ be defined as follows: $M_{p_3}'={\bf L}_\gamma$, where ${\bf L}_\gamma\models {\rm ZFC}^-_*$ and $\gamma>M_{p_0}'\cap \omega_1,M_{p_1}'\cap \omega_1,M_{p_2}' \cap \omega_1$. Let $g^*_\ell$ be the isomorphism from $M_{p_\ell}$ onto ${\bf L}_{\gamma_\ell}$, $\gamma_\ell<\gamma$, and let $w=\{g^*_\ell(\alpha_{p_\ell, m}): m<n_\ell, \ \ell<2\}$. List it as $\{\alpha_{p_3,k}: k<n_{p_3}\}$ (increasing enumeration) and let $\Upsilon=\{g^*_\ell(a_{p_\ell,m}):m<n_\ell,\ \ell<2\}$. Now, $g^*_2(a_{p_2,n_2-1})$ is a $\leq_{{\Bbb Q}^{{\bf L}_{\omega_1}}_{ \bar{\varphi}}}$--upper bound of $\Upsilon$. Consequently, by clauses (d) and (e) of the assumptions, we can define $a_{p_3,m}$ as required. \QED$_{\ref{9.3}}$
\begin{proposition}
\label{9.4}
Assume that $\varphi=\varphi(x,y)$ is such that
\begin{enumerate}
\item[(i)] ${\rm ZFC}^-_*\vdash$ for every infinite cardinal $x\in X\stackrel{\rm def}{=}\{\alpha: \alpha=\omega$ or $\omega^\alpha=\alpha$ (ordinal exponentiation) $\}$, there is a unique $A_x$, an unbounded subset of $x$ of order type $x$ such that $\varphi(x,A_x)$, and $\psi(\cdot)$ defines a non-reflecting set $S\subseteq X$,
\item[(ii)] ${\rm ZFC}^-_*\vdash$ if $\mu_1<\mu_2$ are from $X$ then $A_{\mu_1} \nsubseteq A_{\mu_2}$,
\item[(iii)] $\omega_1=\sup\{\alpha:{\bf L}_\alpha\models{\rm ZFC}^-_*\}$, and the truth value of ``$\beta\in A_\gamma,\ \beta\in S$'' is the same in ${\bf L}_\alpha$ for every $\alpha<\omega_1$ for which ${\bf L}_\alpha\models {\rm ZFC}^-_*$,
\item[(iv)] the set $S$, i.e.\ $\{\beta<\omega_1: (\exists\alpha)({\bf L}_\alpha\models{\rm ZFC}^-_*\ \&\ \psi(\beta))\}$, is a stationary subset of $\omega_1$ [a kind of ``$\aleph^{{\bf V}}_1$ is below the first ineffable of ${\bf L}$'' and is not weakly compact].
\end{enumerate}
{\em Then} for some $\bar{\varphi}$ as in the assumptions of \ref{9.3}, and $\mathunderaccent\tilde-3 {\eta}$ we have:
\begin{enumerate}
\item[(a)] ${\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{\varphi}}$ is a c.c.c.\ forcing notion,
\item[(b)] $\mathunderaccent\tilde-3 {\eta}\in{}^{\textstyle \omega}2$ is a generic real of ${\Bbb Q}^{{\bf L}_{\omega_1} }_{\bar{\varphi}}$, and is nowhere essentially Cohen,
\item[(c)] ${\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{\varphi}}$ commutes with Cohen.
\end{enumerate}
\end{proposition}
\noindent{\sc Proof} \hspace{0.2in} Let $pr(\alpha,\beta)=(\alpha+\beta)(\alpha+\beta)+\alpha$; it is a pairing function.
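[Why is $pr$ a pairing function on the ordinals? Setting $\gamma=\alpha+\beta$ we have $\alpha\le\gamma$ and $pr(\alpha,\beta)=\gamma\cdot\gamma+\alpha\in [\gamma\cdot\gamma,\gamma\cdot\gamma+\gamma]$; since for $\gamma_1<\gamma_2$ we have $\gamma_1\cdot\gamma_1+\gamma_1<(\gamma_1+1)\cdot(\gamma_1+1)\le \gamma_2\cdot\gamma_2$, these intervals are pairwise disjoint, so from the value $pr(\alpha,\beta)$ we can recover first $\gamma$, then $\alpha$ (by left cancellation of ordinal addition) and then $\beta$ (the unique ordinal with $\alpha+\beta=\gamma$).]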
By coding, without loss of generality (e.g.\ letting \[A'_\alpha=\{pr^+(n,pr_n(\beta_1,\ldots,\beta_n)):n<\omega,\{\beta_1,\ldots, \beta_n\} \subseteq A_\alpha\},\] where $pr_1(\beta)=\beta$, $pr_{n+1}(\beta_1,\ldots,\beta_{n+1})=pr(pr_n( \beta_1,\ldots,\beta_n),\beta_{n+1})$) \begin{enumerate} \item[(ii)$'$] if $x,x_1,\ldots,x_n$ are distinct cardinals in ${\bf L}_{ \omega_1}$, then $A_x \nsubseteq\bigcup\limits^n_{\ell =1} A_{x_\ell}$. \end{enumerate} For $\delta\in X$ let $f_0(\delta)=\min(X\setminus (\delta+1))$ and let $f^1_\delta$ be the first (in the canonical well ordering of ${\bf L}$) one-to-one function from $f_0(\delta)$ onto $\delta$. Let $C_\delta$ be the first club of $\delta$ disjoint to $S$. For $\alpha\in [\omega,\omega_1)$, let $\delta_\alpha=\max(X\cap(\alpha+1))$ and let \[B^*_\alpha=\{pr_3(\varepsilon,\zeta,\xi):\varepsilon\in C_{\delta_\alpha},\ \zeta=f^1_{\delta_\alpha}(\alpha),\ \xi\in A'_{\delta_\alpha}\mbox{ and } \varepsilon>\zeta,\ \varepsilon>\xi\}.\] Note that \begin{enumerate} \item[$(*)$] $B^*_\alpha$ is an unbounded subset of $\delta_\alpha$ such that \begin{enumerate} \item[(a)] $\beta\in S\cap\alpha\quad\Rightarrow\quad\beta>\sup(B^*_\alpha\cap \beta)$, \item[(b)] if $\alpha_1,\ldots,\alpha_n\in [\omega,\omega_1)\setminus\{ \alpha\}$ then $B^*_\alpha\setminus \bigcup\limits_{\ell=1}^n B^*_{\alpha_\ell}$ is unbounded in $\delta_\alpha$. \end{enumerate} \end{enumerate} [Why? For (a), suppose that $\beta\in S\cap\alpha$. Trivially, $\min( B^*_\alpha)>\min(C_{\delta_\alpha})$, so $\gamma=\sup(C_{\delta_\alpha}\cap\beta)$ is well defined. Now, \[B^*_\alpha\cap\beta\subseteq \{pr_3(\varepsilon,\zeta,\xi): \varepsilon,\zeta,\xi\leq\gamma\}\subseteq (\gamma+\gamma+\gamma)^3<\beta\] (the last inequality follows from the fact that $\beta\in X$). To show (b) suppose that $\gamma_0<\delta_\alpha$ and choose $\xi\in B^*_\alpha\setminus \bigcup\limits_{\ell=1}^n B^*_{\alpha_\ell}$. Let $\zeta=f^1_{\delta_\alpha} (\alpha)$ and let $\varepsilon\in C_{\delta_\alpha}$ be large enough. So $pr_3(\varepsilon,\zeta,\xi)\in B^*_\alpha$ (by definition) and $pr_3( \varepsilon,\zeta,\xi)\notin B^*_{\alpha_\ell}$ (use the third coordinate) and $pr_3(\varepsilon,\zeta,\xi)>\varepsilon>\gamma_0$.] Let $I_\alpha$ be the ideal of subsets of $B^*_\alpha$ generated by \[\{B^*_\alpha\cap B^*_\beta:\omega\leq\beta<\omega_1,\ \beta\neq\alpha\}\cup \{B^*_\alpha\cap\beta:\beta<\delta_\alpha\}.\] Let ${\Bbb Q}$ be the set of finite functions $p$ from $\omega_1\setminus\omega$ to $\{0,1,2\}$ ordered by:\\ $p \le q$ if and only if: \begin{quotation} if $\alpha\in{\rm Dom}(p)$, $\beta\in{\rm Dom}(q)\cap A_\alpha\backslash{\rm Dom}(p)$\\ then $q(\beta)=p(\alpha)$ and $\beta>\sup(\delta_\alpha\cap{\rm Dom}(p))\ \vee\ q(\beta)=2$. \end{quotation} \begin{claim} \label{fact2A} ${\Bbb Q}$ is a partial order. \end{claim} \begin{claim} \label{fact2B} For each $\alpha\in [\omega,\omega_1)$ the set ${\cal I}_\alpha=\{p:\alpha\in {\rm Dom}(p)\}$ is dense in ${\Bbb Q}$. \end{claim} \noindent{\em Proof of the claim:}\qquad Let $p\in{\Bbb Q}$ and suppose that $\alpha\notin {\rm Dom}(p)$. Let $q=p\cup\{\langle\alpha,2\rangle\}$. Let $\mathunderaccent\tilde-3 {f}$ be the ${\Bbb Q}$--name defined by $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }\mathunderaccent\tilde-3 {f}=\bigcup \mathunderaccent\tilde-3 {G}_{\Bbb Q}$. \begin{claim} \label{fact2D} For $\alpha\in [\omega,\omega_1)$, \[\begin{array}{ll} \mathrel {{\vrule height 6.9pt depth -0.1pt}\!
\vdash }_{{\Bbb Q}}&\mbox{`` for some $\ell< 3$, for any $m<3$ we have}\\ \ &\{\beta\in B^*_\alpha:\mathunderaccent\tilde-3 {f}(\beta)=m\}\neq\emptyset\mod {\cal I}_\alpha\ \mbox{ iff }\ m\in\{0,\ell\}\mbox{ ''.} \end{array}\] \end{claim} \noindent{\em Proof of the claim:}\qquad Take $p\in\mathunderaccent\tilde-3 {G}_{\Bbb Q}$ such that $\alpha\in{\rm Dom}(p)$ and let \[B=\bigcup\{B^*_\alpha\cap B^*_\gamma:\gamma\in{\rm Dom}(p)\setminus\{\alpha\}\},\] so $B\in {\cal I}_\alpha$. Clearly, $p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }$`` if $\beta\in B^*_\alpha \setminus B$ then $\mathunderaccent\tilde-3 {f}(\beta)\in\{2,p(\alpha)\}$ '', hence \[p\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{\Bbb Q}\mbox{`` if }m\in\{0,1,2\}\setminus\{2,p(\alpha)\}\mbox{ then } m\notin{\rm Rang}(\mathunderaccent\tilde-3 {f}{\,|\grave{}\,} (B^*_\alpha\setminus B))\mbox{ ''.}\] Now, if $B'\in {\cal I}_\alpha$, and $p\leq_{\Bbb Q} q$ then there is $\gamma\in A_\alpha\setminus B'\setminus\bigcup\{B^*_\gamma: \gamma\in{\rm Dom}(p)\setminus\{ \alpha\}\}$ such that $q\cup\{\langle\gamma,2\rangle\}$ and $q\cup\{\langle\gamma,p(\alpha)\rangle\}$ are in ${\Bbb Q}$ above $q$. Reflecting we are done. \begin{claim} \label{fact2E} One can define $\mathunderaccent\tilde-3 {f}$ from $\mathunderaccent\tilde-3 {f}{\,|\grave{}\,}\omega\in {}^{\textstyle \omega}3$. \end{claim} \noindent{\em Proof of the claim:}\qquad Define $\mathunderaccent\tilde-3 {f}{\,|\grave{}\,}\alpha$ by induction on $\alpha\in X$ using \ref{fact2D}. \begin{claim} \label{fact2F} The forcing notion ${\Bbb Q}$ is nowhere essentially Cohen. \end{claim} \noindent{\em Proof of the claim:}\qquad For every $\alpha^*<\omega_1$ and for every large enough $\gamma<\omega_1$ and for $\ell\in \{0,1,2\}$, the condition $q_\ell=\{\langle\gamma,\ell\rangle\}$ is compatible with every $q\in{\Bbb Q}$ such that ${\rm Dom}(p)\subseteq\alpha^*$. \begin{claim} \label{fact2G} The ${\Bbb Q}$--name $\mathunderaccent\tilde-3 {f}$ (for a real) is nowhere essentially Cohen. \end{claim} \noindent{\em Proof of the claim:}\qquad By \ref{fact2E}, \ref{fact2F}. \begin{claim} \label{fact2H} The forcing notion ${\Bbb Q}$ satisfies the demands in \ref{9.4}. \end{claim} \noindent{\em Proof of the claim:}\qquad Check. \begin{claim} \label{fact2I} The forcing notion ${\Bbb Q}$ satisfies the c.c.c. \end{claim} \noindent{\em Proof of the claim:}\qquad Use ``$S\subseteq\omega_1$ is stationary''. \QED$_{\ref{9.4}}$ \begin{remark} \begin{enumerate} \item Of course, such forcing can make $\aleph_1$ to be $\aleph^{{\bf L}[\eta]}_1$. But it seems that we can have such forcing which preserves the ${\bf L}_{\omega_1}$--cardinals (and even their being ``large'' in suitable senses). For this it should be like ``coding the universe by a real'' of Jensen Beller Welch \cite{BJW}, and see Shelah Stanley \cite{ShSt:340}. \item Instead of coding $\aleph_1$--Cohen we can iterate adding dominating reals or whatever. \end{enumerate} \end{remark} \begin{definition} \label{9.6} \begin{enumerate} \item We say that forcing notions ${\Bbb Q}_0,{\Bbb Q}_1$ are equivalent if their completions to Boolean algebras (${\rm BA}({\Bbb Q}_0), {\rm BA}({\Bbb Q}_1)$) are isomorphic. 
\item Forcing notions ${\Bbb Q}_0,{\Bbb Q}_1$ are locally equivalent if \begin{enumerate} \item[(i)] for each $p_0\in{\Bbb Q}_0$ there are $q_0,q_1$ such that \[p_0\le q_0\in{\Bbb Q}_0\ \&\ q_1\in{\Bbb Q}_1\ \&\ {\rm BA}({\Bbb Q}_0{\,|\grave{}\,}(\ge q_0))\cong{\rm BA}( {\Bbb Q}_1{\,|\grave{}\,}(\ge q_1)),\] \item[(ii)] for every $p_1\in{\Bbb Q}_1$ there are $q_0,q_1$ such that \[q_0\in{\Bbb Q}_0\ \&\ p_1\le q_1\in{\Bbb Q}_1\ \&\ {\rm BA}({\Bbb Q}_0{\,|\grave{}\,}(\ge q_0))\cong{\rm BA}( {\Bbb Q}_1{\,|\grave{}\,} (\ge q_1)).\] \end{enumerate} \end{enumerate} \end{definition} Now we may phrase the conclusions of \ref{9.3}, \ref{9.4}. \begin{proposition} \label{9.5} \begin{enumerate} \item Assume $\bar{\varphi}_1=\langle \varphi^1_0,\varphi^1_1\rangle$ and $\bar{\varphi}^2_2=\langle \varphi^2_0,\varphi^2_1\rangle$ are as in \ref{9.3}. {\em Then}\ we can find $\bar{\varphi}^3_3$ as there, only with the parameters of $\bar{\varphi},\bar{\varphi}_2$ and such that: \begin{enumerate} \item[(a)] if in ${\bf L}_{\omega_1}$ there is a last cardinal $\mu$ (i.e.\ $\aleph^{{\bf V}}_1$ is a successor cardinal in ${\bf L}$), then ${\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{\varphi}_3}$ is locally equivalent to \[\bigcup\{{\Bbb Q}^{{\bf L}_\alpha}_{\bar{\varphi}_1}:\mu<\alpha,\ {\bf L}_\alpha \models\mu\mbox{ is the last cardinal}\},\] \item[(b)] if in ${\bf L}_{\omega_1}$ there is no last cardinal (i.e.\ $\aleph^{\bf V}_1$ is a limit cardinal in ${\bf L}$), then ${\Bbb Q}^{{\bf L}_{\omega_1}}_{\bar{\varphi_3}}$ is locally equivalent to \[\bigcup\{{\Bbb Q}^{{\bf L}_{\omega,\alpha}}_{\bar{\varphi}^1_2}:{\bf L}_{\omega_1} \models\alpha\mbox{ a cardinal}\}.\] \end{enumerate} \item In \ref{9.3}, \ref{9.4}(1) we can replace ${\bf L}_{\omega_1}$ by ${\bf L}_{\omega_1}[\eta^*]$, $\eta^* \in{}^{\textstyle \omega}\omega$. \item In \ref{9.3}, \ref{9.4}(1) we can replace ${\bf L}_{\omega_1}$ by${\bf L}_{\omega_1}[A]$ where $A \subseteq \omega_1$ but have $\aleph_1$-snep instead of $\aleph_0$-snep. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Let $\varphi_{3,0}(x)$ say \begin{enumerate} \item[(i)] $x=\langle\bar{\alpha}^x,\bar{\beta}^x,\bar{a}^x,\bar{b}^x\rangle$, $\bar{\alpha}^x=\langle\alpha^x_\ell:\ell\le n^x\rangle$, $\langle\langle \beta^x_{\ell,k}:k \le k^x_\ell\rangle:\ell\le n^x\rangle$, $\bar{a}^x= \langle\langle a^x_{\ell,k}:k \le k^x_\ell \rangle:\ell<n^x\rangle$, $\bar{a}^x=\langle a^x_\ell:\ell \le n^x \rangle$, $\bar{b}^x=\langle b^x_\ell:\ell<n^x\rangle$, \item[(ii)] $\alpha^x_\ell<\beta^x_{\ell,0}<\beta^x_{\ell,1}\ldots$, ${\bf L}_{\beta^x_\ell}\models$``$\alpha^\ell_x$ the last cardinal'', \item[(iii)] ${\bf L}_{\beta^x_\ell}\models$``$\varphi_{1,0}(b^x_\ell)$'', ${\bf L}_{\alpha^x_{\ell +1}}\models$``$\varphi_{2,0}(a^x_\ell)$'' for $\ell < n^x$, \item[(iv)] ${\bf L}_{\beta^x_{n^x}}\models$``$\alpha^x_\ell$ is a cardinal'', \item[(v)] ${\bf L}_{\alpha^x_{\ell +1}}\models$``$\varphi^2_1(a^x_\ell, a^x_{\ell+1})$''. \end{enumerate} Let $\beta(x)=x$. 
Let $\varphi_{3,1}(x,y)$ say: \begin{enumerate} \item[$(\alpha)$] $\beta^x_{n^x} \le \beta^y_{n_y}$, \item[$(\beta)$] $\{\alpha^x_\ell:\ell \le n^x$ and ${\bf L}_{\beta^y_y} \models$``$\alpha^x_\ell$ is a cardinal''$\}$ is a subset of $\{\alpha^y_\ell:\ell \le n^y\}$, \item[$(\gamma)$] if $\alpha^x_{\ell(*)}$ is maximal in $\{\alpha^x_\ell:\ell < n^x, {\bf L}_{\beta^y_{n^y}}\models$``$\alpha^x_\ell$ is a cardinal''$\}$ then $\bar{\alpha}^x{\,|\grave{}\,} \ell(*)=\bar{\alpha}^y {\,|\grave{}\,}\ell(x)$, $\bar{\beta}^x {\,|\grave{}\,}\ell(*)=\bar{\beta}^y{\,|\grave{}\,}\ell(*)$, $\bar{a}^x{\,|\grave{}\,}\ell(*)=\bar{a}^y {\,|\grave{}\,} \ell(*)$, $\bar{b}^x{\,|\grave{}\,}\ell(*)=\bar{b}^y{\,|\grave{}\,}\ell(*)$, \item[$(\delta)$] $\alpha^x_{\ell(*)}=\alpha^y_{\ell(*)}$, \item[$(\varepsilon)$] $\beta^x_{\ell(*)}\le\beta^y_{\ell(*)}$ and ${\bf L}_{\beta^y_{\ell(*)}}\models \varphi^1_{0,1}(b^x_{\ell(*)},b^y_{\ell(*)})$. \end{enumerate} Now check. \QED$_{\ref{9.5}}$ \stepcounter{section} \subsection*{\quad 11. Preservation of ``no dominating real''} The main result of \S7: (for homogeneous c.c.c.\ ${\Bbb Q}$) if a nep forcing ${\Bbb P}$ preserves $({}^{\textstyle \omega}\omega)^{\bf V}\in (I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+$ {\em then}\ it preserves $X\in (I^{{\rm ex}}_{({\Bbb Q},\mathunderaccent\tilde-3 {\eta})})^+$ (see \ref{6.5}) is a case of the following \begin{thesis} \label{6A.1} Nep forcing notions do not discern sets $X\subseteq{}^{\textstyle \omega}\omega$ built by diagonalization, say between $X,Y\subseteq{}^{\textstyle \omega}\omega$ which are generic enough. \end{thesis} But there are interesting cases not covered by \ref{6.5}, most prominent is: \begin{question} \label{6A.2} If a nep forcing notion preserves ``$F\subseteq{}^{\textstyle \omega}\omega$ is unbounded'' for some (unbounded) $F\subseteq{}^{\textstyle \omega}\omega$ {\em then} does it preserve this for every (unbounded) $F'\subseteq {}^{\textstyle \omega}\omega$? \end{question} \begin{definition} \label{6A.3} \begin{enumerate} \item For a Borel 2-place relation ${\cal R}$ on ${}^{\textstyle \omega}\omega$ let $I_{\cal R}$ be the $\aleph_1$--complete ideal on ${}^{\textstyle \omega}\omega$ generated by the sets of the form $A_\nu=\{\eta\in{}^{\textstyle \omega}\omega:\neg(\eta R \nu)\}$. \item We say that $\nu$ is ${\cal R}$--generic over $N$ if $\eta\in N\cap {}^{\textstyle \omega}\omega\ \Rightarrow\ \eta R\nu$. \item A forcing notion ${\Bbb P}$ is weakly ${\cal R}$--preserving if for any $\eta_0,\eta_1,\ldots,\eta_n,\ldots\in ({}^{\textstyle \omega}\omega)^{{\bf V}^{\Bbb P}}$ there is $\nu \in ({}^{\textstyle \omega}\omega)^{\bf V}$ such that $n<\omega\ \Rightarrow\ \eta_n {\cal R}\nu$ (i.e.\ $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}} ({}^{\textstyle \omega}\omega)^{\bf V} \in I^+_{\cal R}$). \item We say that a forcing notion ${\Bbb P}$ is ${\cal R}$--preserving if for any Borel subset $B$ of ${}^{\textstyle \omega}\omega$ from ${\bf V}$ which is in $I^+_{\cal R}$, for any $\eta_0,\eta_1,\ldots,\eta_n,\ldots \in ({}^{\textstyle \omega}\omega)^{{\bf V}^{\Bbb P}}$ there is $\nu\in B^{\bf V}$ such that $n<\omega\ \Rightarrow\ \eta_n {\cal R}\nu$ (i.e.\ $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}} B^{\bf V}\in I^+_{\cal R}$). \item We say that a forcing notion ${\Bbb P}$ is strongly ${\cal R}$--preserving if for any $X\in I^+_{\cal R}$ (in ${\bf V}$) we have $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$X\in I^+_{\cal R}$''. 
\item We say that a forcing notion ${\Bbb P}$ is super ${\cal R}$--preserving as witnessed by $({\frak B},{\rm ZFC}^-_{**})$ if \begin{enumerate} \item[(a)] every $({\frak B},{\rm ZFC}^-_*)$--candidate is a ${\Bbb P}$--candidate, \item[(b)] for any $({\frak B},{\rm ZFC}^-_{**})$--candidate $N$ such that $p\in N$ and for any $\nu\in{}^{\textstyle \omega}\omega$ which is ${\cal R}$--generic over $N$, {\em there is} $q$ such that $p\le_{{\Bbb P}} q\in {\Bbb P}$, $q$ is $\langle N,{\Bbb P} \rangle$--generic and $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }$``$\nu$ is ${\cal R}$--generic over $N\langle \mathunderaccent\tilde-3 {G}_{{\Bbb P}}\rangle = N\langle\mathunderaccent\tilde-3 {G}_{{\Bbb P}}\cap P^N \rangle$''. \end{enumerate} \end{enumerate} \end{definition} \begin{proposition} \label{6A.4} \begin{enumerate} \item If ${\Bbb P}$ is super ${\cal R}$--preserving, {\em then} ${\Bbb P}$ is strongly ${\cal R}$--preserving (also for ${\Bbb P}$ nep). \item If ${\Bbb P}$ is strongly ${\cal R}$--preserving, {\em then} ${\Bbb P}$ is ${\cal R}$--preserving. \item If ${\Bbb P}$ is ${\cal R}$--preserving, {\em then} it is weakly ${\cal R}$--preserving. \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Easy. \QED \begin{proposition} \label{6A.5} A sufficient condition for ``${\Bbb P}$ is super ${\cal R}$--preserving as witnessed by $({\frak B},{\rm ZFC}^-_{**})$'' is that for some nep forcing notion ${\Bbb Q}$ and ${\rm hc}$--${\Bbb Q}$-name $\mathunderaccent\tilde-3 {\eta}^*$ and a Borel relation ${\cal R}_1$ we have \begin{enumerate} \item[$(\alpha)$] every $({\frak B},{\rm ZFC}^-_{**})$--candidate is a ${\Bbb Q}$--candidate and $\mathunderaccent\tilde-3 {\eta}^*\in N$ and also it is a ${\Bbb P}$--candidate, \item[$(\beta)$] if $N$ is a $({\frak B},{\rm ZFC}^-_{**})$--candidate (so countable) and $\nu$ is ${\cal R}$--generic over $N$ then for some $G_{{\Bbb Q}} \subseteq {\Bbb Q}^N$ generic over $N$, in ${\bf V}$ we have \[(\forall x)(x\;{\cal R}_1\;\mathunderaccent\tilde-3 {\eta}^*[G_{{\Bbb Q}}])\ \Rightarrow\ x\; {\cal R}_1\; \nu)\] and for every $G_{{\Bbb Q}}\subseteq{\Bbb Q}^N$, generic over $N$, and $p\in{\Bbb P}^N$ there is $q$ such that $p\le q\in{\Bbb P}$ and $q$ is $(N,{\Bbb P})$--generic and \[q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}\mbox{`` }\mathunderaccent\tilde-3 {\eta}^*[G_Q]\mbox{ is ${\cal R}_1$--generic over }N \langle G_{{\Bbb P}}\cap{\Bbb P}^N \rangle\mbox{ ''}.\] \end{enumerate} \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Straight. \QED \begin{remark} \label{6A.6} In \ref{6A.7} below we phrase a sufficient condition. Note that clause $(\delta)$ can be naturally phrased as ``an appropriate sentence $\psi$ follows from ${\rm ZFC}^-_{**}$; this is slightly stronger as possibly $\psi$ holds only for all $({\frak B},{\rm ZFC}^-_{**})$--candidates but not for some (e.g.\ non-well founded) models of ${\rm ZFC}^-_{**}$ (this does not matter). 
\end{remark} \begin{proposition} \label{6A.7} Assume that: \begin{enumerate} \item[$(\alpha)$] ${\Bbb P},{\Bbb Q}$ are nep forcing notions, $\mathunderaccent\tilde-3 {\eta}^*$ is a ${\rm hc}$--${\Bbb Q}$--name, ${\cal R},{\cal R}_1$ are Borel relations, \item[$(\beta)$] every $({\frak B},{\rm ZFC}^-_{**})$--candidate $N$ is a ${\Bbb Q}$--candidate and ${\Bbb P}$-candidate, $\mathunderaccent\tilde-3 {\eta}^*\in N$, \item[($\gamma)$] for every $({\frak B},{\rm ZFC}^-_{**})$--candidate $N$ and $\nu\in {}^{\textstyle \omega}\omega$ which is ${\cal R}$--generic over $N$ and $r\in{\Bbb Q}^N$ we can find $G_Q \subseteq{\Bbb Q}^N$ generic over $N$ such that \[r\in G_Q\quad\mbox{ and }\quad(\forall x)(x\; {\cal R}\; \nu\ \ \Rightarrow\ \ x\; {\cal R}_1\;\mathunderaccent\tilde-3 {\eta}^*[G_Q]),\] \item[$(\delta)$] if $N$ is a $({\frak B},{\rm ZFC}^-_{**})$--candidate then for every $G_{{\Bbb Q}}\subseteq{\Bbb Q}^N$ generic over $N$ and $G_R\subseteq{\rm Levy}( \aleph_0,2^{|{\Bbb P}|} + 2^{|{\Bbb Q}|})^N$ generic over $N[G_Q]$ we have $N[G_Q][G_R]\models$`` there are $G_{\Bbb Q}$ and $q,p\le q\in{\Bbb P}$ such that $q$ is explicitly $(N,{\Bbb P})$--generic and $G_{\Bbb Q}$ is a generic over $N$ subset of ${\Bbb Q}^N$ and $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$\mathunderaccent\tilde-3 {\eta}^*[G_{{\Bbb Q}}]$ is ${\cal R}_1$--generic over $N[G_{{\Bbb Q}},\mathunderaccent\tilde-3 {G}_{{\Bbb P}}]$''. \end{enumerate} {\em Then} ``${\Bbb P}$ is super ${\cal R}$--preserving as witnessed by ${\rm ZFC}^-_{**}$'' . \end{proposition} \noindent{\sc Proof} \hspace{0.2in} Clause $(\alpha)$ of \ref{6A.5} holds by clause $(\beta)$ here. Next, the first demand in clause \ref{6A.5}$(\beta)$ (``for some $G_{\Bbb Q}\subseteq {\Bbb Q}^N$ generic over $N$'') follows from clause $(\gamma)$ of \ref{6A.7}. Finally, suppose that $G_{\Bbb Q}\subseteq{\Bbb Q}^N$ is generic over $N$, equivalently, $\nu^*$ is a ${\Bbb Q}$--generic real. Let $G\subseteq{\rm Levy}(\aleph_0, |2^{{\Bbb P}}|^N)$ be generic over $N$, equivalently over $N[\nu^*]$. In $N$, by clause (e) of the assumptions, in $N[G]$, there is a semi ${\Bbb P}$--candidate $M$, ${\cal P}({\Bbb P}^N)^N ={\cal P}({\Bbb P}^M)^M$. So in $N[G]$, $M[G]$ is a ${\Bbb P}$--candidate. So there is $q\in{\Bbb P}^{N[G]}$ such that $N[G]\models$``$p\le q$ and $q$ is $(M,{\Bbb P}^M)$--generic''. As above possibly increasing $q$, \begin{enumerate} \item[$(\circledast)$] $N[G]\models[q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$`` $\nu$ is ${\cal R}$--generic over $M[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}]$ '' and $\nu$ is Cohen over $M]$. \end{enumerate} So for some $\mathunderaccent\tilde-3 {\nu}$, \[\begin{array}{ll} N \models\mbox{``}&\mathunderaccent\tilde-3 {\nu},\mathunderaccent\tilde-3 {q}\mbox{ are ${\rm Levy}(\aleph_0,|2^{({\Bbb P})} |^N)$--names of a Cohen real and}\\ \ &\mbox{ a member of ${\Bbb P}$, respectively, and some }r\in{\rm Levy}(\aleph_0,( |2^{|{\Bbb P}|}|)^N)\\ \ &\mbox{ forces the statement $(\circledast)$ above on }\mathunderaccent\tilde-3 {q}, \mathunderaccent\tilde-3 {\nu}\mbox{ ''}. \end{array}\] Now we can find $G'\subseteq{\rm Levy}(\aleph_0,(2^{|{\Bbb P}|})^N)$ generic over $N$ to which $r$ belongs and $\mathunderaccent\tilde-3 {\nu}[G']=^*\nu$ (i.e.\ they are equal except for finitely many coordinates). Let $q'\in{\Bbb P}$ be $\ge\mathunderaccent\tilde-3 {q}[G']$ and be $(N\langle G'\rangle,{\Bbb P}^{N\langle G'\rangle})$--generic, so we are done. 
\QED$_{\ref{6A.7}}$ \begin{theorem} \label{6A.8} Assume that: \begin{enumerate} \item[(a)] ${\cal R}$ is:\quad $f {\cal R} g$ iff $g$ is non-decreasing and $({\rm ex}ists^\infty n)(f(n)>g(n))$;\\ ${\cal R}_1$ is:\quad $f {\cal R}_1 g$ iff $({\rm ex}ists^\infty n) (f(n)>\max\{g(m):m \le n\})$, \item[(b)] ${\Bbb Q}=(\{\eta\in {}^{\omega >}\omega:\eta$ non-decreasing$\})$, $\mathunderaccent\tilde-3 {\eta}^*$ is the generic real $\bigcup\mathunderaccent\tilde-3 {G}_{{\Bbb Q}}$ (so really ${\Bbb Q}$ is the Cohen forcing), \item[(c)] ${\Bbb P}$ is nep, \item[(d)] every $({\frak B},{\rm ZFC}^-_{**})$--candidate $N$ is a ${\Bbb P}$--candidate (and easily it is a ${\Bbb Q}$--candidate), \item[(e)] ${\rm ZFC}^-_{**}$ says: ``${\Bbb P}$ is nep, ${\cal P}({\Bbb P})\in {\cal H}(\chi)$, ${\cal H}(\chi)$ is a semi ${\Bbb P}$--candidate and after forcing with ${\rm Levy}(\aleph_0,2^{|{\Bbb P}|} + 2^{|{\Bbb Q}|})$ still is, and forcing with ${\Bbb P}$ does not add a dominating real''. \end{enumerate} {\em Then} the conditions $(\alpha)$--$(\delta)$ of \ref{6A.7} hold. \end{theorem} \noindent{\sc Proof} \hspace{0.2in} Let $N$ be a ${\Bbb P}$--candidate and let $q\in{\Bbb Q}$. Now, for any $\nu\in{}^{\textstyle \omega}\omega$ \begin{enumerate} \item[$\bigotimes_1$] there are $g_1,g_2$ such that \begin{enumerate} \item[(a)] $g_\ell$ is a subset of ${\Bbb Q}^N$ generic over $N$ to which $q$ belongs; and let $\eta^*_\ell=\mathunderaccent\tilde-3 {\eta}^*[g_\ell]\in{}^{\textstyle \omega}\omega$, \item[(b)] $m\in [\ell g\/(q),\omega)\ \Rightarrow\ \eta_1(m)<\nu(m)\vee\eta_2(m) < \nu(m)\vee\nu(m)=0$. \end{enumerate} \end{enumerate} [Why? Quite easy, letting $\langle {\cal I}_k:k<\omega\rangle$ list the dense open subsets of ${\Bbb Q}^N$ in $N$, we choose inductively $m_k$ and $(\eta^1_k, \eta^2_k)$ such that $\eta^1_k\in {}^{(m_k)} \omega$, $\eta^2_k\in {}^{(m_k)} \omega$, $\eta^1_0=\eta^2_0=q$, $\eta^1_k\vartriangleleft\eta^1_{k+1}$, $\eta^2_k \vartriangleleft \eta^2_{k+1}$, $\eta^1_{2k+1}\in {\cal I}_k$, $\eta^2_{2k+2}\in {\cal I}_k$ and the demand in (b) is satisfied and $\eta_1\stackrel{\rm def}{=}\bigcup\limits_{k<\omega}\eta^1_k$, $\eta_2 \stackrel{\rm def}{=}\bigcup\limits_{k<\omega}\eta^2_k$ induce $g_1,g_2$ respectively]. Next, \begin{enumerate} \item[$\bigotimes_2$] if $\nu$ is ${\cal R}$--generic over $N'$ (any ${\Bbb P}$--candidate), {\em then} $\eta^*_2$ is ${\cal R}$--generic over $N'$ or $\eta^*_1$ is ${\cal R}$--generic over $N'$. \end{enumerate} [Why? Assume this fails, as $\eta^*_1$ is not ${\cal R}$--generic over $N'$ then some $f_1 \in ({}^{\textstyle \omega}\omega)^M$ dominates $\eta^*_1$, and as $\eta^*_2$ is not ${\cal R}$--generic over $N'$ some $f_2 \in ({}^{\textstyle \omega}\omega)^M$ dominates $\eta^*_2$, so $f^*=\max\{f_1+1,f_2+1\}\in N'$ dominates $\nu$ (i.e.\ $f^*(m)=\max\{f_1(m) + 1,f_2(m) + 1\}$).] \begin{enumerate} \item[$\bigotimes_3$] if $\nu \in {}^{\textstyle \omega}\omega$ is non-decreasing ${\cal R}$--generic over $N$ (so it is not dominated by $N$ and is non-decreasing), $q\in{\Bbb Q}$, {\em then} there is $G_{{\Bbb Q}} \subseteq {\Bbb Q}^N$ generic over $N$, $q\in G_{{\Bbb Q}}$ and $\mathunderaccent\tilde-3 {\eta}^*[G_Q]\le^*\nu$. \end{enumerate} [Why? Let $\langle {\cal I}_n:n<\omega \rangle$ list the dense open subset of ${\Bbb Q}$ in $N$. We choose by induction on $n$, $q_n\in {}^{k_n}\omega\subseteq {\Bbb Q}$ such that $q_0 = 1$, $q_n\le q_{n+1}$, and $q_n {\,|\grave{}\,}[k_0,k_n)\le\nu{\,|\grave{}\,} [k_0,k_n)$ and $q_{n+1}\in {\cal I}_n$. 
For $n=0$ trivial, for $n+1$ choose in $N$ by induction on $\ell$, $m_{n,\ell},\rho_{n,\ell}$ such that $m_{n,0}=k_n$ and \[m_{n,\ell}<m_{n,\ell +1},\quad\rho_{m,\ell}\in {}^{[m_{n,\ell},m_{n,\ell +1})}\omega,\quad q_n \cup 0_{[k_n,m_{n,\ell})} \cup \rho_{m,\ell} \in {\cal I}_n.\] This is easy and $\langle \rho(m_{n,\ell},\rho_{m,\ell}):\ell<\omega\rangle \in N$. Now define $\rho^*\in{}^{\textstyle \omega}\omega$ by: \[\rho^* {\,|\grave{}\,} [m_{n,\ell},m_{n,\ell +1})\mbox{ is constantly }\max\bigl( \bigcup_{i\le \ell +1}{\rm Rang}(\rho_i)\cup{\rm Rang}(q_n)\bigr).\] So $({\rm ex}ists^\infty j < \omega)(\rho^*(j)<\nu(j))$ hence for some $\ell$ and some $m \in [m_{n,\ell},m_{n,\ell +1})$ we have $\rho^*(m)<\nu(m)$. So \[(\forall m')(m_{n,\ell +1}\le m'<m_{n,\ell +2}\ \Rightarrow\ \rho_{n,\ell +1}(m')<\nu(m)),\] but $\nu(m)<\min\{\nu(j):m_{n,\ell+1}\le j<m_{n,\ell +2}\}$, so we are done.] Now we have to check the conditions in \ref{6A.7}, so obviously clauses $(\alpha),(\beta)$ hold. Also clause $(\gamma)$ there holds by $\otimes_3$. So let us prove clause $(\delta)$. Let $N$ be a $({\frak B},{\rm ZFC}^-_{ **})$--candidate and $p\in{\Bbb P}^N$. Let $q$ be $\langle N,{\Bbb P} \rangle$--generic, $p\le q$ (by $\otimes_1 + \otimes_2$). Let $G\subseteq{\rm Levy}(\aleph_0,|2^{{\Bbb P}} |^N)$ be generic over $N$ (equivalently over $N[\nu^*]$. In $N$, by clause (e) of the assumptions, in $N[G]$, there is a semi ${\Bbb P}$--candidate $M$, ${\cal P}({\Bbb P}^N)^N ={\cal P}({\Bbb P}^M)^M$. Then in $N[G]$, $M[G]$ is a ${\Bbb P}$--candidate. So there is $q\in{\Bbb P}^{N[G]}$ such that $N[G]\models$``$p\le q$ and $q$ is $(M,{\Bbb P}^M)$--generic''. As above possibly increasing $q$, \[N[G]\models[q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}\mbox{`` }\nu\mbox{ is ${\cal R}$--generic over } M[\mathunderaccent\tilde-3 {G}_{{\Bbb P}}]\mbox{''}\mbox{ and $\nu$ is Cohen over }M].\] \QED$_{\ref{6A.8}}$ \begin{remark} \label{6A.11} Clearly this proof is similar to \S7, so we can replace ``${\rm Cohen}$'' by more general ${\Bbb Q}$. More exactly, the point is that in \S7 the demand was $q\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb P}}$``$\mathunderaccent\tilde-3 {\eta}$ is ${\Bbb Q}$--generic over $N$''. Here we replace it by other demands. \end{remark} \begin{conclusion} \label{6A.9} For any Souslin proper forcing notion ${\Bbb P}$, if ${\Bbb P}$ add no dominating real, {\em then} forcing with ${\Bbb P}$ adds no member of ${}^{\textstyle \omega}\omega$ dominating some $F \subseteq{}^{\textstyle \omega}\omega$ from ${\bf V}$ not dominated there. \end{conclusion} \noindent{\sc Proof} \hspace{0.2in} By \ref{6A.8}, \ref{6A.6}, \ref{6A.7}. \QED \begin{conclusion} \label{6A.10} \begin{enumerate} \item Suppose that \begin{enumerate} \item[(a)] ${\Bbb P}$ is a forcing notion adding no dominating real, \item[(b)] ${\mathunderaccent\tilde-3 {\Bbb Q}}$ is a ${\Bbb P}$--name for a Souslin proper forcing notion not adding a dominating real. \end{enumerate} {\em Then} $P * {\mathunderaccent\tilde-3 {\Bbb Q}}$ adds no dominating real. \item $P * {\mathunderaccent\tilde-3 {\Bbb Q}}$ adds no real dominating an old undominated family {\em if}\ both ${\Bbb P}$ and ${\mathunderaccent\tilde-3 {\Bbb Q}}$ satisfy this and are Souslin proper. \end{enumerate} \end{conclusion} \noindent{\sc Proof} \hspace{0.2in} By \ref{6A.9}. \QED \stepcounter{section} \subsection*{\quad 12. 
Open problems} \begin{problem} \label{20.1} \begin{enumerate} \item Can we in \cite{Sh:480} weaken the assumptions (from Souslin c.c.c.) to ``${\Bbb Q}$ is nep and c.c.c.''? \item Similarly in the symmetry theorem. \item Similarly other problems here have such versions too. \end{enumerate} \end{problem} \begin{problem} \label{20.3x} \begin{enumerate} \item {\em (von Neumann)} Is it consistent that every c.c.c.\ ${}^{\textstyle \omega}\omega$--bounding atomless forcing notion is a measure algebra? We may now rephrase: is the non-existence consistent? \item {\em (Velickovic)} Is it consistent that every c.c.c.\ forcing notion adding new reals adds a real $\mathunderaccent\tilde-3 {f}\in{}^{\textstyle \omega}\omega$ such that if $S\in \prod\limits_{n<\omega} [\omega]^{\textstyle 2^n}\cap {\bf V}$ then $(\exists^\infty n\in\omega)(\mathunderaccent\tilde-3 {f}(n)\notin S(n))$? [Note that \cite{Sh:480} answers a relative of \ref{20.3x}(2): there is no such Souslin c.c.c.\ forcing notion.] \end{enumerate} \end{problem} A relative of the von Neumann problem is a problem which Fremlin \cite{Fe94} stresses and which has many equivalent versions (see \cite{Fe94} on its history). Halfway between them and our context is the following. \begin{problem} \label{20.2} Assume ${\Bbb Q}$ is a Souslin c.c.c.\ ${}^{\textstyle \omega}\omega$--bounding forcing notion. Is it random forcing? \end{problem} \begin{problem} \label{20.3} \begin{enumerate} \item Is it consistent that every c.c.c.\ forcing notion adding an unbounded real adds a Cohen real? (See B{\l}aszczyk Shelah \cite{Sh:F151} for a proof of the $\sigma$-centered version). \item If ${\Bbb P}$ satisfies \cite[1.5]{Sh:480}, does it imply ${\Bbb P}$ adds a Cohen real? \end{enumerate} \end{problem} \begin{problem} \label{20.4} Are there any symmetric (or $(<\omega)$-symmetric) c.c.c.\ Souslin forcing notions in addition to Cohen forcing and random forcing? [``Yes'' here implies ``no'' to \ref{20.2} so not of present interest.] \end{problem} \begin{problem} [Gitik Shelah \cite{GiSh:357}, \cite{GiSh:412}] \label{20.5} \begin{enumerate} \item Assume $I$ is an $\aleph_1$--complete ideal on $\kappa$ such that ${\cal P}(\kappa)/I$ is atomless. Can $I^+$ (as a forcing notion) be a c.c.c.\ Souslin forcing generated by a real? \item Replace Souslin by ``definable in an $({\cal H}_{<\sigma}(\theta),\in, {\frak B})$'', ${\frak B}$ has universe $\kappa$ or ${\cal H}_{<\sigma}(\kappa)$, and $I$ is $(\theta+\kappa)^+$--complete (see \cite{GiSh:357}). \item Generalize the results of the form ``if ${\cal P}(\kappa)/I$ is the measure algebra with Maharam dimension $\mu$ (or is the adding of $\mu$ Cohen reals) then $\lambda$ is large enough'', see \cite{GiSh:412}, \cite{GiSh:582} for those results. \item Combine (2) and (3). \end{enumerate} \end{problem} \begin{problem} [Judah] \label{20.6} Can a Souslin c.c.c.\ forcing notion add a minimal real? (Note: this is of interest only if the answer in \ref{20.2} is NO and/or the answer to \ref{20.10} is NO.) \end{problem} \begin{problem} \label{20.7} Give examples of a Souslin forcing notion which is only temporarily c.c.c.\ and/or proper (${\bf L}$) (see \S10). \end{problem} \begin{problem} \label{20.8} Do iterations (CS,FS) of Souslin c.c.c.\ forcing notions not adding a dominating real have this property? Is each almost ${}^{\textstyle \omega}\omega$--bounding? [Maybe \ref{6.5} answers this, but we need better: replace ``$\eta^*$ is a generic real for $(N,{\Bbb Q},\mathunderaccent\tilde-3 {\eta})$'' by less]. See \S11 + \S10.
\end{problem} \begin{problem} \label{20.9} \begin{enumerate} \item Is there a pair $({\Bbb Q},\mathunderaccent\tilde-3 {r})$ such that: \begin{enumerate} \item[(a)] $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{{\Bbb Q}}$``$\mathunderaccent\tilde-3 {r}\in{}^{\textstyle \omega}2$ is new'', \item[(b)] if ${\Bbb P}$ is a Souslin c.c.c.\ forcing notion with no ${\Bbb P}$--name $\mathunderaccent\tilde-3 {r}'$ of a real such that the forcing notion ${\cal B}_{{\Bbb P}}(\mathunderaccent\tilde-3 {r}')$ is ${}^{\textstyle \omega}\omega$--bounding but ${\Bbb P}$ adds a nowhere essentially Cohen real\\ {\em then} forcing with ${\Bbb P}$ adds a $({\Bbb Q},\mathunderaccent\tilde-3 {r})$ real,i.e.\ for some ${\Bbb P}$--name $\mathunderaccent\tilde-3 {r}''$ for a real we have $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{\Bbb P}$``for some $G'' \subseteq{\Bbb Q}^{\bf V}$ generic over ${\bf V}$, $\mathunderaccent\tilde-3 {r}''[G_{\Bbb P}]=\mathunderaccent\tilde-3 {r}[G'']$''. \end{enumerate} \item As above ${\Bbb P}$ is $\sigma$--centered. \item If ${\Bbb P}$ is a Souslin c.c.c.\ forcing notion adding new reals but not adding a real $\mathunderaccent\tilde-3 {r}'$ with ${\cal B}_{\Bbb P}(\mathunderaccent\tilde-3 {r}')$ being ${}^{\textstyle \omega}\omega$--bounding,\\ {\em then} forcing with ${\Bbb P}$ adds a new real $\mathunderaccent\tilde-3 {r}''$ such that ${\cal B}_{\Bbb P}(r'')$ is $\sigma$--centered. \end{enumerate} \end{problem} \begin{problem} \label{20.10} \begin{enumerate} \item Let ${\Bbb Q}$ be a Souslin c.c.c.\ forcing notion and $\mathrel {{\vrule height 6.9pt depth -0.1pt}\! \vdash }_{\Bbb Q}$``$ \mathunderaccent\tilde-3 {r}\in {}^{\textstyle \omega}2$''. Is ${\cal B}(\mathunderaccent\tilde-3 {r})$ also a Souslin c.c.c.\ forcing notion? \item Similarly for nep c.c.c. \end{enumerate} \end{problem} \begin{problem} \label{20.11} Assume ${\Bbb Q}$ is a Souslin c.c.c.\ forcing notion which is snep and even $``x \in{\Bbb Q}$'', ``$x \le^{{\Bbb Q}} y$'', ``$\{p_n:n<\omega\}$ predense above $q'$'' are $\Sigma^1_1$ relations. Does ${\Bbb Q}$ add Cohen or random real? \end{problem} \begin{problem} \label{20.12} Develop the theory of ``definable forcing notions'' when we allow an ultrafilter on $\omega$ as a parameter. \end{problem} \begin{problem} \label{20.13} Does nep$\neq$snep? (the case $\theta=\kappa=\aleph_0$, of course). \end{problem} \begin{problem} \label{20.14} Try to generalize our present context to $\lambda$--complete forcing notions (Baumgartner's Axiom; \cite{Sh:186}, \cite{Sh:655}). \end{problem} \begin{problem} \label{20.15} When ${\Bbb Q}^{\bf V}\lesdot{\Bbb Q}^{{\bf V}^{\Bbb P}}$? \end{problem} \begin{problem} \label{20.19} Does ${\rm Ax}_{\omega_1}[(\aleph_1,\aleph_1)\mbox{--nep}]$ imply $2^{\aleph_0}=\aleph_2$? Or does ${\rm Ax}_{\omega_1}[\mbox{nep}]$ imply $2^{\aleph_0}=\aleph_2$? \noindent [The parallel question for Souslin proper was formulated in xxx] \end{problem} \shlhetal \end{document}
\begin{document} \title{Quantum Brownian motion and\\the second law of thermodynamics} \author{ILki Kim\inst{1}\thanks{\emph{e-mail}: [email protected]} \and G\"{u}nter Mahler\inst{2}} \institute{Department of Physics, North Carolina Central University, Durham, NC 27707, U.S.A. \and Institute of Theoretical Physics I, University of Stuttgart, Pfaffenwaldring 57/IV, 70550 Stuttgart, Germany} \date{\today} \abstract{We consider a single harmonic oscillator coupled to a bath at zero temperature. As is well known, the oscillator then has a higher average energy than that given by its ground state. Here we show analytically that for a damping model with an arbitrary discrete distribution of bath modes, and for damping models with continuous distributions of bath modes with cut-off frequencies, this excess energy is less than the work needed to couple the system to the bath; therefore, the quantum second law is not violated. On the other hand, the second law may be violated for bath modes without cut-off frequencies, which are, however, physically unrealistic models. \PACS{ {03.65.Ud}{Entanglement and quantum nonlocality} \and {05.40.-a}{Fluctuation phenomena, random processes, noise, and Brownian motion} \and {05.70.-a}{Thermodynamics} } } \maketitle \section{Introduction} Thermodynamics originally developed as a purely phenomenological description of the effects caused by changes in temperature, pressure, and volume on physical systems at the macroscopic scale. At the heart of thermodynamics there are four well-known laws \cite{CAL85}; the zeroth law allows us to define temperature scales and thermometers, while the first law is nothing else than a generalized expression of the law of energy conservation. The second law introduces the concept of thermodynamic entropy, which never decreases for an isolated system. The third law states that as a system approaches zero temperature, the entropy of the system approaches zero. Later on, Boltzmann and his followers created and developed statistical thermodynamics by reducing the phenomenologically described thermodynamics entirely to the scheme of classical statistical mechanics. When quantum mechanics appeared, statistical thermodynamics had to take into account additional factors offered by quantum mechanics, but the overall structure of thermodynamics, its fundamental laws, and its validity for macroscopic systems remained unchanged, since quantum mechanics was believed to play no role at the macroscopic scale. A big challenge for thermodynamics arose with the miniaturization of the systems under consideration \cite{SPI05}; in contrast to common quantum statistical mechanics, which is intrinsically based on a vanishingly small coupling between system and bath, a finite coupling strength between them causes some subtleties that must be recognized. Recent advances in technology have enabled us to experimentally study mesoscopic systems and test various fundamental concepts. The field of nano electro-mechanical systems (NEMS), in particular, has emerged with great potential, e.g., in quantum limit detection and amplification \cite{LAH04,CLE04}, and {\em welcher-Weg} (`which-path') interferometry \cite{MAC03}. Here, the effects of dissipative environments that are negligible in macroscopic resonators become detrimental, and noise is, therefore, a major limiting factor in the control of NEMS resonators.
Theoretically, NEMS resonators can be modeled in their simplest form within the scheme of quantum Brownian motion (see \cite{HAE05} for fundamental aspects of quantum Brownian motion). Such developments in various fields related to quantum statistical and mesoscopic physics have led to considerable interest in the area of quantum and mesoscopic thermodynamics, especially with the question raised on the validity of the thermodynamic laws. Discussions about the meaning of quantum thermodynamics \cite{SPI05,MAH04} have started and continue to this day. The validity of the second law was questioned in the scheme of quantum Brownian motion \cite{SPI05}, motivated by the observation that a single harmonic oscillator coupled to a bath at zero temperature has indeed a higher average energy than the ground-state energy of the uncoupled harmonic oscillator (see also \cite{NAG02}), which seemed not to be in accordance with the second law in its Kelvin-Planck form \cite{CAL85} that {\em a system operating in contact with a thermal reservoir cannot produce positive work in its surroundings} ({\em cf.} for a discussion of the validity of the quantum third law in the low-temperature limit see, e.g., Refs. \cite{FOR05} and \cite{HAE06}). However, this argument has been shown to be wrong by Ford and O'Connell \cite{FOR06}; by means of the generalized Langevin equation they showed, for the well-known Drude model for the spectral density of bath modes, that the apparent excess energy of the coupled harmonic oscillator cannot be used to extract useful work, since the minimum work needed to couple the free oscillator to a bath exceeds this excess energy; therefore, the second law of thermodynamics is inviolate even in the quantum regime ({\em i.e.}, for cases with non-negligible coupling strengths at temperature $T = 0$ without thermal fluctuations). Unfortunately, they were unable to explicitly connect their result with its model-independent, deep quantum origin, and so the validity of the {\em quantum} second law for a more general form of the spectral density of bath modes $J(\omega)$ would still remain an open question; actually, in the experimental study of mesoscopic systems one might be able to manipulate the spectral density $J(\omega)$, to some extent, at will. In this paper, we would like to discuss the second law for various damping models. We will first show the validity of the second law for a discrete distribution of bath modes by exactly proving the second-law inequality in a simple form obtained from the general treatment of the susceptibility (see Sec. \ref{sec:2nd_law}). Subsequently, the inequality will be appropriately applied to various continuous distributions of bath modes. It is then found that for damping models with cut-off frequencies the second law holds, whereas, interestingly, it may be violated for damping models with a cut-off-free $J(\omega)$, which are, however, physically unrealistic (see Sec. \ref{sec:models}). Let us begin with a brief review of the basics of quantum Brownian motion. Below we adopt the notation used in \cite{ING98}.
\section{Basics and its general treatment}\label{sec:basics} The quantum Brownian motion in consideration is described by the model Hamiltonian \begin{equation}\label{eq:total_hamiltonian1} \hat{H}\; =\; \hat{H}_s\, +\, \hat{H}_b\, +\, \hat{H}_{sb}\,, \end{equation} where \begin{eqnarray}\label{eq:total_hamiltonian2} \hat{H}_s &=& \frac{\hat{p}^2}{2 M} + \frac{M}{2}\,\omega_0^2\,\hat{q}^2\,;\, \hat{H}_b\, =\, \sum_{j=1}^N \left(\frac{\hat{p}_j^2}{2 m_j} + \frac{m_j}{2} \omega_j^2\,\hat{x}_j^2\right)\nonumber\\ \hat{H}_{sb} &=& -\hat{q} \sum_{j=1}^N c_j\,\hat{x}_j\, +\, \hat{q}^2 \sum_{j=1}^N \frac{c_j^2}{2 m_j\,\omega_j^2}\,. \end{eqnarray} Here, from the hermiticity of the Hamiltonian, the coupling constants $c_j$ are obviously real-valued. Without any loss of generality, we assume that \begin{equation}\label{eq:frequency_relation1} \omega_1\, \leq\, \omega_2\, \leq\, \cdots\, \leq\, \omega_{N-1}\, \leq\, \omega_N\,. \end{equation} By means of the Heisenberg equation of motion for $\hat{p}$ we can derive the quantum Langevin equation \begin{equation}\label{eq:eq_of_motion1} \textstyle M\,\ddot{\hat{q}} \, +\, M \int_0^t d s\, \gamma(t -s)\, \dot{\hat{q}}(s)\, +\, M\,\omega_0^2\,\hat{q}\; =\; \hat{\xi}(t)\,, \end{equation} where we used $\hat{p} = M \dot{\hat{q}}$, and the damping kernel and the noise operator are respectively given by \begin{eqnarray}\label{eq:damping_kernel1} \displaystyle \gamma(t) &=& \frac{1}{M} \sum_{j=1}^N \frac{c_j^2}{m_j\,\omega_j^2}\cos(\omega_j\,t)\,;\, \displaystyle \hat{\xi}(t)\, =\, - M \gamma(t)\,\hat{q}(0) +\nonumber\\ && \sum_{j=1}^N c_j \left\{\hat{x}_j(0) \cos(\omega_j\,t)\, +\, \frac{\hat{p}_j(0)}{m_j\,\omega_j} \sin(\omega_j\,t)\right\}\,. \end{eqnarray} Introducing the spectral density of bath modes as a characteristic of the bath, \begin{equation}\label{eq:spectral_density1} J(\omega)\; =\; \pi \sum_{j=1}^N \frac{c_j^2}{2 m_j\, \omega_j}\,\delta(\omega - \omega_j)\,, \end{equation} we can express the damping kernel as \begin{equation}\label{eq:damping_kernel2} \gamma(t)\; =\; \frac{2}{M} \int_0^{\infty} \frac{d \omega}{\pi} \frac{J(\omega)}{\omega} \cos(\omega\,t)\,. \end{equation} Let us apply the Laplace transform to eq. (\ref{eq:eq_of_motion1}) with the aid of \cite{IKI06,ROB66} \begin{eqnarray} &&\mathcal{L}\{\cos(\omega_j\,t)\}(s)\; =\; {\frac{s}{s^2 + \omega_j^2}}\,,\label{eq:laplace_transform_cos_sin}\\ &&\mathcal{L}\{\sin(\omega_j\,t)\}(s)\; =\; {\frac{\omega_j}{s^2 + \omega_j^2}}\,.
\end{eqnarray} With $s = -i \omega + 0^+ = -i (\omega + i\,0^+)$ we then easily obtain \begin{eqnarray}\label{eq:laplace_q_final} \hspace*{-.5cm}\hat{q}_{\omega} &:=& \mathcal{L}\{\hat{q}(t)\}(-i \omega + 0^+)\nonumber\\ \hspace*{-.5cm} &=& \tilde{\chi}(\omega)\,\left[\hat{\xi}_{\omega} - i \omega M \{1 + \tilde{\gamma}(\omega)\}\,\hat{q}(0) + M \dot{\hat{q}}(0)\right]\,, \end{eqnarray} where the Laplace-transformed damping kernel, the dynamic susceptibility, and the Laplace-transformed noise operator are, respectively, given by \begin{eqnarray} \hspace*{-.5cm}&&\tilde{\gamma}(\omega)\, =\, \frac{i \omega}{M} \sum_{j=1}^N \frac{c_j^2}{m_j\,\omega_j^2}\, \frac{1}{\omega^2 - \omega_j^2}\,,\label{eq:gamma_tilde1}\\ \hspace*{-.5cm}&&\tilde{\chi}(\omega)\, =\, \frac{1}{M}\,\frac{1}{\omega_0^2 -\omega^2 - i \omega\,\tilde{\gamma}(\omega)}\,,\label{eq:gamma_tilde1_2}\\ \hspace*{-.5cm}&&\hat{\xi}_{\omega}\, =\, \sum_{j=1}^{N} \frac{c_j}{\omega^2 - \omega_j^2} \left\{i \omega\,\hat{x}_j(0) - \frac{\hat{p}_j(0)}{m_j}\right\} - M \tilde{\gamma}(\omega)\,\hat{q}(0)\,.\nonumber \end{eqnarray} Substituting (\ref{eq:gamma_tilde1}) into (\ref{eq:gamma_tilde1_2}), we get \begin{equation}\label{eq:susceptibility1} \tilde{\chi}(\omega)\; =\; \frac{\displaystyle -\frac{1}{M} \prod_{j=1}^N\, (\omega^2 - \omega_j^2)}{D_{\tilde{\chi}}(\omega)}\,, \end{equation} where \begin{equation}\label{eq:susceptibility_denominator1} \hspace*{-.1cm}D_{\tilde{\chi}}(\omega)\, =\, {\displaystyle \prod_{j=0}^N (\omega^2 - \omega_j^2) - \frac{\omega^2}{M} \sum_{j=1}^N \frac{c_j^2}{m_j\,\omega_j^2} \prod_{j'=1 \atop (j' \neq j)}^N \left(\omega^2 - \omega_{j'}^2\right)}\,. \end{equation} It is known \cite{LEV88} that the susceptibility $\tilde{\chi}(\omega)$ in (\ref{eq:gamma_tilde1_2}) has poles at the normal-mode frequencies of the total system $\hat{H}$ in (\ref{eq:total_hamiltonian1}), $\pm\,\bar{\omega}_k$ with $k = 0, 1, 2, \cdots, N$, so that \begin{equation}\label{eq:normal_mode_frequency1} \omega_0^2\, -\, \bar{\omega}_k^2\, -\, i\,\bar{\omega}_k\,\tilde{\gamma}(\bar{\omega}_k)\; =\; 0\,. \end{equation} Here, we might be able to say that a specific $k = k_0$ would represent the ``system harmonic oscillator'' with the normal-mode frequency $\bar{\omega}_{k_0}$, {\em uncoupled} to the ``bath'' consisting of the remaining oscillators with $\bar{\omega}_k$, where $k \neq k_0$. From eqs. (\ref{eq:gamma_tilde1_2}), (\ref{eq:susceptibility1}), and (\ref{eq:normal_mode_frequency1}), we have a compact expression of the susceptibility, \begin{equation}\label{eq:susceptibility2} \tilde{\chi}(\omega)\; =\; -\frac{1}{M}\, \frac{\displaystyle \prod_{j=1}^N\, (\omega^2 - \omega_j^2)}{\displaystyle \prod_{k=0}^N\, (\omega^2 - \bar{\omega}_k^2)}\,. \end{equation} Without any loss of generality, we here assume that \begin{equation}\label{eq:frequency_relation2} \bar{\omega}_0\, \leq\, \bar{\omega}_1\, \leq\, \cdots\, \leq\, \bar{\omega}_{N-1}\, \leq\, \bar{\omega}_N\,. \end{equation} The damping function $\tilde{\gamma}(\omega)$ in the frequency domain has, besides eq. (\ref{eq:gamma_tilde1}), another expression which is suitable for the case of a continuous distribution of bath modes; from eqs.
(\ref{eq:damping_kernel2}) and (\ref{eq:laplace_transform_cos_sin}) we obtain \begin{eqnarray} \hspace{-.5cm}&&{\textstyle \tilde{\gamma}(\omega)\; =\; \frac{i}{M} \int_0^{\infty} \frac{d \omega'}{\pi} \frac{J(\omega')}{\omega'} \left(\frac{1}{\omega' + \omega}\, -\, \frac{1}{\omega' - \omega}\right)\,,}\label{eq:damping_kernel2_1}\\ \hspace{-.5cm}&&{\textstyle \tilde{\gamma}(\omega)\big{|}_{\omega \to \omega + i\,0^{+}}\; =\; \frac{J(\omega)}{M\, \omega}\, +}\nonumber\\ \hspace{-.5cm}&&{\textstyle \hspace*{1.3cm}\frac{i}{M} \int_0^{\infty} \frac{d \omega'}{\pi} \frac{J(\omega')}{\omega'}\; P\left(\frac{1}{\omega' + \omega} - \frac{1}{\omega' - \omega}\right)\,.}\label{eq:damping_kernel3} \end{eqnarray} We here used the well-known formula $1/(x + i\,0^+) = P(1/x) - i \pi \delta(x)$ for $x = \omega' - \omega$. For the simple Ohmic case $J_{0}(\omega)\, =\, M \gamma_o\,\omega$ with an $\omega$-independent constant $\gamma_o$, we easily have $\gamma_0(t) = 2 \gamma_o\,\delta(t)$, and $\tilde{\gamma}_{0}(\omega) = \gamma_o$ with a vanishing principal ({\em or} imaginary) part in (\ref{eq:damping_kernel3}), while for the Drude model where $J_d(\omega)\, =\, M\,\gamma_o\,\omega\,\omega_d^2/(\omega^2 + \omega_d^2)$ with a cut-off frequency $\omega_d$, we have $\gamma_d(t) = \gamma_o\, \omega_d\, e^{-\omega_d\,t}$, and \begin{equation}\label{eq:drude_gamma1} \tilde{\gamma}_d(\omega)\; =\; \frac{\gamma_o\, \omega_d^2}{\omega^2 + \omega_d^2}\; +\; i\,\frac{\gamma_o\, \omega_d\, \omega}{\omega^2 + \omega_d^2} \; =\; \frac{\gamma_o\, \omega_d}{\omega_d - i \omega}\,. \end{equation} For a later purpose, it is interesting to compare $D_{\tilde{\chi}}(\omega)$ in eq. (\ref{eq:susceptibility_denominator1}) with the denominator of the right hand side in (\ref{eq:susceptibility2}). Then, we can easily find that \begin{equation}\label{eq:relation_among_normal_modes1} \sum_{k=0}^N \bar{\omega}_k^2\; =\; \sum_{j=0}^N \omega_j^2\, +\, \gamma(0)\,;\; \prod_{k=0}^N \bar{\omega}_k^2\; =\; \prod_{j=0}^N \omega_j^2\,. \end{equation} Here, $\gamma(0) = \gamma(t)|_{t=0} \geq 0$ in eq. (\ref{eq:damping_kernel1}). From this comparison of the denominators at $\omega = \omega_N$, we also obtain $D_{\tilde{\chi}}(\omega_N) \leq 0$ and so $\omega_N \leq \bar{\omega}_N$. Similarly, we can obtain both $D_{\tilde{\chi}}(\omega_1) \leq 0$ for $N$ odd and $D_{\tilde{\chi}}(\omega_1) \geq 0$ for $N$ even, which leads to the fact that $\bar{\omega}_0 \leq \omega_1$ for any given $N$. Further, we can obtain the relationship $D_{\tilde{\chi}}(\omega_j) \cdot D_{\tilde{\chi}}(\omega_{j-1}) \leq 0$ for any $j$. Therefore, it is found that \begin{equation}\label{eq:frequency_relation3} \bar{\omega}_0\, \leq\, \omega_1\, \leq\, \bar{\omega}_1\, \leq\, \cdots\, \leq\, \omega_{N-1}\, \leq\, \bar{\omega}_{N-1}\, \leq\, \omega_N\, \leq\, \bar{\omega}_N\,. \end{equation} By using $D_{\tilde{\chi}}(\omega_0)$, we can also show that $\bar{\omega}_0 \leq \omega_0 \leq \bar{\omega}_N$ (see also \cite{FOR88}). Within this {\em general} treatment of the susceptibility, we would like to consider the quantum second law below.
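As a purely numerical illustration of this general treatment (a minimal sketch, not part of the original derivation; all parameter values are arbitrary and $M = 1$), the normal-mode frequencies $\bar{\omega}_k$, the interlacing property (\ref{eq:frequency_relation3}), and the sum and product rules in (\ref{eq:relation_among_normal_modes1}) can be checked for a small discrete bath by diagonalizing the coupled quadratic Hamiltonian (\ref{eq:total_hamiltonian2}):
\begin{verbatim}
# Minimal numerical sketch (not from the paper): normal-mode frequencies of the
# coupled system for a small discrete bath, and checks of the interlacing and
# sum/product relations.  Arbitrary illustrative parameters, M = 1.
import numpy as np

rng = np.random.default_rng(0)
M, w0, N = 1.0, 1.0, 6
m  = rng.uniform(0.5, 2.0, N)            # bath masses m_j
wj = np.sort(rng.uniform(0.3, 4.0, N))   # bath frequencies omega_1 <= ... <= omega_N
c  = rng.uniform(-0.4, 0.4, N)           # real couplings c_j

# potential matrix in the coordinates (q, x_1, ..., x_N), counter-term included
V = np.zeros((N + 1, N + 1))
V[0, 0] = M * w0**2 + np.sum(c**2 / (m * wj**2))
V[0, 1:] = V[1:, 0] = -c
V[1:, 1:] = np.diag(m * wj**2)
S = np.diag(np.concatenate(([M], m)) ** -0.5)    # (mass matrix)^(-1/2)

bar_w = np.sqrt(np.linalg.eigvalsh(S @ V @ S))   # normal-mode frequencies bar{omega}_k
gamma0 = np.sum(c**2 / (m * wj**2)) / M          # gamma(0) of the damping kernel

print("interlacing :", np.all(bar_w[:-1] <= wj) and np.all(wj <= bar_w[1:])
      and bar_w[0] <= w0 <= bar_w[-1])
print("sum rule    :", np.isclose(np.sum(bar_w**2), w0**2 + np.sum(wj**2) + gamma0))
print("product rule:", np.isclose(np.prod(bar_w**2), w0**2 * np.prod(wj**2)))
\end{verbatim}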
\section{General validity of the quantum second law (discrete bath modes)}\label{sec:2nd_law} The energy of the system oscillator $\hat{H}_s$ at zero temperature can be calculated by means of the partition function $Z = \text{Tr}\,e^{-\beta \hat{H}}$ with $\beta = 1/k_B T$ as \begin{equation}\label{eq:excess_energy0} {\textstyle \left\langle\hat{H}_s\right\rangle_{T=0}\; =\; \left.\frac{\text{Tr}\left(\hat{H}_s\, e^{-\beta \hat{H}}\right)}{Z}\right|_{\beta \to \infty}\; =:\; E_s(0)\,.} \end{equation} It is well-known \cite{MAH04,WEI99} that the system-bath entanglement induced by the coupling term $\hat{H}_{sb}$ in (\ref{eq:total_hamiltonian1}) leads to the fact that the system oscillator $\hat{H}_s$, initially in a pure state (here, its ground state with the minimum energy $E_g = \hbar\,\omega_0/2$), is no longer in a pure state but in a mixed state with a fluctuation in energy, and so we actually have $E_s(0) > E_g$. It was even discussed in \cite{BUT05} that energy fluctuation measurements can provide entanglement information ({\em cf}. for a thermodynamical approach to quantifying entanglement in bipartite qubit states, see \cite{OPP02}). From the fluctuation-dissipation theorem \cite{ING98}, we can also easily obtain \begin{equation}\label{eq:excess_energy1} E_s(0)\; =\; \frac{M \hbar}{2 \pi} \int_0^{\infty} d\omega\,(\omega_0^2\, +\, \omega^2)\; \text{Im}\,\tilde{\chi}(\omega + i\,0^+)\,. \end{equation} The factor $\text{Im}\,\tilde{\chi}(\omega + i\,0^+)$ can be evaluated from eq. (\ref{eq:susceptibility2}) with $\omega \to \omega + i\,0^+$. By means of the technique used, e.g., in \cite{KAM04}, eq. (\ref{eq:excess_energy1}) can be rewritten as \begin{equation}\label{eq:contour_integral1} E_s(0)\; =\; \frac{\hbar}{2}\, \frac{1}{2 \pi i} \oint d\omega\, \frac{\omega_0^2\, +\, \omega^2}{G(\omega)}\,, \end{equation} where $G(\omega) = - 1/\{M\,\tilde{\chi}(\omega)\}$, and the integration path is a loop around the {\em positive real axis} in the complex $\omega$-plane, consisting of the two branches, $(\infty + i \epsilon,\,i \epsilon)$ and $(-i \epsilon,\,\infty - i \epsilon)$. Therefore, $E_s(0)$ can be exactly obtained in closed form from the residues evaluated at all zeroes of $G(\omega)$ on the positive real axis. It is also interesting to note that the entanglement between any pair of the bath oscillators $\hat{H}_j = \hat{p}_j^2/2\,m_j + m_j\,\omega_j^2\,\hat{x}_j^2/2$ with $j= 1, 2, 3, \cdots, N$ is induced by the system-bath entanglement and the well-known entanglement swapping \cite{ALB01}. As a result, we must obtain an excess energy for any $j$, i.e., $\langle\hat{H}_j\rangle_{T=0} > \hbar\,\omega_j/2$. However, the energy of the total system, $\langle\hat{H}\rangle_{T=0} = \sum_{k = 0}^N\, \hbar\, \bar{\omega}_k/2$, is clearly not equal to $\langle\hat{H}_s\rangle_{T=0}\, +\, \sum_{j=1}^N\, \langle\hat{H}_j\rangle_{T=0} = \langle\hat{H}_s\, +\, \hat{H}_b\rangle_{T=0}$. The minimum work required to couple a harmonic oscillator at temperature $T$ to a bath at the same temperature is given by the Helmholtz free energy of the {\em coupled} total system minus the free energy of the {\em uncoupled} bath \cite{CAL85,FOR06}.
The Helmholtz free energy can be obtained from the canonical partition function $Z_s(\beta) = \mbox{Tr}\, e^{-\beta \hat{H}}/\mbox{Tr}_b\, e^{-\beta \hat{H}_b}$ as $F(T) = -k_B\,T\,\ln Z_s$, where $\mbox{Tr}_b$ denotes the partial trace for the bath alone (in the absence of a coupling between system and bath, this would exactly correspond to the partition function of the system only). By means of the normal-mode frequencies $\bar{\omega}_k$ the partition function can be rewritten as \begin{equation}\label{eq:partion_function1} Z_s(\beta)\; =\; \frac{\displaystyle \prod_{k=0}^{N}\, \sum_{n_k=0}^{\infty}\, e^{-\beta \hbar \bar{\omega}_k \left(n_k + \frac{1}{2}\right)}}{\displaystyle \prod_{j=1}^{N}\, \sum_{n_j=0}^{\infty}\, e^{-\beta \hbar \omega_j \left(n_j + \frac{1}{2}\right)}} \end{equation} so that we can easily get, for $\beta \to \infty$, \begin{equation}\label{eq:helmholtz_energy0} F(0)\; =\; \frac{\hbar}{2}\, \left(\sum_{k=0}^N \bar{\omega}_k\, -\, \sum_{j=1}^N \omega_j\right)\,. \end{equation} With the aid of eq. (\ref{eq:relation_among_normal_modes1}), it is evidently found that $F(0) > \hbar\,\omega_j/2$ for any $j = 0, 1, 2, \cdots, N$. Further, we have, from \cite{FOR85}, \begin{equation} \hspace*{-.3cm}F(T)\; =\; \frac{1}{\pi} \int_0^{\infty}\, d\omega\, f(\omega,T)\; \text{Im}\left\{\frac{d}{d \omega} \ln \chi(\omega + i 0^+)\right\}\,,\label{eq:helmholtz_energy1} \end{equation} where $f(\omega,T) = k_B\,T\,\ln \{2\,\sinh (\hbar\,\omega/2\,k_B T)\}$. Similarly to eq. (\ref{eq:contour_integral1}), we can obtain an integral form of the free energy at $T = 0$, \begin{equation} F(0)\; =\; \frac{\hbar}{2}\, \frac{1}{2 \pi i} \oint d\omega\, \frac{\omega\, G'(\omega)}{G(\omega)}\,.\label{eq:contour_integral2} \end{equation} Here, $f(\omega,0) = \hbar\,\omega/2$. Now, we are in a position to exactly formulate the quantum second law within this general treatment; from eqs. (\ref{eq:contour_integral1}) and (\ref{eq:contour_integral2}) with (\ref{eq:gamma_tilde1_2}), we easily find an expression \begin{equation}\label{eq:second_law_in_general_treatment1} K :=\; F(0) - E_s(0)\; =\; \frac{\hbar}{4 \pi}\, \oint d\omega\, \frac{\omega^2\, \tilde{\gamma}'(\omega)}{G(\omega)}\,, \end{equation} and, for the validity of the second law, we have to show $K \geq 0$ for any $N$ (the number of the bath oscillators) and in the limit $N \to \infty$. Here, $K$ can exactly be evaluated from all residues of the integrand on the positive {\em real} axis.
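Before carrying out this residue evaluation, the inequality $K \geq 0$ can already be checked directly for a small discrete bath (a minimal numerical sketch, not part of the original derivation; parameters are arbitrary and $\hbar = 1$): $E_s(0)$ is computed as the ground-state expectation value of $\hat{H}_s$ from the exact ground-state covariances of the coupled system, and $F(0)$ follows from eq. (\ref{eq:helmholtz_energy0}).
\begin{verbatim}
# Minimal numerical sketch (not from the paper): E_s(0), F(0) and
# K = F(0) - E_s(0) for a small discrete bath; hbar = 1, arbitrary parameters.
import numpy as np

rng = np.random.default_rng(1)
hbar, M, w0, N = 1.0, 1.0, 1.0, 8
m  = rng.uniform(0.5, 2.0, N)
wj = np.sort(rng.uniform(0.2, 5.0, N))
c  = rng.uniform(-0.5, 0.5, N)

V = np.zeros((N + 1, N + 1))              # potential matrix, counter-term included
V[0, 0] = M * w0**2 + np.sum(c**2 / (m * wj**2))
V[0, 1:] = V[1:, 0] = -c
V[1:, 1:] = np.diag(m * wj**2)
masses = np.concatenate(([M], m))
S, Sinv = np.diag(masses**-0.5), np.diag(masses**0.5)

lam, U = np.linalg.eigh(S @ V @ S)        # lam = bar{omega}_k^2
bar_w = np.sqrt(lam)
W_half  = U @ np.diag(bar_w) @ U.T
W_mhalf = U @ np.diag(1.0 / bar_w) @ U.T

qq = 0.5 * hbar * S @ W_mhalf @ S         # ground-state <q q^T>
pp = 0.5 * hbar * Sinv @ W_half @ Sinv    # ground-state <p p^T>

Es = pp[0, 0] / (2 * M) + 0.5 * M * w0**2 * qq[0, 0]   # <H_s> at T = 0
F0 = 0.5 * hbar * (np.sum(bar_w) - np.sum(wj))         # F(0) from the normal modes
print(f"E_s(0) = {Es:.6f}   (> hbar*w0/2 = {0.5 * hbar * w0})")
print(f"F(0)   = {F0:.6f},  K = F(0) - E_s(0) = {F0 - Es:.6f}  (>= 0)")
\end{verbatim}
In such a check $K$ should come out non-negative, in line with the closed-form result derived next.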
Substituting (\ref{eq:susceptibility2}) with $\tilde{\chi}(\omega) = -1/\{M\, G(\omega)\}$ and (\ref{eq:gamma_tilde1}) into (\ref{eq:second_law_in_general_treatment1}), we obtain, after a fairly lengthy evaluation of the contour integration (see Appendix~\ref{sec:appendix1} for details), the exact result \begin{equation}\label{eq:second_law_in_general_treatment2} K\; =\; \frac{\hbar}{8 M}\, \sum_{k=0}^N\, \mathcal{A}_k\,, \end{equation} where \begin{eqnarray}\label{eq:second_law_in_general_treatment21} \mathcal{A}_k &=& \bar{\omega}_k\, \frac{\displaystyle \prod_{j=1}^N\, (\bar{\omega}_k^2 - \omega_j^2)}{\displaystyle \prod_{k'=0 \atop (k' \neq k)}^N\, (\bar{\omega}_k^2 - \bar{\omega}_{k'}^2)}\; \times\\ && \sum_{l=1}^N\, \frac{c_l^2}{m_l\,\omega_l^2}\;\, P\left\{\frac{1}{(\omega_l + \bar{\omega}_k)^2}\, +\, \frac{1}{(\omega_l - \bar{\omega}_k)^2}\right\}\,.\nonumber \end{eqnarray} Considering the summands $\mathcal{A}_k$, starting from $k = N$ and keeping in mind the frequency relationship in (\ref{eq:frequency_relation3}), we see that each summand is non-negative and so $K \geq 0$ indeed! Separately from this result for discrete bath modes, we will next discuss the second law for continuous bath modes. To do so, we will consider a continuation of the spectral density $J(\omega)$ from its original form in (\ref{eq:spectral_density1}). \section{The second law for continuous bath modes}\label{sec:models} For a discussion of the second law for a continuous distribution of bath modes, we rewrite eq. (\ref{eq:second_law_in_general_treatment1}) as \begin{equation}\label{eq:conti1} K\; =\; \frac{\hbar}{4 \pi}\, \left(\int_0^{\infty} d\omega\, \frac{\omega^2\; \tilde{\gamma}'_{-}(\omega)}{G_{-}(\omega)}\, -\, \int_0^{\infty} d\omega\, \frac{\omega^2\; \tilde{\gamma}'_{+}(\omega)}{G_{+}(\omega)}\right)\,, \end{equation} where the subscripts $+/-$ denote the branches $(\infty + i \epsilon,\,i \epsilon)$ and $(-i \epsilon,\,\infty - i \epsilon)$, respectively, so that \begin{eqnarray}\label{eq:branch1_1} G_{+}(\omega) &:=& {\textstyle G(\omega)\big{|}_{\omega \to \omega + i\,0^{+}}\; =\; \omega^2\, -\, \omega_0^2\, +\, i\,\frac{J(\omega)}{M}\; -}\nonumber\\ && {\textstyle \frac{\omega}{M} \int_0^{\infty} \frac{d \omega'}{\pi} \frac{J(\omega')}{\omega'}\; P\left(\frac{1}{\omega' + \omega} - \frac{1}{\omega' - \omega}\right)\,,} \end{eqnarray} and $G_{-}(\omega) := G(\omega)\big{|}_{\omega \to \omega - i\,0^{+}} = G_{+}^{\ast}(\omega)$. Here, we used $G(\omega) = \omega^2\,-\,\omega_0^2\,+\,i\,\omega\,\tilde{\gamma}(\omega)$ with eq. (\ref{eq:damping_kernel3}) for $\tilde{\gamma}_+(\omega)$. Therefore, eq. (\ref{eq:conti1}) easily reduces to \begin{equation}\label{eq:second_law_inequality_conti} K\; =\; \frac{\hbar}{2 \pi}\; \mbox{Im} \int_0^{\infty} d\omega\, \frac{\omega^2\, R_+'(\omega)}{G_+(\omega)}\,, \end{equation} where $R_+(\omega) = -i\,\tilde{\gamma}_+(\omega)$. First, for the Ohmic case $(\tilde{\gamma}_+)_{0}(\omega) = \gamma_o$, the prototype of damping, we easily obtain $K_{0} = 0$.
Separately from this result for discrete bath modes, we next discuss the second law for continuous bath modes. To do so, we consider a continuation of the spectral density $J(\omega)$ from its original form in (\ref{eq:spectral_density1}).
\section{The second law for continuous bath modes}\label{sec:models}
For a discussion of the second law for a continuous distribution of bath modes, we rewrite eq. (\ref{eq:second_law_in_general_treatment1}) as
\begin{equation}\label{eq:conti1}
K\; =\; \frac{\hbar}{4 \pi}\, \left(\int_0^{\infty} d\omega\, \frac{\omega^2\; \tilde{\gamma}'_{-}(\omega)}{G_{-}(\omega)}\, -\, \int_0^{\infty} d\omega\, \frac{\omega^2\; \tilde{\gamma}'_{+}(\omega)}{G_{+}(\omega)}\right)\,,
\end{equation}
where the subscripts $+/-$ denote the branches $(\infty + i \epsilon,\,i \epsilon)$ and $(-i \epsilon,\,\infty - i \epsilon)$, respectively, so that
\begin{eqnarray}\label{eq:branch1_1}
G_{+}(\omega) &:=& {\textstyle G(\omega)\big|_{\omega \to \atop \omega + i\,0^{+}}\; =\; \omega^2\, -\, \omega_0^2\, +\, i\,\frac{J(\omega)}{M}\; -}\nonumber\\
&& {\textstyle \frac{\omega}{M} \int_0^{\infty} \frac{d \omega'}{\pi} \frac{J(\omega')}{\omega'}\; P\left(\frac{1}{\omega' + \omega} - \frac{1}{\omega' - \omega}\right)\,,}
\end{eqnarray}
and $G_{-}(\omega) := G(\omega)\big|_{\omega \to \atop \omega - i\,0^{+}} = G_{+}^{\ast}(\omega)$. Here, we used $G(\omega) = \omega^2\,-\,\omega_0^2\,+\,i\,\omega\,\tilde{\gamma}(\omega)$ with eq. (\ref{eq:damping_kernel3}) for $\tilde{\gamma}_+(\omega)$. Therefore, eq. (\ref{eq:conti1}) easily reduces to
\begin{equation}\label{eq:second_law_inequality_conti}
K\; =\; \frac{\hbar}{2 \pi}\; \mbox{Im} \int_0^{\infty} d\omega\, \frac{\omega^2\, R_+'(\omega)}{G_+(\omega)}\,,
\end{equation}
where $R_+(\omega) = -i\,\tilde{\gamma}_+(\omega)$. First, for the Ohmic case $(\tilde{\gamma}_+)_{0}(\omega) = \gamma_o$, which is the prototype for damping, we easily obtain $K_{0} = 0$. In fact, both $(E_s)_{0}(0)$ and $F_{0}(0)$ diverge logarithmically, but they take the same value, namely,
\begin{equation}\label{eq:ohmic_energy_free_energy}
\textstyle (E_s)_{0}(0)\; =\; F_{0}(0)\; =\; \frac{\hbar\,\gamma_o}{2\,\pi}\,\int_0^{\infty} d\omega\, \frac{\omega\,(\omega^2\,+\,\omega_0^2)}{(\omega^2\,-\,\omega_0^2)^2\, +\, (\gamma_o\,\omega)^2}
\end{equation}
(see also the discussion in the last paragraphs of Secs. \ref{sec:drude_model} and \ref{sec:exponential_model}). However, the Ohmic model is not realistic in its strict form because the spectral density of bath modes, $J_{0}(\omega) = M \gamma_o\,\omega$, diverges for large frequencies. We therefore introduce a cut-off frequency $\omega_c$, which leads to a spectral density $J_c(\omega)$ decaying smoothly to zero for large frequencies $\omega > \omega_c$. We will first consider the Drude model, where $J_d(\omega)$ decays polynomially for $\omega > \omega_c = \omega_d$, and next a damping model with $J_e(\omega)$ decaying exponentially for $\omega > \omega_c = \omega_e$. For these damping models, we will be able to show that $K > 0$. Subsequently, we will also consider two classes of damping models without cut-off frequencies $\omega_c$: first, the extended Ohmic models, in which the spectral densities $J(\omega)$ diverge polynomially faster than $J_{0}(\omega)$, and secondly, the extended Drude models with $J_{d,n}(\omega)$ diverging faster or more slowly than $J_{0}(\omega)$. Interestingly, we will observe $K < 0$ for some of these cut-off-free damping models (see Secs. \ref{sec:extended_ohmic_model} and \ref{sec:extended_drude_model}).
\subsection{Drude model $(d)$}\label{sec:drude_model}
We briefly review the second law in the Drude model considered in \cite{FOR06}; it is convenient to adopt, in place of $(\omega_0, \omega_d, \gamma_o)$, the parameters $({\mathbf w}_0, \Omega, \gamma)$ through the relations
\begin{eqnarray}\label{eq:parameter_change0}
&\textstyle \omega_0^2\; :=\; {\mathbf w}_0^2\; \frac{\Omega}{\Omega\, +\, \gamma}\,;\; \omega_d\; :=\; \Omega\, +\, \gamma\,;&\nonumber\\
&\textstyle \gamma_o\; :=\; \gamma\, \frac{\Omega\, (\Omega\, +\, \gamma)\, +\, {\mathbf w}_0^2}{(\Omega\, +\, \gamma)^2}\,.&
\end{eqnarray}
Substituting eq. (\ref{eq:drude_gamma1}) with (\ref{eq:parameter_change0}) into eq. (\ref{eq:gamma_tilde1_2}), we obtain the susceptibility
\begin{eqnarray}
\hspace*{-.7cm}&&\tilde{\chi}_d(\omega)\nonumber\\
\hspace*{-.7cm}&=& -\frac{1}{M}\, \frac{\omega\, +\, i\,\omega_d}{\omega^3\, +\, i\,\omega_d\, \omega^2\, -\, (\omega_0^2\, +\, \gamma_o\, \omega_d)\, \omega\, -\, i\,\omega_0^2\, \omega_d}\label{eq:susceptibility_drude01}\\
\hspace*{-.7cm}&=& -\frac{1}{M}\, \frac{\omega\, +\, i\,(\Omega\, +\, z_1\, +\, z_2)}{(\omega\, +\, i \Omega) (\omega\, +\, i z_1) (\omega\, +\, i z_2)}\,,\label{eq:susceptibility_drude1}
\end{eqnarray}
where $z_1 = \gamma/2 + i {\mathbf w}_1$ and $z_2 = \gamma/2 - i {\mathbf w}_1$ with ${\mathbf w}_1^2 = {\mathbf w}_0^2 - (\gamma/2)^2$. This gives us $(G_+)_d(\omega) = -1/\{M\,\tilde{\chi}_d(\omega)\}$ for eq. (\ref{eq:second_law_inequality_conti}). By means of eq. (\ref{eq:susceptibility_drude1}), we can even obtain closed expressions for both $(E_s)_d(0)$ from (\ref{eq:excess_energy1}) and $F_d(0)$ from (\ref{eq:helmholtz_energy1}). We give the detailed derivation of these expressions in Appendix~\ref{sec:appendix2}; they will also be used in Sec. \ref{sec:extended_drude_model}.
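The closed expressions collected in Appendix~\ref{sec:appendix2} also allow for a direct numerical evaluation of $K_d$. A minimal Python sketch based on the integral representation (\ref{eq:drude_k_q__old_parameter_appendix}) derived there is given below; the parameter values are illustrative, and the output should be comparable, up to numerical error, with the corresponding $(d,0)$ entry of Table~\ref{tab:table2}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

hbar, gamma_o = 1.0, 1.0
omega0, omega_d = 1.0, 1.0                   # one of the pairs used in Table 2
l0, ld = omega0/gamma_o, omega_d/gamma_o     # lambda_0 and lambda_d

def integrand(lam):
    num = 2*ld**2*lam**5 - (2*l0**2 + ld)*ld**2*lam**3
    den = ((lam**2 + ld**2)*(lam**2 - l0**2) - ld*lam**2)**2 + (ld**2*lam)**2
    return num/den

Kd = hbar*gamma_o/(2*np.pi)*quad(integrand, 0.0, np.inf, limit=200)[0]
Eg = hbar*omega0/2
print("K_d * pi/(gamma_o E_g) =", Kd*np.pi/(gamma_o*Eg))
\end{verbatim}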
It has been shown numerically in \cite{FOR06} that $(E_s)_d(0)$ in (\ref{eq:energy_drude1}) is actually greater than $E_g = \frac{\hbar\,{\mathbf w}_0}{2}\sqrt{\frac{\Omega}{\Omega\, +\, \gamma}}$, and that $F_d(0)$ in (\ref{eq:helmholtz_energy2_appendix}) is in turn greater than $(E_s)_d(0)$, i.e., $K_d > 0$. For later purposes, we will also evaluate $K_d$ explicitly for various pairs $(\omega_0, \omega_d)$ (see Table \ref{tab:table2} in Sec. \ref{sec:extended_drude_model}). It is noted that in the limit $\omega_d \to \infty$ (equivalently, $\Omega \to \infty$), we have $K_d \to \frac{\gamma}{\pi {\mathbf w}_0} E_g$ (see Appendix~\ref{sec:appendix2}). From a comparison between $\tilde{\gamma}_d(\omega)$ and $\tilde{\gamma}_{0}(\omega)$ ({\em or}, equivalently, $J_d(\omega)$ and $J_0(\omega)$), one might be tempted to interpret this result as $K_{0} \to \frac{\gamma}{\pi {\mathbf w}_0} E_g$. This is misleading, however: $\tilde{\gamma}_d(\omega)$ behaves like the Ohmic case only for small frequencies, $\omega \ll \omega_d$, which corresponds to $\gamma_d(t) \to \gamma_{0}(t)$ only for large times. Actually, $\gamma_d(t) = \gamma_o\,\omega_d\, e^{-\omega_d\,t}$ with $\omega_d \to \infty$ does not reduce to $\gamma_{0}(t) = 2\,\gamma_o\,\delta(t) = \lim_{\omega_d \to \infty} \frac{2}{\sqrt{\pi}}\,\gamma_o\,\sqrt{\omega_d}\, e^{-\omega_d\,t^2}$. For the evaluation of $K$, however, all frequencies $0 \leq \omega < \infty$ have to be considered. Therefore, we indeed get $\lim_{\omega_d \to \infty}\,K_d \gneq K_{0} = 0$.
\subsection{Exponentially decaying model $(e)$}\label{sec:exponential_model}
We now consider a damping model with $J_e(\omega) = M\,\gamma_o\,\omega\, e^{-\omega/\omega_e}$, which, in the limit $\omega_e \to \infty$, clearly reduces to $J_{0}(\omega)$ for small frequencies. Substituting this into eq. (\ref{eq:damping_kernel2}), we obtain
\begin{equation}\label{eq:exponential_decay1}
\gamma_e(t)\; =\; \frac{2}{\pi}\, \frac{\gamma_o\, \omega_e}{1\,+\,(\omega_e\,t)^2}\,.
\end{equation}
Applying the Laplace transform \cite{ROB66} to eq. (\ref{eq:exponential_decay1}) with $s = -i\,\omega + 0^{+}$, we find that
\begin{eqnarray}\label{eq:exponential_frequency1}
\tilde{\gamma}_e(\omega) &=& \gamma_o\, e^{-\omega/\omega_e}\; +\\
&& i\, \frac{\gamma_o}{\pi} \left\{e^{\omega/\omega_e}\, E_1\left(\frac{\omega}{\omega_e}\right)\, +\, e^{-\omega/\omega_e}\, \mbox{Ei}\left(\frac{\omega}{\omega_e}\right)\right\}\,,\nonumber
\end{eqnarray}
(see Appendix \ref{sec:appendix3} for the detailed derivation). By using this together with $E_1'(y) = -E_0(y) = -e^{-y}/y$, we can easily obtain $(R_{+}')_e(\omega)$ and $(G_{+})_e(\omega)$; introducing the dimensionless variable $\lambda = \omega/\omega_e$, we then arrive at the expression
\begin{equation}\label{eq:exponential_K_q}
K_e\; =\; \frac{\hbar\,\gamma_o\,\omega_e^2}{2\,\pi^2}\;\, \mbox{Im} \int_0^{\infty} d\lambda\ \frac{f_1(\lambda)}{f_2(\lambda)}\;,
\end{equation}
where
\begin{eqnarray}
f_1(\lambda) &=& \lambda^2\, \{e^{\lambda}\, E_1(\lambda)\, -\, e^{-\lambda}\, \mbox{Ei}(\lambda)\, +\, i\,\pi e^{-\lambda}\}\,,\\
f_2(\lambda) &=& \omega_e^2\,\lambda^2\, -\, \omega_0^2\, -\, \frac{\gamma_o\,\omega_e}{\pi}\, \lambda\, \{e^{\lambda}\, E_1(\lambda)\, +\, e^{-\lambda}\, \mbox{Ei}(\lambda)\}\nonumber\\
&& +\, i\,\omega_e\,\gamma_o\,\lambda\,e^{-\lambda}\,.
\end{eqnarray}
We numerically evaluate the integral in (\ref{eq:exponential_K_q}) for various pairs $(\gamma_o, \omega_e)$ and find that $K_e > 0$ (see Table~\ref{tab:table1}). Since $\tilde{\gamma}_e(\omega)$ behaves like the Ohmic case for small frequencies $\omega \ll \omega_e$, it is also interesting to consider the leading behavior of $K_e$ for $\omega_e \to \infty$; from eq. (\ref{eq:exponential_K_q}) we easily get $\lim_{\omega_e \to \infty}\,K_e = \frac{\hbar\, \gamma_o}{2\,\pi} \neq 0$, which also differs from $\lim_{\omega_d \to \infty}\,K_d$ in Sec. \ref{sec:drude_model}. This confirms that these limiting values cannot reveal the Ohmic counterpart $K_{0}$.
\begin{table}[htb]
\caption{$K_e/E_g$ for various pairs $(\gamma_o, \omega_e)$, where $E_g = \frac{\hbar}{2}$ (i.e., $\omega_0 = 1$); $\lim_{\omega_e \to \infty}\,K_e/E_g = \gamma_o/\pi$. \label{tab:table1}}
\begin{center}
\begin{tabular}{|c||cccc|}
\hline
$\omega_e$ & $\gamma_o = 0.5$ & $\gamma_o = 1$ & $\gamma_o = 2$ & $\gamma_o = 5$\\
\hline
0.5 & 0.04225 & 0.08186 & 0.15604 & 0.34038\\
1 & 0.06130 & 0.11838 & 0.22117 & 0.47348\\
5 & 0.10600 & 0.20348 & 0.37899 & 0.81614\\
10 & 0.12131 & 0.23326 & 0.43819 & 0.96224\\
50 & 0.14414 & 0.28018 & 0.54302 & 1.25567\\
80 & 0.14789 & 0.28896 & 0.56377 & 1.32020\\
\hline
\hline
$\infty$ & 0.15915 & 0.31831 & 0.63662 & 1.59155\\
\hline
\end{tabular}
\end{center}
\end{table}
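A minimal numerical sketch of the integral in eq. (\ref{eq:exponential_K_q}) is given below; it relies on SciPy's exponential-integral routines, and the finite upper cutoff exploits the exponential decay of the imaginary part of the integrand. The parameter values are illustrative, and the output should be comparable, up to numerical error, with the corresponding entry of Table~\ref{tab:table1}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1, expi        # E_1(x) and Ei(x)

hbar, omega0 = 1.0, 1.0                     # so that E_g = hbar/2, as in Table 1
gamma_o, omega_e = 1.0, 1.0

def im_f1_over_f2(lam):
    a = np.exp(lam)*exp1(lam)               # e^{lambda} E_1(lambda)
    b = np.exp(-lam)*expi(lam)              # e^{-lambda} Ei(lambda)
    f1 = lam**2*(a - b + 1j*np.pi*np.exp(-lam))
    f2 = (omega_e**2*lam**2 - omega0**2
          - gamma_o*omega_e/np.pi*lam*(a + b)
          + 1j*omega_e*gamma_o*lam*np.exp(-lam))
    return (f1/f2).imag

# Im(f1/f2) decays exponentially, so a finite cutoff suffices numerically
val, _ = quad(im_f1_over_f2, 1e-12, 50.0, limit=400)
Ke = hbar*gamma_o*omega_e**2/(2*np.pi**2)*val
print("K_e/E_g =", Ke/(hbar/2))
\end{verbatim}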
\subsection{Extended Ohmic models $(p)$}\label{sec:extended_ohmic_model}
Let us consider damping models with $J_p(\omega) = M\,\gamma_o\,\omega\,(\omega/\gamma_o)^p$, which diverge polynomially with $\omega$. Clearly, the case $p = 0$ is Ohmic. First, we have $J_1(\omega) = M\,\omega^2$. By using the relation $\int_0^{\infty} d y\, e^{i k y} = \pi\,\delta(k) + i\, P(1/k)$, we easily obtain $\gamma_1(t) = -\frac{2}{\pi}\, P\frac{1}{t^2}$, which does not lead to a well-defined $\tilde{\gamma}_1(\omega)$. The case $p = 1$ is therefore physically not acceptable. It is not difficult to show that all cases with $p$ odd are not acceptable. Next, we consider the case $p = 2$. It can be shown that $\gamma_2(t) = -\frac{2}{\gamma_o}\,\delta''(t)$ and $(\tilde{\gamma}_+)_2(\omega) = \frac{\omega^2}{\gamma_o} - i\,\frac{2\,\delta(0)}{\gamma_o}\,\omega$. Using this in eq. (\ref{eq:second_law_inequality_conti}), we obtain
\begin{equation}\label{eq:higher_ohm1}
K_2\; =\; \frac{\hbar}{2\,\pi}\; \mbox{Im} \int_0^{\infty} d\omega\, \frac{2\, \omega^2\, \{-\omega + i\,\delta(0)\}}{\omega^3 - i\,\alpha\,\omega^2 + i\,\beta}\,,
\end{equation}
where $\alpha = \gamma_o + 2\,\delta(0)$ and $\beta = \omega_0^2\,\gamma_o$. The integral in (\ref{eq:higher_ohm1}) diverges logarithmically. This divergence obviously stems from the fact that both $(E_s)_2(0)$ and $F_2(0)$ diverge logarithmically; however, in contrast to the Ohmic case, $(E_s)_2(0) \neq F_2(0)$. In fact, we find that $K_2 = -\frac{\hbar}{\pi}\,\{\delta(0) + \gamma_o\} \times \infty < 0$, which clearly means that the excess energy $(E_s)_2(0)$ is greater than the minimum work ({\em or} the work in a reversible process), $F_2(0)$, required to couple a system to a bath. This violation of the second law in the reversible process may be understood to emerge from the large amount of energy offered by the bath with $J_2(\omega)$ diverging with $\omega$. The infinite value of $K_2$ suggests, however, that this model is strictly unrealistic.
\subsection{Extended Drude models $(d,n)$}\label{sec:extended_drude_model}
We now consider a more general class of spectral densities than $J_d(\omega)$, namely
\begin{equation}\label{eq:general_form_spectral_density1}
\textstyle J_{d,n}(\omega)\; =\; \left(\frac{\omega}{\omega_d}\right)^n\, J_d(\omega)\; =\; M \gamma_o\,\frac{\omega^{n+1}}{\omega_d^{n-2}\,(\omega^2\, +\, \omega_d^2)}\,.
\end{equation}
Let us begin with odd $n$. First, $n = 1$: we then have $J_{d,1}(\omega) = M \gamma_o\,\omega_d\,\omega^2/(\omega^2 + \omega_d^2)$, which converges to the non-zero constant $M \gamma_o\,\omega_d$ for large frequencies. Substituting this into eq. (\ref{eq:damping_kernel2}), we obtain, after some calculation (see Appendix~\ref{sec:appendix3} for details),
\begin{eqnarray}\label{eq:gamma_n_1_extended_drude}
\gamma_{d,1}(t) &=& \frac{\gamma_o\,\omega_d}{\pi}\, \left[\,\{\mbox{Ei}(\omega_d t) + E_1(\omega_d t)\}\,\sinh(\omega_d t)\; -\right.\nonumber\\
&& \left.\{\mbox{Ei}(\omega_d t) - E_1(\omega_d t)\}\,\cosh(\omega_d t)\,\right]\,.
\end{eqnarray}
Applying the Laplace transform \cite{ROB66} to this, we get
\begin{equation}\label{eq:laplace_n_1_extended_drude}
\textstyle (\tilde{\gamma}_+)_{d,1}(\omega)\; =\; \frac{\gamma_o\,\omega_d\,\omega}{\omega^2 + \omega_d^2}\, +\, i\,\frac{2}{\pi}\,\frac{\gamma_o\,\omega_d\,\omega}{\omega^2 + \omega_d^2}\, \ln\left(\frac{\omega}{\omega_d}\right)\,.
\end{equation}
Using eq. (\ref{eq:branch1_1}) with (\ref{eq:laplace_n_1_extended_drude}), we arrive at the expression in (\ref{eq:second_law_inequality_conti})
\begin{equation}\label{eq:k_q_n_1_extended_drude}
K_{d,1}\; =\; \frac{\hbar \gamma_o}{2\,\pi}\; \mbox{Im}\int_0^{\infty} d \lambda\,\frac{g_1(\lambda)}{g_2(\lambda)}\,,
\end{equation}
where
\begin{eqnarray}\label{eq:numerator1_n_1_extended_drude}
g_1(\lambda) &=& \textstyle \lambda^2\,\left[\,\frac{2\,\lambda_d}{\pi}\,\{(\lambda_d^2 - \lambda^2)\, \ln\left(\frac{\lambda}{\lambda_d}\right)\, +\, \lambda^2\, +\, \lambda_d^2\}\; +\right.\nonumber\\
&& \left.i\,\lambda_d\,(\lambda^2 - \lambda_d^2)\,\right]\,,\nonumber\\
g_2(\lambda) &=& \textstyle (\lambda^2 + \lambda_d^2)\,\left\{(\lambda^2 - \lambda_0^2)\,(\lambda^2 + \lambda_d^2)\; -\right.\nonumber\\
&& \textstyle \left.\frac{2\,\lambda_d\,\lambda^2}{\pi}\, \ln\left(\frac{\lambda}{\lambda_d}\right)\, +\, i\,\lambda_d\,\lambda^2\right\}\,.
\end{eqnarray}
Here, we have introduced the dimensionless variable $\lambda = \omega/\gamma_o$, with $\lambda_0 = \omega_0/\gamma_o$ and $\lambda_d = \omega_d/\gamma_o$. We numerically evaluate $K_{d,1}$ for various pairs $(\lambda_0, \lambda_d)$ and find that $K_{d,1} > 0$ (see Table \ref{tab:table2}). It is also noted that in the limit $\lambda_d$ (or $\omega_d$) $\to \infty$, the spectral density $J_{d,1}(\omega)$ with $\gamma_o = \omega_d$ reduces to $J_{1}(\omega)$ of Sec. \ref{sec:extended_ohmic_model} for small frequencies. As discussed above, however, the case $(d,1)$ is a well-defined damping model, whereas the model with $J_1(\omega)$ is not.
\begin{table}[htb]
\caption{$K_{d}\; \pi/\gamma_o E_g$ from (\ref{eq:drude_k_q__old_parameter_appendix}) versus $K_{d,1}\; \pi/\gamma_o E_g$ from (\ref{eq:k_q_n_1_extended_drude}) for various pairs $(\omega_0, \omega_d)$ with $\gamma_o = 1$, where $E_g = \frac{\hbar\,\omega_0}{2}$; $(d,0)$ denotes the Drude model.
\label{tab:table2}}
\begin{center}
\begin{tabular}{|c|c|c||c|c|c|}
\hline
$(\omega_0, \omega_d)$ & $(d, 0)$ & $(d, 1)$ & $(\omega_0, \omega_d)$ & $(d, 0)$ & $(d, 1)$\\
\hline\hline
$(0.5, 0.5)$ & $0.84275$ & $0.50184$ & $(1, 0.5)$ & $1.21124$ & $0.25468$\\
\hline
$(0.5, 1)$ & $0.61942$ & $0.45203$ & $(1, 1)$ & $0.45318$ & $0.26521$\\
\hline
$(0.5, 5)$ & $1.29483$ & $0.34454$ & $(1, 5)$ & $0.61427$ & $0.17800$\\
\hline
$(0.5, 10)$ & $1.52266$ & $0.24507$ & $(1, 10)$ & $0.73767$ & $0.05454$\\
\hline\hline
$(5, 0.5)$ & $0.53089$ & $0.02832$ & $(10, 0.5)$ & $0.28968$ & $0.00936$\\
\hline
$(5, 1)$ & $0.47010$ & $0.04106$ & $(10, 1)$ & $0.27637$ & $0.01345$\\
\hline
$(5, 5)$ & $0.09790$ & $0.05476$ & $(10, 5)$ & $0.13428$ & $0.02346$\\
\hline
$(5, 10)$ & $0.09537$ & $0.04976$ & $(10, 10)$ & $0.04792$ & $0.02347$\\
\hline
\end{tabular}
\end{center}
\end{table}
Let $n = 3$ next. We have $J_{d,3}(\omega) = M \gamma_o\,\omega_d^{-1}\,\omega^4/(\omega^2 + \omega_d^2)$. After a straightforward calculation, we obtain
\begin{equation}\label{eq:gamma_n_3_extended_drude}
\gamma_{d,3}(t)\; =\; -\gamma_{d,1}(t)\, -\, \frac{2\,\gamma_o}{\pi\,\omega_d}\, \frac{P}{t^2}\,,
\end{equation}
which indicates that this case is physically not acceptable. Similarly, we can show that all cases with odd $n > 3$ are not acceptable either. Now, let $n$ be even. We then find that
\begin{equation}\label{eq:damping_kernel_general_drude1}
\gamma_{d,2 m}(t)\; =\; (-1)^{m} \left\{\gamma_d(t)\; -\; \sum_{j=1}^m \frac{\gamma_{0}^{\{2(j-1)\}}(t)}{\omega_d^{2(j-1)}}\right\}\,,
\end{equation}
where $m = 1, 2, \cdots$, and $\gamma_{0}^{\{2(j-1)\}}(t)$ denotes the $2(j-1)$-th time derivative of $\gamma_{0}(t)$. We begin with the simple case $(m = 1)$, with $J_{d,2}(\omega) = M \gamma_o\,\omega^3/(\omega^2 + \omega_d^2)$. This case is particularly interesting because $J_{d,2}(\omega)$ diverges for large frequencies, but more slowly than $J_0(\omega)$ of the Ohmic case, whereas all $J_{d,2 m}(\omega)$ with $m > 1$ diverge faster than $J_0(\omega)$; $J_{d,2}(\omega)$ may thus be said to be only {\em weakly} divergent. Since Table \ref{tab:table2} shows that $K_{d,0} > K_{d,1} > 0$, it is natural to ask whether we obtain $K_{d,1} > K_{d,2} > 0 = K_0$ here, or rather $K_{d,1} > 0 \geq K_{d,2}$. In fact, we have the interesting relation $(\tilde{\gamma}_+)_{d,2}(\omega) = -\tilde{\gamma}_d(\omega) + \tilde{\gamma}_{0}(\omega)$ from eq. (\ref{eq:damping_kernel_general_drude1}), and so $(R_+')_{d,2}(\omega) = -(R_+')_d(\omega)$ for eq. (\ref{eq:second_law_inequality_conti}).
Introducing the parameters $({\mathbf w}_0, \Omega, \gamma)$ defined by the relations
\begin{equation}\label{eq:parameter_change_extended_drude}
\textstyle \omega_d\,\omega_0^2 := \Omega\,{\mathbf w}_0^2\,;\; \omega_d\, + \gamma_o := \Omega\, +\, \gamma\,;\; \omega_0^2 := \Omega\,\gamma\, +\, {\mathbf w}_0^2
\end{equation}
(note that these differ from the relations in (\ref{eq:parameter_change0})), we easily obtain
\begin{eqnarray}
(\tilde{G}_+)_{d,2}(\omega) &=& \frac{\omega^3\, +\, i\,(\gamma_o + \omega_d)\, \omega^2\, -\, \omega_0^2\, \omega\, -\, i\,\omega_0^2\,\omega_d}{\omega\, +\, i\,\omega_d}\nonumber\\
&=& \frac{(\omega\, +\, i \Omega)\,(\omega\, +\, i z_1)\,(\omega\, +\, i z_2)}{\omega\, +\, i \Omega\,{\mathbf w}_0^2/(\Omega\,\gamma\, +\, {\mathbf w}_0^2)}\,,\label{eq:susceptibility_extended_drude2}
\end{eqnarray}
where $z_1 = \frac{\gamma}{2} + i\,{\mathbf w}_1$ and $z_2 = \frac{\gamma}{2} - i\,{\mathbf w}_1$ with ${\mathbf w}_1^2 = {\mathbf w}_0^2 - (\frac{\gamma}{2})^2$. By using eqs. (\ref{eq:second_law_inequality_conti}) and (\ref{eq:susceptibility_extended_drude2}), we arrive at the expression
\begin{equation}\label{eq:k_q_extended_drude1}
K_{d,2} \; =\; \frac{\hbar}{2\,\pi}\,\frac{\Omega\,{\mathbf w}_0^2}{(\Omega\,\gamma + {\mathbf w}_0^2)\,(\Omega\,\gamma - \Omega^2 - {\mathbf w}_0^2)}\; C({\mathbf w}_0, \Omega, \gamma)
\end{equation}
where
\begin{eqnarray}
C({\mathbf w}_0, \Omega, \gamma) &=& \textstyle \gamma\,({\mathbf w}_0^2 - \Omega^2)\, \frac{1}{{\mathbf w}_1}\, \arctan \frac{2\,{\mathbf w}_1}{\gamma}\, +\nonumber\\
&& (\Omega^2 + {\mathbf w}_0^2 - \Omega\,\gamma)\, \ln(\Omega\,\gamma + {\mathbf w}_0^2)\, -\nonumber\\
&& 2\,(\Omega^2 + {\mathbf w}_0^2)\, \ln {\mathbf w}_0\, +\, 2\,\Omega\,\gamma\, \ln \Omega\,.
\end{eqnarray}
In the case that ${\mathbf w}_1$ is complex-valued (${\mathbf w}_0 < \gamma/2$), i.e., in the overdamped case, this has to be understood in terms of the relation $\frac{1}{{\mathbf w}_1} \arctan \frac{2\,{\mathbf w}_1}{\gamma} = \frac{1}{2\,\bar{{\mathbf w}}_1} \ln\left(\frac{\gamma + 2\,\bar{{\mathbf w}}_1}{\gamma - 2\,\bar{{\mathbf w}}_1}\right)$, where $\bar{{\mathbf w}}_1^2 = (\frac{\gamma}{2})^2 - {\mathbf w}_0^2$. Interestingly enough, we here observe $K_{d,2} < 0$ (see Fig. \ref{fig:fig1}), which would imply a violation of the second law in the reversible process for this damping model. This negativity may be understood, with the aid of the relation $(R_+')_{d,2}(\omega) = -(R_+')_d(\omega)$, from a comparison between
\begin{eqnarray}\label{eq:k_q_compare1}
K_{d,2} &=& \textstyle \frac{\hbar}{2\,\pi}\, \omega_d\,\gamma_o\; \times\\
&& \textstyle \mbox{Im} \int_0^{\infty} d\omega\, \frac{\omega^2}{(\omega\, +\, i \omega_d)\,(\omega\, +\, i \Omega)\,(\omega\, +\, i z_1)\,(\omega\, +\, i z_2)}\nonumber
\end{eqnarray}
with $({\mathbf w}_0, \Omega, \gamma)$ in eq.
(\ref{eq:parameter_change0}) and $\omega_d = \Omega + \gamma$, and
\begin{eqnarray}\label{eq:k_q_compare2}
K_{d} &=& \textstyle -\frac{\hbar}{2\,\pi}\, \omega_d\,\gamma_o\; \times\\
&& \textstyle \mbox{Im} \int_0^{\infty} d\omega\, \frac{\omega^2}{(\omega\, +\, i \omega_d)\,(\omega\, +\, i \Omega)\,(\omega\, +\, i z_1)\,(\omega\, +\, i z_2)}\, >\, 0\nonumber
\end{eqnarray}
with $({\mathbf w}_0, \Omega, \gamma)$ in (\ref{eq:parameter_change_extended_drude}) and $\omega_d = {\mathbf w}_0^2\,\Omega/(\Omega\,\gamma + {\mathbf w}_0^2)$. In fact, we can also show, by using eq. (\ref{eq:helmholtz_energy1}) with (\ref{eq:susceptibility_extended_drude2}), that $F_{d,2}(0) > F_{d}(0)$. Next, we briefly consider the case $(d,4)$. We then have $(\tilde{\gamma}_+)_{d,4}(\omega) = \tilde{\gamma}_{d}(\omega) - \gamma_o + (\gamma_o\,\omega^2 - 2\,i \gamma_o\,\delta(0)\,\omega)/\omega_d^2$ from eq. (\ref{eq:damping_kernel_general_drude1}). After a fairly lengthy calculation with this, we eventually obtain an explicit expression for $K_{d,4} = \frac{\hbar}{2\,\pi} \int_0^{\infty} d\omega\, f_{d,4}(\omega)$, where $f_{d,4}(\omega) \to -2\,\{\delta(0) + \omega_d^2/\gamma_o\}/\omega$ for large $\omega$. From this asymptotic form, we easily see that $K_{d,4} \to -\infty$, which indicates a violation of the second law. The infinite value of $K_{d,4}$ suggests, however, that this case is strictly unrealistic. From the above results for the cases $(d,n)$, including the case $p = 2$ of Sec.~\ref{sec:extended_ohmic_model}, we may say that, aside from physically unacceptable damping models, the divergence (weak or strict) of the spectral density $J(\omega)$ for large frequencies can lead to a violation of the second law. It is also interesting to note here that $\text{Re}\,(\tilde{\gamma}_+)_{d,2}(\omega) = \gamma_o\,\omega^2/(\omega^2 + \omega_d^2) > 0$ and $\text{Re}\,(\tilde{\gamma}_+)_{d,4}(\omega) = \gamma_o\,\omega_d^{-2}\,\omega^4/(\omega^2 + \omega_d^2) > 0$. It is known \cite{MEI65} that a violation of the positivity of $\text{Re}\,\tilde{\gamma}_+(\omega)$ is tantamount to a violation of the second law in the thermodynamic limit (where the coupling strength between system and bath vanishes). We see here, however, that this positivity would not be a sufficient condition for the second law in the quantum regime (with a non-negligible finite coupling strength between system and bath).
\section{Conclusions}
In summary, we have extensively studied the second law within the scheme of quantum Brownian motion. It has been observed that, owing to the system--bath entanglement, a system oscillator coupled to a bath at zero temperature has a higher average energy than the ground-state energy of an uncoupled harmonic oscillator. For a damping model with an arbitrary number of discrete bath modes, and for damping models with continuous bath modes and cut-off frequencies, this apparent excess energy has been found to be less than the minimum work required to couple the system to the bath. Therefore, the second law holds in the quantum regime. We also found, on the other hand, that a violation of the second law may occur for some cut-off-frequency-free damping models, which are, however, physically unrealistic; in particular, the case $(d,2)$ of Sec. \ref{sec:extended_drude_model}, whose spectral density of bath modes diverges more slowly than that of the Ohmic model (the prototype for damping), has a finite negative value of $K_{d,2}$.
The further question of the validity of the quantum second law for a broader class of quantum systems than the quantum Brownian motion considered here, in particular non-linear systems coupled to a bath, clearly remains open.
\section*{Acknowledgments}
One of us (I. K.) is grateful to G.J. Iafrate for some interesting remarks.
\onecolumn{
\appendix\section{: A detailed derivation of eq. (\ref{eq:second_law_in_general_treatment2})}
\label{sec:appendix1}
By substituting eq. (\ref{eq:susceptibility2}) with $\tilde{\chi}(\omega) = -1/M G(\omega)$ and eq. (\ref{eq:gamma_tilde1}) into $K$ in (\ref{eq:second_law_in_general_treatment1}), we immediately have
\begin{equation}\label{eq:appendix_second_law_in_general_treatment1}
K\; =\; \frac{-i \hbar}{4 \pi M}\, \oint d\omega\; \omega^2\; \frac{{\displaystyle \prod_{j=1}^N\,(\omega^2 - \omega_j^2)}}{{\displaystyle \prod_{k=0}^N\,(\omega^2 - \bar{\omega}_k^2)}}\;\, \sum_{l=1}^N\, \frac{c_l^2}{2\,m_l\,\omega_l^2}\; \left\{\frac{1}{(\omega_l + \omega)^2}\, +\, \frac{1}{(\omega_l - \omega)^2}\right\}\,,
\end{equation}
where the integration path over $\omega$ is a loop around the {\em positive real axis} in the complex $\omega$-plane, consisting of the two branches $(\infty + i \epsilon,\,i \epsilon)$ and $(-i \epsilon,\,\infty - i \epsilon)$. The contour integral can be evaluated by means of the residues at all poles $\omega = \bar{\omega}_k$ of the integrand. In doing so, there is no residue at $\omega = \bar{\omega}_{k'}$ when $\omega_l = \bar{\omega}_{k'}$. Accordingly, we finally obtain
\begin{equation}\label{eq:appendix_second_law_in_general_treatment2}
K\; =\; \frac{\hbar}{4 M}\, \sum_{k=0}^N\, \mathcal{A}_k\;\;;\;\; \mathcal{A}_k\; =\; \mathcal{A}_k^{(1)} \cdot \mathcal{A}_k^{(2)}\,,
\end{equation}
where
\begin{equation}\label{eq:appendix_second_law_in_general_treatment3}
\mathcal{A}_k^{(1)}\; =\; \bar{\omega}_k\; \frac{\displaystyle \prod_{j=1}^N\, (\bar{\omega}_k^2 - \omega_j^2)}{\displaystyle \prod_{k'=0 \atop (\neq k)}^N\, (\bar{\omega}_k^2 - \bar{\omega}_{k'}^2)}\;\;;\;\; \mathcal{A}_k^{(2)}\; =\; \sum_{l=1}^N\, \frac{c_l^2}{2\,m_l\,\omega_l^2}\;\, P\left\{\frac{1}{(\omega_l + \bar{\omega}_k)^2}\, +\, \frac{1}{(\omega_l - \bar{\omega}_k)^2}\right\}\,.
\end{equation}
Here, $\mathcal{A}_k^{(2)}$ is obviously positive. Therefore, the non-negativity of $\mathcal{A}_k$ is completely determined by the factor $\mathcal{A}_k^{(1)}$. Keeping in mind the frequency relation in (\ref{eq:frequency_relation3}), we first consider $\mathcal{A}_N^{(1)}$, which is clearly non-negative. Next, for $k=N-1$, we have
\begin{equation}\label{eq:appendix_second_law_in_general_treatment4}
\mathcal{A}_{N-1}^{(1)}\; =\; \bar{\omega}_{N-1}\; \frac{\displaystyle \prod_{j=1}^{N-1}\, (\bar{\omega}_{N-1}^2 - \omega_j^2)}{\displaystyle \prod_{k=0}^{N-2}\, (\bar{\omega}_{N-1}^2 - \bar{\omega}_{k}^2)}\; \times\; \frac{\bar{\omega}_{N-1}^2 - \omega_N^2}{\bar{\omega}_{N-1}^2 - \bar{\omega}_N^2}\,.
\end{equation}
The first factor on the right-hand side is non-negative, and so is the second factor, whose numerator and denominator are both negative. Therefore, we get $\mathcal{A}_{N-1} \geq 0$.
Along the same lines, we can straightforwardly show that each summand $\mathcal{A}_k$ with $k = N-2, N-3, \cdots, 0$ is non-negative, which yields $K \geq 0$.
\section{: No violation of the second law in the Drude model}\label{sec:appendix2}
In the Drude model, we can evaluate the system energy $(E_s)_d(0)$ and the free energy $F_d(0)$ explicitly and in closed form. From eq. (\ref{eq:susceptibility_drude1}) we see that for ${\mathbf w}_0 > \gamma/2$ ({\em underdamped} case), $z_1$ and $z_2$ are complex conjugates of each other, while for ${\mathbf w}_0 \leq \gamma/2$ ({\em overdamped} case), both $z_1$ and $z_2$ are real-valued. Therefore, $\mbox{Im}\, \{(R_+')_d(\omega)/(G_+)_d(\omega)\}$ in (\ref{eq:second_law_inequality_conti}), and thus the explicit expression of $K_d$ in terms of the parameters $({\mathbf w}_0, \Omega, \gamma)$, differ between the two cases. By using eq. (\ref{eq:susceptibility_drude1}) we obtain
\begin{eqnarray}
\int_0^{\infty} d\omega\; \text{Im} \chi_d(\omega) &=& \left\{
\begin{array}{ll}
\displaystyle \frac{1}{M} \frac{({\mathbf w}_0^2 + \Omega^2 - \gamma^2/2)\, \arccos (\gamma/2 {\mathbf w}_0)\, -\, \gamma\,{\mathbf w}_1\, \ln (\Omega/{\mathbf w}_0)}{{\mathbf w}_1\, ({\mathbf w}_0^2\, -\, \Omega\,\gamma\, +\, \Omega^2)}&\hspace*{.3cm} \text{for}\hspace*{.3cm} {\mathbf w}_0 > \gamma/2\\\label{eq:susceptibility_int1}\\
\displaystyle \frac{1}{M} \frac{(\gamma^2/2 - {\mathbf w}_0^2 - \Omega^2)/2 \cdot \ln \left(\frac{\gamma/2\, -\, \bar{{\mathbf w}}_1}{\gamma/2\, +\, \bar{{\mathbf w}}_1}\right)\, -\, \gamma\,\bar{{\mathbf w}}_1\, \ln (\Omega/{\mathbf w}_0)}{\bar{{\mathbf w}}_1\, ({\mathbf w}_0^2\, -\, \Omega\,\gamma\, +\, \Omega^2)}&\hspace*{.3cm} \text{for}\hspace*{.3cm} {\mathbf w}_0 \leq \gamma/2
\end{array}\right.\\[.3cm]\nonumber\\
\int_0^{\infty} d\omega\; \omega^2\; \text{Im} \chi_d(\omega) &=& \left\{
\begin{array}{ll}
\displaystyle \frac{1}{M} \frac{({\mathbf w}_0^4 + {\mathbf w}_0^2\,\Omega^2 - \Omega^2\,\gamma^2/2)\, \arccos (\gamma/2 {\mathbf w}_0)\, +\, \Omega^2\,\gamma\,{\mathbf w}_1\, \ln (\Omega/{\mathbf w}_0)}{{\mathbf w}_1\, ({\mathbf w}_0^2\, -\, \Omega\,\gamma\, +\, \Omega^2)}&\hspace*{.3cm} \text{for}\hspace*{.3cm} {\mathbf w}_0 > \gamma/2\\\label{eq:susceptibility_int11}\\
\displaystyle \frac{1}{M} \frac{(\Omega^2\,\gamma^2 - 2\,{\mathbf w}_0^4 - 2\,{\mathbf w}_0^2\,\Omega^2)/4 \cdot \ln \left(\frac{\gamma/2\, -\, \bar{{\mathbf w}}_1}{\gamma/2\, +\, \bar{{\mathbf w}}_1}\right)\, +\, \Omega^2\,\gamma\,\bar{{\mathbf w}}_1\, \ln (\Omega/{\mathbf w}_0)}{\bar{{\mathbf w}}_1\, ({\mathbf w}_0^2\, -\, \Omega\,\gamma\, +\, \Omega^2)}&\hspace*{.3cm} \text{for}\hspace*{.3cm} {\mathbf w}_0 \leq \gamma/2
\end{array}\right.
\end{eqnarray}
where ${\mathbf w}_1 = \sqrt{{\mathbf w}_0^2 - (\gamma/2)^2}$\, and $\bar{{\mathbf w}}_1 = -i\,{\mathbf w}_1$. From eqs.
(\ref{eq:excess_energy1}), (\ref{eq:susceptibility_int1}), and (\ref{eq:susceptibility_int11}), we have the exact expression
\begin{equation}\label{eq:energy_drude1}
(E_s)_d(0)\; =\; \frac{\hbar}{2 \pi}\, \left\{A({\mathbf w}_0, \Omega, \gamma)\, +\, B({\mathbf w}_0, \Omega, \gamma)\right\}\,,
\end{equation}
where
\begin{equation}\label{eq:energy_drude2}
A({\mathbf w}_0, \Omega, \gamma)\; =\; \left\{
\begin{array}{l}
\displaystyle \frac{({\mathbf w}_0^2 + \Omega^2)\, (2\,\Omega\,{\mathbf w}_1^2\, +\, {\mathbf w}_0^2\,\gamma)\, -\, \Omega^2\,\gamma^3/2}{{\mathbf w}_1\, (\Omega\, +\, \gamma)\, ({\mathbf w}_0^2\, - \Omega\,\gamma\, +\, \Omega^2)}\, \arccos(\gamma/2 {\mathbf w}_0)\\\\
\displaystyle \frac{({\mathbf w}_0^2 + \Omega^2)\, (\Omega\,\gamma^2/4\, -\, {\mathbf w}_0^3\, -\, {\mathbf w}_0^2\,\gamma/2)\, -\, \Omega\,\gamma^2/2 \cdot ({\mathbf w}_0^2\, -\, \Omega\,\gamma/2)}{\bar{{\mathbf w}}_1\, (\Omega\, +\, \gamma)\, ({\mathbf w}_0^2\, -\, \Omega\,\gamma\, +\, \Omega^2)}\, \ln\left(\frac{\gamma/2\, -\, \bar{{\mathbf w}}_1}{\gamma/2\, +\, \bar{{\mathbf w}}_1}\right)
\end{array}\right.
\end{equation}
for ${\mathbf w}_0 > \gamma/2$ and ${\mathbf w}_0 \leq \gamma/2$, respectively, and
\begin{equation}
B({\mathbf w}_0, \Omega, \gamma)\; =\; \frac{\Omega\,\gamma\, (\Omega^2\, +\, \Omega\,\gamma\, -\, {\mathbf w}_0^2)}{(\Omega\, +\, \gamma)\, ({\mathbf w}_0^2\, -\, \Omega\,\gamma\, +\, \Omega^2)}\, \ln (\Omega/{\mathbf w}_0)\,.\label{eq:energy_drude3}
\end{equation}
Similarly, the free energy $F_d(0)$ in eq. (\ref{eq:helmholtz_energy1}) can be evaluated exactly as
\begin{equation}\label{eq:helmholtz_energy2_appendix}
F_d(0)\; =\; \left\{
\begin{array}{ll}
\displaystyle \frac{\hbar}{2 \pi}\, \left\{(\Omega + \gamma)\, \ln\left(\frac{\Omega + \gamma}{\Omega}\right)\, +\, \gamma\, \ln\left(\frac{\Omega}{{\mathbf w}_0}\right)\, +\, 2\,{\mathbf w}_1\, \arccos \left(\frac{\gamma}{2\,{\mathbf w}_0}\right)\right\}&\hspace*{.3cm} \text{for}\hspace*{.3cm} {\mathbf w}_0 > \gamma/2\\\\
\displaystyle \frac{\hbar}{2 \pi}\, \left\{(\Omega + \gamma)\, \ln\left(\frac{\Omega + \gamma}{\Omega}\right)\, +\, \gamma\, \ln\left(\frac{\Omega}{{\mathbf w}_0}\right)\, +\, \bar{{\mathbf w}}_1\, \ln \left(\frac{\gamma/2\, -\, \bar{{\mathbf w}}_1}{\gamma/2\, +\, \bar{{\mathbf w}}_1}\right)\right\}&\hspace*{.3cm} \text{for}\hspace*{.3cm} {\mathbf w}_0 \leq \gamma/2\,.
\end{array}\right.
\end{equation}
Clearly, $(E_s)_d(0)$ and $F_d(0)$ for the underdamped case are identical to eqs. (10) and (14) of \cite{FOR06}, respectively. These expressions can also be applied to the overdamped case (${\mathbf w}_1 \not\in {\mathbb R}$) with the aid of the complex-valued relation $\arccos(y) = \frac{\pi}{2} + i\,\ln(i\,y + \sqrt{1 - y^2})$, and they are then equivalent to the overdamped expressions derived here. We thus obtain $E_s(0) < F(0)$ in both cases (see Fig. 2 of \cite{FOR06}). In the limit $\Omega \to \infty$ (equivalently, $\omega_d \to \infty$), we get from eqs.
(\ref{eq:energy_drude1})-(\ref{eq:helmholtz_energy2_appendix})
\begin{equation}\label{eq:k_q_drude_omega_to_infinity1}
K_d\; \approx\; \frac{\hbar\,\Omega}{2 \pi}\, \ln\left(\frac{\Omega + \gamma}{\Omega}\right)\; \approx\; \frac{\hbar\,\gamma}{2 \pi}\, \left(1 - \frac{\gamma}{2 \Omega}\right)\, +\, \mathcal{O}\left(\frac{1}{\Omega^2}\right)
\end{equation}
which reduces to
\begin{equation}\label{eq:k_q_drude_omega_to_infinity2}
\frac{\gamma}{\pi {\mathbf w}_0} E_g\; =\; \frac{\hbar\,\gamma}{2 \pi}\,\frac{1}{\sqrt{1 + \gamma/\Omega}}\; \approx\; \frac{\hbar\,\gamma}{2 \pi}\,\left(1 - \frac{\gamma}{2 \Omega}\right)\, +\, \mathcal{O}\left(\frac{1}{\Omega^2}\right)
\end{equation}
({\em cf}. eq.~(15) in \cite{FOR06}). Lastly, we give an explicit expression for $K_d$ in the original parameters $(\omega_0, \omega_d, \gamma_o)$,
\begin{equation}\label{eq:drude_k_q__old_parameter_appendix}
K_d\; =\; \frac{\hbar\,\gamma_o}{2 \pi}\, \int_0^{\infty} d\lambda\; \frac{2\,\lambda_d^2\,\lambda^5\, -\, (2\,\lambda_0^2\, +\, \lambda_d)\,\lambda_d^2\,\lambda^3}{\{(\lambda^2 + \lambda_d^2)\,(\lambda^2 - \lambda_0^2)\, -\, \lambda_d\,\lambda^2\}^2\, +\, (\lambda_d^2\,\lambda)^2}\,,
\end{equation}
where the dimensionless variable $\lambda = \omega/\gamma_o$, with $\lambda_0 = \omega_0/\gamma_o$ and $\lambda_d = \omega_d/\gamma_o$. Here we see that the non-zero value of $K_d$ for $\lambda_d \to \infty$ arises from the competition between the two terms in the numerator of the integrand for large $\lambda$. Eq. (\ref{eq:drude_k_q__old_parameter_appendix}) is also used in Table \ref{tab:table2} for comparison with $K_{d,1}$ in eq. (\ref{eq:k_q_n_1_extended_drude}).
\section{: Details for eqs. (\ref{eq:exponential_frequency1}) and (\ref{eq:gamma_n_1_extended_drude})}\label{sec:appendix3}
In the derivation of eq. (\ref{eq:exponential_frequency1}), we used the relation \cite{ABS74} $E_1(-y \pm i\,0^+) = -\mbox{Ei}(y) \mp i\,\pi$, where the exponential integrals are $E_1(y) = \int_1^{\infty} d z\, e^{-y\,z}/z$ and $\mbox{Ei}(y) = P \int_{-\infty}^y d z\, e^z/z$. For eq. (\ref{eq:gamma_n_1_extended_drude}), we employed, first, \cite{GRA00}
\begin{equation}\label{eq:integral_n_1_1}
\int_0^{\infty} dy\, \frac{\cos(a\,y)}{y\, +\, b}\; =\; -\sin(a\,b)\; \mbox{si}(a\,b)\, -\, \cos(a\,b)\; \mbox{Ci}(a\,b)\,,
\end{equation}
where the sine integral is $\mbox{si}(y) = -\int_y^{\infty} dz\,\frac{\sin(z)}{z} = -\frac{\pi}{2} + \mbox{Si}(y)$ with $\mbox{Si}(y) = \int_0^y dz\,\frac{\sin(z)}{z}$, and the cosine integral is $\mbox{Ci}(y) = -\int_y^{\infty} dz\,\frac{\cos(z)}{z} = c_e + \ln y + \int_0^y dz\,\frac{\cos(z) - 1}{z}$ with the Euler constant $c_e = 0.5772156649 \cdots$; secondly, we used the relations $\mbox{Si}(i y) = \frac{i}{2} \{\mbox{Ei}(y) + E_1(y)\}$ and $\mbox{Ci}(i y) = \frac{1}{2} \{\mbox{Ei}(y) - E_1(y)\} + \frac{\pi}{2}i$ \cite{ABS74}.
}
\twocolumn{
\begin{thebibliography}{1}
\bibitem{CAL85} H.B. Callen, {\em Thermodynamics and an introduction to thermostatics} (2nd ed., John Wiley, 1985).
\bibitem{SPI05} V. \v{S}pi\v{c}ka, Th.M. Nieuwenhuizen, and P.D. Keefe, Physica E {\bf 29}, 1 (2005) and references therein; D.P. Sheehan, ed., {\em Quantum limits to the second law} (AIP Conference Proceedings, No. {\bf 643}, 2002); see also http://www.ipmt-hpm.ac.ru/SecondLaw/, {\em Quantum limits to the second law of thermodynamics}, Open Internet Conference and Information Center.
\bibitem{LAH04} M.D. LaHaye, O. Buu, B. Camarota, and K.C.
Schwab, Science {\bf 304}, 74 (2004).
\bibitem{CLE04} A.A. Clerk, Phys. Rev. B {\bf 70}, 245306 (2004).
\bibitem{MAC03} A. MacKinnon and A.D. Armour, Physica E {\bf 18}, 235 (2003).
\bibitem{HAE05} P. H\"anggi and G.-L. Ingold, Chaos {\bf 15}, 026105 (2005).
\bibitem{MAH04} J. Gemmer, M. Michel, and G. Mahler, {\em Quantum Thermodynamics} (Springer, Berlin, 2004).
\bibitem{NAG02} X.L. Li, G.W. Ford, and R.F. O'Connell, Phys. Rev. E {\bf 51}, 5169 (1995); K.E. Nagaev and M. B\"uttiker, Europhys. Lett. {\bf 58}(4), 475 (2002).
\bibitem{FOR05} G.W. Ford and R.F. O'Connell, Physica E {\bf 29}, 82 (2005).
\bibitem{HAE06} P. H\"anggi and G.-L. Ingold, Acta Physica Polonica B {\bf 37}, 1537 (2006).
\bibitem{FOR06} G.W. Ford and R.F. O'Connell, Phys. Rev. Lett. {\bf 96}, 020402 (2006).
\bibitem{ING98} G.-L. Ingold, {\em Dissipative quantum systems}, in {\em Quantum transport and dissipation} (Wiley-VCH, 1998), pp. 213-248.
\bibitem{IKI06} Following, e.g., Refs. \cite{HAE05}, \cite{HAE06}, \cite{ING98}, and \cite{LEV88}, we adopt Laplace transforms for the frequency-domain analysis in this paper, instead of the Fourier transforms employed in a series of papers by Ford and O'Connell, e.g., Refs. \cite{FOR06}, \cite{FOR05}, and \cite{FOR85}. Clearly, both are a priori equally acceptable and give rise to the same physical results in the end.
\bibitem{ROB66} G.E. Roberts and H. Kaufman, {\em Table of Laplace Transforms} (W.B. Saunders, Philadelphia, 1966).
\bibitem{LEV88} A.M. Levine, M. Shapiro, and E. Pollak, J.~Chem.~Phys. {\bf 88}, 1959 (1988).
\bibitem{FOR88} G.W. Ford, J.T. Lewis, and R.F. O'Connell, J. Stat. Phys. {\bf 53}, 439 (1988).
\bibitem{WEI99} U. Weiss, {\em Quantum dissipative systems} (2nd ed., World Scientific, Singapore, 1999).
\bibitem{BUT05} M. B\"{u}ttiker and A.N. Jordan, Physica E {\bf 29}, 272 (2005).
\bibitem{OPP02} J. Oppenheim, M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. {\bf 89}, 180402 (2002).
\bibitem{KAM04} N.G. van Kampen, J. Stat. Phys. {\bf 115}, 1057 (2004).
\bibitem{ALB01} G. Alber, T. Beth, M. Horodecki, et al., {\em Quantum Information: An introduction to basic theoretical concepts and experiments} (Springer, Berlin, 2001).
\bibitem{FOR85} G.W. Ford, J.T. Lewis, and R.F. O'Connell, Phys. Rev. Lett. {\bf 55}, 2273 (1985).
\bibitem{MEI65} J. Meixner, in {\em Statistical mechanics of equilibrium and non-equilibrium}, edited by J. Meixner (North-Holland, Amsterdam, 1965), pp. 52-68.
\bibitem{ABS74} M. Abramowitz and I. Stegun, {\em Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables} (Dover, New York, 1974).
\bibitem{GRA00} I.S. Gradshteyn and I.M. Ryzhik, {\em Table of Integrals, Series, and Products} (6th ed., Academic Press, San Diego, 2000).
\end{thebibliography}
Fig.~\ref{fig:fig1}: $K_{d,2}/E_g$ versus $x = \gamma/{\mathbf w}_0$, where the ground-state energy $E_g = \frac{\hbar}{2}\,\sqrt{\Omega\,\gamma + {\mathbf w}_0^2}$; for $\Omega = 2\,{\mathbf w}_0$ (dotted),\, $5\,{\mathbf w}_0$ (solid),\, $10\,{\mathbf w}_0$ (dashed), from top to bottom.
}
\begin{figure}[htb]
\centering
\hspace*{-5.6cm}\includegraphics{fig1.ps}
\caption{\label{fig:fig1}}
\end{figure}
\end{document}
\begin{document}
\date{\today}
\begin{abstract}
This paper aims to provide a robust model for the well--known phenomenon of K\'arm\'an Vortex Street arising in Fluid Mechanics. The first theoretical attempt to model this pattern was given by von K\'arm\'an \cite{Karman1,Karman2}. He considered two parallel staggered rows of point vortices, with opposite strength, that translate at the same speed. Following the ideas of Saffman and Schatzman \cite{Saffman-Schatzman}, we propose to study this phenomenon in the Euler equations by considering two infinite arrays of vortex patches. The key idea is to desingularize the point vortex model proposed by von K\'arm\'an. Our construction is flexible and can be extended to more general incompressible models.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
A distinctive pattern is observed in the wake of a two--dimensional bluff body placed in a uniform stream at certain velocities: this is called the von K\'arm\'an Vortex Street. It consists of vortices of high vorticity in an irrotational fluid. This complex phenomenon occurs in a wide range of circumstances, for instance the singing of power transmission lines or kite strings in the wind. It can also be observed in atmospheric flow about islands, in ocean flows about pipes or in aeronautical systems. See \cite{Karman1,Karman2,Rayleigh, Saffman-Schatzman, Saffman-Schatzman2, Schatzman, Strouhal}. The phenomenon of periodic vortex shedding, which is an oscillating flow that takes place when a fluid flows past a bluff body, was first studied experimentally in the works \cite{Rayleigh, Strouhal}. The most classical model for the K\'arm\'an Vortex Street, and the first theoretical one, was presented by von K\'arm\'an in \cite{Karman1, Karman2}. He considered a distribution of point vortices located in two parallel staggered rows, where the strength is opposite in each row. Other works concerning this model are \cite{Aref, Lamb, Rosenhead}. Since the exact problem seems to be complex from a theoretical point of view, there has been a large variety of approximations. It is observed that viscosity is involved in the generation of the vortex layers by the body, which drives the shedding process. However, it seems that viscosity does not play an essential role in the formation of the vortex street once the vortex layers have been created. Moreover, it would also appear that the body is not important in the formation and the evolution of the vortex street. If we concentrate on the evolution of the street, and not on the vortex layer formation process, an inviscid incompressible fluid model can be proposed to describe this situation. K\'arm\'an Vortex Street structures arise in the context of the Euler equations in the works of Saffman and Schatzman \cite{Saffman-Schatzman, Saffman-Schatzman2, Schatzman}. Following the ideas of von K\'arm\'an \cite{Karman1, Karman2}, they considered two parallel infinite arrays of vortices with finite area and uniform vorticity. They found numerically the existence of this kind of solutions, which translate at a constant speed, and studied their linear stability. In this paper, we focus on the study of these structures in different inviscid incompressible fluid models via a desingularization of the point vortex model proposed by von K\'arm\'an \cite{Karman1, Karman2}. We obtain two infinite arrays of vortex patches, i.e. vortices with finite area and uniform vorticity, that translate.
Then, we recover analytically the solutions found numerically by Saffman and Schatzman, not only in the Euler equations framework, but also in other incompressible models, which we recall in the following. Let $q$ be a scalar magnitude of the fluid satisfying the following transport equation
\begin{eqnarray}\label{Generalsystem}
\left\{\begin{array}{ll}
\partial_t q+(v\cdot \nabla) q=0, &\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\
v=\nabla^\perp \psi,&\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\
\psi=G*q,&\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\
q(0,x)=q_0(x),& \text{ with $x\in\mathbb{R}^2$}.
\end{array}\right.
\end{eqnarray}
Note that the velocity field $v$ is given in terms of $q$ via the interaction kernel $G$, which is assumed to be smooth away from zero. Here, $(x_1,x_2)^\perp=(-x_2,x_1)$. In the case that $G=\frac{1}{2\pi}\ln|\cdot|$, we arrive at the Euler equations. On the other hand, if $G=-K_0(|\lambda|| \cdot|)$, with $\lambda\neq 0$ and $K_0$ the Bessel function, the quasi--geostrophic shallow water (QGSW) equations are found. Finally, the generalized surface quasi--geostrophic (gSQG) equations appear in the case that $G=\frac{C_{\beta}}{2\pi}\frac{1}{|\cdot|^\beta}$, for $\beta\in[0,1]$ and $C_{\beta}=\frac{\Gamma\left(\frac{\beta}{2}\right)}{2^{1-\beta}\Gamma\left(\frac{2-\beta}{2}\right)}$. Note that all the previous models have a common property: the kernel $G$ is radial, which will be crucial in this work. The Euler equations deal with uniform incompressible ideal fluids and $q$ represents the vorticity of the fluid, usually denoted by $\omega$. Yudovich solutions, which are bounded and integrable solutions, are known to exist globally in time, see \cite{MajdaBertozzi, Yudovich}. When the initial data is the characteristic function of a simply--connected bounded domain $D_0$, the solution remains the characteristic function of a domain that propagates along the flow. These are known as the vortex patch solutions. If the initial domain is $C^{1,\alpha}$, with $0<\alpha<1$, the regularity persists for all time, see \cite{B-C,Chemin}. The only explicit simply--connected vortex patches known up to now are the Rankine vortices (the circular patches), which are stationary, and the Kirchhoff ellipses \cite{Kirchhoff}, which rotate. Later, Deem and Zabusky \cite{Deem-Zabusky} gave some numerical observations of the existence of V--states, i.e. rotating vortex patches, with $m-$fold symmetry. Using bifurcation theory, Burbea \cite{Burbea} proved analytically the existence of these V--states close to the Rankine vortex. There have been several works concerning the V--states following the approach of Burbea: doubly--connected V--states, corotating and counter--rotating vortex pairs, non-uniform rotating vortices and global bifurcation, see \cite{CastroCordobaGomezSerrano, DelaHozHmidiMateuVerdera, GHS, HMW, HmidiMateu-pairs, HmidiMateuVerdera}. The quasi--geostrophic shallow water equations are found in the limit of fast rotation and weak variations of the free surface in the rotating shallow water equations, see \cite{Vallis}. In this context, the function $q$ is called the potential vorticity. The parameter $\lambda$ is known as the inverse ``Rossby deformation length'', and when it is small, it corresponds to a free surface that is nearly rigid. In the case $\lambda=0$, we recover the Euler equations. Although vortex patch solutions are better known for the Euler equations, there are also some results for the quasi--geostrophic shallow water equations.
For the analogue of the Kirchhoff ellipses in these equations, we refer to the works of Dritschel, Flierl, Polvani and Zabusky \cite{Plotka-Dristchel, Polvani, Polvani-Zabusky-Flierl}. In \cite{D-H-R}, Dritschel, Hmidi and Renault proved the existence of V--states bifurcating from the circular patch. In the case of the generalized surface quasi--geostrophic equations, $q$ describes the potential temperature. This model has been proposed by C\'ordoba et al. in \cite{Cordoba} as an interpolation between the Euler equations and the surface quasi--geostrophic (SQG) equations, corresponding to $\beta=0$ and $\beta=1$, respectively. The mathematical analogy with the classical three--dimensional incompressible Euler equations can be found in \cite{Constantin-Majda-Tabak}. Some works concerning V--states in the gSQG equations are \cite{Cas0-Cor0-Gom,Cas-Cor-Gom, Hassainia-Hmidi, HmidiMateu-pairs}. The point model for a K\'arm\'an Vortex Street consists of two infinite arrays of point vortices with opposite strength. More specifically, consider a uniformly distributed array of points, all with the same strength, located on the horizontal axis, i.e., $(kl,0)$, with $l>0$ and $k\in\Z$. The other array contains an infinite number of points with the opposite strength; it is parallel to the first one, with arbitrary stagger: $(a+kl,-h)$ with $a\in\R$ and $h\neq 0$. We refer to Figure 1 for a better understanding of the location of the points. Hence, we consider the following distribution:
\begin{equation}\label{KVS}
q_0(x)=\sum_{k\in\Z}\delta_{(kl,0)}(x)-\sum_{k\in\Z}\delta_{(a+kl,-h)}(x),
\end{equation}
where $a\in\R$, $l>0$ and $h\neq 0$. If the kernel $G$ is radial, then we will show that every point translates in time with the same constant speed. Moreover, if $a=0$ or $a=\frac{l}{2}$, the translation is parallel to the real axis. In the typical case of the Newtonian interaction, that is $G=\frac{1}{2\pi}\ln|\cdot|$, the evolution of \eqref{KVS} has been fully studied, see \cite{Aref, Lamb, Karman1, Karman2, Newton, Pierrehumbert, Rosenhead}. The problem is a first-order Hamiltonian system and the evolution of every point is given by:
\begin{align*}
\frac{d}{dt} z_m(t)=&\frac{1}{2\pi}\sum_{\substack{m\neq k\in\Z}} \frac{(z_m(t)-z_k(t))^\perp}{|z_m(t)-z_k(t)|^2}-\frac{1}{2\pi}\sum_{\substack{k\in\Z}} \frac{(z_m(t)-\tilde{z}_k(t))^\perp}{|z_m(t)-\tilde{z}_k(t)|^2},\\
\frac{d}{dt} \tilde{z}_m(t)=&\frac{1}{2\pi}\sum_{\substack{k\in\Z}}\frac{(\tilde{z}_m(t)-z_k(t))^\perp}{|\tilde{z}_m(t)-z_k(t)|^2}-\frac{1}{2\pi}\sum_{\substack{m\neq k\in\Z}}\frac{(\tilde{z}_m(t)-\tilde{z}_k(t))^\perp}{|\tilde{z}_m(t)-\tilde{z}_k(t)|^2},
\end{align*}
with initial conditions
\begin{align*}
z_m(0)=&ml,\\
\tilde{z}_m(0)=&a+ml-ih,
\end{align*}
for $m\in\Z$. It can be checked that the velocity at every point is the same, providing us with a translating motion. Indeed, if $a=0$ or $a=\frac{l}{2}$, the speed can be expressed in terms of elementary functions, and we observe that the translation is horizontal:
\begin{align*}
V_0=&\frac{1}{2l}\coth\left(\frac{\pi h}{l}\right), \ \textnormal{ for } \ a=0,\\
V_0=&\frac{1}{2l}\tanh\left(\frac{\pi h}{l}\right), \ \textnormal{ for } \ a=\frac{l}{2}.
\end{align*}
The works of Saffman and Schatzman \cite{Saffman-Schatzman, Saffman-Schatzman2, Schatzman} concern the study of these structures in the Euler equations. In fact, they consider two infinite arrays of vortices with finite area, which have uniform vorticity inside. These are two infinite arrays of vortex patches distributed as in \eqref{KVS}.
They established the existence of this kind of solutions numerically and studied their linear stability, see \cite{Saffman-Schatzman, Saffman-Schatzman2, Schatzman}. The problem can be studied not only in the Euler equations on the full space, but also in the Euler equations in the periodic setting. A theory for the Euler equations in an infinite strip can be found in \cite{Beichman-Denisov}. In \cite{VladimirGryanikBorthOlbers}, Gryanik, Borth and Olbers studied the quasi--geostrophic K\'arm\'an Vortex Street in two--layer fluids.
\begin{figure}
\caption{K\'arm\'an Vortex Street located at the points $(kl,0)$ and $(a+kl,-h)$, with $l>0$, $a\in\R$, $h\neq 0$, and $k\in\Z$.}
\label{fig}
\end{figure}
The aim of this work is to find analytically solutions to the model proposed by Saffman and Schatzman in \cite{Saffman-Schatzman, Saffman-Schatzman2, Schatzman}. We will do it not only for the Euler equations, but also for other inviscid incompressible models. Here, we follow the idea of Hmidi and Mateu in \cite{HmidiMateu-pairs}, where they desingularize a vortex pair, obtaining a pair of vortex patches that rotate or translate (depending on the strengths of the points). The plan is the following. For $\varepsilon>0$ and $l>0$, consider
\begin{equation}\label{KVS2}
q_{0,\varepsilon}(x)=\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf{1}}_{\varepsilon D_1+kl}(x)-\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf{1}}_{\varepsilon D_2+kl}(x_1,x_2),
\end{equation}
where $D_1$ and $D_2$ are simply--connected bounded domains. In the case $|D_1|=|\D|$ and $D_2=-D_1+a-ih$, with $a=0$ or $a=\frac{l}{2}$, and $h\neq 0$, we recover the K\'arm\'an Vortex Street \eqref{KVS} in the limit $\varepsilon\rightarrow 0$. This means
$$
\lim_{\varepsilon\rightarrow 0}\, \left\{\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf{1}}_{\varepsilon D_1+kl}(x)-\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf{1}}_{-\varepsilon D_1+a+kl-ih}(x_1,x_2)\right\}=\sum_{k\in\Z}\delta_{(kl,0)}(x)-\sum_{k\in\Z}\delta_{(a+kl,-h)}(x),
$$
in the distribution sense. In this way, we connect the vortex patch model \eqref{KVS2} with the point vortex model \eqref{KVS}. In the following, we speak of the K\'arm\'an Vortex Street, or K\'arm\'an Point Vortex Street, when referring to the point vortex model \eqref{KVS}, and of the K\'arm\'an Vortex Patch Street in the case of \eqref{KVS2}. Some relation between the two domains is needed, and the suitable one is the one mentioned above: $D_2=-D_1+a-ih$. Assuming that $q(t,x)=q_0(x-Vt)$, for some $V\in\R$, and using a conformal map $\phi:\T\rightarrow \partial D_1$, we arrive at the following equation
\begin{equation*}
F(\varepsilon,f,V)(w):=\textnormal{Re}\left[\left\{\overline{I(\varepsilon,f)(w)}- \overline{V}\right\}{w}{\phi'(w)}\right]=0, \quad w\in\T,
\end{equation*}
where
\begin{align*}
I(\varepsilon,f)(w):=&-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}\int_{\T} G(|\varepsilon(\phi(w)-\phi(\xi))-kl|)\phi'(\xi)\, d\xi\\
&-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}\int_{\T} G(|\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih|)\phi'(\xi)\, d\xi.
\end{align*}
Let us explain the meaning of $f$. We assume that the conformal map is a perturbation of the identity in the following way:
\begin{equation}\label{confmap1}
\phi(w)=i \left(w+\varepsilon f(w)\right), \quad f(w)=\sum_{n\geq 1}a_nw^{-n}, \quad a_n\in\R, w\in\T,
\end{equation}
for $G=\frac{1}{2\pi}\ln|\cdot|$ and $G=-K_0(|\lambda||\cdot|)$.
In contrast, it will be considered as
\begin{equation}\label{confmap2}
\phi(w)=i \left(w+\frac{\varepsilon}{G(\varepsilon)} f(w)\right), \quad f(w)=\sum_{n\geq 1}a_nw^{-n}, \quad a_n\in\R, w\in\T,
\end{equation}
for more singular kernels, such as $G=\frac{C_{\beta}}{2\pi}\frac{1}{|\cdot|^\beta}$, for $\beta\in(0,1)$. Moreover, we have that $F(0,0,V_0)(w)=0$, for any $w\in\T$. Here, $V_0$ is the speed of the point model \eqref{KVS}. The nonlinear functional $F$ is well-defined from $\R\times X_{\alpha}\times \R$ to $\tilde{Y}_\alpha$, where
\begin{align*}
X_{\alpha}&=\left\{f\in C^{1,\alpha}(\T), \quad f(w)=\sum_{n\geq 1}a_nw^{-n},\, a_n\in\R\right\},\\
\tilde{Y}_{\alpha}&=\left\{f\in C^{0,\alpha}(\T), \quad f(e^{i\theta})=\sum_{n\geq 1}a_n\sin(n\theta),\, a_n\in\R\right\},
\end{align*}
for some $\alpha\in(0,1)$. However, $\partial_f {F}(0, 0, V)$ is not an isomorphism in these spaces. If we define $V$ as a function of $\varepsilon$ and $f$ in the following way,
\begin{align}\label{V}
V(\varepsilon,f)=&\frac{\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw}{\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw},
\end{align}
then $F$ is also well-defined from $\R\times X_{\alpha}$ to $Y_{\alpha}$, where
$$
Y_{\alpha}=\left\{f\in C^{0,\alpha}(\T), \quad f(e^{i\theta})=\sum_{n\geq 2}a_n\sin(n\theta), \, a_n\in\R\right\}.
$$
Indeed, $\partial_f {F}(0, 0, V)$ is an isomorphism in these spaces. We therefore fix the velocity $V$ depending on $\varepsilon$ and $f$ as in \eqref{V}. In this way, the Implicit Function Theorem can be applied in order to desingularize the point model \eqref{KVS}, obtaining our main result:
\begin{theo}
Consider $G=\frac{1}{2\pi}\ln|\cdot|$, $G=-K_0(|\lambda||\cdot|)$, or $G=\frac{C_{\beta}}{2\pi}\frac{1}{|\cdot|^\beta}$, for $\lambda\neq 0$ and $\beta\in(0,1)$. Let $h, l\in\R$, with $h\neq 0$ and $l>0$, and $a=0$ or $a=\frac{l}{2}$. Then, there exists a $C^1$ simply--connected bounded domain $D^{\varepsilon}$ such that
\begin{equation*}
q_{0,\varepsilon}(x)=\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf 1}_{\varepsilon D^{\varepsilon}+kl}(x)-\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf 1}_{-\varepsilon D^{\varepsilon}+a-ih+kl}(x),
\end{equation*}
defines a horizontally translating solution of \eqref{Generalsystem}, with constant speed, for any $\varepsilon\in(0,\varepsilon_0)$ and small enough $\varepsilon_0>0$.
\end{theo}
The proof of the theorem has to be adapted to each case. For the cases $G=\frac{1}{2\pi}\ln|\cdot|$ and $G=-K_0(|\lambda||\cdot|)$, the kernel has a logarithmic singularity at the origin, which plays an important role in the choice of the scaling of the conformal map \eqref{confmap1}. Otherwise, if $G=\frac{C_{\beta}}{2\pi}\frac{1}{|\cdot|^\beta}$, for $\beta\in(0,1)$, or for more general kernels, the scaling of the conformal map depends on the singularity at the origin, as depicted in \eqref{confmap2}. This work is organized as follows. In Section \ref{Sec-NVP}, we introduce some preliminary results about the $N$--vortex problem and, in particular, the point model for the K\'arm\'an Vortex Street \eqref{KVS} with general interactions. Section \ref{Sec2} concerns the Euler and QGSW equations, where the singularity of the kernel is logarithmic. We deal with the general case in Section \ref{Sec3}, following the same ideas as in the Euler equations. The gSQG equations will become a particular case of this study.
Finally, Appendix \ref{Ap-specialfunctions} and Appendix \ref{Ap-potentialtheory} are devoted to providing some definitions and properties concerning special functions and complex integrals. Let us end this part by summarizing some notation to be used throughout the paper. We denote the unit disc by $\D$ and its boundary by $\T$. The integral
$$
\int_{\T} f(\xi)\, \, d\xi,
$$
denotes the usual complex integral of a complex function $f$. Moreover, we will use the symmetric sums defined by
\begin{equation}\label{sym-sum}
\sum_{k\in\Z}a_k=\lim_{K\rightarrow +\infty}\sum_{|k|\leq K}a_k,
\end{equation}
unless otherwise specified.
\section{The N--vortex problem}\label{Sec-NVP}
The $N$--vortex problem consists in the study of the evolution of a set of points that interact according to some laws. Originally, the $N$--vortex problem was considered with the Newtonian interaction. Here, however, we start by assuming that the interaction between the points is given by a general function $G:\R\rightarrow\R$, which is smooth away from zero. The problem is a first-order Hamiltonian system of the form
\begin{equation}\label{dyn-syst}
\frac{d}{dt} z_m(t)=\sum_{\substack{k=1 \\ k\neq m}}^{N}\Gamma_k \nabla^\perp_{z_m} G(|z_m(t)-z_k(t)|),
\end{equation}
with some initial conditions at $t=0$ and where $m=1,\dots, N$. Moreover, $z_k\neq z_m$ if $k\neq m$, are $N$ points located in the plane $\R^2$. The constants $\Gamma_k$ denote the strength of each point, and the interaction between the points is given by $G$. In the case that $G=\frac{1}{2\pi}\ln|\cdot|$, we arrive at the typical $N$--vortex problem:
$$
\frac{d}{dt} z_m(t)=\sum_{\substack{k=1 \\ k\neq m}}^{N}\Gamma_k \frac{(z_m(t)-z_k(t))^\perp}{|z_m(t)-z_k(t)|^2}.
$$
First, we deal with the evolution of two points. We can check that they rotate or translate depending on their strengths $\Gamma_1$ and $\Gamma_2$. Later, we will move on to the configuration of interest: an infinite number of points with a periodic pattern in space. For more details about the $N$--vortex problem in the case of the Newtonian interaction see \cite{Newton}.
\subsection{Finite configurations}
There are some particular situations that yield steady configurations. Here, we illustrate the evolution of two points, since the same idea is used later in the case of periodic configurations.
\begin{pro}
Let us consider two initial points $z_1(0)$ and $z_2(0)$, with $z_1(0)\neq z_2(0)$, located on the real axis, with strengths $\Gamma_1$ and $\Gamma_2$ respectively. We have:
\begin{enumerate}
\item If $\Gamma_1+\Gamma_2\neq 0$ and $\Gamma_1z_1(0)+\Gamma_2z_2(0)=0$, then the solution behaves as $z_k(t)=e^{i\Omega t}z_k(0),$ for $k=1,2$, with $\Omega=\frac{(\Gamma_1+\Gamma_2)G'(|z_1(0)-z_2(0)|)}{|z_1(0)-z_2(0)|}$.
\item If $\Gamma_1+\Gamma_2=0$, then $z_k(t)=z_k(0)+U t$, for $k=1,2$, with $U=i\Gamma_2 G'(|z_1(0)-z_2(0)|)\textnormal{sign}(z_1(0)-z_2(0))$.
\end{enumerate}
\end{pro}
\begin{proof}
According to \eqref{dyn-syst}, the evolution of the two points is given by the following system:
\begin{equation}
\left\{
\begin{array}{l}\label{2-vortex}
\frac{d}{dt}z_1(t)=i\Gamma_2G'(|z_1(t)-z_2(t)|)\frac{z_1(t)-z_2(t)}{|z_1(t)-z_2(t)|},\\
\frac{d}{dt}z_2(t)=-i\Gamma_1G'(|z_1(t)-z_2(t)|)\frac{z_1(t)-z_2(t)}{|z_1(t)-z_2(t)|}.
\end{array}
\right.
\end{equation}
\noindent {\bf (1)} By the above system, it is clear that $\frac{d}{dt}\left(\Gamma_1 z_1(t)+\Gamma_2 z_2(t)\right)=0$, which implies that
$$
\Gamma_1 z_1(t)+\Gamma_2 z_2(t)=\Gamma_1 z_1(0)+\Gamma_2 z_2(0)=0.
$$ Assuming that $z_1(t)=e^{i\Omega t}z_1(0)$, and using the above equation, one arrives at $z_2(t)=e^{i\Omega t}z_2(0)$. Then, the system \eqref{2-vortex} yields $$ \left\{ \begin{array}{l} i\Omega e^{i\Omega t}z_1(0)=ie^{i\Omega t}\Gamma_2G'(|z_1(0)-z_2(0)|)\frac{z_1(0)-z_2(0)}{|z_1(0)-z_2(0)|},\\ i\Omega e^{i\Omega t}z_2(0)=-ie^{i\Omega t}\Gamma_1G'(|z_1(0)-z_2(0)|)\frac{z_1(0)-z_2(0)}{|z_1(0)-z_2(0)|}. \end{array} \right. $$ Since $z_1(0)$ and $z_2(0)$ are located on the real axis, one has that $z_1(0)-z_2(0)\in\R$. Subtracting the second equation from the first and dividing by $i(z_1(0)-z_2(0))$, we obtain $$ \Omega=\frac{(\Gamma_1+\Gamma_2)G'(|z_1(0)-z_2(0)|)}{|z_1(0)-z_2(0)|}. $$ \noindent {\bf (2)} In this case, we have that $\frac{d}{dt}\left(z_1(t)- z_2(t)\right)=0$, and thus $$ z_1(t)- z_2(t)=z_1(0)- z_2(0). $$ As a consequence, since $\Gamma_1=-\Gamma_2$ and $z_1(0)-z_2(0)\in\R$, the system \eqref{2-vortex} becomes $$ \left\{ \begin{array}{l} \frac{d}{dt}z_1(t)=i\Gamma_2G'(|z_1(0)-z_2(0)|)\textnormal{sign}(z_1(0)-z_2(0)),\\ \frac{d}{dt}z_2(t)=i\Gamma_2G'(|z_1(0)-z_2(0)|)\textnormal{sign}(z_1(0)-z_2(0)),\\ \end{array} \right. $$ which can be solved as $$ \left\{ \begin{array}{l} z_1(t)=z_1(0)+i\Gamma_2G'(|z_1(0)-z_2(0)|)\textnormal{sign}(z_1(0)-z_2(0))t,\\ z_2(t)=z_2(0)+i\Gamma_2G'(|z_1(0)-z_2(0)|)\textnormal{sign}(z_1(0)-z_2(0))t. \end{array} \right. $$ \end{proof} The above result shows that two vortex points with $\Gamma_1+\Gamma_2\neq 0$ have a rotating evolution. Otherwise, they {\it translate}. From now on, we say that a structure {\it translates} when the evolution of every point (or every patch, in the case of \eqref{Generalsystem}) is a translation with the same constant speed. In the classical $N$--vortex problem, that is, for $G=\frac{1}{2\pi}\ln|\cdot|$, the above result is well--known. Here, we have seen that the same evolution is obtained whenever the interaction between the points is given by a radial kernel. \subsection{Periodic setting} This section deals with the evolution of two infinite arrays of points with opposite strengths, which are periodic in space. More specifically, the points of the first array, which all have the same strength, are located on the horizontal axis. Let us assume that one point lies at the origin and that consecutive points are separated by a distance $l$. This means that we have the points $(kl,0)$, for $l>0$ and $k\in\Z$. The second array, with strength opposite to that of the first one, is parallel to the horizontal axis, shifted vertically by $-h$ with $h\neq 0$, and has the following distribution of points: $(a+kl, -h)$, for $a\in\R$ and $k\in\Z$. For the moment, let us consider that $a$ is any real number. Then, we focus on \begin{equation}\label{PointVortex} q(x)=\sum_{k\in\Z}\delta_{(kl,0)}(x)-\sum_{k\in\Z}\delta_{(a+kl,-h)}(x), \end{equation} with $h\neq 0$, $l>0$ and $a\in\R$. In the following results, we check that the above initial configuration translates when $G=\frac{1}{2\pi}\ln|\cdot|$, $G=-K_0(|\lambda||\cdot|)$, $G=\frac{C_{\beta}}{2\pi}\frac{1}{|\cdot|^\beta}$ for $\beta\in(0,1)$, or for $G$ satisfying some general conditions. Moreover, if $a=0$ or $a=\frac{l}{2}$, the translation is horizontal. We distinguish two cases depending on the behavior of the interaction $G$ at infinity.
This is important in order to give a meaning to the infinite sum coming from \eqref{PointVortex}, whose equations are given by \begin{align*} \frac{d}{dt} z_m(t)=&\sum_{\substack{m\neq k\in\Z}} G'(|z_m(t)-z_k(t)|)\frac{(z_m(t)-z_k(t))^\perp}{|z_m(t)-z_k(t)|}-\sum_{k\in\Z} G'(|z_m(t)-\tilde{z}_k(t)|)\frac{(z_m(t)-\tilde{z}_k(t))^\perp}{|z_m(t)-\tilde{z}_k(t)|},\\ \frac{d}{dt} \tilde{z}_m(t)=&\sum_{k\in\Z} G'(|\tilde{z}_m(t)-z_k(t)|)\frac{(\tilde{z}_m(t)-z_k(t))^\perp}{|\tilde{z}_m(t)-z_k(t)|}-\sum_{m\neq k\in\Z} G'(|\tilde{z}_m(t)-\tilde{z}_k(t)|)\frac{(\tilde{z}_m(t)-\tilde{z}_k(t))^\perp}{|\tilde{z}_m(t)-\tilde{z}_k(t)|}, \end{align*} with initial conditions \begin{align*} z_m(0)=&ml,\\ \tilde{z}_m(0)=&a+ml-ih, \end{align*} for $m\in\Z$. Then, we refer to the critical case in the case of the Newtonian interaction $$ G=\frac{1}{2\pi}\ln|\cdot|. $$ Here, we must use the structure of the logarithm in order to have a convergence sum. Note that here we need to use strongly the symmetry sum. Otherwise, the subcritical cases will use the faster decay of $G$ at infinity as it is the case of the QGSW or gSQG interactions. \noindent $\bullet$ {\it Critical case:} Let us first show the result for the Newtonian interaction, that is, $G=\frac{1}{2\pi}\ln|\cdot|$. Here, we denote $\omega$ to $q$ to emphasize that we are working with the vorticity. \begin{pro}\label{Prop-PV} Given the point vortex street \eqref{PointVortex} with $G=\frac{1}{2\pi}\ln|\cdot|$, for $h\neq 0$, $l>0$ and $a\in\R$, then the street is moving with the following constant velocity speed \begin{equation}\label{velocity_PV} V_0=\frac{1}{2l i}\overline{\cot\left(\frac{\pi(ih-a)}{l}\right)}. \end{equation} In the case that $a=0$ or $a=\frac{l}{2}$, the translation is parallel to the horizontal axis with velocity \begin{align*} V_0=&\frac{1}{2l}\coth\left(\frac{\pi h}{l}\right), \ \textnormal{ for } \ a=0,\\ V_0=&\frac{1}{2l}\tanh\left(\frac{\pi h}{l}\right), \ \textnormal{ for } \ a=\frac{l}{2}. \end{align*} \end{pro} \begin{rem} If $\omega_{\kappa}(x,y)=\kappa \omega(x,y),$ with $\kappa\in\R$ and $\omega$ given by \eqref{PointVortex}, then the velocity of the street is $V_{0,\kappa}=\kappa V_0$. \end{rem} \begin{proof} Define $$ \omega_K(x)=\sum_{|k|\leq K}\delta_{(kl,0)}(x)-\sum_{|k|\leq K}\delta_{(a+kl,-h)}(x). $$ The idea is to consider $K\rightarrow +\infty$, getting $\omega_K\rightarrow \omega$ in the distribution sense. The associated stream function to $\omega_K$ is given by \begin{equation}\label{streamfunction_PV} \psi_K(x)=\frac{1}{2\pi}\sum_{|k|\leq K}\ln|x-kl|-\frac{1}{2\pi}\sum_{|k|\leq K}\ln|x-a-kl+ih|, \end{equation} where we are using complex notation. In order to pass to the limit, we need to use the structure of the logarithm. Let us work with the sum in the following way \begin{align*} \sum_{|k|\leq K}\ln|x-a-kl+ih|=&\ln\left|\prod_{|k|\leq K}(x-a-kl+ih)\right|\\ =&\ln\left|(x-a+ih)\prod_{k=1}^{K}\left((x-a+ih)^2-k^2l^2\right)\right|\\ =&\ln\left|\frac{\pi(x-a+ih)}{l}\prod_{k=1}^{K}\left(1-\frac{(x-a+ih)^2}{k^2l^2}\right)\right|+\ln\left|\frac{l}{\pi}\prod_{k=1}^{K}k^2l^2\right|. \end{align*} Then, we have \begin{align*} \lim_{K\rightarrow \infty}\psi_K(x)=\lim_{K\rightarrow \infty}&\left\{\frac{1}{2\pi}\ln\left|\frac{\pi x}{l}\prod_{k=1}^{K}\left(1-\frac{x^2}{k^2l^2}\right)\right|\right.\\ &\left.-\frac{1}{2\pi}\ln\left|\frac{\pi(x-a+ih)}{l}\prod_{k=1}^{K}\left(1-\frac{(x-a+ih)^2}{k^2l^2}\right)\right|\right\}. 
\end{align*} Using the product expression for the sine, that is \begin{align}\label{sine} \sin (\pi x)=\pi x\prod_{k\geq 1}\left(1-\frac{x^2}{k^2}\right), \end{align} we get that \begin{equation*} \psi(x):=\lim_{K\rightarrow \infty}\psi_K(x)=\frac{1}{2\pi}\ln\left|\sin\left(\frac{\pi x}{l}\right)\right|-\frac{1}{2\pi}\ln\left|\sin\left(\frac{\pi(x-a+ih)}{l}\right)\right|. \end{equation*} In this way, the sum in \eqref{streamfunction_PV} converges. In the same way, the corresponding velocity field is given by $$ v_K(x)=\frac{i}{2\pi}\sum_{|k|\leq K}\frac{x-kl}{|x-kl|^2}-\frac{i}{2\pi}\sum_{|k|\leq K}\frac{x-a-kl+ih}{|x-a-kl+ih|^2}, $$ where $x$ is not one of the vortex points. At each of the points, the velocity is given by \begin{align*} v_K(ml)=&\frac{i}{2\pi}\sum_{k\neq m,|k|\leq K}\frac{ml-kl}{|ml-kl|^2}-\frac{i}{2\pi}\sum_{|k|\leq K}\frac{ml-a-kl+ih}{|ml-a-kl+ih|^2},\\ v_K(a-ih+ml)=&\frac{i}{2\pi}\sum_{|k|\leq K}\frac{a-ih+ml-kl}{|a-ih+ml-kl|^2}-\frac{i}{2\pi}\sum_{k\neq m,|k|\leq K}\frac{ml-kl}{|ml-kl|^2}, \end{align*} for any $m\in\Z$. We define $v$ as the limit of the above expressions. First, let us show that the velocity at every point is the same. We begin with the first array \begin{align*} v(ml)=&\frac{i}{2\pi}\sum_{m\neq k\in \Z}\frac{ml-kl}{|ml-kl|^2}-\frac{i}{2\pi}\sum_{k\in\Z}\frac{ml-a-kl+ih}{|ml-a-kl+ih|^2}\\ =&-\frac{i}{2\pi}\sum_{0\neq k\in \Z}\frac{kl}{|kl|^2}+\frac{i}{2\pi}\sum_{k\in\Z}\frac{a+kl-ih}{|a+kl-ih|^2}\\ =&\frac{i}{2\pi}\sum_{k\in\Z}\frac{a+kl-ih}{|a+kl-ih|^2}\\ =&v(0). \end{align*} Note that $$ \sum_{0\neq k\in \Z}\frac{kl}{|kl|^2}=0, $$ since we are using the symmetry sum \eqref{sym-sum}. For the second array, we obtain \begin{align*} v(a-ih+ml)=&\frac{i}{2\pi}\sum_{k\in\Z}\frac{a-ih+ml-kl}{|a-ih+ml-kl|^2}-\frac{i}{2\pi}\sum_{m\neq k\in\Z}\frac{ml-kl}{|ml-kl|^2}\\ =&\frac{i}{2\pi}\sum_{k\in\Z}\frac{a+kl-ih}{|a+kl-ih|^2}-\frac{i}{2\pi}\sum_{0\neq k\in \Z}\frac{kl}{|kl|^2}\\ =& v(0). \end{align*} Then, the velocity of the street is given by $$ V_0=\frac{i}{2\pi}\sum_{k\in\Z}\frac{a+kl-ih}{|a+kl-ih|^2}-\frac{i}{2\pi}\sum_{0\neq k\in \Z}\frac{kl}{|kl|^2}=\frac{i}{2\pi}\sum_{k\in\Z}\frac{a+kl-ih}{|a+kl-ih|^2}. $$ In order to find a better expression for $V_0$, let us come back to the stream function. Using \begin{align*} \nabla \ln |\sin(x)|=\frac{\sin x_1\cos x_1+i\sinh x_2 \cosh x_2}{|\sin x|^2}=\overline{\cot x}, \end{align*} we achieve $$ V_0=\frac{1}{2l i}\overline{\cot\left(\frac{\pi(ih-a)}{l}\right)}. $$ Let us now work with $a=\frac{l}{2}$. Using the definition of the complex cotangent, we obtain \begin{align*} \cot\left(\frac{\pi(ih-\frac{l}{2})}{l}\right)=-i\frac{\sinh\left(\frac{\pi h}{l}\right)}{\cosh\left(\frac{\pi h}{l}\right)}=-i\tanh\left(\frac{\pi h}{l}\right), \end{align*} which gives the announced expression for the velocity. The same idea applies when $a=0$: in that case $\cot\left(\frac{\pi ih}{l}\right)=-i\coth\left(\frac{\pi h}{l}\right)$, which yields $V_0=\frac{1}{2l}\coth\left(\frac{\pi h}{l}\right)$. \end{proof} For $a=0$, the velocity blows up as $h$ goes to 0, while for $a=\frac{l}{2}$ it tends to 0. In particular, considering $a=\frac{l}{2}$ and $h\rightarrow 0$ in the above proposition, one obtains the following corollary. \begin{cor} The vortex array given by $$ \omega(x)=\sum_{k\in\Z}\delta_{(kl,0)}(x)-\sum_{k\in\Z}\delta_{(\frac{l}{2}+kl,0)}(x), $$ is stationary, for any $l>0$. \end{cor} Similar ideas can be applied to find that a horizontal array of points with the same strength is stationary.
\begin{pro}\label{Prop-euler_statarrow} The vortex array given by $$ \omega(x)=\sum_{k\in\Z}\delta_{(a+kl,-h)}(x), $$ is stationary, for any $a\in\R$ and $h\in\R$. \end{pro} \noindent $\bullet$ {\it Subcritical case:} We finish this section by showing the result for interactions with faster decay at infinity. This case covers the QGSW and gSQG interactions: $G=-K_0(|\lambda||\cdot|)$ or $G=\frac{C_{\beta}}{2\pi}\frac{1}{|\cdot|^\beta}$ for $\beta\in(0,1)$. The result reads as follows. \begin{pro}\label{Gen-point} Let $G:\R^2\rightarrow \R$ be a function, smooth away from zero, satisfying \begin{enumerate} \item[(H1)] $G$ is radial, that is, $G(x)=\tilde{G}(|x|)$, \item[(H2)] there exist $R>0$, $C>0$ and $\beta_1\in(0,1]$ such that $|\tilde{G}'(r)|\leq \frac{C}{r^{1+\beta_1}}$, for $r\geq R$. \end{enumerate} Then, \begin{equation}\label{PointVortex-gen} q(x)=\sum_{k\in\Z}\delta_{(kl,0)}(x)-\sum_{k\in\Z}\delta_{(a+kl,-h)}(x), \end{equation} with $h\neq 0$, $l>0$ and $a\in\R$, translates with the constant velocity \begin{equation}\label{V_0-gen} V_0=i\sum_{k\in\Z} G'(|a+kl-ih|)\frac{a+kl-ih}{|a+kl-ih|}. \end{equation} In the case $a=0$ or $a=\frac{l}{2}$, the translation is parallel to the horizontal axis. \end{pro} \begin{rem} From now on, we will assume that $G$ is radial as in (H1), and we will write $G$ for $\tilde{G}$ when there is no risk of confusion, in order to simplify notation. \end{rem} \begin{rem} The second hypothesis is required to give a meaning to the infinite sum, which then converges absolutely. This condition could be weakened by assuming $$ \sum_{k\in\Z}\left|G'(|a+kl-ih|)\right|<+\infty. $$ \end{rem} \begin{proof} As in the previous models, the velocity at the points is given by \begin{align*} -iv(ml)=&\sum_{m\neq k\in\Z} G'(|ml-kl|)\frac{ml-kl}{|ml-kl|}\\ &-\sum_{k\in\Z} G'(|ml-a-kl+ih|)\frac{ml-a-kl+ih}{|ml-a-kl+ih|},\\ -iv(a+ml-ih)=&\sum_{k\in\Z} G'(|a+ml-ih-kl|)\frac{a+ml-ih-kl}{|a+ml-ih-kl|}\\ &-\sum_{m\neq k\in\Z} G'(|ml-kl|)\frac{ml-kl}{|ml-kl|}, \end{align*} for $m\in\Z$. The above sums converge thanks to the second assumption. We can check that the velocity is the same at every point of the street: \begin{align*} -iv(ml)=&\sum_{m\neq k\in\Z} G'(|ml-kl|)\frac{ml-kl}{|ml-kl|}\\ &-\sum_{k\in\Z} G'(|ml-a-kl+ih|)\frac{ml-a-kl+ih}{|ml-a-kl+ih|}\\ =&\sum_{0\neq k\in\Z} G'(|kl|)\frac{kl}{|kl|}+\sum_{k\in\Z} G'(|a+kl-ih|)\frac{a+kl-ih}{|a+kl-ih|}\\ =&\sum_{k\in\Z} G'(|a+kl-ih|)\frac{a+kl-ih}{|a+kl-ih|}\\ =&-iv(0),\\ -iv(a+ml-ih)=&\sum_{k\in\Z} G'(|a+ml-ih-kl|)\frac{a+ml-ih-kl}{|a+ml-ih-kl|}\\ &-\sum_{m\neq k\in\Z} G'(|ml-kl|)\frac{ml-kl}{|ml-kl|}\\ =&\sum_{k\in\Z} G'(|a-ih-kl|)\frac{a-ih-kl}{|a-ih-kl|}-\sum_{0\neq k\in\Z} G'(|kl|)\frac{kl}{|kl|}\\ =&-iv(0). \end{align*} Then, $$ V_0=v(0)=i\sum_{k\in\Z} G'(|a+kl-ih|)\frac{a+kl-ih}{|a+kl-ih|}. $$ If $a=0$ or $a=\frac{l}{2}$, one has that $$ \sum_{k\in\Z} G'(|a+kl-ih|)\frac{a+kl}{|a+kl-ih|}=0, $$ and the translation is in the horizontal direction. \end{proof} In the general case, we also have that a single array is stationary. \begin{pro} Let $G:\R^2\rightarrow \R$ be a function, smooth away from zero, satisfying (H1)-(H2), then \begin{equation*} q(x)=\sum_{k\in\Z}\delta_{(a+kl,-h)}(x), \end{equation*} is stationary for any $a\in\R$ and $h\in\R$. \end{pro} It is clear that $G=\frac{C_{\beta}}{2\pi}\frac{1}{|\cdot|^\beta}$, for $\beta\in(0,1)$, satisfies the hypotheses of the above results. In the case of the QGSW interaction, we obtain similar results.
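Indeed, the hypotheses (H1)-(H2) can be checked directly for the QGSW kernel $G=-K_0(|\lambda||\cdot|)$; we record the elementary computation, which uses only the classical identity $K_0'=-K_1$ and the exponential decay of $K_1$ at infinity (see Appendix \ref{Ap-specialfunctions} for the definitions): \begin{equation*} \tilde{G}(r)=-K_0(|\lambda| r), \qquad \tilde{G}'(r)=|\lambda| K_1(|\lambda| r)\sim |\lambda|\sqrt{\frac{\pi}{2|\lambda| r}}\,e^{-|\lambda| r}, \quad r\rightarrow +\infty, \end{equation*} so that (H1) holds trivially and (H2) is satisfied for any $\beta_1\in(0,1]$, taking $R$ large enough. Plugging $\tilde{G}'$ into \eqref{V_0-gen} recovers, up to the normalization of the kernel, the explicit formula stated below in \eqref{velocity_PV-QGSW}.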
In this case, the stream function associated to \eqref{PointVortex} is given by \begin{equation}\label{QGSW-psi1} \psi(x)=-\frac{1}{2\pi}\sum_{k\in \Z} K_0(\lambda |x-kl|)+\frac{1}{2\pi}\sum_{k\in\Z} K_0(\lambda |x-a-kl+ih|). \end{equation} The definition and some properties of the Bessel functions can be found in Appendix \ref{Ap-specialfunctions}. The above sum is convergent due to the behavior of $K_0$ at infinity, which is exponential: \begin{equation*} K_0(z)\sim \sqrt{\frac{\pi}{2z}}e^{-z},\quad |\textnormal{arg}(z)|<\frac{3}{2}\pi. \end{equation*} There is another representation of the stream function given in \cite{VladimirGryanikBorthOlbers}, where the periodicity structure is emphasized: \begin{align}\label{QGSW-psi2} \psi(x_1,x_2)=&-\frac{1}{a}\sum_{k\in \Z}\frac{\exp\left(-\sqrt{\left(\frac{2\pi k}{a}\right)^2+\lambda^2}|x_2|\right)}{\sqrt{\left(\frac{2\pi k}{a}\right)^2+\lambda^2}}\cos\left(\frac{2\pi k}{a}x_1\right)\nonumber\\ &+\frac{1}{a}\sum_{k\in \Z}\frac{\exp\left(-\sqrt{\left(\frac{2\pi k}{a}\right)^2+\lambda^2}|x_2+h|\right)}{\sqrt{\left(\frac{2\pi k}{a}\right)^2+\lambda^2}}\cos\left(\frac{2\pi k}{a}(x_1-a)\right). \end{align} Then, we state the result concerning the QGSW interaction. \begin{pro}\label{Prop-PV-QGSW} Given the point vortex street \eqref{PointVortex}, with $h\neq 0$, $l>0$ and $a\in\R$, then the street translates with the following constant velocity speed \begin{equation}\label{velocity_PV-QGSW} V_0=\frac{\lambda i}{2\pi}\sum_{ k\in\Z}K_1(\lambda|a+kl-ih|)\frac{a+kl-ih}{|a+kl-ih|}. \end{equation} In the case that $a=0$ or $a=\frac{l}{2}$, the translation is parallel to the horizontal axis. \end{pro} \section{Periodic patterns in the Euler and QGSW equations}\label{Sec2} This section is devoted to show the full construction of the K\'arm\'an Vortex Street structures in the Euler equations. Instead of considering two arrows of points as in Section \ref{Sec-NVP}, we consider two infinite arrows of patches distributed in the same way than the arrows of points \eqref{PointVortex}. We will refer to this configuration in the Euler equations as K\'arm\'an Vortex Patch Street. In the case of arrows of points, we showed in the last section that they translate. Here, we want to find a similar evolution in the Euler equations. Since these structures are periodic is space, first we will have to look for the green function associated to the $-\Delta$ operator in $\T\times\R$, which will come as an infinite sum of functions. This infinite sum can be expressed in terms of elementary functions, which helps us in the computations. Once we have the equation that will characterize the K\'arm\'an Vortex Patch Street, we will have to deal with the Implicit Function theorem. Hence, a desingularization of the K\'arm\'an Point Vortex Street will show the existence of these structures in terms of finite area domains that translate. At the end of this section, we will analyze the case of the QGSW equations, which will follow similarly. Let us focus now in the Euler equations: \begin{eqnarray*} \left\{\begin{array}{ll} \omega_t+(v\cdot \nabla) \omega=0, &\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\ v=K*\omega,&\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\ \omega(t=0,x)=\omega_0(x),& \text{ with $x\in\mathbb{R}^2$}. \end{array}\right. \end{eqnarray*} The second equation links the velocity to the vorticity through the Biot--Savart law, where $K(x)=\frac{1}{2\pi}\frac{x^\perp}{|x|^2}$ and $x^\perp=(-x_2,x_1)$. 
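Under the complex identification of $\R^2$ with $\C$ used in the rest of this section (and introduced in detail just below), it is worth keeping in mind the compact form of the Biot--Savart kernel, which is the one behind the velocity formulas of Section \ref{Sec-NVP}: writing $x=x_1+ix_2$, one has $x^\perp=ix$ and $\frac{x}{|x|^2}=\frac{1}{\overline{x}}$, hence \begin{equation*} K(x)=\frac{1}{2\pi}\frac{ix}{|x|^{2}}=\frac{i}{2\pi\,\overline{x}}, \qquad x\in\C\setminus\{0\}. \end{equation*}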
We denote by $\psi$ the stream function, which verifies $v=\nabla^\perp \psi$. From now on we will use complex notation in order to simplify the computations. Then, we identify $(x_1,x_2)\in\R^2$ with $x_1+ix_2\in\C$. In the same way, $x^\perp=ix$. Moreover, the gradient operator in $\R^2$ can be identified with the Wirtinger derivative, i.e., \begin{equation}\label{gradient-complex} \nabla=2\partial_{\overline{z}}, \quad \partial_{\overline{z}}\varphi(z):=\frac12\left(\partial_{1}\varphi(z)+i\partial_{2} \varphi(z)\right), \end{equation} for a complex function $\varphi$. \subsection{Velocity of the K\'arm\'an Vortex Patch Street} Consider the initial condition given by \begin{align}\label{FAPointVortex} \omega_0(x_1,x_2)=&\frac{1}{\pi}\sum_{k\in\Z}{\bf{1}}_{D_1}(x_1-kl,x_2)-\frac{1}{\pi}\sum_{k\in\Z}{\bf{1}}_{D_2}(x_1-kl,x_2)\nonumber\\ =&\frac{1}{\pi}\sum_{k\in\Z}{\bf{1}}_{D_1+kl}(x_1,x_2)-\frac{1}{\pi}\sum_{k\in\Z}{\bf{1}}_{D_2+kl}(x_1,x_2), \end{align} where $D_1$ and $D_2$ are simply--connected bounded domains such that $|D_1|=|D_2|$, and $l>0$. The velocity field can be computed through the Biot--Savart law in $\T\times\R$. For that, one must find the Green function associated to the $-\Delta$ operator in order to have an expression for the stream function $\psi$. Later, we just use that $v=\nabla^\perp \psi$, or with the complex notation, $v=2i\partial_{\overline{z}} \psi$. This will be developed in the next result, obtaining different expressions for the velocity, which will be useful later. \begin{pro}\label{Prop-velocity} The velocity field of the Euler equations associated to \eqref{FAPointVortex} is given by the following expressions: \begin{enumerate} \item \begin{align*} v_0(x)=&-\frac{1}{2\pi^2}{\int_{\partial D_1}\ln\left|\sin\left(\frac{\pi(x-\xi)}{l}\right)\right|\, d\xi}+\frac{1}{2\pi^2}{\int_{\partial D_2}\ln\left|\sin\left(\frac{\pi(x-\xi)}{l}\right)\right|\, d\xi}. \end{align*} \item\label{v3} $$v_0(x)=\frac{i}{2l\pi}\overline{\int_{D_1}\cot\left[\frac{\pi(x-y)}{l}\right]\, dA(y)}-\frac{i}{2l\pi}\overline{\int_{D_2}\cot\left[\frac{\pi(x-y)}{l}\right]\, dA(y)}.$$ \item\label{v4} \begin{align*} v_0(x)=&\frac{1}{4\pi^2}\overline{{\int_{\partial D_1}\frac{\overline{x}-\overline{\xi}}{x-\xi} \, d\xi}}-\frac{1}{2\pi^2}{\int_{\partial D_1}\ln\left|H\left(\frac{\pi(x-\xi)}{l}\right)\right| \, d\xi}\nonumber\\ &-\frac{1}{4\pi^2}\overline{{\int_{\partial D_2}\frac{\overline{x}-\overline{\xi}}{x-\xi} \, d\xi}}-\frac{1}{2\pi^2}{\int_{\partial D_2}\ln\left|H\left(\frac{\pi(x-\xi)}{l}\right)\right| \, d\xi}, \end{align*} with \begin{equation}\label{H} H(z)=1+\sum_{k\geq 1}\frac{(-1)^k}{(2k+1)!}z^{2k}=\frac{\sin(z)}{z}. \end{equation} \end{enumerate} \end{pro} \begin{proof} \noindent (1) Let us begin finding the stream function associated to $$ \omega_{0,K}(x_1,x_2)=\frac{1}{\pi}\sum_{|k|\leq K}{\bf{1}}_{D_1+kl}(x_1,x_2)-\frac{1}{\pi}\sum_{|k|\leq K}{\bf{1}}_{D_2+kl}(x_1,x_2), $$ by superposing the stream function of each one of the elements of the sum, i.e., \begin{equation}\label{psiK} \psi_{0,K}(x)=\frac{1}{2\pi^2}\sum_{|k|\leq K}\int_{D_1}\ln|x-y-kl|\, dA(y)-\frac{1}{2\pi^2}\sum_{|k|\leq K}\int_{D_2}\ln|x-y-kl|\, dA(y). 
\end{equation} Using the same idea than in Proposition \ref{Prop-PV}, we find that $$ \sum_{|k|\leq K}\ln|x-y-kl|=\ln\left|\frac{\pi(x-y)}{l}\prod_{k=1}^{K}\left(1-\frac{(x-y)^2}{k^2l^2}\right)\right|+\ln\left|\frac{l}{\pi}\prod_{k=1}^{K}k^2l^2\right|, $$ and hence \begin{align*} \psi_{0,K}(x)=&\frac{1}{2\pi^2}\int_{D_1}\ln\left|\frac{\pi(x-y)}{l}\prod_{k=1}^{K}\left(1-\frac{(x-y)^2}{k^2l^2}\right)\right|\, dA(y)+\frac{1}{2\pi^2}\ln\left|\frac{l}{\pi}\prod_{k=1}^{K}k^2l^2\right||D_1|\\ &-\frac{1}{2\pi^2}\int_{D_2}\ln\left|\frac{\pi(x-y)}{l}\prod_{k=1}^{K}\left(1-\frac{(x-y)^2}{k^2l^2}\right)\right|\, dA(y)-\frac{1}{2\pi^2}\ln\left|\frac{l}{\pi}\prod_{k=1}^{K}k^2l^2\right||D_2|. \end{align*} Using that $D_1$ and $D_2$ have same area, it follows that \begin{align*} \psi_{0,K}(x)=&\frac{1}{2\pi^2}\int_{D_1}\ln\left|\frac{\pi(x-y)}{l}\prod_{k=1}^{K}\left(1-\frac{(x-y)^2}{k^2l^2}\right)\right|\, dA(y)\\ &-\frac{1}{2\pi^2}\int_{D_2}\ln\left|\frac{\pi(x-y)}{l}\prod_{k=1}^{K}\left(1-\frac{(x-y)^2}{k^2l^2}\right)\right|\, dA(y), \end{align*} where the sine formula \eqref{sine} yields $$ \psi_0(x)=\frac{1}{2\pi^2}{\int_{D_1}\ln\left|\sin\left(\frac{\pi(x-y)}{l}\right)\right|\, dA(y)}-\frac{1}{2\pi^2}{\int_{D_2}\ln\left|\sin\left(\frac{\pi(x-y)}{l}\right)\right|\, dA(y)}. $$ Then, \begin{align}\label{velocity} v_0(x)=&\frac{i\partial_{\overline{x}}}{\pi^2}{\int_{D_1}\ln\left|\sin\left(\frac{\pi(x-y)}{l}\right)\right|\, dA(y)}-\frac{i\partial_{\overline{x}}}{\pi^2}{\int_{D_2}\ln\left|\sin\left(\frac{\pi(x-y)}{l}\right)\right|\, dA(y)}\nonumber\\ =&-\frac{1}{\pi^2}{\int_{D_1}i\partial_{\overline{y}}\ln\left|\sin\left(\frac{\pi(x-y)}{l}\right)\right|\, dA(y)}+\frac{1}{\pi^2}{\int_{D_2}i\partial_{\overline{y}}\ln\left|\sin\left(\frac{\pi(x-y)}{l}\right)\right|\, dA(y)}\nonumber\\ =&-\frac{1}{2\pi^2}{\int_{\partial D_1}\ln\left|\sin\left(\frac{\pi(x-\xi)}{l}\right)\right|\, d\xi}+\frac{1}{2\pi^2}{\int_{\partial D_2}\ln\left|\sin\left(\frac{\pi(x-\xi)}{l}\right)\right|\, d\xi}. \end{align} The Stokes theorem, see Appendix \ref{Ap-potentialtheory}, has been applied in the last line. \noindent (2) This expression comes from \eqref{velocity} and $$ 2\partial_{\overline{x}} \ln |\sin(x)|=\overline{\cot x}, $$ used in Proposition \ref{Prop-PV}. \noindent (3) From (1), we can use the series expansion of the complex sine, $$ \sin(z)=zH(z),\quad H(z)=1+\sum_{k\geq 1}\frac{(-1)^k}{(2k+1)!}z^{2k}, $$ in order to obtain \begin{align*} \ln\left|\sin\left(\frac{\pi(x-\xi)}{l}\right)\right|&=\ln\left|\frac{\pi(x-\xi)}{l}\right|+\ln\left|1+\sum_{k\geq 1}\frac{(-1)^k}{(2k+1)!}\frac{\pi^{2k}}{l^{2k}}(x-\xi)^{2k}\right|\\ &=\ln\left|\frac{\pi(x-\xi)}{l}\right|+\ln\left|H\left(\frac{\pi(x-\xi)}{l}\right)\right|. \end{align*} Then, we have \begin{align*} v_0(x)=&-\frac{1}{2\pi^2}{\int_{\partial D_1}\ln\left|\frac{\pi(x-\xi)}{l}\right| \, d\xi}-\frac{1}{2\pi^2}{\int_{\partial D_1}\ln\left|H\left(\frac{\pi(x-\xi)}{l}\right)\right| \, d\xi}\\ &+\frac{1}{2\pi^2}{\int_{\partial D_2}\ln\left|\frac{\pi(x-\xi)}{l}\right| \, d\xi}+\frac{1}{2\pi^2}{\int_{\partial D_2}\ln\left|H\left(\frac{\pi(x-\xi)}{l}\right)\right| \, d\xi}. \end{align*} Moreover, the Stokes formula \eqref{Stokes} yields \begin{align*} v_0(x)=&\frac{i}{2\pi^2}\overline{\int_{D_1}\frac{1}{x-y}\, dA(y)}-\frac{1}{2\pi^2}{\int_{\partial D_1}\ln\left|H\left(\frac{\pi(x-\xi)}{l}\right)\right| \, d\xi}\\ &-\frac{i}{2\pi^2}\overline{\int_{D_2}\frac{1}{x-y}\, dA(y)}+\frac{1}{2\pi^2}{\int_{\partial D_2}\ln\left|H\left(\frac{\pi(x-\xi)}{l}\right)\right| \, d\xi}. 
\end{align*} Finally, let us use the Cauchy--Pompeiu formula \eqref{Cauchy-Pom} for the first and third terms, to find \begin{align*} v_0(x)=&\frac{1}{4\pi^2}\overline{{\int_{\partial D_1}\frac{\overline{x}-\overline{\xi}}{x-\xi} \, d\xi}}-\frac{1}{2\pi^2}{\int_{\partial D_1}\ln\left|H\left(\frac{\pi(x-\xi)}{l}\right)\right| \, d\xi}\\ &-\frac{1}{4\pi^2}\overline{{\int_{\partial D_2}\frac{\overline{x}-\overline{\xi}}{x-\xi} \, d\xi}}-\frac{1}{2\pi^2}{\int_{\partial D_2}\ln\left|H\left(\frac{\pi(x-\xi)}{l}\right)\right| \, d\xi}. \end{align*} \end{proof} \subsection{Functional setting of the problem} The first step is to scale the vorticity \eqref{FAPointVortex} in order to introduce the point vortices in our formulation and be able to desingularize them. Let us define \begin{equation}\label{omega_epsilon} \omega_{0,\varepsilon}(x)=\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf{1}}_{\varepsilon D_1+kl}(x)-\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf{1}}_{\varepsilon D_2+kl}(x), \end{equation} for $l>0$ and $\varepsilon>0$. The domains $D_1$ and $D_2$ are simply--connected and bounded. In the case that $|D_1|=|\D|$ and $D_2=-D_1+a-ih$, with $a\in\R$ and $h\neq 0$, we recover the point vortex street \eqref{PointVortex} passing to the limit $\varepsilon\rightarrow 0$: \begin{equation}\label{omega00} \omega_{0,0}(x)=\sum_{k\in\Z}\delta_{(kl,0)}(x)-\sum_{k\in\Z}\delta_{(a+kl,-h)}(x). \end{equation} Proposition \ref{Prop-PV} deals with \eqref{omega00} from the dynamical system point of view, showing that it translates. Moreover, if $a=0$ or $a=\frac{l}{2}$, the translation is horizontal. Now, we look for the equation that characterizes a translating evolution in the Euler equations. Assume that we have $\omega(t,x)=\omega_0(x-Vt)$, with $V\in\C$. Inserting this ansatz in the Euler equations, we arrive at $$ \left(v_0(x)-V\right)\cdot \nabla\omega_0(x)=0,\quad x\in\R^2, $$ where ``$\cdot$'' indicates the scalar product in $\R^2$. As in the previous section, we always work in the complex sense, identifying $(x_1,x_2)\in\R^2$ with $x_1+ix_2\in\C$. The gradient operator can be identified with the $\partial_{\overline{z}}$ derivative as in \eqref{gradient-complex}. Then, the above equation can be written as $$ \textnormal{Re}\left[\overline{(v_0(x)-V)}\partial_{\overline{x}}\omega_0(x)\right]=0, \quad x\in\C. $$ When working with the scaled vorticity $\omega_{0,\varepsilon}$, this equation must be understood in the weak sense, yielding \begin{equation}\label{eq1} \left(v_{0,\varepsilon}(x)-V\right)\cdot \vec{n}(x)=0,\quad x\in\partial\, (\varepsilon D_1+kl)\cup \partial\, (\varepsilon D_2+kl), \end{equation} or, equivalently, $$ \textnormal{Re}\left[\overline{(v_{0,\varepsilon}(x)-V)}\vec{n}(x)\right]=0, \quad x\in\partial\, (\varepsilon D_1+kl)\cup \partial\, (\varepsilon D_2+kl), $$ for any $k\in\Z$. Here, $\vec{n}$ is the exterior normal vector and $v_{0,\varepsilon}$ is the velocity associated to \eqref{omega_epsilon}. The expression of $v_{0,\varepsilon}$ coming from Proposition \ref{Prop-velocity}--\eqref{v3} gives us $$ v_{0,\varepsilon}(x)=\frac{i}{2l\pi \varepsilon^2}\overline{\int_{\varepsilon D_1}\cot\left[\frac{\pi(x-y)}{l}\right]\, dA(y)}-\frac{i}{2l \pi\varepsilon^2}\overline{\int_{\varepsilon D_2}\cot\left[\frac{\pi(x-y)}{l}\right]\, dA(y)}. $$ We can check that $v_{0,\varepsilon}(x+kl)=v_{0,\varepsilon}(x)$, for any $k\in\Z$. Moreover, we have that $\vec{n}_{D+kl}(x+kl)=\vec{n}_{D}(x)$, for any simply--connected bounded domain $D$.
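The periodicity in $x$ is immediate from the $\pi$--periodicity of the cotangent; a minimal check, using only $\cot(z+k\pi)=\cot(z)$ for $k\in\Z$, reads \begin{equation*} \cot\left[\frac{\pi(x+kl-y)}{l}\right]=\cot\left[\frac{\pi(x-y)}{l}+k\pi\right]=\cot\left[\frac{\pi(x-y)}{l}\right], \end{equation*} which, inserted in the expression of $v_{0,\varepsilon}$ above, gives $v_{0,\varepsilon}(x+kl)=v_{0,\varepsilon}(x)$.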
Then, the equation \eqref{eq1} reduces to \begin{align*} \textnormal{Re}\left[\overline{(v_{0,\varepsilon}(x)-V)}\vec{n}(x)\right]=0,\quad x\in\partial\, D_1\cup \partial\, D_2. \end{align*} Consider $D_2=-D_1+a-ih$, for $a=0$ or $a=\frac{l}{2}$, and $h\neq 0$. By using the relation between $D_1$ and $D_2$, the above system reduces to just one equation: \begin{align*} \textnormal{Re}\left[\overline{(v_{0,\varepsilon}(x)-V)}\vec{n}(x)\right]=0,\quad x\in\partial\, D_1, \end{align*} where $$ v_{0,\varepsilon}(\varepsilon x)=\frac{i}{2l\pi }\overline{\int_{ D_1}\cot\left[\frac{\pi\varepsilon(x-y)}{l}\right]\, dA(y)}-\frac{i}{2l \pi}\overline{\int_{ D_1}\cot\left[\frac{\pi(\varepsilon(x+y)-a+ih)}{l}\right]\, dA(y)}. $$ Other representations of the velocity field can be obtained using Proposition \ref{Prop-velocity}: $$ v_{0,\varepsilon}(\varepsilon x)=-\frac{1}{2\pi^2\varepsilon}{\int_{\partial D_1}\ln\left|\sin\left(\frac{\pi(\varepsilon(x-\xi))}{l}\right)\right|\, d\xi}-\frac{1}{2\pi^2\varepsilon}{\int_{\partial D_1}\ln\left|\sin\left(\frac{\pi(\varepsilon(x+\xi)-a+ih)}{l}\right)\right|\, d\xi}. $$ At this stage, we are going to introduce an exterior conformal map from $\T$ into $\partial D_1$ given by \begin{equation}\label{phi-euler} \phi(w)=i (w+\varepsilon f(w)), \quad f(w)=\sum_{n\geq 1}a_nw^{-n}, \quad a_n\in\R, w\in\T. \end{equation} For others values of $a$, one must readjust the conformal map, but here we will consider $a=0$ and $a=\frac{l}{2}$ having a horizontal translation in the point vortex system. Note that $\vec{n}(\phi(w))=w\phi'(w)$. Then, we can rewrite the equation with the use of the above conformal map in the following way: \begin{equation}\label{F-euler} F_{E}(\varepsilon,f,V)(w):=\textnormal{Re}\left[\left\{\overline{I_E(\varepsilon,f)(w)}-\overline{V}\right\}{w}{\phi'(w)}\right]=0, \quad w\in\T, \end{equation} where \begin{align}\label{Iepsilon} I_E(\varepsilon,f)(w):=&-\frac{1}{2\pi^2\varepsilon}{\int_{\T}\ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(w)-\phi(\xi)))}{l}\right)\right|\phi'(\xi)\, d\xi}\nonumber\\ &-\frac{1}{2\pi^2\varepsilon}{\int_{\T}\ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))-a+ih)}{l}\right)\right|\phi'(\xi)\, d\xi}. \end{align} Note that $$ I_E(\varepsilon,f)(w)=v_{0,\varepsilon}(\varepsilon\phi(w)). $$ As it is mentioned in the introduction, \cite{HmidiMateu-pairs} deals with the desingularization of a vortex pairs in both the Euler equations and the generalized quasi-geostrophic equation. 
In order to relate $F_E$ with the functional in \cite{HmidiMateu-pairs}, we can write $I_E$ as \begin{align*} I_E(\varepsilon,f)(w)=&\frac{1}{4\pi^2\varepsilon}\overline{{\int_{\T}\frac{\overline{\phi(w)}-\overline{\phi(\xi)}}{\phi(w)-\phi(\xi)}\phi'(\xi) \, d\xi}}-\frac{1}{2\pi^2 \varepsilon}{\int_{\T}\ln\left|H\left(\frac{\pi\varepsilon(\phi(w)-\phi(\xi))}{l}\right)\right| \phi'(\xi)\, d\xi}\nonumber\\ &+\frac{1}{4\pi^2\varepsilon}\overline{{\int_{\T}\frac{\varepsilon(\overline{\phi(w)}+\overline{\phi(\xi)})-a+ih}{\varepsilon(\phi(w)-\phi(\xi))-a+ih}\phi'(\xi) \, d\xi}}\\ &-\frac{1}{2\pi^2 \varepsilon}{\int_{\T}\ln\left|H\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))-a+ih)}{l}\right)\right| \phi'(\xi)\, d\xi}\nonumber\\ =&\frac{1}{4\pi^2\varepsilon}\overline{{\int_{\T}\frac{\overline{\phi(w)}-\overline{\phi(\xi)}}{\phi(w)-\phi(\xi)}\phi'(\xi) \, d\xi}}+\frac{1}{4\pi^2\varepsilon}\overline{{\int_{\T}\frac{\varepsilon(\overline{\phi(w)}+\overline{\phi(\xi)})-a+ih}{\varepsilon(\phi(w)-\phi(\xi))-a+ih}\phi'(\xi) \, d\xi}}\\ &+\tilde{I}_E(\varepsilon,f)(w), \end{align*} using the expression of the velocity written in Proposition \ref{Prop-velocity}-(3). The Residue Theorem amounts to \begin{align}\label{exp-I} I_E(\varepsilon,f)(w)=&\frac{1}{4\pi^2\varepsilon}\overline{{\int_{\T}\frac{\overline{\phi(w)}-\overline{\phi(\xi)}}{\phi(w)-\phi(\xi)}\phi'(\xi) \, d\xi}\nonumber}\\ &+\frac{1}{4\pi^2}\overline{{\int_{\T}\frac{\overline{\phi(\xi)}}{\varepsilon(\phi(w)-\phi(\xi))-a+ih}\phi'(\xi) \, d\xi}}+\tilde{I}_E(\varepsilon,f)(w)\nonumber\\ =&\hat{I}_E(\varepsilon,f)(w)+\tilde{I}_E(\varepsilon,f)(w). \end{align} Hence, the function $\hat{I}_E$ comes from the study of the vortex pairs in \cite{HmidiMateu-pairs}. We will take advantage of the study done in that work about $\hat{I}_E$. In this model, $\hat{I}_{E}$ indicates the contribution of just two vortex patches. The first step is to check that we recover the point vortex street with this model. Remind that $a=0$ or $a=\frac{l}{2}$. \begin{pro}\label{Prop-trivialsol} For any $h\neq 0$, $l>0$, the following equation is verified $$ F_E(0, 0, V_0)(w)=0, \quad w\in\T, $$ where $V_0$ is given by \eqref{velocity_PV}. \end{pro} \begin{proof} The equation that we must check is \begin{align*} \lim_{\varepsilon\rightarrow 0}\, \textnormal{Re}\Big[\Big\{&\frac{i}{2\pi^2\varepsilon}\overline{\int_{\T}\ln\left|\sin\left(\frac{\pi\varepsilon i(w-\xi)}{l}\right)\right|\, d\xi}\\ &+\frac{i}{2\pi^2\varepsilon}\overline{\int_{\T}\ln\left|\sin\left(\frac{\pi(\varepsilon i(w+\xi)-a+ih)}{l}\right)\right|\, d\xi}-\overline{V_0}\Big\}{w}i\Big]=0. \end{align*} Using the Stokes Theorem, it agrees with \begin{align*} \lim_{\varepsilon\rightarrow 0}\, \textnormal{Re}\Big[\Big\{&-\frac{i}{2l\pi }{\int_{ \D}\cot\left[\frac{\pi\varepsilon i(w-y)}{l}\right]\, dA(y)}\\ &+\frac{i}{2l\pi }{\int_{ \D}\cot\left[\frac{\pi(\varepsilon i(w+y)-a+ih)}{l}\right]\, dA(y)}-\overline{V_0}\Big\}{w}i\Big]=0. \end{align*} We study the equation in two parts. First, note that \begin{align*} \lim_{\varepsilon\rightarrow 0}\, &\textnormal{Re}\left[\left\{\frac{i}{2l \pi}{\int_{ \D}\cot\left[\frac{\pi(\varepsilon i(w+y)-a+ih)}{l}\right]\, dA(y)}-\overline{V_0}\right\}{w}i\right]\\ &=\textnormal{Re}\left[\left\{-\frac{1}{2l\pi i}{\cot\left[\frac{\pi(ih-a)}{l}\right]}|\D|-\overline{V_0}\right\}{w}i\right]\\ &=0, \quad w\in\T. \end{align*} In the above limit, we may use the Dominated Convergence Theorem in order to introduce the limit inside the integral. 
Second, we use the expansion of the complex cotangent as $$ \cot (z)=\frac{1}{z}+ z T(z),\quad T(z)=\sum_{k=1}^{\infty}\frac{2}{z^2-\pi^2k^2}, $$ where $T$ is a smooth function for $|z|<1$. Then, the only contribution in $F_E$ is given by the first part: \begin{align*} \lim_{\varepsilon\rightarrow 0}\, \textnormal{Re}\left[\frac{i}{2l\pi }{\int_{ \D}\cot\left[\frac{\pi\varepsilon i(w-y)}{l}\right]\, dA(y)}{w}i\right]&=\lim_{\varepsilon\rightarrow 0}\,\frac{1}{\varepsilon}\textnormal{Re}\left[\frac{i}{2\pi^2 }{\int_{ \D}\frac{1}{w-y} \, dA(y)}{w}\right]\\ &=\lim_{\varepsilon\rightarrow 0}\,\frac{1}{2\pi\varepsilon}\textnormal{Re}\left[i\right]\\ &=0, \end{align*} for $w\in\T$, where we have used the Residue Theorem to compute the integral. \end{proof} We fix the Banach spaces that we will use when we apply the Implicit Function Theorem. For $\alpha\in(0,1)$, we define \begin{align} X_{\alpha}&=\left\{f\in C^{1,\alpha}(\T), \quad f(w)=\sum_{n\geq 1}a_nw^{-n},\, a_n\in\R\right\},\label{X}\\ Y_{\alpha}&=\left\{f\in C^{0,\alpha}(\T), \quad f(e^{i\theta})=\sum_{n\geq 2}a_n\sin(n\theta)\label{Y},\, a_n\in\R\right\}. \end{align} \begin{rem} {{ Let us explain why we need that the first frequency in the domain $Y_{\alpha}$ is vanishing. In the case that $a=0$ or $a=\frac{l}{2}$, it can be checked that $F_E(\varepsilon,f,V)$ is well--defined and $C^1$ from $\R\times X_{\alpha}\times \R$ to $$ \tilde{Y}_{\alpha}=\left\{f\in C^{0,\alpha}(\T), \quad f(e^{i\theta})=\sum_{n\geq 1}a_n\sin(n\theta),\, a_n\in\R\right\}. $$ But, when we linerarize ${F}_E$ and obtain $\partial_f {F}_E(0, 0, V)$, this is not an isomorphism from $X_{\alpha}$ to $\tilde{Y}_{\alpha}$. However, it does from $X_{\alpha}$ to $Y_{\alpha}$. We are using $Y_{\alpha}$ instead of $\tilde{Y}_{\alpha}$ in order to implement later the Implicit Function Theorem. }} \end{rem} \begin{rem} Note that if $f\in B_{X_{\alpha}}(0,\sigma)$, with $\sigma<1$, then $\phi$ is bilipschitz. \end{rem} \begin{pro}\label{Prop-trivialsol2} The function $V:(-\varepsilon_0,\varepsilon_0)\times B_{X_{\alpha}}(0,\sigma)\longrightarrow \R$, given by \begin{align}\label{function-V} V(\varepsilon,f)=&\frac{\int_{\T}\overline{ I_E(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw}{\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw}, \end{align} fulfills $V(0, f)=V_0$, where $V_0$ is defined in \eqref{velocity_PV}. The parameters satisfy: $\varepsilon_0\in(0,\textnormal{min}\{1,\frac{l}{4}\})$, $\sigma<1$, $\alpha\in(0,1)$, and $X$ is defined in \eqref{X}. \end{pro} \begin{proof} In the expression \eqref{function-V}, let us work with the denominator. The Residue Theorem amounts to $$ \lim_{\varepsilon\rightarrow 0}\int_{\T}w\phi'(w)(1-\overline{w}^2)dw=i\int_{\T} w(1-\overline{w}^2)dw=2\pi. $$ From \eqref{Iepsilon} and the ideas in Proposition \ref{Prop-trivialsol}, we get \begin{align}\label{lim-Ie} \lim_{\varepsilon\rightarrow 0} I_E(\varepsilon,f)(w)=&\lim_{\varepsilon\rightarrow 0}\left\{-\frac{i}{2\pi^2\varepsilon}{\int_{\T}\ln\left|\sin\left(\frac{\pi(\varepsilon i(w-\xi))}{l}\right)\right|\, d\xi}+V_0\right\}\nonumber\\ =&\lim_{\varepsilon\rightarrow 0}\left\{\frac{1}{2\pi^2\varepsilon }\overline{\int_{ \D}\frac{1}{w-y} \, dA(y)}+V_0\right\}\nonumber\\ =&\lim_{\varepsilon\rightarrow 0}\left\{\frac{w}{2\pi\varepsilon}+V_0\right\}. \end{align} Note also that $$ \int_{\T}\overline{w}w(1-\overline{w}^2)dw=\int_{\T}(1-\overline{w}^2)dw=0, $$ via again the Residue Theorem. Then, the first term in \eqref{lim-Ie} does not provide any contribution. 
It implies that $$ V(0,f)=V_0\frac{\int_{\T} {w}(1-\overline{w}^2)dw}{\int_{\T} {w}(1-\overline{w}^2)dw}=V_0. $$ \end{proof} \begin{pro}\label{Prop-regEuler} If $V$ sets \eqref{function-V}, then $$\tilde{F}_E:(-\varepsilon_0,\varepsilon_0)\times B_{X_{\alpha}}(0,\sigma)\rightarrow Y_{\alpha},$$ with $\tilde{F}_E(\varepsilon,f)=F_E(\varepsilon,f,V(\varepsilon,f))$, is well--defined and $C^1$. The parameters satisfy that $\alpha\in(0,1)$, $\varepsilon_0\in(0,\textnormal{min}\{1,\frac{l}{4}\})$ and $\sigma<1$. \end{pro} \begin{rem}\label{rem-taylor} Let us clarify why we need the condition $\varepsilon_0<\frac{l}{4}$. In some point of the proof we need to use Taylor formula in the following way \begin{align}\label{Taylor-formula} G(|z_1+z_2|)=G(|z_1|)+\int_0^1G'(|z_1+tz_2|)\frac{\textnormal{Re}\left[(z_1+tz_2)\overline{z_2}\right]}{|z_1+tz_2|}dt, \end{align} for $z_1,z_2\in\C$ and { $|z_2|<|z_1|$}. Here, the use of this formula is not explicit since we are refering to the work \cite{HmidiMateu-pairs}, and this condition is needed in order to check $|z_2|<|z_1|$. Although we are not using it explicitly in this proof, we will use it for the general equation in the following section. \end{rem} \begin{proof} We will divide the proof in three steps. \noindent {\it $\bullet$ First step: Symmetry of $F_E$.} Note that $\phi$ given by \eqref{phi-euler} verifies $$ \phi(\overline{w})=-\overline{\phi(w)}, $$ where we are taking $\vartheta=i$ in order to work with $a=0$ or $a=\frac{l}{2}$. We are going to check that $F_E(\varepsilon,f,V)(e^{i\theta})=\sum_{n\geq 1}f_n\sin(n\theta)$, with $f_n\in\R$. To do that, it is enough to prove that $$ F_E(\varepsilon,f,V)(\overline{w})=-F_E(\varepsilon,f,V)(w). $$ Recall the following property of the complex integrals over $\T$: \begin{equation}\label{property-integral} \overline{\int_{\T}f(w)dw}=-\int_{\T}\overline{f(\overline{w})}dw, \end{equation} for a complex function $f$. Let us start with the expression of $I_E(\varepsilon,f)$ and note that \begin{align*} \ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))-a+ih)}{l}\right)\right|=\ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))+ih)}{l}\right)\right|, \quad a=0,\\ \ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))-a+ih)}{l}\right)\right|=\ln\left|\cos\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))+ih)}{l}\right)\right|, \quad a=\frac{l}{2}. \end{align*} Then, $$ \ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(\overline{w})+\phi(\overline{\xi}))-a+ih)}{l}\right)\right|=\ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))-a+ih)}{l}\right)\right|, $$ for $a=0$ and $a=\frac{l}{2}$. 
Notice that $I_E(\varepsilon,f)(\overline{w})=\overline{I_E(\varepsilon,f)(w)}$, which implies \begin{align*} -2\pi^2\varepsilon\overline{I_E(\varepsilon,f)(w)}=&\overline{{\int_{\T}\ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(w)-\phi(\xi)))}{l}\right)\right|\phi'(\xi)\, d\xi}}\\ &+\overline{\int_{\T}\ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))-a+ih)}{l}\right)\right|\phi'(\xi)\, d\xi}\\ =&-{{\int_{\T}\ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(w)-\phi(\overline{\xi})))}{l}\right)\right|\overline{\phi'(\overline{\xi})}\, d\xi}}\\ &-{\int_{\T}\ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\overline{\xi}))-a+ih)}{l}\right)\right|\overline{\phi'(\overline{\xi})}\, d\xi}\\ =&{{\int_{\T}\ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(\overline{w})-\phi(\xi)))}{l}\right)\right|\phi'(\xi)\, d\xi}}\\ &+{\int_{\T}\ln\left|\sin\left(\frac{\pi(\varepsilon(\phi(\overline{w})+\phi(\xi))-a+ih)}{l}\right)\right|\phi'(\xi)\, d\xi}\\ =&-2\pi^2\varepsilon{I_E(\varepsilon,f)(\overline{w})}. \end{align*} Next, if $V$ is given by \eqref{function-V}, then we are going to check that $V\in\R$. Let us analyze the denominator and the numerator of the expression of $V$: \begin{align*} 2i\textnormal{Im}\Big[\int_{\T}\overline{ I_E(\varepsilon,f)(w)}&{w}{\phi'(w)}(1-\overline{w}^2)dw\Big]\\ =&\int_{\T}\overline{ I_E(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw-\overline{\int_{\T}\overline{ I_E(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw}\\ =&\int_{\T}\overline{ I_E(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw+\int_{\T}\overline{ I_E(\varepsilon,f)(w)}{w}\overline{\phi'(\overline{w})}(1-\overline{w}^2)dw\\ =&\int_{\T}\overline{ I_E(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw-\int_{\T}\overline{ I_E(\varepsilon,f)(w)}{w}{\phi'({w})}(1-\overline{w}^2)dw\\ =&0, \end{align*} and \begin{align*} 2i\textnormal{Im}\left[\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw\right]=&\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw-\overline{\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw}\\ =&\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw+\int_{\T} {w}\overline{\phi'(\overline{w})}(1-\overline{w}^2)dw\\ =&\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw-\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw\\ =&0. \end{align*} Then, $V\in\R$. Hence, \begin{align*} F_E(\varepsilon,f,V)(\overline{w})=&\textnormal{Re}\left[\left\{\overline{I_E(\varepsilon,f)(\overline{w})-V}\right\}\overline{w}{\phi'(\overline{w})}\right]\\ =&-\textnormal{Re}\left[\left\{{I_E(\varepsilon,f)(w)-V}\right\}{\overline{w}\overline{\phi'(w)}}\right]\\ =&-F_E(\varepsilon,f,V)({w}). \end{align*} In order to check that $\tilde{F}_E(\varepsilon,f)\in Y_{\alpha}$, we need $f_1=0$. For that, we ask the condition $$ \int_0^{2\pi}F_E(\varepsilon,f,V)(e^{i\theta})\sin(\theta)d\theta=-\frac12\int_{\T} F_E(\varepsilon,f,V)(w)(1-\overline{w}^2)dw=0, $$ which agrees with $$ \int_{\T} \left\{\overline{I_E(\varepsilon,f)(w)}-V\right\}{w}{\phi'(w)}(1-\overline{w}^2)dw=0. $$ Using that $V$ verifies \eqref{function-V}, the last equation is clearly set. \noindent {\it $\bullet$ Second step: Regularity of $V$. } Let us begin with the denominator, noting that $$ \int_{\T}w\phi'(w)(1-\overline{w}^2)dw=i\int_{\T}w(1+\varepsilon f'(w))(1-\overline{w}^2)dw=2\pi+i\varepsilon\int_{\T}wf'(w)dw=2\pi-i\varepsilon\int_{\T}f(w)dw, $$ by using the Residue Theorem. Then, if $|\varepsilon|<\varepsilon_0$ and $f\in B_{X_\alpha}(0,\sigma)$, the denominator is not vanishing. Moreover, the denominator is clearly $C^1$ in $\varepsilon$ and $f$. 
We continue with the numerator denoting \begin{align*} J(\varepsilon,f)=&\int_{\T}\overline{ I_E(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw\\ =&\int_{\T}\overline{ \hat{I}_E(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw+\int_{\T}\overline{ \tilde{I}_E(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw\\ =:&J_1(\varepsilon,f)(w)+J_2(\varepsilon,f)(w), \end{align*} using the decomposition of $I_E$ done in \eqref{exp-I}. Note that $\hat{I}_E$ is the part of $I_E$ coming from the vortex pairs analyzed in \cite{HmidiMateu-pairs}. In that work $J_1$ is analyzed showing that it is $C^1$ in $\varepsilon$ and $f$. Note that the spaces used in \cite{HmidiMateu-pairs} are also \eqref{X}--\eqref{Y} and the condition $\varepsilon_0<\frac{l}{4}$ is needed in their computations, see Remark \ref{rem-taylor}. Then, it remains to study the regularity of $J_2(\varepsilon,f)$. {We should analyze $\tilde{I}_{E}$, i.e., \begin{align*} \tilde{I}_{E}(\varepsilon,f)(w)=&-\frac{1}{2\pi^2 \varepsilon}{\int_{\T}\ln\left|H\left(\frac{\pi\varepsilon(\phi(w)-\phi(\xi))}{l}\right)\right| \phi'(\xi)\, d\xi}\\ &-\frac{1}{2\pi^2 \varepsilon}{\int_{\T}\ln\left|H\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))-a+ih)}{l}\right)\right| \phi'(\xi)\, d\xi}\\ =&-\frac{i}{2\pi^2 \varepsilon}{\int_{\T}\ln\left|H\left(\frac{\pi\varepsilon(\phi(w)-\phi(\xi))}{l}\right)\right| \, d\xi}\\ &-\frac{i}{2\pi^2}{\int_{\T}\ln\left|H\left(\frac{\pi\varepsilon(\phi(w)-\phi(\xi))}{l}\right)\right| f'(\xi)\, d\xi}\\ &-\frac{i}{2\pi^2 \varepsilon}{\int_{\T}\ln\left|H\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))-a+ih)}{l}\right)\right| \, d\xi}\\ &-\frac{i}{2\pi^2 }{\int_{\T}\ln\left|H\left(\frac{\pi(\varepsilon(\phi(w)+\phi(\xi))-a+ih)}{l}\right)\right|f'(\xi)\, d\xi}\\ =:&-(I_1(\varepsilon,f)+I_2(\varepsilon,f)+I_3(\varepsilon,f)+I_4(\varepsilon,f)). \end{align*} Note that $I_2$ and $I_4$ are smooth in both variables, due to that $H(z)=\frac{\sin(z)}{z}$, see \eqref{H}. Then they are $C^1$ in $\varepsilon$ and $\phi$. Let us analyze the others terms. Using \eqref{H} and the expansion of the logarithm, $$ \ln|1+f(z)|=\textnormal{Re} \sum_1^{\infty} \frac{(-1)^{1+n}f(z)^n}{n}, $$ one has $$ \ln|H(\varepsilon z)|=\varepsilon^2 G_1(\varepsilon,z), $$ with $G_1$ smooth in both variables. This implies that $I_1$ is $C^1$ in $\varepsilon$ and $f$. On the other way, $$ \ln|H(\varepsilon z+z')|=\varepsilon G_2(\varepsilon,z,z')+G_3(z'), $$ with $G_2$ and $G_3$ smooth. Then, we find $$ I_3(\varepsilon,f)(w)=\frac{i}{2\pi^2}\int_{\T} G_2\left(\varepsilon, \frac{\pi(\phi(w)+\phi(\xi))}{l},\frac{-a+ih}{l}\right)\, d\xi, $$ which is smooth in $f$ and $\varepsilon$. } Hence, we achieve that $V$ is $C^1$ in both variables. \noindent {\it $\bullet$ Third step: Regularity of $\tilde{F}_E$. } Decomposing $I$ again as in \eqref{exp-I}, we get that $$ \tilde{F}(\varepsilon,f)=\textnormal{Re}\left[\left\{\overline{\hat{I}_E(\varepsilon,f)(w)}-\overline{\tilde{I}_E(\varepsilon,f)(w)}-{V}(\varepsilon,f)\right\}{w}{\phi'(w)}\right]. $$ Again, the part coming from $\hat{I}$ is analyzed in \cite{HmidiMateu-pairs}, where it is shown that is $C^1$ in both variables. From the second step, we got that $\tilde{I}$ and $V$ are also smooth completing the proof. \end{proof} \subsection{Desingularization of the K\'arm\'an Point Vortex Street} In this section, we provide the proof of the existence of K\'arm\'an Vortex Patch Street via a desingularization of the point vortex model given by the K\'arm\'an Point Vortex Street. 
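For the reader's convenience, we recall the statement of the Implicit Function Theorem in Banach spaces in the form we shall use (see any standard reference on nonlinear functional analysis). Assume that $\tilde{F}:(-\varepsilon_0,\varepsilon_0)\times B_{X_{\alpha}}(0,\sigma)\rightarrow Y_{\alpha}$ is $C^1$, that $\tilde{F}(0,0)=0$ and that the linear operator $\partial_f \tilde{F}(0,0):X_{\alpha}\rightarrow Y_{\alpha}$ is an isomorphism. Then there exist $\varepsilon_1\in(0,\varepsilon_0)$ and a $C^1$ curve \begin{equation*} \varepsilon\in(-\varepsilon_1,\varepsilon_1)\longmapsto f_{\varepsilon}\in B_{X_{\alpha}}(0,\sigma), \qquad f_0=0, \qquad \tilde{F}(\varepsilon,f_{\varepsilon})=0, \end{equation*} and $f_{\varepsilon}$ is the unique solution of $\tilde{F}(\varepsilon,\cdot)=0$ in a neighborhood of $0$ in $X_{\alpha}$.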
The idea is to implement the Implicit Function Theorem to the functional $\tilde{F}_E$ defined in Proposition \ref{Prop-regEuler}. \begin{theo}\label{Th-euler} Let $h, l\in\R$, with $h\neq 0$ and $l>0$, and $a=0$ or $a=\frac{l}{2}$. Then, there exist $D^{\varepsilon}$ such that \begin{equation}\label{omega_epsilon2} \omega_0(x)=\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf 1}_{\varepsilon D^{\varepsilon}+kl}(x)-\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf 1}_{-\varepsilon D^{\varepsilon}+a-ih+kl}(x), \end{equation} defines a horizontal translating solution of the Euler equations, with constant speed, for any $\varepsilon\in(0,\varepsilon_0)$ and small enough $\varepsilon_0>0$. Moreover, $D^{\varepsilon}$ is at least $C^1$. \end{theo} \begin{proof} In order to look for solutions in the form \eqref{omega_epsilon2}, we need to study the functional $F_E$ defined in \eqref{F-euler}, where $\phi$ is given by \eqref{phi-euler}. Moreover, $V$ is a function of $(\varepsilon,f)$ described by \eqref{function-V}. In Proposition \ref{Prop-regEuler}, we have that $\tilde{F}_E:\R\times B_{X_\alpha}(0,\sigma)\rightarrow Y_{\alpha}$, with $\tilde{F}_E(\varepsilon,f)=F_E(\varepsilon,f,V(\varepsilon,f))$, is well--defined and $C^1$, for $\varepsilon_0\in(0,\textnormal{min}\{1,\frac{l}{4}\})$ and $\sigma<1$. Then, we wish to apply the Implicit Function Theorem to $\tilde{F}_E$. By Proposition \ref{Prop-trivialsol} and Proposition \ref{Prop-trivialsol2}, we have that $\tilde{F}_E(0,0)(w)=0$, for any $w\in\T$. Let us show that $\partial_f \tilde{F}_E(0,0)$ is an isomorphism: \begin{align*} \partial_f \tilde{F}_E(0,0)h(w)=\lim_{\varepsilon\rightarrow 0}\textnormal{Re}\Big[&\left\{\partial_f \overline{I_E(0,f)(w)}h(w)-\partial_f V(0, 0)h(w)\right\}iw\\ &+\left\{\overline{I_E(\varepsilon,0)(w)}-V_0\right\}iw\varepsilon h'(w)\Big], \end{align*} By Proposition \ref{Prop-trivialsol2}, we obtain $\partial_f V(0, f)h(w)\equiv 0$. Note also that $$ \lim_{\varepsilon\rightarrow 0} I_E(\varepsilon,0)(w)=\lim_{\varepsilon\rightarrow 0}\left\{\frac{w}{2\pi\varepsilon}+V_0\right\}.$$ Moreover, by expression \eqref{exp-I}, we have \begin{align*} \partial_f {I_E(0,0)(w)}h(w)=&\frac{i}{4\pi^2}\overline{\int_{\T}\frac{\overline{h(w)}-\overline{h(\xi)}}{w-\xi}\, d\xi}-\frac{i}{4\pi^2}\overline{\int_{\T}\frac{(h(w)-h(\xi))(\overline{w}-\overline{\xi})}{(w-\xi)^2}\, d\xi}\\ &+\frac{i}{4\pi^2}\overline{\int_{\T}\frac{\overline{w}-\overline{\xi}}{w-\xi}h'(\xi)\, d\xi}+\partial_{f}\tilde{I}_{E}(0, 0)h(w)\\ =&\frac{i}{4\pi^2}\overline{\int_{\T}\frac{\overline{h(w)}-\overline{h(\xi)}}{w-\xi}\, d\xi}-\frac{i}{4\pi^2}\overline{\int_{\T}\frac{(h(w)-h(\xi))(\overline{w}-\overline{\xi})}{(w-\xi)^2}\, d\xi}\\ &+\frac{i}{4\pi^2}\overline{\int_{\T}\frac{\overline{w}-\overline{\xi}}{w-\xi}h'(\xi)\, d\xi}. 
\end{align*} Then, we obtain \begin{align*} \partial_f \tilde{F}_E(0,0)h(w)=&\textnormal{Re}\left[\left\{\frac{1}{4\pi^2}\overline{\int_{\T}\frac{\overline{h(w)}-\overline{h(\xi)}}{w-\xi}\, d\xi}-\frac{1}{4\pi^2}\overline{\int_{\T}\frac{(h(w)-h(\xi))(\overline{w}-\overline{\xi})}{(w-\xi)^2}\, d\xi}\right.\right.\\ &\left.\left.+\frac{1}{4\pi^2}\overline{\int_{\T}\frac{\overline{w}-\overline{\xi}}{w-\xi}h'(\xi)\, d\xi}\right\}w+\frac{i}{2\pi}h'(w)\right]\\ =&\textnormal{Im}\left[\left\{\frac{i}{4\pi^2 }\overline{\int_{\T}\frac{\overline{h(w)}-\overline{h(\xi)}}{w-\xi}\, d\xi}-\frac{i}{4\pi^2 }\overline{\int_{\T}\frac{(h(w)-h(\xi))(\overline{w}-\overline{\xi})}{(w-\xi)^2}\, d\xi}\right.\right.\\ &\left.\left.+\frac{i}{4\pi^2 }\overline{\int_{\T}\frac{\overline{w}-\overline{\xi}}{w-\xi}h'(\xi)\, d\xi}\right\}w-\frac{1}{2\pi}h'(w)\right]. \end{align*} By the Residue Theorem, we have \begin{align*} \int_{\T}\frac{\overline{w}-\overline{\xi}}{w-\xi}h'(\xi)\, d\xi=&0,\\ \int_{\T}\frac{\overline{h(w)}-\overline{h(\xi)}}{w-\xi}\, d\xi-\int_{\T}\frac{(h(w)-h(\xi))(\overline{w}-\overline{\xi})}{(w-\xi)^2}\, d\xi=&2i\int_{\T}\frac{\textnormal{Im}\left[\overline{(h(w)-h(\xi))}({w}-{\xi})\right]}{(w-\xi)^2}\, d\xi=0. \end{align*} Finally, we find \begin{align}\label{linop} \partial_f \tilde{F}_E(0,0)h(w)=-\frac{1}{2\pi}\textnormal{Im}\left[h'(w)\right], \end{align} which is an isomorpshim from $X$ to $Y$. \end{proof} \begin{rem} Analyzing \cite{HmidiMateu-pairs}, we realize that the above linearized operator \eqref{linop} agrees with the linearized operator in \cite{HmidiMateu-pairs} for the vortex pairs. This tells us that the only real contribution in the linearized operator is due to two vortex patches: ${\textbf 1}_{\varepsilon D_1}$ and ${\textbf 1}_{-\varepsilon D_1+a-ih}$. \end{rem} \subsection{Quasi--geostrophic shallow water equation} In this section, we investigate the case of the quasi-geostrophic shallow water (QGSW) equations. Let $q$ be the potential vorticity, then the QGSW equations are given by \begin{eqnarray*} \left\{\begin{array}{ll} q_t+(v\cdot \nabla) q=0, &\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\ v=\nabla^\perp\psi,&\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\ \psi=(\Delta-\lambda^2)^{-1}q,&\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\ q(t=0,x)=q_0(x),& \text{ with $x\in\mathbb{R}^2$}, \end{array}\right. \end{eqnarray*} with $\lambda\neq 0$. The same results to the Euler equations are obtained in this case. That is due to the similarity of the kernel in both cases, in particular, they have the same behavior close to 0. In Section \ref{Sec-NVP} we analyzed the case of the $N$-vortex problem, see Proposition \ref{Prop-PV-QGSW}. Here, we want to desingularize \eqref{PointVortex} in order to obtain periodic in space solutions that translate in the QGSW equation. The stream function $\psi$ can be recovered in terms of $q$ in the following way $$ \psi(t,x)=-\frac{1}{2\pi}\int_{\R^2}K_0(|\lambda||x-y|)q(t,y)\, dA(y). $$ The function $K_0$ is the Modified Bessel function of order zero, whose definition and some of their properties can be found in Appendix \ref{Ap-specialfunctions}. It is of great interest the expansion of $K_0$ given in \eqref{K0-expansion} as $$ K_0(z)=-\ln\left(\frac{z}{2}\right)I_0(z)+\sum_{k=0}^\infty\frac{\left(\frac{z}{2}\right)^{2k}}{(k!)^2}\varphi(k+1), $$ where $$ \varphi(1)=-{\gamma} \quad \text{ and }\quad \varphi(k+1)=\sum_{m=1}^k \frac{1}{m}-{\gamma},\, \, k\in\N^*. 
$$ The constant ${\gamma}$ is the Euler's constant and the function $I_0$ is defined in Appendix \ref{Ap-specialfunctions}, but we recall it as $$ I_0(z)=\sum_{k=0}^\infty\frac{\left(\frac{z}{2}\right)^{2k}}{k!\Gamma(k+1)}. $$ Via this expansion, one notice that \begin{equation}\label{expansionK_02} K_0(z)=-\ln(z)+g_0(z)+g_1, \end{equation} where \begin{align*} g_0(z)=&-z^2\left(\ln(2)-\ln(z)\right)\sum_{k=1}^{\infty}\frac{\left(\frac{z}{2}\right)^{2k-2}}{k!\Gamma(k+1)}+z^2\sum_{k=1}^{\infty}\frac{\left(\frac{z}{2}\right)^{2k-2}}{(k!)^2}\varphi(k+1),\\ g_1=&-{\gamma}-\ln(2). \end{align*} Note that $g_0$ is smooth and $g_0(z)=O(z^2\ln(z))$ close to $0$. Consider a K\'arm\'an Vortex Patch Street in the QGSW equations in the sense \begin{align*} q_0(x)=\frac{1}{ \pi}\sum_{k\in\Z}{\bf{1}}_{D_1+kl}(x)-\frac{1}{ \pi}\sum_{k\in\Z}{\bf{1}}_{D_2+kl}(x), \end{align*} where $D_1$ and $D_2$ are simply--connected bounded domains such that $|D_1|=|D_2|$, and $l>0$. Motivated by Euler equations, assume $D_2=-D_1+a-ih$, having the following distribution \begin{align}\label{FAPointVortex-QGSW} q_0(x)=\frac{1}{ \pi}\sum_{k\in\Z}{\bf{1}}_{D+kl}(x)-\frac{1}{ \pi}\sum_{k\in\Z}{\bf{1}}_{-D+a+kl-ih}(x), \end{align} where we are rewriting $D_1$ by $D$. The velocity field is given by \begin{align}\label{QGSW-velocity} v_0(x)=&\frac{\lambda i}{2\pi^2}\sum_{k\in\Z}\int_{D} K_1(\lambda|x-y-kl|)\frac{x-y-kl}{|x-y-kl|}\, dA(y)\nonumber\\ &-\frac{\lambda i}{2\pi^2}\sum_{k\in\Z}\int_{D} K_1(\lambda|x+y-a-kl+ih|)\frac{x+y-a-kl+ih}{|x+y-a-kl+ih|}\, dA(y)\nonumber\\ =&\frac{1}{2\pi^2}\sum_{k\in\Z}\int_{\partial D} K_0(\lambda|x-y-kl|)dy+\frac{1}{2\pi^2}\sum_{k\in\Z}\int_{\partial D} K_0(\lambda|x+y-a-kl+ih|)dy. \end{align} We scale the expression \eqref{FAPointVortex-QGSW} in the following way $$ q_{0,\varepsilon}(x)=\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf{1}}_{\varepsilon D+kl}(x)-\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf{1}}_{-\varepsilon D+a+kl-ih}(x). $$ As in the Euler equations, if $|D|=|\D|$, with $a\in\R$ and $h\neq 0$, we obtain the point model: $$ q_{0,0}(x)=\sum_{k\in\Z}\delta_{(kl,0)}(x)-\sum_{k\in\Z}\delta_{(a+kl,-h)}(x), $$ which has been studied in Proposition \ref{Prop-PV-QGSW}. Considering now a translating motion in the form $q(t,x)=q_{0}(x-Vt)$, with $V\in\C$, then we arrive at $$ \left(v_0(x)-V\right)\cdot \nabla q_0(x)=0,\quad x\in\R^2. $$ In the case of $q_{0,\varepsilon}$, we need to solve the above equation understood in the weak sense, i.e., \begin{equation}\label{eq1-QGSW} \left(v_{0,\varepsilon}(x)-V\right)\cdot \vec{n}(x)=0,\quad x\in\partial\, (\varepsilon D+kl)\cup \partial\, (-\varepsilon D+a+kl-ih), \end{equation} which, as for the Euler equations, reduces to \begin{equation}\label{eq2-QGSW} \textnormal{Re}\left[\overline{\left(v_{0,\varepsilon}(x)-V\right)} \vec{n}(x)\right]=0,\quad x\in\partial\, (\varepsilon D), \end{equation} written in the complex sense. With the use of the conformal map \eqref{phi-euler}: \begin{equation*} \phi(w)=i (w+\varepsilon f(w)), \quad f(w)=\sum_{n\geq 1}a_nw^{-n}, \quad a_n\in\R, w\in\T, \end{equation*} it agrees with \begin{equation}\label{F-QGSW} F_{QGSW}(\varepsilon,f,V)(w):=\textnormal{Re}\left[\left\{\overline{I_{QGSW}(\varepsilon,f)(w)}-\overline{V}\right\}{w}{\phi'(w)}\right]=0, \quad w\in\T, \end{equation} where \begin{align}\label{Iepsilon-QGSW} I_{QGSW}(\varepsilon,f)(w)=v_{0,\varepsilon}(\varepsilon \phi(w)). 
\end{align} Then, \begin{align*} I_{QGSW}(\varepsilon,f)(w):=&\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} K_0(\lambda|\varepsilon(\phi(w)-\phi(\xi))-kl|)\phi'(\xi)\, d\xi\\ &+\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} K_0(\lambda|\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih|)\phi'(\xi)\, d\xi. \end{align*} Via the expansion of $K_0$, given in \eqref{expansionK_02}, one has \begin{align}\label{decompI} I_{QGSW}(\varepsilon,f)(w)=&-\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} \ln\left|\lambda(\varepsilon(\phi(w)-\phi(\xi))-kl)\right|\phi'(\xi)\, d\xi\nonumber\\ &-\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} \ln\left|(\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih)\right|\phi'(\xi)\, d\xi\nonumber\\ &+\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} g_0\left(|\varepsilon(\phi(w)-\phi(\xi))-kl|\right)\phi'(\xi)\, d\xi\nonumber\\ &+\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} g_0\left(\lambda|\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih|\right)\phi'(\xi)\, d\xi\nonumber\\ =&I_{E}(\varepsilon,f)(w)+\tilde{I}_{QGSW}(\varepsilon,f)(w), \end{align} where $I_E$ is the corresponding function associated to Euler equations, see \eqref{Iepsilon}. The analogue to Propositions \ref{Prop-trivialsol}, \ref{Prop-trivialsol2} and \ref{Prop-regEuler}, and Theorem \ref{Th-euler} are obtained, whose proofs are very similar and so here we omit many details. Remark that $a=0$ or $a=\frac{l}{2}$. \begin{pro}\label{Prop-trivialsolQGSW} For any $h\neq 0$ and $l>0$, the following equation is verified $$ F_{QGSW}(0, 0, V_0)(w)=0, \quad w\in\T, $$ where $V_0$ is given by \eqref{velocity_PV-QGSW}: $$ V_0=\frac{\lambda i}{2\pi}\sum_{ k\in\Z}K_1(\lambda|a+kl-ih|)\frac{a+kl-ih}{|a+kl-ih|}. $$ \end{pro} {{\begin{proof} Using definition \eqref{F-QGSW}, we need to check that \begin{align*} \lim_{\varepsilon\rightarrow 0}\textnormal{Re}\Big[\Big\{&-\frac{i}{2\pi^2\varepsilon} \sum_{k\in\Z} \overline{\int_{\T}K_0(\lambda|\varepsilon i(w-\xi)-kl|)\, d\xi}\\ &-\frac{i}{2\pi^2\varepsilon} \sum_{k\in\Z} \overline{\int_{\T}K_0(\lambda|\varepsilon i(w+\xi)-a-kl+ih|)\, d\xi}-\overline{V_0}\Big\}w i\Big]=0. \end{align*} Via the Stokes Theorem, the expansion of $K_0$ given in \eqref{expansionK_02} and noting that \begin{equation}\label{property-nabla} 2\partial_{\overline{x}} G(|ix+x'|)=-iG'(|ix+x'|)\frac{ix+x'}{|ix+x'|^2}, \end{equation} for $x,x'\in\C$ and some function $G:\R\rightarrow\R$, it reduces to \begin{align*} \lim_{\varepsilon\rightarrow 0}& \textnormal{Re}\Big[\Big\{\frac{\lambda}{2\pi^2\varepsilon} \int_{\D}\frac{\, dA(y)}{w-y}-\frac{i}{2\pi^2\varepsilon}\overline{\int_{\T}g_0(\varepsilon\lambda|w-\xi|)\, d\xi}-\frac{i}{2\pi^2\varepsilon}\sum_{0\neq k\in\Z}\overline{\int_{\T}K_0(\lambda|\varepsilon i(w-\xi)-kl|)\, d\xi}\\ &-\frac{\lambda\varepsilon}{2\pi^2\varepsilon} \sum_{k\in\Z} \overline{i\int_{\D}K_1(\lambda|\varepsilon i(w+y)-a-kl+ih|)\frac{\varepsilon i(w-y)-a-kl+ih}{|\varepsilon i(w-y)-a-kl+ih|} \, dA(y)}-\overline{V_0}\Big\}w i\Big]. \end{align*} Note that $$ \lim_{\varepsilon\rightarrow 0} \frac{g_0(\varepsilon\lambda|w-\xi|)}{\varepsilon}=0, $$ and \begin{align*} \lim_{\varepsilon\rightarrow 0}\sum_{0\neq k\in\Z} \frac{i}{\varepsilon}\int_{\T}K_0(\lambda|\varepsilon i(x-\xi)-kl|)\, d\xi=&\lim_{\varepsilon\rightarrow 0}i\lambda\sum_{0\neq k\in\Z}\int_{\D} K_1(\lambda|\varepsilon i(w-\xi)-kl|)\frac{\varepsilon i(w-\xi)-kl}{|\varepsilon i(w-\xi)-kl|^2}\\ =&i\lambda\sum_{0\neq k\in\Z}\int_{\D} K_1(\lambda|kl|)\frac{kl}{|kl|^2}=0, \end{align*} making use of the Dominated Convergence Theorem. 
Secondly, via the definition of $V_0$ in \eqref{velocity_PV-QGSW}, one has that $$ -\frac{\lambda}{2\pi^2} \sum_{k\in\Z} \overline{i\int_{\D}K_1(\lambda|-a-kl+ih|)\frac{-a-kl+ih}{|-a-kl+ih|} \, dA(y)}-\overline{V_0}=0. $$ The left term is also zero using the computations in Proposition \ref{Prop-trivialsol}. \end{proof}}} We avoid the proof of the following result, due to the similarity with Proposition \ref{Prop-trivialsol2}. \begin{pro}\label{Prop-trivialsol2QGSW} The function $V:(-\varepsilon_0,\varepsilon_0)\times B_{X_{\alpha}}(0,\sigma)\longrightarrow \R$, given by \begin{align}\label{function-V-QGSW} V(\varepsilon,f)=\frac{\int_{\T}\overline{ I_{QGSW}(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw}{\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw}, \end{align} fulfills $V(0, f)=V_0$, where $V_0$ is defined in \eqref{velocity_PV-QGSW}. The parameters satisfy: $\varepsilon_0\in(0,\textnormal{min}\{1,\frac{l}{4}\})$, $\sigma<1$, $\alpha\in(0,1)$, and $X$ is defined in \eqref{X}. \end{pro} The next result concerns the well-definition of $F_{QGSW}$ in the spaces defined in \eqref{X}--\eqref{Y}. \begin{pro}\label{Prop-regQGSW} If $V$ sets \eqref{function-V-QGSW}, then $$\tilde{F}_{QGSW}:(-\varepsilon_0,\varepsilon_0)\times B_{X_\alpha}(0,\sigma)\rightarrow Y_\alpha,$$ with $\tilde{F}_{QGSW}(\varepsilon,f)=F_{QGSW}(\varepsilon,f,V(\varepsilon,f))$, is well--defined and $C^1$. The parameters satisfy $\alpha\in(0,1)$, $\varepsilon_0\in(0,\textnormal{min}\{1,\frac{l}{4}\})$ and $\sigma<1$. \end{pro} {{\begin{proof} Note the similarity of this proposition to Proposition \ref{Prop-regEuler}. Following its ideas, in order to check the symmetry of $F_{QGSW}$, it is enough to prove that $$ I_{QGSW}(\varepsilon,f)(\overline{w})=\overline{I_{QGSW}(\varepsilon,f)(w)}. $$ We take advantage of \eqref{property-integral}. Via its definition, note that \begin{align*} \overline{I_{QGSW}(\varepsilon,f)({w})}=&-\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} K_0(\lambda|\varepsilon(\phi(w)-\phi(\overline{\xi}))-kl|)\overline{\phi'(\overline{\xi})}\, d\xi\\ &-\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} K_0(\lambda|\varepsilon(\phi(w)+\phi(\overline{\xi}))-a-kl+ih|)\overline{\phi'(\overline{\xi})}\, d\xi\\ =&\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} K_0(\lambda|\varepsilon(\phi(\overline{w})-\phi({\xi}))+kl|){\phi'({\xi})}\, d\xi\\ &+\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} K_0(\lambda|\varepsilon(\phi(\overline{w})+\phi({\xi}))-a+kl+ih|){\phi'({\xi})}\, d\xi\\ =&I_{QGSW}(\varepsilon,f)(\overline{w}). \end{align*} Note that using the decomposition of $I_{QGSW}$ given in \eqref{decompI}, the regularity problem reduces to the same one for the Euler equations, done in Proposition \ref{Prop-regEuler}. \end{proof}}} Finally, we state the result concerning the desingularization of the K\'arm\'an Vortex Street. \begin{theo}\label{Th-QGSW} Let $h, l\in\R$, with $h\neq 0$ and $l>0$, and $a=0$ or $a=\frac{l}{2}$. Then, there exist $D^{\varepsilon}$ such that \begin{equation}\label{omega_epsilon2-qgsw} q_{0,\varepsilon}(x)=\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf 1}_{\varepsilon D^{\varepsilon}+kl}(x)-\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf 1}_{-\varepsilon D^{\varepsilon}+a-ih+kl}(x), \end{equation} defines a horizontal translating solution of the quasi--geostrophic shallow water equations, with constant velocity speed, for any $\varepsilon\in(0,\varepsilon_0)$ and small enough $\varepsilon_0>0$. Moreover, $D^{\varepsilon}$ is at least $C^1$. 
\end{theo} {{\begin{proof} By Proposition \ref{Prop-regQGSW}, we have that $\tilde{F}_{QGSW}:\R\times B_{X_\alpha}(0,\sigma)\rightarrow Y_\alpha$, with $\tilde{F}_{QGSW}(\varepsilon,f)=F_{QGSW}(\varepsilon,f,V(\varepsilon,f))$, is well--defined and $C^1$, for $\varepsilon_0\in(0,\textnormal{min}\{1,\frac{l}{4}\})$ and $\sigma<1$. Moreover, Proposition \ref{Prop-trivialsolQGSW} and Proposition \ref{Prop-trivialsol2QGSW} give us that $\tilde{F}_{QGSW}(0,0)(w)=0$, for any $w\in\T$. In order to apply the Implicit Function Theorem, let us check that $\partial_f \tilde{F}_{QGSW}(0,0)$ is an isomorphism. First, using \eqref{decompI}, one achieves \begin{align*} I_{QGSW}(\varepsilon,f)(w)=&I_{E}(\varepsilon,f)(w)+\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} g_0\left(|\varepsilon(\phi(w)-\phi(\xi))-kl|\right)\phi'(\xi)\, d\xi\nonumber\\ &+\frac{1}{2\pi^2 \varepsilon}\sum_{k\in\Z} \int_{\T} g_0\left(\lambda|\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih|\right)\phi'(\xi)\, d\xi\nonumber, \end{align*} where $g_0$ is a smooth function such that $g_0(z)=O(z^2\ln(z))$ for small $z$, see \eqref{expansionK_02}. Note that $$ \lim_{\varepsilon\rightarrow 0}\varepsilon I_{QGSW}(\varepsilon,f)(w)=\lim_{\varepsilon\rightarrow 0}\varepsilon I_{E}(\varepsilon,f)(w) $$ and $$ \lim_{\varepsilon\rightarrow 0}\partial_f \overline{I_{QGSW}(\varepsilon,0)}=\lim_{\varepsilon\rightarrow 0}\partial_f \overline{I_{E}(\varepsilon,0)}. $$ Thus, we have $$ \partial_f \tilde{F}_{QGSW}(\varepsilon, 0)h(w)=\partial_f \tilde{F}_{E}(\varepsilon,0)h(w)=-\frac{1}{2\pi}\textnormal{Im}\left[h'(w)\right], $$ which is an isomorphism from $X$ to $Y$. \end{proof}}} \section{K\'arm\'an Vortex Street in general models}\label{Sec3} K\'arm\'an Vortex Patch Street structures are found both in the Euler equations and in the QGSW equations. The important fact in both models is that the Green functions associated to the elliptic problem of the stream function have the same behavior close to 0, having then the same linearized operator. We can extend it to other models, where the generalized surface quasi--geostrophic equations are a particular case, see Theorem \ref{Th-gSQG} for more details. Here, let us work with the general model: \begin{eqnarray} \label{Generaleq} \left\{\begin{array}{ll} q_t+(v\cdot \nabla) q=0, &\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\ v=\nabla^\perp \psi,&\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\ \psi=G*q,&\text{ in $[0,+\infty)\times\mathbb{R}^2$}, \\ q(t=0,x)=q_0(x),& \text{ with $x\in\mathbb{R}^2$}. \end{array}\right. \end{eqnarray} \subsection{Scaling the equation} The aim of this section is to look for solutions of the type \begin{align*} q_0(x)=\frac{1}{\pi}\sum_{k\in\Z}{\bf{1}}_{D_1+kl}(x)-\frac{1}{\pi}\sum_{k\in\Z}{\bf{1}}_{D_2+kl}(x). \end{align*} The domains $D_1$ and $D_2$ are simply--connected bounded domains such that $|D_1|=|D_2|$, and $l>0$. Consider $D_2=-D_1+a-ih$, having the following distribution \begin{align}\label{FAPointVortex-gen} q_0(x)=\frac{1}{\pi}\sum_{k\in\Z}{\bf{1}}_{D+kl}(x)-\frac{1}{\pi}\sum_{k\in\Z}{\bf{1}}_{-D+a+kl-ih}(x), \end{align} where we are rewriting $D_1$ by $D$. 
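Although the computations in this section are carried out for a general kernel $G$, it may help to keep in mind the examples behind \eqref{Generaleq}; we list them only for orientation and without tracking normalising constants. The Euler equations correspond to a kernel proportional to the logarithmic one, $\ln|x|$; the QGSW equations to a kernel proportional to the Bessel kernel $K_0(\lambda|x|)$ used in the previous sections; and the generalized surface quasi--geostrophic equations to $G(x)=\frac{C_{\beta}}{2\pi}\frac{1}{|x|^{\beta}}$ with $\beta\in(0,1)$, which is the model case for the hypotheses imposed below (see Theorem \ref{Th-gSQG}).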
The velocity field is given by \begin{align}\label{gen-velocity} \pi v_0(x)=&2i\partial_{\overline{x}}\sum_{k\in\Z}\int_{D} G(|x-y-kl|)\, dA(y)-2i\partial_{\overline{x}}\sum_{k\in\Z}\int_{D} G(|x+y-a-kl+ih|)\, dA(y)\nonumber\\ =&-2i\sum_{k\in\Z}\int_{D}\partial_{\overline{y}}G(|x-y-kl|)\, dA(y)-2i\sum_{k\in\Z}\int_{D} \partial_{\overline{y}} G(|x+y-a-kl+ih|)\, dA(y)\nonumber\\ =&-\sum_{k\in\Z}\int_{\partial D} G(|x-y-kl|)dy-\sum_{k\in\Z}\int_{\partial D} G(|x+y-a-kl+ih|)dy, \end{align} where $\partial_{\overline{x}}$ is defined in \eqref{gradient-complex} and the Stokes Theorem \eqref{Stokes} is used. In order to introduce the point model configuration, let us scale the equation in the following way. For any $\varepsilon>0$, define \begin{equation}\label{omega_epsilon-gen} q_{0,\varepsilon}(x)=\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf{1}}_{\varepsilon D+kl}(x)-\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf{1}}_{-\varepsilon D+a+kl-ih}(x), \end{equation} for $l>0$, $h\neq 0$ and $a\in\R$. If $|D|=|\D|$, then we arrive at the point vortex street \eqref{PointVortex-gen} in the limit when $\varepsilon\rightarrow 0$, i.e., \begin{equation}\label{PV-reg} q_{0,0}(x)=\sum_{k\in\Z}\delta_{(kl,0)}(x)-\sum_{k\in\Z}\delta_{(a+kl,-h)}(x). \end{equation} This configuration of points is studied in Proposition \ref{Gen-point}, from the dynamical system point of view, showing that \eqref{PV-reg} translates. From now on, take $a=0$ or $a=\frac{l}{2}$ having a horizontal translation in the point model. The associated velocity field to \eqref{omega_epsilon-gen} is given by $$ v_{0,\varepsilon}(\varepsilon x)=-\frac{1}{\pi \varepsilon}\sum_{k\in\Z}\int_{\partial D} G(|\varepsilon(x-y)-kl|)dy-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}\int_{\partial D} G(|\varepsilon(x+y)-a-kl+ih|)dy. $$ We introduce now the conformal map. Consider $\phi:\T\rightarrow \partial D$ such that \begin{equation}\label{phi-gen} \phi(w)=i \left(w+\frac{\varepsilon}{G(\varepsilon)} f(w)\right), \quad f(w)=\sum_{n\geq 1}a_nw^{-n}, \quad a_n\in\R, w\in\T. \end{equation} Hence \begin{align*} v_{0,\varepsilon}(\varepsilon\phi(w))=&-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}\int_{\T} G(|\varepsilon(\phi(w)-\phi(\xi))-kl|)\phi'(\xi)\, d\xi\\ &-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}\int_{\T} G(|\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih|)\phi'(\xi)\, d\xi. \end{align*} \begin{rem} The constant $G(\varepsilon)$ in the definition of the conformal map \eqref{phi-euler} comes from the singularity of the kernel in the general case. For the logarithmic singularities, we do not need to add this constant because there we use the structure of the logarithm. When having more singular kernels, as in this case, we need to introduce $G(\varepsilon)$. \end{rem} Assuming that we look for translating solutions, i.e., $q(t,x)=q_0(x- Vt)$, we arrive at the equation \begin{equation}\label{F-gen} F(\varepsilon,f,V)(w):=\textnormal{Re}\left[\left\{\overline{I(\varepsilon,f)(w)}- \overline{V}\right\}{w}{\phi'(w)}\right]=0, \quad w\in\T, \end{equation} where $$ I(\varepsilon,f)(w):=v_{0,\varepsilon}(\varepsilon\phi(w)). $$ The next step is to check that if $\varepsilon=0$, $D=\D$ and $V=V_0$ (referring to the K\'arm\'an Point Vortex Street), equation \eqref{F-gen} is verified. 
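Before doing so, let us quantify, only as orientation, the role of the factor $\frac{\varepsilon}{G(\varepsilon)}$ in \eqref{phi-gen} on a model case: if $G(r)$ behaves like $r^{-\beta}$ near the origin, with $\beta\in(0,1)$, then
$$
\frac{\varepsilon}{G(\varepsilon)}\approx \varepsilon^{1+\beta}\longrightarrow 0,\quad \textnormal{as } \varepsilon\rightarrow 0,
$$
so the perturbation $f$ enters the parametrization at a smaller scale than in the Euler case, where the corresponding factor is $\varepsilon$; this compensates the stronger singularity of the kernel.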
\begin{pro}\label{Prop-trivialsol-gen} Let $G$ satisfies \begin{enumerate} \item[(H1)] $G$ is radial such that $G(x)=\tilde{G}(|x|)$, \item[(H2)] there exists $R>0$ and $\beta_1\in(0,1]$ such that $|\tilde{G}'(r)|\leq \frac{C}{r^{1+\beta_1}}$, for $r\geq R$, \item[(H3)]\label{H3} there exists $\beta_2\in(0,1)$ such that $G(z)=O\left(\frac{1}{z^{\beta_2}}\right)$ and $\log|z|=o\left(G(z)\right),$ as $z\rightarrow 0$. \end{enumerate} For any $h\neq 0$ and $l>0$, the following equation is verified $$ F(0, 0, V_0)(w)=0, \quad w\in\T, $$ where $V_0$ is given by \eqref{V_0-gen}: $$ V_0=i\sum_{k\in\Z} G'(|a+kl-ih|)\frac{a+kl-ih}{|a+kl-ih|}. $$ \end{pro} \begin{rem} From Hypothesis (H3) we are assuming that the kernel is more singular than the logarithmic kernel analyzed in the previous sections, but less singular than the kernel of the surface quasi--geostrophic equation. A typical kernel satisfying (H3) is the one coming from the generalized surface quasi--geostrophic equation: $G(x)=\frac{C_{\beta}}{2\pi}\frac{1}{|x|^{\beta}}$, for $\beta\in(0,1)$. \end{rem} {{\begin{proof} By definition, \begin{align*} F(\varepsilon,0, V_0)(w)=&\textnormal{Re}\Big[\left\{\frac{i}{\pi\varepsilon}\overline{\int_{\T}G(\varepsilon|w-\xi|)\, d\xi}+\frac{i}{\pi\varepsilon}\sum_{0\neq k\in\Z}\overline{\int_{\T} G(|\varepsilon i(w-\xi)-kl|)\, d\xi}\right.\\ &\left.+\frac{i}{\pi\varepsilon}\sum_{k\in\Z}\overline{\int_{\T} G(|\varepsilon i(w+\xi)-a-kl+ih|)\, d\xi}-V_0\right\}iw\Big]. \end{align*} Concerning the first term, we can compute it using \begin{align}\label{int-K} \textnormal{Re}\left[\frac{w}{\pi\varepsilon}\overline{\int_{\T}G(\varepsilon|w-\xi|)\, d\xi}\right]=\textnormal{Re}\left[\frac{w}{\pi\varepsilon}\overline{w}\overline{\int_{\T} G(\varepsilon|1-\xi|)\, d\xi}\right] =\frac{1}{\pi\varepsilon}\textnormal{Re}\left[\overline{\int_{\T} G(\varepsilon|1-\xi|)\, d\xi}\right]=0, \end{align} since the integral is a pure complex number. For the second term, note that \begin{align}\label{2ndterm-gen} \lim_{\varepsilon \rightarrow 0}\frac{1}{\varepsilon}\sum_{0\neq k\in\Z}\int_{\T}& G(|\varepsilon i(w-\xi)-kl|)\, d\xi\nonumber\\ =&-\lim_{\varepsilon \rightarrow 0}\sum_{0\neq k\in\Z}\int_{\D}G'(|\varepsilon i(w-\xi)-kl|)\frac{\varepsilon i(w-\xi)-kl}{|\varepsilon i(w-\xi)-kl|}\, d\xi\nonumber\\ =&\sum_{0\neq k\in\Z}\int_{\D}G'(|kl|)\frac{kl}{|kl|}\, d\xi\nonumber\\ =&0, \end{align} via the Stokes Theorem \eqref{Stokes} and taking into account \eqref{property-nabla}. We have computed the above limit by using the Convergence Dominated Theorem and (H3). Moreover, the sum is vanishing because we are using the symmetry sum. Using again the Stokes Theorem for the third term, one arrives at \begin{align*} \lim_{\varepsilon \rightarrow 0}\left\{\frac{i}{\pi\varepsilon}\right.&\left.\sum_{k\in\Z}\overline{\int_{\T} G(|\varepsilon i(w+\xi)-a-kl+ih|)\, d\xi}-V_0\right\}\\ =&\lim_{\varepsilon \rightarrow 0}\left\{\frac{i}{\pi}\sum_{k\in\Z}\overline{\int_{\D} G'(|\varepsilon i(w+\xi)-a-kl+ih|)\frac{\varepsilon i(w+\xi)-a-kl+ih}{|\varepsilon i(w+\xi)-a-kl+ih|} \, d\xi}-V_0\right\}\\ =&\left\{\frac{i}{\pi}\sum_{k\in\Z}\overline{\int_{\D} G'(|-a-kl+ih|)\frac{-a-kl+ih}{|-a-kl+ih|} \, d\xi}-V_0\right\}\\ =&\left\{i\sum_{k\in\Z}\overline{ G'(|-a-kl+ih|)\frac{-a-kl+ih}{|-a-kl+ih|} \, d\xi}-V_0\right\}\\ =&0, \end{align*} by definition of $V_0$, which is given in \eqref{V_0-gen}. Again, the above limit is justified with the Convergence Dominated Theorem and (H3). 
\end{proof}}} In what follows we will have to deal with the singularity in $\varepsilon$; the equation can, however, be simplified so as to control this singularity. We decompose $I(\varepsilon,f)$ as \begin{align}\label{I_exp} -\pi I(\varepsilon,f)(w)=&\frac{1}{\varepsilon}\int_{\T} G(|\varepsilon(\phi(w)-\phi(\xi))|)\phi'(\xi)\, d\xi+\frac{1}{\varepsilon}\sum_{0\neq k\in\Z}\int_{\T} G(|\varepsilon(\phi(w)-\phi(\xi))-kl|)\phi'(\xi)\, d\xi\nonumber\\ &+\frac{1}{\varepsilon}\sum_{k\in\Z}\int_{\T} G(|\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih|)\phi'(\xi)\, d\xi\nonumber\\ =&:I_1(\varepsilon,f)(w)+I_2(\varepsilon,f)(w)+I_3(\varepsilon,f)(w). \end{align} We use the Taylor formula \eqref{Taylor-formula}: \begin{align*} G(|z_1+z_2|)=G(|z_1|)+\int_0^1G'(|z_1+tz_2|)\frac{\textnormal{Re}\left[(z_1+tz_2)\overline{z_2}\right]}{|z_1+tz_2|}dt, \end{align*} for $z_1,z_2\in\C$ and $|z_2|<|z_1|$. In the case of $I_1$, take $z_1=i\varepsilon(w-\xi)$ and $z_2=i\frac{\varepsilon^2}{G(\varepsilon)}(f(w)-f(\xi)),$ implying \begin{align*} I_1(\varepsilon,f)=&\frac{i}{\varepsilon}\int_{\T}G(\varepsilon|w-\xi|)\, d\xi+i\frac{\varepsilon}{G(\varepsilon)}\int_{\T}\int_0^1G'\left(\varepsilon\left|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right|\right)\\ &\times\frac{\textnormal{Re}\left[\left((w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right)\overline{(f(w)-f(\xi))}\right]}{|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))|}dt\, d\xi\\ &+\frac{i}{G(\varepsilon)}\int_{\T}G(\varepsilon|\phi(w)-\phi(\xi)|)f'(\xi)\, d\xi\\ =&I_{1,1}(\varepsilon,f)(w)+I_{1,2}(\varepsilon,f)(w)+I_{1,3}(\varepsilon,f)(w). \end{align*} Let us check that $|z_2|<|z_1|$: $$ |z_2|=\frac{\varepsilon^2}{G(\varepsilon)}|f(w)-f(\xi)|\leq \frac{\varepsilon^2}{G(\varepsilon)}||f||_{C^1}|w-\xi|\leq \frac{\varepsilon^2}{G(\varepsilon)}|w-\xi|< |z_1|. $$ By virtue of \eqref{int-K}, we get $$ \textnormal{Re}\left[\frac{w}{\pi\varepsilon}\overline{\int_{\T}G(\varepsilon|w-\xi|)\, d\xi}\right]=0, $$ and then the nonlinear function $F(\varepsilon,f,V)$ can be simplified as follows \begin{align}\label{F2} F(\varepsilon,f,V)=&\textnormal{Re}\left[\left\{\overline{I(\varepsilon,f)(w)}- \overline{V}\right\}{w}{\phi'(w)}\right]-\textnormal{Re}\left[\frac{wf'(w)}{\pi G(\varepsilon)}\overline{\int_{\T}G(\varepsilon|w-\xi|)\, d\xi}\right]\nonumber\\ =&\textnormal{Re}\left[\left\{\overline{I(\varepsilon,f)(w)}- \overline{V}\right\}{w}{\phi'(w)}\right]+\frac{i\int_{\T}G(\varepsilon|1-\xi|)\, d\xi}{\pi G(\varepsilon)}\textnormal{Im}\left[f'(w)\right]. \end{align} We use the decomposition of $I(\varepsilon,f)$ in \eqref{I_exp}, rewriting $I_1(\varepsilon,f)$ as \begin{equation}\label{I1-decom} I_1(\varepsilon,f):= I_{1,2}(\varepsilon,f)+I_{1,3}(\varepsilon,f), \end{equation} since $I_{1,1}(\varepsilon,f)$ gives no contribution to \eqref{F2}. In the next result, we specify how $V$ must depend on $\varepsilon$ and $f$.
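Before stating it, let us illustrate the hypotheses that will appear below; this observation is only meant as orientation and is not used in the proofs. For the homogeneous kernel $\tilde{G}(r)=r^{-\beta}$, $\beta\in(0,1)$, one has
$$
\frac{\tilde{G}(\varepsilon r)}{\tilde{G}(\varepsilon)\tilde{G}(r)}=1\quad\textnormal{and}\quad \frac{\varepsilon\tilde{G}'(\varepsilon r)}{\tilde{G}(\varepsilon)\tilde{G}'(r)}=1,
$$
identically in $\varepsilon$ and $r$. The conditions (H4) and (H5) required in Propositions \ref{Prop-trivialsol2Qgen} and \ref{Prop-reggen} therefore simply ask that, at small scales, the kernel $G$ behaves like a homogeneous one.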
\begin{pro}\label{Prop-trivialsol2Qgen} Let $G$ satisfies the hypothesis (H1)--(H3) of Proposition \ref{Gen-point} and Proposition \ref{Prop-trivialsol-gen}, and \begin{enumerate} \item[(H4)] {$\frac{\tilde{G}(\varepsilon r)}{\tilde{G}(\varepsilon)\tilde{G}(r)}\rightarrow 1,\, \frac{\varepsilon\tilde{G}'(\varepsilon r)}{\tilde{G}(\varepsilon)\tilde{G}'(r)}\rightarrow 1,\textnormal{when } \varepsilon\rightarrow 0, $ uniformly in $r\in(0,2)$.} \end{enumerate} The function $V:(-\varepsilon_0,\varepsilon_0)\times B_{X_{1-\beta_2}}(0,\sigma)\longrightarrow \R$, given by \begin{align}\label{function-V-gen} V(\varepsilon,f)=&\frac{\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw}{\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw}, \end{align} fulfills $V(0, f)=V_0$, where $V_0$ is defined in \eqref{V_0-gen}. The parameters satisfy: $\varepsilon_0\in(0,\textnormal{min}\{1,\frac{l}{4}\})$, $\sigma<1$, and $X$ is defined in \eqref{X}. \end{pro} {{\begin{proof} Note that $$ V(0,f)=\lim_{\varepsilon\rightarrow 0}\frac{\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}\phi'(w)(1-\overline{w}^2)dw}{i\int_{\T} {w}(1-\overline{w}^2)dw}=-\lim_{\varepsilon\rightarrow 0}\frac{\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}(1+\frac{\varepsilon}{G(\varepsilon)}f'(w))(1-\overline{w}^2)dw}{2\pi i}, $$ via the Residue Theorem. We use the decomposition of $I(\varepsilon,f)$ given in \eqref{I_exp}. Let us begin with $I_1(\varepsilon,f)$: \begin{align*} I_1(\varepsilon,f)=&I_{1,2}(\varepsilon,f)(w)+I_{1,3}(\varepsilon,f)(w)\\ =&i\frac{\varepsilon}{G(\varepsilon)}\int_{\T}\int_0^1G'\left(\varepsilon\left|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right|\right)\\ &\times\frac{\textnormal{Re}\left[\left((w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right)\overline{(f(w)-f(\xi))}\right]}{|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))|}dt\, d\xi\\ &+\frac{i}{G(\varepsilon)}\int_{\T}G(\varepsilon|\phi(w)-\phi(\xi)|)f'(\xi)\, d\xi. \end{align*} Using (H4) and the Dominated Convergence Theorem, we get \begin{equation}\label{I1-0} I_1(0,f)=i\int_{\T}\frac{G'(|w-\xi|)}{|w-\xi|}\textnormal{Re}\left[(w-\xi)\overline{(f(w)-f(\xi))}\right]\, d\xi+i\int_{\T}G(|w-\xi|)f'(\xi)\, d\xi. \end{equation} Note that the Dominated Convergence Theorem can be applied since the limit in (H4) is uniform. We can compute the above integrals in the following way \begin{align}\label{int-comp} \int_{\T}\frac{G'(|w-\xi|)}{|w-\xi|}\textnormal{Re}\left[(w-\xi)\overline{(f(w)-f(\xi))}\right]\, d\xi=&\sum_{n\geq 1}a_n\int_{\T}\frac{G'(|w-\xi|)}{|w-\xi|}\textnormal{Re}\left[(w-\xi){(w^n-\xi^n)}\right]\, d\xi\nonumber\\ =&\sum_{n\geq 1}\frac{a_n}{2}\Big\{\overline{w}^n\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}\overline{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi\nonumber\\ &+{w}^{n+2}\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi\Big\},\nonumber\\ \int_{\T}G(|w-\xi|)f'(\xi)\, d\xi=&-\sum_{n\geq 1}a_n n\int_{\T}G(|w-\xi|)\frac{1}{\xi^{n+1}} \, d\xi\nonumber\\ =&-\sum_{n\geq 1}a_n n\overline{w}^n\int_{\T}G(|1-\xi|)\frac{1}{\xi^{n+1}} \, d\xi, \end{align} where $f(w)=\sum_{n\geq 1}a_nw^{-n}.$ Note also that \begin{align*} \int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}\overline{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi,\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi,\int_{\T}G(|1-\xi|)\frac{1}{\xi^{n+1}} \, d\xi\in i\R. 
\end{align*} In this way, \begin{align*} \lim_{\varepsilon\rightarrow 0}\int_{\T} \overline{I_1(\varepsilon,f)(w)}w&\left(1+\frac{\varepsilon}{G(\varepsilon)}f'(w)\right)(1-\overline{w}^2)dw=\int_{\T} \overline{I_1(0,f)(w)}w(1-\overline{w}^2)dw\\ =&\sum_{n\geq 1}a_n\Big\{\frac{i}{2}\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}\overline{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi\int_{\T}w^nw(1-\overline{w}^2)dw\\ &+i\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi\int_{\T}\overline{w}^{n+2}w(1-\overline{w}^2)dw\\ &-in\int_{\T}G(|1-\xi|)\frac{1}{\xi^{n+1}} \, d\xi\int_{\T}w^nw(1-\overline{w}^2)dw\Big\}\\ =&0, \end{align*} by the Residue Theorem. Let us move on $I_3(\varepsilon,f)$. Here, we use also Taylor formula \eqref{Taylor-formula} for $z_1=-a-kl+ih$ and $z_2=\varepsilon(\phi(w)+\phi(\xi))$, finding \begin{align}\label{I3-taylor} I_3(\varepsilon,f)(w)=&\frac{i}{\varepsilon}\sum_{k\in\Z}\int_{\T}G(|-a-kl+ih|)\, d\xi\nonumber\\ &+i\sum_{k\in\Z}\int_{\T}\int_0^1G'\left(\left|-a-kl+ih+\varepsilon t(\phi(w)+\phi(\xi))\right|\right)\nonumber\\ &\times\frac{\textnormal{Re}\left[\left(-a-kl+ih+\varepsilon t(\phi(w)+\phi(\xi))\right)\overline{(\phi(w)+\phi(\xi))}\right]}{|-a-kl+ih+\varepsilon t(\phi(w)+\phi(\xi))|}dt\, d\xi\nonumber\\ &+\frac{i}{G(\varepsilon)}\sum_{k\in\Z}\int_{\T}G(|\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih|)f'(\xi)\, d\xi\nonumber\\ =&i\sum_{k\in\Z}\int_{\T}\int_0^1G'\left(\left|-a-kl+ih+\varepsilon t(\phi(w)+\phi(\xi))\right|\right)\nonumber\\ &\times\frac{\textnormal{Re}\left[\left(-a-kl+ih+\varepsilon t(\phi(w)+\phi(\xi))\right)\overline{(\phi(w)+\phi(\xi))}\right]}{|-a-kl+ih+\varepsilon t(\phi(w)+\phi(\xi))|}dt\, d\xi\nonumber\\ &+\frac{i}{G(\varepsilon)}\sum_{k\in\Z}\int_{\T}G(|\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih|)f'(\xi)\, d\xi\nonumber\\ =:&I_{3,1}(\varepsilon,f)(w)+I_{3,2}(\varepsilon,f)(w). \end{align} Let us check that $|z_2|<|z_1|$, which is necessary to use Taylor formula: $$ |z_2|=\varepsilon|\phi(w)+\phi(\xi)|\leq 2\varepsilon||\phi||_{L^{\infty}}\leq 4\varepsilon<|a+l-ih|\leq |z_1|. $$ Note that $$ I_{3,2}(0,f)=0. $$ For the other term, we achieve using (H4) that \begin{align*} I_{3,1}(0,f)(w)=&i\sum_{k\in\Z}\int_{\T}G'\left(\left|-a-kl+ih\right|\right)\frac{\textnormal{Im}\left[\left(-a-kl+ih\right)\overline{(w+\xi)}\right]}{|-a-kl+ih|}\, d\xi\\ =&\frac{1}{2}\sum_{k\in\Z}\int_{\T}G'\left(\left|-a-kl+ih\right|\right)\frac{\left(-a-kl+ih\right)\overline{(w+\xi)}}{|-a-kl+ih|}\, d\xi\\ &-\frac{1}{2}\sum_{k\in\Z}\int_{\T}G'\left(\left|-a-kl+ih\right|\right)\frac{\overline{\left(-a-kl+ih\right)}{(w+\xi)}}{|-a-kl+ih|}\, d\xi\\ =&i\pi\sum_{k\in\Z}G'\left(\left|-a-kl+ih\right|\right)\frac{\left(-a-kl+ih\right)}{|-a-kl+ih|}\\ =&-\pi V_0, \end{align*} by the Residue Theorem and the definition of $V_0$ given in \eqref{V_0-gen}. Using the same ideas for $I_2(\varepsilon,f)$, we find that \begin{align*} I_2(\varepsilon,f)(w)=&i\sum_{0\neq k\in\Z}\int_{\T}\int_0^1G'\left(\left|-kl+\varepsilon t(\phi(w)+\phi(\xi))\right|\right)\\ &\times\frac{\textnormal{Re}\left[\left(-kl+\varepsilon t(\phi(w)+\phi(\xi))\right)\overline{(\phi(w)+\phi(\xi))}\right]}{|-kl+\varepsilon t(\phi(w)+\phi(\xi))|}dt\, d\xi\\ &+\frac{i}{G(\varepsilon)}\sum_{0\neq k\in\Z}\int_{\T}G(|\varepsilon(\phi(w)+\phi(\xi))-kl|)f'(\xi)\, d\xi, \end{align*} and then, $$ I_2(0,f)=-i\sum_{0\neq k\in\Z}kl\frac{G'(|kl|)}{|kl|}\textnormal{Re}\left[\overline{w-\xi}\right]\, d\xi=0. 
$$ This implies that $$ \lim_{\varepsilon\rightarrow 0}\int_{\T} \overline{I_2(\varepsilon,f)(w)}w\left(1+\frac{\varepsilon}{G(\varepsilon)}f'(w)\right)(1-\overline{w}^2)dw=0. $$ Finally, we find \begin{align*} \lim_{\varepsilon\rightarrow 0}\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}\left(1+\frac{\varepsilon}{G(\varepsilon)}f'(w)\right)(1-\overline{w}^2)dw=&-\frac{1}{\pi}\int_{\T}\overline{ I_{3,1}(0,f)(w)}{w}(1-\overline{w}^2)dw\\ =&V_0\int_{\T}{w}(1-\overline{w}^2)dw\\ =&-2\pi i V_0, \end{align*} getting the announced result, i.e., $ V(0,f)=V_0. $ \end{proof}}} \begin{pro}\label{Prop-reggen} Let $G$ satisfies the hypothesis (H1)--(H4) of Proposition \ref{Gen-point}, Proposition \ref{Prop-trivialsol-gen} and Proposition \ref{Prop-trivialsol2Qgen}, and \begin{enumerate} \item[(H5)] {$ \frac{d}{d\varepsilon}\frac{\tilde{G}(\varepsilon r)}{\tilde{G}(\varepsilon)\tilde{G}(r)}\rightarrow 0, \, \frac{d}{d\varepsilon}\frac{\varepsilon\tilde{G}'(\varepsilon r)}{\tilde{G}(\varepsilon)\tilde{G}'(r)}\rightarrow 0, \textnormal{when } \varepsilon\rightarrow 0, $ uniformly in $r\in(0,2)$.} \end{enumerate} If $V$ sets \eqref{function-V-gen}, then $$\tilde{F}:(-\varepsilon_0,\varepsilon_0)\times B_{X_{1-\beta_2}}(0,\sigma)\rightarrow Y_{1-\beta_2},$$ with $\tilde{F}(\varepsilon,f)=F(\varepsilon,f,V(\varepsilon,f))$, is well--defined and $C^1$. The spaces $X$ and $Y$ are defined in \eqref{X}-\eqref{Y} taking $\alpha=1-\beta_2$, and the parameters satisfy $\varepsilon_0\in(0,\textnormal{min}\{1,\frac{l}{4}\})$ and $\sigma<1$. \end{pro} \begin{proof} The proof has three steps: the symmetry of $F$, regularity of $V$, and regularity of $\tilde{F}$. \noindent {\it $\bullet$ First step: Symmetry of $F$.} Let us prove that $F(\varepsilon,f,V)(e^{i\theta})=\sum_{n\geq 1}f_n\sin(n\theta)$ with $f_n\in\R$, i.e., checking that $F$ verifies $F(\varepsilon,f,V)(\overline{w})=-F(\varepsilon,f,V)(w)$. First, we work with $I(\varepsilon,f)$ showing that $I(\varepsilon,f)(\overline{w})=\overline{I(\varepsilon,f)(w)}:$ \begin{align*} \overline{I(\varepsilon,f)(w)}=&-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}\overline{\int_{\T} G(|\varepsilon(\phi(w)-\phi(\xi))-kl|)\phi'(\xi)\, d\xi}\\ &-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}\overline{\int_{\T} G(|\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih|)\phi'(\xi)\, d\xi}\\ =&\frac{1}{\pi\varepsilon}\sum_{k\in\Z}{\int_{\T} G(|\varepsilon(\phi(w)-\phi(\overline{\xi}))-kl|)\overline{\phi'(\overline{\xi})}\, d\xi}\\ &+\frac{1}{\pi\varepsilon}\sum_{k\in\Z}{\int_{\T} G(|\varepsilon(\phi(w)+\phi(\overline{\xi}))-a-kl+ih|)\overline{\phi'(\overline{\xi})}\, d\xi}\\ =&-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}{\int_{\T} G(|\varepsilon(\phi(w)-\phi(\overline{\xi}))-kl|){\phi'({\xi})}\, d\xi}\\ &-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}{\int_{\T} G(|\varepsilon(\phi(w)+\phi(\overline{\xi}))-a-kl+ih|){\phi'({\xi})}\, d\xi}\\ =&-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}{\int_{\T} G(|\varepsilon(\phi(\overline{w})-\phi({\xi}))-kl|){\phi'({\xi})}\, d\xi}\\ &-\frac{1}{\pi\varepsilon}\sum_{k\in\Z}{\int_{\T} G(|\varepsilon(\phi(\overline{w})+\phi({\xi}))-a-kl+ih|){\phi'({\xi})}\, d\xi}\\ =&I(\varepsilon,f)(\overline{w}). 
\end{align*} Second, we prove that $V\in\R$ analyzing the denominator and the numerator of its expression: \begin{align*} 2i\textnormal{Im}\Big[\int_{\T}\overline{ I(\varepsilon,f)(w)}&{w}{\phi'(w)}(1-\overline{w}^2)dw\Big]\\ =&\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw-\overline{\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw}\\ =&\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw+\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}\overline{\phi'(\overline{w})}(1-\overline{w}^2)dw\\ =&\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw-\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}{\phi'({w})}(1-\overline{w}^2)dw\\ =&0, \end{align*} and \begin{align*} 2i\textnormal{Im}\left[\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw\right]=&\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw-\overline{\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw}\\ =&\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw+\int_{\T} {w}\overline{\phi'(\overline{w})}(1-\overline{w}^2)dw\\ =&\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw-\int_{\T} {w}{\phi'(w)}(1-\overline{w}^2)dw\\ =&0. \end{align*} Thirdly, from the above computations we arrive at \begin{align*} F(\varepsilon,f,V)(\overline{w})=&\textnormal{Re}\left[\left\{\overline{I(\varepsilon,f)(\overline{w})}- \overline{V}\right\}\overline{w}{\phi'(\overline{w})}\right]\\ =&-\textnormal{Re}\left[\left\{{I(\varepsilon,f)(w)}-{V}\right\}\overline{w}\overline{\phi'(w)}\right]\\ =&-F(\varepsilon,f,V)(w). \end{align*} Now, we have that $F(\varepsilon,f,V)(e^{i\theta})=\sum_{n\geq 1}f_n\sin(n\theta)$. Moreover, the condition \eqref{function-V-gen} agrees with the fact that $f_1=0$, we refer to the first step in the proof of Proposition \ref{Prop-regEuler} for more details. \noindent {\it $\bullet$ Second step: Regularity of $V$.} In similarity with the Euler equations, we need to study first the denominator: $$ \int_{\T}w\phi'(w)(1-\overline{w}^2)dw=i\int_{\T}w(w+\varepsilon f'(w))(1-\overline{w}^2)dw=2\pi+i\varepsilon\int_{\T}wf'(w)dw=2\pi-i\varepsilon\int_{\T}f(w)dw, $$ where we have used the Residue Theorem. Then, if $|\varepsilon|<\varepsilon_0$ and $f\in B_{X_{1-\beta_2}}(0,\sigma)$, the denominator is not vanishing. Moreover, it is $C^1$ in $f$ and $\varepsilon\in(-\varepsilon_0,\varepsilon_0)$. By the expression of $V$, it remains to study the regularity of $J(\varepsilon,f)$: $$ J(\varepsilon,f)(w)=\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw. $$ We use the decomposition of $I(\varepsilon,f)$ given in \eqref{I_exp}. First, we began with the continuity is both variables. Fixing $f\in B_{X_{1-\beta_2}}(0,\sigma)$, note from Proposition \ref{Prop-trivialsol2Qgen} that $$ J(0,f)=2\pi V_0, $$ and then $J$ is continuous in $\varepsilon\in(-\varepsilon_0,\varepsilon_0)$. Now, fixing $\varepsilon\neq 0$, by Lemma \ref{Lem-pottheory} and (H3), we get easily that $I_i(\varepsilon,f)\in C^{1-\beta_2}(\T)$, for $i=1,2,3$. Secondly, we study the differentiability properties. Fix again $f\in B_{X_{1-\beta_2}}(0,\sigma)$, and we differentiate with respect to $\varepsilon$: \begin{align}\label{aux1} \frac{d}{d\varepsilon} J(\varepsilon,f)(w)=&\int_{\T}\overline{ d_{\varepsilon}I(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw\nonumber\\ &+i\left(\frac{1}{G(\varepsilon)}-\frac{\varepsilon G'(\varepsilon)}{G(\varepsilon)^2}\right)\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}{f'(w)}(1-\overline{w}^2)dw. 
\end{align} The last expression is continuous when $\varepsilon\neq 0$ and now we aim to pass to the limit $\varepsilon\rightarrow 0$. By (H3) and (H4), we have that $$ \lim_{\varepsilon \rightarrow 0} \left(\frac{1}{G(\varepsilon)}-\frac{\varepsilon G'(\varepsilon)}{G(\varepsilon)^2}\right)=\lim_{\varepsilon \rightarrow 0} \left(\frac{1}{G(\varepsilon)}-\frac{G'(1)}{G(\varepsilon)}\right)=0. $$ By using Proposition \ref{Prop-trivialsol2Qgen}, we find \begin{align}\label{I-0} \lim_{\varepsilon \rightarrow 0}{ I(\varepsilon,f)(w)}=&-\frac{i}{\pi}\int_{\T}\frac{G'(|w-\xi|)}{|w-\xi|}\textnormal{Re}\left[(w-\xi)\overline{(f(w)-f(\xi))}\right]\, d\xi\nonumber\\ &-\frac{i}{\pi}\int_{\T}G(|w-\xi|)f'(\xi)\, d\xi+V_0, \end{align} which implies that $$ \lim_{\varepsilon\rightarrow 0}\left(\frac{1}{G(\varepsilon)}-\frac{\varepsilon G'(\varepsilon)}{G(\varepsilon)^2}\right)\int_{\T}\overline{ I(\varepsilon,f)(w)}{w}{f'(w)}(1-\overline{w}^2)dw=0. $$ Let us analyze the first term of \eqref{aux1}. Using the decomposition \eqref{I_exp}, we begin with $$ \int_{\T}\overline{ d_{\varepsilon}I_1(\varepsilon,f)(w)}{w}{\phi'(w)}(1-\overline{w}^2)dw. $$ We use also the decomposition for $I_1$ given in \eqref{I1-decom}. For the last term, one has that $$ \frac{d}{d\varepsilon}I_{1,3}(\varepsilon,f)(w)=i\int_{\T}\frac{d}{d\varepsilon}\left(\frac{G(\varepsilon|\phi(w)-\phi(\xi)|)}{G(\varepsilon)}\right)f'(\xi)\, d\xi, $$ which is continuous in $\varepsilon\neq 0$ and $f\in B_{X_{1-\beta_2}}(0,\sigma)$. Moreover, (H5) implies that $$ \lim_{\varepsilon\rightarrow 0}\frac{d}{d\varepsilon}I_{1,3}(\varepsilon,f)(w)=0, $$ for any $w\in\T$. Note that we can use the Dominated Convergence Theorem since the limit in (H5) is uniform. We can differentiate $I_{1,2}$ for $\varepsilon\neq 0$ and using once again (H5), we have that \begin{align*} \lim_{\varepsilon\rightarrow 0}\frac{d}{d\varepsilon}I_{1,2}(\varepsilon,f)(w)=0. \end{align*} Let us now show the idea of $I_3(\varepsilon,f)$, and $I_2(\varepsilon,f)$ will read similarly. Here, we use Taylor formula \eqref{Taylor-formula} as it was done in \eqref{I3-taylor}: \begin{align*} I_3(\varepsilon,f)(w)=&I_{3,1}(\varepsilon,f)(w)+I_{3,2}(\varepsilon,f)(w)\\ =&i\sum_{k\in\Z}\int_{\T}\int_0^1G'\left(\left|-a-kl+ih+\varepsilon t(\phi(w)+\phi(\xi))\right|\right)\\ &\times\frac{\textnormal{Re}\left[\left(-a-kl+ih+\varepsilon t(\phi(w)+\phi(\xi))\right)\overline{(\phi(w)+\phi(\xi))}\right]}{|-a-kl+ih+\varepsilon t(\phi(w)+\phi(\xi))|}dt\, d\xi\\ &+\frac{i}{G(\varepsilon)}\sum_{k\in\Z}\int_{\T}G(|\varepsilon(\phi(w)+\phi(\xi))-a-kl+ih|)f'(\xi)\, d\xi. \end{align*} These two expressions are smooth in $\varepsilon$. In fact, we can check that $\frac{d}{d\varepsilon}I_3(\varepsilon,f)$ is continuous in $\varepsilon\in(-\varepsilon_0,\varepsilon_0)$ and $f\in B_{X_{1-\beta_2}}(0,\sigma)$. Now, fix $\varepsilon\neq 0$ and we focus on the regularity with respect to $f$. The integral $I_1$ is the more delicate one since the kernel is singular. Remark the expression of this term: \begin{align*} I_1(\varepsilon,f)=&i\frac{\varepsilon}{G(\varepsilon)}\int_{\T}\int_0^1G'\left(\varepsilon\left|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right|\right)\\ &\times\frac{\textnormal{Re}\left[\left((w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right)\overline{(f(w)-f(\xi))}\right]}{|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))|}dt\, d\xi\\ &+\frac{i}{G(\varepsilon)}\int_{\T}G(\varepsilon|\phi(w)-\phi(\xi)|)f'(\xi)\, d\xi. 
\end{align*} Then, \begin{align*} \partial_f I_1(\varepsilon,f)h(w)=&i\frac{\varepsilon^3}{G(\varepsilon)^2}\int_{\T}\int_0^1tG''\left(\varepsilon\left|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right|\right)\\ &\times\frac{\textnormal{Re}\left[\left((w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right)\overline{(f(w)-f(\xi))}\right]}{|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))|^2}\\ &\times\left\{\left((w-\xi)+\frac{t\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right)\cdot (h(w)-h(\xi))\right\}dt\, d\xi\\ &-i\frac{\varepsilon^2}{G(\varepsilon)^2}\int_{\T}\int_0^1tG'\left(\varepsilon\left|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right|\right)\\ &\times\frac{\textnormal{Re}\left[\left((w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right)\overline{(f(w)-f(\xi))}\right]}{|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))|^3}dt\, d\xi\\ &\times\left\{\left((w-\xi)+\frac{t\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right)\cdot (h(w)-h(\xi))\right\}dt\, d\xi\\ &+i\frac{\varepsilon}{G(\varepsilon)}\int_{\T}\int_0^1G'\left(\varepsilon\left|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right|\right)\\ &\times\frac{\textnormal{Re}\left[\left((w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(h(w)-h(\xi))\right)\overline{(f(w)-f(\xi))}\right]}{|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))|}dt\, d\xi\\ &+i\frac{\varepsilon}{G(\varepsilon)}\int_{\T}\int_0^1G'\left(\varepsilon\left|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right|\right)\\ &\times\frac{\textnormal{Re}\left[\left((w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right)\overline{(h(w)-h(\xi))}\right]}{|(w-\xi)+t\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))|}dt\, d\xi\\ &+\frac{i}{G(\varepsilon)}\int_{\T}G(\varepsilon|\phi(w)-\phi(\xi)|)h'(\xi)\, d\xi\\ &+\frac{i\varepsilon^2}{G(\varepsilon)^2}\int_{\T}\frac{G'(\varepsilon|\phi(w)-\phi(\xi)|)}{|\phi(w)-\phi(\xi)|}f'(\xi)\\ &\times \left\{\left((w-\xi)+\frac{\varepsilon}{G(\varepsilon)}(f(w)-f(\xi))\right)\cdot (h(w)-h(\xi))\right\}\, d\xi. \end{align*} For any $\varepsilon\neq 0$, the above expression is continuous in $f$ by using Lemma \ref{Lem-pottheory}. Moreover, using (H4) we can obtain the limit when $\varepsilon\rightarrow 0$ as \begin{align}\label{dfI1} \partial_f I_1(0,f)h(w)=i\int_{\T}\frac{G'(|w-\xi|)}{|w-\xi|}\textnormal{Re}\left[(w-\xi)\overline{(h(w)-h(\xi))}\right]\, d\xi+i\int_{\T}G(|w-\xi|)h'(\xi)\, d\xi. \end{align} Note that it agrees when differentiating with respect to $f$ in \eqref{I1-0}. For the other two integrals, notice that $I_2$ and $I_3$ are not singular integrals due to $|\varepsilon(\phi(w)-\phi(\xi))-kl|$ it not vanishing for $k\neq 0$ and neither $|\varepsilon(\phi(w)-\phi(\xi))-a-kl+ih|$, for any $k\in\Z$. This gives us that $I_2$ and $I_3$ are $C^1$. Let us show the idea of $I_2$: \begin{align*} \partial_f I_2(\varepsilon,f)h(w)=&\partial_f \frac{1}{\varepsilon}\sum_{0\neq k\in\Z}\int_{\T} G(|\varepsilon(\phi(w)-\phi(\xi))-kl|)\phi'(\xi)\, d\xi\\ =&\frac{i}{\varepsilon}\frac{\varepsilon^2}{G(\varepsilon)}\sum_{0\neq k\in\Z}\int_{\T} G'(|\varepsilon(\phi(w)-\phi(\xi))-kl|)\\ &\times\frac{(\varepsilon(\phi(w)-\phi(\xi))-kl)\cdot (h(w)-h(\xi))}{|\varepsilon(\phi(w)-\phi(\xi))-kl|}\phi'(\xi)\, d\xi\\ &+\frac{i}{\varepsilon}\frac{\varepsilon}{G(\varepsilon)}\sum_{0\neq k\in\Z}\int_{\T} G(|\varepsilon(\phi(w)-\phi(\xi))-kl|)h'(\xi)\, d\xi. \end{align*} By Lemma \ref{Lem-pottheory}, the last expression is continuous in $f$. 
Moreover, it is also continuous in $\varepsilon$ for $\varepsilon\neq 0$, and it is easy to check, with the help of the Dominated Convergence Theorem, that \begin{equation}\label{dfI2} \partial_f I_2(0,f)h(w)=0. \end{equation} In the same way, one can check that $\partial_f I_3(\varepsilon,f)$ is continuous in both variables and \begin{equation}\label{dfI3} \partial_f I_3(0,f)h(w)=0. \end{equation} \noindent {\it $\bullet$ Third step: Regularity of $\tilde{F}$.} Since $V(\varepsilon,f)$ is $C^1$ in both variables and using the computations above concerning $I(\varepsilon,f)$, one can easily check that $\tilde{F}$ is $C^1$. \end{proof} \subsection{Main result} Finally, we can announce the result concerning the desingularization of the point model \eqref{PointVortex-gen} in the general system. We need to impose an extra condition on $G$ in order to ensure that the linearized operator is an isomorphism. \begin{theo}\label{Th-gen} Consider $G$ satisfying (H1)--(H5) of Proposition \ref{Gen-point}, Proposition \ref{Prop-trivialsol-gen} and Proposition \ref{Prop-trivialsol2Qgen}, and \begin{enumerate} \item[(H6)] $ 0\notin \left\{n\int_{\T}G(|1-\xi|)(1-\overline{\xi}^{n+1})\, d\xi-i\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}\textnormal{Im}\left[(1-{\xi})(1-{\xi}^n)\right]\, d\xi,\quad n\geq 1\right\}. $ \end{enumerate} Let $h, l\in\R$, with $h\neq 0$ and $l>0$, and $a=0$ or $a=\frac{l}{2}$. Then, there exists $D^{\varepsilon}$ such that \begin{equation}\label{omega_epsilon2-gen} q_{0,\varepsilon}(x)=\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf 1}_{\varepsilon D^{\varepsilon}+kl}(x)-\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf 1}_{-\varepsilon D^{\varepsilon}+a-ih+kl}(x), \end{equation} defines a horizontal translating solution of \eqref{Generaleq}, with constant speed, for any $\varepsilon\in(0,\varepsilon_0)$ and small enough $\varepsilon_0>0$. Moreover, $D^{\varepsilon}$ is at least $C^1$. \end{theo} {{\begin{proof} Let us consider $\tilde{F}:(-\varepsilon_0,\varepsilon_0)\times B_{X_{1-\beta_2}}(0,\sigma)\rightarrow Y_{1-\beta_2}$, with $\varepsilon_0\in(0,\textnormal{min}\{1,\frac{l}{4}\})$ and $\sigma<1$, defined in Proposition \ref{Prop-reggen}. By that proposition, it is $C^1$ in both variables. Moreover, Proposition \ref{Prop-trivialsol-gen} and Proposition \ref{Prop-trivialsol2Qgen} give us that $\tilde{F}(0,0)=0$. In order to implement the Implicit Function Theorem, let us compute the linearized operator: \begin{align*} \partial_f \tilde{F}(0,0)h(w)=&\lim_{\varepsilon\rightarrow 0}\textnormal{Re}\left[\left\{\partial_f \overline{I(0,0)}h(w)-\partial_f V(0, 0)h(w)\right\}iw\right.\\ &\left.+\left\{\overline{I(0,0)(w)}-V_0\right\}iw\frac{\varepsilon}{G(\varepsilon)} h'(w)\right]+\frac{i\int_{\T}G(\varepsilon|w-\xi|)\, d\xi}{\pi G(\varepsilon)}\textnormal{Im}\left[h'(w)\right]. \end{align*} By Proposition \ref{Prop-trivialsol2Qgen}, we have $\partial_f V(0, f)h(w)\equiv 0$. By virtue of \eqref{I-0}, we have $$ {I(0,f)(w)}=-\frac{i}{\pi}\int_{\T}\frac{G'(|w-\xi|)}{|w-\xi|}\textnormal{Re}\left[(w-\xi)\overline{(f(w)-f(\xi))}\right]\, d\xi-\frac{i}{\pi}\int_{\T}G(|w-\xi|)f'(\xi)\, d\xi+V_0, $$ which implies $$ I(0,0)(w)=V_0. $$ Then, we have \begin{align*} \partial_f \tilde{F}(0,0)h(w)=&\textnormal{Re}\left[iw\partial_f \overline{I(0,0)}h(w)-\frac{1}{\pi}\overline{\int_{\T}G(|1-\xi|)\, d\xi} h'(w)\right].
\end{align*} On the other hand, using \eqref{dfI1}-\eqref{dfI2}-\eqref{dfI3} we obtain \begin{align*} -\pi\partial_f I(0,0)h(w)=&\partial_f I_1(0,0)h(w)\\ =&i\int_{\T}\frac{G'(|w-\xi|)}{|w-\xi|}\textnormal{Re}\left[(w-\xi)\overline{(h(w)-h(\xi))}\right]\, d\xi+i\int_{\T}G(|w-\xi|)h'(\xi)\, d\xi, \end{align*} which amounts to \begin{align*} \partial_f \tilde{F}(0,0)h(w)=&\textnormal{Re}\Big[-\frac{w}{\pi}\overline{\int_{\T}\frac{G'(|w-\xi|)}{|w-\xi|}\textnormal{Re}\left[(w-\xi)\overline{(h(w)-h(\xi))}\right]\, d\xi}\\ &-\frac{w}{\pi}\overline{\int_{\T}G(|w-\xi|)h'(\xi)\, d\xi}-\frac{h'(w)}{\pi}\overline{\int_{\T}G(|1-\xi|)\, d\xi} \Big]. \end{align*} Note that $ \mathcal{K}:C^{2-\beta_2}(\T)\rightarrow C^{1-\beta_2}(\T), $ defined by $$ \mathcal{K}(h)(w)={\int_{\T}\frac{G'(|w-\xi|)}{|w-\xi|}\textnormal{Re}\left[(w-\xi)\overline{(h(w)-h(\xi))}\right]\, d\xi}+{\int_{\T}G(|w-\xi|)h'(\xi)\, d\xi}, $$ is a compact operator since it is smoothing. Since $h\in C^{2-\beta_2}(\T)\mapsto h'\in C^{1-\beta_2}(\T)$ is a Fredholm operator of zero index, then $\partial_f \tilde{F}(0,0)$ so is. As a consequence, checking that $\partial_f \tilde{F}(0,0)$ is an isomorphism, it is enough to check that the kernel is trivial. We can compute the integrals involving in the linearized operator using \eqref{int-comp}, and finding \begin{align*} \int_{\T}\frac{G'(|w-\xi|)}{|w-\xi|}\textnormal{Re}\left[(w-\xi)\overline{(h(w)-h(\xi))}\right]\, d\xi=&\sum_{n\geq 1}\frac{a_n}{2}\Big\{\overline{w}^n\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}\overline{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi\\ &+{w}^{n+2}\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi\Big\},\\ \int_{\T}G(|w-\xi|)h'(\xi)\, d\xi=&-\sum_{n\geq 1}a_n n\overline{w}^n\int_{\T}G(|1-\xi|)\frac{1}{\xi^{n+1}} \, d\xi, \end{align*} where $h(w)=\sum_{n\geq 1}a_nw^{-n}.$ Note also that \begin{align*} \int_{\T}G(\varepsilon|1-\xi|)\, d\xi,&\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}\overline{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi,\\ &\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi,\int_{\T}G(|1-\xi|)\frac{1}{\xi^{n+1}} \, d\xi\in i\R. \end{align*} Finally, we achieve \begin{align*} \partial_f \tilde{F}(0,0)h(w)=&\sum_{n\geq 1}\frac{a_n}{\pi}\textnormal{Re}\Big[\frac{w^{n+1}}{2}\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}\overline{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi\\ &+\frac{\overline{w}^{n+1}}{2}\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi\\ &-nw^{n+1}\int_{\T}G(|1-\xi|)\frac{1}{\xi^{n+1}} \, d\xi-n\overline{w}^{n+1}\int_{\T}G(\varepsilon|1-\xi|)\, d\xi\Big]\\ =&\sum_{n\geq 1}\frac{a_n i}{\pi}\sin((n+1)\theta)\Big\{\frac12\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}\overline{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi\\ &-\frac12\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}{\left[(1-\xi){(1-\xi^n)}\right]}\, d\xi\\ &-n\int_{\T}G(|1-\xi|)\frac{1}{\xi^{n+1}} \, d\xi+n\int_{\T}G(\varepsilon|1-\xi|)\, d\xi\Big\}\\ =&\sum_{n\geq 1}\frac{a_n i}{\pi}\sin((n+1)\theta)\Big\{n\int_{\T}G(|1-\xi|)(1-\overline{\xi}^{n+1})\, d\xi\\ &-i\int_{\T}\frac{G'(|1-\xi|)}{|1-\xi|}\textnormal{Im}\left[(1-{\xi})(1-{\xi}^n)\right]\, d\xi\Big\}. \end{align*} By (H6), we get that the kernel is trivial and then the linerized operator is an isomorphism. \end{proof}}} As a consequence, we get the result for the generalized surface quasi--geostrophic equation, meaning $G=\frac{C_{\beta}}{2\pi}\frac{1}{|\cdot|^\beta}$, for $\beta\in(0,1)$ and $C_{\beta}=\frac{\Gamma\left(\frac{\beta}{2}\right)}{2^{1-\beta}\Gamma\left(\frac{2-\beta}{2}\right)}$. 
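Before stating it precisely, let us briefly indicate why the structural hypotheses hold for this kernel; the following check is only a sketch, and we use the normalisation $\tilde{G}(r)=r^{-\beta}$, which is harmless since multiplying $G$ by a positive constant only multiplies the velocity field and the speed $V$ by the same constant and does not affect the construction. Hypothesis (H1) is clear; (H2) holds with $\beta_1=\beta$ because $|\tilde{G}'(r)|=\beta r^{-1-\beta}$; (H3) holds with $\beta_2=\beta$ since $\ln|z|=o(|z|^{-\beta})$ as $z\rightarrow 0$; and the quotients appearing in (H4) and (H5) are identically equal to $1$ for a homogeneous kernel, so their limits and $\varepsilon$-derivatives are as required.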
It only remains to check that (H6) is verified; these computations are done in \cite{HmidiMateu-pairs} for the vortex pairs. \begin{theo}\label{Th-gSQG} Let $h, l\in\R$, with $h\neq 0$ and $l>0$, and $a=0$ or $a=\frac{l}{2}$. Then, there exists $D^{\varepsilon}$ such that \begin{equation}\label{omega_epsilon2-qsqg} q_{0,\varepsilon}(x)=\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf 1}_{\varepsilon D^{\varepsilon}+kl}(x)-\frac{1}{\pi\varepsilon^2}\sum_{k\in\Z}{\bf 1}_{-\varepsilon D^{\varepsilon}+a-ih+kl}(x), \end{equation} defines a horizontal translating solution of the generalized surface quasi-geostrophic equations for $\beta\in(0,1)$, with constant speed, for any $\varepsilon\in(0,\varepsilon_0)$ and small enough $\varepsilon_0>0$. Moreover, $D^{\varepsilon}$ is at least $C^1$. \end{theo} \appendix \section{Special functions}\label{Ap-specialfunctions} This section recalls some definitions and properties of the Bessel functions. Let us define the Bessel function of the first kind and order $\nu$ by the expansion $$ J_{\nu}(z)=\sum_{k=0}^{+\infty}\frac{(-1)^k\left(\frac{z}{2}\right)^{\nu+2k}}{k!\, \Gamma(\nu+k+1)},\quad |\textnormal{arg}(z)|<\pi. $$ In addition, it is known that Bessel functions admit the following integral representation $$ J_n(z)=\frac{1}{\pi}\int_0^{\pi}\cos(n\theta-z\sin \theta)d\theta,\quad z\in\C, $$ for $\nu=n\in\Z$. On the other hand, the Bessel functions of imaginary argument, denoted by $I_{\nu}$ and $K_{\nu}$, are defined by $$ I_{\nu}(z)=\sum_{k=0}^{+\infty}\frac{\left(\frac{z}{2}\right)^{\nu+2k}}{k!\, \Gamma(\nu+k+1)},\quad |\textnormal{arg}(z)|<\pi, $$ and $$ K_{\nu}(z)=\frac{\pi}{2}\frac{I_{-\nu}(z)-I_{\nu}(z)}{\sin(\nu\pi)}, \quad \nu\in\C\setminus\Z, \, |\textnormal{arg}(z)|<\pi. $$ A useful expansion for $K_n$ can be found in \cite{Watson}: \begin{align*} K_n(z)=&(-1)^{n+1}\sum_{k=0}^{+\infty}\frac{\left(\frac{z}{2}\right)^{n+2k}}{k!(n+k)!}\left(\ln\left(\frac{z}{2}\right)-\frac{1}{2}\varphi(k+1)-\frac12\varphi(n+k+1)\right)\\ &+\frac12\sum_{k=0}^{n-1}\frac{(-1)^k(n-k-1)!}{k!\left(\frac{z}{2}\right)^{n-2k}}, \end{align*} for $n\in\N^{\star}$. In the case of interest here, $K_0$, it reads as \begin{equation}\label{K0-expansion} K_0(z)=-\ln\left(\frac{z}{2}\right)I_0(z)+\sum_{k=0}^\infty\frac{\left(\frac{z}{2}\right)^{2k}}{(k!)^2}\varphi(k+1), \end{equation} where $$ \varphi(1)=-\gamma \quad \text{ and }\quad \varphi(k+1)=\sum_{m=1}^k \frac{1}{m}-\gamma, \, k\in\N^*. $$ The constant $\gamma$ is Euler's constant. Moreover, the following asymptotic behaviour at infinity for $K_0$ is given in \cite{Abramowitz}: \begin{equation}\label{K0-inf} K_0(z)\sim \sqrt{\frac{\pi}{2z}}e^{-z},\quad |\textnormal{arg}(z)|<\frac{3}{2}\pi. \end{equation} Finally, the derivative of $K_n$ can be expressed in terms of Bessel functions. In particular, $K_0'=-K_1$. \section{Complex integrals and potential theory}\label{Ap-potentialtheory} This section concerns some complex integrals, in particular the Stokes Theorem, the Cauchy--Pompeiu formula and some results on singular integrals. The complex version of the Stokes Theorem reads as follows.
Let $D$ be a simply connected domain and $f$ a $C^1$ scalar function; then \begin{equation}\label{Stokes} \int_{\partial D}f(\xi)\, \, d\xi=2i\int_{D} \partial_{\overline{z}} f(z)\, dA(z), \end{equation} where $\partial_{\overline{z}}$ can be identified with the gradient operator via $$ \nabla=2\partial_{\overline{z}}, \quad \partial_{\overline{z}}\varphi(z):=\frac12\left(\partial_{1}\varphi(z)+i\partial_{2} \varphi(z)\right). $$ Let us now introduce the Cauchy--Pompeiu formula. Let $\varphi:\overline{D}\to\C$ be a ${C}^1$ complex function; then \begin{equation}\label{Cauchy-Pom} -\frac{1}{\pi}\int_D \frac{\partial_{\overline{y}}\varphi(y)}{w-y}dA(y)=\frac{1}{2\pi i}\int_{\partial D}\frac{\varphi(w)-\varphi(\xi)}{w-\xi}d\xi. \end{equation} Finally, we deal with singular integrals of the type \begin{equation}\label{operator-T} \mathcal{T}(f)(w)=\int_{\T} K(w,\xi)f(\xi)\, \, d\xi,\quad w\in\T, \end{equation} where $K:\T\times \T\rightarrow\C$ is smooth off the diagonal. The next result focuses on the smoothness of this operator; its proof can be found in \cite{Hassainia-Hmidi}. See also \cite{Helms, Kress, LiebLoss}. \begin{lem}\label{Lem-pottheory} Let $0\leq \alpha<1$ and consider $K:\T\times\T\rightarrow \C$ with the following properties. There exists $C_0>0$ such that \begin{itemize} \item[(i)] $K$ is measurable on $\T\times\T\setminus\{(w,w), w\in\T\}$ and $$ |K(w,\xi)|\leq \frac{C_0}{|w-\xi|^\alpha}, \quad \forall w\neq \xi\in\T. $$ \item[(ii)] For each $\xi\in\T$, $w\mapsto K(w,\xi)$ is differentiable in $\T\setminus\{\xi\}$ and $$ |\partial_w K(w,\xi)|\leq \frac{C_0}{|w-\xi|^{1+\alpha}}, \quad \forall w\neq \xi\in\T. $$ \end{itemize} Then, \begin{enumerate} \item The operator $\mathcal{T}$ defined by \eqref{operator-T} is continuous from $L^{\infty}(\T)$ to $C^{1-\alpha}(\T)$. More precisely, there exists a constant $C_{\alpha}$ depending only on $\alpha$ such that $$ \|\mathcal{T}(f)\|_{1-\alpha}\leq C_{\alpha}C_0\|f\|_{L^{\infty}}. $$ \item For $\alpha=0$, the operator $\mathcal{T}$ is continuous from $L^{\infty}(\T)$ to $C^{\beta}(\T)$, for any $0<\beta<1$. That is, there exists a constant $C_{\beta}$ depending only on $\beta$ such that $$ \|\mathcal{T}(f)\|_{\beta}\leq C_{\beta}C_0\|f\|_{L^{\infty}}. $$ \end{enumerate} \end{lem} \end{document}
\begin{document} \title{A generalisation of a theorem of Wielandt} \date{\today} \author{Francesco Fumagalli} \address{Dipartimento di Matematica e Informatica ``Ulisse Dini'', viale Morgagni 67/A, 50134 Firenze, Italy.} \email{[email protected]} \author{Gunter Malle} \address{FB Mathematik, TU Kaiserslautern, Postfach 3049, 67653 Kaisers\-lautern, Germany.} \email{[email protected]} \thanks{The second author gratefully acknowledges financial support by ERC Advanced Grant 291512.} \keywords{subnormality, Wielandt's criterion, Quillen's conjecture} \subjclass[2010]{Primary 20B05, 20D35; Secondary 20D05} \begin{abstract} In 1974, Helmut Wielandt proved that in a finite group $G$, a subgroup $A$ is subnormal if and only if it is subnormal in $\seq{A,g}$ for every $g\in G$. In this paper, we prove that the subnormality of an odd order nilpotent subgroup $A$ of $G$ is already guaranteed by a seemingly weaker condition: $A$ is subnormal in $G$ if for every conjugacy class $C$ of $G$ there exists $c\in C$ for which $A$ is subnormal in $\seq{A,c}$. We also prove the following property of finite non-abelian simple groups: if $A$ is a subgroup of odd prime order $p$ in a finite almost simple group $G$, then there exists a cyclic $p'$-subgroup of $F^*(G)$ which does not normalise any non-trivial $p$-subgroup of $G$ that is generated by conjugates of~$A$. \end{abstract} \maketitle \section{Introduction} The main result of our paper is the following criterion for the existence of a non-trivial normal $p$-subgroup in a finite group: \begin{thmA} \label{Main_Thm} Let $G$ be a finite group and $p$ be an odd prime. Let $A$ be a $p$-subgroup of $G$ such that $$\text{for every conjugacy class $C$ of $G$ there exists $g\in C$ with $A$ subnormal in $\seq{A,g}$}.\leqno{(*)}$$ Then $A\leq O_p(G)$. \end{thmA} As an immediate consequence we have: \begin{corB} \label{Main_Cor} If $A$ is an odd order nilpotent subgroup of a finite group $G$ satisfying condition $(*)$, then $A$ is subnormal in $G$. \end{corB} This can be considered a generalisation of the following result due to H.~Wielandt (see \cite[7.3.3]{LS}): \begin{thmW}[Wielandt] Let $A$ be a subgroup of a finite group $G$. Then the following conditions are equivalent. \begin{enumerate} \item[\rm(i)] $A$ is subnormal in $G$; \item[\rm(ii)] $A$ is subnormal in $\seq{A,g}$ for all $g\in G$; \item[\rm(iii)] $A$ is subnormal in $\seq{A,A^g}$ for all $g\in G$; \item[\rm(iv)] $A$ is subnormal in $\seq{A,A^{a^g}}$ for all $a\in A$, $g\in G$. \end{enumerate} \end{thmW} Our proof of Theorem~A makes use of a reduction argument to arrive at a question about finite almost simple groups; we then prove a property of these groups which may be of independent interest: \begin{thmB} \label{thm:simples} Let $G$ be a finite almost simple group with simple socle $S$ and $p>2$ be a prime dividing $|G|$. Let $A\le G$ be cyclic of order~$p$. Then there exists a cyclic $p'$-subgroup $X\le S$ such that $$\NN_G^A(X,p)=\varnothing.$$ \end{thmB} Here, $\NN_G^A(X,p)$ denotes the set of non-trivial $p$-subgroups of $G$ generated by conjugates of $A$ and normalised by $X$. Our proof is therefore related to (and relies on) the classification of finite simple groups. It should be noted that for $p=2$ the conclusions of Theorem~A and Theorem~C are no longer true. In particular, condition $(\ast)$ does not imply that $A\leq O_2(G)$. An easy example is reported at the end of Section~\ref{sec:main}.
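The following elementary example may help to illustrate condition $(*)$; it is only meant as orientation and plays no role in the proofs. Take $G={\mathfrak{A}}_5$, $p=3$ and $A$ any subgroup of order $3$. If $C$ is a conjugacy class of $5$-cycles, then for every $c\in C$ the subgroup $\seq{A,c}$ has order divisible by $15$ and hence equals ${\mathfrak{A}}_5$, since every proper subgroup of ${\mathfrak{A}}_5$ has order at most $12$; as ${\mathfrak{A}}_5$ is simple, $A$ is not subnormal in $\seq{A,c}$. Thus $(*)$ fails for $A$, in accordance with Theorem~A and the fact that $O_3({\mathfrak{A}}_5)=1$.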
In Section~\ref{sec:almost simple} we give, after some preparations, the proof of Theorem~C, and then in Section~\ref{sec:main} show the reduction of Theorem~A to the case of almost simple groups.\\ We end with Section~\ref{sec:others}, where we analyse similar variations related to the other criteria for subnormality given by the original Theorem of Wielandt, namely condition (iii), better known as the Baer--Suzuki Theorem. We show that in general these generalisations fail to guarantee the subnormality of odd $p$-subgroups. For other variations on the Baer--Suzuki Theorem the interested reader may consult \cite{X}, \cite{GR}, \cite{G1}, \cite{G2}, \cite{GM1} and \cite{GM2}. \section{Almost simple groups} \label{sec:almost simple} \subsection{Notation and preliminary results} In this section we let $S$ be a non-abelian finite simple group and $G$ any group such that $S\leq G\leq {\operatorname{Aut}}(S)$. For $p$ a prime divisor of $|G|$ denote by ${\mathcal{S}}_p(G)$ the set of all (possibly trivial) $p$-subgroups of $G$. For a $p'$-subgroup $X$ of $S$ we denote by $\NN_G(X,p)$ the set of $p$-subgroups of $G$ normalised by $X$, namely $$\NN_G(X,p)=\set{Y\in{\mathcal{S}}_p(G)\mid X\leq N_G(Y)}.$$ Also for $A\in{\mathcal{S}}_p(G)$ set $$\NN_G^A(X,p):=\set{Y \in \NN_G(X,p)\mid Y \textrm{ is generated by $G$-conjugates of } A}.$$ Note that if $A\leq S$, then $\NN_G^A(X,p)\subseteq \NN_S(X,p)$, while if $A\not\leq S$ then no $E\in \NN_G^A(X,p)$ lies in $S$. We aim to prove Theorem~C, which we restate: \begin{thm} \label{thm:almost-simple} Let $G$ be a finite almost simple group with simple socle $S$. Then with the same notation as above, for every odd prime $p$ dividing $|G|$ and every $A\leq G$ of order $p$, there exists a cyclic $p'$-subgroup $X\leq S$ such that $\NN_G^A(X,p)=\varnothing$. \end{thm} In \cite{AK}, a similar condition is considered. The authors investigate finite groups $G$ and primes $p$ that have the following property: \begin{center} (R2)\ all nilpotent hyperelementary $p'$-subgroups $X$ of $F^*(G)$ satisfy $\NN_G(X,p)\ne 1$ \end{center} where a \emph{hyperelementary group} $X$ is one for which $O^q(X)$ is cyclic, for some prime $q$; basically a nilpotent hyperelementary $p'$-group $X$ is a direct product of a Sylow $q$-subgroup for some prime $q\ne p$, and a cyclic $p'$-group. They show (\cite[Thm.~2]{AK}) that the only almost simple group $G$ satisfying (R2) at a prime $p$ arises for $S={\operatorname{L}}_3(4)$, $p=2$, and $4$ dividing $|G:S|$. Note that assumption (R2) implies our assumption. See also \cite{CC} for an analogous condition and its related subnormality criteria. \begin{lem} \label{lem:out} In the situation of Theorem~\ref{thm:almost-simple} assume that $A\not\le S$. Let $X$ be a non-trivial $p'$-subgroup of $S$ and $E\in \NN^A_G(X,p)$. Then $X$ commutes with some non-trivial $p$-element in $G\setminus S$. \end{lem} \begin{proof} By the coprime action of $X$ on $E$, we have that $E=[E,X]C_E(X)$. As $E$ is generated by conjugates of $A$ and $A\not\leq S$, we necessarily have that $C_E(X)\not\leq S$. \end{proof} \begin{lem} \label{lem:normElem} In the situation of Theorem~\ref{thm:almost-simple}, if $\NN_G^A(X,p)\ne\varnothing$ then $X$ normalises a non-trivial elementary abelian $p$-subgroup of $S$ or it centralises a non-trivial $p$-element of~$G$. \end{lem} \begin{proof} Let $E$ be a non-trivial $p$-subgroup of $G$ normalised by $X$.
If $X$ does not centralise any non-trivial $p$-element, then $A\le S$ by Lemma~\ref{lem:out} and hence $E\le S$. Now $X$ also normalises $Z(E)$, and then also $\Omega_1(Z(E))$, the largest elementary abelian subgroup of $Z(E)$. \end{proof} The following is a well-known consequence of the classification of finite simple groups and can be found for example in \cite[2.5.12]{GLS}. \begin{prop} \label{prop:oddout} Let $S$ be non-abelian simple, $S<G\le{\operatorname{Aut}}(S)$ and assume that $x\in G\setminus S$ has odd prime order $p$. Then $S$ is of Lie type and one of the following occurs: \begin{enumerate} \item[\rm(1)] $x$ is a field automorphism of $S$; \item[\rm(2)] $x$ is a diagonal automorphism of $S$ and one of \begin{enumerate} \item[\rm(2.1)] $S={\operatorname{L}}_n(q)$ with $p|(n,q-1)$, \item[\rm(2.2)] $S={\operatorname{U}}_n(q)$ with $p|(n,q+1)$, \item[\rm(2.3)] $p=3$, $S=E_6(q)$ with $3|(q-1)$, \item[\rm(2.4)] $p=3$, $S=\tw2E_6(q)$ with $3|(q+1)$; or \end{enumerate} \item[\rm(3)] $p=3$ and $x$ is a graph or graph-field automorphism of $S={\operatorname{O}}_8^+(q)$. \end{enumerate} \end{prop} We prove Theorem~\ref{thm:almost-simple} by treating separately the cases: $S$ is alternating, sporadic or a simple group of Lie type. \lhd\!\!\lhd \, bsection{The case of alternating groups.} Throughout the rest of this subsection we assume $S={\mathfrak{A}}_n$, with $n\geq 5$ and $G$ such that $S\leq G\leq {\operatorname{Aut}}(S)$. For $X$ a cyclic $p'$-subgroup of $S$ we denote by $E$ any element of $\NN_G(X,p)$. Also, we tacitly assume that any such $E$ is elementary abelian (see Lemma~\ref{lem:normElem}). The following elementary result \cite[Lemma~3]{AK} will be used several times. \begin{lem} \label{lem:Sym} Let $X\leq G\leq {\mathfrak{S}}_n$ and $E\in \NN_G(X,p)$. If $E$ acts non-trivially on some $X$-orbit ${\mathcal{O}}$, then $p$ divides $|{\mathcal{O}}|$. \end{lem} \begin{proof} As $X$ acts on $\operatorname{fix}_{{\mathcal{O}}}(E)$, $\operatorname{fix}_{{\mathcal{O}}}(E)=\varnothing$ and $|{\mathcal{O}}|\equiv |\operatorname{fix}_{{\mathcal{O}}}(E)|\equiv 0$ (mod $p$). \end{proof} \begin{prop} \label{prop:Alt} If $\NN_G(X,p)\ne1$ for every cyclic $p'$-subgroup $X$ of $S$, then $S={\mathfrak{A}}_6$ and $(G,p)\in\set{({\operatorname{PGL}}_2(9),2),({\operatorname{Aut}}({\mathfrak{A}}_6),2)}$. In particular for every odd prime $p$ and every subgroup $A$ of order $p$ there exists a cyclic $p'$-subgroup $X$ for which $\NN^A_G(X,p)=\varnothing$. \end{prop} \begin{proof} The cases $n=5$ and $n=6$, with $G\leq {\mathfrak{S}}_6$, follow quite immediately by taking, as subgroup $X$ a Sylow $5$-subgroup, if $p=2$ or $p=3$, and a cyclic subgroup of order $3$ if $p=5$. Assume that $n=6$ and $G\not\leq {\mathfrak{S}}_6$. As $|G:{\mathfrak{A}}_6|$ is either $2$ or $4$, if $p$ is odd then $\NN_G(X,p)=\NN_{{\mathfrak{A}}_6}(X,p)$, for every $X\leq {\mathfrak{A}}_6$. Therefore, by what we have just proved, there is some cyclic $p'$-subgroup $X$ of ${\mathfrak{A}}_6$ for which $\NN_G(X,p)=1$. Let $p=2$. The group $G=M_{10}$ contains no elements of order 10 (see \cite{Atlas}). We take $X$ a cyclic subgroup of order $5$ of $G$. Note that if $E$ is any $2$-subgroup of $G$ normalised by $X$, then $E=[E,X]$ and so $E$ lies in ${\mathfrak{A}}_6$, but then $\NN_G(X,2)=\NN_{{\mathfrak{A}}_6}(X,2)=1$. Let now $G={\operatorname{PGL}}_2(9)$ or $G={\operatorname{Aut}}({\mathfrak{A}}_6)$. 
Then every element of odd order normalises a non-trivial $2$-subgroup of $G$, basically the elements of order~$3$ normalise a copy of ${\mathfrak{A}}_4$ lying in ${\mathfrak{A}}_6$, while the elements of order $5$ centralise always an outer involution (see \cite{Atlas}). Therefore we have that $\NN_G(X,2)\ne 1$ for every cyclic odd order subgroup $X$ of $G$. \noindent Assume for the rest of the proof that $n\geq 7$ and argue by contradiction. We treat separately the two cases: 1) $n$ is even and 2) $n$ is odd. \noindent {\bf Case 1}. $n$ is even. \noindent Assume first that $p\vert (n-1)$. In particular $p$ is odd. We take as cyclic $p'$-subgroup $X$ of ${\mathfrak{A}}_n$ the one generated by $x=(12)(3\ldots n)$. Let $E$ be a non-trivial element of $\NN_G(X,p)$. Now the set $\set{1,2}$ cannot lie in $\operatorname{fix}(E)$, otherwise $E$ acts non-trivially on $\set{3,\ldots,n}$, and by Lemma \ref{lem:Sym} we would have that $p$ divides $n-2$, which is not the case being a divisor of $n-1$. Also from the fact that $E=E^x$, it follows that none of $1$ and $2$ are fixed by $E$. But then, as $p>2$, $EX$ is transitive on $\set{1,2,\ldots,n}$, and since $p$ does not divide $n$ we have a contradiction. Assume now that $p\nmid (n-1)$.\\ We choose first $X=\seq{x}$ with $x=(2\ldots n)$ and let $1\ne E\in\NN_G(X,p)$. By Lemma \ref{lem:Sym} we have that $E$ does not fix $1$. Then $EX$ is transitive on $\set{1,2,\ldots,n}$. In particular we have that $n$ is a power of $p$ and since it is even $p=2$. Say $n=2^r\geq 8$.\\ We can now change our testing subgroup $X=\seq{x}$ and choose now $x=(123)(4\ldots n)$. This is of course a cyclic $p'$-subgroup of ${\mathfrak{A}}_n$. Let $E$ be a non-trivial elementary abelian $2$-subgroup lying in $\NN_G(X,2)$. Note that $E$ acts fixed-point-freely on $\set{1,2,\ldots,n}$. Indeed if $E$ fixes a point, then as $X$ normalises $E$, we have that either $\set{1,2,3}$ or $\set{4,\ldots, n}$ lie in $\operatorname{fix}(E)$. In any case we reach a contradiction with Lemma \ref{lem:Sym}. Now we claim that $E$ is transitive. Let ${\mathcal{O}}$ be the $E$-orbit containing $1$. Then ${\mathcal{O}}$ cannot be contained in $\set{1,2,3}$, otherwise $E$ being a $2$-group, there will be a fixed point of $E$ in $\set{1,2,3}$, which is not the case. Let therefore $e\in E$ be such that $1e=i\in\set{4,\ldots,n}$. The subgroup $\seq{x^3}$ is transitive on $\set{4,\ldots,n}$, as $3$ is coprime to $n-3=2^r-3$, thus for every $j\in\set{4,\ldots,n}$ we may take some $y\in\seq{x^3}$ such that $iy=j$. But then $$1(e^y)=1(y^{-1}ey)=1(ey)=i(y)=j$$ and since $e^y\in E$ the element $j$ lies in ${\mathcal{O}}$. In particular we have proved that $\set{4,\ldots,n}\lhd\!\!\lhd \, bseteq {\mathcal{O}}$ and, since $n-2=2^r-2$ is not a power of 2 as $n\geq 8$, we have that ${\mathcal{O}}=\set{1,2,\ldots, n}$ and $E$ acts regularly on it. Now let $e$ be the unique element of $E$ that maps $1$ to $2$, then $e^x$ maps $2$ to $3$. Now as $[e,e^x]=1$ we have that $$1(ee^x)=2(e^x)=3=1(e^xe)$$ which means that $1(e^x)=3(e^{-1})$, and so $1e^x\not\in \set{1,2,3}$. If we set $1e^x=j$, for some $j\in\set{4,\ldots,n}$, we reach a contradiction, since $$e=(12)(3,j)\ldots \quad {\textrm{and}} \quad e^x=(23)(1,j)\ldots $$ but also as $n\geq 8$, $jx\ne j$ and so $e^x=(1x,2x)(3x,jx)\ldots=(23)(1,jx)\ldots$. \noindent {\bf Case 2}. $n$ is odd. \noindent In this situation we have that $p\vert n$. Indeed if this is not the case, we take $X$ the subgroup generated by an $n$-cycle of ${\mathfrak{A}}_n$. 
Now any $1\ne E\in\NN_G(X,p)$ acts non-trivially on $\set{1,2,\ldots,n}$, and therefore we reach a contradiction with Lemma~\ref{lem:Sym}. Thus $p\vert n$; in particular $p$ is odd.\\ We take now $x$ the $(n-2)$-cycle $(3,4,\ldots, n)$ and $X=\seq{x}$. Then any $1\ne E\in\NN_G(X,p)$ does not fix both $1$ and $2$, otherwise by Lemma \ref{lem:Sym}, $p\vert (n-2)$, which is not the case as $p\vert n$ and $p$ is odd. Assume that $1$ is not fixed by $E$ (otherwise argue considering $2$ in place of $1$) and let ${\mathcal{O}}_1$ be the $E$-orbit containing $1$. Since $p$ is odd there is some $e\in E$ such that $1e=i$ for some $i\in\set{3,\ldots,n}$. Now as $X$ is transitive on $\set{3,\ldots,n}$, for every $j\in\set{3,\ldots,n}$ there is some power $m$ of $x$ such that $ix^m=j$. But then $$1(e^{x^m})=1(x^{-m}ex^m)=1(ex^m)=i(x^m)=j,$$ and as $e^{x^m}\in E$ we have proved that $\set{3,\ldots, n}\subseteq{\mathcal{O}}_1$. Since $p\nmid n-1$ we conclude that $E$ is transitive on $\set{1,\ldots,n}$. We show now that $E$ is regular. The stabiliser $\operatorname{Stab}_E(1)$ is normalised by $X$, and therefore if this is non-trivial, then by Lemma~\ref{lem:Sym} we reach the contradiction $p\vert (n-2)$. It follows that $E$ is regular on $\set{1,2,\ldots,n}$ and so $n=|E|=p^r$, and any non-trivial element $\sigma$ of $E$ is a product of exactly $p^{r-1}$ cycles of length $p$. In particular there exists a unique $\sigma\in E$ which maps $1$ to $2$. We write $$\sigma=\sigma_1\sigma_2\cdots \sigma_{p^{r-1}}$$ with $\sigma_1=(12u_3\ldots u_p)$, a $p$-cycle. Since $\sigma^x\in E$ maps $1$ to $2$, we necessarily have that $\sigma=\sigma^x$, but this is not the case as $\sigma_1^x=(12u_3x\ldots u_px)\ne (12u_3\ldots u_p)=\sigma_1$. \end{proof} \subsection{The case of sporadic groups.} \label{subsec:spor} We assume now that $S$ is one of the 27 sporadic simple groups (including the Tits simple group $\tw2F_4(2)'$), and $S\leq G\leq {\operatorname{Aut}}(S)$. As before $X$ will denote a cyclic $p'$-subgroup of $S$ and $E$ a non-trivial elementary abelian $p$-subgroup of $G$ normalised by $X$. Our basic reference for properties of sporadic groups is \cite{Atlas}. \begin{prop} \label{prop:Spor} Let $S$ be a simple sporadic group, $S\leq G\leq {\operatorname{Aut}}(S)$ and $p$ a prime. Then there exists a cyclic $p'$-subgroup $X$ of $S$ such that $\NN_G(X,p)=1$. In particular, for every odd prime $p$ and every subgroup $A$ of $G$ of order $p$, there exists a cyclic $p'$-subgroup $X$ such that $\NN^A_G(X,p)=\varnothing$. \end{prop} \begin{proof} We extend our notation a little. Given a prime $p$ and a positive integer $q$ coprime to $p$, we write $\NN_G(q,p)$ for the set of $p$-subgroups of $G$ that are normalised by some cyclic $q$-subgroup of $S$. Table~\ref{tab:spor} summarises the situation for the sporadic groups and their automorphism groups. For every group $S$, we list a pair $(q;r)$ of primes such that $\NN_G(q,p)=1$ for all $p\ne q$, and $\NN_G(r,q)=1$. For four groups, $\NN_G(q,p)\ne1$ for another prime $p\ne q$, in which case either $\NN_G(r,p)=1$, or we give a further integer $s$ such that $\NN_G(s,p)=1$. Our choice of $(q;r)$, respectively $(q;r;s)$, works for both $S$ and ${\operatorname{Aut}}(S)$.
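To illustrate how the entries of the table are to be read, consider $M_{11}$ (the facts used here are taken from \cite{Atlas}): the pair $(11;3)$ records that $\NN_{M_{11}}(11,p)=1$ for every prime $p\ne 11$ and that $\NN_{M_{11}}(3,11)=1$. The second assertion, for instance, can be seen directly: a subgroup of order $3$ normalising a non-trivial $11$-subgroup would generate with it a group of order $33$, which is impossible since $M_{11}$ has no element of order $33$ and $3\nmid 11-1$.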
\noindent \begin{table}[htbp] \caption{The case of sporadic groups.} \label{tab:spor} $\begin{array}{|lll|lll|ll|}\hline M_{11} & (11; 3) && Co_{3} & (23; 7) && B & (47; 31) \\ M_{12} & (11; 3) && Co_{2} & (23; 5) && M & (59; 71) \\ M_{22} & (11; 3) && Co_{1}& (23\,(p\ne 2);33\,(p=2);13)&& J_1 & (19; 11) \\ M_{23} & (23; 5) && He & (17; 7) && O'N & (31; 19) \\ M_{24} & (23; 5) && Fi_{22}& (13\,(p\ne 3); 11)&& J_3 & (19; 17) \\ J_{2} & (7; 5) && Fi_{23}& (23\,(p\ne 2); 17)&& Ly & (67; 37) \\ Suz & (13; 11) && Fi_{24}'& (29; 23)&& Ru & (29; 13) \\ HS & (11; 3) && HN & (19; 11) && J_4 & (43; 37) \\ M^cL & (11\,(p\ne 2); 7) && Th& (31\,(p\ne 2); 19) && \tw2F_4(2)' & (13; 5) \\ \hline \end{array}$\\ \end{table} \noindent We prove the validity of Table 1 by considering the individual groups in turn. The groups $S=M_{11}$, $J_1$, $J_2$, $M_{23}$, $M_{24}$, $Co_3$, $Co_2$, $Ru$, $Ly$, $J_4$, $Fi_{23}$ have trivial outer automorphism group. The validity of our claim is immediate from the known lists of maximal subgroups \cite{Atlas}. For $S=M_{12}$, $M_{22}$, $HS$, $He$, $J_3$, $O'N$, $HN$, $Th$, $\tw2F_4(2)'$ we have $|{\operatorname{Out}}(S)|=2$. For $G=S$ we can argue as before, while for $G={\operatorname{Aut}}(S)$ we invoke Lemma~\ref{lem:out} for a suitable subgroup $X\le S$ of prime order as listed in Table~\ref{tab:spor}. We deal with the remaining groups in some more detail. $S=Suz$. Here $|{\operatorname{Out}}(S)|=2$. The maximal subgroups of $S$ of order divisible by 13 are isomorphic to $G_2(4)$, ${\operatorname{L}}_3(3)\co2$ or ${\operatorname{L}}_2(25)$. As these have no elements of order~11, we immediately obtain $\NN_{Suz}(5,11)=1$. Moreover, for any of these groups, a Sylow $13$-subgroup does not normalise any other $p$-subgroup, for $p\ne 13$. Thus $\NN_{Suz}(13,p)=1$ for every prime $p\ne 13$. Finally the outer involutions do not centralise any element of order $13$, forcing the same conclusions for ${\operatorname{Aut}}(Suz)$. $S=M^cL$. Here $|{\operatorname{Out}}(S)|=2$. The maximal subgroups of $S$ of order divisible by 11 are isomorphic to $M_{11}$ and $M_{12}$. Therefore $\NN_{M^cL}(7,11)=\NN_{M^cL}(11,p)=1$ for every $p\ne 11$. Now consider $G={\operatorname{Aut}}(M^cL)$. Again, $X$ of order~11 shows that there are no examples except possibly when $p=2$. In the latter case for $X$ we take a cyclic subgroup of order~7. The Atlas \cite{Atlas} shows that $C_G(X)\le S$, and so there can be no example for $G$ by Lemma~\ref{lem:out}. $S=Co_1$. Here ${\operatorname{Out}}(S)=1$. The maximal subgroups of $S$ of order divisible by 23 are isomorphic to $Co_2$, $2^{11}\co M_{24}$ or $Co_3$. Since these groups have no elements of order 13, we obtain that $\NN_{Co_1}(13,23)=1$. Moreover if $Y$ is any of these maximal subgroups $\NN_{Y}(23,p)=1$, for every $p$ different from~23 and~2. Finally $Co_1$ has elements of order $33$ which are auto-centralising. As $Co_1$ and $Co_2$ do not contain elements of order $33$, the unique maximal subgroups of $S$ that have such an element are: ${\operatorname{U}}_6(2)\co{\mathfrak{S}}_3$, $3^6\co2M_{12}$ and $3^{\cdot}Suz\co2$. Now, a cyclic subgroup of order $33$ in ${\operatorname{U}}_6(2)\co{\mathfrak{S}}_3$ does not lie completely in ${\operatorname{U}}_6(2)$; therefore, if such a subgroup normalises a non-trivial $2$-subgroup, then, since $S$ has no elements of order $66$, we should have that $\NN_{{\operatorname{U}}_6(2)}(11,2)\ne 1$. 
This is not the case as in ${\operatorname{U}}_6(2)$ the maximal subgroups of order divisible by $11$ are $M_{22}$ and ${\operatorname{U}}_5(2)$. Consider now a cyclic subgroup $Y$ of order $33$ inside $3^6\co2M_{12}$. This is the direct product of a subgroup of order $3$ in $3^6$ by a Sylow $11$-subgroup of $2M_{12}$. Assume that $X$ is a non-trivial $2$-subgroup of $3^6\co2M_{12}$ normalised by $Y$. Then $X\in \NN_{2M_{12}}(11,2)$, and since $\NN_{M_{12}}(11,2)=1$ we deduce that $X$ is centralised by a Sylow $11$-subgroup of $M_{12}$, and thus by the whole $Y$, which is a contradiction since in $S$ there are no elements of order $66$. Finally, a similar argument shows that if $X$ is a non-trivial element of $\NN_{3^{\cdot}Suz:2}(33,2)$, then $X\cap Suz$ is a non-trivial element of $\NN_{Suz}(11,2)$. This is impossible since the maximal subgroups of $Suz$ of order divisible by $11$ are: ${\operatorname{U}}_5(2)$, $3^5\co M_{11}$ and $M_{12}\co2$, forcing $\NN_{Suz}(11,2)=1$. $S=Fi_{22}$. Here $|{\operatorname{Out}}(S)|=2$. The maximal subgroups of $S$ of order divisible by 13 are isomorphic to $\tw2F_4(2)$ or ${\operatorname{O}}_7(3)$. Since both these groups have orders not divisible by $11$, we have that $\NN_{S}(11,13)=1$. Now, $\NN_{\tw2F_4(2)}(13,p)=1$ for every $p\ne 13$, since the maximal subgroups of $\tw2F_4(2)$ containing a Sylow $13$-subgroup are ${\operatorname{L}}_2(25)$ and ${\operatorname{L}}_3(3)\co2$ and $C_S(13)=13$. In ${\operatorname{O}}_7(3)$ there are three isomorphism classes of maximal subgroups of order divisible by 13, namely $G_2(3), {\operatorname{L}}_4(3)\co2$ and $3^{3+3}\co{\operatorname{L}}_3(3)$. We have that $\NN_{{\operatorname{O}}_7(3)}(13,p)=1$ if $p\ne 3$ (and $p\ne 13$), forcing $\NN_{S}(13,p)=1$ for every prime $p$ different from $3$ and $13$. To deal with the case $p=3$, we look at the maximal subgroups of $S$ of order divisible by~11. These are isomorphic to one of the following: $M_{12}$, $2^{10}\co M_{22}$ and $2^{{}^{\textbf{ .}}} {\operatorname{U}}_6(2)$. For any of these groups $Y$ we have $\NN_{Y}(11,3)=1$, thus the same happens in $S$. Since $|{\operatorname{Out}}(S)|=2$, we only need to show that $\NN_{{\operatorname{Aut}}(S)}(q,2)=1$ for some odd integer $q$. This is guaranteed by the fact that $\NN_{S}(13,2)=1$ and $C_{{\operatorname{Aut}}(S)}(13)=13$. For the last three groups, the Atlas does not contain complete lists of maximal subgroups, so we need to give a different argument. $S=Fi_{24}'$. Here $|S|=2^{21}\cdot 3^{16} \cdot 5^2 \cdot 11\cdot 13\cdot 17\cdot 23\cdot 29$, $|{\operatorname{Out}}(S)|=2$. Here, a subgroup of order~29 cannot act faithfully on an elementary abelian $p$-subgroup for $p\ne29$, by the order formula. On the other hand, subgroups of order~29 are not normalised by elements of order~23. $S=B$. Here $|S|=2^{41}\cdot 3^{13} \cdot 5^6 \cdot 7^2\cdot 11\cdot 13\cdot 17 \cdot 19\cdot 23\cdot 31 \cdot 47$, ${\operatorname{Out}}(S)=1$. >From the order formula it is clear that a subgroup of order~47 cannot act non-trivially on an elementary abelian $p$-subgroup of $S$, except possibly for $p=2$. Since elements of order~47 are self-centralising, and not normalised by an element of order~31, we must have $p=2$. But the 2-rank of $S$ is~14 by \cite{La07}, too small for an action of $C_{47}$. $S=M$. Here $|S|=2^{46}\cdot 3^{20}\cdot 5^9\cdot 7^6\cdot 11^2\cdot 13^2 \cdot 17\cdot 19\cdot 23\cdot 29\cdot 31\cdot 41\cdot 47\cdot 59\cdot 71$, ${\operatorname{Out}}(S)=1$. 
A subgroup of order~59 cannot act faithfully on an elementary abelian $p$-subgroup for $p\ne59$, by the order formula. On the other hand, subgroups of order~59 are not normalised by elements of order~71. \end{proof} \subsection{Classical groups of Lie type} We consider the following setup. Let $S$ be a finite simple group of Lie type. There exists a simple linear algebraic group ${\mathbf{H}}$ of adjoint type defined over the algebraic closure of a finite field and a Steinberg endomorphism $F:{\mathbf{H}}\rightarrow{\mathbf{H}}$ such that the finite group of fixed points $H={\mathbf{H}}^F$ satisfies $S=[H,H]$. We now make use of the fact that groups of Lie type possess elements of orders which cannot occur in their Weyl group, and with small centraliser. These can be found, for example, in the Coxeter tori. For this we need the existence of \emph{Zsigmondy primitive prime divisors} (see \cite[Thm.~3.9]{He74}): \begin{lem} \label{lem:Zsig} Let $q$ be a power of a prime and $e>2$ an integer. Then unless $(q,e)=(2,6)$ there exists a prime $\ell$ dividing $q^e-1$, but not dividing $q^f-1$ for any $f<e$, and $\ell\ge e+1$. \end{lem} In Table~\ref{tab:tori} we have collected for each type of classical group two maximal tori $T_1,T_2$ of $H$ (indicated by their orders). Then the order of $T_i$ is divisible by a Zsigmondy prime divisor $\ell_i$ of $q^{e_i}-1$, with $e_i$ given in the table (unless $e_i=2$ or $(e_i,q)=(6,2)$). \begin{table}[htbp] \caption{Two tori for classical groups.} \label{tab:tori} \[\begin{array}{cc|cc|cc} H& & |T_1|& |T_2|& e_1& e_2\cr \hline A_{n-1}& (n\ge2)& (q^n-1)/(q-1)& q^{n-1}-1& n& n-1\cr \tw2A_{n-1}& (n\ge3\text{ odd})& (q^n+1)/(q+1)& q^{n-1}-1& 2n& n-1\cr & (n\ge4\text{ even})& q^{n-1}+1& (q^n-1)/(q+1)& 2n-2& n\cr B_n,C_n& (n\ge2\text{ even})& q^n+1& (q^{n-1}+1)(q+1)& 2n& 2n-2\cr & (n\ge3\text{ odd})& q^n+1& q^n-1& 2n& n\cr D_n& (n\ge4\text{ even})& (q^{n-1}+1)(q+1)& (q^{n-1}-1)(q-1)& 2n-2& n-1\cr & (n\ge5\text{ odd})& (q^{n-1}+1)(q+1)& q^n-1& 2n-2& n\cr \tw2D_n& (n\ge4)& q^n+1& (q^{n-1}+1)(q-1)& 2n& 2n-2\cr \end{array}\] \end{table} \begin{prop} \label{prop:crosschar} Assume that $S$ is of classical Lie type not in characteristic~$p$. Then Theorem~\ref{thm:almost-simple} holds for all $S\le G\le{\operatorname{Aut}}(S)$. \end{prop} \begin{proof} Let ${\mathbf{H}},H$ be as above so that $S=[H,H]$. We distinguish three cases. \noindent {\bf Case 1:} $A\le S$.\\ The cases when $e_i\le2$, that is, $H$ is of type $A_1$, $A_2$, $\tw2A_2$ or $B_2$, will be considered in Proposition~\ref{prop:smallrank}. For all other types, for $X$ we choose a maximal cyclic subgroup of $T_i\cap S$ for $i=1,2$, with $T_i$ from Table~\ref{tab:tori}. Note that the orders of $T_1\cap S,T_2\cap S$ are coprime, and $T_i$ is the centraliser in $H$ of any $s_i\in T_i$ of order $\ell_i$. Assume that $\NN_S^A(X,p)\ne\varnothing$. By Lemma~\ref{lem:normElem} and the fact that $A\le S$, $X$ normalises a non-trivial elementary abelian $p$-subgroup $E$ of $S$. Let $\pi:\tilde{\mathbf{H}}\rightarrow{\mathbf{H}}$ be a simply-connected covering of ${\mathbf{H}}$, and hence $\ker(\pi)=Z(\tilde{\mathbf{H}})$. We let $\tilde E$ be a (normal) Sylow $p$-subgroup of the full preimage of $E$ in $\tilde{\mathbf{H}}$. Then $\tilde E$ is normalised by the full preimage of $X$. First assume that $|Z(\tilde{\mathbf{H}})|$ is prime to $p$. Then $\tilde E\cong E$ is abelian.
As $\tilde{\mathbf{H}}$ is simply-connected, $p>2$ is not a torsion prime of $\tilde{\mathbf{H}}$ (see \cite[Tab.~14.1]{MT}), so an inductive application of \cite[Thm.~14.16]{MT} to a sequence of generators of the abelian group $\tilde E$ shows that ${\mathbf{C}}:=C_{\tilde{\mathbf{H}}}(\tilde E)$ contains a maximal torus of $\tilde{\mathbf{H}}$ and is connected reductive, hence a subsystem subgroup of $\tilde{\mathbf{H}}$ of maximal rank. Then $N_{\tilde{\mathbf{H}}}({\mathbf{C}})={\mathbf{C}} N_{\tilde{\mathbf{H}}}({\mathbf{T}})$ for any maximal torus ${\mathbf{T}}$ of $\tilde{\mathbf{H}}$, so $N_{\tilde{\mathbf{H}}}({\mathbf{C}})/{\mathbf{C}}$ is isomorphic to a section of the Weyl group $W$ of $\tilde{\mathbf{H}}$. As $N_{\tilde{\mathbf{H}}}(\tilde E)\le N_{\tilde{\mathbf{H}}}(C_{\tilde{\mathbf{H}}}(\tilde E)) =N_{\tilde{\mathbf{H}}}({\mathbf{C}})$ we see that $N_{{\mathbf{H}}}(E)/C_{{\mathbf{H}}}(E)$ is a section of $W$. \par Now note that the order of the Weyl group of ${\mathbf{H}}$ is not divisible by any prime larger than $e_i$, except for $H$ of type $D_n$, with $n\ge4$ even and $e_2=n-1$. Here, $\ell_2\ge e_2+1=n$, but $n$ is even so that in fact $\ell_2>n$ does not divide the order of the Weyl group either. This shows that elements $s_i\in X$ of order $\ell_i$ must centralise $E$, for $i=1,2$. So $p$ divides the order of $C_S(s_i)=T_i\cap S$ for $i=1,2$, a contradiction as these orders are coprime. \par The cases when $(q,e_i)=(2,6)$, that is, $S={\operatorname{L}}_6(2),{\operatorname{L}}_7(2),{\operatorname{U}}_6(2)$, ${\operatorname{O}}_7(2),{\operatorname{O}}_8^\pm(2),{\operatorname{O}}_9(2)$ will be handled in Proposition~\ref{prop:noZsigmondy}, while $S={\operatorname{U}}_4(2)\cong{\operatorname{S}}_4(3)$ will be treated in Proposition~\ref{prop:smallrank}. \par Now assume that $\tilde E$ is non-abelian. Then $p$ divides $|Z(\tilde{\mathbf{H}})|$ and thus $S={\operatorname{L}}_n(q)$ or $S={\operatorname{U}}_n(q)$. Let $E_1$ be a minimal non-cyclic characteristic subgroup of $\tilde E$. Then $E_1$ is of symplectic type, hence extra-special (see \cite[(23.9)]{Asch}) and normalised by $X$. Write $|E_1|=p^{2a+1}$, then $p^a\le n$ as $E_1\le{\operatorname{SL}}_n(q)$ or ${\operatorname{SU}}_n(q)$. Now the outer automorphism group of $E_1$ is ${\operatorname{Sp}}_{2a}(p)$, and all prime divisors of its order are at most $(p^a+1)/2<n$. But our Zsigmondy prime divisors $\ell_i$ of $|X|$ satisfy $\ell_i\ge n$, so again we conclude that $X$ must centralise an element of order~$p$. We conclude as before. \noindent {\bf Case 2:} $A\not\le S$ contains diagonal automorphisms.\\ In this case by Proposition~\ref{prop:oddout} we have $S={\operatorname{L}}_n(q)$ or $S={\operatorname{U}}_n(q)$. Here let $X$ be generated by a regular unipotent element. By Lemma~\ref{lem:out}, if $X$ normalises a non-trivial $p$-subgroup generated by conjugates of $A$, $X$ must centralise some non-trivial element of order $p$. But the centraliser of a regular unipotent element in the group ${\operatorname{PGL}}_n(q)$ resp.~${\operatorname{PGU}}_n(q)$ of inner-diagonal automorphisms is obviously unipotent, hence this case does not occur, as by assumption $p$ is not the defining characteristic. \noindent {\bf Case 3:} $A\not\le S$ does not contain diagonal automorphisms.\\ By Proposition~\ref{prop:oddout}, $A$ contains field, graph or graph-field automorphisms. 
Now in all cases, a maximal cyclic subgroup $X$ of $T_1\cap S$ can be identified to a subgroup of the multiplicative group of ${\mathbb{F}}_{q^{e_1}}$ by viewing some isogeny version of $H$ as a classical matrix group. The normaliser in $S$ of $X$ then acts by field automorphisms of ${\mathbb{F}}_{q^{e_1}}/{\mathbb{F}}_q$. Using the embedding into a matrix group one sees that the field automorphisms of $S$ act on $X$ as the field automorphisms of ${\mathbb{F}}_q/{\mathbb{F}}_r$, where $r$ is the characteristic of ${\mathbf{H}}$. In particular they induce automorphisms of $X$ different from those induced by $N_S(X)$. So with this choice of $X$ field automorphisms cannot lead to examples by Lemma~\ref{lem:out}. Finally, if $S={\operatorname{O}}_8^+(q)$ and $A$ contains graph or graph-field automorphisms of order~3 then we choose $X$ to be generated by an element $x$ of order~$(q^2+1)/d$ in a maximal torus $T\le S$ of order~$(q^2+1)^2/d^2$, where $d=\gcd(q-1,2)$. The normaliser $N_S(T)$ acts by the complex reflection group $G(4,2,2)$ of 2-power order, while in the extension by a graph or graph-field automorphism it acts by the primitive reflection group $G_5$. These automorphisms hence induce further non-trivial elements normalising $X$, and not centralising $x$. \end{proof} We now complete the proof for the small rank cases. \begin{prop} \label{prop:smallrank} Assume that $S={\operatorname{L}}_2(q)$ ($q\ge8$), ${\operatorname{L}}_3(q)$, ${\operatorname{U}}_3(q)$ ($q>2$), or ${\operatorname{S}}_4(q)$ ($q>2$), and $p\nmid q$. Then Theorem~\ref{thm:almost-simple} holds for all $S\le G\le{\operatorname{Aut}}(S)$. \end{prop} \begin{proof} We just need to deal with the case that $G=S$, since the other possibilities were already discussed in the proof of Proposition~\ref{prop:crosschar}. First assume that $S={\operatorname{L}}_2(q)$. If $q\ge8$ is even, then elements of order $q+1$ do not normalise any non-trivial $p$-subgroup with $p$ dividing $q-1$, while elements of order $q-1$ do not normalise any with $p|(q+1)$. If $q=r^f\ge9$ is odd, elements of order~$r$ do not normalise non-trivial $p$-subgroups for $2<p|(q^2-1)$. \par Next let $S={\operatorname{L}}_3(q)$. Elements of order $(q^2+q+1)/\gcd(3,q-1)$ do not normalise non-trivial $p$-subgroups for $p$ dividing $q^2-1$, while elements of order $2$ do not normalise non-trivial $p$-subgroups for $p$ dividing $(q^2+q+1)/\gcd(3,q-1)$. Similarly for $S={\operatorname{U}}_3(q)$, $q>2$, we can argue using elements of order $(q^2-q+1)/\gcd(3,q+1)$, respectively of order~2. \par Finally assume that $S={\operatorname{S}}_4(q)$. Using $X$ of order $q^2+1$ we see that we must have $p|(q^2+1)$. In this case, take $X$ of order~3. \end{proof} \begin{prop} \label{prop:noZsigmondy} Assume that $S$ is one of ${\operatorname{L}}_6(2),{\operatorname{L}}_7(2),{\operatorname{U}}_6(2),{\operatorname{O}}_7(2), {\operatorname{O}}_8^\pm(2)$ or ${\operatorname{O}}_9(2)$. Then Theorem~\ref{thm:almost-simple} holds for all $S\le G\le{\operatorname{Aut}}(S)$. \end{prop} \begin{table}[htbp] \caption{Some groups over ${\mathbb{F}}_2$.} \label{tab:noZsig} $\begin{array}{|lll|lll|ll|}\hline {\operatorname{L}}_6(2)& (2; 7) && {\operatorname{U}}_6(2)& (11; 3) && {\operatorname{O}}_8^+(2)& (7; 5)\\ {\operatorname{L}}_7(2)& (127; 31) && {\operatorname{O}}_7(2)& (7; 5) && {\operatorname{O}}_8^-(2)& (17; 3)\\ {\operatorname{O}}_9(2)& (17; 5) && & && & \\ \hline \end{array}$\\ \end{table} \begin{proof} In all of these groups, just one of the two Zsigmondy primes $\ell_i$ exists. 
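For example, for $S={\operatorname{L}}_6(2)$ the parameters of Table~\ref{tab:tori} are $e_1=6$ and $e_2=5$: the pair $(q,e_1)=(2,6)$ is precisely the exception in Lemma~\ref{lem:Zsig}, while $e_2=5$ yields the Zsigmondy prime $\ell_2=2^5-1=31$.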
By the argument given in the proof of Proposition~\ref{prop:crosschar}, we still obtain that either elements of order $\ell_i$ centralise a $p$-element or that $S\ne G$. We may then conclude as in the proof of Proposition~\ref{prop:Spor}, using an additional prime as in Table~\ref{tab:noZsig}, except in the two cases when $|{\operatorname{Out}}(S)|>2$: Let $S={\operatorname{U}}_6(2)$. Here $\ell_1=11$ shows that $p=11$ if $G=S$, and since a subgroup of order $7$ does not normalise one of order $11$, we reach a contradiction in this case. We assume therefore that $p=3$ and $A\not\leq S$. Let first $X$ be a subgroup of $S$ of order $11$ and $Y$ a maximal subgroup of $S$ containing $X$. Then $Y\simeq {\operatorname{U}}_5(2)$ or $M_{22}$, and therefore $\NN_Y(X,3)=\NN_S(X,3)=1$. Now if $E\in\NN^A_G(X,3)$ we have that $E\cap S\in\NN_S(X,3)=1 $ and so $E=C_E(X)$ has order $3$, and, $E$ being generated by conjugates of $A$, we have that $E$ is a conjugate of $A$. Now the group $S$ has four classes of outer elements of order three, denoted $3D, 3E, 3F$ and $3G$ in \cite{Atlas}. Amongst these just $3D$ has centraliser in $S$ divisible by $11$, namely $C_S(3D)\simeq {\operatorname{U}}_5(2)$. We have therefore that $A=\seq{x}$ for some $x$ in $3D$. Now take $X$ a subgroup of $S$ of order $7$. We may argue as before. Since a maximal subgroup $Y$ of $S$ containing $X$ is isomorphic to one of $$M_{22},\, 2^9.{\operatorname{L}}_3(4),\, {\operatorname{U}}_4(3)\co2,\, {\operatorname{S}}_6(2),\, {\operatorname{L}}_3(4)\co2,$$ we have that $\NN_Y(X,3)=\NN_S(X,3)=1$, and therefore any element $E\in \NN^A_G(X,3)$ is a cyclic subgroup conjugate to $A$ and centralised by $X$. This is a contradiction since $7$ does not divide $|{\operatorname{U}}_5(2)|$. Let $S={\operatorname{O}}_8^+(2)$. The prime $\ell_2=7$ shows that $p=7$ if $A\le S$. Since a subgroup of order $5$ does not normalise any non-trivial $7$-subgroup, we reach a contradiction if $A\leq S$. Let $p=3$ and $A\not\leq S$. In $G$ there are three classes of outer $3$-elements, two of order $3$ and one of order $9$. In all cases $5$ does not divide the order of their centralisers in $S$. Thus if $X$ is a cyclic subgroup of order $5$ we reach a contradiction with Lemma~\ref{lem:out}. \end{proof} \lhd\!\!\lhd \, bsection{Groups of exceptional type} In this section we prove Theorem~\ref{thm:almost-simple} when $S$ is one of the exceptional groups of Lie type. We keep the setting from the beginning of the previous subsection. Note that we need not treat $\tw2B_2(2)$ (which is solvable), $G_2(2)\simeq{\operatorname{U}}_3(3).2$, $^2G_2(3)\simeq{\operatorname{L}}_2(8).3$ and $\tw2F_4(2)'$ (see Section~\ref{subsec:spor}). As in the case of classical groups we provide in Table~\ref{tab:exc} for each type of group two maximal tori of $H$, indicated by their orders. Here, we denote by $\Phi_n$ the $n$-th rational cyclotomic polynomial evaluated at $q$, and moreover we let $\Phi_8'=q^2+\sqrt{2}q+1,\, \Phi_8''=q^2-\sqrt{2}q+1, \Phi_{24}'=q^4+\sqrt{2}q^3+q^2+\sqrt{2}q+1$, $\Phi_{24}''=q^4-\sqrt{2}q^3+q^2-\sqrt{2}q+1$ for $q^2=2^{2f+1}$, and $\Phi_{12}'=q^2+\sqrt{3}q+1,\, \Phi_{12}''=q^2-\sqrt{3}q+1$ for $q^2=3^{2f+1}$. We then have $\gcd(|T_1|,|T_2|)=d$, where $d=(3,q-1)$, $(3,q+1)$ and $d=(2,q-1)$ for $S=E_6(q), \tw2E_6(q), E_7(q)$ respectively, and $d=1$ otherwise. Furthermore, $|H:S|=|T_i:T_i\cap S|=d$ in all cases. 
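To illustrate the coprimality statement in a concrete instance, take $S=E_6(q)$, where $|T_1|=\Phi_3\Phi_{12}$ and $|T_2|=\Phi_9$: for $q=4$ one finds \[ \Phi_3(4)\Phi_{12}(4)=21\cdot 241, \qquad \Phi_9(4)=4161=3\cdot 19\cdot 73, \] so that $\gcd(|T_1|,|T_2|)=3=(3,q-1)$, whereas for $q=2$ one has $\gcd(7\cdot 13,\,73)=1=(3,q-1)$.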
\begin{table}[htbp] \caption{Two tori for exceptional groups.} \label{tab:exc} \[\begin{array}{|cc|ccc||cc|ccc|} \hline H& & |T_1|& |T_2|&& H& & |T_1|& |T_2|& d\cr \hline \tw2B_2(q^2)& (q^2\ge8)& \Phi_8'& \Phi_8''&& F_4(q)& & \Phi_8& \Phi_{12}& 1\cr ^2G_2(q^2)& (q^2\ge27)& \Phi_{12}'& \Phi_{12}''&& E_6(q)& & \Phi_3\Phi_{12}& \Phi_9& (3,q-1)\cr G_2(q)& (q\ge3)& \Phi_3& \Phi_6&& \tw2E_6(q)& & \Phi_6\Phi_{12}& \Phi_{18}& (3,q+1)\cr \tw3D_4(q)& & \Phi_3^2& \Phi_{12}&& E_7(q)& & \Phi_2\Phi_{14}& \Phi_1\Phi_7& (2,q-1)\cr \tw2F_4(q^2)& (q^2\ge8)& \Phi_{24}'& \Phi_{24}''&& E_8(q)& & \Phi_{15}& \Phi_{30}& 1\cr \hline \end{array}\] \end{table} \begin{prop} \label{prop:exc} Assume that $S$ is of exceptional Lie type not in characteristic~$p$. Then Theorem~\ref{thm:almost-simple} holds for all $S\le G\le{\operatorname{Aut}}(S)$. \end{prop} \begin{proof} First assume that $A\le S$. Using for $X$ maximal cyclic subgroups of $S\cap T_i$, for $T_i$ as listed in Table~\ref{tab:exc}, we conclude by the same arguments as in the proof of Proposition~\ref{prop:crosschar} that in a possible counterexample to Theorem~\ref{thm:almost-simple} the prime $p$ would divide the orders of both $S\cap T_i$, $i=1,2$, which is a contradiction as their orders are coprime, unless possibly if $p$ is a torsion prime for $H$. The torsion primes for groups of exceptional Lie type are just the bad primes (see \cite[Tab.~14.1]{MT}); in particular $p\le5$, and even $p=3$ unless $S=E_8(q)$. The maximal rank of an elementary abelian $p$-subgroup of $H$, for $p$ odd, is at most the rank $m$ of $H$, see e.g.~\cite[Thm.~4.10.3]{GLS}. It is easy to check that for all bad primes $p\nmid q$ and $s\le m$, $p^s-1$ is not divisible by $\ell_i$ for $i\in\{1,2\}$, so again $T_i$ must centralise a non-trivial $p$-element, which contradicts the fact that $\gcd(|T_1|,|T_2|)=1$, except for the groups $F_4(2),E_6(2)$ and $\tw2E_6(2)$ (each with $p=3$). In all of the latter cases, at least one of the $\ell_i$ does not divide $3^s-1$ for $s\le 4$, and does not divide the centraliser order of an element of order $3$ either, so again we are done. \par Now assume that $A\not\le S$. Then either $A$ contains field automorphisms, in which case taking for $X$ a maximal cyclic subgroup of $T_1$ shows that no example arises by Lemma~\ref{lem:out} as field automorphisms induce proper non-inner automorphisms on this torus. Or, we have that $S=E_6(q)$ or $\tw2E_6(q)$, $p=3$, and $A$ contains diagonal automorphisms. In this case we take for $X$ the subgroup generated by a regular unipotent element; this has a unipotent centraliser in the group of inner-diagonal automorphisms and thus we are done. \end{proof} \lhd\!\!\lhd \, bsection{Groups of Lie type in defining characteristic} If $p$ is the defining prime for $S$, we can again make use of the two tori $T_1,T_2$ introduced before. \begin{prop} \label{prop:defchar} Assume that $S$ is of Lie type in characteristic~$p$. Then Theorem~\ref{thm:almost-simple} holds for all $S\le G\le{\operatorname{Aut}}(S)$. \end{prop} \begin{proof} Let $A\le H$ be cyclic of order~$p$ and $1\ne P\le H$ be a $p$-subgroup generated by conjugates of $A$. First assume that $A\le S$. Then $P$ is a non-trivial unipotent subgroup of $S$, hence its normaliser $N_S(P)$ is contained in some proper parabolic subgroup of $S$ (see \cite[Thm.~26.5]{MT}). Let $s$ be a regular semisimple element of $S$ in the torus $T_1$ as given in Table~\ref{tab:tori} when $S$ is classical, or in Table~\ref{tab:exc} in case $S$ is exceptional. 
Then the centraliser $C_S(s)$ is contained in $T_1$, in particular $s$ does not centralise any non-trivial split torus of ${\mathbf{H}}$ and so is not contained in a proper parabolic subgroup of $S$. Thus $\NN_S^A(X,p)=\varnothing$. \par If $A\not\le S$, then by Proposition~\ref{prop:oddout} either $A$ contains a field automorphism of $S$, or $p=3$ and $A$ contains a graph or graph-field automorphism. According to Lemma~\ref{lem:out}, $X$ is centralised by an outer $p$-element. Now as pointed out in the proof of Propositions~\ref{prop:crosschar} and~\ref{prop:exc}, field automorphisms do not enlarge the centraliser of $X$ as defined above, so we may assume that $S={\operatorname{O}}_8^+(q)$, $p=3$ and $H$ involves a graph or graph-field automorphism. In this case take $X$ generated by a semisimple element of order $(q^2+1)/\gcd(2,q-1)$ and conclude as in the proof of Proposition~\ref{prop:crosschar}. \end{proof} \section{Proof of Theorem A} \label{sec:main} In this section we complete the proof of Theorem A. We need the following result, whose proof can be found in \cite[Thm.~4.2]{INW}. \begin{lem} \label{Lem_W2_sol} Let $G$ be a finite group and $V$ a faithful irreducible $G$-module. Assume that $p$ is an odd prime number different from the characteristic of $V$ and that $A$ is a subgroup of $G$ of order $p$ that lies in $O_p(G)$. Then there exists an element $v\in V$ such that $$A\not\subseteq \bigcup_{g\in G} C_G(v)^g.$$ \end{lem} Given a finite group $G$ and a subgroup $A\leq G$ we say that the pair $(G,A)$ satisfies $(\ast)$ if for every conjugacy class $C$ of $G$ there exists $g\in C$ such that $A$ is subnormal in $\seq{A,g}$. \begin{proof}[Proof of Theorem~A] We argue by contradiction: assume that $G$ is a finite group, $A$ an odd $p$-subgroup of $G$, $A\not\leq O_p(G)$, and the pair $(G,A)$ satisfies the condition $(\ast)$. Moreover we assume that $|G|+|A|$ is minimal with respect to these conditions. We proceed by steps. \noindent {\bf Step 1.} We have $O_p(G)=1$. Indeed, note that $(G/O_p(G),AO_p(G)/O_p(G))$ satisfies $(\ast)$, therefore if $O_p(G)\ne 1$ then, by our minimality assumption, we would have that $AO_p(G)/O_p(G)\leq O_p(G/O_p(G))=1$, which is a contradiction. \noindent {\bf Step 2.} We have $|A|=p$. Let $B$ be a proper subgroup of $A$ and note that $(G,B)$ satisfies $(\ast)$. By the minimal choice, every proper non-trivial subgroup of $A$ lies in $O_p(G)$. Since $O_p(G)=1$, we conclude that $B=1$, i.e., $A$ has order $p$.\\ From now on we set $A=\seq{a}$. \noindent {\bf Step 3.} $G$ has a unique minimal normal subgroup $M$. Assume that $M$ and $N$ are two distinct minimal normal subgroups of $G$ and assume also, as we may since $M\cap N=1$, that $A\not\leq N$. Then $(G/N,AN/N)$ satisfies $(\ast)$ and so, by our minimal choice we have that $AN/N\leq O_p(G/N)$. In particular, $AN\lhd\!\!\lhd \, G$. Then also $AN\cap M\lhd\!\!\lhd \, G$. If $A\leq M$, then $A=A(N\cap M)=AN\cap M$, and we have that $A\lhd\!\!\lhd \, G$. Since $A$ is a $p$-subgroup, it follows that $A\leq O_p(G)$, which is not the case. Therefore $A\not\leq M$ and by the same arguments as for $N$ above, we conclude that $AM$, and thus also $AM\cap AN$, is subnormal in $G$. Finally note that $M\cap AN=1$. Indeed, otherwise we have that $A\leq MN$, as $|A|=p$ and $M\cap N=1$. Now $MN/N$ is a minimal normal subgroup of $G/N$ and, being isomorphic to $M$, it is not a $p$-subgroup by Step~1. Then $(MN/N)\cap O_p(G/N)= 1$, forcing $AN/N=1$, a contradiction.
Thus $M\cap AN=1$ and $A=A(M\cap AN)=AM\cap AN$, therefore is subnormal in $G$, which again contradicts Step 1. \noindent {\bf Step 4.} $M$ is non-abelian. Assume that $M$ is an elementary abelian $q$-group, with $q$ a prime different from $p$, by Step~1. Let $Y/M=O_p(G/M)$, then by our minimal assumption $A\leq Y$. We take $P\in\operatorname{Syl}_p(Y)$ such that $A\leq P$. By the Frattini argument $G=YN=MN$, with $N=N_G(P)$. Now $[N_M(P),P]\leq M\cap P=1$, thus $N_M(P)=C_M(P)$. Also, $M$ being normal and abelian, $C_M(P)$ is normalised by both $M$ and $N_G(P)$, thus $C_M(P)\trianglelefteq G$. As $M$ is the unique minimal normal subgroup of $G$, we have that either $C_M(P)=1$ or $C_M(P)=M$. Note that in the latter case $P$ is normal in $G$, which contradicts Step 1. Therefore we have that $G$ is a split extension $G=M\rtimes N$. Moreover, since $M$ is the unique minimal normal subgroup of $G$, $C_N(M)=1$, i.e., $N$ acts faithfully on $M$. Let $m$ be an arbitrary non-trivial element of $M$. By condition $(\ast)$ there exists some $n\in N$ such that $A\lhd\!\!\lhd \, \seq{A,m^n}$. In particular the subgroup $V:=\seq{a,a^{m^n}}$ is a $p$-group. As $m^n\in M$, $MV=MA$, and therefore $$V=MA\cap V=(M\cap V)A=A,$$ as $M$ and $V$ have coprime orders. Therefore $\seq{a}=\seq{a^{m^n}}$, i.e., $m^n$ normalises $A$. In particular, as $M$ is a normal $q$-subgroup, we have that $$[a, m^n]\in A\cap M=1,$$ which means that $A\lhd\!\!\lhd \, bseteq C_N(m)^n$. By the arbitrary choice of $m$ in $M$ we have reached a contradiction with Lemma \ref{Lem_W2_sol}. \noindent {\bf Step 5.} Let $M=S_1\times S_2\times \ldots \times S_n$ be the unique minimal normal subgroup of $G$, with all the $S_i$'s isomorphic to a finite non-abelian simple group $S$. Denote by $\pi_i$ the projection map of $M$ onto $S_i$, for every $i=1,2,\ldots n$. Let also $1=x_1, x_2,\ldots ,x_n$ be elements of $G$ such that $S_1^{x_i}=S_i$, for $i=1,2,\ldots, n$. Let $K$ be the kernel of the permutation action of $G$ on the set ${\mathcal{S}}:=\set{S_1,S_2,\ldots , S_n}$, i.e., $$K:=\bigcap_{i=1}^n N_G(S_i).$$ We treat separately the two cases: $A\not\leq K$ and $A\leq K$. \noindent {\bf{Case 1. $A\not\leq K$.}} \noindent Set $\overline{G}:=G/K$ and use the ``bar'' notation to denote subgroups and elements of $\overline{G}$. By induction, we have that $\overline{A}\leq O_p(\overline{G})$. Since $p$ is odd, by Gluck's Theorem (\cite[Cor.~5.7]{MW}) there exists a proper non-empty subset ${\mathcal{R}}\lhd\!\!\lhd \, bset {\mathcal{S}}$ such that $$\overline{G}_{\mathcal{R}}\cap O_p(\overline{G})=\overline{1},$$ where $G_{\mathcal{R}}$ denotes the stabiliser in $G$ of the set ${\mathcal{R}}$. Without loss of generality, we may assume ${\mathcal{R}}=\set{S_1,\ldots,S_r}$ for some $r< n$. Let $q$ be any prime different from $p$ dividing $|S_1|$, and let $s_1\in S_1$ be any non-trivial $q$-element. Set $$s_{\mathcal{R}}:=s_1s_1^{x_2}\ldots s_1^{x_r}\in M.$$ By assumption, there exists a $G$-conjugate of $s_{\mathcal{R}}$, say $y:=s_{\mathcal{R}}^g$, such that $A$ is subnormal in $\seq{a,y}$, in particular $\seq{a,a^y}$ is a $p$-subgroup. Thus $[a,y]=a^{-1}a^y$ is a $p$-element. Also, $[a,y]$ is $G$-conjugate to $[a^{g^{-1}},s_{\mathcal{R}}]$. 
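Indeed, since conjugation respects commutators, $[a,y]=[a,s_{\mathcal{R}}^{\,g}]=[a^{g^{-1}},s_{\mathcal{R}}]^{\,g}$.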
Since $\overline{a^{g^{-1}}}$ is a non-trivial element of $O_p(\overline{G})$, $a^{g^{-1}}$ does not stabilise ${\mathcal{R}}$, therefore there exists some $i\in {\mathcal{R}}$ such that $(S_i)^{a^{g^{-1}}}=S_j$ for some $j\not\in {\mathcal{R}}$. This forces $\pi_j([a^{g^{-1}},s_{\mathcal{R}}])$ to be a non-trivial $q$-element of $S_j$, and since $p\ne q$, $[a^{g^{-1}},s_{\mathcal{R}}]$ cannot be a non-trivial $p$-element of $M$. So we have $[a,y]=1$, but then $\seq{a}$ stabilises ${\mathcal{R}}$, which is in contradiction with $\overline{G}_{\mathcal{R}}\cap O_p(\overline{G})=\overline{1}$. \noindent {\bf{Case 2. $A\leq K$.}} \noindent We consider first the case in which $A\leq C_G(S_i)$, for every $i=1,\ldots,n$. Then $$A\leq \bigcap_{i=1}^n C_G(S_i)=(C_G(S_1))_G,$$ the normal core of $C_G(S_1)$ in $G$. Since $M$ is the unique minimal normal subgroup of $G$ and $M\not \leq (C_G(S_1))_G$, we necessarily have that $(C_G(S_1))_G=1$, and so $A=1$, which is a contradiction. \noindent Assume now that $A$ does not centralise some $S_i$, say $S_1$. Let $1\ne s_1\in S_1$ and let $m=s_1^{\null}s_1^{x_2}\ldots s_1^{x_n}\in M$. Let $g\in G$ be such that $A\lhd\!\!\lhd \, \seq{A,m^g}$. Writing $m^g=h_1k$, with $h_1=s_1^{x_ig}\in S_1$ for some $i=1,\ldots,n$, and $k=\prod_{j\ne i}s_1^{x_jg}\in S_2\times\ldots \times S_n$ we have that for every $u,v\in \mathbb{N}$ $$[a^u,(m^g)^v]=[a^u,h_1^vk^v]=[a^u,k^v][a^u,h_1^v]^{k^v} =[a^u,h_1^v][a^u,k^v],$$ since $A$ normalises each $S_i$ and $S_1$ is centralised by $S_j$, for every $j\ne 1$. In particular we have that $$[A,\seq{m^g}]= [A,\seq{h_1}]\times [A,\seq{k}].$$ Therefore $\pi_1([A,\seq{m^g}])=[A,\seq{h_1}]$ is a $p$-subgroup of $S_1$. Finally note that $h_1=s_1^{x_ig}$, and so for the arbitrary element $s_1\in S_1$ there exists $x_ig\in N_G(S_1)$ such that $A\lhd\!\!\lhd \, \seq{A, s_1^{x_ig}}$. In particular if $s_1$ is chosen to be a $p'$-element of $S_1$ we have that $s_1$ normalises a non-trivial $p$-subgroup of $G$ which is generated by $G$-conjugates of $A$: indeed, setting $J:=\seq{A, s_1^{x_ig}}$, the subnormal $p$-subgroup $A$ lies in $O_p(J)$, so $\seq{A^J}\leq O_p(J)$ is a non-trivial $p$-subgroup generated by $G$-conjugates of $A$ and normalised by $s_1^{x_ig}$, and conjugating by $(x_ig)^{-1}$ gives the claim. Therefore, as $A\not\leq C_G(S_1)$, we have proved that the almost simple group $\tilde G:=N_G(S_1)/C_G(S_1)$ contains a non-trivial subgroup of order~$p$, namely $\tilde A:=AC_G(S_1)/C_G(S_1)$, such that for every cyclic $p'$-subgroup $\widetilde{X}$ of $F^*(\tilde G)$ we have that $\NN_{\tilde G}^{\tilde A}(\tilde X,p)\ne \varnothing$. This is in contradiction to Theorem~\ref{thm:almost-simple}. \end{proof} We end this section by showing with an easy example that for $p=2$ Theorem A is no longer true. \begin{exmp} Let $H$ be a Sylow $2$-subgroup of ${\operatorname{GL}}_2(3)$, namely $H$ is a semidihedral group of order 16, acting, in the natural way, on the natural module $M\simeq C_3\times C_3$. Let $G$ be the semidirect product $M\rtimes H$, and $a$ a non-central involution of $H$. Since $O_2(G)=1$, $\seq{a}$ is not subnormal in $G$. We show that $\seq{a}$ satisfies $(\ast)$. A non-trivial element of $G$ has order either a $2$-power, or 3, or 6. In the first case, it is conjugate to an element $g$ of $H$ and so $\seq{a}$ is subnormal in the $2$-group $\seq{a,g}$. In the second case, the element lies in $M$, but note that every element of $M$ centralises a conjugate in $H$ of $a$, i.e., $M=\bigcup_{x\in H}C_M(a^x)$, thus there exists an $x\in H$ such that $\seq{a}$ is subnormal in $\seq{a,g^x}\simeq C_6$. In the latter case, up to conjugation, we have $\seq{a,g}=\seq{g}$.
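The covering $M=\bigcup_{x\in H}C_M(a^x)$ used above can also be checked directly: every non-central involution of $H$ acts on $M$ with eigenvalues $1$ and $-1$, hence centralises a subgroup of order $3$; since $H$ permutes the four $1$-dimensional subspaces of $M$ transitively, every non-trivial element of $M$ is centralised by some $H$-conjugate of $a$.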
\end{exmp} \section{Other conditions for subnormality} \label{sec:others} As stated in the Introduction, in this Section we briefly analyse similar variations related to the other criteria for subnormality given by the original Theorem of Wielandt (namely conditions (iii) and (iv)). We see that in general these generalisations fail to guarantee the subnormality of odd $p$- subgroups. Given a finite group $G$ and an odd $p$-subgroup $A$ of $G$, we consider the following condition: \begin{itemize} \item[($\ast \ast$)] \textit{for every conjugacy class $C$ of $G$ there exists $g\in C$ such that $A\lhd\!\!\lhd \, \seq{A,A^g}$}. \end{itemize} \noindent It is trivial that condition $(\ast)$ implies $(\ast \ast)$.\\ The next result shows that $(\ast\ast)$ is enough to guarantee the subnormality of $A$ in the class of finite solvable groups. \begin{thm} \label{thm:one} Let $G$ be a finite solvable group and $p$ a prime. If there exists a $p$-subgroup $A$ of $G$ satisfying $(\ast \ast)$ then $A\leq O_p(G)$. \end{thm} \begin{proof} As $A$ is nilpotent, every subgroup of $A$ also satisfies $(\ast \ast)$. Therefore we can assume that $A$ is cyclic, say $A=\seq{a}$.\\ We argue by induction on the order of $G$. \\ Note that if $M$ is a minimal normal subgroup of $G$ the assumption holds for the group $G/M$. Thus in particular, we may assume that $G$ admits a unique minimal normal subgroup, say $M$, and that $aM\in O_p(G/M):=Y/M$. Now if $M$ is a $p$-group we are done. Let $M$ be an elementary abelian $q$-group, with $q\ne p$. Take $P$ a Sylow $p$-subgroup of $Y$ containing $a$, so that by the Frattini argument $G=YN=MN$, with $N=N_G(P)$. Being $M$ minimal normal in $G$, we have that $C_M(P)=M\cap N_G(P)=1$ (otherwise $Y=M\times P$ and $a\in O_p(G)$). Thus $G=M\rtimes N$. Since also $M$ is the unique minimal normal subgroup of $G$ we have $C_N(M)=1$. Let $m$ be a non-trivial element of $M$. By assumption there exists $n\in N$ such that the subgroup $V:=\seq{a,a^{m^n}}$ is nilpotent. In particular, as $m^n\in M$, $MV=M\seq{a}$, and therefore $$V=M\seq{a}\cap V=(M\cap V)\times \seq{a},$$ forcing $\seq{a}=\seq{a^{m^n}}$. So $$[a, m^n]\in A\cap M=1,$$ which means that $A\lhd\!\!\lhd \, bseteq C_N(m)^n$. By the arbitrary choice of $m$ in $M$ we have reached a contradiction with Lemma~\ref{Lem_W2_sol}. \end{proof} For non-solvable groups the situation is completely different and the following example shows that there are almost simple groups with non-trivial $p$-subgroups satisfying $(**)$. \begin{exmp} Every subgroup of ${\mathfrak{S}}_8$ generated by a $3$-cycle satisfies $(\ast\ast)$. Indeed, let $A=\seq{(123)}$ and $C$ a conjugacy class of ${\mathfrak{S}}_8$. If every element of $C$ is the product of at least three disjoint cycles of length $>1$, then $C$ contains $g=(14...)(25...)(36...)...$ and so $A$ is subnormal in the abelian subgroup $\seq{A,A^g}$. If $C$ contains a $k$-cycle, then $k\geq 6$ otherwise there exists $g$ in $C$ fixing pointwise $\{1,2,3\}$ and so $\seq{A,A^g}=A$. But then take $g=(142536...)\in C$ and argue as before. The remaining case is when the elements of $C$ are products of two disjoint cycles and fix at most two points. If one of these cycle is a $3$-cycle, then $g=(123)...\in C$, forcing again $A^g=A$. We can then assume that one cycle is at least a $2$-cycle and the other a $4$-cycle, but then take $g=(14...)(2536...)\in C$ and conclude as before. A similar behaviour can be noticed for every prime $p$, if $n$ is big enough. 
\end{exmp} \noindent {\bf Acknowledgement.} The first author expresses his gratitude to C. Casolo for proposing this topic and for helpful discussions. \end{document}
\begin{document} \title[Functional Equations and the Cauchy Mean Value Theorem]{Functional Equations and the Cauchy Mean Value Theorem} \author{Zolt\'an M. Balogh} \address{ Institute of Mathematics\\ University of Bern\\ Sidlerstrasse 5\\ CH 3012\\ Switzerland} \email{[email protected]} \thanks{The authors were supported by the Swiss National Science Foundation, SNF, grant no. 200020$\_$146477.} \author{Orif O. Ibrogimov} \address{ Institute of Mathematics\\ University of Bern\\ Sidlerstrasse 5\\ CH 3012\\ Switzerland} \email{[email protected]} \author{Boris S. Mityagin} \address{ Department of Mathematics\\ The Ohio State University\\ 231 W 18th Ave. MW 534\\ Columbus Ohio 43210\\ U.S.A.} \email{[email protected]} \subjclass{39B22} \keywords{Mean Value Theorem, Functional Equations} \date{\today} \dedicatory{Dedicated to Professor J\"{u}rg R\"{a}tz } \begin{abstract} The aim of this note is to characterize all pairs of sufficiently smooth functions for which the mean value in the Cauchy Mean Value Theorem is taken at a point which has a well-determined position in the interval. As an application of this result, a partial answer is given to a question posed by Sahoo and Riedel. \end{abstract} \maketitle \section{Introduction} Given two differentiable functions $F,G:{\mathbb R} \to {\mathbb R}$, the Cauchy Mean Value Theorem (MVT) states that for any interval $[a,b]\subset {\mathbb R}$, where $a<b$, there exists a point $c$ in $(a,b)$ such that \begin{equation} \label{Cauchy} [F(b)-F(a)]\,g(c) = [G(b)-G(a)]\,f(c), \end{equation} where $f= F'$ and $g=G'$. A particular situation is the Lagrange MVT when $G(x)=x$ is the identity function, in which case \eqref{Cauchy} reads as \begin{equation} \label{Lagrange} F(b)-F(a) = f(c)(b-a). \end{equation} The problem to be investigated in this note can be formulated as follows. \begin{problem} \label{the-problem} Find all pairs $(F,G)$ of differentiable functions $F, G :{\mathbb R}\to{\mathbb R}$ satisfying the following equation \begin{equation} \label{eqn:cauchy.gen} [F(b)-F(a)] \, g(\alpha a+\beta b) = [G(b)-G(a)] \, f(\alpha a+\beta b) \end{equation} for all $a,b \in {\mathbb R}$, $a<b$, where $f=F'$, $g=G'$, $\alpha, \beta\in (0,1)$ are fixed and $\alpha+\beta=1$. \end{problem} For the case of the Lagrange MVT with $c=\frac{a+b}{2}$, this problem was considered first by Haruki \cite{Haruki} and independently by Acz\'el \cite{Aczel}, proving that the quadratic functions are the only solutions to \eqref{Lagrange}. This problem can serve as a starting point for various functional equations \cite{Sahoo-Riedel}. More general functional equations have been considered even in the abstract setting of groups by several authors including Kannappan \cite{Kannappan}, Ebanks \cite{Ebanks}, Fechner-Gselmann \cite{Fechner-Gselmann}. On the other hand, the result of Acz\'el and Haruki has been generalized for higher order Taylor expansion by Sablik \cite{Sablik}. For the more general case of the Cauchy MVT much less is known. We mention Aumann \cite{Aumann} illustrating the geometrical significance of this equation and the recent contribution of P\'ales \cite{Pales} providing the solution of a related equation under additional assumptions. In this note we provide a different approach to the Cauchy MVT. 
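To indicate the additional flexibility present in the Cauchy setting, observe that the pair $F(x)=\sinh x$, $G(x)=\cosh x$ satisfies \eqref{eqn:cauchy.gen} with $\alpha=\beta=1/2$: writing $m=\frac{a+b}{2}$, the product formulas give \[ \sinh b-\sinh a = 2\cosh(m)\,\sinh\Bigl(\frac{b-a}{2}\Bigr), \qquad \cosh b-\cosh a = 2\sinh(m)\,\sinh\Bigl(\frac{b-a}{2}\Bigr), \] so that both sides of \eqref{eqn:cauchy.gen} equal $2\sinh\bigl(\frac{b-a}{2}\bigr)\sinh(m)\cosh(m)$. Thus, in contrast with the Lagrange case, non-polynomial pairs occur.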
As it will turn out, the most challenging situation corresponds to $c=\frac{a+b}{2}$ in which case our main result is the following: \begin{theorem} \label{thm:sym.cauchy.main} Assume that $F,G: {\mathbb R} \to {\mathbb R}$ are three times differentiable functions with derivatives $F'=f$, $G'=g$ such that \begin{equation} \label{eqn:cauchy.sym1} [F(b)-F(a)]\,g\bigl(\frac{a+b}{2}\bigr) = [G(b)-G(a)]\,f\bigl(\frac{a+b}{2}\bigr) \end{equation} for all $a,b\in{\mathbb R}$. Then one of the following possibilities holds: \begin{enumerate}[(a)] \item $\{1, F, G\}$ are linearly dependent on ${\mathbb R}$; \item $F,G \in \mathrm{span} \{1,x,x^2\}$, $x\in{\mathbb R}$; \item there exists a non-zero real number $\mu$ such that \[ F,G \in \mathrm{span} \{1,e^{\mu x}, e^{-\mu x}\}, \quad x\in{\mathbb R}; \] \item there exists a non-zero real number $\mu$ such that \[ F,G \in \mathrm{span} \{1,\sin(\mu x), \cos(\mu x)\}, \quad x\in{\mathbb R}. \] \end{enumerate} \end{theorem} The paper is organized as follows. In Section 2 we consider the problem first for the known case of the Lagrange mean value theorem as an illustration of our method. In Section 3 we provide a preliminary result that will allow to pass local information to a global one about the pairs of differentiable functions $(F,G)$ satisfying \eqref{eqn:cauchy.gen}. In Sections 4, 5 we consider the asymmetric ($\alpha\neq\beta$) and symmetric ($\alpha=\beta=1/2$) cases, respectively. Section 6 is for final remarks. Here we also provide a partial result to an open problem by Sahoo and Riedel which corresponds to a more general version of \eqref{eqn:cauchy.gen}. \section{The Lagrange MVT with fixed mean value} Note that every $c \in (a,b)$ can be written uniquely as $c=\alpha a+\beta b$ for some $\alpha,\beta\in (0,1)$ with $\alpha+\beta=1$. It is easy to check that \eqref{Lagrange} holds for all $a,b\in{\mathbb R}$ with fixed $\alpha \neq 1/2$ if $F$ is a linear function, and with $\alpha = 1/2$ if $F$ is a quadratic function. We claim that the converse of this statement is also true. As mentioned earlier, there are various proofs of the latter in the literature, see for example \cite{Aczel}, \cite{Haruki} , \cite{Sahoo-Riedel}. Nevertheless, we give here a short and self-contained argument mainly to illustrate our approach to the more general case of the Cauchy MVT. \begin{proposition}\label{lagrange} Let $\alpha \in (0,1)$ be fixed and $\beta= 1-\alpha$. Assume that $F:{\mathbb R} \to {\mathbb R}$ is a continuously differentiable function with $F\rq{}=f$ such that \begin{equation} \label{convex-comb} F(b)-F(a)= f(\alpha b+ \beta a) (b-a) \quad \text{for all} \quad a,b\in {\mathbb R} \quad \text{with} \quad a<b. \end{equation} Then the following statements hold: \begin{enumerate} \item if $\alpha \neq 1/2$ then $F$ is a linear function; \item if $\alpha = 1/2$ then $F$ is a quadratic function. \end{enumerate} \end{proposition} \begin{proof} Let us denote by $\alpha b+ \beta a= x$ and $b-a= h$. Then \eqref{convex-comb} reads as \begin{equation} \label{new-lagrange} F(x+\beta h) -F(x-\alpha h)= f(x)\,h \quad \text{for all} \quad x \in {\mathbb R}, h>0. \end{equation} From this equation it is apparent that $f=F'$ is differentiable as a linear combination of two differentiable functions and thus $F$ is twice differentiable. By induction, it follows that $F$ is infinitely differentiable. 
Differentiating \eqref{new-lagrange} with respect to $h$, we obtain the relation \begin{equation} \label{diff-lagrange} \beta f(x+\beta h) + \alpha f(x-\alpha h) = f(x), \quad x \in {\mathbb R}, h>0. \end{equation} Again, we differentiate \eqref{diff-lagrange} with respect to $h$ and find that \begin{equation*} \beta^2 f'(x+\beta h) -\alpha^2 f'(x-\alpha h) =0, \quad x \in {\mathbb R}, h>0. \end{equation*} Since $f'$ is continuous, letting $h \searrow 0$, we obtain $$ (\beta^2-\alpha^2) f'(x)=(1-2\alpha)f'(x)=0 \quad \text{for all} \quad x\in {\mathbb R}.$$ If $\alpha \neq 1/2$, this implies that $f'=0$ identically. Therefore $f$ is constant and thus $F$ is a linear function, proving the first statement. If $\alpha = 1/2$, then the equation \eqref{diff-lagrange} reads as \begin{equation*} f\bigl(x+\frac{h}{2}\bigr) +f\bigl(x-\frac{h}{2}\bigr)= 2 f(x), \quad x \in {\mathbb R}, h>0, \end{equation*} and twice differentiation with respect to $h$ leads to \begin{equation*} f''\bigl(x+\frac{h}{2}\bigr) + f''\bigl(x-\frac{h}{2}\bigr) = 0, \quad x \in {\mathbb R}, h>0. \end{equation*} Now letting $h \searrow 0$, we get $f''(x)=0$ for all $x\in{\mathbb R}$, so $f$ is linear and $F$ is a quadratic function, proving the second statement. \end{proof} \section{The Cauchy MVT with fixed mean value} Let us introduce the sets \begin{align}\label{U_f,U_g} U_f := \{ x \in {\mathbb R}: f(x) \ne 0 \}, \quad U_g := \{ x \in {\mathbb R}: g(x) \ne 0 \}, \end{align} and also their complements $Z_f := {\mathbb R} \setminus U_f$ and $Z_g := {\mathbb R} \setminus U_g$. Observe that if $U_g$ is empty, i.e. $G$ is constant on ${\mathbb R}$, then \eqref{eqn:cauchy.gen} holds for trivial reasons (both sides are identically zero) for an arbitrary differentiable function $F$. Of course, we can change the roles of $G$ and $F$ and claim: if $F$ is constant then \eqref{eqn:cauchy.gen} holds for any differentiable function $G$. Assume therefore that $U_g\neq\emptyset$. Then there is a sequence of mutually disjoint open intervals $\{I_\sigma\}_{\sigma\in\Sigma}$, $\Sigma\subset\mathbb{N}$, such that \begin{align} \label{rep:U_g=sum.of.itvs} \displaystyle U_g=\bigcup_{\sigma\in\Sigma}I_{\sigma}. \end{align} \begin{proposition} \label{prop:cauchy.gen.U_g<>empty} If $U_g\neq\emptyset$ but $U_f\cap U_g = \emptyset$, then $U_f=\emptyset$, i.e. $f\equiv 0$ on ${\mathbb R}$ and thus $F$ is constant. \end{proposition} \begin{proof} By assumption, there is a non-empty interval $(p,q)\subset U_g$ such that $g(x)\neq 0$ on $(p,q)$, but $f(x)=0$ for all $x\in(p,q)$. Then with the change of variables $h=b-a$, $x=\alpha a+\beta b$, \eqref{eqn:cauchy.gen} yields \begin{equation} \label{constancy} F(x+\alpha h) - F(x-\beta h) = 0 \quad \text{for all} \quad x\in (p,q), \, h>0. \end{equation} Setting $y=x+\alpha h$ for $x\in(p,q)$ and $h>0$, we get $F(y)-F(y-h) = 0$ if $(h,y)$ lies within the semi-strip $L := \bigl\{(h,y):\, h>0, \, p+\alpha h<y<q+\alpha h\bigr\}$. \noindent Then, for $y>p$ choosing $h>0$ such that $(h,y)\in L$, we have \begin{align*} \frac{\partial}{\partial y}F(y) = \frac{\partial}{\partial y}F(y-h) &= -\frac{\partial}{\partial h}F(y-h)\\ &= -\frac{\partial}{\partial h}F(y) =0, \end{align*} so $F(y)$ is a constant, say $F(y)=F\bigl(\frac{p+q}{2}\bigr)$ for $y>p$. However, letting $x\to q^-$ in \eqref{constancy} and using the continuity of $F$, we have $F(q+\alpha h) = F(q-\beta h)$ for all $h>0$, and thus $F(y)$ is the same constant for all $y<q$. Therefore, $f(y)=F'(y)=0$ for all $y\in{\mathbb R}$.
\end{proof} Proposition \ref{prop:cauchy.gen.U_g<>empty} shows that the condition $U_f\cap U_g=\emptyset$ holds only if at least one of the sets $U_f$ and $U_g$ is empty. Then we have the simple cases described as in the beginning of the section. \begin{proposition} \label{prop:X} Let $(F,G)$ be a solution of the Problem~\ref{the-problem} satisfying \begin{align} \label{Uf_disj_Ug} U_f\cap U_g \neq \emptyset, \end{align} and consider the representation \eqref{rep:U_g=sum.of.itvs}. If $\{F, G, 1\}$ are linearly dependent as functions on $I_{\sigma}$ for every $\sigma\in\Sigma$, then $\{F, G, 1\}$ are linearly dependent on ${\mathbb R}$. \end{proposition} \begin{proof} For $\sigma_1, \sigma_2 \in \Sigma$ with $\sigma_1 \neq \sigma_2$, consider the intervals $I_{\sigma_1} := (p_1,q_1)$, $I_{\sigma_2}:=(p_2,q_2)$ with \begin{align}\label{pqpq} p_1<q_1\leq p_2<q_2, \end{align} and assume that $\{F, G, 1\}$ are linearly dependent on $I_{\sigma_1}$ and $I_{\sigma_2}$. Then it follows that there are constants $A_1$, $A_2$, $B_1$, $B_2 \in{\mathbb R}$ such that \begin{align}\label{FA_1B_1} F(x) &= A_1G(x)+B_1, \, x\in I_{\sigma_1},\\ \label{FA_2B_2} &= A_2G(x)+B_2, \, x\in I_{\sigma_2}. \end{align} With the change of variables $h=b-a$, $x=\alpha a+\beta b$, \eqref{eqn:cauchy.gen} yields \begin{equation*} [F(x+\alpha h)-F(x-\beta h)]\, g(x) = [G(x+\alpha h)-G(x-\beta h)]\, f(x) \end{equation*} for all $x\in{\mathbb R}$ and $h>0$. Since $f(x)=A_2g(x)$ if $x\in I_{\sigma_2}$ by \eqref{FA_2B_2} and $g(x)\neq 0$ for $x\in I_{\sigma_2}$, we have \begin{align}\label{FFA_2} F(x+\alpha h)-F(x-\beta h) = A_2[G(x+\alpha h)-G(x-\beta h)], \quad x\in I_{\sigma_2}, h>0. \end{align} If at the same time $x-\beta h\in I_{\sigma_1}$, then $F(x-\beta h)=A_1G(x-\beta h)+B_1$ by \eqref{FA_1B_1}. Inserting this value into \eqref{FFA_2}, we obtain \begin{align}\label{FA_2B_1} F(x+\alpha h) = A_2G(x+\alpha h)+(A_1-A_2)G(x-\beta h)+B_1 \end{align} for \begin{align}\label{I^2I^1h} x\in I_{\sigma_2}, \, x-\beta h\in I_{\sigma_1}, \, h>0. \end{align} Put $y=x+\alpha h$, then $x-\beta h=y-h$, and \eqref{I^2I^1h} means that $(h,y)$ lies within the parallelogram $ \Pi := \bigl\{(h,y): \, p_2+\alpha h<y<q_2+\alpha h, \, p_1+h<y<q_1+h\bigr\}$. \noindent Since $\beta\in(0,1)$, \eqref{pqpq} guarantees that $\Pi\neq\emptyset$, and \eqref{FA_2B_1} implies \begin{equation*} F(y)= A_2G(y)+(A_1-A_2)G(y-h)+B_1 \quad \text{for all} \quad (h,y)\in\Pi. \end{equation*} Therefore, at any point of $\Pi$, we have \begin{align*} 0=\frac{\partial}{\partial h}F(y) = -(A_1-A_2)G'(y-h) = (A_2-A_1)g(y-h). \end{align*} But $y-h\in I_{\sigma_1}$ by \eqref{I^2I^1h}, so $g(y-h)\neq 0$ and thus \begin{align}\label{A_1=A_2} A_2-A_1=0. \end{align} So far our analysis says nothing about $B_1$, $B_2$ in \eqref{FA_1B_1}, \eqref{FA_2B_2} but since $\sigma_1, \sigma_2\in\Sigma$ were arbitrary, \eqref{A_1=A_2} together with \eqref{FA_1B_1}, \eqref{FA_2B_2} imply \begin{align}\label{fAg} f(x)=Ag(x) \quad \text{for some constant} \quad A\in{\mathbb R} \quad \text{and all} \quad x\in U_g. \end{align} On the other hand, by changing the roles of $F$ and $G$ in the above analysis, we come to the conclusion that \begin{align}\label{gKf} g(x)=Kf(x) \quad \text{for some constant} \quad K\in{\mathbb R} \quad \text{and all} \quad x\in U_f. \end{align} By \eqref{Uf_disj_Ug} there is a point $x_0\in U_g\cap U_f$ so $AK=1$ and these coefficients are not zero. 
But then \eqref{fAg} implies $U_g\subset U_f$ and \eqref{gKf} implies $U_f\subset U_g$; therefore, $U_g=U_f$ and $Z_g=Z_f$. The latter means that \[ f(x)=Ag(x), \, g(x)=Kf(x) \quad \text{if} \quad x\in Z_g=Z_f \] for trivial reasons (all these values are zero), so with \eqref{fAg} and \eqref{gKf} these identities are valid on the entire ${\mathbb R}=U_f\cup Z_f=U_g\cup Z_g$. In particular, it follows that $\{F, G, 1\}$ are linearly dependent on ${\mathbb R}$. \end{proof} \section{The Cauchy MVT with fixed asymmetric mean value} In this section we consider the asymmetric case, i.e., in \eqref{eqn:cauchy.gen} we take \begin{align}\label{c} \alpha, \, \beta \in (0,1) \quad \text{with } \quad \alpha\neq 1/2 \quad \text{and} \quad \beta=1-\alpha. \end{align} The following proposition describes all pairs $(F,G)$ of two times continuously differentiable functions satisfying \eqref{eqn:cauchy.gen} under the assumption \eqref{c} on $\alpha,\beta$ in the intervals where $g=G'$ does not vanish. \begin{proposition} \label{cauchy1} Let $(F,G)$ be a solution of the Problem~\ref{the-problem} with $\alpha,\beta$ satisfying \eqref{c} and $I=(p,q)$, $-\infty\leq p<q \leq +\infty$, be an interval where the derivative $g(x)$ does not vanish. If $F,G$ are two times continuously differentiable on $I$, then $\{F, G, 1\}$ are linearly dependent on $I$. \end{proposition} \begin{proof} With the change of variables $h=b-a$, $x=\alpha a+\beta b$, \eqref{eqn:cauchy.gen} yields \begin{equation}\label{eqn_asym1} [F(x+\alpha h)-F(x-\beta h)]\, g(x) = [G(x+\alpha h)-G(x-\beta h)]\, f(x) \end{equation} if $x\in I$ and $h>0$ are such that $x+\alpha h$, $x-\beta h\in I$. The latter condition means that \eqref{eqn_asym1} holds if $(h,x)$ lies within the open triangle \begin{align}\label{triangleT} T := \bigl\{(h,x): \,0<h<q-p, p+\beta h<x<q-\alpha h \bigr\}. \end{align} By differentiating both sides of \eqref{eqn_asym1} with respect to $h$ twice, we obtain the following relation in $T$ \begin{align}\label{eqn_asym2} [\alpha^2 f'(x+\alpha h) - \beta^2 f'(x-\beta h)]\,g(x)= [\alpha^2 g'(x+\alpha h)-\beta^2 g'(x-\beta h)]\,f(x). \end{align} All the functions are continuous, so \eqref{eqn_asym2} holds on the closure $\overline{T}$ as well, in particular, on the interval $\{h=0, p<x<q\}$. Therefore, with $\beta^2-\alpha^2=1-2\alpha\neq 0$ by \eqref{c}, we get $f'(x)g(x)=g'(x)f(x)$ for all $x\in I=(p,q)$. We can divide both sides by $g^2(x)$ and conclude that $(f/g)'=0$ on $I$. This implies that $f/g = A$ for some constant $A\in {\mathbb R}$, and $F'(x)=f(x)=Ag(x)=AG'(x)$, $x\in I$. After integration we get $F(x)=AG(x)+B$, $x\in I$, for some constant $B\in{\mathbb R}$, so $\{F, G, 1\}$ are indeed linearly dependent on $I$. \end{proof} The following theorem is the main result of this section. \begin{theorem} \label{thcauchy1} Let $(F,G)$ be a solution of the Problem~\ref{the-problem} with $\alpha,\beta$ satisfying \eqref{c}. If $F,G$ are two times continuously differentiable on ${\mathbb R}$, then $\{F, G, 1\}$ are linearly dependent on ${\mathbb R}$, i.e. there exist constants $A,B,C\in{\mathbb R}$, not all of them zero, such that \begin{equation}\label{lin.dep.1} AF(x)+BG(x)+C=0 \quad \text{for all} \quad x\in{\mathbb R}. \end{equation} \end{theorem} \begin{proof} Consider the following cases: \noindent Case 1: $U_g = \emptyset$. In this case $G$ is constant on ${\mathbb R}$ and \eqref{eqn:cauchy.gen} holds for any differentiable function $F$. Hence \eqref{lin.dep.1} holds, for example, with $A=0$, $B=1$, $C=-G$ and thus $\{F, G, 1\}$ are linearly dependent on ${\mathbb R}$.
\noindent Case 2: $U_g \neq \emptyset$ but $U_g \cap U_f=\emptyset$. In this case Proposition \ref{prop:cauchy.gen.U_g<>empty} yields that $F$ is a constant on ${\mathbb R}$ and \eqref{eqn:cauchy.gen} holds for any differentiable function $G$. Hence \eqref{lin.dep.1} holds, for example, with $A=1$, $B=0$, $C=-F$ and thus $\{F, G, 1\}$ are again linearly dependent on ${\mathbb R}$. \noindent Case 3: $U_g \cap U_f \neq \emptyset$. In this case Propositions \ref{cauchy1} and \ref{prop:X} immediately imply that $\{F, G, 1\}$ are linearly dependent on ~${\mathbb R}$. \end{proof} \section{The Cauchy MVT with symmetric mean value} In this section we consider the problem of describing all pairs $(F,G)$ of smooth functions for which the mean value in \eqref{eqn:cauchy.gen} is taken at the midpoint of the interval. Our first result gives a necessary (and also sufficient in case $\{1,F,G\}$ are not linearly dependent) condition on such pairs in the intervals where $g=G'$ does not vanish. \begin{proposition} \label{cauchy2} Assume that $F,G: {\mathbb R} \to {\mathbb R}$ are three times differentiable functions with derivatives $F'=f$, $G'=g$. Let $I\subset {\mathbb R}$ be such an interval that $g\neq 0$ for all $x\in I$ and \eqref{eqn:cauchy.sym1} holds for all $a,b \in I$. Then there exist constants $A, K\in{\mathbb R}$ and $x_{0}\in I$ such that \begin{equation} \label{f,g} f(x) = \bigg(A + K \int_{x_{0}}^{x}\frac{dt}{g^{2}(t)}\bigg) \, g(x) \quad \text{for all} \quad x \in I. \end{equation} Moreover, if \eqref{f,g} holds with $K\neq0$, then \eqref{eqn:cauchy.sym1} holds if and only if \begin{equation} \label{integralcondition} \int_{x-h}^{x+h} g(t)\, \Bigg(\int_{x_{0}}^{t}\frac{du}{g^{2}(u)} \Bigg) \,dt = \Bigg(\int_{x-h}^{x+h} g(t) dt\Bigg) \Bigg(\int_{x_{0}}^{x}\frac{du}{g^{2}(u)} \Bigg) \end{equation} for all $ x,h \in {\mathbb R}$ such that $x, x+h, x-h\in I$. \end{proposition} \begin{proof} With the change of variables $x=\frac{a+b}{2}$, $h=\frac{b-a}{2}$, we can rewrite \eqref{eqn:cauchy.sym1} as \begin{equation} \label{modifcauchy} [F(x+h)-F(x-h)]g(x)= [G(x+h)-G(x-h)]f(x) \end{equation} for all $x,h\in {\mathbb R}$ with the property that $x,x+h,x-h \in I$. By differentiating this equality three times with respect to $h$, we get \begin{equation*} [f''(x+h)+f''(x-h)]\,g(x) = [g''(x+h)+g''(x-h)]\,f(x). \end{equation*} Setting $h=0$, we obtain \[ 0=f''(x)g(x)-f(x)g''(x) = \bigl(f'(x)g(x)-f(x)g'(x)\bigr)' \quad \text{for all} \quad x\in I, \] and thus $f'(x)g(x)-f(x)g'(x)=K$ for some constant $K$. Then $\bigl(\frac{f}{g}(x)\bigr)'= \frac{K}{g^2(x)}$, $x \in I$, and integration over $(x_0,x)$ with any $x_0\in I$ yields \eqref{f,g}. Now assume \eqref{f,g} holds with a nonzero constant $K$. Then we have \begin{align} \nonumber F(x+h) - F(x-h) &= \int_{x-h}^{x+h}f(t) dt \\ \nonumber &= \int_{x-h}^{x+h} \bigg(A + K\int_{x_0}^t \frac{du}{g^2(u)} \bigg)\,g(t) dt\\ \label{FCT_comp1} &= A\int_{x-h}^{x+h} g(t) dt + K\int_{x-h}^{x+h} g(t)\bigg(\int_{x_0}^t \frac{du}{g^2(u)} \bigg) dt \end{align} and \begin{align} \nonumber [G(x+h)-G(x-h)]\frac{f(x)}{g(x)} =& \Bigg(\int_{x-h}^{x+h} g(t) dt\Bigg) \Bigg(A + K\int_{x_0}^x \frac{du}{g^2(u)}\Bigg)\\\label{FCT_comp2} =& A\int_{x-h}^{x+h} g(t) dt + K\Bigg(\int_{x-h}^{x+h} g(t) dt\Bigg) \Bigg(\int_{x_{0}}^{x}\frac{du}{g^{2}(u)} \Bigg). \end{align} By comparing \eqref{FCT_comp1} and \eqref{FCT_comp2}, it is easy to see that \eqref{integralcondition} is equivalent to \eqref{eqn:cauchy.sym1}. 
\end{proof} The following example illustrates that there are non-trivial functions satisfying \eqref{integralcondition} (and hence \eqref{eqn:cauchy.sym1}) on ${\mathbb R}$. \begin{example} Consider $g(t)= e^t$ on $I={\mathbb R}$ and let $A=0, K=1, x_0=0$. The integral condition \eqref{integralcondition} reads as the following identity \begin{equation*} \int_{x-h}^{x+h} e^t \, \Bigg(\int_{0}^te^{-2u} du\Bigg) \,dt= \Bigg(\int_{x-h}^{x+h}e^t dt\Bigg)\Bigg(\int_{0}^x e^{-2u}du\Bigg). \end{equation*} A direct computation gives $f(x) = \sinh(x)=\frac{e^x-e^{-x}}{2}$, and consequently, \begin{align}\label{F=cosh} F(x) = \cosh(x) = \frac{e^x+e^{-x}}{2}, \quad G(x) = e^x, \, x\in{\mathbb R}. \end{align} \end{example} We invite the interested reader to verify directly that the pair $(F,G)$ in \eqref{F=cosh} satisfies the relation \eqref{eqn:cauchy.sym1}, giving a non--trivial example of such pairs. Now we assume that $K\neq 0$ and analyze the property \eqref{integralcondition} for all $x,h\in{\mathbb R}$ such that $x,x+h,x-h\in I$. Differentiating it with respect to $h$, we obtain \[ g(x+h)\int_{x_0}^{x+h}\frac{du}{g^2(u)} + g(x-h)\int_{x_0}^{x-h}\frac{du}{g^2(u)} = [g(x+h)+g(x-h)]\int_{x_0}^x\frac{du}{g^2(u)}. \] Differentiation two more times with respect to $h$ gives \[ g''(x+h)\int_{x_0}^{x+h}\frac{du}{g^2(u)} + g''(x-h)\int_{x_0}^{x-h}\frac{du}{g^2(u)} = [g''(x+h)+g''(x-h)]\int_{x_0}^x\frac{du}{g^2(u)}, \] for all $x\in I$ and $h\in{\mathbb R}$ such that $x,x+h,x-h\in I$. Setting $h=x-x_0$ in these two equations, we obtain \begin{align} \label{eqn_for_g} g(2x-x_0)\int_{x_0}^{2x-x_0}\frac{du}{g^2(u)} = [g(2x-x_0)+g(x_0)]\int_{x_0}^x\frac{du}{g^2(u)}, \end{align} and \begin{align} \label{eqn_for_g''} g''(2x-x_0)\int_{x_0}^{2x-x_0}\frac{du}{g^2(u)} = [g''(2x-x_0)+g''(x_0)]\int_{x_0}^x\frac{du}{g^2(u)}, \end{align} for all $x\in I$ with $2x-x_0 \in I$. Since $2x-x_0\in I$ and $g$ has no zeros in $I$, both sides of \eqref{eqn_for_g} do not vanish. By comparing \eqref{eqn_for_g''} and \eqref{eqn_for_g}, we get \begin{align}\label{frac_eq_g''} \frac{g''(2x-x_0)}{g(2x-x_0)} = \frac{g''(2x-x_0)+g''(x_0)}{g(2x-x_0)+g(x_0)} \end{align} for all $x\in I$ such that $2x-x_0 \in I$. Denoting by $y(x) := g(2x-x_0)$ and $\lambda := \frac{4g''(x_0)}{g(x_0)}$, \eqref{frac_eq_g''} yields the second order differential equation $y''-\lambda y=0$, whose general real-valued solution (depending on the sign of $\lambda$), has the following form \begin{align*} g(x) &= Px+Q, \quad &&\text{if} \quad \lambda=0;\\ g(x) &= Pe^{\sqrt{\lambda} x}+Qe^{-\sqrt{\lambda} x} \quad &&\text{if} \quad \lambda=\mu^2, \mu>0;\\ g(x) &= P\sin(\sqrt{-\lambda} x)+Q\cos(\sqrt{-\lambda} x) \quad &&\text{if} \quad \lambda=-\mu^2, \mu>0, \end{align*} where $P$, $Q$ are real constants. Hence $G$ has one of the following forms \begin{align}\label{G_quad} G(x)&=Ax^2+Bx+C,\\\label{G_exp} G(x)&=Ae^{\mu x}+Be^{-\mu x}+C, \quad \mu>0,\\\label{G_trig} G(x)&=A\sin(\mu x)+B\cos(\mu x)+C, \quad \mu>0, \end{align} where $A,B,C$ are real constants. \begin{remark} \label{rem:cauchy.sym} Altogether, we come to the following conclusion: on every interval $I\subset{\mathbb R}$ on which $G'\neq 0$, either $\{F, G, 1\}$ are linearly dependent, or $G$ and thus also $F$, cf. \eqref{f,g}, has one of the forms described in \eqref{G_quad}--\eqref{G_trig}. \end{remark} In the sequel, we call a function $G$ (resp. the pair $(F,G)$) to be of \textit{quadratic, exponential} or \textit{trigonometric type on $I$} if $G$ has (resp. 
both of $F$ and $G$ have) the form \eqref{G_quad}, \eqref{G_exp} or \eqref{G_trig}, respectively. Consider the set $U_g$ and its representation, cf. \eqref{U_f,U_g}, \eqref{rep:U_g=sum.of.itvs}. The following lemma plays a crucial role in the analysis of the equation \eqref{eqn:cauchy.sym1}. \begin{lemma} \label{lem:cauchy.sym.f(q)=0} Let $(p,q)\in\{I_{\sigma}\}_{\sigma\in\Sigma}$ be such that $p>-\infty$ and $f(p)=0$. Then $\{F, G, 1\}$ are linearly dependent on $[p,q)$. \end{lemma} \begin{proof} $g(p)=0$ by Definition \eqref{rep:U_g=sum.of.itvs} so by Remark \ref{rem:cauchy.sym}, it is sufficient to consider the following cases. \noindent Case 1: $G$ is of quadratic type on $(p,q)$. Then $F$ is also of quadratic type on $(p,q)$, and since $f(p)=g(p)=0$, we have $F,G\in\text{span}\bigl\{1, (x-p)^2\bigr\}$. Thus $\{F, G, 1\}$ are linearly dependent on $(p,q)$. \noindent Case 2: $G$ is of either exponential or trigonometric type on $(p,q)$. First suppose that $G$ is of exponential type on $(p,q)$. Then so is $F$ and since the set of functions satisfying \eqref{eqn:cauchy.sym1} is invariant with respect to the addition of constant functions, we can assume, without loss of generality, that $F,G \in \text{span} \bigl\{e^{\,\mu (x-p)}, e^{-\mu (x-p)}\bigr\}$ for some $\mu \neq 0$. Hence there are real constants $u$, $v$ such that $F(x) = ue^{\mu(x-p)} + ve^{-\mu(x-p)}$, $x\in (p,q)$. Since $F'(p)=f(p)=0$, we get $u=v$ and thus $F(x)=2u\cosh(\mu(x-p))$. The same argument for $G$ explains that $G(x)=2w\cosh(\mu(x-p))$ for some real $w$, and consequently $F$ and $G$ are multiples of the same function $\cosh(\mu(x-p))$. If $G$ is of trigonometric type, then by the same way as above, we can conclude that $F$ and $G$ are multiples of the same function $\cos(\mu(x-p))$, implying that $\{F, G, 1\}$ are linearly dependent on $[p,q)$. \end{proof} \paragraph{Proof of Theorem~\ref{thm:sym.cauchy.main}} Consider the set $U_g$ defined in \eqref{U_f,U_g}. If $U_g=\emptyset$, then $g\equiv 0$ on ${\mathbb R}$, and thus $G$ is identically constant on ${\mathbb R}$. In this case $F$ can be an arbitrary differentiable function on ${\mathbb R}$ and thus $\{1, F, G\}$ are linearly dependent on ${\mathbb R}$. If $U_g={\mathbb R}$, then it follows (cf. Remark \ref{rem:cauchy.sym}) that either $\{1, F, G\}$ are linearly dependent or $G$ has one of the forms \eqref{G_quad}--\eqref{G_trig} on the whole of ${\mathbb R}$. Moreover, we get the same conclusion if $U_g\cap U_f=\emptyset$ (cf. Proposition \ref{prop:cauchy.gen.U_g<>empty}). Next, let us assume that $U_g\cap U_f \neq \emptyset$ and $U_g$ is a proper subset of ${\mathbb R}$. Consider the representation ~\eqref{rep:U_g=sum.of.itvs}. It is clear (cf. Remark \ref{rem:cauchy.sym}) that the index set $\Sigma$ can be split into disjoint subsets as $\Sigma = \Sigma_{\textrm{lr}} \cup \Sigma_{\textrm{q}} \cup \Sigma_{\textrm{t}} \cup \Sigma_{\textrm{e}}$, where \begin{align*} \Sigma_{\textrm{lr}} :=& \, \bigl\{ \sigma\in\Sigma: \, \{F, G, 1\} \quad \text{are in linear relationship on} \; I_{\sigma} \bigr\},\\ \Sigma_{\textrm{q}} :=& \, \bigl\{\sigma\in\Sigma: \, (F, G) \quad \text{are of quadratic type on } \,I_{\sigma} \bigr\},\\ \Sigma_{\textrm{t}} :=& \, \bigl\{\sigma\in\Sigma: \, (F, G) \quad \text{are of trigonometric type on } \, I_{\sigma} \bigr\},\\ \Sigma_{\textrm{e}} :=& \, \bigl\{\sigma\in\Sigma: \, (F, G) \quad \text{are of exponential type on } \, I_{\sigma}\bigr\}. \end{align*} \noindent \textit{Claim 1}. 
If $\Sigma_{\textrm{lr}} \neq \emptyset$, then $\Sigma_{\textrm{lr}} = \Sigma$. \noindent Proof. Assume $\Sigma_{\textrm{lr}}$ is a proper subset of $\Sigma$. Then there exists $\sigma_2\in\Sigma$ such that $\sigma_2\notin\Sigma_{\textrm{lr}}$. Since $\Sigma_{\textrm{lr}} \neq \emptyset$, there is $\sigma_1 \in \Sigma_{\textrm{lr}}$ and $A_1\in{\mathbb R}$ such that $f(x)=A_1g(x)$ for $x\in I_{\sigma_1}$. Consider all $x,h\in{\mathbb R}$ such that $x+h \in I_{\sigma_2}$ and $x\in I_{\sigma_1}$. Using \eqref{eqn:cauchy.sym1} for $a=x-h$ and $b=x+h$, and recalling that $g\neq 0$ on $I_{\sigma_1}$, we get \begin{align} F(x+h)-A_1G(x+h) = F(x-h) - A_1G(x-h). \end{align} Therefore, \begin{align*} f(x+h) - A_1g(x+h) &= \frac{1}{2}\bigg(\frac{\partial}{\partial x} + \frac{\partial}{\partial h}\bigg) \, F(x+h) - \frac{A_1}{2}\bigg(\frac{\partial}{\partial x} + \frac{\partial}{\partial h}\bigg) \, G(x+h)\\ &= \frac{1}{2}\bigg(\frac{\partial}{\partial x} + \frac{\partial}{\partial h}\bigg) \bigl(F(x-h) - A_1G(x-h)\bigr) \\ &= 0, \end{align*} and thus $f(x+h) = A_1g(x+h)$ for all $x,h\in{\mathbb R}$ such that $x+h \in I_{\sigma_2}$ and $x\in I_{\sigma_1}$. From this it follows that $\{F, G, 1\}$ are in linear relationship on $I_{\sigma_2}$, that is, $\sigma_2 \in \Sigma_{\textrm{lr}}$, which leads to a contradiction. \qed \noindent \textit{Claim 2}. If $\Sigma_{\textrm{lr}} = \emptyset$, then only one of the index sets $\Sigma_{\textrm{q}}$, $\Sigma_{\textrm{t}}$, $\Sigma_{\textrm{e}}$ is non-empty. \noindent Proof. Let $\sigma\in \Sigma$ and $I_{\sigma} = (p,q)$. Since $U_g$ is a proper subset of ${\mathbb R}$, one of $p,q$ is finite. We can assume $p>-\infty$. Then $g(p)=0$, and Lemma \ref{lem:cauchy.sym.f(q)=0} yields $f(p)\neq 0$ (otherwise $\sigma$ would belong to $\Sigma_{\textrm{lr}}$). Hence, using \eqref{eqn:cauchy.sym1} for $a=p-h$ and $b=p+h$, we get \begin{align} \label{eqn:G(p+h)=G(p-h)} G(p+h)=G(p-h) \quad \text{for all} \quad h\in{\mathbb R}, \end{align} so the graph of $G$ is symmetric with respect to the vertical line $x=p$. If $\sigma\in\Sigma_{\textrm{q}}$ or $\sigma\in\Sigma_{\textrm{e}}$, then $q=+\infty$, since functions of quadratic type have exactly one critical point and functions of exponential type have at most one. Therefore, if $\sigma\in\Sigma_{\textrm{q}}$, then, by \eqref{eqn:G(p+h)=G(p-h)}, $G \in \text{span} \{1, (x-p)^2\}$, $x\in{\mathbb R}$, and $\Sigma = \Sigma_{\textrm{q}}$. Similarly, it follows from \eqref{eqn:G(p+h)=G(p-h)} that if $\sigma\in\Sigma_{\textrm{e}}$, then $\Sigma = \Sigma_{\textrm{e}}$. Next, assume $\Sigma_{\textrm{lr}} = \Sigma_{\textrm{q}} = \Sigma_{\textrm{e}} = \emptyset$. Then $\Sigma = \Sigma_{\textrm{t}}$ and let $\sigma\in\Sigma_{\textrm{t}}$. Since $G$ is of trigonometric type on $I_{\sigma}=(p,q)$, we must have $q<+\infty$. So $g(p)=g(q)=0$ and it follows as in the proof of Lemma \ref{lem:cauchy.sym.f(q)=0} that there are real constants $u$, $v$ such that \begin{equation}\label{G_in_(p,q)} G(x) = u+v\cos\bigl(\pi\frac{x-p}{q-p}\bigr), \quad x\in (p,q). \end{equation} Using \eqref{eqn:G(p+h)=G(p-h)} we obtain that \eqref{G_in_(p,q)} holds on the whole of ${\mathbb R}$. \qed Since $U_g\neq\emptyset$, at least one of $\Sigma_{\textrm{lr}}$, $\Sigma_{\textrm{q}}$, $\Sigma_{\textrm{t}}$, $\Sigma_{\textrm{e}}$ is non-empty. If $\Sigma_{\textrm{lr}} \neq \emptyset$, then Claim 1 and Proposition \ref{prop:X} imply that $\{F, G, 1\}$ are linearly dependent on ${\mathbb R}$. If $\Sigma_{\textrm{lr}} = \emptyset$, then Claim 2 yields that one of the possibilities $(b)$ -- $(d)$ holds.
\qed \section{Final Remarks} As a consequence of our main result we can give a partial answer to the following still open question of Sahoo and Riedel (cf. \cite[Section 2.7]{Sahoo-Riedel} for an equivalent formulation). \paragraph{Problem} \textit{Find all functions $F,G,\phi,\psi:{\mathbb R} \to {\mathbb R}$ satisfying \begin{equation} \label{op:Sahoo-Riedel} [F(x)-F(y)]\,\phi\bigl(\frac{x+y}{2}\bigr) = [G(x)-G(y)]\,\psi\bigl(\frac{x+y}{2}\bigr) \end{equation} for all $x,y\in{\mathbb R}$.} We provide a partial result for this problem under certain assumptions on the unknown functions. First let us make the change of variables $s=\frac{x+y}{2}$, $t=\frac{x-y}{2}$ and write \eqref{op:Sahoo-Riedel} equivalently as \begin{equation} \label{op:Sahoo-Riedel1} [F(s+t)-F(s-t)]\,\phi(s) = [G(s+t)-G(s-t)]\,\psi(s), \quad s,t\in{\mathbb R}. \end{equation} \begin{theorem} Let $F,G:{\mathbb R}\to{\mathbb R}$ be three times differentiable and $\phi,\psi:{\mathbb R}\to{\mathbb R}$ be arbitrary functions satisfying \eqref{op:Sahoo-Riedel1} on ${\mathbb R}$. If either $\phi\neq0$ or $\psi\neq0$ on ${\mathbb R}$, then one of the following possibilities holds: \begin{enumerate}[(a)] \item there exist constants $A_0,A_1,A_2\in{\mathbb R}$ such that for all $s\in{\mathbb R}$, we have $A_0+A_1F(s)+A_2G(s)=0$ and $G'(s)\,[A_1\psi(s)+A_2\phi(s)]=0$; \item there exist constants $A_0,A_1,A_2,B_0,B_1,B_2\in{\mathbb R}$ such that for all $s\in{\mathbb R}$, we have $F(s)=A_0+A_1s+A_2s^2$, $G(s)=B_0+B_1s+B_2s^2$ and \[ (A_1+2A_2s)\phi(s)=(B_1+2B_2s)\psi(s); \] \item there exist $\mu\neq0$ and constants $A_0,A_1,A_2,B_0,B_1,B_2\in{\mathbb R}$ such that for all $s\in{\mathbb R}$, we have $F(s)=A_0+A_1e^{\mu s}+A_2e^{-\mu s}$, $G(s)=B_0+B_1e^{\mu s}+B_2e^{-\mu s}$ and \[ (A_1e^{\mu s}-A_2e^{-\mu s})\phi(s) = (B_1e^{\mu s}-B_2e^{-\mu s})\psi(s); \] \item there exist $\mu\neq0$ and constants $A_0,A_1,A_2,B_0,B_1,B_2\in{\mathbb R}$ such that for all $s\in{\mathbb R}$, we have $F(s)=A_0+A_1\sin(\mu s)+A_2\cos(\mu s)$, $G(s)=B_0+B_1\sin(\mu s)+B_2\cos(\mu s)$ and \[ [A_1\cos(\mu s)-A_2\sin(\mu s)]\,\phi(s) = [B_1\cos(\mu s)-B_2\sin(\mu s)]\,\psi(s). \] \end{enumerate} \end{theorem} \begin{proof} Let $f,g$ be the derivatives of $F,G$, respectively, and let the sets $U_g$, $U_f$ (resp. $Z_g$, $Z_f$) be defined as in Section 3. Without loss of generality, assume that $\phi$ does not vanish on ${\mathbb R}$. By differentiating \eqref{op:Sahoo-Riedel1} with respect to $t$ and setting $t=0$ in the resulting equation, we get \begin{equation} \label{op1} f(s)\phi(s) = g(s)\psi(s), \quad s\in{\mathbb R}. \end{equation} For any $s\in U_g$ and $t\in{\mathbb R}$, by \eqref{op:Sahoo-Riedel1} and \eqref{op1}, we have \begin{align*} F(s+t)-F(s-t) &= [G(s+t)-G(s-t)]\,\frac{\psi(s)}{\phi(s)}\\ &= [G(s+t)-G(s-t)]\,\frac{f(s)}{g(s)}, \end{align*} and thus \begin{equation} \label{op2} [F(s+t)-F(s-t)]\, g(s) = [G(s+t)-G(s-t)]\,f(s), \quad s\in U_g, t\in{\mathbb R}. \end{equation} On the other hand, observe that we have $Z_g\subset Z_f$ by \eqref{op1} since $\phi\neq0$ on ${\mathbb R}$. So \eqref{op2} holds for all $s\in U_g\cup Z_g={\mathbb R}$. Therefore, Theorem~\ref{thm:sym.cauchy.main} can be applied to \eqref{op2} and the four characterizations follow immediately. \end{proof} It is likely that the methods of this paper work for related equations when we replace the linear mean $\alpha a + (1-\alpha) b$ by the $p$-mean $M^p_\alpha(a,b) = (\alpha a^p + (1-\alpha) b^p)^{\frac{1}{p}}$, for $a,b \geq 0$.
Here $M^p_\alpha(a,b)$ is defined for all values of $ p \neq 0$. For $p=0$ the corresponding mean is defined by $M^0_\alpha(a,b) = a^\alpha b^{1-\alpha}$. Moreover, for $p \in \{ -\infty, \infty \}$ we can define $M^{-\infty}_\alpha(a,b)= \min \{ a, b \}$ and $M^{\infty}_\alpha(a,b)= \max \{ a, b \}$. We intend to investigate this issue in a subsequent paper. The essence of our approach is to reduce a functional equation to an ODE. For this strategy we need certain smoothness assumptions. It would be interesting to find an alternative approach that does not require these additional smoothness assumptions. \subsection*{Acknowledgment} We would like to thank Professor J\"urg R\"atz for helpful conversations on the subject of this work. \end{document}
\begin{document} \begin{frontmatter} \title{Observability for Initial Value Problems with Sparse Initial Data} \author{Nicolae Tarfulea} \ead{[email protected]} \ead[url]{http://ems.calumet.purdue.edu/tarfulea} \address{Department of Mathematics, Purdue University Calumet, Hammond, IN 46323} \begin{abstract} In this work we introduce the concept of $s$-sparse observability for large systems of ordinary differential equations. Let $\dot x=f(t,x)$ be such a system. At time $T>0$, suppose we make a set of observations $b=Ax(T)$ of the solution of the system with initial data $x(0)=x^0$, where $A$ is a matrix satisfying the restricted isometry property. The aim of this paper is to give answers to the following questions: Given the observation $b$, is $x^0$ uniquely determined knowing that $x^0$ is sufficiently sparse? Is there any way to reconstruct such a sparse initial data $x^0$? \end{abstract} \begin{keyword}observability \sep sparse initial data \sep restricted isometry property \MSC[2010] 93B07 \sep 49J15 \sep 49K15 \end{keyword} \end{frontmatter} \section{Introduction and Results} In recent years a number of papers on signal processing have developed a series of ideas and techniques on the reconstruction of a finite signal $x\in \R^m$ from many fewer observations than traditionally believed necessary. It is now common knowledge that it is possible to exactly recover $x$ knowing that it is sparse or nearly sparse in the sense that it has only a limited number of nonzero components. A more formal definition of sparsity can be given through the $l_0$ norm $\| x\|_0:=\#\{ i:\, x_i\neq 0\}$, that is, the cardinality of $x$'s support. If $\| x\|_0\leq s$, for $s$ a nonnegative integer, then we say that $x$ is $s$-sparse. Since sparsity is a very often encountered feature in signal processing and many other mathematical models of real-life phenomena, estimation under sparsity assumption has been a topic of increasing interest in the last decades. At this point, the work on this subject is so extended and growing so rapidly that it is extremely difficult to mention without injustice its achievements and results. It is beyond the scope of this paper to review, even partially, the contributions to this new and very dynamic area of research. For example, in the signal processing case, the interested reader can find valuable insight in the very informative survey by Bruckstein, Donoho and Elad \cite{BDE}. This work addresses the recovery of the initial state of a high-dimensional dynamic variable from a restricted set of measurements, knowing that the initial state is sparse. More precisely, let $x(\cdot)$ be the solution of the following initial-value problem \begin{equation}\label{diffeq} \dot x(t)=f(t,x(t)),\mbox{ for } t>0;\quad x(0)=x^0. \end{equation} Suppose that we can observe \begin{equation} \label{observations} b=Ax(T) \end{equation} at a certain time $T>0$, where the vector $b$ represents the observations, and $A$ is an $n\times m$ measurement matrix (dictionary). As in signal processing case, the more interesting situation is when $n<<m$; one interprets $b$ as low-dimensional observations/measurements at time $T>0$ of the high-dimensional dynamic solution $x(\cdot)$. Here are two interesting questions that we address in this note: {\bf Question 1:} Given the observation $b$, is $x^0$ uniquely determined knowing that $x^0$ is sufficiently sparse? {\bf Question 2:} Is there any way to reconstruct such a sparse initial data $x^0$? 
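To fix ideas before stating the assumptions, the forward model \eqref{diffeq}-\eqref{observations} is easy to simulate. The snippet below is only a hypothetical illustration of the setup (the dynamics $f$, the dimensions, the sparsity level and the Gaussian measurement matrix are arbitrary choices made for the sketch, not part of the formulation above): it integrates the system from an $s$-sparse initial state and records the low-dimensional observation $b=Ax(T)$.
\begin{verbatim}
# Illustrative sketch of the forward model: integrate the ODE from a sparse
# initial state x0 and record the observation b = A x(T). All concrete
# choices below (dynamics, sizes, random matrix) are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

m, n, s, T = 200, 40, 5, 0.5
rng = np.random.default_rng(0)

x0 = np.zeros(m)                              # an s-sparse initial state
x0[rng.choice(m, s, replace=False)] = rng.standard_normal(s)

f = lambda t, x: -x + 0.1 * np.tanh(x)        # some Lipschitz vector field
xT = solve_ivp(f, (0.0, T), x0, rtol=1e-8).y[:, -1]

A = rng.standard_normal((n, m)) / np.sqrt(n)  # Gaussian matrices of this kind
b = A @ xT                                    # satisfy the RIP with high
                                              # probability for suitable n, s
\end{verbatim}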
Hereafter, $f:[0,T]\times\R^m\to\R^m$ is a Lipschitz function in the second variable, i.e., there is $L(\cdot)\in L^\infty ([0,T])$ such that $\| f(t,x)-f(t,y)\|\leq L(t)\| x-y\|$ in $[0,T]\times\R^m$. By $L$ we denote the $L^\infty$-norm of $L(\cdot)$ over the interval $[0,T]$. For $x\in\mathbf R^m$, the $l^p$-norm ($p\geq 1$) of $x$ is defined as usual by $\| x\|_p:=(\sum_{i=1}^m|x_i|^p)^{1/p}$. In what follows, we also assume that the matrix $A$ satisfies the restricted isometry property. Let us recall the concept of restricted isometry constants (see \cite{CT}). \begin{Def} For each integer $s=1,2,\ldots$ define the isometry constant $\delta_s$ of a matrix $A$ as the smallest number such that $(1-\delta_s)\| x\|_2^2\leq\| Ax\|_2^2\leq (1+\delta_s)\| x\|_2^2$ holds for all $x\in \mathbf R^m$ with $\| x\|_0\leq s$. \end{Def} The following definition introduces a new concept, that is, the notion of $s$-sparse observability. \begin{Def} The pair \eqref{diffeq}-\eqref{observations} is called $s$-sparse observable at time $T>0$ if the knowledge of $b$ allows us to compute the $s$-sparse initial data vector $x^0$. \end{Def} In other words, \eqref{diffeq}-\eqref{observations} is $s$-sparse observable at time $T>0$ if for any solutions $x_1(\cdot)$ and $x_2(\cdot)$ corresponding to the $s$-sparse initial data $x_1^0$ and $x_2^0$, respectively, with $Ax_1(T)=Ax_2(T)$, we have $x_1^0=x_2^0$. Our first result gives a positive answer to Question 1. In fact, it provides a sufficient condition for $s$-sparse observability. Roughly speaking, it says that $s$-sparse observability holds for sufficiently short periods of time. \begin{thm} \label{observable} Let $T<\frac{1}{L}\ln{(1+\frac{\sqrt{1-\delta_{2s}}}{\| A\|})}$. Then \eqref{diffeq}-\eqref{observations} is $s$-sparse observable at time $T$. \end{thm} In the remainder of this section we indicate how the sparsest initial data $x^0$ can be found, or approximated. We consider the more applicable situation in which the measurements at time $T$ are corrupted with noise. That is, \begin{equation} \label{observations_corrupted} b=Ax(T)+e, \end{equation} where $e$ is the noise term whose maximum magnitude is $\epsilon$ (i.e., $\| e\|_2\leq \epsilon$). Suppose we seek the sparsest initial data $x^0$ that solves \eqref{diffeq} and \eqref{observations_corrupted}. In order to narrow down to one well-defined (sparse) solution, we consider the problem: $$\mbox{{\bf $(P_0)$}}\quad\mbox{ Find }\hat{x}:=\arg\min_{\| b-Ax(T)\|_2\leq\epsilon}\| x\|_0.$$ Here $x(t)$ is the solution of \eqref{diffeq} together with the initial condition $x(0)=x$. However, this is a very hard combinatorial-dynamic optimization problem and practically impossible to solve. We propose reconstructing $x^0$ by replacing the $l_0$ norm $\| \cdot\|_0$ with the weighted $l_1$ norm $\|\cdot\|_{1,w}$ ($\| x\|_{1,w}:=\sum_{i=1}^mw_i|x_i|$, $w_i>0$, $i=1,2,...,m$), which is, in a natural way, its best convex approximant. This strategy originates in the work of Santosa and Symes \cite{SS} in the mid-eighties (see also \cite{DL,DS} for early results). In this context, we seek $x^0$ as the solution to the dynamic optimization problem $$\mbox{{\bf $(P_{1,w}^\epsilon)$}}\quad\mbox{ Find }x^*:=\arg\min_{\| b-Ax(T)\|_2\leq\epsilon}\| x\|_{1,w}.$$ Denote by $\tau$ the condition number of the matrix $W:=\diag\{w_i\}$, that is, $\tau:=\max_{1\leq i\leq m}\{w_i\}/\min_{1\leq i\leq m}\{w_i\}$, and by $x_s$ the vector that keeps only the $s$ largest (in absolute value) entries of the vector $x$, the others being set to zero.
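To give a concrete, though simplified, picture of $(P_{1,w}^\epsilon)$, suppose for a moment that $f$ is linear, say $f(t,x)=Mx$ for a fixed matrix $M$, so that $x(T)=e^{TM}x^0$ and the constraint $\| b-Ax(T)\|_2\leq\epsilon$ is convex in the initial data. Under this simplifying assumption (made here for illustration only, and not in the results below), $(P_{1,w}^\epsilon)$ becomes an ordinary convex program that can be handed to a generic solver; the sketch below uses the \texttt{cvxpy} package.
\begin{verbatim}
# Sketch of (P_{1,w}^eps) under the simplifying assumption f(t,x) = M x,
# so that x(T) = expm(T M) x0 and the problem is a convex program.
import numpy as np
import cvxpy as cp
from scipy.linalg import expm

def recover_initial_state(A, b, M, T, w, eps):
    # A: n x m measurement matrix, b: observations, M: system matrix,
    # T: observation time, w: positive weights, eps: noise level.
    m = A.shape[1]
    Phi = A @ expm(T * M)            # maps x0 to the noise-free observation
    x0 = cp.Variable(m)
    objective = cp.Minimize(cp.norm1(cp.multiply(w, x0)))
    constraints = [cp.norm(b - Phi @ x0, 2) <= eps]
    cp.Problem(objective, constraints).solve()
    return x0.value
\end{verbatim}
For a genuinely nonlinear $f$, the constraint is no longer convex and one would have to couple a numerical ODE solver with a nonlinear programming method; the theoretical guarantees below do not depend on how $(P_{1,w}^\epsilon)$ is solved.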
The following result gives a positive answer to Question 2. In essence, it states that for sufficiently small times, the accurate recovery of the sparse initial data $x^0$ can be done. Its proof is largely influenced by the methods and techniques used in \cite{C,CRT}. \begin{thm} \label{recover} If $\delta_{2s}<(1+\tau\sqrt{2})^{-1}$ and $T<\frac{1}{L}\ln{\Big( 1+\frac{1-\delta_{2s}(1+\tau\sqrt{2})}{(1+\tau )\| A\|\sqrt{1+\delta_{2s}}}\Big )}$, then the solution $x^*$ to $(P_{1,w}^\epsilon)$ satisfies $\| x^*-x^0\|_2\leq C_0s^{-1/2}\| x^0-x_s^0\|_1+C_1\epsilon$, with $C_0$ and $C_1$ constants independent of $x^0$. \end{thm} \section{Proofs of Results} \subsection{Proof of Theorem~\ref{observable}} Suppose that \eqref{diffeq}-\eqref{observations} is not $s$-sparse observable at time $T$. That is, there exist $s$-sparse vectors $x_1^0$ and $x_2^0$, with $x_1^0\neq x_2^0$, such that $Ax_1(T)=Ax_2(T)$, where $x_1(t)$ and $x_2(t)$ are solutions to \eqref{diffeq} together with the initial data $x_1(0)=x_1^0$ and $x_2(0)=x_2^0$, respectively. Since $x_1(t)=x_1^0+\int_0^tf(s,x_1(s))ds$ and $x_2(t)=x_2^0+\int_0^tf(s,x_2(s))ds$, it follows that $\| x_2(t)-x_1(t)\|_2 \leq \| x_2^0-x_1^0\|_2+\int_0^t\| f(s,x_2(s))-f(s,x_1(s))\|_2ds \leq \|x_2^0-x_1^0 \|_2+\int_0^tL(s)\| x_2(s)-x_1(s)\|_2ds $. By Gronwall's inequality, one obtains $\| x_2(t)-x_1(t)\|_2 \leq \| x_2^0-x_1^0\|_2e^{\int_0^tL(s)ds},$ and so \begin{align} \nonumber 0 =\| Ax_2(T)-Ax_1(T)\|_2 &=\| A (x_2^0-x_1^0)+A\int_0^T[f(t,x_2(t))-f(t,x_1(t))]dt\|_2 \\ \nonumber &\geq \| A(x_2^0-x_1^0)\|_2-\| A\| \int_0^TL(t)\| x_2(t)-x_1(t)\|_2dt \\ \nonumber &\geq \| A(x_2^0-x_1^0)\|_2-\| A\| \int_0^TL(t)\| x_2^0-x_1^0\|_2e^{\int_0^tL(s)ds}dt\\ \nonumber &\geq \| A(x_2^0-x_1^0)\|_2-M\| A\|\cdot\| x_2^0-x_1^0\|_2, \end{align} where $M:=e^{\int_0^TL(s)ds}-1$. Thus, $\| A(x_2^0-x_1^0)\|_2\leq M\| A\|\cdot\| x_2^0-x_1^0\|_2$. Then, because $x_2^0-x_1^0$ is $2s$-sparse and from the restricted isometry property, we obtain that $\sqrt{1-\delta_{2s}}\leq M\| A\|$, which cannot hold for $T<\frac{1}{L}\ln{(1+\frac{\sqrt{1-\delta_{2s}}}{\| A\|})}$. \subsection{Proof of Theorem~\ref{recover}} The following Lemma is a simple application of the parallelogram identity and is due to Cand\'{e}s \cite[Lemma 2.1.]{C}. We include it here, together with its proof, for reader's convenience. \begin{lem} \label{candes} Let $x$ and $x'$ be two vectors in $\mathbf R^m$. Suppose that $x$ and $x'$ are supported on disjoint subsets and $\| x\|_0\leq s$ and $\| x'\|_0\leq s'$. Then $|\langle Ax,Ax'\rangle |\leq\delta_{s+s'}\| x\|_2\| x'\|_2.$ \end{lem} \begin{proof} Denote by $y=x/\| x\|_2$ and $y'=x'/\| x'\|_2$ the unit vectors in the $x$ and $x'$ directions, respectively. By the restricted isometry property, it is easy to see that $2(1-\delta_{s+s'})\leq \| Ay\pm Ay'\|_2^2\leq 2(1+\delta_{s+s'}).$ These inequalities, together with the parallelogram identity, give $|\langle Ay,Ay'\rangle |\ =\frac{1}{4}| \| Ay+ Ay'\|_2^2-\| Ay- Ay'\|_2^2|\leq \delta_{s+s'}$, and so $|\langle Ax,Ax'\rangle |\leq\delta_{s+s'}\| x\|_2\| x'\|_2$, which concludes the proof. \end{proof} Let $y(t)$ be the solution to the initial value problem $\dot y(t)=f(t,y)$ in $(0,T)$, $y(0)=x^*$, where $x^*$ is the solution to $(P_{1,w}^\epsilon)$. Then, \begin{equation} \label{ineq0} \| Ay(T)-Ax(T)\|_2\leq \| b-Ay(T)\|_2+\| b-Ax(T)\|_2\leq 2\epsilon. \end{equation} Decompose $x^*$ as $x^*=x^0+h$. Then, as in \cite{C}, decompose $h$ into a sum of vectors $h_{T_0}$, $h_{T_1}$, ..., each of $s$-sparsity. 
Here, $T_0$ corresponds to the locations of the $s$ largest (in absolute value) coefficients of $x^0$, $T_1$ to the locations of the $s$ largest coefficients of $h_{T_0^c}$, $T_2$ to the locations of the next $s$ largest coefficients of $h_{T_0^c}$, and so on. Since $x(t)=x^0+\int_0^tf(s,x(s))ds$ and $y(t)=x^*+\int_0^tf(s,y(s))ds$, it follows that $\| y(t)-x(t)\|_2 \leq \| h\|_2+\int_0^tL(s)\| y(s)-x(s)\|_2ds$. By Gronwall's inequality, we get $\| y(t)-x(t)\|_2 \leq \| h\|_2e^{\int_0^tL(s)ds},\mbox{ for all }t\in [0,T].$ Then, \begin{align} \nonumber \| Ay(T)-Ax(T)\|_2 &=\| Ah+A\int_0^T[f(t,y(t))-f(t,x(t))]dt\|_2 \\ \nonumber &\geq \| Ah\|_2-\| A\| \int_0^TL(t)\| y(t)-x(t)\|_2dt \\ \nonumber &\geq \| Ah\|_2-\| A\| \int_0^TL(t)\| h\|_2e^{\int_0^tL(s)ds}dt\\ \label{ineq5} &\geq \| Ah\|_2-M\| A\|\| h\|_2, \end{align} where $M:=e^{\int_0^TL(s)ds}-1$. From \eqref{ineq0} and \eqref{ineq5}, we obtain \begin{equation} \label{iineq1} \| Ah\|_2\leq 2\epsilon+M\| A\| \|h\|_2. \end{equation} Next, we estimate $\| h_{(T_0\cup T_1)^c}\|_2$, where $h_{(T_0\cup T_1)^c}:=h-h_{T_0}-h_{T_1}$. First, observe that $\| h_{T_j}\|_2\leq s^{1/2}\| h_{T_j}\|_{\infty}\leq s^{-1/2}\| h_{T_{j-1}}\|_1$, $j\geq 2$, and so \begin{equation} \label{ineq3} \sum_{j\geq 2}\| h_{T_j}\|_2\leq s^{-1/2}\sum_{j\geq 1}\| h_{T_j}\|_1=s^{-1/2}\| h_{T_0^c}\|_1, \end{equation} where $h_{T_0^c}:=h-h_{T_0}$. Thus, \begin{equation} \label{ineq1} \| h_{(T_0\cup T_1)^c}\|_2=\Big\| \sum_{j\geq 2}h_{T_j}\Big\|_2\leq\sum_{j\geq 2}\| h_{T_j}\|_2 \leq s^{-1/2}\| h_{T_0^c}\|_1. \end{equation} Because $x^0$ is feasible, it satisfies $\| x^*\|_{1,w}\leq \| x^0\|_{1,w}$, which implies \begin{multline*} \sum_{i=1}^mw_i|x_i^0|\geq \sum_{i=1}^mw_i|x_i^0+h_i|=\sum_{i\in T_0}w_i|x_i^0+h_i|+\sum_{i\in T_0^c}w_i|x_i^0+h_i|\\ \geq \sum_{i\in T_0}w_i|x_i^0|-\sum_{i\in T_0}w_i|h_i|-\sum_{i\in T_0^c}w_i|x_i^0|+\sum_{i\in T_0^c}w_i|h_i|, \end{multline*} and so $\| h_{T_0^c}\|_{1,w}\leq\| h_{T_0}\|_{1,w}+2\| x^0-x^0_s\|_{1,w}$. This last inequality implies $\min_{1\leq i\leq m}\{w_i\}\| h_{T_0^c}\|_1\leq\max_{1\leq i\leq m}\{w_i\}(\|h_{T_0}\|_1+2\| x^0-x^0_s\|_1)$, and so \begin{equation} \label{ineq2} \| h_{T_0^c}\|_1\leq\tau (\| h_{T_0}\|_1+2\| x^0-x^0_s\|_1). \end{equation} From \eqref{ineq1}, \eqref{ineq2}, and the Cauchy-Schwarz inequality, it follows that \begin{equation} \label{fivestars} \| h_{(T_0\cup T_1)^c}\|_2 \leq\tau s^{-1/2}(\| h_{T_0}\|_1+2\| x^0-x^0_s\|_1)\leq\tau (\| h_{T_0}\|_2+2e_0), \end{equation} with $e_0:=s^{-1/2}\| x^0-x^0_s\|_1$. Now, let us estimate $\| h_{T_0\cup T_1}\|_2$. By the restricted isometry property, it follows that \begin{equation} \label{i1} \frac{1}{\sqrt{1+\delta_{2s}}}\| Ah_{T_0\cup T_1}\|_2\leq\| h_{T_0\cup T_1}\|_2\leq \frac{1}{\sqrt{1-\delta_{2s}}} \| Ah_{T_0\cup T_1}\|_2. \end{equation} Since $Ah_{T_0\cup T_1}=Ah-\sum_{j\geq 2}Ah_{T_j}$, we have that $\| Ah_{T_0\cup T_1}\|_2^2 =\langle Ah_{T_0\cup T_1}, Ah\rangle-\sum_{j\geq 2}\langle Ah_{T_0\cup T_1},Ah_{T_j}\rangle \leq |\langle Ah_{T_0\cup T_1}, Ah\rangle |+\sum_{j\geq 2}|\langle Ah_{T_0\cup T_1},Ah_{T_j}\rangle|$. This and the restricted isometry property imply that \begin{align} \nonumber |\langle Ah_{T_0\cup T_1}, Ah\rangle | &\leq \| Ah_{T_0\cup T_1}\|_2\| Ah\|_2 \leq \| Ah_{T_0\cup T_1}\|_2 (2\epsilon +M\| A\| \| h\|_2)\quad \mbox{(by \eqref{iineq1})}\\ \label{i3} &\leq \sqrt{1+\delta_{2s}}\| h_{T_0\cup T_1}\|_2(2\epsilon +M\| A\| \| h\|_2) \quad \mbox{(by \eqref{i1})}.
\end{align} From Lemma~\ref{candes}, we have that $|\langle Ah_{T_i}, Ah_{T_j}\rangle |\leq \delta_{2s}\| h_{T_i}\|_2\| h_{T_j}\|_2$, for $i\neq j$, and so $|\langle Ah_{T_0\cup T_1}, Ah_{T_j}\rangle |\leq \delta_{2s}(\| h_{T_0}\|_2+\| h_{T_1}\|_2)\| h_{T_j}\|_2 \leq \sqrt{2} \delta_{2s}\| h_{T_0\cup T_1}\|_2\| h_{T_j}\|_2$. Therefore, \begin{equation} \label{iineq2} \| Ah_{T_0\cup T_1}\|_2^2\leq\sqrt{1+\delta_{2s}}\| h_{T_0\cup T_1}\|_2(2\epsilon +M\| A\| \| h\|_2)+ \sqrt{2}\delta_{2s}\| h_{T_0\cup T_1}\|_2\sum_{j\geq 2}\| h_{T_j}\|_2. \end{equation} As in \cite{C}, let $\alpha =2\sqrt{1+\delta_{2s}}(1-\delta_{2s})^{-1}$ and $\rho=\sqrt{2}\delta_{2s}(1-\delta_{2s})^{-1}$. Then, \begin{align} \nonumber \| h_{T_0\cup T_1}\|_2 &\leq \alpha (\epsilon +\frac{1}{2}M\| A\|\| h\|_2)+\rho \sum_{j\geq 2}\| h_{T_j}\|_2 \quad \mbox{(by \eqref{i1} and \eqref{iineq2})}\\ \nonumber &\leq \alpha (\epsilon +\frac{1}{2}M\| A\|\| h\|_2)+\rho s^{-1/2}\| h_{T_0^c}\|_1 \quad \mbox{(by \eqref{ineq3})}\\ \nonumber &\leq \alpha (\epsilon +\frac{1}{2}M\| A\|\| h\|_2)+\rho s^{-1/2}\tau (\| h_{T_0}\|_1+2\| x^0-x^0_s\|_1) \quad \mbox{(by \eqref{ineq2})}\\ \nonumber &\leq \alpha (\epsilon +\frac{1}{2}M\| A\|\| h\|_2)+\rho \tau \| h_{T_0\cup T_1}\|_2+2\rho\tau e_0, \end{align} and so \begin{equation} \label{i6} \| h_{T_0\cup T_1}\|_2 \leq (1-\rho\tau)^{-1}[\alpha (\epsilon +\frac{1}{2}M\| A\|\| h\|_2)+2\rho\tau e_0]. \end{equation} (Observe that $1-\rho\tau>0$ since $\delta_{2s}<(1+\sqrt{2}\tau)^{-1}$.) Then, \begin{align*} \| h\|_2 &\leq \| h_{T_0\cup T_1}\|_2+\| h_{(T_0\cup T_1)^c}\|_2 \leq (1+\tau)\| h_{T_0\cup T_1}\|_2+2\tau e_0 \quad \mbox{(by \eqref{fivestars})}\\ &\leq (1+\tau)(1-\rho\tau)^{-1}[\alpha (\epsilon +\frac{1}{2}M\| A\|\| h\|_2)+2\rho\tau e_0]+2\tau e_0, \quad \mbox{(by \eqref{i6})} \end{align*} and so $\| h\|_2 \leq C_0e_0+C_1\epsilon $, with $C_0:=2\tau (\rho +1)[1-\rho\tau-0.5\alpha (1+\tau)M\| A\|]^{-1}$ and $C_1:=\alpha (1+\tau)[1-\rho\tau-0.5\alpha (1+\tau)M\| A\|]^{-1}$. As a consequence of $T<\frac{1}{L}\ln{\Big( 1+\frac{1-\delta_{2s}(1+\tau\sqrt{2})}{(1+\tau )\| A\|\sqrt{1+\delta_{2s}}}\Big )}$, observe that both $C_0$ and $C_1$ are strictly positive. {\bf Acknowledgments.} This work was supported by the 2009 Northwest Indiana Computational Grid (NWICG) Summer Program. \end{document}
\begin{document} \title{\textbf{Confidence sets for a level set in linear regression }} \author{ Fang Wan$^{1}$, Wei Liu$^{2}$, Frank Bretz$^{3}$ \\ $^{1}$ Lancaster University, UK\\ $^{2}$ University of Southampton, UK\\ $^{3}$ Novartis Pharma AG, Switzerland\\ } \date{} \maketitle \begin{abstract} Regression modeling is the workhorse of statistics and there is a vast literature on estimation of the regression function. It has been realized in recent years that in regression analysis the ultimate aim may be the estimation of a level set of the regression function, instead of the estimation of the regression function itself. The published work on estimation of the level set has thus far focused mainly on nonparametric regression, especially on point estimation. In this paper, the construction of confidence sets for the level set of linear regression is considered. In particular, $1-\alpha$ level upper, lower and two-sided confidence sets are constructed for the normal-error linear regression model. It is shown that these confidence sets can be easily constructed from the corresponding $1-\alpha$ level simultaneous confidence bands. It is also pointed out that the construction method is readily applicable to other parametric regression models where the mean response depends on a linear predictor through a monotonic link function, which include generalized linear models, linear mixed models and generalized linear mixed models. Therefore the method proposed in this paper is widely applicable. Examples are given to illustrate the method. \\ \noindent{\bf Keywords} Confidence sets; linear regression; nonparametric regression; parametric regression; simultaneous confidence bands; statistical inference. \end{abstract} \spacingset{2} \section{INTRODUCTION} Let $Y = h( {\pmb x} ) + e$ where $Y \in \Re^1$ is the response, $ {\pmb x} \in \Re^p$ is the covariate (vector), $h$ is the regression function, and $e$ is the random error. In regression analysis, there is a vast literature on how to estimate the regression function $h$, based on the observed data $(Y_i, {\pmb x} _i), i=1, \ldots, n$. In recent years, it has been realized that an important problem in regression is the inference of the $\lambda$-level set $$ G = G_h (\lambda ) = \{ {\pmb x} \in K: \, h( {\pmb x} ) \ge \lambda \} $$ where $\lambda$ is a pre-specified number, and $K \subset \Re^p$ is a given covariate $ {\pmb x} $ region of interest. It is argued forcefully in Scott and Davenport (2007) that ``In a wide range of regression problems, if it is worthwhile to estimate the regression function $h$, it is also worthwhile to estimate certain level sets. Moreover, these level sets may be of ultimate importance. And in many classification problems, labels are obtained by thresholding a continuous variable. Thus, estimating regression level sets may be a more appropriate framework for addressing many problems that are currently envisioned in other ways''. For example, when considering a regression model of infant systolic blood pressure on birth weight and age, it is of interest to identify the covariate region over which the systolic blood pressure exceeds (or falls below) a pre-specified level $\lambda$. For a regression model of perinatal mortality rate on birth weight, it is interesting to identify the range of birth weight over which the perinatal mortality rate exceeds a certain $\lambda$. See more details on these two examples in Section 3.
Other possible applications have been pointed out, for example, in Scott and Davenport (2007) and Dau {\it et al.} (2020). Inference of the level set $G$ is an important component of the more general field of subgroup analysis (cf. Wang et al., 2007, Herrera et al., 2011, Ting et al., 2020). In nonparametric regression where $h$ is not assumed to have a specific form, {\bf point} estimation of $G$ aims to construct $\hat{G}$ to approximate $G$ using the observed data. This has been considered by Cavalier (1997), Polonik and Wang (2005), Willett and Nowak (2007), Scott and Davenport (2007), Dau {\it et al.} (2020) and Reeve {\it et al.} (2021) among others. The main focus of these works is on large sample properties such as consistency and rate of convergence. Related work on estimation of level-sets of a nonparametric density function can be found in Hartigan (1987), Tsybakov (1997), Cadre (2006), Mason and Polonik (2009), Chen {\it et al.} (2017) and Qiao and Polonik (2019). {\bf Confidence-set} estimation of $G$ aims to construct sets $\hat{G}$ that contain or are contained in $G$ with a pre-specified confidence level $1-\alpha$. Large sample approximate $1- \alpha$ confidence-set estimation of $G$ is considered in Mammen and Polonik (2013). In this paper confidence-set estimation of $G$ for linear regression is considered. It is shown that lower, upper and two-sided confidence-set estimators of $G$ can be easily constructed from the corresponding lower, upper and two-sided simultaneous confidence bands for a linear regression function. Simultaneous confidence bands for linear regression have been considered in Wynn and Bloomfield (1971), Naiman (1984, 1986), Piegorsch (1985a,b), Sun and Loader (1994), Liu and Hayter (2007) and numerous others; see Liu (2010) for an overview. It is also pointed out that the method can be directly extended to, for example, generalized linear regression models, though the confidence-set estimators are of asymptotic $1-\alpha$ level since the simultaneous confidence bands are of asymptotic $1-\alpha$ level in this case. A related problem is the confidence-set estimation of the maximum (or minimum) point of a linear regression model; see Wan {\it et al.} (2015, 2016) and the references therein. The layout of the paper is as follows. The construction method of confidence-set estimators is given in Section 2. The method is illustrated with three examples in Section 3. Section 4 contains conclusions and a brief discussion. Finally, the appendix sketches the proof of a theorem. \section{Method} The confidence sets for $G$ are constructed in this section. Let the normal-error linear regression model be given by $$ Y = h( {\pmb x} ) + e = \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p + e \, , $$ where the independent errors $e_i = Y_i - h( {\pmb x} _i )$ have distribution $N(0, \sigma^2)$. From the observed sample $(Y_i, {\pmb x} _i), i=1, \cdots, n$, the usual estimator of $\mbox{\boldmath $\beta$} = ( \beta_0, \cdots, \beta_p)^T$ is given by $\hat{\mbox{\boldmath $\beta$}} = (X^TX)^{-1} X^T {\bf Y} $ where $X$ is the $n \times (p+1)$ design matrix and $ {\bf Y} = (Y_1, \cdots, Y_n)^T$. The error variance $\sigma^2$ is estimated by the usual residual mean square $\hat{\sigma}^2 = \| {\bf Y} - X \hat{\mbox{\boldmath $\beta$}} \|^2/(n-p-1)$.
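As a brief computational aside (not part of the formal development), both estimators are straightforward to compute from the data; the following sketch, in generic numerical code with the design matrix and responses supplied by the user, illustrates the two formulas above.
\begin{verbatim}
# Sketch: least squares estimates for the normal-error linear model.
# X is the n x (p+1) design matrix (first column of ones), Y the responses.
import numpy as np

def ls_estimates(X, Y):
    n, p1 = X.shape                        # p1 = p + 1
    beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
    resid = Y - X @ beta_hat
    sigma2_hat = resid @ resid / (n - p1)  # residual mean square, nu = n-p-1
    return beta_hat, sigma2_hat
\end{verbatim}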
It is known that $\hat{\mbox{\boldmath $\beta$}} \sim N( \mbox{\boldmath $\beta$}, \sigma^2 (X^TX)^{-1} )$, $\hat{\sigma}^2 \sim \sigma^2 \chi_{\nu}^2 / \nu$ with $\nu = n-p-1$, and $\hat{\mbox{\boldmath $\beta$}}$ and $\hat{\sigma}^2$ are independent. In order for both estimators $\hat{\mbox{\boldmath $\beta$}}$ and $\hat{\sigma}^2$ to be available, the sample size must satisfy $n \ge p+2$. Let $ \tilde{\pmb x} = (1, {\pmb x} ^T)^T = (1, x_1, \cdots, x_p)^T$. Suppose the upper, lower and two-sided $1-\alpha$ simultaneous confidence bands over the covariate region $ {\pmb x} \in K$ are given, respectively, by \begin{eqnarray} {\rm P} \left \{ \, \tilde{\pmb x} ^T \mbox{\boldmath $\beta$} \le \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} + c_1 \hat{\sigma} m( {\pmb x} ) \ \forall \, {\pmb x} \in K \, \right\} &=& 1- \alpha \, \\ {\rm P} \left \{ \, \tilde{\pmb x} ^T \mbox{\boldmath $\beta$} \ge \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} - c_1 \hat{\sigma} m( {\pmb x} ) \ \forall \, {\pmb x} \in K \, \right\} &=& 1- \alpha \, \\ {\rm P} \left \{ \, \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} - c_2 \hat{\sigma} m( {\pmb x} ) \le \tilde{\pmb x} ^T \mbox{\boldmath $\beta$} \le \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} + c_2 \hat{\sigma} m( {\pmb x} ) \ \forall \, {\pmb x} \in K \, \right\} &=& 1- \alpha \, \end{eqnarray} where $m( {\pmb x} ) = \sqrt{ \tilde{\pmb x} ^T (X^TX)^{-1} \tilde{\pmb x} }$ corresponds to the hyperbolic confidence bands, and $c_1>0$ and $c_2>0$ are the critical constants to achieve the exact $1-\alpha$ confidence level. Whilst another popular form is $m( {\pmb x} )=1$, corresponding to the constant-width confidence bands, the hyperbolic bands are often better than the constant-width bands under various optimality criteria (see, e.g., Liu and Hayter, 2007, and the references therein) and so are used throughout this paper. The critical constants $c_1$ and $c_2$ can be computed by using the method of Liu {\it et al.} (2005, 2008). It is worth emphasizing that the three probabilities in (1-3) do not depend on the unknown parameters $\mbox{\boldmath $\beta$} \in \Re^{p+1}$ and $\sigma >0$, and that $c_1 < c_2$. From the simultaneous confidence bands in (1-3), define the confidence sets as \begin{eqnarray} \hat{G}_{1u} & = & \left \{ \, {\pmb x} \in K : \ \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} + c_1 \hat{\sigma} m( {\pmb x} ) \ge \lambda \, \right\} \, , \\ \hat{G}_{1l} & = & \left \{ \, {\pmb x} \in K : \ \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} - c_1 \hat{\sigma} m( {\pmb x} ) \ge \lambda \, \right\} \, , \\ \hat{G}_{2u} & = & \left \{ \, {\pmb x} \in K : \ \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} + c_2 \hat{\sigma} m( {\pmb x} ) \ge \lambda \, \right\} , \ \hat{G}_{2l} = \left \{ \, {\pmb x} \in K : \ \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} - c_2 \hat{\sigma} m( {\pmb x} ) \ge \lambda \, \right\} . \end{eqnarray} The following theorem establishes that $\hat{G}_{1u}$ is an upper, and $\hat{G}_{1l}$ is a lower, confidence set for $G$ of exact $1-\alpha$ level, whilst $[\hat{G}_{2l}, \hat{G}_{2u}]$ is a two-sided confidence set for $G$ of at least $1-\alpha$ level. A proof is sketched in the appendix.
\noindent\textbf{Theorem.} We have \begin{eqnarray} \inf_{\mbox{\boldmath $\beta$} \in \Re^{p+1},\, \sigma >0} {\rm P} \left\{ \, G \subseteq \hat{G}_{1u} \, \right\} & = & 1- \alpha \, , \\ \inf_{\mbox{\boldmath $\beta$} \in \Re^{p+1},\, \sigma >0} {\rm P} \left\{ \, \hat{G}_{1l} \subseteq G \, \right\} & = & 1- \alpha \, , \\ \inf_{\mbox{\boldmath $\beta$} \in \Re^{p+1},\, \sigma >0} {\rm P} \left\{ \, \hat{G}_{2l} \subseteq G \subseteq \hat{G}_{2u} \, \right\} & \ge & 1- \alpha \, . \end{eqnarray} From the definitions in (4-6), it is clear that each set $\hat{G}_{\cdot \cdot}$ is given by all the points in $K$ at which the corresponding simultaneous confidence band is at least as high as the given threshold $\lambda$. Note that each set could be an empty set when $\lambda$ is sufficiently large, and become $K$ when $\lambda$ is sufficiently small. Of course, each set cannot be larger than the given covariate set $K$ from the definition. Since $c_1 >0$ and $c_2 >0$, it is clear that $\hat{G}_{1l} \subseteq \hat{G}_{1u}$ and $\hat{G}_{2l} \subseteq \hat{G}_{2u}$. Since $c_1 < c_2 $, it is clear that $\hat{G}_{1u} \subseteq \hat{G}_{2u}$ and $\hat{G}_{2l} \subseteq \hat{G}_{1l}$. Hence $\hat{G}_{2l} \subseteq \hat{G}_{1l} \subseteq \hat{G}_{1u} \subseteq \hat{G}_{2u}$. Intuitively, since the regression function $ \tilde{\pmb x} ^T \mbox{\boldmath $\beta$}$ is bounded from above by the upper simultaneous confidence band $ \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} + c_1 \hat{\sigma} m( {\pmb x} )$ over the region $ {\pmb x} \in K$, the level set $G$ cannot be bigger than the set $\hat{G}_{1u}$. Similarly, since the regression function $ \tilde{\pmb x} ^T \mbox{\boldmath $\beta$}$ is bounded from below by the lower simultaneous confidence band $ \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} - c_1 \hat{\sigma} m( {\pmb x} )$ over the region $ {\pmb x} \in K$, the level set $G$ cannot be smaller than the set $\hat{G}_{1l}$. Finally, since the regression function $ \tilde{\pmb x} ^T \mbox{\boldmath $\beta$}$ is bounded, simultaneously, from below by the lower confidence band $ \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} - c_2 \hat{\sigma} m( {\pmb x} )$, and from above by the upper confidence band $ \tilde{\pmb x} ^T \hat{\mbox{\boldmath $\beta$}} + c_2 \hat{\sigma} m( {\pmb x} )$, over the region $ {\pmb x} \in K$, the level set $G$ must contain the set $\hat{G}_{2l}$ and be contained in the set $\hat{G}_{2u}$ simultaneously. Instead of the level set $G$, the set \begin{equation} M = M_h (\lambda ) = \{ {\pmb x} \in K: \, h( {\pmb x} ) \le \lambda \} \end{equation} may be of interest in some applications; see e.g. Example 2 in Section 3. In this case, one can consider the regression of $-Y$ on $ {\pmb x} $, given by $-Y = -h( {\pmb x} ) + (-e)$, and hence $M$ becomes the level set $G$ of the regression function $-h( {\pmb x} )$ with level $-\lambda$. The confidence sets given in (4-6) for the normal-error linear regression can be generalized to other models that involve a linear predictor $ \tilde{\pmb x} ^T\mbox{\boldmath $\beta$}$. In generalized linear models, linear mixed models and generalized linear mixed models (cf.
McCulloch and Searle, 2001 and Faraway, 2016), for example, the mean response $E(Y)$ is often related to a linear predictor $ \tilde{\pmb x} ^T\mbox{\boldmath $\beta$}$ by a given monotonic link function $L( \cdot )$, that is, $L[E(Y)]= \tilde{\pmb x} ^T\mbox{\boldmath $\beta$}$. Since $L( \cdot )$ is monotone, the set of interest $\{ \, {\pmb x} \in K: \ E(Y) \ge L_0 \}$, for a given threshold $L_0$, becomes either $\{ \, {\pmb x} \in K: \ \tilde{\pmb x} ^T\mbox{\boldmath $\beta$} \ge \lambda \, \}$ or $\{ \, {\pmb x} \in K: \ \tilde{\pmb x} ^T\mbox{\boldmath $\beta$} \le \lambda \, \}$, where $\lambda = L( L_0 )$, depending on whether the function $L( \cdot )$ is increasing or decreasing. However, when the distribution of $\hat{\mbox{\boldmath $\beta$}}$ is asymptotically normal $N(\mbox{\boldmath $\beta$}, \hat{\Sigma})$, the simultaneous confidence bands of the forms in (1-3) are of approximate $1-\alpha$ level; see, e.g., Liu (2010, Chapter 8). As a result, the corresponding confidence sets of the forms in (4-6) are of approximate $1-\alpha$ level too. See Example 3 in the next section. \section{Illustrative examples} In this section, three examples are used to illustrate the confidence sets given in (4-6). The \texttt{R} code for all the computation in this section is available from the authors (and will be made available freely online). \noindent{\bf Example 1.} In Example 1.1 of Liu (2010), a linear regression model of systolic blood pressure ($Y$) on the two covariates birth weight in oz ($x_1$) and age in days ($x_2$) of an infant is considered: $$ Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + e . $$ Based on the measurements on $n=17$ infants in Liu (2010, Table 1.1), the linear regression model provides a good fit with $R^2 = 95\%$. The observed values of $x_1$ range from 92 to 149, and the observed values of $x_2$ range from 2 to 5; hence we set $K = \{ \, {\pmb x} = (x_1, x_2)^T: \ 92 \le x_1 \le 149, 2 \le x_2 \le 5 \, \}$. It is of interest to identify infants, in terms of $ {\pmb x} = (x_1, x_2)^T \in K$, that have mean systolic blood pressure larger than 97, assuming systolic blood pressure larger than 97 is deemed to be too high. Therefore the level set $G= G(97) = \{ \, {\pmb x} \in K: \, \beta_0 + \beta_1 x_1 + \beta_2 x_2 \ge 97 \, \}$ is of interest. \begin{figure}[htbp] \begin{minipage}[t]{0.47\linewidth} \centering \includegraphics[width=7.5cm]{Example1_1sidedupp.png} \parbox{15.5cm}{\small \hspace{1.2cm}(a) 1-sided upper confidence set $\hat{G}_{1u}$} \end{minipage} \hspace{3ex} \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=7.5cm]{Example1_1sidedlow.png} \parbox{15.5cm}{\small \hspace{1.2cm}(b) 1-sided lower confidence set $\hat{G}_{1l}$} \end{minipage} \begin{minipage}[t]{0.47\linewidth} \centering \includegraphics[width=7.5cm]{Example1_2sided.png} \parbox{15.5cm}{\small \hspace{1.2cm}(c) 2-sided confidence set $[\hat{G}_{2l},\hat{G}_{2u}]$} \end{minipage} \hspace{3ex} \begin{minipage}[t]{0.47\linewidth} \centering \includegraphics[width=7.5cm]{Example1_all.png} \parbox{15.5cm}{\small \hspace{1.2cm}(d) All the confidence sets} \end{minipage} \begin{center} \caption{The $95\%$ confidence sets in Example 1, given by the shaded regions. } \end{center} \end{figure} From Section 2, simultaneous confidence bands in (1-3) need to be constructed first in order to construct the confidence sets for $G$ in (4-6).
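Although the critical constants themselves are obtained by the method of Liu {\it et al.} cited above, once $c_1$ or $c_2$ is available each confidence set can be traced out simply by thresholding the corresponding band over a fine grid of $K$. The following sketch (in generic numerical code; the grid, the critical constant $c$ and the threshold $\lambda$ are inputs, and the function itself is only an illustration of this thresholding step) makes the construction explicit.
\begin{verbatim}
# Sketch: grid points of K at which the band xtilde'beta_hat + side*c*sigma_hat*m(x)
# is at least lambda; side = +1 gives an upper-type set, side = -1 a lower-type set.
import numpy as np

def level_set_points(grid, beta_hat, sigma_hat, XtX_inv, c, lam, side=1):
    pts = []
    for x in grid:                                   # grid: covariate points in K
        xt = np.concatenate(([1.0], np.atleast_1d(x)))
        mx = np.sqrt(xt @ XtX_inv @ xt)              # m(x) for the hyperbolic band
        if xt @ beta_hat + side * c * sigma_hat * mx >= lam:
            pts.append(x)
    return pts
\end{verbatim}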
In this example, with $p=2$, $n=17$, $1-\alpha = 95\%$ and the given design matrix $X$, $c_2$ is computed to be 3.11 and $c_1$ is computed to be $2.77$ by using the method of Liu {\it et al.} (2005) (see also Liu, 2010, Section 3.2). Figure 1(a) plots the 1-sided upper confidence set $\hat{G}_{1u}$ in the $ {\pmb x} $-plane, with the region $K$ given by the solid-line rectangle. Note that the curvilinear boundary of $\hat{G}_{1u}$ is given by the projection, to the $ {\pmb x} $-plane, of the intersection between the horizontal plane at height $\lambda = 97$ and the 1-sided upper simultaneous confidence band over the region $ {\pmb x} \in K$. The upper confidence set $\hat{G}_{1u}$ tells us that, with 95\% confidence level, only those infants having $ {\pmb x} \in \hat{G}_{1u}$ may have mean systolic blood pressure larger than or equal to 97. Hence $ {\pmb x} \in \hat{G}_{1u}$ could be used as a screening criterion for a further medical check due to concerns over excessively high systolic blood pressure. Similarly, Figure 1(b) plots the 1-sided lower confidence set $\hat{G}_{1l}$ in the $ {\pmb x} $-plane. Note that the curvilinear boundary of $\hat{G}_{1l}$ is given by the projection, to the $ {\pmb x} $-plane, of the intersection between the horizontal plane at height $\lambda = 97$ and the 1-sided lower simultaneous confidence band over the region $K$. The lower confidence set $\hat{G}_{1l}$ tells us that, with 95\% confidence level, infants having $ {\pmb x} \in \hat{G}_{1l}$ have mean systolic blood pressure larger than or equal to 97. Hence these infants should have a further medical check due to concerns over excessively high systolic blood pressure. Figure 1(c) plots the two-sided confidence set $[\hat{G}_{2l}, \hat{G}_{2u}]$ in the $ {\pmb x} $-plane. Note that the curvilinear boundaries of $[\hat{G}_{2l}, \hat{G}_{2u}]$ are given by the projection, to the $ {\pmb x} $-plane, of the intersection between the horizontal plane at height $\lambda = 97$ and the two-sided confidence band over the region $K$. The two-sided confidence set tells us that, with 95\% confidence level, infants having $ {\pmb x} \in K \backslash \hat{G}_{2u}$ are not of concern, infants having $ {\pmb x} \in \hat{G}_{2l}$ are of concern, and infants having $ {\pmb x} \in \hat{G}_{2u}$ are possibly of concern, in terms of excessively high mean systolic blood pressure. Figure 1(d) plots $\hat{G}_{1u}$, $\hat{G}_{1l}$ and $[\hat{G}_{2l}, \hat{G}_{2u}]$ in the same picture for the purpose of comparison. It is clear from the figure that $\hat{G}_{2l} \subseteq \hat{G}_{1l} \subseteq \hat{G}_{1u} \subseteq \hat{G}_{2u}$ as pointed out in Section 2. Note that when $\lambda$ is large, say 100, the horizontal plane at height $\lambda $ and the 1-sided lower simultaneous confidence band do not intersect over the region $K$. In this case the 1-sided lower confidence set $\hat{G}_{1l}$ is an empty set. Similar observations hold for the other confidence sets. \noindent{\bf Example 2.} Selvin (1998, p224) provided a data set on perinatal mortality (fetal deaths plus deaths within the first month of life) rate (PMR) and birth weight (BW) collected in California in 1998. The interest is in modelling how PMR changes with BW; Selvin (1998) considered fitting a 4th order polynomial regression model between $Y=\log(-\log(PMR))$ and $x=BW$: $$ Y = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \beta_4 x^4 + e \, .
$$
Here we will focus on the black infants only, using the 35 observations extracted from Selvin (1998) and given in Liu (2010, Table 7.1). The 4th order polynomial regression model provides a good fit with $R^2 = 97\%$. The observed values of $x$ range from 0.85 to 4.25 and so we set $K = [0.85, 4.25]$. We are interested in the values of $x \in K$ that may result in excessively high PMR. Since $Y=\log(-\log(PMR))$ and $\log(-\log( \cdot ))$ is monotone decreasing, we are interested in the set $M = \{ x \in K: \ \beta_0 + \beta_1 x + \cdots + \beta_4 x^4 \le \lambda \}$ in (10), with $\lambda = \log(-\log(0.01)) = 1.527$, assuming that $PMR \ge 0.01$ is regarded as excessively high.
\begin{figure}[htbp]
\begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=7.5cm]{Examp2_a.png} \parbox{15.5cm}{\small \hspace{1.2cm}(a) 1-sided upper confidence set $\hat{G}_{1u}$} \end{minipage} \hspace{1ex}
\begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=7.5cm]{Examp2_b.png} \parbox{15.5cm}{\small \hspace{1.2cm}(b) 1-sided lower confidence set $\hat{G}_{1l}$} \end{minipage}
\begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=7.5cm]{Examp2_c.png} \parbox{15.5cm}{\small \hspace{1.2cm}(c) 2-sided confidence set $[\hat{G}_{2l}, \hat{G}_{2u}]$} \end{minipage} \hspace{1ex}
\begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=7.4cm]{Examp2_d.png} \parbox{15.5cm}{\small \hspace{1.2cm} (d) All the confidence sets} \end{minipage}
\begin{center} \caption{The $95\%$ confidence sets in Example 2, indicated by the red line segments.} \end{center}
\end{figure}
From Section 2, simultaneous confidence bands for $-\beta_0 - \beta_1 x - \cdots - \beta_4 x^4$ of the forms in (1-3), with $ \tilde{\pmb x} = (1, x, \cdots, x^4)^T$ in this example, over $x \in K$ need to be constructed first in order to construct the confidence sets for $M$ in (10), which is the same as $G = \{ x \in K: \, -\beta_0 - \beta_1 x - \cdots - \beta_4 x^4 \ge -\lambda \}$. The critical constants $c_1$ and $c_2$ can be computed by using the method of Liu {\it et al.} (2008); see also Liu (2010, Section 7.1). With $p=4$, $n=35$, $1-\alpha = 95\%$ and the given design matrix $X$, $c_2$ is computed to be 2.99 and $c_1$ is computed to be $2.69$. Figure 2(a) plots the 1-sided upper simultaneous confidence band $-\tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} + c_1 \hat{\sigma} m( {\pmb x} )$ over $x \in K$, and the 1-sided upper confidence set $\hat{G}_{1u} = [0.85, 2.82]$ on the $x$-axis. Note that the boundary of $\hat{G}_{1u}$, 2.82, is given by the projection, to the $x$-axis, of the intersection between the horizontal line at height $-\lambda = -1.527$ and the upper simultaneous confidence band over the interval $x \in K$. The upper confidence set $\hat{G}_{1u}$ tells us that, with confidence level 95\%, only those infants having $x \in \hat{G}_{1u}$ may have $E\{\log(-\log(PMR))\} \le 1.527$ and hence may need extra medical care due to concerns over excessively high PMR. Similarly, Figure 2(b) plots the 1-sided lower simultaneous confidence band $-\tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} - c_1 \hat{\sigma} m( {\pmb x} )$ over $x \in K$, and the 1-sided lower confidence set $\hat{G}_{1l} = [0.85, 2.44]$ on the $x$-axis.
Note that the boundary of $\hat{G}_{1l}$, 2.44, is given by the projection, to the $x$-axis, of the intersection between the horizontal line at height $-\lambda = -1.527$ and the lower simultaneous confidence band over the interval $K$. The lower confidence set $\hat{G}_{1l}$ tells us that, with confidence level 95\%, infants having $x \in \hat{G}_{1l}$ have $E\{\log(-\log(PMR))\} \le 1.527$ and so should have extra medical care due to concerns over excessively high PMR. Figure 2(c) plots the 2-sided simultaneous confidence band $[-\tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} - c_2 \hat{\sigma} m( {\pmb x} ),\, -\tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} + c_2 \hat{\sigma} m( {\pmb x} )]$ over $x \in K$, and the 2-sided confidence set $[\hat{G}_{2l}, \hat{G}_{2u}] = \left[[0.85,2.42], [0.85, 2.84] \right]$ on the $x$-axis. Note that the boundaries of $[\hat{G}_{2l}, \hat{G}_{2u}]$, 2.42 and 2.84, are given by the projection, to the $x$-axis, of the intersection between the horizontal line at height $-\lambda = -1.527$ and the two-sided confidence band over the interval $K$. The two-sided confidence set tells us that, with confidence level at least 95\%, infants having $x \in K \backslash \hat{G}_{2u}$ are not of concern, infants having $x \in \hat{G}_{2l}$ are of concern, and infants having $x \in \hat{G}_{2u}$ are possibly of concern, in terms of excessively high PMR. Figure 2(d) plots $\hat{G}_{1u}$, $\hat{G}_{1l}$ and $[\hat{G}_{2l}, \hat{G}_{2u}]$ in the same picture for the purpose of comparison. From this figure, it is clear again that $\hat{G}_{2l} \subseteq \hat{G}_{1l} \subseteq \hat{G}_{1u} \subseteq \hat{G}_{2u}$, as pointed out in Section 2.
\noindent{\bf Example 3.} Myers {\it et al.} (2002, p114) provided a data set on a single quantal bioassay of a toxicity experiment. The effect of different doses of nicotine on the common fruit fly is investigated by fitting a logistic regression model between the number of flies killed $y$ and $x =\ln ({\rm concentration\ of\ nicotine})$. Seven observations of $(y_j, m_j, x_j)$ are given (see also Liu, 2010, Table 8.1), where $m_j$ is the number of flies tested at dose $x_j$. Let $p(x)$ denote the probability that a fly will be killed at dose $x$. Then $y_j \sim {\rm Binomial} \left( m_j, p(x_j) \right)$ with ${\rm logit} \left( p(x_j) \right) = \beta_0 + \beta_1 x_j$, $j=1, \cdots, 7$. Based on the seven observations, the MLE $\hat{\mbox{\boldmath $\beta$}} = (\hat{\beta}_0, \hat{\beta}_1)^T$ is calculated to be $(3.124, 2.128)^T$ and the approximate covariance matrix of $\hat{\mbox{\boldmath $\beta$}}$ is
$$ \hat{\cal I}^{-1} = \left( \begin{array}{cc} 0.1122 & 0.0679 \\ 0.0679 & 0.0490 \\ \end{array} \right) \, . $$
Hence $\hat{\mbox{\boldmath $\beta$}}$ has the approximate normal distribution $N_2 (\mbox{\boldmath $\beta$} , \hat{\cal I}^{-1})$. The seven observed $\left( {\rm logit} (y_j / m_j), x_j \right)$ are plotted in Figure 3(a), from which it seems that the logistic regression model fits the observations very well. Indeed the deviance is ${\cal D} = 0.734$, which is very small in comparison with $\chi_{5, \alpha}^2$ for any conventional $\alpha$ value. The median effective dose $ED_{50}$ is the dose $x$ such that $p(x) = 0.5$, and it is often of interest in dose-response studies.
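In each of Examples 1--3, once the relevant band and critical constant are available, the confidence sets in (4-6) are simply read off the band. The following sketch (purely illustrative, with hypothetical names, and assuming the grid over $K$ is fine enough for the displayed accuracy) shows this final step for a single covariate; for Example 2, for instance, the band centre would be the negated fitted polynomial $-\tilde{\pmb x}^T\hat{\mbox{\boldmath $\beta$}}$ evaluated on the grid, with threshold $-\lambda = -1.527$.
\begin{verbatim}
import numpy as np

def level_confidence_sets(x_grid, centre, hw1, hw2, lam):
    # x_grid: grid of covariate values in K;
    # centre: fitted band centre xtilde' betahat on the grid;
    # hw1, hw2: half-widths c1*sigmahat*m(x) and c2*sigmahat*m(x);
    # lam: threshold defining G = {x in K : linear predictor >= lam}.
    G1u = centre + hw1 >= lam   # 1-sided upper confidence set
    G1l = centre - hw1 >= lam   # 1-sided lower confidence set
    G2u = centre + hw2 >= lam   # outer set of the 2-sided pair
    G2l = centre - hw2 >= lam   # inner set of the 2-sided pair
    return {"G1u": x_grid[G1u], "G1l": x_grid[G1l],
            "G2u": x_grid[G2u], "G2l": x_grid[G2l]}
\end{verbatim}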
Suppose we are interested in the doses $x$, within the dose range $K = [-2.30, -0.05]$ of the study, such that $p(x) \ge 0.5$, that is, we want to identify the level set $G = \{ x \in K: \ \beta_0 + \beta_1 x \ge \lambda \}$ with $\lambda = {\rm logit}(0.5) = 0$, due to the fact that ${\rm logit} (p)$ is monotone increasing in $p \in (0,1)$. Now the method of Section 2 can be used to construct various confidence sets for $G$.
\begin{figure}[htbp]
\begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=7.5cm]{Examp3_a.png} \parbox{15.5cm}{\small \hspace{1.2cm}(a) 1-sided upper confidence set $\hat{G}_{1u}$} \end{minipage} \hspace{1ex}
\begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=7.5cm]{Examp3_b.png} \parbox{15.5cm}{\small \hspace{1.2cm}(b) 1-sided lower confidence set $\hat{G}_{1l}$} \end{minipage}
\begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=7.5cm]{Examp3_c.png} \parbox{15.5cm}{\small \hspace{1.2cm}(c) 2-sided confidence set $[\hat{G}_{2l}, \hat{G}_{2u}]$} \end{minipage} \hspace{1ex}
\begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=7.4cm]{Examp3_d.png} \parbox{15.5cm}{\small \hspace{1.2cm} (d) All the confidence sets} \end{minipage}
\begin{center} \caption{The $95\%$ approximate confidence sets in Example 3, indicated by the red line segments.} \end{center}
\end{figure}
From Section 2, simultaneous confidence bands for $\beta_0 + \beta_1 x$ over $x \in K$ need to be constructed first in order to construct the confidence sets for $G$. Note, however, only approximate $1- \alpha$ confidence bands of the forms in (1-3), with $\hat{\sigma} = 1$ and $(X^TX)^{-1}$ replaced with $\hat{\cal I}^{-1}$, can be constructed by using the approximate normal distribution $N_2 (\mbox{\boldmath $\beta$} , \hat{\cal I}^{-1})$ of $\hat{\mbox{\boldmath $\beta$}}$. Hence the confidence sets for $G$ are also of approximate $1- \alpha$ level. For $1-\alpha = 95\%$ and $K = [-2.30, -0.05]$, $c_2$ is computed to be 2.42 and $c_1$ is computed to be $2.14$ by using the method of Liu {\it et al.} (2008); see also Liu (2010, Section 8.2). Figure 3(a) plots the approximate 1-sided upper simultaneous confidence band $\tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} + c_1 \sqrt{ \tilde{\pmb x}^T \hat{\cal I}^{-1} \tilde{\pmb x} }$ over $x \in K$, and the approximate 1-sided upper confidence set $\hat{G}_{1u} = [-1.61, -0.05]$ on the $x$-axis. As before, the boundary of $\hat{G}_{1u}$, $-1.61$, is given by the projection, to the $x$-axis, of the intersection between the horizontal line at height $\lambda = 0$ and the upper simultaneous confidence band over the interval $x \in K$. The upper confidence set $\hat{G}_{1u}$ tells us that, with approximate confidence level 95\%, only those doses $x$ in $\hat{G}_{1u}$ may have $p(x) \ge 0.5$. Similarly, Figure 3(b) plots the approximate 1-sided lower simultaneous confidence band $\tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} - c_1 \sqrt{ \tilde{\pmb x}^T \hat{\cal I}^{-1} \tilde{\pmb x} }$ over $x \in K$, and the approximate 1-sided lower confidence set $\hat{G}_{1l} = [-1.33, -0.05]$ on the $x$-axis. The lower confidence set $\hat{G}_{1l}$ tells us that, with approximate confidence level 95\%, doses $x$ in $\hat{G}_{1l}$ have $p(x) \ge 0.5$.
Figure 3(c) plots the approximate 2-sided simultaneous confidence band $[ \tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} - c_2 \sqrt{ \tilde{\pmb x}^T \hat{\cal I}^{-1} \tilde{\pmb x} },\, \tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} + c_2 \sqrt{ \tilde{\pmb x}^T \hat{\cal I}^{-1} \tilde{\pmb x} }]$ over $x \in K$, and the corresponding approximate 2-sided confidence set $[\hat{G}_{2l}, \hat{G}_{2u}] = \left[[-1.32,-0.05], [-1.64, -0.05] \right]$ on the $x$-axis. The two-sided confidence set tells us that, with approximate confidence level at least 95\%, doses $x$ in $\hat{G}_{2u}$ may have, and doses $x$ in $\hat{G}_{2l}$ have, $p(x) \ge 0.5$. Figure 3(d) plots $\hat{G}_{1u}$, $\hat{G}_{1l}$ and $[\hat{G}_{2l}, \hat{G}_{2u}]$ in the same picture for the purpose of comparison. From this figure, it is clear again that $\hat{G}_{2l} \subseteq \hat{G}_{1l} \subseteq \hat{G}_{1u} \subseteq \hat{G}_{2u}$, as pointed out in Section 2, though the differences between $\hat{G}_{2l}$ and $\hat{G}_{1l}$, and between $\hat{G}_{1u}$ and $\hat{G}_{2u}$, are very small.
\section{Conclusion and discussion}
In this paper, the construction of confidence sets for the level set of a linear regression model is discussed. Upper, lower and two-sided confidence sets of level $1-\alpha$ are constructed for the normal-error linear regression model. It is shown that these confidence sets are constructed from the corresponding $1-\alpha$ level simultaneous confidence bands. Hence these confidence sets and simultaneous confidence bands are closely related. It is noteworthy that the sample size $n$ only needs to satisfy $\nu = n-p-1 \ge 1$, i.e. $n \ge p+2$, so that the regression coefficients $\mbox{\boldmath $\beta$}$ and the error variance $\sigma^2$ can be estimated. So long as $n \ge p+2$, the theorem in Section 2 holds. A larger sample size $n$ will make the confidence sets closer to the level set, which is similar to the situation for the usual confidence sets for the mean of a normally-distributed population. Hence the method for linear regression provided in this paper is much simpler than that for nonparametric regression and density level sets (cf. Mammend and Polonik, 2013, Chen {\it et al.}, 2017, Qiao and Polonik, 2019). In the theorem in Section 2, the minimum coverage probability over the whole parameter space $\mbox{\boldmath $\beta$} \in \Re^{p+1}$ and $\sigma >0$ is sought since no assumption is made about any prior information on $\mbox{\boldmath $\beta$}$ or $\sigma >0$. If it is known {\it a priori} that $\mbox{\boldmath $\beta$}$ and $\sigma$ are in a restricted space, then the usual estimators $\hat{\mbox{\boldmath $\beta$}}$ and $\hat{\sigma}$ should be replaced by the maximum likelihood estimators over the restricted space, and the minimum coverage probability should also be taken over this restricted space. This situation becomes more complicated and is beyond the scope of this paper. It is also pointed out that the construction method is readily applicable to other parametric regression models where the mean response depends on a linear predictor through a monotonic link function. Examples are generalized linear models, linear mixed models and generalized linear mixed models. The illustrative Example 3 involves a generalized linear model. Therefore the method proposed in this paper is widely applicable. Thus far, we have been unable to establish whether the two-sided confidence set $[\hat{G}_{2l}, \hat{G}_{2u}]$ has confidence level exactly $1- \alpha$.
Construction of a two-sided confidence set of exact confidence level $1- \alpha$ is clearly of interest and warrants further research. We are actively researching this and hope to report the results in the near future.
\section{Appendix}
In this appendix, a proof of the theorem in Section 2 is sketched. To prove the statement in (7), we have
\begin{eqnarray}
& & \left\{ \, G \subseteq \hat{G}_{1u} \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in G: \, {\pmb x} \in \hat{G}_{1u} \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in G: \, \ \tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} + c_1 \hat{\sigma} m( {\pmb x} ) \ge \lambda \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in G: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) + c_1 \hat{\sigma} m( {\pmb x} ) \ge \lambda - \tilde{\pmb x}^T \mbox{\boldmath $\beta$} \, \right\} \nonumber \\
& \supseteq & \left\{ \, \forall \, {\pmb x} \in G: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) + c_1 \hat{\sigma} m( {\pmb x} ) \ge 0 \, \right\} \nonumber \\
& \supseteq & \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) + c_1 \hat{\sigma} m( {\pmb x} ) \ge 0 \, \right\} \nonumber
\end{eqnarray}
where the second equation follows directly from the definition of $\hat{G}_{1u}$, the first ``$\supseteq$'' follows directly from the definition of $G$, and the second ``$\supseteq$'' follows directly from the fact that $G \subseteq K$. It follows therefore
\begin{equation}
{\rm P} \left\{ \, G \subseteq \hat{G}_{1u} \, \right\} \ge {\rm P} \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) + c_1 \hat{\sigma} m( {\pmb x} ) \ge 0 \, \right\} = 1-\alpha
\end{equation}
where the last equality is directly due to the fact that $\tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} + c_1 \hat{\sigma} m( {\pmb x} )$ is an upper simultaneous confidence band for $\tilde{\pmb x}^T {\mbox{\boldmath $\beta$}}$ over $ {\pmb x} \in K$ of exact $1-\alpha$ level, as given in (1). Next we show that the minimum probability over $\mbox{\boldmath $\beta$} \in \Re^{p+1}$ and $\sigma >0$ in statement (7) is $1-\alpha$, attained at $\mbox{\boldmath $\beta$} = (\lambda, 0, \ldots, 0)^T$.
At $\mbox{\boldmath $\beta$} = (\lambda, 0, \ldots, 0)^T$, we have $G = K$ and so
\begin{eqnarray}
& & \left\{ \, G \subseteq \hat{G}_{1u} \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in K: \, {\pmb x} \in \hat{G}_{1u} \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} + c_1 \hat{\sigma} m( {\pmb x} ) \ge \lambda \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) + c_1 \hat{\sigma} m( {\pmb x} ) \ge \lambda - \tilde{\pmb x}^T \mbox{\boldmath $\beta$} \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) + c_1 \hat{\sigma} m( {\pmb x} ) \ge 0 \, \right\} \, \nonumber
\end{eqnarray}
which gives
\begin{equation}
{\rm P} \left\{ \, G \subseteq \hat{G}_{1u} \, \right\} = {\rm P} \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) + c_1 \hat{\sigma} m( {\pmb x} ) \ge 0 \, \right\} = 1-\alpha \, .
\end{equation}
The combination of (11) and (12) proves the statement in (7). Now we prove the statement in (8). For a given set $A \subseteq K$, let $A^c$ denote the complement set within $K$, i.e. $A^c = K \backslash A$. We have
\begin{eqnarray}
& & \left\{ \, \hat{G}_{1l} \subseteq G \, \right\} \nonumber \\
& = & \left\{ \, G^c \subseteq \hat{G}_{1l}^c \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in G^c: \, {\pmb x} \in \hat{G}_{1l}^c \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in G^c: \, \ \tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} - c_1 \hat{\sigma} m( {\pmb x} ) < \lambda \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in G^c: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) - c_1 \hat{\sigma} m( {\pmb x} ) < \lambda - \tilde{\pmb x}^T \mbox{\boldmath $\beta$} \, \right\} \nonumber \\
& \supseteq & \left\{ \, \forall \, {\pmb x} \in G^c: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) - c_1 \hat{\sigma} m( {\pmb x} ) \le 0 \, \right\} \nonumber \\
& \supseteq & \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) - c_1 \hat{\sigma} m( {\pmb x} ) \le 0 \, \right\} \nonumber
\end{eqnarray}
where the third equation follows directly from the definition of $\hat{G}_{1l}$ (or $\hat{G}_{1l}^c$), the first ``$\supseteq$'' follows directly from the definition of $G$ (or $G^c$), and the second ``$\supseteq$'' follows directly from the fact that $G^c \subseteq K$. It follows therefore
\begin{equation}
{\rm P} \left\{ \, \hat{G}_{1l} \subseteq G \, \right\} \ge {\rm P} \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) - c_1 \hat{\sigma} m( {\pmb x} ) \le 0 \, \right\} = 1-\alpha
\end{equation}
where the last equality is directly due to the fact that $\tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} - c_1 \hat{\sigma} m( {\pmb x} )$ is a lower simultaneous confidence band for $\tilde{\pmb x}^T {\mbox{\boldmath $\beta$}}$ over $ {\pmb x} \in K$ of exact $1-\alpha$ level, as given in (2).
Next we show that the minimum probability over $\mbox{\boldmath $\beta$} \in \Re^{p+1}$ and $\sigma >0$ in statement (8) is $1-\alpha$, attained at $\mbox{\boldmath $\beta$} = (\lambda^-, 0, \ldots, 0)^T$, where $\lambda^-$ denotes a number that is infinitesimally smaller than $\lambda$. At $\mbox{\boldmath $\beta$} = (\lambda^-, 0, \ldots, 0)^T$, we have $G^c = K$ and so
\begin{eqnarray}
& & \left\{ \, \hat{G}_{1l} \subseteq G \, \right\} \nonumber \\
& = & \left\{ \, G^c \subseteq \hat{G}_{1l}^c \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in G^c: \, {\pmb x} \in \hat{G}_{1l}^c \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in K: \, {\pmb x} \in \hat{G}_{1l}^c \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T \hat{\mbox{\boldmath $\beta$}} - c_1 \hat{\sigma} m( {\pmb x} ) < \lambda \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) - c_1 \hat{\sigma} m( {\pmb x} ) < \lambda - \tilde{\pmb x}^T \mbox{\boldmath $\beta$} \, \right\} \nonumber \\
& = & \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) - c_1 \hat{\sigma} m( {\pmb x} ) < 0 \, \right\} \, \nonumber
\end{eqnarray}
which gives
\begin{equation}
{\rm P} \left\{ \, \hat{G}_{1l} \subseteq G \, \right\} = {\rm P} \left\{ \, \forall \, {\pmb x} \in K: \, \ \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) - c_1 \hat{\sigma} m( {\pmb x} ) < 0 \, \right\} = 1-\alpha \, .
\end{equation}
The combination of (13) and (14) proves the statement in (8). The statement in (9) can be proved by combining the arguments that establish (11) and (13) above to show that
$$ \left\{ \, \hat{G}_{2l} \subseteq G \subseteq \hat{G}_{2u} \, \right\} \supseteq \left\{ \, \forall \, {\pmb x} \in K: \, \ -c_2 \hat{\sigma} m( {\pmb x} ) \le \tilde{\pmb x}^T (\hat{\mbox{\boldmath $\beta$}} - \mbox{\boldmath $\beta$}) < c_2 \hat{\sigma} m( {\pmb x} ) \, \right\}; $$
details are omitted here to save space. Unfortunately, a least favorable configuration of $\mbox{\boldmath $\beta$}$ that achieves the coverage probability $1-\alpha$ cannot be identified in this case, and so $1-\alpha$ is only a lower bound on the confidence level.
\renewcommand{\refname}{References}
\begin{thebibliography}{100}
\item Cadre, B. (2006). Kernel estimation of density level sets. {\it J. Multivariate Anal.}, 97 (4), 999 - 1023.
\item Cavalier, L. (1997). Nonparametric estimation of regression level sets. {\it Statistics}, 29 (2), 131 - 160.
\item Chen, Y.-C., Genovese, C.R., and Wasserman, L. (2017). Density level sets: Asymptotics, inference, and visualization. {\it J. Amer. Statist. Assoc.}, 112 (520), 1684 - 1696.
\item Dau, H. D., Lalo\"e, T., and Servien, R. (2020). Exact asymptotic limit for kernel estimation of regression level sets. {\it Statistics and Probability Letters}, 161:108721.
\item Faraway, J. J. (2016). \textit{ Extending the Linear Model with R, 2nd Edition}. {CRC Press}.
\item Hartigan, J.A. (1987). Estimation of a convex density contour in two dimensions. {\it J. Amer. Statist. Assoc.}, 82 (397), 267 - 270.
\item Herrera, F., Carmona, C. J., Gonz\'alez, P., and Del Jesus, M. J. (2011). An overview on subgroup discovery: foundations and applications.
{\it Knowledge and Information Systems}, 29(3):495 - 525. \item Liu, W. (2010). \textit{ Simultaneous Inference in Regression}. { CRC Press}. \item Liu, W. and Hayter, A.J. (2007). Minimum area confidence set optimality for confidence bands in simple linear regression. {\it J. Amer. Statist. Assoc.}, {\bf 102}, 181-190. \item Liu, W., Jamshidian, M., Zhang, Y., and Donnelly, J. (2005). Simulation-based simultaneous confidence bands in multiple linear regression with predictor variables constrained in intervals. {\it J. Comput. and Graph. Statist.}, {\bf 14}, 459-484. \item Liu, W., Wynn, H.P., and Hayter, A.J. (2008). Statistical inferences for linear regression models when the covariates have functional relationships: polynomial regression. {\em J. Statist. Comput. and Simulat.}, {\bf 78}, 315-324. \item Mammend, E., and Polonik, W. (2013). Confidence regions for level sets. {\em J. Multivariate Analysis}, 122, 202-214. \item Mason, D.M., and Polonik, W. (2009). Asymptotic normality of plug-in level set estimates. {\it Ann. Appl. Probab.}, 19 (3), 1108 - 1142. \item McCulloch, C. E., and Searle, S. R. (2001). \textit{ Generalized, Linear, and Mixed Models}. {Wiley}. \item Myers, R.H., Montgomery, D.C. and Vining, G.G. (2002). {\it Generalized Linear Models with Applications in Engineering and the Sciences}. Wiley. \item Naiman, D.Q. (1984). Optimal simultaneous confidence bounds. {\it Ann. Statist.}, {\bf 12}, 702-715. \item Naiman, D.Q. (1986). Conservative confidence bands in curvilinear regression. {\it Ann. Statist.}, {\bf 14}, 896-906. \item Piegorsch, W.W. (1985a). Admissible and optimal confidence bands in simple linear regression. {\it Ann. Statist.}, {\bf 13}, 801-810. \item Piegorsch, W.W. (1985b). Average-width optimality for confidence bands in simple linear-regression. {\it J. Amer. Statist. Assoc.}, {\bf 80}, 692-697. \item Polonik, W. and Wang, Z. (2005). Estimation of regression contour clusters: an application of the excess mass approach to regression. {\it J. Multivariate Anal.}, 94, 227 - 249. \item Qiao, W. and Polonik, W. (2019) Nonparametric confidence regions for level sets: statistical properties and geometry. Electronic Journal of Statistics, {\bf 13}, 985-1030. \item Reeve, H.W.J., Cannings, T.I., and Samworth, R.J. (2021). Optimal subgroup selection. 2109.01077.pdf (arxiv.org). \item Scott, C., Davenport, M. (2007). Regression level set estimation via cost-sensitive classification. {\it IEEE Trans. Signal Process.}, 55 (6, part 1), 2752 - 2757. \item Sun, J., and Loader, C.R. (1994). Simultaneous confidence bands for linear regression and smoothing. {\it Ann. Statist.}, {\bf 22}, 1328-1346. \item Ting, N., Cappelleri, J. C., Ho, S., and Chen, D.-G. (2020). {\it Design and Analysis of Subgroups with Biopharmaceutical Applications}. Springer. \item Tsybakov, A.B. (1997). On nonparametric estimation of density level sets. {\it Ann. Statist.}, 25 (3), 948 - 969. \item Wan, F., Liu, W., Han, Y., and Bretz, F. (2015). An exact confidence set for a maximum point of a univariate polynomial function in a given interval. {\it Technometrics}, 57(4), 559-565. \item Wan, F., Liu, W., Bretz, F., and Han, Y. (2016). Confidence sets for optimal factor levels of a response surface. {\it Biometrics}, 72(4), 1285-1293. \item Wang, R., Lagakos, S. W., Ware, J. H., Hunter, D. J., and Drazen, J. M. (2007). Statistics in medicine — reporting of subgroup analyses in clinical trials. {\it New England Journal of Medicine}, 357(21): 2189 - 2194. PMID: 18032770. 
\item Willett, R.M., Nowak, R.D. (2007). Minimax optimal level-set estimation. {\it IEEE Trans. Image Process.}, 16 (12), 2965 - 2979. \item Wynn, H.P., and Bloomfield, P. (1971). Simultaneous confidence bands in regression analysis. {\it JRSS (B)}, {\bf 33}, 202-217. \end{thebibliography} \label{lastpage} \end{document}
\begin{document} \title{Lipschitz extensions to finitely many points} \author{Giuliano Basso} \date{\today} \maketitle \begin{abstract} We consider Lipschitz maps with values in quasi-metric spaces and extend such maps to finitely many points. We prove that in this context every 1-Lipschitz map admits an extension such that its Lipschitz constant is bounded from above by the number of added points plus one. Moreover, we prove that if the source space is a Hilbert space and the target space is a Banach space, then there exists an extension such that its Lipschitz constant is bounded from above by the square root of the total of added points plus one. We discuss applications to metric transforms. \end{abstract} \tableofcontents \section{Introduction} Lipschitz maps are generally considered as an indispensable tool in the study of metric spaces. The need for a Lipschitz extension of a given Lipschitz map often presents itself naturally. Deep extension results have been obtained by Johnson, Lindenstrauss, and Schechtman \cite{johnson1986extensions}, Ball \cite{ball1992markov}, Lee and Naor \cite{lee2005extending}, and Lang and Schlichenmaier \cite{LangSchlichenmaier}. The literature surrounding Lipschitz extension problems is vast, for a recent monograph on the subject see \cite{brudnyi2011methods, brudnyimethods} and the references therein. Before we explain our results in detail, we start with a short presentation of what we will call the Lipschitz extension problem. Let \((X,\rho_X)\) be a \textit{quasi-metric space}, that is, the function \(\rho_X\colon X\times X\to \mathbb{R}\) is non-negative, symmetric and vanishes on the diagonal, cf. \cite[p. 827]{schoenberg1938}. Unfortunately, the term ``quasi-metric space'' has several different meanings in the mathematical literature. In the present paper, we stick to the definition given above. Let \(S\subset X\) be a subset and let \((Y, \rho_Y)\) be a quasi-metric space. A \textit{Lipschitz map} is a map \(f\colon S\to Y\) such that the quantity \begin{equation*} \Lip(f):=\inf\left\{ L\geq 0 : \textrm{for all points } x,x^\prime\in S\colon \rho_Y(f(x), f(x^\prime)) \leq L \rho_X(x,x^\prime) \right\} \end{equation*} is finite. We use the convention \(\inf \varnothing =+\infty\). We consider the following Lipschitz extension problem: \begin{question} Let \((X,d_X)\) be a metric space, let \((Y,\rho_Y)\) be a quasi-metric space, and suppose that \(S\subset X\) is a subset of \(X\). Under what conditions on \(S, X\) and \(Y\) is there a real number \(D\geq 1\) such that every Lipschitz map \(f\colon S\to Y\) has a Lipschitz extension \(\overline{f}\colon X \to Y\) with \(\Lip\big(\,\overline{f}\,\big)\leq D\Lip(f)\)? \end{question} Let \(e(X,S;Y)\) denote the infimum of the \(D\)'s satisfying the desired property in the ``Lipschitz extension problem''. Given integers \(n, m\geq 1\), we define \begin{equation*} \begin{split} e_n(X,Y)&:=\sup\big\{ e(X,S;Y) : S\subset X, \, \abs{S} \leq n\big\}, \\ e^m(X,Y)&:=\sup\big\{ e(S\cup T,S;Y) : S,T\subset X, \, S \textrm{ closed}, \, \abs{T} \leq m\big\}. \end{split} \end{equation*} We use \(\abs{\,\cdot\,}\) or \(\card(\cdot)\) to denote the cardinality of a set. The Lipschitz extension modulus \(e_n(X,Y)\) has been studied intensively in various settings. Nevertheless, many important questions surrounding \(e_n(X,Y)\) are still open, cf. \cite{naor2017lipschitz} for a recent overview. In the present article, we are interested in an upper bound for \(e^m(X,Y)\). We get the following result. 
\begin{theorem}\label{cor:main} Let \((X, d_X)\) be a metric space and let \((Y,\rho_Y)\) be a quasi-metric space. If \(m\geq 1\) is an integer, then \begin{equation}\label{main12} e^{m}(X,Y) \leq m+1. \end{equation} \end{theorem} A constructive proof of Theorem 1.1 is given in Section \ref{sec:two}. The estimate \eqref{main12} is optimal. This follows from the following simple example. We set \(P_{m+1}:=\{0, 1, \ldots, m+1\}\subset \mathbb{R}\) and we consider the subset \(S=Y=\{0, m+1\}\subset P_{m+1}\) and the map \(f\colon S\to Y\) given by \(x\mapsto x\). Suppose that \(F\colon P_{m+1} \to Y\) is a Lipschitz extension of \(f\) to \(P_{m+1}\). Without effort it is verified that \(\Lip(F)=(m+1)\Lip(f)\); hence, it follows that \eqref{main12} is sharp. The sharpness of Theorem \ref{cor:main} allows us to obtain a lower bound for the parameter \(\alpha(\omega)\) of the dichotomy theorem for metric transforms \cite[Theorem 1]{mendel2011note}, see Corollary \ref{cor:Ftrans}. If the condition that the subset \(S\subset X\) has to be closed is removed in the definition of \(e^m(X,Y)\), then Theorem \ref{cor:main} is not valid. Indeed, if \((X,d_X)\) is not complete and \(z\in \overline{X}\) is a point contained in the completion \(\overline{X}\) of \(X\) such that \(z\notin X\), then the identity map \(\id_X\colon X\to X\) does not extend to a Lipschitz map \(\overline{\id_X}\colon X\cup \{z\}\to X\) if we equip \(X\cup \{z\} \subset \overline{X}\) with the subspace metric. This is a well-known obstruction. As pointed out by Naor and Mendel, there is the following upper bound of \(e^m(X,Y)\) in terms of \(e_m(X,Y)\) . \begin{lemma}[\textrm{Claim 1 in \cite{mendel2017relation}}]\label{Lem:Naor} Let \((X, d_X)\) and \((Y,d_Y)\) be two metric spaces. If \(m\geq 1 \) is an integer, then \begin{equation*} e^m(X,Y) \leq e_m(X,Y)+2. \end{equation*} \end{lemma} By the use of Lemma \ref{Lem:Naor} and \cite[Theorem 1.10]{lee2005extending}, one can deduce that if \((X,d_X)\) is a metric space and \((E, \norm{\cdot}_{_E})\) is a Banach space, then \begin{equation*} e^m(X,E) \lesssim \frac{\log(m)}{\log\big(\log(m)\big)} \end{equation*} for all integers \(m\geq 3\), where the notation \(A \lesssim B\) means \(A \leq C B\) for some universal constant \(C\in (0, +\infty)\). As a result, for sufficiently large integers \(m\geq 3\) the estimate in Theorem \ref{cor:main} is not optimal if we restrict the target spaces to the class of Banach spaces. In Section \ref{sec:sharpness}, we present an example that shows that for Banach space targets the estimate \eqref{main12} is sharp if \(m=1\). As a byproduct of the construction in Section \ref{sec:sharpness}, we obtain the lower bound \begin{equation}\label{eq:estimate100} e(\ell_2, \ell_1) \geq \sqrt{2}, \end{equation} where \(e(\ell_2, \ell_1):=\sup\big\{ \,e(\ell_2, S; \ell_1) : S\subset \ell_2\big\}\). It is unknown if \(e(\ell_2, \ell_1)\) is finite or infinite. This question has been raised by Ball, cf. \cite{ball1992markov}. Let \((Y,\rho_Y)\) be a quasi-metric space and let \(F\colon [0,+\infty) \to [0,+\infty)\) be a map with \(F(0)=0\). The \textit{\(F\)-transform of \(Y\)}, denoted by \(F[Y]\), is by definition the quasi-metric space \((Y, F\circ \rho_Y)\). Our main result can be stated as follows: \begin{theorem}\label{hopefullySolved} Let \((H, \langle \cdot, \cdot \rangle_{_H})\) be a Hilbert space and let \((E, \norm{\cdot}_{_E})\) be a Banach space. 
Suppose that \(F\colon [0,+\infty)\to [0,+\infty)\) is a map such that the composition \(F(\sqrt{\cdot})\) is a strictly-increasing concave function with \(F(0)=0\). If \(X\subset F[H]\) is a finite subset, \(S\subset X\), and \(f\colon S\to E\) is a map, then there is a Lipschitz extension \(\overline{f}\colon X\to \Conv(f(S))\) such that \begin{equation}\label{main123} \Lip\left(\,\overline{f}\,\right) \leq \sup_{x>0} \frac{F\big(\sqrt{m+1} \,x\big)}{F(x)}\,\Lip(f), \end{equation} where \(m:=\abs{X\setminus S}\). \end{theorem} Theorem \ref{hopefullySolved} is optimal if \(m=1\) and \(F=\id\), see Proposition \ref{prop:lower}. Via this sharpness result we obtain that certain \(F\)-transforms of \(\ell_p\), for \(p>2\), do not isometrically embed into \(\ell_2\), see Corollary \ref{cor:snowflake}. Suppose that \(F\colon [0, +\infty)\to [0,+\infty)\) is a strictly-increasing continuous function such that the \(F\)-transform of \(\ell_2\) embeds isometrically into a Hilbert space. By a celebrated result of Schoenberg \(F(\sqrt{\cdot})^2\) is a Bernstein function, cf. \cite[Theorem \(6^\prime\)]{schoenberg1938}; thus, the function \(F(\sqrt{\cdot})\) is concave and therefore satisfies the assumptions on \(F\) in Theorem \ref{hopefullySolved}. This provides a natural class of examples for which Theorem \ref{hopefullySolved} may be applied. Let \(0 <\alpha \leq 1\) and \(L\geq 0\) be real numbers. An \textit{\((\alpha, L)\)-H\"older map} is a map \(f\colon X\to Y\) such that \[d_Y(f(x), f(x^\prime)) \leq L d_X(x,x^\prime)^\alpha\] for all points \(x,x^\prime \in X\). By considering the function \(F(x)=x^\alpha\), with \(0 < \alpha \leq 1\), we obtain the following direct corollary of Theorem \ref{hopefullySolved}. \begin{corollary} Let \((H, \langle \cdot, \cdot \rangle_{_H})\) be a Hilbert space, let \((E, \norm{\cdot}_{_E})\) be a Banach space and let \(0 <\alpha \leq 1\) and \(L\geq 0\) be real numbers. If \(X\subset H\) is a finite subset, \(S\subset X\), and \(f\colon S\to E\) is an \((\alpha, L)\)-H\"older map, then there is an extension \(\overline{f}\colon X\to \Conv(f(S))\) of \(f\) such that \(\overline{f}\) is an \((\alpha, \overline{L}\,)\)-H\"older map with \begin{equation*} \overline{L} \leq \left(\sqrt{m+1}\right)^\alpha \, L, \end{equation*} where \(m:=\abs{X\setminus S}\). \end{corollary} Along the lines of the proof of Claim 1 in \cite{mendel2017relation} one can show that if \((X,d_X)\) and \((Y, d_Y)\) are metric spaces, then for all integers \(m\geq 1\) we have \begin{equation*} e^m(X,Y) \leq \sup_{n\geq 1} \,e^m_{n}(X,Y)+2, \end{equation*} where \begin{equation*} e_n^m(X,Y):=\sup\big\{ e(S\cup T,S;Y) : S,T\subset X,\, \abs{S} \leq n,\, \abs{T} \leq m\big\}. \end{equation*} Thus, by the use of Theorem \ref{hopefullySolved}, we may deduce that if \(H\) is a Hilbert space and \(E\) is a Banach space, then \begin{equation}\label{eq:estLog} e^m(H,E) \leq \sqrt{m+1}+2 \end{equation} for all integers \(m\geq 1\). In \cite[Theorem 1.12]{lee2005extending}, Lee and Naor demonstrate that \(e_n(H, E) \lesssim \sqrt{\log(n)}\) for all integers \(n\geq 2\). Thus, via this estimate (and Lemma \ref{Lem:Naor}) it is possible to obtain the upper bound \begin{equation*} e^m(H,E) \lesssim \sqrt{\log(m)} \end{equation*} that has a better asymptotic behaviour than estimate \eqref{eq:estLog}. However, since Lee and Naor use different methods, we believe that our approach has its own interesting aspects. The paper is structured as follows. 
In Section \ref{sec:app}, we derive some corollaries of our main results. In Section \ref{sec:two} we prove Theorem \ref{cor:main} and in Section \ref{sec:sharpness} we show that our extension results are sharp for one point extensions. In \cite{ball1992markov}, Ball introduced the notions of Markov type and Markov cotype of Banach spaces. To establish Theorem \ref{hopefullySolved} we estimate quantities that are of similar nature. The necessary estimates are obtained in Section \ref{sec:quadraticForm} and Section \ref{sec:betterName}. In Section \ref{sec:betterName}, we deal with M-matrices, which appear naturally in the proof of Theorem \ref{hopefullySolved}. M-matrices have first been considered by Ostrowski, cf. \cite{ostrowski1937determinanten}, and since then have been investigated in many areas of mathematics, cf. \cite{poole1974survey}. The main result of Section \ref{sec:betterName}, Theorem \ref{thm:estiMate}, may be of independent interest for the general theory of M-matrices. Finally, a proof of Theorem \ref{hopefullySolved} is given in Section \ref{sec:mainProof}. \section{Embeddings and indices of \(F\)-transforms}\label{sec:app} In this section we collect some applications of our main theorems. Let \((X,\rho_X)\) and \((Y, \rho_Y)\) be quasi-metric spaces and let \(f\colon X\to Y\) be an injective map. We set \(\dist(f):=\Lip(f)\Lip(f^{-1})\) and \[c_Y(X):=\inf \big\{\dist(f) : f\colon X\to Y \textrm{ injective } \big\}.\] The sharpness of \eqref{main123} if \(m=1\) allows us to derive a necessary condition for an \(F\)-transform of an \(\ell_p\)-space to embed into a Hilbert space. \begin{corollary}\label{cor:snowflake} Let \((H, \langle \cdot, \cdot \rangle_{_H})\) be a Hilbert space and suppose that \(F\colon [0,+\infty) \to [0,+\infty)\) is a function such that \(F(0)=0\) and \begin{equation*} \sup\limits_{x>0} \frac{F(x)}{x} <+\infty. \end{equation*} If \(p\in [1,+\infty]\) is an extended real number and \[\sup \big\{ c_{H}(A) : A \subset F[\ell_p], \, A \textrm{ finite } \big\} \leq 2^\ensuremath\varepsilon, \quad \textrm{ where } \ensuremath\varepsilon \in \big[0,\frac{1}{2}\big),\] then \(p \leq \left(\frac{1}{2}-\ensuremath\varepsilon\right)^{-1}\). \end{corollary} The proof of Corollary \ref{cor:snowflake} is given at the end of Section \ref{sec:sharpness}. If \( 2 < p < +\infty\) is a real number and the \(F\)-transform \(F[\ell_p]\) embeds isometrically into a Hilbert space, then \[F(x)=F_a(x)= \begin{cases} 0 & x=0 \\ a & x>0 \end{cases} \quad \textrm{ where } a \geq 0; \] this follows essentially by combining a result of Kuelbs \cite[Corollary 3.1]{kuelbs1973positive} with a classical result that relates isometric embeddings to positive definite functions, cf. for example \cite[Theorem 4.5]{WellsJamesH1975Eaei}. Furthermore, by a result of Johnson and Randrianarivony, \(\ell_p\) with \(p > 2\) does not admit a coarse embedding into \(\ell_2\), cf. \cite{10.2307/4098068,mendel2008metric}. We proceed with an application of Theorem \ref{cor:main}. Let \(F\colon [0,+\infty)\to [0,+\infty)\) be a function with \(F(0)=0\). Suppose that \(F\) is subadditive and strictly increasing. We define \[D_F(\alpha)=\sup_{x >0} \frac{F(\alpha x)}{F(x)}\] for all \(\alpha \geq 0\). Clearly, the function \(D_F\colon [0,+\infty)\to [0,+\infty) \) is finite, submutliplicative and non-decreasing. Moreover, \[F(\alpha x) \leq D_F(\alpha) F(x)\] for all real numbers \(x, \alpha \geq 0\). 
The \textit{upper index} of \(F\) is defined by \begin{equation}\label{eq:limit1} \beta(F)=\lim_{\alpha \to +\infty} \frac{\log(D_F(\alpha))}{\log(\alpha)}. \end{equation} The existence of the limit \eqref{eq:limit1} may be deduced via the general theory of subadditive functions, since \(D_F\) is submultiplicative and non-decreasing, cf. \cite[Remark 1.3 (b)]{LechMaligranda1985}. We have \(0 \leq \beta(F) \leq 1\), for \(F\) is subadditive. If \((X,d_X)\) is a metric space, we set \[c_F(X):=\inf\big\{ c_{F[Y]}(X) : (Y, d_Y) \textrm{ metric space } \big\}.\] In \cite[Theorem 1]{mendel2011note}, Mendel and Naor obtained a dichotomy theorem for the quantity \(c_F(X)\), if \(F\) is concave and non-decreasing. The upper index of \(F\) allows us to obtain lower bounds for the rate of growth of \(c_F(P_n)\), where \(P_n:=\{0,1, \ldots, n\}\subset \mathbb{R}\). \begin{corollary}\label{cor:Ftrans} Let \(F\colon [0,+\infty)\to [0,+\infty)\) be a strictly-increasing subadditive function with \(F(0)=0\). If \(\,0\leq \alpha < 1-\beta(F)\,\) is a real number, then there exists an integer \(N\geq 1\) such that \[n^\alpha \leq c_F(P_n)\] for all \(n \geq N\). \end{corollary} \begin{proof} We may assume that \(\beta(F)<1\). Let \((Y, \rho_Y)\) be a quasi-metric space and let \((X, d_X)\) be a metric space. We may employ Theorem \ref{cor:main} to conclude that \begin{equation}\label{eq:Ftrans} e^m\big(F[X], Y\big) \leq \sup_{x >0} \frac{F\big((m+1) x\big)}{F(x)}, \end{equation} for all integers \(m\geq 0\). We set \(Y_{m}:=\{0, m\}\subset P_{m}\). Since \[e^{m-1} \big(P_{m}, Y_{m}\big)=m,\] inequality \eqref{eq:Ftrans} implies that \begin{equation}\label{eq:lowerBoundF} m\leq \sup_{x >0} \frac{F( m x)}{F(x)} c_F(P_{m})=D_F(m) c_F(P_{m}) \end{equation} for all \(m\geq 1\). Let \(\ensuremath\varepsilon >0\) be a real number such that \(\alpha< 1-\beta(F)-\ensuremath\varepsilon\). By virtue of Theorem 1.2 in \cite{LechMaligranda1985} there exists a real number \(C\geq 0\) such that \[D_F(\alpha) \leq \alpha^{\beta(F)+\ensuremath\varepsilon}\] for all \(\alpha \geq C\). Consequently, by the use of \eqref{eq:lowerBoundF} we obtain for all \(n\geq N:= \lceil C\rceil \) that \[n^\alpha \leq n^{1-\beta(F)-\ensuremath\varepsilon}\leq c_F(P_n),\] as desired. \end{proof} As a consequence of Corollary \ref{cor:Ftrans}, we conclude that if \(\beta(F)<1\), then the second possibility of the dichotomy \cite[Theorem 1]{mendel2011note} holds. Thus, the following natural question arises: if \(\beta(F)=1\), is it then true that \(c_F(X)=1\) for all finite metric spaces \((X, d_X)\)? \section{Proof of Theorem \ref{cor:main}}\label{sec:two} In this section, we derive Theorem \ref{cor:main}. \begin{proof}[Proof of Theorem \ref{cor:main}] Let \(S\subset X\) be a closed subset and let \(T\subset X\) be a finite subset such that \(S\cap T=\varnothing \) and \(\abs{T}\leq m\). Let \(f\colon S\to Y\) be a Lipschitz map. In what follows we construct for each \(\ensuremath\varepsilon >0\) a map \(F_\ensuremath\varepsilon\colon S\cup T\to Y\) that is a Lipschitz extension of \(f\) to \(S\cup T\) such that \(\Lip(F_{\ensuremath\varepsilon})\leq \left((1+\ensuremath\varepsilon)m+1\right)\Lip(f)\). We start with a few definitions. Fix \(\ensuremath\varepsilon >0\). Let \(F\subset S\) be a finite subset such that for each point \(z\in T\) there is a point \(x\in F\) with \begin{equation}\label{eq:inequality} d_X(z,x) \leq (1+\ensuremath\varepsilon)d_X(z, S).
\end{equation} Since \(S\) is closed and \(T\) is finite, such a set \(F\) clearly exists. We set \begin{equation*} E:=\big\{ \{u,v\} : u\neq v \textrm{ with } \left(u,v\in T\right) \textrm{ or } \left(u\in T, v\in F \right) \big\}. \end{equation*} Let \(G:=(V,E)\) denote the graph with vertex set \(V:=F\cup T\) and edge set \(E\). We say that a subset \(E^\prime\subset E\) is \textit{admissible} if the graph \(G^\prime:=(V,E^\prime)\) contains no cycles and has the property that if \(v,v^\prime \in F\) are distinct, then there is no path in \(G^\prime\) connecting them. For each edge \(\{u,v\}\in E\) we set \(\omega(\{u,v\}):=d_X(u,v)\). Furthermore, let \(N\geq 0\) denote the cardinality of \(E\). Let \(e\colon \{1, \ldots, N\}\to E\) be a bijective map such that the composition \(\omega\circ e\) is a non-decreasing function. We construct the sequence \(\{E_\ell\}_{\ell=0}^{N}\) of subsets of \(E\) via the following recursive rule: \begin{equation}\label{Eq:ConstructionP} E_0\coloneqq\varnothing, \quad E_\ell\coloneqq \begin{cases} \{e(\ell)\}\cup E_{\ell-1} & \textrm{ if } \{e(\ell)\}\cup E_{\ell-1} \textrm{ is admissible} \\ E_{\ell-1} & \textrm{ otherwise}. \end{cases} \end{equation} We claim that for each point \(z\in T\) there exists an integer \(L_z\geq 1\) and a unique injective path \(\gamma_z\colon \{1, \ldots, L_z\}\to E_N\) connecting \(z\) to a point \(x_z\) in \(F\). Indeed, the uniqueness part of the claim follows directly, as \(E_N\) is admissible. Now, we show the existence part. Let \(z\in T\) be a point. Choose an arbitrary point \(x\in F\). If the edge \(\{x,z\}\) is contained in \(E_N\), then an injective path \(\gamma_z\) with the desired property surely exists. Suppose now that \(\{x,z\}\notin E_N\). It follows from the recursive construction of \(E_N\) that in this case there either exists a path in \(E_N\) from \(z\) to \(x\) of length greater than or equal to two or there exists a path in \(E_N\) from \(z\) to a point \(x^\prime\in F\) distinct from \(x\). Thus, in any case an injective path \(\gamma_z\) with the desired properties exists. We define the map \(F_\ensuremath\varepsilon\colon S\cup T \to Y\) as follows: \begin{equation*} \begin{split} &F_\ensuremath\varepsilon(x):=f(x) \quad\quad\quad\,\,\,\,\,\textrm{ for all } x\in S \\ &F_\ensuremath\varepsilon (z):=f(x_z) \quad\quad\quad\,\,\textrm{ for all } z\in T. \end{split} \end{equation*} In other words, \(F_\ensuremath\varepsilon=f\circ R_\ensuremath\varepsilon\), where \(R_\ensuremath\varepsilon\colon S\cup T\to S\) is the retraction that maps \(z\in T\) to \(x_z\in S\). In what follows, we show that \(R_\ensuremath\varepsilon\) has Lipschitz constant at most \((1+\ensuremath\varepsilon)m+1\); since \(F_\ensuremath\varepsilon\) factors through this retraction, this is what allows us to impose such weak requirements on the ``distance'' in \(Y\). Now, let \(z\in T\) and \(x\in S\) be points. By the use of the triangle inequality, we compute \begin{equation}\label{eq:sillyestimate} \begin{split} &\rho_Y(F_{\ensuremath\varepsilon}(x), F_{\ensuremath\varepsilon}(z))=\rho_Y(f(x), f(x_z))\leq \Lip(f)d_X(x,x_z) \\ &\leq \Lip(f)\left(d_X(x,z)+ \sum_{\ell=1}^{L_z} \omega(\gamma_z(\ell))\right).\\ \end{split} \end{equation} Let \(x^\prime \in F\) be a point such that the pair \((z, x^\prime)\) satisfies the estimate \eqref{eq:inequality}. By the recursive construction of \(E_N\), it follows that \(\omega(\gamma_z(\ell)) \leq d_X(x^\prime, z)\) for all \(\ell \in \{1, \ldots, L_z\}\), since the function \(\omega\circ e\) is non-decreasing.
Hence, by the use of \eqref{eq:sillyestimate} we obtain \begin{equation*} \begin{split} &\rho_Y(F_{\ensuremath\varepsilon}(x), F_{\ensuremath\varepsilon}(z)) \\ &\leq \Lip(f)\left(d_X(x, z)+ L_z d_X(x^\prime, z)\right)\\ &\leq \Lip(f)\left(1+L_z(1+\ensuremath\varepsilon)\right)d_X(x, z)\\ &\leq \Lip(f)\left( (1+\ensuremath\varepsilon)m+1\right)d_X(x, z). \end{split} \end{equation*} Now, let \(z,z^\prime \in T\) be points. If \(x_z=x_{z^\prime}\), then \(F_\ensuremath\varepsilon(z)=F_\ensuremath\varepsilon(z^\prime)\), by construction. Suppose now that \(x_z\neq x_{z^\prime}\). We compute \begin{equation}\label{eq:rushh} \begin{split} &\rho_Y(F_{\ensuremath\varepsilon}(z), F_{\ensuremath\varepsilon}(z^\prime))=\rho_Y(f(x_z), f(x_{z^\prime}))\leq \Lip(f)d_X(x_z,x_{z^\prime}) \\ &\leq \Lip(f)\left(\sum_{\ell=1}^{L_z} \omega(\gamma_z(\ell)) +d_X(z, z^\prime)+\sum_{\ell=1}^{L_{z^\prime}} \omega(\gamma_{z^\prime}(\ell))\right). \end{split} \end{equation} The edge \(\{z, z^\prime\}\) is not contained in \(E_N\); thus, by the recursive construction of \(E_N\) we obtain that \(\omega(\gamma_z(\ell)) \leq \omega(\{z, z^\prime\})\) for all \(\ell\in \{1, \ldots, L_z\}\) and \(\omega(\gamma_{z^\prime}(\ell)) \leq \omega(\{z, z^\prime\})\) for all \(\ell\in \{1, \ldots, L_{z^\prime}\}\). By virtue of \eqref{eq:rushh} we deduce \begin{equation*} \begin{split} &\rho_Y(F_{\ensuremath\varepsilon}(z), F_{\ensuremath\varepsilon}(z^\prime))\\ &\leq \Lip(f)\left( L_z+1+L_{z^\prime} \right) d_X(z,z^\prime) \\ &\leq \Lip(f)(m+1)d_X(z,z^\prime). \end{split} \end{equation*} The last inequality follows, since \(E_N\) is admissible and the paths \(\gamma_z, \gamma_{z^\prime}\) are injective; thus, \(L_z+L_{z^\prime} \leq m\). We have considered all possible cases and we have established that \begin{equation*} \Lip(F_{\ensuremath\varepsilon})\leq \left((1+\ensuremath\varepsilon)m+1\right)\Lip(f), \end{equation*} as desired. This completes the proof. \end{proof} \section{One point extensions of Banach space valued maps}\label{sec:sharpness} The collection of examples that we construct in this section is inspired by \cite{grunbaum1960projection}. We define the sequence \(\{W_k\}_{k\geq 0}\) of matrices via the recursive rule \begin{equation*} \begin{split} &W_0:=1, \\ &W_{k+1}:=\begin{pmatrix} W_k & W_k \\ W_k & -W_k \end{pmatrix}. \\ \end{split} \end{equation*} The matrices \(W_k\) are commonly known as \textit{Walsh matrices}. For each integer \(k\geq 1\) let \(W_k^{\hspace{0.08em}\prime}\) denote the \((2^k-1)\times 2^k\) matrix that is obtained from \(W_k\) by deleting the first row of \(W_k\). Further, for each integer \(k\geq 1\) and each integer \(\ell \in \{1, \ldots, 2^k\}\) we set \begin{equation}\label{eq:vertices} v_\ell^{(k)}\coloneqq \ell\textrm{-th column of the matrix } W_k^{\hspace{0.08em}\prime}. \end{equation} By construction, \(v_\ell^{(k)}\in \mathbb{R}^{2^k-1}\) for all \(k\geq 1\) and \(\ell \in \{1, \ldots, 2^k\}\). Clearly, \(v_\ell^{(k)}\in \ell_{p}\) for all \(p\in [1, +\infty]\) via the canonical embedding. The goal of this section is to prove the following proposition. \begin{proposition}\label{prop:lower} Let \(p\in[1, +\infty]\) be an element of the extended real numbers and let \(k\geq 1\) be an integer.
If \(F\colon \left(\{v_1^{(k)} , \ldots , v_{2^k}^{(k)}\}\cup\{0\},\norm{\cdot}_p\right)\to \left(\ell_1, \norm{\cdot}_1\right)\) is a Lipschitz extension of the function \begin{equation*} \begin{split} &f\colon \left(\{v_1^{(k)} , \ldots , v_{2^k}^{(k)}\}, \norm{\cdot}_p\right) \to \left(\ell_1, \norm{\cdot}_1\right)\\ & v_\ell^{(k)}\mapsto v_\ell^{(k)}, \end{split} \end{equation*} then it holds that \begin{equation*} \Lip(F) \geq \left(2-\frac{1}{2^{k-1}}\right)^{\frac{1}{p_\star}} \Lip(f) , \end{equation*} where \(1/p_\star:=1-1/p\) if \(p\neq +\infty\) and \(1/p_\star:=1\) otherwise. \end{proposition} Note that Proposition \ref{prop:lower} implies in particular that \(e(\ell_2, \ell_1) \geq \sqrt{2}\). The key component in the proof of Proposition \ref{prop:lower} is the following geometric lemma. \begin{lemma}\label{Lem:Lower} Let \(k\geq 1\) be an integer and suppose that \(w\in \mathbb{R}^{2^k-1}\) is a vector such that \begin{equation}\label{Eq:Inequality} \norm{v_\ell^{(k)}-w}_1\leq \norm{v_\ell^{(k)}}_1 \textrm{ for all } \ell\in \{ 1, \ldots, 2^k\}, \end{equation} then it holds that \(w=0\). \end{lemma} \begin{proof} By the use of a simple induction it is straightforward to show that \begin{equation}\label{eq:sumToZero} \sum_{\ell=1}^{2^k} v_\ell^{(k)}=0. \end{equation} Moreover, since \(v_\ell^{(k)}\) is a \(\pm 1\) vector, inequality \eqref{Eq:Inequality} implies that \[\langle w, v_\ell^{(k)}\rangle_{_{\mathbb{R}^{2^k-1}}}\leq 0.\] Equality \eqref{eq:sumToZero} implies that none of these inequalities can be strict; thus, as (for instance) the vectors \(v_2^{(k)}, \ldots, v_{2^k}^{(k)}\) form a basis of \(\mathbb{R}^{2^k-1}\), we obtain \(w=0\), as desired. \end{proof} Having Lemma \ref{Lem:Lower} at our disposal, Proposition \ref{prop:lower} can readily be verified. \begin{proof}[Proof of Proposition \ref{prop:lower}] To begin, we compute \(\Lip(f)\). We claim that \begin{equation}\label{eq:Lip} \Lip(f)=\left(2^{k-1}\right)^{\frac{1}{p_\star}}. \end{equation} First, suppose that \(p\in [1, +\infty)\). A simple induction implies that two distinct columns of \(W_k\) are orthogonal to each other. Since the entries of \(W_k\) consist only of plus and minus one, we obtain that \begin{equation*} \norm{ v_i^{(k)}-v_j^{(k)}}_p^p=2^p\card\left(\left\{ \ell\in\{1, \ldots, 2^{k}-1\} : (v_i^{(k)})_\ell\neq (v_j^{(k)})_\ell \right\}\right)=2^p2^{k-1}, \end{equation*} where we use \(\card(\cdot)\) to denote the cardinality of a set. Hence, if \(p\in [1, +\infty)\), then the identity \eqref{eq:Lip} follows. Since the \(p\)-norms \(\norm{\cdot}_p\) converge pointwise to the maximum norm \(\norm{\cdot}_\infty\) if \(p\to +\infty\), the identity \eqref{eq:Lip} follows also in the case \(p=+\infty\), as was left to show. By considering the contraposition of the statement in Lemma \ref{Lem:Lower}, we may deduce that there is an index \(\ell\in \{1, \ldots, 2^k\}\) such that \begin{equation*} \norm{v_\ell^{(k)}-F(0)}_1 \geq \norm{v_\ell^{(k)}}_1. \end{equation*} As a result, we obtain that \begin{equation*} \Lip(F)\geq \frac{\norm{v_\ell^{(k)}-F(0)}_1}{\norm{v_\ell^{(k)}}_p}\geq \frac{\norm{v_\ell^{(k)}}_1}{\norm{v_\ell^{(k)}}_p}=(2^k-1)^{\frac{1}{p_\star}}. \end{equation*} Hence, it follows that \begin{equation*} \frac{\Lip(F)}{\Lip(f)}\geq \frac{(2^k-1)^{\frac{1}{p_\star}}}{(2^{k-1})^{\frac{1}{p_\star}}}=\left(2-\frac{1}{2^{k-1}}\right)^{\frac{1}{p_\star}}; \end{equation*} as desired. \end{proof} We conclude this section with the proof of Corollary \ref{cor:snowflake}. 
\begin{proof}[Proof of Corollary \ref{cor:snowflake}] Let \(k\geq 1\) be an integer and let \[g_F\colon \left(\{v_1^{(k)} , \ldots , v_{2^k}^{(k)}\}, F\circ \norm{\cdot}_p\right) \to \left(\ell_1, \norm{\cdot}_1\right)\] denote the map such that \(v_i^{(k)}\mapsto v_i^{(k)}\). The vectors \(v_i^{(k)}\) are given as in \eqref{eq:vertices} and interpreted as elements of \(\ell_p\) via the canonical embedding. It is readily verified that \begin{equation*} \Lip \left( g_{F} \right)= \frac{A}{F(A)} \Lip \left( g_{\id}\right), \end{equation*} where \(A:=\norm{ v_i^{(k)}-v_j^{(k)}}_p\). Now, let \(\delta >0\) be a real number. Using the assumptions in Corollary \ref{cor:snowflake} and Theorem \ref{hopefullySolved} (for the map \(F=\id\)) it follows that there is a map \(G_F\colon \left(\{v_1^{(k)} , \ldots , v_{2^k}^{(k)}\}\cup\{0\},F\circ\norm{\cdot}_p \right)\to \left(\ell_1, \norm{\cdot}_1\right)\) that extends \(g_F\) such that \begin{equation*} \Lip\left(G_F\right) \leq (1+\delta)\, 2^\ensuremath\varepsilon \,\sqrt{2} \,\Lip\left(g_F\right). \end{equation*} We define the map \(T\colon \left(\{v_1^{(k)} , \ldots , v_{2^k}^{(k)}\}\cup\{0\},\norm{\cdot}_p \right)\to \left(\ell_1, \norm{\cdot}_1\right)\) via \(x\mapsto G_F(x)\). We calculate \begin{equation*} \Lip(T)\leq (1+\delta)\, 2^\ensuremath\varepsilon \,\sqrt{2} \, \max\left\{\frac{F(A)}{A}, \frac{F(B)}{B}\right\} \Lip\left(g_F\right), \end{equation*} where \(B:=\norm{ v_i^{(k)}-0}_p\). Since the map \(T\) is a Lipschitz extension of \(g_{\id}\), Proposition \ref{prop:lower} tells us that \begin{equation*} \Lip(T) \geq \left(2-\frac{1}{2^{k-1}}\right)^{\frac{1}{q}} \Lip\left(g_{\id}\right)=\frac{A}{B} \left(1-\frac{1}{2^{k}}\right)\Lip\left(g_{\id}\right), \end{equation*} where \(1/q:=1-1/p\) if \(p\neq +\infty\) and \(1/q:=1\) otherwise. We set \(\gamma:=\frac{A}{B}\). Thus, by putting everything together and via a simple scaling argument, we obtain for all \(x >0\) \begin{equation*} \gamma \left(1-\frac{1}{2^{k}}\right) \frac{ F(\gamma x) }{\gamma x} \leq (1+\delta) 2^\ensuremath\varepsilon \sqrt{2}\, \max\left\{\frac{F(x)}{x},\frac{F(\gamma x)}{\gamma x}\right\}. \end{equation*} Thus, since \[\sup\limits_{x>0} \frac{F(x)}{x} <+\infty \] we obtain \[\frac{\sqrt[q]{2}}{\sqrt[p]{1-\frac{1}{2^k}}} \left(1-\frac{1}{2^{k}}\right)=\gamma \left(1-\frac{1}{2^{k}}\right) \leq (1+\delta) 2^\ensuremath\varepsilon \sqrt{2}.\] Consequently, as \(k\geq 1\) and \(\delta >0\) are arbitrary, we deduce \(p \leq \left( \frac{1}{2}-\ensuremath\varepsilon \right)^{-1}\). This completes the proof. \end{proof} \section{Minimum value of a certain quadratic form in Hilbert space}\label{sec:quadraticForm} Let \((H, \langle \cdot, \cdot \rangle_{_H})\) be a Hilbert space, let \(I\) denote a finite set and let \(\mathbf{x}\colon I \to H\) be a map. Suppose that \(\boldsymbol\lambda\colon I \times I \to \mathbb{R}\) is a symmetric, non-negative function. Further, assume that \(G\colon [0,+\infty)\to [0,+\infty)\) is a convex, non-decreasing function with \(G(0)=0\). 
We define \[\Phi(\mathbf{x}, \boldsymbol{\lambda}, G):=\sum_{(k, \ell) \in I\times I} \boldsymbol\lambda\big(k, \ell\big) \,G\big(\norm{\mathbf{x}(k)-\mathbf{x}(\ell)}_{_H}^2\big)\] and for each subset \(J\subset I\) we set \[\mathsf{m}(\mathbf{x}, \boldsymbol{\lambda}, G, J):=\inf\big\{\Phi(\mathbf{z}, \boldsymbol{\lambda}, G) \,:\, \mathbf{z}\colon I\to H \textrm{ is a map with } \mathbf{z}|_{J^c}=\mathbf{x}|_{J^c} \big\}.\] The remainder of this section is devoted to calculate the quantity \(\mathsf{m}(\mathbf{x}, \boldsymbol{\lambda}, \id, J)\). Let \(J\subset I\) be a proper subset. We may suppose that \(J=\big\{1, \ldots, m\big\}\), where \(m:=\card(J)\). To ease notation, we set \(\lambda_{k\ell}:=\bm{\lambda}(k,\ell)\) and we define the matrix \begin{equation}\label{eq:MMatrix} M(\bm{\lambda}, J):= \begin{bmatrix} \sum\limits_{k\in J^c} \lambda_{1k}+\sum\limits_{j=1}^m \lambda_{1j} & -\lambda_{12} & \dots & -\lambda_{1m} \\ -\lambda_{21} & \sum\limits_{k\in J^c} \lambda_{2k}+\sum\limits_{j=1}^m \lambda_{2j} & \dots & -\lambda_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ -\lambda_{m1} & -\lambda_{m2} & \dots & \sum\limits_{k\in J^c} \lambda_{mk}+\sum\limits_{j=1}^m \lambda_{mj} \end{bmatrix}. \end{equation} The matrices \(M(\bm{\lambda}, J)\) appear naturally in the proof of Theorem \ref{hopefullySolved}. If the symmetric matrix \(M:=M(\bm{\lambda}, J)\) is strictly diagonally dominant, that is, for each integer \(1 \leq i \leq m\), it holds \begin{equation*} \abs{m_{ii}} > \sum_{j\neq i}^m \abs{m_{ij}}, \end{equation*} it follows via Gershgorin's circle theorem that \(M\) is positive definite. As a result, the matrix \(M(\bm{\lambda}, J)\) is non-singular if \[\sum\limits_{k\in J^c} \lambda_{ik} >0 \quad \textrm{ for all } 1 \leq i \leq m.\] Next, we deduce the minimum value of \(\mathsf{m}(\mathbf{x}, \boldsymbol{\lambda}, \id, J)\). \begin{proposition}\label{prop:quadratic} Let \((H, \langle \cdot, \cdot \rangle_{_H})\) be a Hilbert space, let \(I\) be a finite set and let \(\mathbf{x}\colon I \to H\) be a map. Suppose that \(\boldsymbol\lambda\colon I \times I \to \mathbb{R}\) is a symmetric, non-negative function and let \(J\subset I\) be a proper subset. If the matrix \(M:=M(\bm{\lambda}, J)\) given by \eqref{eq:MMatrix} is strictly diagonally dominant and \(\lambda_{k\ell}=0\) for all \(k, \ell \in J^c\), then \begin{equation}\label{eq:weekend} \begin{split} \mathsf{m}(\mathbf{x}, \boldsymbol{\lambda}, \id, J)=\sum_{i\in J }\sum_{j\in J} \sum_{k\in J^c} \sum_{\ell\in J^c } \lambda_{ik} c_{ij} \lambda_{j\ell} \norm{\mathbf{x}(k)-\mathbf{x}(\ell)}^{2}_{_H} \\ \end{split} \end{equation} where \(C:=M^{-1}\). Moreover, \begin{equation}\label{eq:SumToOne} \sum_{j=1}^{\abs{J}} c_{ij}\sum\limits_{k\in J^c} \lambda_{jk}=1 \end{equation} for all integers \(1 \leq i \leq \abs{J}\). \end{proposition} \begin{proof} We set \(m:=\abs{J}\). We may suppose that \(J=\{1, \ldots, m\}\). Since \(D^{-1}M\bm{j}=\bm{j}\), where \(\bm{j}:=(1, \ldots, 1)\in \mathbb{R}^m\) and \(D:=(d_{ij})_{1 \leq i, j\leq m}\) is a diagonal matrix with \[d_{ii}:=\sum\limits_{k\in J^c} \lambda_{ik}, \quad \textrm{ for all } 1 \leq i \leq m,\] we obtain \(CD\bm{j}=\bm{j}\), that is, \begin{equation}\label{eq:sumofOne} \sum_{j=1}^m c_{ij}\sum\limits_{k\in J^c} \lambda_{jk}=1 \end{equation} for all \(1 \leq i \leq m\). Thus, \eqref{eq:SumToOne} follows. 
Let the map \(\Phi \colon H^m\to \mathbb{R}\) be given by the assignment \begin{equation*} (z_1, \ldots, z_m)\mapsto \sum_{i=1}^m\sum\limits_{k\in J^c}\lambda_{ik}\norm{z_i-\mathbf{x}(k)}_{_H}^2+\frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m \lambda_{ij} \norm{z_i-z_j}^{2}_{_H}. \end{equation*} Note that \[ 2 \inf \Phi=\mathsf{m}(\mathbf{x}, \boldsymbol{\lambda}, \id, J).\] Thus, to conclude the proof we calculate the minimum value of the map \(\Phi\). Let \(U\subset H\) denote the span of the vectors \(\big(\mathbf{x}(k)\big)_{k\in J^c}\). Clearly, \(\inf \Phi|_U=\inf \Phi\). In the following, we compute the minimal value of \(\Phi|_U\). The subset \(U\subset H\) is linearly isometric to \((\mathbb{R}^d, \norm{\cdot}_{_2})\) for some integer \(1 \leq d \leq \card(J^c)\). Consequently, we may suppose (by abuse of notation) for all \(k\in J^c\) that \(\mathbf{x}(k)\in \mathbb{R}^d\), say \(\mathbf{x}(k)=(x_{k1}, \ldots, x_{kd})\), and that the function \(\Phi|_U\colon (\mathbb{R}^d)^m\to \mathbb{R} \) is given by the assignment \begin{equation*} (p_1, \ldots, p_m)\mapsto \sum_{t=1}^d \left(\sum_{i=1}^m\sum_{j=1}^m p_{it}m_{ij} p_{jt}-2\sum_{i=1}^m p_{it}\sum\limits_{k\in J^c} \lambda_{ik}x_{kt}+\sum_{i=1}^m\sum\limits_{k\in J^c} \lambda_{ik}x_{kt}^2\right), \end{equation*} where \(p_i:=(p_{i1}, \ldots, p_{id})\) for all integers \(1 \leq i \leq m\). Using elementary analysis, one can deduce that the minimum value of \(\Phi|_U\) is equal to \begin{equation}\label{eq:SophieHunger} \sum_{t=1}^d \left(-\sum_{i=1}^m \sum_{j=1}^m \sum_{r\in J^c} \sum_{s\in J^c} \lambda_{js}c_{ij}\lambda_{ir}x_{st}x_{rt}+\sum_{i=1}^m\sum_{r\in J^c} \lambda_{ir}x_{rt}^2\right). \end{equation} Thus, via \eqref{eq:SophieHunger} and \eqref{eq:sumofOne} we conclude that the minimum value of \(\Phi\) is equal to \begin{equation*} \begin{split} &\sum_{t=1}^d \left(\sum_{i=1}^m \sum_{j=1}^m \sum\limits_{k\in J^c} \sum\limits_{\ell\in J^c} \lambda_{j\ell}c_{ij}\lambda_{ik}\left(-x_{\ell t}x_{kt}+x_{kt}^2\right)\right)\\ &=\frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m \sum\limits_{k\in J^c} \sum_{\ell\in J^c} \lambda_{j\ell}c_{ij}\lambda_{ik}\left(\sum_{t=1}^d (x_{\ell t}-x_{kt})^2 \right)\\ & =\frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m \sum\limits_{k\in J^c} \sum\limits_{\ell\in J^c} \lambda_{j\ell}c_{ij}\lambda_{ik} \norm{\mathbf{x}(\ell)-\mathbf{x}(k)}^{2}_{_H}, \end{split} \end{equation*} as claimed. This completes the proof. \end{proof} \section{An inequality involving the entries of an M-matrix and its inverse}\label{sec:betterName} A matrix \(M\in \Mat(m\times m; \mathbb{R})\) with non-positive off-diagonal elements is said to be an \textit{M-matrix} if \(M\) is non-singular and each entry of \(M^{-1}\) is non-negative, cf. \cite[Definition 1.1]{markham1972nonnegative}. There are several equivalent definitions of an M-matrix, cf. \cite{fiedler1962matrices}. M-matrices and their matrix inverses are generally well understood, cf. \cite{poole1974survey, johnson1982inverse} for a survey of the theory. A primary example of M-matrices are the matrices \(M:=M(\bm{\lambda}, J)\). 
Indeed, such matrices are strictly diagonally dominant (thus non-singular) and via Gauss elimination it is straightforward to show that each entry of the inverse of \(M(\bm{\lambda}, J)\) is non-negative.It is worth to point out that a matrix \(M\in \Mat(m\times m;\mathbb{R})\) with non-positive off-diagonal elements is an M-matrix if and only if there are matrices \(W, D \in \Mat(m\times m;\mathbb{R})\) such that \(W\) is a strictly diagonally dominant M-matrix, \(D\) is a diagonal matrix with positive diagonal elements and \(M=WD\). This is a classical result of Fiedler and Pták, cf. \cite[Theorem 4.3]{fiedler1962matrices}. The following result will play a major role in the proof of Theorem \ref{hopefullySolved}. \begin{theorem}\label{thm:estiMate} Let \(m\geq 2\) and let \(M\in \Mat(m\times m;\mathbb{R})\) be a symmetric invertible matrix with non-positive off-diagonal elements. We set \(C:=M^{-1}\). If \(M\) is an M-matrix, then \begin{equation}\label{eq:estimateWewant} \frac{1}{2}\sum_{i=1}^m \sum_{j=1}^m \abs{m_{ij}}\abs{c_{ik}c_{j\ell}-c_{jk}c_{i\ell}} \leq (m-1)c_{k\ell} \end{equation} for all integers \(1 \leq k,\ell \leq m \) with \(k\neq \ell\). \end{theorem} The estimate in Theorem \ref{thm:estiMate} is sharp. This is the content of the following example. \begin{example} Let \(m\geq 2\) be an integer and let \(M\in \Mat(m\times m;\mathbb{R})\) be the tridiagonal matrix given by \begin{equation*} m_{ij}:= \begin{cases} 3 & \textrm{if } i=j \\ -1& \textrm{if } i=j-1 \\ -1& \textrm{if } i=j+1 \\ 0 & \textrm{otherwise}. \end{cases} \end{equation*} Clearly, \(M\) is a symmetric M-matrix. As usual, we set \(C:=M^{-1}\). Since \(\det{(M)} C=\textrm{\normalfont adj}(M)\), where \(\textrm{\normalfont adj}(M)\) is the adjugate matrix of \(M\), it follows \begin{equation}\label{eq:firsteq} c_{1m}=\frac{1}{\det{M}}. \end{equation} Furthermore, via Jacobi's equality \cite{Jacobi1841}, see \eqref{eq:Jacobi}, we get \begin{equation}\label{eq:secondeq} \abs{c_{i1}c_{jm}-c_{j1}c_{im}}=\frac{\abs{\det{M\big[[m]\setminus{\{1,m\}},[m]\setminus{\{i,i+1\}}\big]}}}{\det{M}}=\frac{1}{\det{M}} \end{equation} for all pairs of integers \((i,j)\) with \(i=j-1\). By virtue of \eqref{eq:firsteq} and \eqref{eq:secondeq} we obtain \begin{equation*} \frac{1}{2}\sum_{i=1}^m \sum_{j=1}^m \abs{m_{ij}\left(c_{i1}c_{jm}-c_{j1}c_{im}\right)}=\frac{m-1}{\det{M}}=(m-1)c_{1m}. \end{equation*} Consequently, the estimate \eqref{eq:estimateWewant} is best possible. \end{example} This section is structured as follows. To begin, we gather some information that is needed to prove Theorem \ref{thm:estiMate}. At the end of the section, we establish Theorem \ref{thm:estiMate}. We start with a lemma that calculates the sum in \eqref{eq:estimateWewant} if the absolute values from the \(2\times 2\)-minors are removed. \begin{lemma}\label{lem:zeroRowSum} Let \(m\geq 2\) and let \(M\in \Mat(m\times m; \mathbb{R})\) be an M-matrix. We set \(C:=M^{-1}\). If \(1 \leq k, \ell \leq m\) are distinct integers, then \begin{equation}\label{eq:firstequation} \begin{split} \sum_{j=1}^m \abs{m_{kj}}(c_{kk}c_{j\ell}-c_{jk}c_{k\ell})=c_{k\ell}, \end{split} \end{equation} and for all integers \(1 \leq i \leq m\) with \(i\neq k,\ell\), \begin{equation}\label{eq:secondequation} \sum_{j=1}^m \abs{m_{ij}}(c_{ik}c_{j\ell}-c_{jk}c_{i\ell})=0. 
\end{equation} \end{lemma} \begin{proof} Since \(C\) is the matrix inverse of \(M\), we compute \begin{equation*} \begin{split} &\sum_{j=1}^m m_{ij}c_{ik}c_{j \ell}=\delta_{i \ell}c_{ik}, \\ &\sum_{j=1}^m m_{ij}c_{jk}c_{i\ell}=\delta_{ik}c_{i\ell} \end{split} \end{equation*} for all \(1 \leq i \leq m\). As a result, we obtain \begin{equation*} \sum_{j=1}^m m_{ij}(c_{ik}c_{j \ell}-c_{jk}c_{i\ell})=\delta_{i \ell}c_{ik}-\delta_{ik}c_{i\ell}. \end{equation*} Therefore, the desired equalities follow, since \(m_{ij} \leq 0\) for all distinct integers \(1 \leq i, j \leq m\). \end{proof} We proceed with the following corollary. \begin{corollary}[zero pattern of inverse M-matrices]\label{Cor:M-mat} Let \(m\geq 2\) and let \(M\in \Mat(m\times m;\mathbb{R})\) be an invertible matrix with non-positive off-diagonal elements. We set \(C:=M^{-1}\). If \(M\) is an M-matrix and \(k, \ell\in \{1, \ldots, m\}\) are two distinct integers such that \(c_{k \ell}=0\), then \begin{enumerate}[label=(\roman*)] \item\label{it:1} for all integers \(i\in \{1, \ldots, m\}\), \(m_{ki}=0\) or \(c_{i\ell}=0\). In particular, \(m_{k \ell}=0\). \item\label{it:2} for all integers \(i\in \{1, \ldots, m\}\), \(m_{ki}=0\) or \(m_{i \ell}=0\). \item\label{it:3} the matrix \(M\) has at least \(m-1\) zero entries. \end{enumerate} \end{corollary} \begin{proof} Clearly, item \ref{it:2} is a direct consequence of item \ref{it:1} and item \ref{it:3} is a direct consequence of item \ref{it:2}. To conclude the proof we establish item \ref{it:1}. Lemma \ref{lem:zeroRowSum} tells us that \begin{equation*} \sum_{i=1}^m \abs{m_{ki}}(c_{kk}c_{i\ell}-c_{ik}c_{k\ell})=0. \end{equation*} Thus, we obtain \begin{equation}\label{eq:zero1} \abs{m_{ki}}c_{kk}c_{i \ell}=0 \end{equation} for all integers \(1 \leq i \leq m\). Since each principal submatrix of \(C\) is the inverse matrix of an M-matrix, cf. \cite[Corollary 3]{johnson1982inverse}, it follows \(c_{kk} \neq 0\). Thus, via Equation \eqref{eq:zero1} we obtain \(m_{ki}=0\) or \(c_{i\ell}=0\) for all \(i\in\{1, \ldots, m\}\), as desired. \end{proof} Theorem \ref{thm:estiMate} will be established via a density argument. As it turns out, it will be beneficial to approximate \(C\) by matrices with non-vanishing minors. To this end, we need the following genericity condition. \begin{definition}[generic matrix] Let \(m\geq 1\) be an integer and let \(A\in \Mat(m\times m; \mathbb{R})\) be a matrix. Suppose that \(1 \leq k \leq m\) is an integer and let \(I,J\subset \{1, \ldots, m\}\) be two subsets such that \(\card{(I)}=\card{(J)}=k\). We use the notation \(A[I,J]\in \Mat(k\times k; \mathbb{R})\) to denote the matrix that is obtained from \(A\) by keeping the rows of \(A\) that belong to \(I\) and the columns of \(A\) that belong to \(J\). We say that \(A\) is generic if \begin{equation*} \det(A[I,J])\neq 0 \end{equation*} for all non-empty subsets \(I,J\subset \{1, \ldots, m\}\) with \(\card{(I)}=\card{(J)}\). \end{definition} The subsequent lemma demonstrates that being generic is a 'generic property' as used in the context of algebraic geometry. \begin{lemma}\label{lem:martin} Let \(m\geq 1\) be an integer and let \(A\in \Mat(m\times m; \mathbb{R})\) be a matrix. The following holds \begin{enumerate}[label=(\roman*)] \item if \(A\) is generic, then \(A^{-1}\) is generic as well. \item the set of generic matrices is open and dense in \( \Mat(m\times m; \mathbb{R})\). \end{enumerate} \end{lemma} \begin{proof} The first item is a direct consequence of Jacobi's equality, cf. 
\cite{Jacobi1841}, \begin{equation}\label{eq:Jacobi} \abs{\det (A^{-1}[I,J])\det(A)}=\abs{\det\left(A\big[[m]\setminus J, [m]\setminus I\big]\right)}, \end{equation} where \(I,J\subset [m]:=\{1, \ldots, m\}\) with \(\card{(I)}=\card{(J)}\) and \(A[\varnothing, \varnothing]\) is by definition equal to the identity matrix. Next, we establish the second item. A matrix \(A\in \Mat(m\times m; \mathbb{R})\) is generic if and only if \begin{equation*} p(A):=\prod_{I,J\subset [m], \abs{I}=\abs{J}} \det (A[I,J]) \neq 0. \end{equation*} Clearly, \(p\) is a non-zero polynomial in the entries of \(A\). It is straightforward to show that the complement of the zero set of a non-zero polynomial \(q\colon \mathbb{R}^N\to \mathbb{R}\) is an open and dense subset of \(\mathbb{R}^N\), for all \(N\geq 1\). Therefore, the set of generic matrices is an open and dense subset of \( \Mat(m\times m; \mathbb{R})\), as was to be shown. \end{proof} We proceed with the following lemma, which is the key component in the proof of Theorem \ref{thm:estiMate}. \begin{lemma}\label{lem:signPattern} Let \(m\geq 2\) and let \(A\in \Mat(m\times m; \mathbb{R})\) be a non-negative matrix. If \(A\) is a generic matrix, then for all distinct integers \(1 \leq k,\ell \leq m\) the skew-symmetric matrix \(A^{_{^{^{(k,\ell)}}}}\in \Mat(m\times m; \mathbb{R})\) given by \begin{equation*} a_{ij}^{_{^{^{(k,\ell)}}}}:=a_{ik}a_{j\ell}-a_{jk}a_{i\ell}, \end{equation*} has the property that each two rows of \(A^{_{^{^{(k,\ell)}}}}\) have a distinct number of positive entries. \end{lemma} \begin{proof} We fix two distinct integers \(1 \leq k,\ell \leq m\). If \(m=2\), then each two rows of \(A^{_{^{^{(k,\ell)}}}}\) have a distinct number of positive entries, since \(A\) is generic. Now, suppose that \(m=3\). The matrix \(A^{_{^{^{(k,\ell)}}}}\) is skew-symmetric; hence, as \(A\) is generic we obtain that \(A^{_{^{^{(k,\ell)}}}}\) can have \(2^{3}\) different sign patterns. If \begin{equation}\label{eq:cannot} a_{12}^{_{^{^{_{(k,\ell)}}}}}, a_{23}^{_{^{^{(k,\ell)}}}}, a_{31}^{_{^{^{(k,\ell)}}}} > 0 \quad\textrm{ or }\quad a_{12}^{_{^{^{(k,\ell)}}}}, a_{23}^{_{^{^{(k,\ell)}}}}, a_{31}^{_{^{^{(k,\ell)}}}} <0, \end{equation} then each row of \(A^{_{^{^{(k,\ell)}}}}\) has the same number of positive entries and the statement does not hold. For the other 6 sign patterns it is straightforward to check that each row of \(A^{_{^{^{(k,\ell)}}}}\) has a different number of positive entries. In the following, we show that \eqref{eq:cannot} cannot occur. For the sake of a contradiction, we suppose \(a_{12}^{_{^{^{(k,\ell)}}}}, a_{23}^{_{^{^{(k,\ell)}}}}, a_{31}^{_{^{^{(k,\ell)}}}} > 0 \). Since \(a_{12}^{_{^{^{(k,\ell)}}}}>0\), we obtain \begin{equation}\label{eq:firstConsequence} a_{1k} > \frac{a_{2k}a_{1\ell}}{a_{2\ell}}. \end{equation} Since \(a_{31}^{_{^{^{(k,\ell)}}}} > 0\), we estimate via \eqref{eq:firstConsequence} \begin{equation}\label{eq:estimateofMinors} a_{3k}a_{1\ell} > a_{1k}a_{3\ell} > \frac{a_{2k}a_{1\ell}}{a_{2\ell}}a_{3\ell}. \end{equation} Thus, \eqref{eq:estimateofMinors} tells us that \begin{equation*} a_{3k}a_{2\ell} > a_{2k}a_{3\ell}; \end{equation*} which contradicts \( a_{23}^{_{^{^{(k,\ell)}}}}>0\). Hence, the case \(a_{12}^{_{^{^{(k,\ell)}}}}, a_{23}^{_{^{^{(k,\ell)}}}}, a_{31}^{_{^{^{(k,\ell)}}}} > 0 \) cannot occur. The other invalid sign pattern can be treated analogously . Therefore, \eqref{eq:cannot} cannot occur, as claimed. By putting everything together, we conclude that the statement is valid if \(m=3\). 
We proceed by induction. Let \(m\geq 4\) be an integer and suppose that the statement is valid for all \(2 \leq m^\prime < m\). Before we proceed with the proof we introduce some notation. For every matrix \(B\in \Mat(m\times m;\mathbb{R})\) we denote by \(B_{ij}\in \Mat((m-1)\times (m-1); \mathbb{R})\) the matrix that is obtained from \(B\) by deleting the \(i\)-th row and the \(j\)-th column of \(B\). Moreover, for all integers \(1 \leq i,j \leq m\) with \(i \neq j\) we set \begin{equation*} \begin{split} &n_i^{+}(B):=\textrm{number of positive entries of the i-th row of } B, \\ &n_{i,j}^{+}(B):=\textrm{number of positive entries of } (b_{i1}, \ldots, \widehat{b_{ij}}, \ldots, b_{im}). \end{split} \end{equation*} We use \(\widehat{b_{ij}}\) to indicate that the entry \(b_{ij}\) is omitted. Since the non-negative \((m-1)\times (m-1)\)-matrix \(A_{ij}\) is generic for all \(1 \leq i, j \leq m\), we obtain via the induction hypothesis that each row of \(\left(A^{_{^{^{(k,\ell)}}}}\right)_{ii}\) has a different number of positive entries for all \(1 \leq i \leq m\). For simplicity of notation, we abbreviate \(B:=A^{_{^{^{(k,\ell)}}}}\) for the rest of this proof. We have to show that each two rows of \(B\) have a distinct number of positive entries. Let \(p\in \{1, \ldots, m\}\setminus{\{m\}}\) denote the unique integer such that \(n_{p,m}^{+}(B)=(m-1)-1\), that is, the \(p\)-th row of \(B_{mm}\) has the most positive entries. Suppose that \(b_{pm}>0\). This implies \(n_{p}^{+}(B)=m-1\). Consequently, the \(p\)-th column of \(B\) has no positive entries; hence, as each two rows of \(B_{pp}\) have a distinct number of positive entries and the number of positive entries of each row of \(B_{pp}\) is strictly smaller than \(m-1\), we obtain that all rows of \(B\) have a distinct number of positive entries. Hence, the statement follows if \(b_{pm}>0\). Now, we suppose that \( b_{pm} < 0\). This implies \(n_{p}^{+}(B)=m-2\). There is precisely one integer \( q\in \{1, \ldots, m\}\setminus \{p\}\) such that \(n_{q,p}^{+}(B)=(m-1)-1\). Suppose that \(q=m\). Since \(b_{mp} > 0\), we obtain that \(n_{m}^{+}\left(B\right)=m-1\). Thus, we obtain as before via the induction hypothesis that all rows of \(B\) have a distinct number of positive entries. Therefore, the statement follows if \(q=m\). We are left with the case \( b_{pm} < 0\) and \(q\neq m\). Note that in this case \begin{equation}\label{eq:contradiction} n_{p}^{+}\left(B\right)=n_{q}^{+}\left(B\right)=m-2 \textrm{ and } b_{qp}<0. \end{equation} As a result, for each integer \(r\in \{1, \ldots, m\}\setminus{\{p,q,m\}}\) both entries \(b_{pr}\) and \(b_{qr}\) are positive. But via \eqref{eq:contradiction} this implies \begin{equation*} n_{p,r}^{+}\left(B\right)=n_{q,r}^{+}\left(B\right)=m-3, \end{equation*} for all \(r\in \{1, \ldots, m\}\setminus{\{p,q,m\}}\) which is not possible due to the induction hypothesis. Therefore, the case \( b_{pm} < 0\) and \(q\neq m\) cannot occur. We have considered all cases and thus the statement follows by induction. The lemma follows. \end{proof} We conclude this section with the proof of Theorem \ref{thm:estiMate}. \begin{proof}[proof of Theorem \ref{thm:estiMate}] Fix \(k, \ell\in \{1, \ldots, m\}\) with \(k\neq \ell\). Lemma \ref{lem:martin} and a diagonal sequence argument tell us that there is a sequence \(\{C_r\}_{r\geq 1}\), where \(C_r:=(c_{ij}^{(r)})_{_{1 \leq i, j \leq m}}\), of non-negative generic matrices such that \(C_r\to C\) with \(r\to +\infty\). 
By passing to a subsequence (if necessary) we may assume that the matrices \(C^{_{^{^{(k,\ell)}}}}_r\), defined in Lemma \ref{lem:signPattern}, all have the same sign pattern. For each integer \(r\geq 1\) let \(T_r\in \Mat(m\times m; \mathbb{R})\) be the matrix given by \begin{equation*} t_{ij}^{(r)}:=|m_{ij}^{(r)}|\left( c_{ik}^{(r)}c_{j\ell}^{(r)}-c_{jk}^{(r)}c_{i\ell}^{(r)}\right), \end{equation*} where \(M_r:=C_r^{-1}\). Due to the first item in Lemma \ref{lem:martin}, it follows that \(m_{ij}^{(r)}\neq 0\). Thus, the matrices \(T_r\) and \(C^{_{^{^{(k,\ell)}}}}_r\) have the same sign pattern. Therefore, by the virtue of Lemma \ref{lem:signPattern}, each row of \(T_r\) has a distinct number of positive entries. Fix an integer \(r \geq 1\). For each integer \(1 \leq p \leq m\) let \(c(p)\) be the unique integer such that the \(c(p)\)-th row of \(T_r\) has exactly \(m-p\) positive entries. Since all matrices \(T_r\) have the same sign pattern, the definition of \(c\) is independent of the integer \(r\geq 1\). The map \(c\colon \{1, \ldots, m\}\to \{1, \ldots, m\}\) is a bijection and \begin{equation*}\label{eq:TriangularPattern} \begin{cases} &t_{c(p) j}^{(r)} < 0 \,\,\, \textrm{ if } \,\,\, j\in \{ c(1), \ldots, c(p-1)\} \,\,\, \\[+0.35em] &t_{c(p) j}^{(r)} > 0 \,\,\, \textrm{ if } \,\,\, j\in \{ c(p+1), \ldots, c(m)\} \end{cases} \end{equation*} for all integers \(r\geq 1\). Let \(T\in \Mat(m\times m; \mathbb{R})\) be the matrix given by \begin{equation*} t_{ij}:=\abs{m_{ij}}\left( c_{ik}c_{j\ell}-c_{jk}c_{i\ell}\right). \end{equation*} Clearly, \(T_r\to T\) with \(r\to +\infty\). As a result, \begin{equation}\label{eq:TriangularPatternII} \begin{cases} &t_{c(p) j} \leq 0 \,\,\, \textrm{ if } \,\,\, j\in \{ c(1), \ldots, c(p-1)\} \,\,\, \\[+0.35em] &t_{c(p) j} \geq 0 \,\,\, \textrm{ if } \,\,\, j\in \{ c(p+1), \ldots, c(m)\}. \end{cases} \end{equation} By Lemma \ref{lem:zeroRowSum} and \eqref{eq:TriangularPatternII} we obtain that \begin{equation}\label{eq:VeryUsefulIdentity} \sum_{ j=1}^{p-1} t_{c(j)c(p)}=\sum_{j=p+1}^m t_{c(p)c(j)} \end{equation} for all integers \(1 \leq p \leq m\) with \(c(p)\neq k,\ell\), since \(T\) is skew-symmetric. In \cite[Theorem 3.1]{markham1972nonnegative}, Markham established that every almost principal minor of \(C\) is non-negative. Hence, \begin{equation*} \abs{m_{kj}}\left( c_{kk}c_{j\ell}-c_{jk}c_{k\ell}\right) \geq 0 \,\,\,\textrm{ and }\,\,\, \abs{m_{\ell j}}\left( c_{\ell k}c_{j\ell}-c_{jk}c_{\ell \ell}\right) \leq 0 \end{equation*} for all integers \(1 \leq j \leq m\). Consequently, we obtain that \(c(1)=k\) and \(c(m)=\ell\). For each integer \(2 \leq h \leq m-1\) we compute via \eqref{eq:VeryUsefulIdentity}, \begin{equation}\label{eq:induction} \begin{split} &\sum_{p=2}^{h} \sum_{j=p+1}^m t_{c(p)c(j)}=\sum_{p=2}^{h} \sum_{ j=1}^{p-1} t_{c(j)c(p)} \\ &=\sum_{j=2}^{h} t_{c(1)c(j)}+\sum_{j=2}^{h-1}\sum_{p=j+1}^{h} t_{c(j)c(p)} \\ &\leq \sum_{j=2}^{h} t_{c(1)c(j)}+\sum_{p=2}^{h-1}\sum_{j=p+1}^{m} t_{c(p)c(j)} . \end{split} \end{equation} Note that \begin{equation*}\label{eq:theProdigy} \frac{1}{2}\sum_{i=1}^m \sum_{j=1}^m \abs{m_{ij}\left(c_{ik}c_{jl}-c_{jk}c_{il}\right)}=\sum_{p=1}^m \sum_{j=p+1}^m t_{c(p)c(j)} . \end{equation*} Therefore, by the use of \eqref{eq:induction} we obtain \begin{equation*} \begin{split} &\frac{1}{2}\sum_{i=1}^m \sum_{j=1}^m \abs{m_{ij}\left(c_{ik}c_{jl}-c_{jk}c_{il}\right)}\\ &\leq \sum_{h=2}^{m} \sum_{j=2}^h t_{c(1)c(j)}\leq (m-1)\sum_{j=1}^{m} t_{c(1)c(j)}. 
\end{split} \end{equation*} Lemma \ref{lem:zeroRowSum} tells us that \begin{equation*} \sum_{j=1}^{m} t_{c(1)c(j)}=c_{k\ell}; \end{equation*} therefore, the theorem follows. \end{proof} \section{Proof of Theorem \ref{hopefullySolved}}\label{sec:mainProof} \begin{proof}[Proof of Theorem \ref{hopefullySolved}] Without loss of generality we may assume (by scaling) that \(\Lip(f)=1\). We set \(I:=X\), \(T:=X\setminus S\) and let the map \(\mathbf{x}\colon I\to H\) be given by the identity. Let \(G\colon [0,+\infty)\to [0,+\infty)\) denote the function such that \(x=F(\sqrt{G(x)})\) for all real numbers \(x\in [0,+\infty)\). Observe that the function \(G\) is convex, strictly-increasing and \(G(0)=0\). We say that \(\bm{\xi}\colon I \times I \to \mathbb{R}\) \textit{lies above} \(f\) if there is a map \(\overline{f}\colon X \to \Conv(\mathsf{Im}(f))\) such that \(\overline{f}(s)=f(s)\) for all \(s\in S\) and \begin{equation*} G\left( \norm{\overline{f}\big(\mathbf{x}(i)\big)-\overline{f}\big(\mathbf{x}(j)\big)}_{_H}\right) \leq \bm{\xi}(i,j) \quad \textrm{ for all } i,j \in I. \end{equation*} We use \(\Conv\) to denote the closed convex hull. Let \(E_f\subset \mathbb{R}^{I\times I}\) be the set of all \(\bm{\xi}\in \mathbb{R}^{I\times I}\) that lie above \(f\). Moreover, let \(\bm{v}\colon I\times I\to \mathbb{R}\) be the map given by \begin{equation}\label{eq:vTee} \bm{v}(i,j):= \norm{ \mathbf{x}(i)-\mathbf{x}(j)}_{_H}^2. \end{equation} Suppose that \(L\in[1, +\infty)\) is a real number. If \(L\bm{v}\in E_f\), then the map \(f\) admits a Lipschitz extension \(\overline{f}\colon X\to E\) such that \[\Lip(\overline{f}) \leq \sup_{x>0} \frac{F(\sqrt{L} x)}{F(x)}.\] Indeed, if \(L\bm{v}\in E_f\), then (by definition) there exists a function \(\overline{f}\colon X \to \Conv(\mathsf{Im}(f))\) such that \[G\left( \norm{\overline{f}\big(\mathbf{x}(i)\big)-\overline{f}\big(\mathbf{x}(j)\big)}_{_H} \right) \leq L \bm{v}(i,j)\quad \textrm{ for all } i,j \in I;\] consequently, by applying the function \(F\left(\sqrt{\cdot}\right)\) on both sides, we obtain \begin{equation*} \begin{split} \norm{\overline{f}\big(\mathbf{x}(i)\big)-\overline{f}\big(\mathbf{x}(j)\big)}_{_H} &\leq F\left(\sqrt{\left(L\norm{ \mathbf{x}(i)-\mathbf{x}(j)}_{_H}^2\right)}\right) \\ &\leq \sup_{x>0} \frac{F(\sqrt{L} x)}{F(x)}\, F\big(\norm{\mathbf{x}(i)-\mathbf{x}(j)}_{_H}\big) \end{split} \end{equation*} for all \(i,j \in I\). Since \(X\subset F[H]\) the map \(\overline{f}\) is a Lipschitz extension of \(f\) such that \(\Lip(\overline{f})\) has the desired upper bound. Thus, to prove the theorem it suffices to show that if \(L\geq (m+1)\), then \(L\bm{v}\in E_f\). To this end, we suppose that \(L\bm{v}\notin E_f\) and we show that \(L < (m+1) \). Since the function \(G\) is strictly-increasing and convex, the set \(E_f\) is closed and convex; thus, by the hyperplane separation theorem we obtain a real number \(\ensuremath\varepsilon >0 \) and a non-zero vector \(\bm{\lambda}\in \mathbb{R}^{I\times I}\) such that \begin{equation}\label{Eq:HyperplaneSeparation} \langle L\bm{v} , \bm{\lambda} \rangle_{\mathbb{R}^{I\times I }} + \ensuremath\varepsilon < \langle \bm{\xi} , \bm{\lambda} \rangle_{\mathbb{R}^{I\times I}}\quad \textrm{ for all } \bm{\xi} \in E_f. \end{equation} We claim that each entry of \(\bm{\lambda}\) is non-negative. 
Indeed, if \(\bm{\xi}\in E_f\), then the point \((\xi_1, \ldots, \xi_{k-1}, c \xi_k, \xi_{k+1}, \ldots, \xi_N)\), where \(N:=\card(I\times I)\), is contained in \( E_f\) for all integers \(1 \leq k \leq N\) and real numbers \(c\in [1, +\infty)\). Hence, a simple scaling argument implies that the \(k\)-th entry of \(\bm{\lambda}\) is non-negative for each integer \(1 \leq k \leq N\), as claimed. In the following, we estimate \( \langle L\bm{v} , \bm{\lambda} \rangle_{\mathbb{R}^{I \times I }}\) from below. We may assume that \(\bm{\lambda}\) is symmetric. By adjusting \(\ensuremath\varepsilon >0 \) if necessary, we may assume that \(\sum_{k \in S} \lambda_{i k} \neq 0\) for all \(i\in T\). Let the matrix \(M:=M(\bm{\lambda}, T)\) be given as in \eqref{eq:MMatrix}. Since each entry of the vector \(\bm{\lambda}\) is non-negative and \(\sum_{k \in S} \lambda_{i k} \neq 0\) for all \(i\in T\), the matrix \(M(\bm{\lambda}, T)\) is non-singular. We set \(C:=M^{-1}\). Proposition \ref{prop:quadratic} tells us that \begin{equation}\label{eq:Equality77} \mathsf{m}:=\mathsf{m}(\mathbf{x}, \bm{\lambda}, \id, T)= \sum_{r\in S} \sum_{s\in S} \bm{\eta}(r,s) \norm{\mathbf{x}(r)-\mathbf{x}(s)}^{2}_{_H}, \end{equation} where \(\bm{\eta}\colon I\times I\to \mathbb{R}\) is given by \[\bm{\eta}(r,s):=\lambda_{rs}+\sum_{i\in T}\sum_{j\in T} \lambda_{ir} c_{ij} \lambda_{js}.\] Clearly, \begin{equation}\label{eq:Lowe} L \mathsf{m} \leq \langle L\bm{v} , \bm{\lambda} \rangle_{\mathbb{R}^{I \times I}}. \end{equation} Next, we estimate \(\langle L\bm{v} , \bm{\lambda} \rangle_{\mathbb{R}^{I\times I}}\) from above. We set \begin{equation*} \bar{\bm{\lambda}}_i:=\frac{1}{\norm{\bm{\lambda}_i}_{_1} }\bm{\lambda}_i \in \Delta^{\card(S)-1} \end{equation*} for each \(i\in T\), where \(\bm{\lambda}_i:=(\lambda_{ik})_{k\in S} \). By \eqref{eq:SumToOne}, \begin{equation}\label{eq:almostDone} \sum_{j\in T} c_{ij} \norm{\bm{\lambda}_i}_{_1}=\sum_{j\in T} c_{ij} \sum_{k\in S} \lambda_{jk}=1 \end{equation} for all \(i\in T\). For each \(i\in T\) we define \begin{equation*} w_{i}:=\sum_{j\in T} c_{ij}\left(\sum_{k\in S} \lambda_{jk} \right) y_{\bar{\bm{\lambda}}_j}, \,\,\textrm{ where }\, y_{\bar{\bm{\lambda}}_j}=\sum_{r\in S} \bar{\lambda}_{jr} f(r). \end{equation*} Using \eqref{eq:almostDone} we obtain \(w_i\in \Conv(\mathsf{Im}(f))\) for all \(i\in T\). Equation \eqref{Eq:HyperplaneSeparation} tells us that \begin{equation}\label{eq:fromabove} \langle L\bm{v} , \bm{\lambda} \rangle_{\mathbb{R}^{I\times I}} < A+B+C; \end{equation} where, \begin{equation*} \begin{split} &A:=2\sum_{i\in T} \sum_{r\in S} \lambda_{ir}G\left(\norm{f(r)-w_i}_{_E}\right), \\ & B:=\sum_{i\in T}\sum_{j\in T} \lambda_{ij} G\left(\norm{w_i-w_j}_{_E}\right), \\ &C:=\sum_{r\in S} \sum_{s\in S} \lambda_{rs} G\left(\norm{f(r)-f(s)}_{_E}\right). \end{split} \end{equation*} By convexity of the strictly-increasing function \(G\) and the use of \eqref{eq:almostDone}, we estimate \begin{equation*} \begin{split} &\, A+C \\ &\leq 2\, \sum_{i\in T}\sum_{r\in S}\sum_{j\in T} \lambda_{ir}c_{ij}\norm{\bm{\lambda}_j}_{_1}\,G\left(\norm{f(r)- y_{\bar{\bm{\lambda}}_j}}_{_E}\right)+C \\ & \leq 2\, \sum_{r\in S} \sum_{s\in S} \bm{\eta}(r,s)\, G\left(\norm{f(r)-f(s)}_{_E}\right). 
\end{split} \end{equation*} Thus, if \begin{equation}\label{eq:last} B=\sum_{i\in T}\sum_{j\in T} \lambda_{ij} G\left(\norm{w_i-w_j}_{_E}\right) \leq (m-1)\, \sum_{r\in S} \sum_{s\in S} \bm{\eta}(r,s) G\left(\norm{f(r)-f(s)}_{_E}\right), \end{equation} then we obtain via \eqref{eq:fromabove} and \eqref{eq:Lowe} that \begin{equation*} L\mathsf{m} < (m+1) \, \sum_{r\in S} \sum_{s\in S} \bm{\eta}(r,s) G\left(\norm{f(r)-f(s)}_{_E}\right). \end{equation*} Since \[\norm{f(r)-f(s)}_{_E} \leq F\big( \sqrt{\norm{r-s}_{_H}^2}\big) \quad \textrm{ for all } r,s\in S,\] it follows \[G(\norm{f(r)-f(s)}_{_E})\leq \norm{ r-s }_{_H}^2 \quad \textrm{ for all } r,s\in S; \] as a result, we obtain \[L\mathsf{m} < (m+1) \mathsf{m}.\] By virtue of Corollary \ref{Cor:M-mat} every entry of the matrix \(C\) is positive, hence \(\mathsf{m}>0\) and consequently \(L < m+1\). Thus, to conclude the proof we are left to establish the estimate \eqref{eq:last}. It is readily verified that \begin{equation*} w_i-w_j=\frac{1}{2}\sum_{k\in T}\sum_{\ell\in T} \norm{\bm{\lambda}_k}_{_1}\norm{\bm{\lambda}_\ell}_{_1}\left(c_{j\ell}c_{ik}- c_{i\ell}c_{jk}\right)\left(y_{\bar{\bm{\lambda}}_k}-y_{\bar{\bm{\lambda}}_\ell} \right). \end{equation*} Since \begin{equation*} \frac{1}{2}\sum_{k\in T} \sum_{\ell\in T} \norm{\bm{\lambda}_k}_{_1}\norm{\bm{\lambda}_\ell}_{_1}\abs{c_{j\ell}c_{ik}- c_{i\ell}c_{jk}} \leq \sum_{k\in T} \abs{c_{ik}}\norm{\bm{\lambda}_k}_{_1}\sum_{\ell\in T} |c_{j\ell}| \,\norm{\bm{\lambda}_\ell}_{_1}=1, \end{equation*} we can use the triangle inequality, the convexity of the strictly-increasing map \(G\) and \(G(0)=0\) to estimate \begin{equation}\label{eq:intermediate} \begin{split} &B=\sum_{i\in T}\sum_{j\in T} \lambda_{ij} G\left(\norm{w_i-w_j}_{_E}\right) \\ &\leq \sum_{i\in T}\sum_{j\in T} \lambda_{ij}\, \frac{1}{2} \sum_{k\in T} \sum_{\ell\in T} \norm{\bm{\lambda}_k}_{_1}\norm{\bm{\lambda}_\ell}_{_1}\,\abs{c_{j\ell}c_{ik}- c_{i\ell}c_{jk}} \,\, G\left(\norm{ y_{\bar{\bm{\lambda}}_k}-y_{\bar{\bm{\lambda}}_\ell}}_{_E}\right) \\ &=\sum_{k\in T} \sum_{\ell\in T} \norm{\bm{\lambda}_k}_{_1}\norm{\bm{\lambda}_\ell}_{_1} \left( \frac{1}{2} \sum_{i\in T} \sum_{j\in T} \lambda_{ij}\abs{c_{ik}c_{j\ell}- c_{jk}c_{i\ell}} \right)G\left(\norm{ y_{\bar{\bm{\lambda}}_k}-y_{\bar{\bm{\lambda}}_\ell}}_{_E}\right). \end{split} \end{equation} As pointed out in the beginning of Section \ref{sec:betterName}, \(M(\bm{\lambda}, T)\) is a symmetric M-matrix. Hence, we may invoke Theorem \ref{thm:estiMate} and obtain \begin{equation*} \frac{1}{2} \sum_{i\in T} \sum_{j\in T} \lambda_{ij}\abs{c_{ik}c_{j\ell}- c_{jk}c_{i\ell}} \leq (m-1)c_{k\ell} \end{equation*} for all distinct \(k, \ell\in T\). Using \eqref{eq:intermediate} we deduce \begin{equation*} \begin{split} &\sum_{i\in T}\sum_{j\in T} \lambda_{ij} G\left(\norm{w_i-w_j}_{_E}\right) \\ &\leq \left(m-1\right)\sum_{k\in T} \sum_{\ell\in T} \norm{\bm{\lambda}_k}_{_1}\norm{\bm{\lambda}_\ell}_{_1} c_{k\ell}\,\, G\left(\norm{ y_{\bar{\bm{\lambda}}_k}-y_{\bar{\bm{\lambda}}_\ell}}_{_E}\right). \end{split} \end{equation*} By convexity, \begin{equation*} G\left(\norm{ y_{\bar{\bm{\lambda}}_k}-y_{\bar{\bm{\lambda}}_\ell}}_{_E}\right) \leq \sum_{r\in S}\sum_{s\in S} \bar{\lambda}_{kr}\bar{\lambda}_{\ell r}G\big(\norm{f(r)-f(s)}_{_E}\big); \end{equation*} thereby, the desired estimate \eqref{eq:last} follows, as was left to show. This completes the proof. 
\end{proof} \noindent \textsc{\small{Mathematik Departement, ETH Zürich, Rämistrasse 101, 8092 Zürich, Schweiz}}\\ \textit{E-mail address:}{\textsf{ [email protected]}}\\ \end{document}
\begin{document} \title{Improvement in quantum communication using quantum switch} \author{Arindam Mitra$^{1,2}$} \email{[email protected]} \email{[email protected]} \affiliation{$^1$Optics and Quantum Information Group, The Institute of Mathematical Sciences, C. I. T. Campus, Taramani, Chennai 600113, India.\\ $^2$Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India.} \author{Himanshu Badhani$^{1,2}$} \email{[email protected]} \email{[email protected]} \affiliation{$^1$Optics and Quantum Information Group, The Institute of Mathematical Sciences, C. I. T. Campus, Taramani, Chennai 600113, India.\\ $^2$Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India.} \author{Sibasish Ghosh$^{1,2}$} \email{[email protected]} \affiliation{$^1$Optics and Quantum Information Group, The Institute of Mathematical Sciences, C. I. T. Campus, Taramani, Chennai 600113, India.\\ $^2$Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India.} \date{} \begin{abstract} Applications of the quantum switch on quantum channels have recently become a topic of intense discussion. In the present work, we show that some useless (for communication) channels may provide useful communication under the action of the quantum switch for several information-theoretic tasks: quantum random access codes, quantum steering, etc. We demonstrate that the quantum switch can also be useful in preventing the loss of coherence in a system when only coherence-breaking channels are available for communication. We also show that if a useless quantum channel does not provide useful communication even after using a quantum switch, then by concatenating the channel with another suitable quantum channel and subsequently using the switch, one may achieve useful communication. Finally, we discuss how the introduction of noise in the quantum switch can reduce the advantage that the switch provides. \end{abstract} \maketitle \section{Introduction} Perfect communication in the quantum world is impossible, as inevitable noise gets introduced in the communication channel due to the interaction of any quantum system with its environment. It is, therefore, important to explore the possibility of better communication using such imperfect quantum channels. A `quantum switch' \cite{Chi13} is a process, used in the context of indefinite causal order, through which a superposition of concatenations of two quantum channels, taken in two different orders, is realized. The superposition is created with the help of a coupling to a two-level ancilla (called the control qubit). In this way, the switch creates an \textit{indefinite causal order} between two channels \cite{costa12}. The quantum switch has several applications in different information-theoretic and thermodynamic tasks. More specifically, the quantum switch has been used to test properties of quantum channels \cite{Chiribella-perf-disc,Costa-comput}, to reduce quantum communication complexity \cite{Brukner-complex}, to boost the precision of quantum metrology \cite{Zhao20}, and to achieve thermodynamic advantages \cite{Guha20}, etc. Recently, it was shown that, using the quantum switch on a specific noisy channel along with controlled operations, one can achieve perfect communication \cite{chiribella_perfect}. However, that example of the channel is unique up to unitary transformations. 
\\ {In this context, one may raise the following pertinent questions: (1) If a useless channel (i.e., a channel which does not enable useful communication for a \emph{given} information-theoretic task) does not provide an improvement in communication even after the use of the quantum switch, how can such channels be made useful (for that particular information-theoretic task) under the action of the quantum switch? (2) How much improvement can be achieved by applying the quantum switch to useless quantum channels for \emph{different} information-theoretic tasks? The present work is in the direction of providing answers to some of these questions (especially in Sec. \ref{subsec:conc_ch} and Sec. \ref{subsec:Advantages_information} respectively).} The advantages of using the quantum switch in quantum communication have been studied before, either in the context of certain scenarios like teleportation \citep{Pati20} or of communication via several noisy channels \citep{Mcl20,Proc19,Saz21}. In Sec. \ref{subsec:Advantages_information} of this work, we discuss whether a quantum switch can improve the capability of a quantum channel to transfer quantum resources like entanglement, steerability, coherence, etc. Whether a channel is useless depends on the particular information-theoretic task at hand. Now consider the case where such a useless channel is unavoidable for communication due to some constraints, e.g., spatial distances, the type of interaction with the environment, etc. It is possible that such a useless channel remains useless \emph{even after} the use of the quantum switch on it. We call such channels \emph{completely useless} channels. The question, then, is whether there exists any process which makes these channels useful under the action of the quantum switch. In Sec. \ref{subsec:conc_ch}, we show that a possible answer to this question is the concatenation of a quantum channel with the useless channel. More specifically, we show that if such a completely useless channel is first concatenated with another quantum channel and the switch is then used on the resulting channel, useful communication may be achieved. \\ It has been found that using a superposition of trajectories can provide advantages over a single trajectory. Specifically, it has been shown that the so-called superpositions of direct pure processes (SDPPs) \cite{Abbot2020,PRA-Rubino} can also provide perfect communication under a noisy channel. The components of such a superposition of trajectories need not have two different causal orders like what we get from a quantum switch, thus making the quantum switch a subset of SDPPs. It has therefore been argued that the advantages of the quantum switch should be attributed to the coherence in the path degree of freedom and not to the indefinite causal order. In these cases, however, the advantages arise due to the use of the external degree (or the path degree) of freedom to carry information \citep{shannon, resource}. In the case of the quantum switch, however, the information is not carried in the path degree of freedom. Recently, the effect of these more generalized superpositions of trajectories on four specific noisy qubit quantum channels has also been studied in \cite{PRR-Rubino} in the context of quantum capacity. However, the quantum switch is placed in a distinct category of its own, as it uses the indefinite causal order to gain advantages in communication. 
For this reason, we will restrict ourselves to the study of the improvements in communication under the action of the quantum switch, compared to when the quantum switch is not available or used. We will not bring in other processes like SDPPs in this paper, as done in some other works \citep{Pati20, Proc19, Saz21, Mcl20}. \\ The rest of the paper is organized as follows. In Sec. \ref{sec:prelims}, we discuss some preliminaries. In Sec. \ref{sec:main}, we discuss our main results. {In particular, in Subsec. \ref{subsec:transf-en}, we argue that it is important to study the effect of the quantum switch considering one output branch at a time. In Subsec. \ref{subsec:conc_ch}, we discuss how a completely useless channel can be converted into a useful channel by concatenating it with another quantum channel and then using the switch on the resulting channel.} In Subsec. \ref{subsec:Advantages_information}, we discuss how the improvement in communication helps to obtain an advantage in some well-known information-theoretic tasks. In Subsec. \ref{subsec:noisy switch}, we discuss the fact that the aforesaid improvement in communication can decrease if the switch is noisy. Finally, in Sec. \ref{sec:conc}, we summarise our results and discuss some future directions. \section{Preliminaries}\label{sec:prelims} In this section, we discuss the preliminaries to be used in the later sections of the present paper. \subsection{Observables and their compatibility} An observable $A$, acting on a Hilbert space $\mathcal{H}$ of dimension $d$, is defined as a set of positive semi-definite matrices $\{A(x)\}$ on $\mathcal{H}$ such that $\sum_x A(x)=\mathbb{I}$. We denote the outcome set of $A$ as $\Omega_A$ and therefore, in the above, $x \in {\Omega}_A$. If all the $A(x)$'s are projectors then $A$ is a PVM; otherwise $A$ is a POVM. Two observables $A$ and $B$ are said to be compatible if there exists a joint observable $\mathcal{G}=\{\mathcal{G}(x,y)\}$ with outcome set $\Omega_A\times\Omega_B$ such that \begin{align} A(x)=\sum_y\mathcal{G}(x,y), \qquad B(y)=\sum_x\mathcal{G}(x,y) \end{align} hold for all $x\in \Omega_A$ and $y\in\Omega_B$ \citep{Hzi12,Hein08,Hein16}. The above definition is equivalent to the statement that two observables $A_1$ and $A_2$ are compatible if there exist a probability distribution $P(x_i|i,\lambda)$ and an observable $\mathcal{J}=\{J_{\lambda}\}_{\lambda\in\Omega_{\mathcal{J}}}$ with outcome set ${\Omega}_{\mathcal{J}}$, such that $A_i(x_i)=\sum_{\lambda}P(x_i|i,\lambda)J_{\lambda}$ for all $i\in\{1,2\}$. Implementation of the joint observable allows the simultaneous implementation of both observables. If such a joint observable does not exist for a set of observables, then the set is incompatible. Commutativity of observables always implies compatibility, but compatibility of observables implies commutativity only for PVMs. \subsection{Quantum channels} Let us denote the state space on a Hilbert space $\mathcal{H}$ as $\mathcal{S}(\mathcal{H})$. Quantum channels are CPTP maps $\Lambda:\mathcal{S}(\mathcal{H}_1)\rightarrow\mathcal{S}(\mathcal{H}_2)$; in particular, they satisfy $\Lambda(\sum_ip_i\rho_i)=\sum_ip_i\Lambda(\rho_i)$ for any probability distribution $\{p_i\}$ and any set of states $\{\rho_i\in\mathcal{S}(\mathcal{H}_1)\}$ \citep{Nielson}. Let $\mathcal{L}(\mathcal{H}_2)$ be the set of all bounded linear operators acting on the Hilbert space $\mathcal{H}_2$. 
A CP unital map $\Lambda^*:\mathcal{L}(\mathcal{H}_2)\rightarrow\mathcal{L}(\mathcal{H}_1)$ is the dual channel of $\Lambda$ if $\text{Tr}[\Lambda(\rho) A(x)]=\text{Tr}[\rho\Lambda^*(A(x))]$ for all $x\in\Omega_A$ and for all $\rho\in \mathcal{S}(\mathcal{H}_1)$, with $A = \{A(x)\}_{ x \in {\Omega}_A}$ being an arbitrary observable acting on ${\cal H}_2$. It is well known that any quantum channel $\Lambda$ admits a Kraus representation $\Lambda(\rho)=\sum_x \mathcal{K}_x\rho\mathcal{K}^{\dagger}_x$ where $\sum_x\mathcal{K}^{\dagger}_x\mathcal{K}_x=\mathbb{I}$. The $\mathcal{K}_x$'s are the Kraus operators of $\Lambda$. A channel is called unital if it keeps the maximally mixed state unchanged. We know that any qubit state can be represented by a three-component real vector $\vec{a} = (a_1, a_2, a_3)$ as $$ \rho=\dfrac{1}{2}(I+\vec{a}.\vec{\sigma}) $$ where $\vec{\sigma}=(\sigma_x,\sigma_y,\sigma_z)^T$ are the Pauli matrices and $|\vec{a}| \leq 1$. Any map from a qubit state to a qubit state can therefore be represented by a linear map $T$ such that \begin{equation} \begin{aligned} \Gamma(\rho)=&\dfrac{1}{2}(I+\vec{a}'.\vec{\sigma})\\ =&\dfrac{1}{2}(I+(T\vec{a}+\vec{t}).\vec{\sigma}). \end{aligned} \end{equation} This map $\Gamma$ is generally represented in the form of a matrix (known as the $T$-matrix): \begin{equation} \begin{aligned} \mathcal{T}_{\Gamma}=\begin{pmatrix} 1 & 0 \\ \vec{t} & T \\ \end{pmatrix} \end{aligned} \end{equation} where, for any given qubit state $\rho$ represented as a column vector $v_{\rho}=(1,a_1,a_2,a_3)^T$, the action of the map is given by the matrix multiplication $v_{\Gamma(\rho)}=\mathcal{T}_{\Gamma}.v_{\rho}$ \cite{Hzi12}. The qubit map is unital when the vector $\vec{t}=0$. We should mention here that we index the components of $v_{\rho}$ from $0$ to $3$. \subsection{Special types of quantum channels}\label{subsec:special-type} \subsubsection{Depolarizing channels} A quantum depolarizing channel $\Gamma^t_d:\mathcal{S}(\mathcal{H})\rightarrow\mathcal{S}(\mathcal{H})$ is a specific type of quantum channel which has the following form: \begin{align} \Gamma^t_d(\rho)=t\rho+(1-t)\frac{\mathbb{I}}{d} \label{depchannel} \end{align} where $-\frac{1}{d^2-1}\leq t\leq 1$ and $d$ is the dimension of the Hilbert space ${\cal H}$. For qubit depolarizing channels, a set of Kraus operators is $\{\sqrt{1-p}\mathbb{I},\sqrt{\frac{p}{3}}\sigma_x, \sqrt{\frac{p}{3}}\sigma_y, \sqrt{\frac{p}{3}}\sigma_z\}$. Here, $t=1-\frac{4p}{3}$. Clearly, quantum depolarizing channels are unital. \subsubsection{Entanglement breaking channels} A quantum channel $\Lambda^E:\mathcal{S}(\mathcal{H}_B)\rightarrow\mathcal{S}(\mathcal{K})$ is called an entanglement breaking channel (EBC) if for all $\rho_{AB}\in\mathcal{S}(\mathcal{H}_A\otimes\mathcal{H}_B)$, $(\mathcal{I}\otimes\Lambda^E)(\rho_{AB})$ is a separable state (irrespective of the dimension of ${\cal H}_A$) \citep{Hor03,Rus03}. ${\Lambda}^E$ is an EBC iff its Choi matrix is separable. Entanglement breaking channels form a convex set. Any EBC $\Lambda^E$ has the following form: \begin{equation} \Lambda^E(\rho)=\sum_x\rho_x\text{Tr}[\rho A(x)] \end{equation} where $A=\{A(x)\}$ is a POVM acting on $\mathcal{H}_B$ and ${\rho}_x \in {\cal S}({\cal K})$. $\Gamma^t_2$ (given in equation \eqref{depchannel}) is an EBC iff $\mid t\mid\leq \frac{1}{3}$. We denote the set of all EBCs acting on the state space $\mathcal{S}(\mathcal{H})$ (where $\mathcal{H}$ is a $d$-dimensional Hilbert space) by $\mathcal{C}^d_{EBC}$. 
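For concreteness, we briefly recall how the threshold $\mid t \mid\leq \frac{1}{3}$ for $\Gamma^t_2$ can be verified; this standard computation is included only as an illustration. Writing $\ket{\phi^+}=(\ket{00}+\ket{11})/\sqrt{2}$, the Choi matrix of $\Gamma^t_2$ is \begin{align*} (\mathcal{I}\otimes\Gamma^t_2)(\ket{\phi^+}\bra{\phi^+})=t\ket{\phi^+}\bra{\phi^+}+(1-t)\frac{\mathbb{I}\otimes\mathbb{I}}{4}, \end{align*} a two-qubit isotropic state. Its partial transpose has eigenvalues $\frac{1+t}{4}$ (with multiplicity three) and $\frac{1-3t}{4}$, so by the PPT criterion (which is both necessary and sufficient for two qubits) the Choi matrix is separable iff $t\leq \frac{1}{3}$; together with the admissible range $-\frac{1}{3}\leq t\leq 1$, this yields the condition $\mid t\mid \leq \frac{1}{3}$ stated above. 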
It is well known that $\Phi\circ\Lambda^{E}$ is an EBC for any quantum channel $\Phi$. If the Choi matrix of a CP linear map is separable, then we will call it an EB CP map. \subsubsection{Incompatibility breaking channels} A quantum channel $\Lambda^{n-IBC}$ is an $n$-incompatibility breaking channel ($n$-IBC) if for an arbitrary set of $n$ observables $\{A_1,\ldots,A_n\}$, the set $\{(\Lambda^{n-IBC})^*(A_1),\ldots,(\Lambda^{n-IBC})^*(A_n)\}$ is compatible \cite{Heino_IBC}. The set of all $n$-IBCs forms a convex set. $\Gamma^t_d$ is an $n$-IBC for $t\leq \frac{n+d}{n(1+d)}$ \cite{Heino_IBC}. We denote the set of all $n$-IBCs acting on the state space $\mathcal{S}(\mathcal{H})$ (where $\mathcal{H}$ is a $d$-dimensional Hilbert space) by $\mathcal{C}^d_{n-IBC}$. It is well known that $\Phi\circ\Lambda^{n-IBC}$ is an $n$-IBC for any quantum channel $\Phi$.\\ A quantum channel $\Lambda^{IBC}$ is an IBC if it is an $n$-IBC for all $n$ \cite{Heino_IBC}. $\Gamma^t_d$ is an IBC for $t\leq \frac{(3d-1)(d-1)^{(d-1)}}{d^d(d+1)}$ \cite{Heino_IBC}. We denote the set of all IBCs acting on the state space $\mathcal{S}(\mathcal{H})$ by $\mathcal{C}^d_{IBC}$. It is well known that $\mathcal{C}^d_{IBC}\subseteq\cdots\subseteq \mathcal{C}^d_{(n+1)-IBC}\subseteq \mathcal{C}^d_{n-IBC}\subseteq \cdots\subseteq \mathcal{C}^d_{2-IBC}$, that $\mathcal{C}^d_{EBC}\subset\mathcal{C}^d_{IBC}$, and that $\Phi\circ\Lambda^{IBC}$ is an IBC for any quantum channel $\Phi$. \subsection{Quantum switch} A causal order is a partial ordering of events in space-time. In our case the \textit{events} will correspond to quantum operations which are applied on a given system at two different times and places. Let a quantum system be sent from Alice to Bob through a quantum channel, and let this channel be composed of two channels $\Lambda$ and $\Phi$. Conventional understanding dictates that the resulting channel can be either of the form $\Lambda\circ \Phi$ or $\Phi\circ \Lambda$, or a classical mixture of the two orders, i.e. $p\,\Lambda\circ \Phi+(1-p)\,\Phi\circ \Lambda$ with $0\leq p\leq 1$. The causal ordering between two events is therefore the specification of which event affects the other. However, Oreshkov et al. \cite{costa12} have shown that by using the property of quantum superposition one can construct processes wherein the order of events cannot be specified. As a result, if Alice has at her disposal two quantum channels $\Lambda$ and $\Phi$, she can construct a third channel which cannot be seen as a simple composition of the two given channels. This channel can be seen as a superposition of the two orderings of the initial two channels $\Lambda$ and $\Phi$ and therefore exhibits an \textit{indefinite} causal order between the two channels. The quantum switch was introduced \citep{Chi13,Moqanaki-15,robino17,Gia18} as a quantum circuit that implements the indefinite causal order between two different operations. The basic ingredient for such a process is a two-level ancilla system (we call it the control qubit) with state $\omega$. The signalling between the two events depends on the state of the ancilla and, using the fact that the ancilla can be in a superposition of more than one state, the order of the two events can be put in a superposition of the two scenarios $\Lambda\circ\Phi$ and $\Phi\circ\Lambda$. We start with the product state $\rho\otimes\omega$, where $\rho$ is the state of the system on which these operators ({\it i.e.} the channels) act and $\omega$ is a state of the control qubit. 
Given two quantum channels $\Lambda$ and $\Phi$, the quantum switch produces the output $(\Lambda\circ \Phi)(\rho)$ if $\omega=\ket{0}\bra{0}$ and $(\Phi\circ \Lambda)(\rho)$ if $\omega=\ket{1}\bra{1}$. If $\omega$ is in a superposition of the states $\ket{0}$ and $\ket{1}$, the resultant operation on $\rho$ will be in a superposition of $(\Lambda\circ \Phi)(\rho)$ and $(\Phi\circ \Lambda)(\rho)$, thus mimicking the indefinite causal order between the two operations (see Fig. \ref{fig:switch}). \\ \begin{figure} \caption{A quantum switch determines the order of two quantum operations acting on a state $\rho$ conditional on the state $\omega$ of the control system. The two causal orders are denoted by the red (bold) and blue (dotted) lines. If the state $\omega$ is in a superposition of the two basis states, the resulting channel will also be in a superposition of the two causal orders.} \label{fig:switch} \end{figure} \\ \\ Let the sets of Kraus operators of the channels $\Lambda$ and $\Phi$ be $\{\mathcal{A}_x\}$ and $\{\mathcal{B}_y\}$ respectively. The Kraus operators of the switch are given by: \begin{equation} \begin{aligned} \mathsf{S}_{x,y}=\mathcal{A}_x\mathcal{B}_y\otimes\ket{0}\bra{0}+\mathcal{B}_y\mathcal{A}_x\otimes\ket{1}\bra{1} \end{aligned} \end{equation} Using $\ket{0}\bra{0}=(\sigma_z+I)/2$ and $\ket{1}\bra{1}=(I-\sigma_z)/2$, the above Kraus operators take the following form \begin{equation} \begin{aligned} \mathsf{S}_{x,y}=\dfrac{1}{2}(\mathcal{A}_x\mathcal{B}_y+\mathcal{B}_y\mathcal{A}_x)\otimes I+\dfrac{1}{2}(\mathcal{A}_x\mathcal{B}_y-\mathcal{B}_y\mathcal{A}_x)\otimes\sigma_z \end{aligned} \end{equation} If we input identical channels into the switch, that is $\Lambda=\Phi$, the resultant channel has the following form: \begin{align} &\mathsf{S}_{\Lambda,\Lambda,\omega}(\rho)\nonumber\\ &=\dfrac{1}{4}\sum_{x,y}\Big(\{\mathcal{A}_x,\mathcal{A}_y\}\rho\{\mathcal{A}_x,\mathcal{A}_y\}^{\dagger}\otimes\omega\nonumber\\ & +[\mathcal{A}_x,\mathcal{A}_y]\rho[\mathcal{A}_x,\mathcal{A}_y]^{\dagger}\otimes\sigma_z\omega\sigma_z\Big)\nonumber\\ &\equiv C_{+,\Lambda,\Lambda}(\rho)\otimes\omega+C_{-,\Lambda,\Lambda}(\rho)\otimes\sigma_z\omega\sigma_z \label{switched_channel} \end{align} In general, $C_{+,\Lambda,\Lambda}$ and $C_{-,\Lambda,\Lambda}$ are CP maps, but they are not trace preserving and hence are not quantum channels. We will occasionally refer to these branches as the $C_+$ and $C_-$ branches respectively for a general quantum channel $\Lambda$. As given in \cite{chiribella_perfect}, for a qubit channel of the form $\Lambda(\rho)=\Phi(\rho)=\sum_{\mu=0}^3p_\mu\sigma_\mu\rho\sigma_\mu$ (i.e., for Pauli channels), the two branches $C_{+,\Lambda,\Lambda}(\rho)$ and $C_{-,\Lambda,\Lambda}(\rho)$ become $C_{+,\Lambda,\Lambda}(\rho)=q\bar{C}_{\Lambda,+}(\rho)$ and $C_{-,\Lambda,\Lambda}(\rho)=(1-q)\bar{C}_{\Lambda,-}(\rho)$, where $0\le q\le 1$ and $\bar{C}_{\Lambda,+}$ and $\bar{C}_{\Lambda,-}$ are quantum channels. The exact forms of $\bar{C}_{\Lambda,+}(\rho)$ and $\bar{C}_{\Lambda,-}(\rho)$ are given by \begin{align} \bar{C}_{\Lambda,+}(\rho)=&[(p_0^2+p_1^2+p_2^2+p_3^2)\rho\nonumber\\ &+2p_0(p_1 \sigma_x\rho\sigma_x + p_2 \sigma_y\rho\sigma_y + p_3\sigma_z\rho\sigma_z)]/q,\label{1st_branch_random}\\[.2cm] \bar{C}_{\Lambda,-}(\rho)=&[2p_1 p_2 \sigma_z\rho\sigma_z + 2p_2 p_3 \sigma_x\rho\sigma_x\nonumber\\ & \hspace{2.5cm}+ 2p_1 p_3 \sigma_y\rho\sigma_y]/(1-q)\label{2nd_branch_random}\\[.2cm] &\text{where}\hspace{.5cm} q=1-2(p_1p_2+p_2p_3+p_3p_1). 
\end{align} If $\Lambda$ is a Pauli channel, we sometimes write $\bar{C}_{\Lambda,\pm}$ as $\bar{C}_{\pm}$ when there is no confusion. Note that the Pauli weights in the numerators of \eqref{1st_branch_random} and \eqref{2nd_branch_random} add up to $(p_0+p_1+p_2+p_3)^2=1$, with the first group summing to $q$; hence $\bar{C}_{\Lambda,\pm}$ are indeed trace preserving and $q\in[0,1]$. One can therefore interpret the action of the switch on a Pauli channel as a convex combination of the two channels $\bar{C}_{\Lambda,+}$ and $\bar{C}_{\Lambda,-}$, with $q$ interpreted as the probability of obtaining the branch $\bar{C}_{\Lambda,+}(\rho)\otimes \omega$. When we say ``the action of the quantum switch on the quantum channel $\Lambda$'', we mean the circuit described in Fig. \ref{fig:switch} with $\Phi=\Lambda$. \\ The quantum switch can be generalized with the help of an $N$-level control system. Using this $N$-level control system, one can superpose the different causal orders of more than two channels \cite{Proc19} and get an advantage in the transfer of information. However, this advantage does not seem to scale well with the increase in quantum resources required to create the coherence in the higher-level control system. From this perspective, such a generalization of the quantum switch does not seem to lead to a drastically more efficient result. For this reason, we restrict ourselves to a $2$-level control system in this work. \section{Main Results}\label{sec:main} Suppose Alice has a system prepared in a bipartite maximally entangled qubit state. She wants to send one part to Bob so that these shared bipartite entangled states can be used later in information-theoretic tasks. But Alice does not have control over the environment, and the only channels available to her are EBCs. Now if Alice sends these states through an EBC, the resulting bipartite state will no longer be entangled and therefore cannot be used for later information-theoretic tasks which require entanglement. Similarly, if Alice has a set of IBCs, she can transfer some entanglement, but those entangled states will not be useful to demonstrate information-theoretic tasks such as quantum steering (to be discussed later in Sec. \ref{subsubsec:adv-steer}) or Bell non-locality. By ``transfer of entanglement'' we mean the transfer of one part of an entangled bipartite state. This phenomenon is also known as sharing/distribution of entanglement. If the channel used to transfer this part is an EBC, the transfer of entanglement is not possible. As mentioned before, Ref. \cite{chiribella_perfect} shows that the quantum switch can facilitate a perfect transfer of a state if an entanglement breaking Pauli channel of the form $\Lambda(\rho)=(\sigma_x\rho\sigma_x+\sigma_y\rho\sigma_y)/2$ is used. This is an example of the use of the quantum switch in preserving a quantum resource even when the only operation available is a resource-destroying map. But unfortunately, such a channel is unique up to a unitary equivalence \cite{chiribella_perfect}. Therefore the question arises: to what extent can the quantum switch be used to provide an advantage in the transfer of quantum resources when we only have access to resource-destroying maps? Here, we focus mainly on the transfer of entanglement, steerability, and coherence from Alice to Bob using a quantum switch together with an EBC, an IBC, and a coherence-breaking channel respectively. We also focus on how communication using the quantum switch provides an advantage in $(n,d)$-quantum random access codes (to be discussed in Sec. \ref{subsubsec:adv-QRAC}) when $n$-IBCs are the only available channels for Alice to communicate with Bob. We restrict ourselves to the switch operation on two identical quantum channels throughout the paper. 
We will show that the advantage of the switch over a single use of the channel $\Lambda$ is sometimes constrained to only one of the branches (either $C_{+,\Lambda,\Lambda}$ or $C_{-,\Lambda,\Lambda}$), in which case one can post-select that particular branch. In some cases, however, the advantage of the switch manifests in both branches, resulting in a deterministic advantage. \\ We would like to mention here that throughout this paper we repeatedly use the term ``useless channels", which is context-dependent: e.g., EBCs are useless in the context of entanglement transfer and IBCs are useless in the context of QRAC or quantum steering (as will be discussed later in the relevant sections). The channels which are not useless, depending on the context, are considered here to be useful. \subsection{Transfer of entanglement through EBC using quantum switch}\label{subsec:transf-en} In this section, we present the advantage of using the quantum switch on EBCs. As we have seen in equation \eqref{switched_channel}, when the same channel is used in the switch operation, the resulting operation may be seen as a superposition of two CP maps $C_{+,\Lambda_A,\Lambda_A}$ and $C_{-,\Lambda_A,\Lambda_A}$. We state the following theorem regarding these two branches of the switch: \begin{theorem} Assume that Alice has a quantum channel ${\Lambda}_A$ to send a quantum state. If the communication is supported by a quantum switch, i.e. instead of $\Lambda_A$ Alice uses $\mathsf{S}_{\Lambda_A,\Lambda_A,\omega}$ as given in equation \eqref{switched_channel}, then the following hold:\\ (a) If both the branches $C_{+, {\Lambda}_A, {\Lambda}_A}$ and $C_{-, {\Lambda}_A, {\Lambda}_A}$ are EB CP maps, there does not exist any quantum measurement-based controlled operation which can make the final channel (after tracing out the control part) a non-EBC.\\ (b) In case $\Lambda_A$ is a Pauli channel, if both the branches $\bar{C}_{{\Lambda}_A,+}$ and $\bar{C}_{{\Lambda}_A,-}$ are IBCs, there does not exist any quantum measurement-based controlled operation which can make the final channel (after tracing out the control part) a non-IBC.\\ \end{theorem} \begin{proof} (a) Suppose after the switch operation a quantum measurement-based controlled operation has been implemented which has the form $\mathscr{U}(\rho\otimes \omega)=\Lambda_1(\rho)\otimes\bra{a}\omega\ket{a}\ket{a}\bra{a}+\Lambda_2(\rho)\otimes\bra{a^{\perp}}\omega\ket{a^{\perp}}\ket{a^{\perp}}\bra{a^{\perp}}$, where $(|a\rangle, |a^{\bot}\rangle)$ is an arbitrary pair of mutually orthogonal states in a two dimensional Hilbert space and $\Lambda_1$ and $\Lambda_2$ are arbitrary channels. This operation $\mathscr{U}$ can be implemented by measuring the observable $\mathcal{A}=\{\ket{a}\bra{a}, \ket{a^{\perp}}\bra{a^{\perp}}\}$ on the state $\omega$ and then implementing the channels $\Lambda_1$ and $\Lambda_2$ on the state $\rho$ after getting outcomes $a$ and $a^{\perp}$ respectively.
Then from equation \eqref{switched_channel}, the resulting state (after the switch operation and then implementing $\mathscr{U}$) will be given by \begin{align} \mathscr{U} (\mathsf{S}_{\Lambda_A,\Lambda_A}(\rho))=&(\Lambda_1\circ C_{+,\Lambda_A,\Lambda_A})(\rho)\otimes\bra{a}\omega\ket{a}\ket{a}\bra{a}+\nonumber\\ &(\Lambda_1\circ C_{-,\Lambda_A,\Lambda_A})(\rho)\otimes\bra{a}\sigma_z\omega\sigma_z\ket{a}\ket{a}\bra{a}+\nonumber\\ &(\Lambda_2\circ C_{+,\Lambda_A,\Lambda_A})(\rho)\otimes\bra{a^{\perp}}\omega\ket{a^{\perp}}\ket{a^{\perp}}\bra{a^{\perp}}+\nonumber\\ &(\Lambda_2\circ C_{-,\Lambda_A,\Lambda_A})(\rho)\otimes\bra{a^{\perp}}\sigma_z\omega\sigma_z\ket{a^{\perp}}\ket{a^{\perp}}\bra{a^{\perp}} \end{align} Therefore, the effective channel on the system after tracing out the control part is \begin{align} \text{Tr}_{\mathcal{H}_c}[\mathscr{U} (\mathsf{S}_{\Lambda_A,\Lambda_A}(\rho))]=&(\Lambda_1\circ C_{+,\Lambda_A,\Lambda_A})(\rho)\bra{a}\omega\ket{a}+\nonumber\\ &(\Lambda_1\circ C_{-,\Lambda_A,\Lambda_A})(\rho)\bra{a}\sigma_z\omega\sigma_z\ket{a}+\nonumber\\ &(\Lambda_2\circ C_{+,\Lambda_A,\Lambda_A})(\rho)\bra{a^{\perp}}\omega\ket{a^{\perp}}+\nonumber\\ &(\Lambda_2\circ C_{-,\Lambda_A,\Lambda_A})(\rho)\bra{a^{\perp}}\sigma_z\omega\sigma_z\ket{a^{\perp}} \end{align} where $\mathcal{H}_c$ is the Hilbert space of the control qubit. Since $C_{+,\Lambda_A,\Lambda_A}$ and $C_{-,\Lambda_A,\Lambda_A}$ are EB CP maps, the concatenation of any quantum channel with these CP maps again gives EB CP maps. Since a linear combination (with non-negative coefficients) of EB CP maps is an EB CP map, we have proved that $\text{Tr}_{\mathcal{H}_c}[\mathscr{U} (\mathsf{S}_{\Lambda_A,\Lambda_A,\omega}(\rho))]$ is an EBC (since it is also trace preserving).\\ The generalisation of this proof to POVM-based controlled operations is straightforward.\\ (b) The proof of (b) is similar to that of (a). \end{proof} Therefore, the use of the quantum switch will not give any significant benefit in communication for the above channels. Now suppose one of the branches, say $C_{+,\Lambda_A,\Lambda_A}$, is not an EB CP map; even then it is possible that there does not exist any quantum controlled operation that can make the effective channel a non-EBC (since the other branch is EB CP). But in that case Alice can improve the communication probabilistically. In fact, Alice can perform measurements on the control qubit (if the state of the control qubit is $\omega=\ket{+}\bra{+}$, where $\ket{+}=\frac{\ket{0}+\ket{1}}{\sqrt{2}}$ and $\{\ket{0},\ket{1}\}$ is the eigenbasis of $\sigma_z$, then Alice can measure in the basis $\{\omega, \omega^{\perp}\}$, where $\omega^{\perp}=\ket{-}\bra{-}$ with $\ket{-}=\frac{\ket{0}-\ket{1}}{\sqrt{2}}$) and record the outcomes. If the outcome corresponds to $\omega$ (i.e., to the non-EB CP branch), the resulting post-measurement state will in general be non-separable (depending on the input entangled state); otherwise she will throw away the output state. If neither branch (i.e., neither $C_{+, {\Lambda}_A, {\Lambda}_A}$ nor $C_{-, {\Lambda}_A, {\Lambda}_A}$) is an EB CP map, then clearly it is possible to improve communication deterministically. The above discussion therefore suggests that it is important to study the effect of the quantum switch on a quantum channel considering \emph{one individual output branch} (i.e., either $C_{+, {\Lambda}_A, {\Lambda}_A}$ or $C_{-, {\Lambda}_A, {\Lambda}_A}$) at a time.
Let us illustrate the above discussion through the example of unital qubit entanglement breaking channels. As discussed before, any unital qubit channel corresponds to the $T$-matrix $\mathrm{diag}(\lambda_1,\lambda_2,\lambda_3)$ up to the action of unitary operators on the input and output spaces. But the actions of these unitary operators are nontrivial when the quantum switch acts on the channels. We will discuss this fact in later sections. A general unital qubit channel has nine parameters (without unitary equivalence), which is slightly difficult to handle. Therefore, for this subsection, we will restrict ourselves to Pauli channels. The subset consisting of the Pauli entanglement breaking channels satisfies $\sum_i|\lambda_i|\le 1$, and these channels form an octahedron in the $\lambda_i$ space: the position of a channel, characterized by the three parameters $(\lambda_1,\lambda_2,\lambda_3)$, in three-dimensional Euclidean space is given by the coordinates $({\lambda}_1, {\lambda}_2, {\lambda}_3)$. The vertices of the octahedron are: $(+ 1, 0, 0)$, $(- 1, 0, 0)$, $(0, + 1, 0)$, $(0, - 1, 0)$, $(0, 0, + 1)$, and $(0, 0, - 1)$. Figure \ref{cpuseful} shows the set of all entanglement breaking Pauli channels, which constitute the octahedron. The black region inside the octahedron corresponds to the set of entanglement breaking channels whose $\bar{C}_+$ branch -- after the action of the switch -- is mapped outside the octahedron (shown in blue). The three vertices marked as $a,b$ and $c$ correspond to the points $(-1, 0, 0)$, $(0, -1, 0)$ and $(0, 0, -1)$ respectively. The vertex $(0,0,-1)$ corresponds to the channel $(\sigma_x\rho\sigma_x+\sigma_y\rho\sigma_y)/2$ and the other two vertices are the unitary equivalents of this channel. These channels are mapped (under the action of the $\bar{C}_+$ branch of the quantum switch) onto the identity channel (corresponding to the point $(+ 1, + 1, + 1)$). \begin{figure} \caption{The 3-D plots show the octahedron of the entanglement breaking channels from two perspectives. The channels whose parameters $(\lambda_1,\lambda_2,\lambda_3)$ lie within the octahedron are entanglement breaking. The grey region inside the octahedron shows the EBCs which, under the action of the $\bar{C}_+$ branch of the switch, are mapped to non-EBCs outside the octahedron (shown in blue).} \label{cpuseful} \end{figure} \begin{figure} \caption{The figure shows (in grey) the EBCs inside the octahedron which are mapped to non-EBCs outside the octahedron under the $\bar{C}_-$ branch (the images are shown in blue).} \label{cmuseful} \end{figure} Figure \ref{cmuseful} shows the EBCs (in grey) that under the $\bar{C}_-$ branch are mapped outside the octahedron (turned into non-EBCs). The mapped channels outside the octahedron are marked in blue. The channels are mapped onto the plane containing the vertices $( 1, - 1, - 1)$, $(- 1, + 1, - 1)$ and $(- 1, - 1, 1)$, excluding the region that overlaps with one of the faces of the octahedron. The previously mentioned points $a$, $b$ and $c$ of the octahedron are mapped exactly to these vertices $( 1, - 1, - 1)$, $(- 1, + 1, - 1)$, $(- 1, - 1, 1)$ respectively under the $\bar{C}_-$ branch. These three points $( 1, - 1, - 1)$, $(- 1, + 1, - 1)$, $(- 1, - 1, 1)$ correspond to the channels $\rho\rightarrow \sigma_x\rho\sigma_x$, $\sigma_y\rho\sigma_y$ and $\sigma_z\rho\sigma_z$ respectively and are equivalent to the identity channel up to a unitary transformation. These vertices, along with the identity channel $(1,1,1)$, form the extremal points of the tetrahedron that contains all Pauli channels.
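The mapping of the octahedron under the branches can be reproduced numerically. The following sketch (again a hypothetical \texttt{numpy} illustration of ours, not the script used for the figures) converts between the $T$-matrix diagonal $(\lambda_1,\lambda_2,\lambda_3)$ and the Pauli weights $(p_0,\dots,p_3)$, applies the $\bar{C}_+$ map of equation \eqref{1st_branch_random}, and confirms, for instance, that the vertex $(0,0,-1)$ is sent to the identity channel $(1,1,1)$; it also estimates the fraction of the EB octahedron that leaves the octahedron under $\bar{C}_+$ (the shaded region of Fig. \ref{cpuseful}).
\begin{verbatim}
import numpy as np

def lam_to_p(lam):
    """Pauli weights (p0, p1, p2, p3) from the T-matrix diagonal (l1, l2, l3)."""
    l1, l2, l3 = lam
    return np.array([1 + l1 + l2 + l3, 1 + l1 - l2 - l3,
                     1 - l1 + l2 - l3, 1 - l1 - l2 + l3]) / 4

def p_to_lam(p):
    return np.array([p[0] + p[1] - p[2] - p[3],
                     p[0] - p[1] + p[2] - p[3],
                     p[0] - p[1] - p[2] + p[3]])

def branch_plus(lam):
    """T-matrix diagonal of the normalized C-bar_+ branch of a Pauli channel."""
    p = lam_to_p(lam)
    w = np.array([np.sum(p**2), 2*p[0]*p[1], 2*p[0]*p[2], 2*p[0]*p[3]])
    return p_to_lam(w / w.sum())          # w.sum() is the branch weight q

def is_ebc(lam):
    """A Pauli channel is entanglement breaking iff |l1|+|l2|+|l3| <= 1."""
    return np.sum(np.abs(lam)) <= 1 + 1e-12

# vertex c = (0, 0, -1), i.e. the channel (sx rho sx + sy rho sy)/2,
# is mapped onto the identity channel (1, 1, 1) by the C-bar_+ branch
print(branch_plus(np.array([0.0, 0.0, -1.0])))          # [1. 1. 1.]

# fraction of the EB octahedron mapped outside of it by the C-bar_+ branch
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(20000, 3))
ebc = pts[np.abs(pts).sum(axis=1) <= 1]
print(np.mean([not is_ebc(branch_plus(lam)) for lam in ebc]))
\end{verbatim}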
\subsection{Concatenation of quantum channels and the quantum Switch}\label{subsec:conc_ch} It is well known that if $\Lambda$ is EBC (or IBC) then $\Phi\circ\Lambda$ is also EBC (or IBC) for any channel $\Phi$. Therefore, if EBC (or IBC) is unavoidable for communication, concatenating it straightforwardly with another channel, i.e. without an external resource like superposition of direct pure processes or SDPP \cite{PRA-Rubino}, will not help in the transfer of entanglement (or steerable states).\\ Now suppose a quantum channel $\Lambda$ is useless even after the use of the quantum switch. Then we show below that there may exist another quantum channel $\Phi$ so that $\Phi\circ\Lambda$ is useful under the action of quantum switch (The operation is demonstrated in Fig. \ref{fig:conc-useful}). Therefore, the concatenation of quantum channel may provide an advantage when the communication is supported by the quantum switch. One should note that this concatenation is \textit{before} the implementation of the switch (Fig. \ref{fig:conc-useful}) and is different from the concatenation that happens intrinsically in the circuit that implements the switch (Fig. \ref{fig:switch}). It has been shown in previous studies \cite{PRA-Rubino} that such intrinsic concatenation when used in an SDPP (without indefinite causal order), can also provide an advantage in terms of quantum capacity. But the aforesaid study of \cite{PRA-Rubino} is quite different than what we study in this section. Now at first, we make the following two observations, and then we have a detailed study for the case of Pauli channels and three-parameter non-unital channel. \begin{figure} \caption{The figure shows the concatenation of quantum channels before the implementation of the switch. In this respect this figure is different from the figure \ref{fig:switch} \label{fig:conc-useful} \end{figure} \begin{obs} There exist quantum channels which do not provide any advantage under the action of quantum switch, but such a channel may become useful under the action of quantum switch if it is concatenated with another quantum channel. \label{conc-switch} \end{obs} \begin{proof} Suppose, $\Lambda(\rho)=\frac{1}{2}\rho+\frac{1}{2}\sigma_z\rho\sigma_z$. Now, from the Theorem 4 of \citep{Rus03}, it is clear that $\Lambda$ is an EBC. From the equation \eqref{1st_branch_random} and equation \eqref{2nd_branch_random} we obtain that \begin{equation} q\bar{C}_{\Lambda,+}(\rho)=\frac{1}{2}\rho+\frac{1}{2}\sigma_z\rho\sigma_z=\Lambda(\rho);~(1-q)\bar{C}_{\Lambda,-}(\rho)=0. \label{no_advantage_chan} \end{equation} Therefore, since $\Lambda$ is an EBC, from equation \eqref{no_advantage_chan}, we get that this channel can not give advantage under the action of quantum switch. Now suppose $\Lambda^{\prime}=\Phi\circ\Lambda$ where $\Phi(\rho)=\sigma_x\rho\sigma_x$. Then, \begin{equation} \Lambda^{\prime}(\rho)=\frac{1}{2}\sigma_x\rho\sigma_x+\frac{1}{2}\sigma_y\rho\sigma_y. \end{equation} This is the channel used in Ref. \cite{chiribella_perfect}. Therefore, we know that perfect communication can be achieved through this quantum channel using the quantum switch. It can be easily shown there exist several other such examples of channels. \end{proof} This example also indicates that the action of unitary on the output space has a non-trivial effect when the action of quantum switch is considered. 
Since the $T$-matrices of Pauli channels are diagonal, these channels commute, and therefore the action of a unitary on the input space also has non-trivial effects when the action of the quantum switch is considered. The results of the concatenation are more interesting if $\Phi$ is also an EBC (i.e., a useless channel), as we show in the next observation: \begin{obs} There exist quantum channels which do not provide any advantage under the action of quantum switch, but such a channel may become useful under the action of quantum switch if it is concatenated with another EBC. \end{obs} \begin{proof} Consider the quantum channels \begin{equation} \Lambda(\rho)=\frac{1}{2}\rho+\frac{1}{2}\sigma_z\rho\sigma_z \label{bad_chan} \end{equation} and \begin{equation} \Phi(\rho)=\frac{1-\lambda_3}{4}\rho+\frac{1+\lambda_3}{4}\sigma_x\rho\sigma_x+\frac{1+\lambda_3}{4}\sigma_y\rho\sigma_y+\frac{1-\lambda_3}{4}\sigma_z\rho\sigma_z\label{good_chan} \end{equation} where $-1\leq \lambda_3\leq 1$. From equation \eqref{no_advantage_chan}, we know that $\Lambda$ cannot give any advantage under the quantum switch. Now, \begin{align} \Lambda^{\prime}(\rho)&=(\Phi\circ\Lambda)(\rho)\nonumber\\ &=\Phi(\rho). \end{align} Note that $\Phi$ has $T$-matrix \begin{align} \mathcal{T}_{\Phi} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 0& 0& 0\\ 0 & 0 & 0 &0\\ 0&0&0& -\lambda_3\\ \end{pmatrix} \end{align} Therefore, from Theorem 4 of \citep{Rus03}, we get that $\Phi$ is an EBC for $-1\leq\lambda_3\leq 1$. Now from equations \eqref{1st_branch_random} and \eqref{2nd_branch_random} we obtain \begin{align} \bar{C}_{\Phi,+}=&[\frac{(1+\lambda_3^2)}{4}\rho+\frac{(1-\lambda_3^2)}{8}\sigma_x\rho\sigma_x \nonumber\\ &+\frac{(1-\lambda_3^2)}{8}\sigma_y\rho\sigma_y+\frac{(1-\lambda_3)^2}{8}\sigma_z\rho\sigma_z]/q \end{align} and \begin{align} \bar{C}_{\Phi,-}=&[\frac{(1-\lambda_3^2)}{8}\sigma_x\rho\sigma_x+\frac{(1-\lambda_3^2)}{8}\sigma_y\rho\sigma_y\nonumber\\ &+\frac{(1+\lambda_3)^2}{8}\sigma_z\rho\sigma_z]/(1-q) \end{align} with $q=(5-2\lambda_3+\lambda_3^2)/8$. Now if we implement a measurement-based controlled operation $\mathcal{U}=\mathbb{I}\otimes \ket{+}\bra{+}+\sigma_z\otimes\ket{-}\bra{-}$ and we prepare the control state as $\omega=\ket{+}\bra{+}$, then the effective channel on the system (after tracing out the control part) is \begin{align} \mathcal{C}_{eff}(\rho)=&\text{Tr}_{\mathcal{H}_c}[\mathcal{U} S_{\Phi,\Phi,\omega}(\rho)\mathcal{U}^{\dagger}]\nonumber\\ =&\dfrac{3+2\lambda_3+3\lambda_3^2}{8}\rho+\dfrac{(1-\lambda_3)^2}{8}\sigma_z\rho\sigma_z \nonumber\\& +\dfrac{1-\lambda_3^2}{4}(\sigma_x\rho\sigma_x+\sigma_y\rho\sigma_y)\label{good_chan_switched} \end{align} The $T$-matrix of the above channel $\mathcal{C}_{eff}$ is given by \begin{align} \mathcal{T}_{\mathcal{C}_{eff}} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & \frac{(1+\lambda_3)^2}{4}& 0&0\\ 0 & 0 & \frac{(1+\lambda_3)^2}{4}&0\\ 0&0&0& \lambda_3^2\\ \end{pmatrix} \end{align} Therefore, from Theorem 4 of \citep{Rus03}, for $\lambda_3^2+\frac{(1+\lambda_3)^2}{2}> 1$, or equivalently for $1 \geq\lambda_3> \frac{1}{3}$, the channel $\mathcal{C}_{eff}$ is not an EBC. It can easily be shown that there exist several other such examples. \end{proof} The above result can be explored further by scanning the space of all Pauli channels, quantifying the advantage that concatenation can provide with the use of the switch.
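The example above can be checked numerically. The following sketch (our own illustration, assuming \texttt{numpy}) builds $\mathcal{C}_{eff}$ directly from the Kraus operators of $\Phi$ via equation \eqref{switched_channel} and the controlled operation $\mathcal{U}$, extracts the diagonal of its $T$-matrix, and confirms that the entanglement breaking condition $\sum_i|\lambda_i|\le 1$ fails precisely once $\lambda_3>1/3$.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

def c_plus(kraus, rho):        # C_{+,Lambda,Lambda}(rho), not trace preserving
    return 0.25 * sum((A @ B + B @ A) @ rho @ (A @ B + B @ A).conj().T
                      for A in kraus for B in kraus)

def c_minus(kraus, rho):       # C_{-,Lambda,Lambda}(rho), not trace preserving
    return 0.25 * sum((A @ B - B @ A) @ rho @ (A @ B - B @ A).conj().T
                      for A in kraus for B in kraus)

for lam3 in np.linspace(0, 1, 11):
    p = np.array([1 - lam3, 1 + lam3, 1 + lam3, 1 - lam3]) / 4   # the channel Phi
    kraus = [np.sqrt(w) * s for w, s in zip(p, paulis)]
    # effective channel: keep the C_+ branch, conjugate the C_- branch by sigma_z
    c_eff = lambda rho, K=kraus: c_plus(K, rho) + sz @ c_minus(K, rho) @ sz
    lams = np.array([0.5 * np.trace(s @ c_eff(s)).real for s in (sx, sy, sz)])
    print(f"lam3={lam3:.1f}  T-diag={np.round(lams, 3)}  "
          f"EB={np.abs(lams).sum() <= 1 + 1e-9}")
# the printed T-diagonal equals ((1+lam3)^2/4, (1+lam3)^2/4, lam3^2),
# and the channel stops being entanglement breaking once lam3 > 1/3
\end{verbatim}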
Let us first take a general Pauli channel $\Lambda$ given by the following $T$-matrix \begin{align} \mathcal{T}_{\Lambda} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & \lambda_1& 0&0\\ 0 & 0 & \lambda_2&0\\ 0&0&0& \lambda_3\\ \end{pmatrix} \end{align} We generate random pairs of Pauli EBCs $\mathcal{T}_{\Lambda}$ and $\mathcal{T}_{\Gamma}$ that are not useful under one or both branches of the switch and check whether the concatenated channel $\mathcal{T}_{\Lambda \circ\Gamma}$ is useful under the $C_+$ or $C_-$ branch. We can also compare the results of a concatenation of Pauli EBCs with the concatenation of three-parameter non-unital EBCs (say $\Lambda^{\prime}$) given by \begin{align} \mathcal{T}_{\Lambda^{\prime}} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & k_1& 0&0\\ 0 & 0 &k_1&0\\ t&0&0& k_3\\ \end{pmatrix}. \end{align} We present the results in Fig. \ref{venn}. \begin{figure} \caption{Pauli Channels} \label{venn1} \caption{Non-unital Channels} \label{venn2} \caption{The Venn diagrams show the fraction of useless channel pairs that become useful after concatenation. The larger circles (in yellow) contain the set of EBCs that remain entanglement breaking under either $C_+$ or $C_-$ or both. Recall that in the case of non-unital maps, the branches $C_{\pm}$ do not reduce to the simple Pauli form of equations \eqref{1st_branch_random}--\eqref{2nd_branch_random}.} \label{venn} \end{figure} We call a useless channel completely useless if it remains useless under both $C_+$ and $C_-$. Such channels remain useless \emph{even after} the use of the quantum switch. From Fig. \ref{venn} we find that such channels remain completely useless even after concatenation, both for the Pauli channels and for the particular class of non-unital channels used here. This fact leads us to the following conjecture:\\ \textbf{Conjecture 1.} \emph{Concatenation of two completely useless channels is always completely useless.}\\ A particularly useful set of channels is at the intersection of $C_+'$ and $C_-'$ in Fig. \ref{venn}. From this figure, we see that there exist channels that can transfer entanglement only probabilistically under the switch, but which, after concatenation with another channel, can transfer entanglement deterministically under the switch. \\ As an example of the effect of concatenation, Figure \ref{cpconcat} shows some of the Pauli channels inside the octahedron that are useless under $C_+$ but become useful under concatenation. The orange dots show the images of the concatenated channels (in red) outside the octahedron under the $C_+$ branch. \begin{figure} \caption{The grey region shows the useless Pauli channels under concatenation. The blue dots are the useless channels that can be converted to useful channels under concatenation for the $C_+$ branch. The red dots are the result of the concatenations, and the orange dots are the images of the red dots outside the octahedron under the $C_+$ branch.} \label{cpconcat} \end{figure} We also see that there is an interesting geometry to the concatenations that result in useful channels under $C_+$ or $C_-$. Figure \ref{dist} shows the Euclidean distances between the Pauli channels that become useful under one of the branches of the switch after concatenation. The Euclidean distance between the channels is calculated as $D(\Lambda,\Gamma)=\sqrt{\sum(\lambda_i-\gamma_i)^2}$ where $[\mathcal{T}_{\Gamma}]_{ii}=\gamma_i$ for all $i\in\{1,2,3\}$. We can see that the pairs that concatenate into useful channels under the switch are not located close to each other. In fact these distances are comparable to the edge length of the octahedron, i.e., $\sqrt{2}$.
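A scan of the kind summarized in Fig. \ref{venn} can be sketched as follows (a hypothetical \texttt{numpy} implementation written for illustration; it is not the exact script used to produce the figures). It samples pairs of Pauli EBCs, uses the fact that $T$-matrices multiply under concatenation (so for Pauli channels the diagonals multiply component-wise), and counts, for each branch separately, the pairs that are useless under that branch individually but whose concatenation becomes useful under it.
\begin{verbatim}
import numpy as np

def lam_to_p(lam):
    l1, l2, l3 = lam
    return np.array([1 + l1 + l2 + l3, 1 + l1 - l2 - l3,
                     1 - l1 + l2 - l3, 1 - l1 - l2 + l3]) / 4

def p_to_lam(p):
    return np.array([p[0] + p[1] - p[2] - p[3],
                     p[0] - p[1] + p[2] - p[3],
                     p[0] - p[1] - p[2] + p[3]])

def branch_lams(lam):
    """T-diagonals of the normalized C_+ and C_- branches (None if weight is zero)."""
    p = lam_to_p(lam)
    wp = np.array([np.sum(p**2), 2*p[0]*p[1], 2*p[0]*p[2], 2*p[0]*p[3]])
    wm = np.array([0.0, 2*p[2]*p[3], 2*p[1]*p[3], 2*p[1]*p[2]])
    return [p_to_lam(w / w.sum()) if w.sum() > 1e-12 else None for w in (wp, wm)]

def useful(lam):               # useful = not entanglement breaking
    return lam is not None and np.abs(lam).sum() > 1 + 1e-9

rng = np.random.default_rng(1)
def random_ebc():
    while True:
        lam = rng.uniform(-1, 1, 3)
        if np.abs(lam).sum() <= 1:
            return lam

counts = {"C+": [0, 0], "C-": [0, 0]}      # [useless pairs, converted pairs]
for _ in range(20000):
    a, b = random_ebc(), random_ebc()
    for i, key in enumerate(counts):
        if useful(branch_lams(a)[i]) or useful(branch_lams(b)[i]):
            continue                        # one channel is already useful on its own
        counts[key][0] += 1
        if useful(branch_lams(a * b)[i]):   # concatenation: T-diagonals multiply
            counts[key][1] += 1
print(counts)
\end{verbatim}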
\begin{figure} \caption{\ref{dist1} \label{dist} \end{figure} \subsection{Advantages in different information theoretic tasks}\label{subsec:Advantages_information} \subsubsection{Advantage in quantum random access code} \label{subsubsec:adv-QRAC} Let, Alice has $n$-dit string denoted by $\vec{x}=(x_1,.....,x_n)$ at her disposal. She encodes a particular $n$-dit string in a state of a $d$-level quantum system (i.e., in a qudit) and then transfers this qudit to Bob. In addition, Bob receives a random number $j \in \{1, 2, \ldots, n\}$. Now, Bob's task is to guess the $j$-th dit $x_j$. He does this by doing a measurement on the quantum system sent by Alice. He has $n$ choices of measurements each with $d$ outcomes $0, 1, \ldots, d - 1$. After obtaining the random number $j$, he performs the $j$-th measurement on the qudit sent by Alice. Depending on the outcome of the measurement, Bob guesses the $j$-th entry $x_j$ of Alice's string $\vec{x}$. Let, his guess be $y$. The game will be successful if $y=x_j$. This is known as $(n,d)-$quantum random access code (QRAC). It has a classical counterpart, known as $(n,d)-$random access code (RAC), where Alice is allowed to send dits to Bob instead of qudits. The maximum average success probability of random access code (for $n = 2$ case) is $P^{(2,d),max}_{rac}=\frac{1}{2}(1+\frac{1}{d})$. But the maximum average success probability of quantum random access code is $P^{(2,d),max}_{qrac}=\frac{1}{2}(1+\frac{1}{\sqrt{d}})$. Therefore, $P^{(2,d),max}_{rac}< P^{(2,d),max}_{qrac}$. We will call a particular encoding scheme of Alice and a particular set of measurements performed by Bob to guess the desired $d$-level entry in Alice's $n$-dit string as a ``useful strategy" if it can achieve a quantum advantage in the average success probability i.e., the average success probability $P^{(2,d)}_{qrac}>P^{(2,d),max}_{rac}$. It is shown in \cite{Heino_random} that the incompatibility of measurement is necessary for $(2,d)-$ QRAC to be useful. \\ Now, suppose only a noisy channel $\Lambda$ is available to Alice to transfer the qudit to Bob. In this case, the achievable success probability decreases (assuming that the quantum switch is unavailable to Alice and Bob). Depending on $\Lambda$, this decrement of probability can be drastic. In this context, we have the following theorem: \begin{theorem}\label{Th:2-IBC,QRAC} If Alice has only $2-IBC$ channels to transfer qudits to Bob in a $(2,d)-QRAC$ game, there exist no measurement strategy for Bob as well as qudit encoding scheme for Alice which can be useful i.e., using which one can get quantum advantage. \end{theorem} \begin{proof} Let Alice encodes a $2-dit$ string $\vec{x} = (x_1, x_2)$ in the qudit state $\mathcal{E}(\vec{x})$ and sends it to Bob through channel $\Lambda$. Now suppose Bob chooses the measurement $M_1=\{M_1(j)\}_{j\in\{ 0,1,\ldots,d-1\}}$ for the first dit and $M_2=\{M_2(j)\}_{j\in\{ 0,1,\ldots,d-1\}}$ for the second dit. Then the average success probability is \begin{align} P^{(2,d)}_{qrac}&=\frac{1}{2d^2}\sum_{\vec{x}}\text{Tr}[\Lambda(\mathcal{E}(\vec{x})) (M_1(x_1)+M_2(x_2))]\nonumber\\ &=\frac{1}{2d^2}\sum_{\vec{x}}\text{Tr}[\mathcal{E} (\vec{x})\Lambda^*(M_1(x_1)+M_2(x_2))]\nonumber\\ &\leq \frac{1}{2d^2}\sum_{\vec{x}}\mid\mid \Lambda^*(M_1(x_1))+\Lambda^*(M_2(x_2))\mid\mid\nonumber\\ &\leq \frac{1}{2}(1+\frac{1}{d}) \end{align} where we get the last inequality from Theorem $1$ of \cite{Heino_random} and noting the fact that $\Lambda^*(M_1)$ and $\Lambda^*(M_2)$ are compatible. 
\end{proof} Therefore, $2-IBC$ channels are useless for $(2,d)-$QRAC \cite{Heino_random}. Note that incompatibility is necessary but not sufficient for $(2,d)-$QRAC. The analysis of previous sections indicates that a quantum switch can help to get a quantum advantage in $(2,d)-$QRAC though the $\Lambda$ is $2-IBC$ in some cases. We will study this through the next example. \begin{example}\label{ex_prob_adv_qrac} \rm{Suppose in a $(2,2)$-QRAC, Alice is encoding the two bit string ($ij$) in the quantum state $\rho(ij)=\ket{+,\hat{n}_{ij}}\bra{+,\hat{n}_{ij}}$ where $\ket{+,\hat{n}_{ij}}$ is the eigen state of $\hat{n}_{ij}.\vec{\sigma}$ corresponding to the eigen value $+1$ and $\hat{n}_{ij}=\frac{1}{\sqrt{2}}((-1)^{i},(-1)^{j},0)^T$ for $i,j\in\{0,1\}$. In this case, if Alice can transfer the state to Bob with perfect communication i.e., through the identity channel and Bob chooses the measurement $M_1=\{\ket{+,\hat{x}}\bra{+,\hat{x}},\ket{-,\hat{x}}\bra{-,\hat{x}}\}$ for the first bit and $M_2=\{\ket{+,\hat{y}}\bra{+,\hat{y}},\ket{-,\hat{y}}\bra{-,\hat{y}}\}$ for the second bit ($\ket{\pm,\hat{j}}$ is the eigen state of $\sigma_j$ with eigen value $\pm 1$ for $j = x, y$), then the average success probability is \\ \begin{align} P^{(2,2)}_{qrac}&=\frac{1}{8}\sum_{ij}\text{Tr}[\rho(ij)(M_1(i)+M_2(j))]\nonumber\\ &=\frac{1}{2}\left(1+\frac{1}{\sqrt{2}}\right) \end{align} which is the optimal average success probability for $(2,2)-$QRAC.\\ Now, suppose Alice has only $2-IBC$s to communicate with Bob. For example, suppose Alice has the channel $\Phi$ given in equation \eqref{good_chan}. Then, from Theorem \ref{Th:2-IBC,QRAC}, we know that no strategy of Alice and Bob can be useful. Now after using quantum switch, the effective channel $\mathcal{C}_{eff}(\rho)=Tr_{\mathcal{H}_c}[\mathcal{U} S_{\Phi,\Phi,\omega}(\rho)\mathcal{U}^{\dagger}]$ is given in equation \eqref{good_chan_switched}. If Alice uses this effective channel, the average success probability is \\ \begin{align} P^{\prime(2,2)}_{qrac}&=\frac{1}{8}\sum_{ij}\text{Tr}[\mathcal{C}_{eff}(\rho(ij))(M_1(i)+M_2(j))]\nonumber\\ &=\frac{(1+\lambda_3)^2}{4}P^{(2,2)}_{qrac}+\frac{(1-\frac{(1+\lambda_3)^2}{4})}{2}\nonumber\\ &=\frac{(1+\lambda_3)^2}{8\sqrt{2}}+\frac{1}{2}\nonumber\\ &=\frac{1}{2}[1+\frac{(1+\lambda_3)^2}{4\sqrt{2}}] \end{align} \\ Since, $P^{(2,2),max}_{rac}=\frac{3}{4}$, we have $P^{\prime(2,2)}_{qrac}>P^{(2,2),max}_{rac}$ only for $\lambda_3>2^{\frac{3}{4}}-1\approx 0.6818$. Therefore, clearly $\mathcal{C}_{eff}$ is not 2-IBC for $1\geq\lambda_3 > 2^{\frac{3}{4}}-1$. In FIG. \ref{fig:prob_random}, variation of $P^{\prime(2,2)}_{qrac}$ w.r.t. $\lambda_3$ is plotted. \begin{figure} \caption{Average success probability $P^{\prime(2,2)} \label{fig:prob_random} \end{figure} } \end{example} \subsubsection{Advantage in quantum steering}\label{subsubsec:adv-steer} Suppose Alice and Bob share a bipartite quantum state $\rho^{AB}\in\mathcal{S}(\mathcal{H}_A\otimes\mathcal{H}_B)$ and Alice has a measurement assemblage (i.e., a set of measurement) $\mathcal{M}_A=\{M_x\}$. Each $M_x$ has the outcome set $\Omega_x$. 
Now the state $\rho^{AB}$ is called unsteerable from Alice to Bob, i.e., from $A$ to $B$, with $\mathcal{M}_A$ if there exist a probability distribution $\pi_{\lambda}$, a set of states $\{\sigma_{\lambda}\}$ of $B$ and a set of probability distributions $P_A(a_x|x,\lambda)$ such that for all $a_x\in\Omega_x$ and for all $x$ the equality \begin{equation} \rho^B_{a_x|x}=\text{Tr}_A[(M_{a_x|x}\otimes\mathbb{I}_B)\rho^{AB}]=\sum_{\lambda}\pi_{\lambda}P_A(a_x|x,\lambda)\sigma_{\lambda} \end{equation} holds. A state $\rho^{AB}$ is called unsteerable from $A$ to $B$ if it is unsteerable from $A$ to $B$ with any measurement assemblage $\mathcal{M}_A$. A state $\rho^{AB}$ is called unsteerable if it is unsteerable both from $A$ to $B$ and from $B$ to $A$. Otherwise, it is called steerable. Steering is useful for different information-theoretic tasks \cite{steer-review}. There exist several steering inequalities, each of which can detect the steerability of a bipartite quantum state \cite{Das-steer-ineq, Wiseman-steer-ineq, Cao-steer-ineq, Costa-steer-ineq}. We write below one such steering inequality using two measurement settings, which was originally studied in \cite{Wiseman-steer-ineq, Das-steer-ineq, Costa-steer-ineq}: if the bipartite state $\rho^{AB}$ is unsteerable, then \begin{equation} F(\rho^{AB})=\frac{1}{\sqrt{2}}\mid\text{Tr}[\rho^{AB}(\sigma_x\otimes\sigma_x)]+\text{Tr}[\rho^{AB}(\sigma_z\otimes\sigma_z)]\mid\leq 1.\label{steer-ineq} \end{equation} \\ Now suppose Bob is preparing a bipartite entangled state $\rho^{AB}$ and sending the part $A$ to Alice. This shared bipartite entangled state can be used in different information-theoretic tasks that can be performed with the help of $A$ to $B$ steering. We now have the following result in this connection (assuming that the quantum switch is not available to Alice and Bob). \begin{theorem} Suppose Bob sends the part $A$ of a bipartite state $\rho^{AB}$ to Alice (i.e., from $B$ to $A$) through an $n$-incompatibility breaking channel $\Lambda$. Then $\rho^{\prime AB}=(\Lambda\otimes\mathcal{I})(\rho^{AB})$ is not steerable from $A$ to $B$ with any measurement assemblage $\mathcal{M}_A=\{M_x\}^n_{x=1}$, where $\mathcal{I}$ is the identity channel. \end{theorem} \begin{proof} Consider an arbitrary measurement assemblage $\mathcal{M}_A=\{M_x\}^n_{x=1}$ for Alice. As $\Lambda$ is an $n$-IBC, $\{\Lambda^*(M_x)\}$ is a compatible set and hence has a joint observable $\mathcal{J}=\{J_{\lambda}\}$ such that $\Lambda^*(M_{a_x|x})=\sum_{\lambda}P_A(a_x|x,\lambda)J_{\lambda}$ where $P_A(a_x|x,\lambda)$ is some probability distribution. Then \begin{align} \rho^{\prime B}_{a_x|x}&=\text{Tr}_A[(M_{a_x|x}\otimes\mathbb{I}_B)\rho^{\prime AB}]\nonumber\\ &=\text{Tr}_A[(M_{a_x|x}\otimes\mathbb{I}_B)(\Lambda\otimes\mathcal{I})(\rho^{AB})]\nonumber\\ &=\text{Tr}_A[(\Lambda^*(M_{a_x|x})\otimes\mathbb{I}_B)\rho^{AB}]\nonumber\\ &=\text{Tr}_A[(\sum_{\lambda}P_A(a_x|x,\lambda)J_{\lambda}\otimes\mathbb{I}_B)\rho^{AB}]\nonumber\\ &=\sum_{\lambda}P_A(a_x|x,\lambda)\text{Tr}_A[(J_{\lambda}\otimes\mathbb{I}_B)\rho^{AB}]\nonumber\\ &=\sum_{\lambda}P_A(a_x|x,\lambda)\pi_{\lambda}{\sigma}_{\lambda}^B \end{align} where $\pi_{\lambda}=\text{Tr}[(J_{\lambda}\otimes\mathbb{I}_B)\rho^{AB}]$ and $\sigma^B_{\lambda}=\frac{1}{\pi_{\lambda}}\text{Tr}_A[(J_{\lambda}\otimes\mathbb{I}_B)\rho^{AB}]$.
Hence, $\rho^{\prime AB}$ is not steerable from $A$ to $B$ irrespective of the choice of the measurement assemblage $\mathcal{M}_A=\{M_x\}^n_{x=1}$. \end{proof} From this result, the next corollary is immediate: \begin{corollary}\label{obs-incom-break} Suppose Bob sends the $A$ part of a bipartite state $\rho^{AB}$ to Alice (i.e., from $B$ to $A$) through an incompatibility breaking channel $\Lambda$. Then $\rho^{\prime AB}=(\Lambda\otimes\mathcal{I})(\rho^{AB})$ is not steerable from $A$ to $B$, where $\mathcal{I}$ is the identity channel. \end{corollary} Therefore, if Bob has only IBCs to communicate with Alice, the resulting bipartite state after the communication will no longer be steerable from $A$ to $B$. Corollary \ref{obs-incom-break} is also independently proved in Theorem 1 of \cite{Franco-steer}. A similar corollary, though not exactly the same as Corollary \ref{obs-incom-break}, has been derived using channel-state duality in \cite{Kiukas1}. But if Bob has a quantum switch, he can circumvent this situation, as the following example clarifies. \begin{example} \rm{Suppose Bob is sending Alice the $A$ part of a quantum state $\rho^{AB}=\ket{\psi}_{AB}\bra{\psi}$, where $\ket{\psi}_{AB}=\frac{1}{\sqrt{2}}[\ket{00}+\ket{11}]$. The state $\ket{\psi}_{AB}$ is steerable under suitable measurements. Clearly, $\frac{1}{\sqrt{2}}\mid\text{Tr}[\rho^{AB}(\sigma_x\otimes\sigma_x)]+\text{Tr}[\rho^{AB}(\sigma_z\otimes\sigma_z)]\mid=\sqrt{2}> 1$. Now suppose Bob has only IBCs, for example the quantum channel $\Phi$ given in equation \eqref{good_chan}, to communicate with Alice. Since $\Phi$ is an EBC (and therefore also an IBC), $(\Phi\otimes\mathcal{I})(\rho^{AB})$ is separable and so it is not steerable from Alice to Bob, i.e., from $A$ to $B$. Now if Bob has a quantum switch, he can improve the communication. After using the quantum switch, the effective channel $\mathcal{C}_{eff}(\rho)=Tr_{\mathcal{H}_c}[\mathcal{U} S_{\Phi,\Phi,\omega}(\rho)\mathcal{U}^{\dagger}]$ is given in equation \eqref{good_chan_switched}. Therefore, the resulting shared state after the communication is \begin{align} \rho^{\prime AB}=&(\mathcal{C}_{eff}\otimes\mathcal{I})(\rho^{AB})\nonumber\\ =&\frac{1}{2}[\mathcal{C}_{eff}(\ket{0}\bra{0})\otimes\ket{0}\bra{0}+\mathcal{C}_{eff}(\ket{0}\bra{1})\otimes\ket{0}\bra{1}\nonumber\\ &+\mathcal{C}_{eff}(\ket{1}\bra{0})\otimes\ket{1}\bra{0}+\mathcal{C}_{eff}(\ket{1}\bra{1})\otimes\ket{1}\bra{1}]\nonumber\\ =&\frac{(1+\lambda_3)^2}{4}\ket{\psi}_{AB}\bra{\psi}+\Big(1-\frac{(1+\lambda_3)^2}{4}\Big)\rho_{sep}^{AB} \end{align} where $\rho_{sep}^{AB}=\frac{2}{4-(1+\lambda_3)^2}\Big[\frac{(1-\lambda_3)^2}{4}\ket{00}\bra{00}\\ +\frac{1-\lambda^2_3}{2}\ket{01}\bra{01}+\frac{1-\lambda_3^2}{2}\ket{10}\bra{10}+\frac{(1-\lambda_3)^2}{4}\ket{11}\bra{11}\Big]$ is a separable state. Now, $\text{Tr}[\rho^{AB}_{sep}(\sigma_x\otimes\sigma_x)]=0$ and $\text{Tr}[\rho^{AB}_{sep}(\sigma_z\otimes\sigma_z)]=\frac{3\lambda_3^2-2\lambda_3-1}{4-(1+\lambda_3)^2}$. Therefore, \begin{align} F(\rho^{\prime AB})&=\frac{1}{\sqrt{2}}\mid\text{Tr}[\rho^{\prime AB}(\sigma_x\otimes\sigma_x)]+\text{Tr}[\rho^{\prime AB}(\sigma_z\otimes\sigma_z)]\mid\nonumber\\ &=\frac{1}{\sqrt{2}}\Big[\frac{(1+\lambda_3)^2}{2}+\frac{3\lambda_3^2-2\lambda_3-1}{4}\Big]=\frac{5\lambda_3^2+2\lambda_3+1}{4\sqrt{2}}. \end{align} \begin{figure} \caption{$F(\rho^{\prime AB})$ plotted as a function of $\lambda_3$; the steering inequality \eqref{steer-ineq} is violated when $F(\rho^{\prime AB})>1$.} \label{fig:SteeringInequality} \end{figure} From this expression (see also Fig. \ref{fig:SteeringInequality}), we get that $F(\rho^{\prime AB})>1$ for $1\geq\lambda_3>\frac{\sqrt{20\sqrt{2}-4}-1}{5}\approx 0.786$.
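The expression for $F(\rho^{\prime AB})$ can be cross-checked numerically by constructing $\rho^{\prime AB}=(\mathcal{C}_{eff}\otimes\mathcal{I})(\ket{\psi}_{AB}\bra{\psi})$ explicitly; the sketch below (our own illustration, assuming \texttt{numpy}) does this from the Kraus decomposition of $\mathcal{C}_{eff}$ and evaluates the left-hand side of inequality \eqref{steer-ineq}.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kraus_c_eff(lam3):
    """Kraus operators of the Pauli channel C_eff of Eq. (good_chan_switched)."""
    p = [(3 + 2*lam3 + 3*lam3**2) / 8,      # weight of the identity
         (1 - lam3**2) / 4,                 # weight of sigma_x
         (1 - lam3**2) / 4,                 # weight of sigma_y
         (1 - lam3)**2 / 8]                 # weight of sigma_z
    return [np.sqrt(w) * s for w, s in zip(p, (I2, sx, sy, sz))]

def steering_F(lam3):
    """F(rho'^{AB}) for rho' = (C_eff (x) I)(|Phi+><Phi+|)."""
    phi = np.array([1, 0, 0, 1], dtype=complex).reshape(4, 1) / np.sqrt(2)
    rho = phi @ phi.conj().T
    rho_out = sum(np.kron(K, I2) @ rho @ np.kron(K, I2).conj().T
                  for K in kraus_c_eff(lam3))
    corr = np.trace(rho_out @ (np.kron(sx, sx) + np.kron(sz, sz)))
    return abs(corr) / np.sqrt(2)

for lam3 in (0.5, 0.786, 0.9, 1.0):
    print(lam3, round(float(steering_F(lam3)), 4),
          round((5*lam3**2 + 2*lam3 + 1) / (4*np.sqrt(2)), 4))
# the two columns agree; F crosses 1 near lam3 ~ 0.786
\end{verbatim}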
Therefore, in this range of $\lambda_3$, $\mathcal{C}_{eff}$ is not a $2$-IBC. As $0.786>2^{\frac{3}{4}}-1\approx 0.682$, comparing with Example \ref{ex_prob_adv_qrac} we guess that QRAC is possibly a better detector of $n$-IBCs than steering inequalities, at least in some cases.} \end{example} \subsubsection{Prevention of the loss of coherence} Quantum coherence is an important resource that can be utilized in several information-theoretic and thermodynamic tasks \cite{res-coh}. Specifically, it can be used in work extraction \cite{Korzekwa}, quantum algorithms \cite{Hillery}, quantum metrology \cite{Giorda}, quantum channel discrimination \cite{Piani}, witnessing quantum correlations \cite{Bu} etc.\\ We know from the resource theory of coherence \cite{res-coh} that there are operations, known as incoherent operations, which do not increase coherence but may decrease it, and there are channels, known as coherence breaking channels, that completely destroy the coherence of any quantum state.\\ Now suppose Alice is preparing quantum states for Bob, who uses the coherence of these states with respect to some basis as a resource to get an advantage in different information-theoretic and thermodynamic tasks. But if Alice has only coherence-breaking channels to communicate with Bob, she will be unable to transfer the coherence of the state, whereas if she has incoherent channels, the coherence of the state which Alice sends to Bob will decrease by the time the state reaches Bob.\\ In such cases, if she has a quantum switch, she might be able to communicate better, as stated in the following theorem.\\ \begin{theorem} A coherence breaking qubit channel may be converted to a non-coherence breaking qubit channel with the help of a quantum switch along with a measurement-based controlled incoherent unitary operation. \end{theorem} \begin{proof} Suppose we are interested in coherence w.r.t. the basis $\{\ket{0}, \ket{1}\}$ of a qubit. Then, the $T$-matrix of any coherence breaking channel $\mathcal{E}$ has the following form \citep{Pati18}: \begin{align} \mathcal{T}_{\mathcal{E}} = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 0& 0&0\\ 0 & 0 & 0&0\\ t&0&0& \lambda\\ \end{pmatrix} \end{align} The Kraus operators (written in the eigenbasis of $\sigma_z$) for this channel are given as follows \begin{equation} \begin{aligned} K_1 &=\sqrt{\dfrac{1}{2}(1-\lambda-t)} \begin{pmatrix} 0 & 0\\ 1 & 0 \end{pmatrix}\\ K_2 &=\sqrt{\dfrac{1}{2}(1+\lambda-t)} \begin{pmatrix} 0 & 0\\ 0 & 1 \end{pmatrix}\\ K_3 &=\sqrt{\dfrac{1}{2}(1-\lambda+t)} \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}\\ K_4 &=\sqrt{\dfrac{1}{2}(1+\lambda+t)} \begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix}\\ \end{aligned} \end{equation} Let us start with the most general qubit state (written in the eigenbasis of $\sigma_z$) given by \begin{equation} \rho=\dfrac{1}{2}\begin{pmatrix} 1+c & a-ib\\ a+ib & 1-c \end{pmatrix} \end{equation} Given the Kraus operators of the channel, we can calculate the channel under the switch using equation \eqref{switched_channel}. Let the final state (after the use of the switch) be represented by $\mathsf{S}_{\mathcal{E},\mathcal{E},\omega}(\rho)$, where $\mathcal{E}$ is the coherence breaking channel. If we simply trace over the control part, the resulting channel is again of coherence breaking form and we get no advantage, as the following expression for the final state (after tracing out the control qubit) shows.
\begin{align} \Gamma^{\prime}_{eff}(\rho)&=\text{Tr}_{\mathcal{H}_c} [\mathsf{S}_{\mathcal{E},\mathcal{E},\omega}(\rho) ] \nonumber\\ &=\dfrac{1}{2}\begin{pmatrix} 1+c\lambda^2+t(1+\lambda) & 0\\ 0 & 1-c\lambda^2-t(1+\lambda) \end{pmatrix} \end{align} The T matrix of this effective channel is therefore given in the following coherence breaking form \begin{equation} \begin{aligned} \mathcal{T}_{\Gamma^{\prime}_{eff}}&=\begin{pmatrix} 1 & 0 & 0 & 0\\ 0&0 & 0 & 0\\ 0& 0 & 0 & 0\\ t(1+\lambda)& 0 & 0 & \lambda^2 \end{pmatrix}. \end{aligned} \end{equation} But, instead if we use a general measurement-based incoherent controlled unitary operation before tracing out the control qubit we can in-fact get an effective non-coherence breaking channel. Let the measurement-based controlled unitary be given by $\mathcal{U}=I\otimes\ket{+}\bra{+}+U\otimes\ket{-}\bra{-}$ where \begin{equation} U=\begin{pmatrix} e^{i\phi_1}\cos\theta & e^{i\phi_2}\sin\theta\\ -e^{-i\phi_2}\sin\theta& e^{-i\phi_1}\cos\theta \end{pmatrix}. \end{equation} Here the matrix representation of $U$ is written in the eigen basis of $\sigma_z$. The final qubit state after the use of the switch followed by the controlled unitary and the partial trace over the control qubit is given by the following: \begin{align} &\Gamma_{eff}(\rho)\nonumber \\ &=\text{Tr}_{\mathcal{H}_c} [\mathcal{U}\mathsf{S}_{\mathcal{E},\mathcal{E},\omega}(\rho)\mathcal{U}^\dagger ] \nonumber \\ &=\dfrac{1}{2}\begin{pmatrix} 1+t_1+c\eta_3+a\eta_1-ib\eta_2&t_2^*+c\gamma_3^*+a\gamma_1^*-ib\gamma_2^*\\ t_2+c\gamma_3+a\gamma_1+ib\gamma_2& 1-t_1-c\eta_3-a\eta_1+ib\eta_2 \end{pmatrix} \\ &\text{where,}\\ &t_1=(1+\lambda)(6+2\cos\theta)t/8,\\ &\eta_1=[-\cos(\phi_1-\phi_2)\sin2\theta (1-2\lambda+\lambda^2-t^2)]/8,\\ &\eta_2=[\sin(\phi_1-\phi_2)\sin2\theta (1-2\lambda+\lambda^2-t^2)]/8, \\ &\eta_3=(1+2\lambda+t^2)(1-\cos^2 2\theta)/8+\lambda^2\cos2\theta\\ &\gamma_1= (1-2\lambda+\lambda^2-t^2)(1-e^{-2i\phi_1}\cos\theta^2+e^{-2i\phi_2}\sin\theta^2)/8\\ &\gamma_2= (1-2\lambda+\lambda^2-t^2)(1-e^{-2i\phi_1}\cos\theta^2-e^{-2i\phi_2}\sin\theta^2)/8\\ &\gamma_3= (1-2\lambda+\lambda^2-t^2)e^{-i(\phi_1+\phi_2)}\sin 2\theta\\ &t_2=-2e^{-i(\phi_1+\phi_2)}(\lambda+1)t\sin 2\theta. \end{align} The $\mathcal{T}$ matrix of the effective channel is given as follows \begin{equation} \begin{aligned} \mathcal{T}_{\Gamma_{eff}}&=\begin{pmatrix} 1 & 0 & 0 & 0\\ t_2^R&\gamma_1^R & -\gamma_2^I & \gamma_3^R\\ t_2^I&\gamma_1^I & \gamma_2^R & \gamma_3^I\\ t_1& \eta_1 & -i\eta_2 & \eta_3 \end{pmatrix}\\ &\text{where} \hspace{1cm} \text{$x^R=Re(x)$ and $x^I=Im(x)$} \end{aligned} \end{equation} The case of $\theta=0$ corresponds to the incoherent unitary operator, for which the T-matrix takes the following form: \begin{equation} \begin{aligned} \mathcal{T}_{\Gamma_{eff}(\theta=0)}&=\begin{pmatrix} 1 & 0 & 0 & 0\\ 0&\gamma_1^R & -\gamma_2^I & 0\\ 0& \gamma_1^I & \gamma_2^R & 0\\ t_1& 0 & 0 & \eta_3 \end{pmatrix} \end{aligned} \end{equation} Here, for $\phi_1=\frac{\pi}{2}$, $\eta_3=\lambda^2$, we have $t_1=t(1 + \lambda )/2$ and $\gamma_1^R=\gamma_2^R=(1 - 2 \lambda+ \lambda^2 - t^2)/4$. Therefore, using an incoherent controlled unitary, a quantum switch can be used to convert a coherence breaking channel into a non-coherence breaking channel. However in this particular case, since the components $\mathcal{T}_{13}$, $\mathcal{T}_{23}$, $\mathcal{T}_{31}$, and $\mathcal{T}_{32}$ are zero, the effective channel after the switch operation can not generate coherence but can reduce the loss. 
\end{proof} \subsection{Communication using a noisy quantum switch}\label{subsec:noisy switch} In the previous sections, we have assumed that the control qubit is not interacting with the environment. But in practice the control qubit will interact with its environment, which will introduce noise in the state of the quantum switch. Therefore, through a measurement-based controlled operation one may not get the desired effective channel. We will study this through an example. \begin{example}[Switch with depolarising noise] \rm{Consider the quantum channel $\Lambda_{perfect}(\rho)=\frac{1}{2}\sigma_x\rho\sigma_x+\frac{1}{2}\sigma_y\rho\sigma_y$. We know that perfect communication can be achieved through this channel using the quantum switch \citep{chiribella_perfect}. Therefore, \begin{align} \mathsf{S}_{\Lambda_{perfect},\Lambda_{perfect},\omega}(\rho)=\frac{1}{2}\rho\otimes\omega+\frac{1}{2}\sigma_z\rho\sigma_z\otimes\sigma_z\omega\sigma_z \end{align} where $\omega=\ket{+}\bra{+}$.\\ After this, suppose the depolarising noise channel $\Gamma^{t}_2$ has acted on the control qubit state $\omega$ due to the interaction with the environment. Then the joint state of the system and the switch is given by \begin{align} \mathsf{S}^{\prime}_{\Lambda_{perfect},\Lambda_{perfect},\omega}(\rho)&=\frac{1}{2}\rho\otimes\Gamma^{t}_2(\omega)+\frac{1}{2}\sigma_z\rho\sigma_z\otimes\Gamma^{t}_2(\sigma_z\omega\sigma_z)\nonumber\\ &=\frac{1}{2}(\frac{(1+t)}{2}\rho+\frac{(1-t)}{2}\sigma_z\rho\sigma_z)\otimes\ket{+}\bra{+}\nonumber\\ &+\frac{1}{2}(\frac{(1+t)}{2}\sigma_z\rho\sigma_z+\frac{(1-t)}{2}\rho)\otimes\ket{-}\bra{-}. \end{align} Now suppose Alice implements a measurement-based controlled operation $\mathcal{U}=\mathbb{I}\otimes \ket{+}\bra{+}+\sigma_z\otimes\ket{-}\bra{-}$. Then the final state is given by \begin{align} Tr_{\mathcal{H}_c}[\mathcal{U}\mathsf{S}^{\prime}_{\Lambda_{perfect},\Lambda_{perfect},\omega}(\rho)\mathcal{U}^{\dagger}]=\frac{(1+t)}{2}\rho+\frac{(1-t)}{2}\sigma_z\rho\sigma_z. \end{align} Therefore, the above channel is no longer the identity channel, and hence perfect communication is not achieved. In a similar way, it can easily be shown that the noisy quantum switch may hamper improvement in communication through other quantum channels.} \end{example} \section{Conclusion}\label{sec:conc} In this work, we have presented several results regarding the improvement in quantum communication using the quantum switch. {We have argued that it is important to study the effect of the quantum switch by studying one output branch at a time. We have shown that if a useless channel remains useless even after using the quantum switch, concatenating it with another channel and subsequently using the quantum switch may provide improvements in communication. In particular, we have studied the conversion of \emph{completely useless} channels into useful channels through concatenation and use of quantum switch. This result might be useful in quantum communication technology in the future.} We have shown that improvements in communication due to the action of the quantum switch help us (i) to get an advantage in quantum random access codes as well as to demonstrate quantum steering when only useless channels are available for communication, and (ii) to prevent the loss of coherence. We have shown that noise introduced in the switch may hamper the communication improvement.\\ Our work opens up several research avenues.
It is an open problem to find the necessary and sufficient condition for a generic quantum channel to provide improvement under the action of the quantum switch. Though we have shown that if a channel is useless even after using the quantum switch, concatenating it with another channel may provide improvement in communication under the action of the quantum switch, the necessary and sufficient condition for this improvement is not known. It may also be interesting to compare the effectiveness of the noisy quantum switch in achieving improvements in different quantum information processing (or quantum communication) tasks. A study of the communication problems, considered here in the context of the quantum switch, in some other related contexts (like SDPP, etc.) is worth pursuing. Moreover, a comparison of the results -- obtained here using the quantum switch and the ones that might be obtained using SDPP -- is also an important aspect, to be considered in the near future. Another interesting future direction is the application of the quantum switch in the context of quantum dynamical maps to see whether, in particular, the action of such a switch can improve the speed of communication. \end{document}
\begin{document} \begin{abstract} Over 50 years of work on group actions on $4$-manifolds, from the 1960's to the present, from knotted fixed point sets to Seiberg-Witten invariants, is surveyed. Locally linear actions are emphasized, but differentiable and purely topological actions are also discussed. The presentation is organized around some of the fundamental general questions that have driven the subject of compact transformation groups over the years and their interpretations in the case of $4$-manifolds. Many open problems are formulated. Updates to previous problems sets are given. A substantial bibliography is included. \end{abstract} \title{A Survey of Group Actions on $4$-Manifolds} \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} These notes survey the realm of topological, locally linear actions of finite or compact Lie groups on topological 4-manifolds. The world of four-dimensional topological manifolds lies at the interface between the algebra of general $n$-manifolds, $n\ge 5$, and the geometry of $2$- and $3$-manifolds. Within that world of $4$-dimensions we now see a gradation from geometric and especially symplectic manifolds on the geometric side and topological manifolds on the algebraic side, with pure differentiable (or, equivalently, piecewise linear) manifolds residing at the interface. Naturally this pattern persists in the study of group actions on $4$-manifolds. The emphasis here is on highlighting those aspects of the higher dimensional framework where things go somewhat differently in dimension four and on highlighting positive topological results that pose challenges for differentiable transformation groups. Of special interest will be examples of topological locally linear actions on smooth manifolds that are not equivalent to smooth actions. We also indicate a few open problems related to possibly interesting phenomena related to purely topological actions. For a particular group $G$ it is natural to study a hierarchy of group actions: Free actions, which generalize both to semifree actions and to pseudofree actions, and then arbitrary actions. In this survey all group actions are understood to be effective, in the sense each nontrivial group element acts nontrivially. We will generally consider only locally linear actions, except where explicitly stated otherwise. But, as we shall see, some of the intriguing problems involve questions of whether particularly strange non-locally linear actions exist. We will also only consider orientation preserving actions unless explicitly stated otherwise, most notably in the examination of actions on the $4$-sphere. The story of actions by compact connected Lie groups was largely developed prior to the explosion of 4-manifold topology in the 1980's. It is now a good time to look back and identify problems left over from that era. In particular there are issues about circle actions on non-simply connected $4$-manifolds that still deserve attention. Similarly some of the early work on the $4$-sphere, including knotted $2$-sphere fixed point sets and also free actions occurred early on and merits a new look. The description of finite group actions is most successful in the world of simply connected topological 4-manifolds, which were completely classified 30 years ago in the work of Freedman \cite{FreedmanQuinn1990}. 
Some of the more tantalizing problems, and problems on which there has been the most recent progress, include questions about smooth actions on smooth $4$-manifolds, addressed by methods of gauge theory, both Donaldson's Yang-Mills instantons and the more recent Seiberg-Witten theory of monopoles. The results are more fragmentary and speculative in the world of non-simply connected 4-manifolds, including issues related to complements of $2$-dimensional fixed point sets, where most experts expect us eventually to find a failure of high dimensional surgery theory in the realm of 4-dimensional non-simply connected $4$-manifolds. In its broad outlines this report follows a historical progression, but we also attempt simultaneously to group topics together in a natural way, so that when appropriate we depart from the historical order of things. The exposition is also organized around a sequence of general, often familiar, open-ended, guiding \textbf{questions}, many of which make sense in all dimensions, and a parallel sequence of \textbf{problems}, which propose specific, concrete questions or conjectures, which are to the best of my knowledge still at least partially unsolved. There are two appendices. In the first we list the problems from the classic Kirby Problem List (original 1976, updated 1995) related to group actions on $4$-manifolds, and update them to 2015. In the second appendix we list and update the problems on $4$--manifolds from the 1984 Transformation Groups Conference at the University of Colorado. To accompany this paper we have prepared what we hope is a rather complete bibliography of papers on the subject of group actions on $4$-manifolds, including items not necessarily directly discussed in the body of the paper. {\color{black} The reader's attention is directed to a survey oriented toward group actions on symplectic 4-manifolds by W. Chen \cite{Chen2010}. Symplectic topology has seen substantial growth in recent years, so this survey is especially welcome. Although its content is largely distinct from the topics considered here, nevertheless we mention aspects of group actions on symplectic manifolds in several places. } \section{Circle and Torus Actions} Such actions of positive dimensional Lie groups are more accessible since the orbit space is a manifold of lower dimension. We focus on two questions: \begin{question}[Equivariant Classification] What is the classification of compact Lie group actions on $4$-manifolds, up to equivariant homeomorphism, especially for the circle and torus groups? \end{question} This question has been satisfactorily answered through the combined work of Orlik and Raymond \cite{OrlikRaymond1970,OrlikRaymond1974}, Fintushel \cite{Fintushel1976,Fintushel1977,Fintushel1978}, Orlik \cite{Orlik1982}, and Melvin and Parker \cite{MelvinParker1986}. The conclusion in all these cases is that one describes the action in terms of a weighted orbit space, with the weight information describing the non-principal orbits and appropriate bundle data. The details can be intricate. \begin{question}[Topological Classification] What $4$-manifolds admit actions by compact Lie groups of positive dimension, especially the torus or circle group, and how are they distinguished by their actions? \end{question} This question has pretty good answers in the case of simply connected $4$-manifolds or in the case of non-abelian Lie groups. Results are still somewhat incomplete in the non-simply connected case for circle and even torus actions.
Melvin and Parker \cite{MelvinParker1986} have effectively handled the case of actions by non-abelian Lie groups and reduced the general case to that of actions of the circle group $S^{1}$ and the torus $T^{2}=S^{1}\times S^{1}$. Building on earlier work of Melvin \cite{Melvin1981,Melvin1982}, Parker, Orlik \cite{Orlik1982} and even earlier work of Richardson \cite{Richardson1961}, they prove that a $4$-manifold admitting an action of a non-abelian compact Lie group must be one of five types: \begin{enumerate} \item $S^{4}$ or $\pm \mathbb{C}P^{2}$ \item connected sums of copies of $S^{1}\times S^{3}$ and $S^{1}\times \mathbb{R}P^{3}$ \item $SU(2)/H$-bundles over $S^{1}$, where $H$ is a finite subgroup of $SU(2)$ \item $S^{2}$-bundles over surfaces \item certain quotients of $S^{2}$-bundles over surfaces by involutions. \end{enumerate} Orlik and Raymond \cite{OrlikRaymond1970,OrlikRaymond1974} had identified the simply connected 4-manifolds admitting torus actions, showing that they are connected sums of standard building blocks $S^{4}$, $\pm\mathbb{C}P^{2}$, and $S^{2}\times S^{2}$. Pao \cite{Pao1977a,Pao1977b} extended this by showing that a general $4$-manifold with torus action is a connected sum of these building blocks, plus members of three other less-familiar families in general (modulo the then undecided 3-dimensional Poincar\'e Conjecture). M.H. Kim \cite{Kim1993} studied torus actions on non-orientable $4$--manifolds. Then Fintushel \cite{Fintushel1978} was able to identify completely the underlying manifolds admitting circle actions in the simply connected case. Similar results are due independently to Yoshida \cite{Yoshida1978}. Fintushel used some of Pao's ideas in \cite{Pao1978} to show that the underlying manifold admits a possibly different circle action that extends to a torus action, and hence must again be a connected sum of copies of $S^{4}$, $\pm\mathbb{C}P^{2}$, and $S^{2}\times S^{2}$ (again modulo the 3-dimensional Poincar\'e Conjecture). In particular this gives rise to many non-standard actions on standard manifolds, some of which we will allude to subsequently in the case of actions on $S^{4}$. Also the ``exotic'' topological $4$-manifolds, such as the $E_{8}$ manifold or the fake complex projective plane, etc., do not admit locally linear circle actions. Pao's replacement trick was to show how one can simplify the weighted orbit space (and the equivariant homeomorphism type of the action) while keeping the total space the same. As a first application this led to more knotted 2-spheres as fixed point sets in the $4$-sphere. The Pao trick has been applied more recently by Baldridge in a study of Seiberg-Witten invariants of 4-manifolds with circle action, which we'll mention again later in this survey. \begin{problem} Give a meaningful topological classification of the non-simply connected $4$-manifolds that admit $S^{1}$ actions. \end{problem} A special case of this was proposed by Melvin for the Kirby Problem List. See Appendix A. In particular, if the quotient manifold has free fundamental group, is the $4$--manifold a connected sum of copies of $S^{1}\times S^{3}$, $S^{2}\times S^{2}$, and $S^{2}{\otimes}S^{2}$ (the nontrivial bundle)? {\color{black} Chen \cite{ChenArXiv2013} has studied the role of the fundamental group for $4$-manifolds with a fixed point free circle action. He shows among other things that for any finitely presented group with infinite center, there are at most finitely many distinct smooth (resp.
topological) 4-manifolds which support a fixed-point free smooth (resp. locally linear) $S^1$-action and realize the given group as the fundamental group. Smooth structures seem to play an interesting role in some of these issues about fundamental groups. Chen \cite{ChenArXiv2011} shows that the homotopy class of an orbit of a smooth fixed point free circle action on a smooth 4-manifold with a nonzero Seiberg-Witten invariant necessarily has infinite order in the fundamental group. Earlier Kottschick \cite{Kottschick2006} had used Seiberg-Witten theory to show that a closed symplectic $4$-manifold does not admit a smooth free circle action with contractible orbits.} \begin{problem} Determine how much of the theory of smooth or locally linear actions of the circle or the torus depends only on algebraic topology. How much of the analysis carries over to topological circle or torus actions or actions on homology manifolds or on Poincar\'e duality spaces, for example? \end{problem} Work of Huck \cite{Huck1995} and Huck and Puppe \cite{HuckPuppe1998} is in this direction. The main goal is to use algebraic topology to prove that the intersection form of a 4-manifold with arbitrary circle action, defined on $H^{2}/\text{torsion}$, must split as an orthogonal sum of one- and two-dimensional forms. This has been accomplished, generalizing the work of Fintushel in the simply connected, locally linear, case, but only provided the action satisfies the following property, which is a weakening of local linearity: Any fixed point has an invariant neighborhood containing at most four distinct orbit types. This remaining issue is the main impediment to a satisfactory study of circle actions in the non-locally linear case. \begin{problem} Does a purely topological circle action on a $4$-manifold have at most four distinct orbit types in a small invariant neighborhood of any fixed point? \end{problem} Kwasik and Schultz identified this basic problem and achieved partial results, but the final result still seems elusive. One should note that the coned-off $E_{8}$-plumbing manifold admits a circle action that does have more than four orbit types in a neighborhood of the cone point. Does the closed $E_{8}$ topological $4$--manifold provided by Freedman's classification of simply connected topological $4$--manifolds admit a non-locally linear circle action? Closely related to these considerations are the following two problems of Schultz from the Carlsson \cite{Carlsson1990} problem list of 1990. \begin{problem} Let $M$ be a closed topological 4-manifold with a topological circle action. Is the Kirby-Siebenmann triangulation obstruction of $M$ trivial? \end{problem} \begin{problem} Is every topological circle action on a 4-manifold concordant to a smooth action? \end{problem} More recent work in the area of smooth circle actions has been directed toward Seiberg-Witten invariants, which were calculated by Baldridge (\cite{Baldridge2001,Baldridge2003,Baldridge2004}). No applications to the underlying questions above were addressed, however. Herrera \cite{Herrera2006} has shown that if a smooth $4$-manifold has an even intersection pairing, but is not necessarily spin, and admits a nontrivial smooth circle action, then the signature necessarily vanishes. This extends in dimension four the famous result of Atiyah-Hirzebruch that the $\widehat{A}$ genus vanishes for smooth spin manifolds admitting a smooth circle action. In brief, Herrera shows that the circle action lifts to a circle action on a spin covering.
Finally, there is a whole additional thread of symplectic circle and torus actions in dimension four that we have not been able to address here, for reasons of lack of time and expertise. \section{The $4$-sphere} Changing focus to finite group actions, a natural place to begin is with actions on the $4$-sphere. \subsection{Codimension one fixed set} Poenaru \cite{Poenaru1960} and, independently, Mazur \cite{Mazur1961} were the first to observe non-simply connected homology spheres that bound contractible manifolds whose doubles are spheres, thus producing the first non-linear actions\footnote{Except for the suspension of Bing's (1952) involution on $S^{3}$ fixing a wildly embedded $2$-sphere.} on $S^{4}$. De Rham \cite{deRham1965} elaborated Poenaru's construction to produce infinitely many examples. Given Freedman's work on topological $4$-manifolds \cite{FreedmanQuinn1990}, we have a complete understanding in the topological category. Such topological, locally linear, reflections (clearly orientation-reversing) are in one-to-one correspondence with their fixed point sets, which are integral homology $3$-spheres bounding contractible topological $4$-manifolds. Applying this construction to a homology $3$-sphere that does not bound a smooth contractible manifold, one obtains non-smoothable but locally linear actions on $S^{4}$. \subsection{Smith conjecture} The next earliest results refer to 2-dimensional fixed point sets. \begin{question}[Generalized Smith Conjecture] Can a knotted $2$-sphere be the fixed point set of a $C_{p}$ action on $S^{4}$? \end{question} Giffen \cite{Giffen1966} gave the first examples of such knotted fixed point sets, via cyclic branched covers of certain twist spun knots of Zeeman, valid in all dimensions greater than $3$. The first examples were restricted to odd order groups. Gordon \cite{Gordon1974} extended the construction to cover any order cyclic group. Then Pao \cite{Pao1978} used his ``replacement trick'' to give a complete analysis (modulo the classical Poincar\'e Conjecture) of the situation for circle actions, and in particular gave infinite families of examples. \begin{problem} Identify more obstructions to a knot in $S^{4}$ being a fixed point set. \end{problem} What, if anything, can one meaningfully say about the classical invariants of fixed point knots in this dimension? It is a theorem of Kervaire, valid in the smooth, PL, or topological locally flat categories, that all $2$-knots in $S^{4}$ are slice. McCooey \cite{McCooey2007c} has shown that a $2$-sphere fixed by a $C_{p}$ action is equivariantly slice, in the topological locally linear category. \subsection{Free actions} According to the Lefschetz Fixed Point Theorem the only nontrivial group that can act freely on a $4$-dimensional sphere is $C_{2}$. Moreover, the orbit space of such an action is homotopy equivalent to $\mathbb{R}P^{4}$. \begin{question}[Fixed Point Free Involutions] Is there a fixed point free involution on $S^{4}$ not equivalent to the antipodal map? \end{question} Cappell and Shaneson \cite{CappellShaneson1976} produced the first examples of smooth $4$-manifolds homotopy equivalent but not diffeomorphic to $\mathbb{R}P^{4}$, by an interesting construction removing a neighborhood of a torus and sewing back in something else. For some while the universal coverings of these examples held promise of providing counterexamples to the smooth four-dimensional Poincar\'e conjecture.
Eventually, however, Gompf \cite{Gompf1991} showed that one of these examples has universal covering diffeomorphic to $S^{4}$. {\color{black} More recently Akbulut \cite{Akbulut2010} has shown that all the candidates in an infinite list of Cappell-Shaneson spheres are diffeomorphic to one another. It follows that none of these manifolds is a counterexample to the smooth four-dimensional Poincar\'e conjecture. And Gompf \cite{Gompf2010} has further shown that the members of an even larger family of examples are standard. } Meanwhile Fintushel and Stern \cite{FintushelStern1981} had used different techniques to produce the first smooth fake $\mathbb{R}P^{4}$ with universal covering diffeomorphic to $S^{4}$. Ruberman \cite{Ruberman1984} showed how to construct the unique topological 4-manifold homotopy equivalent to $\mathbb{R}P^{4}$ but not homeomorphic to it, as promised by surgery theory. \begin{problem}[Ruberman] Does every smooth, free involution on $S^{4}$ admit an invariant $2$-sphere? \end{problem} Of course, the standard antipodal map has an invariant $2$-sphere, and also Ruberman's topological construction provides just such an invariant (topological) $2$-sphere. At least some of the constructions mentioned above do yield an invariant $2$-sphere. These studies of free involutions predate the introduction of gauge theory into 4-dimensional topology. Are there useful contributions to these questions that might arise from gauge theory? \subsection{One fixed point actions} \begin{question} Can a finite group act locally linearly on $S^{4}$ with just one fixed point? \end{question} Without the explicit local linearity hypothesis, this would be equivalent, via one-point compactification, to actions on $\mathbb{R}^{4}$ without a fixed point, which we will also consider separately below. Of course, by Smith Theory, any $p$-group fixes at least two points, so the full singular set for the action consists of more than just a single fixed point. In an interesting turn of events, this was originally answered ``No'' in the smooth case via gauge theory, by Furuta \cite{Furuta1989}, in the first application of Donaldson's theory of (anti) self-dual connections to transformation groups. Only later was a topological, locally linear version proved, by DeMichelis \cite{DeMichelis1989}, using more traditional techniques. What about a purely topological action? One can argue, using Smith theory, that the answer is still ``no'' for arbitrary topological actions of solvable groups. (In higher dimensions this argument breaks down and there do exist smooth periodic maps of non-prime order on spheres with just one global fixed point.) \begin{problem} Can a finite, non-solvable group act topologically on $S^{4}$ with just one fixed point? \end{problem} We remark that there is indeed a one-fixed-point action of the group $A_{5}$ on a $4$-dimensional integral homology manifold of the homotopy type of $S^{4}$. For there is a smooth action of $A_{5}$ on the Poincar\'e homology $3$-sphere with just one fixed point, leading to a one-fixed-point action on its reduced suspension. \subsection{Pseudofree actions} For our purposes an action of a finite group on a manifold is said to be \emph{pseudofree} if it is free away from a discrete set of singular points with nontrivial isotropy groups. Kulkarni \cite{Kulkarni1982} studied pseudofree actions of general finite groups on general spaces.
He largely classified the finite groups that can act pseudofreely, preserving orientation, on a space with the homology of an even-dimensional sphere. He left open whether a dihedral group $D_{n}$ of order $2n$, $n$ odd, could act pseudofreely on an even-dimensional sphere of dimension greater than $2$. This author \cite{Edmonds2010} subsequently completed the argument that the dihedral group $D_{n}$ does not act pseudofreely, preserving orientation, on $S^{4}$, or, more generally, on $S^{d}$, where the dimension $d$ is divisible by 4. Then Hambleton \cite{Hambleton2011} was able to apply rather different techniques to show further that $D_{n}$ does not act pseudofreely, preserving orientation, on any even-dimensional sphere. The study of group actions with isolated fixed points has also generated some work related to topological actions that are not necessarily nice near the singular points. Kwasik and Schultz \cite{KwasikSchultz1990} studied orientation-preserving, pseudofree actions of cyclic groups on $S^{4}$ in both the locally linear and topological categories, exploring the projective class group obstruction to desuspending an action of $C_{2n}$ on $S^{4}$ to be the join of a free action on $S^{3}$ with a nontrivial action on $S^{0}$. See also the general discussion of conelike and weakly conelike actions in Freedman-Quinn \cite{FreedmanQuinn1990}, Section 11.10. They argue, in particular, that the weakly conelike actions are precisely the pseudofree actions whose associated projective class group obstruction vanishes. \subsection{Groups acting on $S^{4}$} Here we consider group actions with no particular assumptions about the fixed point sets. \begin{question} What finite groups act on $S^{4}$? \end{question} Of course, if the action is locally linear and has a fixed point, then the group is a subgroup of $O(4)$, and if it acts linearly, then it is a subgroup of $O(5)$. \begin{problem} Show that a finite group acting on $S^{4}$, without a fixed point, is isomorphic to a subgroup of $O(5)$. \end{problem} {\color{black} Mecchia and Zimmermann \cite{MecchiaZimmermann2011} came close to solving this problem in the case of smooth actions, building on their earlier work in \cite{MecchiaZimmermann2006}, which addressed the case of non-solvable groups. The orientation-preserving case has finally been completed by Chen, Kwasik, and Schultz \cite{ChenKwasikSchultzArXiv2014}, dealing with a delicate issue of subgroups of index 2. The orientation-reversing case is not completely settled. These results should apply equally well to both smooth and locally linear actions. What if the local linearity is dropped? Then Chen, Kwasik, and Schultz \cite{ChenKwasikSchultzArXiv2014} construct a \emph{topological} orientation-reversing action of a group that does not embed in $O(5)$. } \subsection{The Smith conjecture on representations at fixed points} \begin{question} If a group $G$ acts locally linearly (or smoothly) on a sphere with two fixed points, are the local representations at the two fixed points equivalent? \end{question} In higher dimensions the answer is no in general and the problem has provided a rich line of research. In dimension $4$, in the smooth case, an affirmative answer was given by Hambleton and Lee \cite{HambletonLee1992} and Braam and Matic \cite{BraamMatic1993} in another one of the early applications of gauge theory to group actions.
\begin{thm}[Hambleton and Lee \cite{HambletonLee1992}, Braam and Matic \cite{BraamMatic1993}] If a finite group acts smoothly on $S^{4}$ with two isolated fixed points, then the local tangential representations at the fixed points are equivalent. \end{thm} The idea is to look at the instanton number one moduli space of self-dual connections with its induced action and relate the normal representations at the fixed points on the original manifold to the normal representations at fixed points in the moduli space near a reducible connection. The details are subtle. On the one hand, their proofs illustrated the fact that a gauge-theoretic approach can be intuitively appealing and eliminate a study of numerous special cases. On the other hand, the gauge-theoretic arguments have their own subtleties, related to the failure of equivariant transversality, and were hard to get right. See Section \ref{sec:gauge} for a little more discussion. DeMichelis \cite{DeMichelis1989} was able to give a proof valid for topological, locally linear actions on homology $4$-spheres, using more classical techniques, after learning of the early attempts to prove this result via gauge theory. \section{Euclidean $4$-space} In some ways this might have been the appropriate place to start our survey of finite group actions, but non-compactness raises its own difficulties. \begin{question} Is there a periodic self-map of $\mathbb{R}^{4}$ that has no fixed point? \end{question} In higher dimensions such actions were first found by Conner and Floyd, with examples found in all dimensions $\ge 8$ by Kister. Smith \cite{Smith1960} observed that no such periodic maps exist in dimensions less than 5 (and not in dimension 5 or 6 either if one assumes the maps are smooth). Examples were found in dimension $7$ by Haynes et al. \cite{HaynesKwasikMastSchultz2002}. Smith's argument extends easily to the case of solvable groups. Thus it remains to contemplate actions of more complicated groups. \begin{problem} Find a non-solvable group acting without a global fixed point on $\mathbb{R}^{4}$. \end{problem} Contemplating possible actions by unfamiliar groups on $\mathbb{R}^{4}$, one is led to ask just what groups arise. \begin{problem} Show that a finite group that acts on $\mathbb{R}^{4}$ is isomorphic to a subgroup of $O(4)$. \end{problem} Of course, if the action is smooth or locally linear and has a fixed point, then the group is a subgroup of $O(4)$. {\color{black}This problem has been completely resolved in the smooth or locally linear cases by Guazzi, Mecchia, and Zimmermann \cite{GuazziMecchiaZimmermann2011}, without, however, proving that such a group action necessarily has a fixed point. Indeed, their argument applies to actions on acyclic manifolds, and there is an action of the alternating group $A_{5}$ on such an acyclic $4$-manifold with no global fixed point. On the other hand a surgery-theoretic analysis of Hambleton and Madsen \cite{HambletonMadsen1986}, appropriately translated into dimension $4$, shows that there exist groups $G$ that act semifreely on $\mathbb{R}^{4}$ with a single isolated fixed point such that $G$ is not isomorphic to a subgroup of $O(4)$. See the discussion in Hambleton's recent survey \cite{Hambleton2015} after the statement of Theorem 9.1 for a brief, but explicit, discussion of extending surgery theory for finite groups to dimension $4$.
He also mentions some other sorts of exotic actions that arise from this point of view, including examples of non-solvable groups that act topologically, but not linearly.} Musing about the ``fake'' $\mathbb{R}^{4}$'s that came out of the work of Freedman and Donaldson in the 1980s, one is led to wonder about their group actions. \begin{problem} Is there a smooth fake $\mathbb{R}^{4}$ that admits no nontrivial, smooth group actions? \end{problem} This problem was already raised at the 1984 Transformation Groups conference in Boulder. Taylor \cite{Taylor1999} has constructed examples of fake $\mathbb{R}^{4}$s on which the circle group cannot act smoothly. \section{Simply connected $4$-manifolds} As the study of group actions on $4$-manifolds developed, interest began to move beyond the basic spheres, disks, and Euclidean spaces associated with Smith theory, to look more closely at standard $4$-manifolds including the complex projective plane $\mathbb{C}P^{2}$, $S^{2}\times S^{2}$, and the K3 surfaces of algebraic geometry. Along the way a rather successful study of existence and classification of locally linear, pseudofree actions on closed, simply connected $4$--manifolds ensued. \subsection{Particular model $4$-manifolds} After the sphere, perhaps the most important basic closed $4$-manifold is the complex projective plane. In certain ways it is even more important than the sphere. \subsubsection{The complex projective plane} \begin{question} What groups act on $\mathbb{C}P^{2}$? \end{question} There is an excellent answer. \begin{thm}[Wilczy\'nski \cite{Wilczynski1987,Wilczynski1990}, Hambleton and Lee \cite{HambletonLee1988}] If a finite or compact Lie group $G$ acts locally linearly on $\mathbb{C}P^{2}$, then $G$ admits a faithful projective representation of complex dimension $3$, and hence also acts linearly on $\mathbb{C}P^{2}$. \end{thm} The first version, for actions of finite groups that are homologically trivial, was due independently to Hambleton and Lee \cite{HambletonLee1988} and Wilczy\'nski \cite{Wilczynski1987}. In that case one can just say that $G$ must be a subgroup of $PGL_{3}(\mathbb{C})$. The general result is due to Wilczy\'nski \cite{Wilczynski1990}. He characterizes the ``universal group'' acting as the quotient $NU(3)/ZU(3)$, where $NU(3)$ and $ZU(3)$ denote the normalizer and the centralizer, respectively, of the unitary group $U(3)$ when viewed as a subgroup of $O(6)$. Wilczy\'nski \cite{Wilczynski1988} also gave a similar result for groups acting locally linearly on the Chern manifold, the unique $4$-manifold homotopy equivalent to $\mathbb{C}P^{2}$ but not homeomorphic to it. In that case the possible groups, however, are more restrictive, since $G$ is necessarily finite and without $2$-torsion. In this case it also follows that any action is necessarily pseudofree. For more general $4$-manifolds with the homology of $\mathbb{C}P^{2}$, any group of symmetries is known to be a linear group. \begin{question} Is every locally linear action on $\mathbb{C}P^{2}$ equivalent to a (projective) linear action? \end{question} There is just one known exception: Hambleton and Lee observed an action of $C_{p}\times C_{p}$, where the $2$-sphere fixed by one $C_{p}$ is knotted with respect to the fixed point set of another $C_{p}$. For simplicity, especially in light of the preceding example, we restrict to semifree actions.
\begin{thm}[Wilczy\'nski \cite{Wilczynski1991}] A locally linear, semifree action of a finite cyclic group on $\mathbb{C}P^{2}$ is equivalent to a linear action. \end{thm} According to Edmonds and Ewing \cite{EdmondsEwing1989}, in the pseudofree case, the local representations at the three isolated fixed points must be the same as those of a linear action. (Dovermann had already shown that if the fixed point set consists of a $2$--sphere and an isolated point, then the local representation at the isolated point is standard.) The proof involved showing that the only solutions of the equations of the $g$-signature theorem are those coming from standard linear actions. Hambleton and Lee \cite{HambletonLee1992} gave an independent proof of this, assuming the action is smooth, as an application of equivariant gauge theory. Then a four-dimensional surgery argument shows that the given action is equivalent to the corresponding linear model. See Hambleton, Lee, and Madsen \cite{HambletonLeeMadsen1989} and especially Wilczy\'nski \cite{Wilczynski1991} for full details of a more general result. \subsection{General simply connected 4-manifolds} According to Freedman, closed simply connected $4$-manifolds are classified up to homeomorphism by the intersection form and, in the case of intersection forms of odd type, the stable triangulation obstruction. \subsubsection{Existence of group actions}\label{subsec:existence} \begin{question} What simply connected $4$-manifolds admit actions of $C_{p}$? \end{question} \begin{thm}[Kwasik and Vogel \cite{KwasikVogel1986b}] If $M$ is a closed, orientable, $4$-manifold and $C_{2}$ acts locally linearly on $M$, then the Kirby-Siebenmann triangulation obstruction $\text{KS}(M)\in H^{4}(M;\mathbb{Z}_{2})$ must vanish. \end{thm} This extended a slightly earlier result of Kwasik \cite{Kwasik1986a}, which applied specifically to the Chern manifold, or fake $\mathbb{C}P^{2}$. At the same time Kwasik argued that the Chern manifold admits locally linear, pseudofree, actions of odd order cyclic groups. \begin{thm}[Edmonds \cite{Edmonds1987}] If $M$ is a closed, orientable, $4$-manifold and $p$ is a prime greater than $3$, then $C_{p}$ acts locally linearly, homologically trivially, and pseudofreely on $M$. \end{thm} These manifolds all admit locally linear actions of $C_{3}$, too, but in some cases a two-dimensional fixed point set is required. \begin{question} What automorphisms of the intersection form of a simply connected $4$-manifold $M$ are induced by actions of $C_{p}$ on $M$? \end{question} In the case of pseudofree actions Edmonds and Ewing gave a good answer for groups of prime order. \begin{thm}[Edmonds and Ewing \cite{EdmondsEwing1992}] Let a closed, simply connected $4$--manifold $M$ be given, together with a representation of $C_{p}$ on $H_{2}(M)$, preserving the intersection form. Then there is a locally linear, pseudofree action of $C_{p}$ on $M$ inducing the given action on homology if and only if the representation on $H_{2}(M)$ is of the form $\mathbb{Z}[C_{p}]^{r}\oplus \mathbb{Z}^{t}$ and there is candidate fixed point data for $t+2$ isolated fixed points satisfying the $g$--signature theorem and an additional Reidemeister torsion condition. \end{thm} Note that the number of fixed points is determined by the action on homology by the Lefschetz fixed point theorem. This result has been useful for showing that certain kinds of actions exist topologically, even in cases where it can be shown that the action cannot be smooth.
The construction of such actions proceeds by building up a smooth equivariant handlebody structure, with one $0$--handle and suitable $2$--handles. One then faces the problem of capping off the manifold with boundary, which is necessarily an integral homology $3$--sphere with a free action of $C_{p}$. It is here that the signature and torsion conditions come in, as one applies high-dimensional surgery theory ideas to a $4$--dimensional situation with finite fundamental group to show that the action on the boundary is equivariantly $h$--cobordant to a linear action on $S^{3}$. And it is here that one is forced to use a topological as opposed to a smooth or PL manifold. {\color{black} \begin{problem} Develop an analog of the Edmonds and Ewing existence theorem \cite{EdmondsEwing1992} for actions that also allows for fixed point sets including $2$-spheres or even other surfaces of positive genus, not just isolated points. \end{problem} } {\color{black} Even in the case of homologically trivial actions this may not be so easy. If the group $C_{p}$ acts locally linearly and homologically trivially on a closed, oriented, simply connected $4$-manifold $M$, then an Euler characteristic argument shows that the number $n$ of $2$-spheres in the fixed point set satisfies $n\le \chi(M)/2$ (the fixed point set has Euler characteristic $\chi(M)$, by the Lefschetz fixed point theorem applied to a generator of $C_{p}$, and each fixed $2$-sphere contributes $2$). One may ask what values of $n$ are realizable. Recently Hamilton \cite{Hamilton2016} has used a close analysis of aspects of the $G$-Signature Formula to show that in particular cases there are more subtle restrictions. For example, if all $2$-spheres in the fixed point set have self-intersection $\le s<0$, where $s$ is a negative integer, then \[ n < \frac{\chi(M)}{2-s}\left(1 + \frac{6}{p(2-s)-(4+s)}\right). \] If the manifold $M$ is smooth and minimal (no $(-1)$-spheres) and has $b_{2}^{+}>0$ and a nonzero Seiberg-Witten invariant (implying that self-intersections of smooth $2$-spheres are negative), then this implies \[ n < \frac{\chi(M)}{4}\left(1 + \frac{3}{2p-1} \right). \] } {\color{black} \subsubsection{Groups that act on some $4$-manifold} Of course \emph{any} finite group is the fundamental group of a closed, orientable, smooth $4$-manifold and hence acts freely on the universal covering of that manifold. But if the group acts homologically trivially on a manifold more complicated than the $4$-sphere, $\mathbb{C}P^{2}$, or $S^{2}\times S^{2}$, then there are significant restrictions on the finite groups that occur. A first result in this direction is due to Hambleton and Lee \cite{HambletonLee1995}, who showed, making use of gauge theory, that a finite group that acts smoothly on a connected sum of $n\ge 1$ copies of the complex projective plane, inducing the identity on homology, is isomorphic to a subgroup of $PGL_{3}(\mathbb{C})$, and that if $n>1$, then in fact the group is abelian of rank $2$. They also analyzed the local representations at fixed points and showed, as a consequence of the work of Edmonds and Ewing \cite{EdmondsEwing1992} in the topological category, that there are locally linear cyclic group actions on a connected sum of $n\ge 1$ copies of the complex projective plane, inducing the identity on homology, that are not smoothable. More generally McCooey gave the following general result in the topological locally linear context. \begin{thm}[McCooey \cite{McCooey2002a}] Let $G$ be a compact Lie group acting effectively, locally linearly, and homologically trivially on a closed $4$-manifold $M$, with $H_1(M;\mathbb{Z})=0$. If $b_2(M)\geq 3$, then $G$ is isomorphic to a subgroup of $S^1 \times S^1$.
\end{thm} McCooey shows that the same conclusion applies when $b_2(M)= 2$ and ${\rm Fix}(M,G) \neq \emptyset$. When $b_{2}(M)=2$, Mecchia and Zimmermann \cite{MecchiaZimmermann2009} prove in general that the only simple group that occurs is $A_5$, and also give a short list of finite nonsolvable groups that are candidates for such actions. McCooey \cite{McCooey2007b} has also studied some cases where $H_1(M;\mathbb{Z})\neq 0$ and is torsion-free. Assuming in addition that $b_{2}(M)\neq 0, 2$ and that $\chi(M)\neq 0$, he shows that an orientation-preserving finite group acting on $M$ in a homologically trivial way must be cyclic. Recently Mundet i Riera has provided an alternative perspective in analogy with the classical Jordan theorem about orders of finite subgroups of $\text{GL}(n,\mathbb{R})$, as follows. \begin{thm}[Mundet i Riera \cite{MundetiRieraArXiv2015}] Let $M$ be a compact, orientable, connected $4$-dimensional smooth manifold, such that $\chi(M)\neq 0$. There exists a natural number $C$ such that any finite group $G$ acting smoothly and effectively on $M$ has an abelian subgroup $A$ of index $\leq C$ with $\chi(M^A)=\chi(M)$. If $\chi(M)>0$ then $A$ has rank at most $2$, and if $\chi(M)<0$ then $A$ is cyclic. \end{thm} } \subsubsection{Classification of group actions} In the case of actions on standard manifolds one attempts to show that an arbitrary action is equivalent to a standard action. But for general manifolds one does not necessarily have a standard model action. \begin{question} What is the classification of actions of a finite cyclic group $C_{n}$ on a given simply connected $4$-manifold? \end{question} Hambleton and Kreck \cite{HambletonKreck1988,HambletonKreck1990,HambletonKreck1993c} addressed the case of free actions, by classifying the orbit spaces, culminating in the following result. \begin{thm}[Hambleton and Kreck \cite{HambletonKreck1993c}] A closed, oriented $4$-manifold with finite cyclic fundamental group is classified up to homeomorphism by the fundamental group, the intersection form on $H_{2}(M;\mathbb{Z})/\text{Tors}$, the $w_{2}$-type, and the Kirby-Siebenmann triangulation obstruction. Moreover, any isometry of the intersection form can be realized by a homeomorphism. \end{thm} Actually Hambleton and Kreck only assume that the cohomology of $\pi_{1}$ is $4$--periodic, a condition that Bauer \cite{Bauer1988} weakened to requiring that the $2$--Sylow subgroup have $4$--periodic cohomology. For more general fundamental groups there are additional invariants, including a class in $\mathbb{Z}/|\pi_{1}|\mathbb{Z}$ and a secondary obstruction in $\text{Torsion}(\Gamma(\pi_{2}(X))\otimes \mathbb{Z})$, where $\Gamma$ denotes Whitehead's quadratic functor. Wilczy\'nski \cite{Wilczynski1994} then dealt with general pseudofree actions of cyclic groups, completed by Wilczy\'nski and Bauer \cite{BauerWilczynski1996}. Up to a finite ambiguity, pseudofree actions on simply connected $4$-manifolds are determined by the local representations at fixed points and by the action on homology. The most all-inclusive theorem would be the following. \begin{thm}[Wilczy\'nski \cite{Wilczynski1994}, Wilczy\'nski and Bauer \cite{BauerWilczynski1996}] Suppose $M$ is a closed, oriented, simply-connected, topological $4$-manifold with an action of $G=C_{n}$, acting locally linearly and pseudofreely, preserving orientation, and with non\-empty fixed point set $M^{G}$.
If $n$ is odd or $w_{2}(M)\ne 0$, then the oriented topological conjugacy classes of locally linear, pseudofree $G$-actions on $M$, with the same local fixed point representations and the same equivariant intersection form (and the same orbit space triangulation obstruction) are in one-to-one correspondence with the elements of a certain finite double coset space $O(\nu)\backslash O_{*}(\lambda)/O(\lambda)$, and are distinguished by the quadratic $2$-type. \end{thm} \subsection{Extension}According to Freedman's theory of topological $4$-manifolds \cite{FreedmanQuinn1990}, any homology $3$-sphere bounds a unique contractible topological $4$-manifold. \begin{problem} Does every finite group action on a homology $3$-sphere extend to one on the contractible topological $4$-manifold it bounds? \end{problem} The answer is known to be yes in the case of free actions on homology $3$-spheres, provided one does not insist on local linearity at the central fixed point. Versions of this are due to Kwasik and Vogel \cite{KwasikVogel1986b}, Ruberman, and Edmonds \cite{Edmonds1987}. The problem of making a locally linear extension was dealt with by Edmonds \cite{Edmonds1987} and Edmonds and Ewing \cite{EdmondsEwing1992} as the last step in the construction of locally linear pseudofree actions on closed $4$--manifolds. See also Kwasik and Lawson \cite{KwasikLawson1993} {\color{black} who used gauge theory to show that certain free cyclic actions on Brieskorn spheres that bound smooth, contractible $4$-manifolds do not extend smoothly. Recently Anvari and Hambleton \cite{AnvariHambleton2014} have used gauge-theoretic considerations to show that none of the standard free actions on Brieskorn spheres extend smoothly over any contractible smooth manifold they may bound. } An interesting remaining case is that of actions of $C_{p}$, with nonempty fixed point set a knotted circle, especially the case $p=2$. A related non-equivariant question would be whether a knot in a homology $3$-sphere bounds a disk in a contractible $4$-manifold, in analogy with the way that any knot in $S^{3}$ can be coned off in $D^{4}$. The analogous question for circle actions is closely related to the question mentioned earlier about whether a topological circle action can have more than four orbit types in a neighborhood of a fixed point. \begin{problem} Do all closed simply connected $4$-manifolds (even with nonvanishing triangulation obstruction) admit topological involutions? Does the Chern manifold, the fake $\mathbb{C}P^{2}$, admit an involution? \end{problem} Such an involution cannot be locally linear when the triangulation obstruction is nonzero, by Kwasik and Vogel \cite{KwasikVogel1986a,KwasikVogel1986b}. Kwasik and Vogel did show that some nontriangulable $4$--manifolds do admit involutions, including the case of closed, simply connected, spin $4$-manifolds. \subsection{Other special manifolds} \subsubsection{K3 surface} By a topological K3 surface we mean a closed, simply connected topological $4$-manifold with intersection pairing on second homology that is even of rank $22$ and signature $-16$. By Freedman's classification all such manifolds are homeomorphic to a standard algebraic surface, such as the hypersurface in $\mathbb{C}P^{3}$ consisting of solutions to the equation $z_{0}^{4}+z_{1}^{4}+z_{2}^{4}+z_{3}^{4}=0$. It follows that all such manifolds are smoothable. From the point of view of algebraic geometry a K3 surface is a smooth, simply connected, complex algebraic variety with trivial canonical bundle. 
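For reference, we recall the standard explicit description of the K3 intersection pairing (a classical fact recorded only for the reader's convenience, not drawn from the papers cited in this section): it is isomorphic to
\[
(-E_{8})\oplus(-E_{8})\oplus 3\begin{pmatrix}0&1\\1&0\end{pmatrix},
\]
which is even, of rank $22$ and signature $-16$, so that $b_{2}^{+}=3$ and $b_{2}^{-}=19$. These are the numbers that reappear, for example, in Bryan's theorem quoted in Section \ref{sec:gauge} below.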
All such algebraic surfaces are diffeomorphic. But gauge theory has led to constructions of many \emph{homotopy K3 surfaces}, that is, smooth manifolds homotopy equivalent, and hence homeomorphic, to the standard K3 surface, but definitely not diffeomorphic to it. Algebraic K3 surfaces admit rich families of nontrivial symmetries, and there is a vast literature on the subject in algebraic geometry. We refer to Nikulin \cite{Nikulin1979}, who initiated much of the work in this area, to Mukai \cite{Mukai1988}, who classified the maximal groups acting symplectically (i.e., acting trivially on the one-complex-dimensional space of holomorphic $2$-forms), and for the most recent work in the area, including the non-symplectic case, to work of Sarti and her coauthors, e.g.\ \cite{ArtebaniSarti2008, GarbagnatiSarti2007}. There has been a variety of work aimed at identifying non-smoothable actions on such a surface. \begin{question} Do smooth homotopy $K3$--surfaces admit smooth, homologically trivial group actions? \end{question} Complex analytic group actions are known to be homologically nontrivial. On the other hand, the work of Edmonds \cite{Edmonds1987} showed that such a manifold admits homologically trivial, locally linear actions. Thus the question is really about smooth actions. \begin{thm}[Ruberman \cite{Ruberman1995}, Matumoto \cite{Matumoto1992}] A topological K3 surface does not admit a homologically trivial, locally linear, involution. \end{thm} According to Jin Hong Kim \cite{Kim2008}, a homotopy K3 surface does not admit a homologically trivial smooth $C_{3}$ action. The proof uses somewhat delicate computations involving Seiberg-Witten invariants. But see the comment in the appendix updating Problem 4.124 in the classic Kirby Problem List. \begin{problem} Show that $C_{p}$, $p\ge 3$, cannot act smoothly and homologically trivially on a homotopy $K3$ surface. \end{problem} {\color{black} Chen and Kwasik \cite{ChenKwasik2007a} have used Seiberg-Witten theory to show that a finite group acting homologically trivially on a \emph{symplectic} $4$-manifold with nonzero signature, $c_{1}^{2}= 0$, and $b_{2}^{+}\ge 2$ (e.g. a K3 surface) must act trivially. Chen and Kwasik \cite{ChenKwasik2008} have also found examples of homotopy K3-surfaces that do not admit smooth actions of certain groups that do act on the standard K3-surface, by examining the action of the group permuting the Seiberg-Witten basic classes. } \subsubsection{Other building block manifolds} $E_{8}$: Edmonds \cite{Edmonds1997} analyzed the automorphisms of the $E_{8}$ lattice and determined which ones could be realized by locally linear periodic maps. $S^{2}\times S^{2}$: Wilczy\'nski \cite{Wilczynski1994} showed that there exist inequivalent (but stably equivalent) actions that induce the same action on homology. McCooey \cite{McCooey2007a} has characterized the groups that can act pseudofreely, under the assumption of local linearity, as standard subgroups of the normalizer of $O(3)\times O(3)$ in $O(6)$. $E_{4m}$: This is a family of definite intersection pairings for $4$-manifolds, that are spin if and only if $m$ is even. Jastrzebowski \cite{Jastrzebowski1995} determined which manifolds admit locally linear involutions with exactly two isolated fixed points, corresponding to the case where the second homology group is a free $\mathbb{Z}[C_{2}]$-module.
Combining his algebraic analysis with the results of Edmonds \cite{Edmonds1989}, one concludes that a closed, simply connected topological $4$-manifold with intersection pairing $E_{4m}$ admits a locally linear involution if and only if $m$ is odd and the Kirby-Siebenmann triangulation obstruction vanishes. In particular, the $E_{16}$ $4$-manifold admits no locally linear involution, despite having a vanishing triangulation obstruction. \subsection{Conjugation Spaces} Hambleton and Hausmann \cite{HambletonHausmann2011} have studied these special actions of $C_{2}$ on connected 4-manifolds, achieving a very satisfactory classification. The definition is too technical to give here, but implies that the fixed point set is a connected surface and the orbit space is a mod 2 homology 4-sphere. Indeed, their main theorem shows that the fixed point set and orbit space together give a complete classification of these actions. The classical examples of such actions come from complex conjugation on ${\mathbb{C}} P^{2}$ and $S^{2}\times S^{2}$. \subsection{Permutation Representations} By results of Edmonds \cite{Edmonds1989}, interpreting the Borel spectral sequence in equivariant cohomology in dimension $4$, non-permutation representations on the second homology of simply connected $4$--manifolds correspond to fixed surfaces of positive genus. \begin{question} For a $4$-manifold $M$, what representations on $H_{2}(M)$ are induced by group actions? \end{question} Of course the representation must preserve the intersection form. \begin{problem} Show that an action of a finite group on a closed, simply connected 4-manifold, with definite intersection form, must induce a signed permutation representation on $H_{2}$. \end{problem} By Donaldson's theorem, the intersection form of a smooth, positive definite $4$-manifold is diagonalizable, and it follows that all automorphism groups yield signed permutation representations automatically. Therefore this is only a question for topological $4$-manifolds. A detailed analysis of $\text{Aut} E_{8}$, where non-permutation representations arise algebraically, showed that this problem has an affirmative answer in this one case. Every integral representation of $C_{2}$ is automatically a signed permutation representation, so the answer is also ``yes'' for this group. In a related vein, work in Edmonds \cite{Edmonds2005} produces a $C_{25}$ action that is pseudofree but not semifree, while inducing a permutation representation. But work of Hambleton and Tanase \cite{HambletonTanase2004} has shown that a \emph{smooth} pseudofree action, on a positive definite $4$--manifold, is semifree, via ASD Yang-Mills gauge theory. \section{Non-simply connected $4$-manifolds} We focus on just a few questions that the author finds of interest. \subsection{Particular $4$--manifolds} One of the simplest non-simply connected $4$--manifolds is $S^{1}\times S^{3}$. \begin{question} What is the classification of group actions on $S^{1}\times S^{3}$? \end{question} Jahren and Kwasik \cite{JahrenKwasik2011} have given a thorough study of free involutions on $S^{1}\times S^{3}$. {\color{black}There are four possible homotopy types for the quotient manifolds. In what is perhaps the most interesting case the quotient is homotopy equivalent to $\mathbb{R}P^{4}\# \mathbb{R}P^{4}$, and there are infinitely many quotients up to homeomorphism. See also Brookman, Davis, and Khan \cite{BrookmanDavisKhan2007} for the latter case and its higher-dimensional analog.
The case of free actions of $C_{p}$, $p$ an odd prime (or just odd and square free), has its own special subtleties and has recently been studied by Khan \cite{KhanArXiv2014}. In particular, Khan showed the existence of non-standard actions. I am unaware of any work on non-free actions on $S^{1}\times S^{3}$. In this (orientation preserving) case the fixed point set should be a $2$--torus. } It makes sense to study the rich family of $4$--manifolds that can be expressed as bundles. In particular we suggest studying surface bundles. \begin{problem} Study group actions on surface bundles over surfaces. \end{problem} If both the base and fiber surface have non-positive Euler characteristic, then these manifolds are aspherical, having trivial higher homotopy groups. This leads to the next set of considerations. \subsection{Asymmetric $4$-manifolds} Aspherical manifolds in all dimensions figured prominently in the work of Conner-Raymond in their search for manifolds with no compact groups of symmetries. \begin{question} Can a manifold have no nontrivial finite groups of symmetries? \end{question} The first examples of such asymmetric manifolds, due to Bloomberg \cite{Bloomberg1975}, were connected sums of aspherical $4$-manifolds that Conner and Raymond \cite{ConnerRaymond1972} had shown had the property that their only finite group actions were free. Subsequently asymmetric, aspherical $n$-manifolds were produced in a few higher dimensions ($7, 11, 16, 22, 29, 37$) by Conner, Raymond, and Weinberger \cite{ConnerRaymondWeinberger1972}, and in dimension $3$ by Raymond and Tollefson \cite{RaymondTollefson1976,RaymondTollefson1982}. Schultz \cite{Schultz1981} produced infinitely many $n$-manifolds, for any $n\ge 3$, that have no nontrivial finite groups of symmetries, among the class of hypertoral manifolds (admitting a degree one map to the $n$-torus). Initially at least they appear to be connected sums. {\color{black} Subsequently infinitely many asymmetric aspherical $n$-manifolds have been produced in all dimensions $\ge 3$, via hyperbolic geometry. Belolipetsky and Lubotzky \cite{BelolipetskyLubotzky2005} gave a general construction in all dimensions $\ge 2$ of hyperbolic $n$-manifolds with any finite group as isometry group, including the trivial group. (The cases of dimensions 2 and 3 and of orientation-preserving isometries had been handled earlier and/or independently by others.) Since the center of the fundamental group $\pi$ of a hyperbolic manifold is trivial, it then follows from a result of Borel \cite{Borel1983} (pp.\ 57--60) that any finite group acting effectively would inject into the (trivial) outer automorphism group $\text{Out}(\pi)$. Thus one needs to produce aspherical manifolds such that $\text{Out}(\pi)$ is trivial. See Farb and Weinberger \cite{FarbWeinberger2005} for perspectives on Borel's theorem in this context. In the earlier work of Raymond \emph{et al.}, they proceeded in somewhat the same way, but without input from hyperbolic geometry, and produced instead aspherical fibered manifolds with fundamental group having trivial center and torsion-free outer automorphism group. } As a counterpoint we propose the following: \begin{problem} Find a smooth, \underline{simply connected,} closed $4$-manifold with no nontrivial smooth symmetries. \end{problem} This, of course, could be a project for gauge theory.
Work of Edmonds \cite{Edmonds1987} shows that there are no such examples for topological $4$-manifolds: Every simply connected $4$-manifold admits nontrivial topological, locally linear periodic maps. In higher dimensions it remains an important problem, pursued most notably by V.~Puppe \cite{Puppe2007}. \subsection{The equivariant Borel conjecture} Aspherical manifolds also play a prominent role in higher dimensional topological manifold theory under the framework of the Borel and Novikov conjectures. The Borel conjecture states that a homotopy equivalence between two closed aspherical manifolds is homotopic to a homeomorphism. The Borel conjecture has an especially inviting equivariant version, which we state informally, following Weinberger \cite{Weinberger1994}. \begin{question}[Na\"ive Equivariant Borel Conjecture] Is an equivariant homotopy equivalence of aspherical, closed, topological $G$-manifolds $G$-homotopic to an equivariant homeomorphism? \end{question} This na\"ive version of the equivariant Borel conjecture fails in higher dimensions for relatively simple reasons of not being properly stratified, and not providing enough normal information to the fixed point sets. But there are more subtle sorts of reasons as well. See Weinberger \cite{Weinberger1994}. The na\"ive equivariant Borel conjecture fails in dimension 4 for actions with two-dimensional fixed points, which may be connected-summed with an action on the sphere with a knotted fixed point $2$--sphere. \begin{problem} Find a counterexample to the na\"ive equivariant Borel conjecture in dimension four for pseudofree actions. \end{problem} {\color{black} Connolly, Davis, and Khan \cite{ConnollyDavisKhan2014} show that a pseudofree involution on $T^4$ is equivalent to the standard involution given by complex conjugation in each factor. In higher dimensions their work gives a complete, nontrivial, classification of pseudofree involutions on tori. In a follow-up generalization Connolly, Davis, and Khan \cite{ConnollyDavisKhan2015} give a similar classification of pseudofree actions on aspherical $n$-manifolds, $n\ge 4$, whenever there is an appropriate geometric action to use as a model. Many such model actions are constructed by the process of hyperbolization. } As is well-known, every finitely presented group $\pi$ is the fundamental group of a closed orientable $4$-manifold. \begin{problem} If $G$ is a finite group and $\pi$ is a finitely presented group, show that there is a closed, orientable $4$-manifold $M$ with fundamental group $\pi$ on which $G$ acts effectively. \end{problem} Perhaps one could even specify the action of $G$ on $\pi$ by a given homomorphism $G\to \text{Out}\, \pi$. \section{Gauge-theoretic applications to group actions}\label{sec:gauge} Although the main focus of these notes is topological manifolds, we collect together here some of the main results achieved using gauge-theoretic techniques, including those already briefly mentioned in earlier sections. In general the focus here is on proving that certain kinds of smooth actions do not exist and, especially, that certain locally linear actions on a smooth manifold are not equivalent to smooth actions. One word of clarification is appropriate here. Many--perhaps all--smooth $4$-manifolds admit more than one smooth structure. It is one thing to assert that a certain smooth manifold (such as the standard sphere $S^{4}$ or a standard $K3$ surface) does not admit a smooth action of a certain type.
It is something different to say that there is no smooth structure at all on that manifold for which a certain type of action is smooth. \subsection{Yang-Mills moduli spaces} Applications are based on attempting to do as much as possible of Donaldson's construction of an instanton number one moduli space of self-dual (or anti-self-dual) connections equivariantly, producing a $5$-manifold with ideal boundary the given $4$-manifold. One key step causes trouble: applying equivariant transversality to produce an equivariantly perturbed, smooth moduli space. In general, equivariant transversality meets obstructions. An explicit example of Fintushel (described in Hambleton and Lee \cite{HambletonLee1992}) shows that nontrivial obstructions can arise in this context. Nonetheless there were successes, in which one concentrated on the invariant ASD connections, and only a little bit of normal data. The primary success is with smooth group actions on $4$-manifolds with definite (or trivial) intersection pairing. Furuta \cite{Furuta1989} showed that there are no smooth one-fixed-point actions on (homology) $4$-spheres. Roughly speaking, one shows that generically the space of instanton number one $G$-invariant connections forms a $1$--manifold with an end compactification whose boundary can be identified with the fixed points of the original group action. But a compact $1$--manifold cannot have just one boundary point. Braam and Matic \cite{BraamMatic1993} showed that smooth actions on (homology) $4$-spheres have the same representations at all fixed points. Although they had claimed the result much earlier, it took some years to produce an accepted proof, because of the difficulties with equivariant transversality mentioned above. And in the meantime Hambleton and Lee \cite{HambletonLee1992} gave a proof using the machinery they developed. One can assume there are two isolated fixed points. In this case they were able again to study the space of instanton number one $G$-invariant connections and carry along just enough normal data to see that the normal representations must be equivalent. Both of these were soon followed by more traditional, non-gauge-theoretic proofs, as discussed earlier in this paper. Other work on (anti-) self-dual moduli spaces and group actions was carried out by Wang \cite{Wang1993} and Cho \cite{Cho1990,Cho1991}, who studied the moduli space of invariant connections for manifolds with a smooth involution and with $C_{2}^{k}$ actions, respectively. Hambleton and Lee \cite{HambletonLee1992} developed an alternative notion of transversality (``equivariant polynomial general position'') via which they were also able to give a gauge-theoretic proof of the result of Edmonds and Ewing \cite{EdmondsEwing1989} on the fixed point data of semifree, pseudofree actions on $\mathbb{C}P^{2}$ and some related matters, also accessible by traditional methods. They also extended the known results beyond what had been accomplished using less powerful techniques. More recently, Hambleton and Tanase \cite{HambletonTanase2004} applied the approach of Hambleton and Lee \cite{HambletonLee1992} to prove certain results about smooth actions that are actually false for locally linear actions. They prove that a smooth action on a connected sum of copies of $\mathbb{C}P^{2}$ has the fixed point data of an equivariant connected sum of actions, which examples of Edmonds and Ewing \cite{EdmondsEwing1992} show need not be true in the locally linear case.
They also show that a smooth action on such a manifold that is pseudofree must be semifree. Again an example of Edmonds \cite{Edmonds2005} shows that this is false in general in the locally linear case. \subsection{Seiberg-Witten theory} In the 1990's much attention shifted from Yang-Mills theory to Seiberg-Witten theory, where the moduli spaces in question are generally compact, thus avoiding the thorniest issues associated with Donaldson's theory. As for group actions, however, results have been slow in coming. The fundamental difficulty of a lack of equivariant transversality is still there. In this context the primary successes have been for smooth group actions on spin $4$-manifolds. Wang \cite{Wang1995} gave an early application of Seiberg-Witten invariants when he showed that the quotient of a K3 surface by a free antiholomorphic involution has vanishing Seiberg-Witten invariants. A major success of Seiberg-Witten invariants was Furuta's famous inequality \cite{Furuta2001} \[ b_{2}^{+}(X) \ge 1- \frac{\sigma(X)}{8}\] for any smooth, closed, spin $4$-manifold $X$ oriented so that $\sigma(X)\le 0$, with $b_{1}(X)=0$ and $b_{2}^{+}(X)>0$; equivalently, $b_{2}(X)\ge \tfrac{10}{8}|\sigma(X)|+2$, the ``$10/8$ theorem''. While Furuta's theorem is not itself a result about group actions, a key point in the proof is the existence of a certain involution on the moduli space of Seiberg-Witten monopoles. One of the earliest applications of Seiberg-Witten invariants to more general smooth group actions came from Bryan \cite{Bryan1998}, in which, among other things, he found restrictions on homology representations of smooth involutions (and, more generally, groups of the form $C_{2^{k}}$) on a K3 surface. The approach to the use of the Seiberg-Witten equations was modeled in part on the result of Furuta, and in fact gives an equivariant analog of Furuta's inequality. \begin{thm}[Bryan \cite{Bryan1998}] Let $X$ be a smooth $4$-manifold with $b_{1}=0$, oriented so that $\sigma(X)\le 0$. Assume $X$ is spin and admits a smooth action of $C_{2^{p}}$ preserving the spin structure and of odd type. Then under suitable nondegeneracy assumptions $$b_2^+(X)\geq p+1-\frac{\sigma(X)}{8}.$$ \end{thm} A simple application is the following. \begin{thm}[Bryan \cite{Bryan1998}] If $C_{2}$ acts smoothly on a K3 surface with isolated fixed points, then $b_{2}^{+}(K3/C_{2})=3$. (In particular, $C_{2}$ must act trivially on the positive definite part of $H_{2}(K3;\mathbb{Q})$.) \end{thm} About the same time, Ue \cite{Ue1998} used Seiberg-Witten theory to construct infinitely many smooth free actions on a closed smooth 4-manifold such that their orbit spaces are pairwise homeomorphic, but not diffeomorphic. \begin{thm}[Ue \cite{Ue1998}] For any nontrivial finite group $G$ there exists a smooth 4-manifold that has infinitely many free $G$-actions with the property that their orbit spaces are homeomorphic but mutually nondiffeomorphic. \end{thm} In another direction, Kiyono \cite{Kiyono2008} used these ideas to construct nonsmoothable (with respect to any smooth structure), locally linear, pseudofree actions on suitable simply connected spin $4$-manifolds. The current version of his result applies to all but $S^{4}$ and $S^{2}\times S^{2}$. The existence of the actions is an application of work of Edmonds and Ewing \cite{EdmondsEwing1992}. \begin{thm}[Kiyono \cite{Kiyono2008}] Let $X$ be a closed, simply connected, spin topological $4$-manifold not homeomorphic to either $S^{4}$ or $S^{2} \times S^{2}$.
Then, for any sufficiently large prime number $p$, there exists a homologically trivial, pseudofree, locally linear action of $C_{p}$ on $X$ which is nonsmoothable with respect to any smooth structure on $X$. \end{thm} \begin{problem} Find an exotic locally linear, pseudofree action on $S^{2}\times S^{2}$. One should seek both actions not equivalent to standard actions and also non-smoothable actions. \end{problem} Fang \cite{Fang1998,Fang2001} independently gave results very similar to Bryan's under slightly weaker hypotheses. He also found the following mod $p$ vanishing result for Seiberg-Witten invariants of manifolds with smooth $C_{p}$ actions. \begin{thm}[Fang \cite{Fang1998}] Suppose $X$ is a closed, smooth $4$-manifold with $b_1(X) =0$, $b_2^+ (X) >1$, and that $C_p$ acts trivially on $H^{2,+} (X, {\mathbb{R}})$. Then for any $C_p$-equivariant Spin${}^c$-structure $L$ on $X$, the Seiberg-Witten invariant satisfies ${\rm SW}(L) \equiv 0\bmod p$, provided $k_i \le \frac12(b_2^+ -1)$ for $i=0,1, \cdots ,p-1$. \end{thm} In the result above, the integers $k_{i}=m_{i}-n_{i}$, where $m_{i}$ and $n_{i}$ denote the dimensions of the $\omega^{i}$ eigenspaces of the linear $C_{p}$ actions on $\ker D_{A}$ and $\text{coker }D_{A}$, where $D_{A}$ denotes the Dirac operator corresponding to $L$ and $\omega=\exp(2\pi i/p)$. J.~H. Kim \cite{Kim2000} also generalized Bryan's result. In particular, he developed the analog of Bryan's theorem for actions of $C_{2^{p}}$ of \emph{even} type, with the same inequality as a conclusion. Both Lee and Li \cite{LeeLi2001} and Bohr \cite{Bohr2002} extended the Bryan results to actions of $C_{2^{p}}$ on non-spin $4$-manifolds with even intersection pairing. The key point is to show that one can lift the action to an action on a covering space that is spin. In a series of papers Cho \cite{Cho1999a,Cho1999b,Cho2007} and Cho-Hong \cite{ChoHong2002, ChoHong2003,ChoHong2007} also investigated conditions imposed on the Seiberg-Witten invariants by a smooth finite group action. { \color{black} The inequality of Hamilton, mentioned at the end of Section \ref{subsec:existence}, uses nonzero Seiberg-Witten classes to bound the number of $2$-spheres in the fixed point set of a smooth action. See Hamilton \cite{Hamilton2016} for applications of the inequality. } Recent work of Liu and Nakamura \cite{LiuNakamura2007a} studies homologically nontrivial $C_{3}$ actions on a $K3$ surface. Applying work of Edmonds and Ewing \cite{EdmondsEwing1992}, they give a classification of locally linear actions and then apply Fang's mod $p$ vanishing result to show that certain of these actions cannot be realized smoothly. Chen and Kwasik \cite{ChenKwasik2011} have also investigated the relationship between symplectic symmetries and the smooth structure on a given symplectic homotopy K3 surface, showing that, in a certain sense, an effective action by a large group forces the symplectic homotopy K3 surface to be minimally exotic. Consult the original paper for more statements. Recently, Fintushel, Stern, and Sunukjian \cite{FintushelSternSunukjian2009} have produced infinite families of exotic cyclic group actions on many simply connected smooth $4$-manifolds, all of which are equivariantly homeomorphic but not equivariantly diffeomorphic.
The $4$-manifolds in question are constructed as branched cyclic covers of certain simply connected $4$-manifolds $Y$, branched along a smoothly embedded surface $\Sigma$ with the property that the pair $(Y,\Sigma)$ has a nontrivial relative Seiberg-Witten invariant. The branched cyclic cover $X$ has nontrivial Seiberg-Witten invariants, and these are the first such families of examples on manifolds with nontrivial Seiberg-Witten invariants. The examples are produced using a variant of the first two authors' ``rim surgery''. These examples are also the first that effectively deal with $2$-dimensional fixed point sets while producing exotic actions, aside from local knotting phenomena arising from counterexamples to the $4$-dimensional Smith Conjecture. In a similar vein, Kim and Ruberman \cite{KimRuberman2011} have shown how to construct some exotic actions of the cyclic group $C_{mn}=C_{m}\times C_{n}$, where $(m,n)=1$. The actions have the property that the fixed point sets of the two factors are both surfaces intersecting transversely at the isolated global fixed points. The actions are produced by rim surgery starting from standard actions. Topologically the new actions are equivalent to the originals. But smoothly they are distinct, even though the actions of the factors $C_{m}$ and $C_{n}$ are smoothly equivalent to the originals. {\color{black} It is an interesting question to study possible bounds on the order of a finite group acting smoothly on a $4$-manifold that does not admit a smooth circle action. Chen \cite{Chen2011} has found such bounds for symplectic or holomorphic actions under suitable technical assumptions. Recently, however, Chen \cite{Chen2014} has used the knot surgery of Fintushel and Stern to construct examples to show that there can be no such bound in the smooth case.} \appendix \section{Update of the Kirby Problem List} Here we briefly record the handful of problems related to group actions on $4$-manifolds that are to be found in the Kirby Problem List, originating at the 1976 Stanford Topology Conference \cite{Kirby1978} and previously updated on the occasion of the 1993 Georgia International Topology Conference \cite{Kirby1993}. \begin{problem}[Kirby list 4.55]\label{kirby:4.55} Describe the Fintushel-Stern involution on $S^{4}$ in equations. \textbf{Update (1995):} No progress. \end{problem} \textbf{Update:} No further progress. \begin{problem}[Kirby List 4.56, P. Melvin]\label{kirby:4.56} Let $M$ be a smooth closed orientable $4$-manifold which supports an effective action of a compact connected Lie group $G$. Suppose that $\pi_{1}M$ is a free group. \textbf{Question:} Is $M$ diffeomorphic to a connected sum of copies of $S^{1}\times S^{3}$, $S^{2}\times S^{2}$, and $S^{2}{\otimes}S^{2}$? \textbf{Remarks:} The answer to both questions is yes for $G\ne S^{1}\text{ or } T^{2}$; also for $G=T^{2}$ provided the orbit space of the action (a compact orientable surface) is not a disc with $\ge 2$ holes. \textbf{Update (1995):} Still open. \end{problem} \textbf{Update:} No further progress. \begin{problem}[Kirby list 4.124, Edmonds]\label{kirby:4.124} (A) Does the fake $\mathbb{CP}^{2}$ (homotopy equivalent but not homeomorphic to $\mathbb{CP}^{2}$) admit a topological involution?\newline (B) Does the K3 surface admit a periodic diffeomorphism acting trivially on homology?\newline \textbf{Remarks:} The answer to (B) is no for period 2 [Ruberman 1995, Matumoto 1992]. The question is open for odd period. \end{problem} \textbf{Update:}\label{prob:K3update} (A) No known progress.
(B) Jin Hong Kim \cite{Kim2008} has used Seiberg-Witten invariants to show that a smooth homotopy K3 surface does not admit a smooth, homologically trivial, action of $C_{3}$. Experts have expressed doubts, however, about parts of the argument. An alternative proof would be prudent and welcome. \section{Update of problems from the 1984 Transformation Groups Conference} Here we briefly comment on the status of problems related to $4$--manifolds from the 1984 Boulder conference, edited by Schultz \cite{Schultz1985}. \begin{problem}[6.10] What algebraic properties must a knotted $2$--sphere $K$ in $S^{4}$ satisfy if it is fixed (or invariant) under a $C_{m}$ action? \end{problem} \textbf{Update:} No particular progress known. \begin{problem}[6.11] What is the complete topological classification of $4$--manifolds with effective $T^{2}$ actions? \end{problem} \textbf{Update:} No further progress beyond the work of Pao \cite{Pao1977a,Pao1977b}. It is the cases with nonsimply connected orbit space that are of interest. \begin{problem}[6.12] If $S^{1}$ acts smoothly on a homotopy $4$--sphere $\Sigma$, is $\Sigma$ diffeomorphic to $S^{4}$? \end{problem} \textbf{Update:} The answer is yes by the early work of Fintushel \cite{Fintushel1976} and the positive resolution of the classical Poincar\'e Conjecture by Perelman. \begin{problem}[6.13] Does there exist an exotic $\mathbb{R}^{4}$ with no smooth effective finite groups actions? \end{problem} \textbf{Update:} Taylor \cite{Taylor1999} has constructed examples of fake $\mathbb{R}^{4}$s on which the circle group cannot act smoothly. Taylor also notes that at least some fake $\mathbb{R}^{4}$s do have nontrivial smooth symmetries. \begin{problem}[6.14] Are there exotic free $C_{p}$ actions on $S^{1}\times S^{3}$, where $p$ is an odd prime? \end{problem} \textbf{Update:} {\color{black}Khan \cite{KhanArXiv2014} has developed a surgery-theoretic classification of actions on $S^{1}\times S^{n}$ whose existence portion, specialized to dimension $4$, shows that such exotic actions do exist.} \begin{problem}[6.15 (Cappell)] The Cappell-Shaneson example of an exotic smooth $\mathbb{R}P^{4}$ defines a differentiably nonlinear involution on some homotopy $4$--sphere $\Sigma$. Is $\Sigma^{4}$ an exotic homotopy $4$--sphere? \end{problem} \textbf{Update:} Gompf \cite{Gompf1991} showed that the simplest one of the Cappell-Shaneson examples does yield an exotic involution on the standard smooth $4$-sphere. More recently Akbulut \cite{Akbulut2010} has shown that the members of an infinite family of the Cappell-Shaneson spheres are all diffeomorphic to the standard sphere by showing that they are diffeomorphic to the simplest one, previously dealt with by Gompf. 
{\color{black}Subsequently Gompf \cite{Gompf2010} has shown that a yet larger family of such examples also yields standard $4$-spheres.} \nocite{AitchisonRubinstein1984,Akbulut2010,AkbulutKirby1979,AlldayPuppe1993,AnvariHambleton2014,ArtebaniSarti2008,Assadi1990,Baldridge2001,Baldridge2003,Baldridge2004,Bauer1988,BauerWilczynski1996,BelolipetskyLubotzky2005,Bloomberg1975,Bohr2002,Borel1983,BraamMatic1993,Bredon1972,BrookmanDavisKhan2007,Bryan1998,BuchdahlKwasikSchultz1990,CappellShaneson1976,CappellShaneson1979,Carlsson1990,Chen1997,Chen2010,Chen2011,Chen2014,ChenArXiv2011,ChenArXiv2013,ChenArXiv2013a,ChenKwasik2007a,ChenKwasik2008,ChenKwasik2011,ChenKwasikSchultzArXiv2014,Cho1990,Cho1991,Cho1999a,Cho1999b,Cho2007,ChoHong2002,ChoHong2003,ChoHong2007,ConnerRaymond1972,ConnerRaymondWeinberger1972,ConnollyDavisKhan2014,ConnollyDavisKhan2015,DeMichelis1989,DeMichelis1991,deRham1965,Edmonds1981,Edmonds1985,Edmonds1987,Edmonds1988,Edmonds1989,Edmonds1997,Edmonds2005,Edmonds2010,EdmondsEwing1989,EdmondsEwing1992,Fang1998,Fang2001,FarbWeinberger2005,Fintushel1976,Fintushel1977,Fintushel1978,FintushelLawson1986,FintushelPao1977,FintushelStern1981,FintushelStern1984,FintushelStern1985,FintushelSternSunukjian2009,Freedman1984,FreedmanQuinn1990,FukumotoFuruta2000,Furuta1989,Furuta2001,GarbagnatiSarti2007,Giffen1966,Godinho2007,Gompf1991,Gompf2010,Gordon1974,Gordon1986,GuazziMecchiaZimmermann2011,Habegger1982,Hambleton2011,Hambleton2015,HambletonHausmann2011,HambletonKreck1988,HambletonKreck1990,HambletonKreck1993a,HambletonKreck1993b,HambletonKreck1993c,HambletonKreckTeichner1994,HambletonLee1988,HambletonLee1992,HambletonLee1995,HambletonLeeMadsen1989,HambletonMadsen1986,HambletonTanase2004,Hamilton2016,HaynesKwasikMastSchultz2002,Herrera2006,Hsiang1964,Huck1995,HuckPuppe1998,JahrenKwasik2006,JahrenKwasik2011,Jastrzebowski1995,KhanArXiv2014,Kim1993,Kim2000,Kim2007,Kim2008,Kim2011,KimRuberman2011,Kirby1978,Kirby1993,KirbyTaylor2001,Kiyono2008,KiyonoLiu2006,Kottschick2006,Krushkal2005,Kulkarni1982,Kwasik1986a,Kwasik1986b,KwasikLawson1993,KwasikSchultz1988,KwasikSchultz1989,KwasikSchultz1990,KwasikSchultz1991,KwasikSchultz1992,KwasikSchultz1994,KwasikSchultz1995,KwasikSchultz1996,KwasikSchultz1997,KwasikSchultz1999,KwasikVogel1986a,KwasikVogel1986b,Lawson1976,Lawson1993,Lawson1994,LeeLi2001,LiLiu2006,LiLiu2007,LiLiu2008,Liu2000,Liu2005,LiuNakamura2007a,LiuNakamura2008,LongReid2005,Matumoto1986,Matumoto1992,Mazur1961,Mazur1962,Mazur1964,McCooey2002a,McCooey2002b,McCooey2007a,McCooey2007b,McCooey2007c,MecchiaZimmermann2006,MecchiaZimmermann2009,MecchiaZimmermann2011,Melvin1981,Melvin1982,MelvinParker1986,Morimoto1988,Mukai1988,MundetiRieraArXiv2015,Nagami2000,Nagami2001,Nakamura2002,Nakamura2006,Nakamura2009,Nakamura2014,Nikulin1979,Oh1983,Ono1993,Orlik1973,Orlik1982,OrlikRaymond1970,OrlikRaymond1974,Pao1977a,Pao1977b,Pao1978,Park2005,Plotnick1982,Poenaru1960,Puppe2007,RaymondTollefson1976,RaymondTollefson1982,Richardson1961,RuanWang2000,Ruberman1984,Ruberman1995,Ruberman1996,Schultz1980,Schultz1981,Schultz1985,Smith1960,Sumners1975,Sung2015,SungarXiv2011,Szymik2012,Taylor1999,Ue1994,Ue1996,Ue1998,VinogradovKuselman1972,Wang1993,Wang1995,Weinberger1994,Weintraub1975,Weintraub1977,Weintraub1989,Wilczynski1987,Wilczynski1988,Wilczynski1990,Wilczynski1991,Wilczynski1994,WuLiu2014,Yoshida1978,Zeeman1965} \end{document}
\begin{document} \title{Defense Against Adversarial Swarms with Parameter Uncertainty} \author{Claire~Walton, Isaac~Kaminer, Qi~Gong, Abram~H.~Clark and~Theodoros~Tsatsanifos \thanks{Claire Walton is with the Departments of Mathematics and Electrical \& Computer Engineering, University of Texas at San Antonio, San Antonio, TX 78249 USA (e-mail:[email protected]).} \thanks{Isaac Kaminer and Theodoros Tsatsanifos are with the Department of Mechanical and Aerospace Engineering, Naval Postgraduate School, Monterey, CA, 93943 USA (e-mail:[email protected];[email protected]).} \thanks{Qi Gong is with the Department of Applied Mathematics, University of California Santa Cruz, Santa Cruz, CA 95064 USA (e-mail:[email protected]).} \thanks{Abe Clark is with the Department of Physics, Naval Postgraduate School (e-mail:[email protected]).} \thanks{Manuscript received on February 28, 2020.}} \maketitle \begin{abstract} This paper addresses the problem of optimal defense of a High Value Unit against a large-scale swarm attack. We show that the problem can be cast in the framework of uncertain parameter optimal control and derive a consistency result for the dual problem of this framework. We show that the dual can be computed numerically and apply these numerical results to derive optimal defender strategies against a 100-agent swarm attack. \end{abstract} \begin{IEEEkeywords} optimal control, nonlinear control, numerical methods, swarming. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} \IEEEPARstart{S}{warms} are characterized by large numbers of agents which act individually, yet produce collective, herd-like behaviors. Implementing cooperating strategies for a large-scale swarm is a technical challenge that can be viewed from the ``insider's perspective.'' It assumes inside control over the swarm's operating algorithms. However, as large-scale `swarm' systems of autonomous systems become achievable---such as those proposed for autonomous driving, UAV package delivery, and military applications---interactions with swarms outside our direct control become another challenge. This generates its own ``outsider's perspective'' issues. In this paper, we look at the specific challenge of protecting an asset against an adversarial swarm. Autonomous defensive agents are tasked with protecting a High Value Unit (HVU) from an incoming swarm attack. The defenders do not fully know the cooperating strategy employed by the adversarial swarm. Nevertheless, the task of the defenders is to maximize the probability of survival of the HVU against an attack by such a swarm. This challenge raises many issues---for instance, how to search for the swarm \cite{ifac2014}, how to observe and infer swarm operating algorithms \cite{JGCD2019}, and how to best defend against the swarm given algorithm unknowns and only limited, indirect control through external means. In this paper we restrict ourselves to the last issue. However, these problems share multiple technical challenges. The preliminary approach we apply in this paper demonstrates some basic methods which we hope will provoke development of more sophisticated tools. For objectives achieved via external control of the swarm, several features of swarm behavior must be characterized: capturing the dynamic nature of the swarm, tracking the collective risk profile created by a swarm, and engaging with a swarm via dynamic inputs such as autonomous defenders.
The many modeling layers create a challenge for generating an effective response to the swarm, as model uncertainty and model error are almost certain. In this paper, we look at several dynamic systems where the network structure is determined by parameters. These parameters set neighborhood relations and interaction rules. Additional parameters establish defender input and swarm risk. We consider the generation of optimal defense strategies given uncertainty in parameter values. We demonstrate that small deviations in parameter values can have catastrophic effects on defense trajectories optimized without taking error into account. We then demonstrate the contrasting robustness of applying an uncertain parameter optimal control framework instead of optimizing with nominal values. The robustness against these parameter variations suggests that refined parameter knowledge may not be necessary given appropriate computational tools. These computational tools---and the modeling of the high-dimensional swarm itself---are expensive. To assist with this issue, we provide dual conditions for this problem in the form of a Pontryagin minimum principle and prove the consistency of these conditions for the numerical algorithm. These dual conditions can thus be computed from the numerical solution of the computational method and provide a tool for solution verification and parameter sensitivity analysis. The structure of this paper is as follows. Section \ref{modeling section} provides examples of dynamic swarming models and extensions for defensive interactions. Section \ref{ProblemFormulation} discusses optimization challenges and describes a general uncertain parameter optimal control framework with which this problem can be addressed. Section \ref{adjointconvergencesection} provides a proof of the consistency of the dual problem for this control framework, which expands on the results initially presented in the conference paper \cite{consistency_paper2}. Section \ref{numerical examples section} gives an example numerical implementation that demonstrates optimal defense against a large-scale swarm of $100$ agents. The final section discusses results and future work. \section{Modeling Adversarial Swarms}\label{modeling section} \subsection{Cooperative Swarm Models} \label{swarm models} The literature on the design of swarm strategies which produce coherent, stable collective behavior has become vast. A quick review of the literature points to two main trends/categories in swarm behavior design. The first one relies on dynamic modeling of the agents and potential functions to control their behavior (see \cite{TR2018,mehmood} and references therein). The second trend uses rules to describe agents' motion and local rule-based algorithms to control them \cite{SwarmIntelligence}, \cite{SwarmIntelligence2}. We present two examples of dynamic swarming strategies from the literature. These examples are illustrative of the forces considered in many swarming models: \begin{itemize} \item collision avoidance between swarm members \item alignment forces between neighboring swarm members \item stabilizing forces \end{itemize} These intra-swarm goals are aggregated to provide a swarm control law, which we will refer to as $F_S$, to each swarm agent. Both example models in this paper share the same double integrator form with respect to this control law. For $n$ swarm agents, dynamics are defined by \begin{equation} \begin{aligned} & \ddot x_i = u_i,
~~ i = 1, \ldots, n, \end{aligned} \end{equation} \begin{equation} u_i = F_S(x_i, \dot{x}_i, \forall j \neq i: x_j, \dot{x}_j | \theta). \end{equation} \subsubsection{Example Model 1: Virtual Body Artificial Potential, \cite{leonard2001virtual,ogren2004cooperative} } In this model, swarm agents track to a virtual body (or bodies) guiding their course while also reacting to intra-swarm forces of collision avoidance and group cohesion. The input $u_i$ is the sum of intra-swarm forces, virtual body tracking, and a velocity dampening term. In addition, in this adversarial scenario, swarm agents are influenced to avoid intruding defense agents. The intra-swarm force between two swarm agents has magnitude $f_I$ and is a gradient of an artificial potential $V_I$. Let \begin{equation} x_{ij} = x_i - x_j. \end{equation} The artificial potential $V_I$ depends on the distance $||x_{ij}||$ between swarm agents $i$ and $j$. The artificial potential $V_I$ is defined as: \begin{equation} V_I = \left\{ \begin{gathered} \alpha \left( \ln \left( ||x_{ij}|| \right)+\frac{d_0}{||x_{ij}||} \right), {~~~ 0 < ||x_{ij}|| < d_1} \\ \alpha \left( \ln(d_1)+\frac{d_0}{d_1} \right), {~~~~~~~~~~~~~||x_{ij}|| \geq d_1} \\ \end{gathered} \right. \label{eq:V_I}\\ \end{equation} where $\alpha$ is a scalar control gain, $d_0$ and $d_1$ are scalar constants for distance ranges. Then the magnitude of interaction force is given by \begin{equation} f_I = \left\{ \begin{gathered} \nabla_{||x_{ij}||} V_I, {~~~ 0 < ||x_{ij}|| < d_1} \\ 0, {~~~~~~~~~~~~~||x_{ij}|| \geq d_1} \\ \end{gathered} \right. \label{eq:f_I}\\ \end{equation} The swarm body is guided by `virtual leaders', non-corporeal reference trajectories which lead the swarm. We assign a potential $V_h$ on a given swarm agent $i$ associated with the $k$-th virtual leader, defined with the distance $||h_{ik}||$ between the swarm agent $i$ and leader $k$. Mirroring the parameters $\alpha$, $d_0$, and $d_1$ defining $V_I$, we assign $V_h$ the parameters $\alpha_h$, $h_0$, and $h_1$. An additional dissipative force $ f_{v_i}$ is included for stability. The control law $u_i$ for the vehicle $i$ associated with $m$ defenders is given by \begin{equation} \begin{aligned} u_i & = -\sum_{j \neq i}^{n} \nabla _{x_i} V_I(x_{ij}) - \sum_{k = 1}^{m} \nabla _{x_i} V_h(h_{ik}) + f_{v_i} \\ &= -\sum_{j \neq i}^{n} \frac{f_I(x_{ij})}{||x_{ij}||} x_{ij} - \sum_{k = 1}^{m} \frac{f_h(h_{ik})}{||h_{ik}||} h_{ik} + f_{v_i}. \end{aligned} \label{eq:u_i} \end{equation} \subsubsection{Example Model 2: Reynolds Boid Model, \cite{reynolds1987flocks}, \cite{mehmood} } For radius $r$, $j = 1, \dots, N$, define the neighbors of agent $i$ at position $x_i\in \mathbb{R}^n$by the set \begin{equation} \mathcal{N}_i = \{j | j\neq i \wedge \|x_i-x_j \| < r\} \end{equation} Swarm control is designated by three forces. {\it Alignment of velocity vectors:} \begin{equation} f_{al} = -w_{al} \left( \dot{x}_i - \frac{1}{| \mathcal{N}_i |} \sum_{j \in \mathcal{N}_i } \dot{x}_j \right) \end{equation} {\it Cohesion of swarm:} \begin{equation} f_{coh} = -w_{coh} \left( x_i - \frac{1}{| \mathcal{N}_i |} \sum_{j \in \mathcal{N}_i } x_j \right) \end{equation} {\it Separation between agents:} \begin{equation}\label{eqn: reynolds separation} f_{sep} = -w_{sep} \frac{1}{| \mathcal{N}_i |} \left( \sum_{j \in \mathcal{N}_i } \frac{x_j - x_i}{\| x_i - x_j \|} \right) \end{equation} for positive constant parameters $w_{al}$, $w_{coh}$, $w_{sep}$. 
\begin{equation} u_i = f_{al} + f_{coh} + f_{sep} \end{equation} \subsection{Adversarial Swarm Models}\label{defense model} In order to enable adversarial behavior and defense, the inner swarm cooperative forces $F_S$ need to be supplemented by additional forces of exogenous input into the collective. As written, the above cooperative swarming models neither respond to outside agents nor `attack' (swarm towards) a specific target. The review \cite{TR2018} discusses several approaches to adversarial control. Examples include containment strategies modeled after dolphins \cite{Egerstedt}, sheep-dogs \cite{sheepdogs1, sheepdogs2}, and birds of prey \cite{herdingbirds}. In \cite{Schwartz} the authors study interaction between two swarms, one of which can be considered adversarial. In these examples of adversarial swarm control, the mechanism of interaction and defense is provided through the swarm's own pursuit and evasion responses. This indirectly uses the swarm's own response strategy against it, an approach which can be termed `herding.' We summarize a control scheme with HVU target tracking and herding driven by the reactive forces of collision avoidance with the defenders as the following, for HVU states $y_0$ and defender states $y_k$, $k=1, \dots, K$: \begin{align*} u_i =&F_S(x_i, \dot{x}_i, \forall j \neq i: x_j, \dot{x}_j | \theta) &{\color{red} \leftarrow \text{\it intra-swarm}} \\ + &F_{HVU}(x_i, \dot{x}_i, y_0, \dot{y}_0 | \theta) &{\color{red} \leftarrow \text{\it target tracking}} \\ + &F_D(x_i, \dot{x}_i, \forall k \neq i: y_k, \dot{y}_k | \theta) &{\color{red} \leftarrow \text{\it herding}} \numberthis \label{full swarm dynamics} \end{align*} In addition to herding reactions, one can consider more direct additional forces of disruption, to model neutralizing swarm agents and/or physically removing them from the swarm. One form this can take, for example, is removal of agents from the communications network, as considered in \cite{szwaykowska2016collective}. Another approach is taken in \cite{walton2018optimal}, which uses survival probabilities based on damage attrition. Defenders and the attacking swarm engage in mutual damage attrition while the swarm also damages the HVU when in proximity. Probable damage between agents is tracked as damage rates over time, where the rate of damage is based on features such as distance between agents and angle of attack. The damage rate at time $t$ provides the probability of a successful `hit' in the time period $[t, t+\Delta t]$. The probability of agent survival can be modeled based on the aggregate number of hits it takes to incapacitate. Reference \cite{walton2018optimal} provides derivations for multiple possibilities, such as single-shot destruction and $N$-shot destruction. These probabilities take the form of ODEs. Tracking survival probabilities thus adds an additional state to the dynamics of each agent---a survival probability state. \subsubsection{Example Attrition Model: Single-Shot Destruction, \cite{walton2018optimal}} Let $P_0(t)$ be the probability the HVU has survived up to time $t$, $P_k(t)$, $k=1, \dots, K$, the probability defender $k$ has survived, and $Q_j(t)$, $j=1, \dots, N$ the probability swarm attacker $j$ has survived. Let $d_y^{j,k}(x_j(t),y_k(t))$ be the damage the defender $y_k$ inflicts on swarm attacker $x_j$ and let $d_x^{k,j}(y_k(t), x_j(t))$ be the damage the swarm attacker $x_j$ inflicts on the defender $y_k$, with the HVU represented by $k=0$.
Then the survival probabilities for attackers and defenders from single shot destruction are given by the coupled ODEs: \[ \begin{cases} \dot{Q}_j(t)= & \\ -Q_j(t) \sum_{k=1}^K P_k(t) d_y^{j,k}(x_j(t),y_k(t)), & Q_j(0) = 1 \\ \dot{P}_k(t)= & \\ -P_k(t)\sum_{j=1}^N Q_j(t) d_x^{k,j}(y_k(t), x_j(t)), & P_k(0) = 1 \end{cases} \] for $j = 1, \dots, N$, $k=0\dots, K$. \section{Problem Formulation} \label{ProblemFormulation} The above models depend on a large number of parameters. The dynamic swarming model coupled with attrition functions result in over a dozen key parameters, and many more would result from a non-homogeneous swarm. A concern would be that this adds too much model specificity, making optimal defense strategies lack robustness due to sensitivity to the specific set of model parameters. This concern turns out to be justified. When defense strategies are optimized for fixed, nominal parameter values, they display catastrophic failure for small perturbations of certain parameters as can be seen in Figure \ref{nomex_fig}. In fact, the plots included in Figure \ref{nomex_fig} clearly demonstrate that the sensitivity of the cost with respect to the uncertain parameters is highly nonlinear. Thus, generating robust defense strategies requires a more sophisticated formalism introduced next in Section \ref{Uncertain Parameter Optimal Control}. \begin{figure} \caption{Example performance of solutions calculated using nominal values when parameter value is varied. Calculated using values in Section \ref{leonard_numerical_sec} \label{nomex_fig} \end{figure} \subsection{Uncertain Parameter Optimal Control} \label{Uncertain Parameter Optimal Control} The class of problems addressed by the computational algorithm is defined as follows: \\ \\ \noindent \textbf{Problem $\mathbf{P}$.}\ \ Determine the function pair $(x,u)$ with $x \in W_{1,\infty}([0,T]\times \Theta;\mathbb{R}^{n_x})$, $u \in L_\infty([0,T];\mathbb{R}^{n_u})$ that minimizes the cost \small \begin{align}\label{eqn:bobjective} J[x,u] = \! \! \int_\Theta \! \Big[F\left(x(T,\theta),\theta\right) \!+\! \! \int_0^T\! r(x(t,\theta),u(t),t,\theta)dt\Big] d\theta \end{align} \normalsize subject to the dynamics \small \begin{align} \frac{\partial x}{\partial t}(t,\theta) & = f(x(t,\theta),u(t),\theta), \label{eqn:dyn} \end{align} \normalsize initial condition $x(0,\theta) = x_0(\theta)$, and the control constraint $g(u(t)) \leq 0$ for all $t \in [0,T]$. The set $L_\infty([0,T]; \mathbb{R}^{n_u})$ is the set of all essentially bounded functions, $W_{1,\infty}([0,T]\times \Theta;\mathbb{R}^{n_x})$ the Sobolev space of all essentially bounded functions with essentially bounded distributional derivatives, and $F:\mathbb{R}^{n_x} \times \mathbb{R}^{n_\theta} \mapsto \mathbb{R}$, $r:\mathbb{R}^{n_x}\times\mathbb{R}^{n_u}\times\mathbb{R} \times \mathbb{R}^{n_\theta} \mapsto \mathbb{R}$, $g:\mathbb{R}^{n_u} \mapsto \mathbb{R}^{n_g}$. Additional conditions imposed on the state and control space and component functions are specified in Appendix Section \ref{regularity assumptions}. In Problem $\mathbf{P}$, the set $\Theta$ is the domain of a parameter $\theta \in \mathbb{R}^{n_\theta}$. The format of the cost functional is that of the integral over $\Theta$ of a Mayer-Bolza type cost with parameter $\theta$. This parameter can represent a range of values for a feature of the system, such as in ensemble control \cite{ensemble_inhomogeneous}, or a stochastic parameter with a known probability density function. 
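As a purely illustrative instance of Problem $\mathbf{P}$ (a hypothetical scalar example added here only for orientation, not one of the swarm models above), take $n_x = n_u = 1$, $\Theta = [\underline{\theta}, \overline{\theta}]$, dynamics $f(x,u,\theta) = \theta x + u$, running cost $r = u^2$, terminal cost $F = x^2$, and control constraint $g(u) = u^2 - 1 \leq 0$. Problem $\mathbf{P}$ then seeks a single control $u(\cdot)$ minimizing
\begin{align*}
J[x,u] = \int_{\underline{\theta}}^{\overline{\theta}} \Big[ x(T,\theta)^2 + \int_0^T u(t)^2 \, dt \Big] d\theta
\quad \text{subject to} \quad
\frac{\partial x}{\partial t}(t,\theta) = \theta\, x(t,\theta) + u(t), \quad x(0,\theta) = x_0,
\end{align*}
that is, one control must perform acceptably for every admissible growth rate $\theta$ simultaneously. This is precisely the situation of a defender team facing a swarm whose interaction parameters are only known to lie within given ranges.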
For computation of numerical solutions, we introduce an approximation of Problem $\mathbf{P}$, referred to as Problem $\mathbf{P^M}$. Problem $\mathbf{P^M}$ is created by approximating the parameter space, $\Theta$, with a numerical integration scheme. This numerical integration scheme is defined in terms of a finite set of $M$ nodes $\{\theta_i^M\}_{i=1}^{M}$ and an associated set of $M$ weights $\{\alpha_i^M\}_{i = 1}^M \subset \mathbb{R}$ such that \begin{equation}\label{equ:quadrature} \int_{\Theta} h(\theta) d\theta = \lim_{M \to \infty} \sum_{i = 1}^M h(\theta^M_i) \alpha_i^M \end{equation} given certain smoothness assumptions on the function $h$. See Appendix Assumption \ref{numericalschemeassumption} for the formal assumptions. Throughout the paper, $M$ is used to denote the number of nodes used in this approximation of parameter space. For a given set of nodes $\{\theta_i^M\}_{i=1}^{M}$, and control $u(t)$, let $\bar{x}_i^M(t)$, $i=1,\dots,M$, be defined as the solution to the ODE created by the state dynamics of Problem $\mathbf{P}$ evaluated at $\theta_i^M$: \small \begin{align}\label{eqn:barxdyn} \begin{cases} \frac{d\bar{x}_i^M}{dt}(t) = f( \bar{x}_i^M(t),u(t),\theta_i^M) \\ \bar{x}_i^M(0) = x_0(\theta_i^M), \end{cases} \quad i=1,\dots,M. \end{align} \normalsize Let $\bar{X}^M(t) = [ \bar{x}_1^M(t),\ldots, \bar{x}_M^M(t)]$. The system of ODEs defining $\bar{X}^M$ has dimension $n_x\times M$, where $n_x$ is the dimension of the original state space and $M$ is the number of nodes. The numerical integration scheme for parameter space creates an approximate objective functional, defined by: \small \[ \bar{J}^M\hspace{-1pt} [ \bar{X}^M\hspace{-1pt} ,u] \hspace{-2pt} = \qquad \qquad \qquad \qquad \qquad \qquad \qquad \] \begin{equation}\label{eqn:jmobjective} \quad \sum_{i = 1}^M \Big[ F\left( \bar{x}_i^M (T),\theta_i^M \right) +\int^T_0 r( \bar{x}_i^M (t),u(t),t,\theta_i^M )dt\hspace{-1pt} \Big] \alpha_i^M. \end{equation} \normalsize In \cite{walton_IJC}, the consistency of $\mathbf{P^M}$ is proved. This is the property that if optimal solutions to Problem $\mathbf{P^M}$ converge as the number of nodes $M\rightarrow \infty$, they converge to feasible, optimal solutions of Problem $\mathbf{P}$. See \cite{walton_IJC} for detailed proof and assumptions. \subsection{Computational Efficiency} The computation time of the numerical solution to the discretized problem defined in equations (\ref{eqn:barxdyn},\ref{eqn:jmobjective}) will depend on the value of $M$. Ideally, $M$ should be sufficiently small to allow for a fast solution. On the other hand, a value of $M$ that is too small will result in a solution that is not particularly useful, i.e., too far from the optimal. Naturally, the question arises: how far is a particular solution from the optimal? One tool for assessing this lies in computing the Hamiltonian and is addressed in the next section. \section{Consistency of Dual Variables} \label{adjointconvergencesection} The dual variables provide a method to determine the solution of an optimal control problem or a tool to validate a numerically computed solution. For numerical schemes based on direct discretization of the control problem, analyzing the properties of the dual variables and their resultant Hamiltonian may also lead to insight into the validity of the approximation scheme \cite{Hager.00, Qi_covector1}. This could be especially helpful in high-dimensional problems such as swarming, where parsimonious discretization is crucial to computational tractability.
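Since the analysis below works with the semi-discretized quantities of Problem $\mathbf{P^M}$, it may help to first record a minimal computational sketch of equations (\ref{eqn:barxdyn},\ref{eqn:jmobjective}) for a piecewise-constant control. The sketch is illustrative only: the Gauss--Legendre node choice, the forward-Euler time integration, and all function and variable names are simplifying assumptions, not a description of the implementation used for the results in Section \ref{numerical examples section}.
\begin{verbatim}
import numpy as np

def nodes_and_weights(M, theta_lo, theta_hi):
    # Gauss-Legendre nodes/weights rescaled to [theta_lo, theta_hi];
    # any scheme satisfying the quadrature convergence assumption would do.
    x, w = np.polynomial.legendre.leggauss(M)
    theta = 0.5 * (theta_hi - theta_lo) * x + 0.5 * (theta_hi + theta_lo)
    alpha = 0.5 * (theta_hi - theta_lo) * w
    return theta, alpha

def approx_cost(u_samples, theta, alpha, f, r, F, x0, T):
    # Evaluate J^M for a zero-order-hold control on a uniform time grid:
    # one ODE copy (integrated here by forward Euler) per parameter node.
    Nt = len(u_samples)
    dt = T / Nt
    J = 0.0
    for th, al in zip(theta, alpha):
        x, running = x0(th), 0.0
        for k, u in enumerate(u_samples):
            running += r(x, u, k * dt, th) * dt
            x = x + f(x, u, th) * dt
        J += al * (F(x, th) + running)
    return J
\end{verbatim}
Minimizing \texttt{approx\_cost} over the control samples with a generic nonlinear programming routine is one simple way to realize Problem $\mathbf{P^M}$ numerically; the time discretization is a separate choice and is not constrained by the framework.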
Previous work shows the consistency of the primal variables in approximate Problem $\mathbf{P^M}$ to the original parameter uncertainty framework of Problem $\mathbf{P}$. Here we build on that and prove the consistency of the dual problem of Problem $\mathbf{P}$ as well. This theoretical contribution is diagrammed in Figure \ref{covector_partial}. The consistency of the dual problem in parameter space enables approximate computation of the Hamiltonian from numerical solutions. \begin{figure} \caption{Diagram of primal and dual relations for parameter uncertainty control. Red lines designate the contribution of this paper.} \label{covector_partial} \end{figure} In \cite{gabasov} necessary conditions for Problem $\mathbf{P}$ (subject to the assumptions of Section \ref{regularity assumptions}) were established. These conditions are as follows: \\ \\ \noindent \textbf{Problem $\mathbf{P^{\lambda}}$.}\quad\textbf{(\cite{gabasov}, pp. 80-82)}. If $(x^*,u^*)$ is an optimal solution to Problem $\mathbf{P}$, then there exists an absolutely continuous costate vector $\lambda^*(t,\theta)$ such that for $\theta \in \Theta$: \normalsize \[ \frac{\partial \lambda^*}{\partial t}(t,\theta) = -\frac{\partial H(x^*,\lambda^*,u^*,t,\theta)}{\partial x} , \] \begin{equation}\label{adjoint} \lambda^*(T,\theta) = \frac{\partial F(x^*(T,\theta),\theta) }{\partial x} \end{equation} where $H$ is defined as: \[ H(x,\lambda,u,t,\theta) = \qquad \qquad \qquad \qquad \qquad \qquad \] \begin{equation}\label{first_Hamiltonian} \lambda f(x(t,\theta),u(t),\theta) + r(x(t,\theta), u(t),t,\theta). \end{equation} Furthermore, the optimal control $u^*$ satisfies \begin{equation*} u^*(t) = \underset{u \in U}{\arg\min} \mathbf{H}(x^*,\lambda^*,u,t), \end{equation*} where $\mathbf{H}$ is given by \begin{equation}\label{second_Hamiltonian_def} \mathbf{H}(x,\lambda,u,t)= \int_{\Theta} H(x,\lambda, u,t,\theta) d\theta. \end{equation} Because Problem $\mathbf{P^M}$ is a standard nonlinear optimal control problem, it admits a dual problem as well, Problem $\mathbf{P^{M\lambda}}$, provided by the Pontryagin Minimum Principle (a survey of Minimum Principle conditions is given by\cite{Hartletall.95}). Applied to $\mathbf{P^M}$ this generates: \\ \\ {\bf Problem ${\mathbf{P^{M \lambda}}}$:} For feasible solution $(\bar{X}^M,u)$ to Problem ${\mathbf{P^M}}$, find $\bar{\Lambda}(t)=[\bar{\lambda}_1^M(t), \dots \bar{\lambda}_M^M(t)]$, $\bar{\lambda}_i^M:[0,T] \rightarrow \mathbb{R}^{N_x}$, that satisfies the following conditions: \[ \frac{d\bar{\lambda}_i^M}{dt}(t) = -\frac{\partial \bar{H}^M(\bar{x}_i^M,\bar{\lambda}_i^M,u,t)}{\partial x_i^M} , \] \begin{equation}\label{PMlambda adjoint} \bar{\lambda}_i^M(T) = \alpha_i^M \frac{\partial F(\bar{x}_i^M,\theta_i^M) }{\partial \bar{x}_i^M}, \end{equation} where $\bar{H}^M$ is defined as: \[ \bar{H}^M(\bar{X}^M,\bar{\Lambda}_M,u,t) = \qquad \qquad \qquad \qquad \qquad \qquad \qquad \] \[ \sum_{i=1}^M \left[ \bar{\lambda}_i^M f(\bar{x}_i^M(t),u(t),\theta_i^M) + \right. \qquad \qquad \] \begin{equation}\label{PMlambda Hamiltonian} \qquad \qquad \left. \alpha_i^M r(\bar{x}_i^M(t), u(t),t,\theta_i^M) \right]. \end{equation} An alternate direction from which to approach to solving Problem $\mathbf{P}$ overall is to approximate the necessary conditions of Problem $\mathbf{P}$ , i.e. Problem $\mathbf{P^\lambda}$, directly rather than to approximate Problem $\mathbf{P}$. 
This creates the system of equations: \[ \frac{d\lambda}{dt}(t,\theta_i^M) = -\frac{\partial H(x,u,t,\theta_i^M)}{\partial x} \] \begin{equation}\label{collocated lambdas} \lambda(T,\theta_i^M) = \frac{\partial F(x(T,\theta_i^M),\theta_i^M) }{\partial x} \end{equation} for $i=1,\dots,M$, where $H$ is defined as: \[ H(x,\lambda,u,t,\theta) = \qquad \qquad \qquad \qquad \qquad \qquad \] \[ \qquad \qquad \lambda f(x(t,\theta),u(t),\theta) + r(x(t,\theta), u(t),t,\theta). \] This system of equations can be re-written in terms of the quadrature approximation of the stationary Hamiltonian defined in equation (\ref{second_Hamiltonian_def}). Define \[ \tilde{H}^M(x,\lambda,u,t): = \qquad \qquad \qquad \qquad \qquad \qquad \] \[ \sum_{i=1}^M \alpha_i^M H(x(t,\theta_i^M),\lambda(t,\theta_i^M), u(t),t,\theta_i^M) . \] Let \[\tilde{\Lambda}(t) =[\tilde{\lambda}_1^M(t), \dots \tilde{\lambda}_M^M(t)] =[\lambda(t,\theta_1^M),\dots, \lambda(t,\theta_M^M)] \] and let \[ \tilde{X}_M = [\tilde{x}_1^M(t),\dots,\tilde{x}_M^M(t)] \] denote the semi-discretized states from equation (\ref{eqn:barxdyn}). Equation (\ref{collocated lambdas}) can then be written as: \[ \frac{d \tilde{\lambda}_i^M}{dt}(t) = -\frac{1}{\alpha_i^M}\cdot\frac{\partial \tilde{H}^M(\tilde{X}_M,\tilde{\Lambda},u,t)}{\partial \tilde{x}_i^M} \] \begin{equation} \tilde{\lambda}_i^M(T) = \frac{\partial F(\tilde{x}_i^M(T), \theta_i^M) }{\partial \tilde{x}_i^M} \end{equation} for $i=1,\dots,M$. Thus we reach the following discretized dual problem: \\ \\ {\bf Problem ${\mathbf{P^{ \lambda M}}}$:} For feasible contols $u$ and solutions $\tilde{X}_M$ to equation (\ref{eqn:barxdyn}), find $\tilde{\Lambda}(t)=[\tilde{\lambda}_1^M(t), \dots \tilde{\lambda}_M^M(t)]$, $\tilde{\lambda}_i^M:[0,T] \rightarrow \mathbb{R}^{n_x}$, that satisfies the following conditions: \[ \frac{d\tilde{\lambda}_i^M}{dt}(t) = -\frac{1}{\alpha_i^M}\cdot\frac{\partial \tilde{H}^M(\tilde{X}_M,\tilde{\Lambda},u,t)}{\partial \tilde{x}_i^M} , \] \begin{equation}\label{PlambdaM adjoint} \tilde{\lambda}_i^M(T) = \frac{\partial F(\tilde{x}_i^M,\theta_i^M) }{\partial \tilde{x}_i^M}, \end{equation} where $\tilde{H}^M $ is defined as: \[ \tilde{H}^M(\tilde{X}_M,\tilde{\Lambda}_M,u,t) = \qquad \qquad \qquad \qquad \qquad \qquad \qquad \] \[ \sum_{i=1}^M \left[ \alpha_i^M \tilde{\lambda}_i^M f(\tilde{x}_i^M(t),u(t),\theta_i^M) + \right. \qquad \qquad \qquad \] \begin{equation}\label{PlambdaM Hamiltonian} \qquad \qquad \qquad \qquad \qquad \left. \alpha_i^M r(\tilde{x}_i^M(t), u(t),t,\theta_i^M) \right]. \end{equation} In the case of this particular problem, unlike standard control, the collocation of the relevant dynamics involves no approximation of differentiation (since the discretization is in the parameter domain rather than the time domain) and thus the mapping of covectors between Problem ${\mathbf{P^{M \lambda }}}$ and Problem ${\mathbf{P^{ \lambda M}}}$ is straightforward. \begin{lemma}\label{covector lemma} The mapping: \[ (\bar{x}_i^M,\bar{u}) \mapsto (\tilde{x}_i^M,\tilde{u} ), \quad \frac{\bar{\lambda}_i^M}{\alpha_i^M} \mapsto \tilde{\lambda}_i^M, \] for $i=1, \dots, M$ is a bijective mapping from solutions of Problem ${\mathbf{P^{M \lambda }}}$ to Problem ${\mathbf{P^{ \lambda M}}}$. \end{lemma} \begin{theorem}\label{hamiltonian convergence theorems} Let $\{\tilde{X}_M, \tilde{\Lambda}_M, u_M\}_{M\in V}$ be a sequence of solutions for Problem $\mathbf{P^{\lambda M}}$ with an accumulation point $\{\tilde{X}^\infty, \tilde{\Lambda}^\infty, u^\infty\}$. 
Let $(x^\infty,\lambda^\infty,u^\infty)$ be the solutions to Problem $\mathbf{P^{\lambda }}$ for the control $u^\infty$. Then \[ \lim_{M\in V} \tilde{H}^M(\tilde{X}_M, \tilde{\Lambda}_M, u_M,t) = \mathbf{H}(x^\infty,\lambda^\infty,u^\infty,t) \] where $\tilde{H}^M$ is the Hamiltonian of Problem $\mathbf{P^{\lambda M}}$ as defined by equation (\ref{PlambdaM Hamiltonian}) and $\mathbf{H}$ is the Hamiltonian of Problem $\mathbf{P}$ as defined by equation (\ref{second_Hamiltonian_def}). The proof of this theorem can be found in the Appendix. \end{theorem} The convergence of the Hamiltonians of the approximate, standard control problems to the Hamiltonian of the general problem, $\mathbf{H}(x^\infty,\lambda^\infty,u^\infty,t)$, means that many of the useful features of the Hamiltonians of standard optimal control problems are preserved. For instance, it is straightforward to show that the satisfaction of Pontryagin's Minimum Principle by the approximate Hamiltonians implies minimization of $\mathbf{H}(x^\infty,\lambda^\infty,u^\infty,t)$ as well. That is, that \[ \mathbf{H}(x^\infty, \lambda^\infty,u^\infty,t) \leq \mathbf{H}(x^\infty, \lambda^\infty,u,t) \] for all feasible $u$. Furthermore, when applicable, the stationarity properties of the standard control Hamiltonian--such as a constant-valued Hamiltonian in time-invariant problems, or stationarity with respect to $u(t)$ in problems with open control regions--are also preserved. \section{Numerical Example} \label{numerical examples section} In a slight refashioning of the notation in the Section \ref{defense model}, equation (\ref{full swarm dynamics}), let the parameter vector $\theta$ be defined by all the {\it unknown} parameters defining the interaction functions. Assuming prior distribution $\phi(\theta)$ over these unknowns and parameter bounds $\Theta$, we construct the following optimal control problem for robustness against the unknown parameters. \\ \\ \noindent{\bf Problem SD (Swarm Defense):} For $K$ defenders and $N$ attackers, determine the defender controls $u_k(t)$ that minimize: \begin{equation} J =\int_\theta \left[1-P_0(t_f,\theta) \right] \phi(\theta)d\theta \label{SwarmDefense} \end{equation} subject to: \small \[ \begin{cases} \dot{y}_k(t) = f(y_k(t), u_k(t)), & \hspace{-25pt} y_k(0) = y_{k0}\\ \ddot{x}_j(t,\theta) = & \\ F_S(t,\theta) + F_{HVU}(t,\theta)+F_D(t,\theta), & \hspace{-25pt} x_j(0,\theta) = x_{j0}(\theta) \\ \dot{Q}_j(t,\theta)= & \\ -Q_j(t,\theta) \sum_{k=1}^K P_k(t,\theta) d_y^{j,k}(x_j(t,\theta),y_k(t)), & \hspace{-5pt} Q_j(0,\theta) = 1 \\ \dot{P}_k(t)= & \\ -P_k(t,\theta)\sum_{j=1}^N Q_j(t,\theta) d_x^{k,j}(y_k(t), x_j(t,\theta)), & \hspace{-5pt}P_k(0,\theta) = 1 \end{cases} \] \normalsize for swarm attackers $j = 1, \dots, N$ and controlled defenders $k=1\dots, K$. We implement Problem {\bf SD} for both swarm models in Section \ref{swarm models}, for a swarm of $N=100$ attackers and $K=10$ defenders. \subsection{Example Model 1: Virtual Body Artificial Potential} \label{leonard_numerical_sec} The cooperative swarm forces $F_S$ are defined with the Virtual Body Artificial Potential of Section \ref{swarm models} with parameters $\alpha$, $d_0$ and $d_1$. In lieu of a potential for the virtual leaders, we assign the HVU tracking function: \begin{equation}\label{eqn:hvu tracking} f_{HVU} = - \frac{K_1(x_i-y_0)}{\|x_i-y_0\|} \end{equation} where $y_0\in\mathbb{R}^3$ is the position of the HVU. The dissipative force $f_{v_i} = - K_2\dot x_i$ is employed to guarantee stability of the swarm system. 
$K_1$ and $K_2$ are positive constants. The swarm's collision avoidance response to the defenders is defined by equation (\ref{eq:V_I}) with parameters $\alpha_h$, $h_0$ and $h_1$. Since there is only a repulsive force between swarm members and defenders, not an attractive force, we set $h_1=h_0$. For attrition, we use the damage function defined in equation (21) of \cite{walton2018optimal}. For the damage rate inflicted by defenders on attackers, we calibrate by the parameters $\lambda_D$, $\sigma_D$. For the damage rate inflicted by attackers on defenders, we calibrate by the parameters $\lambda_A$, $\sigma_A$. In both cases, the parameters $F$ and $a$ in \cite{walton2018optimal} are set to $F=0$, $a=1$. Table \ref{Fixed Parameter Values1} provides the parameter values that remain fixed in each simulation, and Table \ref{tbl:uncert-param-values1} provides the parameters we consider as uncertain. \begin{table}[h!] \centering \begin{tabular}{c|| r |c|} { Parameter} & { Value} & { Reference}\\ \hline $t_f$ & $45$ & {\em final time}\\ \hline $K_1$ & $5$ & {\em tracking coefficient}\\ \hline $K$ & $10$ & {\em number of defenders}\\ \hline $h_1$ & $h_0$ & {\em interaction parameter}\\ \hline $\lambda_D$ & $2$ & {\em defender weapon intensity}\\ \hline $\sigma_D$ & $2$ & {\em defender weapon range}\\ \hline $N$ & $100$ & {\em number of attackers}\\ \hline $K_2$ & $5$ & {\em dissipative force}\\ \hline \noalign{ } \end{tabular} \caption{Model 1 Fixed Parameter Values} \label{Fixed Parameter Values1} \end{table} \begin{table}[h!] \centering \begin{tabular}{c|| r |c|c|} { Parameter} & { Nominal} & { Range} & {Reference}\\ \hline $\alpha$ & $.5$ & [0.1, 0.9] & {\em control gain}\\ \hline $d_0$ & $1$ & [0.5, 1.5] & {\em lower range limit}\\ \hline $d_1$ & $6$ & [4, 8] & {\em upper range limit}\\ \hline $\lambda_A$ & $.05$ & [.01,.09] & {\em weapon intensity}\\ \hline $\sigma_A$ & $2$ & [1.5, 2.5] & {\em weapon range}\\ \hline $\alpha_h$ & $6$ & [5,7] & {\em herding intensity} \\ \hline $h_0$ & $3$ & [2,4] & {\em herding range} \\ \hline \noalign{ } \end{tabular} \caption{Model 1 Varied Parameter Values} \label{tbl:uncert-param-values1} \end{table} We first use the nominal parameter values provided in Tables \ref{Fixed Parameter Values1} and \ref{tbl:uncert-param-values1} to find a nominal solution: defender trajectories that result in the minimum probability of HVU destruction. With the results of these simulations as a reference point, we consider as uncertain each of the parameters that define the attacker swarm model and weapon capabilities. In this simulation, these parameters are considered individually. The number of discretization nodes for parameter space was chosen by examination of the Hamiltonian. To illustrate this method and the results obtained in Section \ref{adjointconvergencesection} we compute Hamiltonians for Problem {\bf SD} and Model 1 with $\theta = d_0$, $d_0 \in [0.5,1.5]$, and $M = [5,8,11]$. As $M$ increases the sequence of Hamiltonians should converge to the optimal Hamiltonian for Problem {\bf SD}. For Problem {\bf SD} that should result in a constant, zero-valued Hamiltonian. Figure \ref{leonard_H_fig} shows the respective Hamiltonians for $M = [5,8,11]$. The value $M=11$ was chosen for simulations, based on the approximately zero-valued Hamiltonian it generates.
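The Hamiltonian check itself is inexpensive once a numerical solution is available. As a minimal sketch (the array layouts and names below are illustrative assumptions, not the interface of the solver that produced the results), the discrete Hamiltonian $\tilde{H}^M$ of equation (\ref{PlambdaM Hamiltonian}) can be tabulated along the time grid and inspected for flatness:
\begin{verbatim}
import numpy as np

def hamiltonian_profile(X, Lam, U, theta, alpha, f, r, t_grid):
    # X, Lam: arrays of shape (M, Nt, n_x) holding the node states x_i^M(t_k)
    # and costates lambda_i^M(t_k); U: array of shape (Nt, n_u) of controls.
    M, Nt = len(theta), len(t_grid)
    H = np.zeros(Nt)
    for k in range(Nt):
        for i in range(M):
            H[k] += alpha[i] * (Lam[i, k] @ f(X[i, k], U[k], theta[i])
                                + r(X[i, k], U[k], t_grid[k], theta[i]))
    return H  # a flat, near-zero profile indicates M is adequate for Problem SD
\end{verbatim}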
\begin{figure} \caption{Convergence of the Hamiltonian as the number of parameter nodes $M$ increases} \label{leonard_H_fig} \end{figure} We compare the performance of the solution generated using the uncertain parameter optimal control Problem {\bf SD} versus a solution obtained with the nominal values. Figure \ref{fig:snapshots} shows the nominal solution trajectories. The comparative results of the nominal solutions vs the uncertain parameter control solutions are shown in Figure~\ref{leonard_comparison_fig}, where the performance of each is shown for different parameter values. \begin{figure} \caption{Shown are four snapshots during a simulation at $t=0$, 15, 30, and 45 (time units are arbitrary). Defenders are represented by blue spheres and attackers by red spheres. Below these snapshots, we show full trajectories for the entire simulation, which is the result of an optimization protocol using the parameters shown in Table~\ref{Fixed Parameter Values1}.} \label{fig:snapshots} \end{figure} \begin{figure*} \caption{Performance of Solutions of Swarm Model 1 as parameter values are varied} \label{leonard_comparison_fig} \end{figure*} As seen in Figure~\ref{leonard_comparison_fig}, the trajectories generated by optimization using the nominal values perform poorly over a range of $\alpha$, $d_0$, $\sigma_A$, $\alpha_h$ and $h_0$. In the case of $h_0$, for example, this is because the attackers are less repelled by the defenders when $h_0$ is decreased, and they are more able to destroy the HVU from a longer distance as $\sigma_A$ is increased. The parameter uncertainty solution, however, demonstrates that, using the uncertain parameter optimal control framework, a solution can be provided which is robust over a range of parameter values. We contrast these results with the case of uncertain parameters $d_1$ and $\lambda_A$, also shown in Figure~\ref{leonard_comparison_fig}. It can be seen that robustness improvements are modest to non-existent for these parameters. This suggests an insensitivity of the problem to the $d_1$ and $\lambda_A$ parameters. This kind of analysis can be used to guide inference and observability priorities. \subsection{Example Model 2: Reynolds Boid Model} To demonstrate the flexibility of the proposed framework to include diverse swarm models, we have applied the same analysis as was done in Section \ref{leonard_numerical_sec} to the Reynolds Boid Model introduced in Section \ref{swarm models}. We apply the same HVU tracking function as equation (\ref{eqn:hvu tracking}). The herding force $F_D$ of the defenders repelling attackers is applied as a separation force in the form of equation (\ref{eqn: reynolds separation}). The fixed parameter values are the same as those in Table \ref{Fixed Parameter Values1}; the uncertain parameters and ranges are given in Table \ref{tbl:uncert-param-values2}. The results are shown in Figure \ref{reynolds_comparison_fig}. Again, we see that the tools developed in this paper can be used to gain an insight into the robustness properties of the nominal versus uncertain parameter solutions. For example, we can see that the uncertain parameter solutions perform much better than the nominal ones for the cases where $\lambda_A$, $\sigma_A$ and $w_I$ are uncertain. \begin{table}[h!]
\centering \begin{tabular}{c|| r |c|c|} { Parameter} & { Nominal} & { Range} & {reference}\\ \hline $\lambda_A$ & $.05$ & [.01,.09] & {\em weapon intensity}\\ \hline $\sigma_A$ & $2$ & [1.5 2.5] & {\em weapon range}\\ \hline $r_{al}$ & $2$ & [1.5,2.5] & {\em alignment range}\\ $w_{al}$ & $.75$ & [.25,1.25] & {\em alignment intensity}\\ $r_{coh}$ & $2$ & [1.5,2.5] & {\em cohesion range}\\ $w_{coh}$ & $.75$ & [.25,1.25] & {\em cohesion intensity}\\ $r_{sep}$ & $1$ & [.5, 1.5] & {\em separation range}\\ $w_{sep}$ & $0.5$ & [.1, .9] & {\em separation intensity}\\ $r_{I}$ & $2$ & [1.5,2.5] & {\em herding range}\\ $w_{I}$ & $4.5$ & [3.5, 5.5] & {\em herding intensity}\\ \noalign{ } \end{tabular} \caption{Model 2 Varied Parameter Values} \label{tbl:uncert-param-values2} \end{table} \begin{figure*} \caption{Performance of Solutions of Swarm Model 2 as parameter values are varied} \label{reynolds_comparison_fig} \end{figure*} \section{Conclusions} In this paper we have built on our previous work on developing an efficient numerical framework for solving uncertain parameter optimal control problems. Unlike uncertainties introduced into systems due to stochastic ``noise,'' parameter uncertainties do not average or cancel out in regards to their effects. Instead, each possible parameter value creates a specific profile of possibility and risk. The uncertain optimal control framework which has been developed for these problems exploits this inherent structure by producing answers which have been optimized over all parameter profiles. This approach takes into account the possible performance ranges due to uncertainty, while also utilizing what information is known about the uncertain features--such as parameter domains and prior probability distributions over the parameters. Thus we are able to contain risk while providing plans which have been optimized for performance under all known conditions. The results reported in this paper include analysis of the consistency of the adjoint variables of the numerical solution. In addition, the paper includes a numerical analysis of a large scale adversarial swarm engagement that clearly demonstrates the benefits of using the proposed framework. There are many directions of future work for the topics of this paper. The numerical simulations in this paper consider the parameters individually, as one-dimensional parameter spaces. However, Problem {\bf P} allows for multi-dimensional parameter spaces. A more dedicated implementation, taking advantage of the parallelizable form of equation (\ref{eqn:barxdyn}) for example, could certainly manage several simultaneous parameters. Exponential growth as parameter space dimension increases is an issue for both the quadrature format of equation (\ref{equ:quadrature}) and handling of the state space size for equation (\ref{eqn:barxdyn}). This can be somewhat mitigated by using sparse grid methods for high-dimensional integration to define the nodes in equation (\ref{equ:quadrature}). For large enough size, Monte Carlo sampling, rather than quadrature might be more appropriate for designating parameter nodes. Another direction of future work is in greater application of the duality results of Section \ref{adjointconvergencesection}. The numerical results in this paper simply utilize the Hamiltonian consistency. The proof of Theorem \ref{hamiltonian convergence theorems}, however, additionally demonstrates the consistency of the adjoint variables for the problem. 
As the results demonstrate, parameter sensitivity for these swarm models is highly nonlinear. The numerical solutions of Section \ref{numerical examples section} are able to demonstrate this sensitivity by applying the solution to varied parameter values. However, this is actually a fairly expensive method for a large swarm, as it involves re-evaluation of the swarm ODE for each parameter value. More importantly, it would not be scalable to high-dimensional parameter spaces, as the exponential growth of that approach to sensitivity analysis would be unavoidable. The development of an analytical adjoint sensitivity method for this problem could be of great utility for paring down numerical simulations to only focus on the parameters most relevant to success. \appendix[Proof of Theorem \ref{hamiltonian convergence theorems}] \subsection{Assumptions and Definitions}\label{regularity assumptions} We impose the assumptions of \cite{walton_IJC} Section 2. The definition of accumulation point used in the following proof can be found in \cite{walton_IJC} Definition 3.2. The following assumption is placed on the choice of numerical integration scheme to be utilized in approximating Problem $\mathbf{P}$: \begin{assumption}\label{numericalschemeassumption} For each $M \in \mathbb{N}$, there is a set of nodes $\{\theta_i^M\}_{i=1}^{M} \subset \Theta$ and an associated set of weights $\{\alpha_i^M\}_{i = 1}^M \subset \mathbb{R}$, such that for any continuous function $h:\Theta \to \mathbb{R}$, \small \begin{align*} \int_{\Theta} h(\theta) d\theta = \lim_{M \to \infty} \sum_{i = 1}^M h(\theta^M_i) \alpha_i^M. \end{align*} \end{assumption} \normalsize This is the same as \cite{walton_IJC} Assumption 3.1; we include it for reference. In addition to the assumptions of \cite{walton_IJC}, we also impose the following: \begin{assumption} \label{regularityassumption} The functions $f$ and $r$ are $C^1$. The set $\Theta$ is compact and $x_0:\Theta \mapsto \mathbb{R}^{n_x}$ is continuous. Moreover, for the compact sets $X$ and $U$ defined in \cite{walton_IJC}'s Assumptions 2.3 and 2.4, and for each $t \in [0,T]$, $\theta \in \Theta$, the Jacobians $r_x$ and $f_x$ are Lipschitz on the set $X \times U$, and the corresponding Lipschitz constants $L_r$ and $L_f$ are uniformly bounded in $\theta$ and $t$. The function $F$ is $C^1$ on $X$ for all $\theta \in \Theta$; in addition, $F$ and $ F_x$ are continuous with respect to $\theta$. \end{assumption} \subsection{Main Theorem Proof} The theorem relies on the following lemma: \begin{lemma}\label{costate lemma} Let $\{u_M\}$ be a sequence of optimal controls for Problem $\mathbf{P^M}$ with an accumulation point $u^\infty$ for the infinite set $V\subset \mathbb{N}$.
Let $(x^\infty(t,\theta),\lambda^\infty(t,\theta))$ be the solution to the dynamical system: \small \begin{align}\label{costate accumulation points} \begin{cases} \dot{x}^\infty(t,\theta)=f(x^\infty(t,\theta),u^\infty(t),\theta) \\ \dot{\lambda}^\infty(t,\theta)= -\frac{\partial H(x^\infty(t,\theta),\lambda^\infty(t,\theta),u^\infty(t),t,\theta)}{\partial x} \end{cases} \end{align} \begin{align} \begin{cases} x^\infty(0,\theta) = x_0(\theta) \\ \lambda^\infty(T,\theta) = \frac{\partial F(x^\infty(T,\theta),\theta) }{\partial x} \end{cases} \end{align} \normalsize where $H$ is defined as per Equation (\ref{first_Hamiltonian}), and let $\{(x_M(t,\theta),\lambda_M(t,\theta))\}$ for $M\in V$ be the sequence of solutions to the dynamical systems: \small \begin{align}\label{costate sequence} \begin{cases} \dot{x}_M(t,\theta)=f(x_M(t,\theta),u_M(t),\theta)\\ \dot{\lambda}_M(t,\theta)= -\frac{\partial H(x_M(t,\theta),\lambda_M(t,\theta),u_M(t),t,\theta)}{\partial x} \end{cases} \end{align} \begin{align} \begin{cases} x_M(0,\theta) = x_0(\theta) \\ \lambda_M(T,\theta) = \frac{\partial F(x_M(T,\theta),\theta) }{\partial x} \end{cases} \end{align} \normalsize Then, the sequence $\{(x_M(t,\theta),\lambda_M(t,\theta))\}$ converges pointwise to $(x^\infty(t,\theta),\lambda^\infty(t,\theta))$ and this convergence is uniform in $\theta$. \end{lemma} \subsubsection*{Proof} \noindent The convergence of $\{x_M(t,\theta)\}$ is given by \cite{walton_IJC}, Lemmas 3.4, 3.5. The convergence of the sequence of solutions $\{\lambda_M(t,\theta)\}$ is guaranteed by the optimality of $\{u_M\}$. The convergence of $\{\lambda_M(t,\theta)\}$ then follows the same arguments given the convergence of $\{x_M(t,\theta)\}$, utilizing the regularity assumptions placed on the derivatives of $F$, $r$, and $f$ with respect to $x$ to enable the use of Lipschitz conditions on the costate dynamics and transversality conditions. \begin{remark}\label{equivalence of sequence} Note that $\lambda_M(t,\theta)$ is not a costate of Problem ${\mathbf{P^{ \lambda M}}}$, since it is a function of $\theta$. However, when $\theta = \theta_i^M$, then $\lambda_M(t,\theta_i^M) = \tilde{\lambda}_i^{M}(t)$, where $\tilde{\lambda}_i^{M}$ is the costate of Problem ${\mathbf{P^{ \lambda M}}}$ generated by the pair of solutions to Problem ${\mathbf{P^{M}}}$, $(\tilde{x}_i^{M}, u_M^\ast)$ . In other words, the function $\lambda_M(t,\theta)$ matches the costate values at all collocation nodes. Since these values satisfy the dynamics equations of Problem ${\mathbf{P^{ \lambda M}}}$, a further implication of this is that the values of $\lambda_M(t,\theta_i^M)$ produce feasible solutions to Problem ${\mathbf{P^{ \lambda M}}}$. \end{remark} \begin{remark}\label{equivalence of accumulation points} Since the functions $\{(x_M(t,\theta),\lambda_M(t,\theta))\}$ obey the respective identities $x_M(t,\theta_i^M) = \tilde{x}_i^{M}(t)$ and $\lambda_M(t,\theta_i^M) = \tilde{\lambda}_i^{M}(t)$, their convergence to $(x^\infty(t,\theta),\lambda^\infty(t,\theta))$ also implies the convergence of the sequence of discretized primals and duals, $\{\tilde{X}_M\}$ and $\{\tilde{\Lambda}_M\}$, to accumulation points given by the relations \small \[ \lim_{M\in V} \tilde{x}_i^{M}(t) = x^\infty(t,\theta_i^M), \quad \lim_{M\in V} \tilde{\lambda}_i^{M}(t) = \lambda^\infty(t,\theta_i^M) \] \normalsize \end{remark} We now prove Theorem \ref{hamiltonian convergence theorems}. 
Let $\{(x_M(t,\theta),\lambda_M(t,\theta))\}$ for $M\in V$ be the sequence of solutions defined by Equation \ref{costate sequence} and let $(x^\infty(t,\theta),\lambda^\infty(t,\theta))$ be the accumulation functions defined by Equation \ref{costate accumulation points}. Incorporating Remarks \ref{equivalence of sequence} and \ref{equivalence of accumulation points}, we have: \small \[ \lim_{M\in V} \tilde{H}^M(\tilde{X}_M, \tilde{\Lambda}_M, u_M,t) = \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \] \[ \lim_{M\in V} \sum_{i=1}^M \alpha_i^M \left[\tilde{\lambda}_i^M(t) f(\tilde{x}_i^M(t),u(t),\theta_i^M) + \right. \] \[ \left. r(\tilde{x}_i^M(t), u(t),t,\theta_i^M) \right] \] \[ = \lim_{M\in V} \sum_{i=1}^M \alpha_i^M \left[ \lambda_M(t,\theta_i^M)f(x_M(t,\theta_i^M),u(t),\theta_i^M) + \right. \] \[ \left. r(x_M(t,\theta_i^M), u(t),t,\theta_i^M) \right] \] \normalsize Due to the results of Lemma \ref{costate lemma}, and applying \cite{walton_IJC}'s Remark 1 on the convergence of the quadrature scheme for uniformly convergent sequences of continuous functions, we find that: \small \[ \lim_{M\in V} \tilde{H}^M(\tilde{X}_M, \tilde{\Lambda}_M, u_M,t) = \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \] \[ \int_{\Theta} \left[ \lambda^\infty(t,\theta)f(x^\infty(t,\theta),u^\infty(t),\theta) + \right. \qquad \qquad \qquad \qquad \qquad \qquad \] \[ \left. r(x^\infty(t,\theta), u^\infty(t),t,\theta) \right] d\theta = \mathbf{H}(x^\infty,\lambda^\infty,u^\infty,t) \] \normalsize Thus proving the theorem. \section*{Acknowledgment} This work was supported in part by ONR SoA program and by NPS Cruser program. \ifCLASSOPTIONcaptionsoff \fi \end{document}
\begin{document} \title{Continuous-Variable Bell Inequalities in Phase Space} \author{Karl-Peter Marzlin} \affiliation{Department of Physics, St. Francis Xavier University, Antigonish, Nova Scotia, B2G 2W5, Canada} \affiliation{Institute for Quantum Information Science, University of Calgary, Calgary, Alberta T2N 1N4, Canada} \author{T.~A.~Osborn} \affiliation{Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2, Canada} \begin{abstract} We propose a variation of Bell inequalities for continuous variables that employs the Wigner function and Weyl symbols of operators in phase space. We present examples of Bell inequality violation which beat Cirel'son's bound. \end{abstract} \pacs{03.65.Ud,03.65.Ta,03.65.Ca} \maketitle Since its discovery in 1964 \cite{Bell:Physics1964} the Bell inequality (BI) has triggered an enormous interest in the differences between classical and quantum correlations. Bell inequalities are now commonly referred to as equations that relate correlation measurements which are fulfilled by any local hidden variable (LHV) theory, but are violated within the framework of quantum mechanics (QM). The original inequality was formulated for dichotomic variables in spin systems. Clauser, Horne, Shimony, and Holt (CHSH) \cite{PhysRevLett.23.880} presented a BI that was more amenable for experimental tests and is nowadays widely used. The original BI was inspired by the Einstein-Podolsky-Rosen paradox \cite{PhysRev.47.777,RevModPhys.81.1727} for continuous variables (CV). BIs for CV systems were first developed using dichotomic variables that have eigenvalues $\pm 1$ \cite{JModOpt42-939,Gour2004415,Praxmeyer:EPJD2005} . Recently, a new approach for CV Bell inequalities has been developed by Cavalcanti, Foster, Reid, and Drummond (CFRD) \cite{PhysRevLett.99.210405,arXiv:1005.2208}. The CFRD inequality can be formulated for arbitrary CV observables, but a multi-partite quantum system with five or more spatially separated sub-systems is usually required to obtain a violation. In this paper we offer a generalization of CFRD inequalities that employs the Wigner function $ W(q,p)$. The resulting BIs are conceptually different from those obtained using operator methods. Furthermore, the similarity of quantum phase space expectation values to those of classical statistical mechanics can provide new insight to the BI. The Wigner function is a quasi-probability distribution in phase space that is defined (in a 1D system) by \cite{PhysRev.40.749} $ W(q,p) = (2\pi \hbar)^{-1} \text{Smb}[\hat{\rho}](q,p)$, with \begin{align} \text{Smb}[\widehat{A}](q,p) &=2 \int_{-\infty}^\infty \; \langle q-q'| \widehat{A} | q+q' \rangle e^{2i p q'/\hbar} \,dq' \label{eq:wignerfunction} \end{align} the Weyl symbol of an operator $\widehat{A}$ and $\hat{\rho}$ the density matrix. $ W(q,p)$ appears in the expectation values of $\widehat{A}$, \begin{align} \langle \widehat{A} \rangle &= \int dq\, dp\, W(q,p)\, \text{Smb}[\widehat{A}](q,p)\; . \label{eq:weylExpec}\end{align} Phase space methods have been used to study specific implementations of the CHSH inequality for dichotomic variables \cite{PhysRevLett.82.2009,PhysRevA.58.4345,revzen:022103,quant-ph/0612029}. Here we derive a BI that is based on Weyl symbols for a large class of observables $\widehat{A}$ with dichotomic symbols. As there is no general relation between bounds of operators and bounds of their symbols, the resulting BI is conceptually different than other BIs. 
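As a standard illustration of these definitions (included here only as an example), consider a single harmonic oscillator with unit mass and frequency, $\widehat{H} = \frac12(\hat{q}^2 + \hat{p}^2)$. Since $\hat{q}^2$ and $\hat{p}^2$ are already symmetrically ordered, $\text{Smb}[\widehat{H}](q,p) = \frac12(q^2+p^2)$, while the oscillator ground state has the Gaussian Wigner function
\begin{align*}
W_0(q,p) = \frac{1}{\pi\hbar}\, e^{-(q^2+p^2)/\hbar},
\qquad
\int dq\, dp\; W_0(q,p)\, \tfrac12\left(q^2+p^2\right) = \frac{\hbar}{2},
\end{align*}
so that the zero-point energy is recovered from the classical-looking phase space average (\ref{eq:weylExpec}).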
{\it Local Hidden-Variable Theories---} In the context of dichotomic observables such as photon polarization, LHV theories are based on the following assumptions. \\ {\em LHV1}: the possible measurement values of a LHV observable $B$ are given by the spectrum of the corresponding quantum observable $\widehat{B}$. \\ {\em LHV2}: the expectation value of $B$ is given by the expression $ \langle B \rangle_\text{LHV} = \int d\lambda \int db\, b\, p_\lambda(b)$, where the integration (or summation) variable $b$ runs over the spectrum of $\widehat{B}$. Here, $\lambda$ represents all hidden variables in the LHV theory, and $0\leq p_\lambda(b) \leq 1$ denotes the probability to measure the value $b$ if the hidden variable takes the value $\lambda$. Conservation of probability requires that $ \int d\lambda \int db\, p_\lambda(b)=1$. \\ {\em LHV3}: For simultaneous measurements on two spatially separated systems, causality implies that the probabilities for the measurements on the two systems must be independent (i.e., they usually factorize). In the context of CV observables, the assumption LHV1 is problematic. To see this, consider the case of a single 1D particle. If we choose $\widehat{B}$ to be the position $\hat{q}$ or momentum $\hat{p}$ of the particle, then the spectrum of $\widehat{B}$ is the set of real numbers. Let us instead choose $\widehat{B} = \widehat H = \frac 12 (\hat{q}^{2}+ \hat{p}^{2})$, which is the energy of a harmonic oscillator. If, in a LHV theory, $q$ and $p$ can take any real value, then the LHV spectrum of $B$ should include all positive real numbers. However, the quantum mechanical energy spectrum is discrete. Hence, if we tried to describe position and energy simultaneously, we could run into a contradiction about the spectra of the two observables. The origin of this contradiction is of course that, in quantum mechanics, $\widehat{H}$ and $\hat{q}$ do not commute and thus cannot be measured simultaneously. On the other hand, in LHV models the observables must be commutative for at least a very large class of models \cite{PhysRevA.69.022118}. The derivation of our generalized BI is based on the assumption of commutativity. {\it Re-derivation of the CHSH inequality.---} Our method to derive BIs is related to the proof of the CHSH inequality presented by CFRD \cite{PhysRevLett.99.210405}, which employs that the variance of a (generally complex) observable $B$ must be positive, $ | \langle B \rangle |^2 \leq \langle |B|^2 \rangle $. The expectation values in this expression may either be evaluated within QM, then denoted by $ \langle \widehat{B} \rangle $, or in the framework of LHV models where we use the notation $\langle B \rangle_\text{LHV}$. Within each theory this inequality is always fulfilled. However, the maximum value of $\langle |B|^2\rangle_\text{LHV}$ in all LHV theories provides an upper bound on local realism: if the quantum mechanical expectation value does not fulfill the inequality \begin{equation} | \langle \widehat{B} \rangle |^2 \leq \max_\text{LHV} \langle |B|^2 \rangle_\text{LHV}\; , \label{eq:newBellConjecture1}\end{equation} then the predictions of QM are inconsistent with the assumptions behind LHV models. To derive the CHSH inequality we choose the observable $ B= X_1 X_2 + X_1 Y_2 + Y_1 X_2 - Y_1 Y_2 $, with $X_i, Y_i $ four dichotomous observables (so that $X_i^2=Y_i^2 =1$) for two particles $i=1,2$. In quantum mechanics one finds $ \langle \widehat{B}^2 \rangle = 4 + \langle [Y_1,X_1] \, [X_2 ,Y_2] \rangle $. 
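The operator identity quoted above follows from a short computation: writing $\widehat{B} = X_1(X_2+Y_2) + Y_1(X_2-Y_2)$, using $X_i^2 = Y_i^2 = 1$ and the fact that observables of particle 1 commute with those of particle 2, one finds
\begin{align}
\widehat{B}^2 &= (X_2+Y_2)^2 + (X_2-Y_2)^2 \nonumber \\
&\quad + X_1 Y_1 (X_2+Y_2)(X_2-Y_2) + Y_1 X_1 (X_2-Y_2)(X_2+Y_2) \nonumber \\
&= 4 + [Y_1,X_1]\,[X_2,Y_2]\;, \nonumber
\end{align}
since $(X_2+Y_2)^2 + (X_2-Y_2)^2 = 4$ and $(X_2\pm Y_2)(X_2\mp Y_2) = \mp [X_2,Y_2]$.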
CFRD then argue that in any LHV model the commutators must vanish, which leads to the CHSH inequality $ | \langle \widehat{B} \rangle | \leq 2$. QM violates this inequality for a suitable choice of states and observables. Because of the above-mentioned commutativity of many LHV models, this seems to be a suitable approach to deriving BIs. {\it Bell inequalities in phase space. ---} In Ref.~\cite{PhysRevLett.99.210405}, CFRD used the method of commuting observables to find BIs for a more general form of the Bell observable $B$. Here we employ this approach to derive BIs for a system of $n$ degrees of freedom (e.g., $n$ one-dimensional particles) in phase space. We consider an operator $\widehat{B}$ that is a function of position and momentum operators $\hat{q}_i, \hat{p}_i\; (i=1,\cdots n)$ and will establish an upper bound for $\langle \widehat{B} \rangle $ in LHV models by ignoring the commutators between position and momentum. We represent $\widehat{B}$ by a sum of Weyl-ordered products, \begin{equation} \widehat{B} = \sum_{\mu_1, \cdots, \mu_{2n}} B_{\mu_1 \cdots \mu_{2n}} \left ( \hat{q}_1^{\mu_1} \hat{p}_1^{\mu_2}\cdots \hat{q}_n^{\mu_{2n-1}}\hat{p}_n^{\mu_{2n}} \right )_\text{Weyl}\; , \label{eq:Fexpansion}\end{equation} with complex coefficients $B_{\mu_1 \cdots \nu_n} $. Weyl ordering of a product of operators $\widehat{A}_1, \widehat{A}_2 , \cdots$ corresponds to the completely symmetric sum \begin{align} (\widehat{A}_1 \widehat{A}_2 \cdots\widehat{A}_k )_\text{Weyl} &:= \frac{ 1}{k!} \sum_P \widehat{A}_{P_1} \widehat{A}_{P_2} \cdots \widehat{A}_{P_k} \; , \end{align} where $\sum_P$ stands for the sum over all permutations of the $k$ operators. Any operator that possesses a Taylor expansion can be brought into the form (\ref{eq:Fexpansion}). For instance, the operator $\widehat{B} = \hat{q}\hat{p}$ can also be written as $\widehat{B} = \frac{ 1}{2} (\hat{q}\hat{p}+\hat{p}\hat{q})+ i \frac{ \hbar}{2} $, which corresponds to a single degree of freedom ($n=1$) and $B_{1,1}=1$, $B_{0,0}= i \frac{\hbar}{2} $ in Eq.~(\ref{eq:Fexpansion}). In phase space, the left-hand side (l.h.s.) of Eq.~(\ref{eq:newBellConjecture1}) can be evaluated using relation (\ref{eq:weylExpec}). The Weyl symbol of a Weyl-ordered operator $\widehat{B}$ can be evaluated by replacing the operators $\hat{q}_i, \hat{p}_i$ by the respective phase space variables, $\text{Smb}[\widehat{B}](q_i,p_i)= B(q_i,p_i)$. For the operator (\ref{eq:Fexpansion}), the symbol $B(q_i,p_i)$ then has the explicit form \begin{equation} B(q_i,p_i) = \sum_{\mu_1, \cdots, \mu_{2n}} B_{\mu_1 \cdots \mu_{2n}} q_1^{\mu_1} p_1^{\mu_2}\cdots q_n^{\mu_{2n-1}} p_n^{\mu_{2n}} . \label{eq:Bexpansion}\end{equation} To find the right-hand side (r.h.s.) of Eq.~(\ref{eq:newBellConjecture1}) we evaluate $ \langle \widehat{B} \widehat{B}^\dagger \rangle$ in QM and eliminate the contributions of all commutators between position and momentum operators. In quantum phase space one has \begin{align} \langle \widehat{B} \widehat{B}^\dagger\rangle &= \int\prod_{j=1}^n dq_j\, dp_j\, W(q_i,p_i)\, (B\star B^*)(q_i,p_i)\; , \label{eq:weylExpec2}\end{align} where the star product between two operator symbols captures the non-commutativity and non-locality of quantum observables and is given by \cite{FAB72,OM95,KO3} \begin{eqnarray}\label{eq:groene} f\star g(x) &=& \frac1{(\pi\hbar)^{2n}} \int\int d^{2n}y\, d^{2n}z f(y)\,g(z) \\ \nonumber &\ & \times\exp \frac {2i}{\hbar} \Big(y\cdot J z + z\cdot Jx + x\cdot Jy \Big)\,. 
\end{eqnarray} Here, $x=(q,p) \in \mathds{R}^{2n}$ and $ J =\big[\begin{array}{cc}0&-I\\I&0\end{array}\big]$, where $I$ denotes the $n$-dimensional identity matrix. We now employ the central assumption that the LHV upper bound can be found by eliminating all commutators between position and momentum operators. In the Weyl symbol representation this can easily be accomplished by taking the limit $\hbar \rightarrow 0$ in the star product $B\star B^*$ of Eq.~(\ref{eq:weylExpec2}). For smooth, $\hbar$-independent $f,g$ the star product has the expansion $ f\star g(x) = f(x)g(x) + \frac{i\hbar}{2}\{f,g\}(x) + \O(\hbar^2)$, where the $\hbar$-linear term is a Poisson bracket \cite{FAB72,OM95,KO3}. Consequently, $B\star B^*$ in Eq.~(\ref{eq:weylExpec2}) is replaced by the algebraic product $|B|^2$ so that a tentative new BI is given by \begin{align} |\langle \widehat{B} \rangle |^2 &\leq \int \prod_{j=1}^n dq_j\; dp_j\; | B(q_i,p_i)|^2 \, W(q_i,p_i)\; . \label{eq:BellTentative}\end{align} The r.h.s.~of Eq.~(\ref{eq:BellTentative}) does not yet provide the correct upper bound for LHV theories because it depends on the Wigner function of QM. Intuitively one could fix this by replacing the quantum state by a suitable phase space distribution $W_\text{LHV}(q_i,p_i)$ which is compatible with the assumptions of LHV models. However, general LHV models do not necessarily possess such a phase space distribution. Instead, one has to consider suitable Bell operators for which the r.h.s.~of Eq.~(\ref{eq:BellTentative}) can be evaluated without referring to a specific state. A simple but relevant example is the case when $ | B(q_i,p_i)|^2 = |B_0|^2 $ is constant. This happens if $ B(q_i,p_i) $ is a phase factor or a sign function, for instance. Because of the normalization of the Wigner function the BI then becomes \begin{align} |\langle B(\hat{q}_i, \hat{p}_i) \rangle |^2 &\leq | B_0|^2 \; . \label{eq:BellCV}\end{align} This is the main result of our paper. We remark that the dependence on the state has been removed here because the integral of the Wigner function over the entire phase space equals to the probability to find any measurement result, which is unity both in QM and LHV theories. Consequently, $ | B_0|^2 $ can be interpreted as the upper bound for commutative LHV theories. We emphasize that the BI (\ref{eq:BellCV}) is generally different from CFRD or CHSH type Bell inequalities. First, the upper bound is independent of the quantum state, while for the CFRD inequality the state dependence needs to be addressed. This can be a rather subtle issue: some entangled quantum states do have a positive Wigner function \cite{FoundPhys36-546} and positivity of the Wigner function is not sufficient to ensure consistency with LHV models \cite{PhysRevA.79.014104}. In fact, Revzen {\it et al.} \cite{revzen:022103} have shown that a dichotomic CV BI can be violated with a non-negative Wigner function. Second, the upper bound is determined by the symbol of the operator $\widehat{B}$ rather than its spectrum. This can be exploited to find new examples of BI violation. {\it Example of Bell inequality violation. ---} We consider a system of two harmonic oscillators, which are prepared in the Bell state $ |\psi_\text{Bell} \rangle = \frac{ 1}{\sqrt{2}} (|0 \rangle \otimes |1 \rangle -|1 \rangle \otimes |0 \rangle ) $. We set $\hbar=1$ and measure lengths in units of the ground state width of the oscillators so that $\langle q|m \rangle = e^{-\frac{q^2}{2}} H_m ( q)/(\sqrt{2^m m! 
\sqrt{\pi}})$, with $H_m(q)$ the Hermite polynomials. If we introduce a phase space variable $\mathbf{x}:=(q,p)$ with $\mathbf{x}^2=q^2+p^2$, and employ center-of-mass variables $\delta\mathbf{x}:=(\mathbf{x}_1-\mathbf{x}_2)/\sqrt{2}$ and $\mathbf{x}_\text{c} :=(\mathbf{x}_1+\mathbf{x}_2)/\sqrt{2}$, the Wigner function of $ |\psi_\text{Bell} \rangle$ is given by \begin{align} W_\text{Bell}(\mathbf{x}_c, \delta\mathbf{x}) &= \pi^{-2} e^{- (\mathbf{x}_c^2 +\delta\mathbf{x}^2)} \left (2 \delta\mathbf{x}^2 - 1 \right )\; . \label{eq:WignerBell}\end{align} This corresponds to the product $W_{00}(\mathbf{x}_c) W_{11}(\delta \mathbf{x})$ of the ground state in the center-of-mass coordinates and first excited state in the relative coordinates. To exploit the negativity of the Wigner function we consider a Bell operator that has the dichotomic Weyl symbol $ B(\mathbf{x}_1.\mathbf{x}_2) = \text{sgn} \left ( 2\,\delta\mathbf{x}^2 - 1 \right ) $, which implies that the bound $|B_0|^2$ in BI (\ref{eq:BellCV}) is unity. We remark that the assumption that the Weyl symbol is dichotomic is different from the assumption that the spectrum of an operator is dichotomic. There is no general relationship between the bounds for an operator and the bounds for its symbol. The l.h.s. of BI (\ref{eq:BellCV}) can be evaluated using Eq.~(\ref{eq:weylExpec}), so that the BI turns into \begin{equation} |\langle \widehat{B} \rangle |^2 = \left | \int d^2 \delta\mathbf{x}\, d^2 \mathbf{x}_c\; W(\delta\mathbf{x}, \mathbf{x}_c)\, \text{sgn} \left ( 2\,\delta\mathbf{x}^2 - 1 \right ) \right |^2 \leq 1 \; . \end{equation} For the Bell state (\ref{eq:WignerBell}) we find $ |\langle \widehat{B} \rangle | = \frac{ 4}{\sqrt{e}} -1 \approx 1.426$, so that the generalized BI is violated in quantum mechanics. This violation is slightly larger than Cirel'son's bound $\sqrt{2}$ for the ratio between the maximal quantum mechanical value of $ |\langle \widehat{B} \rangle |$ and the bound of the CHSH inequality for LHV theories \cite{LMP4-93}. {\it Generalization. ---} The previous example of BI violation exhibits two special features: (i) in center-of-mass variables, the Wigner function $W$ takes a product form. The negativity of $W$ depends on just the relative phase space variable $\mathbf{\delta x}$. (ii) The Bell operator's symbol $B(\mathbf{\delta x})$ exactly cancels the negativity of $W$, so that $\langle \widehat{B} \rangle = \int d^2 \mathbf{\delta x} \left | W(\delta\mathbf{x}) \right |$. These features can be readily generalized. Consider a Wigner function $W(\mathbf{x}) $, with $\mathbf{x}\in \mathds{R}^{2n}$, that is negative on a measurable subset ${\cal E}_- \subset \mathds{R}^{2n}$ and positive on its complement ${\cal E}_+$. We introduce characteristic symbols $\chi_\pm(\mathbf{x})$ that are unity for $\mathbf{x}\in {\cal E}_\pm$ and zero otherwise. The symbol of the Bell operator is then given by $B(\mathbf{x}) = \chi_+(\mathbf{x})-\chi_-(\mathbf{x})$. The general relation between an operator on Hilbert space, $\mathcal{H} = L^2(\mathds{R}^n,\mathds{C})$, and its Weyl symbol is given by \begin{equation} \widehat{A} = \int d^{2n} \mathbf{x} \, \widehat{\Delta}(\mathbf{x})\, \text{Smb}[\widehat{A}\,] (\mathbf{x} ) \, , \label{eq:opFromSymbol} \end{equation} with the quantizer \cite{KO3} defined via its Dirac kernel \begin{equation} \langle q'|\widehat{\Delta}(\mathbf{x})|q''\rangle = (\pi\hbar)^{-n} \delta\left(q'+q'' -2q\right)\, e^{\frac{ i}{\hbar} p\cdot(q'-q'')}. 
\end{equation} The two operators corresponding to the characteristic symbols are therefore given by $ \widehat{\chi}_\pm = \int_{{\cal E}_\pm} d^{2n} \mathbf{x}\; \widehat{\Delta}(\mathbf{x}) $. Together, they form a partition of unity, $\widehat{\mathds{1}}= \widehat{\chi}_+ + \widehat{\chi}_-$, and commute. The non-local character of the star product (\ref{eq:groene}) causes ${\chi}_+ \star {\chi}_- \neq 0$, and as a result the Bell operator fulfills $\widehat{B}^2 \neq \hat{\mathds{1}}$. This implies that $\widehat{B}$ is not a dichotomic operator, even though its symbol is dichotomic in the sense that $B(\mathbf{x})^2 =1$. As a concrete example, we consider the special case that ${\cal E}_-$ consists of a disk of (dimensionless) radius $R$ around the origin of a 2D phase space. Since $\chi_-(\mathbf{x}) \in L^2(\mathds{R}^2,\mathds{C})$, the corresponding operator $\widehat{\chi}_-$ is Hilbert-Schmidt and so has a discrete spectrum. Furthermore, because of its spherical symmetry, $\widehat{\chi}_-$ commutes with the Hamiltonian of the harmonic oscillator, so that the eigenstates of $\widehat{\chi}_-$ are the harmonic energy eigenstates $|m \rangle $. The eigenvalues $\lambda_m(R)$ of $\widehat\chi_-$ are conveniently computed from the fact that they are the expectation values with respect to the state $|m\rangle$ \cite{PhysRevLett.83.3758}, \begin{equation}\label{TAO1} \lambda_m(R) = \frac 1{2\pi\hbar} \int d^2 \mathbf{x}\, \chi_-(\mathbf{x})\, \text{Smb}\left [ |m\rangle\langle m| \right ](\mathbf{x})\,. \end{equation} The integral above is the evaluation of the trace $\text{Tr}\, \chi_- |m\rangle \langle m|$ via the corresponding Weyl symbol. Using $ \text{Smb}\left [|m\rangle\langle m| \right ] (\mathbf{x}) = 2(-1)^m\,L_m(2\mathbf{x}^2 ) \exp(-\mathbf{x}^2 )$, where $L_m$ is the $m$th Laguerre polynomial, gives \begin{equation}\label{TAO2} \lambda_m(R) = \int_0^{R^2} (-1)^m\, L_m(2 r^2)\, e^{-r^2} \, d r^2\,. \end{equation} By replacing the Laguerre polynomials with their generating function (Eq.~(22.9.15) of Ref.~\cite{abramowitz}), we can derive a generating function for the eigenvalues of $\widehat{\chi}_-$, \begin{align} G(t) &= (t-1)^{-1} \left ( e^{R^2\frac{t-1}{t+1}}-1 \right ) . \end{align} The $m$th eigenvalue $\lambda_m$ is given by the $m$th coefficient of the Taylor expansion of $G(t)$ around $t=0$. For the Bell operator $\widehat{\mathds{1}}-2\widehat{\chi}_-$ and the relative coordinate prepared in an eigenstate of $\widehat{\chi}_-$, the BI then turns into the condition $|1-2\lambda_m|^2 \leq 1$. In Fig.~\ref{fig:BIviolation} we display the eigenvalues $1-2\lambda_m$ as a function of the excitation number $m$ for different choices of $R$. The example of the Bell state above corresponds to the case $m=1$ and $R=1/\sqrt{2}$. For larger $R$ and higher excitation numbers, a similar degree of BI violation can be obtained. \begin{figure} \caption{Eigenvalues $1-2\lambda_m$ of the Bell operator $\widehat{\mathds{1}}-2\widehat{\chi}_-$ as a function of the excitation number $m$, for different choices of the disk radius $R$.} \label{fig:BIviolation} \end{figure} To increase the degree of BI violation, one can choose ${\cal E}_-$ to agree with the area in phase space where the Wigner function $W_m$ of state $|m \rangle $ is negative, so that $\langle \widehat{B} \rangle = \int d^2 \mathbf{x} \left | W_m(\mathbf{x}) \right |$. \begin{figure} \caption{$\langle \widehat{B} \rangle = \int d^2 \mathbf{x}\, |W_m(\mathbf{x})|$ as a function of the excitation number $m$, for $m\leq 30$.} \label{fig:Wnn} \end{figure} The result for this expression for $m\leq 30$ is shown in Fig.~\ref{fig:Wnn}. BI violations that are much larger than in the CHSH case are possible.
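As an elementary consistency check of Eq.~(\ref{TAO2}), the case $m=1$, $R=1/\sqrt{2}$ of the Bell-state example can be evaluated in closed form: with $L_1(x)=1-x$ and the substitution $v=r^2$, one finds $\lambda_1(1/\sqrt{2}) = -\int_0^{1/2}(1-2v)\,e^{-v}\,dv = 1-2e^{-1/2}$, so that $1-2\lambda_1 = \frac{4}{\sqrt{e}}-1 \approx 1.426$, in agreement with the value of $|\langle \widehat{B}\rangle|$ obtained above directly from the Wigner function of the Bell state.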
However, because of the normalization and exponential decay of the Wigner function we conjecture that the maximal BI violation for arbitrary $m$ will be bounded. In conclusion, we have proposed generalized BIs for which the bound is determined by the Weyl symbol of the Bell operator. Examples of states for which the BI is strongly violated have been presented for bi-partite systems. Extensions of this work for multi-partite or interacting systems may reveal further insight into the differences between classical and quantum mechanics. \acknowledgments This project was funded by NSERC and ACEnet. T.~A.~O. is grateful for an appointment as James Chair, and K.-P.~M. for a UCR grant from St.~Francis Xavier University. \end{document}
\begin{document} \title{Conformal covariance for the powers of the Dirac operator } \author{Jean-Louis Clerc and Bent \O rsted} \date{ } \maketitle \begin{abstract} A new proof of the conformal covariance of the powers of the flat Dirac operator is obtained. The proof uses their relation with the Knapp-Stein intertwining operators for the spinorial principal series. We also treat the compact picture, i.e. the corresponding operators on the sphere, where certain polynomials of the Dirac operator appear. This gives a new representation-theoretic framework for earlier results in \cite{bo, es, lr}. \footnote{2000 Mathematics Subject Classification : 22E45, 43A80} \end{abstract} \section{Introduction} The powers of the (flat) Dirac operator are known to satisfy a covariance property with respect to the M\"obius group (see \cite{b,pq, es}). We give a new proof by interpreting the powers of the Dirac operator as \emph{residues} of a meromorphic family of Knapp-Stein intertwining operators. The proof is elementary and does not involve any Clifford analysis. We also give in the last section the corresponding meromorphic family and its residues on the sphere. This corresponds to the so-called \emph{compact picture} of the induced representations. In particular we find as residues on the sphere the polynomials $D(D^2 - 1^2) \cdots (D^2 - m^2)$ of the Dirac operator $D$. These were found earlier by other methods in \cite{es, lr} (in these references the polynomials are not with constant coefficients, and it still needs some consideration to see that they may be identified with the polynomials above). \section{The conformal group of the sphere} Let $\big(E^{n+1}, (\,.\,,\,.\,)\big)$ be a Euclidean vector space of dimension $n+1$, and denote by $(x,y)$ the inner product of two vectors $x$ and $y$. Let $S=S^n$ be the unit sphere of $E$. The \emph{Lorentz space} is \[ E^{1,n+1} =\mathbb R \oplus E= \{(t, x), t\in \mathbb R, x\in E^{n+1}\}\] endowed with the symmetric bilinear form $[\,.\,,\,.\,]$ given by \[ [(t,x), (u,y)] =tu -(x,y)\ .\] Let $\mathcal S$ be the set of isotropic lines in $E^{1,n+1}$. The map \[ S\ni x\longmapsto d_x = \mathbb R(1,x) \in \mathcal S\] is a one-to-one correspondance of $S$ onto $\mathcal S$, which is moreover a diffeomorphism for the canonical differential structures on $S$ (as a submanifold of $E^{n+1}$) and $\mathcal S$ (as a submanifold of the projective space of $E^{1,n+1}$). Let $G= SO_0(E^{1,n+1})\simeq SO_0(1,n+1)$ be the connected component of the neutral element in the group of isometries of $E^{1,n+1}$. Then $G$ acts on $\mathcal S$ and hence on $S$. This action is \emph{conformal} in the sense that for any $g$ in $G$ and $x\in S$, the differential $Dg(x) : T_x\longrightarrow T_{g(x)}$ is a similarity of the tangent space $T_x$ of $S$ at $x$, i.e. \[ Dg(x) = \kappa(g,x)\, r(g,x)\ ,\] where $r(g,x)$ is a positive isometry of $T_x$ into $T_{g(x)}$ and $\kappa(g,x) >0$ is the \emph {conformal factor} of $g$ at $x$. The group $K=SO(E^{n+1})\simeq SO(n+1)$ is a maximal compact subgroup of $G$, associated to the standard Cartan involution $g\longmapsto \theta(g) = (g^t)^{-1}$. The group $K$ acts transitively on $S$. Let $(e_0,\dots, e_n)$ be an orthonormal basis of $E^{n+1}$, and choose $e_0$ as origin in $S$. Let $E^n= e_0^\perp$ be the hyperplane orthogonal to $e_0$ in of $E^{n+1}$. The stabilizer of $e_0$ in $K$ is the subgroup $M\simeq SO(n)$, and this gives a realization of $S\simeq K/M$ as a \emph{compact Riemannian symmetric space}. 
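Let us note in passing how the conformal factor can be computed from the linear action. Since $g$ preserves isotropic lines, for $x\in S$ one may write $g\,(1,x) = \lambda(g,x)\,\big(1, g(x)\big)$ with $\lambda(g,x)>0$. Differentiating along a curve in $S$ and using $[(1,y),(1,y)]=0$ and $[(1,y),(0,\dot y)]=0$ for $y\in S$, one finds $\vert Dg(x)\,\xi\vert = \lambda(g,x)^{-1}\,\vert \xi\vert$ for $\xi\in T_x$, so that $\kappa(g,x)=\lambda(g,x)^{-1}$.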
On the other hand, let $P$ be the stabilizer of $e_0$ in $G$. Then $P$ is a parabolic subgroup of $G$, and $S\simeq G/P$ is a realization of $S$ as a \emph{flag manifold}. Let $e_{-1}=(1,0, \dots, 0)\in E^{1,n+1}$, so that $\{e_{-1},e_0,\dots, e_n\}$ is a basis of $E^{1,n+1}$. Introduce the subgroups of $G$ defined by \[A = \left\{ a_s = \begin{pmatrix}\cosh s&\sinh s&0&\dots&0\\\sinh s& \cosh s&0&\dots&0\\0&0&1 & &\\ \vdots&\vdots&&\ddots&\\0&0&&&1\end{pmatrix},\quad s\in \mathbb R\right\} \] and \[N= \left\{\ n_u =\begin{pmatrix}1+\frac{\vert u\vert^2}{2}&-\frac{\vert u\vert^2}{2}&&u^t&\\\frac{\vert u\vert^ 2}{2}&1-\frac{\vert u\vert^2}{2}&&u^t&\\ &&1&& \\u&-u&&\ddots&\\&&&&1 \end{pmatrix},\quad u\in \mathbb R^n\simeq E^n\right\}\ . \] Then $P=MAN$ is a Langlands decomposition of $P$. Let $\overline N = \theta N$ be the opposite unipotent subgroup, given by \[ \overline N = \left\{\ \overline n_v =\begin{pmatrix}1+\frac{\vert v\vert^2}{2}&\frac{\vert v\vert^2}{2}&&v^t&\\-\frac{\vert v \vert^ 2}{2}&1-\frac{\vert v\vert^2}{2}&&-v^t&\\ &&1&& \\v&v&&\ddots&\\&&&&1 \end{pmatrix},\quad v\in \mathbb R^n\right\}\ . \] We also let \[w=\begin{pmatrix} 1& & & & &\\& -1& & & & &\\ & &-1&& & &\\ & & &1& & &\\ & & & & \ddots &\\ & & & & & 1\end{pmatrix}\ .\] The element $w$ is in $K$, acts on $S^n$ by $(x_0,x_1,x_2,\dots, x_n) \longmapsto (-x_0,-x_1,x_2,\dots, x_n)$, satisfies $wa_sw^{-1}=a_{-s}$, thus realizes the non trivial Weyl group element. The map \[c : v \longmapsto \Big(\frac{\vert v\vert^2-1}{\vert v \vert^2+1},\ \frac{2}{\vert v\vert^2+1}\,v \Big) \] is a diffeomorphism from $E^n$ onto $S\setminus\{e_0\} $. Its inverse is the classical \emph{stereographic projection} from the source ${e_0}$. As a convention let $\overline E^n = E^n\cup \infty$ be the one-point compactification of $E_n$. Then clearly the map $c$ can be extended to $\overline E_n$ by setting $c(\infty) = e_0$, to get a one-to-one correspondance between $\overline E^n$ and $S^n$. This allows to transfer the action of $G$ on $S$ to an action of $G$ on $\overline E^n$. In this way, the group $G$ is realized as a group of rational conformal transformations of $E^n$, usually called the \emph{M\"obius group} $M_+(\overline E_n)$. The subgroup $P$ is now realized by affine similarities, the group $M$ is the group of rotations of $E$ with center at $0$, $A$ is the group of positive dilations with center $0$ and $N$ is the group of translations of $E$. The element $w$ acts as the \emph{twisted inversion} \[(x_1,x_2,\dots, x_n) \longmapsto \Big(-\frac{x_1}{\vert x\vert^2}, \frac{x_2}{\vert x\vert^2}, \dots, \frac{x_n}{\vert x\vert^2}\Big)\ . \] When using this model for the sphere, we refer to the \emph{noncompact picture}. \section{The Vahlen-Maass-Ahlfors realization of the twofold covering of the M\"obius group} There is a quite useful realization of (a twofold covering of) $G$ acting on $\overline E_n$ via the Clifford algebra $Cl_{n-1}$, initiated by Vahlen (\cite{v}), revived by Maass (\cite{m}) and well presented by Ahlfors (\cite{a}, see also \cite{gm}, \cite{r1}). Let $E^{n-1}$ be the vector subspace generated by $e_2,\dots, e_n$, and form the \emph{Clifford algebra} $Cl_{n-1} = Cl(E^{n-1})$, i.e. the algebra (with unit $1$) generated by the vector space $E^{n-1}$ and the relations \[uv+vu +2(u,v) = 0\ . \] The space $E^n$ is identified with the subspace $\mathbb R \oplus E^{n-1}$ of $Cl(E^{n-1})$, the element $e_1$ corresponding to the unit of the Clifford algebra. 
Following Ahlfors, elements of $E^n$ will be called \emph{vectors}. Recall the three involutions of the Clifford algebra: the \emph{principal automorphism} $a\mapsto a'$, obtained by sending $e_j$ to $-e_j$ for $2\leq j\leq n$, the \emph{reversion} $a\mapsto a^*$, which is the anti-automorphism mapping $e_j$ to $e_j$ for $2\leq j \leq n$, and the \emph{conjugation} $a\mapsto \overline{a}$, which is the composition of the first two. There exists a canonical inner product on $Cl_{n-1}$ extending the inner product on $E^{n-1}$; the corresponding norm is denoted by $\vert \ \vert$. For vectors $x=x^*$, $x'=\overline x$, and $x\overline x = x_1^2+\dots + x_n^2=\vert x\vert ^2$, so that any vector $x\neq 0$ is invertible with inverse equal to $x^{-1} = \frac{\overline x}{\vert x\vert^2}$. The \emph{Clifford group} $\Gamma_n$ is the set of all elements of the Clifford algebra $Cl_{n-1}$ which can be written as products of non-zero vectors. If $a, b$ are in $\Gamma_n$, then \[ a\overline a = \vert a\vert^2\ ,\quad \vert ab\vert = \vert a\vert \vert b\vert\ . \] Let $a$ be in $\Gamma_n$. Then $a$ is invertible and $a^{-1} = \frac{\overline a}{\vert a\vert^2}$. For $x$ in $E^n$, $ax{a'}^{-1}\in E^n$, and the map $x\mapsto ax{a'}^{-1}$ is a positive isometry of $E^n$. \begin{definition}\label{Cmatrix} A matrix $g=\begin{pmatrix} a&b\\ c&d\end{pmatrix}$ is a \emph{Clifford matrix} if $i) \ a,b,c,d \in \Gamma_n \cup \{0\}$; $ii)\ ad^* - bc^* = 1$; $iii)\ ab^* \text{ and } cd^* \in E^n$. \end{definition} \begin{theorem} Under matrix multiplication, the Clifford matrices form a group, denoted by $SL_2(\Gamma_n)$. \end{theorem} The following observation will be useful later: for $a, b\in \Gamma_n$, the conditions $ab^*\in E^n$ and $a^{-1}b \in E^n$ are equivalent. Let $g=\begin{pmatrix} a&b\\ c&d\end{pmatrix}$ be in $SL_2(\Gamma_n)$ and $x$ in $\overline E^n$. We recall the meaning of the expression $g(x) = (ax+b)(cx+d)^{-1}$. First, $cx+d$ is either $0$ or invertible: $\bullet$ if $c\neq 0$, $c$ is in $\Gamma_n$. Condition $iii)$ and the remark imply that $c^{-1}d\in E^n$, so that $cx+d = c(x+c^{-1}d)$ is either $0$ or invertible; $\bullet$ if $c=0$, then $ii)$ implies that $d\neq 0$, so that $cx+d=d$ is in $\Gamma_n$, hence is invertible. Next, observe that $ax+b$ and $cx+d$ cannot both be $0$. In fact, assume the contrary. By $ii)$, $a$ and $c$ cannot both be $0$. So, assume that $a\neq 0$. Hence $a$ is invertible and $x=-a^{-1} b= -b^* {a^*}^{-1}$, as $x=x^*$. Hence \[0 = cx+d = -c b^* (a^*)^{-1} +d\ .\] Multiplying both sides on the right by $a^*$ yields $-cb^*+d a^*= 0$, and applying the reversion gives $ad^* - bc^* = 0$, which is incompatible with $ii)$. A similar argument holds if $c\neq 0$. So, if $cx+d\neq 0$, $(ax+b)(cx+d)^{-1}$ is a well-defined element of $Cl_{n-1}$. If $cx+d=0$, then set $(ax+b)(cx+d)^{-1} =\infty$. Finally, let $(a\infty +b)(c\infty +d)^{-1} = ac^{-1}$ if $c\neq 0$ and $=\infty$ if $c=0$. \begin{theorem} ${ }$ $i)$ For any $g$ in $SL_2(\Gamma_n)$ and $x\in \overline E^n$, $g(x) = (ax+b)(cx+d)^{-1}$ belongs to $\overline E^n$; $ii)$ the map $\iota_g : \overline E^n \ni x\mapsto g(x) \in \overline E^n$ belongs to the M\"obius group $M(\overline E^n)$; $iii)$ the homomorphism $g\mapsto \iota_g$ is a twofold covering of the M\"obius group, with kernel $\pm\Id_2$. \end{theorem} See \cite{a} for a proof. In the sequel, we let $\widetilde G = SL_2(\Gamma_n)$.
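To see what this construction amounts to in the simplest case, take $n=2$. Then $Cl_1 = Cl(\mathbb R e_2)$ with $e_2^2=-1$ is just $\mathbb C$, the vectors $x = x_1 + x_2 e_2$ are the complex numbers, the reversion is the identity and the conjugation is complex conjugation, so that $\Gamma_2 = \mathbb C\setminus\{0\}$ and the conditions $i)$--$iii)$ reduce to $ad-bc=1$. Hence $SL_2(\Gamma_2) = SL_2(\mathbb C)$, the action $g(x) = (ax+b)(cx+d)^{-1}$ is the usual action by fractional linear transformations on $\overline{\mathbb C}\simeq S^2$, and one recovers the classical twofold covering $SL_2(\mathbb C)\longrightarrow PSL_2(\mathbb C)$ of the M\"obius group of the $2$-sphere.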
The stabilizer of $\infty$ in $\widetilde G$ is the subgroup \[\widetilde P=\Bigg\{ \begin{pmatrix} a&b\\0&d\end{pmatrix},\quad ad^* =1,\ ab^*\in E^n\Bigg\}\ .\] The Langlands decomposition of $\widetilde P$ is $\widetilde P = \widetilde L N = \widetilde M AN$, where \[ N= \Bigg\{ \begin{pmatrix} 1&v\\0&1\end{pmatrix},\quad v\in E^n \Bigg\}\] \[\widetilde L= \Bigg\{ \begin{pmatrix} a&0\\0&{a^*}^{-1}\end{pmatrix},\quad a\in \Gamma^n \Bigg\}\] \[ A = \Bigg\{a_t= \begin{pmatrix} t&0\\0&t^{-1}\end{pmatrix},\quad t>0\Bigg\}\] and the group $\widetilde M$ is realized as \[\widetilde M =\Bigg\{ \begin{pmatrix}m&0\\0&{m^*}^{-1}\end{pmatrix}, \quad m\in \Gamma_n, \vert m\vert =1 \Bigg\}\ . \] The element $\begin{pmatrix}m&0\\0&{m^*}^{-1}\end{pmatrix}$ of $\widetilde M$ acts on $E^n$ by $\sigma_m : x\longmapsto mxm^*$ and the map $\sigma : m\longmapsto \sigma_m$ is a twofold covering of $\widetilde M \simeq Spin_n$ onto $SO(n)$. Let $H = \begin{pmatrix} 1&0\\0&-1\end{pmatrix}$, so that, for $t>0$ $a_t=\exp \log t H$. A complex linear form on $\mathfrak a$ is identified with the complex number $\lambda$ equal to the value of the linear from on the element $H$. The half-sum of the roots $\rho$ corresponds to the number $n$, and we let $a_t^\lambda= t^\lambda , t>0, \lambda\in \mathbb C$. The group $N$ acts by translations $x\mapsto x+v$. An element of $\overline N$ is of the form $\begin{pmatrix}1&0\\x&1 \end{pmatrix}$ where $x\in E^n$. We will identify $\overline N$ with $E_n$. The non trivial Weyl group element is realized by the matrix $ w= \begin{pmatrix} 0&-1\\ 1&0 \end{pmatrix}$. The \emph{Bruhat decomposition} of an element $g=\begin{pmatrix} a&b\\c&d\end{pmatrix}$, where $a\neq 0$ is given by \[\begin{pmatrix} a&b\\c&d\end{pmatrix} = \begin{pmatrix} 1&0\\ca^{-1}&1\end{pmatrix} \begin{pmatrix} a&0\\0&{a^*}^{-1}\end{pmatrix}\begin{pmatrix} 1&a^{-1}b\\0&1\end{pmatrix}\ . \] The proof of this identity reduces to \ $\ ca^{-1}b+{a^*}^{-1}= d\ $\ , which is a consequence of the assumptions on $a,b,c$ and $d$ (see Definition \ref{Cmatrix}). \begin{proposition} Let $g\in \widetilde G$, $x,y \in E^n$ and assume that $g(x), g(y)\in E^n$. Then \begin{equation}\label{globalcov} g(x)-g(y) = {(cy+d)^*}^{-1}(x-y) (cx+d)^{-1}\ . \end{equation} \end{proposition} \begin{proposition}\label{localcov} Let $g$ be in $\widetilde G$, $x\in E^n$ and assume that $g(x)\in E^n$. Then the differential of the action of $g$ at $x$ is given by \begin{equation} Dg(x) \xi= {(cx+d)^*}^{-1}\xi (cx+d)^{-1}\ .\end{equation} The conformal factor of $g$ at $x$ is given by $\kappa(g,x) = \vert cx+d\vert^{-2}$ and the rotation factor is given by $\sigma((cx+d)' \vert cx+d\vert^{-1})$. \end{proposition} Proposition \ref{globalcov} is proved in \cite{a} (see also \cite{gm}). The formula for the differential in Proposition \ref{localcov} is a consequence. For the last part of the proposition, observe that $a=(cx+d)$ is in $\Gamma_n$ and \[{a^*}^{-1} \xi a^{-1} = \vert a\vert^{-2} \Big(\frac{a}{\vert a\vert}\Big)^{*-1}\,\xi\, \Big(\frac{a}{\vert a\vert}\Big)^{-1}= \vert a\vert^{-2}\,\frac{a'}{\vert a\vert}\,\xi\, \Big(\frac{a'}{\vert a\vert}\Big)^*\ . \] Another formula will be useful later. \begin{proposition} Let $g=\begin{pmatrix} a&b\\c&d\end{pmatrix}$ be in $SL_2(\Gamma_n)$. Let $x\in E^n$, assume that $g$ is defined at $x$ and let $y=(ax+b)(cx+d)^{-1}$. 
Then \begin {equation}\label{invcov} (-c^*y+a^*) = (cx+d)^{-1} \end{equation} \end{proposition} \begin{proof} The identity follows from \[ (-c^*y+a^*)(cx+d)= -c^*(ax+b) +a^*(cx+d) = -c^*b+a^*d\ ,\] and the fact that $a^*d -c^*b=1$, a consequence of the conditions satisfied by $a,b,c,d$ (see Definition\ref{Cmatrix}). \end{proof} The map $\widetilde \theta$ defined on $\widetilde G$ by \[\widetilde \theta \ \begin{pmatrix} a&b\\c&d\end{pmatrix} = \begin{pmatrix} d'&-c'\\-b'&a'\end{pmatrix} \] is an involution of $\widetilde G$, which covers the standard involution $\theta$ on $G$. The fixed points set of $\widetilde \theta$ is the subgroup $\widetilde K$ given by \[\widetilde K = \Bigg\{ \begin{pmatrix} a&b\\-b'&a'\end{pmatrix}, a,b,\in \Gamma_n\cup \{0\}, \vert a\vert^2+\vert b\vert^2 = 1, ab^*\in E^n\Bigg\}\ . \] The subgroup $\widetilde K$ is a maximal compact subgroup of $\widetilde G$, isomorphic to $Spin(n+1)$ and is a twofold covering of $K$. The sphere $S^n$ can be interpreted as $\widetilde G/\widetilde P$ (flag manifold) and as $\widetilde K/\widetilde M$ (compact symmetric space). To determine the Lie algebra of $Spin_n$ in this model, we will describe one-parameter groups, and find the associated vector field. First consider, for $2\leq j\leq r$ \[ m_t = \cos \frac{t}{2} +\sin \frac{t}{2} e_j, \quad t\in \mathbb R\ .\] Then \[\sigma_{m_t}= \begin{pmatrix}\cos t & & -\sin t& &\\&1&&\\ \sin t& &\cos t&&\\ & & &1&\\\end{pmatrix}\] For $2\leq j<k$, let \[ m_t = \cos \frac{t}{2} e_j + \sin \frac{t}{2} e_k\ . \] Then \[ \sigma_{m_t} = \begin{pmatrix} 1&&&&&&&&\\&&\cos t&&&-\sin t&&\\&&&&1&&&&\\&&\sin t&&&\cos t&&\\&&&&&&&&1\end{pmatrix} \] So, a basis of the Lie algebra of the spin group is \[\frac{1}{2} e_j,\quad 2\leq j \leq n,\quad \frac{1}{2} e_je_k,\quad 2\leq j<k\leq n \ . \] \section{The spinor representation} We recall some well-known results on the spinor representations. We will use the standard realization of $Spin$ in $ Cl_n$, (not to be confused with the realization of the same group in $Cl_{n-1}$ we used in the previous section), namely \[Spin_n=\{ a=v_1\dots v_{2k},\quad v_j\in \mathbb R\oplus \mathbb R^n, \vert v_j\vert =1, k\in \mathbb N\}\ . \] A finite dimensional complex Hilbert space $\mathcal H$ is said to be a \emph{Clifford module} if there exists $n$ skew-Hermitian operators $E_1,\dots, E_n$ on $\mathcal H$, such that \begin{equation}\label{Ej} E_iE_j + E_jE_i=-2\delta_{ij} \Id, \qquad 1\leq i,j\leq n\ . \end{equation} By the universal property of the Clifford algebra, there exists a (uniquely defined) representation $(\tau, \mathcal H)$ of the \emph{complex} Clifford algebra $\mathbb Cl_n$, which satisfies $\tau(e_j)=E_j$, and conversely, any representation of the Clifford algebra is obtained in this manner. Note that, for any $a\in \mathbb Cl_n$, $\tau(\overline a)$ is the adjoint of $\tau(a)$ for the Hilbert product on $\Sigma $. Viewing $Spin_n$ as a subset of $\mathbb Cl_n$, we obtain by restriction of $\tau$ a representation of $Spin_n$ on $\mathcal H$, still denoted by $\tau$, which is unitary by the last remark. The results concerning the \emph{irreducible} representations of $\mathbb Cl_n$ depend on the parity of $n$. Assume first that $n$ is \emph{even}. Then the \emph{complex Clifford algebra} $\mathbb Cl_n$ has a unique irreducible representation (up to isomorphism) for $n$ even. 
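For instance, for $n=2$ one may take $\mathcal H = \mathbb C^2$ with $E_1 = i\sigma_1$ and $E_2 = i\sigma_2$, where $\sigma_1, \sigma_2$ are the Pauli matrices: these operators are skew-Hermitian and satisfy \eqref{Ej}, and the corresponding representation of $\mathbb Cl_2 \simeq M_2(\mathbb C)$ on $\mathbb C^2$ is (up to isomorphism) the unique irreducible one.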
Let $\Sigma_n$ the complex (finite-dimensional) Hilbert space on which the representation acts, and denote by $\sigma : \mathbb Cl_n \longrightarrow \End(\Sigma_n)$ the representation. When restricted to $Spin_n$ (or equivalently to the even part $\mathbb Cl_n^{ev}$), the representation $(\sigma, \Sigma_n)$ splits into two inequivalent representations. In case $n$ is \emph{odd}, then there are two inequivalent irreducible representations, denoted by $(\sigma^+, \Sigma^+_n)$ and $(\sigma^-, \Sigma_n^-)$. To distinguish them, let \[\omega^\mathbb C = i^{[\frac{n+1}{2}]}e_1e_2\dots e_n \] be the \emph{volume element}. Observe that the oddness of $n$ implies that $\omega^\mathbb C$ is in the center of $\mathbb Cl_n$ . By Schur's lemma, $\tau(\omega)=\pm\Id$ for $\tau$ an irreducible representation. Hence $\sigma^\pm(\omega^\mathbb C)=\pm \Id$ on $\Sigma_n^\pm$, which distinguishes the two representations, and shows that they are not equivalent. When restricted to $\mathbb Cl_n^{ev}$ (or to $Spin_n$), both restrictions of $\sigma^\pm$ stay irreducible and are equivalent representations. Let $\Sigma_n=\Sigma_n^+\oplus \Sigma_n^-$, and let $\sigma = \sigma^+\oplus \sigma^-$. In any case of parity, we call $(\sigma, \Sigma_n)$ \emph{the spinor representation} of $\mathbb Cl_n$ (or of $Spin_n$). On $\Sigma$, there is a Hermitian scalar product for which $\sigma(x)$ is unitary for any $x\in \mathbb R^n$ with unit length. For this inner product, $\Sigma$ is a Clifford module. For $1\leq j\leq n$, let $E_j=\sigma(e_j)$. Then the $E_j$'s are skew Hermitian and satisfy the defining relations \eqref{Ej}. Finally, we have to connect the standard realization of $Spin_n$ with the realization of $\widetilde M$ in the Vahlen-Maass-Ahlfors approach. The linear map $\gamma : E^{n-1}\longmapsto \mathbb Cl^{ev}_n$ given by \[\gamma(e_j) = e_1e_j, \quad 2\leq j\leq n \] satisfies $\gamma(e_i)\gamma(e_j) +\gamma(e_j)\gamma(e_i)= -2\delta_{ij} $ and hence can be extended to yield an isomorphism (still denoted by $\gamma$) of $Cl(E^{n-1})$ onto $Cl_n^{ev}$. The map $\gamma$ induces an isomorphism of $\widetilde M$ onto $Spin_n$. For $a\in Cl(E^{n-1})$, let $\tau(a) = \sigma(\gamma(a))$. Then $(\tau, \Sigma)$ is a representation of $Cl(E^{n-1})$ and, by restriction to $\widetilde M$, a representation of $\widetilde M$, called the \emph{spinor representation}. For $x\in E^n$, \begin{equation}\tau(x_1+x_2e_2+\dots+x_ne_n) = x_1\Id + x_2 E_1E_2+\dots + x_n E_1E_n\ . \end{equation} Recall that the element $\widetilde w = \begin{pmatrix} 0&-1\\1&0\end{pmatrix}$ is a representative of the nontrivial Weyl group element. \begin{proposition} Let $\tau$ be the spinor representation of $\widetilde M$, and let $w\tau$ be the representation of $\widetilde M$ given by $w\tau(m) = \tau(\widetilde w^{-1}m\widetilde w)$. Then \[w\tau=\tau'\ , \] where $\tau'$ is the restriction to $\widetilde M$ of the representation of $\mathbb Cl_{n-1}$ given by $\tau'(a)=\tau(a')$, for $a\in \mathbb Cl_{n-1}$. \end{proposition} \begin{proof} For $a\in \Gamma_n$, \[ \begin{pmatrix} 0&1\\-1&0\end{pmatrix} \begin{pmatrix} a&0\\0&{a^*}^{-1}\end{pmatrix} \begin{pmatrix} 0&-1\\1&0\end{pmatrix}= \begin{pmatrix}{a^*}^{-1}&0\\0&a\end{pmatrix}\ . \] If moreover, $\vert a \vert =1$, then ${a^*}^{-1}= a'$, so that the automorphism of $\widetilde M$ induced by $\widetilde w$ coincides with the principal automorphism. The statement follows. \end{proof} \begin{proposition} For any $a\in Cl(E^{n-1})$, \begin{equation}\label{tautau'} E_1\,\tau'(a) = \tau(a)\,E_1\ . 
\end{equation} \end{proposition} \begin{proof} It suffices to verify the statement for any vector $x\in E^n$. Let $x= x_1+x_2e_2+\dots +x_ne_n$. Then \[ \tau(x)E_1 = (x_1\Id + x_2 E_1E_2+\dots x_nE_1E_n)\,E_1 = x_1 E_1 +x_2E_2+\dots +x_nE_n\] whereas \[ E_1 \tau'(x) = E_1\,(x_1\Id-x_2E_1E_2-\dots -x_nE_1E_n) = x_1 E_1+x_2E_2+\dots +x_nE_n\ .\] \end{proof} \section{ The principal spinorial series of $\widetilde G$ and the associated intertwining operators} Let us first recall the general theory of Knapp-Stein intertwining operators. Let $G$ be a semisimple Lie group (connected and with finite center), $P$ a minimal parabolic subgroup. Let $\theta$ be a Cartan involution, with fixed points $K$, which is a maximal compact subgroup of $G$. Let $P=MAN$ be a Langlands decomposition of $P$ adapted to $\theta$. In particular, $M=P\cap K$. Let $M'$ be the normalizer of $A$ and $W\simeq M'/M$ be the corresponding Weyl group. Let $X=G/P\simeq K/M$, and let $o=eP$ be the origin in $X$. Let $\mathfrak a$ be the Lie algebra of $A$, and let $\exp$ be the exponential map from $\mathfrak a$ onto $A$. Let $\rho\in \mathfrak a'$ (the dual of $\mathfrak a$) be the half-sum of the positive roots relative to $N$. The map \[ K\times A\times N \ni (k,a,n)\longmapsto kan \in G\] is a diffeomorphism onto $G$. If $g$ is in $G$, we write $g=\kappa(a) \exp H(g) \nu(g)$ for the \emph{Iwasawa decomposition} . Let $\overline N = \theta N$. The map \[\overline N\times M\times A\times N \ni (\overline n,m,a,n)\longmapsto \overline nman \in G\] is a diffeomorphism onto a dense open set of $G$. For $g$ an element in the image, let \[ g=\overline n(g) m(g)a(g)n(g)\] be the corresponding \emph{Bruhat decomposition}. Let $\tau$ be a unitary representation of $M$ on a (finite dimensional) Hilbert space $V$ and let $\lambda$ be a complex linear form on $\mathfrak a = Lie(A)$. Let $\tau_\lambda$ be the representation of $P$ defined by \[ \tau_\lambda(man) = a^\lambda \tau(m), \quad m\in M,\ a\in A,\ n\in N\ , \] where $a^\lambda = e^{\lambda(\log a)}$. Form the \emph{induced representation} $\pi_{\tau,\lambda}= \Ind_{MAN}^G (\sigma\otimes\exp \lambda\otimes 1)$. We will work with the \emph{noncompact picture} of this induced representation. Introduce the space $L^2_\lambda(\overline N)$ as the space of functions $f$ on $\overline N$, valued in $V$, which satisfy \[ \int_{\overline N} \vert f(x)\vert^2 e^{2\Re \lambda \big(H(\overline n)\big)} d\overline n <+\infty\ .\] We state the noncompact realization of the induced representation as a proposition (see \cite{kn} ch VII). \begin{proposition} For $g\in G$, \begin{equation} \pi_{\tau,\lambda}(g) f (\overline n) = e^{-(\lambda+\rho)\log a(g^{-1}\overline n)}\tau\big(m(g^{-1} \overline n)\big)^{-1} f\big(\overline n(g^{-1}\overline n)\big)\ . \end{equation} defines a representation $\pi_{\tau,\lambda}$ of $G$ by bounded operators on $L^2_\lambda(\overline N)$. \end{proposition} Let $w$ be an element of $W$, and choose a representative (still denoted by $w$) in $M'$. Let $w\tau$ be the representation of $\widetilde M$ defined by $w\tau(m) = \tau(\widetilde w^{-1}m\widetilde w)$. The Knapp-Stein theory of intertwining operators offers a construction of an intertwining operator between $\pi_{\tau,\lambda}$ and $\pi_{w\tau,w\lambda}$. We state it as a proposition (see \cite{kn} ch VII, (7.39). 
Set, for $f$ a function on $\overline N$ and $x\in \overline N$ \[J_{\tau,\lambda,w} f(x)= \int_{\overline N} e^{(-\rho+\lambda) \log a(w^{-1}\overline n)} \tau(m(w^{-1}\overline n)) f(x\overline n) d\overline n \] \begin{proposition} The operator $J_\lambda$ is an intertwining operator between $\pi_{\tau,\lambda}$ and $\pi_{w\tau,w\lambda}$, namely for any $g\in G$, \[J_{\tau,\lambda,w}\circ \pi_{\tau, \lambda}(g) = \pi_{w\tau, w\tau_\lambda}(g)\circ J_{\tau,\lambda,w}\ . \] \end{proposition} The proposition lets aside the convergence of the integral, which is true for $\lambda$ in some open set of $\mathfrak a'_\mathbb C$. The intertwining operator $J_{\tau,\lambda,w}$ can then be extended meromorphically to the whole space. This general scheme applies to our situation. Let $(\tau, \Sigma)$ be the spinor representation of $\widetilde M$ and let $\lambda\in \mathbb C$. Define the representation $\tau_\lambda$ of $\widetilde P$ by \[\tau_\lambda(ma_tn) = t^{2\lambda} \tau(m)\ . \] Let $\pi_\lambda= {\Ind\, } _{\widetilde P}^{\widetilde G}\, \tau_\lambda$. Following the procedure just described above, we obtain the following realization of these representations (\emph{noncompact picture}). \begin{theorem} For $\lambda\in \mathbb C$ and $g=\begin{pmatrix} a&b\\c&d\end{pmatrix}$, the formula \[\pi_\lambda(g) f(x) = \vert d^*-b^*x\vert^{-2\lambda-n}\, {\tau\Bigg(\frac{d^*-b^*x}{\vert d^*-b^*x\vert}\Bigg)}^{-1}\,f\big((-c^*+a^*x)(d^*-b^*x)^{-1}\big) \] defines a representation of $\widetilde G$ by bounded operators on $L^2_\lambda (E^n,\Sigma)$. \end{theorem} \begin{proof} For $x\in E$, \[g^{-1}\overline n_x= \begin{pmatrix} d^*-b^*x&-b^*\\-c^* +a^*x&a^*\end{pmatrix} .\] The components in the Bruhat decomposition of this element are \[(ma)(g^{-1} \overline n_x) = d^*-b^*x, \quad \overline n (g^{-1} \overline n_x)= (-c^*+a^*x)(d^*-b^*x)^{-1}\ . \] Hence the formula is a consequence of the general statement. \end{proof} There is another closely related representation. In fact, the same construction can be done using the representation $\tau'$ of $\widetilde M$ we introduced earlier instead of $\tau$. The corresponding representation will be denoted by $\pi_\lambda'$. It is related to $\pi_\lambda$ be the following elementary result, which follows from Proposition \ref{tautau'}. \begin{proposition} For any $g\in \widetilde G$ \begin{equation}\label{E1intw} E_1\pi'_\lambda (g) = \pi_\lambda(g)E_1\ . \end{equation} \end{proposition} We now apply Knapp-Stein theory to get an intertwining operator between $\pi_\lambda$ and $\pi_{-\lambda}'$. Let $\overline n = \begin{pmatrix}1&0\\y&1\end{pmatrix}$. Then \[ w^{-1}\overline n = \begin{pmatrix}0&1\\-1&0\end{pmatrix} \begin{pmatrix}1&0\\y&1\end{pmatrix}=\begin{pmatrix} y&1\\-1&0\end{pmatrix} \] so that \[t(w^{-1} \overline n) = \vert y\vert, \quad m(w^{-1}\overline n) = \frac{y}{\vert y\vert}\ .\] Hence the Knapp-Stein operator is given by \[J_\lambda f(x) = \int_{E^n} \vert y\vert^{2\lambda-n} \tau\Big(\frac{y}{\vert y\vert}\Big)f(x-y) dy\] \begin{theorem} For $\lambda\in \mathbb C$, let \begin{equation} J_\lambda f(x) = \int_{E^n} \vert y\vert^{2\lambda-n} \tau\Big(\frac{y}{\vert y\vert}\Big)f(x-y) dy\ . \end{equation} For $\Re \lambda >0$ and $f\in L^2_\lambda(E^n, \Sigma)$, the integral converges and the operator $J_\lambda$ thus defined is bounded on $L^2_\lambda(E^n, \Sigma)$ and for any $g\in \widetilde G$ satisfies \begin{equation}\label{intw} J_\lambda\, \pi_\lambda(g) =\pi_{-\lambda}'(g)\, J_\lambda\ . 
\end{equation} \end{theorem} Although this is a consequence of the Knapp-Stein theory, we may offer a direct proof (compare with \cite{gm}). \begin{proof} Let $g=\begin{pmatrix}a&b\\c&d\end{pmatrix}$. Let $\gamma=\begin{pmatrix} a^*&-c^*\\-b^*&d^*\end{pmatrix}$ which is easily seen to be an element of $\widetilde G$. Let $f$ be in $\mathcal C^\infty_c(E^n, \Sigma)$. Then \[J_\lambda f (x) = \int_{E^n} \vert x-y\vert^{2\lambda-n} \tau\Big(\frac{x-y}{\vert x-y\vert}\Big)f(y)\,dy\ , \] so that \[\pi'_{-\lambda} (g)J_\lambda f(x) = \dots\]\[= \vert d^*-b^*x\vert^{2\lambda-n}\tau'\Big(\frac{d^*-b^*x}{\vert d^*-b^*x}\Big)^{-1}\int_{E^n}\vert \gamma(x)-y\vert^{2\lambda-n}\tau\Big(\frac{\gamma(x)-y}{\vert \gamma(x)-y\vert}\Big) f(y)dy\ . \] Use the change of variable $y=\gamma(z)$, and hence $dy = \vert d^*-b^*z\vert^{-2n} dz$ to get \[\vert d^*-b^*x\vert^{-2\lambda-n} \tau'\Big(\frac{d^*-b^*x)}{\vert d^*-b^*x\vert}\Big)^{-1} \int_{E^n}\vert \gamma(x)-\gamma(z)\vert^{2\lambda-n} \tau\Big(\frac{\gamma(x)-\gamma(z)}{\vert \gamma(x)-\gamma(z)}\Big) \vert d^*-b^*z\vert^{-2n} f\big(\gamma(z)\big)dz\ . \] Now, using \eqref{globalcov} \[\gamma(x)-\gamma(z) = {(d^*-b^*x)^*}^{-1}(x-z)(d^*-b^*z)^{-1} \] \[ \vert \gamma(x)-\gamma(z)\vert = \vert d^*-b^*x\vert^{-1} \vert x-z\vert \vert d^*z-b^*z\vert^{-1} \] \[ \frac{\gamma(x)-\gamma(z)}{\vert\gamma(x)-\gamma(z) \vert} ={\Big( \frac{d^*-b^*x)}{\vert d^*-b^*x\vert}\Big)^*}^{-1} \frac{x-z}{\vert x-z\vert}\Big(\frac{d^*-b^*z}{\vert d^*-b^*z}\Big)^{-1}\ . \] For $u\in \Gamma_n$ such that $\vert u \vert =1$, ${\tau(u^*}^{-1})= \tau(u') = \tau'(u)$ so that \[\pi'_{-\lambda}(g)J_\lambda f(x) \]\[= \int_{E^n} \vert x-z\vert^{2\lambda-n}\vert d^*-b^*z\vert^{-2\lambda-n} \tau\Big(\frac{d^*-b^*z}{\vert d^*-b^*z\vert}\Big)^{-1}f\big((a^*z-c^*)(d^*-b^*z)^{-1}\big) dz \] \[ = J_\lambda \pi_\lambda(g)f(x)\ . \] \end{proof} For $s\in \mathbb C$, let, for $x\in E^n, x\neq 0$ \[d_s(x) = \vert x\vert^{s-1} \sum_{j=1}^n x_jE_j\ , \] and let $D_s$ be the associated convolution operator defined by \[D_sf(x) = \int_{E^n} d_s(y) f(x-y) dy\ . \] \begin{proposition} Let $\Re s>-n$. For any $g\in \widetilde G$, \begin{equation}\label{dsintw} D_s\circ \pi'_{\frac{s+n}{2}}(g) = \pi'_{-(\frac{s+n}{2})}(g)\circ D_s \end{equation} \end{proposition} \begin{proof} By using the trivial intertwining relations \eqref{tautau'} and \eqref{E1intw}, we can transform the intertwining relation \eqref{intw} to get \[ J_\lambda E_1 \circ \pi'_{\lambda}(g) = \pi'_{-\lambda}(g)\circ J_\lambda E_1\ ,\] for any $g\in \widetilde G$. Next, for any $x\in E^n, x\neq 0$ \[\vert x\vert^{2\lambda-n-1}(x_1+x_2E_1E_2+\dots + x_nE_1E_n)E_1 = d_{2\lambda-n}(x)\ .\] The statement follows, with $\lambda = \frac{s+n}{2}$. \end{proof} \section {The fundamental identity and its consequences} For $\Re s>-n$, $d_s$ is an integrable function on $E^n$ (with values in $\End(\Sigma)$), hence a (tempered) distribution. We want to meromorphically continue this distribution to $\mathbb C$. Let $D$ be the Dirac operator on $E^n$. By definition, it acts on smooth functions on $E^n$ with values in $\Sigma$ by \[ Df(x) = \sum_{j=1}^n E_j\frac{\partial f}{\partial x_j} (x)\ .\] Extend this formula to $\End(\Sigma)$ valued function : if $S(x)$ is such a function, let $\displaystyle DR(x) = \sum_{j=1}^n E_j \frac{\partial R}{\partial x_j}(x)$. Notice that the associated convolution operator satisfies \[D(R\star f) = DR\star f\ . 
\] In both cases, $D^2 f= -\sum_{j=1}^n \frac{\partial^2f}{\partial x_j^2}$, which we write as $D^2 = \Delta$, where $\Delta$ is the (extension to $\Sigma$ or $End(\Sigma)$-valued functions of the) standard Laplacian on $E^n$. \begin{proposition} [Fundamental identity] Let $s\in \mathbb C, \Re s > -n$. Then, for $x\in E^n, x\neq 0$ \begin{equation}\label{PBS} D \vert x\vert^{s+1} = (s+1) d_s\ , \end{equation} where both sides are $End(\Sigma)$-valued functions. \end{proposition} \begin{proof} It amounts to the formula \[\frac{\partial}{\partial x_j} \vert x\vert^{s+1} =(s+1)\,x_j\,\vert x \vert^{s-1}\ . \] \end{proof} As the meromorphic continuation of the distribution $\vert x\vert^s$ is well known (see \cite{gs}), equation \eqref{PBS} allows the meromorphic continuation of the distribution $d_s$. \begin{proposition} The distribution $\vert x\vert^s$ can be continued meromorphically to $\mathbb C$, with simple poles at $s=-n-2k$, for $k\in \mathbb N$. The residue at $s=-n-2k$ is given by \[Res\,(\vert x\vert^s, -n-2k) = c_k \Delta^k \delta\ , \] where $\displaystyle c_k= \frac{2\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}\, \frac{1}{2^k \,k!\,n(n+2)\dots(n+2k-2)}\ $. \end{proposition} \begin{proposition} The distribution $d_s$ can be meromorphically continued to $\mathbb C$ with simple poles at $s=-n-2k-1, k\in \mathbb N$. The residue of $d_s$ at $-n-2k-1$ is given by \[Res\,(d_s, -n-1-2k) = c_k \, D^{2k+1}\delta\ . \] \end{proposition} \begin{proof} Let $f$ be a function in $\mathcal C^\infty_c(E^n,\Sigma)$. Use \eqref{PBS}, recall that the $E_j$'s are skew Hermitian and integrate by part to get \[\int_{E^n} d_s(x) f(x) dx = \frac{1}{s+1} \int_{E^n} \vert x\vert^{s+1} Df(x)dx\ . \] This identity is valid {\it a priori} for $\Re s>-n$. The right hand side can be extended to a meromorphic function, with poles at $s+1 = -n-2k, k\in \mathbb N$. This serves to \emph{define} the left hand side. At $s=-n-2k-1$, the residue of the right hand side is $c_k\, \Delta^k(Df)(0)$. But $D^2 = \Delta$, so that $\Delta^k = D^{2k}$, hence the proposition follows. \end{proof} \begin{theorem} For any positive integer $k$, and any $g\in \widetilde G$, \begin{equation}\label{Dintw} D^{2k+1} \circ \pi'_{-k-\frac{1}{2}}(g) = \pi_{k+\frac{1}{2}}'(g)\circ D^{2k+1} \end{equation} \end{theorem} \begin{proof} Recall the intertwining relation \eqref{dsintw}. It is clearly valid for $s$ in $\mathbb C$, provided $s$ is not a pole. But at a pole, say $s=-n-2k-1$, we may pass to the limit on both sides of \eqref{intw}, thus obtaining \eqref{Dintw}. \end{proof} \section { The compact picture} The results above may also be realized in the compact picture, i.e. on the sphere, where the induced representation, the Knapp-Stein operators and their residues may also be computed. The main result is an explicit expression of the residues as a polynomial in the Dirac operator of the sphere. Recall from \cite{kn} that our induced representation may also be realized in the compact picture, see Chapter VII, (7.3a), and the intertwining operator given as in \cite{kn} (7.37) as follows: Set, for $f$ a function on $K$ and $x\in K$ \[J_{\tau,\lambda,w} f(x)= \int_K e^{(-\rho+\lambda) \log a(w^{-1}k)} \tau(m(w^{-1}k)) f(xk) dk \ .\] Now we can repeat the arguments from Euclidian space and realize the Knapp-Stein operator as a kernel operator, acting on sections of the spin bundle over $S^n$, and we may find the residues of this meromorphic family. For this is it convenient to calculate the spectrum of the Knapp-Stein operator, i.e. 
its eigenvalues on the $K$-types in the induced representation. Recall the method of spectrum-generating \cite{boo}, which we can apply in an elementary way to obtain the $K$-spectrum as in the following result. When $n$ is odd, the spin representation of $\widetilde M$ has highest weight $(\frac{1}{2}, \dots , \frac{1}{2})$ (we use the standard choices of a Cartan subalgebra of $\mathfrak m$ and of a basis inside it). The induced representation space (sections of the spin bundle over $S^n$) decomposes under the action of $\widetilde K$ without multiplicity, and the corresponding highest weights of the $\widetilde K$-types are $(j,\pm) =(j, \frac{1}{2}, \dots ,\frac{1}{2}, \pm \frac{1}{2})$ with $j \in \mathbb N + \frac{1}{2}$. When $n$ is even, the spin representation of $\widetilde M$ is a sum of two representations, say $\sigma^+$ and $\sigma^-$, with respective highest weights $(\frac{1}{2},\dots, \frac{1}{2}, \frac{1}{2})$ and $(\frac{1}{2},\dots, \frac{1}{2}, -\frac{1}{2})$. Each induced representation ($(\pi^+_\lambda, \mathcal S^+_\lambda)$ from $\sigma^+$, $(\pi^-_\lambda, \mathcal S^-_\lambda)$ from $\sigma^-$) decomposes under the action of $\widetilde K$ without multiplicity, and the corresponding highest weights of the $\widetilde K$-types are $(j, \frac{1}{2},\dots,\frac{1}{2})$ with $j\in \mathbb N +\frac{1}{2}$. \begin{proposition} Define the spectral functions as in \cite{boo} by $$Z_{j,\pm}(\lambda) = \pm \frac{\Gamma(\frac{n}{2} + j -\lambda)}{\Gamma(\frac{n}{2} + j +\lambda)}\ .$$ When $n$ is odd, the operator acting on the $(j, \pm)$ $\widetilde K$-type by the scalar $Z_{j,\pm}(\lambda)$ is an intertwining operator between $\pi_{\lambda}$ and $\pi_{-\lambda}$. \noindent When $n$ is even, the operator acting from $\mathcal S^\pm_\lambda$ into $\mathcal S^\mp_\lambda$ on the $\widetilde K$-type $(j, \frac{1}{2},\dots, \frac{1}{2})$ by the scalar $Z_{j,\pm}(\lambda)$ is an intertwining operator between $\pi_\lambda= \pi_\lambda^+ \oplus \pi_\lambda^-$ and $\pi_{-\lambda} = \pi_{-\lambda}^+ \oplus \pi_{-\lambda}^-$. \end{proposition} See \cite{boo}, noticing that the parameter $r$ there coincides with $-\lambda$ in our present context. Because of the generic uniqueness of the intertwining operator between $\pi_{\lambda}$ and $\pi_{-\lambda}$, the intertwining operator thus constructed is a multiple (by some meromorphic function of $\lambda$) of the one we use in the first part. The poles of the former were at $\lambda = -\frac{1}{2} -k$, $k\in \mathbb N$. They now correspond to non-singular values of the spectral functions, so that the residues are replaced by true values. The normalization is in fact such that the value of the intertwining operator at $\lambda = -\frac{1}{2}$ is exactly the Dirac operator on $S^n$. More precisely, for $\lambda=-\frac{1}{2}$, we obtain the spectrum of the Dirac operator on $S^n$. \begin{proposition} The spectrum of the Dirac operator $\mathbb D$ is given by \[Z_{k+\frac{1}{2},\, \pm} (-\frac{1}{2})= \pm (\frac{n}{2}+k)\ . \] \end{proposition} \noindent {\bf Remark}. An alternative determination of the spectrum of the Dirac operator on $S^n$ was given in \cite{bo}, using a more complicated argument (which, however, only uses conformal geometry). See also \cite{ch}. For the other poles $\lambda = -\frac{1}{2}-m$, $m\in \mathbb N$, the computation of the values of the spectral functions and the previous result yield the following theorem. \begin{theorem} Let $m\in \mathbb N$.
The differential operator \[\mathbb D_m = \mathbb D(\mathbb D^2-1)(\mathbb D^2-4)\dots (\mathbb D^2-m^2) \] is covariant with respect to $(\pi_{-\frac{1}{2}-m},\pi_{\frac{1}{2}+m})$. \end{theorem} \begin{proof} For $j= \frac{1}{2}+ k$, \[Z_{k+\frac{1}{2},\, \pm}(-\frac{1}{2} -m)=\pm \,(\frac{n}{2} +k+m)(\frac{n}{2} +m-1)\dots(\frac{n}{2}+k-m) \] which coincides with the spectral function of the operator \[\mathbb D_m =(\mathbb D+m)(\mathbb D+m-1)\dots (\mathbb D-m)\ .\] Hence the statement. \end{proof} For other approaches to the covariance properties of powers of the Dirac operator on $S^n$, see \cite{es}, \cite{lr}. \footnotesize{\noindent Address\\ Jean-Louis Clerc, Institut Elie Cartan, Universit\'e de Lorraine, 54506 Vand\oe uvre-l\`es-Nancy, France\\ Bent \O rsted, Matematisk Institut, Byg.\,430, Ny Munkegade, 8000 Aarhus C, Denmark.\\} \noindent \texttt{{[email protected], [email protected] }} \end{document}
\begin{document} \twocolumn[ \icmltitle{False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking} \icmlauthor{Qianqian Xu}{[email protected]} \icmladdress{State Key Laboratory of Information Security (SKLOIS), Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093 \& BICMR, Peking University, Beijing 100871, China} \icmlauthor{Jiechao Xiong}{[email protected]} \icmladdress{BICMR-LMAM-LMEQF-LMP, School of Mathematical Sciences, Peking University, Beijing 100871, China} \icmlauthor{Xiaochun Cao}{[email protected]} \icmladdress{State Key Laboratory of Information Security (SKLOIS), Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China} \icmlauthor{Yuan Yao}{[email protected]} \icmladdress{BICMR-LMAM-LMEQF-LMP, School of Mathematical Sciences, Peking University, Beijing 100871, China} \vskip 0.3in ] \begin{abstract} With the rapid growth of crowdsourcing platforms it has become easy and relatively inexpensive to collect a dataset labeled by multiple annotators in a short time. However, due to the lack of control over the quality of the annotators, some abnormal annotators may be affected by position bias, which can potentially degrade the quality of the final consensus labels. In this paper we introduce a statistical framework to model and detect annotator's position bias while controlling the false discovery rate (FDR) -- the expected fraction of false discoveries among all discoveries -- without prior knowledge of the number of biased annotators, so as to assure that most of the discoveries are indeed true and replicable. The key technical development relies on new knockoff filters adapted to our problem and on new algorithms based on the Inverse Scale Space dynamics, whose discretization is potentially suitable for large scale crowdsourcing data analysis. Our studies are supported by experiments with both simulated examples and real-world data. The proposed framework provides a useful tool for quantitatively studying annotator's abnormal behavior in crowdsourcing. \end{abstract} \section{Introduction} In applications, building good predictive models is challenging primarily due to the difficulties in obtaining annotated training data. A traditional way to label data is to hire a small group of experts to provide labels for the entire set of data. However, such an approach can be expensive and time consuming for large scale data. Thanks to the widespread adoption of crowdsourcing platforms (e.g., \href{https://www.mturk.com}{MTurk}, \href{http://www.innocentive.com/}{Innocentive}, \href{http://crowdflower.com/}{CrowdFlower}, \href{http://www.crowdrank.net/}{CrowdRank}, and \href{http://www.allourideas.org/}{Allourideas}), a much more efficient way is to post unlabeled data to a crowdsourcing marketplace, where a big crowd of low-paid workers can be hired instantaneously to perform labeling tasks \cite{Sheng2008,non-expert1,non-expert2,MIR10,MM09}. Despite its high efficiency and immediate availability, crowd labeling raises many new challenges. Since typical crowdsourced tasks are tedious and annotators usually come from a diverse pool including genuine experts, novices, biased workers, and malicious annotators, labels generated by the crowd suffer from low quality. Thus, all crowdsourcers need strategies to ensure the reliability of answers. In other words, outlier detection is a critical task in order to achieve robust labeling results.
Various methods have been developed in the literature for outlier detection, of which the majority voting strategy \cite{gygli2013interestingness, jiang2013understanding} is the most typical one. In this setting, each pair is allocated to multiple annotators and their opinions are averaged so as to identify and discard noisy data provided by unreliable raters. Such methods thus require a large number of pairwise labels to be collected. More importantly, as a local outlier detection method, majority voting is ineffective in identifying outliers that can cause global ranking inconsistencies \cite{fu2014interestingness,fu2015robust}. The work in~\cite{MM13} attacks this problem and formulates outlier detection as a LASSO problem based on sparse approximations of the cyclic ranking projection of paired comparison data in the Hodge decomposition. Regularization paths of the LASSO problem can provide an ordering of samples according to their tendency to be outliers. However, these works all treat pairwise comparison judgements as independent random outliers, which are typically defined to be data samples that have unusual deviations from the remaining data. In this paper, instead of modeling the random effect of sample-wise outliers, we are primarily interested in the fixed effect where annotators are influenced by positions when labeling in the pairwise comparison setting. Such an annotator's position bias \cite{day1969position} is ubiquitous in uncontrolled crowdsourced ranking experiments. In our studies, annotator's position bias typically arises in two forms: i) \textbf{\emph{the ugly}}: annotators who click one side more often than the other; when pairs are highly confusing or the annotators get too tired, they tend to click one side simply to raise their record and receive more payment, while for pairs with substantial differences they click as usual. ii) \textbf{\emph{the bad}}: some extremely careless annotators, or robots pretending to be human annotators, who do not look at the instances at all and click one side all the time to quickly receive pay for work. Such annotators may significantly deteriorate the quality of crowdsourcing data and increase the cost of acquiring annotations (since each raw feedback comes with a cost: the task requestor has to pay workers a pre-specified monetary reward for each label they provide, usually regardless of its correctness). Although it might be relatively easy to identify the bad annotators above by inspecting their inputs, it is impossible for eye inspection to pick up those ugly annotators with mixed behaviors. Therefore it is desirable to design a statistical framework to quantitatively detect and eliminate annotator's position bias for crowdsourcing platforms in the market. Such a systematic study, to the authors' knowledge, has not yet appeared in the literature. In this paper, we propose a linear model with annotator's position bias and new algorithms to find good estimates with an automatic control on the false discovery rate (FDR) -- the expected fraction of false discoveries among all discoveries. To understand FDR, imagine that we have a detection method that has just made 100 discoveries. Then, if our method is known to control the FDR at the 10\% level, this means that, on average, we can expect at most 10 of these discoveries to be false and, therefore, at least 90 to be true and replicable.
Such an FDR control is desirable when we do not have prior knowledge of the number of bad or ugly annotators, since typical statistical estimates lead to an over-identification of them. Specifically, our contributions in this work are highlighted as follows: \begin{itemize} \item[(A)] A linear model with annotator's position bias as a fixed effect; \item[(B)] New algorithms to find good estimates of such position bias, based on the Inverse Scale Space dynamics and its discretization, the Linearized Bregman Iteration; \item[(C)] New knockoff filters for FDR control adapted to our setting, which aim to mimic the correlation structure found within the original position-bias features; \item[(D)] Extensive experimental validation based on one simulated and four real-world crowdsourced datasets. \end{itemize} \section{Methodology} In this section, we systematically introduce the methodology for annotator's position bias estimation. Specifically, we first start from a basic linear model with different types of noise models, which have been widely and successfully used in the literature. Then we introduce a new dynamic approach with an unbiased estimator, the Inverse Scale Space (ISS). Based on this, we present the modified knockoff filter for FDR control in detail. \subsection{Basic Linear Model} Let $V = \{1,2,\dots,n\}$ be the set of nodes and $E = \{(\alpha,i,j): i,j\in V, \alpha \in \mathcal{A}\}$ be the set of edges, where $\mathcal{A}$ is the set of all annotators. Suppose the pairwise ranking data is given as $Y: E\rightarrow \mathbb{R}$, where $Y_{ij}^\alpha>0$ means that $\alpha$ prefers $i$ to $j$ and $Y_{ij}^{\alpha}\leq 0$ otherwise. The magnitude of $Y_{ij}^\alpha$ can represent the degree of preference and varies across applications: it can be a dichotomous choice $\{\pm 1\}$, a $k$-point Likert scale (e.g. $k=3,4,5$), or even real-valued. In this paper, we consider the following linear model: \begin{equation} \label{eq:linear} Y_{ij}^\alpha = \theta_i - \theta_j + z_{ij}^\alpha \end{equation} where $\theta: V\to \mathbb{R}$ is some common score on $V$ and the residual $z_{ij}^\alpha$ may have interesting structures in crowdsourcing settings. The annotators might have different effects on the residuals. For most annotators, the deviations from the common score are due to random noise; occasionally, however, annotators deviate from the common behavior systematically -- some careless ones always choose the left or the right candidate in comparisons, while others do so only when they get too confused to decide. Such behaviors can be modeled in the following way, \begin{equation} \label{eq:bias} z_{ij}^\alpha=\gamma^\alpha + \varepsilon_{ij}^\alpha, \end{equation} where $\gamma^\alpha$ measures an annotator's position bias as a fixed effect, and the remainder $\varepsilon_{ij}^\alpha$ measures the random effect in sampling, which is assumed to be sub-Gaussian noise. For example, a positive value of $\gamma^\alpha$ means that the annotator $\alpha$ is more likely to prefer the left choice. Under the random design of pairwise comparison experiments, a candidate is placed on the left or the right at random, so the position should not affect the choice of a careful (good) annotator. Therefore $\gamma^\alpha$ is assumed to be sparse, i.e., zero for most annotators, and a nonzero position bias $\gamma^\alpha$ means the annotator $\alpha$ either always chooses one position over the other (bad) or does so only when too confused or tired (ugly).
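As a concrete illustration of the model \eqref{eq:linear}--\eqref{eq:bias}, the following minimal Python sketch (ours, not part of the original paper; all variable names and noise levels are illustrative assumptions) simulates pairwise comparisons with a sparse position bias and assembles the operators $\delta_0$ and $A$ introduced in the next subsection.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_items, n_annot, pairs_each = 16, 150, 60

theta = rng.normal(size=n_items)          # latent common scores theta_i
gamma = np.zeros(n_annot)                 # position bias, sparse
gamma[100:] = rng.uniform(0.5, 1.5, 50)   # annotators 101-150 are biased

edges, Y = [], []
for a in range(n_annot):
    for _ in range(pairs_each):
        i, j = rng.choice(n_items, 2, replace=False)  # random left/right placement
        edges.append((a, i, j))
        Y.append(theta[i] - theta[j] + gamma[a] + rng.normal(scale=0.5))
Y = np.array(Y)

# (delta0 theta)(a,i,j) = theta_i - theta_j,  (A gamma)(a,i,j) = gamma_a
delta0 = np.zeros((len(edges), n_items))
A = np.zeros((len(edges), n_annot))
for e, (a, i, j) in enumerate(edges):
    delta0[e, i], delta0[e, j] = 1.0, -1.0
    A[e, a] = 1.0
\end{verbatim}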
We note that \eqref{eq:bias} should not be confused with the recent studies in \cite{fu2014interestingness,MM13} on the outlier detection problem, where $z_{ij}^\alpha=\gamma_{ij}^\alpha + \varepsilon_{ij}^\alpha$ and $\gamma_{ij}^\alpha$ models sparse outliers for each sample $(\alpha,i,j)$; this captures only the random effect of samples rather than the fixed effect of annotators. By modeling the annotator's fixed effect of position bias, one can systematically classify the annotators into the good, the ugly, and the bad according to their behaviors. \subsection{ISS/LBI} \label{sec:ISS} Define the gradient operator $\delta_0:\mathbb{R}^{|V|}\to \mathbb{R}^{|E|}$ by $(\delta_0 \theta)(i,j,\alpha) = \theta_i - \theta_j$, and the annotator operator $A:\mathbb{R}^{|\mathcal{A}|} \to \mathbb{R}^{|E|}$ by $(A\gamma)(i,j,\alpha) = \gamma^\alpha$. Then the model above can be rewritten as \begin{equation}\label{eq:model1} Y = \delta_0 \theta + A \gamma + \varepsilon. \end{equation} In this case, detecting the annotators affected by position bias can be reformulated as learning a sparse vector $\gamma$ from the given data $(\delta_0,A,Y)$. To solve such a problem, in this paper we choose a new approach based on the following dynamics, \begin{subequations} \label{eq:iss} \begin{align} \frac{dp}{dt} &= A^T(Y-\delta_0 \theta-A\gamma) \label{eq:iss1}\\ 0 &= \delta_0^T (Y-\delta_0 \theta-A\gamma) \label{eq:iss2}\\ p &\in \partial\|\gamma\|_1. \end{align} \end{subequations} Its solution path can be computed by solving a sequence of nonnegative least squares problems; see \cite{osher2014} and references therein. In this paper we use the free R-package \cite{Libra}. In \cite{osher2014}, it has been shown that the dynamics above has several advantages over the traditional LASSO approach, which in our setting can be formulated as \begin{equation}\label{eq:lasso} \min_{\theta,\gamma} \frac{1}{2}\|Y - \delta_0 \theta - A \gamma \|_2^2 + \lambda \|\gamma\|_1. \end{equation} First of all, the dynamics above is statistically equivalent to LASSO in terms of model selection consistency but may yield the oracle estimator, which is bias-free, while the LASSO estimator is well known to be biased. In this sense the ISS path can be better than the LASSO path. Here the solution path $\hat{\gamma}(t)_{t:0\rightarrow\infty}$ plays the same role as the regularization path of LASSO $\hat{\gamma}(\lambda)_{\lambda:\infty\rightarrow0}$ with roughly $t=1/\lambda$, where the important features (variables) are selected before the noisy ones. Following the tradition in image processing, such a dynamics is called the \emph{Inverse Scale Space} (ISS). Beyond these appealing statistical properties, ISS also admits an extremely simple discrete approximation, the Linearized Bregman Iteration (LBI), which has been widely used in image reconstruction with TV-regularization. Adapted to our setting, the discretized algorithm is given in Algorithm \ref{alg:LB}; it is scalable, easy to parallelize, and particularly suitable for large scale crowdsourced ranking data analysis. \begin{algorithm} {{ \caption{LBI in correspondence to (\ref{eq:model1})}\label{alg:LB} \textbf{Initialization:} Given parameters $\kappa$ and $\triangle t$, define $k=0, w^0=0, \theta^0 = (\delta_0^T\delta_0)^{\dag}\delta_0^TY,\gamma^0=0$.\\ \textbf{Iteration:} \begin{subequations} \begin{align} w^{k+1} &= w^k + A^T(Y - \delta_0\theta^k - A\gamma^k)\triangle t. \label{alg1Step1}\\ \gamma^{k+1}&=\kappa\,\mathrm{shrink}(w^{k+1}).
\label{alg1Step2}\\ \theta^{k+1} &= \theta^k + \kappa\delta_0^T (Y-\delta_0 \theta^k-A\gamma^k)\triangle t. \label{alg1Step3} \end{align} \end{subequations} \textbf{Stopping:} exit when $k\triangle t > t $. }\\ where $\mathrm{shrink}(x) := \mathrm{sign}(x)\max \{|x|-1,0\}$.} \end{algorithm} \subsection{FDR Control and New Knockoff Filter} \label{sec:knockoff} A crucial question for LASSO and ISS is how to choose the regularization parameters $\lambda$ and $t$ for real-world data. After all, different parameters can give different bad or ugly annotator sets. Traditional methods either require prior knowledge of the number of such annotators, which is often unavailable in practice, or rely on some statistically optimal choice of the regularization parameters. Extensive studies in statistics have shown that such parameter tuning typically leads to an over-estimation of the sparse signal; therefore False Discovery Rate (FDR) control \cite{barber2014controlling}, which is adopted in this paper, becomes necessary. FDR is defined as the expected proportion of false discoveries among the discoveries. In mathematical terms, here we consider \[FDR = \mathbb{E} \left [\frac{\#\{\alpha:\gamma^\alpha=0,\hat{\gamma}^\alpha\ne 0\}}{\#\{\alpha:\hat{\gamma}^\alpha\ne 0\}\vee1}\right].\] Controlling the FDR thus controls the accuracy of the set of bad/ugly annotators we detect. Recently, a new method called the knockoff filter \cite{barber2014controlling} was proposed to automatically control FDR in standard linear regression without prior knowledge of the sparsity. In this paper, such an approach is extended to our linear model (\ref{eq:model1}) and algorithms, where both the non-sparse $\theta$ and the sparse $\gamma$ co-exist in the model. The extended method consists of the same three steps as in \cite{barber2014controlling}, where the key difference lies in the knockoff feature construction adapted to our setting. \begin{enumerate} \item Construct knockoff features: let $\tilde{A}$ be knockoff features that satisfy \begin{equation}\label{newcond2} \tilde{A}^T\tilde{A} = A^TA,~ A^T\tilde{A} = A^TA- \mathrm{diag}(s),~\delta_0^T\tilde{A} = \delta_0^TA \end{equation} where $s$ is entrywise nonnegative and can be obtained by solving the SDP \begin{eqnarray*} \max_s & &\sum_j s_j\\ s.t. & &0 \le s_j \le 1\\ & &\mathrm{diag}(s) \preceq 2A^T(I - H)A, \end{eqnarray*} with $H:=\delta_0(\delta_0^T\delta_0)^{\dag}\delta_0^T$. Let $Q\in \mathbb{R}^{|E|\times|\mathcal{A}|}$ be a matrix with orthonormal columns such that $\delta_0^TQ=0$ and $A^TQ=0$; this requires $|E|\ge2|\mathcal{A}|+|V|$, a condition easily met in crowdsourcing. Then \eqref{newcond2} can be satisfied by defining \[\tilde{A} := A - (I - H)A(A^T(I - H)A)^{-1}\mathrm{diag}(s) +QC\] where $C\in \mathbb{R}^{|\mathcal{A}|\times|\mathcal{A}|}$ satisfies $C^TC = 2\mathrm{diag}(s) - \mathrm{diag}(s)(A^T(I - H)A)^{-1}\mathrm{diag}(s)$. \\ Now define the extended design matrix $A_{ko} = [A,\tilde{A}]$ and $\gamma_{ko} = [\gamma,\tilde{\gamma}]^T$; replacing $A$ with $A_{ko}$ and $\gamma$ with $\gamma_{ko}$ in \eqref{eq:lasso}, \eqref{eq:iss} or Alg. \ref{alg:LB}, we obtain the solution path $\hat{\gamma}_{ko}(\lambda)$ (or $\hat{\gamma}_{ko}(t)$). \item Generate knockoff statistics for every original feature: define $Z_{j}$ to be the first entering time of $A_j$, i.e., $Z_{j} = \sup\{\lambda: \hat{\gamma}_j(\lambda) \neq 0\}$ for LASSO (or $\sup\{1/t: \hat{\gamma}_j(t) \neq 0\}$ for ISS/LBI), and let $\tilde{Z}_j$ be defined similarly.
Then the knockoff statistic is \begin{equation}\label{eq:statistic} W_j = \max(Z_j,\tilde{Z}_j)\, \mathrm{sign}(Z_j - \tilde{Z}_j). \end{equation} \item Choose variables based on the knockoff statistics: define the selected variable set $\hat{S} = \{j: W_j\ge T_{c}\}$, where, for $c\in\{0,1\}$, \[T_{c} = \min \{t: \frac{c+\#\{j:W_j\le-t\}}{\#\{j:W_j\ge t\}}\le q\};\] $T_0$ gives the knockoff cut and $T_1$ the knockoff+ cut. \end{enumerate} It can be shown that the new knockoff filter above indeed controls the FDR in the following sense; the proof is similar to that of \cite{barber2014controlling} and is collected in the Supplementary Material for completeness. \begin{thm}\label{thm1} If the entries of $\varepsilon$ are i.i.d. $N(0,\sigma^2)$ and $|E|\ge2|\mathcal{A}|+|V|$, then for any $q\in[0,1]$, the knockoff filter with ISS/LBI (or LASSO) satisfies \[\mathbb{E}\left[\frac{\#\{j:\gamma_j =0~and~j\in\hat{S}\}}{\#\{j:j\in\hat{S}\}+q^{-1}}\right] \le q\] and the knockoff+ method satisfies \[\mathbb{E}\left[\frac{\#\{j:\gamma_j =0~and~j\in\hat{S}\}}{\#\{j:j\in\hat{S}\}\vee 1}\right] \le q.\] \end{thm} \begin{remark}\label{rmk1} There is an equivalent reformulation of (\ref{eq:model1}) that eliminates the non-sparse structural variable $\theta$ and converts the model to a standard LASSO. Let $\delta_0$ have a full singular value decomposition $\delta_0 = U \Sigma V^T$ with $U=[U_1,U_2]$, where $U_1$ is an orthonormal basis of the column space $\mathrm{col}(\delta_0)$ and $U_2$ is an orthonormal basis of $\ker(\delta_0^T)$. Then \begin{equation}\label{eq:model2} U_2^TY = U_2^TA \gamma + U_2^T\varepsilon. \end{equation} Let $y = U_2^TY, X = U_2^TA, e = U_2^T\varepsilon$; then $e$ has i.i.d. $N(0,\sigma^2)$ entries and \begin{equation}\label{eq:model0} y = X \gamma + e. \end{equation} Based on this, we can use the original knockoff filter $\tilde{X}$ of \cite{barber2014controlling} to select the position-biased annotators. A shortcoming of this approach lies in the full SVD, which might be too expensive for large scale problems; the former approach does not suffer from this. However, one can see in the following theorem that both approaches are in fact equivalent. Therefore such a reformulation provides conceptual insight into the construction of knockoff filters. \end{remark} \begin{thm}\label{thm2} The approach in Remark 1 is equivalent to what we proposed above in the following sense: \begin{itemize} \item The knockoff features of \eqref{eq:model0} satisfy $\tilde{X} = U_2^T \tilde{A}$ and $\tilde{A} = U_2\tilde{X} + U_1U_1^TA$; \item The knockoff statistics constructed by ISS (or LASSO) for both procedures are exactly the same. \end{itemize} \end{thm} Both knockoff filters above can choose variables with FDR control, but the estimator $(\hat{\theta},\hat{\gamma}_{ko})$ involves the knockoff features, so we need to re-estimate $\hat{\theta},\hat{\gamma}$ after bad annotator detection by solving a least squares problem that keeps only the selected features. Suppose that $\hat{S}$ is the set of bad or ugly annotators given by the knockoff filters; then one can find the final estimators by \begin{equation} (\hat{\theta},\hat{\gamma}) = \arg\min_{\theta,\gamma_{\hat{S}}} \|Y - \delta_0 \theta - A_{\hat{S}}\gamma_{\hat{S}}\|_2^2.
\end{equation} {\renewcommand\baselinestretch{1.3}\selectfont \setlength{\belowcaptionskip}{0pt} \begin{table}\caption{\label{iss_1} Knockoff with $q = 10\%$ via ISS.} \scriptsize \centering \subtable[\small Control of Actual FDR]{ \begin{tabular}{c||cccc} \hline \textbf{ } &\textbf{p2=40\%} &\textbf{p2=50\%} &\textbf{p2=60\%} &\textbf{p2=70\%} \\ \hline \hline \textbf{p1=10\%} &0.0959 &0.0833 &0.1198 &0.1229 \\ \hline \textbf{p1=20\%} &0.0917 &0.0989 &0.0935 &0.1006\\ \hline \textbf{p1=30\%} &0.0919 &0.0991 &0.0921 &0.0854 \\ \hline \textbf{p1=40\%} &0.1062 &0.1034 &0.0998 &0.1184 \\ \hline \end{tabular} } \qquad \subtable[\small Number of True Discoveries]{ \scriptsize \centering \begin{tabular}{c||cccc} \hline \textbf{ } &\textbf{p2=40\%} &\textbf{p2=50\%} &\textbf{p2=60\%} &\textbf{p2=70\%} \\ \hline \hline \textbf{p1=10\%} &49.95 &50.00 &50.00 &50.00 \\ \hline \textbf{p1=20\%} &49.90 &50.00 &50.00 &50.00 \\ \hline \textbf{p1=30\%} &49.80 &50.00 &50.00 &50.00 \\ \hline \textbf{p1=40\%} &49.75 &49.95 &50.00 &50.00 \\ \hline \end{tabular} } \end{table} \par} \section{EXPERIMENTS}\label{sec:experiments} In this section, five examples with both simulated and real-world data are exhibited to illustrate the validity of the analysis above and the applications of the proposed methodology. The first example uses simulated data, while the other four use real-world data collected by crowdsourcing. \subsection{Simulated Study} \label{sec:simulatedata} \textbf{Settings} We first validate the proposed algorithm on simulated binary data labeled by 150 annotators. Of the 150 annotators we have 100 \textbf{\emph{good}} annotators (annotators 1 to 100, without position bias) and 50 \textbf{\emph{bad/ugly}} annotators (annotators 101 to 150, with position bias). We note that a good annotator does not always provide correct labels; rather, such annotators may still make incorrect judgements for reasons other than the position effect. Specifically, we first create a random total order on the $n$ candidates in $V$ as the ground-truth and add paired comparison edges $(i,j)\in E$ to the graph $G=(V,E)$ until the graph is complete, with the preference direction following the ground-truth order. Here we choose $n=|V|=16$, which is consistent with the third real-world dataset, the one with the smallest node size. Then, good annotators make judgements with an incorrect probability $p_1$ (i.e., a fraction $p_1$ of $E$ is reversed in preference direction), while bad/ugly annotators are disturbed by the position effect with probability $p_2$. \textbf{Evaluation metrics} Two metrics are employed to evaluate the performance of the proposed algorithms: the first is the \emph{Control of Actual FDR}, the second is the \emph{Number of True Discoveries}. \textbf{Experimental results} With different choices of $p_1$ and $p_2$, the mean \emph{Control of Actual FDR} and \emph{Number of True Discoveries} with $q = 10\%$ over 100 runs are shown in Table \ref{iss_1} to measure the performance of the knockoff filter via ISS in position biased annotator detection. It can be seen that, via the knockoff filter, ISS provides an accurate detection of position biased annotators (indicated by a \emph{Control of Actual FDR} around $10\%$ and a \emph{Number of True Discoveries} around 50). Comparable results for LASSO with $q = 10\%$ can be found in Table \ref{lasso_1}; via the knockoff filter, both LASSO and ISS thus provide an accurate detection of position biased annotators.
This result is consistent with the theoretical comparison between LASSO and ISS discussed in \cite{osher2014}, where ISS/LBI enjoys theoretical guarantees similar to those of LASSO while being bias-free and simpler to implement (the three-line algorithm in Sec.~\ref{sec:ISS}). {\renewcommand\baselinestretch{1.3}\selectfont \setlength{\belowcaptionskip}{0pt} \begin{table}\caption{\label{lasso_1} Knockoff with $q = 10\%$ via LASSO.} \scriptsize \centering \subtable[\small Control of Actual FDR]{ \begin{tabular}{c||cccc} \hline \textbf{ } &\textbf{p2=40\%} &\textbf{p2=50\%} &\textbf{p2=60\%} &\textbf{p2=70\%} \\ \hline \hline \textbf{p1=10\%} &0.0711 &0.1326 &0.1433 &0.1256 \\ \hline \textbf{p1=20\%} &0.0998 &0.0954 &0.0970 &0.0780 \\ \hline \textbf{p1=30\%} &0.1044 &0.0918 &0.1093 &0.1061 \\ \hline \textbf{p1=40\%} &0.0843 &0.1035 &0.1063 &0.0941 \\ \hline \end{tabular} } \qquad \subtable[\small Number of True Discoveries]{ \scriptsize \centering \begin{tabular}{c||cccc} \hline \textbf{ } &\textbf{p2=40\%} &\textbf{p2=50\%} &\textbf{p2=60\%} &\textbf{p2=70\%} \\ \hline \hline \textbf{p1=10\%} &49.95 &50.00 &50.00 &50.00 \\ \hline \textbf{p1=20\%} &49.90 &50.00 &50.00 &50.00 \\ \hline \textbf{p1=30\%} &49.90 &50.00 &50.00 &50.00 \\ \hline \textbf{p1=40\%} &49.85 &50.00 &50.00 &50.00 \\ \hline \end{tabular} } \end{table} \par} \subsection{Real-world Datasets} As there is no ground-truth for position biased annotators in real-world data, one cannot compute the \emph{Control of Actual FDR} and the \emph{Number of True Discoveries} as in the simulated data to evaluate the detection performance here. In this subsection, we inspect the annotators returned by the knockoff filter via ISS/LASSO under $q = 10\%$ to see whether they are plausibly position biased workers. {\renewcommand\baselinestretch{1}\selectfont \setlength{\belowcaptionskip}{3pt} \begin{table}\caption{\label{age} Position biased annotators detected in the Human Age dataset, together with the click counts of each side (i.e., Left and Right).} \tiny \centering \newsavebox{\tablebox} \begin{lrbox}{\tablebox} \begin{tabular}{||c|c|c||c|c|c||} \hline \textbf{ID} &\textbf{Left} &\textbf{Right} & \textbf{ID} &\textbf{Left} &\textbf{Right} \\ \hline \hline \textcolor{red}{\textbf{40}} &40 &0 & \textcolor{blue}{\textbf{50}} &60 &3 \\ \hline \textcolor{red}{\textbf{51}} &63 &0 & \textcolor{blue}{\textbf{59}} &213 &66 \\ \hline \textcolor{red}{\textbf{94}} &0 &30 & \textcolor{blue}{\textbf{64}} &5 &14 \\ \hline \textcolor{blue}{\textbf{12}} &90 &270 & \textcolor{blue}{\textbf{70}} &191 &9 \\ \hline \textcolor{blue}{\textbf{18}} &74 &25 & \textcolor{blue}{\textbf{72}} &5 &24 \\ \hline \textcolor{blue}{\textbf{34}} &32 &48 & \textcolor{blue}{\textbf{77}} &11 &1 \\ \hline \textcolor{blue}{\textbf{38}} &110 &15 & \textcolor{blue}{\textbf{81}} &4 &28 \\ \hline \textcolor{blue}{\textbf{43}} &79 &1 & \textcolor{blue}{\textbf{91}} &79 &5 \\ \hline \textcolor{blue}{\textbf{46}} &40 &10 & \ & \ & \ \\ \hline \hline \end{tabular} \end{lrbox} \scalebox{1.0}{\usebox{\tablebox}} \end{table} \par} \begin{figure} \caption{ISS regularization path of four real-world datasets (Green: the good; Red: the bad; Blue: the ugly).} \label{total_path} \end{figure} \subsubsection{Human Age} In this dataset, 30 images from the human age dataset FG-NET \footnote{http://www.fgnet.rsunit.com/} are annotated by a group of volunteer users on the \href{http://www.chinacrowds.com/}{ChinaCrowds} platform. The ground-truth age ranking is known to us.
Each annotator is presented with two images and given a binary choice of which one is older. In total, we obtain 14,011 pairwise comparisons from 94 annotators. By adopting the knockoff-based algorithm we proposed, LASSO and ISS identify exactly the same set of abnormal annotators (i.e., 17 users) at $q=10\%$, as shown in Table \ref{age}. It is easy to see that these annotators can be divided into two types: (1) \textbf{\emph{the bad}}: those who click one side all the time (with ID in red); (2) \textbf{\emph{the ugly}}: those who click one side with high probability (with ID in blue). Besides, the regularization paths of ISS can be found in Figure \ref{total_path}(a), where the position biased annotators detected mostly lie outside the majority of the paths. Note that since we allow a small percentage of false positives, some annotators flagged as ugly might in reality be good. To see the effect of position biased annotators on global ranking scores, Table \ref{tab:agescore} shows the outcomes of two ranking algorithms, namely the original and the corrected. The original is calculated by a least squares fit on all of the pairwise comparisons, while the corrected is obtained by the knockoff-based correction step described in Section \ref{sec:knockoff}. It is easy to see that the removal of position biased annotators often changes the order of some competitive images, such as ID=11 and ID=21, ID=30 and ID=8, etc. To see which ranking is more reasonable, Table \ref{age_gt} shows the \emph{\textbf{ground-truth}} ranking of these competitive images. We can see from this table that, compared with the original ranking, the corrected one is in better agreement with the ground-truth ranking, which further shows that: i) position biased annotators may push the estimated ranking away from the real one; ii) pairs with small differences are more likely to give rise to position biased annotations. From this viewpoint, we can see that the knockoff-based FDR-controlling method indeed effectively selects the position biased annotators. {\renewcommand\baselinestretch{1.3}\selectfont \setlength{\belowcaptionskip}{0pt} \begin{table} \caption{Comparison of original vs. corrected rankings on the Human Age dataset. The integer represents the ranking position and the number in parentheses represents the global ranking score returned by the corresponding algorithm. }\label{tab:agescore} \centering \begin{lrbox}{\tablebox} \tiny \begin{tabular}{||c|c|c||c|c|c||} \hline\hline ID & Original & Corrected & ID & Original & Corrected
\\ \hline 28 & 1 ( 0.7780 ) & 1 ( 0.7573 ) & 23 & 16 ( 0.0208 ) & 16 ( 0.0099 ) \\ 3 & 2 ( 0.6661 ) & 2 ( 0.6771 ) & 8 & 17 ( 0.0086 ) & \textcolor{red}{18 ( -0.0024 )} \\ 14 & 3 ( 0.5653 ) & 3 ( 0.5647 ) & 30 & 18 ( -0.0025 ) & \textcolor{red}{17 ( 0.0055 )} \\ 29 & 4 ( 0.4482 ) & 4 ( 0.4490 ) & 12 & 19 ( -0.0201 ) & 19 ( -0.0632 ) \\ 21 & 5 ( 0.4087 ) & \textcolor{red}{6 ( 0.4086 )} & 13 & 20 ( -0.1961 ) & \textcolor{red}{21 ( -0.2111 )} \\ 11 & 6 ( 0.4059 ) & \textcolor{red}{5 ( 0.4343 )} & 15 & 21 ( -0.2160 ) & \textcolor{red}{23 ( -0.2791 )} \\ 7 & 7 ( 0.3873 ) & 7 ( 0.4017 ) & 25 & 22 ( -0.2166 ) & \textcolor{red}{20 ( -0.2099 )} \\ 5 & 8 ( 0.3634 ) & 8 ( 0.3478 ) & 16 & 23 ( -0.2551 ) & \textcolor{red}{24 ( -0.2887 )} \\ 27 & 9 ( 0.3582 ) & 9 ( 0.3377 ) & 2 & 24 ( -0.3710 ) & \textcolor{red}{22 ( -0.2785 )} \\ 24 & 10 ( 0.2064 ) & 10 ( 0.1722 ) & 9 & 25 ( -0.4158 ) & 25 ( -0.3949 ) \\ 6 & 11 ( 0.0932 ) & \textcolor{red}{13 ( 0.1084 )} & 1 & 26 ( -0.6135 ) & \textcolor{red}{27 ( -0.6376 )} \\ 4 & 12 ( 0.0914 ) & \textcolor{red}{12 ( 0.1207 )} & 18 & 27 ( -0.6249 ) & \textcolor{red}{26 ( -0.6180 )} \\ 22 & 13 ( 0.0896 ) & \textcolor{red}{11 ( 0.1032 )} & 19 & 28 ( -0.6653 ) & 28 ( -0.6390 ) \\ 17 & 14 ( 0.0872 ) & 14 ( 0.1232 ) & 10 & 29 ( -0.6969 ) & 29 ( -0.7040 ) \\ 20 & 15 ( 0.0816 ) & 15 ( 0.0559 ) & 26 & 30 ( -0.7660 ) & 30 ( -0.7509 ) \\ \hline \hline \end{tabular} \end{lrbox} \scalebox{0.9}{\usebox{\tablebox}} \end{table} \par} {\renewcommand\baselinestretch{1.3}\selectfont \setlength{\belowcaptionskip}{0pt} \begin{table}\caption{\label{age_gt} Ground-truth ranking of the competitive images highlighted in red in Table \ref{tab:agescore}.} \centering \tiny \begin{lrbox}{\tablebox} \begin{tabular}{||c||} \hline \hline $11 \succ 21$ \\ \hline $22 \succ 4 \succ 6$ \\ \hline $30 \succ 8$ \\ \hline $25 \succ 13 \succ 16 \succ 2 \succ 15$ \\ \hline $18 \succ 1$ \\ \hline \hline \end{tabular} \end{lrbox} \scalebox{1.0}{\usebox{\tablebox}} \end{table} \par} \subsubsection{Reading Level} The second dataset is a subset of the reading level dataset \cite{chen2013pairwise}, which contains 490 documents. 8,000 pairwise comparisons are collected from 346 annotators using the \href{http://www.crowdflower.com/}{CrowdFlower} crowdsourcing platform. More specifically, each annotator is asked to provide his/her opinion on which text is more challenging to read and understand. Table \ref{readinglevel} shows the position biased annotators detected from this dataset, together with the ISS regularization path shown in Figure \ref{total_path}(b). It is easy to see that LASSO and ISS picked out the same 6 annotators as position biased ones. Given the small number of biased annotators detected, the overall quality of annotators on this task appears relatively high.
{\renewcommand\baselinestretch{1}\selectfont \setlength{\belowcaptionskip}{0pt} \begin{table}\caption{\label{readinglevel} Position biased annotators detected in the Reading Level dataset.} \tiny \centering \begin{lrbox}{\tablebox} \begin{tabular}{||c|c|c||} \hline \textbf{ID} &\textbf{Left} &\textbf{Right} \\ \hline \hline \textcolor{red}{\textbf{50}} &5 &0 \\ \hline \textcolor{red}{\textbf{69}} &6 &0 \\ \hline \textcolor{blue}{\textbf{122}} &19 &3 \\ \hline \textcolor{blue}{\textbf{148}} &4 &19 \\ \hline \textcolor{blue}{\textbf{167}} &22 &8 \\ \hline \textcolor{blue}{\textbf{275}} &7 &22 \\ \hline \end{tabular} \end{lrbox} \scalebox{1.0}{\usebox{\tablebox}} \end{table} \par} \subsubsection{Image Quality Assessment} The third dataset is a pairwise comparison dataset for subjective image quality assessment (IQA), which contains 15 reference images and 15 distorted versions of each reference, for a total of 240 images drawn from two publicly available datasets, LIVE \cite{LIVE} and IVC \cite{IVC}. In total, 342 observers, each of whom performs a varying number of comparisons via the Internet, provide $52,043$ paired comparisons for crowdsourced subjective image quality assessment. Note that the number of responses received by each reference image differs in this dataset. To examine whether the detected annotators are indeed position biased, we take reference image 1 as an illustrative example; the other reference images exhibit similar results. Table \ref{ref1} shows the annotators with position bias picked out by the knockoff filter, and the ISS regularization path is shown in Figure \ref{total_path}(c). In this dataset, the abnormal annotators picked out by LASSO and ISS are also exactly the same. It is easy to see that the annotators picked out are mainly those clicking on one side almost all the time. Besides, it is interesting to see that all the bad annotators highlighted in red in Table \ref{ref1} click the left side all the time. We then go back to the crowdsourcing platform and find that the reason behind this is a default choice on the left button, which induces some lazy annotators to cheat on the annotation task.
{\renewcommand\baselinestretch{1}\selectfont \setlength{\belowcaptionskip}{0pt} \begin{table}\caption{\label{ref1} Position biased annotators detected in reference image 1.} \tiny \centering \begin{lrbox}{\tablebox} \begin{tabular}{||c|c|c||c|c|c||} \hline \textbf{ID} &\textbf{Left} &\textbf{Right} & \textbf{ID} &\textbf{Left} &\textbf{Right} \\ \hline \hline \textcolor{red}{\textbf{2}} &55 &0 & \textcolor{red}{\textbf{300}} &11 &0\\ \hline \textcolor{red}{\textbf{23}} &42 &0 & \textcolor{red}{\textbf{317}} &20 &0\\ \hline \textcolor{red}{\textbf{29}} &58 &0 & \textcolor{red}{\textbf{334}} &90 & 0\\ \hline \textcolor{red}{\textbf{99}} &29 &0 & \textcolor{blue}{\textbf{33}} &15 &1 \\ \hline \textcolor{red}{\textbf{177}} &77 &0 & \textcolor{blue}{\textbf{34}} & 8 &1 \\ \hline \textcolor{red}{\textbf{190}} &36 &0 & \textcolor{blue}{\textbf{103}} &74 &4 \\ \hline \textcolor{red}{\textbf{228}} &14 &0 & \textcolor{blue}{\textbf{133}} &20 &11\\ \hline \textcolor{red}{\textbf{241}} &22 &0 & \textcolor{blue}{\textbf{207}} &46 &2\\ \hline \textcolor{red}{\textbf{259}} &96 &0 & \textcolor{blue}{\textbf{260}} & 49 &2\\ \hline \textcolor{red}{\textbf{287}} &34 &0 & \textcolor{blue}{\textbf{304}} &17 &1\\ \hline \end{tabular} \end{lrbox} \scalebox{1.0}{\usebox{\tablebox}} \end{table} \par} \subsubsection{WorldCollege Ranking} We now apply the knockoff filter to the WorldCollege dataset, which is composed of 261 colleges. Using the \href{http://www.allourideas.org/}{Allourideas} crowdsourcing platform, a total of 340 distinct annotators from various countries (e.g., USA, Canada, Spain, France, Japan) are shown random pairs of these colleges and asked to decide which of the two is more attractive to attend. Finally, we obtain a total of 8,823 pairwise comparisons. We then apply the knockoff filter to the resulting dataset and find that both LASSO and ISS select 36 annotators as position biased, as shown in Table \ref{college} and Figure \ref{total_path}(d). It is easy to see that, similar to the human age dataset, the annotators picked out either click one side all the time or click one side with high probability.
{\renewcommand\baselinestretch{1}\selectfont \setlength{\belowcaptionskip}{0pt} \begin{table}\caption{\label{college} Position biased annotators detected in the WorldCollege dataset.} \tiny \centering \begin{lrbox}{\tablebox} \begin{tabular}{||c|c|c||c|c|c||} \hline \textbf{ID} &\textbf{Left} &\textbf{Right} & \textbf{ID} &\textbf{Left} &\textbf{Right} \\ \hline \hline \textcolor{red}{\textbf{56}} &17 &0 & \textcolor{blue}{\textbf{25}} &17 &6 \\ \hline \textcolor{red}{\textbf{75}} &0 &3 & \textcolor{blue}{\textbf{59}} &9 &29 \\ \hline \textcolor{red}{\textbf{101}} &26 &0 & \textcolor{blue}{\textbf{87}} &11 &62 \\ \hline \textcolor{red}{\textbf{115}} &34 &0 & \textcolor{blue}{\textbf{122}} &13 &9 \\ \hline \textcolor{red}{\textbf{145}} &0 &27 & \textcolor{blue}{\textbf{134}} &20 &7 \\ \hline \textcolor{red}{\textbf{166}} &35 &0 & \textcolor{blue}{\textbf{140}} &12 &4 \\ \hline \textcolor{red}{\textbf{209}} &127 &0 & \textcolor{blue}{\textbf{156}} &189 &67 \\ \hline \textcolor{red}{\textbf{222}} &0 &2 & \textcolor{blue}{\textbf{189}} &2 &12 \\ \hline \textcolor{red}{\textbf{245}} &0 &34 & \textcolor{blue}{\textbf{191}} &23 &7 \\ \hline \textcolor{red}{\textbf{256}} &0 &21 & \textcolor{blue}{\textbf{202}} &2 &8 \\ \hline \textcolor{red}{\textbf{267}} &45 &0 & \textcolor{blue}{\textbf{207}} &23 &10 \\ \hline \textcolor{red}{\textbf{268}} &148 &0 & \textcolor{blue}{\textbf{208}} &10 &2 \\ \hline \textcolor{red}{\textbf{275}} &1 &0 & \textcolor{blue}{\textbf{239}} &11 &2 \\ \hline \textcolor{red}{\textbf{289}} &35 &0 & \textcolor{blue}{\textbf{258}} &2 &13 \\ \hline \textcolor{red}{\textbf{299}} &31 &0 & \textcolor{blue}{\textbf{270}} &20 &70 \\ \hline \textcolor{red}{\textbf{321}} &33 &0 & \textcolor{blue}{\textbf{276}} &16 &54 \\ \hline \textcolor{red}{\textbf{323}} &35 &0 & \textcolor{blue}{\textbf{320}} &253 &324 \\ \hline \textcolor{red}{\textbf{338}} &0 &21 & \textcolor{blue}{\textbf{330}} &4 &10 \\ \hline \end{tabular} \end{lrbox} \scalebox{1}{\usebox{\tablebox}} \end{table} \par} \subsection{Discussion} \begin{figure} \caption{Number of left clicks vs. right clicks of abnormal and normal annotators on four real-world datasets.} \label{leftvsright} \end{figure} One may argue that setting a threshold on the ratio of left/right answers is an easy way to detect position biased annotators. To illustrate why simply setting a threshold does not work, Figure \ref{leftvsright} shows the click counts of each side (i.e., X-axis: number of left clicks; Y-axis: number of right clicks), where each marker ($\circ$/$\times$) represents one annotator. It is easy to see that there are indeed some overlaps between abnormal and normal annotators. For example, in the reading level dataset, the annotators with ID=69 and ID=57 both have a 6:0 ratio of left to right clicks. However, ID=69 is detected as an abnormal annotator, while ID=57 is deemed normal. To figure out the reason behind this, we further compute the Match Ratio (MR) of these two annotators with the global ranking scores obtained from all pairwise comparisons and find that $MR_{ID=69}=3/6$ and $MR_{ID=57}=5/6$. This indicates that the position biased annotator we picked out (i.e., ID=69) not only clicks one side but also deviates substantially from the majority. Similar results can easily be found in the other three datasets. \section{Related Work} \subsection{Outlier Detection} Outliers are often referred to as abnormalities, discordants, deviants, or anomalies in data.
Generally speaking, there can be two types of outliers: (1) samples as outliers; (2) subjects as outliers. Hawkins formally defined in~\cite{hawkins1980identification} the concept of an outlier as follows: ``An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.'' Outliers are rare events, but once they occur, they may severely destabilize models estimated from the noisy data. For type (1), many methods have been developed for outlier detection, such as distribution-based~\cite{barnett1994outliers}, depth-based~\cite{johnson1998fast}, distance-based~\cite{knorr1999finding,knorr2000distance}, density-based~\cite{breunig2000lof}, and clustering-based~\cite{jain1999data} methods. For subject-based outlier detection, some sophisticated methods have been proposed to model annotators' judgements. Recently, \cite{chen2013pairwise} proposes the Crowd-BT algorithm to detect spammers and malicious annotators: spammers assign random labels while malicious annotators assign the wrong label most of the time. Besides, \cite{raykar2011ranking} defines a score to rank the annotators for crowdsourced labeling tasks. Furthermore, \cite{raykar2012} presents an empirical Bayesian algorithm called SpEM to eliminate the spammers and estimate the consensus labels based only on the good annotators. However, a phenomenon that has annoyed researchers who have used paired comparison tests is position bias, or testing order effects. Until now, little work has addressed the detection of such position biased annotators, which is the main focus of this paper. \subsection{FDR Control and Knockoff Method} Most variable selection techniques in statistics, such as LASSO, suffer from over-selection: they pick up many false positives while leaving out few true positives. In order to offer guarantees on the accuracy of the selection, it is desirable to control the false discovery rate (FDR) among all the selected variables. The Benjamini-Hochberg (BH) procedure \cite{benjamini1995controlling} is a typical method known to control FDR under independence scenarios. Recently, \cite{barber2014controlling} developed the new knockoff filter method for FDR control with general dependent features, as long as the sample size is larger than the number of parameters. In this paper, we extend this method to our setting with mixed parameters, both non-sparse and sparse, while achieving the same FDR control. \subsection{Inverse Scale Space and Linearized Bregman Iteration} Linearized Bregman Iteration (LBI) has been widely used in image processing and compressed sensing \cite{OBG+05,YODG08}, even before its limit form, the Inverse Scale Space (ISS) dynamics \cite{burger2005nonlinear}, was studied. ISS/LBI has at least two advantages over the popular LASSO in variable selection: (1) ISS may give an unbiased estimator \cite{osher2014} under nearly the same model selection consistency condition as LASSO, whose estimators are always biased \cite{Fan2001}. (2) LBI, regarded as a discretization of the ISS dynamics, is an extremely simple algorithm which combines iterative gradient descent with soft thresholding. It runs in a single path, and regularization is achieved by early stopping, as in boosting algorithms \cite{osher2014}; this may greatly reduce the computational cost and is thus suitable for large scale implementation \cite{LBI_decentral}.
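To make the preceding description concrete, here is a rough Python sketch (ours) of the LBI iteration of Algorithm \ref{alg:LB} applied to model \eqref{eq:model1}. It is not the authors' implementation -- the R package Libra cited in Sec.~\ref{sec:ISS} is the reference implementation -- and the parameter values $\kappa$, $\triangle t$ and the stopping time below are illustrative assumptions only.
\begin{verbatim}
import numpy as np

def shrink(x):
    # soft thresholding with unit threshold, as in Algorithm 1
    return np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)

def lbi(Y, delta0, A, kappa=100.0, dt=1e-4, t_max=20.0):
    """Linearized Bregman Iteration for Y = delta0 theta + A gamma + noise."""
    w = np.zeros(A.shape[1])
    gamma = np.zeros(A.shape[1])
    theta = np.linalg.pinv(delta0.T @ delta0) @ (delta0.T @ Y)  # theta^0
    t, path = 0.0, []
    while t <= t_max:
        r = Y - delta0 @ theta - A @ gamma            # residual at (theta^k, gamma^k)
        w = w + dt * (A.T @ r)                        # w^{k+1}
        gamma = kappa * shrink(w)                     # gamma^{k+1}
        theta = theta + kappa * dt * (delta0.T @ r)   # theta^{k+1}
        t += dt
        path.append((t, gamma.copy()))
    return theta, gamma, path
\end{verbatim}
The first time $t$ at which a coordinate $\hat{\gamma}_j(t)$ becomes nonzero along this path yields the entering time statistic $Z_j=\sup\{1/t: \hat{\gamma}_j(t)\neq 0\}$ used by the knockoff filter.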
\section{Conclusion} Annotator's position bias is ubiquitous in crowdsourced ranking data and, to our knowledge, has not been systematically addressed in the literature. In this paper, we propose a statistical model for annotator's position bias with pairwise comparison data on graphs, together with new algorithms to reach statistically good estimates with FDR control based on a new design of knockoff filters. The FDR control here does not require prior knowledge of the sparsity of the position bias, i.e., the number of bad or ugly annotators. Such a framework is valid both for the traditional LASSO estimator and for the new dynamic approach based on ISS/LBI, which offers debiased estimators and scalable implementations desirable for crowdsourcing experiments. Experimental studies are conducted with both simulated examples and real-world datasets. Our results suggest that the proposed methodology is an effective tool to investigate annotator's abnormal behavior in modern crowdsourcing data. \setcounter{page}{1} \section*{Supplementary Material} {\bf{Sketch of the Proof of Theorem \ref{thm1}.}} Similar to the treatment in \cite{barber2014controlling}, we only need to prove that the knockoff statistics $W_j$ satisfy the following two properties: \begin{itemize} \item \emph{sufficiency property}: \\ $W = f([\delta_0,A_{ko}]^T[\delta_0,A_{ko}], [\delta_0,A_{ko}]^TY)$, i.e., $W$ depends only on $[\delta_0,A_{ko}]^T[\delta_0,A_{ko}]$ and $[\delta_0,A_{ko}]^TY$. \item \emph{antisymmetry property}:\\ Swapping $A_j$ and $\tilde{A}_j$ has the effect of switching the sign of $W_j$. \end{itemize} The second property is obvious because $W_j$ is constructed using the difference of entering times. We now prove the first property. For ISS and LBI, the whole path is determined only by \begin{align*} A_{ko}^T(Y-\delta_0 \theta-A_{ko}\gamma_{ko}) &= A_{ko}^TY - A_{ko}^T[\delta_0 ,A_{ko}][\theta^T,\gamma_{ko}^T]^T, \\ \delta_0^T (Y-\delta_0 \theta-A_{ko}\gamma_{ko}) &= \delta_0^TY - \delta_0^T[\delta_0 ,A_{ko}][\theta^T,\gamma_{ko}^T]^T, \end{align*} which depends only on $[\delta_0,A_{ko}]^T[\delta_0,A_{ko}]$ and $[\delta_0,A_{ko}]^TY$; hence so does the entering time $Z_{j}$. The same reasoning holds for LASSO since \begin{equation*} \min_{\theta,\gamma} \frac{1}{2}\|Y - [\delta_0,A_{ko}] [\theta^T,\gamma_{ko}^T]^T\|_2^2 + \lambda \|\gamma_{ko}\|_1 \end{equation*} is equivalent to \begin{eqnarray*} \min_{\theta,\gamma} &\frac{1}{2}(\|Y\|_2^2 + [\theta^T,\gamma_{ko}^T] [\delta_0,A_{ko}] ^T[\delta_0,A_{ko}] [\theta^T,\gamma_{ko}^T]^T\\ &-2 [\theta^T,\gamma_{ko}^T] [\delta_0,A_{ko}] ^TY )+ \lambda \|\gamma_{ko}\|_1. \end{eqnarray*} So the entire path is determined by $[\delta_0,A_{ko}]^T[\delta_0,A_{ko}]$ and $[\delta_0,A_{ko}]^TY$. \\ \\ {\bf{Proof of Theorem \ref{thm2}.}} Suppose $\tilde{X}$ is the knockoff feature matrix for \eqref{eq:model0}; then it satisfies \begin{equation}\label{cond2} \tilde{X}^T\tilde{X} = X^TX, \quad X^T\tilde{X} = X^TX - \mathrm{diag}(s). \end{equation} Let $B = A + U_2(\tilde{X} - X)$; then $\tilde{X} = U_2^TB$ and it can be verified that \begin{equation*} B^TB= A^TA, \quad A^TB = A^TA-\mathrm{diag}(s), \quad \delta_0^TB=\delta_0^TA, \end{equation*} which means $B$ is a valid knockoff feature matrix for \eqref{eq:model1}. Conversely, let $\tilde{A}$ be knockoff features for \eqref{eq:model1}; it is also easy to verify that $\tilde{X} = U_2^T\tilde{A}$ satisfies condition \eqref{cond2}. This establishes a correspondence between $\tilde{X}$ and $\tilde{A}$.
The equivalence of the knockoff statistics comes from the equivalence of the solution paths in both approaches. To see this, note that \eqref{eq:iss2} means $\hat{\theta} = (\delta_0^T\delta_0)^{\dag}\delta_0^T(Y - A_{ko}\gamma_{ko})$; plugging $\hat{\theta}$ into \eqref{eq:iss1}, we get \begin{eqnarray*} \frac{dp}{dt} &=&A_{ko}^T (Y-\delta_0 \hat{\theta}-A_{ko}\gamma_{ko}) \\ &=& A_{ko}^T(U_2U_2^T(Y-A_{ko}\gamma_{ko}))\\ &= & (U_2^TA_{ko})^T(U_2^TY-U_2^TA_{ko}\gamma_{ko}). \end{eqnarray*} This is equivalent to the ISS for the second procedure, model \eqref{eq:model2} in Remark 1. So the two ISS solution paths in both approaches are identical. The same reasoning holds for LASSO: the derivative of \eqref{eq:lasso} w.r.t. $\theta$ is zero at the optimal estimator, which means \begin{equation*} 0 = \delta_0^T (Y-\delta_0 \hat{\theta}-A_{ko}\gamma_{ko}); \end{equation*} this is exactly \eqref{eq:iss2}. So plugging $\hat{\theta}$ into \eqref{eq:lasso}, we get \begin{eqnarray*} &&\|Y-\delta_0 \hat{\theta}-A_{ko}\gamma_{ko}\|_2^2 \\ &=& \|(I - \delta_0(\delta_0^T\delta_0)^{\dag}\delta_0^T)^T(Y-A_{ko}\gamma_{ko})\|_2^2\\ &= & \|U_2U_2^T(Y-A_{ko}\gamma_{ko})\|_2^2\\ &= & \|U_2^TY-U_2^TA_{ko}\gamma_{ko}\|_2^2. \end{eqnarray*} This is in fact the $\ell_2$ loss for the second procedure in Remark 1. Finally, identical paths lead to the same knockoff statistics, which completes the proof. \end{document}
\begin{document} \title{Positive definite distributions and normed spaces } \author{N.J. Kalton} \address{Department of Mathematics\\ University of Missouri\\ Columbia\\ Missouri 65211} \email{[email protected]} \author{M. Zymonopoulou}\address{ Department of Mathematics\\ Case Western Reserve University\\ Cleveland} \email{[email protected]} \thanks{The first author acknowledges the support of NSF grant DMS-0555670. The second author was partially supported by the NSF grant DMS-0652722} \begin{abstract} We answer a question of Alex Koldobsky. We show that for each $-\infty<p<2$ and each $n\ge 3-p$ there is a normed space $X$ of dimension $n$ which embeds in $L_s$ if and only if $-n<s\le p.$ \end{abstract} \maketitle{} \textbf{2000 Mathematics Subject Classification} 52A21 \textbf{Keywords} Absolute sums, Isometric embeddings. \section{Introduction}\label{intro} Let $\|\cdot\|$ be a norm on $\mathbb R^n.$ It is well-known that if $p>0$ and not an even integer then $X=(\mathbb R^n,\|\cdot\|)$ embeds isometrically into $L_p$ if and only if $\Gamma(-p/2)\|\cdot\|^p$ is a positive definite distribution (see \cite{Koldobsky2005} Theorem 6.10). In \cite{Koldobsky1999b} this idea was extended to the case when $p<0.$ Let $\mathcal S(\mathbb R^n)$ denote the Schwartz class of rapidly decreasing functions on $\mathbb R^n.$ If $p<0$ and $n+p>0$ then the function $\|x\|^p$ is locally integrable and we say that $X$ embeds (isometrically) into $L_p$ if the distribution $\|\cdot\|^p$ is positive definite, i.e. for every non-negative even test function $\phi\in\mathcal S(\mathbb R^n),$ $$\langle (\|\cdot\|^p)^{\wedge},\phi\rangle\ge 0.$$ This can be expressed in the following form: We say that $X=(\mathbb{R}^n,\|\cdot\|)$ embeds into $L_{p}$, where $p<0<p+n,$ if there exists a finite Borel measure $\mu$ on $S^{n-1}$ so that for every even test function $\phi\in\mathcal S(\mathbb{R}^n)$ \begin{equation}\label{Def:p<0}\int_{\mathbb{R}^n}\|x\|^{p}\phi(x)dx=\int_{S^{n-1}}\Bigl(\int_0^{\infty}t^{-p-1}\hat{\phi}(t\xi)dt\Bigr)d\mu (\xi). \end{equation} Later in \cite{KaltonKoldobskyYaskinYaskina2007} the appropriate definition for $p=0$ was explored: a normed space $X$ embeds into $L_0$ if and only if $-\ln \|x\|$ is positive definite outside of the origin of $\mathbb{R}^n$. Part of the motivation for this definition is its connection to intersection bodies. The class of intersection bodies was defined by Lutwak \cite{Lutwak1988} and played an important role in the solution of the Busemann-Petty problem. Let $K$ and $L$ be two origin symmetric star bodies in $\mathbb{R}^n.$ We say that $K$ is the {\it intersection body of $L$} if the radius of $K$ in every direction is equal to the volume of the central hyperplane section of $L$ perpendicular to this direction, i.e. for every $\xi\in S^{n-1},$ $$\|\xi\|_K^{-1}=\mbox{\rm Vol} _{n-1}(L\cap \xi^{\perp}),$$ where $\|x\|_K=\min \{a\geq 0 :x\in aK\}$ is the Minkowski functional of $K$. Note that if $K$ is convex then $\|\cdot\|_K$ is a norm. The class of {\it intersection bodies} is defined as the closure, in the radial metric, of the set of intersection bodies of all star bodies. This class was extended in \cite{Koldobsky1999} and \cite{Koldobsky2000} to the class of $k$-intersection bodies, where $k\in\mathbb N$. Koldobsky in \cite{Koldobsky2000} showed that $X$ embeds into $L_{-k}$ if and only if its unit ball is a $k$-intersection body.
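As a simple illustration of the definition of an intersection body (this example is ours and is not taken from the original text): if $L=B_2^n$ is the Euclidean unit ball, then every central hyperplane section of $L$ is an $(n-1)$-dimensional Euclidean unit ball, so that $$\|\xi\|_K^{-1}=\mbox{\rm Vol}_{n-1}(B_2^n\cap \xi^{\perp})=\mbox{\rm Vol}_{n-1}(B_2^{n-1}), \qquad \xi\in S^{n-1},$$ and the intersection body of $B_2^n$ is the dilated Euclidean ball $\mbox{\rm Vol}_{n-1}(B_2^{n-1})\,B_2^n$.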
For more on $k$-intersection bodies see \cite{Koldobsky2005} (Chapters 4 and 6) or \cite{KoldobskyYaskin2008} (Chapters 6 and 7). If $n>-p$, we denote by $\mathcal I_p(n)$ the collection of the finite-dimensional Banach spaces $X$ of dimension $n$ which embed into $L_p$ where $-\infty<p<\infty;$ we will adopt the convention that $\mathcal I_p(n)=\mathcal B_n,$ the collection of all spaces of dimension $n$, when $n\le -p.$ It was shown by Koldobsky \cite{Koldobsky1999b} that if $p\le 3-n$ then $\mathcal I_p(n)=\mathcal B_n.$ Let $\mathcal I_p=\cup_{n\in\mathbb N}\mathcal I_p(n).$ A classical result of Bretagnolle, Dacunha-Castelle and Krivine \cite{Bretagnolleetal1965/6} shows that if $0<p\le q\le 2$ then $\mathcal I_q\subset\mathcal I_p.$ Combining results of \cite{KaltonKoldobskyYaskinYaskina2007} and \cite{Koldobsky1999b} gives that $\mathcal I_q\subset \mathcal I_p$ where $q\in [0,2]$ and $p\le q.$ It is, however, an open problem whether the same is true when $q<0.$ E. Milman \cite{MilmanE2006} showed that if $m\in\mathbb N$ and $p<0$ then $\mathcal I_{p}\subset\mathcal I_{mp}.$ A second problem in this area is to establish whether the classes $\mathcal I_p(n)$ for $-\infty<p\le 1$ are really distinct (see for example \cite{KoldobskyYaskin2008} p.99). In this article we give a complete answer to this question. Previously only some partial results had been established. For the case $0<p\le 1,$ it is shown in \cite{KaltonKoldobsky2004} that if $0<p<s\le 1$ then $\mathcal I_p\neq \mathcal I_s.$ However the methods of \cite{KaltonKoldobsky2004} are infinite-dimensional and only show that for given $0<p<q\le 1$ we have $\mathcal I_p(n)\neq\mathcal I_q(n)$ for some $n=n(p,q).$ It was noted in \cite{KaltonKoldobskyYaskinYaskina2007} that the space $\mathbb R\oplus_2\ell_1^n$ belongs to $\mathcal I_0$ for all $n$ but for each $p>0$ there is an $n\in\mathbb{N}$ so that $\mathbb R\oplus_2\ell_1^n\notin \mathcal I_p.$ In the case where $p,q<0$ it is clear that if $p \le 3-n<q$ then $\mathcal I_q(n)$ is strictly contained in $\mathcal I_p(n)=\mathcal B_n.$ In fact $\ell_s^n\notin \mathcal I_q(n)$ if $2<s\le \infty$ (see \cite{Koldobsky2005} Theorem 4.13 or \cite{Koldobsky1998a}). For other values of $n$, there are some recent partial results. In \cite{Schlieper2007} it was shown that $\mathcal I_{-4}(n)\setminus \mathcal I_{-2}(n)\neq \emptyset$ for all $n\ge 7$ (and hence for $n\ge 5$) and that $\mathcal I_{-1/3}(n)\setminus\mathcal I_{-1/6}(n)\neq \emptyset$ for all $n\ge 4.$ More recently Yaskin \cite{Yaskin2008} showed that if $l<k$ are integers and $k>3-n$ then $\mathcal I_l(n)\setminus \mathcal I_k(n)\neq \emptyset.$ Our main example is that if $X=\ell_2^m\oplus_r\ell_q^n$ where $1\le q<r\le 2$ and $n\ge 2$ then $X\in\mathcal I_p$ if and only if $p\le q-m.$ Thus it follows immediately that if $p\in (3-n,0)$ there exists a normed space $X$ so that $X\in\mathcal I_p(n)$ but for every $q>p$, $X\notin\mathcal I_q(n).$ Note that even in the case when $0<p<1$ this improves considerably the results in \cite{KaltonKoldobsky2004} and the examples are much more natural. To obtain these results we prove a general result on absolute direct sums of normed spaces. Let $X$ and $Y$ denote two finite-dimensional Banach spaces. Let $N$ be any absolute norm on $\mathbb{R}^2,$ i.e.
$N(x,y)=N(|x|,|y|),$ satisfying the normalization property $N(1,0)=N(0,1)=1.$ We consider the absolute $N$-direct sum of $X$ and $Y$, denoted $X\oplus_N Y$, which is defined as the space of pairs $\{(x,y),x\in X, y\in Y\}$ equipped with the norm $$ \|(x,y)\|=N(\|x\|_X,\|y\|_Y), \qquad x\in X,\ y\in Y.$$ In the special case where $N(x,y)=(x^r+y^r)^{1/r},$ we write $X\oplus_NY=X\oplus_r Y.$ We examine the situation when $X\oplus_NY\in\mathcal I_p.$ There is an earlier result of Koldobsky of this type; see \cite{Koldobsky2005}, Theorem 4.21 or \cite{Koldobsky1998}. Koldobsky shows that if $p<0<2<q$ and $X\oplus_qY\in\mathcal I_p$ with $\text{dim } Y\ge 1$ then $\text{dim } X\le 2-p.$ In fact this result holds under the more general hypothesis that $p<2<q$. A typical result we prove is that if $r\le 2$ and $X\oplus_rY\in\mathcal I_p$ where $p\le 2$ then $X\in \mathcal I_q$ as long as $p\le q\le m+p$ where $m=\text{dim } Y.$ We consider a more general absolute norm $N$ and use functional analytic and probabilistic methods as well as the theory of Gaussian processes, rather than the usual distributional approach from \cite{Koldobsky2005} or \cite{KoldobskyYaskin2008}. The remainder of the paper is devoted to showing that the examples $X=\ell_2^m\oplus_r\ell_q^n$ where $1\le q<r\le 2$ and $n\ge 2$ belong to $\mathcal I_p$ if $p\le q-m.$ This requires a probabilistic approach using stable random variables. \section{Gaussian embeddings} Throughout this paper, $(\Omega,\mu)$ will be a Polish space with a $\sigma$-finite Borel measure and $\mathcal M(\Omega,\mu)$ will be the space of all real-valued measurable functions on $\Omega.$ In the special case when $\mu(\Omega)=1$ we say that $\mu$ is a probability measure and the members of $\mathcal M(\Omega,\mu)$ are then called {\it random variables}. Let $X$ be a finite dimensional normed space and suppose $T:X\to \mathcal M(\Omega,\mu)$ is a linear map. Suppose $0<p<\infty$. We shall say that $T$ is a {\it $c$-standard embedding} of $X$ into $L_p(\Omega,\mu)$, where $c>0,$ if $$\|x\|^p=\frac{1}{ c^p}\int_{\Omega}|Tx|^p\,d\mu, \qquad x\in X.$$ Let $(\Omega',\mathbb P)$ be some probability space. A measurable map $\xi:\Omega'\to X$ is called an $X$-valued Gaussian process if it takes the form $$ \xi=\sum_{j=1}^m \gamma_jx_j$$ where $x_1,\ldots,x_m\in X$ and $\{\gamma_1,\ldots,\gamma_m\}$ is a sequence of independent normalized Gaussians. The {\it rank} of $\xi$ is defined to be the dimension of the space spanned by $\{x_1,\ldots,x_m\}$; we say that $\xi$ has {\it full rank} if its rank is equal to the dimension of $X.$ Suppose $-\infty<p<\infty$ and $X$ has dimension $n>-p$. A linear map $T:X\to\mathcal M(\Omega,\mu)$ is called a {\it $c$-Gaussian embedding} of $X$ into $L_p(\Omega,\mu)$ if \begin{equation}\label{Gaussiandef} \mathbb E\|\xi\|^p=\frac{1}{c^p}\int_{\Omega}(\sum_{j=1}^n(Tx_j)^2)^{p/2}\,d\mu\end{equation} whenever $\xi$ is an $X$-valued Gaussian random variable of full rank.
In fact it can be shown quite easily that \eqref{Gaussiandef} holds for all $\textbf {x}i$ of rank greater than $-p.$ It should be noted that if $p\lambdae -1$ it is not generally true that $\int|Tx|^p<\infty$ for each $x\in X.$ It will be important for us that the existence of a Gaussian embedding in $L_p$ in the case when $p<0$ is equivalent to the fact that $X\in\muathcal I_p$ according to the definition in \cite{Koldobsky1999b} via positive definite functions (see \eqref{Def:p<0}). One direction of this equivalence appears implicitly in \cite{KaltonKoldobsky2005} but the converse direction has not apparently appeared before, although it has been known for a number of years. We first need a preparatory Lemma. Let $g_a$ denote the density function $$g_a(x)=(2\pi)^{-n/2} a^{-n}e^{-|x|^2/2a^2}, \qquad x\in\muathbb{R}^n$$ For $y\in Y$ we define $h_y(x)=(x,y).$ If $f\in\muathcal S(\muathbb R^n)$ we denote by $\tau_yf$ the function $\tau_yf(x)=f(x-y).$ \betaegin{lm}\lambdaabel{prep} Suppose $n\in\muathbb N$ and $\rho\in\muathcal S'(\muathbb R^n)$ is such that $\lambdaangle e^{-(Ax,x)},\rho\rangle=0$ for every positive definite matrix $A$. Then for $a>0$ and fixed $y\in\muathbb R^n$, we have $$\lambdaangle \tau_yg_a+\tau_{-y}g_a,\rho\rangle=0, \qquad y\in\muathbb R^n.$$\end{lm} \betaegin{proof} We start with two observations about the case $n=1.$ First we observe that the map $\{z:\ \muathbb{R}e z >0\}\to \muathcal S(\muathbb R)$ defined by $z\muapsto e^{-zx^2/2}$ is analytic into the locally convex Fr\'echet space $\muathcal S(\muathbb R).$ Similarly so is the map $\muathbb C\to\muathcal S(\muathbb R)$ defined by $z\muapsto e^{-a^2(x^2+2xz)/2}.$ From this it is easy to deduce that if $u\in\muathbb R^n$ is a unit vector and $a>0$ then the map $E_{a}(z)(x)=g_a(x) e^{-za^2(x,u)^2}$ is analytic for $Re z>-a^2$. Similarly $D_{a,u}(z)(x)=g_a(x)e^{-za^2(x,u)}$ is analytic on $\muathbb C.$ By assumption $\lambdaangle E_a(z),\rho\rangle=0$ if $z>-a^2$ is real. Hence $\lambdaangle E_a(z),\rho\rangle=0$ for all $z$ with $\muathbb{R}e z>-a^2.$ In particular $\lambdaangle E_a^{(k)}(0),\rho\rangle=0$ for $k=0,1,\lambdadots.$ This implies that $\lambdaangle h_u^{(2k)}g_a,\rho\rangle=0$ for all $k.$ Now $D_{a,u}^{(k)}(0)(x)= h_u^k g_a.$ Hence it follows that all the derivatives of $\rho\circ D_{a,u}(z)+\rho\circ D_{a,-u}(z)$ vanish at $0$ and thus $\lambdaangle D_{a,u}(z)+D_{a,-u}(z),\rho\rangle=0$ for all $z\in\muathbb C.$ In particular $$e^{t^2} \lambdaangle D_{a,u}(z)+D_{a,-u}(z),\rho\rangle=0, \qquad t\ge 0$$ which implies $$ \lambdaangle \tau_{tu}g_a+\tau_{-tu}g_a,\rho\rangle=0, \qquad 0\lambdae t<\infty.$$ Thus $$ \lambdaangle \tau_yg_a+\tau_{-y}g_a,\rho\rangle=0, \qquad y\in \muathbb R^n.$$ \end{proof} \betaegin{prop}\lambdaabel{equiv} Suppose $p<0.$ Let $X$ be a normed space of dimension $n>-p.$ Then $X\in\muathcal I_p$ if and only if there is a Polish space $\Omegamega$, a $\sigma$-finite Borel measure $\muu$ on $\Omegamega$ and a linear map $T:X\to\muathcal M(\Omegamega,\muu)$ which is a $c$-Gaussian embedding for some $c>0.$\end{prop} \betaegin{proof} First we assume that $X\in\muathcal I_p.$ Identify $X$ with $\muathbb R^n$ and suppose $\muu$ is the finite Borel measure on $S^{n-1}$ given by \eqref{Def:p<0}. Then Lemma 3.2 of \cite{KaltonKoldobsky2005} gives that the canonical map $Tx(u)=(x,u)$ defines a $c$-Gaussian embedding of $X$ into $ L_p(S^{n-1},\muu).$ Let us prove the converse. 
Assume $T:X\to\mathcal M(\Omega,\mu)$ is a $c$-Gaussian embedding of $X$ into $L_p(\Omega,\mu).$ As usual we identify $X$ with $\mathbb R^n$ and denote by $|\cdot|$ the usual Euclidean norm. Let $\{e_1,\ldots,e_n\}$ be the canonical basis. Define $\Phi:\Omega\to \mathbb R^n$ by $\Phi(\omega)=(Te_j(\omega))_{j=1}^n.$ Note that $|\Phi(\omega)|>0$ $\mu$-almost everywhere. Let $d\mu'=|\Phi(\omega)|^p\,d\mu$; then $\mu'$ is a finite Borel measure on $\Omega.$ Let $\pi$ be the canonical retraction of $\mathbb R^n\setminus\{0\}$ onto $S^{n-1}$ defined by $\pi(x)=x/|x|$. We define a finite positive Borel measure $\nu$ on $S^{n-1}$ by $\nu= c^{-p}2^{\frac{p}{2}+1}(\Gamma(-p/2))^{-1}\mu'\circ\Phi^{-1}\circ\pi^{-1}.$ Suppose $x_1,\ldots,x_n$ are linearly independent in $X$ and let $\xi=\sum_{j=1}^n\gamma_jx_j$ be an $X$-valued Gaussian process. Let $\psi$ be the probability density function associated to this process. Then \begin{align}\label{scalar} \int_{\mathbb R^n} \|x\|^p\psi(x)\,dx&=\mathbb E\Big\|\sum_{j=1}^n\gamma_jx_j\Big\|^p =\frac{1}{c^{p}}\int_{\Omega}\Big(\sum_{j=1}^n|Tx_j|^2\Big)^{p/2}d\mu \nonumber \\ \intertext{and, using the definition of the measure $\mu'$ and then of $\nu$, the latter is equal to} &= \frac{1}{c^{p}}\int_{\Omega}\Big(\sum_{j=1}^n(x_j,\pi\Phi(\omega))^2\Big)^{p/2}d\mu'(\omega) \nonumber \\ &=2^{-\frac{p}{2}-1}\Gamma(-p/2)\int_{S^{n-1}} \Big(\sum_{j=1}^n(x_j,u)^2\Big)^{p/2}d\nu(u)\\ \intertext{so that, by the integral representation of the Gamma function, \eqref{scalar} becomes} &= \int_{S^{n-1}}\int_0^{\infty}t^{-p-1}e^{-t^2\sum_{j=1}^n(x_j,u)^2/2}\,dt\,d\nu(u)\nonumber\\ &= \int_{S^{n-1}}\int_0^{\infty}t^{-p-1}\hat\psi(tu)\,dt\,d\nu(u),\nonumber\end{align} where $\hat{\psi}$ is the characteristic function of the process. Thus if $P$ is a positive definite matrix and $\psi(x)=e^{-(Px,x)}$ then $$ \int_{\mathbb R^n}\|x\|^p\psi(x)\,dx= \int_0^{\infty}t^{-p-1}\int_{S^{n-1}}\hat\psi(tu)\,d\nu(u)\,dt.$$ Let us define a distribution $\rho\in\mathcal S'$ by $$ \langle \rho,\psi\rangle= \int_{\mathbb R^n}\|x\|^p\psi(x)\,dx-\int_0^{\infty}t^{-p-1}\int_{S^{n-1}}\hat\psi(tu)\,d\nu(u)\,dt.$$ Then $\rho$ satisfies the conditions of the preceding lemma, and so we have: \begin{equation}\label{fromlem} \int_{\mathbb R^n}\|x\|^p (g_a(x+y)+g_a(x-y))\,dx =2\int_0^{\infty}t^{-p-1}\int_{S^{n-1}}\cos(y,tu)\hat g_a(tu)\,d\nu(u)\,dt.\end{equation} Now let $\phi$ be an even test function on $\mathbb{R}^n.$ Then $$ \phi*g_a(x)=\int_{\mathbb R^n}g_a(x-y)\phi(y)\,dy=\frac12\int_{\mathbb R^n}\phi(y)(g_a(x-y)+g_a(x+y))\,dy.$$ Thus, using the above equality, equation \eqref{fromlem} and the fact that $\hat g_a(x)=e^{-a^2|x|^2/2},$ we have \begin{align*} \int_{\mathbb R^n}\|x\|^p\, (\phi*g_a)(x)\,dx &=\frac12 \int_{\mathbb R^n}\phi(y)\int_{\mathbb R^n}\|x\|^p(g_a(x-y)+g_a(x+y))\,dx\,dy\\ &=\int_{\mathbb R^n}\phi(y)\int_0^{\infty}t^{-p-1}\int_{S^{n-1}}\cos (y,tu)e^{-t^2a^{2}/2}\,d\nu(u)\,dt\,dy\\ \intertext{and, applying Fubini's theorem,} &=\int_0^{\infty}t^{-p-1}\int_{S^{n-1}}e^{-t^2a^{2}/2}\int_{\mathbb R^n}\cos(y,tu)\phi(y)\,dy\,d\nu(u)\,dt\\ &= \int_0^{\infty}t^{-p-1}\int_{S^{n-1}}e^{-t^2a^{2}/2}\hat\phi(tu)\,d\nu(u)\,dt,\end{align*} since $\phi$ is even. Letting $a\to 0$ we get \eqref{Def:p<0}.\end{proof} Let us remark that in the above Proposition the space $X$ need not be a Banach space.
In other words, the existence of a Gaussian embedding of $X$ into some $L_p$ for $p<0,$ requires no convexity for its unit ball. We will not need to consider the case $p=0$ separately; this can always be handled by reducing to the case $p<0.$ We refer the reader to \cite{KaltonKoldobskyYaskinYaskina2007} for a discussion of this case. The following fact is very elementary but will be used repeatedly. \betaegin{prop}\lambdaabel{closed} Let $X$ be a finite-dimensional normed space. Then the set of $p$ so that $X\in\muathcal I_p$ is closed.\end{prop} \betaegin{proof} Suppose $q$ is a limit point of the set $\muathcal P=\{p:\ X\in\muathcal I_p\}.$ If $q\lambdae -\text{dim } X$ then the result holds trivially by the definition of $\muathcal I_p.$ Suppose $-\text{dim } X<q<0;$ then $q\in\muathcal I_p$ by Lemma 1 of \cite{Koldobsky1998a}. For $q=0$ a modification of Theorem 6.4 of \cite{KaltonKoldobskyYaskinYaskina2007} gives the result. If $q>0$ then the fact that $q\in\muathcal I_p$ is well-known (and follows from considerations of positive definite functions).\end{proof} \section{Moment functions} In this section we will discuss moment functions of positive measurable functions on a measure space $(\Omegamega,\muu)$ and of random variables. We first record for future use: \betaegin{prop}\lambdaabel{criterion} Let $(\Omegamega,\muu)$ be a $\sigma-$finite measure space and suppose $\muathcal U$ is an open subset of $\muathbb C^n.$ Let $\phii:\Omegamega\times \muathbb C^n\to \muathbb C$ be a function such that for each $(z_1,\lambdadots,z_n)\in \muathcal U$ the map $\omega\muapsto \phii(\omega,z_1,\lambdadots,z_n)$ is measurable, and for each $\omega\in \Omegamega$ the map $(z_1,\lambdadots,z_n)\muapsto \phii(\omega,z_1,\lambdadots,z_n)$ is holomorphic on $\muathcal U.$ Let $$\Phi(z_1,\lambdadots,z_n)=\int_{\Omegamega}|\phii(\omega,z_1,\lambdadots,z_n)|d\muu(\omega), \qquad (z_1,\lambdadots,z_n)\in\muathcal U.$$ Assume that for every compact subset $K$ of $\muathcal U$ we have $$ \sup\{\Phi(z_1,\lambdadots,z_n):\ (z_1,\lambdadots,z_n)\in K\}<\infty.$$ Then $$ F(z_1,\lambdadots,z_n)=\int_{\Omegamega} \phii(\omega,z_1,\lambdadots,z_n)d\muu(\omega)$$ defines a holomorphic function on $\muathcal U.$\end{prop} Let us assume for the moment, merely that $\muu$ is $\sigma-$finite. The {\it distribution} of $f\in\muathcal M(\Omegamega,\muu)$ is the positive Borel measure $\nuu_f$ on $\muathbb R$ defined by $\nuu_f(B)=\muu\{\omega:\ f(\omega)\in B\}.$ If $f\in \muathcal M(\Omegamega,\muu)$ and $f'\in\muathcal M(\Omegamega',\muu')$ we write $f\alphapprox f'$ if $f$ and $f'$ have the same distribution, i.e. $\nuu_f=\nuu_{f'}.$ We also write $f\otimes f'$ for the function $f\otimes f'(\omega,\omega')=f(\omega)f'(\omega')$ in $\muathcal M(\Omegamega\times\Omegamega',\muu\times\muu').$ We say that $f\in\muathcal M(\Omegamega,\muu)$ is {\it positive} if $\muu\{f\lambdae 0\}=0.$ In this case $\nuu_f$ restricts to a Borel measure on $(0,\infty),$ and we write $f\in\muathcal M_+(\Omegamega,\muu).$ \betaegin{prop}\lambdaabel{extend} Let $f\in\muathcal M_+(\Omegamega,\muu)$, and suppose $f^p$ is integrable for $a<p<b$. 
Define $$F(z)=\int_{\Omega}f^z\, d\muu, \eqno a<\muathbb{R}e z <b.$$ Then, $F$ is analytic on the strip $a<\muathbb{R}e z<b$ and \nuewline (i) If $\lambdaiminf_{p\to b}F(p)<\infty$ then $$\lambdaim_{p\to b}F(p)=\int f^bd\muu<\infty.$$ \nuewline (ii) If $\lambdaiminf_{p\to a}F(p)<\infty$ then $$\lambdaim_{p\to a}F(p)=\int f^ad\muu<\infty.$$ \nuewline (iii) If $F$ can be extended to an analytic function on $(\alphalpha,\betaeta)$ where $\alphalpha\lambdae a<b\lambdae \betaeta$ then $f^p$ is integrable for $\alphalpha<p<\betaeta$ and $$F(z)=\int_{\Omega} f^z\, d\muu, \eqno \alphalpha<\muathbb{R}e z <\betaeta.$$ \end{prop} \betaegin{proof} The fact $F$ is analytic follows from Proposition \ref{criterion}. (i) and (ii) follow easily from Fatou's Lemma. We now prove (iii). Let $c$ be the supremum of all $a <\textbf {x}i <b$ such that $f^z$ is integrable on $(a,\textbf {x}i)$ and $$F(z)=\int_{\Omega}f^z d\muu, \eqno a<\muathbb{R}e z <\textbf {x}i.$$ \nuoindent We will show that $c=\beta.$ Then a similar argument for the left-hand side of the interval will complete the proof. Assume that $ \ c<\betaeta.$ The function $f^z\chi _{\{f\lambdaeq 1\}}$ is integrable for $a<\muathbb{R}e z <\beta.$ Let $$F_0(z)=\int_{\{f\lambdaeq 1\}} f^z d\muu, \eqno \alpha<\muathbb{R}e z <\beta.$$ Let $F_1(z)=F(z)-F_0(z).$ Then $$\int_{\{f> 1\}} f^z (\lambdaog f)^m d\muu=F_1^{(m)}(z), \eqno \alpha<\muathbb{R}e z <c, \ m=0,1,2,\lambdadots.$$ Using Fatou's Lemma, as $z\rightarrow c,$ we see that $$\int_{\{f> 1\}} f^c (\lambdaog f)^m d\muu\lambdaeq \lambdaiminf \int_{\{f> 1\}} f^z (\lambdaog f)^m d\muu=F_1^{(m)}(c)$$ and hence there exists $0<\tau<\betaeta-c$ so that $$\int_{\{f> 1\}} f^{c+t}d\muu=\sum\lambdaimits_{m=0}^{\infty}\frac{1}{m!}\int_{\{f> 1\}} f^c (\lambdaog f)^m t^m d\muu<\infty, \eqno 0<t<\tau.$$ It follows that $$F_1(z)=\int_{\{f> 1\}} f^z d\muu, \eqno a<\muathbb{R}e z<c+\tau,$$ which implies that $$F(z)=\int_{\Omega}f^z d\muu, \eqno a<\muathbb{R}e z <c+\tau.$$ The latter contradicts the choice of $c.$ \end{proof} We now recall the definitions and properties of some elementary random variables. Let $\gamma$ be a normalized Gaussian random variable. Then $\gamma$ has the distribution of the function $f(t)=t$ on $\muathbb R$ with the measure $(2\pi)^{-1/2}e^{-x^2/2}.$ We will use $(\gamma_k)_{k=1}^{\infty}$ to denote a sequence of independent normalized Gaussians defined on some probability space. It is known that if $\gamma$ is a normalized Gaussian r.v. then for $-1<p<\infty,$ $\muathbb E(|\gamma|^p)<\infty$ . We define \betaegin{equation}\lambdaabel{Gz} G(z)=\muathbb E(|\gamma|^z),\qquad -1<\muathbb{R}e z<\infty.\end{equation} It is in fact easy to give formulae for $G$, \betaegin{equation}\lambdaabel{GzG} G(z)=\frac{1}{\sqrt{\pi}}2^{z/2}\Gammaamma((z+1)/2)=2^{-z/2}\frac{2\Gammaamma(z)}{\Gammaamma(z/2)},\qquad -1<\muathbb{R}e z<\infty\end{equation} This uses the following important formula (see \cite{Remmert1998} p.45) \betaegin{equation}\lambdaabel{99} \Gammaamma(z)=\frac{2^{z-1}}{\sqrt{\pi}}\Gammaamma(z/2)\Gammaamma((z+1)/2), \qquad z\nueq 0,-1,-2,\lambdadots.\end{equation} It will be convenient to use $G$ in later calculations. 
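As a quick sanity check (included only for the reader's convenience), the first equality in \eqref{GzG} follows at once from the substitution $s=x^2/2$ in the defining integral:
$$ G(z)=\mathbb E(|\gamma|^z)=\frac{2}{\sqrt{2\pi}}\int_0^\infty x^z e^{-x^2/2}\,dx =\frac{2^{z/2}}{\sqrt{\pi}}\int_0^\infty s^{(z-1)/2}e^{-s}\,ds =\frac{2^{z/2}}{\sqrt{\pi}}\Gamma\Bigl(\frac{z+1}{2}\Bigr), \qquad \Re z>-1,$$
and the second equality in \eqref{GzG} then follows by applying \eqref{99}.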
We denote by $\varphi_p$ a normalized positive $p$-stable random variable, where $0<p<1$, which is characterized by $$ \mathbb E(e^{-t\varphi_p})=e^{-t^p}, \eqno 0<t<\infty.$$ From the formula $$ x^z\Gamma(-z) =\int_0^{\infty}t^{-z-1}e^{-xt}\,dt, \eqno \Re z<0,$$ and analytic continuation it is easy to deduce that \begin{equation}\label{phip} \Phi_p(z):=\mathbb E(\varphi_p^z)= \frac{\Gamma(-z/p)}{p\Gamma(-z)}, \qquad -\infty<\Re z<p.\end{equation} Finally, for $0<p<2$ we use $\psi_p$ to denote a normalized symmetric $p$-stable random variable, which is characterized by $$ \mathbb E(e^{it\psi_p})=e^{-|t|^p}, \eqno -\infty<t<\infty.$$ It may be shown that $\psi_p\approx \sqrt{2\varphi_{p/2}}\otimes \gamma$, so that $$ \Psi_p(z)=\mathbb E(|\psi_p|^{z})=2^{z/2} \Phi_{p/2}(z/2)G(z), \eqno -1<\Re z< p.$$ Let us remark at this point that the functions $G,\Phi_p$ and $\Psi_p$ are superfluous in the sense that each can be expressed fairly easily in terms of the Gamma function. However, it seems to us useful to keep them separate in order to follow some of the calculations later in the paper. We will need the following lemma later: \begin{lm}\label{1} Let $\gamma_1,\ldots,\gamma_m$ be independent normalized Gaussian random variables. Then, if $\Re w>-1$ and $\Re(w+z)>-m$, $$ \mathbb E|\gamma_1|^w(\gamma_1^2+\cdots+\gamma_m^2)^{z/2}= \frac{G(w)G(w+z+m-1)}{G(w+m-1)}.$$ \end{lm} \begin{proof} It is easy to calculate $$ \mathbb E(\gamma_1^2+\cdots+\gamma_m^2)^{z/2}= \frac{G(z+m-1)}{G(m-1)}, \qquad \Re z>-m.$$ Note that $\gamma_1( \gamma_1^2+\cdots+\gamma_m^2)^{-1/2}$ and $(\gamma_1^2+\cdots+\gamma_m^2)^{1/2}$ are independent. Hence for $\Re w>-1$ $$G(w) =\mathbb E(|\gamma_1|^w)= \mathbb E(|\gamma_1|^w(\gamma_1^2+\cdots+\gamma_m^2)^{-w/2})\frac{G(w+m-1)}{G(m-1)}.$$ Thus $$\mathbb E(|\gamma_1|^w(\gamma_1^2+\cdots+\gamma_m^2)^{-w/2})=\frac{G(w)G(m-1)}{G(w+m-1)}.$$ Finally, again using independence, $$\mathbb E|\gamma_1|^w(\gamma_1^2+\cdots+\gamma_m^2)^{z/2}=\frac{G(w)G(w+z+m-1)}{G(w+m-1)}.$$\end{proof} \section{Mellin transforms and absolute norms} Let $f$ be a complex-valued Borel function on $(0,\infty)$. Let $J_f$ be the set of $a\in\mathbb R$ such that $$ \int_0^{\infty}t^{-a}|f(t)|\frac{dt}t<\infty.$$ It is known that $J_f$ is an interval (possibly unbounded) which may be degenerate (a single point) or empty. If $J_f\neq \emptyset$ we define the {\it Mellin transform} of $f$ by $$ Mf(z)=\int_0^{\infty} t^{-1-z}f(t)\,dt, \eqno \Re z\in J_f.$$ Then by Proposition \ref{criterion}, $Mf$ is analytic on the interior of the strip $\{z:\ \Re z\in J_f\}$ (if this is nonempty). For the general theory of the Mellin transform we refer to \cite{Zemanian1987}. The following are some basic facts about the Mellin transform that will be used throughout this article. The first part of the Proposition is a uniqueness theorem for the transform. \begin{prop}\label{Mellin} (i) Suppose $f,g$ are two Borel functions defined on $(0,\infty)$ and $a\in J_f\cap J_g.$ If $Mf(a+it)=Mg(a+it)$ for $-\infty<t<\infty$ then $f(t)=g(t)$ almost everywhere. (ii) Suppose $f$ is a Borel function on $(0,\infty).$ Suppose $E$ is an analytic function on the strip $a<\Re z<b$ and that there exist $a\le c<d\le b$ so that $(c,d)\subset J_f$ and $Mf(z)=E(z)$ for $c<\Re z<d.$ Then $(a,b)\subset J_f$ and $Mf(z)=E(z)$ for $a<\Re z<b$.
\end{prop} \begin{proof} For (i) see \cite{Zemanian1987}, Theorem 4.3-4, while (ii) is a restatement of Proposition \ref{extend} (iii) for the measure $f(t)dt/t.$\end{proof} Let $N$ be a normalized absolute norm on $\mathbb R^2.$ Thus $N$ is a norm satisfying $N(0,1)=N(1,0)=1$ and $N(u,v)\le N(s,t)$ whenever $|u|\le |s|$ and $|v|\le |t|.$ We define an analytic function of two variables by $$ F_N(w,z)=\int_0^{\infty}t^{-z-1}N(1,t)^{w+z}\,dt,\eqno \Re z<0,\ \Re w<0.$$ For $p<0$ the Mellin transform of $N(1,t)^p$ is given by $$M_{p,N}(z)=F_N(p-z,z), \eqno \Re z<0.$$ Notice that if $N'(s,t)=N(t,s)$ then $F_{N'}(w,z)=F_N(z,w).$ Thus $M_{p,N'}(z)=M_{p,N}(p-z)$ for $\Re z<0.$ For the special case of the $\ell_\infty$-norm we define \begin{equation}\label{Finf}F_\infty(w,z)=\int_0^{\infty}t^{-z-1}\max\{1,t\}^{w+z}\,dt=-\frac1z-\frac1w, \qquad \Re z<0,\ \Re w<0.\end{equation} We write \begin{equation}\label{Minf}M_{p,\infty}(z)=\frac{p}{z(z-p)}, \qquad \Re z<0.\end{equation} The following lemma is an immediate deduction from the Mean Value Theorem: \begin{lm}\label{estimates} Suppose $w\in \mathbb C.$ Then: \begin{equation}\label{firstestimate} |(1+t)^w-1|\le |w|2^{\Re w-1}t\le 2^{2|w|}t, \qquad 0\le t\le 1\end{equation} and \begin{equation}\label{secondestimate}|\textstyle\frac12((1+t)^w+(1-t)^w)-1|\le |w|(|w|+1)2^{\Re w-2}t^2\le 2^{3|w|}t^2,\qquad 0\le t\le 1/2.\end{equation} \end{lm} In view of Lemma \ref{estimates} we define $$\tilde F_N(w,z)=\int_0^{\infty}t^{-z-1}(N(1,t)^{w+z}-\max\{1,t\}^{w+z})\,dt$$ on the region $\{(w,z):\ \Re w<1,\ \Re z<1\}.$ Then applying analytic continuation we have $$\tilde F_N(w,z)=F_N(w,z)+\frac1w+\frac1z.$$ The following lemma is immediate, using Proposition \ref{criterion} and equations \eqref{firstestimate}, \eqref{secondestimate}: \begin{lm}\label{estimates2} Suppose $1\le r,s<\infty$ and $N$ is a normalized absolute norm satisfying the estimates $$ N(1,t)^r\le 1+Ct^r,\eqno 0\le t\le 1$$ and $$ N(t,1)^s\le 1+Ct^s,\eqno 0\le t\le 1.$$ Then $\tilde F_N$ extends to an analytic function of $(w,z)$ on the region $S=\{(w,z):\ \Re w<s,\ \Re z<r\}.$ \end{lm} This Lemma allows us to define $F_N(w,z)$ when $\Re w<s,\ \Re z<r$ and $w,z\neq 0.$ We may then extend the definition of $M_{p,N}(z)$ to the case $p<r$ and $0<\Re z<p$; then $M_{p,N}$ is an analytic function on this strip. The following proposition explains our interest in the function $F_N.$ \begin{prop}\label{absolute} Let $X$ and $Y$ be two normed spaces and let $Z=X\oplus_NY$.
If $x\in X\subset Z$ and $y\in Y\subset Z$ with $\|x\|,\|y\|\nueq 0$ then \betaegin{equation}\lambdaabel{A1} \int_0^{\infty}t^{-z-1}\|x+ty\|^{w+z}dt = F_N(w,z)\|x\|^w\|y\|^z,\qquad \muathbb{R}e w,\muathbb{R}e z<0.\end{equation} \end{prop} \betaegin{proof} Assuming $\|x\|,\|y\|\nueq 0$, we observe that \betaegin{align*} \int_0^{\infty}t^{-z-1}\|x+ty\|^{w+z}dt&= \|x\|^{w+z}\int_0^{\infty}t^{-1-z}N(1,t\|y\|/\|x\|)^{w+z}dt\\ &= \|x\|^w\|y\|^z \int_0^{\infty}t^{-1-z}N(1,t)^{w+z}dt.\end{align*} \end{proof} Let us recall the Euler Beta function: $$ B(w,z) = \int_0^1 x^{w-1}(1-x)^{z-1}dx = \frac{\Gammaamma(w)\Gammaamma(z)}{\Gammaamma(w+z)},\qquad \muathbb{R}e w,\muathbb{R}e z>0.$$ Making the substitution $x=(1+t)^{-1}$ we get the alternative formula: \betaegin{equation}\lambdaabel{101} B(-w,-z) =\int_0^{\infty} t^{-z-1}(1+t)^{w+z}dt,\qquad \muathbb{R}e w,\muathbb{R}e z<0.\end{equation} Hence if $u,v>0$ and $\muathbb{R}e w,\ \muathbb{R}e z<0,$ we have \betaegin{equation}\lambdaabel{102}\int_0^{\infty} t^{-z-1}(\alpha^p+\beta^pt^p)^{(w+z)/p}dt=\frac1p\alpha^{w}\beta^{z}B(-w/p,-z/p).\end{equation} In particular if $N(s,t)=(|s|^q+|t|^q)^{1/q}$ is the $\ell_q-$norm we have an explicit formula for $F_q=F_N$ \betaegin{equation}\lambdaabel{Fq} F_q(w,z)=\frac1qB(-w/q,-z/q), \qquad \muathbb{R}e w,\muathbb{R}e z<0.\end{equation} As before we regard \eqref{Fq} as the definition of $F_q$ when $\muathbb{R}e w<q,\ \muathbb{R}e z<q$ and $w,z\nueq 0.$ Then for $p<0$ we can define \betaegin{equation}\lambdaabel{Mpq}M_{p,q}(z) =\frac1qB((z-p)/q,-z/q), \qquad \muathbb{R}e z<0.\end{equation} If $0<p<q$ the same definition gives an analytic function on $0<\muathbb{R}e z<p.$ \betaegin{lm}\lambdaabel{extension} Suppose $1\lambdae r,s<\infty$ and $N$ is a normalized absolute norm satisfying the estimates $$ N(1,t)^r\lambdae 1+Ct^r,\eqno 0\lambdae t\lambdae 1$$ and $$ N(t,1)^s\lambdae 1+C't^s,\eqno 0\lambdae t\lambdae 1.$$ Then the function $(w,z)\muapsto F_N(w,z)/F_2(w,z)$ extends to a holomorphic function on the region $\{(w,z):\ \muathbb{R}e w<\muin\{s,2\},\ \muathbb{R}e z <\muin\{r,2\}\}.$ Thus for $p<0$, $z\muapsto M_{p,N}(z)/M_{p,2}(z)$ extends to an analytic function on the strip $\{z:\ p-\muin\{s,2\}<\muathbb{R}e z<\muin\{r,2\}\}.$\end{lm} \betaegin{proof} This follows directly from the definition of $F_2$ and Lemma \ref{estimates}.\end{proof} \betaegin{lm}\lambdaabel{symmetrize} For $\muathbb{R}e z,\ \muathbb{R}e w<0$ and $\muathbb{R}e(w+z)>-1$ we have $$ \frac12\int_0^{\infty}t^{-z-1}(|1+t|^{w+z}+|1-t|^{w+z})\,dt = \frac{G(w+z)F_2(w,z)}{G(w)G(z)}.$$ \end{lm} \betaegin{proof} Let $$ Q(w,z)=\frac12\int_0^{\infty}t^{-z-1}(|1+t|^{w+z}+|1-t|^{w+z})\,dt.$$ Let $\gamma_1,\gamma_2$ be two normalized independent Gaussian random variables on some probability space. 
Then by \eqref{Gz} $$ \mathbb E(|\gamma_1+t\gamma_2|^{w+z})= (1+t^2)^{\frac{(w+z)}{2}} G(w+z), \eqno \Re (w+z)>-1.$$ Hence using \eqref{101} and \eqref{Fq} we have $$ \int_0^\infty t^{-z-1}\mathbb E(|\gamma_1+t\gamma_2|^{w+z})\,dt= G(w+z)F_2(w,z).$$ Note that the function $t^{-z-1}|\gamma_1+t\gamma_2|^{w+z}$ is integrable on the product space as long as $\Re z,\Re w<0$ and $\Re (w+z)>-1.$ Thus we can apply Fubini's theorem and a change of variables $t|\gamma_2|=s|\gamma_1|$ to obtain \begin{align*} G(w+z)&F_2(w,z)= \mathbb E\left(\int_0^{\infty} t^{-z-1}|\gamma_1+t\gamma_2|^{w+z}\,dt\right)\\ &= \frac12\mathbb E\left(\int_0^{\infty}t^{-z-1}\left(\Big||\gamma_1|+t|\gamma_2|\Big|^{w+z}+\Big||\gamma_1|-t|\gamma_2|\Big|^{w+z}\right)dt\right)\\ &=\mathbb E\left(Q(w,z)|\gamma_1|^{w}|\gamma_2|^{z}\right).\end{align*} Then using \eqref{Gz} the Lemma follows.\end{proof} \begin{lm}\label{independent} Let $(\Omega,\mu)$ be a $\sigma$-finite measure space and suppose $f,g\in \mathcal M(\Omega,\mu).$ Then if $w,z \in \mathbb{C}$ are such that $\Re w,\Re z<0$ and $$\int_{\Omega}|f|^{\Re w}|g|^{\Re z}d\mu<\infty$$ we have \begin{equation}\label{independent0} \int_0^{\infty}t^{-z-1}\int_{\Omega} (f^2+t^2g^2)^{(w+z)/2}\,d\mu\,dt = F_2(w,z) \int_{\Omega}|f|^w|g|^z\,d\mu.\end{equation} Further, if $\Re(w+z)>-1$ we have \begin{equation}\label{independent1} \frac12\int_0^{\infty}t^{-z-1}\int_{\Omega} (|f+tg|^{w+z}+|f-tg|^{w+z})\,d\mu\,dt =\frac{G(w+z)}{G(w)G(z)} F_2(w,z) \int_{\Omega}|f|^w|g|^z\,d\mu.\end{equation} \end{lm} \begin{proof} We first use Tonelli's theorem for $u=\Re w$ and $v=\Re z.$ Then by \eqref{102} and \eqref{Fq} we have $$\int_0^{\infty}t^{-1-v}\int_{\Omega} (f^2+t^2g^2)^{(u+v)/2}\,d\mu\,dt = F_2(u,v) \int_{\Omega}|f|^u|g|^v\,d\mu,$$ where both integrals converge. Then applying Fubini's theorem we get \eqref{independent0}.
The proof of \eqref{independent1} is precisely similar using Lemma \ref{symmetrize}.\end{proof} Suppose $(\Omegamega,\muu)$ is a probability space and $h$ is a symmetric function in $L_p(\Omegamega,\muu)$ where $p>0.$ In the following Lemmas we show how to compute the Mellin transform of the function $t\muapsto \|1+th\|_p-\muax\{1,t\}^p.$ \betaegin{lm}\lambdaabel{later} Let $(\Omegamega,\muu)$ be a probability space and suppose $h\in\muathcal M(\Omegamega,\muu).$ Suppose $-1<a<0<b<2$ and that $$ \int_{\Omegamega}(|h|^a+|h|^b)d\muu<\infty.$$ Let $$H(z)=\int_{\Omegamega}|h|^z\,d\muu, \qquad a<\muathbb{R}e z<b.$$ Then $$E(w,z)=\int_0^{\infty}t^{-z-1} \int_{\Omegamega}\frac12(|1+th|^{w+z}+|1-th|^{w+z}-2\muax\{1,t|h|\}^{w+z})d\muu\,dt$$ defines a holomorphic function on the region $\muathcal U=\{(w,z):\ a<\muathbb{R}e(w+z),\ -1< \muathbb{R}e w<2,\ a<\muathbb{R}e z<b\}$ and \betaegin{equation}\lambdaabel{later0} E(w,z)=\lambdaeft(\frac{G(w+z)F_2(w,z)}{G(w)G(z)}+\frac1w+\frac1z\right)H(z),\end{equation} when $(w,z)\in\muathcal U, \muathbb{R}e w>-1,\ w,z\nueq 0.$\end{lm} \betaegin{proof} For $t>0$ and $w,z\in\muathbb C,$ we consider $$ \varphi(t,w,z)=t^{-z-1}(|1+t|^{w+z}+|1-t|^{w+z}-2\muax\{1,t\}^{w+z}).$$ Let $u=\muathbb{R}e w,\ v=\muathbb{R}e z.$ Then by Lemma \ref{estimates} we have $$ |\varphi(t,w,z)| \lambdae 2^{3|w+z|}t^{1-v}, \qquad 0\lambdae t\lambdae 1/2$$ and $$ |\varphi(t,w,z)|\lambdae 2^{3|w+z|}t^{u-3}, \qquad 2\lambdae t<\infty.$$ For $1/2\lambdae t\lambdae 2$ we have the estimates $$ |\varphi(t,w,z)|\lambdae 2^{1+|v|}2^{u+v+2}, \qquad u+v\ge 0$$ and $$ |\varphi(t,w,z)|\lambdae 2^{3+|v|}|1-t|^{u+v}, \qquad u+v<0.$$ Thus if $v<2,\ u<2, \ u+v>-1$ we have a very crude estimate: $$ \int_0^{\infty}|\varphi(t,w,z)|dt \lambdae 2^{3|w+z|}\lambdaeft(\frac{1}{2-u}+\frac{1}{2-v}\right)+ 2^{4+|v|}\frac{1}{u+v+1}.$$ Now $$ t^{-z-1}(|1+th|^{w+z}+|1-th|^{w+z}-2\muax\{1,t|h|\}^{w+z})=|h|^{1+z} \varphi(t|h|,w,z)$$ and so \betaegin{align*} &\int_0^{\infty}\int_{\Omegamega}\lambdaeft|t^{-z-1}(|1+th|^{w+z}+|1-th|^{w+z}-2\muax\{1,t|h|\}^{w+z})\right|d\muu\,dt \\ &=H(v)\int_0^{\infty}|\varphi(t,w,z)|dt.\end{align*} Combining these estimates shows that we have the conditions of Proposition \ref{criterion} for the region $\muathcal U$ and so $E$ defines a holomorphic function on $\muathcal U.$ For $(w,z)\in\muathcal U$ and $\muathbb{R}e w,\muathbb{R}e z<0$ we can use Lemma \ref{symmetrize} to show that $$\int_0^{\infty}t^{-z-1} \int_{\Omegamega}\frac12(|1+th|^{w+z}+|1-th|^{w+z})d\muu\,dt =\frac{G(w+z)F_2(w,z)}{G(w)G(z)}\int_{\Omegamega}|h|^zd\muu$$ and $$ \int_0^{\infty}t^{-z-1} \int_{\Omegamega}\muax\{1,t|h|\}^{w+z}t =(-\frac1w-\frac1z)\int_{\Omegamega}|h|^zd\muu.$$ Since the right-hand side of \eqref{later0} extends to an analytic function in $\muathcal U$, \eqref{later0} holds for all $(w,z)\in\muathcal U$ with $w,z\nueq 0$. 
\end{proof} \betaegin{lm}\lambdaabel{anotherone} Let $(\Omegamega,\muu)$ be a probability space and suppose $h\in\muathcal M(\Omegamega,\muu).$ Suppose $-1<a<0<b<2$ and that $$ \int_{\Omegamega}(|h|^a+|h|^b)d\muu<\infty.$$ Then $$ E_0(w,z)=\int_0^{\infty}t^{-z-1}\int_{\Omegamega}(\muax\{1,th\}^{w+z}-\muax\{1,t\}^{w+z})d\muu\,dt$$ defines an analytic function on the region $\muathcal U_0=\{(w,z):\ a<\muathbb{R}e (w+z),\ \muathbb{R}e w<0,\ \muathbb{R}e z<b\}.$ Furthermore \betaegin{equation}\lambdaabel{anotherone0} E_0(w,z)= (\frac1w+\frac1z)(1-H(z)), \qquad (w,z)\in\muathcal U_0.\end{equation}\end{lm} \betaegin{proof} Let $u=\muathbb{R}e w$ and $v=\muathbb{R}e z.$ Then if $s>0$ we have \betaegin{align*} \int_0^{\infty}t^{-1-v}|\muax\{1,st\}^{w+z}-\muax\{1,t\}^{w+z}|dt&\lambdae \int_{1/s}^\infty s^{u+v}t^{-1+u}dt +\int_1^{\infty}t^{-1+u}dt\\ &\lambdae \frac{s^v+1}{|u|}.\end{align*} Hence $$ \int_0^{\infty}t^{-1-v}\int_{\Omegamega}|\muax\{1,th\}^{w+z}-\muax\{1,t\}^{w+z}|d\muu\,dt \lambdae \frac1{|u|}(H(v)+1).$$ Again Proposition \ref{criterion} gives that $E_0$ defines a holomorphic function on $\muathcal U_0.$ If in addition $\muathbb{R}e z<0$ we can compute $$\int_0^{\infty}t^{-z-1}\int_{\Omegamega}\muax\{1,th\}^{w+z}d\muu\,dt = -H(z)(\frac1w+\frac1z)$$ and $$\int_0^{\infty}t^{-z-1}\int_{\Omegamega}\muax\{1,t\}^{w+z}d\muu\,dt = -(\frac1w+\frac1z).$$ As before analytic continuation gives \eqref{anotherone0} throughout $\muathcal U_0.$\end{proof} Combining the preceding Lemmas we have the following: \betaegin{prop}\lambdaabel{Mellinh} Let $(\Omegamega,\muu)$ be a probability space and suppose $h\in\muathcal M(\Omegamega,\muu)$ is a symmetric random variable. Suppose $-1<a<0<b<2$ and that $$ \int_{\Omegamega}(|h|^a+|h|^b)d\muu<\infty.$$ Let $$H(z)=\int_{\Omegamega}|h|^z\,d\muu, \qquad a<\muathbb{R}e z<b.$$ Suppose $0<p<b$ is such that $H(p)=1.$ Then the Mellin transform of $t\muapsto \int_{\Omegamega}(|1+th|^p-\muax\{1,t\}^p)\,d\muu$ is given by \betaegin{equation}\lambdaabel{eqMellinh0} \int_0^{\infty}t^{-z-1} \int_\Omegamega (|1+th|^p-\muax\{1,t\}^p)d\muu\,dt= \frac{G(p)M_{p,2}(z)H(z)}{G(p-z)G(z)}+\frac{p}{z(p-z)},\end{equation} for $ a<\muathbb{R}e z<\muin\{b,p+1\}.$ \end{prop} Let us remark that, since $H(0)=H(p)=1,$ the right-hand side of \eqref{eqMellinh0} has removable singularities at $z=0$ and $z=p.$ \betaegin{proof} Since the right-hand side is analytic in the strip $a<\muathbb{R}e z<\muin\{b,p+1\}$ it follows from Proposition \ref{Mellin} that it is necessary only to establish equality for the strip $p<\muathbb{R}e z<\muin\{b,p+1\}.$ In this case $-1<\muathbb{R}e (p-z)<0$ and so $(p-z,z)\in \muathcal U\cap\muathcal U_0$ as these sets are defined in Lemmas \ref{later} and \ref{anotherone}. 
Since $h$ is symmetric we can rewrite the left-hand side of \eqref{eqMellinh0} in the form $$\int_0^{\infty}t^{-z-1} \int_\Omega \frac12(|1+th|^p+|1-th|^p-2\max\{1,t\}^p)\,d\mu\,dt.$$ Then combining Lemmas \ref{later} and \ref{anotherone} we get the conclusion.\end{proof} This Proposition can be extended by an approximation argument to the case when $a=0$ and $b=p$; we will not need this so we simply state the result: \begin{prop}\label{Mellinhp} Let $(\Omega,\mu)$ be a probability space and suppose $h\in L_p(\Omega,\mu)$, with $\|h\|_p=1,$ where $0<p<2.$ Let $$H(z)=\int_{\Omega}|h|^z\,d\mu, \qquad 0\le \Re z\le p,$$ so that $H(p)=1.$ Then the Mellin transform of $t\mapsto \int_{\Omega}(|1+th|^p-\max\{1,t\}^p)\,d\mu$ is given by $$\int_0^{\infty}t^{-z-1} \int_\Omega (|1+th|^p-\max\{1,t\}^p)\,d\mu\,dt= \frac{G(p)M_{p,2}(z)H(z)}{G(p-z)G(z)}+\frac{p}{z(p-z)},$$ for $0<\Re z<p.$ \end{prop} \section{Embedding $X\oplus_NY$ into $L_p$} \begin{prop}\label{p<0Z} Let $X, Y$ be two non-trivial normed spaces, with $\text{dim } X =m$ and $\text{dim } Y=n,$ and suppose $N$ is a normalized absolute norm on $\mathbb{R}^2.$ Suppose $T:X\oplus_N Y\to \mathcal M(\Omega,\mu)$ is a 1-Gaussian embedding into $L_p(\Omega,\mu)$ where $-(n+m)<p<0.$ Suppose $x_1,\ldots,x_m \in X$ and $y_1,\ldots, y_n \in Y$ are linearly independent, and suppose $\xi=\sum\limits_{j=1}^{m}\gamma_j x_j$ and $\eta=\sum\limits_{j=1}^{n}\gamma'_j y_j$ are independent Gaussian processes of full rank with values in $X$ and $Y$ respectively. Then for $\max\{-n,p\}<\Re z<\min\{0,p+m\}$ we have: \begin{equation}\label{eq:p<0Z} \int_{\Omega}\Bigl(\sum\limits_{j=1}^{m}(Tx_j)^2\Bigr)^{\frac{p-z}{2}}\Bigl(\sum\limits_{j=1}^{n}(Ty_j)^{2}\Bigr)^{\frac{z}{2}}d\mu=\frac{M_{p,N}(z)}{M_{p,2}(z)}\mathbb{E} \|\xi\|^{p-z} \,\mathbb{E} \|\eta\|^{z}.\end{equation} \end{prop} \begin{proof} By assumption we have $$ \mathbb E\|\xi+t\eta\|^p = \int_{\Omega}\bigl(\sum_{j=1}^m(Tx_j)^2+t^2\sum_{j=1}^n(Ty_j)^2\bigr)^{p/2}d\mu,\eqno t>0.$$ Hence if $\max\{-n,p\}<\Re z<\min\{0,p+m\}$ we have $$ \int_0^{\infty}t^{-z-1}\mathbb E\|\xi+t\eta\|^p\,dt=\int_{\Omega}\int_0^{\infty}t^{-z-1}\bigl(\sum_{j=1}^m(Tx_j)^2+t^2\sum_{j=1}^n(Ty_j)^2\bigr)^{p/2}\,dt\,d\mu$$ and both sides are integrable. Notice that, in particular, it follows that $\sum_{j=1}^m (Tx_j)^2>0$ and $\sum_{j=1}^n (Ty_j)^2>0,$ $\mu$-almost everywhere. Now for real $\max\{-n,p\}<u<\min\{0,p+m\},$ using Tonelli's theorem and \eqref{A1}, \begin{align*}\int_0^{\infty}t^{-u-1}\mathbb E\|\xi+t\eta\|^p\,dt&=\mathbb E\int_0^{\infty}t^{-u-1}N(\|\xi\|,t\|\eta\|)^p\,dt\\ &= F_N(p-u,u) \mathbb E\|\xi\|^{p-u}\|\eta\|^u\\ &= F_N(p-u,u) \mathbb E\|\xi\|^{p-u}\,\mathbb E\|\eta\|^u,\end{align*} since $\xi$ and $\eta$ are independent. We repeat the calculation replacing $u$ by complex $z$ and apply Fubini's theorem.
Then $$ \int_0^{\infty}t^{-z-1}\mathbb E\|\xi+t\eta\|^p\,dt= M_{p,N}(z)\,\mathbb E\|\xi\|^{p-z}\,\mathbb E\|\eta\|^z,$$ for $\max\{-n,p\}<\Re z<\min\{0,p+m\}.$ Hence (first for real $z$, using Tonelli's theorem, and then for the general case), by Lemma \ref{independent} we get \begin{align*} M_{p,N}(z)\,\mathbb{E} \|\xi\|^{p-z}\, \mathbb{E} \|\eta\|^{z} =M_{p,2}(z)\int_\Omega \Bigl(\sum\limits_{j=1}^{m}(Tx_j)^2\Bigr)^{\frac{p-z}{2}}\Bigl(\sum\limits_{j=1}^{n}(Ty_j)^{2}\Bigr)^{\frac{z}{2}}d\mu,\end{align*} which proves \eqref{eq:p<0Z}. \end{proof} We shall say that an embedding $T:X\to \mathcal M(\Omega,\mu)$ is {\it isotropic} if $Tx\approx Tx'$ whenever $\|x\|=\|x'\|=1.$ We will say that it is $f$-isotropic if $f$ is a Borel function on some $\sigma$-finite Polish measure space $(K,\nu)$ and $Tx\approx f$ for every $x\in X$ with $\|x\|=1.$ For $0<p<2$, $T$ is a $p$-stable embedding if $Tx\approx \psi_p$ whenever $\|x\|=1.$ If $X$ embeds into $L_p$ then there is a $p$-stable embedding of $X$ into $\mathcal M(\Omega,\mu),$ where $\mu$ is a probability measure. \begin{prop}\label{p>0Z}Let $X, Y$ be two normed spaces, with $\text{dim } X =m$ and $\text{dim } Y=n,$ and suppose $N$ is a normalized absolute norm on $\mathbb{R}^2.$ Suppose $T:X\oplus_N Y\to \mathcal M(\Omega,\mu)$ is a $p$-stable embedding where $p>0.$ Then for any nonzero $x\in X$ and $y\in Y$ and $-1<\Re(w+z)<\Re w,\Re z<0,$ we have \begin{equation}\label{eq:p>0Z} \int_\Omega |Tx|^{w}|Ty|^z\,d\mu = \frac{2^{(w+z)/2}F_N(w,z)G(w)G(z)\Phi_{p/2}((w+z)/2)}{F_2(w,z)}\|x\|^w\|y\|^z. \end{equation}\end{prop} \begin{proof} If $f\in X\oplus_N Y$ we have $$ \int_\Omega |Tf|^z \,d\mu = \Psi_p(z)\|f\|^z,\eqno \Re z>-1.$$ Now consider $\xi=\gamma_1x$ and $\eta=\gamma_2y$ where $\gamma_1,\gamma_2$ are normalized independent Gaussian random variables. Then $$ \mathbb E\int_{\Omega} |T\xi+tT\eta|^z\,d\mu =\Psi_p(z)\mathbb E\|\xi+t\eta\|^z.$$ If $-1<\Re(w+z)<\Re w,\Re z<0,$ then by Fubini's theorem, Proposition \ref{absolute} and \eqref{GzG} (first for real $w,z$, using Tonelli's theorem as in Proposition \ref{p<0Z}), we have that \begin{align}\label{eq:p>0Z1} \int_0^{\infty}t^{-z-1}\mathbb E\|\xi+t\eta\|^{w+z}\,dt &=\int_0^{\infty}t^{-z-1}\mathbb E\,N(\|\xi\|,t\|\eta\|)^{w+z}\,dt \nonumber \\ &= F_N(w,z)\mathbb E\|\xi\|^w\,\mathbb E\|\eta\|^z \nonumber \\ &= F_N(w,z) G(w)G(z) \|x\|^w\|y\|^z.\end{align} On the other hand, since $\gamma_1,\gamma_2$ are Gaussian random variables, \begin{align}\label{eq:p>0Z2} &\int_0^{\infty}t^{-z-1}\mathbb E\int_\Omega |T\xi+tT\eta|^{w+z}\,d\mu\,dt \nonumber \\ &=G(w+z)\int_\Omega\int_0^\infty t^{-z-1} ((Tx)^2+t^2(Ty)^2)^{(w+z)/2}\,dt\,d\mu \nonumber \\ \intertext{and by Lemma \ref{independent} the latter is equal to} &= G(w+z)F_2(w,z)\int_\Omega |Tx|^w|Ty|^z\,d\mu.\end{align} Then equation \eqref{eq:p>0Z} follows from \eqref{eq:p>0Z1}, \eqref{eq:p>0Z2} and the fact that $\Psi_p(z)=2^{z/2}\Phi_{p/2}(z/2)G(z).$ \end{proof} \begin{Thm}\label{main} Let $X,Y$ be two non-trivial finite dimensional normed spaces with dimensions $m$ and $n$ respectively.
Suppose that $-(n+m)<p\le 1\le r,s\le 2$ and that $N$ is a normalized absolute norm on $\mathbb{R}^2$ satisfying estimates of the type \begin{equation}\label{eq:N} N(1,t)^r\le 1+Ct^r, \qquad t>0, \end{equation} and \begin{equation}\label{eq:N2} N(t,1)^s\le 1+C't^s, \qquad t>0. \end{equation} If $X\oplus_N Y\in\mathcal I_p$ then $X\in\mathcal I_q$ whenever $p-r\le q\le \min\{s,p+n\}$ and $Y\in \mathcal I_q$ whenever $p-s\le q \le \min \{r, p+m\}.$ \end{Thm} \begin{proof} It suffices to consider the case of $Y$ and to prove the result if $p-s<q<\min\{r,p+m\}.$ Then the limiting case follows by Proposition \ref{closed}. We will treat the cases $p<0,\ p=0$ and $0<p\le 1$ separately. {\it Case 1: } Let $p<0.$ The space $X\oplus_N Y$ embeds into $L_p$ so we can consider a 1-Gaussian embedding $T:X\oplus_NY\to L_p(\Omega,\mu).$ By Proposition \ref{p<0Z}, for any linearly independent sets $x_1,\ldots, x_m \in X$ and $y_1,\ldots, y_n \in Y$ equation \eqref{eq:p<0Z} holds in the strip $\max \{-n,p\}<\Re z <\min \{p+m,0\}.$ However by Lemma \ref{extension} the function $M_{p,N}(z)/M_{p,2}(z)$ can be analytically continued to the strip $p-s<\Re z<r.$ Thus the right-hand side of \eqref{eq:p<0Z} can be analytically continued to the strip $\max\{p-s,-n\}<\Re z<\min \{r,p+m\}.$ By Proposition \ref{extend} this implies that \eqref{eq:p<0Z} holds (and both sides are integrable) in the strip $\max\{p-s,-n\}<\Re z<\min \{r,p+m\}.$ If $\max\{p-s,-n\}<q<\min \{r,p+m\}$ and $q\neq 0$, we fix some $\xi$ so that $\mathbb E\|\xi\|^{p-q}=1.$ Let $f=(\sum_{j=1}^m(Tx_j)^2)^{1/2}.$ Then $$ \frac{M_{p,N}(q)}{M_{p,2}(q)}\mathbb E\|\eta\|^q = \int_\Omega \Bigl(\sum_{j=1}^n(Ty_j)^2\Bigr)^{q/2} f^{p-q}\,d\mu.$$ In particular $M_{p,N}(q)$ cannot vanish and $T$ is a Gaussian embedding of $Y$ into $L_q(f^{p-q}d\mu).$ If $q=0$ we note that our proof yields $Y\in\mathcal I_{\varepsilon}$ for sufficiently small $\varepsilon>0$ and so $Y\in\mathcal I_0.$ It follows that $Y\in \mathcal I_q$ for $p-s\le q\le \min \{r,p+m\}.$ (Our convention implies $Y\in\mathcal I_q$ if $q\le -n.$) \medskip {\it Case 2:} Let $p=0.$ In this case $X\oplus_N Y\in \mathcal I_{t}$ for every $t<0$ and the result follows from Case 1. \medskip {\it Case 3:} Now we assume that $0<p\le 1.$ Again we prove the result for $Y$. If $m\ge 2$ then $X\oplus_NY\in\mathcal I_0$ (\cite{KaltonKoldobskyYaskinYaskina2007} and \cite{Koldobsky1999b}) and by Case 2 we have that $Y\in\mathcal I_r.$ Thus we only consider the case $m=1.$ Suppose that $X\oplus_N Y$ embeds into $\mathcal M (\Omega,\mu)$ via a $p$-stable embedding $T$. We fix $x\in X$ with $\|x\|=1$ and $p<q<\min\{p+1,r\}$; let $f=|Tx|.$ Fix $a>0$ so that $q+a<p+1.$ Then $0<a<1$ and so by \eqref{eq:p>0Z} we have \begin{equation}\label{Ty}\int_\Omega |Ty|^zf^{a-1}\,d\mu = \frac{2^{(z+a-1)/2}F_N(a-1,z)G(a-1)G(z)\Phi_{p/2}((z+a-1)/2)}{F_2(a-1,z)}\|y\|^z,\end{equation} where $ y\in Y,$ as long as $-a<\Re z<0.$ However $F_N(a-1,z)/F_2(a-1,z)$ can be analytically continued to the half-plane $\Re z< r$ (by Lemma \ref{extension}).
We also have that $\Phi((z+a-1)/2)$ can be analytically continued to the half-plane $\muathbb{R}e z<p+1-a.$ Hence the right-hand side can be analytically continued to the strip $-1<\muathbb{R}e z< \muin(r,p+1-a).$ By Lemma \ref{extend} this means that the left-hand side of \eqref{Ty} is integrable and equality holds for $-1<\muathbb{R}e z<\muin\{r,p+1-a\}.$ In particular $$ \int_\Omegamega |Ty|^q f^{a-1}\,d\muu = c\|y\|^q, \eqno y\in Y$$ where $c$ is a positive constant. This implies the result, since $Y\in\muathcal I_q$ whenever $p\lambdae q.$ \end{proof} The next result is known; it follows from Koldobsky's Second Derivative test (Theorem 4.19 of \cite{Koldobsky2005}; see also \cite{Koldobsky1998}). \betaegin{Thm}\lambdaabel{secondderivative} Let $N$ be a normalized absolute norm on $\muathbb R^2$ such that $$ \lambdaim_{t\to 0}\frac{N(1,t)-1}{t^2}=0.$$ Then if $-\infty<p<0$ and $X\oplus_N \muathbb R$ embeds into $L_p$ we have $\text{dim } X\lambdae 2-p.$ \end{Thm} Note that the result of Theorem \ref{secondderivative} can be extended for $p\in (-\infty,2).$ Here, we present only the proof for $p<0.$ \betaegin{proof} first we observe that $M_{p,N}(z)/M_{p,2}(z)$ extends to an analytic function on $-p-1<\muathbb{R}e z<2$ and that \betaegin{equation}\lambdaabel{limzero} \lambdaim_{r\to 2} \frac{M_{p,N}(r)}{M_{p,2}(r)}=0.\end{equation} To see \eqref{limzero} we note that by definition, for $0<r<2$ we have $$ M_{N,p}(r) =-\frac1r-\frac1{p-r}+\int_0^{\infty}t^{-1-r}(N(1,t)^p-max\{1,t\}^p)\,dt.$$ Fix any $0<\varepsilon<1$ and let $$\delta=\delta(\varepsilon)=\sup_{t\lambdae\varepsilon}\frac{N(1,t)^p-1}{t^2}.$$ Then $$ \lambdaeft|M_{N,p}(r)+\frac{p}{r(p-r)} -\int_{\varepsilon}^{\infty}t^{-1-r}(N(1,t)^p-1)\,dt\right|\lambdae \frac{\delta}{2-r}.$$ It follows that \betaegin{equation}\lambdaabel{lim} \lambdaimsup_{r\to 2}(2-r)M_{p,N}(r) \lambdae \delta.\end{equation} Since $\lambdaim_{\varepsilon\to 0}\delta(\varepsilon)=0$ we obtain \eqref{limzero}. Now suppose $m=\text{dim } X>2-p$ and assume $T:X\oplus_NY \to\muathcal M(\Omegamega,\muu)$ is a 1-Gaussian embedding into $L_p(\Omegamega,\muu),$ where $\text{dim } Y=1.$ Let us fix $\textbf {x}i=\sum_{j=1}^m\gamma_jx_j$, an $X$-valued Gaussian process of full rank and $\eta=\gamma'y$ where $y\in Y$ has norm one and $\gamma'$ is a Gaussian r.v. Then, if $f=(\sum\lambdaimits_{j=1}^m(Tx_j)^2)^{1/2}$ and $g=|Ty|,$ by Proposition \ref{p<0Z} we have $$ \int_{\Omegamega}f^{p-z}g^zd\muu= \frac{M_{p,N}(z)}{M_{p,2}(z)}G(z)\muathbb E\|\textbf {x}i\|^{p-z}$$ for $max\{-1,p\}<\muathbb{R}e z<0.$ The right-hand side can be analytically continued to $max\{-1,p\}<\muathbb{R}e z<2.$ By equation (\ref{lim}) we have $$ \lambdaim_{r\to 2}\int_{\Omegamega}(g/f)^r f^p\,d\muu=0$$ which by Proposition \ref{extend} implies $$ \int_{\Omegamega} g^2f^{p-2}\,d\muu=0$$ and this gives a contradiction.\end{proof} \betaigskip \section{Examples} We begin this section with some technical results which will be needed later. 
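Before turning to these, let us record a simple illustration of Theorem \ref{secondderivative} (included only for orientation; it is not used below). If $r>2$ then the $\ell_r$-norm $N(s,t)=(|s|^r+|t|^r)^{1/r}$ satisfies
$$ \lim_{t\to 0}\frac{N(1,t)-1}{t^2}=\lim_{t\to 0}\frac{(1+t^r)^{1/r}-1}{t^2}=0,$$
so, for $p<0$, the space $\ell_2^m\oplus_r\mathbb R$ embeds into $L_p$ only when $m\le 2-p.$ This should be contrasted with Theorem \ref{mainexample} below, where the exponent $r$ is at most $2.$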
\betaegin{lm} \lambdaabel{bound} Let $X$ be a finite-dimensional normed space and suppose $\textbf {x}i=\sum_{j=1}^m\gamma_jx_j$ is an $X$-valued Gaussian process, where $\{\gamma_1,\lambdadots,\gamma_n\}$ are independent normalized Gaussian random variables and each $x_j\nueq 0.$ Then given $-n<u<0$ there is a constant $C=C(\textbf {x}i,u)$ so that $$ \muathbb E\|x+\textbf {x}i\|^u \lambdae C, \qquad x\in X.$$ \end{lm} \betaegin{proof} We consider the case when $\textbf {x}i$ is normalized so that $\muathbb E\|\textbf {x}i\|^u=1.$ Let $E$ be the linear span of $\{x_1,\lambdadots,x_n\}$ and let $P$ be a projection of $X$ onto $E.$ Then $$ \muathbb E\|x+\textbf {x}i\|^u\lambdae \|P\|^{-u}\muathbb E\|Px+\textbf {x}i\|^u.$$ On $E$ the distribution $\muu_{\textbf {x}i}$ is dominated by $C_0\lambdaambda,$ where $C_0$ is a constant depending on $\textbf {x}i$ and $\lambdaambda$ is the Lebesgue measure on $E$. Hence $$ \muathbb E\|Px+\textbf {x}i\|^u \lambdae C_0\int_{\|e-Px\|\lambdae 1}\|e-Px\|^ud\lambdaambda(e) + 1$$ and this is uniformly bounded.\end{proof} \betaegin{lm}\lambdaabel{upper0} Let $Z=X\oplus Y$ be a finite-dimensional normed space with $\text{dim } X=m$ and $\text{dim } Y=n.$ Suppose $\textbf {x}i$ is a $Z$-valued Gaussian process of full rank. Let $\textbf {x}i_X,\textbf {x}i_Y$ be the projections of $\textbf {x}i$ onto $X$ and $Y$ respectively. Then \betaegin{equation}\lambdaabel{upper}\muathbb E\|\textbf {x}i_X\|^u\|\textbf {x}i_Y\|^v<\infty, \qquad -m<u,\ -n<v.\end{equation} \end{lm} \betaegin{proof} Note that $\textbf {x}i_X$ and $\textbf {x}i_Y$ are not necessarily independent. However $\textbf {x}i_X$ and $\textbf {x}i_Y$ are of full rank in $X$ and $Y$ respectively. If either $u=0$ or $v=0$ the Lemma holds trivially. If either $u>0$ or $v>0$ we may use H\"older's inequality. Suppose $v>0$. Pick $a>1$ so that $au>-m$ and then suppose $1/a+1/b=1$. Then $$\muathbb E\|\textbf {x}i_X\|^u\|\textbf {x}i_Y\|^v\lambdae (\muathbb E\|\textbf {x}i_X\|^{au})^{1/a}(\muathbb E\|\textbf {x}i_Y\|^{bv})^{1/b}<\infty.$$ Now suppose $u,v<0$. We can write $\textbf {x}i$ in the form $$ \textbf {x}i=\sum_{j=1}^{m+n}(x_j+y_j)\gamma_j$$ where $y_j=0$ for $n+1\lambdae j\lambdae m+n.$ Let $\muathbb E_0$ be the conditional expectation onto the $\sigma-$algebra $\Sigma$ generated by $\{\gamma_1,\lambdadots,\gamma_n\}.$ Then $\textbf {x}i_Y$ is $\Sigma-$measurable. 
Then, by Lemma \ref{bound}, since $\textbf {x}i_X$ has rank $m$, there is a constant $C$ $$\muathbb E_0\|\textbf {x}i_X\|^u\|\textbf {x}i_Y\|^v=\|\textbf {x}i_Y\|^v\muathbb E_0\|\textbf {x}i_X\|^u\lambdae C\|\textbf {x}i_Y\|^v$$ and so \eqref{upper} holds.\end{proof} \betaegin{lm}\lambdaabel{existence} Suppose $1\lambdae p<2.$ There exists a positive random variable $h$ with $$ \muathbb E(h^z)= \frac{p}{2\Gammaamma(p/2)}\frac{\Gammaamma((p-z)/2)\Gammaamma(-z/2)}{\Gammaamma(-z/p)}, \qquad \muathbb{R}e z<2,$$ or $$ \muathbb E(h^z) =\frac{2^{z/2}G(p-1-z)}{G(p-1)\Phi_{p/2}(z/2)}, \qquad \muathbb{R}e z<2.$$ \end{lm} \betaegin{proof} Consider $f\alphapprox\varphi_{1/p}^{\frac{1}{2p}}.$ Then by \eqref{phip} $$ \muathbb E(f^z) = \frac{p\Gammaamma(-z/2)}{\Gammaamma(-z/2p)}\qquad \muathbb{R}e z<2.$$ If $f$ is defined on some probability space $(\Omegamega,\muathbb P)$ then we can consider $f$ as a random variable with respect to a new probability measure $$d\muathbb P'=\frac{\Gammaamma(1/2)}{p\Gammaamma(p/2)}|f|^{-p}d\muathbb P.$$ If we denote by $g$ this random variable we have $$ \muathbb E(g^z) = \frac{\Gammaamma(1/2)}{\Gammaamma(p/2)}\frac{\Gammaamma((p-z)/2)}{\Gammaamma((p-z)/2p)}, \qquad \muathbb{R}e z<p+2.$$ Let $h\alphapprox 2^{1/p}f\otimes g.$ Then for $\muathbb{R}e z<2$ and by using \eqref{99} we have \betaegin{align*} \muathbb E(h^z) &=\frac{p\Gammaamma(1/2)}{2\Gammaamma(p/2)}\frac{\Gammaamma(-z/2)\Gammaamma((p-z)/2)}{2^{-1-z/p}\Gammaamma(-z/2p)\Gammaamma((p-z)/2p)}\\ &= \frac{p}{2\Gammaamma(p/2)}\frac{\Gammaamma(-z/2)\Gammaamma((p-z)/2)}{\Gammaamma(-z/p)}.\end{align*} The second equation follows immediately from \eqref{GzG} and \eqref{phip}.\end{proof} \betaegin{lm}\lambdaabel{h} Suppose $m\in\muathbb N,$ and $\{p,q,r\}$ are such that $q>0$ and $p+m<q<r\lambdae 2.$ There exists a positive random variable $g=g(m,p,q,r)$ such that $$ \muathbb E(g^z)=\frac{2^{z/2}G(p+m-1-z)\Phi_{r/2}(z/2)\Phi_{r/2}((p-z)/2)}{G(p+m-1)\Phi_{r/2}(p/2)\Phi_{q/2}(z/2)}\qquad p-r<\muathbb{R}e z<p.$$ Here we adopt the convention that $\Phi_1(z)\equiv 1.$\end{lm} \betaegin{proof} We first use Lemma \ref{existence} to find a positive random variable $f_1$ such that $$ \muathbb E(f_1^z)= \frac{2^{z/2}G(q-1-z)}{G(q-1)\Phi_{q/2}(z/2)}, \qquad \muathbb{R}e z<q.$$ Now, if $p+m<q$ we let $f_2$ to be distributed as $t^{-1/2}$ with respect to the Beta distribution $$ d\muu=\frac{t^{(p+m)/2-1}(1-t)^{(q-p-m)/2-1}}{B((p+m)/2,(q-p-m)/2)}\,dt $$ on $[0,1].$ Then $$\muathbb E(f_2^z)= \frac{\Gammaamma((p+m-z)/2)\Gammaamma(q/2)}{\Gammaamma((q-z)/2)\Gammaamma((p+m)/2)}, \qquad \muathbb{R}e z<p+m,$$ and using \eqref{GzG} the latter can be rewritten as $$ \muathbb E(f_2^z) = \frac{G(p+m-1-z)G(q-1)}{G(q-1-z)G(p+m-1)}, \qquad \muathbb{R}e z<p+m.$$ We write $f_2\equiv 1$ if $p+m=q.$ If $r<2$ we define $f_3\alphapprox \varphi_{r/2}^{1/2}$ so that $$ \muathbb E(f_3^z) =\Phi_{r/2}(z/2), \qquad \muathbb{R}e z<r.$$ If $r=2$ we set $f_3\equiv 1.$ If $f_3$ is defined on some probability space $(K,\muathbb P)$ we define $f_4$ as the random variable $f_3^{-1}$ with respect to the measure $f_3^pd\muathbb P/\muathbb E(f_3^p)$ so that $f_4\equiv 1$ if $r=2.$ If $r<2$ we have $$ \muathbb E(f_4^z)= \frac{\Phi_{r/2}((p-z)/2)}{\Phi_{r/2}(p/2)}, \qquad p-r<\muathbb{R}e z.$$ We let $g\alphapprox f_1\otimes f_2\otimes f_3\otimes f_4.$\end{proof} \betaegin{lm}\lambdaabel{g}\lambdaabel{isotropicembedding} Suppose $m\in\muathbb N,$ and $\{p,q,r\}$ are such that $q>0$ and $p+m<q<r\lambdae 2.$ Suppose $Y\in\muathcal I_q.$ Then there is an $h$-isotropic 
embedding of $Y$ into $\mathcal M(\Omega,\mu),$ where $(\Omega,\mu)$ is a probability space, $h$ is symmetric, and $$ \mathbb E(|h|^z)=\frac{2^{z/2}G(p+m-1-z)G(z)\Phi_{r/2}(z/2)\Phi_{r/2}((p-z)/2)}{G(p+m-1)\Phi_{r/2}(p/2)}$$ for $ -1<\Re z<p+m.$ \end{lm} \begin{proof} Since $Y\in\mathcal I_q$ there is a $\psi_q$-isotropic embedding $S$ of $Y$ into some $\mathcal M(\Omega_1,\mu_1)$ (where $(\Omega_1,\mu_1)$ is a probability measure space). Let $Ty=2^{-1/2}gSy$ where $g$ is independent of $S(Y)$ and distributed as in Lemma \ref{h}. Then $T$ is an $h$-isotropic embedding where $h$ is symmetric and $$ \mathbb E(|h|^z)=2^{-z/2}\Psi_q(z)\mathbb E(g^z)=\Phi_{q/2}(z/2)G(z)\mathbb E(g^z),\qquad -1<\Re z<p+m.$$ \end{proof} Let us remark that the case $p=0,$ $m=1$ and $r=2$ gives $\mathbb E(|h|^z)= 2^{z/2}G(z)G(-z),$ which means that $h$ is symmetric 1-stable, i.e. has the Cauchy distribution. \begin{Thm}\label{mainexample} Suppose $1\le q,r\le 2$. Suppose $X=\ell_2^m$ and $Y\in\mathcal I_q.$ If $p\le q-m$ then $X\oplus_rY\in \mathcal I_p.$\end{Thm} \begin{proof} It is enough to consider the case $p+m>0.$ Also, the result holds trivially if $r\le q$ since then $X\oplus_rY\in \mathcal I_r\subset \mathcal I_p.$ So we may also assume that $r>q.$ Hence $p-r<q-1-r<-1.$ We treat three separate cases as $p>0,$ $p<0$ or $p=0.$ {\it Case 1: } Let $p>0$. In this case we have $m=1$ and identify $X$ with $\mathbb R.$ In view of Lemma \ref{g} (with the embedding it provides normalized so that $\|h\|_p=1$) we construct an $h$-isotropic embedding $S:Y\to\mathcal M(\Omega,\mu)$ where $h$ is symmetric and \begin{equation}\label{H} H(z):=\mathbb E(|h|^z)= \frac{G(p-z)G(z)\Phi_{r/2}(z/2)\Phi_{r/2}((p-z)/2)}{G(p)\Phi_{r/2}(p/2)}\end{equation} for $ -1<\Re z<p+1.$ It is important to observe that $H(p)=1$ and \begin{equation}\label{H1} H(z)= \frac{G(p-z)G(z)M_{p,r}(z)}{G(p)M_{p,2}(z)}\end{equation} for $ -1<\Re z<p+1.$ We define $T:\mathbb R\oplus_rY\to\mathcal M(\Omega,\mu)$ by $T(\alpha,y)=\alpha+Sy.$ To verify that $T$ is a $1$-standard embedding we only need to show (considering $h$ as a function on $(\Omega,\mu)$): \begin{equation}\label{aim}\int_{\Omega}|1+th|^p\,d\mu =(1+t^r)^{p/r}, \qquad 0<t<\infty.\end{equation} To establish \eqref{aim} we appeal to Proposition \ref{Mellinh}. By \eqref{eqMellinh0} and \eqref{H1} we have $$ \int_0^{\infty}t^{-z-1}\int_{\Omega}\left(|1+th|^p-\max\{1,t\}^p\right)d\mu\,dt = M_{p,r}(z)+\frac{p}{(p-z)z}$$ for $-1<\Re z<p+1.$ On the other hand, by \eqref{102}, \eqref{Fq} and \eqref{Mpq} $$ \int_0^{\infty}t^{-z-1}\bigl((1+t^r)^{(w+z)/r}-\max\{1,t\}^{w+z}\bigr)\,dt= F_r(w,z)-F_\infty(w,z), $$ for $\Re w,\ \Re z<0$ and by analytic continuation this holds (and the right-hand side is holomorphic) for $\Re w,\ \Re z<r.$ Thus using \eqref{Minf} $$ \int_0^{\infty}t^{-z-1}((1+t^r)^{\frac{p}{r}}-\max\{1,t\}^p)\,dt = M_{p,r}(z)+\frac{p}{z(p-z)}, \qquad 0<\Re z <p$$ and by the uniqueness property of the Mellin transform, Proposition \ref{Mellin}, we conclude that $$ \int_{\Omega} |1+th|^p\,d\mu=(1+t^r)^{p/r}, \qquad 0<t<\infty,$$ which proves the Theorem for $p>0.$ {\it Case 2:} Let $p<0$ and let $\text{dim } Y=n.$ This is quite similar but now we deal with Gaussian embeddings rather than standard embeddings.
First we note that there is an $f$-isotropic embedding of $\ell_2^m$ into $\muathcal M(\Omegamega,\muu),$ where $(\Omegamega,\muu)$ is a probability measure space and $f$ is symmetric with $$ \int |f|^z\,d\muu = \frac{G(z)G(m-1)}{G(z+m-1)}, \qquad -m<\muathbb{R}e z<\infty.$$ Indeed let $\{\gamma_1,\lambdadots,\gamma_m\}$ be independent normalized Gaussian random variables and let $$ R(a_1,\lambdadots,a_m)=\frac{a_1\gamma_1+\cdots+a_m\gamma_m}{(\gamma_1^2+\cdots+\gamma_m^2)^{1/2}}.$$ We now use Lemma \ref{1}. We consider $h=h(m,p,q,r)$ as in Lemma \ref{g}. Then $$ H(z):= \muathbb E(|h|^z)=\frac{G(p+m-1-z)G(z)\Phi_{r/2}(z/2)\Phi_{r/2}((p-z)/2)}{G(p+m-1)\Phi_{r/2}(p/2)}$$ for $ -1<\muathbb{R}e z<p+m.$ Note that by \eqref{phip} and \eqref{Fq} \betaegin{equation}\lambdaabel{HG} H(z)= \frac{G(p+m-1-z)G(z)F_r(p-z,z)}{G(p+m-1)F_2(p-z,z)}. \end{equation} We may then suppose that $S:Y\to\muathcal M(\Omegamega,\muu)$ is an $h$-isotropic embedding such that $R(X)$ and $S(Y)$ are independent. Finally we define $T:X\oplus_rY\to \muathcal M(\Omegamega,\muu)$ by $$ T(x+y)= \thetaeta(Rx+Sy)$$ where $\thetaeta>0$ is chosen so that $$\thetaeta^p=\frac{G(p+m-1)}{G(m-1)}.$$ We will show that $T$ is a 1-Gaussian embedding. To do this we suppose that $\textbf {x}i$ is an $X\oplus_r Y-$valued Gaussian process of full rank. Let $P:X\oplus_r Y\to X$ and $Q:X\oplus_rY\to Y$ be the natural projections onto $X$ and $Y$ respectively. Let $\textbf {x}i_X=P\textbf {x}i$ and $\textbf {x}i_Y=Q\textbf {x}i.$ Then $\textbf {x}i_X$ has full rank on $X$ and $\textbf {x}i_Y$ on $Y.$ In particular we can write $\textbf {x}i=\sum_{j=1}^{m+n}(x_j+y_j)\gamma_j$ where $y_j=0$ for $n+1\lambdae j\lambdae m+n,$ by choosing an appropriate basis of Gaussian random variables. For $0<s<t$ we have $$\muathbb E\|\textbf {x}i_X+t\textbf {x}i_Y\|^p\lambdae \muathbb E\|\textbf {x}i_X+s\textbf {x}i_Y\|^p \lambdae (s/t)^p\muathbb E\|\textbf {x}i_X+t\textbf {x}i_Y\|^p, \qquad .$$ So, the function $t\muapsto \muathbb E\|\textbf {x}i_X+t\textbf {x}i_Y\|^p$ is continuous on $(0,\infty).$ Similarly since $\textbf {x}i_X$ has full rank, $\{x_{n+1},\lambdadots,x_{m+n}\}$ form a basis of $X.$ This implies $$ \sum_{j=n+1}^{m+n} |Rx_j|^2 \ge c^2>0 \qquad \text{a.e.}$$ and thus, since $p<0$ $$ (\sum_{j=1}^{m+n}|T(x_j+ty_j)|^2)^{p/2} \lambdae c^p\qquad \text{a.e.}$$ We now may conclude, by the Lebesgue Dominated Convergence Theorem, that the map $$ t\muapsto \int_{\Omega}(\sum_{j=1}^{m+n}|T(x_j+ty_j)|^2)^{p/2}d\muu$$ is also continuous on $(0,\infty)$. We will show that \betaegin{equation}\lambdaabel{needed}\int_{\Omega}(\sum_{j=1}^{m+n}|T(x_j+ty_j)|^2)^{p/2}d\muu =\muathbb E\|\textbf {x}i_X+t\textbf {x}i_Y\|^p, \qquad 0<t<\infty\end{equation} by computing the Mellin transform of the left and the right-side of the equality. By Lemma \ref{upper0} we have \betaegin{equation}\lambdaabel{upper1}\muathbb E\|\textbf {x}i_X\|^u\|\textbf {x}i_Y\|^v<\infty, \qquad -m<u,\ -n<v.\end{equation} Suppose $x\in X$ and $y\in Y$ are non-zero. Then for $-1/2<u,\ v<0$ we use Lemma \ref{independent} to compute: \betaegin{align*} &\int_0^{\infty}\int_{\Omegamega}t^{-1-v}|T(x+ty)|^{u+v}\,d\muu\,dt=\\ &=\frac12\int_{\Omegamega}\int_0^{\infty}t^{-1-v}(|Tx+tTy|^{u+v}+|Tx-tTy|^{u+v})\,dt\,d\muu\\ &= \frac{G(u+v)F_2(u,v)}{G(u)G(v)}\int_{\Omegamega} |Tx|^u|Ty|^v\,d\muu .\end{align*} Then by the definition of $T,$ equation \eqref{HG} and Lemma \ref{1}, the latter is equal to $$ G(m-1)\frac{\thetaeta^{u+v}G(u+v)F_2(u,v)H(v)}{G(u+m-1)G(v)}\|x\|^u\|y\|^v<\infty. 
$$ The calculation can then be repeated for $-1/2<\muathbb{R}e w, \muathbb{R}e<0$ to give \betaegin{align*} &\int_0^{\infty}\int_{\Omegamega}t^{-z-1}|T(x+ty)|^{w+z}\,d\muu\,dt \\ &=G(m-1)\frac{\thetaeta^{w+z}G(w+z)F_2(w,z)H(z)}{G(w+m-1)G(z)}\|x\|^w\|y\|^z.\end{align*} Again calculating first with real $u,v,$ using Tonelli's theorem and since $\{\gamma_i\}$ are Gaussian r.v. we may compute the following integral for $-1/2<\muathbb{R}e z,\ \muathbb{R}e w<0,$ \betaegin{align*} &\int_0^\infty t^{-z-1} \int_\Omegamega (\sum_{j=1}^{m+n} (T(x_j+ty_j))^2)^{(w+z)/2}d\muu\,dt\\ &= \frac{1}{G(w+z)}\muathbb E\lambdaeft(\int_0^{\infty}t^{-z-1}(\int_\Omegamega |\sum_{j=1}^{m+n}\gamma_jTx_j +t\gamma_jTy_j|^{w+z}d\muu)dt\right).\end{align*} Then by Lemma \ref{symmetrize}, using \eqref{upper1} we have \betaegin{align}\lambdaabel{MT} &= G(m-1)\frac{\thetaeta^{w+z}F_2(w,z)H(z)}{G(w+m-1)G(z)}\muathbb E\lambdaeft(\|\sum_{j=1}^{m+n}\gamma'_jx_j\|^w\|\sum_{j=1}^{m+n}\gamma'_jy_j\|^z\right) \nuonumber \\ &= G(m-1)\frac{\thetaeta^{w+z}F_2(w,z)H(z)}{G(w+m-1)G(z)}\muathbb E(\|\textbf {x}i_X\|^w\|\textbf {x}i_Y\|^z) \end{align} Now the right-hand side of \eqref{MT} extends to be holomorphic when $-n<\muathbb{R}e z<0$ and $-m<\muathbb{R}e w<0.$ Using Proposition \ref{extension} (twice) one obtains that $$\int_0^{\infty}\int_{\Omegamega}t^{-1-v}(\sum_{j=1}^{m+n} (T(x_j+ty_j))^2)^{(u+v)/2}\,d\muu\,dt<\infty$$ when $-m<u<0$ and $-n<v<0.$ This in turn means that the function $$(w,z)\muapsto \int_0^{\infty}\int_{\Omegamega}t^{-z-1}(\sum_{j=1}^{m+n} (T(x_j+ty_j))^2)^{(w+z)/2}\,d\muu\,dt $$ is holomorphic for $-m<\muathbb{R}e w<0$ and $-n<\muathbb{R}e z<0.$ Thus we have \betaegin{align*} &\int_0^{\infty}\int_{\Omegamega}t^{-z-1}(\sum_{j=1}^{m+n} (T(x_j+ty_j))^2)^{(w+z)/2}\,d\muu\,dt=\\ &=G(m-1)\frac{\thetaeta^{w+z}F_2(w,z)H(z)}{G(z)G(w+m-1)}\muathbb E\|\textbf {x}i_X\|^w\|\textbf {x}i_Y\|^z\end{align*} whenever $-m<\muathbb{R}e w<0$ and $-n<\muathbb{R}e z<0.$ In particular by \eqref{HG} we have that for $\muax\{-n,p\}<\muathbb{R}e z<0$ \betaegin{equation}\lambdaabel{mel}\int_0^{\infty}t^{-z-1}\int_{\Omegamega}(\sum_{j=1}^{m+n} (T(x_j+ty_j))^2)^{p/2}\,d\muu\,dt=M_{p,r}(z)\muathbb E\|\textbf {x}i_X\|^{p-z}\|\textbf {x}i_Y\|^z.\end{equation} On the other hand for $-1/2<\muathbb{R}e w,\ \muathbb{R}e z<0$, using \eqref{Fq} we get that \betaegin{align*} \int_0^{\infty}t^{-1-z}E\|\textbf {x}i_X+t\textbf {x}i_Y\|^{w+z}dt &=\muathbb E\int_0^{\infty}t^{-z-1}(\|\textbf {x}i_X\|^r+t\|\textbf {x}i_Y\|^r)^{(w+z)/r}dt\\ &=F_r(w,z) \muathbb E\|\textbf {x}i_X\|^w\|\textbf {x}i_Y\|^z.\end{align*} As before these calculations should be done first for real $w,z$ to justify the use of Fubini's theorem. Since the right-hand side is holomorphic for $-m<\muathbb{R}e w<0$ and $-n<\muathbb{R}e z<0$, we again use Proposition \ref{extension} to derive equality for $(w,z)$ in the larger region. Hence the Mellin transform of the right-side of \eqref{needed} is \betaegin{equation}\lambdaabel{ME} \int_0^{\infty}t^{-1-z}E\|\textbf {x}i_X+t\textbf {x}i_Y\|^pdt=M_{p,r}(z)\muathbb E\|\textbf {x}i_X\|^{p-z}\|\textbf {x}i_Y\|^z, \end{equation} for $\muax\{-n,p\}<\muathbb{R}e z<0.$ Comparing \eqref{mel} and \eqref{ME} we get \eqref{needed}. In particular $$ \muathbb E\|\textbf {x}i\|^p=\int_{\Omega} (\sum_{j=1}^{m+n}|T(x_j+y_j)|^2)^{p/2}\,d\muu,$$ which implies that $T$ is a 1-gaussian embedding. 
{\it Case 3:} When $p=0$ the result follows by showing that each space embeds into $L_p$ for every $p<0.$ \end{proof} The particular case $p=0$ with $r=2$ also follows if we consider Proposition 6.6 of \cite{KaltonKoldobskyYaskinYaskina2007}. \begin{Thm}\label{ex2} For any $-\infty<p<2$ and any $n\ge 3-p$ there exists a normed space $X$ of dimension $n$ such that $X\in \mathcal I_s$ whenever $s\le p$ and $X\notin \mathcal I_s$ whenever $s>p.$ We may take $X=\ell_2^{1-[p]}\oplus_r\ell_{q}^{n-1+[p]}$ where $q=1+p-[p]$ and $q<r\le 2.$\end{Thm} \begin{proof} If $1\le p<2$ then $q=p$ and $X=\ell_p^n.$ Then by \cite{Koldobsky1999b} we have that $X\in\mathcal I_s$ only if $s\le p$ (see also the Introduction). Let $p<1.$ Then by Theorem \ref{mainexample} if $s\le p$ then $X\in\mathcal I_s,$ since $q=m+p.$ Conversely, we suppose that $X\in\mathcal I_s.$ If $n=1$ there is nothing to prove, so we may assume that $n\geq 2.$ Then $n-1+[p]\ge 2-p+[p]>1.$ By Theorem \ref{main}, $\ell_q^{n-1+[p]}\in \mathcal I_{\alpha},$ where $\alpha\leq \min\{r, s+1-[p]\}.$ But $q<r$ so $\ell_q^{n-1+[p]}\in \mathcal I_{s+1-[p]}.$ Consequently, $s+1-[p]\le 1+p-[p]$ which implies that $s\le p.$ \end{proof} \begin{bibsection} \begin{biblist} \bib{Bretagnolleetal1965/6}{article}{ author={Bretagnolle, J.}, author={Dacunha-Castelle, D.}, author={Krivine, J.-L.}, title={Lois stables et espaces $L\sp {p}$}, language={French}, journal={Ann. Inst. H. Poincar\'e Sect. B (N.S.)}, volume={2}, date={1965/1966}, pages={231--259}, } \bib{KaltonKoldobsky2004}{article}{ author={Kalton, N. J.}, author={Koldobsky, A.}, title={Banach spaces embedding isometrically into $L\sb p$ when $0<p<1$}, journal={Proc. Amer. Math. Soc.}, volume={132}, date={2004}, pages={67\ndash 76 (electronic)}, } \bib{KaltonKoldobsky2005}{article}{ author={Kalton, N. J.}, author={Koldobsky, A.}, title={Intersection bodies and $L\sb p$-spaces}, journal={Adv. Math.}, volume={196}, date={2005}, pages={257--275}, } \bib{KaltonKoldobskyYaskinYaskina2007}{article}{ author={Kalton, N. J.}, author={Koldobsky, A.}, author={Yaskin, V.}, author={Yaskina, M.}, title={The geometry of $L\sb 0$}, journal={Canad. J. Math.}, volume={59}, date={2007}, pages={1029--1049}, } \bib{Koldobsky1998}{article}{ author={Koldobsky, A.}, title={Second derivative test for intersection bodies}, journal={Adv. Math.}, volume={136}, date={1998}, pages={15--25}, } \bib{Koldobsky1998a}{article}{ author={Koldobsky, A.}, title={Intersection bodies in ${\mathbb R}\sp 4$}, journal={Adv. Math.}, volume={136}, date={1998}, pages={1--14}, } \bib{Koldobsky1999}{article}{ author={Koldobsky, A.}, title={A generalization of the Busemann-Petty problem on sections of convex bodies}, journal={Israel J. Math.}, volume={110}, date={1999}, pages={75--91}, } \bib{Koldobsky1999b}{article}{ author={Koldobsky, A.}, title={Positive definite distributions and subspaces of $L\sb {-p}$ with applications to stable processes}, journal={Canad. Math. Bull.}, volume={42}, date={1999}, pages={344--353}, } \bib{Koldobsky2000}{article}{ author={Koldobsky, A.}, title={A functional analytic approach to intersection bodies}, journal={Geom. Funct. 
Anal.}, volume={10}, date={2000}, pages={1507--1526}, } \bib{Koldobsky2005}{book}{ author={Koldobsky, A.}, title={Fourier analysis in convex geometry}, series={Mathematical Surveys and Monographs}, volume={116}, publisher={American Mathematical Society}, place={Providence, RI}, date={2005}, pages={vi+170}, } \bib{KoldobskyYaskin2008}{book}{ author={Koldobsky, A.}, author={Yaskin, V.}, title={The interface between convex geometry and harmonic analysis}, series={CBMS Regional Conference Series in Mathematics}, volume={108}, publisher={Published for the Conference Board of the Mathematical Sciences, Washington, DC}, date={2008}, pages={x+107}, } \bib{Lutwak1988}{article}{ author={Lutwak, E.}, title={Intersection bodies and dual mixed volumes}, journal={Adv. in Math.}, volume={71}, date={1988}, pages={232--261}, } \bib{MilmanE2006}{article}{ author={Milman, E.}, title={Generalized intersection bodies}, journal={J. Funct. Anal.}, volume={240}, date={2006}, pages={530--567}, } \bib{Remmert1998}{book}{ author={Remmert, R.}, title={Classical topics in complex function theory}, series={Graduate Texts in Mathematics}, volume={172}, note={Translated from the German by Leslie Kay}, publisher={Springer-Verlag}, place={New York}, date={1998}, } \bib{Schlieper2007}{article}{ author={Schlieper, J.}, title={A note on $k$-intersection bodies}, journal={Proc. Amer. Math. Soc.}, volume={135}, date={2007}, pages={2081--2088 (electronic)}, } \bib{Yaskin2008}{article}{ author={Yaskin, V.}, title={On strict inclusions in hierarchies of convex bodies}, journal={Proc. Amer. Math. Soc.}, volume={136}, date={2008}, pages={3281--3291}, } \bib{Zemanian1987}{book}{ author={Zemanian, A. H.}, title={Generalized integral transformations}, publisher={Dover Publications, Inc.}, date={1987}, } \end{biblist} \end{bibsection} \end{document}
\betagin{document} \date{\today} \title{A contraction principle in semimetric spaces} \author[M. Bessenyei]{Mih\'aly Bessenyei} \author[Zs. P\'ales]{Zsolt P\'ales} \address{Institute of Mathematics, University of Debrecen, H-4010 Debrecen, Pf.\ 12, Hungary} \email{[email protected]} \email{[email protected]} \subjclass[2010]{Primary 47H10; Secondary 54H25, 54A20, 54E25.} \keywords{Banach Fixed Point Theorem, Matkowski Fixed Point Theorem, contraction principle, iterative fixed point theorems, semimetric spaces.} \thanks{This research was realized in the frames of T\'AMOP 4.2.4. A/2-11-1-2012-0001 ``National Excellence Program Elaborating and operating an inland student and researcher personal support system''. The project was subsidized by the European Union and co-financed by the European Social Fund. This research was also supported by the Hungarian Scientific Research Fund (OTKA) Grant NK 81402.} \betagin{abstract} A branch of generalizations of the Banach Fixed Point Theorem replaces contractivity by a weaker but still effective property. The aim of the present note is to extend the contraction principle in this spirit for such complete semimetric spaces that fulfill an extra regularity property. The stability of fixed points is also investigated in this setting. As applications, fixed point results are presented for several important generalizations of metric spaces. \end{abstract} \maketitle \section{Introduction} Although the contraction principle appears partly in the method of successive approximation in the works of Cauchy \cite{Cau1835}, Liouville \cite{Lio1837}, and Picard \cite{Pic1890}, its abstract and powerful version is due to Banach \cite{Ban22}. This form of the contraction principle, commonly quoted as the Banach Fixed Point Theorem, states that any contraction of a complete metric space has exactly one fixed point. Until now, this seminal result has been generalized in several ways and some of these generalizations initiated new branches in the field of Iterative Fixed Point Theory. The books by Granas and Dugundji \cite{GraDug03}, and by Zeidler \cite{Zei86} give an excellent and detailed overview of the topic. Some generalizations of Banach's fundamental result replace contractivity by a weaker but still effective property; for example, the self-mapping $T$ of a metric space $X$ is supposed to satisfy \Eq{main}{ d(Tx,Ty)\le\varphi\bigl(d(x,y)\bigr)\qquad(x,y\in X).} To the best of our knowledge, the assumption above appears first in the paper by Browder \cite{Brow68} and by Boyd and Wong \cite{BoyWon69}. One of the most important result in this setting was obtained by Matkowski \cite{Mat75} who established the following statement. \Thmn{Assume that $(X,d)$ is a complete metric space and $\varphi\colon\mathbb R_+\to\mathbb R_+$ is a monotone increasing function such that the sequence of iterates $(\varphi^n)$ tends to zero pointwise on the positive half-line. If $T\colon X\to X$ is a mapping satisfying \eqref{main}, then it has a unique fixed point in $X$.} Another branch of generalizations of Banach's principle is based on relaxing the axioms of the metric space. For an account of such developments, see the monographs by Rus \cite{Rus01}, by Rus, Petru\c{s}el and Petru\c{s}el \cite{RusPetPet08}, and by Berinde \cite{Ber07}. 
The aim of this note is to extend the contraction principle combining these two directions: The main result is formulated in the spirit of Matkowski, while the underlying space is a complete semimetric space that fulfills an extra regularity condition. These kind of spaces involve standard metric, ultrametric and inframetric spaces. The stability of fixed points is also investigated in this general setting. \section{Conventions and Basic Notions} Throughout this note, $\mathbb R_+$ and $\overline{\mathbb R}_+$ stand for the set of all nonnegative and extended nonnegative reals, respectively. The \emph{iterates} of a mapping $T\colon X\to X$ are defined inductively by the recursion $T^1=T$ and $T^{n+1}=T\circ T^n$. Dropping the third axiom of Fr\'echet \cite{Fre1906}, we arrive at the notion of semimetric spaces: Under a \emph{semimetric space} we mean a pair $(X,d)$, where $X$ is a nonempty set, and $d\colon X\times X\to\mathbb R_+$ is a nonnegative and symmetric function which vanishes exactly on the diagonal of the Cartesian product $X\times X$. In semimetric spaces, the notions of \emph{convergent} and \emph{Cauchy sequences}, as like as (open) \emph{balls} with given center and radius, can be introduced in the usual way. For an open ball with center $p$ and radius $r$, we use the notation $B(p,r)$. Under the \emph{diameter} of $B(p,r)$ we mean the supremum of distances taken over the pairs of points of the ball. Under the \emph{topology} of a semimetric space we mean the topology induced by open balls. \betagin{Def} Consider a semimetric space $(X,d)$. We say that $\Phi\colon\overline{\mathbb R}_+^2\to\overline{\mathbb R}_+$ is a \emph{triangle function} for $d$, if $\Phi$ is symmetric and monotone increasing in both of its arguments, satisfies $\Phi(0,0)=0$ and, for all $x,y,z\in X$, the generalized triangle inequality \Eq{*}{ d(x,y)\le\Phi\bigl(d(x,z),d(y,z)\bigr)} holds. \end{Def} The construction below plays a key role in the further investigations. For a semimetric space $(X,d)$, define the function \Eq{basic}{ \Phi_d(u,v):=\sup\{d(x,y)\mid\exists p\in X: d(p,x)\le u,\,d(p,y)\le v\}\qquad(u,v\in\overline{\mathbb R}_+).} Simple and direct calculations show, that $\Phi_d\colon\overline{\mathbb R}^2_+\to\overline{\mathbb R}_+$ is a triangle function for $d$. This function is called the \emph{basic triangle function}. Note also, that the basic triangle function is optimal in the following sense: If $\Phi$ is a triangle function for $d$, then $\Phi_d\le\Phi$ holds. Obviously, metric spaces are semimetric spaces with triangle function $\Phi(u,v):=u+v$. Ultrametric spaces are also semimetric spaces if we choose $\Phi(u,v):=\max\{u,v\}$. Not claiming completeness, we present here some further examples that can be interpreted in this framework: \betagin{itemize} \item $\Phi(u,v)=c(u+v)$ ($c$-relaxed triangle inequality); \item $\Phi(u,v)=c\max\{u,v\}$ ($c$-inframetric inequality); \item $\Phi(u,v)=(u^p+v^p)^{1/p}$ ($p$th-order triangle inequality, where $p>0$). \end{itemize} Briefly, each semimetric space $(X,d)$ can be equipped with an optimal triangle function attached to $d$; this triangle function provides an inequality that corresponds to and plays the role of the classical triangle inequality. In this sense, semimetric spaces are closer relatives of metric spaces then the system of axioms suggests. Throughout the present note, we shall restrict our attention only to those semimetric spaces whose basic triangle function is continuous at the origin. 
These spaces are termed \emph{regular}. Clearly, the basic triangle function of a regular semimetric space is bounded in a neighborhood of the origin. The importance of regular semimetric spaces is highlighted by the next technical result. \Lem{regsem}{The topology of a regular semimetric space is Hausdorff. A convergent sequence in a regular semimetric space has a unique limit and possesses the Cauchy property. Moreover, a semimetric space $(X,d)$ is regular if and only if \Eq{diameter}{ \lim_{r\to 0}\sup_{p\in X}\mathop{\hbox{\rm diam}} B(p,r)=0.}} \begin{proof} For the first statement, assume to the contrary that there exist distinct points $x,y\in X$ such that, for all $r>0$, the balls $B(x,r)$ and $B(y,r)$ are not disjoint. The continuity and the separate monotonicity of the basic triangle function guarantee the existence of $\delta>0$ such that, for all $r<\delta$, we have $\Phi_d(r,r)<d(x,y)$. Therefore, if $p\in B(x,r)\cap B(y,r)$, we get the contradiction \Eq{*}{ d(x,y)\le\Phi_d\bigl(d(p,x),d(p,y)\bigr)\le\Phi_d(r,r)<d(x,y).} The Hausdorff property immediately implies that the limit of a convergent sequence is unique. Assume now that $(x_n)$ is convergent and tends to $x\in X$. Then the generalized triangle inequality implies the estimate $d(x_n,x_m)\le\Phi_d\bigl(d(x,x_n),d(x,x_m)\bigr)$. The regularity of the underlying space ensures that the right-hand side tends to zero as $n,m\to\infty$, yielding the Cauchy property. If $(X,d)$ is a regular semimetric space, then the basic triangle function $\Phi_d$ is continuous at the origin. Hence, for ${\varepsilon}>0$, there exist $u_0,v_0>0$ such that $\Phi_d(u,v)<{\varepsilon}$ whenever $0<u<u_0$ and $0<v<v_0$. Let $r_0=\min\{u_0,v_0\}$. Fix $0<r<r_0$ and $p\in X$. If $x,y\in B(p,r)$, then, using the separate monotonicity of triangle functions, \Eq{*}{ d(x,y)\le\Phi_d\bigl(d(p,x),d(p,y)\bigr)\le\Phi_d(r,r)<{\varepsilon}} follows. That is, $\mathop{\hbox{\rm diam}} B(p,r)\le{\varepsilon}$ holds for all $p\in X$. Since ${\varepsilon}$ is an arbitrary positive number, we arrive at the desired limit property. Assume conversely that the diameters of balls with small radius are uniformly small, and take sequences of positive numbers $(u_n)$ and $(v_n)$ tending to zero. For fixed $n\in\mathbb N$, define $r_n=\max\{u_n,v_n\}$ and take elements $p,x,y\in X$ satisfying $d(p,x)\le u_n$ and $d(p,y)\le v_n$. Then, $x,y\in B(p,r_n)$; therefore \Eq{*}{ d(x,y)\le\mathop{\hbox{\rm diam}} B(p,r_n)\le\sup_{p\in X}\mathop{\hbox{\rm diam}} B(p,r_n).} Taking into account the definition of the basic triangle function $\Phi_d$ and the choice of $p,x,y$, we arrive at the inequality \Eq{*}{ \Phi_d(u_n,v_n)\le\sup_{p\in X}\mathop{\hbox{\rm diam}} B(p,r_n).} Here the right-hand side tends to zero as $n\to\infty$ by hypothesis, resulting in the continuity of $\Phi_d$ at the origin. \end{proof} As usual, a semimetric space is termed \emph{complete} if each Cauchy sequence of the space is convergent. In view of the previous lemma, convergence and the Cauchy property cannot be distinguished in complete and regular semimetric spaces. In order to construct a large class of complete, regular semimetric spaces, we introduce the notion of equivalence of semimetrics. Given two semimetrics $d_1$ and $d_2$ on $X$, an increasing function $L\colon\overline{\mathbb R}_+\to\overline{\mathbb R}_+$ such that $L(0)=0$ and \Eq{*}{ d_1(x,y)\leq L(d_2(x,y)) \qquad(x,y\in X) } is called a \emph{Lipschitz modulus with respect to the pair $(d_1,d_2)$}. 
It is immediate to see that the function $L_{d_1,d_2}\colon\overline{\mathbb R}_+\to\overline{\mathbb R}_+$ defined by \Eq{*}{ L_{d_1,d_2}(t):=\sup\{d_1(x,y)\mid x,y\in X,\,d_2(x,y)\leq t\} \qquad(t\in \overline{\mathbb R}_+) } is the smallest Lipschitz modulus with respect to $(d_1,d_2)$. The semimetrics $d_1$ and $d_2$ are said to be \emph{equivalent} if \Eq{*}{ \lim_{t\to0+} L_{d_1,d_2}(t)=0\qquad\mbox{and}\qquad \lim_{t\to0+} L_{d_2,d_1}(t)=0 } i.e., if $L_{d_1,d_2}$ and $L_{d_2,d_1}$ are continuous at zero. It is easy to verify that, indeed, this notion of equivalence is an equivalence relation. For the proof of the transitivity one should use the inequality $ L_{d_1,d_3}\leq L_{d_1,d_2}\circ L_{d_2,d_3}$. Our next result establishes that the convergence, completeness and the regularity of a semimetric space is invariant with respect to the equivalence of the semimetrics. \Lem{equiv}{If $d_1$ and $d_2$ are semimetrics on $X$, then \Eq{Lip}{ \Phi_{d_1}\leq L_{d_1,d_2}\circ\Phi_{d_2}\circ(L_{d_2,d_1},L_{d_2,d_1}). } Provided that $d_1$ and $d_2$ are equivalent semimetrics, we have that \betagin{enumerate}[(i)] \item a sequence converges to point in $(X,d_1)$ if and only if it converges to the same point in $(X,d_2)$; \item a sequence is Cauchy in $(X,d_1)$ if and only if it is Cauchy in $(X,d_2)$; \item $(X,d_1)$ is complete if and only if $(X,d_2)$ is complete; \item $(X,d_1)$ is regular if and only if $(X,d_2)$ is regular. \end{enumerate}} \betagin{proof} Using the monotonicity properties, for $x,y,z\in X$, we have \Eq{*}{ d_1(x,y)\leq L_{d_1,d_2}(d_2(x,y)) &\leq L_{d_1,d_2}\Big(\Phi_{d_2}(d_2(x,z),d_2(z,y))\big)\Big) \\ &\leq L_{d_1,d_2}\Big(\Phi_{d_2}\big(L_{d_2,d_1}(d_1(x,z)),L_{d_2,d_1}(d_1(z,y))\big)\Big), } whence it follows that the map $L_{d_1,d_2}\circ\Phi_{d_2}\circ(L_{d_2,d_1},L_{d_2,d_1})$ is a triangle function for $d_1$, hence \eqref{Lip} must be valid. For (i), let $(x_n)$ be a sequence converging to $x$ in $(X,d_1)$. Then $(d_1(x_n,x))$ is a null-sequence. By the continuity of $L_{d_2,d_1}$ at zero, the right-hand side of the inequality \Eq{*}{ d_2(x_n,x)\leq L_{d_2,d_1}(d_1(x_n,x))} also tends to zero, hence $(d_2(x_n,x))$ is also a null-sequence. The reversed implication holds analogously. The proof for the equivalence of the Cauchy property is completely similar. To prove (iii), assume that $(X,d_1)$ is complete and let $(x_n)$ be a Cauchy sequence in $(X,d_2)$. Then, by (ii), $(x_n)$ is Cauchy in $(X,d_1)$. Hence, there exists an $x\in X$ such that $(x_n)$ converges to $x$ in $(X,d_1)$. Therefore, by (i), $(x_n)$ converges to $x$ in $(X,d_2)$. This proves the completeness of $(X,d_2)$. The reversed implication can be verified analogously. Finally, assume that $(X,d_2)$ is a regular semimetric space, which means that $\Phi_{d_2}$ is continuous at $(0,0)$. Using inequality \eqref{Lip}, it follows that $\Phi_{d_1}$ is also continuous at $(0,0)$ yielding that $(X,d_1)$ is regular, too. \end{proof} By the above result, if a semimetric is equivalent to a complete metric, then it is regular and complete. With a similar argument that was followed in the above proofs, one can easily verify that equivalent semimetrics on $X$ generate the same topology. In the sequel, we shall need a concept that extends the notion of classical contractions to nonlinear ones. This extension is formulated applying comparison functions fulfilling the assumptions of Matkowski. 
\betagin{Def} Under a \emph{comparison function} we mean a monotone increasing function $\varphi\colon\mathbb R_+\to\mathbb R_+$ such that the limit property $\lim_{n\to\infty}\varphi^n(t)=0$ holds for all $t\ge 0$. Given a semimetric space $(X,d)$ and a comparison function $\varphi$, a mapping $T\colon X\to X$ is said to be \emph{$\varphi$-contractive} or a \emph{$\varphi$-contraction} if it fulfills \eqref{main}. \end{Def} The statement of the next lemma is well-known, we provide its proof for the convenience of the reader. \Lem{mainprop}{If $\varphi$ is a comparison function, then $\varphi(t)<t$ for all positive $t$. If $(X,d)$ is a semimetric space and $T\colon X\to X$ is a $\varphi$-contraction, then $T$ has at most one fixed point.} \betagin{proof} For the first statement, assume at the contrary that $t\le\varphi(t)$ for some $t>0$. Whence, by monotonicity and using induction, we arrive at \Eq{*}{ t\le\varphi(t)\le\varphi^2(t)\le\cdots\le\varphi^n(t).} Upon taking the limit $n\to\infty$, the right hand-side tends to zero, contradicting to the positivity of $t$. In view of this property, the second statement follows immediately. Indeed, if $x_0,y_0$ were distinct fixed points of a $\varphi$-contraction $T$, then we would arrive at \Eq{*}{ t=d(x_0,y_0)=d(Tx_0,Ty_0)\le\varphi(d(x_0,y_0))=\varphi(t)<t.} This contradiction implies $x_0=y_0$. \end{proof} \section{The Main Results} The main results of this note is presented in two theorems. The first one is an extension of the Matkowski Fixed Point Theorem \cite{Mat75} for complete and regular semimetric spaces. The most important ingredient of the proof is that a domain invariance property remains true for sufficiently large iterates of $\varphi$-contractions. \Thm{mainfix}{If $(X,d)$ is a complete regular semimetric space and $\varphi$ is a comparison function, then every $\varphi$-contraction has a unique fixed point.} \betagin{proof} Let $T\colon X\to X$ be a $\varphi$-contraction and let $p\in X$ be fixed arbitrarily. Define the sequence $(x_n)$ by the standard way $x_n:=T^np$. Observe that, for all fixed $k\in\mathbb N$, the sequence $\bigl(d(x_n,x_{n+k})\bigr)$ tends to zero. Indeed, by the asymptotic property of comparison functions, \Eq{*}{ d(x_n,x_{n+k})=d(Tx_{n-1},Tx_{n+k-1})\le \varphi\bigl(d(x_{n-1},x_{n+k-1})\bigr)\le\cdots\le \varphi^n\bigl(d(p,T^kp)\bigr)\longrightarrow 0.} We are going to prove that $(x_n)$ is a Cauchy sequence. Fix ${\varepsilon}>0$. The continuity of the basic triangle function guarantees the existence of a neighborhood $U$ of the origin such that, for all $(u,v)\in U$, we have the inequality $\Phi_d(u,v)<{\varepsilon}$. Or equivalently, applying the separate monotonicity, there exists some $\delta({\varepsilon})>0$ such that $\Phi_d(u,v)<{\varepsilon}$ holds if $0\le u,v<\delta({\varepsilon})$. The asymptotic property of comparison functions allows us to fix an index $n({\varepsilon})\in\mathbb N$ such that $\varphi^{n({\varepsilon})}({\varepsilon})<\delta({\varepsilon})$ hold. Then, $\psi:=\varphi^{n({\varepsilon})}$ is a comparison function. Hence, if $0\le u,v<\min\{{\varepsilon},\delta({\varepsilon})\}$, \Eq{*}{ \Phi_d\bigl(u,\psi(v)\bigr)\le\Phi_d(u,v)<{\varepsilon}.} Immediate calculations show, that the mapping $S:=T^{n({\varepsilon})}$ is a $\psi$-contraction. Let the nonnegative integer $k$ and the points $x,y\in X$ be arbitrary. 
Then, applying the monotonicity properties of comparison functions and their iterates, \Eq{*}{ d(T^kSx,T^kSy)\le\psi\bigl(d(T^kx,T^ky)\bigr) \le\psi\circ\varphi^k\bigl(d(x,y)\bigr) \le\psi\bigl(d(x,y)\bigr)} follows. This inequality immediately implies that $T^kS$ maps the ball $B(x,{\varepsilon})$ into itself if it makes small perturbation on the center. Indeed, if $y\in B(x,{\varepsilon})$ and $d(x,T^kSx)<\delta({\varepsilon})$, the choices of ${\varepsilon}$ and $\delta({\varepsilon})$ moreover the separate monotonicity of the basic triangle function yield \Eq{*}{ d(x,T^kSy) \le\Phi_d\bigl(d(x,T^kSx),d(T^kSx,T^kSy)\bigr) \le\Phi_d\bigl(d(x,T^kSx),\psi(d(x,y)\bigr)<{\varepsilon}.} The properties of $(x_n)$ established at the beginning of the proof ensure that, for all nonnegative $k$, there exists some $n_k\in\mathbb N$ such that the inequalities $d(x_n,T^kSx_n)<\delta({\varepsilon})$ hold whenever $n\ge n_k$. Choose \Eq{*}{ n_0=\max\{n_k\mid k=1,\ldots,n({\varepsilon})\}.} Then, taking into account the previous step, $T^kS\colon B(x_{n_0},{\varepsilon})\to B(x_{n_0},{\varepsilon})$ for $k=1,\ldots,n({\varepsilon})$. In particular, each iterates of $S$ is a self-mapping of the ball $B(x_{n_0},{\varepsilon})$. Let $n>n_0$ be an arbitrarily given natural number. Then $n=mn({\varepsilon})+k$, where $m\in\mathbb N$ and $k\in\{1,\ldots,n({\varepsilon})\}$; hence, the definition of $S$ leads to \Eq{*}{ T^nS=T^{mn({\varepsilon})+k}S=T^kT^{mn({\varepsilon})}S=T^kS^mS=T^kS^{m+1}.} Therefore, \Eq{*}{ T^nS\bigl(B(x_{n_0},{\varepsilon})\bigr) =T^kS^{m+1}\bigl(B(x_{n_0},{\varepsilon})\bigr) \subset T^kS\bigl(B(x_{n_0},{\varepsilon})\bigr)\subset B(x_{n_0},{\varepsilon}).} In other words, due to property \eqref{diameter} of \lem{regsem}, the sequence $(T^nSx_{n_0})$ is Cauchy and hence so is $(x_n)$. The completeness implies, that it tends to some element $x_0$ of $X$. Our claim is that $x_0$ is a fixed point of $T$. Applying the generalized triangle inequality, \Eq{*}{ d(x_0,Tx_0)&\le\Phi_d\bigl(d(x_0,x_{n+1}),d(Tx_0,x_{n+1})\bigr)\\ &=\Phi_d\bigl(d(x_0,x_{n+1}),d(Tx_0,Tx_n)\bigr)\\ &\le\Phi_d\bigl(d(x_0,x_{n+1}),\varphi(d(x_0,x_n))\bigr)\\ &\le\Phi_d\bigl(d(x_0,x_{n+1}),d(x_0,x_n)\bigr).} Therefore, \Eq{*}{ d(x_0,Tx_0)\le\lim_{n\to\infty}\Phi_d\bigl(d(x_0,x_{n+1}),d(x_0,x_n)\bigr)=\Phi_d(0,0)=0} follows. That is, $Tx_0=x_0$, as it was desired. To complete the proof, recall that a $\varphi$-contraction may have at most one fixed point. \end{proof} Our second main result is based on \thm{mainfix} whose proof requires some additional ideas. It asserts the stability of fixed points of iterates of $\varphi$-contractions. \Thm{stabfix}{Let $(X,d)$ be a complete and regular semimetric space. If $(T_n)$ is a sequence of $\varphi$-contractions converging pointwise to a $\varphi$-contraction $T_0\colon X\to X$, then the sequence of the fixed points of $(T_n)$ converges to the unique fixed point of $T_0$.} \betagin{proof} First we show that, for all $k\in\mathbb N$, $(T_n^k)$ converges to $T_0^k$ pointwise. We prove by induction on $k$. By the assumption of the theorem, we have the statement for $k=1$. Now assume that $T_n^k \to T_0^k$ pointwise. 
Then, for every $x\in X$, we have \Eq{*}{ d(T_n^{k+1}x,T_0^{k+1}x) &\leq \Phi_d\bigl(d(T_n^{k+1}x,T_n^{k}T_0x),d(T_n^{k}T_0x,T_0^{k+1}x)\bigr)\\ &\leq \Phi_d\bigl(\varphi^k(d(T_nx,T_0x)),d(T_n^{k}T_0x,T_0^{k}T_0x)\bigr)\\ &\leq \Phi_d\bigl(d(T_nx,T_0x),d(T_n^{k}T_0x,T_0^{k}T_0x)\bigr)\to0 } since $T_n\to T_0$, $T_n^k \to T_0^k$ pointwise and the semimetric $d$ is regular. This proves that $(T_n^{k+1})$ converges to $T_0^{k+1}$ pointwise. The previous theorem ensures that, for all $n\in\mathbb N$, there exists a unique fixed point $x_n$ of $T_n$ as well as a unique fixed point $x_0$ of $T_0$. To verify the statement of the theorem, assume to the contrary that \Eq{*}{ {\varepsilon}:=\limsup_{n\to\infty}d(x_0,x_n)>0.} Choose $\delta>0$ such that $\Phi_d\bigl(\delta,\delta\bigr)<{\varepsilon}$ holds, then choose $m\in\mathbb N$ such that $\varphi^m(2{\varepsilon})<\delta$, and finally denote $\psi:=\varphi^m$. Using induction, one can easily prove that $S_n=T^m_n$ is a $\psi$-contraction and $x_n$ is a fixed point of $S_n$ for all $n\geq 0$. Furthermore, in view of the previous step, the sequence $(S_nx_0)$ converges to $S_0x_0=x_0$. Hence, for large $n$, $d(x_0,S_nx_0)<\delta$ and $d(x_n,x_0)<2{\varepsilon}$. Therefore, for large $n$, we have \Eq{*}{ d(x_0,x_n)&\le\Phi_d\bigl(d(x_0,S_nx_0),d(S_nx_0,x_n)\bigr)\\ &=\Phi_d\bigl(d(x_0,S_nx_0),d(S_nx_0,S_nx_n)\bigr)\\ &\le\Phi_d\bigl(d(x_0,S_nx_0),\psi(d(x_n,x_0))\bigr) \le\Phi_d\bigl(\delta,\psi(2{\varepsilon})\bigr). } Taking the limit superior, ${\varepsilon}\leq\Phi_d\bigl(\delta,\psi(2{\varepsilon})\bigr) \leq\Phi_d\bigl(\delta,\delta\bigr)<{\varepsilon}$ follows, which is a contradiction. That is, the sequence of fixed points of $T_n$ tends to the fixed point of $T_0$, as stated. \end{proof} If the semimetric $d$ is \emph{self-continuous}, that is, $d(x_n,y_n)\to d(x,y)$ whenever $x_n\to x$ and $y_n\to y$, then the $\varphi$-contractivity of the members of the sequence $(T_n)$ implies the $\varphi$-contractivity of $T_0$. Indeed, taking the limit $n\to\infty$ in the inequalities $d(T_nx,T_ny)\le\varphi\bigl(d(x,y)\bigr)$ and using the self-continuity of $d$, the inequality $d(T_0x,T_0y)\le\varphi\bigl(d(x,y)\bigr)$ follows. Note that the self-continuity of $d$ holds automatically in metric spaces and also in ultrametric spaces. \section{Applications and Concluding Remarks} Not claiming completeness, let us present here some immediate consequences of \thm{mainfix} and \thm{stabfix}, respectively. For their proof, one should observe only that in each case the underlying semimetric spaces are regular. Moreover, the extra assumption of \thm{stabfix} is obviously satisfied. \Cor{Matkowski1}{If $(X,d)$ is a complete $c$-relaxed metric space or complete $c$-inframetric space and $\varphi$ is a comparison function, then every $\varphi$-contraction has a unique fixed point.} \Cor{Matkowski2}{Let $(X,d)$ be a complete metric space or a complete ultrametric space. If $(T_n)$ is a sequence of $\varphi$-contractions converging pointwise to $T\colon X\to X$, then $T$ is a $\varphi$-contraction, and the sequence of the fixed points of $(T_n)$ converges to the fixed point of $T$.} Note that the special case $c=1$ of \cor{Matkowski1} reduces to the result of Matkowski \cite{Mat75}, which is itself a generalization of the result of Browder \cite{Brow68}. Each of these results generalizes the Banach Fixed Point Theorem under the particular choice $\varphi(t)=qt$ where $q\in[0,1[$ is given. 
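The hypothesis on $\varphi$ in these corollaries is strictly weaker than classical contractivity; the following simple illustration (included here for the reader's convenience, it is not taken from the works cited above) may be helpful. The function \Eq{*}{ \varphi(t):=\frac{t}{1+t}\qquad(t\in\mathbb R_+)} is monotone increasing, and an easy induction gives $\varphi^n(t)=t/(1+nt)$, hence $\varphi^n(t)\to0$ for every $t\ge0$ and $\varphi$ is a comparison function. On the other hand, $\varphi(t)/t=1/(1+t)\to1$ as $t\to0+$, so there is no constant $q<1$ with $\varphi(t)\le qt$ for all $t>0$; a mapping satisfying \eqref{main} with this $\varphi$ is therefore not covered directly by the Banach Fixed Point Theorem, while \cor{Matkowski1} and \cor{Matkowski2} still apply.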
Let us also mention, that the contraction principle was also discovered and applied independently (in a few years later after Banach) by Caccioppoli \cite{Cac30}. Our last corollary demonstrates the efficiency of \thm{mainfix} in a particular case when the Banach Fixed Point Theorem cannot be applied directly. For its proof, one should combine \cor{Matkowski1} with \lem{equiv}. \mathscr Cor{extension}{If $(X,d)$ is a semimetric space with a semimetric $d$ equivalent to a complete $c$-relaxed metric or to a complete $c$-inframetric on $X$, then every $\varphi$-contraction has a unique fixed point.} As a final remark, let us quote here a result due to Jachymski, Matkowski, and \'Swi\k{a}tkowski, which is a generalization of the Matkowski Fixed Point Theorem (for precise details, consult \cite{JacMatSwi95}). \Thmn{Assume that $(X,d)$ is a complete Hausdorff semimetric space such that there is some $r>0$ for which the diameters of balls with radius $r$ are uniformly bounded and in which the closure operator induced by $d$ is idempotent. If $\varphi$ is a comparison function, then every $\varphi$-contraction has a unique fixed point.} The proof of this theorem is based on the fundamental works of Chittenden \cite{Chi17} and Wilson \cite{Wil31}. In view of \lem{regsem}, the first two conditions of the result above is always satisfied in regular semimetric spaces. However, the exact connection between the properties of the basic triangle function and the idempotence of the metric closure has not been clarified yet. The interested Readers can find further details on the topology of semimetric spaces in the papers of Burke \cite{Bur72}, Galvin and Shore \cite{GalSho84}, and by McAuley \cite{Mca56}. \betagin{thebibliography}{10} \bibitem{Ban22} S.~Banach, \emph{Sur les op\'erations dans les ensembles abstraits et leur application aux \'equitations int\'grales}, Fund. Math. \textbf{3} (1922), 133--181. \bibitem{Ber07} V.~Berinde, \emph{Iterative approximation of fixed points}, second ed., Lecture Notes in Mathematics, vol. 1912, Springer, Berlin, 2007. \MR{2323613 (2008k:47106)} \bibitem{BoyWon69} D.~W. Boyd and J.~S.~W. Wong, \emph{On nonlinear contractions}, Proc. Amer. Math. Soc. \textbf{20} (1969), 458--464. \MR{0239559 (39 \#916)} \bibitem{Brow68} F.~E. Browder, \emph{On the convergence of successive approximations for nonlinear functional equations}, Indag. Math. \textbf{30} (1968), 27--35. \MR{0230180 (37 \#5743)} \bibitem{Bur72} Dennis~K. Burke, \emph{Cauchy sequences in semimetric spaces}, Proc. Amer. Math. Soc. \textbf{33} (1972), 161--164. \MR{0290328 (44 \#7512)} \bibitem{Cac30} R.~Caccioppoli, \emph{Un teorema generalle sulla esistenza di elementi uniti in una transformazione funzionale}, Rend. Acc. Naz. Lincei \textbf{11} (1930), 794--799. \bibitem{Cau1835} A.~Cauchy, \emph{Oeuvres completes {X}}, Gauthier--Villars, Paris, 1835/1958. \bibitem{Chi17} E.~W. Chittenden, \emph{On the equivalence of \'{E}cart and voisinage}, Trans. Amer. Math. Soc. \textbf{18} (1917), no.~2, 161--166. \MR{1501066} \bibitem{Fre1906} M.~Fr\'echet, \emph{Sur quelques points du calcul fonctionnel}, Rendic. Circ. Mat. Palermo \textbf{22} (1906), 1--74. \bibitem{GalSho84} F.~Galvin and S.~D. Shore, \emph{Completeness in semimetric spaces}, Pacific J. Math. \textbf{113} (1984), 67--74. \bibitem{GraDug03} A.~Granas and J.~Dugundji, \emph{Fixed point theory}, Springer Monographs in Mathematics, Springer-Verlag, New York, 2003. 
\MR{1987179 (2004d:58012)} \bibitem{JacMatSwi95} J.~Jachymski, J.~Matkowski, and T.~{\'S}wi{\c{a}}tkowski, \emph{Nonlinear contractions on semimetric spaces}, J. Appl. Anal. \textbf{1} (1995), no.~2, 125--134. \MR{1395268} \bibitem{Lio1837} J.~Liouville, \emph{Sur le d\'eveloppement des fonctions ou parties de fonctions en s\'eries, etc.}, Second M\'emoire J. Math. \textbf{2} (1837), 16--35. \bibitem{Mat75} J.~Matkowski, \emph{Integrable solutions of functional equations}, Dissertationes Math. \textbf{127} (1975), 1--68. \MR{0412650 (54 \#772)} \bibitem{Mca56} Louis~F. McAuley, \emph{A relation between perfect separability, completeness, and normality in semi-metric spaces}, Pacific J. Math. \textbf{6} (1956), 315--326. \MR{0080907 (18,325c)} \bibitem{Pic1890} E.~Picard, \emph{M\'emoire sur la th\'eorie des \'equations aux d\'eriv\'ee partielles et la m\'ethode des approximations successives}, J. Math. Pures. et Appl. \textbf{6} (1890), 145--210. \bibitem{Rus01} I.~A. Rus, \emph{Generalized contractions and applications}, Cluj University Press, Cluj-Napoca, 2001. \MR{1947742 (2004f:54043)} \bibitem{RusPetPet08} I.~A. Rus, A.~Petru{\c{s}}el, and G.~Petru{\c{s}}el, \emph{Fixed point theory}, Cluj University Press, Cluj-Napoca, 2008. \MR{2494238 (2010a:47127)} \bibitem{Wil31} Wallace~Alvin Wilson, \emph{On {S}emi-{M}etric {S}paces}, Amer. J. Math. \textbf{53} (1931), no.~2, 361--373. \MR{1506824} \bibitem{Zei86} E.~Zeidler, \emph{Nonlinear functional analysis and its applications. {I}}, Springer-Verlag, New York, 1986, Fixed-point theorems, Translated from the German by Peter R. Wadsack. \MR{816732 (87f:47083)} \end{thebibliography} \end{document}
\begin{document} \begin{abstract} An $N$-tiling of triangle $ABC$ is a way to cut $ABC$ into $N$ congruent smaller triangles. The smaller triangle is the ``tile.'' When $ABC$ is isosceles with base angles $\alpha$, and not equilateral, there are only four possible tiles (aside from a tile similar to $ABC$): a right-angled tile with one angle $\alpha$, a tile with angles $(\alpha, \beta, 2\alpha)$, a tile with angles $(\alpha,\beta, 2\pi/3)$, or a tile with angles satisfying $3\alpha + 2\beta = \pi$ (and in all but the first case, with $\alpha$ not a rational multiple of $\pi$). We study the first three cases in this paper. For tilings by a right triangle, $N$ has to be a square, or an even sum of squares, or six times a square; in particular it cannot be a prime congruent to 3 mod 4; and all these possibilities actually occur. We prove that unless $ABC$ is a right isosceles triangle, $N$ has to be even. For tilings by $(\alpha, \beta, 2\alpha)$, we show that the tile is necessarily rational (the ratios of its sides are rational), and we give a necessary condition for the existence of a tiling. This condition implies that when an isosceles and not equilateral $ABC$ is $N$-tiled by such a tile, $N$ cannot be a prime number, or even squarefree. In the last case, when the tile has a 120 degree angle, we also prove that the tile must be rational, and find a necessary condition for the existence of a tiling. That condition rules out $N < 36$, but leaves open whether $N$ can possibly be prime. The smallest known such tiling has $N = 75140$. \noindent 2010 Mathematics Subject Classification: 51M20 (primary); 51M04 (secondary) \end{abstract} \title{Tilings of an isosceles triangle} \section{Introduction} An $N$-tiling of triangle $ABC$ by triangle $T$ is a way of writing $ABC$ as a union of $N$ triangles congruent to $T$, overlapping only at their boundaries. The triangle $T$ is the ``tile''. We consider here the case of an isosceles (but not equilateral) triangle $ABC$. Our results fit into a larger research program, begun by Laczkovich \cite{laczkovich1995}. Laczkovich studied the possible shapes of tiles and triangles that can be used in tilings, and obtained results that will be described below. The reader who is new to the subject may want to see examples of $N$-tilings for various shapes of $ABC$; such pictures can be found in \cite{beeson-noseven}. Here we give only examples relevant to the case of $ABC$ isosceles. First we point out that {\em any} triangle can be decomposed into $n^2$ congruent triangles by drawing $n-1$ equally spaced lines parallel to each of the three sides of the triangle, as illustrated in Fig.~\ref{figure:bigquadratic}. Moreover, the large (tiled) triangle is similar to the small triangle (the ``tile''). We call such a tiling a {\em quadratic tiling.} \begin{figure} \caption{A quadratic tiling of an arbitrary triangle} \label{figure:bigquadratic} \end{figure} \FloatBarrier It follows that if we have a tiling of a triangle $ABC$ into $N$ congruent triangles, and $m$ is any integer, we can tile $ABC$ into $Nm^2$ triangles by subdividing the first tiling, replacing each of the $N$ triangles by $m^2$ smaller ones. Hence the set of $N$ for which an $N$-tiling of some triangle exists is closed under multiplication by squares. Sometimes it is possible to combine two quadratic tilings (using the same tile) into a single tiling, as shown in Fig.~\ref{figure:biquadratic}. 
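For a small numerical instance of this closure property (an illustration added here for concreteness, using the tilings displayed in the figures that follow): starting from the $13$-tiling of Fig.~\ref{figure:biquadratic} and subdividing each tile quadratically with $m=2$, one obtains an $Nm^2 = 13\cdot 4 = 52$-tiling of the same triangle, and taking $m=3$ gives a $13\cdot 9 = 117$-tiling.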
\begin{figure} \caption{Biquadratic tilings with $N = 13 = 3^2 + 2^2$ and $N=74 = 5^2 + 7^2$} \label{figure:biquadratic} \end{figure} We will explain how these tilings are constructed. We start with a big right triangle resting on its hypotenuse, and divide it into two right triangles by an altitude. Then we quadratically tile each of those triangles. The trick is to choose the dimensions in such a way that the same tile can be used throughout. If that can be done then evidently $N$, the total number of tiles, will be the sum of two squares, $N = n^2 + m^2$, one square for each of the two quadratic tilings. On the other hand, if we start with an $N$ of that form, and we choose the tile to be an $n$ by $m$ right triangle, then we can construct such a tiling. We call these tilings ``biquadratic.'' More generally, a {\em biquadratic tiling} of triangle $ABC$ is one in which $ABC$ has a right angle at $C$, and can be divided by an altitude from $C$ to $AB$ into two triangles, each similar to $ABC$, which can be tiled respectively by $n^2$ and $m^2$ copies of a triangle similar to $ABC$. A larger biquadratic tiling, with $n=5$ and $m=7$ and hence $N=74$, is shown in at the right of Fig.~\ref{figure:biquadratic}. If the original triangle $ABC$ is chosen to be isosceles, and is then quadratically tiled, then each of the $n^2$ triangles can be divided in half by an altitude; hence any isosceles triangle can be decomposed into $2n^2$ congruent triangles. If the original triangle is equilateral, then it can be first decomposed into $n^2$ equilateral triangles, and then these triangles can be decomposed into 3 or 6 triangles each, showing that any equilateral triangle can be decomposed into $3n^2$ or $6n^2$ congruent triangles. For example we can 12-tile an equilateral triangle in two different ways, starting with a 3-tiling and then subdividing each triangle into 4 triangles (``subdividing by 4''), or starting with a 4-tiling and then subdividing by 3. There is another family of $N$-tilings, in which $N$ is of the form $3m^2$, and both the tile and the tiled triangle are 30-60-90 triangles. We call these the ``triple-square'' tilings. The case case $m=2$ makes $N=12$. There are two ways to 12-tile a 30-60-90 triangle with 30-60-90 triangle. One is to first quadratically 4-tile it, and then subtile the four triangles with the 3-tiling of Figure 1. This produces the first 12-tiling in Fig.~\ref{figure:12-tilings}. Somewhat surprisingly, there is another way to tile the same triangle with the same 12 tiles, also shown in Fig.~\ref{figure:12-tilings}. The next member of this family is $m=3$, which makes $N=27$. Two 27-tilings are shown in Fig.~\ref{figure:27-tilings}. Similarly, there are two 48-tilings (not shown). \begin{figure} \caption{Two 12-tilings} \label{figure:12-tilings} \end{figure} \begin{figure} \caption{Two 27-tilings } \label{figure:27-tilings} \end{figure} Whenever there is an $N$-tiling of the right triangle $ABM$, there is a $2N$-tiling of the isosceles triangle $ABC$. Using the biquadratic tilings (see Fig.~\ref{figure:biquadratic}) and triple-square tilings (see Fig~\ref{figure:12-tilings} and Fig.~\ref{figure:27-tilings}), we can produce $2N$-tilings when $N$ is a sum of squares or three times a sum of squares. We call these tilings ``double biquadratic'' and ``hexquadratic''. For example, one has two 10-tilings and two 26-tilings, obtained by reflecting Figs. 
4 and 5 about either of the sides of the triangles shown in those figures; and one has 24-tilings and 54-tilings obtained from Figs. 8 and 9. Note that in the latter two cases, $ABC$ is equilateral. In the case when the sides of the tile $T$ form a Pythogorean triple $n^2 + m^2 + k^2 = N/2$, then we can tile one half of $ABC$ with a quadratic tiling and the other half with a biquadratic tiling. The smallest example is when the tile has sides 3, 4, and 5, and $N = 50$. See Fig.~\ref{figure:severalisosceles2}. One half is 25-tiled quadratically, and the other half is divided into two smaller right triangles which are 9-tiled and 16-tiled quadratically. This shows that the tiling of $ABC$ does not have to be symmetric about the altitude. {\mathbb F}loatBarrier As we shall see below, the work of Laczkovich implies that there are only four possible shapes of the tile: right-angled, $\gamma = 2\alpha$, $\gamma = 2{\mathfrak P}i/3$, and $3\alpha + 2\beta = {\mathfrak P}i$. The last case is taken up in another paper, since the techniques apply also to tilings of non-isosceles triangles $ABC$ with $3\alpha+2\beta = {\mathfrak P}i$. The first three are studied in this paper. We obtain, in the second and third case, necessary conditions on $N$, but not necessary and sufficient conditions. In the case of a right-angled tile, our conditions are necessary and sufficient. All known examples of tilings of isosceles $ABC$ with $\alpha \neq \frac {\mathfrak P}i 2$ have $N$ even. We could prove that it must be so when the tile is right-angled, but we could not prove it in the other two cases, where indeed we know only a few tilings, all of which require $N$ with five to seven digits. \subsection{Acknowledgment} I am grateful to Miklos Laczkovich for his valuable comments on my work and especially for his simplification of the proofs of Lemmas~\ref{lemma:Gamma}, \ref{lemma:isosceleshelper2}, and \ref{lemma:isosceleshelper3}, and of course for his many pioneering papers in this subject, on which this paper rests. \subsection{Definitions and notation} We first note that this paper is about triangles $ABC$ that are isosceles and not equilateral. Let that be understood; then for the rest of this paper, ``isosceles'' means ``exactly two sides are equal.'' \footnote{\,That was Euclid's definition of ``isosceles.''} We give a mathematically precise definition of ``tiling'' and fix some terminology and notation. Given a triangle $T$ and a larger triangle $ABC$, a ``tiling'' of triangle $ABC$ by triangle $T$ is a set of triangles $T_1,\ldots,T_n$ congruent to $T$, whose interiors are disjoint, and the closure of whose union is triangle $ABC$. Let $a$, $b$, and $c$ be the sides of the tile $T$, and angles $\alpha$, $\beta$, and $\gamma$ be the angles opposite sides $a$, $b$, and $c$. The letter ``$N$'' will always be used for the number of triangles used in the tiling. An $N$-tiling of $ABC$ is a tiling that uses $N$ copies of some triangle $T$. The meanings of $N$, $\alpha$, $\beta$, $\gamma$, $a$, $b$,$c$, $A$, $B$, and $C$ will be fixed throughout this paper. We do not assume $\alpha \le \beta$ in general; although that may sometimes be justified by symmetry, we often will consider some equation such as $3\alpha + 2\beta = {\mathfrak P}i$, in which case we do not want to assume $\alpha \le \beta$. \section{History} Above we exhibited quadratic and biquadratic tilings in which the tile is similar to $ABC$. There are hexagonal tilings, not exhibited in this paper, but see \cite{beeson-noseven} for pictures. 
These involve $N$ being square, a sum of two squares, or three times a square. The biquadratic tilings were known in 1964, when the paper \cite{golomb} was published. This is the earliest paper on the subject of which I am aware. \footnote{\,The simplest hexagonal tiling is attributed to Major MacMahon (1921) in the notes accompanying a plastic toy I purchased at an AMS meeting in 2012.} Snover {\em et.\,al.}\,\cite{snover1991} took up the challenge of showing that these are the only possible values of $N$, when the tile is similar to $ABC$. The following theorem completely answers the question, ``for which $N$ does there exist an $N$-tiling in which the tile is similar to the tiled triangle?'' \begin{theorem} [Snover {\em et.\,al.}\,\cite{snover1991}] \label{theorem:snover} Suppose $ABC$ is $N$-tiled by tile $T$ similar to $ABC$. If $N$ is not a square, then $T$ and $ABC$ are right triangles. Then either (i) $N$ is three times a square and $T$ is a 30-60-90 triangle, or (ii) $N$ is a sum of squares $e^2 + f^2$, the right angle of $ABC$ is split by the tiling, and the acute angles of $ABC$ have rational tangents $e/f$ and $f/e$, \noindent and these two alternatives are mutually exclusive. \end{theorem} Soifer's book \cite{soifer} appeared in 1990, with a second edition in 2009. He considered two ``Grand Problems'': for which $N$ can {\em every} triangle be $N$-tiled, and for which $N$ can {\em every} triangle be dissected into similar, but not necessarily congruent triangles. (The latter eventually became a Mathematics Olympiad problem.) The 2009 edition has an added chapter in which the biquadratic tilings and a theorem of Laczkovich occur. Miklos Laczkovich published six papers \cite{laczkovich1990,laczkovich1995, laczkovich1998, laczkovich2010, laczkovich-szekeres1995, laczkovich2012} on triangle and polygon tilings. According to Soifer, the 1995 paper was submitted in 1992. Laczkovich, like Soifer, studied dissecting a triangle into smaller {\em similar} triangles, not {\em congruent} triangles as we require here. If those similar triangles are rational (i.e., the ratios of their sides are rational) then if we divide each of them into small enough quadratic subtilings, we can achieve an $N$-tiling into {\em congruent} triangles, but of course $N$ may be large. Laczkovich focused primarily on the shapes of $ABC$ (or more generally, convex polygons) and of the tile. His theorems give us an exhaustive list of the possible shapes of $ABC$ and the tile, which we will need in our proof that there is no 7-tiling. This list can be found in \S\ref{section:laczkovich} (of this paper). However, his theorem published in the last chapter of \cite{soifer} does mention $N$. It states that given an integer $k$, there exists an $N$-tiling for some $N$ whose square-free part is $k$. \section{Laczkovich}\label{section:laczkovich} A basic fact is that, apart from a small number of cases that can be explicitly enumerated, if there is an $N$-tiling of $ABC$ by a tile with angles $(\alpha, \beta, \gamma)$, then the angles $\alpha$ and $\beta$ are not rational multiples of ${\mathfrak P}i$. This theorem follows from Theorems~4.1, 5.1, and 5.3 of \cite{laczkovich1995}. Laczkovich is dealing with a more general situation, tiling an arbitrary triangle by tiles that are only required to be similar, not congruent. We extract the following theorem from his results by specializing to isosceles $ABC$ and congruent tiles. 
\begin{table}[ht] \caption{Possible tilings of isosceles triangles, according to Laczkovich.} \label{table:isosceles} \begin{center} \setlength{\extrarowheight}{.5em} \begin{tabular}{rr} $ABC$ & the tile \\ \hline $(\beta,\beta,2 \alpha)$ & similar to $ABC$ \\ $(\beta,\beta,2\alpha)$ & $\gamma = \pi/2$ \\ $(\alpha,\alpha,\pi-2\alpha)$ & $\gamma = 2\alpha$ \\ $(\alpha,\alpha,\pi-2\alpha)$ & $\gamma = 2\pi/3$ \\ $(\alpha,\alpha,\alpha+2\beta)$ &$3\alpha + 2\beta = \pi$ \\ $(\beta,\beta,3\alpha)$ &$3\alpha + 2\beta = \pi$ \\ $(\alpha+\beta,\alpha+\beta,\alpha)$ &$3\alpha + 2\beta = \pi$ \end{tabular} \end{center} \end{table} \begin{theorem}[Laczkovich \cite{laczkovich1995}] \label{theorem:laczkovich-isosceles} Let isosceles (and not equilateral) triangle $ABC$ be $N$-tiled by a tile with angles $(\alpha,\beta,\gamma)$. Then the possible shapes of $ABC$ and the tile are given by Table~\ref{table:isosceles}. In the table, the triples giving the angles of the tile are $(\alpha, \beta, \gamma)$ after a suitable permutation, i.e., they are unordered triples. In all but the first two lines, $\alpha$ is not a rational multiple of $\pi$. \end{theorem} \noindent{\em Remark}. For example, in the second line of the table, we do not list separately $(\alpha,\alpha,2\beta)$, as that is already covered by the entry $(\beta,\beta,2\alpha)$, and the fact that we do not assume $\alpha < \beta$. \noindent{\em Proof}. This theorem is proved in \cite{laczkovich1995}, but it is not stated in quite this way; therefore we spell out in detail how this statement follows from theorems explicitly stated in \cite{laczkovich1995}. Let isosceles (and not equilateral) triangle $ABC$ be $N$-tiled by a tile with angles $(\alpha,\beta,\gamma)$. Then either all three angles are rational multiples of $\pi$, or not. Case~1, they are not all rational multiples of $\pi$. Then by Theorem~4.1 of \cite{laczkovich1995}, where $T$ in that paper is our $ABC$, one of six cases holds. Cases (i), (ii), and (iv) are the first three lines of our table. Case (iii) says $ABC$ is equilateral, which we have ruled out by hypothesis. Case (v) says $3\alpha + 2\beta = \pi$ and the base angles of isosceles $ABC$ must be $\alpha$ or $\beta$ or $\alpha + \beta$, by Theorem~2.4 of \cite{laczkovich1995}; so that is lines 5, 6, and 7 of our table. Finally, case~(vi) has the tile $(\alpha, \alpha,\alpha+3\beta)$ with $\gamma = 2\pi/3$, which is another way of writing line~4 of the table, since if $\gamma = 2\pi/3$, then $\alpha+3\beta = \pi-2\alpha$. Case~2, all three of $(\alpha,\beta,\gamma)$ are rational multiples of $\pi$. Then Theorem~5.1 of \cite{laczkovich1995} applies. That theorem is about dissections into similar (rather than congruent) triangles, and according to the subsequent Theorem~5.3, the last three cases (cases (v), (vi), and (vii)) in Theorem~5.1 cannot hold for dissections into congruent triangles. Cases (i) and (ii) are the first two lines of our table. Case (iii) requires $ABC$ equilateral, which we have ruled out by hypothesis. Case (iv) has $ABC$ a right triangle with one angle $\pi/6$, which is not isosceles and hence irrelevant here. That completes the proof. 
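\noindent{\em Example}. To illustrate how Table~\ref{table:isosceles} is read, consider the case in which $ABC$ is right isosceles, with base angles $\pi/4$ (this worked example is ours and is not part of Laczkovich's statement). In lines 3--7 the base angle of $ABC$ is $\alpha$, $\beta$, or $\alpha+\beta$, where $3\alpha+2\beta=\pi$ in the last three lines; setting the base angle equal to $\pi/4$ forces $\alpha$ to be a rational multiple of $\pi$ (namely $\pi/4$ in lines 3--5, $\pi/6$ in line 6, and $\pi/2$ in line 7, where moreover $\beta$ would be negative), contradicting the last sentence of Theorem~\ref{theorem:laczkovich-isosceles}. Only the first two lines remain, and in both of them $\beta = \pi/4$ and $2\alpha = \pi/2$, so the tile is a right isosceles triangle similar to $ABC$.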
We note in passing the following immediate consequence of Laczkovich's theorem: If an isosceles triangle $ABC$ is tiled by a right-angled tile $(\alpha,\beta,\frac {\mathfrak P}i 2)$, then the base angles of $ABC$ are either equal to $\beta$ or to $\alpha$. That follows, because in Table~\ref{table:isosceles}, there is only one entry corresponding to a right-angled tile, namely the second line. Readers are invited to try to prove that directly, without appeal to Laczkovich, in order to gain a deeper appreciation for Laczkovich's work. \section{Some number-theoretic facts} The facts in this section may not be well-known to all our readers, and their proofs are short. \begin{lemma}\label{lemma:sumsofsquares} An integer $N$ can be written as a sum of two integer squares if and only if the squarefree part of $N$ is not divisible by any prime of the form $4n+3$. \end{lemma} \noindent{\em Proof}. See for example \cite{hardy-wright}, Theorem~366, p.~299. \begin{lemma}\label{lemma:doublesquares} $N$ is a sum of two squares if and only if $2N$ is a sum of two squares. \end{lemma} \noindent{\em Proof}. The lemma follows immediately from the identities \begin{eqnarray*} (p-q)^2 + (p+q)^2 &=& 2(p^2+q^2)\\ \left( \frac{p-q} 2 \right)^2 + \left(\frac{p+q} 2\right)^2 &=& \frac 1 2 \,(p^2+q^2). \end{eqnarray*} This lemma is also a corollary of Lemma~\ref{lemma:sumsofsquares}, of course, but that is not needed. The following lemma identifies those relatively few rational multiples of ${\mathfrak P}i$ that have rational tangents or whose sine and cosine satisfy a polynomial of low degree over ${\mathbb Q}$. \begin{lemma} \label{lemma:euler} Let $\theta = 2m {\mathfrak P}i/n$, where $m$ and $n$ have no common factor. Suppose $\cos \theta$ is algebraic of degree 1 or 2 over ${\mathbb Q}$. Then $n$ is one of $5,6,8,10,12$. If both $\cos \theta$ and $\sin \theta$ have degree 1 or 2 over ${\mathbb Q}$, then $n$ is $6,8$, or $12$. \end{lemma} \noindent{\em Proof}. Let $\varphi$ be the Euler totient function. Assume $\cos \theta$ has degree 1 or 2. By \cite{niven}, Theorem~3.9, p.~37, $\varphi(n) = 2$ or $4$. The stated conclusion follows from the well-known formula for $\varphi(n)$. The second part of Theorem~3.9 of \cite{niven} rules out $n=5$ or $10$ when $\sin \theta$ is also of degree 1 or 2. \begin{lemma}[Pythagorean triangles] \label{lemma:pythagoras} The integer solutions of the equation $x^2+y^2 = z^2$ have the form $(x,y,z) = (m^2-k^2, 2mk,m^2+k^2)$ for some integers $(m,k)$ \end{lemma} \noindent{\em Proof}. See any number theory textbook. But the proof is short, so we just give it here. By the Pythagorean theorem, $(x,y,z)$ form a right triangle, with one angle $\alpha$ such that $x/z = \cos \alpha$ and $y/z = \sin \alpha$. We use the Weierstrass substitution, $t = \tan(\alpha/2)$. Then \begin{eqnarray*} \cos \alpha &=& \frac {1-t^2}{1+t^2} \qquad \sin \alpha \ = \ \frac{2t}{1+t^2} \end{eqnarray*} Setting $t = m/k$ in lowest terms, and replacing $\sin \alpha$ and $\cos \alpha$ by $y/z$ and $x/z$, we find the formulas of the lemma for $(x,y,z)$. That completes the proof. \begin{lemma} \label{lemma:rationalsquares} If the integer $n$ is a sum of two rational squares then it is a sum of two integer squares. \end{lemma} \noindent{\em Proof}. Suppose $n = (p/q)^2 + (s/t)^2$. Then $(qt)^2 n = p^2 + s^2$. Then by Lemma~\ref{lemma:sumsofsquares}, the square-free part of n is not divisible by any prime congruent to 3 mod 4. 
Then by a second application of Lemma~\ref{lemma:sumsofsquares}, $n$ is a sum of two integer squares. That completes the proof. \section{Tilings of an isosceles $ABC$ by a right-angled tile: examples} \begin{figure} \caption{A 54-tiling; $N/2$ is three times a square. Tile is 30-60-90.} \label{figure:54} \end{figure} \begin{figure} \caption{$N$ is twice a square or twice a sum of squares. } \label{figure:severalisosceles} \end{figure} \begin{figure} \caption{50 is both twice a square and twice a sum of squares. } \label{figure:severalisosceles2} \end{figure} \begin{figure} \caption{$N=104$, eight essential segments, base angles about $56^\circ$} \label{figure:8essentials} \end{figure} Is it possible to have more complicated tilings without essential segments? Yes, because when two tiles share their hypotenuses, they form a rectangle, and we can just draw the diagonal of that rectangle the other way. In this way we can produce (exponentially) many different tilings, but they differ only in this trivial way. And sometimes, as shown in Fig.~\ref{figure:nicelytiled}, even those rectangles can be rotated. That figure also shows that a tiling need not necessarily include the altitude of $ABC$. \begin{figure} \caption{The altitude need not be part of the tiling.} \label{figure:nicelytiled} \end{figure} \begin{figure} \caption{$N = 2312$, $N/2 = 34^2$, $(a,b,c) = (3,4,5)$} \label{figure:2312} \end{figure} In the tilings based on two biquadratic tilings, there are no $c$ edges on $AB$ and $BC$, while in the tilings based on two quadratic tilings, there are only $c$ edges. There are of course some hybrid tilings when a square is also a sum of squares, in which $AB$ falls under one case and $BC$ under the other. If $N/2$ is not a square (as is the case for the biquadratic tilings) then there are no $c$ edges on $AB$ and $BC$, as we see in the biquadratic tilings (and prove in the next section). All these tilings, in which $N/2$ is a sum of squares, involve essential segments (where tiles of different lengths occur on the two sides of an internal line). One sees such linear relations in two of the tilings illustrated in Fig.~\ref{figure:severalisosceles}. \section{Laczkovich's graphs ${\mathbb G}amma_a$} In trying to prove the impossibility of certain tilings directly, it is easy to become involved in complicated arguments with many cases, involving complicated diagrams. Laczkovich had the brilliant idea to abstract some of these arguments using graph theory. The definition will not be grasped immediately, but instead will require time and the study of examples to understand. But it leads to very elegant proofs of theorems that are much more complicated or impossible to prove more directly. To emphasize its importance, we devote a whole section just to the definition. Given a tiling of a triangle $ABC$, an {\em internal segment} is a line segment connecting two vertices of the tiling that is contained in the union of the boundaries of the tiles, and lies in the interior of $ABC$ except possibly for its endpoints. A {\em maximal segment} is an internal segment that is not part of a longer internal segment. A segment is {\em terminated} at a vertex $P$ if it has tiles on both sides with vertices at $P$. (In that case there may or may not be a continuation of that segment past $P$.) A {\em left-terminated segment} is an internal segment $XY$ that is terminated at $X$. A {\em left-maximal segment} is an internal segment $PQ$ that cannot be extended past $P$ to a longer internal segment $UPQ$. 
(In these two concepts, we are using directed segments; so $PQ$ is not the same as $QP$ in this context. The ``left'' in ''left maximal'' refers to the fact that $P$ is listed to the left of $Q$ in $PQ$.) A tile is {\em supported by} $XY$ if one edge of the tile lies on $XY$. The internal segment $XY$ is said to have ``all $c$'s on the left'' if the endpoints $X$ and $Y$ are vertices of tiles supported by $XY$ and lying on the left side of $XY$, and all tiles supported by $XY$ lying on the left of $XY$ have their $c$ edges on $XY$. Similarly for ``all $c$'s on the right.'' (Here again $XY$ is a directed segment, so the concept ``left side'' of $XY$ makes sense; but this is a different sense of the English word ``left'' than in ``left-terminated.'') An internal segment $XY$ is said to {\em witness the relation} $jc = \ell a + mb$ if one side has $j$ more $c$ edges than the other, and the other has $\ell$ more $a$ edges and $m$ more $b$ edges than the first. The simplest example is when $XY$ has all $c$'s on one side, and exactly $j$ of them (that is, the length of $XY$ is $jc$), and on the other side $XY$ supports $\ell$ tiles with their $a$ edges on $XY$ and $m$ tiles with their $b$ edges on $XY$ (in any order) and no other tiles, and the endpoints $X$ and $Y$ are vertices of tiles on both sides of $XY$. Similarly we use the terminology ``$XY$ witnesses a relation $jb = \ell a + mc$.'' An internal segment that witnesses a relation is called an {\em essential segment}. The definition allows that an essential segment might have different numbers of tiles of lengths $a,b,c$ on its two sides, without necessarily having all the tiles on one side be the same length, but often it is the case that all the tiles on one side are the same length. To be sure that you understand the concept of ``essential segment'', identify the eight essential segments in Fig.~\ref{figure:8essentials}. Also identify in that figure some internal segments $PQ$ that are not essential segments, because each side of $PQ$ supports tiles with three $b$ and two $a$ sides on $PQ$. (Those $PQ$ connect the midpoints of the sides of $ABC$ in that figure.) The following definition is equivalent to the one given in \cite[p.~346]{laczkovich2012}, except that there condition (iv) is automatic because of an additional assumption about the tile. \begin{definition} [The directed graph ${\mathbb G}amma_a$] \label{definition:Gamma} Given a tiling of some triangle, the nodes of the graph ${\mathbb G}amma_a$ are certain vertices of the tiling. A link of ${\mathbb G}amma_a$ connects vertices $X$ and $Y$ if (i) the segment $XY$ is a left-maximal internal segment having all $a$ edges on one side (say Side~1) of $XY$, and (ii) On the other side of $XY$ (say Side~2) the first tile (the one with a vertex at $X$) does not have its $a$ edge on $XY$, and (iii) At vertex $Y$, there is another tile supported by $XY$ on Side~1 of $XY$ with a vertex at $Y$, that does not have its $a$ edge on Side~1, and (iv) No tile supported by $XY$ on Side~2 of $XY$ has a vertex at $Y$. The directed graphs ${\mathbb G}amma_b$ and ${\mathbb G}amma_c$ are defined similarly. \end{definition} \noindent{\em Remarks for clarification}. We use ``link'' instead of ``edge'' with these graphs, to avoid confusion with tile ``edges.'' If $XY$ is a link in ${\mathbb G}amma_a$, then $XY$ does not terminate at $Y$, because there is another tile past $Y$ whose $a$ side is not on $XY$; and also because $Y$ lies on the interior of an edge of a tile on the other side of $XY$. 
The reader is recommended to identify all three graphs $\Gamma_a$, $\Gamma_b$, and $\Gamma_c$ in Fig.~\ref{figure:8essentials}. Hint: $\Gamma_c$ is empty; $\Gamma_b$ has four links, starting at the midpoints of the sides of $ABC$; $\Gamma_a$ is, in this example, the same graph as $\Gamma_b$. \section{Tilings of an isosceles $ABC$ by a right-angled tile: theory} Laczkovich studied the possible shapes of tiles that can tile an isosceles triangle, but did not characterize the possible $N$. We do so in this section for right-angled tiles and $N$ even. We have two ways to tile an isosceles triangle by a right triangle: either tile each of its two halves by a quadratic tiling, in which case $N$ is twice a square, or tile each of its halves with a biquadratic tiling, in which case $N$ is twice a sum of squares. See Figs.~\ref{figure:severalisosceles} and \ref{figure:severalisosceles2}. The main theorem in this section shows that these are the only possible values of $N$. But Fig.~\ref{figure:2312} shows that, when $N/2$ is a square, there are also more complicated tilings. \begin{lemma}\label{lemma:Gamma} Suppose isosceles (or equilateral) triangle $ABC$ with base angles $\beta$ is $N$-tiled by tile $(\alpha,\beta,\pi/2)$ with sides $(a,b,c)$, and $\alpha$ is not a rational multiple of $\pi$, or $\alpha$ is an odd multiple of $\beta$. Let $PQ$ be a link in $\Gamma_c$. Then there are two adjacent tiles with vertices at $Q$ whose common boundary contains an $a$ or $b$ edge of one tile, and a $c$ edge of the other tile. \end{lemma} \noindent{\em Remarks}. For short, there is an $a/c$ edge at $Q$, or a $b/c$ edge at $Q$. Consider Fig.~\ref{figure:54}, in which $\beta = \pi/6$ and $\alpha = \pi/3$, so $\alpha$ is not an odd multiple of $\beta$. Observe that the present lemma fails in the tiling of Fig.~\ref{figure:54}, showing that the hypothesis that $\alpha$ is an odd multiple of $\beta$ cannot be removed. \noindent{\em Proof}. Let $\Delta_1,\ldots,\Delta_k$ be tiles with vertices at $Q$, numbered so that $\Delta_1$ is supported by $PQ$ (and hence has side $c$ on $PQ$), $\Delta_i$ and $\Delta_{i+1}$ are adjacent, and $\Delta_k$ has one edge extending $PQ$ past $Q$. Since $PQ$ is a link in $\Gamma_c$, there does exist such a tile $\Delta_k$, and $\Delta_k$ has its $a$ or $b$ edge extending $PQ$. (There may or may not be tiles on the other side of $PQ$ with a vertex at $Q$, but if so, we do not list them among the $\Delta_i$.) If there are an even number of tiles with vertices at $Q$ and an $\alpha$ or $\beta$ angle at $Q$, then each has a $c$ edge ending at $Q$. Since $\Delta_1$ has its $c$ edge ending at $Q$ and $\Delta_k$ does not, the remaining odd number of $c$ edges cannot all be paired with other $c$ edges supported by the same line. Therefore, there is an $a/c$ or $b/c$ edge, as claimed. Therefore, we may assume the number of such tiles is odd. At most one tile can have its right angle at $Q$, and it cannot be $\Delta_1$. Now, if $\alpha$ is not a rational multiple of $\pi$, then either $k=2$ and both angles are right angles, or $k=4$ and there are two $\alpha$ and two $\beta$ angles, or $k=3$ and there are one each of $(\alpha,\beta,\frac \pi 2)$. In all these cases, the above condition that there are an even number of tiles at $Q$ with an $\alpha$ or $\beta$ angle at $Q$ is fulfilled.
If $\alpha$ is an odd multiple $m$ of $\beta$, then the same argument works, as the number of tiles with vertices at $Q$ that do not have their right angle at $Q$ will still be even. That completes the proof of the lemma. \begin{lemma}\label{lemma:Gamma2} Suppose isosceles (or equilateral) triangle $ABC$ with base angles $\beta$ is $N$-tiled by tile $(\alpha,\beta,\pi/2)$ with sides $(a,b,c)$, and $\alpha$ is not a rational multiple of $\pi$, or $\alpha$ is an odd multiple of $\beta$. Suppose there is no relation $jc = ua + vb$ with $j > 0$ and $u,v \ge 0$, and $u,v,j$ integers. Then (i) Let $PQ$ be a link in $\Gamma_c$. Then there is a vertex $R$ such that $QR$ is a link in $\Gamma_c$. (ii) The in-degree and out-degree of each node of $\Gamma_c$ is exactly 1. \end{lemma} \noindent{\em Remark}. Note that in Fig.~\ref{figure:8essentials}, it is not true that the in-degree of every vertex in $\Gamma_b$ is equal to the out-degree of that vertex. That is because, in that tiling, there is a relation $2b = 3a$. Please take the time to verify that in Fig.~\ref{figure:54}, the conclusion of this lemma fails, but so does the hypothesis that $\alpha$ is an odd multiple of $\beta$; this shows the necessity of that hypothesis. \noindent{\em Proof}. According to Lemma~\ref{lemma:Gamma}, there is an outgoing $a/c$ or $b/c$ edge from $Q$. Let $R$ lie on the internal segment containing that edge, as far as possible from $Q$ such that there are only $c$ edges on one side of $QR$. Then $R$ is not a vertex of a tile on the other side of $QR$, since that would give rise to a relation $jc = ua + vb$, where $j$ is the excess of the number of $c$ edges on one side of $QR$ over the other. Then, by definition of $\Gamma_c$, $QR$ is a link in $\Gamma_c$. That completes the proof of (i). Next we observe that the in-degree of each node $Q$ of $\Gamma_c$ is at most 1. For, if $PQ$ is a link of $\Gamma_c$, then $PQ$ is part of an internal segment of the tiling that extends past $Q$, and on one side $Q$ is not a vertex of any tile supported by that segment. Hence no other internal segment can pass through $Q$; hence there is no other link of $\Gamma_c$ ending at $Q$. By part~(i), the out-degree of each node of $\Gamma_c$ that has positive in-degree is at least 1. Hence, the out-degree always is greater than or equal to the in-degree. But since every link has one head and one tail, the total in-degree is equal to the total out-degree. Therefore, the in-degree and out-degree are equal at every node. Therefore, if the in-degree is positive, both the in-degree and out-degree are 1. If the in-degree is zero, so must the out-degree be zero, but then that node is not in $\Gamma_c$ at all. That completes the proof of the lemma. \begin{corollary} \label{lemma:allc} Suppose isosceles (or equilateral) triangle $ABC$ with base $AC$ and base angles $\beta$ is $N$-tiled by tile $(\alpha,\beta,\pi/2)$ with sides $(a,b,c)$, and $\alpha$ is not a rational multiple of $\pi$, or $\alpha$ is an odd multiple of $\beta$. Suppose there is no relation $jc = ua + vb$ with $j > 0$ and $u,v \ge 0$, and $u,v,j$ integers. Then $AC$ is composed only of $c$ edges and there are no $c$ edges on $AB$ or $BC$, or $AB$ and $BC$ are composed only of $c$ edges and there is no $c$ edge on $AC$. \end{corollary} \noindent{\em Proof}. According to Lemma~\ref{lemma:Gamma2}, each link $PQ$ in $\Gamma_c$ has a corresponding link $QR$.
Suppose there is a tile with a $c$ edge on $AB$. Unless the entire segment $AB$ supports only tiles with their $c$ edges on $AB$, there will be a segment $PQ$ lying on $AB$ such that $PQ$ is composed of $c$ edges but beyond $Q$ there is another tile with an $a$ or $b$ edge on $PQ$. But that contradicts Lemma~\ref{lemma:Gamma2}, as there will be an outgoing link of ${\mathbb G}amma_c$ from $Q$, but at a boundary point, there can be no incoming link. Hence if there is any $c$ edge on $AB$, then $AB$ is composed entirely of $c$ edges. Similar for $BC$ and $AC$. At vertex $A$, there is an angle $\beta$. There must be a single tile there, with its $\beta$ angle at $A$, since either $\beta < \alpha$ or $\alpha$ is not a rational multiple of ${\mathfrak P}i$. The $c$ edge of that tile must lie on $AB$ or $AC$. If it lies on $AB$, then $AB$ is composed entirely of $c$ edges. If it lies on $AC$, then $AC$ is composed entirely of $c$ edges. If $AC$ is composed of $c$ edges, then $AB$ and $BC$ do not contain any $c$ edges, since if they contained one, then there would also be a $c$ edge at $A$ or $C$, which is impossible since the tiles at $A$ and $C$ have only one $c$ edge, and it is on $AC$. That completes the proof of the lemma. \begin{lemma} \label{lemma:isosceleshelper2} Suppose isosceles (or equilateral) triangle $ABC$ with base angles $\beta$ at $A$ and $C$ is $N$-tiled by a tile with angles $(\alpha,\beta,\gamma)$ and sides $(a,b,1)$. Suppose $\beta \neq {\mathfrak P}i/6$ and $\sqrt{N/2}$ is irrational (i.e., $N$ is not twice a square). Then $a$ and $b$ belong to ${\mathbb Q}(\sqrt{N/2})$. \end{lemma} \noindent{\em Remark}. The tiling in Fig.~\ref{figure:54} shows that the exception for $\beta = {\mathfrak P}i/6$ is necessary. \noindent{\em Proof}. Let $X = \vert AB\vert$. Twice the area of $ABC$ is the cross product of the two equal sides, which is \begin{eqnarray*} X^2 \sin 2\alpha &=& 2 X^2 \sin \alpha \cos \alpha \\ &=& 2X^2 \sin \alpha \sin \beta \ = \ 2X^2 ab \end{eqnarray*} Twice the area of the tile is $ab$. Since $N$ tiles cover $ABC$ we have the area equation \begin{eqnarray*} 2X^2 &=& N \end{eqnarray*} Define \begin{eqnarray*} \lambda := \sqrt{N/2} \end{eqnarray*} Then $X = \lambda$. Let $M$ be the midpoint of the base $AC$. Then triangle $ABM$ has a right angle at $M$, angle $\alpha$ at $B$, and angle $\beta$ at $A$, so it is similar to the tile. Therefore $AM = X \sin \alpha = \lambda a$. Therefore $\vert AC \vert = 2a \lambda$. Since there is a tiling of $ABC$, there are non-negative integers $(p,q,r)$ and $(s,t,u)$ such that \begin{eqnarray*} \lambda &=& pa + qb + r \\ 2a\lambda &=& sa + tb + u \end{eqnarray*} We write this as a system of equations in unknowns $a,b$: \begin{eqnarray*} pa + qb &=& \lambda -r\\ (s-2\lambda)a + tb &=& -u \end{eqnarray*} The determinant $D = pt-q(s-2\lambda)$. If $D \neq 0$, then both $a$ and $b$ belong to ${\mathbb Q}(\lambda)$, since that field contains $D$ and the right-hand sides of the system. Now suppose, for proof by contradiction, that $D=0$. Then since $\lambda$ is irrational, we have $q = 0$ and $pt = qs = 0$. Since $pa + qb = \lambda -r \neq 0$, we have $p \neq 0$ and hence $t = 0$. 
Then the two equations become \begin{eqnarray*} pa &=& \lambda-r \\ (s-2\lambda)a &=& -u \end{eqnarray*} Eliminating $a$ between these two equations we have \begin{eqnarray*} (s-2\lambda)(\lambda-r) &=& -pu \\ -2\lambda^2 + \lambda(s+2r)-sr &=& -pu \\ -N + \lambda(s+2r)-sr &=& -pu \mbox{\qquad since $2\lambda^2 = N$} \end{eqnarray*} Since $\lambda$ is irrational, and $s$ and $r$ are non-negative, we have $s=r=0$. Hence $AC = u$ is made up only of $c$ edges (each of length 1) and $X = pa + qb$, so $AB$ has no $c$ edges. Since also $q=0$ (as shown above), $X = pa$, so $AB$ also has no $b$ edges, and is therefore composed entirely of $a$ edges. A similar argument applies to show that $CB$ also is composed entirely of $a$ edges. Therefore $a = X/p = \lambda / p$ belongs to ${\mathbb Q}(\lambda)$. We cannot, however, immediately conclude that $b$ is in ${\mathbb Q}(\lambda)$. But we can conclude that there is no relation $jc = ua + vb$ with integers $j, u, v$: Suppose there were such a relation. Since $c = 1$, that would mean $a$ is a rational multiple of $b$, and since $a$ belongs to ${\mathbb Q}(\lambda)$, so does $b$, and we are done. Therefore, as claimed, there is no relation $jc = ua + vb$. Now consider the tiles with a vertex at $B$, where triangle $ABC$ has an angle of $2 \alpha$. Since sides $AB$ and $CB$ are composed entirely of $a$ edges, the tiles at $B$ supported by $AB$ and $CB$ have their $a$ edges on $AB$ or $BC$, and hence do not have their $\alpha$ angles at $B$. In particular, the case when there is only one tile at $B$ is ruled out, since it cannot have an $a$ edge on both $AB$ and $BC$; and the case of just two tiles at $B$, both with $\alpha$ angles at $B$, is also ruled out, since neither would have an $a$ edge on $AB$ or $BC$. Therefore there must be some tiles with $\beta$ angles at $B$. Either $\alpha$, or $2\alpha$, or $2\alpha-\pi/2$ must be a multiple of $\beta$. In the latter case, $2 \alpha - \pi/2 = \alpha -\beta$ is a multiple of $\beta$, so $\alpha$ is a multiple of $\beta$. In all of these cases, then, $2\alpha$ is a multiple of $\beta$, say $2\alpha = m \beta$, with $m > 1$. Then \begin{eqnarray*} \beta &=& \frac \pi 2 - \alpha \\ 2\beta &=& \pi - 2 \alpha\ =\ \pi - m \beta \\ (m+2)\beta &=& \pi \\ \beta &=& \frac {\pi} {m+2} \ = \ \frac {2\pi} {2(m+2)} \end{eqnarray*} Now $\cos \beta = \sin \alpha = a$ belongs to ${\mathbb Q}(\lambda)$. By Lemma~\ref{lemma:euler}, $2(m+2)$ is equal to one of $5,6,8,10,12$. Therefore $m+2$ is one of $3,4,5,6$. Since $m > 1$, we have $m = 2$, $m=3$, or $m=4$. If $m=2$, then $\alpha = \beta$, so $b = a$ belongs to ${\mathbb Q}(\lambda)$ and we are done; hence we may assume $m = 3$ or $m = 4$. Case~1, $m = 3$. Then $2\alpha = 3 \beta$. Then $\alpha = 3\pi/10$ and $\beta = \pi/5$. The $2\alpha$ angle of $ABC$ at $B$ is filled with three tiles, each with their $\beta$ angle at $B$. The two tiles on $AB$ and $BC$ have their $a$ edges on $AB$ and $BC$, and their $c$ edges on the two interior segments. The $a$ edge of the middle tile must lie on one of the $c$ edges of the outer tiles. Hence, there is a $c/a$ edge emanating from the vertex. Since there are no relations $jc = ua + vb$, there is a link of $\Gamma_c$ emanating from $B$. Since $\alpha$ is an odd multiple of $\beta$, Lemma~\ref{lemma:Gamma2} is applicable; then there must be an incoming link at $B$, which is impossible, as links of $\Gamma_c$ cannot terminate on the boundary of $ABC$. Hence Case~1 is impossible. Case~2, $m = 4$.
Then $2\alpha = 4 \beta$, $\alpha = \pi/3$ and $\beta = \pi/6$, which is ruled out by hypothesis (and has to be, because of Fig.~\ref{figure:54}). Note that Lemma~\ref{lemma:Gamma2} does not apply, since $2\alpha$ is an even multiple of $\beta$. That completes the proof of the lemma. \begin{lemma} \label{lemma:isosceleshelper3} Suppose isosceles triangle $ABC$ with base angles $\beta$ at $A$ and $C$ is $N$-tiled by a tile with angles $(\alpha,\beta,\gamma)$ and sides $(a,b,c)$. Suppose $\beta \neq \frac \pi 6$ and $\sqrt{N/2}$ is irrational. Then (i) $a$ and $b$ are rational multiples of $\lambda = \sqrt{N/2}$, and (ii) $AC$ is composed only of $c$ edges, and there are no $c$ edges on $AB$ or $BC$. \end{lemma} \noindent{\em Proof}. Without loss of generality, we may assume $c=1$. Since by hypothesis, $\sqrt{N/2}$ is irrational and $\beta \neq \frac \pi 6$, we can apply Lemma~\ref{lemma:isosceleshelper2} to conclude that $a$ and $b$ belong to ${\mathbb Q}(\lambda)$. I say that $$\lambda = X = \vert AB \vert.$$ Twice the area of $ABC$ is equal on the one hand to $Nab$, and on the other to $$X^2 \sin 2\alpha = 2 X^2 \sin \alpha \cos \alpha = 2X^2 ab.$$ Hence $N = 2X^2$. But $N = 2\lambda^2$ by definition of $\lambda$. Hence $X = \lambda$, as claimed. Let $a = x\lambda + y$ and $b= z\lambda + w$, where $x,y,z,w$ are rational. Then for some nonnegative integers $p,q,r$, $$ \lambda = pa + qb + r = (px + qz)\lambda + (py + qw + r) .$$ Since $\lambda$ is irrational, \begin{eqnarray} py + qw + r &=& 0. \label{eq:8067} \end{eqnarray} We also have $$ 1 = a^2 + b^2 = (x\lambda + y)^2 + (z\lambda + w)^2 = (x^2 + z^2) \frac N 2 + (y^2 + w^2) + 2(xy + zw)\lambda.$$ Since $\lambda$ is irrational we have \begin{eqnarray} xy + zw &=& 0. \label{eq:8073} \end{eqnarray} Case 1: $xy \neq 0$. Then $xy$ and $zw$ have different signs. I say that $x > 0$ and $z > 0$. We will prove this by cases, according to the sign of $xy$. First suppose $xy > 0$. Then $x$ and $y$ are of the same sign. Since $a = \lambda x + y > 0$, it follows that $x,y > 0$. Since the tile at $B$ has either its $b$ or $c$ edge on $AB$, not both $q$ and $r$ are zero; hence (\ref{eq:8067}) implies that $qw \le 0$; hence $w \le 0$. Since $zw < 0$, we have $z > 0$ and $w < 0$. Thus the claim $x > 0$ and $z > 0$ holds if $xy >0$. Now suppose $xy < 0$. Then $zw > 0$, and we find $z > 0$ and $w > 0$, since $b = z\lambda + w > 0$. Since not both $q$ and $r$ are zero, and $w > 0$, we have $qw + r > 0$. Then (\ref{eq:8067}) implies $py < 0$; hence $y < 0$. Since $xy < 0$, we conclude $x > 0$, establishing the claim $x > 0$ and $z > 0$ also in case $xy < 0$. Thus we have proved that $xy \neq 0$ implies $x > 0$ and $z > 0$. I say there is no relation $jc = ua + vb$ (that is, $j = ua + vb$, since $c = 1$) with non-negative integers $j,u,v$ and $j > 0$. Suppose such a relation exists; then \begin{eqnarray*} j &=& ua + vb \\ &=& u(x\lambda + y) + v(z \lambda + w) \\ &=& (ux + vz) \lambda + (uy + vw) \end{eqnarray*} Since $\lambda$ is irrational, we have $ux + vz = 0$. Since $x$ and $z$ are positive and $u,v \ge 0$, this implies $u = v = 0$, which is a contradiction, since $j > 0$. Then by Corollary~\ref{lemma:allc}, $AC$ is composed of all $c$ edges, and there are no $c$ edges on $AB$. Since the length of $AC$ is $2a \lambda$ and $AC$ is composed of some number $s$ of $c$ edges (each of length 1), we conclude that $a = s/(2\lambda) = (s/N) \lambda$, since $2\lambda^2 = N$. Hence $a$ is a rational multiple of $\lambda$. Since $AB$ has no $c$ edges, $r = 0$ and $X = \lambda = pa + qb$.
Hence $b =( \lambda - pa)/q $ is also a rational multiple of $\lambda$. That completes the proof in Case~1. Case 2: $xy = 0$. Then also $zw = 0$. I say this case implies $y = w = 0$. We argue by cases on whether $x = 0$ or not. If $x = 0$, then $a = y$ is rational and $y > 0$. If $z = 0$, then $b = w > 0$, contradicting (\ref{eq:8067}). If $w = 0$, then $b = z\lambda$, \begin{eqnarray*} 2 a \lambda \ = \ sa + tb + u &=& tz\lambda + (sy + u) \\ sy + u &=& 0 \\ s &=& u \ = \ 0 \end{eqnarray*} and $AC$ is composed only of $b$ edges, which is impossible, since the tile at $A$ cannot have its $b$ edge on $AC$. On the other hand, if $x \neq 0$ then since $xy = 0$ we have $y = 0$. Then $qw + r = 0$ by (\ref{eq:8067}). If $z = 0$, then $b = w > 0$ and therefore $q=r=0$, contradiction. The only remaining possibility is $y=w=0$, as claimed. Then $a = x\lambda$ and $b = z\lambda$. Then $X = \lambda = pa + qb + r = (x+z)\lambda + r$, so $r = 0$ and \begin{eqnarray*} Y &=& 2a\lambda \\ &=& 2x\lambda^2 \\ &=& xN \\ &=& sa + tb + u \\ &=& (sx + tz)\lambda + u, \end{eqnarray*} which implies $s=t=0$. That completes the proof in Case~2, and also the proof of the lemma. \begin{theorem} \label{theorem:rationalisosceles} Suppose $ABC$ is isosceles with base angles $\beta$, or $ABC$ is equilateral, and $ABC$ is $N$-tiled by a triangle $T$ similar to half of $ABC$. If $\alpha$ is a rational multiple of $\pi$, then either (i) $N$ is even and $N/2$ is a square, or (ii) $N$ is a square and $\beta = \pi/4$ and $ABC$ has base angles $\pi/4$, or (iii) $N/2$ is three times a square and $\beta = \pi/6$ and $\alpha = 2 \beta = \pi/3$. \end{theorem} \noindent{\em Remark}. One possible tiling under case (iii) of the theorem is illustrated in Fig.~\ref{figure:54}. $N$ can be odd in case (ii), since half the triangle $ABC$ is similar to $ABC$, so quadratic tilings are allowed. \noindent{\em Proof}. We begin by remarking that $N/2$ is a rational square if and only if it is an integer square, since if it is a rational square then $2N$ is an integer that is a rational square, hence it is an integer square. Then it is the square of an even integer $2m$, so $N/2 = m^2$. Hence if $\sqrt{N/2}$ is rational, then $N$ is even and $N/2$ is a square. Then condition (i) holds. Therefore we may assume that $\sqrt{N/2}$ is irrational. We now divide into cases according as $\beta = \frac \pi 6$ or not. First assume $\beta \neq \frac \pi 6$. Then by Lemma \ref{lemma:isosceleshelper2}, $a = \sin \alpha$ and $b = \cos \alpha$ belong to ${\mathbb Q}(\sqrt{N/2})$. By hypothesis, $\alpha$ is a rational multiple of $\pi$. These two facts make Lemma \ref{lemma:euler} applicable, so we can drastically limit the possible values of $\alpha$. Namely, by Lemma \ref{lemma:euler}, $\alpha$ and $\beta$ are odd multiples of $2\pi/n$, where $n$ is one of $6,8,12$; that is, they are odd multiples of $\frac \pi 3$, $\frac \pi 4$, or $\frac \pi 6$. Since they are both less than $\frac \pi 2$, $\alpha$ and $\beta$ must be exactly $\frac \pi 3$, $\frac \pi 4$, or $\frac \pi 6$. Those are the values of $\alpha$ and $\beta$ allowed in the statement of the theorem. We arrived at that conclusion under the assumption that $\beta \neq \frac \pi 6$; but if $\beta = \frac \pi 6$, then $\alpha = \frac \pi 3$, so the conclusion holds in that case as well.
That is, we have proved outright that $\alpha$ and $\beta$ are equal to $\frac \pi 3$, $\frac \pi 4$, or $\frac \pi 6$. It remains to show that $N$ has one of the stated values. Let $X = pa + rb + q$ be the length of $AB$. Then the area equation is $$ Nab = X^2 \sin 2 \alpha = 2X^2 a b$$ so $N = 2X^2$. Case~1. $\alpha = \beta = \frac \pi 4$. Then with $c=1$ we have $a = b = 1/\sqrt 2$, so the area of each tile is $\frac 1 4$. By Lemma~\ref{lemma:isosceleshelper3}, $q=0$, so $X = pa + rb$. Therefore \begin{eqnarray*} X &=& (p+r) (1/\sqrt 2)\\ X^2 &=& (p+r)^2 / 2 \\ N &=& 2X^2 \mbox{\qquad as shown above} \\ N &=& (p+r)^2 \mbox{\qquad by the previous two lines} \end{eqnarray*} Hence $N$ is the square of an integer. Since $\beta = \frac \pi 4$, conclusion (ii) of the theorem holds. That completes Case~1. Case~2, $\alpha = \frac \pi 6$. (This is the case when $ABC$ is equilateral.) Then $a =\sin \alpha = 1/2$ and $b =\cos \alpha = \sqrt{3}/2$; hence $X = \vert AB \vert = pa + qb + r$ belongs to ${\mathbb Q}(\sqrt 3)$. Then $\sqrt{N/2}$ has the form $u + v \sqrt 3$ with $u$ and $v$ rational. Squaring both sides we have $N/2 = u^2 + 3v^2 + 2uv \sqrt 3$. Hence $uv = 0$. Hence either $u=0$ or $v=0$. In case $u=0$ then $N/2$ is three times a rational square (which is possible, for example see Fig.~\ref{figure:54}). Then let $v = s/t$ with $s$ and $t$ relatively prime integers, so $N/2 = 3(s/t)^2$. Then $6N = (6s/t)^2$ so $Nt^2 = 6s^2$. Since $s$ and $t$ are relatively prime, $6$ divides $N$. Hence $N/6 = (s/t)^2 = v^2$ is an integer that is a rational square; hence $N/6$ is an integer square. Hence $N/2$ is three times a square, so conclusion (iii) holds. In case $v=0$ then $N/2 = u^2$ is a rational square, so it is an integer square by the first paragraph of this proof. Hence conclusion (i) holds. Case~3, $\alpha = \frac \pi 3$. Then $\beta = \frac \pi 6$. Then $a =\sin \alpha = \sqrt{3}/2$ and $b =\cos \alpha = 1/2$; hence $X = \vert AB \vert = pa + qb + r$ belongs to ${\mathbb Q}(\sqrt 3)$. Then the proof is completed verbatim as in Case~2. That completes the proof of the theorem. \begin{lemma}\label{lemma:baseangles} Suppose isosceles (and not equilateral) triangle $ABC$ is $N$-tiled by a tile with angles $(\alpha,\beta,\frac \pi 2)$. Then the base angles of $ABC$ are equal to $\alpha$ or to $\beta$. \end{lemma} \noindent{\em Remark}. The lemma fails for equilateral $ABC$. \noindent{\em Proof}. This is an immediate consequence of Laczkovich's work: the first and second lines of Table~\ref{table:isosceles} are the only ones allowing a right-angled tile, and the first line can apply to an isosceles $ABC$ only if $ABC$ and the tile are both right isosceles triangles. \begin{theorem} \label{theorem:isosceles2} Suppose isosceles triangle $ABC$ is $N$-tiled by a tile with angles $(\alpha,\beta,\frac \pi 2)$. Then (i) $N$ is a square and $\alpha = \beta = \frac \pi 4$, or (ii) $N$ is twice a square (possible for any such $N$ with any right-angled tile), or (iii) $N = 6k^2$ and $\beta = \pi/6$ or $\alpha = \pi/6$ (example with $N=54$ in Fig.~\ref{figure:54}), or (iv) $N$ is an even sum of squares (so $N/2$ is also a sum of squares). (Possible for any such $N$ with a suitable tile by a double biquadratic tiling as in Fig.~\ref{figure:severalisosceles}).
\end{theorem} \noindent{\em Proof}. By Lemma~\ref{lemma:baseangles}, the base angles of $ABC$ are equal to $\alpha$ or to $\beta$. Since the conclusion of the theorem is insensitive to which angle is labeled $\alpha$, we may assume the base angles are $\beta$. By Theorem~\ref{theorem:rationalisosceles}, the conclusion is correct when $\alpha$ is a rational multiple of $\pi$; indeed in that case either (i), (ii), or (iii) holds. Therefore we may assume that $\alpha$ is not a rational multiple of $\pi$. If $N/2$ is a rational square, then $2N$ is a rational square, and an integer, hence an integer square. Hence $N$ is even, and $N/2$ is an integer. Since $N/2$ is a rational square and an integer, it is also an integer square, so $N$ is twice a square, and case (ii) of the theorem holds. Therefore we may assume that $$\lambda = \sqrt{N/2} \mbox{ is irrational}.$$ Let $X$ be the length of $AB$. I say that \begin{eqnarray*} X = \lambda \end{eqnarray*} Twice the area of $ABC$ is $$X^2 \sin 2\alpha = 2X^2 \cos \alpha \sin \alpha = 2X^2 ab.$$ It is also $Nab$, since there are $N$ tiles each of area $ab/2$. Therefore $N = 2X^2$. But $N = 2\lambda^2$ by definition of $\lambda$, and both $X$ and $\lambda$ are positive. Therefore $X = \lambda$, as claimed. Let $(a,b,c)$ be the sides of the tile; we may choose the scale so that $c=1$. Since $\alpha$ is not a rational multiple of $\pi$, neither is $\beta = \frac \pi 2 - \alpha$; in particular $\beta \neq \frac \pi 6$. Since $\lambda$ is irrational, Lemma~\ref{lemma:isosceleshelper3} is applicable. Therefore, side $AC$ is composed only of $c$ edges. Let $u$ be the number of those edges. Let $M$ be the midpoint of $AC$ (which may or may not be a vertex of the tiling). Then triangle $ABM$ has angle $\alpha$ at $B$ and a right angle at $M$. The length of $AM$ is $u/2$, and the length of $AB$ is $\lambda$. Therefore \begin{eqnarray*} \tan \alpha = \frac u {2 \lambda} \end{eqnarray*} Since $\lambda$ is irrational, $\tan \alpha$ is irrational. It follows that there does not exist any linear relation $pa = qb$ with integers $p$ and $q$, for if there were, then $\tan \alpha = b/a = p/q$ would be rational. It follows that there are no relations of the form $ja = pb + qc$, $jb = pa + qc$, or $jc = pa + qb$ with $j \neq 0$. From this it follows that every internal segment in the tiling has equal numbers of $a$ edges on both sides, equal numbers of $b$ edges on both sides, and equal numbers of $c$ edges on both sides. I say that $N$ is even. For proof by contradiction, assume $N$ is odd. Now the number of $a$ edges in the interior is even, and the number of $b$ edges in the interior is even, and there are no $a$ or $b$ edges on $AC$. Hence the number of $a$ edges on $AB$ and $BC$ together is odd, and the number of $b$ edges on $AB$ and $BC$ together is odd. Suppose $AB = pa + qb$ and $BC = ra + sb$. Then $p \neq r$ and $q \neq s$, since $p+r$ is odd and $q+s$ is odd. We may suppose $p \ge r$ by relabeling $A$ and $C$ if necessary. Then \begin{eqnarray*} \vert AB \vert &=&\vert BC \vert \\ (pa+qb)&=&(ra+sb) \\ (p-r)a &=& (s-q) b \end{eqnarray*} with $p-r$ a positive integer, and hence $s-q$ also a positive integer. Since we showed above that no such relations between $a$ and $b$ exist, we have reached a contradiction. Hence $N$ is even, as claimed. Lemma~\ref{lemma:isosceleshelper3} also tells us that $a$ and $b$ are rational multiples of $\lambda$. Let $x$ and $z$ be rational numbers such that $a = x\lambda$ and $b = z\lambda$.
Then the equation $1 = a^2 + b^2$ becomes \begin{eqnarray*} 1 &=& (x^2 + z^2)\lambda^2 \\ &=& (x^2 + z^2) N/2 \end{eqnarray*} Multiplying by $2N$ we have \begin{eqnarray} 2N &=& (xN)^2 + (zN)^2 \label{eq:8232} \end{eqnarray} Thus $2N$ is a sum of two rational squares. Then by Lemma~\ref{lemma:rationalsquares}, $2N$ is a sum of two integer squares. Then by Lemma~\ref{lemma:doublesquares}, $N$ is also a sum of two squares. That completes the proof of the theorem. \begin{corollary} \label{lemma:notprime} Suppose isosceles triangle $ABC$ is $N$-tiled by a right triangle. Then $N$ is not a prime congruent to 3 mod 4, nor is it twice such a prime, except for $N=6$. \end{corollary} \noindent{\em Proof}. The corollary follows from Theorem~\ref{theorem:isosceles2} by Lemma~\ref{lemma:sumsofsquares}. Theorem~\ref{theorem:isosceles2} gives necessary and sufficient conditions on $N$ for the existence of an $N$-tiling of {\em some} isosceles $ABC$ by a right-angled tile, if $N$ is even. It remains to specify exactly {\em which} isosceles $ABC$ can be $N$-tiled, when $N$ is given. The following theorem spells it out. \begin{theorem} \label{theorem:whichABC} Given a positive integer $N > 1$, the possible isosceles triangles $ABC$ that can be $N$-tiled by a right-angled tile are as follows. Here the sides of the tile are $(a,b,c)$ and the angles are $(\alpha,\beta,\frac \pi 2)$. (i) if $N/2$ is a square, any isosceles triangle can be $N$-tiled (by a double quadratic tiling). (ii) if $N/2$ is a sum of two squares, then isosceles triangle $ABC$ with base angles $\beta$ can be $N$-tiled with tile $(\alpha,\beta,\frac \pi 2)$ if and only if \begin{eqnarray*} \tan \beta = r/p \mbox{\qquad where $N/2 = r^2 + p^2$}. \end{eqnarray*} (iii) if $N$ is a square, the right isosceles $ABC$ can be $N$-tiled by a quadratic tiling. (iv) if $N$ is six times a square, then the isosceles triangle with base angles $\frac \pi 6$ can be $N$-tiled by the tile with $\alpha = \frac \pi 3$ and $\beta = \frac \pi 6$. (v) If none of the above apply, then no isosceles triangle can be $N$-tiled by any tile. \end{theorem} \noindent{\em Remark}. Since $N/2$ may sometimes be expressible in more than one way as a sum of two squares, there can sometimes be more than one possible $ABC$ and tile for a given $N$, but only finitely many. Moreover, if $N$ is both a square and a sum of squares, there are more possibilities, as in Fig.~\ref{figure:severalisosceles}. It will be very difficult to provide a full characterization of all $N$-tilings. \noindent{\em Proof}. Ad (i). Just divide $ABC$ by its altitude $BD$ and tile each half with a quadratic tiling. Ad (ii). If $ABC$ is $N$-tiled, and $N/2$ is not a square, then by Theorem~\ref{theorem:isosceles2}, the tile has the form mentioned. In case $N/2 = p^2 + r^2$, there is a tiling made by combining biquadratic tilings of the two halves of $ABC$. Ad (iii). A quadratic tiling divides any triangle into $n^2$ triangles similar to itself; applied to the right isosceles triangle, it gives an $N$-tiling by a right-angled tile whenever $N = n^2$. Ad (iv). By Theorem~\ref{theorem:isosceles2}, if $N = 6k^2$ then a tiling with $\beta = \frac \pi 6$ is possible; see Fig.~\ref{figure:54}. It remains to show that no other tile is possible. Let $N=6k^2$. Then $N$ is not a square, and $N$ is not twice a square. Since $N$ contains an odd power of 3, $N$ is not a sum of two squares, and the same is true of $N/2$. Hence no other case in the theorem can apply, and $\beta = \frac \pi 6$ is the only possibility. Ad (v). By Theorem~\ref{theorem:isosceles2}, these cases are exhaustive. That completes the proof of the theorem.
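To illustrate Theorem~\ref{theorem:whichABC}, consider $N = 50$. Then $N/2 = 25 = 5^2 = 3^2 + 4^2$, so case (i) applies: every isosceles triangle can be $50$-tiled by a double quadratic tiling. Case (ii) applies as well: the isosceles triangle whose base angles $\beta$ satisfy $\tan \beta = 3/4$ (or $4/3$, depending on which square is labeled $r$) can be $50$-tiled by the corresponding right-angled tile. Cases (iii) and (iv) do not apply, since $50$ is neither a square nor six times a square. Compare Fig.~\ref{figure:severalisosceles2}, whose caption records that $50$ is both twice a square and twice a sum of two squares.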
\section{Possible values of $N$ in tilings with commensurable angles} We wish to add a third column to Laczkovich's Table~\ref{table:isosceles}, giving the possible forms of $N$ if there is an $N$-tiling of $ABC$ by the tile in that row. For example, when $ABC$ is similar to the tile, then $N$ must be a square, so we put $n^2$ in the third column. While we are at it, we add a fourth column with a citation to the result, and delete the rows corresponding to the tilings of the equilateral triangle that we have proved impossible. The revised and extended table is Table~\ref{table:laczkovich-extended}. All the entries in this table except the last one give necessary and sufficient conditions on $N$ for the tilings to exist. The last one gives necessary conditions for certain tilings that probably do not actually exist, but since $ABC$ is equilateral, this question is out of scope for this paper. \begin{table}[ht] \caption{$N$-tilings by tiles with commensurable angles, with form of $N$} \label{table:laczkovich-extended} \begin{center} \setlength{\extrarowheight}{.5em} \begin{tabular}{rrrr} $ABC$ & the tile & form of $N$ & citation \\ \hline $(\beta,\beta,2 \alpha)$ & similar to $ABC$ & $n^2$ & \cite{snover1991} \\ $(\beta, \beta, 2\alpha)$ & $(\alpha,\beta,\frac {\mathfrak P}i 2)$& $e^2 + f^2$ & \cite{snover1991} \\ $(\frac {\mathfrak P}i 6, \frac {\mathfrak P}i 3, \frac {\mathfrak P}i 2)$ & similar to $ABC$ & $3n^2$ & \cite{snover1991} \\ $(\beta,\beta,2\alpha)$ & $(\alpha, \beta, \frac {\mathfrak P}i 2)$ & $2n^2$ & Theorem~\ref{theorem:rationalisosceles}\\ {\large $(\frac {\mathfrak P}i 6, \frac {\mathfrak P}i 6, \frac {2 {\mathfrak P}i} 3)$} &{\large $(\frac {\mathfrak P}i 6,\frac {\mathfrak P}i 3, \frac {\mathfrak P}i 2)$} & $6n^2$ & Theorem~\ref{theorem:rationalisosceles}\\ equilateral & {\large $(\frac {\mathfrak P}i 6, \frac {\mathfrak P}i 3, \frac {\mathfrak P}i 2)$} & $6n^2$ & Theorem~\ref{theorem:rationalisosceles} \\ equilateral & {\large $(\frac {\mathfrak P}i 6, \frac {\mathfrak P}i 6, \frac {2{\mathfrak P}i} 3)$ } & $3n^2$ & Theorem~\ref{theorem:rationalisosceles} \end{tabular} \end{center} \end{table} \begin{theorem} \label{theorem:laczkovich-extended} Suppose $(\alpha,\beta,\gamma)$ are all rational multiples of $2{\mathfrak P}i$, and triangle $ABC$ is $N$-tiled by a tile with angles $(\alpha,\beta,\gamma)$. Then $ABC$, $(\alpha,\beta,\gamma)$, and $N$ correspond to one of the lines in Table~\ref{table:laczkovich-extended}. \end{theorem} \noindent{\em Proof}. As discussed above, Laczkovich characterized the pairs of tiled triangle and tile, as given in Table~\ref{table:isosceles}. \footnote{\, Again, we remind readers who may check with \cite{laczkovich1995} that there are three entries in Laczkovich's Theorem~5.1 that are shown in the subsequent Theorem~5.3 not to apply to tilings by congruent triangles, so they do not appear in our tables. } It remains to characterize the possible $N$ for each line. In several cases lines in Table~\ref{table:isosceles} split into two or more lines in Table~\ref{table:laczkovich-extended}, which supplies the required possible forms of values of $N$. That table lists in its last column citations to the literature or theorems in this paper for each line. That completes the proof. 
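As an illustration of how to read Table~\ref{table:laczkovich-extended}, the row in which $ABC$ is the $(\frac \pi 6, \frac \pi 6, \frac {2\pi} 3)$ triangle and the tile is $(\frac \pi 6, \frac \pi 3, \frac \pi 2)$ says that such a tiling exists exactly when $N = 6n^2$ for some integer $n$; the smallest admissible values are thus $N = 6$, $24$, and $54$, and the $54$-tiling of Fig.~\ref{figure:54} realizes the case $n = 3$.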
\section{Tilings with $\gamma = 2\alpha$ and $(a,b,c)$ commensurable} In this section and the next, we take up the row of Laczkovich's second table in which $ABC$ is isosceles with base angles $\alpha$ and is tiled by a tile with $\gamma = 2\alpha$, and $\alpha$ is not a rational multiple of ${\mathfrak P}i$. The condition $\gamma = 2\alpha$ can also be written as $3\alpha + \beta = {\mathfrak P}i$. Unlike the similar-looking condition $3\alpha + 2\beta = {\mathfrak P}i$, this condition does not imply $\gamma > {\mathfrak P}i/2$. The vertex angle of $ABC$ is then ${\mathfrak P}i-2\alpha = \alpha +\beta$. As will be shown below, the tile $(4,5,6)$ meets this condition. The numbers $(a,b,c)$ are called {\em commensurable} if their ratios are rational. In that case we say the ``tile is rational.'' If the edges of a triangle are commensurable, then the triangle is similar to one with integer edges. The remarkable fact is that, if $ABC$ is isosceles with base angles $\alpha$ and vertex angle $\alpha+\beta$, then it can be $N$-tiled for some $N$ by a tile with angles $(\alpha,\beta,\gamma)$ if and only if the edges of the tile are commensurable. This fact is really two different theorems: \begin{itemize} \item If the tile is rational, there is an $N$-tiling for some $N$, and \item If the tile is not rational, there is no $N$-tiling for any $N$. \end{itemize}. The first statement is due to Laczkovich \cite{laczkovich1995}. We will explain his proof in this section. The second statement is proved in the next section. Laczkovich \cite{laczkovich1995} proves that an isosceles triangle with angles as described, can be dissected into triangles {\em similar} to the tile, plus one parallelogram; then using the commensurability condition, these triangles and the parallelogram can all be tiled with the same size of tile. The only problem is that the tile will have to be {\em really tiny}--so small that $N$ will have to be several millions. We illustrate this with the tile $(4,5,6)$. The idea of Laczkovich's construction (Fig.~3 in \cite{laczkovich1995}) is shown in Fig.~\ref{figure:bigisosceles}. \begin{figure} \caption{Laczkovich's dissection of isosceles $ABC$ into triangles similar to $(4,5,6)$ and a parallelogram} \label{figure:bigisosceles} \end{figure} Laczcovich's idea is to quadratically tile each triangle, and then tile the parallelogram. As Laczcovich pointed out, the commensurability of the tile edges mean that with a small enough tile, this will succeed. We illustrate the idea in Fig.~\ref{figure:bigisosceles2}. \begin{figure} \caption{We look for a tiling starting like this} \label{figure:bigisosceles2} \end{figure} But observe that the tiles shown in that figure will not work, because with that size of tile, we cannot continue the tiling into the next (blue) triangle, as if the tile is $(4,5,6)$, the boundary between light green and blue is five 6-edges, total 30, which cannot be made of 4-edges, as it would have to be to tile the blue triangle. Clearly we should have chosen a smaller tile, for example half that size. But with a tile half that size, we run into similar trouble at the next boundary. To choose the tile correctly, we introduce a variable for each triangle to count the number of tiles on each side of that triangle. Then there is a linear equation at each boundary. If we assume that the parallelogram will be tiled by $n$ tiles on its diagonal side and $m$ on its horizontal side, then these variables will satisfy the following equations. 
The equations show that everything is determined once the number of tiles on each side of the red triangle is chosen. \footnote{For readers without colors: red is the triangle to the right of the parallelogram, and the other colors are in counterclockwise order from red.} \begin{eqnarray*} &red &= \ 1 \\ &orange &= \ 4 \ red/6 \\ &yellow &= \ 5\ orange/4 \\ &blue &= \ 6 \ yellow/4 \\ &green &= \ 5\ blue/4 \\ &lightblue &= \ 6\ blue/4 \\ &pink &= \ 5\ lightblue/4 \\ &m &=\ (5\ pink - 4\ orange)/6 \\ &n &= \ (5\ red)/5 \\ &total &= \ red\,^2 + orange^2 + yellow^2+blue^2 \\ && \quad +\, green^2 + lightblue^2 + pink^2 + 2mn \\ \end{eqnarray*} Solving these equations, starting with $red = 1$, we get these answers (in the order of variables listed above): $$ 1 \quad \frac 2 3\quad \frac 5 6 \quad \frac 5 4 \quad \frac {25} {16} \quad \frac {15} 8 \quad \frac {75} {32} \quad \frac {869} {576} \quad 1. $$ This reveals that Fig.~\ref{figure:bigisosceles2} is misleading, in that the parallelogram of Fig.~\ref{figure:bigisosceles} is not accurately tiled--the tiled parallelogram is off by less than a pixel. To get it accurately tiled requires a {\em much} smaller tile. To clear those denominators we have to start with $ red = 192$ instead of $ red = 1$. Then we get $$ 192 \quad 128 \quad 160 \quad 240 \quad 300 \quad 360 \quad 450 \quad 869 \quad 192 .$$ So the number of tiles required (namely the sum of the squares of those numbers) is more than half a million: 669780. It is not possible to print such a large tiling (unless one could use the side of a large building), and we do not know a smaller one. To get 869 tiles along the yellow parallelogram, if the parallelogram were 3 cm long, we would need 29 tiles per millimeter. These are the smallest rational tiles with $\gamma = 2 \alpha$, and using Laczkovich's method, they still yield rather large values of $N$. Possibly Laczkovich's method is not the only way to produce such a tiling; although at present it is the only known method, next month some genius might produce a much smaller one, for all we know. \section{No tilings with $\gamma = 2\alpha$ and $(a,b,c)$ incommensurable} In this section we rule out tilings of an isosceles triangle with base angles $\alpha$ and vertex angle $\alpha+\beta$, in case the tile edges are incommensurable. To state the point another way, if there is a tiling of such an isosceles $ABC$, then the tile must be rational. \subsection{Stars and centers} Suppose isosceles triangle $ABC$ is tiled by a tile with angles $(\alpha,\beta,\gamma)$, not a right triangle, and $\alpha$ is not a rational multiple of $\pi$. We will consider and analyze the possible configurations formed by tiles at a vertex of the tiling. We begin by ruling out certain possibilities. \begin{lemma}\label{lemma:twobeta} Let isosceles $ABC$ with base angles $\alpha$ (at $A$ and $C$) be $N$-tiled by a tile with angles $(\alpha,\beta,2\alpha)$, and suppose that the tile is not a right triangle and $\alpha$ is not a rational multiple of $\pi$. Let $P$ be a vertex on the boundary of $ABC$. Then there are not two $\beta$ angles of tiles at $P$, and there are not two $\gamma$ angles of tiles at $P$. \end{lemma} \noindent{\em Proof}. Suppose, for proof by contradiction, that two tiles have their $\beta$ angles at the same boundary vertex.
Then for some nonnegative integers $u,v,w$ we have \begin{eqnarray*} \pi &=& u \alpha + (v+2) \beta + w \gamma \\ \pi &=& (u+2w)\alpha + (v+2) \beta \mbox{\qquad since $\gamma = 2\alpha$}\\ 0 &=& (u+2w-3) \alpha + (v+1)\beta \mbox{\qquad since $3\alpha + \beta = \pi$}\\ \beta &=& \left( \frac{3-u-2w}{v+1}\right) \alpha \end{eqnarray*} Putting that value for $\beta$ into $3\alpha+\beta = \pi$ we can solve for $\alpha/\pi$: \begin{eqnarray*} \alpha/\pi &=& \left( 3 + \frac{3-u-2w}{v+1}\right)^{-1} \end{eqnarray*} But that is rational, contradicting the hypothesis that $\alpha$ is not a rational multiple of $\pi$. That completes the proof that two $\beta$ angles do not occur at the same boundary vertex. The proof for two $\gamma$ angles is similar. First, if there are no $\beta$ angles, then \begin{eqnarray*} \pi &=& u \alpha + (w+2) \gamma \\ &=& (u+ 2w+4) \alpha \end{eqnarray*} contradiction, since $\alpha$ is not a rational multiple of $\pi$. And if there is one $\beta$ angle, then \begin{eqnarray*} \pi &=& u \alpha + (w+2) \gamma + \beta \\ 0 &=& (u-1) \alpha + (w+1) \gamma \\ &=& (u-1 + 2(w+1)) \alpha \\ &=& (u+2w+1) \alpha \end{eqnarray*} contradiction, since $u+2w+1 > 0$. Since we already proved there cannot be more than one $\beta$, that completes the proof of the lemma. We define the angle sum of a vertex to be the sum of the angles of the tiles sharing that vertex. Except for the vertices $A$, $B$, and $C$, that angle sum will always be either $\pi$ or $2\pi$. It will be $\pi$ if and only if the vertex lies on the interior of the boundary of a tile or of $ABC$. For short, we refer to a vertex with angle sum $\pi$ as a {\em boundary vertex}, though it need not be on the boundary of $ABC$. Consider a boundary vertex. A {\em normal boundary vertex} has three tiles, with angles $\alpha$, $\beta$, and $\gamma$. A {\em star} has three $\alpha$ angles and a $\beta$. These are the only possibilities for a boundary vertex, since $\alpha$ is not a rational multiple of $\pi$, as spelled out in Lemma~\ref{lemma:twobeta}. Next, consider an interior vertex. A {\em normal interior vertex} has two each of $(\alpha, \beta, \gamma)$ angles. A {\em center} has three $\gamma$ and two $\beta$ angles. (For example, there is a center in Fig.~\ref{figure:bigisosceles}, more or less in the center of the figure.) There may also be interior vertices other than centers that are not normal; these will have either $4\alpha + 2\beta + \gamma$ or $6\alpha + 2\beta$. These vertices we call {\em interior stars}. The case of angles $4\alpha +2 \beta + \gamma$ we call a {\em single interior star} and the other case is a {\em double interior star}. \begin{lemma} \label{lemma:starsandcenters} Suppose isosceles triangle $ABC$ is tiled by a tile with angles $(\alpha,\beta,\gamma)$ with $\alpha$ not a rational multiple of $\pi$. Let ${\mathcal C}$ be the number of centers and ${\mathcal S}$ the number of stars, counting double interior stars twice. Then ${\mathcal S} + 1 = {\mathcal C}$. In particular there is at least one center. \end{lemma} \noindent{\em Example}. When the tiling begun in Fig.~\ref{figure:bigisosceles} is completed, there will be one center and zero stars. All the vertices introduced by quadratic tilings will be normal vertices.
If we then combine four copies of this tiling to create a quadratic tiling of a triangle twice the size, there will be four centers, balanced by four interior starts in the midpoints of the sides, where the three of the four copies have common vertices. \noindent{\em Remark}. This lemma is about the only thing we can prove about the internal structure of tilings. We use it only for the existence of at least one center. \noindent{\em Proof}. Each tile has one each of $\alpha$, $\beta$, and $\gamma$ angles. At the vertices of $ABC$ we have three $\alpha$ angles and one $\beta$ angle (just as we have at a star). Counting the vertex angles we have equal numbers of $\alpha$, $\beta$, and $\gamma$ angles at each normal vertex. At each vertex we define the ``excess'' or ``deficit'' of each of $(\alpha,\beta,\gamma)$ to be the difference between the number of those angles at the vertex and the number at a normal vertex. At a star we have two excess $\alpha$ angles and a deficit of one $\gamma$ angles. At a single interior star the same applies; at a double interior star we have double that contribution. At a center we have an excess of one $\gamma$ and a deficit of two $\alpha$. At interior stars we have excesses of $\alpha$ and $\beta$ and deficits of $\gamma$. Adding up the excesses and deficits from the vertices of the tiling, including $A$, $B$, and $C$, we must get zero. The vertices of $ABC$ count the same as a star. One center will ``balance'' one star, in the sense that the deficits and excesses add to zero. (For example, in Fig.~\ref{figure:bigisosceles}, we have one center, and no stars; so the center balances the vertices of $ABC$, which count as a star.) If there are no interior stars, then the number of stars, plus one for $ABC$, will equal the number of centers. If, however, there are interior stars, those will require additional centers to balance them, one center for each single interior star and two for each double interior star. Since we defined ${\mathcal S}$ by this double-counting of double interior stars, we still have ${\mathcal C} = {\mathcal S}+1$. That completes the proof of the lemma. \subsection{The tile is rational} If $Q$ is a vertex of a tiling, and $QR$ is an internal segment of the tiling supporting a tile on one side with its $a$ edge on $QR$ and a vertex at $Q$, and supporting a tile on the other side with its $b$ edge on $QR$ and a vertex at $Q$, then we say $QR$ is an {\em $a/b$ edge}, or an {\em $a/b$ edge at $Q$}. Similarly for {\em $a/c$ edge} and {\em $b/c$ edge}. Note that an $a/b$ edge is also a $b/a$ edge. \begin{lemma} \label{lemma:centeroutgoing} Let the isosceles triangle $ABC$ with base angles $\alpha$ be $N$-tiled by a tile with angles $(\alpha,\beta, 2\alpha)$, with $\alpha$ not a rational multiple of ${\mathfrak P}i$. Let $Q$ be a center in the tiling. Then either there is an $a/b$ edge at $Q$ or there are both $a/c$ and $b/c$ edges at $Q$. \end{lemma} \noindent{\em Proof}. Assume there is no $a/b$ edge. We must prove there is an $a/c$ and a $b/c$ edge at $Q$. Each tile with a vertex at the center $Q$ has an $a$ edge ending at $Q$, since the angles at $Q$ are all $\beta$ or $\gamma$. At a center, five tiles meet, so that is a total of five $a$ edges. Since five is an odd number, these edges cannot all be paired with other $a$ edges. We have assumed there is no $a/b$ edge; hence there is an $a/c$ edge. Similarly, there are three $b$ edges ending at $Q$, belonging to the tiles with their $\gamma$ angles at $Q$. 
Since three is odd, these cannot each be paired with another $b$ edge. Since there are no $a/b$ edges, there is a $b/c$ edge. That completes the proof of the lemma. \begin{lemma} \label{lemma:essentialsegments} Let the isosceles triangle $ABC$ with base angles $\alpha$ be $N$-tiled by a tile with angles $(\alpha,\beta, 2\alpha)$ with $\alpha$ not a rational multiple of $\pi$. Then the tiling contains essential segments with associated relations $jb = ua + vc$ and $Ja = Ub + Vc$. \end{lemma} \noindent{\em Proof}. Suppose, for proof by contradiction, that there are no such essential segments. We consider the directed graph $\Gamma_b$ defined in Definition~\ref{definition:Gamma}. We wish to identify the terminal links in this graph. To that end we must consider the possible configurations that can arise when an internal segment $UQ$ of the tiling supports on the same side a series of (one or more) tiles with their $b$ edges, followed by a tile with an $a$ or $c$ edge. Let $PQV$ be the three successive vertices, with $PQ$ of length $b$ and $QV$ of length $a$ or $c$. We consider all possible configurations in which $Q$ is a vertex where three tiles on one side of a line $PQ$ each have a vertex at $Q$, contributing one angle each, so the angle sum at $Q$ is $\alpha +\beta + \gamma$ on one side of $PQ$. All these configurations are shown in Fig.~\ref{figure:GammaB}. The figure shows that in each of those cases, there is a unique outgoing segment $QT$ that is a $b/c$ edge or a $b/a$ edge. If this segment is extended far enough, we will come to the last $b$ edge on the side that has a $b$ edge at $Q$. Since there are no essential segments, that point cannot be a vertex of a tile on the other side of $QT$, so $QT$ is an outgoing link in $\Gamma_b$. \begin{figure} \caption{A link $PQ$ in $\Gamma_b$} \label{figure:GammaB} \end{figure} On the other hand, if $Q$ is a star, the possible configurations are more complicated, and some of them have zero outgoing $b/c$ or $b/a$ edges, while others have two. It turns out that we do not need to make use of that fact, so we do not give a diagram of these configurations. If $PQ$ is a link, then line $PQ$ extends past $Q$ as an interior segment of the tiling. Therefore no link terminates on the boundary, so certainly not at a boundary star. At an interior star $Q$, no interior segment of the tiling passes through $Q$, for if it did, two of the three $\gamma$ angles would lie on one side of it, leaving an angle $\beta-\alpha$ on that side, which cannot be filled by a tile. Therefore, no link can end at an interior star, since an interior star is not located on the interior of a tile edge, but the end of a link must be on the interior of a tile edge. Therefore the out-degree of any star is $\ge$ the in-degree, since the in-degree is zero. At a center, the out-degree is at least one, and the in-degree is zero. At a given vertex $Q$ (star or not) there can be at most one link ending at $Q$, since $Q$ lies on the interior of a tile boundary. At a normal vertex $Q$, if there is an incoming link $PQ$, there is an outgoing link $QR$, as shown above. So out-degree $\ge$ in-degree at a normal vertex. At a center, there are no incoming links, and there is at least one outgoing link, so out-degree $>$ in-degree there. By Lemma~\ref{lemma:starsandcenters}, there exists at least one center. Therefore the total out-degree minus in-degree is positive.
But since each link has one head and one tail, the total out-degree equals the total in-degree, contradiction. We have reached a contradiction from the assumption that there is no essential segment with associated relation of the form $jb = ua+vc$. Hence there is such a segment. Next we prove the existence of essential segments with relations of the form $Ja = Ub + Vc$. This is proved in the same way, using the graph $\Gamma_a$ instead of $\Gamma_b$. Again there is an outgoing link from each center, since by Lemma~\ref{lemma:centeroutgoing}, there is either an $a/b$ or an $a/c$ edge at $Q$, and that edge is part of an outgoing link since there are no essential segments. Again we have to prove that at a normal boundary vertex there is a unique outgoing link. See Fig.~\ref{figure:GammaA}. That completes the proof. \begin{figure} \caption{A link $PQ$ in $\Gamma_a$} \label{figure:GammaA} \end{figure} \begin{theorem} \label{theorem:rationaltile} Let the isosceles triangle $ABC$ with base angles $\alpha$ be $N$-tiled by a tile with angles $(\alpha,\beta, 2\alpha)$ with $\alpha$ not a rational multiple of $\pi$. Then the tile is rational; that is, the ratios of its sides are rational. \end{theorem} \noindent{\em Remark}. If the tile is rational, then after scaling we can assume its sides are integers with no common factor. \noindent{\em Proof}. By Lemma~\ref{lemma:essentialsegments}, there is an essential segment witnessing a relation $ja = ub + vc$, and another essential segment witnessing $Jb = Ua + Vc$. In matrix form we have $$ \matrixtwo j {-u} U {-J} \vectortwo a b = \vectortwo {vc} {-Vc}.$$ This equation can be solved for $(a,b)$ provided $j/u \neq U/J$. We have \begin{eqnarray*} ja &=& ub + vc \ \ge \ ub \\ j/u &\ge& b/a \mbox{\qquad with equality only when $v=0$}\\ Jb &=& Ua + Vc \ \ge \ Ua \\ Jb &\ge& Ua \mbox{\qquad with equality only when $V=0$}\\ b/a &\ge& U/J \\ j/u &\ge& U/J \mbox{\qquad with equality only when $v=V=0$} \end{eqnarray*} Thus: either $a/b$ is rational (when $v=0$ or $V=0$), or $a$ and $b$ are both rational multiples of $c$ (when the equation can be solved). In either case $a/b$ is rational. Similarly, $a/c$ is rational. That completes the proof of the theorem. \section{On the number of tiles required when $\gamma = 2\alpha$} We continue to consider tilings of isosceles $ABC$ with base angles $\alpha$ and vertex angle $\alpha + \beta$. In the previous two sections, we showed that the tile has to be rational, and that in that case, an $N$-tiling always exists; but to our dismay, the smallest known tiling has $N$ more than half a million! Since we failed to construct a smaller tiling, we now try to show that some values of $N$ are impossible. We have two theorems along that line: First, $N$ cannot be a prime number. Second, $N$ has to be ``at least so big'', i.e., we have a lower bound on $N$. \subsection{Characterization of the tile} \begin{lemma}\label{lemma:twoalphatiles} Suppose $(a,b,c)$ are the integer sides of a triangle with angles $(\alpha,\beta,2\alpha)$. Then $$ c^2 = a^2 + ab.$$ \end{lemma} \noindent{\em Remark}. Rational triangles with $\gamma = 2\alpha$ correspond to solutions of this equation with $c < a+b$ and $b < a+c$ and $a < b+c$. For example, $(4,5,6)$, and $(9,7,12)$. \noindent{\em Proof}.
By the law of cosines, \begin{eqnarray*} c^2 &=& a^2 + b^2 - 2ab\cos \gamma \\ &=& a^2 + b^2 - 2ab \cos 2\alpha \mbox{\qquad since $\gamma = 2\alpha$}\\ &=& a^2 + b^2 - 2ab (2\cos^2 \alpha-1)\\ &=& a^2 + b^2 + 2ab - 4ab\cos^2 \alpha \end{eqnarray*} By the law of sines, $\sin \alpha / a = \sin \gamma /c = \sin 2 \alpha /c = 2\sin \alpha \cos \alpha /c$, so $ \cos \alpha = c/(2a)$. Hence \begin{eqnarray} c^2 &=& a^2 + b^2 + 2ab - bc^2/a \\ &=& (a+b)^2 -bc^2/a \\ c^2(1+b/a) &=& (a+b)^2 \\ c^2 &=& a(a+b) \\ c^2 &=& a^2 + ab \end{eqnarray} That completes the proof of the lemma. The following lemma gives a more nuanced characterization of $(a,b,c)$. It was published in \cite{luthar}, but we give the short proof here. \footnote{\,I am indebted to Gerry Myerson for pointing out this representation of $(a,b,c)$ to me on MathOverflow.} \begin{lemma}\label{lemma:luthar} Let $(a,b,c)$ be integers with no common factor, and suppose the triangle with sides $(a,b,c)$ has angles $(\alpha, \beta, 2\alpha)$. Then $(a,b,c) =(k^2, m^2-k^2,mk)$ for some relatively prime integers $(k,m)$, with $2k > m > k$. Conversely, let $(a,b,c)$ be a triple of integers $(a,b,c) =(k^2, m^2-k^2,mk)$ with $2k > m > k$ and $m$ and $k$ relatively prime. Then $(a,b,c)$ form a triangle, and it has angles $(\alpha, \beta, 2\alpha)$. \end{lemma} \noindent{\em Examples}: $(4,5,6)$ satisfies this equation with $k = 2$ and $m=3$. Therefore it is an example of a tile satisfying $\gamma = 2\alpha$. Although $(1,3,2)$ satisfies this equation with $k = 1$ and $m=2$, it does not correspond to a triangle. \noindent{\em Remarks}. Thus $b$ and $c$ are relatively prime, but $a$ and $c$ have a common factor $k$ (if $k \neq 1$). Also, $c > a$ but not necessarily $c > b$, and $\gamma$ can be more or less than a right angle. \noindent{\em Proof}. By Lemma~\ref{lemma:twoalphatiles}, we have \begin{eqnarray*} c^2 &=& a^2 + ab \end{eqnarray*} Luthar observed that this can be written as \begin{eqnarray*} b^2 + (2c)^2 &=& (2a+b)^2 \\ \end{eqnarray*} as is apparent upon expanding the right side. But now we can apply Lemma~\ref{lemma:pythagoras}, according to which there are integers $(m,k)$ such that \begin{eqnarray*} b &=& m^2 - k^2 \\ 2c &=& 2mk \\ 2a+b &=& m^2 + k^2 \end{eqnarray*} These equations imply the equations to be proved. That completes the proof of the first claim of the lemma. Conversely, suppose $(a,b,c) =(k^2, m^2-k^2,mk)$, and $(a,b,c)$ form a triangle. Then one can check that \begin{eqnarray*} c^2 &=& a^2 + ab \\ &=& a (a+b) \end{eqnarray*} and we showed above that this equation characterizes $\gamma = 2\alpha$. Finally, if $(a,b,c) =(k^2, m^2-k^2,mk)$, then $b + c > a$ becomes $m^2-k^2 + mk > k^2$, or $m^2 + mk > 2k^2$, which follows from $m > k$. $a+c > b$ is $k^2 + mk > m^2-k^2$, or $k(k+m) > (m+k)(m-k)$, or $k > m-k$, which follows from $2k > m$. $a+b > c$ is $k^2 + (m^2-k^2) > mk$, which follows from $m > k$. That completes the proof of the lemma. \subsection{Possible shapes of the tile} In this section, we consider the possible shapes of a tile $(a,b,c)$ with $\gamma = 2\alpha$. We begin by observing that $a<c$ is the only obvious restriction on the ordering of the edges. As well as $a < b$, we can have $b < a$, as in $(9,7,12)$, or $a < c < b$, as in $(9,16,15)$. One way of describing the shape of a triangle is by the ratios of its sides. Here we give lower bounds on some of those ratios. Actually, we use only the bound on $c/b$, and that only once, but it does seem necessary. 
We present all the bounds anyway, as they improve the reader's mental picture of these (possible) tilings. Empirically, these bounds are tight, though we have not proved that they are best-possible. There are apparently no positive lower bounds on $b/a$ or $b/c$, although again we have not proved that. When $b/a$ is very small, an isosceles triangle $ABC$ with $\gamma = 2\alpha$ will be very close to equilateral, and by Laczkovich's method (see Fig.~\ref{figure:bigisosceles2}), it can be tiled with trillions of tiny needle-shaped tiles. For example, $(a,b,c) = (61504, 497, 61752)$ has $a/b > 123$, and we get $N = 7227976999088825426993$, more than a billion trillion, and the side and base of $ABC$ are $470043721566328$ and $471939059153289$. \begin{lemma}\label{lemma:shapes} Suppose $(a,b,c)$ are the integer sides of a triangle with angles $(\alpha,\beta,2\alpha)$. Then \begin{eqnarray*} a/c &>& 1/2 \\ a/b &>& 1/3 \\ c/b &>& 2/3 \end{eqnarray*} and if $b<a$ we also have \begin{eqnarray*} a/c &>& 1/\sqrt 2 \end{eqnarray*} \end{lemma} \noindent{\em Proof}. By Lemma~\ref{lemma:luthar}, there are relatively prime integers $k,m$ with $m/2 < k < m$, such that $a = k^2$, $b=m^2-k^2$, and $c = mk$. Then $$ a/c = k^2/mk = k/m > 1/2,$$ proving the first claim of the lemma. We have \begin{eqnarray*} a/b &=& \frac {k^2}{m^2-k^2} \\ &>& \frac {k^2}{4k^2-k^2} \mbox{\qquad since $4k^2 > m^2$}\\ &=& 1/3 \end{eqnarray*} proving the second claim. To prove the third claim, we consider the function $$ f(m,x) = \frac {mx}{m^2-x^2}.$$ Then $f$ is monotone increasing for $x < m$. Since $k > m/2$, and $c/b = f(m,k)$, a lower bound is $f(m,m/2)$, namely \begin{eqnarray*} c/b &>& \frac {m^2/2}{m^2-(m/2)^2} \\ &=& \frac{2m^2}{4m^2-m^2} \\ &=& 2/3 \end{eqnarray*} That is the third claim. Now suppose $b < a$. That is, $k^2 -m^2 < k^2$. Then $2k^2 < m^2$, so $a/c = k/m < 1/\sqrt 2$. That completes the proof of the lemma. \subsection{The area equation} \begin{lemma} [Area equation] \label{lemma:areaequation} Suppose isosceles triangle $ABC$, with base angles $\alpha$, is $N$-tiled by a tile with sides $(a,b,c)$ and angles $(\alpha,\beta,2\alpha)$. Let $X$ be the length of the equal sides $AB$ and $BC$. Then $X^2 = Nab$. \end{lemma} \noindent{\em Proof}. Let $\gamma = 2\alpha$. The base angles of $ABC$ are $\alpha$, so ${\mathfrak P}i = 2\alpha + \angle B = \gamma + \angle B$. But also ${\mathfrak P}i = \alpha + \beta + \gamma$, so $\angle B = \alpha + \beta$. Twice the area of $ABC$ is given by the magnitude of the cross product of $BA$ and $BC$, namely $X^2 \sin (\alpha + \beta)$. Twice the area of the tile is given by $ab \sin \gamma$. Since $\gamma = {\mathfrak P}i- (\alpha + \beta)$, twice the area of the tile is also $ab \sin(\alpha+\beta)$. But the area of $ABC$ is $N$ times the area of the tile. Hence \begin{eqnarray*} X^2 \sin(\alpha + \beta) &=& N ab \sin(\alpha + \beta) \end{eqnarray*} Dividing both sides by $\sin(\alpha+\beta)$, we have the area equation of the lemma. That completes the proof of the lemma. \begin{lemma} [Area equation, second form] \label{lemma:areaequation2} Suppose isosceles triangle $ABC$, with base angles $\alpha$, is $N$-tiled by a tile with sides $(a,b,c)$ and angles $(\alpha,\beta,2\alpha)$. Let $X$ be the length of the equal sides $AB$ and $BC$, and let $Y$ be the length of $AC$. Then $XY = Nbc$, and $Y = (c/a) X$. \end{lemma} \noindent{\em Proof}. Twice the area of $ABC$ is given by the magnitude of the cross product of $AB$ and $AC$, namely $XY \sin \alpha$. 
Twice the area of the tile is $bc \sin \alpha$. But the area of $ABC$ is $N$ times the area of the tile. Hence \begin{eqnarray*} XY \sin \alpha &=& N bc \sin\alpha \\ XY &=& Nbc \end{eqnarray*} which proves the first claim of the lemma. By Lemma~\ref{lemma:areaequation}, we have $X^2 = Nab$. Dividing $XY = Nbc$ by $X^2 = Nab$ we have $$ \frac Y X = \frac {N bc}{Nab} = \frac c a.$$ That completes the proof of the lemma. \subsection{The non-primality of $N$} In this section, we will show that $N$ cannot be a prime number. What is more, $N$ cannot even be squarefree. \begin{lemma} \label{lemma:notprimehelper} Let $ABC$ be isosceles with base angles $\alpha$, and $\alpha$ not a rational multiple of ${\mathfrak P}i$. Suppose $ABC$ is $N$-tiled by a tile with $\gamma = 2\alpha$ and integer sides $(a,b,c)$ with no common factor. Then the squarefree part of $N$ divides $b$ and the squarefree part of $b$ divides $N$. If $N$ is squarefree, then $N$ divides $b$, and $b/N$ is a square, i.e., $b = N\ell^2$ for some integer $\ell$. \end{lemma} \noindent{\em Example}. In Fig.~\ref{figure:bigisosceles2}, we indicated a tiling with tile $(4,5,6)$ and $N = 669780$. Factoring that number, we find $N = 2^2 \cdot 3^2 \cdot 5 \cdot 61^2$. So the squarefree part of $N$ is $b = 5$, in accordance with this lemma. This provides a check on the computation of the value of $N$, since it is not at all apparent from the construction of the tiling that $5$ has to be the squarefree part of $N$. \noindent{\em Proof}. By Lemma~\ref{lemma:luthar}, there exist relatively prime integers $m,k$ with $ 0 < m/2 < k < m$ such that $$a = k^2, b = m^2-k^2, c = km.$$ Let $X$ be the length of the equal sides $AB$ and $BC$, and $Y$ the length of $AC$. Then \begin{eqnarray} X^2 &=& Nab \label{eq:area1} \mbox{\qquad by Lemma~\ref{lemma:areaequation}}\\ XY &=& Nab \label{eq:area2} \mbox{\qquad by Lemma~\ref{lemma:areaequation2}} \end{eqnarray} Squaring both sides of (\ref{eq:area2}), we have \begin{eqnarray*} X^2Y^2 &=& N^2 b^2 c^2 \end{eqnarray*} Dividing by (\ref{eq:area1}), \begin{eqnarray*} Y^2 &=& \frac {X^2 Y^2}{X^2} \ = \ N b \frac {c^2} a \end{eqnarray*} We know $a$ divides $c^2$, since $c = km$ and $a = k^2$, so $c^2/a = m^2$. Then $$ Y^2 = Nbm^2.$$ Then $N$ divides $Y^2$. Then the squarefree part of $N$ divides $bm^2$. But I say that actually the squarefree part of $N$ divides $b$, not just $bm^2$. Let $p$ be a prime dividing $N$ to an odd power $p^{2k+1}$, and let $p^j$ be the highest power dividing $m$. Then $p^{2j+2k+1}$ divides $Y^2$, so $p^{j+k+1}$ divides $Y$, so $p^{2j+2k+2}$ divides $Y^2$, so $p^{2j+1}$ divides $bm^2$. If $p$ does not divide $b$, then $p^{j+1}$ divides $m$, contradiction. Therefore $p$ divides $b$. Since $p$ was any prime dividing $N$ to an odd power, it follows that the squarefree part of $N$ divides $b$, as claimed. Now let $p$ be a prime dividing $b$ to an odd power $p^{2j+1}$. Then $p^{2j+1}$ divides $Y^2$, so $p^{2j}$ divides $Y$, so $p$ divides $Nc^2/a$. But $b$ is relatively prime to $c^2/a = m^2$, so $p$ divides $N$. Therefore the squarefree part of $b$ divides $N$. If $N$ is squarefree, then $b/N$ is an integer. Since $Y^2 = Nbm^2$, we have $$ b/N = \left( \frac Y {Nm} \right)^2.$$ Therefore $b/N$ is a rational square. Since $b/N$ is an integer, it is also an integer square. That completes the proof of the lemma. \begin{theorem} \label{theorem:notsquarefree} Let $ABC$ be isosceles with base angles $\alpha$, and $\alpha$ not a rational multiple of ${\mathfrak P}i$. 
Suppose $ABC$ is $N$-tiled by a tile with angles $(\alpha, \beta,2\alpha)$ and sides $(a,b,c)$. Then $N$ is not squarefree. In particular, $N$ is not prime. \end{theorem} \noindent{\em Proof}. By Theorem~\ref{theorem:rationaltile}, the tile is rational, so we can assume without loss of generality that $(a,b,c)$ are integers with no common factor. By Lemma~\ref{lemma:luthar}, there exist relatively prime integers $m,k$ with $ 0 < m/2 < k < m$ such that $$a = k^2, b = m^2-k^2, c = km.$$ Let $X$ be the length of the equal sides $AB$ and $BC$, and $Y$ the length of $AC$. Then by Lemma~\ref{lemma:areaequation}, \begin{eqnarray*} X^2 &=& Nab \\ &=& N^2 a \mbox{\qquad \ since $b = N\ell^2$, by Lemma~\ref{lemma:notprimehelper}}\\ &=& N^2k^2\ell^2 \mbox{\qquad since $a=k^2$} \end{eqnarray*} Taking the square root of both sides, we have \begin{eqnarray} X &=& Nk\ell \label{eq:9356} \end{eqnarray} The tiling gives rise to a relation \begin{eqnarray} X &=& pa + qb + rc \label{eq:9360}\\ Nk\ell &=& pk^2 + q(m-k)(m+k) + rkm \mbox{\qquad since $X = Nk\ell$ by (\ref{eq:9356})} \label{eq:9361} \end{eqnarray} By (\ref{eq:9356}), $X \equiv 0 \mbox{\ mod\ } k$. Taking the last equation mod $k$ we find \begin{eqnarray*} 0 &\equiv& q m^2 \mbox{\ mod\ } k \end{eqnarray*} Since $k$ and $m$ are relatively prime, we can divide by $m^2$: \begin{eqnarray} q &\equiv& 0 \mbox{\ mod\ } k \label{eq:9370} \end{eqnarray} Putting $N\ell^2 = b = (m-k)(m+k)$ into (\ref{eq:9361}), we have \begin{eqnarray} k (m-k)(m+k)/\ell&=& pk^2 + q(m-k)(m+k) + rkm \nonumber \\ 0 &=& pk^2 + (q-k/\ell)(m-k)(m+k) + rkm \nonumber \end{eqnarray} Then $k/\ell$ is necessarily an integer. Since $m-k > 0$, we have $q-k/\ell \le 0$, and either $q-k/\ell < 0$ or $p=r=0$. We argue by cases: Case~1, $q-k/\ell < 0$. Then $q < k/\ell \le k$. Then by (\ref{eq:9370}), we have $q=0$. Therefore, no tile supported by $AB$ or $BC$ has its $b$ edge on $AB$ or $BC$, since a relation $X = pa + qb + rc$ would arise from each of $AB$ or $BC$ (although perhaps with different coefficients $(p,q,r)$). However, at $B$ there are two tiles, one with an $\alpha$ angle and one with a $\beta$ angle at $B$. Renaming $A$ and $C$ if necessary, we may assume the tile with $\alpha$ at $B$ is supported by $AB$. Since each tile supported by $AB$ has its $a$ or $c$ edge on $AB$, each of those tiles has a $\beta$ angle at one of its vertices on $AB$. But there are $\alpha$ angles at $A$ and at $B$. Then by the pigeonhole principle, one of the vertices on $AB$ is a vertex of two tiles with $\beta$ angles there. But that contradicts Lemma~\ref{lemma:twobeta}. That disposes of Case~1. Case~2, $p=r=0$. Then every tile supported by the side $Z$ = $AB$ or $BC$ that gave rise to (\ref{eq:9360}) has its $b$ edge on $Z$. Hence every tile supported by $Z$ has a $\gamma$ angle on $Z$. There are no $\gamma$ angles at $A$, $B$, or $C$, so by the pigeonhole principle, there must be a boundary vertex with two $\gamma$ angles. But there cannot be two $\gamma$ angles at the same boundary vertex, since the only integer relations between the angles are $\alpha + \beta + \gamma = {\mathfrak P}i$ and $3\alpha + \beta = {\mathfrak P}i$. This contradiction shows that Case~2 is impossible. That completes the proof of the theorem. 
\subsection{The number of tiles on a side of $ABC$} We wish to show that, given $N$, we can calculate a finite set of triangles and a finite set of possible tiles $(a,b,c)$, such that if there is an $N$-tiling of some isosceles $ABC$ with base angles $\alpha$ by some tile with $\gamma = 2\alpha$ and $\alpha$ not a rational multiple of ${\mathfrak P}i$, then $ABC$ and the tile are in those finite sets. It will be important for that proof to have an upper bound on the number of tiles on the sides $AB$ and $BC$ of isosceles triangle $ABC$, in terms of $N$. This section is devoted to that problem. We can count either the tiles supported by $AB$, or the tiles with an edge or a vertex on $AB$. At a given boundary vertex, there can be three tiles or there can be four tiles, as ${\mathfrak P}i = \alpha+\beta+\gamma = 3\alpha+\beta$. So the two ways of counting tiles ``on a side'' differ, but by a bounded factor. One might initially think that such a bound should be on the order of $\sqrt N$, but that idea is based on the picture in which $ABC$ is not long and narrow. If we consider the case when $\alpha$ is tiny, so $AB$ and $BC$ are almost half as long as $BC$ and the triangle has comparatively little interior, maybe most of the tiles touch the boundary! In that case, neglecting for the moment the fact that some tile edges may be a lot larger than others, we would expect almost a quarter of the tiles to have an edge or vertex on $AB$, and a quarter on $BC$, and half on $AC$. The bound we actually prove is that one of $AB$ or $BC$ must support fewer than $(N-1)/4$ tiles. The number of tiles supported by $AB$ should be about half the number of tiles with edges or vertices on $AB$, neglecting the vertices with four instead of three tiles. One illustration of the difficulty is the case when $b$ is tiny, and $a$ and $c$ are almost equal. We already mentioned the example $ (a,b,c) = (61504, 497, 61752)$. Then angle $\beta$ is tiny, and $\alpha$ and $\beta$ are both close to ${\mathfrak P}i/3$, so $ABC$ is nearly equilateral, and the tile is needle-shaped, long and narrow. Note that this is not at all the situation considered above when $ABC$ itself is long and narrow. But in this situation, $AB$ might be tiled by millions of tiles with their tiny $b$ edges on $AB$, while $BC$ might be tiled with relatively fewer tiles with their long $c$ or $a$ edges on $BC$. So there is no obvious relation between the number of tiles supported by one side and the number supported by another. The difference between $N/4$ and $N/2$ and $(N-1)/4$ may not seem important at first, but $(N-1)/4$ enables us to prove that $N$ cannot be twice a prime, while the others mentioned do not, though they might suffice for $N$ not being a prime. Furthermore, the better the bound, the more candidate values of $N$ can be ruled out because they violate certain simple conditions. We conjecture that all $N$ that correspond to tilings are not squarefree; but there are certainly not-squarefree numbers $N$ that we cannot yet rule out. We need to ``get off the ground'' by a close analysis of the case when $N$ is very small. In particular, what is the smallest number of tiles that can be supported by the base $AC$? We show that at least four tiles are required. (That already shows $N > 7$.) A few of our lemmas will be proved also for tilings with $\gamma = 2{\mathfrak P}i/3$; for example at least {\em three} tiles must be supported by $AC$ in that case. After these preliminary remarks, we plunge into the technical lemmas. 
\begin{lemma}\label{lemma:N/4helper} Let isosceles $ABC$ with base angles $\alpha$ (at $A$ and $C$) be $N$-tiled by a tile with angles $(\alpha,\beta,2\alpha)$, and suppose that the tile is not a right triangle and $\alpha$ is not a rational multiple of ${\mathfrak P}i$. Then no tile has one vertex on $AB$ and another on $BC$. \end{lemma} \noindent{\em Proof}. By Theorem~\ref{theorem:laczkovich-isosceles}, $\alpha$ is not a rational multiple of ${\mathfrak P}i$, and ${\mathfrak P}i $ cannot be expressed as a linear combination of $\alpha$, $\beta$, and $\gamma$, except in the way determined by the vertex angles of $ABC$. (That is, ${\mathfrak P}i = 3\alpha + \beta$ if $\gamma = 2 \alpha$, or ${\mathfrak P}i = 3\alpha + 3 \beta$ if $\gamma = 2{\mathfrak P}i/3$). By Theorem~\ref{theorem:rationaltile}, the tile is rational, so we may assume its sides are integers $(a,b,c)$ with no common factor, with $a$ opposite angle $\alpha$ and $b$ opposite $\beta$. Suppose some tile has an edge $EF$ with $E$ on $AC$ and $F$ on $BC$. Consider the triangle $BEF$. Since it has the same angle at $B$ as triangle $ABC$, namely $\alpha+\beta$, its angles at $E$ and $F$ must each be $\alpha$. Then the north side of $EF$ cannot be covered by a single tile, since if it were, that tile would have two $\alpha$ angles, one at $E$ and another at $F$. Therefore the north side of $EF$ supports at least two tiles. By hypothesis, $EF$ is an edge of a single; that tile, say Tile~1, must lie on the south side of $EF$. Since the tile is rational by Theorem~\ref{theorem:rationaltile}, we may assume without loss of generality that $(a,b,c)$ are integers with no common factor. In particular, none of $(a,b,c)$ is an integer multiple of another. Since the south side of $EF$ is equal to one tile edge, the north side cannot be composed of all $a$ edges, or all $b$ edges, or all $c$ edges, since then the edge on the south would be an integer multiple of the edge on the north. Suppose Tile~1 has its $a$ edge on $EF$. Since $a< c$, there are no $c$ edges on the north side of $EF$. Hence north of $EF$ are only $b$ edges, so $a$ is an integer multiple of $b$, contradicting the fact that $a$ and $b$ are relatively prime (by Lemma~\ref{lemma:luthar}). Similarly, if Tile~1 has its $b$ edge on $EF$, then $b$ is an integer multiple of $a$, contradiction. Finally, if Tile~1 has its $c$ edge on $EF$, then since $a+b > c$, either all the tiles on the north of $EF$ supported by $EF$ have their $a$ edges on $EF$, or they all have their $b$ edges on $EF$. Then $c$ is an integer multiple of $a$ or $c$ is an integer multiple of $b$. We argue by cases: Case~1, $c$ is a multiple of $a$, say $c = pa$. Then \begin{eqnarray*} c^2 &=& p^2 a^2 \\ &=& a^2 + ab \mbox{\qquad by Lemma~\ref{lemma:twoalphatiles}}\\ p^2 a &=& a+b \mbox{\qquad by the previous two lines}\\ a &=& 1 \mbox{\qquad since $a$ and $b$ are relatively prime} \\ k &=& 1 \mbox{\quad where $a=k^2$ and $b = m^2-k^2$, by Lemma~\ref{lemma:luthar}} \end{eqnarray*} Then $a = 1$ and $c = mk = m$ and $b = m^2-1 = c^2-1$, so $a+b = m^2 = c^2 \ge c$, so $(a,b,c)$ do not form a triangle, contradiction. That completes Case~1. Case~2, $c$ is a multiple of $b$, say $c = pb$. Then \begin{eqnarray*} pb^2 &=& c^2\\ c^2 &=& a^2 + ab \mbox{\qquad by Lemma~\ref{lemma:twoalphatiles}}\\ p^2 b^2 &=& a^2 + ab \mbox{\qquad by the previous two lines}\\ a^2 &\equiv& 0 \mbox{\ mod\ } b \\ a &\equiv& 0 \mbox{\ mod\ } b \mbox{\qquad since $a$ and $b$ are relatively prime} \end{eqnarray*} But then $b$ divides $a$. 
Since $a$ and $b$ are relatively prime, that implies $b = 1$. By Lemma~\ref{lemma:luthar}, there are relatively prime $(k,m)$ such that $b = m^2-k^2$. Then $m^2 = k^2 + b = k^2+1$, which is impossible, since $k > 0$. That completes Case~2. These contradictions complete the proof of the lemma. \begin{lemma}\label{lemma:cornersonly} Let isosceles $ABC$ with base angles $\alpha$ (at $A$ and $C$) be $N$-tiled by a tile with angles $(\alpha,\beta,2\alpha)$, with $\alpha$ not a rational multiple of ${\mathfrak P}i$. Let $T$ be a tile supported by $AC$ but not having a vertex at $A$ or $C$. Then $T$ does not have a vertex on $AB$ or $BC$. \end{lemma} \noindent{\em Proof}. Let $PQ$ be the edge of $T$ that lies on $AC$. Let $R$ be the third vertex of $T$. We must show $R$ does not lie on $AB$ or $AC$. Suppose, for proof by contradiction that it does. By renaming $A$ and $C$ if necessary, we can assume that $R$ lies on $AB$. Since $\alpha$ is not a rational multiple of ${\mathfrak P}i$, there is only one tile, say $T_1$, with a vertex at $A$. The interior edge of $T_1$ connects $AB$ with $AC$. That is not a shared edge with $T$, since at least three tiles meet at $P$. See Fig.~\ref{figure:cornersonly}. \begin{figure} \caption{ $RPQ$ cannot be a single tile as $RP$ is too long.} \label{figure:cornersonly} \end{figure} Fig.~\ref{figure:cornersonly}, although already counterfactual, is not counterfactual enough, as it shows a gap between $T_1$ and $T$, which we must prove must be there, before we can prove that ``$RP$ would be too long.'' That is, $T$ might share a vertex with $T_1$. We will begin by showing that cannot happen. I say that $P$ does not share a vertex on $AC$ with $T_1$. Suppose, for proof by contradiction, that it does. Then $P$, the western vertex of $T$ on $AC$, is also the eastern vertex of $T_1$ on $AC$. Let $S$ be the third vertex of $T_1$, so $S$ lies on $AB$. See Fig.~\ref{figure:cornersonly2}. \begin{figure} \caption{What if $T_1$ and $T$ share a vertex on $AC$?} \label{figure:cornersonly2} \end{figure} Consider triangle $SPR$. The side $SP$ has length $a$, because that is the edge of $T_1$ opposite its $\alpha$ angle. Since $a=mk$ and $b = m^2-k^2$, $a$ and $b$ are relatively prime, so $a$ cannot be expressed as a sum of $b$ edges; since $a < c$ that means that $SP$ supports only one tile on the east, sharing an $a$ edge with $T_1$. Call that tile $T_2$. Then $T_2$ and $T_1$ each have a $\beta$ or $\gamma$ angle at $S$. At $P$, $T_1$ has a $\beta$ or $\gamma$ angle, since its $\alpha$ angle is at $A$. By Lemma~\ref{lemma:twobeta}, $T_2$ and $T_1$ do not both have either a $\beta$ or $\gamma$ angle at $P$. $T_2$ does not have its $\alpha$ angle at $P$, since $PS$ is its $a$ edge. So one of $T_2$ and $T_1$ has a $\beta$ angle at $P$, and the other has a $\gamma$. Then $T$ has its $\alpha$ angle at $P$, and exactly those three tiles have a vertex at $P$. Since $T$ has its $\alpha$ angle at $P$, its side $PS$ is equal to $b$ or $c$. Now triangle $PSR$ has one side equal to $a$ (namely $PS$), side $PR$ equal to $b$ or $c$ (since $T$ has its $\alpha$ angle at $P$), and its angle at $P$ is either $\beta$ or $\gamma$, since $T$ has $\alpha$ at $P$ and $T_1$ has $\beta$ or $\gamma$. Therefore $PSR$ is congruent to the tile, by the $ASA$ congruence theorem. But $T_1$ has angle $\beta$ or $\gamma$ at $S$, so angle $PSR$ is $\alpha+\gamma$ or $\alpha+\beta$, contradicting the fact that $PSR$ is congruent to the tile. That proves that $T$ and $T_1$ do not share a vertex on $AC$. 
Now I say that $T$ does not share a vertex with $T_1$ on $AB$ either. Suppose to the contrary that it does. Then $R$ is the eastern vertex on $AB$ of $T_1$ as well as the northern vertex of $T$. See Fig.~\ref{figure:cornersonly3}. \begin{figure} \caption{What if $T_1$ and $T$ share a vertex on $AB$?} \label{figure:cornersonly3} \end{figure} Then triangles $T_1$ and $T$ have the same height (measured from $AC$) and since they are each one tile, they are congruent; so they have the same area. Hence they have the same base. The base of $T_1$ is $b$ or $c$, since it has its $\alpha$ angle at $A$. Then the base $PQ$ of $T$ is also $b$ or $c$. Since the edge $RP$ is not shared with $T_1$, as we have already proved, there is another tile between $T_1$ and $T$ with a vertex at $R$. Since the third vertex of $T$, namely $Q$, does not lie on $AB$, there are at least four tiles meeting at $R$. Therefore there are three $\alpha$ angles and one $\beta$ angle at $R$. The $\beta$ angle must belong to $T_1$, since it has its $\alpha$ angle at $A$. Then $T$ has its $\alpha$ angle at $R$. Hence its base $PQ$ is opposite its $\alpha$ angle. Hence $PQ = a$. But we showed above that $PQ$ is $b$ or $c$. That is a contradiction; that proves (as claimed) that $T$ does not share a vertex with $T_1$ on $AB$. Let $X$ be the length of $AR$ and $Y$ the length of $RP$. Since $R$ is not a vertex of $T_1$, $AR$ is composed of at least two tile edges. One of those edges is not $a$, since $T_1$ has its $\alpha$ angle at $A$. I say that also one of them is not $b$. For suppose $AR$ supports only $b$ edges of tiles. Then for some integer $\ell$, $X = \ell b$. Each of those tiles has its $\alpha$ angle to the west and its $\gamma$ angle to the east. Then at $R$, the tile to the west of $R$ has its $\gamma$ angle at $R$. Then there are exactly three tiles with vertices at $R$. Since $RQ$ is an interior segment, $T$ is the middle one of those three. Then triangle $ARP$ is similar to the tile, since it has one $\alpha$ and one $\gamma$ angle. Triangle $ARP$ has $\alpha$ at $A$, and $\gamma$ at $R$, and therefore $\beta$ at $P$. But angle $ARP$ is the supplement of angle of $T$, which is impossible, as neither $\alpha$, $\beta$, nor $\gamma$ is the supplement of $\beta$. Hence, as claimed, one of the edges supported by $AR$ is not $b$. We now intend to reach a contradiction by showing that the length of $RP$ is more than the length of any tile edge $a$, $b$, or $c$. That will be a contradiction, because $RPQ$ is a tile, so $RP$ has to be equal to $a$, $b$, or $c$. Let $x$ and $y$ be two of $(a,b,c)$, not necessarily distinct, but one of $x$ and $y$ is not $a$, and one is not $b$. Then I say that $x+y > a$, $x+y > b$, and $x+y > c$. If we prove that, we can take $x$ and $y$ to be two of the edges supported by $RP$, and since $Y$ is a tile edge, and thus equal to $a$, $b$, or $c$, we will have $RP > Y$ as claimed. Since $x+y > x$, and $x+x > x$, we are done unless the third edge is distinct from the two to be added. If the two to be added are distinct, we are done, because two sides of a triangle are together greater than the third. That leaves the cases $a+a > b$, $b+b > c$, $a+a > c$, $b+b > a$, $c+c > b$, $c+c > a$. The cases $c+c > a$ and $b+b >a$ follow from $c>a$ and $b>a$. We can drop $a+a > b$ and $a+a > c$ because we know one of two edges to be added is not $a$. We can drop $b+b > a$ and $b+b > c$ because we know that one of two edges to be added is not $b$. That leaves only $c+c > b$ still to prove. 
By Lemma~\ref{lemma:shapes}, we have $c/b > 2/3$. Hence $c + c > (4/3)b > b$. That completes the proof of the lemma. \noindent{\em Remark}. $b+b > c$ is not generally true. We have seen that $b$ can be much smaller than $c$. Hence we had to show that $RP$ could not support only $b$ edges. \begin{lemma}\label{lemma:fourtiles} Let isosceles $ABC$ with base angles $\alpha$ (at $A$ and $C$) be $N$-tiled by a tile with angles $(\alpha,\beta,2\alpha)$, with $\alpha$ not a rational multiple of ${\mathfrak P}i$. Then there are at least four tiles supported by $AC$. If the tile instead has angles $(\alpha, \beta, 2{\mathfrak P}i/3)$ instead of $(\alpha,\beta,2\alpha)$, then there are at least three tiles supported by $AC$. \end{lemma} \noindent{\em Proof}. First assume $\gamma = 2\alpha$. We start by proving that at least three tiles are supported by the base $AC$. Suppose, for proof by contradiction, that only two tiles are supported. Those two tiles have their $\alpha$ angles at $A$ and $C$, and their $a$ edges both end at the shared vertex $P$ on $AC$. Without loss of generality we may assume that Tile~1, with edge $AP$, has its $\beta$ angle at $P$, and Tile~2, with edge $PC$, has its $\gamma$ angle at $P$, since two $\beta$ angles or two $\gamma$ angles at $P$ is impossible. Then the remaining angle to be filled at vertex $P$ is $\alpha$. By Theorem~\ref{theorem:laczkovich-isosceles}, $\alpha$ is not a rational multiple of ${\mathfrak P}i$, and hence not a multiple of $\beta$. Since $\alpha$ is not a multiple of $\beta$, the gap must be filled by a single tile, Tile~3, with its $\alpha$ angle at $P$. Then Tile~3 has its $c$ side either against Tile~1 or Tile~2, but that is impossible since those edges are of length $a$ and terminate at the boundary, and $c > a$. Hence there are indeed at least three tiles supported by $AC$. Note that this argument works for both cases, $\gamma = 2\alpha$ and $\gamma = 2{\mathfrak P}i/3$. Now suppose there are exactly three tiles supported by $AC$. Not all three tiles on $AC$ can have their $c$ edges on $AC$, since then each would have a $\beta$ angle on $AC$, and by the pigeonhole principle, there would be a vertex on $AC$ with two $\beta$ angles. If the tiles at $A$ and $C$ both have their $b$ edges on $AC$, then they have their $\gamma$ angles on $AC$, so the middle tile on $AC$ cannot have a $\gamma$ angle on $AC$, so it has its $c$ edge on $AC$. Therefore the possible values of $Y$ are exactly $2c+b$, $2c+a$, $2b+c$, and $b+c+a$. So far, $\gamma$ could be $2\alpha$ or $2{\mathfrak P}i/3$. Now we assume $\gamma = 2\alpha$. Then Lemma~\ref{lemma:areaequation2} applies, yielding $X = (a/c) Y$. Therefore, the possible values of $X$ are exactly $$ 2a + (ba/c), 2a + a^2/c, 2ab/c + a, (a+b+c)(a/c)$$ By Lemma~\ref{lemma:luthar}, $a=k^2$, $b=m^2-k^2$, $c=mk$, where $k$ and $m$ are relatively prime. Note that $$(a+b+c)(a/c) = (k^2+(m^2-k^2)+mk)(k/m) = (m+k)k = a+c.$$ Then the possible values of $X$ are $$ 2k^2 + k(m^2-k^2)/m, 2k^2 + k^3/m, 2k(m^2-k^2)/m, (m+k)k $$ Since $m > k \ge 1$, none of the first three can be an integer. Therefore $$X = (m+k)k = a+c.$$ The question now is, what are the tiles supported by $AB$, the sum of whose edges on $AB$ is $a+c$? One possibility is that there are exactly two tiles supported by $AB$, one with its $a$ edge on $AB$ and the other with its $c$ edge. However, that is impossible, since the tiles at $A$ and $C$ both have their $\alpha$ angles at $A$ or $C$, and hence their $a$ edges do not lie on $AC$. 
So $X$ has some other configuration of tiles supported. Then $X = ua + vb + wc = a+c$ for some nonnegative integers $u,v,w$. Then $u$ or $w$ must be zero. If $u=w=0$ then $vb = a+c$, i.e., $v(m^2-k^2) = k^2 + km$. Dividing by $m+k$ we have $v(m-k) = k$. Since $k$ is relatively prime to $m-k$, $k$ divides $v$. But also $v$ divides $k$. Hence $v=k$. Then $AC$ is composed of $k$ tile edges of length $b$. Then each edge has a $\gamma$ angle on $AC$. Since there are $\alpha$ angles at $A$ and $C$, then by the pigeonhole principle there is a vertex with two $\gamma$ angles. But that is impossible, since the only relations between the angles are $\alpha + 3\beta = {\mathfrak P}i$ and $\alpha +\beta+\gamma = {\mathfrak P}i$. Thus we have ruled out the case $u=w=0$. Now suppose $v=w=0$. Then $ua = a+c$, i.e., $(u-1)a = c$, or $(u-1)k^2 = km$, so $u-1 = m$. Then $ma = c= km$, so $m=k$, contradiction. Hence not both $v$ and $w$ are zero. I say that $w \neq 0$. To prove that, suppose $w=0$. Then $u \neq 0$ and $v \neq 0$ and $ ua + vb = a+c$. Then \begin{eqnarray*} uk^2 + v(m^2-k^2) &=& k^2 + km \\ (u-1)k^2 + vm^2 &=& km \end{eqnarray*} Then $k$ divides $v$, since $k$ is relatively prime to $m$. Since $v \neq 0$, we have $v \ge k$. \begin{eqnarray*} km &=& (u-1)k^2 + vm^2 \\ &\ge& vm^2 \mbox{\qquad since $u-1\ge 0$}\\ &\ge& km^2 \mbox{\qquad since $v \ge k$}\\ &>& km \mbox{\qquad since $m > k \ge 1$ }\\ km &>& km \end{eqnarray*} But that is a contradiction, reached on the assumption $w=0$. Therefore $w \neq 0$, as claimed. Then \begin{eqnarray*} ua + vb + wc &=& a+c \\ uk^2 + v(m^2-k^2) + wkm &=& k^2 + km \\ uk^2 +vm^2 + (w-1)km &=& (v+1)k^2 \\ (u-1)k^2 + vm^2 + (w-1)km &=& vk^2 \end{eqnarray*} I say that $u=0$. If $u \neq 0$, then all the terms on the left are non-negative, and $vm^2 > vk^2$ since $m >k$, so the equation is impossible. Hence $u = 0$, as claimed. Then \begin{eqnarray} v(m^2-k^2) + wkm &=& k^2 + km \nonumber\\ vm^2 + (w-1)km &=& (v+1)k^2 \label{eq:8986} \end{eqnarray} Since $m > k$, we have $vm^2 > vk^2$, and since $w \neq 0$ we have $(w-1)km > (w-1)k^2$. If $w \neq 1$ then the left side of (\ref{eq:8986}) is greater than the right, contradiction. Hence $w=1$. Then $$ a+c = ua + vb + wc = vb + c$$ since $w=1$ and $u=0$. Then $a = vb$, which is impossible since $a$ and $b$ are relatively prime and not zero. This contradiction depends only on the assumption that there are exactly three tiles supported by $AC$. Since we already proved there are at least three tiles supported by $AC$, we have now proved there are at least four tiles supported by $AC$. That completes the proof of the lemma. \begin{lemma} \label{lemma:N/4} Let isosceles $ABC$ with base angles $\alpha$ (at $A$ and $C$) be $N$-tiled by a tile with angles $(\alpha,\beta,2\alpha)$, with $\alpha$ not a rational multiple of ${\mathfrak P}i$. Then one of the sides $AB$ or $BC$ supports strictly less than $(N-1)/4$ tiles. If the tile instead has angles $(\alpha,\beta,2{\mathfrak P}i/3)$, then one of the sides supports strictly less than $N/4$ tiles. \end{lemma} \noindent{\em Remarks}. This bound is not used in our tiling non-existence theorem that $N$ cannot be squarefree. But it is crucial to the theorem that, given $N$, there is an explicitly computable set of possible $ABC$ and tiles. \noindent{\em Proof}. Let $P$ be a vertex of the tiling lying on the interior of a side of $ABC$. 
If only two tiles meet at $P$ then they cannot have different angles, since any two of $(\alpha,\beta,\gamma)$ make together less than ${\mathfrak P}i$. But if they have the same angle at $P$, that angle would be a right angle, contrary to hypothesis. Therefore at least three tiles meet at each such vertex $P$. Let $n$ and $m$ be, respectively, the total number of tiles with an edge or vertex on $AB$, and the total number of tiles with an edge or vertex on $BC$. First suppose the tile has $\gamma = 2\alpha$. By Lemma~\ref{lemma:fourtiles}, $AC$ supports at least four tiles. The middle two do not touch $AB$ or $BC$ even in a vertex, by Lemma~\ref{lemma:cornersonly}. and Lemma~\ref{lemma:N/4helper}, and between the tiles that have no vertex at $A$ or $C$ there is (at least) a fifth tile, having only a vertex on $AC$, which also does not touch $AB$ or $BC$, by Lemma~\ref{lemma:cornersonly}. Then $n+m \le N-3$, since $N$ is the total number of tiles, but at least three have a side or vertex on $AC$ and do not touch $AB$ or $BC$, and by Lemma~\ref{lemma:N/4helper}, no tile contributes to both $n$ and $m$. Therefore either $n \le (N-3)/2$ or $m \le (N-3)/2$. Relabeling $A$ and $C$ if necessary, we can assume without loss of generality that $n \le (N-3)/2$. Now let $p$ and $q$, respectively, be the number of tiles supported by $AB$, and the number of tiles with one and only one vertex on $AB$. Then $q \ge p-1$ (it might be strictly greater if some vertices have more than three tiles sharing that vertex). Therefore \begin{eqnarray*} p-1 &\le& q\\ p+q &=& n \ \le \ (N-3)/2 \\ p &\le& (N-3)/2-q \\ &\le& (N-3)/2-(p-1) \\ 2p &\le& (N-3)/2 + 1 = (N-1)/2\\ 2p &\le& (N-1)/2 \\ p &\le& (N-1)/4 \end{eqnarray*} We now will establish strict inequality in place of $\le$. Suppose that $p = (N-1)/4$. Then there are exactly four tiles supported by $AC$, and at all vertices on $AC$ except $A$ and $C$, there are just three tiles meeting, since any more would introduce a strict inequality $p-1 < q$, instead of $p-1 \le q$. Therefore each boundary vertex has one $\alpha$, one $\beta$, and one $\gamma$ angle. The corner tiles have their $\alpha$ angles at $A$ and $C$. The two tiles adjacent to the corner tiles, with only a vertex on $AC$, do not have their $\alpha$ angles on $AC$, since their $a$ sides must match the $a$ sides of the corner tiles. (It is impossible that $a$ is an integer multiple of $b$, since $a$ is relatively prime to $b$, by Lemma~\ref{lemma:luthar}.) Therefore the middle vertex on $AC$ must have two $\alpha$ angles, contradiction. (In other words, if $AC$ supports only four tiles, there must be a ``boundary star'' on $AC$.) Hence, as claimed, $p < N/4$. That completes the proof in case the tile is $(\alpha,\beta,2\alpha)$. Now suppose the tile is $(\alpha,\beta,2{\mathfrak P}i/3)$ and $p = N/4$. Then we start with $n + m \le N-1$ instead of $N-3$, since Lemma~\ref{lemma:fourtiles} gives us only three tiles supported by $AC$ instead of four. The result is $p \le N/4$ instead of $p \le (N-1)/4$. Then there must be exactly three tiles supported by $AC$ and each boundary vertex has one $\alpha$, one $\beta$, and one $\gamma$ angle, or else there would be strict inequality in the computation. That completes the proof of the lemma. \subsection{What is the least $N$ permitting a tiling?} The smallest explicitly-known such tiling has $N = 669780$, with tile $(4,5,6)$, as explained above. 
On the other hand, we have no guarantee that some {\em other} method of constructing tilings might work better, and produce some much smaller tilings. We will show below that $N \ge 45$. There is a large swath of ignorance between 45 and 669780! Until now, we have no {\em a priori} estimate on the size of $(a,b,c)$. For example, there is no {\em a priori} reason why we could not have $N < 100$ and $(a,b,c)$ each greater than a million. We now provide such a bound. \begin{lemma} \label{lemma:abcbound} Suppose isosceles triangle $ABC$ is $N$-tiled by tile $(a,b,c)$ with $\gamma = 2 \alpha$. Then $a, b$, and $c$ are less than $m^2$, where $m$ is as in Lemma~\ref{lemma:luthar} and satisfies $$ m < N + \frac{(N+1)^2}{16}.$$ \end{lemma} \noindent{\em Remark.} The form of the bound is not very beautiful, but we want to use it for fairly small $N$, and its asymptotic value is not of interest. We only care that there is {\em some} explicit and reasonably-sized bound. \noindent{\em Proof.} By Lemma~\ref{lemma:luthar}, there exist relatively prime integers $m$ and $k \le m$ such that $a = k^2$, $b = m^2-k^2$, and $c = mk$. Let $X = \vert AB \vert$ be the length of the two equal sides of $ABC$. The tiling provides integers $p,q,r$ such that $X = pa + qb + rc$. Then \begin{eqnarray*} X^2 &=& Nab \\ X^2 &=& N(k^2)(m^2-k^2)\\ X &=& pk^2 + q(m^2-k^2) + rmk \\ X &\equiv& (p-q)k^2 \mbox{\qquad mod $m$} \\ X^2 &\equiv& (p-q)^2k^4 \mbox{\qquad mod $m$}\\ N(k^2)(m^2-k^2) &\equiv&(p-q)^2k^4 \mbox{\qquad mod $m$} \\ -Nk^4 &\equiv&(p-q)^2k^4 \mbox{\qquad mod $m$} \\ -N &\equiv& (p-q)^2 \mbox{\qquad mod $m$, since $\gcd(k,m)=1$} \\ N+(p-q)^2 &\equiv& 0 \mbox{\qquad mod $m$} \end{eqnarray*} Therefore $m$ divides $N + (p-q)^2$. Since $N + (p-q)^2$ is positive, that implies \begin{eqnarray} m \le N + (p-q)^2 \label{eq:9585} \end{eqnarray} By Lemma~\ref{lemma:N/4}, we may assume $p+q+r < (N+1)/4$, since that is true on one of the two sides of $ABC$ of length $X$. In particular, each of $p,q,r$ is $< (N+1)/4$. Hence \begin{eqnarray*} \vert p-q \vert &<& (N+1)/4 \\ (p-q)^2 &<& \frac{(N+1)^2}{16} \end{eqnarray*} By (\ref{eq:9585}), we have $$ m < N + \frac{(N+1)^2}{16}.$$ Since $k \le m$, both $a$ and $b$ are $\le m^2$. Hence both $a$ and $b$ are bounded by $$ \left( N + \frac{(N+1)^2}{16} \right)^2.$$ That completes the proof of the lemma. By a {\em boundary tiling} of $ABC$ by the tile $(a,b,c)$, we mean a placement of tiles supported by the boundary of $ABC$ touching every point of the boundary of $ABC$. Let $X$ be the side $AB$, and $Y$ the base $AC$, of isosceles $ABC$. A boundary tiling of $ABC$ provides integers $(p,q,r)$ and $(u,v,w)$ with \begin{eqnarray*} X &=& pa + qb + rc \\ X^2 &=& Nab \mbox{\qquad the area equation} \\ Y &=& ua + vb + wc \end{eqnarray*} We use the phrase {\em possible boundary tiling} to mean a way of writing $X$ and $Y$ in this form, with integers $(p,q,r)$ and $(u,v,w)$ satisfying Lemma~\ref{lemma:N/4}, and $(a,b,c)$ satisfying Lemma~\ref{lemma:abcbound}. Of course, a tiling gives rise to a boundary tiling, but not every boundary tiling can be completed to a tiling, let alone every ``possible'' boundary tiling. Each ``possible boundary tiling'' might correspond to many different ways of arranging the tiles (in different orders) on the boundary, but there will be only finitely many ways. We note that a boundary tiling might use different $(p,q,r)$ on the two sides $AB$ and $BC$; we shall return to that point below. 
\begin{lemma} \label{lemma:boundarytilings} Given $N$, there is a finite set $\Delta$ of tiles $(a,b,c)$ having integer sides with no common factor and angles $(\alpha,\beta,\gamma)$, and for each tile in $\Delta$, a finite number of representations \begin{eqnarray*} X &=& pa + qb + rc \\ Y &=& ua + vb + wc \end{eqnarray*} such that if isosceles triangle $ABC$ with base angle $\alpha$ can be $N$-tiled with some tile, then the tile belongs to $\Delta$ and the boundary representations determined by the tiling are among those allowed for that tile. \end{lemma} \noindent{\em Example}. With $N=36$, there are just two possible tiles: $(9,16,15)$ and $(16,9,20)$, as we will show below, and the possible boundary tilings are given in Table~\ref{table:no36}. \noindent{\em Proof}. Let $N$ be given. Then the number of possible boundary tilings is finite (and one can easily loop through them), by definition of ``boundary tiling.'' By Lemma~\ref{lemma:luthar} and Lemma~\ref{lemma:abcbound}, every $N$-tiling gives rise to a possible boundary tiling. We spell out the algorithm just described: Given $N$, we loop through all $(k,m)$ satisfying Lemma~\ref{lemma:abcbound} and with $k$ and $m$ relatively prime. \footnote{\,The condition $k$ and $m$ relatively prime is important, because it results in $a$ and $b$ being relatively prime, which is assumed in Lemma~\ref{lemma:N/4}. Without it, we got some spurious possible boundary tilings with tiles like $(46,45,54)$, which cannot correspond to real tilings because Lemma~\ref{lemma:N/4} would be violated.} There are only finitely many $(k,m)$ to loop through, because according to Lemma~\ref{lemma:abcbound}, we have an explicit bound on $m$ in terms of $N$. Since $k<m$, both $m$ and $k$ are bounded in terms of $N$. For each such $(k,m)$, we compute the tile $$ (a,b,c) = (k^2, m^2-k^2, mk).$$ We reject triples $(a,b,c)$ that either \begin{itemize} \item cannot form triangles because one side is greater than or equal to the sum of the other two, or \item the squarefree part of $N$ is not equal to the squarefree part of $b$ \end{itemize} Then $X$ is defined by the area equation $X^2 = Nab$, and $Y$ is defined by the area equation $XY = Nbc$. We reject triples $(a,b,c)$ if either \begin{itemize} \item $X$ is not an integer, or \item $Y$ is not an integer \end{itemize} Then we loop through all triples $(p,q,r)$ satisfying the bound of Lemma~\ref{lemma:N/4} such that \begin{eqnarray*} X &=& pa + qb + rc \end{eqnarray*} We also reject triples with $p=r=0$, as if $X$ is composed entirely of $b$ edges, then each tile supported by $AB$ (or $BC$) has a $\gamma$ angle on $AB$ (or $BC$), which contradicts the pigeonhole principle, since no vertex has two $\gamma$ angles, and none of the tile angles at $A$ and $C$ and $C$ are $\gamma$. Now $Y = (c/a)X$. We need to check whether $Y$ can be expressed in the form $ua+vb+wc$. There are only finitely many possibilities for $(u,v,w)$, so we can check that. If $Y$ cannot be so expressed, then we can reject this $(a,b,c)$. Otherwise, we output the possible boundary tiling given by \begin{eqnarray*} X &=& pa + qb + rc \\ Y &=& ua + vb + wc. \end{eqnarray*} That completes the proof of the lemma. The algorithm as described above eliminates $N < 20$, but finds possible boundary tilings for $N=20, 28, 36,44, 45$. Below we will discuss improvements to the algorithm and eliminate some of these values. 
\footnote{ We coded the algorithm twice, once in SageMath, which offers unlimited precision integers, and once in C, taking care to use 64-bit integers in C. We got the same results from both implementations. } \begin{theorem} \label{theorem:search} Given $N$, it is decidable by a computation whether there exists an $N$-tiling of some (any) triangle $ABC$ by a tile with $\gamma = 2\alpha$, (where $ABC$ has base angles $\alpha$). \end{theorem} \noindent{\em Proof}. For a fixed $ABC$ and tile, it is (in principle) computationally decidable whether there is a tiling: By Lemma~\ref{lemma:boundarytilings}, there are finitely many possible boundary tilings so in principle you can all the possible ways of arranging tiles on the boundary, and check by backtracking search whether the boundary tiling can be completed to an $N$-tiling, just like solving a jigsaw puzzle. That completes the proof. \subsection{Ruling out more values of $N$} The problem of constructing an $N$-tiling divides into two parts: first construct an $ABC$ and a tile $(a,b,c)$, and a possible boundary tiling \begin{eqnarray*} X &=& pa + qb + rc \\ Y &=& ua + vb + wc \end{eqnarray*} Then, use backtracking search to either find an $N$-tiling, or show that there is none. The second part of this (the part involving backtracking) is not a trivial program, and even if coded, it would probably take too long when $N$ is large. But the first algorithm (searching for a possible boundary tiling) is very easy to implement, as no geometry is involved, just some simple linear equations. We have already described that algorithm in Lemma~\ref{lemma:boundarytilings}. \begin{lemma} \label{lemma:no20} Let isosceles triangle $ABC$ with base angles $\alpha$ be $N$-tiled by an integer-sided triangle with angles $(\alpha, \beta, 2\alpha)$. Then $N \neq 20$. \end{lemma} \noindent{\em Remark}. It seems that $N=20$ is the {\em only} value of $N$ (among those passed by Lemma~\ref{lemma:boundarytilings}) that can be rejected this way; at least, it's the only one less than 1000. \noindent{\em Proof}. Let $n$ be the minimum possible number $n$ of tiles supported on the boundary; then there must be at least $n-1$ tiles with a side or vertex on the boundary, thanks to there being that many gaps between the tiles, and no double-counting of tiles filling those gaps, because of Lemma~\ref{lemma:N/4helper} and Lemma~\ref{lemma:cornersonly}. Then if $n+(n-2)$ exceeds $N$, we can reject that possible boundary tiling. When $N=20$, the tile $(4,5,6)$ leads to $X = 20 =a + 2b + c$, so $X$ supports 4 tiles, and $20$ cannot be written as a sum of fewer tiles. Then $Y = 30 = 5c$, $n = 11$, $n+(n-1) = 21 > 20$. So this possible tile is rejected. Since that is the only possible tile for $N=20$, there is no 20-tiling. That completes the proof. \begin{lemma}\label{lemma:rejectbycenter} Let isosceles triangle $ABC$ with base angles $\alpha$ be $N$-tiled by an integer-sided triangle with angles $(\alpha, \beta, 2\alpha)$ and sides $(a,b,c)$, integers with no common factor. Then the tiling contains an interior segment of length at least $kb$, where $a=k^2$, and an interior segment of length at least $ma$, where $c = km$. In particular the max of the side and base of $ABC$ is at least $bk$ and at least $am$. More generally, the tiling contains an interior segment of length at least $jb = pa + qc$, for some nonnegative integers $p,q,j$ with $j \neq 0$, and an interior segment of length at least $ja = ua + vc$. \end{lemma} \noindent{\em Remarks}. 
This lemma does a good job of of rejecting many possible values of $N$ that only permit boundary tilings with large values of $ab$. Indeed, for $N \le 1000$, only two-digit values of $a$ and $b$ survive. For example, for $N=121$, the tile $(9,16,15)$ survives, but the tiles $(144, 25, 156)$, $(225, 64, 255)$, and $(400, 441, 580)$ are eliminated. That the bound $kb$ in the lemma cannot be improved is demonstrated by $(a,b,c) = (9,7,12)$, where we have $3b = a+c$, with $k=3$ and $m=4$. Unfortunately, although this lemma thinned the list of possible $N$, it did not remove the ``low-lying fruit'', in the sense that $N=28$ was not eliminated, although one possible tile for $N=28$ was eliminated, namely $(81, 175, 144)$. But $(9,7,12)$ was not eliminated. $N=32$ was eliminated, and two possible large tiles for $N=36$ were eliminated. \noindent{\em Proof}. The last sentence of the lemma follows from the first part, as any line contained in the triangle is less than the max of the side and base (which we invite the reader to verify). The first part of the proof depends on the fact that the tiling contains a center, and the use of Laczkovich's graph ${\mathbb G}amma_b$. As it turns out, we have already proved what we need here: By Lemma~\ref{lemma:essentialsegments}, the tiling contains a segment $PQ$ witnessing a relation $jb = pa + qc$. Since $k$ divides $a$ and $c$, but not $b$, we have $k$ divides $j$. Therefore (since $j \neq 0$), $j \ge k$. In order to witness such a relation, the length of $PQ$ must be at least $jb$, so the length of $PQ$ is at least $kb$. Similarly, by Lemma~\ref{lemma:essentialsegments}, the tiling contains a segment $UV$ witnessing a relation $ja = ub + vc$. Then \begin{eqnarray*} jk^2 &=& um^2-uk^2 + vkm \\ \end{eqnarray*} Therefore $k$ divides $u$, and \begin{eqnarray*} jk &=& (u/k) m^2-uk + vm \\ j &=& (u/k)k (m+v)(m/k) - u \\ j &\ge& (u/k) k (m/k) \\ &\ge& m \\ ja &\ge& ma \end{eqnarray*} That completes the proof of the lemma. \begin{lemma} \label{lemma:no28} Let isosceles triangle $ABC$ have base angles $\alpha$, not a rational multiple of ${\mathfrak P}i$. Let $(a,b,c)$ be integers with no common factor forming a triangle with angles $(\alpha,\beta,2\beta)$. Let $X$ and $Y$ be the lengths of $AB$ and $BC$ respectively. Suppose $$X = pa + qb + c$$ and $$Y = ua + vb + c \mbox{ or } ua+vb+2c$$ and suppose that these representations correspond to the actual tile edges on $AB$ and $BC$ and $BC$. Then there is no $N$-tiling of $ABC$ by $(a,b,c)$. \end{lemma} \noindent{\em Example.} Consider $N=28$. Consider $(a,b,c)= (9,7,12)$. By the area equations (\ref{eq:area1}) and \ref{eq:area2}), if there were a 28-tiling of an isosceles $ABC$ with base angles $\alpha$, we would have $X = 42$ and $Y = 56$. Then $X = a+3b+c$ and $Y=2a+2b+2c$. The lemma shows we cannot have a tiling corresponding to these representations of $X$ and $Y$. It does not address the question of tilings corresponding to other representations; that will be taken up below. \noindent{\em Proof}. Suppose that such a tiling exists. By hypothesis, the decomposition $X = pa + qb + c$ corresponds to the tiles supported by $AB$ and the tiles supported by $BC$. By hypothesis, the decomposition $Y = ua+vb+wc$, with $w = 1$ or 2, corresponds to the tiles supported by $AC$, and the decomposition $X = pa + qb + c$ corresponds to the tiles supported by $AB$ and $BC$ (though perhaps in different orders). I say that neither of the two tiles at $A$ has its $c$ edge on $AB$ or $BC$. 
Renaming $A$ and $C$ if necessary, we may suppose that the tile $T_1$ at $A$ supported by $AB$ has its $\alpha$ angle at $A$. Suppose $T_1$ has its $c$ edge on $AB$. Since there is only one $c$ edge on $AB$, all the rest of the tiles supported by $AB$ have a $\gamma$ angle on $AB$. Since $T_1$ has its $\alpha$ angle to the south, all the tiles except $T_1$ have their $\gamma$ angles to the north. Let the next tile supported by $AB$ be $T_3$, and let the one between $T_1$ and $T_3$ be $T_2$; and let $P$ be the shared vertex on $AB$ of these three tiles. Then $T_1$ has its $\beta$ angle at $P$, and $T_3$ has its $\gamma$ angle at $P$. Hence $T_2$ has its $\alpha$ angle there. Then the $a$ edge of $T_2$ is not shared with $T_1$. Now consider the tile $T_4$ supported by $BC$ at $B$. If it has its $c$ edge along $T_1$, then that extends past $T_2$, blocking the top edge of $T_2$, which must therefore be less than $a$. Possibly $b < a$, but then $a-b$ would have to be a multiple of $b$, say $a-b = jb$. Then $a = (j+1)b$, which is impossible since $a$ and $b$ are relatively prime. This contradiction shows that the $c$ edge of $T_4$ is not shared with $T_1$. Therefore the $c$ edge of $T_4$ is on $BC$, since $T_4$ has its $\beta$ angle at $B$. Then, the tile supported by $BC$ at $C$ has its $b$ edge on $BC$ and its $c$ edge on $AC$, since it has its $\alpha$ angle at $C$, and there is only one $c$ edge on $BC$. Now we have two $c$ edges on $AC$, with endpoints at $A$ and $C$ respectively. They are not the same edge, since $Y > c$. In case $Y= ua+vb+c$, we have reached a contradiction; so we may assume $Y=ua+vb+2c$. All the other tiles supported by $AC$ (except the two with $c$ edges) have a $\gamma$ angle on $AC$. Let $Q$ and $R$ be the vertices on $AC$ next to $A$ and next to $C$, respectively, i.e., the other vertices of the corner tiles. By the pigeonhole principle, one of the interior tiles with a vertex at $Q$ or $R$ has its $\gamma$ angle there. Say it is at $Q$. Let $T_5$ be the corner tile at $A$, $T_7$ the tile with a $\gamma$ angle at $Q$ and an edge on $AC$, and $T_6$ the tile between them. Then $T_5$ has its $\beta$ angle at $Q$, since its $b$ edge is on $AB$. Then $T_6$ has its $\alpha$ angle at $Q$. Therefore it cannot share its $a$ edge with $T_5$, as the $a$ edge must be opposite the $\alpha$ angle. If $a <b$ that is immediately impossible; if $b<a$ then it is also impossible, since $a$ and $b$ are relatively prime. We have reached a contradiction, depending on the assumption that tile $T_7$ has its $\gamma$ angle at $Q$. Therefore the corresponding tile next to the corner tile at $C$ has its $\gamma$ angle at vertex $R$. Then we similarly reach a contradiction, replacing $A$ by $C$ in the argument just given. Finally we reach a contradiction depending only on the assumption that such a tiling exists. That completes the proof of the lemma. \begin{theorem} \label{theorem:N45} Let isosceles triangle $ABC$ with base angles $\alpha$ be $N$-tiled by a triangle with angles $(\alpha, \beta, 2\alpha)$, and $\alpha$ not a rational multiple of ${\mathfrak P}i$. Then $N \ge 45$. \end{theorem} \noindent{\em Proof}. With $N=20$ eliminated by Lemma~\ref{lemma:no20}, the first few possible boundary tilings found by the algorithm in Lemma~\ref{lemma:boundarytilings} are shown in Table~\ref{table:no36}. 
\begin{table}[ht] \caption{Possible boundary tilings} \label{table:no36} \begin{tabular}{cccc} $N$ & $(a,b,c)$ & $X$ & $Y$ \\ \hline 28 & (9,7,12) & $42 = a + 3b + c$ & $56 = 2a + 2b + 2c$ \\ 36 & (9,16,15) & $ 72 = a + 3b + c$ & $120 = a + 6b + c$ \\ 36 & (16, 9, 20) & $72 = a + 4b + c$ &$ 90 = 2 a + 2 b + 2 c$\\ 44 & (25, 11, 30) & 110 = $ a + 5 b + c $ &$ 132 = 2 a + 2 b + 2 c$\\ 45 &(4,5,6) & $30 = a + 4 b + c$ & $45 = a + b + 6c$\\ 45 &(4,5,6) & $30 = 2a + 2 b + 2c$ & $45 = a + b + 6c$ \end{tabular} \end{table} It remains to eliminate 28, 36, and 44. By Lemma~\ref{lemma:boundarytilings}, the only possible tiles are the ones shown in Table~\ref{table:no36}. We first take up the case $N=28$, which was discussed in the example following the statement of Lemma~\ref{lemma:no28}. That lemma shows that no tiling is possible corresponding to the representations of $X$ and $Y$ given in Table~\ref{table:no36}. There cannot be other representations with more $c$ edges, since no multiple of 12 cannot be made from one 9 and three 7s, and no multiple of 12 can be made from two 9s and two 7s, and $a$ and $b$ are relatively prime. It remains to consider representations with fewer $c$ edges. In this case that would mean zero $c$ edges; the only such candidate representation is $X = 6b$. But the algorithm of Lemma~\ref{lemma:no28} already eliminates $p=r=0$, as discussed in the proof that lemma (because then there would be a $\gamma$ angle at every vertex on $AB$). For the same reason there can be no tilings corresponding to $Y$ composed of all $b$ edges. The algorithm that produces the table does not print every possible representation for $Y$, and there is still the representation $Y = a+5b+c= 56=9+5\cdot7 + 12$, which is not shown in the table, to consider. But any tiling corresponding to that representation is also ruled out by Lemma~\ref{lemma:no28}. Hence no 28-tiling of this isosceles triangle $ABC$ by this $(a,b,c)$ exists. We turn to the case $N=36$. Suppose, for proof by contradiction, there is a 36-tiling. By the area equations, $(X,Y) - (72,120)$, which have representations given in Table~\ref{table:no36}. Since with tile $(9,16,15)$ we cannot make a multiple of 15 out of six 16s, except by using all of them. Then by Lemma~\ref{lemma:no28}, those representations for $X$ and $Y$ in Table~\ref{table:no36} do correspond to the tiling, unless the tiling has $AC$ composed of eight $c$ edges, i.e., $Y=8c$ corresponds to the tiling. In that case, each tile supported by $AC$ would have a $\beta$ angle. Since there are $\alpha$ angles at $A$ and $C$, the pigeonhole principle tells us that at some vertex on $AC$, there must be two $\beta$ angles, which is impossible. Therefore there is no 36-tiling by the tile $(9,16,15)$. Table~\ref{table:no36} also has a second entry for $N=36$, namely $(a,b,c) = (16, 9, 20)$, with $X= 72 = a + 4b + c$ and $Y= 90 = 2 a + 2 b + 2 c$. One can check that it is not possible to make a multiple of 20 with up to two $a$ edges and up to four $b$ edges. Then by Lemma~\ref{lemma:no28}, no such tiling exists. Since Table~\ref{table:no36} represents the results of Lemma~\ref{lemma:boundarytilings}, there is no 36-tiling of any isosceles $ABC$. Turning to $N=44$, the tile would have to be $(25,11,30)$. To apply Lemma~\ref{lemma:no28}, given the representations of $X$ and $Y$ in the table, it suffices to check that no multiple of 30 can be made of up to one $a$ edges and up to five $b$ edges, or up to two $a$ edges and up to two $b$ edges. 
That completes the proof of the theorem. We note that the technique does not extend to $N=45$. After that the next possibilities are $63, 64, 72$. We will never get to $669780$ by considering one $N$ at a time. \section{Tilings of an isosceles triangle by a tile $(\alpha,\beta,2{\mathfrak P}i/3)$} In this section we take up the tilings of an isosceles triangle $ABC$ with base angles $\alpha$, by a tiling with angles $(\alpha,\beta,2{\mathfrak P}i/3)$, where $\alpha$ is not a rational multiple of ${\mathfrak P}i$. Let $\gamma = 2{\mathfrak P}i/3$. Since $\alpha + \beta + \gamma = {\mathfrak P}i$, the vertex angle ${\mathfrak P}i-2\alpha$ is equal to $\alpha + 3 \beta$. Laczkovich proved \cite[Theorem~2.5]{laczkovich1995} that there exist tiles that can be used to tile {\em some} such $ABC$, but $N$ constructed by his method can be large. The smallest such tiling we have been able to construct has $N = 75140$, which is too large to draw. It is constructed by first constructing a dissection of $ABC$ into {\em similar} rational triangles and parallelograms, following the ideas of Laczkovich. Fig.~\ref{figure:isosceles2pi} shows such a preliminary dissection. Then to get a tiling by congruent triangles, we have to choose a very small tile such that if each of the visible triangles is tiled quadratically, then every shared edge is an integer multiple of the tile edges. For example if the red triangle will get $p^2$ tiles and the light blue triangle will get $q^2$ tiles, then we must satisfy $pb = qa$, in this case $5p = 3q$. There will be another such equation on every shared boundary. To clear all the denominators we will have to use a large number of tiles. \begin{figure} \caption{With tile $(3,5,7)$, $N$ would be $1878500$, too large to draw.} \label{figure:isosceles2pi} \end{figure} \begin{figure} \caption{With this configuration, $N$ is $3681860$--even larger.} \label{figure:isosceles2pi2} \end{figure} Fig.~\ref{figure:isosceles2pi2} shows that a slightly different arrangement of the similar triangles and parallelograms can make a difference in the resulting number of tiles. In one figure, the parallelogram is tiled using $a$ and $c$ edges on the boundary; in the other figure, the parallelogram is tiled using $a$ and $b$ edges on the boundary. That makes the equations at the boundary different, even though the other boundary conditions are the same and the areas of the two parallelograms are equal. The same decomposition into similar triangles works when the tile shape is different. One might hope that with the right shape of tile, the parallelogram would not be necessary, and $N$ would therefore be smaller. That is, however, not the case. \begin{comment} \subsection{The Diophantine equation $c^2 = a^2 + b^2 + ab$} Let $(a,b,c)$ be the sides of a triangle with angles $(\alpha,\beta,2{\mathfrak P}i/3)$. According to the law of cosines, we have \begin{eqnarray*} c^2 &=& a^2 + b^2 - 2ab \cos(2{\mathfrak P}i/3) \\ &=& a^2 + b^2 + ab \mbox{\qquad since $\cos(2{\mathfrak P}i/3) = -1/2$} \end{eqnarray*} Therefore this Diophantine equation determines the possible rational triangles with a $2{\mathfrak P}i/3$ angle. \begin{lemma}\label{lemma:relprime} Suppose $c^2 = a^2 + b^2 + ab$, and $(a,b,c)$ are integers with no common factor. Then $(a,b,c)$ are pairwise relatively prime. \end{lemma} \noindent{\em Proof}. If prime $p$ divides any two of $(a,b,c)$ then it also divides the third one. We wish to parametrize the integer solutions of the equation $c^2 = a^2 + b^2 + ab$. 
\begin{theorem} \label{theorem:representation} Let $(a,b,c)$ be positive integers with no common factor satisfying $c^2 = a^2 + b^2 + ac$. Then there are positive relatively prime integers $(s,t)$, exactly one of which is even, such that one of the following representations holds: (i) $(a,b,c)$ are all odd and \begin{eqnarray*} t &<& s \ < \ \ 3t \mbox{\rm \quad and 3 does not divide $s$ and}\\ a &=& 3t^2 + 2st - s^2 \ = \ (3t-s)(t+s)\\ b &=& s^2 + 2st- 3t^2 \ = \ (s+3t)(s-t) \\ c &=& s^2 + 3t^2 \end{eqnarray*} or $(a,b,c)$ are not all odd, and they are $1/4$ times those values. (ii) $(a,b,c)$ are not all odd and \begin{eqnarray*}\ s &>& t/2 \mbox{\quad and 3 does not divide $s+t$}\\ a &=& 2st - t^2 \\ b &=& s^2 - 2st \\ c &=& s^2 - st + t^2. \end{eqnarray*} In the first representation, $a < b$ if and only if $\sqrt 3 t < s$. In the second representation, $a < b$ if and only if $s^2 + t^2 > 4st$. \end{theorem} \noindent{\em Example}. With $(a,b,c) = (3,5,7)$, we have $(s,t) = (2,1)$ in the first representation. In the second representation, the condition $s^2 + t^2 > 4st$ corresponds (as it turns out) to $a < b$. With $(s,t) = (3,1)$ in the second representation we get $(5,3,7)$, because that condition is violated. With $(s,t) = (2,1)$, we get $b=0$ since the condition that $3 \not |\ s+t$ is violated. With $(s,t) = (4,1)$ we get $(7,8,13)$, a legitimate solution. \noindent{\em Proof}. We will make two transformations of the equation $c^2 = a^2 + b^2 + ab$, subject to the inequality $a < b$. We will divide into two cases, according as $a$ and $b$ are both odd, or one is even. (They can't both be even by Lemma~\ref{lemma:relprime}.) First we assume that $(a,b,c)$ are all odd. Then the first transformation is \begin{eqnarray*} x &=& (b-a)/2 \\ y &=& (b+a)/2 \\ z &=& c \end{eqnarray*} or in the other direction, \begin{eqnarray*} a &=& y-x \\ b &=& y+x \\ c &=& z/2 \end{eqnarray*} Then positive integer solutions of $c^2=a^2+b^2+ab$ correspond to integer solutions of \begin{eqnarray} 3y^2 +x^2 &=& z^2 \label{eq:8590} \end{eqnarray} with $(x,y)$ of opposite parities: \begin{eqnarray*} c^2 &=& a^2 + b^2 + ab \\ (2c)^2 &=& (2a)^2 + (2b)^2 + 4ab \\ z^2 &=& (y-x)^2 + (y+x)^2 + (y-x)(y+x) \\ &=& 2y^2 + 2x^2 + y^2 - x^2 \\ &=& 3y^2 + x^2 \end{eqnarray*} $0<x$ corresponds to $a < b$ and $x < y$ to $0 < a$. If $a$ and $b$ are relatively prime, then $x$ and $y$ have no common factor. If $x$ and $y$ are relatively prime, then $2x$ and $2y$ (which are $b-a$ and $b+a$) have no common factor but 2. Since any common factor of $a$ and $b$ would divide $2x$ and $2y$, $a$ and $b$ have no common factor but 2. But we have assumed they are both odd. Therefore, if $(x,y)$ are relatively prime, so are $(a,b)$. Solutions with $a$ and $b$ both odd correspond to $(x,y)$ with opposite parities. Equation (\ref{eq:8590}) is a special case of the equation analyzed in Corollary 6.3.15, p.~353 of \cite{cohen}. There it is proved that the integral solutions of (\ref{eq:8590}) with $x$ and $y$ relatively prime are parametrized by relatively prime integers $(s,t)$ of opposite parity, with two formulas given, one applicable when $3$ divides $s$, and the other when $3$ does not divide $s-t$. 
Those formulas, specialized to our equation, are \begin{eqnarray*} x &=& {\mathfrak P}m (s^2-3t^2) \mbox{\qquad when $3 \not | s$}\\ y &=& 2st \\ z &=& s^2 + 3t^2 \end{eqnarray*} \begin{eqnarray*} x &=& s^2 + t^2 + 4st \mbox{\qquad when $3 \not | (s-t)$} \\ y &=& s^2-t^2 \\ z &=& 2(s^2 + t^2+st) \end{eqnarray*} (In \cite{cohen} there is a ${\mathfrak P}m$ sign on $z$, but in this specialization $z \ge 2(s-t)^2$, so $z\ge 0$.) Then the first parametrization gives the formulas in part (i) of the lemma. Note that when $(s,t)$ have opposite parities, then $(x,y)$ also have opposite parities, in both parametrizations. In the first parametrization, changing the sign of both $s$ and $t$ leaves $(x,y,z)$ unchanged. Therefore we can assume $t \ge 0$. Then $y > 0$ implies both $t$ and $s$ are positive. The condition $a > 0$ corresponds to $x < y$, which becomes \begin{eqnarray*} s^2 - 3t^2 &<& 2st\\ s^2-2st - 3t^2 &<&0 \\ (s-3t)(s+t) &<& 0 \\ s &<& 3t \end{eqnarray*} That is one of the conditions stated in the lemma. The condition $b > 0$ is \begin{eqnarray*} 0 &<& y + x \\ &=& 2st + s^2-3t^2 \\ &=& (s-t)(s+3t)\\ 0&<& (s-t) \mbox{\qquad since $s$ and $t$ are positive} \\ t&<&s \end{eqnarray*} as stated in the lemma. Since $s < 3t$, we can drop the ${\mathfrak P}m$ sign in the definition of $x$. Now we will express $(a,b,c)$ in terms of $(s,t)$, by eliminating the intermediate variables $(x,y,z)$. That is a simple piece of algebra; it can be checked by hand or by the following snippet of SageMath code. \begin{verbatim} var('a,b,c,x,y,z,s,t') x = s^2-3*t^2 y = 2*s*t z = 3*t^2+s^2 a = y-x b = y+x c = z ans = (a,b,c) for x in ans: print(x) \end{verbatim} Finally, we observe that if we start with $(s,t)$ relatively prime and of opposite parities, these formulas yield odd $(a,b,c)$ with no common factor. That completes the proof of part (i) of the lemma. It remains to check the second possible form of the parametrization, which applies in case one of $(a,b,c)$ is even. (That is the same as to say one of $(a,b)$ is even, since $c^2 = a^2 + b^2 + ab$.) In that case we define the relationship between $(x,y)$ and $(a,b)$ differently than in case (i): \begin{eqnarray*} x &=& (b-a) \\ y &=& (b+a) \\ z &=& 2c \end{eqnarray*} or in the other direction, \begin{eqnarray*} a &=& (y-x)/2 \\ b &=& (y+x)/2 \\ c &=& z/2 \end{eqnarray*} Then according to \cite{cohen}, the second parametrization above generates all solutions of (\ref{eq:8590}) with $(x,y)$ of opposite parity. If $(x,y)$ have opposite parity then $(a,b,c)$ as just defined will be integers. It will be convenient to change the sign of $s$, which is legal since $s$ runs over all integers. Then we have, \begin{eqnarray*} x &=& s^2 + t^2 - 4st \mbox{\qquad when $3 \not | (s-t)$} \\ y &=& s^2-t^2 \\ z &=& 2(s^2 + t^2 -st) \end{eqnarray*} Since changing the sign of both $s$ and $t$ does not change the formula, we can assume $t \ge 0$. Since $y-x = a$, we have $y-x > 0$. Therefore \begin{eqnarray*} s^2 - t^2 - (s^2 + t^2 - 4st) &>& 0 \\ -2t^2+4st &>& 0 \\ -t + 2s &>& 0 \\ s &>& t/2 \end{eqnarray*} Hence both $s$ and $t$ are positive. Evaluating $(a,b,c)$ in terms of $(s,t)$, we find the second formula of the lemma. It can be checked with this SageMath code: \begin{verbatim} def feb27c(): var('a,b,c,x,y,z,s,t') x = s^2+t^2 + 4*s*t y = s^2-t^2 z = 2*(s^2+t^2 + s*t) a = (y-x)/2 b = (y+x)/2 c = z/2 ans = (a,b,c) for x in ans: print(x) \end{verbatim} That completes the proof of the lemma. 
\end{comment} \subsection{The tile is rational} Suppose given an $N$-tiling of some triangle $ABC$ by a tile with angles $(\alpha,\beta,\gamma)$ and sides $(a,b,c)$. In this section we will prove that the tile has to be rational, i.e., the ratios of the sides are all rational, so after a suitable scaling, they will be integers. The proof uses the graphs ${\mathbb G}amma_c$ introduced by Laczkovich and described above; several preparatory lemmas will be developed first. \begin{definition} \label{definition:c/a} A {\bf $c/a$ segment} is a left-terminated interior segment $PQ$ of the tiling supporting two tiles on opposite sides of $PQ$, each with a vertex at $P$, one with its $c$ edge on $PQ$ and one with its $a$ or $b$ edge on $PQ$. The segment is said to ``emanate from $P$.'' Similarly for {\bf $c/b$ segment} and {\bf $a/b$ segment}. \end{definition} \noindent{\em Remarks}. The point $Q$ serves only to indicate the direction of the segment; it can be any point on that ray. \begin{lemma} \label{lemma:extendlink} Let $ABC$ be an isosceles triangle with base angles $\alpha$, tiled by tile $(\alpha,\beta,\gamma)$ with $\gamma = 2{\mathfrak P}i/3$. Let $PQR$ be an internal segment of the tiling with only $c$ edges on one side of $PQ$, such that the tile on that side supported by $QR$ with a vertex at $Q$ has an $a$ or $b$ edge on $PQ$. Then there is a $c/a$ or $c/b$ segment emanating from $Q$. \end{lemma} \noindent{\em Proof}. We only consider tiles on the one side of $PQR$ mentioned in the lemma. There are two cases: Either there are three tiles with a vertex at $Q$, or there are six. Case~1, three tiles at $Q$. Then one of them has its $\gamma$ angle at $Q$, and hence no $c$ edge at $Q$. The other two have a $c$ edge ending at $Q$. One of those lies on $PQ$. The other does {\em not} lie on $QR$, by hypothesis. Since there is no other $c$ edge ending at $Q$, that third $c$ edge forms either a $c/b$ segment or a $c/a$ segment. Case~2, six tiles at $Q$. Then all six angles are $\alpha$ or $\beta$. Each of the six tiles has a $c$ edge ending at $Q$. One lies on $PQ$ and five lie on interior segments. Since five is odd, one of those $c$ edges is not paired with another $c$ edge, and hence constitutes a $c/a$ segment or a $c/b$ segment. That completes the proof of the lemma. \begin{lemma} \label{lemma:crelation} Suppose isosceles triangle $ABC$ with base angles $\alpha$ is $N$-tiled by $(\alpha,\beta,2{\mathfrak P}i/3)$, with $\alpha$ not a rational multiple of ${\mathfrak P}i$. Then there is a relation $$ jc = pa + qb$$ with nonnegative integers $p,q,j$ and $j > 0$. \end{lemma} \noindent{\em Proof}. Suppose, for proof by contradiction, that there is no such relation. Then, by Lemma~\ref{lemma:extendlink}, if $PQ$ is a link in the graph ${\mathbb G}amma_c$, then there is a $c/b$ or $c/a$ segment emanating from $Q$. Extend that segment to the maximal segment $QR$ supporting only tiles with $c$ edges on $QR$. Since there is no relation $jc = pa + qb$, $R$ cannot be the vertex of a tile on the other side of $QR$. Therefore $QR$ is a link in ${\mathbb G}amma_c$. Therefore the out-degree of every node $Q$ in ${\mathbb G}amma_c$ is at least one. But the in-degree of ${\mathbb G}amma_c$ is always at most one. Since the total out-degree is equal to the total in-degree, it follows that every node of ${\mathbb G}amma_c$ has both in-degree and out-degree equal to 1. 
Since no link of ${\mathbb G}amma_c$ can terminate on the boundary of $ABC$, there can be no links of ${\mathbb G}amma_c$ emanating from a vertex on the boundary of $ABC$. I say there is at least one $c$ edge on $AC$. For if not, every tile supported by $AC$ has its $\gamma$ angle at a vertex on $AC$. Since $\gamma > {\mathfrak P}i/2$, there cannot be two $\gamma$ angles at any one vertex. But $\gamma$ angles do not occur at $A$ or $C$, where there are only $\alpha$ angles. Then there is one more $\gamma$ angle on $AC$ than possible vertices to receive them, so by the pigeonhole principle, some vertex on $AC$ has two $\gamma$ angles, contradiction. Therefore, as claimed, there is at least one $c$ edge on $AC$. Similarly, there is at least one $c$ edge on $AB$ and at least one $c$ edge on $BC$. Since there is a single tile at $A$ with its $\alpha$ angle at $A$, the $b$ edge of that tile lies on $AC$ or on $AB$. Then there exists a segment $PQ$ lying on $AB$ or on $AC$ supporting only tiles with $c$ edges on $PQ$, and with a $b$ edge beyond $Q$. Then by Lemma~\ref{lemma:extendlink}, there is a $c/a$ segment or a $c/b$ segment emanating from the boundary point $Q$, say $QR$. Choose $R$ as far as possible from $Q$ such that $QR$ bounds only $c$ tiles on one side. Then $R$ is not a vertex of a tile on the other side of $QR$, since that would give rise to a relation $jc = pa + qb$ with $j > 0$. Hence $QR$ is a link in ${\mathbb G}amma_c$. But that is a contradiction, since $Q$ is on the boundary of $ABC$. That completes the proof of the lemma. \begin{lemma} \label{lemma:abrelation} Suppose isosceles triangle $ABC$ with base angles $\alpha$ is $N$-tiled by $(\alpha,\beta,2{\mathfrak P}i/3)$, with $\alpha$ not a rational multiple of ${\mathfrak P}i$. Then there is a relation $$ ja = pb + qc$$ with nonnegative integers $p,q,j$ and $j > 0$ and $p > 0$. \end{lemma} \noindent{\em Proof}. Suppose, for proof by contradiction, that there is no such relation. A {\bf center} is a vertex of the tiling where three tiles meet, each having its $\gamma$ angle at that vertex. A {\bf star} is a vertex $P$ where six tiles lying on one side of a line through $P$ have three $\alpha$ and three $\beta$ angles at $P$. A star can occur on the boundary of $ABC$ or in the interior. A {\bf double star} is a vertex where twelve tiles meet, six with $\alpha$ angles and six with $\beta$ angles. The three vertices of $ABC$ together have six angles, three $\beta$ and three $\alpha$, the same count as a star. Let ${\mathcal S}$ be the number of stars (counting a double star as two), and ${\mathcal C}$ the number of centers. Now let us calculate the number of $\alpha$ angles, plus the number of $\beta$ angles, minus twice the number of $\gamma$ angles. At each vertex other than stars, centers, and $A$, $B$, and $C$, we get zero. At each center we get $-6$. At each star we get $6$ (and 12 at double stars). Adding them up we get $6\,{\mathcal S}-6\,{\mathcal C} + 6$, where the final 6 is for $A$, $B$, and $C$ together. Since the total number of $\alpha$ is $N$, the total number of $\beta$ is $N$, and the total number of $\gamma$ is $N$, we get zero for the grand total. That is, $6\,{\mathcal S}-6\,{\mathcal C}+6 = 0$. Then ${\mathcal S} = {\mathcal C}-1$. (For example, in Fig.~\ref{figure:isosceles2pi2}, we see one center and no stars.) Now we consider the graph ${\mathbb G}amma_a$. Every center has an out-link, since at a center $P$ there are three tiles, each with an $a$ edge and a $b$ edge at $P$. 
Since 3 is odd, one of the $a$ edges shares a segment with one of the $b$ edges, i.e., an $a/b$ edge emanates from $P$. (For example, note the center in Fig.~\ref{figure:isosceles2pi2}.) Let $Q$ be the farthest point from $P$ along that segment such that $PQ$ supports only $a$ tiles on one side, say the ``left'' side. If $Q$ were vertex of a tile on the other side, we would have a relation $ja = pb + qc$, and $p$ would be positive since there is a $b$ edge on the ``right'' side of $PQ$. Since by hypothesis, there is no such relation, $Q$ is not a vertex of a tile on the other side. Then $PQ$ is a link in ${\mathbb G}amma_a$. On the the other hand, the in-degree of a center is zero, since no segment of the tiling passes through $P$. At a star $Q$ on an internal segment $PQ$, six tiles meet, providing six $c$ edges, three $a$ edges, and three $b$ edges. There could be an incoming link at $Q$, if the tile on $PQ$ at $Q$ has its $a$ edge there, the tile past $Q$ does not have its $a$ edge on $PQ$ extended, and the other two $a$ edges are not on the same segment. The in-degree of ${\mathbb G}amma_a$ can never exceed 1, since it is impossible for two lines of the tiling to cross at $Q$ when a link ends at $Q$. At the vertices $A$, $B$, and $C$ the in-degree is zero, since a link cannot terminate on the boundary. At $A$ and $C$ the out-degree is zero since there are no interior edges. At $B$ there might be outgoing links, or not. Now we calculate the out-degree minus the in-degree vertex by vertex. At centers it is 1. At stars it is 0 or -1 (or -2 possibly at double stars). Let $t$ be the total out-degree minus in-degree at stars; then $0 \ge t \ge -S$. At $A$ and $C$ it is zero. At $B$ it is non-negative, say $n_B$. At all other vertices it is zero. The total of out-degree minus in-degree is then ${\mathcal C} + t + n_B \ge {\mathcal C}-{\mathcal S}$. Since ${\mathcal S} = {\mathcal C}-1$, the total out-degree minus in-degree is $\ge 1$. On the other hand, it is zero since every link has a head and a tail. This contradiction completes the proof of the lemma. \begin{theorem} \label{theorem:rationaltile120} Let $ABC$ be an isosceles (and not equilateral) triangle with base angles $\alpha$. Suppose $ABC$ is tiled by a tile $(\alpha,\beta, 2{\mathfrak P}i/3)$ with $\alpha$ not a rational multiple of ${\mathfrak P}i$. Then the tile is rational. \end{theorem} \noindent{\em Proof}. Suppose $ABC$ is tiled as in the lemma. By Lemma~\ref{lemma:crelation} and Lemma~\ref{lemma:abrelation}, there are relations \begin{eqnarray*} j c &=& pa + qb \mbox{\qquad with nonnegative integers $j,p,q$ and $j > 0$} \\ J a &=& Pb + Qc \mbox{\qquad with nonnegative integers $J,P,Q$ and $J > 0$ and $P > 0$} \end{eqnarray*} Dividing by $c$ we have \begin{eqnarray*} j &=& p(a/c) + q(b/c) \\ -Q &=& -J(a/c) + P(b/c) \end{eqnarray*} Case~1, $q \neq 0$. The equations can be solved for $(a/c)$ and $(b/c)$, provided the determinant $pP+Jq \neq 0$. Since $q \neq 0$ and $J \neq 0$, and $p \ge 0$ and $q \ge 0$, the determinant is not zero. Case~2, $q = 0$. Then $a/c = j/p$ is rational by the first equation, and $b/c$ is rational by the second equation, since $P \neq 0$. That completes the proof of the theorem. \subsection{The Diophantine equation $c^2 = a^2 + b^2 + ab$} Let $(a,b,c)$ be the sides of a triangle with angles $(\alpha,\beta,2{\mathfrak P}i/3)$. 
According to the law of cosines, we have \begin{eqnarray*} c^2 &=& a^2 + b^2 - 2ab \cos(2{\mathfrak P}i/3) \\ &=& a^2 + b^2 + ab \mbox{\qquad since $\cos(2{\mathfrak P}i/3) = -1/2$} \end{eqnarray*} Therefore this Diophantine equation determines the possible rational triangles with a $2{\mathfrak P}i/3$ angle. \begin{lemma}\label{lemma:relprime} Suppose $c^2 = a^2 + b^2 + ab$, and $(a,b,c)$ are integers with no common factor. Then $(a,b,c)$ are pairwise relatively prime. \end{lemma} \noindent{\em Proof}. If prime $p$ divides any two of $(a,b,c)$ then it also divides the third one. \begin{lemma} \label{lemma:2b+a} Suppose $(a,b,c)$ are integers with no common factor that are the sides of a triangle with angles $(\alpha,\beta,2{\mathfrak P}i/3)$. Then $2b+a$ is relatively prime to each of $a$, $b$, and $c$, except that if $a$ is even, 2 divides both $a$ and $2b+a$. \end{lemma} \noindent{\em Proof}. By Lemma~\ref{lemma:relprime}, $2b+a$ is relatively prime to $b$ and $a$, with the exception mentioned in the statement. It remains to prove $2b+a$ is relatively prime to $c$. Suppose, for proof by contradiction, that $p$ is a prime that divides both $c$ and $2b+a$. Then $p$ is not 2, since then $c$ and $a$ would both be even, contradicting Lemma~\ref{lemma:relprime}. Suppose, for proof by contradiction, that $p=3$. Then mod 3 we have $2b+a \equiv 0$. Adding $b$ to both sides we have $3b + a \equiv b$. But $3b\equiv 0$, so $a \equiv b$. Now $c = a^2 + b^2 + ab = (a+b)^2 -ab \equiv a^2$ mod 3, since $a \equiv b$. Since $p | c$ we have $p | a^2$ and hence $p | a$. Hence $a$ and $b$ are both divisible by 3, contradiction, since $(a,b)$ are relatively prime. Hence $p \neq 3$. Then we have, mod p, \begin{eqnarray*} c &\equiv& 0 \\ c^2 &\equiv& a^2 + b^2 + ab \\ 2b+a &\equiv& 0 \end{eqnarray*} Substituting $c =0$ in the last two equations we have \begin{eqnarray} 0 &\equiv& a^2 + b^2 + ab \label{eq:8806}\\ 2b + a &\equiv&0 \label{eq:8807} \end{eqnarray} From the second equation we have $a \equiv -2b$. Since $a$ and $b$ are relatively prime, and $p \neq 2$, this implies that neither $a$ nor $b$ is divisible by $p$. Substituting $a = -2b$ in (\ref{eq:8806}), we have \begin{eqnarray*} 0 &\equiv& 4b^2 + b^2 -2b^2 \\ &\equiv& \ 3b^2 \\ 0 &\equiv& b^2 \mbox {\qquad since $p \neq 3$}\\ 0 &\equiv& b \end{eqnarray*} Then $a \equiv -2b \equiv 0$. Hence p divides both $a$ and $b$, contradiction, since $a$ and $b$ are relatively prime. That completes the proof of the lemma. {\em Remark}. Using the techniques of \cite{cohen}, Corollary 6.3.15, p.~353, we are able to parametrize the solutions of $c^2 = a^2 + b^2 + ab$ by two integer parameters $(s,t)$ or one rational parameter $s/t$. Having worked this out, and used it in preliminary versions, in the end I found simpler proofs without it. Nevertheless I mention the reference in case it may be useful to somebody. \subsection{The area equation for an isosceles tiling with $\gamma = 2{\mathfrak P}i/3$} \begin{lemma} \label{lemma:2pi/3area} Let isosceles triangle $ABC$ with base angles $\alpha$ be $N$-tiled by a tile with angles $(\alpha,\beta,2{\mathfrak P}i/3)$. Suppose $\alpha$ is not a rational multiple of ${\mathfrak P}i$. Let $X$ be the length of the equal sides $AB$ and $BC$, and $Y$ the length of the base $AC$. Then the area equation is \begin{eqnarray*} X^2 (2b+a) &=& Nbc^2 \end{eqnarray*} and another form of the area equation is \begin{eqnarray*} XY &=& Nbc \end{eqnarray*} \end{lemma} \noindent{\em Proof}. 
By the law of cosines, \begin{eqnarray} a^2 &=& b^2 + c^2 - 2bc \cos \alpha \nonumber\\ \cos \alpha &=& \frac {b^2 + c^2 -a^2} {2 bc}\nonumber \\ &=& \frac {b^2 + (a^2 + b^2 + ab) - a^2}{2bc}\nonumber \\ &=& \frac {2b^2 + ab} {2bc} \nonumber\\ \cos \alpha&=& \frac {2b+a}{2c}\label{eq:8767} \end{eqnarray} Twice the area of $ABC$ is $X^2 \sin(\pi-2\alpha) = X^2 \sin 2\alpha$. Twice the area of the tile is $bc\sin \alpha$. Equating the area of $ABC$ to $N$ times the area of the tile, we have \begin{eqnarray*} X^2 \sin 2\alpha &=& N bc \sin \alpha \\ 2X^2 \sin \alpha \cos \alpha &=& N bc \sin \alpha \\ 2X^2 \cos \alpha &=& Nbc \end{eqnarray*} Substituting for $\cos \alpha$ the value from (\ref{eq:8767}), \begin{eqnarray*} 2X^2 \left(\frac {2b+a}{2c}\right) &=& Nbc \\ X^2 (2b+a) &=& Nbc^2 \end{eqnarray*} That completes the proof of the first formula of the lemma. To prove the second form: twice the area of $ABC$ is $XY \sin \alpha$. Twice the area of the tile is $bc \sin \alpha$. Therefore $XY = Nbc$. That completes the proof of the lemma. \subsection{A necessary condition} \begin{lemma} \label{lemma:2b+adividesN} Let isosceles triangle $ABC$ with base angles $\alpha$ be $N$-tiled by a tile with angles $(\alpha,\beta,2\pi/3)$ and sides $(a,b,c)$. Suppose $\alpha$ is not a rational multiple of $\pi$. Then (i) $2b+a$ divides $N$, and $Nb/(2b+a)$ is a square, say $m^2$. (ii) The side and base of $ABC$ are given by \begin{eqnarray*} X &=& mc \\ Y &=& m(2b+a) \end{eqnarray*} \end{lemma} \noindent{\em Remarks}. This lemma gives us an {\em a priori} bound on $(a,b,c)$, namely $2N$, since $c^2 = a^2 + b^2 + ab \le (a+b)^2 \le (2b+a)^2 \le (2N)^2$. Also, if $N$ is prime, $N = 2b+a$, and $b = m^2$. It is unknown if this actually can happen. \noindent{\em Proof}. Let $X$ be the length of the equal sides $AB$ and $BC$. According to Lemma~\ref{lemma:2pi/3area}, $$ X^2 (2b+a) = Nbc^2.$$ By Lemma~\ref{lemma:relprime}, $a$, $b$, and $c$ are pairwise relatively prime. By Lemma~\ref{lemma:2b+a}, if $a$ is odd, then $2b+a$ is relatively prime to each of $a$, $b$, and $c$. On the other hand, if $a$ is even, then $b$ and $c$ are odd, so $2b+a$ is relatively prime to $c$ and $b$. Thus, whatever the parity of $a$, $2b+a$ is relatively prime to $b$ and $c$. Then by the area equation, $2b+a$ divides $N$. According to the area equation, \begin{eqnarray*} \left(\frac X c\right)^2 &=& \frac {Nb}{2b+a} \end{eqnarray*} Therefore $Nb/(2b+a)$ is a rational square, and since it is an integer, it is an integer square, say $m^2$. That completes the proof of part~(i). Ad (ii). Since $X/c$ and $m$ are positive and have equal squares, they are equal, so $X = cm$ as claimed. We compute $Y$: \begin{eqnarray*} \cos \alpha &=& \frac {b^2+c^2-a^2} {2 bc} \mbox{\qquad by the law of cosines}\\ &=& \frac {2b^2 + ab} {2bc} \mbox{\qquad since $c^2 = a^2+b^2+ab$}\\ &=& \frac {2b + a}{2c} \\ Y &=& 2 X \cos \alpha \mbox{\qquad where $X = \vert AB \vert$}\\ &=& \frac X c (2b+a) \end{eqnarray*} By part~(i), $c$ divides $X$, so the right-hand side is an integer. That completes the proof of the lemma. {\em Example~1}. In the tiling whose construction begins with Fig.~\ref{figure:isosceles2pi2}, we have $N = 75140$, $(a,b,c) = (3,5,7)$, so $2b + a = 13$, and $Nb/(2b+a) = 170^2$, so $m = 170$, $X = mc = 1190$, and $Y = m(2b + a) =2210$. {\em Example~2}. With $N=33$, $(a,b,c) = (5,3,7)$, $2b+a = 11$, $Nb/(2b+a) = 9$, $m = 3$, $X = mc = 21$, and $Y = m(2b+a) = 33$.
We think that no such tiling exists, although the present lemma does not rule it out. In principle one can ``just check all the possibilities'', but that is easier said than done. {\em Example~3}. With $N=37$, $(a,b,c) = (5,16,19)$, $2b+a=37$, $Nb/(2b+a) = 16$, $m=4$, $X = mc = 76$, and $Y = m(2b+a) = 148$. See Fig.~\ref{figure:no37}. We prove in Theorem~\ref{theorem:no37} that no such tiling exists. The method of proof does not depend on 37 being prime and does not extend to $N=71$. \begin{comment} \subsection{$N$ is not prime} \begin{theorem} \label{theorem:notprime120} Let $N$ be a prime number. Then there is no $N$-tiling of any isosceles triangle with base angles $\alpha$ by a tile with angles $(\alpha,\beta,2{\mathfrak P}i/3)$. \end{theorem} \noindent{\em Proof}. Let $N$ be prime and suppose isosceles triangle $ABC$ is $N$-tiled as in the statement of the lemma. Let $X = \vert AB \vert$. By Lemma~\ref{lemma:2pi/3area}, the area equation is \begin{eqnarray*} X^2 (2b+a) &=& Nbc^2 \end{eqnarray*} By Lemma~\ref{lemma:2b+adividesN}, $2b+a$ divides $N$. Since $N$ is prime, $N = 2b+a$, and dividing the area equation by $N$, we have $X^2 = bc^2$. Then $b$ is a rational square, hence an integer square $m^2$, and $X = mc$. Then $Y = (X/c)(2b+a) = m(2b+a)= ma + 2mb$. Suppose $ma = jb$ for some $j$. Since $a$ and $b$ are relatively prime, we have $j \ge a$ and $m \ge b = m^2$, which is impossible unless $m=1$. If $m=1$ then $a=jb$, which is impossible since $a$ and $b$ are relatively prime. Let $T$ be a tile supported by $AC$ with its $a$ edge on $AC$. The altitude of $T$ is $\sin({\mathfrak P}i/3)b$. The altitude of $ABC$ is $mc \sin \alpha = mc(a/c) \sin ({\mathfrak P}i/3)$. The ratio of altitude of $T$ to altitude of $ABC$ is thus $b/(mc(a/c)) = bc/amc = b/(am) = m^2/(am) = m/a$. If Tile~4 has its $c$ edge at angle $\beta$ to $BC$ then its coordinates are $$(X \cos \alpha + c \cos(\alpha+\beta), X \sin \alpha - c \sin(\alpha+\beta)).$$ Since $X = mc$, the $y$-coordinate comes to $mc \sin \alpha + c \sin(\alpha+\beta)$. \end{comment} \subsection{Ruling out small values of $N$} \begin{theorem} \label{theorem:smallN120} If there is an $N$-tiling of some isosceles triangle $ABC$ with base angles $\alpha$ by a tile with angles $(\alpha,\beta,2{\mathfrak P}i/3)$, then $N$ is at least 33. If $N \le 200$ then $N$ is one of the values shown in Table~\ref{table:isosceles120}, and the side and base of $ABC$ must be as given in the table. \end{theorem} \begin{table}[ht] \caption{$N < 200$ and $(a,b,c)$ not ruled out by Lemma~\ref{lemma:2b+adividesN}.} \label{table:isosceles120} \begin{tabular}{rrr} $N$ & $(a,b,c)$ & $(X,Y)$\\ \hline 33 &(5, 3, 7) & (21, 33)\\ 37 &(5, 16, 19)& (76, 148)\\ 46 &(7, 8, 13)&(52, 92) \\ 65 &(3, 5, 7)&(35, 65)\\ 71 &(39, 16, 49)&(196, 284)\\ 74 &(56, 9, 61)&(183, 222)\\ 130& (16, 5, 19)&(95, 130)\\ 132 &(5, 3, 7)&(42, 66)\\ 148 &(5, 16, 19)&(152, 296)\\ 154 &(8, 7, 13)&(91, 154)\\ 184 &(7, 8, 13)&(104, 184)\\ 193 &(143, 25, 157)&(785, 965) \end{tabular} \end{table} \noindent{\em Remarks}. We do not suggest that tilings for $N$ in the table do, or do not, exist, only that they are not ruled out by the simple considerations of area and boundary tiling. The prime numbers 37, 71, and 193 are not ruled out immediately, and two of those are congruent to 3 mod 4. 
Hence the possibility of $N$ prime for this kind of tiling is not ruled out by the area equation and boundary-tiling conditions; but at least the cases 7, 11, 19 are eliminated, which is required for a proof that there are no $N$-tilings of any triangle for those values of $N$. Actually, we are able to rule out $N=37$; see Theorem~\ref{theorem:no37} below. But the argument is special to $N=37$, and does not appear to have anything to do with the primality of 37. \noindent{\em Proof}. Let the positive integer $N$ be given, and suppose there is an $N$-tiling of some isosceles $ABC$ by a tile $(\alpha,\beta,2\pi/3)$. By Theorem~\ref{theorem:rationaltile120}, the tile is rational, so we may suppose its sides are integers $(a,b,c)$ with no common divisor. According to Lemma~\ref{lemma:2b+adividesN}, $2b+a$ divides $N$ (so $a$ and $b$ are at most $N$), and $Nb/(2b+a)$ is a square, say $m^2$. Then, since the tile has a $2\pi/3$ angle, $c$ is determined by the equation $c^2 = a^2+b^2 + ab$. If $c$ is not an integer, then we do not consider $(a,b,c)$ further. Also, if $(a,b,c)$ is not a triangle, because the sum of two of its sides is less than the third, we do not consider it further. Table~\ref{table:isosceles120} was computed by running this algorithm for $N \le 200$. There are no entries for $N < 33$. That completes the proof of the theorem. We note that it would be a waste of time to compute the length of the base $Y$ and reject $(a,b,c)$ in case $Y$ is not an integer, because $Y$ always {\em has} to be an integer $m(a+2b)$, by Lemma~\ref{lemma:2b+adividesN}. Similarly, it would be a waste of time to look for possible boundary tilings in the hope of rejecting some tiles, since by Lemma~\ref{lemma:2b+adividesN}, with $m = X/c$ we always have $X = mc$ and $Y = ma + 2mb$.
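For the doubtful reader, the following is a minimal sketch (ours, not previously published code) of the search just described. It is written in plain Python, which also runs under SageMath, the system used for the snippets earlier in this paper; the function name {\tt candidates} and the loop bounds are our own choices. Running it for $N \le 200$ reproduces Table~\ref{table:isosceles120}; the triangle inequality needs no separate test, since $c^2 = a^2+ab+b^2 < (a+b)^2$ always.
\begin{verbatim}
from math import gcd, isqrt

def is_square(n):
    r = isqrt(n)
    return r * r == n

def candidates(N):
    # tiles (a,b,c) and sides (X,Y) not ruled out by Lemma 2b+adividesN
    out = []
    for b in range(1, N // 2 + 1):
        for a in range(1, N - 2 * b + 1):
            s = 2 * b + a
            if N % s or gcd(a, b) != 1:     # need (2b+a) | N, no common factor
                continue
            if not is_square(N * b // s):   # Nb/(2b+a) must be a square m^2
                continue
            c2 = a * a + b * b + a * b      # law of cosines, gamma = 2*pi/3
            if not is_square(c2):           # c must be an integer
                continue
            m, c = isqrt(N * b // s), isqrt(c2)
            out.append((a, b, c, m * c, m * s))   # (a, b, c, X, Y)
    return out

for N in range(1, 201):
    for row in candidates(N):
        print(N, row)
\end{verbatim}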
\begin{theorem} \label{theorem:no37} There is no 37-tiling of an isosceles triangle with base angles $\alpha$, using a tile with $\gamma = 2\pi/3$. \end{theorem} {\em Proof}. By Theorem~\ref{theorem:smallN120}, the tile would have to be $(a,b,c) = (5,16,19)$. Then the area equation can be used to show that $(X,Y) = (76, 148)$. That makes the altitude of $ABC$ equal to $10\sqrt 3 \approx 17.32$. If there is a tiling, there must be four tiles at $B$, three of which have their $\beta$ angles at $B$, and the fourth has its $\alpha$ angle there. Number those tiles 1 to 4 starting from $AB$ and ending at $BC$. Renaming $A$ and $C$ if necessary, we may assume that the $\alpha$ angle at $B$ belongs to Tile 1 or Tile 2, so Tile~3 and Tile~4 have their $\beta$ angles at $B$. Those two tiles each have a $c$ edge. The case when Tile~4 has its $c$ edge in the interior is shown in Fig.~\ref{figure:no37}. \begin{figure} \caption{The tile is inside $ABC$, but just barely.} \label{figure:no37} \end{figure} In that position, a tile barely fits into triangle $ABC$, and its eastern $b$ edge cannot be matched by another tile's $b$ edge, for that tile would not be inside $ABC$. Nor can tiles be laid there with $a$ edges; so this case is impossible. Therefore Tile~4 has its $c$ edge on $BC$, and shares its $a$ edge with Tile~3. See Fig.~\ref{figure:no37b}. \begin{figure} \caption{No 37-tiling: Tile 4 with its $c$ edge on the boundary.} \label{figure:no37b} \end{figure} Tile~3 cannot have its $c$ edge on the west, as a $c$ edge emanating from $B$ at that angle would extend past $AC$. (Its $y$-coordinate, with $AC$ on the $x$-axis, would be $-0.866$.) Therefore it has its $c$ edge on the east, next to the $a$ edge of Tile~4, leaving an impossible situation, as a $b$ or $c$ edge will not fit inside $ABC$ on the east of Tile~3, nor will any number of $a$ edges. That contradiction completes the proof. \noindent{\em Remark.} The next case of prime $N$ to consider would be $N=71$. It does not seem fruitful to continue this game by hand; and in this paper, we abstain from the attempt to establish non-existence results by computer search, because of the difficulty of establishing the correctness of such results beyond a shadow of doubt. It is true that we used a computer in Theorem~\ref{theorem:smallN120}, but only in the most trivial way: a doubtful reader could easily replicate Table~\ref{table:isosceles120}, perhaps even by hand. \subsection{Given $N$, find the possible tiles and $ABC$} \begin{theorem}\label{theorem:possibletiles120} Given $N$, we can efficiently compute a finite set $\Delta$ of $(a,b,c,X)$, such that if there is an $N$-tiling of some isosceles triangle $ABC$ with base angles $\alpha$ by a tile with $\gamma = 2\pi/3$, then the tile is $(a,b,c)$ and the side of $ABC$ is $X$, for some $(a,b,c)$ in $\Delta$. \end{theorem} \noindent{\em Remark}. Then by backtracking search, applied to each tile $(a,b,c)$ and isosceles triangle $ABC$ with base angles $\alpha$ and side $X$ with $(a,b,c,X)$ in $\Delta$, we can determine (in principle) if any $N$-tiling of any isosceles $ABC$ exists. But we do not undertake that in this paper; see the previous remark. \noindent{\em Proof}. Let $N$ be given. The algorithm given in the proof of Theorem~\ref{theorem:smallN120} determines the possible tiles $(a,b,c)$, in such a way that $Nb/(2b+a)$ is a square, say $m^2$. Then $X = mc$ must be the side of triangle $ABC$, if there is any $N$-tiling of isosceles $ABC$ by $(a,b,c)$, and $Y = m(2b+a)$ is the base, by Lemma~\ref{lemma:2b+adividesN}. That completes the proof of the theorem. \section{Open problems} The methods and results of this paper leave us still unable to answer some interesting questions. Here we list several. In the following, as elsewhere in this paper, ``isosceles'' means ``isosceles and not equilateral.'' (i) What is the smallest $N$ such that some isosceles triangle with base angles $\alpha$ can be $N$-tiled by a tile of the form $\gamma = 2\alpha$? The smallest such tiling so far explicitly constructed has more than half a million tiles, but for all we know there is a 45-tiling. In fact, we do not even know the smallest $N$ such that some isosceles triangle can be tiled by the tile with sides $(4,5,6)$ (which is the tile used in the 669780-tiling). (ii) If an isosceles triangle is $N$-tiled by a tile with $\gamma = 2\alpha$, or $\gamma = 2\pi/3$, and $\alpha$ is not a rational multiple of $\pi$, is $N$ necessarily even? Or necessarily odd? (We did show $N$ has to be even when $\gamma$ is a right angle and $\alpha \neq \pi/4$, but there seems to be no reason to think it has to be odd or even when $\gamma = 2 \alpha$. The least odd unknown case is $N = 121$.) Could $N$ be a square?
(iii) Is it possible to $N$-tile some isosceles triangle with $N$ a prime number, when the tile has $\gamma = 2{\mathfrak P}i/3$? If it is possible, $N$ has to be at least 71. (For right-angled tiles, it is possible when $N$ is congruent to 1 mod 4, but not when $N$ is congruent to 3 mod 4; when $\gamma = 2\alpha$ it is never possible.) (iv) Find easily checkable necessary and sufficient conditions on $N$ for the existence of $N$-tilings of some isosceles $ABC$ with $\gamma = 2\alpha$ or $\gamma = 2{\mathfrak P}i/3$. Or, determine the existence or non-existence of such tilings, one $N$ at a time, using exhaustive computer search. You can use Table~\ref{table:isosceles120} for your initial test data. \section{Conclusions} We have studied the possible tilings of an isosceles (and not equilateral) triangle $ABC$ by a tile that is a right triangle, or by a tile of the form $(\alpha,\beta,2\alpha)$ where the base angles of $ABC$ are equal to $\alpha$. In the case of a tile $(\alpha,\beta, 2\alpha)$, we derived a necessary condition from the area equation, and we made use of directed graphs inspired by Laczkovich to prove that the tile is necessarily rational. We analyzed the case of a right-angled tile thoroughly enough to give a complete characterization of the possible values of $N$ for which some isosceles $ABC$ can be $N$-tiled. Namely, Theorem~\ref{theorem:isosceles2} says $N$ is twice a square or twice an even sum of squares, except of course for the right isosceles triangle, which can be quadratically $N$-tiled for any square $N$, including odd squares. In the case of a tile with $\gamma = 2\alpha$, we gave a necessary condition, using the area equation and the law of cosines for the tile. That this necessary condition is not trivial is shown by our proof that $N$ cannot be prime. $N=36$ is the least number for which we do not know whether a tiling exists, and $669780$ is the smallest $N$ for which we are certain that there does exist a tiling. Finally, in the case of a tile with $\gamma = 2{\mathfrak P}i/3$, we gave a necessary condition and an algorithm to check it. There are no such tilings for $N < 33$. There is one for $N = 75140$. Between those two values of $N$, there are many values of $N$ satisfying our necessary conditions, for which we do not know whether tilings exist. In all the possible cases of Laczkovich's tables, we have been able to show (either in this paper or in unpublished work) that given $N$, there is a finite set $\Delta$ of tiles $(a,b,c)$ and triangles $ABC$ such that either there is no $N$-tiling falling under that line of the table, or one of the finite set permits an $N$-tiling. Hence, there is (in principle) an algorithm, albeit inefficient, for determining if there is an $N$-tiling. The inefficiency arises from the exponentially large number of ways of placing $N$ tiles of a specific shape into a specific $ABC$. See the previous section for a list of open problems. \end{document}
\begin{document} \title{Quantum Measurement of a Single Spin using Magnetic Resonance Force Microscopy} \author{G.P. Berman,$\!^1$ F. Borgonovi,$\!^{1,2}$ G. Chapline,$\!^{1,3}$ S.A. Gurvitz,$\!^{1,4}$ P.C. Hammel$^5$,\\ D.V. Pelekhov,$\!^5$ A. Suter,$\!^5$ and V.I. Tsifrinovich$^6$} \address{$^1$Theoretical Division and CNLS, Los Alamos National Laboratory, Los Alamos, NM 87545} \address{$^2$Dipartimento di Matematica e Fisica, Universit\`a Cattolica, via Musei 41, 25121 Brescia, Italy, and I.N.F.M., Gruppo Collegato di Brescia, Italy, and I.N.F.N., sezione di Pavia, Italy} \address{$^3$Lawrence Livermore National Laboratory, Livermore, CA 94551} \address{$^4$Department of Particle Physics, Weizmann Institute of Science, Rehovot 76100, Israel} \address{$^5$Condensed Matter and Thermal Physics, Los Alamos National Laboratory, MS K764, Los Alamos NM 87545} \address{$^6$IDS Department, Polytechnic University, Six Metrotech Center, Brooklyn NY 11201} \maketitle {\bf Single-spin detection is one of the important challenges facing the development of several new technologies, e.g. single-spin transistors and solid-state quantum computation. Magnetic resonance force microscopy with cyclic adiabatic inversion, which utilizes cantilever oscillations driven by a single spin, is a promising technique to solve this problem. We have studied the quantum dynamics of a single spin interacting with a quasiclassical cantilever. It was found that, in a fashion similar to the Stern-Gerlach interferometer, the quantum dynamics generates a quantum superposition of two quasiclassical trajectories of the cantilever, which are related to the two spin projections on the direction of the effective magnetic field in the rotating reference frame. Our results show that quantum jumps will not prevent a single-spin measurement if the coupling between the cantilever vibrations and the spin is small in comparison with the amplitude of the radio-frequency external field.}\\ \ \\ Modern solid-state technologies are approaching the level at which manipulating a single electron, a single atom, or a single electron or nuclear spin becomes extremely important. The future successful development of these technologies depends significantly on the development of single-particle measurement methods. While a single electron charge can be detected using a single-electron transistor, methods for the detection of a single electron (or nuclear) spin in solids are still not available. However, many proposals for solid-state nano-devices require a single-electron (or nuclear) spin measurement. There are a few proposals for a solid-state single-spin measurement based on the swap of a spin state to a charge state \cite{kane}, scanning tunneling microscopy \cite{manassen}, or magnetic resonance force microscopy (MRFM)\cite{sidles1,sidles2}. In this paper, we consider a MRFM single-spin measurement. One of the most promising MRFM techniques is based on the cyclic adiabatic inversion (CAI) of electron or nuclear spins \cite{rugar1}. In this technique, the frequency of the spin inversion is in resonance with the frequency of the mechanical vibrations of the ultrathin cantilever, which allows one to amplify the extremely weak force of a spin on the cantilever. The CAI MRFM method was successfully implemented as an alternative to electron and nuclear magnetic resonance for macroscopic ensembles of spins \cite{rugar1,wago}. It has already achieved a sensitivity equivalent to the detection of approximately 200 polarized electron spins\cite{bruland}.
It is clear that the resonant amplification of the cantilever vibrations by a single spin cannot be considered as a classical process. Indeed, the driving force acting on the cantilever is quantized, since it is proportional to the spin projection. As a result, quantum jumps could appear in the cantilever motion, which might prevent the resonant amplification of the cantilever oscillations. In fact, the problem of quantum jumps is a very general one. It always arises in a continuous observation of a single quantum particle. Despite extensive study\cite{knight}, the quantum jumps in a mechanical motion of classical detectors and their effect on a measurement of a quantum system have not been investigated. The analysis of these quantum effects and their influence on a single-spin detection in CAI MRFM are the main subjects of this paper. We consider the cantilever-spin system shown in Fig. 1. A single spin ($S=1/2$) is placed on the cantilever tip. The tip can oscillate only in the $z$-direction. The ferromagnetic particle, whose magnetic moment points in the positive $z$-direction, produces a non-uniform magnetic field which acts on the spin. The uniform magnetic field, $\vec B_0$, oriented in the positive $z$-direction, determines the ground state of the spin. The rotating radio frequency ({\em rf}) magnetic field, $\vec B_1$, induces transitions between the ground state and the excited state of the spin. The origin is chosen to be the equilibrium position of the cantilever tip with no ferromagnetic particle. The {\em rf} magnetic field can be written as $B_x=B_1\cos [\omega t+\varphi(t)],~B_y=-B_1\sin [\omega t+\varphi(t)]$, where $\varphi(t)$ is a periodic change in phase with the frequency of the cantilever, required for a CAI of the spin. A non-uniform magnetic field produces a force on the spin which depends on the spin direction. If the spin direction is changed with a frequency which equals to the cantilever resonant frequency, the amplitude of the cantilever vibrations increases so that it can be detected by optical methods. In the reference frame rotating with the frequency of the transversal magnetic field, $\omega+d\varphi /dt$, the Hamiltonian of the ``cantilever-spin'' system is, \begin{equation} {\cal H}={{P^2_z}\over{2m^*_c}}+{{m^*_c\omega_c^2Z^2}\over{2}} -\hbar\Bigg(\omega_L-\omega-{{d\varphi}\over{dt}}\Bigg)S_z -\hbar\omega_1S_x-g\mu{{\partial B_z}\over{\partial Z}}ZS_z\ . \label{a1} \end{equation} In Eq.~(\ref{a1}), $Z$ is the coordinate of the oscillator which describes the dynamics of the quasi-classical cantilever tip; $P_z$ is its momentum, $m^*_c$ and $\omega_c$ are the effective mass and the frequency of the cantilever (the mass of the cantilever is: $m_c=4m_c^*$); $S_z$ and $S_x$ are the $z$- and the $x$-components of the spin; $\omega_L=\gamma B_z$ (at $z$=0) is its Larmor frequency; $\omega_1=\gamma B_1$ is the Rabi (or nutation) frequency; $\gamma=g\mu/\hbar$ is the gyromagnetic ratio of the spin; $g$ and $\mu$ are the g-factor and the nuclear magneton. (We consider for definiteness a nuclear spin, but the results can be applied also to an electron spin.) We assume that $\omega=\omega_L$, which means that the average frequency of the {\it rf} field, $\omega$, is equal to the Larmor frequency of the spin in the permanent magnetic field. 
Using the dimensionless variables $\tau=\omega_ct$ and $z=Z\sqrt{m_c^*\omega_c/\hbar}$, the spin-cantilever dynamics is described, in the rotating frame, by the following Heisenberg operator equations, \begin{equation} d^2z/d\tau^2+z=2\eta S_z\ ,~~~~ d{\vec{S}}/d\tau=[\vec{S}\times\vec{B}_{eff}], \label{a2} \end{equation} where $\vec {B}_{eff}=(\epsilon,0, -d\varphi/d\tau+2\eta z)$ is the dimensionless effective magnetic field, $\epsilon=\omega_1/\omega_c$ and $\eta=g\mu(\partial B_z/\partial Z)/(2\sqrt{m_c^*\omega_c^3\hbar})$. Thus, our model includes two dimensionless parameters, $\epsilon$ and $\eta$. The first is the dimensionless amplitude of the {\em rf} field, and the second describes the interaction of a single spin with the mechanical vibrations of the cantilever, due to the non-uniform magnetic field produced by the ferromagnetic particle. Note that the term $d\varphi/d\tau$ in $\vec {B}_{eff}$ is caused by the phase modulation of the transversal magnetic field. The other part of the $z$-component of the effective magnetic field, $2\eta z$, describes a nonlinear effect -- a back reaction of the cantilever vibrations on the spin. If the adiabatic condition ($|d^2\varphi/d\tau^2| \ll\epsilon^2$) is satisfied and the nonlinear effects are small in comparison to the effects of the {\it rf} field, the average spin is ``captured'' by the effective magnetic field. More precisely, it precesses around the effective magnetic field in such a way that the angle between the directions of the average spin and the effective magnetic field remains approximately constant. In CAI MRFM, the value of $d\varphi/d\tau$ changes periodically with the period of the cantilever ($2\pi$ in our dimensionless variables). As a result, the effective magnetic field and the average spin change their directions with the same period. This leads to a resonant excitation of the cantilever vibrations. To test our model, we considered the classical limit of a macroscopic number of spins and a classical cantilever. Replacing the operators $S_x$ and $S_z$ in Eq.~(\ref{a2}) by the sums of operators over all spins in the sample and neglecting the quantum correlation effects, we obtain the classical equations of motion for the total average spin and for the cantilever. We solved these classical equations numerically for the parameters corresponding to the experiment with protons in ammonium nitrate\cite{rugar1} (a minimal illustrative sketch of such a classical integration is given below). To estimate the amplitude of stationary vibrations of the cantilever within the Hamiltonian approach, we consider the time $\tau=Q_c$, where $Q_c$ is the quality factor of the cantilever. We obtained for the amplitude of stationary vibrations of the cantilever $Z_{max}\approx 15$ nm, which is close to the experimental value in \cite{rugar1}, $Z_{max}\approx 16$ nm. Since a cantilever can be considered as a quasi-classical measuring device, one might expect a smooth increase of the cantilever amplitude also in a measurement of a single spin. However, a single spin $z$-component can take only two values, $s_z=\pm 1/2$. Therefore, smooth resonant vibrations of the quasi-classical cantilever driven by the continuous oscillations of $s_z$ seem to violate the principles of quantum mechanics. From this point of view, one would expect quantum jumps rather than smooth oscillations of the cantilever. Such jumps could prevent the increase of the amplitude of the cantilever vibrations, and thus single-spin detection. To resolve this problem we consider the quantum dynamics of the single spin-quasiclassical cantilever system.
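To make the structure of Eq.~(\ref{a2}) concrete, here is a minimal sketch (ours, in Python with NumPy/SciPy; it is not the code used for the results reported in this paper) that integrates the classical counterpart of Eq.~(\ref{a2}), treating $\vec S$ as a classical vector of length $1/2$. The parameters $\eta$, $\epsilon$ and the phase modulation are the illustrative values quoted below for the quantum simulation, not the ammonium-nitrate parameters; the function names and tolerances are our own choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eta, eps = 0.3, 400.0                  # coupling and rf amplitude (illustrative)

def dphi(tau):                         # phase modulation d(phi)/d(tau)
    return -6000.0 + 300.0*tau if tau <= 20 else 1000.0*np.sin(tau - 20)

def rhs(tau, y):                       # y = (z, dz/dtau, Sx, Sy, Sz)
    z, v, Sx, Sy, Sz = y
    B = np.array([eps, 0.0, -dphi(tau) + 2*eta*z])   # effective field, Eq. (2)
    dS = np.cross([Sx, Sy, Sz], B)                   # dS/dtau = S x B_eff
    return [v, -z + 2*eta*Sz, dS[0], dS[1], dS[2]]

y0 = [0.0, 0.0, 0.0, 0.0, 0.5]         # cantilever at rest, spin along +z
sol = solve_ivp(rhs, (0.0, 60.0), y0, max_step=1e-4, rtol=1e-9)
print("maximum |z| over the run:", np.abs(sol.y[0]).max())
\end{verbatim}
Because the effective field is large in these units, the spin precession is fast and the integration must resolve it; the sketch is meant only to exhibit the structure of the equations, not to be an efficient solver.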
The dimensionless Schr\"odinger equation can be written in the form, \begin{equation} i{{\partial\Psi(z,s_z,\tau)}\over{\partial\tau}}={\cal H}^\prime\Psi(z,s_z,\tau ),~~~~ \Psi(z,s_z,\tau)=\left(\matrix{\Psi_1(z,\tau)\cr \Psi_2(z,\tau)\cr}\right)\ , \label{a5} \end{equation} where $\Psi(z,s_z,\tau)$ is a dimensionless spinor, ${\cal H}^\prime={\cal H}/\hbar\omega_c$ is the dimensionless Hamiltonian. The functions $\Psi_{1,2}(z,\tau)$ are the complex amplitudes to find a spin in the state $|s_z=\pm 1/2\rangle$ and a cantilever at the point $z$ at time $\tau$. To describe the cantilever as a sub-system close to the classical limit, we choose the initial wave function of the cantilever in the coherent state $|\alpha\rangle$. Namely, it was taken in the form, $ \Psi_1(z,0)=\sum_{n=0}^\infty A_n(0)|n\rangle,~\Psi_2(z,0)=0,~ $ and $ A_n(0)=(\alpha^n/n!)\exp(-|\alpha|^2/2), $ where $|n\rangle$ is an eigenstate of the oscillator (cantilever) without spin. The initial average values of the coordinate and momentum are related to $\alpha$ as $ \langle z\rangle={{1}\over{\sqrt{2}}}(\alpha^*+\alpha),~\langle p_z\rangle={{i}\over{\sqrt{2}}}(\alpha^*-\alpha)$. In order to correspond the classical limit, we took $|\alpha|^2\gg 1$. For numerical simulation of the single-spin-cantilever dynamics we used the value of the interaction parameter $\eta =0.3$. Currently this value is not feasible in experiments with a nuclear spin, but can be achieved in experiments with a single electron spin. For instance, for the cantilever parameters from\cite{stowe} and the magnetic field gradient reported in\cite{bruland}, we obtain for a single electron spin $\eta =0.8$. The phase modulation of the {\em rf} field was taken in the form $d\varphi/d\tau=-6000+300\tau$, (if $\tau \le 20$), and $d\varphi/d\tau=1000\sin(\tau-20)$, (if $\tau>20$), and the {\em rf} field amplitude, $\epsilon =400$. For these parameters the effective magnetic field, $2\eta z$, produced by the cantilever vibrations on the spin remains small with respect to the amplitude of the {\em rf} field. The numerical simulations of the quantum dynamics reveal the formation of the asymmetric quasi-periodic Schr\"odinger cat (SC) state for the cantilever. Fig.~2 shows the typical probability distributions, $P(z,\tau )=|\Psi_1(z,\tau )|^2+|\Psi_2(z,\tau )|^2$, for nine instants of time. Near $\tau=40$, the probability distribution, $P(z,\tau)$, splits in two asymmetric peaks. After this the separation between these two peaks varies periodically in time. The ratio of the peak amplitudes is about 1000 for chosen parameters (the probabilities are shown in the logarithmic scale). It is clear that the small peak does not significantly influence the average coordinate of the cantilever. Fig.~3 shows the average coordinate of the cantilever, $\langle z(\tau)\rangle$, and the corresponding standard deviation, $\Delta (\tau )=[\langle z^2\rangle -\langle z\rangle^2]^{1/2}$. One can see fast increase of the average amplitude of the cantilever vibrations, while the standard deviation still remains small. This, in fact, is related to the initial conditions of the spin, which was taken in the direction of the $z$-axis. For instance, if the spin initially points in the $x$-axis ($\Psi_1(z ,0)=\Psi_2(z,0)$), our calculations show two large peaks with similar amplitudes. The two peaks in the cantilever probability distribution, shown in Fig.~2, indicate two possible trajectories of the cantilever. 
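For the quantum dynamics, Eq.~(\ref{a5}) can be integrated by expanding the cantilever part of the wave function in oscillator number states. The following sketch (ours, written with the QuTiP toolbox; we do not claim this is how the computations reported here were actually performed) builds the dimensionless Hamiltonian with the time-dependent term $(d\varphi/d\tau)S_z$ and propagates an initial coherent state of the cantilever with the spin up. The truncation {\tt Nfock}, the coherent amplitude {\tt alpha}, and the output times are illustrative assumptions.
\begin{verbatim}
import numpy as np
from qutip import (destroy, qeye, tensor, sigmax, sigmaz,
                   coherent, basis, sesolve)

Nfock, alpha = 200, 6.0                # oscillator truncation, |alpha|^2 >> 1
eta, eps = 0.3, 400.0

a  = tensor(destroy(Nfock), qeye(2))
z  = (a + a.dag()) / np.sqrt(2)
Sz = tensor(qeye(Nfock), sigmaz()) / 2
Sx = tensor(qeye(Nfock), sigmax()) / 2

def dphi(t, args=None):                # same phase modulation as in the text
    return -6000.0 + 300.0*t if t <= 20 else 1000.0*np.sin(t - 20)

H0 = a.dag()*a - eps*Sx - 2*eta*z*Sz   # static part of H/(hbar*omega_c)
H  = [H0, [Sz, dphi]]                  # plus the term (dphi/dtau)*S_z

psi0 = tensor(coherent(Nfock, alpha), basis(2, 0))  # coherent state, spin up
taus = np.linspace(0.0, 60.0, 601)
out  = sesolve(H, psi0, taus, e_ops=[z])
print(out.expect[0][-1])               # <z> at the final time
\end{verbatim}
With the large dimensionless frequencies used here the propagation is numerically demanding, and the truncation must be increased as the cantilever amplitude grows; the sketch only shows the structure of such a computation.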
As a result of the subsequent measurement of the cantilever position, the system selects one of these two trajectories. The crucial problem is the following: Do the two peaks of the SC state of the cantilever correspond to definite spin states? To answer this question we studied the structure of the wave function of the cantilever-spin system. First, we found that both functions, $\Psi_1(z,\tau)$ and $\Psi_2(z,\tau)$, contribute to each peak. Fig. 4 illustrates the probability distributions, $|\Psi_1(z,\tau)|^2$ (red) and $|\Psi_2(z,\tau)|^2$ (blue), for nine instants of time. One can see that these functions have maxima at the same values of $z$. Next, we analyzed the structures of the functions $\Psi_1(z,\tau)$ and $\Psi_2(z,\tau)$. When the two peaks are clearly separated we can represent each of these functions as a sum of two terms, corresponding to the ``big'' and ``small'' peaks, \begin{equation} \Psi_{1,2}(z,\tau)=\Psi_{1,2}^b(z,\tau)+\Psi_{1,2}^s(z,\tau). \label{a6} \end{equation} We have found that with an accuracy of up to 1\% the ratio $\Psi_1^s(z,\tau)/\Psi_2^s(z,\tau)=-\Psi_2^b(z,\tau)/\Psi_1^b(z,\tau)=\kappa (\tau )$, where $\kappa (\tau )$ is a real function independent of $z$. As a result, the total wave function can be represented in the form, \begin{equation} \Psi(z,s_z,\tau)=\Psi^b(z,\tau)\chi^b(s_z,\tau)+\Psi^s(z,\tau)\chi^s(s_z,\tau), \label{a7} \end{equation} where $\chi^b(s_z,\tau)$ and $\chi^s(s_z,\tau)$ are spin wave functions, which are orthogonal to each other. Eq.~(\ref{a7}) shows that each peak in the probability distribution of the cantilever coordinate corresponds to a definite spin wave function. We found that the average spin corresponding to the big peak, $\langle\chi^b|\vec S|\chi^b\rangle$, points in the direction of the vector $(\epsilon,0,-d\varphi/d\tau)$, whereas $\langle\chi^s|\vec S|\chi^s\rangle$ points in the opposite direction. Note that up to a small term, $2\eta z$, the vector $(\epsilon,0,-d\varphi/d\tau)$ is the effective magnetic field acting on the spin (see Eq.~(\ref{a2})). The ratio of the integrated probabilities ($\int P(z,\tau )dz$) for the small and big peaks ($\sim 10^{-3}$ in Fig. 2) can be easily estimated as $\tan^2(\Theta/2)$, where $\Theta$ is the initial angle between the effective field, $(\epsilon,0,-d\varphi/d\tau)$, and the spin direction. Therefore, by measuring the cantilever vibrations, one finds the spin in a definite state along or opposite to the effective magnetic field. Our numerical simulations show that, starting with such a new initial condition, i.e. when the average spin points along or opposite to the effective field, the probability distribution $P(z,\tau )$ again shows two peaks, as in Fig.~2, but the ratio of the integrated probabilities of these peaks is much smaller ($\sim 10^{-6}$). Thus, for the chosen parameters, the quantum jumps generated by a single spin measurement cannot prevent the amplification of the cantilever vibration amplitude, and thus the detection of a single spin. So far, the picture described above resembles the classical Stern-Gerlach effect, in which the cantilever measures the spin component along the effective magnetic field. The appearance of the second peak, even if the average spin initially points in the direction of the effective magnetic field, marks a difference from the Stern-Gerlach effect. The origin of this peak lies in small deviations from the adiabatic motion of the spin, which persist even at a large amplitude of the effective field, and in the back reaction of the cantilever vibrations on the spin.
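As a quick check of the estimate above (our arithmetic, using the parameters quoted in the text), the initial angle between the spin and the effective field is
\[
\Theta \approx \frac{\epsilon}{|d\varphi/d\tau|_{\tau=0}} = \frac{400}{6000} \approx 0.067, \qquad \tan^2(\Theta/2) \approx 1.1\times 10^{-3},
\]
consistent with the ratio $\sim 10^{-3}$ of the integrated probabilities of the small and big peaks seen in Fig.~2.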
The next important question is the following: Is it possible to use CAI MRFM to measure the state of a single spin? We studied numerically the phase of the cantilever vibrations when the initial spin points along or opposite to the direction of the effective magnetic field. Our computer simulations show that the phases of the cantilever vibrations for these two initial conditions are significantly different. When the amplitude of the cantilever vibrations increases, the phase difference for the two initial conditions approaches $\pi$. Thus, the classical phase of the cantilever vibrations indicates the state of the spin relative to the effective magnetic field. If the spin is initially in a superposition of these two states, it will end up in one of them in the process of measurement. In practical applications it would be very desirable to use CAI MRFM for the measurement of the initial $z$-component of the spin. For this purpose, one should arrange for the initial direction of the effective magnetic field to be the $z$-direction. Then the initial $z$-component of the spin will coincide with its component relative to the effective magnetic field. In our computer simulations presented in Figs.~2-4 we have assumed an instantaneous increase of the amplitude of the {\it rf} field at $\tau=0$. This causes an initial angle between the directions of the spin and the effective magnetic field, $\Theta\approx\epsilon/|d\varphi/d\tau|\approx 0.07$. To eliminate this initial angle, we simulated the quantum spin-cantilever dynamics for an adiabatic increase of the {\it rf} field amplitude: $\epsilon=20\tau$ for $\tau\le 20$, and $\epsilon=400$ for $\tau>20$. The dependence of $d\varphi/d\tau$ was taken to be the same as in Figs.~2-4. The results of these simulations are qualitatively similar to those presented in Figs.~2-4, but the integrated probability of the small peak was reduced to a residual value $\sim 10^{-6}$. Neglecting this small probability, one can carry out the measurement of the initial $z$-component of the spin. We should also mention that the detection of a single electron spin in an atom can be used to determine the state of its nuclear spin \cite{berman1,berman2}. Such a measurement is possible for an atom with a large hyperfine interaction in a high external magnetic field, because the electron spin frequency of the atom depends on the state of its nuclear spin. In conclusion, we have analyzed the quantum effects in the single-spin measurement using cyclic adiabatic inversion (CAI) to drive cantilever vibrations in magnetic resonance force microscopy (MRFM). We investigated the quasi-classical cantilever interacting with a single spin using a Hamiltonian approach. We have shown that the spin-cantilever dynamics generates a Schr\"odinger-cat (SC) state for the cantilever. The two peaks of the probability distribution of the cantilever coordinate correspond approximately to the directions of the spin along or opposite to the direction of the effective magnetic field in the rotating frame. In this paper, we did not discuss the intriguing possibility of observing the SC state. This requires an estimate of the SC lifetime, which cannot be derived from our Hamiltonian, since it neglects the environment of the cantilever. Instead, we concentrated on the possibility of observing the resonant excitation of the cantilever vibrations driven by a single spin. 
We demonstrated by a direct computation of the average cantilever position and its standard deviation as a function of time that the resonant amplification of the cantilever oscillations is indeed possible (for the considered region of the system parameters), despite the quantum jumps of the single spin. In fact, the standard deviation of the cantilever coordinate becomes large only when the angle between the initial spin direction and the effective magnetic field approaches $\pi /2$. In this case the SC state appears with approximately equal peaks. However, after an observation of the cantilever position, the system appears in one of the peaks, and the subsequent evolution of the cantilever coordinate again shows resonant amplification with a very small standard deviation. We expect that taking into account the interaction with an environment will not change our conclusion. Such an interaction will cause decoherence \cite{zurek}, which transforms the SC state into a statistical mixture. It is clear that this effect, as well as the thermal vibrations of the cantilever (see, for example, \cite{rugar1,stowe}), cannot prevent an observation of the driven oscillations of the cantilever if the corresponding rms amplitude exceeds the rms amplitude of the vibrational noise. Another effect of the interaction with the environment is the finite quality factor, $Q_c$, of the cantilever, which puts a limit on the increase of the cantilever vibrations. The stationary amplitude of the cantilever vibrations can be estimated in our Hamiltonian approach by putting $\tau =Q_c$. Finally, we mention two other possible techniques for the cyclic spin inversion in MRFM. One of them is the standard Rabi technique. This assumes that, in our notation, $d\varphi/d\tau=0$ and $\epsilon=1$, i.e. the Rabi frequency equals the cantilever frequency. This technique seems to be simpler than CAI MRFM. But the amplitude of the {\it rf} field, $\epsilon$, must be much greater than the effective field produced by the cantilever on the spin: $2\eta z \ll\epsilon=1$. In this case, the force acting on the cantilever is very small, and the amplification of the driven cantilever vibrations requires a long time, i.e. a large cantilever quality factor. Another technique assumes the application of short $\pi$-pulses which periodically change the direction of the spin within a time interval that is very short in comparison with the cantilever period \cite{berman3}. If the time interval between successive pulses equals half of the cantilever period, this technique provides a resonant amplification of the cantilever vibrations. Testing this technique in MRFM experiments is a challenging problem. \section*{Acknowledgments} We thank D.P. DiVincenzo, R.G. Clark, G.D. Doolen, H.S. Goan, A.N. Korotkov, R.B. Laughlin, S. Lloyd, H.J. Mamin, G.J. Milburn, V. Privman, M.L. Roukes, D. Rugar, J.A. Sidles, K. Schwab for useful discussions. This work was supported by the Department of Energy under contract W-7405-ENG-36 and DOE Office of Basic Energy Sciences. The work of GPB and VIT was partly supported by the National Security Agency (NSA) and by the Advanced Research and Development Activity (ARDA). \begin{figure} \caption{ The cantilever-spin system. $\vec{B} \label{canti} \end{figure} \begin{figure} \caption{The probability distribution, $P(z)$, for the cantilever position. The values of parameters are: $\epsilon=400$ and $\eta=0.3$. 
The initial conditions are: $\langle z(0)\rangle=-20$, $\langle p_z(0)\rangle=0$ (which correspond to $\alpha=-10\sqrt{2} \label{prob1} \end{figure} \begin{figure} \caption{Cantilever dynamics. (a) Average coordinate of the cantilever as a function of $\tau$ and (b) its standard deviation $\Delta (\tau) = [\langle z^2 (\tau) \rangle - \langle z (\tau )\rangle^2]^{1/2} \label{ztau} \end{figure} \begin{figure} \caption{ The probability distributions $P_i(z,\tau) =|\Psi_i(z,\tau )|^2$, $i=1$ (red) and $i=2$ (blue) for nine instants in time: $\tau_k=92.08+0.8k$, $k=0,1,...,8$ } \label{new4} \end{figure} \end{document}
\begin{document} \author{Fabien Besnard} \title{An approximation theorem for non-decreasing functions on compact posets} \maketitle \abstract{In this short note we prove a theorem of the Stone-Weierstrass sort for subsets of the cone of non-decreasing continuous functions on compact partially ordered sets.} \section{Introduction} The classic book \cite{nachbin} contains a theorem which states that given a compact set $M$ and a separating semi-vector lattice $S$ of continuous real-valued functions on $M$ which contains the constants, there is one and only one way of making $M$ a compact ordered space so that $S$ becomes the set of all non-decreasing continuous real-valued functions on $M$. This theorem has been used in \cite{besnard} to give a putative definition of noncommutative compact ordered sets. However, infimum and supremum turned out to be quite difficult to handle in the noncommutative setting. A different kind of density theorem was thus needed. Since a ``continuous non-decreasing'' functional calculus was available in the noncommutative context, it was natural to look for a theorem which would replace stability under infimum and supremum with stability under continuous non-decreasing functions. Let us introduce some vocabulary in order to be more precise. Let $S$ be a subset of the set ${\cal C}(M,{\mathbb R})$ of continuous real-valued functions on some space $M$, and $H : {\mathbb R}\rightarrow {\mathbb R}$ be a function. We will say that $H$ \emph{operates} on $S$ if $H\circ f\in S$ for each $f\in S$. As remarked in \cite{briem}, the real version of the classical Stone-Weierstrass theorem can be rephrased in terms of operating functions. \begin{theorem}(Stone-Weierstrass) Let $S$ be a non-empty subset of ${\cal C}(X,{\mathbb R})$, with $X$ a compact Hausdorff space. If \begin{enumerate} \item $S$ is stable by sum, \item the affine functions from ${\mathbb R}$ to ${\mathbb R}$ operate on $S$, \item the function $t\mapsto t^2$ operates on $S$, \end{enumerate} then $S$ is dense in ${\cal C}(X,{\mathbb R})$ for the uniform norm. \end{theorem} The second hypothesis is a way to say that $S$ is a cone (hence a vector space thanks to the first hypothesis) which contains the constant functions. In fact it is proved in \cite{dlk} that one can replace $t\mapsto t^2$ in the third hypothesis by any continuous non-affine function. It is a theorem of this kind that we prove in this note, but in the same category (compact ordered sets and non-decreasing continuous functions) as the theorem of Nachbin stated above. \section{Preliminaries} Let $M$ be a topological space equipped with a partial order $\preceq $. We let $I(M)$ denote the set of all continuous non-decreasing functions from $M$ to ${\mathbb R}$, where ${\mathbb R}$ has the natural topology and the natural ordering, which we write $\le$, as usual. The elements of $I(M)$ are sometimes called continuous isotonies. Let $S$ be a subset of $I(M)$. We define the relation $\preceq_S$ by \begin{equation} x\preceq _S y\Longleftrightarrow \forall f\in S, f(x)\le f(y) \end{equation} It is obvious that $\preceq _S$ is a preorder, which we call \emph{the preorder generated by $S$}. This preorder will be a partial order relation if, and only if, $S$ separates the points of $M$. We say that \emph{$S$ generates $\preceq$} iff $\preceq _S=\preceq$. 
This is the case if, and only if, $S$ satisfies \begin{equation} \forall a,b\in M,\ a\not\preceq b\Longrightarrow \exists f\in S,\ f(a)> f(b) \end{equation} Since $a\not=b\Rightarrow a\not\preceq b$ or $b\not\preceq a$, we see that if $S$ generates $\preceq$, it necessarily separates the points of $M$. Note that it is not guaranteed that for any poset there exists such an $S$ generating the order. When there is one, then $I(M)$ itself will also generate the order. Posets with the property that $I(M)$ generates the order are called \emph{completely separated ordered sets}. When $M$ is compact and Hausdorff, complete separation is equivalent to the relation $\preceq$ being closed in $M\times M$ (see \cite{nachbin}, p. 114). Let $A$ be a set of functions from ${\mathbb R}$ to ${\mathbb R}$. We will say that $A$ operates on $S$ iff \begin{equation} \forall H\in A, \forall f\in S,\ H\circ f\in S \end{equation} \section{Statement and proof of the theorem} \begin{theorem} Let $(M,\preceq )$ be a compact Hausdorff partially ordered set. Let $A$ be the set of continuous non-decreasing piecewise linear functions from ${\mathbb R}$ to ${\mathbb R}$. Let $S$ be a non-empty subset of $I(M)$. If \begin{enumerate} \item $S$ is stable by sum, \item\label{h2} $A$ operates on $S$, \item\label{h3} $S$ generates $\preceq$, \end{enumerate} then $S$ is dense in $I(M)$ for the uniform norm. \end{theorem} Before proving the theorem, a few comments are in order. \begin{itemize} \item First of all, the theorem is true but empty if $M$ is not completely separated, since no $S$ can satisfy the hypotheses in this case. \item The hypothesis that $S$ is not empty is redundant if $M$ has at least two elements, by \ref{h3}. \item Finally, let us remark that \ref{h2} entails that $S$ is in fact a convex cone which contains the constant functions. \end{itemize} To prove the theorem we need two lemmas. \begin{lemma}\label{premierlemme} Let $x,y\in M$ be such that $y\not\preceq x$. Then $\exists f_{x,y}\in S$ such that $0\le f_{x,y}\le 1$, $f_{x,y}(x)=0$ and $f_{x,y}(y)=1$. \end{lemma} \begin{demo} Since $S$ generates $\preceq$, there exists $f\in S$ such that $f(x)<f(y)$. Let $H\in A$ be such that $H(t)=0$ for $t\le f(x)$, $H$ is affine on the segment $[f(x),f(y)]$, and $H(t)=1$ for $t\ge f(y)$; explicitly, one may take $H(t)=\min\big(1,\max\big(0,(t-f(x))/(f(y)-f(x))\big)\big)$. Then $f_{x,y}:=H\circ f$ meets the requirements of the lemma. \end{demo} \begin{lemma}\label{secondlemme} Let $K,L$ be two compact subsets of $M$ such that $\forall x\in K,\forall y\in L$, $y\not\preceq x$. Then $\exists f_{K,L}\in S$ such that $0\le f_{K,L}\le 1$, $f_{K,L}=0$ on $K$ and $f_{K,L}=1$ on $L$. \end{lemma} \begin{demo} For all $x\in K$ and $y\in L$, we find a function $f_{x,y}\in S$ as in lemma \ref{premierlemme}. We fix a $y\in L$ and let $x$ vary in $K$. Since $f_{x,y}$ is continuous, there exists an open neighbourhood $V_x$ of $x$ such that $f_{x,y}(V_x)\subset [0;1/4[$. By compactness of $K$, there exist $V_1,\ldots,V_k$ corresponding to $x_1,\ldots,x_k$ such that $K\subset V_1\cup\ldots\cup V_k$. Now we define $g_y:={1\over k}\sum_if_{x_i,y}$. We have $g_y\in S$ since $S$ is a convex cone (see the last remark below the theorem). It is clear that $g_y(y)=1$ and that for all $x\in K$, $0\le g_y(x)\le {1\over k}(k-1+1/4)=1-{3\over 4k}<1$. We then choose $H\in A$ such that $H(t)=0$ for $t\le 1-{3\over 4k}$ and $H(t)=1$ for $t\ge 1$. We set $f_{K,y}:=H\circ g_y$. We thus have $f_{K,y}\in S$, $f_{K,y}=0$ on $K$ and $f_{K,y}(y)=1$. 
Using the continuity of $f_{K,y}$, we find an open neighbourhood $W_y$ of $y$ such that $f_{K,y}(W_y)\subset[3/4;1]$. Since we can do this for every $y\in L$, and since $L$ is compact, we can find functions $f_{K,y_j}$, $j=1,\ldots,l$, and open sets $W_1,\ldots,W_l$ of the above kind such that $L\subset W_1\cup\ldots\cup W_l$. We then define $g={1\over l}\sum_jf_{K,y_j}$. We have $g\in S$, and $g(K)=\{0\}$. Moreover, for all $z\in L$, $1\ge g(z)\ge {3\over 4l}>0$. We then choose a function $G\in A$ such that $G(t)=1$ for $t\ge 3/4l$ and $G(t)=0$ for $t\le 0$. Now the function $f_{K,L}:=G\circ g$ has the desired properties. \end{demo} We can now prove the theorem. \begin{demo} Let $f\in I(M)$. We will show that, for all $n\in{\mathbb N}^*$, there exists $F\in S$ such that $\|f-F\|_\infty\le{1\over n}$. If $f$ is constant then the result is obvious. Otherwise, let $m$ be the infimum of $f$ and $M$ be its supremum. Let $\tilde f={1\over M-m}(f-m\cdot 1)$. Using the fact that $S$ is a convex cone, we can work with $\tilde f$ instead of $f$. Hence, we can suppose that $f(M)\subseteq[0;1]$ without loss of generality. We set $K_i=f^{-1}([0;{i\over n}])$, and $L_i=f^{-1}([{i+1\over n};1])$ for each $i\in\{0;\ldots;n-1\}$. Since $f$ is continuous and $M$ is compact, the sets $K_i$ and $L_i$ are both closed, hence compact. For each $i$ we use lemma \ref{secondlemme} to find $f_i\in S$ such that $f_i(K_i)=\{0\}$ and $f_i(L_i)=\{1\}$. We then consider the function $F={1\over n}\sum_{i=0}^{n-1}f_i$. We clearly have $F\in S$. Let $m\in M$. Suppose ${j\over n}<f(m)<{j+1\over n}$ for some $j\in\{0;\ldots;n-1\}$. We thus have $m\in K_i$ for $j< i< n$ and $m\in L_i$ for $i<j$. Hence $F(m)={1\over n}\sum_{i=0}^jf_i(m)={1\over n}(j+f_j(m))\in [{j\over n};{j+1\over n}]$. Thus $|f(m)-F(m)|\le {1\over n}$. Now suppose $f(m)={j\over n}$, with $j\in\{0;\ldots;n\}$. We have $m\in K_i$ for $i\ge j$ and $m\in L_i$ for $i<j$. Thus $F(m)={1\over n}\sum_{i=0}^{j-1}f_i(m)={j\over n}$. We see that $|f(m)-F(m)|=0$ in this case. Hence we have shown that $|f(m)-F(m)|\le {1\over n}$ for all $m\in M$, thus proving the theorem. \end{demo} To conclude, let us remark that the set $A$ in the theorem can be replaced by any subset of $I({\mathbb R})$ with the following property: for any two reals $a<b$, there exists $f\in A$ such that $f=0$ on $]-\infty;a]$, and $f=1$ on $[b;+\infty[$. For example one can take $A=I({\mathbb R})$ itself, or the set of non-decreasing ${\cal C}^\infty$ functions. \begin{thebibliography}{00} \bibitem [1] {nachbin} L. Nachbin, {\it Topology and Order}, D. Van Nostrand Company, 1965 \bibitem [2] {besnard} F. Besnard, {\it A Noncommutative View on Topology and Order}, J. Geom. Phys., {\bf 59}, 861-875 (2009) \bibitem [3] {briem} E. Briem, {\it Approximations from Subspaces of ${C}^0(X)$}, J. Approx. Th., {\bf 112}, 279-294 (2001) \bibitem [4] {dlk} K. de Leeuw, Y. Katznelson, {\it Functions that operate on non-selfadjoint algebras}, J. Anal. Math. {\bf 11} (1963), 207-219 \end{thebibliography} \end{document}
\begin{document} \title{Forcing of infinity \\ and algebras of distributions \\ of binary semi-isolating formulas \\ for strongly minimal theories\footnote{{\em Mathematics Subject Classification.} \begin{abstract} We apply a general approach for distributions of binary isolating and semi-isolating formulas to the class of strongly minimal theories. {\bf Key words:} structure of binary semi-isolating formulas, strongly minimal theory. \end{abstract} Algebras and structures associated with isolating and semi-isolating formulas of a theory are introduced in \cite{ShS, Su122}. We apply this general approach for distributions of formulas to the class of strongly minimal theories \cite{BaLa}. Let $T$ be a theory, $R\subseteq S^1(\varnothing)$ be a nonempty family, and $\nu(R)$ be a regular family of labelling functions for semi-isolating formulas forming a set $U$ of labels. Denote by $U_{\rm fin}$\index{$U_{\rm fin}$} (respectively $U_{\rm cofin}$)\index{$U_{\rm cofin}$} the set of labels $u$, each of which, being in $\bigcup\limits_{p,q\in R}\rho_{\nu(p,q)}$, $\unlhd$-dominates a (co)finite set of labels in $\rho_{\nu(p,q)}$. By the definition, all almost deterministic labels belong to $U_{\rm fin}$. We say that a label $u\in \rho_{\nu(p,q)}$ {\em forces}\index{Label!forcing an infinite set of solutions} an infinite set of solutions for the formula $\theta_{p,u,q}(a,y)$, $\models p(a)$, if for any theory $T$ with a family $R$ of $1$-types, containing $p$ and $q$ and having a ${\rm POSTC}_R$-structure, including all labels that are $\unlhd$-dominated by $u$, the formula $\theta_{p,u,q}(a,y)$ has infinitely many solutions. By the definition, a label $u\in \rho_{\nu(p,q)}$ forces an infinite set of solutions if and only if for any $n\in\omega$ and some $n$ elements $a_1,\ldots,a_n$ in the set of solutions of $\theta_{p,u,q}(a,y)$ (in an arbitrary structure), where $\models p(a)$, there exists a new element $a_{n+1}$, for which $\models\theta_{p,u,q}(a,a_{n+1})$ and the links between $a_{n+1}$ and $a_1,\ldots,a_n$ are defined by some labels. Clearly, almost deterministic labels do not force infinite sets of solutions, and any label $u$, $\unlhd$-dominating infinitely many labels $v_i$, forces an infinite set of solutions. Moreover, each label $u\wedge\neg v_{i_1}\wedge\ldots\wedge \neg v_{i_n}$ also forces an infinite set of solutions. Another example of a label which forces an infinite set of solutions is given by the theory ${\rm Th}(\langle\mathbb{Q},<\rangle)$, for which any non-zero label (defining the strict order property) corresponds to formulas having only infinitely many solutions. An infinite set of solutions can be forced by labels in $U_{\rm fin}$ for formulas in stable theories. Such an example is produced by any label corresponding to a special element of an infinite group for an everywhere finitely defined polygonometry \cite{SuGP}. {\bf Definition} (J.~T.~Baldwin\index{Baldwin J. T.}, A.~H.~Lachlan\index{Lachlan A. H.} \cite{BaLa}). A theory $T$ is called {\em strongly minimal}\index{Theory!strongly minimal} if for any formula $\varphi(x,\bar{a})$ of the language obtained by adding the parameters $\bar{a}$ (from some model ${\cal M}\models T$) to the language of $T$, either $\varphi(x,\bar{a})$ or $\neg\varphi(x,\bar{a})$ has finitely many solutions. 
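Standard examples (our illustration, not part of the original text) are the theory of an infinite set in the empty language and the theory of algebraically closed fields of a fixed characteristic: in both cases quantifier elimination shows that every formula $\varphi(x,\bar{a})$ defines a finite or cofinite subset of any model, for instance a Boolean combination of zero sets of polynomials in the field case.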
An example of a strongly minimal theory with forcing of an infinite set is given by a structure $\langle M;s\rangle$ with a {\em successor function}\index{Function!successor} $s$ (having exactly one preimage for any element and having no cycles). Since ${\rm Th}(\langle M;s\rangle)$ has a unique $1$-type, there is a label $u\in U_{\rm cofin}$ for the semi-isolating formula $(x\approx x)$. This label $\unlhd$-dominates infinitely many labels $v$ corresponding to formulas $(y\approx s^n(x))$, $n\in\mathbb Z$, and thus $u$ forces an infinite set of solutions for the formulas $\theta_u(a,y)$, $a\in M$. {\bf Theorem 1.} {\em For any strongly minimal theory $T$, the family $R\rightleftharpoons S^1(\varnothing)$ of $1$-types, and a regular family $\nu(R)$ of labelling functions for semi-isolating formulas, the following conditions hold: {\rm (a)} the ${\rm POSTC}_\mathcal{R}$-structure $\mathfrak{M}_{\nu(R)}$ consists of labels belonging to $U_{\rm fin}\cup U_{\rm cofin}$; {\rm (b)} there is a unique type $r_0\in R$ having infinitely many realizations; in particular, any set $\rho_{\nu(p,q)}$ is finite, where $p,q\in R$ and $q\ne r_0$, and all labels $u\in\rho_{\nu(p,q)}$ are almost deterministic and belong to $U_{\rm fin}$; {\rm (c)} if $R$ is finite, i.~e., all types in $R$ are principal, then all non-zero labels are positive and all labels $u$, including zero, have complements $\bar{u}$, and for any pair of labels $u,\bar{u}\in\rho_{\nu(p,r_0)}$, exactly one of them is almost deterministic and, in particular, belongs to $U_{\rm fin}$, and the other label marks a formula $\theta_{p,\cdot,r_0}(a,y)$ with infinitely many solutions, where $\models p(a)$, and belongs to $U_{\rm cofin}$; {\rm (d)} if $R$ is infinite, i.~e., $r_0$ is the unique non-principal $1$-type, then all non-zero labels, linking realizations of $r_0$ or realizations of types in $R\setminus\{r_0\}$, are positive, and labels, linking realizations of $r_0$ with realizations of types in $R\setminus\{r_0\}$, are negative; in this case, if a label $u$ belongs to $\rho_{\nu(p,r_0)}$ then $u$ is positive or zero, almost deterministic, does not have complements and $p=r_0$; moreover, $U_{\rm fin}=U_{\rm cofin}$ if $\rho_{\nu(r_0)}$ is finite, and $U_{\rm cofin}=\varnothing$ if $\rho_{\nu(r_0)}$ is infinite; {\rm (e)} only labels in $\rho_{\nu(p,r_0)}$ with the principal type $r_0$ can force infinitely many solutions.} {\em Proof.} By the definition of a strongly minimal theory, each formula $\varphi(a,y)$, where $\models p(a)$ (in particular, one witnessing the semi-isolation of a type $q(y)$), has a finite or a cofinite set $\varphi(a,{\cal M})$ of solutions. If the set $\varphi(a,{\cal M})$ is finite, the label $u\in\rho_{\nu(p,q)}$ marking the formula $\varphi(x,y)$, with $\varphi(a,y)\vdash q(y)$ and $\models p(a)$, can $\unlhd$-dominate only finitely many labels in $\rho_{\nu(p,q)}$. If $\varphi(a,{\cal M})$ is cofinite, then the label $u\in\rho_{\nu(p,q)}$ for the formula $\varphi(x,y)$ $\unlhd$-dominates all labels in $\rho_{\nu(p,q)}$ except for finitely many $u_1,\ldots,u_k$, and $u$ has the complement $\bar{u}$, which is obtained from $u$ by a disjunctive attachment of the labels $u_i$. Thus the condition (a) holds: all labels belong to $U_{\rm fin}\cup U_{\rm cofin}$. 
Since $T$ is strongly minimal, we also have that there is a unique type $r_0\in R$, principal or non-principal, with infinitely many realizations: if there are finitely many $1$-types, this is implied by the fact that models are infinite and that there are no two principal formulas with infinitely many solutions, and if there are infinitely many $1$-types, then the non-principal type $r_0(x)$, having infinitely many realizations by Compactness, is determined by the set of all formulas $\neg\varphi(x)$, where $\varphi(x)$ are principal formulas, and none of these formulas can have infinitely many solutions. Since the type $r_0$ with infinitely many realizations is unique, any set $\rho_{\nu(p,q)}$ is finite, where $p,q\in R$ and $q\ne r_0$. Here all labels $u\in\rho_{\nu(p,q)}$ are almost deterministic and belong to $U_{\rm fin}$. Thus, we have the condition (b). If $r_0$ is isolated, then all $1$-types are isolated and, by Proposition 1.1 of \cite{Su122}, all non-zero labels are positive. Since each isolating formula $\varphi(x)$ has a label, all labels, including zero, have complements. In this case, for any pair of labels $u,\bar{u}\in\rho_{\nu(p,r_0)}$, exactly one of these labels is almost deterministic and, in particular, belongs to $U_{\rm fin}$, and the other label marks a formula $\theta_{p,\cdot,r_0}(a,y)$ with infinitely many solutions, where $\models p(a)$, and belongs to $U_{\rm cofin}$. Hence, the condition (c) holds. If $r_0$ is non-isolated, then all non-zero labels linking realizations of $r_0$ are positive: having a non-positive non-zero label $u$ linking realizations of $r_0$, we would have the non-symmetric relation ${\rm SI}_{r_0}$ and, as $r_0$ is non-isolated, infinitely many solutions for the formula $\theta_u(x,a)$, where $\models r_0(a)$. This contradicts the strong minimality of the theory $T$. By Proposition 1.1 of \cite{Su122}, non-zero labels linking realizations of types in $R\setminus\{r_0\}$ are positive, and labels linking realizations of $r_0$ with realizations of types in $R\setminus\{r_0\}$ are negative. In this case, since for a non-principal type there are only relative complements, if a label $u$ belongs to $\rho_{\nu(p,r_0)}$, then $u$ is positive or zero, almost deterministic and does not have a complement. Moreover, $p=r_0$ since realizations of principal types cannot semi-isolate realizations of the non-principal type $r_0$. If the set $\rho_{\nu(r_0)}$ is finite, then any label in $U_{\rm fin}$ belongs to $U_{\rm cofin}$ and vice versa, i.~e., $U_{\rm fin}=U_{\rm cofin}$, and if $\rho_{\nu(r_0)}$ is infinite, then all labels are almost deterministic and $U_{\rm cofin}=\varnothing$. Thus, the condition (d) holds. The condition (e) is implied by the previous items.~$\Box$ If $\mathfrak{M}$ is a ${\rm POSTC}_\mathcal{R}$-structure and there is a theory $T$ with a family $R=S^1(\varnothing)$ and a regular family $\nu(R)$ of labelling functions for semi-isolating formulas such that $\mathfrak{M}_{\nu(R)}=\mathfrak{M}$, then we say that $\mathfrak{M}$ {\em is represented}\index{${\rm POSTC}_\mathcal{R}$-structure!represented} by $T$, and also say that $T$ {\em represents}\index{Theory!representing ${\rm POSTC}_\mathcal{R}$-structure} the ${\rm POSTC}_\mathcal{R}$-structure $\mathfrak{M}$. If all types of $R$ are realized in a model $\mathcal{N}$ of $T$, then we say that $\mathfrak{M}$ {\em is represented}\index{${\rm POSTC}_\mathcal{R}$-structure!represented} by $\mathcal{N}$. 
Note that the syntactic representability of a ${\rm POSTC}_\mathcal{R}$-structure $\mathfrak{M}$ (by a theory) is equivalent to the semantic representability of $\mathfrak{M}$ (by a model). Notice also that there is a representation $T$ for the ${\rm POSTC}_\mathcal{R}$-structure $\mathfrak{M}$ such that a label $u$ is almost deterministic if and only if $u$ does not force an infinite set of solutions. {\bf Theorem 2.} {\em Let $\mathfrak{M}$ be a ${\rm POSTC}_\mathcal{R}$-structure satisfying the following conditions: {\rm (a)} $\mathfrak{M}$ consists of labels belonging to $U_{\rm fin}\cup U_{\rm cofin}$; {\rm (b)} there is an element $r_0\in \mathcal{R}$ such that any set $\rho_{\mu(p,q)}$ is finite, where $p,q\in \mathcal{R}$ and $q\ne r_0$, and all labels $u\in\rho_{\mu(p,q)}$ are almost deterministic {\rm (}in some representation $\mathcal{N}$ of $\mathfrak{M}${\rm )} and belong to $U_{\rm fin}$; {\rm (c)} if $\mathcal{R}$ is finite then all non-zero labels are positive and all labels $u$, including zero, have complements $\bar{u}$, and for any pair of labels $u,\bar{u}\in\rho_{\mu(p,r_0)}$, exactly one of them is almost deterministic and, in particular, belongs to $U_{\rm fin}$, and the other label marks a formula $\theta_{p,\cdot,r_0}(a,y)$ {\rm (}for $\mathcal{N}${\rm )} with infinitely many solutions, where $\models p(a)$, and belongs to $U_{\rm cofin}$; {\rm (d)} if $\mathcal{R}$ is infinite then all non-zero labels, linking $r_0$ or elements of $\mathcal{R}\setminus\{r_0\}$, are positive, and labels, linking $r_0$ with elements in $\mathcal{R}\setminus\{r_0\}$, are negative; in this case, if a label $u$ belongs to $\rho_{\mu(p,r_0)}$, then $u$ is positive or zero, almost deterministic, does not have complements and $p=r_0$; moreover, $U_{\rm fin}=U_{\rm cofin}$ if $\rho_{\mu(r_0)}$ is finite, and $U_{\rm cofin}=\varnothing$ if $\rho_{\mu(r_0)}$ is infinite; {\rm (e)} only labels in $\rho_{\mu(p,r_0)}$, and only when $|\mathcal{R}|<\omega$, can force infinity. Then there is a strongly minimal theory $T$ representing the ${\rm POSTC}_\mathcal{R}$-structure $\mathfrak{M}$ and having a unique $1$-type $r_0$ with infinitely many realizations.} {\em Proof.} Consider the construction from the proofs of Theorem 9.1 in \cite{ShS} and Theorem 8.1 in \cite{Su122}. We identify $\mathcal{R}$ with the set of $1$-types for the required theory $T$. Now we add, to the types describing links between elements with respect to the binary relations $Q_u$, $u\in U$, information on the cardinalities of the sets of solutions of the formulas $\theta_{p,u,q}(a,y)$, where $\models p(a)$. The generic construction for the class ${\bf T}_0$ of types guarantees that the generic theory of the language $\{Q_u\mid u\in U\}$ is strongly minimal and represents the ${\rm POSTC}_\mathcal{R}$-structure $\mathfrak{M}$.~$\Box$ \end{document}
\begin{document} \begin{abstract} We show that the image of the adelic Galois representation attached to a non-CM modular form is open in the adelic points of a suitable algebraic subgroup of $\GL_2$ (defined by F.~Momose). We also show a similar result for the adelic Galois representation attached to a finite set of modular forms. \end{abstract} \maketitle \section*{Introduction} Let $E$ be an elliptic curve over $\mathbf{Q}$, and $p$ a prime number. Then the action of the Galois group on the Tate module of $E$ determines a Galois representation \[ \rho_{E, p}: \operatorname{Gal}(\overline{\mathbf{Q}} / \mathbf{Q}) \to \GL_2(\mathbf{Z}_p). \] If $\hat\mathbf{Z} = \prod_{p} \mathbf{Z}_p$ is the profinite completion of $\mathbf{Z}$ (the integer ring of the ring $\hat\mathbf{Q}$ of finite adeles), then the product of the $\rho_{E, p}$ defines an adelic Galois representation \[ \rho_E: \operatorname{Gal}(\overline{\mathbf{Q}} / \mathbf{Q}) \to \GL_2(\hat\mathbf{Z}).\] Suppose $E$ does not have complex multiplication. Then the images of these representations are described by the following three theorems, all of which are due to Serre: \begin{enumerate} \item[(A)] \cite[\S IV.2.2]{serre68b} For all primes $p$, the image of $\rho_{E, p}$ is open in $\GL_2({\ZZ_p})$. \item[(B)] \cite[Theorem 2]{serre72} For all but finitely many $p$, the image of $\rho_{E, p}$ is the whole of $\GL_2({\ZZ_p})$. \item[(C)] \cite[Theorem 3]{serre72} The image of the product representation $\rho_E$ is open in $\GL_2(\hat\mathbf{Z})$. \end{enumerate} Note that (C) implies both (A) and (B), but the converse is not automatic; theorem (C) shows that not only do the $\rho_{E, p}$ individually have large image, but they are in some sense ``independent of each other'' up to a finite error. If one replaces the elliptic curve $E$ by a modular eigenform $f$, then one has $p$-adic Galois representations $\rho_{f, p}$ and an adelic representation $\rho_f$, and it is natural to ask whether analogues of theorems (A)--(C) hold in this context. For modular forms of level 1, analogues of all three theorems were obtained by Ribet \cite{ribet75}; but the case of modular forms of higher level is considerably more involved, owing to the presence of so-called ``inner twists''. The appropriate analogues of (A) and (B) for general eigenforms were determined by Momose \cite{momose81} and Ribet \cite{ribet85} respectively. However, somewhat surprisingly, there does not seem to be a result analogous to (C) in the literature for general modular eigenforms $f$. The first aim of this paper is to fill this minor gap, by formulating and proving an analogue of (C) for general eigenforms; see \S \ref{sect:largeimage}, in particular Theorem \ref{thm:bigimageGL2}. The second aim of this paper is to extend these results to pairs of modular forms: given two modular forms $f, g$, how large is the image of the product representation $\rho_f \times \rho_g$? In \S \ref{sect:jointlargeimage} we formulate and prove analogues of (A)--(C) for this product representation. This extends earlier partial results due to Ribet \cite[\S 6]{ribet75} (who proves the analogue of (B) for pairs of modular forms of level 1, and sketches an analogue of (A)); and of Lei, Zerbes and the author \cite[\S 7.2.2]{LLZ14} (who prove an analogue of (B) for pairs of modular forms of weight 2). These results can all be extended to the case of arbitrary finite collections $(f_1, \dots, f_n)$ of modular forms, but we give the proofs only in the case $n = 2$ to save notation. 
In section \ref{sect:specialelt}, we give an application of these results which was the original motivation for our study of images of Galois representations; this is to exhibit certain special elements in the images of the tensor product Galois representations $\rho_{f, p} \otimes \rho_{g, p}$ whose existence is important in Euler system theory. These results are used in \cite{KLZ15b} in order to prove finiteness results for Selmer groups. \section{Some profinite group theory} \subsection{Preliminary lemmas} \begin{lemma}[Ribet] \label{lemma:lifting} Let $p \ge 5$ be prime, and let $K_1, \dots, K_t$ be finite unramified extensions of ${\QQ_p}$, with rings of integers $\mathcal{O}_1, \dots, \mathcal{O}_t$ and residue fields $k_1, \dots, k_t$. Let $G$ be a closed subgroup of $\SL_2(\mathcal{O}_1) \times \dots \times \SL_2(\mathcal{O}_t)$ which surjects onto $\PSL_2(k_1) \times \dots \times \PSL_2(k_t)$. Then $G = \SL_2(\mathcal{O}_1) \times \dots \times \SL_2(\mathcal{O}_t)$. \end{lemma} \begin{proof} If we assume that $G$ surjects onto $\SL_2(k_1) \times \dots \times \SL_2(k_t)$ this is a special case of Theorem 2.1 of \cite{ribet75}. So it suffices to check that there is no proper subgroup of $\SL_2(k_1) \times \dots \times \SL_2(k_t)$ surjecting onto $\PSL_2(k_1) \times \dots \times \PSL_2(k_t)$. But this follows readily by induction from the case $t = 1$, which is Lemma IV.3.4.2 of \cite{serre68b}. \end{proof} \begin{lemma}[{cf.~\cite[Lemma IV.3.4.1]{serre68b}}] \label{lemma:groupsoccurring} Let $K$ be a finite extension of ${\QQ_p}$ for some prime $p$, and let $Y_1, Y_2$ be closed subgroups of $\GL_2(\mathcal{O}_K)$ such that $Y_1 \triangleleft Y_2$ and $Y_2 / Y_1$ is a nonabelian finite simple group. Then $Y_2 / Y_1$ is isomorphic to one of the following groups: \begin{itemize} \item $\PSL_2(\mathbf{F})$, where $\mathbf{F}$ is a finite field of characteristic $p$ such that $\# \mathbf{F} \ge 4$; \item the alternating group $A_5$. \end{itemize} \end{lemma} \begin{proof} Since the kernel of $\GL_2(\mathcal{O}_K) \to \PGL_2(k)$ is solvable, where $k$ is the residue field of $K$, we see that any such quotient $Y_2 / Y_1$ is in fact a subquotient of $\PSL_2(k)$. The result now follows from the determination of the subgroup structure of $\PSL_2(k)$, which is due to Dickson; cf.~\cite[Theorem 6.25]{suzuki}. \end{proof} \begin{lemma} If $k$ and $k'$ are any two finite fields of characteristic $\ge 5$ and $\phi: \PSL_2(k) \cong \PSL_2(k')$ is a group isomorphism, then $\phi$ is conjugate in $\PGL_2(k')$ to an isomorphism induced by a field isomorphism $k \cong k'$. \end{lemma} \begin{proof} Since, for finite fields of characteristic $\ge 5$, the groups $\PSL_2(k)$ and $\PSL_2(k')$ are nonisomorphic unless $k \cong k'$, it suffices to check that every group automorphism of $\PSL_2(k)$ is the composite of a field automorphism of $k$ and conjugation by an element of $\PGL_2(k)$, which is a standard fact. \end{proof} We also have an ``infinitesimal'' version of this statement, which we will use later in the paper. \begin{lemma} \label{lemma:liealgs} If $K$ and $K'$ are finite extensions of ${\QQ_p}$ for some prime $p$, $B$ and $B'$ are central simple algebras of degree 2 over $K$ and $K'$ respectively, and the Lie algebras $\operatorname{sl}_1(B)$ and $\operatorname{sl}_1(B')$ are isomorphic as Lie algebras over ${\QQ_p}$, then the isomorphism is induced by a field isomorphism $K \cong K'$ and an isomorphism of central simple algebras $B \cong B'$ over $K$. 
\end{lemma} \begin{proof} We may recover $K$ from $\operatorname{sl}_1(B)$ as the algebra of ${\QQ_p}$-endomorphisms of $\operatorname{sl}_1(B)$ commuting with the adjoint action of $\operatorname{sl}_1(B)$; thus it suffices to consider the case $K' = K$. There are exactly two central simple algebras of degree 2 over any $p$-adic field (one ramified and one unramified), and their Lie algebras are non-isomorphic; and every automorphism of either of these is inner (since the corresponding Dynkin diagram $A_1$ has no automorphisms). \end{proof} \subsection{Subgroups of adele groups} Let $F$ be a \emph{finite \'etale extension} of $\mathbf{Q}$; that is, $F$ is a ring of the form $\bigoplus_{i = 1}^t F_i$, where $F_i$ are number fields. A \emph{quaternion algebra} over $F$ is defined in the obvious way: it is simply an $F$-algebra $B$ of the form $\bigoplus_{i = 1}^t B_i$, where $B_i$ is a quaternion algebra over $F_i$ (a central simple $F_i$-algebra of degree 2); we allow the case of the split algebra $M_{2 \times 2}(F_i)$. There is a natural norm map \[ \norm_{B/F}: B^\times \to F^\times\] (which is just the product of the reduced norm maps of the $B_i$ over $F_i$). We fix a homomorphism of algebraic groups $k: \mathbf{G}_m \to \operatorname{Res}_{F / \mathbf{Q}} \mathbf{G}_m$; this just amounts to a choice of integers $(k_1, \dots, k_t)$ such that $k(\lambda) = (\lambda^{k_1}, \dots, \lambda^{k_t})$. \begin{definition} \label{def:G} For $B$, $F$, $k$ as above, we let $G$ and $G^\circ$ be the algebraic groups over $\mathbf{Q}$ such that for any $\mathbf{Q}$-algebra $R$ we have \[ G(R) = \{ (x, \lambda) \in (B \otimes R)^\times \times R^\times: \norm_{B/F}(x) = \lambda^{1-k} \},\] and \[ G^\circ(R) = \{ x \in (B \otimes R)^\times: \norm_{B/F}(x) = 1\}.\] \end{definition} Then $G$ and $G^\circ$ are linear algebraic groups over $\mathbf{Q}$, and $G^\circ$ is naturally a subgroup of $G$. More generally, we may fix a maximal order $\mathcal{O}_B$ in $B$ and thus define $G$ and $G^\circ$ as group schemes over $\mathbf{Z}$. For all but finitely many primes $p$, we then have \[ G({\ZZ_p}) = \{ (x, \lambda) \in \GL_2(\mathcal{O}_F \otimes {\ZZ_p}) \times {\ZZ_p}^\times: \det(x) = \lambda^{1-k} \};\] changing the choice of $\mathcal{O}_B$ does not change $G({\ZZ_p})$ away from finitely many primes $p$. \begin{theorem} \label{thm:mainthm0} Let $U^\circ$ be a compact closed subgroup of $G^\circ(\hat\mathbf{Q})$, where $\hat\mathbf{Q} = \mathbf{Q} \otimes \hat\mathbf{Z}$ is the finite adeles of $\mathbf{Q}$, such that: \begin{itemize} \item for every prime $p$, the projection of $U^\circ$ to $G^\circ({\QQ_p})$ is open in $G^\circ({\QQ_p})$; \item for all but finitely many primes $p$, the projection of $U^\circ$ to $G^\circ({\QQ_p})$ is $G^\circ({\ZZ_p})$. \end{itemize} Then $U^\circ$ is open in $G^\circ(\hat\mathbf{Q})$. \end{theorem} The proof we shall give of this theorem is a relatively straightforward generalization of the case $F = \mathbf{Q}$, $B = M_{2 \times 2}(\mathbf{Q})$, $k=2$, which is the Main Lemma of \cite[\S IV.3.1]{serre68b}. \begin{proof} Let $S$ be a finite set of primes containing 2, 3, all primes ramified in $F / \mathbf{Q}$, all primes at which $B$ is ramified, and all primes $p$ such that the projection of $U^\circ$ to $G^\circ({\QQ_p})$ is not equal to $G^\circ({\ZZ_p})$. For a prime $p$, let $k_p = \prod_{w \mid p} k_w$, where the product is over primes $w \mid p$ of $F$, and $k_w$ is the residue field of $F$ at $w$. 
Then for each $p \notin S$ we have a natural map \[ U^\circ \to \PSL_2(k_p)\] given by projection to the $p$-component and the natural quotient map. By the definition of $S$, it is surjective. I claim that the restriction of this map to $U^\circ \cap G^\circ({\ZZ_p})$ is also surjective (where we consider $G^\circ({\ZZ_p})$ as a subgroup of $G^\circ(\hat\mathbf{Q})$ in the natural way). If this is not the case, then the group \[ Q = U^\circ / \left(U^\circ \cap G^\circ({\ZZ_p}) \right) \] must have a quotient isomorphic to a nontrivial quotient of $\PSL_2(k_p)$, and in particular this group must surject onto the simple group $\PSL_2(k)$ for some finite field $k$ of characteristic $p$. However, the group $Q$ is exactly the image of $U^\circ$ in $\prod_{q \ne p} G^\circ(\mathbf{Q}_q)$. Hence the finite simple group $\PSL_2(k)$ must be a subquotient of an open compact subgroup of $\prod_{q \ne p} G^\circ(\mathbf{Q}_q)$; but this is not possible, since any such subgroup is a compact subgroup of $\prod_{q \ne p} \GL_2(L \otimes \mathbf{Q}_q)$ for any \'etale extension $L/F$ which splits $B$, and thus is conjugate to a closed subgroup of the maximal compact subgroup $\prod_{q \ne p} \GL_2(\mathcal{O}_L \otimes \mathbf{Z}_q)$, and we know that this group does not have $\PSL_2(k)$ as a subquotient by Lemma \ref{lemma:groupsoccurring}. Hence $U^\circ \cap G^\circ({\ZZ_p})$ is a subgroup of $G^\circ({\ZZ_p}) = \SL_2(\mathcal{O}_F \otimes {\ZZ_p})$ which surjects onto $\PSL_2(k_p)$, and by Lemma \ref{lemma:lifting}, this implies that $G^\circ({\ZZ_p}) \subseteq U^\circ$. So we have $\prod_{p \notin S} G^\circ({\ZZ_p}) \subseteq U^\circ$. In order to show that $U^\circ$ is open in $G^\circ(\hat\mathbf{Q})$, it therefore suffices to show that the image of $U^\circ$ is open in $\prod_{p \in S} G^\circ({\QQ_p})$. However, since $G^\circ({\QQ_p})$ contains a finite-index pro-$p$ subgroup for each $p \in S$, and $S$ is finite, one sees easily by induction on $\#S$ that any subgroup of $\prod_{p \in S} G^\circ({\QQ_p})$ whose projection to $G^\circ({\QQ_p})$ is open for all $p \in S$ must itself be open. \end{proof} \begin{theorem} \label{thm:mainthm} Let $U$ be a compact subgroup of $G(\hat\mathbf{Q})$, where $\hat\mathbf{Q} = \mathbf{Q} \otimes \hat\mathbf{Z}$ is the finite adeles of $\mathbf{Q}$, such that: \begin{itemize} \item for every prime $p$, the projection of $U$ to $G({\QQ_p})$ is open in $G({\QQ_p})$; \item for all but finitely many primes $p$, the projection of $U$ to $G({\QQ_p})$ is $G({\ZZ_p})$; \item the image of $U$ in $\hat\mathbf{Q}^\times$ is open. \end{itemize} Then $U$ is open in $G(\hat\mathbf{Q})$. \end{theorem} \begin{proof} Let $U^\circ = U \cap G^\circ(\hat\mathbf{Q})$. We claim $U^\circ$ satisfies the hypotheses of the previous theorem. Since $G(\hat\mathbf{Q}) / G^\circ(\hat\mathbf{Q}) \cong \hat\mathbf{Q}^\times$ is abelian, the group $U^\circ$ contains the closure of the commutator subgroup of $U$. Since $\SL_2(\mathcal{O}_F \otimes {\ZZ_p})$ is the closure of its own commutator subgroup for $p \ge 5$, we see that if $p \ge 5$ and $U$ surjects onto $G({\ZZ_p})$, then $U^\circ$ surjects onto $G^\circ({\ZZ_p})$. Let us show that for an arbitrary prime $p$, the commutator subgroup of $G^\circ({\ZZ_p})$ has finite index. 
It suffices to show the corresponding result for $\SL_1(\mathcal{O}_B)$ for $B$ a quaternion algebra (possibly split) over a $p$-adic field; and this is equivalent to the statement that the Lie algebra $\mathfrak{sl}_1(B)$ is a nontrivial simple Lie algebra, which is clear since it becomes isomorphic to $\mathfrak{sl}_2$ after base extension to any splitting field of $B$. By the previous theorem, we conclude that $U$ contains an open subgroup of $G^\circ(\hat\mathbf{Q})$. But the image of $U$ in $\hat\mathbf{Q}^\times$ is open by hypothesis, so we conclude that $U$ is in fact open in $G(\hat\mathbf{Q})$. \end{proof} \begin{remark} We cannot dispense with the hypothesis that the image of $U$ in $\hat\mathbf{Q}^\times$ is open: there exist proper closed subgroups of $\hat\mathbf{Z}^\times$ whose projection to $\mathbf{Z}_p^\times$ is open for all $p$, but which are not open in $\hat\mathbf{Z}^\times$, such as the group $\hat\mathbf{Z}^{\times 2}$. We may even arrange that the projection to $\mathbf{Z}_p^\times$ is surjective for all $p$, as with the group $\{ x : x_p \in \mathbf{Z}_p^{\times 2}\ \forall p > 2\} \cup \{ x : x_p \notin \mathbf{Z}_p^{\times 2} \ \forall p > 2\}$. \end{remark} \section{Large image results for one modular form} \label{sect:largeimage} \subsection{Setup} Let $f$ be a normalized cuspidal modular newform of weight $k \ge 2$, level $N$ and character $\varepsilon$. We write $L = \mathbf{Q}(a_n(f): n \ge 1)$ for the number field generated by the $q$-expansion coefficients of $f$. Note that $L$ is totally real if $\varepsilon = 1$, and is a CM field if $\varepsilon \ne 1$. \begin{definition}\mbox{~} \begin{enumerate} \item For $p$ prime, we write \[ \rho_{f, p}: G_{\mathbf{Q}} \to \GL_2(L \otimes {\QQ_p}) \] for the unique (up to isomorphism) representation satisfying $\operatorname{Tr} \rho_{f, p}(\sigma_\ell^{-1}) = a_\ell(f)$ for all $\ell \nmid Np$, where $\sigma_\ell$ is the arithmetic Frobenius. \item We write \[\rho_{f}: G_{\mathbf{Q}} \to \GL_2(L \otimes \hat\mathbf{Q}) \] for the product representation $\prod_p \rho_{f, p}$, where $\hat\mathbf{Q}$ is the ring of finite adeles of $\mathbf{Q}$. \end{enumerate} \end{definition} The condition (1) only determines $\rho_{f, p}$ up to conjugacy, and we can (and do) assume that its image is contained in $\GL_2(\mathcal{O}_L \otimes {\ZZ_p})$, where $\mathcal{O}_L$ is the ring of integers of $L$. Thus $\rho_f$ takes values in $\GL_2(\mathcal{O}_L \otimes \widehat\mathbf{Z})$, where $\widehat\mathbf{Z} = \prod_p {\ZZ_p}$ is the profinite completion of $\mathbf{Z}$. \begin{remark} Our normalizations are such that if $f$ has weight 2, $\rho_{f, p}$ is the representation appearing in the \'etale cohomology of $X_1(N)$ with trivial coefficients. Some authors use an alternative convention that $\operatorname{Tr} \rho_f(\sigma_\ell) = a_\ell(f)$, which gives the representation appearing in the Tate module of the Jacobian $J_1(N)$; this is exactly the dual of the representation we study, so the difference between the two is unimportant when considering the image. \end{remark} \subsection{The theorems of Momose, Ribet, and Papier} For $\chi$ a Dirichlet character, we let $f \otimes \chi$ denote the unique newform such that $a_\ell(f \otimes \chi) = \chi(\ell) a_\ell(f)$ for all but finitely many primes $\ell$. 
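Before turning to inner twists, we record a small consistency check on the normalization fixed above (our remark; it uses only the standard shape of the characteristic polynomial of Frobenius). For $\ell \nmid Np$, the characteristic polynomial of $\rho_{f, p}(\sigma_\ell^{-1})$ is
\[ X^2 - a_\ell(f) X + \varepsilon(\ell)\ell^{k-1}, \]
so that $\det \rho_{f, p} = \varepsilon^{-1} \chi_p^{1-k}$, where $\chi_p$ is the $p$-adic cyclotomic character. In particular, on any subgroup of $G_{\mathbf{Q}}$ on which $\varepsilon$ is trivial, the determinant of $\rho_f$ is the $(1-k)$-th power of the adelic cyclotomic character; this is the fact used below when $\rho_f|_H$ is extended to a representation valued in $G(\hat\mathbf{Q})$.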
\begin{definition}[{\cite[\S 3]{ribet85}}] An \emph{inner twist} of $f$ is a pair $(\gamma, \chi)$, where $\gamma: L \hookrightarrow \mathbf{C}$ is an embedding and $\chi$ is a Dirichlet character, such that the conjugate newform $f^\gamma$ is equal to the twist $f \otimes \chi$. \end{definition} Note that we always have $\overline{f} = f \otimes \varepsilon^{-1}$, so any newform of non-trivial character has at least one nontrivial inner twist. Lemma 1.5 of \cite{momose81} shows that if $(\gamma, \chi)$ is an inner twist of $f$, then $\chi$ takes values in $L^\times$ and $\gamma(L) = L$. Thus the inner twists $(\gamma, \chi)$ of $f$ form a group $\Gamma$ with the group law \[ (\gamma, \chi) \cdot (\sigma, \mu) = (\gamma \cdot \sigma, \chi^{\sigma} \cdot \mu). \] Moreover, for any $(\gamma, \chi) \in \Gamma$, the conductor of $\chi$ divides $N$ if $N$ is odd, and divides $4N$ if $N$ is even. It is well known that if there exists a nontrivial $\chi$ such that $f \otimes \chi = f$, then $f$ must be of CM type and $\chi$ must be the quadratic Dirichlet character attached to the corresponding imaginary quadratic field. We now assume (until further notice) that $f$ is not of CM type. Thus, for any inner twist $(\gamma, \chi) \in \Gamma$, the Dirichlet character $\chi$ is uniquely determined by $f$ and $\gamma$, and we write it as $\chi_\gamma$. The map $(\gamma, \chi) \mapsto \gamma$ identifies $\Gamma$ with an abelian subgroup of $\operatorname{Aut}(L / \mathbf{Q})$; we write $F$ for the subfield of $L$ fixed by $\Gamma$. The extension $L / F$ is Galois, with Galois group $\Gamma$ \cite[Proposition 1.7]{momose81}. Let us write $H$ for the open subgroup of $G_\mathbf{Q}$ which is the intersection of the kernels of the Dirichlet characters $\chi_\gamma$ for $\gamma \in \Gamma$, interpreted as characters of $G_{\mathbf{Q}}$ in the usual way. Then for all $\sigma \in H$ we have $\operatorname{Tr} \rho_f(\sigma) \in F \otimes \hat\mathbf{Q}$. \begin{theorem}[Momose, Ribet, Ghate--Gonzalez-Jimenez--Quer] There exists a central simple algebra $B$ of degree 2 over $F$, unramified outside $2 N \operatorname{disc}(L / \mathbf{Q}) \infty$, and an embedding $B \hookrightarrow M_{2 \times 2}(L)$, with the following property: we have \[ \rho_{f}(H) \subseteq B(F \otimes \hat\mathbf{Q})^\times \subseteq \GL_2(L \otimes \hat\mathbf{Q}).\] Moreover, for all but finitely many primes $p$ we have $B \otimes {\QQ_p} = M_{2 \times 2}(F \otimes {\QQ_p})$, and we may conjugate $\rho_{f, p}$ such that \[ \rho_{f, p}(H) = \{ x \in \GL_2(\mathcal{O}_{F} \otimes {\ZZ_p}) : \operatorname{det} x \in \mathbf{Z}_p^{\times (k - 1)} \} \tag{\dag}.\] \end{theorem} \begin{proof} This result is mostly proved in \cite{ribet85}, building on earlier results of Momose \cite{momose81}; the only statement not covered there is the explicit bound on the set of primes at which $B$ may ramify, which is Corollary 4.7 of \cite{GhateGJQuer}. \end{proof} We will need later in the paper the following refinement: \begin{corollary}[Papier]\label{papier} Let $p$ be such that $B$ and $L$ are unramified above $p$, and $\rho_{f, p}(H)$ is the whole group $(\dag)$; and let $\sigma \in G_{\mathbf{Q}(\mu_{p^\infty})}$. 
Then the image of the coset $\sigma \cdot \left(H \cap G_{\mathbf{Q}(\mu_{p^\infty})}\right)$ under $\rho_{f, p}$ is the set \[ \tbt \alpha 0 0 {\varepsilon(\sigma) \alpha^{-1}}\SL_2(\mathcal{O}_F \otimes {\ZZ_p}),\] for any $\alpha \in (\mathcal{O}_L \otimes {\ZZ_p})^\times$ such that $\gamma(\alpha) = \chi_\gamma(\sigma) \alpha$ for all $\gamma$ in $\Gal(L / F)$. \end{corollary} \begin{proof} See \cite[Theorem 4.1]{ribet85}. (Strictly speaking, Ribet in fact only shows that there is $\alpha \in L^\times$ with this property, and excludes any primes $p$ such that $\alpha$ is not a $p$-adic unit. However, since we have assumed $L / F$ is unramified above $p$, we can always re-scale $\alpha$ to be a $p$-adic unit.) \end{proof} \subsection{Adelic open image for $\GL_2$} Since the determinant of $\rho_f|_H$ is $\chi^{1-k}$, where $\chi: G_{\mathbf{Q}} \to \hat\mathbf{Z}^\times$ is the adelic cyclotomic character, we can extend $\rho_f$ to a homomorphism $\tilde\rho_f: H \to G(\hat\mathbf{Q})$, where $G$ is the algebraic group of Definition \ref{def:G} (for the specific choices of $B$, $F$ and $k$ as in this section). This homomorphism is characterized by the requirement that its projection to $\GL_2(L \otimes \hat\mathbf{Q})$ is $\rho_f$, and its projection to $\hat\mathbf{Q}^\times$ is the cyclotomic character. Applying Theorem \ref{thm:mainthm} to $\tilde\rho_f(H)$, we obtain the first new result of this paper: \begin{theorem} \label{thm:bigimageGL2} The image of $H$ under $\tilde\rho_f$ is an open subgroup of $G(\hat\mathbf{Q})$. \end{theorem} \begin{remark} One can show exactly the same result with modular forms replaced by Hilbert modular forms for a totally real field $E$, since the Momose--Ribet theorem has been generalized to this context by Nekov\'a\v{r} \cite[Theorem B.4.10]{nekovar12}. We have stated the result only for elliptic modular forms in order to save notation. \end{remark} \subsection{The CM case} For completeness, we briefly describe the image of $\tilde\rho_f$ in the CM case. Let us now suppose $f$ is of weight $k \ge 2$ and is of CM type, associated to some Hecke character \[ \psi: \hat{K}^\times \to \tilde{L}^\times\] for some imaginary quadratic field $K$ and Gr\"ossencharacter $\psi$ of infinity-type $(1-k, 0)$, with $\psi$ taking values in some extension $\tilde{L}$ of $L$. The relation between $f$ and $\psi$ is given by \[ a_p(f) = \psi(\varpi_\mathfrak{p}) + \psi(\varpi_{\mathfrak{p}'})\] whenever $p$ is a rational prime, not dividing the level of $f$, which splits in $K$ as $\mathfrak{p} \mathfrak{p}'$. Here $\varpi_{\mathfrak{p}} \in \widehat{K}^\times$ is a uniformizer at $\mathfrak{p}$. Let us write $\hat\psi$ for the homomorphism $K^\times \backslash \hat{K}^\times \to (\hat{K} \otimes_K \tilde{L})^\times$ defined by \[ \hat\psi(x) = x^{1-k} \psi(x). \] If we identify $K^\times \backslash \hat{K}^\times$ with $G_K^{\mathrm{ab}}$ via the Artin map\footnote{Normalized in the French manner, so geometric Frobenius elements correspond to uniformizers.}, then the adelic Galois representation $\rho_g$ is given by $\Ind_{G_K}^{G_{\mathbf{Q}}} (\hat\psi)$. Note that there is a finite-index subgroup $U \subseteq \hat\mathcal{O}_K^\times$ contained in the kernel of $\psi$; thus the image of $\hat\psi$ contains a finite-index subgroup of the group $\{ x^{1 - k} : x \in \hat\mathcal{O}_K^\times\}$. 
In particular, for almost all primes $p$ the image of $G_K$ under $\rho_{g, p}$ contains the group \[ \left\{ \tbt{x^{1-k}}{0}{0}{\bar{x}^{1-k}} : x \in (\mathcal{O}_K \otimes {\ZZ_p})^\times \right\}.\] \section{Joint large image} \label{sect:jointlargeimage} \subsection{Preliminaries} Now let $f$, $g$ be two newforms of weights $k_f, k_g \ge 2$, levels $N_f, N_g$ and characters $\varepsilon_f$ and $\varepsilon_g$, respectively. We assume neither $f$ nor $g$ is of CM type, and we write $L_f, L_g$ for their coefficient fields. We will need the following lemma: \begin{lemma} \label{lemma:ramakrishnan} Suppose there exist embeddings $L_f, L_g \hookrightarrow \mathbf{C}$ such that we have \[ \frac{a_\ell(f)^2}{\ell^{k_f-1} \varepsilon_f(\ell)} = \frac{a_\ell(g)^2}{\ell^{k_g-1} \varepsilon_g(\ell)} \] for a set of primes $\ell$ of positive upper density. Then there is a Dirichlet character $\chi$ such that $g = f \otimes \chi$. \end{lemma} \begin{proof} This is a special case of Theorem A of \cite{ramakrishnan00b}. \end{proof} \begin{remark} Recall that the \emph{upper density} of a set of primes $S$ is defined by \[ \operatorname{UD}(S) = \limsup_{X \to \infty} \frac{\# \{ \ell \in S : \ell \le X\}}{\# \{ \ell : \ell \le X\}}.\] We will need below the easily-verified fact that if $S_1, \dots, S_n$ are sets of primes, then \[ \operatorname{UD}(S_1 \cup \dots \cup S_n) \le \operatorname{UD}(S_1) + \dots + \operatorname{UD}(S_n),\] so if $S_1 \cup \dots \cup S_n$ has positive upper density, then at least one of the sets $S_i$ has positive upper density. \end{remark} We can obviously apply the theory of the previous section to each of $f$ and $g$, and we use the subscripts $f, g$ to refer to the corresponding objects for each form; so we have number fields $F_f, F_g$, quaternion algebras $B_f, B_g$, and algebraic groups $G_f, G_g$. We may unify these as follows: we let $F = F_f \times F_g$, which is an \'etale extension of $\mathbf{Q}$, and $B = B_f \times B_g$, which is a quaternion algebra over $F$; and the group $G^\circ$ of norm 1 elements of $G$ is just $G^\circ_f \times G^\circ_g$. We let \[ k: \mathbf{G}_m \to \operatorname{Res}_{F / \mathbf{Q}} \mathbf{G}_m = \operatorname{Res}_{F_f / \mathbf{Q}} \mathbf{G}_m \times \operatorname{Res}_{F_g / \mathbf{Q}} \mathbf{G}_m\] be the character sending $\lambda$ to $(\lambda^{k_f}, \lambda^{k_g})$. Then Definition \ref{def:G} gives us an algebraic group \begin{align*} G &= \{ (x, \lambda) \in B^\times \times \mathbf{G}_m : \norm(x) = \lambda^{1-k}\}\\ &= \{ (x_f, x_g, \lambda) \in B_f^\times \times B_g^\times \times \mathbf{G}_m: \norm(x_f) = \lambda^{1-k_f}, \norm(x_g) = \lambda^{1-k_g} \}. \end{align*} This is, of course, just the fibre product of $G_f$ and $G_g$ over $\mathbf{G}_m$. Letting $H = H_f \cap H_g$, we have a representation \[ \tilde\rho_{f, g}: G_{\mathbf{Q}} \to G(\hat\mathbf{Q}), \] and in particular \[ \tilde\rho_{f, g, p}: G_{\mathbf{Q}} \to G({\QQ_p})\] for all primes $p$. \subsection{Big image for almost all $p$} \begin{proposition} \label{prop:bigimage2} Let $p \ge 5$ be a prime unramified in $B$, and let $U$ be a subgroup of $G({\ZZ_p})$ which surjects onto $G_f({\ZZ_p})$ and $G_g({\ZZ_p})$. Then either $U = G({\ZZ_p})$, or (after possibly conjugating $U$) there are primes $v \mid p$ of $\mathcal{O}_{F_f}$, $w \mid p$ of $\mathcal{O}_{F_g}$ and an isomorphism \[ \mathcal{O}_{F_f, v} \cong \mathcal{O}_{F_g, w}, \] such that for all $(x, y, \lambda) \in U$ we have $ (y \bmod w) = \pm \lambda^{(k_f-k_g)/2}(x \bmod v)$. 
\end{proposition}
\begin{proof}
This is visibly a generalization of Proposition 7.2.8 of \cite{LLZ14}, and we follow essentially the same argument. (We have changed notation from $H$ to $U$ to avoid confusion with the Galois group $H$ above.) Let $U^\circ = U \cap G^\circ({\ZZ_p})$. By the same commutator argument as before, $U^\circ$ is a subgroup of $G^\circ({\ZZ_p}) = G_f^\circ({\ZZ_p}) \times G_g^\circ({\ZZ_p})$ which surjects onto either factor. By Goursat's Lemma, there are closed normal subgroups $N_f \triangleleft G_f^\circ({\ZZ_p})$ and $N_g \triangleleft G_g^\circ({\ZZ_p})$ such that the image of $U^\circ$ in $G_f^\circ({\ZZ_p}) / N_f \times G_g^\circ({\ZZ_p})/N_g$ is the graph of an isomorphism $\phi: G_f^\circ({\ZZ_p}) / N_f \cong G_g^\circ({\ZZ_p})/N_g$. The maximal normal closed subgroups of $G_f^\circ({\ZZ_p})$ are precisely the kernels of the quotient maps to $\PSL_2(k_v)$ for each prime $v \mid p$ of $F_f$, and every automorphism of $\PSL_2(k_v)$ is the composite of a field automorphism of $k_v$ and conjugation by an element of $\PGL_2(k_v)$. Hence, after possibly replacing $U$ by a conjugate of $U$ in $G({\ZZ_p})$, we may find primes $v \mid p$ of $F_f$ and $w \mid p$ of $F_g$, and an isomorphism $\mathcal{O}_{F_f, v} \cong \mathcal{O}_{F_g, w}$, such that $U^\circ$ is contained in a conjugate of the group \[ \{ (x, y) \in G_f^\circ({\ZZ_p}) \times G_g^\circ({\ZZ_p}) : x \bmod v = \pm y \bmod w \}.\] For a general element $(x, y, \lambda) \in U$, let $t = (x \bmod v)^{-1} (y \bmod w) \in \GL_2(\mathbf{F})$, where $\mathbf{F}$ denotes the common residue field of $v$ and $w$, and let $[t]$ denote its image in $\GL_2(\mathbf{F}) / \{\pm 1\}$. For any element $(u, u') \in U^\circ$, we have the same commutator identity as in \cite[Proposition 7.2.8]{LLZ14}, \[ [u^{-1} t u] = [u^{-1} x^{-1} y u] = [x^{-1}][(xux^{-1})^{-1}(y u'y^{-1})][y][u'^{-1} u] = [x^{-1} y] = [t],\] since both $(x u x^{-1}, y u' y^{-1})$ and $(u, u')$ lie in $U^\circ$. This shows that $[t]$ commutes with every element of $\PSL_2(\mathbf{F})$, so that $t$ is a scalar matrix. It is clear that we must have $t^2 = \lambda^{k_f - k_g}$ by comparing determinants, and this gives the result.
\end{proof}
\begin{theorem} \label{thm:goodprimes} If $f$ is not Galois-conjugate to a twist of $g$, then for all but finitely many primes $p$ we have $\rho_{f, g, p}(H) = G({\ZZ_p})$. \end{theorem}
\begin{proof}
Let us fix embeddings of $F_f$ and $F_g$ into $\mathbf{C}$, and let $F$ be their composite. Proposition \ref{prop:bigimage2} shows that for all $p$ outside some finite set $S$, if $\rho_{f, g, p}(H) \ne G({\ZZ_p})$, then there is some prime $v$ of $F$ above $p$ dividing the product \[ \prod_{\gamma \in \Gal(F_g / \mathbf{Q})} \left( a_\ell(f)^2 - \ell^{k_f-k_g} \gamma(a_\ell(g))^2\right) \] for all primes $\ell$ whose Frobenius elements lie in $H$. Since no nonzero element of $F$ may be divisible by infinitely many primes, we deduce that either $\rho_{f, g, p}(H) = G({\ZZ_p})$ for all but finitely many $p$, or the above product is zero, so for each prime $\ell$ whose Frobenius lies in $H$, there is $\gamma \in \Gal(F_g / \mathbf{Q})$ (possibly depending on $\ell$) such that we have \[ \frac{a_\ell(f)^2}{\ell^{k_f-1} \varepsilon_f(\ell)} = \gamma\left(\frac{a_\ell(g)^2}{\ell^{k_g-1} \varepsilon_g(\ell)}\right) \] (since $\varepsilon_f(\ell) = \varepsilon_g(\ell) = 1$ for all such $\ell$). Since there are only finitely many possible $\gamma$, there must be at least one $\gamma \in \Gal(F_g / \mathbf{Q})$ such that the above equality holds for a set of $\ell$ of positive upper density.
By Lemma \ref{lemma:ramakrishnan}, this implies that for some (and hence any) $\gamma' \in \Gal(L_g / \mathbf{Q})$ lifting $\gamma$, the conjugate form $g^{\gamma'}$ is a twist of $f$.
\end{proof}
\subsection{Open image for all $p$}
\begin{proposition} Let $p$ be arbitrary and let $U$ be a closed subgroup of $G({\ZZ_p})$ which has open image in $G_f({\ZZ_p})$ and $G_g({\ZZ_p})$. Then either $U$ is open in $G({\ZZ_p})$, or there are primes $v$ of $F_f$ and $w$ of $F_g$ above $p$, a field isomorphism $F_{f, v} \cong F_{g, w}$, and an isomorphism $B_f \otimes F_{f, v} \cong B_g \otimes F_{g, w}$, such that $U$ has a finite-index subgroup contained in a conjugate of the subgroup \[ \{ (x, y, \lambda) \in G({\ZZ_p}): y_w = \lambda^{(k_f-k_g)/2} x_v\} \] where $x_v$ and $y_w$ are the projections of $x$ and $y$ to the direct summands $(B_f \otimes F_{f, v})^\times$ and $(B_g \otimes F_{g, w})^\times$.
\end{proposition}
\begin{proof} This follows in a very similar way to Proposition \ref{prop:bigimage2} with all the groups concerned replaced by their Lie algebras. We know that $\mathfrak{u} = \operatorname{Lie}(U)$ is a subalgebra of $\operatorname{Lie}(G)$ which surjects onto $\operatorname{Lie}(G_f)$ and $\operatorname{Lie}(G_g)$. Since $G^\circ_f$ and $G^\circ_g$ are semi-simple we deduce that $\mathfrak{u}^\circ = \operatorname{Lie}(U^\circ)$ is a subalgebra of $\operatorname{Lie}(G_f^\circ) \oplus \operatorname{Lie}(G_g^\circ)$ surjecting onto either factor. By Goursat's Lemma for Lie algebras, we deduce that it must be contained in the graph of an isomorphism between simple factors of $\operatorname{Lie}(G_f^\circ)$ and $\operatorname{Lie}(G_g^\circ)$. Using Lemma \ref{lemma:liealgs}, we deduce the above result. \end{proof}
\begin{proposition} If $f$ is not Galois-conjugate to a twist of $g$, then $\rho_{f, g, p}(H)$ is open in $G({\ZZ_p})$ for all primes $p$. \end{proposition}
\begin{proof} By the previous result, if $\rho_{f, g, p}(H)$ is not open in $G({\ZZ_p})$, there is an element $\gamma \in \Gal(F_g / \mathbf{Q})$ and a set of primes $\ell$ of positive upper density such that we have \[ \frac{a_\ell(f)^2}{\ell^{k_f-1} \varepsilon_f(\ell)} = \gamma\left(\frac{a_\ell(g)^2}{\ell^{k_g-1} \varepsilon_g(\ell)}\right). \] Ramakrishnan's theorem now tells us that a Galois conjugate of $g$ is a twist of $f$. \end{proof}
\subsection{Adelic big image}
\begin{theorem} Let $f$, $g$ be non-CM-type cusp forms of weights $k_f, k_g \ge 2$. Then either $\rho_{f, g}(H)$ is open in $G(\hat\mathbf{Q})$, or $k_f = k_g$ and $f$ is Galois-conjugate to a twist of $g$. \end{theorem}
\begin{proof} Suppose $f$ is not Galois-conjugate to a twist of $g$. Then, by the results of the previous two subsections, $\rho_{f, g}(H)$ is a compact subgroup of $G(\hat\mathbf{Q})$ whose projection to $G({\QQ_p})$ is open for all primes $p$, and equal to $G({\ZZ_p})$ for all but finitely many $p$. Applying Theorem \ref{thm:mainthm}, we deduce that this subgroup must be open in $G(\hat\mathbf{Q})$. \end{proof}
Via exactly the same methods and induction on $n$, one can prove the following generalization. We shall not give the proof here, as the notation becomes somewhat cumbersome, but the arguments are exactly as before:
\begin{theorem} Let $f_1, \dots, f_n$ be newforms of weights $k_1, \dots, k_n \ge 2$.
Then either
\begin{itemize}
\item there is a Dirichlet character $\chi$ and $i, j \in \{1, \dots, n\}$ such that $f_i \otimes \chi$ is Galois-conjugate to $f_j$, with $\chi \ne 1$ if $i = j$;
\item or there is an open subgroup $H$ of $G_{\mathbf{Q}}$ such that the image of $H$ under the map \[ \rho_{f_1} \times \dots \times \rho_{f_n} \times \chi: G_{\mathbf{Q}} \to \GL_2(L_{f_1} \otimes \hat\mathbf{Q}) \times \dots \times \GL_2(L_{f_n} \otimes \hat\mathbf{Q}) \times \hat\mathbf{Q}^\times\] is an open subgroup of $G(\hat\mathbf{Q})$, where $G$ is the algebraic group \[ \left\{ (g_1, \dots, g_n, \lambda) \in B_{f_1}^\times \times \dots \times B_{f_n}^\times \times \mathbf{G}_m: \norm(g_i) = \lambda^{1-k_i}\right\}.\]
\end{itemize}
\end{theorem}
\begin{remark} Note that Serre \cite{serre91} has formulated a general conjecture on the image of Galois representations for motives: for any motive $M$ of rank $r$ over a number field $K$, one can define a connected subgroup $MT(M)$ of $\GL_r / \mathbf{Q}$ such that the image of $\rho_{M}: G_{K} \to \GL_r(\hat\mathbf{Q})$ is contained in $MT(M)(\hat\mathbf{Q})$. Thus the image of some finite-index subgroup $H$ of $G_{K}$ lands in $MT^0(M)(\hat\mathbf{Q})$, where $MT^0(M)$ is the identity component. In general one does not expect $\rho_{M}(H)$ to be open in $MT^0(M)(\hat\mathbf{Q})$, because of obstructions arising from isogenies; e.g. if $M = \mathbf{Q}(2)$, then $MT(M) = \mathbf{G}_m$, but the image of $G_{\mathbf{Q}}$ is the group of squares in $\hat\mathbf{Z}^\times$, which is not open. However, there is a distinguished class of ``maximal'' motives for which this should be the case. The motive $M(f)$ attached to a weight $k$ modular form is not maximal if $k > 2$, but $M(f) \oplus \mathbf{Q}(1)$ is maximal if $f$ is not of CM type (cf.~\S 11.10 of op.cit.), and the group $G_f$ is the connected component of $MT(M(f) \oplus \mathbf{Q}(1))$. Thus we have verified Serre's open image conjecture for the maximal motives \[ M(f_1) \oplus \dots \oplus M(f_n) \oplus \mathbf{Q}(1)\] whenever the $f_i$ are non-CM forms of weight $\ge 2$ and no $f_i$ is Galois-conjugate to a twist of $f_j$. \end{remark}
\section{Special elements in the images}
\label{sect:specialelt}
\subsection{Setup}
This section is more technical, and was the original motivation for the present work: to find elements in the images of $\rho_{f, p} \times \rho_{g, p}$ with certain special properties. In this section we fix newforms $f, g$ as before, and a Galois extension $L / \mathbf{Q}$ with embeddings $L_f, L_g \hookrightarrow L$; we then have representations $\rho_{f, \mathfrak{p}}, \rho_{g, \mathfrak{p}}: G_{\mathbf{Q}} \to \GL_2(\mathcal{O}_{L, \mathfrak{p}})$ for each prime $\mathfrak{p}$ of $L$. Let $V_\mathfrak{p}$ be the four-dimensional $L_{\mathfrak{p}}$-vector-space $L_{\mathfrak{p}}^{\oplus 4}$, with $G_{\mathbf{Q}}$ acting via the tensor-product Galois representation $\rho_{f, \mathfrak{p}} \otimes \rho_{g, \mathfrak{p}}$; and let $T_\mathfrak{p}$ be the $G_{\mathbf{Q}}$-stable $\mathcal{O}_{L, \mathfrak{p}}$-lattice $\mathcal{O}_{L, \mathfrak{p}}^{\oplus 4}$ in $V_{\mathfrak{p}}$. Our aim is to verify the following conditions, in as many cases as possible:
\begin{hypothesis}[$\operatorname{Hyp}(\mathbf{Q}(\mu_{p^\infty}), V_{\mathfrak{p}})$]\mbox{~}
\begin{enumerate}
\item $V_{\mathfrak{p}}$ is an irreducible $L_{\mathfrak{p}}\left[G_{\mathbf{Q}(\mu_{p^\infty})}\right]$-module (where $p$ is the rational prime below $\mathfrak{p}$).
\item There is an element $\tau \in G_{\mathbf{Q}(\mu_{p^\infty})}$ such that $V_{\mathfrak{p}} / (\tau - 1)V_{\mathfrak{p}}$ has dimension 1 over $L_{\mathfrak{p}}$. \end{enumerate} \end{hypothesis} \begin{hypothesis}[$\operatorname{Hyp}(\mathbf{Q}(\mu_{p^\infty}), T_{\mathfrak{p}})$]\mbox{~} \begin{enumerate} \item $T_{\mathfrak{p}} \otimes k_{\mathfrak{p}}$ is an irreducible $k_{\mathfrak{p}}\left[G_{\mathbf{Q}(\mu_{p^\infty})}\right]$-module, where $k_{\mathfrak{p}}$ is the residue field of $L_{\mathfrak{p}}$. \item There is an element $\tau \in G_{\mathbf{Q}(\mu_{p^\infty})}$ such that $T_{\mathfrak{p}} / (\tau - 1) T_{\mathfrak{p}}$ is free of rank 1 over $\mathcal{O}_{L, \mathfrak{p}}$. \end{enumerate} \end{hypothesis} Our formulation of these is exactly that of \cite[Chapter 2]{rubin00}. Note that $\operatorname{Hyp}(\mathbf{Q}(\mu_{p^\infty}), T_{\mathfrak{p}}) \Rightarrow \operatorname{Hyp}(\mathbf{Q}(\mu_{p^\infty}), V_{\mathfrak{p}})$. We note the following preliminary negative result: \begin{proposition} If $\varepsilon_f \varepsilon_g$ is the trivial character, then $\operatorname{Hyp}(\mathbf{Q}(\mu_{p^\infty}), V_{\mathfrak{p}})$ is false (for every prime $\mathfrak{p}$). \end{proposition} \begin{proof} If $\varepsilon_f \varepsilon_g$ is trivial, the image of $G_{\mathbf{Q}(\mu_{p^\infty})}$ under $\rho_{f, \mathfrak{p}} \times \rho_{g, \mathfrak{p}}$ is contained in the subgroup $\left\{ (x, y) \in \GL_2(L_\mathfrak{p}) \times \GL_2(L_\mathfrak{p}) : \det(xy) = 1\right\}$. An easy case-by-case check shows that the image of this subgroup under the tensor-product map to $\GL_4(L_\mathfrak{p})$ contains no element $\tau$ such that $\tau - 1$ has one-dimensional cokernel. \end{proof} \subsection{Special elements: the higher-weight case} \label{sect:higherwt} In this section, we assume $f$ and $g$ have weights $\ge 2$, both $f$ and $g$ are non-CM, and $f$ is not Galois-conjugate to any twist of $g$. We say $\mathfrak{p}$ is a \emph{good prime} if the prime $p$ of $\mathbf{Q}$ below $\mathfrak{p}$ is $\ge 5$, $p$ is unramified in the quaternion algebra $B$ over $F_f \oplus F_g$ described above, $p \nmid N_f N_g$, and the conclusion of Theorem \ref{thm:goodprimes} holds for $p$. For any good prime, it is clear that the irreducibility hypothesis (1) in $\operatorname{Hyp}(\mathbf{Q}(\mu_{p^\infty}), T_{\mathfrak{p}})$ is satisfied. For convenience we set $N = \operatorname{LCM}(N_f, N_g)$ if $N_f$ and $N_g$ are both odd, and $N = 4\operatorname{LCM}(N_f, N_g)$ otherwise, so for any inner twist $(\gamma, \chi)$ of either $f$ or $g$, the conductor of $\chi$ divides $N$. \begin{proposition} \label{prop:existencetau} Let $u \in (\mathbf{Z} / N \mathbf{Z})^\times$ be such that $\varepsilon_f(u) \varepsilon_g(u) \ne 1$. Let $\mathfrak{p}$ be a good prime, and suppose that $\chi_{\gamma}(u) = 1$ for all $\gamma$ in the decomposition group of $\mathfrak{p}$ in $\Gamma_f$, and similarly for $\Gamma_g$. Then $\Hyp(\mathbf{Q}(\mu_{p^\infty}), V_{\mathfrak{p}})$ holds; and if $p \ge 7$ and $\varepsilon_f \varepsilon_g(u) \ne 1 \bmod p$, then in fact $\Hyp(\mathbf{Q}(\mu_{p^\infty}), T_{\mathfrak{p}})$ holds. 
\end{proposition} \begin{proof} The condition on the decomposition groups implies that for $\sigma \in G_{\mathbf{Q}(\mu_{p^\infty})}$ whose image in $(\mathbf{Z} / N_f N_g \mathbf{Z})^\times$ is $u$, the quantities $\alpha$ arising in Papier's theorem (Corollary \ref{papier}) for $f$ and $g$ lie in $F_{f, \mathfrak{p}}$ and $F_{g, \mathfrak{p}}$ respectively, so we have $\rho_{f, \mathfrak{p}}(\sigma) \in \GL_2(F_{f, \mathfrak{p}})$ and $\rho_{g, \mathfrak{p}}(\sigma) \in \GL_2(F_{g, \mathfrak{p}})$. Since \[ (\rho_{f, \mathfrak{p}} \times \rho_{g, \mathfrak{p}})\left(H \cap G_{\mathbf{Q}(\mu_{p^\infty})}\right) = \SL_2(\mathcal{O}_{F_f, \mathfrak{p}}) \times \SL_2(\mathcal{O}_{F_g, \mathfrak{p}}),\] it follows that the image of $G_{\mathbf{Q}(\mu_{p^\infty})}$ under $\rho_{f, \mathfrak{p}} \times \rho_{g, \mathfrak{p}}$ contains the element \[ \left( \tbt x 0 0 {x^{-1} \varepsilon_f(u)}, \tbt y 0 0 {y^{-1} \varepsilon_g(u)} \right) \] for any $x \in \mathcal{O}_{F_f, \mathfrak{p}}^\times$ and $y \in \mathcal{O}_{F_g, \mathfrak{p}}^\times$. Choosing $x, y \in \mathbf{Z}_p^\times$ with $xy = 1$ and $x^{-2}\varepsilon_f(u) \ne 1$, $x^2 \varepsilon_g(u) \ne 1$ we see that the image of this element under the tensor product map is diagonal and has exactly one entry equal to 1, so $\Hyp(\mathbf{Q}(\mu_{p^\infty}), V_{\mathfrak{p}})$ holds. If $p \ge 7$, then we may choose $x$ such that $x^{-2} \varepsilon_f(u) \ne 1$, $x^2 \varepsilon_g(u) \ne 1$ modulo $p$ (as there are at least three distinct quadratic residues modulo $p$); and the condition $\varepsilon_f \varepsilon_g(u) \ne 1 \bmod p$ implies that the fourth diagonal entry is also not equal to 1 modulo $p$. So $\Hyp(\mathbf{Q}(\mu_{p^\infty}), T_{\mathfrak{p}})$ holds. \end{proof} \begin{remark} In particular, the proposition applies if $\varepsilon_f \varepsilon_g \ne 1$ and $F_{f, \mathfrak{p}} = L_{f, \mathfrak{p}}$ and $F_{g, \mathfrak{p}} = L_{g, \mathfrak{p}}$, since in this case both decomposition groups are trivial and we may take any $u$ with $\varepsilon_f \varepsilon_g(u) \ne 1$. See \cite[Proposition 7.2.18]{LLZ14}, which is the special case where $L_{f, \mathfrak{p}} = L_{g, \mathfrak{p}} = {\QQ_p}$. \end{remark} \begin{proposition} \label{prop:existencetauII} Suppose there exists $u \in (\mathbf{Z} / N \mathbf{Z})^\times$ such that $\varepsilon_g(u) = -1$, but $\chi_\gamma(u) = 1$ for all $\gamma \in \Gamma_f$. Then for \emph{all} good primes $\mathfrak{p}$, $\Hyp(\mathbf{Q}(\mu_{p^\infty}), T_{\mathfrak{p}})$ holds. \end{proposition} \begin{proof} Since $p \nmid N_f N_g$, we may find $\sigma \in G_{\mathbf{Q}(\mu_{p^\infty})}$ mapping to $u$. By Papier's theorem (Corollary \ref{papier} above), the image of the coset $\sigma \cdot (H \cap G_{\mathbf{Q}(\mu_{p^\infty})})$ under $\rho_{f, \mathfrak{p}} \times \rho_{g, \mathfrak{p}}$ is the set \[\left \{(x, y) : x \in \SL_2(\mathcal{O}_{F_f, \mathfrak{p}}), y \in \stbt \alpha 0 0 {-\alpha^{-1}}\SL_2(\mathcal{O}_{F_g, \mathfrak{p}})\right\},\] where $\alpha \in \mathcal{O}_{L_g, \mathfrak{p}}^\times$ is any element such that $\gamma(\alpha) = \chi_{\gamma}(\sigma) \alpha$ for all inner twists $(\gamma, \chi_\gamma)$ of $g$ such that $\gamma$ lies in the decomposition group of $\mathfrak{p}$. However, the coset $\stbt \alpha 0 0 {-\alpha^{-1}}\SL_2(\mathcal{O}_{F_g, \mathfrak{p}})$ contains $\stbt \alpha 0 0 {-\alpha^{-1}} \stbt 0 1 {-1} 0 = \stbt 0 \alpha {\alpha^{-1}} 0$. 
Since $\alpha$ is only defined up to multiplication by $\mathcal{O}_{F_g, \mathfrak{p}}^\times$, we may assume that $\alpha^2 \ne 1 \bmod \mathfrak{p}$ (using the assumption that $p \ge 5$). Then the element $\stbt 0 \alpha {\alpha^{-1}} 0$ is conjugate in $\GL_2(\mathcal{O}_{L_g, \mathfrak{p}})$ to $\stbt 1 0 0 {-1}$. Hence the group $G_{\mathbf{Q}(\mu_{p^\infty})}$ contains an element $\tau$ whose image in $\GL_2(\mathcal{O}_{L_f, \mathfrak{p}}) \times \GL_2(\mathcal{O}_{L_g, \mathfrak{p}})$ is conjugate to $\left( \stbt 1 1 0 1, \stbt 1 0 0 {-1} \right)$, and this acts on $T_\mathfrak{p}$ with cokernel free of rank 1 as desired.
\end{proof}
\begin{remark} Note in particular that the hypotheses of the preceding proposition are satisfied if $g$ has odd weight, and either $N_f$ and $N_g$ are coprime, or $f$ has trivial character and no nontrivial inner twists. \end{remark}
\subsection{Special elements: the CM case}
We now suppose that $f$, $g$ both have weights $\ge 2$, as before, and $f$ is non-CM, but $g$ is CM, associated to a Gr\"ossencharacter $\psi$ of an imaginary quadratic field $K$. Let $\tilde L_g$ be the extension of $L_g$ in which the values of $\psi$ lie, and let us suppose that our embedding $L_g \hookrightarrow L$ extends to an embedding $\tilde L_g \hookrightarrow L$. We let $H$ be an open subgroup of $G_K$, with $G_K / H$ abelian, such that $H \subseteq H_f$ and $\hat\psi(H) \subseteq (\hat\mathcal{O}_K)^{\times(1 - k)}$. In this CM setting, we say a prime $\mathfrak{p}$ of $L$ (above some rational prime $p$) is \emph{good} if $p \nmid N_f N_g$, $\mathfrak{p}$ is unramified in $F_f$ and in the quaternion algebra $B_f$, the image of $H$ under $\rho_{f, p}$ contains $G_f({\ZZ_p})$, and the image of $G_K$ under $\hat\psi_p$ contains $(\mathcal{O}_K \otimes {\ZZ_p})^{\times(1-k)}$. Since $H$ is open in $G_\mathbf{Q}$, all but finitely many primes $p$ are good, as before.
\begin{proposition} Suppose there exists $u \in (\mathbf{Z} / N \mathbf{Z})^\times$ such that $\varepsilon_f \varepsilon_g(u) \ne 1$ and $\varepsilon_K(u) = 1$, where $\varepsilon_K$ is the quadratic Dirichlet character attached to $K$. Let $\mathfrak{p}$ be a good prime such that $L_{f, \mathfrak{p}} = F_{f, \mathfrak{p}}$ and $\tilde{L}_{g, \mathfrak{p}} = {\QQ_p}$. Then $\Hyp(\mathbf{Q}(\mu_{p^\infty}), V_{\mathfrak{p}})$ holds; and if $p \ge 7$ and $\varepsilon_f \varepsilon_g(u) \ne 1 \bmod p$, then in fact $\Hyp(\mathbf{Q}(\mu_{p^\infty}), T_{\mathfrak{p}})$ holds.
\end{proposition}
\begin{proof} This is similar to Proposition \ref{prop:existencetau}. Since $\SL_2(\mathcal{O}_{F_f, \mathfrak{p}})$ and $\mathbf{Z}_p^\times$ have no common quotient, the image of $H \cap G_{\mathbf{Q}(\mu_{p^\infty})}$ under $\rho_{f, \mathfrak{p}} \times \rho_{g, \mathfrak{p}}$ is the whole of the group \[ \SL_2(\mathcal{O}_{F_f, \mathfrak{p}}) \times \left\{ \tbt y 0 0 {y^{-1}}: y \in \mathbf{Z}_p^\times\right\}.\] If we choose $\sigma \in G_{\mathbf{Q}(\mu_{p^\infty})}$ lifting $u$, then $\rho_{f, \mathfrak{p}}(\sigma) \in \GL_2(\mathcal{O}_{F_f, \mathfrak{p}})$, and $\rho_{g, \mathfrak{p}}(\sigma)$ is diagonal; thus the image of the coset $\sigma \cdot(H \cap G_{\mathbf{Q}(\mu_{p^\infty})})$ contains all elements of the form \[ \left( \tbt x 0 0 {x^{-1} \varepsilon_f(u)}, \tbt y 0 0 {y^{-1} \varepsilon_g(u)} \right) \] with $x \in \mathcal{O}_{F_f, \mathfrak{p}}^\times$ and $y \in \mathbf{Z}_p^\times$. The proof now proceeds as before.
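To spell out the final step (a routine check along the lines of the proof of Proposition~\ref{prop:existencetau}): taking $y = x^{-1}$ with $x \in \mathbf{Z}_p^\times$, the image of the displayed element under the tensor product map is, in a suitable ordering of the basis,
\[ \operatorname{diag}\bigl(1,\ x^{2}\varepsilon_g(u),\ x^{-2}\varepsilon_f(u),\ \varepsilon_f\varepsilon_g(u)\bigr), \]
and $x$ is chosen so that only the first diagonal entry equals $1$ (respectively, so that only the first entry is $1$ modulo $p$ when $p \ge 7$), which yields $\Hyp(\mathbf{Q}(\mu_{p^\infty}), V_{\mathfrak{p}})$ (respectively $\Hyp(\mathbf{Q}(\mu_{p^\infty}), T_{\mathfrak{p}})$).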
\end{proof}
\subsection{Special elements: the weight one case}
\label{sect:wt1}
We now assume $g$ is a weight 1 form, so the Galois representation $\rho_g$ lands in $\GL_2(L_g) \subset \GL_2(L_g \otimes \hat\mathbf{Q})$, and has finite image (i.e. it is an Artin representation). In this section we \emph{do} permit $g$ to be of CM type. As in the previous section, we assume that our other newform $f$ has weight $\ge 2$ and is not of CM type.
\begin{theorem} Suppose $N_f$ is coprime to $N_g$. Then for all primes $\mathfrak{p}$ of $L$ such that $p \nmid N_g$ and $p$ is unramified in $F_f$ and $B_f$, we may find $\tau \in G_{\mathbf{Q}(\mu_{p^\infty})}$ such that $V_\mathfrak{p} / (\tau - 1) V_\mathfrak{p}$ is 1-dimensional over $L_{\mathfrak{p}}$. For all but finitely many $\mathfrak{p}$, we may choose $\tau$ such that $T_\mathfrak{p}/(\tau-1)T_\mathfrak{p}$ is free of rank 1 over $\mathcal{O}_{L, \mathfrak{p}}$. \end{theorem}
\begin{proof}
Let $p$ be the rational prime below $\mathfrak{p}$. As $\rho_g$ is unramified outside $N_g$ and $\rho_{f, \mathfrak{p}}$ is unramified outside $p N_f$, and $(p N_f, N_g) = 1$, we conclude that the splitting field of $\rho_g$ is linearly disjoint from that of $\rho_{f, \mathfrak{p}}$ and from $\mathbf{Q}(\mu_{p^\infty})$. Hence, given any $a \in \rho_{g}(G_{\mathbf{Q}})$ and $b \in \rho_{f, \mathfrak{p}}\left(G_{\mathbf{Q}(\mu_{p^\infty})}\right)$, we may find $\tau \in G_{\mathbf{Q}(\mu_{p^\infty})}$ such that $\rho_g(\tau) = a$ and $\rho_{f, \mathfrak{p}}(\tau) = b$. We know that $\rho_g$ is odd, so $\rho_g(G_{\mathbf{Q}})$ contains an element $a$ conjugate to $\stbt{-1}001$. Meanwhile, since $f$ is not of CM type, $\rho_{f, \mathfrak{p}}\left(G_{\mathbf{Q}(\mu_{p^\infty})}\right)$ contains a conjugate of an open subgroup of $\SL_2(F_{f, \mathfrak{p}})$, where $F_{f, \mathfrak{p}}$ is the fixed field of the inner twists of $f$ as in the previous section. In particular, it contains a conjugate of an open subgroup of $\SL_2({\ZZ_p})$; so, after a suitable conjugation, the image contains the element $b = \stbt 1 {p^r} 0 1$ for $r \gg 0$. The preceding argument allows us to find $\tau \in G_{\mathbf{Q}(\mu_{p^\infty})}$ such that $\rho_g(\tau) = a$ and $\rho_{f, \mathfrak{p}}(\tau) = b$. As $a \otimes b-1$ clearly has 1-dimensional kernel, we are done. For all but finitely many $\mathfrak{p}$ we have the stronger result that $\rho_{f,\mathfrak{p}}\left(G_{\mathbf{Q}(\mu_{p^\infty})}\right)$ contains a conjugate of $\SL_2(\mathcal{O}_{F_f, \mathfrak{p}})$, so we may take $r = 0$ and we deduce that $a \otimes b-1$ has 1-dimensional kernel modulo $p$.
\end{proof}
\begin{remark} Further strengthenings of the results of this section may be possible: it seems reasonable to expect that whenever $\varepsilon_f \varepsilon_g$ is nontrivial, $\Hyp(\mathbf{Q}(\mu_{p^\infty}), T_{\mathfrak{p}})$ should hold for all but finitely many $\mathfrak{p}$. But I have not been able to prove this. \end{remark}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}[1]{} \renewcommand{\MR}[1]{ MR \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#1}. } \newcommand{\articlehref}[2]{#2 (\href{#1}{link})}
\end{document}
\begin{document} \title{Improving the Price of Anarchy for Selfish Routing via Coordination Mechanisms} \author{ George Christodoulou\thanks{University of Liverpool, United Kingdom. \texttt{[email protected]}} \and Kurt Mehlhorn\thanks{ Max-Planck-Institut f\"ur Informatik, Saarbr\"ucken, Germany. \texttt{[email protected]}} \and Evangelia Pyrga\thanks{Technische Universit\"at M\"unchen, Germany. \texttt{[email protected]}} } \ignore{ \author{ Giorgos Christodoulou\inst{1} \and Kurt Mehlhorn\inst{2} \and Evangelia Pyrga\inst{3} } \institute{University of Liverpool, United Kingdom. \\ \email{[email protected]}\and Max-Planck-Institut f\"ur Informatik, Saarbr\"ucken, Germany. \\ \email{[email protected]} \and Technische Universit\"at M\"unchen, Germany. \\ \email{[email protected]} } } \maketitle \begin{abstract} We reconsider the well-studied Selfish Routing game with affine latency functions. The Price of Anarchy for this class of games takes maximum value 4/3; this maximum is attained already for a simple network of two parallel links, known as Pigou's network. We improve upon the value 4/3 by means of Coordination Mechanisms. We increase the latency functions of the edges in the network, i.e., if $\ell_e(x)$ is the latency function of an edge $e$, we replace it by $\hat{\ell}_e(x)$ with $\ell_e(x) \le \hat{\ell}_e(x)$ for all $x$. Then an adversary fixes a demand rate as input. The \emph{engineered Price of Anarchy} of the mechanism is defined as the worst-case ratio of the Nash social cost in the modified network over the optimal social cost in the original network. Formally, if $\hat{C}_N(r)$ denotes the cost of the worst Nash flow in the modified network for rate $r$ and $C_{\mathit{opt}}(r)$ denotes the cost of the optimal flow in the original network for the same rate then \[ \ePoA = \max_{r \ge 0} \frac{\hat{C}_N(r)}{C_{\mathit{opt}}(r)}.\] We first exhibit a simple coordination mechanism that achieves for any network of parallel links an engineered Price of Anarchy strictly less than 4/3. For the case of two parallel links our basic mechanism gives 5/4 = 1.25. Then, for the case of two parallel links, we describe an {\em optimal} mechanism; its engineered Price of Anarchy lies between 1.191 and 1.192. \end{abstract} \section{Introduction} \label{sec: introduction} We reconsider the well-studied Selfish Routing game with affine cost functions and ask whether increasing the cost functions can reduce the cost of a Nash flow. In other words, the increased cost functions should induce a user behavior that reduces cost despite the fact that the cost is now determined by increased cost functions. We answer the question positively in the following sense. The Price of Anarchy, defined as the maximum ratio of Nash cost to optimal cost, is 4/3 for this class of games. We show that increasing costs can reduce the price of anarchy to a value strictly below 4/3 at least for the case of networks of parallel links. For a network of two parallel links, we reduce the price of anarchy to a value between 1.191 and 1.192 and prove that this is optimal. In order to state our results precisely, we need some definitions. We consider single-commodity congestion games on networks, defined by a directed graph $G=(V,E)$, designated nodes $s,t\in V$, and a set $\ell=(\ell_e)_{e\in E}$ of non-decreasing, non-negative functions; $\ell_e$ is the latency function of edge $e\in E$. Let $P$ be the set of all paths from $s$ to $t$, and let $f(r)$ be a feasible $s,t$-flow routing $r$ units of flow. 
For any $p\in P$, let $f_p(r)$ denote the amount of flow that $f(r)$ routes via path $p$. For ease of notation, when $r$ is fixed and clear from context, we will write simply $f, f_p$ instead of $f(r), f_p(r)$. By definition, $\sum_{p\in P} f_p = r$. Similarly, for any edge $e\in E$, let $f_e$ be the amount of flow going through $e$. We define the latency of $p$ under flow $f$ as $\ell_p(f) = \sum_{e\in p}\ell_e(f_e)$ and the cost of flow $f$ as $C(f) = \sum_{e\in E} f_e \cdot \ell_e(f_e)$ and use $C_{\mathit{opt}}(r)$ to denote the minimum cost of any flow of rate $r$. We will refer to such a minimum cost flow as an \emph{optimal} flow (Opt). A feasible flow $f$ that routes $r$ units of flow from $s$ to $t$ is at \emph{Nash (or Wardrop~\cite{War52}) Equilibrium\footnote{This assumes continuity and monotonicity of the latency functions. For non-continuous functions, see the discussion later in this section.}} if for all $p_1, p_2 \in P$ with $f_{p_1}>0$, $\ell_{p_1}(f)\le \ell_{p_2}(f)$. We use $C_N(r)$ to denote the maximum cost of a Nash flow for rate $r$. The \emph{Price of Anarchy (PoA)}~\cite{KP09} (for demand $r$) is defined as \[ \PoA(r) = \frac {C_N(r)} {C_{\mathit{opt}}(r)} \quad\text{and}\quad \PoA = \max_{r > 0} \PoA(r).\] PoA is bounded by $4/3$ in the case of affine latency functions $\ell_e(x) = a_ex + b_e$ with $a_e \ge 0$ and $b_e \ge 0$; see~\cite{Roughgarden-Tardos,Correa-et-al}. The worst case is already attained for a simple network of two parallel links, known as Pigou's network; see Figure~\ref{fig:Pigou}.
\begin{figure}
\caption{Pigou's network: We show the original network in (a), the optimal flow in (b) and the Nash flow in (c) as a function of the rate $r$, respectively. The Price of Anarchy as a function of the rate is shown in (d); $\PoA(r)$ is 1 for $r \le 1/2$, then starts to grow until it reaches its maximum of $4/3$ at $r = 1$, and then decreases again and approaches $1$ as $r$ goes to infinity. Finally, in (e) we show the modified latency functions. We obtain $\ePoA(r) = 1$ for all $r$ in the case of Pigou's network.}
\label{fig:Pigou}
\end{figure}
A {\em Coordination Mechanism}\footnote{Technically, we consider {\em symmetric} coordination mechanisms in this work, as defined in \cite{CKN09}, i.e., the latency modifications affect the users in a symmetric fashion.} replaces the cost functions $(\ell_e)_{e \in E}$ by functions\footnote{One can interpret the difference $\hat{\ell}_e-\ell_e$ as a flow-dependent toll imposed on the edge $e$.} $\hat{\ell} = (\hat{\ell}_e)_{e\in E}$ such that $\hat{\ell}_e(x)\ge \ell_e(x)$ for all $x\ge 0$. Let $\hat C(f)$ be the cost of flow $f$ when for each edge $e\in E$, $\hat \ell_e$ is used instead of $\ell_e$ and let $\hat{C}_N(r)$ be the maximum cost of a Nash flow of rate $r$ for the modified latency functions.
We define the \emph{engineered Price of Anarchy} (for demand $r$) as \[ \npoa(r) = \frac {\hat{C}_N(r)} {C_{\mathit{opt}}(r)}\quad\text{and}\quad \npoa = \max_{r > 0} \npoa(r).\] We stress that the optimal cost refers to the original latency functions $\ell$. \\ \\ \noindent {\bf Non-continuous Latency Functions:} In the previous definition, as it will become clear in Section~\ref{sec: negative result}, it is important to allow non-continuous modified latencies. However, when we move from continuous to non-continuous latency functions, Wardrop equilibria do not always exist. Non-continuous functions have been studied by transport economists to model the effects of step-function congestion tolls and traffic lights. Several notions of equilibrium that handle discontinuities have been proposed in the literature\footnote{See~\cite{Pat94,MP07} for an excellent exposure of the relevant concepts, the relation among them, as well as for conditions that guarantee their existence.}. The ones that are closer in spirit to Nash equilibria, are those proposed by Dafermos\footnote{In~\cite{Daf71}, Dafermos weakened the orginal definition by \cite{DS69} to make it closer to the concept of Nash Equilibrium.}~\cite{Daf71} and Berstein and Smith~\cite{BS94}. According to the Dafermos'~\cite{Daf71} definition of {\em user optimization}, a flow is in equilibrium if no {\em sufficiently small} fraction of the users on any path, can decrease the latency they experience by switching to another path\footnote{See Section~\ref{lower bound} for a formal definition.}. Berstein and Smith~\cite{BS94} introduced the concept of {\em User Equilibrium}, weakening further the Dafermos equilibrium, taking the fraction of the users to the limit approaching 0. The main idea of their definition is to capture the notion of the {\em individual commuter}, that is implicit in Wardrop's definition for continuous functions. The Dafermos equilibrium on the other hand is a stronger concept that captures the notion of coordinated deviations by {\em groups of commuters}. We adopt the concept of User Equilibrium. Formally, we say that a feasible flow $f$ that routes $r$ units of flow from $s$ to $t$ is a User Equilibrium, iff for all $p_1, p_2 \in P$ with $f_{p_1}>0$, \begin{equation}\ell_{p_1}(f)\leq \lim\inf_{\epsilon\downarrow 0}\ell_{p_2}(f+\epsilon \mathbf{1}_{p_2} -\epsilon \mathbf{1}_{p_1}), \label{user equilibrium} \end{equation} where $\mathbf{1}_p$ denotes the flow where only one unit passes along a path $p$. Note that for continuous functions the above definition is identical to the Wardrop Equilibrium. One has to be careful when designing a Coordination Mechanism with discontinuous functions, because the existence of equilibria is not always guaranteed\footnote{See for example \cite{PN98,BS94} for examples where equilibria do not exist even for the simplest case of two parallel links and non-decreasing functions.}. It is important to emphasize, that all the mechanisms that we suggest in this paper use both lower semicontinuous and regular\footnote{See~\cite{BS94} for a definition of regular functions.} latencies, and therefore User Equilibrium existence is guaranteed due to the theorem of~\cite{BS94}. Moreover, since our modified latencies are non-decreasing, all User Equilibria are also Dafermos-Sparrow equilibria. From now on, we refer to the User Equilibria as Nash Equilibria, or simply Nash flows. 
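To illustrate definition~\eqref{user equilibrium}, consider the following small example (cf.\ Figure~\ref{fig:Pigou} and the mechanism of Section~\ref{sec: mechanism}): two parallel links with $\hat{\ell}_1(x) = x$ for $x \le 1/2$, $\hat{\ell}_1(x) = \infty$ for $x > 1/2$, and $\hat{\ell}_2(x) = 1$, with demand $r = 1$. The flow with $f_1 = f_2 = 1/2$ is a User Equilibrium: taking $p_1 = 2$ and $p_2 = 1$ in \eqref{user equilibrium} we get \[ \hat{\ell}_2(f_2) = 1 \leq \lim\inf_{\epsilon\downarrow 0}\hat{\ell}_1(1/2+\epsilon) = \infty, \] and taking $p_1 = 1$ and $p_2 = 2$ we get $\hat{\ell}_1(f_1) = 1/2 \leq \hat{\ell}_2(1/2+\epsilon) = 1$. Note that the two used links do not have equal latencies; with the original latencies $\ell_1(x) = x$ and $\ell_2(x) = 1$ of Pigou's network, the unique Wardrop flow for $r = 1$ routes all flow over the first link.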
\\ \\ \noindent {\bf Our Contribution:} We demonstrate the possibility of reducing the Price of Anarchy for Selfish Routing via Coordination Mechanisms. We obtain the following results for networks of $k$ parallel links. \begin{itemize} \item if original and modified latency functions are continuous, no improvement is possible, i.e., $\npoa \ge \poa$; see Section~\ref{sec: negative result}. \item for the case of affine cost functions, we describe a simple coordination mechanism that achieves an engineered Price of Anarchy strictly less than 4/3; see Section~\ref{sec: mechanism}. The functions $\hat{\ell}_e$ are of the form \begin{equation}\label{eq: simple} \hat{\ell}_e(x) = \begin{cases} \ell_e(x) & \text{for $x \le r_e$}\\ \infty & \text{for $x > r_e$}. \end{cases} \end{equation} For the case of two parallel links, the mechanism gives 5/4 (see Section~\ref{sec: two links}), for Pigou's network it gives 1, see Figure~\ref{fig:Pigou}. \item For the case of two parallel links with affine cost functions, we describe an {\em optimal \iffalse \footnote{The lower bound that we provide in Section~\ref{lower bound} holds for all deterministic coordination mechanisms with respect to the Dafermos-Sparrow\cite{DS69} definition of equilibria. However, the arguments of our proof work for all deterministic coordination mechanisms that use {\em non-decreasing modified latencies} even for the weaker definition of User Equilibrium. Therefore the mechanism of Section~\ref{sec: two links advanced} is optimal for these two classes of mechanisms.} \fi \footnote{The lower bound that we provide in Section~\ref{lower bound} holds for all deterministic coordination mechanisms that use {\em non-decreasing modified latencies}, with respect to both notions of equilibrium described in the previous paragraph.} } mechanism; its engineered Price of Anarchy lies between 1.191 and 1.192 (see Sections~\ref{sec: two links advanced} and~\ref{lower bound}). It uses modified cost functions of the form \begin{equation}\label{eq: refined} \hat{\ell}_e(x) = \begin{cases} \ell_e(x) & \text{for $x \le r_e$ and $x \ge u_e$}\\ \ell_e(u_e) & \text{for $r_e < x < u_e$}. \end{cases} \end{equation} \end{itemize} \noindent The Price of Anarchy is a standard measure to quantify the effect of selfish behavior. There is a vast literature studying the Price of Anarchy for various models of selfish routing and scheduling problems (see~\cite{NRTV07}). We show that simple coordination mechanisms can reduce the Price of Anarchy for selfish routing games below the 4/3 worst case for networks of parallel links and affine cost functions. We believe that our arguments extend to more general cost functions, e.g., polynomial cost functions. However, the restriction to parallel links is crucial for our proof. We leave it as a major open problem to prove results for general networks or at least more general networks, e.g., series-parallel networks. \\ \\ \noindent {\bf Implementation:} We discuss the realization of the modified cost function in a simple traffic scenario where the driving speed on a link is a decreasing function of the flow on the link and hence the transit time is an increasing function. The step function in (\ref{eq: refined}) can be realized by setting a speed limit corresponding to transit time $\ell_e(u_e)$ once the flow is above $r_e$. The functions in (\ref{eq: simple}) can be approximately realized by access control. In any time unit only $r_e$ items are allowed to enter the link. 
If the usage rate of the link is above $r_e$, the queue in front of the link will grow indefinitely and hence transit time will go to infinity. \\ \noindent {\bf Related Work:} The concept of Coordination Mechanisms was introduced in (the conference version of)~\cite{CKN09}. Coordination Mechanisms have been used to improve the Price of Anarchy in scheduling problems for parallel and related machines~\cite{CKN09,ILMS05,Kol08} as well as for unrelated machines~\cite{AJM07,Caragiannis09}; the objective is makespan minimization. Very recently, \cite{CCG+11} considered as an objective the weighted sum of completion times. Truthful coordination mechanisms have been studied in \cite{ABP06,CGP07,AngelBPT09}. Another very well-studied attempt to cope with selfish behavior is the introduction of taxes (tolls) on the edges of the network in selfish routing games~\cite{CDR03,FJM04,KK04a,KK04b,Fleischer05,Bonifaci}. The disutility of a player is modified and equals her latency plus some toll for every edge that is used in her path. It is well known (see for example \cite{CDR03,FJM04,KK04a,KK04b}) that so-called marginal cost tolls, i.e., $\hat{\ell}_e(x) = \ell_e(x) + x \ell'_e(x)$, result in a Nash flow that is equal to the optimum flow for the original cost functions.\footnote{It is important to observe that although the Nash flow is equal to the optimum flow, its cost with respect to the marginal cost function can be twice as large as its cost with respect to the original cost function. For Pigou's network, the marginal costs are $\hat{\ell}_1(x) = 2x$ and $\hat{\ell}_2(x) = 1$. The cost of a Nash flow of rate $r$ with $r \le 1/2$ is $2r^2$ with respect to marginal costs; the cost of the same flow with respect to the original cost functions is $r^2$.} Roughgarden~\cite{R01} seeks a subnetwork of a given network that has optimal Price of Anarchy for a given demand. \cite{Cole-et-al} studies the question whether tolls can reduce the cost of a Nash equilibrium. They show that for networks with affine latencies, marginal cost pricing does not improve the cost of a flow at Nash equilibrium, as well as that the maximum possible benefit that one can get is no more than that of edge removal. \\ \\ \noindent {\bf Discussion:} The results of this paper are similar in spirit to the results discussed in the previous paragraph, but also very different. The above papers assume that taxes or tolls are determined with full knowledge of the demand rate $r$. Our coordination mechanisms must {\em a priori} decide on the modified latency functions \emph{without knowledge of the demand}; it must determine the modified functions $\hat{\ell}$ and then an adversary selects the input rate $r$. More importantly, our target objectives are different; we want to minimize the ratio of the modified cost (taking into account the increase of the latencies) over the {\em original} optimal cost. \ignore{The negative results of \cite{Cole-et-al} hold for marginal cost prices that correspond to continuous and monotone modified cost functions. Therefore, one cannot expect to achieve a Price of Anarchy strictly less than 4/3 by use of marginal cost pricing, even if the demand rate is known. In fact, in Section~\ref{sec: negative result} we show that for parallel links one cannot improve the Price of Anarchy even with general continuous monotone functions. Our modified cost functions are monotone, but not continuous.} Our simple strategy presented in Section~\ref{sec: mechanism} can be viewed as a generalization of link removal. 
Removal of a link reduces the capacity of the edge to zero, our simple strategy reduces the capacity to a threshold $r_e$. Following~\cite{CKN09}, we study \emph{local} mechanisms; the decision of modifying the latency of a link is taken based on the amount of flow that comes through the particular link \emph{only}.\footnote{It is not hard to see that, similarly to the case where the demand is known, using global flow information (at least for the case of parallel links) can lead to mechanisms with $\ePoA=1$. We would like to thank Nicol\'as Stier Moses for making us emphasizing that distinction.} \section{Continuous Latency Functions Yield No Improvement}\label{sec: negative result} The network in this section consists of $k$ parallel links connecting $s$ to $t$ and the original latency functions are assumed to be continuous and non-decreasing. We show that substituting them by continuous functions brings no improvement. \begin{lemma}\label{lem:continuous-lb} Assume that the original functions $\ell_e$ are continuous and non-decreasing. Consider some modified latency functions $\hat \ell$ and some rate $r$ for which there is a Nash Equilibrium flow $\hat f$ such that the latency function $\hat \ell_i$ is continuous at $\hat f_i(r)$ for all $1\le i\le k$. Then $\npoa(r)\ge \poa(r)$. \end{lemma} \begin{proof} It is enough to show that $ \hat{C}_N(r)\ge C_N(r)$. Let $f$ be a Nash flow for rate $r$ and the original cost functions. If $f = \hat f$, the claim is obvious. If $\hat f \not= f$, there must be a $j$ with $\hat f_j(r) > f_j(r)$. The local continuity of $\hat \ell_i$ at $\hat f_i(r)$, implies that $\hat \ell_i(\hat f_i(r) ) = \hat \ell_{i'}(\hat f_{i'}(r)) $, for all $i,i'\le k$ such that $\hat f_i(r), \hat f_{i'}(r) >0$. Therefore, \[ \hat{C}_N(r) = \hat C(\hat f(r))= \sum_{i=1}^{k} \hat f_i(r) \hat \ell_i(\hat f_i(r)) = r\cdot \hat \ell_j(\hat f_j(r)) \ge r\cdot \ell_j(\hat f_j(r)) \ge r\cdot \ell_j( f_j(r)) \] since $\hat{\ell}_j(x) \ge \ell_j(x)$ for all $x$ and $\ell_j$ is non-decreasing. Since $f$ is a Nash flow we have $\ell_{i}(f_{i}(r))\le \ell_j(f_j(r))$ for any $i$ with $f_{i}(r)>0$. Thus \[ C_N(r) = \sum_{i=1}^{k} f_i(r) \ell_i( f_i(r)) \le r \cdot \ell_{j}( f_{j}(r)). \] \end{proof} \section{A Simple Coordination Mechanism}\label{sec: mechanism} Let $\ell_i(x) = a_i x + b_i = (x + \gamma_i)/\lambda_i$ be the latency function of the $i$-th link, $1\le i \le k$. We call $\lambda_i$ the {\em efficiency} of the link. We order the links in order of increasing $b$-value and assume $0\leq b_1 < b_2 < \ldots < b_{k}$ as two links with the same $b$-value may be combined (by adding their efficiencies). We say that a link is {\em used} if it carries positive flow. We may assume $a_i > 0$ for all $i < k$; if $a_i = 0$, links $i+1$ and higher will never be used. The following theorem summarizes some basic facts about optimal flows and Nash flows; it is proved by straightforward calculations.\footnote{In a Nash flow all used links have the same latency. Thus, if $j$ links are used at rate $r$ and $f_i^N$ is the flow on the $i$-th link, then $a_1 f_1^N + b_1 = \ldots = a_j f_j^N + b_j \le b_{j+1}$ and $r = f_1^N + \ldots + f_j^N$. The values for $r_j$ and $f_i^N$ follow from this. Similarly, in an optimal flow all used links have the same marginal costs.} We state the theorem for the case that $a_k$ is positive. The theorem is readily extended to the case $a_k = 0$ by letting $a_k$ go to zero and determining the limit values. We will only use the theorem in situations, where $a_k > 0$. 
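To fix ideas, here is a small illustrative instance (used only as an example): two links with $\ell_1(x) = x$ and $\ell_2(x) = x + 1$, so that $\lambda_1 = \lambda_2 = 1$, $\gamma_1 = 0$ and $\gamma_2 = 1$. Equalizing latencies (for Nash), respectively marginal costs (for Opt), on the used links gives \[ f_1^N = \frac{r+1}{2}, \quad f_2^N = \frac{r-1}{2} \quad\text{for } r \ge 1, \qquad\text{and}\qquad f_1^* = \frac{2r+1}{4}, \quad f_2^* = \frac{2r-1}{4} \quad\text{for } r \ge 1/2. \] In particular, Nash starts using the second link at rate $1$, whereas Opt already uses it from rate $1/2$ on; this matches the quantities $r_2 = (b_2 - b_1)\lambda_1 = 1$ and $r_2/2$ appearing in the theorem below.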
\begin{theorem}\label{thm: basic facts} Let $0\leq b_1 <b_2 < \ldots < b_k$ and $\lambda_i \geq 0$ for all $i$. Let $\Lambda_j = \sum_{i \le j} \lambda_i$ and $\Gamma_j = \sum_{i \le j} \gamma_i$. Consider a fixed rate $r$ and let $f_i^*$ and $f_i^N$, $1 \le i \le k$, be the optimal flow and the Nash flow for rate $r$ respectively. Let \[ r_j = \sum_{1 \le i < j} (b_{j} - b_i)\lambda_i = \sum_{1 \le i < j} (b_{i+1} - b_i)\Lambda_i. \] Then
\noindent (a) $\Gamma_j + r_j = b_j \Lambda_j$ and $\Gamma_{j-1} + r_j = b_j \Lambda_{j-1}$. \\
\noindent (b) If Nash uses exactly $j$ links at rate $r$ then \[ r_j\leq r \leq r_{j+1},\quad\quad f_i^N = \frac{r \lambda_i}{\Lambda_j} + \delta_i, \quad\text{where } \delta_i = \frac{\Gamma_j \lambda_i}{\Lambda_j} - \gamma_i,\quad\text{and}\quad C_N(r) = \frac{1}{\Lambda_j} \left( r^2 + \Gamma_j r \right).\]
\noindent (c) If Opt uses exactly $j$ links at rate $r$ then \[\frac{r_j}{2}\leq r \leq \frac{r_{j+1}}{2},\quad f_i^* = \frac{r \lambda_i}{\Lambda_j} + \delta_i/2, \quad\text{where } \delta_i = \frac{\Gamma_j \lambda_i}{\Lambda_j} - \gamma_i, \] and \[ C_{\mathit{opt}}(r) = \frac{1}{\Lambda_j} \left( r^2 + \Gamma_j r \right) - \sum_{i \le j} \frac{\delta_i^2}{4\lambda_i} = \frac{1}{\Lambda_j} \left( r^2 + \Gamma_j r\right) - C_j , \text{where } C_j = \left(\sum_{1 \le i < h \le j}(b_h - b_i)^2 \lambda_h \lambda_i\right)/(4\Lambda_j) .\]
\noindent (d) If $s < r$ and Opt uses exactly $j$ links at $s$ and $r$ then \[ C_{\mathit{opt}}(r) = C_{\mathit{opt}}(s) + \frac{1}{\Lambda_j} \left( (r - s)^2 + (\Gamma_j + 2s) (r -s) \right).\]
\noindent (e) If $s < r$ and Nash uses exactly $j$ links at $s$ and $r$ then \[ C_N(r) = C_N(s) + \frac{1}{\Lambda_j} \left( (r - s)^2 + (\Gamma_j + 2s) (r -s) \right).\]
\end{theorem}
We next define our simple coordination mechanism. In the case of $k$ links, it is governed by parameters $R_1$, $R_2, \dots, R_{k-1}$; $R_i \ge 2$ for all $i$. We call the $j$-th link \emph{super-efficient} (with respect to parameters $R_1$ to $R_{k-1}$) if $\lambda_{j} > R_{j-1} \Lambda_{j-1}$. In Pigou's network (see Figure~\ref{fig:Pigou}), the second link is super-efficient for any choice of $R_1$ since $\lambda_2 = \infty$ and $\Lambda_1 = \lambda_1 = 1$. Super-efficient links are the cause of a high Price of Anarchy. Observe that Opt starts using the $j$-th link at rate $r_j/2$ and Nash starts using it at rate $r_j$. If the $j$-th link is super-efficient, Opt will send a significant fraction of the total flow across the $j$-th link and this will result in a high Price of Anarchy. Our coordination mechanism induces the Nash flow to use super-efficient links earlier. The latency functions $\hat{\ell}_i$ are defined as follows: $\hat{\ell}_i = \ell_i$ if there is no super-efficient link $j > i$; in particular the latency function of the highest link (= link $k$) is unchanged. Otherwise, we choose a threshold value $T_i$ (see below) and set $\hat{\ell}_i(x) = \ell_i(x)$ for $x \le T_i$ and $\hat{\ell}_i(x) = \infty$ for $x > T_i$. The threshold values are chosen so that the following behavior results. We call this behavior \emph{modified Nash (MN)}. Assume that Opt uses $h$ links, i.e., $r_h/2 \le r \le r_{h+1}/2$. If $\lambda_{i+1} \le R_i \Lambda_i$ for all $i$, $1 \le i < h$, MN behaves like Nash. Otherwise, let $j$ be minimal such that link $j+1$ is super-efficient; MN changes its behavior at rate $r_{j+1}/2$.
More precisely, it freezes the flow across the first $j$ links at their current values when the total flow is equal to $r_{j+1}/2$ and routes any additional flow across links $j+1$ to $k$. The thresholds for the lower links are chosen in such a way that this freezing effect takes place. The additional flow is routed by using the strategy recursively. In other words, let $j_1 + 1$, \ldots, $j_t + 1$ be the indices of the super-efficient links. Then MN changes behavior at rates $r_{j_i + 1}/2$. At this rate the flow across links $1$ to $j_i$ is frozen and additional flow is routed across the higher links. We use $\hat{C}_NN(r) = \hat{C}_N^{R_1,\ldots,R_{k - 1}}(r)$ to denote the cost of MN at rate $r$ when operated with parameters $R_1$ to $R_{k - 1}$. Then $\ePoA(r) = \hat{C}_NN(r)/C_{\mathit{opt}}(r)$. For the analysis of MN we use the following strategy. We first investigate the \emph{benign} case when there is no super-efficient link. In the benign case, MN behaves like Nash and the worst case bound of 4/3 on the PoA can never be attained. More precisely, we will exhibit a function $B(R_1,\ldots,R_{k-1})$ which is smaller than $4/3$ for all choices of the $R_i$'s and will prove $\hat{C}_NN(r) \le B(R_1,\ldots,R_{k - 1}) C_{\mathit{opt}}(r)$. We then investigate the non-benign case. We will derive a recurrence relation for \[ \ePoA(R_1,\ldots,R_{k-1}) = \max_{r} \frac{\hat{C}_N^{R_1,\ldots,R_{k - 1}}(r)}{C_{\mathit{opt}}(r)}. \] In the case of a single link, i.e., $k = 1$, MN behaves like Nash which in turn is equal to Opt. Thus \ignore{\Kurt{$\ePoA = 1$}}{$\ePoA() = 1$}. The coming subsections are devoted to the analysis of two links and more than two links, respectively. \subsection{Two Links}\label{sec: two links} The modified algorithm is determined by a parameter $R \ge 2$. If $\lambda_2 \le R \lambda_1$, modified Nash is identical to Nash. If $\lambda_2 > R \lambda_1$, the modified algorithm freezes the flow across the first link at $r_2/2$ once it reaches this level, i.e., $\hat{\ell}_1(x) = \ell_1(x)$ for $x \le r_2/2$ and $\hat{\ell}_1(x) = \infty$ for $x > r_2/2$.\footnote{In Pigou's network we have $\ell_1(x) = x$ and $\ell_2(x) = 1$. Thus $\lambda_2 = \infty$. The modified cost functions are $\hat{\ell}_2(x) = \ell_2(x)$ and $\hat{\ell}_1(x) = x$ for $x \le r_2/2 = 1/2$ and $\hat{\ell}_1(x) = \infty$ for $x > 1/2$. The Nash flow with respect to the modified cost function is identical to the optimum flow in the original network and $\hat{C}_N(f^*) = C(f^*)$. Thus $\ePoA = 1$ for Pigou's network.} \begin{theorem}\label{thm:two-links-simple} For the case of two links, $\ePoA \le \max\left\{1 + 1/R,(4 + 4R)/(4 + 3R)\right\}$. In particular $\ePoA = 5/4$ for $R = 4$. \end{theorem} \begin{proof} Consider first the benign case $\lambda_2 \le R \Lambda_1$. There are three regimes: for $r \le r_2/2$, Opt and Nash behave identically. For $r_2/2 \le r \le r_2$, Opt uses both links and Nash uses only the first link, and for $r \ge r_2$, Opt and Nash use both links. $\PoA(r)$ is increasing for $r \le r_2$ and decreasing for $r \ge r_2$. The worst case is at $r = r_2$. \ignore{&= \frac{C_N(r)}{C_{\mathit{opt}}(r)} = \frac{C_{\mathit{opt}}(r_2/2) + \frac{1}{\Lambda_1}\left((r - r_2/2)^2 + (\Gamma_1 + r_2)(r - r_2/2) \right)}{C_{\mathit{opt}}(r_2/2) + \frac{(r - r_2/2)^2}{\Lambda_2} + b_2(r - r_2/2)} \\ &= \frac{C_{\mathit{opt}}(r_2/2) + \frac{(r - r_2/2)^2}{\Lambda_1} + b_2 (r - r_2/2)}{C_{\mathit{opt}}(r_2/2) + \frac{(r - r_2/2)^2}{\Lambda_2} + b_2(r - r_2/2)}. 
\end{align*} The last equality uses $\Gamma_1 + r_2 = b_2 \lambda_1$. By Lemma~\ref{lem: derivative}, the sign of the derivative is equal to the sign of $\frac{1}{\Lambda_1} - \frac{1}{\Lambda_2}$ and hence is non-negative. Assume $r \ge r_2$. Then \[ PoA(r) = \frac{C_N(r)}{C_{\mathit{opt}}(r)} = \frac{C_N(r)}{C_N(r) - C_2} = \frac{1}{1 - C_2/C_N(r)}\] and hence $\PoA(r)$ is a decreasing function of $r$. The worst case occurs for $r = r_2$. We have thus shown that $\PoA(r)$ assumes its maximum at $r_2$.} Then $\PoA(r_2) = C_N(r_2)/C_{\mathit{opt}}(r_2) = C_N(r_2)/(C_N(r_2) - C_2) = 1/(1 - C_2/C_N(r_2))$. We upper-bound $C_2/C_N(r_2)$. Recall that $r_2 = (b_2 - b_1)\lambda_1$, $r_2 + \Gamma_1 = b_2 \lambda_1$ and $C_N(r_2) = 1/\lambda_1(r_2^2 + \Gamma_1 r_2)$. We obtain \[ \frac{C_2}{C_N(r_2)} = \frac{(b_2 - b_1)^2 \lambda_1 \lambda_2}{4 \Lambda_2 (1/\lambda_1) (r_2^2 + \gamma_1 r_2)} = \frac{(b_2 - b_1)^2 \lambda_1 \lambda_2}{4 \Lambda_2 (1/\lambda_1) (b_2 - b_1) \lambda_1 b_2 \lambda_1} \le \frac{\lambda_2}{4 \Lambda_2} \le \frac{1}{4 (1 + 1/R)}. \] Thus $\PoA(r) \le B(R) \assign \frac {1} {1 - \frac R {4(R+1)}} = (4 + 4R)/(4 + 3R)$. We come to the case $\lambda_2 > R \Lambda_1$: There are two regimes: for $r \le r_2/2$, Opt and MN behave identically. For $r > r_2/2$, Opt uses both links and MN routes $r_2/2$ over the first link and $r - r_2/2$ over the second link. Thus for $r \ge r_2/2$: \[ \ePoA(r) = \frac{\hat{C}_NN(r)}{C_{\mathit{opt}}(r)} = \frac{C_{\mathit{opt}}(r_2/2) + \frac{(r - r_2/2)^2}{\lambda_2} + b_2(r - r_2/2)} {C_{\mathit{opt}}(r_2/2) + \frac{(r - r_2/2)^2}{\Lambda_2} + b_2(r - r_2/2)} \le \frac{\Lambda_2}{\lambda_2} \le 1 + 1/R.\] \end{proof} \subsection{Many Links}\label{sec: many links} As already mentioned, we distinguish cases. We first study the benign case $\lambda_{i+1} \le R_i \Lambda_i$ for all $i$, $1 \le i < k$, and then deal with the non-benign case. \paragraph{The Benign Case:} We assume $\lambda_{i+1} \le R_i \Lambda_i$ for all $i$, $1 \le i < k$. Then MN behaves like Nash. We will show $e\PoA \le B(R_1,\ldots,R_{k-1}) < 4/3$; here $B$ stands for benign case or base case. Our proof strategy is as follows; we will first show (Lemma~\ref{lem:upper_bound_ratio_flows}) that for the $i$-th link the ratio of Nash flow to optimal flow is bounded by $2 \Lambda_k/(\Lambda_i + \Lambda_k)$. This ratio is never more than two; in the benign case, it is bounded away from two. We will then use this fact to derive a bound on the Price of Anarchy (Lemma~\ref{lem:upper-bound-PoA1}). \begin{lemma}\label{lem:upper_bound_ratio_flows} Let $h$ be the number of links that Opt is using. Then \[ \frac{f_i^N}{f_i^*}\leq\frac{2\Lambda_h}{\Lambda_i+\Lambda_h}\] for $i \le h$. If $\lambda_{i'+1} \le R_{i'} \Lambda_{i'}$ for all $i'$, then \[ \frac{2\Lambda_h}{\Lambda_i+\Lambda_h} \le \frac{2 P}{P + 1},\] where $P := \prod_{1 \leq i < k} (1 + R_i)$. \end{lemma} \begin{proof} Let $j$ be the number of links that Nash is using. For $i > j$, the Nash flow on the $i$-th link is zero and the claim is obvious. For $i \le j$, we can write the Nash and the optimal flow through link $i$ as \[ f_i^N = r \lambda_i/\Lambda_j + (\Gamma_j \lambda_i /\Lambda_j - \gamma_i) \quad\text{and}\quad f_i^* = r \lambda_i/\Lambda_h + (\Gamma_h \lambda_i /\Lambda_h - \gamma_i)/2.\] Therefore their ratio as a function of $r$ is \begin{align*} F(r)=\frac{f_i^N}{f_i^*} = \frac{\Lambda_h}{\Lambda_j}\cdot \frac{2r+2\Gamma_j-2b_i\Lambda_j}{2r+\Gamma_h-b_i\Lambda_h}. 
\end{align*} The sign of the derivative $F'(r)$ is equal to the sign of $\Gamma_h-b_i\Lambda_h-2\Gamma_j+2b_i\Lambda_j$ and hence constant. Thus $F(r)$ attains its maximum either for $r_j$ or for $r_{j+1}$. We have \begin{align*} F(r_{j+1}) &\leq \frac{\Lambda_h}{\Lambda_j}\cdot \frac{2r_{j+1}+2\Gamma_j-2b_i\Lambda_j}{2r_{j+1}+\Gamma_h-b_i\Lambda_h} = \frac{\Lambda_h}{\Lambda_j}\cdot \frac{2(b_{j+1}-b_i)\Lambda_j}{2b_{j+1}\Lambda_{j+1}-2\Gamma_{j+1}+\Gamma_h-b_i\Lambda_h}\\ & = \frac{2(b_{j+1}-b_i)\Lambda_h}{\sum_{g\leq j+1}(2b_{j+1}-2b_g)\lambda_g+\sum_{g\leq h}(b_g-b_i)\lambda_g} = \frac{2(b_{j+1}-b_i)\Lambda_h}{\sum_{g\leq j}(2b_{j+1}-b_g-b_i)\lambda_g+\sum_{j<g\leq h}(b_g-b_i)\lambda_g}\\ & = \frac{2(b_{j+1}-b_i)\Lambda_h}{\sum_{g\leq i}(2b_{j+1}-b_g-b_i)\lambda_g+\sum_{i<g\leq j}(2b_{j+1}-b_g-b_i)\lambda_g+\sum_{j<g\leq h}(b_g-b_i)\lambda_g}\\ & \leq \frac{2(b_{j+1}-b_i)\Lambda_h}{\sum_{g\leq i}2(b_{j+1}-b_i)\lambda_g+\sum_{i<g\leq h}(b_{j+1}-b_i)\lambda_g}\\ & = \frac{2\Lambda_h}{\sum_{g\leq i}2\lambda_g+\sum_{i<g\leq h}\lambda_g} = \frac{2\Lambda_h}{\Lambda_i+\Lambda_h} \end{align*} and \begin{align*} F(r_j) &\leq \frac{\Lambda_h}{\Lambda_j}\cdot \frac{2r_{j}+2\Gamma_j-2b_i\Lambda_j}{2r_{j}+\Gamma_h-b_i\Lambda_h} = \frac{\Lambda_h}{\Lambda_j}\cdot \frac{2(b_{j}-b_i)\Lambda_j}{2b_{j}\Lambda_{j}-2\Gamma_{j}+\Gamma_h-b_i\Lambda_h}\\ & = \frac{2(b_{j}-b_i)\Lambda_h}{\sum_{g\leq j}(2b_{j}-2b_g)\lambda_g+\sum_{g\leq h}(b_g-b_i)\lambda_g} = \frac{2(b_{j}-b_i)\Lambda_h}{\sum_{g\leq j}(2b_{j}-b_g-b_i)\lambda_g+\sum_{j<g\leq h}(b_g-b_i)\lambda_g}\\ & = \frac{2(b_{j}-b_i)\Lambda_h}{\sum_{g\leq i}(2b_{j}-b_g-b_i)\lambda_g+\sum_{i<g\leq j}(2b_{j}-b_g-b_i)\lambda_g+\sum_{j<g\leq h}(b_g-b_i)\lambda_g}\\ & \leq \frac{2(b_{j}-b_i)\Lambda_h}{\sum_{g\leq i}2(b_{j}-b_i)\lambda_g+\sum_{i<g\leq h}(b_{j}-b_i)\lambda_g} = \frac{2\Lambda_h}{\sum_{g\leq i}2\lambda_g+\sum_{i<g\leq h}\lambda_g} = \frac{2\Lambda_h}{\Lambda_i+\Lambda_h}. \end{align*} If $\lambda_{i'+1} \le R_{i'} \Lambda_{i'}$ for all $i'$, then $\Lambda_{i'+1} = \lambda_{i'+1} + \Lambda_{i'} \le (1 + R_{i'}) \Lambda_{i'}$ for all $i'$ and hence $\Lambda_{h} \le \Lambda_k \le P \Lambda_i$. \end{proof} \begin{lemma}\label{lem:arithmetic-inequality} For any positive reals $\mu$, $\alpha$, and $\beta$ with $1 \le \mu \le 2$ and $\alpha/\beta \le \mu$, $\beta \alpha \le \frac{\mu - 1}{\mu^2} \alpha^2 + \beta^2$. \end{lemma} \begin{proof} We may assume $\beta \ge 0$. If $\beta = 0$, there is nothing to show. So assume $\beta > 0$ and let $\alpha/\beta = \delta \mu$ for some $\delta \le 1$. We need to show (divide the target inequality by $\beta^2$) $\delta \mu \le (\mu - 1) \delta^2 + 1$ or equivalently $\mu \delta (1 - \delta) \le (1 - \delta) (1 + \delta)$. This inequality holds for $\delta \le 1$ and $\mu \le 2$. \end{proof} \begin{lemma}\label{lem:upper-bound-PoA1} If $f^N_i/f_i^*\leq \mu \le 2$ for all $i$, then $\PoA\leq {\mu^2}/(\mu^2-\mu+1)$. If $\lambda_{j+1} \le R_j \Lambda_j$ for all $j$, then \[ \PoA \le B(R_1,\ldots,R_{k-1}) \assign \frac{4 P^2}{3 P^2 + 1} ,\] where $P := \prod_{1 \leq i < k} (1 + R_i)$. \end{lemma} \begin{proof} Assume that Nash uses $j$ links and let $L$ be the common latency of the links used by Nash. Then $L = a_i f_i^N + b_i$ for $i \le j$ and $L \le b_{i} = a_i f_i^N + b_i $ for $i > j$. 
Thus, by use of Lemma~\ref{lem:arithmetic-inequality}, \begin{align*} C_N(r) &= Lr = \sum_i L f_i^* \le \sum_i \left(a_i f_i^N + b_i\right) f_i^* \le \frac{\mu-1}{\mu^2}\sum_i a_i (f_i^N)^2 + \sum_i \left(a_i (f_i^*)^2 + b_i f_i^*\right)\\ &\le \frac{\mu-1}{\mu^2} C_N(r) + C_{\mathit{opt}}(r) \end{align*} and hence $\PoA\leq {\mu^2}/(\mu^2-\mu+1)$. If $\lambda_{j+1} \le R_j \Lambda_j$ for all $j$, employing Lemma~\ref{lem:upper_bound_ratio_flows}, we may use $\mu = 2P/(P + 1)$ and obtain $\PoA \le 4P^2/(3P^2 + 1)$. \end{proof} \ignore{The next lemma gives an upper bound on the Nash and optimal flows across a link, as functions of the links' efficiencies. \begin{lemma}\label{lem:upper_bound_ratio_flows} Let $j,k$ be the number of links that Nash and Opt are using respectively. Then for every link $i\leq j$ it holds $$\frac{f_i^N}{f_i^*}\leq\frac{2\Lambda_k}{\Lambda_i+\Lambda_k}.$$ \end{lemma} \begin{proof} Recall first, that we can write the Nash and the optimal flow through link $i$ as follows \begin{align*} f_i^N &= r \lambda_i/\Lambda_j + (\Gamma_j \lambda_i /\Lambda_j - \gamma_i) &\text{for $1 \le i \le j$}\\ f_i^* &= r \lambda_i/\Lambda_k + (\Gamma_k \lambda_i /\Lambda_k - \gamma_i)/2 &\text{for $1 \le i \le k$}. \end{align*} Therefore their ratio as a function of $r$ is \begin{align*} F(r)=\frac{f_i^N}{f_i^*} &= \frac{\Lambda_k}{\Lambda_j}\cdot \frac{2r+2\Gamma_j-2b_i\Lambda_j}{2r+\Gamma_k-b_i\Lambda_k}\\ \end{align*} The sign of the derivative $F'(r)$ is equal to the sign of $\Gamma_k-b_i\Lambda_k-2\Gamma_j+2b_i\Lambda_j$. If the sign is nonnegative than $F$ is non-decreasing with respect to $r$, and therefore $F(r)\leq F(r_{j+1})$ or \begin{align*} \frac{f_i^N}{f_i^*} &\leq \frac{\Lambda_k}{\Lambda_j}\cdot \frac{2r_{j+1}+2\Gamma_j-2b_i\Lambda_j}{2r_{j+1}+\Gamma_k-b_i\Lambda_k}\\ & = \frac{\Lambda_k}{\Lambda_j}\cdot \frac{2(b_{j+1}-b_i)\Lambda_j}{2b_{j+1}\Lambda_{j+1}-2\Gamma_{j+1}+\Gamma_k-b_i\Lambda_k}\\ & = \frac{2(b_{j+1}-b_i)\Lambda_k}{\sum_{h\leq j+1}(2b_{j+1}-2b_h)\lambda_h+\sum_{h\leq k}(b_h-b_i)\lambda_h}\\ & = \frac{2(b_{j+1}-b_i)\Lambda_k}{\sum_{h\leq j}(2b_{j+1}-b_h-b_i)\lambda_h+\sum_{j<h\leq k}(b_h-b_i)\lambda_h}\\ & = \frac{2(b_{j+1}-b_i)\Lambda_k}{\sum_{h\leq i}(2b_{j+1}-b_h-b_i)\lambda_h+\sum_{i<h\leq j}(2b_{j+1}-b_h-b_i)\lambda_h+\sum_{j<h\leq k}(b_h-b_i)\lambda_h}\\ & \leq \frac{2(b_{j+1}-b_i)\Lambda_k}{\sum_{h\leq i}2(b_{j+1}-b_i)\lambda_h+\sum_{i<h\leq k}(b_{j+1}-b_i)\lambda_h}\\ & = \frac{2\Lambda_k}{\sum_{h\leq i}2\lambda_h+\sum_{i<h\leq k}\lambda_h}\\ & = \frac{2\Lambda_k}{\Lambda_i+\Lambda_k}\\ \end{align*} If the sign is negative than $F$ is decreasing with respect to $r$, and therefore $F(r)\leq F(r_j)$ or \begin{align*} \frac{f_i^N}{f_i^*} &\leq \frac{\Lambda_k}{\Lambda_j}\cdot \frac{2r_{j}+2\Gamma_j-2b_i\Lambda_j}{2r_{j}+\Gamma_k-b_i\Lambda_k}\\ & = \frac{\Lambda_k}{\Lambda_j}\cdot \frac{2(b_{j}-b_i)\Lambda_j}{2b_{j}\Lambda_{j}-2\Gamma_{j}+\Gamma_k-b_i\Lambda_k}\\ & = \frac{2(b_{j}-b_i)\Lambda_k}{\sum_{h\leq j}(2b_{j}-2b_h)\lambda_h+\sum_{h\leq k}(b_h-b_i)\lambda_h}\\ & = \frac{2(b_{j}-b_i)\Lambda_k}{\sum_{h\leq j}(2b_{j}-b_h-b_i)\lambda_h+\sum_{j<h\leq k}(b_h-b_i)\lambda_h}\\ & = \frac{2(b_{j}-b_i)\Lambda_k}{\sum_{h\leq i}(2b_{j}-b_h-b_i)\lambda_h+\sum_{i<h\leq j}(2b_{j}-b_h-b_i)\lambda_h+\sum_{j<h\leq k}(b_h-b_i)\lambda_h}\\ & \leq \frac{2(b_{j}-b_i)\Lambda_k}{\sum_{h\leq i}2(b_{j}-b_i)\lambda_h+\sum_{i<h\leq k}(b_{j}-b_i)\lambda_h}\\ & = \frac{2\Lambda_k}{\sum_{h\leq i}2\lambda_h+\sum_{i<h\leq k}\lambda_h}\\ & = \frac{2\Lambda_k}{\Lambda_i+\Lambda_k}\\ 
\end{align*} \end{proof} \begin{lemma}\label{lem:arithmetic-inequality} For any reals $\mu$ and $\alpha$, and $\beta$ with $1 \le \mu \le 2$ and $\alpha/\beta \le \mu$, $\beta \alpha \le \frac{\mu - 1}{\mu^2} \alpha^2 + \beta^2$ \end{lemma} \begin{proof} We may assume $\beta \ge 0$. If $\beta = 0$, there is nothing to show. So assume $\beta > 0$ and let $\alpha/\beta = \delta \mu$ for some $\delta \le 1$. We need to show (divide the target inequality by $\beta^2$) $\delta \mu \le (\mu - 1) \delta + 1$ or equivalently $\mu \delta (1 - \delta) \le (1 - \delta) (1 + \delta)$. This inequality holds for $\delta \le 1$ and $\mu \le 2$. \end{proof} -------------------- \begin{lemma}\label{lem:arithmetic-inequality-old} For any reals $\alpha,\beta,\mu$ such that $\alpha/\beta\leq \mu$, and $\mu\geq 1$ it holds $$\beta\alpha\leq \frac{\mu-1}{\mu^2}\alpha^2 + \beta^2.$$ \end{lemma} \begin{proof} What we need to show is an inequality of the form \begin{equation}\label{eq:arithmetic-inequality} \beta\alpha\leq k\alpha^2 + \beta^2, \end{equation} for some $0\leq k$. This would eventually lead to a PoA of $1/(1-k)$. In particular for $k=1/4$, the PoA becomes exactly 4/3. If $\beta=0,$ then for any $k>0$, (\ref{eq:arithmetic-inequality}) obviously holds. If $\beta>0,$ then by dividing both terms by $\beta^2$, and setting $x=\alpha/\beta$ we getting $$f(x)=kx^2-x+1\geq 0.$$ $f(x)$ has two roots $\rho_1={\frac {1-\sqrt {1-4\,k}}{2k}}$, and $\rho_2={\frac {1+\sqrt {1-4\,k}}{2k}}$, with $\rho_1\leq \rho_2$, and if $x\leq \rho_1$ then $f(x)\geq 0$. So by just solving the equation $\rho_1=\mu$ we get that $k=\frac{\mu-1}{\mu^2}$. \end{proof} \begin{remark} If $\alpha,\beta$ are arbitrary, by choosing $k=1/4$ we get that the inequality holds for any ratio $\alpha/\beta$. Note that $\alpha$ will correspond to the Nash flow $f_e^N$ through an edge $e$ and $\beta$ corresponds to the optimal flow $f_e^*$. For parallel links we can take advantage of the fact that $f_e^*,f_e^N$ can not be arbitrary, but there ratio is bounded and strictly less than 2. Then we can plug this dependence on proving a tighter inequality in Lemma~\ref{lem:arithmetic-inequality}. Note further that the inequalities give us indications about lower bounds as well. For example for $k=1/4$, the ratio of the flows $f_e^N/f_e^*=2$ and actually this is what happens in the Pigou Example. \end{remark} \begin{lemma}\label{lem:upper-bound-PoA} Let $f^N_i/f_i^*\leq \mu$, with $\mu\geq 1$, then $$PoA\leq \frac{\mu^2}{\mu^2-\mu+1}.$$ \end{lemma} \begin{proof} Since $f$ is at Nash equilibrium, we can derive the following variational inequality~\cite{BMW56} $$\sum_e\ell_e(f_e)f_e\leq \sum_e\ell_e(f_e)f_e^* \text{ or equivalently } \sum_e\left(a_ef_e^2+b_ef_e\right)\leq \sum_e\left(a_ef_ef_e^*+b_ef_e^*\right).$$ By using Lemma~\ref{lem:arithmetic-inequality} we get $$\sum_ea_ef_e^2+b_ef_e\leq \frac{\mu-1}{\mu^2}\sum_ea_ef^2_e+\sum_ea_e{f^*_e}^2+b_ef_e^*,$$ and the theorem follows. \end{proof} Let's denote $S_i=\Lambda_k/\Lambda_i$, and $S=\max_{i\leq j}S_i$. Then by Lemma~\ref{lem:upper_bound_ratio_flows} we get that $f^N_i/f_i^*\leq 2S/(S+1)$, and therefore by Lemma~\ref{lem:upper-bound-PoA} we obtain \begin{corollary} $$PoA\leq \frac{4S^2}{3S^2+1}.$$ \end{corollary} In the benign case $\Lambda_{j+1} = \lambda_j + \Lambda_j \le (1 + R_j) \Lambda_j$ and hence \[ \Lambda_{k} \le P := \prod_{1 \le i < k} (1 + R_i) \Lambda_1 \] and hence \[ \PoA \le \frac{4 P^2}{3 P^2 + 1} \] in the benign case. 
}\ignore{For $r \le r_{k-1}/2$, the PoA is bounded by $B(R_1,\ldots,R_{k-2})$. For $r \ge r_{k-1}/2$, we consider two subcases: $r \ge r_k$ and $r < r_k$. In the former case, Nash and Opt both use $k$ links and PoA is decreasing. In the latter case, Nash uses strictly less links than Opt. \ignore{ } Correa at al.~\cite{Correa-et-al} gave a geometric argument for the $4/3$ bound on the Price of Anarchy. We adopt their proof to our situation. Opt uses $k$ links and Nash uses $j$ links for some $j < k$. Then $\ell_i(f_i^N) = a_i f_i^N + b_i$ is constant (call this constant $L$) for $i \le j$ and is smaller than $b_{j+1}$. Hence $C_N(r) = L r = L \cdot \left(\sum_{i \le k} f_i^*\right)$ and, for $i > j$, $(\ell_i(f_i^*) - L)f_i^* = (a_i f_i^* + b_i - L)f_i^* \le a_i(f_i^*)^2$. We can now write~\footnote{We use $\sum_{j < i \le k} (f_i^*)^2 \ge (\sum_{j < i \le k} f_i^*)^2/(k-j)^2 = (\sum_{i \le j} f_i^N - \sum_{i \le j} f_i^*)^2/(k-j)^2 \ge \sum_{i \le j} (f_i^N - f_i^*)^2/(k-j)^2$. } \begin{align*} C_{\mathit{opt}}(r) &= \sum_{1 \le i \le k} \ell_i(f_i^*) f_i^* = C_N(r) + \sum_{1 \le i \le k} (\ell_i(f_i^*) - L) f_i^*\\ &\ge C_N(r) + \sum_{i \le j} a_i (f_i^* - f_i^N) f_i^* + \sum_{j < i \le k} a_i (f_i^*)^2\\ & \ge C_N(r) + \sum_{i \le j} a_i (f_i^* - f_i^N) f_i^* + \frac{\min(a_{j+1},\ldots,a_k)}{(k - j)^2} \sum_{i \le j} (f_i^N - f_i^*)^2. \end{align*} Let $I$ be the set of indices $i$ such that $i \le j$ and $f_i^* < f_i^N$ and let $I^*$ be the indices $i \in I$ with $\lambda_i \ge d_{k} \Lambda_{i-1}$, where $d_{k} = 10 k$; $1$ belongs to $I^*$ since $\Lambda_0 = 0$. Let \[ t^* = \min_{i \in I^*} \frac{\min(a_{j+1},\ldots,a_k)}{a_i (k - j)^2} = \min_{i \in I^*} \frac{\min(1/\lambda_{j+1},\ldots,1/\lambda_k) \lambda_i}{(k - j)^2}. \] For $k = 3$, we can use $d_3 = 8$; this choice will become clear below. Then $t^*\ge 1/(R_2\max\{8,4 + 4R_1\})$. Observe that $\lambda_2 \le R_1 \lambda_1$ and $\lambda_3 \le R_2 \Lambda_2 \le R_2(1 + R_1) \lambda_1$. \begin{lemma} $I = \sset{1,\ldots,h}$ for some $h \le j$ and $t^* \ge \epsilon = \frac{d}{(1 + R_1)\cdots (1 + R_{k - 1}) k^2 (d+1)}$.\end{lemma} \ignore{\begin{proof} We have (Lemma~\ref{thm: basic facts}) \begin{align*} f_i^* &= r \lambda_i/\Lambda_k + (\Gamma_k \lambda_i /\Lambda_k - \gamma_i)/2 &\text{for $1 \le i \le k$}\\ f_i^N &= r \lambda_i/\Lambda_j + (\Gamma_j \lambda_i /\Lambda_j - \gamma_i) &\text{for $1 \le i \le j$} \end{align*} and hence $f_i^N \ge f_i^*$ if and only if \[ \lambda_i \left( \frac{r + \Gamma_j}{\Lambda_j} - \frac{r + \Gamma_k/2}{\Lambda_k} - \frac{b_i}{2}\right) \ge 0 . \] Since the $b_i$'s are increasing, this is true for an initial segment of $i$'s. \end{proof} \begin{lemma} \[t^* \ge \epsilon = \frac{d}{(1 + R_1)\cdots (1 + R_{k - 1}) k^2 (d+1)} .\] \end{lemma} \begin{proof} For $h \ge j+1 > i$, \[ \lambda_h \le R_{h-1} \Lambda_{h-1} \le R_{h-1}(1 + R_{h-2})\Lambda_{h-2} \le \ldots \le R_{h-1} \prod_{i \le \ell \le h-2} (1 + R_\ell) \Lambda_i \le P \Lambda_i,\] where $P = \prod_i (1 + R_i)$. Thus $t^* \ge {d}/(P k^2 (d+1))$ since $\lambda_i \ge d \Lambda_{i-1} = d(\Lambda_i - \lambda_i)$ and hence $\lambda_i \ge d \Lambda_i/(d+1)$ for $i \in I^*$. \end{proof}} We next weaken the lower bound for $C_{\mathit{opt}}(r)$ by dropping all terms with $i \not\in I$ and dropping the quadratic term for all $i \not \in I^*$. We obtain \[ C_{\mathit{opt}}(r) \ge C_N(r) + \sum_{i \in I \setminus I^*} a_i (f_i^* - f_i^N) f_i^* + \sum_{i \in I^*} a_i \left((f_i^* - f_i^N) f_i^* + t^* (f_i^N - f_i^*)^2\right). 
\] The following Lemma generalizes the geometric argument in~\cite{Correa-et-al}. \begin{lemma}\label{lem: useful1} Let $t \ge 0$. The function $x \mapsto (x - r)x + t(r - x)^2$ is minimized for $x = (1 + 2t)r/(2 + 2t)$. It then has value $-r^2/(4(1 + t))$.\end{lemma} For $i \in I \setminus I^*$, we apply the Lemma with $t = 0$ and for $i \in I^*$ we apply the Lemma with $t = t^*$. Thus \[ C_{\mathit{opt}}(r) \ge C_N(r) - \sum_{i \in I \setminus I^*} \frac{1}{4}a_i (f_i^N)^2 - \sum_{i \in I^*} a_i \frac{1}{4 + 4t^*} (f_i^N)^2. \] We complete the proof by showing that the total flow across the edges in $I - I^*$ is small compared to the flow across the edges in $I^*$. \begin{lemma} If $i^* \in I^*$ and $i^* +1, \ldots, i^* + h \not\in I^*$, $f^N_{i^* + 1} + \ldots + f^N_{i^* + h} \le f^N_{i^*}/8$. \end{lemma} \ignore{ \begin{proof} If $i^* + \ell \not\in i^*$ then $\lambda_{i^* + \ell} \le d \Lambda_{i^* + \ell - 1}$ and hence $\Lambda_{i^* + \ell} \le (1 + 1/d) \Lambda_{i^* + \ell - 1}$. Thus $\Lambda_{i^* + h} \le (1 + 1/d)^h \Lambda_{i^*}$ and hence \[ \Lambda_{i^* + h} - \Lambda_{i^*} \le \left((1 + 1/d)^h - 1\right) \Lambda_{i^*} \le \frac{d+1}{d} \left((1 + 1/d)^h - 1\right)\lambda_{i^*}.\] Finally observe that \[ \frac{d+1}{d} \left((1 + 1/d)^h - 1\right) \le \frac{d+1}{d} \left(e^{h/d} - 1\right) \le \frac{d+1}{10d} \le \frac{1}{8}. \] In a Nash flow all links have the same delay, say $L$. Therefore, we have $f_i^N = (L - b_i)\lambda_i$. Thus \[ f^N_{i^* + 1} + \ldots + f^N_{i^* + h} = \sum_{1 \le i - i^* \le h} (L - b_i)\lambda_i \le (L - b_{i^*})\lambda_{i^*}/8 = \frac{1}{8} f_{i^*}^N. \] \end{proof}} For $k = 3$, we either have $I^* = \sset{1,\ldots,j}$ or $j = 2$, $I^* = \sset{1}$, $I = \sset{1,2}$, and $\lambda_2 \le \lambda_1/8$ and hence $f_2^N \le r/8$. The choice $d_3 = 8$ is dictated by the fact that we want $\lambda_2 \le \lambda_1/8$ if $2 \in I \setminus I^*$. \begin{lemma} $C_{\mathit{opt}}(r) \ge \left(\frac{3 + 7t^*/2}{4 + 4t^*}\right) C_N(r)$ and $\ePoA(R_1,\ldots,R_{k-1} \le \max_r \frac{CN(r)}{C_{\mathit{opt}}(r)} \le \frac{4 + 4t^*}{3 + 7 t^*/2} < 4/3$. \end{lemma} \begin{proof} If $I = I^*$, the claim is obvious. Assume otherwise and $k = 3$. Observe that $a_i (f_i^N)^2 \le L f_i^N$ for $i \le j$ and hence \begin{align*} C_{\mathit{opt}}(r) &\ge C_N(r) - \frac{1}{4} L f_2^N - \frac{1}{4 + 4t^*} L f_1^N \ge C_N(r) - \frac{1}{4 + 4t^*} L (f_1^N + f_2^N)- \frac{4t^*}{4 + 4t^*} L f_2^N\\ &= C_N(r) - \frac{1 + t^*/2}{4 + 4t^*} L r = \left(\frac{3 + 7t^*/2}{4 + 4t^*}\right) C_N(r) \end{align*} \par \end{proof} For $k = 3$, we obtain $\ePoA \le B(R_1,R_2) = (8 R_2 \max\{8,4 + 4R_1\} + 4)/(6 R_2\max\{8,4 + 4R_1\| + 7)$. } \paragraph{The General Case:} We come to the case where $\lambda_{i+1} \ge R_{i} \Lambda_{i}$ for some $i$. Let $j$ be the smallest such $i$. For $r \le r_{j+1}/2$, MN and Opt use only links $1$ to $j$ and we are in the benign case. Hence $e\PoA$ is bounded by $B(R_1,\ldots,R_{j-1}) < 4/3$. Assume now that $r>r_{j+1/2}$. MN routes the flow exceeding $r_{j+1}/2$ exclusively on higher links. \begin{lemma} \label{lem:MNbeforeOPT} MN does not use links before Opt. \end{lemma} \begin{proof} This is trivially true for the $j+1$-st link. Consider any $h > j+1$. MN starts to use link $h$ at $s_h = r_{j+1}/2 + \sum_{j+1 \le i < h}(b_{i+1} - b_i) (\Lambda_i - \Lambda_j)$ and Opt starts to use it at $r_h/2 = r_{j+1}/2 + \sum_{j+1 \le i < h}(b_{i+1} - b_i)\Lambda_i/2$. We have $s_h \ge r_h/2$ since $\Lambda_i - \Lambda_j \ge \Lambda_i/2$ for $i > j$. 
\end{proof} We need to bound the cost of MN in terms of the cost of Opt. In order to do so, we introduce an intermediate flow Mopt (modified optimum) that we can readily relate to MN and to Opt. Mopt uses links $1$ to $j$ to route $r_{j+1}/2$ and routes $f = r - r_{j+1}/2$ optimally across links $j+1$ to $k$. Let $f_i^*$ and $f_i^m$ be the optimal flows and the flows of Mopt, respectively, at rate $r$. Let $r_s = \sum_{i \le j} f_i^* \ge r_{j+1}/2$ be the total flow routed across the first $j$ links in the optimal flow (the subscript $s$ stands for small) and let \[ t = \frac{r - r_{j+1}/2}{r - r_s}.\] We will show $t \le 1 + 1/R_j$ below. We next relate the cost of Mopt on links $j+1$ to $k$ to the cost of Opt on these links. To this end we scale the optimal flow on these links by a factor of $t$, i.e., we consider the following flow across links $j+1$ to $k$: on link $i$, $j+1 \le i \le k$, it routes $t \cdot f_i^*$. The total flow on the \emph{high} links, i.e., links $j+1$ to $k$, is $r - r_{j+1}/2$ and hence Mopt incurs at most the cost of this flow on its high links. Thus \[ \sum_{i > j} \ell_i(f_i^m) f_i^m \le \sum_{i > j} \ell_i(t f_i^*) t f_i^* \le t^2 \left(\sum_{i > j} \ell_i(f_i^*) f_i^*\right). \] The cost of MN on the high links is at most $\ePoA(R_{j+1},\ldots,R_{k-1})$ times this cost by the induction hypothesis. We can now bound the cost of MN as follows: \begin{align*} C_{\mathit{MN}}(r) &= C_N(r_{j+1}/2) + C_{\mathit{MN}}(\text{flow $f$ across links $j+1$ to $k$})\\ &\le B(R_1,\ldots,R_{j-1}) C_{\mathit{opt}}(r_{j+1}/2) + t^2 \ePoA(R_{j+1},\ldots,R_{k-1})\left(\sum_{i > j} \ell_i(f_i^*) f_i^*\right) \\ &\le B(R_1,\ldots,R_{j-1})\left(\sum_{i \le j} \ell_i(f_i^*) f_i^*\right) + t^2 \ePoA(R_{j+1},\ldots,R_{k-1})\left(\sum_{i > j} \ell_i(f_i^*) f_i^*\right) \\ &\le \max\left\{B(R_1,\ldots,R_{j-1}),\, t^2 \ePoA(R_{j+1},\ldots,R_{k-1})\right\} C_{\mathit{opt}}(r) \end{align*} \begin{lemma} \label{lem:tbound} $t \le 1 + 1/R_{j}$, where $j$ is the smallest $i$ for which $\lambda_{i+1} \ge R_{i} \Lambda_{i}$. \end{lemma} \begin{proof} Assume that Opt uses $h$ links where $j+1 \le h \le k$. Then $r_h/2 \le r \le r_{h+1}/2$. Let $r = r_h/2 + \delta$. According to Theorem~\ref{thm: basic facts}, $f_i^* = r \lambda_i/\Lambda_h + (\Gamma_h \lambda_i/\Lambda_h - \gamma_i)/2$ and hence \[ r_s = \left(\frac{r_h}{2} + \delta \right) \frac{\Lambda_j}{\Lambda_h} + \frac{1}{2} \left(\frac{\Gamma_h \Lambda_j}{\Lambda_h} - \Gamma_j\right).\] Since $\Gamma_h + r_h = b_h\Lambda_h$ and $\Gamma_j + r_j = b_j\Lambda_j$ (see Theorem~\ref{thm: basic facts}), this simplifies to \[ r_s = \frac{\Lambda_j \delta}{\Lambda_h} + \frac{b_h \Lambda_h - \Gamma_h}{2} \frac{\Lambda_j}{\Lambda_h} + \frac{1}{2} \left(\frac{\Gamma_h \Lambda_j}{\Lambda_h} - b_j \Lambda_j + r_j \right) = \frac{\Lambda_j \delta}{\Lambda_h} + \frac{1}{2} \left((b_h - b_j) \Lambda_j + r_j\right) = \frac{\Lambda_j \delta}{\Lambda_h} + r^*_s,\] where $r^*_s = \frac{1}{2} \left((b_h - b_j) \Lambda_j + r_j\right)$. We can now bound $t$.
\[ t = \frac{r- r_{j+1}/2}{r - r_s} = \frac{r_h/2 + \delta - r_{j+1}/2}{r_h/2 + \delta - r^*_s - \Lambda_j \delta /\Lambda_h} \le \max \left\{ \frac{r_h/2 - r_{j+1}/2}{r_h/2 - r^*_s}, \frac{1}{1 - \frac{\Lambda_j}{\Lambda_h}}\right\}.\] Next observe that \begin{align*} \frac{r_h/2 - r_{j+1}/2}{r_h/2 - r^*_s}& = \frac{\sum_{j+1 \le i < h} (b_{i+1} - b_i) \Lambda_i}{\left(\sum_{j \le i < h} (b_{i+1} - b_i) \Lambda_i\right) - (b_h - b_j) \Lambda_j} = \frac{\sum_{j+1 \le i < h} (b_{i+1} - b_i) \Lambda_i}{\sum_{j + 1\le i < h} (b_{i+1} - b_i) (\Lambda_i - \Lambda_j)}\\ &\le \max_{j+1 \le i < h} \frac{\Lambda_i}{\Lambda_i - \Lambda_j} = \frac{\Lambda_{j+1}}{\Lambda_{j+1}- \Lambda_j} = \frac{\Lambda_{j} + \lambda_{j+1}}{\lambda_{j+1}} \le 1 + \frac{1}{R_{j}}. \end{align*} The second term in the upper bound for $t$ is also bounded by this quantity. \end{proof} We summarize the discussion. \begin{lemma}\label{thm: recurrence for nonbenign PoA} For every $k$ and every $j$ with $1 \le j < k$. If $\lambda_{j+1} > R_{j} \Lambda_j$ and $\lambda_{i+1} \le R_i \Lambda_i$ for $i < j$, then \[ \ePoA(R_1,\ldots,R_{k-1}) \le \max\left\{B(R_1,\ldots,R_{j-1}),\left(1 + \frac{1}{R_{j}}\right)^2\ePoA(R_{j+1},\ldots,R_{k-1})\right\}.\] \end{lemma} We are now ready for our main theorem. \begin{theorem}\label{thm:ePoAbound} For any $k$, there is a choice of the parameters $R_1$ to $R_{k-1}$ such that the engineered Price of Anarchy with these parameters is strictly less than $4/3$. \end{theorem} \begin{proof} We will show $\ePoA(R_i,\ldots,R_{k-1}) < 4/3$ by downward induction on $i$, i.e., we will define $R_{k-1}$, $R_{k-2}$, down to $R_1$ in this order. For $i = k$, we have $\ePoA() = 1 < 4/3$. We now come to the induction step. We have already defined $R_{k-1}$ down to $R_{i+1}$ and now define $R_i$. We have \[ \ePoA(R_i,\ldots,R_{k-1}) \le \max \left\{ \begin{array}{l} B(R_i,\ldots,R_{k-1}), \\ \max_{j; i \le j < k} \left\{ B(R_i,\ldots,R_{j-1}),\left(1 + \frac{1}{R_{j}}\right)^2\ePoA(R_{j+1},\ldots,R_{k-1}) \right\} \end{array}\right\}, \] where the first line covers the benign case and the second line covers the non-benign case (Lemma~\ref{thm: recurrence for nonbenign PoA}). We now fix $R_i$. We have $B(R_i,\ldots,R_{k-1})< 4/3$ and $B(R_i,\ldots,R_{j-1}) < 4/3$ for all $j$, $i \le j < k$ by Lemma~\ref{lem:upper-bound-PoA1} for any choice of $R_i$. This completes the induction if the first line defines the maximum. So assume that the second line defines the maximum. We only need to deal with the case $j = i$ in the second line, as the case $j > i$ was already dealt with for larger $i$. The case $j = i$ is handled by choosing $R_i$ sufficiently large, i.e., such that $(1 + 1/R_i)^2 \ePoA(R_{i+1},\ldots,R_{k-1}) < 4/3$. \end{proof} \begin{remark}Alternatively, we could take $i = k-1$ as the base case. Then, $j = i-1$ in the non-benign case and hence \begin{align*} \ePoA(R_{k-1}) & = \max (B(R_{k-1}), B(), (1 + \frac{1}{R_{k-1}})^2 \ePoA() )\\ & = \max(\frac{4 (1 + R_{k-1})^2}{3 (1 + R_{k-1})^2 + 1}, (1 + \frac{1}{R_{k-1}})^2) \end{align*} where the bound on $B(R_{k-1})$ comes from Lemma 4. For $R_{k-1} = 7$, the bound becomes $\max((4/3)(64/65),64/49) = \max(64/65,48/49)\cdot 4/3$. \end{remark} \section{An Improved Mechanism for the Case of Two Links} \label{sec: two links advanced} In this section we present a mechanism which achieves $\npoa=1.192$ for a network that consists of two parallel links. The ratio $C_N(r)/C_{\mathit{opt}}(r)$ is maximized at $r = r_2$. 
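A short numerical sketch (Python, with hypothetical coefficients that are not part of the analysis) makes this concrete: it computes the Nash and the optimal cost of a two-link instance by brute force and locates the rate at which their ratio is largest.
\begin{verbatim}
# Illustrative sketch (hypothetical instance): for two parallel links with
# affine latencies l_i(x) = a_i*x + b_i, the ratio C_N(r)/C_opt(r) peaks at
# r_2 = (b_2 - b_1)/a_1, where it equals (4 + 4R)/(4 + 3R) with R = a_1/a_2.
import numpy as np

a1, b1 = 1.0, 0.0          # link 1: l_1(x) = x
a2, b2 = 0.5, 1.0          # link 2: l_2(x) = x/2 + 1, so R = a1/a2 = 2
r2 = (b2 - b1) / a1        # rate at which Nash starts to use link 2

def nash_cost(r):
    # Wardrop equilibrium: link 2 carries flow only once l_1 exceeds b_2.
    if a1 * r + b1 <= b2:
        return r * (a1 * r + b1)
    x = (b2 - b1 + a2 * r) / (a1 + a2)   # split equalising both latencies
    return r * (a1 * x + b1)             # common latency times total rate

def opt_cost(r):
    # brute-force the (convex) social optimum over all splits (x, r - x)
    xs = np.linspace(0.0, r, 20001)
    return np.min(xs * (a1 * xs + b1) + (r - xs) * (a2 * (r - xs) + b2))

rates = np.linspace(0.05, 6.0, 120)
ratios = [nash_cost(r) / opt_cost(r) for r in rates]
print("worst ratio %.3f at r = %.2f (r_2 = %.2f, bound %.3f)"
      % (max(ratios), rates[int(np.argmax(ratios))], r2,
         (4 + 4 * (a1 / a2)) / (4 + 3 * (a1 / a2))))
\end{verbatim}
For the coefficients above the worst ratio is $1.2$, attained at $r = r_2 = 1$, matching the bound $(4+4R)/(4+3R)$ of Theorem~\ref{thm:two-links-simple} with $R = a_1/a_2 = 2$.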
At this rate Nash still uses only the first link and Opt uses both links. In order to avoid this maximum ratio (if larger than 1.192), we force MN to use the second link earlier by increasing the latency of the first link after some rate $x_1$, $r_2/2 \le x_1 \le r_2$ to a value above $b_2$. In the preceding section, we increased the latency to $\infty$. In this way, we avoided a bad ratio at $r_2$, but paid a price for very large rates. The idea for the improved construction, is to increase the latency to a finite value. This will avoid the bad ratio, but also allow MN to use both links for large rates. In particular, we obtain the following result. \begin{theorem}\label{thm:two-links-advanced} There is a mechanism for a network of two parallel links that achieves $\npoa=1.192$. \end{theorem} \begin{proof} Recall (Theorem~\ref{thm:two-links-simple}) that the Price of Anarchy is upper bounded by $(4 + 4R)/(4 + 3R)$ where $R = a_1/a_2$. Let $R_0$ be such that $(4 + 4R_0)/(4 + 3R_0) = 1.192$. Then $R_0 = 96/53$. We only need to consider the case $R > R_0$. The latency function of the second link is unchanged and the latency function of the first link is changed into \begin{equation} \widehat{\ell}_1(x)= \left\{ \begin{array} {lll} \ell_1(x), & x\leq x_1 \\ \ell_1(x_2), & x_1 <x \le x_2 \\ \ell_1(x) & x> x_2. \end{array} \right. \label{eq:mechanism_2links} \end{equation} where $x_1$ and $x_2$ satisfy $r_2/2 \le x_1 \le r_2 \le x_2$ and will be fixed later. In words, when either the flow in the first link does not exceed $x_1$, or is larger than $x_2$, the network remains unchanged. However, when the flow in the first link is between these two values, the mechanism increases the latency of this link to $\ell_1(x_2)$. Let $r^*$ be such that \[ \ell_2(r^* - x_1) = \ell_1(x_2). \] We will fix $x_1$ and $x_2$ such that $r^* \ge r_2$. What is the effect of this modification? For $r \le r_2/2$, Opt and MN are the same and $\ePoA(r) = 1$. For $r_2/2 \le r \le x_1$, MN behaves like Nash and $\ePoA(r)$ increases. At $r = x_1$, MN starts to use the second link. MN will route any additional flow on the second link until $r = r^*$. At $r = r^*$, MN routes $x_1$ on the first link and $r^* - x_1$ on the second link. Beyond $r^*$, MN routes additional flow on the first link until the flow on the first link has grown to $x_2$. This is the case at $r^{**} = r^* - x_1 + x_2$. For $r \ge r^{**}$, MN behaves like Nash. \begin{figure} \caption{The engineered price of anarchy for the construction of Section~\ref{sec: two links advanced} \label{fig: ePoA for 2 links} \end{figure} Figure~\ref{fig: ePoA for 2 links} shows the graph of $\ePoA(r)$. We have $\ePoA(r) = 1$ for $r \le r_2/2$. For $r_2/2 \le r \le x_1$, $\ePoA(r)$ increases to \[ \ePoA(x_1) = \frac{a_1 x_1^2 + b_1 x_1}{C_{\mathit{opt}}(x_1)}.\] For $x_1 \le r \le r^*$, $\ePoA(r)$ is convex. It will first decrease and reach the value one (this assumes that $r^*$ is big enough) at the rate where Opt routes $x_1$ on the first link; after this rate it will increase again. At $r^*$, $\ePoA$ has a discontinuity because at $r^*$ MN routes $x_1$ on the first link for a cost of $\ell_1(x_1) x_1$ and at $r^*+\epsilon$ it routes $x_1+\epsilon$ on the first link for a cost of $\ell_1(x_2) (x_1+\epsilon)$. 
Thus \[ \lim_{r \rightarrow r^*_+} \frac{C_{\mathit{MN}}(r)}{C_{\mathit{opt}}(r)} = \lim_{r \rightarrow r^*_+}\frac{\ell_1(x_2) r}{C_{\mathit{opt}}(r)} = \frac{\ell_1(x_2) r^*}{C_{\mathit{opt}}(r^*)} = \frac{\ell_2(r^* - x_1) r^*}{C_{\mathit{opt}}(r^*)}.\] For $r \ge r^*$, $\ePoA(r)$ decreases. Thus \begin{equation} \ePoA = \max\left\{\frac{a_1 x_1^2 + b_1 x_1}{C_{\mathit{opt}}(x_1)}, \frac{\ell_2(r^* - x_1) r^*}{C_{\mathit{opt}}(r^*)}\right\}. \label{eq: upper bound} \end{equation} It remains to show that $x_1\le r_2$ and $r^*\ge r_2$ can be chosen\footnote{The optimal choice for $x_1$ and $r^*$ is such that both terms are equal and as small as possible. We were unable to solve the resulting system explicitly. We will prove in the next section that the mechanism defined by these optimal choices of the parameters $x_1$ and $r^*$ is optimal.} such that the right-hand side is at most 1.192. By Theorem~\ref{thm: basic facts}~(c), \[ C_{\mathit{opt}}(r) = b_1r + \frac{a_1}{1 + R}\left(r^2 + R r_2 r - R r_2^2/4\right), \] for $r \ge r_2/2$ and $R = a_1/a_2$. Also $\ell_2(r^* - x_1) = a_2(r^* - x_1) + b_2 = a_2(r^* - x_1) + b_1 + a_1 r_2$. We first determine the maximum $x_1 \le r_2$ such that $\left(a_1 x_1^2 + b_1 x_1\right)/C_{\mathit{opt}}(x_1) \le 1.192$ for all $b_1$. Since the expression $\left(a_1 x_1^2 + b_1 x_1\right)/C_{\mathit{opt}}(x_1)$ is decreasing in $b_1$, this $x_1$ is determined for $b_1 = 0$. It follows that $\alpha = x_1/r_2$ is defined by the equation \begin{equation} \frac {4 (R +1) \alpha ^2} {4 \alpha R - R +4 \alpha^2} = 1.192. \label{eq: ePoA1} \end{equation} For $R \ge R_0$, this equation has a unique solution $\alpha_0 \in [1/2,1]$, namely $$\alpha_0= \frac 1 2\cdot \frac {149 R+2 \sqrt{894 R(R+1)}} {125 R-24}.$$ We turn to the second term in equation (\ref{eq: upper bound}). For $r^* > r_2$, it is a decreasing function of $b_1$. Substituting $b_1 = 0$ into the second term and setting $\beta = r^*/r_2$ yields after some computation \begin{equation} \label{eq:p2_d0} \ePoA_2 = \frac {4\beta (R+1) (\beta-\alpha+R) } {R(4 \beta^2+4 \beta R-R)}. \end{equation} \noindent For fixed $\alpha = \alpha_0$ and any $R \ge R_0$, $\ePoA_2$ is minimized for $\beta = \beta_0 = \frac {R+\sqrt R\sqrt{R +4 \alpha_0 (R -\alpha_0) }}{ 4\alpha_0}.$ For $R \ge R_0$, one can prove $\beta_0\ge 1$, as needed. Substituting $\alpha_0$ and $\beta_0$ into $\ePoA_2$ yields a function of $R$. It is easy to see, using the derivative, that the maximum value of this function for $R \ge R_0$ is at most $1.192$. \end{proof} \section{A Lower Bound for the Case of Two Links}\label{lower bound} We prove that the construction of the previous section is optimal among the class of deterministic mechanisms that guarantee the existence of an equilibrium for every rate $r>0$ and that use non-decreasing\footnote{It remains open whether similar arguments can be applied for showing the lower bound for non-monotone mechanisms with respect to User Equilibria.} latency functions. For these mechanisms we show that $\ePoA \ge 1.191$. As in the preceding sections, we use $\hat{\ell}$ to denote the modified latency functions.
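As a sanity check on the constant, the following brute-force sketch (Python; the grids are illustrative) evaluates, for the instance $\ell_1(x) = x$ and $\ell_2(x) = x/R + 1$ with $R = 21/10$ analysed in the proof below, the max--min expression over the parameters $x_1$ and $r^*$ of a candidate mechanism that the argument bounds from below.
\begin{verbatim}
# Brute-force sketch (illustrative only): for l_1(x) = x and l_2(x) = x/R + 1
# with R = 2.1, evaluate
#   min_{x1} max{ x1^2/C_opt(x1),  min_{r*} l_2(r* - x1)*r*/C_opt(r*) },
# the max-min expression that the proof below bounds from below by 1.191.
import numpy as np

R = 2.1

def c_opt(r):
    # optimal social cost for this instance (one link is used below r = 1/2)
    return r**2 if r <= 0.5 else (r**2 + R * r - R / 4.0) / (1.0 + R)

rstar = np.linspace(1.0, 3.0, 2001)
copt_rstar = np.array([c_opt(r) for r in rstar])

best = np.inf
for x1 in np.linspace(0.05, 1.0, 500):
    term1 = x1**2 / c_opt(x1)
    term2 = np.min(((rstar - x1) / R + 1.0) * rstar / copt_rstar)
    best = min(best, max(term1, term2))

print("max-min value for R = %.1f: %.3f" % (R, min(1.2, best)))
\end{verbatim}
The value obtained is consistent with the bound of $1.191$ established below.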
The use of $R$ throughout this section denotes the ratio of the linear coefficients of the two latency functions of the instance, and should not be confused with its use in the previous sections, where it was a parameter of the mechanism. As mentioned above, we are making two assumptions about the $\hat{\ell}$'s: an equilibrium flow must exist for every rate $r$, and $\hat{\ell}_i$ is non-decreasing, i.e., if $x<x'$, then $\hat{\ell}_i(x) \le \hat{\ell}_i(x')$, for $i=1,2$. It is worthwhile to recall the equilibrium conditions for general latency functions (as given by Dafermos-Sparrow~\cite{DS69}): if $(x,y)$ is an equilibrium for rate $r = x + y$, then $\hat{\ell}_2(y') \ge \hat{\ell}_1(x)$ for $y' \in (y,r]$ (otherwise $y' - y$ amount of flow would move from the first link to the second) and $\hat{\ell}_1(x') \ge \hat{\ell}_2(y)$ for $x' \in (x,r]$ (otherwise, $x' - x$ amount of flow would move from the second link to the first). Since we assume our functions to be monotone, the condition $\hat{\ell}_2(y') \ge \hat{\ell}_1(x)$ for $y' \in (y,r]$ is equivalent to $\liminf_{y' \downarrow y} \hat{\ell}_2(y') \ge \hat{\ell}_1(x)$ provided that $y < r$ (or equivalently, $x > 0$). Since we are discussing a network of two parallel links, the latter condition is in turn equivalent to (\ref{user equilibrium}). \begin{theorem}\label{thm: lower bound for 2links} The construction of Section~\ref{sec: two links advanced} is optimal and $\ePoA \ge 1.191$. \end{theorem} \begin{proof} We analyze a network with latency functions $\ell_1(x) = x$ and $\ell_2(x) = x/R +1 = (x + R)/R$, $2 \le R \le 4$, and derive a lower bound as a function of the parameter $R$; the restriction $2 \le R \le 4$ will become clear below. In a second step we choose $R$ so as to maximize the lower bound; the optimal choice is $R = R^* \approx 2.1$. For $r \le 1/2$, Opt uses one link and $C_{\mathit{opt}}(r) = r^2$, and for $r \ge 1/2$, Opt uses two links and $C_{\mathit{opt}}(r) = (r^2 + Rr - R/4)/(1 + R)$; $C_{\mathit{opt}}(1) = (3R + 4)/(4R + 4)$ and $C_N(1) = 1$. Thus $\PoA = \PoA(1) = (4 R + 4)/(3 R + 4)$. For $R \ge 2$, we have $\PoA(1) \ge 12/10 = 1.2$. Consider now some modified latency functions $\hat \ell_1, \hat \ell_2$, and let $(x_1,1-x_1)$ be an equilibrium flow for rate $1$ for the modified network. Let \[ r^* = \inf \{ r \; ;\; \text{there is an equilibrium flow $(x,r - x)$ for MN with $x > x_1$} \} ;\] $r^* = \infty$ if there is no equilibrium flow $(x,y)$ with $x > x_1$. The equilibrium conditions for flow $(x_1,1-x_1)$ imply \begin{equation}\label{first} \hat{\ell}_1(x') \ge \hat{\ell}_2(1 - x_1) \ge \ell_2(1 - x_1) \ge 1 \text{ for } x_1 < x' \le 1. \end{equation} The above definition of $r^*$ is a core element of our proof. In Lemma~\ref{lem: domain_of_r*} we restrict the domain of $r^*$, as well as the range of the modified latencies for efficient mechanisms (those with low $\ePoA$), ending up with the lower bound provided in (\ref{lb1}). Then in Lemmas~\ref{lem: large x} and~\ref{lem: small x} we focus on the properties that the equilibria of efficient mechanisms should satisfy. In Lemma~\ref{lem: large x}, we bound from above the amount of equilibrium flow that uses the first link if the mechanism is efficient, while in Lemma~\ref{lem: small x} we obtain a second lower bound on the $\ePoA$. Finally, in Lemma~\ref{lem: summary} we summarize the above properties, ending up with the lower bound of~(\ref{lb2}).
\begin{lemma}\label{lem: domain_of_r*} If $\hat{\ell}_1(x_1) \ge 1$ or $r^* = \infty$ or $r^* \le 1$, then $\ePoA \ge 1.2$. \end{lemma} \begin{proof} If $\hat{\ell}_1(x_1) \ge 1$, we have $C_{\mathit{MN}}(1) \ge 1$ and hence $\ePoA \ge \PoA(1) \ge 1.2$. If $r^* = \infty$, $\ePoA(\infty) \ge 1 + 1/R$. For $R \le 4$, this is at least $1.25$. If $r^* \leq 1$, there is an equilibrium flow $(x,y)$ with $x > x_1$ and $r = x + y \leq 1$. Then $\hat{\ell}_1(x) \ge 1$ by inequality (\ref{first}). Also $\hat{\ell}_2(y) \ge 1$. Thus $C_{\mathit{MN}}(r) \ge 1 \ge r$ and hence $\ePoA(r) \ge r/C_{\mathit{opt}}(r)$. For $r \le 1$, we have \[ \frac{r}{C_{\mathit{opt}}(r)} = \frac{r(1 + R)}{r^2 + Rr - R/4} \ge 1 + \frac{R/4}{r^2 + Rr - R/4} \ge 1 + \frac{R/4}{1 + R - R/4} = \frac{4 + 4R}{4 + 3R} = \PoA(1) \ge 1.2.\] \end{proof} In the light of the Lemma above, we proceed under the assumption $\hat{\ell}_1(x_1) < 1$, and hence $x_1 < 1$, and $1 < r^* < \infty$. Then $(x_1,0)$ is an equilibrium flow, since $\hat{\ell}_1(x_1) < 1 \le \hat{\ell}_2(y')$ for $0 < y' \le x_1$. Thus \begin{equation}\label{lb1} \ePoA(x_1) \ge \frac{x_1^2}{C_{\mathit{opt}}(x_1)}. \end{equation} By definition of $r^*$, MN routes at most $x_1$ on the first link for any rate $r < r^*$ and for any $\epsilon > 0$ there is an $r < r^* + \epsilon$ such that $(x,r - x)$ with $x > x_1$ is an equilibrium flow for MN. For $r < r^*$, any equilibrium flow $(x,r-x)$ has $x \le x_1$. Thus, for $x' \in (x,r] \supseteq (x_1,r]$, $\hat{\ell}_1(x') \ge \hat{\ell}_2(r-x) \ge \ell_2(r-x) \ge \ell_2(r - x_1)$. Since this inequality holds for any $r < r^*$, we have \begin{equation}\label{second} \hat{\ell}_1(x') \ge \ell_2(r^* - x_1) \quad\text{for}\quad x' \in (x_1,r^*). \end{equation} For $\epsilon > 0$, let \[ \calF_\epsilon = \set{(x,y)}{\text{$(x,y)$ is an equilibrium flow with $r^* \le x + y \le r^* + \epsilon$ and $x > x_1$}}.\] Observe that $\calF_\epsilon$ is non-empty by definition of $r^*$. \begin{lemma}\label{lem: large x}If for arbitrarily small $\epsilon > 0$, there is a $(x,y) \in \calF_\epsilon$ with $x \ge r^*$, then $\ePoA \ge \PoA(1) \ge 1.2$. \end{lemma} \begin{proof} Let $r = x + y$. Then $\ePoA \ge r^2 / C_{\mathit{opt}}(r)$. Since this inequality holds for arbitrarily small $\epsilon$, $$\ePoA \ge (r^*)^2/C_{\mathit{opt}}(r^*) \ge \PoA(1) \ge 1.2.$$ \end{proof} We proceed under the assumption that there is an $\epsilon_0 > 0$ such that $\calF_{\epsilon_0}$ contains no pair $(x,y)$ with $x \ge r^*$. \begin{lemma}\label{lem: small x} If for arbitrarily small $\epsilon \in (0,\epsilon_0)$, $\calF_\epsilon$ contains either a pair $(x,r^*-x_1)$ or pairs $(x,y)$ and $(u,v)$ with $y \not= v$, then $\ePoA \ge \ell_2(r^* - x_1)r^*/C_{\mathit{opt}}(r^*)$. \end{lemma} \begin{proof} Assume first that $\calF_\epsilon$ contains a pair $(x,r^*-x_1)$ and let $r = x + r^* - x_1$. Then \[ C_{\mathit{MN}}(r) = \hat{\ell}_1(x) x + \hat{\ell}_2 (r^* - x_1) (r^* - x_1) \ge \ell_2(r^* - x_1) r\] since $\hat{\ell}_1(x) \ge \ell_2(r^* - x_1)$ by (\ref{second}). Assume next that $\calF_\epsilon$ contains pairs $(x,y)$ and $(u,v)$ with $y \not= v$. Then $\hat{\ell}_1(x) \ge \ell_2(r^* - x_1)$ and $\hat{\ell}_1(u) \ge \ell_2(r^* - x_1)$ by (\ref{second}). We may assume $y > v$. Let $r = x+ y$. Since $(u,v)$ is an equilibrium, $\hat{\ell}_2(y') \ge \hat{\ell}_1(u)$ for $y' \in (v,u+v)$ and hence $\hat{\ell}_2(y) \ge \hat{\ell}_1(u)$.
Thus \[ C_{\mathit{MN}}(r) = \hat{\ell}_1(x) x + \hat{\ell}_2 (y) y \ge \ell_2(r^* - x_1) r.\] We have now shown that $C_{\mathit{MN}}(r) \ge \ell_2(r^* - x_1) r$ for rates $r$ greater than $r^*$ and arbitrarily close to $r^*$. Thus $\ePoA \ge \ell_2(r^* - x_1)r^*/C_{\mathit{opt}}(r^*)$. \end{proof} We proceed under the assumption that there is an $\epsilon_0 > 0$ such that $\calF_{\epsilon_0}$ contains no pair $(x,y)$ with $x \ge r^*$, no pair $(x,r^* - x_1)$ and no two pairs with distinct second coordinate. In other words, there is a $y_0 < r^* - x_1$ such that all pairs in $\calF_{\epsilon_0}$ have second coordinate equal to $y_0$. Let $(x_0,y_0) \in \calF_{\epsilon_0}$. Then $y_0 < r^* - x_1$. Let $(x,y)$ be an equilibrium for rate $r = (r^* + x_1 + y_0)/2$. Then $r = (2r^* + y_0 - (r^* - x_1))/2 < r^*$ and hence $x \le x_1$. Thus $y = r - x \ge r - x_1 = (r^* - x_1 + y_0)/2 > y_0$ and $r - y_0 > x_1$. Consider the pair $(r - y_0,y_0)$. Its rate is less than $r^*$ and its flow across the first link is $r - y_0$, which is larger than $x_1$. Thus it is not an equilibrium by the definition of $r^*$. Therefore there is either an $x'' \in (r - y_0,r]$ with $\hat{\ell}_1(x'') < \hat{\ell}_2(y_0)$ or a $y'' \in (y_0,r]$ with $\hat{\ell}_2(y'') < \hat{\ell}_1(r - y_0)$. We now distinguish cases. Assume the former. Since $(x,y)$ is an equilibrium, we have $\hat{\ell}_1(x') \ge \hat{\ell}_2(y)$ for all $x' \in (x,x+y]$ and in particular for $x''$; observe that $r - y_0 \ge x$ since $r - x = y > y_0$. Thus $\hat{\ell}_2(y_0) > \hat{\ell}_2(y)$, a contradiction to the monotonicity of $\hat{\ell}_2$. Assume the latter. Since $(x_0,y_0)$ is an equilibrium, we have $\hat{\ell}_2(y') \ge \hat{\ell}_1(x_0)$ for all $y' \in (y_0,x_0 + y_0]$ and in particular for $y''$. Thus $\hat{\ell}_1(r - y_0) > \hat{\ell}_1(x_0)$, a contradiction to the monotonicity of $\hat{\ell}_1$; observe that $r - y_0 < x_0$ since $r < r^* \le x_0 + y_0$. \begin{lemma}\label{lem: summary} \begin{equation}\label{lb2} \ePoA \ge \min \left\{ 1.2, \min_{x_1 \le 1} \max\left\{ \frac{x_1^2}{C_{\mathit{opt}}(x_1)}, \min_{r^* \ge 1} \frac{\ell_2(r^* - x_1) r^*}{C_{\mathit{opt}}(r^*)} \right\} \right\} .\end{equation} \end{lemma} \begin{proof} If $x_1 \ge 1$ or $r^* \le 1$ or $r^* = \infty$, we have $\ePoA \ge 1.2$. So assume $x_1 < 1$ and $1 < r^* < \infty$. The argument preceding this Lemma shows that the hypothesis of either Lemma~\ref{lem: large x} or~\ref{lem: small x} is satisfied. In the former case, $\ePoA \ge 1.2$. In the latter case, $\ePoA \ge \max \left\{\frac{x_1^2}{C_{\mathit{opt}}(x_1)}, \frac{\ell_2(r^* - x_1) r^*}{C_{\mathit{opt}}(r^*)}\right\}$. This completes the proof. \end{proof} It remains to bound \begin{equation}\label{lb3} \min_{x_1 \le 1} \max\left\{ \frac{x_1^2}{C_{\mathit{opt}}(x_1)}, \min_{r^* \ge 1} \frac{\ell_2(r^* - x_1) r^*}{C_{\mathit{opt}}(r^*)} \right\} = \min_{x_1 \le 1} \max \left\{ \frac{x_1^2}{C_{\mathit{opt}}(x_1)}, \min_{r^* \ge 1} \frac{4 r^* (R+1) (r^* - x_1 + R)}{R(4(r^*)^2 + 4Rr^* - R)}\right\}\end{equation} from below. We prove a lower bound of $1.191$. The term $x_1^2/C_{\mathit{opt}}(x_1)$ is increasing in $x_1$. Thus there is a unique value $\alpha_1 \in [1/2,1]$ such that the first term is larger than $1.191$ for $x_1 > \alpha_1$. If the minimizing $x_1$ is larger than $\alpha_1$ we have established the bound. The second term is minimized for $r^* = \max\left\{1,\left(R + \sqrt{R^2 + 4 R^2 x_1 - 4 R x_1^2}\right)/(4x_1)\right\}$.
Since $x_1 \le 1$ and hence $x_1^2 \le x_1$, we have $\left(R + \sqrt{R^2 + 4 R^2 x_1 - 4 R x_1^2}\right)/(4x_1) \ge 2R/4 \ge 1$ and hence $$r^* = \left(R + \sqrt{R^2 + 4 R^2 x_1 - 4 R x_1^2}\right)/(4x_1).$$ The second term is decreasing in $x_1$ and hence we may substitute $x_1$ by $\alpha_1$ for the purpose of establishing a lower bound. We now specialize $R$ to $21/10$. For this value of $R$ and $x_1 = \alpha_1$ \[ \frac{\ell_2(r^* - \alpha_1) r^* }{C_{\mathit{opt}}(r^*)} \Big|_{r^* = (R + \sqrt{R^2 + 4 R^2 \alpha_1 - 4 R \alpha_1^2})/(4\alpha_1) \text{ and } R = 21/10} \ge 1.191.\] This completes the proof of the lower bound. We next argue that the construction of Section~\ref{sec: two links advanced} is optimal. Equation (\ref{eq: upper bound}) of Section~\ref{sec: two links advanced} for $b_1 = 0$ and Equation (\ref{lb3}) agree. Hence our refined solution is optimal. \end{proof} \section{Open Problems} \label{sec:conclusion} Clearly the ultimate goal is to design coordination mechanisms that work for general networks. In the case of parallel links that we studied, we showed that the engineered Price of Anarchy of our mechanism approaches $4/3$ as the number of links $k$ grows. It is still an open problem to show a bound of the form $4/3 -\alpha$, for some strictly positive $\alpha$ independent of $k$. A possible approach could be to use the ideas of Section~\ref{sec: two links advanced}. Another approach would be to define the benign case more restrictively. Assuming $R_i = 8$ for all $i$, we would call the following latencies benign: $\ell_1(x) = x$, and $\ell_i(x) = 1 + \epsilon\cdot i + x/8^i$ for $i > 1$ and small positive $\epsilon$. However, Opt starts using the $k$-th link shortly after $1/2$ and hence uses an extremely efficient link for small rates. Also, our results hold only for affine original latency functions. What can be said for the case of more general latencies, for instance polynomials? On the more technical side, it would be interesting to study whether our lower bound construction of Section~\ref{lower bound} can be extended to modified latency functions $\hat{\ell}$ that do not need to satisfy monotonicity. \paragraph{Acknowledgements} We would like to thank Elias Koutsoupias, Spyros Angelopoulos and Nicol\'as Stier Moses for many fruitful discussions. \end{document}
\begin{document} \title{Environment-assisted quantum-enhanced sensing with electronic spins in diamond} \author{Alexandre Cooper} \affiliation{Department of Nuclear Science and Engineering and Research Lab of Electronics,\\Massachusetts Institute of Technology, Cambridge, MA 02139, USA } \affiliation{Department of Physics, Mathematics and Astronomy,\\California Institute of Technology, Pasadena, CA 91125, USA } \author{Won Kyu Calvin Sun} \author{Jean-Christophe Jaskula} \author{Paola Cappellaro}\email{[email protected]} \affiliation{Department of Nuclear Science and Engineering and Research Lab of Electronics,\\Massachusetts Institute of Technology, Cambridge, MA 02139, USA } \date{\today} \begin{abstract} The performance of solid-state quantum sensors based on electronic spin defects is often limited by the presence of environmental spin impurities that cause decoherence. A promising approach to improve these quantum sensors is to convert environment spins into useful resources for sensing. Here we demonstrate the efficient use of an unknown electronic spin defect in the proximity of a nitrogen-vacancy center in diamond as both a quantum sensor and a quantum memory. We first experimentally evaluate the improvement in magnetic field sensing provided by mixed entangled states of the two electronic spins. Our results critically highlight the tradeoff between the advantages expected from increasing the number of spin sensors and the typical challenges associated with increasing control errors, decoherence rates, and time overheads. Still, by taking advantage of the spin defect as both a quantum sensor and a quantum memory whose state can be repetitively measured to improve the readout fidelity, we can achieve a gain in performance over the use of a single-spin sensor. These results show that the efficient use of available quantum resources can enhance quantum devices, pointing to a practical strategy towards quantum-enhanced sensing and information processing by exploiting environment spin defects. \end{abstract} \maketitle \section{Introduction} Precision measurement of weak magnetic fields at the atomic scale using spin defects in solids is enabling novel applications in the physical and life sciences~\cite{Taylor2008, Schirhagl2014, Degen2017}. Nitrogen-vacancy (NV) centers in diamond are particularly suitable for sensing magnetic fields, as their electronic spins can be optically polarized and read out, as well as coherently controlled under ambient conditions over long coherence times. Such spin sensors have recently been used for characterizing magnetic thin films~\cite{Maletinsky2012, Gross2017a}, imaging living cells~\cite{LeSage2013}, detecting single molecules~\cite{Lovchinsky2016}, and performing nuclear magnetic resonance spectroscopy of small-volume chemical samples~\cite{Mamin2013, Aslam2017, Glenn2018}. An important challenge in increasing the sensitivity of spin sensors is taking advantage of their intrinsic quantum nature~\cite{Tilma2010, Giovannetti2011}, e.g., to realize spin-squeezed states~\cite{Wineland1992, Kitagawa1993,Esteve2008} or entangled states~\cite{Jones2009} of $n$ spins, which improve the precision by $\sqrt{n}$ over the use of $n$ independent spins~\cite{Bollinger1996, Huelga1997}. This requires ensembles of interacting spins that can be efficiently initialized, manipulated, and read out, as well as robustly prepared in entangled states with long coherence times. 
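The origin of this $\sqrt{n}$ gain can be illustrated with a short Monte Carlo sketch (Python, with arbitrary parameters unrelated to the experiments reported below): the phase uncertainty obtained from $n$ independent spins is compared with that of an $n$-spin entangled probe whose signal oscillates at $n$ times the accumulated phase.
\begin{verbatim}
# Illustrative sketch (arbitrary parameters): shot-noise-limited phase error
# for n independent spins versus an n-spin entangled (GHZ-type) probe.
import numpy as np

rng = np.random.default_rng(0)
phi, n, shots, trials = 0.3, 4, 200, 2000

def phase_error(freq_mult, n_outcomes):
    # freq_mult = 1: independent spins (n*shots binary outcomes per trial)
    # freq_mult = n: entangled probe (shots parity outcomes per trial)
    p = 0.5 * (1 + np.cos(freq_mult * phi))      # probability of outcome +1
    errs = []
    for _ in range(trials):
        k = rng.binomial(n_outcomes, p)
        p_hat = np.clip(k / n_outcomes, 1e-9, 1 - 1e-9)
        errs.append(np.arccos(2 * p_hat - 1) / freq_mult - phi)
    return np.std(errs)

print("independent spins :", phase_error(1, n * shots))
print("entangled probe   :", phase_error(n, shots))
print("expected ratio 1/sqrt(n) =", 1 / np.sqrt(n))
\end{verbatim}
For these parameters the ratio of the two uncertainties approaches $1/\sqrt{n}$, the standard metrological gain of a maximally entangled probe over independent spins.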
Quantum-enhanced sensing with spin defects in diamond has been so far prevented by the difficulty of accessing ensembles of strongly-coupled NV centers~\cite{Neumann2010, Dolde2013}, while mitigating the detrimental influence of nearby defects on their coherence and charge state properties. \begin{figure} \caption{ \textbf{Environment-assisted quantum-enhanced sensing.} \label{FigExperimentalSystem} \end{figure} To overcome these challenges, we explore an approach to quantum-enhanced sensing based on exploiting electronic spins in the environment of a single NV spin-sensor~\cite{Schaffry2011, Goldstein2011}, such as those associated with crystalline defects~\cite{Shi2013, Grinolds2014, Knowles2016, Rosenfeld2018, Cooper2018a}, surface spins~\cite{Sushkov2014, Sangtawesin2018}, or paramagnetic labels~\cite{Shi2015,Schlipf2017}. Besides increasing the number of sensing spins, combining diverse spin species enables distributing sensing tasks, e.g., the primary spin is used for state preparation and read out, while auxiliary spins are used for sensing and storage. This approach bears resemblance to mixed-species quantum logic with trapped atomic ions~\cite{Schmidt2005, Ballance2015}, where an auxiliary ion is used to cool, prepare, and read out the state of another ion, which in turn serves the role of a memory or a spectroscopic probe. To demonstrate our approach to environment-assisted quantum-enhanced sensing, we focus on the problem of measuring time-varying magnetic fields with mixed entangled states~\cite{Vedral1997} of two electronic spins~(Fig.~\ref{FigExperimentalSystem}a). Specifically, we perform magnetic sensing experiments with a mixed entangled state of two electronic spins associated with a single NV center and an electron-nuclear spin defect (X) in diamond~(Fig.~\ref{FigExperimentalSystem}b-c). The X defect is one of two environmental spin defects, with electronic spin $S=1/2$ and nuclear spin $I=1/2$, whose hyperfine and dipolar interaction tensors have been recently characterized~\cite{Cooper2018a}. Taking advantage of its stability under optical illumination, we exploit the X spin as both a sensor and a memory whose state can be repetitively measured to improve readout fidelity. \section{Quantum control} To illustrate our ability to convert environmental spin defects into resources for sensing, we first implement coherent control techniques to initialize, control, and read out the X spin via the NV spin, as well as generate and characterize a mixed entangled state of two spins, which are known to contribute to quantum enhancement in metrology~\cite{Modi2011}. All experiments are performed at room temperature with a static magnetic field of $205.2(1)~\text{G}$ aligned along the molecular axis of the NV center. Because of the large energy mismatch between the NV and X spins~(Fig.~\ref{FigExperimentalSystem}d), coherent spin exchange in the laboratory frame is suppressed. We optically polarize and read out the NV spin using green laser pulses; because the X spin lacks a physical mechanism for state preparation and detection, while being robust against optical illumination, we use the NV spin to initialize and repetitively read out its state using cross-polarization and recoupled spin-echo sequences~(Fig.~\ref{FigExperimentalSystem}b). Both NV and X spins are coherently controlled using resonant microwave pulses delivered through a coplanar waveguide. We address only one out of two hyperfine transitions of the X spin to reduce control errors and time overheads. 
Because the X nuclear spin is unpolarized, our nominal signal contrast is reduced by half; throughout the manuscript, we normalize our signal by subtracting its nominal baseline and multiplying it by a factor of two, but analyze the performance of our approach for both an unpolarized and a fully polarized X nuclear spin. \begin{figure} \caption{ \textbf{Quantum control.} \label{FigQuantumControl} \end{figure} \subsection{State initialization via polarization transfer} We initialize the X spin by transferring polarization from the NV spin using a Hartmann-Hahn Cross-Polarization (HHCP) sequence~\cite{Hartmann1962, Laraoui2013}, during which both NV and X spins are continuously driven at the same Rabi frequency, $\Omega_\text{NV}=-\Omega_\text{X}$, such that both spins are brought into resonance in the rotating frame~(Fig.~\ref{FigExperimentalSystem}e). The cross-polarization sequence introduces coherent spin exchange between the NV and X spins at the dipolar coupling strength $d=58(4)~\text{kHz}$~(Fig.~\ref{FigExperimentalSystem}f), which we use with an appropriate choice of driving phases to implement both polarization transfer gates (SWAP, $|00\rangle\mapsto|11\rangle$) and entangling gates (iSWAP, $|00\rangle\mapsto(|00\rangle\pm i|11\rangle)/\sqrt{2}$) to prepare and detect two-spin coherence for sensing~\cite{Schuch2003}. Using the HHCP sequence for a spin-exchange time of $\tau_\text{HHCP}=8.6~\mu\text{s}$ after a green laser pulse, we perform $N$ rounds of polarization transfer~(Fig.~\ref{FigQuantumControl}a). The contrast of the cross-polarization signal increases from $49(3)\%~(\text{N}=0)$ to $82(3)\%~(\text{N}=1)$ and $88(4)\%~(\text{N}=3)$, corresponding to an increase in X spin polarization from $14(3)\%$ to $76(3)\%$ and $94(6)\%$ respectively. Because of the tradeoff between increasing polarization and reducing time overheads, we perform all of our experiments after a single round of polarization transfer. \subsection{Entanglement generation and characterization} To generate the mixed entangled state $\rho_\Phi$, we then implement an entangling gate using the HHCP sequence for half the spin-exchange time of $\tau_\text{HHCP}/2=4.3~\mu\text{s}$~(Fig.~\ref{FigQuantumControl}b). While we cannot prepare the pure Bell entangled states, $\ket{\Phi_\pm}=(\ket{00}\pm i\ket{11})/\sqrt{2}$, we can still achieve a mixed state that has non-zero two-spin coherence in the subspace spanned by the Bell states, i.e., $\tr{\rho_\Phi\ket{\Phi_\pm}\bra{\Phi_\pm}}\neq0$. Though such mixed states are unavoidable due to experimental imperfections, they still prove to be useful resources for sensing applications, as we demonstrate below. We characterize the two-spin coherence by converting it back into a population difference of the NV spin using a modulated disentangling gate; we modulate the phases of the pulses of the cross-polarization sequence acting on the NV and X spins at $500~\text{kHz}$ and $250~\text{kHz}$ to impart coherent oscillations at the sum of both frequencies~\cite{Mehring2003a, Scherer2008}, simulating the evolution of two-spin coherence in the presence of a static magnetic field. As expected, the signal oscillates at $751(2)~\text{kHz}$, the sum of both frequencies, and the signal contrast is $85(3)\%$, consistent with the value measured after a single round of polarization transfer~(Fig.~\ref{FigQuantumControl}b). 
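The expected shape of this record can be reproduced with a minimal sketch (Python; the modulation frequencies and contrast are the values quoted above, while the noise level is arbitrary):
\begin{verbatim}
# Illustrative sketch: two-spin coherence accumulates the phases imparted on
# the NV and X spins additively, so phase modulations at f1 = 500 kHz and
# f2 = 250 kHz produce an oscillation at f1 + f2 = 750 kHz.
import numpy as np

f1, f2, contrast = 0.500, 0.250, 0.85      # MHz, signal contrast
t = np.arange(0.0, 8.0, 0.05)              # evolution time in microseconds
rng = np.random.default_rng(1)
signal = contrast * np.cos(2 * np.pi * (f1 + f2) * t) \
         + 0.02 * rng.standard_normal(t.size)

# crude frequency estimate from the discrete Fourier transform of the record
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print("estimated oscillation frequency: %.3f MHz" % freqs[np.argmax(spectrum)])
\end{verbatim}
The estimated frequency is the sum $f_1 + f_2$, the signature of two-spin coherence.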
We further measure the coherence time of the single-spin and two-spin states~(Fig.~\ref{FigQuantumControl}c) using a spin-echo sequence with decoupling pulses applied simultaneously to both NV and X spins. The coherence signal is fitted to $S(\tau)\propto e^{-(\Gamma_2\cdot \tau)^p}$, where $\Gamma_2$ is the decoherence rate and $p\approx1.6$ is the decay exponent. We measure $\Gamma_2^\text{NV}=22(1)~\text{kHz}$, $\Gamma_2^\text{X}=15(2)~\text{kHz}$, and $\Gamma_2^{\Phi} = 36(4)~\text{kHz}$, such that the decoherence rate of the two-spin state is consistent with the sum of the decoherence rates of the NV and X spins, $\Gamma_2^{\Phi}=\Gamma_2^{\text{NV}}+\Gamma_2^{\text{X}}$. The NV and X spins experience different magnetic environments, such that $\Gamma_2^\text{X}<\Gamma_2^\text{NV}$, which is beneficial for achieving a gain in sensitivity using environmental spin defects. \subsection{Repetitive readout} We finally demonstrate repetitive readout of the X spin via the NV spin. Any arbitrary state of the X spin can be generally measured by coherently mapping it onto the state of the NV spin before optical readout, e.g., using a cross-polarization sequence, as done for the experiments in Fig.~\ref{FigQuantumControl}a,c. A population state of the X spin can also be measured by correlating the states of the NV and X spin before optical readout, e.g., using a recoupled spin-echo sequence~(Fig.~\ref{FigExperimentalSystem}b). Conversely, we can improve the NV spin readout by exploiting the X spin as a quantum memory, storing the desired information onto its state: as the X spin is stable under optical illumination, we can repeat multiple times the mapping and readout steps to increase the signal-to-noise ratio~\cite{Jiang2009}. For example, the amplitude of the optimally-weighted cumulative signal obtained for a magnetometry experiment at $\tau=19~\mu\text{s}$ increases by a factor of 4.2 after $m=9$ additional repetitive measurements, providing an improvement in signal-to-noise ratio of 1.91(8)~(Fig.~\ref{FigQuantumControl}d). We note that, in the current experiment, the number of repetitive measurements that provides an improvement in signal-to-noise ratio is not limited by the intrinsic relaxation of the X spin, but by imperfections in the gate used for the mapping. These results illustrate the advantage of working with environmental spins that are robust against optical illumination, such that they can be used not only as quantum sensors, but also as quantum memories that can be repetitively measured. \section{Magnetic sensing} We now focus on the problem of estimating the amplitude of a time-varying magnetic field whose temporal profile is known, here a sinusoidal field $b(t)=b\sin{(2\pi\nu t)}$. We sample the field with a phase-matched spin-echo sequence of duration $\tau=1/\nu$ with decoupling pulses applied simultaneously on both spins. The average signal expected for $n$ maximally entangled spins, which we assume are all equally coupled to the field, is given by $S_n(\tau)\propto\alpha_n(\tau)\sin(\nu_n(\tau)b)$, where $\nu_n(\tau)=n\gamma_e\hat{f}\tau$ is the precession rate of the interferometric signal, $\gamma_e=2\pi\cdot2.8~\text{MHz}$ is the gyromagnetic ratio of the electronic spin, and $\hat{f}\leq2/\pi$ is a scaling factor quantifying the overlap between the sinusoidal field and the spin-echo sequence, with the equality held when phase-matched~\cite{Taylor2008, deLange2011, Magesan2013, Cooper2014}. 
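A minimal sketch (Python; the amplitudes are placeholders) makes the scaling of $\nu_n(\tau)$ concrete by evaluating the precession rate and the corresponding fringe period in field amplitude for the single-spin and two-spin cases:
\begin{verbatim}
# Illustrative sketch (placeholder amplitudes): expected magnetometry fringes
# S_n(b) ~ alpha_n * sin(n * gamma_e * f_hat * tau * b); the two-spin state
# precesses twice as fast in the field amplitude b.
import numpy as np

gamma_e = 2 * np.pi * 2.8e6     # electron gyromagnetic ratio, rad s^-1 G^-1
f_hat = 2 / np.pi               # overlap factor for a phase-matched spin echo
tau = 10e-6                     # sensing time, s

for n, alpha in [(1, 0.9), (2, 0.5)]:        # amplitudes are placeholders
    nu = n * gamma_e * f_hat * tau           # precession rate, rad per gauss
    period = 2 * np.pi / nu                  # field amplitude per full fringe
    print("n = %d: precession rate %.1f rad/G, fringe period %.1f mG"
          % (n, nu, period * 1e3))
\end{verbatim}
Doubling $n$ doubles the precession rate and halves the fringe period, which is the origin of the factor-of-two slope increase exploited below.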
We measure the magnetometry signal by sweeping the amplitude of the sinusoidal field~(Fig.~\ref{FigMagneticSensing}a) for a sensing time of $\tau=2$, $10$, and $19~\mu\text{s}$. The signal shows coherent oscillations with the two-spin state precessing at twice the rate of the single-spin state. The relative difference in signal contrast is explained by decoherence during sensing and control errors~(Fig.~\ref{FigMagneticSensing}b). Indeed, the decrease in signal amplitude satisfies $\alpha_n(\tau) = \alpha_0^n e^{-(\Gamma_2^n\cdot \tau)^{p_n}}$, where $\Gamma_2^n$ and $p_n$ are the coherence parameters for $n$-spin states measured from independent spin-echo experiments~(Fig.~\ref{FigQuantumControl}c). We estimate a nominal amplitude of $\alpha_0^\text{NV}=96(3)\%$ and $\alpha_0^{\Phi}=78(6)\%$ for the single-spin and two-spin states, resulting from inefficiencies during state preparation and readout caused by dissipation and unitary control errors. \begin{figure} \caption{ \textbf{Magnetic sensing.} \label{FigMagneticSensing} \end{figure} \section{Performance analysis} We finally quantify the improved performance of the composite quantum sensor with respect to a single spin sensor, analyzing the situation in which the spin defect acts as just a sensor or both as a sensor and a memory. The relevant figure of merit is the smallest magnetic field $\delta b_n=\sigma_{S_n}/|dS_n(\tau)/db|$ that can be measured by the $n$-spin sensor, where $\sigma_{S_n}$ is the standard deviation of the magnetometry signal, and $dS_n(\tau)/db=\alpha_n(\tau)\nu_n(\tau)$ is the maximum slope of the magnetometry signal~(Fig.~\ref{FigMagneticSensing}a). We thus define the \textit{gain in performance} as $g_n(\tau)=\delta b_1(\tau)/\delta b_n(\tau)$, which exceeds unity when the $n$-spin state outperforms the single-spin state, and is upper bounded by $n$ if utilizing $(n-1)$ ancillas as sensors only. In our experiment, the gain in performance is given by $g_\Phi(\tau)=|\alpha_\Phi(\tau)\nu_\Phi(\tau)|/|\alpha_\text{NV}(\tau)\nu_\text{NV}(\tau)|\leq2$, as we can safely assume that the signal uncertainty $\sigma_S$ is the same for the single-spin and two-spin sensing cases, as for both cases the signal is obtained by measuring the same NV and is limited by shot noise. This figure of merit is relevant in many scenarios, such as when the experiment can only be repeated a fixed number of times due to external constraints, e.g. the duration or triggering of the signal to be measured. However, often a more practical metric is the smallest magnetic field that can be measured in a fixed time. We thus also consider the sensitivity $\eta=\delta b\sqrt{T}$, where $T=M(\tau+\tau_O)$ is the total experimental time for $M$ measurements with a sensing time $\tau$ and $\tau_O$ time overheads. To account for these time overheads, needed to prepare and readout the mixed entangled state of $n$ spins, we define the \textit{gain in sensitivity} as $\tilde{g}_n(\tau)=g_n(\tau)h_n(\tau)$, where $g_n(\tau)$ is the gain in performance and $h_n(\tau)$ is the relative time overheads for $n$-spin protocols with more complex control requirements. In our experiment, $h_\Phi(\tau)=\sqrt{(\tau+\tau_{NV})/(\tau+\tau_\text{NV}+\tau_\Phi)}$, where $\tau$ is the sensing time, $\tau_\text{NV}=5.7~\mu\text{s}$, and $\tau_\Phi=21~\mu\text{s}$, which includes the additional time needed for state initialization, entanglement generation, and state readout, and scales inversely with the dipolar coupling strength. 
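The figures of merit defined above can be evaluated directly from the measured parameters; the short sketch below (Python) does so, assuming for simplicity the same stretch exponent $p = 1.6$ for both states and a fully polarized X nuclear spin (an unpolarized nuclear spin would halve $\alpha_\Phi$):
\begin{verbatim}
# Numerical sketch of the figures of merit, using the parameters quoted in
# the text; p = 1.6 is assumed for both states, X nuclear spin fully polarized.
import numpy as np

tau = np.linspace(1e-6, 40e-6, 400)              # sensing time, s
p = 1.6
alpha_nv  = 0.96 * np.exp(-(22e3 * tau) ** p)    # single-spin amplitude
alpha_phi = 0.78 * np.exp(-(36e3 * tau) ** p)    # two-spin amplitude
tau_nv, tau_phi = 5.7e-6, 21e-6                  # control overheads, s

g = 2 * alpha_phi / alpha_nv                     # gain in performance (nu doubles)
h = np.sqrt((tau + tau_nv) / (tau + tau_nv + tau_phi))
g_tilde = g * h                                  # gain in sensitivity

print("gain in performance > 1 for tau up to ~%.1f us" % (tau[g > 1].max() * 1e6))
print("max gain in sensitivity %.2f at tau = %.1f us"
      % (g_tilde.max(), tau[np.argmax(g_tilde)] * 1e6))
\end{verbatim}
With these numbers the gain in performance exceeds unity only for a fully polarized nuclear spin, while the gain in sensitivity stays below unity once the preparation and readout overheads are included, in line with the discussion below.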
\begin{figure} \caption{\textbf{Performance analysis.}} \label{FigPerformance} \end{figure} \subsection{Environmental spin as sensor} Our results reported in Fig.~\ref{FigPerformance}a show that, despite the twofold increase in precession rate, $|\nu_\Phi(\tau)/\nu_\text{NV}(\tau)|=2$, the relative amplitude of the magnetometry signal is less than half, $|\alpha_\Phi(\tau)/\alpha_\text{NV}(\tau)|\leq1/2$, such that no gain in performance is achieved for an unpolarized X nuclear spin. However, extrapolating our data to a fully polarized X nuclear spin or to driving both hyperfine transitions simultaneously, we predict a gain in performance greater than unity for up to $\tau\approx25~\mu\text{s}$. Still, when accounting for time overheads, there exists no sensing time for which a gain in sensitivity is achievable, unless our control imperfections were reduced by at least $8~\%$. These results illustrate the fundamental tradeoff between increasing the number of spins and increasing control errors, decoherence rates, and time overheads. One approach to achieving a gain in sensitivity would be to improve the control fidelity or to look for a system with more favorable parameters. For instance, our simulation results illustrated in Fig.~\ref{FigPerformance}c show that, assuming the same level of control errors, a spin defect with either stronger dipolar coupling, $d\gtrsim 75$~kHz (reducing time overhead), or smaller decoherence rate, $\Gamma_2^\text{X}/\Gamma_2^\text{NV}\lesssim 0.4$ (increasing visibility at a given $\tau$), is sufficient to reach the regime where the entangled state outperforms the single-spin sensor. \subsection{Environmental spin as sensor and memory} Here we demonstrate a different approach for improving the gain in performance, which exploits the X spin as both a sensor and a memory that can store information about the measured field. The stored information can then be repetitively measured with a quantum non-demolition measurement~\cite{Braginsky1980}. This results in an increased signal-to-noise ratio SNR($m$) after $m$ repetitive measurements and an upper bound for the gain in performance of $n\,\text{SNR}(m)$. In our case, the quantum non-demolition measurement is enabled by the fact that even at low magnetic field the X spin is unperturbed by the optical pulse used to perform a projective measurement on the NV spin, providing an advantage over other ancillary spin systems such as nitrogen nuclear spins or nitrogen substitutional impurities (P1 centers). In addition, going beyond the typical repetitive readout scheme~\cite{Jiang2009,Neumann2010b}, we take advantage of the fact that our disentangling gate maps the magnetic-field-dependent phase onto a population difference on both the NV and X spins. This provides a significant advantage, as it bypasses the need for an additional mapping operation, reducing both time overheads and the loss in signal contrast caused by control imperfections. Using repetitive measurements of the X spin, we experimentally observe a maximum two-fold increase in $\text{SNR}$ at both $\tau=2~\text{and}~19~\mu\text{s}$, such that a gain in performance is achieved for the entire coherence time of the two-spin sensor. For sensing experiments at $\tau=19~\mu\text{s}$ (Fig.~\ref{FigPerformance}b), we achieve a gain in performance greater than unity after $m=6$ repetitive measurements with a maximum of $g_\text{rr}=1.06(4)$ after $m=9$ repetitive measurements.
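The scaling of the cumulative signal-to-noise ratio with the number of repetitive readouts can be sketched with a simple model (ours, for illustration only): if the per-readout amplitude decays geometrically because of mapping-gate imperfections while the shot noise per readout stays constant, the optimally weighted (inverse-variance) combination of $m$ readouts has an SNR equal to the quadrature sum of the individual SNRs, which saturates once the per-readout amplitude has decayed away.
\begin{verbatim}
import numpy as np

def cumulative_snr(snr1, r, m):
    # SNR of readouts 1..m, decaying geometrically with ratio r (assumed model),
    # combined with optimal inverse-variance weights -> quadrature sum.
    per_readout = snr1 * r ** np.arange(m)
    return np.sqrt(np.sum(per_readout ** 2))

for r in (1.0, 0.8):                  # ideal mapping vs decaying mapping fidelity
    gains = [cumulative_snr(1.0, r, m) for m in (1, 5, 10)]
    print(r, [f"{g:.2f}" for g in gains])
\end{verbatim}
This is only meant to illustrate why the cumulative SNR saturates with $m$ when the mapping gate is imperfect, consistent with the observation above that the number of useful repetitions is limited by mapping-gate imperfections rather than by relaxation of the X spin.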
When accounting for (increased) time overheads, using $\tau_\Phi\mapsto\tau_\Phi+(m-1)\tau_\text{rr}$ with $\tau_\text{rr}=6.1~\mu\text{s}$, the gain in sensitivity reaches a maximum of $\tilde{g}_\text{rr}=0.55(2)$ after $m=7$ measurements when driving only one hyperfine transition of the unpolarized X nuclear spin. We expect to reach $\tilde{g}_\text{rr}=1.10(4)$ for a fully polarized X nuclear spin or by driving both transitions. Further gains in sensitivity could be achieved by accessing more strongly coupled spin systems with longer coherence times (see Fig.~\ref{FigPerformance}c), and reducing control errors, e.g., using optimal control techniques~\cite{Dolde2014}. \section{Conclusions} In conclusion, we have experimentally demonstrated an approach to quantum-enhanced sensing using mixed entangled states of electronic spins by converting electron-nuclear spin defects in the environment of a single-spin sensor into useful resources for sensing, serving both as quantum sensors and as quantum memories whose state can be repetitively measured. This approach complements ongoing efforts to improve the performance of single-spin sensors, including methods to limit the concentration of spin impurities~\cite{Balasubramanian2009,Kucsko18}, extend coherence times with quantum error correction~\cite{deLange2010, Knowles2014, Kessler2014,Bar-Gill2013}, and increase collection efficiency with photonic structures~\cite{Babinec2010,Wan18}. We emphasize that this approach is not specific to our electron-nuclear spin defect, but is also applicable to other environmental spin defects with favorable coupling strengths, coherence times, and stability under optical illumination. Using coherent control protocols to initialize and repetitively read out a single-spin defect, as well as to create a mixed entangled state with two-spin coherence, we achieve a gain in performance in sensing time-varying magnetic fields and predict a gain in sensitivity for a fully polarized X nuclear spin, which is within experimental reach by further extending our control toolbox to nuclear spins. Still, we find that common challenges associated with increased control errors, faster decoherence of entangled states, and time overheads associated with their creation limit the sensitivity improvement. In particular, the additional time required for initialization, control and readout of the entangled state is especially deleterious, a practical fact that is often overlooked and that our study helps to highlight. To at least partially overcome these challenges, we demonstrate that the environmental spin defect can serve a dual role, not only acting as a magnetic field sensor, but also as a quantum memory, enabling repetitive readouts of the relative population of its spin states. Our results thus demonstrate that, despite the increased complexity and fragility, quantum control protocols can turn electronic spin defects in the environment of a single-spin sensor, usually considered as noise sources, into useful resources for realizing quantum-enhanced sensing. Further improvements in quantum control, such as optimal control techniques to improve gate fidelities~\cite{Dolde2014}, and materials properties, e.g., to deterministically create confined ensembles of interacting spin defects with slower decoherence rates and stronger coupling strengths, should enable magnetic sensing beyond the standard quantum limit using electronic spin sensors~\cite{Jones2009}. \end{document}
\begin{document} \begin{abstract} The aim of this paper is to construct ``special'' isogenies between K3 surfaces, which are not Galois covers between K3 surfaces, but are obtained by composing cyclic Galois covers, induced by quotients by symplectic automorphisms. We determine the families of K3 surfaces for which this construction is possible. To this purpose we will prove that there are infinitely many big families of K3 surfaces which both admit a finite symplectic automorphism and are (desingularizations of) quotients of other K3 surfaces by a symplectic automorphism. In the case of involutions, for any $n\in\mathbb{N}_{>0}$ we determine the transcendental lattices of the K3 surfaces which are $2^n:1$ isogenous (by a non Galois cover) to other K3 surfaces. We also study the Galois closure of the $2^2:1$ isogenies and we describe the explicit geometry on an example.\end{abstract} \title{On certain isogenies between K3 surfaces} \section{Introduction}\label{intro} K3 surfaces are symplectic regular surfaces and among their finite order automorphisms the ones which preserve the symplectic structure (the symplectic automorphisms) play a special role. Indeed, the quotient of a K3 surface by a finite symplectic automorphism produces a singular surface whose desingularization is again a K3 surface. This construction establishes a particular relation between different sets of K3 surfaces: the ones which admit a finite symplectic automorphism and the ones obtained as desingularization of the quotient of a K3 surface by a symplectic automorphism. In the following the latter K3 surfaces are said to be (cyclically) covered by a K3 surface and the former are said to be the cover of a K3 surface. We denote by $\mathcal{L}_n$ the set of the K3 surfaces which admit an order $n$ symplectic automorphism and by $\mathcal{M}_n$ the set of the K3 surfaces which are $n:1$ cyclically covered by a K3 surface. From now on we assume the surfaces to be projective. Thanks to several works, starting from the end of the 70's until now (see, e.g. \cite{NikSympl}, \cite{Morrison}, \cite{vGS}, \cite{GS}, \cite{GSprime}, \cite{GSnonprime}, \cite{G}), the sets $\mathcal{L}_n$ and $\mathcal{M}_n$ are described as the union of countably many families of $R$-polarized K3 surfaces, for certain known lattices $R$. The dimension of these families is at most 11, and, recalling that the families of generic projective K3 surfaces have dimension 19, one immediately observes that the K3 surfaces which either admit a finite symplectic automorphism or which are cyclically covered by a K3 surface are quite special. So, it is natural to expect that the intersection $\mathcal{L}_n\cap\mathcal{M}_n$ is extremely small, i.e. that a K3 surface which is both covered by and a cover of another K3 surface is really rare. On the other hand, there is at least one known example of a family of K3 surfaces contained in $\mathcal{L}_n\cap\mathcal{M}_n$, given by the family of the K3 surfaces which admit an elliptic fibration with an $n$-torsion section (see Section \ref{sec:intersection}). This family has codimension one in the families which are components of $\mathcal{L}_n$ and of $\mathcal{M}_n$. Hence, surprisingly, the intersection $\mathcal{L}_n\cap\mathcal{M}_n$ is not so small. The aim of this paper is to investigate more precisely the intersection between the two sets $\mathcal{L}_n$ and $\mathcal{M}_n$ and to relate it with the study of isogenies between K3 surfaces.
In this paper, the term ``isogeny between K3 surfaces'' means a generically finite rational map between K3 surfaces, as in \cite{I} and \cite{BSV}. The quotient by a finite symplectic automorphism on a K3 surface $X$ induces an isogeny between $X$, which admits the symplectic automorphism, and the K3 surface $Y$ cyclically covered by $X$. The isogeny is birationally the quotient map and its degree of course equals the order of the automorphism. There are other isogenies between K3 surfaces, which are not quotient maps, see e.g. \cite{I} and \cite{BSV}. Here we discuss one of these other isogenies: given a K3 surface $Z\in\mathcal{L}_n\cap\mathcal{M}_n$, it induces an $n^2:1$ isogeny between two other K3 surfaces. Indeed, since $Z\in\mathcal{M}_n$, it is $n:1$ covered by a K3 surface $X$; since $Z\in\mathcal{L}_n$, it is an $n:1$ cover of a K3 surface $Y$. By composing these two $n:1$ maps one obtains an $n^2:1$ isogeny between $X$ and $Y$. We will prove that generically this isogeny is not induced by a quotient map. In Section \ref{sec: prlim results} we recall some preliminary results on the set $\mathcal{L}_n$ of K3 surfaces admitting a symplectic automorphism of order $n$ and on the set $\mathcal{M}_n$ of the K3 surfaces $n:1$ cyclically covered by a K3 surface. In Section \ref{sec:intersection} we obtain our main results on the intersection $\mathcal{L}_n\cap\mathcal{M}_n$. In particular in Theorem \ref{theorem: intersection Ln and Mn} we prove: {\bf Theorem }{\it There are components $\mathcal{Z}$ of $\mathcal{L}_n\cap\mathcal{M}_n$ such that $\dim\left(\mathcal{L}_n\right)=\dim\left(\mathcal{M}_n\right)=\dim \mathcal{Z}$, i.e. the dimension of $\mathcal{Z}$ is the maximal possible and thus $\mathcal{Z}$ is an irreducible component of both $\mathcal{L}_n$ and $\mathcal{M}_n$.} As a consequence we construct $n^2:1$ isogenies and we prove that generically they are not quotient maps. Section \ref{sec:involutions} contains the main results for the case $n=2$. In addition to the results which hold for every admissible $n$, we also obtain the following theorem (see Theorem \ref{theorem: intersection L2 and M2} and Corollary \ref{cor: infinite sets of families}): {\bf Theorem }{\it For any $d,n\in\mathbb{N}_{>0}$, there exists a lattice $R_{d,n}$ (with $R_{d,n}\simeq R_{d',n'}$ if and only if $(d,n)=(d',n')$) and there exists a family of $R_{d,n}$-polarized K3 surfaces such that, for any $m\in\mathbb{N}_{>0}$ and any $R_{d,n}$-polarized K3 surface $X$, there exists an $R_{d,m}$-polarized K3 surface $Y$ isogenous to $X$ with an isogeny of degree $2^{|n-m|}$. So for each $d\in\mathbb{N}_{>0}$ there are countably many families of polarized K3 surfaces such that there exist isogenies between members of these families.} The N\'eron--Severi group and the transcendental lattice of all the surfaces involved in these isogenies are explicitly given. In Section \ref{subsec: Galois closure 2^2:1 cover} we describe the Galois closure of the $2^2:1$ (non Galois) covers constructed. Moreover, in Section \ref{sec: X1 and Y2} we describe the geometry of a generic member $X_2$ of a certain maximal dimensional family of K3 surfaces which is contained in $\mathcal{L}_2\cap\mathcal{M}_2$. The K3 surface $X_2$ admits two different polarizations of degree 4: one exhibits the surface $X_2$ as a special singular quartic in $\mathbb{P}^3$ with eight nodes, the other as a smooth double cover of a quadric in $\mathbb{P}^3$.
The former model is the singular quotient of another K3 surface by a symplectic involution (which implies that $X_2\in\mathcal{M}_2$); the latter implies that $X_2$ admits a symplectic involution, induced by the switching of the rulings on the quadric (so that $X_2\in\mathcal{L}_2$). We describe both projective models of $X_2$ and give the explicit relation between them, providing a geometric realization of the previous lattice theoretic result which guarantees that $X_2$ is both covered by a K3 surface and is a cover of a K3 surface. In particular this allows us to describe a symplectic involution on the model of $X_2$ as singular quotient. In Section \ref{subsec: U+N e U+E8} we analyse the analogous problem for two specific families of codimension 1 in $\mathcal{L}_2\cap\mathcal{M}_2$; one of these families is totally contained in all the components of $\mathcal{L}_2$ and the other in all the components of $\mathcal{M}_2$. \section{Preliminary results}\label{sec: prlim results} We recall in this section some of the definitions and results on K3 surfaces, symplectic automorphisms on K3 surfaces and quotients of K3 surfaces by their automorphisms. In the following we work with projective surfaces. \subsection{Symplectic automorphisms and cyclic covers of K3 surfaces} \begin{definition} A (projective) K3 surface is a regular projective surface with trivial canonical bundle. If $X$ is a K3 surface, we choose a generator of $H^{2,0}(X)$ (i.e. a symplectic form), denote it by $\omega_X$ and call it the period of the K3 surface. The second cohomology group $H^2(X,\mathbb{Z})$ of a K3 surface $X$ equipped with the cup product is a lattice, isometric to a standard lattice which does not depend on $X$ and is denoted by $\Lambda_{K3}:= U^{\oplus 3}\oplus E_8(-1)^{\oplus 2}$. \end{definition} \begin{definition} Let $X$ be a K3 surface, and $\omega_X$ its period. An automorphism $\sigma$ of $X$ is said to be symplectic if $\sigma^*(\omega_X)=\omega_X$. \end{definition} One of the main results on symplectic automorphisms on K3 surfaces is that the quotient of a K3 surface by a symplectic automorphism is still a K3 surface, after a birational transformation which resolves the singularities of the surface. \begin{proposition}{\rm (\cite{NikSympl})} Let $X$ be a K3 surface and $\sigma\in \mbox{Aut}(X)$ an automorphism of $X$ of finite order. Then the minimal smooth surface $Y$ birational to $X/\sigma$ is a K3 surface if and only if $\sigma$ is symplectic. \end{proposition} \begin{definition} We will say that a K3 surface $Y$ is $n:1$ cyclically covered by a K3 surface, if there exists a pair $(X,\sigma)$ such that $X$ is a K3 surface, $\sigma$ is an automorphism of order $n$ of $X$ and $Y$ is birational to $X/\sigma$. \end{definition} The first mathematician who worked on symplectic automorphisms of finite order on K3 surfaces and who established the fundamental results on these automorphisms was Nikulin, in \cite{NikSympl}. We summarize in Theorem \ref{therem: Nikulin results} and Theorem \ref{theorem: X has sympl autom iff Omega is in NS(X)} the main results obtained in his paper, but first we recall some useful information and definitions. If $\sigma$ is a symplectic automorphism on $X$ of order $n$, its linearization near the points with non trivial stabilizer is given by a $2\times 2$ diagonal matrix with determinant 1 and thus it is of the form ${\rm diag}(\zeta_n^a,\zeta_n^{n-a})$ for $1\leq a\leq n-1$ and $\zeta_n$ an $n$-th primitive root of unity.
So, the points with non trivial stabilizer are isolated fixed points and the quotient $X/\sigma$ has isolated singularities, all of type $A_{m_j}$ where $m_j+1$ divides $n$. In particular the surface $Y$, which is the minimal surface resolving the singularities of $X/\sigma$, contains smooth rational curves $M_i$ arising from the desingularization of $X/\sigma$. The classes of these curves span a lattice isometric to $\oplus_{j} A_{m_j}$. \begin{definition} Let $Y$ be a K3 surface, $n:1$ cyclically covered by a K3 surface. The minimal primitive sublattice of ${\rm NS}(Y)$ containing the classes of the curves $M_i$, arising from the desingularization of $X/\sigma$, is denoted by $\mathbb{M}_n$. \end{definition} We observe that $\mathbb{M}_n$ is necessarily an overlattice of finite index (a priori possibly 1) of the lattice $\oplus_{j} A_{m_j}$ spanned by the curves $M_i$. The presence of a smooth cyclic cover of $X/\sigma$ branched over the singular points obtained as contraction of the curves $M_i$ suggests that there are some divisibility relations among the $M_i$'s and thus that the index of the inclusion $\left\langle \left(M_i\right)_i\right\rangle\hookrightarrow \mathbb{M}_n$ is not 1 (as indeed stated in Theorem \ref{therem: Nikulin results}). \begin{definition}{\rm (See \cite[Definition 4.6]{NikSympl})}\label{defi: essentially unique action} Let $\sigma$ be an order $n$ automorphism of a K3 surface $X$. We will say that its action on the second cohomology group is essentially unique if there exists an isometry $g_n:\Lambda_{K3}\stackrel{\sim}{\longrightarrow}\Lambda_{K3}$ of order $n$ such that for every pair $(X,\sigma)$, there exists an isometry $\varphi:H^2(X,\mathbb{Z})\rightarrow \Lambda_{K3}$ such that $\sigma^*=\varphi^{-1}\circ g_n\circ\varphi$. \end{definition} \begin{theorem}\label{therem: Nikulin results} Let $X$ be a K3 surface and $\sigma$ a finite symplectic automorphism of $X$ of order $|\sigma|=n$. Then \begin{itemize} \item $2\leq n\leq 8$ (see \cite[Theorem 6.3]{NikSympl}); \item the singularities of $X/\sigma$ depend only on $n$ (see \cite[Section 5]{NikSympl}); \item the isometry class of the lattice $\mathbb{M}_n$ depends only on $n$ and $\mathbb{M}_n$ is an overlattice of index $n$ of the lattice $\left\langle\left( M_i\right)_i\right\rangle$ spanned by the curves arising from the desingularization of the quotient $X/\sigma$ (see \cite[Theorem 6.3]{NikSympl}); \item the action of $\sigma^*$ on $H^2(X,\mathbb{Z})$ is essentially unique (see \cite[Theorem 4.7]{NikSympl}) and thus the isometry classes of the lattices $H^2(X,\mathbb{Z})^{\sigma^*}$ and $\left(H^2(X,\mathbb{Z})^{\sigma^*}\right)^{\perp}$ depend only on $n$. \item The lattice $\left(H^2(X,\mathbb{Z})^{\sigma^*}\right)^{\perp}$ is primitively embedded in ${\rm NS}(X)$ (see \cite[Lemma 4.2]{NikSympl}) and ${\rm rank}(\left(\Lambda_{K3}^{g_n}\right)^{\perp})={\rm rank}(\mathbb{M}_n)$ (see \cite[Formula (8.12)]{NikSympl}). \end{itemize} \end{theorem} \begin{definition} Let $X$ be a K3 surface with a symplectic automorphism $\sigma$ of order $n$. Since the action of $\sigma^*$ on $H^2(X,\mathbb{Z})$ is essentially unique, the lattice $\left(H^2(X,\mathbb{Z})^{\sigma^*}\right)^{\perp}$ is isometric to $\left(\Lambda_{K3}^{g_n}\right)^{\perp}$ (with the notation of Definition \ref{defi: essentially unique action}) and we denote it by $\Omega_{n}$.
\end{definition} For every admissible $n$ the lattices $\Omega_n$ were computed: in \cite{vGS} and \cite{Morrison} if $n=2$; in \cite{GSprime} if $n$ is an odd prime; in \cite{GSnonprime} if $n$ is not a prime. The lattices $\mathbb{M}_n$ were computed for every admissible $n$ in \cite[Theorems 6.3 and 7.1]{NikSympl}. The lattices $\Omega_n$ and $\mathbb{M}_n$ characterize the K3 surfaces admitting a symplectic automorphism of order $n$ or an $n:1$ cyclic cover by a K3 surface respectively; indeed, the following two results hold \begin{theorem}\label{theorem: X has sympl autom iff Omega is in NS(X)}{\rm (See \cite[Theorem 4.15]{NikSympl})} A K3 surface $X$ admits a symplectic automorphism of order $n$ if and only if $\Omega_n$ is primitively embedded in ${\rm NS}(X)$. \end{theorem} \begin{theorem}\label{theorem: Y has cyclic cover iff Mn is in NS(Y)}{\rm(See \cite[Proposition 2.3]{GS} for the case $n=2$ and \cite[Theorem 5.2]{G} for other $n$)} A K3 surface $Y$ is $n:1$ cyclically covered by a K3 surface if and only if $\mathbb{M}_n$ is primitively embedded in ${\rm NS}(Y)$. \end{theorem} \begin{corollary}\label{corollary: NS of K3 in Ln and Mn} Let $X$ be a projective K3 surface admitting a symplectic automorphism of order $n$. Then $\rho(X)\geq 1+{\rm rank}(\Omega_n)$ and if $\rho(X)=1+{\rm rank}(\Omega_n)$, then ${\rm NS}(X)$ is an overlattice of finite index (possibly 1) of $\langle 2d\rangle\oplus \Omega_n$, for a certain $d\in\mathbb{N}_{>0}$, such that $\Omega_n$ is primitively embedded in this overlattice. Let $Y$ be a projective K3 surface $n:1$ cyclically covered by a K3 surface. Then $\rho(Y)\geq 1+{\rm rank}(\mathbb{M}_n)$ and if $\rho(Y)=1+{\rm rank}(\mathbb{M}_n)$, then ${\rm NS}(Y)$ is an overlattice of finite index (possibly 1) of $\langle 2e\rangle\oplus \mathbb{M}_n$, for a certain $e\in\mathbb{N}_{>0}$, such that $\mathbb{M}_n$ is primitively embedded in this overlattice. \end{corollary} \proof Since $X$ admits a symplectic automorphism of order $n$, $\Omega_n$ is primitively embedded in ${\rm NS}(X)$. Since $\Omega_n$ is negative definite and $X$ is projective, the orthogonal to $\Omega_n$ in ${\rm NS}(X)$ contains a class with positive self-intersection, in particular it is non empty. So $\rho(X)\geq 1+{\rm rank}(\Omega_n)$ and $\langle 2d\rangle\oplus \Omega_n$ is embedded in ${\rm NS}(X)$. Similarly one obtains the result for $\rho(Y)$ and ${\rm NS}(Y)$.\endproof \begin{definition} We define the following sets of K3 surfaces (which are subsets of the moduli space of the K3 surfaces): $$\mathcal{L}_n:=\{\mbox{K3 surfaces which admit a symplectic automorphism }\sigma\mbox{ of order }n\}/\cong,$$ $$\mathcal{M}_n:=\{\mbox{K3 surfaces which admit an }n:1\mbox{ cyclic cover by a K3 surface}\}/\cong,$$ where $\cong$ denotes the equivalence relation given by isomorphism between two K3 surfaces. \end{definition} Given an even hyperbolic lattice $R$ which admits a primitive embedding in $\Lambda_{K3}$, we denote by $\mathcal{P}(R)$ the moduli space of isomorphism classes of $R$-polarized K3 surfaces, i.e. of those K3 surfaces $X$ for which there exists a primitive embedding $R\subset {\rm NS}(X)$. Moreover, we will write $A<B$ in order to say that $B$ is an overlattice of finite index of $A$.
\begin{corollary}\label{corollary construction sets Ln and Mn} The set $\mathcal{L}_n$ is a union of countably many components and each of them is a family of $R$-polarized K3 surfaces, for an appropriate choice of the lattice $R$: \[ \mathcal{L}_n=\bigcup_{d\in\mathbb{N}}\left(\bigcup_{\substack{(\langle 2d\rangle\oplus\Omega_n)< R\\ \Omega_n\subset R\ \mathrm{prim.}}}\mathcal{P}(R)\right). \] All the components $\mathcal{P}(R)$ are equidimensional and have dimension $19-{\rm rank}(\Omega_n)$. The set $\mathcal{M}_n$ is a union of countably many components and each of them is a family of $R$-polarized K3 surfaces, for an appropriate choice of the lattice $R$: \[ \mathcal{M}_n=\bigcup_{d\in\mathbb{N}}\left(\bigcup_{\substack{(\langle 2d\rangle\oplus\mathbb{M}_n)< R\\ \mathbb{M}_n\subset R\ \mathrm{prim.}}}\mathcal{P}(R)\right). \] All the components are equidimensional and have dimension $19-{\rm rank}(\mathbb{M}_n)=19-{\rm rank}(\Omega_n)$. \end{corollary} \proof Let $R$ be an overlattice of finite index of $\langle 2d\rangle\oplus\Omega_n$ such that $\Omega_n$ is primitively embedded in it. If $X$ is a K3 surface such that $R$ is primitively embedded in ${\rm NS}(X)$, then $\Omega_n$ is primitively embedded in ${\rm NS}(X)$ and thus $X$ admits a symplectic automorphism of order $n$, by Theorem \ref{theorem: X has sympl autom iff Omega is in NS(X)}. Vice versa, if a projective K3 surface $X$ admits a symplectic automorphism of order $n$, then there exists a $d\in\mathbb{N}_{>0}$ such that $\langle 2d\rangle\oplus \Omega_n$ is embedded in ${\rm NS}(X)$, and an overlattice $R$ of $\langle 2d\rangle\oplus \Omega_n$ is primitively embedded in ${\rm NS}(X)$. So one can describe the set $\mathcal{L}_n$ as a union of families $\mathcal{P}(R)$ of $R$-polarized K3 surfaces, where $R$ is an overlattice of finite index $r$ (possibly 1) of $\langle 2d\rangle\oplus \Omega_n$ for a certain $d\in\mathbb{N}$. There are countably many lattices $\langle 2d\rangle\oplus\Omega_n$ and each of them has a finite number of overlattices of finite index. So $\mathcal{L}_n$ is the union of countably many families of $R$-polarized K3 surfaces. The dimension of each of these families is $20-{\rm rank}(R)=20-(1+{\rm rank}(\Omega_n))$. This concludes the proof for the set $\mathcal{L}_n$. The proof for $\mathcal{M}_n$ is similar, but one has to use Theorem \ref{theorem: Y has cyclic cover iff Mn is in NS(Y)} instead of Theorem \ref{theorem: X has sympl autom iff Omega is in NS(X)}. \endproof \subsection{Isogenies between K3 surfaces} The following definition was first given by Inose in \cite{I} in the case of K3 surfaces with Picard number $20$. \begin{definition}\label{def: isogenies} Let $X$ and $Y$ be two K3 surfaces. We say that $X$ and $Y$ are isogenous if there exists a rational map of finite degree between $X$ and $Y$. This map is said to be an isogeny between $X$ and $Y$ and, if it is generically of degree $n$, the map is said to be an isogeny of degree $n$. \end{definition} The easiest construction of an isogeny between K3 surfaces is given by the quotient by a finite symplectic automorphism, i.e. if $X$ is a K3 surface admitting a symplectic automorphism $\sigma$ of order $n$, then the quotient map induces an isogeny of degree $n$ between $X$ and $Y$, the minimal model of $X/\sigma$.
So if $X\in\mathcal{L}_n$, then there exists $Y\in\mathcal{M}_n$ which is isogenous to $X$ with an isogeny of degree $n$. Similarly if $Y\in\mathcal{M}_n$, then there exists a K3 surface $X\in\mathcal{L}_n$ which is isogenous to $Y$ with an isogeny $X\dashrightarrow Y$ of degree $n$. There exist however isogenies between K3 surfaces which are not induced by the quotient by a finite group of symplectic automorphisms: an example is given by isogenous Kummer surfaces constructed from Abelian surfaces related by an isogeny, as in \cite[Proof of Thm 2]{I}, under the additional assumption that the degree is a prime $p>7$ (see also \cite[Example 6.5]{BSV}). Now, let us suppose that $Z$ is a K3 surface such that $Z\in \mathcal{L}_n\cap \mathcal{M}_n$. Then, there exists a K3 surface $X\in\mathcal{L}_n$ which is isogenous to $Z$, with an isogeny $\rho:X\dashrightarrow Z$ of degree $n$, but also a K3 surface $Y\in \mathcal{M}_n$ which is isogenous to $Z$ with an isogeny $\pi:Z\dashrightarrow Y$ of degree $n$. So the existence of $Z\in\mathcal{L}_n\cap\mathcal{M}_n$ allows one to construct an isogeny of degree $n^2$ between the two K3 surfaces $X$ and $Y$, given by the composition $\pi\circ\rho:X\dashrightarrow Y$. We will show that in many cases this isogeny is not induced by a quotient by a finite group of symplectic automorphisms acting on $X$, see Proposition \ref{prop: n^2 isogenies not quotients}. In Section \ref{sec:intersection} we prove that $\mathcal{L}_n\cap \mathcal{M}_n$ is non empty if $2\leq n\leq 8$ and then we provide examples of $n^2:1$ isogenies between K3 surfaces. \subsection{Remarks on Hodge isogenies between K3 surfaces} Definition \ref{def: isogenies} is not the only notion of isogeny existing in the literature: to distinguish between the two definitions, we will talk here of {\it Hodge isogeny} for the notion used for example in \cite{Buskin,Huybrechts}. \begin{definition} Let $X$ and $Y$ be two K3 surfaces. We say that $X$ and $Y$ are Hodge isogenous if there exists a rational Hodge isometry between $H^2(X,\mathbb{Q})$ and $H^2(Y,\mathbb{Q})$. \end{definition} Hodge isogenous K3 surfaces have been studied since the foundational work of \cite{Mukai} and \cite{NikulinCorresp}, also in relation with \v{S}afarevi\v{c}'s conjecture \cite{Shafarevich} about the algebraicity of correspondences on K3 surfaces. In \cite[Prop. 3.1]{BSV}, the authors give a comparison between the notions of isogeny and of Hodge isogeny: \begin{proposition}\label{prop:BSV} If $\varphi: X\dashrightarrow Y$ is an isogeny of degree $n$, $n$ is not a square and the ranks of the transcendental lattices $T_X$ and $T_Y$ are odd, then $\varphi$ is never a Hodge isogeny. \end{proposition} This follows from the fact that, under these assumptions, there cannot exist any isometry $T_X\otimes \mathbb{Q}\simeq T_Y\otimes \mathbb{Q}$. The transcendental lattice $T_X$ of the very general K3 surface $X\in\mathcal{L}_n$ always has odd rank (see Theorem \ref{theorem: intersection Ln and Mn}); by Proposition \ref{prop:BSV} if $n$ is not a square, so if $n\neq 4$, the surface $X$ is never Hodge isogenous to the minimal resolution of its quotient. The assumption on the degree $n$ is in particular due to the following straightforward fact: \begin{lemma}\label{lemma:isometry T and T(n^2)} For any non degenerate lattice $T$ and any integer $n\in\mathbb{N}$, there exists an isometry $T\otimes \mathbb{Q}\simeq T(n^2)\otimes \mathbb{Q}$.
\end{lemma} \begin{proposition}\label{prop: Hodge isog} For any $n\in\mathbb{N}$, if $\varphi:X\dashrightarrow Y$ is an isogeny of degree $n^2$, then $X$ and $Y$ are Hodge isogenous. \end{proposition} \begin{proof} It is proven in \cite[Proposition 3.2]{BSV} that $T_X\otimes\mathbb{Q}\simeq T_Y\otimes\mathbb{Q}$ if and only if $T_Y\otimes \mathbb{Q}\simeq T_Y(n^2)\otimes \mathbb{Q}$, which is true by Lemma \ref{lemma:isometry T and T(n^2)}. Then Witt's theorem implies that the isometry $T_X\otimes\mathbb{Q}\simeq T_Y\otimes\mathbb{Q}$ extends to a Hodge isometry $H^2(X,\mathbb{Q})\simeq H^2(Y,\mathbb{Q})$. \end{proof} In Proposition \ref{prop: n^2 isogenies not quotients} we construct isogenies of degree $n^2$ between K3 surfaces; Proposition \ref{prop: Hodge isog} implies that they are necessarily Hodge isogenies. One of the interesting properties of Hodge isogenous K3 surfaces is that they have isomorphic rational motives, by \cite[Theorem 0.2]{Huybrechts}. This also holds in the case described above of a K3 surface $X$ isogenous to the minimal model $Y$ of the quotient $X/\sigma$, as shown for example in \cite[Proof of Thm 3.1]{Laterveer} following the argument of \cite{P}, but to the knowledge of the authors it is still an open question for a general isogeny. \section{The intersection $\mathcal{L}_n\cap\mathcal{M}_n$}\label{sec: intersection LnMn}\label{sec:intersection} The main result in this section is Theorem \ref{theorem: intersection Ln and Mn}, where we exhibit the maximal dimensional components of $\mathcal{L}_n\cap \mathcal{M}_n$. As a preliminary result, we describe in Proposition \ref{prop: properties of the U+Mn polarized K3} a specific family of K3 surfaces contained in $\mathcal{L}_n\cap \mathcal{M}_n$. This family is related to a special isogeny between K3 surfaces, which is induced by an isogeny between elliptic curves, see Remark \ref{rem: isogeny between elliptic curve}. \subsection{The $(U\oplus \mathbb{M}_n)$-polarized K3 surfaces} The $(U\oplus \mathbb{M}_n)$-polarized K3 surfaces have interesting geometric properties: this family is considered for $n=2$ in \cite{vGS}, and for other values of $n$ in \cite{GSprime} and \cite{GSnonprime} to find $\Omega_n$ explicitly. Here we reconsider it as an example of a family of K3 surfaces contained in $\mathcal{L}_n\cap \mathcal{M}_n$. \begin{proposition}\label{prop: properties of the U+Mn polarized K3} Let $2\leq n\leq 8$ and $\mathcal{U}_n:=\mathcal{P}(U\oplus \mathbb{M}_n)$ be the family of the $(U\oplus \mathbb{M}_n)$-polarized K3 surfaces. Then:\begin{itemize}\item $\mathcal{U}_n$ is non empty and has dimension $18-{\rm rank}(\mathbb{M}_n)$; \item if $S$ is a K3 surface such that $S\in\mathcal{U}_n$, then $S$ admits an elliptic fibration $\mathcal{E}_n:S\rightarrow\mathbb{P}^1$ with an $n$-torsion section $t$; \item $\mathcal{U}_n\subset \mathcal{L}_n\cap \mathcal{M}_n$; \item if $S\in\mathcal{U}_n$ and $\sigma_t$ is the translation by $t$ on $\mathcal{E}_n$, the minimal model of $S/\sigma_t$ is a K3 surface in $\mathcal{U}_n$.\end{itemize} \end{proposition} \proof The family $\mathcal{U}_n$ is non empty for each $n$ such that $2\leq n\leq 8$ as shown for example in \cite[Table 2]{Shim} or \cite[Table 1]{G2}. The dimension of $\mathcal{U}_n$ follows directly from the fact that the dimension of a non-empty family $\mathcal{P}(R)$ of $R$-polarized K3 surfaces (for a certain lattice $R$) is $20-{\rm rank}(R)$, and in this case $R\simeq U\oplus \mathbb{M}_n$ has rank $2+{\rm rank}(\mathbb{M}_n)$.
The family $\mathcal{U}_n$ was considered in \cite[Proposition 4.3]{G2}, where it is proved that the set of K3 surfaces admitting an elliptic fibration with a torsion section of order $n$ coincides with the set of $(U\oplus \mathbb{M}_n)$-polarized K3 surfaces. Since $\mathbb{M}_n$ is clearly primitively embedded in $U\oplus \mathbb{M}_n$, all the K3 surfaces in $\mathcal{U}_n$ are also contained in $\mathcal{M}_n$. Moreover, let $\mathcal{E}_n:S\rightarrow\mathbb{P}^1$ be an elliptic fibration on $S$ with an $n$-torsion section $t$. This allows us to consider $S$ as an elliptic curve over the function field $k(\mathbb{P}^1)$ and the presence of an $n$-torsion section is equivalent to the presence of an $n$-torsion rational point on this elliptic curve. So the translation by $t$ is well defined and it induces an automorphism of order $n$ on $S$. This is a symplectic automorphism (it is the identity on the base of the fibration and acts on the smooth fibers preserving their periods). We denote this symplectic automorphism by $\sigma_t$. Since $S\in\mathcal{U}_n$ admits a symplectic automorphism of order $n$, $\mathcal{U}_n\subset\mathcal{L}_n$ and thus $\mathcal{U}_n\subset\mathcal{L}_n\cap \mathcal{M}_n$. In \cite[Proposition 4.3]{G2} it is also proved that the quotient of an elliptic fibration with base $\mathbb{P}^1$ by the translation by a torsion section is another elliptic fibration over $\mathbb{P}^1$ with an $n$-torsion section. Thus $S/\sigma_t$ admits a smooth minimal model with an elliptic fibration with an $n$-torsion section and this minimal model is a K3 surface (since $\sigma_t$ is a symplectic automorphism). Thus the minimal model of $S/\sigma_t$ belongs to the family $\mathcal{U}_n$.\endproof \begin{corollary}\label{cor:non-empty intersection of Ln and Mn} For every $n$ such that $2\leq n\leq 8$, $\mathcal{L}_n\cap\mathcal{M}_n$ is non empty. \end{corollary} \proof The intersection $\mathcal{L}_n\cap\mathcal{M}_n$ contains at least the non empty family $\mathcal{U}_n$.\endproof \begin{remark}\label{rem: isogeny between elliptic curve}{\rm Since both $\mathcal{L}_n$ and $\mathcal{M}_n$ are the union of $\left(19-{\rm rank}(\mathbb{M}_n)\right)$-dimensional families of polarized K3 surfaces, the intersection between these sets is at most $\left(19-{\rm rank}(\mathbb{M}_n)\right)$-dimensional. Proposition \ref{prop: properties of the U+Mn polarized K3} provides an intersection in a codimension 1 subfamily. In Theorem \ref{theorem: intersection Ln and Mn} we will see that one can obtain a larger intersection.} \end{remark} \begin{remark}{\rm Since each $(U\oplus \mathbb{M}_n)$-polarized K3 surface has an elliptic fibration, it can be interpreted as an elliptic curve over the field of rational functions in one variable. The symplectic automorphism which induces the isogeny between the two K3 surfaces in $\mathcal{U}_n$ as in Proposition \ref{prop: properties of the U+Mn polarized K3} corresponds to an isogeny of the associated elliptic curves over the field of rational functions.} \end{remark} \subsection{Maximal dimensional components of $\mathcal{L}_n\cap\mathcal{M}_n$} In this section we prove that there are components of $\mathcal{L}_n$ completely contained in $\mathcal{M}_n$ and vice versa. The proof is lattice theoretic: in order to obtain this result, we need some extra information on the lattices $\mathbb{M}_n$ and $\Omega_n$.
Both these lattices are primitively embedded in the N\'eron--Severi group of a $(U\oplus \mathbb{M}_n)$-polarized K3 surface, so we now use the K3 surfaces in the family $\mathcal{U}_n$ to compare the discriminant forms of $\mathbb{M}_n$ and $\Omega_n$. \begin{proposition}\label{prop: discriminant Omega and Mn} Let $A_{\Omega_n}$ (resp. $A_{\mathbb{M}_n}$) be the discriminant group of $\Omega_n$ (resp. $\mathbb{M}_n$) and $q_{\Omega_n}$ (resp. $q_{\mathbb{M}_n}$) its discriminant form. Then $A_{\Omega_n}=\left(\mathbb{Z}/n\mathbb{Z}\right)^{\oplus 2}\oplus A_{\mathbb{M}_n}$ and $q_{\Omega_n}=u(n)\oplus q_{\mathbb{M}_n}$, where $u(n)$ is the discriminant form of the lattice $U(n)$. \end{proposition} \proof Nikulin proved that $A_{\Omega_n}=\left(\mathbb{Z}/n\mathbb{Z}\right)^{\oplus 2}\oplus A_{\mathbb{M}_n}$ in \cite[Lemma 10.2]{NikSympl}. By Proposition \ref{prop: properties of the U+Mn polarized K3}, if ${\rm NS}(S)\simeq U\oplus\mathbb{M}_n$, then $S$ admits an elliptic fibration $\mathcal{E}_n:S\rightarrow\mathbb{P}^1$ with an $n$-torsion section $t$ and thus a symplectic automorphism $\sigma_t$, which is the translation by $t$. Let us denote by $F$ the class in ${\rm NS}(S)$ of the fiber of the elliptic fibration $\mathcal{E}_n$, by $O$ the class of the zero section, by $t$ the class of the $n$-torsion section, by $t_i$, $i=2,\ldots, n-1$, the class of the section corresponding to the sum of $t$ with itself $i$ times in the Mordell--Weil group. By definition $\sigma_t$ preserves the classes $F$ and $O+t+\sum_{i=2}^{n-1}t_i$. So $U(n)\simeq \langle F,O+t+\sum_{i=2}^{n-1}t_i\rangle\subset {\rm NS}(S)^{\sigma_t}$ and thus $\langle F,O+t+\sum_{i=2}^{n-1}t_i\rangle^{\perp}\supset \left({\rm NS}(S)^{\sigma_t}\right)^{\perp}.$ Since ${\rm rank}(\Omega_n)={\rm rank}(\mathbb{M}_n)$, ${\rm rank}(\Omega_n)=\rho(S)-2$. So $$\langle F,O+t+\sum_{i=2}^{n-1}t_i\rangle^{\perp}\supset \left({\rm NS}(S)^{\sigma_t}\right)^{\perp}\simeq\Omega_n.$$ Denoting by $T_S$ the transcendental lattice of $S$, it follows from ${\rm NS}(S)\simeq U\oplus \mathbb{M}_n$ that $q_{T_S}=-q_{\mathbb{M}_n}$. By $\left({\rm NS}(S)^{\sigma_t}\right)^{\perp}\simeq\Omega_n$ one obtains that the orthogonal of $\Omega_n$ in $H^2(S,\mathbb{Z})$ is an overlattice of finite index (possibly 1) of $U(n)\oplus T_S$. Since $A_{\Omega_n}=\left(\mathbb{Z}/n\mathbb{Z}\right)^2\oplus A_{\mathbb{M}_n}$, the orthogonal of $\Omega_n$ in $H^2(S,\mathbb{Z})$ is $U(n)\oplus T_S$. So $q_{\Omega_n}\simeq -q_{U(n)\oplus T_S}=u(n)\oplus q_{\mathbb{M}_n}$.\endproof \begin{lemma}\label{lemma:discriminant form of overlattice} Let $F$ be a finite abelian group with quadratic form $q_F$ and $m\geq 2$. Let $V=\langle 2d\rangle\oplus W$ be an indefinite even non-degenerate lattice with discriminant group $A_V=(\mathbb{Z}/2d\mathbb{Z})\oplus (\mathbb{Z}/m\mathbb{Z})^{\oplus 2}\oplus F$, with discriminant form $q_{A_V}=(\frac{1}{2d})\oplus u(m)\oplus q_F $. If $d\equiv 0\mod 2m$, then $V$ admits an overlattice $Z$ of index $m$ with $A_Z=\mathbb{Z}/2d\mathbb{Z}\oplus F$ and $q_{A_Z}=(\frac{1}{2d})\oplus q_F $. Moreover, $Z$ contains primitively $W$. \end{lemma} \proof By assumption, there exists some integer $k$ such that $2d=4km$.
Let $h$ be a generator of the $\mathbb{Z}/2d\mathbb{Z}$ summand of $A_V$ such that $h^2=\frac{1}{2d}$, and let $e_1,e_2$ be a basis of the $(\mathbb{Z}/m\mathbb{Z})^{\oplus 2}$ summand in $A_V$ such that $e_1^2=e_2^2=0$ and $e_1e_2=-\frac{1}{m}$. We define $\epsilon:=e_1+2ke_2$, so that $\epsilon^2=-\frac{4k}{m}$. Then the subgroup $H:=\langle (4k)h+\epsilon\rangle$ is isotropic and its orthogonal inside $A_V$ is $H^{\perp}=\langle h-e_2,\nu\rangle\oplus F$, where $\nu:=e_1-2ke_2$. It follows from \cite[Proposition 1.4.1 and Corollary 1.10.2]{NikulinIntQuadForms} that there exists an even overlattice $Z$ of $V$ of index $m$ with $A_Z\cong H^{\perp}/H= \langle h-e_2\rangle\oplus F\cong \mathbb{Z}/2d\mathbb{Z}\oplus F$, and $q_{A_Z}$ is induced on the quotient $H^{\perp}/H$ by $(q_{A_V})_{|H^{\perp}}$, so it is exactly $(\frac{1}{2d})\oplus q_F $. Finally, we observe that the intersection of $H$ with $A_W$ inside $A_V$ is trivial, hence $W$ is a primitive sublattice of $Z$.\endproof \begin{corollary}\label{cor: construction of Ldn'} Let $2\leq n\leq 8$, $d\in\mathbb{N}$, $d\geq 1$ and $d\equiv 0\mod 2n$. Then $\langle 2d\rangle\oplus\Omega_n$ admits an overlattice of index $n$ whose discriminant form is $(\frac{1}{2d})\oplus q_{\mathbb{M}_n}$; this overlattice contains $\Omega_n$ primitively. \end{corollary} \proof It suffices to apply Lemma \ref{lemma:discriminant form of overlattice} to the lattice $V=\langle 2d\rangle\oplus\Omega_n$ and to recall that $q_{\Omega_n}=u(n)\oplus q_{\mathbb{M}_n}$, by Proposition \ref{prop: discriminant Omega and Mn}.\endproof \begin{definition} For each $2\leq n\leq 8$ and each $d\in \mathbb{N}$, $d\geq 1$, we denote by $L_{d,n}$ the lattice $\langle 2d\rangle\oplus \Omega_n$. For each $2\leq n\leq 8$ and each $d\in \mathbb{N}$, $d\geq 1$, and $d\equiv 0\mod 2n$, we denote by $L_{d,n}'$ the overlattice of index $n$ of $L_{d,n}$ constructed in Corollary \ref{cor: construction of Ldn'}. For each $2\leq n\leq 8$ and each $e\in \mathbb{N}$, $e\geq 1$, we denote by $M_{e,n}$ the lattice $\langle 2e\rangle\oplus \mathbb{M}_n$. \end{definition} \begin{theorem}\label{theorem: intersection Ln and Mn} Let $d\in\mathbb{N}$, $d\geq 1$ and $d\equiv 0\mod 2n$. The lattice $L_{d,n}'$ is unique in its genus and $$L_{d,n}'\simeq M_{d,n}.$$ The family $\mathcal{P}(L_{d,n}')$ is $\left(19-{\rm rank}(\Omega_n)\right)$-dimensional and is a subset of $\mathcal{L}_n\cap \mathcal{M}_n$, i.e. each K3 surface in this family admits a symplectic automorphism of order $n$ and is $n:1$ cyclically covered by a K3 surface. \end{theorem} \proof By Corollary \ref{cor: construction of Ldn'}, the lattice $L_{d,n}'$ has the same discriminant group and form as the lattice $M_{d,n}$. By \cite[Proposition 7.1]{NikSympl}, the length and the rank of the lattice $\mathbb{M}_n$ are the following: $$\begin{array}{|c|c|c|c|c|c|c|c|} \hline n&2&3&4&5&6&7&8\\ \hline l(\mathbb{M}_n)&6&4&4&2&2&1&2\\ \hline {\rm rank}(\mathbb{M}_n)&8&12&14&16&16&18&18\\ \hline\end{array}$$ where the length $l(R)$ of a lattice $R$ is the minimal number of generators of the discriminant group $R^{\vee}/R$.
Since ${\rm rank}(M_{d,n})=1+{\rm rank}(\mathbb{M}_n)$ and, if $d\equiv 0\mod 2n$, $l(\langle 2d\rangle\oplus \mathbb{M}_n)=1+l(\mathbb{M}_n)$, for every admissible $n$ and $d\equiv 0\mod 2n$, ${\rm rank}(M_{d,n})\geq 2+l(M_{d,n})$, so by \cite[Corollary 1.13.3]{NikulinIntQuadForms}, there is a unique even hyperbolic lattice with the same rank, length, discriminant group and form as $M_{d,n}$. Since $L_{d,n}'$ has all the prescribed properties, we conclude that $L_{d,n}'\simeq M_{d,n}$. Moreover, by \cite[Theorem 1.14.4]{NikulinIntQuadForms}, if $n< 7$ the lattice $L_{d,n}'\simeq M_{d,n}$ admits a unique, up to isometry, primitive embedding in $\Lambda_{K3}$, and thus determines a $(19-{\rm rank}(\Omega_n))$-dimensional family of K3 surfaces. If $n=7,8$, any primitive embedding of $L_{d,n}'\simeq M_{d,n}$ in the unimodular lattice $\Lambda_{K3}$, which exists by results in \cite{GSprime, GSnonprime}, identifies the same genus of the orthogonal complement $T_{d,n}$ of rank three and signature $(2,1)$: we get respectively that $A_{T_{d,7}}=\mathbb{Z}/7\mathbb{Z}\oplus \mathbb{Z}/2d\mathbb{Z}$ and $A_{T_{d,8}}=\mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/4\mathbb{Z}\oplus \mathbb{Z}/2d\mathbb{Z}$ with quadratic forms $q_{T_{d,7}}=\left(-\frac{4}{7}\right)\oplus \left(-\frac{1}{2d}\right)$ and $q_{T_{d,8}}=\left(\frac{1}{2}\right)\oplus \left(\frac{1}{4}\right)\oplus\left(-\frac{1}{2d}\right)$. It follows from \cite[Proposition 1.15.1]{NikulinIntQuadForms} that the primitive embedding of $L_{d,n}'\simeq M_{d,n}$ in $\Lambda_{K3}$ is unique, up to isometry, if and only if $T_{d,n}$ is unique in its genus and the map $O(T_{d,n})\rightarrow O(q_{T_{d,n}})$ is surjective. By \cite[Theorem VIII.7.5]{MirMorr}, these two conditions hold in particular if the discriminant quadratic form $q_{T_{d,n}}$ is $p$-regular for all prime numbers $p\neq s$ and it is $s$-semiregular for a single prime number $s$. The precise (and quite technical) definition of $p$-regular and $p$-semiregular form can be found in \cite[Definition VIII.7.4]{MirMorr}. An easy application of \cite[Lemma VIII.7.6 and VIII.7.7]{MirMorr} implies that: \begin{itemize} \item $q_{T_{d,7}}$ is $p$-regular if $p\neq 7$ and it is $7$-semiregular; \item $q_{T_{d,8}}$ is $p$-regular if $p\neq 2$ and it is $2$-semiregular. \end{itemize} Hence, also for $n=7,8$, $T_{d,n}$ is unique in its genus and the primitive embedding of $L_{d,n}'\simeq M_{d,n}$ in $\Lambda_{K3}$ is unique up to isometry, and thus determines a $(19-{\rm rank}(\Omega_n))$-dimensional family of K3 surfaces. Each K3 surface which is $M_{d,n}$-polarized is contained in $\mathcal{L}_n\cap \mathcal{M}_n$ because there are primitive embeddings both of $\Omega_n$ and of $\mathbb{M}_n$ in its N\'eron--Severi group. \endproof \begin{proposition}\label{prop: non intersections} Let $2\leq n\leq 8$ and $d\in\mathbb{N}$, $d\geq 1$. The lattice $L_{d,n}$ is not isometric to any overlattice of finite index (possibly 1) of $M_{e,n}$, for any $e$. In particular if $X$ is a K3 surface such that ${\rm NS}(X)\simeq L_{d,n}$, then $X$ does not admit a cyclic $n:1$ cover by a K3 surface and the families of the $L_{d,n}$-polarized K3 surfaces are not (totally) contained in $\mathcal{M}_n$. \end{proposition} \proof By Proposition \ref{prop: discriminant Omega and Mn}, $l(\Omega_{n})=2+l(\mathbb{M}_n)$. Hence $l(L_{d,n})\geq l(\Omega_n)=2+l(\mathbb{M}_n)> l(M_{e,n})$.
Since any overlattice of $M_{e,n}$ has at most the length of $M_{e,n}$, the lattices $L_{d,n}$ cannot be isometric to any overlattice of the lattice $M_{e,n}$. \endproof In conclusion we proved that there are components of $\mathcal{L}_n$ (and of $\mathcal{M}_n$) which are completely contained in $\mathcal{L}_n\cap\mathcal{M}_n$, but there are also components of $\mathcal{L}_n$ which are not contained in $\mathcal{M}_n$, and thus not in $\mathcal{L}_n\cap\mathcal{M}_n$. It is also true that there are components of $\mathcal{M}_n$ which are not totally contained in $\mathcal{L}_n$ (see e.g. Theorem \ref{theorem: intersection L2 and M2} for the case $n=2$). In the following proposition we construct an $n^2:1$ isogeny between two K3 surfaces by using a third K3 surface, which is $L_{d,n}'$-polarized, and we prove that generically this $n^2:1$ isogeny is not just the quotient by an automorphism group. \begin{proposition}\label{prop: n^2 isogenies not quotients} Let $Z$ be a K3 surface such that ${\rm NS}(Z)=L_{d,n}'$, let $X$ be the K3 surface which is an $n:1$ cyclic cover of $Z$ and let $Y$ be (the minimal model of) the quotient of $Z$ by a symplectic automorphism of order $n$. Then there is an $n^2:1$ isogeny between $X$ and $Y$ but there is no finite group $G$ of automorphisms on $X$ such that $Y$ is birational to $X/G$.\end{proposition} \proof By Theorem \ref{theorem: intersection Ln and Mn} the K3 surfaces $Z$ which are $L_{d,n}'$-polarized are $n:1$ isogenous to two K3 surfaces, $X$ and $Y$, respectively with the two $n:1$ isogenies $X\dashrightarrow Z$ and $Z\dashrightarrow Y$. The composition of these two isogenies is an $n^2:1$ isogeny $X\dashrightarrow Y$. If there exists a group of automorphisms $G$ as required, it has to be a group of symplectic automorphisms (otherwise the quotient $X/G$ would not be birational to a K3 surface). So $X$ should admit a group of symplectic automorphisms of order $n^2$. If $X$ admits a group $G$ of symplectic automorphisms, then $\left({\rm NS}(X)^G\right)^{\perp}$ is a lattice (analogous to $\Omega_n$) which is unique in most of the cases. Its rank depends only on $G$ and it is known for every admissible $G$, see \cite{Hashimoto}. In particular, if $2\leq n\leq 8$, for every group $G_{n^2}$ of order $n^2$ acting symplectically on a K3 surface, the rank of the lattice is ${\rm rank}\left(\left({\rm NS}(X)^{G_{n^2}}\right)^{\perp}\right)>{\rm rank}(\Omega_n)$. Hence, if a K3 surface $X$ admits $G_{n^2}$ as group of symplectic automorphisms, $\rho(X)>1+{\rm rank}(\Omega_n)=\rho(Z)$. But $X$ and $Z$ are isogenous, hence $\rho(X)=\rho(Z)$ and thus $X$ cannot admit a group of symplectic automorphisms of order $n^2$.\endproof \begin{remark}\label{rem: Galois cover}{\rm In Proposition \ref{prop: n^2 isogenies not quotients} we proved that the $n^2:1$ isogeny $X\dashrightarrow Y$ is not induced by a quotient map, so that the rational map $X\dashrightarrow Y$ is not a Galois cover. Let us denote by $V$ its Galois closure; hence $V$ is a surface such that both $X$ and $Y$ are birational to Galois quotients of $V$. Denoting by $G$ the Galois group of the cover $V\dashrightarrow Y$ and by $H$ the subgroup of $G$ which is the Galois group of $V\dashrightarrow X$, we see that $H$ is not a normal subgroup of $G$: otherwise the rational map $X\dashrightarrow Y$ would be a Galois cover with Galois group $G/H$. The Kodaira dimension of the surface $V$ is non negative (since $V$ covers K3 surfaces), but moreover $V$ cannot be a K3 surface.
This can be proved by applying the same argument as in the proof of Proposition \ref{prop: n^2 isogenies not quotients}: if $V$ were a K3 surface, it would admit $G$ as a group of symplectic automorphisms, but its Picard number is not big enough. We give more details on the construction of $V$ and $G$ in the case $n=2$, see Section \ref{subsec: Galois closure 2^2:1 cover}} \end{remark} \section{Involutions}\label{sec:involutions} In this section we restrict our attention to the case of the symplectic involutions (i.e. $n=2$). In this case several more precise and deep results are known about the relations between K3 surfaces admitting a symplectic involution and K3 surfaces which are their quotients, hence we can improve the general results of the previous section and we can describe explicit examples. In particular we obtain: a complete description of the maximal dimensional components of the intersection $\mathcal{L}_2\cap \mathcal{M}_2$, in Theorem \ref{theorem: intersection L2 and M2}; infinite families of K3 surfaces such that for each K3 surface in a family there is another one in another family which is isogenous to it, in Corollary \ref{cor: infinite sets of families}; geometric examples in Sections \ref{sec: X1 and Y2} and \ref{subsec: U+N e U+E8}. \subsection{Preliminary results on symplectic involutions and Nikulin surfaces} For historical reasons, we refer to the K3 surfaces in $\mathcal{M}_2$ (i.e. the K3 surfaces which are cyclically $2:1$ covered by a K3 surface) as the Nikulin surfaces and to the lattice $\mathbb{M}_2$ as the Nikulin lattice, denoted by $N$ ($:=\mathbb{M}_2$). So we have \begin{definition} A Nikulin surface $Y$ is a K3 surface which is the minimal resolution of the quotient of a K3 surface $X$ by a symplectic involution $\sigma$. The minimal primitive sublattice of ${\rm NS}(Y)$ containing the curves arising from the desingularization of $X/\sigma$ is denoted by $N$ and it is called the Nikulin lattice. \end{definition} If $\sigma$ is a symplectic involution on a K3 surface $X$, then the fixed locus of $\sigma$ on $X$ consists of 8 isolated points. The quotient surface $X/\sigma$ has 8 singularities of type $A_1$, so its minimal resolution $Y$ contains 8 disjoint rational curves, which are the exceptional divisors over the singular points of $X/\sigma$. Denoting by $N_i$, $i=1,\ldots, 8$, their classes in the N\'eron--Severi group, the class $\left(\sum_{i=1}^8N_i\right)/2$ is also contained in the N\'eron--Severi group. Indeed the union of the eight disjoint rational curves is the branch locus of the double cover $\widetilde{X}\rightarrow Y$, where $\widetilde{X}$ is the blow up of $X$ in the 8 points fixed by $\sigma$. We will say that a set of rational curves is an even set if the sum of their classes divided by 2 is contained in the N\'eron--Severi group and that a set of nodes is an even set if the curves resolving these nodes form an even set. \begin{proposition}{\rm (\cite[Section 6]{NikSympl})} The Nikulin lattice is an even negative definite lattice of rank 8 and its discriminant form is the same as that of $U(2)^3$. It contains 16 classes with self-intersection $-2$, i.e. $\pm N_i$, $i=1,\ldots, 8$. A $\mathbb{Z}$-basis of $N$ is given by $\left(\sum_{i=1}^8N_i\right)/2$, $N_i$, $i=1,\ldots, 7$.\end{proposition} As in the previous section, we will denote by $M_{e,2}$ the lattice $\langle 2e\rangle\oplus N$.
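As a quick numerical sanity check on the proposition above (an illustration of ours, not part of the original argument), one can write down the Gram matrix of $N$ in the stated $\mathbb{Z}$-basis $\left(\sum_{i=1}^8N_i\right)/2, N_1,\ldots, N_7$ and verify that it is negative definite with determinant $64$, which is also the absolute value of the determinant of $U(2)^3$:
\begin{verbatim}
import numpy as np

# Gram matrix of the Nikulin lattice N in the basis {(N_1+...+N_8)/2, N_1,...,N_7},
# where N_i^2 = -2 and N_i.N_j = 0 for i != j.
G = -2.0 * np.eye(8)
G[0, 0] = -4.0               # ((N_1+...+N_8)/2)^2 = (1/4)*8*(-2) = -4
G[0, 1:] = -1.0              # (N_1+...+N_8)/2 . N_i = -1
G[1:, 0] = -1.0

print(np.all(np.linalg.eigvalsh(G) < 0))   # negative definite: True
print(round(np.linalg.det(G)))             # 64, matching |det U(2)^3| = 4^3
\end{verbatim}
The determinant also agrees with the index-two overlattice relation $\det N=\det\langle -2\rangle^{\oplus 8}/2^2=256/4$; of course this only checks the order of the discriminant group, not the finer statement about the discriminant form.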
\begin{proposition}\label{prop: NS(X) Nikulin surfaces}$(a)$ A K3 surface $Y$ is a Nikulin surface if and only if the lattice $N$ is primitively embedded in ${\rm NS}(Y)$. $(b)$ The minimal Picard number of a Nikulin surface is 9. $(c)$ There exists an even overlattice of index two of $M_{e,2}$ in which $N$ is primitively embedded if and only if $e$ is even. In this case, this lattice is unique up to isometry and denoted by $M_{e,2}'$. $(d)$ If $Y$ is a Nikulin surface with Picard number 9, then ${\rm NS}(Y)$ is isometric either to $M_{e,2}$ or to $M_{e,2}'$ for a certain $e$. \end{proposition} \proof Point $(a)$ is Theorem \ref{theorem: Y has cyclic cover iff Mn is in NS(Y)}, point $(b)$ is proved in Corollary \ref{corollary: NS of K3 in Ln and Mn} in the case $n=2$. Point $(c)$ is proved in \cite[Proposition 2.2]{GS} and point $(d)$ in \cite[Proposition 2.1]{GS}.\endproof In the case $n=2$, the lattice $\Omega_2$ is known to be isometric to $E_8(-2)$. As in the previous section, we will denote by $L_{d,2}$ the lattice $\langle 2d\rangle\oplus E_8(-2)$ and by $L_{d,2}'$ the overlattice of index two of $L_{d,2}$ such that $L_{d,2}'$ is even and $E_8(-2)$ is primitively embedded in $L_{d,2}'$. \begin{proposition}\label{prop: NS(Y) con inv sympl} $(a)$ A K3 surface $X$ admits a symplectic involution $\sigma$ if and only if the lattice $E_8(-2)$ is primitively embedded in ${\rm NS}(X)$. $(b)$ If $X$ admits a symplectic involution, $\rho(X)\geq 9$. $(c)$ There exists an even overlattice of index two of $L_{d,2}$ in which $E_8(-2)$ is primitively embedded if and only if $d$ is even. In this case, this lattice is unique up to isometry and is $L_{d,2}'$. $(d)$ If $X$ is a K3 surface admitting a symplectic involution and with Picard number 9, then ${\rm NS}(X)$ is isometric either to $L_{d,2}$ or to $L_{2d,2}'$ for a certain $d$. \end{proposition} \proof The Proposition follows directly from \cite[Propositions 2.2 and 2.3]{vGS} (and points $(a)$, $(b)$ and $(c)$ were already proved in the more general setting of automorphisms of order $n$ in the previous section).\endproof The main result which is known for involutions and is not yet stated in the more general case of symplectic automorphisms of order $n$ is the explicit relation between the N\'eron--Severi group of a K3 surface which admits a symplectic involution and the N\'eron--Severi group of the K3 surface which is its quotient. \begin{proposition}\label{prop: NS(Y) determines NS(X)}{\rm (\cite[Corollary 2.2]{GS})} Let $X$ be a K3 surface with a symplectic involution $\sigma$ and $Y$ be the minimal resolution of $X/\sigma$. Then: \begin{itemize} \item ${\rm NS}(X)\simeq L_{e,2}$ if and only if ${\rm NS}(Y)\simeq M_{2e,2}'$; \item ${\rm NS}(X)\simeq L_{2e,2}'$ if and only if ${\rm NS}(Y)\simeq M_{e,2}$.\end{itemize} \end{proposition} \subsection{The intersection $\mathcal{L}_2\cap\mathcal{M}_2$ and infinite towers of isogenous K3 surfaces} Since we know the structure of all the possible N\'eron--Severi groups of Nikulin surfaces of minimal Picard number (by Proposition \ref{prop: NS(X) Nikulin surfaces}) and all the possible N\'eron--Severi groups of K3 surfaces of minimal Picard number admitting a symplectic involution (by Proposition \ref{prop: NS(Y) con inv sympl}), we are able to give the following refinement of Theorem \ref{theorem: intersection Ln and Mn} and of Proposition \ref{prop: non intersections}.
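The proof of the theorem below rests on comparing the lengths of the candidate N\'eron--Severi lattices. As a small illustrative computation (ours; it only uses $l(N)=6$ from the table in the proof of Theorem \ref{theorem: intersection Ln and Mn}, $l(E_8(-2))=l(\Omega_2)=2+l(\mathbb{M}_2)=8$ from Proposition \ref{prop: discriminant Omega and Mn}, and the fact, used in the proof, that passing to the index-two overlattices $M_{2e,2}'$ and $L_{2d,2}'$ drops the length by two):
\begin{verbatim}
# Length bookkeeping for the four candidate Neron-Severi lattices (illustrative only).
l_N, l_E8m2 = 6, 8                       # l(N) and l(E_8(-2)) = l(Omega_2)
lengths = {
    "M_{e,2}":   1 + l_N,                # <2e> + N             -> 7
    "M'_{2e,2}": 1 + l_N - 2,            # index-2 overlattice  -> 5
    "L_{d,2}":   1 + l_E8m2,             # <2d> + E_8(-2)       -> 9
    "L'_{2d,2}": 1 + l_E8m2 - 2,         # index-2 overlattice  -> 7
}
for name, l in lengths.items():
    print(name, l)
# Only M_{e,2} and L'_{2d,2} share the same length (7), so this is the only
# pair that can possibly be isometric, as exploited in the proof below.
\end{verbatim}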
\begin{theorem}\label{theorem: intersection L2 and M2} A Nikulin surface $Y$ such that $\rho(Y)=9$ admits a symplectic involution if and only if $\mathbb{N}S(Y)\simeq M_{2d,2}(\simeq L_{2d,2}')$. A K3 surface $X$ admitting a symplectic involution such that $\rho(X)=9$ is a Nikulin surface if and only if $\mathbb{N}S(X)\simeq L_{2d,2}'(\simeq M_{2d,2})$. So $$\mathcal{L}_2\cap\mathcal{M}_2\supset\bigcup_{d\in \mathbb{N}_{>0},\ d\equiv0 (2)}\mathcal{P}(M_{2d,2}).$$ \end{theorem} \proof By Theorem \ref{theorem: intersection Ln and Mn} $M_{2d,2}\simeq L_{2d,2}'$ and thus if $\mathbb{N}S(Y)\simeq M_{2d,2}$, then $Y$ admits a symplectic involution. Similarly if $\mathbb{N}S(X)\simeq L_{2d,2}'$, then $X$ is a Nikulin surface. It remains to prove that if a K3 surface is in $\mathcal{L}_2\cap\mathcal{M}_2$ and its Picard number is 9, then its N\'eron--Severi group cannot be isometric to $M_{e,2}'$, to $M_{e,2}$ for an odd $e$ or to $L_{f,2}$ with $f\in\mathbb{N}_{>0}$. The argument is similar to the one of Proposition \ref{prop: non intersections}. By Proposition \ref{prop: NS(X) Nikulin surfaces}, if $Y$ is a Nikulin surface, its N\'eron--Severi group is isometric either to $M_{e,2}$ or to $M_{2e,2}'$. By Proposition \ref{prop: NS(Y) con inv sympl}, if $X$ is a K3 surface admitting a symplectic involution, its N\'eron--Severi group is isometric either to $L_{d,2}$ or to $L_{2d,2}'$. So if a K3 surface has both properties (i.e. it is in $\mathcal{L}_2\cap \mathcal{M}_2$ and has Picard number 9), its N\'eron--Severi group is isometric both to a lattice in $\{M_{e,2}, M_{2e,2}'\}$ and to a lattice in $\{L_{d,2}, L_{2d,2}'\}$. Hence we are looking for pairs of lattices, one in $\{M_{e,2}, M_{2e,2}'\}$ and one in $\{L_{d,2}, L_{2d,2}'\}$, which are isometric. If two lattices are isometric, they have the same length. We observe that $l(M_{e,2})=1+l(N)=7$, $l(M_{2e,2}')=1+l(N)-2=5$, $l(L_{d,2})=1+l(\Omega_2)=9$, $l(L_{2d,2}')=1+l(\Omega_2)-2=7$. In particular, the unique possible pair of lattices as required is given by $(M_{e,2}, L_{2d,2}')$. Since isometric lattices have the same discriminant, one obtains that $e=2d$.\endproof \begin{corollary}\label{cor: related Nikulin surfaces} Two Nikulin surfaces $Y$ and $\hat{Y}$ with Picard number 9 are isogenous by a chain of quotients by involutions if and only if one of the following equivalent conditions holds:\\ $(i)$ $\mathbb{N}S(Y)\simeq M_{d,2}$, $\mathbb{N}S(\hat{Y})\simeq M_{e,2}$, and there exists $m\in \mathbb{N}_{>0}$ such that either $d=2^me$ or $e=2^md$;\\ $(ii)$ $T_Y\simeq U\oplus U\oplus N\oplus \langle -2d\rangle$, $T_{\hat{Y}}\simeq U\oplus U\oplus N\oplus \langle -2e\rangle$ and there exists $m\in \mathbb{N}_{>0}$ such that either $d=2^me$ or $e=2^md$. \end{corollary} \proof We can assume that $\hat{Y}$ is obtained by iterated quotients from $Y$. Then $Y$ admits a symplectic involution $\sigma$ and, by Theorem \ref{theorem: intersection L2 and M2}, there exists an even $d$ such that $\mathbb{N}S(Y)\simeq M_{d,2}\simeq L_{d,2}'$. So $Y$ is the cover of a K3 surface $Z$ with N\'eron--Severi group $M_{d/2,2}$ (by Proposition \ref{prop: NS(Y) determines NS(X)}). If $d/2$ is odd, then the process stops and $\hat{Y}$ is necessarily $Z$; otherwise, $\mathbb{N}S(Z)\simeq M_{d/2,2}\simeq L_{d/2,2}'$ and $Z$ is the cover of a K3 surface $Z'$ with N\'eron--Severi group $M_{d/4,2}$. Iterating, if possible, this process $m$ times, one obtains Nikulin surfaces with N\'eron--Severi group isometric to $M_{d/2^{m},2}$.
In particular, one never obtains lattices isometric to $M_{e,2}'$ (for a certain $e$) as N\'eron--Severi groups of a Nikulin surface obtained by iterated quotients from $Y$. Vice versa, if $\mathbb{N}S(\hat{Y})\simeq M_{e,2}$ for a certain $e$, then $\hat{Y}$ is covered by a K3 surface $W$ with $\mathbb{N}S(W)\simeq L_{2e,2}'\simeq M_{2e,2}$ (by Proposition \ref{prop: NS(Y) determines NS(X)}). So $W$ is a Nikulin surface, $2:1$ covered by a K3 surface with N\'eron--Severi group isometric to $L_{4e,2}'\simeq M_{4e,2}$. Reiterating this process $m$ times one obtains that $\hat{Y}$ is isogenous to a Nikulin surface whose N\'eron--Severi lattice is isometric to $M_{h,2}$ with $h=2^me$. The equivalent statement for the transcendental lattice follows from the fact that if the N\'eron--Severi group of a K3 surface is isometric to $M_{d,2}$, then its transcendental lattice is isometric to $U\oplus U\oplus N\oplus\langle -2d\rangle$ (since the discriminant form of the latter is minus the discriminant form of $M_{d,2}$, and in this case the transcendental lattice is uniquely determined by its genus).\endproof We have therefore determined an infinite number of infinite series of Nikulin surfaces of Picard number 9 related by iterated quotients by symplectic involutions. More precisely we proved the following. \begin{corollary}\label{cor: infinite sets of families} For every $d\in\mathbb{N}$, if $\mathbb{N}S(Y)\simeq M_{d,2}$ then there exists an infinite number of K3 surfaces $Y_m$ isogenous to $Y$. In particular for each $m$ there exists at least one K3 surface $Y_m$ with an isogeny of degree $2^m$ to $Y$ whose N\'eron--Severi group is isometric to $\mathbb{N}S(Y_m)=M_{2^md,2}$. The transcendental lattice of $Y$ is $T_Y\simeq U\oplus U\oplus N\oplus \langle -2d\rangle$ and for each $m$ the one of $Y_m$ is $T_{Y_m}\simeq U\oplus U\oplus N\oplus \langle -2^{m+1}d\rangle$. \end{corollary} \begin{remark}{\rm The $M_{2^{m+2}d,2}$-polarized K3 surfaces can be interpreted as moduli spaces of twisted sheaves on $M_{2^{m}d,2}$-polarized K3 surfaces. Let $e_1,e_2$ be a standard basis of the first copy of $U$ inside the K3 lattice $\Lambda_{K3}$ and choose a primitive embedding of $M_{2^{m+2}d,2}$ in $\Lambda_{K3}$ so that a generator of $\langle 2^{m+1}d\rangle$ is $e_1+2^mde_2$ and $N$ is embedded in $U^\perp$. Given $S\in\mathcal{P}(M_{2^{m}d,2})$ generic, the transcendental lattice is $T_S=U^{\oplus 2}\oplus N\oplus\langle -2^{m+1}d\rangle$, and with our previous choice it is easy to see that $\langle -2^{m+1}d\rangle$ is generated by $t:=e_1-2^mde_2$. The $B$-field $B=\frac{e_2}{2}\in H^2(S,\mathbb{Q})$ is a lift for the Brauer class $\beta:T_S\rightarrow \mathbb{Z}/2\mathbb{Z}$ given by $v\mapsto (v,2B)$. It is easy to see that $T(S,B)\cong\ker\beta=U^{\oplus 2}\oplus N\oplus \langle -2^{m+3}d\rangle\subset H^*(S,\mathbb{Z})$, where the last summand is spanned by $(0,2t,1)$. Moreover, the orthogonal of $T(S,B)$ inside the Mukai lattice $H^*(S,\mathbb{Z})$ is the generalized Picard group $\mbox{Pic}(S,B)$, which is the sublattice spanned by $f_1:=(0,0,1)$, $f_2:=(2,e_2,0)$, $f_3:=(0,e_1+2^mde_2,0)$ and $(0,b_i,0)$ with $b_1,\cdots,b_8\in M_{2^{m+2}d,2}$ a basis of the lattice $N$, and has quadratic form \[ \left(\begin{array}{ccc} 0&2&0\\ 2&0&1\\ 0&1&2^{m+1}d \end{array}\right)\oplus N. \] The isotropic element $v:=2^mdf_2-f_3$ now satisfies $((\mathbb{Z} v)^\perp\cap\mbox{Pic}(S,B))/\mathbb{Z} v\simeq M_{2^{m+2}d,2}$.
Hence, the moduli space of stable twisted sheaves $M_v(S,\beta)$ is a smooth $M_{2^{m+2}d,2}$-polarized K3 surface. It is an interesting open question to see whether the isogeny of degree $4$ which we constructed here coincides with the one induced by a twisted universal family on $S\times M_v(S,\beta)$ or not (for further details see \cite[Theorem 0.1]{Huybrechts}). } \end{remark} \subsection{The Galois closure of $2^2:1$ covers}\label{subsec: Galois closure 2^2:1 cover} Let $X_d$ be a K3 surface such that $\mathbb{N}S(X_d)=M_{d,2}$ and, more generally, denote by $X_e$ a K3 surface with $\mathbb{N}S(X_e)\simeq M_{e,2}$. Then $X_{2d}\in\mathcal{L}_2\cap \mathcal{M}_2$ and there are two Galois covers $X_{4d}\dashrightarrow X_{2d}$ and $X_{2d}\dashrightarrow X_d$. The composition of these two maps is a $2^2:1$ isogeny, not induced by a Galois cover, by Proposition \ref{prop: n^2 isogenies not quotients}. As observed in Remark \ref{rem: Galois cover} there exist a surface $V$, a group $G\subset\mbox{Aut}(V)$ and a subgroup $H$ of $G$ such that $V/G$ is birational to $X_d$, and $V/H$ is birational to $X_{4d}$. Here we construct the surface $V$ and the group $G$, proving the following \begin{proposition} The group $G$ is the dihedral group of order 8 and $V$ is a $(\mathbb{Z}/2\mathbb{Z})^2$ Galois cover of $X_{2d}$, whose branch locus $B$ is the union of 16 smooth rational curves. If $B$ is normal crossing, then $V$ is a smooth surface of positive Kodaira dimension such that $h^{1,0}(V)=0$ and $h^{2,0}(V)\geq 35$. \end{proposition} To prove the Proposition one constructs a $\left(\mathbb{Z}/2\mathbb{Z}\right)^2$ Galois cover of $X_{2d}$ (see Section \ref{subsub: bidouble cover X2d}) by a surface denoted by $V$. Then one constructs a $\left(\mathbb{Z}/2\mathbb{Z}\right)^2$ Galois cover of $X_d$ (see Section \ref{subsub: bidouble cover Xd}), and eventually one proves that these two covers can be glued to obtain a unique Galois cover by the dihedral group of order 8 (see Section \ref{subsub: Galois cover Xd}). In order to obtain the $\left(\mathbb{Z}/2\mathbb{Z}\right)^2$ covers one compares the branch loci of the $2:1$ maps $X_{4d}\dashrightarrow X_{2d}$ and $X_{2d}\dashrightarrow X_d$. Here we do not consider the Galois closure of the $2^n:1$ isogenies given in Corollary \ref{cor: infinite sets of families}, but for any fixed $n$ one can a priori iterate the previous process. In the following we will call the $(\mathbb{Z}/2\mathbb{Z})^2$ Galois covers bidouble covers, as in \cite{Catanese}, where all the basic definitions and properties of these covers can be found. \subsubsection{A bidouble cover of the surface $X_{2d}$}\label{subsub: bidouble cover X2d} Let us denote by $N_1,\ldots ,N_8\subset X_{2d}$ the rational curves which form the branch locus of the double cover $X_{4d}\dashrightarrow X_{2d}$. Let us denote by $\sigma_{2d}$ the symplectic involution on $X_{2d}$ such that $X_d$ is birational to $X_{2d}/\sigma_{2d}$. The curves $N_i$, $i=1,\ldots, 8$, are not preserved by $\sigma_{2d}$, since $\langle H\rangle:=\mathbb{N}S(X_{2d})^{\sigma_{2d}^*}$ is positive definite (so it cannot contain the class of a $(-2)$-curve); more precisely $H^2=4d$. Set $N_i':=\sigma_{2d}(N_i)$. Hence we have found two even sets of eight rational curves on $X_{2d}$: $\left\{N_1,\ldots, N_8\right\}$ and $\left\{N_1',\ldots, N_8'\right\}$. Let $D_1:=\sum_{i=1}^8N_i$, $D_2:=\sum_{i=1}^8 N_i'$, $D_3:=0$ and $2L_i:=D_j+D_k$, $\{i,j,k\}=\{1,2,3\}$.
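Spelling out this cover datum explicitly (this is simply the definition above written out, using the two even sets), one has
\begin{align*}
L_1=\tfrac{1}{2}(D_2+D_3)=\tfrac{1}{2}\sum_{i=1}^8 N_i',\qquad
L_2=\tfrac{1}{2}(D_1+D_3)=\tfrac{1}{2}\sum_{i=1}^8 N_i,\qquad
L_3=\tfrac{1}{2}(D_1+D_2)=\tfrac{1}{2}\sum_{i=1}^8 \left(N_i+N_i'\right),
\end{align*}
and all three classes lie in $\mathbb{N}S(X_{2d})$ precisely because $\left\{N_1,\ldots, N_8\right\}$ and $\left\{N_1',\ldots, N_8'\right\}$ are even sets.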
The six divisors $D_j$, $L_i$, $j,i=1,2,3$ in $\mathbb{N}S(X_{2d})$ satisfy the conditions which define a bidouble cover, so there exists a surface $V$ such that $(\mathbb{Z}/2\mathbb{Z})^2\subset\mbox{Aut}(V)$ and $V/(\mathbb{Z}/2\mathbb{Z})^2$ is (birational to) $X_{2d}$ (see \cite[Section 2]{Catanese}). Moreover, there are three surfaces which are double covers of $X_{2d}$ branched respectively along the curves supported on $2L_1$, $2L_2$ and $2L_3$; all of them are $2:1$ covered by $V$. Since $L_2=\sum_{i=1}^8N_i/2$, the double cover of $X_{2d}$ branched on $2L_2$ is a non-minimal model of $X_{4d}$. We denote the cover branched on the curves in the support of $2L_2$ by $\widetilde{X_{4d}}$. Similarly the double cover of $X_{2d}$ branched on $\cup_i N_i'$ is the blow up of a K3 surface, $X_{4d}'$, in 8 points and it will be denoted by $\widetilde{X_{4d}'}$. The N\'eron--Severi group of the K3 surface $X_{4d}'$ is determined by the one of $X_{2d}$, by Proposition \ref{prop: NS(Y) determines NS(X)}, and thus it is isometric to $M_{4d,2}$. We obtain the following diagram: \begin{align}\label{eq: bidouble X2d}\xymatrix{&V\ar[dr]\ar[d]\ar[dl]&\\ W\ar[dr]&\widetilde{X_{4d}}\ar[d]^{\widetilde{\pi_{4d}}}&\widetilde{X_{4d}'}\ar[dl]^{\widetilde{\pi'_{4d}}}\\&X_{2d}&}\end{align} The surfaces $W$ and $V$ have non-negative Kodaira dimension, because they are covers of K3 surfaces. Let us now suppose that the intersections $N_i\cap N_j'$ are transversal, and thus both the branch divisors of $W\rightarrow X_{2d}$ and of $V\rightarrow X_{2d}$ are normal crossing. Under this assumption, $V$ is smooth and the birational invariants of $W$ and $V$ depend only on $L_j^2$, for $j=1,2,3$. The surface $W$ is the double cover of $X_{2d}$ branched on the reducible curve which is the support of $2L_3$, i.e. on the curve $\bigcup_{i=1}^8(N_i\cup N_i')$. Since $N_i+N_i'$ is an effective ($\sigma_{2d}^*$)-invariant divisor and $H$ is the ample generator of $\mathbb{N}S(X_{2d})^{\sigma_{2d}^*}$, there exists a positive integer $k_i$ such that $N_i+N_i'= k_i H$. Then $L_3^2=\left.\left(\sum_{i=1}^8 k_i H\right)^2\right/4=d\left(\sum_{i=1}^{8}k_i\right)^2$ and, by the standard formulas for smooth double covers, $$\chi(\mathcal{O}_W)=2\chi(\mathcal{O}_{X_{2d}})+\tfrac{1}{2}L_3^2=4+\frac{d}{2}\left(\sum_{i=1}^{8}k_i\right)^2,\ h^{2,0}(W)=3+\frac{d}{2}\left(\sum_{i=1}^{8}k_i\right)^2 \mbox{ and } h^{1,0}(W)=0.$$ The singularities of $W$ are in the inverse image of the singular points of $\bigcup_{i=1}^8(N_i\cup N_i')$ and $V$ is a double cover of $W$ branched on its singular points. The invariants of $V$ can be computed by \cite[Section 2]{Catanese}, from which one obtains \[ h^{2,0}(V)=h^{2,0}(W)=3+\frac{d}{2}\left(\sum_{i=1}^{8}k_i\right)^2\geq 3+32d\geq 35,\ \ h^{1,0}(V)=h^{1,0}(W)=0, \] where the first inequality holds since $k_i\geq 1$ for every $i$, so that $\left(\sum_{i=1}^8 k_i\right)^2\geq 64$. Hence $V$ is a surface with non-negative Kodaira dimension, $h^{2,0}(V)\geq 35$ and $h^{1,0}(V)=0$, so its Kodaira dimension is necessarily positive (any surface with $p_g\geq 2$ has Kodaira dimension at least 1). \subsubsection{A bidouble cover of the surface $X_{d}$}\label{subsub: bidouble cover Xd} The surface $X_d$ is the desingularization of the quotient $X_{2d}/\sigma_{2d}$ and we will denote by $R_1,\ldots, R_8$ the eight disjoint rational curves resolving the singularities of $X_{2d}/\sigma_{2d}$. Equivalently, the double cover of $X_{d}$ branched on $\bigcup_i R_i$ is birational to $X_{2d}$. Denoting by $\pi_{2d}:X_{2d}\rightarrow X_{2d}/\sigma_{2d}$ the quotient map, one observes that $\pi_{2d}(N_i)=\pi_{2d}(N_i')$, $i=1,\ldots, 8$, and $\pi_{2d}(N_i)$ is a rational curve singular in the points $\pi_{2d}(N_i\cap N_i')$.
We denote by $\overline{N_i}$ the strict transform on $X_d$ of the curve $\pi_{2d}(N_i)$. The curves $\overline{N_i}$ could be singular and the set $\{\overline{N_1},\ldots, \overline{N_8}\}$ is a 2-divisible set. Moreover, since $N_i+N_i'=k_iH\in \mathbb{N}S(X_{2d})^{\sigma_{2d}^*}$, one has $(\pi_{2d})_*(N_i+N_i')\in \mathbb{N}S(X_{2d}/\sigma_{2d})$. Hence $\overline{N_i}\in N^{\perp_{\mathbb{N}S(X_d)}}$. The sets $\{R_1,\ldots, R_8\}$ and $\{\overline{N_1},\ldots, \overline{N_8}\}$ are two 2-divisible sets of curves, which allow us to construct a bidouble cover of $X_d$, whose data are $\Delta_1:=\sum_{i=1}^8 R_i$, $\Delta_2:=\sum_{i=1}^8\overline{N_i}$, $\Delta_3:=\sum_{i=1}^8\left(R_i+\overline{N}_i\right)$, $2\Gamma_i:=\Delta_j+\Delta_k$, with $\{i,j,k\}=\{1,2,3\}$. The double cover $\widetilde{X_{2d}}\rightarrow X_{d}$ is branched over $\cup_iR_i$, i.e. the curve in the support of $2\Gamma_2$. It induces a double cover of $\widetilde{X_{2d}}$ branched over $\cup_i\left(\widetilde{N_i}+\widetilde{N_i'}\right)$, where $\widetilde{N_i}$ (resp. $\widetilde{N_i'}$) is the strict transform on $\widetilde{X_{2d}}$ of the curve $N_i$ (resp. $N_i'$). Let us denote by $\widetilde{W}$ the surface double cover of $\widetilde{X_{2d}}$ branched on $\cup_i\left(\widetilde{N_i}+\widetilde{N_i'}\right)$. So we have the following diagram: \begin{align}\label{eq: bidouble Xd}\xymatrix{&\widetilde{W}\ar[dr]\ar[d]\ar[dl]&\\ B\ar[dr]&A\ar[d]&\widetilde{X_{2d}}\ar[dl]^{\widetilde{\pi_{2d}}}\\&X_{d}&}\end{align} where $\widetilde{\pi_{2d}}:\widetilde{X_{2d}}\rightarrow X_d$ is induced by $\pi_{2d}$. \subsubsection{The $\mathcal{D}_4$ cover of $X_{d}$}\label{subsub: Galois cover Xd} Both the diagrams \eqref{eq: bidouble Xd} and \eqref{eq: bidouble X2d} induce a $2:1$ rational map $W\dashrightarrow X_{2d}$, which is (birationally) the double cover of $X_{2d}$ branched on $\bigcup_{i=1}^8 \left(N_i\cup N_i'\right)$. Hence these diagrams can be glued to obtain the following one, where all the arrows are rational maps of generic degree 2 \begin{align}\label{eq: D4 cover}\xymatrix{&&V\ar@{-->}[dr]\ar@{-->}[d]\ar@{-->}[dl]&\\ &W\ar@{-->}[dr]\ar@{-->}[d]\ar@{-->}[dl]&X_{4d}\ar@{-->}[d]^{\pi_{4d}}&X_{4d}'\ar@{-->}[dl]^{\pi'_{4d}}\\B\ar@{-->}[dr]&A\ar@{-->}[d]&X_{2d}\ar@{-->}[dl]^{\pi_{2d}}&\\&X_{d} }\end{align} We already proved in Proposition \ref{prop: n^2 isogenies not quotients} that the $4:1$ covers $X_{4d}\dashrightarrow X_{d}$ and $X_{4d}'\dashrightarrow X_d$ are not Galois covers. On the other hand, the $4:1$ cover $W\rightarrow X_d$ is a Galois cover (indeed a bidouble cover), by construction. Since $V\dashrightarrow X_{2d}$ is constructed as a bidouble cover, the cover involution of $W\dashrightarrow X_{2d}$ lifts to an involution of $V$ (which is the cover involution of $V\dashrightarrow X_{4d}$). Hence one obtains that the cover $V\dashrightarrow X_{d}$ is a Galois $8:1$ cover. The cover group $G$ is a group of order 8 which admits non-normal subgroups of order 2 (otherwise $X_{4d}\dashrightarrow X_d$ would be a Galois cover). Hence $G\simeq \mathcal{D}_4$, the dihedral group of order 8. We recall that $\mathcal{D}_4:=\langle s,r\mid s^2=1,\ r^4=1,\ rs=sr^{-1}\rangle$. The center $H$ of $G$ is $H:=\langle r^2\rangle$ and the quotient of $V$ by $H$ is birational to $W$. So we conclude that the required Galois cover is given by the surface $V$, on which the group $G=\mathcal{D}_4$ acts.
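The group-theoretic facts used above can also be checked mechanically. The following minimal Python sketch (purely illustrative; the realization of $\mathcal{D}_4$ as the symmetries of a square is an arbitrary choice of model) generates the group from $r$ and $s$ and confirms that it has order 8, that its center is $\langle r^2\rangle$, and that $\langle s\rangle$ is a non-normal subgroup of order 2 -- consistent with the fact that $V/\langle r^2\rangle$ is birational to $W$, while the intermediate $4:1$ covers $X_{4d}\dashrightarrow X_d$ are not Galois.
\begin{verbatim}
from itertools import product

# D_4 acting on the vertices {0,1,2,3} of a square:
# r = rotation by 90 degrees, s = reflection through the diagonal fixing 0 and 2.
e = (0, 1, 2, 3)
r = (1, 2, 3, 0)
s = (0, 3, 2, 1)

def mul(p, q):                      # composition: apply q first, then p
    return tuple(p[q[i]] for i in range(4))

def inv(p):
    out = [0] * 4
    for i, image in enumerate(p):
        out[image] = i
    return tuple(out)

G = {e, r, s}                       # generate G = <r, s> by closure
while True:
    bigger = G | {mul(a, b) for a, b in product(G, G)}
    if bigger == G:
        break
    G = bigger

r2 = mul(r, r)
center = {g for g in G if all(mul(g, h) == mul(h, g) for h in G)}
print(len(G) == 8)                                            # True: |D_4| = 8
print(mul(mul(r, s), mul(r, s)) == e)                         # True: relation rs = s r^{-1}
print(center == {e, r2})                                      # True: Z(D_4) = <r^2>
print(any(mul(mul(g, s), inv(g)) not in {e, s} for g in G))   # True: <s> is not normal
\end{verbatim}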
\subsection{The K3 surface $X_2\in\mathcal{L}_2\cap\mathcal{M}_2$}\label{sec: X1 and Y2} By Theorem \ref{theorem: intersection L2 and M2}, for every even $d$, if a K3 surface $X_d$ is such that $\mathbb{N}S(X_d)\simeq L_{d,2}'$, then $X_d$ admits a symplectic involution and it is $2:1$ cyclically covered by a K3 surface. Here we describe these properties geometrically for the minimum possible value of $d$, i.e. for $d=2$: let $X_2$ be a K3 surface with $\mathbb{N}S(X_2)\simeq L_{2,2}'$. It admits a symplectic involution $\sigma$ and by Proposition \ref{prop: NS(Y) determines NS(X)} the K3 surface $Y_1$ which is the desingularization of $X_2/\sigma$ is such that $\mathbb{N}S(Y_1)=M_{1,2}\simeq \langle 2\rangle \oplus N$. Since $\mathbb{N}S(X_2)\simeq M_{2,2}\simeq\langle 4\rangle\oplus N$ (by Theorem \ref{theorem: intersection L2 and M2}), the surface $X_2$ is $2:1$ covered by a K3 surface $X_4$, whose N\'eron--Severi group is $\mathbb{N}S(X_4)\simeq L_{4,2}'$ (by Proposition \ref{prop: NS(Y) determines NS(X)}). Since $X_2$ is a Nikulin surface, there are 8 disjoint rational curves on it, which resolve the singularities of the quotient of $X_4$ by a symplectic involution. Thus the surface $X_2$ admits two different descriptions, according to whether it is regarded as a K3 surface with a symplectic involution or as a Nikulin surface. These descriptions are associated to different projective models, induced by different (pseudo)ample divisors. Here we recall these descriptions and we explain how to pass from one to the other.\\ By \cite[Section 3.5]{vGS}, any K3 surface $X_2$ such that $\mathbb{N}S(X_2)\simeq L_{2,2}'$ is described as a bidouble cover of $\mathbb{P}^2$ as follows: one considers two smooth plane curves $B$ and $C_0$ of degrees 4 and 2 respectively in $\mathbb{P}^2$. The double cover of $\mathbb{P}^2$ branched on $B\cup C_0$ is a surface singular in eight points, the inverse image of $B\cap C_0$. The resolution of this surface is the K3 surface $X_1$ such that $\mathbb{N}S(X_1)\simeq M_{1,2}$ and the eight rational curves arising from this resolution will be denoted by $R_i$, $i=1,\ldots, 8$. The curves $R_1,\ldots, R_8$ form an even set of rational curves on $X_1$ and the double cover of $X_1$ branched on $\cup_i R_i$ is, by construction, a K3 surface $X_2$ such that $\mathbb{N}S(X_2)\simeq L_{2,2}'$. The choice of the curves $B\cup C_0$ completely determines the surfaces $X_1$ and $X_2$. To construct the bidouble cover one considers also the double cover of $\mathbb{P}^2$ branched on $C_0$ and the double cover of $\mathbb{P}^2$ branched on $B$. The former surface is a quadric $Q\simeq \mathbb{P}^1\times\mathbb{P}^1\subset\mathbb{P}^3$, the latter a del Pezzo surface of degree 2, denoted in the following by $dP$. Hence one has the following diagram, where all the arrows are rational maps of degree 2: \begin{align}\label{eq: bidouble P2}\xymatrix{&X_2\ar[dr]^{\pi_2}\ar[d]^{\pi_{dP}}\ar[dl]_{\pi_{Q}}&\\ \mathbb{P}^1\times\mathbb{P}^1\simeq Q\ar[dr]_{q_1}&dP\ar[d]^{q_2}&X_{1}\ar[dl]^{q_3}\\&\mathbb{P}^2&}\end{align} The N\'eron--Severi group of $X_2$ is isometric to $L_{2,2}'$, hence it is an overlattice of index two of $\langle 4\rangle\oplus E_8(-2)$. The linear system of the ample divisor $L$, orthogonal to $E_8(-2)$ in $L_{2,2}'$, exhibits $X_2$ as a double cover of a quadric $\mathbb{P}^1\times\mathbb{P}^1$ in $\mathbb{P}^3$.
One can assume that the class generating $L_{2,2}'/L_{2,2}$ is $E_1:=(L+e_1)/2$, where $e_i$ is a standard basis of $E_8(-2)$ (i.e. $e_ie_{i+1}=2$ if $i=1,\ldots 6$, $e_3e_8=2$, $(e_i)^2=-4$ and the other intersections are 0). Then the divisor $E_1$ is a nef divisor and the map associated to its linear system $\varphi_{|E_1|}:X_2\rightarrow\mathbb{P}^1$ is a genus 1 fibration. The action of $\sigma^*$ on $\mathbb{N}S(X_2)$ is the identity on the subspace $\leftarrowngle L\rightarrowngle$ and minus the identity on the subspace $L^{\perp}\simeq E_8(-2)$. So the image of $E_1$ by $\sigma^*$ is the nef divisor $E_2:=(L-e_1)/2=E_1-e_1$ (see \cite[Section 3.5]{vGS}). The two maps $\varphi_{|E_i|}$, $i=1,2$ are the maps on the rulings of the quadric $Q\subset \mathbb{P}^3$ image of the map $\varphi_{|L|}=\varphi_{|E_1+E_2|}$. In particular the set of divisors $\{E_1, e_1,\ldots e_8\}$ is a basis of $\mathbb{N}S(X_2)$. By \eqref{eq: bidouble P2}, it follows that $X_2$ admits three commuting involutions, the covering involutions of the three double covers $\pi_Q$, $\pi_{dP}$, $\pi_2$. The latter involution is the symplectic involution $\sigma$, the others will be denoted by $\iota_Q$ and $\iota_{dP}$ respectively. \begin{proposition}\leftarrowbel{prop: three involutions on NS(X2)} The involutions $\iota_Q$ and $\iota_{dP}$ are non-symplectic involutions and their composition is the symplectic involution $\sigma$. The group $\leftarrowngle \iota_Q,\iota_{dP}\rightarrowngle$ is isomorphic to $\left(\mathbb{Z}/2\mathbb{Z}\right)^2$ and it is the Galois group of the $2^2:1$ cover $\pi:X_2\dashrightarrow\mathbb{P}^2$. The induced three involutions on $\mathbb{N}S(X_{2})$ act as follows on the basis $\{E_1,e_1,\ldots e_8\}$: \begin{eqnarray*} \begin{array}{llllll} \sigma^*(E_1)=E_1-e_1,&\sigma^*(e_i)=-e_i,&i=1,\ldots,8 \\ \iota_{Q}^*(E_1)=E_1,& \iota_{Q}^*(e_1)=e_1,& \iota_{Q}^*(e_2)=-e_1-e_2,& \iota_{Q}^*(e_j)=-e_j,\\ \iota_{dP}^*(E_1)=E_1-e_1,&\ \iota_{dP}^*(e_1)=-e_1,& \iota_{dP}^*(e_2)=e_1+e_2&\ \iota_{dP}^*(e_j)=e_j, \end{array} \end{eqnarray*} where $j=3,\ldots, 8$. \end{proposition} \proof The action of $\sigma$ is minus the identity on $\left(\mathbb{N}S(X_2)^\sigma\right)^{\perp}\simeq E_8(-2)\subset \mathbb{N}S(X_2)$ and we chose the basis of $\mathbb{N}S(X_2)$ in such a way that the divisors $e_i$, $i=1,\ldots, 8$ span exactly $\left(\mathbb{N}S(X_2)^\sigma\right)^{\perp}\simeq E_8(-2)$. Moreover we chose $L$ to be the orthogonal to $\leftarrowngle e_i\rightarrowngle_{i=1,\ldots 8}$ and thus $\sigma^*(L)=L$. By the definition of $E_1(=\left(L+e_1\right)/2)$ one obtains $\sigma^*(E_1)=(L-e_1)/2=E_1-e_1$. The automorphism $\iota_Q$ is such that $X_2/\iota_Q$ is a rational surface and thus $\iota_Q$ is non symplectic and $X_2/\iota_Q$ is smooth. Since $X_2/\iota_Q$ is $\mathbb{P}^1\times\mathbb{P}^1$, ${\rm rank}(\mathbb{N}S(X_2)^{\iota_Q})=2$ and $\mathbb{N}S(X_2)^{\iota_Q}$ is generated by the divisors which induce the maps $X_2\rightarrow\mathbb{P}^1$ given by the composition of the quotient map $\pi_Q:X_2\rightarrow\mathbb{P}^1\times\mathbb{P}^1$ with the projection on the first, respectively second, factor. These maps are $\varphi_{|E_1|}: X_2\rightarrow\mathbb{P}^1$ and $\varphi_{|E_2|}: X_2\rightarrow\mathbb{P}^1$. So $\mathbb{N}S(X_2)^{\iota_Q}=\leftarrowngle E_1, E_2\rightarrowngle$ and $\iota_Q$ acts as minus the identity on $\left(\mathbb{N}S(X_2)^{\iota_Q}\right)^\perp$. 
So $\iota_Q^*(e_j)=-e_j$ if $j=3,\ldots, 8$, $\iota_Q^*(E_1)=E_1$, and $\iota_Q^*(E_2)=\iota_Q^*(E_1-e_1)=E_1-\iota_Q^*(e_1)=E_1-e_1=E_2$. It follows $\iota_Q^*(e_1)=e_1$. In order to find the image of $e_2$ it suffices to recall that $\iota_Q^*$ is an involution and that $\left(\iota_Q^*(e_2)\right)\cdot \left(\iota_Q^*(D)\right)=e_2D$ for any divisor $D\in \mathbb{N}S(X_2)$. The group $\leftarrowngle\iota_Q,\sigma\rightarrowngle$ is by construction the Galois group of the cover $\pi:X_2\rightarrow \mathbb{P}^2$, so it is isomorphic to $\left(\mathbb{Z}/2\mathbb{Z}\right)^2$ and contains three different involutions, each of them is the composition of the other two. In particular $\iota_{dP}=\iota_Q\circ\sigma$ and so $\iota_{dP}^*=\sigma^*\circ\iota_Q^*$ and $\iota_{dP}$ is non-symplectic.\endproof We already observed that the classes $E_1:=(L+e_1)/2$ and $E_2:=(L-e_1)/2$ induce two elliptic fibrations $\varphi_{|E_i|}:X_2\rightarrow\mathbb{P}^1$. By the properties of these elliptic fibrations we will be able to identify the classes of irreducible rational curves on $X_2$ and in particular 8 classes which span the Nikulin lattice. The following proposition gives the explicit isometry between $L_{2,2}'$ and $M_{2,2}$ and shows directly that the surface $X_2$ admits a 2:1 rational double cover by another K3 surface, thus it provides an explicit geometric interpretation of Theorem \ref{theorem: intersection Ln and Mn} in the case $n=2$. \begin{proposition}\leftarrowbel{prop: L4 simeq M4'} Both the genus 1 fibrations $\varphi_{|E_1|}:X_2\rightarrow\mathbb{P}^1$ and $\varphi_{|E_2|}:X_2\rightarrow\mathbb{P}^1$ have no reducible fibers and 8 disjoint sections which can be chosen to generate the Mordell--Weil group (which is isomorphic to $\mathbb{Z}^7$). One can choose these sections, for each fibration, in such a way that 7 sections are in common, the eighth section of $\varphi_{|E_1|}$ is a 5-section for $\varphi_{|E_2|}$ and vice versa the eighth section of $\varphi_{|E_2|}$ is a 5-section for $\varphi_{|E_1|}$. The eight sections of $\varphi_{|E_1|}$ (resp. $\varphi_{|E_2|}$) chosen as above form an even set of eight disjoint rational curves, so $X_2$ is a Nikulin surface and $\mathbb{N}S(X_2)\simeq M_{2,2}$. \end{proposition} \proof Since one has a basis of $\mathbb{N}S(X_2)$, one can compute explicitly the sublattice $E_1^{\perp}:=\{D\in \mathbb{N}S(X_2)\simeq L_{2,2}'| DE_1=0 \}$ and one observes that it is $P(2)$ for a certain degenerate even lattice $P$. In particular there are no $(-2)$-classes orthogonal to $E_1$ in $\mathbb{N}S(X_2)$ and thus the fibration $\varphi_{|E_1|}$ does not admit reducible fibers. The fibration $\varphi_{|E_2|}:X_2\rightarrow\mathbb{P}^1$ is the image of $\varphi_{|E_1|}$ for the automorphism $\sigma$, so also $\varphi_{|E_2|}$ has no reducible fibers too. To conclude the proof it suffices to exhibit the classes of the irreducible rational curves with the required properties. Let us assume that $N_i$ is a class such that $N_i^2=-2$, $N_iL>0$, $N_iE_1=1$. Then $N_i$ is the class of an effective divisor (by $N_iL>0$), supported on a (possibly reducible) curve. If $N_i$ is irreducible, then it is a section of $\varphi_{|E_1|}$. Otherwise it should be the sum of a section and some irreducible components of reducible fibers, but there are no reducible fibers in the genus 1 fibration $\varphi_{|E_1|}$. So $N_i$ is a section of $\varphi_{|E_1|}$. 
All the classes listed below satisfy the conditions $N_i^2=-2$, $N_iL>0$, $N_iE_1=1$, so they are supported on irreducible rational curves, all sections of $\varphi_{|E_1|}$: $$ \begin{array}{l} N_1=E_1+e_2;\ \ N_2=E_1+e_2+e_3;\\ N_3=E_1+e_2+e_3+e_4;\ \ N_4=E_1+e_2+e_3+e_4+e_5;\\ N_5=E_1+e_2+e_3+e_4+e_5+e_6;\ \ N_6=E_1+e_2+e_3+e_4+e_5+e_6+e_7;\\ N_7=E_1-2e_1-3e_2-5e_3-4e_4-3e_5-2e_6-e_7-3e_8;\ \ N_8=3E_1+e_2-e_8.\end{array}$$ Since $N_iN_j=0$ for every $i,j=1,\ldots, 8$, $i\neq j$, the curves $N_i$ are disjoint. Moreover $(\sum_{i=1}^8N_i)/2\in \mathbb{N}S(X_2)$, so $\{N_1,\ldots, N_8\}$ is an even set of disjoint rational curves and thus there is a $2:1$ cover branched on these rational curves, i.e. $X_2\in\mathcal{M}_2$. The curves $N_i$, $i=1,\ldots, 7$, intersect both $E_1$ and $E_2$ in 1 point. So the fibrations $\varphi_{|E_1|}$ and $\varphi_{|E_2|}$ share 7 sections. The divisor $H:=6E_1-e_1+2e_2-2e_8$ is a pseudoample divisor of self-intersection 4 which is orthogonal to all the $N_i$'s. So $\mathbb{N}S(X_2)\simeq M_{2,2}$. The class $N_8'':=3E_1-2e_1+e_2-e_8$ is a section of $\varphi_{|E_2|}$ and a 5-section of $\varphi_{|E_1|}$. The class $\left(\sum_{i=1}^7N_i+N_8''\right)/2\in \mathbb{N}S(X_2)$, so $\{N_1,N_2,N_3,N_4,N_5,N_6,N_7,N_8''\}$ is an even set of disjoint rational curves. \endproof The explicit knowledge of the change of basis from $\{E_1,e_1,\ldots e_8\}$ to $\{H, N_1,\ldots, N_7, \sum_{i=1}^8 N_i/2\}$ given in the proof of Proposition \ref{prop: L4 simeq M4'} allows one to obtain some interesting geometric characterizations of the K3 surfaces with $\mathbb{N}S(X_2)\simeq L_{2,2}'$. Indeed, let $S$ be a K3 surface of Picard number 9 which satisfies one of the following conditions: \begin{itemize} \item $S$ admits an elliptic fibration $\mathcal{E}:S\rightarrow\mathbb{P}^1$ without reducible fibers and admitting 8 disjoint sections, $P_1,\ldots,P_8$, such that $(\sum_iP_i)/2\in \mathbb{N}S(S)$. \item $S$ admits an elliptic fibration $\mathcal{E}:S\rightarrow\mathbb{P}^1$ without reducible fibers and with zero section $O$. The Mordell--Weil group of $\mathcal{E}$ is generated by 7 sections, $P_1,\ldots,P_7$, such that $\{O,P_1,\ldots, P_6\}$ are mutually disjoint and $P_7$ intersects the zero section in 12 points and the other sections $P_i$, $i=1,\ldots, 6$, in 6 points. \item $S$ admits two elliptic fibrations $\mathcal{E}$ and $\mathcal{F}$ with classes of the fiber $E$ and $F$ respectively, such that $EF=2$. Let us assume that there are 7 orthogonal rational curves such that 6 are sections of both fibrations, and the seventh is a section of one fibration and a 5-section of the other. \end{itemize} Then $S$ also satisfies the other conditions, it admits a symplectic involution switching $\mathcal{E}$ and $\mathcal{F}$ and $S$ is a Nikulin surface. In particular $\mathbb{N}S(S)\simeq L_{2,2}'\simeq M_{2,2}$. Indeed, any of the above sets of data of fibrations and sections is enough to exhibit the lattice $L_{2,2}'$ as the N\'eron--Severi group of $S$, as follows from the proof of Proposition \ref{prop: L4 simeq M4'}.\\ The map $\varphi_{|H|}:X_2\rightarrow\mathbb{P}^3$ exhibits $X_2$ as a singular quartic in $\mathbb{P}^3$ and its eight nodes are $\varphi_{|H|}(N_i)$ for $i=1,\ldots, 8$. It is well known that the projection of a nodal quartic from a node gives a model of the same K3 surface as a double cover of $\mathbb{P}^2$ branched on a sextic. In particular, let us consider the projection from the node $\varphi_{|H|}(N_8)$, induced by the linear system $|H-N_8|$.
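Before describing this projection, we record a quick numerical sanity check (an illustrative aside, not needed for the arguments) of the intersection numbers used in the proof of Proposition \ref{prop: L4 simeq M4'}. Writing classes as coordinate vectors in the basis $\{E_1,e_1,\ldots, e_8\}$ of $\mathbb{N}S(X_2)$, the Gram matrix is determined by $E_1=(L+e_1)/2$, $L^2=4$, $Le_i=0$ and the intersection numbers of $E_8(-2)$ fixed above; a few lines of Python/NumPy then confirm that $N_i^2=-2$, $N_iN_j=0$ for $i\neq j$, $N_iE_1=1$, that $\left(\sum_{i=1}^8N_i\right)/2$ is an integral class, and that $H^2=4$ with $HN_i=0$.
\begin{verbatim}
import numpy as np

# Gram matrix of NS(X_2) in the basis (E_1, e_1, ..., e_8):
# E_1^2 = 0, E_1.e_1 = -2, E_1.e_2 = 1, E_1.e_j = 0 for j >= 3,
# e_i^2 = -4, e_i.e_{i+1} = 2 (i = 1,...,6), e_3.e_8 = 2, all other products 0.
G = np.zeros((9, 9), dtype=int)
G[0, 1] = G[1, 0] = -2
G[0, 2] = G[2, 0] = 1
for i in range(1, 9):
    G[i, i] = -4
for i in range(1, 7):
    G[i, i + 1] = G[i + 1, i] = 2
G[3, 8] = G[8, 3] = 2

E1 = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0])
N = np.array([
    [1, 0, 1, 0, 0, 0, 0, 0, 0],           # N_1 = E_1 + e_2
    [1, 0, 1, 1, 0, 0, 0, 0, 0],           # N_2
    [1, 0, 1, 1, 1, 0, 0, 0, 0],           # N_3
    [1, 0, 1, 1, 1, 1, 0, 0, 0],           # N_4
    [1, 0, 1, 1, 1, 1, 1, 0, 0],           # N_5
    [1, 0, 1, 1, 1, 1, 1, 1, 0],           # N_6
    [1, -2, -3, -5, -4, -3, -2, -1, -3],   # N_7
    [3, 0, 1, 0, 0, 0, 0, 0, -1],          # N_8
])
H = np.array([6, -1, 2, 0, 0, 0, 0, 0, -2])

dot = lambda a, b: a @ G @ b
print(all(dot(n, n) == -2 and dot(n, E1) == 1 for n in N))            # True
print(all(dot(N[i], N[j]) == 0 for i in range(8) for j in range(i)))  # True
print(not np.any(N.sum(axis=0) % 2))                                  # True
print(dot(H, H) == 4 and all(dot(H, n) == 0 for n in N))              # True
\end{verbatim}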
We thus have a $2:1$ map $\varphi_{|H-N_8|}:X_2\rightarrow \mathbb{P}^2$, which contracts the 7 curves $N_i$ to seven nodes of the branch sextic. Hence we obtain the following diagram, where the vertical arrows are contractions of rational curves and the horizontal arrows are $2:1$ maps: $$\xymatrix{X_2\ar[drr]^-{\varphi_{|H-N_8|}}\ar[rr]^{2:1}\ar[d]&&\widetilde{\mathbb{P}^2}\ar[d]\\ \varphi_{|H|}(X_2)\ar[rr]^{2:1}&&\mathbb{P}^2}$$ In particular $\widetilde{\mathbb{P}^2}$ is the blow up of $\mathbb{P}^2$ in the seven points $\varphi_{|H-N_8|}(N_i)$ (which are the singular points of the branch locus of the map $X_2\rightarrow\mathbb{P}^2$) and thus $\widetilde{\mathbb{P}^2}$ is a del Pezzo surface of degree 2. The cover involution of the double cover $X_2\rightarrow\widetilde{\mathbb{P}}^2$ is an involution $i$ such that $i^*(H-N_8)=H-N_8$, $i^*(N_j)=N_j$, $j=1,\ldots, 7$, and $i^*(N_8)=2H-3N_8$. One is now able to rewrite the action of $i^*$ on the basis $\{E_1,e_1,\ldots, e_8\}$ and one finds that $i^*=\iota_{dP}^*$ (where $\iota_{dP}^*$ is as in Proposition \ref{prop: three involutions on NS(X2)}). Thus, with the notation of \eqref{eq: bidouble P2}, one obtains $\widetilde{\mathbb{P}^2}=dP$, $\iota_{dP}=i$ and the map $\pi_{dP}$ is induced by the projection of $\varphi_{|H|}(X_2)$ from the node $\varphi_{|H|}(N_8)$. The even set $\{N_1,\ldots, N_7, N_8''\}$ is nothing but the image of the even set $\{N_1,\ldots, N_8\}$ under the action of $\iota_{dP}$.\\ In Section \ref{subsec: Galois closure 2^2:1 cover} we proved that the construction of the $\mathcal{D}_4$ Galois cover of $X_d$ is completely determined by two sets of rational curves in $X_d$, i.e. the sets $\{R_1,\ldots, R_8\}$ and $\{\overline{N}_1,\ldots, \overline{N}_8\}$. In particular in the case we are now considering, i.e. if $d=1$, the curves $R_i$, $i=1,\ldots, 8$, were already considered in the diagram \eqref{eq: bidouble P2} and are mapped by $q_3:X_1\rightarrow\mathbb{P}^2$ to the eight singular points of the branch sextic $B\cup C_0$. We now describe the curves $\overline{N}_i$, by giving their image as plane curves $q_3(\overline{N_i})\subset\mathbb{P}^2$. By construction, $q_3(\overline{N_i})=\pi(N_i)$, where $\pi:X_2\dashrightarrow\mathbb{P}^2$ is the rational $2^2:1$ map given in \eqref{eq: bidouble P2}. \begin{proposition}\label{prop: palne curves image of Ni} For each $i=1,\ldots, 7$, the curve $\pi(N_i)\subset\mathbb{P}^2$ is a bitangent line to the quartic $B$. The curve $\pi(N_8)$ is a rational irreducible sextic $D\subset\mathbb{P}^2$ which is tangent to $B\cup C_0$ in all their intersection points. The curves $\pi(N_i)$, $i=1,\ldots, 8$, split in $X_1$; the orbit of $N_i$ with respect to $\langle\iota_Q, \iota_{dP} \rangle$ consists of two rational curves if $i=1,\ldots, 7$ and of four curves if $i=8$. \end{proposition} \proof The surface $dP$ is a degree 2 del Pezzo surface, and it is therefore naturally endowed with an involution $i$, which is the cover involution of the $2:1$ map $dP\rightarrow\mathbb{P}^2$ (see \cite[Chapter VII, Section 4]{DO} for details on del Pezzo surfaces of degree 2 and their involution). The double cover $q_2:dP\rightarrow\mathbb{P}^2$ is branched on $B\subset\mathbb{P}^2$. Since $dP$ is a del Pezzo surface of degree 2, there is a set of 7 disjoint $(-1)$-curves on $dP$, denoted by $p_i$, $i=1,\ldots, 7$, such that $\beta_{dP}:dP\rightarrow\mathbb{P}^2$ is the contraction of these $(-1)$-curves.
The plane curves $q_2(p_i)\subset\mathbb{P}^2$ are 7 lines which are bitangent to $B$, and each of them splits in the double cover into two rational curves $p_i$, $i(p_i)$, $i=1,\ldots,7$. So we have the following commutative diagram: \begin{align}\label{eq: diagram dP P2 X2}\xymatrix{X_2\ar[drr]^-{\varphi_{|H-N_8|}}\ar[d]_{\beta_{X_2}}\ar[rr]^{2:1}&&dP\ar[d]^{\beta_{dP}}\ar[rr]_-{q_2}^-{2:1}&&\mathbb{P}^2\supset B\cup C_0\\ \varphi_{|H|}(X_2)\ar[rr]_{2:1}&&\mathbb{P}^2}\end{align} where $\beta_{X_2}$ contracts the curves $N_i$, $i=1,\ldots, 8$. For $i=1,\ldots, 7$, one has $\varphi_{|H-N_8|}\left(N_i\right)=\beta_{dP}(p_i)$. Each of the 7 lines $q_2(p_i)\subset{\mathbb{P}}^2$ is bitangent to $B$, and intersects $C_0$ transversally. So $q_2(p_i)$ does not split in the double cover $q_1:Q\rightarrow\mathbb{P}^2$, which is branched on $C_0$. In particular, $q_1^{-1}(q_2(p_i))=q_1^{-1}(\pi(N_i))$ is an irreducible smooth rational curve for $i=1,\ldots , 7$. So, for each $i=1,\ldots, 7$, $\pi^{-1}(q_2(p_i))\subset X_2$ consists of a pair of rational curves, switched by $\iota_{Q}$, preserved by $\iota_{dP}$ and then switched by $\sigma=\iota_{Q}\circ\iota_{dP}$. This can also be checked directly on the classes of the curves $N_i$ by using Propositions \ref{prop: three involutions on NS(X2)} and \ref{prop: L4 simeq M4'}: indeed $\iota_{dP}^*(N_i)=N_i$ and $\iota_Q^*(N_i)=\sigma^*(N_i)\neq N_i$ for $i=1,\ldots, 7$. Since for each $i\in\{1,\ldots, 7\}$ the curve $N_i$ is a section of both the elliptic fibrations $|E_1|$ and $|E_2|$ on $X_2$, the curve $\pi_Q(N_i)\subset Q\simeq \mathbb{P}^1\times\mathbb{P}^1$ is a curve of bidegree $(1,1)$ for $i=1,\ldots, 7$. It remains to describe the curve $N_8\subset X_2$. The orbit of $N_8$ is given by the four classes $N_8=3E_1+e_2-e_8$, $N_8':=\sigma(N_8)=3E_1-3e_1-e_2+e_8$; $N_8''=\iota_{dP}(N_8)=3E_1-2e_1+e_2-e_8$, $N_8''':=\sigma(\iota_{dP}(N_8))=3E_1-e_1-e_2+e_8$. Since $N_8+N_8'+N_8''+N_8'''\simeq 12E_1-6e_1\simeq 6L$, the curve $\pi(N_8)$ is a sextic $C_8$ in $\mathbb{P}^2$, which splits in all the double covers $Q$, $dP$ and $X_2$ of $\mathbb{P}^2$. The sextic $C_8$ is a rational curve (since it is the image of rational curves) and thus has 10 nodes. Moreover, since $C_8$ splits in the double covers, $C_8\cap C_0$ consists of 6 points with multiplicity two and $C_8\cap B$ consists of 12 points with multiplicity two. The inverse image of $C_8$ in $Q$ consists of two rational curves, one of bidegree $(1,5)$ which is the common image of $N_8$ and $N_8'''$ and one of bidegree $(5,1)$ which is the common image of $N_8'$ and $N_8''$. The bidegrees of these curves are obtained from the fact that $N_8$ is a section of $E_1$ and a 5-section of $E_2$, while $N_8'$ is a section of $E_2$ and a 5-section of $E_1$. The inverse image $q_3^{-1}(C_8)$ in $X_1$ consists of two rational curves: one is $\pi_2(N_8)=\pi_2(N_8')$ and it is the curve denoted by $\overline{N_8}$ in Section \ref{subsub: bidouble cover Xd}, the other is $\pi_2(N_8'')=\pi_2(N_8''')$.\endproof \begin{rem}{\rm As in the proof of Proposition \ref{prop: palne curves image of Ni} one is able to determine the image of the curves in the linear systems $|E_i|$ under the map $\pi$. The orbit of $E_1$ for $\langle\iota_Q^*, \iota_{dP}^*\rangle$ is $\{E_1, E_2\}$, thus we have two elliptic fibrations which are switched by $\sigma$ and by $\iota_{dP}$, but each of them is preserved by $\iota_Q$. Since $E_1+E_2=L$, a curve $F_1\in |E_1|$ is mapped to a line $f_1$ in $\mathbb{P}^2$.
Moreover, for a general $F_1$, $q_1^{-1}(f_1)$ is the union of the two curves $\pi_Q(F_1)$ and $\pi_Q(\sigma(F_1))$. Hence the line $f_1$ is tangent to the conic $C_0$ (which is the branch locus of $q_1:Q\rightarrow\mathbb{P}^2$). The line $f_1$ does not split in the covers $q_2$ and $q_3$ and in particular the class $(q_3)_*(E_1)$ induces an elliptic fibration on $X_1$. So $q_3^{-1}(f_1)$ is a genus 1 curve. This implies that $f_1\cap B$ consists of 4 distinct points (which are the branch points of the $2:1$ cover $q_3^{-1}(f_1)\rightarrow f_1$). Hence the 1-dimensional linear system of genus 1 curves in $|E_1|$ is mapped by $\pi$ to the 1-dimensional family of lines tangent to the conic $C_0$. The same holds true for the 1-dimensional linear system $|E_2|$, since $\sigma^*(E_1)=E_2$. By definition the $2:1$ map $q_2:dP\rightarrow\mathbb{P}^2$ is the anticanonical map, and thus, denoting by $h$ the class of a line in $\mathbb{P}^2$, $q_2^*(h)=-K_{dP}$. So $q_2^{-1}(f_1)$ is a genus 1 curve in the anticanonical system, with the special property that it intersects the curve $q_2^{-1}(C_0)$ with even multiplicity in each of their intersection points. Hence the curve $q_2^{-1}(f_1)\subset dP$ splits in the double cover $\pi_{dP}:X_2\rightarrow dP$. With the notation of \eqref{eq: diagram dP P2 X2}, the curve $\beta_{dP}(q_2^{-1}(f_1))$ is a plane cubic tangent to $\beta_{dP}(q_2^{-1}(C_0))$.}\end{rem} Until now our point of view was to consider $X_2$ as a surface with a symplectic automorphism $\sigma$ and to determine its structure as a Nikulin surface; one can also consider the reverse problem: given a Nikulin surface with N\'eron--Severi group $M_{2,2}$, it has a very natural model as a quartic in $\mathbb{P}^3$ with 8 nodes. To reconstruct the structure of this surface as a double cover of $\mathbb{P}^1\times\mathbb{P}^1$ admitting a symplectic involution one has to identify the two elliptic fibrations $\varphi_{|E_1|}:X_2\rightarrow\mathbb{P}^1$ and $\varphi_{|E_2|}:X_2\rightarrow\mathbb{P}^1$. We gave a change of basis from $\{E_1,e_1,\ldots, e_8\}$ to $\{H, N_1,\ldots, N_7, \sum_{i=1}^8N_i/2\}$ in the proof of Proposition \ref{prop: L4 simeq M4'}. Its inverse allows us to find the class of $E_1$ in terms of the classes $H$ and $N_i$, $i=1,\ldots, 8$; in particular $E_1=H-(\sum_{i=1}^8N_i)/2$. The curves in this linear system are mapped to cubics by the linear system $|H-N_8|$. So, given a curve $F_1\in |E_1|$, $c:=\varphi_{|H-N_8|}(F_1)$ is a cubic curve in $\mathbb{P}^2$ and $\varphi_{|H-N_8|}^{-1}(c)$ consists of two curves, whose classes are $E_1=H-(\sum_{i=1}^8N_i)/2$ and $E_2=2H-(\sum_{i=1}^8N_i)/2-2N_8$ respectively. Their sum exhibits $S$ as a double cover of $\mathbb{P}^1\times \mathbb{P}^1$ admitting the required symplectic involution. In \cite[Section 3.7]{vGS} an equation of $X_2$ is given, starting from a description of a K3 surface $X_4$ such that $\mathbb{N}S(X_4)=L_{4,2}'$.
The surface $X_4$ is given as the complete intersection of three quadrics in $\mathbb{P}^5_{(y_0:y_1:x_0:\ldots: x_3)}$ of the form \begin{eqnarray}\label{eq: X_4}\left\{\begin{array}{l}y_0^2=Q_1(x_0:x_1:x_2:x_3)\\ y_1^2=Q_2(x_0:x_1:x_2:x_3)\\y_0y_1=Q_3(x_0:x_1:x_2:x_3)\end{array}\right.\end{eqnarray} Each complete intersection with equation \eqref{eq: X_4} admits a symplectic involution induced by the projective transformation $$(y_0:y_1:x_0:x_1:x_2:x_3)\mapsto (-y_0:-y_1:x_0:x_1:x_2:x_3).$$ As shown in \cite[Section 3.7]{vGS}, a singular model of the quotient surface is given by \begin{equation}\label{eq: Y2 as quartic}Q_1(x_0:x_1:x_2:x_3)Q_2(x_0:x_1:x_2:x_3)=Q_3^2(x_0:x_1:x_2:x_3)\subset\mathbb{P}^3_{(x_0:x_1:x_2:x_3)}.\end{equation} By Proposition \ref{prop: NS(Y) determines NS(X)}, the smooth model of the quotient surface \eqref{eq: Y2 as quartic} has N\'eron--Severi group isometric to $M_{2,2}$, i.e. it is the surface $X_2\simeq S$, and the map to $\mathbb{P}^3_{(x_0:x_1:x_2:x_3)}$ is given by the linear system of the pseudoample polarization $H$ (with the notation of Proposition \ref{prop: L4 simeq M4'}). Let us consider the pencil of quadrics $\mathcal{P}_t:=\{Q_1=tQ_3\}\subset\mathbb{P}^3$. It cuts on $X_2$ a pencil of curves, whose class is $2H-\sum_{i=1}^8N_i$, since all the quadrics in $\mathcal{P}_t$ pass through the 8 points in $Q_1\cap Q_2\cap Q_3$, which are the singular points of the surface \eqref{eq: Y2 as quartic}. For almost every $t$, $\mathcal{P}_t$ cuts two genus 1 curves on the surface \eqref{eq: Y2 as quartic}: indeed, on this surface the condition $Q_1=tQ_3$ forces $Q_3(Q_3-tQ_2)=0$, so the intersection splits as the complete intersection $Q_1\cap Q_3$ (which does not depend on $t$) and the curve $(Q_1-tQ_3)\cap (Q_3-tQ_2)$. So, the first curve is a fixed component of the linear system $|2H-\sum_{i=1}^8N_i|$, while the second is a movable curve. The curves $Q_1\cap Q_3$ and $(Q_1-tQ_3)\cap (Q_3-tQ_2)$ intersect transversally in the singular points of the quartic \eqref{eq: Y2 as quartic}. So, they have no intersection points in the blow up $X_2$ of the quartic \eqref{eq: Y2 as quartic} in its singular points. Hence the curves $Q_1\cap Q_3$ and $(Q_1-tQ_3)\cap (Q_3-tQ_2)$ are two fibers of the same fibration $X_2\rightarrow \mathbb{P}^1_t$ and are represented by the same divisor in $\mathbb{N}S(X_2)$. This divisor is necessarily $\left(2H-\sum_{i=1}^8N_i\right)/2=H-\left(\sum_{i=1}^8N_i\right)/2$. This is the divisor $E_1$ considered above, so we conclude that if the surface $S\simeq X_2$ is embedded in $\mathbb{P}^3$ as a quartic of the form $Q_1Q_2=Q_3^2$, then the elliptic fibration $|E_1|$ is cut out by the pencil $\mathcal{P}_t=\{Q_1=tQ_3\}$. The elliptic fibration $|E_2|$ is the image of $|E_1|$ under $\iota_{dP}$, which is the involution induced by the projection from the node $\varphi_{|H|}(N_8)$ of $\varphi_{|H|}(X_2)$. \subsection{Special 10-dimensional subfamilies of $\mathcal{L}_2$ and $\mathcal{M}_2$}\label{subsec: U+N e U+E8} In Proposition \ref{prop: properties of the U+Mn polarized K3} we discussed the family $\mathcal{U}_n$ of $(U\oplus \mathbb{M}_n)$-polarized K3 surfaces, proving that it is contained in $\mathcal{L}_n\cap \mathcal{M}_n$ and has codimension 1 in this space. This holds for every admissible $n$, so in particular for $n=2$. Here we reconsider this family, since it also has interesting properties with respect to the components of $\mathcal{M}_2$: it is contained in the common intersection of all the irreducible components of $\mathcal{M}_2$.
We discuss the analogous property for the components of $\mathcal{L}_2$, identifying another interesting family of K3 surfaces, which has codimension 1 in each component of $\mathcal{L}_2$. More precisely the aim of this section is to prove the following: \begin{itemize} \item There exists an irreducible connected 10-dimensional subvariety of the moduli space of K3 surfaces (it is $\mathcal{U}_2$) which is properly contained in all the families of Nikulin surfaces. Moreover all the K3 surfaces in this subvariety also admit a symplectic involution. \item There exists an irreducible connected 10-dimensional subvariety of the moduli space of K3 surfaces which is properly contained in all the families of K3 surfaces admitting a symplectic involution. All the K3 surfaces in this subvariety are also Nikulin surfaces. \end{itemize} \begin{proposition}\leftarrowbel{prop: U+N polarized K3} There exists an overlattice of index 2 of $U(2)\oplus N$, denoted by $(U(2)\oplus N)'$, which is isometric to $U\oplus N$ and such that for any $d\in\mathbb{N}_{\geq 1}$, both the lattice $M_{d,2}$ and the lattice $M_{2d,2}'$ are primitively embedded in $(U(2)\oplus N)'$. Hence all the irreducible components of the 11-dimensional families of Nikulin surfaces properly contain the 10-dimensional family $\mathcal{U}_2=\mathcal{P}(U\oplus N)$. \end{proposition} \proof Let us consider the lattice $U(2)\oplus N$. Let $u_1$ and $u_2$ be the basis of $U(2)$ such that $u_j^2=0$, $j=1,2$ and $u_1u_2=2$, and let $w_{i,j}$, $i=1,2$ , $j=1,2,3$, be a set of vectors in $N$ such that $w_{i,h}/2$ are contained in the discriminant group of $N\subset U(2)\oplus N$. Moreover we assume that the discriminant form on $w_{i,j}/2$, $i=1,2$, $j=1,2,3$ is $u(2)^3$. The vector $v:=\left(u_{1}+u_{2}+w_{1,1}+w_{2,1}\right)/2$ is isotropic in $A_{U(2)\oplus N}$, and the lattice obtained by adding the vector $v$ to $U(2)\oplus N$ is an even overlattice of index 2 of $U(2)\oplus N$. Let us call it $\left(U(2)\oplus N\right)'$. The discriminant group of $\left(U(2)\oplus N\right)'$ is generated by $\left(u_{1}+w_{1,1}\right)/2$, $\left(u_{1}+w_{1,2}\right)/2$, $w_{i,j}$, $i=1,2$, $j=2,3$ and its discriminant form is $u(2)^3$. There is a unique, up to isometry, even hyperbolic lattice with rank 10, length 6 and prescribed discriminant form $u(2)^3$. Hence $\left(U(2)\oplus N\right)'\simeq U\oplus N$. To give a primitive embedding of $M_{d,2}\simeq \leftarrowngle 2d\rightarrowngle\oplus N$ in $U\oplus N$ it suffices to give a primitive embedding of $\leftarrowngle 2d\rightarrowngle$ in $U$, for example the embedding $\left(\begin{array}{c}1\\d\end{array}\right)\hookrightarrow U$ is primitive. To give a primitive embedding of $M_{2d,2}'$ in $\left(U(2)\oplus N\right)'$ we consider a primitive embedding of $M_{2d,2}$ in $U(2)\oplus N$, which extends primitively to their overlattices. As above, a primitive embedding of $M_{2d,2}\simeq\leftarrowngle 4d\rightarrowngle\oplus N$ in $U(2)\oplus N$ is induced by a primitive embedding of $\leftarrowngle 4d\rightarrowngle$ in $U(2)$. We fix this embedding to be $\leftarrowngle u_1+du_2\rightarrowngle\hookrightarrow U(2)$. This induces a primitive embedding of $M_{2d,2}'$ in $\left(U(2)\oplus N\right)'\simeq U\oplus N$.\endproof Let $S$ be a K3 surface with $\mathbb{N}S(S)\simeq U\oplus N$. Then $S$ admits an elliptic fibration with 8 reducible fibers of type $I_2$ and a 2-torsion section. 
We denote by $F$ the class of the fiber of this fibration, by $O$ the class of the zero section, by $t$ the 2-torsion section and by $C_i^j$, $i=0,1$, $j=1,\ldots, 8$ the $i$-th component of the $j$-th fiber (with the usual assumption that the 0-component meets the zero section). A basis of $U\oplus N$ is then given by $F$, $F+O$, $C_1^j$, $j=1,\ldots 7$, $\left(\sum_{j=1}^8C_1^j\right)/2=2F+O-t$. The translation by a 2-torsion section is a symplectic involution, denoted by $\sigma$ and classically called van Geemen--Sarti involution. Its action is $F\leftrightarrow F$, $O\leftrightarrow t$, $C_1^j\leftrightarrow C_0^j$. The sublattice of $\mathbb{N}S(S)$ invariant for $\sigma$ is $\mathbb{N}S(S)^{\sigma}\simeq \leftarrowngle F,s+t\rightarrowngle\simeq U(2)$. This exhibits the N\'eron--Severi group $\mathbb{N}S(S)\simeq U\oplus N$ as an overlattice (necessarily of index $2^2$) of $U(2)\oplus E_8(-2)$, since $\left(\mathbb{N}S(S)^{\sigma}\right)^{\perp}\simeq E_8(-2)$. Chosen a positive integer $e$, the divisor $v:=F-e(s+t)$ has the following properties: $v^2=-4e$, $v$ is invariant and $v^{\perp_{\mathbb{N}S(S)}}\simeq L_{2e,2}'$. In particular the van Geemen--Sarti involution on $S$ induces the symplectic involution whose action on $\mathbb{N}S(S)$ is $-1$ on $E_8(-2)\hookrightarrow L_{2e,2}'\simeq v^{\perp}$ and $+1$ on $v$. Hence the isometry $\sigma^*$ on $\mathbb{N}S(S)$ extends the isometry associated to the symplectic involution on $L_{2e,2}'$-polarized K3 surfaces, once an embedding $L_{2e,2}'\hookrightarrow (U\oplus N)$ as in the proof of Proposition \ref{prop: U+N polarized K3} is fixed. Now we consider the analogous problem on the irreducible components of $\mathcal{L}_n$. \begin{proposition}\leftarrowbel{prop: U+E8(2)-polarized K3} The 10-dimensional family $\mathcal{P}(U\oplus E_8(-2))$ is properly contained in all the families $\mathcal{P}(L_{e,2})$ and $\mathcal{P}(L_{2e,2}')$. The lattice $U\oplus E_8(-2)$ is isometric to the lattice $U(2)\oplus N$, hence all the members of the family $\mathcal{P}(U\oplus E_8(-2))$ are Nikulin surfaces. \end{proposition} \proof The primitive embedding of $L_{e,2}\simeq \leftarrowngle 2e\rightarrowngle\oplus E_8(-2)$ in $U\oplus E_8(-2)$ is induced by the primitive embedding of $\leftarrowngle 2e\rightarrowngle\simeq \left\leftarrowngle \left(\begin{array}{c}1\\e\end{array}\right)\right\rightarrowngle$ in $U$, as in the proof of Proposition \ref{prop: U+N polarized K3}. We observe that $U\oplus E_8(-2)$ is an overlattice of index 2 of $U(2)\oplus E_8(-2)$. Indeed, similarly to what we did in proof of Proposition \ref{prop: U+N polarized K3}, we consider the basis $u_1$ and $u_2$ of $U(2)\subset U(2)\oplus E_8(-2)$ and the vectors $w_{i,j}/2$ $i=1,2$, $j=1,2,3,4$ in $A_{U(2)\oplus E_8(-2)}$ such that the discriminant form on $u_1/2$, $u_2/2$ and $w_{i,j}/2$, $i=1,2$, $j=1,\ldots, 4$ is $u(2)^5$. By adding $v=(u_{1}+u_{2}+w_{1,1}+w_{2,1})/2$ to $U(2)\oplus E_8(-2)$ one obtains an even overlattice $\left(U(2)\oplus E_8(-2)\right)'$ of index 2 of $U(2)\oplus E_8(-2)$, which is isometric to $U\oplus E_8(-2)$. Hence the primitive embedding of $L_{2e,2}'$ in $\left(U(2)\oplus E_8(-2)\right)'\simeq U\oplus E_8(-2)$ is induced by a primitive embedding of $\leftarrowngle 4e\rightarrowngle$ in $U(2)$, which is given by $\leftarrowngle 4e\rightarrowngle\simeq \left\leftarrowngle \left(\begin{array}{c}1\\e\end{array}\right)\right\rightarrowngle$ in $U(2)$. 
Since $L_{e,2}$ and $L_{2e,2}'$ are primitively embedded in $U\oplus E_8(-2)$ and they determine uniquely their orthogonal complement in $\Lambda_{K3}$, the families $\mathcal{P}(L_{e,2})$ and $\mathcal{P}(L_{2e,2}')$ properly contain the family $\mathcal{P}(U\oplus E_8(-2))$. The isometry between the lattices $U\oplus E_8(-2)$ and $U(2)\oplus N$ follows by observing that they are lattices with rank 10, length 8 and the same discriminant form. \endproof Let $\mathcal{E}_R:R\rightarrow\mathbb{P}^1$ be a rational elliptic surface (i.e. $R$ is a rational surface endowed with an elliptic fibration $\mathcal{E}_R$). It is known that a base change of order 2 on this elliptic fibration branched on two smooth fibers induces an elliptic fibration $\mathcal{E}_S:S\rightarrow\mathbb{P}^1$ on a K3 surface $S$. If the fibration $\mathcal{E}_R$ has no reducible fibers, then $\mathbb{N}S(S)\simeq U\oplus E_8(-2)$, see e.g. \cite[Proposition 4.6]{GSal}. More in general the family of the K3 surfaces obtained by a base change of order 2 on a rational elliptic surface, is the family $\mathcal{P}\left(U\oplus E_8(-2)\right)$, see e.g. \cite{GSal}. \begin{proposition}\leftarrowbel{prop: sympl inv on U+E8 polarized} The 10-dimensional family $\mathcal{P}(U\oplus E_8(-2))$ is the family $\mathcal{R}$ of the K3 surfaces obtained by a base change of order two on a rational elliptic fibration $\mathcal{E}_R:R\rightarrow\mathbb{P}^1$. Let $S$ be a general member of $\mathcal{R}$ and let $\mathcal{E}_S$ be the elliptic fibration induced by $\mathcal{E}_R$: $\mathcal{E}_S$ has no reducible fibers and its Mordell--Weil rank is equal to 8. The symplectic involution $\sigma$ on $S$ preserves $\mathcal{E}_S$. Denoted by $\widetilde{S/\sigma}$ the desingularization of $S/\sigma$, $\mathbb{N}S(\widetilde{S/\sigma})\simeq U\oplus D_4\oplus D_4$. \end{proposition} \proof We already observed that $S$ is obtained by a base change of order 2 by $\mathcal{E}_R:R\rightarrow\mathbb{P}^1$. Then $\mathcal{E}_S$ admits an involution $\iota$ which acts only on the basis of the fibration, and which is the deck involution of the generically 2:1 cover $R\rightarrow S$. The involution $\iota$ maps fibers of $\mathcal{E}_S$ to other fibers and in particular preserves the class of the fiber and of the sections, i.e. it acts trivially on the N\'eron--Severi group. Thus it preserves the elliptic fibration (cf. \cite[Proposition 4.6]{GSal}). Moreover, $\iota$ preserves exactly two fibers of $\mathcal{E}_S$ (the ramification fibers of the cover $R\rightarrow S$). The elliptic fibration $\mathcal{E}_S$ is preserved also by the elliptic involution $\epsilon$, which preserves the classes of the fiber and of the zero section (i.e. a set of generators of $U$ in $\mathbb{N}S(S)\simeq U\oplus E_{8}(-2)$). The composition $\sigma:=\iota\circ\epsilon$ is a symplectic involution which acts trivially on $U\hookrightarrow U\oplus E_8(-2)\simeq \mathbb{N}S(S)$ and as $-\mathrm{id}$ on $E_8(-2)\hookrightarrow U\oplus E_8(-2)\simeq \mathbb{N}S(S)$. Thus $\sigma$ is a symplectic involution whose fixed locus consists of 4 points on each of the two fibers preserved by $\iota$. Hence the elliptic fibration $\mathcal{E}_S:S\rightarrow\mathbb{P}^1$ induces an elliptic fibration on $\widetilde{S/\sigma}$ whose generic fiber is a copy of the two fibers of $\mathcal{E}_S$ switched by $\sigma$. The images of the two fibers preserved by $\iota$ are two fibers of type $I_0^*$. 
The Picard number of $\widetilde{S/\sigma}$ is 10, which is also the rank of the trivial lattice of an elliptic fibration with two fibers of type $I_0^*$. We conclude that there are no sections of infinite order for the elliptic fibration induced by $\mathcal{E}_S$ on $\widetilde{S/\sigma}$ and that $\mathbb{N}S(\widetilde{S/\sigma})\simeq U\oplus D_4\oplus D_4$. \endproof We observe that $U\oplus D_4\oplus D_4\not\simeq U\oplus E_8(-2)$ since their discriminant groups are different, so $\mathbb{N}S(S)\not\simeq \mathbb{N}S(\widetilde{S/\sigma})$. By Proposition \ref{prop: U+E8(2)-polarized K3}, if $S$ is a K3 surface such that $\mathbb{N}S(S)\simeq U\oplus E_8(-2)$, then it admits a symplectic involution (described in the proof of Proposition \ref{prop: sympl inv on U+E8 polarized}) and it is also $2:1$ cyclically covered by a K3 surface. So it admits a 2-divisible set of rational curves, which we describe here. As observed $S$ is obtained by a base change of order 2 on $R$. Since $R$ is a rational elliptic surface, it is the blow up of $\mathbb{P}^2$ in nine points which are the base points of a pencil of generically smooth cubics. So $S$, which is a 2:1 double cover of $R$ branched on two smooth fibers, is a generically $2:1$ cover of $\mathbb{P}^2$ branched in the union of two smooth cubics $C_1$ and $C_2$ (see e.g. \cite[Section 2.2]{GSal}). The branch locus is singular in the nine points $C_1\cap C_2$. We denote by $H$ the genus $2$ divisor on $S$ such that $\varphi_{|H|}:S\rightarrow\mathbb{P}^2$ is this $2:1$ cover of $\mathbb{P}^2$ and by $D_i$, $i=0,\ldots 8,$ the classes of the rational curves contracted by $\varphi_{|H|}$ to the nine singular points of the branch locus. By construction the fiber of the fibration $\mathcal{E}_S$ is the class of $C_1$ (and of $C_2$), i.e. $\left(3H-\sum_{i=0}^8D_i\right)/2$. The curves in the linear system $|H-D_0|$ (and in $|H-D_1|$ respectively) on $S$ are mapped to lines of a pencil in $\mathbb{P}^2$, with base point $\varphi_{|H|}(D_0)$ (with base point $\varphi_{|H|}(D_1)$ respectively). Each line in this pencil meets the branch in 4 points (with the exception of $\varphi_{|H|}(D_0)$), and so its inverse image in $S$ is a $2:1$ cover of a rational curve branched in 4 points. So the curves in $|H-D_0|$ (resp. $|H-D_1|$) are genus 1 curves and $|H-D_0|$ and $|H-D_1|$ induce two genus 1 fibrations on $S$ (see \cite[Proposition 3.8]{GSal}). Since $(H-D_0)(H-D_1)=2$, the map $\varphi_{|2H-D_0-D_1|}$ is a generically $2:1$ map to $\mathbb{P}^1\times\mathbb{P}^1\subset\mathbb{P}^3$ (see \cite{SD}). It contracts the 8 mutually disjoint rational curves $D_i$, $i=2,\ldots ,8$, and $H-D_0-D_1$. The last contracted curve is the pullback of the line through the two points $\varphi_{|H|}(D_0)$ and $\varphi_{|H|}(D_1)$. The $2:1$ map $\varphi_{|H-D_0|}\times\varphi_{|H-D_1|}:S\rightarrow\mathbb{P}^1\times\mathbb{P}^1$ is induced by the $2:1$ map $\varphi_{|H|}:S\rightarrow\mathbb{P}^2$ via the birational transformation $\beta:\mathbb{P}^2\rightarrow\mathbb{P}^1\times \mathbb{P}^1$, which is the blow up of $\mathbb{P}^2$ in the two points $\varphi_{|H|}(D_0)$ and $\varphi_{|H|}(D_1)$ followed by the contraction of the line through these points. So the branch locus of the $2:1$ cover $S\rightarrow\mathbb{P}^1\times\mathbb{P}^1$ splits in the union of two genus 1 curves of bidegree $(2,2)$, which are the images of the two cubics $C_1$ and $C_2$ under the birational transformation $\beta$. 
The curves $\beta(C_1)$ and $\beta(C_2)$ intersect in 8 points in $\mathbb{P}^1\times\mathbb{P}^1$, which are the images of the curves $D_i$, $i=2,\ldots, 8$, and $H-D_0-D_1$. The classes of the pullbacks of $\beta(C_1)$ and $\beta(C_2)$ to $S$ coincide, and each of them is represented by the class $\left(2(H-D_0)+2(H-D_1)-\sum_{i=2}^8D_i-(H-D_0-D_1)\right)/2$. Since this class lies in $\mathrm{NS}(S)$, the divisor $\sum_{i=2}^8D_i+(H-D_0-D_1)$ is 2-divisible in $\mathrm{NS}(S)$, i.e. the set of curves $\{D_2,\ldots, D_8, H-D_0-D_1\}$ is an even set. \end{document}
\begin{document} \title{Quantum hierarchic models for information processing} \author{M.V.Altaisky \\ {\small Joint Institute for Nuclear Research, Joliot Curie 6, Dubna, 141980, Russia;} \\ {\small and Space Research Institute RAS, Profsoyuznaya 84/32, Moscow, 117997, Russia}\\ {\small e-mail: [email protected]} \and N.E.Kaputkina \\ {\small National University of Science and Technology ``MISiS''},\\ {\small Leninsky prospect 4, Moscow, 119049, Russia} \\ {\small e-mail: [email protected]} } \date{Sep 12, 2011} \maketitle \begin{abstract} Both classical and quantum computations operate with registers of bits. At the nanometer scale the quantum fluctuations at the position of a given bit, say, a quantum dot, not only lead to the decoherence of the quantum state of this bit, but also affect the quantum states of the neighboring bits, and therefore affect the state of the whole register. That is why the requirement of reliable separate access to each bit limits miniaturization, i.e., it constrains the memory capacity and the speed of computation. In the present paper we suggest an algorithmic way to tackle the problem of constructing reliable and compact registers of quantum bits. We suggest accessing the states of a quantum register hierarchically, descending from the state of the whole register to the states of its parts. Our method is similar to the quantum wavelet transform, and can be applied to information compression, quantum memory, and quantum computations. \end{abstract} \section{Introduction} Classical information can always be encoded in a sequence of bits, the entities with two classically distinguishable states. The miniaturization of information processing units to nanometer scales imposes constraints on the memory capacity and the speed of computation, whether the algorithm is classical or quantum. These constraints arise from the Heisenberg uncertainty principle and from the openness of any computational system. The mean momentum transfer required to access an element of size $\Delta x \sim 10^1$nm exceeds $\frac{\hbar}{2\Delta x}$. This corresponds to an electron velocity $v \sim 10^5 {\rm m}/{\rm s}$ and an energy of meV order. The interaction of the same element with the environment at room temperature $T \sim 300$K results in an energy transfer $k_B T \sim 25$ meV. The decrease of the operating voltage to less than one Volt makes nanoscale logical devices operate with only a few electrons. Thus each logical state is achieved not with certainty, but with a finite probability, constrained by quantum effects. In the absence of the environment, the state of a quantum bit is a linear superposition of the two basis states, 0 and 1: $$ |\psi\rangle = \alpha |0\rangle + \beta |1\rangle, $$ subjected to a unitary evolution $|\psi(t)\rangle = e^{-\imath t H} |\psi(0)\rangle$. The interaction with the environment decoheres the state $|\psi\rangle$ into a classical mixture of the two basis states. When the interaction with the environment can be neglected, the computation can be performed dissipation-free by the unitary evolution of the quantum register in a parallel way, according to quantum algorithms \cite{Nielsen-Chuang:2000,Stolze-Suter:2008}. However, there remains an obstacle that still prevents the practical implementation of a workable quantum computer with more than a few quantum bits. This obstacle is the {\em quantum decoherence} -- the loss of quantum information by means of relative dephasing of the qubits in the superposition of quantum states due to the interaction with the environment.
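For the reader who wishes to reproduce the order-of-magnitude estimates above, the short Python snippet below evaluates them numerically. It is an illustrative aid added here, not part of the original argument; in particular it assumes the GaAs effective mass $m^*=0.067\,m_e$ quoted in Section~\ref{qdot:sec} and uses $p\sim\hbar/\Delta x$, which is of the same order as $\hbar/(2\Delta x)$.
\begin{verbatim}
# Order-of-magnitude check of the estimates above (illustrative only).
hbar = 1.0545718e-34    # J*s
m_e  = 9.10938e-31      # kg
k_B  = 1.380649e-23     # J/K
meV  = 1.602176634e-22  # J

dx    = 10e-9           # element size ~ 10 nm
m_eff = 0.067 * m_e     # GaAs effective mass (assumption, see Section 4)

p = hbar / dx           # momentum transfer needed to resolve dx
v = p / m_eff           # corresponding electron velocity
E = p**2 / (2 * m_eff)  # corresponding kinetic energy

print(f"v  ~ {v:.1e} m/s")                          # ~ 1.7e5 m/s
print(f"E  ~ {E / meV:.1f} meV")                    # a few meV
print(f"kT ~ {k_B * 300 / meV:.1f} meV at 300 K")   # ~ 26 meV
\end{verbatim}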
Promising candidates for memory qubits, as well as for quantum gates, are quantum dots -- artificial atoms of $10^1$nm size with the spin of the excess electrons used as quantum bits \cite{BLD1999,HKPTV2007}. The spin of the excess electron in an isolated quantum dot can be reliably initialized to a ground state by optical pumping, or by thermal equilibration in a strong magnetic field. With a relaxation time of order $10^{-3}$ sec \cite{KN2000} and a spin decoherence time of order $10^{-6}$ sec \cite{KA1998}, the electron spin of a quantum dot can be used as a qubit in quantum computations. A challenging problem is to control the spin states of the electrons in an {\em array of quantum dots}, separated from each other by less than a typical optical wavelength, rather than the electron spin in a single quantum dot \cite{HKPTV2007}. The idea of the present paper is to arrange the quantum bits of a register {\em hierarchically} in blocks and process the blocks separately at each hierarchy level. In the next sections we will develop the necessary formalism to describe the Hilbert space of hierarchic states. We also present physical models for hierarchic quantum registers based on arrays of quantum dots. The remainder of this paper is organized as follows. In {\em Section \ref{wt:sec}} we generalize the ideas of the Mallat multiresolution analysis to a quantum register. In {\em Section \ref{gate:sec}} we discuss the CNOT quantum gate in a multiplet basis of the quantum register. In {\em Section \ref{qdot:sec}} we discuss the physical implementation of quantum gates on spin qubits based on quantum dots. In the {\em Conclusion} we summarize the advantages of quantum hierarchic information coding and the difficulties of manipulating such registers. \section{Generalization of wavelet transform for quantum registers \label{wt:sec}} Suppose we need to compress a large data vector ($N\!\gg\!1$) in such a way that a few coefficients store the most significant information -- say, the information that distinguishes the given object from all others -- the next coefficients store some less significant details, {\em etc.\, } Such a technique, first proposed by Burt and Adelson for digital image coding \cite{Burt-Adelson:1983}, is known as the {\em pyramidal image compression algorithm}. It is based on the idea that each group of four pixels of an image can be considered as a block, so that only one value is required to quantify the ``mean color'' of the block, and three more values are required to quantify the deviations of the pixel colors from the block mean. The same procedure can then be applied to group 4 blocks of 2$\times$2 pixels each into a 16-pixel block, and so forth. The averaging operator is usually denoted by $H$ and is referred to as the {\em low-pass filter}; the projection operator onto the space of details lost in the averaging is denoted by $G$ and is referred to as the {\em high-pass filter}. If no information is lost during compression, the low- and high-pass filters obey the condition $ G^* G + H^* H = 1 $. The $H$ and $G$ operators project a sequence of length $N$ onto sequences of length $N/2$ (so that the total amount of information is conserved), halving the resolution at each step. For one-dimensional data $s \in l^2(\mathbb{Z})$, the action of the $H$ and $G$ filters can be written as \begin{equation} (Hs)_i = \sum_n h_{n-2i} s_n, \quad (Gs)_i = \sum_n g_{n-2i} s_n.
\label{hg} \end{equation} The decomposition of the data vector with the $H$ and $G$ operators \eqref{hg} according to the scheme $$ \begin{array}{lllllll} s^0&\stackrel{H}{\to} & s^1 &\stackrel{H}{\to} & s^2 & \stackrel{H}{\to} & \ldots \\ &\stackrel{G}{\searrow}& &\stackrel{G}{\searrow} & &\stackrel{G}{\searrow} & \\ & & d^1 & & d^2 & & \ldots \end{array}, $$ together with the appropriate reconstruction, is known as the {\em fast wavelet transform algorithm} \cite{Daubechies:1988}. The pyramidal image coding was generalized into the Mallat multiresolution analysis (MRA) \cite{Mallat:1986}. The multiresolution analysis in ${L^2(\R)}$, or the {\em Mallat sequence}, is a nested sequence of closed subspaces $\{ V_j \}_{j\in\mathbb{Z}}, V_j \subset {L^2(\R)}$, such that \begin{enumerate} \item $\ldots \subset V_2 \subset V_1 \subset V_0 \subset V_{-1} \subset \ldots $ \item $\displaystyle {\rm clos\, } \cup_{j\in\mathbb{Z}} V_j = {L^2(\R)}$ \item $\displaystyle \cap_{j\in\mathbb{Z}} V_j = \{0\}$ \item The spaces $V_j$ and $V_{j-1}$ are ``similar'': \\ $ f(x) \in V_j \Leftrightarrow f(2x) \in V_{j-1},\quad j \in \mathbb{Z}$ \end{enumerate} To set a basis on the Mallat sequence one needs to choose a scaling function $\phi(x)$, so that $$V_j = {\rm linear\ span} \{ \phi^j_k; j,k \in \mathbb{Z} \}, $$ where $\phi^0_k(x) \equiv \phi(x-k),$ and $\phi^j_k (x) = 2^{-j/2} \phi(2^{-j}x - k)$. Any function $f\in V_1$, due to the inclusion property 1, can be written as a linear combination of the basis functions of $V_0$. Since the spaces $V_j$ and $V_{j+1}$ differ in resolution, some details are lost when one sequentially projects a function $f \in V_0$ onto the ladder of spaces $V_1, V_2, \ldots$. These details can be stored in the orthogonal complements $W_j = V_{j-1}\setminus V_j$. Explicitly: $V_0 = W_1 \oplus W_2 \oplus W_3 \oplus \ldots \oplus V_M $. In the quantum case we might also suggest a block structure for the application of a linear operator to a quantum register. However, we first have to consider a simpler question: how can the qubits be stored in a quantum register, and can there exist a quantum structure similar to the Mallat sequence? Direct analogs of the Haar wavelet transform, based on quantum networks, have already been suggested for quantum computations \cite{hoyer1997,fijany1999quantum}, but they implement separate access to each quantum bit using quantum gates, and can hardly form an effective memory. Suppose we have a quantum register consisting of $N=2^M$ qubits and we are going to store $N'<N$ quantum bits of information in it. The higher the ratio $N/N'$, the higher the fidelity of information storage that can be achieved, since more than one qubit can be used to store the same information. Such devices may be of practical use if the number of quantum bits allocated depends on the actual amount of information to be stored. Let us consider a quantum register implemented on spin-half particles: $$ \otimes_{i=1}^N |s^0_i\rangle = \otimes_{i=1}^N (a^0_i |\uparrow\rangle^0_i +b^0_i |\downarrow\rangle^0_i ),\quad |a^0_i|^2+|b^0_i|^2=1. $$ In contrast to classical bits, the qubits take their values in the $SU(2)$ group, rather than in $\mathbb{R}$. So, the direct application of the Haar wavelet algorithm \begin{equation} s^j_k = \frac{s^{j-1}_{2k}+s^{j-1}_{2k+1}}{\sqrt 2}, \quad d^j_k = \frac{s^{j-1}_{2k}-s^{j-1}_{2k+1}}{\sqrt 2}, \quad \label{pd} k = 0,\ldots,2^{M-j}-1, \end{equation} is not possible.
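For comparison with the quantum construction developed below, the following Python sketch implements the classical Haar filter pair \eqref{pd} and the pyramidal decomposition--reconstruction scheme built on \eqref{hg}. It is included for illustration only; none of the function names below appear in the quantum algorithm of this paper.
\begin{verbatim}
import numpy as np

def haar_step(s):
    # One level of the Haar fast wavelet transform, eq. (pd):
    # averages s^j and details d^j from s^{j-1} (even length assumed).
    s = np.asarray(s, dtype=float)
    avg = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # low-pass  (H)
    det = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # high-pass (G)
    return avg, det

def haar_decompose(s0, levels):
    # Pyramidal decomposition s^0 -> (d^1, ..., d^levels, s^levels).
    details, s = [], np.asarray(s0, dtype=float)
    for _ in range(levels):
        s, d = haar_step(s)
        details.append(d)
    return details, s

def haar_reconstruct(details, s):
    # Exact inversion, reflecting G*G + H*H = 1 for the Haar pair.
    for d in reversed(details):
        out = np.empty(2 * len(s))
        out[0::2] = (s + d) / np.sqrt(2.0)
        out[1::2] = (s - d) / np.sqrt(2.0)
        s = out
    return s

data = np.arange(8, dtype=float)            # a length-2^M data vector
det, coarse = haar_decompose(data, 3)
assert np.allclose(haar_reconstruct(det, coarse), data)   # lossless
\end{verbatim}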
However the spin state of a {\em pair of fermions} with spin $\frac{1}{2}$, considered as a compound boson, is completely determined by the product $|s_a\rangle \otimes |s_b\rangle$ by means of the Clebsch-Gordan coefficients; and {\sl vice versa}: orthogonality of the Clebsch-Gordan coefficients allows to reconstruct the state of the pair of fermions from the known state of the boson they comprise \cite{Brink-Satchler:1994}. If a product of $N$ fermion wave functions is decomposed into a direct sum of irreducible representations $T^{(J)}$, corresponding to the rotations of composite system: \begin{equation} \otimes_j D^{(j)}= \oplus_J c_J T^{(J)}, \label{dsum} \end{equation} the compound system may be in either of the states $T^{(J)}$, for which the coefficient $c_J$ is not equal to zero. If a compound system was {\em measured} to be in a state $T^{(m)}$, then only those product terms $|s^0\rangle \otimes \cdots \otimes |s^0\rangle$ can survive, which contribute to the $T^{(m)}$ term in \eqref{dsum}. The process of multiplication of the representations of angular momentum group can be done hierarchically, starting from pairs. The whole composite system can be described in terms of hierarchic state vectors $$ \Phi = \{ |s^M\rangle, |s^M s^{M-1}\rangle, \ldots ,|s^M s_{M-1}\ldots s^0\rangle \}. $$ In Hilbert space of hierarchic state vectors the reduced density matrix can be constructed by taking the trace over the states of the next hierarchy level \cite{Alt03IJQI}. Hierarchic representation provides an extra possibility to construct quantum gates by acting onto the states of the lower hierarchy level $(\alpha\,-\,1)$ depending on the states of the next hierarchic level ($\alpha$): $$ \hat B = |\bvec{i}^{(\alpha-1)}\rangle |\theta_m^{(\alpha)}\rangle B^m_{\bvec{i}\bvec{k}}\langle\theta_m^{(\alpha)}| \langle\bvec{k}^{(\alpha-1)}|. $$ See \cite{Alt03IJQI} for details. A sequence $\{ s^0_j\}_j$ of bits can be hierarchically compressed using the projections onto the ladder of the Mallat sequence $V_0,V_{1},\ldots,V_{M}$, where only the projections onto the orthogonal complements $W_{k} = V_{k-1}\setminus V_{k}$ can be kept in the memory. For the sequence of $2^M$ quantum bits the information can be encoded in the spin state of the whole system $ \Psi = \prod_{k=0}^{2^M-1} \psi_k, $ where $\psi_k$ is a two-component spinor describing the $k$-th qubit. Let us consider simplest cases of $M=1$ and $M=2$. For $\bm{M=1}$ we have a pair of quantum bits. The composite wave function of such system transforms according to the representation \begin{equation} D_\frac{1}{2} \otimes D_\frac{1}{2} = D_1 \oplus D_0, \label{d2} \end{equation} {\em i.e.\, } the total system can be either in triplet ($D_1$) or in singlet ($D_0$) state, or in their superposition. The bases of the product states (l.h.s. of \eqref{d2}) and the composite system states (r.h.s. of \eqref{d2}) are related by the linear transform \eqref{aa}. 
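Before writing the transform \eqref{aa} explicitly, it may help to see the decomposition \eqref{dsum} worked out numerically. The short Python routine below computes the multiplicities $c_J$ in $\left(D_{1/2}\right)^{\otimes N}$ by the standard angular-momentum addition rule; it is a counting aid added for illustration and reproduces, for $N=4$, the total dimension $16$ used in the hierarchy below.
\begin{verbatim}
from collections import Counter
from fractions import Fraction

def couple(j1, j2):
    # Spins appearing in D_{j1} x D_{j2}: |j1-j2|, ..., j1+j2.
    j, out = abs(j1 - j2), []
    while j <= j1 + j2:
        out.append(j)
        j += 1
    return out

def decompose_spins(N):
    # Multiplicities c_J in (D_{1/2})^{(x)N}, coupling one spin at a time.
    half = Fraction(1, 2)
    mult = Counter({half: 1})
    for _ in range(N - 1):
        new = Counter()
        for j, c in mult.items():
            for jj in couple(j, half):
                new[jj] += c
        mult = new
    return dict(mult)

# N = 2: {0: 1, 1: 1}        -> singlet + triplet, cf. eq. (d2)
# N = 4: {0: 2, 1: 3, 2: 1}  -> 2*1 + 3*3 + 1*5 = 16 states in total,
#                               consistent with the counting 16 = 7 + 4 + 5 below
for N in (2, 4):
    m = decompose_spins(N)
    dim = sum(int(2 * j + 1) * c for j, c in m.items())
    print(N, {str(j): c for j, c in sorted(m.items())}, "dim =", dim)
\end{verbatim}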
If the basis in two-qubit space is chosen as \begin{equation} (e_1,e_2,e_3,e_4) = (|\downarrow\rangle |\downarrow\rangle,|\downarrow\rangle|\uparrow\rangle, |\uparrow\rangle |\downarrow\rangle, |\uparrow\rangle |\uparrow\rangle), \label{pb} \end{equation} and the basis in the space of states of the composite boson is chosen as \begin{equation} (s_1,s_2,s_3,s_4) = (|0,0\rangle,|1,-1\rangle, |1,0\rangle, |1,+1\rangle), \label{mb} \end{equation} they are related by a linear transform \begin{equation} e_i = A_{ik} s_k, \quad A = \begin{pmatrix} 0 & 1 & 0 & 0 \cr -\frac{1}{\sqrt2} & 0 & \frac{1}{\sqrt2} & 0 \cr \frac{1}{\sqrt2} & 0 & \frac{1}{\sqrt2} & 0 \cr 0 & 0 & 0 & 1 \end{pmatrix}. \label{aa} \end{equation} For this system we can define the spaces $V_0,V_1, W_1$ as follows: $ V_0$ is the product space transforming according to $D_\frac{1}{2} \otimes D_\frac{1}{2}$; $V_1$ is the triplet state of the composite system, which transforms according to the $D_1$ representation; $W_1$ is the singlet state of the compound system. For $\bm{M=2}$ the finest resolution space $V_0$ is the span of the four-spinor products $$\Psi = \psi_0 \psi_1 \psi_2 \psi_3,$$ which transforms according to $(D_\frac{1}{2} \otimes D_\frac{1}{2} )\otimes (D_\frac{1}{2} \otimes D_\frac{1}{2} )$. We define $V_1$ as the linear span of the states of maximal spin of each block $$ V_1 = D_1 \otimes D_1= D_2 \oplus D_1 \oplus D_0.$$ In this case the detail space $W_1$ is $$ W_1 = V_0\setminus V_1 = D_1 \otimes D_0 + D_0 \otimes D_1 + D_0 \otimes D_0.$$ Similarly, the $V_2$ space is the maximal spin state of the next-level block, which transforms according to $D_2$. The corresponding detail space is $$W_2 = V_1\setminus V_2 = D_1 \oplus D_0.$$ The total number of degrees of freedom is conserved: $ V_0 = W_1 \oplus W_2 \oplus V_2$. Their dimensions are $16 = 7 + 4 + 5$. Encoding the information in descending order reduces the number of operations required to set the necessary configuration in the $V_0$ space. For instance, in a sufficiently strong magnetic field, the state $|2,2\rangle \in V_2 $ of the whole system uniquely determines the configuration of all 4 qubits. Decreasing the magnetic field results in the evolution of qubit pairs into singlet states. \section{Gate operations \label{gate:sec}} The memory on spin states can be exploited in both classical \cite{nomoto1996,orlov1997} and quantum \cite{RSL2000} memory devices. Classical devices working on Boolean logic with AND and OR operations dissipate an energy of at least $k_B T \ln 2$ per logical step. The dissipation imposes a constraint on computation speed. Quantum computations are time-reversible; they do not dissipate energy until the read-out of the final result. Quantum computation is performed by a sequence of unitary operations applied to the quantum register. Any conceivable unitary operation in quantum computing can be performed by sequential application of single-qubit gates and the two-qubit CNOT gate \cite{barenco1995}. That is why the CNOT quantum gate is a key element of quantum information processing. The action of the CNOT gate consists in flipping the state of the second qubit if the first qubit, used as a {\em control}, is in the state ``1''. The action of the CNOT gate is defined by the rules listed in Table~\ref{t1}.
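The basis change \eqref{aa}, and the multiplet-basis form of the CNOT gate obtained from it in Eq.~\eqref{cnot} below, can be checked with a few lines of NumPy. The snippet is a consistency check added for the reader's convenience; it assumes nothing beyond the matrices quoted in the text.
\begin{verbatim}
import numpy as np

s2 = np.sqrt(2.0)
# Basis change e_i = A_{ik} s_k between the polarization basis (pb)
# and the singlet/triplet basis (mb), eq. (aa).
A = np.array([[0,     1, 0,    0],
              [-1/s2, 0, 1/s2, 0],
              [ 1/s2, 0, 1/s2, 0],
              [0,     0, 0,    1]])
assert np.allclose(A.T @ A, np.eye(4))     # A is orthogonal

# CNOT in the polarization basis (first qubit is the control).
C = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])

# CNOT in the multiplet basis, F = A^{-1} C A, eq. (cnot).
F = np.linalg.inv(A) @ C @ A
F_expected = np.array([[ 0.5,  0, -0.5,  1/s2],
                       [ 0.0,  1,  0.0,  0.0 ],
                       [-0.5,  0,  0.5,  1/s2],
                       [ 1/s2, 0,  1/s2, 0.0 ]])
assert np.allclose(F, F_expected)
print(np.round(F, 3))   # F mixes the singlet |0,0> with |1,0> and |1,+1>
\end{verbatim}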
\begin{table} \begin{center} \begin{tabular}{cccr} control & target & resulting & $S_z$\\ qubit & qubit & state & \\ $|\downarrow\rangle$ &$|\downarrow\rangle$& $|\downarrow\rangle|\downarrow\rangle$ & $-1$ \\ $|\downarrow\rangle$ &$|\uparrow\rangle$ & $|\downarrow\rangle|\uparrow\rangle$ & $0$ \\ $|\uparrow\rangle$ &$|\downarrow\rangle$& $|\uparrow\rangle| \uparrow\rangle$& $0$ \\ $|\uparrow\rangle$ &$|\uparrow\rangle$& $|\uparrow\rangle|\downarrow\rangle$ & $+1$\\ \end{tabular} \caption{CNOT gate implemented on two spin qubits. The last column gives the projection of the total spin of the initial (control, target) pair onto the $z$ axis. $\uparrow$ corresponds to ``1'' (true), and $\downarrow$ corresponds to ``0'' (false). The first qubit in the pair is considered as a {\em control} qubit. The value of the second qubit is changed only if the first qubit is in the ``true'' state $|\uparrow\rangle$; otherwise the state of the second qubit is not affected.} \label{t1} \end{center} \end{table} Any two-level quantum system can be used as a quantum bit. Among the many suggested implementations of quantum bits, {\em viz.} nuclear magnetic resonance \cite{chuang1998experimental}, trapped ions \cite{cirac1995}, cavity electrodynamics \cite{turchette1995}, the usage of the quantum dot electron spin has a number of advantages: a qubit represented by a real $SU(2)$ spin is always a well-defined qubit with no possibility to dissipate; its decoherence time is much longer than that of other types of qubits: for GaAs the spin decoherence time is of microsecond order \cite{KA1998,BLD1999}. The control over the entanglement in a pair of quantum dot spin qubits can be performed by changing the coupling constant $J(t)$ in the Heisenberg Hamiltonian \begin{equation} H_S = J(t) \bvec{S}_1 \cdot \bvec{S}_2, \label{HH} \end{equation} with the typical switching time $\tau_s$ of the coupling constant $J(t)$ being of nanosecond order: the condition $\int_0^{\tau_s} J(t) dt = J_0 \tau_s = \pi {\ \rm mod\ } 2\pi$ swaps the states of the qubits $\bvec{S}_1$ and $\bvec{S}_2$ \cite{LossDiVincenzo1998PhysRevA.57.120}. Accessing the states of two spin qubits in the CNOT gate hierarchically, {\em i.e.\, } as the states of a composite boson, amounts to using the pair of operators ${\hat S}^2$ and ${\hat S}_z$ of the composite boson. In the polarization basis \eqref{pb} the CNOT gate matrix is written in the form $$ C = \begin{pmatrix} 1 & 0 & 0 & 0 \cr 0 & 1 & 0 & 0 \cr 0 & 0 & 0 & 1 \cr 0 & 0 & 1 & 0 \end{pmatrix}. $$ Since the multiplet states $(S,S_z)$ are linearly expressed in terms of the polarization states \eqref{pb}, see Eq.~\eqref{aa}, the CNOT gate in the multiplet basis \eqref{mb} is given by the matrix \begin{equation} F = A^{-1} C A,\quad F = \begin{pmatrix} \frac{1}{2} & 0 & -\frac{1}{2} & \frac{1}{\sqrt2} \cr 0 & 1 & 0 & 0 \cr -\frac{1}{2} & 0 & \frac{1}{2} & \frac{1}{\sqrt2} \cr \frac{1}{\sqrt2} & 0 & \frac{1}{\sqrt 2} & 0 \end{pmatrix}. \label{cnot} \end{equation} The implementation of the CNOT gate \eqref{cnot} requires mixing between the triplet and singlet states. This mixing is almost impossible for ordinary atoms, but can be easily performed on quantum dots subjected to an oscillating electromagnetic field \cite{WMC1992,DH2003,BGDB2008}. \section{Implementation on quantum dots \label{qdot:sec}} To build a hierarchic memory register (embedded in a nanostructure) one needs an array of switching elements, the states of which are reliably controlled by external fields.
Spin-based devices are promising for such applications in both conventional and quantum memory elements \cite{prinz1995,BLD1999,RSL2000}. The decoherence time of a charge qubit is of nanosecond order, {\em i.e.\, } $10^3$ times shorter than that of the spin qubit \cite{Huibers1998}. On the other hand the desired number of qubits in a quantum dot array can be entangled by changing the electromagnetic field acting on the array \cite{HKPTV2007}. Unlike real atoms, the singlet and triplet energy levels of GaAs quantum dots in an array can be easily controlled by changing the magnetic field and the interdot distance \cite{kaputkina2010aperiodic,WMC1992,Pfannkuche1993,BKL1996,LK1998a}. The quantum gates can be implemented either by changing tunneling barrier between neighboring single-electron quantum dots \cite{LossDiVincenzo1998PhysRevA.57.120,Waugh1996}, or by monitoring singlet-triplet transitions in two-electron quantum dot by means of spectroscopic manipulations \cite{DH2003,BGDB2008,Koppen2009}. Both ways are technologically feasible for GaAs heterostructures, where quantum dots with arbitrary number of excess electrons can be formed \cite{Tarucha1996,HKPTV2007}. A realization of quantum gates on the spin degrees of freedom of the coupled quantum dots have been proposed in \cite{LossDiVincenzo1998PhysRevA.57.120,BLD1999}. Similarly to the proposed realization of the CNOT (CROT) gate on the excitonic excitations in a pair of coupled quantum dots \cite{Li2003Science}, a pair of merged quantum dots in a double-well potential, see Fig.~\ref{qd2:pic}, allows for a four distinct spin states \eqref{pb}. \begin{figure} \caption{Two coupled one-electron quantum dots separated by the distance $2a$ form a quantum gate. Magnetic field $B$ is applied along the $z$ direction. The harmonic wells are centered at $(\pm a,0,0)$. The bias electric field can be applied in $x$ direction} \label{qd2:pic} \end{figure} The Hamiltonian of coupled single-electron quantum dots has the form $$ H = H_{kinetic} + H_{potential} + H_{Zeeman} + H_S, $$ where $H_{kinetic}$ is the kinetic term, $$H_{potential} = V(x,y)+\frac{e^2}{\varepsilon|r_1-r_2|} + e\sum_{i=1}^2 x_i E$$ includes the quantum dot confining potential $V(x,y)$, Coulomb repulsion of the excess electrons, and the action of the bias electric field $E$. The Zeeman splitting term is $$H_{Zeeman} = g \mu_B \sum_i \bvec{B}\cdot \bvec{S}_i,$$ and the Heisenberg Hamiltonian $H_S$ is given by \eqref{HH}. The confining potential is defined as $$ V(x,y) = \frac{m\omega_0^2}{2} \left[\frac{\left(x^2-a^2 \right)^2}{4a^2} +y^2 \right]. $$ The typical parameters of a quantum dot in GaAs, described in \cite{BLD1999}, are: $$ g \approx -0.44, \hbar\omega_0=3 \hbox{meV}, m=0.067m_e, \varepsilon=13.1.$$ The Bohr radius of harmonic confinement with the above listed parameters is $$a_B = \sqrt\frac{\hbar}{m\omega_0} \approx 20 \hbox{nm}.$$ The value of the spin-spin coupling constant \eqref{HH} for a pair of coupled single-electron quantum dots is \cite{BLD1999}: $$ J = \frac{\hbar\omega_0}{\sinh \left[2d^2 \left(2b-\frac{1}{b} \right)\right]} \left[ c \sqrt{b} \left\{ e^{-bd^2}I_0(bd^2)-e^{d^2 \left(b-1/b \right)}I_0\left(d^2 \left(b-1/b \right) \right) \right\} + \frac{3}{4b}(1+bd^2) \right], $$ where $ b = \frac{\omega}{\omega_0}=\sqrt{1+\left(\frac{\omega_L}{\omega_0}\right)^2}$ is dimensionless magnetic field, $\omega_L = \sqrt{\frac{eB}{2mc}}$ is the Larmor frequency, $I_0$ is the zeroth-order Bessel function. 
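To get a feeling for the formula for $J$ quoted above, the following Python sketch simply tabulates $J/(\hbar\omega_0)$ as a function of the dimensionless field $b$. The parameters $c$ (Coulomb interaction strength) and $d$ (dimensionless interdot half-distance) enter the formula but are not specified in the text, so they are treated here as free inputs with arbitrary sample values, and $I_0$ is taken to be the zeroth-order modified Bessel function of the first kind, as the notation suggests. Whether and where $J$ changes sign depends on these parameters; see \cite{BLD1999} for the detailed treatment.
\begin{verbatim}
import numpy as np

def exchange_J(b, d, c):
    # J/(hbar*omega_0) for two coupled one-electron dots, as quoted above.
    # d and c are free dimensionless parameters (not specified in the text);
    # np.i0 is the zeroth-order modified Bessel function of the first kind.
    pref = 1.0 / np.sinh(2 * d**2 * (2 * b - 1 / b))
    term1 = c * np.sqrt(b) * (np.exp(-b * d**2) * np.i0(b * d**2)
            - np.exp(d**2 * (b - 1 / b)) * np.i0(d**2 * (b - 1 / b)))
    term2 = 0.75 * (1 + b * d**2) / b
    return pref * (term1 + term2)

# b = sqrt(1 + (omega_L/omega_0)^2) >= 1 grows with the applied field B.
for b in (1.01, 1.2, 1.5, 2.0, 3.0):
    print(f"b = {b:4.2f}   J/(hbar*omega_0) = {exchange_J(b, 0.7, 2.4):+.4f}")
\end{verbatim}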
Varying the magnetic field $B$ in the range 0-2T one can control the value and the sign of the coupling $J$ in the range about $\pm 1$meV. The energy difference between the singlet and the triplet states in two-electron quantum dots can be found in \cite{WMC1992}. Quantum XOR, or the CNOT, gate can be obtained by applying a sequence of operations, consisting of single qubit rotations and the swapping of two qubits \cite{LossDiVincenzo1998PhysRevA.57.120}: \begin{equation} U_{XOR} = e^{\imath \frac{\pi}{2}S_{1z}} e^{-\imath \frac{\pi}{2}S_{2z}} U_{swap}^\frac{1}{2} e^{\imath\pi S_{1z}} U_{swap}^\frac{1}{2}. \label{swap} \end{equation} The swapping of two spin states is provided by the Heisenberg Hamiltonian \eqref{HH} with the condition for pulse duration $\int_0^{\tau_s} J(t) dt = J_0 \tau_s = \pi\ {\rm mod\ } 2\pi$ provided swapping the states of the qubits $\bvec{S}_1$ and $\bvec{S}_2$ by unitary operator $$ U_{s}(t) = {\rm T}e^{\imath \int_0^t H_S(\tau)d\tau}. $$ Since the Hamiltonian \eqref{HH} can be expressed in terms of the total spin $\hat S = \hat S_1 + \hat S_2$: $$H_S = \frac{J}{2} \left[\hat{S}^2-\frac{3}{2} \right],$$ the coupling constant $J$ determines the energy difference between the singlet, and the triplet state and the swapping operation is performed by action on the spin states of the compound two electron system. The read-out of the final state can be performed either by spin-to-charge conversion for a single electron tunneling off the dot \cite{LossDiVincenzo1998PhysRevA.57.120,Engel2004}, or by optical spectroscopy of the quantum dot state \cite{BGDB2008}. The $\sqrt{{\rm swap}}$ operations have been performed experimentally on quantum dots with the operation time of 180ps \cite{Petta2005}. The fluctuations of magnetic field do not affect the coherence of the spin states if their length is much greater than the magnetic length of quantum dot, which is of $10^1$ nm order. We can also neglect the spin-orbital coupling \cite{DVL1998}, $$ H_{SO}=\frac{\omega_0^2}{2mc^2}\bvec{L}\cdot\bvec{S}, $$ since $H_{SO}/\hbar\omega_0 \sim 10^{-7}$. As a consequence of this, the dephasing effects caused by charge density fluctuations can significantly affect only the charge degrees of freedom, but have little effect on the spin (except for the case when the Coulomb repulsion of the electrons is significant \cite{HuSarma2006}). The interaction between the spins of different quantum dots is proportional to the inverse third power of the distance. The strength of this interaction can be controlled by making the sequence of quantum dots aperiodic. The dipole interaction between the spin qubits and the surroundings spins of the environment can be estimated as $(g\mu_B)^2/a_B^3\approx 10^{-9}$ meV for GaAs quantum dots \cite{BLD1999}, which is very small. The only significant source of dephasing is the hyperfine interaction with nuclear spins, which, however, can be strongly suppressed either by dynamically polarizing the nuclear spins, or by applying magnetic field \cite{BLD1999,SAS2011}. \section{Conclusion} In this paper we present an analog of the Mallat sequence for pyramidal information compression, widely implemented in classical computing as {\em discrete wavelet transform} (DWT), for the case of quantum memory on electron spins. The known quantum analogs of the DWT are based on a register of quantum bits connected by a quantum network, evaluating sums and differences of the qubit functions, same as for classical Haar wavelet algorithm \eqref{pd}. 
Like any scheme that addresses each qubit separately, such a scheme faces the usual scalability problem of quantum computation. Our method of information compression uses the multiplet decomposition of the whole space of spin states of the memory, instead of addressing each qubit separately. In doing so, we avoid the problem of decoherence caused by the local transmission of information to a given qubit, which constrains the miniaturization of information processing devices and implies extra restrictions on the geometric uniformity of the memory elements. Addressing different spin states of the whole system, rather than different quantum bits, can allow for ``flexible'' memory elements (say, on aperiodic sequences of quantum dots \cite{kaputkina2010aperiodic}). The price paid for such flexibility is the spectroscopic problem of reliably distinguishing the spin states of a quantum system containing 2, 4, and in general $2^M$ spins. Since the size of the physical support of the group of spins manipulated spectroscopically, say a group of excess electrons in quantum dots, is comparable to the size of a single qubit of the same nature, our method possibly provides a new way of miniaturizing memory elements on nanoscale heterostructures. \section*{Acknowledgment} The authors are grateful to Dr. V.N.Gorbachev for useful discussions. The research was supported in part by the RFBR Project 11-02-00604-a and the Program of Creation and Development of the National University of Science and Technology ``MISiS''. \end{document}
\begin{document} \title{ \Large \bf Analysis of a time-stepping scheme for time fractional diffusion problems with nonsmooth data \thanks { This work was supported in part by National Natural Science Foundation of China (11771312). } } \author{ Binjie Li \thanks{Email: [email protected]}, Hao Luo \thanks{Corresponding author. Email: [email protected]}, Xiaoping Xie \thanks{Email: [email protected]} \\ {School of Mathematics, Sichuan University, Chengdu 610064, China} } \date{} \maketitle \begin{abstract} This paper establishes the convergence of a time-stepping scheme for time fractional diffusion problems with nonsmooth data. We first analyze the regularity of the model problem with nonsmooth data, and then prove that the time-stepping scheme possesses optimal convergence rates in the $ L^2(0,T;L^2(\Omega)) $-norm and the $ L^2(0,T;H_0^1(\Omega)) $-norm with respect to the regularity of the solution. Finally, numerical results are provided to verify the theoretical results. \end{abstract} \noindent{\bf Keywords:} fractional diffusion problem, regularity, finite element, optimal a priori estimate. \section{Introduction} This paper considers the following time fractional diffusion problem: \begin{equation} \label{eq:model} \left\{ \begin{aligned} \D_{0+}^\alpha (u-u_0) - \Delta u & = f & & \text{in $ \Omega \times (0,T) $,} \\ u & = 0 & & \text{on $ \partial\Omega \times (0,T) $,} \\ u(0) & = u_0 & & \text{ in $ \Omega $, } \end{aligned} \right. \end{equation} where $ 0 < \alpha < 1 $, $ \D_{0+}^\alpha $ is a Riemann-Liouville fractional differential operator, $ \Omega \subset \mathbb R^d $ ($d=1,2,3$) is a convex polygonal domain, and $ u_0 $ and $ f $ are given functions. A considerable number of numerical algorithms for time fractional diffusion problems have been developed. Generally, these numerical algorithms can be divided into three types. The first type uses finite difference methods to approximate the time fractional derivatives. Despite their ease of implementation, the fractional difference methods are generally of temporal accuracy orders no greater than two; see \cite{Yuste2005, Langlands2005, Yuste2006, Chen2007, Lin2007, Zhuang2008, Deng2009, Zhang2009524, Chen2010,Cui2009, Liu2011,Gao2011,Zeng2013, Wang2014, gao2014new, Li2016} and the references therein. The second type applies spectral methods to discretize the time fractional derivatives; see \cite{Li2009, Zheng2015,Li2017spectral,Zayernouri2014Fractional,Zayernouri2012Karniadakis,Zayernouri2014Exponentially,yang2016spectral,Li2017A}. The main advantage of these algorithms is that they possess high-order accuracy, provided the solution is sufficiently smooth. The third type adopts finite element methods to approximate the time fractional derivatives; see \cite{Mclean2009Convergence, Mustapha2011Piecewise, Mustapha2015Time,Li2017A,Mustapha2009Discontinuous,Mustapha2012Uniform,Mustapha2012Superconvergence,Mclean2015Time,mustapha2014well-posedness,Mustapha2014A}. These algorithms are easy to implement, like those in the first type, and possess high-order accuracy. The convergence analysis of the aforementioned algorithms is generally carried out under the condition that the underlying solution is sufficiently smooth. So far, works on the numerical analysis for nonsmooth data are still rather limited. By using the Laplace transformation, McLean and Thom\'ee \cite{Thomee2010IMA} analyzed three fully discrete schemes for fractional order evolution equations, where the initial values are allowed to have only $ L^2(\Omega) $-regularity.
By using a growth estimation of the Mittag-Leffler function, Jin et al.~\cite{Jin2013SIAM,Jin2015IMA} analyzed the convergence of a spatial semi-discretization of problem \cref{eq:model}. They derived the following results: if $ f = 0 $, then \[ \nm{u(t) - u_h(t)}_{L^2(\Omega)} + h \nm{u(t)-u_h(t)}_{H_0^1(\Omega)} \leqslant C h^2 \snm{\ln h} t^{-\alpha} \nm{u_0}_{L^2(\Omega)}; \] if $ u_0 = 0 $ and $ 0 \leqslant \beta < 1 $, then \[ \nm{u\!-\!u_h}_{L^2(0,T;L^2(\Omega))} + h \nm{u\!-\!u_h}_{L^2(0,T;H_0^1(\Omega))} \leqslant C h^{2-\beta} \nm{f}_{L^2(0,T; H^{-\beta}(\Omega))}, \] \[ \nm{u(t)\!-\!u_h(t)}_{L^2(\Omega)}\!+\! h \nm{u(t)\!-\!u_h(t)}_{H_0^1(\Omega)}\!\leqslant\! C h^{2-\beta} \snm{\ln h}^2 \nm{f}_{L^\infty(0,t;H^{-\beta}(\Omega))}. \] Recently, McLean and Mustapha \cite{Mclean2015Time} derived that \[ \nm{u(t_n) - U^n}_{L^2(\Omega)} \leqslant C t_n^{-1} \Delta t \nm{u_0}_{L^2(\Omega)} \] for a piecewise constant DG scheme in temporal semi-discretization of fractional diffusion problems with $ f = 0$. For more related work, we refer the reader to \cite{Cuesta2006,Mclean2010Thom}. In this paper, we present a rigorous analysis of the convergence of a time-stepping scheme for problem \cref{eq:model}, which uses a space of continuous piecewise linear functions in the spatial discretization and a space of piecewise constant functions in the temporal discretization. We first apply the Galerkin method to investigate the regularity of problem \cref{eq:model} with non-smooth $ u_0 $ and $ f $, and then we derive the following error estimates: if $ 0 < \alpha < 1/2 $ and $ 0 \leqslant \beta < 1 $, then \begin{align*} & (h+\tau^{\alpha/2})^{-1} \nm{u-U}_{L^2(0,T;L^2(\Omega))} + \nm{u-U}_{L^2(0,T; H_0^1(\Omega))} \\ \leqslant{} & C \left( h^{1-\beta} + \tau^{\alpha(1-\beta)/2} \right) \left( \nm{f}_{L^2(0,T;H^{-\beta}(\Omega))} + \nm{u_0}_{H^{-\beta}(\Omega)} \right); \end{align*} if $ 1/2 \leqslant \alpha < 1 $ and $ 2-1/\alpha < \beta < 1 $, then \begin{align*} & (h+\tau^{\alpha/2})^{-1} \nm{u-U}_{L^2(0,T;L^2(\Omega))} + \nm{u-U}_{L^2(0,T; H_0^1(\Omega))} \\ \leqslant{} & C \left( h^{1-\beta} + \tau^{\alpha(1-\beta)/2} \right) \left( \nm{f}_{L^2(0,T;H^{-\beta}(\Omega))} + \nm{u_0}_{L^2(\Omega)} \right); \end{align*} if $ 1/2 \leqslant \alpha < 1 $ and $ u_0 = 0 $, then the above estimate also holds for all $ 0 \leqslant \beta < 1 $. Furthermore, if $ 1/2 < \alpha < 1 $ and $ u_0 = 0 $, then we derive the optimal error estimate \[ \nm{u-U}_{L^2(0,T;L^2(\Omega))} \leqslant C (h^2+\tau) \nm{f}_{H^{1-\alpha}(0,T;L^2(\Omega))}. \] By the techniques used in our analysis, we can also derive the error estimates under other conditions; for instance, $ u_0 $ and $ f $ are smoother than the aforementioned cases. The rest of this paper is organized as follows. \cref{sec:pre} introduces some Sobolev spaces, the Riemann-Liouville fractional calculus operators, the weak solution to problem \cref{eq:model}, and a time-stepping scheme. \cref{sec:regu} investigates the regularity of the weak solution, and \cref{sec:conv} establishes the convergence of the time-steeping scheme. Finally, \cref{sec:numer} provides some numerical experiments to verify the theoretical results. \section{Preliminaries} \label{sec:pre} \textit{\textbf{Sobolev Spaces.}} For a Lebesgue measurable subset $ \omega $ of $ \mathbb R^l $ ($l=1,2,3$), we use $ H^\gamma(\omega) $ ($ -\infty < \gamma < \infty $) and $ H_0^\gamma(\omega) $ ($ 0<\gamma<\infty $) to denote two standard Sobolev spaces \cite{Tartar2007}. 
Let $ X $ be a separable Hilbert space with an inner product $ (\cdot,\cdot)_X $ and an orthonormal basis $ \{e_i: i \in \mathbb N\} $. We use $ H^\gamma(0,T;X) $ ($0 \leqslant \gamma < \infty $) to denote an usual vector valued Sobolev space, and for $ 0 < \gamma < 1/2 $, we also use the norm \[ \snm{v}_{H^\gamma(0,T;X)} := \left( \sum_{i=0}^\infty \snm{(v,e_i)_X}_{H^\gamma(0,T)}^2 \right)^{1/2}, \quad \forall v \in H^\gamma(0,T;X). \] Here, the norm $ \snm{\cdot}_{H^\gamma(0,T)} $ is given by \[ \snm{w}_{H^\gamma(0,T)} := \left( \int_\mathbb R \snm{\xi}^{2\gamma} \snm{\mathcal F(w\chi_{(0,T)})(\xi)}^2 \,\mathrm{d}\xi \right)^{1/2}, \quad \forall w \in H^\gamma(0,T), \] where $ \mathcal F: L^2(\mathbb R) \to L^2(\mathbb R) $ is the Fourier transform operator and $ \chi_{(0,T)} $ is the indicator function of $ (0,T) $. \noindent\textit{\textbf{Fractional Calculus Operators.}} Let $ X $ be a Banach space and let $ -\infty \leqslant a < b \leqslant \infty $. For $ 0 < \gamma < \infty $, define \begin{align*} \left(\I_{a+}^\gamma v\right)(t) &:= \frac1{ \Gamma(\gamma) } \int_a^t (t-s)^{\gamma-1} v(s) \, \mathrm{d}s, \quad t\in(a,b), \\ \left(\I_{b-}^\gamma v\right)(t) &:= \frac1{ \Gamma(\gamma) } \int_t^b (s-t)^{\gamma-1} v(s) \, \mathrm{d}s, \quad t\in(a,b), \end{align*} for all $ v \in L^1(a,b;X) $, where $ \Gamma(\cdot) $ is the gamma function. For $ j-1 < \gamma < j $ with $ j \in \mathbb N_{>0} $, define \begin{align*} \D_{a+}^\gamma & := \D^j \I_{a+}^{j-\gamma}, \\ \D_{b-}^\gamma & := (-1)^j \D^j \I_{b-}^{j-\gamma}, \end{align*} where $ \D $ is the first-order differential operator in the distribution sense. \noindent\textit{\textbf{Eigenvectors of $-\Delta$.}} It is well known that there exists an orthonormal basis \[ \{\phi_i: i \in \mathbb N \} \subset H_0^1(\Omega) \cap H^2(\Omega) \] of $ L^2(\Omega) $ such that \[ -\Delta \phi_i = \lambda_i \phi_i, \] where $ \{ \lambda_i: i \in \mathbb N \} \subset \mathbb R_{>0} $ is a non-decreasing sequence. For any $ 0 \leqslant \gamma < \infty $, define \[ \dot H^\gamma(\Omega) := \left\{ v \in L^2(\Omega):\ \sum_{i=0}^\infty \lambda_i^\gamma (v,\phi_i)_{L^2(\Omega)}^2 < \infty \right\}, \] and equip this space with the norm \[ \nm{\cdot}_{\dot H^\gamma(\Omega)} := \left( \sum_{i=0}^\infty \lambda_i^\gamma (\cdot, \phi_i)_{L^2(\Omega)}^2 \right)^{1/2}. \] For $ \gamma \in [0,1] \setminus \{0.5\} $, the space $ \dot H^\gamma(\Omega) $ coincides with $ H_0^\gamma(\Omega) $ with equivalent norms, and for $ 1 < \gamma \leqslant 2 $, the space $ \dot H^\gamma(\Omega) $ is continuously embedded into $ H^\gamma(\Omega) $. \noindent\textit{\textbf{Weak Solution.}} Define \[ W := H^{\alpha/2}(0,T;L^2(\Omega)) \cap L^2(0,T;\dot H^1(\Omega)) \] and endow this space with the norm \[ \nm{\cdot}_{W} := \snm{\cdot}_{H^{\alpha/2}(0,T;L^2(\Omega))} + \nm{\cdot}_{L^2(0,T;\dot H^1(\Omega))}. \] Assuming that \begin{equation} \label{eq:basic_assum} \D_{0+}^\alpha u_0 + f \in W^*, \end{equation} we call $ u \in W $ a weak solution to problem \cref{eq:model} if \begin{equation} \label{eq:weak_form} \dual{ \D_{0+}^\alpha u, v }_{ H^{\alpha/2}( 0,T;L^2(\Omega) ) } + \dual{ \nabla u, \nabla v }_{ \Omega \times (0,T) } = \dual{\D_{0+}^\alpha u_0 + f,v}_{W} \end{equation} for all $ v \in W $. 
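As a concrete illustration of the operator $ \I_{0+}^\gamma $ defined above, the following Python snippet approximates $ (\I_{0+}^\gamma v)(t) $ on a uniform grid by integrating the kernel exactly against a piecewise constant interpolant of $ v $, and compares the result with the classical identity $ \I_{0+}^\gamma t^\mu = \frac{\Gamma(\mu+1)}{\Gamma(\mu+\gamma+1)} t^{\mu+\gamma} $. This is a sanity check added for the reader; it is independent of the scheme \cref{eq:algor} analyzed below, and the orders $ \gamma $ and $ \mu $ are arbitrary sample values.
\begin{verbatim}
import numpy as np
from math import gamma

def rl_integral(v, t, gam):
    # (I_{0+}^gam v)(t_k) on a uniform grid t, with v replaced by its
    # midpoint values on each subinterval and the kernel integrated exactly.
    n = len(t)
    out = np.zeros(n)
    for k in range(1, n):
        tj0, tj1 = t[:k], t[1:k+1]               # subinterval endpoints
        vmid = v(0.5 * (tj0 + tj1))              # midpoint values of v
        w = ((t[k] - tj0)**gam - (t[k] - tj1)**gam) / gamma(gam + 1)
        out[k] = np.sum(w * vmid)
    return out

gam, mu = 0.4, 1.5                               # sample orders (assumptions)
t = np.linspace(0.0, 1.0, 201)
approx = rl_integral(lambda s: s**mu, t, gam)
exact = gamma(mu + 1) / gamma(mu + gam + 1) * t**(mu + gam)
print("max error:", np.abs(approx - exact).max())  # shrinks as the grid is refined
\end{verbatim}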
Throughout the paper, if $ \omega $ is a Lebesgue measurable set of $ \mathbb R^l $ ($ l= 1,2,3,4 $) then the symbol $ \dual{p,q}_\omega $ means $ \int_\omega pq $, and if $ X $ is a Banach space then $ \dual{\cdot,\cdot}_X $ means the duality pairing between $ X^* $ (the dual space of $ X $) and $ X $. \begin{rem} \label{rem:well-posedness} The above weak solution is first introduced by Li and Xu \cite{Li2009}. Evidently, the well-known Lax-Milgram theorem indicates that problem \cref{eq:model} admits a unique weak solution by \cref{lem:coer}. Moreover, \[ \nm{u}_{W} \leqslant C \nm{\D_{0+}^\alpha u_0 + f}_{W^*}, \] where $ C $ is a positive constant that depends only on $ \alpha $. \end{rem} \noindent\textit{\textbf{Discretization.}} Let \[ 0 = t_0 < t_1 < \ldots < t_J = T \] be a partition of $ [0,T] $. Set $ I_j := (t_{j-1},t_j) $ for each $ 1 \leqslant j \leqslant J $, and we use $ \tau $ to denote the maximum length of these intervals. Let $ \mathcal K_h $ be a conventional conforming and shape regular triangulation of $ \Omega $ consisting of $ d $-simplexes, and we use $ h $ to denote the maximum diameter of the elements in $ \mathcal K_h $. Define \begin{align*} \mathcal S_h&:= \left\{ v_h \in H_0^1(\Omega):\ v_h|_K \in P_1(K),\ \forall \,K \in \mathcal K_h \right\},\\ \mathcal M_{h,\tau} &:= \left\{ V \in L^2(0,T;\mathcal S_h):\ V|_{I_j} \in P_0(I_j;\mathcal S_h),\ \forall\,1\leqslant j\leqslant J \right\}, \end{align*} where $ P_1(K) $ is the set of all linear polynomials defined on $ K $, and $ P_0(I_j; \mathcal S_h) $ is the set of all constant $ \mathcal S_h$-valued functions defined on $ I_j $. Naturally, the discretization of problem \cref{eq:weak_form} reads as follows: seek $ U \in \mathcal M_{h,\tau} $ such that \begin{equation} \label{eq:algor} \dual{\D_{0+}^\alpha U,V}_{H^{\alpha/2}(0,T;L^2(\Omega))} + \dual{\nabla U,\nabla V}_{\Omega \times (0,T)} = \dual{\D_{0+}^\alpha u_0 + f,V}_{W} \end{equation} for all $ V \in \mathcal M_{h,\tau} $. \begin{rem} Similarly to the stability estimate in \cref{rem:well-posedness}, we have \[ \nm{U}_W \leqslant C \nm{\D_{0+}^\alpha u_0 + f}_{W^*}, \] where $ C $ is a positive constant depending only on $ \alpha $. Therefore, problem \cref{eq:algor} is also stable under condition \cref{eq:basic_assum}. \end{rem} \section{Regularity} \label{sec:regu} Let us first consider the following problem: seek $ y \in H^{\alpha/2}(0,T) $ such that \begin{equation} \label{eq:frac_ode_weak} \dual{ \D_{0+}^\alpha(y-y_0), z }_{ H^{\alpha/2}(0,T) } + \lambda \dual{y,z}_{(0,T)} = \dual{g,z}_{(0,T)} \end{equation} for all $ z \in H^{\alpha/2}(0,T) $, where $ g \in L^2(0,T) $, and $ y_0 $ and $ \lambda>1 $ are two real constants. By \cref{lem:coer}, the Lax-Milgram theorem indicates that the above problem admits a unique solution $ y \in H^{\alpha/2}(0,T) $. Moreover, it is evident that \begin{equation} \label{eq:frac_ode} \D_{0+}^\alpha (y-y_0) = g -\lambda y \end{equation} in $ L^2(0,T) $. For convenience, we use the following convention: if the symbol $ C $ has subscript(s), then it means a positive constant that depends only on its subscript(s), and its value may differ at each of its occurrence(s). Additionally, in this section we assume that $ u $ and $ y $ are the solutions to problems \cref{eq:weak_form,eq:frac_ode_weak}, respectively. 
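Before turning to the estimates, it may help to see the single-mode problem \cref{eq:frac_ode} solved numerically. The Python sketch below uses the classical implicit L1 scheme, exploiting the fact that $ \D_{0+}^\alpha(y-y_0) $ coincides with the Caputo derivative of $ y $; it is an illustration of the stiff scalar problems underlying the analysis of this section, and is {\em not} the piecewise constant Galerkin scheme \cref{eq:algor} studied in this paper.
\begin{verbatim}
import numpy as np
from math import gamma

def l1_fractional_ode(alpha, lam, y0, g, T, N):
    # Implicit L1 scheme for D^alpha(y - y0) = g(t) - lam*y on (0, T].
    tau = T / N
    t = np.linspace(0.0, T, N + 1)
    b = np.arange(1, N + 1)**(1 - alpha) - np.arange(0, N)**(1 - alpha)
    d0 = tau**(-alpha) / gamma(2 - alpha)
    y = np.empty(N + 1)
    y[0] = y0
    for n in range(1, N + 1):
        if n > 1:
            diffs = y[n-1:0:-1] - y[n-2::-1]   # y_{n-k} - y_{n-k-1}, k = 1..n-1
            hist = np.dot(b[1:n], diffs)
        else:
            hist = 0.0
        y[n] = (g(t[n]) + d0 * (y[n-1] - hist)) / (d0 + lam)
    return t, y

# Example: alpha = 0.6, a stiff mode lam = 50, y0 = 1, constant forcing g = 1.
t, y = l1_fractional_ode(0.6, 50.0, 1.0, lambda s: 1.0, 1.0, 400)
print(y[-1], 1.0 / 50.0)   # y(T) relaxes slowly (algebraically) towards g/lam
\end{verbatim}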
\begin{lem} \label{lem:regu_ode_I} If $ 0 < \alpha < 1/2 $ and $ 0 \leqslant \beta < 1 $, then \begin{equation} \label{eq:regu_ode_I} \begin{aligned} & \lambda^{\beta/2} \snm{y}_{H^{\alpha(1-\beta/2)}(0,t)} + \lambda^{(1+\beta)/2} \snm{y}_{H^{(1-\beta)\alpha/2}(0,t)} + \lambda \nm{y}_{L^2(0,t)} \\ \leqslant{} & C_\alpha \left( \nm{g}_{L^2(0,t)} + t^{1/2-\alpha} \snm{y_0} \right) \end{aligned} \end{equation} for all $ 0 < t < T $. \end{lem} \begin{proof} Let us first prove that $ y \in H^\alpha(0,T) $. By the definition of $ \D_{0+}^\alpha $, equality \cref{eq:frac_ode} implies \[ \left( \I_{0+}^{1-\alpha}(y-y_0) \right)' = g - \lambda y, \] so that using integration by parts gives \[ \I_{0+}^{1-\alpha} (y-y_0) = \left( \I_{0+}^{1-\alpha} (y-y_0) \right)(0) + \I_{0+} (g-\lambda y). \] In addition, since \[ \snm{ \left( \I_{0+}^{1-\alpha} (y-y_0) \right)(s) } \leqslant \frac1{\Gamma(1-\alpha)} \sqrt{\frac{s^{1-2\alpha}}{1-2\alpha}} \, \nm{y-y_0}_{L^2(0,s)}, \quad 0 < s < T, \] we have \[ \left( \I_{0+}^{1-\alpha} (y-y_0) \right)(0) = \lim_{s \to 0+} \left( \I_{0+}^{1-\alpha} (y-y_0) \right)(s) = 0. \] Consequently, \[ \I_{0+}^{1-\alpha} (y-y_0) = \I_{0+} (g-\lambda y), \] and hence a simple computation gives that \[ y = y_0 + \I_{0+}^\alpha (g-\lambda y). \] Therefore, \cref{lem:regu} indicates that $ y \in H^\alpha(0,T) $. Then let us prove that \begin{equation} \label{eq:regu_ode_I-12} \snm{y}_{H^\alpha(0,t)}^2 + \lambda \snm{y}_{H^{\alpha/2}(0,t)}^2 + \lambda^2 \nm{y}_{L^2(0,t)}^2 \leqslant C_\alpha \left( \nm{g}_{L^2(0,t)}^2 + t^{1-2\alpha} \snm{y_0}^2 \right). \end{equation} Multiplying both sides of \cref{eq:frac_ode} by $ y $ and integrating over $ (0,t) $ yields \[ \dual{\D_{0+}^\alpha y, y}_{(0,t)} + \lambda \nm{y}_{L^2(0,t)}^2 = \dual{g,y}_{(0,t)} + \dual{\D_{0+}^\alpha y_0, y}_{(0,t)}. \] Since \begin{align*} \dual{g,y}_{(0,t)} & \leqslant \frac1\lambda \nm{g}_{L^2(0,t)}^2 + \frac\lambda4 \nm{y}_{L^2(0,t)}^2, \\ \dual{\D_{0+}^\alpha y_0,y}_{(0,t)} & \leqslant \frac1\lambda \nm{\D_{0+}^\alpha y_0}_{L^2(0,t)}^2 + \frac\lambda4 \nm{y}_{L^2(0,t)}^2, \end{align*} we have \[ \dual{\D_{0+}^\alpha y,y}_{(0,t)} + \lambda \nm{y}_{L^2(0,t)}^2 \leqslant C_\alpha \left( \lambda^{-1} \nm{g}_{L^2(0,t)}^2 + \lambda^{-1} t^{1-2\alpha} \snm{y_0}^2 \right). \] From \cref{lem:coer} it follows that \[ \lambda \snm{y}_{H^{\alpha/2}(0,t)}^2 + \lambda^2 \nm{y}_{L^2(0,t)}^2 \leqslant C_\alpha \left( \nm{g}_{L^2(0,t)}^2 + t^{1-2\alpha} \snm{y_0}^2 \right). \] Analogously, multiplying both sides of \cref{eq:frac_ode} by $ \D_{0+}^\alpha y $ and integrating over $ (0,t) $, we obtain \[ \snm{y}_{H^\alpha(0,t)}^2 + \lambda \snm{y}_{H^{\alpha/2}(0,t)}^2 \leqslant C_\alpha \left( \nm{g}_{L^2(0,t)}^2 + t^{1-2\alpha} \snm{y_0}^2 \right). \] Therefore, combining the above two estimates yields \cref{eq:regu_ode_I-12}. Now, let us prove that \begin{equation} \label{eq:regu_ode_I-34} \lambda^\beta \snm{y}_{H^{\alpha(1-\beta/2)}(0,t)}^2 + \lambda^{1+\beta} \snm{y}_{H^{\alpha(1-\beta)/2}(0,t)}^2 \leqslant C_\alpha \left( \nm{g}_{L^2(0,t)}^2 + t^{1-2\alpha} \snm{y_0}^2 \right). \end{equation} Since \[ \alpha(1-\beta/2) = \beta\, \alpha/2 + (1-\beta)\, \alpha, \] applying \cite[Proposition~1.32]{Bahouri2011} yields \[ \snm{y}_{H^{\alpha(1-\beta/2)}}(0,t) \leqslant \snm{y}_{H^{\alpha/2}(0,t)}^\beta \snm{y}_{H^\alpha(0,t)}^{1-\beta}. 
\] Therefore, by \cref{eq:regu_ode_I-12} we obtain \begin{align*} & \lambda^\beta \snm{y}_{H^{\alpha(1-\beta/2)}(0,t)}^2 \leqslant \left(\lambda \snm{y}_{H^{\alpha/2}(0,t)}^2 \right)^\beta \left( \snm{y}_{H^\alpha(0,t)}^2 \right)^{1-\beta} \\ \leqslant{} & \lambda \snm{y}_{H^{\alpha/2}(0,t)}^2 + \snm{y}_{H^\alpha(0,t)}^2 \\ \leqslant{} & C_\alpha \left( \nm{g}_{L^2(0,t)}^2 + t^{1-2\alpha} \snm{y_0}^2 \right), \end{align*} by Young's inequality. Analogously, we have \[ \lambda^{1+\beta} \snm{y}_{H^{(1-\beta)\alpha/2}(0,t)}^2 \leqslant C_\alpha \left( \nm{g}_{L^2(0,t)}^2 + t^{1-2\alpha} \snm{y_0}^2 \right), \] and using the above two estimates then proves \cref{eq:regu_ode_I-34}. Finally, combining \cref{eq:regu_ode_I-12,eq:regu_ode_I-34} yields \cref{eq:regu_ode_I} and thus concludes the proof. \end{proof} \begin{lem} \label{lem:regu_ode_II} If $ 1/2 \leqslant \alpha < 1 $ and $ 0 \leqslant \theta < 1/\alpha -1 $, then \begin{small} \begin{equation} \label{eq:regu_ode_II} \begin{aligned} & \lambda^{(\theta-1)/2} \nm{y}_{H^\alpha(0,T)} + \nm{y}_{H^{\alpha(1+\theta)/2}(0,T)} + \lambda^{\theta/2} \snm{y}_{H^{\alpha/2}(0,T)} + \lambda^{1/2} \nm{y}_{H^{\alpha\theta/2}(0,T)} \\ & {} + \lambda^{(1+\theta)/2} \nm{y}_{L^2(0,T)} \leqslant C_{\alpha,\theta,T} \left( \lambda^{(\theta-1)/2} \nm{g}_{L^2(0,T)} + \snm{y_0} \right). \end{aligned} \end{equation} \end{small} \end{lem} \begin{proof} Proceeding as in the proof of \cref{lem:regu_ode_I} yields \[ y = y_0 + \frac{c}{\Gamma(\alpha)} t^{\alpha-1} + \I_{0+}^\alpha (g-\lambda y), \] where \[ c = \left( \I_{0+}^{1-\alpha} (y-y_0) \right)(0). \] Since $ y \in H^{\alpha/2}(0,T) $ and \cref{lem:regu} implies $ \I_{0+}^\alpha(g-\lambda y) \in H^\alpha(0,T) $, it is evident that $ c= 0 $, and hence \[ y = y_0 + \I_{0+}^\alpha (g-\lambda y) \in H^\alpha(0,T). \] Furthermore, \cref{lem:regu} indicates that \begin{equation} \label{eq:xy-1} \nm{y}_{H^\alpha(0,T)} \leqslant C_{\alpha,T} \left( \snm{y_0} + \nm{g-\lambda y}_{L^2(0,T)} \right). \end{equation} Now, we proceed to prove \cref{eq:regu_ode_II}, and since the techniques used below are similar to those used in the proof of \cref{lem:regu_ode_I}, the forthcoming proof will be brief. Firstly, let us prove that \begin{equation} \label{eq:regu_ode_II-1} \snm{y}_{H^{\alpha/2}(0,T)}^2 + \lambda \nm{y}_{L^2(0,T)}^2 \leqslant C_{\alpha,\theta,T} \left( \lambda^{-1} \nm{g}_{L^2(0,T)}^2 + \lambda^{-\theta} \snm{y_0}^2 \right). \end{equation} Using the standard estimate (see \cite[Lemma 16.3]{Tartar2007}) \[ \int_0^T t^{-(1-\theta)\alpha} \snm{y(t)}^2 \, \mathrm{d}t \leqslant C_{\alpha,\theta} \snm{y}_{H^{(1-\theta)\alpha/2}(0,T)}^2, \] by the Cauchy-Schwarz inequality we obtain \[ \dual{\D_{0+}^\alpha y_0, y}_{H^{\alpha/2}(0,T)} = \frac{y_0}{\Gamma(1-\alpha)} \dual{t^{-\alpha}, y}_{(0,T)} \leqslant C_{\alpha,\theta,T} \snm{y_0} \snm{y}_{H^{(1-\theta)\alpha/2}(0,T)}. \] Since \[ \snm{y}_{H^{(1-\theta)\alpha/2}(0,T)} \leqslant \nm{y}_{L^2(0,T)}^\theta \snm{y}_{H^{\alpha/2}(0,T)}^{1-\theta}, \] it follows that \begin{equation} \label{eq:703} \begin{aligned} \dual{\D_{0+}^\alpha y_0, y}_{H^{\alpha/2}(0,T)} & \leqslant C_{\alpha,\theta,T} \snm{y_0} \nm{y}_{L^2(0,T)}^\theta \snm{y}_{H^{\alpha/2}(0,T)}^{1-\theta} \\ & \leqslant C_{\alpha,\theta,T} \snm{y_0} \lambda^{-\theta/2} \left( \lambda^{1/2}\nm{y}_{L^2(0,T)} \right)^\theta \snm{y}_{H^{\alpha/2}(0,T)}^{1-\theta} \\ & \leqslant C_{\alpha,\theta,T} \snm{y_0} \lambda^{-\theta/2} \left( \snm{y}_{H^{\alpha/2}(0,T)} + \lambda^{1/2} \nm{y}_{L^2(0,T)} \right).
\end{aligned} \end{equation} In addition, inserting $ z=y $ into \cref{eq:frac_ode_weak} yields \begin{align*} & \snm{y}_{H^{\alpha/2}(0,T)}^2 + \lambda \nm{y}_{L^2(0,T)}^2 \\ \leqslant{} & C_\alpha \left( \lambda^{-1} \nm{g}_{L^2(0,T)}^2 + \dual{\D_{0+}^\alpha y_0, y}_{H^{\alpha/2}(0,T)} \right). \end{align*} Consequently, inserting \cref{eq:703} into the above inequality and applying the Young's inequality with $ \epsilon $, we obtain \cref{eq:regu_ode_II-1}. Secondly, let us prove that \begin{equation} \label{eq:regu_ode_II-2} \nm{y}_{H^\alpha(0,T)}^2 \leqslant C_{\alpha,\theta,T} \left( \nm{g}_{L^2(0,T)}^2 + \lambda^{1-\theta} \snm{y_0}^2 \right). \end{equation} Multiplying both sides of \cref{eq:frac_ode} by $ \D_{0+}^\alpha(y-y_0) $ and integrating over $ (0,T) $, we obtain \[ \begin{aligned} {}& \nm{\D_{0+}^\alpha (y-y_0)}_{L^2(0,T)}^2 + \lambda \snm{y}_{H^{\alpha/2}(0,T)}^2\\ \leqslant{}& C_{\alpha,T} \left( \nm{g}_{L^2(0,T)}^2 + \lambda \dual{\D_{0+}^\alpha y_0,y}_{H^{\alpha/2}(0,T)} \right), \end{aligned} \] so that from \cref{eq:703,eq:regu_ode_II-1} it follows that \begin{align*} & \nm{\D_{0+}^\alpha (y-y_0)}_{L^2(0,T)}^2 + \lambda \snm{y}_{H^{\alpha/2}(0,T)}^2 \\ \leqslant{} & C_{\alpha,\theta,T} \left( \nm{g}_{L^2(0,T)}^2 + \snm{y_0} \lambda^{1-\theta/2} \left( \lambda^{-1/2} \nm{g}_{L^2(0,T)} + \lambda^{-\theta/2} \snm{y_0} \right) \right) \\ \leqslant{} & C_{\alpha,\theta,T} \left( \nm{g}_{L^2(0,T)}^2 + \snm{y_0} \lambda^{1/2-\theta/2} \nm{g}_{L^2(0,T)} + \lambda^{1-\theta} \snm{y_0}^2 \right) \\ \leqslant{} & C_{\alpha,\theta,T} \left( \nm{g}_{L^2(0,T)}^2 + \lambda^{1-\theta} \snm{y_0}^2 \right). \end{align*} Therefore, combing \cref{eq:frac_ode,eq:xy-1} yields \cref{eq:regu_ode_II-2}. Finally, using the same technique as that used to derive \cref{eq:regu_ode_I-34}, by \cref{eq:regu_ode_II-1,eq:regu_ode_II-2} we conclude that \begin{align*} & \nm{y}_{H^{\alpha(1+\theta)/2}(0,T)}^2 + \lambda \nm{y}_{H^{\alpha\theta/2}(0,T)}^2 \\ \leqslant{} & C_{\alpha,\theta,T} \left( \lambda^{\theta-1} \nm{g}_{L^2(0,T)}^2 + \snm{y_0}^2 \right), \end{align*} which, together with \cref{eq:regu_ode_II-1,eq:regu_ode_II-2}, yields inequality \cref{eq:regu_ode_II}. This theorem is thus proved. \end{proof} \begin{lem} \label{lem:regu_ode_III} Assume that $ 1/2 < \alpha < 1 $. If $ y_0 = 0 $ and $ g \in H^{1-\alpha}(0,T) $, then \begin{equation} \label{eq:regu_ode_III} \nm{y}_{H^1(0,T)} + \lambda^{1/2} \nm{y}_{H^{1-\alpha/2}(0,T)} + \lambda \nm{y}_{L^2(0,T)} \leqslant C_{\alpha,T} \nm{g}_{H^{1-\alpha}(0,T)}. \end{equation} \end{lem} \begin{proof} Let us first prove that \begin{equation} \label{eq:regu_ode_III-1} y' = \D_{0+}^{1-\alpha} (g - \lambda y). \end{equation} Since we have already proved \[ y = \I_{0+}^\alpha (g-\lambda y) \] in the proof of \cref{lem:regu_ode_II}, by \cref{lem:regu} we obtain $ y \in H^1(0,T) $. Moreover, because \[ \snm{ \I_{0+}^\alpha(g-\lambda y)(s) } \leqslant \frac{s^{\alpha-1/2}}{\Gamma(\alpha) \sqrt{2\alpha-1}} \nm{g-\lambda y}_{L^2(0,s)}, \quad 0 < s < T, \] we have \[ \lim_{s \to 0+}\I_{0+}^\alpha(g-\lambda y)(s) = 0. \] Consequently, we obtain $ y(0) = 0 $ and hence \[ \D_{0+}^\alpha y = \D_{0+}^\alpha \I_{0+} y' = \I_{0+}^{1-\alpha} y', \] which, together with \cref{eq:frac_ode}, yields \[ \I_{0+}^{1-\alpha} y' = g - \lambda y. \] Therefore, \[ y' = \D \I_{0+} y' = \D \I_{0+}^\alpha \I_{0+}^{1-\alpha} y' = \D_{0+}^{1-\alpha} \I_{0+}^{1-\alpha} y' = \D_{0+}^{1-\alpha} (g-\lambda y). \] This proves equality \cref{eq:regu_ode_III-1}. 
Then, let us prove \cref{eq:regu_ode_III}. Multiplying both sides of \cref{eq:regu_ode_III-1} by $ y' $ and integrating over $ (0,T) $ yields \[ \nm{y'}_{L^2(0,T)}^2 + \lambda \dual{\D_{0+}^{1-\alpha} y, y'}_{(0,T)} = \dual{\D_{0+}^{1-\alpha} g, y'}_{(0,T)}, \] so that \[ \nm{y'}_{L^2(0,T)}^2 + \lambda \dual{\D_{0+}^{1-\alpha} y, y'}_{(0,T)} \leqslant C_{\alpha,T} \nm{g}_{H^{1-\alpha}(0,T)}^2, \] by the Cauchy-Schwarz inequality, \cref{lem:coer} and the Young's inequality with $ \epsilon $. Additionally, using the fact that $ y \in H^1(0,T) $ with $ y(0) = 0 $ gives \[ \D_{0+}^{1-\alpha} y = \D \I_{0+}^\alpha y = \I_{0+}^\alpha y', \] so that \[ \dual{\D_{0+}^{1-\alpha}y, y'}_{(0,T)} \geqslant C_{\alpha,T} \nm{y}_{H^{1-\alpha/2}(0,T)}^2, \] by \cref{lem:key,lem:xy}. Therefore, \[ \nm{y'}_{L^2(0,T)}^2 + \lambda \nm{y}_{H^{1-\alpha/2}(0,T)}^2 \leqslant C_{\alpha,T} \nm{g}_{H^{1-\alpha}(0,T)}^2, \] and hence, as \cref{lem:regu_ode_II} implies \[ \lambda \nm{y}_{L^2(0,T)} \leqslant C_{\alpha,T} \nm{g}_{L^2(0,T)}, \] we readily obtain \cref{eq:regu_ode_III}. This completes the proof. \end{proof} It is clear that we can represent $ u $ in the following form \[ u(t) = \sum_{i=0}^\infty y_i(t) \phi_i, \quad 0 < t < T, \] where $ y_i $ solves problem \cref{eq:frac_ode_weak} with $ \lambda $, $g$ and $y_0$ replaced by $ \lambda_i $, $f_i$ and $u_{0,i}$, respectively. Here, note that $f_i$ and $ u_{0,i} $ are the coordinates of $f$ and $u_0$ respectively under the orthonormal basis $\{\phi_i:i\in\mathbb N\}$. Therefore, by the above three lemmas we readily conclude the following regularity estimates for problem \cref{eq:weak_form}. \begin{thm} \label{thm:regu_pde_I} Assume that $ 0 < \alpha < 1/2 $. If $ f \in L^2(0,T;H^{-\beta}(\Omega)) $ and $ u_0 \in H^{-\beta}(\Omega) $ with $ 0 \leqslant \beta < 1 $, then \begin{equation*} \begin{aligned} & \snm{u}_{H^{\alpha(1-\beta/2)}(0,t;L^2(\Omega))} + \snm{y}_{H^{\alpha/2}(0,t; \dot H^{1-\beta}(\Omega))} + \snm{u}_{H^{\alpha(1-\beta)/2}(0,t;\dot H^1(\Omega))} \\ & {} + \nm{u}_{L^2(0,t;\dot H^{2-\beta}(\Omega))} \leqslant C_{\alpha,\Omega} \left( \nm{f}_{L^2(0,t;H^{-\beta}(\Omega))} + t^{1/2-\alpha} \nm{u_0}_{H^{-\beta}(\Omega)} \right) \end{aligned} \end{equation*} for all $ 0 < t < T $. \end{thm} \begin{thm} \label{thm:regu_pde_II} Assume that $ 1/2 \leqslant \alpha < 1 $. If $ f \in L^2(0,T;H^{-\beta}(\Omega)) $ with $ 2 - 1/\alpha < \beta < 1 $ and $ u_0 \in L^2(\Omega) $, then \begin{equation*} \begin{aligned} & \nm{u}_{H^{\alpha(1-\beta/2)}(0,T;L^2(\Omega))} + \snm{u}_{H^{\alpha/2}(0,T; \dot H^{1-\beta}(\Omega))} + \nm{u}_{H^{\alpha(1-\beta)/2}(0,T; \dot H^1(\Omega))} \\ & {} + \nm{u}_{L^2(0,T; \dot H^{2-\beta}(\Omega))} \leqslant C_{\alpha,\beta,T,\Omega} \left( \nm{f}_{L^2(0,T;H^{-\beta}(\Omega))} + \nm{u_0}_{L^2(\Omega)} \right). \end{aligned} \end{equation*} Moreover, if $ u_0 = 0 $ and $ f \in L^2(0,T; H^{-\beta}(\Omega)) $ with $ 0 \leqslant \beta < 1 $, then the above estimate also holds. \end{thm} \begin{thm} Assume that $ 1/2 < \alpha < 1 $. If $ u_0 = 0 $ and $ f \in H^{1-\alpha}(0,T; L^2(\Omega)) $, then \begin{equation*} \begin{aligned} & \nm{u}_{H^1(0,T; L^2(\Omega))} + \nm{u}_{H^{1-\alpha/2}(0,T; \dot H^1(\Omega))} + \nm{u}_{L^2(0,T; \dot H^2(\Omega))} \\ \leqslant{} & C_{\alpha,T,\Omega} \nm{f}_{H^{1-\alpha}(0,T; L^2(\Omega))}. 
\end{aligned} \end{equation*} \end{thm} \section{Convergence} \label{sec:conv} We assume that $ u $ and $ U $ are respectively the solutions to problems \cref{eq:weak_form} and \cref{eq:algor}, and by $ a \lesssim b $ we mean that there exists a generic positive constant $ C $, independent of $ h $, $ \tau $ and $ u $, such that $ a \leqslant C b $. The main task of this section is to prove the following a priori error estimates. \begin{thm} \label{thm:conv_I} Assume that $ 0 < \alpha < 1/2 $ and $ 0 \leqslant \beta < 1 $. If $ u_0 \in H^{-\beta}(\Omega) $ and $ f \in L^2(0,T; H^{-\beta}(\Omega)) $, then \begin{align} & \nm{u-U}_{L^2(0,T;\dot H^1(\Omega))} \notag \\ \lesssim{} & \left( h^{1-\beta} + \tau^{\alpha(1-\beta)/2} \right) \left( \nm{f}_{L^2(0,T;H^{-\beta}(\Omega))} + \nm{u_0}_{H^{-\beta}(\Omega)} \right), \label{eq:conv_I_H1} \\ & \nm{u-U}_{L^2( 0,T;L^2(\Omega) )} \notag \\ \lesssim{} & \left( h^{2-\beta} + \tau^{\alpha(1-\beta/2)} \right) \left( \nm{f}_{L^2(0,T;H^{-\beta}(\Omega))} + \nm{u_0}_{H^{-\beta}(\Omega)} \right). \label{eq:conv_I_L2} \end{align} \end{thm} \begin{thm} \label{thm:conv_II} Assume that $ 1/2 \leqslant \alpha < 1 $ and $ 2-1/\alpha<\beta\leqslant 1 $. If $ u_0 \in L^2(\Omega) $ and $ f \in L^2(0,T; H^{-\beta}(\Omega)) $, then \begin{align*} & \nm{u-U}_{L^2(0,T;\dot H^1(\Omega))} \notag \\ \lesssim{} & \left( h^{1-\beta} + \tau^{\alpha(1-\beta)/2} \right) \left( \nm{f}_{L^2(0,T;H^{-\beta}(\Omega))} + \nm{u_0}_{L^2(\Omega)} \right), \\ & \nm{u-U}_{L^2(0,T;L^2(\Omega))} \notag \\ \lesssim{} & \left( h^{2-\beta} + \tau^{\alpha(1-\beta/2)} \right) \left( \nm{f}_{L^2(0,T;H^{-\beta}(\Omega))} + \nm{u_0}_{L^2(\Omega)} \right). \end{align*} Moreover, if $ u_0 = 0 $ and $ f \in L^2(0,T;H^{-\beta}(\Omega)) $, then the above two estimates also hold for all $ 0 \leqslant \beta < 1 $. \end{thm} \begin{thm} \label{thm:conv_III} Assume that $ 1/2 < \alpha < 1 $. If $ u_0 = 0 $ and $ f \in H^{1-\alpha}(0,T;L^2(\Omega)) $, then \begin{align*} \nm{u-U}_{L^2(0,T;L^2(\Omega))} & \lesssim \left( h^2 + \tau \right) \nm{f}_{H^{1-\alpha}(0,T;L^2(\Omega))},\\ \nm{u-U}_{L^2(0,T;\dot H^1(\Omega))} & \lesssim \left( h + \tau^{1-\alpha/2} \right) \nm{f}_{H^{1-\alpha}(0,T;L^2(\Omega))}. \end{align*} \end{thm} Since the proofs of \cref{thm:conv_II,thm:conv_III} are similar to that of \cref{thm:conv_I}, below we only show the latter. To this end, we start by introducing two interpolation operators. For any $ v \in L^1(0,T;X) $ with $ X $ being a separable Hilbert space, define $ P_\tau v $ by \[ \left( P_\tau v \right)|_{I_j} := \frac1{\tau_j} \int_{I_j} v(t) \, \mathrm{d}t, \quad 1 \leqslant j \leqslant J. \] Let $ P_h: L^2(\Omega) \to \mathcal S_h $ be the well-known Cl\'ement interpolation operator. For the above two operators, we have the following standard estimates \cite{Cl1975Approximation,Ciarlet2002}: if $ 0 \leqslant \beta \leqslant 1 $ and $ \beta \leqslant \gamma \leqslant 2 $, then \[ \nm{(I-P_h)v}_{H^\beta(\Omega)} \lesssim h^{\gamma-\beta} \nm{v}_{\dot H^\gamma(\Omega)}, \quad \forall v \in \dot H^\gamma(\Omega); \] if $ 0 \leqslant \beta < 1/2 $ and $ \beta \leqslant \gamma \leqslant 1 $, then \[ \nm{(I-P_\tau)w}_{H^\beta(0,T)} \lesssim \tau^{\gamma-\beta} \nm{w}_{H^\gamma(0,T)}, \quad \forall w \in H^\gamma(0,T). \] For clarity, below we shall use the above two estimates implicitly. \noindent{\bf Proof of \cref{thm:conv_I}.} Let us first prove \cref{eq:conv_I_H1}. 
By \cref{lem:coer}, a standard procedure yields that \[ \nm{u-U}_W \lesssim \nm{u-P_\tau P_h u}_W, \] then using the triangle inequality gives \begin{align*} \nm{u-U}_{W} & \lesssim \snm{(I-P_h)u}_{H^{\alpha/2}(0,T;L^2(\Omega))} + \snm{(I-P_\tau)P_hu}_{H^{\alpha/2}(0,T;L^2(\Omega))} \\ & \quad + {} \nm{(I-P_h)u}_{L^2(0,T;\dot H^1(\Omega))} + \nm{(I-P_\tau)P_hu}_{L^2(0,T;\dot H^1(\Omega))}. \end{align*} Since \begin{align*} \snm{(I-P_\tau)P_hu}_{H^{\alpha/2}(0,T;L^2(\Omega))} &\leqslant \snm{(I-P_\tau)u}_{H^{\alpha/2}(0,T;L^2(\Omega))},\\ \nm{(I-P_\tau)P_hu}_{L^2(0,T;\dot H^1(\Omega))} &\lesssim \nm{(I-P_\tau)u}_{L^2(0,T; \dot H^1(\Omega))}, \end{align*} it follows that \begin{equation}\label{eq:u-U_W} \begin{aligned} \nm{u-U}_{W}& \lesssim \snm{(I-P_h)u}_{H^{\alpha/2}(0,T;L^2(\Omega))} + \snm{(I-P_\tau)u}_{H^{\alpha/2}(0,T;L^2(\Omega))} \\ & \quad + {} \nm{(I-P_h)u}_{L^2(0,T;\dot H^1(\Omega))} + \nm{(I-P_\tau) u}_{L^2(0,T; \dot H^1(\Omega))}. \end{aligned} \end{equation} Therefore, \cref{eq:conv_I_H1} is a direct consequence of \cref{thm:regu_pde_I} and the following estimates: \begin{align*} \nm{(I-P_h)u}_{L^2(0,T;\dot H^1(\Omega))} & \lesssim h^{1-\beta} \nm{u}_{ L^2( 0,T; \dot H^{2-\beta}(\Omega) ) }, \\ \snm{(I-P_h)u}_{ H^{\alpha/2}( 0,T;L^2(\Omega) ) } & \lesssim h^{1-\beta} \snm{u}_{ H^{\alpha/2}( 0,T; \dot H^{1-\beta}(\Omega) ) }, \\ \snm{(I-P_\tau)u}_{ H^{\alpha/2}( 0,T;L^2(\Omega) ) } & \lesssim \tau^{\alpha(1-\beta)/2} \snm{u}_{ H^{\alpha(1-\beta/2)}( 0,T;L^2(\Omega) ) },\\ \nm{(I-P_\tau)u}_{ L^2( 0,T; \dot H^1(\Omega) ) } & \lesssim \tau^{\alpha(1-\beta)/2} \snm{u}_{ H^{\alpha(1-\beta)/2}( 0,T; \dot H^1(\Omega) ) }. \end{align*} Then let us prove \cref{eq:conv_I_L2}. By \cref{lem:coer}, the well known Lax-Milgram theorem implies that there exists a unique $ z \in W $ such that \[ \dual{\D_{T-}^\alpha z, v}_{H^{\alpha/2}(0,T;L^2(\Omega))} + \dual{\nabla z, \nabla v}_{\Omega \times (0,T)} = \dual{u-U,v}_{\Omega \times (0,T)} \] for all $ v \in W $. Substituting $ v = u-U $ into the above equation yields \begin{align*} \nm{u-U}_{L^2(0,T;L^2(\Omega))}^2 &= \dual{\D_{T-}^\alpha z, u-U}_{H^{\alpha/2}(0,T;L^2(\Omega))} + \dual{\nabla z,\nabla(u-U)}_{\Omega \times (0,T)} \\ &= \dual{\D_{0+}^\alpha(u-U), z}_{H^{\alpha/2}(0,T;L^2(\Omega))} + \dual{\nabla(u-U),\nabla z}_{\Omega \times (0,T)}, \end{align*} by \cref{lem:coer}. Setting $ Z = P_\tau P_h z $, as combining \cref{eq:weak_form,eq:algor} gives \[ \dual{\D_{0+}^\alpha(u-U), Z}_{H^{\alpha/2}(0,T;L^2(\Omega))} + \dual{\nabla(u-U), \nabla Z}_{\Omega \times (0,T)} =0, \] we obtain \begin{align*} & \nm{u-U}_{L^2(0,T;L^2(\Omega))}^2 \\ ={} & \dual{\D_{0+}^\alpha(u-U), z-Z}_{H^{\alpha/2}(0,T;L^2(\Omega))} + \dual{\nabla(u-U),\nabla (z-Z)}_{\Omega \times (0,T)}. \end{align*} Then \cref{lem:coer} implies that \begin{equation} \label{eq:conv_I_L2-1} \begin{aligned} \nm{u-U}_{L^2(0,T;L^2(\Omega))}^2 & \leqslant \snm{u-U}_{H^{\alpha/2}(0,T;L^2(\Omega))} \snm{z-Z}_{H^{\alpha/2}(0,T;L^2(\Omega))} \\ & \quad {} + \nm{u-U}_{L^2(0,T;\dot H^1(\Omega))} \nm{z-Z}_{L^2(0,T;\dot H^1(\Omega))}\\ & \leqslant \nm{u-U}_{W} \nm{z-Z}_{W}. \end{aligned} \end{equation} Similarly to the regularity estimate in \cref{thm:regu_pde_I}, we have \[ \nm{z}_{H^\alpha(0,T;L^2(\Omega))} + \snm{z}_{H^{\alpha/2}(0,T;\dot H^1(\Omega))} + \nm{z}_{L^2(0,T; \dot H^2(\Omega))} \lesssim \nm{u-U}_{L^2(0,T;L^2(\Omega))}, \] so that proceeding as in the proof of \cref{eq:conv_I_H1} yields \[ \nm{z-Z}_{W} \lesssim (h+\tau^{\alpha/2}) \nm{u-U}_{L^2(0,T;L^2(\Omega))}. 
\] Collecting the above estimate and \cref{eq:u-U_W,eq:conv_I_L2-1} gives
\begin{align*} & \nm{u-U}_{L^2( 0,T;L^2(\Omega) )} \notag \\ \lesssim{} & (h+\tau^{\alpha/2})\left( h^{1-\beta} + \tau^{\alpha(1-\beta)/2} \right) \left( \nm{f}_{L^2(0,T;H^{-\beta}(\Omega))} + \nm{u_0}_{H^{-\beta}(\Omega)} \right). \end{align*}
Therefore, \cref{eq:conv_I_L2} is a direct consequence of the following two estimates, which follow from Young's inequality:
\begin{align*} h\tau^{\alpha(1-\beta)/2} & = \left( h^{2-\beta} \right)^{1/(2-\beta)} \left( \tau^{\alpha(1-\beta/2)} \right)^{1-1/(2-\beta)} \\ & \leqslant \frac{1}{2-\beta} h^{2-\beta} + \left(1-\frac{1}{2-\beta}\right) \tau^{\alpha(1-\beta/2)}, \\ h^{1-\beta} \tau^{\alpha/2} &= \left( h^{2-\beta} \right)^{(1-\beta)/(2-\beta)} \left( \tau^{\alpha(1-\beta/2)} \right)^{1-(1-\beta)/(2-\beta)} \\ & \leqslant \frac{1-\beta}{2-\beta} h^{2-\beta} + \left(1-\frac{1-\beta}{2-\beta}\right) \tau^{\alpha(1-\beta/2)}. \end{align*}
This completes the proof. \ensuremath{\blacksquare}

\section{Numerical Results} \label{sec:numer}

This section presents some numerical experiments to verify our theoretical results in one space dimension. We set $ \Omega = (0,1) $, $ T = 1 $ and
\begin{align*} \mathcal E_1 &:= \nm{\widetilde u - U}_{L^2(0,T;H_0^1(\Omega))}, \\ \mathcal E_2 &:= \nm{\widetilde u - U}_{L^2(0,T;L^2(\Omega))}, \end{align*}
where $ \widetilde u $ is a reference solution.

\vskip 0.2cm \noindent{\bf Experiment 1.} This experiment verifies \cref{thm:conv_I} under the condition that
\begin{alignat*}{2} u_0(x) & := x^r, & \quad & 0 < x < 1, \\ f(x,t) & := x^r t^{-0.49}, & \quad & 0 < x < 1,\ 0 < t < T. \end{alignat*}
We first summarize the numerical results in \cref{tab:ex1-space} as follows.
\begin{itemize} \item If $ r=-0.8$, then \[ u_0 \in H^{-\beta}(\Omega) \quad\text{ and } \quad f \in L^2(0,T;H^{-\beta}(\Omega)) \] for all $ \beta > 0.3 $. Therefore, \cref{thm:conv_I} indicates that the spatial convergence orders of $ \mathcal E_1 $ and $ \mathcal E_2 $ are close to $ \mathcal O(h^{0.7}) $ and $ \mathcal O(h^{1.7}) $, respectively. This is confirmed by the numerical results. \item If $ r=-0.99 $, then \[ u_0 \in H^{-\beta}(\Omega) \quad\text{ and } \quad f \in L^2(0,T;H^{-\beta}(\Omega)) \] for all $ \beta > 0.49 $. Therefore, \cref{thm:conv_I} indicates that the spatial convergence orders of $ \mathcal E_1 $ and $ \mathcal E_2 $ are close to $ \mathcal O(h^{0.51}) $ and $ \mathcal O(h^{1.51}) $, respectively. This agrees well with the numerical results. \end{itemize}
In the case of $ \alpha = 0.4 $ and $ r=-0.49 $, \cref{thm:conv_I} indicates that the temporal convergence orders of $ \mathcal E_1 $ and $ \mathcal E_2 $ are close to $ \mathcal O(\tau^{0.2}) $ and $ \mathcal O(\tau^{0.4}) $, respectively. In the case of $ \alpha = 0.4 $ and $ r=-0.99 $, \cref{thm:conv_I} indicates that the temporal convergence orders of $ \mathcal E_1 $ and $ \mathcal E_2 $ are close to $ \mathcal O(\tau^{0.1}) $ and $ \mathcal O(\tau^{0.3}) $, respectively. These theoretical results coincide with the numerical results in \cref{tab:ex1-time}.
\begin{table}[ht] \caption{ Convergence history with $ \tau = 2^{-15}$ ($ \widetilde u $ is the numerical solution at $ h=2^{-11} $).
} \label{tab:ex1-space} \small\setlength{\tabcolsep}{1.5pt} \begin{tabular}{lcccccccccccc} \toprule && & & \multicolumn{4}{c}{$\alpha=0.2$} & & \multicolumn{4}{c}{$\alpha=0.4$} \\ \cmidrule{5-8} \cmidrule{10-13} && $h$ &\phantom{a} & $ \mathcal E_1 $ & Order & $ \mathcal E_2 $ & Order & \phantom{aa} & $ \mathcal E_1 $ & Order & $ \mathcal E_2 $ & Order \\ \midrule \multirow{5}{*}{$r\!=\!-0.8$} & & $2^{-3}$ & & 7.56e-1 & -- & 1.15e-2 & -- & & 8.12e-1 & -- & 2.87e-2 & -- \\ & & $2^{-4}$ & & 4.78e-1 & 0.66 & 3.64e-3 & 1.66 & & 5.23e-1 & 0.64 & 9.42e-3 & 1.61 \\ & & $2^{-5}$ & & 2.99e-1 & 0.68 & 1.14e-3 & 1.68 & & 3.30e-1 & 0.66 & 3.02e-3 & 1.64 \\ & & $2^{-6}$ & & 1.85e-1 & 0.69 & 3.53e-4 & 1.69 & & 2.06e-1 & 0.68 & 9.51e-4 & 1.67 \\ \midrule \multirow{5}{*}{$r\!=\!-0.99$} & & $2^{-3}$ & & 1.51e-0 & -- & 5.10e-2 & -- & & 1.64e-0 & -- & 5.45e-2 & -- \\ & & $2^{-4}$ & & 1.07e-0 & 0.49 & 1.84e-2 & 1.47 & & 1.19e-0 & 0.47 & 2.01e-2 & 1.44 \\ & & $2^{-5}$ & & 7.54e-1 & 0.41 & 6.53e-3 & 1.49 & & 8.42e-1 & 0.49 & 7.25e-3 & 1.47 \\ & & $2^{-6}$ & & 5.27e-1 & 0.52 & 2.31e-3 & 1.50 & & 5.91e-1 & 0.51 & 2.58e-3 & 1.49 \\ \bottomrule \end{tabular} \end{table} \begin{table}[ht] \caption{Convergence history with $\alpha=0.4$ and $ h=2^{-10}$ ($ \widetilde u $ is the numerical solution at $ \tau=2^{-17} $). } \label{tab:ex1-time} \small\setlength{\tabcolsep}{1.5pt} \begin{tabular}{ccccccccccc} \toprule \multicolumn{5}{c}{$r=-0.49$} & & \multicolumn{5}{c}{$r=-0.99$} \\ \cmidrule{1-5} \cmidrule{7-11} $\tau $ & $ \mathcal E_1 $ & Order & $ \mathcal E_2 $ & Order & \phantom{aa} & $\tau $ & $ \mathcal E_1 $ & Order & $ \mathcal E_2 $ & Order \\ $2^{-5}$ & 4.54e-1 & -- & 1.20e-2 & -- & & $2^{-3}$ & 1.80 & -- & 3.49e-1 & -- \\ $2^{-6}$ & 3.77e-1 & 0.27 & 9.53e-2 & 0.33 & & $2^{-4}$ & 1.62 & 0.15 & 2.93e-1 & 0.25 \\ $2^{-7}$ & 3.11e-1 & 0.28 & 7.39e-2 & 0.37 & & $2^{-5}$ & 1.45 & 0.16 & 2.42e-1 & 0.28 \\ $2^{-8}$ & 2.56e-1 & 0.28 & 5.63e-2 & 0.39 & & $2^{-6}$ & 1.30 & 0.16 & 1.96e-1 & 0.30 \\ \bottomrule \end{tabular} \end{table} \vskip 0.1cm \noindent{\bf Experiment 2.} This experiment verifies \cref{thm:conv_II} under the condition that \begin{alignat*}{2} u_0(x) & := cx^{-0.49}, & \quad & 0 < x < 1, \\ f(x,t) & := x^{-0.8} t^{-0.49}, & \quad & 0 < x < 1,\ 0 < t < T. \end{alignat*} For $ \alpha = 0.7 $, \cref{thm:conv_II} implies the following results: if $ c=0 $, then \[ \mathcal E_1 \approx\mathcal O(h^{0.7})\quad\mathrm{and}\quad \mathcal E_2 \approx\mathcal O(h^{1.7}); \] if $ c=1 $, then \[ \mathcal E_1 \approx\mathcal O(h^{0.43})\quad\mathrm{and}\quad \mathcal E_2 \approx\mathcal O(h^{1.43}). \] These theoretical results are confirmed by the numerical results in \cref{tab:ex2-space}. For $ \alpha=0.8 $, \cref{thm:conv_II} implies the following results: if $ c = 0 $, then the temporal convergence orders of $ \mathcal E_1 $ and $ \mathcal E_2 $ are close to $ \mathcal O(\tau^{0.28}) $ and $ \mathcal O(\tau^{0.68}) $, respectively; if $ c = 1 $, then the temporal convergence orders of $ \mathcal E_1 $ and $ \mathcal E_2 $ are close to $ \mathcal O(\tau^{0.1}) $ and $ \mathcal O(\tau^{0.5}) $, respectively. These theoretical results are verified by \cref{tab:ex2-time}. \begin{table}[H] \caption{ Convergence history with $\alpha=0.7$ and $ \tau = 2^{-15}$ ($ \widetilde u $ is the numerical solution at $ h=2^{-11} $). 
} \label{tab:ex2-space} \small\setlength{\tabcolsep}{1.5pt} \begin{tabular}{ccccccccccc} \toprule & & \multicolumn{4}{c}{$c=0$} & & \multicolumn{4}{c}{$c=1$} \\ \cmidrule{3-6} \cmidrule{8-11} $h$ & & $ \mathcal E_1 $ & Order & $ \mathcal E_2 $ & Order & \phantom{aa} & $ \mathcal E_1 $ & Order & $ \mathcal E_2 $ & Order \\ \midrule $2^{-2}$ & & 7.50e-1 & -- & 5.07e-2 & -- & & 1.76e-0 & -- & 1.04e-1 & -- \\ $2^{-3}$ & & 5.12e-1 & 0.55 & 1.77e-2 & 1.52 & & 1.37e-0 & 0.36 & 4.19e-2 & 1.32 \\ $2^{-4}$ & & 3.42e-1 & 0.58 & 6.03e-3 & 1.55 & & 1.04e-0 & 0.40 & 1.67e-2 & 1.33 \\ $2^{-5}$ & & 2.23e-1 & 0.62 & 2.00e-3 & 1.59 & & 7.56e-1 & 0.46 & 6.35e-3 & 1.39 \\ $2^{-6}$ & & 1.42e-1 & 0.65 & 6.49e-4 & 1.63 & & 5.18e-1 & 0.55 & 2.26e-3 & 1.49 \\ \bottomrule \end{tabular} \end{table} \begin{table}[ht] \caption{Convergence history with $\alpha=0.8$, $r=-0.8$, and $ h = 2^{-10}$ ($ \widetilde u $ is the numerical solution at $ \tau=2^{-17} $). } \label{tab:ex2-time} \small\setlength{\tabcolsep}{1.5pt} \begin{tabular}{cccccccccccc} \toprule \multicolumn{6}{c}{$\mathcal E_1$} & & \multicolumn{5}{c}{$\mathcal E_2$} \\ \cmidrule{1-6} \cmidrule{8-12} $\tau $ & & $ c=0 $ & Order & $ c=1 $ & Order & \phantom{aa} & $\tau $ & $ c=0 $ & Order & $ c=1 $ & Order \\ \midrule $2^{-4}$ & & 3.08e-1 & -- & 8.32e-1 & -- & & $2^{-7}$ & 1.53e-2 & -- & 2.69e-2 & -- \\ $2^{-5}$ & & 2.55e-1 & 0.27 & 7.34e-1 & 0.18 & & $2^{-8}$ & 1.05e-2 & 0.55 & 1.88e-2 & 0.52 \\ $2^{-6}$ & & 2.09e-1 & 0.29 & 6.50e-1 & 0.18 & & $2^{-9}$ & 6.91e-3 & 0.60 & 1.31e-2 & 0.53 \\ $2^{-7}$ & & 1.69e-1 & 0.30 & 5.75e-1 & 0.18 & & $2^{-10}$ & 4.47e-3 & 0.63 & 9.00e-2 & 0.54 \\ $2^{-8}$ & & 1.37e-1 & 0.31 & 5.06e-1 & 0.18 & & $2^{-11}$ & 2.84e-3 & 0.65 & 6.17e-2 & 0.55 \\ $2^{-9}$ & & 1.10e-1 & 0.31 & 4.44e-1 & 0.19 & & $2^{-12}$ & 1.78e-3 & 0.68 & 4.19e-2 & 0.56 \\ \bottomrule \end{tabular} \end{table} \noindent{\bf Experiment 3.} This experiment verifies \cref{thm:conv_III}. Here we set $ \alpha = 0.8 $ and \begin{alignat*}{2} u_0(x) & := 0, & \quad & 0 < x < 1, \\ f(x,t) & := x^{-0.49} t^{-0.29}, & \quad & 0 < x < 1,\ 0 < t < T. \end{alignat*} \cref{thm:conv_III} implies that the convergence orders of $ \mathcal E_1 $ and $ \mathcal E_2 $ are $ \mathcal O(h+\tau^{0.6}) $ and $ \mathcal O(h^2+\tau) $, respectively, which is confirmed by \cref{tab:ex3-space,tab:ex3-time}. 
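For reference, the ``Order'' columns reported in the tables of this section are the observed convergence rates obtained from successive halvings of $h$ or $\tau$, namely $\log_2\big(\mathcal E(h)/\mathcal E(h/2)\big)$ and $\log_2\big(\mathcal E(\tau)/\mathcal E(\tau/2)\big)$. The following short Python sketch is given only to illustrate this computation (it is not part of the solver used for the experiments, and the function and variable names are ours); the sample data reproduce the $\mathcal E_2$ column of \cref{tab:ex1-space} for $r=-0.8$ and $\alpha=0.2$.
\begin{verbatim}
import math

def observed_orders(errors):
    # errors[k] is the error at mesh size h_k, with h_{k+1} = h_k / 2;
    # the observed order between two consecutive refinements is
    # log2( errors[k] / errors[k+1] ).
    return [math.log2(errors[k] / errors[k + 1]) for k in range(len(errors) - 1)]

# E_2 column of the first table (r = -0.8, alpha = 0.2)
errs = [1.15e-2, 3.64e-3, 1.14e-3, 3.53e-4]
print([round(p, 2) for p in observed_orders(errs)])
# -> approximately [1.66, 1.67, 1.69]
# (the tabulated 1.66, 1.68, 1.69 were computed from the unrounded errors)
\end{verbatim}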
\newfloatcommand{capbtabbox}{table}[][\FBwidth] \begin{table}[H] \begin{floatrow} \capbtabbox{ \small\setlength{\tabcolsep}{1.3pt} \begin{tabular}{ccccccc} \toprule $h$ & \phantom{a} & $\mathcal E_1$ & Order & & $\mathcal E_2$ & Order \\ \midrule $2^{-3}$ & & 1.09e-2 & -- & & 4.08e-3 & -- \\ $2^{-4}$ & & 5.87e-2 & 0.89 & & 1.11e-3 & 1.88 \\ $2^{-5}$ & & 3.13e-2 & 0.91 & & 2.98e-4 & 1.90 \\ $2^{-6}$ & & 1.66e-2 & 0.92 & & 7.92e-5 & 1.91 \\ $2^{-7}$ & & 8.71e-3 & 0.93 & & 2.09e-5 & 1.92 \\ $2^{-8}$ & & 4.55e-3 & 0.94 & & 5.47e-6 & 1.93 \\ \bottomrule \end{tabular} }{ \caption{Convergence history with $ \tau = 2^{-15}$ ($ \widetilde u $ is the numerical solution at $ h=2^{-12} $).} \label{tab:ex3-space} } \capbtabbox{ \small\setlength{\tabcolsep}{1.3pt} \begin{tabular}{ccccccc} \toprule $\tau$ & \phantom{a} & $\mathcal E_1$ & Order & & $\mathcal E_2$ & Order \\ \midrule $2^{-6}$ & & 2.32e-2 & -- & & 6.75e-3 & -- \\ $2^{-7}$ & & 1.52e-2 & 0.62 & & 4.19e-3 & 0.69 \\ $2^{-8}$ & & 9.73e-3 & 0.64 & & 2.47e-3 & 0.76 \\ $2^{-9}$ & & 6.22e-3 & 0.65 & & 1.41e-3 & 0.81 \\ $2^{-10}$ & & 3.97e-3 & 0.65 & & 7.81e-4 & 0.85 \\ $2^{-11}$ & & 2.54e-3 & 0.65 & & 4.27e-4 & 0.87 \\ \bottomrule \end{tabular} }{ \caption{Convergence history with $ h = 2^{-10}$ ($ \widetilde u $ is the numerical solution at $ \tau=2^{-17} $).} \label{tab:ex3-time} } \end{floatrow} \end{table} \appendix \section{Properties of Fractional Calculus Operators} \begin{lem}[\cite{Samko1993, Diethelm2010,Podlubny1998}] \label{lem:basic-frac} Let $ -\infty < a < b < \infty $. If $ 0 < \beta, \gamma < \infty $, then \[ \I_{a+}^\beta \I_{a+}^\gamma = \I_{a+}^{\beta+\gamma}, \quad \I_{b-}^\beta \I_{b-}^\gamma = \I_{b-}^{\beta+\gamma}, \] and \[ \dual{\I_{a+}^\beta v,w}_{(a,b)} = \dual{v, \I_{b-}^\beta w}_{(a,b)} \] for all $ v, w \in L^2(a,b) $. \end{lem} \begin{lem}[\cite{Ervin2006}] \label{lem:coer} Assume that $ -\infty < a < b < \infty $ and $ 0 < \gamma < 1/2 $. If $ v \in H^\gamma(a,b) $, then \begin{align*} & \nm{\D_{a+}^\gamma v}_{L^2(a,b)} \leqslant \snm{v}_{H^\gamma(a,b)}, \\ & \nm{\D_{b-}^\gamma v}_{L^2(a,b)} \leqslant \snm{v}_{H^\gamma(a,b)}, \\ & \dual{\D_{a+}^\gamma v, \D_{b-}^\gamma v}_{(a,b)} = \cos(\gamma\pi) \snm{v}_{H^{\gamma}(a,b)}^2. \end{align*} Moreover, if $ v,w \in H^{\gamma}(a,b) $, then \begin{align*} & \dual{\D_{a+}^\gamma v, \D_{b-}^\gamma w}_{(a,b)} \leqslant \snm{v}_{H^\gamma(a,b)} \snm{w}_{H^\gamma(a,b)},\\ & \dual{\D_{a+}^{2\gamma} v, w}_{H^\gamma(a,b)} = \dual{\D_{a+}^\gamma v, \D_{b-}^\gamma w}_{(a,b)} = \dual{\D_{b-}^{2\gamma} w, v}_{H^\gamma(a,b)}. \end{align*} \end{lem} \begin{lem} \label{lem:key} If $ 0 < \gamma < 1/2 $ and $ v \in L^2(0,1) $, then \begin{equation} \label{eq:key} C_1 \nm{\I_{0+}^{\gamma} v}_{L^2(0,1)}^2 \leqslant \left( \I_{0+}^{\gamma} v, \I_{T-}^{\gamma} v \right)_{L^2(0,1)} \leqslant C_2 \nm{\I_{0+}^\gamma v}_{L^2(0,1)}^2, \end{equation} where $ C_1 $ and $ C_2 $ are two positive constants that depend only on $ \gamma $. \end{lem} \begin{proof} Extending $ v $ to $ \mathbb R \backslash (0,1) $ by zero, we define \begin{align*} w_{+}(t) &:= \frac1{ \Gamma(\gamma) } \int_{-\infty}^t (t-s)^{\gamma-1} v(s) \, \mathrm{d}s, \quad -\infty < t < \infty, \\ w_{-}(t) &:= \frac1{ \Gamma(\gamma) } \int_t^{\infty} (s-t)^{\gamma-1} v(s) \, \mathrm{d}s, \quad -\infty < t < \infty. 
\end{align*} Since $ 0 < \gamma < 1/2 $, a routine calculation yields $ w_{+}, w_{-} \in L^2(\mathbb R) $, and \cite[Theorem~7.1]{Samko1993} implies that \[ \begin{array}{ll} \mathcal Fw_{+}(\xi) = (\mathrm{i}\xi)^{-\gamma} \mathcal Fv(\xi), & -\infty < \xi < \infty, \\ \mathcal Fw_{-}(\xi) = (-\mathrm{i}\xi)^{-\gamma} \mathcal Fv(\xi), & -\infty < \xi < \infty. \end{array} \] By the Plancherel Theorem and the same technique as that used to prove \cite[Lemma~2.4]{Ervin2006}, it follows that \begin{align*} {} & \left( \I_{0+}^{\gamma} v, \I_{1-}^{\gamma} v \right)_{L^2(0,1)} = (w_{+},w_{-})_{L^2(\mathbb R)} = (\mathcal Fw_{+}, \mathcal Fw_{-})_{L^2(\mathbb R)} \\ ={} & \cos\big( \gamma\pi \big) \int_\mathbb R \snm\xi^{-2\gamma} \snm{\mathcal Fv(\xi)}^2 \, \mathrm{d}\xi \\ ={} & \cos( \gamma\pi ) \nm{w_{+}}_{L^2(\mathbb R)}^2 = \cos( \gamma\pi ) \nm{w_{-}}_{L^2(\mathbb R)}^2. \end{align*} Therefore, by the Cauchy-Schwarz inequality, \cref{eq:key} follows from the following two estimates: \[ \nm{ \I_{0+}^{\gamma} v }_{L^2(0,1)} \leqslant \nm{w_{+}}_{ L^2(\mathbb R) }, \quad \nm{ \I_{1-}^{\gamma} v }_{L^2(0,1)} \leqslant \nm{w_{-}}_{ L^2(\mathbb R) }. \] \end{proof} \begin{lem} \label{lem:regu} If $ \beta \in (0,1) \setminus \{0.5\} $ and $ 0 < \gamma < \infty $, then \begin{equation} \label{eq:regu-1} \nm{\I_{0+}^\gamma v}_{H^{\beta+\gamma}(0,1)} \leqslant C_{\beta,\gamma} \nm{v}_{H^\beta(0,1)} \end{equation} for all $ v \in H_0^\beta(0,1) $. Furthermore, if $ 0 < \gamma < 1/2 $ and $ v \in H^{1-\gamma}(0,1) $ with $ v(0) = 0 $, then \begin{equation} \label{eq:regu-2} \nm{\I_{0+}^\gamma v}_{H^1(0,1)} \leqslant C_\gamma \nm{v}_{H^{1-\gamma}(0,1)}. \end{equation} \end{lem} \begin{proof} For the proof of \cref{eq:regu-1}, we refer the reader to \cite{Li2017A} (Lemma A.4). Let us prove \cref{eq:regu-2} as follows. Define $ \widetilde v := v - g $, where \[ g(t) := t v(1), \quad 0 < t < 1. \] It is clear that $ \widetilde v \in H_0^{1-\gamma}(0,1) $, and hence \cref{eq:regu-1} implies \[ \nm{ \I_{0+}^\gamma \widetilde v }_{H^1(0,1)} \leqslant C_\gamma \nm{\widetilde v}_{ H_0^{1-\gamma}(0,1) }. \] Therefore, from the evident estimate \[ \nm{g}_{H^{1-\gamma}(0,1)} + \nm{\I_{0+}^\gamma g}_{H^1(0,1)} \leqslant C_\gamma \snm{v(1)}, \] it follows that \begin{align*} \nm{\I_{0+}^\gamma v}_{H^1(0,1)} & \leqslant \nm{ \I_{0+}^\gamma \widetilde v }_{H^1(0,1)} + \nm{ \I_{0+}^\gamma g }_{H^1(0,1)} \\ & \leqslant C_\gamma \nm{ \widetilde v }_{ H_0^{1-\gamma}(0,1) } + \nm{ \I_{0+}^\gamma g }_{H^1(0,1)} \\ & \leqslant C_\gamma \left(\nm{v}_{ H^{1-\gamma}(0,1) } + \nm{g}_{ H^{1-\gamma}(0,1) } \right) + \nm{ \I_{0+}^\gamma g }_{H^1(0,1)} \\ & \leqslant C_\gamma \left( \nm{v}_{ H^{1-\gamma}(0,1) } + \snm{v(1)} \right). \end{align*} As $ 0 < \gamma < 1/2 $ implies \[ \nm{v}_{C[0,1]} \leqslant C_\gamma \nm{v}_{H^{1-\gamma}(0,1)}, \] this indicates \cref{eq:regu-2} and thus proves the lemma. \end{proof} \begin{lem} \label{lem:xy} If $ 0 < \gamma < 1/2$ and $ v \in H^1(0,1) $, then \begin{equation} \label{eq:xy} C_1 \nm{v}_{ H^{1-\gamma}(0,1) } \leqslant \snm{v(0)} + \nm{ \I_{0+}^{\gamma} v' }_{L^2(0,1)} \leqslant C_2 \nm{v}_{H^{1-\gamma}(0,1)}, \end{equation} where $ C_1 $ and $ C_2 $ are two positive constants that depend only on $ \gamma $. 
\end{lem} \begin{proof} Since a simple calculation gives \[ \D \I_{0+}^{\gamma} (v-v(0)) = \D \I_{0+}^{\gamma} \I_{0+} v' = \I_{0+}^{\gamma} v', \] using \cref{lem:regu} yields \begin{align*} {} & \nm{ \I_{0+}^{\gamma} v' }_{L^2(0,1)} \leqslant \nm{ \I_{0+}^{\gamma}(v-v(0)) }_{H^1(0,1)} \\ \leqslant{} & C_\gamma \nm{v-v(0)}_{ H^{1-\gamma}(0,1) } \leqslant C_\gamma \left( \snm{v(0)} + \nm{v}_{ H^{1-\gamma}(0,1) } \right), \end{align*} which, together with the estimate \[ \snm{v(0)} \leqslant C_\gamma \nm{v}_{H^{1-\gamma}(0,1)} \quad \text{(since $ 1-\gamma > 0.5 $)}, \] indicates \[ \snm{v(0)} + \nm{ \I_{0+}^{\gamma} v' }_{L^2(0,1)} \leqslant C_\gamma \nm{v}_{ H^{1-\gamma}(0,1) }. \] Conversely, by \[ v = \I_{0+}^{ 1-\gamma } \I_{0+}^{ \gamma } v' + v(0), \] using \cref{lem:regu} again yields \[ \nm{v}_{ H^{1-\gamma}(0,1) } \leqslant C_\gamma \left( \snm{v(0)} + \nm{ \I_{0+}^{\gamma}v' }_{L^2(0,1)} \right). \] This lemma is thus proved. \end{proof} \end{document}
\begin{document} \title{User-Centric Federated Learning: Trading off Wireless Resources for Personalization} \author{Mohamad~Mestoukirdi$^{\dagger}$,~\IEEEmembership{Student Fellow,~IEEE,} Matteo Zecchin$^{\dagger}$,~\IEEEmembership{Student Fellow,~IEEE,} David~Gesbert,~\IEEEmembership{Fellow,~IEEE,} and Qianrui~Li,~\IEEEmembership{Member,~IEEE} \thanks{$^\dagger$ Equal contribution.} \thanks{ M. Mestoukirdi, M. Zecchin, and D. Gesbert are with the Communication Systems Department, EURECOM, Sophia-Antipolis, France. Emails: \{Mestouki, Zecchin, Gesbert\}@eurecom.fr. M. Mestoukirdi and Q. Li are with Mitsubishi Electric R\&D Centre Europe. Emails: \{M.Mestoukirdi, Q.Li\}@fr.merce.mee.com.} \thanks{The work of M. Zecchin is funded by the Marie Sklodowska Curie action WINDMILL (grant No. 813999).} } \maketitle \begin{abstract} Statistical heterogeneity across clients in a Federated Learning (FL) system increases the algorithm convergence time and reduces the generalization performance, resulting in a large communication overhead in return for a poor model. To tackle the above problems without violating the privacy constraints that FL imposes, personalized FL methods have to couple statistically similar clients without directly accessing their data in order to guarantee a privacy-preserving transfer. In this work, we design user-centric aggregation rules at the parameter server (PS) that are based on readily available gradient information and are capable of producing personalized models for each FL client. The proposed aggregation rules are inspired by an upper bound of the weighted aggregate empirical risk minimizer. Secondly, we derive a communication-efficient variant based on user clustering which greatly enhances its applicability to communication-constrained systems. Our algorithm outperforms popular personalized FL baselines in terms of average accuracy, worst node performance, and training communication overhead. \end{abstract} \begin{IEEEkeywords} Personalized federated learning, distributed optimization, user-centric aggregation, statistical learning theory \end{IEEEkeywords} \section{Introduction} In recent years, the evolution of energy and computation-efficient hardware, together with the wide adoption of data-driven solutions have led to an increased interest in pushing intelligence closer to edge devices where data is generated. This contributed to the emergence of autonomous and intelligent systems where decisions are made locally. However, the reliable operation of intelligent edge devices requires periodic ML training and tuning, which mainly relies on pooling data from a multitude of devices toward a central entity. Consequently, privacy concerns arise in such settings, as data owners may be reluctant in sharing sensible and personal pieces of information \cite{health}. Additionally, the soaring model complexity of modern ML solutions requires vast amounts of data to be harvested to achieve satisfactory inference accuracy. This introduces a large communication overhead and long training delays. Federated Learning (FL) \cite{mcmahan2017communication} was introduced to deal with these problems. It offers clients the possibility of collaboratively training models under the orchestration of a parameter server (PS), by iteratively aggregating locally optimized models without the need to offload any raw data centrally. Such an approach tackles both the privacy and communications challenges mentioned above. 
Early FL algorithms were devised under the assumption that the data distribution of clients' data sets is common. In this case, clients are said to share the same learning task, and traditional FL (e.g FedAvg \cite{mcmahan2017communication}) algorithms can perform and generalize well yielding a single model, fitting the common data distribution. However, this assumption is hardly met in practice \cite{sattler2020clustered}, as data distribution heterogeneity often arises in distributedly generated data sets. In such cases, traditional FL (e.g FedAvg) approaches exhibit slow convergence and often fail to generalize well \cite{li2018federated}, especially when conflicting objectives among users exist. This is a direct consequence of the fact that in heterogeneous settings, a convex combination of locally trained models may not be fit for any particular client data distribution. Hence, heterogeneous distributions bring about an interesting trade-off: On the one hand the advantage of exploiting training data at other clients when the local training data is insufficient, and on the other hand the problem of having the trained model steered towards improper directions due to differences in data distributions among clients. This trade-off motivates the search for new FL strategies that can navigate the compromise between model aggregation benefits and the threat of model mismatch. In \cite{prevwork}, we proposed a novel user-centric aggregation rule to tackle the underlying heterogeneity among clients and overcome the shortcomings of the traditional FL schemes. The proposed strategy leverages user-centric aggregation rules at the PS to produce models at each device that are tailored to their local data distribution. This is achieved by generalizing the aggregation rule introduced by McMahan et al. \cite{mcmahan2017communication}. In the case of a set of $m$ collaborating devices, the original objective in \cite{mcmahan2017communication} produces a common model at each communication round $t$ according to \begin{equation} \begin{aligned} \theta^{t}\leftarrow \sum_{i=1}^m w_i\theta_i^{t-1/2}, \end{aligned} \label{eq0} \end{equation} where each $w_i$ weights the contribution of the locally optimized model $\theta_i^{t-\frac{1}{2}}$ of user $i$, to the update global model $\theta^t$. On the other hand, the proposed aggregation rule replaces the weighting coefficients $\{w_i\}^m_{i=1}$ by user-specific weighting vectors $\vec{w}_i = (w_{i,1},\dots,w_{i,m})$ and it produces a personalized model update for each FL client \begin{equation} \begin{aligned} \theta^{{t}}_i \leftarrow \sum^m_{j=1}w_{i,j}\theta^{{t-1/2}}_j \hspace{1cm} \text{for } i = 1,2,\cdots,m \end{aligned} \label{eq2} \end{equation} The key motivation underpinning the use of distinct user-centric personalization rules is that a single model often fails in heterogeneous settings \cite{mcmahan2017communication}. At the same time, hard clustering strategies \cite{sattler2020clustered,briggs2020federated} are limited to restrictive intra-cluster collaboration and they cannot exploit similarities among different clusters. The authors in \cite{zhang2020personalized} proposed FedFomo, a personalization scheme that uses a similar aggregation policy as ours \cite{prevwork}. However, FedFomo's weighting scheme is repeatedly refined during training and it relies on sharing local models among clients at each communication round. 
This strategy can violate the FL privacy-preserving nature and introduces a large communication burden to the training procedure. In contrast, our personalization policy is shown experimentally to enjoy faster convergence, being able to capture the data heterogeneity at the start of training without the need for further refinements at later stages. In this work, we extend the findings of \cite{prevwork}. We derive an upper bound on the expected risk endured by the minimizer of the weighted empirical aggregate loss. Then, we motivate the use of heuristically defined weights in place of the theoretically optimal ones. Furthermore, to limit the communication costs induced by transmitting multiple personalized models, we propose a $K$-means clustering algorithm over the user-centric weights to limit the number of personalized streams, while taking into account the underlying heterogeneous target tasks and highlighting inter-cluster collaboration. This enables a trade-off between the learning accuracy and the communication load in some heterogeneous settings. Finally, we show that the silhouette score over the returned $K$-means solution can detect the underlying heterogeneity, and provides a principled way to choose the number of user-centric rules. Through extensive numerical experiments on FL benchmarks, we demonstrate the performance of our proposed strategy compared to other state-of-the-art solutions, in terms of inference accuracy, and communication costs. \section{Related Work} Several recent studies investigate the challenges that arise due to the underlying task heterogeneity present across learners in Federated Learning settings. For instance, \cite{briggs2020federated,sattler2020clustered} devised a hierarchical clustering scheme to group users that share the same learning task and enable collaboration among them only. However, their strategy is based on the assumption that heterogeneous tasks are either tangential or parallel, which is not necessarily true, as tasks are defined by the users' target data distributions which are often different for each of them. In this sense, hard-clustering strategies limit the degree of collaboration across learners and may not always be able to capture the differences across users' tasks. In \cite{mixture2021s} a distributed Expectation-Maximization (EM) algorithm has been proposed, that concurrently converges to a set of shared hypotheses and a personalized linear combination of them at each device. Similarly in \cite{reisser2021federated}, a Mixture of Experts' formulation has been devised to learn a personalized mixture of the outputs of a jointly trained set of models. Similar to \cite{zhang2020personalized}, exploiting the full personalization potential of the solutions in \cite{mixture2021s,reisser2021federated} induces a huge overhead over the communication resources in the federated system, which renders their approaches unpractical. Similar to Fedprox \cite{DBLP:journals/corr/abs-1812-06127}, the authors in \cite{DBLP:journals/corr/abs-1910-06378} propose SCAFFOLD to tackle the ``\textit{client drifts}" that emerge as a result of the heterogeneity of the clients' data sets during the global model training. However, in some heterogeneous settings, ``\textit{client drifts}" can act as an indication of the existence of opposing target tasks among the learners. 
Therefore, intelligently employing the drifts can highlight similarity patterns among the clients' tasks \cite{sattler2020clustered}, which in turn can aid in training multiple refined models to fit each of the available tasks, yielding better personalized models in contrast to a single global model trained by SCAFFOLD. More recently, the authors in \cite{DBLP:journals/corr/abs-2012-04221} propose Ditto, where users collaborate to train a separate global model akin to \cite{mcmahan2017communication}, which is then used to steer the training of the local personalized model at each user via local model adaptation. Their approach embodies the intuition of pFedMe \cite{DBLP:journals/corr/abs-2006-08848}, which decouples personalized model optimization from the global model learning by introducing a penalizing term to regularize the clients local adaptation step. Despite resulting in a per-user personalized model, collaboration among users in Ditto and pFedMe is limited to updating the global model, while relying solely on the local data sets to train their personalized models, rather than leveraging collaboration among statistically similar learners to refine those models. Consequently, the resulting personalized models may generalize poorly, especially in settings where local data sets are small in size. \label{section2} \section{Learning with heterogeneous data sources} \label{section3} \label{sec3} In this section, we provide theoretical guarantees for learners that combine data from heterogeneous data distributions. The set-up mirrors the one of personalized federated learning and the results are instrumental to derive our user-centric aggregation rule. In the following, we limit our analysis to the discrepancy distance, but it can be readily extended to other divergences as we show later. In the federated learning setting, the weighted combination of the empirical loss terms of the collaborating devices represents the customary training objective. Namely, in a distributed system with $m$ nodes, each endowed with a data set $\mathcal{D}_i$ of $n_i$ IID samples from a local distribution $P_i$, the goal is to find a predictor $f:\mathcal{X}\to\mathcal{\hat{Y}}$ from a hypothesis class $\mathcal{F}$ that minimizes \begin{equation} L(f,\vec{w})=\sum_{i=1}^m \frac{w_i}{n_i}\sum_{(x,y)\in \mathcal{D}_i}\ell(f(x),y) \label{aggregatedloss} \end{equation} where $\ell: \mathcal{\hat{Y}}\times \mathcal{Y}\to \mathbb{R}^+$ is a loss function and $\vec{w}=(w_1,\dots,w_m)$ is a weighting scheme. In case of identically distributed local data sets, the typical weighting vector is $\vec{w}=\frac{1}{\sum_i n_i}\left(n_1,\dots,n_m\right)$, the relative fraction of data points stored at each device. This particular choice minimizes the variance of the aggregated empirical risk, which is also an unbiased estimate of the local risk at each node in this scenario. However, in the case of heterogeneous local distributions, the minimizer of $\vec{w}$-weighted risk may transfer poorly to certain devices whose target distribution differs from the mixture $P_{\vec{w}}=\sum^m_{i=1}w_iP_i$. Furthermore, it may not exist a single weighting strategy that yields a universal predictor with satisfactory performance for all participating devices. To address the above limitation of a universal model, personalized federated learning allows adapting the learned solution at each device. 
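As a concrete illustration of the training objective in (\ref{aggregatedloss}), the following Python sketch evaluates the weighted aggregate empirical risk of a generic predictor together with the standard weighting proportional to the local data set sizes discussed above. The sketch is purely illustrative: the function names and the use of NumPy are ours and do not refer to any specific federated learning implementation.
\begin{verbatim}
import numpy as np

def aggregate_empirical_risk(predict, loss, datasets, w):
    # Weighted aggregate empirical risk of eq. (3).
    # datasets: list of (X_i, y_i) pairs, one per client; w: weighting vector.
    risk = 0.0
    for (X_i, y_i), w_i in zip(datasets, w):
        # (w_i / n_i) * sum of losses  ==  w_i * mean loss on client i
        risk += w_i * np.mean(loss(predict(X_i), y_i))
    return risk

def fedavg_weights(datasets):
    # standard choice: weights proportional to the local data set sizes
    n = np.array([len(y_i) for _, y_i in datasets], dtype=float)
    return n / n.sum()
\end{verbatim}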
In order to better understand the potential benefits and drawbacks coming from the collaboration with statistically similar but not identical devices, let us consider the point of view of a generic node $i$ that has the freedom of choosing the degree of collaboration with the other devices in the distributed system. Namely, identifying the degree of collaboration between node $i$ and the rest of users by the weighting vector $\vec{w}_i=(w_{i,1},\dots,w_{i,m})$ (where $w_{i,j}$ defines how much node $i$ relies on data from user $j$) we define the personalized objective for user $i$ \begin{equation} \min_{\theta} L(\theta)=\sum_{i=1}^m \frac{w_{i}}{|\mathcal{D}_i|}\sum_{(x,y)\in \mathcal{D}_i}\ell(f_{\theta}(x),y) \label{centricloss} \end{equation} and the resulting personalized model \begin{equation} \hat{f}_{\vec{w}_i}=\argmin_{f\in\mathcal{F}}L(f,\vec{w}_i). \label{learner} \end{equation} We now seek an answer to: \emph{``What's the proper choice of $\vec{w}_i$ in order to obtain a personalized model $\hat{f}_{\vec{w}_i}$ that performs well on the target distribution $P_i$?''}. This question is deeply tied to the problem of domain adaptation, in which the goal is to successfully aggregate multiple data sources in order to produce a model that transfers positively to a different and possibly unknown target domain. In our context, the data set $\mathcal{D}_i$ is made of data points drawn from the target distribution $P_i$ and the other devices' data sets provide samples from the sources $\{P_j\}_{j\neq i}$. Leveraging results from domain adaptation theory \cite{ben2010theory}, we provide learning guarantees on the performance of the personalized model $\hat{f}_{\vec{w}_i}$ to gauge the effect of collaboration that we later use to devise the weights for the user-centric aggregation rules.\newline\newline In order to avoid negative transfer, it is crucial to upper bound the performance of the predictor w.r.t. to the target task. The discrepancy distance introduced in \cite{mansour2009domain} provides a measure of similarity between learning tasks that can be used to this end. For a hypothesis set of functions $\mathcal{F}:\mathcal{X}\rightarrow\hat{\mathcal{Y}}$ and two distributions $P,Q$ on $\mathcal{X}$, the discrepancy distance is defined as \begin{equation} d_{\mathcal{F}}(P,Q)=\sup_{f,f'\in \mathcal{F}}\left|\mathbb{E}_{x\sim P}\left[\ell(f,f')\right]-\mathbb{E}_{x\sim Q}\left[\ell(f,f')\right]\right| \label{disc_dist} \end{equation} where we streamlined notation denoting $f(x)$ by $f$. For bounded and symmetric loss functions that satisfy the triangular inequality, the previous quantity allows to obtain the following inequality \begin{equation*} \mathbb{E}_{(x,y)\sim P}[\ell(f,y)]\leq \mathbb{E}_{(x,y)\sim Q}[\ell(f,y)]+d_{\mathcal{F}}(P,Q)+\gamma \end{equation*} where $\gamma=\inf_{f \in \mathcal{F}}\left(\mathbb{E}_{(x,y)\sim P}[\ell(f,y)]+\mathbb{E}_{(x,y)\sim Q}[\ell(f,y)]\right)$. We can exploit the inequality to obtain the following risk guarantee for $\hat{f}_{\vec{w}_i}$ w.r.t the true minimizer $f^*$ of the risk for the distribution $P_i$. 
\begin{theorem} For a loss function $\ell$ with $B$-bounded range, symmetric and satisfying the triangular inequality, with probability $1-\delta$ the function $\hat{f}_{\vec{w}_i}$ satisfies
\begin{align*} &E_{z\sim P_i}[\ell(\hat{f}_{\vec{w}_i},z)]-E_{z\sim P_i}[\ell(f^*,z)]\leq \\&B \sqrt{\sum^m_{j=1}\frac{w^2_{i,j}}{n_j}}\left(\sqrt{\frac{2d}{\sum_i n_i}\log\left(\frac{e\sum_i n_i}{d}\right)}+\sqrt{\log\left(\frac{2}{\delta}\right)}\right)+\\&2 \sum_{j=1}^m w_{i,j}d_{\mathcal{F}}(P_i,P_{j})+2\gamma \end{align*}
where $\gamma=\min_{f\in \mathcal{F}}\left( E_{z\sim P_i}[\ell(f,z)]+E_{z\sim P_{\vec{w}_i}}[\ell(f,z)]\right)$ and $d$ is the VC-dimension of the function space resulting from the composition of $\mathcal{F}$ and $\ell$. \label{Th1} \end{theorem}
Recently, an alternative bound based on an information-theoretic notion of dissimilarity, the Jensen-Shannon divergence, has been proposed \cite{shui2020beyond}. It is based on less restrictive assumptions, as it only requires the loss function $\ell(f,Z)$ to be sub-Gaussian with some parameter $\sigma$ for all $f\in \mathcal{F}$; therefore, whenever $\ell(\cdot)$ is bounded, the requirement is automatically satisfied. Measuring similarity by the Jensen-Shannon divergence, the following inequality is available
\begin{equation} E_{X\sim P}[X]\leq E_{X\sim Q}[X] +\beta\sigma^2+\frac{D_{JS}(P||Q)}{\beta} \quad \text{for $\beta>0$} \label{JS} \end{equation}
where $D_{JS}(P\|Q)=\text{KL}\left(P\Big\|\frac{P+Q}{2}\right)+\text{KL}\left(Q\Big\|\frac{P+Q}{2}\right)$. Exploiting the above inequality we obtain the following estimation error bound.
\begin{theorem} For a loss function $\ell$ with $B$-bounded range, with probability $1-\delta$ the function $\hat{f}_{\vec{w}_i}$ satisfies
\begin{align*} &E_{z\sim P_i}[\ell(\hat{f}_{\vec{w}_i},z)]-E_{z\sim P_i}[\ell(f^*,z)]\leq \\ &B \sqrt{\sum^m_{j=1}\frac{w^2_{i,j}}{n_j}}\left(\sqrt{\frac{2d}{\sum_i n_i}\log\left(\frac{e\sum_i n_i}{d}\right)}+\sqrt{\log\left(\frac{2}{\delta}\right)}\right)+\\&B\sqrt{2\sum^m_{j=1}w_{i,j}D_{JS}(P_i||P_j)} \end{align*}
\label{Th2} \end{theorem}
\textbf{Proof of Theorems \ref{Th1} and \ref{Th2}:} In Appendix \ref{A}.\newline
The theorems highlight that a fruitful collaboration should strike a balance between the bias terms due to the dissimilarity between local distributions and the risk estimation gains provided by the data points of other nodes. Minimizing the upper bounds in Theorems \ref{Th1} and \ref{Th2} with respect to the user-specific weights, and using the optimal weights in our aggregation rule, seems an appealing solution to tackle the data heterogeneity during training; however, the distance terms $\left(d_{\mathcal{F}}(P_i,P_j) \text{ and }D_{JS}(P_i||P_j)\right)$ are difficult to compute, especially under the privacy constraints that federated learning imposes. For this reason, in the following we consider a heuristic method based on the similarity of the readily available users' model updates to estimate the collaboration coefficients.
\section{User-centric aggregation} \label{section4}
For a suitable hypothesis class parametrized by $\theta\in \mathbb{R}^d$, federated learning approaches use an iterative procedure to minimize the aggregate loss (\ref{aggregatedloss}) with $\vec{w}=\frac{1}{\sum_i n_i}\left(n_1,\dots,n_m\right)$. At each round $t$, the PS broadcasts the parameter vector $\theta^{t-1}$ and then combines the models locally optimized by the clients, $\{\theta_i^{t-1/2}\}_{i=1}^m$, according to the following aggregation rule \[ \theta^{t}\leftarrow \sum_{i=1}^m\frac{n_i}{\sum_{j=1}^m n_j}\theta_i^{t-1/2}.
\] As mentioned in Sec. \ref{sec3}, this aggregation rule has two shortcomings: it does not take into account the data heterogeneity across users, and it is bound to produce a single solution. For this reason, we propose a user-centric model aggregation scheme that takes into account the data heterogeneity across the different nodes participating in training and aims at neutralizing the bias induced by a universal model. Our proposal generalizes the naïve aggregation of FedAvg by assigning a unique set of mixing coefficients $\vec{w}_i$ to each user $i$ and, consequently, a user-specific model aggregation at the PS side. Namely, on the PS side, the following set of user-centric aggregation steps is performed
\begin{equation} \begin{aligned} \theta^{{t}}_i \leftarrow \sum^m_{j=1}w_{i,j}\theta^{{t-1/2}}_j \quad \textnormal{for $i=1,\dots,m$} \end{aligned} \label{eq3} \end{equation}
where now, $\theta^{{t-1/2}}_j$ is the locally optimized model at node $j$ starting from $\theta^{{t-1}}_j$, and $\theta^{{t}}_i$ is the user-centric aggregated model for user $i$ at communication round $t$.
\begin{figure} \caption{Personalized Federated Learning with user-centric aggregates at round $t$.} \label{fig:system_model} \end{figure}
As we elaborate next, the mixing coefficients are heuristically defined based on a distribution similarity metric and the data set size ratios. These coefficients are calculated before the start of federated training. The similarity score we propose is designed to favour collaboration among similar users and takes into account the relative data set sizes, as more intelligence can be harvested from clients with larger data availability. Using these user-centric aggregation rules, each node ends up with a personalized model that yields better generalization for the local data distribution. It is worth noting that the user-centric aggregation rule does not produce a minimizer of the user-centric aggregate loss given by (\ref{centricloss}): at each round, the PS aggregates model updates that are computed starting from different sets of parameters. Nonetheless, we find it to be a good approximation of the true update, since personalized models for similar data sources tend to propagate in a close neighbourhood. The aggregation in \cite{zhang2020personalized} capitalizes on the same intuition.
\subsection{Computing the Collaboration Coefficients} \label{collaboration}
Computing the discrepancy distance (\ref{disc_dist}) can be challenging in high dimensions, especially under the communication and privacy constraints imposed by federated learning. For this reason, we propose to compute the mixing coefficients based on the relative data set sizes and the distribution similarity metric given by
\begin{align*} \Delta_{i,j}(\hat{\theta}) = & \norm{ \frac{1}{n_i}\sum_{(x,y)\in \mathcal{D}_i}{\hspace{-10pt}\nabla \ell(f_{\hat{\theta}},y)} - \frac{1}{n_j}\sum_{(x,y)\in \mathcal{D}_j}\hspace{-10pt}\nabla \ell(f_{\hat{\theta}},y) }^2\\ \approx & \norm{ \mathbb{E}_{z\sim P_i}\nabla \ell(f_{\hat{\theta}},y) - \mathbb{E}_{z\sim P_j}\nabla \ell(f_{\hat{\theta}},y) }^2 \end{align*}
where the quality of the approximation depends on the number of samples $n_i$ and $n_j$.
The mixing coefficients for user $i$ are then set to the following normalized Gaussian kernel function
\begin{equation} w_{i,j}=\frac{\frac{n_j}{n_i}e^{-\frac{1}{2\sigma_i\sigma_j}\Delta_{i,j}(\hat{\theta}) }}{\sum^m_{j'=1}\frac{n_{j'}}{n_i}e^{-\frac{1}{2\sigma_i\sigma_{j'}}\Delta_{i,j'}(\hat{\theta}) }} \hspace{1cm}\textnormal{for $j=1,\dots,m$.} \label{eq7} \end{equation}
The mixture coefficients are calculated at the PS during a special round before federated training. During this round, the PS broadcasts an initialized model, denoted by $\hat{\theta} = \theta^0$, to the users, which compute the full gradient on their local data sets. At the same time, each node $i$ locally estimates the value $\sigma^2_i$ by partitioning the local data randomly into $K$ batches $\{\mathcal{D}^k_i\}^K_{k=1}$ of size $n_k$ and computing
\begin{equation} \sigma^2_i = \frac{1}{K}\sum_{k=1}^K\norm{\frac{1}{n_k} \sum_{(x,y)\in \mathcal{D}^k_i}\hspace{-10pt}\nabla \ell(f_{\hat{\theta}},y)-\frac{1}{n_i}\sum_{(x,y)\in \mathcal{D}_i}\hspace{-10pt}\nabla \ell(f_{\hat{\theta}},y)}^2 \label{eq9} \end{equation}
where $\sigma^2_i$ is an estimate of the gradient variance (i.e. noise) computed over local data sets $\mathcal{D}^k_i$ sampled from the same target distribution $P_i$. The variances are computed as a function of the partitioned mini-batch sizes. Consequently, the size of the mini-batches must be chosen carefully to successfully capture clients of similar data distributions during training. We discuss the suitable choice of the mini-batch sizes to compute the variances in Section \ref{variance}. Once all the necessary quantities are computed, they are uploaded to the PS, which proceeds to calculate the mixture coefficients and initiates the federated training using the custom aggregation scheme given by (\ref{eq3}). An illustration of our proposal is found in Algorithm \ref{Algo2}. Note that the proposed heuristic embodies the intuition provided by Theorem \ref{Th1}. In fact, in the case of homogeneous users, it falls back to the standard FedAvg aggregation rule, while if node $i$ has an infinite amount of data it degenerates to the local learning rule, which is optimal in that case.
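For concreteness, the following Python sketch shows how the quantities in (\ref{eq9}) and (\ref{eq7}) can be computed once the full local gradients at the common initialization $\theta^0$ are available. The sketch is purely illustrative: the function names and the use of NumPy are ours and do not refer to the implementation used in the experiments.
\begin{verbatim}
import numpy as np

def local_gradient_variance(batch_grads, full_grad):
    # eq. (9): mean squared deviation of the K mini-batch gradients
    # from the full local gradient, both evaluated at theta^0
    return np.mean([np.sum((g_b - full_grad) ** 2) for g_b in batch_grads])

def collaboration_matrix(full_grads, sigma2, n):
    # eq. (7): row i holds the user-centric weights w_i = (w_{i,1}, ..., w_{i,m})
    m = len(full_grads)
    sigma = np.sqrt(np.asarray(sigma2, dtype=float))
    W = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            delta_ij = np.sum((full_grads[i] - full_grads[j]) ** 2)  # Delta_{i,j}
            W[i, j] = (n[j] / n[i]) * np.exp(-delta_ij / (2.0 * sigma[i] * sigma[j]))
        W[i] /= W[i].sum()  # normalization over j
    return W
\end{verbatim}
The resulting matrix is computed once, before federated training starts, and then reused at every aggregation step (\ref{eq3}).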
\begin{algorithm}[ht] \algsetup{linenosize=\small} \SetAlgoLined \DontPrintSemicolon \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \caption{User-centric Federated Learning} \Input{number of clients $m$, local mini-batch size $B$, number of epochs $E$ and learning rate $\eta$} \textnormal{PS} broadcasts $\theta^0$ to the users\; \ForEach{\textnormal{user} $k$} { \textnormal{Compute} $\nabla\ell(\theta^0,\mathcal{D}_k)$ \;$\text{Compute $\sigma^2_k$ as in } \text{(\ref{eq9})}$\; \text{Transmit }$\{\nabla\ell(\theta^0,\mathcal{D}_k), \sigma^2_k\}$ \text{to PS} } \textnormal{PS computes} $w_{i,j}$ \text{as in (\ref{eq7})} \; \For {$t = 0,\dots,T$} { $\textnormal{PS } \text{unicasts } \theta_k^t \text{ to each node } k$ \; \ForEach{node $k$ } { $\theta^{t+1}_k \gets \textbf{ClientUpdate}(\theta^t_k,\mathcal{D}_k)$ \; $\text{return } \theta^{t+1}_k \text{ to } \textit{PS}$ } $\text{PS computes } \theta_k^{t+1} \gets \sum_{j=1}^m w_{k,j}\theta_j^{t+1}$} \; $\textbf{PROCEDURE: }$$\textnormal{ClientUpdate($\theta^t_k,\mathcal{D}_k$):}$ \\ $\mathcal{B}$ $\gets$ $\textnormal{Split}$ $\mathcal{D}_k$ $\textnormal{into batches of size $B$}$ \;$\theta_k \gets \theta_k^{t}$ \;\For{$t=0,\dots,E$} { \ForEach{\textnormal{batch $b$} $\in \mathcal{B}$} { $\theta_k$ $\gets$ $\theta_k-\eta \nabla \ell(\theta_k,b)$ }} $\textbf{return}$ $\theta_k$ \label{Algo2} \end{algorithm} \subsection{Reducing the Communication Load} \label{Clustering} A full-fledged personalization employing the user-centric aggregation rule (\ref{eq3}) would introduce an $m$-fold increase in communication load during the downlink phase as the original broadcast transmission is replaced by unicast ones. Although from a learning perspective the user-centric learning scheme is beneficial, it is also possible to consider overall system performance from a learning-communication trade-off point of view. The intuition is that, for small discrepancies between the user data distributions, the same model transfer positively to statistically similar devices. To strike a suitable trade-off between learning accuracy and communication overhead we hereby propose to adaptively limit the number of personalized downlink streams. In particular, for a number of personalized models $m_t$, we run a $k$-means clustering scheme over the set of collaboration vectors $\{\vec{w}_i\}_{i=1}^m$ and we select the centroids $\{\vec{c}_i\}_{i=1}^{m_t}$ to implement the $m_t$ personalized streams. Formally, given $m_t$ and the user-specific weights $\{\vec{w}_{i}\}^m_{i=1}$, the objective is to find $m_t<m$ clusters $\mathcal{C}_1,\dots,\mathcal{C}_{m_t}$ such that\begin{equation} \sum_{n=1}^{m_t} \sum_{\vec{w}_i \in \mathcal{C}_n}\norm{\vec{w}_i - \vec{c}_n} \end{equation} is minimized, where $\vec{c}_n$ is the centroid of cluster $\mathcal{C}_{n}$. We then proceed to replace the unicast transmission with group broadcast ones, in which all users belonging to the same cluster $i$ receive the same personalized model associated with the centroid $\vec{c}_i$. Choosing the right value for the number of personalized streams is critical to save communication bandwidth but at the same time obtain satisfactory personalization capabilities. In the following, we experimentally show that clustering quality indicators such as the Silhouette score can be used to guide the search for a suitable number of clusters $m_t$. 
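As an illustration of this stream-reduction step, the following sketch (Python with scikit-learn; the function names are ours and the snippet is not the implementation used in our experiments) clusters the collaboration vectors with $k$-means and builds one aggregated model per centroid, which is then group-broadcast to the users of the corresponding cluster.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def reduce_personalized_streams(W, m_t):
    # W: (m x m) matrix whose i-th row is the collaboration vector w_i
    km = KMeans(n_clusters=m_t, n_init=10).fit(W)
    # centroids: the m_t aggregation rules actually implemented;
    # labels: cluster index of every user
    return km.cluster_centers_, km.labels_

def group_broadcast_models(local_models, centroids, labels):
    # one aggregated model per cluster; user i receives the model of its cluster
    Theta = np.stack(local_models)        # (m x d) locally optimized models
    cluster_models = centroids @ Theta    # (m_t x d) personalized updates
    return [cluster_models[labels[i]] for i in range(len(local_models))]
\end{verbatim}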
\subsection{Choosing the Number of Personalized Streams}
\begin{algorithm}[ht] \algsetup{linenosize=\small} \SetAlgoLined \DontPrintSemicolon \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \caption{Silhouette-based scoring} \Input{ Collaboration vectors $\{\vec{w}_{i}\}^m_{i=1}$ from Algorithm \ref{Algo2} and a trade-off function $c(k,s_k)$.} \Output{ Number of clusters $m_t$} \For{$k = 1,2,\dots,m$} { $\mathcal{C}_k\leftarrow$ $k$-means clustering of $\{\vec{w}_{i}\}^m_{i=1}$\\ $s_k\leftarrow$ the silhouette score $s(\mathcal{C}_k)$ } \textbf{return }$\argmax_{k=1,\dots,m} c(k,s_k)$ \label{alg3} \end{algorithm} \label{abc}
Choosing an insufficient number of personalized streams can yield unsatisfactory performance, while concurrently learning many models can prohibitively increase the communication load of personalized federated learning. Therefore, properly tuning this free parameter is essential to obtain a well-performing but still practical algorithm. Since we are agnostic w.r.t. the underlying data-generating distributions at the devices, there does not exist a universal number of personalized streams that fits all problems. However, we now illustrate that the silhouette coefficient, a quality measure of the clustering, provides a rule of thumb to choose the number of personalized streams. In order to compute the silhouette score of a clustering $\mathcal{C}_1,\dots,\mathcal{C}_{m_t}$, we define the mean intra-cluster distance of the collaboration vector $\vec{w}_i \in \mathcal{C}_k$ as \[ a(\vec{w}_i)=\frac{1}{|\mathcal{C}_k|-1}\sum_{\vec{w}_j\in \mathcal{C}_k,\vec{w}_j\neq\vec{w}_i}\norm{\vec{w}_j-\vec{w}_i} \] and the smallest mean distance between the collaboration vector $\vec{w}_i \in \mathcal{C}_k$ and the closest cluster as \[ b(\vec{w}_i)=\min_{\mathcal{C}_j\neq \mathcal{C}_k}\frac{1}{|\mathcal{C}_j|}\sum_{\vec{w}_j\in \mathcal{C}_j}\norm{\vec{w}_j-\vec{w}_i}. \] The average silhouette score $s$ is then defined as \[ s(\mathcal{C})=\frac{1}{m}\sum_{i=1}^m \frac{b(\vec{w}_i)-a(\vec{w}_i)}{\max\{a(\vec{w}_i),b(\vec{w}_i)\}} \] and it is a number in the range $\left[-1, 1\right]$ that increases with the quality of the clustering. In turn, a good clustering of the collaboration vectors $\{\vec{w}_i\}^m_{i=1}$ implies that users belonging to the same cluster are similar and that the centroid $\vec{c}_j$ is a good approximation of the collaboration vectors of the users in $\mathcal{C}_j$. Consequently, whenever the silhouette score is large, the loss in terms of personalization performance resulting from the reduced number of aggregation rules compared to the full-fledged personalization system is modest. For this reason, the silhouette score provides a proxy for the inference performance and, at the same time, allows trading off communication load and personalization capabilities in a principled way. In Algorithm \ref{alg3} we provide the pseudocode of the procedure that autonomously chooses the number of personalized streams $m_t$ based on a communication-personalization trade-off function $c(k,s):\mathbb{N}\times[-1,1]\to \mathbb{R}^+$ that scores a system configuration based on the number of user-centric rules and the resulting silhouette score. The function $c(k,s)$ is system dependent, typically decreasing in $k$ and increasing in $s$.
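The following sketch mirrors Algorithm \ref{alg3} (again purely illustrative: the names, the use of scikit-learn, and the example trade-off function are ours). Note that the silhouette score is well defined only for $2 \leqslant k \leqslant m-1$ clusters, so the search below is restricted to that range.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_num_streams(W, tradeoff):
    # W: (m x m) matrix of collaboration vectors; tradeoff: the function c(k, s_k)
    m = W.shape[0]
    best_k, best_utility = 1, -np.inf
    for k in range(2, m):  # silhouette is defined for 2 <= k <= m - 1
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(W)
        utility = tradeoff(k, silhouette_score(W, labels))
        if utility > best_utility:
            best_k, best_utility = k, utility
    return best_k

# example trade-off: reward clustering quality, penalize extra downlink streams
# (the coefficient 0.05 is purely illustrative)
# m_t = choose_num_streams(W, lambda k, s: s - 0.05 * k)
\end{verbatim}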
\begin{figure*} \caption{EMNIST + label shift} \label{fig:EMNIST_label} \caption{EMNIST + label and covariate shift} \label{fig:EMNIST_cov} \caption{CIFAR10 + concept shift} \label{fig:CIFAR} \caption{Average Validation Accuracy across the three different experiements} \label{acc} \end{figure*} \begin{table*} \caption{Average test accuracy of the different algorithms across the three proposed scenarios.} \centering \small \begin{tabular}{ @{}lccc @{} } \toprule \multicolumn{1}{c}{Algorithm} & \multicolumn{3}{c}{Scenario} \\\cmidrule(lr){2-4} \centering & \thead{\small EMNIST ($m$ = 20)\\ \small label shift} & \thead{\small EMNIST ($m$ = 100)\\\small covariate \& label shift} & \thead{\small CIFAR10 ($m$ = 20)\\\small concept shift} \\ \midrule Proposed $k=m$ &\textbf{79.4} ($\pm$ 4.2) & \textbf{77.9} ($\pm$ 2.7) & \textbf{47.7} ($\pm$ 2.2)\\ Proposed $k=4$ &77.8 ($\pm$ 3.9)& \textbf{79.7} ($\pm$ 2.5)& \textbf{49.1} ($\pm$ 1.4)\\ SCAFFOLD \cite{DBLP:journals/corr/abs-1910-06378} &77.2 ($\pm$ 4.0) & 72.5 ($\pm$ 2.2)& 17.5 ($\pm$ 1.8)\\ Ditto \cite{DBLP:journals/corr/abs-2012-04221} &78.3 ($\pm$ 3.9) & 74.1 ($\pm$ 2.3)& 44.1 ($\pm$ 1.4)\\ pFedMe \cite{DBLP:journals/corr/abs-2006-08848}& 77.6 ($\pm$ 4.1) & 75.2($\pm$ 4.4)&46.6 ($\pm$ 1.5)\\ Fedprox \cite{DBLP:journals/corr/abs-1812-06127}& \textbf{79.6} ($\pm$ 4.8) & 72.4 ($\pm$ 2.4) &22.3 ($\pm$ 2.2) \\ Local& 68.2 ($\pm$ 5.3) & 62.8 ($\pm$ 3.3) &38.3 ($\pm$ 1.2)\\ FedAvg \cite{mcmahan2017communication}& 76.7 ($\pm$ 4.0) & 70.5 ($\pm$ 2.2)&24.2 ($\pm$ 2.6)\\ Oracle \emph{(Upper bound)}& - & \textcolor{red}{80.7} ($\pm$ 1.8) &\textcolor{red}{49.5} ($\pm$ 1.2)\\ \bottomrule \end{tabular} \label{table1:acc} \end{table*} \begin{table*}[ht!] \centering \small \caption{ \centering Worst user performance averaged over 5 experiments in the three simulation scenarios} \label{table:worst} \begin{tabular}{@{} l c c c c c c c @{}} \toprule \multicolumn{1}{c}{Scenario} & \multicolumn{5}{c}{Algorithm} \\\cmidrule(lr){2-8} & Ditto \cite{DBLP:journals/corr/abs-2012-04221} & FedAvg \cite{mcmahan2017communication} & Oracle & CFL \cite{sattler2020clustered} & FedFOMO \cite{zhang2020personalized} & pFedMe \cite{DBLP:journals/corr/abs-2006-08848} & Proposed\\ \midrule $\text{EMNIST ($m$ = 20) label shift }$ & 72.2 & 68.9 & - & 70.3 & 70.0 & 71.5 & \textbf{73.2} $(k=20)$\\ $\text{EMNIST ($m$ = 100) covariate \& label shift}$ & 70.7 & 67.5 & 77.4 & 76.1 &73.6 &70.9& \textbf{76.4} $(k=4)$\\ $\text{CIFAR10 ($m$ = 20) concept shift}$ & 43.2 & 19.6 & 49.1 & 48.6 & 45.5 & 45.3 & \textbf{48.8} $(k=4)$\\ \bottomrule \end{tabular} \end{table*} \iffalse \begin{algorithm} \DontPrintSemicolon \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \caption{Proposed Algorithm} \Input{$K$ nodes indexed by $k$, $B$: mini-batch size, $E$: number of Epochs, $\eta$: learning rate, data set at each node $\{\mathcal{D}_k\}^K_{k=1}$ } At \textit{Orchestrator} : \textbf{CalculateWeights()} and initialize $[\theta_k^0]^K_{k=1} \gets \theta^0$\; \For {\textnormal{each round} $t = 0,1,...,T$} { $\textit{Orchestrator } \text{unicasts } \theta_k^t \text{ for each node } k$ \; \For{each node k in Parallel} { $\theta^{t+1}_k \gets \textbf{ClientUpdate}(\theta^t_k,\mathcal{D}_k)$ \; $\text{return } \theta^{t+1}_k \text{ to } \textit{orchestrator}$ \; } $\text{At }\textit{orchestrator: } \theta_k^{t+1} \gets \sum_{j=1}^K w_{k,j}\theta_j^{t+1}$ }\; \textbf{CalculateWeights(): //\emph{ Prior to federated training}} \; \textit{Orchestrator }initializes and broadcasts $\theta^0$, $\eta = 1,E = 1$ 
\For{\textnormal{each node} k in Parallel} { $\theta_k \gets \theta^0 - \nabla\ell(\theta^0,\mathcal{D}_k)$ \;$\text{Compute $\sigma^2_k$ as given in } \text{(\ref{eq9})}$\; $\text{return }$ $\theta_k$ $\text{and}$ $\sigma^2_k$ $\text{ to }$ $\textit{orchestrator}$ } $\textit{Orchestrator }$ $\text{computes }$ $w_{i,j}$ $\text{ using } \text{(\ref{eq4})} $ \;\; $\textbf{ClientUpdate($\theta^t_k,\mathcal{D}_k$):}$ // $\textbf{\textit{Run at node $k$}}$\\ $\mathcal{B}$ $\gets$ $\textnormal{Split}$ $\mathcal{D}_k$ $\textnormal{into batches of size $B$}$ \;$\theta_k \gets \theta_k^{t}$ \;\For{\textnormal{each local Epoch $i$ from} $1 \rightarrow E$} { \For{\textnormal{batch $b$} $\in \mathcal{B}$} { $\theta_k$ $\gets$ $\theta_k$ - $\eta \nabla \ell(\theta_k,b)$ } } $\text{return}$ $\theta^{t+1}_k \gets \theta_k$ $\text{to $\textit{Orchestrator}$}$ \label{Algo2} \end{algorithm} \fi \section{Experiments} \label{section5} We now provide a series of experiments to showcase the personalization capabilities and communication efficiency of the proposed algorithm. \subsection{Set-up} \label{setup} In our simulation we consider a handwritten character/digit recognition task using the EMNIST data set \cite{cohen2017emnist} and an image classification task using the CIFAR-10 data set \cite{krizhevsky2009learning}. Data heterogeneity is induced by splitting and transforming the data set differently across the group of devices. In particular, we analyze three different scenarios: \begin{itemize} \item\textbf{Character/digit recognition with user-dependent label shift} in which 10k EMNIST data points are split across 20 users according to their labels. The label distribution follows a Dirichlet distribution with parameter $\alpha =$ 0.4, as in \cite{mixture2021s,wang2020tackling}. \item\textbf{Character/digit recognition with user-dependent label shift and covariate shift} in which 100k samples from the EMNIST data set are partitioned across 100 users each with a different label distribution ($\alpha = 8$), as in the previous scenario. Additionally, users are clustered in 4 groups $\mathcal{G} = \{\mathcal{G}_1,\mathcal{G}_2,\mathcal{G}_3,\mathcal{G}_4\}$, and at each group images are rotated by $\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\}$ respectively. In particular, heterogeneity is imposed such that $p_i\left(x | y\right) \ne p_j(x | y), \,\forall \, i\in \mathcal{G}_k,j\in\mathcal{G}_{k'}, k\ne k' ,\, \forall (x,y) \in \mathcal{X}\times\mathcal{Y}$. \item\textbf{Image classification with group dependent concept shift} in which the CIFAR-10 data set is distributed across 20 users which are grouped in 4 clusters, for each group we apply a different random label permutation. More specifically, given an image $x \in \mathcal{X}$ and the labelling functions $f_i,f_j: \mathcal{X} \rightarrow \mathcal{Y}$, then $f_i(x) \ne f_j(x), \forall i\in \mathcal{G}_k\,$,$\,j\in\mathcal{G}_{k'}, k\ne k'$. \end{itemize} For each scenario, we aim at solving the task at hand by leveraging the distributed and heterogeneous data sets. We compare our algorithm against two sets of baseline algorithms. The first set includes algorithms that achieve personalization by resulting multiple personalized models. Those include CFL \cite{sattler2020clustered}, FedFomo \cite{zhang2020personalized}, pFedMe \cite{DBLP:journals/corr/abs-2006-08848} and Ditto \cite{DBLP:journals/corr/abs-2012-04221}. 
The second set of baselines includes algorithms that yield a single federated model, such as Fedprox\footnote{\textnormal{The penalization hyperparameters $\mu\textnormal{ and } \lambda = \{0.1,0.5,1\}$ were used in the simulations of Fedprox and Ditto; the best results are reported.}} \cite{DBLP:journals/corr/abs-1812-06127} and SCAFFOLD \cite{DBLP:journals/corr/abs-1910-06378}. FedAvg \cite{mcmahan2017communication} and Local training algorithms are also included for reference. All algorithms are trained using the LeNet-5 \cite{lecun1998gradient} convolutional neural network. In all scenarios and for all algorithms\footnote{\textnormal{Exception: the hyperparameters $ \eta_{global}=\eta_{local} = 0.01$, $S = 15$, $E=1$ and batch size $= 20$ were used for pFedMe, and $\eta=0.01$, $E = 5$ for SCAFFOLD.}}, we use a stochastic gradient descent optimizer with fixed learning rate $\eta=0.1$, momentum $\beta=0.9$, and the number of epochs $E=1$. \subsection{Personalization Performance} \label{sec:per_perf} We now report the average accuracy over 5 trials attained by the different approaches. We also study the personalization performance of our algorithm when we restrain the overall number of personalized streams, namely the number of personalized models that are concurrently learned. \subsubsection{Multi-Model Baseline Algorithms} In Fig.\ref{acc} and Table \ref{table1:acc}, we report the average validation accuracy of the baseline algorithms that yield multiple personalized models, alongside FedAvg, Fedprox, SCAFFOLD and local training. In the EMNIST label shift scenario (Fig.\ref{fig:EMNIST_label}), we first notice that harvesting intelligence from the data sets of other users amounts to a large performance gain compared to the localized learning strategy. This indicates that data heterogeneity is moderate and collaboration is fruitful. Nonetheless, personalization can still provide gains compared to FedAvg. Our solution yields a validation accuracy which is increasing in the number of personalized streams. Allowing maximum personalization, namely a different model for each user, we obtain a 3\% gain in the average accuracy compared to FedAvg. CFL is not able to transfer intelligence among different groups of users and attains performance similar to FedAvg. This behaviour showcases the importance of soft clustering compared to hard clustering for the task at hand. We find that FedFOMO, despite excelling in the case of strong statistical heterogeneity, fails to harvest intelligence in the label shift scenario. In Fig.\ref{fig:EMNIST_cov} we report the personalization performance for the second scenario. In this case, we also consider the oracle baseline, which corresponds to running 4 different FedAvg instances, one for each cluster of users, as if the 4 groups of users were known beforehand. Different from the previous scenario, the additional shift in the covariate space renders personalization necessary to attain satisfactory performance. The oracle training largely outperforms FedAvg. Furthermore, as expected, our algorithm matches the oracle final performance when the number of personalized streams is 4 or more. Also, CFL and FedFOMO can correctly identify the 4 clusters. However, the former exhibits slower convergence due to the hierarchical clustering over time while the latter plateaus at a lower average accuracy level. We turn now to the more challenging CIFAR-10 image classification task.
In Fig.\ref{fig:CIFAR} we report the average accuracy of the proposed solution for a varying number of personalized streams, the baselines and the oracle solution. As expected, the label permutation renders collaboration extremely detrimental as the different learning tasks are conflicting. As a result, local learning provides better accuracy than FedAvg. On the other hand, personalization can still leverage data among clusters and provide gains also in this case. Our algorithm matches the oracle performance for a suitable number of personalized streams. This scenario is particularly suitable for hard clustering, which isolates conflicting data distributions. As a result, CFL matches the proposed solution. FedFOMO promptly detects clusters and therefore quickly converges, but it attains lower average accuracy compared to the proposed solution. On the other hand, Ditto and pFedMe perform relatively better than the aforementioned two approaches, given their personalization capabilities. However, they fall short because they leverage collaboration among users only towards training the global model, disregarding the potential generalization gain that could be achieved by enabling collaboration among statistically similar users towards refining their local personalized models. \subsubsection{Single-Model Baseline Algorithms} Although the algorithms that yield a single model (i.e., Fedprox and SCAFFOLD) excel in the label shift setting (Table \ref{table1:acc}), our proposed algorithm stands out in the two other scenarios. This stems from the inadequacy of a single global model in addressing the conflicting nature of the target tasks in those scenarios. \begin{figure*} \caption{\centering EMNIST label \& covariate shift (100 clients).} \label{fig:clusters1} \caption{\centering CIFAR10 concept shift (20 clients).} \label{fig:clusters2} \caption{\centering Clusters formed by our proposed algorithm in the EMNIST label and covariate shift, and CIFAR10 concept shift scenarios. Each 2D point denotes $w_{i,j}$.} \label{fig:clusters} \end{figure*} \subsubsection{Average Worst Performance} The performance reported so far is averaged over users and therefore fails to capture the existence of outliers performing worse than average. To assess the fairness of the training procedure, in Table \ref{table:worst} we report the worst user performance in the federated system across the different algorithms. The proposed approach produces models with the highest worst-case accuracy in all three scenarios. \subsubsection{Inter-Cluster Collaboration} We illustrate the clustering performance of our proposed solution in the EMNIST covariate shift and the CIFAR10 concept shift scenarios (Experiments two and three), with four clusters each, in Fig. \ref{fig:clusters}. Interestingly, we notice that in the EMNIST covariate shift experiment, our clustering algorithm can detect similarities among the different groups of users, leveraging inter-cluster collaboration among them, unlike hard clustering algorithms \cite{sattler2020clustered}. This stems from the fact that some digit and letter features are invariant to the 180$^{\circ}$ rotation applied (e.g., the letters $X, Z, O, N$, etc., and the digits $\{0,1,8\}$).
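To make the user grouping concrete, the following is a minimal sketch (illustrative only, not the implementation used in our experiments) of how per-user aggregation-weight vectors could be grouped into $k$ personalized streams with $k$-means, together with the silhouette score discussed in the next subsection; the stand-in matrix \texttt{W} of collaboration coefficients $w_{i,j}$ and the use of \texttt{scikit-learn} are assumptions made only for this example.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
m = 20                                   # number of users
# Illustrative stand-in for the collaboration matrix: row i holds w_{i,1},...,w_{i,m}.
W = rng.dirichlet(np.ones(m), size=m)

def cluster_users(W, k):
    """Group users by their aggregation-weight vectors into k personalized streams."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(W)
    return km.labels_, km.cluster_centers_

# Silhouette score for each candidate number of streams (requires 2 <= k < m).
for k in range(2, 8):
    labels, _ = cluster_users(W, k)
    print(k, round(silhouette_score(W, labels), 3))
\end{verbatim}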
\subsection{Silhouette Score} \begin{figure*} \caption{EMNIST label shift.} \label{fig:EMNIST_labelsol} \caption{EMNIST label \& covariate shift.} \label{fig:EMNIST_covsol} \caption{CIFAR10 concept shift.} \label{fig:CIFARsol} \caption{\centering Average silhouette scores of the $k$-means clustering in the three scenarios. In the last two scenarios, in which users inherently belong to 4 different clusters, the scores indicate the necessity of at least 4 personalized streams.} \label{fig:Silhouette} \end{figure*} In Fig. \ref{fig:Silhouette} we plot the average silhouette score obtained by the $k$-means algorithm when clustering the federated users based on the procedure proposed in Sec. \ref{Clustering}. In the label shift scenario, for which we have seen that a universal model performs almost as well as the personalized ones, the silhouette scores monotonically decrease with $k$. In fact, in this simulation setting, a natural cluster-like structure among clients' tasks does not exist. On the other hand, in the covariate shift and the concept shift scenarios, the silhouette score peaks around $k=4$. In Sec. \ref{sec:per_perf} this was shown to be the minimum number of personalized models necessary to obtain satisfactory personalization performance in the system. This behaviour of the silhouette score is expected and desired: in this case, the number of clusters exactly matches the number of distinct underlying tasks among the FL participants induced by the rotation of the covariates and the permutation of the labels. We then conclude that the silhouette score provides meaningful information to tune the number of user-centric aggregation rules before training. \label{variance} \subsection{Communication Efficiency} \begin{figure*} \caption{$\rho=4$, $T_{min}=T_{dl}=\frac{1}{\mu}$.} \label{fig:rho4} \caption{$\rho=2$, $T_{min}=T_{dl}$, $\frac{1}{\mu}=0$.} \label{fig:rho2} \caption{$\rho=1$, $T_{min}=T_{dl}$, $\frac{1}{\mu}=0$.} \label{fig:rho3} \caption{Evolution of the average validation accuracy against time normalized w.r.t. $T_{dl}$.} \label{fig:comm_eff} \end{figure*} Personalization comes at the cost of an increased communication load in the downlink transmission from the PS to the federated users. To compare the algorithm convergence times, we parametrize the distributed system using two parameters. We define by $\rho=\frac{T_{ul}}{T_{dl}}$ the ratio between the model transmission time in uplink (UL) and downlink (DL). Typical values of $\rho$ in wireless communication systems are in the $[2,4]$ range because of the larger transmitting power of the base station compared to the edge devices. Furthermore, to account for unreliable computing devices, we model the random computing time $T_i$ at each user $i$ by a shifted exponential r.v. with survival function \[ P[T_i>t]=1-\mathbbm{1}(t\geq T_{min})\left[1-e^{-\mu(t-T_{min})}\right] \] where $T_{min}$ represents the minimum possible computing time and $1/\mu$ is the average additional delay due to random computation impairments. Therefore, for a population of $m$ devices, we have \[ T_{comp}=\mathbb{E}\left[\max\{T_1,\dots,T_m\}\right]=T_{min}+\frac{H_m}{\mu} \] where $H_m$ is the $m$-th harmonic number. To study the communication efficiency we consider the simulation scenario with the EMNIST data set with label and covariate shift. In Fig. \ref{fig:comm_eff} we report the time evolution of the validation accuracy in three different systems:
a wireless system with slow UL ($\rho=4$) and unreliable nodes ($T_{min}=T_{dl}=\frac{1}{\mu}$), a wireless system with fast uplink ($\rho=2$) and reliable nodes ($T_{min}=T_{dl}$, $\frac{1}{\mu}=0$), and a wired system ($\rho=1$, symmetric UL and DL) with reliable nodes ($T_{min}=T_{dl}$, $\frac{1}{\mu}=0$). The increased DL cost is negligible for wireless systems with strongly asymmetric UL/DL rates, and in these cases the proposed approach largely outperforms the baselines. In the case of more balanced UL and DL transmission times ($\rho \in [1,2]$) and reliable nodes, it becomes instead necessary to properly choose the number of personalized streams to render the solution practical. Nonetheless, the proposed approach remains the best even in this case for $k=4$. Note that FedFOMO incurs a large communication cost as personalized aggregation is performed on the client side. \begin{figure*} \caption{\centering EMNIST label shift} \label{fig:1} \caption{\centering CIFAR10 label \& covariate shift} \label{fig:2} \caption{Comparison between the proposed algorithm and the parallel user-centric federated learning approach. The validation accuracy is averaged over 5 experiment runs.} \label{fig:nxn} \end{figure*} \subsection{Comparison with Parallel User-centric FL} Even if the proposed user-centric aggregation rules outperform state-of-the-art personalized FL approaches, the resulting optimization procedure departs from standard FL in the following sense: in the typical FL framework, at each communication round $t$, the PS aggregates models that were locally trained, at each participating device, starting from the same model $\theta^{t-1}$. On the contrary, in our proposed framework, devices may optimize different models depending on the specific user-centric aggregation rule they have been assigned to. This design choice is motivated by the assumption that the models of statistically similar users propagate towards the same neighbourhood of the parameter space during the optimization \cite{zhang2020personalized}. As a result, in the proposed aggregation rule, models that are largely weighted, and therefore associated with similar users, were locally optimized starting from similar initial parameters. Furthermore, if we were to adhere to the traditional FL procedure and produce an exact minimizer of (\ref{centricloss}), we would have to run in parallel as many FL instances as the number of personalized streams $m_t$ and incur an $m_t$-fold computation and uplink communication load. To assess the quality of our assumption, we consider running in parallel $m_t$ collaborative FL instances employing the proposed user-centric weights and solving exactly $(\ref{centricloss})$ for each different aggregation rule. At each communication round, each user also optimizes the user-centric models of the other $m_t-1$ personalized streams, which are then used at the PS to apply the user-centric aggregation rules \begin{equation} \begin{aligned} \centering \theta^{{t}}_i \leftarrow \sum^m_{j=1}w_{i,j}\theta^{{t-1/2}}_{i,j} \hspace{0.7cm} \text{for } i=1,2,\dots,m. \end{aligned} \label{eq13} \end{equation} Note that the aggregation rule in (\ref{eq13}) is different from the one in (\ref{eq2}), as $\theta^{{t-1/2}}_{i,j}$ denotes the update of user $j$ to the model of user $i$ obtained by locally optimizing $\theta_j^{t-1}$. We experiment using the EMNIST data set with label shift and the CIFAR10 data set with covariate and label shift.
We set $m_t=m=20$ and use the same neural network model and settings indicated in Sec. \ref{section5}. In Fig. \ref{fig:nxn}, we report the performance of the parallel collaborative FL approach compared to our personalization strategy. For reference, we also report the performance of the FedAvg, local learning and oracle baselines. First, we notice that the performance of the fully collaborative solution serves as an upper bound for our personalization approach and that the oracle slightly outperforms the fully collaborative approach, which highlights the sub-optimality of our heuristic weighting scheme. However, the slight performance gain of the fully collaborative approach compared to our personalization strategy comes at the expense of an $m_t$ times larger uplink communication load and computation cost at each edge device. These empirical results support our assumption: even if the updated models are trained starting from different points in the parameter space at each communication round, the user-centric weighting scheme can direct statistically similar models towards the same neighbourhood of the loss landscape during training. \subsection{Variance Computation: Mini-batch Size} As mentioned in Section \ref{collaboration}, the mini-batch sizes chosen to calculate the variances play an essential role in the quality of the derived weights, i.e., their ability to couple statistically similar users in the federated system. In Fig. \ref{fig:var}, we report the validation accuracy attained in EMNIST label shift and covariate shift experiments. In both experiments, we randomly split 100k EMNIST data points across 100 users, i.e., 1000 samples per user. Heterogeneity is introduced in both settings akin to the ``label shift'' and ``label and covariate shift'' settings in Section \ref{setup}, respectively. We vary the mini-batch sizes used to calculate the variances from 100 to 660 samples to explore the effect of this parameter on the validation accuracy of our personalization strategy in both scenarios. First, we note that according to (\ref{eq9}), decreasing the mini-batch size yields an increase in the variance value as a result of the noisy gradients obtained compared to the average gradient computed over each user data set. In this case, our proposed aggregation rule becomes similar to FedAvg, enabling collaboration among all users in the federated system, while still managing to softly couple statistically similar users under the assumption that $ \E_{\mathcal{D}_i,\mathcal{D}_j\sim P_i}\left[\Delta_{i,j}\right] \le \E_{\mathcal{D}_i\sim P_i, \mathcal{D}_k\sim P_k} \left[\Delta_{i,k}\right]$ given that $ d_\mathcal{F}(P_i,P_k) > 0 $. This condition is favourable in the label shift setting while being detrimental in the extremely heterogeneous covariate shift experiment, as it enables collaboration among users with competing tasks. Our claim is verified by the performance attained by our personalization rule in Fig. \ref{fig:var}, achieving a high validation accuracy in the label shift setting, while suffering in the covariate shift experiment with a performance comparable to that of FedAvg attained in Fig. \ref{fig:EMNIST_cov} $(\sim 70.5 \,\%)$. However, as we increase the mini-batch size, the variances converge towards zero and our personalization algorithm degenerates to local training, which is detrimental in both settings.
Therefore, we conclude that the mini-batch size can be seen as a hyper-parameter for our algorithm, to be tuned according to the local data set size and the type of heterogeneity present across the learners. In our experiments presented in Fig. \ref{acc}, we set the mini-batch size $n_k = 100$ for the label shift experiment, and $n_k = N/3$ for the other two experiments (EMNIST covariate shift and CIFAR10 concept shift), where $N$ denotes the local data set size of each user. \begin{figure} \caption{\centering\small Effect of the mini-batch sizes on the maximum validation accuracy attained: a proxy for the quality of the calculated collaboration coefficients.} \label{fig:var} \end{figure} \section{Conclusion} \label{section6} In this work, we have presented a novel FL personalization framework that exploits multiple user-centric aggregation rules to produce personalized models. The aggregation rules are based on user-specific mixture coefficients that can be computed during one communication round prior to federated training and are designed based on an excess risk upper bound of the weighted aggregated loss minimizer. Additionally, in order to limit the communication burden of personalization, we have proposed a $k$-means clustering algorithm to lump together users based on their similarity and serve each group of similar users with a single personalized model. In order to effectively trade communication resources for personalization capabilities, we have proposed to use the silhouette score to tune the number of user-centric aggregation rules at the PS before training commences. We have studied the performance of the proposed solution across different tasks. Overall, our solution yields personalized models with higher testing accuracy while at the same time being more communication-efficient compared to other state-of-the-art personalized FL baselines. \section{Appendix} \subsection*{Proof of Theorem \ref{Th1}} \label{A} Let $f^*=\argmin_{f\in \mathcal{F}} E_{z\sim P_i}[\ell(f,z)]$ and bound the excess risk of $\hat{f}_{\vec{w}_i}$ as \begin{align*} &Exc(\hat{f}_{\vec{w}_i},P_i)=E_{z\sim P_i}[\ell(\hat{f}_{\vec{w}_i},z)]-E_{z\sim P_i}[\ell(f^*,z)]\\ &\leq E_{z\sim P_{\vec{w}_i}}[\ell(\hat{f}_{\vec{w}_i},z)]-E_{z\sim P_{\vec{w}_i}}[\ell(f^*,z)] + 2d_{\mathcal{F}}(P_i,P_{\vec{w}_i})+2\lambda\\ &\leq E_{z\sim P_{\vec{w}_i}}[\ell(\hat{f}_{\vec{w}_i},z)]-\inf_{f\in \mathcal{F}}E_{z\sim P_{\vec{w}_i}}[\ell(f,z)] \\&+2 \sum_{j=1}^m w_{i,j}d_{\mathcal{F}}(P_i,P_{j})+2\lambda \end{align*} where $\lambda=\min_{f\in \mathcal{F}}\left( E_{z\sim P_i}[\ell(f,z)]+E_{z\sim P_{\vec{w}_i}}[\ell(f,z)]\right)$. We recognize the estimation error of $\hat{f}_{\vec{w}_i}$ w.r.t. the measure $P_{\vec{w}_i}$, which can be bounded following fairly standard arguments. In particular, \[ E_{z\sim P_{\vec{w}_i}}[\ell(\hat{f}_{\vec{w}_i},z)]-\inf_{f\in \mathcal{F}}E_{z\sim P_{\vec{w}_i}}[\ell(f,z)]\leq 2 \Delta(\mathcal{G},Z) \] where \[\Delta(\mathcal{G},Z)=\sup_{g\in \mathcal{G}} \left|E_{P_{\vec{w}_i}}[g(Z)]-\sum^m_{j=1}\frac{w_{i,j}}{n_j} \sum_{z\in \mathcal{D}_j}g(z)\right| \] is the uniform deviation term and \begin{equation*} \mathcal{G}=\left\{Z \mapsto \ell(f,Z): f \in \mathcal{F}\right\} \end{equation*} is the class resulting from the composition of the loss function $\ell(\cdot)$ and $\mathcal{F}$.
The uniform deviation term can be bounded in different ways, depending on the type of knowledge available about the random variable $g(Z)$; in the following we assume that the loss function is bounded with range $B$ and we exploit Azuma's inequality. In particular, the Doob martingale associated with the weighted loss will still have increments bounded by $\frac{w_{i,j}}{n_j} B$, depending on which loss term the increment is associated with. Recognizing this, we can then directly apply Azuma's concentration bound and state that w.p. $1-\delta$ the following holds \[ \Delta(\mathcal{G},Z)\leq E_P[\Delta(\mathcal{G},Z)]+B\sqrt{\sum^m_{j=1}\frac{w_{i,j}^2}{n_j}\log\left(\frac{2}{\delta}\right)}. \] Finally, the expected uniform deviation can be bounded by the Rademacher complexity as follows \[ E_P[\Delta(\mathcal{G},Z)]\leq2\text{Rad}(\mathcal{G}) \] where \[ \text{Rad}(\mathcal{G})=E_{\vec{\sigma},\mathcal{D}_1,\dots,\mathcal{D}_m}\left[\sup_{g\in \mathcal{G}} \sum^m_{j=1}\frac{w_{i,j}}{n_j}\sum_{l=1}^{n_j}\sigma_{l,j}\, g(Z_{l,j})\right]. \] By a direct application of Massart's and Sauer's lemmas we obtain \begin{align*} &\text{Rad}(\mathcal{G})\leq \sqrt{\sum^m_{j=1}\frac{w_{i,j}^2}{n_j}}\\&\times\sqrt{\frac{2\text{VCdim}(\mathcal{G})\left( \log(e\sum_j n_j)+\log(\text{VCdim}(\mathcal{G}))\right)}{\sum_j n_j}}. \end{align*} Combining everything, we obtain the final result. \subsection*{Proof of Theorem \ref{Th2}} Thanks to the upper bound on the target domain risk and the fact that the sum of two sub-Gaussian random variables of parameter $\sigma$ is also sub-Gaussian with parameter $2\sigma$, we can decompose the excess risk as \begin{align*} &Exc(\hat{f}_{\vec{w}_i},P_i)=E_{z\sim P_i}[\ell(\hat{f}_{\vec{w}_i},z)]-\inf_{f\in \mathcal{F}}E_{z\sim P_i}[\ell(f,z)]\\ &=E_{z\sim P_i}[\ell(\hat{f}_{\vec{w}_i},z)-\ell(f^*,z)]\\ &\leq E_{z\sim P_{\vec{w}_i}}[\ell(\hat{f}_{\vec{w}_i},z)-\ell(f^*,z)]+2\beta\sigma^2+\frac{D_{JS}(P_i||P_{\vec{w}_i})}{\beta}. \end{align*} From the convexity of the KL-divergence we can bound the Jensen-Shannon divergence as follows {\small \begin{align*} &D_{JS}(P_i||P_{\vec{w}_i}) =\frac{1}{2}KL\left(P_i||\frac{P_i+P_{\vec{w}_i}}{2}\right)+\frac{1}{2}KL\left(P_{\vec{w}_i}||\frac{P_i+P_{\vec{w}_i}}{2}\right)\\ &=\frac{1}{2}KL\left(P_i||\frac{\sum_jw_{i,j}(P_i+P_j)}{2}\right)\\&+\frac{1}{2}KL\left(\sum_jw_{i,j}P_j||\frac{\sum_jw_{i,j}(P_i+P_j)}{2}\right)\\&\leq \frac{1}{2}\sum_j w_{i,j} \left( KL\left(P_i||\frac{(P_i+P_j)}{2}\right)+KL\left(P_j||\frac{(P_i+P_j)}{2}\right)\right)\\ &=\sum_j w_{i,j}D_{JS}(P_i||P_j). \end{align*} Plugging this back into the previous expression and minimizing over $\beta$ (the minimum of $2\beta\sigma^2+A/\beta$ over $\beta>0$ equals $2\sigma\sqrt{2A}$, attained at $\beta=\sqrt{A/(2\sigma^2)}$, with $A=\sum_j w_{i,j}D_{JS}(P_i||P_j)$), we obtain \begin{align*} Exc(\hat{f}_{\vec{w}_i},P_i) &\leq E_{z\sim P_{\vec{w}_i}}[\ell(\hat{f}_{\vec{w}_i},z)]-\inf_{f\in \mathcal{F}}E_{z\sim P_{\vec{w}_i}}[\ell(f,z)]\\&+2\beta\sigma^2+\frac{\sum_j w_{i,j}D_{JS}(P_i||P_j)}{\beta} \\ &\leq E_{z\sim P_{\vec{w}_i}}[\ell(\hat{f}_{\vec{w}_i},z)]-\inf_{f\in \mathcal{F}}E_{z\sim P_{\vec{w}_i}}[\ell(f,z)]\\&+2\sigma\sqrt{2\sum^m_{j=1}w_{i,j}D_{JS}(P_i||P_j)}. \end{align*}} We recognize the estimation error term and bound it as in the proof of Theorem \ref{Th1} to obtain the final result. Moreover, for $B$-bounded random variables one can take $\sigma=B/2$. \end{document}
\begin{document} \title{\textbf{A test for comparing conditional ROC curves with multidimensional covariates}} \definecolor{dgreen}{rgb}{0.,0.5,0.} \theoremstyle{plain} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \newtheorem{cor}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{mydef}{Definition} \newtheorem{remark}{Remark} \begin{abstract} The comparison of Receiver Operating Characteristic (ROC) curves is frequently used in the literature to compare the discriminatory capability of different classification procedures based on diagnostic variables. The performance of these variables can sometimes be influenced by the presence of other covariates, which should therefore be taken into account when making the comparison. A new non-parametric test is proposed here for testing the equality of two or more dependent ROC curves conditioned to the value of a multidimensional covariate. Projections are used to transform the problem into a one-dimensional one that is easier to handle. Simulations are carried out to study the practical performance of the new methodology. A real data set of patients with pleural effusion is analysed to illustrate this procedure. \end{abstract} \textbf{Keywords}: bootstrap, covariates, hypothesis testing, projections, ROC curves. \section{Introduction} \label{sec1} In any classification problem such as a diagnostic method --in which the aim is to discriminate between two populations, usually identified as the healthy population and the diseased population-- the main concern is to minimize the number of subjects that are misclassified. Receiver Operating Characteristic (ROC) curves are commonly used in this context for studying the behaviour of the classification variables \citep[see, for example, the monograph of][as an introduction to the topic]{Pepe2003}. They combine the notions of sensitivity (the ability of classifying a diseased patient as diseased) and specificity (the ability of classifying a healthy individual as healthy), two measurements that can be expressed in terms of the cumulative distribution functions of the diagnostic variables of the diseased and the healthy populations. When there is more than one variable for diagnosing a certain disease, one can compare their respective ROC curves in order to decide whether their discriminatory capability is different or not. This is what happens in the medical example that we will be using in this paper for illustration purposes, a real data set containing the information of patients with pleural effusion. In this data set there are two variables (the carbohydrate antigen 152 and the cytokeratin fragment 21-1) that can be used for deciding whether that pleural effusion is due to the presence of a malignant tumour or not. The objective of the analysis will be to compare the diagnostic capability of those markers. There are several methodologies discussed in the literature for making that sort of comparison \citep[for a review of such methodologies, see][]{Fanjul-Hevia2018}, although most of them do not consider the possible effect that the presence of covariates can have on the performance of the test. In the example provided, apart from the diagnostic variables there are other covariates such as the age or the neuron-specific enolase of the patients.
It is important to take this information into account, because the diagnostic capability of a marker may change with the value of a covariate \citep{Pardo-Fernandez2014}. In this paper the aim is to propose a test to compare ROC curves that includes the presence of a multidimensional covariate in the analysis. One way of introducing the effect of the covariates into the study is by using the \textit{conditional ROC curve}. If we consider $Y^F$ and $Y^G$ as the continuous diagnostic markers in the diseased and healthy populations, respectively, $\bm{X}^F = {(X_1^F,\dots, X_d^F)}' $ as the continuous $d-$dimensional covariate of the diseased population and $\bm{X}^G = {(X_1^G,\dots, X_d^G)}' $ as the continuous $d-$dimensional covariate of the healthy population, then, given a fixed value $\bm{x} = {(x_1,\dots, x_d)}' \in \bm{R_{\bm{X}}}$ (where $\bm{R_{\bm{X}}}$ is the intersection of $\bm{R_{{X^F}}}$ and $\bm{R_{{X^G}}}$, the supports of $\bm{X}^F$ and $\bm{X}^G$, and is assumed to be non-empty), the conditional ROC curve is defined as \begin{eqnarray} ROC^{\bm{x}}(p) = 1- F(G^{-1} (1-p|\bm{x})|\bm{x}), \;p \in (0,1), \label{eq:defROCx} \end{eqnarray} where $F(y|\bm{x}) = P(Y^F\leq y | \bm{X}^F=\bm{x})$, and $G(y|\bm{x}) = P(Y^G\leq y | \bm{X}^G=\bm{x})$. By comparing these conditional ROC curves instead of the standard ROC curves it is possible to incorporate the potential effect of the covariates in the analysis of the equivalence of two or more methods of diagnosis. A test for performing this comparison is proposed in \cite{Fanjul-Hevia2019} for the case of a continuous one-dimensional covariate. The objective here is to extend that methodology to the case in which we have a multidimensional covariate. Thus, the aim is to test, given a certain $\bm{x} \in \bm{R_X}$, \begin{eqnarray} H_0: ROC_1^{\bm{x}}(p) = \dots = ROC_K^{\bm{x}}(p) \; \text{ for all } p \in (0,1), \label{eq:H0X} \end{eqnarray} where $K$ is the number of diagnostic markers (and thus, ROC curves) that are being compared. In this context we would have $K$ diagnostic variables and one $d-$dimensional covariate in the diseased population, $(\bm{X}^F,Y_1^F,\cdots,Y_K^F)$, and similar variables in the healthy population, $(\bm{X}^G,Y_1^G,\cdots,Y_K^G)$. In practice this kind of test could help to design a more personalised diagnostic method based on the covariate values of each patient. With this methodology, in the medical example at hand we could determine whether the carbohydrate antigen 152 and the cytokeratin fragment 21-1 are equally suitable for the diagnosis of a patient with a certain age and a certain enolase value. In order to be able to make this comparison, we are going to rely on the estimation of the corresponding conditional ROC curves. There is a wide range of estimation methods in the literature: some of them estimate the conditional distribution functions involved in the definition of the conditional ROC curve, while others use regression functions to include the effect of the covariates (following direct or indirect approaches). See \cite{Pardo-Fernandez2014} for a further review of this topic. In \cite{Fanjul-Hevia2019} the estimation of the conditional ROC curve is based on the indirect (or induced) regression methodology, which incorporates the covariate information through regression models by considering the effect of those covariates on the diagnostic marker in the healthy and diseased populations separately. However, this method was originally designed for one single covariate.
One could think of extending that methodology by changing the estimator of the conditional ROC curve for another capable of handling multidimensional covariates. Nevertheless, there are not many methods in the literature capable of considering more than one covariate when estimating the conditional ROC curve, and most of them have some parametric assumptions that we would like to avoid making. See \cite{Carvalho2013} as an example of a non-parametric Bayesian model to estimate the conditional distribution functions involved in the ROC curves, \cite{Rodriguez-Alvarez2011a} or \cite{Rodriguez-Alvarez2018} as an example of a direct ROC regression model or \cite{RodriguezMartinez2014} as an example of induced methodology (framed in a Bayesian setting). In our case we will be following a frequentist approach. The tests related to multidimensional data tend to become less powerful when the dimension of the problem increases. This is why, in this paper, the problem of comparing conditional ROC curves is first transformed using projections in such a way that the multidimensional problem becomes a unidimensional problem easier to handle. This idea has been applied several times in the literature for reducing the dimension in goodness-of-fit problems \citep[see, for example,][]{Escanciano2006,GarciaPortugues2014,Patilea2016}, but, to the best of our knowledge, it is the first time that it is applied on an ROC curve setting. In the last few years random projections are increasingly being used as a way to overcome the curse of dimensionality. The characterization of the multidimensional distribution of the original data by the distribution of the randomly projected unidimensional data is what allows for the reduction of the dimension. To that end, in Section \ref{sec2} we show how (\ref{eq:H0X}) can be transformed in a test with one-dimensional covariates by using projections. Then, a methodology is proposed for testing that equivalent hypothesis. In Section \ref{sec3} the results from a simulation study show the practical performance of the test in terms of level approximation and power. The procedure is illustrated in Section \ref{sec4} by analysing the real data set containing information of patients with pleural effusion. \section{Methodology} \label{sec2} This section is divided in three subsections. In the first one, \ref{sec2.1}, we present a result that allows us to transform the problem discussed in (\ref{eq:H0X}) into an equivalent one, easier to handle, by using projections to reduce the multidimensional role of the covariate to a unidimensional one. In subsection \ref{sec2.2} we show a methodology to test the equality of conditional ROC curves on a unidimensional problem \citep[based on the one proposed in][]{Fanjul-Hevia2019}. Finally, in \ref{sec2.3}, we combine that methodology with the result obtained in \ref{sec2.1} to solve our original problem with multidimensional covariates. Both sections \ref{sec2.2} and \ref{sec2.3} include the statistic proposed to perform the test and a bootstrap algorithm to approximate its distribution. \subsection{An equivalent problem} \label{sec2.1} In order to present the transformation of the problem, first we need to introduce the definition of \textit{the ROC curve conditioned to a pair} $({x^F}, {x^G}) \in R_{{X^F}}\times R_{{X^G}}$: \begin{eqnarray} ROC^{{x^F},{x^G}} (p) = 1- F(G^{-1}(1-p|{x^G})|{x^F}), \; p \in (0,1). 
\label{eq:defROCxFxG} \end{eqnarray} This concept is very similar to the conditional ROC curve (\ref{eq:defROCx}): the only difference is that this new definition allows us to condition on different values for the diseased and healthy populations. In this case $x^F$ and $x^G$ are unidimensional, but the definition could be applied in a multidimensional case. Even if the interpretability of this new ROC curve is not very clear in practice, theoretically it does not present any problems (nor does its estimation), since the healthy and diseased populations are always considered to be independent. The following result is the basis for developing the test for comparing ROC curves with multidimensional covariates. It borrows the ideas in \cite{Escanciano2006} of using projections for reducing the dimension of the covariate in a regression context. Since here we are dealing with ROC curves, the dimension reduction is less straightforward and some adjustments are required, as each ROC curve depends on two cumulative distribution functions. To the best of our knowledge, the idea of using projections has not been considered in the context of ROC curves. Given $\bm{x}, \bm{\beta} \in \mathbb{R}^d$, $\bm{x'\beta}$ denotes the scalar product of the vectors $\bm{x}$ and $\bm{\beta}$. From now on, all the vectors representing the projections will be considered to be contained in the $d-$dimensional unit sphere $\mathbb{S}^{d-1} = \{\bm{\beta} \in \mathbb{R}^d : ||\bm{\beta}|| = 1\}$. This way we ensure that all possible directions are equally important. \begin{lemma} \label{lemma0} Assume $\mathbb{E}|Y^F_k|<\infty$ and $\mathbb{E}|Y^G_k|<\infty$ for every $k \in\{1,\ldots,K\}$. Then, given a certain $\bm{x} \in \bm{R_X}$, and assuming dependence among the ROC curves (meaning that the covariate is common to all the $K$ curves considered), \[ROC_1^{\bm{x}} (p)= \dots = ROC_K^{\bm{x}}(p) \; \text{ for all } p \in (0,1) \; a.s.\] if and only if \[ROC_1^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}} (p) = \dots = ROC_K^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}} (p) \; \text{ for all } p \in (0,1) \; a.s. \; \text{ for any } \bm{\beta}^F, \bm{\beta}^G,\] where $\bm{\beta}^F$ and $\bm{\beta}^G$ are $d-$dimensional coordinates in $\mathbb{S}^{d-1}$ that represent the directions of the projections. \end{lemma} The proof of this lemma can be found in the Appendix. Note that ${(\bm{\beta}^F)}' \bm{x}$ and ${\left(\bm{\beta}^G\right)}' \bm{x}$ are one-dimensional values. By using these ROC curves conditioned to a pair of projected covariates (as defined in \ref{eq:defROCxFxG}), the problem is reduced to a one-dimensional covariate conditional ROC curve comparison test for each possible direction $\bm{\beta}^F$ and $\bm{\beta}^G$. Thus, taking advantage of the result in Lemma~\ref{lemma0}, instead of testing for the null hypothesis (\ref{eq:H0X}), we may use this equivalent formulation to develop a methodology that, given a certain $\bm{x} \in R_{\bm{X}}$, tests \begin{eqnarray} H_0: ROC_1^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}} (p) = \dots = ROC_K^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}} (p) \; \text{ for all } p \in (0,1) \; \forall \bm{\beta}^F, \bm{\beta}^G \label{eq:H0forallproj} \end{eqnarray} against the general alternative $H_1:$ $H_0$ is not true. The notation $\forall$ will be used instead of `for any' to shorten the expression (this applies mainly in the proofs found in the Appendix).
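As a practical aside, directions can be drawn uniformly from $\mathbb{S}^{d-1}$ by normalizing standard Gaussian vectors. The following minimal sketch (assuming \texttt{numpy}; it is not part of the methodology itself) illustrates how the projected one-dimensional conditioning values $(\bm{\beta}^F)'\bm{x}$ and $(\bm{\beta}^G)'\bm{x}$ are obtained for a given $\bm{x}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def random_directions(n_beta, d):
    # Normalizing i.i.d. standard normal vectors yields the uniform law on S^{d-1}.
    b = rng.standard_normal((n_beta, d))
    return b / np.linalg.norm(b, axis=1, keepdims=True)

d, n_beta = 3, 5
x = np.array([0.5, 0.6, 0.5])          # conditioning value used later in the simulations
beta_F = random_directions(n_beta, d)  # directions for the diseased population
beta_G = random_directions(n_beta, d)  # directions for the healthy population

proj_F = beta_F @ x   # one-dimensional conditioning values (beta^F)' x
proj_G = beta_G @ x   # one-dimensional conditioning values (beta^G)' x
\end{verbatim}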
In a first step, a statistic for testing the equivalence of these ROC curves is presented for a certain pair of fixed projections, and then that statistic is adapted to include all possible directions. \subsection{Test for a one-dimensional covariate} \label{sec2.2} The objective in this section is to develop a test for the equivalent problem presented in Lemma~\ref{lemma0} for a fixed pair of projections $\bm{\beta}^F$ and $\bm{\beta}^G$. Here a test is presented for comparing two or more dependent ROC curves conditioned to two one-dimensional values. Given the pair $(x^F, x^G) \in R_{X^F}\times R_{X^G}$, the aim is then to test \begin{eqnarray} H_0: ROC_1^{x^F, x^G} (p) = \cdots = ROC_K^{x^F, x^G} (p) \; \text{ for all } p \in (0,1) \label{eq:H0xFxG} \end{eqnarray} against the general alternative $H_1: H_0$ is not true. The samples available in this context are: \begin{itemize} \item[-] $\{(X_{i}^F,Y_{1,i}^F,\dots, Y_{K,i}^F)\}_{i=1}^{n^F}$ an i.i.d. sample from the distribution of $(X^F,Y_1^F,\dots, Y_K^F)$, \item[-] $\{(X_{i}^G,Y_{1,i}^G,\dots, Y_{K,i}^G)\}_{i=1}^{n^G}$ an i.i.d. sample from the distribution of $(X^G,Y_1^G, \dots, Y_K^G)$, \end{itemize} with $n^F$ and $n^G$ the sample sizes of the diseased and healthy populations, respectively. Define $n=n^F+n^G$ as the total sample size used for the estimation of each conditional ROC curve (that will be the same for all $k \in \{1,\dots,K\}$). Note that both $X^F$ and $X^G$ are here one-dimensional covariates. The method used for the estimation of the conditional ROC curves is based on the one proposed in \cite{Gonzalez-Manteiga2011a}, which relies on non-parametric location-scale regression models. To be more precise, for each $k = 1,\dots,K$, assume that \begin{eqnarray} Y_k^F = \mu_k^F(X^F) + \sigma_k^F(X^F)\varepsilon_k^F \label{eq:locscale1}\\ Y_k^G = \mu_k^G(X^G) + \sigma_k^G(X^G)\varepsilon_k^G \label{eq:locscale2} \end{eqnarray} where, for $D\in \{F,G\}$, $\mu_k^D(\cdot) = E(Y_k^D|X^D=\cdot)$ and $(\sigma_k^D)^2(\cdot) = Var(Y_k^D | X^D=\cdot)$ are the conditional mean and the conditional variance functions (both of them unknown smooth functions), and the error $\varepsilon_k^D$ is independent of $X^D$. The dependence structure between the $K$ diagnostic variables is modelled by introducing a dependence structure between the errors: $(\varepsilon_1^D, \dots, \varepsilon_K^D)$ will follow a multivariate distribution function with zero mean and a covariance matrix with ones in the diagonal. 
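As a brief remark (with $F_k(\cdot|x^F)$ and $G_k(\cdot|x^G)$ denoting the conditional distribution functions of $Y_k^F$ and $Y_k^G$, notation introduced only here), the expression for the conditional ROC curve given next follows from standardizing the markers under models (\ref{eq:locscale1}) and (\ref{eq:locscale2}):
\begin{align*}
F_k(y|x^F) &= P\left(\varepsilon_k^F \leq \tfrac{y-\mu_k^F(x^F)}{\sigma_k^F(x^F)}\right)= H_k^F\left(\tfrac{y-\mu_k^F(x^F)}{\sigma_k^F(x^F)}\right), & G_k^{-1}(1-p|x^G) &= \mu_k^G(x^G)+\sigma_k^G(x^G)\left(H_k^G\right)^{-1}(1-p),
\end{align*}
so that $1-F_k\left(G_k^{-1}(1-p|x^G)|x^F\right)$ reduces to the expression displayed below.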
Given this location-scale regression model structure for the diagnostic variables, the $k-$th ROC curve conditioned to a pair of values $({x^F}, {x^G}) \in R_{{X^F}}\times R_{{X^G}}$ can be expressed in terms of the marginal cumulative distribution functions of the errors, $H_k^F$ and $H_k^G$: \begin{eqnarray} ROC_k^{{x^F}, {x^G}}(p) = 1-H_k^F\left(\left(H_k^G\right)^{-1}(1-p)b_k(x^F,x^G)-a_k(x^F,x^G)\right), \end{eqnarray} where \[a_k(x^F,x^G) = \frac{\mu_k^F(x^F)- \mu_k^G(x^G)}{\sigma_k^F(x^F)} \quad \text{ and } \quad b_k(x^F,x^G) = \frac{\sigma_k^G(x^G)}{\sigma_k^F(x^F)}.\] Thus, this $k-th$ conditional ROC curve can be estimated by \begin{eqnarray} { \widehat{ROC}_k^{x^F,x^G}}(p) &=& 1- {\int}\hat{H}_k^F\left(\left(\hat{H}_k^G\right)^{-1}(1- p{+{ h_k} u }) \hat{b}_k({x^F,x^G}) -\hat{a}_k({x^F,x^G})\right) { \kappa(u) du}, \label{eq:EstROCxFxG} \end{eqnarray} where, for $D\in \{F,G\}$, \begin{itemize} \item $\hat{H}_k^D(y) = (n^D)^{-1} \sum_{i=1}^{n^D} I(\hat{\varepsilon}_{k,i}^{D} \leq y)$, \item $\hat{\varepsilon}_{k,i}^{D} = \dfrac{Y_{k,i}^{D} -\hat{\mu}_k^D(X_i^{D})}{\hat{\sigma}_k^D (X_i^{D})}$, with $i\in\{1,\cdots,n^D\}$, \item $\hat{\mu}_k^D(x) = \sum_{i=1}^{n^D} W_{k,i}^{D} (x, g_k^D) Y_{k,i}^{D}$ is a non-parametric estimator of $\mu_k^D(x)$ based on local weights $W_{k,i}^{D}(x,g_k^D)$ depending on a bandwidth parameter $g_k^D$, \item $(\hat{\sigma}_k^D)^2(x) = \sum_{i=1}^{n^D} W_{k,i}^{D} (x, g_k^D) [Y_{k,i}^{D}-\hat{\mu}_k^D(X_i^{D})]^2$ is a non-parametric estimator of $(\sigma_k^D)^2(x)$. For simplicity we take the same bandwidth parameter $g_k^D$ that is used for the estimation of the regression function $\hat{\mu}_k^D(x)$, \item $W_{k,i}^{D} (x,g_k^D) = \dfrac{\kappa_{g_k^D}(x-X_{i}^D)}{\sum_{l=1}^{n^D} \kappa_{g_k^D}(x-X_{l}^D)}$ are Nadaraya-Watson-type weights, where $\kappa_{g_k^D}(\cdot) = \kappa(\cdot / g_k^D) /g_k^D$ and $\kappa$ is a probability density function symmetric around zero. \item $\hat{a}_k (x^F,x^G)= \left(\hat{\mu}_k^F(x^F)-\hat{\mu}_k^G(x^G)\right) /\hat{\sigma}_k^F(x^F) $ and $\hat{b}_k(x^F,x^G) = \hat{\sigma}_k^G(x^G) / \hat{\sigma}_k^F(x^F)$. \item $h_k$ is a bandwidth parameter responsible for the smoothness of the estimator. Its value does not seem to have a significant effect on the conditional ROC curve estimation. \end{itemize} This way of estimating the conditional ROC curve is similar to the one proposed in \cite{Gonzalez-Manteiga2011a}, with the difference that they condition the ROC curve on a single value $x$ and here we have a pair of values $x^F$ and $x^G$, each one of them related to the diseased and the healthy population, respectively. As both populations are independent, the adaptation of the methodology of \cite{Gonzalez-Manteiga2011a} to this case is straightforward. Once we know how to estimate this doubly conditional ROC curve we can propose a test statistic for the test (\ref{eq:H0xFxG}): \begin{eqnarray} S^{x} = \sum_{k=1}^K \psi \left( \sqrt{n g_k} \{ \widehat{ROC_k}^{x^F,x^G}(p) - \widehat{ROC}_\bullet^{x^F,x^G}(p)\}\right), \label{eq:SNx} \end{eqnarray} where: \begin{itemize} \item for $k\in \{1,\dots,K\}$, $g_k= \frac{n^F g_k^F + n^G g_k^G}{n}$, where $g_k^F$ and $g_k^G$ are bandwidth parameters involved in the estimation of the $k$-th conditional ROC curve. 
\item for $k\in \{1,\dots,K\}$, $ \widehat{ROC}_k^{x^F,x^G}(p)$ is the estimated conditional ROC curve given $(x^F,x^G)$, as seen in (\ref{eq:EstROCxFxG}), \item $\widehat{ROC}_\bullet^{x^F,x^G} (p) = \left(\sum_{k=1}^K g_k \right)^{-1}\sum_{k=1}^K g_k \widehat{ROC_k}^{x^F,x^G} (p)$ is a sort of weighted average of the $K$ conditional ROC curves. \item $\psi$ is a real-valued function that measures the difference between each estimated conditional ROC curve and the weighted average of all of them. This function may be similar to the ones used for the comparison of cumulative distribution functions (after all, a ROC curve can be viewed as a cumulative distribution function). For example, if one considers the $L_2$-measure, then the resulting test statistic is \[S_{L2}^{x} = \sum_{k=1}^K { n g_k} \int \left( \widehat{ROC_k}^{x^F,x^G}(p) - \widehat{ROC}_\bullet^{x^F,x^G}(p)\right)^2 dp.\] On the other hand, when using the Kolmogorov-Smirnov criteria the resulting test statistic is \[S_{KS}^{x} = \sum_{k=1}^K \sqrt{ n g_k } \sup_{p} \left\vert \widehat{ROC_k}^{x^F,x^G}(p) - \widehat{ROC}_\bullet^{x^F,x^G}(p)\right\vert.\] \end{itemize} The null hypothesis will be rejected for large values of $S^{x}$. In order to obtain the distribution of this statistic, a bootstrap algorithm is proposed. This bootstrap algorithm is adapted from the procedure proposed in \cite{Martinez-Camblor2012} and has been already used by \cite{Martinez-Camblor2013a} and by \cite{Fanjul-Hevia2019} in the context of ROC curves. The key of this algorithm is that \begin{eqnarray*} T^{x} &=& \sum_{k=1}^K \psi \left( \sqrt{n g_k} \left\{ \left(\widehat{ROC}_k^{x^F,x^G}(p)- \widehat{ROC}_\bullet^{x^F,x^G}(p)\right) - \left(ROC_k^{x^F,x^G}(p)- ROC_\bullet^{x^F,x^G}(p)\right)\right\}\right), \end{eqnarray*} coincides with the statistic $S^{x}$ as long as the null hypothesis holds, where \[ROC_\bullet^{x^F,x^G}(p)= \left(\sum_{k=1}^K g_k \right)^{-1} \sum_{k=1}^K g_k ROC_k^{x^F,x^G}(p), \; 0<p<1.\] The quantity $T^{x}$ can be rewritten as \begin{eqnarray} T^{x} &=& \sum_{k=1}^K \psi \left( \sum_{j=1}^K \sqrt{n g_j } \alpha_{kj} \{ \widehat{ROC}_j^{x^F,x^G}(p) - ROC_j^{x^F,x^G}(p)\}\right), \label{eq:TNx} \end{eqnarray} where $\alpha_{kj} = I(k=j) - \sqrt{ g_k } \sqrt{g_j } \left(\sum_{i=1}^K g_i \right)^{-1}$. Note that, in general, $T^{x}$ cannot be computed from the data, as it depends on the unknown theoretical conditional ROC curves, but it is useful when applying the bootstrap algorithm. The bootstrap algorithm suggested to approximate a p-value for this test is the following: \begin{enumerate} \item[A.1] From the original samples, $\{(X_{i}^F,Y_{1,i}^F,\dots,Y_{K,i}^F)\}_{i=1}^{n^F}$ and $\{(X_{i}^G,Y_{1,i}^G,\dots,Y_{K,i}^G)\}_{i=1}^{n^G}$, compute the test statistic value (\ref{eq:SNx}), that we will denote by $s^x$. \item[A.2] \label{Step2} For $b=1,\dots,B$, generate the bootstrap samples $\{(X_{i}^F,Y_{1,i}^{F,b*},\dots,Y_{K,i}^{F,b*})\}_{i=1}^{n^F}$ and\\ $\{(X_{i}^G,Y_{1,i}^{G,b*},\dots,Y_{K,i}^{G,b*})\}_{i=1}^{n^G}$ as follows: \begin{enumerate} \item[(i)] For each $D\in\{F,G\}$ , let $\left\{\left(\varepsilon_{1,i}^{D,b*},\dots,\varepsilon_{K,i}^{D,b*}\right)\right\}_{i = 1}^{n^D} $ be an i.i.d. sample from the { empirical cumulative multivariate distribution function of the original residuals}. 
\item[(ii)] Reconstruct the bootstrap samples $\{(X_{i}^D,Y_{1,i}^{D,b*},\dots,Y_{K,i}^{ D,b*})\}_{i=1}^{n^D}$ for each $D\in \{F,G\}$, where $Y_{k,i}^{D,b*} = \hat{\mu}_k^D(X_{i}^D) + \hat{\sigma}_k^D(X_{i}^D) \varepsilon_{k,i}^{D,b*}$. \end{enumerate} \item[A.3] \label{Step3} Compute the test statistic based on the bootstrap samples, for $b=1,\dots,B$, using (\ref{eq:TNx}) as \vskip-6pt \begin{eqnarray*} t^{x,b*} &=& \sum_{k=1}^K \psi \left( \sum_{j=1}^K \sqrt{n g_j } \alpha_{kj} \{ \widehat{ROC}_j^{{x^F,x^G},b*}(p) - \widehat{ROC}_j^{x^F,x^G}(p)\}\right), \notag\\ \end{eqnarray*} where $\widehat{ROC}_j^{x^F,x^G,b*}$ is the estimated $j-$th conditional ROC curve of the $b-$th bootstrap sample. \item[A.4] The distribution of $S^x$ under the null hypothesis (and thus, the distribution of $T^x$) is approximated by the empirical distribution of the values $\{t^{x,1*},\dots,t^{x,B*}\}$ and the p-value is approximated by \[\text{p-value} = \frac{1}{B} \sum_{b=1}^B I(s^x \leq t^{x,b*}).\] \end{enumerate} In contrast with the usual bootstrap algorithms in testing setups, in this case the null hypothesis is not employed when generating the bootstrap samples (Step A.2), because replicating the null hypothesis of equal ROC curves is not a straightforward problem. Instead, it is used in the computation of the bootstrap statistic (Step A.3) through $T^x$ instead of $S^x$, which are equal under the null hypothesis. This particularity also appears in the bootstrap algorithm of the next subsection. There are two kinds of bandwidth parameters that appear in the estimation of the $k-$th conditional ROC curve (\ref{eq:EstROCxFxG}), with $k\in \{1,\dots, K\}$. The first one, $h_k$, is taken as $1/\sqrt{n}$, and the second ones, $g_k^F$ and $g_k^G$, are selected by least-squares cross-validation. Note that, for each bootstrap iteration, the bandwidth parameters could change, as their selection depends on the sample. However, $h_k$ remains constant, as we are choosing it in terms of the sample size, which is the same for each bootstrap iteration. As for $g_k^F$ and $g_k^G$, for computational reasons we have decided to compute them in step A.1 using the original sample, and then apply the same bandwidths for all the bootstrap estimations. The cross-validation method can be very time-consuming, and this simplification prevents the simulations from becoming infeasible.
The expression $S^{(\bm{\beta}^F)'\bm{x},(\bm{\beta}^G)'\bm{x}}$ is equal to the statistic used in (\ref{eq:SNx}) for testing the equality of $K$ ROC curves when conditioned to the value of the pair $\left((\bm{\beta}^F)'\bm{x},(\bm{\beta}^G)'\bm{x}\right)$, that is, \[S^{(\bm{\beta}^F)'\bm{x},(\bm{\beta}^G)'\bm{x}} = \sum_{k=1}^K \psi \left( \sqrt{n g_k } \{ \widehat{ROC_k}^{(\bm{\beta}^F)'\bm{x},(\bm{\beta}^G)'\bm{x}}(p) - \widehat{ROC}_\bullet^{(\bm{\beta}^F)'\bm{x},(\bm{\beta}^G)'\bm{x}}(p)\}\right). \] Note that, in this context with $d-$dimensional covariates, the samples are $\{(\bm{X}_{i}^F,Y_{1,i}^F,\dots, Y_{K,i}^F)\}_{i=1}^{n^F}$ and $\{(\bm{X}_{i}^G,Y_{1,i}^G,\dots, Y_{K,i}^G)\}_{i=1}^{n^G}$ , with $\bm{X}_{i}^F = (X_{1,i}^F, \cdots, X_{d,i}^F)'$ and $\bm{X}_{i}^G = (X_{1,i}^G, \cdots, X_{d,i}^G)'$. In practice, as it is done in \cite{Colling2017}, to compute the test statistic $D_S^{\bm{x}}$ random directions $\bm{\beta}_1^F, \dots, \bm{\beta}_{n_\beta}^F$ and $\bm{\beta}_1^G, \dots, \bm{\beta}_{n_\beta}^G$ are drawn uniformly from $\mathbb{S}^{d-1}$, where $n_{\bm{\beta}}$ is the number of random directions considered (the same number of directions is taken for $\bm{\beta}^F$ and for $\bm{\beta}^G$). With them, the approximated statistic is \begin{eqnarray} \tilde{D}_S^{\bm{x}} = \frac{1}{n_{\bm{\beta}}^2} \sum_{r=1}^{n_{\bm{\beta}}} \sum_{l=1}^{n_{\bm{\beta} }} S^{(\bm{\beta}_r^F)'\bm{x},(\bm{\beta}_l^G)' \bm{x}}. \label{ref:DNap} \end{eqnarray} In order to obtain the distribution of the statistic, a bootstrap algorithm (similar to the one described in the previous section) is proposed. To do so, the following expression is introduced: \begin{eqnarray} D_{T}^{\bm{x}} = \int_{\mathbb{S}^{d-1}} \int_{\mathbb{S}^{d-1}} T^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}} d\bm{\beta}^F d \bm{\beta}^G, \label{eq:TN1} \end{eqnarray} where $T^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}}$ is the same as in (\ref{eq:TNx}), but for the conditioning values of $\left((\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}\right)$: \begin{eqnarray*} T^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}} = \sum_{k=1}^K \psi \left( \sum_{j=1}^K \sqrt{n g_j} \alpha_{kj} \{ \widehat{ROC}_j^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}}(p) - ROC_j^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}}(p)\}\right). \end{eqnarray*} As it happened in (\ref{eq:TNx}), $T^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}}$ cannot be computed without knowing the true distribution of the diagnostic markers. However, it can be computed in the bootstrap algorithm below, and there $D_{T}^{\bm{x}}$ is approximated by \begin{eqnarray} \tilde{D}_{T}^{\bm{x}} = \frac{1}{n_{\bm{\beta}}^2} \sum_{r=1}^{n_{\bm{\beta}}} \sum_{l=1}^{n_{\bm{\beta}}} T^{(\bm{\beta}_r^F)' \bm{x},(\bm{\beta}_l^G)' \bm{x}}. \label{ref:DTNap} \end{eqnarray} As happened before, for two given projections $\bm{\beta}^F$ and $\bm{\beta}^G$, $S^{(\bm{\beta}^F)'\bm{x},(\bm{\beta}^G)' \bm{x}}$ and $T^{(\bm{\beta}^F)'\bm{x},(\bm{\beta}^G)'\bm{x}}$ coincide as long as the null hypothesis holds, and thus the same happens with $D_S^{\bm{x}} $ and $D_{T}^{\bm{x}} $. Taking into account these approximations, the resulting bootstrap algorithm goes as follows: \begin{itemize} \item[B.1] Draw $n_{\bm{\beta}}$ random directions $\bm{\beta}_1^F, \dots, \bm{\beta}_{n_\beta}^F$ and $\bm{\beta}_1^G, \dots, \bm{\beta}_{n_\beta}^G$ uniformly from $\mathbb{S}^{d-1}$. 
\item[B.2] For each pair of random directions $\bm{\beta}_r^F$ and $\bm{\beta}_l^G$ (with $r, l\in \{1, \dots,n_{\bm{\beta}}\}$), consider the sample \\ $\left\{\left((\bm{\beta}_r^F)' \bm{X}_{i}^F,Y_{1,i}^F,\dots,Y_{K,i}^F\right)\right\}_{i=1}^{n^F}$ and $\left\{\left((\bm{\beta}_l^G)' \bm{X}_{i}^G,Y_{1,i}^G,\dots,Y_{K,i}^G\right)\right\}_{i=1}^{n^G}$ and the conditioning values\\ $\left( (\bm{\beta}_r^F)' \bm{x}, (\bm{\beta}_l^G)' \bm{x}\right)$. With them, following steps A.1--A.3 of the bootstrap algorithm of the previous subsection, compute the value of $s^{(\bm{\beta}_r^F)' \bm{x}, (\bm{\beta}_l^G)' \bm{x}}$ and the $B$ corresponding $t^{(\bm{\beta}_r^F)' \bm{x}, (\bm{\beta}_l^G)' \bm{x}, b*}$. \item[B.3] Compute $\tilde{d}_S^{\bm{x}} = \frac{1}{n_{\bm{\beta}}^2} \sum_{r=1}^{n_\beta} \sum_{l=1}^{n_\beta} s^{ (\bm{\beta}_r^F)' \bm{x},(\bm{\beta}_l^G)' \bm{x}}$ and $\tilde{d}_{T}^{\bm{x},b*} = \frac{1}{n_{\bm{\beta}}^2} \sum_{r=1}^{n_{\bm{\beta}}} \sum_{l=1}^{n_{\bm{\beta}}} t^{(\bm{\beta}_r^F)'\bm{x},(\bm{\beta}_l^G)' \bm{x}, b*}$ as in (\ref{ref:DNap}) and (\ref{ref:DTNap}). \item[B.4] Approximate the p-value of the test by \[\text{p-value} = \frac{1}{B} \sum_{b=1}^B I(\tilde{d}_S^{\bm{x}} \leq \tilde{d}_{T}^{\bm{x},b*}).\] \end{itemize} \begin{remark} \label{remark1} Note that $n_{\bm{\beta}}$ represents the number of random directions drawn from $\mathbb{S}^{d-1}$ considered for the approximation of (\ref{ref:DNap}) and (\ref{ref:DTNap}), but that, in fact, we are using $n_{\bm{\beta}}^2$ different combinations of pairs $(\bm{\beta}^F, \bm{\beta}^G)\in \mathbb{S}^{d-1}\times\mathbb{S}^{d-1}$ to make that approximation. This could become a problem from the computational point of view, as the complexity of the problem increases very quickly as $n_{\bm{\beta}}$ grows. As an alternative, we could consider using \begin{eqnarray*} D_S^{\bm{x}} = \int_{\mathbb{S}^{d-1}\times\mathbb{S}^{d-1}} S^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)'\bm{x}} d\bm{\beta}^F \bm{\beta}^G, \end{eqnarray*} instead of statistic (\ref{eq:DN1}), where $d\bm{\beta}^F\bm{\beta}^G$ represents the uniform density on the torus of dimension $d$, $\mathbb{S}^{d-1}\times\mathbb{S}^{d-1}$. This ensures, as before, that all pairs of directions are equally important. Thus, in practice, instead of using the approximation (\ref{ref:DNap}) we could consider \begin{eqnarray*} \hat{D}_S^{\bm{x}} = \frac{1}{m_{\bm{\beta}}} \sum_{r=1}^{m_{\bm{\beta}}} S^{(\bm{\beta}_r^F)'\bm{x},(\bm{\beta}_r^G)' \bm{x}}, \end{eqnarray*} where $(\bm{\beta}_1^F,\bm{\beta}_1^G), \dots, (\bm{\beta}_{m_\beta}^F,\bm{\beta}_{m_\beta}^G)$ are pairs of random directions drawn uniformly from $\mathbb{S}^{d-1} \times \mathbb{S}^{d-1} $, and where $m_{\bm{\beta}}$ would represent here the same as $n_{\bm{\beta}}^2$ before, with the advantage that it allows for more flexibility because it can take values that are not perfect squares. A similar adaptation could be applied for the approximation of $D_T^{\bm{x}}$ in (\ref{eq:TN1}). \end{remark} \begin{remark} \label{remark2} In the literature we can find papers, like for example \cite{Cuesta-Albertos2007} or \cite{Cuesta-Albertos2019}, that use only one random projection. The main idea is to perform the test at hand for a randomly selected projection instead of for all possible projections. The use of projections results in a dimension reduction (as desired), and, despite being a procedure that may produce less powerful tests, the use of one single projection results in a reduction of the computational cost.
Following that idea, instead of testing the equality of covariate-projected ROC curves for all possible projections, we could test the equality of covariate-projected ROC curves for some random pair of projections given a certain $\bm{x} \in R_{\bm{X}}$, meaning: \begin{eqnarray} H_0: ROC_1^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}} = \dots = ROC_K^{(\bm{\beta}^F)' \bm{x},(\bm{\beta}^G)' \bm{x}} \; \text{ for some } \bm{\beta}^F, \bm{\beta}^G. \label{eq:H0forrandproj} \end{eqnarray} The equivalence between this hypothesis and the one of interest in this paper given in (\ref{eq:H0X}) still needs theoretical justification. However, it is a possibility worth studying, if only for computational reasons. A way of performing this approach could be to consider the proposed methodology for $n_{\bm{\beta}} =1$. \end{remark} \section{Simulations} \label{sec3} In order to analyse the performance of the proposed methodology, simulations were run for the comparison of several dependent conditional ROC curves. In a first stage, these simulations were focused on analysing the behaviour of the unidimensional test described in Section~\ref{sec2.2}, but we do not display them here, as they are very similar to the ones that can be found in \cite{Fanjul-Hevia2019}. Instead, we show the results for several scenarios (first under the null hypothesis and then under the alternative) in which we compare $K$ ROC curves (with $K\in\{2,3\}$) conditioned to a $d-$dimensional covariate (with $d\in\{2,3\}$). All the curves used in the simulation study were drawn from location-scale regression models similar to the ones presented in (\ref{eq:locscale1}) and (\ref{eq:locscale2}), except that, in this case, the regression and the conditional standard deviation functions are for $d-$dimensional covariates. The construction of those curves is summarized in Table~\ref{table:SimulROCx}, where all the different conditional mean and conditional standard deviation functions are displayed.
\begin{table}[!t] \centering \begin{tabular}{m{1.75cm} m{1.85cm} m{6cm} m{4.5cm}} \hline \thead{\textbf{Covariate}} & \thead{\textbf{ROC curves}} & \thead{\textbf{Regression functions}}& \thead{\textbf{Conditional standard }\\\textbf{deviation functions}} \\ \hline & \centering $ ROC_1^{\bm{x}}$ & $\mu_1^F (\bm{x})= \sin(0.5 \pi x_1) + 0.1x_2$ $\mu_1^G (\bm{x})= 0.5x_1 x_2$ & $\sigma_1^F (\bm{x})= 0.5 + 0.5x_1$ $\sigma_1^G (\bm{x})= 0.5 + 0.5x_1$ \\ \centering $\bm{x} =\begin{pmatrix} x_1 \\x_2 \end{pmatrix}$ &\centering $ ROC_2^{\bm{x}}$ & $\mu_2^F (\bm{x})= 0.3 + \sin(0.5 \pi x_1) + 0.1 x_2$ $\mu_2^G (\bm{x})= 0.5x_1 x_2$ & $\sigma_2^F (\bm{x})= 0.5 + 0.5x_1$ $\sigma_2^G (\bm{x})= 0.5 + 0.5x_1$ \\ & \centering $ ROC_3^{\bm{x}}$ & $\mu_3^F (\bm{x})= \sin(0.5 \pi x_1) + 0.1 x_2$ $\mu_3^G (\bm{x})= -0.3 + 0.4 x_2 + 0.5 x_1 x_2$ & $\sigma_3^F (\bm{x})= 0.5 + 0.5x_1$ $\sigma_3^G (\bm{x})= 0.5 + 0.5x_1$ \\ \hline & \centering $ ROC_4^{\bm{x}}$ & $\mu_4^F (\bm{x})= \sin(0.5 \pi x_1) + 0.1 x_2 + 0.5 x_3,$ $\mu_4^G (\bm{x})= 0.5 x_1 x_2 + x_3$ & $\sigma_4^F (\bm{x})= 0.5 + 0.1 x_3 ,$ $\sigma_4^G (\bm{x})= 0.5 + 0.1 x_3$ \\ \centering $\bm{x} =\begin{pmatrix} x_1 \\x_2\\x_3 \end{pmatrix}$ & \centering $ ROC_5^{\bm{x}}$ & $\mu_5^F (\bm{x})= \sin(0.5 \pi x_1) + 0.1 x_2 + 0.5 x_3,$ $\mu_5^G (\bm{x})= x_1 x_2 + x_3$ & $\sigma_5^F (\bm{x})= 0.5 + 0.1 x_3$ $\sigma_5^G (\bm{x})= 0.5 + 0.1 x_3$ \\ &\centering $ ROC_6^{\bm{x}}$ & $\mu_6^F (\bm{x})= \sin(0.5 \pi x_1) + 0.1 x_2 + 0.5 x_3,$ $\mu_6^G (\bm{x})= -0.3 + 0.5 x_1 x_2 + x_3$ & $\sigma_6^F (\bm{x})= 0.5 + 0.2 x_2 + 0.3 x_3$ $\sigma_6^G (\bm{x})= 0.5 + 0.1 x_3$ \\ \hline \end{tabular} \caption{Conditional mean and conditional standard deviation functions of the conditional ROC curves considered in the simulation study.} \label{table:SimulROCx} \end{table} The regression errors were considered to have multivariate normal distribution with zero mean, variance one and correlation $\rho$ for all the models. In all scenarios the covariates $X_1^F$, $X_1^G$, $X_2^F$, $X_2^G$, $X_3^F$ and $X_3^G$ are uniformly distributed in the unit interval. Thus, the value of the multidimensional covariate $\bm{x}$ at which the conditional ROC curves should be compared is contained in $[0,1]^d$. Particularly, the comparisons are made for $\bm{x} = (0.5,0.6)'$ and for $\bm{x} = (0.5,0.6,0.5)'$, for $d=2$ and $d=3$, respectively. The study contains simulations for different sample sizes $(n^F,n^G) \in \{(100,100)$, $(250,150)$, $(250,350)\}$ and different values of $\rho$ that represent different possible degrees of correlation between the diagnostic variables under comparison ($\rho \in \{-0.5, 0, 0.5\}$). Moreover, two different functions $\psi$ were considered for the construction of $S^{(\bm{\beta}^F)' \bm{x}, (\bm{\beta}^G)' \bm{x}}$: one based on the $L_2-$measure and the other one based on the Kolmogorov-Smirnov criterion (from now on denoted by $L_2$ and $KS$ respectively). The number of iterations used in the bootstrap algorithm was 200, and 500 data sets were simulated to compute the proportion of rejection in each scenario. Furthermore, the number of directions that was used for approximating the test statistic $D_S^{\bm{x}}$ was taken as $n_{\bm{\beta}}=5$ (as mentioned in Remark~\ref{remark1}, notice that this means that $n_{\bm{\beta}}^2=$25 different pairs of directions were considered). 
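For illustration, the following sketch (ours, not the code used for the study) shows how one data set could be generated from the location-scale models in the first block of Table~\ref{table:SimulROCx} ($d=2$, markers whose conditional ROC curves are $ROC_1^{\bm{x}}$ and $ROC_2^{\bm{x}}$), with the two regression errors drawn from a bivariate normal distribution with unit variances and correlation $\rho$.

\begin{verbatim}
import numpy as np

def simulate_first_block(nF, nG, rho, seed=0):
    # Covariates uniform on [0,1]^2; errors bivariate normal with zero
    # mean, unit variances and correlation rho (the dependence between
    # the two diagnostic variables).
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    XF = rng.uniform(size=(nF, 2))
    XG = rng.uniform(size=(nG, 2))
    eF = rng.multivariate_normal(np.zeros(2), cov, size=nF)
    eG = rng.multivariate_normal(np.zeros(2), cov, size=nG)
    # Conditional means and standard deviations of ROC_1^x and ROC_2^x.
    muF = np.column_stack(
        [np.sin(0.5 * np.pi * XF[:, 0]) + 0.1 * XF[:, 1],
         0.3 + np.sin(0.5 * np.pi * XF[:, 0]) + 0.1 * XF[:, 1]])
    muG = np.column_stack([0.5 * XG[:, 0] * XG[:, 1],
                           0.5 * XG[:, 0] * XG[:, 1]])
    sF = (0.5 + 0.5 * XF[:, 0])[:, None]
    sG = (0.5 + 0.5 * XG[:, 0])[:, None]
    YF = muF + sF * eF        # diseased-population markers Y_1, Y_2
    YG = muG + sG * eG        # healthy-population markers Y_1, Y_2
    return XF, YF, XG, YG

XF, YF, XG, YG = simulate_first_block(nF=100, nG=100, rho=0.5)
\end{verbatim}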
\subsection{Level of the test} \label{sec3.1} The scenarios that were considered for calibrating the level of the test (by comparing the same conditional ROC curves) are represented in Table~\ref{table:level}. \begin{table}[!t] \begin{center} \begin{tabular}{ m{1.5cm} m{6cm} m{6.5cm}} \hline \centering & \centering \textbf{$2-$dimensional covariate} & \textbf{$3-$dimensional covariate}\\ \hline \centering $K=2,3$ & \centering $ROC_1^x$ & \qquad \qquad \qquad $ROC_4^x$ \\ & \includegraphics[scale=0.65]{ROCH0X2K2.pdf} & \includegraphics[scale=0.65]{ROCH0X3K2.pdf}\\ \hline \end{tabular} \end{center} \caption{Scenarios under the null hypothesis considered for calibrating the level of the test. } \label{table:level} \end{table} The results of the simulations obtained for $n_{\bm{\beta}} = 5$ are summarized in Figures \ref{fig:levelX2n5} (for $d = 2$) and \ref{fig:levelX3n5} (for $d = 3$). Each subfigure represents the test of one scenario for a particular sample size. The nominal level considered is 0.05. The estimated proportion of rejections over 500 replications of the data sets is represented along with the rejection region of such nominal level. For the test to be well calibrated the estimated proportions should fall between the gray lines. \begin{figure} \caption{Estimated proportion of rejection under the null hypothesis and the corresponding limits of the critical region (in gray) for the level 0.05 (dotted black line) with $d=2$ and $n_{\bm{\beta} \label{fig:levelX2n5} \end{figure} \begin{figure} \caption{Estimated proportion of rejection under the null hypothesis and the corresponding limits of the critical region (in gray) for the level 0.05 (dotted black line) with $d=3$ and $n_{\bm{\beta} \label{fig:levelX3n5} \end{figure} In general it can be said that the expected nominal level is reached, as most of the estimated proportions are close to the corresponding nominal level. The $L_2$ statistic seems to overestimate the level in a few scenarios, but its behaviour improves when increasing the sample size. The $KS$ statistic is a little more conservative. \subsection{Power of the test} \label{sec3.2} On the other hand, the scenarios that were considered for studying the power of the test (by comparing different conditional ROC curves) are represented in Table~\ref{table:power}. \begin{table}[!t] \begin{center} \begin{tabular}{ m{1.2cm} m{6cm} m{6.5cm}} \hline \centering & \centering \textbf{$2-$dimensional covariate} & \textbf{$3-$dimensional covariate}\\ \hline \centering $K=2$ & \centering $ROC_1^x \text{ vs. } ROC_2^x$ & \qquad \quad $ROC_4^x \text{ vs. } ROC_5^x$ \\ & \includegraphics[scale=0.65]{ROCH1X2K2.pdf} & \includegraphics[scale=0.65]{ROCH1X3K2.pdf}\\ \hline \centering $K=3$ & \centering $ROC_1^x \text{ vs. } ROC_2^x \text{ vs. } ROC_3^x$ & \quad $ROC_4^x \text{ vs. } ROC_5^x \text{ vs. } ROC_6^x$\\ & \includegraphics[scale=0.65]{ROCH1X2K3.pdf} & \includegraphics[scale=0.65]{ROCH1X3K3.pdf}\\ \hline \end{tabular} \end{center} \caption{Scenarios under the alternative hypothesis considered for calibrating the power of the test. $ROC_1^x$ and $ROC_4^x$ are represented in purple, $ROC_2^x$ and $ROC_5^x$ in green, and $ROC_3^x$ and $ROC_6^x$ in yellow.} \label{table:power} \end{table} The results of the simulations are summarized in Figures \ref{fig:powern5} (for $n_{\bm{\beta}}=5$). 
In those figures the first and second row represent the simulation results for the scenarios with $K=2$ and $K=3$, respectively, and the first and the second column represent the simulation results for $d=2$ and for $d=3$, respectively. In this case, only $\alpha = 0.05$ was considered. \begin{figure} \caption{Estimated proportion of rejection under the alternative hypothesis for different sample sizes and different $\rho$, for $n_{\bm{\beta} \label{fig:powern5} \end{figure} It can be seen that the power of the test grows with the considered sample sizes. The $L_2$ statistic yields higher power than the $KS$ statistic, which is consistent with $KS$ being more conservative. Moreover, the difference between the conditional ROC curves considered for the case of $d=2$ is bigger than the difference between the ROC curves in the scenarios with $d=3$, which translates in higher power for the cases in which $d=2$. We can also observe that for each scenario, the highest power is always obtained for the cases in which the correlation of the diagnostic variables is $\rho = 0.5$, and the lowest for $\rho = -0.5$. { Note that for the scenario with $d=3$ and $\rho=-0.5$ the power of the test does not increase significantly from the first sample size to the second (in fact, for $K=3$ it even decreases a little), but this can be due to the fact that the lower sample size has balanced data, ($n^F$,$n^G$) being (100,100). whereas for the second sample size considered ($n^F$,$n^G$) take the value (250,150). The highest sample size is also unbalanced, but not so much.} \begin{remark} In order to evaluate the modification of the method proposed in Remark \ref{remark1} and \ref{remark2} we have run simulations for the same scenarios previously described. We show here the results for the scenarios with $K=2$ and $d=2$ under the null and the alternative hypotheses for assessing the level and the power of the test, respectively. Similar conclusions were obtained with the rest of the scenarios. {The parameters that are used here are the same as before, with the exception that now 1000 data-sets were simulated instead of 500.} Figure \ref{fig:levelremark} shows the results of the simulations when considering the modification of Remark \ref{remark1} for $m_{\bm{\beta}} =50$ (first row) and $m_{\bm{\beta}} =25$ (second row), and the results for considering only one random projection (Remark~\ref{remark2}), i.e., $m_{\bm{\beta}} =n_{\bm{\beta}}=1$ (third row). Note that taking $m_{\bm{\beta}} =25$ is comparable with $n_{\bm{\beta}} =5$ used in the previous simulations (see first row of Figure~\ref{fig:levelX2n5}), and that the results are very similar: the estimated proportion or rejections is a little overestimated for the $L_2$ statistic for the smaller sample size and otherwise close to the nominal level, and the $KS$ statistic is always more conservative. Increasing $m_{\bm{\beta}}$ from 25 to 50 does not seem to affect the results significantly, and neither does reducing it to a single random projection ($m_{\bm{\beta}} =1$). \begin{figure} \caption{Estimated proportion of rejection under the null hypothesis and the corresponding limits of the critical region (in gray) for the level 0.05 (dotted black line) with $K=2$, $d=2$ and $m_{\bm{\beta} \label{fig:levelremark} \end{figure} In Figure \ref{fig:powerremark} we can observe the results for the simulations under the alternative hypothesis, once again for $m_{\bm{\beta}} =50$, $m_{\bm{\beta}} =25$ and $m_{\bm{\beta}} =1$. 
The first two graphics are very similar to the one obtained for $n_{\bm{\beta}} =5$ (see the first graphic of Figure~\ref{fig:powern5}), but from the last graphic it is obvious that by using only one random projection the power of the test decreases considerably (as was expected). \begin{figure} \caption{ Estimated proportion of rejection under the alternative hypothesis for different sample sizes and different $\rho$, for $m_{\bm{\beta}} \in \{50, 25, 1\}$.} \label{fig:powerremark} \end{figure} In the light of these results it seems that the alternative methodology proposed in Remark~\ref{remark1} yields similar conclusions to the first proposal, with no noticeable gain when increasing the number $m_{\bm{\beta}}$ used to approximate the value of the statistic from 25 to 50. It remains an open problem to determine an optimal value for that parameter. As for the idea mentioned in Remark~\ref{remark2}, using only one random projection seems to produce a well calibrated test, despite having considerably lower power. \end{remark} \section{Application} \label{sec4} An illustration of the proposed test is displayed in this section through the analysis of the previously mentioned data set concerning 463 patients with pleural effusion. This data set has been provided by Dr. F. Gude, from the Unidade de Epidemiolox\'ia Cl\'inica of the Hospital Cl\'inico Universitario de Santiago (CHUS), and it has been used for a previous study in \cite{Valdes2013}. From a medical perspective, the goal is to find a way to discriminate the patients in which the pleural effusion (PE) has a malignant origin (MPE) from those in which the PE is due to other non-cancer-related causes. 200 individuals from the sample had MPE (the diseased population in this context), against 263 who did not (healthy population). For that purpose, two diagnostic markers were considered: the carbohydrate antigen 125 (\textit{ca125}) and the cytokeratin fragment 21-1 (\textit{cyfra}). Moreover, information on two different covariates is also available: the \textit{age} and the neuron-specific enolase (\textit{nse}). Due to the characteristics of the data (positive values, most of them close to zero, with some extremely high values), logarithms of those variables -- excluding the variable \textit{age} -- were considered for the study. Since the logarithm is a monotone transformation, its use does not affect the estimation of the common ROC curve. However, it does affect the estimation of the conditional ROC curves, as it reduces the effect of the more extreme values of the variables. A representation of the relationship of each one of those biomarkers with the two covariates is depicted in Figure~\ref{fig:appli}, for both MPE (green) and the non-MPE (blue) patients. \begin{figure} \caption{Scatterplot of the diagnostic biomarkers as a function of the two covariates considered: \textit{age} and ${\log(nse)}$.} \label{fig:appli} \end{figure} It can be observed that the shape of the point clouds of the two populations changes with the values of the covariates, especially in the case of the diseased population.
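In connection with the use of the logarithm, the invariance of the estimated common (unconditional) ROC curve under monotone transformations can also be checked numerically. The following small sketch (simulated data, not the pleural effusion data) compares the rank-based estimate of the area under the curve, $\widehat{AUC}=\widehat{P}(Y^F>Y^G)$, before and after taking logarithms; the two values coincide.

\begin{verbatim}
import numpy as np

def auc_hat(yF, yG):
    # Empirical AUC (Mann-Whitney statistic): proportion of
    # diseased-healthy pairs with yF > yG, ties counted as 1/2.
    diff = yF[:, None] - yG[None, :]
    return np.mean((diff > 0) + 0.5 * (diff == 0))

rng = np.random.default_rng(1)
yG = rng.lognormal(mean=0.0, sigma=1.0, size=263)   # healthy-like marker
yF = rng.lognormal(mean=0.7, sigma=1.2, size=200)   # diseased-like marker

print(auc_hat(yF, yG))                    # original scale
print(auc_hat(np.log(yF), np.log(yG)))    # log scale: same value
\end{verbatim}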
In order to evaluate whether the discriminatory capability of those markers ($Y_1^F$ and $Y_1^G$ as the variables containing the information of $\log(ca125)$, and $Y_2^F$ and $Y_2^G$ as the variables containing the information of $\log(cyfra)$) is the same when the covariates \textit{age} and ${\log(nse)}$ are taken into account, the methodology explained in previous sections is applied, comparing their respective ROC curves conditioned to different values of the bidimensional covariate $\bm{X} = (X_1, X_2)$ with $X_1 =$ \textit{age} and $X_2 ={\log(nse)}$. In order to explore the advantages of using this method over the ones that do not consider multidimensional covariates, we also test the equivalence of the ROC curves of those diagnostic markers for the case in which no covariates are taken into account and for the case in which only one of the covariates is included in the analysis. Figure~\ref{fig:histandbox} shows how those two covariates are distributed in the diseased and healthy populations. \begin{figure} \caption{Histograms and boxplots of the two covariates considered (\textit{age} \label{fig:histandbox} \end{figure} Note that the covariates have different magnitudes: the values that the variable \textit{age} takes are always going to be bigger than the values of ${\log(nse)}$. Thus, if we were to use the procedure directly over these variables, when projecting the multidimensional covariate $\bm{X}$ on any direction, the effect of the second component will be overshadowed by the first component's. To prevent this from happening we decided to use the standardized variables of $X_1$ and $X_2$ instead of the originals. This also affects the value $\bm{x}$ at which the conditional ROC curves are being compared. { Note that an ROC curve conditioned to a certain value $x$ is the same as the ROC curve in which the covariate is modified by a one-to-one transformation and that is conditioned to the corresponding transformed $x$ value.} { Given a non-degenerate multidimensional covariate $\bm{X}$ the standardization proposed here is to consider the multidimensional covariate $\bm{X}_s = \bm{B}^{-1}(\bm{X} - \bm{a})$, with $\bm{B}$ a diagonal matrix with \\ $(\sqrt{Var(X_1)}, \dots, \sqrt{Var(X_d)})$ in the diagonal and $\bm{a}=(E(X_1),\cdots, E(X_d))'$. Then, for a given variable $Y$, a given $y\in \mathbb{R}$ and a certain value of the covariate $\bm{x}$, \begin{eqnarray*} P(Y\leq y | \bm{X} = \bm{x})= P(Y\leq y |\bm{B}^{-1}(\bm{X} - \bm{a}) = \bm{B}^{-1}(\bm{x} - \bm{a})) = P(Y\leq y |\bm{X}_s = \bm{x}_s)), \end{eqnarray*} with $\bm{x}_s=\bm{B}^{-1}(\bm{x} - \bm{a})$ and, thus, \begin{eqnarray*} ROC^{\bm{x}}(p) = 1-F(G^{-1}(1-p| \bm{x})|\bm{x}) = 1-F(G^{-1}(1-p| \bm{x}_s)|\bm{x}_s) = ROC^{\bm{x}_s}(p) , \end{eqnarray*} Note that the standardization that takes place here does not care for the covariance between the covariates that conform $\bm{X}$, as we are only interested on obtaining covariates with similar magnitudes. Also, in practice the standardization is made considering the sample mean and the sample standard deviation of the covariates at hand. } We start the analysis of the performance of the two diagnostic markers by comparing their respective ROC curves without taking into account any covariate information. For that matter we use the method proposed by \cite{DeLong1988}. The estimated ROC curves for both markers are depicted in Figure \ref{fig:graph_ROC_estim}. The p-value obtain for that comparison was 0.138. 
Similar results were obtained when using other ways of comparing ROC without covariates (like \cite{Martinez-Camblor2013a} or \cite{Venkatraman1996}). Thus, we do not find significant differences between the two diagnostic variables in terms of diagnostic accuracy. Next, we compare the two diagnostic markers taking into account a unidimensional covariate using the test proposed in \cite{Fanjul-Hevia2019} for dependent diagnostic markers. We consider the covariates \textit{age} and ${\log(nse)}$, each one at a time. We test the equality of the ROC curves conditioned to the values of $\{51,67,83\}$ in the case of \textit{age} and the values of $\{-0.92,1.14,3.27\}$ in the case of ${\log(nse)}$. The corresponding ROC curve for every case is estimated in Figure \ref{fig:graph_ROC_estim}. \begin{figure} \caption{ROC curve estimation for both diagnostic variables (\textit{log(ca125)} \label{fig:graph_ROC_estim} \end{figure} For each considered covariate and each value of the covariate we obtain a p-value of the test, summarized in Table~\ref{table:ResCovUni}. The test is made considering two types of statistics, one based on the $L_2$-measure and the other in the Kolmogorov-Smirnov criteria, although both of them yield similar results. \begin{table}[!t] \begin{center} \begin{tabular}{cccc} \textit{age} &\textbf{51} & \textbf{67} & \textbf{83} \\ \hline p-values ($L2$) & 0.454 & 0.218 & 0.936 \end{tabular} \qquad \begin{tabular}{cccc} \textit{ age} &\textbf{51} & \textbf{67} & \textbf{83} \\ \hline p-values ($KS$) & 0.512 & 0.202 & 0.762 \end{tabular} \vskip12pt \begin{tabular}{cccc} ${\log(nse)}$ &\textbf{-0.92} & \textbf{1.14} & \textbf{3.20} \\ \hline p-values ($L2$) & 0.844 & {\color{gray}0.012} & 0.470 \end{tabular} \qquad \begin{tabular}{cccc} ${\log(nse)}$ &\textbf{-0.92} & \textbf{1.14} & \textbf{3.20} \\ \hline p-values ($KS$) & 0.900 & {\color{gray}0.008} & 0.412 \end{tabular} \end{center} \caption{Results for the comparison of the ROC curves of the diagnostic markers ${\log(ca125)}$ and ${\log(cyfra)}$ when considering a unidimensional covariate, that covariate being the \textit{age} or the ${\log(nse)}$.} \label{table:ResCovUni} \end{table} When comparing the ROC curves conditioned on different values of the \textit{age}, the results are in line with the obtained for the previous case, in which no covariates where taken into account: the equality of the two curves is not rejected. However, when considering the covariate ${\log(nse)}$, we see that for a certain value (1.14) the null hypothesis is rejected (for a significance level of 5$\%$). This matches the representation of the conditional ROC curves depicted in Figure~\ref{fig:graph_ROC_estim}. Finally, we compare the performance of the two diagnostic variables considering the effect of both the \textit{age} and the ${\log(nse)}$ at the same time. This is where we use the methodology proposed in this paper. We test the equality of their respective ROC curves conditioned to nine pairs of values of the two covariates: the ones obtained by making all the possible combinations of $\{51,67,83\}$ and $\{-0.92,1.14,3.27\}$. As before, two different type of statistics were considered: $L_2$ and $KS$ (and once again, the results are similar in both cases). The results obtained are summarized in Table~\ref{table:ResCovMulti}. 
\begin{table}[!t] \begin{center} \begin{tabular}{c|ccc} \backslashbox{\textit{$\log(nse)$}}{\textit{age}} &\textbf{51} & \textbf{67} & \textbf{83} \\ \hline \textbf{-0.92} & {\color{gray} 0.000} & {\color{gray} 0.030} & 0.258 \\ \textbf{1.14} & 0.152 & 0.070 & {\color{gray} 0.004} \\ \textbf{3.20} & {\color{gray} 0.026} & 0.056 & {\color{gray}0.010} \\ \end{tabular} \qquad \begin{tabular}{c|ccc} \backslashbox{\textit{$\log(nse)$}}{\textit{age}} &\textbf{51} & \textbf{67} & \textbf{83} \\ \hline \textbf{-0.92} & {\color{gray} 0.004} & {\color{gray} 0.048} & 0.424 \\ \textbf{1.14} & 0.212 & 0.050 & {\color{gray}0.016} \\ \textbf{3.20} & 0.066 & 0.196 & {\color{gray}0.032} \\ \end{tabular} \end{center} \caption{Results for the comparison of the ROC curves of the diagnostic markers \textit{log(ca125)} and \textit{log(cyfra)} when considering the multidimensional covariate (\textit{age},\textit{$\log(nse)$}).} \label{table:ResCovMulti} \end{table} Note that in this case we did not represent the estimated ROC curves conditioned to the bidimensional covariate $(age, \log(nse))$. This is to stress the fact that, with this methodology, $\widehat{ROC}^{\bm{x}}$ (with $\bm{x}$ bidimensional) does not need to be computed at all. The obtained p-values show that, depending on the pair of values of the covariate considered, we can find significant differences between the ROC curves of the ${\log(ca125)}$ and the ${\log(cyfra)}$ markers, including pairs of values for which, when considered separately in the previous test, the null hypothesis was not rejected. Likewise, finding differences between the ROC curves conditioned to marginal covariates at certain values does not mean that those differences will be significant when considering the multidimensional covariates (for example, when we condition the ROC curves marginally on the value 1.14 of ${\log(nse)}$ we find differences, but when considering both covariates this difference between the ROC curves only remains significant for an \textit{age} of 83). \section{Discussion} \label{sec5} In this work a new non-parametric methodology has been presented for comparing two or more dependent ROC curves conditioned to the value of a continuous multidimensional covariate. This method combines existing techniques for reducing the dimension in goodness-of-fit tests and for estimating and comparing ROC curves conditioned to a one-dimensional covariate. A simulation study was carried out in order to analyse the practical performance of the test. Two different functions were proposed for the construction of the statistic, the $L_2$ and the $KS$, the second one being a little more conservative. Different correlations between the diagnostic variables and different sample sizes have been considered, including unbalanced ones, without any appreciable effect on the test performance. Finally, the methodology was illustrated by means of an application to a data set: with this new test it was possible to detect differences in the discriminatory ability of two diagnostic variables conditioned to two different covariates without needing an estimator of an ROC curve conditioned to a multidimensional covariate. This application makes clear the importance of being able to include the effect of multidimensional covariates in the ROC curve analysis, as different conclusions can be drawn from the comparison of those curves when considering a multidimensional covariate, unidimensional covariates, or no covariates at all.
\section*{Appendix: proofs} The proofs needed for Lemma~\ref{lemma0} are presented below. \begin{lemma} \cite{Escanciano2006} or \cite{Cuesta-Albertos2019}: Given a random variable $Y$ such that $\mathbb{E}|Y|<\infty$, \begin{eqnarray} \mathbb{E}[Y|\bm{X}] = 0 \; a.s. \Leftrightarrow \mathbb{E}[Y|\bm{\beta' X}] = 0 \; a.s. \text{ for any vector $\bm{\beta} \in \mathbb{S}^{d-1}$}. \label{eq:Esc} \end{eqnarray} \end{lemma} From now on it will be assumed that all projections $\bm{\beta}$ considered satisfy $\bm{\beta} \in \mathbb{S}^{d-1}$. \begin{lemma} Let $Y_1, \cdots , Y_K$ be $K$ dependent random variables with cumulative distribution functions $F_1, \ldots, F_K$, respectively, such that $\mathbb{E}|Y_k|<\infty$ for every $k \in\{1,\ldots,K\}$. Let $\bm{X}$ be a multidimensional covariate. Then, given $c_1,\ldots, c_K $, \begin{eqnarray} F_1(c_1|\bm{X}) = \cdots = F_K(c_K|\bm{X}) \; a.s. \Leftrightarrow F_1(c_1|\bm{\beta'X}) = \cdots = F_K(c_K|\bm{\beta'X}) \; a.s. \; \forall \bm{\beta}, \label{eq:FbetaX} \end{eqnarray} with $\bm{\beta} \in \mathbb{S}^{d-1}$. \begin{proof} It is proven for $K=2$: \begin{eqnarray*} F_1(c_1|\bm{X}) = F_2(c_2|\bm{X}) \; a.s. &\Leftrightarrow & \mathbb{E}[I(Y_1 \leq c_1 )| \bm{X}] =\mathbb{E}[I(Y_2 \leq c_2) | \bm{X}] \; a.s. \notag \\ &\stackrel{(*)}{\Leftrightarrow} & \mathbb{E}[I(Y_1 \leq c_1) - I(Y_2 \leq c_2)| \bm{X}] = 0 \; a.s. \notag \\ & \stackrel{(\ref{eq:Esc})}{\Leftrightarrow}& \mathbb{E}[I(Y_1 \leq c_1) - I(Y_2 \leq c_2)| \bm{\beta' X}] = 0 \; a.s. \; \forall \bm{\beta} \notag \\ &\Leftrightarrow & \mathbb{E}[I(Y_1 \leq c_1 )| \bm{\beta'X}] =\mathbb{E}[I(Y_2 \leq c_2) | \bm{\beta'X}] \; a.s. \; \forall \bm{\beta}\notag \\ &\Leftrightarrow & F_1^{\bm{\beta}}(c_1|\bm{\beta'X}) = F_2^{\bm{\beta}}(c_2|\bm{\beta'X}) \; a.s. \; \forall \bm{\beta}, \end{eqnarray*} where $F_i^{\bm{\beta}}(c_i|\bm{\beta'X}) = P(Y_i \leq c_i | \bm{\beta'X} = \bm{\beta'X})$ for $i =1,2$. Note that in (*) the fact that the random variables are dependent is being used in the sense that they are conditioned to the same covariate $\bm{X}$ (i.e., there is no $X_1$ and $X_2$ as there would be in the independent case). \end{proof} \end{lemma} \begin{mydef} The \textit{inverted conditional ROC curve (IROC)} is defined as: \[IROC(p)=1-G(F^{-1} (1-q)), \; q \in (0,1).\] Related to the previous definition, the \textit{inverted conditional ROC curve ($IROC^x$)}, given the pair $(x^F,x^G)\in R_{X^F}\times R_{X^G}$, can also be defined as: \[IROC^{x^G,x^F}(q) = 1-G(F^{-1} (1-q|x^F)|x^G), \; q \in (0,1). \] \end{mydef} \begin{lemma} The equality of ROC curves is equivalent to the equality of the inverted ROC curves, i.e., \[ROC_1 (p) = \cdots = ROC_K(p) \; \forall p \in (0,1) \; \Leftrightarrow \; IROC_1 (q) = \cdots = IROC_K(q) \; \forall q \in (0,1).\] Moreover, the same property holds when talking about conditional ROC curves. Given the pair $(x^F,x^G)\in R_{X^F}\times R_{X^G}$ , \begin{eqnarray} ROC_1^{x^F,x^G} (p) = \cdots = ROC_K^{x^F,x^G}(p) \; \forall p \in (0,1) \; \Leftrightarrow \; IROC_1^{x^G,x^F} (q) = \cdots = IROC_K^{x^G,x^F}(q) \; \forall q \in (0,1). \label{eq:ROCIROC} \end{eqnarray} \begin{proof} It is proven for the unconditional case, and for $K=2$. The conditional case is similar. \begin{eqnarray*} ROC_1(p) = ROC_2(p) \; \forall p \in (0,1) &\Leftrightarrow & 1-F_1(G_1^{-1} (1-p)) = 1-F_2(G_2^{-1} (1-p)) \; \forall p \in (0,1) \end{eqnarray*} Take $q = 1-F_2(G_2^{-1} (1-p))$ (and hence, $q = ROC_2(p) $). 
$q$ will take all the values in $(0,1)$, and thus, $p = 1-G_2(F_2^{-1}(1-q)) = IROC_2(q)$. Then, \begin{eqnarray*} ROC_1(p) = ROC_2(p) \; \forall p \in (0,1) &\Leftrightarrow & 1-F_1(G_1^{-1} (1-(1-G_2(F_2^{-1}(1-q)) )) = q \; \forall q \in (0,1) \notag \\ &\Leftrightarrow & 1-G_2(F_2^{-1}(1-q) = 1- G_1(F_1^{-1}(1-q)) \; \forall q \in (0,1)\notag \\ &\Leftrightarrow & IROC_2 (q) = IROC_1(q) \; \forall q \in (0,1). \end{eqnarray*} \end{proof} \end{lemma} \textbf{Proof of Lemma~\ref{lemma0}} \begin{proof} It is proven for $K=2$. For $p \in (0,1)$, \begin{eqnarray*} ROC_1^{\bm{x}}(p) &=& ROC_2^{\bm{x}}(p) \; a.s. \Leftrightarrow \notag \\ &\Leftrightarrow & 1- F_1(G_1^{-1} (1-p|\bm{x})|\bm{x}) = 1- F_2(G_2^{-1} (1-p|\bm{x})|\bm{x} ) \; a.s. \notag \\ &\Leftrightarrow & F_1(G_1^{-1} (1-p|\bm{x})|\bm{x}) = F_2(G_2^{-1} (1-p|\bm{x})|\bm{x}) \; a.s. \notag \\ &\stackrel{(\ref{eq:FbetaX})}{\Leftrightarrow} & F_1^{\bm{\beta}^F}\left( G_1^{-1} (1-p|\bm{x}) |(\bm{\beta}^F)' \bm{x}\right) = F_2^{\bm{\beta}^F}\left( G_2^{-1} (1-p|\bm{x} )|(\bm{\beta}^F)'\bm{x}\right) \; a.s. \; \forall \bm{\beta}^F \notag \\ &\Leftrightarrow & ROC_1^{(\bm{\beta}^F)' \bm{x}, \bm{x}}(p) = ROC_2^{(\bm{\beta}^F)' \bm{x} , \bm{x}}(p) \; a.s. \; \forall \bm{\beta}^F \notag \\ & \stackrel{(\ref{eq:ROCIROC})}{\Leftrightarrow}& IROC_1^{\bm{x},(\bm{\beta}^F)'\bm{x}}(q) = IROC_2^{\bm{x},(\bm{\beta}^F)' \bm{x}}(q) \; a.s. \; \forall \bm{\beta}^F \;\text{ for } q \in (0,1) \notag \\ &\Leftrightarrow & G_1((F_1^{\bm{\beta}^F})^{-1}(1-q|(\bm{\beta}^F)' \bm{x})| \bm{x}) = G_2((F_2^{\beta^F})^{-1}(1-q|(\bm{\beta}^F)' \bm{x})| \bm{x}) \; a.s. \; \forall \bm{\beta}^F \notag \\ &\stackrel{(\ref{eq:FbetaX})}{\Leftrightarrow}& G_1^{\bm{\beta}^G}((F_1^{\bm{\beta}^F})^{-1}(1-q|(\bm{\beta}^F)' \bm{x})| (\bm{\beta}^G)' \bm{x}) = G_2^{\bm{\beta}^G}((F_2^{\bm{\beta}^F})^{-1}(1-q|(\bm{\beta}^F)' \bm{x})| (\bm{\beta}^G)'\bm{x}) \; a.s. \; \forall \bm{\beta}^F, \bm{\beta}^G \notag \\ &\Leftrightarrow & IROC_1^{(\bm{\beta}^G)' \bm{x},(\bm{\beta}^F)'\bm{x}}(q) = IROC_2^{(\bm{\beta}^G)' \bm{x},(\bm{\beta}^F)'\bm{x}}(q) \; a.s. \; \forall \bm{\beta}^F, \bm{\beta}^G \notag \\ & \stackrel{(\ref{eq:ROCIROC})}{\Leftrightarrow}& ROC_1^{(\bm{\beta}^F)'\bm{x},(\bm{\beta}^G)'\bm{x}}(\tilde{p}) = ROC_2^{(\bm{\beta}^F)'\bm{x},(\bm{\beta}^G)'\bm{x}}(\tilde{p}) \; a.s. \; \forall \bm{\beta}^F, \bm{\beta}^G \; \text{for $\tilde{p}\in (0,1)$}, \end{eqnarray*} where $F_1^{\bm{\beta}^F} \left(c|(\bm{\beta}^F)'\bm{x}\right) = P\left(Y_1^F \leq c |(\bm{\beta}^F)'\bm{X}^F = (\bm{\beta}^F)'\bm{x}\right) $, $F_2^{\bm{\beta}^F} \left(c|(\bm{\beta}^F)'\bm{x}\right) = P\left(Y_2^F \leq c |(\bm{\beta}^F)'\bm{X}^F = (\bm{\beta}^F)'\bm{x}\right) $, $G_1^{\bm{\beta}^G} \left(c|(\bm{\beta}^G)'\bm{x}\right) = P\left(Y_1^G \leq c |(\bm{\beta}^G)'\bm{X}^G = (\bm{\beta}^G)'\bm{x}\right) $, $G_2^{\bm{\beta}^G} \left(c|(\bm{\beta}^G)'\bm{x}\right) = P\left(Y_2^G \leq c |(\bm{\beta}^G)'\bm{X}^G = (\bm{\beta}^G)'\bm{x}\right)$. \end{proof} \end{document}
\begin{document} \title{Finite Class $2$ Nilpotent and Heisenberg Groups} \begin{abstract} We present a structural description of finite nilpotent groups of class at most $2$ using a specified number of subdirect and central products of $2$-generated such groups. As a corollary, we show that all of these groups are isomorphic to a subgroup of a Heisenberg group satisfying certain properties. The motivation for these results is of topological nature as they can be used to give lower bounds to the nilpotently Jordan property of the birational automorphism group of varieties and the homeomorphism group of compact manifolds. \end{abstract} \section{Introduction} A finite non-abelian $p$-group $G$ is \emph{special} if its Frattini subgroup $\Phi(G)$, derived subgroup $G'$ and centre $\mathbb{C}enter(G)$ all coincide and are isomorphic to $(\mathbb{Z}/p\mathbb{Z})^r$ for some $r$. A special group $G$ is \emph{extra-special} if $r=1$. The structure of these groups is described by the following classical result. \begin{thm}[{\cite[(4.16)/(ii), Theorem~4.18]{Suzuki2}}]\label{thm:specialPgroups} Every special $p$-group is a subdirect product of groups of the form: the central product of an extra-special $p$-group and an abelian group . Every extra-special $p$-group $H$ of order $p^{2n+1}$ is the central product of $n$ extra-special $p$-subgroups of order $p^3$. For every prime $p$, there are exactly two extra-special groups of order $p^3$ (up to isomorphism). \end{thm} We present a generalisation to finite nilpotent groups of class at most $2$. \begin{thmMain}\label{thm:mainStructure} Every finite nilpotent group $G$ is a subdirect product of $d(Z(G))$ groups each with cyclic centre, see \autoref{prop:subdirectProductFinite}. Every finite \nilpotent{2} group $G$ with cyclic commutator subgroup is the internal central product of $t$ many suitable nilpotent $2$-generated subgroups of class $2$ and an abelian subgroup $A$ satisfying $d(G)=2t+d(A)$ and some further properties discussed in \autoref{cor:decompositionCyclicDeriverGroup}. \end{thmMain} The $p$-groups of class $2$ with $2$ generators are classified in \cite[Theorem~1.1]{2generatedClassification}. The central product decomposition of \autoref{thm:mainStructure} is a generalisation of \cite[Theorem~2.1]{brady_bryce_cossey_1969} where groups with cyclic centre were considered. The argument presented in the current paper is more structural and gives some invariants needed for a topological application of \autoref{thm:mainAction} below. For every $\mathbb{Z}$-bilinear map $\mu\colon A\times B\to C$ between $\mathbb{Z}$-modules, we associate a (\emph{Heisenberg}) group \begin{equation}\label{eq:matrixGroupH} \matrixGroupH{A}{B}{C}_{\!\!\mu}\coloneqq\left\{\matrixGroupH{a}{b}{c}:a\in A,b\in B,c\in C\right\} \end{equation} where the group operation is formal matrix multiplication via $\mu$, see \autoref{def:H} and \autoref{rem:Hdef}. As an application of (the proof of) \autoref{thm:mainStructure}, we obtain the main statement of the paper. \begin{thmMain}\label{thm:mainHeisenberg} Every finite \nilpotent{2} group $G$ is isomorphic to a subgroup of a non-degenerate Heisenberg group of the form \eqref{eq:matrixGroupH} for a suitable $\mu\colon A\times B\to C$ depending on $G$. Here the number of generators and exponents of $A,B,C$ are bounded by concrete functions of $G$. See \autoref{thm:embedToH} for the precise statement and details. \end{thmMain} See \cite[Corollary~2.21]{Magidin} for a weaker conclusion in a much more general setup. 
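To make the definition concrete: assuming the usual convention that $a$ and $b$ occupy the two entries above the diagonal and $c$ the upper-right corner (the precise convention is the one fixed in \autoref{def:H}, not reproduced here), formal matrix multiplication via $\mu$ yields the group law and commutator
\[
\matrixGroupH{a}{b}{c}\cdot\matrixGroupH{a'}{b'}{c'}=\matrixGroupH{a+a'}{b+b'}{c+c'+\mu(a,b')},
\qquad
\left[\matrixGroupH{a}{b}{c},\matrixGroupH{a'}{b'}{c'}\right]=\matrixGroupH{0}{0}{\mu(a,b')-\mu(a',b)}.
\]
In particular, the elements with $a=b=0$ are central and contain all commutators, so a group of this form is always \nilpotent{2}.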
We remark here that both \autoref{thm:mainStructure} and \autoref{thm:mainHeisenberg} are true in the finitely generated setup, see the thesis of the author \cite[Theorems A,B]{phd}. The arguments presented in the current paper have been substantially polished and simplified compared to the thesis. Note that the number of isomorphism classes of groups of order $p^n$ is $p^{\frac{2}{27}n^3+\mathcal{O}(n^{8/3})}$ as $n\to\infty$ \cite[Theorem,~p.~153]{Sims}, of which at least $p^{\frac{2}{27}n^3- \frac{12}{27}n^2}$ are \nilpotent{2} \cite[Theorem~2.3]{Higman}. Bounding the invariants in both statements is essential for the following topological application of \autoref{thm:mainAction} (that shall be proved in a follow-up paper). Recall that a group $G$ is \emph{of rank at most $r$} if every subgroup $H$ of $G$ can be generated by at most $r$ elements. \begin{cor}[{\cite[Theorem C]{phd}}]\label{thm:mainAction} For every natural number $r$, there exists an algebraic variety $X_r$, respectively a compact manifold $M_r$, such that every finite \nilpotent{2} group of rank at most $r$ acts faithfully on $X_r$ via birational automorphisms, respectively on $M_r$ via diffeomorphisms. \end{cor} This statement about varieties is sharp up to bounded extensions by \autoref{thm:birational2Nilpotent} and \autoref{rem:boundedRankBir} below, i.e. no larger class of finite groups than the \nilpotent{2} groups of bounded rank can act simultaneously and birationally on a single variety. In fact, \autoref{thm:mainAction} demonstrates the sharpness of \autoref{thm:birational2Nilpotent}. \begin{defn}[{\cite[Definition~1]{guld2020finite2nilpotent}}] A group $G$ is nilpotently Jordan (of class at most $c$) if there is an integer $J_{G}$ such that every finite subgroup $F\leq G$ contains a nilpotent subgroup $N\leq F$ (of class at most $c$) such that $|F:N|\leq J_G$. \end{defn} \begin{thm}[{\cite[Theorem 2]{guld2020finite2nilpotent} based on \cite{prokhorov2016jordan}}]\label{thm:birational2Nilpotent} The birational automorphism group of any variety over a field of characteristic zero is nilpotently Jordan of class at most $2$. \end{thm} \begin{thm}[{\cite[Remark 6.9]{ProkhorovShramov2014} or \cite[Theorem~15]{guld2019finiteDnilpotent} for details}]\label{rem:boundedRankBir} The rank of every finite group acting birationally on a variety $X$ over a field of characteristic zero is bounded by a function of $X$. \end{thm} The situation for manifolds is similar, but no bound on the nilpotency class is known apart from low dimensional cases \cite[Theorem 1.1]{Riera4dim}. The sharpness of \autoref{thm:mainAction} and \autoref{lem:boundedRankDiff} is an open question, cf. \cite[Question 1.5]{nilpotentJordanHomeo}. \begin{thm}[{\cite[Theorem 1.3]{nilpotentJordanHomeo}}]\label{thm:homeoNilpotent} The homeomorphism group of every compact topological manifold is nilpotently Jordan. \end{thm} \begin{thm}[{\cite[Theorem~1.8]{CsikosMundetPyberSzabo}}]\label{lem:boundedRankDiff} The rank of every finite group acting continuously on a compact manifold $M$ is bounded by a function of $M$. \end{thm}
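To illustrate the rank condition with a standard example (included only for concreteness): the elementary abelian group $(\mathbb{Z}/p\mathbb{Z})^r$ has rank exactly $r$, since its subgroups are precisely the $\mathbb{F}_p$-subspaces of an $r$-dimensional vector space, each generated by at most $r$ elements, while the full group cannot be generated by fewer than $r$ elements.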
We prove the second part of \autoref{thm:mainStructure} by finding a suitable generating set imitating the Darboux basis of symplectic vector spaces. In \autoref{sec:HeisenbergGroups}, we define a complex structure and isotropic structure on $G/\mathbb{C}enter(G)$. This enables us to assign a Heisenberg group to $G$ using a method that is independent on the prime divisors of $|G|$. Then we discuss the general idea of modifying the previous construction by extending the centre so that actually $G$ embeds to the resulting Heisenberg group. Finally, in \autoref{sec:HeisenbergEmbedding} we prove \autoref{thm:mainHeisenberg} in two steps. First, we consider the special case when the group has cyclic centre and apply the method of the previous two sections. Second, we use the reduction of \autoref{sec:subdirectProduct} to handle the general case. \[\begin{tikzcd}[column sep=4em,row sep=2em] \cbox{0.3\textwidth}{\nilpotent{2} group $G$} \ar[d,rightsquigarrow,"\text{reduction}","\text{\autoref{sec:subdirectProduct}}"'] \ar[rr,dotted,inclusion,"\text{\autoref{thm:embedToH}}"'] & & \cbox{0.3\textwidth}{$\matrixGroupH{A}{B}{C}_{\!\!\mu}$} \\ \cbox{0.3\textwidth}{\nilpotent{2} groups $G_i$ with cyclic centre} \ar[r,rightsquigarrow,"\text{\autoref{sec:alterntingModules}}"',"{[-,-]}"] \ar[rr,mono,dashed,bend left=15, inclusion, "\text{\autoref{thm:maximalCbACyclicCentreEmbedsToHeisenberg}}"'] & \cbox{0.23\textwidth}{Hermitian forms with isotropic structure} \ar[r,rightsquigarrow,"\text{\autoref{sec:HeisenbergGroups}}"'] & \cbox{0.3\textwidth}{$\matrixGroupH{A_i}{B_i}{C_i}_{\!\!\mu_i}$} \ar[u,rightsquigarrow,"\text{\autoref{sec:subdirectProduct}}","\mathfrak{p}rod_i"'] \end{tikzcd}\] The present paper is the first of two compiled from the thesis of the author \cite{phd}. \mathfrak{p}aragraph{Notation} $|X|$ denotes the \emph{cardinality} of a set $X$. $\mathbb{N}_{+}\subset \mathbb{Z}$ is the set of positive integers, ${\mathbb{N}_{0}}=\{0\}\cup \mathbb{N}_{+}$. $n\bigm| k$ denotes \emph{divisibility}. (Note that this symbol is slightly taller than the one denoting the cardinality.) $\lcm(n_1,\dots,n_k)$ is the least common multiple of integers $n_1,\dots,n_k$. We apply functions \emph{from the left}. For $f\colon X\to Y$ and $X_0\subseteq X$, we write $f|_{X_0}\colon X_0\to Y$ for \emph{restriction} and $f(X_0)\coloneqq\{f(x_0):x_0\in X_0\}$. For maps $f_i\colon X_i\to Y_i$, denote by $f_1\times f_2\colon X_1\times X_2\to Y_1\times Y_2, (x_1,x_2)\mapsto (f_1(x_1),f_2(x_2))$ their \emph{direct product}. The arrow $\arrowInline{mono}$ indicates a \emph{monomorphism} (or an injective map), $\arrowInline{epi}$ means an \emph{epimorphism} (or a surjective map), and these arrow notations can be combined. We write $\arrowInline{identity}$ for the \emph{identity map}. In bigger diagrams, we use $\arrowInline{solid}$, $\arrowInline{dashed}$ or $\arrowInline{dotted}$ to indicate the `chronological order': the further it is from a solid one, the later it appears in the construction. Let $G$ denote a group. We denote the \emph{identity element} of $G$ by $1$, or sometimes by $0$ when $G$ is an additive abelian group. By abuse of notation, we also write $1$ or $0$ for the \emph{trivial group}. For a subset $S\subseteq G$, $\generate{S}$ denotes the subgroup generated by $S$, and write $\generate{g_1,\dots,g_n}\coloneqq\generate{\{g_1,\dots,g_n\}}$. $\Hom(X,Y)$ is the \emph{set of morphisms} $X\to Y$. $N\lhd G$ means that $N$ is a \emph{normal subgroup} of $G$. 
Write $[-,-]\colon G\times G\to G',(g,h)\mapsto[g,h]$ for the \emph{commutator map} where we use the convention $[g,h]\coloneqq g^{-1}h^{-1}gh$. The \emph{commutator subgroup} (or \emph{derived subgroup}) is denoted by $G'\coloneqq[G,G]$. $\mathbb{C}enter(G)$ is the \emph{centre} of $G$. $\Syl(G)$ is the set of \emph{Sylow subgroups} of $G$, $\Syl_p(G)$ consists of \emph{Sylow $p$-subgroups}. We denote by $\exp(G)\coloneqq\inf\{n\in \mathbb{N}_{+}:\forall g\in G\, g^n = 1\}$ the \emph{exponent} of a group $G$. Write $d(G)$ for the \emph{cardinality of the smallest generating set}. We say $G$ is \emph{\dgenerated{d}} if $d\leq d(G)$; and $G$ is \emph{$d$-generated} if $d=d(G)$. \section{Subdirect product decomposition}\label{sec:subdirectProduct} \begin{summary} The goal of this section is to prove the first part of \autoref{thm:mainStructure} from \autopageref{thm:mainStructure} by passing to abelian groups. To show the existence of such a subdirect product, we recursively take quotients by the invariant factors of the centre. To attain the minimal number of factors, we consider intersections with the centre. \end{summary} \begin{defn}[$\mathbb{C}C$]\label{defn:CdecompositioninG} Let $\mathbb{C}C$ denote the class of groups with cyclic centre. A \emph{$\mathbb{C}C$-decomposition in a group $G$} is a finite set $D$ of normal subgroups of $G$ such that $G/N\in\mathbb{C}C$ for every $N\in D$ and $\bigcap D=1$. (Use the convention $\bigcap D = G$ if $D=\emptyset$.) Let $m_\mathbb{C}C(G)$ denote the minimal $|D|$ amongst all $\mathbb{C}C$-decompositions $D$ in $G$, or $\infty$ if no such decomposition exists. \end{defn} \begin{rem}\label{rem:associatedEmbedding} This is a reformulation of subdirect products, as the \emph{associated (central) embedding} $\mu_D\colon G\rightarrowtail G/D\coloneqq\mathfrak{p}rod_{N\in D}G/N, g\mapsto(gN)_{N\in D}$ makes $G$ a subdirect product of groups from $\mathbb{C}C$. \end{rem} \begin{lem}\label{lem:mCFinite} There is a $\mathbb{C}C$-decomposition in every finite group $G$. Furthermore, $d(\mathbb{C}enter(G))\leq m_\mathbb{C}C(G)$. \end{lem} \begin{proof} Let $l(G)$ be the maximal length of a strictly increasing subgroup series consisting of normal subgroups of $G$. Note that $l(G/N)<l(G)$ for any non-trivial normal subgroup $N$ of $G$ as a series $K_0/N<K_1/N<\dots<K_n/N$ of normal subgroups of $G/N$ induces $1<N<K_1<\dots <K_n$ in $G$. Write $\mathbb{C}enter(G)=\mathfrak{p}rod_{i=1}^{d} C_i$ where $C_i$ are non-trivial cyclic groups. If $d\leq 1$ (for example when $l(G)=0$), then $D=\{1\}$ is a $\mathbb{C}C$-decomposition in $G$. Otherwise, by induction on $l(G)$, there are $\mathbb{C}C$-decompositions $D_i$ in $G/C_i$. Lift $D_i$ to a set of normal subgroups $\bar D_i$ of $G$ containing $C_i$, i.e. $D_i=\{K/C_i:K\in\bar D_i\}$. We claim that $D\coloneqq\bigcup_{i=1}^{d} \bar D_i$ is a $\mathbb{C}C$-decomposition in $G$. Indeed, it is a finite set of normal subgroups of $G$. For every $K\in D$, we have $K/C_i\in D_i$ for some $i$, hence $G/K\cong (G/C_i)/(K/C_i)\in\mathbb{C}C$ as $D_i$ is a $\mathbb{C}C$-decomposition in $G/C_i$. Finally, note that $\bigcap D= \bigcap_{i=1}^{d} \bigcap\bar D_i = \bigcap_{i=1}^{d} C_i = 1$. For the second part, suppose $D$ is a $\mathbb{C}C$-decomposition in $G$.
Then $\mu_D(\mathbb{C}enter(G))\subseteq \mathbb{C}enter(G/D)=\mathfrak{p}rod_{N\in D} \mathbb{C}enter(G/N)$, so $d(\mathbb{C}enter(G))\leq \sum_{N\in D} d(\mathbb{C}enter(G/N))\leq |D|$ since $G/N$ has cyclic centre by assumption and using that $d$ is a monotone function on abelian groups. \end{proof} The next statement is motivated by an idea of Endre Szabó. \begin{lem}\label{lem:triviallyIntersectionInFiniteAbelian} Let $A$ be a finite abelian $p$-group, $X$ be a trivially intersecting set of subgroups of $A$. Then there exists $Y\subseteq X$ with $|Y|\leq d(A)$ and $\bigcap Y=1$. \end{lem} \begin{proof} We prove this by induction on $d(A)$. If $d(A)=0$, then $A$ is trivial, and $Y=\emptyset$ works by convention. Else assume that $d(A)>0$. For any subgroup $K\leq A$, define $V(K)=\{g\in K:g^p=1\}$. Note that $V(K)$ can be considered as an $\mathbb{F}_p$-vector space of dimension $d(K)$. Assume by contradiction that $V(K)=V(A)$ for all $K\in X$. Then $V(A)\subseteq K$, hence $V(A)\subseteq \bigcap X=0$, but this contradicts that $V(A)$ has positive dimension. So we may pick $B\in X$ so that $d(B)<d(A)$. Now $X_B\coloneqq\{B\cap K:K\in X\}$ is a trivially intersecting set of subgroups of $B$, so by induction, there is $Y_B\subseteq X_B$ of size at most $d(B)$ with trivial intersection. Lift back $Y_B$ to $Z\subseteq X$. Then $|Z|=|Y_B|$ and $Y_B=\{B\cap K:K\in Z\}$. We show that $Y\coloneqq\{B\}\cup Z\subseteq X$ satisfies the claim. Indeed, $|Y|\leq 1+|Y_B|\leq 1+d(B)\leq d(A)$ by construction, and $\bigcap Y= B \cap \bigcap Z= \bigcap_{K\in Z} (B\cap K)= \bigcap Y_B=1$. \end{proof} \begin{lem}\label{lem:mCpGroup} $m_\mathbb{C}C(P)=d(\mathbb{C}enter(P))$ for any finite $p$-group $P$. \end{lem} \begin{proof} There is a $\mathbb{C}C$-decomposition $D$ in $P$ by \autoref{lem:mCFinite}. We claim the existence of a $\mathbb{C}C$-decomposition $S\subseteq D$ of size at most $d(\mathbb{C}enter(P))$. This then proves the statement as no smaller $\mathbb{C}C$-decomposition may exist by \autoref{lem:mCFinite}. Let $A\coloneqq\mathbb{C}enter(P)$ and consider $X\coloneqq\{N\cap A:N\in D\}$, a trivially intersecting set of subgroups of the abelian group $A$. Let $Y\subseteq X$ with $|Y|\leq d(A)$ and $\bigcap Y=1$ be given by \autoref{lem:triviallyIntersectionInFiniteAbelian}. Lift $Y$ back to $S\subseteq D$. Then $1=\bigcap Y=\mathbb{C}enter(P)\cap \bigcap S$, so we must have $\bigcap S=1$ since in a nilpotent group, there is no non-trivial normal subgroup intersecting the centre trivially. Also by construction, $|S|=|Y|\leq d(A)=d(\mathbb{C}enter(P))$, so $S$ is indeed a $\mathbb{C}C$-decomposition of $P$ with the stated properties. \end{proof} \begin{lem}\label{lem:mCSylowMax} Let $G$ be a finite nilpotent group. Then $m_\mathbb{C}C(G)=\max\{m_\mathbb{C}C(P):P\in \Syl(G)\}$ where $\Syl(G)$ is the set of Sylow subgroups of $G$. \end{lem} \begin{proof} Let $D$ be a $\mathbb{C}C$-decomposition in $G$ and $P\in\Syl(G)$. We claim that $D_P\coloneqq\{N\cap P:N\in D\}$ is a $\mathbb{C}C$-decomposition in $P$. Indeed, $P$ is a normal subgroup of $G$ because $G$ is finite nilpotent, so $N\cap P$ is a normal subgroup of $G$, hence of $P$. On the other hand $\bigcap N_P=P\cap \bigcap D=1$. This shows that $m_\mathbb{C}C(G)\geq m_\mathbb{C}C(P)$ for all $P\in\Syl(G)$. For the other direction, let $D_p$ be a $\mathbb{C}C$-decomposition for $G_p\in \Syl_p(G)$ for all prime divisors $p$ of $|G|$. 
Let $\mathcal{D}$ be a partition of $\bigcup_p D_p$ of size $\max\{|D_p|:p\}$ such that $|S\cap D_p|\leq 1$ for every $S\in \mathcal{D}$ and $p$. We claim that $D\coloneqq\{\mathfrak{p}rod_{N\in S}N:S\in \mathcal{D}\}$ is a $\mathbb{C}C$-decomposition in $G$. Indeed, as above, every $N_p\in D_p$ is normal in $G$, so every $N\in D$ is also a normal subgroup of $G$ being the product of such groups in a finite nilpotent group. On the other hand, $G_p\cap \bigcap D = \bigcap \{G_p\cap \mathfrak{p}rod_{N\in S}N:S\in \mathcal{D}\} = \bigcap \{K:K\in D_p\} = \bigcap D_p=1$ using the assumption on the partition. This shows that $m_\mathbb{C}C(G)\leq \max\{m_\mathbb{C}C(P):P\in\Syl(G)\}$. \end{proof} \begin{prop}\label{prop:subdirectProductFinite} $m_\mathbb{C}C(G)=d(\mathbb{C}enter(G))$ for any finite nilpotent group $G$. In particular, $G$ is a subdirect product of $d(\mathbb{C}enter(G))$ groups each having cyclic centre. \end{prop} \begin{proof} Using \autoref{lem:mCSylowMax}, \autoref{lem:mCpGroup} and the Chinese remainder theorem, we get \begin{align*} m_\mathbb{C}C(G) &= \max\{m_\mathbb{C}C(P):P\in\Syl(G)\}= \max\{d(\mathbb{C}enter(P)):P\in\Syl(G)\}\\&= \max\{d(Q):Q\in\Syl(\mathbb{C}enter(G))\}= d(\mathbb{C}enter(G)) \end{align*} after noting that $G=\mathfrak{p}rod_{P\in \Syl(G)} P$ implies $\Syl(\mathbb{C}enter(G))=\{\mathbb{C}enter(P):P\in\Syl(G)\}$. \end{proof} \begin{rem} The statement holds when $G$ is finitely generated using similar reasoning, see \cite[\S3.1.2]{phd}. \end{rem} \section{Alternating modules}\label{sec:alterntingModules} \begin{summary} In this section, we introduce alternating modules to generalise the notion of symplectic vector spaces prominently to the abelianisation of \nilpotent{2} groups. We show that, under some conditions, they possess an analogue of the Darboux basis. Using this, on one hand, we prove the second part of \autoref{thm:mainStructure} from \autopageref{thm:mainStructure}, on the other hand, we endow the module with a non-canonical complex structure having an isotropic structure. \end{summary} We start with an elementary statement. \begin{lem}[`Alternating Smith' normal form]\label{lem:alternatingSmith} Let $R$ be a principal ideal domain, $W\in R^{n\times n}$ be an alternating matrix (i.e. $W^\top = -W$ and has $0$'s at the main diagonal). Then for $s=\frac{1}{2}\rank(W)$, there exist elements $d_1\mid d_2\mid\dots\mid d_s\neq 0$ in $R$ (unique up to unit multiples) and $B\in \SL_n(R)$ such that \begin{equation} B^\top W B = \diag\left( \begin{pmatrix}0 & d_1 \\ -d_1 & 0\end{pmatrix}, \begin{pmatrix}0 & d_2 \\ -d_2 & 0\end{pmatrix}, \dots, \begin{pmatrix}0 & d_s \\ -d_s & 0\end{pmatrix}, 0,\dots,0\right) . \label{eq:alternatingSmith} \end{equation} \end{lem} \begin{proof} The idea is similar to the standard proof of Smith normal form \cite[\S3.7]{jacobson1985basic}, but instead of focusing on the main diagonal, we consider the superdiagonal entries. At each step, we choose different pivots and apply the base change $W\mapsto X^\top W X$ (to respect the alternating property) for a series of well-chosen matrices $X\in\SL_n(R)$ until the stated form for $W=(w_{i,j})$ is obtained. Once the existence is verified, the uniqueness statement follows from the fact that for each $1\leq k\leq n$, the ideal generated by the $k\times k$ minors is unchanged under these transformations. See \cite[\S2.3]{phd} for details. \end{proof} A key notion is the following analogue of symplectic vector spaces. 
\begin{defn}\label{defn:alternatingModule} We call $(M,\omega, C)$ an \emph{alternating $R$-module},\index{alternating module} if $R$ is a principal ideal domain, $M$ is a finitely generated $R$-module, $C$ is a cyclic $R$-module, and $\omega\colon M\times M\to C$ is an $R$-bilinear map that is alternating, i.e. $\omega(m,m)=0$ for every $m\in M$. $N\leq M$ is $N$ \emph{isotropic} if $\omega(N\,N)=0$. The \emph{orthogonal complement} of $N$ is $N^\mathfrak{p}erp\coloneqq\{m\in M:\omega(m,N)=0\}$. Call $\omega$ and $(M,\omega,C)$ \emph{non-degenerate} if $M^\mathfrak{p}erp=0$. \end{defn} \begin{lem}[Darboux-generators]\label{lem:DarbouxGenerators} Let $(M,\omega,C)$ be an alternating $R$-module. Then there exists a minimal $R$-module generating set $\mathcal{B}$ of $M$, and a subset $\{x_1,y_1,\dots,x_t,y_t\}\subseteq\mathcal{B}$ such that $\omega(M,M)=R\omega(x_1,y_1)\geq R \omega(x_2,y_2)\geq \dots \geq R \omega(x_t,y_t)\neq 0$ and $\omega(b_1,b_2)=0$ for all other pairs $(b_1,b_2)\in \mathcal{B}^2$. In any such generating set, the chain of submodules of $C$ above is invariant. More concretely, $t=\frac{1}{2}d(M/M^\mathfrak{p}erp)$ and $M/M^\mathfrak{p}erp \cong \bigoplus_{i=1}^t (R\omega(x_i,y_i))^{\oplus 2}$. \end{lem} \begin{proof} Let $(M,\omega,C)$ be a Darboux module. Pick a minimal $R$-module generating set $\{b_1,\dots,b_n\}$ of $M$, and let $c$ be a fixed generator of $\omega(M,M)$. Pick $w_{i,j}\in R$ such that $\omega(b_i,b_j) = w_{i,j} c$. (Note that these are not necessarily unique, but $w_{i,j}+\ann_R(c)\in R/\ann_R(c)$ are.) Without loss of generality, we may assume that $W\coloneqq(w_{i,j})_{i,j}\in R^{n\times n}$ is an alternating matrix. Let $B\in \SL_n(R)$ given by \autoref{lem:alternatingSmith}. Since $B$ is an invertible square matrix, $\mathcal{B}\coloneqq\{\sum_{j=1}^n B_{j,i} b_j:1\leq i\leq n\}=\{x_1,y_1,\dots,x_s,y_s,\dots\}$ is also a (minimal) generating set of $M$ in which $\omega$ can be expressed at the matrix from \eqref{eq:alternatingSmith}. Then the statement follows by setting $t\in {\mathbb{N}_{0}}$ so that $d_ic\neq 0$ for $1\leq i\leq t$, and $d_jc=0$ for $t<j\leq s$. For the second part, let $x_i,y_i$ be as above. We claim that \begin{equation} \begin{tikzcd}[align] 0\ar[r] & M^\mathfrak{p}erp \ar[r,inclusion] & M\ar[r,"\omega^\flat"] & \bigoplus_{i=1}^t (R\omega(x_i,y_i))^{\oplus 2} \ar[r] & 0 \\ & & m \ar[r,mapsto] & (\omega(m,y_i),\omega(x_i,m))_{i=1}^t \end{tikzcd} \label{diag:omegaFlat} \end{equation} is a short exact sequence of $R$-modules. Indeed, $\omega^\flat$ is well-defined using the bilinearity of $\omega$ and the orthogonality elements of $\mathcal{B}$. $\ker(\omega^\flat) = \{m\in M:\forall i\;\omega(x_i,m)=\omega(m,y_i)\}= \{m\in M:\forall b\in \mathcal{B}\;\omega(b,m)=0\}= M^\mathfrak{p}erp$. On the other hand, using orthogonality once more, $\omega^\flat(\sum_{i=1}^t r_ix_i + s_iy_i)=(r_i,y_i)_i$ for arbitrary $r_i,s_i\in R$, thus $\omega^\flat$ is surjective. Hence $M/M^\mathfrak{p}erp\cong \bigoplus_{i=1}^t (R\omega(x_i,y_i))^{\oplus 2}$, so the isomorphism class of the $R$-modules $R\omega(x_i,y_i)$ are invariant by the structure theorem of finitely generated modules over a principal ideal domain and $t=\frac{1}{2}d(M/M^\mathfrak{p}erp)$. If $\ann_R(c)\neq 0$, then $\omega(M,M)$ is a cyclic torsion $R$-module, so its isomorphic submodules are necessarily equal. This means that the submodules $R\omega(x_i,y_i)\leq \omega(M,M)$ are themselves invariant. 
Otherwise, $\omega(M,M)$ is a free $R$-module of rank $1$ generated by, say, $c$. Thus $\omega(x_i,y_i)=d_ic$ for some unique $d_i(c)\in R$. By \autoref{lem:alternatingSmith}, $Rd_i$ is independent of the choice the generators of $M$. Hence $Rd_ic=R\omega(x_i,y_i)$ may depend only on $c$, but the right-hand side does not. \end{proof} Our main resource of alternating $\mathbb{Z}$-modules is the following. \begin{defn}\label{defn:centralByAbelian} A short exact sequence $\epsilon:1\to C\xrightarrow{\iota} G\xrightarrow{\mathfrak{p}i} M\to 1$ of groups is called a \emph{central-by-abelian{} extension}, if $\iota(C)\subseteq\mathbb{C}enter(G)$ and $M$ is abelian. This extension $\epsilon$ is \emph{non-degenerate}\index{central-by-abelian{} extension!non-degenerate}, if $\iota(C)=Z(G)$. \end{defn} \begin{lem}[The alternating functor $\mathop{\mathcal{A}}$]\label{lem:alterntingFunctor} Every central-by-abelian{} extension groups $\epsilon:1\to C\xrightarrow{\iota} G\xrightarrow{\mathfrak{p}i} M\to 1$ induces an alternating $\mathbb{Z}$-bilinear map $\omega\colon M\times M\to C$ defined by $(m_1,m_2)\mapsto \iota^{-1}([g_1,g_2])$ for arbitrary $g_i\in\mathfrak{p}i^{-1}(m_i)$. In particular, when $M$ is finitely generated and $C$ is cyclic, then $\mathop{\mathcal{A}}(\epsilon)\coloneqq(M,\omega,C)$ is an alternating $\mathbb{Z}$-module. \end{lem} \begin{proof} We consider $M$ and $C$ as $\mathbb{Z}$-modules. First note that $G'\subseteq\iota(C)\subseteq\mathbb{C}enter(G)$, so $G$ is necessarily \nilpotent{2}. Then the general commutator identities $[g_1g_2,h]=[g_1,h][g_1,h,g_2][g_2,h]$ and $[g,h_1h_2]=[g,h_2][g,h_1][g,h_1,h_2]$ imply that $[-,-]:G\times G\to G'$ is a group morphism in both coordinates. Next, we check that $\omega$ is well-defined. Pick $g_i,g_i'\in \mathfrak{p}i^{-1}(m_i)$. Then $g_i^{-1}g_i'\in \ker(\mathfrak{p}i)=\im(\iota)$, so there are $c_i\in C$ with $\iota(c_i)=g_i^{-1}g_i'$. Then $[g_1',g_2']= [g_1\iota(c_1),g_2\iota(c_2)]= [g_1,g_2][g_1,\iota(c_2)][\iota(c_1),g_2][\iota(c_1),\iota(c_2)]= [g_1,g_2]$ by above as $\iota(C)\subseteq Z(G)$. Finally, $G'\subseteq \iota(C)$ implies that we can apply $\iota^{-1}$ to this element. $\mathbb{Z}$-bilinearity of $\omega$ follows directly from the previously mentioned fact. The alternating property follows as every group element commutes with itself. \end{proof} \begin{rem}\label{rem:dictionary} \autoref{lem:alterntingFunctor} gives the following dictionary between subgroups $H,H_i$ of $G$ and submodules of $M$. The commutator map corresponds to $\omega$ ($[g,g']=\iota\circ\omega(\mathfrak{p}i(g),\mathfrak{p}i(g'))$ and $[H_1,H_2]=\iota\circ\omega(\mathfrak{p}i(H_1),\mathfrak{p}i(H_2))$), commutes to being orthogonal ($[H_1,H_2]=1 \iff \mathfrak{p}i(H_1)\mathfrak{p}erp \mathfrak{p}i(H_2)$), the centraliser to the orthogonal complement ($\mathfrak{p}i(C_G(H))= \mathfrak{p}i(H)^\mathfrak{p}erp$, in particular $\mathfrak{p}i(\mathbb{C}enter(G))=M^\mathfrak{p}erp$), abelian to isotropic ($[H,H]=1 \iff \mathfrak{p}i(H) \mathfrak{p}erp \mathfrak{p}i(H)$), and the notion of non-degeneracy coincide. \end{rem} The dictionary can be extended to Darboux-generators as the following generalisation of \autoref{thm:specialPgroups} and \cite[Theorem~2.1]{brady_bryce_cossey_1969} shows. \begin{thm}[Central product decomposition]\label{cor:decompositionCyclicDeriverGroup} Let $G$ be a finite \nilpotent{2} group with cyclic commutator subgroup $G'$. 
Then it contains pairwise commuting subgroups $A$ and $E_1,\dots,E_t$ such that $G=AE_1\dots E_t$ (a central product) where $A\leq \mathbb{C}enter(G)$, $E_i$ are $2$-generated and of class exactly $2$, $d(G)=d(A)+2t$ and $G'=E_1'\supsetneq E_2'\supsetneq \dots \supsetneq E_t'\neq 1$. In any such case, $t=\frac{1}{2} d(G/\mathbb{C}enter(G))$ and $E_i'\subseteq G'$ are invariants given by $G/\mathbb{C}enter(G)\cong \mathfrak{p}rod_{i=1}^t E_i'^2$. \end{thm} \begin{proof} Let $(M,\omega,C)=\mathop{\mathcal{A}}(1\to G'\xrightarrow{\subseteq} G\xrightarrow{\mathfrak{p}i} G/G'\to 1)$. This is an alternating $\mathbb{Z}$-module by \autoref{lem:alterntingFunctor}. Consider the minimal generating set $\mathcal{B}=\{x_1,y_1,\dots,x_t,y_t,o_1,\dots,o_k\}$ of $M=G/G'$ as in \autoref{lem:DarbouxGenerators}. For every $b\in \mathcal{B}$, fix an arbitrary lift $\bar b\in\mathfrak{p}i^{-1}(b)\subseteq G$, and set $\bar {\mathcal{B}} \coloneqq\{\bar g:g\in \mathcal{B}\}$. We show that the subgroups $E_i\coloneqq\generate{\bar x_i,\bar y_i}\leq G$ and $A\coloneqq\generate{\bar o_1,\dots,\bar o_k}\leq G$ satisfy the statement. Indeed, $A\subseteq \mathfrak{p}i^{-1}(M^\mathfrak{p}erp)=\mathbb{C}enter(G)$ using \autoref{rem:dictionary}. Moreover, $[E_i,E_j]=1$ if and only if $(\mathbb{Z} x_i+\mathbb{Z} y_i)\mathfrak{p}erp (\mathbb{Z} x_j+\mathbb{Z} y_j)$ if and only if $i\neq j$ by \autoref{rem:dictionary} and \autoref{lem:DarbouxGenerators}. By \autoref{rem:dictionary}, $G'=[G,G]=\omega(M,M)$ and $E_i'=[E_i,E_i]=\omega(\mathbb{Z} x_i+\mathbb{Z} y_i,\mathbb{Z} x_i+\mathbb{Z} y_i)=\mathbb{Z}\omega(x_i,y_i)$. Then all parts about the derived subgroups follow from \autoref{lem:DarbouxGenerators}. In particular, $G'=E_1'=\mathbb{Z}\omega(x_1,y_1)=\generate{[\bar x_1, \bar y_1]}$, so, considering the central-by-abelian{} extension above, we see that $G =\generate{\bar{\mathcal{B}}\cup G'} =\generate{\bar{\mathcal{B}}\cup \{[\bar x_1, \bar y_1]\}} = \generate{\bar{\mathcal{B}}} =AE_1\dots E_t$, where the third equality holds as $[\bar x_1, \bar y_1]\in\generate{\bar{\mathcal{B}}}$. So $d(G)\leq d(A)+\sum_{i=1}^t d(E_i)\leq |\bar{\mathcal{B}}| =|\mathcal{B}|=d(M)\leq d(G)$. This forces equality everywhere, so $d(E_i)=2$ and $2t+d(A)=d(G)$. Finally, \autoref{rem:dictionary} shows that $M^\mathfrak{p}erp=\mathfrak{p}i(\mathbb{C}enter(G))=\mathbb{C}enter(G)/G'$, so $M/M^\mathfrak{p}erp = (G/G')/(\mathbb{C}enter(G)/G')\cong G/\mathbb{C}enter(G)$. \end{proof} \begin{rem}\label{rem:centralProductDecompositionNotUnique} The isomorphism classes of the subgroups $E_1,\dots,E_t$ are not uniquely determined, as demonstrated by the classical decomposition of extra-special $p$-groups, \autoref{thm:specialPgroups}. For example, the extra-special $p$-group $G$ of order $p^{2t+1}$ of exponent $p^2$ has many different internal central product decompositions. For any $1\leq s\leq t$, one can write $G=E_1E_2\dots E_t$ such that $E_i\cong M$ for $1\leq i\leq s$ and $E_i\cong E$ for $s<i\leq t$, where $E$ and $M$ are the non-abelian groups of order $p^3$ and of exponent $p$ and $p^2$, respectively \cite[Theorem~4.18]{Suzuki2}. \end{rem} If the alternating module is non-degenerate, we can endow it with additional structures. \begin{defn} For a commutative ring $Q$, define a ring $Q[i] \coloneqq Q[x]/(x^2+1)$ with $i\coloneqq x+(x^2+1)\in Q[i]$ and maps $\sigma\colon Q[i]\to Q[i],q+iq'\mapsto q-iq'$ (conjugation) and $\Im\colon Q[i]\to Q,q+iq'\mapsto q'$ (the imaginary part).
For a $Q[i]$-module $M$, we call a map $h\colon M\times M\to Q[i]$ a \emph{Hermitian form on $M$ over $Q[i]$} if $h$ is $Q[i]$-linear in the first argument and $h$ is $\sigma$-conjugate symmetric (i.e. $h(m,m')=\sigma(h(m',m))$). \end{defn} The next statement is essential for \autoref{thm:mainAction}. \begin{prop}\label{lem:complexStructure} Let $(M,\omega,C)$ be a non-degenerate alternating $R$-module. Set $Q\coloneqq R/\ann_R(\omega(M,M))$, and let $\mathfrak{p}hi\colon Q\rightarrowtail C$ be a monomorphism of $R$-modules such that $\omega(M,M)\subseteq \mathfrak{p}hi(Q)$. Then there is a Hermitian form $h$ on $M$ over $Q[i]$ making the following diagram commute. \[\begin{tikzcd} M\times M \ar[r,"\omega"] \ar[d,"h","\exists"',dashed] & C \\ Q[i] \ar[r,"\Im",epi] & Q \ar[u,"\mathfrak{p}hi"',mono] \end{tikzcd}\] Furthermore, $M$ has a non-canonical isotropic $Q$-structure, i.e. an isotropic $Q$-submodule $M_Q$ of $M$ such that $M=M_Q\oplus iM_{Q}$ (as $Q$-modules). Finally, there is $\alpha\in M_Q$ such that $\Im h(\alpha,i\alpha)$ is a $Q$-module generator of $Q$. \end{prop} \begin{rem} The $Q[i]$-module structure of $M$ is non-canonical, but is compatible with $R\to Q\to Q[i]$. The $Q$-module $iM_Q$ is automatically isotropic as $\omega(ia,ia')=\mathfrak{p}hi(\Im(h(ia,ia')))=\mathfrak{p}hi(\Im(h(a,a')))=\omega(a,a')=0$. If $C$ is finite, then by order considerations the condition $\omega(M,M)\subseteq \mathfrak{p}hi(Q)$ is automatically satisfied. \end{rem} \begin{rem}\label{rem:complexStructuresIsomorphic} While the structures themselves from the statement depend on the choice of the generators of $M$, their isomorphism classes do not. More concretely, let $\bar M$ be an arbitrary $Q[i]$-module structure on $M$ together with a Hermitian form $\bar h$ and an isotropic $Q$-structure $\bar M = \bar M_Q\oplus i\bar M_Q$ as in the statement. Then \autoref{lem:alternatingSmith} implies the existence of a $Q[i]$-module isomorphism $f\colon M\to \bar M$ such that $f(M_Q)=\bar M_Q$ and $\bar h\circ(f\times f) = h$ (hence $\omega\circ(f\times f)=\omega$). \end{rem} \begin{proof} First, we claim that $\ann_R(\omega(M,M))\subseteq \ann_R(M)$. Indeed, for arbitrary $r\in \ann_R(\omega(M,M))$ and $m\in M$, $\omega(rm,m')=r\omega(m,m')=0$ for every $m'\in M$, hence $rm\in M^\mathfrak{p}erp=0$. This then means that $M$ can naturally be considered as a $Q$-module. To define the $Q[i]$-module structure, let $\mathcal{B}=\{x_1,y_1,\dots,x_t,y_t,o_1,\dots,o_k\}$ be a Darboux-generating set as in \autoref{lem:DarbouxGenerators}. The isomorphism $\omega^\flat:M\cong \bigoplus_{j=1}^t (R\omega(x_j,y_j))^{\oplus 2}$ from \eqref{diag:omegaFlat} shows that $k=0$ and $M=\bigoplus_{j=1}^t (Rx_j\oplus Ry_j)=\bigoplus_{j=1}^t (Qx_j\oplus Qy_j)$. In particular, $M_Q\coloneqq\bigoplus_{j=1}^t Qx_j$ and $M_{iQ}\coloneqq\bigoplus_{j=1}^t Qy_j$ are (non-canonical) isotropic submodules of $M$ giving a $Q$-module decomposition $M=M_Q\oplus M_{iQ}$. Define a $Q$-module automorphism $\iota_j$ of $(Q\omega(x_j,y_j))^{\oplus 2}$ by $\iota_j\colon(n,n')\mapsto (-n',n)$. Pulling back $\bigoplus_{j=1}^t \iota_j$ along $\omega^\flat$ gives a non-canonical automorphism $\iota$ of $M$ such that $\iota(x_j)=y_j$ and $\iota(y_j)=-x_j$. Thus $\iota\circ \iota=-\mathrm{id}_M$, $\iota(M_Q)=M_{iQ}$ and $\iota(M_{iQ})=M_Q$. Hence defining $(q+iq')\cdot m\coloneqq qm+\iota(q'm)$ gives the $Q[i]$-module structure in which $M_{iQ}=iM_Q$.
By assumption, there is a $Q$-bilinear map $\omega_Q\colon M\times M\to Q$ such that $\omega = \mathfrak{p}hi\circ \omega_Q$. We claim that \begin{align} h\colon M\times M\to Q[i],\mathfrak{q}uad (m,m')\mapsto \omega_Q(im,m')+i\omega_Q(m,m') \label{eq:HermitianFromAlternating} \end{align} is the Hermitian form with the stated properties. Indeed, $Q$-linearity in the first argument is inherited from $\omega_Q$, and \[h(im,m') =\omega_Q(-m,m')+i\omega_Q(im,m') =i(\omega_Q(im,m')+i\omega_Q(m,m')) =ih(m,m') \] then implies $Q[i]$-linearity. For the conjugate symmetry, first note that $\omega(ix_j,iy_j)=\omega(y_j,-x_j)=\omega(x_j,y_j)$ using the alternating property, and for all other pairs $(b_1,b_2)\in\mathcal{B}^2$, we also have $\omega(ib_1,ib_2)=0=\omega(b_1,b_2)$. Hence the $Q$-bilinearity of $\omega_Q$ implies that $\omega(im,im')=\omega(m,m')$, and therefore $\omega_Q(im,im')=\omega_Q(m,m')$, for every $m,m'\in M$. This together with the alternating property of $\omega_Q$ gives $\omega_Q(im,m') =-\omega_Q(m',im) =\omega_Q(i^2m',im) =\omega_Q(im',m) $, thus \[h(m,m') =\omega_Q(im,m')+i\omega_Q(m,m') =\omega_Q(im',m)-i\omega_Q(m',m) =\sigma(h(m',m)) .\] Finally, by construction, we can take $\alpha=x_1$. \end{proof} \begin{rem}\label{rem:omega-mu equivalence} The Hermitian form and the alternating map determine each other uniquely via \eqref{eq:HermitianFromAlternating} and $\Im\circ h = \omega_Q$. Furthermore, given the isotropic $Q$-structure, these maps are determined by the restriction $\mu\colon M_Q\times iM_Q\to C, (a,b)\mapsto \omega(a,b)$. Indeed, $\omega(a+ib,a'+ib')= \mu(a,ib')-\mu(a',ib)$ using the bilinearity of $\omega$ and that $M_Q$ and $iM_Q$ are isotropic. \end{rem} \section{Heisenberg groups}\label{sec:HeisenbergGroups} \begin{summary} In this section, we associate a (polarised) Heisenberg group to every $\mathbb{Z}$-bilinear map, in particular to alternating modules, or to finite \nilpotent{2} groups $G$ with cyclic centre. We show that upon extending the centre of the Heisenberg group suitably (using an extended polarisation), it will contain a normal subgroup isomorphic to $G$. \end{summary} \begin{defn}[Heisenberg group]\label{def:H} Let $A$, $B$ and $C$ be $\mathbb{Z}$-modules and $\mu\colon A\times B\to C$ a $\mathbb{Z}$-bilinear map. We call $\mu$ \emph{non-degenerate}, if $\mu(a,B)=0$ implies $a=0$ and $\mu(A,b)=0$ implies $b=0$. Define the associated \emph{Heisenberg group} as $\HH(\mu)\coloneqq A\ltimes_\mathfrak{p}hi(B\times C)$ where $\mathfrak{p}hi\colon A\to \Aut(B\times C),a\mapsto((b,c)\mapsto (b,\mu(a,b)+c))$. Call $\HH(\mu)$ \emph{non-degenerate} if $\mathbb{C}enter(\HH(\mu))=\{(0,0,c):c\in C\}$. Define a central-by-abelian{} extension \[\mathop{\mathcal{H}}(\mu):1\to C\xrightarrow{\HeisenbergMono[\mu]} \HH(\mu) \xrightarrow{\HeisenbergEpi[\mu]} A\times B \to 1\] by $\HeisenbergMono\coloneqq\HeisenbergMono[\mu]:c\mapsto(0,0,c)$ and $\HeisenbergEpi\coloneqq\HeisenbergEpi[\mu]:(a,b,c)\mapsto (a,b)$. \end{defn} \begin{rem}\label{rem:Hdef} More explicitly, the group structure on $\HH(\mu)$ is given by \[(a,b,c)*(a',b',c') = (a+a', b+b',c + \mu(a,b')+c')\] with $(0,0,0)$ being the identity and $(a,b,c)^{-1} =(-a,-b,\mu(a,b)-c)$ the inverse. So formally $\HH(\mu)\cong\begin{smallpmatrix}1&A&C\\0&1&B\\0&0&1\end{smallpmatrix}$ with matrix multiplication induced by $\mu$. In particular, $[(a,b,c),(a',b',c')]=(0,0,\mu(a,b')-\mu(a',b))$, i.e. the commutator coincides with $\HeisenbergMono[\mu]\circ\omega\circ(\HeisenbergEpi[\mu]\times \HeisenbergEpi[\mu])$ using the notation of \autoref{rem:omega-mu equivalence}.
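For instance, the formula for the inverse can be verified directly from the multiplication rule (a routine check, recorded here only for convenience): \[(a,b,c)*(-a,-b,\mu(a,b)-c)=\big(0,\,0,\,c+\mu(a,-b)+\mu(a,b)-c\big)=(0,0,0),\] using the bilinearity of $\mu$; the commutator formula follows by a similar direct computation.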
Note that $\mathbb{C}enter(\HH(\mu)) = \{(a,b,c):\mu(a,B)=\mu(A,b)=0\}\supseteq \HeisenbergMono[\mu](C)$. The notion of non-degeneracy for $\mu$, $\omega$, $\HH(\mu)$ and $\mathop{\mathcal{H}}(\mu)$ all coincide. \end{rem} \begin{exmp}[Heisenberg group of alternating modules]\label{rem:HeisenbergOfAlternetingModule} Let $(M,\omega,C)$ be a non-degenerate alternating $R$-module. Apply \autoref{lem:complexStructure}, set $A\coloneqqM_Q$ and $B\coloneqqiM_Q$ considered as $\mathbb{Z}$-modules. Then the restriction $\mu\colon A\times B\to C$ as at \autoref{rem:omega-mu equivalence} is non-degenerate and produces the non-degenerate central-by-abelian{} extension $\mathop{\mathcal{H}}(\mu)$. Note that while this construction depends on isotropic $\mathbb{Q}$-structure, the isomorphism class $\mathop{\mathcal{H}}(\mu)$ does not because $f$ from \autoref{rem:complexStructuresIsomorphic} induce an isomorphism of short exact sequences, cf. \cite[\S4.1]{phd} In particular, the isomorphism class of $\HH(\mu)$ is invariant which we call the Heisenberg group of the alternating module. Note that $(M,\omega,C)$, $\mu$, $\HH(\mu)$ and $\mathop{\mathcal{H}}(\mu)$ basically decode the same information as they mutually determine each other. \end{exmp} \begin{rem} Alternatively, if $2\in R$ has a multiplicative inverse (for example $C$ is finite of odd order), then there is a canonical way to define this Heisenberg group. As for symplectic vector spaces, we can define a group on the set $H\coloneqqM\times C$ with binary operation $(m,c)\cdot (m',c')\coloneqq (m+m',c+c'+2^{-1}\omega(m,m'))$. Then $H\to \HH(\mu),(a+b,c)\mapsto (a,b,c-2^{-1}\mu(a,b))$ is a group isomorphism to the group from \autoref{rem:HeisenbergOfAlternetingModule}. We shall use the construction of \autoref{rem:HeisenbergOfAlternetingModule} to treat all cases uniformly independently whether the group has $2$-torsion or not. \end{rem} \begin{exmp}\label{exmp:mu_G} Let $G$ be a finite \nilpotent{2} group with cyclic centre. Apply \autoref{lem:alterntingFunctor} to the non-degenerate central-by-abelian{} extension \[\begin{tikzcd}[label] \maxCBAfunctor(G) \ar[:] & 1\ar[r] & \mathbb{C}enter(G)\ar[r,inclusion] & G\ar[r,epi,"\pi_{\mathcal{Z}}"] & G/\mathbb{C}enter(G)\ar[r] & 1 \end{tikzcd}\] giving a non-degenerate alternating $\mathbb{Z}$-module $(G/\mathbb{C}enter(G),\omega, \mathbb{C}enter(G))\leteq\mathop{\mathcal{A}}(\maxCBAfunctor(G))$. Hence \autoref{rem:HeisenbergOfAlternetingModule} gives a non-degenerate $\mathbb{Z}$-bilinear map $\mu_G\colon A\times B\to \mathbb{C}enter(G)$ (where actually $A\cong B$) and \[\begin{tikzcd}[label] \mathop{\mathcal{H}}(\mu_G) \ar[:]& 1\ar[r] & \mathbb{C}enter(G)\ar[r,mono,"{\HeisenbergMono[\mu_G]}"] & \HH(\mu_G) \ar[r,epi,"{\HeisenbergEpi[\mu_G]}"] & A\times B \ar[r] & 1. \end{tikzcd}\] In this way, we assign a Heisenberg group $\HH(\mu_G)$ to $G$. These groups share many properties: the order, isomorphism class of centre and the commutator subgroup, and the nilpotency class. However, they are non-isomorphic for example if $G=Q_8$ (the quaternion group) or the extra-special $p$-group of order $p^3$ of exponent $p^2$. Thus in general, we cannot even expect to have a morphism between the two short exact sequences, as that would imply $G\cong \HH(\mu_G)$ by the $5$-lemma. In fact, this (iso)morphism exists if and only if $G\cong \HH(\mu_G)$ since then $A\oplus B$ gives an isotropic $\mathbb{Z}_c$-structure by \autoref{rem:omega-mu equivalence}. 
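To make the $Q_8$ example explicit (the following identification is a routine verification): $\mathbb{C}enter(Q_8)=\{\pm1\}$ is cyclic of order $2$, $Q_8/\mathbb{C}enter(Q_8)\cong(\mathbb{Z}/2\mathbb{Z})^2$, and $\mu_{Q_8}$ can be identified with the multiplication map $\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}$. Hence $\HH(\mu_{Q_8})\cong\begin{smallpmatrix}1&\mathbb{Z}/2\mathbb{Z}&\mathbb{Z}/2\mathbb{Z}\\0&1&\mathbb{Z}/2\mathbb{Z}\\0&0&1\end{smallpmatrix}$ is the dihedral group of order $8$: it has the same order, centre and commutator subgroup as $Q_8$, but more than one element of order $2$, so indeed $Q_8\not\cong\HH(\mu_{Q_8})$.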
\end{exmp} Our goal in to establish a monomorphism $\maxCBAfunctor(G)\to \mathop{\mathcal{H}}(\hat\mu)$ for a suitable (non-degenerate) $\hat\mu$. In fact, we will show that $\hat\mu=\zeta\circ\mu_G$ works for a suitable $\zeta:\mathbb{C}enter(G)\rightarrowtail \hat C$. For this, we generalise the notion of isotropic structure from \autoref{lem:complexStructure}. \begin{defn}\label{def:polarisationOfExtensions} An \emph{extended polarisation} of a central-by-abelian{} extension $\epsilon$ is the pair of the following commutative diagrams ($j\in\{1,2\}$) \begin{equation}\label{diag:polarisationOfExntensions} \begin{tikzcd}[label] \epsilon_j\ar[:] & 1\ar{r} & C_j \ar[r,"\iota_j",dashed]\ar[d,"\kappa_j",dashed] & G_j \ar[r,"\mathfrak{p}i_j",dashed]\ar[d,"\gamma_j",dashed] \ar[ddl,dashed,bend left=10,"\zeta_j" near end] & L_j\ar[r,dashed]\ar[d,mono,"\lambda_j",dashed] & 1 \\ \epsilon\ar[:]& 1\ar{r} & C \ar[r,"\iota"] \ar[d,dashed,"\zeta"] & G \ar["\mathfrak{p}i"]{r} & M \ar{r} & 1 \\ & & \hat C & & L_1\times L_2 \ar[u,iso,dashed,"\lambda\coloneqq\lambda_1\oplus\lambda_2"'] \end{tikzcd} \end{equation} such that $\epsilon_j$ is a central-by-abelian{} extension, $\lambda\colon L_1\times L_2\to M,(l_1,l_2)\mapsto \lambda_1(l_1)+\lambda_2(l_2)$ is an isomorphism and $\hat C$ is an abelian group. \end{defn} \begin{lem}\label{lem:G=G_2CG_1} Every extended polarisation as in \eqref{diag:polarisationOfExntensions} induces a decomposition $G=\gamma_2(G_2)\iota(C)\gamma_1(G_1)$. \end{lem} \begin{proof} Pick $g\in G$ arbitrarily. Set $(l_1,l_2)\coloneqq\lambda^{-1}(\mathfrak{p}i(g))\in L_1\times L_2$. From the surjectivity of $\mathfrak{p}i_j$, pick $g_j\in G_j$ such that $\mathfrak{p}i_j(g_j)=l_j$. Then $\mathfrak{p}i(\gamma_2(g_2)^{-1}g\gamma_1(g_1)^{-1})= -\lambda_2(\mathfrak{p}i_2(g_2)) + (\lambda_1(l_1)+\lambda_2(l_2)) -\lambda_1(\mathfrak{p}i_1(g_1))= 0$ using the commutativity of the diagram \eqref{diag:polarisationOfExntensions}. So $\gamma_2(g_2)^{-1}g\gamma_1(g_1)^{-1}\in \ker(\mathfrak{p}i)=\im(\iota)$, hence there is a $c\in C$ such that $\iota(c)=\gamma_2(g_2)^{-1}g\gamma_1(g_1)^{-1}$. Rearranging this gives the decomposition as claimed. \end{proof} \begin{prop}[Key]\label{lem:key} Every extended polarisation as in \eqref{diag:polarisationOfExntensions} can be completed to a commutative diagram \begin{equation} \label{diag:key} \begin{tikzcd}[label] \epsilon\ar[d,dashed,"\exists"]\ar[:] & 1\ar{r} & C \ar[r,"\iota"]\ar[d,"\zeta"] & G \ar["\mathfrak{p}i"]{r}\ar[d, dashed,"\exists \delta"] & M \ar{r}\ar[d, iso',"\lambda^{-1}"] & 1 \\ \mathop{\mathcal{H}}(\hat\mu)\ar[:] &1\ar{r} & \hat C \ar[r,"{\HeisenbergMono[\hat\mu]}"] & \HH(\hat\mu) \ar["{\HeisenbergEpi[\hat\mu]}"]{r} & L_1\times L_2 \ar{r} & 1 \end{tikzcd} \end{equation} where $\hat\mu$ is defined by \[\begin{tikzcd} M\times M \ar[r,"\omega"] & C \ar[d,"\zeta"] \\ L_1 \times L_2 \ar[r,dashed,"\hat\mu"] \ar[u,mono,"\lambda"] & \hat C \end{tikzcd}\] for the alternating $\mathbb{Z}$-bilinear map $\omega$ from \autoref{lem:alterntingFunctor} when applied to $\epsilon$. \end{prop} \begin{rem} The 4-lemma implies that $\delta$ is injective if and only if $\zeta$ is. In this case, $\HH(\hat\mu)$ is the external central product of $\hat C$ and $G$ amalgamating $C$ along $\iota$ and $\zeta$; in particular, $G$ is isomorphic to a normal subgroup of a Heisenberg group. \end{rem} \begin{rem}\label{rem:keyNonDegeneracy} The central-by-abelian{} extension $\epsilon$ is non-degenerate if and only if $\HH(\hat\mu)$ is non-degenerate. 
\end{rem} \begin{proof} First note that $\hat\mu$ is indeed an alternating $\mathbb{Z}$-bilinear map by \autoref{lem:alterntingFunctor}, so $\HH(\hat \mu)$ is well-defined. We show that $\delta\coloneqq(\delta_1,\delta_2,\delta_3)$ satisfies the statement for $j\in\{1,2\}$ and \[\delta_j\colon G\to L_j,g\mapsto \mathfrak{p}i_j(g_j),\mathfrak{q}uad \delta_3\colon G\to \hat C,g\mapsto \zeta_2(g_2)\zeta(c)\zeta_1(g_1),\] for any decomposition $g=\gamma_2(g_2)\iota(c)\gamma_1(g_1)$ from \autoref{lem:G=G_2CG_1}. The map $\delta_j$ is actually the natural composition of group morphisms $G\xrightarrow{\mathfrak{p}i} M\xrightarrow{\lambda^{-1}} L_1\times L_2\to L_j$, in particular, $\delta_j$ is independent of the choice of the decomposition. To show that $\delta_3$ is independent of the choice of the decomposition, let $\gamma_2(g_2)\iota(c)\gamma_1(g_1)=g=\gamma_2(g_2')\iota(c')\gamma_1(g_1')$. Then on one hand, $\mathfrak{p}i_j(g_j)=\delta_j(g)=\mathfrak{p}i_j(g_j')$ by above, hence by the exactness of $\epsilon_j$, there are $c_j\in C_j$ such that $\iota_1(c_1)=g_1'g_1^{-1}$ and $\iota_2(c_2)=g_2^{-1}g_2'$. On the other hand, using $\iota(C)\subseteq \mathbb{C}enter(G)$, rearranging the original equation gives $\iota(cc'^{-1})=\gamma_2(g_2^{-1}g_2')\gamma_1(g_1'g_1^{-1})= \iota(\kappa_2(c_2)\kappa_1(c_1))$, hence $cc'^{-1}=\kappa_2(c_2)\kappa_1(c_1)$ as $\iota$ is injective. Putting these together gives \begin{align*} \zeta_2(g_2)\zeta(c)\zeta_1(g_1) &=\zeta_2(g_2'\iota_2(c_2)^{-1}) \cdot \zeta(\kappa_2(c_2)c'\kappa_1(c_1)) \cdot \zeta_1(\iota_1(c_1)^{-1}g_1') \\&=\zeta_2(g_2') \cdot ( \zeta_2(\iota_2(c_2))^{-1} \zeta(\kappa_2(c_2)) )\cdot \zeta(c') \cdot ( \zeta(\kappa_1(c_1)) \zeta_1(\iota_1(c_1))^{-1} ) \cdot \zeta_1(g_1') \\&=\zeta_2(g_2')\zeta(c')\zeta_1(g_1') \end{align*} using commutativity of \eqref{diag:polarisationOfExntensions}. Thus $\delta_3$ is indeed well defined. Note that, unlike the other maps, $\delta_3$ is just a map of sets, \emph{not} a group morphism. Its failure to be a group morphism is measured by $\hat\mu$. Indeed, pick decompositions $g=\gamma_2(g_2)\iota(c)\gamma_1(g_1)$ and $g'=\gamma_2(g_2')\iota(c')\gamma_1(g_1')$. Set $x\coloneqq\omega(\mathfrak{p}i(\gamma_1(g_1)),\mathfrak{p}i(\gamma_2(g_2')))\in C$, so $\iota(x)=[\gamma_1(g_1),\gamma_2(g_2')]$. Use this to find a decomposition of the product as \begin{align*} gg'&= \gamma_2(g_2)\iota(c)\gamma_1(g_1)\gamma_2(g_2')\iota(c')\gamma_1(g_1')\\&= \gamma_2(g_2)\gamma_2(g_2')\iota(cc')[\gamma_1(g_1),\gamma_2(g_2')]\gamma_1(g_1)\gamma_1(g_1')\\&= \gamma_2(g_2g_2')\iota(cc'x)\gamma_1(g_1g_1'). \intertext{Then by definitions and using the commutativity of the diagram,} \delta_3(gg') &=\zeta_2(g_2g_2')\zeta(cc'x)\zeta_1(g_1g_1') \\&=\zeta_2(g_2)\zeta(c)\zeta_1(g_1)\cdot \zeta_2(g_2')\zeta(c')\zeta_1(g_1')\cdot \zeta(\omega(\mathfrak{p}i(\gamma_1(g_1)),\mathfrak{p}i(\gamma_2(g_2')))) \\&= \delta_3(g)\delta_3(g')\hat\mu(\mathfrak{p}i_1(g_1),\mathfrak{p}i_2(g_2'))\\&= \delta_3(g)\delta_3(g')\hat\mu(\delta_1(g),\delta_2(g')). \intertext{This property together with \autoref{rem:Hdef} imply that $\delta$ is a group morphism:} \delta(gg')&= (\delta_1(gg'),\delta_2(gg'),\delta_3(gg'))\\&= (\delta_1(g)\delta_1(g'),\delta_2(g)\delta_2(g'),\delta_3(g)\delta_3(g')\hat\mu(\delta_1(g),\delta_2(g')))\\&= (\delta_1(g),\delta_2(g),\delta_3(g))*(\delta_1(g'),\delta_2(g'),\delta_3(g'))\\&= \delta(g)*\delta(g'). \end{align*} We check that diagram \eqref{diag:key} is commutative. 
Indeed, if $c\in C$, then using the decomposition $\iota(c)=\gamma_2(1)\iota(c)\gamma_1(1)$ gives $\delta\circ\iota = (c\mapsto (0,0,\zeta(c))) = \HeisenbergMono[\hat\mu]\circ\zeta$ by definitions. Similarly, the decomposition $g=\gamma_2(g_2)\iota(c)\gamma_1(g_1)\in G$ gives $\lambda^{-1}\circ \mathfrak{p}i=(g\mapsto \lambda^{-1}(\mathfrak{p}i(\gamma_2(g_2)\gamma_1(g_1))) =\delta_1(g)+\delta_2(g))= \HeisenbergEpi[\hat\mu] \circ\delta$. \end{proof} \section{Heisenberg embeddings}\label{sec:HeisenbergEmbedding} \begin{summary} In this section, we put together the pieces from earlier sections to prove \autoref{thm:mainHeisenberg} from \mathfrak{p}ageref{thm:mainHeisenberg}, the main statement of the paper. First, we handle the cyclic centre case by constructing a suitable extended polarisation using the isotropic structure from \autoref{sec:alterntingModules}. For the general case, we take the direct product of the resulting Heisenberg groups (which itself is of this type) and use the first part of \autoref{thm:mainStructure}. \end{summary} We start with an elementary statement. \begin{lem}\label{lem:Zextension} The solid arrows of the following diagram can be completed with suitable dashed arrows making a commutative diagram of groups where $K$ is a finite abelian group and $l=\lcm(m,\exp(K))$. \[\begin{tikzcd} \mathbb{Z}/n\mathbb{Z} \ar[r,mono,"\iota"] \ar[d,"\kappa"] & K \ar[d,dashed,"\exists\mathfrak{p}hi"] \\ \mathbb{Z}/m\mathbb{Z} \ar[r,mono,dashed,"\exists\theta"] & \mathbb{Z}/l\mathbb{Z} \end{tikzcd}\] \end{lem} \begin{proof} First, we prove the case $K=\mathbb{Z}/k\mathbb{Z}$. The map $\kappa$ is defined by some $b\in \mathbb{Z}$ such that $\kappa(1+n\mathbb{Z})=\frac{m}{n}b+m\mathbb{Z}$. Similarly, $\iota$ is given by some $a\in\mathbb{Z}$ with $\iota(1+n\mathbb{Z})=\frac{k}{n}a+k\mathbb{Z}$. Since $\iota$ is injective, we have $\gcd(a,n)=1$, hence we may pick $x\in \mathbb{Z}$ so that $ax\equiv b \mathfrak{p}mod{n}$. Define $\mathfrak{p}hi(i+k\mathbb{Z})\coloneqqi\frac{l}{k}x+l\mathbb{Z}$ and $\theta(i+m\mathbb{Z})\coloneqqi\frac{l}{m}+l\mathbb{Z}$. Short computation shows that $\mathfrak{p}hi\circ\iota = \theta\circ\kappa$ with these definitions. In the general case, $K=\mathfrak{p}rod_{C\in S} C$ for some suitable set $S$ of cyclic subgroups of $K$ of prime power order. For $C\in S$, let $\mathfrak{p}i_C\colon K\to C$ be the natural projection, and write $o(C)\coloneqq|\im(\mathfrak{p}i_C\circ\iota)|$. For every prime $p$ dividing $n$, pick a $C_p\in S$ so that $o(C_p)=\max\{o(C):C\in S,p\bigm| |C|\}$. Define the composition \[\bar\iota\colon \mathbb{Z}/n\mathbb{Z}\xrightarrow{\iota} K\twoheadrightarrow \mathfrak{p}rod_{p\mid n} C_p\cong \mathbb{Z}/d\mathbb{Z}\to \mathbb{Z}/k\mathbb{Z}\] where the second map is $\mathfrak{p}rod_{p\mid n}\mathfrak{p}i_{C_p}$, the isomorphism is given by the Chinese remainder theorem, and the last one is any embedding where $d\bigm| k\coloneqq\exp(K)$. This map is injective, because $|\im(\bar\iota)| = \lcm\{|C_p|:p\mid n\} = \lcm\{o(C):C\in S\} =n $. So replacing $\iota$ by $\bar\iota$ reduces to the special case $K=\mathbb{Z}/k\mathbb{Z}$ discussed above. \end{proof} \begin{rem} Essentially this statement replaces the usage of the second part of \autoref{thm:mainStructure} from \mathfrak{p}ageref{thm:mainStructure} and the central product approach of \cite{phd} and \cite{specialGroups} with a much shorter argument. 
The statement could be extracted when tracking down the behaviour of the centre of the group while taking maximal central products \cite[proof of Proposition~4.2.18]{phd} and the construction of extended polarisation for $2$-generated groups \cite[Lemma~4.2.13]{phd}. More concretely, writing $G=\mathfrak{p}rod_{i=1}^t C_i$ as a product of cyclic groups, we take iteratively the (so-called maximal) central product of $\mathbb{Z}/m\mathbb{Z},C_1,\dots,C_t$, at each step amalgamating the largest possible subgroup compatible with the given maps. This maximality condition ensures that the resulting groups remain cyclic. \end{rem} \begin{prop} \label{thm:maximalCbACyclicCentreEmbedsToHeisenberg} For every finite \nilpotent{2} group $G$ with cyclic centre, there exists a monomorphism \begin{equation}\label{diag:embeddingToH} \begin{tikzcd}[label] \maxCBAfunctor(G)\ar[:] \ar[d,"\exists f",dashed] & 1\ar[r] & \mathbb{C}enter(G) \ar[r,inclusion]\ar[d,mono,"\exists \zeta",dashed] & G \ar[r,epi,"\pi_{\mathcal{Z}}"]\ar[d, mono,"\exists \delta",dashed] & G/\mathbb{C}enter(G) \ar{r}\ar[d, iso',"\exists \nu",dashed] & 1 \\ \mathop{\mathcal{H}}(\hat\mu)\ar[:]&1\ar[r] & \hat C \ar[r,inclusion] & \HH(\hat\mu) \ar[r,"{\HeisenbergEpi[\hat\mu]}"] & A\times A \ar[r] & 1 \end{tikzcd} \end{equation} of non-degenerate central-by-abelian{} extensions for a suitable $\hat\mu\colon A\times A\to \hat C$, where $\hat C$ is cyclic and the divisibility conditions $\exp(A)= |G'| \bigm| |\mathbb{C}enter(G)| \bigm| |\hat C| \bigm|\exp(G)$ hold. \end{prop} \begin{rem} Actually, the Heisenberg group from the statement can be replaced with a canonical one as follows. Define a $\mathbb{Z}$-bilinear map $\nu\colon \Hom(A,\hat C)\times A\to \hat C$ by $(\alpha,a)\mapsto \alpha(a)$, and write $\HH(A,\hat C)\coloneqq\HH(\nu)$ for the corresponding Heisenberg group. Then the map $\HH(\hat\mu)\to \HH(A,\hat C)$ given by $(a,a',c)\mapsto (x\mapsto \hat\mu(a,x), a', c)$ is an isomorphism because $\hat\mu$ is non-degenerate and $\exp(A)\bigm| |\hat C|$. In particular, every $G$ as above is isomorphic to a normal subgroup of $\HH(A, \hat C)$ of index at most $\exp(G)/|\mathbb{C}enter(G)|$. \end{rem} \begin{proof} By \autoref{rem:dictionary}, \autoref{lem:complexStructure} is applicable to $(G/\mathbb{C}enter(G),\omega,\mathbb{C}enter(G))\coloneqq\mathop{\mathcal{A}}(\maxCBAfunctor(G))$ giving an isotropic structure $G/\mathbb{C}enter(G)=L_1\oplus L_2$. Note that $L_2=iL_1\cong L_1$. Since $L_j$ is isotropic, $\pi_{\mathcal{Z}}^{-1}(L_j)$ is abelian by \autoref{rem:dictionary}. Consider the inclusion maps from the following diagram. \[\begin{tikzcd} \mathbb{C}enter(G)\cap \pi_{\mathcal{Z}}^{-1}(L_1) \ar[r,inclusion,"\iota_1"'] \ar[d,inclusion,"\kappa_1"'] & \pi_{\mathcal{Z}}^{-1}(L_1) \ar[d,dashed,"\mathfrak{p}hi_1" near start] \ar[dr,dotted,"\zeta_1"] \\ \mathbb{C}enter(G) \ar[r,mono,dashed,"\theta_1"'] \ar[rr,bend left=20,dotted,"\zeta" near start] & C \ar[r,dotted,mono,"\theta_2"'] & \hat C \\ \mathbb{C}enter(G)\cap \pi_{\mathcal{Z}}^{-1}(L_2) \ar[u,inclusion] \ar[ur,dashed,mono,"\kappa_2"'] \ar[r,inclusion,"\iota_2"'] & \pi_{\mathcal{Z}}^{-1}(L_2) \ar[ur,dotted,"\mathfrak{p}hi_2=\zeta_2"'] \end{tikzcd}\] Applying \autoref{lem:Zextension} to the inclusions $\iota_1$ and $\kappa_1$ gives a cyclic group $C$ with morphisms $\mathfrak{p}hi_1$ and $\theta_1$. Then we can apply \autoref{lem:Zextension} to the inclusion $\iota_2$ and the composition $\kappa_2$, giving another cyclic group $\hat C$ with morphisms $\mathfrak{p}hi_2$ and $\theta_2$.
By construction, $\theta_1$ and $\theta_2$ are both injective, hence so is $\zeta\coloneqq\theta_2\circ \theta_1$. This diagram is commutative by construction. Set $\zeta_1\coloneqq\theta_2\circ \mathfrak{p}hi_1$ and $\zeta_2\coloneqq\mathfrak{p}hi_2$. The maps above give the following extended polarisation of $\maxCBAfunctor(G)$ \[\begin{tikzcd}[label] \epsilon_j\ar[:] & 1\ar{r} & \mathbb{C}enter(G)\cap \pi_{\mathcal{Z}}^{-1}(L_j) \ar[r,inclusion]\ar[d,inclusion] & \pi_{\mathcal{Z}}^{-1}(L_j) \ar[r,"\mathfrak{p}i_j",dashed]\ar[d,inclusion] \ar[ddl,bend left=10,"\zeta_j" near end]& L_j\ar[r]\ar[d,mono,inclusion] & 1 \\ \maxCBAfunctor(G)\ar[:]& 1\ar{r} & \mathbb{C}enter(G) \ar[r,inclusion] \ar[d,"\zeta",mono] & G \ar["\pi_{\mathcal{Z}}"]{r} & G/\mathbb{C}enter(G)\ar[d,identity] \ar{r} & 1 \\ & & \hat C & & L_1\times L_2 \end{tikzcd}\] where $\mathfrak{p}i_j\coloneqq\pi_{\mathcal{Z}}|_{\pi_{\mathcal{Z}}^{-1}(L_j)}$. Then \autoref{lem:key} gives the diagram \eqref{diag:embeddingToH} upon setting $A\coloneqqL_1\cong L_2$. Finally, we check that the stated properties hold. Note that the injectivity of $\zeta$ together with the 4-lemma implies the injectivity of $\delta$. $\exp(A)=\exp(L_1\times L_2)=\exp(G/\mathbb{C}enter(G))=|(G')|$ follows from \autoref{cor:decompositionCyclicDeriverGroup}. Since $G$ is of nilpotency class at most $2$, $G'\subseteq \mathbb{C}enter(G)$ implies $|G'|\bigm| |\mathbb{C}enter(G)|$. By \autoref{lem:Zextension}, $|\hat C|$ is the least common multiple of $|\mathbb{C}enter(G)|$, $\exp(\pi_{\mathcal{Z}}^{-1}(L_1))$ and $\exp(\pi_{\mathcal{Z}}^{-1}(L_2))$, hence $|\hat C| \bigm| \exp(G)$ as stated. \end{proof} \begin{thm}\label{thm:embedToH} For every finite \nilpotent{2} group $G$, there exists non-degenerate maps $\mu\colon A\times B\to C$, $\mu_i\colon A_i\times A_i\to C_i$ for $1\leq i\leq d(\mathbb{C}enter(G))$ and \[\begin{tikzcd}[label] \maxCBAfunctor(G)\ar[:] \ar[d,"\exists f",mono,dashed]& 1\ar[r] & \mathbb{C}enter(G) \ar[r,central,inclusion]\ar[d,mono,"\exists \zeta",dashed] & G \ar[r,epi,"\pi_{\mathcal{Z}}"]\ar[d, mono,"\exists \delta",dashed] & G/\mathbb{C}enter(G) \ar{r}\ar[d,mono,"\exists \nu",dashed] & 1 \\ \mathop{\mathcal{H}}(\mu)\ar[:] \ar[d,dashed,mono,"\exists"] & 1\ar[r] & C \ar[r,inclusion] \ar[d,dashed,inclusion,"\exists"']& \HH(\mu) \ar[r,"{\HeisenbergEpi[\mu]}"] \ar[d,dashed,inclusion,"\exists"'] & A\times B \ar[r] \ar[d,dashed,mono,"\exists"] & 1 \\ \mathfrak{p}rod_i\mathop{\mathcal{H}}(\mu_i)\ar[:] & 1\ar[r] & \mathfrak{p}rod_i C_i \ar[r,inclusion] & \mathfrak{p}rod_i \HH(\mu_i) \ar[r,"{\mathfrak{p}rod_i \HeisenbergEpi[\mu_i]}"] & \mathfrak{p}rod_i A_i\times A_i \ar[r] & 1 \end{tikzcd}\] such that any prime divisor of the order of any group above also divides $|G|$, and \begin{alignat*}{4} d(G/\mathbb{C}enter(G))&\geq d(A), d(B), d(A_i^2),& d(\mathbb{C}enter(G))&=d(C)\geq d(C_i)\leq 1, \\ \exp(G/\mathbb{C}enter(G))&\bigm| \exp(A\times B) \bigm| \exp(G') \bigm| &\;\exp(\mathbb{C}enter(G)) &\bigm| \exp(C)\bigm| \exp(G). \end{alignat*} \end{thm} \begin{rem} The monomorphism $\zeta$ shows that $d(C)$ is as small as possible. On the other hand, $\nu$ gives $d(G/\mathbb{C}enter(G))\leq d(A\times B)\leq 2 d(G/\mathbb{C}enter(G))$. The lower bound is attained in the $d(C)\leq 1$ case, see \autoref{thm:maximalCbACyclicCentreEmbedsToHeisenberg}. It is a natural question to ask for the smallest possible value of $d(A\times B)$ in general. 
To give a better upper bound than above, one may need to develop some version of \autoref{lem:alternatingSmith} for matrices with entries from $\mathbb{Z}^n$. \end{rem} \begin{proof} Using \autoref{prop:subdirectProductFinite}, write $\mathfrak{p}hi\colon G\rightarrowtail \mathfrak{p}rod_{i=1}^n G_i$ as a subdirect product where $n\coloneqqd(\mathbb{C}enter(G))$ and each $\mathbb{C}enter(G_i)$ is cyclic. Then $G_i$ is \nilpotent{2} as this class is closed under taking quotients, so \autoref{thm:maximalCbACyclicCentreEmbedsToHeisenberg} gives $f_i=(\zeta_i,\delta_i,\nu_i)\colon \maxCBAfunctor(G_i)\to \mathop{\mathcal{H}}(\mu_i)$ for some $\mu_i\colon A_i\times B_i\to C_i$ (where $B_i=A_i$). Let $\bar A\coloneqq\mathfrak{p}rod_{i=1}^n A_i$, $\bar B\coloneqq\mathfrak{p}rod_{i=1}^n B_i$, $\bar C\coloneqq\mathfrak{p}rod_{i=1}^n C_i$ and $\bar \mu\coloneqq\mathfrak{p}rod_{i=1}^n \mu_i\colon \bar A\times \bar B\to \bar C$, a non-degenerate $\mathbb{Z}$-bilinear map by construction. We obtain the following diagram of central-by-abelian{} extensions where $\mathfrak{p}rod_i$ is a shorthand for $\mathfrak{p}rod_{i=1}^n$. \[\begin{tikzcd}[label] \maxCBAfunctor(G)\ar[:] \ar[d,mono,"\text{\autoref{prop:subdirectProductFinite}}"'] & 1\ar[r] & \mathbb{C}enter(G) \ar[r,inclusion]\ar[d,mono,"\mathfrak{p}hi|_{\mathbb{C}enter(G)}"] & G \ar[r,epi,"\pi_{\mathcal{Z}}"]\ar[d, mono,"\mathfrak{p}hi"] & G/\mathbb{C}enter(G) \ar{r}\ar[d, mono, "{[\mathfrak{p}hi]}"] & 1 \\ \mathfrak{p}rod_i\maxCBAfunctor(G_i)\ar[:] \ar[d,mono,"\mathfrak{p}rod f_i","\text{\autoref{thm:maximalCbACyclicCentreEmbedsToHeisenberg}}"'] & 1\ar[r] & \mathfrak{p}rod_i\mathbb{C}enter(G_i) \ar[r,inclusion]\ar[d,mono,"\mathfrak{p}rod \zeta_i"] & \mathfrak{p}rod_i G_i \ar[r,epi,"\mathfrak{p}rod \pi_{\mathcal{Z}}"]\ar[d, mono,"\mathfrak{p}rod \delta_i"] & \mathfrak{p}rod_i G_i/\mathbb{C}enter(G_i) \ar{r}\ar[d, iso',"\mathfrak{p}rod \nu_i"] & 1 \\ \mathfrak{p}rod_i\mathop{\mathcal{H}}(\mu_i)\ar[:] \ar[d,iso] & 1\ar[r] & \mathfrak{p}rod_i C_i \ar[r,inclusion] \ar[d,identity] & \mathfrak{p}rod_i \HH(\mu_i) \ar[r,"{\mathfrak{p}rod_i \HeisenbergEpi[\mu_i]}"]\ar[d,iso] & \mathfrak{p}rod_i A_i\times B_i \ar[r] \ar[d,iso] & 1 \\ \mathop{\mathcal{H}}(\bar\mu)\ar[:] & 1\ar[r]& \bar C \ar[r,inclusion] & \HH(\bar\mu) \ar[r,"{\HeisenbergEpi[\bar\mu]}"] & \bar A\times \bar B \ar[r] & 1 \end{tikzcd}\] Denote by $\bar f=(\bar \zeta,\bar \delta,\bar \nu)\colon \maxCBAfunctor(G)\to \mathop{\mathcal{H}}(\bar \mu)$ the resulting monomorphism. This may have more generators than stated, so we take a suitable subobject of $\mathop{\mathcal{H}}(\bar\mu)$. Let $A\leq \bar A$ be the image of $G/\mathbb{C}enter(G)\xrightarrow{\bar \nu} \bar A\times \bar B\to \bar A$, and $B\leq\bar B$ be that of $G/\mathbb{C}enter(G)\xrightarrow{\bar \nu} \bar A\times \bar B\to \bar B$. Then $d(A)$ and $d(B)$ are at most $d(G/\mathbb{C}enter(G))$. Let $C\coloneqq\generate{\bar\zeta(\mathbb{C}enter(G)), \bar\mu(A,B)}\leq \bar C$. Then $d(\mathbb{C}enter(G)) = d(\bar\zeta(\mathbb{C}enter(G))) \leq d(C) \leq d(\bar C)=\sum_{i=1}^n d(C_i)\leq n=d(\mathbb{C}enter(G))$, hence comparing the two ends give $d(C)=d(\mathbb{C}enter(G))$. Define $\mu\colon A\times B\to C, (a,b)\mapsto \bar\mu(a,b)$, an abelian bihomomorphism. The image of $\bar f$ lies in $\mathop{\mathcal{H}}(\mu)$ by definition, so restricting the domain to $\mathop{\mathcal{H}}(\mu)$ gives a map $f=(\zeta,\delta,\nu)\colon \maxCBAfunctor(G)\to \mathop{\mathcal{H}}(\mu)$, i.e. 
$\bar f = (\mathop{\mathcal{H}}(\mu)\rightarrowtail\mathop{\mathcal{H}}(\bar\mu)) \circ f$ for the natural inclusion map. We show that this $f$ satisfies the statement. We check that $\mu$ is non-degenerate. Pick $0\neq a\in A$ and write $a=(a_1,\dots,a_n)\in \mathfrak{p}rod_{i=1}^n A_i$. Then without loss of generality, $a_1\neq 0$. Then by the non-degeneracy of $\mu_1$, there is a $b_1'\in B_1$ such that $0\neq \mu_1(a_1,b_1')\in C_1$. By the diagram above, there is $g_1'\in G_1$ such that $\nu_1(g_1'\mathbb{C}enter(G_1))=(0,b_1')$. As $\mathfrak{p}hi$ is a subdirect product, there is $g'\in G$ such that the first factor of $[\mathfrak{p}hi](g'\mathbb{C}enter(G))$ is $g_1'\mathbb{C}enter(G_1)$. Write $b'=(b_1',\dots,b_n')$ for the image of $g'\mathbb{C}enter(G)$ under $G/\mathbb{C}enter(G)\xrightarrow{\bar \nu} \bar A\times \bar B\to \bar B$. By construction, its first coordinate coincides with the above choice of $b_1'$. By definition, $b'\in B$, and $\mu(a,b') = (\mu_1(a_1,b_1'),\dots,\mu_n(a_n,b_n'))\neq 0$ as the first factor is non-trivial by construction. This argument remains valid when the roles of $A$ and $B$ are swapped, hence $\mu$ is non-degenerate. Assume that $G$ is finite and consider the statement on the exponents. $\exp(G/\mathbb{C}enter(G))\bigm| \exp(A\times B)$ follows from $\nu$ being a monomorphism of abelian groups. For every $i$, $\exp(A_i\times B_i)=\exp(G_i') \bigm| \exp(G')$ using \autoref{thm:maximalCbACyclicCentreEmbedsToHeisenberg} and the fact that $G_i$ is a quotient of $G$. Thus $\exp(A\times B)\bigm| \exp(\bar A\times \bar B)=\lcm\{\exp(A_i\times B_i):1\leq i\leq n\}\bigm| \exp(G')$. Since $G$ is \nilpotent{2}, we have $G'\subseteq \mathbb{C}enter(G)$, so $\exp(G')\bigm| \exp(\mathbb{C}enter(G))$. The embedding $\zeta\colon \mathbb{C}enter(G)\rightarrowtail C$ shows $\exp(\mathbb{C}enter(G))\bigm| \exp(C)$. Once again using \autoref{thm:maximalCbACyclicCentreEmbedsToHeisenberg}, we see that $\exp(C_i)\bigm| \exp(G_i)\bigm| \exp(G)$ as $G_i$ is a quotient of $G$. Then $\exp(C)\bigm| \exp(\bar C)=\lcm\{\exp(C_i):1\leq i\leq n\}\bigm| \exp(G)$ as stated. Finally, by construction $A_i$ and $C_i$ were obtained from $G$ by taking quotients and subgroups, so no new prime divisor was introduced. \end{proof} \printbibliography \noindent \textsc{Alfréd Rényi Institute of Mathematics}, Reáltanoda u. 13–15, H–1053, Budapest, Hungary\\ E-mail address: \email{[email protected]} \end{document}
\begin{document} \title{Detection of Gene-Gene Interactions by Multistage\\ Sparse and Low-Rank Regression} \date{\empty} \author{Hung~Hung$^a$, Yu-Tin Lin$^b$, Pengwen~Chen$^c$, Chen-Chien Wang$^d$\\ Su-Yun~Huang$^b$, and Jung-Ying Tzeng$^e$\footnote{To whom correspondence should be addressed. E-mail: \textit{[email protected]}}\\[2ex] $^a$Institute of Epidemiology \& Preventive Medicine\\[-1ex] National Taiwan University\\[1ex] $^b$Institute of Statistical Science, Academia Sinica\\[1ex] $^c$Department of Applied Mathematics, National Chung Hsing University\\[1ex] $^d$Department of Computer Science, New York University\\[1ex] $^e$Department of Statistics and Bioinformatics Research Center\\[-1ex] North Carolina State University} \maketitle \begin{abstract} A daunting challenge faced by modern biological sciences is finding an efficient and computationally feasible approach to deal with the curse of high dimensionality. The problem becomes even more severe when the research focus is on interactions. To improve the performance, we propose a low-rank interaction model, where the interaction effects are modeled using a low-rank matrix. With parsimonious parameterization of interactions, the proposed model increases the stability and efficiency of statistical analysis. Built upon the low-rank model, we further propose an Extended Screen-and-Clean approach, based on the Screen and Clean (SC) method (Wasserman and Roeder, 2009; Wu \textit{et al}., 2010), to detect gene-gene interactions. In particular, the screening stage utilizes a combination of a low-rank structure and a sparsity constraint in order to achieve higher power and higher selection-consistency probability. We demonstrate the effectiveness of the method using simulations and apply the proposed procedure on the warfarin dosage study. The data analysis identified main and interaction effects that would have been neglected using conventional methods.\\ \end{abstract} \section{Introduction}\label{sec.1} Modern biological researches deal with high-throughput data and encounter the curse of high-dimensionality. The problem is further exacerbated when the question of interest focuses on gene-gene interactions (G$\times$G). Due to the extremely high-dimensionality for modeling G$\times$G, many G$\times$G methods are multi-staged in nature that rely on a screening step to reduce the number of loci (Cordell 2009; Wu et al. 2010). Joint screening based on the multi-locus model with all main effect and interactions terms is preferred over marginal screening based on single-locus tests --- it improves the ability to identify loci that interact with each other but exhibit little marginal effect (Wan et al. 2010) and improves the overall screening performance by reducing the unexplained variance in the model (Wu et al. 2010). However, joint screening imposes statistical and computational challenges due to the ultra-large number of variables. To tackle this problem, one promising method that has good results is the Screen and Clean (SC) procedure (Wasserman and Roeder, 2009; Wu et al. 2010). The SC procedure first uses Lasso to pre-screen candidate loci where only main effects are considered. Next, the expanded covariates are constructed to include the selected loci and their corresponding pairwise interactions, and another Lasso is applied to identity important terms. 
Finally, in the cleaning stage with an independent data set, the effects of the selected terms are estimated by the least squares estimation (LSE) method, and those terms that pass the $t$-test cleaning are identified to form the final model. A crucial component of the SC procedure is the Lasso step in the screening process for interactions. Let $Y$ be the response of interest and $G=(g_1,\cdots,g_p)^T$ be the genotypes at the $p$ loci. A typical model, which is also the model considered in SC, for G$\times$G detection is \begin{eqnarray} E(Y|G)= \gamma +\sum_{j=1}^p\xi_j\cdot g_j+ \sum_{j< k}\eta_{jk}\cdot (g_j\,g_k), \label{model_full0} \end{eqnarray} where $\xi_j$ is the main effect of the $j^{\rm th}$ locus, and $\eta_{jk}$, $j< k$, is the G$\times$G effect corresponding to the $j^{\rm th}$ and $k^{\rm th}$ loci. The Lasso step of SC then fits model~(\ref{model_full0}) to reduce the model size from \begin{eqnarray} m_p=1+p+{p\choose2}\label{mp} \end{eqnarray} to a number smaller than the sample size $n$, so that the validity of the subsequent LSE cleaning can be guaranteed. The performance of Lasso is known to depend on the number of parameters involved, $m_p$, and the available sample size $n$. Although Lasso has been verified to perform well for large $m_p$, caution should be used when $m_p$ is ultra-large, such as of the order of $\exp\{O(n^\delta)\}$ for some $\delta>0$ (Fan and Lv, 2008). In addition, the $m_p$ encountered in modern biomedical studies is usually much larger than $n$ even for a moderate $p$. In this situation, statistical inferences can become unstable and inefficient, which would impact the screening performance and consequently affect the selection-consistency of the SC procedure or reduce the power of the $t$-test cleaning. To improve the exhaustive screening involving all main and interaction terms, we consider a reduced model by utilizing the matrix nature of the interaction terms. Observing in model~(\ref{model_full0}) that $(g_j\,g_k)$ is the $(j,k)^{\rm th}$ element of the symmetric matrix ${\boldsymbol J}=GG^T$, it is natural to treat $\eta_{jk}$ as the $(j,k)^{\rm th}$ entry of the symmetric matrix ${\boldsymbol \eta}$, which leads to an equivalent expression of model~(\ref{model_full0}) as \begin{eqnarray} E(Y|G) =\gamma+\xi^T G +{\rm vec}p({\boldsymbol \eta})^T {\rm vec}p({\boldsymbol J}),\label{model_full} \end{eqnarray} where $\xi=(\xi_1,\ldots,\xi_p)^T$ and ${\rm vec}p(\cdot)$ denotes the operator that stacks the lower half (excluding diagonals) of a symmetric matrix columnwisely into a long vector. With the model expression~(\ref{model_full}), we can utilize the structure of the symmetric matrix ${\boldsymbol \eta}$ to improve the inference procedure. Specifically, we posit the following condition on the interaction parameters: \begin{equation}\label{sparse_LR} {\boldsymbol \eta}: \mbox{being sparse and low-rank}. \end{equation} Condition~(\ref{sparse_LR}) is typically satisfied in modern biomedical research. First, in a G$\times$G scan, it is reasonable to assume that most elements of ${\boldsymbol \eta}$ are zero because only a small portion of the terms are related to the response $Y$. This sparsity assumption is also the underlying rationale for applying Lasso for variable selection in conventional approaches (e.g., Wu's SC procedure). Second, if the elements of ${\boldsymbol \eta}$ are sparse, the matrix ${\boldsymbol \eta}$ is also likely to be low-rank.
Displayed below is an example of ${\boldsymbol \eta}$ with $p=10$ that contains three pairs of non-zero interactions, and hence has rank 3 only: \begin{equation}\label{eta_structure} {\boldsymbol \eta}=\left[\begin{array}{cc}\begin{array}{ccc} 0& \bigstar &\spadesuit\\ \bigstar&0&\blacklozenge\\ \spadesuit&\blacklozenge&0\\ \end{array}&{\bm 0}_{3\times 7} \\ {\bm 0}_{7\times 3}&{\bm 0}_{7\times7}\end{array}\right]. \end{equation} One key characteristic in our proposed method is the consideration of the sparse and low-rank condition~(\ref{sparse_LR}), which allows us to express ${\boldsymbol \eta}$ with much fewer parameters. In contrast, Lasso does not utilize the matrix structure but only assumes the sparsity of ${\boldsymbol \eta}$ and, hence, still involves ${p\choose2}$ parameters in ${\boldsymbol \eta}$. From a statistical viewpoint, parsimonious parameterizations can improve the efficiency of model inferences. Our aims of this work are thus twofold. First, {using model~(\ref{model_full}) and condition (\ref{sparse_LR}), we propose an efficient screening procedure referred} to as the sparse and low-rank screening (SLR-screening). Second, {we demonstrate how the SLR-screening can be incorporated into existing multi-stage GxG methods to enhance the power and selection-consistency. Based on the promise of the SC procedure, we illustrate the concept by proposing the {\rm Extended Screen-and-Clean~} (ESC) procedure, which replaces the Lasso screening with SLR-screening in the standard SC procedure.} Some notation is defined here for reference. Let $\{(Y_i,G_i)\}_{i=1}^n$ be random copies of $(Y,G)$, and let ${\boldsymbol J}_i=G_iG_i^T$. Let $\mathcal{Y}b=(Y_1,\cdots,Y_n)^T$ be an $n$-vector of observed responses, and let ${\bm X}}\def\Wb{{\bm W}=[X_1,\cdots,X_n]^T$ be the design matrix with $X_i= [1, G_i^T, {\rm vec}p({\boldsymbol J}_i)^T ]^T$. For any square matrix ${\boldsymbol M}$, ${\boldsymbol M}^-$ is its Moore-Penrose generalized inverse. ${\rm vec}(\cdot)$ is the operator that stacks a matrix columnwisely into a long vector. ${\boldsymbol K}_{p,k}$ is the commutation matrix such that ${\boldsymbol K}_{p,k}\, {\rm vec}({\boldsymbol M}) = {\rm vec}({\boldsymbol M}^T)$ for any $p\times k$ matrix ${\boldsymbol M}$ (Henderson and Searle, 1979; Magnus and Neudecker, 1979). ${\boldsymbol P}$ is the matrix satisfying ${\boldsymbol P}{\rm vec}({\boldsymbol M}) = {\rm vec}p({\boldsymbol M})$ for any $p\times p$ symmetric matrix ${\boldsymbol M}$. ${\boldsymbol P}$ can be chosen such that ${\boldsymbol P}{\boldsymbol K}_{p,p}={\boldsymbol P}$. For a vector, $\|\cdot\|$ is its Euclidean norm (2-norm), and $\|\cdot\|_1$ is its 1-norm. For a set, $|\cdot|$ denotes its cardinality. \section{Inference Procedure for Low-Rank Model}\label{sec.low.rank} \subsection{Model specification and estimation}\label{low.rank.fit} To incorporate the low-rank property (\ref{sparse_LR}) into model building, for a pre-specified positive integer $r\le p$, we consider the {following} rank-$r$ model \begin{equation} E(Y|G) = \gamma + \xi^T G +{\rm vec}p({\boldsymbol \eta})^T {\rm vec}p({\boldsymbol J}),\quad{\rm rank}({\boldsymbol \eta})\le r.\label{model} \end{equation} Although the above low-rank model expression is straightforward, it is not convenient for numerical implementation. In view of this point, we adopt an equivalent parameterization ${\boldsymbol \eta}(\phi)$ for ${\boldsymbol \eta}$ that directly satisfies the constraint rank$({\boldsymbol \eta})\le r$. 
For the case of the minimum rank $r=1$ (the rank-1 model), we use the parameterization \begin{equation} {\boldsymbol \eta}(\phi)=u\alpha\alpha^T,\quad\phi=(\alpha^T,u)^T,\quad\alpha\in \mathbb{R}^p, \quad u\in \mathbb{R}. \label{phi_rank1} \end{equation} For the case of higher rank, we consider the parameterization \begin{eqnarray} {\boldsymbol \eta}(\phi) = {\boldsymbol A}{\boldsymbol B}^T + {\boldsymbol B}{\boldsymbol A}^T,\quad \phi={\rm vec}({\boldsymbol A},{\boldsymbol B})^T,\quad {\boldsymbol A},{\boldsymbol B}\in \mathbb{R}^{p\times k}, \label{phi_AB} \end{eqnarray} which gives $r=2k$ (the rank-$2k$ model), since the maximum rank attainable by ${\boldsymbol \eta}(\phi)$ in (\ref{phi_AB}) is $2k$. Note that in either of the cases (\ref{phi_rank1}) or (\ref{phi_AB}), the number of parameters required for the interactions ${\boldsymbol \eta}(\phi)$ can be much smaller than ${p\choose2}$. See Remark~\ref{rmk.over} for details. Thus, when model~(\ref{model}) is true, standard MLE arguments show that statistical inference based on model~(\ref{model}) is the most efficient. Even if model~(\ref{model}) is incorrectly specified, the low-rank model is still preferable when the sample size is small. In this situation, model~(\ref{model}) provides a good ``working'' model. It trades off the model approximation bias against the efficiency of parameter estimation. With a limited sample size, instead of unstably estimating the full model, it is preferable to more efficiently estimate the approximating low-rank model. As will be shown later, a low-rank approximation of ${\boldsymbol \eta}$ with parsimonious parameterization suffices to more efficiently screen out relevant interactions. Let the parameters of interest in the rank-$r$ model~(\ref{model}) be \begin{equation}\label{b_theta} \beta({\rm th}eta)=\left[\gamma,\xi^T,{\rm vec}p\{{\boldsymbol \eta}(\phi)\}^T\right]^T\quad{\rm with}\quad{\rm th}eta=\left(\gamma,\xi^T,\phi^T\right)^T, \end{equation} which consists of the intercept, main effects, and interactions. Under model~(\ref{model}) and assuming i.i.d. errors from a normal distribution $N(0,\sigma^2)$, the log-likelihood function (apart from a constant term) is derived to be \begin{eqnarray} \ell({\rm th}eta)=-\frac{1}{2}\sum_{i=1}^n\left\{Y_i-\gamma -\xi^T G_i - {\rm vec}p\{{\boldsymbol \eta}(\phi)\}^T{\rm vec}p({\boldsymbol J}_i)\right\}^2=-\frac{1}{2}\|\mathcal{Y}b-{\bm X}}\def\Wb{{\bm W} \beta({\rm th}eta)\|^2. \end{eqnarray} To further stabilize the maximum likelihood estimation (MLE), a common approach is to append a penalty on ${\rm th}eta$ to the log-likelihood function. We then propose to estimate ${\rm th}eta$ through maximizing the penalized log-likelihood function \begin{align} \ell_{\lambda_\ell}({\rm th}eta) = \ell({\rm th}eta)- \frac{\lambda_{\ell}}2\, \| {\rm th}eta \|^2,\label{criterion} \end{align} where $\lambda_{\ell}$ is the penalty (the subscript $\ell$ is for low-rank). Denote the penalized MLE as \begin{eqnarray} \widehat{\rm th}eta_{\lambda_{\ell}}= \left(\widehat\gamma_{\lambda_{\ell}},\, \widehat\xi_{\lambda_{\ell}},\, \widehat\phi_{\lambda_\ell}^T\right)^T =\mathop{\rm argmax}_{{\rm th}eta}\,\ell_{\lambda_\ell}({\rm th}eta).\label{mle.theta} \end{eqnarray} The parameters of interest $\beta({\rm th}eta)$ are then estimated by \begin{eqnarray} \widehat\beta_{\lambda_\ell}=\beta(\widehat{\rm th}eta_{\lambda_{\ell}}),\label{est.beta} \end{eqnarray} on which subsequent analysis for main and G$\times$G effects can be based.
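To give a sense of the resulting parsimony (the following counts are purely illustrative and follow directly from (\ref{mp}) and (\ref{phi_AB})): with $p=1000$ loci, the full model~(\ref{model_full0}) involves $m_p=1+1000+{1000\choose 2}=500501$ regression parameters, whereas the rank-$2$ working model, i.e., model~(\ref{model}) with parameterization (\ref{phi_AB}) and $k=1$, involves only $1+1000+2\times 1000=3001$ parameters in ${\rm th}eta$; see also Remark~\ref{rmk.over} below.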
In practical implementation, we use $K$-fold cross-validation ($K=10$ in this work) to select $\lambda_\ell$. \begin{rmk}\label{rmk.over} We only need $pr-r^2/2+r/2$ parameters to specify a $p\times p$ rank-$r$ symmetric matrix, and the number of parameters required for model~(\ref{model}) is \begin{eqnarray} d_r=1+p+(pr-r^2/2+r/2).\label{dr} \end{eqnarray} The parameterizations (\ref{phi_rank1}) and (\ref{phi_AB}) involve more parameters than this minimal count and are therefore not identifiable without additional constraints on $\phi$. However, adding constraints makes no difference to our inference procedures, but only increases the difficulty in computation. For convenience, we keep this simple usage of $\phi$ without imposing any identifiability constraint. \end{rmk} \subsection{Implementation algorithm} \subsubsection{The case of rank-1 model} For the rank-1 model ${\boldsymbol \eta}(\phi)=u\alpha\alpha^T$, it suffices to maximize (\ref{criterion}) using Newton's method under both $u=+1$ and $u=-1$. The one from $u=\pm1$ with the larger value of the penalized log-likelihood will be used as the estimate of ${\rm th}eta$. For any fixed $u$, maximizing (\ref{criterion}) is equivalent to the minimization problem: \begin{align} \min_{{\rm th}eta_u}\, \frac{1}{2} \left\|\mathcal{Y}b - {\bm X}}\def\Wb{{\bm W}_u \beta_u({\rm th}eta_u)\right\|^2 + \frac{\lambda_{\ell}}{2} \big\| {\rm th}eta_u\big\|^2,\label{criterion.rank1} \end{align} where ${\bm X}}\def\Wb{{\bm W}_u = [ X_{u1}, \cdots, X_{un}]^T$ with $X_{ui} = [1, G_i^T, u\cdot{\rm vec}p({\boldsymbol J}_i)^T]^T$ being the design matrix, and $\beta_u({\rm th}eta_u)=\{\gamma , \xi^T,{\rm vec}p(\alpha\alpha^T)^T\}^T$ with ${\rm th}eta_u = (\gamma , \xi^T, \alpha^T)^T$. Define \begin{eqnarray*} {\boldsymbol W}_u({\rm th}eta_u)={\bm X}}\def\Wb{{\bm W}_u\,\frac{\partial \beta_u({\rm th}eta_u)}{\partial{\rm th}eta_u}\quad{\rm with}\quad \frac{\partial \beta_u({\rm th}eta_u)}{\partial{\rm th}eta_u}= \left[\begin{array}{cc} {\boldsymbol I}_{p+1}&{\bm 0}\\ {\bm 0}& 2{\boldsymbol P}(\alpha\otimes{\boldsymbol I}_p) \end{array}\right]. \end{eqnarray*} The gradient and Hessian matrix (ignoring the zero expectation term) of (\ref{criterion.rank1}) are \begin{eqnarray*} g_u({\rm th}eta_u)&=&-\{{\boldsymbol W}_u({\rm th}eta_u)\}^T\{\mathcal{Y}b-{\bm X}}\def\Wb{{\bm W}_u \beta_u({\rm th}eta_u)\}+\lambda_{\ell}{\rm th}eta_u,\\ {\boldsymbol H}_u({\rm th}eta_u)&=&\{{\boldsymbol W}_u({\rm th}eta_u)\}^T\{{\boldsymbol W}_u({\rm th}eta_u)\}+\lambda_{\ell} {\boldsymbol I}_{2p+1}. \end{eqnarray*} Then, given an initial ${\rm th}eta_u^{(0)}$, the minimizer $\widehat{\rm th}eta_u$ of (\ref{criterion.rank1}) can be obtained through the iteration \begin{eqnarray} {\rm th}eta_u^{(t+1)}={\rm th}eta_u^{(t)}-\left\{{\boldsymbol H}_u({\rm th}eta_u^{(t)}) \right\}^{-1}g_u({\rm th}eta_u^{(t)}),~t=0,1,2,\ldots,\label{newton} \end{eqnarray} until convergence, and output $\widehat{\rm th}eta_u={\rm th}eta_u^{(t+1)}$. Let $u^*$ correspond to the optimal $u$ from $u=\pm1$. The final estimate is defined to be $\widehat{\rm th}eta_{\lambda_\ell}=(\widehat{\rm th}eta_{u^*}^T,u^*)^T$. \subsubsection{The case of rank-$\bm{2k}$ model} When ${\boldsymbol \eta}(\phi)={\boldsymbol A}{\boldsymbol B}^T+{\boldsymbol B}{\boldsymbol A}^T$, we use the alternating least squares (ALS) method to maximize (\ref{criterion}). By fixing one of ${\boldsymbol A}$ and ${\boldsymbol B}$, the problem of solving for the other becomes a standard penalized least squares problem.
This can be seen from \begin{align*} {\rm vec}p({\boldsymbol A}{\boldsymbol B}^T + {\boldsymbol B}{\boldsymbol A}^T) &= 2{\boldsymbol P} {\rm vec}({\boldsymbol A}{\boldsymbol B}^T) = 2{\boldsymbol P}({\boldsymbol B} \otimes {\boldsymbol I}_p) {\rm vec}({\boldsymbol A}), \end{align*} where the second equality holds by ${\boldsymbol P}{\boldsymbol K}_{p,p}={\boldsymbol P}$. Hence, maximizing (\ref{criterion}) with fixed ${\boldsymbol B}$ is equivalent to the minimization problem: \begin{align} \min_{ {\rm th}eta_{\boldsymbol B}} \frac{1}{2} \left\|\mathcal{Y}b - {\bm X}}\def\Wb{{\bm W}_{\boldsymbol B} {\rm th}eta_{\boldsymbol B}\right\|^2 + \frac{\lambda_{\ell}}{2} \big\| {\rm th}eta_{\boldsymbol B}\big\|^2,\label{max.B} \end{align} where ${\bm X}}\def\Wb{{\bm W}_{\boldsymbol B} = [ X_{{\boldsymbol B} 1}, \cdots, X_{{\boldsymbol B} n}]^T$ with $X_{{\boldsymbol B} i} = [1, G_i^T, 2{\rm vec}p({\boldsymbol J}_i)^T {\boldsymbol P}({\boldsymbol B} \otimes {\boldsymbol I}_p)]^T$ being the design matrix when ${\boldsymbol B}$ is fixed, and ${\rm th}eta_{\boldsymbol B} =\left[\gamma , \xi^T, {\rm vec}({\boldsymbol A})^T\right]^T$. It can be seen that (\ref{max.B}) is the penalized least squares problem with data design matrix ${\bm X}}\def\Wb{{\bm W}_{\boldsymbol B}$ and parameters ${\rm th}eta_{\boldsymbol B}$, which is solved by \begin{equation} \widehat{{\rm th}eta}_{\boldsymbol B} = \left( {\bm X}}\def\Wb{{\bm W}^T_{\boldsymbol B} {\bm X}}\def\Wb{{\bm W}_{\boldsymbol B} + \lambda_{\ell} {\boldsymbol I}_{1+p+pk}\right)^{-1} {\bm X}}\def\Wb{{\bm W}_{\boldsymbol B}^T \mathcal{Y}b. \label{fixB} \end{equation} Similarly, the maximization problem with fixed ${\boldsymbol A}$ is equivalent to the minimization problem \begin{align*} \min_{{\rm th}eta_{\boldsymbol A}}\, \frac{1}{2} \left\|\mathcal{Y}b - { {\bm X}}\def\Wb{{\bm W}}_{\boldsymbol A} {\rm th}eta_{\boldsymbol A}\right\|^2 + \frac{\lambda_{\ell}}{2} \big\| {\rm th}eta_{\boldsymbol A} \big\|^2, \end{align*} where ${\bm X}}\def\Wb{{\bm W}_A = [ X_{1{\boldsymbol A}}, \cdots, X_{n{\boldsymbol A}}]^T$ with $X_{i{\boldsymbol A}} = [1, G_i^T, 2{\rm vec}p({\boldsymbol J}_i)^T {\boldsymbol P}({\boldsymbol A} \otimes {\boldsymbol I}_p)]^T$ being the design matrix when ${\boldsymbol A}$ is fixed, and ${\rm th}eta_{\boldsymbol A} =\left[\gamma , \xi^T, {\rm vec}({\boldsymbol B})^T\right]^T$. Thus, when ${\boldsymbol A}$ is fixed, ${\rm th}eta_{\boldsymbol A}$ is solved by \begin{equation} \widehat{{\rm th}eta}_{\boldsymbol A} = \left( {\bm X}}\def\Wb{{\bm W}^T_{\boldsymbol A} {\bm X}}\def\Wb{{\bm W}_{\boldsymbol A} + \lambda_{\ell} {\boldsymbol I}_{1+p+pk}\right)^{-1} {\bm X}}\def\Wb{{\bm W}_{\boldsymbol A}^T \mathcal{Y}b. \label{fixA} \end{equation} The ALS algorithm then iteratively and alternatively changes the roles of ${\boldsymbol A}$ and ${\boldsymbol B}$ until convergence. Detailed algorithm is summarized below.\\ \hrule ~\ \noindent \textbf{Alternating Least Squares (ALS) Algorithm:} \begin{enumerate} \item Set initial ${\boldsymbol B}^{(0)}$. For $t=0,1,2,\ldots,$ \begin{enumerate} \item[(1)] Fix ${\boldsymbol B}={\boldsymbol B}^{(t)}$, obtain $\widehat{{\rm th}eta}_{{\boldsymbol B}^{(t)}}=\{\gamma^{(t)},\xi^{(t)},{\rm vec}({\boldsymbol A}^{(t+1)})^T\}^T$ from (\ref{fixB}). \item[(2)] Fix ${\boldsymbol A}={\boldsymbol A}^{(t+1)}$, obtain $\widehat{{\rm th}eta}_{{\boldsymbol A}^{(t+1)}}=\{\gamma^{(t+1)},\xi^{(t+1)},{\rm vec}({\boldsymbol B}^{(t+1)})^T\}^T$ from (\ref{fixA}). \end{enumerate} \item Repeat Step-1 until convergence. 
Output $(\gamma^{(t+1)},\xi^{(t+1)},{\boldsymbol A}^{(t+1)},{\boldsymbol B}^{(t+1)})$ to form $\widehat\theta_{\lambda_\ell}$. \end{enumerate} \hrule \noindent Note that the value of the objective function (the penalized log-likelihood) increases in each iteration of the ALS algorithm. In addition, the penalized log-likelihood is bounded above by zero, which ensures that the ALS algorithm converges to a stationary point. We found in our numerical studies that the algorithm converges quickly from a random initial ${\boldsymbol B}^{(0)}$ and produces a good solution. A minimal numerical sketch of the two updates (\ref{fixB}) and (\ref{fixA}) is given below.
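For concreteness, the following is a small sketch in Python of the ALS iteration, alternating the two ridge solves (\ref{fixB}) and (\ref{fixA}). As in the rank-1 sketch above, it is not the implementation used for the numerical work in this paper: it sidesteps the ${\rm vec}p$/${\boldsymbol P}$ bookkeeping by writing the interaction term as the Frobenius inner product $\langle{\boldsymbol J}_i,{\boldsymbol A}{\boldsymbol B}^T+{\boldsymbol B}{\boldsymbol A}^T\rangle=2\,{\rm tr}({\boldsymbol A}^T{\boldsymbol J}_i{\boldsymbol B})$ (valid for symmetric ${\boldsymbol J}_i$), so that the interaction regressors for fixed ${\boldsymbol B}$ are $2\,{\rm vec}({\boldsymbol J}_i{\boldsymbol B})$; the function and argument names are ours.

\begin{verbatim}
import numpy as np

def als_rank2k(G, J, y, k=1, lam=1.0, n_iter=100, tol=1e-8, seed=0):
    """Alternating least squares for eta = A B^T + B A^T (sketch).

    G is the n x p genotype matrix, J[i] the p x p symmetric interaction
    matrix of subject i, and A, B are p x k.  Each half-step is a ridge
    solve of the form (X^T X + lam I)^{-1} X^T y.
    """
    rng = np.random.default_rng(seed)
    n, p = G.shape
    B = rng.standard_normal((p, k))
    obj_old = np.inf
    for _ in range(n_iter):
        # Fix B: solve for (gamma, xi, vec(A)), cf. (fixB).
        XB = np.hstack([np.ones((n, 1)), G,
                        np.array([2.0 * (J[i] @ B).ravel() for i in range(n)])])
        tB = np.linalg.solve(XB.T @ XB + lam * np.eye(1 + p + p * k), XB.T @ y)
        A = tB[1 + p:].reshape(p, k)
        # Fix A: solve for (gamma, xi, vec(B)), cf. (fixA).
        XA = np.hstack([np.ones((n, 1)), G,
                        np.array([2.0 * (J[i] @ A).ravel() for i in range(n)])])
        tA = np.linalg.solve(XA.T @ XA + lam * np.eye(1 + p + p * k), XA.T @ y)
        B = tA[1 + p:].reshape(p, k)
        # Full penalized objective; non-increasing across passes.
        obj = 0.5 * np.sum((y - XA @ tA) ** 2) \
              + 0.5 * lam * (np.sum(tA ** 2) + np.sum(A ** 2))
        if abs(obj_old - obj) < tol:
            break
        obj_old = obj
    return tA[0], tA[1:1 + p], A, B   # gamma, xi, A, B
\end{verbatim}

The stopping rule simply monitors the penalized objective which, by the monotonicity noted above, changes less and less across passes.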
\subsection{Asymptotic properties}\label{low.rank.asymptotics} This subsection is devoted to deriving the asymptotic distribution of $\widehat\beta_{\lambda_\ell}$ defined in \eqref{est.beta}, which is the core of the SLR-screening proposed in the next section. Assume that the parameter space $\Theta$ of $\theta$ is bounded, open and connected, and define $\Xi=\beta(\Theta)$ to be the induced parameter space. Let $\beta_0=\{\gamma_0,\xi_0^T,{\rm vec}p({\boldsymbol \eta}_0)^T\}^T$ be the true parameter value of the low-rank model~(\ref{model}) and define \begin{eqnarray} {\boldsymbol \Delta}(\theta)=\frac{\partial}{\partial\theta}\beta(\theta). \label{delta} \end{eqnarray} We need the following regularity conditions for deriving asymptotic properties. \begin{enumerate}[label=(C\arabic{*}),ref=C\arabic{*}] \item \label{C1} Assume $\beta_0=\beta(\theta_0)$ for some $\theta_0\in\Theta$. \item \label{C2} Assume that $\beta(\theta)$ is locally regular at $\theta_0$ in the sense that ${\boldsymbol \Delta}(\theta)$ has the same rank as ${\boldsymbol \Delta}(\theta_0)$ for all $\theta$ in a neighborhood of $\theta_0$. Further assume that there exist neighborhoods ${\cal U}$ of $\theta_0$ and ${\cal V}$ of $\beta_0$ such that $\Xi\cap{\cal V}=\beta({\cal U})$. \item \label{C3} Let ${\boldsymbol V}_n=\frac{1}{n}{\boldsymbol X}^T{\boldsymbol X}$. Assume that ${\boldsymbol V}_n \stackrel{p}{\to} {\boldsymbol V}_0$ and that ${\boldsymbol V}_0$ is strictly positive definite. \end{enumerate} The main result is summarized in the following theorem. \begin{thm}\label{thm.asymp_dist} Assume model~\eqref{model} and conditions \eqref{C1}-\eqref{C3}. Assume also $\lambda_{\ell}=o(\sqrt{n})$. Then, as $n\to\infty$, we have \begin{eqnarray}\label{asymp_dist} \sqrt{n}(\widehat\beta_{\lambda_\ell}-\beta_0)\stackrel{d}{\rightarrow} N(0,{\boldsymbol \Sigma}_0), \end{eqnarray} where ${\boldsymbol \Sigma}_0=\sigma^2{\boldsymbol \Delta}_0({\boldsymbol \Delta}_0^T {\boldsymbol V}_0{\boldsymbol \Delta}_0)^{-}{\boldsymbol \Delta}_0^T$ with ${\boldsymbol \Delta}_0={\boldsymbol \Delta}(\theta_0)$. \end{thm} To estimate the asymptotic covariance ${\boldsymbol \Sigma}_0$, we need to estimate $(\sigma^2,{\boldsymbol \Delta}_0)$. The error variance $\sigma^2$ can be naturally estimated by \begin{eqnarray} \widehat\sigma^2=\frac{\|Y-{\boldsymbol X}\widehat\beta_{\lambda_\ell}\|^2}{n-d_r},\label{sigma} \end{eqnarray} where $d_r$ is defined in (\ref{dr}). We propose to estimate ${\boldsymbol \Delta}_0$ by $\widehat{\boldsymbol \Delta}_0={\boldsymbol \Delta}(\widehat\theta_{\lambda_\ell})$. Finally, the asymptotic covariance matrix in Theorem~\ref{thm.asymp_dist} is estimated by \begin{eqnarray} \widehat{\boldsymbol \Sigma}_0=\widehat\sigma^2\widehat{\boldsymbol \Delta}_0\left\{{\boldsymbol U}\left({\boldsymbol \Lambda}+\frac{\lambda_\ell}{n} {\boldsymbol I}_{d_r}\right){\boldsymbol U}^T \right\}^-\widehat{\boldsymbol \Delta}_0^T,\label{est.cov} \end{eqnarray} where ${\boldsymbol U}{\boldsymbol \Lambda}{\boldsymbol U}^T$ is the singular value decomposition of $\widehat{\boldsymbol \Delta}_0^T {\boldsymbol V}_n\widehat{\boldsymbol \Delta}_0$, and ${\boldsymbol \Lambda}\in \mathbb{R}^{d_r\times d_r}$ is the diagonal matrix of its $d_r$ nonzero singular values, with the corresponding singular vectors collected in ${\boldsymbol U}$. We note that adding $\frac{\lambda_\ell}{n} {\boldsymbol I}_{d_r}$ to ${\boldsymbol \Lambda}$ in (\ref{est.cov}) aims to stabilize the estimator $\widehat{\boldsymbol \Sigma}_0$ and does not affect its consistency for ${\boldsymbol \Sigma}_0$. \begin{rmk}\label{rmk.select_model} The number $d_r$ in \eqref{sigma} can be used as a guide for how large a model rank can be afforded with the given sample size $n$: the value $n-d_r$ should remain large enough for the error variance to be estimated reliably. \end{rmk} \section{Multistage Variable Selection for {Genetic Main and G$\bm\times$G Effects}}\label{sec.gg} Based on the inference procedure developed for the low-rank model, we introduce the SLR-screening in Section~\ref{sec.slr}. In Section~\ref{sec.esc}, the SLR-screening is incorporated into the conventional SC procedure to propose ESC for G$\times$G detection. \subsection{Sparse and low-rank screening}\label{sec.slr} Due to the extremely high dimensionality of G$\times$G terms, a single-stage Lasso screening is not flexible enough for variable selection. To improve the performance, it is helpful to reduce the model size from $m_p$ to a smaller number. The main idea of SLR-screening is to first fit a low-rank model to filter out insignificant variables, and then implement Lasso screening on the surviving variables. The algorithm is summarized below.\\ \hrule ~\ \noindent \textbf{Sparse and Low-Rank Screening (SLR-Screening):} \begin{itemize} \item[1.] \textbf{Low-Rank Screening:} Fit the low-rank model~(\ref{model}). Based on the test statistics for the components of $\beta$, retain variables to obtain the index set ${\cal I}_{\rm LR}$. \item[2.] \textbf{Sparse (Lasso) Screening:} Fit the Lasso on the variables in ${\cal I}_{\rm LR}$. Those with non-zero estimates form ${\cal I}_{\rm SLR}$. \end{itemize} \hrule The goal of Stage-1 of SLR-screening is to retain potentially important variables by utilizing the low-rank property of ${\boldsymbol \eta}$. To achieve this task, we propose to fit the low-rank model~(\ref{model}) to obtain $\widehat\beta_{\lambda_\ell}$ and $\widehat{\boldsymbol \Sigma}_0$. Based on Theorem~\ref{thm.asymp_dist}, it is then reasonable to select variables as \begin{eqnarray} {\cal I}_{\rm LR}=\left\{j:\frac{|\widehat\beta_{\lambda_\ell,j}|} {\sqrt{n^{-1}\, \widehat{\boldsymbol \Sigma}_{0,j}}}>\alpha_\ell\right\}\label{I1} \end{eqnarray} for some $\alpha_\ell>0$, where $\widehat\beta_{\lambda_\ell,j}$ is the $j^{\rm th}$ element of $\widehat\beta_{\lambda_\ell}$, and $\widehat{\boldsymbol \Sigma}_{0,j}$ is the $j^{\rm th}$ diagonal element of $\widehat{\boldsymbol \Sigma}_0$. Here the threshold value $\alpha_\ell$ controls the power of the low-rank screening; a minimal sketch of this thresholding step is given below.
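In code, Stage-1 amounts to a single thresholding of the standardized coefficients; a small sketch follows (the function name and the handling of zero variance estimates are ours):

\begin{verbatim}
import numpy as np

def low_rank_screen(beta_hat, Sigma_hat, n, alpha_l):
    """Stage-1 of SLR-screening: keep index j whenever
    |beta_hat[j]| / sqrt(Sigma_hat[j, j] / n) > alpha_l."""
    se = np.sqrt(np.diag(Sigma_hat) / n)
    se = np.where(se > 0, se, np.inf)  # skip coordinates with a zero variance estimate
    return np.flatnonzero(np.abs(beta_hat) / se > alpha_l)
\end{verbatim}

Stage-2 then refits only the retained columns with an $\ell_1$ penalty, as described next.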
The goal of Stage-2 of SLR-screening is to enforce sparsity. Based on the selected index set ${\cal I}_{\rm LR}$, we refit the model with a 1-norm penalty by minimizing \begin{align} \frac{1}{2}\|Y-{\boldsymbol X}_{{\cal I}_{\rm LR}} \beta_{{\cal I}_{\rm LR}}\|^2 +\lambda_{s}\|\beta_{{\cal I}_{\rm LR}}\|_1,\label{criterion2} \end{align} where ${\boldsymbol X}_{{\cal I}_{\rm LR}}$ and $\beta_{{\cal I}_{\rm LR}}$ are, respectively, the selected variables and parameters in ${\cal I}_{\rm LR}$, and $\lambda_s$ is a penalty parameter for the sparsity constraint. Let the minimizer of (\ref{criterion2}) be $\widehat\beta_{{\cal I}_{\rm LR}}$, and define \begin{eqnarray} {\cal I}_{\rm SLR}=\left\{j\in {\cal I}_{\rm LR}:\widehat\beta_{{\cal I}_{\rm LR},j}\neq 0\right\} \label{I2} \end{eqnarray} to be the final set of main effects and interactions identified by the screening stage, where $\widehat\beta_{{\cal I}_{\rm LR},j}$ is the $j^{\rm th}$ element of $\widehat\beta_{{\cal I}_{\rm LR}}$. To determine $\lambda_s$, $K$-fold cross-validation ($K=10$ in this work) is applied. Subsequent analysis can then be conducted on the variables in ${\cal I}_{\rm SLR}$. \subsection{Extended Screen-and-Clean for G$\bm{\times}$G}\label{sec.esc} {\rm Screen-and-Clean~} (SC) of Wasserman and Roeder (2009) is a novel variable selection procedure. Firstly, the data are split into two parts, one for screening and the other for cleaning. The main reason for using two independent data sets is to control the type-I error while maintaining high detection power. In the screening stage, the Lasso is fitted on all covariates, and those with zero estimates are dropped. The threshold for passing the screening is determined by cross-validation. In the cleaning stage, a linear regression model with the variables passing the screening process is fitted, and the resulting least squares estimates (LSE) are used to identify significant covariates via hypothesis testing. A critical assumption for the validity of SC is the sparsity of effective covariates; under this assumption, using the Lasso to reduce the model size guarantees the success of the cleaning stage in identifying relevant covariates. Recently, SC has been modified by Wu \textit{et al}. (2010) to detect G$\times$G as described in Section~\ref{sec.1}. This procedure has been shown to perform well through simulation studies. However, the procedure can be less efficient when the number of genes is large. For instance, many genes may remain after the first screening and, hence, a rather large number of parameters is required to fit model~\eqref{model_full0} for the second screening. As the performance of the Lasso depends on the model size, a further reduction of the model size can be helpful to increase the detection power. To achieve this aim, unlike standard SC, which fits the full model~\eqref{model_full0} with Lasso screening, we propose to fit the low-rank model~\eqref{model} with SLR-screening instead. We call this procedure Extended Screen-and-Clean (ESC). Let $G^*$ be the set of all genes under consideration. Given a random partition $\mathcal{D}_1$ and $\mathcal{D}_2$ of the original data $\mathcal{D}$, the ESC procedure for detecting G$\times$G is summarized below.\\ \hrule ~\ {\noindent\bf Extended Screen-and-Clean (ESC):} \begin{enumerate}\itemsep=0pt \item Based on $\mathcal{D}_1$, fit the Lasso on $(Y,G^*)$ with the 1-norm penalty $\lambda_m$ to obtain $\widetilde\xi_{G^*}$. Let $G$ consist of the genes in $\{j:\widetilde\xi_{G^*,j}\neq 0\}$. Obtain $\mathcal{E}(G)=G\cup\{\mbox{all interactions of $G$}\}$.
\item Based on $\mathcal{D}_1$, implement SLR-screening on $(Y,\mathcal{E}(G))$ to obtain ${\cal I}_{\rm SLR}$. Let $\mathcal{S}$ consist of the main and interaction terms in ${\cal I}_{\rm SLR}$. \item Based on $\mathcal{D}_2$, fit the LSE on $(Y,\mathcal{S})$ to obtain the estimates of main effects and interactions $\widehat\xi_\mathcal{S}$ and $\widehat{\boldsymbol \eta}_\mathcal{S}$. The chosen model is \begin{eqnarray*} \mathcal{M}=\displaystyle{\left\{g_j,g_kg_l\in \mathcal{S}:|T_j|>t_{n-1-|\mathcal{S}|,\frac{\alpha}{2|\mathcal{S}|}},|T_{kl}|>t_{n-1-|\mathcal{S}|,\frac{\alpha}{2|\mathcal{S}|}}\right\}}, \end{eqnarray*} where $T_j$ and $T_{kl}$ are the $t$-statistics based on the elements of $\widehat\xi_\mathcal{S}$ and $\widehat{\boldsymbol \eta}_\mathcal{S}$, respectively. \end{enumerate} \hrule ~\ \\ \noindent For the determination of $\lambda_m$ in Step-1 of ESC, Wu \textit{et al}. (2010) use cross-validation. Later, Liu, Roeder and Wasserman (2010) introduce StARS (Stability Approach to Regularization Selection) for $\lambda_m$ selection, and this selection criterion is adopted in the R code of \textit{Screen} \& \textit{Clean} (available at {\tt http:/\!/wpicr.wpic.pitt.edu/WPICCompGen/}). Note that the intercept is always included in the model. Note also that the proposed ESC is exactly the same as Wu's SC, except that SLR-screening replaces Lasso screening in Step-2. See Figure~\ref{Lasso_tensor} for the flowchart of ESC. \section{Simulation Studies}\label{simulation} Our simulation studies are based on the design considered in Wu et al. (2010) with some extensions. In each simulated dataset, we generated genotype and trait values for $400$ individuals. For genotypes, we generated 1000 SNPs, $G=[g_1, \cdots, g_{1000}]^T$ with $g_j\in\{0,1,2\}$, from a discretization of a normal random variable satisfying $P(g_j= 0) = P(g_j = 2) = 0.25$ and $P(g_j = 1) = 0.5$. The 1000 SNPs are grouped into $200$ 5-SNP blocks, such that SNPs from different blocks are independent and SNPs within the same block are correlated with $R^2 = 0.3^2$. Conditional on $G$, we generate $Y$ using the following four models, where $\beta$ is the effect size and $\varepsilon \sim N(0,1)$: \begin{enumerate}\itemsep=0pt \item[M1:]$Y=\beta(g_5g_6+0.8g_{10}g_{11}+0.6g_{15}g_{16}+0.4g_{20}g_{21}+0.2g_{25}g_{26})+\varepsilon$. \item[M2:]$Y=\beta(g_5g_6+0.8g_{10}g_{11}+0.6g_{15}g_{16}+2g_{20}+2g_{21})+\varepsilon$. \item[M3:]$Y=\beta{\rm vec}p({\boldsymbol \eta})^T {\rm vec}p({\boldsymbol J})+\varepsilon$, where $\eta_{jk}=0.9^{|j-k|}$ for $1\le j\ne k\le6$ and $\eta_{jk}=0$ for $j,k>6$. \item[M4:]$Y=\beta{\rm vec}p({\boldsymbol \eta})^T {\rm vec}p({\boldsymbol J})+\varepsilon$, where we randomly generate $\eta_{jk}={\rm sign}(u_1)\cdot u_2$ with $u_1\sim U(-0.1,0.9)$ and $u_2\sim U(0.5,1)$ for $1\le j\ne k\le8$, and $\eta_{jk}=0$ for $j,k>8$. \end{enumerate} To compare the performances, let $\mathcal{M}_0$ denote the index set of nonzero coefficients of the true model, and let $\mathcal{M}$ be the estimated model. Define the \textit{power} to be $E(|{\mathcal{M}}\cap \mathcal{M}_0|/|\mathcal{M}_0|)$, the \textit{exact discovery} to be $P({\mathcal{M}}= \mathcal{M}_0)$, the \textit{false discovery rate} (FDR) to be $E(|{\mathcal{M}}\cap \mathcal{M}_0^c|/|{\mathcal{M}}|)$, and the \textit{type-I error} to be $P({\mathcal{M}}\cap \mathcal{M}_0^c\ne\varnothing)$. These quantities are reported based on 100 replicates for each model. A small sketch of this data-generating mechanism is given below for reference.
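The following is a minimal sketch in Python of the genotype and trait generation under M1. One assumption is made explicit here: the stated within-block $R^2=0.3^2$ is imposed as a correlation of $0.3$ between the latent normal variables before discretization; the function and argument names are ours.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def simulate_m1(n=400, p=1000, block=5, rho=0.3, beta=1.0, seed=0):
    """Genotypes (n x p) and trait y under model M1 (sketch)."""
    rng = np.random.default_rng(seed)
    n_blocks = p // block
    # Latent normals: equicorrelated (rho) within each 5-SNP block.
    common = np.repeat(rng.standard_normal((n, n_blocks)), block, axis=1)
    z = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * rng.standard_normal((n, p))
    # Discretize so that P(g=0) = P(g=2) = 0.25 and P(g=1) = 0.5.
    g = np.ones((n, p), dtype=int)
    g[z < norm.ppf(0.25)] = 0
    g[z > norm.ppf(0.75)] = 2
    # Trait under M1 (paper's 1-based SNP indices shifted to 0-based).
    pairs = [(4, 5, 1.0), (9, 10, 0.8), (14, 15, 0.6), (19, 20, 0.4), (24, 25, 0.2)]
    y = beta * sum(w * g[:, j] * g[:, k] for j, k, w in pairs)
    y = y + rng.standard_normal(n)
    return g, y
\end{verbatim}

Models M2--M4 are generated in the same way, with the main effects or the interaction matrix ${\boldsymbol \eta}$ specified as above.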
Simulation results under different model settings are presented in Figures~\ref{fig.m1}-\ref{fig.m4}. It can be seen that both ESC(1) and ESC(2) control FDR and type-I error adequately in all settings. In the pure interaction model M1, ESC(1) is the best performer, while the performances of SC and ESC(2) are comparable. Interestingly, when the true model contains main effects (M2, Figure~\ref{fig.m2}), both ESC(1) and ESC(2) clearly outperform SC for every effect size $\beta$. This indicates that conventional SC using model~(\ref{model_full0}) is not able to identify main effects efficiently. We found that the SC procedure is more likely to wrongly filter out the true main effects in the second Lasso screening stage. With the low-rank screening to reduce the model size, however, these true main effects have a higher chance of entering the final LSE cleaning stage and, hence, a higher power of ESC is reasonably expected. The superiority of the ESC procedure is even more evident under models M3-M4 (Figures~\ref{fig.m3}-\ref{fig.m4}), where the powers and exact discovery rates of ESC(1) and ESC(2) dominate those of SC for every effect size $\beta$. One reason is that many significant interactions are involved in M3-M4, and ESC with a low-rank model is able to correctly filter out insignificant interactions in ${\boldsymbol \eta}$ to achieve better performances. In contrast, directly using Lasso screening does not utilize the matrix structure of ${\boldsymbol \eta}$. On the one hand, it tends to wrongly filter out significant interactions. On the other hand, it tends to leave too many insignificant terms in the screening stage. Consequently, the subsequent LSE does not have enough sample size to clean the model well, resulting in lower detection power. We note that although the rank of ${\boldsymbol \eta}$ in models M1-M4 ranges from 6 to 8, ESC with rank-1 and rank-2 models suffices to achieve good performances. This indicates the robustness and applicability of the low-rank model~\eqref{model}, even with an incorrectly specified rank $r$. Moreover, we observe that ESC(1) outperforms ESC(2) in most of the settings. Given that the aim of low-rank screening in SLR-screening is to reduce the model size, even a rough approximation of ${\boldsymbol \eta}$ is capable of removing unimportant terms. In contrast, while the rank-2 model approximates ${\boldsymbol \eta}$ more precisely, it also requires more parameters in model fitting. With a limited sample size, the gain in approximation accuracy from the rank-2 model cannot compensate for the loss in estimation efficiency and, hence, ESC(2) need not perform better than ESC(1). See also Remark~\ref{rmk.select_model} for a discussion of selecting $r$ in the ESC procedure. \begin{figure} \caption{Flowchart of ESC for detecting G$\times$G. The arrow indicates which part of the data is used. The case of SC replaces SLR-screening by Lasso screening.} \label{Lasso_tensor} \end{figure} \begin{figure} \caption{Simulation results under M1.} \label{fig.m1} \end{figure} \begin{figure} \caption{Simulation results under M2.} \label{fig.m2} \end{figure} \begin{figure} \caption{Simulation results under M3.} \label{fig.m3} \end{figure} \begin{figure} \caption{Simulation results under M4.} \label{fig.m4} \end{figure} \end{document}
\begin{document} \title{Log minimal models according to Shokurov} \begin{abstract} Following Shokurov's ideas, we give a short proof of the following klt version of his result: termination of terminal log flips in dimension $d$ implies that any klt pair of dimension $d$ has a log minimal model or a Mori fibre space. Thus, in particular, any klt pair of dimension $4$ has a log minimal model or a Mori fibre space. \end{abstract} \section{Introduction} All the varieties in this paper are assumed to be over an algebraically closed field $k$ of characteristic zero. We refer the reader to section 2 for notation and terminology. Shokurov [\ref{ordered}] proved that the log minimal model program (LMMP) in dimension $d-1$ and termination of terminal log flips in dimension $d$ imply existence of a log minimal model or a Mori fibre space for any lc pair of dimension $d$. Following Shokurov's method and using results of [\ref{BCHM}], we prove that termination of terminal log flips in dimension $d$ implies existence of a log minimal model or a Mori fibre space for any klt pair of dimension $d$. In this paper, by termination of terminal log flips in dimension $d$ we will mean termination of any sequence $X_i\dashrightarrow X_{i+1}/Z_i$ of log flips$/Z$ starting with a $d$-dimensional klt pair $(X/Z,B)$ which is terminal in codimension $\ge 2$. \begin{thm}\label{main} Termination of terminal log flips in dimension $d$ implies that any klt pair $(X/Z,B)$ of dimension $d$ has a log minimal model or a Mori fibre space. \end{thm} \begin{cor}\label{c-dim4} Any klt pair $(X/Z,B)$ of dimension $4$ has a log minimal model or a Mori fibre space. \end{cor} Note that, in the corollary, when $(X/Z,B)$ is effective (eg of nonnegative Kodaira dimension), log minimal models are constructed in [\ref{B2}] using different methods. \section{Basics} Let $k$ be an algebraically closed field of characteristic zero. For an $\mathbb R$-divisor $D$ on a variety $X$ over $k$, we use $D^\sim$ to denote the birational transform of $D$ on a specified birational model of $X$. \begin{defn} A pair $(X/Z,B)$ consists of normal quasi-projective varieties $X,Z$ over $k$, an $\mathbb R$-divisor $B$ on $X$ with coefficients in $[0,1]$ such that $K_X+B$ is $\mathbb{R}$-Cartier, and a projective morphism $X\to Z$. $(X/Z,B)$ is called log smooth if $X$ is smooth and $\Supp B$ has simple normal crossing singularities. For a prime divisor $D$ on some birational model of $X$ with a nonempty centre on $X$, $a(D,X,B)$ denotes the log discrepancy. $(X/Z,B)$ is terminal in codimension $\ge 2$ if $a(D,X,B)>1$ whenever $D$ is exceptional$/X$. Log flips preserve this condition but divisorial contractions may not. \end{defn} Let $(X/Z,B)$ be a klt pair. By a log flip$/Z$ we mean the flip of a $K_X+B$-negative extremal flipping contraction$/Z$. A sequence of log flips$/Z$ starting with $(X/Z,B)$ is a sequence $X_i\dashrightarrow X_{i+1}/Z_i$ in which $X_i\to Z_i \leftarrow X_{i+1}$ is a $K_{X_i}+B_i$-flip$/Z$ and $B_i$ is the birational transform of $B_1$ on $X_1$, and $(X_1/Z,B_1)=(X/Z,B)$. By termination of terminal log flips in dimension $d$ we mean termination of such a sequence in which $(X_1/Z,B_1)$ is a $d$-dimensional klt pair which is terminal in codimension $\ge 2$. Now assume that $G\ge 0$ is an $\mathbb R$-Cartier divisor on $X$. 
A sequence of $G$-flops$/Z$ starting with $(X/Z,B)$ is a sequence $X_i\dashrightarrow X_{i+1}/Z_i$ in which $X_i\to Z_i \leftarrow X_{i+1}$ is a $G_i$-flip$/Z$ such that $K_{X_i}+B_i\equiv 0/Z_i$ where $G_i$ is the birational transform of $G$ on $X=X_1$.\\ \begin{rem}\label{r-rays} We borrow a result of Shokurov [\ref{ordered}, Corollary 9, Addendum 4] concerning extremal rays. Let $(X/Z,B)$ be a $\mathbb Q$-factorial klt pair and $F$ a reduced divisor on $X$. Then, there is $\epsilon>0$ such that if $G\ge 0$ is an $\mathbb R$-divisor supported in $F$ satisfying\\\\ (1) $||G||<\epsilon$ where $||.||$ denotes the maximum of coefficients, and\\ (2) $(K_X+B+G)\cdot R<0$ for an extremal ray $R$,\\ then $(K_X+B)\cdot R\le 0$. This follows from certain numerical properties of log divisors such as [\ref{ordered}, Proposition 1] which is essentially the boundedness of the length of an extremal ray. Moreover, $\epsilon$ can be chosen such that for any $\mathbb R$-divisor $G'\ge 0$ supported in $F$ and any sequence $X_i\dashrightarrow X_{i+1}/Z_i$ of $G'$-flops starting with $(X/Z,B)$ satisfying\\\\ (1') $||G_i||<\epsilon$ where $G_i\ge 0$ is a multiple of $G_i'$, the birational transform of $G'$, and\\ (2') $(K_{X_i}+B_i+G_i)\cdot R<0$ for an extremal ray $R$ on $X_i$,\\ we have $(K_{X_i}+B_i)\cdot R\le 0$. In other words, $\epsilon$ is preserved after $G'$-flops but possibly only in the direction of $G'$. These claims are proved in [\ref{ordered}, Corollary 9, Addendum 4]. \end{rem} \begin{defn}[Cf., {[\ref{B2}, \S 2]}]\label{d-mmodel} Let $(X/Z,B)$ be a klt pair, $(Y/Z,B_Y)$ a $\mathbb Q$-factorial klt pair, $\phi\colon X\dashrightarrow Y/Z$ a birational map such that $\phi^{-1}$ does not contract divisors, and $B_Y$ the birational transform of $B$. Moreover, assume that $$ a(D,X,B)\le a(D,Y,B_Y) $$ for any prime divisor $D$ on birational models of $X$ and assume that the strict inequality holds for any prime divisor $D$ on $X$ which is exceptional/$Y$. We say that $(Y/Z,B_Y)$ is a log minimal model of $(X/Z,B)$ if $K_Y+B_Y$ is nef$/Z$. On the other hand, we say that $(Y/Z,B_Y)$ is a Mori fibre space of $(X/Z,B)$ if there is a $K_Y+B_Y$-negative extremal contraction $Y\to Y'/Z$ such that $\dim Y'<\dim Y$. \end{defn} Typically, one obtains a log minimal model or a Mori fibre space by a finite sequence of divisorial contractions and log flips. \begin{rem}\label{r-q-fact} Let $(X/Z,B)$ be a klt pair and $W\to X$ a log resolution. Let $B_W=B^\sim+(1-\epsilon)\sum E_i$ where $0<\epsilon\ll 1$ and $E_i$ are the exceptional$/X$ divisors on $W$. Remember that $B^\sim$ is the birational transform of $B$. If $(Y/X,B_Y)$ is a log minimal model of $(W/X,B_W)$, which exists by [\ref{BCHM}], then by the negativity lemma $Y\to X$ is a small $\mathbb Q$-factorialisation of $X$. To find a log minimal model or a Mori fibre space of $(X/Z,B)$, it is enough to find one for $(Y/Z,B_Y)$. So, one could assume that $X$ is $\mathbb Q$-factorial by replacing it with $Y$. \end{rem} Let $(X/Z,B+C)$ be a $\mathbb Q$-factorial klt pair such that $K_X+B+C$ is nef/$Z$. By [\ref{B2}, Lemma 2.6], either $K_X+B$ is nef/$Z$ or there is an extremal ray $R/Z$ such that $(K_X+B)\cdot R<0$ and $(K_X+B+\lambda_1 C)\cdot R=0$ where $$ \lambda_1:=\inf \{t\ge 0~|~K_X+B+tC~~\mbox{is nef/$Z$}\} $$ and $K_X+B+\lambda_1 C$ is nef$/Z$. Now assume that $R$ defines a divisorial contraction or a log flip $X\dashrightarrow X'$. 
We can consider $(X'/Z,B'+\lambda_1 C')$ where $B'+\lambda_1 C'$ is the birational transform of $B+\lambda_1 C$ and continue the argument. That is, either $K_{X'}+B'$ is nef/$Z$ or there is an extremal ray $R'/Z$ such that $(K_{X'}+B')\cdot R'<0$ and $(K_{X'}+B'+\lambda_2 C')\cdot R'=0$ where $$ \lambda_2:=\inf \{t\ge 0~|~K_{X'}+B'+tC'~~\mbox{is nef/$Z$}\} $$ and $K_{X'}+B'+\lambda_2 C'$ is nef$/Z$. By continuing this process, we obtain a special kind of LMMP on $K_X+B$ which we refer to as the \emph{LMMP with scaling of $C$}. If it terminates, then we obviously get a log minimal model or a Mori fibre space for $(X/Z,B)$. Note that the required log flips exist by [\ref{BCHM}]. \section{Proofs} \begin{proof}(of Theorem \ref{main}) Let $(X/Z,B)$ be a klt pair of dimension $d$. By Remark \ref{r-q-fact}, we can assume that $X$ is $\mathbb Q$-factorial. Let $H\ge 0$ be an $\mathbb R$-divisor which is big$/Z$ so that $K_X+B+H$ is klt and nef$/Z$. Run the LMMP/$Z$ on $K_X+B$ with scaling of $H$. If the LMMP terminates, then we get a log minimal model or a Mori fibre space. Suppose that we get an infinite sequence $X_i \dashrightarrow X_{i+1}/Z_i$ of log flips$/Z$ where we may also assume that $(X_1/Z,B_1)=(X/Z,B)$. Let $\lambda_i$ be the threshold on $X_i$ determined by the LMMP with scaling as explained in section 2. So, $K_{X_i}+B_i+\lambda_i H_i$ is nef$/Z$, $(K_{X_i}+B_i)\cdot R_i<0$ and $(K_{X_i}+B_i+\lambda_i H_i)\cdot R_i=0$ where $B_i$ and $H_i$ are the birational transforms of $B$ and $H$ respectively and $R_i$ is the extremal ray which defines the flipping contraction $X_i\to Z_i$. Obviously, $\lambda_i\ge \lambda_{i+1}$. Put $\lambda=\lim_{i\to \infty} \lambda_i$. If the limit is attained, that is, $\lambda=\lambda_i$ for some $i$, then the sequence terminates by [\ref{BCHM}, Corollary 1.4.2]. So, we assume that the limit is not attained. Actually, if $\lambda>0$, again [\ref{BCHM}] implies that the sequence terminates. However, we do not need to use [\ref{BCHM}] in this case. In fact, by replacing $B_i$ with $B_i+\lambda H_i$, we can assume that $\lambda=0$ hence $\lim_{i\to \infty} \lambda_i=0$. Put $\Lambda_i:=B_i+\lambda_i H_i$. Since we are assuming that terminal log flips terminate, or alternatively by [\ref{BCHM}, Corollary 1.4.3], we can construct a terminal (in codimension $\ge 2$) crepant model $(Y_i/Z,\Theta_i)$ of $(X_i/Z,\Lambda_i)$. A slight modification of the argument in Remark \ref{r-q-fact} would do this. Note that we can assume that all the $Y_i$ are isomorphic to $Y_1$ in codimension one perhaps after truncating the sequence. Let $\Delta_1=\lim_{i\to \infty} \Theta_i^{\sim}$ on $Y_1$ and let $\Delta_i$ be its birational transform on $Y_i$. The limit is obtained component-wise. Since $H_i$ is big$/Z$ and $K_{X_i}+\Lambda_i$ is klt and nef$/Z$, $K_{X_i}+\Lambda_i$ and $K_{Y_i}+\Theta_i$ are semi-ample$/Z$ by the base point freeness theorem for $\mathbb R$-divisors. Thus, $K_{Y_i}+\Delta_i$ is a limit of movable/$Z$ divisors which in particular means that it is pseudo-effective/$Z$. Note that if $K_{Y_i}+\Delta_i$ is not pseudo-effective/$Z$, we get a contradiction by [\ref{BCHM}, Corollary 1.3.2]. Now run the LMMP$/Z$ on $K_{Y_1}+\Delta_1$. No divisor will be contracted again because $K_{Y_1}+\Delta_1$ is a limit of movable/$Z$ divisors. Since $K_{Y_1}+\Delta_1$ is terminal in codimension $\ge 2$, by assumptions, the LMMP terminates with a log minimal model $(W/Z,\Delta)$. 
By construction, $\Delta$ on $W$ is the birational transform of $\Delta_1$ on $Y_1$ and $G_i:=\Theta_i^{\sim}-\Delta$ on $W$ satisfies $\lim_{i\to \infty} G_i=0$. By Remark \ref{r-rays}, for each $G_i$ with $i\gg 0$, we can run the LMMP/$Z$ on $K_W+\Delta+G_i$ which will be a sequence of $G_i$-flops, that is, $K+\Delta$ would be numerically zero on all the extremal rays contracted in the process. No divisor will be contracted because $K_W+\Delta+G_i$ is movable$/Z$. The LMMP ends up with a log minimal model $(W_i/Z,\Omega_i)$. Here, $\Omega_i$ is the birational transform of $\Delta+G_i$ and so of $\Theta_i$. Let $S_i$ be the lc model of $(W_i/Z,\Omega_i)$ which is the same as the lc model of $(Y_i/Z,\Theta_i)$ and that of $(X_i/Z,\Lambda_i)$ because $K_{W_i}+\Omega_i$ and $K_{Y_i}+\Theta_i$ are nef$/Z$ with $W_i$ and $Y_i$ being isomorphic in codimension one, and $K_{Y_i}+\Theta_i$ is the pullback of $K_{X_i}+\Lambda_i$. Also note that since $K_{X_i}+B_i$ is pseudo-effective$/Z$, $K_{X_i}+\Lambda_i$ is big$/Z$ hence $S_i$ is birational to $X_i$. By construction $K_{W_i}+\Delta^{\sim}$ is nef$/Z$ and it turns out that $K_{W_i}+\Delta^{\sim}\sim_{\mathbb R} 0/S_i$. Suppose that this is not the case. Then, $K_{W_i}+\Delta^{\sim}$ is not numerically zero$/S_i$ hence there is some curve $C/S_i$ such that $(K_{W_i}+\Delta^{\sim}+G_i^{\sim})\cdot C=0$ but $(K_{W_i}+\Delta^{\sim})\cdot C>0$ which implies that $G_i^{\sim}\cdot C<0$. Hence, there is a $K_{W_i}+\Delta^{\sim}+(1+\tau)G_i^{\sim}$-negative extremal ray $R/S_i$ for any $\tau>0$. This contradicts Remark \ref{r-rays} because we must have $$ (K_{W_i}+\Delta^{\sim}+G_i^{\sim})\cdot R=(K_{W_i}+\Delta^{\sim})\cdot R=0 $$ Therefore, $K_{W_i}+\Delta^{\sim}\sim_{\mathbb R} 0/S_i$. Now $K_{X_i}+\Lambda_i\sim_{\mathbb R}0/Z_i$ implies that $Z_i$ is over $S_i$ and so $K_{Y_i}+\Delta_i \sim_{\mathbb R} 0/S_i$. On the other hand, $K_{X_i}+B_i$ is the pushdown of $K_{Y_i}+\Delta_i$ hence $K_{X_i}+B_i\sim_{\mathbb R} 0/S_i$. Thus, $K_{X_i}+B_i\sim_{\mathbb R} 0/Z_i$ and this contradicts the fact that $X_i\to Z_i$ is a $K_{X_i}+B_i$-flipping contraction. So, the sequence of flips terminates and this completes the proof. \end{proof} \begin{proof}(of Corollary \ref{c-dim4}) Since terminal log flips terminate in dimension $4$ by [\ref{F}][\ref{Sh2}], the result follows from the Theorem. \end{proof} \flushleft{DPMMS}, Centre for Mathematical Sciences,\\ Cambridge University,\\ Wilberforce Road,\\ Cambridge, CB3 0WB,\\ UK\\ email: [email protected] \end{document}
\begin{document} \title{Weyl groups of the fine gradings on $\mathfrak{e}_6$} \begin{abstract} The Weyl groups of the fine gradings with infinite universal grading group on $\mathfrak{e}_6$ are given. \end{abstract} \noindent {\bf Keywords:} exceptional Lie algebra of type $E_6$, Weyl group, fine grading, model. \noindent {\bf MSC2010 Subject Classification:} 17B25, 17B70. \section{Introduction} If $\mathcal{G}$ denotes a connected algebraic group, defined over an algebraically closed field $\mathbb{F}$, and $T$ is a maximal torus of $\mathcal{G}$, then, because of the rigidity of tori, the quotient $N_{\mathcal{G}}(T)/C_{\mathcal{G}}(T)$ is a finite group, called the Weyl group of $\mathcal{G}$ (since all the maximal tori are conjugate, all their Weyl groups are isomorphic). Here $N_{\mathcal{G}}(T)$ is the normalizer and $C_{\mathcal{G}}(T)$ is the centralizer of $T$ in $\mathcal{G}$. In case $\mathcal{G}$ is a reductive group, this Weyl group is isomorphic to the Weyl group of the root system. Recall that the Weyl group of a root system $\Phi$ is the subgroup of the isometry group of $\Phi$ generated by the reflections through the hyperplanes orthogonal to the roots, and as such is a finite reflection group. Abstractly, Weyl groups are finite Coxeter groups, and are important examples of these. The Weyl group of a semisimple finite-dimensional Lie algebra is the Weyl group of its root system. There is no need to emphasize the importance of Weyl groups in the theory of Lie algebras, and their important role in representation theory. Thus, many generalizations of this concept have arisen, among them the Weyl group of a Lie grading. This notion was introduced by Patera and Zassenhaus in \cite{LGI}, in an attempt to generalize the groups related to the Cartan decomposition of a complex simple Lie algebra. That work can be considered as the beginning of the theory of gradings on Lie algebras. Until then, gradings were usually considered over cyclic groups, with the obvious exception of the Cartan decomposition (root decomposition). They proposed the search for the remaining fine gradings as a way of looking for an alternative approach to structure theory and representation theory. Moreover, these fine gradings immediately provide bases with interesting properties, as proved in \cite[Proposition 10]{f4}. In a particular case, \cite[Lemma~1]{e6} shows that every element in any basis of homogeneous elements of a fine grading by a finite group is directly semisimple. The task of computing the Weyl group of one fine grading on a concrete Lie algebra was carried out in \cite{checosWeyl}, for the Lie algebra $\mathfrak{sl}(p^2,\mathbb{C})$, $p$ being a prime, and for the fine grading given as a tensor product of the Pauli matrices of the same order $p$. In the authors' words, the study of the normalizer of the MAD-group corresponding to a fine grading offers the most important tool for describing symmetries in the system of nonlinear equations connected with a contraction of the Lie algebra. This technique is applied in \cite{normPauli} for obtaining all the graded contractions of the Lie algebra $\mathfrak{sl}_3(\mathbb{C})$ arising from the Pauli grading. Afterwards, the same task has been undertaken in \cite{EK11} and \cite{EK12} for a variety of algebras: matrix algebras, octonion algebras, Albert algebras, and Lie algebras of types $A$, $B$, $C$ and $D$.
As $\mathfrak{f}rak{g}_2$ is the derivation algebra of the octonion algebra, and $\mathfrak{f}rak{f}_4$ is the derivation algebra of the Albert algebra, our goal will be to determine these Weyl groups for $\mathfrak{e}_6$, the following simple Lie algebra in size, and the only one of the series-E for which the fine gradings are completely classified up to equivalence (\cite{e6}). Thus, we wish to investigate the symmetries of the splittings on $\mathfrak{e}_6$. This algebra has the striking quantity of $14$ fine gradings up to equivalence, whose universal groups and types will be brought in Section~\ref{sec_graduaciones4}. It is not our purpose to study the $7$ fine gradings whose universal group is finite, due to the fact that the work \cite{elchino} has almost completed the problem of finding the Weyl groups in such cases. Also, the normalizer of the MAD-group of type $\mathbb{Z}_3^4$ is described in \cite{Viruannals} in detail, while determining properties of the elementary $3$-subgroups of $\mathop{{\rm Aut}}(\mathfrak{e}_6)$. Consequently, we will restrict our attention to the infinite cases different from the Cartan grading. In the case of the Cartan grading, its Weyl group is of course very well known, as well as its applications to other branches of Mathematics, as finite geometry and incidence structures. Namely, the Weyl group of $\mathfrak{e}_6$ is closely related with the automorphism group of the $27$ lines on the smooth cubic surface. This connection was recognized through the relation with del Pezzo surfaces of degree three (\cite{Coxeter}). Also \cite{Manivel} discusses about relations between classical configurations of lines and the (complex) Lie algebra $\mathfrak{e}_6$.\mathop{\rm sl}mallskip The paper is structured as follows. Section~2 gives the basic definitions about group gradings and about several groups related to them, including the notion of Weyl group of a grading as a generalization of the classical concept. In Section~3 we have compiled some algebraic constructions of the Lie algebra $\mathfrak{e}_6$, which will be used in further sections. This list of models begins with the most famous, the unified Tits' construction, but it encloses other ones not explicitly appeared in the literature. Also this section reviews some of the standard facts of the algebraic structures involved in the mentioned constructions: composition algebras, symmetric composition algebras and the exceptional Jordan algebra, the Albert algebra. Section~4 provides a detailed description of the fine gradings on $\mathfrak{e}_6$ with infinite universal groups. Finally, in the fifth section our main results are stated and proved: they consist of the computation of the Weyl groups of each of the above gradings. They are summarized in Theorem~\ref{th_maintheorem}.\mathop{\rm sl}mallskip In light of the obtained results, we can think of the following question: \textbf{Conjecture:} If $Q$ is a maximal abelian diagonalizable group of the algebraic group $\mathcal{G}=\mathop{{\rm Aut}}(L)$, for $L$ a semisimple Lie algebra over an algebraically closed field $\mathbb{F}$ of characteristic zero, then two elements $q_1,q_2\in Q$ are conjugated in $ \mathcal{G}$ if and only if they are conjugated in the normalizer of $Q$. This result is well known for the case in which $Q$ is a maximal torus of $ \mathcal{G}$ (see for instance \cite[p.~ 99]{Carter}), that is, in case the diagonalization produced by $Q$ is the Cartan grading on $L$. 
The proof is not trivial at all and it is related with some deep topics, as the Bruhat decomposition or the BN-pairs. As far as we know, the question for an arbitrary MAD-group $Q$ is at present far from being solved. A direct proof of this conjecture would shorten considerably our work. An affirmative answer for the case $L=\mathfrak{e}_6$ is a consequence of the results of this paper, and the cases $\mathfrak{f}rak{f}_4$ and $\mathfrak{f}rak{g}_2$ can be concluded from \cite{EK11}. This is unlikely to be a coincidence. \mathop{\rm sl}ection{Basics on gradings and their Weyl groups}\label{sec_defsbasicas2} Let $\mathscr{A}$ be a finite-dimensional algebra (not necessarily associative) over a field $\mathbb{F}$, and $G$ an abelian group. \defi\rm A \mathfrak{e}mph{$G$-grading} $\Gamma$ on $\mathscr{A}$ is a vector space decomposition $$ \Gamma: \mathscr{A} = \bigoplus_{g\in G} \mathscr{A}_g $$ such that $$ \mathscr{A}_g \mathscr{A}_h\mathop{\rm sl}ubseteq \mathscr{A}_{g+h} \quad {\textnormal{ for all }} g,h\in G. $$ Fixed such a decomposition, the algebra $\mathscr{A}$ will be called a \mathfrak{e}mph{$G$-graded algebra}, the subspace $\mathscr{A}_g$ will be referred to as \mathfrak{e}mph{homogeneous component of degree $g$} and its nonzero elements will be called the \mathfrak{e}mph{homogeneous elements of degree $g$}. The \mathfrak{e}mph{support} is the set $\mathop{\hbox{\rm Supp}} \Gamma :=\{g\in G\mid \mathscr{A}_g\neq 0\}$. \defi\rm If $\Gamma\colon \mathscr{A}=\oplus_{g\in G} \mathscr{A}_g$ and $\Gamma'\colon \mathscr{A}=\oplus_{h\in H} \mathscr{A}_{h}$ are gradings over two abelian groups $G$ and $H$, $\Gamma$ is said to be a \mathfrak{e}mph{refinement} of $\Gamma'$ (or $\Gamma'$ a \mathfrak{e}mph{coarsening} of $\Gamma$) if for any $g\in G$ there is $h\in H$ such that $\mathscr{A}_g\mathop{\rm sl}ubseteq \mathscr{A}_{h}$. In other words, any homogeneous component of $\Gamma'$ is the direct sum of some homogeneous components of $\Gamma$. A refinement is \mathfrak{e}mph{proper} if some inclusion $\mathscr{A}_g\mathop{\rm sl}ubseteq \mathscr{A}_{h}$ is proper. A grading is said to be \mathfrak{e}mph{fine} if it admits no proper refinement. \defi\rm Let $\Gamma$ be a $G$-grading on $\mathscr{A}$ and $\Gamma'$ an $H$-grading on $\mathscr{B}$, with supports, respectively, $S$ and $T$. Then $\Gamma$ and $\Gamma'$ are said to be \mathfrak{e}mph{equivalent} if there is an algebra isomorphism $\varphi\colon\mathscr{A} \rightarrow \mathscr{B}$ and a bijection $\alpha\colon S \rightarrow T$ such that $\varphi(\mathscr{A}_s)=\mathscr{B}_{\alpha(s)}$ for all $s\in S$. Any such $\varphi$ is called an \mathfrak{e}mph{equivalence} of $\Gamma$ and $\Gamma'$. \mathop{\rm sl}mallskip The study of gradings is based on classifying fine gradings up to equivalence, because any grading is obtained as a coarsening of some fine one. A useful invariant of a grading is given by the dimensions of the components: the \mathfrak{e}mph{type} of a grading $\Gamma$ is the sequence of numbers $(h_1,\ldots, h_r)$ where $h_i$ is the number of homogeneous components of dimension $i$, with $i=1,\ldots, r$ and $h_r\neq 0$. Obviously, $\dim \mathscr{A} = \mathop{\rm sl}um_{i=1}^r ih_i$. It can always be assumed that $\mathop{\hbox{\rm Supp}} \Gamma$ generates $G$ (replacing $G$ with a smaller group, if necessary), but there are, in general, many other groups $H$ such that the vector space decomposition $\Gamma$ can be realized as an $H$-grading. 
According to \cite{LGI}, there is one distinguished group among them: \defi\rm Suppose that $\Gamma$ admits a realization as a $G_0$-grading for some abelian group $G_0$. We will say that $G_0$ is a \mathfrak{e}mph{universal grading group of $\Gamma$} if for any other realization of $\Gamma$ as a $G$-grading, there exists a unique group homomorphism $G_0 \rightarrow G$ that restricts to the identity on $\mathop{\hbox{\rm Supp}} \Gamma$. We denote this group by $U(\Gamma)$, which always exists and besides it is unique up to isomorphism depending only on the equivalence class of $\Gamma$ (see \cite{Koc09}). We will only deal with gradings over their universal grading groups. From now on, consider a finite-dimensional Lie algebra $\mathcal{L}$. The ground field $\mathbb{F}$ will be supposed to be algebraically closed and of characteristic zero throughout this work. With these hypothesis, the automorphism group of $\mathcal{L}$ is an algebraic linear group. Thus, following \cite[\S 3, p.104]{enci}, it is useful to look at gradings focusing on quasitori of the automorphism group $\mathop{{\rm Aut}}( \mathcal{L})$, where quasitori are diagonalizable groups, or equivalently, direct products of tori with abelian finite groups. If $\mathcal{L}=\oplus_{g\in G} \mathcal{L}_g$ is a $G$-grading, the map $\psi\colon \mathfrak{X}(G):=\text{hom} (G,\mathbb{F}^\times) \rightarrow \mathop{{\rm Aut}}(\mathcal{L})$ applying each $\alpha \in \mathfrak{X}(G)$ to the automorphism $\psi_{\alpha}\colon \mathcal{L} \rightarrow \mathcal{L}$ given by $\mathcal{L}_g\ni x \mapsto \psi_{\alpha}(x) :=\alpha (g) x$ is a group homomorphism. Since $G$ is finitely generated, $\psi(\mathfrak{X}(G))$ is an algebraic quasitorus. And conversely, if $Q$ is a quasitorus and $\psi\colon Q \rightarrow \mathop{{\rm Aut}} (\mathcal{L})$ is a homomorphism, $\psi(Q)$ consists of semisimple automorphisms and we have a $\mathfrak{X}(Q)$-grading $\mathcal{L}=\oplus_{g\in\mathfrak{X}(Q)} \mathcal{L}_g$ given by $\mathcal{L}_g=\{x\in \mathcal{L}\mid \psi(q)(x)=g(q)x \, \mathfrak{f}orall q \in Q\}$, with $\mathfrak{X}(Q)$ a finitely generated abelian group. In case $Q$ is a torus $T$ of the automorphism group of $\mathcal{L}$, this means that there exists a group isomorphism $(\mathbb{F}^\times)^s\to T$, $(\alpha_1,\dots,\alpha_s)\mapsto t_{\alpha_1,\dots,\alpha_s}$, thus $\mathbb{Z}^s\cong\mathfrak{X}(T)$ by means of $(n_1,\dots,n_s)\mapsto \psi_{n_1,\dots,n_s}\colon T\to \mathbb{F}^\times$, with $\psi_{n_1,\dots,n_s}(t_{\alpha_1,\dots,\alpha_s})=\alpha_1^{n_1}\dots\alpha_s^{n_s}$, and the $\mathbb{Z}^s$-grading produced on $\mathcal{L}$ is given by \begin{equation}\label{eq_torosconisomorfismoprefijado} \mathcal{L}_{(n_1,\dots,n_s)}=\{x\in\mathcal{L}\mid t_{\alpha_1,\dots,\alpha_s}(x)=\alpha_1^{n_1}\dots\alpha_s^{n_s}x\quad\mathfrak{f}orall \alpha_i\in\mathbb{F}^\times,\mathfrak{f}orall i=1,\dots,s\}. \mathfrak{e}nd{equation} A grading is \mathfrak{e}mph{toral} if there exists a torus $T$ of the automorphism group of $\mathcal{L}$ containing the quasitorus that {produces} the grading. This is equivalent to the grading is a coarsening of the root decomposition of a semisimple Lie algebra $\mathcal{L}$ relative to some Cartan subalgebra. \mathop{\rm sl}mallskip Following \cite{LGI}, we can consider three distinguished subgroups of $\mathop{{\rm Aut}} (\mathcal{L})$ associated to a $G$-grading $\Gamma\colon \mathcal{L}= \oplus_{g\in G} \mathcal{L}_g$. 
\defi\rm The subgroup of $\mathop{{\rm Aut}}(\mathcal{L})$ of all automorphisms of $\mathcal{L}$ that permute the components of $\Gamma$ is called the \mathfrak{e}mph{automorphism group of $\Gamma$} and denoted by $\mathop{{\rm Aut}}( \Gamma)$. Each $\varphi \in \mathop{{\rm Aut}} (\Gamma)$ determines a self-bijection $ \alpha_{\varphi}\colon S\to S$ of the support $S=\mathop{\hbox{\rm Supp}}(\Gamma)$ such that $\varphi(\mathcal{L}_s)=\mathcal{L}_{\alpha_{\varphi}(s)}$ for all $s\in S$. \defi\rm The \mathfrak{e}mph{stabilizer of $\Gamma$}, denoted by $\mathop{\hbox{\rm Stab}} (\Gamma)$, is the set of automorphisms of $\mathcal{L}$ that leave invariant the homogeneous components of the grading, that is, the group of automorphisms of $\mathcal{L}$ as graded algebra. It coincides with the kernel of the homomorphism $\mathop{{\rm Aut}} (\Gamma) \rightarrow \text{Sym} (S)$ given by $\varphi \mapsto \alpha_{\varphi}$. \defi\rm The \mathfrak{e}mph{diagonal group} of the grading, denoted by $\mathop{\hbox{\rm Diag}} (\Gamma)$, is the set of automorphisms of $\mathcal{L}$ such that each $\mathcal{L}_g$ is contained in some eigenspace. This group is a quasitorus isomorphic to $\mathfrak{X}(U(\Gamma))$, and the related $\mathfrak{X}(\mathop{\hbox{\rm Diag}} (\Gamma))$-grading (the eigenspace decomposition of $\mathcal{L}$ relative to $\mathop{\hbox{\rm Diag}} (\Gamma)$) is equivalent to $\Gamma$, under our assumptions on the field. Moreover, the group $\mathop{\hbox{\rm Stab}} (\Gamma)$ is the centralizer of $\mathop{\hbox{\rm Diag}} (\Gamma)$, and $\mathop{{\rm Aut}} (\Gamma)$ is its normalizer. \mathop{\rm sl}mallskip The grading $\Gamma$ is fine if and only if $\mathop{\hbox{\rm Diag}}(\Gamma)$ is a Maximal Abelian Diagonalizable group, usually called a \mathfrak{e}mph{MAD-group}. Moreover the number of conjugacy classes of MAD-groups in $\mathop{{\rm Aut}}(\mathcal{L})$ agrees with the number of equivalence classes of fine gradings on $\mathcal{L}$. \defi\rm The \mathfrak{e}mph{Weyl group} of $\Gamma$, denoted by $\mathscr{W}(\Gamma)$, is defined as the quotient group $\mathop{{\rm Aut}}(\Gamma)/ \mathop{\hbox{\rm Stab}}(\Gamma)$, which coincides with $\text{Norm}\, (\mathop{\hbox{\rm Diag}} (\Gamma))/ \mathscr{C}ent(\mathop{\hbox{\rm Diag}} (\Gamma))$ (normalizer and centralizer). It can be identified with a subgroup of the symmetric group $\text{Sym}(S)$, but also with a subgroup of $\mathop{{\rm Aut}}(U(\Gamma))$, since the bijection $\alpha_{\varphi}\colon S\to S$ can be extended to an automorphism of the universal grading group. \mathop{\rm sl}mallskip The term ``Weyl group'' is suitable, because if $\Gamma$ is the Cartan grading on a semisimple Lie algebra $\mathcal{L}$, which is produced by a maximal torus $T$ of $\mathop{{\rm Aut}}(\mathcal{L})$, then $\mathscr{W}(\Gamma)\cong \text{Norm}\, (T)/ T$ is isomorphic to the so-called \mathfrak{e}mph{extended Weyl group} of $\mathcal{L}$, i.e., the automorphism group of the root system of $\mathcal{L}$. \mathop{\rm sl}ection{Models}\label{sec_modelos3} Here we review some constructions of the exceptional simple Lie algebra of type $E_6$, and add some other models of it which will allow us to have several different descriptions of equivalent gradings. They will be used in the last section to compute the symmetries of such gradings, changing the viewpoint when necessary. 
Recall that we are assuming that $\mathbb{F}$ is an algebraically closed field of characteristic zero throughout the work, in spite of that most of the next constructions can be considered in more general settings. \mathop{\rm sl}ubsection{Involved structures}\label{subsec_involvedstructures} Recall that a \mathfrak{e}mph{composition algebra} is an $\mathbb{F}$-algebra $C$ endowed with a nondegenerate multiplicative quadratic form (the \mathfrak{e}mph{norm}) $n\colon C \rightarrow \mathbb{F}$, that is, $n(xy)=n(x)n(y)$ for all $x,y\in C$. Denote the associated bilinear form (the \mathfrak{e}mph{polar form}) by $n(x,y):=n(x+y)-n(x)-n(y)$. The unital composition algebras are called \mathfrak{e}mph{Hurwitz algebras}. Each Hurwitz algebra satisfies a quadratic equation $$ x^2 - t(x)x + n(x)1=0, $$ where the linear map $t(x):=n(x,1)$ is called the \mathfrak{e}mph{trace} form. Besides it has a \mathfrak{e}mph{standard involution} defined by $\bar{x}:=t(x)1-x$. The subspace of trace zero elements will be denoted by $C_0$. Composition algebras can only have dimensions $1$, $2$, $4$ and $8$. Over our algebraically closed field $\mathbb{F}$, the only composition algebras are, up to isomorphism, the ground field $\mathbb{F}$, the cartesian product of two copies of the ground field $\mathbb{F}Å\oplus \mathbb{F}$ (with the norm of $(a,b)$ given by the product $ab$), the matrix algebra $\mathop{\rm Mat}_{2\times2}(\mathbb{F})$ (where the norm of a matrix is its determinant), and the split \mathfrak{e}mph{octonion algebra}. (See, for instance, \cite[Chapter 2]{libroRussi}, for all this material.) In the following we will denote this {octonion algebra} or \mathfrak{e}mph{Cayley algebra} by $\mathscr{C}$. There is a \mathfrak{e}mph{canonical basis} $\{e_1, e_2, u_1, u_2, u_3, v_1, v_2, v_3\}$ of $\mathscr{C}$ consisting of isotropic elements, where $e_1$ and $e_2$ are idempotents, and such that $n(e_1,e_2)=n(u_i,v_i)=1$ and $n(e_k,u_i)=n(e_k,v_i)=n(u_i,u_j)=n(u_i,v_j)=n(v_i,v_j)=0$ for any $k=1,2$ and $1\leq i\neq j \leq 3$. This basis is associated to a fine $\mathbb{Z}^2$-grading, called the \mathfrak{e}mph{Cartan grading} on $\mathscr{C}$, which is given by \begin{equation}\label{eq_graddeCdeCartan} \begin{array}{rll} &\mathscr{C}_{(0,0)} = \mathbb{F}e_1 \oplus \mathbb{F}e_2,&\\ &\mathscr{C}_{(1,0)} = \mathbb{F}u_1, \qquad& \mathscr{C}_{(-1,0)} = \mathbb{F}v_1,\\ &\mathscr{C}_{(0,1)} = \mathbb{F}u_2, & \mathscr{C}_{(0,-1)} = \mathbb{F}v_2,\\ &\mathscr{C}_{(-1,-1)} = \mathbb{F}u_3, \qquad& \mathscr{C}_{(1,1)} = \mathbb{F}v_3. \mathfrak{e}nd{array} \mathfrak{e}nd{equation} Another grading on the octonion algebra arises quite naturally. If we take any three orthogonal elements $w_1, w_2, w_3 \in \mathscr{C}$ such that $w_i^2=1$ for all $i$ in $\{1,2,3\}$, then $\mathscr{C}$ is spanned by $ \{ 1, w_1, w_2, w_3, w_1w_2, w_2w_3, w_3w_1, w_1w_2w_3 \}, $ and this produces a fine $\mathbb{Z}_2^3$-grading on $\mathscr{C}$ by means of \begin{equation}\label{eq_gradZ2cubodeC} \mathop{\hbox{\rm deg}}(w_1) = (\bar1,\bar0,\bar0),\,\mathop{\hbox{\rm deg}}(w_2) = (\bar0,\bar1,\bar0),\,\mathop{\hbox{\rm deg}}(w_3) = (\bar0,\bar0,\bar1). \mathfrak{e}nd{equation} Moreover the restriction to the $4$-dimensional composition algebra $\langle{1, w_1, w_2, w_1w_2}\rangle$ is a $\mathbb{Z}_2^2$-grading and the restriction to the two-dimensional composition algebra $\langle{1, w_1}\rangle$ is a $\mathbb{Z}_2$-grading (the grading provided by the Cayley-Dickson doubling process applied to the field). 
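As a quick illustration of how the grading \eqref{eq_graddeCdeCartan} constrains the multiplication (a small check, using only the defining property $\mathscr{C}_g\mathscr{C}_h\subseteq\mathscr{C}_{g+h}$ and none of the structure constants), $$ u_1u_2\in\mathscr{C}_{(1,0)+(0,1)}=\mathbb{F}v_3,\qquad e_1u_1\in\mathscr{C}_{(0,0)+(1,0)}=\mathbb{F}u_1,\qquad u_1v_1\in\mathscr{C}_{(1,0)+(-1,0)}=\mathbb{F}e_1\oplus\mathbb{F}e_2, $$ so every product of two elements of the canonical basis lies in a single homogeneous component.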
\mathop{\rm sl}mallskip The algebra of derivations of $\mathscr{C}$ is a simple Lie algebra of type $G_2$, which will be denoted by $\mathfrak{g}_2$. For any $a,b \in \mathscr{C}$, the endomorphism $$ d_{a,b}=[l_a, l_b]+[l_a, r_b]+[r_a, r_b] $$ is a derivation of $\mathscr{C}$, where $l_a(b)=ab=r_b(a)$ for any $a,b\in \mathscr{C}$. The linear span $d_{\mathscr{C},\mathscr{C}}=d_{\mathscr{C}_0,\mathscr{C}_0}=\{\mathop{\rm sl}um_id_{x_i,y_i}\mid x_i,y_i\in\mathscr{C}_0,i\in \mathbb{Z}\}$ of these derivations coincides with the whole algebra $\mathop{\hbox{\rm Der}}(\mathscr{C})$ (\cite{Schafer}). There is an algebraic group isomorphism $\mathop{{\rm Aut}}(\mathscr{C}) \rightarrow \mathop{{\rm Aut}}(\mathfrak{g}_2)$ mapping each $\varphi\in \mathop{{\rm Aut}}(\mathscr{C})$ into the automorphism $\mathscr{A}d(\varphi)$ defined by $\mathscr{A}d(\varphi)(d)= \varphi d \varphi^{-1}$ for any $d\in \mathop{\hbox{\rm Der}}(\mathscr{C})$. Therefore, $G$-gradings on $\mathscr{C}$ can be extended to $G$-gradings on $\mathop{\hbox{\rm Der}}(\mathscr{C})$. \mathop{\rm sl}mallskip Within our work we will often use a special class of composition algebras: the symmetric composition algebras (see \cite{ElduqueCompo1}). \defi\rm A composition algebra $(S,\ast,n)$ (where $\ast$ denotes the product in $S$ and $n$ the norm) is said to be $\mathfrak{e}mph{symmetric}$ if the polar form of its norm is associative, that is: $$ n(x\ast y,z)= n(x, y\ast z) $$ for any $x,y,z\in S$. \mathop{\rm sl}mallskip If $(C,\cdot,n)$ is a Hurwitz algebra, then $pC=(C,*,n)$ with the new product $x*y:=\bar x\cdot\bar y$ is called a \mathfrak{e}mph{para-Hurwitz algebra}, which is an important example of symmetric composition algebra. Note that the unit of $(C,\cdot,n)$ becomes a paraunit in $pC$, that is, an element $e$ such that $e * x = x * e = n(e, x)e - x$. Within our setting, the only symmetric composition algebras are one para-Hurwitz algebra of each dimension $1$, $2$, $4$ and $8$, and the pseudooctonion algebra, also of dimension $8$. This is defined as $P_8(\mathbb{F})=(\mathop{\rm Mat}_{3\times3}(\mathbb{F})_0,*,n)$, for the new product $$ x * y = \omega xy - \omega^2 yx -\mathfrak{f}rac{\omega-\omega^2}3\mathop{\hbox{\rm tr}}(xy)I_3, $$ with $\omega\in \mathbb{F}$ a primitive cubic root of the unit and the norm given by $n(x)=\mathfrak{f}rac16\mathop{\hbox{\rm tr}}(x^2)$. Throughout the text, $I_n$ will denote the identity matrix of size $n$. The gradings on symmetric composition algebras are described and classified in \cite[Theorem~4.5]{Eldgradsencomposimetricas}. Following that work, every group grading on a Hurwitz algebra $(C,\cdot,n)$ is a grading on $pC$, and the gradings coincide when $C$ has dimension at least $4$. There is a remarkable $\mathbb{Z}_3$-grading in the case of dimension $2$ which does not come from a grading on the corresponding Hurwitz algebra, namely, $$ pC_{\bar{0}}=0,\qquad pC_{\bar{1}}=\mathbb{F} e_1\qquad pC_{\bar{2}}=\mathbb{F} e_2, $$ where $e_1$ and $e_2$ are the orthogonal idempotents $(1,0)$ and $(0,1)$ (idempotents for the product $\cdot$), which verify $e_1*e_1=e_2$ and $e_2*e_2=e_1$. 
Also, a natural $\mathbb{Z}_3^2$-grading appears on the pseudooctonion algebra, determined by $$ P_8(\mathbb{F})_{(\bar1,\bar0)}=\mathbb{F} \left( \begin{array}{ccc} 1 & 0 & 0\\ 0 & \omega & 0\\ 0 & 0 & \omega^2 \mathfrak{e}nd{array} \right) ,\qquad P_8(\mathbb{F})_{(\bar0,\bar1)}=\mathbb{F} \left( \begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \mathfrak{e}nd{array} \right) , $$ which is closely related to the $\mathbb{Z}_3$-gradings occurring on exceptional Lie algebras, as shown in \cite{Eldgradsencomposimetricas}. Let $C$ be a Hurwitz algebra and consider the Jordan algebra of hermitian matrices of size $3$, $$ J=\mathcal{H}_3(C):= \left\{ \left( \begin{array}{ccc} \alpha_1 & \bar{a}_3 & a_2\\ a_3 & \alpha_2 & \bar{a}_1\\ \bar{a}_2 & a_1 & \alpha_3 \mathfrak{e}nd{array} \right) \mid \alpha_1, \alpha_2, \alpha_3 \in \mathbb{F}, \;a_1, a_2, a_3 \in C \right\} ,$$ with commutative multiplication given by $xy:=\mathfrak{f}rac{1}{2}(x\cdot y + y \cdot x)$ (where $x \cdot y$ is the usual matrix product) and \mathfrak{e}mph{normalized trace form} given by $t_J=\mathfrak{f}rac{1}{3}(\alpha_1+\alpha_2+\alpha_3)$. For such an algebra there is a decomposition $J=\mathbb{F} 1\oplus J_0$ for $J_0=\{x\in J\mid t_{J}(x)=0\}$. For any $x,y\in J_0$, the product $$ x \ast y= xy - t_{J}(xy)1 $$ gives a commutative multiplication on $J_0$. A remarkable Jordan algebra of this type is obtained starting from the Cayley algebra $\mathscr{C}$, that is, $\mathbb{A}:=\mathcal{H}_3(\mathscr{C})$, which is called the \mathfrak{e}mph{Albert algebra}, and is the only exceptional simple Jordan algebra over $\mathbb{F}$ (see, for instance, \cite{Schafer}). The algebra of derivations of $\mathbb{A}$ is a simple Lie algebra of type $F_4$, which will be denoted by $\mathfrak{f}_4$. As for $\mathop{\hbox{\rm Der}}(\mathscr{C})$, there is also an algebraic group isomorphism $\mathop{{\rm Aut}}(\mathbb{A}) \rightarrow \mathop{{\rm Aut}}(\mathop{\hbox{\rm Der}}(\mathbb{A}))$ mapping each $\varphi\in \mathop{{\rm Aut}}(\mathbb{A})$ to the automorphism $d\mapsto \varphi d \varphi^{-1}$, for any $d\in\mathop{\hbox{\rm Der}}(\mathbb{A})$. Thus, gradings on $\mathbb{A}$ are extended to gradings on $\mathfrak{f}_4$. Note that the linear map $D_{x,y}\colon \mathbb{A} \rightarrow \mathbb{A}$ defined by $$ D_{x,y}(z)=x(yz)-y(xz) $$ for any $x,y,z\in\mathbb{A}$, is a derivation and that the linear span $D_{\mathbb{A}, \mathbb{A}}=\{\mathop{\rm sl}um_i D_{x_i,y_i}\mid x_i,y_i\in\mathbb{A}\}$ of these derivations fills the entire algebra $\mathop{\hbox{\rm Der}}(\mathbb{A})=\mathfrak{f}_4$. \mathop{\rm sl}ubsection{Tits' model}\label{subsec_Titsmodel} In 1966, Tits provided a beautiful unified construction of all the exceptional simple Lie algebras (\cite{Tits}). (Recall our assumptions on the field, although the construction is valid over fields of characteristic $\neq 2,3$.) 
When in this construction we use a composition algebra and a Jordan algebra of hermitian $3\times 3$ matrices over a second composition algebra, Freudenthal's Magic Square is obtained:\mathop{\rm sl}mallskip \begin{center} {\mathop{\rm sl}mall \begin{tabular}{c|cccc} $\mathcal{T}(C,J)$ & $\mathcal{H}_3(\mathbb{F})$& $\mathcal{H}_3(\mathbb{F}\oplus \mathbb{F})$& $\mathcal{H}_3(\mathop{\rm Mat}_2(\mathbb{F}))$& $\mathcal{H}_3(\mathscr{C})$\\ \hline $\mathbb{F}$& $A_1$&$A_2$&$C_3$&${ F_4}$\\ $ {\mathbb{F}\oplus \mathbb{F}}$& $A_2$&$A_2\oplus A_2$&$A_5$&${ E_6}$\\ $ {\mathop{\rm Mat}_2(\mathbb{F})}$& $C_3$&$A_5$&$D_6$&${ E_7}$\\ $\mathscr{C}$& ${ F_4}$&${ E_6}$&${ E_7}$&${ E_8}$\\ \mathfrak{e}nd{tabular}} \mathfrak{e}nd{center} \noindent This construction is reviewed here. For $C,C'$ two Hurwitz algebras and $J=\mathcal{H}_3(C')$, consider $$ \mathcal{T}(C,J)=\mathop{\hbox{\rm Der}}\,(C)\oplus (C_0 \otimes J_0) \oplus \mathop{\hbox{\rm Der}}\,(J) $$ with the anticommutative multiplication given by $$ \begin{array}{l} \bullet\ \mathop{\hbox{\rm Der}}(C)\oplus \mathop{\hbox{\rm Der}}(J) \text{ is a Lie subalgebra of } \mathcal{T}(C,J), \\ \bullet\ {[}d, a\otimes x]=d(a) \otimes x, \\ \bullet\ [D, a\otimes x]=a \otimes D(x), \\ \bullet\ {[}a\otimes x, b\otimes y]= t_{J}(xy) d_{a,b}+[a,b]\otimes (x\ast y)+ 2t_C(ab)D_{x,y}, \mathfrak{e}nd{array} $$ for all $d\in \mathop{\hbox{\rm Der}}(C)$, $D\in \mathop{\hbox{\rm Der}}(J)$, $a,b\in C_0$ and $x,y\in J_0$. Then, $\mathcal{T}(C,J)$ is a Lie algebra. In particular, note that $\mathcal{T}(\mathscr{C},\mathcal{H}_3(\mathbb{F}\oplus \mathbb{F}))$ and $\mathcal{T}(\mathbb{F}\oplus \mathbb{F},\mathcal{H}_3(\mathscr{C}))$ (with $\mathscr{C}$ the Cayley algebra) are both Lie algebras of type $E_6$, and hence, isomorphic. We call $\mathfrak{e}_6:=\mathcal{T}(\mathbb{F}\oplus \mathbb{F},\mathbb{A})$, identified with $\mathop{\hbox{\rm Der}}(\mathbb{A}) \oplus \mathbb{A}_0$, which is naturally $\mathbb{Z}_2$-graded with even part $\mathfrak{f}_4$. Gradings on the two ingredients involved in Tits' construction (composition and Jordan algebras) can be used to get some interesting gradings on the resulting Lie algebras. Indeed, any automorphism $\varphi\in \mathop{{\rm Aut}} (C)$ can be extended to an automorphism $\widetilde{\varphi}$ of $\mathcal{T}(C,J)$ by defining $$ \begin{array}{l} \widetilde{\varphi}(d) := \varphi d \varphi^{-1}, \\ \widetilde{\varphi}(a\otimes x) := \varphi(a) \otimes x,\\ \widetilde{\varphi}(D) := D, \mathfrak{e}nd{array} $$ for all $d\in \mathop{\hbox{\rm Der}}(C)$, $D\in \mathop{\hbox{\rm Der}}\, (J)$, $a\in C_0$ and $x\in J_0$. Similarly, any $\psi \in \mathop{{\rm Aut}}(J)$ can be extended to an automorphism $\widetilde{\psi}$ of $\mathcal{T}(C,J)$. \mathop{\rm sl}ubsection{Elduque's model}\label{subsec_Elduquemodelo} Even though Tits' construction is not symmetric in the two composition algebras that are being used, the resultant Magic Square is indeed symmetric. Thus, some more symmetric constructions of the exceptional Lie algebras were developed by several authors. Among them, one remarkable is that one in \cite{ElduqueCompo1}, based on symmetric composition algebras. Let $(S,\ast, q)$ be a symmetric composition algebra and let $$ \mathfrak{o}(S,q)=\{d\in \mathop{\rm End}_\mathbb{F}(S)\mid q(d(x),y)+q(x,d(y))=0\,\mathfrak{f}orall x,y\in S\} $$ be the corresponding orthogonal Lie algebra. 
On the subalgebra of $\mathfrak{o}(S,q)^3$ defined by $$ \mathfrak{tri}(S,\ast,q)=\{(d_0,d_1,d_2)\in \mathfrak{o}(S,q)^3 \mid d_0(x\ast y)=d_1(x)\ast y + x\ast d_2(y)\;\mathfrak{f}orall x,y\in S\}, $$ we have an order three automorphism $\vartheta$ given by $$ \vartheta\colon \mathfrak{tri}(S,\ast,q)\longrightarrow \mathfrak{tri}(S,\ast,q), \quad (d_0, d_1, d_2)\longmapsto (d_2, d_0, d_1), $$ which is called the \mathfrak{e}mph{triality automorphism}. Take the element of $\mathfrak{tri}(S,\ast,q)$ (denoted by $\mathfrak{tri}(S)$ when it is no ambiguity) given by $$ t_{x,y}:=\left(\mathop{\rm sl}igma_{x,y},\mathfrak{f}rac{1}{2}q(x,y)id-r_x l_y,\mathfrak{f}rac{1}{2}q(x,y)id-l_x r_y\right), $$ where $\mathop{\rm sl}igma_{x,y}(z)=q(x,z)y-q(y,z)x$, $r_x(z)=z\ast x$, and $l_x(z)=x\ast z$ for any $x,y,z\in S$. Given two symmetric composition algebras $(S,\ast, q),(S',\mathop{\rm sl}tar, q')$ we can consider the following symmetric construction: $$ \mathfrak{g}(S,S') := \mathfrak{tri}(S,\ast, q)\oplus \mathfrak{tri}(S',\mathop{\rm sl}tar, q') \oplus (\bigoplus_{i=0}^2 \iota_i(S\otimes S')) $$ where $\iota_i(S\otimes S')$ is just a copy of $S\otimes S'$ ($i=0,1,2$), and the skewsymmetric product is given by $$ \begin{array}{l} \bullet\ \mathfrak{tri}(S,\ast, q)\oplus \mathfrak{tri}(S',\mathop{\rm sl}tar, q') \text{ is a Lie subalgebra of } \mathfrak{g}(S,S'), \\ \bullet\ [(d_0,d_1,d_2), \iota_i(x\otimes x')]=\iota_i(d_i(x)\otimes x'), \\ \bullet\ [(d'_0,d'_1,d'_2), \iota_i(x\otimes x')]=\iota_i(x\otimes d'_i(x')), \\ \bullet\ [\iota_i(x\otimes x'), \iota_{i+1}(y\otimes y')]=\iota_{i+2}((x*y)\otimes(x'\mathop{\rm sl}tar y')), \\ \bullet\ [\iota_i(x\otimes x'),\iota_i(y\otimes y')] = q'(x',y')\vartheta^i(t_{x,y}) + q(x,y)\vartheta'^i(t'_{x',y'}), \mathfrak{e}nd{array} $$ for any $(d_0,d_1, d_2)\in \mathfrak{tri}(S)$, $(d'_0,d'_1, d'_2)\in \mathfrak{tri}(S')$, $x,y\in S$ and $x',y'\in S'$, being $\vartheta$ and $\vartheta'$ the corresponding triality automorphisms. The so defined algebra $\mathfrak{g}(S,S')$ is a Lie algebra (\cite[Theorem~3.1]{ElduqueCompo1}), and the Freudenthal's Magic Square is obtained again. In particular, $\mathfrak{g}(S,S')$ is of type $E_6$ whenever one of the symmetric composition algebras involved has dimension $8$ and the other one has dimension $2$. \mathop{\rm sl}mallskip This construction is equipped with the following $\mathbb{Z}_2^2$-grading: $$ \begin{array}{l} \mathfrak{g}(S,S')_{(\bar{0},\bar{0})}=\mathfrak{tri}(S,\ast, q)\oplus \mathfrak{tri}(S',\mathop{\rm sl}tar, q'), \\ \mathfrak{g}(S,S')_{(\bar{1},\bar{0})}=\iota_0(S\otimes S'), \\ \mathfrak{g}(S,S')_{(\bar{0},\bar{1})}=\iota_1(S\otimes S'), \\ \mathfrak{g}(S,S')_{(\bar{1},\bar{1})}=\iota_2(S\otimes S'), \mathfrak{e}nd{array} $$ and also with the $\mathbb{Z}_3$-grading produced by the next extension of the triality automorphism, the map $\vartheta\in\mathop{{\rm Aut}}(\mathfrak{g}(S,S'))$ given by $ \vartheta(d_0,d_1,d_2):=(d_2,d_0,d_1)$, $\vartheta(d'_0,d'_1,d'_2):=(d'_2,d'_0,d'_1)$ and $\vartheta(\iota_{i}(x\otimes x')):=\iota_{i+2}(x\otimes x'), $ for any $x\in S$, $x'\in S'$, $(d_0, d_1, d_2)\in \mathfrak{tri} (S)$, $(d'_0, d'_1, d'_2)\in \mathfrak{tri} (S')$. 
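As a quick sanity check of the statement above that $\mathfrak{g}(S,S')$ is of type $E_6$ when $\dim S=8$ and $\dim S'=2$ (using the standard values $\dim\mathfrak{tri}(S,\ast,q)=28$ in the $8$-dimensional case and $\dim\mathfrak{tri}(S',\star,q')=2$ in the $2$-dimensional one), the dimensions add up to $\dim E_6=78$:
$$
\dim\mathfrak{g}(S,S')=28+2+3\,(8\cdot 2)=78.
$$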
Moreover, if $f,f'$ are automorphisms of $S$ and $S'$ respectively, we can extend them to an automorphism $(f,f')$ of $\mathfrak{g}(S,S')$ given by: $$ \begin{array}{ll} (f,f')(d_0,d_1,d_2) &:= (f d_0 f^{-1}, f d_1 f^{-1}, f d_2 f^{-1}), \\ (f,f')(d'_0,d'_1,d'_2) &:= (f' d'_0 f'^{-1}, f' d'_1 f'^{-1}, f' d'_2 f'^{-1}),\\ (f,f')(\iota_i(x\otimes x')) &:= \iota_i (f(x)\otimes f'(x')). \mathfrak{e}nd{array} $$ This automorphism commutes with $\vartheta$ and with the automorphisms producing the $\mathbb{Z}_2^2$-grading. \mathop{\rm sl}ubsection{The 5-grading model}\label{subsec_modelo5grad} We are interested in finding a construction of a Lie algebra of type $E_6$ in which an order two outer automorphism fixing a subalgebra of type $C_4$ is easy to describe, as well as some fine gradings involving it. The knowledge of such gradings (extracted from \cite{e6}) make us think that a good $\mathbb{Z}$-grading as a starting point would be the related one to the following marked Dynkin diagram: \mathop{\rm sl}etlength{\unitlength}{0.07in} \begin{center}{\vbox{\begin{picture}(25,5)(4,-0.5) \put(5,0){\circle{1}} \put(9,0){\circle{1}} \put(13,0){\circle{1}} \put(17,0){\circle{1}} \put(21,0){\circle{1}} \put(13,3){\circle*{1}} \put(5.5,0){\line(1,0){3}} \put(9.5,0){\line(1,0){3}} \put(13.5,0){\line(1,0){3}} \put(17.5,0){\line(1,0){3}} \put(13,0.5){\line(0,1){2}} \put(4.7,-2){$\mathop{\rm sl}criptstyle \alpha_1$} \put(8.7,-2){$\mathop{\rm sl}criptstyle \alpha_3$} \put(12.7,-2){$\mathop{\rm sl}criptstyle \alpha_4$} \put(16.7,-2){$\mathop{\rm sl}criptstyle \alpha_5$} \put(20.7,-2){$\mathop{\rm sl}criptstyle \alpha_6$} \put(13.9,2.6){$\mathop{\rm sl}criptstyle \alpha_2$} \mathfrak{e}nd{picture}}}\mathfrak{e}nd{center} Recall that the gradings on simple Lie algebras over cyclic groups, in particular the $ \mathbb{Z}$-gradings, were classified by Kac (\cite[Chapter VIII]{Kac}). If $\Phi$ denotes the root system of $L $, a Lie algebra of type $E_6$, relative to a Cartan subalgebra $\mathfrak{h}$, $L_{\alpha}$ are the corresponding root spaces, $\Delta=\{\alpha_1,\dots,\alpha_6\}$ is a basis of $\Phi$ and $\mathop{\rm sl}um_{i=1}^6n_i\alpha_i$ is the maximal root, then the above diagram determines a $ \mathbb{Z}$-grading on $L$ as follows: $$ L_p=\bigoplus_{\alpha\in\Phi\cup\{0\}}\{L_\alpha\mid \alpha=\mathop{\rm sl}um_{j=1}^6k_j\alpha_j,\, k_2=p\}, $$ for each $p\in\mathbb{Z}$, so that $ L=\bigoplus_{p=-2}^2 L_p$. The subalgebra $ L_0$ is reductive with one-dimensional center $Z=\{h\in \mathfrak{h}\mid \alpha_j(h)=0\quad\mathfrak{f}orall j\ne 2\}$ and semisimple part with root system $\{ \alpha\in\Phi\mid \alpha=\mathop{\rm sl}um_{j\ne 2}k_j\alpha_j\}$ and basis $\{\alpha_j\mid j\ne 2\} $, that is, an algebra of type $A_5$. Moreover, \cite[p.\,108]{enci} implies that, in this $\mathbb{Z}$-grading, all the homogeneous components (different from $L_0$) are $L_0$-irreducible modules. By counting the roots, one immediately observes that $\dim L_{\pm2}=1$ and $\dim L_{\pm1}=20$. Therefore, if we take $V$ a $6$-dimensional vector space, we can identify $L_0$ with $ \mathfrak{gl}(V)$ (direct sum of a Lie algebra of type $A_5$ with a one-dimensional center) and we can also identify the components $L_2$, $L_{1}$, $L_{-1}$ and $L_{-2}$ with $\bigwedge^6V$, $\bigwedge^3V$, $\bigwedge^3V^*$ and $\bigwedge^6V^*$ respectively. 
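In particular, the dimension count is consistent with $\dim L=78$:
$$
\dim L=\dim L_{-2}+\dim L_{-1}+\dim L_{0}+\dim L_{1}+\dim L_{2}=1+\binom{6}{3}+36+\binom{6}{3}+1=78.
$$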
Well known results of representation theory imply, for these $\mathfrak{sl}_6(\mathbb{F})$-modules, the key facts $\dim\hom_{\mathfrak{sl}(V)}(L_n\otimes L_m,L_{n+m})\le1$ if $n+m\ne0$, $\dim\hom_{\mathfrak{sl}(V)}(L_n\otimes L_{-n},\mathfrak{sl}(V))=1$ and $\dim\hom_{\mathfrak{sl}(V)}(L_n\otimes L_{-n},\mathbb{F})=1$, for $n\in\{1,2\}$. As the restriction of the bracket $[\ ,\ ]\mid_{L_n\times L_m}\colon L_n\times L_m\to L_{n+m}$ is an $L_0$-invariant map for each $n,m\in\{\pm2,\pm1,0\}$, the above allows to recover the bracket once we find the $L_0$-invariant maps from $L_n\times L_m$ to $L_{n+m}$. These maps are standard in multilinear algebra (the reader can consult \cite[Appendix~B]{FulHar}). First, for each natural $n$, we have a bilinear product $\bigwedge^nV\times \bigwedge^nV^*\rightarrow \mathbb{F}$, $(u, f) \mapsto \langle u, f\rangle := \det(f_i(u_j))$, if $u=u_1\wedge\dots\wedge u_n\in\bigwedge^nV$ and $f=f_1\wedge\dots\wedge f_n\in\bigwedge^nV^*$. In particular we have $L_0$-invariant maps $L_{2}\times L_{-2}\to L_{0}$ and $L_{1}\times L_{-1}\to L_{0}$. Second, we can consider the contractions $$ \begin{array}{lrcl} L_1\times L_{-2}\rightarrow L_{-1},&(u,f)&\mapsto &u\lrcorner f,\\ L_2\times L_{-1}\rightarrow L_1,&(v,g) &\mapsto &v\llcorner g, \mathfrak{e}nd{array} $$ determined by $\langle w,u\lrcorner f\rangle=\langle w\wedge u,f\rangle$ and $\langle v\llcorner g,h\rangle=\langle v,g\wedge h\rangle$ for any $w\in L_1$ and $h\in L_{-1}$. Third, the wedge product gives skewsymmetric maps $L_1\times L_1\to L_2$ and $L_{-1}\times L_{-1}\to L_{-2}$. Finally, for each $n\in\{1,2\}$, $u\in L_n$ and $f\in L_{-n}$, define $\psi_{f,u}$ as the only element in $\mathfrak{sl}(V)$ verifying $\mathop{\hbox{\rm tr}}(g \psi_{f,u})=\langle g\cdot u,f\rangle$ for any $g\in\mathfrak{sl}(V)$ ($\cdot$ denotes the natural action of $\mathfrak{sl}(V)$ on $\bigwedge^{3n}V$). This provides too an $L_0$-invariant map $L_{-n}\times L_n\to L_{0}$. The above guarantees the existence of scalars $\alpha_{ij},\beta_i\in\mathbb{F}^\times$ such that the product in $L$ is given by \begin{equation}\label{eq_elproductoenlaZgrad} \begin{array}{ll} \bullet \ [u,v] = \alpha_{1,1 }\,u\wedge v , &\bullet \ [u',f'] = \alpha_{2,-2 }\,\psi_{f',u'} +\beta_2\,\langle u',f'\rangle , \\ \bullet \ [u,f'] =\alpha_{1,-2 }\,u\lrcorner f' , &\bullet \ [u',f]=\alpha_{-1,2 }\,u'\llcorner f , \\ \bullet \ [u,f] = \alpha_{1,-1 }\,\psi_{f,u} +\beta_1\,\langle u,f\rangle , &\bullet \ [f,g] = \alpha_{-1,-1 }\,f\wedge g , \mathfrak{e}nd{array} \mathfrak{e}nd{equation} for any $u,v\in L_1$, $f,g\in L_{-1}$, $u'\in L_2$, $f'\in L_{-2}$, and the actions of $L_0$ on $L_n$ and $L_{-n}$ are the actions of the Lie algebra $\mathfrak{gl}(V)$ on $\bigwedge^{3n}V$ and $\bigwedge^{3n}V^*$ respectively. These scalars can be determined by imposing the Jacobi identity, as in \cite{miomodelos}, but it is not necessary for our purposes. Denote by $T_1$ the torus of $\mathop{{\rm Aut}}(L)$ producing this $\mathbb{Z}$-grading, that is, the group of automorphisms of the form $f_\alpha$ for $\alpha\in\mathbb{F}^\times$ such that $f_\alpha\vert_{L_n}=\alpha^n\mathop{\rm id}$ for all $n\in\mathbb{Z}$. \mathop{\rm sl}mallskip Fix $\textbf{i}\in \mathbb{F}$ with $\textbf{i}^2=-1$. Take a basis $\{e_i\mid i=1,\dots,6\}$ of $V$ and let $\{e_i^*\mid i=1,\dots,6\}\mathop{\rm sl}ubset V^*$ be the dual basis. 
Define $\theta\colon L\to L$ by:
$$
\begin{array}{l}
\theta(e_{\sigma(1)}\wedge e_{\sigma(2)}\wedge e_{\sigma(3)}) := \text{sg}(\sigma)\, \textbf{i}\, e_{\sigma(4)}\wedge e_{\sigma(5)}\wedge e_{\sigma(6)}, \\
\theta(e^*_{\sigma(1)}\wedge e^*_{\sigma(2)}\wedge e^*_{\sigma(3)}) := -\text{sg}(\sigma)\, \textbf{i}\, e^*_{\sigma(4)}\wedge e^*_{\sigma(5)}\wedge e^*_{\sigma(6)}, \\
\theta(\alpha I_6+A):=\alpha I_6-A^t, \\
\theta|_{L_2\oplus L_{-2}} :=-\mathop{\rm id},
\end{array}
$$
for any $\alpha\in \mathbb{F}$, $A\in\mathfrak{sl}(V)$ and $\sigma\in S_6$. Making use of Equation~(\ref{eq_elproductoenlaZgrad}), some straightforward computations prove that $\theta\in\text{Aut}(L)$ and, moreover, that $\dim(\operatorname{fix} \theta)=36$. Hence $\theta$ is an outer order two automorphism fixing a subalgebra of type $C_4$ and commuting with the torus $T_1$.
For each $\varphi\in\text{GL}(V)$, let $\widetilde{\varphi}\colon L\to L$ be
$$
\begin{array}{ll}
\widetilde{\varphi}(u) := \varphi(u_1)\wedge\dotsc\wedge \varphi(u_r), \\
\widetilde{\varphi}(f) :=(\varphi\cdot f_1) \wedge\dotsc\wedge (\varphi\cdot f_r), \\
\widetilde{\varphi}(F) :=\varphi F\varphi^{-1},
\end{array}
$$
for any $u=u_1\wedge\dotsc\wedge u_r\in L_1 \cup L_2$, $f= f_1 \wedge\dotsc\wedge f_r \in L_{-1}\cup L_{-2}$ and $F\in L_0$, with $\varphi\cdot f_i=f_i\varphi^{-1}\in V^*$. Actually $\psi_{\widetilde{\varphi}(f),\widetilde{\varphi}(u)}=\widetilde{\varphi}(\psi_{f,u})$, which implies that $\widetilde{\varphi}$ is an automorphism of $L$. By abuse of notation, we will consider $\text{GL}_6(\mathbb{F})\leq\text{Aut}(\mathfrak{e}_6)$. Notice that these automorphisms also commute with the torus $T_1$ (the torus itself is contained in the image of $\mathop{\rm GL}(V)$, since $f_\alpha=\widetilde{\beta I_6}$ for any $\beta\in\mathbb{F}^\times$ with $\beta^3=\alpha$). Furthermore, the centralizer of $T_1$ in $\mathop{{\rm Aut}}(L)$ equals $\langle\widetilde{\varphi}\mid \varphi\in\mathop{\rm GL}(V)\rangle\rtimes \langle\theta\rangle$. Indeed, if $\gamma\in\mathop{{\rm Aut}}(L)$ commutes with $T_1$, the restriction $\gamma\vert_{\mathfrak{sl}(V)}$ belongs to $\mathop{{\rm Aut}}(\mathfrak{sl}(V))$, which is well known to be isomorphic to $\text{PSL}(V)\rtimes\langle\theta\vert_{\mathfrak{sl}(V)}\rangle$.
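For the reader's convenience, here is a sketch of the count $\dim(\operatorname{fix}\theta)=36$ directly from the definition of $\theta$. On $L_0\cong\mathfrak{gl}(V)$, an element $\alpha I_6+A$ is fixed if and only if $A^t=-A$, giving $1+15=16$ dimensions; on $L_{\pm2}$ nothing is fixed; and on $L_1$ the map $\theta$ preserves each plane spanned by a basis element $e_{i_1}\wedge e_{i_2}\wedge e_{i_3}$ and the wedge of the three complementary basis vectors, acting there with order two and trace zero, hence with exactly one fixed dimension per plane, that is, $\binom{6}{3}/2=10$ in total, and likewise $10$ on $L_{-1}$. Altogether,
$$
\dim(\operatorname{fix}\theta)=16+10+10=36=\dim\mathfrak{sp}_8(\mathbb{F}).
$$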
\mathop{\rm sl}ubsection{Adams' model}\label{subsec_modeloAdams} \label{Adams} Following the lines of the decomposition of $\mathfrak{g}_2=\mathfrak{sl}(W)\oplus W \oplus W^*$ (\cite[\S22.2 and \S22.4]{FulHar}), for $W$ a three-dimensional vector space, Adams gave in \cite[Chapter~13]{Adams} a model for $E_6$ using three copies of a $3$-dimensional vector space $W_1=W_2=W_3=W$ and their dual spaces $W_i^*$ as follows: take $\mathcal{L} = \mathcal{L}_{\bar0}\oplus \mathcal{L}_{\bar1}\oplus \mathcal{L}_{\bar2}$, for $$ \mathcal{L}_{\bar0} = \mathfrak{sl}(W_1)\oplus \mathfrak{sl}(W_2) \oplus \mathfrak{sl}(W_3), \quad \mathcal{L}_{\bar1} = W_1 \otimes W_2 \otimes W_3, \quad \mathcal{L}_{\bar2} = W_1^* \otimes W_2^* \otimes W_3^*, $$ where $\mathop{\rm sl}um \mathfrak{sl}(W_i)$ is a Lie subalgebra of type $3A_2$, its actions on $ W_1\otimes W_2\otimes W_3$ and $W_1^*\otimes W_2^*\otimes W_3^*$ are the natural ones (the $i$th simple ideal acts on the $i$th slot), and $$ \begin{array}{ll} \bullet \ [\otimes f_i,\otimes u_i]&=\mathop{\rm sl}um_{\buildrel{k=1,2,3}\over{i\ne j\ne k}} f_i(u_i)f_j(u_j)\big(f_k(-)u_k-\mathfrak{f}rac13f(u_k)\mathop{\rm id}_{W_k}\big),\\ \bullet \ [ \otimes u_i,\otimes v_i]&=\otimes (u_i\wedge v_i),\\ \bullet \ [ \otimes f_i,\otimes g_i]&=\otimes (f_i\wedge g_i),\nonumber \mathfrak{e}nd{array} $$ for any $u_i,v_i\in W_i$, $f_i,g_i\in W^*_i$. We have fixed nonzero trilinear alternating maps $\det_i\colon W_i\times W_i\times W_i\to \mathbb{F}$, so that $u_i\wedge v_i$ denotes the element in $W_i^*$ given by $\det_i(u_i,v_i,-)$ and $f_i\wedge g_i$ denotes the element in $W_i^*$ given by $\det_i^*(f_i,g_i,-)$, being $\det_i^*$ the dual map of $\det_i$. It turns out that $\mathcal{L}$ is a Lie algebra of type $E_6$. Denote by $H_1$ the automorphism producing the $\mathbb{Z}_3$-grading, that is, $H_1|_{\mathcal{L}_{\bar i}}=\omega^i \mathop{\rm id}$, with $\omega$ a primitive cubic root of $1$ in $\mathbb{F}$. As any automorphism is determined by its action on $\mathcal{L}_{\bar1}$, we can take $H_2$ the automorphism such that $H_2(u\otimes v\otimes w)=v\otimes w\otimes u$ for any $u,v,w\in W$. Furthermore, each $f\in \mathop{\rm GL}(W)$ extends to $\Psi(f) \in \mathop{{\rm Aut}}(\mathcal{L})$, determined by $\Psi(f)(u\otimes v\otimes w)=f(u)\otimes f(v)\otimes f(w)$. Note that $ \Psi\colon\mathop{\rm GL}(W) \rightarrow \mathop{{\rm Aut}}(\mathcal{L})$ is a homomorphism of algebraic groups with kernel $\{\omega^nI_3\mid n=0,1,2\}$. In particular, if we identify any endomorphism of $W$ with its matrix relative to a fixed basis of $W$, and take $T_{\alpha,\beta}=\Psi(\text{diag}(\alpha,\beta,(\alpha\beta)^{-1}))\in\Psi(\mathop{\rm GL}_3(\mathbb{F}) )$, we get the quasitorus $ \langle H_1,H_2,T_{\alpha,\beta}\mid \alpha,\beta\in\mathbb{F}^\times\rangle$ ($\mathcal{Q}_2$ in \cite{e6}), that produces a fine $\mathbb{Z}_3^2\times\mathbb{Z}^2$-grading on $\mathcal{L}$. \mathop{\rm sl}ubsection{ Model based on $\mathfrak{sl}(2)\oplus\mathfrak{sl}(6)$}\label{subsec_modeloa5masa1} Recall (see again \cite[Chapter~VIII]{Kac}) that there is a $\mathbb{Z}_2$-grading on any algebra of type $E_6$ with zero homogeneous component isomorphic to $A_5+A_1$ and the other one an irreducible module. 
This allows us to endow $\mathfrak{L}=\mathfrak{L}_{\bar0}\oplus \mathfrak{L}_{\bar1}$ with a Lie algebra structure, starting from vector spaces $U$ and $V$ of dimensions $2$ and $6$ respectively, and by considering $\mathfrak{L}_{\bar0}= \mathfrak{sl}(U)\oplus\mathfrak{sl}(V)$ and $\mathfrak{L}_{\bar1}= U\otimes\bigwedge^3V$. For that purpose, take a (nonzero) skewsymmetric bilinear form $b\colon U\times U\to\mathbb{F}$, and note that $\mathfrak{sp}(U)=\mathfrak{sl}(U)$. For each $v,w\in U$, consider the map $\varphi_{v,w}:=b(v,-)w+b(w,-)v\in\mathfrak{sp}(U)$. If $x=x_1\wedge x_2\wedge x_3\in\bigwedge^3V$ and $f\in\text{End}\,(V)$, denote
$$
\begin{array}{l}
f(x):=f(x_1)\wedge x_2 \wedge x_3 + x_1 \wedge f(x_2)\wedge x_3 + x_1 \wedge x_2 \wedge f(x_3),\\
f\cdot x:=f(x_1)\wedge f(x_2) \wedge f(x_3).
\end{array}
$$
Fix a nonzero map $\bigwedge^6V\rightarrow \mathbb{F}$ and let $\langle\cdot,\cdot\rangle\colon \bigwedge^3V\times\bigwedge^3V\rightarrow \mathbb{F}$ be the related (skewsymmetric) product. Note that $\langle h(x),y\rangle+\langle x,h(y)\rangle=(\text{tr}\, h) \langle x,y\rangle$ and $\langle h\cdot x,h\cdot y\rangle=(\det h) \langle x,y\rangle$ for any $x,y\in\bigwedge^3V$ and any endomorphism $h\in\mathop{\rm End}(V)$. For each $x,y\in\bigwedge^3V$ we denote by $[x,y]\,(=[y,x])$ the element of $\mathfrak{sl}(V)$ characterized by the property $\text{tr}(f[x,y])=\langle f(x),y\rangle$ for all $f\in\mathfrak{sl}(V)$ (denoting the composition by juxtaposition). Now we note, by using arguments as in Subsection~\ref{subsec_modelo5grad}, that there exist suitable scalars $\lambda,\mu\in \mathbb{F}^\times$ such that the product on $\mathfrak{L}$ given by
$$
\begin{array}{l}
\bullet \ \mathfrak{sl}(U)\oplus\mathfrak{sl}(V) \text{ is a subalgebra of $\mathfrak{L}$}, \\
\bullet \ [g,v\otimes x] = g(v)\otimes x, \\
\bullet \ [f,v\otimes x] = v\otimes f(x), \\
\bullet \ [v\otimes x, w\otimes y] = \lambda\langle x,y\rangle \varphi_{v,w} + \mu b(v,w)[x,y],
\end{array}
$$
for all $g\in\mathfrak{sl}(U)$, $f\in\mathfrak{sl}(V)$, $v,w\in U$ and $x,y\in\bigwedge^3V$, turns $\mathfrak{L}$ into a Lie algebra of type $E_6$.
The obvious advantage of this model is that it provides a source of automorphisms commuting with the one producing the $\mathbb{Z}_2$-grading. If we take $f_1\in \mathop{\rm SP}(U)$ and $f_2\in \mathop{\rm SL}(V)$, then the map $f_1\times f_2\colon \mathfrak{L}\to \mathfrak{L}$ given by
$$
\begin{array}{l}
f_1\times f_2(g+f):= f_1gf_1^{-1}+f_2ff_2^{-1},\\
f_1\times f_2(v\otimes x):= f_1(v)\otimes f_2\cdot x,
\end{array}
$$
for any $g\in\mathfrak{sl}(U)$, $f\in\mathfrak{sl}(V)$, $v\in U$ and $x\in\bigwedge^3V$, is an automorphism of $\mathfrak{L}$. This is not difficult to prove:
$$
\begin{array}{c}
\text{tr}(f (f_2[x,y]f_2^{-1}))= \text{tr}((f_2^{-1}ff_2) [x,y])=\langle f_2^{-1}ff_2(x),y\rangle\\
=\langle f_2\cdot (f_2^{-1}ff_2)(x),f_2\cdot y\rangle = \langle f(f_2\cdot x),f_2\cdot y\rangle=\text{tr}(f [f_2\cdot x,f_2\cdot y]),
\end{array}
$$
so that $[f_2\cdot x,f_2\cdot y]=f_2[x,y]f_2^{-1}$. Besides, $\langle f_2\cdot x,f_2\cdot y\rangle=\langle x,y\rangle$ and $\varphi_{f_1(v),f_1(w)} =f_1 \varphi_{v,w} f_1^{-1}$, so that $f_1\times f_2 ([v\otimes x, w\otimes y]) =[f_1\times f_2 (v\otimes x), f_1\times f_2( w\otimes y)]$. The remaining relations are checked analogously.
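As an aside, the dimensions here are also the expected ones for a Lie algebra of type $E_6$:
$$
\dim\mathfrak{L}=\dim\mathfrak{sl}(U)+\dim\mathfrak{sl}(V)+\dim U\cdot\binom{6}{3}=3+35+2\cdot20=78.
$$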
Thus, as $f_1\times\mathop{\rm id}_V$ always commutes with $\mathop{\rm id}_U\times f_2$, the direct product $\mathop{\rm SP}(U)\times \mathop{\rm SL}(V)$ can be regarded as a subgroup of $\mathop{{\rm Aut}} (\mathfrak{L})$.
\subsection{Model based on $\mathfrak{sp}_8(\mathbb{F})$}\label{subsec_modeloc4}
In \cite[Section~5.3]{e6} a model of $\mathfrak{e}_6$ based on an algebra of type $C_4$ is outlined. Take the vector space $\mathcal{V}=\mathbb{F}^8$ and the nondegenerate symplectic bilinear form $b\colon \mathcal{V}\times \mathcal{V}\to\mathbb{F}$ given by $b(u,v)=u^tCv$ for the matrix
$$
C= \left(\begin{array}{cccc} 0&\sigma_3&0&0\\ \sigma_3&0&0&0\\ 0&0&0&\sigma_3\\ 0&0&\sigma_3&0 \end{array}\right) \qquad\text{with } \sigma_3=\left(\begin{array}{cc} 0&-1\\ 1&0 \end{array}\right).
$$
Let $\mathbf{L}_0:=\mathfrak{sp}(\mathcal{V},b)$, which relative to the canonical basis coincides with $\{x\in\mathop{\rm Mat}_{8\times8}(\mathbb{F})\mid xC+Cx^t=0\}$, a simple Lie algebra of type $C_4$. Now, we consider the contraction
$$
\begin{array}{llcl}
c\colon &\bigwedge^4\mathcal{V}& \longrightarrow &\bigwedge^2\mathcal{V} \\
&v_1\wedge v_2\wedge v_3 \wedge v_4 &\mapsto&\sum_{\substack{\sigma\in S_4\\ \sigma(1)<\sigma(2)\\ \sigma(3)<\sigma(4)}} (-1)^\sigma b(v_{\sigma(1)},v_{\sigma(2)})\, v_{\sigma(3)}\wedge v_{\sigma(4)}.
\end{array}
$$
Its kernel $\ker c$ is isomorphic to $V(\lambda_4)+V(0)$ as an $\mathbf{L}_0$-module ($\lambda_i$ being the fundamental weights, with the notations in \cite{Humphreysalg}). Let $\mathbf{L}_1$ be the submodule of $\ker c$ isomorphic to $V(\lambda_4)$. Then the vector space $\mathbf{L}=\mathbf{L}_0\oplus \mathbf{L}_1$ can be endowed with a $\mathbb{Z}_2$-graded Lie algebra structure such that $\mathbf{L}$ turns out to be an algebra of type $E_6$. Call $\Theta$ the grading automorphism. For each $f\in\mathop{\rm SP}(\mathcal{V})$, the map $f^\diamondsuit\colon \mathbf{L}\to\mathbf{L}$ given by
$$
\begin{array}{l}
f^\diamondsuit(g):=f^{-1}gf, \\
f^\diamondsuit(v):=\sum f(v_{i_1})\wedge f(v_{i_2})\wedge f(v_{i_3}) \wedge f(v_{i_4}),
\end{array}
$$
for $g\in {\mathbf{L}}_{\bar0}$ and $v=\sum v_{i_1}\wedge v_{i_2}\wedge v_{i_3} \wedge v_{i_4} \in {\mathbf{L}}_{\bar1}$, is an automorphism of $\mathbf{L}$. Furthermore, $\mathop{\rm Cent}_{\mathop{{\rm Aut}} (\mathbf{L})}(\Theta)=\{f^\diamondsuit\mid f\in\mathop{\rm SP}(\mathcal{V})\}\cdot\langle\Theta\rangle\cong\mathop{\rm SP}(\mathcal{V})\times\mathbb{Z}_2$.
\section{Description of the gradings}\label{sec_graduaciones4}
The main theorem in \cite{e6} gives the classification up to equivalence of the fine gradings on $\mathfrak{e}_6$. It includes the following table, which provides, for each fine grading, its universal grading group and the type of the grading. These invariants suffice to distinguish the equivalence classes under consideration.
\begin{center}{ \begin{tabular}{ |c|c|} \hline Universal grading group & Type \cr \hline\hline $ \mathbb{Z}_3^4$ & $( 72,0,2 )$ \cr \hline $\mathbb{Z}^2\times\mathbb{Z}_3^2$ & $( 60,9 )$ \cr \hline $\mathbb{Z}_3^2\times\mathbb{Z}_2^3$ & $( 64,7 )$ \cr \hline $\mathbb{Z}^2\times\mathbb{Z}_2^3$ & $( 48,1,0,7 )$ \cr \hline $\mathbb{Z}^6 $ & $(72,0,0,0,0,1 )$ \cr \hline $\mathbb{Z}^4\times\mathbb{Z}_2$ & $(72,1,0,1 )$ \cr \hline $ \mathbb{Z}_2^6$ & $( 48,1,0,7 )$ \cr \hline $\mathbb{Z}\times\mathbb{Z}_2^4$ & $( 57,0,7)$\cr \hline $\mathbb{Z}_3^3\times\mathbb{Z}_2$ & $( 26,26 )$ \cr \hline $\mathbb{Z}^2\times\mathbb{Z}_2^3$ & $(60,7,0,1 )$ \cr \hline $\mathbb{Z}_4\times\mathbb{Z}_2^4$ & $(48,13,0,1 )$ \cr \hline $\mathbb{Z} \times\mathbb{Z}_2^5$ & $( 73,0,0,0,1 )$ \cr \hline $ \mathbb{Z}_2^7$ & $( 72,0,0,0,0,1 )$ \cr \hline $ \mathbb{Z}_4^3$ & $( 48,15 )$ \cr \hline \mathfrak{e}nd{tabular}}\mathfrak{e}nd{center}\mathop{\rm sl}mallskip In this section we describe the six gradings by infinite universal grading groups different from the Cartan grading. Our descriptions will not be, in most cases, the same as those ones in \cite{e6}, to adapt them to our study of symmetries. This makes necessary to recognize properties of the quasitori producing them, and, particularly, of the automorphisms involved. There are five conjugacy classes of order three automorphisms of $\mathfrak{e}_6$, characterized by the isomorphism type of the fixed subalgebra. It is said that such an automorphism is of type $3B$, $3C$, $3D$, $3E$ and $3F$ when the fixed subalgebra is of type $A_5+Z$, $3A_2$, $D_4+2Z$, $A_4+A_1+Z$ and $D_5+Z$ respectively. Observe that in this case the dimension of the fixed subalgebra ($36$, $24$, $30$, $28$ and $46$, respectively) determines the conjugacy class. Also, there are four conjugacy classes of order two automorphisms of $\mathfrak{e}_6$. The inner ones have fixed subalgebras isomorphic to $A_5+ A_1$ and $D_5+Z$, and are called of type $2A$ and $2B$ respectively, and the outer ones have fixed subalgebras isomorphic to $F_4$ and $C_4$, and are called of type $2C$ and $2D$ respectively. \mathop{\rm sl}ubsection{Inner $ \mathbb{Z}_2^3\times\mathbb{Z}^2$-grading}\label{subsec_gradZcuadZ2cubo} Consider the Tits' construction $\mathcal{T}(\mathscr{C},\mathcal{J})=\mathop{\hbox{\rm Der}}(\mathscr{C}) \oplus (\mathscr{C}_0 \otimes \mathcal{J}_0) \oplus\, \mathop{\hbox{\rm Der}}(\mathcal{J}) $ for $\mathcal{J}=\mathcal{H}_3(\mathbb{F} \oplus \mathbb{F})$. Note that this Jordan algebra is isomorphic to $\mathop{\rm Mat}_{3\times3}(\mathbb{F})^+$ with the symmetrized product. Consider the $\mathbb{Z}^2$-grading on $\mathcal{J}$ given by $$ \begin{array}{c}\mathcal{J}_{0,0} = \langle E_1, E_2, E_3\rangle, \\ \mathcal{J}_{1,0}=\langle E_{12}\rangle, \; \mathcal{J}_{0,1}=\langle E_{23}\rangle, \; \mathcal{J}_{1,1}=\langle E_{13}\rangle, \\ \mathcal{J}_{-1,0} = \langle E_{21}\rangle, \; \mathcal{J}_{0,-1}=\langle E_{32}\rangle, \; \mathcal{J}_{-1,-1}=\langle E_{31}\rangle, \mathfrak{e}nd{array} $$ where $E_{ij}$ is the matrix with $1$ in the entry $(i,j)$ and $0$ elsewhere, and $E_i:=E_{ii}$. Let $T_2:=\{s_{\alpha, \beta}\mid \alpha,\beta\in \mathbb{F}^\times\}\le\mathop{{\rm Aut}}(\mathcal{J})$ be the torus producing the grading, that is, $s_{\alpha,\beta}\vert_{\mathcal{J}_{n,m}}= \alpha^n \beta^m\mathop{\rm id}$. 
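Let us point out, for convenience, why this assignment is indeed a $\mathbb{Z}^2$-grading: writing $\mathop{\hbox{\rm deg}} E_{ij}=v_i-v_j$ with $v_1=(1,1)$, $v_2=(0,1)$ and $v_3=(0,0)$, the relation $E_{ij}E_{kl}=\delta_{jk}E_{il}$ shows that degrees are additive on nonzero products, and hence the same holds for the symmetrized product of $\mathcal{J}$. In particular, $s_{\alpha,\beta}$ coincides with conjugation by the matrix $\mathop{\hbox{\rm diag}}(\alpha\beta,\beta,1)$.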
We also use the notation $s_{\alpha, \beta}$ for its extension to an automorphism of $\mathcal{T}(\mathscr{C},\mathcal{J})$, so as $T_2$ as a torus of $\mathop{{\rm Aut}}(\mathcal{T}(\mathscr{C},\mathcal{J}))$. Besides, we have a $\mathbb{Z}_2^3$-grading on $\mathcal{T}(\mathscr{C},\mathcal{J}) $ induced by the grading on $\mathscr{C}$ given by Equation~(\ref{eq_graddeCdeCartan}). We will denote by $f_1$, $f_2$ and $f_3$ the automorphisms of $\mathcal{T}(\mathscr{C},\mathcal{J})$ producing that grading. As seen in \cite{e6}, a nontoral group isomorphic to $\mathbb{Z}_2^3$ is unique up to conjugation, and its centralizer is isomorphic to itself direct product with a copy of the group $\mathop{\rm PSL}_3(\mathbb{F})$ (in our case, the extension of $\mathop{{\rm Aut}}(\mathcal{J})$), what proves that the quasitorus $ P_1:= T_2\langle f_1, f_2, f_3 \rangle \cong (\mathbb{F}^\times)^2\times\mathbb{Z}_2^3$ is a MAD-group (conjugated to $\mathcal{Q}_4$ of \cite{e6}) producing a fine $\mathbb{Z}^2\times\mathbb{Z}_2^3$-grading denoted by $\Gamma_1$. \mathop{\rm sl}ubsection{$ \mathbb{Z}_2^4\times\mathbb{Z} $-grading}\label{subsec_gradZZ2cuarta} A generalization of the $\mathbb{Z}$-grading on the Albert algebra described in \cite[Subsection~4.1]{EK11} is the following: For $1$ and $1'$ paraunits of two para-Hurwitz algebras, $S$ and $S'$, consider the map $d:=\mathop{\rm ad}(\iota_0(1\otimes 1'))$, which is a semisimple derivation of $\mathfrak{g}(S,S')$ with set of eigenvalues $\{\pm2,\pm1,0\}$. Thus, the eigenspace decomposition gives the following $\mathbb{Z}$-grading (5-grading) on $ \mathfrak{g}(S,S')$: \begin{equation}\label{eq_laZgrad} \begin{array}{l} \mathfrak{g}(S,S')_{\pm2}=S_{\pm}(S_0,S'_0), \\ \mathfrak{g}(S,S')_{\pm1}=\nu_{\pm}(S\otimes S'), \\ \mathfrak{g}(S,S')_0 = t_{S_0, S_0}\oplus t_{S'_0,S'_0}\oplus \iota_0 (S_0\otimes S'_0) \oplus \mathbb{F}\iota_0 (1\otimes 1'), \mathfrak{e}nd{array} \mathfrak{e}nd{equation} where $S_0$ and $S_0'$ denote the trace zero elements of $S$ and $S'$, and we are using the notation $$ \begin{array}{l} S_{\pm}(y,y'):=t_{1,y}+t'_{1,y'}\pm \textbf{i} \iota_0(y\otimes 1 + 1\otimes y') \\ \nu_{\pm}(y\otimes y'):=\iota_1(y\otimes y')\mp \textbf{i} \iota_2(\bar{y}\otimes \bar{y}') \mathfrak{e}nd{array} $$ for all $y\in S$, $y'\in S'$, and for a fixed scalar $\textbf{i} \in \mathbb{F}$ such that $\textbf{i}^2=-1$. Moreover, this $\mathbb{Z}$-grading can be refined with any grading coming from $S$ or $S'$, since the derivation $d$ commutes with the automorphism $(f,f')$ described in Subsection~\ref{subsec_Elduquemodelo}. We particularize this $\mathbb{Z}$-grading to the algebra of type $ E_6$ constructed from the para-Hurwitz algebras $S_8$ and $S_2$ of dimensions $8$ and $2$ respectively. Take the $\mathbb{Z}_2^3$-grading on $S_8$ and the $\mathbb{Z}_2$-grading on $S_2$ coming from the corresponding $\mathbb{Z}_2^3$ and $\mathbb{Z}_2$-gradings on $\mathscr{C}$ and $\mathbb{F}\oplus\mathbb{F}$ as in Subsection~\ref{subsec_involvedstructures}. Thus we obtain a $\mathbb{Z}_2^4\times\mathbb{Z}$-grading on $\mathfrak{g}(S_8,S_2)$, denoted by $\Gamma_2$. This is a fine grading equivalent to that one produced by $\mathcal{Q}_8$ in \cite{e6}, because if $\varrho$ denotes the order two automorphism of $\mathfrak{g}(S_8,S_2)$ producing the $\mathbb{Z}_2$-grading which comes from extending that one on $S_2$, then $\varrho$ fixes a subalgebra of type $F_4$. 
Indeed, if we denote $e_1=(1,0)$ and $e_2=(0,1)$, the grading on $p(\mathbb{F}\oplus\mathbb{F})=S_2$ is given by $(S_2)_{\bar0}=\mathfrak{sp}an{1=e_1+e_2}$ and $(S_2)_{\bar1}=\mathfrak{sp}an{e_1-e_2}$, so that $\varrho (\iota_i(x\otimes e_1))=\iota_i(x\otimes e_2)$ and $\mathfrak{f}ix\varrho=\mathfrak{tri}\,(S_8)\oplus(\oplus_{i=0}^2\iota_i(S_8\otimes \mathbb{F} 1))\cong\mathfrak{g}(S_8,\mathbb{F})\cong\mathfrak{f}_4$. \mathop{\rm sl}ubsection{$ \mathbb{Z}_2\times\mathbb{Z}^4 $-grading}\label{subsec_gradZcuartaZ2} First we provide a $\mathbb{Z}^4$-grading on $ \mathfrak{g}(S_8,S)$ inspired in \cite{EK11}, where $S$ and $S_8$ are para-Hurwitz algebras, the second one of dimension $8$. Take $a_1=(1,0,0,0)$, $a_2=(0,1,0,0)$, $g_1=(0,0,1,0)$, $g_2=(0,0,0,1)$ generators of the group $\mathbb{Z}^4$, and $a_0=-a_1-a_2, g_0=-g_1-g_2$. Set the degrees of the $\mathbb{Z}^4$-grading as follows: $$ \begin{array}{rcccl} \mathop{\hbox{\rm deg}}\, \iota_i(e_1\otimes s) & = & a_i & = & -\mathop{\hbox{\rm deg}}\, \iota_i(e_2 \otimes s), \\ \mathop{\hbox{\rm deg}}\, \iota_i(u_i\otimes s) & = & g_i & = & -\mathop{\hbox{\rm deg}}\, \iota_i(v_i \otimes s), \\ \mathop{\hbox{\rm deg}}\, \iota_i(u_{i+1}\otimes s) & = & a_{i+2}+g_{i+1} & = & -\mathop{\hbox{\rm deg}}\, \iota_i(v_{i+1} \otimes s), \\ \mathop{\hbox{\rm deg}}\, \iota_i(u_{i+2}\otimes s) & = & - a_{i+1}+g_{i+2} & = & -\mathop{\hbox{\rm deg}}\, \iota_i(v_{i+2} \otimes s), \mathfrak{e}nd{array} $$ where $s\in S$ and $B=\{e_1, e_2, u_0, u_1, u_2, v_0, v_1, v_2\}$ is a canonical basis of the algebra $S_8$. Also set $\mathop{\hbox{\rm deg}}(t)=(0,0,0,0)$ for all $t\in \mathfrak{tri}(S)$, and $\mathop{\hbox{\rm deg}} t_{x,y}=\mathop{\hbox{\rm deg}} \iota_0(x\otimes s) + \mathop{\hbox{\rm deg}} \iota_0(y\otimes s)$, if $x,y\in B$. It is a straightforward computation that this assignment provides a $\mathbb{Z}^4$-grading on $\mathfrak{g}(S_8,S)$. In particular, all the exceptional Lie algebras in the fourth row of the magic square, namely, $F_4$, $E_6$, $E_7$ and $E_8$, are $\mathbb{Z}^4$-graded. In fact, such grading is a root grading for the root system of $F_4$, taking into account the interesting relationship between fine gradings and root gradings which has been recently stated in \cite{ElduqueRootgradings}. When applied to $E_6$, viewed as $\mathfrak{g}(S_8, S_2)$ for $S_2$ the symmetric composition algebra $p(\mathbb{F}\oplus\mathbb{F})$, the torus $T_4$ inducing the $\mathbb{Z}^4$-grading commutes with the outer automorphism $\varrho$ and the quasitorus $P_3:=\langle T_4\cup\{\varrho\}\rangle$ provides a $\mathbb{Z}^4 \times \mathbb{Z}_2$-fine grading on $\mathfrak{g}(S_8, S_2)$, denoted by $\Gamma_3$. This MAD-group is essentially $\mathcal{Q}_6$ of \cite{e6}, taking into account that the $\mathbb{Z}^4 $-grading produced by the restriction of $T_4$ to $\mathfrak{f}ix\varrho\cong\mathfrak{f}_4$ is the Cartan grading. \mathop{\rm sl}ubsection{$ \mathbb{Z}_3^2 \times\mathbb{Z}^2$-grading}\label{subsec_gradZcuadZ3cuad} The $ \mathbb{Z}_3^2 \times\mathbb{Z}^2$-grading described in Subsection~\ref{subsec_modeloAdams}, produced by $\mathcal{Q}_2$ in \cite{e6}, admits different realizations convenient for our objectives. Consider again the Tits' construction $\mathcal{T}(\mathscr{C},\mathcal{J})=\mathop{\hbox{\rm Der}}(\mathscr{C}) \oplus (\mathscr{C}_0 \otimes \mathcal{J}_0) \oplus \mathop{\hbox{\rm Der}}(\mathcal{J}) $ applied to the Jordan algebra $\mathcal{J}=\mathcal{H}_3(\mathbb{F} \oplus \mathbb{F})\cong\mathop{\rm Mat}_{3\times3}(\mathbb{F})^+$. 
Its Lie algebra of derivations $\mathop{\hbox{\rm Der}}(\mathcal{J})$ can be identified with $\mathcal{J}_0$ as a vector space, since, for $x\in\mathcal{J}_0$, the map $\mathop{\rm ad}(x)$ given by $\mathop{\rm ad}(x)(y)=xy-yx$ is a derivation of $\mathop{\rm Mat}_{3\times3}(\mathbb{F})^+$. This allows us to identify $\mathfrak{e}_6$ with $\mathop{\hbox{\rm Der}}(\mathscr{C}) \oplus (\mathscr{C}_0 \otimes \mathcal{J}_0) \oplus \mathbb{F} \otimes \mathcal{J}_0$ and then, as $\mathscr{C}=\mathscr{C}_0\oplus\mathbb{F}$, with
\begin{equation}\label{eq_paraverbienlosmodulos}
\mathfrak{e}_6\cong\mathop{\hbox{\rm Der}}(\mathscr{C}) \oplus (\mathscr{C} \otimes \mathcal{J}_0).
\end{equation}
Think of the $\mathbb{Z}_3^2$-grading obtained on $\mathfrak{e}_6$ when grading $\mathcal{J}$ through the assignment $\mathop{\hbox{\rm deg}}(b)=(\bar1,\bar0)$, $\mathop{\hbox{\rm deg}}(c)=(\bar0,\bar1)$, for
$$\small{ b=\left( \begin{array}{ccc} 1 & 0 & 0\\ 0 & \omega & 0\\ 0 & 0 & \omega^2 \end{array} \right), \quad c=\left( \begin{array}{ccc} 0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{array} \right)}, $$
where $\omega$ denotes a primitive cubic root of unity, and then take $h_1$ and $h_2$ as the automorphisms of $\mathfrak{e}_6$ producing this $\mathbb{Z}_3^2$-grading. Observe that the simultaneous diagonalization has fixed subalgebra isomorphic to $\mathfrak{g}_2 = \mathop{\hbox{\rm Der}} (\mathscr{C})$ and all the other homogeneous components can be identified with $\mathscr{C}$ as $L_e$-modules (obvious from Equation~(\ref{eq_paraverbienlosmodulos})), so that every order three element in the subgroup $\langle h_1,h_2\rangle$ has type $3D$. Moreover, consider the $\mathbb{Z}^2$-grading on $\mathscr{C}$, which is extended to a $\mathbb{Z}^2$-grading on $\mathfrak{e}_6$. The two-dimensional torus producing this grading will be denoted by $\mathcal{T}_2$. Then, the quasitorus $P_4:=\langle h_1, h_2, \mathcal{T}_2 \rangle \cong\mathbb{Z}_3^2 \times (\mathbb{F}^\times)^2$ is conjugate to $\mathcal{Q}_2$ (although $\langle h_1,h_2\rangle$ is not conjugate to $\langle H_1,H_2\rangle$, as proved in \cite{e6}), so that it produces a fine $\mathbb{Z}_3^2\times \mathbb{Z}^2$-grading of type $(60,9)$ on $\mathfrak{e}_6$, denoted by $\Gamma_4$.
\subsection{$\mathbb{Z}_2^5 \times\mathbb{Z}$-grading}\label{subsec_gradZZ2quinta}
For this grading, the 5-grading model in Subsection~\ref{subsec_modelo5grad} is used. Note that, if $\varphi\in \mathop{\rm GL}(V)$, the condition $\theta\widetilde{\varphi}(A)=\widetilde{\varphi}\theta(A)$ for all $A\in\mathfrak{sl}(V)$ is equivalent to $\varphi^t\varphi\in\mathbb{F} I_6$. This is a necessary but not sufficient condition for the automorphisms $\widetilde{\varphi}$ and $\theta$ to commute. They commute precisely when $\widetilde{\varphi}^{-1}\theta^{-1}\widetilde{\varphi}\theta\vert_{L_1+L_{-1}}=\mathop{\rm id}$, since $L_1+L_{-1}$ generates $L$ as an algebra. For instance, if $\varphi$ is given by the diagonal matrix $\mathop{\hbox{\rm diag}}(\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_5,\alpha_6)$, then $\widetilde{\varphi}$ commutes with $\theta$ if and only if $\alpha_{\sigma(1)}\alpha_{\sigma(2)}\alpha_{\sigma(3)}=\alpha_{\sigma(4)}\alpha_{\sigma(5)}\alpha_{\sigma(6)}$ for every permutation $\sigma\in S_6$, which is equivalent to $\frac1{\alpha_1}\varphi$ having order two and determinant equal to $1$.
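The diagonal case can also be double-checked by brute force. The following short script (a computational aside, not taken from \cite{e6}) verifies, for all sign patterns $\alpha_i\in\{\pm1\}$, that the product condition over all partitions of $\{1,\dots,6\}$ into two $3$-element sets holds exactly when $\det\varphi=1$, which for $\pm1$ entries is the stated criterion:
\begin{verbatim}
# Brute-force check over all diagonal sign matrices diag(a1,...,a6), ai = +-1:
# the condition a_{s(1)}a_{s(2)}a_{s(3)} = a_{s(4)}a_{s(5)}a_{s(6)} for every
# permutation s of {1,...,6} holds if and only if the number of -1's is even,
# i.e. if and only if det = 1.
from itertools import combinations, product

def balanced(alpha):
    # it suffices to run over unordered 3-element subsets and their complements
    for T in combinations(range(6), 3):
        lhs = alpha[T[0]] * alpha[T[1]] * alpha[T[2]]
        rhs = 1
        for j in set(range(6)) - set(T):
            rhs *= alpha[j]
        if lhs != rhs:
            return False
    return True

for alpha in product((1, -1), repeat=6):
    det = 1
    for a in alpha:
        det *= a
    assert balanced(alpha) == (det == 1)
print("checked all 64 sign patterns")
\end{verbatim}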
In particular, if we denote $f_{ij}=\mathop{\hbox{\rm diag}}(\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_5,\alpha_6)$ for $\alpha_i=\alpha_j=-1$ and the remaining elements in the diagonal equal to $1$, the above implies that all of their extensions commute with both $\theta$ and $T_1$, so that we can consider the quasitorus $P_5:=T_1\cdot \langle \theta, \widetilde{f}_{12}, \widetilde{f}_{13}, \widetilde{f}_{14}, \widetilde{f}_{15}\rangle \cong \mathbb{F}^\times\times\mathbb{Z}_2^5$. In order to check that it is a MAD-group of $\mathop{{\rm Aut}}(L)$ (and hence conjugated to $\mathcal{Q}_{12}$ in \cite{e6}), it is enough to take into account that \begin{itemize} \item[(i)] the centralizer of the torus $T_1$ is $\widetilde\mathop{\rm GL}(V)\cup \widetilde\mathop{\rm GL}(V)\theta$; \item[(ii)] If $\varphi,\psi\in\mathop{\rm GL}(V)$, their extensions commute if and only if $ {\varphi}^{-1}{ \psi}^{-1} {\varphi} { \psi} =\omega^i I_6$ for $\omega^3=1$ and some $i=0,1,2$; but in case $\varphi^2=I_6$, the extensions $\widetilde{\varphi}$ and $\widetilde{\psi}$ commute if and only if $\varphi$ and $\psi$ do; \item[(iii)] The element $\widetilde{f}_{12}\widetilde{f}_{34}\widetilde{f}_{56}=-\widetilde{I}_6=f_{-1}\in T_1$; \item[(iv)] A matrix $A\in\mathop{\rm Mat}_{6\times6}(\mathbb{F})$ commutes with all the order two diagonal matrices if and only if $A$ itself is a diagonal matrix. \mathfrak{e}nd{itemize} The induced fine grading will be denoted by $\Gamma_5$. \mathop{\rm sl}ubsection{Outer $ \mathbb{Z}_2^3\times\mathbb{Z}^2 $-grading}\label{subsec_gradZoutercuadZ2cubo} According to \cite[Section~5.3]{e6}, the following is a $\mathbb{Z}^2 \times \mathbb{Z}_2^3$-fine grading on $\mathbf{L}$, the algebra of type $E_6$ described in Subsection~\ref{subsec_modeloc4}. Denote $\mathop{\rm sl}igma_1= \left(\begin{array}{cc} 0&1\\ 1&0 \mathfrak{e}nd{array}\right) $ and $\mathop{\rm sl}igma_2= \left(\begin{array}{cc} 1&0\\ 0&-1 \mathfrak{e}nd{array}\right). $ Then the automorphisms $$ \begin{array}{l} \tau_{\alpha,\beta}= \mathop{\hbox{\rm diag}}({\alpha,\alpha,\mathfrak{f}rac1\alpha,\mathfrak{f}rac1\alpha,\beta,\beta,\mathfrak{f}rac1\beta,\mathfrak{f}rac1\beta}) ,\\ g_1=\mathop{\hbox{\rm diag}}({\mathop{\rm sl}igma_1,\mathop{\rm sl}igma_1,\mathop{\rm sl}igma_1,\mathop{\rm sl}igma_1}) ,\\ g_2=\mathop{\hbox{\rm diag}}({\mathop{\rm sl}igma_2,\mathop{\rm sl}igma_2,\mathop{\rm sl}igma_2,\mathop{\rm sl}igma_2}) , \mathfrak{e}nd{array} $$ constitute a MAD-group of $\mathop{\rm SP}_8(\mathbb{F})\cong\mathop{{\rm Aut}}(\mathbf{L}_0)$ and hence $ P_6:= \langle \mathcal{T}heta, {g_1}^\diamondsuit, {g_2}^\diamondsuit \rangle \cdot \{ \tau_{\alpha,\beta}^\diamondsuit\mid \alpha,\beta\in\mathbb{F}^\times\} \cong\mathbb{Z}_2^3 \times (\mathbb{F}^\times)^2$ is a MAD-group of $\mathop{{\rm Aut}}(\mathbf{L})$. Although in $\mathop{\rm SP}_8(\mathbb{F})$ it is very easy to find elements whose extensions move the elements in the MAD-group $P_6$, the disadvantage is that all of them commute with $\mathcal{T}heta$. That forces us to look for a different realization through the 5-grading model of Subsection~\ref{subsec_modelo5grad}. Our quest is inspired in the following key fact about the above grading: the fixed part of the two-dimensional torus must be a subalgebra of dimension $18$ (and rank $6$), and $\theta$ acts on it fixing a subalgebra of dimension $8$ (and rank $4$). 
Consider the elements of $\text{GL}_6(\mathbb{F})$ given by $$ \begin{array}{l} \psi_{a,a'}= \left(\begin{array}{c|c}\begin{array}{cc} a&a' \\ -a'&a \mathfrak{e}nd{array} & 0\\ \hline0&I_4\mathfrak{e}nd{array}\right),\\ g'_1=\text{diag}(1,1,1,-1,1,-1), \\ g'_2=\text{diag}(1,1,1,-1,-1,1), \mathfrak{e}nd{array} $$ where $a,a'\in \mathbb{F}$ satisfy $a^2+a'^2=1$. Then, we have a one-dimensional torus $T'_1=\{\widetilde{\psi}_{a,a'} \mid a,a'\in \mathbb{F}, a^2+a'^2=1 \}\le\mathop{{\rm Aut}}(L)$, since this is a connected diagonalizable group (the ideal of $\mathbb{F}[x,y]$ generated by the polynomial $x^2+y^2-1$ is prime) and a maximal quasitorus of $\mathop{{\rm Aut}}(L)$ given by $$ P'_6:= \langle \theta, \widetilde{g}'_1, \widetilde{g}'_2 \rangle \cdot T_1\cdot T'_1\cong\mathbb{Z}_2^3 \times (\mathbb{F}^\times)^2 . $$ For proving its maximality, take $f$ in the centralizer. We can replace $f$ with $f\theta$, if necessary, to have that there exists $\varphi\in\mathop{\rm GL}_6(\mathbb{F})$ such that $f=\widetilde{\varphi}$. As $f$ commutes with $\theta$, then $\varphi\varphi^t=I_6$, the identity matrix. Also $f$ commutes with $\psi_{0,1}$, so that there are $x,y\in\mathbb{F}$ such that $\varphi=\left(\begin{array}{c|c}\begin{array}{cc} x&y \\ -y&x \mathfrak{e}nd{array} & 0\\ \hline0&B\mathfrak{e}nd{array}\right)$. The fact $\varphi\varphi^t=I_6$ means that $x^2+y^2=1$, so that by replacing $\varphi$ with $\varphi\psi_{x,y}$ we can assume that $x=1$ and $y=0$. By item ii) above, $B$ commutes with $\mathop{\hbox{\rm diag}}(1,-1,-1,1)$ and with $\mathop{\hbox{\rm diag}}(1,-1,1,-1)$, so that there are $b_i$'s for $i=1,\dots,4$, such that $B=\mathop{\hbox{\rm diag}}(b_1,b_2,b_3,b_4)$. As $BB^t=I_4$, then $b_i^2=1$, but again using that $\varphi$ commutes with $\theta$ we get that $\det(B)=1$ and the number of $1$'s in its diagonal is either 0, or 2, or 4. Hence $B\in\mathfrak{sp}an{\mathop{\hbox{\rm diag}}(1,-1,1,-1),\mathop{\hbox{\rm diag}}(1,-1,-1,1),-I_4}$ and $\widetilde\varphi\in P_6$ (note that $\psi_{-1,0}f_{-1}= \left(\begin{array}{c|c}I_2 & 0\\ \hline0&-I_4\mathfrak{e}nd{array}\right)$). \mathop{\rm sl}ection{Calculating the Weyl groups}\label{sec_gruposWeyl} In this section we compute $\mathscr{W}(\Gamma_i)=\mathop{{\rm Aut}}(\Gamma_i)/\mathop{\hbox{\rm Stab}}(\Gamma_i)$ for $i=1,\dots,6$. Recall that if $P_i$ is the MAD-group producing the grading $\Gamma_i$, then $G_i=\mathfrak{X}(P_i)$ is the universal grading group, and for each $f\in \mathop{{\rm Aut}}(\Gamma_i)$, there is a group isomorphism $\alpha_f\in\mathop{{\rm Aut}}(G_i)$ such that $f(L_s)= L_{\alpha_f(s)}$ for all $s\in\mathop{\hbox{\rm Supp}}(\Gamma_i)$. That allows us to identify $\mathscr{W}(\Gamma_i)$ with a subgroup of $\mathop{{\rm Aut}}(G_i)$ and consequently to take a basis of the (always finitely generated) group and work with the matrices relative to that basis. Concretely, if $G=\mathbb{Z}_p^m\times\mathbb{Z}^n$, we fix \begin{equation}\label{eq_basedelgrupo} \begin{array}{c} \{(\bar1,\dots,\bar0, 0,\dots,0),\dots,(\bar0,\dots,\bar1, 0,\dots,0),\\ (\bar0,\dots,\bar0, 1,\dots,0),\dots,(\bar0,\dots,\bar0, 0,\dots,1)\} \mathfrak{e}nd{array} \mathfrak{e}nd{equation} as canonical basis. We also fix scalars $\textbf{i},\omega,\xi,\zeta\in\mathbb{F}$ such that $\textbf{i}^4=\zeta^{12}=\omega^3=\xi^9=1$, primitive roots of the unit. 
\mathop{\rm sl}ubsection{Weyl group of the inner $\mathbb{Z}_2^3 \times \mathbb{Z}^2$-grading}\label{subsec_WeylZ2cuboZcuad} Take $\Gamma_1$ the $G_1=\mathbb{Z}_2^3\times\mathbb{Z}^2$-grading on $\mathfrak{e}_6$ described in Subsection~\ref{subsec_gradZcuadZ2cubo}, which is produced by a quasitorus of inner automorphisms. Identify any element of $\mathop{{\rm Aut}}(G_1)$ with its matrix (by columns) relative to the canonical basis (\ref{eq_basedelgrupo}), which is a $3+2$-block matrix whose first three rows have coefficients in $\mathbb{Z}_2$ and the other ones in $\mathbb{Z}$. \begin{prop} The Weyl group $\mathscr{W}(\Gamma_1)$ coincides with the set of block matrices $$ \left\{\left(\begin{array}{c|c}A & C\\ \hline0&B\mathfrak{e}nd{array}\right)\mid A\in\mathop{\rm GL}_3(\mathbb{Z}_2),B\in\langle\tau_1,\tau_2\rangle, C\in\mathop{\rm Mat}_{3\times2}(\mathbb{Z}_2)\right\} $$ where $$ \tau_1=\left(\begin{array}{cc}0&-1\\1&-1\mathfrak{e}nd{array}\right), \quad \tau_2=\left(\begin{array}{cc}-1&1\\0&1\mathfrak{e}nd{array}\right), $$ generate the dihedral group $D_3$. Thus $$ \mathscr{W}(\Gamma_1) \cong \mathop{\rm Mat}_{3\times2}(\mathbb{Z}_2) \rtimes (\mathop{\rm GL}_3(\mathbb{Z}_2) \times D_3). $$ \mathfrak{e}nd{prop} \begin{proof} First, note that the matrix of $\alpha_f$ has necessarily a zero block in the left lower corner, because $\alpha_f$ must apply finite order elements of $G_1$ into finite order elements. Recall from \cite[Theorem~3.5]{EK11} that the Weyl group of the $\mathbb{Z}_2^3$-grading on $\mathscr{C}$ fills $\mathop{{\rm Aut}}(\mathbb{Z}_2^3)$, since the three elements $w_i$'s as in Equation~(\ref{eq_gradZ2cubodeC}) play exactly the same role. Thus this Weyl group can be identified with $\mathop{\rm GL}_3(\mathbb{Z}_2)$. Since the automorphisms of $\mathscr{C}$ are extended to $\mathcal{T}(\mathscr{C},\mathcal{J})\cong\mathfrak{e}_6$ using the Tits' construction and commuting with $T_2$, this implies that any $\left( \begin{array}{cc} A & 0\\ 0 & I_2 \mathfrak{e}nd{array} \right)$ belongs to $\mathscr{W}(\Gamma_1)$. Now note that the $\mathbb{Z}^2$-grading on $\mathcal{J}=M_3(\mathbb{F})^+$ induces the $\mathbb{Z}^2$-grading on $\mathop{\hbox{\rm Der}}(\mathcal{J})\cong\mathfrak{sl}_3(\mathbb{F})$ which is the root decomposition of an algebra of type $A_2$. It is well known that the Weyl group of this grading is isomorphic to $D_3$. With our identifications, if $\mathop{\rm sl}igma\in S_3$, the map $E_{ij}\mapsto E_{\mathop{\rm sl}igma(i)\mathop{\rm sl}igma(j)}$ is an automorphism of the Jordan algebra $\mathcal{J}$. All these automorphisms must fill the Weyl group ($S_3$ is isomorphic to $D_3$), being $\tau_1$ the matrix corresponding to the permutation $(1,2,3)$ (since $E_{12}$ is applied into $E_{23}\in\mathcal{J}_{0,1}$ and $E_{23}$ into $E_{31}\in\mathcal{J}_{-1,-1}$) and $\tau_2$ the matrix corresponding to the permutation $(1,2)\in S_3$. Again these automorphisms can be extended to $\mathfrak{e}_6$ commuting with $\{f_1,f_2,f_3\}$, appearing the elements $\left( \begin{array}{cc} I_3 & 0\\ 0 & B \mathfrak{e}nd{array} \right)$ in the Weyl group (and only those elements with such particular shape). In order to find the remaining elements in the Weyl group, we need to change the model used for describing the grading. Then we consider the construction $\mathfrak{L} =\mathfrak{sl}(V)\oplus\mathfrak{sl}(W)\oplus (V\otimes\bigwedge^3W)$ given in Subsection~\ref{subsec_modeloa5masa1}. 
According to \cite[Section~3.5]{e6}, $\Gamma_1$ is equivalent to $\Gamma'_1$, the $\mathbb{Z}_2^3 \times \mathbb{Z}^2$-grading on $\mathfrak{L}$ with grading automorphisms: $$ \begin{array}{l} f_1'=\textbf{i}\left(\begin{array}{cc} 1&0\\0&-1 \mathfrak{e}nd{array}\right) \times \zeta\left(\begin{array}{cc} I_3 & 0 \\ 0 & - I_3 \mathfrak{e}nd{array}\right), \\ \text{$f_2'$ given by $\mathop{\rm id}$ on $\mathfrak{sl}(V)\oplus\mathfrak{sl}(W)$ and $-\mathop{\rm id}$ on $V\otimes\bigwedge^3W$,} \\ f_3'=\textbf{i}\left(\begin{array}{cc} 0&1\\1&0 \mathfrak{e}nd{array}\right) \times \zeta\left(\begin{array}{cc} 0 & I_3 \\ I_3 & 0 \mathfrak{e}nd{array}\right), \\ s_{\alpha,\beta}'=I_2 \times \text{diag}(\alpha,\beta,\alpha^{-1}\beta^{-1},\alpha,\beta,\alpha^{-1}\beta^{-1}), \quad \alpha,\beta\in \mathbb{F}^\times, \mathfrak{e}nd{array} $$ \noindent if we recall that we are considering $\mathop{\rm SP}(V)\times \mathop{\rm SL}(W)\leq\text{Aut}( \mathfrak{L})$. Note that not only the quasitorus $P'_1=\{s'_{\alpha, \beta}\mid \alpha,\beta\in \mathbb{F}^\times\}\langle f_1', f_2', f_3' \rangle$ can be considered conjugated to $P_1$ through the identification, but it is possible to take an isomorphism $\psi\colon \mathfrak{L} \to\mathfrak{e}_6$ in a way such that $\psi^{-1}f_i\psi=f_i'$ for all $i=1,2,3$, due to the fact that a nontoral $\mathbb{Z}_2^3$-subgroup of $\mathop{{\rm Aut}}(\mathfrak{e}_6)$ is unique up to conjugation and that we have already proved that we can change any nonidentity element in that subgroup $\mathbb{Z}_2^3$ by any other one. These arguments allow to work with the other model. For each $\mathop{\rm sl}igma\in S_6$, denote by $p_\mathop{\rm sl}igma\in\mathop{\rm GL}_6(\mathbb{F})$ the matrix with $(i,\mathop{\rm sl}igma(i))$ entry equal to $1$ for all $i=1,\dots,6$ and $0$ otherwise. Consider the automorphism given by $\varphi=I_2\times p_\mathop{\rm sl}igma\in \mathop{\rm SP}(V)\times \mathop{\rm SL}(W)\leq\mathop{{\rm Aut}}(\mathfrak{L})$ for the permutation $\mathop{\rm sl}igma=(1,4)(3,6)\in S_6$. It is a trivial verification that $$ \begin{array}{ll} \varphi f'_1\varphi^{-1}=f'_1s'_{-1,1},\qquad& \varphi f_2'\varphi^{-1}=f_2',\\ \varphi f_3'\varphi^{-1}=f_3',& \varphi s'_{\alpha,\beta}\varphi^{-1}=s'_{\alpha,\beta}, \mathfrak{e}nd{array} $$ which implies that $\varphi$ belongs to $\mathop{\hbox{\rm Norm}}(P'_1)$ and that the induced element by its projection on the Weyl group of the grading is $$\mathop{\rm sl}mall{ \left( \begin{array}{ccccc} \bar 1 & 0 & 0 & \bar 1 & 0\\ 0 & \bar 1 & 0 & 0 & 0\\ 0 & 0 & \bar 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 \mathfrak{e}nd{array} \right)}\in\mathscr{W}(\Gamma'_1). $$ In a similar way, the automorphism $I_2\times p_\mathop{\rm sl}igma$ for $\mathop{\rm sl}igma=(2,5)(3,6)$ applies $f'_1$ into $f'_1s'_{1,-1}$ and does not move $f_2'$, $f_3'$ and $s'_{\alpha,\beta}$. As the automorphisms $f'_i$'s are interchangeable, we find any possible $\left( \begin{array}{cc} I_3 & C\\ 0 & I_2 \mathfrak{e}nd{array} \right)$ belonging to $\mathscr{W}(\Gamma'_1)$. It is important to observe that the two-dimensional torus $\psi^{-1}T_2\psi$ commutes with the automorphisms $\psi^{-1}f_i\psi=f_i'$ for all $i$, hence it is equal to $\{s'_{\alpha,\beta}\mid \alpha,\beta\in\mathbb{F}^\times\}$ (the only one satisfying that property), although possibly $\psi^{-1}s_{\alpha,\beta}\psi\ne s'_{\alpha,\beta}$. 
Anyway, the map $\psi$ can be chosen verifying besides $\psi^{-1}s_{\alpha,\beta}\psi= s'_{\alpha,\beta}$ for all $\alpha^2=\beta^2=1$, if we compose $\psi$ with an automorphism of $\mathfrak{L}$ with suitable projection $\left( \begin{array}{cc} I_3 & C\\ 0 & I_2 \mathfrak{e}nd{array} \right)$. This justifies that the previously found elements in $\mathscr{W}(\Gamma'_1)$ also belong to $\mathscr{W}(\Gamma_1)$, and so any $\left( \begin{array}{cc} A & C\\ 0 & B \mathfrak{e}nd{array} \right)$ does. \mathfrak{e}nd{proof} \mathop{\rm sl}ubsection{Weyl group of the $\mathbb{Z}_2^4 \times \mathbb{Z}$-grading}\label{subsec_WeylZ2cuartaZ} Let $\Gamma_2$ be the $G_2$-grading on $\mathfrak{g}(S_8,S_2)\cong\mathfrak{e}_6$ described in Subsection~\ref{subsec_gradZZ2cuarta}, for $G_2= \mathbb{Z}_2\times\mathbb{Z}_2^3 \times \mathbb{Z}$, the first factor $\mathbb{Z}_2$ corresponding to the outer automorphism $\varrho$ coming from $S_2$, the $\mathbb{Z}_2^3$-grading induced by the three order two automorphisms $\{F_1,F_2,F_3\}\mathop{\rm sl}ubset\mathop{{\rm Aut}}(\mathfrak{e}_6)$ extending those ones of the paraoctonion algebra $S_8$, and the $\mathbb{Z}$-grading as in Equation~(\ref{eq_laZgrad}) produced by the torus $\{t_\alpha\mid \alpha\in\mathbb{F}^\times\}$. Identify any element of $\mathop{{\rm Aut}}(G_2)$ with its $1+3+1$-block matrix relative to the basis (\ref{eq_basedelgrupo}). \begin{prop} The Weyl group $\mathscr{W}(\Gamma_2)$ coincides with $$ \left\{\left(\begin{array}{c|c|c} \bar1&0&b \\ \hline 0&A&D\\ \hline 0&0&c \mathfrak{e}nd{array}\right) \mid b\in\{\bar0,\bar1\},c\in\{\pm1\}, A\in\mathop{\rm GL}_3(\mathbb{Z}_2),D\in\mathop{\rm Mat}_{3\times1}(\mathbb{Z}_2)\right\}. $$ Thus $$ \mathscr{W}(\Gamma_2) \cong \mathbb{Z}_2^4 \rtimes (\mathop{\rm GL}_3(\mathbb{Z}_2)\times\mathbb{Z}_2).$$ \mathfrak{e}nd{prop} \begin{proof} First note that the restriction of our grading to the subalgebra $\mathfrak{f}ix\varrho\cong\mathfrak{f}_4$ gives a $\mathbb{Z}_2^3 \times \mathbb{Z}$-fine grading on $\mathfrak{f}_4$. As the algebraic groups $\mathop{{\rm Aut}}(\mathbb{A})$ and $\mathop{{\rm Aut}}(\mathfrak{f}_4)$ are isomorphic, we can apply the results in \cite[Theorem~4.6]{EK11} about the Weyl group of the fine $\mathbb{Z}_2^3 \times \mathbb{Z}$-grading on $\mathbb{A}$ to guarantee that the Weyl group of the corresponding grading on $\mathfrak{f}_4$ is the whole group $\mathop{{\rm Aut}}( \mathbb{Z}_2^3 \times \mathbb{Z})$. This implies that all the elements of the form $\left(\begin{array}{c|c|c} \bar1&0&0 \\ \hline 0&A&D\\ \hline 0&0&c \mathfrak{e}nd{array}\right)$ belong to $\mathscr{W}(\Gamma_2)$. Notice that the torsion subgroup $\mathbb{Z}_2^4$ of $G_2$ is preserved by the action of $\mathscr{W}(\Gamma_2)$, so the zero blocks of the third row must appear. Observe also that if $(a_{ij})$ is the matrix of $\alpha_f\in\mathop{{\rm Aut}}(G_2)$ related to $f\in\mathop{{\rm Aut}}(\Gamma_2)$, not only this automorphism $f$ belongs to the normalizer of the MAD-group producing the grading, but we can specify its action: $$ \begin{array}{l} f^{-1}\varrho f=\varrho^{a_{11}}F_1^{a_{12}}F_2^{a_{13}}F_3^{a_{14}}t_{-1}^{a_{15}},\\ f^{-1}F_{i-1} f=\varrho^{a_{i1}}F_1^{a_{i2}}F_2^{a_{i3}}F_3^{a_{i4}}t_{-1}^{a_{i5}},\qquad \mathfrak{f}orall i=2,3,4,\\ f^{-1}t_\alpha f=t_{\alpha^c},\qquad c=a_{55}\in\{\pm1\},\mathfrak{f}orall\alpha\in\mathbb{F}^\times. \mathfrak{e}nd{array} $$ In particular, as $f$ applies inner automorphisms into inner automorphisms, then the zero blocks of the first column appear. 
Furthermore, in order to prove that the block located in the $(1,2)$-position is zero, it is enough to check that $\varrho$ is not conjugate to $\varrho F$ for any $F\in\langle F_1,F_2,F_3\rangle$. Indeed, if we think of $\mathfrak{e}_6$ as the direct sum of $\operatorname{fix}\varrho\cong\mathop{\hbox{\rm Der}}(\mathbb{A})$ and $\operatorname{antifix}\varrho\cong\mathbb{A}_0$, note that such an $F$ comes from an order two automorphism of the octonion algebra (fixing a quaternion algebra $Q$), first extended to the Albert algebra (fixing $\mathcal{H}_3(Q)$, of dimension $15$) and then to $\mathfrak{f}_4$ (fixing $C_3$, seen as $\mathop{\hbox{\rm Der}}(\mathcal{H}_3(Q))$, direct sum with an ideal of type $A_1$, so of dimension $24$). Thus $F$ is of type $2A$ and, what is more useful for us, the subalgebra fixed by $\varrho F$ has dimension $24+12$ (the fixed part of $\mathop{\hbox{\rm Der}}(\mathbb{A})$ and the antifixed part of $\mathbb{A}_0$), so that it is of type $C_4$ instead of $F_4$, as we wanted to prove.
To finish the proof, we need some further automorphisms. We consider again the model $\mathfrak{g}(S_8,S_2)\cong\mathfrak{e}_6$ and take the following maps,
\begin{equation}\label{eq_losautomorfismosdeDiego}
\begin{array}{rccl}
\Psi_i\colon &\mathfrak{e}_6 & \longrightarrow & \mathfrak{e}_6 \\
&\iota_i(x\otimes y) & \longmapsto & \iota_i(x\otimes y)\\
&\iota_{i+1}(x\otimes y) & \longmapsto & \textbf{i}\, \iota_{i+1}(x\otimes \eta(y))\\
&\iota_{i+2}(x\otimes y) & \longmapsto & -\textbf{i}\, \iota_{i+2}(x\otimes \eta(y))\\
&t & \longmapsto & t
\end{array}
\end{equation}
for every $t\in \mathfrak{tri}(S_8) \oplus \mathfrak{tri}(S_2)$ and for all $x\in S_8$, $y\in S_2$, $i=0,1,2$, where $\eta\colon S_2\rightarrow S_2$ is the linear map defined by $\eta(e_1)=e_1$ and $\eta(e_2)=-e_2$. A straightforward computation shows that the maps $\Psi_i$ are indeed order $4$ automorphisms of $\mathfrak{e}_6$. Observe that $\Psi_0$ belongs to $\mathop{{\rm Aut}}(\Gamma_2)$, since it preserves the $\mathbb{Z}$-grading, it commutes with all the $F_i$'s, and it satisfies $\Psi_0^{-1}\varrho\Psi_0=\varrho t_{-1}$ (so it interchanges $L_{(\bar0,*)}$ with $L_{(\bar1,*)}$). Then its projection on $\mathscr{W}(\Gamma_2)$ is
$$ \small{\left( \begin{array}{ccc} \bar1 & 0 & \bar1\\ 0 & I_3 & 0\\ 0 & 0 & 1 \end{array} \right), } $$
which finishes the proof.
\end{proof}
\subsection{Weyl group of the $\mathbb{Z}_2 \times \mathbb{Z}^4$-grading}\label{subsec_WeylZ2Zcuarta}
Let $\Gamma_3$ be the $G_3=\mathbb{Z}_2 \times \mathbb{Z}^4$-grading on $\mathfrak{g}(S_8,S_2)$ described in Subsection~\ref{subsec_gradZcuartaZ2}. The MAD-group producing the grading is $\mathop{\hbox{\rm Diag}}(\Gamma_3)=P_3=\langle\{\varrho\}\cup T_4\rangle$. Identify any element of $\mathop{{\rm Aut}}(G_3)$ with its $1+4$-block matrix relative to the canonical basis (\ref{eq_basedelgrupo}).
\begin{prop} The Weyl group $\mathscr{W}(\Gamma_3)$ coincides with \begin{equation}\label{eq_weyldegamma3} \mathscr{W}(\Gamma_3) = \left\{ \left(\begin{array}{c|c} \bar1& \begin{array}{cccc} a&b&a&b \mathfrak{e}nd{array} \\ \hline 0&A \mathfrak{e}nd{array}\right) \mid A\in\mathscr{W}_{\mathfrak{f}_4},\, a,b\in\mathbb{Z}_2 \right\}, \mathfrak{e}nd{equation} where $\mathscr{W}_{\mathfrak{f}_4}$ is the Weyl group of the Cartan grading of $\mathfrak{f}_4$, generated by $$\mathop{\rm sl}mall{ s_1=\left( \begin{array}{cccc} 0 & -1 & 1 & -1 \\ 1 & -1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \mathfrak{e}nd{array} \right),\,s_2=\left( \begin{array}{cccc} -1 & 1 & 0 & -1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \mathfrak{e}nd{array} \right),} $$ $$\mathop{\rm sl}mall{ s_3=\left( \begin{array}{cccc} 0 & 0 & 1 & -1 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \mathfrak{e}nd{array} \right),\,s_4=\left( \begin{array}{cccc} 1 & 0 & -1 & 1 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & -1 \mathfrak{e}nd{array} \right).} $$ Thus $$ \mathscr{W}(\Gamma_3) \cong \mathbb{Z}_2^2\rtimes \mathscr{W}_{\mathfrak{f}_4} \cong\mathbb{Z}_2^2\rtimes( (\mathbb{Z}_2^3\rtimes S_4)\rtimes S_3 ), $$ where $S_n$ denotes the symmetric group of $n$ elements. \mathfrak{e}nd{prop} \begin{proof} If $f\in\mathop{{\rm Aut}}(\Gamma_3)$ and $\alpha_f$ denotes its related matrix, it is clear that the first column must be as claimed, since the torsion subgroup $\mathbb{Z}_2$ of $G_3$ is invariant by automorphisms. Recall that the $\mathbb{Z}_2$-grading produced by $\varrho$ allows to identify $\mathfrak{g}(S_8,S_2)$ with $\mathfrak{e}_6=\mathop{\hbox{\rm Der}}(\mathbb{A})\oplus\mathbb{A}_0$ in such a way that the restriction of $T_4$ is the Cartan grading of $\mathop{\hbox{\rm Der}}(\mathbb{A})=\mathfrak{f}_4$ and any automorphism of $\mathop{\hbox{\rm Der}}(\mathbb{A})$ can be extended to $\mathfrak{e}_6$ commuting with $\varrho$. In particular we extend the automorphisms of $\mathfrak{f}_4$ which normalize the maximal torus, so that the elements $\left(\begin{array}{c|c} \bar1& 0 \\ \hline 0&A \mathfrak{e}nd{array}\right)$ belong to $\mathscr{W}(\Gamma_3)$ for all $A\in\mathscr{W}_{\mathfrak{f}_4}$. Conversely, if an element in $\mathscr{W}(\Gamma_3)$ has such a particular shape, then the block $A$ necessarily belongs to $\mathscr{W}_{\mathfrak{f}_4}$, taking into account that commuting with $\varrho$ forces any automorphism of $\mathfrak{e}_6$ to be an extension of an automorphism of $\mathfrak{f}_4$. The generators $\{s_i\mid i=1,\dots,4\}$ of $\mathscr{W}_{\mathfrak{f}_4}$ that we have chosen are, respectively, the ones related to the automorphisms $\psi_{(1,2,3)}$, $\psi_{(2,3)}$, $\psi_c$ and the extension of $\tau$ described in \cite[Theorem~4.2]{EK11} and with the notations used there. 
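As a computational aside (not needed for the argument), one can double-check by brute force that the four integer matrices above generate a finite group whose order should be $1152=8\cdot24\cdot6$, the order of $\mathscr{W}_{\mathfrak{f}_4}$; for instance with the following sketch:
\begin{verbatim}
# Closure of the subgroup of GL_4(Z) generated by s1,...,s4; the loop
# terminates because the generated group is finite, and the expected
# size is 1152 = |W(F_4)|.
import numpy as np

s1 = np.array([[0,-1, 1,-1],[1,-1, 1, 0],[0, 0, 1, 0],[0, 0, 0, 1]])
s2 = np.array([[-1, 1, 0,-1],[0, 1, 0, 0],[0, 0, 1, 0],[0, 0, 0, 1]])
s3 = np.array([[0, 0, 1,-1],[0, 1, 0, 0],[1, 0, 0, 1],[0, 0, 0, 1]])
s4 = np.array([[1, 0,-1, 1],[0, 1,-1, 0],[0, 0, 0,-1],[0, 0, 1,-1]])
gens = [s1, s2, s3, s4]

key = lambda m: tuple(map(int, m.flatten()))   # hashable form of a matrix
group = {key(np.eye(4, dtype=int)): np.eye(4, dtype=int)}
frontier = list(group.values())
while frontier:
    new = []
    for m in frontier:
        for g in gens:
            p = m @ g
            if key(p) not in group:
                group[key(p)] = p
                new.append(p)
    frontier = new
print(len(group))   # expected: 1152
\end{verbatim}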
The automorphisms $\Psi_i$ considered in Equation~(\ref{eq_losautomorfismosdeDiego}) belong to $\mathop{{\rm Aut}}(\Gamma_3)$ for all $i=0,1,2$, being their projections in $\mathscr{W}(\Gamma_3)$: $$\tiny{ \alpha_{\Psi_0}=\left( \begin{array}{c|c} \bar 1 & \begin{array}{cccc}\bar0 & \bar1 & \bar0 & \bar1\mathfrak{e}nd{array}\\ \hline 0&I_4 \mathfrak{e}nd{array} \right), \alpha_{\Psi_1}=\left( \begin{array}{c|c} \bar 1 & \begin{array}{cccc}\bar1 & \bar0 & \bar1 & \bar0\mathfrak{e}nd{array}\\ \hline 0&I_4 \mathfrak{e}nd{array} \right), \alpha_{\Psi_2}=\left( \begin{array}{c|c} \bar 1 & \begin{array}{cccc}\bar1 & \bar1 & \bar1 & \bar1\mathfrak{e}nd{array}\\ \hline 0&I_4 \mathfrak{e}nd{array} \right).} $$ Therefore, all the elements of the right side in Equation~(\ref{eq_weyldegamma3}) belong to the Weyl group. To prove the equality now is equivalent to prove that there is not $f\in\mathop{\hbox{\rm Norm}}(P_3)$ such that $f^{-1}\varrho f=\varrho t$ for $t=t_{(-1)^{b_1},(-1)^{b_2},(-1)^{b_3},(-1)^{b_4}}\in T_4$ with $b_i\in \{0,1\}$ when $b_1\neq b_3$ or $b_2\neq b_4$. Recall that $t$ acts in $L_{(\bar n_0,n_1,n_2,n_3,n_4)}$ with eigenvalue $(-1)^{b_1n_1+b_2n_2+b_3n_3+b_4n_4}$. But observe that $\varrho$ is not even conjugated to $\varrho t$, since the latter automorphism has type $2D$. Indeed, the subalgebra fixed by $\varrho t $ is $$ \begin{array}{rl} \mathfrak{f}ix(\varrho t)=&\mathfrak{f}ix t\vert_{\mathfrak{tri} (S_8)}\oplus (\bigoplus_{i=0}^{2}\iota_i(S_8\otimes 1))\cap \text{fix}(t)\\ & \oplus (\bigoplus_{i=0}^{2}\iota_i (S_8 \otimes (e_1-e_2)))\cap \text{antifix}(t). \mathfrak{e}nd{array} $$ If $x\in B$, being $B$ the canonical basis of $S_8$, $t$ acts in $\iota_i (x\otimes s)$ with eigenvalue either $1$ or $-1$ independently of the considered element $s\in S_2$, so that just one of the two elements $\iota_i (x\otimes 1)$ and $\iota_i (x\otimes (e_1-e_2))$ is fixed by $\varrho t$. Hence $\dim \mathfrak{f}ix(\varrho t)= \dim \mathfrak{f}ix(t |_{ \mathfrak{tri} (S_8)})+24$, which will be different from $52$ whenever $t|_{\mathfrak{tri} (S_8) }\ne\mathop{\rm id}$. But if $b_1\neq b_3$, $t$ acts on $t_{v_2,v_3}\in L_{(\bar 0,1,0,1,0)}$ with eigenvalue $-1$, and if $b_2\neq b_4$, $t$ acts on $t_{e_2,u_2}\in L_{(\bar 0,0,1,0,1)}$ with eigenvalue $-1$, proving our assertion. \mathfrak{e}nd{proof} \mathop{\rm sl}ubsection{Weyl group of the $\mathbb{Z}_3^2 \times \mathbb{Z}^2$-grading}\label{subsec_WeylZ3cuadZcuad} In Subsection~\ref{subsec_gradZcuadZ3cuad} we described the grading $\Gamma_4$ as the one produced by the MAD-group $P_4=\langle h_1, h_2, \mathcal{T}_2 \rangle\le\mathop{{\rm Aut}}(\mathcal{T}(\mathscr{C},\mathcal{J}))$. We work with $2+2$-block matrices relative to the canonical basis of $G_4=\mathbb{Z}_3^2 \times \mathbb{Z}^2$. \begin{prop} The Weyl group $\mathscr{W}(\Gamma_4)$ coincides with \begin{equation}\label{eq_elgrupodeweyldelostreses} \left\{\left(\begin{array}{c|c} A& \begin{array}{cc} a&a\\b&b \mathfrak{e}nd{array} \\ \hline 0&B \mathfrak{e}nd{array}\right)\mid A\in\mathop{\rm GL}_2(\mathbb{Z}_3),B\in\langle \mathop{\rm sl}igma,\tau\rangle, a,b\in\mathbb{Z}_3\right\} \mathfrak{e}nd{equation} where $$ \mathop{\rm sl}igma=\left(\begin{array}{cc}1&-1\\1&0\mathfrak{e}nd{array}\right) \quad \text{and} \quad \tau=\left(\begin{array}{cc}1&-1\\0&-1\mathfrak{e}nd{array}\right) $$ generate the dihedral group $D_6$. Thus $$ \mathscr{W}(\Gamma_4) \cong \mathbb{Z}_3^2 \rtimes (\mathop{\rm GL}_2(\mathbb{Z}_3) \times D_6). 
$$
\end{prop}

\begin{proof}
First of all, every automorphism of $G_4$ preserves the torsion subgroup, which forces a zero block in the $(2,1)$-position. It has been computed in \cite{normPauli} that the Weyl group of the Pauli $\mathbb{Z}_n^2$-grading on $\mathfrak{sl}_n(\mathbb{F})$ is the group $\{A\in\mathop{\rm Mat}_{2\times2}(\mathbb{Z}_n)\mid \det(A)\equiv\pm1 \pmod n\}$, the union of two cosets of $\mathop{\rm SL}_2(\mathbb{Z}_n)$, which in our case $n=3$ ($\mathfrak{sl}_3(\mathbb{F})\cong\mathop{\hbox{\rm Der}}(\mathcal{J})$) coincides with the group $\mathop{\rm GL}_2(\mathbb{Z}_3)$. Thus, the elements
$$\left( \begin{array}{cc} A & 0\\ 0 & I_2 \end{array} \right)$$
belong to $\mathscr{W}(\Gamma_4)$ for all $A\in \mathop{\rm SL}_2(\mathbb{Z}_3)\cup \left( \begin{smallmatrix} \bar1 & 0\\ 0 & \bar2 \end{smallmatrix} \right) \mathop{\rm SL}_2(\mathbb{Z}_3)=\mathop{\rm GL}_2(\mathbb{Z}_3)$, since $\mathop{{\rm Aut}}(\mathcal{J})$ and $\mathop{{\rm Aut}}(\mathop{\hbox{\rm Der}}(\mathcal{J}))$ are isomorphic algebraic groups, so that we can extend any automorphism of $\mathcal{J}$ to an automorphism of $\mathcal{T}(\mathscr{C},\mathcal{J})\,(\cong\mathfrak{e}_6)$ which commutes with $\mathcal{T}_2$. Similarly, if an automorphism $\psi$ whose class belongs to $\mathscr{W}(\Gamma_4)$ has matrix $\left( \begin{array}{cc} I_2 & 0\\ 0 & B \end{array} \right)$, then $\psi$ commutes with both $h_1$ and $h_2$, that is, $\psi$ belongs to the centralizer $\mathop{\hbox{\rm Cent}}\langle h_1, h_2\rangle\cong\langle h_1, h_2 \rangle \times \mathop{{\rm Aut}}(\mathscr{C})$ (\cite{e6}). Hence a certain $\psi h_1^{n_1}h_2^{n_2}$ belongs to $\mathop{{\rm Aut}}(\mathscr{C})$, and so it lies in the normalizer of $\mathcal{T}_2$. Thus $B$ belongs to the Weyl group of the $\mathbb{Z}^2$-grading on $\mathscr{C}$, which is isomorphic to the Weyl group of the Cartan grading on $\mathfrak{g}_2$, and consequently to the dihedral group $D_6$. To make this representation concrete in terms of our fixed basis of $G_4$, recall from \cite{EK11} that this Weyl group is generated by the classes of the following automorphisms of $\mathscr{C}$:
$$
\begin{array}{ll}
\rho\colon \mathscr{C} \rightarrow \mathscr{C},&\quad e_j \mapsto e_j,\ u_{i}\mapsto u_{i+1},\ v_{i} \mapsto v_{i+1},\\
\psi_1\colon \mathscr{C} \rightarrow \mathscr{C},&\quad e_1 \leftrightarrow e_2,\ u_i \leftrightarrow v_i,\\
\psi_2\colon \mathscr{C} \rightarrow \mathscr{C},&\quad e_j \mapsto e_j,\ u_1 \mapsto -u_1,\ u_2 \leftrightarrow u_3,\ v_1 \mapsto -v_1,\ v_2 \leftrightarrow v_3,
\end{array}
$$
where $i=1,2,3$ (mod 3) and $j=1,2$. Their extensions to $\mathfrak{e}_6$ belong to $\mathscr{W}(\Gamma_4)$ and have related matrices, respectively,
$$\small{\left( \begin{array}{ccc} I_2 & 0 & 0\\ 0 & -1 & 1\\ 0 & -1 & 0 \end{array} \right), \quad \left( \begin{array}{ccc} I_2 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & -1 \end{array} \right), \quad \left( \begin{array}{ccc} I_2 & 0 & 0\\ 0 & 1 & -1\\ 0 & 0 & -1 \end{array} \right)}.
$$
We need some knowledge about the various order $3$ elements in the torus of $P_4$. Let us denote by $t_{a,b}\in\mathcal{T}_2$ the automorphism of $\mathcal{T}(\mathscr{C},\mathcal{J})$ which extends the automorphism of $\mathscr{C}$ whose action on the canonical basis of $\mathscr{C}$ (see Subsection~\ref{subsec_involvedstructures}) is diagonal with scalars
$$
\{1,1,a,b,\frac1{ab},\frac1a,\frac1b,ab\}.
$$ Its extension to $\mathop{\hbox{\rm Der}}(\mathscr{C})$ is also diagonal in the basis of $\mathop{\hbox{\rm Der}}(\mathscr{C})$ given by $ \{D_{u_1,v_1},D_{u_2,v_2},D_{e_1,u_1},$ $D_{u_2,e_1},D_{e_1,u_3},D_{e_1,v_1},D_{e_1,v_2},D_{e_1,v_3} D_{u_1,v_2},D_{u_1,v_3},D_{u_2,v_1},D_{u_2,v_3},D_{u_3,v_1},D_{u_3,v_2}\}, $ with respective eigenvalues $$ \{1,1,a,b,\mathfrak{f}rac1{ab},\mathfrak{f}rac1a,\mathfrak{f}rac1b,ab,\mathfrak{f}rac ab,a^2b,\mathfrak{f}rac ba,ab^2,\mathfrak{f}rac1{a^2b},\mathfrak{f}rac1{ab^2}\}. $$ This list allows us to distinguish the conjugacy classes of the elements in the torus by the dimension of its fixed part. Thus, if we split the elements of order $3$ in $\mathcal{T}_2$ as $\mathcal{T}_2^1\cup\mathcal{T}_2^2$ for $\mathcal{T}_2^1=\{t_{\omega,\omega},t_{\omega^2,\omega^2}\}$ and $\mathcal{T}_2^2=\{t_{a,b}\mid a^3=b^3=1, a\ne b\}\mathop{\rm sl}etminus\{\mathop{\rm id}\}$, then the elements in $\mathcal{T}_2^1$ fix subalgebras of dimension $24$, so that they are order 3 automorphisms of type $3C$ and the elements in $\mathcal{T}_2^2$ belong to the class $3B$. We leave it to the reader to verify that $h_it$ is of type $3D$ if $t\in \mathcal{T}_2^1$ and of type $3C$ if $t\in \mathcal{T}_2^2$, for all $i=1,2$. This has an immediate consequence on the shape of the possible elements in the Weyl group. If $$ \left(\begin{array}{c|c}I_2& \begin{array}{cc} a&b\\c&d \mathfrak{e}nd{array} \\ \hline 0&I_2 \mathfrak{e}nd{array}\right)\in \mathscr{W}(\Gamma_4), $$ this forces $h_1$ to be conjugated to $h_1t_{\omega^{2a},\omega^{2b}}$, and $h_2$ to be conjugated to $h_2t_{\omega^{2c},\omega^{2d}}$, so that, by the arguments above, necessarily $a=b$ and $c=d$ (modulo 3). In order to finish the proof, we have to show that every element in the right side of Equation~(\ref{eq_elgrupodeweyldelostreses}) belong to the Weyl group, but it is sufficient to find one automorphism $\psi\in\mathop{{\rm Aut}}(\Gamma_4)$ with related matrix $\mathop{\rm sl}mall{\left(\begin{array}{c|c}I_2& \begin{array}{cc} \bar1&\bar1\\\bar0&\bar0 \mathfrak{e}nd{array} \\ \hline 0&B \mathfrak{e}nd{array}\right)}$ for some $B\in\mathfrak{sp}an{\mathop{\rm sl}igma,\tau }$. Within our search, we are going to change the viewpoint recurring to a different way of realizing the grading, as described in Subsection~\ref{subsec_modeloAdams}. The advantage is a greater disposability of automorphisms. Take as $\psi=\Psi \left(\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \mathfrak{e}nd{array}\right) \in \Psi(\mathop{\rm GL}_3(\mathbb{F}))\le\mathop{{\rm Aut}}(\mathcal{L})$. This automorphism $\psi$ commutes with $H_1$ and $H_2$ by construction, and verifies $\psi T_{\alpha,\beta}\psi^{-1}=T_{\beta,\mathfrak{f}rac1{\alpha\beta}}$, in particular $\psi\in\mathop{\hbox{\rm Norm}}(\mathcal{Q}_2)$ and then it can be projected on our Weyl group. But this cannot be done without making completely precise the identification: In spite that $P_4$ and $\mathcal{Q}_2$ are conjugated, the conjugating automorphism does not necessarily apply $h_i$ into $H_i$ and $t_{a,b}$ into $T_{a,b}$, although it always applies $\mathcal{T}_2$ into $\{ T_{a,b}\mid a,b\in\mathbb{F}^\times\}$ (the only two-dimensional torus contained in $P_4$). First note that (recall $\xi^9=1$) the subgroup $\mathfrak{sp}an{H_1 T_{\xi, \xi},H_2}$($\cong\mathbb{Z}_3^2$) has every order three element of type $3D$, as proved in \cite[Lemma~11]{e6}. Besides the elements of type $3B$ in the accompanying torus are $T_{\omega,1}$ and its square. 
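As an illustration (not part of the proof), the eigenvalue list just displayed can be handed to a computer to count, for each order three element $t_{a,b}$, how many eigenvalues equal $1$, that is, the dimension of the fixed subspace of $\mathop{\hbox{\rm Der}}(\mathscr{C})$: the two elements of $\mathcal{T}_2^1$ give $8$, the six elements of $\mathcal{T}_2^2$ give $4$, so the two families are already separated at this level. A minimal Python sketch:
\begin{verbatim}
import cmath
omega = cmath.exp(2j * cmath.pi / 3)
roots = [1, omega, omega**2]

def eigenvalues(a, b):   # the 14 eigenvalues of t_{a,b} on Der(C) listed above
    return [1, 1, a, b, 1/(a*b), 1/a, 1/b, a*b,
            a/b, a*a*b, b/a, a*b*b, 1/(a*a*b), 1/(a*b*b)]

for a in roots:
    for b in roots:
        if (a, b) == (1, 1):
            continue
        dim_fix = sum(abs(ev - 1) < 1e-9 for ev in eigenvalues(a, b))
        print(roots.index(a), roots.index(b), dim_fix)   # 8 iff a = b != 1, else 4
\end{verbatim}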
To avoid the inconvenience of the kernel of $\Psi$, consider the following isomorphism of two-dimensional algebraic torus $$ \begin{array}{rcl} (\mathbb{F}^\times)^2/\mathfrak{sp}an{(\omega,\omega)}&\to&(\mathbb{F}^\times)^2\\ \overline{(\alpha,\beta)}&\mapsto& (\alpha\beta^2,\alpha/\beta). \mathfrak{e}nd{array} $$ Thus consider $T_{\alpha,\beta}=:T'_{\alpha\beta^2,\alpha/\beta}$, $H_1':=H_1 T_{\xi, \xi}$ and $H_2':=H_2$. With this notation the order three elements in the torus of type $3C$ are $\{T'_{\omega,\omega},T'_{\omega^2,\omega^2}\}$, and the other ones are of type $3B$. Now we will see that there is an isomorphism $\Upsilon\colon \mathcal{L}\to\mathcal{T}(\mathscr{C},\mathcal{J})$ such that $P_4=\Upsilon \mathcal{Q}_2\Upsilon^{-1}$, $h_i=\Upsilon H_i'\Upsilon^{-1}$ and $t_{\alpha,\beta}=\Upsilon T'_{\alpha,\beta}\Upsilon^{-1}$ if $\alpha^3=\beta^3=1$. As obviously the equations $$ \begin{array}{l} \psi T'_{\alpha,\beta}\psi^{-1}=T'_{ \mathfrak{f}rac1{\alpha\beta},\alpha},\\ \psi H_1'\psi^{-1}=H'_1T'_{\omega,\omega},\\ \psi H_2'\psi^{-1}=H'_2, \mathfrak{e}nd{array} $$ hold, then $\Upsilon \psi\Upsilon^{-1}\in\mathop{\hbox{\rm Norm}}(P_4)$ has as related matrix $$ \left(\begin{array}{c|c}I_2& \begin{array}{cc} \bar1&\bar1\\\bar0&\bar0 \mathfrak{e}nd{array} \\ \hline 0& \begin{array}{cc} -1 & 1\\ -1 & 0\mathfrak{e}nd{array} \mathfrak{e}nd{array}\right), $$ as searched. The existence of $\Upsilon$ is justified by the following argument: the elements of type $3D$ in $\mathcal{Q}_2$ are just $\{{H'_1}^i{H'_2}^j{T'_{\omega,\omega}}^k\mid i,j,k\in\{0,1,2\},(i,j)\ne(0,0)\}$. An automorphism passing from $P_4$ to $\mathcal{Q}_2$ necessarily applies $h_1$ into one of these elements. By using previous elements in $\mathscr{W}(\Gamma_4)$ (any nonidentity element in $\mathfrak{sp}an{H_1',H_2'}$ can be moved into $H_1'$ by an element in the normalizer), we can replace the above automorphism with another one applying $h_1$ into $H'_1{T'_{\omega,\omega}}^j$ for some $j\in\{0,1,2\}$, and then by composing it with $\psi^{-j}$, with a third one which applies $h_1$ into $H'_1$. \mathfrak{e}nd{proof} \mathop{\rm sl}ubsection{Weyl group of the $\mathbb{Z}_2^5\times\mathbb{Z}$-grading}\label{subsec_WeylZZ2quinta} Take $\Gamma_5$ the $G_5=\mathbb{Z}_2^5\times\mathbb{Z}$-grading described in Subsection~\ref{subsec_gradZZ2quinta}, produced by $P_5=\mathfrak{sp}an{\theta, \widetilde{f}_{12}, \widetilde{f}_{13}, \widetilde{f}_{14}, \widetilde{f}_{15}}\cdot T_1$. Take the canonical basis of $G_5$ in order to identify the elements of $\mathscr{W}(\Gamma_5)\le\mathop{{\rm Aut}}(G_5)$ with their matrices relative to it, written by blocks $1+4+1$. 
\begin{prop}\label{prop_el5} The Weyl group $\mathscr{W}(\Gamma_5)$ coincides with the set $$ \left\{\left( \begin{array}{c|c|c} \bar1& \begin{array}{cccc}a&b&c&d\mathfrak{e}nd{array} &e\\ \hline 0&A& \kappa(A) \\ \hline 0& 0 & \pm1 \mathfrak{e}nd{array}\right) \mid A\in \mathop{\rm SP}_4(\mathbb{Z}_2),\,a,b,c,d,e\in\mathbb{Z}_2 \right\} , $$ where $\mathop{\rm SP}_4(\mathbb{Z}_2)=\{A\in\mathop{\rm Mat}_{4\times4}(\mathbb{Z}_2)\mid ACA^t=C\}$, for $C=\mathop{\rm sl}mall{\left( \begin{array}{cccc} \bar0 & \bar1 & \bar1 & \bar1 \\ \bar1 & \bar0 & \bar1 & \bar1 \\ \bar1 & \bar1 & \bar0 & \bar1 \\ \bar1 & \bar1 & \bar1 & \bar0 \mathfrak{e}nd{array} \right)}$, $$ \kappa_0(a_1,a_2,a_3,a_4)= \begin{cases}\bar0\, \text{ if $\ \vert\{i\mid a_i=\bar1\}\vert=1,2$},\\ \bar1\, \text{ if $\ \vert\{i\mid a_i=\bar1\}\vert=3,4$}, \mathfrak{e}nd{cases} $$ and $\kappa(A)=\kappa((a_{ij}))=\left(\begin{array}{c} \kappa_0(a_{11},a_{12},a_{13},a_{14})\\ \kappa_0(a_{21},a_{22},a_{23},a_{24})\\ \kappa_0(a_{31},a_{32},a_{33},a_{34})\\ \kappa_0(a_{41},a_{42},a_{43},a_{44}) \mathfrak{e}nd{array}\right)$ . Therefore, $$ \mathscr{W}(\Gamma_5) \cong (\mathop{\rm SP}_4(\mathbb{Z}_2)\times\mathbb{Z}_2) \ltimes \mathbb{Z}_2^5. $$ \mathfrak{e}nd{prop} \begin{proof} Since the action by conjugation of $\mathop{{\rm Aut}}(\Gamma_5)$ on $P_5$ takes inner automorphisms to inner automorphisms, any element in $\mathscr{W}(\Gamma_5)$ must have zero blocks in the positions $(2,1)$ and $(3,1)$. Also, the zero block in the position $(3,2)$ is a consequence of the fact that the torsion group of $G_5$ is preserved. The centralizer of $\theta$ is known to be isomorphic to $\mathop{\rm SP}_8(\mathbb{F})\cdot\mathfrak{sp}an{1,\theta}\cong\mathop{\rm SP}_8(\mathbb{F})\times \mathbb{Z}_2$, what implies that every automorphism commuting with $\theta$ is an extension of some automorphism of $\mathfrak{sp}\,(8)\mathfrak{e}quiv\mathfrak{c}_4$ and conversely. That means that any automorphism $f$ such that $\alpha_f=\left(\begin{array}{c|c|c} \bar1& 0 &0\\ \hline 0&A& B \\ \hline 0& 0 & c \mathfrak{e}nd{array}\right)\in\mathscr{W}(\Gamma_5)$ verifies that $f\vert_{\mathfrak{f}ix\theta}$ belongs to the automorphism group of the $\mathbb{Z}\times\mathbb{Z}_2^4$-grading on $\mathfrak{c}_4$ produced by the restriction to $\mathfrak{f}ix\theta$ of the automorphisms in $P_5 $. Its Weyl group has been proved in \cite[Theorem~5.7]{EK12} to be $\mathop{\rm SP}_4(\mathbb{Z}_2)\times\mathbb{Z}_2$. For completeness, we wish to get a matricial expression in terms of our basis. It is possible to take $c=-1$, considering for that purpose $\phi\colon\bigwedge^6V\to\mathbb{F}$ a fixed nonzero linear map, and then the automorphism of $L$ extending $\rho\colon\bigwedge^3V\to\bigwedge^3V^*$ given by $\mathfrak{sp}an{x,\rho(y)}=\phi(y\wedge x)$. Now assume that $f\in\mathop{\hbox{\rm Norm}}(P_5)$ commutes with $\theta$ and with $T_1$ (so $c=1$). Hence there is $X\in\mathop{\rm GL}_6(\mathbb{F})$ such that either $f$ or $f\theta$ equals $\widetilde X$. Any inner order 2 element in $P_5$ is $\widetilde Y$ for some $Y\in \mathcal{D}=\{\mathop{\hbox{\rm diag}}(\varepsilon_1,\dots,\varepsilon_6)\mid \varepsilon_i^2=1,\Pi_{i=1}^6\varepsilon_i=1\}$. As $Xf_{ij}X^{-1}\in \mathcal{D}$ for all $i,j$ (see item (ii) in Subsection~\ref{subsec_gradZZ2quinta}), this implies that $X$ is a permutation matrix (in each row and in each column, there is only one nonzero entry). 
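As an aside, the size of the group $\mathop{\rm SP}_4(\mathbb{Z}_2)$ appearing in Proposition~\ref{prop_el5} and the value of $\kappa$ on a given matrix are easy to confirm by brute force; the following Python sketch is included only as an illustration (the proof continues below) and simply enumerates all of $\mathop{\rm Mat}_{4\times4}(\mathbb{Z}_2)$.
\begin{verbatim}
import numpy as np
from itertools import product

C = np.array([[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]])

def is_symplectic(A):             # A C A^t = C over Z_2
    return np.array_equal((A @ C @ A.T) % 2, C)

count = sum(is_symplectic(np.array(bits).reshape(4, 4))
            for bits in product((0, 1), repeat=16))
print(count)                      # expected: 720 = |SP_4(Z_2)|

def kappa0(row):                  # 0 for one or two entries equal to 1, 1 for three or four
    return 0 if sum(row) in (1, 2) else 1

def kappa(A):
    return [kappa0(r) for r in A]

print(kappa(np.eye(4, dtype=int)))    # each row of I_4 has a single 1, so kappa(I_4) = 0
\end{verbatim}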
Thus there is $\mathop{\rm sl}igma\in S_6$ such that $X=\mathop{\hbox{\rm diag}}(a_1,\dots,a_6)p_\mathop{\rm sl}igma$ (recall that $p_\mathop{\rm sl}igma=(\delta_{\mathop{\rm sl}igma(i)j})_{ij}$ for $\delta$ the Kronecker delta) and the action on $\mathcal{D}$ is given by $$ X\mathop{\hbox{\rm diag}}(\varepsilon_1,\dots,\varepsilon_6) X^{-1}=\mathop{\hbox{\rm diag}}(\varepsilon_{\mathop{\rm sl}igma(1)},\dots,\varepsilon_{\mathop{\rm sl}igma(6)}). $$ Note that, for each $Y=\mathop{\hbox{\rm diag}}(\varepsilon_1,\dots,\varepsilon_6)\in\mathcal{D}\mathop{\rm sl}etminus\{I_6\}$, the automorphism $\widetilde Y$ is of type $2A$ if the cardinal $\vert\{i\in\{1,\dots,6\}\mid \varepsilon_i=-1\}\vert$ is either $2$ or $6$, and of type $ 2B$ otherwise. All the extensions of these elements are obviously conjugated by suitable elements in $\mathop{\hbox{\rm Norm}}(P_5)$, as well as all the elements of type $2A$ different from $f_{-1}=-\widetilde\mathop{\rm id}$. In particular, there is not a subgroup of type $\mathbb{Z}_2^4$ which is invariant for $\mathop{\hbox{\rm Norm}}(P_5)$. Anyway, $S_6$ does act in the quotient $\mathcal{D}/\mathfrak{sp}an{-\mathop{\rm id}}\cong\mathbb{Z}_2^4$ (the action $\mathop{\rm sl}igma\cdot \overline{Y}=\overline{p_\mathop{\rm sl}igma Yp_\mathop{\rm sl}igma^{-1}}$ is well defined) and it can be easily checked that for any cycle $\mathop{\rm sl}igma=(i,j)\in S_6$, the matrix $X_\mathop{\rm sl}igma$ relative to the basis $\{\overline{f}_{12}, \overline{f}_{13}, \overline{f}_{14}, \overline{f}_{15}\}$ verifies $X_\mathop{\rm sl}igma CX^t_\mathop{\rm sl}igma=C$. The key point now is that the column matrix $B$ in $\alpha_f$ is completely determined by $A=X_\mathop{\rm sl}igma$. If the $i$th row of $A$ ($i=1,2,3,4$) is given by $(a,\,b,\,c,\,d)\in\mathbb{Z}_2^4$, then $f_{1,i+1}$ is sent to $\mathop{\hbox{\rm diag}}((-1)^{a+b+c+d},(-1)^{a},(-1)^{b},(-1)^{c},(-1)^{d},1)$, which has four $-1$'s if and only if $(a,\,b,\,c,\,d)$ has three or four $\bar1$'s. Finally, we wish to know if every order two outer automorphism in $P_5 $ is conjugated to $\theta$ by an element in the normalizer, so that $a,b,c,d,e$ could take any value in $\{\bar0,\bar1\}$. The nontrivial point is to choose such an element inside the normalizer of the MAD-group, because of course we know that all these order two outer automorphisms are conjugated to $\theta$. Indeed, otherwise there would exist a fine $\mathbb{Z}\times\mathbb{Z}_2^4$-grading on $\mathfrak{f}_4$, what does not happen according to \cite{f4}. The required automorphisms are easy to find by working in our model. Note, if $\varphi=\text{diag}(\alpha_{ 1},\alpha_{2 },\alpha_{ 3},\alpha_{ 4},\alpha_{ 5},\alpha_{ 6})\in\mathop{\rm GL}_6(\mathbb{F})$, that $\widetilde\varphi$ commutes with every inner automorphism of $P_5 $, although it does not commute necessarily with $\theta$. More precisely, $\widetilde\varphi\theta\widetilde\varphi^{-1}\theta^{-1}$ acts in $L_0$ as $\mathscr{A}d(\varphi\varphi^t)$ and it acts in $e_{\mathop{\rm sl}igma(1)}\wedge e_{\mathop{\rm sl}igma(2)}\wedge e_{\mathop{\rm sl}igma(3)}\in L_1$ with eigenvalue $\mathfrak{f}rac{\alpha_{ \mathop{\rm sl}igma(1)}\alpha_{\mathop{\rm sl}igma(2) }\alpha_{ \mathop{\rm sl}igma(3)}}{\alpha_{ \mathop{\rm sl}igma(4)}\alpha_{\mathop{\rm sl}igma(5)}\alpha_{ \mathop{\rm sl}igma(6)}}$. 
Thus, the condition $\tilde\varphi\theta\tilde\varphi^{-1}=\theta \widetilde\psi$, for $\psi={\text{diag}(\beta_{ 1},\beta_{2 },\beta_{ 3},\beta_{ 4},\beta_{ 5},\beta_{ 6})}$ with $\beta_i\in\{\pm1\}$, is equivalent to the conditions ${\alpha_i}^2= \beta_i$ for all $i$ and $\Pi_{i=1}^6\alpha_i=1$. In particular, the automorphisms extending $ \varphi_1=\text{diag}(\textbf{i},-\textbf{i},1,1,1,1)$ and $\varphi_2=\text{diag}(-\textbf{i},\textbf{i},\textbf{i},\textbf{i},\textbf{i},\textbf{i})$ satisfy that $$ \widetilde{\varphi}_1\theta{\widetilde{\varphi}_1}^{-1}=\theta \widetilde{f}_{12},\quad \widetilde{\varphi}_2\theta{\widetilde{\varphi}_2}^{-1}=\theta f_{-1}, $$ so that they belong to $\mathop{{\rm Aut}}(\Gamma_5)$ with induced matrices in $\mathscr{W}(\Gamma_5)$ $$ \begin{array}{l} \left( \begin{array}{c|c|c} \bar 1& \begin{array}{cccc}\bar 1&\bar 0&\bar 0&\bar 0\mathfrak{e}nd{array} & 0 \\ \hline 0& I_4 &0 \\ \hline 0& 0 & 1 \mathfrak{e}nd{array}\right), \quad \left( \begin{array}{c|c|c} \bar 1& 0 & \bar 1 \\ \hline 0& I_4 &0 \\ \hline 0& 0 & 1 \mathfrak{e}nd{array}\right). \mathfrak{e}nd{array} $$ The proof is ended when we multiply for previous elements in the Weyl group. \mathfrak{e}nd{proof} \mathop{\rm sl}ubsection{Weyl group of the outer $\mathbb{Z}_2^3\times\mathbb{Z}^2$-grading}\label{subsec_WeylZcuadZ2cubo} For avoiding ambiguity, we must fix an isomorphism between the torus of $P_6$, $\{ \tau_{\alpha,\beta}^\diamondsuit\mid \alpha,\beta\in\mathbb{F}^\times\} $, and $ (\mathbb{F}^\times)^2$. If we notice that $\tau_{-1,-1}=\mathop{\rm id}$, and that the following is an isomorphism of two-dimensional algebraic torus $$ \begin{array}{rcl} (\mathbb{F}^\times)^2/\mathfrak{sp}an{(-1,-1)}&\to&(\mathbb{F}^\times)^2\\ \overline{(\alpha,\beta)}&\mapsto& (\alpha\beta,\alpha/\beta), \mathfrak{e}nd{array} $$ hence, the automorphisms $\tau'_{\alpha\beta,\alpha/\beta }:=\tau_{\alpha,\beta}$ are more convenient to work with the grading produced by $P_6 $, according to Equation~(\ref{eq_torosconisomorfismoprefijado}). Denote by $\Gamma_6$ the $G_6=\mathbb{Z}_2\times\mathbb{Z}_2^2\times\mathbb{Z}^2$-grading induced by $P_6=\langle \mathcal{T}heta, {g_1}^\diamondsuit, {g_2}^\diamondsuit \rangle \cdot \{{\tau'_{\alpha,\beta}}^\diamondsuit \mid \alpha,\beta\in\mathbb{F}^\times\}\le\mathop{{\rm Aut}}(\mathbf{L})$. Take as always the {canonical} basis of $G_6$. \begin{prop} The Weyl group $\mathscr{W}(\Gamma_6)$ is \begin{equation}\label{eq_ultimoweyl} \left\{\left( \begin{array}{c|c|c} \bar1& a \quad b & c \quad d \\ \hline \begin{array}{c}\bar0\\\bar0\mathfrak{e}nd{array} & A & \begin{array}{c} e \quad e \\ f \quad f \mathfrak{e}nd{array} \\ \hline 0 & 0 & B \mathfrak{e}nd{array}\right) \mid a,b,c,d,e,f\in\mathbb{Z}_2, A\in\mathop{\rm GL}_2(\mathbb{Z}_2), B\in \mathfrak{sp}an{ \mathop{\rm sl}igma_1,\mathop{\rm sl}igma_2,-\mathop{\rm id} }\right\} \mathfrak{e}nd{equation} where the set $\{\mathop{\rm sl}igma_1,\mathop{\rm sl}igma_2,-\mathop{\rm id}\}$ generates a group isomorphic to $\mathbb{Z}_2^2 \rtimes \mathbb{Z}_2.$ Hence, $$\mathscr{W}(\Gamma_6) \cong \mathbb{Z}_2^4 \rtimes ((\mathbb{Z}_2^2 \rtimes S_3) \times (\mathbb{Z}_2^2 \rtimes \mathbb{Z}_2)). $$ \mathfrak{e}nd{prop} \begin{proof} First, any element in $\mathscr{W}(\Gamma_6)$ has the first column as in Equation~(\ref{eq_ultimoweyl}), since the action by conjugation maps inner automorphisms to inner automorphisms. Also, there is a zero block in position $(3,2)$, since the torus is invariant. 
Second, if an automorphism $f\in\mathop{{\rm Aut}}(\Gamma_6)$ has as related matrix \begin{equation}\label{eq_laresttricccionultimocaso}\left(\begin{array}{c|c|c} \bar1& 0 &0\\ \hline 0&A& D \\ \hline 0& 0 & B \mathfrak{e}nd{array}\right)\in\mathscr{W}(\Gamma_6), \mathfrak{e}nd{equation} then, as in the proof of Proposition~\ref{prop_el5}, the restriction $f\vert_{\mathfrak{f}ix\mathcal{T}heta}$ belongs to the group of automorphisms of the $\mathbb{Z}_2^2\times\mathbb{Z}^2$-fine grading on $\mathfrak{f}ix\mathcal{T}heta\cong\mathfrak{c}_4$, and conversely. Such Weyl group is, following \cite[Theorem~5.7]{EK12}, isomorphic to $(\mathbb{Z}_2^2 \rtimes \mathop{\rm SP}_2(\mathbb{Z}_2)) \times (\mathbb{Z}_2^2 \rtimes \mathbb{Z}_2)$. We will provide matricial expressions in terms of our basis. Take the elements in $\mathop{\rm SP}_8(\mathbb{F})$, $$ p_1=\left(\begin{array}{c|c} 0&I_4\\ \hline I_4&0 \mathfrak{e}nd{array}\right), \qquad p_2=\left(\begin{array}{c|c} \begin{array}{cc} 0&I_2\\I_2&0 \mathfrak{e}nd{array}&0\\ \hline 0&I_4 \mathfrak{e}nd{array}\right), \qquad p_3=\left(\begin{array}{c|c} I_4&0\\ \hline 0&\begin{array}{cc} 0&I_2\\I_2&0 \mathfrak{e}nd{array} \mathfrak{e}nd{array}\right). $$ It holds $$ p_1\tau'_{\alpha,\beta}p_1^{-1}=\tau_{\alpha,\mathfrak{f}rac1\beta },\quad p_2\tau'_{\alpha,\beta}p_2^{-1}=\tau_{\mathfrak{f}rac1\beta,\mathfrak{f}rac1\alpha},\quad p_3\tau'_{\alpha,\beta}p_3^{-1}=\tau_{\beta,\alpha}, $$ so that $p_i^\diamondsuit$ is an automorphism of $\mathbf{L}$ with related matrix $ \left( \begin{array}{c|c|c} \bar1& 0 & 0 \\ \hline 0 & I_2 & 0 \\ \hline 0 & 0 & * \mathfrak{e}nd{array}\right) $, and in the $(3,3)$-position $\mathop{\rm sl}igma_2$, $-\mathop{\rm sl}igma_1$ and $\mathop{\rm sl}igma_1$ respectively. On the other hand, $\{\textbf{i}\mathop{\rm sl}igma_1,-\textbf{i}\mathop{\rm sl}igma_2,\mathop{\rm sl}igma_3=\mathop{\rm sl}igma_1\mathop{\rm sl}igma_2\}$ are the Pauli matrices and are all of them interchangeable in $\mathop{\rm Mat}_{2\times2}(\mathbb{F})$. Thus, if $p\mathop{\rm sl}igma_1p^{-1}=\mathop{\rm sl}igma_2$, the element $\mathop{\hbox{\rm diag}}(p,p,p,p)^\diamondsuit$ commutes with $\tau_{\alpha,\beta}$ and with $ \mathcal{T}heta$, and it applies $g_1^\diamondsuit$ into $g_2^\diamondsuit$. In this way $\left(\begin{array}{c|c|c} \bar1& 0 &0\\ \hline 0&A& 0 \\ \hline 0& 0 &I_2 \mathfrak{e}nd{array}\right)\in\mathscr{W}(\Gamma_6)$ for all $A\in\mathop{\rm GL}_2(\mathbb{Z}_2)$. For the block $D$ in the $(2,3)$-position, we analyze the conjugacy class of the order two elements involved: if we denote $g_3=g_1 g_2$, the elements of type $2A$ are $\{g_i^\diamondsuit,(g_i\tau'_{-1,-1})^\diamondsuit,\tau'^\diamondsuit_{-1,1},\tau'^\diamondsuit_{1,-1} \mid i=1,2,3\}$ and those of type $2B$ are $\{(g_i\tau'_{-1,1})^\diamondsuit,(g_i\tau'_{1,-1})^\diamondsuit,\tau'^\diamondsuit_{-1,-1} \mid i=1,2,3\}$. Hence the only possibility for $D$ is $\left( \begin{array}{cc} e & e \\ f & f \mathfrak{e}nd{array} \right)$, which effectively happens: note that the automorphism $\psi= \left(\begin{array}{c|c|c} I_4&0&0\\ \hline 0&\mathop{\rm sl}igma_1&0\\ \hline 0&0&\mathop{\rm sl}igma_1 \mathfrak{e}nd{array} \right)^\diamondsuit $ commutes with $\tau_{\alpha,\beta}$, $ \mathcal{T}heta$, and $g_1^\diamondsuit$, and it verifies $\psi g_2^\diamondsuit\psi^{-1}=(\tau_{1,-1}g_2)^\diamondsuit=(\tau'_{-1,-1}g_2)^\diamondsuit$. 
We have, then, the required subgroup isomorphic to $(\mathbb{Z}_2^2 \rtimes \mathop{\rm SP}_2(\mathbb{Z}_2)) \times (\mathbb{Z}_2^2 \rtimes \mathbb{Z}_2)$ and hence we must have found every element of the form (\ref{eq_laresttricccionultimocaso}). \mathop{\rm sl}mallskip Third, we are left with the task of checking that every order two outer automorphism in $P_6 $ is conjugated to $\mathcal{T}heta$ by an element in the normalizer. It is enough to check that every order two outer automorphism in $P'_6 $ is conjugated to $\theta$ by an element in the normalizer. Set $\varphi_0=\text{diag}(1,1,1,\textbf{i},1,-\textbf{i})$ and $\varphi_1=\text{diag}(\textbf{i},-\textbf{i},1,1,1,1)$. By the arguments in the proof of Proposition~\ref{prop_el5}, both $\widetilde{\varphi}_0$ and $\widetilde{\varphi}_1$ commute with the elements $\widetilde{g}'_1$, $\widetilde{g}'_2$ and the elements of $T_1$ and $T'_1$, and also satisfy $\widetilde{\varphi}_0\theta = \theta \widetilde{g}'_1 \widetilde{\varphi}_0$ and $\widetilde{\varphi}_1\theta = \theta \widetilde{\psi}_{-1,0} \widetilde{\varphi}_1$. Therefore, the elements of $\mathscr{W}(\Gamma_6)$ induced by them are, respectively, $$ \begin{array}{l} \left( \begin{array}{c|c|c} \bar 1& \bar1 \quad \bar0 & \bar0 \quad \bar0 \\ \hline 0 & I_2 & 0 \\ \hline 0 & 0 & I_2 \mathfrak{e}nd{array}\right), \quad \left( \begin{array}{c|c|c} \bar1& \bar0 \quad \bar0 & \bar1 \quad \bar0 \\ \hline 0 & I_2 & 0 \\ \hline 0 & 0 & I_2 \mathfrak{e}nd{array}\right), \mathfrak{e}nd{array} $$ and it follows that the parameters $a,b,c,d\in\mathbb{Z}_2$ can take all possible values. \mathfrak{e}nd{proof} \mathop{\rm sl}mallskip Summarizing the results of the paper, \begin{theo}\label{th_maintheorem} The Weyl groups of the fine gradings up to equivalence different from the Cartan grading on the Lie algebra of type $\mathfrak{e}_6$ over an algebraically closed field of characteristic zero with infinite universal grading group are: \begin{itemize} \item[i)] $\mathscr{W}(\Gamma_1) \cong \mathop{\rm Mat}_{3\times2}(\mathbb{Z}_2) \rtimes (\mathop{\rm GL}_3(\mathbb{Z}_2) \times D_3)$, for $\Gamma_1$ the $\mathbb{Z}_2^3 \times\mathbb{Z}^2$-grading of type $(48, 1, 0, 7)$ described in Subsection~\ref{subsec_gradZcuadZ2cubo}. \item[ii)] $\mathscr{W}(\Gamma_2) \cong \mathbb{Z}_2^4 \rtimes (\mathop{\rm GL}_3(\mathbb{Z}_2)\times\mathbb{Z}_2)$, for $\Gamma_2$ the $\mathbb{Z}_2^4 \times\mathbb{Z}$-grading of type $(57,0,7 )$ described in Subsection~\ref{subsec_gradZZ2cuarta}. \item[iii)] $\mathscr{W}(\Gamma_3) \cong\mathbb{Z}_2^2\rtimes( (\mathbb{Z}_2^3\rtimes S_4)\rtimes S_3 )$, for $\Gamma_3$ the $\mathbb{Z}_2 \times\mathbb{Z}^4$-grading of type $(72, 1, 0, 1)$ described in Subsection~\ref{subsec_gradZcuartaZ2}. \item[iv)] $\mathscr{W}(\Gamma_4) \cong \mathbb{Z}_3^2 \rtimes (\mathop{\rm GL}_2(\mathbb{Z}_3) \times D_6)$, for $\Gamma_4$ the $\mathbb{Z}_3^2\times\mathbb{Z}^2$-grading of type $(60,9 )$ described in Subsection~\ref{subsec_gradZcuadZ3cuad}. \item[v)] $\mathscr{W}(\Gamma_5) \cong (\mathop{\rm SP}_4(\mathbb{Z}_2)\times\mathbb{Z}_2) \ltimes \mathbb{Z}_2^5$, for $\Gamma_5$ the $\mathbb{Z}_2^5\times\mathbb{Z}$-grading of type $( 73,0,0,0,1)$ described in Subsection~\ref{subsec_gradZZ2quinta}. \item[vi)] $\mathscr{W}(\Gamma_6)\cong \mathbb{Z}_2^4 \rtimes ((\mathbb{Z}_2^2 \rtimes S_3) \times (\mathbb{Z}_2^2 \rtimes \mathbb{Z}_2))$, for $\Gamma_6$ the $\mathbb{Z}_2^3\times\mathbb{Z}^2$-grading of type $(60,7,0,1 )$ described in Subsection~\ref{subsec_gradZoutercuadZ2cubo}. 
\end{itemize}
\end{theo}

\begin{thebibliography}{99}
\bibitem{Adams} J.F.~Adams. {\sl Lectures on exceptional Lie groups.} Chicago Lectures in Mathematics. University of Chicago Press, Chicago, IL, 1996.
\bibitem{Viruannals} K.K.S.~Andersen, J.~Grodal, J.M.~M{\o}ller and A.~Viruel. {\sl The classification of p-compact groups for p odd}. Ann. of Math. (2) \textbf{167} (2008), no.\,1, 95--210.
\bibitem{Carter} R.W.~Carter. {\sl Finite groups of Lie type: Conjugacy Classes and Complex Characters}. John Wiley and Sons, 1985.
\bibitem{Coxeter} H.S.M.~Coxeter. {\sl Regular Complex Polytopes}. Second edition, Cambridge University Press, 1991.
\bibitem{miomodelos} C.~Draper. {\sl Models of the Lie algebra $F_4$}. Linear Algebra Appl. \textbf{428} (2008), no.\,11-12, 2813--2839.
\bibitem{f4} C.~Draper and C.~Mart\'{\i}n. {\sl Gradings on the Albert algebra and on $\mathfrak{f}_4$}. Rev. Mat. Iberoam. \textbf{25} (2009), no.\,3, 841--908.
\bibitem{e6} C.~Draper and A.~Viruel. {\sl Fine gradings on $\mathfrak{e}_6$.} arXiv:1207.6690v1.
\bibitem{ElduqueCompo1} A.~Elduque. {\sl The magic square and symmetric compositions}. Rev. Mat. Iberoam. \textbf{20} (2004), no.\,2, 475--492.
\bibitem{Eldgradsencomposimetricas} A.~Elduque. {\sl Gradings on symmetric composition algebras}. J. Algebra \textbf{322} (2009), no.\,10, 3542--3579.
\bibitem{Albclasicas} A.~Elduque. {\sl Fine gradings on simple classical Lie algebras}. J. Algebra \textbf{324} (2010), no.\,12, 3532--3571.
\bibitem{ElduqueRootgradings} A.~Elduque. {\sl Fine gradings and gradings by root systems on simple Lie algebras}. arXiv:1303.0651.
\bibitem{EK11} A.~Elduque and M.~Kochetov. {\sl Weyl groups of fine gradings on matrix algebras, octonions and the Albert algebra}. J. Algebra \textbf{366} (2012), 165--186.
\bibitem{EK12} A.~Elduque and M.~Kochetov. {\sl Weyl groups of fine gradings on simple Lie algebras of types A, B, C and D}. Serdica Math. J. \textbf{38} (2012), no.\,1-3, 7--36.
\bibitem{FulHar} W.~Fulton and J.~Harris. {\sl Representation Theory. A first course}. Graduate Texts in Mathematics, 129. Springer-Verlag, New York, 1991.
\bibitem{normPauli} M.~Havl\'{\i}\v{c}ek, J.~Patera, E.~Pelantov\'{a} and J.~Tolar. {\sl Automorphisms of the fine grading of $sl(n,\mathbb{C})$ associated with the generalized Pauli matrices}. J. Math. Phys. \textbf{43} (2002), 1083--1094.
\bibitem{Humphreysalg} J.E.~Humphreys. {\sl Introduction to Lie algebras and representation theory}. Graduate Texts in Mathematics, vol. 9, Springer-Verlag, New York, 1978, Second printing, revised.
\bibitem{Kac} V.G.~Kac. {\sl Infinite dimensional Lie algebras}. Cambridge University Press, 1990.
\bibitem{Koc09} M.~Kochetov. {\sl Gradings on finite-dimensional simple Lie algebras}. Acta Appl. Math. \textbf{108} (2009), no.\,1, 101--127.
\bibitem{Manivel} L.~Manivel. {\sl Configurations of lines and models of Lie algebras}. J. Algebra \textbf{304} (2006), no.\,1, 457--486.
\bibitem{enci} A.~L.~Onishchik and E.~B.~Vinberg (Editors). {\sl Lie Groups and Lie Algebras III, Encyclopaedia of Mathematical Sciences, Vol. 41}. Springer-Verlag, Berlin, 1991.
\bibitem{LGI} J.~Patera and H.~Zassenhaus. {\sl On Lie gradings. I}. Linear Algebra Appl. \textbf{112} (1989), 87--159.
\bibitem{checosWeyl} E.~Pelantov\'{a}, M.~Svobodov\'{a} and S.~Tremblay. {\sl Fine grading of $\text{sl}\,(p^2,\mathbb{C})$ generated by tensor product of generalized Pauli matrices and its symmetries}. J. Math. Phys. \textbf{47} (2006), no.\,1, 013512.
\bibitem{Schafer} R.D.~Schafer. {\sl An introduction to nonassociative algebras}. Dover Publications Inc., New York, 1995. Corrected reprint of the 1966 original.
\bibitem{Tits} J.~Tits. {\sl Alg\`{e}bres alternatives, alg\`{e}bres de Jordan et alg\`{e}bres de Lie exceptionnelles. I. Construction}. Nederl. Akad. Wetensch. Proc. Ser. A 69 = Indag. Math. \textbf{28} (1966), 223--237.
\bibitem{elchino} J.~Yu. {\sl Maximal abelian subgroups of compact simple Lie groups}. arXiv:1211.1334v1.
\bibitem{libroRussi} K.A.~Zhevlakov, A.M.~Slin'ko, I.P.~Shestakov and A.I.~Shirshov. {\sl Rings that are nearly associative}. Pure and Applied Mathematics, vol. 104. Academic Press Inc., New York, 1982.
\end{thebibliography}
\end{document}
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch}{#1}\small\normalsize} \spacingset{1}
\if00
{
\title{\bf Efficient Bayesian estimation for flexible panel models for multivariate outcomes: Impact of life events on mental health and excessive alcohol consumption\thanks{The research of Gunawan and Kohn was partially supported by the ARC Center of Excellence grant CE140100049. The research of all the authors was also partially supported by ARC Discovery Grant DP150104630.}}
\author[1]{David G. Gunawan}
\author[1]{Chris K. Carter}
\author[1]{Denzil G. Fiebig}
\author[1]{Robert J. Kohn}
\affil[1]{School of Economics, University of New South Wales}
} \fi
\if10
{
\bigskip \bigskip \bigskip
\begin{center}
{\LARGE\bf Efficient Bayesian estimation for flexible panel models for multivariate outcomes: Impact of life events on mental health and excessive alcohol consumption}
\end{center}
} \fi
\bigskip
\begin{abstract}
We consider the problem of estimating a flexible multivariate longitudinal panel data model whose outcomes can be a combination of discrete and continuous variables and whose dependence structures are modelled using copulas. This is a challenging problem because the likelihood is usually analytically intractable. Our article makes both a methodological contribution and a substantive contribution to the application. The methodological contribution is to introduce into the panel data literature a particle Metropolis within Gibbs method to carry out Bayesian inference, using a Hamiltonian Monte Carlo \citep{Neal:2011} proposal for sampling the high dimensional vector of unknown parameters. Our second contribution is to apply our method to analyse the impact of serious life events on mental health and excessive alcohol consumption.
\end{abstract}
\noindent {\it Keywords:} Copula; Hamiltonian Monte Carlo; Particle Gibbs; Pseudo marginal method
\vfill
\spacingset{1.45}
\section{Introduction}\label{S: introduction}
Our article considers estimating a flexible longitudinal panel data model with multivariate outcomes that are a combination of discrete and continuous variables. In general, estimating such nonlinear and non-Gaussian longitudinal models is challenging because the likelihood is an integral over the latent individual random effects and the observations are not Gaussian. Our article makes two substantive contributions. First, we introduce into the longitudinal panel data methodology a version of particle Metropolis-within-Gibbs (PMwG) \citep{Andrieu:2010} that allows us to carry out Bayesian inference where the unknown model parameters are generated using a proposal obtained by Hamiltonian Monte Carlo \citep{Neal:2011}. We note that the parameter vector in panel data models is often high dimensional, usually because there are many covariates, so that a Metropolis-Hastings proposal based on Hamiltonian Monte Carlo (HMC) can be much more efficient than competing proposals such as a random walk; see \Cref{Sec: PMCMC samplers} for more details.
We show in a simulation study that our PG approach outperforms two other approaches for estimating such panel data models with random effects and intractable likelihoods. The first is the standard data augmentation MCMC as in \cite{Albert1993}. The second is the pseudo marginal Metropolis-Hastings (PMMH) approach in \cite{Andrieu:2009} and \cite{Andrieu:2010}. The motivation for our methodological development, and our second contribution, is to investigate the impact of life events on mental health and excessive alcohol consumption using the Household, Income, and Labour Dynamics in Australia (HILDA) panel data set. In the literature that investigates the impact of life events it is standard to consider a single outcome of interest, e.g. mental health or life satisfaction, and if multiple outcomes are considered, then they are estimated as separate models; see, for example, \citet{Lindeboom:2002}, \citet{Frijters:2011} and \citet{Buddelmeyer:2016}. We contend that in many cases, we can gain additional insight by jointly estimating the models for the outcomes. Although it is unsurprising that there is an association between mental health problems and excessive alcohol consumption, establishing the nature of those links requires further research. A natural approach is to attempt to identify causal effects as in \citet{Mentzakis:2015}, but this relies on the availability of instruments or a natural experiment. We use a reduced form approach but argue that it has the potential to provide complementary and useful evidence; see \citet{Kleinberg:2015} for similar arguments. Providing a description of the joint distribution of related outcomes is often of interest and has the potential to inform about causal links, albeit indirectly. It is common in this literature to simplify the outcomes to allow such joint estimation. For example, \citet{Contoyannis:2004} consider a joint model of a range of lifestyle variables in their study of health and lifestyle choice. \citet{Buchmueller:2013} model choices across several insurance types. Both of these studies are representative of the methodology where outcomes of interest are a mix of continuous and various types of discrete variables which are all converted into binary outcomes in order to accommodate estimation as a multivariate probit model (MVP) model using simulated maximum likelihood (SML). The bivariate probit that accommodates the longitudinal nature of the data serves as our baseline model. This model is also be of independent interest motivated by numerous applications of the MVP model; see for example \citet{Atella:2004} and \citet{Mullahy2016}. However, in general, there can be associated costs in simplifying the multivariate structure to fit into the MVP framework. Abstracting from the mix of outcomes can obscure interesting features of the relationship. Furthermore, the MVP model imposes a linear correlation structure that may lead to misspecification risk when the existence of an asymmetric or nonlinear dependence structure is plausible. Because the MVP model is limited in these ways, we also consider models that accommodate one continuous outcome and one that is categorical. Here we maintain the assumption of normality for the marginal distributions while allowing both normal and non-normal dependence of error terms, where non-normal dependence is obtained by using copulas. 
Problems induced by correlated individual effects are addressed with Mundlak type specifications that perform reasonably well in practice; see for example \citet{Contoyannis:2004b} and \citet{Woolridge:2005}. The paper is organised as follows. The panel data model is outlined in Section \ref{sec:General-Panel-Data}. The Bayesian estimation methodology is given in Section \ref{Sec: PMCMC samplers}. The HILDA dataset is described in Section \ref{S: data and characterization}. Section \ref{S: results and discussion} discusses the estimation results. Section \ref{S: conclusions} concludes. The paper has two appendices. Appendix \ref{sec:lemma1} provides proofs of the theorem stated in Section \ref{Sec: PMCMC samplers}. Appendix \ref{sec:Empirical-Results} presents some additional empirical results. The paper also has an online supplement whose sections are denoted as Sections~S1, etc.
\section{General Panel Data Models with Random Effects\label{sec:General-Panel-Data}}
Our motivating example is to investigate the impact of life events on excessive alcohol consumption $(y_1)$ and mental health $(y_2)$. To match our motivating example, we describe the following panel data models with two outcomes. The extension to three or more outcomes is conceptually straightforward, but can be much more demanding computationally. \Cref{ss: biv probit with random effects} defines a bivariate probit model which serves as a baseline for comparison with the models discussed later. \Cref{ss: mixed biv model with random effects} then extends the model to accommodate one continuous outcome and one categorical variable. Both of these models impose a linear correlation structure, which represents a misspecification risk when the existence of asymmetric or non-linear dependence structures is plausible. We therefore further extend the model so that we still maintain the assumption of normality for the marginal distributions while introducing non-normal dependence of the error terms using copulas; this is done in Section \ref{SS: copulal models}. Lastly, we also define and briefly discuss the Mundlak type specifications which are used for all the models in this section (see \citet[pg 615-616]{Mundlak:1978,Woolridge:2010}). Note that the models defined in this section can be applied more generally to any variables of interest that are a combination of discrete and continuous variables; they are not restricted to the applications in this paper.
\subsection{Bivariate Probit Model with Random Effects}\label{ss: biv probit with random effects}
We first consider the joint distribution of two binary outcomes given by a bivariate probit model. Let $y_{1,it}$ and $y_{2,it}$ be the two observed binary outcomes, for $i=1,...,P$ people and $t=1,...,T$ time periods.
The bivariate probit model is defined using the following latent variable specification
\begin{align}
y_{1,it}^{*} &=\boldsymbol{x}_{1,it}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{11}+\overline{\boldsymbol x}_{1,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{12}+\alpha_{1,i}+\varepsilon_{1,it} \quad \text{and} \quad y_{2,it}^{*}=\boldsymbol{x}_{2,it}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{21}+\overline{\boldsymbol x}_{2,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{22}+\alpha_{2,i}+\varepsilon_{2,it}, \label{eq: biv probit}\\
\intertext{where}
\boldsymbol{\alpha}_{i}&:=\left ( \alpha_{1,i}, \alpha_{2,i}\right )^{\text{\tiny {\it {T}}}} \sim N \left ( \boldsymbol{0}, \boldsymbol{\Sigma_{\alpha}} \right ) \quad \text{and} \quad \boldsymbol{\varepsilon}_{it}=\left ( \varepsilon_{1,it}, \varepsilon_{2,it}\right )^{\text{\tiny {\it {T}}}} \sim N \left ( \boldsymbol{0}, \boldsymbol{\Sigma_{\varepsilon}} \right ) \label{eq: alpha and vareps distns}
\intertext{with}
\boldsymbol{\Sigma_{\alpha}} & := \begin{pmatrix} \tau_1^2 & \rho_\alpha \tau_1 \tau_2 \\ \rho_\alpha \tau_1 \tau_2 & \tau_2^2 \end{pmatrix} \quad \text{and} \quad \boldsymbol{\Sigma_{\varepsilon}} := \begin{pmatrix} 1 & \rho_\varepsilon \\ \rho_\varepsilon & 1 \end{pmatrix}. \label{eq: cov matrices}
\end{align}
In \cref{eq: biv probit}, $\boldsymbol{x}_{j,it},j=1,2,$ are the exogenous variables which may be associated with the two outcomes (including serious/major life events) and
\begin{align*}
\overline {\boldsymbol{x}}_{i} &:= \left ( \overline{\boldsymbol{x}}_{1,i}^{\text{\tiny {\it {T}}}}, \overline{\boldsymbol{x}}_{2,i}^{\text{\tiny {\it {T}}}} \right)^{\text{\tiny {\it {T}}}} = \frac1T\sum_{t=1}^T \boldsymbol{x}_{it}^{(M)}
\end{align*}
is the average over the sample period of the observations on a subset $\boldsymbol{x}_{it}^{(M)}$ of the exogenous variables. In \cref{eq: biv probit,eq: alpha and vareps distns}, $\boldsymbol{\alpha}_{i}$ is an individual-specific and time-invariant random component comprising potentially correlated outcome-specific effects, and $\boldsymbol{\varepsilon}_{it}$ is the idiosyncratic disturbance term that varies over time and individuals; it is assumed to be bivariate normally distributed, allowing for contemporaneous correlation, but is otherwise uncorrelated across individuals and time and is also uncorrelated with $\boldsymbol{\alpha}_{i}$. In this model $y_{1,it}^{*}$ and $y_{2,it}^{*}$ are unobserved. The observed binary outcomes are defined as
\begin{align}
y_{1,it}:=I\left(y_{1,it}^{*}>0\right) \quad \text{and} \quad y_{2,it}:=I\left(y_{2,it}^{*}>0\right). \label{eq: obsns baseline model}
\end{align}
The explanatory variables $\left(\boldsymbol{x}_{it},i=1, \dots, P, t=1, \dots, T\right)$ are also assumed to be exogenous with respect to the individual random effects $\boldsymbol{\alpha}_{i}$.
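The following Python sketch, included only as an illustration with hypothetical dimensions and parameter values, simulates data from the bivariate probit random effects model \eqref{eq: biv probit}--\eqref{eq: obsns baseline model}, including the Mundlak time averages.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

P, T, k = 500, 10, 3                         # people, periods, covariates per equation
beta11, beta12 = rng.normal(size=k), rng.normal(size=k)
beta21, beta22 = rng.normal(size=k), rng.normal(size=k)
tau1, tau2, rho_a, rho_e = 0.8, 0.6, 0.4, 0.3

Sigma_alpha = np.array([[tau1**2, rho_a*tau1*tau2], [rho_a*tau1*tau2, tau2**2]])
Sigma_eps   = np.array([[1.0, rho_e], [rho_e, 1.0]])

x1 = rng.normal(size=(P, T, k))              # exogenous covariates, equation 1
x2 = rng.normal(size=(P, T, k))              # exogenous covariates, equation 2
x1bar, x2bar = x1.mean(axis=1), x2.mean(axis=1)        # Mundlak time averages

alpha = rng.multivariate_normal(np.zeros(2), Sigma_alpha, size=P)       # (P, 2)
eps   = rng.multivariate_normal(np.zeros(2), Sigma_eps, size=(P, T))    # (P, T, 2)

ystar1 = x1 @ beta11 + (x1bar @ beta12)[:, None] + alpha[:, [0]] + eps[:, :, 0]
ystar2 = x2 @ beta21 + (x2bar @ beta22)[:, None] + alpha[:, [1]] + eps[:, :, 1]
y1, y2 = (ystar1 > 0).astype(int), (ystar2 > 0).astype(int)
print(y1.mean(), y2.mean())
\end{verbatim}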
Including the terms $\overline{\boldsymbol x}_{1,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{12}$ and $\overline{\boldsymbol x}_{2,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{22}$ in \cref{eq: biv probit} is called the \citet{Mundlak:1978} correction because we can now regard $\left (\alpha_{1,i} + \overline{\boldsymbol x}_{1,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{12}, \alpha_{2,i} + \overline{\boldsymbol x}_{2,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{22} \right )^{\text{\tiny {\it {T}}}}$ as the $i$th composite random effect, which is now potentially correlated with the exogenous covariates $\boldsymbol x_{it}$. The joint density, conditional on the vector of individual random effects $\boldsymbol{\alpha}_{i}$, is
\begin{align}\label{eq: biv probit likel}
p\left(\boldsymbol{y}|\boldsymbol{\theta},\boldsymbol{\alpha}\right) &=\prod_{i=1}^{P}\prod_{t=1}^{T} \Phi_{2}\left(\boldsymbol{\mu}_{it} , \boldsymbol{\Sigma}_{\rm probit} \right ),
\end{align}
where $\Phi_{2}$ denotes the bivariate normal distribution function, $\boldsymbol{\Sigma}_{\rm probit}$ is a $2 \times 2$ covariance matrix with ones on the diagonal and $(2y_{1,it}-1)(2y_{2,it}-1)\rho_{\varepsilon}$ on the off-diagonal, and
\begin{align}\label{eq: biv probit mu}
\boldsymbol{\mu}_{it} &:=\left ( (2y_{1,it}-1) \left(\boldsymbol{x}_{1,it}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{11}+\overline{\boldsymbol{x}}_{1,i}^{\text{\tiny {\it {T}}}} \boldsymbol{\beta}_{12}+ \alpha_{1,i}\right),(2y_{2,it}-1)\left(\boldsymbol{x}_{2,it}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{21}+\overline{\boldsymbol{x}}_{2,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{22}+ \alpha_{2,i}\right) \right )^{\text{\tiny {\it {T}}}}.
\end{align}
\subsection{Mixed Marginal Bivariate Model with Random Effects}\label{ss: mixed biv model with random effects}
We next consider an extension of the baseline model \crefrange{eq: biv probit}{eq: obsns baseline model} in which we treat one of the outcomes, $y_{2,it}=y_{2,it}^{*}$, as an observed continuous variable, with $y_{1,it}$ still discrete as in \cref{eq: obsns baseline model}. This applies to the mental health variable, for which continuous values are available.
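As an aside, a single $(i,t)$ factor of the baseline conditional likelihood \eqref{eq: biv probit likel} can be evaluated directly with a bivariate normal distribution function; the sketch below uses an assumed helper (not the authors' code) built on \texttt{scipy}.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_probit_factor(y1, y2, eta1, eta2, rho_eps):
    """y1, y2 in {0,1}; eta_j = x_j'beta_j1 + xbar_j'beta_j2 + alpha_j."""
    q1, q2 = 2*y1 - 1, 2*y2 - 1
    cov = np.array([[1.0, q1*q2*rho_eps], [q1*q2*rho_eps, 1.0]])
    # Phi_2 evaluated at mu_it = (q1*eta1, q2*eta2) with covariance Sigma_probit
    return multivariate_normal(mean=np.zeros(2), cov=cov).cdf([q1*eta1, q2*eta2])

print(bivariate_probit_factor(1, 0, 0.5, -0.2, 0.3))
\end{verbatim}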
The joint density conditional on the vector of individual random effects $\boldsymbol{\alpha}_{i}$ is
\begin{align*}
p\left(\boldsymbol{y}|\boldsymbol{\theta},\boldsymbol{\alpha}\right) & = \prod_{i=1}^{P}\prod_{t=1}^{T} \left[\Phi\left(\frac{\mu_{1|2}}{\sigma_{1|2}}\right)\phi\left(y_{2,it}-\left(\boldsymbol{x}_{2,it}^{\text{\tiny {\it {T}}}} \boldsymbol{\beta}_{21}+\overline{\boldsymbol{x}}_{2,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{22}+\alpha_{2,i}\right)\right)\right]^{y_{1,it}}\\
& \quad \times \left[\left(1-\Phi\left(\frac{\mu_{1|2}}{\sigma_{1|2}}\right)\right)\phi\left(y_{2,it}-\left(\boldsymbol{x}_{2,it}^{\text{\tiny {\it {T}}}} \boldsymbol{\beta}_{21}+\overline{\boldsymbol{x}}_{2,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{22}+\alpha_{2,i}\right)\right)\right]^{1-y_{1,it}},
\intertext{where $\phi(\cdot)$ is the standard normal pdf, $\sigma_{1|2}=\sqrt{1-\rho_\varepsilon^{2}}$ and}
\mu_{1|2} &=\left(\boldsymbol{x}_{1,it}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{11}+\overline{\boldsymbol{x}}_{1,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{12}+ \alpha_{1,i}\right)+ \rho_\varepsilon\left(y_{2,it}-\left(\boldsymbol{x}_{2,it}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{21}+ \overline{\boldsymbol{x}}_{2,i}^{\text{\tiny {\it {T}}}} \boldsymbol{\beta}_{22}+\alpha_{2,i} \right)\right ).
\end{align*}
We assume here and in \Cref{ss: biv probit with random effects} that the error term $\boldsymbol{\varepsilon}_{it}$ is bivariate normal and consequently impose a linear correlation structure. \Cref{SS: copulal models} maintains the assumption of normality for the marginal distributions of $\boldsymbol{\varepsilon}_{it}$, while introducing non-normal dependence of the error terms using bivariate copulas.
\subsection{Bivariate Copula Models} \label{SS: copulal models}
Copula based models provide a flexible approach to multivariate modeling because they can: (i)~capture a wide range of non-linear dependence between the marginals beyond simple linear correlation; (ii)~allow the marginal distributions to come from different families of distributions; and, in particular, (iii)~allow the marginal distributions to be a combination of discrete and continuous distributions as in \cref{ss: mixed biv model with random effects}. There are many parametric copula functions proposed in the statistics and econometrics literatures, with the choice of parametric copula determining the dependence structure of the variables being analysed. \citet{Trivedi2005} discuss some of the most popular copulas. A major difference between copula distribution functions is the range of their dependence structures. Our article considers the Gaussian, Gumbel and Clayton copulas, which are three of the most commonly used bivariate copulas and are able to capture a wide range of dependence structures. Note that the baseline model is a Gaussian copula. \Cref{sec:Archimedian-and-Elliptical copulas} gives some details of copula models. For a discussion of copula based models with a combination of discrete and continuous marginals, and of their estimation methods, see \cite{Pitt2006} and \cite{Smith2012}.
Their method augments the copula model with latent variables which are generated within an MCMC scheme. Note that in our estimation we do not generate any copula latent variables; we work directly with the conditional density given in \cref{eq: mixed copula model} below. They also do not consider any individual random effects in their models. We use the copula framework to obtain a more flexible joint distribution for $\boldsymbol{\varepsilon}_{it}$, while assuming that its two marginals are normally distributed. Let $c(\cdot; \boldsymbol{\theta}_{\rm cop})$ be a bivariate copula density with $\boldsymbol{\theta}_{\rm cop}$ the vector of parameters of the copula. Suppose that $\boldsymbol{u}_{it}$ has density $c(\cdot; \boldsymbol{\theta}_{\rm cop})$ and define $\varepsilon_{j,it}:= \Phi^{-1} (u_{j,it})$ for $j=1,2$, where $\Phi(\cdot)$ is the standard normal cdf. Then the density of $\boldsymbol{\varepsilon}_{it}$ is
\begin{align} \label{eq: biv copula}
c(\boldsymbol{u}_{it}; \boldsymbol{\theta}_{\rm cop})\, \phi(\varepsilon_{1,it})\, \phi( \varepsilon_{2,it}).
\end{align}
The joint density of the observations conditional on the vector of individual random effects for the model in \cref{ss: mixed biv model with random effects} is
\begin{align}
p\left(\boldsymbol{y}|\boldsymbol{\theta},\boldsymbol{\alpha}\right) & = \prod_{i=1}^{P}\prod_{t=1}^{T}\left[\left(1-C_{1|2}\left(u_{1,it}|u_{2,it};\boldsymbol{\theta}_{\rm cop}\right)\right)\phi\left(y_{2,it}-\left(\boldsymbol{x}_{2,it}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{21}+\overline{\boldsymbol{x}}_{2,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{22}+\alpha_{2,i}\right)\right)\right]^{y_{1,it}}\nonumber \\
& \quad \times \left[C_{1|2}\left(u_{1,it}|u_{2,it};\boldsymbol{\theta}_{\rm cop}\right)\phi\left(y_{2,it}-\left(\boldsymbol{x}_{2,it}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{21}+\overline{\boldsymbol{x}}_{2,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{22}+\alpha_{2,i}\right)\right)\right]^{1-y_{1,it}},\label{eq: mixed copula model}\label{eq: mixed biv model}
\end{align}
where $u_{1,it}=\Phi\left(-\left(\boldsymbol{x}_{1,it}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{11}+\overline{\boldsymbol{x}}_{1,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{12}+ \alpha_{1,i}\right)\right)$ and $u_{2,it}=\Phi\left(y_{2,it}- \left(\boldsymbol{x}_{2,it}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{21}+\overline{\boldsymbol{x}}_{2,i}^{\text{\tiny {\it {T}}}}\boldsymbol{\beta}_{22}+\alpha_{2,i}\right)\right)$. The conditional distribution function of $U_1$ given $U_2$ in the copula $C\left(\boldsymbol{u};\boldsymbol{\theta}_{\rm cop}\right)$ is
\begin{align*}
C_{1|2}\left(u_{1}|u_{2};\boldsymbol{\theta}_{\rm cop}\right) & =\frac{\partial}{\partial u_{2}}C\left(\boldsymbol{u};\boldsymbol{\theta}_{\rm cop}\right),\quad \text{where} \quad C\left(u_{1},u_{2};\boldsymbol{\theta}_{\rm cop}\right) =\int_{0}^{u_{1}}\int_{0}^{u_{2}}c\left(s_{1},s_{2};\boldsymbol{\theta}_{\rm cop}\right)\mathrm{d} s_{1}\,\mathrm{d}s_{2}
\end{align*}
and $c\left(\boldsymbol{u};\boldsymbol{\theta}_{\rm cop}\right)$ is the density of the copula.
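For concreteness, the usual closed forms of $C_{1|2}$ for the three families used here are sketched below in Python; these are the standard textbook expressions, included only as an illustration (\Cref{sec:Archimedian-and-Elliptical copulas} gives the paper's own details).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def h_gaussian(u1, u2, rho):
    z1, z2 = norm.ppf(u1), norm.ppf(u2)
    return norm.cdf((z1 - rho*z2) / np.sqrt(1.0 - rho**2))

def h_clayton(u1, u2, theta):    # theta > 0
    return u2**(-theta - 1.0) * (u1**(-theta) + u2**(-theta) - 1.0)**(-1.0/theta - 1.0)

def h_gumbel(u1, u2, theta):     # theta >= 1
    a, b = (-np.log(u1))**theta, (-np.log(u2))**theta
    C = np.exp(-(a + b)**(1.0/theta))
    return C * (a + b)**(1.0/theta - 1.0) * (-np.log(u2))**(theta - 1.0) / u2

print(h_gaussian(0.3, 0.6, 0.5), h_clayton(0.3, 0.6, 1.5), h_gumbel(0.3, 0.6, 1.5))
\end{verbatim}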
\Cref{sec:Archimedian-and-Elliptical copulas} gives closed form expressions for the conditional copula distribution functions of the bivariate copulas used in our article. The Pearson correlation coefficient is unsuitable for comparing the dependence structures implied by the different copula models with that of the Gaussian copula, because it only measures linear dependence. \Cref{sec:Measures-of-Dependence} discusses Kendall's $\tau$ and the upper and lower tail dependence measures that we use in the article.

\section{Bayesian Inference and Particle Markov Chain Monte Carlo Samplers} \label{Sec: PMCMC samplers}
This section discusses efficient Bayesian inference for the random effects panel data models described in Section \ref{sec:General-Panel-Data}. Our approach is similar to the particle Markov chain Monte Carlo (PMCMC) approaches of \citet{Andrieu:2010}. However, the PMCMC methods in \citet{Andrieu:2010} are derived for state space models, whereas the random effects panel data models considered in this paper have a different structure: the random effects vary across individuals but do not change over time. This requires us to derive the PMCMC algorithms for our models from first principles. The benefit is that the simple particle structure gives straightforward derivations that make the material more accessible than the current PMCMC literature. \newline
\indent Let $\boldsymbol{\theta}$ be the parameters in the panel data models described in Section \ref{sec:General-Panel-Data}. The vector of individual random effects is denoted by $\boldsymbol{\alpha}_{1:P}$, where $P$ is the number of individuals, and the vector of observations for individual $i$ is denoted by $\boldsymbol{y}_i$, for $i = 1, \ldots, P$, with $\boldsymbol{y}_{1:P}$ denoting all the observations in the sample. In Bayesian inference, we are interested in sampling from the posterior density
\begin{align} \label{eq: posterior density}
\pi\left(\boldsymbol{\theta},\boldsymbol{\alpha}_{1:P}\right)&:= p(\boldsymbol{y}_{1:P} |\boldsymbol{\theta}, \boldsymbol{\alpha}_{1:P})p(\boldsymbol{\alpha}_{1:P}|\boldsymbol{\theta}) p(\boldsymbol{\theta})/Z,
\end{align}
where $Z:=p(\boldsymbol{y}_{1:P})$ is the marginal likelihood. \newline
\indent The basis of our PMCMC approach is to define a target distribution on an augmented space that includes the parameters and multiple copies of the random effects, which we refer to as particles. This target distribution is used to derive two samplers: the Pseudo Marginal Metropolis-Hastings (PMMH) sampler and the Particle Metropolis within Gibbs (PMwG) sampler. The PMMH sampler is an efficient method for sampling low dimensional parameters that are highly correlated with the latent states, because each PMMH step generates these parameters without conditioning on the states. However, in the panel data models defined in \Cref{sec:General-Panel-Data}, the dimension of the parameter space is large, so it is difficult to implement PMMH efficiently. The reason is that it is difficult to obtain good proposals for the parameters that require derivatives of the log-posterior, because these derivatives cannot be computed exactly and must be estimated. The efficiency of PMMH then depends crucially on how accurately we can estimate the gradient of the log-posterior.
If the error in the estimate of the gradient is too large, then there is no advantage in using proposals with derivative information over a random walk proposal \citep{Nemeth:2016}. Furthermore, the random walk proposal has a step size proportional to $2.56/\sqrt{d}$, where $d$ is the number of parameters used in the random walk step~\citep{Sherlock:2015}. This implies that for a large number of parameters the random walk proposal moves very slowly and is therefore very inefficient. The PMwG sampler generates the parameters conditional on the latent random effects, so the parameters of the models can be sampled in separate Gibbs or Metropolis within Gibbs steps. Note that, by conditioning on the states, we are able to compute the gradient of the conditional log-posterior analytically for the panel data models considered here. Our article uses a Hamiltonian Monte Carlo proposal, which requires the gradient of the log posterior, to sample the high dimensional parameter $\boldsymbol{\beta}$. However, this sampler is not very efficient for the parameters that are highly correlated with the latent states. We demonstrate in Section \ref{S: results and discussion} that the PMwG sampler performs well for the models considered in this paper.

This section is organised as follows. \Cref{sec:target distribution} discusses the target distribution. \Cref{sec:Pseudo-Marginal-Metropolis} discusses the Pseudo Marginal Metropolis-Hastings (PMMH) sampler. \Cref{sec: PMwG} discusses the Particle Gibbs (PG) and Particle Metropolis within Gibbs (PMwG) samplers. \Cref{sub:Sampling--using Hamiltonian proposal} discusses the Hamiltonian Monte Carlo proposal. \Cref{sub:TNV} compares our proposed PMwG approach to some alternative approaches and shows that our method can be much more efficient.

\subsection{Target Distribution}\label{sec:target distribution}
We first describe Algorithm~\ref{alg: IS sampling alg} given below, which constructs a particle approximation to the distribution $\pi\left(\mathrm{d}\boldsymbol{\alpha}_{1:P}|\boldsymbol{\theta}\right)$. Note that all the models in Section~\ref{sec:General-Panel-Data} have the independence properties
\begin{align} \label{eq:independence likelihood}
p(\boldsymbol{y}|\boldsymbol{\theta}) & = \prod_{i=1}^{P} p(\boldsymbol{y}_i |\boldsymbol{\theta})
\end{align}
and
\begin{align} \label{eq:independence posterior}
\pi\left(\mathrm{d}\boldsymbol{\alpha}_{1:P}|\boldsymbol{\theta}\right)& = \prod_{i=1}^{P}\pi\left(\mathrm{d}\boldsymbol{\alpha}_{i}|\boldsymbol{\theta}\right) \\
& = \prod_{i=1}^{P} p\left(\mathrm{d}\boldsymbol{\alpha}_{i}|\boldsymbol{\theta}, \boldsymbol{y}_i \right), \nonumber
\end{align}
where the independence property in \Cref{eq:independence posterior} is replicated in our particle approximation. Let $\left\{ m_{i}\left(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta},\boldsymbol{y}_{i}\right);i=1,\dots,P\right\}$ be a family of proposal densities that we use to approximate the corresponding densities $\left\{ \pi\left(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta}\right);i=1,\dots,P\right\}$.
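Algorithm~\ref{alg: IS sampling alg} below formalises this construction. Purely as an illustration, the following Python sketch computes the resulting unbiased likelihood estimate $\widehat{p}_{N}(\boldsymbol{y}|\boldsymbol{\theta})$ of \eqref{eq:estimated likelihood} (defined below) when the prior $p(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta})$ is used as the proposal $m_{i}$, as in our implementation; the helper functions \texttt{log\_p\_y\_given\_alpha} and \texttt{sample\_prior} are hypothetical and must be supplied by the user.
\begin{verbatim}
import numpy as np

def estimate_log_likelihood(y, theta, log_p_y_given_alpha, sample_prior, N, rng):
    # With the prior as proposal, the weight reduces to w_i^j = p(y_i | alpha_i^j, theta)
    log_phat = 0.0
    for y_i in y:                              # loop over individuals i = 1, ..., P
        alphas = sample_prior(theta, N, rng)   # N draws alpha_i^1, ..., alpha_i^N
        log_w = np.array([log_p_y_given_alpha(y_i, a, theta) for a in alphas])
        m = log_w.max()                        # log of (1/N) sum_j w_i^j, computed stably
        log_phat += m + np.log(np.exp(log_w - m).mean())
    return log_phat
\end{verbatim}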
We define
\begin{align*}
S_{i}^{\boldsymbol{\theta}}& \coloneqq \left\{ \boldsymbol{\alpha}_{i}\in \boldsymbol{\chi}:\pi\left(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta}\right)>0\right\} \quad {\rm and}\quad Q_{i}^{\boldsymbol{\theta}} \coloneqq \left\{ \boldsymbol{\alpha}_{i}\in\boldsymbol{\chi}:m_{i}\left(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta},\boldsymbol{y}_{i}\right)>0\right\} .
\end{align*}
The next assumption ensures that the proposal densities $m_{i}\left(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta},\boldsymbol{y}_{i}\right)$ can be used to approximate the corresponding densities $\left\{ \pi\left(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta}\right);i=1,\dots,P\right\}$ in Algorithm~\ref{alg: IS sampling alg}.
\begin{assumption}\label{ass: use of IS}
We assume that $S_{i}^{\boldsymbol{\theta}}\subseteq Q_{i}^{\boldsymbol{\theta}}$ for any $\boldsymbol{\theta}\in\boldsymbol{\Theta}$ and $i=1,\dots,P$.
\end{assumption}
Note that Assumption \ref{ass: use of IS} is always satisfied in our implementation because we use the prior density $p\left(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta}\right)$ as the proposal density, and the prior density is positive everywhere. The generic Monte Carlo Algorithm~\ref{alg: IS sampling alg} proceeds as follows.
\begin{algorithm}[H]\caption{Monte Carlo Algorithm\label{alg: IS sampling alg}}
\begin{description}
\item For $i=1,\dots,P$,
\begin{description}
\item [Step (1)] Sample $\boldsymbol{\alpha}_{i}^{j}$ from $m_{i}\left(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta},\boldsymbol{y}_{i}\right)$ for $j=1,\dots,N$.
\item [Step (2)] Compute the weights $w_{i}^{j}\coloneqq\frac{p\left(\boldsymbol{y}_{i}|\boldsymbol{\alpha}_{i}^{j},\boldsymbol{\theta}\right)p\left(\boldsymbol{\alpha}_{i}^{j}|\boldsymbol{\theta}\right)}{m_{i}\left(\boldsymbol{\alpha}_{i}^{j}|\boldsymbol{\theta},\boldsymbol{y}_{i}\right)}$, for $j=1,\dots,N$.
\item [Step (3)] Normalise the weights $\bar{w}_{i}^{j}\coloneqq\frac{w_{i}^{j}}{\sum_{k=1}^{N}w_{i}^{k}}$, for $j=1,\dots,N$.
\end{description}
\end{description}
\end{algorithm}
To define the joint distribution of the particles given the parameters, let $\boldsymbol{\alpha}_{1:P}^{1:N}:= \{ \boldsymbol{\alpha}_1^{1:N}, \dots, \boldsymbol{\alpha}_P^{1:N} \}$. The joint distribution is
\begin{align} \label{eq:particle dist}
\psi^{\boldsymbol{\theta}}\left(\boldsymbol{\alpha}_{1:P}^{1:N}\right) & \coloneqq\prod_{j=1}^{N}\prod_{i=1}^{P}m_{i}\left(\boldsymbol{\alpha}_{i}^{j}|\boldsymbol{\theta},\boldsymbol{y}_{i}\right).
\end{align}
Under Assumption \ref{ass: use of IS}, Algorithm~\ref{alg: IS sampling alg} yields the approximations to $\pi\left(\mathrm{d}\boldsymbol{\alpha}_{1:P}|\boldsymbol{\theta}\right)$ and the likelihood $p(\boldsymbol{y}|\boldsymbol{\theta})$ given by
\begin{align*}
\widehat{\pi}_{N}\left(\mathrm{d}\boldsymbol{\alpha}_{1:P}|\boldsymbol{\theta}\right)& \coloneqq\prod_{i=1}^{P}\left\{ \sum_{j=1}^{N}\bar{w}_{i}^{j}\delta_{\boldsymbol{\alpha}_{i}^{j}}\left(\mathrm{d}\boldsymbol{\alpha}_{i}\right) \right\}
\end{align*}
and
\begin{align} \label{eq:estimated likelihood}
\widehat{p}_{N}(\boldsymbol{y}|\boldsymbol{\theta}) & \coloneqq\prod_{i=1}^{P}\left\{ \frac{1}{N}\sum_{j=1}^{N}w_{i}^{j}\right\} .
\end{align}
It follows straightforwardly that $\widehat{p}_{N}(\boldsymbol{y}|\boldsymbol{\theta})$ is an unbiased estimator of the likelihood $p(\boldsymbol{y}|\boldsymbol{\theta})$; the proof is given in \Cref{sec:lemma1}. To obtain particle MCMC schemes that estimate $\pi(\boldsymbol{\theta}, \boldsymbol{\alpha}_{1:P})$, let $\boldsymbol{k}:=(k_1, \dots, k_P)$, with each $k_i \in \{1, \dots, N\}$, and let $\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}}:= \{ \boldsymbol{\alpha}_1^{k_1}, \dots, \boldsymbol{\alpha}_P^{k_P}\}$. For $N \geq 1$, we define the target density
\begin{align} \label{eq: expanded target density}
\widetilde{\pi}_N(\boldsymbol{k}, \boldsymbol{\alpha}_{1:P}^{1:N}, \boldsymbol{\theta})&:= \frac{\pi(\boldsymbol{\theta}, \boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}}) }{N^P} \times \frac{\psi^{\boldsymbol{\theta}}(\boldsymbol{\alpha}_{1:P}^{1:N}) }{ \prod_{i=1}^P m_i (\boldsymbol{\alpha}_i^{k_i} |\boldsymbol{\theta}, \boldsymbol{y}_i) }.
\end{align}
\Cref{sec:lemma1} shows that $N^{-P}\pi\left(\boldsymbol{\theta},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}}\right)$ is the marginal probability density of $\widetilde{\pi}_{N}\left(\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{\theta}\right)$. Using the target density in \Cref{eq: expanded target density}, the next two sections consider two particle based methods for carrying out Markov chain Monte Carlo in panel data models with an intractable likelihood.

\subsection{Pseudo Marginal Metropolis Hastings (PMMH) sampler\label{sec:Pseudo-Marginal-Metropolis}}
The PMMH sampler is a Metropolis Hastings update on the extended space with the target density defined in \eqref{eq: expanded target density} and the proposal density for $\boldsymbol{\theta}^\ast, \boldsymbol{k}^\ast$ and ${\boldsymbol{\alpha}}_{1:P}^{\ast 1:N}$, given their current values $\boldsymbol{\theta}, \boldsymbol{k}$ and $\boldsymbol{\alpha}_{1:P}^{1:N}$, given by
\begin{align} \label{eq: extended prop}
q_N( \boldsymbol{k}^\ast, {\boldsymbol{\alpha}}_{1:P}^{\ast 1:N}, \boldsymbol{\theta}^\ast | \boldsymbol{k}, \boldsymbol{\alpha}_{1:P}^{1:N}, \boldsymbol{\theta}):= q(\boldsymbol{\theta}^\ast|\boldsymbol{\theta}) \times \psi^{\boldsymbol{\theta}^\ast} \left( {\boldsymbol{\alpha}}_{1:P}^{\ast 1:N} \right) \times \prod_{i=1}^P {\overline{w}^{\ast}_i}^{k^\ast_i}.
\end{align}
Note that this proposal density first samples $\boldsymbol{\theta}^\ast$ from $q(\boldsymbol{\theta}^\ast | \boldsymbol{\theta})$. The ${\boldsymbol{\alpha}}_{1:P}^{\ast 1:N}$ are then sampled from $\psi^{\boldsymbol{\theta}^\ast}(\cdot)$. Finally, $\boldsymbol{k}^\ast$ is sampled from $\prod_{i=1}^P {\overline{w}^{\ast}_i}^{k^\ast_i}$. We now consider the ratio of the extended target \cref{eq: expanded target density} to the extended proposal \cref{eq: extended prop} to obtain the PMMH acceptance probability for this target and proposal. This ratio is
\begin{align} \label{eq: ratio of extended target to prop}
\frac{\widetilde{\pi}_N\left(\boldsymbol{k}^\ast, {\boldsymbol{\alpha}}_{1:P}^{\ast 1:N}, \boldsymbol{\theta}^\ast\right)}{ q_N\left(\boldsymbol{k}^\ast, {\boldsymbol{\alpha}}_{1:P}^{\ast 1:N}, \boldsymbol{\theta}^\ast | \boldsymbol{k}, \boldsymbol{\alpha}_{1:P}^{1:N}, \boldsymbol{\theta} \right)} & = \frac{\pi(\boldsymbol{\theta}^\ast, \boldsymbol{\alpha}_{1:P}^{\ast \boldsymbol{k}^\ast}) }{N^P} \times \frac{\psi^{\boldsymbol{\theta}^\ast} \left({\boldsymbol{\alpha}}_{1:P}^{\ast 1:N} \right) }{ \prod_{i=1}^P m_i (\boldsymbol{\alpha}_i^{\ast k^\ast_i} |\boldsymbol{\theta}^\ast, \boldsymbol{y}_i) }\times \bigg\{q(\boldsymbol{\theta}^\ast|\boldsymbol{\theta}) \times \psi^{\boldsymbol{\theta}^\ast} \left( {\boldsymbol{\alpha}}_{1:P}^{\ast 1:N} \right) \times \prod_{i=1}^P {\overline{w}^{\ast}_i}^{k^\ast_i}\bigg\}^{-1}\notag \\
& = \frac{\widehat{p}_N(\boldsymbol{y}|\boldsymbol{\theta}^\ast )p(\boldsymbol{\theta}^\ast) }{q(\boldsymbol{\theta}^\ast| \boldsymbol{\theta})}.
\end{align}
Hence, the acceptance probability is
\begin{align}\label{eq: PMMH acc prob}
\min \bigg\{ 1, \frac{\widehat{p}_N(\boldsymbol{y}|\boldsymbol{\theta}^\ast )p(\boldsymbol{\theta}^\ast) }{q(\boldsymbol{\theta}^\ast| \boldsymbol{\theta})} \frac{q(\boldsymbol{\theta}| \boldsymbol{\theta}^\ast)} {\widehat{p}_N(\boldsymbol{y}|\boldsymbol{\theta})p(\boldsymbol{\theta}) }\bigg\},
\end{align}
which is the acceptance probability of a pseudo marginal MH scheme that targets $\pi(\boldsymbol{\theta})$ with the likelihood replaced by its unbiased estimate. The next assumption is needed to ensure that the PMMH algorithm converges.
\begin{assumption}\label{ass: convergence of pmmh}
The MH sampler with target density $\pi\left(\boldsymbol{\theta}\right)$ and proposal density $q\left(\boldsymbol{\theta}^\ast|\boldsymbol{\theta}\right)$ is irreducible and aperiodic.
\end{assumption}
\begin{theorem} \label{thm:convergence of pmmh}
Suppose Assumptions \ref{ass: use of IS} and \ref{ass: convergence of pmmh} hold.
Then the PMMH algorithm with the expanded target density \cref{eq: expanded target density} and expanded proposal density \cref{eq: extended prop} generates a sequence $\left\{ \boldsymbol{\theta}\left(s\right),\boldsymbol{\alpha}_{1:P}\left(s\right)\right\}$ of iterates whose marginal distributions $\left\{ \mathcal{L}_{N}\left\{ \left(\boldsymbol{\theta}\left(s\right),\boldsymbol{\alpha}_{1:P}\left(s\right)\right)\in\cdot\right\} \right\}$ satisfy
\[
||\mathcal{L}_{N}\left\{ \left(\boldsymbol{\theta}\left(s\right),\boldsymbol{\alpha}_{1:P}\left(s\right)\right)\in\cdot \right\} -\pi\left(\cdot \right)||_{TV}\rightarrow 0,\qquad \text{as } s\rightarrow\infty,
\]
where $||\cdot||_{TV}$ is the total variation norm.
\end{theorem}
The proof is given in \Cref{sec:lemma1}.

\subsection{Particle Metropolis within Gibbs (PMwG) sampling} \label{sec: PMwG}
We use the same extended target distribution \cref{eq: expanded target density} as before in order to sample from $\pi\left(\boldsymbol{\theta},\boldsymbol{\alpha}_{1:P}\right)$. Let $\boldsymbol{\theta}=\left(\boldsymbol{\theta}_{1},\dots,\boldsymbol{\theta}_{R}\right)$ be a partition of the parameter vector $\boldsymbol{\theta}$ into $R$ components and let $\boldsymbol{\Theta}=\left(\boldsymbol{\Theta}_{1},\dots,\boldsymbol{\Theta}_{R}\right)$ be the corresponding partition of the parameter space $\boldsymbol{\Theta}$. We use the notation $\boldsymbol{\theta}_{-r}\coloneqq \left(\boldsymbol{\theta}_{1},\dots,\boldsymbol{\theta}_{r-1},\boldsymbol{\theta}_{r+1},\dots,\boldsymbol{\theta}_{R}\right)$. The Particle Gibbs (PG) and Particle Metropolis within Gibbs (PMwG) algorithm involves the following steps.
\begin{algorithm}[H] \caption{PMwG Algorithm \label{alg: PMwG}}
\begin{description}
\item [Step (1)] For $r=1,\dots,R$,
\begin{description}
\item [Step (1.1)] Sample $\boldsymbol{\theta}_{r}^\ast$ from the proposal $q_{r}\left(\cdot| \boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{\theta}_r, \boldsymbol{\theta}_{-r} \right)$.
\item [Step (1.2)] Set $\boldsymbol{\theta}_{r} = \boldsymbol{\theta}_{r}^\ast$ with probability
\begin{align*}
\min\left\{1, \frac{\tilde{\pi}_{N}\left(\boldsymbol{\theta}_{r}^\ast|\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{\theta}_{-r}\right) } {\tilde{\pi}_{N}\left(\boldsymbol{\theta}_{r}|\boldsymbol{k}, \boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{\theta}_{-r} \right)} \times\frac{q_{r}\left(\boldsymbol{\theta}_{r} |\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}}, \boldsymbol{\theta}^\ast_r, \boldsymbol{\theta}_{-r}\right)} {q_{r}\left(\boldsymbol{\theta}^\ast_{r} |\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{\theta}_r, \boldsymbol{\theta}_{-r} \right)} \right\}.
\end{align*}
\end{description}
\item [Step (2)] Sample $\boldsymbol{\alpha}_{1:P}^{-\boldsymbol{k}}\sim\tilde{\pi}_{N}\left(\cdot|\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{\theta}\right)$ by running the conditional importance sampling Algorithm \ref{alg: CIS}.
\item [Step (3)] Sample the index vector $\boldsymbol{k} = \left(k_{1},\dots,k_{P}\right)$ with probability given by
\[
\tilde{\pi}_{N} \left(k_{1}=l_{1},\dots,k_{P}=l_{P}|\boldsymbol{\theta},\boldsymbol{\alpha}_{1:P}^{1:N} \right)=\prod_{i=1}^{P}\overline{w}_{i}^{l_{i}},
\]
where $w_i^j := w_i^j (\boldsymbol{\theta}, \boldsymbol{\alpha}_{1:P}^{1:N})$ and $\overline{w}_{i}^{l} := w_i^l / \left( \sum_{s=1}^N w_i^s \right)$.
\end{description}
\end{algorithm}
It is straightforward to implement Steps~(1) and (3). We implement Step~(2) using the Conditional Monte Carlo Algorithm \ref{alg: CIS} given below. Note that Step (1) might appear unusual, but it leaves the augmented target posterior density $\widetilde{\pi}_{N}\left(\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{1:N},\boldsymbol{\theta}\right)$ invariant. This is related to the collapsed Gibbs sampler; see, for example, \citet[Section 6.7]{Liu:2001a}.

\subsection*{Conditional Monte Carlo}
The expression
\[
\frac{\psi^{\boldsymbol{\theta}}\left(\boldsymbol{\alpha}_{1:P}^{1:N}\right)}{\prod_{i=1}^{P}m_{i}\left(\boldsymbol{\alpha}_{i}^{k_{i}}|\boldsymbol{\theta},\boldsymbol{y}_{i}\right)}
\]
appearing in the target density \cref{eq: expanded target density} is the density of all the variables generated by the Monte Carlo algorithm, conditional on $\left(\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{k} \right)$. This is a key element of the PMwG Algorithm \ref{alg: PMwG}. The update can be understood as regenerating $N-1$ of the Monte Carlo samples while keeping one Monte Carlo sample fixed in $\widetilde{\pi}_N\left(\boldsymbol{\alpha}_{1:P}^{1:N}|\boldsymbol{\theta}\right)$.
\begin{algorithm}[H] \caption{Conditional Monte Carlo Algorithm\label{alg: CIS}}
\begin{description}
\item [Step (1)] Fix $\boldsymbol{\alpha}_{1:P}^{1}=\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}}$.
\item [Step (2)] For $i=1,\dots,P$,
\begin{description}
\item [Step (2.1)] Sample $\boldsymbol{\alpha}_{i}^{j}$ from $m_{i}\left(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta},\boldsymbol{y}_{i}\right)$ for $j=2,\dots,N$.
\item [Step (2.2)] Compute the importance weights $w_{i}^{j}=\frac{p\left(\boldsymbol{y}_{i}|\boldsymbol{\alpha}_{i}^{j},\boldsymbol{\theta}\right)p\left(\boldsymbol{\alpha}_{i}^{j}|\boldsymbol{\theta}\right)}{m_{i}\left(\boldsymbol{\alpha}_{i}^{j}|\boldsymbol{\theta},\boldsymbol{y}_{i}\right)}$, for $j=1,\dots,N$.
\item [Step (2.3)] Normalise the weights $\bar{w}_{i}^{j}=\frac{w_{i}^{j}}{\sum_{k=1}^{N}w_{i}^{k}}$, for $j=1,\dots,N$.
\end{description}
\end{description}
\end{algorithm}
To derive convergence results for the PMwG sampler in Algorithm~\ref{alg: PMwG} we require the following assumption.
\begin{assumption} \label{ass: gibbs1}
The Metropolis within Gibbs sampler defined by the proposals $q_r\left(\cdot|\boldsymbol{\theta}_{r},\boldsymbol{\theta}_{-r},\boldsymbol{\alpha}_{1:P}\right)$, for $r=1,\dots,R$, and $\pi\left(\boldsymbol{\alpha}_{1:P}|\boldsymbol{\theta}\right)$ is irreducible and aperiodic.
\end{assumption}
Assumption \ref{ass: gibbs1} is satisfied in our applications because all the proposals and conditional distributions have strictly positive densities.
\begin{theorem}[Convergence of the PMwG sampler\label{thm: converg of PMwG}]
For any $N\geq2$, the PMwG update is a transition kernel with invariant density $\tilde{\pi}_{N}$ defined in \cref{eq: expanded target density}. If Assumptions~\ref{ass: use of IS} and \ref{ass: gibbs1} hold, then the PMwG sampler generates a sequence of iterates $\left\{ \boldsymbol{\theta}\left(s\right),\boldsymbol{\alpha}_{1:P}\left(s\right)\right\}$ whose marginal distributions $\left\{ \mathcal{L}_{N}\left\{ \left(\boldsymbol{\theta}\left(s\right),\boldsymbol{\alpha}_{1:P}\left(s\right)\right)\in\cdot\right\} \right\}$ satisfy
\[
||\mathcal{L}_{N}\left\{ \left(\boldsymbol{\theta}\left(s\right),\boldsymbol{\alpha}_{1:P}\left(s\right)\right)\in\cdot \right\} -\Pi\left\{ \left(\boldsymbol{\theta},\boldsymbol{\alpha}_{1:P}\right)\in \cdot \right\} ||_{\rm TV}\rightarrow 0,\qquad \text{as } s\rightarrow\infty.
\]
\end{theorem}
The proof is given in \Cref{sec:lemma1}.

\subsection{Sampling the high-dimensional parameter vector $\boldsymbol{\beta}$ using a Hamiltonian Proposal\label{sub:Sampling--using Hamiltonian proposal}}
This section discusses the Hamiltonian Monte Carlo (HMC) proposal used to sample the high dimensional parameter vector $\boldsymbol{\beta}$ from the conditional posterior density $p\left(\boldsymbol{\beta}|\boldsymbol{\theta}_{-\boldsymbol{\beta}},\boldsymbol{y},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{k}\right)$. It can generate distant proposals for the Particle Metropolis within Gibbs algorithm and so avoids the slow exploration that results from simple random walk proposals.
Suppose we want to sample from a $d$-dimensional distribution with pdf proportional to $\exp\left(\mathcal{L}\left(\boldsymbol{\beta}\right)\right)$, where $\mathcal{L}\left(\boldsymbol{\beta}\right)=\log p\left(\boldsymbol{\beta}|\boldsymbol{\theta}_{-\boldsymbol{\beta}},\boldsymbol{y},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{k}\right)$ is the logarithm of the conditional posterior density of $\boldsymbol{\beta}$ (up to a normalising constant). In Hamiltonian Monte Carlo \citep{Neal:2011}, we introduce an auxiliary momentum vector $\boldsymbol{r}$ of the same dimension as the parameter vector $\boldsymbol{\beta}$, with density $p\left(\boldsymbol{r}\right)=N\left(\boldsymbol{r}|0,\boldsymbol{M}\right)$, where $\boldsymbol{M}$ is the mass matrix of the momentum, often set to the identity matrix. We define the joint conditional density of $(\boldsymbol{\beta}, \boldsymbol{r})$ as
\begin{align}
p\left(\boldsymbol{\beta},\boldsymbol{r}|\boldsymbol{\theta}_{-\boldsymbol{\beta}},\boldsymbol{y},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{k}\right)\propto\exp\left(-H\left(\boldsymbol{\beta},\boldsymbol{r}\right)\right),\label{eq:jointHamiltonian-1}
\end{align}
where
\begin{align}\label{eq: hamiltonian}
H\left(\boldsymbol{\beta},\boldsymbol{r}\right):=-\mathcal{L}\left(\boldsymbol{\beta}\right) + \frac12 \boldsymbol{r}^{T} \boldsymbol{M}^{-1} \boldsymbol{r}
\end{align}
is called the Hamiltonian. In an idealised HMC step, the parameters $\boldsymbol{\beta}$ and the momentum variables $\boldsymbol{r}$ move continuously according to the differential equations
\begin{align}
\frac{\mathrm{d}\boldsymbol{\beta}}{\mathrm{d} t} & =\frac{\partial H}{\partial\boldsymbol{r}} =\boldsymbol{M}^{-1}\boldsymbol{r},\\
\frac{\mathrm{d}\boldsymbol{r}}{\mathrm{d} t}&=-\frac{\partial H}{\partial\boldsymbol{\beta}} =\nabla_{\boldsymbol{\beta}}\mathcal{L}\left(\boldsymbol{\beta}\right),
\end{align}
where $\nabla_{\boldsymbol{\beta}}$ denotes the gradient with respect to $\boldsymbol{\beta}$. In a practical implementation, the continuous time HMC dynamics are approximated by discretising time using a small step size $\epsilon$.
We can simulate the evolution over time of $\left(\boldsymbol{\beta},\boldsymbol{r}\right)$ via the \lq\lq leapfrog\rq\rq{} integrator, where one step of the leapfrog update is
\begin{eqnarray*}
\boldsymbol{r}\left( t +\frac{\epsilon}{2}\right) & = & \boldsymbol{r}\left( t \right)+\epsilon\nabla_{\boldsymbol{\beta}}\mathcal{L}\left(\boldsymbol{\beta}\left(t \right)\right)/2,\\
\boldsymbol{\beta}\left( t +\epsilon\right) & = & \boldsymbol{\beta}\left(t \right)+\epsilon \boldsymbol{M}^{-1}\boldsymbol{r}\left(t +\frac{\epsilon}{2}\right),\\
\boldsymbol{r}\left(t +\epsilon\right) & = & \boldsymbol{r}\left( t +\epsilon/2\right)+\epsilon\nabla_{\boldsymbol{\beta}}\mathcal{L}\left(\boldsymbol{\beta}\left( t +\epsilon\right)\right)/2.
\end{eqnarray*}
Each leapfrog step is time reversible by negating the step size $\epsilon$. The leapfrog integrator provides a mapping $\left(\boldsymbol{\beta}^\ast,\boldsymbol{r}^\ast\right)\rightarrow\left(\boldsymbol{\beta},\boldsymbol{r}\right)$ that is both time-reversible and volume preserving \citep{Neal:2011}. It follows that the Metropolis-Hastings algorithm with acceptance probability
\[
\min\left(1,\frac{\exp\left(\mathcal{L}\left(\boldsymbol{\beta}\right)-\frac{1}{2}\boldsymbol{r}^{T} \boldsymbol{M}^{-1}\boldsymbol{r}\right)} {\exp\left(\mathcal{L}\left(\boldsymbol{\beta}^\ast\right)-\frac{1}{2}{\boldsymbol{r}^\ast}^{T} \boldsymbol{M}^{-1}\boldsymbol{r}^\ast\right)}\right)
\]
produces an ergodic, time reversible Markov chain that satisfies detailed balance and has stationary density $p\left(\boldsymbol{\beta}|\boldsymbol{\theta}_{-\boldsymbol{\beta}},\boldsymbol{y},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{k}\right)$ \citep{Liu:2001a,Neal:1996}. \Cref{alg:Hamiltonian-Monte-Carlo} summarises a single iteration of the Hamiltonian Monte Carlo method.
\begin{algorithm}[h!] \caption{Hamiltonian Monte Carlo\label{alg:Hamiltonian-Monte-Carlo}}
Given $\boldsymbol{\beta}^{*}$, $\epsilon$ and $L$, where $L$ is the number of leapfrog updates:
\begin{enumerate}
\item Sample $\boldsymbol{r}^{*}\sim N\left(0,\boldsymbol{M}\right)$ and set $\left(\boldsymbol{\beta},\boldsymbol{r}\right)\leftarrow\left(\boldsymbol{\beta}^{*},\boldsymbol{r}^{*}\right)$.
\item For $i=1,\dots,L$, set $\left(\boldsymbol{\beta},\boldsymbol{r}\right)\leftarrow {\rm Leapfrog}\left(\boldsymbol{\beta},\boldsymbol{r},\epsilon\right)$.
\item With probability $\alpha=\min\left(1,\frac{\exp\left(\mathcal{L}\left(\boldsymbol{\beta}\right)-\frac{1}{2}\boldsymbol{r}^{T}\boldsymbol{M}^{-1}\boldsymbol{r}\right)}{\exp\left(\mathcal{L}\left(\boldsymbol{\beta}^{*}\right)-\frac{1}{2}\boldsymbol{r}^{*T}\boldsymbol{M}^{-1}\boldsymbol{r}^{*}\right)}\right)$, set $\boldsymbol{\beta}^{*}=\boldsymbol{\beta}$ and $\boldsymbol{r}^{*}=-\boldsymbol{r}$.
\end{enumerate}
\end{algorithm}
The performance of HMC depends strongly on choosing suitable values for $\boldsymbol{M}$, $\epsilon$, and $L$.
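Before discussing these choices, the following minimal Python sketch shows one HMC iteration built from the leapfrog update above; \texttt{log\_post} and \texttt{grad\_log\_post} stand for $\mathcal{L}(\boldsymbol{\beta})$ and $\nabla_{\boldsymbol{\beta}}\mathcal{L}(\boldsymbol{\beta})$ and are assumed to be supplied by the user. It is an illustration only and omits the adaptive tuning discussed next.
\begin{verbatim}
import numpy as np

def leapfrog(beta, r, eps, grad_log_post, Minv):
    # one leapfrog step for the Hamiltonian H(beta, r)
    r = r + 0.5 * eps * grad_log_post(beta)   # half step for the momentum
    beta = beta + eps * (Minv @ r)            # full step for the parameters
    r = r + 0.5 * eps * grad_log_post(beta)   # half step for the momentum
    return beta, r

def hmc_step(beta0, log_post, grad_log_post, eps, L, M, rng):
    # one HMC iteration targeting the density proportional to exp(log_post(beta))
    Minv = np.linalg.inv(M)
    r0 = rng.multivariate_normal(np.zeros(len(beta0)), M)
    beta, r = beta0.copy(), r0.copy()
    for _ in range(L):
        beta, r = leapfrog(beta, r, eps, grad_log_post, Minv)
    log_accept = (log_post(beta) - 0.5 * r @ Minv @ r) \
               - (log_post(beta0) - 0.5 * r0 @ Minv @ r0)
    if np.log(rng.uniform()) < log_accept:
        return beta, -r      # accept; the momentum is negated as in the algorithm above
    return beta0, r0         # reject
\end{verbatim}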
We set $\boldsymbol{M}=\widehat{\boldsymbol{\Sigma}}^{-1}$, where $\widehat{\boldsymbol{\Sigma}}$ is an estimate of the posterior covariance matrix obtained from preliminary pilot runs of the HMC algorithm. The step size $\epsilon$ determines how well the leapfrog integration approximates the Hamiltonian dynamics. If $\epsilon$ is too large, the integration error is large and the acceptance rate is low; if $\epsilon$ is too small, the computational cost of obtaining distant proposals becomes prohibitive. Similarly, if $L$ is too small, the proposal stays close to the current value of the parameters, resulting in undesirable random walk behaviour and slow mixing, while if $L$ is too large, HMC generates trajectories that retrace their steps. Our article uses the No-U-Turn sampler (NUTS) of \citet{Hoffman:2014} together with the dual averaging algorithm of \citet{Nesterov:2009} to select $L$ and $\epsilon$ adaptively, in a way that still leaves the target density invariant and preserves time reversibility. \Crefrange{sec:Sampling-Scheme-for bivariate probit}{sec:Sampling-Scheme-for mixed gumbel} give the derivatives required by the Hamiltonian dynamics for the panel data models in \cref{sec:General-Panel-Data}.

\subsection{Comparing the performance of the PMwG with some other approaches \label{sub:TNV}}
We now specialise the PMwG sampling scheme described in \Cref{alg: PMwG} to the bivariate probit model with random effects to obtain \Cref{alg: pmwg for biv probit} below; see \Cref{sec:Sampling-Scheme-for bivariate probit} for more details. Let $\boldsymbol{\theta}=\left(\boldsymbol{\beta}_{1},\boldsymbol{\beta}_{2},\rho,\boldsymbol{\Sigma}_{\alpha}\right)$ be the set of unknown parameters of interest. We use the following prior distributions: $\rho\sim U\left(-1,1\right)$, $\boldsymbol{\Sigma}_{\alpha}^{-1}\sim {\rm Wishart}\left(v_{0},\boldsymbol{R}_{0}\right)$ with $v_{0}=6$ and $\boldsymbol{R}_{0}=400\boldsymbol{I}_{2}$, and a $N\left(0,100\boldsymbol{I}_{d}\right)$ prior for the coefficients of the covariates. All the priors are relatively uninformative.
\begin{algorithm}[H]\caption{PMwG sampling scheme for the bivariate probit model \label{alg: pmwg for biv probit}}
\begin{enumerate}
\item Generate $\boldsymbol{\Sigma}_{\alpha}|\boldsymbol{k}^{*},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}^{*}},\boldsymbol{\theta}_{-\Sigma_{\alpha}}^{*},\boldsymbol{y}$ from a Wishart $W\left(v_{1},\boldsymbol{R}_{1}\right)$ distribution, where $v_{1}=v_{0}+P$ and $\boldsymbol{R}_{1}=\left[\boldsymbol{R}_{0}^{-1}+\sum_{i=1}^{P}\boldsymbol{\alpha}_{i}\boldsymbol{\alpha}_{i}^{T}\right]^{-1}$.
\item Generate $\rho|\boldsymbol{k}^{*},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}^{*}},\boldsymbol{\theta}_{-\rho}^{*},\boldsymbol{y}$ using the adaptive random walk proposal described below.
\item Generate $\left(\boldsymbol{\beta}_{1},\boldsymbol{\beta}_{2}\right)|\boldsymbol{k}^{*},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}^{*}},\boldsymbol{\theta}_{-\left(\boldsymbol{\beta}_{1},\boldsymbol{\beta}_{2}\right)}^{*},\boldsymbol{y}$ using PMwG with the Hamiltonian proposal described in Section \ref{sub:Sampling--using Hamiltonian proposal}.
\item Sample $\boldsymbol{\alpha}_{1:P}^{-\boldsymbol{k}}\sim\tilde{\pi}_{N}\left(\cdot |\boldsymbol{k}^{*},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}^{*}},\boldsymbol{\theta}^{*}\right)$ using the Conditional Monte Carlo Algorithm \ref{alg: CIS}.
\item Sample $\left(k_{1},\dots,k_{P}\right)$ with probability given by $\Pr\left(k_{1}=l_{1},\dots,k_{P}=l_{P}|\boldsymbol{\theta}, \boldsymbol{\alpha}_{1:P}^{-\boldsymbol{k}},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}^{*}},\boldsymbol{y}\right) =\prod_{i=1}^{P}\overline{w}_{i}^{l_{i}}$.
\end{enumerate}
\end{algorithm}
In Step 2 we transform $\rho$ to $\rho_{\rm un} ={\rm tanh}^{-1}(\rho)$ so that $\rho_{\rm un}$ is unconstrained, and then use the adaptive random walk method of \citet{Garthwaite:2015}, which automatically scales univariate Gaussian random walk proposals so that the acceptance rate is around 0.3. The MH acceptance probability is
\[
1\land\frac{\tilde{\pi}_{N}\left(\rho|\boldsymbol{k}^{*},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}^{*}},\boldsymbol{\theta}_{-\rho}^{*}\right)}{\tilde{\pi}_{N}\left(\rho^{*}|\boldsymbol{k}^{*},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}^{*}},\boldsymbol{\theta}_{-\rho}^{*}\right)}\frac{\left|1-\rho^{2}\right|}{\left|1-\left(\rho^{*}\right)^{2}\right|}.
\]
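The following sketch illustrates Step 2 on the transformed scale; \texttt{log\_target} stands for the log of $\tilde{\pi}_{N}(\rho|\boldsymbol{k}^{*},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}^{*}},\boldsymbol{\theta}_{-\rho}^{*})$, the $1-\rho^{2}$ factors are the Jacobian of the $\tanh$ transformation, and the adaptive choice of the step size from \citet{Garthwaite:2015} is omitted here.
\begin{verbatim}
import numpy as np

def update_rho(rho_current, step_sd, log_target, rng):
    # random walk on rho_un = arctanh(rho); accept/reject on the rho scale
    rho_un = np.arctanh(rho_current)
    rho_prop = np.tanh(rho_un + step_sd * rng.standard_normal())
    log_ratio = (log_target(rho_prop) + np.log1p(-rho_prop ** 2)) \
              - (log_target(rho_current) + np.log1p(-rho_current ** 2))
    if np.log(rng.uniform()) < log_ratio:
        return rho_prop
    return rho_current
\end{verbatim}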
Alternatively, we can replace Steps 4 and 5 and sample the latent random effects using a Metropolis-Hastings algorithm, denoted MCMC-MH, as follows:
\begin{itemize}
\item [$4^{*}$)] Sample $\boldsymbol{\alpha}_{i}\sim p\left(\boldsymbol{\alpha}_{i}|\boldsymbol{\theta}\right)=N\left(0,\boldsymbol{\Sigma}_{\alpha}\right)$ for $i=1,\dots, P$.
\item [$5^{*}$)] Accept $\boldsymbol{\alpha}_{i}$ with acceptance probability
\[
\alpha\left(\boldsymbol{\alpha}_{i},\boldsymbol{\alpha}_{i}^{*}|\boldsymbol{\theta}\right)=1\land\frac{\prod_{t=1}^{T}p\left(y_{it}|\boldsymbol{\theta},\boldsymbol{\alpha}_{i}\right)}{\prod_{t=1}^{T}p\left(y_{it}|\boldsymbol{\theta},\boldsymbol{\alpha}_{i}^{*}\right)}.
\]
\end{itemize}
To improve the mixing of the MCMC-MH algorithm, we can run Steps $4^{*}$ and $5^{*}$ for a number of iterations, say $10$, $20$, or $50$, for each individual random effect. \Cref{sec:Data-Augmentation-Bivariate probit} describes an alternative Gibbs sampling scheme with data augmentation for the bivariate probit model with random effects. We conducted a simulation study to compare three different approaches to estimation, PG, data augmentation, and MCMC-MH, using the base case bivariate probit model with random effects as the data generating process. To define a measure of the inefficiency of the different sampling schemes that takes computing time into account, we first define the Integrated Autocorrelation Time $\left({\rm IACT}_{\boldsymbol{\theta}}\right)$. For a univariate parameter $\theta$, the IACT is estimated by
\[
\widehat{{\rm IACT}}\left(\theta_{1:M}\right)\coloneqq 1+2\sum_{t=1}^{L}\widehat{\rho}_{t}\left(\theta_{1:M}\right),
\]
where $\widehat{\rho}_{t}\left(\theta_{1:M}\right)$ denotes the empirical autocorrelation at lag $t$ of $\theta_{1:M}$ (after discarding the burn-in iterates). A low value of the IACT estimate suggests that the chain mixes well. Here, $L$ is chosen as the first index for which the empirical autocorrelation satisfies $\left|\widehat{\rho}_{t}\left(\theta_{1:M}\right)\right|<2/\sqrt{M}$, i.e.\ when the empirical autocorrelation coefficient becomes statistically insignificant. Our measure of the inefficiency of a sampling scheme is the time normalised variance
\[
{\rm TNV}\coloneqq {\rm IACT}_{\rm mean}\times CT,
\]
where $CT$ is the computing time and ${\rm IACT}_{\rm mean}$ is the average of the IACTs over all the parameters. For this simulation study, we generated a number of datasets with $P=1000$ individuals and $T=4$ time periods. The covariates are generated as $x_{1,it},\dots,x_{10,it}\sim U\left(0,1\right)$, and the parameters are set to
\begin{align*}
\boldsymbol{\beta}_{1} &=\left(-1.5,0.1,-0.2,0.2,-0.2,0.1,-0.2,0.1,-0.1,-0.2,0.2\right)^{T},\\
\boldsymbol{\beta}_{2} &=\left(-2.5,0.1,0.2,-0.2,0.2,0.12,0.2,-0.2,0.12,-0.12,0.12\right)^{T},
\end{align*}
with $\tau_{1}^{2}=2.5$, $\tau_{2}^{2}=1$ and $\rho_{\varepsilon}=\rho_{\alpha}=0.5$. In the simulation study, the total number of MCMC iterations was 11000, with the first 1000 discarded as burn-in, and the number of importance samples in the PMwG method was set to $100$. \Cref{tab:Comparison-of-Different biv probit2-1} summarises the estimation results and shows that the PMwG sampler performs best. Tables \ref{tab:Comparison-of-Different biv probit1} and \ref{tab:Comparison-of-Different biv probit2} in Appendix \ref{tab:Comparison-of-Different bivariate probit} show the inefficiency factors (IACT) for each parameter in the bivariate probit model. In terms of TNV, PMwG is more than twice as efficient as the data augmentation approach, and is $7.51$, $8.35$, $14.22$, and $25.40$ times better than MCMC-MH with $1$, $10$, $20$, and $50$ iterations, respectively. This gain is mostly due to the faster computing time (CT) of PG relative to the MCMC-MH method. Note that with PG, the computation of the importance weights in the Conditional Monte Carlo step used to sample each individual latent random effect can easily be parallelised, whereas the MCMC-MH approach is a sequential method that is not as easily parallelised. The full Gibbs sampler with data augmentation may not be available for all of the models one might want to consider. The high dimensional parameter vector $\boldsymbol{\beta}$ is sampled much more efficiently using Hamiltonian proposals than with the data augmentation approach, which confirms the usefulness of Hamiltonian proposals for such high dimensional parameters. We also ran a second simulation study for the mixed discrete-linear Gaussian regression; \Cref{sec:Simulation-Mixed-Discrete} reports the results.
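For completeness, the following small sketch shows one way the IACT and TNV figures reported in \Cref{tab:Comparison-of-Different biv probit2-1} can be computed from the post burn-in draws of a single parameter; the cut-off $L$ is the first lag at which the empirical autocorrelation becomes insignificant, as defined above.
\begin{verbatim}
import numpy as np

def iact(draws):
    # estimated integrated autocorrelation time of a univariate chain
    x = np.asarray(draws, dtype=float)
    M = len(x)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[M - 1:] / (np.arange(M, 0, -1) * x.var())
    cutoff = 2.0 / np.sqrt(M)
    L = next((t for t in range(1, M) if abs(acf[t]) < cutoff), M - 1)
    return 1.0 + 2.0 * acf[1:L + 1].sum()

def tnv(iacts, computing_time):
    # time normalised variance: mean IACT over all parameters times computing time
    return np.mean(iacts) * computing_time
\end{verbatim}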
\begin{table}[H]
\caption{TNV comparison of different sampling schemes (PG, data augmentation, MCMC-MH) for the bivariate probit regression simulation with random effects, with $P=1000$ and $T=4$. \label{tab:Comparison-of-Different biv probit2-1}}
\centering{}
\begin{tabular}{ccccccc}
\hline
 & PG & Data Aug. & MH1 & MH10 & MH20 & MH50\tabularnewline
\hline
Time & $0.62$ & $0.13$ & $0.48$ & $3.21$ & $6.56$ & $15.13$\tabularnewline
${\rm IACT}_{\rm mean}$ & $4.42$ & $44.23$ & $42.88$ & $7.13$ & $5.94$ & $4.60$\tabularnewline
TNV & $2.74$ & $5.75$ & $20.58$ & $22.89$ & $38.97$ & $69.60$\tabularnewline
Rel. TNV & $1$ & $2.09$ & $7.51$ & $8.35$ & $14.22$ & $25.40$\tabularnewline
\hline
\end{tabular}
\end{table}

\section{The Data and their Characteristics}\label{S: data and characterization}
\subsection{Sample and Variable Definitions}\label{SS: sample and variable defs}
To estimate the impact of life-shock events on the two outcomes, alcohol consumption (especially the propensity to binge drink) and the level of mental health, we use data from Release 14 of the Household, Income and Labour Dynamics in Australia (HILDA) survey. HILDA is a nationally representative longitudinal survey which commenced in Australia in 2001 with a survey of 13,969 persons in 7,682 households and is conducted annually. Each year, all household members aged 15 years or older are interviewed and considered part of the sample. Information is collected on education, income, health, life satisfaction, family formation, labour force dynamics, employment conditions, and other measures of economic and subjective well-being. Our analysis is at the level of the individual, and we include those aged 15 years or older with non-missing information on the two outcome variables, the life-shock variables, and the other independent variables. We use balanced samples.\footnote{The data used in this paper were extracted using the Add-On package PanelWhiz for Stata. PanelWhiz (http://www.PanelWhiz.eu) was written by Dr.\ John P. Haisken-DeNew ([email protected]). See \citet{Hahn:2013} and \citet{Haisken-DeNew:2010} for details.} Following \citet{Frijters:2014}, the data on mental health status used in this paper are generated from nine questions included in the Short-Form General Health Survey (SF-36), which is available in all the waves. We construct a mental health score by taking the mean of the individual's responses and then standardising so that the index has mean zero and standard deviation one. Lower scores indicate better mental health status. \citet{Butterworth:2004} provide evidence that the SF-36 data collected in the HILDA survey are valid and can be used as a general measure of physical and mental health status. We then categorise an individual as having good mental health if their score is below 0 and as having poor mental health if their score is above 0. The data on alcohol consumption used in this paper are generated from two questions in the HILDA survey. Subjects are asked to respond to the question: Do you drink alcohol? The second question relates to binge drinking and is only available in waves 7, 9, 11, and 13. Respondents identified as drinkers by the first question are asked: how often do you have 5 or more (female) or 7 or more (male) standard drinks on any one occasion?
Similarly to \citet{Srivastava:2010}, we define the composite binary variable FREQUENT\_BINGE, with FREQUENT\_BINGE $=1$ if a male respondent drinks excessive alcohol more than 1 day per week or a female respondent drinks alcohol more than 2 or 3 times a month, and zero otherwise. The life-shock indicators are generated from responses to a section of HILDA's self-completion questionnaire. Respondents are told \lq We now would like you to think about major events that have happened in your life over the past 12 months\rq{} and are asked whether any of the following apply to them: (1) Separated from spouse or long-term partner, (2) Serious personal injury or illness to self, (3) Death of spouse/child, (4) Got back together with spouse or long-term partner after a separation, (5) Death of a close friend, (6) Victim of property crime (e.g.\ theft, housebreaking), (7) Got married, (8) Promoted at work, (9) Major improvement in financial situation, (10) Major worsening in financial situation, (11) Changed residence, (12) Partner or I gave birth to a child.

\subsection*{Other Variables}
The other control variables included are marital status (married, single/widowed/divorced) and the highest educational qualification attained (degree, diploma/certificate, high school and no qualification). Single/widowed/divorced is the excluded category for marital status; similarly, high school and no qualification is the excluded category for the educational variable. We also include the total number of children below age 18 living in the household, age, and the logarithm of annualised household income. The Mundlak correction contains $\overline{age}$, $\overline{age}^{2}$, $\overline{\log\left(income\right)}$ and $\overline{num.child}$.

\section{Results and Discussion}\label{S: results and discussion}
\subsection{Estimation Results}\label{SS: estimation results}
The PMwG sampling scheme was used to estimate the various models defined in \cref{sec:General-Panel-Data}. For each panel data model, we used 11000 MCMC samples, of which the first 1000 were discarded as burn-in. After convergence, $M=10{,}000$ iterates $\left\{ \boldsymbol{\theta}^{\left(m\right)}\right\}$ were collected, from which we estimated the posterior means of the parameters as well as their 95\% credible intervals. The Bayesian methodology provides information on the entire posterior distribution, not only for the parameters of the models but also for other quantities of interest, especially the partial effects. We say that a variable of interest is significant if its 95\% posterior probability interval does not cover zero. Models for men and women were analysed separately throughout. \Cref{tab:Estimation-Results-for Male dependence parameters,tab:Estimation-Results-for feMale dependence parameters} in \cref{sec:Empirical-Results} show the estimates of the dependence parameters for the various model specifications. The Clayton and Gumbel copula specifications are used for the contemporaneous error terms. The overall pattern of dependence is similar for males and females. The dependence between the two outcomes, both through the individual effects and through the error terms, is weak for the male and female models, as measured by Kendall's tau, which we denote by $\kappa_{\tau}$. The lower tail dependence based on the Clayton copula is very close to zero for both males and females.
This indicates that, in our data, there is little dependence between the unobservables associated with being in very good mental health and those associated with no excessive alcohol consumption. Furthermore, the upper tail dependence based on the Gumbel copula is also very close to zero for both males and females, which suggests that there is a weak relationship between having very poor mental health and excessive alcohol consumption after conditioning on the covariates. The estimates of the dependence parameters from the bivariate probit model are similar to those from the Gaussian model specification and are consistent with the expected positive correlation, although only one of the correlations is significant. The estimate of $\tau_{2}^{2}$ is much bigger for the bivariate probit model than for the Gaussian model because some information is lost in going from continuous to binary variables. Tables \ref{tab:Estimation-Results-for male binge } to \ref{tab:Estimation-Results-for female mental health} in \cref{sec:Empirical-Results} give the estimates of the parameters of the main covariates. For conciseness, we do not report the estimates associated with the Mundlak corrections. The patterns of the estimates for the covariates in the binge drinking equation $(y_1)$ from the bivariate probit model and the Gaussian model specification are relatively similar across males and females. However, the estimates of the covariates in the mental health equation $(y_2)$ differ slightly, and the Gaussian model has tighter posterior probability intervals. All the copula models gave similar results. Furthermore, all the parameters $\left(\boldsymbol{\beta}_{1},\boldsymbol{\beta}_{2}\right)$ are estimated efficiently for all the models. For males, the mean IACT for $\boldsymbol{\beta}_{1}$ is $2.55$, $2.55$, $2.59$, and $2.60$ and the mean IACT for $\boldsymbol{\beta}_{2}$ is $1.41$, $2.43$, $1.30$, and $1.51$ for the mixed Gaussian, bivariate probit, mixed Clayton, and mixed Gumbel models, respectively. For females, the mean IACT for $\boldsymbol{\beta}_{1}$ is $2.35$, $2.41$, $2.41$, and $2.27$ and the mean IACT for $\boldsymbol{\beta}_{2}$ is $1.58$, $2.09$, $1.41$, and $1.18$ for the mixed Gaussian, bivariate probit, mixed Clayton, and mixed Gumbel models, respectively. This confirms the usefulness of Hamiltonian Monte Carlo proposals for the high dimensional parameter $\boldsymbol{\beta}$. Our primary interest is in the impact of the shocks on the joint outcomes of binge drinking and poor mental health. For these we compute average partial effects, as described in the next section.

\subsection{Average Partial Effects}\label{SS: average partial effects}
We use an Average Partial Effect (APE) to study how a life event such as \lq victim of a crime\rq{} affects the joint probability of binge drinking and low mental health. Let $A_{it}$ denote the event that person $i$ at time $t$ both binge drinks and has poor mental health.
We define the APE for a particular life event LE as
\begin{align} \label{eq: ape def}
{\rm APE}_{\rm LE}&:= \frac{1}{PT}\sum_{i=1}^P\sum_{t=1}^T \int \left[ \Pr\left(A_{it}| \boldsymbol{x}_{it}^{(1)}, \overline{\boldsymbol{x}}_i , \boldsymbol{\theta}, \boldsymbol{y} \right) - \Pr\left(A_{it}| \boldsymbol{x}_{it}^{(0)}, \overline{\boldsymbol{x}}_i , \boldsymbol{\theta}, \boldsymbol{y} \right) \right] \pi(\boldsymbol{\theta}) \mathrm{d}\boldsymbol{\theta},
\end{align}
where $\pi(\boldsymbol{\theta})$ is the posterior density of $\boldsymbol{\theta}$ and the superscript $(1)$ in $\boldsymbol{x}_{it}^{(1)}$ means that the life event of interest is set to 1, with a similar interpretation for $\boldsymbol{x}_{it}^{(0)}$. That is, ${\rm APE}_{\rm LE}$ is the posterior expectation of the change in the probability of both binge drinking and having poor mental health when the life event indicator is switched from 0 to 1, averaged over all individuals and time periods. Due to the similarity of the results across the copula models, we only present the results for the Gaussian copula and the bivariate probit models. Given the draws $\{ \boldsymbol{\theta}^{(m)}, m=1, \dots, M\}$ from the posterior of $\boldsymbol{\theta}$, the estimate of ${\rm APE}_{\rm LE}$ for the bivariate probit model is
\begin{align*}
\widehat{{\rm APE}}_{\rm LE} & = \frac{1}{M}\sum_{m=1}^{M}\frac{1}{PT}\sum_{i=1}^{P}\sum_{t=1}^{T} \left[ \Phi_{2}\left(\left(\boldsymbol{\zeta}_{it}^{(1)}\right)^{(m)} ; \boldsymbol{\Sigma}^{(m)} + \boldsymbol{\Sigma}_\alpha^{(m)} \right) - \Phi_{2}\left(\left(\boldsymbol{\zeta}_{it}^{(0)}\right)^{(m)} ; \boldsymbol{\Sigma}^{(m)} + \boldsymbol{\Sigma}_\alpha^{(m)} \right) \right],
\intertext{where}
\boldsymbol{\zeta}_{it} & =\boldsymbol{\zeta}_{it}(\boldsymbol{x}_{it}, \overline{\boldsymbol{x}}_i, \boldsymbol{\theta}):=\left(\boldsymbol{x}_{1,it}^{T}\boldsymbol{\beta}_{1,1}+ \overline{\boldsymbol{x}}_{1,i}^{T}\boldsymbol{\beta}_{1,2},\ \boldsymbol{x}_{2,it}^{T}\boldsymbol{\beta}_{2,1}+ \overline{\boldsymbol{x}}_{2,i}^{T}\boldsymbol{\beta}_{2,2}\right)^{T}, \qquad \left(\boldsymbol{\zeta}_{it}^{(1)}\right)^{(m)} = \boldsymbol{\zeta}_{it}(\boldsymbol{x}_{it}^{(1)}, \overline{\boldsymbol{x}}_i, \boldsymbol{\theta}^{(m)}),
\end{align*}
with $\left(\boldsymbol{\zeta}_{it}^{(0)}\right)^{(m)}$ defined similarly, $\boldsymbol{\Sigma}^{(m)} = \boldsymbol{\Sigma}( \boldsymbol{\theta}^{(m)} )$, $\boldsymbol{\Sigma}_\alpha^{(m)} = \boldsymbol{\Sigma}_\alpha ( \boldsymbol{\theta}^{(m)} )$, and $\Phi_{2}(\cdot\,;\boldsymbol{\Sigma})$ the bivariate normal cdf with mean zero and covariance matrix $\boldsymbol{\Sigma}$. \Cref{tab:Average-Partial-Effects for male,tab:Average-Partial-Effects for female} summarise the estimates of the APEs of the major life-event shocks on the probability of binge drinking and a low mental health score, for both males and females, for the bivariate probit and Gaussian copula models. The sign of the APE has a clear qualitative interpretation: a significant positive sign implies a significant increase in the joint probability of binge drinking and having a low mental health score, and vice versa. All the IACTs of the APEs, for both males and females and for all life events, are very close to $1$, showing that the APEs are estimated efficiently.
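The following sketch shows the Monte Carlo evaluation of $\widehat{{\rm APE}}_{\rm LE}$ for one posterior draw and its average over draws; it is an illustration only, with \texttt{zeta1} and \texttt{zeta0} holding the $PT\times 2$ arrays of linear predictors $\boldsymbol{\zeta}_{it}^{(1)}$ and $\boldsymbol{\zeta}_{it}^{(0)}$ and \texttt{sigma\_total} the matrix $\boldsymbol{\Sigma}+\boldsymbol{\Sigma}_{\alpha}$ for that draw.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def ape_one_draw(zeta1, zeta0, sigma_total):
    # average over (i, t) of Phi_2(zeta^(1); Sigma) - Phi_2(zeta^(0); Sigma)
    mvn = multivariate_normal(mean=np.zeros(2), cov=sigma_total)
    return np.mean([mvn.cdf(a) - mvn.cdf(b) for a, b in zip(zeta1, zeta0)])

def ape_estimate(zeta1_draws, zeta0_draws, sigma_draws):
    # average ape_one_draw over the M posterior draws theta^(m)
    return np.mean([ape_one_draw(z1, z0, s)
                    for z1, z0, s in zip(zeta1_draws, zeta0_draws, sigma_draws)])
\end{verbatim}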
Although these APEs are small, they should be compared with the unconditional joint probability of binge drinking and poor mental health, which is also small for both males and females. For example, the effect of a personal injury for males is a bit over 2 percentage points for both models, but when expressed as a percentage of the unconditional probability it is 37\% according to the probit estimates and 38\% according to the Gaussian estimates. The death of a spouse/child also has large relative effects for males, but these are not significant. In fact, the only significant APE for the joint probability is for personal injury, despite several of the estimates in the marginal models reported in \cref{tab:Estimation-Results-for male binge ,tab:Estimation-Results-for male mental health} being significant. In general, the results from the probit and Gaussian models are very similar for both males and females. However, unlike the results for males, several shocks have large and significant effects for females. Being separated from their spouse, changing residence, a worsening financial situation, and a promotion at work are all significant shocks for females. Each of these increases the joint probability of binge drinking and poor mental health and, using the Gaussian results, has an impact relative to the unconditional probability ranging from 16\% for a change in residence to 58\% for a worsening in the financial position. Finally, the \lq gave birth\rq{} shock has a significant negative association which reduces the joint probability of binge drinking and poor mental health.
\begin{table}[H]
\caption{Average partial effects on the joint probability of binge drinking and poor mental health for the major life-event variables, males.
The symbol $^{*}$ denotes statistical significance. \label{tab:Average-Partial-Effects for male}}
\centering{}
\begin{tabular}{ccccc}
\hline
Variables & Probit APE & IACT & Gaussian APE & IACT\tabularnewline
\hline
\hline
gave birth & $\underset{\left(-0.02,0.00\right)}{-0.01}$ & 1.17 & $\underset{\left(-0.03,0.00\right)}{-0.01}$ & 1.22\tabularnewline
death of a friend & $\underset{\left(-0.00,0.01\right)}{0.00}$ & 1.28 & $\underset{\left(-0.01,0.01\right)}{0.00}$ & 1.38\tabularnewline
death of a spouse/child & $\underset{\left(-0.01,0.08\right)}{0.03}$ & 1.05 & $\underset{\left(-0.02,0.08\right)}{0.02}$ & 1.12\tabularnewline
personal injury & $\underset{\left(0.01,0.04\right)^{*}}{0.02}$ & 1.19 & $\underset{\left(0.01,0.04\right)^{*}}{0.02}$ & 1.45\tabularnewline
getting married & $\underset{\left(-0.02,0.01\right)}{-0.01}$ & 1.15 & $\underset{\left(-0.03,0.01\right)}{-0.01}$ & 1.14\tabularnewline
changed residence & $\underset{\left(-0.01,0.01\right)}{-0.00}$ & 1.24 & $\underset{\left(-0.01,0.01\right)}{0.00}$ & 1.22\tabularnewline
victim of crime & $\underset{\left(-0.01,0.02\right)}{0.00}$ & 1.09 & $\underset{\left(-0.01,0.02\right)}{0.01}$ & 1.16\tabularnewline
promoted at work & $\underset{\left(-0.01,0.02\right)}{0.00}$ & 1.29 & $\underset{\left(-0.01,0.01\right)}{0.00}$ & 1.26\tabularnewline
back with spouse & $\underset{\left(-0.02,0.04\right)}{0.00}$ & 1.26 & $\underset{\left(-0.03,0.03\right)}{-0.00}$ & 1.38\tabularnewline
separated from spouse & $\underset{\left(-0.00,0.03\right)}{0.01}$ & 1.14 & $\underset{\left(-0.00,0.03\right)}{0.01}$ & 1.22\tabularnewline
improvement in financial situation & $\underset{\left(-0.01,0.01\right)}{-0.00}$ & 1.04 & $\underset{\left(-0.02,0.02\right)}{-0.00}$ & 1.10\tabularnewline
worsening in financial situation & $\underset{\left(-0.01,0.02\right)}{0.01}$ & 1.26 & $\underset{\left(-0.01,0.03\right)}{0.01}$ & 1.29\tabularnewline
\hline
Unconditional prob. & 0.065 & & & \tabularnewline
\hline
\end{tabular}
\end{table}

\begin{table}[H]
\caption{Average partial effects on the joint probability of binge drinking and poor mental health for the major life-event variables, females. The symbol $^{*}$ denotes statistical significance.
\label{tab:Average-Partial-Effects for female}}
\centering{}
\begin{tabular}{ccccc}
\hline
Variables & Probit & IACT & Gaussian & IACT\tabularnewline
\hline
\hline
gave birth & $\underset{\left(-0.03,-0.01\right)}{-0.02}$ & 1.60 & $\underset{\left(-0.04,-0.01\right)}{-0.03}$ & 1.56\tabularnewline
death of a friend & $\underset{\left(-0.00,0.02\right)}{0.01}$ & 1.53 & $\underset{\left(0.00,0.02\right)}{0.01}$ & 1.66\tabularnewline
death of a spouse/child & $\underset{\left(-0.02,0.05\right)}{0.01}$ & 1.32 & $\underset{\left(-0.03,0.04\right)}{0.01}$ & 1.51\tabularnewline
personal injury & $\underset{\left(-0.00,0.02\right)}{0.01}$ & 1.49 & $\underset{\left(-0.00,0.02\right)}{0.01}$ & 1.59\tabularnewline
getting married & $\underset{\left(-0.01,0.02\right)}{0.00}$ & 1.36 & $\underset{\left(-0.01,0.03\right)}{0.01}$ & 1.55\tabularnewline
changed residence & $\underset{\left(0.00,0.01\right)^{*}}{0.01}$ & 1.47 & $\underset{\left(0.00,0.02\right)^{*}}{0.01}$ & 1.75\tabularnewline
victim of crime & $\underset{\left(-0.01,0.01\right)}{-0.00}$ & 1.46 & $\underset{\left(-0.02,0.01\right)}{-0.00}$ & 1.56\tabularnewline
promoted at work & $\underset{\left(0.01,0.03\right)^{*}}{0.02}$ & 1.45 & $\underset{\left(0.01,0.03\right)^{*}}{0.02}$ & 1.59\tabularnewline
back with spouse & $\underset{\left(-0.02,0.03\right)}{-0.00}$ & 1.42 & $\underset{\left(-0.03,0.03\right)}{-0.00}$ & 1.64\tabularnewline
separated from spouse & $\underset{\left(0.01,0.04\right)^{*}}{0.03}$ & 1.36 & $\underset{\left(0.01,0.04\right)^{*}}{0.03}$ & 1.60\tabularnewline
improvement in financial & $\underset{\left(-0.01,0.02\right)}{0.01}$ & 1.49 & $\underset{\left(-0.01,0.02\right)}{0.01}$ & 1.44\tabularnewline
worsening in financial & $\underset{\left(0.01,0.05\right)^{*}}{0.03}$ & 1.61 & $\underset{\left(0.01,0.06\right)^{*}}{0.03}$ & 1.74\tabularnewline
\hline
Unconditional Prob. & 0.058 & & & \tabularnewline
\hline
\end{tabular}
\end{table}
\section{Conclusions}\label{S: conclusions}
Based on recent advances in Particle Markov chain Monte Carlo (PMCMC), we demonstrate an approach to estimating flexible model specifications for multivariate outcomes using panel data. We propose a particle Metropolis within Gibbs (PMwG) sampling scheme for Bayesian inference in such flexible models and show that this sampler is more efficient than competing methods. The panel data methods we develop in this paper also accommodate a mix of discrete and continuous outcomes and, in doing so, avoid the common practice of reducing all outcomes to binary variables so that a multivariate probit model can be used. In our application, jointly modelling alcohol consumption and mental health often gave only slightly different results from those obtained after discretising the outcomes. However, given that the more general specifications better reflect the discrete alcohol consumption outcome and the continuous mental health measure, there is an argument that the bivariate probit model potentially masks important features of the relationship. The results in the application are somewhat surprising.
Specifying and comparing different copulas was motivated by the belief that the dependence structure between excessive alcohol consumption and poor mental health might be very different in the tails of the distributions. The results indicate that this is not the case: all three copulas provide qualitatively similar results. Moreover, they indicate that the relationship between alcohol consumption and mental health is weak, which is a key reason why differences did not emerge across the copulas. While we have not estimated a formal model allowing two-way causality between excessive alcohol consumption and mental health, if such effects exist then we would expect them to manifest themselves in a positive relationship in our joint estimation. Not finding such a relationship is possibly evidence that the causal effects running both ways between excessive alcohol consumption and mental health are indeed weak or even non-existent. This is not inconsistent with the existing literature, where the evidence is mixed; see, for example, \citet{Boden:2011}. Another possibility is that there are causal effects but they relate to particular subgroups of the population, and our models are insufficiently rich to capture the heterogeneity in these effects. We have conducted all analyses for males and females separately and found some differences across these groups, but other sources of heterogeneity may be associated with unobservable rather than observable individual characteristics. We leave this interesting line of work for future research.
\bibliographystyle{Chicago}
\bibliography{references_v1}
\section*{Appendices}
\begin{appendix}
\section{Proofs of Results} \label{sec:lemma1}
The first lemma shows that the estimate $\widehat{p}_{N}(\boldsymbol y |\boldsymbol \theta)$ given in \cref{eq:estimated likelihood} is an unbiased estimate of the likelihood $p(\boldsymbol y |\boldsymbol \theta)$.
\begin{lemma} \label{lemma1}
$E \left\{ \widehat{p}_{N}(\boldsymbol y |\boldsymbol \theta) \right\} = p(\boldsymbol y |\boldsymbol \theta)$.
\begin{proof}
From \cref{ass: use of IS} and Steps 1 and 2 of \cref{alg: IS sampling alg}, $E \left( w_{i}^{j} \right) = p(\boldsymbol y_i |\boldsymbol \theta)$, and hence the result follows from \eqref{eq:independence likelihood}, \eqref{eq:particle dist} and \eqref{eq:estimated likelihood}.
\end{proof}
\end{lemma}
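If, as \cref{eq:independence likelihood,eq:particle dist,eq:estimated likelihood} suggest, the likelihood factorises over panel units and $\widehat{p}_{N}(\boldsymbol y |\boldsymbol \theta)=\prod_{i=1}^{P}N^{-1}\sum_{j=1}^{N}w_{i}^{j}$ with weights drawn independently across $i$ (an assumption stated here only for intuition), then the calculation behind Lemma~\ref{lemma1} is simply
\[
E\left\{ \widehat{p}_{N}(\boldsymbol y |\boldsymbol \theta)\right\}
=\prod_{i=1}^{P}E\left\{ \frac{1}{N}\sum_{j=1}^{N}w_{i}^{j}\right\}
=\prod_{i=1}^{P}p(\boldsymbol y_i |\boldsymbol \theta)
=p(\boldsymbol y |\boldsymbol \theta).
\]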
\begin{lemma} \label{lemma2}
The marginal distribution of $\widetilde{\pi}_{N}\left(\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{\theta}\right)$ is given by
\[
\widetilde{\pi}_{N}\left(\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{\theta}\right)=\frac{\pi\left(\boldsymbol{\theta},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}}\right)}{N^{P}}.
\]
\begin{proof}
We integrate the target density $\widetilde{\pi}_{N}\left(\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{1:N},\boldsymbol{\theta}\right)$ over $\boldsymbol{\alpha}_{1:P}^{\left(-\boldsymbol{k}\right)}$:
\begin{eqnarray*}
\widetilde{\pi}_{N}\left(\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{\theta}\right)=\int\widetilde{\pi}_{N}\left(\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{1:N},\boldsymbol{\theta}\right)d\boldsymbol{\alpha}_{1:P}^{\left(-\boldsymbol{k}\right)}=\frac{\pi\left(\boldsymbol{\theta},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}}\right)}{N^{P}}.
\end{eqnarray*}
\end{proof}
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:convergence of pmmh}]
The proof follows from Assumption \ref{ass: use of IS}, Lemmas \ref{lemma1} and \ref{lemma2} and Theorem 1 in \citet{Andrieu:2009}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: converg of PMwG}]
The proof follows the approach of Theorem 5 in \citet[pg. 300]{Andrieu:2010}. The algorithm is a Metropolis within Gibbs sampler targeting \Cref{eq: expanded target density}. Hence we focus on establishing irreducibility and aperiodicity. It will be convenient to use the notation
$\widetilde{\pi}_{N}\left(\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{1:N},\boldsymbol{\theta}\right)=\widetilde{\pi}_{N}\left(\boldsymbol{k},\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}},\boldsymbol{\alpha}_{1:P}^{-\boldsymbol{k}},\boldsymbol{\theta}\right)$
to partition the particles $\boldsymbol{\alpha}_{1:P}^{1:N}$ into $\boldsymbol{\alpha}_{1:P}^{\boldsymbol{k}}$ and $\boldsymbol{\alpha}_{1:P}^{-\boldsymbol{k}}$, which are the particles selected and not selected by the indices $\boldsymbol{k}$, respectively. Let $\boldsymbol{k} \in \{1, \ldots, N\}^P$, $D \in {\cal B}\left( {\cal R}^P \right)$, $E \in {\cal B} \left( {\cal R}^{(N-1) P} \right)$ and $F \in {\cal B} \left( \Theta \right)$ be such that $\widetilde{\pi}_{N}\left( \{\boldsymbol{k}\} \times D \times E \times F \right) > 0$. From Assumption \ref{ass: use of IS}, it is possible to show that sets that are accessible to the Metropolis within Gibbs sampler are also marginally accessible to the particle Metropolis within Gibbs sampler. From this and Assumption \ref{ass: gibbs1}, we deduce that there is a finite $j > 0$ such that ${\cal L}_{PMwG} \left\{ \left( \boldsymbol{K}(j), \boldsymbol{\alpha}_{1:P}(j), \boldsymbol{\theta}(j) \right) \in \{\boldsymbol{k} \}\times D \times F \right\} > 0$. Now, because Step 2 consists of a Gibbs step using $\widetilde{\pi}_{N}(\cdot)$, we deduce that
\begin{eqnarray*}
{\cal L}_{PMwG} \left\{ \left( \boldsymbol{K}(j), \boldsymbol{\alpha}_{1:P}^{\boldsymbol{K}(j)},\boldsymbol{\alpha}_{1:P}^{-\boldsymbol{K}(j)}, \boldsymbol{\theta}(j) \right) \in \{\boldsymbol{k} \}\times D \times E \times F \right\} > 0
\end{eqnarray*}
and the irreducibility of the PMwG sampler follows. Aperiodicity can be proved by contradiction: if the PMwG sampler were periodic, then from Assumption \ref{ass: use of IS} so would be the MwG sampler, which contradicts Assumption \ref{ass: gibbs1}.
The result now follows from Theorem 1 of \cite{tierney1994}.
\end{proof}
\section{Empirical Results \label{sec:Empirical-Results}}
\begin{table}[H]
\caption{Estimation results for males: individual effects and dependence parameters. Posterior mean estimates with 95\% credible intervals (in brackets). \label{tab:Estimation-Results-for Male dependence parameters}}
\centering{}
\begin{tabular}{ccccc}
\hline
 & Gaussian & Clayton & Gumbel & Probit\tabularnewline
\hline
$\rho_{\alpha}$ & $\underset{\left(-0.03,0.10\right)}{0.03}$ & $\underset{\left(-0.04,0.09\right)}{0.02}$ & $\underset{\left(-0.03,0.10\right)}{0.03}$ & $\underset{\left(0.03,0.15\right)}{0.09}$\tabularnewline
Kendall tau & $\underset{\left(-0.02,0.07\right)}{0.02}$ & $\underset{\left(-0.03,0.06\right)}{0.01}$ & $\underset{\left(-0.02,0.06\right)}{0.02}$ & $\underset{\left(0.02,0.09\right)}{0.06}$\tabularnewline
\hline
$\theta_{dep}$ & $\underset{\left(-0.07,0.11\right)}{0.02}$ & $\underset{\left(0.11,0.30\right)}{0.19}$ & $\underset{\left(1.00,1.06\right)}{1.02}$ & $\underset{\left(-0.10,0.09\right)}{-0.00}$\tabularnewline
Kendall tau & $\underset{\left(-0.04,0.07\right)}{0.01}$ & $\underset{\left(0.05,0.13\right)}{0.09}$ & $\underset{\left(0.00,0.06\right)}{0.02}$ & $\underset{\left(-0.06,0.06\right)}{-0.00}$\tabularnewline
Lower/Upper Tail & NA & $\underset{\left(0.00,0.10\right)}{0.03}$ & $\underset{\left(0.00,0.08\right)}{0.03}$ & NA\tabularnewline
\hline
$\tau_{1}^{2}$ & $\underset{\left(3.63,5.04\right)}{4.31}$ & $\underset{\left(3.60,5.09\right)}{4.29}$ & $\underset{\left(3.71,5.15\right)}{4.37}$ & $\underset{\left(3.66,5.07\right)}{4.34}$\tabularnewline
$\tau_{2}^{2}$ & $\underset{\left(0.40,0.47\right)}{0.43}$ & $\underset{\left(0.40,0.47\right)}{0.43}$ & $\underset{\left(0.40,0.47\right)}{0.44}$ & $\underset{\left(2.45,3.09\right)}{2.76}$\tabularnewline
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Estimation results for females: individual effects and dependence parameters. Posterior mean estimates with 95\% credible intervals (in brackets).
\lambdaabel{tab:Estimation-Results-for feMale dependence parameters}} \centering{} \betam{e}gin{tabular}{ccccc} \hline & Gaussian & Clayton & Gumbel & Probit\thetaabularnewline \hline $\rhoho_{\alphalpha}$ & $\underset{\lambdaeft(-0.06,0.08\rhoight)}{0.01}$ & $\underset{\lambdaeft(-0.08,0.06\rhoight)}{-0.01}$ & $\underset{\lambdaeft(-0.06,0.07\rhoight)}{0.00}$ & $\underset{\lambdaeft(-0.04,0.08\rhoight)}{0.02}$\thetaabularnewline Kendall tau & $\underset{\lambdaeft(-0.04,0.05\rhoight)}{0.00}$ & $\underset{\lambdaeft(-0.05,0.04\rhoight)}{-0.01}$ & $\underset{\lambdaeft(-0.04,0.04\rhoight)}{0.00}$ & $\underset{\lambdaeft(-0.02,0.05\rhoight)}{0.01}$\thetaabularnewline \hline $\thetaheta_{dep}$ & $\underset{\lambdaeft(-0.07,0.11\rhoight)}{0.02}$ & $\underset{\lambdaeft(0.11,0.29\rhoight)}{0.19}$ & $\underset{\lambdaeft(1.00,1.06\rhoight)}{1.02}$ & $\underset{\lambdaeft(-0.08,0.08\rhoight)}{-0.00}$\thetaabularnewline Kendall tau & $\underset{\lambdaeft(-0.05,0.07\rhoight)}{0.01}$ & $\underset{\lambdaeft(0.05,0.13\rhoight)}{0.09}$ & $\underset{\lambdaeft(0.00,0.05\rhoight)}{0.02}$ & $\underset{\lambdaeft(-0.05,0.05\rhoight)}{-0.00}$\thetaabularnewline Lower/Upper tail & NA & $\underset{\lambdaeft(0.00,0.09\rhoight)}{0.03}$ & $\underset{\lambdaeft(0.00,0.07\rhoight)}{0.03}$ & NA\thetaabularnewline \hline $\thetaau_{1}^{2}$ & $\underset{\lambdaeft(3.35,4.62\rhoight)}{3.93}$ & $\underset{\lambdaeft(3.31,4.58\rhoight)}{3.90}$ & $\underset{\lambdaeft(3.37,4.69\rhoight)}{3.99}$ & $\underset{\lambdaeft(3.38,4.63\rhoight)}{3.96}$\thetaabularnewline $\thetaau_{2}^{2}$ & $\underset{\lambdaeft(0.34,0.40\rhoight)}{0.37}$ & $\underset{\lambdaeft(0.34,0.40\rhoight)}{0.37}$ & $\underset{\lambdaeft(0.34,0.40\rhoight)}{0.37}$ & $\underset{\lambdaeft(1.91,2.36\rhoight)}{2.13}$\thetaabularnewline \hline {\rhom e}nd{tabular} {\rhom e}nd{table} \betam{e}gin{table}[H] \caption{Estimation results for male Binge/Excessive Drinking $\lambdaeft(y_{1}\rhoight)$ (balanced panel). Posterior mean estimates with 95\% credible intervals (in brackets). \lambdaabel{tab:Estimation-Results-for male binge }} \centering{} \betam{e}gin{tabular}{ccccc} \hline & Gaussian & Probit & Clayton & Gumbel\thetaabularnewline \hline university/degree & $\underset{\lambdaeft(-0.97,-0.41\rhoight)}{-0.69}$ & $\underset{\lambdaeft(-0.97,-0.40\rhoight)}{-0.69}$ & $\underset{\lambdaeft(-0.96,-0.40\rhoight)}{-0.68}$ & $\underset{\lambdaeft(-0.97,-0.40\rhoight)}{-0.68}$\thetaabularnewline diploma/certificate & $\underset{\lambdaeft(-0.22,0.22\rhoight)}{-0.01}$ & $\underset{\lambdaeft(-0.23,0.22\rhoight)}{0.00}$ & $\underset{\lambdaeft(-0.21,0.23\rhoight)}{0.01}$ & $\underset{\lambdaeft(-0.22,0.23\rhoight)}{0.00}$\thetaabularnewline married & $\underset{\lambdaeft(-0.61,-0.21\rhoight)}{-0.41}$ & $\underset{\lambdaeft(-0.61,-0.21\rhoight)}{-0.41}$ & $\underset{\lambdaeft(-0.62,-0.22\rhoight)}{-0.41}$ & $\underset{\lambdaeft(-0.62,-0.22\rhoight)}{-0.42}$\thetaabularnewline income & $\underset{\lambdaeft(-0.11,0.02\rhoight)}{-0.05}$ & $\underset{\lambdaeft(-0.11,0.02\rhoight)}{-0.05}$ & $\underset{\lambdaeft(-0.11,0.01\rhoight)}{-0.05}$ & $\underset{\lambdaeft(-0.11,0.02\rhoight)}{-0.05}$\thetaabularnewline num. 
child & $\underset{\lambdaeft(-0.27,-0.05\rhoight)}{-0.16}$ & $\underset{\lambdaeft(-0.27,-0.05\rhoight)}{-0.16}$ & $\underset{\lambdaeft(-0.27,-0.05\rhoight)}{-0.16}$ & $\underset{\lambdaeft(-0.27,-0.05\rhoight)}{-0.16}$\thetaabularnewline gave birth & $\underset{\lambdaeft(-0.66,-0.05\rhoight)}{-0.35}$ & $\underset{\lambdaeft(-0.66,-0.06\rhoight)}{-0.35}$ & $\underset{\lambdaeft(-0.66,-0.04\rhoight)}{-0.34}$ & $\underset{\lambdaeft(-0.67,-0.05\rhoight)}{-0.36}$\thetaabularnewline death of a friend & $\underset{\lambdaeft(-0.12,0.26\rhoight)}{0.07}$ & $\underset{\lambdaeft(-0.12,0.25\rhoight)}{0.07}$ & $\underset{\lambdaeft(-0.12,0.25\rhoight)}{0.07}$ & $\underset{\lambdaeft(-0.12,0.26\rhoight)}{0.07}$\thetaabularnewline death of a spouse/child & $\underset{\lambdaeft(-0.62,0.98\rhoight)}{0.20}$ & $\underset{\lambdaeft(-0.63,1.00\rhoight)}{0.21}$ & $\underset{\lambdaeft(-0.74,1.00\rhoight)}{0.17}$ & $\underset{\lambdaeft(-0.74,1.01\rhoight)}{0.17}$\thetaabularnewline personal injury & $\underset{\lambdaeft(-0.08,0.32\rhoight)}{0.12}$ & $\underset{\lambdaeft(-0.08,0.32\rhoight)}{0.12}$ & $\underset{\lambdaeft(-0.06,0.34\rhoight)}{0.14}$ & $\underset{\lambdaeft(-0.08,0.33\rhoight)}{0.13}$\thetaabularnewline getting married & $\underset{\lambdaeft(-0.61,0.15\rhoight)}{-0.22}$ & $\underset{\lambdaeft(-0.60,0.14\rhoight)}{-0.22}$ & $\underset{\lambdaeft(-0.61,0.13\rhoight)}{-0.23}$ & $\underset{\lambdaeft(-0.61,0.14\rhoight)}{-0.23}$\thetaabularnewline changed residence & $\underset{\lambdaeft(-0.11,0.20\rhoight)}{0.04}$ & $\underset{\lambdaeft(-0.12,0.19\rhoight)}{0.04}$ & $\underset{\lambdaeft(-0.10,0.20\rhoight)}{0.05}$ & $\underset{\lambdaeft(-0.12,0.20\rhoight)}{0.04}$\thetaabularnewline victim of crime & $\underset{\lambdaeft(-0.31,0.24\rhoight)}{-0.04}$ & $\underset{\lambdaeft(-0.32,0.22\rhoight)}{-0.04}$ & $\underset{\lambdaeft(-0.30,0.24\rhoight)}{-0.03}$ & $\underset{\lambdaeft(-0.33,0.23\rhoight)}{-0.05}$\thetaabularnewline promoted at work & $\underset{\lambdaeft(-0.14,0.30\rhoight)}{0.08}$ & $\underset{\lambdaeft(-0.14,0.30\rhoight)}{0.08}$ & $\underset{\lambdaeft(-0.15,0.30\rhoight)}{0.08}$ & $\underset{\lambdaeft(-0.14,0.30\rhoight)}{0.08}$\thetaabularnewline back with spouse & $\underset{\lambdaeft(-0.84,0.40\rhoight)}{-0.21}$ & $\underset{\lambdaeft(-0.84,0.41\rhoight)}{-0.21}$ & $\underset{\lambdaeft(-0.84,0.42\rhoight)}{-0.20}$ & $\underset{\lambdaeft(-0.85,0.39\rhoight)}{-0.21}$\thetaabularnewline separated from spouse & $\underset{\lambdaeft(-0.26,0.33\rhoight)}{0.03}$ & $\underset{\lambdaeft(-0.27,0.31\rhoight)}{0.02}$ & $\underset{\lambdaeft(-0.27,0.32\rhoight)}{0.03}$ & $\underset{\lambdaeft(-0.27,0.31\rhoight)}{0.02}$\thetaabularnewline improvement in financial & $\underset{\lambdaeft(-0.31,0.30\rhoight)}{-0.00}$ & $\underset{\lambdaeft(-0.31,0.32\rhoight)}{-0.00}$ & $\underset{\lambdaeft(-0.32,0.31\rhoight)}{-0.00}$ & $\underset{\lambdaeft(-0.32,0.32\rhoight)}{-0.00}$\thetaabularnewline worsening in financial & $\underset{\lambdaeft(-0.43,0.21\rhoight)}{-0.11}$ & $\underset{\lambdaeft(-0.45,0.20\rhoight)}{-0.12}$ & $\underset{\lambdaeft(-0.43,0.21\rhoight)}{-0.10}$ & $\underset{\lambdaeft(-0.44,0.20\rhoight)}{-0.12}$\thetaabularnewline \hline $\min_{1:23}IACT\lambdaeft(\betam{e}ta_{1i}\rhoight)$ & 1.00 & 1.00 & 1.00 & 1.13\thetaabularnewline $\max_{1:23}IACT\lambdaeft(\betam{e}ta_{1i}\rhoight)$ & 7.58 & 8.19 & 8.67 & 8.64\thetaabularnewline ${\rhom mean}_{1:23}\lambdaeft(\betam{e}ta_{1i}\rhoight)$ & 2.55 & 2.55 & 2.59 & 2.60\thetaabularnewline \hline {\rhom 
e}nd{tabular} {\rhom e}nd{table} \betam{e}gin{table}[H] \caption{Estimation results for male Mental Health score $\lambdaeft(y_{2}\rhoight)$ (balanced panel). Posterior mean estimates with 95\% credible intervals (in brackets). \lambdaabel{tab:Estimation-Results-for male mental health}} \centering{} \betam{e}gin{tabular}{ccccc} \hline & Gaussian & Probit & Clayton & Gumbel\thetaabularnewline \hline university/degree & $\underset{\lambdaeft(-0.14,0.03\rhoight)}{-0.05}$ & $\underset{\lambdaeft(-0.29,0.07\rhoight)}{-0.11}$ & $\underset{\lambdaeft(-0.13,0.03\rhoight)}{-0.05}$ & $\underset{\lambdaeft(-0.14,0.03\rhoight)}{-0.05}$\thetaabularnewline diploma/certificate & $\underset{\lambdaeft(-0.10,0.05\rhoight)}{-0.02}$ & $\underset{\lambdaeft(-0.22,0.10\rhoight)}{-0.06}$ & $\underset{\lambdaeft(-0.09,0.05\rhoight)}{-0.02}$ & $\underset{\lambdaeft(-0.10,0.05\rhoight)}{-0.03}$\thetaabularnewline married & $\underset{\lambdaeft(-0.12,0.01\rhoight)}{-0.05}$ & $\underset{\lambdaeft(-0.28,0.01\rhoight)}{-0.13}$ & $\underset{\lambdaeft(-0.12,0.01\rhoight)}{-0.05}$ & $\underset{\lambdaeft(-0.12,0.01\rhoight)}{-0.05}$\thetaabularnewline income & $\underset{\lambdaeft(-0.02,0.03\rhoight)}{0.01}$ & $\underset{\lambdaeft(-0.07,0.02\rhoight)}{-0.02}$ & $\underset{\lambdaeft(-0.02,0.03\rhoight)}{0.01}$ & $\underset{\lambdaeft(-0.02,0.04\rhoight)}{0.01}$\thetaabularnewline num. child & $\underset{\lambdaeft(-0.03,0.06\rhoight)}{0.02}$ & $\underset{\lambdaeft(-0.05,0.10\rhoight)}{0.03}$ & $\underset{\lambdaeft(-0.03,0.06\rhoight)}{0.02}$ & $\underset{\lambdaeft(-0.03,0.06\rhoight)}{0.02}$\thetaabularnewline gave birth & $\underset{\lambdaeft(-0.05,0.19\rhoight)}{0.07}$ & $\underset{\lambdaeft(-0.11,0.30\rhoight)}{0.09}$ & $\underset{\lambdaeft(-0.05,0.19\rhoight)}{0.07}$ & $\underset{\lambdaeft(-0.05,0.20\rhoight)}{0.07}$\thetaabularnewline death of a friend & $\underset{\lambdaeft(-0.05,0.10\rhoight)}{0.02}$ & $\underset{\lambdaeft(-0.07,0.19\rhoight)}{0.06}$ & $\underset{\lambdaeft(-0.05,0.10\rhoight)}{0.02}$ & $\underset{\lambdaeft(-0.05,0.09\rhoight)}{0.02}$\thetaabularnewline death of a spouse/child & $\underset{\lambdaeft(-0.14,0.49\rhoight)}{0.18}$ & $\underset{\lambdaeft(0.02,1.11\rhoight)}{0.56}$ & $\underset{\lambdaeft(-0.13,0.50\rhoight)}{0.18}$ & $\underset{\lambdaeft(-0.14,0.49\rhoight)}{0.18}$\thetaabularnewline personal injury & $\underset{\lambdaeft(0.30,0.46\rhoight)}{0.38}$ & $\underset{\lambdaeft(0.48,0.75\rhoight)}{0.62}$ & $\underset{\lambdaeft(0.30,0.46\rhoight)}{0.38}$ & $\underset{\lambdaeft(0.30,0.46\rhoight)}{0.38}$\thetaabularnewline getting married & $\underset{\lambdaeft(-0.18,0.15\rhoight)}{-0.02}$ & $\underset{\lambdaeft(-0.32,0.24\rhoight)}{-0.04}$ & $\underset{\lambdaeft(-0.18,0.15\rhoight)}{-0.01}$ & $\underset{\lambdaeft(-0.18,0.15\rhoight)}{-0.01}$\thetaabularnewline changed residence & $\underset{\lambdaeft(-0.06,0.07\rhoight)}{0.00}$ & $\underset{\lambdaeft(-0.18,0.05\rhoight)}{-0.07}$ & $\underset{\lambdaeft(-0.06,0.07\rhoight)}{0.00}$ & $\underset{\lambdaeft(-0.07,0.07\rhoight)}{0.00}$\thetaabularnewline victim of crime & $\underset{\lambdaeft(0.04,0.27\rhoight)}{0.16}$ & $\underset{\lambdaeft(-0.01,0.40\rhoight)}{0.19}$ & $\underset{\lambdaeft(0.04,0.27\rhoight)}{0.16}$ & $\underset{\lambdaeft(0.04,0.27\rhoight)}{0.15}$\thetaabularnewline promoted at work & $\underset{\lambdaeft(-0.12,0.06\rhoight)}{-0.03}$ & $\underset{\lambdaeft(-0.12,0.19\rhoight)}{0.04}$ & $\underset{\lambdaeft(-0.12,0.06\rhoight)}{-0.03}$ & 
$\underset{\lambdaeft(-0.12,0.06\rhoight)}{-0.03}$\thetaabularnewline back with spouse & $\underset{\lambdaeft(-0.12,0.40\rhoight)}{0.14}$ & $\underset{\lambdaeft(-0.11,0.82\rhoight)}{0.36}$ & $\underset{\lambdaeft(-0.12,0.41\rhoight)}{0.14}$ & $\underset{\lambdaeft(-0.13,0.40\rhoight)}{0.13}$\thetaabularnewline separated from spouse & $\underset{\lambdaeft(0.10,0.37\rhoight)}{0.24}$ & $\underset{\lambdaeft(0.13,0.60\rhoight)}{0.36}$ & $\underset{\lambdaeft(0.10,0.37\rhoight)}{0.23}$ & $\underset{\lambdaeft(0.10,0.37\rhoight)}{0.24}$\thetaabularnewline improvement in financial & $\underset{\lambdaeft(-0.13,0.11\rhoight)}{-0.01}$ & $\underset{\lambdaeft(-0.27,0.16\rhoight)}{-0.05}$ & $\underset{\lambdaeft(-0.14,0.11\rhoight)}{-0.01}$ & $\underset{\lambdaeft(-0.13,0.12\rhoight)}{-0.01}$\thetaabularnewline worsening in financial & $\underset{\lambdaeft(0.24,0.49\rhoight)}{0.37}$ & $\underset{\lambdaeft(0.24,0.69\rhoight)}{0.47}$ & $\underset{\lambdaeft(0.24,0.49\rhoight)}{0.37}$ & $\underset{\lambdaeft(0.24,0.49\rhoight)}{0.37}$\thetaabularnewline \hline $\min_{1:23}IACT\lambdaeft(\betam{e}ta_{2i}\rhoight)$ & 1.00 & 1.00 & 1.00 & 1.01\thetaabularnewline $\max_{1:23}IACT\lambdaeft(\betam{e}ta_{2i}\rhoight)$ & 3.31 & 9.92 & 3.60 & 2.73\thetaabularnewline ${\rhom mean}_{1:23}\lambdaeft(\betam{e}ta_{2i}\rhoight)$ & 1.41 & 2.43 & 1.30 & 1.51\thetaabularnewline \hline {\rhom e}nd{tabular} {\rhom e}nd{table} \betam{e}gin{table}[H] \caption{Estimation results for female Binge/Excessive Drinking $\lambdaeft(y_{1}\rhoight)$. Posterior mean estimates with 95\% credible intervals (in brackets). \lambdaabel{tab:Estimation-Results-for female binge drinking}} \centering{} \betam{e}gin{tabular}{ccccc} \hline & Gaussian & Probit & Clayton & Gumbel\thetaabularnewline \hline university/degree & $\underset{\lambdaeft(-0.61,-0.16\rhoight)}{-0.38}$ & $\underset{\lambdaeft(-0.61,-0.16\rhoight)}{-0.39}$ & $\underset{\lambdaeft(-0.62,-0.18\rhoight)}{-0.40}$ & $\underset{\lambdaeft(-0.63,-0.17\rhoight)}{-0.40}$\thetaabularnewline diploma/certificate & $\underset{\lambdaeft(-0.30,0.13\rhoight)}{-0.09}$ & $\underset{\lambdaeft(-0.30,0.13\rhoight)}{-0.09}$ & $\underset{\lambdaeft(-0.31,0.12\rhoight)}{-0.09}$ & $\underset{\lambdaeft(-0.31,0.12\rhoight)}{-0.09}$\thetaabularnewline married & $\underset{\lambdaeft(-0.68,-0.35\rhoight)}{-0.51}$ & $\underset{\lambdaeft(-0.69,-0.35\rhoight)}{-0.52}$ & $\underset{\lambdaeft(-0.70,-0.36\rhoight)}{-0.53}$ & $\underset{\lambdaeft(-0.70,-0.36\rhoight)}{-0.53}$\thetaabularnewline income & $\underset{\lambdaeft(0.05,0.19\rhoight)}{0.12}$ & $\underset{\lambdaeft(0.05,0.20\rhoight)}{0.12}$ & $\underset{\lambdaeft(0.05,0.20\rhoight)}{0.13}$ & $\underset{\lambdaeft(0.05,0.20\rhoight)}{0.13}$\thetaabularnewline num. 
child & $\underset{\lambdaeft(-0.22,0.00\rhoight)}{-0.11}$ & $\underset{\lambdaeft(-0.22,0.00\rhoight)}{-0.11}$ & $\underset{\lambdaeft(-0.22,0.01\rhoight)}{-0.10}$ & $\underset{\lambdaeft(-0.22,0.00\rhoight)}{-0.11}$\thetaabularnewline gave birth & $\underset{\lambdaeft(-1.16,-0.49\rhoight)}{-0.81}$ & $\underset{\lambdaeft(-1.16,-0.48\rhoight)}{-0.81}$ & $\underset{\lambdaeft(-1.15,-0.47\rhoight)}{-0.81}$ & $\underset{\lambdaeft(-1.18,-0.48\rhoight)}{-0.83}$\thetaabularnewline death of a friend & $\underset{\lambdaeft(-0.00,0.37\rhoight)}{0.19}$ & $\underset{\lambdaeft(-0.00,0.36\rhoight)}{0.18}$ & $\underset{\lambdaeft(-0.01,0.36\rhoight)}{0.17}$ & $\underset{\lambdaeft(-0.01,0.37\rhoight)}{0.18}$\thetaabularnewline death of a spouse/child & $\underset{\lambdaeft(-0.90,0.56\rhoight)}{-0.16}$ & $\underset{\lambdaeft(-0.92,0.56\rhoight)}{-0.16}$ & $\underset{\lambdaeft(-0.95,0.56\rhoight)}{-0.17}$ & $\underset{\lambdaeft(-0.93,0.59\rhoight)}{-0.16}$\thetaabularnewline personal injury & $\underset{\lambdaeft(-0.43,0.03\rhoight)}{-0.20}$ & $\underset{\lambdaeft(-0.43,0.03\rhoight)}{-0.20}$ & $\underset{\lambdaeft(-0.42,0.04\rhoight)}{-0.18}$ & $\underset{\lambdaeft(-0.44,0.03\rhoight)}{-0.20}$\thetaabularnewline getting married & $\underset{\lambdaeft(-0.17,0.55\rhoight)}{0.19}$ & $\underset{\lambdaeft(-0.17,0.55\rhoight)}{0.20}$ & $\underset{\lambdaeft(-0.16,0.55\rhoight)}{0.20}$ & $\underset{\lambdaeft(-0.17,0.56\rhoight)}{0.20}$\thetaabularnewline changed residence & $\underset{\lambdaeft(0.04,0.32\rhoight)}{0.18}$ & $\underset{\lambdaeft(0.04,0.31\rhoight)}{0.17}$ & $\underset{\lambdaeft(0.04,0.32\rhoight)}{0.18}$ & $\underset{\lambdaeft(0.04,0.32\rhoight)}{0.18}$\thetaabularnewline victim of crime & $\underset{\lambdaeft(-0.43,0.15\rhoight)}{-0.14}$ & $\underset{\lambdaeft(-0.44,0.15\rhoight)}{-0.14}$ & $\underset{\lambdaeft(-0.43,0.15\rhoight)}{-0.13}$ & $\underset{\lambdaeft(-0.44,0.14\rhoight)}{-0.15}$\thetaabularnewline promoted at work & $\underset{\lambdaeft(0.12,0.53\rhoight)}{0.33}$ & $\underset{\lambdaeft(0.12,0.53\rhoight)}{0.33}$ & $\underset{\lambdaeft(0.11,0.53\rhoight)}{0.32}$ & $\underset{\lambdaeft(0.12,0.53\rhoight)}{0.33}$\thetaabularnewline back with spouse & $\underset{\lambdaeft(-0.91,0.29\rhoight)}{-0.30}$ & $\underset{\lambdaeft(-0.88,0.29\rhoight)}{-0.31}$ & $\underset{\lambdaeft(-0.91,0.30\rhoight)}{-0.30}$ & $\underset{\lambdaeft(-0.93,0.27\rhoight)}{-0.32}$\thetaabularnewline separated from spouse & $\underset{\lambdaeft(0.11,0.65\rhoight)}{0.38}$ & $\underset{\lambdaeft(0.11,0.65\rhoight)}{0.38}$ & $\underset{\lambdaeft(0.11,0.65\rhoight)}{0.38}$ & $\underset{\lambdaeft(0.10,0.65\rhoight)}{0.38}$\thetaabularnewline improvement in financial & $\underset{\lambdaeft(-0.15,0.44\rhoight)}{0.14}$ & $\underset{\lambdaeft(-0.17,0.44\rhoight)}{0.14}$ & $\underset{\lambdaeft(-0.16,0.45\rhoight)}{0.14}$ & $\underset{\lambdaeft(-0.15,0.45\rhoight)}{0.15}$\thetaabularnewline worsening in financial & $\underset{\lambdaeft(-0.10,0.56\rhoight)}{0.24}$ & $\underset{\lambdaeft(-0.10,0.56\rhoight)}{0.24}$ & $\underset{\lambdaeft(-0.06,0.61\rhoight)}{0.27}$ & $\underset{\lambdaeft(-0.08,0.57\rhoight)}{0.25}$\thetaabularnewline \hline $\min_{1:23}IACT\lambdaeft(\betam{e}ta_{1i}\rhoight)$ & 1.25 & 1.01 & 1.00 & 1.00\thetaabularnewline $\max_{1:23}IACT\lambdaeft(\betam{e}ta_{1i}\rhoight)$ & 6.55 & 7.56 & 6.78 & 7.58\thetaabularnewline ${\rhom mean}_{1:23}\lambdaeft(\betam{e}ta_{1i}\rhoight)$ & 2.35 & 2.41 & 2.41 & 2.27\thetaabularnewline \hline {\rhom e}nd{tabular} {\rhom 
e}nd{table} \betam{e}gin{table}[H] \caption{Estimation results for female Mental Health score $\lambdaeft(y_{2}\rhoight)$ (balanced panel) Posterior mean estimates with 95\% credible intervals (in brackets). \lambdaabel{tab:Estimation-Results-for female mental health}} \centering{} \betam{e}gin{tabular}{ccccc} \hline & Gaussian & Probit & Clayton & Gumbel\thetaabularnewline \hline university/degree & $\underset{\lambdaeft(-0.18,-0.04\rhoight)}{-0.11}$ & $\underset{\lambdaeft(-0.39,-0.10\rhoight)}{-0.24}$ & $\underset{\lambdaeft(-0.18,-0.04\rhoight)}{-0.11}$ & $\underset{\lambdaeft(-0.18,-0.04\rhoight)}{-0.11}$\thetaabularnewline diploma/certificate & $\underset{\lambdaeft(-0.10,0.03\rhoight)}{-0.04}$ & $\underset{\lambdaeft(-0.21,0.07\rhoight)}{-0.07}$ & $\underset{\lambdaeft(-0.10,0.03\rhoight)}{-0.04}$ & $\underset{\lambdaeft(-0.10,0.03\rhoight)}{-0.04}$\thetaabularnewline married & $\underset{\lambdaeft(-0.21,-0.09\rhoight)}{-0.15}$ & $\underset{\lambdaeft(-0.36,-0.14\rhoight)}{-0.25}$ & $\underset{\lambdaeft(-0.21,-0.09\rhoight)}{-0.15}$ & $\underset{\lambdaeft(-0.20,-0.09\rhoight)}{-0.15}$\thetaabularnewline income & $\underset{\lambdaeft(-0.02,0.03\rhoight)}{0.01}$ & $\underset{\lambdaeft(-0.04,0.04\rhoight)}{0.00}$ & $\underset{\lambdaeft(-0.02,0.03\rhoight)}{0.01}$ & $\underset{\lambdaeft(-0.02,0.03\rhoight)}{0.01}$\thetaabularnewline num. child & $\underset{\lambdaeft(-0.04,0.05\rhoight)}{0.01}$ & $\underset{\lambdaeft(-0.05,0.09\rhoight)}{0.02}$ & $\underset{\lambdaeft(-0.03,0.06\rhoight)}{0.01}$ & $\underset{\lambdaeft(-0.03,0.06\rhoight)}{0.01}$\thetaabularnewline gave birth & $\underset{\lambdaeft(0.02,0.26\rhoight)}{0.14}$ & $\underset{\lambdaeft(0.09,0.47\rhoight)}{0.28}$ & $\underset{\lambdaeft(0.03,0.26\rhoight)}{0.14}$ & $\underset{\lambdaeft(0.02,0.25\rhoight)}{0.14}$\thetaabularnewline death of a friend & $\underset{\lambdaeft(-0.04,0.10\rhoight)}{0.03}$ & $\underset{\lambdaeft(-0.09,0.14\rhoight)}{0.02}$ & $\underset{\lambdaeft(-0.04,0.10\rhoight)}{0.03}$ & $\underset{\lambdaeft(-0.04,0.10\rhoight)}{0.03}$\thetaabularnewline death of a spouse/child & $\underset{\lambdaeft(0.01,0.52\rhoight)}{0.27}$ & $\underset{\lambdaeft(0.08,0.97\rhoight)}{0.52}$ & $\underset{\lambdaeft(0.01,0.52\rhoight)}{0.26}$ & $\underset{\lambdaeft(0.01,0.52\rhoight)}{0.27}$\thetaabularnewline personal injury & $\underset{\lambdaeft(0.36,0.51\rhoight)}{0.44}$ & $\underset{\lambdaeft(0.58,0.85\rhoight)}{0.72}$ & $\underset{\lambdaeft(0.36,0.51\rhoight)}{0.44}$ & $\underset{\lambdaeft(0.36,0.52\rhoight)}{0.44}$\thetaabularnewline getting married & $\underset{\lambdaeft(-0.19,0.11\rhoight)}{-0.04}$ & $\underset{\lambdaeft(-0.41,0.09\rhoight)}{-0.16}$ & $\underset{\lambdaeft(-0.19,0.12\rhoight)}{-0.04}$ & $\underset{\lambdaeft(-0.19,0.11\rhoight)}{-0.04}$\thetaabularnewline changed residence & $\underset{\lambdaeft(-0.04,0.09\rhoight)}{0.03}$ & $\underset{\lambdaeft(-0.11,0.09\rhoight)}{-0.01}$ & $\underset{\lambdaeft(-0.03,0.09\rhoight)}{0.03}$ & $\underset{\lambdaeft(-0.04,0.08\rhoight)}{0.02}$\thetaabularnewline victim of crime & $\underset{\lambdaeft(-0.04,0.18\rhoight)}{0.07}$ & $\underset{\lambdaeft(-0.08,0.30\rhoight)}{0.11}$ & $\underset{\lambdaeft(-0.04,0.18\rhoight)}{0.07}$ & $\underset{\lambdaeft(-0.05,0.18\rhoight)}{0.07}$\thetaabularnewline promoted at work & $\underset{\lambdaeft(-0.08,0.10\rhoight)}{0.01}$ & $\underset{\lambdaeft(0.02,0.32\rhoight)}{0.17}$ & $\underset{\lambdaeft(-0.08,0.11\rhoight)}{0.01}$ & $\underset{\lambdaeft(-0.08,0.10\rhoight)}{0.01}$\thetaabularnewline back 
with spouse & $\underset{\left(-0.01,0.51\right)}{0.25}$ & $\underset{\left(-0.10,0.81\right)}{0.35}$ & $\underset{\left(-0.01,0.52\right)}{0.25}$ & $\underset{\left(-0.01,0.51\right)}{0.26}$\tabularnewline
separated from spouse & $\underset{\left(0.04,0.30\right)}{0.17}$ & $\underset{\left(0.11,0.52\right)}{0.32}$ & $\underset{\left(0.04,0.30\right)}{0.17}$ & $\underset{\left(0.04,0.30\right)}{0.17}$\tabularnewline
improvement in financial & $\underset{\left(-0.14,0.09\right)}{-0.02}$ & $\underset{\left(-0.11,0.26\right)}{0.07}$ & $\underset{\left(-0.14,0.09\right)}{-0.02}$ & $\underset{\left(-0.14,0.08\right)}{-0.02}$\tabularnewline
worsening in financial & $\underset{\left(0.35,0.60\right)}{0.48}$ & $\underset{\left(0.40,0.86\right)}{0.63}$ & $\underset{\left(0.35,0.61\right)}{0.48}$ & $\underset{\left(0.34,0.60\right)}{0.47}$\tabularnewline
\hline
$\min_{1:23}IACT\left(\beta_{2i}\right)$ & 1.23 & 1.00 & 1.00 & 1.00\tabularnewline
$\max_{1:23}IACT\left(\beta_{2i}\right)$ & 2.48 & 5.59 & 3.17 & 2.94\tabularnewline
${\rm mean}_{1:23}\left(\beta_{2i}\right)$ & 1.58 & 2.09 & 1.41 & 1.18\tabularnewline
\hline
\end{tabular}
\end{table}
\end{appendix}
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thesection}{S\arabic{section}}
\renewcommand{\theproposition}{S\arabic{proposition}}
\renewcommand{\theassumption}{S\arabic{assumption}}
\renewcommand{\thelemma}{S\arabic{lemma}}
\renewcommand{\thecorollary}{S\arabic{corollary}}
\renewcommand{\thealgorithm}{S\arabic{algorithm}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\thetable}{S\arabic{table}}
\renewcommand{\thepage}{S\arabic{page}}
\setcounter{page}{1} \setcounter{section}{0} \setcounter{equation}{0} \setcounter{algorithm}{0} \setcounter{table}{0}
\title{\Huge \sf Online Supplement to \lq Efficient Bayesian estimation for flexible panel models for multivariate outcomes: Impact of life events on mental health and excessive alcohol consumption\rq}
\renewcommand\authands{ and }
\title{\sf Efficient Bayesian estimation for flexible panel models for multivariate outcomes: Impact of life events on mental health and excessive alcohol consumption\thanks{The research of Gunawan and Kohn was partially supported by the ARC Center of Excellence grant CE140100049. The research of all the authors was also partially supported by ARC Discovery Grant DP150104630.}}
\section{Gaussian, Clayton and Gumbel Copula Models\label{sec:Archimedian-and-Elliptical copulas}}
The Gaussian copula is
\[
C^{\rm Gauss}\left(u_{1},u_{2};\rho \right)=\Phi_{2}\left(\Phi^{-1}\left(u_{1}\right),\Phi^{-1}\left(u_{2}\right)\right),
\]
where $\Phi_{2}$ is the distribution function of the standard bivariate normal distribution with correlation $\rho$, $\Phi$ is the distribution function of the standard univariate normal distribution, and $\rho$ is the dependence parameter. The Gaussian copula can capture both positive and negative dependence and covers the full range $\left(-1,1\right)$ of pairwise correlations. Its dependence structure is symmetric, making it unsuitable for data that exhibit strong lower or upper tail dependence. The baseline model in \Cref{ss: biv probit with random effects} is a special case of a Gaussian copula in which all the univariate marginal distributions are normal. The bivariate Clayton copula is
\[
C^{\rm Cl}\left(u_{1},u_{2};\theta\right)=\left(u_{1}^{-\theta}+u_{2}^{-\theta}-1\right)^{-\frac{1}{\theta}}.
\]
It can only capture positive dependence, although one can reflect a Clayton copula to model the dependence between $u_{1}$ and $1-u_{2}$ instead. The dependence parameter $\theta$ is defined on the interval $\left(0,\infty\right)$. The Clayton copula is suitable for data which exhibit strong lower tail dependence and weak upper tail dependence. The bivariate Gumbel copula is
\[
C^{\rm Gu}\left(u_{1},u_{2};\theta\right)=\exp\left\{ -\left(\left(-\log u_{1}\right)^{\theta}+\left(-\log u_{2}\right)^{\theta}\right)^{1/\theta}\right\}.
\]
For the Gumbel copula, the dependence parameter $\theta$ is defined on the interval $\left[1,\infty\right)$, where $\theta=1$ corresponds to independence. The Gumbel copula only captures positive dependence. It is suitable for data which exhibit strong upper tail dependence and weak lower tail dependence. \Cref{fig: copula plots} plots 10,000 draws from each of the three copula models. The conditional copula distribution functions for the bivariate copulas used in our article can be computed in closed form and are given by
\[
C_{1|2}^{\rm Gauss}\left(u_{1}|u_{2};\rho \right)=\Phi\left(\frac{\Phi^{-1}\left(u_{1}\right)-\rho\Phi^{-1}\left(u_{2}\right)}{\sqrt{1-\rho^{2}}}\right),
\]
\[
C_{1|2}^{\rm Cl}\left(u_{1}|u_{2};\theta\right)=u_{2}^{-\theta -1}\left(u_{1}^{-\theta}+u_{2}^{-\theta}-1\right)^{-1-1/\theta},
\]
\begin{align*}
C_{1|2}^{\rm Gu}\left(u_{1}|u_{2};\theta\right) & = C^{\rm Gu}\left(u_{1},u_{2};\theta\right) \frac{1}{u_{2}} \left(-\log u_{2}\right)^{\theta-1}
\left\{ \left(-\log u_{1}\right)^{\theta}+\left(-\log u_{2}\right)^{\theta}\right\} ^{1/\theta-1}.
\end{align*}
\begin{figure}[!h]
\centering
\caption{Left panel: 10,000 draws from a Gaussian copula with $\rho=0.8$; center panel: 10,000 draws from a Clayton copula with $\theta=6$; right panel: 10,000 draws from a Gumbel copula with $\theta=6$. \label{fig: copula plots}}
\begin{tabular}{ccc}
\includegraphics[width=5cm,height=5cm]{gaussian_copula_plot} & \includegraphics[width=5cm,height=5cm]{clayton_copula_plot} & \includegraphics[width=5cm,height=5cm]{gumbel_copula_plot}
\end{tabular}
\end{figure}
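For illustration, draws such as those in the Clayton panel of \Cref{fig: copula plots} can be generated by inverting $C_{1|2}^{\rm Cl}$ at a uniform level. The following minimal sketch (not part of our estimation code; it assumes only \texttt{numpy} and \texttt{scipy} are available) does this and compares the empirical Kendall's tau of the draws with the closed-form value $\theta/(\theta+2)$ given in the next section.
\begin{verbatim}
import numpy as np
from scipy.stats import kendalltau

def sample_clayton(theta, n, rng=None):
    # Draw n pairs (u1, u2) from a bivariate Clayton copula by inverting
    # the conditional distribution C_{1|2}(u1 | u2; theta) at a uniform level p.
    rng = np.random.default_rng() if rng is None else rng
    u2 = rng.uniform(size=n)
    p = rng.uniform(size=n)
    u1 = ((p * u2 ** (theta + 1.0)) ** (-theta / (theta + 1.0))
          + 1.0 - u2 ** (-theta)) ** (-1.0 / theta)
    return u1, u2

u1, u2 = sample_clayton(theta=6.0, n=10_000)   # theta = 6, as in the Clayton panel
tau, _ = kendalltau(u1, u2)                    # should be close to 6 / (6 + 2) = 0.75
print(tau)
\end{verbatim}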
\section{Dependence Measures\label{sec:Measures-of-Dependence}}
We use Kendall's tau and upper and lower measures of tail dependence to compare the dependence structures implied by different copula models, because the Pearson (linear) correlation coefficient is not a good measure of general dependence between two random variables as it only detects linear dependence. Kendall's tau \citep[][p. 54]{Joe:2015} is a popular measure of the degree of concordance between two random variables. Let $\left(U_{1},V_{1}\right)$ and $\left(U_{2},V_{2}\right)$ be two independent draws from the joint distribution of $U$ and $V$. Kendall's tau is defined as
\[
\kappa_{\tau}:=\Pr\left[\left(U_{1}-U_{2}\right)\left(V_{1}-V_{2}\right)>0\right]-\Pr\left[\left(U_{1}-U_{2}\right)\left(V_{1}-V_{2}\right)<0\right].
\]
The value of $\kappa_{\tau}$ ranges between $-1$ and $1$ and is zero if the two random variables are independent. The Gumbel and Clayton copulas only capture positive dependence, so that $0 \leq \kappa_{\tau}\leq 1$; for the Gaussian copula, $-1\leq \kappa_{\tau}\leq 1$. For the copula models we consider, $\kappa_{\tau}$ can be computed in closed form as a function of the copula parameter:
\begin{align*}
\kappa_{\tau}^{\rm Gauss} =\frac{2}{\pi}\arcsin\left(\rho \right), \qquad \kappa_{\tau}^{\rm Cl}=\frac{\theta}{\theta+2}, \qquad \kappa_{\tau}^{\rm Gu}=1-\theta^{-1},
\end{align*}
where $\theta$ denotes the parameter of the respective copula. In many cases, the concordance between extreme (tail) values of random variables is of interest, i.e., the clustering of extreme events in the upper or lower tails. For example, suppose we are interested in the relationship between poor mental health and excessive/binge alcohol consumption, or between good mental health and no alcohol consumption. This requires a dependence measure for the upper and lower tails of the bivariate distribution. In this case, measures of asymmetric dependence are often based on conditional probabilities. The lower and upper tail dependence measures are defined as \citep[][p. 62]{Joe:2015}
\[
\lambda^{U}:=\underset{\alpha\uparrow1}{\lim}\Pr\left(U_{1}>\alpha|U_{2}>\alpha\right)\quad {\rm and} \quad \lambda^{L}:=\underset{\alpha\downarrow0}{\lim}\Pr\left(U_{1}<\alpha|U_{2}<\alpha\right).
\]
If $\lambda^{U}=0$, the copula is said to have no upper tail dependence, and if $\lambda^{L}=0$ it is said to have no lower tail dependence. The Gaussian copula has no lower or upper tail dependence. For the Clayton copula, $\lambda^{L}=2^{-1/\theta} > 0$ and $\lambda^{U} = 0$, so the Clayton copula has no upper tail dependence. For the Gumbel copula, $\lambda^{U}=2-2^{1/\theta}$ and $\lambda^{L} = 0$; hence the Gumbel copula has no lower tail dependence and has upper tail dependence if and only if $\theta \neq 1$.
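To illustrate these formulas with the estimates for males reported in \cref{tab:Estimation-Results-for Male dependence parameters} (an illustrative calculation using the rounded posterior means, so the figures agree with the table only up to rounding), the Clayton estimate $\hat{\theta}_{dep}\approx 0.19$ and the Gumbel estimate $\hat{\theta}_{dep}\approx 1.02$ imply
\[
\kappa_{\tau}^{\rm Cl}\approx\frac{0.19}{2.19}\approx 0.09,\qquad
\lambda^{L}_{\rm Cl}\approx 2^{-1/0.19}\approx 0.03,\qquad
\kappa_{\tau}^{\rm Gu}\approx 1-\frac{1}{1.02}\approx 0.02,\qquad
\lambda^{U}_{\rm Gu}\approx 2-2^{1/1.02}\approx 0.03 .
\]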
\section{Simulation Mixed Discrete Linear Gaussian Regression\label{sec:Simulation-Mixed-Discrete}}
This section provides an additional simulation study comparing our particle Metropolis-within-Gibbs approach with the MCMC-MH approach using the mixed discrete-linear Gaussian regression model given in Section \ref{ss: mixed biv model with random effects}. The design is similar to the first experiment in Section \ref{sub:TNV}, with $n=1000$ and $T=4$, $x_{1,it},\ldots,x_{10,it}\sim U\left(0,1\right)$, and the true parameters set as follows: $\beta_{1}=\left(-1.5,0.1,-0.2,0.2,-0.2,0.1,-0.2,0.1,-0.1,-0.2,0.2\right)^{'}$, $\beta_{2}=\left(-0.5,0.1,0.2,-0.2,0.2,0.12,0.2,-0.2,0.12,-0.12,0.12\right)^{'}$, $\tau_{1}^{2}=1$, $\tau_{2}^{2}=2.5$, and $\rho_{\epsilon}=\rho_{\alpha}=0.5$. We only compare the MCMC-MH and PG methods for this simulation since they can be applied more generally to panel data models with random effects. \Cref{tab:Comparison-of-Different mixed discrete linear Gaussian1,tab:Comparison-of-Different mixed discrete linear Gaussian2,tab:Comparison-of-Different mixed discrete linear Gaussian2-1} summarise the estimation results and show that PG still performs much better than the MCMC-MH methods.
\begin{table}[H]
\caption{Comparison of Inefficiency Factors (IACT) for the parameters under different sampling schemes (PG, data augmentation, MCMC-MH) for the mixed discrete-linear Gaussian regression simulation with random effects, $P=1000$ and $T=4$.}
\label{tab:Comparison-of-Different mixed discrete linear Gaussian1}
\centering{}
\begin{tabular}{cccccc}
\hline
Param. & PG & MH1 & MH10 & MH20 & MH50\tabularnewline
\hline
$\beta_{11}$ & $1.58$ & $3.64$ & $1.10$ & $1.34$ & $1.00$\tabularnewline
$\beta_{21}$ & $1.71$ & $4.15$ & $1.11$ & $1.55$ & $1.00$\tabularnewline
$\beta_{31}$ & $1.65$ & $4.49$ & $1.00$ & $1.40$ & $1.00$\tabularnewline
$\beta_{41}$ & $1.82$ & $3.34$ & $1.10$ & $1.53$ & $1.00$\tabularnewline
$\beta_{51}$ & $1.55$ & $3.13$ & $1.17$ & $1.48$ & $1.00$\tabularnewline
$\beta_{61}$ & $1.55$ & $3.67$ & $1.06$ & $1.40$ & $1.00$\tabularnewline
$\beta_{71}$ & $1.63$ & $4.48$ & $1.00$ & $1.39$ & $1.01$\tabularnewline
$\beta_{81}$ & $1.63$ & $3.86$ & $1.15$ & $1.46$ & $1.00$\tabularnewline
$\beta_{91}$ & $1.56$ & $4.30$ & $1.03$ & $1.32$ & $1.00$\tabularnewline
$\beta_{101}$ & $1.55$ & $3.69$ & $1.07$ & $1.36$ & $1.00$\tabularnewline
$\tau_{1}^{2}$ & $30.54$ & $398.18$ & $59.13$ & $46.50$ & $28.69$\tabularnewline
$\rho_{\alpha}$ & $6.78$ & $83.99$ & $14.06$ & $10.29$ & $6.71$\tabularnewline
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Comparison of Inefficiency Factors (IACT) for the parameters under different sampling schemes (PG, data augmentation, MCMC-MH) for the mixed discrete-linear Gaussian regression simulation with random effects, $P=1000$ and $T=4$. \label{tab:Comparison-of-Different mixed discrete linear Gaussian2}}
\centering{}
\begin{tabular}{cccccc}
\hline
Param.
& PG & MH1 & MH10 & MH20 & MH50\thetaabularnewline \hline $\betam{e}ta_{12}$ & $1.51$ & $3.50$ & $1.15$ & $1.21$ & $1.00$\thetaabularnewline $\betam{e}ta_{22}$ & $1.59$ & $3.52$ & $1.07$ & $1.30$ & $1.00$\thetaabularnewline $\betam{e}ta_{32}$ & $1.53$ & $3.35$ & $1.01$ & $1.40$ & $1.00$\thetaabularnewline $\betam{e}ta_{42}$ & $1.59$ & $4.71$ & $1.00$ & $1.45$ & $1.00$\thetaabularnewline $\betam{e}ta_{52}$ & $1.53$ & $4.66$ & $1.00$ & $1.34$ & $1.00$\thetaabularnewline $\betam{e}ta_{62}$ & $1.61$ & $3.99$ & $1.01$ & $1.32$ & $1.00$\thetaabularnewline $\betam{e}ta_{72}$ & $1.52$ & $4.06$ & $1.00$ & $1.27$ & $1.00$\thetaabularnewline $\betam{e}ta_{82}$ & $1.50$ & $3.29$ & $1.00$ & $1.40$ & $1.00$\thetaabularnewline $\betam{e}ta_{92}$ & $1.50$ & $4.15$ & $1.01$ & $1.35$ & $1.00$\thetaabularnewline $\betam{e}ta_{102}$ & $1.58$ & $4.82$ & $1.01$ & $1.27$ & $1.00$\thetaabularnewline $\thetaau_{2}^{2}$ & $1.42$ & $12.62$ & $1.63$ & $1.60$ & $1.49$\thetaabularnewline $\rhoho$ & $7.24$ & $13.35$ & $9.85$ & $7.72$ & $7.56$\thetaabularnewline \hline {\rhom e}nd{tabular} {\rhom e}nd{table} \betam{e}gin{table}[H] \caption{TNV comparison of different sampling schemes (PG, data augmentation, MCMC-MH) of Mixed Discrete-Linear Gaussian regression Simulation with random effects $P=1000$ and $T=4$. \lambdaabel{tab:Comparison-of-Different mixed discrete linear Gaussian2-1}} \centering{} \betam{e}gin{tabular}{cccccc} \hline & PG & MH1 & MH10 & MH20 & MH50\thetaabularnewline \hline Time & $0.21$ & $0.18$ & $0.67$ & $1.23$ & $2.89$\thetaabularnewline $IACT_{mean}$ & $3.24$ & $24.46$ & $4.40$ & $3.90$ & $2.65$\thetaabularnewline TNV & $0.68$ & $4.40$ & $2.95$ & $4.80$ & $7.66$\thetaabularnewline rel. TNV & $1$ & $6.47$ & $4.34$ & $7.06$ & $11.26$\thetaabularnewline \hline {\rhom e}nd{tabular} {\rhom e}nd{table} \sigmaection{Additional Results on Simulation Bivariate Probit Regression}\lambdaabel{tab:Comparison-of-Different bivariate probit} This section provides additional simulation results on bivariate probit regression models in Section \rhoef{sub:TNV} \betam{e}gin{table}[H] \caption{Comparison of Inefficiency Factors (IACT) for the Parameters with Different Sampling Schemes (PG, data augmentation, MCMC-MH) of bivariate Probit regression Simulation with random effects $P=1000$ and $T=4$ \lambdaabel{tab:Comparison-of-Different biv probit1}} \centering{} \betam{e}gin{tabular}{ccccccc} \hline Param. & PG & Data Aug. 
& MH1 & MH10 & MH20 & MH50\thetaabularnewline \hline $\betam{e}ta_{11}$ & $1.00$ & $9.73$ & $2.13$ & $1.00$ & $1.00$ & $1.17$\thetaabularnewline $\betam{e}ta_{21}$ & $1.00$ & $11.59$ & $2.15$ & $1.00$ & $1.00$ & $1.19$\thetaabularnewline $\betam{e}ta_{31}$ & $1.00$ & $10.46$ & $1.00$ & $1.00$ & $1.00$ & $1.12$\thetaabularnewline $\betam{e}ta_{41}$ & $1.00$ & $11.01$ & $2.42$ & $1.01$ & $1.00$ & $1.19$\thetaabularnewline $\betam{e}ta_{51}$ & $1.00$ & $11.49$ & $2.82$ & $1.00$ & $1.00$ & $1.17$\thetaabularnewline $\betam{e}ta_{61}$ & $1.03$ & $9.91$ & $1.01$ & $1.00$ & $1.00$ & $1.16$\thetaabularnewline $\betam{e}ta_{71}$ & $1.00$ & $10.39$ & $2.49$ & $1.00$ & $1.00$ & $1.15$\thetaabularnewline $\betam{e}ta_{81}$ & $1.00$ & $11.40$ & $2.78$ & $1.00$ & $1.00$ & $1.20$\thetaabularnewline $\betam{e}ta_{91}$ & $1.01$ & $14.43$ & $2.24$ & $1.00$ & $1.00$ & $1.25$\thetaabularnewline $\betam{e}ta_{101}$ & $1.00$ & $10.44$ & $3.89$ & $1.00$ & $1.01$ & $1.20$\thetaabularnewline $\thetaau_{1}^{2}$ & $18.20$ & $79.95$ & $178.23$ & $19.18$ & $20.17$ & $17.17$\thetaabularnewline $\rhoho_{\alphalpha}$ & $13.91$ & $62.36$ & $92.64$ & $20.25$ & $15.44$ & $13.41$\thetaabularnewline \hline {\rhom e}nd{tabular} {\rhom e}nd{table} \betam{e}gin{table}[H] \caption{Comparison of Inefficiency Factors (IACT) for the Parameters with Different Sampling Schemes (PG, data augmentation, MCMC-MH) of bivariate Probit regression Simulation with random effects $P=1000$ and $T=4$ \lambdaabel{tab:Comparison-of-Different biv probit2}} \centering{} \betam{e}gin{tabular}{ccccccc} \hline Param. & PG & Data Aug. & MH1 & MH10 & MH20 & MH50\thetaabularnewline \hline $\betam{e}ta_{12}$ & $1.00$ & $16.49$ & $1.02$ & $1.00$ & $1.00$ & $1.12$\thetaabularnewline $\betam{e}ta_{22}$ & $1.00$ & $17.03$ & $1.00$ & $1.09$ & $1.00$ & $1.07$\thetaabularnewline $\betam{e}ta_{32}$ & $1.00$ & $19.18$ & $4.32$ & $1.00$ & $1.04$ & $1.17$\thetaabularnewline $\betam{e}ta_{42}$ & $1.00$ & $23.78$ & $1.02$ & $1.00$ & $1.00$ & $1.15$\thetaabularnewline $\betam{e}ta_{52}$ & $1.00$ & $16.50$ & $5.52$ & $1.01$ & $1.03$ & $1.19$\thetaabularnewline $\betam{e}ta_{62}$ & $1.00$ & $21.58$ & $1.02$ & $1.00$ & $1.00$ & $1.15$\thetaabularnewline $\betam{e}ta_{72}$ & $1.00$ & $15.53$ & $1.00$ & $1.00$ & $1.00$ & $1.06$\thetaabularnewline $\betam{e}ta_{82}$ & $1.00$ & $17.08$ & 1.03 & $1.05$ & $1.00$ & $1.13$\thetaabularnewline $\betam{e}ta_{92}$ & $1.00$ & $15.37$ & $1.00$ & $1.00$ & $1.00$ & $1.11$\thetaabularnewline $\betam{e}ta_{102}$ & $1.01$ & $17.17$ & $2.79$ & $1.00$ & $1.00$ & $1.19$\thetaabularnewline $\thetaau_{2}^{2}$ & $45.11$ & $207.69$ & $709.23$ & $102.65$ & $78.98$ & $47.92$\thetaabularnewline $\rhoho$ & $9.69$ & $420.90$ & $14.29$ & $10.38$ & $9.09$ & $8.64$\thetaabularnewline \hline {\rhom e}nd{tabular} {\rhom e}nd{table} \sigmaection{Some Further Details of the Sampling Scheme for the Bivariate Probit Model with Random Effects\lambdaabel{sec:Sampling-Scheme-for bivariate probit}} Steps~3 and 4 of \Cref{alg: pmwg for biv probit} use a Hamiltonian Monte Carlo proposal to sample $\betaoldsymbol \betam{e}ta_{1}$ and $\betaoldsymbol \betam{e}ta_{2}$ conditional on the other parameters and random effects. 
The HMC step requires the gradient of $\log p\left(\boldsymbol{y}|\boldsymbol{\theta},\boldsymbol{\alpha}\right)$ with respect to $\boldsymbol{\beta}_{1}$ and $\boldsymbol{\beta}_{2}$, where
\[
\frac{\partial\log p\left(\boldsymbol{y}|\boldsymbol{\theta},\boldsymbol{\alpha}\right)}{\partial\boldsymbol{\beta}_{1}}=\sum_{i=1}^{P}\sum_{t=1}^{T}\left(\frac{q_{1,it}g_{1,it}}{\Phi_{2}\left(w_{1,it},w_{2,it},q_{1,it}q_{2,it}\rho\right)}\right),
\]
\[
w_{1,it}=q_{1,it}\left(\boldsymbol{x}_{1,it}^{'}\boldsymbol{\beta}_{11}+\overline{\boldsymbol{x}}_{1,i}^{'}\boldsymbol{\beta}_{12}+\alpha_{1,i}\right), \quad w_{2,it}=q_{2,it}\left(\boldsymbol{x}_{2,it}^{'}\boldsymbol{\beta}_{21}+\overline{\boldsymbol{x}}_{2,i}^{'}\boldsymbol{\beta}_{22}+\alpha_{2,i}\right),
\]
and
\begin{eqnarray*}
g_{1,it} & = & \phi\left(w_{1,it}\right)\times\Phi\left(\frac{w_{2,it}-q_{1,it}q_{2,it}\rho w_{1,it}}{\sqrt{1-\left(q_{1,it}q_{2,it}\rho\right)^{2}}}\right).
\end{eqnarray*}
The gradient of $\log p\left(\boldsymbol{y}|\boldsymbol{\theta},\boldsymbol{\alpha}\right)$ with respect to $\boldsymbol{\beta}_{2}$ is obtained similarly.
\section{The Sampling Scheme for the Mixed Marginal Gaussian Regression with Random Effects \label{sec:Sampling-Scheme-for mixed Gaussian}}
The model is described in \Cref{ss: mixed biv model with random effects}. The PG sampling scheme is similar to the bivariate probit case, and the following derivatives are needed in the HMC step. From this section onwards, we denote by $\boldsymbol{x}_{j,it}$ the vector of all covariates for the $j$th outcome and write $\eta_{j,it}=\boldsymbol{x}_{j,it}^{'}\boldsymbol{\beta}_{j}+\alpha_{j,i}$ for $j=1,2$:
\[
\frac{\partial\log p\left(\boldsymbol{y}|\boldsymbol{\theta},\boldsymbol{\alpha}\right)}{\partial\boldsymbol{\beta}_{1}}=\sum_{i=1}^{P}\sum_{t=1}^{T}\left\{ \frac{\boldsymbol{x}_{1,it}y_{1,it}}{\sqrt{1-\rho^{2}}}\frac{\phi\left(\frac{\mu_{1|2}}{\sigma_{1|2}}\right)}{\Phi\left(\frac{\mu_{1|2}}{\sigma_{1|2}}\right)}-\frac{\boldsymbol{x}_{1,it}\left(1-y_{1,it}\right)}{\sqrt{1-\rho^{2}}}\frac{\phi\left(\frac{\mu_{1|2}}{\sigma_{1|2}}\right)}{1-\Phi\left(\frac{\mu_{1|2}}{\sigma_{1|2}}\right)}\right\}
\]
and
\begin{eqnarray*}
\frac{\partial\log p\left(\boldsymbol{y}|\boldsymbol{\theta},\boldsymbol{\alpha}\right)}{\partial\boldsymbol{\beta}_{2}} & = & \sum_{i=1}^{P}\sum_{t=1}^{T}-\frac{\rho}{\sqrt{1-\rho^{2}}}\boldsymbol{x}_{2,it}y_{1,it}\frac{\phi\left(\frac{\mu_{1|2}}{\sigma_{1|2}}\right)}{\Phi\left(\frac{\mu_{1|2}}{\sigma_{1|2}}\right)}\\
 & & +\boldsymbol{x}_{2,it}y_{1,it}\left(y_{2,it}-\eta_{2,it}\right)\\
 & & +\frac{\left(1-y_{1,it}\right)}{1-\Phi\left(\frac{\mu_{1|2}}{\sigma_{1|2}}\right)}\phi\left(\frac{\mu_{1|2}}{\sigma_{1|2}}\right)\left(\frac{\rho}{\sqrt{1-\rho^{2}}}\boldsymbol{x}_{2,it}\right)\\
 & & +\boldsymbol{x}_{2,it}\left(1-y_{1,it}\right)\left(y_{2,it}-\eta_{2,it}\right).
\end{eqnarray*}
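Analytic gradients such as these are easy to get wrong, so it is worth validating them numerically. The sketch below is illustrative only: it is not part of the PMwG sampler, and \texttt{loglik} and \texttt{analytic\_grad} are hypothetical callables standing in for an implementation of the expressions above. It compares an analytic gradient with central finite differences.
\begin{verbatim}
import numpy as np

def finite_diff_grad(loglik, beta, eps=1e-6):
    # Central finite-difference approximation to d loglik / d beta,
    # used to validate analytic HMC gradients such as those above.
    grad = np.zeros_like(beta, dtype=float)
    for k in range(beta.size):
        step = np.zeros_like(beta, dtype=float)
        step[k] = eps
        grad[k] = (loglik(beta + step) - loglik(beta - step)) / (2.0 * eps)
    return grad

# Example check at a test point beta0 (hypothetical names):
# assert np.allclose(finite_diff_grad(loglik, beta0), analytic_grad(beta0), atol=1e-4)
\end{verbatim}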
\section{Gradients for the HMC Sampling Scheme for the Mixed Marginal Clayton Copula Regression with Random Effects \label{sec:Sampling-Scheme-for mixed clayton}}
\begin{eqnarray*}
\frac{\partial\log p\left(\boldsymbol{y}|\boldsymbol{\theta},\boldsymbol{\alpha}\right)}{\partial\boldsymbol{\beta}_{1}} & = & \sum_{i=1}^{P}\sum_{t=1}^{T}\left\{ \frac{\boldsymbol{x}_{1,it}y_{1,it}}{1-C_{1|2}^{\rm Cl}\left(u_{1,it}|u_{2,it}\right)}\left(u_{2,it}^{-\theta-1}\left(-1-\frac{1}{\theta}\right) \left(\Phi\left(-\eta_{1,it}\right)^{-\theta}+u_{2,it}^{-\theta}-1\right)^{-2-\frac{1}{\theta}}\right)\right.\\
 & & \phi\left(-\eta_{1,it}\right)\left(-\theta\right)\Phi\left(-\eta_{1,it}\right)^{-\theta-1}\frac{\boldsymbol{x}_{1,it}\left(1-y_{1,it}\right)}{C_{1|2}^{\rm Cl}\left(u_{1,it}|u_{2,it}\right)}\phi\left(-\eta_{1,it}\right)\left(\theta\right)\Phi\left(-\eta_{1,it}\right)^{-\theta-1}\\
 & & \left.\left(u_{2,it}^{-\theta-1}\left(-1-\frac{1}{\theta}\right)\left(\Phi\left(-\eta_{1,it}\right)^{-\theta}+u_{2,it}^{-\theta}-1\right)^{-2-\frac{1}{\theta}}\right)\right\};
\end{eqnarray*}
\[
\frac{\partial\log p\left(\boldsymbol{y}|\boldsymbol{\theta},\boldsymbol{\alpha}\right)}{\partial\boldsymbol{\beta}_{2}}=\sum_{i=1}^{P}\sum_{t=1}^{T}\left\{ I_{it}+II_{it}+III_{it}+IV_{it}\right\},
\]
where
\begin{align*}
I_{it}& \coloneqq\frac{\boldsymbol{x}_{2,it}y_{1,it}}{1-C_{1|2}^{\rm Cl}\left(u_{1,it}|u_{2,it}\right)}\left(\nabla u\, v+u\, \nabla v \right), \quad \zeta_{it}=y_{2,it}-\eta_{2,it}, \\
u & :=-\Phi\left(\zeta_{it}\right)^{-\theta-1}, \quad \nabla u \coloneqq\frac{du}{d\boldsymbol{\beta}_{2}}=\left(-1-\theta\right)\Phi\left(\zeta_{it}\right)^{-\theta-2}\phi\left(\zeta_{it}\right), \quad v=\left(u_{1,it}^{-\theta}+\Phi\left(\zeta_{it}\right)^{-\theta}-1\right)^{-1-\frac{1}{\theta}}, \\
\nabla v &\coloneqq\frac{dv}{d\boldsymbol{\beta}_{2}}=\left(-1-\frac{1}{\theta}\right)\left(u_{1,it}^{-\theta}+\Phi\left(\zeta_{it}\right)^{-\theta} -1\right)^{-2-\frac{1}{\theta}}\theta\Phi\left(\zeta_{it}\right)^{-\theta-1}\phi\left(\zeta_{it}\right).
\end{align*}
The terms $II_{it}$, $III_{it}$, and $IV_{it}$ are given by
\begin{eqnarray*}
II_{it} & = & \boldsymbol{x}_{2,it}y_{1,it}\left(y_{2,it}-\eta_{2,it}\right), \quad III_{it} = \boldsymbol{x}_{2,it}\left(1-y_{1,it}\right)\left(y_{2,it}-\eta_{2,it}\right),\\
IV_{it} & = & \frac{\boldsymbol{x}_{2,it}\left(1-y_{1,it}\right)}{C_{1|2}^{\rm Cl}\left(u_{1,it}|u_{2,it}\right)}\left(\nabla u_{IV}\, v_{IV}+u_{IV}\, \nabla v_{IV}\right),
\end{eqnarray*}
where
\begin{align*}
u_{IV} & =\Phi\left(\zeta_{it}\right)^{-\theta-1}, \quad \nabla u_{IV}=-\left(-\theta-1\right)\Phi\left(\zeta_{it}\right)^{-\theta-2}\phi\left(\zeta_{it}\right), \quad v_{IV}=\left(u_{1,it}^{-\theta}+\Phi\left(\zeta_{it}\right)^{-\theta}-1\right)^{-1-\frac{1}{\theta}},\\
\nabla v_{IV}&
=\lambdaeft(-1-\frac{1}{\thetaheta}\rhoight) \lambdaeft(u_{1,it}^{-\thetaheta}+{\rhom P}hi\lambdaeft(\zeta_{it}\rhoight)^{-\thetaheta}-1\rhoight)^{-2-\frac{1}{\thetaheta}}\thetaheta{\rhom P}hi\lambdaeft(\zeta_{it}\rhoight)^{-\thetaheta-1} \phi\lambdaeft(\zeta_{it}\rhoight). {\rhom e}nd{align*} The PG sampling scheme is similar to that for the bivariate probit case except that in step 2 we work with the unconstrained parameter $\thetaheta_{\rhom un} = \lambdaog \thetaheta$, where $\thetaheta> 0 $ for the Clayton copula. \sigmaection{Gradients for the HMC Sampling Scheme for the Mixed Marginal Gumbel Copula Regression with Random Effects \lambdaabel{sec:Sampling-Scheme-for mixed gumbel}} \betam{e}gin{eqnarray*} \frac{\partial\lambdaog p\lambdaeft(\betaoldsymbol{y}|\betaoldsymbol{\thetaheta},\betaoldsymbol{\alphalpha}\rhoight)}{\partial\betaoldsymbol{\betam{e}ta}_{1}} & = & \sigmaum_{i=1}^{P}\sigmaum_{t=1}^{T}(I_{it}+II_{it}), {\rhom e}nd{eqnarray*} \betam{e}gin{align*} I_{it}& =\frac{\betaoldsymbol{x}_{1,it}y_{1,it}}{1-C_{1|2}^{\rhom Gu}\lambdaeft(u_{1,it}|u_{2,it}\rhoight)}\lambdaeft(-\frac{1}{u_{2,it}}\rhoight)\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta-1}\lambdaeft(\nabla{u}v+u\nabla{v}\rhoight),\\ II_{it} & =\frac{\betaoldsymbol{x}_{1,it}\lambdaeft(1-y_{1,it}\rhoight)}{C_{1|2}\lambdaeft(u_{1,it}|u_{2,it}\rhoight)}\lambdaeft(\frac{1}{u_{2,it}}\rhoight)\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta-1}\lambdaeft(\nabla{u}v+u\nabla{v}\rhoight), \intertext{where} u& = {\rhom e}xp\lambdaeft(-\lambdaeft(\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta}+\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta}\rhoight)^{1/\thetaheta}\rhoight)\\ \nabla{u} & = {\rhom e}xp\lambdaeft(-\lambdaeft(\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta}+\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta}\rhoight)^{1/\thetaheta}\rhoight)\lambdaeft(-\frac{1}{\thetaheta}\rhoight)\lambdaeft(\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta}+\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta}\rhoight)^{\frac{1}{\thetaheta}-1}\\ & \thetaheta\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta-1}\lambdaeft(\frac{1}{u_{1,it}}\rhoight)\phi\lambdaeft(-{\rhom e}ta_{1}\rhoight), v=\lambdaeft(\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta}+\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta}\rhoight)^{\frac{1}{\thetaheta}-1}\\ \nabla{v} & = \lambdaeft(\frac{1}{\thetaheta}-1\rhoight)\lambdaeft(\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta}+\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta}\rhoight)^{\frac{1}{\thetaheta}-2} \thetaheta\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta-1}\lambdaeft(\frac{1}{u_{1,it}}\rhoight)\phi\lambdaeft(-{\rhom e}ta_{1,it}\rhoight). 
{\rhom e}nd{align*} \betam{e}gin{align*} \frac{\partial\lambdaog p\lambdaeft(\betaoldsymbol{y}|\betaoldsymbol{\thetaheta},\betaoldsymbol{\alphalpha}\rhoight)}{\partial\betaoldsymbol{\betam{e}ta}_{2}} & =\sigmaum_{i=1}^{P}\sigmaum_{t=1}^{T}\lambdaeft\{ I_{it}+II_{it}+III_{it}+IV_{it}\rhoight\} ,\\ I_{it} &=\frac{\betaoldsymbol{x}_{2,it}y_{1,it}}{1-C_{1|2}^{\rhom Gu}\lambdaeft(u_{1}|u_{2}\rhoight)}\lambdaeft(-\lambdaeft(\nabla{u}vwz+u\nabla{v}wz+uv\nabla{w}z+uvw\nabla{z}\rhoight)\rhoight),\\ II_{it} &=\betaoldsymbol{x}_{2,it}y_{1,it}\lambdaeft(y_{2,it}-\lambdaeft({\rhom e}ta_{2,it}\rhoight)\rhoight),\\ III_{it} &=\betaoldsymbol{x}_{2,it}\lambdaeft(1-y_{1,it}\rhoight)\lambdaeft(y_{2,it}-\lambdaeft({\rhom e}ta_{2,it}\rhoight)\rhoight),\\ IV_{it} &=\frac{\betaoldsymbol{x}_{2,it}\lambdaeft(1-y_{1,it}\rhoight)}{C_{1|2}^{Gu}\lambdaeft(u_{1,it}|u_{2,it}\rhoight)} \lambdaeft(\nabla{u}vwz+u\nabla{v}wz+uv\nabla{w}z+uvw\nabla{z}\rhoight), \intertext{where} \zeta_{it}& =\lambdaeft(y_{2,it}- \lambdaeft({\rhom e}ta_{2,it}\rhoight)\rhoight), u =\frac{1}{u_{2,it}}, \nabla{u}:=\frac{\partial u}{\partial \betaoldsymbol \betam{e}ta_2} = u_{2,it}^{-2}\phi\lambdaeft(\zeta_{it}\rhoight), v=\lambdaeft(-\lambdaog{\rhom P}hi\lambdaeft(\zeta_{it}\rhoight)\rhoight)^{\thetaheta-1}\\ \nabla{v}& := \frac{\partial u}{\partial \betaoldsymbol \betam{e}ta_2}= \lambdaeft(\thetaheta-1\rhoight)\lambdaeft(-\lambdaog{\rhom P}hi\lambdaeft(\zeta_{it}\rhoight)\rhoight)^{\thetaheta-2}\frac{1} {{\rhom P}hi\lambdaeft(\zeta_{it}\rhoight)}\phi\lambdaeft(\zeta_{it}\rhoight)\\ w & ={\rhom e}xp\lambdaeft(-\lambdaeft(\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta}+\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta}\rhoight)^{1/\thetaheta}\rhoight), \\ \nabla{w}&:= \frac{\partial u}{\partial \betaoldsymbol \betam{e}ta_2}= {\rhom e}xp\lambdaeft(-\lambdaeft(\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta}+\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta}\rhoight)^{1/\thetaheta}\rhoight)\lambdaeft(-\frac{1}{\thetaheta}\rhoight)\lambdaeft(\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta}+\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta}\rhoight)^{\frac{1}{\thetaheta}-1}\\ & \thetaheta\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta-1}\lambdaeft(\frac{1}{u_{2,it}}\rhoight)\phi\lambdaeft(\zeta_{it}\rhoight)\\ z &=\lambdaeft(\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta}+\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta}\rhoight)^{\frac{1}{\thetaheta}-1} \\ \nabla{z} & = \lambdaeft(\frac{1}{\thetaheta}-1\rhoight)\lambdaeft(\lambdaeft(-\lambdaog u_{1,it}\rhoight)^{\thetaheta}+\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta}\rhoight)^{\frac{1}{\thetaheta}-2} \thetaheta\lambdaeft(-\lambdaog u_{2,it}\rhoight)^{\thetaheta-1}\lambdaeft(\frac{1}{u_{2,it}}\rhoight)\phi\lambdaeft(\zeta_{it}\rhoight) {\rhom e}nd{align*} The PG sampling scheme is similar to that for the bivariate probit model case except that in step 2 we reparametrize the Gumbel dependence parameter to $\thetaheta_{\rhom un}:= \lambdaog (\thetaheta -1 )$ because $\thetaheta > 1$. 
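For completeness, we note the standard change-of-variables adjustment that these reparametrizations entail; this is a generic observation and not a restatement of any further detail of the schemes. If $\theta_{\rm un}=\log\theta$ (Clayton, $\theta>0$), then $\theta=\exp(\theta_{\rm un})$ and the log target evaluated in $\theta_{\rm un}$ picks up the Jacobian term
\[
\log p\left(\theta_{\rm un}\mid\cdot\right)=\log p\left(\theta\mid\cdot\right)+\log\theta,
\]
while for $\theta_{\rm un}=\log(\theta-1)$ (Gumbel, $\theta>1$) one has $\theta=1+\exp(\theta_{\rm un})$ and the corresponding term is $\log(\theta-1)$, since $|\mathrm{d}\theta/\mathrm{d}\theta_{\rm un}|$ equals $\theta$ and $\theta-1$, respectively.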
\section{Data Augmentation Bivariate Probit Models with Random Effects\label{sec:Data-Augmentation-Bivariate probit}}

We will work with the augmented posterior distribution
\begin{eqnarray*}
p\left(\boldsymbol{y}^{*},\left\{ \boldsymbol{\alpha}_{i}\right\} ,\boldsymbol{\theta}|\boldsymbol{y}\right) & = & p\left(\boldsymbol{y}|\boldsymbol{y}^{*},\left\{ \boldsymbol{\alpha}_{i}\right\} ,\boldsymbol{\theta}\right)p\left(\boldsymbol{y}^{*}|\left\{ \boldsymbol{\alpha}_{i}\right\} ,\boldsymbol{\theta}\right)p\left(\left\{ \boldsymbol{\alpha}_{i}\right\} |\boldsymbol{\theta}\right)p\left(\boldsymbol{\theta}\right),
\end{eqnarray*}
where
\begin{eqnarray*}
p\left(\boldsymbol{y}|\boldsymbol{y}^{*},\left\{ \boldsymbol{\alpha}_{i}\right\} ,\boldsymbol{\theta}\right) & = & \prod_{i=1}^{P}\prod_{t=1}^{T}\left[I\left(y_{1,it}^{*}\leq0\right)I\left(y_{1,it}=0\right)+I\left(y_{1,it}^{*}>0\right)I\left(y_{1,it}=1\right)\right]\\
 & & \times\left[I\left(y_{2,it}^{*}\leq0\right)I\left(y_{2,it}=0\right)+I\left(y_{2,it}^{*}>0\right)I\left(y_{2,it}=1\right)\right],
\end{eqnarray*}
\[
p\left(\boldsymbol{y}^{*}|\left\{ \boldsymbol{\alpha}_{i}\right\} ,\boldsymbol{\theta}\right)\sim N\left(\boldsymbol{\mu}_{it},\boldsymbol{\Sigma}_{\epsilon}\right), \quad p\left(\left\{ \boldsymbol{\alpha}_{i}\right\} |\boldsymbol{\theta}\right)\sim N\left(\boldsymbol{0},\boldsymbol{\Sigma}_{\alpha}\right),
\]
where $\boldsymbol{\mu}_{it}=\left(\eta_{1,it},\eta_{2,it}\right)^{T}$, $\boldsymbol{\Sigma}_{\epsilon}$ and $\boldsymbol{\Sigma}_{\alpha}$ are defined in Equations \eqref{eq: cov matrices} in Section \ref{ss: biv probit with random effects}, and $p\left(\boldsymbol{\theta}\right)$ is the prior distribution for $\boldsymbol{\theta}$. The following complete conditional posteriors for the Gibbs sampler can then be derived.
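As an implementation note, the truncated normal draws listed below are directly available in standard software; the following minimal sketch (assuming \texttt{scipy}; the helper name \texttt{draw\_y1\_star} and the variables \texttt{mu\_12} and \texttt{sigma\_12}, standing for $\mu_{1|2}$ and $\sigma_{1|2}$ as defined below, are ours) illustrates one such draw.
\begin{verbatim}
# Sketch: one draw of y*_{1,it} from its truncated normal conditional,
# using scipy.stats.truncnorm (helper and variable names are illustrative;
# mu_12 and sigma_12 are the conditional moments defined below).
import numpy as np
from scipy.stats import truncnorm

def draw_y1_star(y1_it, mu_12, sigma_12):
    # truncnorm expects the truncation bounds standardised as (bound - loc) / scale
    if y1_it == 0:
        a, b = -np.inf, (0.0 - mu_12) / sigma_12   # support (-inf, 0)
    else:
        a, b = (0.0 - mu_12) / sigma_12, np.inf    # support (0, inf)
    return truncnorm.rvs(a, b, loc=mu_12, scale=sigma_12)

print(draw_y1_star(1, mu_12=0.3, sigma_12=0.8))    # a positive draw
\end{verbatim}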
\begin{equation}
y_{1,it}^{*}|\left\{ \boldsymbol{\alpha}_{i}\right\} ,\boldsymbol{\theta},\boldsymbol{y},y_{2,it}^{*}\sim\begin{cases}
TN_{\left(-\infty,0\right)}\left(\mu_{1|2},\sigma_{1|2}\right), & y_{1,it}=0\\
TN_{\left(0,\infty\right)}\left(\mu_{1|2},\sigma_{1|2}\right), & y_{1,it}=1
\end{cases}
\end{equation}
and
\begin{equation}
y_{2,it}^{*}|\left\{ \boldsymbol{\alpha}_{i}\right\} ,\boldsymbol{\theta},\boldsymbol{y},y_{1,it}^{*}\sim\begin{cases}
TN_{\left(-\infty,0\right)}\left(\mu_{2|1},\sigma_{2|1}\right), & y_{2,it}=0\\
TN_{\left(0,\infty\right)}\left(\mu_{2|1},\sigma_{2|1}\right), & y_{2,it}=1
\end{cases},
\end{equation}
where $\mu_{1|2}=\left(\boldsymbol{x}_{1,it}^{'}\boldsymbol{\beta}_{1}+\alpha_{1,i}\right)+\rho\left(y_{2,it}^{*}-\left(\boldsymbol{x}_{2,it}^{'}\boldsymbol{\beta}_{2}+\alpha_{2,i}\right)\right)$ and $\sigma_{1|2}=\sqrt{1-\rho^{2}}$, and $\mu_{2|1}$ and $\sigma_{2|1}$ are defined similarly;
\begin{equation}
\boldsymbol{\alpha}_{i}|\boldsymbol{y}^{*},\boldsymbol{y},\Sigma_{\alpha},\Sigma,\boldsymbol{\beta}\sim N\left(\boldsymbol{D}_{\alpha_{i}}\boldsymbol{d}_{\alpha_{i}},\boldsymbol{D}_{\alpha_{i}}\right),
\end{equation}
where $\boldsymbol{D}_{\alpha_{i}}=\left(\boldsymbol{T}\boldsymbol{\Sigma}^{-1}+\boldsymbol{\Sigma}_{\alpha}^{-1}\right)^{-1}$ and $\boldsymbol{d}_{\alpha_{i}}=\boldsymbol{\Sigma}^{-1}\sum_{t}\left(\left[\begin{array}{c}
y_{1,it}^{*}\\
y_{2,it}^{*}
\end{array}\right]-\left[\begin{array}{cc}
\boldsymbol{x}_{1,it}^{'} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{x}_{2,it}^{'}
\end{array}\right]\left[\begin{array}{c}
\boldsymbol{\beta}_{1}\\
\boldsymbol{\beta}_{2}
\end{array}\right]\right)$ for $i=1,\ldots,P$;
\begin{equation}
\boldsymbol{\beta}|\boldsymbol{y}^{*},\boldsymbol{y},\boldsymbol{\Sigma}_{\alpha},\boldsymbol{\Sigma},\left\{ \alpha_{i}\right\} \sim N\left(\boldsymbol{D}_{\beta}\boldsymbol{d}_{\beta},\boldsymbol{D}_{\beta}\right),
\end{equation}
where
\[
\boldsymbol{D}_{\boldsymbol{\beta}}=\left(\sum_{i=1}^{P}\sum_{t=1}^{T}\left[\begin{array}{cc}
\boldsymbol{x}_{1,it} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{x}_{2,it}
\end{array}\right]^{'}\boldsymbol{\Sigma}^{-1}\left[\begin{array}{cc}
\boldsymbol{x}_{1,it} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{x}_{2,it}
\end{array}\right]+\boldsymbol{\Sigma}_{0}^{-1}\right)^{-1}
\]
and
\[
\boldsymbol{d}_{\beta}=\sum_{i=1}^{P}\sum_{t=1}^{T}\left[\begin{array}{cc}
\boldsymbol{x}_{1,it} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{x}_{2,it}
\end{array}\right]^{'}\Sigma^{-1}\left(\left[\begin{array}{c}
y_{1,it}^{*}\\
y_{2,it}^{*}
\end{array}\right]-\left[\begin{array}{c}
\alpha_{1,i}\\
\alpha_{2,i}
\end{array}\right]\right);
\]
\begin{equation}
\boldsymbol{\Sigma}_{\alpha}|\boldsymbol{y}^{*},\boldsymbol{y},\boldsymbol{\Sigma},\boldsymbol{\beta}\sim W\left(v_{1},\boldsymbol{R}_{1}\right),
\end{equation}
where
\[
v_{1}=v_{0}+P, \quad \text{and} \quad \boldsymbol{R}_{1}=\left[\boldsymbol{R}_{0}^{-1}+\sum_{i=1}^{P}\left[\begin{array}{c}
\alpha_{1,i}\\
\alpha_{2,i}
\end{array}\right]\left[\begin{array}{cc}
\alpha_{1,i} & \alpha_{2,i}\end{array}\right]\right]^{-1}.
\]
We use Metropolis within Gibbs to sample $\rho$.
{\rhom e}nd{document}
\begin{document} \title{\Large Temporal Network Optimization Subject to Connectivity Constraints\footnote{This work was supported in part by (i) the project ``Foundations of Dynamic Distributed Computing Systems'' (\textsf{FOCUS}) which is implemented under the ``ARISTEIA'' Action of the Operational Programme ``Education and Lifelong Learning'' and is co-funded by the European Union (European Social Fund) and Greek National Resources, (ii) the FET EU IP project \textsf{MULTIPLEX} under contract no 317532, and (iii) the EPSRC Grants EP/P020372/1, EP/P02002X/1, and EP/K022660/1. A preliminary version of this work has appeared at ICALP 2013 \cite{MMCS13}.}} \author{George B. Mertzios\thanks{Department of Computer Science, Durham University, UK. Email: \texttt{[email protected]}} \and Othon Michail\thanks{Department of Computer Science, University of Liverpool, UK. Email: \texttt{[email protected]}} \and Paul G. Spirakis\thanks{Department of Computer Science, University of Liverpool, UK and Department of Computer Engineering and Informatics, University of Patras, Greece. Email: \texttt{[email protected]}}} \date{ } \maketitle \begin{abstract} In this work we consider \emph{temporal networks}, i.e.~networks defined by a \emph{labeling} $\longrightarrowmbda$ assigning to each edge of an \emph{underlying graph} $G$ a set of \emph{discrete} time-labels. The labels of an edge, which are natural numbers, indicate the discrete time moments at which the edge is available. We focus on \emph{path problems} of temporal networks. In particular, we consider \emph{time-respecting} paths, i.e.~paths whose edges are assigned by $\longrightarrowmbda$ a strictly increasing sequence of labels. We begin by giving two efficient algorithms for computing shortest time-respecting paths on a temporal network. We then prove that there is a \emph{natural analogue of Menger's theorem} holding for arbitrary temporal networks. Finally, we propose two \emph{cost minimization parameters} for temporal network design. One is the \emph{temporality} of $G$, in which the goal is to minimize the maximum number of labels of an edge, and the other is the \emph{temporal cost} of $G$, in which the goal is to minimize the total number of labels used. Optimization of these parameters is performed subject to some \emph{connectivity constraint}. We prove several lower and upper bounds for the temporality and the temporal cost of some very basic graph families such as rings, directed acyclic graphs, and trees.\newline \noindent \textbf{Keywords:} Temporal network, graph labeling, Menger's theorem, optimization, temporal connectivity, hardness of approximation. \end{abstract} \section{Introduction} \longrightarrowbel{sec:intro} A \emph{temporal} (or \emph{dynamic}) \emph{network} is, loosely speaking, a network that changes with time. This notion encloses a great variety of both modern and traditional networks such as information and communication networks, social networks, transportation networks, and several physical systems. In the literature of traditional communication networks, the network topology is rather static, i.e.~topology modifications are rare and they are mainly due to link failures and congestion. However, most modern communication networks such as mobile ad hoc, sensor, peer-to-peer, opportunistic, and delay-tolerant networks are inherently dynamic and it is often the case that this dynamicity is of a very high rate. 
In social networks, the topology usually represents the social connections between a group of individuals and it changes as the social relationships between the individuals are updated, or as existing individuals leave, or new individuals enter the group. In a transportation network, there is usually some fixed network of routes and a set of transportation units moving over these routes and dynamicity refers to the change of the positions of the transportation units in the network as time passes. Physical systems of interest may include several systems of interacting particles. In this work, embarking from the foundational work of Kempe \emph{et al.}~\cite{KKK00}, we consider \emph{discrete time}, that is, we consider networks in which changes occur at discrete moments in time, e.g.~days. This choice is not only a very natural abstraction of many real systems but also gives to the resulting models a purely combinatorial flavor. In particular, we consider those networks that can be described via an underlying graph $G$ and a labeling $\longrightarrowmbda$ assigning to each edge of $G$ a (possibly empty) set of discrete labels. Note that this is a generalization of the single-label-per-edge model used in~\cite{KKK00}, as we allow many time-labels to appear on an edge. These labels are drawn from the natural numbers and indicate the discrete moments in time at which the corresponding connection is available. For example, in the case of a communication network, availability of a communication link at some time $t$ may mean that a communication protocol is allowed to transmit a data packet over that link at time $t$. In this work, we initiate the study of the following fundamental network design problem: ``\emph{Given an underlying (di)graph $G$, assign labels to the edges of $G$ so that the resulting temporal graph $\longrightarrowmbda (G)$ minimizes some parameter while satisfying some connectivity property}''. In particular, we consider two cost optimization parameters for a given graph $G$. The first one, called \emph{temporality} of $G$, measures the maximum number of labels that an edge of $G$ has been assigned. The second one, called \emph{temporal cost} of $G$, measures the total number of labels that have been assigned to all edges of $G$ (i.e.~if $|\longrightarrowmbda(e)|$ denotes the number of labels assigned to edge $e$, we are interested in $\sum_{e\in E} |\longrightarrowmbda(e)|$). That is, if we interpret the number of assigned labels as a measure of \emph{cost}, the temporality (resp.~the temporal cost)\ of $G$ is a measure of the decentralized (resp.~centralized) cost of the network, where only the cost of individual edges (resp.~the total cost over all edges) is considered. Each of these two cost measures can be minimized subject to some particular connectivity property $\mathcal{P}$ that the temporal graph $\longrightarrowmbda (G)$ has to satisfy. In this work, we consider two very basic connectivity properties. The first one, that we call the \emph{all paths} property, requires the temporal graph to preserve every simple path of its underlying graph, where by ``preserve a path of $G$'' we mean in this work that the labeling should provide at least one strictly increasing sequence of labels on the edges of that path, in which case we also say that the path is \emph{time-respecting}. Before describing our second connectivity property let us give a simple illustration of temporality minimization. 
We are given a directed ring $u_1,u_2,\ldots,u_n$ and we want to determine the temporality of the ring subject to the all paths property. That is, we want to find a labeling $\longrightarrowmbda$ that preserves every simple path of the ring and at the same time minimizes the maximum number of labels of an edge. Looking at Figure~\ref{fig:ring}, it is immediate to observe that an increasing sequence of labels on the edges of path $P_1$ implies a decreasing pair of labels on edges $(u_{n-1},u_n)$ and $(u_1,u_2)$. On the other hand, path $P_2$ uses first $(u_{n-1},u_n)$ and then $(u_1,u_2)$ thus it requires an increasing pair of labels on these edges. It follows that in order to preserve both $P_1$ and $P_2$ we have to use a second label on at least one of these two edges, thus the temporality is at least 2. Next, consider the labeling that assigns to each edge $(u_i,u_{i+1})$ the labels $\{i, n+i\}$, where $1\leq i\leq n$ and $u_{n+1}=u_1$. It is not hard to see that this labeling preserves all simple paths of the ring. Since the maximum number of labels that it assigns to an edge is 2, we conclude that the temporality is also at most 2. In summary, the temporality of preserving all simple paths of a directed ring is 2. The other connectivity property that we define, called the \emph{reach} property, requires the temporal graph to preserve a path from node $u$ to node $v$ whenever $v$ is reachable from $u$ in the underlying graph. Furthermore, the minimization of each of our two cost measures can be affected by some problem-specific constraints on the labels that we are allowed to use. We consider here one of the most natural constraints, namely an upper bound of the \emph{age} of the constructed labeling $\longrightarrowmbda$, where the age of a labeling $\longrightarrowmbda$ is defined to be equal to the maximum label of $\longrightarrowmbda$ minus its minimum label plus 1. Now the goal is to minimize the cost parameter, e.g.~the temporality, satisfy the connectivity property, e.g.~\emph{all paths}, and additionally guarantee that the age does not exceed some given natural $k$. Returning to the ring example, it is not hard to see, that if we additionally restrict the age to be at most $n-1$ then we can no longer preserve all paths of a ring using at most 2 labels per edge. In fact, we must now necessarily use the worst possible number of labels, i.e.~$n-1$ on every edge. Minimizing such parameters may be crucial as, in most real networks, making a connection available and maintaining its availability does not come for free. For example, in wireless sensor networks the cost of making edges available is directly related to the power consumption of keeping nodes awake, of broadcasting, of listening to the wireless channel, and of resolving the resulting communication collisions. The same holds for transportation networks where the goal is to achieve good connectivity properties with as few transportation units as possible. At the same time, such a study is important from a purely graph-theoretic perspective as it gives some first insight into the structure of specific families of temporal graphs. To make this clear, consider again the ring example. Proving that the temporality of preserving all paths of a ring is 2 at the same time proves the following. If a \emph{temporal ring} is defined as a ring in which all nodes can communicate clockwise to all other nodes via time-respecting paths then \emph{no temporal ring exists with fewer than $n+1$ labels}. 
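The labeling $\{i,n+i\}$ used above is also easy to check mechanically. The sketch below (plain Python written for this discussion; the function name and representation are ours) enumerates all simple clockwise paths of the directed ring and greedily verifies that each one admits a strictly increasing selection of labels, confirming the bound of two labels per edge for small $n$.
\begin{verbatim}
# Sketch: verify that the assignment {i, n+i} on edge (u_i, u_{i+1}) preserves
# every simple (clockwise) path of the directed ring u_1 -> ... -> u_n -> u_1.
def ring_labeling_preserves_all_paths(n):
    labels = {i: (i, n + i) for i in range(1, n + 1)}    # edge i = (u_i, u_{i+1})
    for start in range(1, n + 1):                        # index of the first edge
        for length in range(1, n):                       # simple paths use < n edges
            prev = 0
            for k in range(length):
                e = (start + k - 1) % n + 1              # k-th edge, wrapping around
                feasible = [l for l in labels[e] if l > prev]
                if not feasible:
                    return False
                prev = min(feasible)                     # greedy smallest feasible label
    return True

assert all(ring_labeling_preserves_all_paths(n) for n in range(3, 12))
\end{verbatim}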
This, though an easy one, is a structural result for temporal graphs. Finally, we believe that our results are a first step towards answering the following fundamental question: ``\emph{To what extent can algorithmic and structural results of graph theory be carried over to temporal graphs?}''. For example, is there an analogue of Menger's theorem for temporal graphs? One of the results of the present work is an affirmative answer to the latter question. \subsection{Related Work} \longrightarrowbel{subsec:rw} \noindent\textbf{Labeled Graphs.} Labeled graphs have been widely used in Computer Science and Mathematics, e.g.~in Graph Coloring \cite{MR02}. In our work, labels correspond to moments in time and the properties of labeled graphs that we consider are naturally \emph{temporal properties}. Note, however, that any property of a graph labeled from a discrete set of labels corresponds to some temporal property if interpreted appropriately. For example, a proper edge-coloring, i.e.~a coloring of the edges in which no two adjacent edges share a common color, corresponds to a temporal graph in which no two adjacent edges share a common label, i.e.~no two adjacent edges ever appear at the same time. Though we focus on properties with natural temporal meaning, our definitions are generic and do not exclude other, yet to be defined, properties that may prove important in future applications. \noindent\textbf{Single-label Temporal Graphs and Menger's Theorem.} The model of temporal graphs that we consider in this work is a direct extension of the single-label model studied in \cite{Be96} and \cite{KKK00} to allow for many labels per edge. The main result of \cite{Be96} was that in single-label networks the max-flow min-cut theorem holds with unit capacities for time-respecting paths. In \cite{KKK00}, Kempe \emph{et al.}, among other things, proved that a fundamental property of classical graphs does not carry over to their temporal counterparts. In particular, they proved that there is no analogue of Menger's theorem, at least in its original formulation, for arbitrary single-label temporal networks and that the computation of the number of node-disjoint $s$-$t$ time-respecting paths is NP-complete. \emph{Menger's theorem} states that the maximum number of node-disjoint $s$-$t$ paths is equal to the minimum number of nodes needed to separate $s$ from $t$ (see \cite{Bo98}). In this work, we go a step ahead showing that if one reformulates Menger's theorem in a way that takes time into account then a very natural temporal analogue of Menger's theorem is obtained. Both of the above papers, consider a path as \emph{time-respecting} if its edges have non-decreasing labels. In the present work, we depart from this assumption and consider a path as time-respecting if its edges have \emph{strictly increasing} labels. Our choice is very well motivated by recent work in dynamic communication networks. If it takes one time unit to transmit a data packet over a link then a packet can only be transmitted over paths with strictly increasing availability times. \noindent\textbf{Continuous Availabilities (Intervals).} Some authors have assumed that an edge may be available for a whole time-interval $[t_1,t_2]$ or several such intervals and not just for discrete moments as we assume here. This is a clearly natural assumption but the techniques used in those works are quite different from those needed in the discrete case \cite{XFJ03,FT98}. 
\noindent\textbf{Dynamic Distributed Networks.} In recent years, there is a growing interest in distributed computing systems that are inherently dynamic. This has been mainly driven by the advent of low-cost wireless communication devices and the development of efficient wireless communication protocols. Apart from the huge amount of work that has been devoted to applications, there is also a steadily growing concrete set of foundational work. A notable set of works has studied (distributed) computation in \emph{worst-case} dynamic networks in which the topology may change arbitrarily from round to round subject to some constraints that allow for bounded end-to-end communication~\cite{OW05,KLO10,MCS12b-journal,DPRS13}. Population protocols \cite{AADFP06} and variants \cite{MCS11-2} are collections of finite-state agents that move arbitrarily like a soup of particles and interact in pairs when they come close to each other. The goal is there for the population to compute (i.e.~agree on) something useful in the limit in such an adversarial setting. Another interesting direction assumes that the dynamicity of the network is a result of randomness. Here the interest is on determining ``good'' properties of the dynamic network that hold with high probability, such as small (temporal) diameter, and on designing protocols for distributed tasks \cite{CFTE08,AKL08}. For introductory texts on the above lines of research in dynamic distributed networks the reader is referred to \cite{CFQS12,MCS11,Sc02}. \noindent\textbf{Distance Labeling.} A distance labeling of a graph $G$ is an assignment of unique labels to the vertices of $G$ so that the distance between any two vertices can be inferred from their labels alone. The goal is to minimize some parameter of the labeling and to provide a (hopefully fast) decoder algorithm for extracting a distance from two labels \cite{GPPR01,KKKP04}. There are several differences between a distance labeling and the time-labelings that we consider in this work. First of all, a distance labeling is being assigned on the vertices and not on the edges. Moreover, in distance labeling, one usually seeks the most compact set of labels (in binary length) that still guarantees efficient decoding. That is, the labeling parameter to be minimized is the binary length of an appropriate encoding, which is quite different from our cost parameters. Finally, the optimization constraint there is efficient decoding while in our case the constraints have to do with connectivity properties of the labeled graph. Also, we encourage the interested reader to see \cite{Mi16} for a recent introductory text on the recent algorithmic progress on temporal graphs. \subsection{Contribution} \longrightarrowbel{subsec:con} In \S \ref{sec:prel}, we formally define the model of temporal graphs under consideration and provide all further necessary definitions. The rest of the paper is partitioned into two parts. Part I focuses on journey problems for temporal graphs. In particular, in \S \ref{sec:journeys}, we give two efficient algorithms for computing shortest time-respecting paths. Then in \S \ref{sec:menger} we present an analogue of Menger's theorem which we prove valid for arbitrary temporal graphs. We apply our Menger's analogue to simplify the proof of a recent result on distributed token gathering. Part II studies the problem of designing a temporal graph optimizing some parameters while satisfying some connectivity constraints. 
Specifically, in \S \ref{min-cost-connectivity-sec} we formally define the temporality and temporal cost optimization metrics for temporal graphs. In \S \ref{basic-properties-cost-subsec}, we provide several upper and lower bounds for the temporality of some fundamental graph families such as rings, directed acyclic graphs (DAGs), and trees, as well as an interesting trade-off between the temporality and the age of rings. Furthermore, we provide in \S \ref{generic-method-subsec} a generic method for computing a lower bound of the temporality of an arbitrary graph w.r.t. the $\text{\emph{all paths}}$ property, and we illustrate its usefulness in cliques, close-to-complete bipartite subgraphs, and planar graphs. In \S \ref{cost-computation-subsec}, we consider the temporal cost of a digraph $G$ w.r.t. the $reach$ property, when additionally the age of the resulting labeling $\longrightarrowmbda (G)$ is restricted to be the smallest possible. We prove that this problem is hard to approximate, i.e.~there exists no PTAS unless P=NP. To prove our claim, we first prove (which may be of interest in its own right) that the Max-XOR($3$) problem is APX-hard via a PTAS reduction from Max-XOR. In the Max-XOR($3$) problem, we are given a $2$-CNF formula $\phi $, every literal of which appears in at most 3 clauses, and we want to compute the greatest number of clauses of $\phi $ that can be simultaneously XOR-satisfied. Then we provide a PTAS reduction from Max-XOR$(3)$ to our temporal cost minimization problem. On the positive side, we provide an $(r(G)/n)$-factor approximation algorithm for the latter problem, where $r(G)$ denotes the total number of reachabilities in $G$. Finally, in \S \ref{sec:conc} we conclude and give further research directions that are opened by our work. \section{Preliminaries} \longrightarrowbel{sec:prel} \subsection{A Model of Temporal Graphs} Given a (di)graph $G=(V,E)$, \footnote{The reason that we do not consider only digraphs and then allow undirected graphs to result as their special case, is that in that way an undirected edge would formally consist of two antiparallel edges. This would allow those edges to be labeled differently, unless we introduced an additional constraint preventing it. We've chosen to avoid this by considering explicit undirected graphs (whenever required) with at most one bidirectional edge per pair of nodes.} a \emph{labeling} of $G$ is a mapping $\longrightarrowmbda:E\rightarrow 2^\mathbb{N}$, that is, a labeling assigns to each edge of $G$ a (possibly empty) \footnote{The reader may be wondering whether it is pointless to allow the assignment of no labels to an edge $e$ of $G$, as it would have been equivalent to delete $e$ from $G$ in the first place. Even though this is true for temporal graphs provided as input, it isn't for temporal graphs that will be \emph{designed} by an algorithm based on an underlying graph. In the latter case, it is the algorithm's task to decide whether some of the provided edges need not be ever made available.} set of natural numbers, called \emph{labels}. \begin{definition} \longrightarrowbel{temporal-graph-def} Let $G=(V,E)$ be a (di)graph and $\longrightarrowmbda$ be a labeling of $G$. 
Then $\lambda (G)$ is the \emph{temporal graph} (or \emph{dynamic graph} \footnote{Even though both names are almost equally used in the literature, in this paper we have chosen to use the term ``temporal'' in order to avoid confusion of readers that are more familiar with the use of the term ``dynamic'' to refer to dynamically updated instances, with which usually an algorithm has to deal in an online way (including the rich literature of problems in which the algorithm has to maintain a graph property that is being disturbed by adversarial graph modifications).}) of $G$ with respect to $\lambda$. Furthermore, $G$ is the \emph{underlying graph} of $\lambda (G)$. \end{definition} We denote by $\lambda(E)$ the multiset of all labels assigned to the underlying graph by the labeling $\lambda$ and by $|\lambda|=|\lambda(E)|$ their cardinality (i.e.~$|\lambda|=\sum_{e\in E} |\lambda(e)|$). We also denote by $\lambda_{\min}=\min\{l\in \lambda(E)\}$ the minimum label and by $\lambda_{\max}=\max\{l\in \lambda(E)\}$ the maximum label assigned by $\lambda$. We define the \emph{age} of a temporal graph $\lambda(G)$ as $\alpha(\lambda)=\lambda_{\max}-\lambda_{\min}+1$. Note that if $\lambda_{\min}=1$ then $\alpha(\lambda)=\lambda_{\max}$. For every graph $G$ we denote by $\mathcal{L}_{G}$ the set of all possible labelings $\lambda$ of $G$. Furthermore, for every $k\in \mathbb{N}$, we define $\mathcal{L}_{G,k}=\{\lambda \in \mathcal{L}_{G}:\alpha (\lambda )\leq k\}$. \subsection{Further Definitions} For every time $r\in\mathbb{N}$, we define the $r$\emph{th instance of a temporal graph $\lambda(G)$} as the static graph $\lambda(G,r)=(V,E(r))$, where $E(r)=\{e\in E: r\in\lambda(e)\}$ is the (possibly empty) set of all edges of the underlying graph $G$ that are assigned label $r$ by the labeling $\lambda$. A temporal graph $\lambda(G)$ may also be viewed as a \emph{sequence of static graphs} $(G_1,G_2,\ldots,G_{\alpha(\lambda)})$, where $G_i=\lambda(G,\lambda_{\min}+i-1)$ for all $1\leq i\leq\alpha(\lambda)$. Another, often convenient, representation of a temporal graph is the following. \begin{definition} The \emph{static expansion} \footnote{The notion of static expansion is related to the notion of \emph{time-expanded graphs} of temporal graphs such as periodic, or resulting from public transportation networks (cf. \cite{SWW00,MSWZ07}).} of a temporal graph $\lambda(G)$ is a \emph{static digraph} $H=(S,A)$, and in particular a DAG, defined as follows. If $V=\{u_1,u_2,\ldots,u_n\}$ then $S=\{u_{ij}: \lambda_{\min}-1\leq i\leq \lambda_{\max},1\leq j\leq n\}$ and $A=\{(u_{(i-1)j},u_{ij^\prime}):$ $j=j^\prime$ or $(u_j,u_{j^\prime})\in E(i)$ for some $\lambda_{\min}\leq i\leq \lambda_{\max}\}$. In words, we create $\alpha(\lambda)+1$ copies of $V$ representing the nodes over time (time-nodes) and add outgoing edges from time-nodes of one level only to time-nodes of the next level.
In particular, we connect a time-node $u_{(i-1)j}$ to its own subsequent copy $u_{ij}$ and to every time-node $u_{ij^\prime}$ s.t.~$(u_j,u_{j^\prime})$ is an edge of $\lambda(G)$ at time $i$. \end{definition} A \emph{journey} (or \emph{time-respecting path}) $J$ of a temporal graph $\lambda(G)$ is a path $(e_1,e_2,\ldots,e_k)$ of the underlying graph $G=(V,E)$, where $e_i\in E$, together with labels $l_1<l_2<\ldots<l_k$ such that $l_i\in \lambda(e_i)$ for all $1\leq i\leq k$. In words, a journey is a path that uses strictly increasing edge-labels. If a labeling $\lambda$ defines a journey on some path $P$ of $G$ then we also say that $\lambda$ \emph{preserves} $P$. A natural notation for a journey is $(e_1,l_1),(e_2,l_2),\ldots,(e_k,l_k)$. We call each $(e_i,l_i)$ a \emph{time-edge} as it corresponds to the availability of edge $e_i$ at time $l_i$. We call $l_1$ the \emph{departure time} and $l_k$ the \emph{arrival time} of journey $J$ and denote them by $d(J)$ and $a(J)$, respectively. A $(u,v)$-journey $J$ is called \emph{foremost from time $t$} if $d(J)\geq t$ and $a(J)$ is minimized. Formally, let $\mathcal{J}$ be the set of all $(u,v)$-journeys $J$ with $d(J)\geq t$. A $J\in \mathcal{J}$ is foremost if $a(J)=\min_{J^\prime\in \mathcal{J}}\{a(J^\prime)\}$. A journey $J$ is called \emph{fastest} if $a(J)-d(J)+1$ is minimized. We call $a(J)-d(J)+1$ the \emph{duration} of the journey. A journey $J$ is called \emph{shortest} if $k$ is minimized, that is, it minimizes the number of nodes visited (also called the number of hops). We say that a journey $J$ \emph{leaves from node $u$} (\emph{arrives at node $u$}, resp.) \emph{at time $t$} if $(u,v,t)$ ($(v,u,t)$, resp.) is a time-edge of $J$. Two journeys are called \emph{out-disjoint} (\emph{in-disjoint}, respectively) if they never leave from (arrive at, resp.) the same node at the same time. Given a set $\mathcal{J}$ of $(s,v)$-journeys we define their \emph{arrival time} as $a(\mathcal{J})=\max_{J\in \mathcal{J}}\{a(J)\}$. We say that a set $\mathcal{J}$ of $(s,v)$-journeys satisfying some constraint $c$ (e.g.~containing at least $k$ journeys and/or containing only out-disjoint journeys) is \emph{foremost} if $a(\mathcal{J})$ is minimized over all sets of journeys satisfying the constraint. If, in addition to the labeling $\lambda$, a positive weight $w(e)>0$ is assigned to every edge $e\in E$, then we call the temporal graph a \emph{weighted} temporal graph. In the case of a weighted temporal graph, by ``shortest journey'' we mean a journey that minimizes the sum of the weights of its edges. Throughout the text we denote by $n$ the number of nodes and by $m$ and $m_t$ the number of edges of graphs and temporal graphs, respectively. In the case of a temporal graph, by ``number of edges'' we mean ``number of time-edges'', i.e.~$m_t=|\lambda|$. By $d(G)$ we denote the diameter of a (di)graph $G$, that is, the length of the longest shortest path between any two nodes of $G$. By $\delta_u$ we denote the degree of a node $u\in V(G)$ (in the case of an undirected graph $G$).
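To make the notion of path preservation concrete, the following sketch (plain Python; the dictionary representation of $\lambda$ and the function name are ours and purely illustrative) checks whether a given path of the underlying graph is preserved, i.e.~whether it can be traversed as a journey with departure time at least $t$, and, if so, returns the smallest possible arrival time via the obvious greedy choice of labels.
\begin{verbatim}
# Sketch: a labeling lam maps each directed edge (u, v) to a set of integer labels.
def earliest_traversal(path, lam, t_start=1):
    """Return the smallest arrival time of a journey along `path` whose
    departure time is >= t_start, or None if the path is not preserved."""
    prev = t_start - 1
    for edge in zip(path, path[1:]):           # consecutive edges of the path
        feasible = [l for l in lam.get(edge, ()) if l > prev]
        if not feasible:                       # no strictly larger label available
            return None
        prev = min(feasible)                   # greedy smallest choice is optimal here
    return prev

# Example: the path u -> v -> w is preserved (labels 2 then 5), arriving at time 5.
lam = {("u", "v"): {2, 7}, ("v", "w"): {1, 5}}
assert earliest_traversal(["u", "v", "w"], lam) == 5
\end{verbatim}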
\part*{Part I}
\section{Journey Problems} \label{sec:journeys}
\subsection{Foremost Journeys}
We are given (in its full ``offline'' description) a \emph{temporal graph} $\lambda(G)$, where $G=(V,E)$, a distinguished source node $s\in V$, and a time $\lambda_{\min}\leq t_{start}\leq \lambda_{\max}$, and we are asked for all $w\in V\backslash\{s\}$ to compute a foremost $(s,w)$-journey from time $t_{start}$.
\begin{algorithm}[t!]
\caption{FJ} \label{alg:fj}
\begin{algorithmic}[1]
\REQUIRE Temporal graph $\lambda(G)$ (full ``offline'' description), source node $s\in V$, and time $t_{start}$, where $\lambda_{\min}\leq t_{start}\leq \lambda_{\max}$. The input is represented by an array $A_{v}$ with $\lambda_{\max}-\lambda_{\min}+1$ entries for every node $v$, where the entry $A_{v}[t]$ stores a pointer to the linked list of the adjacent nodes of~$v$ at time step $t$.
\ENSURE For all $v\in V\backslash\{s\}$ a foremost $(s,v)$-journey from time $t_{start}$. In particular, outputs for every $v$ a pair $(p[v],a[v])$, where $p[v]$ is the predecessor node of $v$ on the journey and $a[v]$ is the arrival time of the journey at $v$ (the pair as a whole may be viewed as the predecessor time-node of $v$ on the journey).
\STATE{$R\leftarrow\{s\}$, $t\leftarrow t_{start}$}
\FOR{each $v\in V\backslash\{s\}$}
\STATE $p[v]\leftarrow\emptyset$
\STATE $a[v]\leftarrow\infty$
\ENDFOR
\WHILE{$R\neq V$ and $t\neq \lambda_{\max}+1$}
\STATE{$C\leftarrow \emptyset$}
\FOR{each $u\in R$}
\FOR{each $(u,v)\in E(t)$}
\IF[that is, $v\notin R$]{$p[v]=\emptyset$}
\STATE{$p[v]\leftarrow u$}
\STATE{$a[v]\leftarrow t$}
\STATE{$C\leftarrow C\cup\{v\}$}
\ENDIF
\ENDFOR
\ENDFOR
\STATE{$R\leftarrow R \cup C$}
\STATE{$t++$}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{theorem}
Algorithm \ref{alg:fj} correctly computes for all $w\in V\backslash\{s\}$ a foremost $(s,w)$-journey from time $t_{start}$. The running time of the algorithm is $O(n\lambda_{\max}+m_t)$.
\end{theorem}
\begin{proof}
Assume that at the end of round $t-1$ all nodes in $R$ have been reached by foremost journeys from $s$. Let $(u,v,t)$ be a time-edge s.t.~$u\in R$ and $v\notin R$, and let $f(s,u)$ denote the foremost journey from $s$ to $u$. We claim that $J=f(s,u),(u,v,t)$ is a foremost journey from $s$ to $v$. Recall that we denote the arrival time of $J$ by $a(J)$. To see that our claim holds, assume that there is some other journey $J^\prime$ s.t.~$a(J^\prime)<a(J)$. Then there must be some time-edge $(w,z,t^\prime)$ with $w\in R$, $z\notin R$, and $t^\prime<t$. However, this contradicts the fact that $z\notin R$, as the algorithm should have added it to $R$ at time $t^\prime$. The proof follows by induction on $t$ beginning from $t=t_{start}$, at which time $R=\{s\}$ ($s$~has trivially been reached by a foremost journey from itself, so the claim holds for the base case). We now prove that the time complexity of the algorithm is $O(n\lambda_{\max}+m_t)$. In the worst case, the last node may be inserted at step $\lambda_{\max}$, so the while loop is executed $O(\lambda_{\max})$ times. In each execution of the while loop, the algorithm visits the $O(n)$ nodes of the current set $R$ in the worst case (e.g.~when all nodes but one have been added into $R$ from the first step).
For each such node $v$ and for each time $\longrightarrowmbda_{\min} \leq t\leq \longrightarrowmbda_{\max}$ the algorithm first locates the entry $A_{v}[t]$ in the array $A_{v}$ in constant time and then it visits the whole linked list of the adjacent nodes of $v$ at time step $t$. All these operations can be performed in $O(n\longrightarrowmbda_{\max}+m_t)$ time in total. \qquad \end{proof} \subsection{Shortest Journeys with Weights} \begin{theorem} \longrightarrowbel{shortest-time-path-thm}Let $\longrightarrowmbda(G)$, where $G=(V,E)$, be a weighted temporal graph with $n$ vertices and $m$ edges. Assume also that $|\longrightarrowmbda(e)|=1$ for all $e\in E$, i.e.~there is a single label on each edge (this implies also that $m_t=m$). Let $s,t\in V$. Then, we can compute a shortest journey $J$ between $s$ and~$t$ in $\longrightarrowmbda(G)$ (or report that no such journey exists) in $O(m\log m+\sum_{v\in V}\delta_{v}^{2})=O(n^{3})$ time, where $\delta_{v}$ is the degree of~$v$ in $\longrightarrowmbda(G)$. \end{theorem} \begin{proof} First, we may assume without loss of generality that $\longrightarrowmbda(G)$ is a connected graph, and thus $m\geq n-1$. For the purposes of the proof we construct from $\longrightarrowmbda(G)$ a weighted directed graph $H$ with two specific vertices $s^{\prime },t^{\prime }$, such that there exists a journey $J$ in $\longrightarrowmbda(G)$ between $s$ and $t$ if and only if there is a directed path $P$ in $H$ from $s^{\prime }$ to $t^{\prime }$. Furthermore, if such paths exist, then the weight of the shortest journey $J$ of $\longrightarrowmbda(G)$ between $s$ and $t$ equals the weight of the shortest directed path $P$ of $ H$ from $s^{\prime }$ to $t^{\prime }$. First consider the (undirected) graph $G^{\prime }$ that we obtain when we add two vertices $s_{0}$ and $t_{0}$ to $\longrightarrowmbda(G)$ and the edges $s_{0}s$ and $tt_{0}$. Assign to these two new edges the weight zero and assign to them the time labels $\longrightarrowmbda (s_{0}s)=0$ and $\longrightarrowmbda (tt_{0})=\longrightarrowmbda_{\max}+1$. Then, clearly there exists a time-respecting path between $s$ and $t $ in $\longrightarrowmbda(G)$ if and only if there exists a time-respecting path between $s_{0}$ and $t_{0}$ in $G^{\prime }$, while the weights of these two paths coincide. For simplicity of the presentation, denote in the following by $V$ and $E$ the vertex and edge sets of $G^{\prime }$, respectively. Then we construct $ H=(V_{H},E_{H})$ from $G^{\prime }=(V,E)$ as follows. Let $V_{H}=E$. Furthermore, for every vertex $v\in V$, denote by $M(v)=\{vu:u\in N(v)\}$ the set of all incident edges to $v$ in $G^{\prime }$. For every pair $ e_{1},e_{2}\in M(v)$ for some $v\in V$, add the arc $\widehat{e_{1}e_{2}}$ to $E_{H}$ if and only if $\longrightarrowmbda (e_{1})< \longrightarrowmbda (e_{2})$. In this case, we assign to the arc $\widehat{e_{1}e_{2}}$ of $E_{H}$ the weight $w_{H}( \widehat{e_{1}e_{2}})=w(e_{2})$. Suppose first that $G^{\prime }$ has a journey between $s_{0}$ and $t_{0}$. Let $J=(u_{0},u_{1},\ldots ,u_{k})$, where $u_{0}=s_{0}$ and $ u_{k}=t_{0}$, be the shortest among them with respect to the weight function $w$ of $G^{\prime }$. Then, by the definition of $G^{\prime }$, $s_{0}s$ and $tt_{0}$ are the first and the last edges of $J$. Furthermore, by the definition of a time-respecting path, $\longrightarrowmbda (u_{i-1}u_{i})< \longrightarrowmbda (u_{i}u_{i+1})$ for every $i=1,2,\ldots ,k-1$. 
Therefore, by the above construction of $H$, there exists the directed path $Q=(e_{0},e_{1},\ldots ,e_{k-1})$ in $H$, where $e_{i}=u_{i}u_{i+1}$ for every $i=0,1,\ldots ,k-1$. Note that $e_{0}=s_{0}s$ and that $e_{k-1}=tt_{0}$. Furthermore, in the weight function $w_{H}$ of $H$, $w_{H}(\widehat{e_{i}e_{i+1}})=w(e_{i+1})$ for every $i=0,1,\ldots ,k-2$. Note that $w_{H}(\widehat{e_{k-2}e_{k-1}} )=w(e_{k-1})=w(u_{k-1}u_{k})$, i.e.~$w_{H}(\widehat{e_{k-2}e_{k-1}} )=w(tt_{0})=0$. Thus, the total weight $w(J)$ of $J$ in $G^{\prime }$ equals the total weight $w_{H}(Q)$ of $Q$ in $H$. Let now $s_{H}=s_{0}s$ and $t_{H}=tt_{0}$. Suppose now that $H$ has a path between $s_{H}$ and $t_{H}$. Let $Q=(e_{0},e_{1},\ldots ,e_{k})$, where $e_{0}=s_{H}$ and $e_{k}=t_{H}$, be the shortest among them with respect to the weight function $w_{H}$ of $H$. Since $Q$ is a directed path between $s_{H}$ and $t_{H}$, $\longrightarrowmbda (e_{i})< \longrightarrowmbda (e_{i+1})$ for every $i=0,1,\ldots ,k-1$ by the construction of $H$. Furthermore, the edges $e_{i}$ and $e_{i+1}$ of $G^{\prime }$ are incident for every $i=0,1,\ldots ,k-1$. Denote now by $p_{i}$ the common vertex of the edges $e_{i}$ and $ e_{i+1}$ in $G^{\prime }$ for every $i=0,1,\ldots ,k-1$. We will prove that $ p_{i}\neq p_{i+1}$ for every $i=0,1,\ldots ,k-2$. Suppose otherwise that $ p_{i}=p_{i+1}$ for some $0\leq i\leq k-2$. Then the edges $e_{i}$, $e_{i+1}$, and $e_{i+2}$ of $G^{\prime }$ are as it is shown in Figure~\ref{shortest-respecting-forbidden-fig}, where $e_{i}=ad$, $e_{i+1}=bd$, $e_{i+2}=cd$, and $d=p_{i}=p_{i+1}$ is the common point of the edges $e_{i}$, $e_{i+1}$, and $e_{i+2}$. However, since $\longrightarrowmbda (e_{i})< \longrightarrowmbda (e_{i+1})$ and $\longrightarrowmbda (e_{i+1})< \longrightarrowmbda (e_{i+2})$, it follows that $\longrightarrowmbda (e_{i})< \longrightarrowmbda (e_{i+2})$, and thus there exists the arc $\widehat{e_{i}e_{i+2}}$ in the directed graph $H$. Furthermore $w_{H}(\widehat{e_{i}e_{i+2}})=w_{H}( \widehat{e_{i+1}e_{i+2}})=w(e_{i+2})$, and thus $w_{H}(\widehat{e_{i}e_{i+1}} )+w_{H}(\widehat{e_{i+1}e_{i+2}})>w_{H}(\widehat{e_{i}e_{i+2}})$. Therefore there exists in $H$ the strictly shorter directed path $Q^{\prime }=(e_{0},e_{1},\ldots ,e_{i},e_{i+2}\ldots ,e_{k})$ between $e_{0}=s_{H}$ and $e_{k}=t_{H}$. This is a contradiction, since $Q$ is the shortest directed path between $s_{H}$ and $t_{H}$. Therefore $p_{i}\neq p_{i+1}$ for every $i=0,1,\ldots ,k-2$. Thus, we can denote now $e_{i}=p_{i-1}p_{i}$ for every $i=1,2,\ldots ,k$, where $p_{0}=s_{0}$ and $p_{k}=t_{0}$. That is, $ J=(p_{0},p_{1},\ldots ,p_{k+1})$ is a walk in $G^{\prime }$ between $ p_{0}=s_{0}$ and $p_{k}=t_{0}$. Since $Q$ is a simple directed path, it follows that every edge of $J$ appears exactly once in $J$, and thus $J$ is a path of $G^{\prime }$. Now we will prove that $J$ is actually a simple path of $G^{\prime }$. Suppose otherwise that $p_{i}=p_{j}$ for some $0\leq i<j\leq k+1$. If $p_{j}=p_{k}$, i.e.~$p_{j}=t_{0}$, then the subpath $(p_{0},p_{1},\ldots ,p_{i})$ of $J$ implies a strictly shorter directed path $Q^{\prime }$ than $Q$ between $ s_{H}$ and $t_{H}$ in $H$, which is a contradiction. Therefore $p_{j}\neq p_{k}$. 
Then, since $\longrightarrowmbda (p_{i-1}p_{i})< \longrightarrowmbda (p_{i}p_{i+1})$ for every $ i=0,1,\ldots ,k-1$ by the construction of the directed graph $H$, it follows in particular that $\longrightarrowmbda (p_{i-1}p_{i})< \longrightarrowmbda (p_{j}p_{j+1})$, and thus $ \widehat{e_{i}e_{j+1}}$ is an arc in the directed graph $H$. Thus the path $ (p_{0},p_{1},\ldots ,p_{i},p_{j+1},\ldots ,p_{k})$ of $G^{\prime }$ implies a strictly shorter directed path $Q^{\prime }$ than $Q$ between $s_{H}$ and $ t_{H}$ in $H$, which is again a contradiction. Therefore $p_{i}\neq p_{j}$ for every $0\leq i<j\leq k+1$ in $J$, and thus $J$ is a simple path in $ G^{\prime }$ between $p_{0}=s_{0}$ and $p_{k}=t_{0}$. Finally, it is easy to check that the weight $w(J)$ of $J$ in $G^{\prime }$ equals the weight $ w_{H}(Q)$ of $Q$ in $H$. Summarizing, there exists a journey $J$ in $G^{\prime }$ between $s_{0}$ and $t_{0}$ if and only if there is a directed path $Q$ in $ H $ from $s_{H}$ to $t_{H}$. Furthermore, if such paths exist, then the weight of the shortest journey $J$ of $G^{\prime }$ between $ s_{0}$ and $t_{0}$ equals the weight of the shortest directed path $Q$ of $H$ from $s_{H}$ to $t_{H}$. Moreover, the above proof immediately implies an efficient algorithm for computing the graph $H$ from~$\longrightarrowmbda(G)$ (by first constructing the auxiliary graph $G^{\prime }$ from $\longrightarrowmbda(G)$). This can be done in $O(\sum_{v\in V}\delta_{v}^{2})$ time. Indeed, for every vertex $v$ of $G^{\prime }$ we add at most $2{\binom{\delta_{v}}{2}}=\delta_{v}(\delta_{v}-1)$ arcs to $H$. That is, $|V_{H}|=m+2$ and $|E_{H}|\leq \sum_{v\in V(G^{\prime })}\delta_{v}(\delta_{v}-1)=O(\sum_{v\in V}\delta_{v}^{2})$. After we construct $H$, we can compute a shortest directed path between $s_{H}$ and $t_{H}$ in $O(|E_{H}|+|V_{H}|\log |V_{H}|)$ time using Dijkstra's algorithm with Fibonacci heaps~\cite{FT87}. That is, we can compute a shortest directed path $Q$ in $H$ between $s_{H}$ and $ t_{H}$ in $O(m\log m+\sum_{v\in V}\delta_{v}^{2})$ time. Once we have computed the path $Q$, we can easily construct the shortest undirected journey $J$ in $\longrightarrowmbda(G)$ between $s$ and $t$ in $O(m+n)$ time. This completes the proof of the theorem. \qquad \end{proof} \section{A Menger's Analogue for Temporal Graphs} \longrightarrowbel{sec:menger} In \cite{KKK00}, Kempe \emph{et al.} proved that Menger's theorem, at least in its original formulation, does not hold for single-label temporal networks in which journeys must have non-decreasing labels (and not necessarily strictly increasing as in our case). For a counterexample, it is not hard to see in Figure \ref{fig:ber} that there are no two disjoint time-respecting paths from $v_1$ to $v_4$ but after deleting any one node (other than $v_1$ or $v_4$) there still remains a time-respecting $v_1$-$v_4$ path. Moreover, they proved that the violation of Menger's theorem in such temporal networks renders the computation of the number of disjoint $s$-$t$ paths NP-complete. We prove in this section that, in contrast to the above important negative result, there is a natural analogue of Menger's theorem that is valid for all temporal networks. In Theorem \ref{the:dmeng}, we define this analogue and prove its validity. Then as an illustration (\S \ref{subsec:application}), we show how using our theorem can simplify the proof of a recent token dissemination result. 
When we say that we remove \emph{node departure time} $(u,t)$ we mean that we remove \emph{all time-edges leaving $u$ at time $t$}, i.e.~we remove label $t$ from all $(u,v)$ edges (for all $v\in V$). In case of an undirected graph, we replace each edge by two antiparallel edges and remove label $t$ only from the outgoing edges of $u$. So, when we ask how many node departure times are needed to separate two nodes $s$ and $v$ we mean how many node departure times must be selected so that after the removal of all the corresponding time-edges the resulting temporal graph has no $(s,v)$-journey. \footnote{Note that this is a different question from how many time-edges must be removed and, as we shall see, the latter question does not result in a Menger's analogue. Of course, removing a node departure time again results in the removal of some time-edges, but a Menger's analogue based on the number of those edges would not work. Instead, what turns out to work is an analogue based on counting the number of node departure times.} \begin{theorem} [Menger's Temporal Analogue] \longrightarrowbel{the:dmeng} Take any temporal graph $\longrightarrowmbda(G)$, where $G=(V,E)$, with two distinguished nodes $s$ and $v$. The maximum number of out-disjoint journeys from $s$ to $v$ is equal to the minimum number of node departure times needed to separate $s$ from $v$. \end{theorem} \begin{proof} Assume, in order to simplify notation, that $\longrightarrowmbda_{\min}=1$. Take the static expansion $H=(S,A)$ of $\longrightarrowmbda(G)$. Let $\{u_{i1}\}$ and $\{u_{in}\}$ represent $s$ and $v$ over time, respectively (first and last columns, respectively), where $0\leq i\leq \longrightarrowmbda_{\max}$. We extend $H$ as follows. For each $u_{ij}$, $0\leq i\leq \longrightarrowmbda_{\max}-1$, with at least 2 outgoing edges to nodes different than $u_{(i+1)j}$, e.g.~to nodes $u_{(i+1)j_1},u_{(i+1)j_2},\ldots,u_{(i+1)j_k}$, we add a new node $w_{ij}$ and the edges $(u_{ij},w_{ij})$ and $(w_{ij},u_{(i+1)j_1}),(w_{ij},u_{(i+1)j_2}),\ldots,(w_{ij},u_{(i+1)j_k})$. We also define an edge capacity function $c:A\rightarrow \{1,\longrightarrowmbda_{\max}\}$ as follows. All edges of the form $(u_{ij},u_{(i+1)j})$ take capacity $\longrightarrowmbda_{\max}$ and all other edges take capacity $1$. We are interested in the maximum flow from $u_{01}$ to $u_{\longrightarrowmbda_{\max}n}$. As this is simply a usual static flow network, the max-flow min-cut theorem applies stating that the maximum flow from $u_{01}$ to $u_{\longrightarrowmbda_{\max}n}$ is equal to the minimum of the capacity of a cut separating $u_{01}$ from $u_{\longrightarrowmbda_{\max}n}$. So it suffices to show that (i) the maximum number of out-disjoint journeys from $s$ to $v$ is equal to the maximum flow from $u_{01}$ to $u_{\longrightarrowmbda_{\max}n}$ and (ii) the minimum number of node departure times needed to separate $s$ from $v$ is equal to the minimum of the capacity of a cut separating $u_{01}$ from $u_{\longrightarrowmbda_{\max}n}$. For (i) observe that any set of $h$ out-disjoint journeys from $s$ to $v$ corresponds to a set of $h$ disjoint paths from $u_{01}$ to $u_{\longrightarrowmbda_{\max}n}$ w.r.t. diagonal edges (edges in $E\backslash\{(u_{ij},u_{(i+1)j})\}$) and inversely, so their maximums are equal. Next observe that any set of $h$ disjoint paths from $u_{01}$ to $u_{\longrightarrowmbda_{\max}n}$ w.r.t. diagonal edges corresponds to an integral $u_{01}$-$u_{\longrightarrowmbda_{\max}n}$ flow on $H$ of value $h$ and inversely. 
As the maximum integral $u_{01}$-$u_{\longrightarrowmbda_{\max}n}$ flow is equal to the maximum $u_{01}$-$u_{\longrightarrowmbda_{\max}n}$ flow (the capacities are integral and thus the integrality theorem of maximum flows applies) we conclude that the maximum $u_{01}$-$u_{\longrightarrowmbda_{\max}n}$ flow is equal to the maximum number of out-disjoint journeys from $s$ to $v$. For (ii) observe that any set of $r$ node departure times that separate $s$ from $v$ corresponds to a set of $r$ diagonal edges leaving $u_{ij}$ nodes (ending either in $w_{ij}$ or in $u_{(i+1)j^\prime}$ nodes) that separate $u_{01}$ from $u_{\longrightarrowmbda_{\max}n}$ and inversely. Finally, observe that there is a minimum $u_{01}$-$u_{\longrightarrowmbda_{\max}n}$ cut on $H$ that only uses such edges: for if a minimum cut uses vertical edges we can replace them by diagonal edges and we can replace all edges leaving a $w_{ij}$ node by the edge $(u_{ij},w_{ij})$ without increasing the total capacity. \qquad \end{proof} \begin{corollary} By symmetry we have that the maximum number of in-disjoint journeys from $s$ to $v$ is equal to the minimum number of node arrival times needed to separate $s$ from $v$. \end{corollary} \begin{corollary} The following alternative statements are both valid: \begin{itemize} \item The maximum number of time-node disjoint journeys from $s$ to $v$ is equal to the minimum number of time-nodes needed to separate $s$ from $v$. \item The maximum number of time-edge disjoint journeys from $s$ to $v$ is equal to the minimum number of time-edges needed to separate $s$ from $v$. \footnote{By time-node disjointness we mean that they do not meet on the same node at the same time (in terms of the expansion graph the corresponding paths should be disjoint in the classical sense) and by time-edge disjointness that they do not use the same time-edge (which again translates to using the same diagonal edge on the expansion graph).} \end{itemize} \end{corollary} The following version is though violated: ``the maximum number of out-disjoint (or in-disjoint) journeys from $s$ to $v$ is equal to the minimum number of time-edges needed to separate $s$ from $v$'' (see Figure \ref{fig:mv-ex}). The same holds for the original statement of Menger's theorem as discussed in the beginning of this section (see \cite{KKK00}). \subsection{An Application: Foremost Dissemination (Journey Packing)} \longrightarrowbel{subsec:application} Consider the following problem. We are given a temporal graph $\longrightarrowmbda(G)$, where $G=(V,E)$, a source node $s$, a sink node $v$ and an integer $q$. We are asked to find the minimum arrival time of a set of $q$ out-disjoint $(s,v)$-journeys or even the minimizing set itself. By exploiting the Menger's analogue proved in Theorem \ref{the:dmeng} (and in order to provide an example application of it), we give an alternative (and probably simpler to appreciate) proof of the following Lemma from \cite{DPRS13} (stated as Lemma \ref{lem:gathering} below) holding for a special case of temporal networks, namely those that have \emph{connected instances}. Formally, a temporal network $\longrightarrowmbda(G)$ is said to have connected instances if $\longrightarrowmbda(G,t)$ is connected at all times $t\in\mathbb{N}$. The problem under consideration is distributed $k$-token dissemination: there are $k$ tokens assigned to some given source nodes. 
In each round (i.e.~discrete moment in the temporal network), each node selects a single token to be sent to all of its current neighbors (i.e.~broadcast). The current neighbors at round $i$ are those defined by $E(i)$. The goal of a distributed protocol (or of a centralized strategy for the same problem) is to deliver all tokens to a given sink node $v$ as fast as possible. We assume that the algorithms know the temporal network in advance.

\begin{lemma} \label{lem:gathering} Let there be $k\leq n$ tokens at given source nodes and let $v$ be an arbitrary node. Then, all the tokens can be sent to $v$ using broadcasts in $O(n)$ rounds.
\end{lemma}

Let $S=\{s_1,s_2,\ldots,s_h\}$ be the set of source nodes and let $k(s_i)$ be the number of tokens of source node $s_i$, so that $\sum_{1\leq i\leq h} k(s_i)=k$. Clearly, it suffices to prove the following lemma.

\begin{lemma} \label{lem:tok} We are given a temporal graph $\lambda(G)$ with connected instances and age $\alpha(\lambda)=n+k$. We are also given a set of source nodes $S\subseteq V$, a mapping $k:S\rightarrow \mathbb{N}_{\geq 1}$ so that $\sum_{s\in S} k(s)=k$, and a sink node $v$. Then there are at least $k$ out-disjoint journeys from $S$ to $v$ such that $k(s_i)$ journeys leave from each source node $s_i$.
\end{lemma}

\begin{proof}
We interpret $k(s)$ as the number of tokens of source $s$. Number the tokens arbitrarily. Create a supersource node $s^\prime$ and connect it to the source node holding token $i$ by an edge labeled $i$. Increase all other edge labels by $k$. Clearly the new temporal graph $D=\lambda^\prime(G^\prime)$ has asymptotically the same age as the original and all properties have been preserved (we have just shifted the original temporal graph in the time dimension). Moreover, if there are $k$ out-disjoint journeys from $s^\prime$ to $v$ in $D$, then by construction of the edges leaving $s^\prime$ precisely $k(s)$ of these journeys must leave from each source $s\in S$. So it suffices to show that there are $k$ out-disjoint journeys from $s^\prime$ to $v$. By Theorem \ref{the:dmeng}, it is equivalent to show that the minimum number of departure times that must be removed from $D$ to separate $s^\prime$ from $v$ is $k$. Assume that we remove $y<k$ departure times. Then for more than $n$ rounds all departure times are available (as the age is now $n+2k$ rounds and we have only $y<k$ removals). As every instance of $G$ is connected, there is always an edge in the cut between the nodes that have already been reached by $s^\prime$ and those that have not, unless we remove some departure times. As for more than $n$ rounds all departure times are available, it is immediate to observe that $s^\prime$ reaches $v$, implying that we cannot separate $s^\prime$ from $v$ with fewer than $k$ removals, and this completes the proof. \qquad \end{proof}

\part*{Part II}

\section{Minimum Cost Temporal Connectivity} \label{min-cost-connectivity-sec}

In this section, we introduce some cost measures for maintaining different types of temporal connectivity. According to these temporal connectivity types, individuals are required to be capable of communicating with other individuals over the dynamic network, possibly with further restrictions on the timing of these connections.
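Before introducing these cost measures, we note as a computational aside that the quantity of Theorem \ref{the:dmeng} (and hence the argument of Lemma \ref{lem:tok}) is efficiently computable: the capacitated static expansion constructed in the proof of Theorem \ref{the:dmeng} can be handed to any off-the-shelf maximum-flow routine in order to compute the maximum number of out-disjoint $(s,v)$-journeys. The following minimal sketch is our own illustration and not part of the above proofs; it assumes that the temporal graph is given as a list of directed time-edges $(u,v,t)$ with integer labels $1\leq t\leq \lambda_{\max}$ and relies on the generic \texttt{maximum\_flow} routine of the \texttt{networkx} Python library.

\begin{verbatim}
# Illustrative sketch (assumptions: directed time-edges (u, v, t) with
# integer labels 1..lmax, lmax >= 1).  Builds the capacitated static
# expansion used in the proof of the Menger analogue and returns the
# maximum number of out-disjoint (s, v)-journeys as a max-flow value.
import networkx as nx
from collections import defaultdict

def max_out_disjoint_journeys(nodes, time_edges, s, v, lmax):
    H = nx.DiGraph()
    # "Vertical" edges (u_i, u_{i+1}) of effectively unbounded capacity.
    for u in nodes:
        for i in range(lmax):
            H.add_edge(('u', i, u), ('u', i + 1, u), capacity=lmax)
    # Group the "diagonal" edges by their departure point (u, t).
    departures = defaultdict(list)
    for (a, b, t) in time_edges:
        departures[(a, t)].append(b)
    for (a, t), targets in departures.items():
        tail = ('u', t - 1, a)
        if len(targets) >= 2:
            # Gadget node w: all departures of (a, t) share one unit edge.
            w = ('w', t - 1, a)
            H.add_edge(tail, w, capacity=1)
            tail = w
        for b in targets:
            H.add_edge(tail, ('u', t, b), capacity=1)
    value, _ = nx.maximum_flow(H, ('u', 0, s), ('u', lmax, v))
    return value
\end{verbatim}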
We initiate this study by considering the following fundamental problem: Given a (di)graph $G$, assign labels to the edges of $G$ so that the resulting temporal graph $\lambda(G)$ minimizes some parameter and at the same time preserves some connectivity property of $G$ in the time dimension. For a simple illustration of this, consider the case in which $\lambda(G)$ should contain a journey from $u$ to $v$ if and only if there exists a path from $u$ to $v$ in $G$. In this example, the reachabilities of $G$ completely define the temporal reachabilities that $\lambda(G)$ is required to have. We consider two cost optimization criteria for a (di)graph $G$. The first one, called the \emph{temporality} of $G$, measures the maximum number of labels assigned to an edge of $G$. The second one, called the \emph{temporal cost} of $G$, measures the total number of labels assigned to the edges of $G$. That is, if we interpret the number of assigned labels as a measure of \emph{cost}, the temporality (resp.~the temporal cost) of $G$ is a measure of the decentralized (resp.~centralized) cost of the network, where only the cost of individual edges (resp.~the total cost over all edges) is considered. We introduce these cost parameters in Definition~\ref{temporality-cost-def}. Each of these two cost measures can be minimized subject to some particular connectivity property $\mathcal{P}$ that the labeled graph $\lambda(G)$ has to satisfy. For simplicity of notation, we consider in Definition~\ref{temporality-cost-def} the connectivity property $\mathcal{P}$ as a subset of the set $\mathcal{L}_{G}$ of all possible labelings $\lambda$ of the (di)graph $G$. Furthermore, the minimization of each of these two cost measures can be affected by some problem-specific constraints on the labels that we are allowed to use. We consider here one of the most natural constraints, namely an upper bound on the \emph{age} of the constructed labeling $\lambda$.

\begin{definition} \label{temporality-cost-def} Let $G=(V,E)$ be a (di)graph, $\alpha _{\max}\in \mathbb{N}$, and $\mathcal{P}$ be a connectivity property. Then the \emph{temporality} of $(G,\mathcal{P},\alpha _{\max })$ is
\begin{equation*}
\tau (G,\mathcal{P},\alpha _{\max })=\min_{\lambda \in \mathcal{P}\cap \mathcal{L}_{G,\alpha _{\max }}}\max_{e\in E}|\lambda (e)|
\end{equation*}
and the \emph{temporal cost} of $(G,\mathcal{P},\alpha _{\max })$ is
\begin{equation*}
\kappa (G,\mathcal{P},\alpha _{\max })=\min_{\lambda \in \mathcal{P}\cap \mathcal{L}_{G,\alpha _{\max }}}\sum_{e\in E}|\lambda (e)|.
\end{equation*}
Furthermore, $\tau (G,\mathcal{P})=\tau (G,\mathcal{P},\infty )$ and $\kappa (G,\mathcal{P})=\kappa (G,\mathcal{P},\infty )$.
\end{definition}

Note that Definition~\ref{temporality-cost-def} can be stated for an arbitrary property $\mathcal{P}$ of the labeled graph $\lambda(G)$ (e.g.~some proper coloring-preserving property). Nevertheless, we only consider here $\mathcal{P}$ to be a connectivity property of $\lambda(G)$.
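To fix ideas, the two cost measures of Definition~\ref{temporality-cost-def} are straightforward to evaluate once a labeling is given. The following minimal sketch is our own illustration: it only evaluates the two measures of a \emph{given} labeling and does not perform any minimization over labelings, which is the actual object of study in this section.

\begin{verbatim}
# Illustrative sketch: a labeling is represented as a dict mapping each
# directed edge (u, v) to its non-empty set of integer labels.
def temporality(labeling):
    return max(len(labels) for labels in labeling.values())

def temporal_cost(labeling):
    return sum(len(labels) for labels in labeling.values())

# Example: a directed triangle (a ring), labeled so that every simple
# path is preserved; its temporality is 2 and its temporal cost is 6.
lam = {('a', 'b'): {1, 4}, ('b', 'c'): {2, 5}, ('c', 'a'): {3, 6}}
print(temporality(lam), temporal_cost(lam))   # prints: 2 6
\end{verbatim}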
In particular, we investigate the following two connectivity properties $\mathcal{P}$:
\begin{itemize}
\item \emph{all-paths}$(G)=\{\lambda \in \mathcal{L}_{G}:$ for all simple paths $P$ of $G$, $\lambda$ preserves $P\}$,
\item \emph{reach}$(G)=\{\lambda \in \mathcal{L}_{G}:$ for all $u,v\in V$ where $v$ is reachable from $u$ in $G$, $\lambda$ preserves at least one simple path from $u$ to $v\}$.
\end{itemize}

\subsection{Basic Properties of Temporality Parameters} \label{basic-properties-cost-subsec}

\subsubsection{Preserving All Paths}

We begin with some simple observations on $\tau(G,\text{\emph{all paths}})$. Recall that, given a (di)graph $G$, our goal is to label $G$ so that all simple paths of $G$ are preserved, using as few labels per edge as possible. From now on, when we say ``graph'' we will mean a directed one, and we will state it explicitly when our focus is on undirected graphs. A simple observation is that if $p(G)$ is the length of a longest path in $G$, then we can trivially preserve all paths of $G$ by using $p(G)$ labels per edge: give to every edge the labels $\{1,2,\ldots,p(G)\}$ and observe that for every path $e_1,e_2,\ldots,e_k$ of $G$ we can use the increasing sequence of labels $1,2,\ldots,k$, due to the fact that $k\leq p(G)$. Thus, we conclude that the upper bound $\tau(G,\text{\emph{all paths}})\leq p(G)$ holds for all graphs $G$. Of course, note that equality is easily violated; for example, a directed line has $p(G)=n$ but $\tau(G,\text{\emph{all paths}})=1$.

\begin{observation} $\tau(G,\text{all paths})\leq p(G)$ for all graphs $G$.
\end{observation}

\noindent \textbf{Directed Rings.} The following proposition states that if $G$ is a directed ring then the temporality of preserving all paths is 2; that is, the minimum number of labels per edge that preserve all simple paths of a ring is 2. As the proof was already sketched in Section \ref{sec:intro}, we do not repeat it here.

\begin{proposition} \label{pro:ring} $\tau(G,\text{all paths})=2$ when $G$ is a ring and $\tau(G,\text{all paths})\geq 2$ when $G$ contains a ring.
\end{proposition}

\noindent \textbf{Directed Acyclic Graphs.} A topological sort of a digraph $G$ is a linear ordering of its nodes such that if $G$ contains an edge $(u,v)$ then $u$ appears before $v$ in the ordering. It is well known that a digraph $G$ can be topologically sorted iff it has no directed cycles, that is, iff it is a DAG. A topological sort of a graph can be seen as placing the nodes on a horizontal line in such a way that all edges go from left to right; see e.g.~\cite[page 549]{CLRS01}.

\begin{proposition} \label{pro:dag-all-paths} If $G$ is a DAG then $\tau(G,\text{all paths})=1$.
\end{proposition}

\begin{proof}
Take a topological sort $u_1,u_2,\ldots,u_n$ of $G$. Clearly, every edge is of the form $(u_i,u_j)$ where $i<j$. Give to every edge $(u_i,u_j)$ label $i$, that is, $\lambda(u_i,u_j)=i$ for all $(u_i,u_j)\in E$. Now take any node $u_l$. Each of its incoming edges has some label $l^\prime<l$ and all its outgoing edges have label $l$. Now take any simple path $p=v_1,v_2,\ldots,v_k$ of $G$. Clearly, $v_i$ appears before $v_{i+1}$ in the topological sort for all $1\leq i\leq k-1$, which implies that $\lambda(v_i,v_{i+1})<\lambda(v_{i+1},v_{i+2})$, for all $1\leq i\leq k-2$. This proves that $p$ is preserved.
As we have preserved all simple paths with a single label on every edge, we conclude that $\tau(G,\text{\emph{all paths}})=1$, as required. \qquad \end{proof}

\subsubsection{Preserving All Reachabilities}

Now, instead of preserving all paths, we impose the apparently simpler requirement of preserving just a single path for every reachability pair $u,v\in V$. We claim that it is sufficient to understand how $\tau(G,reach)$ behaves on strongly connected digraphs. Let $\mathcal{C}(G)$ be the set of all strongly connected components of a digraph $G$. The following lemma proves that, w.r.t. the $reach$ property, the temporality of any digraph $G$ is equal to the maximum temporality of its components.

\begin{lemma} \label{lem:components} $\tau(G,reach) = \max\{1, \max_{C\in\mathcal{C}(G)} \tau(C,reach)\}$ for every digraph $G$ with at least one edge. In the case of no edge, $\tau(G,reach) = 0$ trivially.
\end{lemma}

\begin{proof}
Take any digraph $G$ and consider the DAG $D$ of the strongly connected components of $G$: the nodes of $D$ are the components of $G$ and there is an edge from component $C$ to component $C^\prime$ if there is an edge in $G$ from some node of $C$ to some node of $C^\prime$. As $D$ is a DAG, we can obtain a topological sort of it, which is a labeling $C_1,C_2,\ldots ,C_t$ of the $t$ components so that all edges between components go only from left to right. In the case where at least one component has at least 2 nodes (in which case $\max_{C\in\mathcal{C}(G)} \tau(C,reach)\geq 1$), we have to prove that we can label $G$ by using at most $\max_{1\leq i\leq t} \tau(C_i,reach)$ labels per edge and that we cannot do better than this. Consider the following labeling process. For each component $C_i$ define $d_i= \min_{\lambda\in\mathcal{C}_i}(\lambda_{\max}(\lambda) - \lambda_{\min}(\lambda))$, where $\mathcal{C}_i$ is the set of all labelings of $C_i$ that preserve all of its reachabilities using at most $\tau(C_i,reach)$ labels per edge. Note that any $C_i$ can be labeled beginning from any desirable $\lambda_{\min}$ with at most $\tau(C_i,reach)$ labels per edge and with $\lambda_{\max}$ equal to $\lambda_{\min}+d_i$. Now, label component $C_1$ with $\lambda_{\min}=1$ and $\lambda_{\max}=1+d_1$. Label all edges leaving $C_1$ with label $d_1+2$. Label component $C_2$ with $\lambda_{\min}=d_1+3$ and $\lambda_{\max}=(d_1+3)+d_2$ and all its outgoing edges with label $(d_1+3)+d_2+1$. In general, label component $C_i$ with $\lambda_{\min}=1+\sum_{1\leq j\leq i-1} (d_j+2)$ and $\lambda_{\max}=\lambda_{\min}+d_i$, and label all edges leaving $C_i$ with label $\lambda_{\max}+1$. It is not hard to see that this labeling scheme preserves all reachabilities of $G$, using just one label on each edge of $G$ corresponding to an edge of $D$ and at most $\tau(C_i,reach)$ labels per edge inside each component $C_i$. Thus, it uses at most $\max_{1\leq i\leq t} \tau(C_i,reach)$ labels on every edge. By observing that, for each strongly connected component $C_i$, $\tau(C_i,reach)$ must be paid by any labeling of $G$ that preserves all reachabilities in that component, the equality $\tau(G,reach) = \max_{C\in\mathcal{C}(G)} \tau(C,reach)$ follows.
In the extreme case where all components are just single nodes (in which case $\max_{C\in\mathcal{C}(G)} \tau(C,reach)=0$), it holds that $D=G$, therefore $G$ itself is a DAG and we only need 1 label per edge (as in Proposition \ref{pro:dag-all-paths}); thus, $\tau(G,reach) = 1$. \qquad \end{proof}

Lemma \ref{lem:components} implies that any upper bound on the temporality of preserving the reachabilities of strongly connected digraphs can be used as an upper bound on the temporality of preserving the reachabilities of general digraphs. In view of this, we focus on strongly connected digraphs $G$. We begin with a few simple but helpful observations. Obviously, $\tau(G,reach)\leq\tau(G,\text{\emph{all paths}})$, as any labeling that preserves all paths trivially preserves all reachabilities as well. If $G$ is a clique then $\tau(G,reach) = 1$, as giving to each edge a single arbitrary label (e.g.~label 1 to all) preserves all direct connections (one-step reachabilities), which are all present. If $G$ is a directed ring (which is again strongly connected) then it is easy to see that $\tau(G,reach) = 2$. An interesting question is whether there is some bound on $\tau(G,reach)$, either for all digraphs or for specific families of digraphs. The following lemma proves that indeed there is a very satisfactory generic upper bound.

\begin{lemma} \label{lem:strongly-connected} $\tau(G,reach)\leq 2$ for all strongly connected digraphs $G$.
\end{lemma}

\begin{proof}
As $G$ is strongly connected, if we pick any node $u$ then for all $v$ there is a $(v,u)$-path and a $(u,v)$-path. As for any $v$ there is a $(v,u)$-path, we may form an in-tree $T_{in}$ rooted at $u$ (that is, a tree with all edges directed upwards, towards $u$). Now, beginning from the leaves, give any direction-preserving labeling (just begin with label 1 at the leaves and increase the labels as you move upwards). Say that the depth is $k$, which means that you have increased up to label $k$. Now consider an out-tree $T_{out}$ rooted at $u$ that has all edge directions going from $u$ to the leaves. To make things simpler, create second copies of all nodes but $u$, so that the two trees are disjoint (w.r.t. all nodes but $u$): one tree passes through all the first copies and arrives at $u$, and the other tree begins from $u$ and goes to all the second copies. Now we can begin the labeling of $T_{out}$ from $k+1$, increasing labels as we move away from $u$ on $T_{out}$. This completes the construction. Now take any two nodes $w$ and $v$. Clearly, there is a time-respecting path from $w$ to $u$ and then a time-respecting path from $u$ to $v$ using greater labels, so there is a time-respecting path from $w$ to $v$. Finally, notice that for any edge on $T_{in}$ there is at most one copy of that edge on $T_{out}$, thus clearly we use at most 2 labels per edge. \qquad \end{proof}

Combining Lemma \ref{lem:components} and Lemma \ref{lem:strongly-connected} gives the following theorem:

\begin{theorem} $\tau(G,reach)\leq 2$ for all digraphs $G$.
\end{theorem}

\subsubsection{Restricting the Age}

Now notice that for all $G$ we have $\tau(G, reach, d(G))\leq d(G)$; recall that $d(G)$ denotes the diameter of the (di)graph $G$. Indeed, it suffices to label each edge by $\{1,2,\ldots,d(G)\}$. Since every shortest path between two nodes has length at most $d(G)$, in this manner we preserve all shortest paths, and thus all reachabilities, always arriving by time at most $d(G)$; thus we also preserve the diameter.
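This observation can also be checked mechanically for a concrete labeling: earliest arrival times from a source can be computed with a Dijkstra-like procedure, and one can then verify that every reachable node is reached by time $d(G)$. The following minimal sketch is our own illustration, not pseudocode from this paper; it assumes, as in the proofs above, that journeys use strictly increasing labels.

\begin{verbatim}
# Illustrative sketch: earliest arrival times under a given labeling.
# labeling: dict mapping directed edge (u, v) -> set of integer labels.
import heapq
from collections import defaultdict

def earliest_arrivals(labeling, source):
    out = defaultdict(list)                  # u -> [(v, label), ...]
    for (u, v), labels in labeling.items():
        for t in labels:
            out[u].append((v, t))
    arrival = {source: 0}
    heap = [(0, source)]
    while heap:
        a, u = heapq.heappop(heap)
        if a > arrival.get(u, float('inf')):
            continue                         # stale queue entry
        for v, t in out[u]:
            # A journey may leave u at time t only if t exceeds the
            # arrival time at u (strictly increasing labels).
            if t > a and t < arrival.get(v, float('inf')):
                arrival[v] = t
                heapq.heappush(heap, (t, v))
    return arrival

# Under the trivial labeling lambda(e) = {1, ..., d(G)}, every node at
# static distance j from the source has earliest arrival time j <= d(G).
\end{verbatim}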
In particular, a clique $G$ trivially has $\tau(G, reach, d(G)) = 1$, as $d(G)=1$, and we can only have large $\tau(G, reach, d(G))$ in graphs with large diameter. For example, a directed ring $G$ of size $n$ has $\tau(G, reach, d(G))=n-1$ (note that on a ring it always holds that $\tau(G, reach, k)=\tau(G, \text{\emph{all paths}}, k)$, as on a ring satisfying all reachabilities also satisfies all paths, while the converse holds for all graphs). Indeed, assume that from some edge $e$, label $1\leq i \leq n-1$ is missing. It is easy to see that there is some shortest path between two nodes of the ring that, in order to arrive by time $n-1$, must use edge $e$ at time $i$. As this label is missing, it must use some label greater than $i$, thus it arrives at time at least $n$, which is greater than the diameter. In this particular example we can preserve the diameter only if all edges have the labels $\{1,2, \ldots,n-1\}$. On the other hand, there are graphs with large diameter in which $\tau(G,reach,d(G))$ is small. This may also be the case even if $G$ is strongly connected. For example, consider the graph with nodes $u_1,u_2,\ldots,u_n$ and edges $(u_i, u_{i+1})$ and $(u_{i+1}, u_i)$ for all $1\leq i \leq n-1$; in words, we have a directed line from $u_1$ to $u_n$ and an inverse one from $u_n$ to $u_1$. The diameter here is $n-1$ (e.g.~the shortest path from $u_1$ to $u_n$). On the other hand, we have $\tau(G,reach,d(G)) = 1$: simply label one path $1,2,\ldots, n-1$ and label the inverse one $1,2,\ldots,n-1$ again, i.e.~give to the edges $(u_i, u_{i+1})$ and $(u_{n-i+1}, u_{n-i})$ label $i$. The reason is that there are only two pairs of nodes, namely $(u_1,u_n)$ and $(u_n, u_1)$, that must necessarily use the long paths of length equal to the diameter $n-1$. All other, shorter, shortest paths between other pairs of nodes have a slack of up to $n-1$ to exploit.

We will now demonstrate what makes $\tau(G,reach,d(G))$ grow. This happens when many maximum shortest paths (those that determine the diameter of $G$) between different pairs of nodes, which are additionally unique, in the sense that we must necessarily use them in order to preserve the corresponding reachabilities (the phenomenon may occur even if they are not unique, but uniqueness simplifies the argument), all pass through the same edge $e$ but use $e$ at many different times. It will be helpful to look at Figure \ref{fig:diam}. Each $(u_i,v_i)$-path is a unique shortest path between $u_i$ and $v_i$ and additionally has length equal to the diameter (i.e.~it is also a maximum one), so we must necessarily preserve all 5 $(u_i,v_i)$-paths. Note now that each $(u_i,v_i)$-path passes through $e=(u_1,v_5)$ via its $i$-th edge. Each of these paths can only be preserved without violating $d(G)$ by assigning to it the labels $1,2,\ldots, d(G)$; but then edge $e$ must necessarily carry all labels $1,2,\ldots,d(G)$. To see this, notice simply that if any label $i$ is missing from $e$ then there is some maximum shortest path that goes through $e$ at step $i$. As $i$ is missing, it cannot arrive sooner than time $d(G)+1$, which violates the preservation of the diameter.

\noindent\textbf{Undirected Tree.} Now consider an undirected tree $T$.

\begin{corollary} If $T$ is an undirected tree then $\tau(T,\text{all paths},d(T))\leq 2$.
\end{corollary}

\begin{proof}
This follows as a simple corollary of Lemma \ref{lem:strongly-connected}.
If we replace each undirected edge by two antiparallel directed edges, then $T$ becomes a strongly connected digraph and, additionally, for every ordered pair of nodes $(u,v)$ there is precisely one simple path from $u$ to $v$. The latter implies that preserving all paths of $T$ is equivalent to preserving all reachabilities of $T$. So, all assumptions of Lemma \ref{lem:strongly-connected} are satisfied and therefore $\tau(T,\text{all paths})\leq 2$. Finally, recall that the labeling of the construction in the proof of Lemma \ref{lem:strongly-connected} increases labels level-by-level, first from the leaves to the root and then from the root to the leaves; therefore the number of increments (i.e., the maximum label used) is upper bounded by the diameter of $T$ and, thus, $\tau(T,\text{all paths},d(T))\leq 2$ as required. \qquad \end{proof}

\noindent\textbf{Trade-off on a Ring.} We shall now prove that there is a trade-off between the temporality and the age. In particular, we consider a directed ring $G=(e_1,e_2,\ldots,e_n)$, where the $e_i$ are edges oriented clockwise. As we have already discussed, if $\alpha= n-1$ then $\tau(G,\text{\emph{all paths}},\alpha)=n-1$ (which is the worst possible) and if $\alpha= 2(n-1)$ then $\tau(G,\text{\emph{all paths}},\alpha)=2$ (which is the best possible). We now formalize the behavior of $\tau$ as $\alpha$ moves from $n-1$ to $2(n-1)$.

\begin{theorem} \label{the:tradeoff} If $G$ is a directed ring and $\alpha=(n-1)+k$, where $1\leq k \leq n-1$, then $\tau(G,\text{all paths},\alpha)=\Theta(n/k)$ and in particular $\lfloor\frac{n-1}{k+1}\rfloor\leq\tau(G,\text{all paths},\alpha)\leq\lceil\frac{n}{k+1}\rceil+1$. Moreover, $\tau(G,\text{all paths},n-1)=n-1$ (i.e.~when $k=0$).
\end{theorem}

\begin{proof}
The proof of the upper bound is constructive. In particular, we present a labeling that preserves all paths of the ring $G$ using at most $\lceil\frac{n}{k+1}\rceil+1$ labels on every edge and maximum label $(n-1)+k$. Let the ring be $e_1,e_2,\ldots,e_n$, oriented clockwise. We say that an edge $e_i$ is \emph{satisfied} if there is a journey of length $n-1$ beginning from $e_i$ (clearly, considering only those journeys that do not use a label greater than $\alpha=(n-1)+k$). Consider the following labeling procedure.
\begin{itemize}
\item For all $i=0,1,2,\ldots,\lceil\frac{n}{k+1}\rceil-2$:
\begin{itemize}
\item Assign label 1 to edge $e_{j=i(k+1)+1}$.
\item Beginning from edge $e_{j+1}$, assign labels $2,3,\ldots,(n-1)+k$ clockwise.
\end{itemize}
\item For $i=\lceil\frac{n}{k+1}\rceil-1$, assign label 1 to edge $e_{j=i(k+1)+1}$ and, beginning from edge $e_{j+1}$, assign labels $2,3,\ldots,(n-1)+(n-j)$ clockwise.
\end{itemize}
Note that in each iteration $i$ we satisfy the edges $e_{i(k+1)+1},e_{i(k+1)+2},\ldots,e_{(i+1)(k+1)}$, i.e.~$k+1$ new edges, without leaving gaps. It follows that in $\lceil\frac{n}{k+1}\rceil$ iterations all edges have been satisfied. The first iteration assigns at most two labels to edge $e_1$ and every other iteration, apart from the last one, assigns one label to $e_1$ (and clearly at most one to every other edge), thus $e_1$ gets a total of at most $\lceil\frac{n}{k+1}\rceil+1$ labels (and all other edges get at most this many). Now, for the lower bound, take an arbitrary edge, e.g.~$e_1$. Given an edge $e_i$ and a journey $J$ from $e_i$ to $e_1$ that uses label $l_1$ on $e_1$, define the delay of $J$ as $l_1-l(J)$, where $l(J)$ is the length of the journey $J$, i.e.~$n-i+2$.
In words, the delay of an $(e_i,e_1)$-journey is the difference between the time at which the journey traverses $e_1$ and the earliest time at which it could have traversed $e_1$. Now, beginning from $e_n$, count $k+1$ edges counterclockwise, i.e.~consider edge $e_{n-k}$. We show that in order to satisfy $e_{n-k}$ we must necessarily use one of the labels $\{k+2,k+3,\ldots,2k+2\}$ on $e_1$. To this end, notice that the delay of any journey that satisfies some edge can be at most $k$, the reason being that a delay of $k+1$ or greater implies that the journey cannot visit $n-1$ edges in less than $(n-1)+(k+1)$ time, thus it will have to use some label greater than $\alpha=(n-1)+k$, which is the maximum allowed. Thus, the maximum label by which a journey that satisfies $e_{n-k}$ can go through $e_1$ is $l(e_{n-k})+k=2k+2$, where $l(e_{i})$ denotes the length of the path beginning from the tail of $e_i$ and ending at the head of $e_1$. Moreover, the minimum label by which any journey from $e_{n-k}$ can go through $e_1$ is $l(e_{n-k})=k+2$. Thus, we conclude that any journey that satisfies $e_{n-k}$ has to use one of the labels $\{k+2,k+3,\ldots,2k+2\}$ on $e_1$. It is not hard to see that the above idea generalizes as follows. For all $i=0,1,\ldots,\lfloor\frac{n-1}{k+1}\rfloor-1$, in order to satisfy edge $e_{n-i(k+1)+1}$ (note that $e_{n+1}=e_1$) we must necessarily use one of the labels $\{i(k+1)+1, i(k+1)+2,\ldots,(i+1)(k+1)\}$ on $e_1$. For example, for $i=0$ we get $\{1,2,\ldots,k+1\}$, for $i=1$ we get $\{k+2,\ldots,2k+2\}$, for $i=2$ we get $\{2k+3,\ldots,3k+3\}$, and so on. In summary, as the above sets are disjoint, if we begin from $e_1$ and move counterclockwise, then for every $k+1$ edges that we encounter we must pay for another (new) label on $e_1$; thus we pay at least $\lfloor\frac{n-1}{k+1}\rfloor$ labels in total. \qquad \end{proof}

\subsection{A Generic Method for Computing Lower Bounds for Temporality} \label{generic-method-subsec}

Proposition \ref{pro:ring} showed that graphs with directed cycles need at least 2 labels on some edge(s) in order for all paths to be preserved. A natural question to ask now is whether we can preserve all paths of any graph by using at most 2 labels per edge (i.e.~whether $\tau(G,\text{\emph{all paths}})\leq 2$ holds for all graphs). We shall prove that there are graphs $G$ for which $\tau(G,\text{\emph{all paths}})=\Omega(p(G))$ (recall that $p(G)$ denotes the length of the longest path in $G$), that is, graphs in which the optimal labeling, w.r.t. temporality, is very close to the trivial labeling $\lambda(e)=\{1,2,\ldots,p(G)\}$, for all $e\in E$, that always preserves all paths.

\begin{definition} \label{def:kernel} Call a set $K=\{e_1,e_2,\ldots,e_k\}\subseteq E(G)$ of edges of a digraph $G$ an \emph{edge-kernel} if for every permutation $\pi=(e_{i_1},e_{i_2},\ldots,e_{i_k})$ of the elements of $K$ there is a simple path $P$ of $G$ that visits all edges of $K$ in the ordering defined by the permutation $\pi$.
\end{definition}

We will now prove that an edge-kernel of size $k$ forces at least $k$ labels on some edge. Our proof is constructive: given any labeling using at most $k-1$ labels per edge on an edge-kernel of size $k$, we present a specific path that forces a $k$th label to appear.

\begin{theorem} [Edge-kernel Lower Bound] \label{the:kernel} If a digraph $G$ contains an edge-kernel of size $k$ then $\tau(G,\text{all paths})\geq k$.
\end{theorem}

\begin{proof}
Let $K=\{e_1,e_2,\ldots,e_k\}$ be an edge-kernel of size $k$.
Assume, for contradiction, that there is a path-preserving labeling using at most $k-1$ labels on every edge. Then there is a path-preserving labeling that uses precisely $k-1$ labels on every edge (just extend the previous labeling by arbitrary labels). On every edge $e_i$, $1\leq i\leq k$, sort the labels in ascending order and denote by $\lambda_l(e)$ the $l$th smallest label of edge $e$; e.g.~if an edge $e$ has labels $\{1,3,7\}$, then $\lambda_1(e)=1$, $\lambda_2(e)=3$, and $\lambda_3(e)=7$. Note that, by the definition of an edge-kernel, all possible permutations of the edges in $K$ appear in paths of $G$ that should be preserved. We construct a permutation $\pi=(e_{j_1},e_{j_2},\ldots,e_{j_k})$ of the edges in $K$ which cannot be realized in a time-respecting way without using a $k$th label on some edge. As $e_{j_1}$, use the edge with the maximum $\lambda_1$, that is, $\arg\max_{e\in K} \lambda_1(e)$. Then, as $e_{j_2}$, use the edge with the maximum $\lambda_2$ among the remaining edges, that is, $\arg\max_{e\in K\backslash\{e_{j_1}\}} \lambda_2(e)$, and define $e_{j_3}, e_{j_4}, \ldots$ analogously. It is not hard to see that $\pi$ satisfies $\lambda_i(e_{j_i})\geq \lambda_i(e_{j_{i+1}})$ for all $1\leq i\leq k-1$. This, in turn, implies that for $\pi$ to be time-respecting it cannot use the labels $\lambda_1,\ldots,\lambda_{i-1}$ at edge $e_{j_i}$, for all $i\geq 2$, which shows that at edge $e_{j_k}$ it can use none of the $k-1$ available labels; thus a $k$th label is necessarily needed and the theorem follows. \qquad \end{proof}

\begin{lemma} If $G$ is a complete digraph of order $n$ then it has an edge-kernel of size $\lfloor n/2\rfloor$.
\end{lemma}

\begin{proof}
Note that $\lfloor n/2\rfloor$ is the size of a maximum matching $M$ of $G$. As all possible edges that connect the endpoints of the edges in $M$ are available, $M$ is an edge-kernel of size $\lfloor n/2\rfloor$. \qquad \end{proof}

Now, Theorem \ref{the:kernel} implies that a complete digraph of order $n$ requires at least $\lfloor n/2\rfloor$ labels on some edge in order for all paths to be preserved, that is, $\lfloor n/2\rfloor\leq \tau(G,\text{\emph{all paths}})$. At the same time we have the trivial upper bound $\tau(G,\text{\emph{all paths}})\leq n-1$, which follows from the fact that the longest path of a clique is Hamiltonian, thus has $n-1$ edges, and for any graph $G$ the length of its longest path is an upper bound on $\tau(G,\text{\emph{all paths}})$. The above clearly remains true for the following (close to complete) bipartite digraph. There are two parts $A=\{u_i:1\leq i\leq k\}$ and $B=\{v_i:1\leq i\leq k\}$, both of size $k$. The edge set consists of $(u_i,v_i)$ for all $i$ and $(v_i,u_j)$ for all $i,j$. In words, from $A$ to $B$ we have only horizontal connections, while from $B$ to $A$ we have all possible connections.

\begin{lemma} \label{planar-lower-bound-edge-kernel-lem} There exist planar graphs $G$ with $n$ vertices having edge-kernels of size $\Omega (n^{\frac{1}{3}})$.
\end{lemma}

\begin{proof}
The proof is by construction. Consider the grid graph $G=G_{2n^{2},2n}$, i.e.~$G$ is formed as the part of the infinite grid having width $2n^{2}$ vertices and height $2n$ vertices. Note that $G$ is a planar graph.
For simplicity of the presentation, we consider the grid graph $G$ on the Euclidean plane, where the vertices have integer coordinates and the lower left vertex has coordinates $(1,1)$. Furthermore, denote by $v_{i,j}$ the vertex of $G$ that is placed on the point $(i,j)$, where $1\leq i\leq 2n^{2}$ and $1\leq j\leq 2n$. For every $i\in \{1,2,\ldots ,n\}$ denote $p_{i}=v_{(2i-1)n,n}$ and $q_{i}=v_{(2i-1)n+1,n}$. We define the edge subset $S=\{e_{i}=p_{i}q_{i}:1\leq i\leq n\}$. We now prove that $S$ is an edge-kernel of $G$. Let $\pi =(e_{i_{1}},e_{i_{2}},\ldots ,e_{i_{n}})$ be an arbitrary permutation of the edges of $S=\{e_{1},e_{2},\ldots ,e_{n}\}$. We construct a simple path $P$ in $G$ that visits all the edges of $S$ in the order of the permutation $\pi$. That is, we construct a path $P=(p_{i_{1}},q_{i_{1}},P_{1},p_{i_{2}},q_{i_{2}},P_{2},\ldots ,p_{i_{n-1}},q_{i_{n-1}},P_{n-1},p_{i_{n}},q_{i_{n}})$. In order to do so, it suffices to define iteratively the simple paths $P_{1},P_{2},\ldots ,P_{n-1}$ such that no two of these paths share a common vertex.

The path $P_{1}$ starts at $q_{i_{1}}$ and continues upwards on the column of $q_{i_{1}}$ in the grid, until it reaches the top ($2n$th) row of the grid. Then, if $i_{2}>i_{1}$ (resp.~if $i_{2}<i_{1}$), the path $P_{1}$ continues on this top row to the right (resp.~to the left), until it reaches the column of vertex $p_{i_{2}}$ of the grid. Finally, it continues downwards on this column until it reaches $p_{i_{2}}$, where $P_{1}$ ends.

Consider now an index $t\in \{2,3,\ldots ,n-1\}$. In a similar manner as $P_{1}$, the path $P_{t}$ starts at vertex $q_{i_{t}}$. Then it continues upwards on the column of $q_{i_{t}}$ in the grid as much as possible, such that it does not reach any vertex of a path $P_{k}$, where $k\leq t-1$. Note that, if no path $P_{k}$, $k\leq t-1$, passes through any vertex of the column of $q_{i_{t}}$ in the grid, then the path $P_{t}$ reaches the top ($2n$th) row of the grid in this column. On the other hand, note that, since $q_{i_{t}}=v_{(2i_{t}-1)n+1,n}$ and $t\leq n-1$, at most the upper $t-1\leq n-2$ vertices of the column of $q_{i_{t}}$ in the grid can possibly belong to a path $P_{k}$, where $k\leq t-1$. Thus the path $P_{t}$ can always continue upwards from $q_{i_{t}}$ by at least one edge. Let $a_{t}$ be the uppermost vertex of $P_{t}$ on the column of $q_{i_{t}}$ of the grid (cf.~Figure~\ref{planar-lower-bound-fig} for $t=5$ and $e_{i_{5}}=e_{1}$).

Assume that $i_{t+1}>i_{t}$, i.e.~vertex $p_{i_{t+1}}$ lies to the right of vertex $q_{i_{t}}$ on the $n$th row of the grid. Then, the path $P_{t}$ continues from vertex $a_{t}$ to the right, as follows. If $P_{t}$ can reach the column of $p_{i_{t+1}}$ without passing through a vertex of a path $P_{k}$, $k\leq t-1$, then it does so; in this case the path $P_{t}$ continues downwards until it reaches vertex $p_{i_{t+1}}$, where it ends (cf.~Figure~\ref{planar-lower-bound-fig} for $t=3$ and $e_{i_{3}}=e_{3}$). Suppose now that $P_{t}$ cannot reach the column of $p_{i_{t+1}}$ without passing through a vertex of a path $P_{k}$, $k\leq t-1$ (cf.~Figure~\ref{planar-lower-bound-fig} for $t=5$ and $e_{i_{5}}=e_{1}$). Then, $P_{t}$ continues on the row of vertex $a_{t}$ to the right as much as possible (say, until vertex $b_{t}$), such that it does not reach any vertex of a path $P_{k}$, $k\leq t-1$.
In this case the path $P_{t}$ continues from vertex $b_{t}$ downwards as much as possible until it reaches a vertex $c_{t}$ whose right neighbor does not belong to any path $P_{k}$, $k\leq t-1$ (cf.~Figure~\ref{planar-lower-bound-fig} for $t=5$ and $e_{i_{5}}=e_{1}$). Furthermore, $P_{t}$ continues from vertex $c_{t}$ to the right as much as possible until it reaches a vertex $d_{t}$ whose upper neighbor does not belong to any path $P_{k}$, $k\leq t-1$. Then, $P_{t}$ continues from $d_{t}$ in a similar way until it reaches the column of vertex $p_{i_{t+1}}$ (cf.~Figure~\ref{planar-lower-bound-fig} for $t=5$, $e_{i_{5}}=e_{1}$, and $e_{i_{6}}=e_{6}$), and then it continues downwards until it reaches $p_{i_{t+1}}$, where $P_{t}$ ends. Note that, by the definition of the edge set $S$, there exist at least $2n$ columns of the grid between any two edges of the set $S$. Furthermore, there exist $n-1$ rows of the grid below every edge of $S$. Thus, since there exist at most $t-1\leq n-2$ previous paths $P_{k}$, $k\leq t-1$, there is always enough space for the path $P_{t}$ in the grid to (a) reach vertex $d_{t}$ and (b) continue from $d_{t}$ until it reaches vertex $p_{i_{t+1}}$, where $P_{t}$ ends.

Assume now that $i_{t+1}<i_{t}$, i.e.~vertex $p_{i_{t+1}}$ lies to the left of vertex $q_{i_{t}}$ on the $n$th row of the grid. In this case, when we start the path $P_{t}$ at vertex $q_{i_{t}}$, we first move one edge downwards and then two edges to the left (cf.~Figure~\ref{planar-lower-bound-fig} for $t=2$ and $e_{i_{2}}=e_{5}$, as well as for $t=4$ and $e_{i_{4}}=e_{4}$). After that point we continue constructing the path $P_{t}$ similarly to the case where $i_{t+1}>i_{t}$ (cf.~Figure~\ref{planar-lower-bound-fig}).

Therefore, we can construct in this way all the paths $P_{1},P_{2},\ldots ,P_{n-1}$, such that no two of these paths share a common vertex, and thus the path $P=(p_{i_{1}},q_{i_{1}},P_{1},p_{i_{2}},q_{i_{2}},P_{2},\ldots ,p_{i_{n-1}},q_{i_{n-1}},P_{n-1},p_{i_{n}},q_{i_{n}})$ is a simple path of $G$ that visits all the edges of $S$ in the order of the permutation $\pi$. An example of the construction of such a path $P$ is given in Figure~\ref{planar-lower-bound-fig}. In this example, $S=(e_{1},e_{2},\ldots ,e_{6})$ and $\pi =(e_{2},e_{5},e_{3},e_{4},e_{1},e_{6})$; that is, using the above notation, $i_{1}=2$, $i_{2}=5$, $i_{3}=3$, $i_{4}=4$, $i_{5}=1$, and $i_{6}=6$. In this figure we also depict, for $t=5$, the vertices $a_{t},b_{t},c_{t}$ that we defined in the above construction of the path $P_{t}$. Since such a path $P$ exists for every permutation $\pi$ of the edges of the set $S$, it follows by Definition~\ref{def:kernel} that $S$ is an edge-kernel of $G$, where $G$ is a planar graph. Finally, since $G=(V,E)$ has by construction $|V|=4n^{3}$ vertices and $|S|=n$, it follows that the size of the edge-kernel $S$ is $\Omega (|V|^{\frac{1}{3}})$. This completes the proof of the lemma. \qquad \end{proof}

\subsection{Computing the Cost} \label{cost-computation-subsec}

\subsubsection{Hardness of Approximation}

Consider a Boolean formula $\phi$ in conjunctive normal form with two literals in every clause ($2$-CNF). Let $\tau$ be a truth assignment of the variables of $\phi$ and $\alpha =(\ell _{1}\vee \ell _{2})$ be a clause of $\phi$.
Then $\alpha$ is \emph{XOR-satisfied} (or \emph{NAE-satisfied}) in $\tau$ if one of the literals $\{\ell _{1},\ell _{2}\}$ of the clause $\alpha$ is true in $\tau$ and the other one is false in $\tau$. The number of clauses of $\phi$ that are XOR-satisfied in $\tau$ is denoted by $|\tau(\phi )|$. The formula $\phi$ is \emph{XOR-satisfiable} (or \emph{NAE-satisfiable}) if there exists a truth assignment $\tau$ of $\phi$ such that every clause of $\phi$ is XOR-satisfied in $\tau$. The \emph{Max-XOR} problem (also known as the \emph{Max-NAE-2-SAT} problem) is the following maximization problem: given a $2$-CNF formula $\phi$, compute the greatest number of clauses of $\phi$ that can be simultaneously XOR-satisfied by a truth assignment $\tau$, i.e.~compute the greatest value of $|\tau(\phi )|$. The \emph{Max-XOR(}$k$\emph{)} problem is the special case of the Max-XOR problem where every variable of the input formula $\phi$ appears in at most~$k$ clauses of $\phi$. It is known that a special case of Max-XOR($3$), namely the \emph{monotone Max-XOR($3$)} problem, is APX-hard (i.e.~it does not admit a PTAS unless P=NP~\cite{KMSV99,CKS01}), as the next lemma states~\cite{AlimontiKann97}. In this special case of the problem, the input formula $\phi$ is monotone, i.e.~no variable appears negated in the formula. The monotone Max-XOR($3$) problem essentially encodes the \emph{Max-Cut} problem on 3-regular (i.e.~cubic) graphs, which is known to be APX-hard~\cite{AlimontiKann97}.

\begin{lemma}[\hspace{-0.001cm}\cite{AlimontiKann97}] \label{Max_XOR-3-hard-lem} The (monotone) Max-XOR($3$) problem is APX-hard.
\end{lemma}

We now provide a reduction from the Max-XOR$(3)$ problem to the problem of computing $\kappa(G,reach,d(G))$. Let $\phi$ be an instance formula of Max-XOR$(3)$ with $n$ variables $x_{1},x_{2},\ldots ,x_{n}$ and $m$ clauses. Since every variable $x_{i}$ appears in $\phi$ (either as $x_{i}$ or as $\overline{x_{i}}$) in at most $3$ clauses, it follows that $m\leq \frac{3}{2}n$. We will construct from $\phi$ a graph $G_{\phi}$ in which the length of any directed cycle is at most $2$. Then, as we prove in Theorem~\ref{cost-diameter-upper-lower-bound-thm}, $\kappa(G_{\phi },reach,d(G_{\phi }))\leq 39n-4m-2k$ if and only if there exists a truth assignment $\tau$ of $\phi$ with $|\tau(\phi )|\geq k$, i.e.~$\tau$ XOR-satisfies at least $k$ clauses of $\phi$. Since $\phi$ is an instance of Max-XOR$(3)$, we can replace every clause $(\overline{x_{i}}\vee \overline{x_{j}})$ of $\phi$ by the clause $(x_{i}\vee x_{j})$, since $(\overline{x_{i}}\vee \overline{x_{j}})=(x_{i}\vee x_{j})$ in XOR. Furthermore, whenever $(\overline{x_{i}}\vee x_{j})$ is a clause of $\phi$, where $i<j$, we can replace this clause by $(x_{i}\vee \overline{x_{j}})$, since $(\overline{x_{i}}\vee x_{j})=(x_{i}\vee \overline{x_{j}})$ in XOR. Thus, we can assume without loss of generality that every clause of $\phi$ is either of the form $(x_{i}\vee x_{j})$ or $(x_{i}\vee \overline{x_{j}})$, where $i<j$.

For every $i=1,2,\ldots ,n$ we construct the graph $G_{\phi ,i}$ of Figure~\ref{variable-gadget-fig}. Note that the diameter of $G_{\phi ,i}$ is $d(G_{\phi ,i})=9$ and the maximum length of a directed cycle in $G_{\phi ,i}$ is $2$. In this figure, we call the induced subgraph of $G_{\phi ,i}$ on the $13$ vertices $\{s^{x_{i}},u_{1}^{x_{i}},\ldots ,u_{6}^{x_{i}},v_{1}^{x_{i}},\ldots ,v_{6}^{x_{i}}\}$ the \emph{trunk} of $G_{\phi ,i}$.
Furthermore, for every $p\in \{1,2,3\}$, we call the induced subgraph of $G_{\phi ,i}$ on the $5$ vertices $\{u_{7,p}^{x_{i}},u_{8,p}^{x_{i}},v_{7,p}^{x_{i}},v_{8,p}^{x_{i}},t_{p}^{x_{i}}\}$ the $p$\emph{th branch} of $G_{\phi ,i}$. Finally, we call the edges $u_{6}^{x_{i}}u_{7,p}^{x_{i}}$ and $v_{6}^{x_{i}}v_{7,p}^{x_{i}}$ the \emph{transition edges} of the $p$th branch of $G_{\phi ,i}$. Furthermore, for every $i=1,2,\ldots ,n$, let $r_{i}\leq 3$ be the number of clauses in which variable $x_{i}$ appears in $\phi$. For every $1\leq p\leq r_{i}$, we assign the $p$th appearance of the variable $x_{i}$ (either as $x_{i}$ or as $\overline{x_{i}}$) in a clause of $\phi$ to the $p$th branch of~$G_{\phi ,i}$.

Consider now a clause $\alpha =(\ell _{i}\vee \ell _{j})$ of $\phi$, where $i<j$. Then, by our assumptions on $\phi$, it follows that $\ell _{i}=x_{i}$ and $\ell _{j}\in \{x_{j},\overline{x_{j}}\}$. Assume that the literal $\ell _{i}$ (resp.~$\ell _{j}$) of the clause $\alpha$ corresponds to the $p$th (resp.~to the $q$th) appearance of the variable $x_{i}$ (resp.~$x_{j}$) in $\phi$. Then we identify the vertices of the $p$th branch of $G_{\phi ,i}$ with the vertices of the $q$th branch of $G_{\phi ,j}$ as follows. If $\ell _{j}=x_{j}$ then we identify the vertices $u_{7,p}^{x_{i}},u_{8,p}^{x_{i}},v_{7,p}^{x_{i}},v_{8,p}^{x_{i}},t_{p}^{x_{i}}$ with the vertices $v_{7,q}^{x_{j}},v_{8,q}^{x_{j}},u_{7,q}^{x_{j}},u_{8,q}^{x_{j}},t_{q}^{x_{j}}$, respectively (cf.~Figure~\ref{clause-gadget-fig-1}). Otherwise, if $\ell _{j}=\overline{x_{j}}$ then we identify the vertices $u_{7,p}^{x_{i}},u_{8,p}^{x_{i}},v_{7,p}^{x_{i}},v_{8,p}^{x_{i}},t_{p}^{x_{i}}$ with the vertices $u_{7,q}^{x_{j}},u_{8,q}^{x_{j}},v_{7,q}^{x_{j}},v_{8,q}^{x_{j}},t_{q}^{x_{j}}$, respectively (cf.~Figure~\ref{clause-gadget-fig-2}). This completes the construction of the graph $G_{\phi}$. Note that, similarly to the graphs $G_{\phi ,i}$, $1\leq i\leq n$, the diameter of $G_{\phi}$ is $d(G_{\phi })=9$ and the maximum length of a directed cycle in $G_{\phi}$ is $2$. Furthermore, note that for each of the $m$ clauses of $\phi$, one branch of a gadget $G_{\phi ,i}$ coincides with one branch of a gadget $G_{\phi ,j}$, where $1\leq i<j\leq n$, while every $G_{\phi ,i}$ has three branches. Therefore $G_{\phi}$ has exactly $3n-2m$ branches which belong to only one gadget $G_{\phi ,i}$, and $m$ branches that belong to two gadgets $G_{\phi ,i},G_{\phi ,j}$.

\begin{theorem} \label{cost-diameter-upper-lower-bound-thm} There exists a truth assignment $\tau$ of $\phi$ with $|\tau(\phi )|\geq k$ if and only if $\kappa(G_{\phi },reach,d(G_{\phi }))\leq 39n-4m-2k$.
\end{theorem}

\begin{proof}
($\Rightarrow$) Assume that there is a truth assignment $\tau$ that XOR-satisfies $k$ clauses of $\phi$. We construct a labeling $\lambda$ of $G_{\phi}$ with cost $39n-4m-2k$ as follows. Let $i=1,2,\ldots ,n$. If $x_{i}=0$ in $\tau$, we assign labels to the edges of the trunk of $G_{\phi ,i}$ as in Figure~\ref{labeling-x-0-fig}. Otherwise, if $x_{i}=1$ in $\tau$, we assign labels to the edges of the trunk of $G_{\phi ,i}$ as in Figure~\ref{labeling-x-1-fig}. We now continue the labeling $\lambda$ as follows. Consider an arbitrary clause $\alpha =(\ell _{i}\vee \ell _{j})$ of $\phi$, where $i<j$. Recall that $\ell _{i}=x_{i}$ and $\ell _{j}\in \{x_{j},\overline{x_{j}}\}$.
Assume that the literal $\ell _{i}$ (resp.~$\ell _{j}$) of the clause $\alpha$ corresponds to the $p$th (resp.~to the $q$th) appearance of variable $x_{i}$ (resp.~$x_{j}$) in $\phi$. Then, by the construction of $G_{\phi}$, the $p$th branch of $G_{\phi ,i}$ coincides with the $q$th branch of $G_{\phi ,j}$.

Assume first that $\ell _{j}=x_{j}$ (cf.~Figure~\ref{clause-gadget-fig-1}). Then, by our construction, $u_{7,p}^{x_{i}}=v_{7,q}^{x_{j}}$, $u_{8,p}^{x_{i}}=v_{8,q}^{x_{j}}$, $v_{7,p}^{x_{i}}=u_{7,q}^{x_{j}}$, $v_{8,p}^{x_{i}}=u_{8,q}^{x_{j}}$, and $t_{p}^{x_{i}}=t_{q}^{x_{j}}$ (cf.~Figure~\ref{clause-gadget-fig-1}). Let $\alpha$ be XOR-satisfied in $\tau$, i.e.~$x_{i}=\overline{x_{j}}$. If $x_{i}=\overline{x_{j}}=0$ then we label the edges of the $p$th branch of $G_{\phi ,i}$ (equivalently, the edges of the $q$th branch of $G_{\phi ,j}$), the transition edges of the $p$th branch of $G_{\phi ,i}$, and the transition edges of the $q$th branch of $G_{\phi ,j}$, as illustrated in Figure~\ref{assignment-fig-1}. In the symmetric case where $x_{i}=\overline{x_{j}}=1$ we label these edges in the same way as in Figure~\ref{assignment-fig-1}, with the only difference that we exchange the roles of the $u$'s and the $v$'s. Let now $\alpha$ be XOR-unsatisfied in $\tau$, i.e.~$x_{i}=x_{j}$. If $x_{i}=x_{j}=0$ then we label the edges of the $p$th branch of $G_{\phi ,i}$ (equivalently, the edges of the $q$th branch of $G_{\phi ,j}$), the transition edges of the $p$th branch of $G_{\phi ,i}$, and the transition edges of the $q$th branch of $G_{\phi ,j}$, as illustrated in Figure~\ref{assignment-fig-2}. In the symmetric case where $x_{i}=x_{j}=1$ we label these edges in the same way as in Figure~\ref{assignment-fig-2}, with the only difference that we exchange the roles of the $u$'s and the $v$'s. For the case where $\ell _{j}=\overline{x_{j}}$ we label the edges of Figure~\ref{clause-gadget-fig-2} similarly to the case where $\ell _{j}=x_{j}$ (cf.~Figure~\ref{assignment-fig}).

Finally, consider any of the $3n-2m$ branches that belong to only one gadget $G_{\phi ,i}$, where $1\leq i\leq n$. Let this be the $p$th branch of $G_{\phi ,i}$. If $x_{i}=0$ then we label the edges of this branch and its transition edges as illustrated in Figure~\ref{assignment-fig-1} (by ignoring in this figure the vertices $u_{6}^{x_{j}},v_{6}^{x_{j}}$). In the symmetric case where $x_{i}=1$, we label these edges in the same way, with the only difference that we exchange the roles of the $u$'s and the $v$'s. This finalizes the labeling $\lambda$ of $G_{\phi}$. It is easy to check that $\lambda$ preserves all reachabilities of $G_{\phi}$ and that its greatest label is $d(G_{\phi })=9$.

Summarizing, for every $i\in \{1,2,\ldots ,n\}$, the edges of the trunk of $G_{\phi ,i}$ are labeled with $18$ labels (cf.~Figure~\ref{labeling-x-0-1-fig}), and thus $\lambda$ uses in total $18n$ labels for the trunks of all $G_{\phi ,i}$, $i\in \{1,2,\ldots ,n\}$. Furthermore, for every $i\in \{1,2,\ldots ,n\}$ and every $p\in \{1,2,3\}$, $\lambda$ uses $1$ label for the two transition edges of the $p$th branch of $G_{\phi ,i}$ (cf.~Figure~\ref{assignment-fig}), and thus $\lambda$ uses in total $3n$ labels for the transition edges of all $G_{\phi ,i}$, $i\in \{1,2,\ldots ,n\}$.
Moreover, for each of the $3n-2m$ branches that belong to only one gadget $G_{\phi ,i}$, where $1\leq i\leq n$, $\lambda$ uses $6$ labels for the edges of this branch of $G_{\phi ,i}$, and thus $\lambda$ uses in total $6(3n-2m)$ labels for all these $3n-2m$ branches. Finally, consider any of the remaining $m$ branches of $G_{\phi}$, each of which corresponds to a clause $\alpha$ of $\phi$ (i.e.~this branch belongs simultaneously to a gadget $G_{\phi ,i}$ and a gadget $G_{\phi ,j}$, where $1\leq i<j\leq n$). If $\alpha$ is XOR-satisfied in $\tau$, then $\lambda$ uses $6$ labels for the edges of this branch (cf.~for example Figure~\ref{assignment-fig-1}). Otherwise, if $\alpha$ is XOR-unsatisfied in $\tau$, then $\lambda$ uses $8$ labels for the edges of this branch (cf.~for example Figure~\ref{assignment-fig-2}). Therefore, since $\tau$ XOR-satisfies by assumption $k$ of the $m$ clauses of $\phi$, it follows that $\lambda$ uses in total $18n+3n+6(3n-2m)+6k+8(m-k)=39n-4m-2k$ labels, and thus $\kappa(G_{\phi },reach,d(G_{\phi }))\leq 39n-4m-2k$.

($\Leftarrow$) Assume that $\kappa(G_{\phi },reach,d(G_{\phi }))\leq 39n-4m-2k$ and let $\lambda$ be a labeling of $G_{\phi}$ that maintains all reachabilities and has minimum cost (i.e.~the smallest number of labels); that is, $|\lambda|\leq 39n-4m-2k$. Let $i\in \{1,2,\ldots ,n\}$. Note that, for every $z\in \{1,2,\ldots ,6\}$, the vertices $u_{z}^{x_{i}}$ and $v_{z}^{x_{i}}$ reach each other in $G_{\phi}$ with a unique path (of length one). Therefore, each of the directed edges $\left\langle u_{z}^{x_{i}}v_{z}^{x_{i}}\right\rangle$ and $\left\langle v_{z}^{x_{i}}u_{z}^{x_{i}}\right\rangle$, where $z\in \{1,2,\ldots ,6\}$, receives at least one label in every labeling, and thus also in $\lambda$. Similarly, it follows that each of the directed edges $\left\langle u_{z,p}^{x_{i}}v_{z,p}^{x_{i}}\right\rangle$ and $\left\langle v_{z,p}^{x_{i}}u_{z,p}^{x_{i}}\right\rangle$, where $z\in \{7,8\}$ and $p\in \{1,2,3\}$, receives at least one label in every labeling, and thus also in $\lambda$.

For every $i\in \{1,2,\ldots ,n\}$, define now the two paths $P_{i}=(s^{x_{i}},u_{1}^{x_{i}},u_{2}^{x_{i}},\ldots ,u_{6}^{x_{i}})$ and $Q_{i}=(s^{x_{i}},v_{1}^{x_{i}},v_{2}^{x_{i}},\ldots ,v_{6}^{x_{i}})$. Furthermore, for every $p\in \{1,2,3\}$, define the paths $P(i,p)=(P_{i},u_{7,p}^{x_{i}},u_{8,p}^{x_{i}},t_{p}^{x_{i}})$ and $Q(i,p)=(Q_{i},v_{7,p}^{x_{i}},v_{8,p}^{x_{i}},t_{p}^{x_{i}})$. Note that $P(i,p)$ and $Q(i,p)$ are the only two paths in $G_{\phi}$ from $s^{x_{i}}$ to $t_{p}^{x_{i}}$ of length $d(G_{\phi })=9$. Thus, since $\lambda$ preserves all reachabilities of $G_{\phi}$ with maximum label $9$, it follows that for every $i\in \{1,2,\ldots ,n\}$ and every $p\in \{1,2,3\}$, the edges of $P(i,p)$ or the edges of $Q(i,p)$ are labeled with the labels $1,2,\ldots ,9$ in $\lambda$. Assume that there exists an $i\in \{1,2,\ldots ,n\}$ such that all edges of the path $P_{i}$ and all edges of the path $Q_{i}$ are labeled in $\lambda$.
Note that, if there exists no value $p\in\{1,2,3\}$ such that all edges of $P(i,p)$ (resp.~of $Q(i,p)$) are labeled, then we can remove all labels from $P(i,p)$ (resp.~from $Q(i,p)$) and construct another labeling $\lambda'$ that still maintains all reachabilities of $G_{\phi}$ but has fewer labels than $\lambda$, which is a contradiction to the minimality assumption on $\lambda$. Therefore, there must exist values $p,q\in\{1,2,3\}$ such that all edges of $P(i,p)$ and all edges of $Q(i,q)$ are labeled in~$\lambda$. Then, in both cases ${p=q}$ and ${p\neq q}$, we modify $\lambda$ into a labeling $\lambda^{\prime}$ as follows. We remove the labels from the seven edges of the path $(Q_{i},v_{7,q}^{x_{i}})$, and we add labels (if they do not already have labels) to the six edges $\langle u_{6}^{x_{i}}u_{7,z}^{x_{i}}\rangle ,\langle u_{7,z}^{x_{i}}u_{8,z}^{x_{i}}\rangle ,\langle u_{8,z}^{x_{i}}t_{z}^{x_{i}}\rangle$, where $z\in \{1,2,3\}\setminus \{p\}$. Note that, in this new labeling $\lambda^{\prime}$, we can always preserve all reachabilities of the vertices by choosing the appropriate labels for the edges $\langle u_{1}^{x_{i}}v_{1}^{x_{i}}\rangle ,\langle v_{1}^{x_{i}}u_{1}^{x_{i}}\rangle ,\langle u_{2}^{x_{i}}v_{2}^{x_{i}}\rangle ,\langle v_{2}^{x_{i}}u_{2}^{x_{i}}\rangle ,\ldots ,\langle u_{6}^{x_{i}}v_{6}^{x_{i}}\rangle ,\langle v_{6}^{x_{i}}u_{6}^{x_{i}}\rangle ,\langle u_{7,z}^{x_{i}}v_{7,z}^{x_{i}}\rangle ,\langle v_{7,z}^{x_{i}}u_{7,z}^{x_{i}}\rangle ,\langle u_{8,z}^{x_{i}}v_{8,z}^{x_{i}}\rangle ,\langle v_{8,z}^{x_{i}}u_{8,z}^{x_{i}}\rangle$, where $z\in \{1,2,3\}$; cf.~for example the labelings of Figures~\ref{labeling-x-0-1-fig} and~\ref{assignment-fig}. However, by construction, the new labeling $\lambda^{\prime}$ uses a smaller number of labels than the initial labeling $\lambda$, which is a contradiction. Therefore, we may assume without loss of generality that for every $i\in \{1,2,\ldots ,n\}$ it is not the case that all edges of both paths $P_{i}$ and $Q_{i}$ are labeled in $\lambda$, i.e.~either all edges of $P_{i}$ or all edges of $Q_{i}$ (but not both) are labeled in $\lambda$.

We now construct a truth assignment $\tau$ for the formula $\phi$ as follows. For every $i\in \{1,2,\ldots ,n\}$, if all edges of the path $P_{i}$ are labeled in $\lambda$, then we define $x_{i}=0$ in $\tau$. Otherwise, if all edges of the path $Q_{i}$ are labeled in $\lambda$, then we define $x_{i}=1$ in $\tau$. We will prove that $|\tau(\phi )|\geq k$, i.e.~that $\tau$ XOR-satisfies at least $k$ clauses of the formula $\phi$. Let $i\in \{1,2,\ldots ,n\}$. Recall that each of the directed edges $\left\langle u_{z}^{x_{i}}v_{z}^{x_{i}}\right\rangle$ and $\left\langle v_{z}^{x_{i}}u_{z}^{x_{i}}\right\rangle$, where $z\in \{1,2,\ldots ,6\}$, receives at least one label in $\lambda$.
Therefore, since all six edges of $P_{i}$ or all six edges of $Q_{i}$ are labeled in $\lambda$, it follows that $\lambda$ uses at least $18$ labels for the trunk of $G_{\phi ,i}$. Thus, $\lambda$ uses in total at least $18n$ labels for the trunks of all $G_{\phi ,i}$, $i\in \{1,2,\ldots ,n\}$. Let now $p\in \{1,2,3\}$. Then, since $P(i,p)=(P_{i},u_{7,p}^{x_{i}},u_{8,p}^{x_{i}},t_{p}^{x_{i}})$ and $Q(i,p)=(Q_{i},v_{7,p}^{x_{i}},v_{8,p}^{x_{i}},t_{p}^{x_{i}})$ are the only two paths in $G_{\phi}$ from $s^{x_{i}}$ to $t_{p}^{x_{i}}$ of length $d(G_{\phi })=9$, it follows that $\lambda$ uses at least one label for the pair of transition edges $\{\langle u_{6}^{x_{i}}u_{7,p}^{x_{i}}\rangle ,\langle v_{6}^{x_{i}}v_{7,p}^{x_{i}}\rangle \}$ of the $p$th branch of $G_{\phi ,i}$. Thus, $\lambda$ uses in total at least $3n$ labels for the transition edges of all $G_{\phi ,i}$, $i\in \{1,2,\ldots ,n\}$.

Consider an arbitrary branch of $G_{\phi}$, e.g.~the $p$th branch of $G_{\phi ,i}$, where $i\in \{1,2,\ldots ,n\}$ and $p\in \{1,2,3\}$. Since $P(i,p)$ and $Q(i,p)$ are the only two paths in $G_{\phi}$ from $s^{x_{i}}$ to $t_{p}^{x_{i}}$ of length $d(G_{\phi })=9$, it follows that $\lambda$ assigns at least one label to each of the edges $\{\langle u_{7,p}^{x_{i}}u_{8,p}^{x_{i}}\rangle ,\langle u_{8,p}^{x_{i}}t_{p}^{x_{i}}\rangle \}$, or at least one label to each of the edges $\{\langle v_{7,p}^{x_{i}}v_{8,p}^{x_{i}}\rangle ,\langle v_{8,p}^{x_{i}}t_{p}^{x_{i}}\rangle \}$. Furthermore, recall that each of the edges $\left\langle u_{z,p}^{x_{i}}v_{z,p}^{x_{i}}\right\rangle$ and $\left\langle v_{z,p}^{x_{i}}u_{z,p}^{x_{i}}\right\rangle$, where $z\in \{7,8\}$, receives at least one label in $\lambda$. Therefore, $\lambda$ uses at least $6$ labels for an arbitrary branch of $G_{\phi}$.

Consider now one of the clauses $\alpha =(\ell _{i}\vee \ell _{j})$ of $\phi$ that is not XOR-satisfied in the truth assignment $\tau$ defined above. Note that there exist exactly $m-|\tau(\phi )|$ such clauses in $\phi$. Let $i<j$, and thus $\ell _{i}=x_{i}$ and $\ell _{j}\in \{x_{j},\overline{x_{j}}\}$. Assume that the literal $\ell _{i}$ (resp.~$\ell _{j}$) of the clause $\alpha$ corresponds to the $p$th (resp.~to the $q$th) appearance of variable $x_{i}$ (resp.~$x_{j}$) in $\phi$. Then, by the construction of $G_{\phi}$, the $p$th branch of $G_{\phi ,i}$ coincides with the $q$th branch of $G_{\phi ,j}$. Suppose first that $\ell _{j}=x_{j}$. Then $x_{i}=x_{j}$, since $\alpha$ is not XOR-satisfied in $\tau$. By the construction of the truth assignment $\tau$ from the labeling $\lambda$, it follows that either all edges of $P(i,p)$ and all edges of $P(j,q)$ are labeled in $\lambda$ (in the case where $x_{i}=x_{j}=0$), or all edges of $Q(i,p)$ and all edges of $Q(j,q)$ are labeled in $\lambda$ (in the case where $x_{i}=x_{j}=1$). Since $\ell _{j}=x_{j}$, note by the construction of $G_{\phi}$ that the last two edges of $P(i,p)$ are different from the last two edges of $P(j,q)$, while the last two edges of $Q(i,p)$ are different from the last two edges of $Q(j,q)$.
Therefore, since each of the edges $\langle u_{z,p}^{x_{i}}v_{z,p}^{x_{i}}\rangle$ and $\langle v_{z,p}^{x_{i}}u_{z,p}^{x_{i}}\rangle$, where $z\in \{7,8\}$, receives at least one label in $\lambda$, it follows that $\lambda$ uses for the $p$th branch of $G_{\phi ,i}$ (equivalently for the $q$th branch of $G_{\phi ,j}$) at least $8$ labels, if $\ell _{j}=x_{j}$. Suppose now that $\ell _{j}=\overline{x_{j}}$. Then $x_{i}=\overline{x_{j}}$, since $\alpha$ is not XOR-satisfied in $\tau$. Similarly to the case where $x_{i}=x_{j}$, it follows that either all edges of $P(i,p)$ and all edges of $Q(j,q)$ are labeled in $\lambda$ (in the case where $x_{i}=\overline{x_{j}}=0$), or all edges of $Q(i,p)$ and all edges of $P(j,q)$ are labeled in $\lambda$ (in the case where $x_{i}=\overline{x_{j}}=1$). Since $\ell _{j}=\overline{x_{j}}$, note by the construction of $G_{\phi}$ that the last two edges of $P(i,p)$ are different from the last two edges of $Q(j,q)$, while the last two edges of $Q(i,p)$ are different from the last two edges of $P(j,q)$. Therefore, since each of the edges $\langle u_{z,p}^{x_{i}}v_{z,p}^{x_{i}}\rangle$ and $\langle v_{z,p}^{x_{i}}u_{z,p}^{x_{i}}\rangle$, where $z\in \{7,8\}$, receives at least one label in $\lambda$, it follows that $\lambda$ uses for the $p$th branch of $G_{\phi ,i}$ (equivalently for the $q$th branch of $G_{\phi ,j}$) at least $8$ labels, if $\ell _{j}=\overline{x_{j}}$. Summarizing, $\lambda$ uses in total at least $18n$ labels for the edges of the trunks of all $G_{\phi ,i}$, at least $3n$ labels for the transition edges of all $G_{\phi ,i}$, at least $6$ labels for an arbitrary branch of $G_{\phi}$, and at least $8$ labels for each of the branches of $G_{\phi}$ that corresponds to a clause $\alpha$ of $\phi$ that is not XOR-satisfied in $\tau$. Therefore, since $G_{\phi}$ has in total $3n-m$ branches and $\phi$ has $m-|\tau(\phi )|$ XOR-unsatisfied clauses in $\tau$, it follows that $\lambda$ uses at least $18n+3n+6(3n-m-(m-|\tau(\phi )|))+8(m-|\tau(\phi )|)=39n-4m-2|\tau(\phi )|$ labels. However, $|\lambda |\leq 39n-4m-2k$ by assumption. Therefore $39n-4m-2|\tau(\phi )|\leq |\lambda |\leq 39n-4m-2k$, and thus $\tau$ XOR-satisfies $|\tau(\phi )|\geq k$ clauses in $\phi$. This completes the proof of the theorem. \qquad \end{proof} Using Theorem~\ref{cost-diameter-upper-lower-bound-thm}, we are now ready to prove the main theorem of this section. \begin{theorem} [Hardness of Approximating the Temporal Cost] \label{cost-diameter-Apx-hard-thm}The problem of computing $\kappa(G,reach,d(G))$ is APX-hard, even when the maximum length of a directed cycle in $G$ is $2$. \end{theorem} \begin{proof} Denote now by OPT$_{\text{Max-XOR}(3)}(\phi )$ the greatest number of clauses that can be simultaneously XOR-satisfied by a truth assignment of $\phi$. Then Theorem~\ref{cost-diameter-upper-lower-bound-thm} implies that \begin{equation} \kappa(G_{\phi },reach,d(G_{\phi }))\leq 39n-4m-2\cdot \text{OPT}_{\text{Max-XOR}(3)}(\phi ) \label{c-opt-upper-bound-eq} \end{equation} Note that a random assignment XOR-satisfies each clause of $\phi$ with probability $\frac{1}{2}$, and thus we can easily compute (even deterministically) an assignment $\tau$ that XOR-satisfies at least $\frac{m}{2}$ clauses of $\phi$.
Therefore OPT$_{\text{Max-XOR}(3)}(\phi )\geq \frac{m}{2}$, and thus, since every variable $x_{i}$ appears in at least one clause of $\phi$, it follows that \begin{equation} \frac{n}{2}\leq m\leq 2\cdot\text{OPT}_{\text{Max-XOR}(3)}(\phi ) \label{m-upper-bound-eq} \end{equation} Assume that there is a PTAS for computing $\kappa(G,reach,d(G))$. Then, for every $\varepsilon >0$ we can compute in polynomial time a labeling $\lambda$ for the graph $G_{\phi }$, such that \begin{equation} |\lambda |\leq (1+\varepsilon )\cdot \kappa(G_{\phi },reach,d(G_{\phi })) \label{PTAS-bound-eq} \end{equation} Given such a labeling $\lambda$ we can compute by the sufficiency part ($\Leftarrow$) of the proof of Theorem~\ref{cost-diameter-upper-lower-bound-thm} a truth assignment $\tau$ of $\phi$ such that $39n-4m-2|\tau(\phi )|\leq |\lambda |$, i.e. \begin{equation} 2|\tau(\phi )|\geq 39n-4m-|\lambda | \label{tau-lower-bound-eq} \end{equation} Therefore it follows by (\ref{c-opt-upper-bound-eq}), (\ref{m-upper-bound-eq}), (\ref{PTAS-bound-eq}), and (\ref{tau-lower-bound-eq}) that \begin{eqnarray*} 2|\tau(\phi )| &\geq &39n-4m-(1+\varepsilon )\cdot \kappa(G_{\phi},reach,d(G_{\phi })) \\ &\geq &39n-4m-(1+\varepsilon )\cdot \left( 39n-4m-2\cdot \text{OPT}_{\text{Max-XOR}(3)}(\phi )\right) \\ &=&\varepsilon \left( 4m-39n\right) +2(1+\varepsilon )\cdot \text{OPT}_{\text{Max-XOR}(3)}(\phi ) \\ &\geq &\varepsilon \left( 4m-78m\right) +2(1+\varepsilon )\cdot \text{OPT}_{\text{Max-XOR}(3)}(\phi ) \\ &\geq &-74\varepsilon m+2(1+\varepsilon )\cdot \text{OPT}_{\text{Max-XOR}(3)}(\phi ) \\ &\geq &-74\varepsilon \cdot 2\text{OPT}_{\text{Max-XOR}(3)}(\phi)+2(1+\varepsilon )\cdot \text{OPT}_{\text{Max-XOR}(3)}(\phi ) \\ &=&2(1-73\varepsilon )\cdot \text{OPT}_{\text{Max-XOR}(3)}(\phi ) \end{eqnarray*} and thus \begin{equation} |\tau(\phi )|\geq (1-73\varepsilon )\cdot \text{OPT}_{\text{Max-XOR}(3)}(\phi ) \label{PTAS-contradiction-eq} \end{equation} That is, assuming a PTAS for computing $\kappa(G,reach,d(G))$, we obtain a PTAS for the Max-XOR$(3)$ problem, which is a contradiction by Lemma~\ref{Max_XOR-3-hard-lem}. Therefore computing $\kappa(G,reach,d(G))$ is APX-hard. Finally, since the graph $G_{\phi }$ that we constructed from the formula $\phi$ has maximum length of a directed cycle at most $2$, it follows that computing $\kappa(G,reach,d(G))$ is APX-hard even if the given graph $G$ has maximum length of a directed cycle at most $2$. \qquad \end{proof} \subsubsection{Approximating the Cost} In this section, we provide an approximation algorithm for computing $\kappa(G,reach,d(G))$, which complements the hardness result of Theorem \ref{cost-diameter-Apx-hard-thm}. Given a digraph $G$, define, for every $u\in V$, $u$'s reachability number $r(u)= |\{v\in V: v\text{ is reachable from } u\}|$ and $r(G)=\sum_{u\in V} r(u)$; that is, $r(G)$ is the total number of reachabilities in $G$. \begin{theorem} There is an $\frac{r(G)}{n-1}$-factor approximation algorithm for computing $\kappa(G,reach,d(G))$ on any weakly connected digraph $G$. \end{theorem} \begin{proof} First of all, note that $\text{OPT}\geq n-1$, where $\text{OPT}$ is the cost of the optimal solution.
The reason is that if a labeling labels fewer than $n-1$ edges, then the subgraph of $G$ induced by the labeled edges is disconnected (not even weakly connected) and thus clearly fails to preserve some reachabilities. To see this, take any two components $C_1$ and $C_2$. $G$ either has an edge from $C_1$ to $C_2$ or from $C_2$ to $C_1$ (or both). The two cases are symmetric, so just consider the first one. Clearly some node from $C_1$ can reach some node from $C_2$, but this reachability has not been preserved by the labeling. Now consider the following labeling algorithm. \begin{enumerate} \item For all $u\in V$, compute a BFS out-tree $T_u$ rooted at $u$. \item For all $T_u$, give to each edge at distance $i$ from the root the label $i$. \item Output this labeling $\lambda$. \end{enumerate} Clearly, the maximum label used by $\lambda$ is $d(G)$: indeed, if an edge $e$ was assigned some label $l>d(G)$, then this would imply that on some BFS out-tree $e$ appeared at distance $>d(G)$, which is a contradiction. Moreover, $\lambda$ preserves all reachabilities, as for every $u$ the corresponding tree rooted at $u$ reaches all nodes that are reachable from $u$ and the described labeling clearly preserves the corresponding paths. Finally, we have that the cost paid by our algorithm is $\text{ALG}=|\lambda|=r(G)$. To see this, notice that for all $u$ we use (i.e.~we label) precisely $r(u)$ edges in $T_u$; thus, in total, we use $\sum_{u\in V} r(u)=r(G)$ edges by definition of $r(G)$. We conclude that \begin{equation*} \frac{\text{ALG}}{\text{OPT}} \leq \frac{r(G)}{n-1}\Rightarrow \text{ALG}\leq \frac{r(G)}{n-1}\text{OPT} \end{equation*} \qquad \end{proof} \section{Conclusions and Further Research} \label{sec:conc} There are many open problems related to the findings of the present work. We have considered several graph families in which the temporality of preserving all paths is very small (e.g.~2 for rings) and others in which it is very close to the worst possible (i.e.~$\Omega(n)$ for cliques and $\Omega(n^{1/3})$ for planar graphs). There are still many interesting graph families to be investigated, like regular or bounded-degree graphs. Moreover, though it turned out to be a generic lower-bounding technique related to the existence of a large edge-kernel in the underlying graph $G$, we still do not know whether there are other structural properties of the underlying graph that could cause a growth of the temporality (i.e.~the absence of a large edge-kernel does not necessarily imply small temporality). Similar things hold also for the $reach$ property. There are also many other natural connectivity properties subject to which optimization is to be performed that we haven't yet considered, like preserving a shortest path from $u$ to $v$ whenever $v$ is reachable from $u$ in $G$, or even departing from paths and requiring the preservation of more complex subgraphs (for some appropriate definition of ``preservation''). Another interesting direction which we didn't consider in this work is to set the optimization criterion to be the age of $\lambda$, e.g.~w.r.t.\ the $\text{\emph{all paths}}$ or the $reach$ connectivity properties. In this case, computing $\alpha(G,\text{all paths})$ is NP-hard, which can be proved by reduction from HAMPATH.
On the positive side, it is easy to come up with a 2-factor approximation algorithm for $\alpha(G, reach, 2)$, where we have restricted the maximum number of labels of an edge (i.e.~the temporality) to be at most 2. Additionally, there seems to be great room for approximation algorithms (or even randomized algorithms) for all combinations of optimization parameters and connectivity constraints that we have defined so far, or even polynomial-time algorithms for specific graph families. Finally, it would be valuable to consider other models of temporal graphs and in particular models with succinct representations, that is models in which the labels of every edge are provided by some short function associated to that edge (in contrast to a complete description of all labels). Such examples are several probabilistic models and several periodic models which are worth considering.\\ \noindent \textbf{Acknowledgments.} We would like to thank the anonymous reviewers of this article and its preliminary versions. Their thorough reading and comments have helped us to improve our work substantially. {\small } \end{document}
\begin{document} \title{Euclidean Windows} \author{ Stefania Cavallar \and Franz Lemmermeyer } \maketitle \abstract In this paper we study number fields which are Euclidean with respect to a function different from the absolute value of the norm. We also show that the Euclidean minimum with respect to weighted norms may be irrational and not isolated. \endabstract \section*{Introduction} Let $R$ be an integral domain. A function $f:R \longrightarrow {\mathbb R}_{\ge 0}$ is called a Euclidean function on $R$ if it satisfies the following conditions with $\kappa = 1$: \begin{enumerate} \item[i)] $f(R) \cap [0,c]$ is finite for every $c \ge 0$; \item[ii)] $f(r) = 0$ if and only if $r = 0$; \item[iii)] for all $a, b \in R$ with $b \ne 0$ there exists a $q \in R$ such that $f(a-bq) < \kappa \cdot f(b)$. \end{enumerate} If $f:R \longrightarrow {\mathbb R}_{\ge 0}$ is a function satisfying i) and ii), then the infimum of all $\kappa \in {\mathbb R}$ such that iii) holds is called the Euclidean minimum of $R$ with respect to $f$ and will be denoted by $M(R,f)$; thus for all $a, b \in R \setminus \{0\}$ and every $\varepsilon > 0$ there is a $q \in R$ such that $f(a-bq) < M(R,f)\cdot f(b) + \varepsilon$. If $f$ is a multiplicative function, then we can replace iii) by the equivalent condition that for all $\xi \in K$ ($K$ being the quotient field of $R$) there is a $q \in R$ such that $f(\xi - q) < \kappa$. The infimum of all $\kappa \in {\mathbb R}$ such that this condition holds for a fixed $\xi$ is denoted by $M(\xi,f)$; clearly $M(R,f)$ is the supremum of the $M(\xi,f)$. If $R = \mathcal O_K$ is the ring of integers in a number field $K$, then the absolute value of the norm satisfies i) and ii), and $M(K) := M(R,|N|)$ coincides conjecturally with the inhomogeneous minimum of the norm form of $\mathcal O_K$ (this conjecture is known to hold for number fields with unit rank at most $1$). Let $C_1$ be the set of representatives modulo $\mathcal O_K$ of all $\xi = \frac{a}{b} \in K$ with $M(\xi) = M(K)$ (here $M(\xi) := M(\xi,|N|)$); then we say that $M(K)$ is isolated if there is a $\kappa_2 < \kappa$ such that $M(\xi) \le \kappa_2$ for all $\xi \in K$ that are not represented by some point in $C_1$. Replacing $K$ in these definitions by $\overline{K} = {\mathbb R}^n$ (this is the topological closure of the image of $K$ und the standard embedding $K \longrightarrow {\mathbb R}^n$; for totally real fields we have $\overline{K} = K \otimes_{\mathbb Q} {\mathbb R}$), the Euclidean minimum becomes the inhomogeneous minimum of the norm form of $K$; we clearly have $M_j(\overline{K}) \ge M_j(K)$ whenever these minima are defined, and it is conjectured that $M_1(\overline{K}) = M_1(K)$ is rational. The aim of this paper is to explain how the Euclidean minimum of $\mathcal O_K$ with respect to ``weighted norms'' can be computed in some cases; we will show that the Euclidean minimum for certain weighted norms in ${\mathbb Q}(\sqrt{69}\,)$ is irrational and not isolated, thereby showing that these conjectured properties for minima with respect to the usual norm do not carry over to weighted norms. \section{Weighted norms} Let $K$ be a number field, $\mathcal O_K$ its ring of integers, and $\mathfrak p$ a prime ideal in $\mathcal O_K$. 
Then, for any real number $c > 0$, $$\phi: \mathfrak q \longmapsto \begin{cases} N\mathfrak q, & \text{ if } \mathfrak q \ne \mathfrak p \\ c, & \text{ if } \mathfrak q = \mathfrak p \end{cases} $$ defines a map from the set of prime ideals $\mathfrak q$ of $\mathcal O_K$ into the positive real numbers, which can be uniquely extended to a multiplicative map $\phi:I_K \longrightarrow {\mathbb R}_{>0}$ on the group $I_K$ of fractional ideals. Putting $f(\alpha) = \phi(\alpha\mathcal O_K)$ for any $\alpha \in K^\times$ and $f(0) = 0$, we get a function $f = f_{\mathfrak p,c}: K \longrightarrow {\mathbb R}_{\ge 0}$ which H.~W.~Lenstra \cite{Len} called a {\em weighted norm}. Our aim is to study examples of number fields which are Euclidean with respect to some weighted norm. Lenstra \cite{Len} showed that ${\mathbb Q}(\zeta_3)$ and ${\mathbb Q}(\zeta_4)$ are such fields, but the first examples that are not norm-Euclidean were given by D.~Clark \cite{Cl1,Cl2}. A formal condition for $f_{\mathfrak p,c}$ to be a Euclidean function is the finiteness of the sets $\{f_{\mathfrak p,c}(\alpha) < \langlembda : \alpha \in \mathcal O_K \}$ for all $\langlembda \in {\mathbb R}$. This property is easily seen to be equivalent to $c > 1$. For weighted norms $f = f_{\mathfrak p,c}$ on $K$, we define the {\em Euclidean window} of $\mathfrak p$ by $$ w(\mathfrak p) = \{ c \in {\mathbb R}: f_{\mathfrak p,c} \text{ is a Euclidean function on } \mathcal O_K \}.$$ \begin{prop} The Euclidean window is a (possibly empty) interval contained in $(1,\infty)$. \end{prop} \begin{proof} Assume that $w(\mathfrak p)$ is not empty, and let $r,t \in w(\mathfrak p)$ with $r < t$. Then it is sufficient to show that $f_{\mathfrak p,s}$ is a Euclidean function on $\mathcal O_K$ for every $r \le s \le t$. Now $\mathcal O_K$ is Euclidean with respect to e.g. $f_{\mathfrak p,r}$, so $\mathcal O_K$ is a principal ideal domain, hence every $\xi \in K$ has the form $\xi = \alpha/\beta$ with $(\alpha,\beta) = 1$. Moreover, there exist $\gamma_r, \gamma_t \in \mathcal O_K$ such that $$ f_{\mathfrak p,r}(\alpha - \beta\gamma_r) < f_{\mathfrak p,r}(\beta), \qquad f_{\mathfrak p,t}(\alpha - \beta\gamma_t) < f_{\mathfrak p,t}(\beta).$$ If $\mathfrak p \nmid \beta$, then $ f_{\mathfrak p,s}(\alpha - \beta\gamma_t) \le f_{\mathfrak p,t}(\alpha - \beta\gamma_t) < f_{\mathfrak p,t}(\beta) = f_{\mathfrak p,s}(\beta)$; if $\mathfrak p \mid \beta$, on the other hand, then $\mathfrak p \nmid \alpha$, hence $\mathfrak p \nmid (\alpha - \beta\gamma_r)$, and $f_{\mathfrak p,s}(\alpha - \beta\gamma_r) = f_{\mathfrak p,r}(\alpha - \beta\gamma_r) < f_{\mathfrak p,r}(\beta) \le f_{\mathfrak p,s}(\beta).$ Thus $f_{\mathfrak p,s}$ is indeed a Euclidean function on $\mathcal O_K$. \end{proof} In this paper, we investigate Euclidean windows for various algorithms in some quadratic and cubic number fields; we will give examples of empty, finite and infinite Euclidean windows, and we show that the first minima with respect to weighted norms need not be rational. \section{Weighted norms in ${\mathbb Z}$} The Euclidean window for primes in ${\mathbb Z}$ can easily be determined: \begin{prop} The Euclidean minimum $M(f_{p,c})$ of a weighted norm in ${\mathbb Z}$ is given by $$ M(f_{p,c}) = \begin{cases} \infty & \text{if } c < p \\ \frac12 & \text{if } c = p \\ 1 & \text{if } c > p \end{cases}.$$ Moreover, $w(p) = [p,\infty)$. 
\end{prop} \begin{proof} We first show that $M(f_{p,c}) = \infty$ if $c < p$ (this implies that $w(\mathfrak p) \widetilde{\subset}eteq [p,\infty)$). To this end, put $b = p^n$ and $$ a = \begin{cases} \frac12(p^n-1) & \text{if } p \ne 2,\\ 2^{n-1}-1 & \text{if } p = 2. \end{cases}$$ Then $p \nmid (a-bq)$, hence $f_{p,c}(a-bq) = |a-bq|$ for all $q \in {\mathbb Z}$. If the minimum $\kappa = M(f_{p,c})$ were finite, there would exist a $q \in {\mathbb Z}$ such that $f_{p,c}(a-bq) < \kappa f_{p,c}(b) = \kappa c^n$. But clearly $|a| \le |a-bq| = f_{p,c}(a-bq)$, hence we get $|a|c^{-n} < \kappa$ for all $n \in {\mathbb N}$: but since $c < p$, the expression on the left hand side tends to $\infty$ with $n$. Since it is well known that $M(f_{p,p}) = \frac12$, we next show that $M(f_{p,c}) = 1$ if $c > p$. To this end, choose $\alpha, \beta \in {\mathbb N}$ not divisible by $p$ such that $p < \frac{\alpha}{\beta} < c$. If we put $a = p^n\beta^n$ and $b = \alpha^n + p^n\beta^n$, then we get \begin{eqnarray*} f_{p,c}\Big(\frac{a}{b}\Big) & = & \frac{c^n \beta^n}{\alpha^n + p^n\beta^n} = \frac{c^n}{(\alpha/\beta)^n + p^n} > \frac{c^n}{c^n+ p^n}, \\ f_{p,c}\Big(\frac{a}{b} - 1 \Big) & = & \frac{\alpha^n}{\alpha^n + p^n\beta^n}, \end{eqnarray*} and both expressions tend to $1$ as $n$ goes to $\infty$. Note also that $f_{p,c}(\frac{a}b-q) \ge |\frac{a}b-q| > 1$ for all $q \in {\mathbb N} \setminus \{0, 1\}$, since the denominator of $\frac{a}b-q$ is prime to $p$ and since $c > p$. Thus $M(f_{p,c}) \ge 1$ if $c > p$; but we can easily show that $M(f_{p,c}) \le 1$ by proving that $f_{p,c}$ is a Euclidean function for all $p \ge c$. In fact, suppose that $a, b \in {\mathbb Z}\setminus \{0\}$ are given, and that they are relatively prime. If $p \mid b$, then $p \nmid (a-bq)$ for all $q \in {\mathbb Z}$, hence $f_{p,c}(a-bq) = |a-bq|$, and we can certainly find $q \in {\mathbb Z}$ such that $|a-bq| < |b|$. But $|b| \le f_{p,c}(b)$ since $c \ge p$. Now consider the case $p \nmid b$; then we choose $q \in {\mathbb Z}$ such that $|a-bq|, |a-b(q+1)| \le b$. But $r = a-bq$ and $r' = a-b(q+1)$ cannot both be divisible by $p$; if $p \nmid r$, then $f_{p,c}(r) = |r| < |b| = f_{p,c}(b)$, and if $p \nmid r'$, then $f_{p,c}(r') < f_{p,c}(b)$. \end{proof} \section{Weighted norms in ${\mathbb Q}(\sqrt{14}\,)$} Since it is well known that an imaginary quadratic number field is Euclidean if and only if it is norm-Euclidean, only the case of real quadratic fields is interesting. We will deal with only two examples here: one is ${\mathbb Q}(\sqrt{14}\,)$, which has been studied often in this respect (cf. Bedocchi \cite{Bed}, Nagata \cite{Na1,Na2} and Cardon \cite{Car}), and the other is ${\mathbb Q}(\sqrt{69}\,)$, which was shown to be Euclidean with respect to a weighted norm by Clark \cite{Cl1} (see also Niklasch \cite{Nik} and Hainke \cite{Hai}). Consider the quadratic number field $K = {\mathbb Q}(\sqrt{14}\,)$. It is well known that $M_1(K) = \frac54$ and $M_2(K) = \frac{31}{32}$ (cf. \cite{Lem}); moreover $M_1$ is attained exactly at the points $\xi \equiv \frac12(1+\sqrt{14}\,) \bmod \mathcal O_K$. Now we claim \begin{prop} For $K = {\mathbb Q}(\sqrt{14}\,)$ and $\mathfrak p = (2,\sqrt{14}\,)$ we have $w(\mathfrak p) \widetilde{\subset}eteq (\sqrt5, \sqrt7)$. \end{prop} \begin{proof} Put $\alpha = 1+\sqrt{14}$, $\beta = 2$. Then $|N(\alpha - \beta\gamma)|$ is an odd integer $\ge 5$ for all $\gamma \in \mathcal O_K$. 
Thus $f_{\mathfrak p,c}(\alpha - \beta\gamma) = |N(\alpha - \beta\gamma)| \ge 5$, and if $f_{\mathfrak p,c}$ is a Euclidean function, we must have $5 < f_{\mathfrak p,c}(\beta) = c^2$. This shows that $c > \sqrt{5}$. In order to show that $c < \sqrt{7}$ we look at the ideal $\mathfrak q = (7,\sqrt{14}\,) = (7+2\sqrt{14}\,)$ of norm $7$. If $f_{\mathfrak p,c}$ is Euclidean, then every residue class modulo $\mathfrak q$ must contain an element $\alpha$ such that $f_{\mathfrak p,c}(\alpha) < f_{\mathfrak p,c}(\mathfrak q) = 7$. Since the unit group generates the subgroup $\{-1, +1\}$ of $(\mathcal O_K/\mathfrak q)^\times$ (and $f_{\mathfrak p,c}(\pm 1) = 1$), and since $\pm 3+\sqrt{14} \equiv \pm 3 \bmod \mathfrak q$ (where $f_{\mathfrak p,c}(\pm 3+\sqrt{14}) = |N(\pm 3+\sqrt{14})| = 5$), we must find elements in the residue classes $\pm 2 \bmod \mathfrak q$. The only possible candidates are powers of $4+\sqrt{14}$, because the only ideals of odd norm $< 7$ are $(0)$, $(1)$, and $(3 \pm \sqrt{14}\,)$, none of which yields elements $\equiv \pm 2 \bmod \mathfrak q$. Moreover, $\pm 4+\sqrt{14} \equiv \pm 3 \bmod \mathfrak q$, and we see that if there exist elements $\alpha \equiv 2 \bmod \mathfrak q$ with $f_{\mathfrak p,c}(\alpha) < 7$, then $\alpha = 2$ is one of them. But $f_{\mathfrak p,c}(2) = c^2$, and we find $c < \sqrt{7}$. \end{proof} We remark that it is not known whether $w(\mathfrak p)$ is empty or not. If we look at prime ideals other than $(2,\sqrt{14}\,)$, the situation is quite different: \begin{prop} Let $K = {\mathbb Q}(\sqrt{14}\,)$, and let $\mathfrak p$ be a prime ideal in $\mathcal O_K$ of norm $N\mathfrak p \equiv \pm 1 \bmod 8$. Then $w(\mathfrak p) = \varnothing$. \end{prop} \begin{proof} Assume that $f_{\mathfrak p,c}$ is a Euclidean function. Then there exists an $\alpha = x+y\sqrt{14} \equiv 1 + \sqrt{14} \bmod 2$ such that $f_{\mathfrak p,c}(\alpha) < f_{\mathfrak p,c}(2) = 4$. Since $\alpha$ cannot be a unit, this is only possible if $\alpha$ is divisible by $\mathfrak p$. If $\alpha$ is divisible by some other prime ideal $\mathfrak q$, then $f_{\mathfrak p,c}(\mathfrak q) = N\mathfrak q \ge 5$, and we conclude $f_{\mathfrak p,c}(\mathfrak p) < 1$: contradiction. Thus $(\alpha) = \mathfrak p^m$ for some $m \ge 1$. But $\mathfrak p = (a+b\sqrt{14}\,)$ since $K$ has class number $1$, and $b$ must be even since $\pm p = a^2 - 14b^2 \equiv \pm 1 \bmod 8$: thus $a+b\sqrt{14} \not\equiv 1 + \sqrt{14} \bmod 2$, and again we have a contradiction. 
\end{proof} \section{The Euclidean Algorithm in ${\mathbb Q}(\sqrt{69}\,)$} Next we study the field ${\mathbb Q}(\sqrt{69}\,)$; we will prove the following result that corrects a claim\footnote{namely that $M_2(K) < M_2(\overline{K})$, and that $M_2(\overline{K})$ is isolated.} announced without proof in \cite{Lem}: \begin{thm}\label{T69} In $K = {\mathbb Q}(\sqrt{69}\,)$, we have $$ \begin{array}{ll} M_1 = \frac{25}{23}, & C_1 = \big\{\pm \frac4{23}\sqrt{69}\big\}, \\ M_2 = \frac1{46}\left(165-15\sqrt{69}\,\right), & C_2 = \big\{(\pm P_r, \pm P_r')\big\}, r \ge 0 \end{array} $$ where $$ P_r = \frac12 \varepsilon^{-r} + \left(\frac4{23} + \frac1{2\sqrt{69}}\varepsilon^{-r}\right)\sqrt{69}, \quad P_r' = \frac12 \varepsilon^{-r} - \left(\frac4{23} + \frac1{2\sqrt{69}}\varepsilon^{-r}\right)\sqrt{69}.$$ Here $M_j$ denotes the $j$-th inhomogeneous minimum of the norm form of $\mathcal O_K$, $C_j$ is a set of representatives modulo $\mathcal O_K$ of the points where $M_j$ is attained, and $\varepsilon = \frac12(25+3\sqrt{69}\,)$ is the fundamental unit of $K$. The second minimum $M_2(K) = M_2(\overline{K})$ is not isolated. \end{thm} The proof of Theorem \ref{T69} is based on methods developed by Barnes and Swinnerton-Dyer \cite{BSD}. In the following, we will regard $K$ as a subset of ${\mathbb R}^2$ via the embedding $x+y\sqrt{69} \longrightarrow (x,y)$. Conversely, any point $P = (x,y) \in {\mathbb R}^2= \overline{K}$ corresponds to a pair $\xi_P = x + y\sqrt{69}$, $\xi_P' = x-y\sqrt{69}$. These elements are not necessarily in $K$; nevertheless we call $\xi_P' = x-y\sqrt{69}$ the conjugate of $\xi_P$. Note that e.g. $\xi_P = \sqrt{69}$ alone does not determine $P$, since both $P = (0,1)$ and $P = (\sqrt{69},0)$ correspond to such a $\xi_P$. The ``$\overline{K}$-valuations'' $|\,\cdot\,|_1$ and $|\,\cdot\,|_2$ are defined by $|(x,y)|_1 = | x + y\sqrt{69}|$ and $|(x,y)|_2 = | x - y\sqrt{69}|$, with a positive square root of $69$. Using the technique described in \cite{CL}, it is easy to cover the whole fundamental domain of the lattice $\mathcal O_K$ with a bound of $k = 0.875$ except for $\pm S_0 \cup \pm S_1 \cup \pm S_2 \cup \pm T$, where $$ \begin{array}{rcrcl} S_0 & = & [-0.00085, \phantom{-}0.00085] & \times & [0.1739, 0.1742] \\ S_1 & = & [\phantom{-}0.01917, \phantom{-}0.02005] & \times & [0.1763, 0.1765] \\ S_2 & = & [-0.02005, -0.01917] & \times & [0.1763, 0.1765] \\ T & = & [\phantom{-}0.4999\phantom{0}, \phantom{-}0.5001\phantom{0}] & \times & [0.2341, 0.2342]. \\ \end{array} $$ Transforming these exceptional sets by multiplication with the units $\varepsilon$ and $\overline{\varepsilon} = \frac12(25-3\sqrt{69}\,)$ we find e.g. $$ \varepsilon S_0 \subset 18 + 2\sqrt{69} + [-0.012, 0.041] \times [0.172, 0.179],$$ that is, $\varepsilon S_0 - (18+2\sqrt{69}\,)$ is contained in covered regions or $S_0 \cup S_1$, which we will denote by $\varepsilon S_0 - (18+2\sqrt{69}\,) \subset S_0 \cup S_1$.
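For orientation, we record how this inclusion is checked; the following is an elementary interval estimate and is included only as an illustration of the covering technique. Multiplication by $\varepsilon = \frac12(25+3\sqrt{69}\,)$ acts on coordinates by
$$ \varepsilon \cdot (x,y) = \Big( \tfrac{25}{2}\,x + \tfrac{207}{2}\,y, \ \tfrac{3}{2}\,x + \tfrac{25}{2}\,y \Big), $$
so for $(x,y) \in S_0 = [-0.00085, 0.00085] \times [0.1739, 0.1742]$ the first coordinate lies in $\tfrac{25}{2}\,[-0.00085, 0.00085] + \tfrac{207}{2}\,[0.1739, 0.1742] \subset 18 + [-0.012, 0.041]$ and the second coordinate lies in $\tfrac{3}{2}\,[-0.00085, 0.00085] + \tfrac{25}{2}\,[0.1739, 0.1742] \subset 2 + [0.172, 0.179]$, which is exactly the displayed inclusion.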
Similar calculations show that $$ \begin{array}{rclrcl} \varepsilon S_0 - (18+2\sqrt{69}\,) & \subset & S_0 \cup S_1, & \overline{\varepsilon} S_0 + (18 - 2\sqrt{69}\,) & \subset & S_0 \cup S_2, \\ \varepsilon S_1 - (18+2\sqrt{69}\,) & \subset & T, & \overline{\varepsilon} S_1 + (18 - 2\sqrt{69}\,) & \subset & S_0 \cup S_2, \\ \varepsilon S_2 - (18+2\sqrt{69}\,) & \subset & S_0 \cup S_1, & \overline{\varepsilon} S_2 + (19 - 2\sqrt{69}\,) & \subset & T, \\ \varepsilon T - \frac12(61 + 7 \sqrt{69}\,) & \subset & S_2, & \overline{\varepsilon} \,T + (18 - 2\sqrt{69}\,) & \subset & S_1. \end{array}$$ \noindent{\bf Remark.} The inclusions on the right hand side can be computed from those on the left: for example, all exceptional points in $S_2$ must come from $T$, so the exceptional points in $\varepsilon^{-1}S_2$ must be congruent modulo $\mathcal O_K$ to points in $T$, and since $\frac12(61 + 7\sqrt{69}\,) \varepsilon^{-1} = 19-2\sqrt{69}$, we conclude that $\overline{\varepsilon} S_2 + (19 - 2\sqrt{69}\,) \subset T$. We will need the following result (this is Prop. 2 of \cite{CL}): \begin{prop}\label{PEP} Let $K$ be a number field and $\varepsilon$ a non-torsion unit of $E_K$. Suppose that $S \subseteq \widetilde{F}$ has the following property: \begin{quotation} There exists a unique $\theta \in \mathcal O_K$ such that, for all $\xi \in S$, the element $\varepsilon\xi - \theta$ lies in a $k$-covered region of $\widetilde{F}$ or again in $S$. \end{quotation} Then every $k$-exceptional point $\xi_0 \in S$ satisfies $|\xi_0-\frac{\theta}{\varepsilon-1}|_j = 0$ for every $\overline{K}$-valuation $|\cdot|_j$ such that $|\varepsilon|_j > 1$. \end{prop} We also need a method to compute Euclidean minima of given points. Recall that the orbit of $\xi \in \overline{K}$ is the set $\operatorname{Orb}(\xi) = \{\varepsilon\xi: \varepsilon \in E_K\}$, where $E_K$ is the unit group of $\mathcal O_K$. Note that all the elements in an orbit have the same minimum. \begin{prop}\label{Pbd} Let $m \in {\mathbb N}$ be squarefree, $K = {\mathbb Q}(\sqrt{m}\,)$ a real quadratic number field, $\varepsilon > 1$ a unit in $\mathcal O_K$, and $\xi \in \overline{K}$. If $M(K,\xi) < k$ for some real $k$, then there exists an element $\eta = r+s\sqrt{m} \in K$ with the following properties: \begin{enumerate} \item[i)] $\eta \equiv \xi_j \bmod \mathcal O_K$ for some $\xi_j \in \operatorname{Orb}(\xi)$; \item[ii)] $|N\eta | < k$; \item[iii)] $|r| < \mu$, $|s| < \frac{\mu}{\sqrt{m}}$, where $\mu = \frac{\sqrt{k}\,}2\Big(\sqrt{\varepsilon} + \frac1{\sqrt{\varepsilon}}\Big)$. \end{enumerate} \end{prop} \begin{proof} Assume that $M(K,\xi) < k$; then there is an $\alpha \in \mathcal O_K$ such that $|N(\xi-\alpha)| < k$. Choose $j \in {\mathbb Z}$ such that $\sqrt{k/\varepsilon} \le |(\xi-\alpha)\varepsilon^j| < \sqrt{k\varepsilon}$ and put $\eta = (\xi-\alpha)\varepsilon^j$. Then \begin{enumerate} \item[i)] $\eta = (\xi-\alpha)\varepsilon^j \equiv \xi\varepsilon^j \bmod \mathcal O_K$, and clearly $\xi\varepsilon^j \in \operatorname{Orb}(\xi)$; \item[ii)] $|N\eta | = |N(\xi-\alpha)| < k$; \item[iii)] Write $\eta = r+s\sqrt{m}$ and $\eta' = r-s\sqrt{m}$. Then $|\eta| < \sqrt{k\varepsilon}$ and $|\eta'| = |\eta\eta'|/|\eta| < k/|\eta| \le \sqrt{k\varepsilon}$. Thus $2|r| = |\eta + \eta'| \le |\eta| + |\eta'|$ and $2|s|\sqrt{m} = |\eta - \eta'| \le |\eta| + |\eta'|$.
Using the lemma below, this yields the desired bounds. \end{enumerate} This concludes the proof. \end{proof} \begin{lem} If $x, y$ are positive real numbers such that $x<a$, $y<a$ and $xy<b$, then $x+y < a+\frac{b}a$. \end{lem} \begin{proof} $0 < (a-x)(a-y) = a^2 - a(x+y) + xy < a^2 - a(x+y) + b$. \end{proof} \noindent Now we are ready to determine a certain class of exceptional points inside $S_0$: \begin{claim} If $P$ is an exceptional point in $S_0$ that stays inside $S_0$ under repeated applications of the maps \begin{eqnarray} \label{Eal} \alpha: & \xi \longmapsto \varepsilon^{-1}\xi + 18 - 2\sqrt{69} \\ \label{Ebe} \beta: & \xi \longmapsto \varepsilon \xi - (18 + 2\sqrt{69}\,) \end{eqnarray} then $P = \frac{18+2\sqrt{69}\,}{\varepsilon-1} = (0,\frac{4}{23})$. Moreover, $M(P) = \frac{25}{23}$. \end{claim} This follows directly from Proposition \ref{PEP}; the Euclidean minimum $M(P) = \frac{25}{23}$ is easily computed using Proposition \ref{Pbd}. Any exceptional point that does not stay inside $S_0$ must eventually come through $T$; it is therefore sufficient to consider exceptional points in $T$ from now on. Let $P_0 \in T$ be such an exceptional point and define the series of points $P_0$, $P_1$, $P_2$, \ldots recursively by $P_{j+1} = \alpha(P_j)$. Then $P_1 \in S_1$, and now there are two possibilities: \begin{enumerate} \item[(A)] $P_j \in S_0$ for all $j \ge 2$; \item[(B)] there is an $n \ge 2$ such that $P_n \in S_1$. \end{enumerate} Before we can go in the other direction we have to adjust $P_0$ somewhat. In fact, $P_0 \in T$ implies that $\beta(P_0) - \varepsilon \in S_2$; thus we can define a sequence of points $P_0-1$, $P_{-1}$, $P_{-2}$, $\ldots$ by $P_{-1} = \beta(P_0-1)$ and $P_{-j-1} = \beta(P_{-j})$ for $j \ge 1$. Again, there are two possibilities: \begin{enumerate} \item[(C)] $P_{-j} \in S_0$ for all $j \ge 2$; \item[(D)] there is an $n \ge 2$ such that $P_{-n} \in S_1$. \end{enumerate} \begin{claim} If $P_0 \in T$ is an exceptional point satisfying conditions (A) and (C), then $P_0 = (\frac12, \frac4{23} + \frac1{2\sqrt{69}}) \approx (0.5, 0.234105)$. \end{claim} Note that this point is not contained in $K$. Of course we knew this before: every point in $K$ has a finite orbit, whereas $P_0$ does not. For a proof, we apply Proposition \ref{PEP} to the set $S = \{P_0, P_1, P_2, \ldots\}$; this shows that any $\xi = P_j$ lies on the line $|\xi + \frac{18-2\sqrt{69}}{\overline{\varepsilon} - 1}|_2 = 0$ (the $\overline{K}$-valuation $|\cdot|_2$ chosen so that $|\overline{\varepsilon}|_2 > 1$), that is, $\xi' = -\frac4{23}\sqrt{69}$. Applying the same proposition to $S = \{P_0-1, P_{-1}, P_{-2}, \ldots\}$ gives $|\xi - \frac{18+2\sqrt{69}}{\varepsilon - 1}|_1 = 0$, with $\frac{18+2\sqrt{69}}{\varepsilon - 1} = \frac4{23}\sqrt{69}$, hence such $P_0 = (x,y)$ satisfy $x+y\sqrt{69} = 1+ \frac4{23}\sqrt{69}$. Thus any point $\xi = P_0$ giving rise to a doubly infinite sequence $(P_{j})_{j \in {\mathbb Z}}$ that stays inside $S_0$ modulo $\mathcal O_K$ for all $j \ne 0, \pm 1$ satisfies $\xi = 1+ \frac4{23}\sqrt{69}$ and $\xi' = -\frac4{23}\sqrt{69}$. Before we go on exploring the other possibilities, we study the orbit of $P_0$ and compute its Euclidean minimum.
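For later reference we note one explicit norm value; the computation is elementary and uses only the values of $\xi$ and $\xi'$ just obtained. With $\xi = 1 + \frac4{23}\sqrt{69}$ and $\xi' = -\frac4{23}\sqrt{69}$ we get
$$ \Big(\xi - \frac{5+\sqrt{69}}2\Big)\Big(\xi' - \frac{5-\sqrt{69}}2\Big) = \Big(-\frac32 - \frac{15}{46}\sqrt{69}\Big)\Big(-\frac52 + \frac{15}{46}\sqrt{69}\Big) = \frac{15\sqrt{69}-165}{46}, $$
hence $\big|N\big(P_0 - \frac{5+\sqrt{69}}2\big)\big| = \frac1{46}\big(165-15\sqrt{69}\,\big) \approx 0.87827$; this is the value that occurs as $M(K,P_0)$ in the claims below.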
\begin{claim} The points $P_r \equiv \varepsilon^{-r} P_0 \bmod \mathcal O_K$ in the orbit of $P_0$ coincide with the $P_r$ given in Theorem \ref{T69}. \end{claim} This is done by induction: the case $r=0$ is clear. For the induction step, notice that $\varepsilon^{-1}(x,y) = (\frac{25}2x - \frac{207}2y,\frac{25}2y - \frac32x)$; now \begin{eqnarray*} \varepsilon^{-1}P_r & = & \Big(\frac{25}4\varepsilon^{-r} - 18 - \frac{207}{4\sqrt{69}}\varepsilon^{-r}, \frac{50}{23} + \frac{25}{4\sqrt{69}}\varepsilon^{-r} - \frac34\varepsilon^{-r}\Big) \\ & = &(-18,2) + \Big(\Big(\frac{25}4 -\frac34\sqrt{69}\,\Big)\varepsilon^{-r}, \frac4{23} + \Big(-\frac34 + \frac{25}{4\sqrt{69}}\Big)\varepsilon^{-r}\Big) \\ & = & (-18,2) + \Big(\frac12\varepsilon^{-r-1}, \frac4{23} + \frac1{2\sqrt{69}}\varepsilon^{-r-1}\Big) \ \equiv \ P_{r+1} \bmod \mathcal O_K \end{eqnarray*} Next one computes that $\varepsilon P_0 = (\frac{61}2,\frac72) - P_1'$ and shows, again by induction, that $\varepsilon^r P_0 \equiv -P_r' \bmod \mathcal O_K$ for all $r \ge 0$. Thus the orbit of $P_0$ under the action of the unit group $E_K$ of $\mathcal O_K$ is represented modulo $\mathcal O_K$ by the points $\{\pm P_r, \pm P_r': r \ge 0\}$. \begin{claim} The points $P_r$ have Euclidean minimum $$M(K,P_r) = M(K,P_0) = \frac1{46}\big(165 - 15\sqrt{69}\,\big).$$ \end{claim} First we observe that the points $P_r$ have the same Euclidean minimum since they all belong to the same orbit. Now assume that $\varepsilon = t+u\sqrt{m}$ has positive norm. We want to apply Proposition \ref{Pbd} and find $\varepsilon^{-1} = t-u\sqrt{m}$, hence $(\sqrt{\varepsilon} + \frac{1}{\sqrt{\varepsilon}}\,)^2 = 2t+2$ and $\mu = \sqrt{k(t+1)/2}$. In the case $m = 69$, we have $t = \frac{25}2$, hence $\mu/\sqrt{m} = \sqrt{k}\sqrt{27/276} < \frac13\sqrt{k}$. The orbit of $P_0 = \frac12 + (\frac4{23} + \frac1{2\sqrt{69}})\sqrt{69}$ is $\{\pm P_r, \pm P_r': r \in {\mathbb N}_0\}$, so it is clearly sufficient to compute $M(K,P_r)$ for $r \ge 0$. We start with $P_0$ itself. The only $\eta \equiv P_0 \bmod \mathcal O_K$ satisfying the bounds of Proposition \ref{Pbd} have the form $P_0 + a$ for some $a \in {\mathbb Z}$ or $P_0-\frac{b+\sqrt{69}}{2}$ for some odd $b \in {\mathbb Z}$. The minimal absolute value of the norm of these elements is $|N(P_0 - \frac{5+\sqrt{69}}{2})| = \frac1{46}\big(165 - 15\sqrt{69}\,\big)$. Similarly, the minimal norm for the $\eta \equiv P_1 \bmod \mathcal O_K$ is attained at $P_1 + \frac{5-\sqrt{69}}{2}$ and again equals $\frac1{46}\big(165 - 15\sqrt{69}\,\big)$. Finally, consider the $\eta \equiv P_r \bmod \mathcal O_K$ for some $r \ge 2$. Then $P_r = x_r + y_r\sqrt{69}$ with $|x_r| \le 0.00081 =: \delta_0$ and $|y_r - \frac{4}{23}| < 0.0001 =: \delta_1$. The minimal absolute value of the norm of $P_r + a$ for some $a \in {\mathbb Z}$ is attained for $a = 1$, and is at least $|(1+\delta_0)^2 - 69(\frac4{23}-\delta_1)^2| \ge 1.07$; similarly, we find that $|N(P_r - \frac{b+\sqrt{69}}2)| \ge 1.07$. Thus we have seen that $\inf \,\{|N(P_r - \alpha)| : \alpha \in \mathcal O_K, r \in {\mathbb Z}\}$ is attained for $r=0$ and $\alpha = \frac{5+\sqrt{69}}{2}$, giving $M(K,P_0) = \frac1{46}\big(165 - 15\sqrt{69}\,\big)$ as claimed. Before we go on, let us recall what we know by now: $K = {\mathbb Q}(\sqrt{69}\,)$ has first minimum $M_1(K) = \frac{25}{23}$, and $M_1$ is isolated. Moreover, the orbit of every $k$-exceptional point for $k = 0.875$ not congruent to $\pm \frac4{23}\sqrt{69} \bmod \mathcal O_K$ has a representative in the exceptional set $T$.
Finally, if the orbit of such a point visits $T$ exactly once, then the point is $P_0 = \frac12 + (\frac4{23} + \frac1{2\sqrt{69}})\sqrt{69}$, and its minimum is $M(K,P_0) = \frac1{46}\big(165 - 15\sqrt{69}\,)$. \begin{claim} Any exceptional point $Q \ne P_0$ in $T$ has Euclidean minimum $M(K,Q) < M(K,P_0) = \frac1{46}\big(165 - 15\sqrt{69}\,)$, and $M_2(K) = M(P_0)$ is attained only at points in the orbit of $P_0$. \end{claim} In fact, let $Q_0 \ne P_0$ be an exceptional point in $T$ and consider the orbit $\{Q_r: r \in {\mathbb Z}\}$ of $Q_0$, where the $Q_j$ are defined by $Q_j \equiv \varepsilon^{-j}Q_0 \bmod \mathcal O_K$. Since $Q_0 \ne P_0$, we know that we are in one of the following situations: \begin{enumerate} \item (A) and (D) hold; \item (B) and (C) hold; \item (B) and (D) hold. \end{enumerate} In each case, there exists a point $Q \ne P_0$ in $T$ whose orbit moves into $T$ both to the right and to the left: \begin{equation}\langlebel{EE} \ldots T \longrightarrow S_2 \longrightarrow S_0 \cdots S_0 \longrightarrow S_1 \longrightarrow Q \longrightarrow S_2 \longrightarrow S_0 \cdots S_0 \longrightarrow S_1 \longrightarrow T \ldots \end{equation} Now we prove the following lemma: \begin{lem}\langlebel{Lm1} Suppose there is a $Q_0 \in T$ such that $Q_1 = \beta(Q_0-1) \in S_2$ and $Q_{m+1} = (x,y) = \beta^{m}(Q_1) \in S_1$ with $\beta$ as in (\ref{Ebe}). Then $x-y\sqrt{69} < -\frac{4}{23}\sqrt{69}$. \end{lem} \begin{proof} Write $Q_n = (x_n,y_n)$ and put $\xi'_n = x_n - y_n\sqrt{69}$. Then $\xi'_1 \approx -1.48 < -\frac{4}{23}\sqrt{69}$; now we use induction to show that $\xi'_n < -\frac{4}{23}\sqrt{69}$ for $1 \le n \le m$. In fact, if $Q_{n+1} = \beta(x_n,y_n)$, then $\xi_{n+1}' = (\varepsilon \xi_n - (18 + 2\sqrt{69}\,))' = \varepsilon' \xi'_n - 18 + 2\sqrt{69} < -\varepsilon' \frac{4}{23}\sqrt{69} - 18 + 2\sqrt{69} = -\frac{4}{23}\sqrt{69}$. \end{proof} A similar result holds for the other direction: \begin{lem}\langlebel{Lm2} Suppose there is a $Q_0 \in T$ such that $Q_{-1} = \alpha(Q_0) \in S_1$ and $Q_{-m-1} = (x,y) = \alpha^{m}(Q_{-1}) \in S_2$. Then $x+y\sqrt{69} > 1 + \frac{4}{23}\sqrt{69}$. \end{lem} \begin{proof} Similar. \end{proof} This shows that, in (\ref{EE}), we have $\xi > \xi_0 = 1 + \frac{4}{23}\sqrt{69}$ and $\xi' < \xi_0' = -\frac{4}{23}\sqrt{69}$ for the point $Q = (x,y)$ and $\xi = x + y\sqrt{69}$, $\xi' = x - y\sqrt{69}$. Put $\alpha = \xi_0 - \frac{5+\sqrt{69}}2$ and $\alpha' = \xi_0' - \frac{5-\sqrt{69}}2$. Then $-\alpha\alpha' = \frac1{46}\big(165 - 15\sqrt{69}\,)$, and, since $\alpha<0$ and $\alpha'>0$, $0 < (\xi - \frac{5+\sqrt{69}}2\,)(\xi' - \frac{5-\sqrt{69}}2\,) < -\alpha\alpha'$. Thus any such point has Euclidean minimum strictly smaller than $\frac1{46}\big(165 - 15\sqrt{69}\,)$. \begin{claim}\langlebel{Cl6} The second minimum $M_2(K)$ is not isolated. \end{claim} This is accomplished by constructing a series of rational points $Q_r \in K \setminus C_2$ such that $\lim_{r \to \infty} M(Q_r) = M_2(K)$. 
To this end, we look for a point $Q_r \in T-1$ that gets mapped (multiplication by $\varepsilon$ plus reduction modulo $\mathcal O_K$) to $S_2$, stays in $S_0$ exactly $r$ times, and then goes to $S_1$ and back to the point in $T$ congruent to $Q_r \bmod \mathcal O_K$, then $Q_r$ will satisfy the following equation:\footnote{For more details, see the analogous construction of the points $R_r$ in Section \ref{S5}.} $$ \varepsilon^{r+4}Q_r = \varepsilon^{r+4} + (\varepsilon^{r+3} + \ldots + \varepsilon + 1)(18+2\sqrt{69}\,) + Q_r.$$ This gives $$Q_r = 1 + \frac{4}{23}\sqrt{69} + \frac1{\varepsilon^{r+4}-1}.$$ Here's a short table with explicit coordinates for small values of $r$: $$ \begin{array}{|r|c|cl|}\hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} r & Q_r & \multicolumn{2}{c|}{M(Q_r)} \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} -1 & \frac{1}{2} + \frac{97}{414}\sqrt{69} & \frac{541}{621} &\approx 0.871175523 \\ \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 0 & \frac{1}{2} + \frac{70}{299}\sqrt{69} & \frac{13651}{15548} &\approx 0.877990738 \\ \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 1 & \frac{1}{2} + \frac{2423}{10350}\sqrt{69} & \frac{340876}{388125} &\approx 0.878263446 \\ \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 2 & \frac{1}{2} + \frac{6989}{29854}\sqrt{69} & \frac{8508391}{9687623} &\approx 0.878274371 \\ \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 3 & \frac{1}{2} + \frac{30239}{129168}\sqrt{69} & \frac{212369041}{241802496} &\approx 0.878274809 \\ \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 4 & \frac{1}{2} + \frac{174445}{745154}\sqrt{69} & \frac{5300717776}{6035374823} &\approx 0.878274826 \\ \hline \end{array}$$ We claim that $M(Q_r)$ tends to $M_2(K) = \frac1{46}\left(165-15\sqrt{69}\,\right) \approx 0.87827$ as $r \longrightarrow \infty$. Applying Proposition \ref{Pbd} shows that, for given $r \ge 0$, the Euclidean minimum of $Q_r$ is attained at $Q_r - \frac{5+\sqrt{69}}{2}$. Writing $n = r+4$ and $Q_r - \frac{5+\sqrt{69}}{2} = (\xi,\xi')$ we have \begin{eqnarray*} \xi & = & -\frac32 - \frac{15}{46} \sqrt{69} + \frac1{\varepsilon^n-1}, \\ \xi' & = & -\frac32 + \frac{15}{46} \sqrt{69} + \frac1{\varepsilon^{-n}-1} \ = \ -\frac52 + \frac{15}{46} \sqrt{69} - \frac1{\varepsilon^n-1}, \end{eqnarray*} and now we find $$\Big|N\Big(Q_r - \frac{5+\sqrt{69}}{2}\Big)\Big| = -\xi\xi' = \frac{165-15\sqrt{69}}{46} - \frac1{\varepsilon^n-1}\Big(-1+\frac{15}{23}\sqrt{69}\,\Big).$$ Since the ``error term'' $\frac1{\varepsilon^n-1}(-1+\frac{15}{23}\sqrt{69}\,)$ is positive and tends to $0$ as $n \longrightarrow \infty$, Claim \ref{Cl6} follows, and Theorem \ref{T69} is proved. \section{Weighted norms in ${\mathbb Q}(\sqrt{69}\,)$}\langlebel{S5} Now we study the weighted norm $f_{\mathfrak p,c}$ defined by $\mathfrak p = (23,\sqrt{69}\,)$. We claim \begin{thm}\langlebel{T69w} Let $R = \mathcal O_K$ be the ring of integers in $K = {\mathbb Q}(\sqrt{69}\,)$, and let $\mathfrak p = (23,\sqrt{69}\,)$ be the prime ideal above $23$. Then the Euclidean window of $f = f_{\mathfrak p,c}$ is $w(\mathfrak p) = (25,\infty)$; the Euclidean minimum is $$M_1(\mathcal O_K,f_{\mathfrak p,c}) = \max\Big\{\frac{25}{c}, \frac1{23}(-600+75\sqrt{69}\,)\Big\}$$ for all $c \in w(\mathfrak p)$, and $M_1$ is isolated exactly when $c \in [23, \frac{23}{15}(8+\sqrt{69}\,))$. 
\end{thm} Using the method described in \cite{Cl1}, with some modifications described in the next section, we can cover the fundamental domain of $\mathcal O_K$ with a bound of $k = 0.99$ except for a set surrounding $(0,0)$ that contains no exceptional point, and $\pm S_1 \cup \pm S_2 \cup \pm S_2'$, where $$\begin{array}{rcrcl} S_1 & = & [-0.0084, 0.0084] & \times & [0.1739, 0.175] \\ S_2 & = & [0.2086, 0.2087] & \times & [0.19903, 0.19904] \\ S_2' & = & [0.2086, 0.2087] & \times & [-0.19904, -0.19903] \\ \end{array}$$ Transforming by units, we find $$ \begin{array}{rclrcl} \varepsilon S_1 - (18+2\sqrt{69}\,) & \subset & S_1 \cup S_2, & \overline{\varepsilon} S_1 + (18-2\sqrt{69}\,) & \subset & S_1 \cup (-S_2'), \\ \varepsilon S_2 - (23+3\sqrt{69}\,) & \subset & \phantom{-}S_2', & \overline{\varepsilon} S_2 + (18-2\sqrt{69}\,) & \subset & S_1, \\ \varepsilon S_2'+ (18+2\sqrt{69}\,) & \subset & -S_1, & \overline{\varepsilon} S_2' - (23-3\sqrt{69}\,) & \subset & S_2. \end{array}$$ \begin{claim}\label{C51} If $P$ is an exceptional point that stays inside $S_1$ under repeated transformations by $\varepsilon$ and $\varepsilon^{-1}$, then $P = (0,\frac4{23})$ has Euclidean minimum $M(P,f_{\mathfrak p,c}) = \frac{25}{c}$. \end{claim} This is easy to see. Again, this enables us to reduce everything to exceptional points $P \in S_2$, and for the orbit $(P_j)$ of such $P$ (here $P_{j+1}$ is the image of $P_j$ under multiplication by $\varepsilon$ plus reduction modulo $\mathcal O_K$) there are the following possibilities: \begin{enumerate} \item[(a)] $P_j \in -S_1$ and $P_{-j} \in S_1$ for all $j \ge 2$; \item[(b)] there exist $m \ne n$ such that $P_m, P_n \in S_2$. \end{enumerate} \begin{claim} If $P_0 \in S_2$ is an exceptional point with property (a), then $$ P_0 = \textstyle (\frac{-115+15\sqrt{69}}{46} , \frac{-5+\sqrt{69}}{2\sqrt{69}}\,) \approx (0.20868169, 0.19903536). $$ \end{claim} For a proof, suppose that $P_0$ is a point in $S_2$ with property (a). Then $P_1 = - \varepsilon P_0 + (23 + 3 \sqrt{69}\,) \in -S_2'$, and $P_2 = \varepsilon P_1 - (18 + 2\sqrt{69}\,)$ is a point whose transforms by powers of $\varepsilon$ stay inside $S_1$. By Proposition \ref{PEP}, this implies that $|P_2 - \frac4{23}\sqrt{69}|_1 = 0$, and going back to $P_0$ we find that $|P_0 - (-5 + \frac{19}{23}\sqrt{69}\,)|_1 = 0$. Similarly, any exceptional point $\xi \in S_2$ whose transforms by powers of $\overline{\varepsilon}$ stay inside $S_1$ satisfies $|\xi + \frac4{23}\sqrt{69}|_2 = 0$. Thus any point satisfying (a) has $x$-coordinate $(\xi+\xi')/2 = \frac{-115+15\sqrt{69}}{46}$ and $y$-coordinate $(\xi-\xi')/(2\sqrt{69}\,) = \frac{-5+\sqrt{69}}{2\sqrt{69}}$ as claimed. Note that there is no obvious definition of a ``Euclidean minimum'' of $P_0$ with respect to weighted norms $f_{\mathfrak p,c}$, since $f_{\mathfrak p,c}$ is a continuous function on $K$ (with respect to the topology inherited from the embedding $K \longrightarrow {\mathbb R}^2$) if and only if $c = N\mathfrak p$, that is, if and only if $f_{\mathfrak p,c}$ is the absolute value of the usual norm. Thus we cannot extend $f_{\mathfrak p,c}$ by continuity to ${\mathbb R}^2$. On the other hand, we can put $$\overline{M}(P,f_{\mathfrak p,c}) = \sup \ \{M(P_r,f_{\mathfrak p,c}): P_r \in K, \ \lim P_r = P\},$$ that is, define the minimum at a point $P \in \overline{K}$ as the supremum of the minima at $P_r \in K$ over all sequences $(P_r)$ converging to $P$ in the topology mentioned above.
If $P \in K$, then clearly $\overline{M}(P,f_{\mathfrak p,c}) \ge M(P,f_{\mathfrak p,c})$, as the constant sequence $P_r = P$ shows. We don't know an example where this last inequality is strict. \begin{claim} We have $\overline{M}(P_0) \le \kappa_0 = \frac1{23}(-600+75\sqrt{69}\,)$ for all $c \ge 23$. Moreover, any $K$-rational exceptional point with property (b) has minimum strictly smaller than $\kappa_0$. In particular, we have $M_1(\mathcal O_K,f_{\mathfrak p,c}) = \frac{25}{c}$ for all $c \in [23, \frac{23}{15}(8+\sqrt{69}\,)]$, and $M_1$ is isolated for these values of $c$ unless possibly when $c = \frac{23}{15}(8+\sqrt{69}\,)$. \end{claim} We start by observing that $$ \begin{array}{rccl} |N(P_0 - 2)| & = & \textstyle \frac{94-10\sqrt{69}}{23} & \approx 0.47538092916, \quad \text{and} \\ \textstyle |N(P_0 - \frac12(5 + \sqrt{69}\,))| & = & \frac{-600+75\sqrt{69}}{23} & \approx 0.99986042255. \end{array}$$ Using the same technique as in Lemmas \ref{Lm1} and \ref{Lm2} we can show that the $K$-rational points in $S_2$ that satisfy condition (b) have minimum strictly smaller than $\kappa_0$; observe that the difference $\eta_1 - \eta_2$ for $\eta_1 = \frac12(5 + \sqrt{69}\,)$ and $\eta_2 = 2$ is not divisible by $\mathfrak p$, hence we have $f_{\mathfrak p,c}(P_0 - \eta_j) \le |N(P_0 - \eta_j)|$ for $j = 1$ or $j = 2$. Since any sequence of $K$-rational points $P_r$ converging to $P_0$ eventually stays inside $S_2$, this also proves that $M_1(\mathcal O_K,f_{\mathfrak p,c}) = \frac{25}{c}$ as long as $\frac{25}{c} \ge \kappa_0$; but the last inequality holds for all $c \le \frac{23}{15}(8+\sqrt{69}\,) \approx 25.0034899$. It also shows that the minimum is isolated for these values unless possibly when $c = \frac{23}{15}(8+\sqrt{69}\,)$. \begin{claim}\label{CMP} We have $\overline{M}(P_0,f_{\mathfrak p,c}) = \kappa_0 = \frac1{23}(-600+75\sqrt{69}\,)$ for all $c > 23$, and $\overline{M}(P_0,f_{\mathfrak p,c}) = M(P_0) = \frac{94- 10\sqrt{69}}{23}$ for $c = 23$. \end{claim} In order to show that $\kappa_0$ is a lower bound for $\overline{M}(P_0,f_{\mathfrak p,c})$ for $c > 23$, we construct a sequence of $K$-rational points converging to $P_0$ whose minima converge to $\kappa_0$. We do this in the following way: assume that $R_r \in S_2$ gets mapped to $S_2'$, stays in $-S_1$ exactly $r-2$ times and then gets mapped to the point $-R_r \in -S_2$. Then $\varepsilon R_r - (23+3\sqrt{69}\,) \in S_2'$, $\varepsilon^2 R_r - \varepsilon(23+3\sqrt{69}\,) + (18+2\sqrt{69}\,) \in -S_1$, \ldots, $\varepsilon^r R_r - \varepsilon^{r-1}(23+3\sqrt{69}\,) + (18+2\sqrt{69}\,) (1 + \varepsilon + \ldots + \varepsilon^{r-2}) \in -S_1$ and finally \begin{equation*} (\varepsilon^{r+1}+1)R_r = \varepsilon^r(23+3\sqrt{69}\,) - (18+2\sqrt{69}\,) \frac{\varepsilon^r-1}{\varepsilon-1} \end{equation*} Now we use $\frac{\varepsilon^r-1}{\varepsilon-1} = \frac{\varepsilon^{r+1}-1}{\varepsilon-1} - \varepsilon^r$ to find \begin{eqnarray*} (\varepsilon^{r+1}+1)R_r & = & \varepsilon^r(41+5\sqrt{69}\,) - (18+2\sqrt{69}\,) \frac{\varepsilon^{r+1}-1}{\varepsilon-1} \\ & = & \varepsilon^{r+1}(-5+\sqrt{69}\,) - (18+2\sqrt{69}\,) \frac{\varepsilon^{r+1}-1}{\varepsilon-1}.
\end{eqnarray*} Dividing through by $\varepsilon^{r+1}+1$ and simplifying we get $$ R_r = -5 + \frac{19}{23}\sqrt{69} + \frac1{\varepsilon^{r+1}+1}\Big(5 - \frac{15}{23}\sqrt{69}\,\Big).$$ The explicit coordinates for the first few points are given in the following table: $$ \begin{array}{|r|c|cl|cl|}\hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} r & R_r & \multicolumn{2}{c|}{|N(R_r-\frac12(5 + \sqrt{69}\,))|} & \multicolumn{2}{c|}{|N(R_r- 2)|} \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 1 & \frac{1}{5} + \frac{1}{5}\sqrt{69} & \frac{23}{25} &= 0.92 & \frac{12}{25} & = 0.48 \\ \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 2 & \frac{5}{24} + \frac{43}{216} \sqrt{69} & \frac{3875}{3888} &\approx 0.996656378 & \frac{1849}{3888} &\approx 0.475565843 \\ \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 3 & \frac{130}{623} + \frac{124}{623}\sqrt{69} & \frac{388025}{388129} &\approx 0.999732047 & \frac{184512}{388129} &\approx 0.475388337 \\ \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 4 & \frac{125}{599} + \frac{1073}{5391}\sqrt{69} & \frac{9686225}{9687627} &\approx 0.999855279 & \frac{4605316}{9687627} &\approx 0.475381225 \\ \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 5 & \frac{649}{3110} + \frac{619}{3110}\sqrt{69} & \frac{2417687}{2418025} &\approx 0.999860216 & \frac{1149483}{2418025} &\approx 0.475380941 \\ \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} 6 & \frac{3120}{14951} + \frac{26782}{134559}\sqrt{69} & \frac{6034532375}{6035374827} &\approx 0.999860414 & \frac{2935561516}{6035374827} &\approx 0.475380929\\ \hline \end{array}$$ \begin{claim} The Euclidean minimum of $R_r$ ($r \ge 2$) with respect to $f_{\mathfrak p,c}$ is attained at $R_r - 2$ or $R_r-\frac12(5 + \sqrt{69}\,)$. \end{claim} In fact, applying Proposition \ref{Pbd} to $R_r$ one checks that the two smallest values of $|N(R_r - \eta)|$ occur for $\eta_1 = 2$ or $\eta_2 = \frac12(5 + \sqrt{69}\,)$; one also verifies that $|N(R_r - 2)| \approx 0.47$ and $|N(R_r-\frac12(5 + \sqrt{69}\,))| \approx 0.99$. Since the denominator of $R_r-\eta$ is not divisible by $\mathfrak p$ for any $\eta \in \mathcal O_K$ (it divides $\varepsilon^{r+1}+1 \equiv 2 \bmod \mathfrak p$), and since $\eta_1 - \eta_2$ is an integer not divisible by $\mathfrak p$, our claim follows. Where the minimum with respect to $f_{\mathfrak p,c}$ is attained depends on whether the numerator of $R_r-2$ is divisible by $\mathfrak p$ or not: if it isn't, then the Euclidean minimum is attained there, and we have $M(P,f_{\mathfrak p,c}) = |N(R_r-2)| < \frac12$. If this numerator, however, {\em is} divisible by $\mathfrak p$, then $f_{\mathfrak p,c}(R_r - 2)$ can be made as large as we please by adding weight to $\mathfrak p$, and in this case the minimum is attained at $R_r-\frac12(5 + \sqrt{69}\,)$ for large values of $c$. \begin{claim}\langlebel{C6} The numerator of $R_r-2$ is divisible by $\mathfrak p$ if and only if $r \equiv 10 \bmod 23$. In this case, it is even divisible by $(23) = \mathfrak p^2$. \end{claim} Let us compute $R_r \bmod \mathfrak p$. Since $\varepsilon \equiv 1 \bmod \mathfrak p$, we find $\frac{\varepsilon^{r+1}-1}{\varepsilon-1} = 1 + \varepsilon + \ldots + \varepsilon^r \equiv r+1 \bmod \mathfrak p$, hence $2R_r = \varepsilon^r(23+3\sqrt{69}\,) - (18+2\sqrt{69}\,) \frac{\varepsilon^r-1}{\varepsilon-1} \equiv 5r \bmod \mathfrak p$, and therefore $R_r - 2 \equiv 0 \bmod \mathfrak p$ if and only if $5r \equiv 4 \bmod 23$, which in turn is equivalent to $r \equiv 10 \bmod 23$. 
The second part of the claim follows by observing $\varepsilon^s \equiv (1 + 13\sqrt{69}\,) \equiv 1+13s\sqrt{69} \bmod 23$, in particular $\varepsilon^{23m+10} \equiv 1+13\sqrt{69} \bmod 23$ and $\frac{\varepsilon^r-1}{\varepsilon-1} = \varepsilon^{r-1} + \ldots + \varepsilon+1 \equiv r+1 + 13 \frac{r(r+1)}2 \sqrt{69} \bmod 23$. With a little more effort we can show much more, namely that there is a subsequence of $R_r-2$ with numerators divisible by an arbitrarily large power of $\mathfrak p$. In fact, the numerator of $R_r-2$ will be divisible by $\mathfrak p^k$ if and only if $T_r = 23(\varepsilon^{r+1}+1)(R_r-2) \equiv 0 \bmod \mathfrak p^{k+2}$, and here $T_r$ is an algebraic integer. An elementary calculation shows that the last congruence is equivalent to \begin{equation} \varepsilon^{r+1} \equiv - \frac{47+5\sqrt{69}}{22}=: \alpha \bmod \mathfrak p^{k+2}. \end {equation} This will hold for arbitrarily large $k$ if and only if there is a $23$-adic integer $s = r+1$ such that \begin{equation}\langlebel{E23} \varepsilon^s = \alpha \end {equation} holds in $K_\mathfrak p = {\mathbb Q}_{23}(\sqrt{69}\,)$. Since both sides are congruent $1 \bmod \mathfrak p$, we can take the $\pi$-adic logarithm (with $\pi = \frac{23+3\sqrt{69}}{2}$) and get $s = \frac{\log_\pi \alpha}{\log_\pi \varepsilon}$ as an equation in $K_\mathfrak p$, and (\ref{E23}) holds if we can show that $s$ is in ${\mathbb Z}_{23}$. To this end,\footnote{We thank (in chronological order) Hendrik Lenstra, Gerhard Niklasch and David Kohel for this argument.} let $\sigma$ denote the non-trivial automorphism of $K_\mathfrak p/{\mathbb Q}_{23}$. Since $\log_\pi$ is Galois-equivariant, and since $\varepsilon^{1+\sigma} = \alpha^{1+\sigma} = 1$, we get $$s^\sigma = \frac{\log_\pi \alpha^\sigma}{\log_\pi \varepsilon^\sigma} = \frac{-\log_\pi \alpha}{-\log_\pi \varepsilon} = s.$$ Thus $s \in {\mathbb Q}_{23}$, and since it is a $\pi$-adic unit, $s \in {\mathbb Z}_{23}$ as desired. We remark that $s = 11 + 13 \cdot 23 + 15 \cdot 23^2 + 5 \cdot 23^3 + 3 \cdot 23^4 + \ldots$. This proves Claim \ref{CMP} and completes the proof of Theorem \ref{T69w}. \section{Weighted norms in cubic number fields} Using the idea of Clark (see \cite{Cl1,Cl2,Hai,Nik}; it actually first appears in Lenstra \cite[p. 35]{Len}), we modified the programs described in \cite{CL} slightly in order to examine weighted norms in cubic fields. Many of the results in this section have been obtained by the first author in \cite{Cav}; see Table \ref{TR} for the results obtained so far. 
\begin{table}[!ht] \begin{center} \caption{}\langlebel{TR} \begin{tabular}{|r|c|c|c|c|}\hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $\operatorname{disc} K$ & $M_1(K)$ & $M_2(K)$ & $N\mathfrak p$ & w$(\mathfrak p)$ \\ \hline \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $-367$ & $1$ & $9/13$ & $13$ & $(13, 279/8)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $-351$ & $1$ & $9/11$ & $11$ & $(11, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $-327$ & $101/99$ & $<0.9$ & $11$ & $(101/9, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $-199$ & $1 $ & $< 0.47$ & $7$ & $(7, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $ 985$ & $1 $ & $5/11$ & $5$ & $(5,\infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $1345$ & $7/5$ & $< 0.4$ & $5$ & $(7, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $1825$ & $7/5$ & $< 0.5$ & $5$ & $(7, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $1929$ & $1$ & $3/7$ & $7$ & $(7, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $1937$ & $1$ & $5/9$ & $3$ & $(3, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $2777$ & $5/3$ & $17/19$ & $3$ & $\varnothing$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $2836$ & $7/4$ & $7/8$ & $2$ & $(\sqrt{7}, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $2857$ & $8/5$ & $< 0.5$ & $5$ & $(8, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $3305$ & $13/9$ & $37/45$ & $3$ & $(\sqrt{13}, 5)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $3889$ & $13/7$ & 1 & $7$ & $(13,\infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $4193$ & $7/5$ & $< 0.65$ & $5$ & $(7, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $4345$ & $7/5$ & $11/13$ & $5$ & $(7, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $4360$ & $41/35$ & $7/10$ & $7$ & $(41/5,\infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $5089$ & $17/11$ & $7/11$ & $11$ & $(17,\infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $5281$ & $1$ & $<0.6$ & $5$ & $(5,\infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $5297$ & $21/11$ & $23/33$ & $11$ & $(21,\infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $5329$ & $9/8$ & $63/73$ & $2^3$ & $(9,73)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $5369$ & $21/19$ & $17/19$ & $19$ & $(21,\infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $5521$ & $23/7$ & $8/7$ & $7$ & $(23,\infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $7273$ & $973/601$ & $729/601$ & $601$ & $(973, \infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $7465$ & $1$ & $< 0.8$ & $5$ & $(5,\infty)$ \\ \hline \raisebox{0em}[2.4ex][1.2ex]{\rule{0em}{2ex}} $7481$ & $1$ & $< 0.7$ & $5$ & $(5,\infty)$ \\ \hline \end{tabular} \end{center} \end{table} The idea is simple. Assume that $K$ is a number field with class number $1$ such that $M = M_1(K) \ge 1$ and $M_2(K) < 1$; assume that $\# C_1(K)$ is finite and write the points $\xi \in C_1(K)$ ($1 \le i \le t$) in the form $\xi_i = \alpha_i/\beta_i$, where $(\alpha_i,\beta_i) = 1$. Assume moreover that there is a prime ideal $\mathfrak p$ such that $\mathfrak p \mid \beta_i$ for all $i$. 
Now consider the weighted norm $f_{\mathfrak p,c}$; by making $c$ big enough we can certainly arrange that $f_{\mathfrak p,c}(\xi_i) < 1$ for all $i \le t$: in fact, if $\mathfrak p^m \parallel \gcd(\beta_1, \ldots, \beta_t)$, then $f_{\mathfrak p,c}(\xi_i) \le M (N\mathfrak p)^m c^{-m}$; thus we only need to choose $c > N\mathfrak p \sqrt[m\,]{M}$ (actually this shows that $w(\mathfrak p) \subseteq (N\mathfrak p \sqrt[m\,]{M},\infty)$). In order to guarantee that, for every $\xi \in K$, there exists a $\gamma \in \mathcal O_K$ such that $f_{\mathfrak p,c}(\xi-\gamma) < 1$, we will look for $\gamma_1, \gamma_2 \in \mathcal O_K$ such that $|N_{K/{\mathbb Q}}(\xi-\gamma_i)| < 1$ for $i=1, 2$ and $\mathfrak p \nmid (\gamma_1 - \gamma_2)$; then at least one of the $\xi - \gamma_i$, say $\xi - \gamma_1$, has numerator not divisible by $\mathfrak p$, and this implies that $f_{\mathfrak p,c}(\xi-\gamma_1) \le |N(\xi-\gamma_1)| < 1$. By modifying the programs described in \cite{CL} slightly we can use them to find new examples of cubic fields that are not norm-Euclidean but Euclidean with respect to some weighted norm. We represented prime ideals of the maximal order $\mathcal O_K = {\mathbb Z} \oplus \alpha{\mathbb Z} \oplus \beta{\mathbb Z}$ in the form $\mathfrak p = (p, \alpha + a)$, $(p,\beta + a\alpha + b)$ or $(p)$ according as $\mathfrak p$ has degree $1$, $2$ or $3$. Testing the divisibility of an integer of $\mathcal O_K$ by $\mathfrak p$ can then be done using only rational arithmetic. Let us call $\xi \in K$ covered if there exist $\gamma_1, \gamma_2 \in \mathcal O_K$ such that $|N_{K/{\mathbb Q}}(\xi-\gamma_i)| < 1$ and $\mathfrak p \nmid (\gamma_1 - \gamma_2)$; if $\xi$ is covered, then so is $\varepsilon \xi$ for any unit $\varepsilon \in \mathcal O_K^\times$ (this allows us to use the program E--3 of \cite{CL}). We first consider the field $K$ generated by a root $\alpha$ of $x^3 + x^2 - 6x - 1$; we have $\operatorname{disc} K = 985$, and the only point with minimum $\ge 1$ is $\xi_1 = \frac{3\alpha-\alpha^2}{\alpha-1} = \frac{2-\alpha+2\alpha^2}{5}.$ The ideal $\mathfrak p = (\alpha - 1)$ occurring in the denominator is a prime ideal of norm $5$. Our programs cover a fundamental domain of $K$ except for the possible exceptional points $\xi = 0$ and $\xi = \xi_1$. Thus $f_{\mathfrak p,c}$ is a Euclidean function for every $c > N\mathfrak p = 5$, i.e. $w(\mathfrak p) = (5,\infty)$. Now let $K$ be the field with $\operatorname{disc} K = 1937$ generated by a root $\alpha$ of $x^3 + x^2 - 8x + 1$. It has Euclidean minimum $M(K) = 1$ attained at $\frac{4+4\alpha^2}{9}$; in fact $|N(\xi_1)| = 1$ for $\xi_1 = \frac19(-14+9\alpha+4\alpha^2)$, and the prime ideal factorization of $\xi_1$ is $(\xi_1) = (3,\alpha^2+1)(3,\alpha+1)^{-2}$. Our programs cover a fundamental domain of $K$ except for the possible exceptional points $\xi_0 = 0$, $\xi = \xi_1$ and $\xi = \frac13(1+\alpha^2)$. This last point has Euclidean minimum $\frac13 = |N(\frac13(1-3\alpha+\alpha^2))|$ with respect to the usual norm, and since $(1-3\alpha+\alpha^2)/3 = \mathfrak p^{-1}$, adding weight to $\mathfrak p$ does not increase its minimum. Our third example is the cubic field $K$ with discriminant $\operatorname{disc} K = 3305$, generated by a root $\alpha$ of $x^3 - x^2 - 10x - 3$. It has minimum $M_1 = \frac{13}{9}$ attained at $\frac19(1 - 2\alpha - 4\alpha^2)$, with $|N(\xi_1)| = \frac{13}{9}$ for $\xi_1 = \frac19(-71 + 52\alpha + 32\alpha^2)$.
Its prime ideal factorization is $(\xi_1) = (13,\alpha-1)(3,\alpha)^{-2}$; we thus add weight $c > \sqrt{13}$ to $\mathfrak p = (3,\alpha)$, and we can cover a fundamental domain of $K$ except for the possible exceptional points $\xi_0 = 0$, $\xi = \xi_1$ and $\xi = \frac15(2-\alpha+2\alpha^2)$. Now $M(\xi) = |N(\xi_2)| = \frac35$, where $\xi_2 = \frac15(-3+4\alpha+2\alpha^2)$ has the prime ideal factorization $(\xi_2) = \mathfrak p(5,\alpha+2)^{-1}$. Thus the weighted prime ideal occurs in the numerator of $\xi_2$, and we have $f_{\mathfrak p,c}(\xi_2) < 1$ if and only if $c < 5$; since $|N(\xi)| \ge 1$ for all $\xi \equiv \xi_2 \bmod \mathcal O_K$, this implies that $w(\mathfrak p) = (\sqrt{13},5)$. Finally, consider the cubic field $K$ with discriminant $\operatorname{disc} K = 3889$. Its first minimum is attained at $\xi_1 = \frac17(3-\alpha - 3\alpha^2)$, and its denominator is the prime ideal $\mathfrak p$ that divides the denominator of $\xi_2 = \frac17(2-3\alpha - 2\alpha^2)$, where the second minimum $M_2(K) = 1$ is attained (something similar happens for $\operatorname{disc} K = 5521$ and $\operatorname{disc} K = 7273$, where $M_2(K) > 1$; in these cases, we have to verify that $M_3(K) < 1$). Here we find the possible exceptional points $\xi = 0$, $\xi_1$, $\xi_2$, as well as $\eta_1 = \frac17(1 - \alpha - 2\alpha^2)$, $\eta_2 = \frac17(2 - 2\alpha + 3\alpha^2)$ and $\eta_3 = \frac17(3 -3\alpha + \alpha^2)$. Since their denominator is the prime ideal $(7, 2+\alpha)$, their Euclidean minimum is $\frac17$ both for the usual as well as for the weighted norm. Some of our examples of cubic fields that are Euclidean with respect to some weighted norm were found independently by Amin Coja-Oghlan; see his forthcoming thesis \cite{CO}. \section{Norm-Euclidean cubic fields} We take this opportunity to report on recent computations concerning norm-Euclidean cubic fields. Calculations for the totally real cubic fields up to $\operatorname{disc} K \le 13,000$ have produced the following results: \begin{center} \begin{tabular}{|r|r|r|r|r|}\hline $\operatorname{disc} K \qquad $ & E & N & $\Sigma$ \\ \hline \hline $ 0 < d \le \phantom{0} 1000$ & 26 & 1 & 27 \\ $1000 < d \le \phantom{0} 2000$ & 29 & 5 & 34 \\ $2000 < d \le \phantom{0} 3000$ & 31 & 4 & 35 \\ $3000 < d \le \phantom{0} 4000$ & 36 & 6 & 42 \\ $4000 < d \le \phantom{0} 5000$ & 28 & 7 & 35 \\ $5000 < d \le \phantom{0} 6000$ & 35 & 7 & 42 \\ $6000 < d \le \phantom{0} 7000$ & 30 & 8 & 38 \\ $7000 < d \le \phantom{0} 8000$ & 37 & 10 & 47 \\ $8000 < d \le \phantom{0} 9000$ & 30 & 11 & 41 \\ $9000 < d \le 10000$ & 29 & 10 & 39 \\ $10000 < d \le 11000$ & 34 & 9 & 43 \\ $11000 < d \le 12000$ & 37 & 16 & 53 \\ $12000 < d \le 13000$ & 31 & 6 & 37 \\ \hline $\Sigma \qquad $ &413 & 100 & 513 \\ \hline \end{tabular} \end{center} The columns $E$ and $N$ display the number of norm-Euclidean and not norm-Euclidean number fields of fields with discriminants in the indicated intervals. We also have to correct the entries for the fields with discriminant $3969$ in our tables in \cite{CL}: the field $K_1$ generated by a root of $x^3-21x-28$ has $M_1(K_1) = 4/3$, $M_2(K_1) = 31/24$ and $M_3(K_1) = 1$, and the field $K_2$ generated by $x^3-21x-35$ has $M_1(K_2) = 7/3$ and $M_2(K_2) = 125/63$. For complex cubic fields, calculations by R. 
Qu\^eme indicated that the fields with $\operatorname{disc} K = -999$ and $\operatorname{disc} K = -1055$ are not norm-Euclidean, and we could meanwhile verify that $M(K) \ge 294557/272112$ for $\operatorname{disc} K = -999$ and $M(K) \ge 1483/1370$ for $\operatorname{disc} K = -1055$, and that there are no norm-Euclidean number fields with $-876 > \operatorname{disc} K \ge -1600$, suggesting the following \noindent {\bf Conjecture.} There are exactly $58$ norm-Euclidean complex cubic fields, and their discriminants are $-23$, $-31$, $-44$, $-59$, $-76$, $-83$, $-87$, $-104$, $-107$, $-108$, $-116$, $-135$, $-139$, $-140$, $-152$, $-172$, $-175$, $-200$, $-204$, $-211$, $-212$, $-216$, $-231$, $-239$, $-243$, $-244$, $-247$, $-255$, $-268$, $-300$, $-324$, $-356$, $-379$, $-411$, $-419$, $-424$, $-431$, $-440$, $-451$, $-460$, $-472$, $-484$, $-492$, $-499$, $-503$, $-515$, $-516$, $-519$, $-543$, $-628$, $-652$, $-687$, $-696$, $-728$, $-744$, $-771$, $-815$, $-876$. Note that, by a result of Cassels \cite{Cas52}, there are only finitely many norm-Euclidean complex cubic number fields $K$, and in fact their discriminant is bounded by $|\operatorname{disc} K| < 170\,520$. \begin{center} \begin{tabular}{|r|r|r|r|}\hline $d = |\operatorname{disc} K| \qquad $ & E & N & $\Sigma$ \\ \hline \hline $ 0 < d \le \phantom{0}200$ & 18 & 1 & 19 \\ $200 < d \le \phantom{0}400$ & 15 & 9 & 24 \\ $400 < d \le \phantom{0}600$ & 16 & 10 & 26 \\ $600 < d \le \phantom{0}800$ & 7 & 20 & 27 \\ $800 < d \le 1000$ & 2 & 29 & 31 \\ $1000 < d \le 1200$ & 0 & 29 & 29 \\ $1200 < d \le 1400$ & 0 & 35 & 35 \\ $1400 < d \le 1600$ & 0 & 27 & 27 \\ \hline $\Sigma \qquad$ & 58 & 160 & 218 \\ \hline \end{tabular} \end{center} In the real case, the situation is not so clear. The numerical data suggest that the proportion of norm-Euclidean fields is decreasing with $\operatorname{disc} K$, but they do not yet support the conjecture that the norm-Euclidean real cubic number fields have density $0$ among the real cubic fields with class number $1$. \section{Some Open Problems} In this last section we would like to mention several open problems concerning the Euclidean algorithm with respect to weighted norms. One of the most studied questions is of course whether ${\mathbb Z}[\sqrt{14}\,]$ is Euclidean with respect to some $f_{\mathfrak p,c}$, where $\mathfrak p = (2,\sqrt{14}\,)$. Is it true, in particular, that $w(\mathfrak p) = (\sqrt{5},\sqrt{7}\,)$ in this case? More generally: assume that $K$ is a number field with unit rank $\ge 1$. Is $w(\mathfrak p)$ always an open subset of $(1,\infty) \widetilde{\subset}et {\mathbb R}$ for every prime ideal $\mathfrak p$ in $\mathcal O_K$? If this were the case, then there would also exist number fields such that $f_{\mathfrak p,c}$ is a Euclidean function for some $c < N\mathfrak p$ since there do exist number fields with $w(\mathfrak p) \supseteq [p,\infty)$ for suitable primes (take norm-Euclidean fields, for example). A related question is whether $M(f_{\mathfrak p,c})$ is a continuous function of $c$ on $[N\mathfrak p,\infty)$ for number fields with unit rank $\ge 1$. The cubic field with discriminant $\operatorname{disc} K = -335$ has $M_1(K) = 1$; the minimum is attained at points that have different prime ideals above $5$ in their denominator. Calculations have not yet confirmed that $\mathcal O_K$ is Euclidean with respect to a norm that is weighted at two different prime ideals. 
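Returning to the first question above, the covering test of the previous section is easy to experiment with for ${\mathbb Z}[\sqrt{14}\,]$ and $\mathfrak p = (2,\sqrt{14}\,)$. The following Python sketch is a toy, purely illustrative version of that test (the sample point and the size of the search box are arbitrary choices of the sketch, and it says nothing about the open problem itself); it uses the fact that $a+b\sqrt{14} \notin \mathfrak p$ if and only if $a$ is odd.
\begin{verbatim}
# Toy covering test for Z[sqrt(14)] with p = (2, sqrt(14)):
# xi is "covered" if there are gamma_1, gamma_2 in Z[sqrt(14)] with
# |N(xi - gamma_i)| < 1 and gamma_1 - gamma_2 not in p (rational part odd).
from fractions import Fraction
from itertools import product

def abs_norm(x, y):                  # |N(x + y*sqrt(14))| = |x^2 - 14*y^2|
    return abs(x*x - 14*y*y)

def covered(x, y, box=3):
    a0, b0 = round(x), round(y)
    good = [(a, b)
            for a, b in product(range(a0 - box, a0 + box + 1),
                                range(b0 - box, b0 + box + 1))
            if abs_norm(x - a, y - b) < 1]
    ok = any((g1[0] - g2[0]) % 2 == 1 for g1 in good for g2 in good)
    return good, ok

xi = (Fraction(1, 3), Fraction(1, 5))   # an arbitrary sample point
print(covered(*xi))
\end{verbatim}
Exact rational arithmetic is used so that the strict inequality $|N(\xi-\gamma)| < 1$ is decided without rounding errors.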
Similar remarks apply to algorithms with respect to functions that are not multiplicative: instead of giving weight $c$ to a prime ideal $\mathfrak p$, one could look at functions with $f(\mathfrak p) = N\mathfrak p$ and $f(\mathfrak p^2) = c$ for some $c \ge N\mathfrak p^2$. This idea is applicable whenever the denominators of the exceptional points are divisible by the square of a prime ideal, e.g. for ${\mathbb Z}[\sqrt{14}\,]$. \end{document}
\begin{document} \title{Fluid Flows of Mixed Regimes in Porous Media} \begin{center} \textit{$^a$Department of Mathematics and Statistics, Texas Tech University, Box 41042\\ Lubbock, TX 79409--1042, U. S. A.} \\ \textit{$^b$Department of Mathematics, University of North Georgia, Gainesville Campus\\ 3820 Mundy Mill Rd., Oakwood, GA 30566, U. S. A.}\\ Email addresses: \texttt{[email protected], [email protected],\\ [email protected], [email protected]} \end{center} \begin{abstract} In porous media, there are three known regimes of fluid flows, namely, pre-Darcy, Darcy and post-Darcy. Because of their different natures, these are usually treated separately in literature. To study complex flows when all three regimes may be present in different portions of a same domain, we use a single equation of motion to unify them. Several scenarios and models are then considered for slightly compressible fluids. A nonlinear parabolic equation for the pressure is derived, which is degenerate when the pressure gradient is either small or large. We estimate the pressure and its gradient for all time in terms of initial and boundary data. We also obtain their particular bounds for large time which depend on the asymptotic behavior of the boundary data but not on the initial one. Moreover, the continuous dependence of the solutions on initial and boundary data, and the structural stability for the equation are established. \end{abstract} \pagestyle{myheadings}\markboth{E. Celik, L. Hoang, A. Ibragimov and T. Kieu} {Fluid Flows of Mixed Regimes in Porous Media} \myclearpage \section{Introduction and the models}\label{intro} Fluid flows are very common in nature such as in soil, sand, aquifers, oil reservoir, sea ice, plants, bones, etc. Contrary to the usual perception of their simplicity, they, in fact, can be very complicated and are modeled by many different equations of various types. Broadly speaking, they are categorized into three known regimes, namely, pre-Darcy (i.e. pre-linear, non-Darcy), Darcy (linear) and post-Darcy (i.e. post-linear, non-Darcy). While the Darcy regime is well-known, the other two do exist and are studied in physics and engineering. For example, when the Reynolds number is high, there is a deviation from the Darcy law and Forchheimer's equations are usually used to account for it \cite{Forchh1901,ForchheimerBook}, see also \cite{Muskatbook,BearBook,NieldBook}. On the other end of the Reynolds number's range, when it is small, the pre-Darcy regime is observed but not well understood, although it contributes to unexpected oil extraction, see \cite{Dudgeon85,SSHI2016,SoniIslamBasak78} and references therein. Concerning mathematical research of fluids in porous media, the flows' diverse nature is much overlooked. Almost all of the papers focus on the Darcy regime which is presented by the (linear) Darcy equation, see e.g. \cite{VazquezPorousBook}. The post-Darcy regime has been attracted attention recently with the (nonlinear) Forchheimer models, see \cite{StraughanBook,ABHI1,HI2,HIK1,HIK2,HKP1,HK2,CHK1,CHK2} and references therein. In contrast, the (nonlinear) pre-Darcy regime is virtually ignored. Moreover, the three regimes are always treated separately. This is due to the different natures of the models and the ranges of their applicability. However, this separation is unsatisfactory since the fluid may present all three regimes in different unidentified portions of the confinement. 
Therefore, there is a need to unify the three regimes into one formulation and study the fluid as a whole. This paper aims at deriving admissible models for this unification and analyze their properties mathematically. To the best of our knowledge, this is the first paper to treat such a problem with rigorous mathematics. We now start the investigation of different types of fluid flows in porous media. Consider fluid flows with velocity $v\in \mathbb R^n$, pressure $p\in \mathbb R$, and density $\rho\in[0,\infty)$. Depending on the range of the Reynolds number, there are different groups of equations to describe their dynamics. The most popular equation is Darcy's law: \begin{equation}\label{D} v=- k\nabla p, \text{ where $k$ is a positive constant.} \end{equation} (In this paper, we will not discuss other variations, such as those of Brinkman-type, for \eqref{D} or Forchheimer equations \eqref{F2}--\eqref{FP}.) When $|v|$ is small, there are Izbash-type equations that describe the pre-Darcy regime: \begin{equation}\label{PD} |v|^{-\alpha}v=- k \nabla p\quad\text{for some constant power } \alpha\in (0,1)\text{ and coefficient }k>0. \end{equation} For experimental values of $\alpha$, see e.g. \cite{SSHI2016,SoniIslamBasak78}. When $|v|$ is large, the following Forchheimer equations are usually used in studying post-Darcy flows. Forchheimer's two-term law \begin{equation}\label{F2} av+b|v|v=-\nabla p. \end{equation} Forchheimer's three-term law \begin{equation}\label{F3} av+b|v|v+c|v|^2v=-\nabla p. \end{equation} Forchheimer's power law \begin{equation}\label{FP} av+d|v|^{m-1}v=-\nabla p. \end{equation} Here, the positive numbers $a$, $b$, $c$, $d$, and $m\in(1,2)$ are derived from experiments for each case. The above three Forchheimer equations can be combined and generalized to the following form: \begin{equation}\label{gF} g_F(|v|)v=-\nabla p, \end{equation} where \begin{equation} g_F(s)=a_0+a_1s^{\alpha_1}+\dots+a_Ns^{\alpha_N}, \end{equation} with $N\ge 1$, $\alpha_0=0<\alpha_1<\ldots<\alpha_N$, $a_0,a_N>0$, $a_1,a_2,\ldots,a_{N-1}\ge 0$. The generalized Forchheimer equation \eqref{gF} was intensely used by the authors to model and study fast flows in the porous media (see \cite{ABHI1,HI1,HI2,HIKS1,HKP1,HK1,HK2,CH1,CH2,CHK1,CHK2}). The techniques developed in those papers will be essential in our approach and analysis below. In previous work, each regime pre-Darcy, Darcy, or post-Darcy was studied separately, even though they exist simultaneously in porous media. In particular cases, some models must consider multi-layer domains with each layer having a different regime of fluid flows, see for e.g. section 6.7.8 of \cite{StraughanBook}. The goal of this section is to model all regimes together in the same domain. \newcommand{\mathbf G}{\mathbf G} We write a general equation of motion for all cases \eqref{D}--\eqref{gF} as \begin{equation}\label{vecform} \mathbf G(v)=-\nabla p, \end{equation} where $\mathbf G$ is a vector field on $\mathbb R^n$ with $\mathbf G(0)=0$. In this paper, based on the known equations \eqref{PD}--\eqref{gF}, we study $\mathbf G$ of the form \begin{equation}\label{gg} \mathbf G(v)= \begin{cases} g(|v|)v & \text{if } v\in \mathbb R^n\setminus \{0\},\\ 0& \text{if } v=0, \end{cases} \end{equation} where $g(s)$ is a continuous function from $(0,\infty)$ to $(0,\infty)$ that satisfies \begin{equation}\label{sgs} \lim_{s\searrow 0} sg(s)=0. 
\end{equation} Different forms of $g(s)$ give different models, for example, \begin{equation}s g(s)=k^{-1}s^{-\alpha},\ k^{-1},\ a+bs,\ a+bs+cs^2,\ a+ds^{m-1},\ g_F(s), \end{equation}s for equations \eqref{PD}, \eqref{D}, \eqref{F2}, \eqref{F3}, \eqref{FP}, \eqref{gF}, respectively. As in our previous work for compressible fluids, to reduce the complexity of the system of equations describing the fluid motion, we solve for velocity $v$ in \eqref{gg} in terms of the pressure gradient $\nabla p$. For example, from \eqref{PD} we have \begin{equation}s v=-k^\frac{1}{1-\alpha}|\nabla p|^\frac{\alpha}{1-\alpha}\nabla p, \end{equation}s and from \eqref{gF} we have \begin{equation}s v=-K_F(|\nabla p|)\nabla p, \end{equation}s where \begin{equation}\label{KF} K_F(\xi)=\frac{1}{g_F(G_F^{-1}(\xi))} \quad \text{with }G_F(s)=sg_F(s) \quad\text{for }\xi, s\ge 0. \end{equation} Taking the modulus both sides of \eqref{vecform}, we have \begin{equation}\label{vG} G(|v|)=|\nabla p|, \end{equation} where \begin{equation}\label{Gdef} G(s)= \begin{cases} sg(s)& \text{if } s>0,\\ 0& \text{if } s=0. \end{cases} \end{equation} By \eqref{sgs}, we have \begin{enumerate} \item[{\rm (g1)}] $G(s)$ is continuous on $[0,\infty)$. \end{enumerate} We assume \begin{enumerate} \item[{\rm (g2)}] $G(s)$ is strictly increasing on $[0,\infty)$, \item[{\rm (g3)}] $G(s)\to\infty$ as $s\to\infty$, and \item[{\rm (g4)}] the function $1/g(s)$ on $(0,\infty)$ can be extended to a continuous function $k_g(s)$ on $[0,\infty)$. \end{enumerate} By (g1)--(g3), we can invert equation \eqref{vG} to have $$|v|=G^{-1}(|\nabla p|).$$ Combining this with (g4), we can solve from \eqref{vecform} and \eqref{gg} for $v=- k_g(|v|)\nabla p$, thus, \begin{equation}\label{gen-darcy} v=-K(|\nabla p|)\nabla p, \end{equation} where \begin{equation}\label{Kg} K(\xi)= k_g(G^{-1}(\xi))\quad\text{for } \xi\ge 0. \end{equation} In particular, when $\xi>0$ \begin{equation}\label{Kpos} K(\xi)=\frac{1}{g(s(\xi))} \quad \text{with}\quad s=s(\xi)>0 \quad \text{satisfying} \quad s g(s)=\xi. \end{equation} One can interpret equation \eqref{gen-darcy} as a generalization Darcy equation \eqref{D} with conductivity $k=K(|\nabla p|)$ depending on the pressure's gradient. We consider the following two main models. Below, ${\mathbf 1}_E$ denotes the characteristic (indicator) function of a set $E$. \textbf{Model 1.} Function $g(s)$ is piece-wise smooth on $(0, \infty)$. Based on \eqref{PD}, \eqref{D} and \eqref{gF} and their validity in different ranges of $|v|$, our first consideration is the following piece-wise defined function \begin{equation} \label{gdef} g(s)=\bar{g}(s)\stackrel{\rm def}{=} c_1 s^{-\alpha}{\mathbf 1}_{(0,s_1)}(s)+ c_2{\mathbf 1}_{[s_1,s_2]}(s)+g_F(s){\mathbf 1}_{(s_2,\infty)}(s) \quad \text{for } s>0, \end{equation} where $\alpha\in(0,1)$, and $s_2>s_1>0$ are fixed threshold values. To avoid abrupt transitions between three regimes, we impose the continuity on $\bar g(s)$, that is, \begin{equation}s c_1s_1^{-\alpha}=c_2=g_F(s_2). \end{equation}s Note that $\bar g(s)$ is not differentiable at $s_1,s_2$. Obviously, \eqref{sgs} holds. Then function $G(s)$ in \eqref{Gdef} becomes \begin{equation}s G(s)=\bar G(s)\stackrel{\rm def}{=} c_1 s^{1-\alpha}{\mathbf 1}_{[0,s_1)}(s)+ c_2 s {\mathbf 1}_{[s_1,s_2]}(s)+G_F(s){\mathbf 1}_{(s_2,\infty)}(s) \quad\text{for } s\ge 0. \end{equation}s Clearly, conditions (g2)--(g3) are satisfied. 
Then \begin{equation}s \bar G^{-1}(\xi)=\Big(\frac\xi{c_1}\Big)^\frac1{1-\alpha} {\mathbf 1}_{[0,Z_1)}(\xi) + \frac\xi{c_2} {\mathbf 1}_{[Z_1,Z_2]}(\xi)+G_F^{-1}(\xi){\mathbf 1}_{(Z_2,\infty)}(\xi)\quad\text{for } \xi\ge 0, \end{equation}s where $Z_1=c_2 s_1$ and $Z_2=c_2 s_2$. Also, (g4) holds true with \begin{equation}s k_{\bar g}(s)=\frac{s^\alpha }{c_1} {\mathbf 1}_{[0,s_1)}(s) + \frac1{c_2} {\mathbf 1}_{[s_1,s_2]}(s)+\frac1{g_F(s)}{\mathbf 1}_{(s_2,\infty)}(s)\quad\text{for } s\ge 0. \end{equation}s Thus, we derive function $K(\xi)$ in \eqref{Kg} explicitly as \begin{equation}\label{K-bar} K(\xi)=\bar K(\xi)\stackrel{\rm def}{=} M_1\xi^{\beta_1} {\mathbf 1}_{[0,Z_1)}(\xi) + M_2 {\mathbf 1}_{[Z_1,Z_2]}(\xi)+K_F(\xi){\mathbf 1}_{(Z_2,\infty)}(\xi)\quad\text{for } \xi\ge 0, \end{equation} where $M_1=c_1^{-\frac{1}{1-\alpha}}$, $M_2=c_2^{-1}$, and \begin{equation}\label{beta1} \beta_1=\alpha/(1-\alpha)>0. \end{equation} Note that, similar to the function $\bar g$, this function $\bar K$ is continuous on $[0,\infty)$, continuously differentiable on $(0,\infty)\setminus\{Z_1,Z_2\}$. \textbf{Model 2.} Function $g(s)$ is smooth on $(0, \infty)$. Another generalization is to use a smooth interpolation between pre-Darcy \eqref{PD} and generalized Forchheimer \eqref{gF}. Instead of \eqref{gdef}, we propose the following \begin{equation}\label{gI} g(s)=g_I(s)\stackrel{\rm def}{=} a_{-1}s^{-\alpha}+a_0+a_1s^{\alpha_1}+\dots+a_Ns^{\alpha_N} \quad \text{for } s>0, \end{equation} where $N\ge 1$, $\alpha\in (0,1)$, $\alpha_N>0$, \begin{equation}\label{aicond} a_{-1},a_N>0 \text{ and }a_i\ge 0\quad \forall i=0,1,\ldots, N-1. \end{equation} Normally, $a_0>0$ and, thus, the model \eqref{gI} already contains the Darcy regime in its formulation. Nonetheless, our mathematical study in this paper allows the case $a_0=0$ as well. If only one function $g_I$ is studied, then we can impose $a_i> 0$ for all $i=-1,0,\ldots,N$. The weaker condition \eqref{aicond} is used here to allow comparison between $g_I$ functions with different powers $\alpha_i$, see section \ref{strucsec}. The main advantage of $g_I$ over $\bar g$ is that it is smooth on $(0,\infty)$. This allows further mathematical analysis of the flows. It also can be used as a framework for perspective interpretation of field data, i.e., matching the coefficients $a_i$ for $i=-1,0,1,\ldots,N$ to fit the data. Similar to Model 1, conditions \eqref{sgs} and (g2)--(g4) are satisfied with \begin{equation}s G(s)= G_I(s)\stackrel{\rm def}{=} a_{-1}s^{1-\alpha}+a_0 s +a_1s^{1+\alpha_1}+\dots+a_Ns^{1+\alpha_N}, \end{equation}s \begin{equation}s k_{g_I}(s)=\frac{s^\alpha}{a_{-1}+a_0 s^\alpha + a_1 s^{\alpha+\alpha_1}+\ldots+a_N s^{\alpha+\alpha_N}} \end{equation}s for $s\ge 0$. We then obtain \begin{equation}\label{K-I} K(\xi)=K_I(\xi)\stackrel{\rm def}{=} \frac{s(\xi)^\alpha}{a_{-1}+a_0 s(\xi)^\alpha + a_1 s(\xi)^{\alpha+\alpha_1}+\ldots+a_N s(\xi)^{\alpha+\alpha_N}}\quad \text{for } \xi\ge 0, \end{equation} where $s(\xi)= G_I^{-1}(\xi)$. In case we want to consider dependence on the coefficients of $g_I(s)$, we denote \begin{equation}\label{KIxa} \vec a=(a_{-1},a_0,a_1,\ldots,a_N),\quad g_I(s)=g_I(s,\vec a),\quad \text{and}\quad K_I(\xi)=K_I(\xi,\vec a). \end{equation} \textbf{Model 3.} Another way to describe the flows of mixed regimes is to take the formula \eqref{gen-darcy} and define the conductivity function $K(\xi)$ directly that possesses some desired properties. In doing so, one can impose the smoothness on $K(\xi)$. 
An important feature in constructing $K(\xi)$ is that its behavior as $\xi\to 0$ and as $\xi\to \infty$ should be the same as that of $\bar K(\xi)$. As $\xi\to 0$ it is clear from \eqref{K-bar} that $\bar K(\xi)$ behaves like $\xi^{\beta_1}$. For sufficiently large $\xi$ we have $\bar K(\xi)=K_F(\xi)$ defined by \eqref{KF}. Thus, we recall from Lemma 2.1 of \cite{HI1} that the function $K_F(\xi)$ satisfies \begin{equation}\label{Fdegen} \frac{d_1^{-1}}{(1+\xi)^{\beta_2}}\le K_F(\xi)\le \frac{d_1}{(1+\xi)^{\beta_2}}, \end{equation} where \begin{equation} \label{beta2} \beta_2=\alpha_N/(1+\alpha_N)\in(0,1)\quad\text{and}\quad d_1=d_0 (\max\{a_0,a_1,\ldots,a_N,a_0^{-1},a_N^{-1}\})^{1+\beta_2} \end{equation} with $d_0>0$ depending on $N$ and $\alpha_N$. In summary, we want $K(\xi)$ to behave like $\xi^{\beta_1}$ for small $\xi$, and, as in \eqref{Fdegen}, like $(1+\xi)^{-\beta_2}$ for large $\xi$. Therefore, one can introduce \begin{equation}\label{Ksim1} K(\xi)=\hat{K}(\xi)\stackrel{\rm def}{=} \frac{a\xi^{\beta_1}}{(1+b\xi^{\beta_1})(1+c\xi^{\beta_2})}\quad\text{for }\xi\ge 0. \end{equation} Here, the positive coefficients $a$, $b$, $c$ and the parameters $\beta_1$, $\beta_2$ can be used to match experimental or field data. This function $\hat K$ belongs to $C^\infty((0,\infty))$. \textbf{Model 4.} One can also refine the model \eqref{Ksim1} to match $\bar K(\xi)$ in \eqref{K-bar} more accurately. Specifically, $K(\xi)$ is close to $M_1 \xi^{\beta_1}$ when $\xi\to 0$, and to $K_F(\xi)$ when $\xi\to \infty$. Then we choose \begin{equation}\label{KM} K(\xi)=K_M(\xi)\stackrel{\rm def}{=} K_F(\xi)\cdot\frac{\bar{k}\xi^{\beta_1}}{1+\bar{k}\xi^{\beta_1}} \quad\text{for }\xi\ge 0, \end{equation} where $ \bar k=M_1/K_F(0)>0. $ \vspace*{1em} Above, we have introduced several models which can be used to interpret experimental and field data. We now use them to investigate the fluid flow's properties. They are used together with other basic equations of continuum mechanics, which we recall here. The continuity equation is \begin{equation*} \phi\rho_t+\nabla\cdot(\rho v)=0, \end{equation*} where $\phi\in(0,1)$ is the constant porosity. The constitutive law for slightly compressible fluids is \begin{equation*} \frac{d\rho}{dp}=\frac{\rho}\kappa, \end{equation*} where $1/\kappa>0$ is the (small) compressibility. Combining the above two equations with \eqref{gen-darcy}, we obtain \begin{equation*} \phi p_t=\kappa \nabla \cdot(K(|\nabla p|)\nabla p)+K(|\nabla p|)|\nabla p|^2. \end{equation*} Since $\kappa$ is large, we neglect the last term in this study. Such a simplification is commonly used in petroleum engineering. The full treatment requires more accurate models for the flows and can use an analysis similar to that in \cite{CHK1,CHK2}. By rescaling $t$, we assume $\kappa=1$ and obtain the following reduced equation \begin{equation}\label{peq} p_t=\nabla \cdot(K(|\nabla p|)\nabla p), \end{equation} where $K$ is $\bar K$, $K_I$, $\hat K$ or $K_M$. We will study the initial-boundary value problem (IBVP) associated with the partial differential equation \eqref{peq}. We will derive estimates for the solutions, and establish their continuous dependence on the initial and boundary data, and, in case $K=K_I$, on the coefficients of the function $g_I(s)$ in \eqref{gI}. As seen in the next section, the PDE \eqref{peq} is degenerate when either $|\nabla p|\to 0$ or $|\nabla p|\to\infty$. Moreover, it possesses a monotonicity of mixed type which requires extra care in the proof and analysis. The paper is organized as follows.
In section \ref{presec}, we present important properties of $K(\xi)$ including its type of degeneracy (Lemma \ref{lem21}) and monotonicity (Lemma \ref{lemmono}). They are essential not only for the remaining sections \ref{boundsec}--\ref{strucsec} in this paper but also for our future work on the models. In section \ref{boundsec}, we study solutions of \eqref{peq} subjected to the time-dependent Dirichlet boundary condition $\psi(x,t)$. We derive estimates the $L^2$-norm for a solution $p(x,t)$ and the $L^{2-\beta_2}$-norm for its gradient, both for all $t\ge 0$ and, particularly, for large $t$, see Theorems~\ref{Lem31} and \ref{grad-est}. Furthermore, we show in Theorems~\ref{pasmall} and \ref{gradasmall} that if the boundary data is asymptotically small as $t\to\infty$, then so are these two norms. Section \ref{dependsec} is focused on the continuous dependence of solutions on the initial and boundary data. Theorem~\ref{theo62} shows that the difference between two solutions $p_1(x,t)$ and $p_2(x,t)$ with boundary data $\psi_1(x,t)$ and $\psi_2(x,t)$, respectively, is small if their initial difference $p_1(x,0)-p_2(x,0)$ and the boundary data's difference $\psi_1(x,t)-\psi_2(x,t)$ are small, see \eqref{neww}. Especially when $t\to\infty$, the estimates of $p_1(x,t)-p_2(x,t)$ depend on the asymptotic behavior of $\psi_1(x,t)-\psi_2(x,t)$. In section \ref{strucsec}, we consider particularly $g=g_I(s,\vec a)$, $K=K_I(\xi,\vec a)$ and prove the structural stability of equation \eqref{peq} with respect to the coefficient vector $\vec a$ of the function $g_I$. In order to obtain this, we first establish in Lemma~\ref{lempm} the perturbed monotonicity for our degenerate PDE. It is then proved in Theorem~\ref{DepCoeff} that the difference $P(x,t)$ between the two solutions which correspond to two different coefficient vectors $\vec a^{(1)}$ and $\vec a^{(2)}$ is estimated in terms of their initial difference $P(x,0)$ and $|\vec a^{(1)}-\vec a^{(2)}|$, see \eqref{ssc2}. Moreover, when time goes to infinity, this difference can be controlled by $|\vec a^{(1)}-\vec a^{(2)}|$ only, see \eqref{ssc3}. \section{Basic properties and inequalities}\label{presec} In this section, we study some properties of the conductivity function $K(\xi)$ which play crucial roles in the analysis of the PDE \eqref{peq}. For comparison purpose we define the function \begin{equation} K_*(\xi)=\frac{\xi^{\beta_1}}{(1+\xi)^{\beta_1+\beta_2}}\quad \text{for } \xi\ge 0, \end{equation} where $\beta_1>0$ and $\beta_2\in(0,1)$ are defined in \eqref{beta1} and \eqref{beta2}, respectively. Let $\xi_c=\beta_1/\beta_2$. It is elementary to see that \begin{enumerate} \item[{\rm (P1)}] $K_*(\xi)$ is increasing on $[0,\xi_c]$, \item[{\rm (P2)}] $K_*(\xi)$ is decreasing on $[\xi_c,\infty)$, and hence, \item[{\rm (P3)}] $K_*(\xi_c)$ is the maximum of $K_*(\xi)$ over $[0,\infty)$. \end{enumerate} For $m,\xi\ge 0$, \begin{equation}s K_*(\xi)\xi^m=\Big(\frac{\xi}{1+\xi}\Big)^{\beta_1} \frac{\xi^m}{(1+\xi)^{\beta_2}}\le \frac{\xi^m}{(1+\xi)^{\beta_2}}. \end{equation}s Therefore, \begin{equation}\label{P4} K_*(\xi)\xi^m\le \xi^{m-\beta_2} \quad\forall m\ge 0,\ \xi\ge 0. \end{equation} If $m\ge \beta_2$ and $\xi>\delta>0$ then \begin{equation}s K_*(\xi)\xi^m= \Big(\frac{\xi}{1+\xi}\Big)^{\beta_1+\beta_2} \xi^{m-\beta_2} \ge \Big(\frac{\delta}{1+\delta}\Big)^{\beta_1+\beta_2} (\xi^{m-\beta_2}-\delta^{m-\beta_2} ). 
\end{equation}s This inequality is obviously true when $\xi\le \delta.$ Hence, \begin{equation}\label{P5} K_*(\xi)\xi^m \ge \Big(\frac{\delta}{1+\delta}\Big)^{\beta_1+\beta_2} (\xi^{m-\beta_2}-\delta^{m-\beta_2} )\quad \forall \delta>0, \ m\ge \beta_2,\ \xi\ge 0. \end{equation} \begin{lemma}\label{lem21} Let $K=\bar K$, $K_I$, $\hat{K}$, and $K_M$ as in \eqref{K-bar}, \eqref{K-I}, \eqref{Ksim1} and \eqref{KM}, respectively. Then there exist $d_2,d_3>0$ such that \begin{equation}\label{mc1} d_2K_*(\xi)\le K(\xi)\le d_3 K_*(\xi)\quad \forall \xi\ge 0. \end{equation} Consequently, for all $m\ge \beta_2$ and $\delta>0$, \begin{equation}\label{mc2} d_2\Big(\frac{\delta}{1+\delta}\Big)^{\beta_1+\beta_2} (\xi^{m-\beta_2}-\delta^{m-\beta_2} )\le K(\xi)\xi^m\le d_3 \xi^{m-\beta_2}\quad \forall \xi\ge 0. \end{equation} In particular, when $K=K_I$ one can take \begin{equation}\label{MM} d_2=\frac1{\max\{1,\xi_0\}^{1+\beta_1}} \text{ and } d_3= \frac{(1+\max\{1,\xi_0\})^{\beta_1+\beta_2}}{\min\{1,a_{-1},a_N\}^{1+\beta_1}} \end{equation} with $\xi_0=a_{-1}+a_0+a_1+\dots+a_N.$ \end{lemma} \begin{proof} The inequalities in \eqref{mc1} clearly hold for $K=\hat K$. When $K=K_M$, \eqref{mc1} can be easily proved by using relation \eqref{Fdegen}. We prove \eqref{mc1} for $K=\bar K$ now. For $0\le \xi<Z_1$, we have \begin{equation}\label{km1} \frac{K(\xi)}{M_1(1+Z_1)^{\beta_1+\beta_2}}=\frac{\xi^{\beta_1}}{(1+Z_1)^{\beta_1+\beta_2}} \le K_*(\xi)\le \xi^{\beta_1}= \frac{K(\xi)}{M_1}. \end{equation} For $Z_1\le \xi\le Z_2$, we have \begin{equation}\label{km2} \frac{Z_1^{\beta_1}K(\xi)}{M_2(1+Z_1)^{\beta_1}(1+Z_2)^{\beta_2}} =\Big(\frac{Z_1}{1+Z_1}\Big)^{\beta_1} \frac1{(1+Z_2)^{\beta_2}} \le K_*(\xi)\le Z_2^{\beta_1}=\frac{Z_2^{\beta_1}K(\xi)}{M_2}. \end{equation} Above, we used, for the first inequality, the fact that the function $x/(x+1)$ is increasing. For $\xi>Z_2$, we have from \eqref{Fdegen} that \begin{equation}\label{km3} \frac{Z_2^{\beta_1} }{(1+Z_2)^{\beta_1}} d_1^{-1}K(\xi) \le \frac{Z_2^{\beta_1}}{(1+Z_2)^{\beta_1}}\cdot \frac1{(1+\xi)^{\beta_2}} \le K_*(\xi)\le \frac{1}{(1+\xi)^{\beta_2}}\le d_1 K(\xi). \end{equation} Therefore, relation \eqref{mc1} follows from \eqref{km1}, \eqref{km2}, and \eqref{km3}. Next, consider $K=K_I$. Since $K_I(0)=0$, it suffices to prove \eqref{mc1} for $\xi>0$. Let $s=s(\xi)=G_I^{-1}(\xi)>0$. Then we have \begin{equation} \label{xis} \xi=G_I(s)=a_{-1}s^{1-\alpha}+a_0s+a_1s^{1+\alpha_1}+\dots+a_Ns^{1+\alpha_N}. \end{equation} Note that $\xi_0=G_I(1)$. We consider the following two cases. \textit{Case 1: $\xi>\xi_0$.} Then $s>1$ and we have from \eqref{xis} that $$ a_Ns^{1+\alpha_N}\le \xi\le \xi_0s^{1+\alpha_N}.$$ This and the fact that $K_I(\xi)=1/g_I(s)=s/\xi$ give \begin{equation}\label{ki1} C_1 \xi^{-\beta_2}=\frac{(\xi/\xi_0)^\frac 1{1+\alpha_N}}{\xi} \le K_I(\xi)\le \frac{(\xi/a_N)^\frac 1{1+\alpha_N}}{\xi} =C_2\xi^{-\beta_2}, \end{equation} where $C_1=\xi_0^{\beta_2-1}$ and $C_2=a_N^{\beta_2-1}$. The first inequality of \eqref{ki1} immediately yields the lower bound for $K_I(\xi)$ as \begin{equation}\label{kxi40} K_I(\xi)\ge \frac {C_1}{(1+\xi)^{\beta_2}} \ge \frac {C_1}{(1+\xi)^{\beta_2}} \cdot \frac {\xi^{\beta_1}}{(1+\xi)^{\beta_1}}=C_1 K_*(\xi). \end{equation} For the upper bound of $K_I(\xi)$ we note for $\xi> \xi_0$ that \begin{equation}\label{ki2} \frac{\xi_0}{\xi_0+1}\le \frac{\xi}{\xi+1} \text{ and } \xi\ge\frac{\xi+\xi_0}2\ge \min\{1,\xi_0\} \frac{\xi+1} 2.
\end{equation} Combining the second inequality of \eqref{ki1} and \eqref{ki2} gives \begin{equation}\label{ki4} K_I(\xi) \le C_2 \Big(\frac2{\xi+\xi_0}\Big)^{\beta_2}\Big(\frac{\xi_0+1}{\xi_0}\cdot \frac{\xi}{\xi+1}\Big)^{\beta_1} \le C_3 \frac{\xi^{\beta_1}}{(\xi+1)^{\beta_1+\beta_2}}=C_3 K_*(\xi), \end{equation} where $$C_3=C_2\Big(\frac{\xi_0+1}{\xi_0}\Big)^{\beta_1} \Big(\frac2{\min\{1,\xi_0\}}\Big)^{\beta_2}.$$ \textit{Case 2: $0< \xi\le \xi_0$.} We have $0< s\le 1$ in this case and \begin{equation}\label{ki6} \frac{s^\alpha}{\xi_0} \le K_I(\xi)=\frac{s^\alpha}{a_{-1}+a_0s^{\alpha_1+\alpha}+\dots+a_Ns^{\alpha_N+\alpha}}\le \frac{s^\alpha}{a_{-1}}. \end{equation} By \eqref{xis}, \begin{equation}s a_{-1}s^{1-\alpha} \le \xi \le \xi_0s^{1-\alpha} \text{ which implies}\quad (\xi/\xi_0)^\frac1{1-\alpha}\le s\le (\xi/a_{-1})^\frac1{1-\alpha}. \end{equation}s Utilizing this in \eqref{ki6} yields \begin{equation}s \frac {\xi^{\beta_1}}{\xi_0^{1+\beta_1}}\le K_I(\xi) \le \frac {\xi^{\beta_1}}{a_{-1}^{1+\beta_1}} . \end{equation}s Since $\xi\le \xi_0$, we obtain \begin{equation}\label{ki5} C_4\frac {\xi^{\beta_1}}{(1+\xi)^{\beta_1+\beta_2}} \le K_I(\xi) \le \frac 1{a_{-1}^{1+\beta_1}} \xi^{\beta_1} \frac{(1+\xi_0)^{\beta_1+\beta_2}}{(1+\xi)^{\beta_1+\beta_2}}= C_5\frac {\xi^{\beta_1}}{(1+\xi)^{\beta_1+\beta_2}} , \end{equation} where $C_4=1/\xi_0^{1+\beta_1}$ and $C_5=(1+\xi_0)^{\beta_1+\beta_2}/a_{-1}^{1+\beta_1}$. Combining inequalities \eqref{kxi40}, \eqref{ki4} and \eqref{ki5}, we have for both cases that \begin{equation}\label{ki3} C_6K_*(\xi) \le K_I(\xi) \le C_7 K_*(\xi), \end{equation} where $C_6=\min\{C_1, C_4\}$ and $C_7=\max\{C_3, C_5\}$. Note that $$ C_6=\frac1{\max\big\{\xi_0^{1+\beta_1},\xi_0^{1-\beta_2}\big\}} \ge \frac1{\max\{1,\xi_0\}^{1+\beta_1}}=d_2 $$ and \begin{align*} C_7&= \max\Big\{\Big(\frac{\xi_0+1}{\xi_0}\Big)^{\beta_1} \frac{2^{\beta_2}}{a_N^{1-\beta_2} \min\{1,\xi_0\} ^{\beta_2}},\frac {(1+\xi_0)^{\beta_1+\beta_2}}{a_{-1}^{1+\beta_1}}\Big\} \\ &\le \max\Big\{\frac{(\xi_0+1)^{\beta_1} 2^{\beta_2}}{(\min\{1,a_{-1},a_N\})^{\beta_1+(1-\beta_2)+\beta_2}},\frac {(1+\xi_0)^{\beta_1+\beta_2}}{(\min\{1,a_{-1},a_N\})^{1+\beta_1}}\Big\}\\ &\le \frac{(1+\max\{1,\xi_0\})^{\beta_1+\beta_2}}{\min\{1,a_{-1},a_N\}^{1+\beta_1}}=d_3 . \end{align*} Hence we obtain \eqref{mc1} from \eqref{ki3}. Combining \eqref{mc1} with \eqref{P4} and \eqref{P5} gives \eqref{mc2}. The proof is complete. \end{proof} Combining \eqref{mc1} and (P3) gives the upper bound for $K(\xi)$ as \begin{equation}\label{Kbdd} K(\xi)\le d_4\stackrel{\rm def}{=} d_3K_*(\xi_c)\quad\forall \xi\in [0,\infty). \end{equation} \begin{lemma}\label{LemKp} Let $K=K_I$, $\hat{K}$, and $K_M$ as in \eqref{K-I}, \eqref{Ksim1}, and \eqref{KM}, respectively. Then for all $\xi>0$ one has \begin{equation}\label{Kderest} -\beta_2 \frac{K(\xi)}{\xi}\le K'(\xi) \le \beta_1 \frac{K(\xi)}{\xi} \end{equation} and, consequently, \begin{equation}\label{Kp2} |K'(\xi)|\le \max\{\beta_1,\beta_2\}\frac{K(\xi)}{\xi}. \end{equation} In case $K=\bar K$ in \eqref{K-bar}, the inequalities \eqref{Kderest} and \eqref{Kp2} hold for $0<\xi\ne Z_1,Z_2$. \end{lemma} \begin{proof} Let $\xi>0$, then $ s(\xi)=G^{-1}(\xi)>0$. First, consider $g=g_I$ and $K=K_I$. By the chain rule, we have from \eqref{Kpos} that \begin{equation}\label{Kp1} K'(\xi)=-\frac{g'(s(\xi))s'(\xi)}{g(s(\xi))^2}. \end{equation} Denote $s=s(\xi)$, then $s\cdot g(s)=\xi$. Hence, $s'g(s)+sg'(s)s'=1$ which yields \begin{equation}\label{s:prime} s'(\xi)=\frac{1}{g(s)+sg'(s)}. 
\end{equation} Substituting \eqref{s:prime} into \eqref{Kp1} yields \begin{align} K'(\xi)&=-\frac{g'(s)}{g(s)^2}\cdot \frac{1}{g(s)+sg'(s)} =-\frac{1}{g(s)}\cdot \frac{1}{sg(s)} \cdot \frac{g'(s)s}{g(s)+s g'(s)} \nonumber\\ &=-\frac{K(\xi)}{\xi}\cdot \frac{g'(s)s}{ g(s)+ s g'(s)}=\frac{K(\xi)}{\xi}\Big(\frac{g(s)}{ g(s)+ s g'(s)}-1\Big) \label{Kprime}. \end{align} Now, for any $s>0$, ones observe that \begin{equation}s g(s)+ s g'(s)= (1-\alpha) a_{-1}s^{-\alpha}+a_0+\sum_{i=1}^N a_i(\alpha_i+1)s^{\alpha_i} \end{equation}s and, by the fact $0<1-\alpha< 1+\alpha_i<1+\alpha_N$, have \begin{equation}s (1-\alpha)g(s)\le g(s)+ s g'(s)\le (1+\alpha_N)g(s). \end{equation}s Then \begin{equation}s -\beta_2= \frac 1{1+\alpha_N} -1\le\frac{g(s)}{ g(s)+ s g'(s)}-1\le \frac{1}{1-\alpha}-1=\beta_1. \end{equation}s Therefore, inequality \eqref{Kderest} follows this and \eqref{Kprime}. If $K=\bar K$ then \begin{equation}\label{Kbder} \bar K'(\xi)= \begin{cases} M_1\beta_1\xi^{\beta_1-1}=\beta_1 \bar K(\xi)/\xi,&\text{if } 0<\xi<Z_1,\\ 0,&\text{if } Z_1<\xi<Z_2,\\ K'_F(\xi),&\text{if } \xi>Z_2. \end{cases} \end{equation} We recall from \cite{ABHI1} that \begin{equation}\label{Fder} -\beta_2 \frac{K_F(\xi)}{\xi} \le K_F'(\xi)\le 0. \end{equation} Then the relation \eqref{Kderest} obviously follows \eqref{Kbder} and \eqref{Fder} for $0<\xi\ne Z_1,Z_2$. If $K=\hat{K}$ then \begin{equation} \frac{\beta_1}{\xi} \ge \frac{K'}{K}=(\ln K)' =\frac{\beta_1}\xi-\frac{b\beta_1\xi^{\beta_1-1}}{1+b\xi^{\beta_1}}-\frac{c\beta_2\xi^{\beta_2-1}}{1+c\xi^{\beta_2}} \ge -\frac{\beta_2}{\xi}\frac{c\xi^{\beta_2}}{1+c\xi^{\beta_2}}\ge -\frac{\beta_2}{\xi}. \end{equation} This leads to \eqref{Kderest}. Finally, consider $K=K_M$. Write $K_M(\xi)=K_F(\xi)M(\xi)$, where $M(\xi)=\bar{k}\xi^{\beta_1}/ (1+\bar{k}\xi^{\beta_1})$. On the one hand, $M'(\xi)\ge 0$, hence \begin{equation}s K_M'(\xi)=K_F'(\xi)M(\xi)+K_F(\xi)M'(\xi)\ge K_F'(\xi)M(\xi) \ge -\beta_2 \frac{K_F(\xi) M(\xi)}{\xi}=-\beta_2\frac{K_M(\xi)}{\xi}. \end{equation}s On the other hand, $K_F'(\xi)\le 0$ by \eqref{Fder}, and \begin{equation}s M'(\xi)=M(\xi)\Big(\frac{\beta_1}{\xi}-\frac{\bar k \beta_1 \xi^{\beta_1-1}}{1+\bar k \xi^{\beta_1}}\Big) \le \frac{\beta_1}{\xi} M(\xi) , \end{equation}s hence \begin{equation}s K_M'(\xi)=K_F'(\xi)M(\xi)+K_F(\xi)M'(\xi) \le K_F(\xi) M'(\xi) \le \frac{\beta_1}{\xi} K_F(\xi) M(\xi)=\beta_1 \frac{K_M(\xi)}{\xi}. \end{equation}s The proof is complete. \end{proof} \begin{corollary}\label{incor} Let $K=\bar K$, $K_I$, $\hat{K}$, $K_M$. Then the function $\xi^{m} K(\xi)$ is increasing on $[0,\infty)$ for any real number $m\ge \beta_2$. \end{corollary} \begin{proof} First, consider $K=K_I$, $\hat{K}$, or $K_M$. According to \eqref{Kderest}, \begin{equation}s (\xi^{m} K(\xi))'=m\xi^{m-1}K(\xi)+\xi^{m} K'(\xi)\ge m\xi^{m-1}K(\xi)-\beta_2\xi^{m-1}K(\xi)\ge 0 \end{equation}s for any $\xi>0$. Hence $\xi^m K(\xi)$ is increasing on $[0,\infty)$. If $K=\bar K$, then the statement is true on intervals not having $Z_1$ or $Z_2$ as an interior point. By continuity of $\xi^m \bar K(\xi)$ on $[0,\infty)$, the statement then holds true on $[0,\infty)$. \end{proof} \begin{lemma}[Monotonicity]\label{lemmono} Let $K=\bar K$, $K_I$, $\hat{K}$, $K_M$, then \begin{equation}\label{Kmono} \big (K(|y'|)y'-K(|y|)y\big )\cdot (y'-y) \ge \frac{d_5|y-y'|^{2+\beta_1}}{(1+|y|+|y'|)^{\beta_1+\beta_2}} \quad \forall y,y'\in \mathbb R^n, \end{equation} where \begin{equation}s d_5=\frac{ d_2(1-\beta_2)}{2^{\beta_1+1}(\beta_1+1)} . 
\end{equation}s \end{lemma} \begin{proof} Let $y\neq y'$ and denote by $[y,y']$ the line segment connecting $y$ and $y'$. \textbf{Case 1:} The origin does not belong to $[y,y']$. We parametrize $[y,y']$ by \begin{equation}s \gamma(t)=ty'+(1-t)y\quad \text{for }t \in[0,1]. \end{equation}s Define \begin{equation}s h(t)=K(|\gamma(t)|)\gamma(t)\cdot (y'-y) \quad \text{for } t\in[0,1]. \end{equation}s In case $K=K_I$, $\hat{K}$, $K_M$, function $h(t)\in C^1([0,1])$. When $K=\bar K$, $h(t)$ is continuous on $[0,1]$ and $h'(t)$ is piecewise continuous on $[0,1]$ with at most four points of jump discontinuity at which $|\gamma(t)|=Z_1$ or $Z_2$. By fundamental theorem of calculus, \begin{equation}\label{Idef} I\stackrel{\rm def}{=} [K(|y'|)y'-K(|y|)y] \cdot (y'-y) = h(1)-h(0) =\int_0^1 h'(t) dt. \end{equation} At $t$ where $h'(t)$ exists, we calculate \begin{equation}s h'(t)=K(|\gamma(t)|)|y'-y|^2+K'(|\gamma(t)|)\frac{|\gamma(t)\cdot (y'-y)|^2}{|\gamma(t)|}. \end{equation}s By \eqref{Kderest} and Cauchy-Schwarz inequality \begin{align*} h'(t) &\ge K(|\gamma(t)|)|y'-y|^2-\beta_2 \frac{K(|\gamma(t)|)}{|\gamma(t)|} \frac{|\gamma(t)\cdot (y'-y)|^2}{|\gamma(t)|}\\ &\ge K(|\gamma(t)|)|y'-y|^2-\beta_2 \frac{K(|\gamma(t)|)}{|\gamma(t)|} \frac{|\gamma(t)|^2|y'-y|^2}{|\gamma(t)|} =(1-\beta_2)K(|\gamma(t)|)|y'-y|^2. \end{align*} Applying Lemma \ref{lem21} and triangle inequality $|\gamma(t)|\le |y|+|y'|$, we infer \begin{equation}\label{hp} h'(t) \ge (1-\beta_2)|y'-y|^2 \frac{d_2 |\gamma(t)|^{\beta_1}}{(1+|\gamma(t)|)^{\beta_1+\beta_2} } \ge \frac{d_2 (1-\beta_2)|y'-y|^2}{(1+|y|+|y'|)^{\beta_1+\beta_2}} |\gamma(t)|^{\beta_1}. \end{equation} Together with \eqref{Idef}, it implies \begin{equation}\label{Igam1} I \ge \frac{d_2 (1-\beta_2)|y'-y|^2}{(1+|y|+|y'|)^{\beta_1+\beta_2}} \int_0^1 |\gamma(t)|^{\beta_1} dt. \end{equation} It remains to estimate the last integral. Let $z_0=\frac{y'+y}{2|y'-y|}$ and $u=\frac{y'-y}{|y'-y|}$. Note that $|u|=1$. Then we write \begin{align*} \int_0^1 |\gamma(t)|^{\beta_1} dt &=|y'-y|^{\beta_1} \int_0^1 |z_0+(t-\frac12)u|^{\beta_1}dt\\ &=|y'-y|^{\beta_1} \int_0^1 \Big(|z_0|^2+2(t-\frac12)z_0\cdot u +(t-\frac12)^2\Big)^{\beta_1/2}dt. \end{align*} If $z_0\cdot u\ge 0$ then \begin{equation}\label{ig1} \int_0^1 |\gamma(t)|^{\beta_1} dt \ge |y'-y|^{\beta_1} \int_{1/2}^1 \Big(t-\frac12\Big)^{\beta_1}dt = \frac{|y'-y|^{\beta_1}}{2^{\beta_1+1}(\beta_1+1) } . \end{equation} If $z_0\cdot u< 0$ then \begin{equation}\label{ig2} \int_0^1 |\gamma(t)|^{\beta_1} dt \ge |y'-y|^{\beta_1} \int_0^{1/2} \Big(\frac12-t\Big)^{\beta_1}dt = \frac{|y'-y|^{\beta_1}}{2^{\beta_1+1}(\beta_1+1) } . \end{equation} In both cases, we have \begin{equation}\label{Igam2} I\ge \frac{ d_2(1-\beta_2)}{2^{\beta_1+1}(\beta_1+1)}\cdot \frac{|y'-y|^{\beta_1+2}}{(1+|y|+|y'|)^{\beta_1+\beta_2}}, \end{equation} which proves \eqref{Kmono}. \textbf{Case 2:} The origin belongs to $[y,y']$. We replace $y'$ by some $y_{\epsilon} \neq 0$ such that $0 \not\in[y,y_{\epsilon}]$, and $y_{\epsilon} \to y'$ as $\epsilon \to 0$. Then let apply the inequality established in Case 1 for $y$ and $y_{\epsilon}$, then let $\epsilon \to 0$. \end{proof} \begin{remark} Our proof of \eqref{Igam2} from \eqref{Igam1} simplifies DiBenedetto's arguments in \cite{DiDegenerateBook}, p. 13, 14. \end{remark} \textbf{Degree Condition:} One of the following equivalent conditions \begin{equation}s \deg(g)\le \frac{4}{n-2},\quad \beta_2\le \frac 4{n+2},\quad 2\le (2-\beta_2)^*=\frac{(2-\beta_2)n}{n-2+\beta_2},\quad 2-\beta_2\ge \frac{2n}{n+2}. 
\end{equation}s (Above, $(2-\beta_2)^*$ is the Sobolev exponent corresponding to $2-\beta_2$.) Hereafter, we assume the Degree Condition. Then the Sobolev space $W^{1,2-\beta_2}(U)$ is continuously embedded into $L^2(U)$. Also, the Poincar\'e-Sobolev inequality \begin{equation}\label{PSi} \|u\|_{L^2(U)}\le C_{\rm PS}\|\nabla u\|_{L^{2-\beta_2}(U)} \end{equation} holds for all functions $u\in W^{1,2-\beta_2}(U)$ which vanish on the boundary $\Gamma$, where $C_{\rm PS}$ is a positive constant. In statements and calculations throughout, we use short-hand writing $\| \cdot\|=\|\cdot\|_{L^2(U)}$ and $\| u(t)\|_{L^p}=\|u(\cdot,t)\|_{L^p(U)}$ for a function $u(x,t)$ of $x$ and $t$. \section{The IBVP and estimates of its solutions}\label{boundsec} Let $K(\xi)$ be one of the functions $\bar{K}(\xi)$, $K_I(\xi)$, $\hat{K}(\xi)$, $K_M(\xi)$. Consider the following IBVP for the main PDE \eqref{peq}: \begin{equation}\label{IBVP} \begin{aligned} \begin{cases} p_t=\nabla\cdot (K(|\nabla p|)\nabla p) &\text{in } U\times (0,\infty),\\ p(x,0)=p_0(x), &\text{in } U\\ p=\psi(x,t), &\text{on } \partial U\times (0,\infty). \end{cases} \end{aligned} \end{equation} Dealing with the boundary condition, let $\Psi(x,t)$ be an extension of $\psi$ from $x\in\Gamma$ to $x\in \bar U$. Let $\bar p=p-\Psi$. Then \begin{equation}\label{IBVP2} \begin{cases} \bar p_t=\nabla\cdot (K(|\nabla p|)\nabla p)-\Psi_t &\text{in } U\times (0,\infty),\\ \bar p(x,0)=p_0(x)-\Psi(x,0), &\text{in } U\\ \bar p=0, &\text {on } \partial U\times (0,\infty). \end{cases} \end{equation} We will focus on estimates for $\bar p(x,t)$. The estimates for $p(x,t)$ can be obtained by simply using the triangle inequality $$|p(x,t)|\le |\bar p(x,t)|+|\Psi(x,t)|.$$ Also, our results are stated in terms of $\Psi(x,t)$. These can be rewritten in terms of $\psi(x,t)$ by using a specific extension. For instance, the harmonic extension is utilized in \cite{HI1} with the use of norm relations in \cite{JerisonKenig1995}. Throughout the paper, we will frequently use the following basic inequalities. By Young's inequality, we have \begin{equation}\label{bi3} x^\beta\le x^{\gamma_1}+x^{\gamma_2}\quad\text{for all }x>0,\ \gamma_1\le \beta \le \gamma_2, \end{equation} \begin{equation}\label{bi2} x^\beta\le 1+x^\gamma\quad\text{for all }x\ge 0, \ \gamma\ge\beta > 0. \end{equation} For any $r\ge1$, $x_1,x_2,\ldots,x_k\ge 0$, and $a,b\in\mathbb{R}^n$, \begin{equation}\label{bi0} (x_1+x_2+\ldots+x_k)^r\le k^{r-1}(x_1^r+x_2^r+\ldots+x_k^r), \end{equation} \begin{equation}\label{bi1} |a-b|^{r} \ge 2^{1-r}|a|^{r}-|b|^{r}. \end{equation} We also recall here a useful inequality from \cite{HIKS1}. \begin{definition}\label{Env} Given a function $f(t)$ defined on $I=[0,\infty)$. We denote by $Env(f)$ a continuous, increasing function $F(t)$ on $I$ such that $F(t) \ge f(t)$ for all $t \in I$. \end{definition} \begin{lemma}[\cite{HIKS1}, Lemma 2.7] \label{ODE2} Let $\theta>0$ and let $y(t)\ge 0, h(t)>0, f(t)\ge 0$ be continuous functions on $[0,\infty)$ that satisfy \begin{equation}s y'(t)\le -h(t)y(t)^\theta +f(t)\quad \text{for all } t>0. \end{equation}s Then \begin{equation}\label{ubode} y(t)\le y(0)+\big[Env(f(t)/h(t))\big]^\frac{1}{\theta}\text{ for all } t\ge 0. \end{equation} If $\int_0^\infty h(t)dt=\infty$ then \begin{equation}\label{ulode} \limsup_{t\rightarrow\infty} y(t)\le \limsup_{t\rightarrow\infty} \big[f(t)/h(t)\big]^\frac{1}{\theta}. 
\end{equation} \end{lemma} \noindent\textbf{Notation for constants.} In this section and section \ref{dependsec} below, the symbol $C$ denotes a \emph{generic} positive constant independent of the initial and boundary data; it may depend on the function $g(s)$ and the Poincar\'e-Sobolev constant $C_{PS}$ in \eqref{PSi}. In a particular proof, $C_0,C_1,\dots$ denote positive constants of this type but having their values fixed. \subsection{Energy estimates }\label{L2sec} In this subsection, we obtain $L^2$-estimates for the solution $p(x,t)$ for all time $t\ge 0$ and for $t\to\infty$. \begin{theorem}\label{Lem31} There exists a positive constant $C$ such that for all $t\ge 0,$ \begin{equation}\label{west} \|\bar p(t)\|^2\le \|\bar p(0)\|^2+ C \big[1+Env f(t)\big]^\frac{2}{2-\beta_2}, \end{equation} where \begin{equation} \label{fdef} f(t)=f[\Psi](t)\stackrel{\rm def}{=} \norm{\nabla \Psi(t)}^2+\norm{\Psi_t (t)}^{\frac{2-\beta_2}{1-\beta_2}}. \end{equation} Furthermore, \begin{equation}\label{nodelta} \begin{aligned} \limsup_{t\to \infty} \|\bar p(t)\|^2&\le C (1+\limsup_{t\to\infty} f(t))^\frac{ 2}{2-\beta_2}. \end{aligned} \end{equation} \end{theorem} \begin{proof} Multiplying both sides of first equation in \eqref{IBVP2} by $\bar p$, integrating over the domain $U$ and using integration by parts we find that \begin{equation}\label{weq} \begin{aligned} \frac 12\frac d{dt}\|\bar p\|^2 &=-\int_UK(|\nabla p|)\nabla p\cdot \nabla \bar p dx-\int_U\Psi_t \bar pdx\\ &= -\int_UK(|\nabla p|)|\nabla p|^2dx+\int_U K(|\nabla p|)\nabla p \cdot \nabla\Psi dx-\int_U\Psi_t \bar pdx. \end{aligned} \end{equation} Using Cauchy's inequality and the bound \eqref{Kbdd} for function $K(\cdot)$, we obtain \begin{align*} \int_U K(|\nabla p|)\nabla p \cdot \nabla\Psi dx &\le \frac12\int_UK(|\nabla p|)|\nabla p|^2dx + \frac12\int_UK(|\nabla p|)|\nabla \Psi|^2dx\\ &\le \frac12\int_UK(|\nabla p|)|\nabla p|^2dx + C\norm{\nabla\Psi}^2. \end{align*} Let $\varepsilon>0$. By H\"older's and Young's inequalities, \begin{equation}s -\int_U\Psi_t \bar pdx \le\|\bar p\| \norm{\Psi_t} \le \varepsilon\|\bar p\|^{2-\beta_2}+C\varepsilon^{-\frac{1}{1-\beta_2}}\norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}}. \end{equation}s Therefore, \begin{equation}s \frac d{dt}\|\bar p\|^2 \le -\int_UK(|\nabla p|)|\nabla p|^2dx+2\varepsilon\|\bar p\|^{2-\beta_2}+C\norm{\nabla\Psi}^2+C\varepsilon^{-\frac{1}{1-\beta_2}}\norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}}. \end{equation}s Let $\delta \in(0,1]$. By virtue of \eqref{mc2} and applying \eqref{bi1} to $r=2-\beta_2$, $a=\nabla \bar p$, $b=-\nabla \Psi$, we have \begin{align*} K(|\nabla p|)|\nabla p|^2 &\ge d_2 \Big(\frac{\delta}{1+\delta}\Big)^{\beta_1+\beta_2} (|\nabla p|^{2-\beta_2}-\delta^{2-\beta_2}) \\ &\ge d_2\Big(\frac \delta {1+\delta} \Big)^{\beta_1+\beta_2}\Big(2^{\beta_2-1}|\nabla \bar p|^{2-\beta_2}-|\nabla \Psi|^{2-\beta_2}-\delta^{2-\beta_2}\Big)\\ &\ge \frac{d_2\delta^{\beta_1+\beta_2}}{2^{\beta_1+1}}|\nabla \bar p|^{2-\beta_2}- d_2|\nabla \Psi|^{2-\beta_2}-d_2\delta^{2+\beta_1}. \end{align*} Hence, we obtain \begin{multline}\label{psi-delta} \frac d{dt}\norm {\bar p}^2 \le - \frac{d_2\delta^{\beta_1+\beta_2}}{2^{\beta_1+1}}\int_{U}|\nabla \bar p|^{2-\beta_2}dx+C\int_{U}|\nabla \Psi|^{2-\beta_2}dx + C{\delta^{2+\beta_1}}\\ +2\varepsilon\|\bar p\|^{2-\beta_2} +C\norm{\nabla\Psi}^2 +C\varepsilon^{-\frac{1}{1-\beta_2}}\norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}}. 
\end{multline} Using Poincar\'e--Sobolev's inequality \eqref{PSi}, we bound $\int_{U}|\nabla \bar p|^{2-\beta_2}dx$ from below by \begin{equation}\label{pP}\int_{U}|\nabla \bar p|^{2-\beta_2}dx \ge \frac{ \|\bar p\|^{2-\beta_2}}{C_{\rm PS}^{2-\beta_2} }. \end{equation} For comparison of $\nabla \Psi$-terms on the right-hand side of \eqref{psi-delta}, applying H\"older's inequality gives $$ \int_{U}|\nabla \Psi|^{2-\beta_2}dx \le C \|\nabla \Psi\|^{2-\beta_2}. $$ Then we have from \eqref{psi-delta} that \begin{equation}s \frac d{dt}\norm {\bar p}^2 \le - \Big(\frac {d_2 \delta^{\beta_1+\beta_2} } {2^{1+\beta_1} C_{\rm PS}^{2-\beta_2} } - 2\varepsilon\Big) \|\bar p\|^{2-\beta_2}+ C\Big({\delta^{2+\beta_1}}+\|\nabla \Psi\|^{2-\beta_2}+\norm{\nabla\Psi}^2+\varepsilon^{-\frac{1}{1-\beta_2}}\norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}}\Big) . \end{equation}s Selecting $\varepsilon =d_2 \delta^{\beta_1+\beta_2} /(2^{3+\beta_1} C_{\rm PS}^{2-\beta_2})$ yields \begin{equation}\label{psi-delta-2} \frac d{dt}\norm {\bar p}^2 \le - \frac {d_2 \delta^{\beta_1+\beta_2} } {2^{2+\beta_1} C_{\rm PS}^{2-\beta_2} } \|\bar p\|^{2-\beta_2}+ C\Big({\delta^{2+\beta_1}}+\|\nabla \Psi\|^{2-\beta_2}+\norm{\nabla\Psi}^2+\delta^{-\frac{\beta_1+\beta_2}{1-\beta_2}}\norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}}\Big) . \end{equation} Denote $y(t)=\|\bar p(t)\|^2$. We rewrite \eqref{psi-delta-2} as \begin{equation}\label{ydif} \frac {dy}{dt} \le - C_0 \delta^{\beta_1+\beta_2} y^\frac{2-\beta_2}2+ C\Big({\delta^{2+\beta_1}}+\|\nabla \Psi\|^{2-\beta_2}+\norm{\nabla\Psi}^2 +\delta^{-\frac{\beta_1+\beta_2}{1-\beta_2}}\norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}}\Big), \end{equation} where \begin{equation}s C_0=\frac {d_2} {2^{2+\beta_1} C_{\rm PS}^{2-\beta_2} } . \end{equation}s Select $\delta =1$. On the RHS of \eqref{ydif}, we apply inequality \eqref{bi2} to have \begin{equation}\label{psH} \|\nabla \Psi\|^{2-\beta_2}\le 1 +\|\nabla \Psi\|^2 . \end{equation} Then \begin{equation}\label{pd3} \frac {d y}{dt} \le - C_0 y^\frac{2-\beta_2}2+ C(1+f(t)). \end{equation} Applying \eqref{ubode} and \eqref{ulode} in Lemma \ref{ODE2} to \eqref{pd3}, we obtain \eqref{west} and \eqref{nodelta}, respectively. \end{proof} In case the boundary data is asymptotically small as $t\to\infty$, we prove in the next theorem that so is $\|\bar p(t)\|$. \begin{theorem}\label{pasmall} For any $\varepsilon>0$, there is $\delta_0>0$ such that if \begin{equation}\label{small0} \limsup_{t\to\infty} \|\nabla \Psi(t)\|\le \delta_0 \quad \text{and} \quad \limsup_{t\to\infty} \| \Psi_t( t)\|\le \delta_0, \end{equation} then \begin{equation}\label{wlim0} \limsup_{t\to\infty}\| \bar p(t) \| \le \varepsilon . \end{equation} Consequently, if \begin{equation}\label{limcond} \lim_{t\to\infty} \|\nabla \Psi(t)\|= \lim_{t\to\infty} \| \Psi_t( t)\|=0, \end{equation} then \begin{equation}\label{wlim} \lim_{t\to\infty}\| \bar p(t) \| = 0. \end{equation} \end{theorem} \begin{proof} Let $\delta\in(0,1]$. 
Applying \eqref{ulode} in Lemma \ref{ODE2} to \eqref{ydif}, and then using inequality \eqref{bi0} with $r=2/(2-\beta_2)$ give \begin{align}\label{f-delta} &\limsup_{t\to \infty} \|\bar p(t)\|^2 \notag \\ &\le C \Big\{\delta^{2-\beta_2} + \limsup_{t\to\infty} \Big[\delta^{-(\beta_1+\beta_2)}\big(\|\nabla \Psi\|^{2-\beta_2}+\norm{\nabla\Psi}^2\big)+\delta^{-\frac{(\beta_1+\beta_2)(2-\beta_2)}{1-\beta_2} } \norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}}\Big] \Big\}^\frac{ 2}{2-\beta_2} \notag \\ &\le C \Big\{\delta^{2} + \limsup_{t\to\infty} \Big[\delta^\frac{-2(\beta_1+\beta_2)}{2-\beta_2}\big(\|\nabla \Psi\|^2+\norm{\nabla\Psi}^\frac{4}{2-\beta_2}\big)+\delta^{-\frac{2(\beta_1+\beta_2)}{1-\beta_2} } \norm{\Psi_t}^{\frac{2}{1-\beta_2}}\Big] \Big\}. \end{align} Assume \eqref{small0} with $0<\delta_0\le 1$. It follows \eqref{f-delta} that \begin{equation}\label{small1} \limsup_{t\to \infty} \|\bar p(t)\|^2 \le C_1 \Big(\delta^{2}+3\delta^{-\frac{2(\beta_1+\beta_2)}{1-\beta_2} } \delta_0^2 \Big) \end{equation} for some $C_1>0$. We choose $\delta$ sufficiently small so that $C_1 \delta^2\le \varepsilon^2/2$, and then, with such $\delta$, choose $\delta_0$ to satisfy \begin{equation}s 3C_1 \delta^{-\frac{2(\beta_1+\beta_2)}{1-\beta_2} } \delta_0^2\le \varepsilon^2/2. \end{equation}s Therefore, the desired estimate \eqref{wlim0} follows \eqref{small1}. In the case \eqref{limcond} is satisfied, we have \eqref{wlim0} holds for any $\varepsilon>0$, which implies \eqref{wlim}. \end{proof} \subsection{Gradient estimates}\label{gradsec} This subsection is focused on estimating the $L^{2-\beta_2}$-norm for $\nabla p(x,t)$. The following function $H(\xi)$ will be crucial in our gradient estimates. \begin{definition} Define for $\xi\ge 0$, the auxiliary function \begin{equation}\label{Hdef} H(\xi)=\int_0^{\xi^2} K(\sqrt s) ds. \end{equation} \end{definition} We compare $H(\xi)$ with $K(\xi)\xi^2$ and $\xi^{2-\beta_2}$ in the following lemma. \begin{lemma} For any $\xi\ge 0$, \begin{equation}\label{HtoK} \frac{d_2}{d_3} K(\xi)\xi^2 -d_2 K_*(\xi_c) \xi_c^2\le H(\xi)\le 2 K(\xi)\xi^2. \end{equation} For any $\delta>0$ and $\xi\ge 0$, \begin{equation}\label{HK2} d_2 \Big(\frac\delta{1+\delta}\Big)^{\beta_1+\beta_2} \big(\xi^{2-\beta_2}-\delta^{2-\beta_2}\big)\le H(\xi) \le 2d_3 \xi^{2-\beta_2}. \end{equation} \end{lemma} \begin{proof} By Corollary \ref{incor}, the function $K(\xi)\xi$ is increasing, hence we have \begin{equation}\label{HKtwice} H(\xi)=2\int_0^{\xi} K(s)s ds\le 2K(\xi)\xi\int_0^\xi 1ds=2K(\xi)\xi^2. \end{equation} This proves the second inequality of \eqref{HtoK}. Combining this with the second inequality in \eqref{mc2} for $m=2$ yields the second inequality in \eqref{HK2}. By Lemma \ref{lem21}, \begin{equation} H(\xi)\ge d_2 \int_0^{\xi^2} K_*(\sqrt s) ds. \end{equation} For $\xi>\xi_c$, using properties (P2) and (P3) of $K_*(\xi)$ in section \ref{presec}, we have \begin{equation} H(\xi)\ge d_2K_*(\xi) \int_{\xi_c^2}^{\xi^2} 1 ds = d_2 K_*(\xi)(\xi^2-\xi_c^2) \ge d_2( K_*(\xi)\xi^2-K_*(\xi_c)\xi_c^2 ). \end{equation} For $\xi\le\xi_c$, according to Corollary~\ref{incor} the function $\xi^2 K(\xi)$ is increasing thus \begin{equation} d_2( K_*(\xi)\xi^2-K_*(\xi_c)\xi_c^2 ) \le 0\le H(\xi). \end{equation} Combining the above two inequalities we have \begin{equation}\label{HKstar} d_2(K_*(\xi)\xi^2 - K_*(\xi_c) \xi_c^2)\le H(\xi). 
\end{equation} Applying \eqref{mc1} in Lemma \ref{lem21} to compare $K_*(\xi)$ with $K(\xi)$ in \eqref{HKstar} yields \begin{equation} \frac{d_2}{d_3} K(\xi)\xi^2 -d_2K_*(\xi_c)\xi_c^2 \le H(\xi). \end{equation} Hence, we obtain the first inequality of \eqref{HtoK}. Next, we prove the first inequality of \eqref{HK2}. Since it trivially holds true for all $\xi\le\delta$, it suffices to consider $\xi>\delta$. From \eqref{HKtwice}, \begin{equation}s H(\xi)\ge 2 \int_{\delta}^{\xi}K(s)s ds = 2 \int_{\delta}^{\xi}K(s)s^{\beta_2}s^{1-\beta_2} ds. \end{equation}s According to Corollary~\ref{incor}, the function $K(s)s^{\beta_2}$ is increasing thus \begin{equation}s H(\xi)\ge 2 K(\delta)\delta^{\beta_2} \int_{\delta}^{\xi}s^{1-\beta_2}ds =\frac{2}{2-\beta_2} K(\delta)\delta^{\beta_2}\big(\xi^{2-\beta_2}-\delta^{2-\beta_2}\big) \ge K(\delta)\delta^{\beta_2}\big(\xi^{2-\beta_2}-\delta^{2-\beta_2}\big), \end{equation}s which, together with \eqref{mc1}, proves the first inequality of \eqref{HK2}. The proof is complete. \end{proof} Bounds for the gradient in terms of the initial and boundary data are obtained in the next theorem. \begin{theorem}\label{grad-est} For all $t\ge 0$, \begin{equation}\label{Hest2} \begin{aligned} \int_U |\nabla p(x,t)|^{2-\beta_2}dx &\le C\Big(1+\|\bar p(0)\|^2 +e^{-\frac t 2} \int_U |\nabla p(x,0)|^{2-\beta_2}dx \\ &\quad + \big[Env f(t) \big]^\frac{2}{2-\beta_2}+\int_0^t e^{-\frac 12(t-\tau)} \norm{\nabla\Psi_t(\tau)}^2d\tau\Big). \end{aligned} \end{equation} Furthermore, \begin{equation}\label{limsupG} \limsup_{t\to\infty} \int_U |\nabla p(x,t)|^{2-\beta_2}dx \le C ( 1+ \limsup_{t\to\infty}G_1(t)), \end{equation} where \begin{equation}s G_1(t) = G_1[\Psi](t) \stackrel{\rm def}{=} f(t)^\frac{2}{2-\beta_2}+\norm{\nabla\Psi_t(t)}^2. \end{equation}s \end{theorem} \begin{proof} Multiplying the first equation in the \eqref{IBVP2} by $\bar p_t$, integrating over the domain $U$, using integration by parts for the first integral on the RHS and by the fact that $\bar{p}_t=p_t-\Psi_t$; we have \begin{align*} \int_U \bar p_t^2 dx &= -\int_UK(|\nabla p|) \nabla p \cdot \nabla \bar p_tdx-\int_U \bar p_t\Psi_t\\ &= -\int_UK(|\nabla p|) \nabla p \cdot \nabla p_tdx +\int_UK(|\nabla p|) \nabla p \cdot \nabla\Psi_tdx-\int_U \bar p_t\Psi_t dx. \end{align*} For the first integral on the RHS, using definition \eqref{Hdef} of $H(\xi)$ we have \begin{equation}\label{wteq} \|\bar p_t\|^2 +\frac 12\frac d{dt}\int_U {\mathcal H}(x,t)dx =\int_UK(|\nabla p|) \nabla p \cdot \nabla\Psi_tdx-\int_U \bar p_t\Psi_t dx, \end{equation} where, for the sake of simplicity, we denoted $${\mathcal H}(x,t)=H(|\nabla p(x,t)|).$$ Let \begin{equation}\label{Edef} \mathcal E(t) = \int_U |\bar p(x,t)|^2 dx +\int_U {\mathcal H}(x,t) dx. \end{equation} Summing \eqref{weq} and \eqref{wteq} gives \begin{equation}\label{gradeq} \begin{aligned} \|\bar p_t\|^2 +\frac 12\frac d{dt}\mathcal E(t) &= -\int_UK(|\nabla p|)|\nabla p|^2dx +\int_U K(|\nabla p|)\nabla p \cdot \nabla(\Psi +\Psi_t) dx\\ &\quad -\int_U\Psi_t (\bar p +\bar p_t) dx=I_1+I_2+I_3. \end{aligned} \end{equation} It follows from \eqref{HtoK} that \begin{equation}\label{I1est} I_1\le - \frac 1 2\int_U {\mathcal H}(x,t) dx. \end{equation} Let $\varepsilon>0$. Applying Cauchy's inequality for $I_2$, and using the fact \eqref{Kbdd} that $K(\cdot)$ bounded \begin{align*} |I_2|\le \varepsilon \int_UK(|\nabla p|)|\nabla p|^2dx+ C\varepsilon^{-1} (\norm{\nabla\Psi}^2+\norm{\nabla\Psi_t}^2). 
\end{align*} Again using \eqref{HtoK}, \begin{equation}\label{I2est} |I_2|\le \varepsilon \int_U(C_1 {\mathcal H}(x,t)+ C_2)dx+ C\varepsilon^{-1} (\norm{\nabla\Psi}^2+ \norm{\nabla\Psi_t}^2), \end{equation} where $C_1=d_3/d_2$ and $C_2=d_3K_*(\xi_c)\xi_c^2$. For $I_3$, applying Cauchy's inequality gives \begin{equation}\label{I3est} |I_3|\le \frac 12 (\|\bar p\|^2 + \|\bar p_t\|^2 )+\norm{ \Psi_t}^2. \end{equation} Combining \eqref{gradeq}--\eqref{I3est}, we obtain \begin{equation}\label{Hw} \frac d{dt}\mathcal E(t) + \|\bar p_t\|^2 \le -(1-2\varepsilon C_1) \int_U{\mathcal H}(x,t)dx+\|\bar p\|^2 +2\varepsilon C_2|U| + C\varepsilon^{-1}(\norm{\nabla\Psi}^2+\norm{\nabla\Psi_t}^2)+ 2\norm{\Psi_t}^2. \end{equation} Selecting $\varepsilon=\delta/(4C_1)$ in \eqref{Hw} with $\delta\in(0,1]$ and using \eqref{Edef}, we find that \begin{align} \frac d{dt}\mathcal E(t) + \|\bar p_t\|^2 &\le -\frac 12 \int _U {\mathcal H}(x,t)dx + \|\bar p\|^2+C\delta + C\delta^{-1}(\norm{\nabla\Psi}^2+ \norm{\nabla \Psi_t}^2) + 2\norm{\Psi_t}^2 \nonumber \\ &\le -\frac 12 \mathcal E(t) + \frac 32 \|\bar p\|^2 +C\delta + C\delta^{-1}(\norm{\nabla\Psi}^2+ \norm{\nabla \Psi_t}^2) +2 \norm{\Psi_t}^2.\label{Hw2} \end{align} Letting $\delta=1$, \begin{equation}\label{Hsim} \frac d{dt}\mathcal E(t) + \|\bar p_t\|^2\\ \le -\frac 12 \mathcal E(t) + \frac 32 \|\bar p\|^2+C(1+\norm{\nabla\Psi}^2+ \norm{\nabla \Psi_t}^2+ \norm{\Psi_t}^2). \end{equation} Thanks to estimate \eqref{west} of $ \|\bar p(t)\|^2$ and by using \eqref{bi2} to bound \begin{equation}\label{pst1} \norm{\Psi_t}^2\le 1+ \norm{\Psi_t}^\frac{2-\beta_2}{1-\beta_2},\end{equation} we obtain \begin{equation}s \frac d{dt}\mathcal E(t) \le -\frac 12 \mathcal E(t)+ \frac 32 \|\bar p(0)\|^2+C+C(\big[Env f(t) \big]^\frac{2}{2-\beta_2}+\norm{\nabla\Psi_t(t)}^2). \end{equation}s It follows from Gronwall's inequality that \begin{equation}s \mathcal E(t) \le e^{-\frac t 2}\mathcal E(0)+ 3 \|\bar p(0)\|^2 +C+C\int_0^t e^{-\frac 12(t-\tau)} (\big[Env f(\tau) \big]^\frac{2}{2-\beta_2}+\norm{\nabla\Psi_t(\tau)}^2)d\tau, \end{equation}s which, by monotonicity of function $Env f(t)$ and definition of $\mathcal{E}(t)$, leads to \begin{multline*} \int_U {\mathcal H}(x,t)dx \le 4\|\bar p(0)\|^2 +e^{-\frac t 2}\int_U {\mathcal H}(x,0) dx \\+C+C[Env f(t) \big]^\frac{2}{2-\beta_2}+C\int_0^t e^{-\frac 12(t-\tau)} \norm{\nabla\Psi_t(\tau)}^2d\tau. \end{multline*} We bound ${\mathcal H}(x,t)$ from below by the first inequality in \eqref{HK2} with $\delta=1$, and bound $\mathcal H(x,0)$ from above by the second inequality of \eqref{HK2}. It results in \begin{multline*} C_3 \int_U |\nabla p(x,t)|^{2-\beta_2}dx - C_4\le 4\|\bar p(0)\|^2 +2d_3e^{-\frac t 2}\int_U |\nabla p(x,0)|^{2-\beta_2} dx \\ +C+C[Env f(t) \big]^\frac{2}{2-\beta_2}+C\int_0^t e^{-\frac 12(t-\tau)} \norm{\nabla\Psi_t(\tau)}^2d\tau \end{multline*} for constants $C_3=d_2/2^{\beta_1+\beta_2}$ and $C_4=C_3|U|$, and estimate \eqref{Hest2} follows. Neglecting $ \|\bar p_t\|^2$ on the LHS from \eqref{Hsim}, we have \begin{equation}\label{Hsim4} \frac d{dt}\mathcal E(t)\\ \le -\frac 12 \mathcal E(t) + \frac 32 \|\bar p\|^2+C(1+\norm{\nabla\Psi}^2+ \norm{\nabla \Psi_t}^2+ \norm{\Psi_t}^2). \end{equation} Applying \eqref{ulode} in Lemma \ref{ODE2} to \eqref{Hsim4}, we have \begin{equation}s \limsup_{t\to\infty} \mathcal E(t)\le 3\limsup_{t\to\infty}\|\bar p\|^2 +C\limsup_{t\to\infty}(1+\norm{\nabla\Psi}^2+\norm{\nabla\Psi_t}^2+\norm{\Psi_t}^2). 
\end{equation}s Combining this with \eqref{nodelta} yields \begin{equation}\label{limH0} \limsup_{t\to\infty} \int_U {\mathcal H}(x,t)dx \le C ( 1+ \limsup_{t\to\infty}G_1(t)). \end{equation} Again, by using the first inequality in \eqref{HK2} with $\delta=1$ to bound ${\mathcal H}(x,t)$ from below in terms of $|\nabla p(x,t)|^{2-\beta_2}$, we obtain estimate \eqref{limsupG} from \eqref{limH0}. \end{proof} Below is a counterpart of Theorem \ref{pasmall}, but for the gradient instead. \begin{theorem}\label{gradasmall} For any $\varepsilon>0$, there is $\delta_0>0$ such that if \begin{equation}\label{small2} \limsup_{t\to\infty} (\|\nabla \Psi(t)\|+ \| \Psi_t( t)\|+ \|\nabla \Psi_t( t)\|)\le \delta_0 \end{equation} then \begin{equation}\label{gradplim} \limsup_{t\to\infty}\int_U |\nabla p(x,t)|^{2-\beta_2} dx \le \varepsilon . \end{equation} Consequently, if \begin{equation}\label{small3} \lim_{t\to\infty} \|\nabla \Psi(t)\|=\lim_{t\to\infty} \| \Psi_t( t)\|=\lim_{t\to\infty} \|\nabla \Psi_t(t)\|=0 \end{equation} then \begin{equation}\label{wlim2} \lim_{t\to\infty}\int_U |\nabla p(x,t)|^{2-\beta_2} dx =0. \end{equation} \end{theorem} \begin{proof} First, we estimate the limit superior of $ \int_U {\mathcal H}(x,t)dx$ as $t\to\infty$. Let $\delta\in (0,1]$. Applying \eqref{ulode} in Lemma \ref{ODE2} to \eqref{Hw2}, we have \begin{equation}\label{Hw23} \limsup_{t\to\infty} \mathcal E(t)\le C\limsup_{t\to\infty}\|\bar p\|^2 +C\Big\{\delta^{-1}\limsup_{t\to\infty}(\norm{\nabla\Psi}^2+\norm{\nabla\Psi_t}^2)+ \limsup_{t\to\infty} \norm{\Psi_t}^2+ \delta\Big\}. \end{equation} It follows from \eqref{Hw23} and \eqref{f-delta} that \begin{equation}s \begin{split} &\limsup_{t\to\infty} \int_U {\mathcal H}(x,t)dx \\ & \le C \Big\{ \delta^2 + \delta^{-\frac{2(\beta_1+\beta_2)}{2-\beta_2} }\big(\limsup_{t\to\infty}\|\nabla \Psi\|^2+\limsup_{t\to\infty}\norm{\nabla\Psi}^{\frac{4}{2-\beta_2}}\big)+ \delta^{-\frac{2(\beta_1+\beta_2)}{1-\beta_2} } \limsup_{t\to\infty}\norm{\Psi_t}^{\frac{2}{1-\beta_2}} \Big\}\\ &\quad +C\Big\{ \delta^{-1}\limsup_{t\to\infty}(\norm{\nabla\Psi}^2+\norm{\nabla\Psi_t}^2)+ \limsup_{t\to\infty} \norm{\Psi_t}^2+\delta\Big\}. \end{split} \end{equation}s Thanks to the fact $\delta\le 1$, it follows that \begin{equation}\label{Hlim} \begin{split} \limsup_{t\to\infty} \int_U {\mathcal H}(x,t)dx &\le C \Big\{ \delta + \delta^{-\kappa } \limsup_{t\to\infty}\Big(\|\nabla \Psi(t)\|^2+\norm{\nabla\Psi(t)}^\frac{4}{2-\beta_2}\\ &\qquad + \norm{\Psi_t(t)}^{\frac{2}{1-\beta_2}} + \norm{\Psi_t(t)}^2 +\norm{\nabla\Psi_t(t)}^2\Big)\Big\}, \end{split} \end{equation} where $\kappa=\max\Big\{\frac{2(\beta_1+\beta_2)}{1-\beta_2},1\Big\}$. Let $\delta_1$ be any number in $(0,1]$. Applying the first inequality of \eqref{HK2} with $\delta=\delta_1$ gives \begin{equation}\label{Hd1} {\mathcal H}(x,t)\ge \frac{d_2 \delta_1^{\beta_1+\beta_2}}{2^{\beta_1+\beta_2}}|\nabla p(x,t)|^{2-\beta_2}-d_2 \delta_1^{2+\beta_1}. \end{equation} Combining this with \eqref{Hlim} yields \begin{align}\label{limsupGrad} \limsup_{t\to\infty} \int_U |\nabla p(x,t)|^{2-\beta_2}dx &\le C\delta_1^{-(\beta_1+\beta_2)} \limsup_{t\to\infty} \int_U {\mathcal H}(x,t)dx +C \delta_1^{2-\beta_2} \notag \\ &\le C \delta_1^{-(\beta_1+\beta_2)} \Big\{ \delta+ \delta^{-\kappa} \limsup_{t\to\infty}\Big( \|\nabla \Psi\|^2+\norm{\nabla\Psi}^\frac{4}{2-\beta_2} \notag \\ &\quad +\norm{\Psi_t}^{\frac{2}{1-\beta_2}} + \norm{\Psi_t}^2 +\norm{\nabla\Psi_t}^2\Big)\Big\} +C \delta_1^{2-\beta_2}. 
\end{align}
Assume \eqref{small2} holds with some $\delta_0\in(0,1]$; then \eqref{limsupGrad} yields
\begin{align}
\limsup_{t\to\infty} \int_U |\nabla p|^{2-\beta_2}dx
& \le C_5 \delta_1^{-(\beta_1+\beta_2)} \Big\{ \delta+ \delta^{-\kappa} (\delta_0^2+\delta_0^\frac{4}{2-\beta_2} + \delta_0^{\frac{2}{1-\beta_2}} )\Big\} + C_5\delta_1^{2-\beta_2} \nonumber\\
& \le C_5 \delta_1^{-(\beta_1+\beta_2)} (\delta+ 3\delta^{-\kappa} \delta_0^2) + C_5\delta_1^{2-\beta_2} \label{gpl}
\end{align}
where $C_5>0$ is independent of $\delta$, $\delta_0$, and $\delta_1$. First, we choose $\delta_1$ sufficiently small so that $C_5\delta_1^{2-\beta_2}\le \varepsilon/3$. With this $\delta_1$, choose $\delta\in (0,1]$ such that $C_5\delta_1^{-(\beta_1+\beta_2)} \delta\le \varepsilon/3$. Next, we choose $\delta_0>0$, much smaller than $\delta_1$ and $\delta$, satisfying
$$ C_5 \delta_1^{-(\beta_1+\beta_2)} \delta^{-\kappa} \delta_0^2 \le \varepsilon/9. $$
Then \eqref{gradplim} follows from \eqref{gpl}. Finally, under condition \eqref{small3}, the estimate \eqref{gradplim} holds for all $\varepsilon>0$, which proves \eqref{wlim2}.
\end{proof}
When time $t$ is large, we improve the estimates in Theorem \ref{grad-est} by deriving uniform Gronwall-type inequalities.
\begin{theorem}\label{theo39}
If $t\ge 1$, then
\begin{equation}\label{wteq8}
\int_{t-\frac 12}^t\|\bar p_t(\tau)\|^2 d\tau+ \int_U |\nabla p(x,t)|^{2-\beta_2}dx \le C \Big(1+\| \bar p(t-1)\|^2 +\int_{t-1}^{t} (f(\tau)+\norm{\nabla\Psi_t(\tau)}^2)d\tau\Big),
\end{equation}
and, consequently,
\begin{equation}\label{wteq9}
\int_U |\nabla p(x,t)|^{2-\beta_2} dx \le C\Big(1+\|\bar p(0)\|^2+ (Env f(t))^\frac{2}{2-\beta_2}+\int_{t-1}^{t}\norm{\nabla\Psi_t(\tau)}^2 d\tau\Big).
\end{equation}
\end{theorem}
\begin{proof}
On the right-hand side of \eqref{psi-delta}, we use \eqref{pP} again, but this time to bound $\|\bar p\|$ in terms of $\int_U |\nabla \bar p|^{2-\beta_2}dx$. Then, with the same choice of $\varepsilon$, we have, instead of \eqref{psi-delta-2},
\begin{equation}\label{psi-delta-3}
\frac d{dt}\norm {\bar p}^2 \le - \frac {d_2 \delta^{\beta_1+\beta_2} } {2^{2+\beta_1} } \int_U |\nabla \bar p|^{2-\beta_2}dx + C\Big({\delta^{2+\beta_1}}+\|\nabla \Psi\|^{2-\beta_2}+\norm{\nabla\Psi}^2+\delta^{-\frac{\beta_1+\beta_2}{1-\beta_2}}\norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}}\Big).
\end{equation}
Integrating \eqref{psi-delta-3} in time from $t-1$ to $t$, we have
\begin{multline}\label{new1}
\norm {\bar p(t)}^2+C_1 \delta^{\beta_1+\beta_2}\int_{t-1}^t \int_{U}|\nabla \bar p|^{2-\beta_2}dxd\tau \le \norm {\bar p(t-1)}^2 + C{\delta^{2+\beta_1}} \\
+C\int_{t-1}^t(\|\nabla \Psi\|^{2-\beta_2}+ \norm{\nabla\Psi}^2+\delta^{-\frac{\beta_1+\beta_2}{1-\beta_2}}\norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}})d\tau,
\end{multline}
where $C_1>0$ is independent of $\delta$. Let $\varepsilon\in(0,1]$. Applying Cauchy's inequality to the integrals on the RHS of \eqref{wteq}, we have
\begin{equation}\label{wteq1}
\|\bar p_t\|^2 +\frac 12\frac d{dt}\int_U {\mathcal H}(x,t)dx \le\varepsilon\int_UK(|\nabla p|) |\nabla p|^2dx+\frac{C}{\varepsilon}\int_UK(|\nabla p|) |\nabla\Psi_t|^2dx+\frac 12\|\bar p_t\|^2+\frac 12\norm{\Psi_t}^2 .
\end{equation}
Then using \eqref{Kbdd} for the second integral on the RHS of \eqref{wteq1}, we have
\begin{equation}\label{new2a}
\begin{aligned}
\|\bar p_t\|^2 + \frac d{dt}\int_U {\mathcal H}(x,t)dx &\le 2\varepsilon\int_UK(|\nabla p|) |\nabla p|^2dx+C(\varepsilon^{-1}\norm{\nabla\Psi_t}^2+ \norm{\Psi_t}^2).
\end{aligned}
\end{equation}
By virtue of \eqref{HtoK}, we find from \eqref{new2a} that
\begin{equation}\label{new3}
\begin{aligned}
\|\bar p_t\|^2 + \frac d{dt}\int_U {\mathcal H}(x,t)dx &\le C\varepsilon \int_U{\mathcal H}(x,t)dx+C(\varepsilon +\varepsilon^{-1}\norm{\nabla\Psi_t}^2+ \norm{\Psi_t}^2).
\end{aligned}
\end{equation}
Integrating \eqref{new3} in time from $s$ to $t$, where $s\in[t-1,t]$, we have
\begin{align}
&\int_s^t\|\bar p_t\|^2 d\tau + \int_U {\mathcal H}(x,t)dx \notag \\
&\quad \le \int_U {\mathcal H}(x,s)dx+ C\varepsilon\int_s^t\int_U{\mathcal H}(x,\tau)dxd\tau+ C\int_s^t(\varepsilon +\varepsilon^{-1}\norm{\nabla\Psi_t}^2+ \norm{\Psi_t}^2)d\tau \notag\\
&\quad \le \int_U {\mathcal H}(x,s)dx+C\int_{t-1}^t\int_U{\mathcal H}(x,\tau)dxd\tau + C\varepsilon+C\int_{t-1}^t(\varepsilon^{-1}\norm{\nabla\Psi_t}^2+ \norm{\Psi_t}^2)d\tau. \label{wteq5}
\end{align}
Integrating \eqref{wteq5} in $s$ from $t-1$ to $t$ shows that
\begin{equation}\label{wteq6}
\int_{t-1}^t\int_s^t\|\bar p_t\|^2 d\tau ds + \int_U {\mathcal H}(x,t)dx \le C\Big\{\int_{t-1}^t\int_U{\mathcal H}(x,\tau)dxd\tau+\varepsilon+\int_{t-1}^t(\varepsilon^{-1}\norm{\nabla\Psi_t}^2+ \norm{\Psi_t}^2)d\tau\Big\}.
\end{equation}
Estimating the first term of \eqref{wteq6} from below by
\begin{equation*}
\int_{t-1}^t\int_s^t\|\bar p_t\|^2 d\tau ds \ge \int_{t-1}^{t-\frac 12}\int_{t-\frac 12}^t\|\bar p_t\|^2 d\tau ds\ge \frac 12\int_{t-\frac 12}^t\|\bar p_t\|^2 d\tau,
\end{equation*}
we get
\begin{equation}\label{wteq06}
\begin{aligned}
\frac 12\int_{t-\frac 12}^t\|\bar p_t\|^2 d\tau + \int_U {\mathcal H}(x,t)dx &\le C\Big\{ \int_{t-1}^t\int_U{\mathcal H}(x,\tau)dxd\tau+\varepsilon+\int_{t-1}^t(\varepsilon^{-1}\norm{\nabla\Psi_t}^2+ \norm{\Psi_t}^2)d\tau\Big\}.
\end{aligned}
\end{equation}
Estimating the double integral on the RHS of \eqref{wteq06} by combining the second inequality of \eqref{HK2} with \eqref{new1}, we obtain
\begin{multline}\label{new8}
\frac12 \int_{t-\frac 12}^t\|\bar p_t(\tau)\|^2 d\tau+ \int_U {\mathcal H}(x,t)dx\\
\le C\Big\{\delta^{-(\beta_1+\beta_2)}\norm {\bar p(t-1)}^2 + \delta^{2-\beta_2}+\delta^{-(\beta_1+\beta_2)}\int_{t-1}^t (\|\nabla \Psi\|^{2-\beta_2}+ \norm{\nabla\Psi}^2+\delta^{-\frac{\beta_1+\beta_2}{1-\beta_2}}\norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}})d\tau\\
+\varepsilon+\int_{t-1}^t(\varepsilon^{-1}\norm{\nabla\Psi_t}^2+ \norm{\Psi_t}^2) d\tau \Big\}.
\end{multline}
Choosing $\varepsilon=\delta=1$ in \eqref{new8} and using \eqref{psH} and \eqref{pst1} gives
\begin{align*}
\int_{t-1/2}^t \|\bar p_t\|^2 d\tau + \int_U {\mathcal H}(x,t)dx &\le C\Big(1+ \|\bar p(t-1)\|^2+\int_{t-1}^{t}(\norm{\nabla\Psi}^2+\norm{\nabla\Psi_t}^2+ \norm{\Psi_t}^\frac{2-\beta_2}{1-\beta_2} ) d\tau\Big) \\
&= C \Big(1+\| \bar p(t-1)\|^2 +\int_{t-1}^{t} (f(\tau)+\norm{\nabla\Psi_t(\tau)}^2)d\tau\Big).
\end{align*}
Combining this with the first inequality in \eqref{HK2} (with $\delta=1$) yields \eqref{wteq8}. Combining \eqref{wteq8} with \eqref{west}, we obtain \eqref{wteq9}.
\end{proof}
As $t\to\infty$, we have the following alternative results.
\begin{corollary}\label{cor310}
One has
\begin{equation}\label{wteq11}
\limsup_{t\to\infty} \int_U |\nabla p(x,t)|^{2-\beta_2} dx \le C\Big(1+ \limsup_{t\to\infty} f(t)^\frac{2}{2-\beta_2}+\limsup_{t\to\infty}\int_{t-1}^{t}\norm{\nabla\Psi_t(\tau)}^2d\tau\Big).
\end{equation} Moreover, if \begin{equation}\label{small30} \lim_{t\to\infty} \|\nabla \Psi(t)\|=\lim_{t\to\infty} \| \Psi_t( t)\|= \lim_{t\to\infty} \int_{t-1}^t \|\nabla \Psi_t(\tau)\|^2 d\tau=0 \end{equation} then \begin{equation}\label{wlim20} \lim_{t\to\infty}\int_U |\nabla p(x,t)|^{2-\beta_2} dx =0. \end{equation} \end{corollary} \begin{proof} Combining \eqref{nodelta} and the limit superior of \eqref{wteq8}, we have \begin{multline*} \limsup_{t\to\infty} \int_U |\nabla p(x,t)|^{2-\beta_2}dx \\ \le C\Big(1+ \limsup_{t\to\infty} f(t)^\frac{2}{2-\beta_2}+\limsup_{t\to\infty}\int_{t-1}^{t} (\norm{\nabla \Psi}^{2}+\norm{\nabla\Psi_t}^2 +\norm{\Psi_t}^\frac{2-\beta_2}{1-\beta_2} )d\tau\Big), \end{multline*} which implies \eqref{wteq11}. Let $\delta_1\in (0,1]$. On the left-hand side of \eqref{new8}, we neglect the time derivative term and apply \eqref{Hd1} to bound $\mathcal H(x,t)$ from below in terms of $|\nabla p(x,t)|^{2-\beta_2}$. It yields \begin{multline}\label{graded} \delta_1^{\beta_1+\beta_2}\int_U |\nabla p(x,t)|^{2-\beta_2}dx \le C\Big\{ \delta_1^{2+\beta_1} +\delta^{2-\beta_2} + \delta^{-(\beta_1+\beta_2)}\norm {\bar p(t-1)}^2 \\ +\delta^{-(\beta_1+\beta_2)}\int_{t-1}^t (\|\nabla \Psi\|^{2-\beta_2}+ \norm{\nabla\Psi}^2+\delta^{-\frac{\beta_1+\beta_2}{1-\beta_2}}\norm{\Psi_t}^{\frac{2-\beta_2}{1-\beta_2}})d\tau\\ +\varepsilon+\int_{t-1}^t(\varepsilon^{-1}\norm{\nabla\Psi_t}^2+ \norm{\Psi_t}^2) d\tau \Big\}. \end{multline} Under condition \eqref{small30}, we have from Theorem \ref{pasmall} that $\lim_{t\to\infty}\|\bar p(t-1)\|=0$. Passing $t\to\infty$ in \eqref{graded} gives \begin{align*} \limsup_{t\to\infty} \int_U |\nabla p(x,t)|^{2-\beta_2}dx &\le C \delta_1^{-(\beta_1+\beta_2)}( \delta_1^{2+\beta_1} + \delta^{2-\beta_2}+\varepsilon)\\ &=C \delta_1^{2-\beta_2} + C\delta_1^{-(\beta_1+\beta_2)} ( \delta^{2-\beta_2}+\varepsilon). \end{align*} Letting $\varepsilon\to 0$, $\delta\to 0$, and then $\delta_1\to 0$, we obtain \eqref{wlim20}. \end{proof} \begin{remark} (a) Comparing with \eqref{Hest2}, the inequality \eqref{wteq9} explicitly shows the independence on the initial norm $\|\nabla p(0)\|_{L^{2-\beta_2}}$. Also, the term $\int_{t-1}^{t}\norm{\nabla\Psi_t(\tau)}^2 d\tau$ explicitly shows that the dependence on the second derivative $\nabla \Psi_t$ of the boundary data is not accumulative in time on the whole interval $(0,t)$. (b) Since $$\limsup_{t\to\infty} \int_{t-1}^{t}\norm{\nabla\Psi_t(\tau)}^2 d\tau \le \limsup_{t\to\infty} \norm{\nabla\Psi_t(t)}^2,$$ the results \eqref{wteq11}--\eqref{wlim20} in Corollary \ref{cor310} improve \eqref{limsupG} in Theorem~\ref{grad-est} and \eqref{small2}--\eqref{wlim2} in Theorem \ref{gradasmall}. \end{remark} \section{Continuous dependence on the initial and boundary data}\label{dependsec} In this section, we establish the continuous dependence of the solution of problem \eqref{IBVP} on the initial and boundary data. We consider $K(\xi)=\bar K(\xi)$, $K_I(\xi)$, $\hat{K}(\xi)$, $K_M(\xi)$. Let $p_1(x,t)$ and $p_2(x,t)$ be two solutions of \eqref{IBVP} with boundary data $\psi_1(x,t)$ and $\psi_2(x,t)$, respectively. For $i=1,2$, let $\Psi_i(x,t)$ be an extension of $\psi_i(x,t)$ to $\bar U\times [0,\infty)$, and $\bar p_i=p_i-\Psi_i$ . 
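For the reader's convenience, we note that, as in \eqref{IBVP2}, each $\bar p_i$ then satisfies
\begin{equation*}
\frac{\partial \bar p_i}{\partial t}= \nabla \cdot \big(K(|\nabla p_i|)\nabla p_i\big)-\Psi_{i,t} \quad \text{on } U\times(0,\infty),
\qquad \bar p_i=0 \quad \text{on } \Gamma\times(0,\infty),
\end{equation*}
so the system \eqref{Wequl} below is obtained simply by subtracting these two equations.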
Denote $$\Phi=\Psi_1-\Psi_2\quad\text{and}\quad \bar P=\bar p_1-\bar p_2=p_1-p_2-\Phi.$$ Then \begin{align}\label{Wequl} \frac{\partial \bar P}{\partial t}&= \nabla \cdot (K(|\nabla p_1|)\nabla p_1)-\nabla \cdot (K(|\nabla p_2|)\nabla p_2)-\Phi_t\quad \text{on }U\times(0,\infty),\\ \bar P&=0 \quad \text{on }\Gamma\times(0,\infty).\notag \end{align} Let \begin{equation}\label{tilU} \Lambda(t)=1+\|\nabla p_1( t)\|_{L^{2-\beta_2}}+ \|\nabla p_2(t)\|_{L^{2-\beta_2}}. \end{equation} For the difference between two boundary data, we define \begin{equation}s D(t)=\norm{\Phi_t(t)}+\norm{\nabla \Phi(t)}_{L^{2-\beta_2}}+\norm{\nabla \Phi(t)}_{L^{2+\beta_1}}^{2+\beta_1}. \end{equation}s First, we obtain the estimates for $\|\bar P(t)\|$ in terms of $D(t)$ and individual solutions $p_1(x,t)$, $p_2(x,t)$. \begin{proposition} \label{prop61} If $t\ge 0$ then \begin{equation}\label{one3} \| \bar P(t)\|^2 \le \| \bar P(0)\|^2+C\Big\{ Env \Big[ \Lambda(t)^{\beta_1+\beta_2} (\Lambda(t)^{1-\beta_2}+\| \bar p_1 (t) \|+\| \bar p_2 (t) \|) D(t) \Big] \Big\}^\frac{2}{2+\beta_1}. \end{equation} If $\displaystyle \int_0^\infty \Lambda(t)^{-(\beta_1+\beta_2)}dt =\infty$, then \begin{equation}\label{lmsp1} \begin{aligned} \limsup_{t\to\infty}\|\bar P(t)\|^2 &\le C \limsup_{t\to\infty} \Big\{ \Lambda(t)^{\beta_1+\beta_2} (\Lambda(t)^{1-\beta_2}+\|\bar p_1(t)\| + \|\bar p_2(t)\| ) D(t)\Big\}^\frac{2}{2+\beta_1}. \end{aligned} \end{equation} \end{proposition} \begin{proof} First, we find a differential inequality for $\| \bar P(t)\|^2$. We define \begin{equation}s \omega(x,t) = 1+|\nabla p_2(x,t)|+|\nabla p_1(x,t)|. \end{equation}s Multiplying \eqref{Wequl} by $\bar P$, integrating the resulting equation over $U$, and using integration by parts, we obtain \begin{align*} \frac 12 \frac{d}{dt}\int_U \bar P^2dx&= -\int_U[ K(|\nabla p_1|)\nabla p_1- K(|\nabla p_2|)\nabla p_2]\cdot \nabla \bar Pdx-\int_U \Phi_t\bar Pdx,\\ \intertext{thus,} \frac 12 \frac{d}{dt}\int_U \bar P^2dx&=- \int_U[ K(|\nabla p_1|)\nabla p_1- K(|\nabla p_2|)\nabla p_2]\cdot \nabla (p_1-p_2)dx\\ &\quad + \int_U[ K(|\nabla p_1|)\nabla p_1- K(|\nabla p_2|)\nabla p_2]\cdot \nabla \Phi dx-\int_U \Phi_t\bar Pdx. \end{align*} Using the monotonicity in Lemma \ref{lemmono} for the first integral on the RHS, and property \eqref{mc2} with $m=1$ for the second integral, we have \begin{equation}\label{DiffInq4W} \begin{aligned} \frac 12 \frac{d}{dt}\int_U \bar P^2dx &\le-d_5\int_U \frac{|\nabla (p_1- p_2)|^{2+\beta_1}}{\omega^{\beta_1+\beta_2}}dx +d_3 \int_U( |\nabla p_1|^{1-\beta_2}+|\nabla p_2|^{1-\beta_2} )|\nabla \Phi |dx\\ &\quad+\int_U |\Phi_t|(|\bar p_1|+|\bar p_2|)dx \stackrel{\rm def}{=} -J_1+J_2+J_3. \end{aligned} \end{equation} By \eqref{bi1}, \begin{equation}\label{J11} \begin{aligned} -J_1 \le - C_1\int_U \frac{|\nabla \bar P|^{2+\beta_1}}{\omega^{\beta_1+\beta_2}}dx + C\int_U \frac{|\nabla \Phi|^{2+\beta_1}}{\omega^{\beta_1+\beta_2}}dx\\ \le - C_1\int_U \frac{|\nabla \bar P|^{2+\beta_1}}{\omega^{\beta_1+\beta_2}}dx + C\int_U |\nabla \Phi|^{2+\beta_1}dx, \end{aligned} \end{equation} where $C_1=d_5/2^{1+\beta_1}$. Using H\"older's inequality for $J_2$ and $J_3$ terms in \eqref{DiffInq4W}, we find that \begin{equation}\label{J22} \begin{split} J_2&\le C\Big(\norm{\nabla p_1}_{L^{2-\beta_2}}^{1-\beta_2}+ \norm{\nabla p_2}_{L^{2-\beta_2}}^{1-\beta_2}\Big)\norm{\nabla \Phi}_{L^{2-\beta_2}}. \end{split} \end{equation} \begin{equation}\label{J33} J_3 \le C(\|\bar p_{1}\|+\|\bar p_{2}\| )\norm{\Phi_t}. 
\end{equation} Utilizing estimates \eqref{J11}--\eqref{J33} in \eqref{DiffInq4W}, we obtain \begin{equation}s \frac d{dt}\| \bar P\|^2 \le - 2C_1\int_U \frac{|\nabla \bar P|^{2+\beta_1}}{\omega^{\beta_1+\beta_2}}dx +C\Big(\norm{\nabla \Phi}_{L^{2+\beta_1}}^{2+\beta_1}+ \sum_{i=1,2}\|\nabla p_i\|_{L^{2-\beta_2}}^{1-\beta_2} \|\nabla \Phi\|_{L^{2-\beta_2}} + \sum_{i=1,2} \| \bar p_i \|\norm{\Phi_t}\Big). \end{equation}s Thus, \begin{equation}\label{DW} \begin{aligned} \frac d{dt}\| \bar P(t)\|^2 &\le - 2C_1\int_U \frac{|\nabla \bar P|^{2+\beta_1}}{\omega^{\beta_1+\beta_2}}dx +C( \Lambda(t)^{1-\beta_2} +\|\bar p_1(t)\|+\|\bar p_2(t)\|)D(t). \end{aligned} \end{equation} Applying H\"{o}lder's inequality with powers $\frac{2+\beta_1}{2-\beta_2}$ and $\frac{2+\beta_1}{\beta_1+\beta_2}$, we have \begin{equation}s \int_U |\nabla \bar P|^{2-\beta_2} dx =\int_U \frac{|\nabla \bar P|^{2-\beta_2}}{\omega^\frac{(\beta_1+\beta_2)(2-\beta_2)}{2+\beta_1}} \cdot \omega^\frac{(\beta_1+\beta_2)(2-\beta_2)}{2+\beta_1} dx \le \Big(\int_U \frac{|\nabla \bar P|^{2+\beta_1}}{\omega^{\beta_1+\beta_2}} dx\Big)^\frac{2-\beta_2}{2+\beta_1}\Big(\int_U \omega^{2-\beta_2} dx\Big)^\frac{\beta_1+\beta_2}{2+\beta_1}. \end{equation}s This implies \begin{equation}\label{em0} \Lambda(t)^{\beta_1+\beta_2} \int_U \frac{|\nabla \bar P|^{2+\beta_1}}{\omega^{\beta_1+\beta_2}} dx\ge \Big(\int_U |\nabla \bar P|^{2-\beta_2} dx\Big)^\frac{2+\beta_1}{2-\beta_2}. \end{equation} Combining \eqref{em0} with Poincar\'e-Sobolev inequality \eqref{PSi} yields \begin{equation}\label{embL2} \int_U \frac{|\nabla \bar P|^{2+\beta_1}}{\omega^{\beta_1+\beta_2}} dx \ge C_{\rm PS}^{-(2+\beta_1)} \Lambda(t)^{-(\beta_1+\beta_2)} \Big(\int_U \bar P^2 dx \Big)^\frac{2+\beta_1}2. \end{equation} Using \eqref{embL2} to estimate the first integral on the RHS of \eqref{DW}, we obtain for all $t>0$ that \begin{equation}\label{66} \frac d{dt}\| \bar P(t)\|^2 \le -C_2 \Lambda(t)^{-(\beta_1+\beta_2)} \|\bar P(t)\|^{2+\beta_1} +C( \Lambda(t)^{1-\beta_2}+\|\bar p_1(t)\|+\|\bar p_2(t)\|)D(t), \end{equation} where $C_2=2C_1 C_{\rm PS}^{-(2+\beta_1)}$. Denote $y(t)=\| \bar P(t)\|^2$ and rewrite \eqref{66} as \begin{equation}\label{6one} \frac {dy}{dt} \le -C_2 \Lambda(t)^{-(\beta_1+\beta_2)} y(t)^\frac{2+\beta_1}{2} +C( \Lambda(t)^{1-\beta_2}+\|\bar p_1(t)\|+\|\bar p_2(t)\|)D(t). \end{equation} Applying \eqref{ubode} in Lemma \ref{ODE2} to \eqref{6one} proves \eqref{one3}. Similarly, applying \eqref{ulode} in Lemma \ref{ODE2} to \eqref{6one}, we obtain \eqref{lmsp1}. \end{proof} Next, we combine Proposition \ref{prop61} with results in section \ref{boundsec} to derive more specific estimates. According to \eqref{west}, \begin{equation}\label{XvY} 1+\|\bar p_1\|+\|\bar p_2\|\le \mathcal{X}(t), \end{equation} where \begin{equation}s \mathcal{X}(t)= 1+ \sum_{i=1,2} \Big\{\|\bar p_i(0)\|+ \Big(Env f[\Psi_i](t)\Big)^\frac{1}{2-\beta_2}\Big\}. 
\end{equation}s By \eqref{Hest2} and \eqref{wteq9}, \begin{equation}\label{U2} \Lambda(t)\le C\hat {\mathcal Y}(t)^{\frac 1{2-\beta_2}}, \end{equation} where \begin{equation}s \hat {\mathcal Y}(t)= \begin{cases} \displaystyle 1+ \sum_{i=1,2}\Big(\|\bar p_i(0)\|^2 +e^{-\frac t 2}\|\nabla p_i(0)\|_{L^{2-\beta_2}}^{2-\beta_2} \\ \quad + (Env f[\Psi_i](t))^\frac{2}{2-\beta_2}+ \int_0^t e^{-\frac 12(t-\tau)} \| \nabla \Psi_{i,t} (\tau)\|^2 d\tau\Big) &\text{ if } 0\le t<1, \\ \displaystyle 1+\sum_{i=1,2} \Big(\|\bar p_i(0)\|^2+ (Env f[\Psi_i](t))^\frac{2}{2-\beta_2}+\int_{t-1}^{t}\norm{\nabla\Psi_{i,t}(\tau)}^2 d\tau\Big) &\text{ if } t\ge 1. \end{cases} \end{equation}s To simplify expressions of our estimates, we set \begin{equation}\label{Y0def} \mathcal Y_0= 1+ \sum_{i=1,2}\Big(\|\bar p_i(0)\|^2 +\|\nabla p_i(0)\|_{L^{2-\beta_2}}^{2-\beta_2} \Big), \end{equation} and define the function \begin{equation}\label{deftilY} \widetilde{\mathcal Y}(t)= \mathcal Y_0+\sum_{i=1,2} (Env f[\Psi_i](t))^\frac{2}{2-\beta_2}+ \begin{cases} \int_0^t e^{-\frac 12(t-\tau)} \sum_{i=1,2}\| \nabla \Psi_{i,t} (\tau)\|^2 d\tau&\text{ if } 0\le t<1, \\ \int_{t-1}^{t}\sum_{i=1,2}\norm{\nabla\Psi_{i,t}(\tau)}^2 d\tau &\text{ if } t\ge 1. \end{cases} \end{equation} Then $\hat {\mathcal Y}(t)\le \widetilde{\mathcal Y}(t)$ and $\mathcal X(t)\le \widetilde{\mathcal Y}^{1/2}(t)$. These properties and \eqref{XvY}, \eqref{U2} imply \begin{equation}\label{U3} \Lambda(t)\le C\widetilde {\mathcal Y}(t)^{\frac 1{2-\beta_2}}, \end{equation} \begin{equation}\label{YY} \Lambda(t)^{1-\beta_2}+\|\bar p_1(t)\|+\|\bar p_2(t)\|\le C(\widetilde{\mathcal Y}(t)^\frac{1-\beta_2}{2-\beta_2}+\widetilde{\mathcal Y}(t)^{1/2})\le C\widetilde{\mathcal Y}(t)^{1/2}. \end{equation} Above, we used the fact $1/2>(1-\beta_2)/(2-\beta_2)$ and $\widetilde{\mathcal Y}(t) \ge 1$ . For asymptotic estimates, we will use the following numbers \begin{equation}\label{tilAKdef} \begin{aligned} \widetilde{\mathcal{A}}&=\Big(\sum_{i=1,2}\limsup_{t\to\infty} f[\Psi_i](t)\Big)^\frac1{2-\beta_2},\quad \widetilde{\mathcal K}={\widetilde{\mathcal{A}}}^2 +\sum_{i=1,2}\limsup_{t\to\infty}\int_{t-1}^{t}\norm{\nabla\Psi_{i,t}(\tau)}^2d\tau,\\ \mathcal D&=\limsup_{t\to\infty}D(t). \end{aligned} \end{equation} Now we can estimate the $L^2$-norm of $\bar P(t)$ utterly in terms of the initial and boundary data. \begin{theorem}\label{theo62} For $t\ge 0$, \begin{equation}\label{neww} \|\bar P(t)\|^2 \le \|\bar P(0)\|^2+C\Big\{ Env \Big[ \widetilde{\mathcal Y}(t)^{\frac {\beta_1+\beta_2}{2-\beta_2}+\frac12} D(t)\Big] \Big\}^\frac{2}{2+\beta_1}. \end{equation} If $\widetilde{\mathcal K}<\infty$ then \begin{equation}\label{6one3} \limsup_{t\to\infty}\|\bar P(t)\|^2 \le C \Big\{(1 +\widetilde{\mathcal K})^{\frac {\beta_1+\beta_2}{2-\beta_2}+\frac12} \mathcal D\Big\}^\frac{2}{2+\beta_1}. \end{equation} \end{theorem} \begin{proof} It follows from \eqref{one3}, \eqref{U3} and \eqref{YY} that \begin{equation}s \|\bar P(t)\|^2 \le \|\bar P(0)\|^2+C\Big\{ Env \Big[ \widetilde{\mathcal Y}(t)^{\frac{\beta_1+\beta_2} {2-\beta_2}}\widetilde{\mathcal Y}(t)^\frac12 D(t) \Big] \Big\}^\frac{2}{2+\beta_1} \end{equation}s which implies \eqref{neww}. We have from limit estimates \eqref{wteq11} and \eqref{nodelta} that \begin{equation}\label{limsup} \limsup_{t\to\infty} \Lambda(t) \le C(1 +\widetilde{\mathcal K})^\frac1{2-\beta_2}, \quad \limsup_{t\to\infty}(1+\|\bar p_1(t)\| +\|\bar p_2(t)\|)\le C(1+\widetilde{\mathcal{A}})\le C(1+{\widetilde{\mathcal K}})^\frac{1}{2}. 
\end{equation} Combining \eqref{limsup} with \eqref{lmsp1} we obtain \begin{equation}\label{Lsup2} \limsup_{t\to\infty}\|\bar P(t)\|^2 \le C \Big\{(1 +\widetilde{\mathcal K})^\frac {\beta_1+\beta_2}{2-\beta_2} \Big[(1+\widetilde{\mathcal K})^\frac{1}{2}+(1 +\widetilde{\mathcal K})^\frac {1-\beta_2}{2-\beta_2}\Big] \mathcal D\Big\}^\frac{2}{2+\beta_1}. \end{equation} Note that $1/2>(1-\beta_2)/(2-\beta_2)$, then \eqref{6one3} follows \eqref{Lsup2}. \end{proof} \section{Structural stability}\label{strucsec} In this section, we consider the case $K(\xi)=K_I(\xi,\vec a)$ in \eqref{KIxa}, and study the dependence of the solutions to IBVP \eqref{IBVP} on the coefficient vector $\vec a$. Let $N\ge 1$ and the exponent vector $\vec{\alpha} =(-\alpha, 0, \alpha_1, \ldots, \alpha_N)$ be fixed. Since $\vec a$ satisfies condition \eqref{aicond}, we denote the set of admissible $\vec a$ by $S$, that is, \begin{equation}s S=\{ \vec{a} =(a_{-1}, a_0, \ldots, a_N): a_{-1}, a_N>0, a_0, a_1, \ldots, a_{N-1}\ge 0\}. \end{equation}s The following ``perturbed monotonicity'' is important for our structural stability in this section; it plays the same role as the monotonicity (Lemma \ref{lemmono}) for the continuous dependence in section \ref{dependsec}. Below, the notation $\vee$, resp. $\wedge$, denotes the maximum, resp. minimum, of two numbers or two vectors meaning coordinate-wise. \begin{lemma}[Perturbed Monotonicity]\label{lempm} Let $K_I(\xi,\vec a)$ be defined as in \eqref{KIxa}. For any coefficient vectors $\vec a^{(1)}$, $\vec a^{(2)}\in S$, and any $y,y'\in \mathbb R^n$, one has \begin{multline}\label{monoper} \big(K_I(|y'|,\vec a^{(1)})y'-K_I(|y|,\vec a^{(2)})y\big)\cdot (y'-y) \ge \frac{d_6 |y-y'|^{2+\beta_1}}{(1+|y|+|y'|)^{\beta_1+\beta_2}}\\ -d_7 K\big(|y|\vee |y'|,\vec a^{(1)}\wedge \vec a^{(2)}\big) \ (|y|\vee |y'|) \ |\vec{a}^{(1)}-\vec{a}^{(2)}|\ |y-y'|, \end{multline} where $d_6=d_6(\vec a^{(1)},\vec a^{(2)})$ and $d_7=d_7(\vec a^{(1)},\vec a^{(2)})$ are positive constants defined by \begin{align*} d_6&=\frac{ 1-\beta_2}{(\beta_1+1)\Big [2(N+2)\max\big\{1,a_i^{(j)}:i=-1,0,\ldots,N,\ j=1,2\big\}\Big]^{\beta_1+1}} , \\ d_7&=\frac{N+1}{(1-\alpha)\min\big\{a_{-1}^{(1)},a_{-1}^{(2)},a_N^{(1)},a_N^{(2)}\big\}}. \end{align*} \end{lemma} \begin{proof} Let $\vec a^{(1)}$, $\vec a^{(2)}\in S$ and $y,y'\in \mathbb R^n$. Same as in Lemma \ref{lemmono}, it suffices to consider the case when the line segment $[y,y']$ does not contain the origin. For $t \in[0,1]$, let $$\gamma(t)=ty+(1-t)y',\quad \vec{b}(t)=(b_{-1}(t),b_0(t),\ldots,b_N(t))\stackrel{\rm def}{=} t\vec{a}^{(1)}+(1-t)\vec{a}^{(2)},$$ and define $$z(t)=K(|\gamma(t)|,\vec{b}(t))\, \gamma(t)\cdot (y-y').$$ We have \begin{equation}s I \stackrel{\rm def}{=} [K(|y|,\vec{a}^{(1)})y-K(|y'|,\vec{a}^{(2)})y']\cdot (y'-y) =z(1)-z(0)=\int_0^1 z'(t) dt. \end{equation}s In calculations below, we use the following short-hand notation for partial derivatives $$X_s=\partial X/\partial s,\quad X_\xi=\partial X/\partial \xi,\quad X_{a_i}=\partial X/\partial a_i, \text{ and } X_{\vec a}=\partial X/\partial \vec a.$$ Elementary calculations give \begin{equation}\label{Ifm} I=\int_0^1 h_1(t)dt + \int_0^1 h_2(t) dt\stackrel{\rm def}{=} I_1+I_2, \end{equation} where \begin{align*} h_1(t)&=K(|\gamma(t)|,\vec{b}(t))|y-y'|^2 +K_\xi(|\gamma(t)|,\vec{b}(t))\frac{|\gamma(t)\cdot(y-y')|^2}{|\gamma(t)|},\\ h_2(t)&= K_{\vec{a}}(|\gamma(t)|,\vec{b}(t))(\vec{a}^{(1)}-\vec{a}^{(2)})\gamma(t)\cdot (y-y') . 
\end{align*} $\bullet$ \emph{Estimation of $I_1$.} By Lemma \ref{LemKp}, \begin{align*} K_\xi(|\gamma(t)|,\vec{b}(t))\ge -\beta_2 \frac{K(|\gamma(t)|,\vec{b}(t))}{|\gamma(t)|}. \end{align*} Same as the proof of \eqref{hp}, \begin{equation}\label{ks} h_1(t) \ge \frac{d_2(t) (1-\beta_2)|y'-y|^2}{(1+|y|+|y'|)^{\beta_1+\beta_2}} |\gamma(t)|^{\beta_1}, \end{equation} where \begin{equation}s d_2(t)=\frac1{(\max\{1,\sum_{i=-1}^N \vec{b}_i(t)\})^{1+\beta_1}} . \end{equation}s We can estimate $d_2(\vec b(t))\ge d_2^*$ for all $t\in[0,1]$, where \begin{equation}s d_2^*=\frac1{((N+2)\max\{1,a_i^{(j)}:i=-1,0,\ldots,N,\ j=1,2\})^{1+\beta_1}} . \end{equation}s Hence, it follows \eqref{ks} that \begin{equation}\label{int-h1} \int_0^1 h_1(t) dt \ge \frac{d_2^* (1-\beta_2)|y'-y|^2}{(1+|y|+|y'|)^{\beta_1+\beta_2}} \int_0^1 |\gamma(t)|^{\beta_1} dt. \end{equation} Same calculations in \eqref{ig1} and \eqref{ig2} of Lemma \ref{lemmono} show that \begin{equation}\label{int-gam} \int_0^1 |\gamma(t)|^{\beta_1} dt \ge \frac{ |y'-y|^{\beta_1}}{2^{\beta_1+1}(\beta_1+1) } . \end{equation} It follows from \eqref{int-h1} and \eqref{int-gam} that \begin{equation}\label{ksame} I_1= \int_0^1 h_1(t)dt \ge \frac{d_6|y-y'|^{2+\beta_1}}{(1+|y|+|y'|)^{\beta_1+\beta_2}} . \end{equation} $\bullet$ \emph{Estimation of $I_2$.} We find the partial derivative of $K(\xi,\vec{a})$ in $\vec{a}$. In calculations below, we denote, for convenience, $\alpha_{-1}=-\alpha$. For $i=-1,0,1,\ldots,N$, taking the partial derivative in $a_i$ of the identity $K(\xi,\vec{a})=1/g(s(\xi,\vec{a}),\vec{a})$, we find that \begin{align*} K_{a_i}(\xi,\vec{a})=-\frac{g_{a_i}+g_s\cdot s_{a_i}}{g^2}=-K(\xi,\vec{a})\frac{g_{a_i}+g_s\cdot s_{a_i}}{g}. \end{align*} Similarly, from $sg(s,\vec a)=\xi $, we have for $i=-1,0,1,\dots, N$, \begin{align*} s_{a_i}\cdot g+s\cdot ( g_{a_i}+g_s\cdot s_{a_i} )=0, \end{align*} which implies \begin{equation}s s_{a_i}=\frac{-s\cdot g_{a_i}}{g+s\cdot g_s}. \end{equation}s Then we obtain \begin{equation}\label{Kai} K_{a_i}(\xi,\vec{a})=-K(\xi,\vec{a})\frac{g_{a_i}+g_s\cdot \frac{-s\cdot g_{a_i}}{g+s\cdot g_s}}{g} =-K(\xi,\vec{a})\frac{ g_{a_i}}{g+s\cdot g_s}=-K(\xi,\vec{a})\frac{ s^{\alpha_i}}{g+s\cdot g_s}, \end{equation} and consequently, \begin{equation}s \sum_{i=-1}^N |K_{a_i}(\xi,\vec{a})| \le K(\xi,\vec{a}) \frac{ s^{-\alpha}+1+s^{\alpha_1}+\cdots+s^{\alpha_N} }{ (1-\alpha)a_{-1}s^{-\alpha}+a_0+(1+\alpha_1)a_1s^{\alpha_1}+\cdots+(1+\alpha_N)a_Ns^{\alpha_N} }. \end{equation}s Using \eqref{bi3}, we have \begin{equation}s 1,s^{\alpha_1},\ldots, s^{\alpha_{N-1}}\le s^{-\alpha}+s^{\alpha_N}. \end{equation}s Hence, \begin{equation}s \sum_{i=-1}^N |K_{a_i}(\xi,\vec{a})| \le K(\xi,\vec{a}) \frac{(N+1)(s^{-\alpha}+s^{\alpha_N}) }{ (1-\alpha)a_{-1}s^{-\alpha}+(1+\alpha_N)a_N s^{\alpha_N} } \le d (\vec{a})K(\xi,\vec{a}). \end{equation}s where \begin{equation}s d(\vec{a})=\frac{N+1}{\min\{(1-\alpha)a_{-1},(1+\alpha_N)a_N\}}. \end{equation}s Thus, \begin{equation}\label{Ka} |K_{\vec a}(\xi,\vec{a})| \le d(\vec{a}) K(\xi,\vec{a}). \end{equation} Now, by estimate \eqref{Ka} \begin{equation}\label{ih2} \begin{aligned} | h_2(t)| &\le |K_{\vec{a}}(|\gamma(t)|,\vec{b}(t))|\cdot |\vec{a}^{(1)}-\vec{a}^{(2)}|\cdot | \gamma(t) |\cdot |y-y'|\\ &\le d(\vec{b}(t)) K(|\gamma(t)|,\vec{b}(t))| \gamma(t) | |\vec{a}^{(1)}-\vec{a}^{(2)}| \cdot |y-y'|. 
\end{aligned} \end{equation} Since $a_i^{(j)}$ is positive for $i=-1,N$ and $j=1,2$, the number $d(\vec{b}(t))$, for all $t\in[0,1]$, can be bounded by \begin{equation}s d(\vec{b}(t)) \le \frac{N+1}{\min\{(1-\alpha)a_{-1}^{(1)},(1-\alpha)a_{-1}^{(2)},(1+\alpha_N)a_N^{(1)},(1+\alpha_N)a_N^{(2)}\}}\le d_7. \end{equation}s Applying Corollary \ref{incor} to $m=1\ge\beta_2$ gives that the function $\xi K(\xi,\vec{b}(t))$ is increasing in $\xi$. This, together with \eqref{ih2} and the fact $|\gamma(t)|\le |y|\vee |y'|$, yields \begin{equation}s | h_2(t)| \le d_7 K(|y|\vee |y'|,\vec{b}(t)) (|y|\vee |y'|) |\vec{a}^{(1)}-\vec{a}^{(2)}| \cdot |y-y'|. \end{equation}s Note from \eqref{Kai} that $K(\xi,\vec a)$ is decreasing in each $a_i$, hence \begin{equation} K(\xi,\vec{b}(t))\le K(\xi,\vec a^{(1)}\wedge \vec a^{(2)}). \end{equation} Therefore, \begin{equation}s | h_2(t)| \le d_7 K(|y|\vee |y'|,\vec a^{(1)}\wedge \vec a^{(2)}) (|y|\vee |y'|) |\vec{a}^{(1)}-\vec{a}^{(2)}| |y-y'|, \end{equation}s and consequently, \begin{equation} \label{Ke} I_2\ge - \int_0^1 | h_2(t)|dt \ge - d_7 K(|y|\vee |y'|,\vec a^{(1)}\wedge \vec a^{(2)}) (|y|\vee |y'|) |\vec{a}^{(1)}-\vec{a}^{(2)}| |y-y'|. \end{equation} Thus, we obtain \eqref{monoper} by combining \eqref{Ifm}, \eqref{ksame} and \eqref{Ke}. \end{proof} Let $\mathcal R$ be a compact subset of $S$ and let the boundary data $\psi(x,t)$ be fixed. For $i=1,2$, let $\vec a^{(i)}\in \mathcal R$, and let $p_i(x,t)$ be the solution of \eqref{IBVP} with $K=K(\xi,\vec a^{(i)})$. Our goal is to estimate $p_1(x,t)-p_2(x,t)$ in terms of $\vec a^{(1)}-\vec a^{(2)}$. We will use the results in section \ref{boundsec} for estimates of $p_1$ and $p_2$. Examining constants $d_2$, $d_3$, $d_4$ in section \ref{presec}, and $d_6$, $d_7$ in Lemma \ref{lempm}, we see that they can be made dependent only on $N$, $\alpha$, $\alpha_N$, $\beta_1$, $\beta_2$ and the following constants \begin{align*} \bar c_\mathcal R&=\max\{a_i: -1\le i\le N, \vec a=(a_i)_{i=-1}^N\in \mathcal R\}, \\ \underline c_\mathcal R&=\min\{a_{-1},a_N:\vec a=(a_i)_{i=-1}^N\in \mathcal R\}. \end{align*} Consequently, the constants $C,C_0,C_1,\dots$ in calculations and bounds in section \ref{boundsec} can be made dependent only on $N$, $\alpha$, $\alpha_N$, $\beta_1$, $\beta_2$, $\bar c_\mathcal R$, $\underline c_\mathcal R$ and $C_{\rm PS}$. Such dependence will also apply to the generic, positive constant $C$ in this section. Let $\Psi$ be the extension of $\psi$ as in section \ref{L2sec}. The calculations in section \ref{dependsec}, when used in this section, will correspond to $\Psi_1=\Psi_2=\Psi$. Let $P=p_1-p_2$. We have \begin{align}\label{ss1} \frac{\partial P}{\partial t}&= \nabla \cdot \Big(K(|\nabla p_1|,\vec{a}^{(1)})\nabla p_1-K(|\nabla p_2|,\vec{a}^{(2)})\nabla p_2\Big)\quad \text{on }U\times(0,\infty),\\ P&=0\quad \text{on }\Gamma\times(0,\infty).\notag \end{align} Let $f(t)$ be defined by \eqref{fdef}, $\Lambda(t)$ by \eqref{tilU}, and $\mathcal Y_0$ by \eqref{Y0def}. 
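Note that, because the boundary data $\psi$ is the same for both solutions, the function $\Phi=\Psi_1-\Psi_2$ of section \ref{dependsec} vanishes identically, hence $D(t)\equiv 0$ and
\begin{equation*}
P=p_1-p_2=\bar p_1-\bar p_2=\bar P.
\end{equation*}
In particular, the weighted estimate \eqref{embL2}, which was established for $\bar P$, applies to $P$ without change.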
We define, similar to \eqref{deftilY}, the function \begin{equation}s {\mathcal Y}(t)=\mathcal Y_0 + (Env f(t))^\frac{2}{2-\beta_2}+ \begin{cases} \int_0^t \norm{\nabla\Psi_t(\tau)}^2d\tau &\text{if } 0\le t<1, \\ \int_{t-1}^{t}\norm{\nabla\Psi_t(\tau)}^2 d\tau&\text{if } t\ge 1, \end{cases} \end{equation}s and, similar to \eqref{tilAKdef}, the numbers \begin{equation}\label{AK} \mathcal{A} =\limsup_{t\to\infty} f(t)^\frac1{2-\beta_2} \quad\text{and}\quad \mathcal K ={\mathcal{A}}^2+\limsup_{t\to\infty}\int_{t-1}^t \norm{ \nabla\Psi_{t}(\tau) }^2d\tau. \end{equation} \begin{theorem} \label{DepCoeff} \ \begin{enumerate} \item[\rm (i)] For $t\ge 0$, one has \begin{equation}\label{ssc2} \int_U |P(x,t)|^2 dx\le \int_U |P(x,0)|^2dx+C Env\,{ \mathcal Y}(t)^{\frac{2}{2-\beta_2}} |\vec{a}^{(1)}-\vec{a}^{(2)}|^\frac{2}{2+\beta_1} . \end{equation} \item[\rm (ii)] If $\mathcal K<\infty$ then \begin{equation}\label{ssc3} \limsup_{t\to\infty}\int_U |P(x,t)|^2 dx\le C(1+\mathcal K)^\frac{2}{2-\beta_2} |\vec{a}^{(1)}-\vec{a}^{(2)}|^\frac{2}{2+\beta_1}. \end{equation} \end{enumerate} \end{theorem} \begin{proof} Multiplying equation \eqref{ss1} by $P$, integrating over $U$, and by integration by parts, we find that \begin{align*} \frac 12\frac{d}{dt}\int_U P^2 dx= -\int_U (K(|\nabla p_1|,\vec{a}^{(1)})\nabla p_1-K(|\nabla p_2|,\vec{a}^{(2)})\nabla p_2)\cdot (\nabla p_1-\nabla p_2)dx. \end{align*} By the perturbed monotonicity \eqref{monoper} of $K(\xi,\vec{a})$, we have \begin{equation}\label{ssc4} \frac 12\frac{d}{dt}\int_U P^2 dx \le -d_6J+C |\vec{a}^{(1)}-\vec{a}^{(2)}|\int_UK(|\nabla p_1|\vee |\nabla p_2|,\vec a^{(1)}\wedge \vec a^{(2)}) (|\nabla p_1|\vee |\nabla p_2|)^2 dx, \end{equation} where \begin{equation}s J=\int_U\frac{|\nabla P|^{2+\beta_1}}{(1+|\nabla p_1|+|\nabla p_2|)^{\beta_1+\beta_2}} dx. \end{equation}s Using \eqref{mc2} with $m=2$ for the last integral of \eqref{ssc4}, we have \begin{equation}\label{keyEst} \frac 12\frac{d}{dt}\int_U P^2 dx \le -d_6J+C |\vec{a}^{(1)}-\vec{a}^{(2)}|\int_U ( |\nabla p_1|^{2-\beta_2}+ |\nabla p_2|^{2-\beta_2}) dx. \end{equation} In \eqref{keyEst}, estimating $J$ by \eqref{embL2} we have, same as \eqref{6one}, \begin{equation}\label{Hu} \frac 12\frac{d}{dt}\int_U P^2 dx\le - C_1\Big(\int_U P^2 dx\Big)^\frac{2+\beta_1}{2}\Lambda(t)^{-(\beta_1+\beta_2)}+C |\vec{a}^{(1)}-\vec{a}^{(2)}|\Lambda(t)^{2-\beta_2}, \end{equation} where $C_1=2d_6 C_{\rm PS}^{-(2+\beta_1)}$. (i) Same as \eqref{U3}, there is $C_2>0$ such that $\Lambda(t) \le C_2 {\mathcal Y}(t)^{\frac{1}{2-\beta_2}}$. Hence, we find that \begin{equation}\label{Hu1} \frac 12\frac{d}{dt}\int_U P^2 dx\le - C_3\Big(\int_U P^2 dx\Big)^\frac{2+\beta_1}{2}{\mathcal Y}(t)^{-\frac{\beta_1+\beta_2}{2-\beta_2}}+C |\vec{a}^{(1)}-\vec{a}^{(2)}| {\mathcal Y}(t) \end{equation} for some $C_3>0$. Applying \eqref{ubode} of Lemma \ref{ODE2} to differential inequality \eqref{Hu1}, we obtain \begin{equation}s \int_U |P(x,t)|^2 dx\le \int_U |P(x,0)|^2dx+C\Big\{ |\vec{a}^{(1)}-\vec{a}^{(2)}| Env\, { \mathcal Y}(t)^{1+\frac{\beta_1+\beta_2}{2-\beta_2}}\Big\}^\frac{2}{2+\beta_1} , \end{equation}s thus, \eqref{ssc2} follows. (ii) By the virtue of \eqref{wteq11} and \eqref{AK}, \begin{equation}\label{ssc5} \limsup_{t\to\infty} \Lambda(t) \le 1+ \sum_{i=1,2}\limsup_{t\to\infty} \norm{\nabla p_i}_{L^{2-\beta_2}} \le C \big( 1+ \mathcal K\big)^\frac{1}{2-\beta_2}<\infty. 
\end{equation}
Thus,
$$ \int_0^\infty\Lambda(t)^{-(\beta_1+\beta_2)}dt =\infty.$$
Applying Lemma \ref{ODE2} with $\theta=\frac{2+\beta_1}{2}$ to \eqref{Hu}, we have
\begin{equation}\label{baseineq3}
\begin{aligned}
\limsup_{t\to\infty}\int_U P^2 dx\le C\Big[ |\vec{a}^{(1)}-\vec{a}^{(2)}|\limsup_{t\to\infty} \Lambda(t)^{2+\beta_1} \Big]^\frac{2}{2+\beta_1}= C |\vec{a}^{(1)}-\vec{a}^{(2)}|^\frac{2}{2+\beta_1} \limsup_{t\to\infty} \Lambda^2(t).
\end{aligned}
\end{equation}
Therefore, we obtain \eqref{ssc3} from \eqref{baseineq3} and \eqref{ssc5}.
\end{proof}

\myclearpage

\end{document}
\begin{document} \raggedbottom \begin{abstract} Persistence diagrams play a fundamental role in Topological Data Analysis where they are used as topological descriptors of filtrations built on top of data. They consist in discrete multisets of points in the plane $\mathbb{R}^2$ that can equivalently be seen as discrete measures in $\mathbb{R}^2$. When the data is assumed to be random, these discrete measures become random measures whose expectation is studied in this paper. First, we show that for a wide class of filtrations, including the \v Cech and Rips-Vietoris filtrations, but also the sublevels of a Brownian motion, the expected persistence diagram, that is a deterministic measure on $\mathbb{R}^2$, has a density with respect to the Lebesgue measure. Second, building on the previous result we show that the persistence surface recently introduced in \cite{adams2017persistence} can be seen as a kernel estimator of this density. We propose a cross-validation scheme for selecting an optimal bandwidth, which is proven to be a consistent procedure to estimate the density. \end{abstract} \title{The density of expected persistence diagrams and its kernel based estimation} \section{Introduction} Persistent homology (see \cite{em-ph-17} for a review), a popular approach in Topological Data Analysis (TDA), provides efficient mathematical and algorithmic tools to understand the topology of some dataset (e.g. a point cloud or a time-series) by tracking the evolution of its homology at different scales. For instance, given a scale (or time) parameter $r$ and a point cloud $x = (x_1,\dots,x_n)$ of size $n$, a simplicial complex $\mathbb{K}K(x,r)$ is built on $\{1,\dots,n\}$ thanks to some procedure, such as, e.g., the nerve of the union of balls of radius $r$ centered on the point cloud or the Vietoris-Rips complex. Letting the scale $r$ increase gives rise to an increasing sequence of simplicial complexes $\mathbb{K}K(x)=(\mathbb{K}K(x,r))_r$ called a {\em filtration}. When a simplex is added in the filtration at a time $r$, it either "creates" or "fills" some hole in the complex. Persistent homology keeps track of the birth and death of these holes and encodes them as a {\em persistence diagram} that can be seen as a relevant and stable multi-scale topological descriptor of the data (see \cite{ccgmo-ghsssp-09,cso-psgc-13}). Similarly, one can create a filtration by considering the sublevel sets $\mathbb{K}K(f) = (f^{-1}(]-\infty,r]))_r$ of a given continuous real-valued function $f$ and one can track the evolution of the homology of the sublevels with very few requirements on the function $f$ (see \cite[Section 3.9]{chazal2016structure}). A persistence diagram $D_s$ is thus a collection of pairs of numbers, each of those pairs corresponding to the birth time and the death time of a $s$-dimensional hole. A precise definition of persistence diagram can be found, for example, in \cite{em-ph-17,chazal2016structure}. Mathematically, a diagram is a multiset of points in \begin{equation} \mathbb{D}elta \vcentcolon= \{\textbf{r} = (r_1,r_2),\ r_1 <r_2 < \infty\}. \end{equation} Note that in a general setting, points $\textbf{r}=(r_1,r_2)$ in diagrams can be "at infinity" on the line $\{r_2=\infty\}$ (e.g. a hole may never disappear). However, in the cases considered in this paper, this will be the case for a single point for $0$-dimensional homology, and this point will simply be discarded in the following. In statistical settings, one is often given a (i.i.d.) 
sample of random datasets (either point clouds or functions in this paper) $\mathbb{X}_1,\dots,\mathbb{X}_N$ and filtrations $\mathbb{K}K(\mathbb{X}_1), \dots, \mathbb{K}K(\mathbb{X}_N)$ built on top of them. We consider the set of persistence diagrams $D_s[\mathbb{K}K(\mathbb{X}_1)],\dots,D_s[\mathbb{K}K(\mathbb{X}_N)]$, which are thought to contain relevant topological information about the geometry of the underlying phenomenon generating the datasets. The space of persistence diagrams is naturally endowed with the so-called \emph{bottleneck distance} \cite{cohen2007stability} or some variants. However, the resulting metric space turns out to be highly non linear, making the statistical analysis of distributions of persistence diagrams rather awkward, despite several interesting results such as, e.g., \cite{turner2014frechet,balakrishnan2013statistical, chazal2014optimal}. A common scheme to overcome this difficulty is to create easier to handle statistics by mapping the diagrams to a vector space thanks to a feature map $\mathbb{P}si$, also called a representation (see, e.g., \cite{adams2017persistence, biscio2016accumulated, bubenik2015statistical, chazal2014stochastic, chen2015statistical, kusano2016persistence, reininghaus2015stable}). A classical idea to get information about the typical behavior of an observation is then to estimate the expectation $E[\mathbb{P}si(D_s[\mathbb{K}K(\mathbb{X}_i)])]$ of the distribution of representations using the mean representation \begin{equation} \overline{\mathbb{P}si}_N \vcentcolon= \frac{\sum_{i=1}^N \mathbb{P}si(D_s[\mathbb{K}K(\mathbb{X}_i)])}{N}. \end{equation} In this direction, \cite{bubenik2015statistical} introduces a representation called persistence landscape, and shows that it satisfies law of large numbers and central limit theorems. Similar theorems can be shown for a wide variety of representations: it is known that $\overline{\mathbb{P}si}_N$ is a consistent estimator of $E[\mathbb{P}si(D_s[\mathbb{K}K(\mathbb{X}_i)])]$. Although it may be useful for a classification task, this mean representation is still somewhat disappointing from a theoretical point of view. Indeed, what exactly $E[\mathbb{P}si(D_s[\mathbb{K}K(\mathbb{X}_i)])]$ is, has been scarcely studied in a non-asymptotic setting, i.e.~when the cardinality of the random point cloud $\mathbb{X}_i$ is fixed or bounded. When the observed data $\mathbb{X}_i$s are large point clouds, asymptotic results are well understood for some non-persistent descriptors of the data, such as the Betti numbers: a natural question in geometric probability is to study the asymptotics of the $s$-dimensional Betti numbers $\beta_s(\mathbb{K}K(\mathbb{X}_n,r_n))$ where $\mathbb{X}_n$ is a point cloud of size $n$ and under different asymptotics for $r_n$. Notable results on the topic include \cite{kahle2013limit, yogeshwaran2015topology, Yogeshwaran2017}. Considerably less results are known about the asymptotic properties of fundamentally persistent descriptors of the data: \cite{bobrowski2017maximally} finds the right order of magnitude of maximally persistent cycles and \cite{duy2016limit} shows the convergence of persistence diagrams on stationary process in a weak sense. \paragraph*{Contributions of the paper.} In this paper, representing persistence diagrams as discrete measures, i.e.~as element of the space of measures on $\mathbb{R}^2$, we establish non-asymptotic global properties of various representations and persistence-based descriptors. 
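For instance, the total persistence of a diagram (the sum of the quantities $r_2-r_1$ over the points $\textbf{r}=(r_1,r_2)$ of the diagram) and the number of points of the diagram whose persistence $r_2-r_1$ exceeds a fixed threshold are two simple such descriptors; both are of the form $\sum_{\textbf{r}\in D_s} f(\textbf{r})$ for some function $f$, the class of ``linear'' representations studied below.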
A multiset of points is naturally in bijection with the discrete measure defined on $\mathbb{R}^2$ created by putting Dirac measures on each point of the multiset, with mass equal to the multiplicity of the point. In this paper a persistence diagram $D_s$ is thus represented as a discrete measure on $\mathbb{D}elta$ and with a slight abuse of notation, we will write \begin{equation} D_s = \sum_{\textbf{r} \in D_s} \delta_{\textbf{r}}, \end{equation} where $\delta_{\textbf{r}}$ denotes the Dirac measure in $\textbf{r}$ and where, as mentioned above, points with infinite persistence are simply discarded. A wide class of representations, including the persistence surface \cite{adams2017persistence} (variants of this object have been also introduced \cite{chen2015statistical, kusano2016persistence, reininghaus2015stable}), the accumulated persistence function \cite{biscio2016accumulated} or persistence silhouette \cite{chazal2014stochastic} are conveniently expressed as $\mathbb{P}si(D_s) = D_s(f) \vcentcolon= \sum_{\textbf{r} \in D_s} f(\textbf{r})$ for some function $f$ on $\mathbb{D}elta$. Such representations, having particularly good theoretical properties, will be called \emph{linear representations}. Given a random set of points $\mathbb{X}$, the expected behavior of the linear representations $E[D_s[\mathbb{K}K(\mathbb{X})](f)]$ is well understood if the expectation $E[D_s[\mathbb{K}K(\mathbb{X})]]$ of the distribution of persistence diagrams is understood, where the expectation $E[\mu]$ of a random discrete measure $\mu$ is defined by the equation $E[\mu](B) = E[\mu(B)]$ for all Borel sets $B$ (see \cite{ledoux2013probability} for a precise definition of $E[\mu]$ in a more general setting). Our main contributions consists in showing that for two different kind of situations (e.g. filtrations built on point clouds in Theorem \ref{thm:main_thm} or filtration built with the sublevel sets of a Brownian motion in Theorem \ref{thm:brownian}), the expected persistence diagram $E[D_s[\mathbb{K}K(\mathbb{X})]]$, which is a measure on $\mathbb{D}elta \subset \mathbb{R}^2$, has a density $p$ with respect to the Lebesgue measure on $\mathbb{R}^2$. Therefore, $E[\mathbb{P}si(D_s[\mathbb{K}K(\mathbb{X})])]$ is equal to $\int pf$, and if properties of the density $p$ are shown (such as smoothness), those properties will also apply to the expectation of the representation $\mathbb{P}si$. Note that Theorem \ref{thm:brownian} is, to our knowledge, one of the first result about the \emph{persistent} homology of Gaussian random fields. The main argument of the proof of Theorem \ref{thm:main_thm} relies on the basic observation that for point clouds $\mathbb{X}$ of given size $n$, the filtration $\mathbb{K}K(\mathbb{X})$ can induce a finite number of ordering configurations of the simplices. The core of the proof consists in showing that, under suitable assumptions, this ordering is locally constant for almost all $\mathbb{X}$. As one needs to use geometric arguments, having properties only satisfied almost everywhere is not sufficient for our purpose. One needs to show that properties hold in a stronger sense, namely that the set on which it is satisfied is a dense open set. Hence, a convenient framework to obtain such properties is given by subanalytic geometry (see \cite{shiota1997geometry} for a monograph on the subject). Subanalytic sets are a class of subsets of $\mathbb{R}^d$ that are locally defined as linear projections of sets defined by analytic equations and inequations. 
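For instance, the value of the Vietoris-Rips filtration on an edge $\{i,j\}$, that is the map $(x_1,\dots,x_n)\mapsto \|x_i-x_j\|$, is a subanalytic function: its graph is a semianalytic (hence subanalytic) subset of $M^n\times \mathbb{R}$, being defined by the analytic conditions $t\geq 0$ and $t^2=\|x_i-x_j\|^2$.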
As most considered filtrations in Topological Data Analysis result from real algebraic constructions, such sets naturally appear in practice. On open sets where the combinatorial structure of the filtration is constant, the way the points in the diagrams are matched to pairs of simplices is fixed: only the times/scales at which those simplices appear change. Under an assumption of smoothness of those times, and using the coarea formula \cite[Chapter 3]{morgan2016geometric}, a classical result of geometric measure theory generalizing the change of variables formula in integrals, one then deduces the existence of a density for $E[D_s[\mathbb{K}K(\mathbb{X})]]$. Among the different linear representations, persistence surface is of particular interest. It is defined as the convolution of a diagram with a gaussian kernel. Hence, the mean persistence surface can be seen as a kernel density estimator of the density $p$ of Theorem \ref{thm:main_thm}. As a consequence, the general theory of kernel density estimation applies and gives theoretical guarantees about various statistical procedures. As an illustration, we consider the bandwidth selection problem for persistence surfaces. Whereas authors in \cite{adams2017persistence} state that any reasonable bandwidth is sufficient for a classification task, we give arguments for the opposite when no "obvious" shapes appear in the diagrams. We then propose a cross-validation scheme to select the bandwidth matrix. The consistency of the procedure is shown using Stone's theorem \cite{stone1984asymptotically}. This procedure is implemented on a set of toy examples illustrating its relevance. The paper is organized as follow: Section \ref{sec:preli} is dedicated to the necessary background in geometric measure theory and subanalytic geometry. Results are stated in Section \ref{sec:statements}, and Theorem \ref{thm:main_thm} is proved in Section \ref{sec:proof}. It is shown in Section \ref{sec:examples} that the main result applies to the \v Cech and Rips-Vietoris filtrations. Section \ref{sec:brownian} deals with the study of the persistence diagram of the Brownian motion whereas Section \ref{sec:stability} provides elements to understand the stability of the expected persistence diagrams with respect to the measure generating them. Section \ref{sec:kde} is dedicated to the statistical study of persistence surface, and numerical illustrations are found in Section \ref{sec:num}. All the technical proofs that are not essential to the understanding of the idea and results of the paper have been moved to the Appendix. \section{Preliminaries}\label{sec:preli} \subsection{The coarea formula} The proof of the existence of the density of the expected persistence diagram depends heavily on a classical result in geometric measure theory, the so-called coarea formula (see \cite[Chapter 3]{morgan2016geometric} for a gentle introduction to the subject). It consists in a more general version of the change of variables formula in integrals. Let $(M,\rho)$ be a metric space. The diameter of a set $A\subset (M,\rho)$ is defined by $\sup_{x,y\in A} \rho(x,y)$. \begin{definition} Let $k$ be a non-negative integer. For $A\subset M$, and $\delta >0$, consider \begin{equation} \mathcal{H}_k^\delta(A) \vcentcolon= \inf\left\{ \sum_i \alpha(k) \left(\frac{\diam(U_i)}{2}\right)^k, \ A\subset \ \bigcup_i U_i \mbox{ and } \diam(U_i)<\delta\right\}, \end{equation} where $\alpha(k)$ is the volume of the $k$-dimensional unit ball. 
The \emph{$k$-dimensional Hausdorff measure} on $M$ of $A$ is defined by $\mathcal{H}_k(A) \vcentcolon= \lim_{\delta\to 0} \mathcal{H}_k^\delta(A)$. \end{definition} If $M$ is a $d$-dimensional submanifold of $\mathbb{R}^D$, the $d$-dimensional Hausdorff measure coincides with the volume form associated with the ambient metric restricted to $M$. For instance, if $M$ is an open set of $\mathbb{R}^D$, the Hausdorff measure is the $D$-dimensional Lebesgue measure. \begin{theorem}[Coarea formula \cite{morgan2016geometric}]\label{thm:coarea} Let $M$ (resp.\ $N$) be a smooth Riemannian manifold of dimension $m$ (resp.\ $n$). Assume that $m\geq n$ and let $\Phi: M\to N$ be a differentiable map. Denote by $D\Phi$ the differential of $\Phi$. The Jacobian of $\Phi$ is defined by $J\Phi = \sqrt{\det((D\Phi)\times (D\Phi)^t)}$. For $f : M\to \mathbb{R}_+$ a non-negative measurable function, the following equality holds: \begin{equation} \int_M f(x) J\Phi(x) \mathrm{d}\mathcal{H}_m(x) = \int_N \left(\int_{x\in \Phi^{-1}(\{y\})} f(x) \mathrm{d} \mathcal{H}_{m-n}(x)\right) \mathrm{d}\mathcal{H}_n(y). \end{equation} \end{theorem} In particular, if $J\Phi>0$ almost everywhere, one can apply the coarea formula to $f\times(J\Phi)^{-1}$ to compute $\int_M f$. Having $J\Phi>0$ is equivalent to $D\Phi$ having full rank: most of the proof of our main theorem consists in showing that this property holds for certain functions $\Phi$ of interest. \subsection{Background on subanalytic sets} We now give basic results on subanalytic geometry, whose proofs are given in the Appendix. See \cite{shiota1997geometry} for a thorough review of the subject. Let $M\subset \mathbb{R}^D$ be a connected real analytic submanifold, possibly with boundary, whose dimension is denoted by $d$. \begin{definition} A subset $X$ of $M$ is \emph{semianalytic} if each point of $M$ has a neighborhood $U\subset M$ such that $X \cap U$ is of the form \begin{equation} \bigcup_{i=1}^p\bigcap_{j=1}^q X_{ij}, \end{equation} where $X_{ij}$ is either $f_{ij}^{-1}(\{0\})$ or $f_{ij}^{-1}((0,\infty))$ for some analytic functions $f_{ij} : U\to \mathbb{R}$. \end{definition} \begin{definition} A subset $X$ of $M$ is \emph{subanalytic} if for each point of $M$, there exist a neighborhood $U$ of this point, a real analytic manifold $N$ and a relatively compact semianalytic set $A$ of $N\times M$ such that $X\cap U$ is the projection of $A$ on $M$. A function $f: X\to \mathbb{R}$ is subanalytic if its graph is subanalytic in $M \times \mathbb{R}$. The set of real-valued subanalytic functions on $X$ is denoted by $\mathcal{S}(X)$. \end{definition} A point $x$ in a subanalytic subset $X$ of $M$ is smooth (of dimension $k$) if, in some neighborhood of $x$ in $M$, $X$ is an analytic submanifold (of dimension $k$). The maximal dimension of a smooth point of $X$ is called the dimension of $X$. The smooth points of $X$ of dimension $d$ are called regular, and the other points are called singular. The set $\mbox{Reg}(X)$ of regular points of $X$ is an open subset of $M$, possibly empty; the set of singular points is denoted by $\mathrm{Sing}(X)$. \begin{restatable}{lemma}{firstProp} \label{lem:basic_analytic} \begin{enumerate} \item[(i)] For $f\in \mathcal{S}(M)$, the set $A(f)$ on which $f$ is analytic is an open subanalytic set of $M$. Its complement is a subanalytic set of dimension smaller than $d$. \end{enumerate} Fix $X$ a subanalytic subset of $M$.
Assume that $f,g: X\to \mathbb{R}$ are subanalytic functions such that the image of a bounded set is bounded. Then, \begin{enumerate} \item[(ii)] The functions $fg$ and $f+g$ are subanalytic. \item[(iii)] The sets $f^{-1}(\{0\})$ and $f^{-1}((0,\infty))$ are subanalytic in $M$. \end{enumerate} \end{restatable} As a consequence of point (i), for $f \in \mathcal{S}(M)$, one can define its gradient $\nabla f$ everywhere but on some subanalytic set of dimension smaller than $d$. \begin{restatable}{lemma}{usefulLem} \label{usefulLem} Let $X$ be a subanalytic subset of $M$. If the dimension of $X$ is smaller than $d$, then $\mathcal{H}_d(X)=0$. \end{restatable} As a direct corollary, we always have \begin{equation}\label{regFull} \mathcal{H}_d(X) = \mathcal{H}_d(\mbox{Reg}(X)). \end{equation} Write $\mathcal{N}(M)$ for the class of subanalytic subsets $X$ of $M$ with $\mbox{Reg}(X)=\emptyset$. We have just shown that $\mathcal{H}_d \equiv 0$ on $\mathcal{N}(M)$: such sets form a special class of negligible sets. We say that a property is verified \emph{almost subanalytically everywhere} (a.s.e.) if the set on which it is not verified is included in a set of $\mathcal{N}(M)$. For example, Lemma \ref{lem:basic_analytic} implies that $\nabla f$ is defined a.s.e. \section{The density of expected persistence diagrams}\label{sec:statements} Let $n>0$ be an integer. Write $\mathcal{F}_n$ for the collection of non-empty subsets of $\{1,\dots,n\}$. Let $\varphi=(\varphi[J])_{J\in \mathcal{F}_n}:M^n \to \mathbb{R}^{\mathcal{F}_n}$ be a continuous function. The function $\varphi$ will be used to construct the persistence diagram and is called a \emph{filtering function}: a simplex $J$ is added to the filtration at time $\varphi[J]$. Write, for $x=(x_1,\dots,x_n)\in M^n$ and for $J$ a simplex, $x(J)\vcentcolon= (x_j)_{j\in J}$. We make the following assumptions on $\varphi$: \begin{enumerate} \item[(K1)] \emph{Absence of interaction:} For $J\in \mathcal{F}_n$, $\varphi[J](x)$ only depends on $x(J)$. \item[(K2)] \emph{Invariance by permutation:} For $J\in \mathcal{F}_n$ and for $(x_1,\dots,x_n)\in M^n$, if $\tau$ is a permutation of $\{1,\dots,n\}$ whose support is included in $J$, then $\varphi[J](x_{\tau(1)},\dots,x_{\tau(n)})=\varphi[J](x_1,\dots,x_n)$. \item[(K3)] \emph{Monotonicity:} For $J \subset J' \in \mathcal{F}_n$, $\varphi[J] \leq \varphi[J']$. \item[(K4)] \emph{Compatibility:} For a simplex $J \in \mathcal{F}_n$ and for $j\in J$, if $\varphi[J](x_1,\dots,x_n)$ is not a function of $x_j$ on some open set $U$ of $M^n$, then $\varphi[J] \equiv \varphi[J\backslash\{j\}]$ on $U$. \item[(K5)] \emph{Smoothness:} The function $\varphi$ is subanalytic and the gradient of each of its entries (which is defined a.s.e.) is non-vanishing a.s.e. \end{enumerate} Assumptions (K2) and (K3) ensure that a filtration $\mathcal{K}(x)$ can be defined thanks to $\varphi$ by: \begin{equation} \forall J \in \mathcal{F}_n,\ J \in \mathcal{K}(x,r) \Longleftrightarrow \varphi[J](x)\leq r. \end{equation} Assumption (K1) means that the time at which a simplex is added to the filtration depends only on the positions of its vertices, and not on the rest of the point cloud. For $J\in \mathcal{F}_n$, the gradient of $\varphi[J]$ is a vector field in $TM^n$. Its projection on the $j$th coordinate is denoted by $\nabla^j \varphi[J]$: it is a vector field in $TM$ defined a.s.e. The persistence diagram of the filtration $\mathcal{K}(x)$ for $s$-dimensional homology is denoted by $D_s[\mathcal{K}(x)]$.
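To make the construction concrete, the following sketch (ours, purely illustrative and not part of the formal setting) implements in Python a filtering function of Rips-Vietoris type, which satisfies (K1)-(K3), and lists the simplices of $\mathcal{F}_n$ in the order in which they enter the filtration; the dimension cap \texttt{max\_dim} is an assumption made only to keep the enumeration finite in practice.
\begin{verbatim}
from itertools import combinations
import numpy as np

def rips_filtering_function(x, J):
    """phi[J](x): largest pairwise distance among the points indexed by J.

    It depends only on x(J) (K1), is invariant under permutations of J (K2)
    and is monotone with respect to inclusion of simplices (K3).
    """
    if len(J) == 1:
        return 0.0
    pts = x[list(J)]
    return max(np.linalg.norm(pts[i] - pts[j])
               for i, j in combinations(range(len(pts)), 2))

def filtration_order(x, max_dim=2):
    """Simplices J sorted by phi[J](x), so that K(x, r) = {J : phi[J](x) <= r}."""
    n = len(x)
    simplices = [J for k in range(1, max_dim + 2)
                 for J in combinations(range(n), k)]
    return sorted((rips_filtering_function(x, J), J) for J in simplices)
\end{verbatim}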
\begin{theorem}\label{thm:main_thm} Fix $n\geq 1$. Assume that $M$ is a real analytic compact $d$-dimensional connected submanifold, possibly with boundary, and that $\mathbb{X}$ is a random variable on $M^n$ having a density with respect to the Hausdorff measure $\mathcal{H}_{dn}$. Assume that $\mathcal{K}$ satisfies the assumptions (K1)-(K5). Then, for $s \geq 0$, the expected measure $E[D_s[\mathcal{K}(\mathbb{X})]]$ has a density with respect to the Lebesgue measure on $\Delta$. \end{theorem} \begin{remark}The condition that $M$ is compact can be relaxed in most cases: it is only used to ensure that the subanalytic functions appearing in the proof satisfy the boundedness condition of Lemma \ref{lem:basic_analytic}. For the \v{C}ech and Rips-Vietoris filtrations, one can directly verify that the function $\varphi$ (and therefore the functions appearing in the proofs) satisfies this condition when $M= \mathbb{R}^d$. Indeed, in this case, the filtering functions are semi-algebraic. \end{remark} Classical filtrations such as the Rips-Vietoris and \v Cech filtrations do not satisfy the full set of assumptions (K1)-(K5). Specifically, they do not satisfy the second part of assumption (K5): all singletons $\{j\}$ are included at time $0$ in those filtrations, so that $\varphi[\{j\}]\equiv 0$ and the gradient $\nabla \varphi[\{j\}]$ is therefore null everywhere. This leads to a well-known phenomenon for Rips-Vietoris and \v Cech diagrams: all the non-infinite points of the diagram for $0$-dimensional homology lie on the vertical line $\{0\}\times [0,\infty)$. A theorem similar to Theorem \ref{thm:main_thm} still holds in this case: \begin{restatable}{theorem}{mainThmbisState} \label{thm:main_thm'}Fix $n\geq 1$. Assume that $M$ is a real analytic compact $d$-dimensional connected submanifold and that $\mathbb{X}$ is a random variable on $M^n$ having a density with respect to the Hausdorff measure $\mathcal{H}_{dn}$. Define assumption (K5'): \begin{enumerate} \item[(K5')] The function $\varphi$ is subanalytic and the gradient of each of its entries $\varphi[J]$ with $|J|>1$ is non-vanishing a.s.e. Moreover, for $\{j\}$ a singleton, $\varphi[\{j\}]\equiv 0$. \end{enumerate} Assume that $\mathcal{K}$ satisfies the assumptions (K1)-(K4) and (K5'). Then, for $s \geq 1$, $E[D_s[\mathcal{K}(\mathbb{X})]]$ has a density with respect to the Lebesgue measure on $\Delta$. Moreover, $E[D_0[\mathcal{K}(\mathbb{X})]]$ has a density with respect to the Lebesgue measure on the vertical line $\{0\}\times [0,\infty)$. \end{restatable} The proof of Theorem \ref{thm:main_thm'} is very similar to the proof of Theorem \ref{thm:main_thm}. It is therefore relegated to the appendix. One can easily generalize Theorem \ref{thm:main_thm} by letting the size of the point process $\mathbb{X}$ itself be random. For $n\in \mathbb{N}$, define a function $\varphi^{(n)} : M^n \to \mathbb{R}^{\mathcal{F}_n}$ satisfying the assumptions (K1)-(K5). If $x$ is a finite subset of $M$, define $\mathcal{K}(x)$ to be the filtration associated with $\varphi^{(|x|)}$, where $|x|$ is the size of $x$. We obtain the following corollary, proven in the appendix. \begin{restatable}{corollary}{mainCorState} \label{cor:main_cor} Assume that $\mathbb{X}$ has some density with respect to the law of a Poisson process on $M$ of intensity $\mathcal{H}_d$, such that $E\left[2^{|\mathbb{X}|}\right]<\infty$. Assume that $\mathcal{K}$ satisfies the assumptions (K1)-(K5).
Then, for $s \geq 0$, $E[D_s[\mathcal{K}(\mathbb{X})]]$ has a density with respect to the Lebesgue measure on $\Delta$. \end{restatable} The condition $E\left[2^{|\mathbb{X}|}\right]<\infty$ ensures the existence of the expected diagram and is for example satisfied when $\mathbb{X}$ is a Poisson process with finite intensity. As the way the filtration is created is smooth, one may wonder whether the density of $E[D_s[\mathcal{K}(\mathbb{X})]]$ is smooth as well: this is the case as long as the way the points are sampled is smooth. Recalling that a function is said to be of class $C^k$ if it is $k$ times differentiable with a continuous $k$th derivative, we have the following result. \begin{restatable}{theorem}{smoothnessState} \label{thm:smoothness} Fix $0\leq k\leq \infty$ and assume that $\mathbb{X}\in M^n$ has some density of class $C^k$ with respect to $\mathcal{H}_{nd}$. Then, for $s\geq 0$, the density of $E[D_s[\mathcal{K}(\mathbb{X})]]$ is of class $C^k$. \end{restatable} The proof is based on classical results on continuity under the integral sign as well as on the implicit function theorem; it can be found in the appendix. As a corollary of Theorem \ref{thm:smoothness}, we obtain the smoothness of various expected descriptors computed on persistence diagrams. For instance, the expected birth distribution and the expected death distribution have smooth densities under the same hypotheses, as they are obtained by projecting the expected diagram on some axis. Another example is the smoothness of the expected Betti curves. The $s$th Betti number $\beta^r_s(\mathcal{K}(x))$ of a filtration $\mathcal{K}(x)$ is defined as the dimension of the $s$th homology group of $\mathcal{K}(x,r)$. The Betti curves $r\mapsto \beta^r_s(\mathcal{K}(x))$ are step functions which can be used as statistics, as in \cite{umeda2017time} where they are used for a classification task on time series. With a little additional work (see the proof in the Appendix), the expected Betti curves are shown to be smooth. \begin{restatable}{corollary}{corBettiState} \label{cor:betti} Under the same hypotheses as in Theorem \ref{thm:smoothness}, for $s\geq 0$, the expected Betti curve $ r\mapsto E[\beta^r_s(\mathcal{K}(\mathbb{X}))]$ is a $C^k$ function. \end{restatable} \section{Proof of Theorem \ref{thm:main_thm}}\label{sec:proof} First, one can always replace $M^n$ by $A(\varphi)=\bigcap_{J\in \mathcal{F}_n} A(\varphi[J])$, as Lemma \ref{lem:basic_analytic} implies that it is an open set whose complement is in $\mathcal{N}(M^n)$. We will therefore assume that $\varphi$ is analytic on $M^n$. Given $x\in M^n$, the distinct values taken by $\varphi(x)$ on the filtration can be written $r_1 < \cdots < r_L$. Define $E_l(x)$ to be the set of simplices $J$ such that $\varphi[J](x) =r_l$. The sets $E_1(x),\dots,E_L(x)$ form a partition of $\mathcal{F}_n$ denoted by $\mathcal{A}(x)$. \begin{lemma}\label{lemma1} For a.s.e. $x\in M^n$ and for $l\geq 1$, $E_l(x)$ has a unique minimal element $J_l$ (for the partial order induced by inclusion). \end{lemma} \begin{proof} Fix $J,J' \subset \{1,\dots,n\}$ with $J\neq J'$ and $J\cap J'\neq \emptyset$. Consider the subanalytic functions $f: x\in M^n \mapsto \varphi[J](x)-\varphi[J'](x)$ and $g: x\in M^n \mapsto \varphi[J](x)-\varphi[J\cap J'](x)$. The set \begin{equation} C(J,J') \vcentcolon= \{ f=0 \} \cap \{ g>0\} \end{equation} is a subanalytic subset of $M^n$. Assume that it contains some open set $U$. On $U$, $\varphi[J](x)$ is equal to $\varphi[J'](x)$.
Therefore, it does not depend on the entries $x_j$ for $j \in J \backslash J'$. Hence, by assumption (K4), $\varphi[J](x)$ is actually equal to $\varphi[J\cap J'](x)$ on $U$. This contradicts the fact that $g>0$ on $U$. Therefore, $C(J,J')$ does not contain any open set, and all its points are singular: $C(J,J')$ is in $\mathcal{N}(M^n)$. If $J\cap J'=\emptyset$, similar arguments show that $C(J,J') = \{f=0\}$ cannot contain any open set: it would contradict assumption (K5). On the complement of \begin{equation} C\vcentcolon= \bigcup_{J\neq J' \subset\{1,\dots,n\}} C(J,J'), \end{equation} having $\varphi[J](x) = \varphi[J'](x)$ implies that this quantity is equal to $\varphi[J\cap J'](x)$. This shows the existence of a unique minimal element $J_l$ of $E_l(x)$ on the complement of $C$. This property is therefore satisfied a.s.e. \end{proof} \begin{lemma}\label{lemma2} A.s.e., $x \mapsto \mathcal{A}(x)$ is locally constant. \end{lemma} \begin{proof} Fix $\mathcal{A}_0 = \{E_1,\dots,E_L\}$ a partition of $\mathcal{F}_n$ induced by some filtration, with minimal elements $J_1,\dots,J_L$. Consider the subanalytic functions $F, G$ defined, for $x\in M^n$, by \[ F(x) = \sum_{l=1}^L \sum_{J\in E_l} \left( \varphi[J](x)-\varphi[J_l](x)\right) \mbox{ and } G(x) = \sum_{l\neq l'} \left(\varphi[J_l](x)-\varphi[J_{l'}](x)\right)^2.\] The set $\{x \in M^n, \mathcal{A}(x)=\mathcal{A}_0\}$ is exactly the set $C(\mathcal{A}_0)=\{F=0\} \cap \{G>0\}$, which is subanalytic. The sets $C(\mathcal{A}_0)$ for all partitions $\mathcal{A}_0$ of $\mathcal{F}_n$ define a finite partition of the space $M^n$. On each open set $\mbox{Reg}(C(\mathcal{A}_0))$, the map $x\mapsto \mathcal{A}(x)$ is constant. Therefore, $x\mapsto \mathcal{A}(x)$ is locally constant everywhere but on $\bigcup_{\mathcal{A}_0} \mathrm{Sing}(C(\mathcal{A}_0)) \in \mathcal{N}(M^n)$. \end{proof} Therefore, the space $M^n$ is partitioned into a negligible set of $\mathcal{N}(M^n)$ and some open subanalytic sets $U_1,\dots,U_R$ on which $\mathcal{A}$ is constant. \begin{lemma}\label{lem:grad_non_null} Fix $1\leq r \leq R$ and assume that $J_1,\dots,J_L$ are the minimal elements of $\mathcal{A}$ on $U_r$. Then, for $1\leq l \leq L$ and $j\in J_l$, $\nabla^j \varphi[J_l] \neq 0$ a.s.e. on $U_r$. \end{lemma} \begin{proof} By minimality of $J_l$, for $j\in J_l$, the subanalytic set $\{\nabla^j \varphi[J_l] = 0\} \cap U_r$ cannot contain an open set. It is therefore in $\mathcal{N}(M^n)$. \end{proof} Fix $1\leq r \leq R$ and write \[V_r = U_r\ \Big\backslash \left(\bigcup_{l=1}^L \bigcup_{j=1}^{|J_l|} \{ \nabla^j \varphi[J_l] = 0\}\right).\] The complement of $V_r$ in $U_r$ is still in $\mathcal{N}(M^n)$. For $x\in V_r$, $D_s[\mathcal{K}(x)]$ is written $\sum_{i=1}^N \delta_{\textbf{r}_i}$, where \[\textbf{r}_i = (\varphi[J_{l_1}](x),\varphi[J_{l_2}](x))=\vcentcolon (b_i,d_i).\] The integer $N$ and the simplices $J_{l_1}$, $J_{l_2}$ depend only on $V_r$. Note that $d_i$ is always larger than $b_i$, so that $J_{l_2}$ cannot be included in $J_{l_1}$. The map $x\mapsto \textbf{r}_i$ has its differential of rank 2. Indeed, take $j \in J_{l_2} \backslash J_{l_1}$. By Lemma \ref{lem:grad_non_null}, $\nabla^j \varphi[J_{l_2}](x) \neq 0$. Also, as $\varphi[J_{l_1}]$ only depends on the entries of $x$ indexed by $J_{l_1}$ (assumption (K1)), $\nabla^j \varphi[J_{l_1}](x)=0$. Furthermore, take $j'$ in $J_{l_1}$. By Lemma \ref{lem:grad_non_null}, $\nabla^{j'} \varphi[J_{l_1}](x)\neq 0$. This implies that the differential is of rank 2.
We now compute the expected $s$th persistence diagram for $s \geq 0$. Write $\kappa$ for the density of $\mathbb{X}$ with respect to the measure $\mathcal{H}_{nd}$ on $M^n$. Then, \begin{align*} E[D_s[\mathcal{K}(\mathbb{X})]] &= \sum_{r=1}^R E\left[ \mathbbm{1}\{\mathbb{X} \in V_r\} D_s[\mathcal{K}(\mathbb{X})] \right] =\sum_{r=1}^R E\left[ \mathbbm{1}\{\mathbb{X} \in V_r\} \sum_{i=1}^{N_r} \delta_{\textbf{r}_i} \right]\\ &= \sum_{r=1}^R \sum_{i=1}^{N_r} E\left[ \mathbbm{1}\{\mathbb{X} \in V_r\}\delta_{\textbf{r}_i} \right]. \end{align*} Write $\mu_{ir}$ for the measure $E[\mathbbm{1}\{\mathbb{X} \in V_r\} \delta_{\textbf{r}_i}]$. To conclude, it suffices to show that this measure has a density with respect to the Lebesgue measure on $\Delta$. This is a consequence of the coarea formula. Define the function $\Phi_{ir} : x\in V_r \mapsto \textbf{r}_i = (\varphi[J_{l_1}](x),\varphi[J_{l_2}](x))$. We have already seen that $\Phi_{ir}$ is of rank $2$ on $V_r$, so that $J\Phi_{ir}>0$. By the coarea formula (see Theorem \ref{thm:coarea}), for a Borel set $B$ in $\Delta$, \begin{align*} \mu_{ir}(B) = P(\Phi_{ir}(\mathbb{X}) \in B, \mathbb{X} \in V_r) &= \int_{V_r} \mathbbm{1}\{\Phi_{ir}(x) \in B\}\kappa(x)d\mathcal{H}_{nd}(x) \\ &= \int_{u\in B} \int_{x\in \Phi_{ir}^{-1}(\{u\})} (J\Phi_{ir}(x))^{-1}\kappa(x) d\mathcal{H}_{nd-2}(x) du. \end{align*} Therefore, $\mu_{ir}$ has a density with respect to the Lebesgue measure on $\Delta$ equal to \begin{equation}\label{densityLoc} p_{ir}(u) = \int_{x\in \Phi_{ir}^{-1}(\{u\})} (J\Phi_{ir}(x))^{-1}\kappa(x) d\mathcal{H}_{nd-2}(x). \end{equation} Finally, $E[D_s[\mathcal{K}(\mathbb{X})]]$ has a density equal to \begin{equation}\label{fullDens} p(u) = \sum_{r=1}^R \sum_{i=1}^{N_r} \int_{x\in \Phi_{ir}^{-1}(\{u\})} (J\Phi_{ir}(x))^{-1}\kappa(x) d\mathcal{H}_{nd-2}(x). \end{equation} \begin{remark} Notice that, for $n$ fixed, the above proof, and thus the conclusion, of Theorem \ref{thm:main_thm} also works if the diagrams are represented by normalized discrete measures, i.e.~probability measures defined by \begin{equation} D_s = \frac{1}{|D_s|} \sum_{\textbf{r} \in D_s} \delta_{\textbf{r}}. \end{equation} \end{remark} \section{Examples}\label{sec:examples} We now show that the Rips-Vietoris and \v{C}ech filtrations satisfy the assumptions (K1)-(K4) and (K5') when $M = \mathbb{R}^d$ is a Euclidean space. Note that similar arguments show that weighted versions of those filtrations (see \cite{buchet2016efficient}) satisfy assumptions (K1)-(K5). \subsection{Rips-Vietoris filtration} For the Rips-Vietoris filtration, $\varphi[J](x) = \max_{i,j\in J} \|x_i-x_j\|$. The function $\varphi$ clearly satisfies (K1), (K2) and (K3). It is also subanalytic, as it is the maximum of semi-algebraic functions. Let $x\in M^n$ and let $J\in \mathcal{F}_n$ be a simplex of size larger than one. Then, $\varphi[J](x)=\|x_i-x_j\|$ for some indices $i,j$. Those indices are locally stable, and $\varphi[J](x)=\varphi[\{i,j\}](x)$: hypothesis (K4) is satisfied. Furthermore, on this set, \begin{equation} \nabla \varphi[\{i,j\}](x) = \left(\frac{x_i-x_j}{\|x_i-x_j\|},\frac{x_j-x_i}{\|x_i-x_j\|}\right) \neq 0. \end{equation} Hence, (K5') is also satisfied: both Theorem \ref{thm:main_thm'} and Theorem \ref{thm:smoothness} apply to the Rips-Vietoris filtration. \subsection{\v Cech filtration} The ball centered at $x$ of radius $r$ is denoted by $B(x,r)$.
For the \v{C}ech filtration, \begin{equation} \varphi[J](x) = \inf \left\{ r>0 :\ \bigcap_{j\in J} B(x_j,r) \neq \emptyset \right\}. \end{equation} First, it is clear that (K1), (K2) and (K3) are satisfied by $\varphi$. We give without proof a characterization of the \v{C}ech complex. \begin{proposition} Let $x$ be in $M^n$ and fix $J\in \mathcal{F}_n$. If the circumcenter of $x(J)$ is in the convex hull of $x(J)$, then $\varphi[J](x)$ is the radius of the circumsphere of $x(J)$. Otherwise, its projection on the convex hull belongs to the convex hull of some subsimplex $x(J')$ of $x(J)$ and $\varphi[J](x)=\varphi[J'](x)$. \end{proposition} \begin{definition} The Cayley-Menger matrix of a $k$-simplex $x=(x_1,\dots,x_k)\in M^k$ is the symmetric matrix $(M(x)_{i,j})_{i,j}$ of size $k+1$, with zeros on the diagonal, such that $M(x)_{1,j}=1$ for $j>1$ and $M(x)_{i+1,j+1} = \|x_i-x_j\|^2$ for $i,j\leq k$. \end{definition} \begin{proposition}[see \cite{coxeter1930circumradius}]Let $x \in M^k$ be a point in general position. Then, the Cayley-Menger matrix $M(x)$ is invertible with $(M(x))^{-1}_{1,1} = -2r^2$, where $r$ is the radius of the circumsphere of $x$. The $k$ other entries of the first row of $M(x)^{-1}$ are the barycentric coordinates of the circumcenter. \end{proposition} Therefore, the map which sends a simplex to its circumcenter is analytic, and the set on which the circumcenter of a simplex belongs to the interior of its convex hull is a subanalytic set. On such a set, the function $\varphi$ is also analytic, as it is the square root of an entry of the inverse of a matrix which is polynomial in $x$. Furthermore, on the open set on which the circumcenter is outside the convex hull, we have shown that $\varphi[J](x)=\varphi[J'](x)$ for some subsimplex $J'$: assumption (K4) is satisfied. Finally, let us show that assumption (K5') is satisfied. The previous paragraph shows the subanalyticity of $\varphi$. For $J\in \mathcal{F}_n$ a simplex of size larger than one, there exists some subsimplex $J'$ such that $\varphi[J](x)$ is the radius of the circumsphere of $x(J')$. It is clear that there cannot be an open set on which this radius is constant. Thus, $\nabla \varphi[J]$ is non-null a.s.e. \section{The expected persistence diagram of a Brownian motion}\label{sec:brownian} Another class of random objects on which one can build filtrations is given by random functions. The most fundamental instance of such functions is the Brownian motion $B: t\in[0,1] \mapsto B_t\in \mathbb{R}$, defined as the continuous centered Gaussian process having covariance function $C(t_1,t_2)=\min(t_1,t_2)$ (see \cite[Chapter 2]{le2016brownian} for a concise and rigorous introduction). The continuity of $B$ ensures that the persistence module induced by the $0$-dimensional homology of its sublevel sets is \emph{q-tame} \cite[Section 3.9]{chazal2016structure}. In particular, the persistence diagram $D$ of this persistence module is well-defined, but may contain accumulation points close to the diagonal. From a measure point of view, the persistence diagram is then not a finite measure as in the previous sections, but a Radon measure on $\Delta$. \begin{theorem}\label{thm:brownian} The random persistence diagram $D$ of the $0$-dimensional homology of the sublevel sets of $B$ is such that its expectation $E[D]$ is well defined and has a density with respect to the Lebesgue measure.
\end{theorem} The result holds because the persistent Betti numbers, defined by \[\beta^{r,s} \vcentcolon= D((-\infty,r] \times [s,\infty)),\] have a particularly convenient expression in this setting. Indeed, $\beta^{r,s}$ is exactly the number of upward crossings of the band $[r,s]$ by the Brownian motion. The law of this quantity is explicitly known and happens to be continuous with respect to $r$ and $s$. Standard measure-theoretic arguments are then enough to conclude. \begin{figure} \caption{Example of a function $f:[0,1] \to \mathbb{R}$.} \label{fig:brownian} \end{figure} More precisely, for $a>0$, define $T(a) \vcentcolon= \inf \{t>0,\ B_t=a\}$. Then, $T(a)$ has a density with respect to the Lebesgue measure equal to \[ f_a(t) = \frac{a}{\sqrt{2\pi t^3}} \exp\left( -\frac{a^2}{2t}\right).\] Assume that $0<r<s$ (similar arguments hold when both numbers are negative or if $r<0<s$). Define $T_0=S_0 = 0$ and, for $i\geq 0$, \begin{align*} &T_{i+1} \vcentcolon= \inf \{ t\geq S_i,\ B_t = r\},\\ & S_{i+1} \vcentcolon= \inf\{ t \geq T_{i+1}, \ B_t =s\}. \end{align*} Then $\beta^{r,s}$ is equal to $\max\{ k\geq 0,\ T_k \leq 1\}$ (see Figure \ref{fig:brownian}) and $P(\beta^{r,s}\geq k) = P(T_k \leq 1)$. First, note that $T_1$ is equal to $T(r)$. Also, by the Markov property, for $i\geq 1$, conditionally on $S_i$, $T_{i+1}-S_i$ has the same law as $T(s-r)$, and so does $S_{i+1}-T_{i+1}$ conditionally on $T_{i+1}$. Therefore, for $k\geq 2$, \begin{align*} P(\beta^{r,s}\geq k) &= P(T_k \leq 1) = \int_{\Sigma_{2k-2}} f_r(t_1)f_{s-r}(s_1)f_{s-r}(t_2)\cdots f_{s-r}(s_{k-1})f_{s-r}(t_k) \mathrm{d} s \mathrm{d} t, \end{align*} where $\Sigma_{2k-2} \vcentcolon= \{ u=(t_1,\dots,t_k,s_1,\dots,s_{k-1}) \in \mathbb{R}^{2k-1}, \ t_i\geq 0,\ s_i\geq 0 \mbox{ and } \sum_{i=1}^k t_i + \sum_{i=1}^{k-1} s_i \leq 1\}$ is the unit simplex of dimension $2k-2$. Therefore, \begin{align*} E[\beta^{r,s}] &= \sum_{k \geq 1} P(\beta^{r,s} \geq k) = \sum_{k \geq 1} \int_{\Sigma_{2k-2}} f_r(t_1)f_{s-r}(s_1)f_{s-r}(t_2)\cdots f_{s-r}(s_{k-1})f_{s-r}(t_k) \mathrm{d} s \mathrm{d} t \\ &= \sum_{k\geq 1} \int_{\Sigma_{2k-2}} \frac{r(s-r)^{2k-2}}{\prod_{i=1}^{2k-1} \sqrt{2 \pi u_i^3}} \exp\left( -\frac{r^2}{2u_1} - \frac{(s-r)^2}{2}\sum_{i=2}^{2k-1} u_i^{-1} \right) \mathrm{d} u \\ &\vcentcolon= \sum_{k\geq 1} \int_{\Sigma_{2k-2}} G_k(u;r,s)\mathrm{d} u \vcentcolon= \sum_{k \geq 1} I_k(r,s) . \end{align*} Note first that this sum is finite. Indeed, for $b \geq 0$, the function $x\in [0,1] \mapsto x^{-1} + \frac{\ln(x)}{b}$ is bounded from below by $b^{-1}(1+\ln(b))$. Therefore, \begin{align*} G_k(u;r,s) &\leq \frac{r(s-r)^{2k-2}}{(2\pi)^{k-1/2}} \exp \bigg( -\frac{r^2}{2} \frac{3}{r^2} \left(1 + \ln\left( \frac{r^2}{3} \right)\right) \\ &\hspace{2cm} -(2k-2)\frac{(s-r)^2}{2} \frac{3}{(s-r)^2} \left(1 + \ln\left( \frac{(s-r)^2}{3} \right)\right) \bigg)\\ &= \frac{r(s-r)^{2k-2}}{(2\pi)^{k-1/2}} \exp \left( -(2k-1) \frac{3}{2}(1-\ln 3) - \ln(r^3)- (2k-2)\ln ((s-r)^3) \right) \\ &=\frac{r^{-2}(s-r)^{-4(k-1)}}{(2\pi)^{k-1/2}} BC^k \end{align*} for some constants $B,C$. As the volume of $\Sigma_{2k-2}$ decays factorially in $k$, the sum $\sum_{k\geq 1} I_k(r,s)$ is finite. Moreover, it is possible to find a local bound on $I_k(r,s)$ independent of $r$ and $s$: using classical results on the continuity of parametric integrals, one obtains that $E[\beta^{r,s}] = \mu(A_{r,s})$, where $\mu \vcentcolon= E[D]$ and $A_{r,s} \vcentcolon= (-\infty,r] \times [s,\infty)$, is continuous in $r$ and $s$.
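As an aside, the crossing characterization of $\beta^{r,s}$ lends itself to a direct numerical check. The following sketch (ours, not part of the argument; the Euler discretization, sample sizes and band $[r,s]$ are illustrative assumptions, and the discretization necessarily misses some crossings) estimates $E[\beta^{r,s}]$ by counting the alternating hitting times of the levels $r$ and $s$ along simulated paths.
\begin{verbatim}
import numpy as np

def beta_rs(path, r, s):
    """max{k >= 0 : T_k <= 1} for a discretized path started at 0, 0 < r < s."""
    k = 0
    state = "to_r_up"                 # initial ascent towards level r (T_1)
    for x in path:
        if state == "to_r_up" and x >= r:
            k += 1                    # T_1 reached
            state = "to_s"
        elif state == "to_s" and x >= s:
            state = "to_r_down"       # S_i reached
        elif state == "to_r_down" and x <= r:
            k += 1                    # T_{i+1} reached
            state = "to_s"
    return k

rng = np.random.default_rng(0)
n_steps, n_paths, r, s = 10_000, 2_000, 0.2, 0.5
dt = 1.0 / n_steps
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
print(np.mean([beta_rs(p, r, s) for p in paths]))   # estimate of E[beta^{r,s}]
\end{verbatim}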
Using similar bounds on the derivatives of $I_k(r,s)$, one can show that $(r,s) \mapsto \mu(A_{r,s})$ is a $C^1$ function. This implies that $\mu$ is absolutely continuous with respect to the Lebesgue measure on $\Delta$. \section{A few remarks on the stability of expected persistence diagrams}\label{sec:stability} In two different situations, namely for point clouds in Section \ref{sec:statements} and for sublevel sets of functions in Section \ref{sec:brownian}, we have described how to define a map $P \mapsto E[D(\mathcal{K}(\mathbb{X}))]$, where $\mathbb{X}$ has distribution $P$ and $P$ is a probability distribution on either $M^n$ or $C([0,1])$, the space of continuous functions defined on $[0,1]$. The continuity (or even Lipschitz continuity) of such a map with respect to some metrics is a natural question, having both theoretical and practical implications: in particular, it implies the stability of mean linear representations, which justifies the use of such representations to perform statistical inference. We give partial answers to this general question, with metrics based on the $L_1$ and $L_\infty$ distances. \begin{theorem} Let $n\geq 1$ and let $M$ be a real analytic compact $d$-dimensional connected submanifold. Let $\mathbb{X}_1$ (resp.\ $\mathbb{X}_2$) be a random variable on $M^n$ having a density $\kappa_1$ (resp.\ $\kappa_2$) with respect to the Hausdorff measure $\mathcal{H}_{dn}$. Assume that $\mathcal{K}$ satisfies the assumptions (K1)-(K5) (or (K5')). Let $\overline{p}_1$ be the density of the \emph{normalized} measure $E\left[\frac{D_s[\mathcal{K}(\mathbb{X}_1)]}{|D_s[\mathcal{K}(\mathbb{X}_1)]|}\right]$ and $\overline{p}_2$ be the density of $E\left[\frac{D_s[\mathcal{K}(\mathbb{X}_2)]}{|D_s[\mathcal{K}(\mathbb{X}_2)]|}\right]$. Also, let $p_1$ and $p_2$ be the non-normalized densities. Then, \begin{align} \|\overline{p}_1-\overline{p}_2\|_1 &\leq \|\kappa_1-\kappa_2\|_1, \text{ and }\label{eq:bound_normalized}\\ \|p_1-p_2\|_1 &\leq C_n\mathcal{H}_d(M)^n\|\kappa_1-\kappa_2\|_\infty, \label{eq:bound_infty} \end{align} where $C_n$ is the expected number of points in the persistence diagram built with the filtration $\mathcal{K}$ on $n$ i.i.d. uniform points on $M$. \end{theorem} It is conjectured (and even proved for $M=[0,1]^d$ in a parallel work \cite{divol2018choice}) that $C_n$ is of order $n$ when $\mathcal{K}$ is either the Rips or the \v Cech filtration. \begin{proof} Consider first the non-normalized case. Given the expression \eqref{densityLoc}, one can write, for $u\in \Delta$: \begin{align*} p_1(u) - p_2(u) &= \sum_{r=1}^R \sum_{i=1}^{N_r} \int_{x\in \Phi_{ir}^{-1}(\{u\})} (J\Phi_{ir}(x))^{-1}(\kappa_1(x)-\kappa_2(x)) d\mathcal{H}_{nd-2}(x), \\ \int_\Delta |p_1(u)-p_2(u)|du &\leq \sum_{r=1}^R \sum_{i=1}^{N_r} \int_\Delta\int_{x\in \Phi_{ir}^{-1}(\{u\})} (J\Phi_{ir}(x))^{-1}|\kappa_1(x)-\kappa_2(x)| d\mathcal{H}_{nd-2}(x) du \\ &= \sum_{r=1}^R \sum_{i=1}^{N_r} \int_{V_r} \mathbbm{1}\{\Phi_{ir}(x) \in \Delta \} |\kappa_1(x)-\kappa_2(x)| d\mathcal{H}_{nd}(x) \text{ by the coarea formula} \\ &= \sum_{r=1}^R N_r \int_{V_r} |\kappa_1(x)-\kappa_2(x)| d\mathcal{H}_{nd}(x) \\ &\leq \sum_{r=1}^R N_r \mathcal{H}_{nd}(V_r) \|\kappa_1-\kappa_2\|_\infty \\ &= \mathcal{H}_{nd}(M^n) \sum_{r=1}^R N_r \frac{\mathcal{H}_{nd}(V_r)}{\mathcal{H}_{nd}(M^n)} \|\kappa_1-\kappa_2\|_\infty = \mathcal{H}_d(M)^n C_n \| \kappa_1 - \kappa_2\|_\infty.
\end{align*} Inequality \eqref{eq:bound_normalized} is obtained likewise. \end{proof} \begin{remark} Other metrics of interest on the space of persistence diagrams are the Wasserstein metrics $d_p$, defined as the minimal cost of some matchings between the points of two diagrams. Endowed with those metrics, persistence diagrams are known to satisfy strong stability results with respect to the data they are built with (see \cite{cohen2010lipschitz}). One would therefore expect a similar stability to hold for the expectation of random diagrams. However, the expected diagrams are not persistence diagrams, but Radon measures on $\Delta$. It is therefore first needed to extend $d_p$ to this more general space in a meaningful way. Techniques similar to the one used in \cite{chazal2015subsampling} would then be sufficient to conclude. Extending the $d_p$ metrics to Radon measures is the topic of a parallel work; see \cite{divol2019understanding}. \end{remark} \section{Persistence surface as a kernel density estimator}\label{sec:kde} The persistence surface is a representation of persistence diagrams introduced in \cite{adams2017persistence}. It consists in a convolution of a diagram with a kernel, a general idea that has been repeatedly and fruitfully exploited, with slight variations, for instance in \cite{chen2015statistical, kusano2016persistence, reininghaus2015stable}. For $K:\mathbb{R}^2\to \mathbb{R}$ a kernel and $H$ a bandwidth matrix (i.e.\ a symmetric positive definite matrix), define, for $u\in \mathbb{R}^2$, \begin{equation} K_H(u) = \det(H)^{-1/2} K(H^{-1/2}\cdot u). \end{equation} For $D$ a diagram, $K : \mathbb{R}^2 \to \mathbb{R}$ a kernel, $H$ a bandwidth matrix and $w:\mathbb{R}^2 \to \mathbb{R}_+$ a weight function, one defines the persistence surface of $D$ with kernel $K$ and weight function $w$ by: \begin{equation} \forall u \in \mathbb{R}^2, \ \rho(D)(u) \vcentcolon= \sum_{\textbf{r} \in D} w(\textbf{r})K_H(u-\textbf{r}) = D(wK_H(u-\cdot)). \end{equation} Assume that $\mathbb{X}$ is some point process satisfying the assumptions of Theorem \ref{thm:main_thm}. Then, for $s\geq 1$, $\mu \vcentcolon= E[D_s[\mathcal{K}(\mathbb{X})]]$ has some density $p$ with respect to the Lebesgue measure on $\Delta$. Therefore, $\mu_w$, the measure having density $w$ with respect to $\mu$, has a density equal to $w\times p$ with respect to the Lebesgue measure. The mean persistence surface $E[\rho(D_s[\mathcal{K}(\mathbb{X})])]$ is exactly the convolution of $\mu_w$ with the kernel $K_H$: the persistence surface $\rho(D_s[\mathcal{K}(\mathbb{X})])$ is actually a kernel density estimator of $w\times p$. If a point cloud approximates a shape, then its persistence diagram (for the \v{C}ech filtration for instance) is made of numerous points with small persistence and a few meaningful points with high persistence, which correspond to the persistence diagram of the ``true'' shape. As one is interested in the latter points, a weight function $w$, typically an increasing function of the persistence, is used to suppress the importance of the topological noise in the persistence surface. The authors of \cite{adams2017persistence} argue that in this setting the choice of the bandwidth matrix $H$ has little effect for statistical purposes (e.g.\ classification), a claim supported by numerical experiments on simple sets of synthetic data (torus, sphere, three clusters, etc.).
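For concreteness, the following sketch (ours; the isotropic bandwidth $H=h^2 I$, the Gaussian kernel $K(x)=\exp(-\|x\|^2/2)$ used later in this section, the cubic weight of Section \ref{sec:num} and the array layout are all assumptions made for illustration) evaluates such a weighted persistence surface on a grid.
\begin{verbatim}
import numpy as np

def persistence_surface(diagram, grid_x, grid_y, h, power=3):
    """Weighted persistence surface rho(D) on a grid, for H = h^2 * I.

    diagram: (n, 2) array of points (birth, persistence), i.e. diagrams
             already transformed by (r1, r2) -> (r1, r2 - r1).
    The weight is w(r) = persistence**power and the kernel is the
    (unnormalized) Gaussian K(x) = exp(-|x|^2 / 2), so that
    K_H(u) = h**-2 * exp(-|u|^2 / (2 h^2)).
    """
    xx, yy = np.meshgrid(grid_x, grid_y)
    surface = np.zeros_like(xx, dtype=float)
    for birth, pers in diagram:
        w = pers ** power
        sq_dist = (xx - birth) ** 2 + (yy - pers) ** 2
        surface += w * np.exp(-sq_dist / (2.0 * h ** 2))
    return surface / h ** 2
\end{verbatim}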
However, in a setting where the datasets are more complicated and contain no obvious ``real'' shapes, one may expect the choice of the bandwidth parameter $H$ to become more critical: there are no highly persistent, easily distinguishable points in the diagrams anymore, and the precise structure of the density functions of the processes becomes of interest. We show that a cross-validation approach allows the bandwidth selection task to be carried out in an asymptotically consistent way. This is a consequence of a generalization of Stone's theorem \cite{stone1984asymptotically} to the case where the observations are not random vectors but random measures. Assume that $\mu_1,\dots,\mu_N$ are i.i.d. random measures on $\mathbb{R}^2$ such that there exists a deterministic constant $C$ with $|\mu_1|\leq C$. Assume that the expected measure $E[\mu_1]$ has a bounded density $p$ with respect to the Lebesgue measure on $\mathbb{R}^2$. Given a kernel $K:\mathbb{R}^2\to \mathbb{R}$ and a bandwidth matrix $H$, one defines the kernel density estimator \begin{equation} \hat{p}_{H}(x) \vcentcolon= \frac{1}{N} \sum_{i=1}^N \int K_H(x-y)\mu_i(dy). \end{equation} The optimal bandwidth $H_{opt}$ minimizes the Mean Integrated Squared Error (MISE) \begin{equation} MISE(H) \vcentcolon= E\left[\|p-\hat{p}_H\|^2\right] = E\left[ \int \left(p(x)-\hat{p}_H(x)\right)^2 dx\right]. \end{equation} Of course, as $p$ is unknown, $MISE(H)$ cannot be computed. Minimizing $MISE(H)$ is equivalent to minimizing $J(H) \vcentcolon= MISE(H)-\|p\|^2$. Define \begin{equation} \hat{p}_{iH}(x) \vcentcolon= \frac{1}{N-1} \sum_{j\neq i} \int K_H(x-y)\mu_j(dy) \end{equation} and \begin{equation}\label{crit} \hat{J}(H) \vcentcolon= \frac{1}{N^2} \sum_{i,j} \iint K_H^{(2)}(x-y)\mu_i(dx)\mu_j(dy) -\frac{2}{N} \sum_i \int \hat{p}_{iH}(x)\mu_i(dx), \end{equation} where $K^{(2)}: x \mapsto \int K(x-y)K(y)dy$ denotes the convolution of $K$ with itself. The quantity $\hat{J}(H)$ is an unbiased estimator of $J(H)$. The selected bandwidth $\hat{H}$ is then chosen to be equal to $\arg \min_H \hat{J}(H)$. \begin{theorem}[Stone's theorem \cite{stone1984asymptotically}]\label{thm:stone} Assume that the kernel $K$ is nonnegative, H\"older continuous and attains its maximum at $0$. Also, assume that the density $p$ is bounded. Then, $\hat{H}$ is asymptotically optimal in the sense that \begin{equation} \frac{\|p-\hat{p}_{\hat{H}}\|}{\|p-\hat{p}_{H_{opt}}\|} \xrightarrow[N\to \infty]{} 1 \mbox{ a.s.} \end{equation} \end{theorem} Note that the Gaussian kernel $K(x) = \exp(-\|x\|^2/2)$ satisfies the assumptions of Theorem \ref{thm:stone}. The quality of the optimal estimator can also be studied. Indeed, a straightforward adaptation of the classical study of kernel density estimators (as presented for example in \cite{tsybakov2008introduction}) to the case of a sample of i.i.d. random measures shows that there exists a choice $H_N$ of the bandwidth, depending on $N$ and on the (unknown) regularity of $p$, such that $\hat{p}_{H_N}$ is a consistent estimator of $p$ in the sense that $E[\|p-\hat{p}_{H_N}\|^2] \to 0$ (with known rate of convergence). Therefore, Theorem \ref{thm:stone} asserts that the cross-validation procedure is consistent. Let $\mathbb{X}_1,\dots,\mathbb{X}_N$ be i.i.d. processes on $M$ having a density with respect to the law of a Poisson process of intensity $\mathcal{H}_d$. Assume that there exists a deterministic constant $C$ with $|\mathbb{X}_i|\leq C$. Then, Theorem \ref{thm:stone} can be applied to $\mu_i = D_s[\mathcal{K}(\mathbb{X}_i)]$.
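Before stating the resulting guarantee, here is a minimal sketch (ours; it assumes an isotropic bandwidth $H=h^2 I$, the Gaussian kernel $K(x)=\exp(-\|x\|^2/2)$, for which $K_H(u)=h^{-2}\exp(-\|u\|^2/(2h^2))$ and $K_H^{(2)}(u)=(K_H\ast K_H)(u)=\pi h^{-2}\exp(-\|u\|^2/(4h^2))$, and discrete measures $\mu_i$ given as weighted point sets) of how the criterion \eqref{crit} can be evaluated.
\begin{verbatim}
import numpy as np

def cv_score(diagrams, weights, h):
    """Cross-validation score J_hat(H) for H = h^2 * I and Gaussian kernel.

    diagrams[i]: (n_i, 2) array of points of the i-th discrete measure,
    weights[i]:  (n_i,)  array of masses attached to those points.
    """
    N = len(diagrams)
    term_k2 = 0.0      # (1/N^2) sum_{i,j} of the double integral of K_H^(2)
    term_loo = 0.0     # sum_i int p_hat_{iH} d mu_i  (leave-one-out term)
    for i in range(N):
        for j in range(N):
            diff = diagrams[i][:, None, :] - diagrams[j][None, :, :]
            sq = (diff ** 2).sum(-1)
            W = np.outer(weights[i], weights[j])
            term_k2 += (W * np.pi / h**2 * np.exp(-sq / (4 * h**2))).sum()
            if j != i:
                term_loo += (W / h**2 * np.exp(-sq / (2 * h**2))).sum()
    return term_k2 / N**2 - 2.0 * term_loo / (N * (N - 1))

# bandwidth selection on a log-spaced grid, as in Section 9:
# hs = np.logspace(-5, 0, 50)
# h_hat = hs[np.argmin([cv_score(diagrams, weights, h) for h in hs])]
\end{verbatim}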
Therefore, \emph{the cross-validation procedure \eqref{crit} to select the bandwidth matrix $H$ in the persistence surface ensures that the mean persistence surface} \begin{equation} \overline{\rho}_N \vcentcolon= \frac{1}{N} \sum_{i=1}^N \rho(D_s[\mathcal{K}(\mathbb{X}_i)]) \end{equation} \emph{is a consistent estimator of $p$, the density of $E[D_s[\mathcal{K}(\mathbb{X}_1)]]$.} \section{Numerical illustration}\label{sec:num} Three sets of synthetic data are considered (see Figure \ref{fig:datasets}). The first one (a) is made of $N=40$ sets of $n = 300$ i.i.d. points uniformly sampled in the square $[0,1]^2$. The second one (b) is made of $N$ samples of a clustered process: $n/3$ cluster centers are uniformly sampled in the square. Each center is then replaced with $3$ i.i.d. points following a normal distribution with standard deviation $0.01\times n^{-1/2}$. The third dataset (c) is made of $N$ samples of $n$ uniform points on a torus of inner radius $1$ and outer radius $2$. For each set, a \v Cech persistence diagram for $1$-dimensional homology is computed. Persistence diagrams are then transformed under the map $(r_1,r_2) \mapsto (r_1,r_2-r_1)$, so that they now live in the upper-right quadrant of the plane. Figure \ref{fig:diagrams} shows the superposition of the diagrams in each class. One may observe the slight differences in the structure of the topological noise between classes (a) and (b). The cluster of most persistent points in the diagrams of class (c) corresponds to the two holes of the torus and is distinguishable from the rest of the points in the diagrams of the class, which form topological noise. The persistence diagrams are weighted by the weight function $w(\textbf{r})=(r_2-r_1)^3$, as advised in \cite{kusano2017kernel} for two-dimensional point clouds. The bandwidth selection procedure will be applied to the measures having density $w$ with respect to the diagrams, i.e.\ each measure is a sum of weighted Dirac measures. \begin{figure} \caption{Realization of the processes (a), (b) and (c) described in Section \ref{sec:num}.} \label{fig:datasets} \end{figure} \begin{figure} \caption{Superposition of the $N=40$ diagrams of classes (a), (b) and (c), transformed under the map $(r_1,r_2) \mapsto (r_1,r_2-r_1)$.} \label{fig:diagrams} \end{figure} \begin{figure} \caption{Persistence surfaces for each class (a), (b) and (c), computed with the weight function $w(\textbf{r})=(r_2-r_1)^3$.} \label{fig:surfaces} \end{figure} For each class of dataset, the score $\hat{J}(H)$ is computed for a set of bandwidth matrices of the form $h^2 \times \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$, for $50$ values of $h$ evenly spaced on a log-scale between $10^{-5}$ and $1$. Note that the computation of $\hat{J}(H)$ only involves the computation of $K_H(\textbf{r}_1-\textbf{r}_2)$ for points $\textbf{r}_1$, $\textbf{r}_2$ in different diagrams. Hence, the complexity of the computation of $\hat{J}(H)$ is in $O(T^2)$, where $T$ is the total number of points in the diagrams of a given class. If this is too costly, one may use a subsampling approach to estimate the integrals. The selected bandwidths were respectively $h=0.22, 0.60, 0.17$. Persistence surfaces for the selected bandwidths are displayed in Figure \ref{fig:surfaces}. The persistence of the ``true'' points of the torus is sufficient to suppress the topological noise: only two yellow areas are seen in the persistence surface of the torus.
Note that the two areas can be separated, which is not obvious when looking at the superposition of the diagrams and would not have been obvious with an arbitrary choice of bandwidth. The bandwidth for class (b) may appear to have been chosen too large. However, there is much more variability in class (b) than in the other classes: this explains why the density is less peaked around a few selected areas than in class (a). The cross-validation scheme has also been applied to non-synthetic data: the walks of three persons A, B and C have been recorded using the accelerometer sensor of a smartphone in their pockets, giving rise to three multivariate time series in $\mathbb{R}^3$. Using a sliding window, each series has been split into a list of 10 time series made of 200 consecutive points. Using a time-delay embedding technique, those new time series are embedded into $\mathbb{R}^9$: these are the point clouds on which we build the Rips filtration. For each person, the set of 10 persistence diagrams is transformed under the map $(r_1,r_2)\mapsto (r_1,r_2-r_1)$. The persistence diagrams are weighted by the weight function $w(\textbf{r})=(r_2-r_1)^3$. For each person, the scores $\hat{J}(H)$ are computed for a set of bandwidth matrices of the form $h^2 \times \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$, for $20$ values of $h$ evenly spaced on a log-scale between $10^{-3}$ and $10^{-1}$. The selected bandwidths are $0.0089, 0.01833$ and $0.0089$, and the corresponding persistence images are displayed in Figure \ref{fig:surfaces_acc}. The three images show very distinct patterns: a reasonable machine learning algorithm will easily distinguish the three classes using the images as input. \begin{figure} \caption{Persistence surfaces for each person A, B and C, computed with the weight function $w(\textbf{r})=(r_2-r_1)^3$.} \label{fig:surfaces_acc} \end{figure} \section{Conclusion and further works} Taking a measure point of view to represent persistence diagrams, we have shown that the expected behavior of persistence diagrams built on top of random point sets has a simple and interesting structure: a measure on $\mathbb{R}^2$ with a density with respect to the Lebesgue measure that is as smooth as the random process generating the data points. This opens the door to the use of effective kernel density estimation techniques for the estimation of the expectation of topological features of data. Our approach and results also seem to be particularly well-suited to the use of recent results on the Lepski method for parameter selection \cite{lacour2016estimator} in statistics, a research direction that deserves further exploration. As many persistence-based features considered in the literature (persistence images, birth and death distributions, Betti curves, etc.) can be expressed as linear functionals of the discrete measure representation of diagrams, our results immediately extend to them. The ability to select the parameters on which these features depend in a well-founded statistical way also opens the door to a well-justified usage of persistence-based features in further supervised and unsupervised learning tasks. \appendix \section{Proofs of the elementary subanalytic lemmas} \firstProp* \begin{proof} \begin{enumerate} \item[(i)] Section I.2.1 in \cite{shiota1997geometry} states that $A(f)$ is subanalytic. Therefore, its complement $E$ is also subanalytic: it is enough to show that $E$ has empty interior to conclude.
\begin{claim} The set $F$ of points $x$ where $f$ is not analytic but where the graph $G_f$ of $f$ is locally a real analytic manifold at $(x,f(x))$ is a subanalytic set with empty interior. \end{claim} \begin{claimproof} Assume that $F$ contains an open set $U$. Replacing $U$ by a smaller open set if necessary, there exists some local parametrization of $U_f = \{(x,f(x)),\ x\in U\}$ by some analytic function $\Phi : V \to \mathbb{R}$, $V$ being a neighborhood of $U_f$ in $M\times \mathbb{R}$. Denote by $\nabla^u \Phi \in \mathbb{R}$ the gradient of $\Phi$ with respect to the real variable $u\in \mathbb{R}$. The set $Z$ on which $\nabla^u \Phi = 0$ is an analytic subset of $V$. As $G_f$ is the graph of a function, $Z \cap G_f$ is made of isolated points: one can always assume that those points are not in $U_f$. Therefore, there exists some neighborhood $V'$ of $U_f$ which does not intersect $Z$. One can now apply the analytic implicit function theorem (see for instance \cite[Section 8]{kaup1983holomorphic}) anywhere on $U_f$: for $(x_0,u_0)\in U_f$, there exist some neighborhood $W \subset V'$ and an analytic function $g:\Omega\to \mathbb{R}$, $\Omega$ being a neighborhood of $x_0$, such that, on $W$, \[ \Phi(x,u) = 0 \Longleftrightarrow u=g(x).\] As we also have $\Phi(x,u)=0$ if and only if $u=f(x)$, $f\equiv g$ on $\Omega$ and $f$ is analytic on $\Omega$. This contradicts the fact that $f$ is not analytic at any point of $U$. \end{claimproof} Now, the set $E$ is the union of $F$ and of $E\cap G$, where $G$ is the projection on $M$ of $\mathrm{Sing}(G_f)$. As, by definition, $\mathrm{Sing}(G_f)$ has empty interior, $G$ also has empty interior. Therefore, $E$ has empty interior, which is equivalent to saying that its dimension is smaller than $d$. \item[(ii)] See \cite[Section II.1.1]{shiota1997geometry}. \item[(iii)] See \cite[Section II.1.6]{shiota1997geometry}. \end{enumerate} \end{proof} \usefulLem* \begin{proof} Write $k$ for the dimension of $X$. First, one can always assume that $X$ is closed, as $\mathcal{H}_d(\overline{X}) \geq \mathcal{H}_d(X)$. Therefore, there exist some real analytic manifold $N$ of dimension $k$ and a proper real analytic mapping $\Psi : N \to M$ such that $\Psi(N)=X$ (see \cite[Section I.2.1]{shiota1997geometry}). The set $X$ can be written as the union of some compact sets $X_K$ for $K\geq 0$. It is enough to show that $\mathcal{H}_d(X_K) = 0$. The set $X_K$ can be written $\Psi(\Psi^{-1}(X_K))$, where $\Psi^{-1}(X_K)$ is some compact subset of $N$. We have $\mathcal{H}_d(\Psi^{-1}(X_K)) = 0$ because $N$ is of dimension $k<d$. Furthermore, as $\Psi$ is analytic on $N$, it is Lipschitz on the compact set $\Psi^{-1}(X_K)$. Therefore, $\mathcal{H}_d(\Psi(\Psi^{-1}(X_K)))=\mathcal{H}_d(X_K)$ is also null. \end{proof} \section{Proof of Theorem \ref{thm:main_thm'}} \mainThmbisState* We indicate how to change the proof of Theorem \ref{thm:main_thm} when assumption (K5') is satisfied instead of assumption (K5). In the partition $E_1(x),\dots,E_L(x)$ of $\mathcal{F}_n$, the set $E_1(x)$ plays a special role: it corresponds to the value $r_1=0$ and contains all the singletons, which satisfy $\varphi[\{j\}] \equiv 0$ by assumption. Lemma \ref{lemma1} holds for $l>1$, and one can always define $J_1 = \{1\}$ to be a minimal element of $E_1(x)$.
With this convention in mind, it is straightforward to check that Lemma \ref{lemma2} still holds and that Lemma \ref{lem:grad_non_null} is satisfied as well for $l>1$. Now, one can define the sets $V_r$ in a similar manner. For $x\in V_r$, the diagram $D_s[\mathcal{K}(x)]$ is still decomposed as $\sum_{i=1}^N \delta_{\textbf{r}_i}$, with $\textbf{r}_i = (\varphi[J_{l_1}](x),\varphi[J_{l_2}](x))$. If $s>0$, the end of the proof is similar. However, for $s=0$, the pairs of simplices $(J_{l_1},J_{l_2})$ are made of one singleton $J_{l_1}$ and of one simplex $J_{l_2}$ of size $2$. As $\varphi$ is null on singletons, the points in this diagram are all included in the vertical line $L_0 \vcentcolon= \{0\} \times [0,\infty)$. The map $\Phi_{ir}:x\in V_r\mapsto \textbf{r}_i \in L_0$ has a differential of rank 1, as Lemma \ref{lem:grad_non_null} ensures that $\nabla^j \varphi[J_{l_2}](x) \neq 0$ for $j \in J_{l_2}$. One can apply the coarea formula to $\Phi_{ir}$ to conclude that a density with respect to the Lebesgue measure on $L_0$ exists. \section{Proof of Corollary \ref{cor:main_cor}} \mainCorState* The diagram $D_s[\mathcal{K}(\mathbb{X})]$ can be written \begin{equation} D_s[\mathcal{K}(\mathbb{X})] = \sum_{n\geq 0} \mathbbm{1}\{|\mathbb{X}|=n\} D_s[\mathcal{K}(\mathbb{X})], \end{equation} and Theorem \ref{thm:main_thm} states that $E[\mathbbm{1}\{|\mathbb{X}|=n\} D_s[\mathcal{K}(\mathbb{X})]]$ has a density $p_n$ with respect to the Lebesgue measure on $\Delta$. Take $B$ a Borel set in $\Delta$: \begin{align*} E[D_s[\mathcal{K}(\mathbb{X})]](B) &= \sum_{n\geq 0} E[\mathbbm{1}\{|\mathbb{X}|=n\} D_s[\mathcal{K}(\mathbb{X})]](B) \\ &= \sum_{n\geq 0} \int_B p_n = \int_B \sum_{n\geq 0} p_n \mbox{ by the Fubini-Tonelli theorem.} \end{align*} It is possible to use the Fubini-Tonelli theorem because $E[D_s[\mathcal{K}(\mathbb{X})]](B)$ is finite. Indeed, as $D_s[\mathcal{K}(\mathbb{X})]$ is always made of fewer than $2^{|\mathbb{X}|}$ points, and as we have assumed that $E\left[ 2^{|\mathbb{X}|}\right]<\infty$, the measure $E[ D_s[\mathcal{K}(\mathbb{X})]]$ is finite as well. \section{Proof of Theorem \ref{thm:smoothness}} \smoothnessState* Given the expression \eqref{densityLoc}, it is sufficient to show that integrating a function along the fibers of $\Phi_{ir}$ depends smoothly on the fiber. We only show that the density is continuous; continuity of the higher-order derivatives is obtained in a similar fashion. The proof is a standard application of the implicit function theorem. Using the same notations as in the proof of Theorem \ref{thm:main_thm}, fix $1\leq r \leq R$ and $1\leq i \leq N_r$. We will show that $p_{ir}$ is continuous. As the indices $r$ and $i$ are now fixed, we drop them from the notation: $V \vcentcolon= V_r$ and $\Phi \vcentcolon= \Phi_{ir}$. By using a partition of unity and taking local diffeomorphisms, one can always assume that $V \subset \mathbb{R}^{nd}$. Define the function $f: (x,u)\in V\times\Delta \mapsto \Phi(x)-u \in \mathbb{R}^2$. We have already shown in the proof of Theorem \ref{thm:main_thm} that for $x_0\in V$, there exist two indices $a_1$ and $a_2$ (depending on $x_0$) such that the $2\times 2$ minor $M(x_0)$ of $D\Phi(x_0)$ corresponding to the columns $a_1$ and $a_2$ is invertible. Rewrite $x \in V$ as $(y,z)$, where $z = (x_{a_1},x_{a_2})\in \mathbb{R}^2$.
By the implicit function theorem, for $(x_0,u_0)$ such that $f(x_0,u_0)=0$, there exist a neighborhood $\Omega_{x_0} \subset V\times \Delta$ of $(x_0,u_0)$ and an analytic function $g_{x_0}: W_{y_0}\times Y_{u_0} \to \mathbb{R}^2$, defined on a neighborhood of $(y_0,u_0)$, such that for $(x,u)\in \Omega_{x_0}$, \[ f(x,u) = 0 \Longleftrightarrow z=g_{x_0}(y,u).\] The sets $(\Omega_{x_0})_{x_0 \in V}$ constitute an open cover of the fiber $f^{-1}(0)$. Consider a smooth partition of unity $(\rho_{x_0})_{x_0 \in V}$ subordinate to this cover. Then, for all $(x,u) \in f^{-1}(0)$, \begin{align*} (J\Phi(x))^{-1}\kappa(x) &= \sum_{x_0 \in V} \rho_{x_0}(y,u,g_{x_0}(y,u)) (J\Phi(y,g_{x_0}(y,u)))^{-1}\kappa(y,g_{x_0}(y,u)). \end{align*} Therefore, \begin{align} p_{ir}(u)&=\int_{x\in \Phi^{-1}(u)} (J\Phi(x))^{-1}\kappa(x) d\mathcal{H}_{nd-2}(x) \nonumber \\ &=\sum_{x_0 \in V} \int_{y \in W_{y_0}}\rho_{x_0}(y,u,g_{x_0}(y,u)) (J\Phi(y,g_{x_0}(y,u)))^{-1}\kappa(y,g_{x_0}(y,u))dy. \label{sumPart} \end{align} We are now faced with a classical problem of continuity under the integral sign. First, the Cauchy-Binet formula (see \cite[Example 2.15]{kwak2004linear}) states that $J\Phi$ is equal to the square root of the sum of the squares of the determinants of all $2\times 2$ minors of $D\Phi$. Therefore, $J\Phi(x)$ is larger than the absolute value of the determinant of $M(x)$, the minor of $D\Phi$ corresponding to the indices $a_1$ and $a_2$. The implicit function theorem gives the exact value of $M(x)$. Indeed, for $X=(x,u)\in \Omega_{x_0}$ and for any index $k$, \begin{equation} \frac{\partial g}{\partial X_k}(y,u)= -\left(M^{-1} \cdot \frac{\partial f}{\partial X_k}\right)(y,u,g(y,u)). \end{equation} Take $X_k=u_1$ (resp.\ $u_2$). Then, $\partial f/\partial X_k = (-1,0)$ (resp.\ $(0,-1)$). Therefore, \begin{equation}\label{boundJac} M^{-1}(y,u,g(y,u)) = \frac{\partial g}{\partial u}(y,u,g(y,u)). \end{equation} As $\rho_{x_0}$ has compact support, it suffices to show that the integrand is bounded by a constant independent of $u$. The only issue is that $(J\Phi)^{-1}$ may diverge. Equation \eqref{boundJac} shows that it is bounded by $\left|\det\left(\partial g/\partial u\right)\right|$. This is bounded, as $g$ is analytic on the compact support of $\rho_{x_0}$: each term in the sum \eqref{sumPart} is continuous. By the compactness of $M$ and of $f^{-1}(0)$, all the partitions of unity can be taken finite, and a finite sum of continuous functions is continuous. This proves the continuity of $p$. \section{Proof of Corollary \ref{cor:betti}} \corBettiState* Define $f(r,u)$ to be equal to $1$ if $u_1\leq r \leq u_2$ and $0$ otherwise. Then, $\beta^r_s(\mathcal{K}(\mathbb{X}))$ is equal to $D_s[\mathcal{K}(\mathbb{X})](f(r,\cdot))$. Therefore, the expectation $E[\beta^r_s(\mathcal{K}(\mathbb{X}))]$ is equal to \begin{equation} \int p(u)f(r,u)du. \end{equation} As the hypotheses of Theorem \ref{thm:smoothness} are assumed to be satisfied, the density $p$ is smooth. Moreover, $p(u)f(r,u)$ is smaller than $p(u)$. The function $p$ being integrable, one can apply the theorem of continuity under the integral sign to conclude that $r\mapsto E[\beta^r_s(\mathcal{K}(\mathbb{X}))]$ is continuous. Higher-order derivatives are obtained in a similar fashion. \end{document}
\begin{document} \title{Robust Stability Analysis of Sparsely Interconnected Uncertain Systems$^{\ast}$} \author{Martin S. Andersen$^{1}$, Sina Khoshfetrat Pakazad$^{2}$, Anders Hansson$^{2}$, and Anders Rantzer$^{3}$ \thanks{$^*$This work has been supported by the Swedish Department of Education within the ELLIIT project.} \thanks{$^{1}$Martin S. Andersen is with the Department of Applied Mathematics and Computer Science, Technical University of Denmark. Email: {\tt\footnotesize [email protected]}. The work was carried out while Martin S. Andersen was a postdoc at Link\"oping University.} \thanks{$^{2}$ Sina Khoshfetrat Pakazad and Anders Hansson are with the Division of Automatic Control, Department of Electrical Engineering, Link\"oping University, Sweden. Email: {\tt\footnotesize \{sina.kh.pa, hansson\}@isy.liu.se}. The work was carried out while Anders Hansson was a visiting professor at the University of California, Los Angeles.} \thanks{$^{3}$Anders Rantzer is with the Department of Automatic Control, Lund University, Sweden. Email: {\tt\footnotesize [email protected]}}} \maketitle \begin{abstract} In this paper, we consider robust stability analysis of large-scale sparsely interconnected uncertain systems. By modeling the interconnections among the subsystems with integral quadratic constraints, we show that robust stability analysis of such systems can be performed by solving a set of sparse linear matrix inequalities. We also show that a sparse formulation of the analysis problem is equivalent to the classical formulation of the robustness analysis problem and hence does not introduce any additional conservativeness. The sparse formulation of the analysis problem allows us to apply methods that rely on efficient sparse factorization techniques, and our numerical results illustrate the effectiveness of this approach compared to methods that are based on the standard formulation of the analysis problem. \end{abstract} \begin{IEEEkeywords} Interconnected uncertain systems, IQC analysis, Sparsity, Sparse SDPs. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction}\label{sec:Intro} \IEEEPARstart{R}{obust} stability of uncertain systems can be analyzed using different approaches, e.g., $\mu$-analysis and IQC analysis \cite{robustandoptimal,multivariable,ulfiqc,rantzer}. In general, the computational burden of performing robust stability analysis grows rapidly with the state dimension of the uncertain system. For instance, performing robust stability analysis using integral quadratic constraints (IQCs) involves finding a solution to a frequency-dependent semi-infinite linear matrix inequality (LMI). This frequency-dependent semi-infinite LMI can be reformulated using the Kalman-Yakubovich-Popov (KYP) lemma as a finite-dimensional frequency-independent LMI which is generally dense \cite{rantzer,ran:96}. The size and the number of variables of this LMI grow with the number of states and the dimension of the uncertainty in the system, and hence IQC analysis of uncertain systems with high state and uncertainty dimensions can be very computationally demanding. This is the case even if the underlying structure in the resulting LMI is exploited \cite{ChapterBook1,Anders,Parrilo,Kao2,and+dah+van10,fuj+kim+koj+oka+yam09,WHJ:09,Wallin,Kao1}. An alternative approach to solving the semi-infinite LMI is to perform frequency gridding, where the feasibility of the frequency-dependent semi-infinite LMI is verified for a finite number of frequencies.
Like the KYP lemma-based formulation, this method leads to a set of LMIs that are generally dense and costly to solve for problems with a high state dimension. The focus of this paper is the analysis of large-scale sparsely interconnected uncertain systems, which inherently have high number of states. As a result, the mentioned methods are prohibitively costly for analyzing large-scale interconnected uncertain systems even when the interconnections are sparse. In order to address the robustness analysis of such systems, we will show how the sparsity in the interconnection can be utilized to reduce the computational burden of solving the analysis problem. Sparsely interconnected uncertain systems are common in e.g.\ large power networks that include many interconnected components, each of which is connected only to a small number neighboring components. In \cite{Langbort} and \cite{P5}, the authors consider robust stability analysis of interconnected systems where the uncertainty lies in the interconnections among the subsystems. In these papers, the authors consider the use of IQCs to describe the uncertain interconnections, and they provide coupled LMIs to address the stability analysis and control design for such systems. However, they do not investigate the computational complexity of analyzing large-scale systems. In \cite{UlfLetter}, a method is proposed for robust stability analysis of interconnected uncertain systems using IQCs. The authors show that when the interconnection matrix is given as $\Gamma = \bar \Gamma \otimes I$ and the adjacency matrix of the network $\bar \Gamma$ is normal, the analysis problem can be decomposed into smaller problems that can be solved more easily. In \cite{Kao} and \cite{Ulf}, the authors describe a method for robust stability analysis of interconnected systems with uncertain interconnections. The analysis approach considered in these papers is based on separation of subsystem frequency responses and eigenvalues of the interconnection and adjacency matrices. Although the computational complexity of this approach scales linearly with the number of subsystems in the network, the proposed method can only be used for analyzing interconnected systems with specific interconnection descriptions. In \cite{Kao}, the uncertain interconnection matrix is assumed to be normal, and its spectrum is characterized using quadratic inequalities. In \cite{Ulf}, the interconnection matrix is defined as $\Gamma = \bar \Gamma \otimes I$, where the adjacency matrix of the network $\bar \Gamma$ is assumed to be normal with its spectrum expressed using general polyhedral constraints. The authors in \cite{Vinnicombe} provide a scalable method for analyzing robust stability of interconnections of SISO subsystems over arbitrary graphs. The proposed analysis approach in \cite{Vinnicombe} is based on Nyquist-like conditions which depend on the dynamics of the individual subsystems and their neighbors. Finally, the authors in \cite{kim:12} propose a decomposable approach to robust stability analysis of interconnected uncertain systems. When the system transfer matrices for all subsystems are the same, this approach makes it possible to solve the analysis problem by studying the structured singular values of the individual subsystems and the eigenvalues of the interconnection matrix. 
In related work \cite{and+han:12}, we considered robust stability analysis of interconnected uncertain systems where the only assumption was the sparsity of the interconnection matrix, and we laid out the basic ideas of how to exploit this kind of sparsity in the analysis problem. In that paper, we put forth an IQC-based analysis method for analyzing such systems, and we showed that by characterizing the interconnections among subsystems using IQCs, the sparsity in the interconnections can be reflected in the resulting semi-infinite LMI. No numerical results were presented in \cite{and+han:12}. In this paper, we present an extended version of \cite{and+han:12} where we show how the sparsity in the problem allows us to use highly effective sparse SDP solvers, \cite{BeY:05,and+dah+van10,fukuda_exploitingsparsity}, to solve the analysis problem in a centralized manner. Note that none of the methods mentioned in the previous paragraph consider as general a setup as the one presented in this paper. \vspace*{-5pt} \subsection{Outline} This paper is organized as follows. Section \ref{sec:IQC} briefly reviews relevant theory from the IQC analysis framework. In Section~\ref{sec:Inter}, we provide a description of the interconnected uncertain system that is later used in lumped and sparse formulations of the analysis problem. We discuss these formulations in sections \ref{sec:Lumped} and~\ref{sec:Sparse}, and we report the numerical results in Section \ref{sec:Results}. Section~\ref{sec:Conclusions} presents conclusions and some remarks regarding future research directions. \vspace*{-5pt} \subsection{Notation} We denote by $\mathbb R$ the set of real scalars and by $\mathbb R^{m\times n}$ the set of real $m \times n$ matrices. The transpose and conjugate transpose of a matrix $G$ is denoted by $G^T$ and $G^{\ast}$, respectively. The set $\mathbf S^n$ denotes $n \times n$ Hermitian matrices, and $I_n$ denotes the $n \times n$ identity matrix. We denote by $\mathbf 1_n$ an $n$-dimensional vector with all of its components equal to~$1$. We denote the real part of a complex vector $v$ by $\real(v)$ and $\eigen(G)$ denotes the eigenvalues of a matrix $G$. We use superscripts for indexing different matrices, and we use subscripts to refer to different elements in the matrix. Hence, by $G^{k}_{ij}$ we denote the element on the $i$th row and $j$th column of the matrix $G^k$. Similarly, $v^k_i$ is the $i$th component of the vector $v^k$. Given matrices $G^k$ for $k = 1, \dots, N$, $\diag(G^1, \dots, G^N)$ denotes a block-diagonal matrix with diagonal blocks specified by the given matrices. Likewise, given vectors $v^k$ for $k= 1, \dots, N$, the column vector $(v^1, \dots, v^N)$ is all of the given vectors stacked. The generalized matrix inequality $G \prec H$ ($G \preceq H$) means that $G-H$ is negative (semi)definite. Given state-space data $A, B, C$ and $D$, we denote by \small \begin{align*} G(s) := \begin{bmatrix} \begin{array}{c|c} A & B \\ \hline C & D \end{array}\end{bmatrix} \end{align*} \normalsize the transfer function matrix $G(s) = C\left( sI-A \right)^{-1}B + D$. The infinity norm of a transfer function matrix is defined as $\| G(s) \|_\infty = \supremum_{\omega \in \mathbb R} \bar \sigma(G(j\omega))$, where $\supremum$ denotes the supremum and $\bar \sigma$ denotes the largest singular value of a matrix. 
By $\mathcal L_2^n$ we denote the set of $n$-dimensional square integrable signals, and $\mathcal{RH}_{\infty}^{m \times n}$ represents the set of real, rational $m \times n$ transfer matrices with no poles in the closed right half plane. Given a graph $Q(V,E)$ with vertices $V = \{v_1, \dots, v_n\}$ and edges $E \subseteq V\times V$, two vertices $v_i, v_j \in V$ are adjacent if $(v_i,v_j) \in E$, and we denote the set of adjacent vertices of $v_i$ by $\adj (v_i) = \{ v_j \in V | (v_i, v_j) \in E \}$. The degree of a vertex (node) in a graph is defined as the number of its adjacent vertices, i.e., $\degree(v_i) = |\adj(v_i)|$. The adjacency matrix of a graph $Q(V,E)$ is defined as a $|V| \times |V|$ matrix $A$ where $A_{ij} = 1$ if $(i,j) \in E$ and $A_{ij} =0$ otherwise. \section{Robustness Analysis Using IQCs} \label{sec:IQC} \subsection{Integral Quadratic Constraints} IQCs play an important role in robustness analysis of uncertain systems where they are used to characterize the uncertainty in the system. Let $\Delta: \mathbb R^d \rightarrow \mathbb R^d$ denote a bounded and causal operator. This operator is said to satisfy the IQC defined by $\Pi$, i.e., $\Delta \in \IQC(\Pi)$, if \small \begin{align}\label{eq:IQCT} \int_{0}^{\infty} \begin{bmatrix} v \\ \Delta(v) \end{bmatrix}^{T} \Pi \begin{bmatrix} v \\ \Delta(v) \end{bmatrix} \, dt \geq 0, \quad \forall v \in \mathcal{L}_2^d \ , \end{align} \normalsize where $\Pi$ is a bounded and self-adjoint operator. Additionally, the IQC in~\eqref{eq:IQCT} can be written in the frequency domain as \small \begin{align}\label{eq:IQCF} \int_{-\infty}^{\infty} \begin{bmatrix} \widehat{v}(j\omega) \\ \widehat{\Delta(v)}(j\omega) \end{bmatrix}^{\ast} \Pi(j\omega) \begin{bmatrix} \widehat{v}(j\omega) \\ \widehat{\Delta(v)}(j\omega) \end{bmatrix} \, d\omega \geq 0, \end{align} \normalsize where $\hat v$ and $\widehat{\Delta(v)}$ are the Fourier transforms of the signals \cite{ulfiqc,rantzer}. IQCs can also be used to describe operators constructed from other operators. For instance, suppose $\Delta^i \in\IQC(\Pi^i)$, where \small$\Pi^i = \begin{bmatrix} \Pi^i_{11} & \Pi^i_{12} \\ \Pi^i_{21} & \Pi^i_{22} \end{bmatrix}$\normalsize. Then the diagonal operator $\Delta = \diag(\Delta^1, \dots, \Delta^N)$ satisfies the IQC defined by \small \begin{align}\label{eq:IQCDiag} \bar{\Pi} = \begin{bmatrix} \bar{\Pi}_{11} & \bar{\Pi}_{12}\\ \bar{\Pi}_{21} & \bar{\Pi}_{22}\end{bmatrix}, \end{align} \normalsize where $\bar \Pi_{ij} = \diag( \Pi_{ij}^1, \dots, \Pi_{ij}^N)$, \cite{ulfiqc}. We will use diagonal operators to describe the uncertainties of interconnected systems, but first we briefly describe the IQC analysis framework for robustness analysis of uncertain systems. \subsection{IQC Analysis} Consider the following uncertain system, \begin{equation}\label{eq:UncertainSystem} \begin{split} p = G q, \quad q = \Delta(p), \end{split} \end{equation} where $G \in \mathcal{RH}_{\infty}^{m\times m}$ is referred to as the system transfer function matrix, and $\Delta: \mathbb R^m \rightarrow \mathbb R^m$ is a bounded and causal operator representing the uncertainty in the system. The uncertain system in \eqref{eq:UncertainSystem} is said to be robustly stable if the interconnection between $G$ and $\Delta$ remains stable for all uncertainty values described by $\Delta$, \cite{essentials}. Using IQCs, the following theorem provides a framework for analyzing robust stability of uncertain systems. 
\begin{theorem}[IQC analysis, \cite{ulfiqc}]\label{thm:IQC} The uncertain system in~\eqref{eq:UncertainSystem} is robustly stable, if \begin{enumerate} \item for all $\tau \in [0,1]$ the interconnection described in \eqref{eq:UncertainSystem}, with $\tau\Delta$, is well-posed; \item for all $\tau \in [0,1]$, $\tau \Delta \in \IQC(\Pi)$; \item there exists $\epsilon > 0$ such that \small \begin{align}\label{eq:thmIQC} \begin{bmatrix} G(j\omega) \\ I \end{bmatrix}^{\ast} \Pi(j\omega) \begin{bmatrix} G(j\omega) \\ I \end{bmatrix} \preceq -\epsilon I, \hspace{2mm} \forall \omega \in [0, \infty]. \end{align} \normalsize \end{enumerate} \end{theorem} \begin{IEEEproof} See \cite{ulfiqc,rantzer}. \end{IEEEproof} The second condition of Theorem~\ref{thm:IQC} mainly imposes structural constraints on $\Pi$, \cite{rantzer}. IQC analysis then involves a search for an operator $\Pi$ that satisfies the LMI in \eqref{eq:thmIQC} with the required structure. This can be done using either the KYP lemma-based formulation, \cite{ulfiqc,rantzer,ran:96}, or it can be done approximately by performing frequency gridding where the feasibility of the LMI in~\eqref{eq:thmIQC} is checked only for a finite number of frequencies. In the next section, we propose an efficient method for robust stability analysis of sparsely interconnected uncertain systems based on the frequency-gridding approach. \section{Robust Stability Analysis of Interconnected Uncertain Systems}\label{sec:RobustStability} \subsection{Interconnected Uncertain Systems}\label{sec:Inter} Consider a network of $N$ uncertain subsystems where each of the subsystems is described as \small \begin{equation}\label{eq:Subsystems} \begin{split} &p^i = G_{pq}^i q^i + G_{pw}^iw^i \\ &z^i = G_{zq}^i q^i + G_{zw}^iw^i\\ &q^i = \Delta^i(p^i), \end{split} \end{equation} \normalsize where $G_{pq}^i \in \mathcal{RH}_{\infty}^{d_i \times d_i}$, $G_{pw}^i \in \mathcal{RH}_{\infty}^{d_i \times m_i}$, $G_{zq}^i \in \mathcal{RH}_{\infty}^{l_i \times d_i}$, $G_{zw}^i \in \mathcal{RH}_{\infty}^{l_i \times m_i}$, and $\Delta^i:\mathbb{R}^{d_i} \to \mathbb{R}^{d_i}$. If we let $p = (p^1, \dots, p^N)$, $q = (q^1, \dots, q^N)$, $w = (w^1, \dots, w^N)$ and $z = (z^1, \dots, z^N)$, the interconnection among the subsystems in \eqref{eq:Subsystems} can be characterized by the interconnection constraint $w = \Gamma z$ where $\Gamma$ is an interconnection matrix that has the following structure \small \begin{align}\label{eq:Interconst} \underbrace{ \begin{bmatrix} w^1\\w^2\\ \vdots\\ w^N \end{bmatrix}}_{w} = \underbrace{ \begin{bmatrix} \Gamma_{11} & \Gamma_{12} & \cdots & \Gamma_{1N} \\ \Gamma_{21} & \Gamma_{22} & \cdots & \Gamma_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ \Gamma_{N1} & \Gamma_{N2} & \cdots & \Gamma_{NN} \end{bmatrix}}_{\Gamma} \underbrace{ \begin{bmatrix} z^1\\z^2\\ \vdots\\ z^N \end{bmatrix}}_{z}. \end{align} \normalsize Each of the blocks $\Gamma_{ij}$ in the interconnection matrix are 0-1 matrices that describe how individual components of the input-output vectors of different subsystems are connected to each other. The entire interconnected uncertain system can be expressed as \small \begin{equation}\label{eq:SysInter} \begin{split} p& = G_{pq} q + G_{pw}w \\ z& = G_{zq} q + G_{zw}w\\ q& = \Delta(p)\\ w& = \Gamma z, \end{split} \end{equation} \normalsize where $G_{\star\bullet} = \diag(G_{\star\bullet}^1, \dots, G_{\star\bullet}^N)$ and $\Delta = \diag(\Delta^1, \dots, \Delta^N)$. 
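To make the structure of \eqref{eq:SysInter} concrete, the following minimal sketch (an illustration only, not part of the analysis method; the dimensions, data and function names are hypothetical) assembles the block-diagonal transfer matrices and a sparse 0-1 interconnection matrix at a single grid frequency using NumPy/SciPy.
\begin{verbatim}
# Illustrative sketch (hypothetical toy data): assembling the block-diagonal
# transfer matrices of the interconnected system and the constraint w = Gamma z
# at one grid frequency.
import numpy as np
from scipy.linalg import block_diag
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
N = 5                 # number of subsystems (toy value)
d_i, m_i = 1, 1       # per-subsystem channel dimensions (toy values)

def freq_resp(A, B, C, D, w):
    """Evaluate C (jwI - A)^{-1} B + D at the frequency w."""
    return C @ np.linalg.solve(1j*w*np.eye(A.shape[0]) - A, B) + D

subsys = []
for i in range(N):    # hypothetical stable subsystem state-space data
    n = 3
    A = -np.eye(n) + 0.1*rng.standard_normal((n, n))
    B = rng.standard_normal((n, d_i + m_i))
    C = rng.standard_normal((d_i + m_i, n))
    D = 0.1*rng.standard_normal((d_i + m_i, d_i + m_i))
    subsys.append((A, B, C, D))

w0 = 1.0                                  # one frequency grid point
G = [freq_resp(*s, w0) for s in subsys]
Gpq = block_diag(*[Gi[:d_i, :d_i] for Gi in G])  # diag(G_pq^1,...,G_pq^N)
Gpw = block_diag(*[Gi[:d_i, d_i:] for Gi in G])
Gzq = block_diag(*[Gi[d_i:, :d_i] for Gi in G])
Gzw = block_diag(*[Gi[d_i:, d_i:] for Gi in G])

# Sparse 0-1 interconnection matrix Gamma; here a toy ring w^i = z^{i+1}.
rows = np.arange(N); cols = (rows + 1) % N
Gamma = csr_matrix((np.ones(N), (rows, cols)), shape=(N, N))
print(Gpq.shape, Gzw.shape, "nnz(Gamma) =", Gamma.nnz)
\end{verbatim}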
Using this description of interconnected uncertain systems, we consider two formulations for analyzing robust stability, namely a "lumped" and a "sparse" formulation, as we explain next. \subsection{Lumped Formulation}\label{sec:Lumped} The classical approach to robust stability analysis of interconnected uncertain systems is to eliminate the interconnection constraint in \eqref{eq:Interconst} in order to describe the entire interconnected system as a lumped system \begin{equation}\label{eq:Lumped} \begin{split} p = \bar G q, \quad q = \Delta(p), \end{split} \end{equation} where $\bar G = G_{pq} + G_{pw}(I - \Gamma G_{zw})^{-1}\Gamma G_{zq}$. We will refer to $\bar G$ as the lumped system transfer function matrix. Note that $I - \Gamma G_{zw}$ must have a bounded inverse for all frequencies in order for the interconnection to be well-posed. Using the lumped formulation, one can use the IQC framework from Theorem \ref{thm:IQC} to analyze the robustness of the interconnected uncertain system. Let $\Delta \in \IQC(\bar \Pi)$ and assume that it satisfies the regularity conditions in Theorem~\ref{thm:IQC}, i.e., conditions 1 and 2 in the theorem. Now suppose that $\bar G \in \mathcal{RH}_{\infty}^{\bar d \times \bar d}$, where $\bar d = \sum_{i=1}^{N}d_i$. The interconnected uncertain system is then robustly stable if there exists a matrix $\bar \Pi$ such that \small \begin{align} \label{eq:IQCLumped} \begin{bmatrix} \bar G(j\omega)\\ I \end{bmatrix}^\ast \bar \Pi(j\omega) \begin{bmatrix} \bar G(j\omega) \\ I \end{bmatrix} \preceq -\epsilon I, \quad \forall \omega \in [0, \infty], \end{align} \normalsize for some $\epsilon > 0$. Notice that the matrix on the left hand side of LMI \eqref{eq:IQCLumped} is of order $\bar d$, and it is generally dense even if the subsystems are sparsely interconnected. IQC analysis of large-scale interconnected systems based on the lumped formulation can therefore be prohibitively costly even when the interconnection matrix $\Gamma$ is sparse. This is because the matrix $(I - \Gamma G_{zw})^{-1}$ is generaly dense even if $I - \Gamma G_{zw}$ is sparse. This follows from the Cayley-Hamilton theorem. Next we consider a sparse formulation of the analysis problem. \subsection{Sparse Formulation}\label{sec:Sparse} To avoid solving dense LMIs, we express the interconnection constraint using IQCs. First, notice that the equation $w = \Gamma z$ is equivalent to the following quadratic constraint \small \begin{align}\label{eq:IQCSimple} -\|w-\Gamma z \|_X^2 = -\begin{bmatrix} z \\ w \end{bmatrix}^{\ast} \begin{bmatrix} -\Gamma^T \\ I \end{bmatrix} X \begin{bmatrix} -\Gamma & I \end{bmatrix} \begin{bmatrix} z \\ w \end{bmatrix}\geq 0 \end{align} \normalsize where $\| \cdot \|_X$ denotes the norm induced by the inner product $\langle \alpha , X \beta \rangle$ for some $X \succ 0$ of order $\bar m = \sum_{i=1}^N m_i$. The inequality \eqref{eq:IQCSimple} can therefore be rewritten as an IQC defined by the multiplier \small \begin{align} \label{eq:IQCInter} \hat{\Pi} &=\begin{bmatrix} -\Gamma^T X \Gamma & \Gamma^T X\\ X\Gamma & -X \end{bmatrix}. \end{align} \normalsize This allows us to include the interconnection constraint in the IQC analysis problem explicitly, and this often results in sparse LMIs if the interconnection matrix is sparse. Consider \eqref{eq:SysInter}, and let $\Delta \in \IQC(\bar \Pi)$ where $\bar \Pi$ is defined in~\eqref{eq:IQCDiag}. 
Assuming that the first and second conditions in Theorem \ref{thm:IQC} are satisfied, it can be shown that the interconnected uncertain system is stable if there exist matrices $\bar \Pi$ and $X\succ 0$ such that \small \begin{multline}\label{eq:IQCInterconnected} \begin{bmatrix} G_{pq} & G_{pw} \\ I & 0 \end{bmatrix}^{\ast}\begin{bmatrix} \bar{\Pi}_{11} & \bar{\Pi}_{12} \\ \bar{\Pi}_{21} & \bar{\Pi}_{22} \end{bmatrix}\begin{bmatrix} G_{pq} & G_{pw} \\ I & 0 \end{bmatrix} -\\ \begin{bmatrix} -G_{zq}^{\ast}\Gamma^T\\ I -G_{zw}^{\ast}\Gamma^T \end{bmatrix}X\begin{bmatrix} -\Gamma G_{zq} & I-\Gamma G_{zw} \end{bmatrix} \preceq -\epsilon I \end{multline} \normalsize for $\epsilon > 0$ and for all $\omega \in [0, \infty]$, \cite{and+han:12}. The sparse formulation and lumped formulation result in equivalent optimization problems. The following theorem establishes this equivalence. \begin{theorem}\label{thm:IQCSparse} The LMI in \eqref{eq:IQCInterconnected} is feasible if and only if the LMI in~\eqref{eq:IQCLumped} is feasible. \end{theorem} \begin{IEEEproof} See \cite{and+han:12}. \end{IEEEproof} Theorem \ref{thm:IQCSparse} implies that analyzing robust stability of interconnected uncertain systems using the sparse and lumped formulations leads to equivalent conclusions. As a result, using the sparse formulation of the IQC analysis does not change the conservativeness of the analysis. However, note that the LMI in \eqref{eq:IQCInterconnected} is of order $\sum_{i=1}^{N}(d_i + m_i)$, and if the matrix variable $X$ is dense, the LMI will also be dense in general. Hence, if the scaling matrix $X$ is chosen to be dense, the sparse formulation does not present any improvement compared to the lumped formulation. This issue can be addressed using the following corollary to Theorem \ref{thm:IQCSparse}. \begin{corollary} \label{cor:IQC} In \eqref{eq:IQCInterconnected}, it is sufficient to consider a diagonal scaling matrix of the form $X= x I$ with $x > 0$. \end{corollary} \begin{IEEEproof} See \cite{and+han:12}. \end{IEEEproof} Corollary \ref{cor:IQC} implies that it is possible to choose the scaling matrix $X$ as a diagonal matrix with a single scalar variable, without adding any conservativeness to the analysis approach. Consequently, if the interconnection matrix is sufficiently sparse, then the LMI in \eqref{eq:IQCInterconnected} will generally also be sparse. \begin{rem} The proposed formulation for analyzing interconnected uncertain systems can also be used to analyze systems with uncertain interconnections. This can be done by modifying the subsystems' uncertainty descriptions and their system transfer matrices, in order to accommodate the uncertainty in the interconnection within the subsystems' uncertainty blocks. \end{rem} \begin{rem} As was mentioned in Section \ref{sec:Intro}, we solve the semi-infinite LMIs in \eqref{eq:IQCLumped} and \eqref{eq:IQCInterconnected} by performing frequency gridding where instead of considering all the frequencies in $[0, \infty]$, we study only a finite set of frequencies denoted by $S_f$. \end{rem} \section{Sparsity in Semidefinite Programs (SDPs)}\label{sec:Chordal} In this section, we discuss how the sparse formulation of the robustness analysis problem can be solved efficiently. The approach is based on sparse Cholesky factorization techniques \cite{blp:94}, \cite{geo:93}, which play a fundamental role in many sparse matrix algorithms.
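Before discussing the solution techniques, the following minimal sketch (our illustration only, with hypothetical toy data) shows the numerical object involved: it assembles the left-hand side of \eqref{eq:IQCInterconnected} at one grid frequency for \emph{given} multiplier values $\bar\Pi$ and $X = xI$ and checks negative definiteness; in an actual analysis these quantities are decision variables determined by an SDP solver.
\begin{verbatim}
# Minimal sketch (hypothetical data): numerically checking the sparse
# analysis LMI at one grid frequency for *given* Pi and X = x*I.
import numpy as np

def sparse_lmi_lhs(Gpq, Gpw, Gzq, Gzw, Gamma, Pi11, Pi12, Pi21, Pi22, x):
    d, m = Gpq.shape[0], Gzw.shape[1]
    T = np.block([[Gpq, Gpw], [np.eye(d), np.zeros((d, m))]])
    Pi = np.block([[Pi11, Pi12], [Pi21, Pi22]])
    R = np.hstack([-Gamma @ Gzq, np.eye(m) - Gamma @ Gzw])
    return T.conj().T @ Pi @ T - x * (R.conj().T @ R)

rng = np.random.default_rng(1)
d, m = 4, 8                              # toy channel dimensions
Gpq = 0.2*rng.standard_normal((d, d))
Gpw = 0.2*rng.standard_normal((d, m))
Gzq = 0.2*rng.standard_normal((m, d))
Gzw = 0.2*rng.standard_normal((m, m))
Gamma = np.zeros((m, m)); Gamma[0, 1] = Gamma[1, 0] = 1.0
r = np.abs(rng.standard_normal(d))       # r_i(jw) >= 0 for scalar uncertainties
Pi11, Pi22 = np.diag(r), -np.diag(r)
Pi12 = Pi21 = np.zeros((d, d))
x = 1.0

L = sparse_lmi_lhs(Gpq, Gpw, Gzq, Gzw, Gamma, Pi11, Pi12, Pi21, Pi22, x)
lam_max = np.max(np.linalg.eigvalsh(0.5*(L + L.conj().T)))
print("LMI satisfied at this frequency:", bool(lam_max < -1e-6))
\end{verbatim}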
Given a sparse positive definite matrix $X$, it is often possible to compute a sparse Cholesky factorization of the form $P^TXP = LL^T$ where $P$ is a permutation matrix and $L$ is a sparse lower-triangular Cholesky factor \cite{geo:81,rot:93,duf:89}. The nonzero pattern of $L+L^T$ depends solely on $P$: it includes all the nonzeros in $P^TXP$ and possibly a number of additional nonzeros which are referred to as \emph{fill}. For general sparsity patterns, the problem of computing a minimum fill factorization is known to be NP-complete \cite{Yan:81}, so in practice a fill-reducing permutation is often used. Sparsity patterns for which a zero-fill permutation exists are called \emph{chordal}, and for such sparsity patterns it is possible to efficiently compute a permutation matrix $P$ that leads to zero fill; see \cite{blp:94} and references therein. Sparse factorization techniques are also useful in interior-point methods for solving semidefinite optimization problems of the form \small \begin{subequations}\label{eq:SDPDual} \begin{align} & \minimize_{S,y} \quad \ b^T y \label{eq:SDPDual1} \\ & \subject \quad \sum_{i = 1}^{m} y_iQ^i + S = W, \ \ S \succeq 0. \label{eq:SDPDual2} \end{align} \end{subequations} \normalsize The variables are $S \in \mathbf S^n$ and $y \in \mathbb R^{m}$, and the problem data are $b \in \mathbb R^{m}$ and $W, Q^i \in \mathbf S^n$ for $i = 1, \dots, m$. The LMIs in~\eqref{eq:IQCLumped} and~\eqref{eq:IQCInterconnected} are of the form \eqref{eq:SDPDual} if we let $b = 0$, \cite{boyd:04,elg:00}, and it is clear that the slack variable $S$ inherits its sparsity pattern from the problem data because of the equality constraint \eqref{eq:SDPDual2}. This means that $S$ is sparse if the aggregate sparsity pattern of the data matrices is sparse. Solving the problem \eqref{eq:SDPDual} using an interior-point method involves evaluating the logarithmic barrier function $\phi(S) = -\log\det S$, its gradient $\nabla \phi(S) = -S^{-1}$, and terms of the form $\trace(Q^i\nabla^2 \phi(S)Q^j) = \trace(S^{-1}Q^iS^{-1}Q^j)$ at each iteration. When $S$ is sparse, this can be done efficiently using sparse factorization techniques. Note however that $S^{-1}$ is generally not sparse even when $S$ is sparse, but it is possible to avoid forming $S^{-1}$ by working in a lower-dimensional subspace defined by the filled sparsity pattern of $P^TSP$, \cite{and+dah+van10}, \cite{and:12}. This approach generally works well when the filled sparsity pattern is sufficiently sparse. The cost of forming and factorizing the so-called Newton equations typically dominates the cost of a single interior-point iteration. The Newton equations are of the form \begin{align}\label{eq:Newton} H \Delta y = r \end{align} where $\Delta y$ is the search direction, $H_{ij} = \trace(S^{-1}Q^i S^{-1} Q^j)$ for $i,j = 1, \dots, m$, and $r$ is a vector of residuals. In general, $H$ is dense even if the data matrices $W$ and $Q^i$ for $i = 1, \dots, m$ are sparse. As a result, it is typically not possible to reduce the computational cost of factorizing $H$, but when the data matrices are sparse and/or have low rank, it is possible to form $H$ very efficiently by exploiting the structure in the data matrices, \cite{ben:99,Fuji:97}. \section{Numerical Experiments}\label{sec:Results} In this section, we compare the computational cost of robustness analysis based on the sparse and the lumped formulations using two sets of numerical experiments.
In Section~\ref{sec:Chain}, we study a chain of uncertain systems where we compare the performance of the sparse and lumped formulations with respect to the number of subsystems in the network. In Section~\ref{sec:Tree}, we illustrate the effectiveness of the sparse formulation by analyzing an interconnection of uncertain systems over a so-called scale-free network. \subsection{Chain of Uncertain Systems}\label{sec:Chain} Consider a chain of $N$ uncertain subsystems where each of the subsystems is defined as in \eqref{eq:Subsystems}. We represent the uncertainty in each of the subsystems using scalar uncertain parameters $\delta^1, \dots, \delta^N$ which correspond to parametric uncertainties in different subsystems. The chain of uncertain systems is shown in Figure \ref{fig:System}. These uncertain gains are assumed to lie within the normalized interval $[-1, 1]$. The inputs and outputs of the subsystems are denoted by $w^i$ and $z^i$, respectively, where $w^i, z^i \in \mathbb R^2$ for $1 < i < N$, and $w^i, z^i \in \mathbb R$ for $i = 1, N$. \begin{figure} \caption{A chain of $N$ uncertain subsystems.} \label{fig:System} \end{figure} The interconnections in the network are defined as $w^i_2 = z^{i+1}_1 $ and $w^i_1 = z^{i-1}_2$ for $1<i<N$, and as $w^1 = z^2_1$ and $w^N = z^{N-1}_2$ for the remaining subsystems in the chain, see Figure \ref{fig:System}. Consequently, the interconnection matrix $\Gamma$ for this network is given by the nonzero blocks \small $\Gamma_{i,i-1} = \Gamma_{i-1,i}^T$ for $i = 2, \dots, N$, where $\Gamma_{i,i-1} = \Gamma_{i-1,i}^T = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \ i=3,\ldots,N-1$\normalsize, and $ \Gamma_{21} = \Gamma_{12}^T = (1,0), \quad \Gamma_{N-1,N} = \Gamma_{N,N-1}^T = (0,1)$. Given the uncertainty in each of the subsystems, we have $\delta^i \in \IQC(\Pi^i)$ for $i = 1, \dots, N$, where \small $\Pi^i = \begin{bmatrix} r_i(j\omega) & 0 \\ 0 & -r_i(j\omega) \end{bmatrix}$\normalsize, and $r_i(j\omega) \geq 0$, \cite{rantzer}. We choose the scaling matrix in \eqref{eq:IQCInterconnected} to be of the form $X = xI$. Note that analyzing the lumped system yields the LMI in~\eqref{eq:IQCLumped} of order $N$ whereas the sparse LMI in \eqref{eq:IQCInterconnected} is of order $3N-2$. As a result, for medium-sized networks, it may be computationally cheaper to solve the dense LMI in \eqref{eq:IQCLumped}, but for large and sparse networks, the sparse formulation is generally much more tractable. In order to confirm this, we conduct a set of numerical experiments where we compare the computation time required to solve the lumped and sparse formulations of the analysis problem for different numbers of subsystems in the chain. The interconnected systems considered in these experiments are chosen such that their robust stability can be established using both sparse and lumped formulations of the analysis problem. Note that generating such systems is generally not straightforward. In this paper, we use the following approach to generate such systems. Consider the interconnected system description in \eqref{eq:Subsystems} and \eqref{eq:SysInter}, and assume that \small \begin{align*} G_{\star\bullet}^i(s) = \begin{bmatrix} \begin{array}{c|c} A_{\star\bullet}^i & B_{\star\bullet}^i \\ \hline C_{\star\bullet}^i & D_{\star\bullet}^i \end{array}\end{bmatrix}, \end{align*} \normalsize for $i = 1, \dots,N$, $\star \in \{ p,z \}$ and $\bullet \in \{ q,w \}$.
This results in \small \begin{align*} G_{\star\bullet}(s) = \begin{bmatrix} \begin{array}{c|c} A_{\star\bullet} & B_{\star\bullet} \\ \hline C_{\star\bullet} & D_{\star\bullet} \end{array}\end{bmatrix}, \end{align*} \normalsize where $A_{\star\bullet} = \diag(A_{\star\bullet}^1, \dots, A_{\star\bullet}^N)$, $B_{\star\bullet} = \diag(B_{\star\bullet}^1, \dots, B_{\star\bullet}^N)$, $C_{\star\bullet} = \diag(C_{\star\bullet}^1, \dots, C_{\star\bullet}^N)$ and $D_{\star\bullet} = \diag(D_{\star\bullet}^1, \dots, D_{\star\bullet}^N)$. The system transfer matrices for the subsystems are then chosen such that they satisfy the following conditions: \begin{enumerate} \item $\real(\eigen(A_{\star\bullet}^i)) \prec 0$ for all $i = 1, \dots, N$, $\star \in \{ p,z \}$ and $\bullet \in \{ q,w \}$; \item there exists $r_i(j\omega) \geq 0$ such that \small \begin{align*} \begin{bmatrix} G^i_{pq}(j\omega)\\ I \end{bmatrix}^\ast \Pi^i(j\omega) \begin{bmatrix} G^i_{pq}(j\omega) \\ I \end{bmatrix} \preceq -\epsilon I, \quad \forall \omega \in S_f, \end{align*} \normalsize for $\epsilon > 0$ and all $i = 1, \dots, N$; \item $\real(\eigen(A_{l})) \prec 0$, where $A_{l} = A_{zw} - B_{zw}(I-\Gamma D_{zw})^{-1}\Gamma C_{zw}$, \end{enumerate} where the first and second conditions ensure that each of the subsystems is robustly stable, and the third condition guarantees that the lumped system transfer function matrix satisfies $\bar G \in \mathcal{RH}_{\infty}^{\bar d \times \bar d}$. Also note that the non-singularity of the matrix $I - \Gamma D_{zw}$ is guaranteed by the well-posedness of the interconnection. We generate subsystems that satisfy the three mentioned conditions according to the procedure described in Appendix~\ref{app:sub}. We then solve the lumped formulation of the analysis problem to check whether robust stability of the interconnected system can be established using IQC-based analysis methods. If the system is proven to be robustly stable using the lumped formulation, we conduct the analysis once again using its sparse formulation. We solve the lumped formulation of the analysis problem using the general-purpose SDP solvers SDPT3, \cite{TTT:99}, and SeDuMi, \cite{Stu:01}. To solve the sparse formulation of the analysis problem, we use the sparse solvers DSDP \cite{BeY:05} and SMCP \cite{and+dah+van10} which exploit the aggregate sparsity pattern. Figure \ref{fig:Results} illustrates the average CPU time required to solve the lumped and sparse formulations of the problem based on 10 trials for each value of $N$ and for a single frequency. As can be seen from the figure, the analysis based on the sparse LMI in \eqref{eq:IQCInterconnected} is more efficient when the number of subsystems is large, and SMCP yields the best results for $N > 70$. For $N = 200$ the Cholesky factorization of the slack variable $S$ for the sparse formulation was computed separately using the toolbox CHOLMOD, \cite{Chen:08}, with AMD ordering, \cite{Ame:04,Ame:96}, which on average resulted in approximately $1\%$ fill-in. Note that DSDP was not able to match the performance of SMCP in this example. This is because the performance of DSDP mainly relies on exploiting low-rank structure in the optimization problem, and this structure is not present in this example. We also tried to solve the sparse problem using SDPT3 and SeDuMi, but it resulted in much higher computational times than for the dense problem, so we have chosen not to include these results.
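For reference, the following minimal sketch (our illustration only, with hypothetical toy dimensions and data) shows how the interconnection matrix of the chain in Figure~\ref{fig:System} and the matrix $A_l$ of condition 3 can be assembled and checked numerically.
\begin{verbatim}
# Illustrative sketch (hypothetical toy data): chain interconnection
# matrix and the eigenvalue check of condition 3.
import numpy as np
from scipy.linalg import block_diag

def chain_interconnection(m):
    """0-1 interconnection matrix (w = Gamma z) of the chain described
    above: interior subsystems have w^i, z^i in R^2, the ends are scalar."""
    off = np.cumsum([0] + m)
    Gamma = np.zeros((off[-1], off[-1]))
    for i in range(len(m) - 1):
        Gamma[off[i] + m[i] - 1, off[i + 1]] = 1.0   # w^i_2 = z^{i+1}_1
        Gamma[off[i + 1], off[i] + m[i] - 1] = 1.0   # w^{i+1}_1 = z^i_2
    return Gamma

N = 6
m = [1] + [2]*(N - 2) + [1]          # per-subsystem signal dimensions
Gamma = chain_interconnection(m)
mbar = sum(m)

# Hypothetical stable state-space data for G_zw and the check of
# condition 3: Re(eig(A_l)) < 0, with A_l as defined in condition 3.
rng = np.random.default_rng(2)
n_i = 2
Azw = block_diag(*[-np.eye(n_i) + 0.1*rng.standard_normal((n_i, n_i))
                   for _ in m])
Bzw = block_diag(*[rng.standard_normal((n_i, mi)) for mi in m])
Czw = block_diag(*[rng.standard_normal((mi, n_i)) for mi in m])
Dzw = block_diag(*[0.1*rng.standard_normal((mi, mi)) for mi in m])
Al = Azw - Bzw @ np.linalg.solve(np.eye(mbar) - Gamma @ Dzw, Gamma @ Czw)
print("condition 3 holds:", bool(np.max(np.linalg.eigvals(Al).real) < 0))
\end{verbatim}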
\begin{figure} \caption{Average CPU time required to solve (i) the lumped formulation of the analysis problem with SDPT3 and SeDuMi, and (ii) the sparse formulation of the analysis problem with DSDP and SMCP for a chain of length $N$.} \label{fig:Results} \end{figure} \vspace*{-5pt} \subsection{Interconnection of Uncertain Systems Over a Scale-Free Network}\label{sec:Tree} In this section, we study the interconnection of $N$ uncertain subsystems over a randomly generated scale-free network. Scale-free networks are networks where the degree distribution of the nodes follows a distribution defined as \small \begin{align}\label{eq:Plaw} P_{\alpha}(k) = \frac{k^{-\alpha}}{\sum_{n=1}^{N}n^{-\alpha}}, \end{align} \normalsize for $k\geq 1$ and where typically $2<\alpha<3$, \cite{clauset:09}. In order to compare the cost of solving the sparse and lumped formulations of the analysis problem, we conduct a set of numerical experiments where we analyze the stability of a network of 500 uncertain subsystems. The network considered in this experiment has been generated using the NetworkX software package, \cite{NetworkX}, which provides us with the adjacency matrix of the network. In this experiment we use $\alpha = 2.5$ and the resulting network is a tree. The degree of each node~$i$ in the network is given by the number of nonzero elements in the $i$th row or column of the adjacency matrix. For this network, the numbers of nodes with $\degree(i) \leq 5$, $6\leq\degree(i) \leq 10$ and $11 \leq \degree(i)$ are 478, 16 and 6, respectively. Having this degree distribution for the network, we can see that only a small number of nodes in the network have a degree larger than 10. This implies that the interconnection among the subsystems is very sparse. Given the adjacency matrix of the network, we will now describe the interconnection matrix for this network. Recall the interconnection matrix description in \eqref{eq:Interconst}. The only nonzero blocks in the $i$th block-row of this matrix are $\Gamma_{ij}$ where $j \in \adj(i)$. A nonzero block $\Gamma_{ij}$ describes which of the outputs of the $j$th subsystem are connected to which inputs of the $i$th subsystem, and it depends on the chosen indexing for the input-output vectors. More specifically, we use Algorithm~\ref{alg:alg2} to generate the interconnection matrix. \begin{algorithm} \caption{Interconnection matrix}\label{alg:alg2} \footnotesize \begin{algorithmic}[1] \State{Given the adjacency matrix $A$ and the number of subsystems $N$} \State{Set $\mathbf I = \mathbf 1_N$ and $\mathbf O = \mathbf 1_N$} \For {$i = 1, \dots, N$} \For {$j = 1, \dots, N$} \If{ $A_{ij} == 1 $} \State{Choose $\Gamma_{ij}$ in \eqref{eq:Interconst} such that $w^i_{\mathbf I_i} = z^j_{\mathbf O_j}$} \State{$\mathbf I_i := \mathbf I_i + 1$, $\mathbf O_j := \mathbf O_j + 1$} \EndIf \EndFor \EndFor \end{algorithmic} \normalsize \end{algorithm} The input-output dimension for the subsystem associated with the $i$th node is given by $\degree(i)$. In order to provide suitable systems for the experiment in this section, we use the same procedure as described in Section \ref{sec:Chain}, and the uncertainty in the interconnected system is chosen to have the same structure as in Section~\ref{sec:Chain}. Table \ref{tab:results} reports the average CPU time required to solve the sparse and lumped formulations of the analysis problem.
The results are based on 10 different system transfer matrices for a single frequency, and they clearly demonstrate the advantage of the sparse formulation compared to the lumped formulation. In this experiment, the Cholesky factorization of the slack variable $S$ on average results in about $0.9 \%$ fill-in. Note that the advantage of the sparse formulation becomes even more visible if the number of frequency points considered in the stability analysis, i.e., $|S_f|$, is large. \begin{table} \centering \caption{\footnotesize time for analyzing 500 subsystems over a scale-free network.\normalsize} \label{tab:results} \begin{tabular}{|c||c||c|} \hline \textbf{Solver} & \textbf{Avg. CPU time [sec]} & \textbf{Std. dev. [sec]} \\ \hline \hline SDPT3 (lumped) &$5640 $ & 529.8 \\ \hline SeDuMi (lumped) & $2760$ &284.3\\ \hline DSDP (sparse) & $167 $ & 28.3 \\ \hline SMCP (sparse) & $33 $ & 5.6 \\ \hline \end{tabular} \end{table} \section{Conclusions}\label{sec:Conclusions} IQC-based analysis of sparsely interconnected uncertain systems generally involves solving a set of dense LMIs. In this paper, we have shown that this can be avoided by using IQCs to describe the interconnections among the subsystems. This yields an equivalent formulation of the analysis problem that involves a set of sparse LMIs. By exploiting the sparsity in these LMIs, we have shown that it is often possible to solve the analysis problem associated with large-scale sparsely interconnected uncertain systems more efficiently than with existing methods. As future research directions, we mention the possibility to decompose the sparse LMIs into a set of smaller but coupled LMIs. This would allow us to employ distributed algorithms to solve these LMIs, and such methods may also be amenable to warm-starting techniques to accelerate the frequency sweep in the analysis problem. \appendices \section{Subsystems Generation for Numerical Experiments}\label{app:sub} The system transfer matrices for the subsystems can be generated in different ways. For instance this can either be done by using the \texttt{rss} command in MATLAB$^\text{TM}$ that generates random multi-input multi-output (MIMO) systems, or more generally, by constructing the MIMO systems by randomly choosing each of the elements in the system transfer function matrix separately. The latter approach allows us to produce transfer matrices where different elements have different poles, zeros and orders. In the experiments described in sections~\ref{sec:Chain} and \ref{sec:Tree}, we use the latter approach where we randomly choose each element in the transfer matrices to be a first-order system. Then we explicitly check whether the generated transfer matrices satisfy conditions 1 and 2 in Section \ref{sec:Chain}. \begin{algorithm} \caption{Subsystem rescaling}\label{alg:alg1} \footnotesize \begin{algorithmic}[1] \State{Given $G^i_{zw}$ for $i= 1, \ldots, N$ and $\gamma$} \For {$i = 1, \dots, N$} \If{ $\| G^i_{zw} \|_\infty \geq 1/\gamma$} \State{ Choose $\alpha$ such that $\alpha\| G^i_{zw} \|_\infty < 1/\gamma$} \State{ $G_{zw}^i := \alpha G_{zw}^i$} \Else { Leave the system transfer function matrix unchanged.} \EndIf \EndFor \end{algorithmic} \normalsize \end{algorithm} Let $\bar \sigma(\Gamma) = \gamma$ and assume that we have generated system transfer matrices for all subsystems such that they satisfy conditions 1 and~2 in Section \ref{sec:Chain}. The third condition can then be satisfied by scaling the system transfer matrices using Algorithm \ref{alg:alg1}. 
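A minimal sketch of this rescaling step (our illustration only, using a crude frequency-gridded estimate of the $\mathcal H_\infty$ norm and hypothetical toy subsystems) is given below.
\begin{verbatim}
# Illustrative sketch (hypothetical toy data) of the subsystem rescaling:
# scale each G^i_zw so that ||G^i_zw||_inf < 1/gamma.
import numpy as np

def hinf_norm_grid(A, B, C, D, freqs):
    """Crude estimate of ||G||_inf by gridding the largest singular value."""
    vals = []
    for w in freqs:
        G = C @ np.linalg.solve(1j*w*np.eye(A.shape[0]) - A, B) + D
        vals.append(np.linalg.svd(G, compute_uv=False)[0])
    return max(vals)

def rescale_subsystems(zw_systems, gamma, freqs):
    scaled = []
    for (A, B, C, D) in zw_systems:
        norm = hinf_norm_grid(A, B, C, D, freqs)
        if norm >= 1.0/gamma:
            alpha = 0.9/(gamma*norm)   # any alpha with alpha*norm < 1/gamma
            scaled.append((A, B, alpha*C, alpha*D))
        else:
            scaled.append((A, B, C, D))
    return scaled

# Toy usage with hypothetical first-order SISO blocks G^i_zw.
rng = np.random.default_rng(3)
systems = [(np.array([[-1.0 - rng.random()]]), np.array([[1.0]]),
            np.array([[2.0*rng.random()]]), np.array([[0.1]]))
           for _ in range(5)]
gamma = 1.0                            # toy value for sigma_max(Gamma)
freqs = np.logspace(-2, 2, 200)
rescaled = rescale_subsystems(systems, gamma, freqs)
\end{verbatim}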
By the small gain theorem, the rescaled subsystems satisfy the third condition in Section~\ref{sec:Chain}, \cite{essentials}. \ifCLASSOPTIONcaptionsoff \fi \end{document}
\begin{document} \begin{frontmatter} \title{Physics-informed MTA-UNet: Prediction of Thermal Stress and Thermal Deformation of Satellites} \author[mymainaddress,mysecondaryaddress]{Zeyu Cao} \ead{[email protected]} \author[mymainaddress]{Wen Yao\corref{mycorrespondingauthor}} \cortext[mycorrespondingauthor]{Corresponding author} \author[mymainaddress]{Wei Peng} \author[mymainaddress]{Xiaoya Zhang} \author[mysecondaryaddress]{Kairui Bao} \address[mymainaddress]{Defense Innovation Institute, Chinese Academy of Military Science, No. 53, Fengtai East Street, Beijing 100071, China} \address[mysecondaryaddress]{The No.92941st Troop of PLA, Huludao 125001, China} \begin{abstract} The rapid analysis of thermal stress and deformation plays a pivotal role in the thermal control measures and optimization of the structural design of satellites. For achieving real-time thermal stress and thermal deformation analysis of satellite motherboards, this paper proposes a novel Multi-Task Attention UNet (MTA-UNet) neural network which combines the advantages of both Multi-Task Learning (MTL) and U-Net with attention mechanism. Besides, a physics-informed strategy is used in the training process, where partial differential equations (PDEs) are integrated into the loss functions as residual terms. Finally, an uncertainty-based loss balancing approach is applied to weight different loss functions of multiple training tasks. Experimental results show that the proposed MTA-UNet effectively improves the prediction accuracy of multiple physics tasks compared with Single-Task Learning (STL) models. In addition, the physics-informed method brings less error in the prediction of each task, especially on small data sets. The code can be downloaded at: \url{https://github.com/KomorebiTso/MTA-UNet}. \end{abstract} \begin{keyword} Physics-informed surrogate model \sep Multi-Task Learning \sep U-Net \sep Thermal stress and deformation \end{keyword} \end{frontmatter} \section{Introduction}\label{sec1} Satellites are important performers of space missions and play an indispensable role in fields such as communication, remote sensing, navigation, and military reconnaissance \cite{montenbruck2002satellite}. Increasingly complicated aerospace missions pose greater challenges to the stability and reliability of satellites \cite{kodheli2020satellite}. The components installed on satellite motherboards inevitably generate much heat during operation due to their high power density \cite{chen2020heat}. The heat then leads to a raft of thermal responses such as thermal deformation, thermal buckling, and thermally induced vibration, affecting the accuracy of the satellite mission \cite{zhengchun2016design,liu2022thermal}. Ground-based simulation test and numerical analysis are two traditional methods commonly adopted in engineering to analyze thermo-mechanical coupling problems such as thermal stress and thermal deformation of satellites \cite{stohlman2018coupled,azadi2017thermally,johnston2000thermally,shen2019thermal}. However, these two methods are usually costly and require a long computational time. As a result, in terms of optimization of the design of thermal control measures and structural layout for satellites, the iterative design by the above two methods is likely to face a significant cost and computational burden or even fail to be completed in a given time \cite{zhao2021surrogate,chen2018practical,cuco2015multi}. 
It is urgent and necessary to conduct a real-time analysis of the thermo-mechanical coupling problems of the satellite. This paper aims to achieve fast and accurate predictions of thermal stress and thermal deformation of satellite motherboards when given a temperature field. A common method for the aforementioned engineering requirements is to build a surrogate model, which quickly produces high-precision predictions. When used within an iterative optimization procedure, a surrogate model can achieve a compromise between computational accuracy and computational cost, improving the efficiency of prediction tasks \cite{chen2019satellite,cuco2015multi,yao2012surrogate}. At present, the mainstream surrogate model methods include polynomial-based response surfaces \cite{goel2009comparing}, support vector machine regression \cite{clarke2005analysis}, radial basis functions \cite{yao2011concurrent}, and Kriging interpolation \cite{zhang2019regularization}. However, when dealing with ultra-high dimensional regression problems, these traditional surrogate modeling methods all face the challenge of the “curse of dimensionality” (i.e. the difficulty in constructing surrogate models between high-dimensional variables). As artificial intelligence advances, many deep learning-based surrogate model methods have been applied to the prediction of physics-related problems, and neural networks such as fully connected neural networks \cite{zakeri2019deep}, fully convolutional neural networks \cite{edalatifar2021using}, and conditional generative adversarial networks \cite{farimani2017deep} have been widely used. In particular, the U-Net, as an advanced neural network, has become increasingly popular. Rishi Sharma et al. \cite{sharma2018weakly} take different boundary conditions as input and apply the U-Net to predict the temperature field in a two-dimensional plane. Cheng et al. \cite{chen2019u} analyze the two-dimensional velocity and pressure fields around arbitrary shapes in laminar flow through the U-Net. In addition, Saurabh Deshpande et al. \cite{deshpande2021fem} use the U-Net to predict the large deformation response of hyperelastic bodies under loading. These related works demonstrate the potential of the U-Net in ultra-high dimensional regression problems. The focus of the aforementioned deep learning-based surrogate models is to mine large amounts of training data generated by finite element analysis or experiments. Nevertheless, these models are driven by data loss only, and they often suffer from the high cost of generating training data or from having only a small amount of it. The physics-informed neural network (PINN) \cite{raissi2019physics} is an effective way to alleviate this problem. For physics prediction tasks where prior knowledge is available, the PINN method integrates this prior physical information into the neural network, reducing its dependence on large amounts of training data. Raissi et al. \cite{raissi2019physics} combine the residual error of PDEs with the loss function of an FCN and predict the velocity and pressure fields around a cylinder, while Liu et al. \cite{liu2022temperature} use PINN to solve the forward and inverse problems of the temperature field. In addition, motivated by PINN, PDE-based loss functions can also be used for surrogate modeling. Bao et al. \cite{bao2022physics} and Zhao et al. \cite{zhao2021physics} discretize the heat conduction equation by finite differences and use it as a loss function.
Note that all the physics prediction models mentioned above are only trained in a Single-Task Learning (STL) manner (i.e., each task is trained separately). When faced with multiple physics prediction tasks, STL has two drawbacks. On the one hand, training each sub-task separately consumes considerable memory and computational time. On the other hand, if each sub-task is modeled separately, STL tends to ignore the relationships between tasks, such as correlations, conflicts, and constraints, making it hard to achieve high precision for the prediction of multiple tasks. Multi-Task Learning (MTL) shows clear advantages in dealing with such multiple tasks. In MTL, multiple tasks share one model, which reduces memory usage and improves inference speed. MTL combines related tasks together and enables the tasks to complement each other by sharing feature information between tasks, which can effectively reduce over-fitting and improve prediction performance \cite{zhang2018overview,zhang2021survey}. Existing MTL is divided into soft parameter sharing and hard parameter sharing. In recent years, there has been a proliferation of research and successful practices in the industrial community on soft parameter sharing, such as the Cross-Stitch \cite{misra2016cross}, PLE \cite{tang2020progressive}, ESMM \cite{ma2018entire} and SNR \cite{ma2019snr}. In particular, the MMOE \cite{ma2018modeling} proposed by Google has achieved great success. It consists of expert networks shared by multiple tasks and task-specific gate networks which can decide what to share. Such a flexible design allows the model to automatically learn how to assign expert parameters based on the relationships among the low-level tasks. The gate networks thus effectively improve the generalization performance of the model and the prediction accuracy across tasks. MTL has made remarkable achievements in the field of computer vision and has been widely applied to engineering problems such as recommendation systems. However, such MTL models have rarely been used in high-dimensional regression problems with multiple physics tasks. In this paper, the regression tasks from temperature fields to thermal deformation and thermal stress of satellite motherboards are regarded as mappings between images. To achieve accurate and fast prediction of these multiple physics tasks, three strategies are adopted. \begin{enumerate} \item [(1)] This paper integrates the advantages of the U-Net and MTL, and proposes the Multi-Task Attention UNet network (MTA-UNet). This network not only shares feature information between high-level layers and low-level layers, but also shares feature information between different tasks. Specifically, it shares parameters in a targeted way through an attention mechanism. The MTA-UNet effectively reduces the training time of the model and improves the accuracy of prediction compared with the STL U-Net. \item [(2)] A physics-informed approach is applied in training the deep learning-based surrogate model, where finite differences are used to discretize the thermoelastic and thermal equilibrium equations. The equations are encoded into a loss function to fully exploit the existing physics knowledge. \item [(3)] Faced with multiple physics tasks, an uncertainty-based loss balancing strategy is adopted to weight the loss functions of different tasks during the training process.
This strategy addresses the difficulty of balancing the training speed and accuracy of different tasks when they are trained together, effectively reducing competition between tasks. \end{enumerate} The rest of the paper is structured as follows. Section \ref{sec2} presents the specific definition of the mathematical model. Section \ref{sec3} introduces in detail the strategies applied in the construction of the deep learning-based MTA-UNet. Section \ref{sec4} shows the training steps and the experimental results. Section \ref{sec5} concludes the paper. \section{Mathematical Modeling of Thermal Stress and Deformation Prediction}\label{sec2} In this paper, as previously defined in \cite{chen2020heat} and \cite{chen2018practical}, a two-dimensional satellite motherboard with partial openings is used as a study case. The objective is to realize a rapid prediction of thermal stress and thermal deformation. \begin{figure*} \caption{Schematic diagram of a two-dimensional satellite motherboard.} \label{motherboard} \end{figure*} The satellite motherboard with four circular holes is shown in Fig.\ref{motherboard}, and the holes represent screw holes in engineering assembly. There are $n$ electronic components installed on the motherboard, which generate a large amount of heat during operation and can be regarded as internal heat sources. It is assumed that the temperature of the satellite motherboard changes by $T$ under the joint action of the internal heat sources and the external environment. Due to thermal expansion and contraction, the elastomer tends to undergo thermal deformation, and this deformation is restricted to a certain extent by the external constraints and the mutual constraints between different parts of the body. Thermal stress is therefore generated inside the elastomer, which in turn produces additional tension and affects the thermal deformation. In this case, zero-displacement boundary conditions are adopted at the edges of the holes. The thermoelastic properties of the motherboard are assumed to be isotropic, with parameters consistent with those of aluminum. The linear elastic coefficients of the motherboard are evaluated at the reference temperature $T_{0}=273$ K and are assumed not to change with temperature. Stress-free and adiabatic boundary conditions are adopted on the outer boundary of the motherboard. The side length of the rectangular computational domain is set to $L=H=20$ cm. The radius of the holes is $r=0.5$ cm, and the thermal conductivity coefficient within the domain is $k = 1$ W/(m$\cdot$K). The coefficient of linear expansion is $\alpha = 1\times 10^{-5}$, Young's modulus is $E = 5\times 10^{4}$, and Poisson's ratio is $\mu = 0.2$. The motherboard is a two-dimensional planar elastomer, and its thermal stress components, thermal displacements, and temperature satisfy the following equations: \begin{equation}\label{1} \left\{\begin{array}{l} \sigma_{xx}=\frac{E}{1-\mu^{2}}\left(\frac{\partial u_{x}}{\partial x}+\mu \frac{\partial u_{y}}{\partial y}\right)-\frac{E \alpha T}{1-\mu} \\ \sigma_{yy}=\frac{E}{1-\mu^{2}}\left(\frac{\partial u_{y}}{\partial y}+\mu \frac{\partial u_{x}}{\partial x}\right)-\frac{E \alpha T}{1-\mu} \\ \sigma_{x y}=\frac{E}{2(1+\mu)}\left(\frac{\partial u_{y}}{\partial x}+\frac{\partial u_{x}}{\partial y}\right) \end{array}\right.
\end{equation} where $\sigma_{xx}$ denotes the thermal stress in the $x$ direction, $\sigma_{yy}$ the thermal stress in the $y$ direction, and $\sigma_{xy}$ the tangential (shear) thermal stress. In addition, $u_{x}$ and $u_{y}$ represent the thermal displacements in the $x$ and $y$ directions, respectively. According to the differential equations of thermal equilibrium, $u_{x}$ and $u_{y}$ satisfy the following set of equations: \begin{equation}\label{2} \left\{\begin{array}{l} \frac{\partial^{2} u_{x}}{\partial x^{2}}+\frac{1-\mu}{2} \frac{\partial^{2} u_{x}}{\partial y^{2}}+\frac{1+\mu}{2} \frac{\partial^{2} u_{y}}{\partial x \partial y}-(1+\mu) \alpha \frac{\partial T}{\partial x}=0 \\ \frac{\partial^{2} u_{y}}{\partial y^{2}}+\frac{1-\mu}{2} \frac{\partial^{2} u_{y}}{\partial x^{2}}+\frac{1+\mu}{2} \frac{\partial^{2} u_{x}}{\partial x \partial y}-(1+\mu) \alpha \frac{\partial T}{\partial y}=0 \end{array}\right. \end{equation} and the boundary conditions further satisfy: \begin{equation}\label{3} \left\{\begin{array}{l} l\left(\frac{\partial u_{x}}{\partial x}+\mu \frac{\partial u_{y}}{\partial y}\right)_{s}+m \frac{1-\mu}{2}\left(\frac{\partial u_{x}}{\partial y}+\frac{\partial u_{y}}{\partial x}\right)_{s}=l(1+\mu) \alpha(T)_{s} \\ m\left(\frac{\partial u_{y}}{\partial y}+\mu \frac{\partial u_{x}}{\partial x}\right)_{s}+l \frac{1-\mu}{2}\left(\frac{\partial u_{y}}{\partial x}+\frac{\partial u_{x}}{\partial y}\right)_{s}=m(1+\mu) \alpha(T)_{s} \end{array}\right. \end{equation} Assuming that the outward normal direction of the boundary is $\textbf{N}$, $l$ and $m$ are its direction cosines with respect to the $x$ axis and the $y$ axis, respectively. Based on the above set of equations and the corresponding parameters, the thermal stress components and the thermal displacement components can be calculated when given an arbitrary temperature field $T$. However, such numerical methods are usually costly and time-consuming. In this paper, a deep learning-based surrogate model is constructed to quickly predict five physical components of the satellite motherboard, $u_{x}$, $u_{y}$, $\sigma_{xx}$, $\sigma_{yy}$, and $\sigma_{xy}$. The five prediction tasks are grouped into three sub-tasks: $u_{x}$ and $\sigma_{xx}$ in task 1, $u_{y}$ and $\sigma_{yy}$ in task 2, and $\sigma_{xy}$ in task 3. The three sub-tasks are trained together, and the temperature field $T$ is the input of the model. \section{Method}\label{sec3} This section details the technical strategies proposed during the construction of the deep learning surrogate model. First, to reduce the training cost and improve the prediction performance of the model on multiple physics tasks, we design the MTA-UNet network structure in Section \ref{sec31}. Subsequently, in Section \ref{sec32}, a physics-informed approach is applied to reduce the required amount of training data. Lastly, Section \ref{sec33} adopts an uncertainty-based loss balancing strategy to weight the loss functions of the tasks, balancing their training speed and accuracy. Fig.\ref{framework} shows the framework of the main technical strategies. \begin{figure*} \caption{Framework of the main technical strategies.
Three strategies are applied when building the surrogate model: the MTA-UNet network, the physics-informed training, and the uncertainty-based loss balancing method.} \label{framework} \end{figure*} \subsection{MTA-UNet Network Structure}\label{sec31} This section describes the MTA-UNet in detail, and Fig.\ref{MTA-UNet} shows the actual structure. This model takes the U-Net as its basic architecture; the improvement lies in how features are shared between different tasks. In the MTA-UNet, multiple sub-tasks are trained together and can share feature parameters with one another. Different tasks share the coding layers, and each of them has its own decoding layers. In addition, when the features of different layers are concatenated, feature selection is carried out through the Attention Gate (AG). The AG distinguishes which shared features are highly correlated with specific tasks and extracts the relevant shared features from the shared coding layer. The MTA-UNet combines the merits of U-Net and MTL, having the advantage of a “purposeful” combination of feature parameters from different layers and physics tasks. This design fully exploits the information in the training sets, reducing the training time and memory footprint while improving the prediction performance of the model. This section is divided into the architecture of the MTA-UNet (\ref{sec311}), the sharing mechanism of the MTA-UNet (\ref{sec312}), and the Attention Gate of the MTA-UNet (\ref{sec313}). \begin{figure*} \caption{Structure of MTA-UNet.} \label{MTA-UNet} \end{figure*} \subsubsection{Architecture of the MTA-UNet Network}\label{sec311} To fully share feature information of different layers, the MTA-UNet adopts the U-Net \cite{ronneberger2015u} as its basic architecture. In recent years, the U-Net has exhibited good performance when dealing with various ultra-high dimensional regression problems. As shown in Fig.\ref{UNet}, the U-Net has a U-shaped symmetric structure that combines feature parameters from different layers. Owing to this characteristic, the U-Net network is suitable for image-to-image regression tasks as it takes multi-scale feature fusion into account and increases the amount of information. \begin{figure*} \caption{Structure of U-Net.} \label{UNet} \end{figure*} The MTA-UNet network built in this paper fully retains the advantages of the U-Net network. The first half of the MTA-UNet is a classical downsampling process, analogous to an encoder, which extracts multi-scale features of an image through pooling and convolution operations. The second half is an upsampling process, analogous to a decoder, which restores the size of the image layer by layer through deconvolution operations. These feature maps are effectively used in subsequent calculations through skip connections. In the MTA-UNet, the downsampling layers are shared, while each task has its own exclusive upsampling layers. Each stage of the MTA-UNet consists of two repeated $3\times 3$ convolutions (padding $=1$), each followed by a BatchNorm2d layer and an activation function, and a $2\times 2$ max-pooling operation with a stride of 2. We double the number of feature channels in each step of the shared downsampling and, in contrast, halve the number of feature channels in each step of the exclusive upsampling. A minimal code sketch of this structure is given below.
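The following is a minimal PyTorch sketch of this structure (our illustration only, reduced to two resolution levels with hypothetical channel widths; the actual network in Fig.\ref{MTA-UNet} is deeper): a shared encoder, one gated skip connection and decoder per task, and $1\times 1$ output heads for the three sub-tasks.
\begin{verbatim}
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions, each followed by BatchNorm2d and ReLU
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU())

class AttentionGate(nn.Module):
    """Weights the shared encoder features f by their similarity to the
    task-specific decoder features g before the skip connection."""
    def __init__(self, c_f, c_g, c_mid):
        super().__init__()
        self.wf = nn.Conv2d(c_f, c_mid, 1)
        self.wg = nn.Conv2d(c_g, c_mid, 1)
        self.psi = nn.Sequential(nn.Conv2d(c_mid, 1, 1), nn.Sigmoid())
    def forward(self, f, g):
        a = self.psi(torch.relu(self.wf(f) + self.wg(g)))  # weights in [0,1]
        return f * a

class MTAUNet(nn.Module):
    """Minimal two-level sketch: shared encoder, one decoder per task."""
    def __init__(self, in_ch=1, base=16, task_out_ch=(2, 2, 1)):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)        # shared
        self.pool = nn.MaxPool2d(2)
        self.enc2 = conv_block(base, 2*base)       # shared
        self.ups, self.ags, self.decs, self.heads = \
            (nn.ModuleList() for _ in range(4))
        for out_ch in task_out_ch:                 # task-specific branches
            self.ups.append(nn.ConvTranspose2d(2*base, base, 2, stride=2))
            self.ags.append(AttentionGate(base, base, base))
            self.decs.append(conv_block(2*base, base))
            self.heads.append(nn.Conv2d(base, out_ch, 1))
    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(self.pool(f1))
        outs = []
        for up, ag, dec, head in zip(self.ups, self.ags, self.decs, self.heads):
            g = up(f2)                             # task-specific upsampling
            skip = ag(f1, g)                       # gated shared features
            outs.append(head(dec(torch.cat([skip, g], dim=1))))
        return outs

# quick shape check on a toy temperature field
y = MTAUNet()(torch.randn(2, 1, 64, 64))
print([t.shape for t in y])
\end{verbatim}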
After the feature selection through the AG, the selected downsampling (encoder) features are skip-connected to the upsampling (decoder) features at the corresponding scale of the feature map. In the last layer, a $1\times1$ convolution maps the feature vector at each position to the corresponding output component. \subsubsection{Feature Sharing Mechanism of MTA-UNet}\label{sec312} To enable the sharing of feature parameters across multiple physics tasks, the MTA-UNet uses partially shared encoding layers so that multiple tasks can be learned simultaneously. During the downsampling process, the tasks share the encoding layers, while each task has an independent decoding layer. This concatenation of shared and task-specific feature parameters realizes a deep sharing of multi-task feature parameters. Consider task $k$ among the given $K$ tasks, $k=1,2, \ldots, K$. The model consists of the shared downsampling function $f$, the task-specific upsampling functions $g_{k}$, and $K$ task networks $h_{k}$. The shared layers follow the input layer, and the task networks are built upon the output of the shared bottom. The output $y_{k}$ of task $k$ follows the corresponding task-specific tower. Assuming that, for task $k$, the $m^{th}$ downsampling layer of the model is $f^{m}$ and its corresponding upsampling layer is $g_{k}^{m}$, the output of task $k$ is: \begin{equation} \begin{aligned}\label{4} y_{k}=h_{k}\left(\text {SkipConnect}\left(AG_{k}^{m}\left(f^{m}, g_{k}^{m}\right), g_{k}^{m}\right)\right) \end{aligned} \end{equation} where $AG_{k}^{m}$ denotes the AG operation for the $m^{th}$ layer of task $k$. \subsubsection{Attention Gate of MTA-UNet}\label{sec313} Inspired by CNN-based MTL models such as MMOE \cite{ma2018modeling} and PLE \cite{tang2020progressive}, the AG operation is added to the MTA-UNet model to realize the selection of shared features. In this way, shared features that are highly relevant to a specific task are extracted from the shared network layers. \begin{figure*} \caption{Attention Gate operation.} \label{attention} \end{figure*} Fig.\ref{attention} illustrates the AG operation of task $k$, where a series of operations such as ReLU, sigmoid, and resampling are performed to evaluate the similarity between $f^{m}$ and $g_{k}^{m}$. The $m^{th}$ downsampling layer $f^{m}$ and its corresponding upsampling layer $g_{k}^{m}$ are the inputs of $AG_{k}^{m}$. Since the underlying task-specific network $g_{k}^{m}$ is generally more accurate, when the similarity between $f^{m}$ and $g_{k}^{m}$ is calculated, the features in $f^{m}$ with higher similarity to $g_{k}^{m}$ are given higher weights, which amounts to selecting the feature parameters of $f^{m}$. The selection is reflected in the calculation of the weight coefficient, and the operation is presented as: \begin{equation} \begin{aligned}\label{5} A G_{k}^{m}\left(f^{m}, g_{k}^{m}\right)=\sum_{i=1}^{L_{x}} \operatorname{Similarity}\left(f^{m}, g_{k}^{m}\right) \times f^{m} \end{aligned} \end{equation} \subsection{A Physics-informed Training Strategy}\label{sec32} To make full use of the prior physical laws relating the predicted physical quantities, and to reduce the required size of the training sample, a physics-informed strategy is used in the training process. The existing physics knowledge (Eq.\ref{1}-Eq.\ref{3}) is discretized by the Finite Difference Method (FDM) and integrated into a loss function to construct a physics-informed surrogate model. The FDM is a numerical solution method that expresses the governing equations and boundary conditions in the form of function approximations.
Assume that an arbitrary two-dimensional elastic body is divided into a uniform grid as shown in Fig.\ref{FDM}, in which the intersections of the lines are the nodes. \begin{figure*} \caption{Grid diagram of finite differences.} \label{FDM} \end{figure*} Let $f=f(x,y)$ be a continuous physical quantity, which may represent a stress component, a displacement component, the temperature, etc. Assume that the grid spacing $h$ is sufficiently small, so that neighbouring nodes satisfy $\left|x-x_{0}\right|=h$. The central difference formulas for the first and second derivatives with respect to $x$ at node 0 are: \begin{equation} \begin{aligned}\label{6} \left(\frac{\partial f}{\partial x}\right)_{0} \approx \frac{f_{1}-f_{3}}{2 h},\left(\frac{\partial^{2} f}{\partial x^{2}}\right)_{0} \approx \frac{f_{1}+f_{3}-2 f_{0}}{h^{2}} \end{aligned} \end{equation} Similarly, the central difference formulas for the first and second derivatives at node 0 in the $y$ direction are: \begin{equation} \begin{aligned}\label{7} \left(\frac{\partial f}{\partial y}\right)_{0} \approx \frac{f_{2}-f_{4}}{2 h},\left(\frac{\partial^{2} f}{\partial y^{2}}\right)_{0} \approx \frac{f_{2}+f_{4}-2 f_{0}}{h^{2}} \end{aligned} \end{equation} According to the above equations, the central difference formula for the mixed second derivative is: \begin{equation} \begin{aligned}\label{8} \left(\frac{\partial^{2} f}{\partial x \partial y}\right)_{0}=\left[\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right)\right]_{0}=\frac{\left(\frac{\partial f}{\partial y}\right)_{1}-\left(\frac{\partial f}{\partial y}\right)_{3}}{2 h} \approx \frac{1}{4 h^{2}}\left[\left(f_{6}+f_{8}\right)-\left(f_{5}+f_{7}\right)\right] \end{aligned} \end{equation} For boundary points where the central difference cannot be used, a forward or backward difference is used for the discretization: \begin{equation} \begin{aligned}\label{9} \left(\frac{\partial f}{\partial x}\right)_{0} \approx \frac{-3 f_{0}+4 f_{1}-f_{9}}{2 h} \approx \frac{3 f_{0}-4 f_{3}+f_{11}}{2 h}, \end{aligned} \end{equation} \begin{equation} \begin{aligned}\label{10} \left(\frac{\partial f}{\partial y}\right)_{0} \approx \frac{-3 f_{0}+4 f_{2}-f_{10}}{2 h} \approx \frac{3 f_{0}-4 f_{4}+f_{12}}{2 h} \end{aligned} \end{equation} To deal with the prediction tasks of thermal stress and thermal displacement, a neural network $\hat{Y}(x, \theta)$ is constructed, where $\theta=\left\{W^{\ell}, b^{\ell}\right\}_{1 \leq \ell \leq D}$, $W^{\ell}$ is the weight matrix and $b^{\ell}$ the bias vector of layer $\ell$. The physics-informed loss function of the neural network is defined as: \begin{equation} \begin{aligned}\label{11} \mathcal{L}_{\text {PINN}}(\theta)=\mathcal{L}_{\text {data }}+\mathcal{L}_{\text {pde}} \end{aligned} \end{equation} where $\mathcal{L}_{\text {data}}$ is the data loss obtained from the labeled data with the Mean Square Error (MSE) loss, and the data loss of task $k$ is defined as: \begin{equation} \begin{aligned}\label{12} \mathcal{L}_{\text {data}_{k}}=\frac{1}{\left|\mathcal{P}_{f}\right|} \sum_{(\mathbf{x}_{k}, \mathbf{y}_{k}) \in \mathcal{P}_{f}}\left(Y_{{k}}-\hat{Y}_{{k}}\right)^{2} \end{aligned} \end{equation} where $\mathcal{P}_{f}$ denotes the data set (of size $\left|\mathcal{P}_{f}\right|$), $\hat{Y}_{k}$ represents the prediction of the model, and $Y_{k}$ is the ground truth.
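As an aside, the interior-point central differences of Eq.\ref{6} and Eq.\ref{7} can be evaluated on a whole predicted field at once by applying fixed convolution kernels. The following is a minimal sketch, assuming PyTorch; it is not the released implementation, and the mixed and boundary stencils of Eq.\ref{8}-Eq.\ref{10} would be handled analogously.
\begin{verbatim}
import torch
import torch.nn.functional as F

def central_differences(field, h):
    """field: tensor of shape (batch, 1, H, W); h: grid spacing.
    Returns df/dx, df/dy and d2f/dx2 at interior nodes, with x taken
    along the width axis and y along the height axis."""
    kx  = torch.tensor([[[[-1., 0., 1.]]]]) / (2.0 * h)      # (f_right - f_left) / 2h
    ky  = torch.tensor([[[[-1.], [0.], [1.]]]]) / (2.0 * h)  # central difference along y
    kxx = torch.tensor([[[[1., -2., 1.]]]]) / (h * h)        # (f_right + f_left - 2 f_0) / h^2
    dfdx   = F.conv2d(field, kx)
    dfdy   = F.conv2d(field, ky)
    d2fdx2 = F.conv2d(field, kxx)
    return dfdx, dfdy, d2fdx2
\end{verbatim}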
The aforementioned finite difference equations (Eq.\ref{6}-Eq.\ref{10}) are used to discretize the thermoelasticity and thermal equilibrium equations, with the central difference method for interior points and a forward/backward difference method for boundary points. The discretized PDEs are encoded into a loss function $\mathcal{L}_{\text {pde}}$ (the physical loss over the $K$ tasks), which is defined as: \begin{equation} \begin{aligned}\label{13} \mathcal{L}_{pde}=\frac{1}{\left|\mathcal{P}_{f}\right|} \sum_{(x, y) \in \mathcal{P}_{f}}\left(\mathcal{L}_{\sigma_{xx_{-}} pde}+\mathcal{L}_{\sigma_{yy_{-}}pde}+\mathcal{L}_{\sigma_{xy_{-}}pde}+\mathcal{L}_{u_{x_{-}}p d e}+\mathcal{L}_{u_{y_{-}}p d e}\right) \end{aligned} \end{equation} where $\mathcal{L}_{u_{x_{-}}pde}$, $\mathcal{L}_{u_{y_{-}}pde}$, $\mathcal{L}_{\sigma_{xx_{-}} pde}$, $\mathcal{L}_{\sigma_{xy_{-}} pde}$, and $\mathcal{L}_{\sigma_{yy_{-}} pde}$ denote the PDE losses associated with Eq.\ref{1} and Eq.\ref{2}, respectively. The boundary loss $\mathcal{L}_{bc}$ is added to the physical loss matrix $\mathcal{L}_{pde}$ in the form of a matrix mask. The finite-difference stencils can also be realized as convolution operations, which makes the physics-informed strategy straightforward to implement. By training the neural network and minimizing the loss function until the best parameters $\theta^{*}$ are found, we construct a physics-informed surrogate model of the thermal deformation and thermal stress of satellite motherboards. \subsection{Uncertainty-based Multi-Task Loss Balancing Strategy}\label{sec33} The training goal of the MTA-UNet model is to minimize the difference between the predictions and the labels. For MTL, however, discrepancies among the features of different tasks cause competition between the tasks. To overcome this seesaw phenomenon, it is an integral part of MTL to balance the training speed and accuracy of the tasks through a loss balancing strategy. Since the loss functions of different tasks tend to have different magnitudes, this paper first normalizes their inputs. Subsequently, to address the competition between different tasks, the most direct approach is to use weight coefficients $w_{k}$ to adjust the proportions of the loss functions of the different tasks, as shown in the following equation. \begin{equation} \begin{aligned}\label{14} \operatorname{Loss}=\sum_{k}^{K} w_{k} \times \operatorname{Loss}_{k} \end{aligned} \end{equation} However, this fixed-weight loss balancing method may not be applicable in some cases, as different tasks are very sensitive to the setting of $w_{k}$, and different settings of $w_{k}$ vary greatly in terms of performance. We would therefore like to use a dynamic weighting method to balance the loss functions of the different tasks. The mainstream methods include GradNorm \cite{chen2018gradnorm}, DWA \cite{liu2019end}, DTP \cite{guo2018dynamic}, and Uncertainty \cite{kendall2018multi,xiang2021self}, among which GradNorm consumes more computational time as it requires gradient computation, while DWA and DTP need additional weighting operations. This paper adopts the uncertainty-based method to dynamically adjust the weight coefficients of the loss functions of the different tasks. The model in this study is constructed based on the task-dependent, homoscedastic part of the aleatoric uncertainty \cite{kendall2018multi}. For a given task $k$, the model outputs both a prediction $y_{k}$ and the homoscedastic uncertainty $\sigma_{k}$ of the model.
Thus the loss function for MTL is defined as: \begin{equation} \begin{aligned}\label{15} \mathcal{L}\left(W, \sigma_{1}, \sigma_{2}, \ldots, \sigma_{k}\right)=\sum_{k} \frac{1}{2 \sigma_{k}^{2}} \mathcal{L}_{k}(W)+\log \sigma_{k}^{2} \end{aligned} \end{equation} where $W$ denotes the weight matrix and $\mathcal{L}_{k}(W)$ is the loss function of task $k$. $\sigma_{k}$ denotes the noise of the model for task $k$, $\frac{1}{2 \sigma_{k}^{2}}$ is the weight coefficient of the corresponding task, and $\log\sigma_{k}^{2}$ is the regularization term that prevents $\sigma_{k}$ from growing too large. In this way, higher weights can be given to simple tasks, and the learning of the other tasks is driven by them. For the total of $K$ regression tasks driven by physics, the overall loss function is defined as: \begin{equation} \begin{aligned}\label{16} \mathcal{L}=\sum_{k=1}^{K} \left(\frac{1}{2 \sigma_{k}^{2}} \mathcal{L}_{\text {data}_{k}}(W)+\log \sigma_{k}^{2}\right) + \frac{1}{2 \sigma_{pde}^{2}} \mathcal{L}_{pde}(W)+\log \sigma_{pde}^{2} \end{aligned} \end{equation} \section{Experiments}\label{sec4} In this section, the training details and the experimental results are presented. Section \ref{sec41} introduces the data set and training metrics. Section \ref{sec42} discusses the effects of the dynamic loss balancing strategy, the MTA-UNet, and the physics-informed method in detail. \subsection{Training Steps}\label{sec41} \pmb{Data set:} In MTA-UNet, the input is a two-dimensional planar temperature field on a $200\times200$ uniform grid and the outputs are the five corresponding thermal stress and thermal deformation matrices, namely the x-directional displacement $u_{x}$, the y-directional displacement $u_{y}$, the x-directional thermal stress $\sigma_{xx}$, the y-directional thermal stress $\sigma_{yy}$, and the tangential thermal stress $\sigma_{xy}$. The OpenFOAM software is applied to build the data sets, and the computational domain is a uniform grid. Fig.\ref{dataset} shows one of the data set samples. \begin{figure*} \caption{A sample of the data set. The temperature field is the input of the surrogate model, and $u_{x}$, $u_{y}$, $\sigma_{xx}$, $\sigma_{yy}$, and $\sigma_{xy}$ are the corresponding outputs.} \label{dataset} \end{figure*} The input temperature field is a Gaussian Random Field (GRF) \cite{haran2011gaussian}, and the roughness of the temperature field matrix changes when the mean and covariance of the GRF are adjusted. This paper generates training sets with sample sizes of 100, 200, 500, 1000, 2000, 5000, and 10,000. Firstly, the predictions of the STL model U-Net and the MTL model MTA-UNet are compared on the training set with a sufficient size of 10,000 samples to confirm the superiority of our model. Subsequently, the MTA-UNet model is trained on training sets with sample sizes of 100, 200, 500, 1000, 2000, and 5000, respectively, to verify the effectiveness of the physics-informed approach. The trained models are tested on a common test set with a sample size of 500. \pmb{Metrics:} Adadelta \cite{zeiler2012adadelta} is selected as the optimizer and the MTA-UNet model is implemented in PyTorch 1.8. The training batch size is set to 16.
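During training, the task losses are combined with the uncertainty-based weighting described in section \ref{sec33}. A minimal sketch of this combination is given below, assuming PyTorch and the common reparameterization through learnable $\log \sigma_{k}^{2}$ values; the class and variable names are illustrative and not taken from the released code.
\begin{verbatim}
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Combines task losses with learnable homoscedastic uncertainties.
    log_var[k] stores log(sigma_k^2); each loss is weighted by 1/(2 sigma_k^2)
    and log(sigma_k^2) is added as the regularization term."""
    def __init__(self, num_losses):
        super().__init__()
        self.log_var = nn.Parameter(torch.zeros(num_losses))

    def forward(self, losses):
        # losses: list of scalar loss tensors (the K data losses and the PDE loss)
        total = 0.0
        for k, loss_k in enumerate(losses):
            precision = torch.exp(-self.log_var[k])  # 1 / sigma_k^2
            total = total + 0.5 * precision * loss_k + self.log_var[k]
        return total
\end{verbatim}
The module is optimized jointly with the network parameters, so the effective task weights adapt automatically over the course of training.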
The prediction performance of the different models for task $k$ is measured by the metrics MAE and MRE, defined as: \begin{equation} \begin{aligned}\label{17} MAE_{k}=\frac{1}{\left|\mathcal{P}_{f}\right|} \sum_{(\mathbf{x}_{k}, \mathbf{y}_{k}) \in \mathcal{P}_{f}}\left|Y_{k}-\hat{Y}_{k}\right| \end{aligned} \end{equation} \begin{equation} \begin{aligned}\label{18} M R E_{k}=\frac{1}{\left|\mathcal{P}_{f}\right|} \sum_{(\mathbf{x}_{k}, \mathbf{y}_{k}) \in \mathcal{P}_{f}} \frac{\left|Y_{k}-\hat{Y}_{k}\right|}{\hat{Y}_{k}} \end{aligned} \end{equation} \subsection{Experiment Results}\label{sec42} \subsubsection{Effectiveness of Dynamic Loss Balancing Strategies}\label{sec421} To verify the necessity and effectiveness of the dynamic loss balancing strategy in our MTL regression model, training is carried out with the fixed-weight loss balancing strategy and with the uncertainty-based dynamic loss balancing strategy, respectively. Fig.\ref{balance comparison} shows the training curves, indicating that if the loss function of MTL is balanced with fixed weights, the tasks deviate in different directions due to their different objectives. In this way, the better learning of one task decreases the accuracy of another. Unlike the fixed-weight case, with the uncertainty-based dynamic balancing strategy the loss functions of the tasks decrease simultaneously at a stable speed. The dynamic strategy avoids the competition phenomenon and enables the tasks to achieve good performance simultaneously. \begin{figure} \caption{Training curves of different loss balancing strategies. The training curves of the fixed-weight strategy oscillate within a certain range, making joint training of the tasks difficult, while the training curves of the dynamic uncertainty-based loss balancing strategy decrease at a stable speed, enabling good predictions for all tasks simultaneously.} \label{balance comparison} \end{figure} \subsubsection{Performances of Models}\label{sec422} We train the MTA-UNet model and the U-Net model on the training set of 10,000 samples and test them on the common test set. The MAE and MRE errors obtained are shown in Table.\ref{table1} and Table.\ref{table2}. $\sigma_{MAE}$ and $\sigma_{MRE}$ denote the standard deviations of MAE and MRE, respectively, and the data in bold represent the results of MTA-UNet.
\begin{table} \caption{The MAE of different networks}\label{table1} \centering \setlength{\tabcolsep}{3mm}{ \begin{tabular}{ccccc} \toprule \multirow{2}{*}{Task} & \multicolumn{2}{c}{$MAE$} & \multicolumn{2}{c}{$\sigma_{MAE}$} \\ \cline{2-5} & U-Net & MTA-UNet & U-Net & MTA-UNet \\ \hline $u_{x}$ & 1.9754$\times10^{-6}$ & \bf{1.1556}\bm{$\times10^{-6}$} & $\pm$8.2727$\times10^{-7}$ & \bm{$\pm$}\bf{1.9903}\bm{$\times10^{-7}$} \\ $u_{y}$ & 2.0947$\times10^{-6}$ & \bf{1.0721}\bm{$\times10^{-6}$} & $\pm$8.3263$\times10^{-7}$ & \bm{$\pm$}\bf{2.2980}\bm{$\times10^{-7}$} \\ $\sigma_{xx}$ & 0.0945 & \bf{0.0713} & $\pm$0.01 & \bm{$\pm$}\bf{9.5297}\bm{$\times10^{-5}$} \\ $\sigma_{yy}$ & 0.0924 & \bf{0.0728} & $\pm$0.01 & \bm{$\pm$}\bf{0.0098} \\ $\sigma_{xy}$ & 0.0474 & \bf{0.0402} & $\pm$0.01 & \bm{$\pm$}\bf{0.0087} \\ \bottomrule \end{tabular}} \end{table} \begin{table} \caption{The MRE of different networks}\label{table2} \centering \setlength{\tabcolsep}{5.5mm}{ \begin{tabular}{ccccc} \toprule \multirow{2}{*}{Task} & \multicolumn{2}{c}{$MRE/\%$}& \multicolumn{2}{c}{$\sigma_{MRE}$} \\ \cline{2-5} & U-Net & MTA-UNet & U-Net & MTA-UNet \\ \hline $u_{x}$ & 2.23 & \bf{1.30} & $\pm$0.0056 & \bm{$\pm$}\bf{0.0044}\\ $u_{y}$ & 2.39 & \bf{1.21} & $\pm$0.0058 & \bm{$\pm$}\bf{0.0039}\\ $\sigma_{xx}$ & 1.27 & \bf{0.96} & $\pm$0.0036 & \bm{$\pm$}\bf{0.0027}\\ $\sigma_{yy}$ & 1.29 & \bf{0.98} & $\pm$0.0039 & \bm{$\pm$}\bf{0.0027}\\ $\sigma_{xy}$ & 1.36 & \bf{1.03} & $\pm$0.0038 & \bm{$\pm$}\bf{0.0030}\\ \bottomrule \end{tabular}} \end{table} We first investigate the prediction performance. According to Table.\ref{table2}, the minimum per-pixel MRE of the proposed MTA-UNet model is only 0.96\%, and the model shows good performance in predicting the thermal stress and thermal deformation of satellite motherboards. Fig.\ref{result1} shows prediction results of MTA-UNet. The mean per-pixel error relative to the ground truth is small, demonstrating the feasibility of using the MTA-UNet model for regression between ultra-high dimensional variables. In addition, comparing the STL U-Net with the proposed MTL MTA-UNet model, it can be seen that the MTA-UNet exhibits better prediction performance on every physical prediction task by fully sharing feature information between different tasks. In particular, for $u_{x}$ the proposed MTL MTA-UNet model reduces the MAE by more than 41\% compared with the STL U-Net model. Since different tasks carry different noise, the sharing of feature parameters between tasks in MTA-UNet cancels part of the noise in different directions to a certain extent, improving the prediction accuracy. According to the standard deviations of MAE and MRE for the different models, the MTA-UNet model is also more robust than U-Net. The sharing mechanism in MTA-UNet plays the role of data augmentation. Moreover, some parameters required by a single task are better trained through the other tasks, and the sharing of features between different tasks is beneficial in this respect. We then consider the computational cost. The deep learning surrogate model reduces the prediction time of thermal stress and thermal deformation from 2 minutes to 0.23 seconds, effectively saving time and verifying the effectiveness of the deep learning-based thermal stress and thermal deformation surrogate model. \begin{figure} \caption{Prediction of the MTA-UNet model. This figure shows two samples of results by MTA-UNet.
The upper row of each sample shows the labels of $u_{x}$, $u_{y}$, $\sigma_{xx}$, $\sigma_{yy}$, and $\sigma_{xy}$, and the lower row shows the corresponding predictions.} \label{result1} \end{figure} \subsubsection{Effects of the Physics-informed Strategy}\label{sec423} To verify the contribution of the physics-informed strategy to the model accuracy and to the reduction of the required sample size, the MTA-UNet surrogate model is trained on data sets with different sample sizes and tested on a common test set. The comparison of MRE between the purely data-driven strategy and the physics-informed strategy is shown in Table.\ref{table3}. $\sigma_{MRE}$ denotes the standard deviation of MRE, and the data in bold represent the results of the physics-informed strategy. \begin{table}[htbp] \caption{MRE of training strategies on dataset with different scales}\label{table3} \centering \setlength{\tabcolsep}{1.6mm}{ \begin{tabular}{cccccccccccc} \toprule \multirow{2}{*}{Scale} & \multirow{2}{*}{Method} & \multicolumn{5}{c}{$MRE/\%$} & \multicolumn{5}{c}{$\sigma_{MRE}$} \\ \cline{3-12} & & $u_{x}$ & $u_{y}$ & $\sigma_{xx}$ & $\sigma_{yy}$ & $\sigma_{xy}$ & $u_{x}$ & $u_{y}$ & $\sigma_{xx}$ & $\sigma_{yy}$ & $\sigma_{xy}$ \\ \hline \multirow{2}{*}{200} & Data & 6.46 & 6.42 & 4.43 & 4.55 & 5.52 &$\pm$0.0421 & $\pm$0.0396 &$\pm$0.0145 &$\pm$0.0143 &$\pm$0.0200 \\ & PDE & \textbf{6.07} & \textbf{6.02} & \textbf{4.02} & \textbf{4.17} & \textbf{5.11} & \textbf{$\pm$0.0305} & \textbf{$\pm$0.0297} & \textbf{$\pm$0.0103} & \textbf{$\pm$0.0102} & \textbf{$\pm$0.0142} \\ \hline \multirow{2}{*}{500} & Data & 3.72 & 3.76 & 2.77 & 2.78 & 2.93 &$\pm$0.0185 &$\pm$0.0193 &$\pm$0.0083 &$\pm$0.0088 &$\pm$0.0104 \\ & PDE & \textbf{3.48} & \textbf{3.50} & \textbf{2.37} & \textbf{2.45} & \textbf{2.66} & \textbf{$\pm$0.0095} & \textbf{$\pm$0.0096} & \textbf{$\pm$0.0065} & \textbf{$\pm$0.0056} & \textbf{$\pm$0.0093} \\ \hline \multirow{2}{*}{1000}& Data & 3.11 & 3.19 & 1.99 & 2.01 & 2.29 &$\pm$0.0114 &$\pm$0.0115 &$\pm$0.0056 &$\pm$0.0057 &$\pm$0.0086\\ & PDE & \textbf{2.96} & \textbf{2.98} & \textbf{1.82} & \textbf{1.78} & \textbf{2.09} & \textbf{$\pm$0.0073} & \textbf{$\pm$0.0073} & \textbf{$\pm$0.0046} & \textbf{$\pm$0.0044} & \textbf{$\pm$0.0071} \\ \hline \multirow{2}{*}{2000}& Data & 2.62 & 2.65 & 1.55 & 1.61 & 1.72 &$\pm$0.0089 &$\pm$0.0092 &$\pm$0.0049 &$\pm$0.0052 &$\pm$0.0065 \\ & PDE & \textbf{2.49} & \textbf{2.51} & \textbf{1.41} & \textbf{1.40} & \textbf{1.53} & \textbf{$\pm$0.0063} & \textbf{$\pm$0.0062} & \textbf{$\pm$0.0044} & \textbf{$\pm$0.0042} & \textbf{$\pm$0.0044} \\ \hline \multirow{2}{*}{5000}& Data & 2.28 & 2.41 & 1.19 & 1.25 & 1.41 &$\pm$0.0057 &$\pm$0.0069 &$\pm$0.0039 &$\pm$0.0040 &$\pm$0.0046 \\ & PDE & \textbf{2.17} & \textbf{2.30} & \textbf{1.11} & \textbf{1.12} & \textbf{1.22} & \textbf{$\pm$0.0048} & \textbf{$\pm$0.0052} & \textbf{$\pm$0.0029} & \textbf{$\pm$0.0030} & \textbf{$\pm$0.0031} \\ \bottomrule \end{tabular}} \end{table} According to the experimental results, the physics-informed surrogate model exhibits better performance than the purely data-driven one on data sets with different sample sizes. Relatively speaking, the improvement brought by the physics-informed strategy is more significant in the case of small samples, and the gap between the data-driven strategy and the physics-informed strategy gradually narrows as the number of samples increases, which demonstrates the potential of the physics-informed strategy in cases with small sample sizes. In addition, we conduct training on an even smaller sample size. Fig.\ref{result2} shows the predictions obtained by the data-driven strategy and the physics-informed strategy, respectively, with 100 training samples.
The comparison shows that the physics-informed strategy obtains predictions that are more in line with the physical meaning and realistic working conditions. \begin{figure*} \caption{Predictions of the data-driven strategy and the physics-informed strategy.} \label{result2} \end{figure*} \section{Conclusion}\label{sec5} To solve the regression problems of multiple physical quantities in thermal-mechanical coupled fields, this paper proposes a novel MTL network, MTA-UNet. The MTA-UNet integrates the strengths of U-Net and MTL, with the advantage of selectively sharing parameters across different tasks and layers. With the shared encoding layers, the task-specific decoding layers, and the AG feature filtering operations of MTA-UNet, the correlation between different tasks is fully learned. The performance of the deep learning-based surrogate model is greatly enhanced, and the training time and system memory are also effectively decreased. Subsequently, an uncertainty-based dynamic loss weighting strategy is used for the loss functions of the different tasks. With this strategy, the training speed and accuracy of each sub-task are well balanced. In addition, a physics-informed method is applied during training to rapidly predict the thermal stress and thermal deformation of satellite motherboards. In the process, a set of PDEs is encoded into a loss function, making full use of the prior physics knowledge. Compared with the purely data-driven strategy, the physics-informed method exhibits better performance on training sets with different sample sizes and obtains solutions more consistent with the laws of physics. Although we constructed the MTA-UNet network for the regression of multiple physical quantities, further research is still required. As our model structure is fairly general, it is worth exploring strategies to further enhance the balance between multiple tasks. Extension to more complex boundaries and more complicated nonlinear problems is also valuable. Besides, embedding physics knowledge directly into the network structure remains a challenging task. \section*{Conflict of interest statement} On behalf of all authors, the corresponding author states that there is no conflict of interest. \section*{Acknowledgments} This work was supported by the National Natural Science Foundation of China (Nos.11725211 and 52005505). \end{document}
\begin{document} \title{Convergence rates of Gibbs measures with degenerate minimum} \begin{abstract} We study convergence rates of Gibbs measures, with density proportional to $e^{-f(x)/t}$, as $t \rightarrow 0$ where $f : \mathbb{R}^d \rightarrow \mathbb{R}$ admits a unique global minimum at $x^\star$. We focus on the case where the Hessian is not definite at $x^\star$. We assume instead that the minimum is strictly polynomial and give a higher order nested expansion of $f$ at $x^\star$, which depends on every coordinate. We give an algorithm yielding such an expansion if the polynomial order of $x^\star$ is no more than $8$, in connection with Hilbert's $17^{\text{th}}$ problem. However, we prove that the case where the order is $10$ or higher is fundamentally different and that further assumptions are needed. We then give the rate of convergence of Gibbs measures using this expansion. Finally we adapt our results to the multiple well case. \end{abstract} \section{Introduction} Gibbs measures and their convergence properties are often used in stochastic optimization to minimize a function defined on $\mathbb{R}^d$. That is, let $f : \mathbb{R}^d \rightarrow \mathbb{R}$ be a measurable function and let $x^\star \in \mathbb{R}^d$ be such that $f$ admits a global minimum at $x^\star$. It is well known \cite{hwang1980} that under standard assumptions, the associated Gibbs measure with density proportional to $e^{-f(x)/t}$ for $t >0$, converges weakly to the Dirac mass at $x^\star$, $\delta_{x^\star}$, when $t \rightarrow 0$. The Langevin equation $dX_s = -\nabla f(X_s) ds + \sigma dW_s$ amounts to a gradient descent with Gaussian noise. For $\sigma = \sqrt{2t}$, its invariant measure has a density proportional to $e^{-f(x)/t}$ (see for example \cite{khasminskii2012}, Lemma 4.16), so for small $t$ we can expect it to converge to $\text{argmin}(f)$ \cite{dalalyan2016} \cite{barrera2020}. The simulated annealing algorithm \cite{laarhoven1987} builds a Markov chain from the Gibbs measure where the parameter $t$ converges to zero over the iterations. This idea is also used in \cite{gelfand-mitter}, giving a stochastic gradient descent algorithm where the noise is gradually decreased to zero. Adding a small noise to the gradient descent allows one to explore the space and to escape from traps such as local minima and saddle points which appear in non-convex optimization problems \cite{lazarev1992} \cite{dauphin2014}. Such methods have recently come back to prominence with SGLD (Stochastic Gradient Langevin Dynamics) algorithms \cite{welling2011} \cite{li2015}, especially for Machine Learning and the calibration of artificial neural networks, which is a high-dimensional non-convex optimization problem. The rates of convergence of Gibbs measures have been studied in \cite{hwang1980}, \cite{hwang1981} and \cite{athreya2010} under differentiability assumptions on $f$. The rate turns out to be of order $t^{1/2}$ as soon as the Hessian matrix $\nabla^2 f(x^\star)$ is positive definite. Furthermore, in the multiple well case, i.e. if the minimum of $f$ is attained at finitely many points $x_1^\star$, $\ldots$, $x_m^\star$, \cite{hwang1980} proves that the limit distribution is a sum of Dirac masses $\delta_{x_i^\star}$ with coefficients proportional to $\det(\nabla^2 f(x_i^\star))^{-1/2}$ as soon as all the Hessian matrices are positive definite. If this is not the case, we can conjecture that the limit distribution is concentrated around the $x_i^\star$ where the degeneracy is of the highest order.
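To fix ideas with an elementary example (not taken from the references above), consider $d=1$ and $f(x) = x^2/2$, so that $f$ admits a unique global minimum at $x^\star = 0$ with $f''(0) = 1 > 0$. The Gibbs density is then proportional to $e^{-x^2/(2t)}$, i.e. $\pi_t$ is the Gaussian distribution $\mathcal{N}(0,t)$; hence $t^{-1/2} X_t$ has the standard Gaussian law for every $t > 0$, and $\pi_t$ concentrates around $x^\star$ exactly at the rate $t^{1/2}$ mentioned above.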
The aim of this paper is to provide a rate of convergence in this degenerate setting, i.e. when $x^\star$ is still a strict global minimum but $\nabla^2 f(x^\star)$ is no longer definite, which extends the range of applications of Gibbs measure-based algorithms where positive definiteness is generally assumed. A general framework is given in \cite{athreya2010}, which provides rates of convergence based on dominated convergence. However a strong and rather technical assumption on $f$ is needed and checking it seems, to some extent, more demanding than proving the result. To be more precise, the assumption reads as follows: there exists a function $g : \mathbb{R}^d \rightarrow \mathbb{R}$ with $e^{-g} \in L^1(\mathbb{R}^d)$ and $\alpha_1, \ \ldots, \ \alpha_d \in (0,+\infty)$ such that \begin{equation} \label{eq:intro} \forall h \in \mathbb{R}^d, \ \ \frac{1}{t} \left[ f(x^\star + \left( t^{\alpha_1} h_1,\ldots, t^{\alpha_d} h_d \right) ) - f(x^\star) \right] \underset{t \rightarrow 0}{\longrightarrow} g(h_1,\ldots,h_d). \end{equation} Our objective is to give conditions on $f$ such that \eqref{eq:intro} is fulfilled and then to elucidate the expression of $g$ depending on $f$ and its derivatives by studying the behaviour of $f$ at $x^\star$ in every direction. Doing so, we can apply the results from \cite{athreya2010}, yielding the convergence rate of the corresponding Gibbs measures. The orders $\alpha_1$, $\ldots$, $\alpha_d$ must be chosen carefully and not too large, as the function $g$ needs to depend on each of its variables $h_1$, $\ldots$, $h_d$, which is a necessary condition for $e^{-g}$ to be integrable. We also extend our results to the multiple well case. We generally assume $f$ to be coercive, i.e. $f(x) \rightarrow + \infty$ as $||x|| \rightarrow + \infty$, $\mathcal{C}^{2p}$ in a neighbourhood of $x^\star$ for some $p \in \mathbb{N}$ and we assume that the minimum is polynomial strict, i.e. the function $f$ is bounded below in a neighbourhood of $x^\star$ by some non-negative polynomial function, vanishing only at $x^\star$. Thus we can apply a multi-dimensional Taylor expansion to $f$ at $x^\star$, where the successive derivatives of $f :\mathbb{R}^d \rightarrow \mathbb{R}$ are seen as symmetric tensors of $\mathbb{R}^d$. The idea is then to consider the successive subspaces where the derivatives of $f$ vanish up to some order; using that the Taylor expansion of $f(x^\star+h)-f(x^\star)$ is non-negative, some cross derivative terms vanish. However a difficulty arises at orders $6$ and higher, as the set where the derivatives of $f$ vanish up to some order is no longer a vector subspace in general. This difficulty is linked with Hilbert's $17^{\text{th}}$ problem \cite{hilbert1888}, which states that, in general, a non-negative multivariate polynomial cannot be written as a sum of squares of polynomials. We thus need to change the definition of the subspaces we consider. Following this, we give a recursive algorithm yielding an adapted decomposition of $\mathbb{R}^d$ into vector subspaces and a function $g$ satisfying \eqref{eq:intro} up to a change of basis, giving a canonical higher order nested decomposition of $f$ at $x^\star$ in degenerate cases. An interesting fact is that the case where the polynomial order of $x^\star$ is $10$ or higher fundamentally differs from those of orders $2$, $4$, $6$ and $8$, owing to the presence of even cross terms which may not vanish.
The algorithm we provide works at the orders $10$ or higher only under the assumption that all such even cross terms vanish. It is also more difficult to obtain a general expression of $g$ for the orders $10$ and higher. We then apply our results to \cite{athreya2010}, where we give conditions such that the hypotheses of \cite{athreya2010}, especially \eqref{eq:intro}, are satisfied so as to infer rates of convergence of Gibbs measures in the degenerate case where $\nabla^2 f(x^\star)$ is not necessarily positive definite. The function $g$ given by our algorithm is a non-negative polynomial function and non-constant in any of its variables, however it needs to be assumed to be coercive in order to apply the results of \cite{athreya2010}. We study the case where $g$ is not coercive and give a method to deal with simple generic non-coercive cases, where our algorithm seems to be a first step towards a more general procedure. However, we do not give a general method in this case. Our results are applied to Gibbs measures but they can also be applied to more general contexts, as we give a canonical higher order nested expansion of $f$ at a minimum, in the case where some derivatives are degenerate. For general properties of symmetric tensors we refer to \cite{comon2008}. In the framework of stochastic approximation, \cite{fort1999} Section 3.1 introduced the notion of strict polynomial local extremum and investigated its properties as a higher order ``noisy trap''. The paper is organized as follows. In Section \ref{section:gibbs_measures}, we recall convergence properties of Gibbs measures and revisit the main theorem from \cite{athreya2010}. This theorem requires, as a hypothesis, an expansion of $f$ at its global minimum; we properly state this problem in Section \ref{subsection:statement_of_problem} under the assumption of strict polynomial minimum. In Section \ref{section:main_result}, we state our main result for both single well and multiple well cases, as well as our algorithm. In Section \ref{section:expansion}, we detail the expansion of $f$ at its minimum for each order and provide the proof. We give the general expression of the canonical higher order nested expansion at any order in Section \ref{sec:expansion_any_order}, where we distinguish the orders $10$ and higher from the lower ones. We then provide the proof for each order $2$, $4$, $6$ and $8$ in Sections \ref{section:order_2}, \ref{section:order_4}, \ref{section:order_6} and \ref{section:order_8} respectively. We need to prove that, with the exponents $\alpha_1$, $\ldots$, $\alpha_d$ we specify, the convergence in \eqref{eq:intro} holds; we do so by proving that, using the non-negativity of the Taylor expansion, some cross derivative terms are zero. Because of Hilbert's $17^{\text{th}}$ problem, we need to distinguish the orders $6$ and $8$ from the orders $2$ and $4$, as emphasized in Section \ref{section:hilbert}. For orders $10$ and higher, such terms are not necessarily zero and must then be assumed to be zero. In Section \ref{section:order_10}, we give a counter-example where this assumption is not satisfied, before proving the result. In Section \ref{subsec:unif_non_constant}, we prove that for every order the resulting function $g$ is constant in none of its variables and that the convergence in \eqref{eq:intro} is uniform on every compact set. In Section \ref{section:non_coercive}, we study the case where the function $g$ is not coercive and give a method to deal with the simple generic case.
In Section \ref{section:proofs_athreya}, we prove our main theorems stated in Section \ref{section:main_result} using the expansion of $f$ established in Section \ref{section:expansion}. Finally, in Section \ref{section:flat}, we deal with a "flat" example where all the derivatives in the local minimum are zero and where we cannot apply our main theorems. \section{Definitions and notations} We give a brief list of notations that are used throughout the paper. We endow $\mathbb{R}^d$ with its canonical basis $(e_1,\ldots,e_d)$ and the Euclidean norm denoted by $|| \boldsymbol{\cdot} ||$. For $x \in \mathbb{R}^d$ and $r >0$ we denote by $\mathcal{B}(x,r)$ the Euclidean ball of $\mathbb{R}^d$ of center $x$ and radius $r$. For $E$ a vector subspace of $\mathbb{R}^d$, we denote by $p_{_E} : \mathbb{R}^d \rightarrow E$ the orthogonal projection on $E$. For a decomposition of $\mathbb{R}^d$ into orthogonal subspaces, $\mathbb{R}^d = E_1 \oplus \cdots \oplus E_p$, we say that an orthogonal transformation $B \in \mathcal{O}_d(\mathbb{R})$ is adapted to this decomposition if for all $j \in \lbrace 1, \ldots, p \rbrace $, $$ \forall i \in \lbrace \dim(E_1)+\cdots+\dim(E_{j-1})+1, \ldots, \dim(E_1)+\cdots+\dim(E_j) \rbrace, \ B \cdot e_i \in E_j .$$ For $a, \ b \in \mathbb{R}^d$, we denote by $a \ast b$ the element-wise product, i.e. $$ \forall i \in \lbrace 1, \ldots, d \rbrace, \ (a \ast b)_i = a_i b_i .$$ For $v^1$, $\ldots$, $v^k$ vectors in $\mathbb{R}^d$ and $T$ a tensor of order $k$ of $\mathbb{R}^d$, we denote the tensor product $$ T \cdot (v^1 \otimes \cdots \otimes v^k) = \sum_{i_1,\ldots,i_k \in \{1,\ldots,d \}} T_{i_1 \cdots i_k} v^1_{i_1} \ldots v^k_{i_k} . $$ More generally, if $j \le k$ and $v^1, \ \ldots, \ v^j$ are $j$ vectors in $\mathbb{R}^d$, then $T \cdot (v^1 \otimes \cdots \otimes v^j)$ is a tensor of order $k-j$ such that: $$ T \cdot (v^1 \otimes \cdots \otimes v^j)_{i_{j+1}\ldots i_k} = \sum_{i_1,\ldots,i_j \in \{1,\ldots ,d \}} T_{i_1 \ldots i_k} v^1_{i_1} \ldots v^j_{i_j}. $$ For $h \in \mathbb{R}^d$, $h^{\otimes k}$ denotes the tensor of order $k$ such that $$ h^{\otimes k} = (h_{i_1} \ldots h_{i_k} )_{i_1,\ldots,i_k \in \{1,\ldots,d \}}.$$ For a function $f \in \mathcal{C}^p\left(\mathbb{R}^d, \mathbb{R}\right)$, we denote $\nabla^k f(x)$ the differential of order $k \le p$ of $f$ at $x$, as $\nabla^k f(x)$ is the tensor of order $k$ defined by: $$ \nabla^k f(x) = \left(\frac{\partial^k f(x)}{\partial x_{i_1} \cdots \partial x_{i_k}}\right)_{i_1,i_2,\ldots,i_k \in \{1,\ldots,d \}} .$$ By Schwarz's theorem, this tensor is symmetric, i.e. for all permutation $\sigma \in \mathfrak{S}_k$, $$ \frac{\partial^k f(x)}{\partial x_{i_{\sigma(1)}} \cdots \partial x_{i_{\sigma(k)}}} = \frac{\partial^k f(x)}{\partial x_{i_1} \cdots \partial x_{i_k}} .$$ We recall the Taylor-Young formula in any dimension, and the Newton multinomial formula. \begin{theorem}[Taylor-Young formula] \label{theorem:taylor} Let $f : \mathbb{R}^d \rightarrow \mathbb{R}$ be $\mathcal{C}^p$ and let $x \in \mathbb{R}^d$. 
Then: $$ f(x + h) \underset{h \rightarrow 0}{=} \sum_{k=0}^{p} \frac{1}{k!} \nabla^k f(x) \cdot h^{\otimes k} + ||h||^p o(1) .$$ \end{theorem} \noindent We denote by $\binom{k}{i_1, \ldots, i_p}$ the $p$-nomial coefficient, defined as: $$ \binom{k}{i_1,\ldots ,i_p} = \frac{k!}{i_1!\ldots i_p!} .$$ \begin{theorem}[Newton multinomial formula] Let $h_1, \ \ldots, \ h_p \in \mathbb{R}^d$, then \begin{equation} \label{equation:multinomial} (h_1 + h_2 + \cdots + h_p)^{\otimes k} = \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\ldots,k \rbrace \\ i_1 + \cdots + i_p = k }} \binom{k}{i_1, \ldots, i_p} h_1^{\otimes i_1} \otimes \cdots \otimes h_p^{\otimes i_p} . \end{equation} \end{theorem} \noindent For $T$ a tensor of order $k$, we say that $T$ is non-negative (resp. positive) if \begin{equation} \label{eq:tensor_positive_def} \forall h \in \mathbb{R}^d, \ T \cdot h^{\otimes k} \ge 0 \text{ (resp. } T \cdot h^{\otimes k} > 0 \text{)}. \end{equation} We denote $L^1(\mathbb{R}^d)$ the set of measurable functions $f:\mathbb{R}^d \rightarrow \mathbb{R}$ that are integrable with respect to the Lebesgue measure on $\mathbb{R}^d$. We denote by $\lambda_d$ the Lebesgue measure on $\mathbb{R}^d$. For $f : \mathbb{R}^d \rightarrow \mathbb{R}$ such that $e^{-f} \in L^1(\mathbb{R}^d)$, we define for $t > 0$, $C_t := \left( \int_{\mathbb{R}^d} e^{-f/t} \right)^{-1}$ and $\pi_t$ the Gibbs measure $$ \pi_t(x)dx := C_t e^{-f(x)/t} dx .$$ For a family of random variables $(Y_t)_{t \in (0,1]}$ and $Y$ a random variable, we write $Y_t \underset{t \rightarrow 0}{\overset{\mathscr{L}}{\longrightarrow}} Y$ meaning that $(Y_t)$ weakly converges to $Y$. We give the following definition of a strict polynomial local minimum of $f$: \begin{definition} Let $f : \mathbb{R}^d \rightarrow \mathbb{R}$ be $\mathcal{C}^{2p}$ for $p \in \mathbb{N}$ and let $x^\star$ be a local minimum of $f$. We say that $f$ has a strict polynomial local minimum at $x^\star$ of order $2p$ if $p$ is the smallest integer such that: \begin{equation} \label{eq:polynomial_strict} \exists r >0, \ \forall h \in \mathcal{B}(x^\star, r) \setminus \{0 \} , \ \sum_{k=2}^{2p} \frac{1}{k!} \nabla^k f (x^\star) \cdot h^{\otimes k} > 0 . \end{equation} \end{definition} \textbf{Remarks :} \begin{enumerate} \item A local minimum $x^\star$ of $f$ is not necessarily strictly polynomial, for example, $f : x \mapsto e^{-||x||^{-2}}$ and $x^\star = 0$. \item If $x^\star$ is polynomial strict, then the order is necessarily even, because if $x^\star$ is not polynomial strict of order $2l$ for some $l \in \mathbb{N}$, then we have $h_n \rightarrow 0$ such that the Taylor expansion in $h_n$ up to order $2l$ is zero ; by the minimum condition, the Taylor expansion in $h_n$ up to order $2l+1$ must be non-negative, so we also have $\nabla^{2l+1}f(x^\star) \cdot h_n^{\otimes 2l+1} = 0$. \end{enumerate} For $f :\mathbb{R}^d \rightarrow \mathbb{R}$ such that $\min_{\mathbb{R}^d}(f)$ exists, we denote by $\text{argmin}(f)$ the arguments of the minima of $f$, i.e. $$ \text{argmin}(f) = \left\lbrace x \in \mathbb{R}^d : \ f(x) = \min_{\mathbb{R}^d}(f) \right\rbrace .$$ Without ambiguity, we write "minimum" or "local minimum" to designate $f(x^\star)$ as well as $x^\star$. Finally, we define, for $x^\star \in \mathbb{R}^d$ and $p \in \mathbb{N}$: \begin{align*} \mathscr{A}_p(x^\star) & := \left\lbrace f \in \mathcal{C}^{2p}(\mathbb{R}^d, \mathbb{R}) : \ f \text{ admits a local minimum at } x^\star \right\rbrace . 
\\ \mathscr{A}_p^\star(x^\star) & := \left\lbrace f \in \mathcal{C}^{2p}(\mathbb{R}^d, \mathbb{R}) : \ f \text{ admits a strict polynomial local minimum at } x^\star \text{ of order } 2p \right\rbrace . \end{align*} \section{Convergence of Gibbs measures} \label{section:gibbs_measures} \subsection{Properties of Gibbs measures} Let us consider a Borel function $f : \mathbb{R}^d \rightarrow \mathbb{R}$ with $e^{-f} \in L^1(\mathbb{R}^d)$. We study the asymptotic behaviour, as $t \rightarrow 0$, of the probability measures with density, for $t \in (0,\infty)$, $$ \pi_t(x) dx = C_t e^{-\frac{f(x)}{t}} dx .$$ When $t$ is small, the measure $\pi_t$ concentrates around the set $\text{argmin}(f)$. The following proposition makes this statement precise. \begin{proposition} \label{proposition:gibbs} Let $f : \mathbb{R}^d \rightarrow \mathbb{R}$ be a Borel function such that $$ f^\star := \textup{essinf}(f) = \inf \{y : \ \lambda_d\{f \le y \} >0 \} > - \infty ,$$ and $e^{-f} \in L^1(\mathbb{R}^d)$. Then $$ \forall \varepsilon >0, \ \pi_t(\{ f \ge f^\star + \varepsilon\} ) \underset{t \rightarrow 0}{\longrightarrow} 0 .$$ \end{proposition} \begin{proof} As $f^\star > -\infty$, we may assume without loss of generality that $f^\star = 0$ by replacing $f$ by $f-f^\star$. Let $\varepsilon>0$. It follows from the assumptions that $f\ge 0$ $\lambda_d$-$a.e.$ and $\lambda_d \lbrace f \le \varepsilon \rbrace >0$ for every $\varepsilon>0$. As $e^{-f} \in L^1(\mathbb{R}^d)$, we have $$ \lambda_d \lbrace f \le \varepsilon/3 \rbrace \le e^{\varepsilon/3} \int_{\mathbb{R}^d} e^{-f} d\lambda_d<+\infty. $$ Moreover by dominated convergence, it is clear that $$ C_t^{-1} \downarrow \lambda_d\lbrace f=0 \rbrace<+\infty. $$ We have $$C_t \le \left(\int_{f \le \varepsilon/3} e^{-\frac{f(x)}{t}}dx \right)^{-1} \le \left( e^{-\frac{\varepsilon}{3t}} \underbrace{\lambda_d \{ f \le \frac{\varepsilon}{3} \} }_{>0} \right)^{-1}. $$ Then \begin{align*} \pi_t\{f \ge \varepsilon\} = C_t \int_{f \ge \varepsilon} e^{-\frac{f(x)}{t}}dx \le \frac{e^{\varepsilon/3t} \int_{f \ge \varepsilon} e^{-f(x) / t}dx }{\lambda_d \{ f \le \frac{\varepsilon}{3} \}} \le \frac{e^{-\varepsilon/3t}C_{3t}^{-1}}{\lambda_d \{ f \le \frac{\varepsilon}{3} \}} \underset{t \rightarrow 0}{\longrightarrow} 0, \end{align*} because if $f(x) \ge \varepsilon$, then $e^{-\frac{f(x)}{t}} \le e^{-\frac{2\varepsilon}{3t}}e^{-\frac{f(x)}{3t} }$, and where we used that $C_{3t}^{-1}\le C_1^{-1}$ if $t\le 1/3$. \end{proof} Now, let us assume that $f : \mathbb{R}^d \rightarrow \mathbb{R}$ is continuous, $e^{-f} \in L^1(\mathbb{R}^d)$ and $f$ admits a unique global minimum at $x^\star$ so that $\text{argmin}(f) = \{ x^\star \}$. The weak convergence of $\pi_t$ to $\delta_{x^\star}$, together with a rate of convergence depending on the behaviour of $f(x^\star + h)-f(x^\star)$ for small enough $h$, is proved in \cite{athreya2010}. Let us recall this result in detail; we may assume without loss of generality that $x^\star=0$ and $f(x^\star) = 0$. \begin{theorem}[Athreya-Hwang, 2010] \label{theorem:athreya:1} Let $f : \mathbb{R}^d \rightarrow [0,\infty)$ be a Borel function such that: \begin{enumerate} \item $e^{-f} \in L^1(\mathbb{R}^d)$. \item For all $\delta > 0$, $\inf \{ f(x), \ ||x|| > \delta \} > 0$. \item There exist $\alpha_1, \ \ldots, \ \alpha_d > 0$ such that for all $(h_1,\ldots,h_d) \in \mathbb{R}^d$, $$ \frac{1}{t} f(t^{\alpha_1} h_1,\ldots, t^{\alpha_d} h_d) \underset{t \rightarrow 0}{\longrightarrow} g(h_1,\ldots,h_d) \in \mathbb{R}.
$$ \item $ \displaystyle\int_{\mathbb{R}^d} \sup_{0<t<1} e^{-\frac{f\left(t^{\alpha_1}h_1,\ldots ,t^{\alpha_d}h_d\right)}{t}} dh_1\ldots dh_d < \infty$. \end{enumerate} For $0<t<1$, let $X_t$ be a random vector with distribution $\pi_t$. Then $e^{-g} \in L^1(\mathbb{R}^d)$ and \begin{equation} \label{equation:athreya_assumption:3} \left( \frac{(X_t)_1}{t^{\alpha_1}}, \ldots, \frac{(X_t)_d}{t^{\alpha_d}} \right) \overset{\mathscr{L}}{\longrightarrow} X \ \text{ as } t \rightarrow 0 \end{equation} where the distribution of $X$ has a density proportional to $e^{-g(x_1,\ldots,x_d)}$. \end{theorem} \noindent \textbf{Remark:} Hypothesis 2 is verified as soon as $f$ is continuous, coercive (i.e. $f(x) \longrightarrow + \infty$ when $||x|| \rightarrow + \infty$) and $\text{argmin}(f) = \lbrace 0 \rbrace$. To study the rate of convergence of the measure $\pi_t$ when $t \rightarrow 0$ using Theorem \ref{theorem:athreya:1}, we need to identify $\alpha_1,\ldots, \ \alpha_d$ and $g$ such that the condition \eqref{equation:athreya_assumption:3} holds, up to a possible change of basis. Since $x^\star$ is a local minimum, the Hessian $\nabla^2 f(x^\star)$ is positive semi-definite. Moreover, if $\nabla^2 f(x^\star)$ is positive definite, then choosing $\alpha_1=\cdots=\alpha_d=\frac{1}{2}$, we have: $$ \frac{1}{t} f(t^{1/2} h) \underset{t \rightarrow 0}{\longrightarrow} \frac{1}{2} h^T \cdot \nabla^2f(x^\star) \cdot h =: g(h) .$$ Then, using an orthogonal change of variables: $$ \int_{\mathbb{R}^d} e^{-g(x)}dx = \int_{\mathbb{R}^d} e^{-\frac{1}{2} \sum_{i=1}^d \beta_i y_i^2} dy_1\ldots dy_d < \infty ,$$ where the eigenvalues $\beta_i$ are positive. However, if $\nabla^2f(x^\star)$ is not positive definite, then some of the $\beta_i$ are zero and the integral does not converge. \subsection{Statement of the problem} \label{subsection:statement_of_problem} We still consider the function $f : \mathbb{R}^d \rightarrow \mathbb{R}$ and assume that $f \in \mathscr{A}_p^\star(x^\star)$ for some $x^\star \in \mathbb{R}^d$ and some integer $p \ge 1$. Then our objective is to find $\alpha_1 \ge \cdots \ge \alpha_d \in (0,+\infty)$ and an orthogonal transformation $B \in \mathcal{O}_d(\mathbb{R})$ such that: \begin{equation} \label{eq:alpha_developpement} \forall h \in \mathbb{R}^d, \ \ \frac{1}{t} \left[ f(x^\star + B \cdot (t^\alpha \ast h)) - f(x^\star) \right] \underset{t \rightarrow 0}{\longrightarrow} g(h_1,\ldots,h_d), \end{equation} where $t^\alpha$ denotes the vector $(t^{\alpha_1},\ldots,t^{\alpha_d})$ and where $g : \mathbb{R}^d \rightarrow \mathbb{R}$ is a measurable function which is not constant in any $h_1, \ \ldots, \ h_d$, i.e. for all $i \in \lbrace 1,\ldots,d \rbrace$, there exist $h_1, \ \ldots, \ h_{i-1}, \ h_{i+1},\ \ldots, \ h_d \in \mathbb{R}$ such that \begin{equation} \label{eq:def_non_constant} h_i \mapsto g(h_1,\ldots,h_d) \text{ is not constant.} \end{equation} Then we say that $\alpha_1, \ \ldots, \ \alpha_d$, $B$ and $g$ are a solution of the problem \eqref{eq:alpha_developpement}.
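As a simple illustration of this problem (an ad hoc example, not one of the cases treated below), take $d = 2$, $x^\star = 0$ and $f(h_1,h_2) = h_1^2 + h_2^4$. Choosing $B = I_2$, $\alpha_1 = 1/2$ and $\alpha_2 = 1/4$ yields, for every $h \in \mathbb{R}^2$ and every $t > 0$, $$ \frac{1}{t} \left[ f\left(t^{1/2} h_1, t^{1/4} h_2\right) - f(0) \right] = h_1^2 + h_2^4 =: g(h_1,h_2), $$ and $g$ is not constant in either of its variables, so that $\alpha_1$, $\alpha_2$, $I_2$ and $g$ are a solution of \eqref{eq:alpha_developpement}.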
The hypothesis that $g$ is not constant in any of its variables is important; otherwise, we could simply take $\alpha_1 = \cdots = \alpha_d = 1$ and obtain, by the first order condition: $$ \frac{1}{t} \left[ f(x^\star + t(h_1,\ldots,h_d)) - f(x^\star) \right] \underset{t \rightarrow 0}{\longrightarrow} 0 .$$ \subsection{Main results: rate of convergence of Gibbs measures} \label{section:main_result} \begin{theorem}[Single well case] \label{theorem:single_well} Let $f : \mathbb{R}^d \rightarrow \mathbb{R}$ be $\mathcal{C}^{2p}$ with $p \in \mathbb{N}$ and such that: \begin{enumerate} \item $f$ is coercive, i.e. $f(x) \longrightarrow + \infty$ when $||x|| \rightarrow + \infty$. \item $\textup{argmin}(f) = \lbrace 0 \rbrace$. \item $f \in \mathscr{A}_p^\star(0)$ and $f(0)=0$. \item $e^{-f} \in L^1(\mathbb{R}^d)$. \end{enumerate} Let $(E_k)_k$, $(\alpha_i)_i$, $B$ and $g$ be defined as in Algorithm \ref{algo:algorithm}, stated right after, so that for all $h \in \mathbb{R}^d$, \begin{equation*} \frac{1}{t} \left[ f\left( x^\star + B \cdot (t^\alpha \ast h) \right) - f(x^\star) \right] \underset{t \rightarrow 0}{\longrightarrow} g(h) , \end{equation*} and where $g$ is not constant in any of its variables. Moreover, assume that $g$ is coercive and the following technical hypothesis if $p \ge 5$: \begin{align} \label{equation:even_terms_null} &\forall h \in \mathbb{R}^d, \ \forall (i_1,\ldots,i_p) \in \lbrace 0,2,\cdots,2p \rbrace^p, \\ & \frac{i_1}{2} + \cdots + \frac{i_p}{2p} < 1 \implies \ \nabla^{i_1+\cdots+i_p}f(x^\star) \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_{p}}}(h)^{\otimes i_{p}} = 0 . \nonumber \end{align} Then the conclusion of Theorem \ref{theorem:athreya:1} holds, with: $$ \left(\frac{1}{t^{\alpha_1}}, \ldots, \frac{1}{t^{\alpha_d}}\right) \ast (B^{-1} \cdot X_t) \overset{\mathscr{L}}{\longrightarrow} X \ \text{ as } t \rightarrow 0,$$ where $X$ has a density proportional to $e^{-g(x)}$. \end{theorem} \begin{algorithm} \label{algo:algorithm} Let $f \in \mathscr{A}_p^\star(x^\star)$ for $p \in \mathbb{N}$. \begin{enumerate} \item Define $(F_k)_{0 \le k \le p-1}$ recursively as: $$ \left\lbrace \begin{array}{l} F_0 = \mathbb{R}^d \\ F_k = \lbrace h \in F_{k-1} : \ \forall h' \in F_{k-1}, \ \nabla^{2k} f(x^\star) \cdot h \otimes h'^{\otimes 2k-1} = 0 \rbrace. \end{array} \right. $$ \item For $1 \le k \le p-1$, define the subspace $E_k$ as the orthogonal complement of $F_k$ in $F_{k-1}$. By abuse of notation, define $E_p := F_{p-1}$. \item Define $B \in \mathcal{O}_d(\mathbb{R})$ as an orthogonal transformation adapted to the decomposition $$ \mathbb{R}^d = E_1 \oplus \cdots \oplus E_p .$$ \item Define for $1 \le i \le d$, \begin{equation} \alpha_i := \frac{1}{2j} \ \text{ for } i \in \lbrace \dim(E_1)+\cdots+\dim(E_{j-1})+1, \ldots, \dim(E_1)+\cdots+\dim(E_j) \rbrace . \end{equation} \item Define $g : \mathbb{R}^d \rightarrow \mathbb{R}$ as \begin{equation} \label{eq:def_g} g(h) = \sum_{k=2}^{2p} \frac{1}{k!} \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\ldots,k \rbrace \\ i_1 + \cdots + i_{p} = k \\ \frac{i_1}{2} + \cdots + \frac{i_p}{2p}=1 }} \binom{k}{i_1,\ldots,i_p} \nabla^k f(x^\star) \cdot p_{_{E_1}}(B \cdot h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(B \cdot h)^{\otimes i_p} . \end{equation} \end{enumerate} \end{algorithm} \textbf{Remarks:} \begin{enumerate} \item The function $g$ is not unique, as we can choose any orthogonal transformation $B$ adapted to the decomposition $\mathbb{R}^d = E_1 \oplus \cdots \oplus E_p$.
\item The case $p \ge 5$ is fundamentally different from the case $p \le 4$, since Algorithm \ref{algo:algorithm} may fail to provide such $(E_k)_k$, $(\alpha_i)_i$, $B$ and $g$ if the technical hypothesis \eqref{equation:even_terms_null} is not fulfilled, as explained in Section \ref{section:order_10}. This yields fewer results for the case $p \ge 5$. \item For $p \in \lbrace 1,2,3,4 \rbrace$, we detail the expression of $g$ in \eqref{equation:order_2}, \eqref{equation:order_4}, \eqref{equation:order_6} and \eqref{equation:order_8:2} respectively. \item The function $g$ has the following general properties: $g$ is a non-negative polynomial of order $2p$; $g(0)=0$ and $\nabla g(0) = 0$. \item The condition on $g$ to be coercive may seem unnatural. We give more details about the case where $g$ is not coercive in Section \ref{section:non_coercive} and give a way to deal with the simple generic case of non-coercivity. However dealing with the general case where $g$ is not coercive goes beyond the scope of our work. \item The hypothesis that $g$ is coercive is a necessary condition for $e^{-g} \in L^1(\mathbb{R}^d)$. We actually prove in Proposition \ref{prop:coercive} that it is a sufficient condition. \end{enumerate} Still following \cite{athreya2010}, we study the multiple well case, i.e. the global minimum is attained at a finite number of points in $\mathbb{R}^d$, say $\lbrace x_1^\star,\ldots,x_m^\star \rbrace$ for some $m \in \mathbb{N}$. In this case, the limiting measure of $\pi_t$ will have its support in $\lbrace x_1^\star,\ldots,x_m^\star \rbrace$, with different weights. \begin{theorem}[Athreya-Hwang, 2010] \label{theorem:athreya:2} Let $f:\mathbb{R}^d \rightarrow [0,\infty)$ be measurable and such that: \begin{enumerate} \item $e^{-f} \in L^1(\mathbb{R}^d)$. \item For all $\delta > 0$, $\inf \lbrace f(x), \ ||x - x_i^\star|| > \delta, \ 1 \le i \le m \rbrace > 0$. \item There exist $(\alpha_{ij})_{\substack{1\le i \le m \\ 1 \le j \le d}}$ such that for all $i$, $j$, $\alpha_{ij} \ge 0$ and for all $i$: $$ \frac{1}{t} f(x_i^\star + (t^{\alpha_{i1}}h_1,\ldots,t^{\alpha_{id}}h_d)) \underset{t \rightarrow 0}{\longrightarrow} g_i(h_1,\ldots,h_d) \in [0,\infty) .$$ \item For all $i \in \lbrace 1, \ldots, m \rbrace$, $$ \int_{\mathbb{R}^d} \sup_{0<t<1} e^{-\frac{f(x_i^\star + (t^{\alpha_{i1}}h_1,\ldots,t^{\alpha_{id}}h_d))}{t}}dh_1\ldots dh_d < \infty .$$ \end{enumerate} Then, let $\alpha := \min_{1 \le i \le m} \left\lbrace \sum_{j=1}^d \alpha_{ij} \right\rbrace $ and let $J := \left\lbrace i \in \lbrace 1,\ldots,m \rbrace : \ \sum_{j=1}^d \alpha_{ij} = \alpha \right\rbrace$. For $0 < t < 1$, let $X_t$ be a random vector with distribution $\pi_t$. Then: $$ X_t \overset{\mathscr{L}}{\underset{t \rightarrow 0}{\longrightarrow}} \frac{1}{\sum_{j \in J} \int_{\mathbb{R}^d} e^{-g_j(x)}dx} \sum_{i \in J} \int_{\mathbb{R}^d} e^{-g_i(x)}dx \cdot \delta_{x_i^\star} .$$ \end{theorem} \begin{theorem}[Multiple well case] \label{theorem:multiple_well} Let $f : \mathbb{R}^d \rightarrow \mathbb{R}$ be $\mathcal{C}^{2p}$ for $p \in \mathbb{N}$ and such that: \begin{enumerate} \item $f$ is coercive, i.e. $f(x) \longrightarrow + \infty$ when $||x||\rightarrow +\infty$. \item $\text{argmin}(f) = \lbrace x_1^\star, \ldots, x_m^\star \rbrace$ and for all $i$, $f(x_i^\star)=0$. \item For all $i \in \lbrace 1, \ldots, m \rbrace$, $f \in \mathscr{A}^\star_{p_i}(x_i^\star)$ for some $p_i \le p$. \item $e^{-f} \in L^1(\mathbb{R}^d)$.
\end{enumerate} Then, for every $i \in \lbrace 1, \ldots, m \rbrace$, we consider $(E_{ik})_k$, $(\alpha_{ij})_j$, $B_i$ and $g_i$ as defined in Algorithm \ref{algo:algorithm}, where we consider $f$ to be in $\mathscr{A}_{p_i}^\star(x_i^\star)$, so that for every $h \in \mathbb{R}^d$: $$ \frac{1}{t} f(x_i^\star + B_i \cdot (t^{\alpha_i} \ast h)) \underset{t \rightarrow 0}{\longrightarrow} g_i(h_1,\ldots,h_d) \in [0,\infty) ,$$ where $t^{\alpha_i}$ is the vector $(t^{\alpha_{i1}},\ldots, t^{\alpha_{id}})$ and where $g_i$ is not constant in any of its variables. Furthermore, we assume that for all $i$, $g_i$ is coercive and the following technical hypothesis for every $i$ such that $p_i \ge 5$: \begin{align*} & \forall h \in \mathbb{R}^d, \ \forall (i_1,\ldots,i_{p_i}) \in \lbrace 0,2,\ldots,2{p_i} \rbrace^{p_i}, \\ & \frac{i_1}{2} + \cdots + \frac{i_{p_i}}{2p} < 1 \implies \ \nabla^{i_1+\cdots+i_{p_i}}f(x_i^\star) \cdot p_{_{E_{i1}}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_{ip_i}}}(h)^{\otimes i_{p_i}} = 0 . \nonumber \end{align*} Let $\alpha := \min_{1 \le i \le m} \left\lbrace \sum_{j=1}^d \alpha_{ij} \right\rbrace $ and let $J := \left\lbrace i \in \lbrace 1,\ldots,m \rbrace : \ \sum_{j=1}^d \alpha_{ij} = \alpha \right\rbrace$. Then: $$ X_t \underset{t \rightarrow 0}{\longrightarrow} \frac{1}{\sum_{j \in J} \int_{\mathbb{R}^d} e^{-g_j(x)}dx} \sum_{i \in J} \int_{\mathbb{R}^d} e^{-g_i(x)}dx \cdot \delta_{x_i^\star} .$$ Moreover, let $\delta > 0$ be small enough so that the balls $\mathcal{B}(x_i^\star,\delta)$ are disjoint, and define the random vector $X_{it}$ to have the law of $X_t$ conditionally on the event $||X_t - x_i^\star||< \delta$. Then: $$\left(\frac{1}{t^{\alpha_{i1}}}, \ldots,\frac{1}{t^{\alpha_{id}}}\right) \ast (B_i^{-1} \cdot X_{it}) \overset{\mathscr{L}}{\longrightarrow} X_i \ \text{ as } t \rightarrow 0 ,$$ where $X_i$ has a density proportional to $e^{-g_i(x)}$. \end{theorem} \section{Expansion of $f$ at a local minimum with degenerate derivatives} \label{section:expansion} In this section, we aim at solving the problem stated in \eqref{eq:alpha_developpement} in order to devise conditions under which Theorem \ref{theorem:athreya:1} applies. This problem can also be considered in a more general setting, independently of the study of the convergence of Gibbs measures. It provides a non-degenerate higher order nested expansion of $f$ at a local minimum when some of the derivatives of $f$ are degenerate. Note here that we only need $x^\star$ to be a local minimum instead of a global minimum, since we only give local properties. For $k \le p$, we define the tensor of order $k$, $T_k := \nabla^k f(x^\star)$. \subsection{Expansion of $f$ for any order $p$} \label{sec:expansion_any_order} In this section, we state our result in a condensed form. The proofs of the cases $p =1,2,3,4$ are individually detailed in Sections \ref{section:order_2}, \ref{section:order_4}, \ref{section:order_6} and \ref{section:order_8} respectively. \begin{theorem} \label{theorem:main} Let $f : \mathbb{R}^d \rightarrow \mathbb{R}$ be $\mathcal{C}^{2p}$ for some $p \in \mathbb{N}$ and assume that $f \in \mathscr{A}_p^\star(x^\star)$ for some $x^\star \in \mathbb{R}^d$.
\begin{enumerate} \item If $p \in \lbrace 1,2,3,4 \rbrace$, then there exist orthogonal subspaces of $\mathbb{R}^d$, $E_1, \ \ldots, \ E_{p}$ such that $$ \mathbb{R}^d = E_1 \oplus \cdots \oplus E_{p},$$ and satisfying for every $h \in \mathbb{R}^d$: \begin{align} \label{equation:order_p:1} & \frac{1}{t} \left[ f\left(x^\star + t^{1/2}p_{_{E_1}}(h) + \cdots + t^{1/(2p)}p_{_{E_{p}}}(h) \right) - f(x^\star) \right] \\ \label{equation:order_p:2} \underset{t \rightarrow 0}{\longrightarrow} & \sum_{k=2}^{2p} \frac{1}{k!} \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\cdots,k \rbrace \\ i_1 + \cdots + i_{p} = k \\ \frac{i_1}{2} + \cdots + \frac{i_p}{2p}=1 }} \binom{k}{i_1,\ldots,i_{p}} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_{p}}}(h)^{\otimes i_{p}}. \end{align} The convergence is uniform with respect to $h$ on every compact set. Moreover, let $B \in \mathcal{O}_d(\mathbb{R})$ be an orthogonal transformation adapted to the decomposition $E_1 \oplus \cdots \oplus E_{p}$; then \begin{equation} \label{equation:order_p:3} \frac{1}{t} \left[ f\left( x^\star + B \cdot (t^\alpha \ast h) \right) - f(x^\star) \right] \underset{t \rightarrow 0}{\longrightarrow} g(h) , \end{equation} where \begin{equation} \label{eq:def_g:2} g(h) = \sum_{k=2}^{2p} \frac{1}{k!} \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\ldots,k \rbrace \\ i_1 + \cdots + i_{p} = k \\ \frac{i_1}{2} + \cdots + \frac{i_p}{2p}=1 }} \binom{k}{i_1,\ldots,i_{p}} T_k \cdot p_{_{E_1}}(B \cdot h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_{p}}}(B \cdot h)^{\otimes i_p} \end{equation} is not constant in any of its variables $h_1, \ \ldots, \ h_d$ and \begin{align} \label{eq:def_alpha:2} \alpha_i & := \frac{1}{2j} \ \text{ for } i \in \lbrace \dim(E_1)+\cdots+\dim(E_{j-1})+1, \ldots, \dim(E_1)+\cdots+\dim(E_j) \rbrace . \end{align} \item If $p \ge 5$ and if there exist orthogonal subspaces of $\mathbb{R}^d$, $E_1, \ \ldots, \ E_{p}$ such that $$ \mathbb{R}^d = E_1 \oplus \cdots \oplus E_{p}$$ and satisfying the following additional assumption \begin{align} \label{equation:even_terms_null:2} & \forall h \in \mathbb{R}^d, \ \forall (i_1,\ldots,i_p) \in \lbrace 0,2,\ldots,2p \rbrace^{p}, \\ & \frac{i_1}{2} + \cdots + \frac{i_p}{2p} < 1 \implies \ T_{i_1+\cdots+i_p} \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_{p}}}(h)^{\otimes i_{p}} = 0 , \nonumber \end{align} then \eqref{equation:order_p:2} still holds true, as well as the uniform convergence on every compact set. Moreover, if $B \in \mathcal{O}_d(\mathbb{R})$ is an orthogonal transformation adapted to the previous decomposition, then \eqref{equation:order_p:3} still holds true. However, depending on the function $f$, such subspaces do not necessarily exist. \end{enumerate} \end{theorem} \textbf{Remarks:} \begin{enumerate} \item The limit \eqref{equation:order_p:2} can be rewritten as: $$\sum_{k=2}^{2p} \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\cdots,k \rbrace \\ i_1 + \cdots + i_{p} = k \\ \frac{i_1}{2} + \cdots + \frac{i_p}{2p}=1 }} T_k \cdot \frac{p_{_{E_1}}(h)^{\otimes i_1}}{i_1!} \otimes \cdots \otimes \frac{p_{_{E_{p}}}(h)^{\otimes i_{p}}}{i_p!}. $$ \item For $p \in \lbrace 1,2,3,4 \rbrace$, we explicitly give the expression of the sum \eqref{equation:order_p:2} and the $p$-tuples $(i_1,\ldots,i_p)$ such that $\frac{i_1}{2}+\cdots+\frac{i_p}{2p}=1$, in \eqref{equation:order_2}, \eqref{equation:order_4}, \eqref{equation:order_6} and \eqref{equation:order_8:2} respectively.
\item For $p \in \lbrace 1,2,3,4 \rbrace$, we give in Algorithm \ref{algo:algorithm} an explicit construction of the orthogonal subspaces $E_1, \ \ldots, \ E_{p}$ as orthogonal complements of the zero sets of some derivatives of $f$. \item The case $p \ge 5$ is fundamentally different from the case $p \in \lbrace 1,2,3,4 \rbrace$. The strategy of proof developed for $p \in \lbrace 1,2,3,4 \rbrace$ fails if the assumption \eqref{equation:even_terms_null:2} is not satisfied. A counter-example is detailed in Section \ref{section:order_10}. The case $p \ge 5$ yields fewer results than the case $p \le 4$, as the assumption \eqref{equation:even_terms_null:2} is strong. \item For $p \ge 5$, such subspaces $E_1$, $\ldots$, $E_p$ may also be obtained from Algorithm \ref{algo:algorithm}; however, \eqref{equation:even_terms_null:2} is not necessarily true in this case. \end{enumerate} The proof of Theorem \ref{theorem:main} is given first individually for each $p \in \lbrace 1,2,3,4 \rbrace$, in Sections \ref{section:order_2}, \ref{section:order_4}, \ref{section:order_6}, \ref{section:order_8} respectively. The proof for $p \ge 5$ is given in Section \ref{section:order_10}. The proof of the uniform convergence and of the fact that $g$ is not constant is given in Section \ref{subsec:unif_non_constant}. \subsection{Review of the one-dimensional case} We review the case $d=1$, as it guides us for the proof in the case $d \ge 2$. The strategy is to find the first derivative $f^{(m)}(x^\star)$ which is non-zero and then to choose $\alpha_1 = 1/m$. \begin{proposition} Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be $\mathcal{C}^p$ for some $p \in \mathbb{N}$ and let $x^\star$ be a strict polynomial local minimum of $f$. Then: \begin{enumerate} \item The order of the local minimum $m$ is an even number and $f^{(m)}(x^\star) > 0$. \item $ f(x^\star + h) \underset{h \rightarrow 0}{=} f(x^\star) + \frac{f^{(m)}(x^\star)}{m!}h^m + o(h^m) .$ \end{enumerate} \end{proposition} Then $\alpha_1 := 1/m$ is the solution of \eqref{eq:alpha_developpement} and $$\frac{1}{t} (f(x^\star + t^{1/m}h) - f(x^\star)) \underset{t \rightarrow 0}{\longrightarrow} \frac{f^{(m)}(x^\star)}{m!} h^m ,$$ which is a non-constant function of $h$, since $f^{(m)}(x^\star) \ne 0$. The direct proof using the Taylor formula is left to the reader. \subsection{Proof of Theorem \ref{theorem:main} for $p=1$} \label{section:order_2} Let $f \in \mathscr{A}^\star_1(x^\star)$. The assumption that $x^\star$ is a strict polynomial local minimum at order $2$ implies that $\nabla^2 f(x^\star)$ is positive definite. Let us denote by $(\beta_i)_{1 \le i \le d}$ its positive eigenvalues. By the spectral theorem, let us write $\nabla^2 f(x^\star ) = B {\rm Diag}(\beta_{1:d})B^T$ for some $B \in \mathcal{O}_d(\mathbb{R})$. Then: \begin{equation} \label{equation:order_2} \frac{1}{t} (f(x^\star + t^{1/2}B \cdot h) - f(x^\star)) \underset{t \rightarrow 0}{\longrightarrow} \frac{1}{2}\sum_{i=1}^{d} \beta_i h_i^2. \end{equation} Thus, a solution of \eqref{eq:alpha_developpement} is $\alpha_1=\cdots=\alpha_d=\frac{1}{2}$, $B$, and $g(h_1,\ldots,h_d)=\frac{1}{2} \sum_{i=1}^d \beta_i h_i^2$, which is a non-constant function of every $h_1, \ \ldots, \ h_d$, since for all $i$, $\beta_i$ is positive. In the following, our objective is to establish a similar result when $\nabla^2 f(x^\star)$ is not necessarily positive definite. \subsection{Proof of Theorem \ref{theorem:main} for $p=2$} \label{section:order_4} \begin{theorem} \label{theorem:order_4} Let $f \in \mathscr{A}_2(x^\star)$.
Then there exist orthogonal subspaces $E$ and $F$ such that $\mathbb{R}^d = E \oplus F$, and such that for all $h \in \mathbb{R}^d$: \begin{align} \frac{1}{t} & \left[ f(x^\star + t^{1/2}p_{_{E}}(h) + t^{1/4}p_{_F}(h)) - f(x^\star)\right] \nonumber \\ \label{equation:order_4} \underset{t \rightarrow 0}{\longrightarrow} & \ \frac{1}{2} \nabla^2 f(x^\star) \cdot p_{_E}(h)^{\otimes 2} + \frac{1}{2} \nabla^3 f(x^\star) \cdot p_{_E}(h) \otimes p_{_F}(h)^{\otimes 2} + \frac{1}{4!} \nabla^4 f(x^\star)\cdot p_{_F}(h)^{\otimes 4} . \end{align} Moreover, if $f \in \mathscr{A}_2^\star (x^\star)$, then this is a solution to the problem \eqref{eq:alpha_developpement}, with $E_1 = E$, $E_2=F$, $\alpha$ defined in \eqref{eq:def_alpha:2}, $B$ adapted to the previous decomposition and $g$ defined in \eqref{eq:def_g:2}. \end{theorem} \noindent \textbf{Remark:} The $2$-tuples $(i_1,i_2)$ such that $\frac{i_1}{2} + \frac{i_2}{4} = 1$ are $(2,0)$, $(1,2)$ and $(0,4)$, which give the terms appearing in the sum in \eqref{equation:order_p:2}. \begin{proof} Let $F := \{ h \in \mathbb{R}^d : \ \nabla^2 f(x^\star) \cdot h^{\otimes 2} = 0 \}$. By the spectral theorem and since $\nabla^2 f(x^\star)$ is positive semi-definite, $F = \{ h \in \mathbb{R}^d : \ \nabla^2 f(x^\star) \cdot h = 0^{\otimes 1} \}$ is a vector subspace of $\mathbb{R}^d$. Let $E$ be the orthogonal complement of $F$ in $\mathbb{R}^d$. For $h \in \mathbb{R}^d$ we expand the left-hand side of \eqref{equation:order_4} using the Taylor formula up to order $4$ and the multinomial formula \eqref{equation:multinomial}, giving $$ \sum_{k=2}^4 \frac{1}{k!} \sum_{\substack{i_1,i_2 \in \lbrace 0,\ldots,k \rbrace \\ i_1+i_2=k}} \binom{k}{i_1,i_2} t^{\frac{i_1}{2} + \frac{i_2}{4}-1} T_k \cdot p_{_E}(h)^{\otimes i_1} \otimes p_{_F}(h)^{\otimes i_2} + o(1).$$ The terms with coefficient $t^a$, $a>0$, are $o(1)$ as $t \rightarrow 0$. By definition of $F$ we have $\nabla^2 f(x^\star) \cdot p_{_F}(h) = 0^{\otimes 1}$, so we also have $$\nabla ^3 f(x^\star) \cdot p_{_F}(h)^{\otimes 3} = 0$$ by the local minimum condition. This yields the convergence stated in \eqref{equation:order_4}. Moreover, if $x^\star$ is a local minimum of polynomial order 4, then by the local minimum condition, $\nabla^4 f(x^\star) > 0$ on $F$ in the sense of \eqref{eq:tensor_positive_def}. Moreover, since $\nabla^2 f(x^\star) > 0$ on $E$, the limit is not constant in any $h_1, \ \ldots, \ h_d$. \end{proof} \noindent \textbf{Remark:} The odd cross term is not necessarily null. For example, consider $$ \begin{array}{rrl} f : & \mathbb{R}^2 & \longrightarrow \mathbb{R} \\ & (x,y) & \longmapsto x^2 + y^4 + xy^2. \end{array} $$ Then $f$ admits a global minimum at $x^\star=0$ since $|xy^2| \le \frac{1}{2}(x^2+y^4)$. We have $E_1 = \mathbb{R}(1,0)$, $E_2 = \mathbb{R}(0,1)$ and for all $(x,y) \in \mathbb{R}^2$, $T_3 \cdot (xe_1) \otimes (ye_2)^{\otimes 2} = 2xy^2$ is not identically null. \subsection{Difficulties beyond the 4th order and Hilbert's $17^{\text{th}}$ problem} \label{section:hilbert} If we do not assume, as in the previous section, that $\nabla^4 f(x^\star)$ is positive definite on $F$, then we have to carry on the development of $f(x^\star + h)$ up to higher orders. A first idea is to consider $F_2 := \{ h \in F: \ \nabla^4 f(x^\star) \cdot h^{\otimes 4}=0 \} \subseteq F$ and $E_2$ a complementary subspace of $F_2$ in $F$, and to continue this process by induction as in Section \ref{section:order_4}. However, $F_2$ is not necessarily a subspace of $F$.
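To make this failure concrete, the following short numerical sketch (an illustration of ours, not part of the original argument; written in Python and using the quartic counter-example $T(X,Y,Z)=((X-Y)(X-Z))^2$ discussed below) checks that the zero set of a non-negative homogeneous quartic need not be closed under addition, hence need not be a vector subspace.
\begin{verbatim}
import numpy as np

# Non-negative homogeneous quartic used as a counter-example below:
# T(h) = ((h1 - h2)(h1 - h3))^2
T = lambda h: ((h[0] - h[1]) * (h[0] - h[2])) ** 2

# Sampled check of non-negativity on random points.
rng = np.random.default_rng(0)
print(min(T(v) for v in rng.normal(size=(10_000, 3))) >= 0)   # True

# Two zeros of T whose sum is not a zero of T:
u = np.array([1.0, 1.0, 0.0])   # T(u) = 0 (since h1 = h2)
v = np.array([1.0, 0.0, 1.0])   # T(v) = 0 (since h1 = h3)
print(T(u), T(v), T(u + v))     # 0.0, 0.0, 1.0
\end{verbatim}
Hence the zero set of such a tensor is, in general, not stable under addition, which is the obstruction described above.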
To see why, let $T$ be a symmetric tensor defined on $\mathbb{R}^{d'}$ of order $2k$ with $k \in \mathbb{N}$. As $T$ is symmetric, there exist vectors $v^1, \ \ldots, \ v^q \in \mathbb{R}^{d'}$ and scalars $\lambda_1, \ \ldots, \ \lambda_q \in \mathbb{R}$ such that $T = \sum_i \lambda_i (v^i)^{\otimes 2k}$ (see \cite{comon2008}, Lemma 4.2), so $$ \forall h \in \mathbb{R}^{d'}, \ T \cdot h^{\otimes 2k} = \sum_{i=1}^q \lambda_i (v^i)^{\otimes 2k} \cdot h^{\otimes 2k} = \sum_{i=1}^q \lambda_i \langle v^i , h \rangle^{2k} .$$ For $k = 2$ and $T = \nabla^{2k} f(x^\star)_{|_F}$, since $x^\star$ is a local minimum, we have, identifying $F$ and $\mathbb{R}^{d'}$, $$ \forall h \in \mathbb{R}^{d'}, \ T \cdot h^{\otimes 2k} \ge 0 .$$ One might then think that this implies that for all $i$, $\lambda_i \ge 0$, and then $$ T \cdot h^{\otimes 2k} = 0 \ \implies \forall i, \ \langle v^i, h \rangle = 0 ,$$ which would give a linear characterization of $\{h \in \mathbb{R}^{d'} : \ T \cdot h^{\otimes 2k} = 0 \}$, and in this case, $F_2$ would be a subspace of $F$. However, this reasoning is not correct in general, as we do not necessarily have $\lambda_i \ge 0$ for all $i$. We can build counter-examples as follows. Since $T$ is a non-negative symmetric tensor, $T$ can be seen as a non-negative homogeneous polynomial of degree $2k$ in $d'$ variables. A counter-example at order $2k=4$ is $T(X,Y,Z) = ((X-Y)(X-Z))^2$, which is a non-negative polynomial of order 4, but $\{T=0\} = \{ X=Y \text{ or } X=Z \}$, which is not a vector space. Another counter-example, given in \cite{motzkin1967} at order $2k = 6$, is the following. We define $$ T(X,Y,Z) = Z^6 + X^4 Y^2 + X^2 Y^4 - 3 X^2 Y^2 Z^2 .$$ By the arithmetic-geometric mean inequality and its equality case, $T$ is non-negative and $T(x,y,z) = 0$ if and only if $z^6 = x^4 y^2 = x^2 y^4 $, so that $$\{ T = 0 \} = \mathbb{R} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \ \cup \ \mathbb{R} \begin{pmatrix} - 1 \\ 1 \\ 1 \end{pmatrix} \ \cup \ \mathbb{R} \begin{pmatrix} 1 \\ - 1 \\ 1 \end{pmatrix} \ \cup \ \mathbb{R} \begin{pmatrix} 1 \\ 1 \\ - 1 \end{pmatrix}. $$ Hence, $\{ T = 0 \}$ is not a subspace of $\mathbb{R}^{3}$. In particular, $T$ cannot be written as $\sum_i \lambda_i (v^i)^{\otimes 2k}$ with $\lambda_i \ge 0$. In fact, this problem is linked with Hilbert's seventeenth problem, which we recall below. \begin{problem}[Hilbert's seventeenth problem] Let $P$ be a non-negative polynomial in $d'$ variables, homogeneous of even degree $2k$. Find polynomials $P_1, \ \ldots, \ P_r$ in $d'$ variables, homogeneous of degree $k$, such that $P = \sum_{i=1}^r P_i^2$. \end{problem} Hilbert proved in 1888 \cite{hilbert1888} that there does not always exist a solution. In general, $\{T=0\}$ is not even a submanifold of $\mathbb{R}^{d'}$. Indeed, taking $T : h \mapsto \nabla^{2k} f(x^\star) \cdot h^{\otimes 2k}$, the differential $\partial_h T = 2k \nabla^{2k} f(x^\star) \cdot h^{\otimes 2k-1} $ is not surjective at $h=0$, so the surjectivity condition for $\lbrace T=0 \rbrace$ to be a submanifold is not fulfilled. \subsection{Proof of Theorem \ref{theorem:main} for $p=3$} \label{section:order_6} We slightly change the strategy of proof developed in Section \ref{section:order_4}.
For $k \ge 2$, we define $F_k$ recursively as \begin{equation} \label{eq:F_k_def} F_k := \lbrace h \in F_{k-1} : \ \forall h' \in F_{k-1}, \ \nabla^{2k} f(x^\star) \cdot h \otimes h'^{\otimes 2k-1} = 0 \rbrace, \end{equation} instead of $\lbrace h \in F_{k-1} : \ \nabla^{2k} f(x^\star) \cdot h^{\otimes 2k} = 0 \rbrace$. Then, by construction, $F_k$ is a vector subspace of $\mathbb{R}^d$. \begin{theorem} \label{theorem:order_6} Let $f \in \mathscr{A}_3(x^\star)$. Then there exist orthogonal subspaces of $\mathbb{R}^d$, $E_1$, $E_2$ and $F_2$, such that $$ \mathbb{R}^d = E_1 \oplus E_2 \oplus F_2 ,$$ and such that for all $h \in \mathbb{R}^d$, \begin{align} \label{equation:order_6} \frac{1}{t} & \left[ f(x^\star + t^{1/2}p_{_{E_1}}(h) + t^{1/4}p_{_{E_2}}(h) + t^{1/6}p_{_{F_2}}(h)) - f(x^\star) \right] \\ \underset{t \rightarrow 0}{\longrightarrow} & \ \frac{1}{2} \nabla^2 f(x^\star) \cdot p_{_{E_1}}(h)^{\otimes 2} + \frac{1}{2} \nabla^3 f(x^\star) \cdot p_{_{E_1}}(h) \otimes p_{_{E_2}}(h)^{\otimes 2} + \frac{1}{4!} \nabla^4 f(x^\star)\cdot p_{_{E_2}}(h)^{\otimes 4} \nonumber \\ & + \frac{4}{4!}\nabla^4 f(x^\star)\cdot p_{_{E_1}}(h)\otimes p_{_{F_2}}(h)^{\otimes 3} + \frac{10}{5!}\nabla^5 f(x^\star) \cdot p_{_{E_2}}(h)^{\otimes 2} \otimes p_{_{F_2}}(h)^{\otimes 3} + \frac{1}{6!}\nabla^6 f(x^\star) \cdot p_{_{F_2}}(h)^{\otimes 6}. \nonumber \end{align} Moreover, if $f \in \mathscr{A}_3^\star(x^\star)$, then this is a solution to the problem \eqref{eq:alpha_developpement}, with $E_3 = F_2$, $\alpha$ defined in \eqref{eq:def_alpha:2}, $B$ adapted to the previous decomposition and $g$ defined in \eqref{eq:def_g:2}. \end{theorem} \noindent \textbf{Remark:} The set of $3$-tuples $(i_1,i_2,i_3)$ such that $\frac{i_1}{2} + \frac{i_2}{4} + \frac{i_3}{6} = 1$, are $(2,0,0)$, $(1,2,0)$, $(0,4,0)$, $(1,0,3)$, $(0,2,3)$, $(0,0,6)$, which gives the terms appearing in \eqref{equation:order_p:2}. \begin{proof} We consider the subspace $$ F_1 := \lbrace h \in \mathbb{R}^d : \ T_2 \cdot h^{\otimes 2} = 0 \rbrace = \lbrace h \in \mathbb{R}^d : \ T_2 \cdot h = 0^{\otimes 1} \rbrace, $$ since $T_2 \ge 0$. Then, let $E_1$ be the orthogonal complement of $F_1$ in $\mathbb{R}^d$ and consider the vector subspace of $F_1$ defined by $$ F_2 = \lbrace h \in F_1 : \ \forall h' \in F_1, \ T_4 \cdot h \otimes h'^{\otimes 3} = 0 \rbrace .$$ Let $E_2$ be the orthogonal complement of $F_2$ in $F_1$. Then we have $$ \mathbb{R}^d = E_1 \oplus F_1 = E_1 \oplus E_2 \oplus F_2 .$$ For $h \in \mathbb{R}^d$ we expand the left term of \eqref{equation:order_6} using the Taylor formula up to order $6$ and the multinomial formula \eqref{equation:multinomial}, giving $$ \sum_{k=2}^6 \frac{1}{k!} \sum_{\substack{i_1,i_2,i_3 \in \lbrace 0,\ldots,k \rbrace \\ i_1+i_2+i_3=k}} \binom{k}{i_1,i_2,i_3} t^{\frac{i_1}{2} + \frac{i_2}{4} + \frac{i_3}{6}-1} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes p_{_{E_2}}(h)^{\otimes i_2} \otimes p_{_{F_2}}(h)^{\otimes i_3} + o(1),$$ and we prove the convergence stated in \eqref{equation:order_6}. All the terms with coefficient $t^a$ where $a>0$ are $o(1)$ as $t \rightarrow 0$. \textbf{Order 2:} we have $T_2 \cdot p_{_{E_2}}(h) = 0^{\otimes 1}$ and $T_2 \cdot p_{_{F_2}}(h) = 0^{\otimes 1}$ so the only term for $k=2$ is $\frac{1}{2} T_2 \cdot p_{_{E_1}} (h)^{\otimes 2}$. \textbf{Order 3:} $\triangleright$ Since $x^\star$ is a local minimum and $T_2 \cdot p_{_{F_1}}(h)^{\otimes 2} = 0$, we have $T_3 \cdot p_{_{F_1}}(h)^{\otimes 3} = 0$. 
Then, using property Proposition \ref{proposition:null_tensor:1}, if the factor $p_{_{E_1}}(h)$ does not appear as an argument in $T_3$, then the corresponding term is zero. $\triangleright$ Let us prove that \begin{equation*} T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{F_2}}(h)^{\otimes 2} = 0 . \end{equation*} Using Theorem \ref{theorem:order_4} with $E = E_1$, $F = E_2 \oplus F_2$, we have in particular that for all $h \in \mathbb{R}^d$, \begin{equation} \label{equation:order_4_positive} \frac{1}{2} T_2 \cdot p_{_E}(h)^{\otimes 2} + \frac{1}{2} T_3 \cdot p_{_E}(h) \otimes p_{_F}(h)^{\otimes 2} + \frac{1}{4!} T_4 \cdot p_{_F}(h)^{\otimes 4} \ge 0 . \end{equation} Then taking $h \in E_1 \oplus F_2$ so that $h= p_{_{E_1}}(h)+p_{_{F_2}}(h)$ and with \begin{equation} \label{eq:proof:T4_null} \left[T_4\cdot p_{_{F_2}}(h)\right]_{|F_1} \equiv 0^{\otimes 3}, \end{equation} we may rewrite \eqref{equation:order_4_positive} as $$ \frac{1}{2} T_2 \cdot p_{_{E_1}}(h)^{\otimes 2} + \frac{1}{2} T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{F_2}}(h)^{\otimes 2} \ge 0 .$$ Now, considering $h' = \lambda h$, we have that for all $\lambda \in \mathbb{R}$, $$ \lambda^2 \left(\frac{1}{2} T_2 \cdot p_{_{E_1}}(h)^{\otimes 2} + \frac{\lambda}{2} T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{F_2}}(h)^{\otimes 2} \right) \ge 0 ,$$ so that necessarily $T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{F_2}}(h)^{\otimes 2} = 0$. $\triangleright$ Let us prove that $$ T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{E_2}}(h) \otimes p_{_{F_2}}(h) = 0 .$$ We use again \eqref{equation:order_4_positive}, with $p_{_F}(h) = p_{_{E_2}}(h) + p_{_{F_2}}(h)$, so that $$ \frac{1}{2} T_2 \cdot p_{_{E_1}}(h)^{\otimes 2} + \frac{1}{2} T_3 \cdot p_{_{E_1}}(h) \otimes \left(p_{_{E_2}}(h) + p_{_{F_2}}(h)\right)^{\otimes 2} + \frac{1}{4!} T_4 \cdot \left(p_{_{E_2}}(h) + p_{_{F_2}}(h)\right)^{\otimes 4} \ge 0 . $$ But using \eqref{eq:proof:T4_null} and that $T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{F_2}}(h)^{\otimes 2} = 0$, we obtain $$ \frac{1}{2} T_2 \cdot p_{_{E_1}}(h)^{\otimes 2} + \frac{1}{2} T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{E_2}}(h)^{\otimes 2} + T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{E_2}}(h) \otimes p_{_{F_2}}(h) + \frac{1}{4!} T_4 \cdot p_{_{E_2}}(h)^{\otimes 4} \ge 0 . $$ Now, considering $h' = p_{_{E_1}}(h) + p_{_{E_2}}(h) + \lambda p_{_{F_2}}(h)$, we have that for all $\lambda \in \mathbb{R}$, $$ \frac{1}{2} T_2 \cdot p_{_{E_1}}(h)^{\otimes 2} + \frac{1}{2} T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{E_2}}(h)^{\otimes 2} + \lambda T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{E_2}}(h) \otimes p_{_{F_2}}(h) + \frac{1}{4!} T_4 \cdot p_{_{E_2}}(h)^{\otimes 4} \ge 0 ,$$ so necessarily $T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{E_2}}(h) \otimes p_{_{F_2}}(h) = 0$. $\triangleright$ The last remaining term for $k=3$ is $\frac{1}{2} T_3 \cdot p_{_{E_1}}(h) \otimes p_{_{E_2}}(h)^{\otimes 2}$. \textbf{Order 4:} If the factor $p_{_{E_1}}(h)$ does not appear and if the factor $p_{_{F_2}}(h)$ appears at least once, then using \eqref{eq:proof:T4_null} the corresponding term is zero. If $p_{_{E_1}}(h)$ appears, the only term with a non-positive exponent of $t$ is $\frac{4}{4!} T_4 \cdot p_{_{E_1}}(h) \otimes p_{_{F_2}}(h)^{\otimes 3}$. So the only terms for $k=4$ are $\frac{1}{4!}T_4 \cdot p_{_{E_2}}(h)^{\otimes 4}$ and $\frac{4}{4!} T_4 \cdot p_{_{E_1}}(h)\otimes p_{_{F_2}}(h)^{\otimes 3}$. \textbf{Order 5:} $\triangleright$ The terms where $p_{_{E_1}}(h)$ appears at least once have a coefficient $t^a$ with $a>0$ so are $o(1)$ when $t \rightarrow 0$. 
$\triangleright$ We have $T_2 \cdot p_{_{F_2}}(h)^{\otimes 2} = 0$, $T_3 \cdot p_{_{F_2}}(h)^{\otimes 3} = 0$, $T_4 \cdot p_{_{F_2}}(h)^{\otimes 4} = 0$ and since $x^\star$ is a local minimum, we have $$ T_5 \cdot p_{_{F_2}}(h)^{\otimes 5} = 0 .$$ $\triangleright$ Let us prove that $$ T_5 \cdot p_{_{E_2}}(h) \otimes p_{_{F_2}}(h)^{\otimes 4} = 0 .$$ Let $h \in \mathbb{R}^d$. We have $$ \frac{1}{t^{11/12}} \left[ f(x^\star + t^{1/4}p_{_{E_2}}(h) + t^{1/6}p_{_{F_2}}(h)) - f(x^\star) \right] \underset{t \rightarrow 0}{\longrightarrow} \frac{1}{4!} T_5 \cdot p_{_{E_2}}(h) \otimes p_{_{F_2}}(h)^{\otimes 4} \ge 0 .$$ Hence, considering $h'=\lambda h$, we have for every $\lambda \in \mathbb{R}$, $$ \lambda^5 T_5 \cdot p_{_{E_2}}(h) \otimes p_{_{F_2}}(h)^{\otimes 4} \ge 0 ,$$ which yields the desired result. $\triangleright$ The only remaining term for $k=5$ is $$\frac{10}{5!} T_5 \cdot p_{_{E_2}}(h)^{\otimes 2} \otimes p_{_{F_2}}(h)^{\otimes 3} .$$ \textbf{Order 6:} The only term for $k=6$ is $\frac{1}{6!} T_6 \cdot p_{_{F_2}}(h)^{\otimes 6}$; the other terms have a coefficient $t^a$ with $a > 0$, so are $o(1)$ when $t \rightarrow 0$. \end{proof} \textbf{Remark:} As in Theorem \ref{theorem:order_4} and the remark that follows it, the remaining odd cross-terms cannot be proved to be zero using the same method of proof, and may actually be nonzero. For example, consider: $$ \begin{array}{rrl} f : & \mathbb{R}^2 & \longrightarrow \mathbb{R} \\ & (x,y) & \longmapsto x^4 + y^6 + x^2y^3, \end{array} $$ which satisfies $h \mapsto \nabla^5 f(x^\star) \cdot p_{_{E_2}}(h)^{\otimes 2} \otimes p_{_{F_2}}(h)^{\otimes 3} \not\equiv 0$. \subsection{Proof of Theorem \ref{theorem:main} for $p=4$} \label{section:order_8} \begin{theorem} \label{theorem:order_8} Let $f \in \mathscr{A}_4(x^\star)$. Then there exist orthogonal subspaces of $\mathbb{R}^d$, $E_1$, $E_2$, $E_3$ and $F_3$ such that $$ \mathbb{R}^d = E_1 \oplus E_2 \oplus E_3 \oplus F_3 ,$$ and for all $h \in \mathbb{R}^d$, \begin{align} & \frac{1}{t} \left[ f(x^\star + t^{1/2}p_{_{E_1}}(h) + t^{1/4}p_{_{E_2}}(h) + t^{1/6}p_{_{E_3}}(h) + t^{1/8}p_{_{F_3}}(h)) - f(x^\star) \right] \nonumber \\ \label{equation:order_8:2} \underset{t \rightarrow 0}{\longrightarrow} \ & \sum_{k=2}^{8} \frac{1}{k!} \sum_{\substack{i_1,\ldots,i_{4} \in \lbrace 0,\ldots,k \rbrace \\ i_1 + \cdots + i_{4} = k \\ \frac{i_1}{2} + \frac{i_2}{4} + \frac{i_3}{6} + \frac{i_4}{8} = 1 }} \binom{k}{i_1,\ldots,i_{4}} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes p_{_{E_2}}(h)^{\otimes i_2} \otimes p_{_{E_3}}(h)^{\otimes i_3} \otimes p_{_{F_3}}(h)^{\otimes i_4}. \end{align} These terms are summarized as tuples $(i_1,\ldots,i_4)$ in Table \ref{figure:terms_exponent_1}. Moreover, if $f \in \mathscr{A}_4^\star(x^\star)$, then this is a solution to \eqref{eq:alpha_developpement}, with $E_4 = F_3$, $\alpha$ defined in \eqref{eq:def_alpha:2}, $B$ adapted to the previous decomposition and $g$ defined in \eqref{eq:def_g:2}.
\end{theorem} \begin{table} \centering \begin{tabular}{|c|c|} \hline \rule[-1ex]{0pt}{2.5ex} Order $2$ & $(2,0,0,0)$ \\ \hline \rule[-1ex]{0pt}{2.5ex} Order $3$ & $(1,2,0,0)$ \\ \hline \rule[-1ex]{0pt}{2.5ex} Order $4$ & $(0,4,0,0), \ (1,1,0,2), \ (1,0,3,0)$ \\ \hline \rule[-1ex]{0pt}{2.5ex} Order $5$ & $(1,0,0,4), \ (0,2,3,0), \ (0,3,0,2)$ \\ \hline \rule[-1ex]{0pt}{2.5ex} Order $6$ & $(0,1,3,2), \ (0,2,0,4), \ (0,0,6,0)$ \\ \hline \rule[-1ex]{0pt}{2.5ex} Order $7$ & $(0,1,0,6), \ (0,0,3,4)$ \\ \hline \rule[-1ex]{0pt}{2.5ex} Order $8$ & $(0,0,0,8)$ \\ \hline \end{tabular} \caption{Terms expressed as $4$-tuples in the development \eqref{equation:order_8:2}} \label{figure:terms_exponent_1} \end{table} \begin{proof} As before, we define the subspaces $F_0 := \mathbb{R}^d$ and, by induction, $$ F_k = \left\lbrace h \in F_{k-1} : \ \forall h' \in F_{k-1}, \ T_{2k} \cdot h \otimes h'^{\otimes 2k-1} = 0 \right\rbrace $$ for $k=1,2,3$. We define $E_k$ as the orthogonal complement of $F_k$ in $F_{k-1}$ for $k=1,2,3$, so that $$ \mathbb{R}^d = E_1 \oplus E_2 \oplus E_3 \oplus F_3 .$$ \begin{table} \centering \begin{tabular}{|c|c|c|c|} \hline $E_1$ & \multicolumn{3}{c|}{$F_1$} \\ \hline \multirow{5}{*}{$T_2 \ge 0$} & \multicolumn{3}{c|}{$T_2=0$} \\ \hhline{~---} & $E_2$ & \multicolumn{2}{c|}{$F_2$} \\ \hhline{~---} & \multirow{3}{*}{$T_4 \ge 0$} & \multicolumn{2}{c|}{$T_4=0$} \\ \hhline{~~--} & & $E_3$ & $F_3$ \\ \hhline{~~--} & & $T_6 \ge 0$ & $T_6=0$ \\ \hline \end{tabular} \caption{Illustration of the subspaces} \label{figure:subspaces} \end{table} \noindent Then we apply a Taylor expansion up to order $8$ to the left-hand side of \eqref{equation:order_8:2}, together with the multinomial formula \eqref{equation:multinomial}, which reads $$ \sum_{k=2}^{8} \frac{1}{k!} \sum_{\substack{i_1,\ldots,i_{4} \in \lbrace 0,\ldots,k \rbrace \\ i_1 + \cdots + i_{4} = k }} \binom{k}{i_1,\ldots,i_{4}} t^{\frac{i_1}{2} + \cdots + \frac{i_4}{8} -1} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes p_{_{E_2}}(h)^{\otimes i_2} \otimes p_{_{E_3}}(h)^{\otimes i_3} \otimes p_{_{F_3}}(h)^{\otimes i_4} + o(1) .$$ $\triangleright$ If $\frac{i_1}{2} + \cdots + \frac{i_4}{8} > 1$ then the corresponding term is in $o(1)$ when $t \rightarrow 0$. \noindent $\triangleright$ If $\frac{i_1}{2} + \cdots + \frac{i_4}{8} < 1$ then the corresponding term diverges when $t \rightarrow 0$, so we need to prove that actually \begin{equation} \label{eq:proof:order_8_null_terms} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes p_{_{E_2}}(h)^{\otimes i_2} \otimes p_{_{E_3}}(h)^{\otimes i_3} \otimes p_{_{F_3}}(h)^{\otimes i_4} = 0 . \end{equation} -- If $\frac{i_1}{2} + \frac{i_2}{4} + \frac{i_3}{6} + \frac{i_4}{8} < 1$ but if we also have $\frac{i_1}{2} + \frac{i_2}{4} + \frac{i_3}{6} + \frac{i_4}{6} < 1$, then by applying the property at the order $6$ (Theorem \ref{theorem:order_6}) with the $3$-tuple $(i_1,i_2,i_3+i_4)$, we get \eqref{eq:proof:order_8_null_terms}. -- So we only need to consider $4$-tuples such that $\frac{i_1}{2} + \frac{i_2}{4} + \frac{i_3}{6} + \frac{i_4}{8} < 1$ and $\frac{i_1}{2} + \frac{i_2}{4} + \frac{i_3}{6} + \frac{i_4}{6} \ge 1$. We can remove all the terms which are null by the definitions of the subspaces $E_1, \ E_2, \ E_3, \ F_3$.
The remaining terms are: For $k=4$: $\frac{t^{21/24}}{6} T_4 \cdot p_{_{E_1}}(h) \otimes p_{_{F_3}}(h)^{\otimes 3}$, $\frac{t^{11/12}}{2} T_4 \cdot p_{_{E_1}}(h) \otimes p_{_{E_3}}(h) \otimes p_{_{F_3}}(h)^{\otimes 2}$, $\frac{t^{23/24}}{2} T_4 \cdot p_{_{E_1}}(h) \otimes p_{_{E_3}}(h)^{\otimes 2} \otimes p_{_{F_3}}(h)$. For $k=5$ : $\frac{t^{21/24}}{12} T_5 \cdot p_{_{E_2}}(h)^{\otimes 2} \otimes p_{_{F_3}}(h)^{\otimes 3}$, $\frac{t^{11/12}}{4} T_5 \cdot p_{_{E_2}}(h)^{\otimes 2} \otimes p_{_{E_3}}(h) \otimes p_{_{F_3}}(h)^{\otimes 2}$, $\frac{t^{23/24}}{4} T_5 \cdot p_{_{E_2}}(h)^{\otimes 2} \otimes p_{_{E_3}}(h)^{\otimes 2} \otimes p_{_{F_3}}(h)$. For $k=6$ : $\frac{t^{21/24}}{5!} T_6 \cdot p_{_{E_2}}(h) \otimes p_{_{F_3}}(h)^{\otimes 5}$, $\frac{t^{11/12}}{4!} T_6 \cdot p_{_{E_2}}(h) \otimes p_{_{E_3}}(h) \otimes p_{_{F_3}}(h)^{\otimes 4}$, $\frac{t^{23/24}}{12} T_6 \cdot p_{_{E_2}}(h) \otimes p_{_{E_3}}(h)^{\otimes 2} \otimes p_{_{F_3}}(h)^{\otimes 3}$. First, we note that \begin{align*} & \frac{1}{t^{21/24}} \left[ f(x^\star + t^{1/2}p_{_{E_1}}(h) + t^{1/4}p_{_{E_2}}(h) + t^{1/6}p_{_{E_3}}(h) + t^{1/8}p_{_{F_3}}(h)) - f(x^\star) \right] \\ \underset{t \rightarrow 0}{\longrightarrow} \ & \frac{1}{6} T_4 \cdot p_{_{E_1}}(h) \otimes p_{_{F_3}}(h)^{\otimes 3} + \frac{1}{12} T_5 \cdot p_{_{E_2}}(h)^{\otimes 2} \otimes p_{_{F_3}}(h)^{\otimes 3} + \frac{1}{5!} T_6 \cdot p_{_{E_2}}(h) \otimes p_{_{F_3}}(h)^{\otimes 5} \ge 0. \end{align*} Then, considering $h' = \lambda p_{_{E_1}}(h) + p_{_{E_2}}(h) + p_{_{E_3}}(h) + p_{_{F_3}}(h)$, we have that for all $\lambda \in \mathbb{R}$, \begin{equation*} \frac{\lambda}{6} T_4 \cdot p_{_{E_1}}(h) \otimes p_{_{F_3}}(h)^{\otimes 3} + \frac{1}{12} T_5 \cdot p_{_{E_2}}(h)^{\otimes 2} \otimes p_{_{F_3}}(h)^{\otimes 3} + \frac{1}{5!} T_6 \cdot p_{_{E_2}}(h) \otimes p_{_{F_3}}(h)^{\otimes 5} \ge 0 , \end{equation*} so necessarily $$T_4 \cdot p_{_{E_1}}(h) \otimes p_{_{F_3}}(h)^{\otimes 3} = 0.$$ Then, considering $h' = p_{_{E_2}}(h) + \lambda p_{_{F_3}}(h)$ for $\lambda \in \mathbb{R}$, we get successively that the two other terms are null. Likewise, we prove successively that the terms in $t^{11/12}$ are null, and then that the terms in $t^{23/24}$ are null. This yields the convergence stated in \eqref{equation:order_8:2}. \end{proof} \subsection{Counter-example and proof of Theorem \ref{theorem:main} with $p\ge 5$ under the hypothesis \eqref{equation:even_terms_null:2}} \label{section:order_10} Algorithm \ref{algo:algorithm} may fail to yield such expansion of $f$ for orders no lower than $10$ if the hypothesis \eqref{equation:even_terms_null:2} is not fulfilled. Indeed for $p \ge 5$, there exist $p$-tuples $(i_1,\ldots,i_p)$ such that $\frac{i_1}{2}+\cdots+\frac{i_p}{2p} < 1$ and $i_1$, $\ldots$, $i_p$ are all even. Such tuples do not appear at orders $8$ and lower, but they do appear at orders $10$ and higher, for example $(0,2,0,0,4)$ for $k=6$. In such a case, we cannot use the positiveness argument to prove that the corresponding term $T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(h)^{\otimes i_p}$ is zero, and in fact, it may be not zero. Let us give a counter example. Consider $$ \begin{array}{rrl} f : & \mathbb{R}^2 & \longrightarrow \mathbb{R} \\ & (x,y) & \longmapsto x^4 + y^{10} + x^2 y^4. \end{array} $$ Then $f \in \mathscr{A}_5^\star(0)$ and we have $E_1 = \lbrace 0 \rbrace$, $E_2 = \mathbb{R}\cdot(1,0)$, $E_3 = \lbrace 0 \rbrace$, $E_4 = \lbrace 0 \rbrace$, $F_4 = \mathbb{R}\cdot(0,1)$. 
But $$ \frac{1}{t} f(t^{1/4}, t^{1/10}) = \frac{1}{t} \left(t + t + t^{9/10} \right) $$ goes to $+\infty$ when $t \rightarrow 0$. \textbf{Now, let us give the proof of Theorem \ref{theorem:main} for $p \ge 5$.} In this proof, we assume that the subspaces $E_1, \ \ldots, \ E_p$ given in Algorithm \ref{algo:algorithm} satisfy the hypothesis \eqref{equation:even_terms_null:2}. \begin{proof} We develop \eqref{equation:order_p:1}, which reads: $$ \sum_{k=2}^{2p} \frac{1}{k!} \sum_{\substack{i_1,\ldots,i_{p} \in \lbrace 0,\ldots,k \rbrace \\ i_1 + \cdots + i_{p} = k }} \binom{k}{i_1,\ldots,i_{p}} t^{\frac{i_1}{2} + \cdots + \frac{i_p}{2p} -1} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(h)^{\otimes i_p} + o(1) =: S .$$ The terms such that $\frac{i_1}{2} + \cdots + \frac{i_p}{2p} < 1$ may diverge when $t \rightarrow 0$, so let us prove that they are in fact null. Let $$ \alpha := \inf \left\lbrace w \ : \ h \longmapsto\sum_{k=2}^{2p} \frac{1}{k!} \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\ldots,k \rbrace \\ i_1 + \cdots + i_{p} = k \\ \frac{i_1}{2} + \cdots + \frac{i_p}{2p}= w }} \binom{k}{i_1,\ldots,i_{p}} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(h)^{\otimes i_p} \not\equiv 0 \right\rbrace ,$$ and assume by contradiction that $\alpha < 1$. Then we have for all $h \in \mathbb{R}^d$: $$ t^{1-\alpha} S \underset{t \rightarrow 0}{\longrightarrow} \left( \sum_{k=2}^{2p} \frac{1}{k!} \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\ldots,k \rbrace \\ i_1 + \cdots + i_{p} = k \\ \frac{i_1}{2} + \cdots + \frac{i_p}{2p}= \alpha }} \binom{k}{i_1,\ldots,i_{p}} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(h)^{\otimes i_p} \right) \ge 0,$$ by the local minimum property. Then, considering $h' = \lambda_1 p_{_{E_1}}(h) + \cdots + \lambda_p p_{_{E_p}}(h)$, we have, for all $h \in \mathbb{R}^d$ and $\lambda_1, \ \ldots, \ \lambda_p \in \mathbb{R}$, \begin{equation} \label{eq:order_5_polynomial} \sum_{k=2}^{2p} \frac{1}{k!} \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\ldots,k \rbrace \\ i_1 + \cdots + i_{p} = k \\ \frac{i_1}{2} + \cdots + \frac{i_p}{2p}= \alpha }} \lambda_1^{i_1}\ldots\lambda_p^{i_p} \binom{k}{i_1,\ldots,i_{p}} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(h)^{\otimes i_p} \ge 0 . \end{equation} Now, we fix $h \in \mathbb{R}^d$ such that the polynomial in \eqref{eq:order_5_polynomial} in the variables $\lambda_1, \ \ldots, \ \lambda_p$ is not identically zero, and we denote by $k_{\max}$ its highest homogeneous degree, so that we have $$ \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\ldots,k_{\max} \rbrace \\ i_1 + \cdots + i_{p} = k_{\max} \\ \frac{i_1}{2} + \cdots + \frac{i_p}{2p}= \alpha }} \lambda_1^{i_1}\ldots\lambda_p^{i_p} \binom{k_{\max}}{i_1,\ldots,i_{p}} T_{k_{\max}} \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(h)^{\otimes i_p} \ge 0 .$$ If $k_{\max}$ is odd, this yields a contradiction, taking $\lambda_1 = \cdots = \lambda_p =: \lambda \rightarrow \pm \infty$. If $k_{\max}$ is even, we consider the index $l_1$ such that $i_{l_1} =: a_1$ is maximal and the coefficients in the above sum with $i_{l_1} = a_1$ are not all zero.
Then fixing all the $\lambda_l$ for $l \ne l_1$ and taking $\lambda_{l_1} \rightarrow \infty$, we have $$ \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\ldots,k_{\max} \rbrace \\ i_1 + \cdots + i_{p} = k_{\max} \\ \frac{i_1}{2} + \cdots + \frac{i_p}{2p}= \alpha \\ i_{l_1} = a_1 }} \lambda_1^{i_1}\ldots\lambda_p^{i_p} \binom{k_{\max}}{i_1,\cdots,i_{p}} T_{k_{\max}} \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(h)^{\otimes i_p} \ge 0 .$$ Thus, if $a_1$ is odd, this yields a contradiction. If $a_1$ is even, we carry on this process by induction: knowing $l_1, \ \ldots, \ l_r$, we choose the index $l_{r+1} \notin \lbrace l_1,\ldots,l_r \rbrace$ such that the corresponding term $$ \sum_{\substack{i_1,\ldots,i_p \in \lbrace 0,\ldots,k_{\max} \rbrace \\ i_1 + \cdots + i_{p} = k_{\max} \\ \frac{i_1}{2} + \cdots + \frac{i_p}{2p}= \alpha \\ i_{l_1}=a_1,\ldots,i_{l_{r+1}}=a_{r+1} }} \lambda_1^{i_1}\ldots\lambda_p^{i_p} \binom{k_{\max}}{i_1,\ldots,i_{p}} T_{k_{\max}} \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(h)^{\otimes i_p} $$ is not identically null and such that $i_{l_{r+1}} =: a_{r+1}$ is maximal. Necessarily, $a_{r+1}$ is even. In the end, we find a non-zero term whose exponents $i_{\ell}$ are all even, which contradicts assumption \eqref{equation:even_terms_null:2}. \end{proof} \subsection{Proofs of the uniform convergence and of the non-constant property} \label{subsec:unif_non_constant} In this section we prove the additional properties claimed in Theorem \ref{theorem:main}: the uniform convergence with respect to $h$ on every compact set and the fact that the function $g$ is not constant in any of its variables $h_1, \ \ldots, \ h_d$. \begin{proof} First, let us prove that the convergence is uniform with respect to $h$ on every compact set. Let $\varepsilon >0$ and let $R>0$. By the Taylor formula at order $2p$, there exists $\delta>0$ such that for $||h|| < \delta$, $$ \left| f(x^\star + h) - f(x^\star) - \sum_{k=2}^{2p} \frac{1}{k!} \sum_{i_1+\cdots+i_p=k} \binom{k}{i_1,\ldots,i_p} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(h)^{\otimes i_p} \right| \le \varepsilon ||h||^{2p} .$$ Now, let us consider $t \rightarrow 0$ and $h \in \mathbb{R}^d$ with $||h|| \le R$. Then we have: $$ \forall t \le \min\left(1, \left(\frac{\delta}{R}\right)^{2p}\right), \ ||t^{1/2}p_{_{E_1}}(h) + \cdots + t^{1/(2p)}p_{_{E_p}}(h) || \le \delta ,$$ so that \begin{align*} \left|\frac{1}{t} \left[ \right.\right. & \left. \left. f(x^\star + t^{1/2}p_{_{E_1}}(h) + \cdots + t^{1/(2p)}p_{_{E_p}}(h)) - f(x^\star)\right] - \sum_{k=2}^{2p} \frac{1}{k!} \sum_{i_1+\cdots+i_p=k} \binom{k}{i_1,\ldots,i_p} \right. \\ & \left. \cdot t^{\frac{i_1}{2}+\cdots+\frac{i_p}{2p}-1} T_k \cdot p_{_{E_1}}(h)^{\otimes i_1} \otimes \cdots \otimes p_{_{E_p}}(h)^{\otimes i_p} \right| \le \frac{\varepsilon}{t}||t^{1/2}p_{_{E_1}}(h) + \cdots + t^{1/(2p)}p_{_{E_p}}(h)||^{2p}. \end{align*} We proved or assumed that the terms such that $\frac{i_1}{2}+\cdots+\frac{i_p}{2p} < 1$ are zero. We denote by $g_1(h)$ the sum in the last equation with the terms such that $\frac{i_1}{2}+\cdots+\frac{i_p}{2p}=1$ and by $g_2(h)$ the sum with the terms such that $\frac{i_1}{2}+\cdots+\frac{i_p}{2p} > 1$.
We also define $a$ as the smallest exponent of $t$ appearing in $g_2(h)$: $$ a := \min \left\lbrace \frac{i_1}{2}+\cdots+\frac{i_p}{2p} \ : \ i_1,\ldots,i_p \in \lbrace 0,\ldots,2p\rbrace, \ i_1+\cdots+i_p \le 2p, \ \frac{i_1}{2}+\cdots+\frac{i_p}{2p} > 1 \right\rbrace > 1 .$$ It follows that \begin{align} \label{eq:uniform_proof:1} & \left| \frac{1}{t} \left[ f(x^\star + t^{1/2}p_{_{E_1}}(h) +\cdots + t^{1/(2p)}p_{_{E_p}}(h)) - f(x^\star) \right] - g_1(h) \right| \\ & \le t^{a-1}|g_2(h)| + \frac{\varepsilon}{t}||t^{1/2}p_{_{E_1}}(h) + \cdots + t^{1/(2p)}p_{_{E_p}}(h)||^{2p}. \nonumber \end{align} We remark that $h \mapsto g_2(h)$ is a polynomial function, so it is bounded on every compact set. We also have: $$ \frac{\varepsilon}{t}||t^{1/2}p_{_{E_1}}(h) + \cdots + t^{1/(2p)}p_{_{E_p}}(h)||^{2p} \le \frac{\varepsilon (t^{1/(2p)})^{2p}}{t} ||h||^{2p} = \varepsilon ||h||^{2p} .$$ Hence, for $||h|| \le R$, the left-hand side of \eqref{eq:uniform_proof:1} is bounded by $t^{a-1}\sup_{||h'|| \le R}|g_2(h')| + \varepsilon R^{2p}$, whose limit superior as $t \rightarrow 0$ is at most $\varepsilon R^{2p}$; since $\varepsilon$ is arbitrary, the convergence is uniform with respect to $h$ on every compact set. Now let us assume that $f \in \mathscr{A}_p^\star(x^\star)$; we prove that the function $g$ defined in \eqref{eq:def_g} is not constant in any of its variables in the sense of \eqref{eq:def_non_constant}. Let $B \in \mathcal{O}_d(\mathbb{R})$ be adapted to the decomposition $\mathbb{R}^d = E_1 \oplus \cdots \oplus E_p$. We have: $$ \frac{1}{t} \left[ f\left( x^\star + B \cdot (t^\alpha \ast h)\right) - f(x^\star) \right] \underset{t \rightarrow 0}{\longrightarrow} g(h) .$$ Let $i \in \lbrace 1, \ldots, d \rbrace$ and $k$ such that $v_i := B \cdot e_i \in E_k$. Let us assume by contradiction that $g$ does not depend on the $i^{\text{th}}$ coordinate. Considering the expression of $g$ in \eqref{eq:def_g} and setting all the variables outside $E_k$ to $0$, we have that: $$ \forall h \in E_k, \ \lambda \in \mathbb{R} \mapsto T_{2k} \cdot (h + \lambda v_i)^{\otimes 2k} $$ is constant. Then applying \eqref{equation:multinomial}, we have: $$ \forall h \in E_k, \ T_{2k} \cdot v_i \otimes h^{\otimes 2k-1} = 0 .$$ Moreover, for $h \in F_{k-1}$, let us write $h = h' + h''$ where $h' \in E_k$ and $h'' \in F_k$, so that $$T_{2k} \cdot v_i \otimes h^{\otimes 2k-1} = T_{2k} \cdot v_i \otimes h'^{\otimes 2k-1} = 0,$$ where we used that $$ \forall h^{(3)} \in F_{k-1}, \ T_{2k} \cdot h'' \otimes \left(h^{(3)}\right)^{\otimes 2k-1} = 0$$ following \eqref{eq:F_k_def}, and Proposition \ref{proposition:null_tensor:1}. Considering the definition of $E_k$ as the orthogonal complement of $F_k$, which is defined in \eqref{eq:F_k_def}, the last equation contradicts that $v_i \in E_k$. \end{proof} \subsection{Non-coercive case} \label{section:non_coercive} The function $g$ we obtain in Algorithm \ref{algo:algorithm} is a non-negative polynomial function which is constant in none of its variables. However, this does not always guarantee that $e^{-g} \in L^1(\mathbb{R}^d)$, or even that $g$ is coercive. Indeed, $g$ can be null on an unbounded continuous polynomial curve, while the polynomial degree of the minimum $x^\star$ of $f$ is higher than the degree of $g$ in these variables. For example, let us consider \begin{align} \label{eq:non_coercive_f} f \colon \ \mathbb{R}^2 &\to \mathbb{R}\\ (x,y) &\mapsto (x-y^2)^2 + x^6. \nonumber \end{align} Then $f \in \mathscr{A}_3^\star(0)$ and using Algorithm \ref{algo:algorithm}, we get $$ g(x,y) = (x-y^2)^2 ,$$ which does not satisfy $e^{-g} \in L^1(\mathbb{R}^d)$.
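As a quick numerical sanity check of this example (a sketch of ours, in Python, with an arbitrary test point and grid sizes chosen purely for illustration), one can verify that $\frac{1}{t} f(t^{1/2}x, t^{1/4}y) \rightarrow (x-y^2)^2$ as $t \rightarrow 0$, where $(1/2,1/4)$ are the exponents produced by the algorithm for this $f$, and that the integral of $e^{-g}$ over growing boxes keeps increasing, reflecting the fact that $g$ vanishes on the whole parabola $x = y^2$.
\begin{verbatim}
import numpy as np

f = lambda x, y: (x - y**2) ** 2 + x ** 6      # the function in (eq:non_coercive_f)
g = lambda x, y: (x - y**2) ** 2               # the limit produced by the algorithm

# Rescaled values converge to g(h) as t -> 0 (arbitrary test point).
h1, h2 = 0.7, -1.3
for t in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(t, f(t ** 0.5 * h1, t ** 0.25 * h2) / t, g(h1, h2))

# exp(-g) is not integrable: the integral over [-R, R]^2 keeps growing with R,
# since g vanishes along the whole parabola x = y^2.
for R in [5.0, 10.0, 20.0, 40.0]:
    xs = np.linspace(-R, R, 801)
    X, Y = np.meshgrid(xs, xs)
    approx = np.exp(-g(X, Y)).sum() * (2 * R / 800) ** 2   # Riemann-sum approximation
    print(R, approx)
\end{verbatim}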
In fact this case is highly degenerate, as, with \begin{equation*} f_\varepsilon(x,y) := f(x,y) + \varepsilon xy^2 = x^2 + y^4 - (2-\varepsilon)xy^2 + x^6 , \end{equation*} we have that $g_\varepsilon(x,y) = x^2 + y^4 - (2-\varepsilon)xy^2$ satisfies $e^{-g_\varepsilon} \in L^1(\mathbb{R}^d)$ for every $\varepsilon \in (0,4)$ and that $x^\star$ is not the global minimum of $f_\varepsilon$ for every $\varepsilon \in (-\infty, 0) \cup (4, \infty)$. We now prove that instead of assuming $e^{-g} \in L^1(\mathbb{R}^d)$, we can only assume that $g$ is coercive, which is justified in the following proposition. More specific conditions for $g$ to be coercive can be found in \cite{bajbar2015} and \cite{bajbar2019}. \begin{proposition} \label{prop:coercive} Let $g : \mathbb{R}^d \rightarrow \mathbb{R}$ be the polynomial function obtained from Algorithm \ref{algo:algorithm}. If $g$ is coercive, then $e^{-g} \in L^1(\mathbb{R}^d)$. \end{proposition} \begin{proof} Let $$ A_k := \text{Span}\left(e_i: \ i \in \lbrace \dim(E_1) + \cdots + \dim(E_{k-1}) +1, \ldots, \dim(E_1)+\cdots+\dim(E_k) \rbrace \right) $$ for $k \in \lbrace 1,\ldots, p \rbrace$. By construction of $g$, note that for all $t \in [0,+\infty)$, $$ g\left(\sum_{k=1}^p t^{1/2k}p_{_{A_k}}(h)\right) = tg(h) .$$ Since $g$ is coercive, there exists $R \ge 1$ such that for every $h$ with $||h|| \ge R$, $g(h) \ge 1$. Then, for every $h \in \mathbb{R}^d$, we have: \begin{align*} g(h) & = g\left( \sum_{k=1}^p p_{_{A_k}}(h) \right) = g\left( \sum_{k=1}^p \frac{||h||^{1/2k}}{R^{1/2k}}p_{_{A_k}}\left(R^{1/2k}\frac{h}{||h||^{1/2k}}\right) \right) \\ & = \frac{||h||}{R} g\left( \sum_{k=1}^p p_{_{A_k}}\left(R^{1/2k}\frac{h}{||h||^{1/2k}}\right) \right). \end{align*} Then, for $||h|| \ge R$, \begin{align*} \left|\left| \sum_{k=1}^p p_{_{A_k}}\left(R^{1/2k}\frac{h}{||h||^{1/2k}}\right) \right| \right|^2 = \sum_{k=1}^p \frac{R^{1/k}}{||h||^{1/k}} ||p_{_{A_k}}(h)||^2 \ge \frac{R}{||h||} ||h||^2 = R ||h|| \ge R^2 \ge R, \end{align*} so that $g(h) \ge \frac{||h||}{R}$ which in turn implies $e^{-g} \in L^1(\mathbb{R}^d)$. \end{proof} We now deal with the simplest configuration where the function $g$ is not coercive, as described in \eqref{eq:non_coercive_sum}, by dealing with the case where $f$ is given by \eqref{eq:non_coercive_f}, which is an archetype of such configuration. However, dealing with the general case is more complicated and to give a general formula for the rate of convergence of the measure $\pi_t$ in this case is not our current objective. \begin{proposition} \label{prop:x_y2_convergence} Let the function $f$ be given by \eqref{eq:non_coercive_f}. Then, if $(X_t,Y_t) \sim C_t e^{-f(x,y)/t} dx dy$, we have: \begin{equation*} \left( \frac{X_t}{t^{1/6}}, \frac{Y_t^2-X_t}{t^{1/2}} \right) \underset{t \rightarrow 0}{\longrightarrow} C\frac{e^{-x^6}}{\sqrt{x}} \frac{e^{-y^2}}{\sqrt{\pi}} \mathds{1}_{x \ge 0} dx dy , \end{equation*} where $C = \left( \int_0^\infty \frac{e^{-x^6}}{\sqrt{x}}dx \right)^{-1}$. \end{proposition} \begin{proof} First, let us consider the normalizing constant $C_t$. 
We have: \begin{align*} C_t^{-1} & = \int_{\mathbb{R}^2} e^{-\frac{(x-y^2)^2+x^6}{t}} dx dy = 2t^{3/4} \int_{-\infty}^{\infty} e^{-t^2 x^6} \int_0^\infty e^{-(y^2-x)^2}dy \ dx \\ & = t^{3/4} \int_{-\infty}^{\infty} e^{-t^2 x^6} \int_{-x}^\infty \frac{e^{-u^2}}{\sqrt{u+x}}du \ dx = t^{7/12} \int_{-\infty}^{\infty} e^{-x^6} \int_{-t^{-1/3}x}^{\infty} \frac{e^{-u^2}}{\sqrt{t^{1/3}u+x}} du \ dx \\ & \underset{t \rightarrow 0}{\sim} t^{7/12} \int_{0}^{\infty} \frac{e^{-x^6}}{\sqrt{x}} \int_{-\infty}^{\infty} e^{-u^2} du \ dx, \end{align*} where the convergence is obtained by dominated convergence and where we performed the change of variables $x' = t^{-1/6}x$ and $u = t^{-1/2}(y^2-x)$. Then we consider, for $a_1 < b_1$ and $a_2 < b_2$, $$ \mathbb{P}\left( \frac{X_t}{t^{1/6}} \in [a_1,b_1], \ \frac{Y_t^2-X_t}{t^{1/2}} \in [a_2,b_2] \right) .$$ Performing the same changes of variables and using the above equivalent of $C_t$ completes the proof. \end{proof} More generally, if the function $g$ is not coercive and if we can write, up to a change of basis, \begin{equation} \label{eq:non_coercive_sum} g(h_1,\ldots,h_d) = Q_1(h_1,h_2)^2 + Q_2(h_3,h_4)^2 + \cdots + Q_r(h_{2r-1},h_{2r})^2 + \widetilde{g}(h_{2r+1},\ldots,h_d) , \end{equation} where the $Q_i$ are polynomials in two variables vanishing on an unbounded curve (for example, $Q_i(x,y) = (x-y^2)$, $Q_i(x,y) = (x^2-y^3)$, $Q_i(x,y)=x^2y^2$), and where $\widetilde{g}$ is a non-negative coercive polynomial, then \begin{align*} & \left( a_1\left((X_t)_{1}, (X_t)_{2}, t \right), \ldots, a_r\left((X_t)_{2r-1}, (X_t)_{2r}, t \right), \left(\frac{1}{t^{\alpha_{2r+1}}},\ldots,\frac{1}{t^{\alpha_d}}\right) \ast \left(\widetilde{B} \cdot ((X_t)_{2r+1},\ldots,(X_t)_d)\right) \right) \\ & \underset{t \rightarrow 0}{\longrightarrow} b_1(x_1,x_2) \ldots b_r(x_{2r-1},x_{2r}) Ce^{-\widetilde{g}(x_{2r+1},\ldots,x_d)} dx_1 \ldots dx_{2r} dx_{2r+1}\cdots dx_d , \end{align*} where $C$ is a normalization constant, $\widetilde{B} \in \mathcal{O}_{d-2r}(\mathbb{R})$ is an orthogonal transformation and for all $k=1,\ldots,r$, $a_k : \mathbb{R}^2 \times (0,+\infty) \rightarrow \mathbb{R}^2$ and $b_k$ is a density on $\mathbb{R}^2$. Such $a_k$ and $b_k$ can be obtained by applying the same method as in Proposition \ref{prop:x_y2_convergence}. Algorithm \ref{algo:algorithm} yields the first change of variable for this method, given by the exponents $(\alpha_i)$ (in the proof of Proposition \ref{prop:x_y2_convergence}, the first change of variable is $t^{-1/2}x$ and $t^{-1/4}y$), and thus seems to be the first step of a more general procedure in this case. However, we do not give a general formula, as the general case is cumbersome. Moreover, we do not give a method for the case where the non-coercive polynomials $Q_i$ depend on more than two variables, as in $$ Q(x,y,z) = (x-y^2)^2 + (x-z^2)^2 .$$ The method sketched in Proposition \ref{prop:x_y2_convergence} cannot be directly applied to this case. \section{Proofs of Theorem \ref{theorem:single_well} and Theorem \ref{theorem:multiple_well} using Theorem \ref{theorem:main}} \label{section:proofs_athreya} \subsection{Single well case} \label{subsec:single_well_proof} We now prove Theorem \ref{theorem:single_well}. \begin{proof} Using Theorem \ref{theorem:main}, we have for all $h \in \mathbb{R}^d$: $$ \frac{1}{t} f(B \cdot (t^\alpha \ast h)) \underset{t \rightarrow 0}{\longrightarrow} g(h) .$$ To simplify the notation, assume that no change of basis is needed, i.e., $B=I_d$.
We want to apply Theorem \ref{theorem:athreya:1} to the function $f$. However the condition $$ \int_{\mathbb{R}^d} \sup_{0<t<1} e^{-\frac{f \left(t^{\alpha_1}h_1,\ldots,t^{\alpha_d}h_d\right)}{t}} dh_1\ldots dh_d < \infty $$ is not necessarily true. Instead, let $\varepsilon > 0$ and we apply Theorem \ref{theorem:athreya:1} to $\widetilde{f}$, where $\widetilde{f}$ is defined as: $$ \widetilde{f}(h) = \left\lbrace \begin{array}{ll} f(h) & \text{ if } h \in \mathcal{B}(0,\delta) \\ ||h||^2 & \text{ else} , \end{array} \right. $$ and where $\delta > 0$ will be fixed later. Then $\widetilde{f}$ satisfies the hypotheses of Theorem \ref{theorem:athreya:1}. The only difficult point to prove is the last condition of Theorem \ref{theorem:athreya:1}. If $t \in (0,1]$ and $h \in \mathbb{R}^d$ are such that $(t^{\alpha_1}h_1,\ldots,t^{\alpha_d}h_d) \notin \mathcal{B}(0,\delta)$, then $$ \frac{\widetilde{f} (t^{\alpha_1}h_1,\ldots,t^{\alpha_d}h_d)}{t} = \frac{||(t^{\alpha_1}h_1,\ldots,t^{\alpha_d}h_d) ||^2}{t} \ge ||h||^2, $$ because for all $i$, $\alpha_i \le \frac{1}{2}$. If $t$ and $h$ are such that $(t^{\alpha_1}h_1,\ldots,t^{\alpha_d}h_d) \in \mathcal{B}(0,\delta)$, then choosing $\delta$ such that for all $(t^{\alpha_1}h_1,\ldots,t^{\alpha_d}h_d) \in \mathcal{B}(0,\delta)$, $$ \left| \frac{f (t^{\alpha_1}h_1,\ldots,t^{\alpha_d}h_d)}{t} - g(h) \right| \le \varepsilon, $$ which is possible because of the uniform convergence on every compact set (see Section \ref{subsec:unif_non_constant}), we derive that $$ \frac{f (t^{\alpha_1}h_1,\ldots,t^{\alpha_d}h_d)}{t} \ge g(h) - \varepsilon .$$ Hence $$ \int_{\mathbb{R}^d} \sup_{0<t<1} e^{-\frac{\widetilde{f} \left(t^{\alpha_1}h_1,\ldots,t^{\alpha_d}h_d\right)}{t}} dh_1\ldots dh_d \le \int_{\mathbb{R}^d} e^{-||h||^2} dh + e^{\varepsilon} \int_{\mathbb{R}^d} e^{-g(h)} dh .$$ Since $g$ is coercive, using Proposition \ref{prop:coercive} we have $e^{-g} \in L^1(\mathbb{R}^d)$ and it follows from Theorem \ref{theorem:athreya:1} that if $\widetilde{X}_t$ has density $\widetilde{\pi}_t(x) := \widetilde{C}_t e^{-\widetilde{f}(x)/t}$, then $$ \left( \frac{(\widetilde{X}_t)_1}{t^{\alpha_1}}, \ldots, \frac{(\widetilde{X}_t)_d}{t^{\alpha_d}} \right) \overset{\mathscr{L}}{\longrightarrow} X \ \text{ as } t \rightarrow 0 ,$$ where $X$ has density proportional to $e^{-g(x)}$. Now, let us prove that if $X_t$ has density proportional to $e^{-f(x)/t}$, then we also have \begin{equation} \label{equation:Y_law_convergence} \left( \frac{(X_t)_1}{t^{\alpha_1}}, \ldots, \frac{(X_t)_d}{t^{\alpha_d}} \right) \overset{\mathscr{L}}{\longrightarrow} X \ \text{ as } t \rightarrow 0 . \end{equation} Let $\varphi : \mathbb{R}^d \rightarrow \mathbb{R}$ be continuous with compact support. Then \begin{align*} & \mathbb{E}\left[ \varphi\left( \frac{(X_t)_1}{t^{\alpha_1}}, \ldots, \frac{(X_t)_d}{t^{\alpha_d}} \right) - \varphi\left( \frac{(\widetilde{X}_t)_1}{t^{\alpha_1}}, \ldots, \frac{(\widetilde{X}_t)_d}{t^{\alpha_d}} \right) \right] \\ & = \int_{\mathbb{R}^d} \varphi \left(\frac{x_1}{t^{\alpha_1}},\ldots,\frac{x_d}{t^{\alpha_d}}\right) \left(C_t e^{-\frac{f(x_1,\ldots,x_d)}{t}} - \widetilde{C}_t e^{-\frac{\widetilde{f}(x_1,\ldots,x_d)}{t}} \right) dx_1\ldots dx_d =: I_1 + I_2, \end{align*} where $I_1$ is the integral on the set $\mathcal{B}(0,\delta)$ and $I_2$ on $\mathcal{B}(0,\delta)^c$. 
We then have: $$ |I_2| \le || \varphi ||_\infty ( \pi_t(\mathcal{B}(0,\delta)^c) + \widetilde{\pi}_t(\mathcal{B}(0,\delta)^c) ) \underset{t \rightarrow 0}{\longrightarrow} 0 ,$$ where we used Proposition \ref{proposition:gibbs}. On the other hand, we have $f=\widetilde{f}$ on $\mathcal{B}(0,\delta)$, so that $$ |I_1| \le ||\varphi||_\infty |C_t - \widetilde{C}_t| \int_{\mathcal{B}(0,\delta)} e^{-\frac{f(x)}{t}}dx \le ||\varphi||_\infty \left|1 - \frac{\widetilde{C}_t}{C_t}\right| .$$ And we have: $$ \frac{\widetilde{C}_t}{C_t} = \frac{\int e^{-\frac{f(x)}{t}}dx}{\int e^{-\frac{\widetilde{f}(x)}{t}}dx} = \frac{\int_{\mathcal{B}(0,\delta)} e^{-\frac{f(x)}{t}}dx + \int_{\mathcal{B}(0,\delta)^c} e^{-\frac{f(x)}{t}}dx}{\int_{\mathcal{B}(0,\delta)} e^{-\frac{f(x)}{t}}dx + \int_{\mathcal{B}(0,\delta)^c} e^{-\frac{\widetilde{f}(x)}{t}}dx} .$$ By Proposition \ref{proposition:gibbs}, we have when $t \rightarrow 0$ \begin{align*} \int_{\mathcal{B}(0,\delta)^c} e^{-\frac{\widetilde{f}(x)}{t}}dx & = o\left( \int_{\mathcal{B}(0,\delta)} e^{-\frac{\widetilde{f}(x)}{t}}dx \right) \\ \int_{\mathcal{B}(0,\delta)^c} e^{-\frac{f(x)}{t}}dx & = o\left( \int_{\mathcal{B}(0,\delta)} e^{-\frac{f(x)}{t}}dx \right), \end{align*} so that $\widetilde{C}_t/C_t \rightarrow 1$, so $I_1 \rightarrow 0$, which then implies \eqref{equation:Y_law_convergence}. \end{proof} \subsection{Multiple well case} We now prove Theorem \ref{theorem:multiple_well}. \begin{proof} The first point is a direct application of Theorem \ref{theorem:athreya:2}. For the second point, we remark that $X_{it}$ has a density proportional to $e^{-f_i(x)/t}$, where $$ f_i(x) := \left\lbrace\begin{array}{l} f(x) \text{ if } x \in \mathcal{B}(x_i^\star, \delta) \\ + \infty \text{ else}. \end{array} \right. $$ We then consider $\widetilde{f}_i$ as in Section \ref{subsec:single_well_proof}: $$ \widetilde{f}_i(x) = \left\lbrace \begin{array}{ll} f_i(x) & \text{ if } x \in \mathcal{B}(x_i^\star,\delta) \\ ||x - x_i^\star||^2 & \text{ else} , \end{array} \right. $$ and, still as in Section \ref{subsec:single_well_proof}, we apply Theorem \ref{theorem:athreya:1} to $\widetilde{f}_i$ and then prove that random variables with densities proportional to $e^{-\widetilde{f}_i(x)/t}$ and $e^{-f_i(x)/t}$ respectively have the same limit in law. \end{proof} \section{Infinitely flat minimum} \label{section:flat} In this section, we deal with an example of an infinitely flat global minimum, where we cannot use a Taylor expansion. \begin{proposition} Let $f : \mathbb{R}^d \rightarrow \mathbb{R}$ be such that $$ \forall x \in \mathcal{B}(0,1), \ f(x) = e^{-\frac{1}{||x||^2}} $$ and $$ \forall x \notin \mathcal{B}(0,1), \ f(x) > a $$ for some $a > 0$. Furthermore, assume that $f$ is coercive and $e^{-f} \in L^1(\mathbb{R}^d)$. Then, if $X_t$ has density $\pi_t$, $$ \log^{1/2}\left(\frac{1}{t}\right) \cdot X_t \overset{\mathscr{L}}{\longrightarrow} X \ \text{ as } t \rightarrow 0 ,$$ where $X \sim \mathcal{U}(\mathcal{B}(0,1))$. \end{proposition} \begin{proof} Noting that $\int_{||x|| >1}e^{-f(x)/t}dx \rightarrow 0$ as $t \rightarrow 0$ by dominated convergence, we have $$ C_t \underset{t \rightarrow 0}{\sim} \left(\int_{\mathcal{B}(0,1)} e^{-e^{-\frac{1}{||x||^2}}/t} dx \right)^{-1} = \log^{d/2}\left(\frac{1}{t}\right) \left( \underbrace{\int_{\mathcal{B}(0,\sqrt{\log(1/t)})} e^{-t^{\frac{1}{||x||^2}-1}} dx}_{\underset{t \rightarrow 0}{\rightarrow} \text{Vol}(\mathcal{B}(0,1)) } \right)^{-1}, $$ where the convergence of the integral is obtained by dominated convergence.
Then we have, for $-1<a_i<b_i<1$ and $\sum_i a_i^2 < 1$, $\sum_i b_i^2 < 1$: \begin{align*} \mathbb{P}\left(\log^{1/2}\left(\frac{1}{t}\right) \cdot X_t \in \prod_{i=1}^d [a_i,b_i] \right) = \frac{C_t}{\log^{d/2}\left(\frac{1}{t}\right)}\int_{(a_i)}^{(b_i)} e^{-t^{\frac{1}{|x|^2}-1}} dx \underset{t \rightarrow 0}{\longrightarrow} \frac{\prod_{i=1}^d (b_i-a_i)}{\text{Vol}(\mathcal{B}(0,1))}. \end{align*} \end{proof} \section{Properties of tensors} \begin{proposition} \label{proposition:null_tensor:1} Let $T_k$ be a symmetric tensor of order $k$ in $\mathbb{R}^d$. Let $E$ be a subspace of $\mathbb{R}^d$. Assume that $$ \forall h \in E, \ T_k \cdot h^{\otimes k} = 0 .$$ Then we have $$ \forall h_1,\ldots,h_k \in E, \ T_k \cdot h_1 \otimes \cdots \otimes h_k = 0 .$$ \end{proposition} \begin{proof} Using \eqref{equation:multinomial}, we have for $h_1, \ \ldots, \ h_k \in E$ and $\lambda_1, \ \ldots, \ \lambda_k \in \mathbb{R}$, $$ T_k \cdot (\lambda_1 h_1+\cdots+\lambda_k h_k)^{\otimes k} = \sum_{i_1+\cdots+i_k=k} \binom{k}{i_1,\ldots,i_k} \lambda_1^{i_1}\ldots\lambda_k^{i_k} T_k \cdot h_1^{\otimes i_1} \otimes \cdots \otimes h_k^{\otimes i_k} = 0 ,$$ which is an identically null polynomial in the variables $\lambda_1, \ \ldots, \ \lambda_k$, so every coefficient is null, in particular $$ \forall h_1,\ldots,h_k \in E, \ T_k \cdot h_1 \otimes \cdots \otimes h_k = 0 .$$ \end{proof} \end{document}
\begin{document} \title{Coherent transport over an explosive percolation lattice} \author{\.{I}. Yal\c{c}{\i}nkaya and Z. Gedik} \address{Faculty of Engineering and Natural Sciences, Sabanc{\i} University, Tuzla, 34956, \.{I}stanbul, Turkey} \ead{[email protected]} \begin{indented} \item[]January 2016 \end{indented} \begin{abstract} We investigate coherent transport over a finite square lattice in which the growth of bond percolation clusters is subjected to an Achlioptas type selection process, i.e., whether a bond will be placed or not depends on the sizes of the clusters it may potentially connect. Unlike standard percolation, where the growth of discrete clusters is completely random, clusters in this case grow in correlation with one another. We show that certain values of the correlation strength, if chosen in a way to suppress the growth of the largest cluster (which actually results in an explosive growth later on), may lead to more efficient transport than in the case of standard percolation, provided that a certain fraction of the total possible bonds is present in the lattice. In this case, the transport efficiency increases as a power function of the bond fraction in the vicinity of the point where effective transport begins. It turns out that higher correlation strengths may also reduce the efficiency. We also compare our results with those of incoherent transport and examine the average spreading of eigenstates for different bond fractions. In this way, we demonstrate that structural differences of discrete clusters due to different correlations result in different localization properties. \end{abstract} \pacs{03.67.-a, 05.60.Gg} \noindent{\it Keywords}: coherent transport, explosive percolation, localization, correlated disorder \section{\label{sec:int}Introduction} Coherent transport over complex networks has been a topic of much interest in recent years \cite{mulken2011}. Such processes are often related to the dynamics of excitations over networks modelled by quantum walks and studied for both variants, namely the discrete- \cite{ambainis2001,konno,bach,yamasaki,kwek,chandrashekar2014quantum,asboth,gonulol} and the continuous-time \cite{mulken2005slow,mulken2005spacetime,mulken2006efficiency,mulken2006coherent,krapivsky,mulken2007survival,blumen,volta} quantum walks. Although the original proposals \cite{aharonov,farhi} of both models were mainly aimed at outperforming their classical counterparts in terms of spreading rates, it has since been shown that quantum walks are also useful tools for developing new quantum algorithms \cite{ambainis}, quantum simulations \cite{schreiber,crespi,ghosh,kitagawa,genske} and universal quantum computation \cite{childs,lovett}. In the context of coherent transport, they provide simple models describing quite significant physical phenomena such as the excitonic energy transfer through photosynthetic light-harvesting complexes \cite{mohseni} or the breakdown of an electron system driven by strong electric fields \cite{oka}. It is possible to introduce disorder into a network using the standard bond percolation model, in which the bonds between sites are either present or missing with some probability $p$ \cite{grimmett,saberi}. A group of connected sites is called a cluster and its size is defined as the total number of these sites. For an infinite network, if $p$ is smaller than some critical value $p_c$ (the percolation threshold), there exist only small clusters which remain small even if we increase the network size.
However, just after $p=p_c$, small clusters merge to form a large cluster comparable to the network in size, namely the wrapping cluster, which covers the whole network. This kind of disorder constitutes a source of decoherence for quantum walks and appears in two variants: static and dynamic percolation. In the former, the network configuration does not change during propagation, whereas in the latter, connections between sites alter in time. Both variants have been studied extensively in the context of transport and spreading properties for discrete- and continuous-time quantum walks \cite{romanelli,annabestani,leung,schijven,kollar,mulken2007quantum,anishchenko,darazs2013,darazs2014,elster,stefanak}. Achlioptas \etal proposed a model for network construction by slightly modifying standard percolation, which leads to a very fast growth of the wrapping cluster \cite{achlioptas}. According to this model, two bond candidates are randomly selected each time a new bond is intended to be added. Then, the one minimizing the product of the cluster sizes it merges is placed as a new bond and the other one is omitted. This simple rule suppresses the formation of the wrapping cluster but eventually results in its abrupt (or so-called \textit{explosive}) growth after $p=p_c$. In contrast with standard percolation, the network configuration for a given $p$ depends on the entire previous occupation history. In this sense, the disorder due to explosive percolation cannot be considered a fully random process but rather a correlated one. It is well known that the efficiency of coherent transport can be increased by exploiting environmental effects \cite{mohseni,viciani,novo,biggerstaff,chandrashekar2014noise}. One method of achieving this goal, for example, utilizes the interplay between the coherent dynamics and the disorder due to the topology of the network where transport takes place \cite{stefanak,asboth}. In this article, we use a similar but distinct approach, in which the structure of the clusters contributing to the transport is affected by the cluster correlations during the growth of the network. We look for correlation strengths that yield more efficient transport than coherent dynamics alone. With this motivation, we examine the transport of an excitation along a certain direction over a square lattice where the bond configuration is determined statically by explosive percolation. We therefore introduce the sites on the left edge of the lattice as sources and the ones on the right as sinks, where an excitation is created and absorbed, respectively \cite{anishchenko}. In this way, we monitor the survival probability over the lattice in the long-time limit to determine the transport efficiency, starting from an initial state localized on the source sites. In modelling the transport, we use the continuous-time quantum walk, which is also closely related to tight-binding models in solid state physics. We compare the transport efficiencies with increasing bond fraction for the standard and explosive percolation models to find out whether either model outperforms the other. This article is organized as follows. In \sref{sec:met}, we review coherent and incoherent transport over dissipative lattices, along with a description of the percolation models to be used. In \sref{ssec:treff}, we compare the efficiencies of the transport models in the cases of standard and explosive percolation.
In \sref{ssec:boswc}, we investigate the spreading of eigenstates. In conclusion, we summarize our results. \section{\label{sec:met}Methods} \subsection{\label{ssec:cohtr}The coherent and incoherent transport} We consider a square lattice of $N$ sites with linear size $L=\sqrt{N}$ as the environment where the transport process takes place. The sites are labeled by the positive integers $\mathcal{N} = \{1,2,\cdots,N\}$. The information about the existence of bonds between the sites is held by the Laplacian matrix $L$ (not to be confused with the linear size), whose off-diagonal elements $L_{ij}$ are $-1$ if site $i$ and site $j$ are connected and zero otherwise. The diagonal elements $L_{ii}$ hold the total number of bonds that belong to site $i$. Thus, $L$ is a positive semi-definite matrix, i.e., its eigenvalues are non-negative. An excitation localized on any site $i$ is interpreted as being in the state $\ket{i}$ and these states form an orthonormal and complete set over all sites, i.e., $\braket{k}{j}=\delta_{kj}$ and $\sum_{i\in\mathcal{N}}\outerp{i}{i}=I$. A coherent (incoherent) transport is modelled by a continuous-time quantum (random) walk which is described by the Hamiltonian (transfer matrix) $H_0=L$ ($T_0=-L$) \cite{mulken2011,farhi}. Here, we assume that transition rates are identical and equal to $1$ for all sites. The transition probability from the initial state $\ket{\psi_j}$ at $t=0$ to the state $\ket{\psi_k}$ is $\pi_{kj}(t)=|\bra{\psi_k} \exp{(-iH_0 t)} \ket{\psi_j}|^2$ for the coherent and $p_{kj}(t)=\bra{\psi_k}\exp(T_0t)\ket{\psi_j}$ for the incoherent transport, where we assumed $\hbar=1$. Once an excitation crosses the lattice from one side to the other, we say that it has been transported over the lattice. In order to keep track of this process, we define the sites at the left (right) edge of the lattice as the sources (sinks) as in \fref{fig:perclat} \cite{anishchenko}. We will denote the set of all source and sink sites by $\mathcal{S}$ and $\cal{S}'$, respectively. Sources are the only sites where an excitation can initially be localized and the sinks are abstract representations of absorption or trapping processes. Thus, a `leak' taking place on the right edge implies that an excitation, originally localized on the left edge, has been transported along the lattice. This process can be introduced by a projection operator $\Gamma=\sum_{k\in\cal{S}'}\outerp{k}{k}$ perturbing the Hamiltonian $H=H_0-i\Gamma$ or the transfer matrix $T=T_0-\Gamma$, where we choose the leaking rates to be the same and equal to $1$ for all sink sites. In the limit $t\rightarrow\infty$ and for the initial state $\ket{\psi_j}$, the total probabilities of finding the excitation on the lattice, namely the survival probabilities for the coherent and incoherent transports, are given by (see Appendix) \begin{equation} \Pi_j=\sum_l|\braket{\psi_j}{\Phi_l^R}|^2,\hspace{4mm} P_j=\sum_{k\in\mathcal{N}}\sum_l\braket{k}{\phi_l^0}\braket{\phi_l^0}{\psi_j}, \label{eq:limsurprob} \end{equation} \noindent where $\ket{\Phi_l^R}$ and $\ket{\phi_l^0}$ are the eigenstates of $H$ and $T$ with real and zero eigenvalues, respectively. The initial state $\ket{\psi_j}$ may involve sites only from the set $\mathcal{S}$. In this article, we will denote the complement of the survival probability by $\mu$, which can equally be interpreted as the transport efficiency \cite{stefanak} or the percolation probability \cite{chandrashekar2014quantum}. By calculating $\mu$, we will monitor how much information may escape from the lattice.
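For illustration, the quantities in \eref{eq:limsurprob} can be evaluated by diagonalizing $H=H_0-i\Gamma$ and $T=T_0-\Gamma$ for a given lattice realization. The following Python/NumPy sketch is not part of the original computation; the function name, the tolerance and the handling of the initial states are illustrative assumptions only, and possible degeneracies of the non-Hermitian eigenproblem may require extra care.
\begin{verbatim}
import numpy as np

def transport_efficiencies(Lap, sources, sinks, tol=1e-12):
    """Sketch: limiting survival probabilities Pi and P, and the
    efficiencies mu = 1 - survival, for one lattice realization.

    Lap     : (N, N) graph Laplacian of the lattice realization
    sources : indices of the source sites (left edge)
    sinks   : indices of the sink sites (right edge)
    """
    N = Lap.shape[0]
    Lside = len(sources)                      # linear size L of the lattice
    Gamma = np.zeros((N, N))
    Gamma[sinks, sinks] = 1.0                 # trapping projector on the sinks

    # Coherent walk: H = H0 - i*Gamma; keep eigenvectors whose eigenvalues
    # are (numerically) real.  Such modes vanish on the sink sites, so the
    # unit normalization returned by np.linalg.eig is sufficient here.
    psi_c = np.zeros(N); psi_c[sources] = Lside ** -0.5
    evals, evecs = np.linalg.eig(Lap - 1j * Gamma)
    real_modes = np.abs(evals.imag) < tol
    Pi = np.sum(np.abs(evecs[:, real_modes].conj().T @ psi_c) ** 2)

    # Classical walk: T = T0 - Gamma is real symmetric; keep the zero modes.
    psi_i = np.zeros(N); psi_i[sources] = 1.0 / Lside
    tvals, tvecs = np.linalg.eigh(-Lap - Gamma)
    zero_modes = np.abs(tvals) < tol
    P = np.sum(tvecs[:, zero_modes].sum(axis=0) * (tvecs[:, zero_modes].T @ psi_i))

    return 1.0 - Pi, 1.0 - P                  # coherent, incoherent efficiency
\end{verbatim}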
\begin{figure} \caption{An example of a bond percolation lattice for $L=4$. The left (right) edge contains the source (sink) sites where an excitation is created (absorbed). Dashed lines represent two randomly selected bond candidates labeled by $1$ and $2$ with corresponding weights $w_1=4\times 5=20$ and $w_2=2\times 3=6$. According to the best-of-two rule, bond $2$ will be selected and bond $1$ discarded since $w_2 < w_1$. A wrapping cluster is defined to be the one connecting the left edge to the right edge. For this example, it lies along the bottom edge of the lattice and occupies two source sites. The set $\mathcal{R}$ denotes the source sites that belong to the wrapping cluster.} \label{fig:perclat} \end{figure} \subsection{\label{ssec:expperc}The explosive percolation} Standard bond percolation is implemented on a square lattice by first removing all bonds between the sites and then randomly adding them one after another. This process results in the random growth of discrete clusters. For infinite lattices, the opposite borders get connected to each other through one large wrapping cluster after reaching a critical fraction of bonds $p_c=0.5$ \cite{essam} called the percolation threshold. Here, the bond fraction $p$ is defined as the ratio of the number of bonds $n$ present in the lattice to the total number of possible bonds, $p=n\,[2L(L-1)]^{-1}$. In explosive percolation, a similar implementation procedure is followed with a slight modification. Now, in order to add a bond, $m$ random bond candidates are chosen and a weight is assigned to each of them, equal to the product of the sizes of the clusters they may potentially merge. Then, the bond with the smallest weight is occupied and the others are discarded (see \fref{fig:perclat}). In case a bond connects two sites within the same cluster, the corresponding weight becomes the square of the cluster size. We will see later that this complementary rule has drastic effects on the results we obtain. This selection rule, called the \textit{best-of-$m$ product rule} \cite{andrade,costa}, systematically suppresses the merging of small discrete clusters and consequently avoids the formation of a giant cluster up to some percolation threshold $p_c$ dependent on $m$ \cite{radicchi2010explosive,ziff2010scaling}. Once the threshold is exceeded, finite discrete clusters start joining each other much faster than in standard percolation ($m=1$), and this finally results in explosive behavior in the growth of the largest cluster. In particular, $m=2$ corresponds to the Achlioptas \etal model \cite{achlioptas}. For $m>1$, discrete clusters cannot grow in a completely random manner as in the standard percolation case. The shape and size of a given cluster become somewhat correlated with those of other clusters during the growth process. In this context, we interpret $m$ as the correlation strength since it specifies the number of candidate bonds, and hence of clusters, taken into account when deciding where to place a new bond. We examine the behavior of transport processes on a square lattice formed for different $m$ values. Lastly, we note that a transport process can only take place after a wrapping cluster is formed. In infinite lattices, this happens just after $p=p_c$. In our case of finite lattices, however, there is still a chance of having no wrapping cluster when $p>p_c$ and, conversely, of having one for $p<p_c$. Consequently, the average efficiency of the transport inevitably gets affected by these finite-size effects.
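As an illustration of the growth rule described above, the following Python sketch places a single bond according to the best-of-$m$ product rule; the function names and the union-find bookkeeping are our own illustrative assumptions, not the authors' implementation. A full realization is generated by initializing each site as its own cluster (\texttt{parent[i]=i}, \texttt{size[i]=1}) and calling the routine until the desired bond fraction is reached.
\begin{verbatim}
import random

def find(parent, i):
    # Path-halving find for the union-find structure tracking clusters.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def add_bond_best_of_m(free_bonds, parent, size, m):
    """Place one bond according to the best-of-m product rule (sketch).

    free_bonds   : list of unoccupied bonds, each a pair (i, j) of site indices
    parent, size : union-find arrays holding the current clusters
    """
    candidates = random.sample(free_bonds, min(m, len(free_bonds)))
    weights = []
    for (i, j) in candidates:
        ri, rj = find(parent, i), find(parent, j)
        # Intra-cluster bond: the weight is the square of the cluster size.
        w = size[ri] ** 2 if ri == rj else size[ri] * size[rj]
        weights.append(w)
    i, j = candidates[weights.index(min(weights))]   # smallest weight wins
    free_bonds.remove((i, j))
    ri, rj = find(parent, i), find(parent, j)
    if ri != rj:                                     # merge the two clusters
        parent[rj] = ri
        size[ri] += size[rj]
    return (i, j)
\end{verbatim}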
\section{\label{sec:numres}Numerical results} We numerically determined all eigenvalues and corresponding eigenvectors of $H$ and $T$ for each lattice realization to calculate $\Pi_j$ and $P_j$ in \eref{eq:limsurprob}. It is clear that these quantities depend sensitively on the total number of real and zero eigenvalues. For this reason, we carefully compared the numerical values with the exact ones for different lattice sizes and concluded that $L=7$ is the optimal size for our calculations: there is a one-to-one correspondence between the exact and numerical results, provided that numerical values smaller than $1.0\times 10^{-14}$ are set to zero. In order to obtain statistically meaningful quantities for the above-mentioned static percolation models, we averaged our results over $4\times 10^3$ lattice realizations for each $p$. From now on, $\langle\cdots\rangle$ will be used to indicate the ensemble-averaged quantities we have obtained. In the figures, the standard deviation of the mean of each data point becomes smaller than the thickness of the data line once the sample size exceeds 10. For this reason, we do not include error bars, to maintain clarity. \subsection{\label{ssec:treff}Transport efficiency} \begin{figure*} \caption{(a) The average coherent transport efficiency $\langle \mu_c^m \rangle = 1-\langle \Pi_\mathcal{S}^{m} \rangle$ as a function of the bond fraction $p$ for various correlation strengths $m$ (inset: the differences $\Delta\langle\mu_c^{m}\rangle$). (b) The average size of the largest cluster $\langle \zeta^m \rangle$ (inset: the average bond fractions $\langle p_w^m \rangle$ at which a wrapping cluster forms). (c) The power-law behavior of $\langle \mu_c^m \rangle$ in the vicinity of $p_a^m$. (d) Comparison of the coherent and incoherent transport efficiencies (inset: their differences).} \label{fig:transeff} \end{figure*} \begin{table} \caption{\label{tab:exceeding}Numerical values of some important parameters related to $m$.} \begin{indented} \item[]\begin{tabular}{cccccc} \br $~~m~~$&$~~p_a^m~~$&$~p_b^m~$&$\langle\mu_{c}(p_b^m)\rangle$ & $~k^m~$ & $~\langle p_w^m \rangle~$\\ \mr 1 & 0.33 & \textrm{n/a} & \textrm{n/a} & 9.0 & 0.49\\ 2 & 0.44 & 0.58 & 0.61 & 16 & 0.54\\ 4 & 0.50 & 0.61 & 0.69 & 25 & 0.57\\ 8 & 0.51 & 0.69 & 0.87 & 28 & 0.59\\ 16 & 0.51 & 0.76 & 0.94 & \textrm{n/a} & 0.62\\ 32 & 0.51 & 0.80 & 0.96 & \textrm{n/a} & 0.63\\ 84 & 0.51 & 0.81 & 0.97 & \textrm{n/a} & 0.64\\ \br \end{tabular} \end{indented} \end{table} \normalsize We choose an initial state which is equiprobably distributed over the source sites as $\ket{\psi_j}\equiv\ket{\Psi_\mathcal{S}}=\kappa\sum_{i\in\mathcal{S}}\ket{i}$ where $\kappa=L^{-1/2}$ for the coherent transport and $\kappa=L^{-1}$ for the incoherent transport. This state represents our lack of knowledge about the exact position of the excitation at $t=0$. In the limit $t\rightarrow\infty$, we define the average transport efficiencies for a given bond fraction and correlation strength $m$ as \begin{equation} \langle \mu_c^m \rangle \equiv 1-\langle \Pi_\mathcal{S}^{m} \rangle,\hspace{4mm} \langle \mu_i^m \rangle \equiv 1-\langle P_\mathcal{S}^{m} \rangle, \end{equation} for coherent and incoherent transports, respectively. These quantities are the average probabilities for the excitation to be absorbed in the limit $t\rightarrow \infty$. We will use the superscript $m$ for labeling purposes throughout the rest of this article. Let us also define $p_a^m$ here as the minimum bond fraction which satisfies $\langle \mu_{c}^m\rangle \geqslant 0.01$ \cite{chandrashekar2014quantum}, in order to determine the effective starting point of the coherent transport. In \fref{fig:transeff}(a), $\langle \mu_c^m \rangle$ is plotted as a function of $p$. The differences $\Delta\langle\mu_c^{m}\rangle \equiv \langle \mu_c^{m>1} \rangle - \langle\mu_c^{1}\rangle$ between the efficiencies of the cases with larger $m$ and the efficiency of the case $m=1$ are given in its inset.
We see that when $m>1$, we obtain partially higher efficiencies than in the $m=1$ case once $p$ exceeds certain values denoted by $p_b^m$. Additionally, $p_a^m$ increases with $m$ and becomes fixed for $m\geqslant 8$ within the accuracy given in \tref{tab:exceeding}. The case $m=2$ starts to overcome $m=1$ at $p_b^2=0.58$ and the maximum peak occurs at $p=0.64$ where $\Delta\langle \mu_c^m\rangle\approx 0.1$. This is in fact the extreme case: a pairwise correlation between discrete clusters contributes the most to the transport efficiency, unlike the remaining cases. For values of $m$ beyond $2$, the positive peak of $\Delta\langle\mu_c^m\rangle$ decreases while its negative peak grows, and $p_b^m$ shifts towards higher bond fractions. When $m=84$, there is almost no contribution to the transport efficiency: $\langle\mu_c^{84}\rangle$ can barely exceed $\langle\mu_c^1\rangle$ beyond $p_b^{84}=0.81$. It is therefore evident that higher correlation strengths increasingly inhibit the transport process. In \fref{fig:transeff}(b), the size of the largest cluster $\langle \zeta^m \rangle$ is given as a function of $p$. The inset shows the average bond fractions $\langle p_w^m \rangle$ where a wrapping cluster is formed. When $p<0.5$, although $\langle\zeta^m\rangle$ decreases with increasing $m$, it tends to remain almost the same for $m\gtrsim 8$. This result exhibits a correlation with the behavior of $p_a^m$ in \fref{fig:transeff}(a) despite the saturating increase in $\langle p_w^m \rangle$: the size of the largest cluster may have a direct effect on the bond fraction where coherent transport effectively starts. When $p>0.5$, $\langle \zeta^m \rangle$ decreases for higher $m$ values, which is very similar to the behavior of $\langle \mu_c^m \rangle$. Also, the $p_b^m$ appear to be very close to the bond fractions where $\langle \zeta^{m>1} \rangle \approx \langle \zeta^1 \rangle$. Therefore, the comparison of \fref{fig:transeff}(a) and (b) strongly suggests that, independently of $m$, the transport efficiency is determined by the size of the largest cluster at any $p$. Our choice of $m=84$ as an upper limit of the correlation strength for this article is intentional. There can be at most $84$ bonds in the $L=7$ lattice, and hence all discrete clusters most probably become correlated with each other as the lattice fills with bonds. We have repeated our calculations for $m>84$ and obtained quite similar results to the $m=84$ case. We also note that $\left < p_w^m \right >$ is approximately saturated at $m=84$. Therefore, it can be considered an upper limit, and the transport properties do not change beyond it. When we again look at \fref{fig:transeff}(a), we see that the behavior of each $\langle\mu_c^m\rangle$ can be examined in three successive regions: (i) an initial growth with increasing rate in the vicinity of $p_a^m$, (ii) an approximately linear behavior, and (iii) saturation. In region (i), the transport efficiency fits a power function $\langle\mu_c^m\rangle \sim p^k$ as shown in \fref{fig:transeff}(c). The exponents $k$ are also listed in \tref{tab:exceeding} for different $m$ values. We see that the exponent $k^2$ for the Achlioptas \etal model is approximately twice as large as $k^1$, the exponent for the standard percolation model. The exponent $k^m$ keeps increasing with $m$, albeit at a decreasing rate.
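As a concrete illustration, such exponents can be extracted by a least-squares fit in log-log scale over region (i); the short Python sketch below is only indicative, and the choice of the fitting window is an assumption on our part.
\begin{verbatim}
import numpy as np

def fit_power_law(p, mu, p_lo, p_hi):
    """Fit mu ~ C * p**k on the window [p_lo, p_hi] (region (i), sketch).

    Returns the exponent k and the prefactor C from a linear fit of
    log(mu) against log(p)."""
    p, mu = np.asarray(p), np.asarray(mu)
    mask = (p >= p_lo) & (p <= p_hi) & (mu > 0)
    k, logC = np.polyfit(np.log(p[mask]), np.log(mu[mask]), 1)
    return k, np.exp(logC)
\end{verbatim}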
After $m \approx 8$, we find that the power-law behavior starts disappearing since the linear behavior in region (ii) gradually dominates the behavior in region (i), as can be seen in \fref{fig:transeff}(a). The reason for this is the suppressed growth of the largest cluster even just after $p_a^m$. In order to understand the underlying mechanism here, let us note that when there exist many discrete clusters in the lattice with sizes greater than one, adding new bonds that join two discrete clusters is generally favored over adding bonds that would join the sites of a single cluster. For example, suppose that a U-shaped cluster with $4$ connected sites exists in the lattice. Let us choose two bond candidates, one of which converts this U-shaped cluster into a unit square with weight $4\times4=16$, while the other connects two discrete clusters of sizes $3$ and $4$ with weight $12$. Obviously, the one with weight $12$ will be occupied even though the total size of the cluster it forms will be greater than that of the unit square (see \Sref{ssec:expperc}). Therefore, the rules we have defined for the growth of the lattice favor the merging of discrete clusters into larger ones rather than simply `feeding' the existing discrete clusters. This fact, of course, leads to the abrupt growth of the largest cluster seen in \fref{fig:transeff}(b) for $m=2$ and $m=4$. Thus, the efficiency increases accordingly, since large clusters have a greater chance of touching both opposite sides of the lattice. However, as discrete clusters become more correlated with each other, they are forced to grow in a more specific way from the very beginning, i.e., they grow as homogeneously as possible in size across the lattice as we add the bonds one by one, which eventually suppresses the explosive behavior. Therefore, in order to obtain the most efficient transport, $m$ should be kept at an optimal value; in our case this is $m=2$, where discrete clusters are `slightly aware' of each other. In the case of incoherent transport, the excitation can be interpreted as a classical random walker transported with unit efficiency in the limit $t\rightarrow\infty$, provided that it is initially localized on one of the sites from the set $\mathcal{R}$ (see \fref{fig:perclat}) for a given lattice realization. The reason for this is that the walker has enough time to find a correct path towards the sink sites within the wrapping cluster. However, in the coherent transport case, the walker may not be able to cross the lattice even if it is initially localized on one of the sites that belong to $\mathcal{R}$. This result originates from the localization effects due to the random scatterings within the disordered structure of the wrapping cluster, namely Anderson localization \cite{anderson}. In two dimensions, although finite-size scaling theory suggests that all eigenstates of the system should be exponentially localized in the thermodynamic limit, independently of the disorder strength \cite{abrahams}, it remains an ongoing debate whether some delocalized states exist due to the different nature of disorder in the percolation model \cite{meir,soukoulis,dillon}. In our case, since different $m$ values provide different growth mechanisms, there are structural differences between the clusters they form. For finite lattices, this may lead to different localization effects which directly affect the transport efficiency.
We see in \fref{fig:transeff}(d) that coherent transport is slightly less efficient than incoherent transport for all $m$ values, even though their behaviors with respect to $p$ are almost identical. The differences between their efficiencies are depicted in the inset of \fref{fig:transeff}(d). Although both processes are exposed to the same average disorder, for a given $m$ these slight differences point to the existence of Anderson-type localization in the case of coherent transport, even for $L=7$. The excitations are forced to follow such disordered paths along the wrapping clusters that destructive interference results, and hence a decrease in the efficiency. As expected, for each $m$ the difference is larger at bond fractions where the lattice is highly disordered. When $p<0.4$ or $p>0.8$, the difference almost disappears since the lattice transforms into an ordered structure. The cases where the largest and the smallest differences occur are $m=2$ and $m=1$, respectively. This result suggests that the wrapping clusters formed by choosing $m=2$ are the most scattering ones, eventually preventing the excitations from reaching the sink sites coherently even in the infinite-time limit. As we mentioned earlier, this scattering pattern of clusters is highly supported by the $m=2$ case since the selection rule itself favors connecting discrete clusters over placing bonds within a single cluster. In other words, at a certain bond fraction, the total number of bonds for a given wrapping cluster is the least on average for $m=2$. For $m>2$, the efficiency difference starts to reduce since the probability of connecting two discrete clusters becomes comparable to the probability of adding a bond within an existing cluster. These results may imply that the amount of correlation between discrete clusters can affect the localization length of the eigenstates of coherent transport. \subsection{\label{ssec:boswc}Localization of eigenstates} \begin{figure*} \caption{(a) The average participation ratios $\langle \xi_l(p) \rangle$ of each eigenstate $\ket{\Phi_l^0}$ of $H_0$ for $m=1,2,4$ and $84$. (b) The average participation ratio of all eigenstates, $\langle \xi(p) \rangle_\textsubscript{avg}$. (c) The contribution probabilities $\langle \nu_l(p) \rangle$. (d) The average number of eigenstates contributing to the transport, $\gamma(p)$.} \label{fig:partratio} \end{figure*} In order to gain better insight into the localization effects of coherent transport, we can examine each eigenstate of $H_0$ to find out whether it is localized or not. Here we will ignore the trap sites and consider only how disorder affects the eigenstates. Let $\ket{\Phi_l^0(p)}$ be the $l$th eigenstate of $H_0$ for bond fraction $p$. Then, $|\braket{i}{\Phi_l^0(p)}|^2$ gives the probability distribution of the $l$th state over the sites $i\in\mathcal{N}$. The participation ratio provides information about how much a given probability distribution spreads over the lattice, and in our case it can be defined as \begin{equation} \xi_l(p)=\left( \sum_{i}^N |\braket{i}{\Phi_l^0(p)}|^4 \right)^{-1}. \label{eq:partratio} \end{equation} It estimates the number of sites over which the $l$th eigenstate is distributed, i.e., for $\xi_l=1$ the distribution is localized on a single site whereas $\xi_l=N$ indicates a homogeneous distribution over $N$ sites. However, the participation ratio provides no knowledge about the geometrical shape of this distribution; e.g., while a line-shaped cluster with $7$ sites wraps the lattice, a square-shaped cluster with $25$ sites cannot. Moreover, a given distribution may belong to a non-wrapping cluster. Therefore, the participation ratio by itself is not sufficient to decide whether a given eigenstate contributes to the transport process or not.
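The participation ratios themselves follow directly from the eigenvectors of $H_0$; a minimal NumPy sketch (the function name is ours, for illustration only) reads as follows.
\begin{verbatim}
import numpy as np

def participation_ratios(H0):
    """Participation ratio xi_l of every eigenstate of H0 (sketch).

    H0 : (N, N) real symmetric Laplacian of one lattice realization
    """
    _, vecs = np.linalg.eigh(H0)              # columns: orthonormal eigenstates
    prob = np.abs(vecs) ** 2                  # |<i|Phi_l^0>|^2
    return 1.0 / np.sum(prob ** 2, axis=0)    # xi_l = (sum_i |<i|Phi_l^0>|^4)^(-1)
\end{verbatim}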
For these reasons, let us define a step function $\nu_l (p)$ yielding $1$ if the $l$th eigenstate has at least one nonzero amplitude on each of $\mathcal{S}$ and $\mathcal{S'}$, and yielding $0$ otherwise. The number of source and sink sites involved in the eigenstate is not important since we consider the limit $t \rightarrow \infty$. Thus, the ensemble average $\langle \nu_l (p) \rangle$ becomes the probability that the $l$th eigenstate contributes to the transport process. In \fref{fig:partratio}(a), $\langle \xi_l(p) \rangle$ is given for $m=1,2,4$ and $84$. The triangle-shaped black regions represent single-site discrete clusters. For $p=0$, there are $N=49$ of them. As expected, the number of these clusters decreases linearly as more bonds are added. This transition is most clearly observed for $m=84$, where single-site clusters are paired until almost the whole lattice is covered by two-site clusters at $p=0.3$. Moreover, different regions become separated from each other as we increase $m$. In other words, for a given $p$, the case where the $\langle \xi_l \rangle$ are most uniformly distributed over $l$ is $m=84$. This result, of course, arises due to the homogeneous growth of discrete clusters with increasing correlation strength, as we have mentioned in \Sref{ssec:treff}. We can safely claim that if $\langle \xi_l(p) \rangle < 7$ then the $l$th eigenstate is localized and does not contribute to the transport. It can be seen in \fref{fig:partratio}(a) that all eigenstates are localized for $p \lesssim 0.2$. For $m=1$ and $m=2$, eigenstates have a chance of contributing to the transport after $p \approx 0.2$ and $p \approx 0.4$, respectively, which is also consistent with the results of \fref{fig:transeff}(a) (see $p_a^1$ and $p_a^2$). As $m$ increases, we observe that the contributing states accumulate above $p=0.5$, supporting our findings in \fref{fig:transeff}(a), where the transport efficiency starts increasing effectively for $p>0.5$. We note that there are some highly delocalized states near $l=49$. They appear for approximately $p\in[0.75,1]$ for $m=1$ and $m=84$, whereas the interval becomes $p\in[0.6,1]$ for $m=2$ and $m=4$. For $m=2,4$ and $84$, the sharp transition to these highly delocalized states with increasing $p$ suggests that the wrapping cluster is most likely to reach almost all the sites of the lattice as soon as it is formed. We can further deduce that for $m=1$, the wrapping cluster does not instantly cover the whole lattice when it first appears, since the average bond fraction $\langle p_w^1 \rangle$ at which wrapping occurs is smaller than for the other cases. We need to keep adding bonds until $p=0.75$ to make the wrapping cluster cover the lattice. This is one of the main consequences of correlation effects on the transport efficiency. The average participation ratio of all eigenstates, which we define as $\langle \xi(p) \rangle_\textsubscript{avg}=\frac{1}{N}\sum_{l=1}^N \langle \xi_l(p) \rangle $, is shown in \fref{fig:partratio}(b). We see that $\langle \xi \rangle_\textsubscript{avg}$ increases smoothly for $m=1$ whereas it is suppressed up to $p=0.5$ for $m=2,4$ and up to $p=0.7$ for $m=84$, i.e., the greater the value of $m$, the more suppressed is $\langle \xi\rangle_\textsubscript{avg}$. This result also explains the decrease in efficiency for increasing $m$ values in \fref{fig:transeff}(a). The eigenstates abruptly delocalize following this suppression and become distributed over larger regions of the lattice than in the $m=1$ case.
We finally note that the minimum bond fractions satisfying $\left < \xi^{m>1}\right >_\textsubscript{avg} > \left < \xi^{m=1} \right >_\textsubscript{avg}$ are almost the same as $p_b^m$. The contribution probability $\langle \nu_l (p) \rangle$ depicted in \fref{fig:partratio}(c) is more convenient for understanding whether a given eigenstate contributes to the transport or not. The average number of eigenstates contributing to the transport process, $\gamma(p)=\sum_l\langle\nu_l(p)\rangle$, is also given in \fref{fig:partratio}(d). We see again that there is almost no contributing eigenstate when $p<0.2$ for all $m$. The only exception is $m=1$, where a few eigenstates contribute on average. This early contribution arises from wrapping clusters in the form of lines or strips along the lattice, which we expect to disappear for larger lattices, and it explains the early rise in the transport efficiency shown in \fref{fig:transeff}(a). On the other hand, nearly all eigenstates are contributing when $p>0.8$ for all $m$. In \fref{fig:partratio}(d), we clearly observe that after $p \approx 0.4$ the average number of contributing eigenstates stays approximately constant, or rather decreases slightly, even though we keep adding more bonds to the lattice. This result is more significant for large $m$ values and it explains the strong transport suppression in \fref{fig:transeff}(a). Also, around $p\approx 0.6$ ($p\approx 0.64$), $\gamma(p)$ for $m=2$ ($m=4$) starts to exceed that of $m=1$, which explains the high transport efficiencies appearing at low bond fractions. If we take into account the similarities between \fref{fig:transeff}(a) and \fref{fig:partratio}(d), it is clear that the transport efficiency is closely related to the number of contributing eigenstates. \section{\label{sec:con}Conclusion} We have studied the coherent and incoherent transports between opposite edges of a finite square lattice where the existence of bonds between sites is determined by either the standard or the explosive percolation model. Since the explosive percolation model provides a disorder source enabling discrete clusters to grow in correlation with each other, we were able to investigate the possible effects of this correlated disorder on the transport efficiency. We have shown that the weakest possible correlation contributes the most to the transport efficiency. For small correlation strengths, we have obtained more efficient transport than in the standard percolation case. As we increased the correlation strength, the transport efficiency gradually decreased and eventually dropped below that of the standard percolation case. We have demonstrated that the effective starting point and the efficiency of any transport process are directly related to the size of the largest cluster for a given bond fraction. Moreover, we compared our results with the incoherent transport to see possible localization effects. We have shown that more correlation causes less localization. Therefore, the least correlation provides the most efficient transport despite inducing localization the most. We have explained the possible mechanism behind these localization effects. Lastly, we supported our findings by explicitly examining the average participation ratio and contribution probability of the eigenstates of the system, which allowed us to decide whether an eigenstate is localized or delocalized over the lattice.
Based on our results, we conjecture that the localization length of the eigenstates in the case of explosive percolation may be affected by the correlation strength between clusters. A proof of this conjecture, of course, requires a careful analysis of the localization properties of eigenstates for larger lattices, which we leave as a topic of further research. Lastly, although there is no established application of coherent transport on correlated networks yet, our work could provide insight for engineering quantum systems that achieve highly efficient transport by utilizing environmental effects. \ack We would like to thank E. \.{I}lker, E. Canay, T. \c{C}a\u{g}lar and O. Benli for helpful discussions. \appendix \setcounter{section}{1} \section*{Appendix: Derivation of the survival probabilities} We calculate the survival probability $\pi_j$ after starting with the initial state $\ket{\psi_j}$ by summing the transition probabilities over all lattice sites as \begin{eqnarray} \pi_j(t)=\sum_{k\in\mathcal{N}}{\pi_{kj}(t)}&=&\sum_{k\in\mathcal{N}}{\bra{\psi_j}e^{iH^{\dagger}t}\ket{k}\bra{k}e^{-iHt}\ket{\psi_j}} \nonumber\\ &=&\bra{\psi_j}e^{iH^{\dagger}t}e^{-iHt}\ket{\psi_j}. \label{eq:sp} \end{eqnarray} \noindent The Hamiltonian $H=H_0-i\Gamma$ is non-Hermitian; it has $N$ complex eigenvalues $E_l=\epsilon_l-i\gamma_l$ (those of $H^{\dagger}$ being $E_l^*=\epsilon_l+i\gamma_l$), with corresponding right eigenstates $\ket{\Phi_l}$ and left eigenstates $\bra{\tilde{\Phi}_l}$. These eigenstates can be taken as biorthonormal $\braket{\tilde{\Phi}_l}{\Phi_{l'}}=\delta_{ll'}$ and complete $\sum_{l=1}^{N}\outerp{\Phi_l}{\tilde{\Phi}_l}=I$ \cite{sternheim}. Also, they satisfy $\braket{k}{\Phi_l}^*=\braket{\tilde{\Phi}_l}{k}$. Therefore, \eref{eq:sp} becomes, \begin{equation} \pi_j(t)=\sum_{ll'=1}^N{\braket{\psi_j}{\Phi_l}\bra{\tilde{\Phi}_l}}e^{iH^{\dagger}t}e^{-iHt}\ket{\Phi_{l'}}\braket{\tilde{\Phi}_{l'}}{\psi_j}. \label{eq:sp2} \end{equation} \noindent By using the following identities, \begin{eqnarray} e^{-iHt}\ket{\Phi_{l'}}&=e^{-i\epsilon_{l'}t}e^{-\gamma_{l'}t}\ket{\Phi_{l'}}, \nonumber\\ \bra{\tilde{\Phi}_l}e^{iH^{\dagger}t}&=\bra{\tilde{\Phi}_l}e^{i\epsilon_{l}t}e^{-\gamma_{l}t} \nonumber, \label{eq:evol} \end{eqnarray} \noindent \eref{eq:sp2} becomes, \begin{eqnarray} \pi_j(t)&=\sum_{ll'=1}^N{\braket{\psi_j}{\Phi_l}\braket{\tilde{\Phi}_l}{\Phi_{l'}}\braket{\tilde{\Phi}_{l'}}{\psi_j}}e^{i\epsilon_{l}t}e^{-\gamma_{l}t}e^{-i\epsilon_{l'}t}e^{-\gamma_{l'}t} \nonumber \\ &=\sum_{l=1}^{N}{e^{-2\gamma_l t}\braket{\psi_j}{\Phi_l}\braket{\tilde{\Phi}_l}{\psi_j}}=\sum_{l=1}^{N}{e^{-2\gamma_l t}|\braket{\psi_j}{\Phi_l}|^2} \label{eq:sp3} \end{eqnarray} \noindent This provides information on how an excitation decays over the lattice in time. In the limit $t \rightarrow \infty$, we expect $\pi_j$ to decay exponentially because of the imaginary parts $\gamma_l$ if the lattice is fully connected. However, when some bonds in the lattice are missing, there exist purely real eigenvalues (i.e., eigenvalues with $\gamma_l=0$), which results in $\lim_{t \rightarrow \infty}\pi_j (t) \ne 0$. Therefore, in \eref{eq:sp3} only the terms with $\gamma_l=0$ may remain and we obtain, \begin{equation} \Pi_j=\lim_{t \rightarrow \infty}\pi_j (t) =\sum_{\{l|E_l\in\mathbb{R}\}}|\braket{\psi_j}{\Phi_l}|^2. 
\label{eq:limsurprobb} \end{equation} \noindent If we choose an initial state $\ket{\psi_j}=\frac{1}{\sqrt{L}}\sum_{i\in \mathcal{S}}\ket{i}$ which is a superposition of $L$ sites from the set $\mathcal{S}$, \eref{eq:limsurprobb} becomes, \begin{equation} \Pi_\mathcal{S}=\frac{1}{L}\sum_{\{l|E_l\in\mathbb{R}\}}{\left|\sum_{i\in \mathcal{S}}{\braket{i}{\Phi_l}}\right|^2}. \label{eq:limsurprobcohini} \end{equation} Similarly, we can calculate the survival probability for the incoherent transport as \begin{equation} p_j(t)=\sum_{k\in\mathcal{N}}\bra{k}e^{Tt}\ket{\psi_j}=\sum_l \sum_{k\in\mathcal{N}} e^{-\lambda_l t}\braket{k}{\phi_l}\braket{\phi_l}{\psi_j}, \label{eq:incohsurv} \end{equation} \noindent where $-\lambda_l$ ($\lambda_l\geq 0$) and $\ket{\phi_l}$ are the eigenvalues and the corresponding eigenstates of the transfer matrix $T=T_0-\Gamma$. In the $t\rightarrow\infty$ limit, only the terms with $\lambda_l=0$ survive. Therefore, \eref{eq:incohsurv} becomes, \begin{equation} P_j=\sum_{\set{l|\lambda_l=0}} \sum_{k\in\mathcal{N}} \braket{k}{\phi_l} \braket{\phi_l}{\psi_j}. \end{equation} Then, the initial state $\ket{\psi_j}=\frac{1}{L}\sum_{i\in \mathcal{S}}\ket{i}$ yields, \begin{equation} P_\mathcal{S}=\frac{1}{L}\sum_{\set{l|\lambda_l=0}} \sum_{k\in\mathcal{N}} \braket{k}{\phi_l} \left(\sum_{i\in\mathcal{S}} \braket{\phi_l}{i} \right). \label{eq:limsurprobincohini} \end{equation} \section*{References} \end{document}
\begin{document} \stepcounter{footnote} \footnotetext{Ecole des Ponts, Marne-la-Vallée, France.} \stepcounter{footnote} \footnotetext{Laboratoire d'informatique de l'\'Ecole polytechnique, Institut Polytechnique de Paris, Palaiseau, France.} \stepcounter{footnote} \footnotetext{Univ. Grenoble Alpes, CNRS, Grenoble INP (Institute of Engineering Univ. Grenoble Alpes), GIPSA-lab, 38000 Grenoble, France.} \stepcounter{footnote} \footnotetext{Corresponding author. \texttt{\href{mailto:[email protected]}{[email protected]}}} \title{Minimal time nonlinear control via semi-infinite programming} \begin{abstract} We address the problem of computing a control for a time-dependent nonlinear system to reach a target set in a minimal time. To solve this minimal time control problem, we introduce a hierarchy of linear semi-infinite programs, the values of which converge to the value of the control problem. These semi-infinite programs are increasing restrictions of the dual of the nonlinear control problem, which is a maximization problem over the subsolutions of the Hamilton-Jacobi-Bellman (HJB) equation. Our approach is compatible with generic dynamical systems and state constraints. Specifically, we use an oracle that, for a given differentiable function, returns a point at which the function violates the HJB inequality. We solve the semi-infinite programs using a classical convex optimization algorithm with a convergence rate of $O(\frac{1}{k})$, where $k$ is the number of calls to the oracle. This algorithm yields subsolutions of the HJB equation that approximate the value function and provide a lower bound on the optimal time. We study the closed-loop control built on the obtained approximate value functions, and we give theoretical guarantees on its performance depending on the approximation error for the value function. We show promising numerical results for three non-polynomial systems with up to $6$ state variables and $5$ control variables. \end{abstract} \begin{keywords} Nonlinear control, Minimal time control, Weak formulation, Semi-infinite programming. \end{keywords} \section{Introduction} \subsection{Motivation and related works} This paper deals with the control of a deterministic dynamical system to reach a target set in a minimal time. We consider a general case of a time-dependent nonlinear system under nonlinear state constraints. Several applications in various fields, such as robotics \cite{jazar_theory_2010}, aerospace \cite{trelat_optimal_2012}, maritime routing \cite{mannarini_graph-search_2020} or medicine \cite{zabi_time-optimal_2017}, can be formulated as minimal time control problems. Minimal time control, also known as time-optimal control, can be seen as a special case of the general framework of Optimal Control Problems (OCP). Solving an OCP for such generic dynamics and constraints is a difficult challenge, although deep theoretical tools are available, such as the Pontryagin Maximum Principle (PMP) \cite{bourdin_pontryagin_2015,clarke_relationship_1987, pontryagin_mathematical_1987} and the Hamilton-Jacobi-Bellman (HJB) equation \cite{crandall_viscosity_1983,frankowska_optimal_1989}. Those theoretical tools, initially developed in the unconstrained setting, have been extended to the case of state constraints \cite{capuzzo-dolcetta_hamilton-jacobi_nodate,soner_optimal_1986}.
From a numerical point of view, the \textit{multiple shooting} techniques \cite{pesch_practical_1996,von_stryk_direct_1992} are based on the PMP, and reduce to the solution of a two-point boundary value problem. The \textit{direct methods} reduce to the solution of a nonlinear programming problem after discretizing the time horizon, or parameterizing the control $u(t)$ in a finite dimensional subspace \cite{trelat_optimal_2012,von_stryk_direct_1992}. The celebrated Model Predictive Control (MPC) approach belongs to the category of direct methods \cite{camacho_model_2013}. Another approach is to compute the value function of the problem as a maximal subsolution of the HJB equation \cite{vinter_convex_1993}. This approach is related to the weak formulation of the OCP, which is an infinite dimensional linear program (LP) involving occupation measures. The dual problem of this LP is exactly the problem of finding a maximal subsolution of the HJB equation \cite{hernandez-hernandez_linear_1996,lasserre_nonlinear_2008}. In \cite{henrion_linear_2014,lasserre_nonlinear_2008}, the Moment Sum-of-Squares (SoS) hierarchy is used to approximate the solution of the resulting infinite dimensional LPs, in the case where the dynamics and the constraints of the OCP are defined by polynomials. The convergence rate of this numerical scheme is studied in \cite{korda_convergence_2017} for infinite-time discounted polynomial control problems. Still in the context of polynomial control problems, the work \cite{jones_polynomial_2023}, based on the dual LP and the SoS hierarchy, also studies the design of a closed-loop controller built on the computed approximate value function. In \cite{berthier2022infinite}, an extension of the SoS hierarchy based on kernel methods is employed to extend this computation to general nonlinear systems. Regarding the methods specifically dedicated to time-optimal control, we find the same categories: direct methods such as MPC \cite{verschueren_stabilizing_2017}, indirect methods based on the PMP and the bang-bang property \cite{liberzon_calculus_2012,olsder_time-optimal_1975}, or methods based on convex optimization \cite{leomanni_time-optimal_2022}. \subsection{Contribution} In this paper, we focus on the problem of computing a control to reach a target set in a minimal time. We follow the line of works that use convex optimization to solve the dual problem of the nonlinear control problem, over the subsolutions of the HJB equation \cite{henrion_linear_2014,hernandez-hernandez_linear_1996,korda_convergence_2017,lasserre_nonlinear_2008,vinter_convex_1993}. In contrast to several works using the Moment-SoS hierarchy \cite{jones_polynomial_2023,korda_convergence_2017,lasserre_nonlinear_2008,oustry_inner_2019,sager_efficient_2015}, the dynamical system and the state constraints considered here are generic and, in particular, are not assumed to be defined by polynomials. Instead of using polynomial optimization theory and the associated positivity certificates, our approach relies on the existence of a \textit{separation oracle} capable of returning, for a given differentiable function $V$, a point $(t, x)$ where the function $V$ does not satisfy the HJB inequality. Such an oracle can be provided by a global optimization solver or by a sampling scheme in a black-box optimization approach. In particular, our approach is compatible with the sampled-data control paradigm \cite{berthier2022infinite,bourdin_pontryagin_2015,korda_computing_2020}.
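To fix ideas, a sampling-based variant of such a separation oracle can be sketched as follows; this is only an illustration under black-box assumptions, and the function names, the sampling routine \texttt{sample\_TXU} and the way $V$ and its gradient are passed are hypothetical choices of ours, not a prescription of the method.
\begin{verbatim}
import numpy as np

def hjb_oracle(gradV, f, sample_TXU, n_samples=10000, tol=0.0):
    """Sampling-based separation oracle (sketch).

    gradV      : callable returning (dV/dt, grad_x V) at a point (t, x)
    f          : dynamics f(t, x, u), returned as a NumPy array
    sample_TXU : callable returning a random point (t, x, u) in [0,T] x X x U
    Returns the most violated sampled point for the HJB inequality,
    or None if no violation is found among the samples.
    """
    worst, worst_val = None, -tol
    for _ in range(n_samples):
        t, x, u = sample_TXU()
        dVdt, gradx = gradV(t, x)
        residual = dVdt + 1.0 + gradx @ f(t, x, u)   # HJB inequality residual
        if residual < worst_val:                     # more violated than before
            worst, worst_val = (t, x, u), residual
    return worst
\end{verbatim}
The terminal constraint $V(t,x) \leq 0$ on $[0,T] \times K$ can be checked with an analogous sampling loop.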
Our contribution is manifold: \begin{itemize} \item We introduce a hierarchy of linear semi-infinite programs, the values of which converge to the value of the control problem. After regularization, we solve these semi-infinite programs using a classical algorithm with a convergence rate in $O(\frac{1}{k})$, where $k$ is the number of calls to the oracle. This yields subsolutions of the HJB equation that lower-approximate the value function and provide a certified lower bound on the minimum time. \item It is known that one can leverage any function $V(t,x)$ approximating the value function to design a closed-loop, \textit{i.e.}, feedback controller \cite{henrion_nonlinear_2008,jones_polynomial_2023}. In this paper, we study the existence of trajectories generated by such a controller. \item We study the performance of such a closed-loop controller, depending on how well $V(t,x)$ approximates the value function, in a way distinct from the analysis in \cite{jones_polynomial_2023}. In particular, this novel analysis enables us to give a sufficient condition for the closed-loop controller to effectively generate a trajectory reaching the target set within the considered time horizon. \item We perform numerical experiments on three non-polynomial controlled systems and compute lower and upper bounds on the minimum time. \end{itemize} \subsection{Mathematical notation} For any $p \in \mathbb{N}^*$, and $k\in \mathbb{N} \cup \{ \infty\}$, we denote by $C^k(\mathbb{R}^p) = C^k(\mathbb{R}^p,\mathbb{R})$ the vector space of real-valued functions with $k$ continuous derivatives over $\mathbb{R}^p$. For a given set $A \subset \mathbb{R}^p$, for any function $f \in C^k(\mathbb{R}^p)$, we denote by $f_{|A}$ the restriction of $f$ to $A$; moreover, we define the vector space $C^k(\mathbb{R}^p | A) = \{ f_{|A} \colon f \in C^k(\mathbb{R}^p) \}$ of restrictions to $A$ of $C^k$ functions. For any locally Lipschitz function $f$, we denote by $\partial^c f $ its Clarke subdifferential \cite{clarke_generalized_1975}, to be distinguished from $\partial_{x_i} g $, the partial derivative of a differentiable function $g$ with respect to $x_i$. For any two Lebesgue integrable functions $f,g \in L^1(\mathbb{R}^p)$, we define the convolution product $f \star g = g \star f$ as $(f \star g)(x) = \int_{\mathbb{R}^p} f(h) g(x-h) dh$. We emphasize that this convolution product is also well defined if $f$ is supported on a compact set, and $g$ is locally integrable. We denote by $\mathbb{R}[x_1, \dots, x_p]$ the vector space of real multivariate polynomials with variables $x_1, \dots, x_p$, and by $\mathbb{R}_d[x_1, \dots, x_p]$ the vector space of such real multivariate polynomials with degree at most $d$. For any set $A \subset \mathbb{R}^p$, we write $\mathsf{conv}(A)$ for the convex hull of the set $A$. For any nonempty set $A$, and any $x \in \mathbb{R}^p$, we denote by $d(x,A) =\inf_{a \in A} \lVert x - a \rVert_2$ the distance between the set $A$ and the point $x$. We also define the contingent cone to $A$ at $x \in A$, denoted $T_A(x)$, as the set of directions $d \in \mathbb{R}^p$ such that there exist a sequence $(t_k) \in \mathbb{R}_{++}^\mathbb{N}$ and a sequence $(d_k) \in (\mathbb{R}^p)^\mathbb{N}$ satisfying $t_k \rightarrow 0$, $d_k \rightarrow d$, and $x + t_k d_k \in A$, for all $k \in \mathbb{N}$. Finally, we say that a property $P$ holds ``almost everywhere'' (a.e.)
on $A$, or equivalently ``for almost all $x \in A$'', to denote that there exists a set $N$ of Lebesgue measure zero such that the property $P$ holds for all $x \in A \setminus N$. \section{Problem statement and Linear Programming formulations} \subsection{Definition of the minimal time control problem} Let $n$ and $m$ be nonzero integers. We consider on $\mathbb{R}^n$ the control system \begin{align} \dot{x}(t) = f(t,x(t), u(t)), \label{eq:system} \end{align} where $f\colon \mathbb{R} \times \mathbb{R}^{n} \times \mathbb{R}^m \to \mathbb{R}^n$ is Lipschitz continuous, and where the controls are bounded measurable functions, defined on intervals $[t_0, t_1] \subset [0, T]$, and taking their values in a compact set $U$ of $\mathbb{R}^m$. Let $X$ and $K\subset X$ be compact subsets of $\mathbb{R}^n$, and let $x_0 \in \mathbb{R}^n$. For $t_0, t_1 \geq 0$, a control $u$ is said to be admissible on $[t_0, t_1]$ whenever the solution $x(\cdot)$ of \eqref{eq:system}, such that $x(t_0) = x_0$, is well defined on $[t_0, t_1]$ and satisfies the constraints \begin{align} (x(t),u(t)) \in X \times U, \quad \text{a.e. on } [t_0, t_1], \label{eq:constraints} \end{align} and satisfies the terminal state constraint \begin{align} x(t_1) \in K. \end{align} We denote by $\mathcal{U}(t_0,t_1,x_0)$ the set of admissible controls on $[t_0, t_1]$. We consider the minimal time problem from $x_0$ to $K$, \begin{align} V^* (t_0,x_0) = \underset{\begin{subarray}{c} t_1 \in [t_0, T] \\ u(\cdot) \in \mathcal{U}(t_0,t_1,x_0)\end{subarray}}{\inf} t_1-t_0. \label{eq:inf} \end{align} This is a particular case of the OCP with free final time \cite{lasserre_nonlinear_2008}, associated with the cost $\int_{t_0}^{t_1} \ell(t,x(t),u(t)) dt$ for $\ell(t,x(t),u(t)) = 1$. The function $V^*$ is called the value function of this minimal time control problem: it describes the smallest time needed to reach the target set $K$, starting from $x_0$ at time $t_0$. \begin{assumption} For any $(t,x) \in [0,T] \times X$, the set $f(t,x,U)$ is convex. \label{as:convexF} \end{assumption} We underline that we do not have any convexity assumption on the constraint set $X$ or on the target set $K$. \begin{remark} Even if the dynamical system of interest does not satisfy Assumption~\ref{as:convexF}, we can apply the present analysis to the \textit{convexified inclusion} $\dot{x}(t) \in \mathsf{conv} \: f(t,x(t), U)$. According to the Filippov-Wa\.{z}ewski relaxation Theorem \cite[Th.~10.4.4]{aubin_set-valued_2009}, the trajectories of the original control problem are dense in the set of trajectories of the convexified inclusion. The trajectories of the convexified inclusion may be seen as the limit of chattering trajectories, \textit{i.e.}, trajectories obtained when the control oscillates infinitely fast and the constraint set is infinitesimally dilated. \end{remark} \begin{theorem} Under Assumption~\ref{as:convexF}, the minimal time control problem \eqref{eq:system}-\eqref{eq:inf} associated with a starting point $(t_0, x_0) \in [0, T] \times X$ is either infeasible or admits an optimal trajectory. \label{th:exopti} \end{theorem} \begin{proof} We consider the case where a feasible trajectory exists. This is a direct application of \cite[Th.~2.1]{vinter_convex_1993}, which, among other things, characterizes the existence of an optimal trajectory for a control problem over a differential inclusion.
To emphasize the correspondence with the notation of \cite{vinter_convex_1993}, we highlight that we apply the theorem with: the running cost function $\ell(t,x,p) = 1$, the terminal cost function $g(t,x) = 0$, the set-valued map $F(t,x) = f(t,x, U)$, the constraint set $A = [0,T] \times X$ and the target set $C = [0,T] \times K$. We underline that the assumptions (H1)-(H5) in \cite{vinter_convex_1993} are satisfied here; more precisely, we highlight that our Assumption~\ref{as:convexF} enforces (H2) and the hypothesis that a feasible trajectory exists enforces (H4). \end{proof} \subsection{Hamilton-Jacobi-Bellman equation and subsolutions} In optimal control theory, a well-known sufficient condition for a function $V$ to be the value function $V^*$ is to satisfy the Hamilton-Jacobi-Bellman (HJB) Partial Differential Equation (PDE). This PDE may be seen as a continuous-time generalization of Bellman's dynamic programming optimality principle in discrete time \cite{bellman_dynamic_1966}. In our minimal time control setting, the HJB PDE reads \begin{align} \partial_t V (t,x) + \min_{u \in U} \{ 1 + \nabla_x V(t,x)^\top f(t,x,u) \} = 0, \quad \forall (t,x) \in [0,T] \times X \label{eq:hjb1}\\ V(t,x) = 0, \quad \forall (t,x) \in [0,T] \times K. \label{eq:hjb2} \end{align} In general, differentiable solutions of this PDE may not exist, so the concept of viscosity solutions is typically used \cite{crandall_viscosity_1983}. Another approach to get around the lack of a differentiable solution to the HJB PDE consists in leveraging the concept of subsolutions \cite{vinter_convex_1993}, \textit{i.e.}, functions $V \in C^1(\mathbb{R}^{n+1})$ satisfying the following inequalities: \begin{align} \partial_t V (t,x) + \min_{u \in U} \{ 1 + \nabla_x V(t,x)^\top f(t,x,u) \} \geq 0, \quad \forall (t,x) \in [0,T] \times X \label{eq:subsol1}\\ V(t,x) \leq 0, \quad \forall (t,x) \in [0,T] \times K. \label{eq:subsol2} \end{align} The following lemma states that any subsolution of the HJB PDE is an under-approximation of the value function. \begin{lemma} For any $V \in C^1(\mathbb{R}^{n+1})$ satisfying Eqs.~\eqref{eq:subsol1}-\eqref{eq:subsol2}, the following holds: \begin{align*} V(t,x) \leq V^*(t,x), \quad \forall (t,x) \in [0,T] \times X. \end{align*} \label{lem:subsol} \end{lemma} \begin{proof} We take any $(t,x) \in [0,T] \times X$ and we consider the case $V^*(t,x)< \infty$, since the case $V^*(t,x) = \infty$ is trivial. Hence, according to Th.~\ref{th:exopti}, there exists an admissible control $\Bar{u}(\cdot) \in \mathcal{U}(t,t_1, x)$ for some $t_1 \in [t, T]$ such that $V^*(t,x) = t_1 - t$, and an associated trajectory $\Bar{x}(\cdot)$ such that $\Bar{x}(t) = x$ and $\Bar{x}(t_1) \in K$. We observe that $\frac{d}{dt} [V(t,\Bar{x}(t))] = \partial_t V(t,\Bar{x}(t)) + \nabla_x V (t, \Bar{x}(t))^\top f(t,\Bar{x}(t),\Bar{u}(t)) \geq -1$ a.e. on $[t,t_1]$, the inequality holding since $V$ satisfies Eq.~\eqref{eq:subsol1}. By integration, we observe that $V(t_1,\Bar{x}(t_1)) - V(t,x) \geq t - t_1 = - V^*(t,x)$, \textit{i.e.}, $V(t_1,\Bar{x}(t_1)) + V^*(t,x) \geq V(t,x)$. As $V$ satisfies Eq.~\eqref{eq:subsol2} and as $\Bar{x}(t_1) \in K$, we observe that $0 \geq V(t_1,\Bar{x}(t_1))$, and therefore $V^*(t,x) \geq V(t,x)$. \end{proof} \subsection{Infinite dimensional Linear Programming formulations} In the rest of the paper, we consider a given point $x_0 \in X$, and we raise the issue of computing the minimal time from $x_0$ to $K$ and the associated control.
We make the following assumption: \begin{assumption} There exists an admissible control $u \in \mathcal{U}(0,t_1,x_0)$ associated with $t_1 \in [0, T]$. In other words, $V^*(0,x_0) \leq t_1 < \infty$. \label{as:finite} \end{assumption} We consider the optimization problem of finding the subsolution of the HJB PDE that maximizes the evaluation in $(0, x_0)$. This problem may be cast as an infinite dimensional linear program: \begin{align} \begin{array}{rll} \underset{V \in \mathcal{F}} \sup & V(0,x_0) & \\ \text{s.t.} & \partial_t V (t,x) + 1 + \nabla_x V(t,x)^\top f(t,x,u) \geq 0 & \forall (t,x,u) \in [0,T] \times X \times U \\ & V(t,x) \leq 0 & \forall (t,x) \in [0,T] \times K, \end{array} \tag{\mbox{$D_\mathcal{F}$}} \label{eq:lpc1} \end{align} with $\mathcal{F} \in \{ C^1(\mathbb{R}^{n+1}), C^\infty(\mathbb{R}^{n+1}),\mathbb{R}[t,x_1,\dots, x_n] \}$. For a given $V \in \mathcal{F}$, the feasibility in \eqref{eq:lpc1} is clearly equivalent to the satisfaction of Eqs.~\eqref{eq:subsol1}-\eqref{eq:subsol2}. We also note that this infinite dimensional LP formulation corresponds to the dual LP formulation in \cite{lasserre_nonlinear_2008}; in fact, this is the dual problem of an infinite dimensional LP formulation of the control problem based on occupation measures. According to the next theorem, the problem \eqref{eq:lpc1} on $C^1$ functions has the same value as the minimal time control problem. \begin{theorem} Under Assumption~\ref{as:convexF} and Assumption~\ref{as:finite}, and for $\mathcal{F} = C^1(\mathbb{R}^{n+1})$, the value of the LP formulation \eqref{eq:lpc1} equals $V^*(0,x_0)$. \label{th:duality} \end{theorem} \begin{proof} As for the proof of Th.~\ref{th:exopti}, this is a direct application of \cite[Th.~2.1]{vinter_convex_1993}, which also states the absence of duality gap between a control problem over a differential inclusion and a maximization problem over subsolutions of the HJB equation. We underline that the assumptions (H1)-(H5) in \cite{vinter_convex_1993} are satisfied here; more precisely, our Assumption~\ref{as:convexF} enforces (H2) and our Assumption~\ref{as:finite} enforces (H4). \end{proof} Theorem~\ref{th:smoothvf} extends this result by stating that we can require the subsolutions of the HJB equation to be in $C^\infty(\mathbb{R}^{n+1})$, while preserving the value of \eqref{eq:lpc1}. Before stating this theorem, we introduce an auxiliary lemma. \begin{lemma} For any $V \in C^1(\mathbb{R}^{n+1})$ satisfying Eqs.~\eqref{eq:subsol1}-\eqref{eq:subsol2} with feasibility error at most $\eta \geq 0$, the function $V(t,x) + \eta(t-1-T)$ satisfies Eqs.~\eqref{eq:subsol1}-\eqref{eq:subsol2}.\label{lem:feas} \end{lemma} \begin{proof} We introduce $\tilde{V}(t,x) = V(t,x) + \eta(t-1-T)$. By assumption on $V(t,x)$, we have $\partial_t V (t,x) + 1 + \nabla_x V(t,x)^\top f(t,x,u) \geq -\eta$, for all $(t,x,u) \in [0,T] \times X \times U$. By linearity, and since $\partial_t (t-1-T) = 1$ and $\nabla_x (t-1-T) = 0$, $\partial_t \tilde{V} (t,x) + 1 + \nabla_x \tilde{V}(t,x)^\top f(t,x,u) \geq 0$. By assumption on $V(t,x)$, we have $V(t,x) \leq \eta$, for all $(t,x) \in [0,T] \times K$. Hence, $\tilde{V}(t,x) \leq \eta + \eta(t-1-T) \leq \eta + \eta(T-1-T) = 0$ for all $(t,x) \in [0,T] \times K$. \end{proof} \begin{theorem} Under Assumption~\ref{as:convexF} and Assumption~\ref{as:finite}, and for $\mathcal{F} = C^\infty(\mathbb{R}^{n+1})$, the value of the LP formulation \eqref{eq:lpc1} equals $V^*(0,x_0)$.
\label{th:smoothvf} \end{theorem} \begin{proof} We consider $\mathcal{F} = C^\infty(\mathbb{R}^{n+1})$ and we write $Y$ for the compact set $[0, T] \times X$. We fix $\epsilon > 0$, and we will prove that there exists $V \in C^\infty(\mathbb{R}^{n+1})$ that is feasible in \eqref{eq:lpc1} and such that $V(0,x_0) \geq V^*(0,x_0) - \epsilon$. According to Th.~\ref{th:duality}, there exists $V_1 \in C^1(\mathbb{R}^{n+1})$ that is feasible in \eqref{eq:lpc1} and such that $V_1(0,x_0) \geq V^*(0,x_0) - \frac{\epsilon}{2}$. For any $\sigma \in (0,1]$, we introduce the mollified function $V_{1 \sigma} = V_1 \ast \omega_\sigma \in C^\infty(\mathbb{R}^{n+1})$, where $\omega_\sigma$ is the standard mollifier defined as $\omega_\sigma(y) = \frac{1}{\sigma^{n+1}} \omega(y/\sigma)$, where $\omega(y) = \left \lbrace \begin{array}{ll} \xi e^{-\frac{1}{1-\lVert y \rVert^2}} & \text{ if } \lVert y \rVert < 1 \\ 0 & \text{ if } \lVert y \rVert \geq 1 \end{array} \right.$ for a given constant $\xi > 0$ such that $\int_{\mathbb{R}^{n+1}} \omega(y)dy = 1$. Hence, a simple change of variable shows that $\int_{\mathbb{R}^{n+1}} \omega_\sigma(y)dy = 1$. We also underline that $\omega_\sigma$ is non-negative and supported on the ball $B(0,\sigma)$. For any $y = (t,x) \in Y$ and any $\sigma \in (0,1]$, we have that $|V_1(y) - V_{1\sigma}(y)| = |V_1(y) - \int_{B(0,\sigma)} V_1(y - h) \omega_\sigma(h) dh | = |\int_{B(0,\sigma)} (V_1(y) - V_1(y - h)) \omega_\sigma(h) dh |$ as $\int_{B(0,\sigma)} \omega_\sigma(h)dh = 1$. We denote by $L_V$ an upper bound for the continuous function $\lVert \nabla V_1(y) \rVert_2$ over the compact set $\hat{Y} = \{ y \in \mathbb{R}^{n+1} \colon d(y,Y) \leq 1 \}$, which is a Lipschitz constant for the function $V_1$ on $\hat{Y}$. We deduce, by triangular inequality and non-negativity of $\omega_\sigma$ that for any $y \in Y$, \begin{align} |V_1(y) - V_{1\sigma}(y)| & \leq \int_{B(0,\sigma)} |V_1(y) - V_1(y - h)| \omega_\sigma(h) dh \\ & \leq \int_{B(0,\sigma)} L_V \lVert h \rVert \omega_\sigma(h) dh \\ & \leq L_V \sigma \int_{B(0,\sigma)} \lVert h/\sigma \rVert \omega(h/\sigma) \frac{1}{\sigma^{n+1}} dh \\ & \leq L_V \sigma \underbrace{\int_{B(0,1)} \lVert \tilde{h} \rVert \omega(\tilde{h}) d\tilde{h}.}_{\text{constant, denoted $\mathcal{I}$.}} \label{eq:boundingdiff} \end{align} By the properties of mollifiers \cite{hormander_analysis_2003}, we have $\partial_i V_{1\sigma}(y) = \partial_i ( V_1 \ast \omega_\sigma) = (\partial_i V_1 \ast \omega_\sigma)$ for any $i \in \{t,x_1, \dots, x_n\}$. Therefore, $\partial_t V_{1\sigma} (y) = \int_{B(0,\sigma)} \partial_t V_{1} (y-h)\omega_\sigma(h)dh$ and $\nabla_x V_{1\sigma}(y) = \int_{B(0,\sigma)} \nabla_x V_{1} (y-h)\omega_\sigma(h)dh$. Using the equality $\int_{B(0,\sigma)} \omega_\sigma(h)dh = 1$, we deduce that for any $y \in Y$, \begin{align} \hspace{-0.5cm} \partial_t V_{1\sigma} (y) + 1 + (\nabla_x V_{1\sigma}(y))^\top f(y,u) &= \int_{B(0,\sigma)} (\partial_t V_{1} (y-h) + 1 + (\nabla_x V_{1}(y-h))^\top f(y,u))\omega_\sigma(h)dh \label{eq:decfirst} \\ & = \int_{B(0,\sigma)} (\partial_t V_{1} (y-h) + 1 + (\nabla_x V_{1}(y-h))^\top f(y-h,u))\omega_\sigma(h)dh \\ & \quad \quad \quad + \int_{B(0,\sigma)} \nabla_x V_{1}(y-h)^\top(f(y,u) - f(y-h,u))\omega_\sigma(h)dh. \label{eq:declast} \end{align} We compute lower bounds for the two terms of the sum.
We start with the second term: using the Cauchy-Schwarz inequality, we notice that $\int_{B(0,\sigma)} \nabla_x V_{1}(y-h)^\top(f(y,u) - f(y-h,u))\omega_\sigma(h)dh \geq - \int_{B(0,\sigma)} \lVert \nabla_x V_{1}(y-h) \rVert \lVert f(y,u) - f(y-h,u) \rVert \omega_\sigma(h)dh$. Noticing that $\lVert \nabla_x V_{1}(y-h) \rVert \leq L_V$, since $y-h \in \hat{Y}$ for any $h \in B(0,\sigma) \subset B(0,1)$, and introducing the Lipschitz constant $L_f$ for $f$, we have \begin{align} \int_{B(0,\sigma)} \nabla_x V_{1}(y-h)^\top(f(y,u) - f(y-h,u))\omega_\sigma(h)dh \geq - L_V L_f \int_{B(0,\sigma)} \lVert h \rVert \omega_\sigma(h)dh = - L_V L_f \sigma \mathcal{I}. \label{eq:bound1} \end{align} We define $\eta = \frac{\epsilon}{2(T+2)}$. We introduce the compact set $Z = [0,T] \times X \times U$ and the family of compact sets $Z_\delta = \{ z \in \mathbb{R}^{n+m+1} \: \colon \: d(z,Z) \leq \delta \}$ for $\delta \in (0, 1]$. For any $z = (y,u) \in Z_1$, we introduce $\psi(z) = \partial_t V_{1} (y) + 1 + (\nabla_x V_{1}(y))^\top f(y,u)$. The function $\psi(z)$ is continuous and, according to Lemma~\ref{lem:valuefunction}, there exists $\sigma_1 > 0$ such that $\min_{z \in Z_{\sigma}} \psi(z) \geq \min_{z \in Z} \psi(z) - \frac{\eta}{2}$ for any $\sigma \in (0, \sigma_1]$. By feasibility of $V_1$ in \eqref{eq:lpc1}, we know that $\min_{z \in Z} \psi(z) \geq 0$, which yields that $\psi(z) \geq - \frac{\eta}{2}$ for any $z \in Z_{\sigma_1}$. We deduce that \begin{align} \int_{B(0,\sigma)} (\partial_t V_{1} (y-h) + 1 + (\nabla_x V_{1}(y-h))^\top f(y-h,u))\omega_\sigma(h)dh \geq - \int_{B(0,\sigma)} \frac{\eta}{2} \omega_\sigma(h)dh = - \frac{\eta}{2}, \label{eq:bound2} \end{align} since $(y-h,u) \in Z_\sigma$ for any $h \in B(0,\sigma)$. Combining the decomposition of Eqs.~\eqref{eq:decfirst}-\eqref{eq:declast} with the lower bounds of Eqs.~\eqref{eq:bound1} and \eqref{eq:bound2}, we deduce that \begin{align} \partial_t V_{1\sigma} (y) + 1 + (\nabla_x V_{1\sigma}(y))^\top f(y,u) \geq - (L_V L_f \sigma \mathcal{I} + \frac{\eta}{2}), \label{eq:boundinghjb} \end{align} for any $(y,u) = (t,x,u) \in [0,T] \times X \times U$ and $\sigma \in (0,\sigma_1]$. We define $\tilde{\sigma} = \min\{\sigma_1, \frac{\eta}{2 L_V L_f \mathcal{I}}, \frac{\eta}{L_V \mathcal{I}} \}$. From Eq.~\eqref{eq:boundingdiff} and Eq.~\eqref{eq:boundinghjb}, we deduce that \begin{align} V_{1 \tilde{\sigma}}(0,x_0) \geq V_1(0,x_0) - \eta \geq V^*(0,x_0) - \frac{\epsilon}{2} - \eta & \label{eq:boundobj}\\ V_{1 \tilde{\sigma}}(t,x) \leq V_{1}(t,x) + \eta \leq \eta&, \quad \forall (t,x) \in [0,T] \times K \label{eq:proofinal1}\\ \partial_t V_{1\tilde{\sigma}} (t,x) + 1 + \nabla_x V_{1\tilde{\sigma}}(t,x)^\top f(t,x,u) \geq -\eta &, \quad \forall (t,x,u) \in [0,T] \times X \times U. \label{eq:proofinal2} \end{align} From Lemma~\ref{lem:feas}, we deduce that $V(t,x) = V_{1 \tilde{\sigma}}(t,x) + \eta(t-1-T) \in C^\infty(\mathbb{R}^{n+1})$ is feasible in \eqref{eq:lpc1}. From Eq.~\eqref{eq:boundobj}, we deduce that $V(0,x_0) \geq V^*(0,x_0) - \frac{\epsilon}{2} - \eta - (1+T)\eta$, and by definition of $\eta$, $V(0,x_0) \geq V^*(0,x_0) - \epsilon$. \end{proof} The next theorem underlies the convergence proof of the hierarchy of semi-infinite problems in Sect.~\ref{sec:convexsipsubsolutions}: if we restrict to polynomial HJB subsolutions, the value of the problem \eqref{eq:lpc1} remains unchanged.
\begin{theorem} Under Assumption~\ref{as:convexF} and Assumption~\ref{as:finite}, and for $\mathcal{F} = \mathbb{R}[t,x_1, \dots, x_n]$, the value of the LP formulation \eqref{eq:lpc1} equals $V^*(0,x_0)$. \label{th:polyvf} \end{theorem} \begin{proof} We consider $\mathcal{F} = \mathbb{R}[t,x_1, \dots, x_n]$. We fix $\epsilon > 0$ and set $\eta = \frac{\epsilon}{2(T+2)}$; we will prove that there exists $V \in \mathbb{R}[t,x_1, \dots, x_n]$ that is feasible in \eqref{eq:lpc1} and such that $V(0,x_0) \geq V^*(0,x_0) - \epsilon$. According to Th.~\ref{th:smoothvf}, there exists a function $Q \in C^\infty(\mathbb{R}^{n+1})$ which is a subsolution of the HJB equation and such that $Q(0,x_0) \geq V^*(0,x_0) - \frac{\epsilon}{2}$. We notice that $Q$ has a locally Lipschitz gradient. Therefore, we can apply Lemma~\ref{lem:polapprox}. This yields, in particular, that for any $\nu > 0$, there exists a polynomial $w \in \mathbb{R}[t,x_1, \dots, x_n]$ such that for all $(t,x) \in [0,T] \times X$, $| w (t,x) - Q (t,x) | \leq \nu$ and $|\partial_i w (t,x) - \partial_i Q (t,x) | \leq \nu, \: i \in \{t,x_1, \dots, x_n \}$. We deduce that $|\partial_t Q (t,x) + \nabla_x Q(t,x)^\top f(t,x,u) - \partial_t w (t,x) - \nabla_x w(t,x)^\top f(t,x,u) | \leq |\partial_t Q (t,x) - \partial_t w (t,x) | + \sum_{i=1}^n |\partial_{x_i} Q (t,x) - \partial_{x_i} w (t,x) | M_i \leq \nu (1 + \sum_{i=1}^n M_i)$, where $M_i = \max_{(t,x,u) \in [0,T] \times X \times U} |f_i(t,x,u)|$. Therefore, we observe that for all $(t,x,u) \in [0,T] \times X \times U$, \begin{align} \partial_t w (t,x) + 1 + \nabla_x w (t,x)^\top f(t,x,u) & \geq \partial_t Q (t,x) + 1 + \nabla_x Q (t,x)^\top f(t,x,u) - \nu (1 + \sum_{i=1}^n M_i) \\ & \geq - \nu (1 + \sum_{i=1}^n M_i), \label{eq:hjbpol} \end{align} as $Q$ is a subsolution of the HJB equation. In summary, for $\nu = \eta(1 + \sum_{i=1}^n M_i)^{-1} \leq \eta$, \begin{align} w(0,x_0) \geq Q(0,x_0) - \eta \geq V^*(0,x_0) - \frac{\epsilon}{2} - \eta & \label{eq:firstpoly}\\ w(t,x) \leq Q(t,x) + \eta \leq \eta &, \quad \forall (t,x) \in [0,T] \times K \label{eq:secpoly} \\ \partial_t w (t,x) + 1 + \nabla_x w (t,x)^\top f(t,x,u) \geq -\eta&, \quad \forall (t,x,u) \in [0,T] \times X \times U \label{eq:lastpoly}, \end{align} the last inequality following from Eq.~\eqref{eq:hjbpol}. Based on Eqs.~\eqref{eq:secpoly}-\eqref{eq:lastpoly} and Lemma~\ref{lem:feas}, we notice that the polynomial $V(t,x) = w(t,x) + \eta(t-T-1)\in \mathbb{R}[t,x_1, \dots, x_n]$ is feasible in the problem \eqref{eq:lpc1}. By definition of $\eta = \frac{\epsilon}{2(T+2)}$, we see, based on Eq.~\eqref{eq:firstpoly}, that it satisfies $V(0,x_0) \geq V^*(0,x_0) - \frac{\epsilon}{2} - \eta - \eta(T+1) = V^*(0,x_0) - \epsilon$. \end{proof} \section{Convex semi-infinite programming to compute near-optimal subsolutions} \label{sec:convexsipsubsolutions} For $\mathcal{F}$ being either $C^1(\mathbb{R}^{n+1})$, $C^\infty(\mathbb{R}^{n+1})$, or $\mathbb{R}[t,x_1,\dots, x_n]$, the linear program \eqref{eq:lpc1} is infinite dimensional, and thus not tractable as it stands. Therefore, we next present a hierarchy of convex SIP problems that are solvable with a dedicated algorithm, to compute subsolutions to the HJB equation that are near-optimal in the problem \eqref{eq:lpc1}. \subsection{A hierarchy of linear semi-infinite programs} Instead of having an optimization space $\mathcal{F}$ that is infinite dimensional, we propose to restrict the search to the finite dimensional subspaces $\mathbb{R}_d[t,x_1,\dots, x_n]$ of polynomials of degree at most $d$.
This restricted dual problem is: \begin{align} \begin{array}{rll} \underset{V \in \mathbb{R}_d[t,x_1,\dots, x_n]}{\sup} & V(0,x_0) & \\ \text{s.t.} & \partial_t V (t,x) + 1 + \nabla_x V(t,x)^\top f(t,x,u) \geq 0 & \forall (t,x,u) \in [0,T] \times X \times U \\ & V(t,x) \leq 0 & \forall (t,x) \in [0,T] \times K. \end{array} \tag{\mbox{$R_d$}} \label{eq:lprd} \end{align} In the rest of the paper, we will denote by $N$ the dimension of the vector space $\mathbb{R}_d[t,x_1,\dots, x_n]$ and by $\Phi(t,x) \in \mathbb{R}^{N}$ a vector whose entries form a basis of this space. Both objects depend on $d$; this dependence is kept implicit for readability. For any $V \in \mathbb{R}_d[t,x_1,\dots, x_n]$, we introduce the vector $\theta$ of the coordinates of $V$ in the basis $\Phi$. Hence, we have the relation \begin{align} V(t,x) = \theta^\top \Phi(t,x) \in \mathbb{R}_d[t,x_1,\dots, x_n]. \end{align} Expressing problem~\eqref{eq:lprd} as an optimization problem over the vector of coefficients, we see that it is a linear semi-infinite program. \begin{proposition} For $d \in \mathbb{N}^*$, problem \eqref{eq:lprd} is a linear semi-infinite program, \textit{i.e.}, a linear program with a finite number of variables and an infinite number of constraints. More precisely, there exist a vector $c \in \mathbb{R}^N$, and a compact set $\mathcal{Y} \subset \mathbb{R}^{N+1}$ such that \eqref{eq:lprd} reads \begin{align} \begin{array}{rll} \underset{\theta \in \mathbb{R}^N}{\sup} & c^\top \theta & \\ \text{s.t.} & a^\top \theta + b \leq 0 & \forall (a,b) \in \mathcal{Y}. \end{array} \tag{${SIP}$} \label{eq:sip} \end{align} \end{proposition} \begin{proof} We define the vector $c = \Phi(0,x_0)$, and the compact sets \begin{align} & \mathcal{Y}_1 = \{ (-\partial_t\Phi(t,x) - \nabla_x\Phi(t,x)^\top f(t,x,u), -1), (t,x,u) \in [0,T] \times X \times U \} \\ & \mathcal{Y}_2 = \{ (\Phi(t,x),0), (t,x) \in [0,T] \times K \} \\ & \mathcal{Y} = \mathcal{Y}_1 \cup \mathcal{Y}_2. \end{align} We see that for any $V_\theta(t,x) = \theta^\top \Phi(t,x) \in \mathbb{R}_d[t,x_1,\dots, x_n]$, $V_\theta(0,x_0) = c^\top \theta$, and $V_\theta(t,x)$ is feasible in $\eqref{eq:lprd}$ if and only if $a^\top \theta + b \leq 0$, for all $(a,b) \in \mathcal{Y}$. \end{proof} We will see in the next section how to efficiently solve those semi-infinite programs. Prior to that, we state the convergence of this hierarchy of semi-infinite programs. \begin{theorem} The sequence $\mathsf{val}\eqref{eq:lprd}$ converges to $V^*(0,x_0)$ as $d \rightarrow \infty$. \end{theorem} \begin{proof} On the one hand, we introduce the notation $v_d = \mathsf{val}\eqref{eq:lprd}$. This sequence is clearly nondecreasing and bounded above by $V^*(0,x_0)$. Hence, it converges to a value $\ell$, and any subsequence converges to $\ell \leq V^*(0,x_0)$. On the other hand, Th.~\ref{th:polyvf} guarantees that there exists a sequence of polynomials $w_k \in \mathbb{R}[t,x_1,\dots,x_n]$ that are feasible in \eqref{eq:lpc1} and such that $w_k(0,x_0) \rightarrow_k V^*(0,x_0)$. By definition, we have $v_{d_k} \geq w_k(0,x_0)$, where $d_k = \mathsf{deg}(w_k)$. Up to the extraction of a subsequence of $(w_k)$, we can assume that the sequence $(d_k)$ is increasing, so that $(v_{d_k})_{k\in\mathbb{N}}$ is a subsequence of $(v_d)_{d\in\mathbb{N}}$. As $v_{d_k} \rightarrow \ell$ and $w_k(0,x_0) \rightarrow_k V^*(0,x_0)$, we deduce that $\ell \geq V^*(0,x_0)$, which yields the equality $\ell = V^*(0,x_0)$.
\end{proof} \subsection{Regularization and solution of the semi-infinite programs} We introduce a quadratic regularization in the semi-infinite program \eqref{eq:sip}, yielding the following formulation depending on $\mu \in \mathbb{R}_{++}$: \begin{align} \begin{array}{rll} \underset{\theta \in \mathbb{R}^N}{\max} & c^\top \theta - \frac{\mu}{2} \lVert \theta \rVert^2 & \\ \text{s.t.} & a^\top \theta + b \leq 0 & \forall (a,b) \in \mathcal{Y}. \end{array} \tag{\mbox{${SIP}_\mu$}} \label{eq:sipr} \end{align} \begin{proposition} For any $\mu > 0$, the semi-infinite program \eqref{eq:sipr} has a unique optimal solution with value $\mathsf{val}\eqref{eq:sipr} \leq V^*(0,x_0)$. Moreover, $\mathsf{val}\eqref{eq:sipr} \underset{\mu \rightarrow 0}{\rightarrow} \mathsf{val}\eqref{eq:sip}$. \end{proposition} \begin{proof} The feasible set of \eqref{eq:sipr} being convex, and the objective function being strongly concave, this optimization problem admits a unique maximizer $\theta$. By definition, $\mathsf{val}\eqref{eq:sipr} = c^\top \theta - \frac{\mu}{2} \lVert \theta \rVert^2 \leq c^\top \theta \leq \mathsf{val}\eqref{eq:sip}$, since $\theta$ is also feasible in the maximization problem \eqref{eq:sip}. Additionally, $\mathsf{val}\eqref{eq:sip} \leq V^*(0,x_0)$, since any function $V$ feasible in $\eqref{eq:lprd}$ satisfies $V(0,x_0) \leq V^*(0,x_0)$. We also notice that the function $\mu \mapsto \mathsf{val}\eqref{eq:sipr}$ is nonincreasing, so it admits a limit $\ell$ at $0^+$; due to the aforementioned inequalities, $\ell \leq \mathsf{val}\eqref{eq:sip}$. For any $\mu, \epsilon > 0$, if we take $\theta_\epsilon$ to be an $\epsilon$-optimal solution in the problem \eqref{eq:sip}, we see that $\mathsf{val}\eqref{eq:sip} - \epsilon - \frac{\mu}{2} \lVert \theta_\epsilon \rVert^2 \leq c^\top \theta_\epsilon - \frac{\mu}{2} \lVert \theta_\epsilon \rVert^2 \leq \mathsf{val}\eqref{eq:sipr}$. For a fixed $\epsilon$, and taking $\mu \rightarrow 0^+$, we obtain $\mathsf{val}\eqref{eq:sip} - \epsilon \leq \ell$. This being true for any $\epsilon > 0$, we deduce that $\mathsf{val}\eqref{eq:sip}\leq \ell$, which proves the equality. \end{proof} Setting the regularization parameter $\mu$ in practice involves a trade-off between the computational tractability of the semi-infinite program \eqref{eq:sipr} and the accuracy of the approximation of the original problem \eqref{eq:sip}. To solve the formulation \eqref{eq:sipr}, we propose to use a standard algorithm for convex semi-infinite programming, called the cutting-plane (CP) algorithm \cite{cerulli_convergent_2022}. To this end, we assume that we have access to a \textit{separation oracle} computing, for any $\theta \in \mathbb{R}^N$, \begin{align} \phi(\theta) = \max_{(a,b) \in \mathcal{Y}} a^\top \theta + b, \label{eq:oracle} \end{align} together with an associated maximizer. Solving the optimization problem in Eq.~\eqref{eq:oracle} may be computationally intensive, since the compact set $\mathcal{Y}$ may not be convex. Therefore, we only assume access to an oracle with relative optimality gap $\delta \in [0,1)$ computing $(a,b) \in \mathcal{Y}$ such that $\phi(\theta)- (a^\top \theta + b) \leq \delta |\phi(\theta)|$. We treat this oracle as a black box, regardless of its implementation, via global optimization, gridding, interval arithmetic or sampling for instance.
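
For concreteness, a sampling-based oracle could be implemented along the lines of the following \verb+Julia+ sketch. It is only illustrative: the routines \verb+Phi+, \verb+dPhidt+, \verb+gradxPhi+, \verb+f+, \verb+sample_XU+ and \verb+sample_K+ are assumptions (problem-specific ingredients not specified here), and such a scheme only returns a lower estimate of $\phi(\theta)$, so that an exact global solver is still needed whenever a guaranteed relative gap $\delta$ is required.
\begin{verbatim}
# Minimal, illustrative sketch of a sampling-based separation oracle for (SIP).
# Assumed (problem-specific) ingredients, not defined here:
#   Phi(t, x)      -> basis vector of length N
#   dPhidt(t, x)   -> time derivatives of the basis (length N)
#   gradxPhi(t, x) -> N x n matrix of spatial gradients of the basis
#   f(t, x, u)     -> dynamics, vector of length n
#   sample_XU()    -> random (t, x, u) in [0,T] x X x U
#   sample_K()     -> random (t, x) in [0,T] x K
using LinearAlgebra

function sampling_oracle(theta::Vector{Float64}; nsamples::Int = 500_000)
    best_val, best_a, best_b = -Inf, zero(theta), 0.0
    for _ in 1:nsamples
        # Family Y1: HJB inequality at a random point (t, x, u)
        t, x, u = sample_XU()
        a = -dPhidt(t, x) - gradxPhi(t, x) * f(t, x, u)
        b = -1.0
        if dot(a, theta) + b > best_val
            best_val, best_a, best_b = dot(a, theta) + b, a, b
        end
        # Family Y2: constraint V(t,x) <= 0 at a random point with x in K
        t, x = sample_K()
        a, b = Phi(t, x), 0.0
        if dot(a, theta) + b > best_val
            best_val, best_a, best_b = dot(a, theta) + b, a, b
        end
    end
    return best_val, best_a, best_b  # lower estimate of phi(theta) and a cut
end
\end{verbatim}
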
\begin{algorithm}[H] \caption{Cutting-plane algorithm for \eqref{eq:sipr}} \label{chap:alg:CP} \hspace*{\algorithmicindent}\textbf{Input:} {An oracle with parameter $\delta \in [0,1)$, a tolerance $\epsilon \in \mathbb{R}_+$, a finite set $\mathcal{Y}^0 \subset \mathcal{Y}$, $k\gets0$} \begin{algorithmic}[1] \setcounter{ALG@line}{-1} \While{true} \State Compute $\theta^k$, the solution of the convex quadratic programming problem \begin{align} \begin{array}{rll} \underset{\theta \in \mathbb{R}^N}{\max} & c^\top \theta - \frac{\mu}{2} \lVert \theta \rVert^2 & \\ \text{s.t.} & a^\top \theta + b \leq 0 & \forall (a,b) \in \mathcal{Y}^k. \end{array} \label{eq:master} \end{align} \State Call the oracle to compute $(a^k,b^k)$, an approximate solution of \eqref{eq:oracle} with relative optimality gap $\delta$. \label{step:y^k} \If {$(a^k)^\top \theta^k + b^k \leq \epsilon$} \State Return $\theta^k$. \label{stepterm} \Else \State{$\mathcal{Y}^{k+1} \gets \mathcal{Y}^k \cup \{ (a^k,b^k)\} $} \State {$k \gets k + 1$} \EndIf \EndWhile \end{algorithmic} \end{algorithm} Before stating the termination and the convergence of Algorithm~\ref{chap:alg:CP}, we introduce the vector $\hat{\theta} \in \mathbb{R}^N$ of coordinates of the polynomial $\hat{v}(t,x) = t - 1 - T$ in the basis $\Phi(t,x)$, and we notice that this element helps in obtaining feasible solutions since $\phi(\hat{\theta}) = -1$: due to Lemma~\ref{lem:feas}, we observe that if $\theta$ has a feasibility error less than or equal to $\eta \geq 0$ in \eqref{eq:sip} and \eqref{eq:sipr}, then $\theta + \eta \hat{\theta}$ is feasible in \eqref{eq:sip} and \eqref{eq:sipr}. For any $\mu > 0$, we define the convex and compact set $\mathcal{X}_\mu = \{ \theta \in \mathbb{R}^N \colon c^\top \theta - \frac{\mu}{2} \lVert \theta \rVert^2 \geq c^\top \hat{\theta} - \frac{\mu}{2} \lVert \hat{\theta} \rVert^2 \}$, and we define $R_\mu = \sup_{\theta \in \mathcal{X}_\mu } \lVert \theta \rVert$. Finally, we define the function $r_\mu(e) = e (1+T + \mu R_\mu^2 (1+\frac{e}{2}))$. Note that $r_\mu(e) \underset{e \rightarrow 0}{\rightarrow} 0$. \begin{theorem} If $\epsilon > 0$, Algorithm~\ref{chap:alg:CP} stops after a finite number $K$ of iterations, and $\theta^K + \frac{\epsilon}{1-\delta} \hat{\theta}$ is feasible and $r_\mu(\frac{\epsilon}{1-\delta})$-optimal in \eqref{eq:sipr}. If $\epsilon = 0$, the following alternative holds: (a) either Algorithm~\ref{chap:alg:CP} stops after a finite number of iterations, and the last iterate is the optimal solution of \eqref{eq:sipr}, or (b) it generates an infinite sequence, and the optimality gap and the feasibility error converge towards zero with an asymptotic rate in $O(\frac{1}{k})$. \end{theorem} \begin{proof} First of all, we notice that during the execution of Algorithm~\ref{chap:alg:CP}, we necessarily have $\theta^k \in \mathcal{X}_\mu$, since $\hat{\theta}$ is a feasible solution in \eqref{eq:master} with value $c^\top \hat{\theta} - \frac{\mu}{2} \lVert \hat{\theta} \rVert^2$, therefore by optimality of $\theta^k$ in \eqref{eq:master}, $c^\top \theta^k - \frac{\mu}{2} \lVert \theta^k \rVert^2 \geq c^\top \hat{\theta} - \frac{\mu}{2} \lVert \hat{\theta} \rVert^2$.
The finite convergence of Algorithm~\ref{chap:alg:CP} if $\epsilon >0$, and the convergence rate in the case $\epsilon = 0$ (if no finite convergence occurs), follow from \cite[Th.~1.1-1.2]{oustry_global_2023} (which is an extension of \cite{cerulli_convergent_2022}): we apply these theorems to the problem \eqref{eq:sipr} with the additional constraint $\theta \in \mathcal{X}_\mu$. As previously explained, this additional constraint does not change the execution of the algorithm, but it enables us to satisfy the compactness assumption of \cite[Th.~1.1-1.2]{oustry_global_2023}. We also note that the objective function is $\mu$-strongly concave, and that $\hat{\theta} \in \mathcal{X}_\mu$ is a strictly feasible point with respect to the semi-infinite constraints. We finish the proof by showing that if Algorithm~\ref{chap:alg:CP} stops at iteration $K$, then $\tilde{\theta} = \theta^K + \frac{\epsilon}{1-\delta} \hat{\theta}$ is feasible and $r_\mu(\frac{\epsilon}{1-\delta})$-optimal in \eqref{eq:sipr}. If Algorithm~\ref{chap:alg:CP} stops at iteration $K$, this means that $(a^K)^\top \theta^K + b^K \leq \epsilon$. If $\phi(\theta^K)\leq 0$, then $\theta^K$ is feasible in \eqref{eq:sipr}, and so is $\tilde{\theta}$ due to Lemma~\ref{lem:feas}. If $\phi(\theta^K) > 0$, then, by the property of the $\delta$-oracle, $(1-\delta) \phi(\theta^K) \leq (a^K)^\top \theta^K + b^K \leq \epsilon$, and we deduce that the feasibility error is $\phi(\theta^K) \leq \frac{\epsilon}{1-\delta}$. With Lemma~\ref{lem:feas}, we deduce that $\tilde{\theta}$ is feasible in \eqref{eq:sipr}. We also note that \begin{align} c^\top \tilde{\theta} - \frac{\mu}{2} \lVert \tilde{\theta} \rVert^2 &= c^\top \theta^K + \frac{\epsilon}{1-\delta} c^\top \hat{\theta} - \frac{\mu}{2} \lVert \theta^K + \frac{\epsilon}{1-\delta} \hat{\theta} \rVert^2 \\ & \geq c^\top \theta^K - \frac{\epsilon}{1-\delta} (1+T) - \frac{\mu}{2} \left( \lVert \theta^K \rVert^2 + \frac{2 \epsilon}{1-\delta} \lVert \theta^K \rVert \: \lVert\hat{\theta} \rVert + \frac{\epsilon^2}{(1-\delta)^2} \lVert \hat{\theta} \rVert^2 \right), \end{align} since $c^\top \hat{\theta} = V_{\hat{\theta}}(0,x_0) = -(1+T)$, and due to the Cauchy-Schwarz inequality. By optimality of $\theta^K$ in \eqref{eq:master}, which is a relaxation of \eqref{eq:sipr}, we know that $\mathsf{val}\text{\eqref{eq:sipr}} \leq c^\top \theta^K - \frac{\mu}{2} \lVert \theta^K \rVert^2$. Applying this, we deduce that \begin{align} c^\top \tilde{\theta} - \frac{\mu}{2} \lVert \tilde{\theta} \rVert^2 & \geq \mathsf{val}\text{\eqref{eq:sipr}}- \frac{\epsilon}{1-\delta} (1+T) - \frac{\mu}{2} \left( \frac{2 \epsilon}{1-\delta} \lVert \theta^K \rVert \: \lVert\hat{\theta} \rVert + \frac{\epsilon^2}{(1-\delta)^2} \lVert \hat{\theta} \rVert^2 \right) \\ & \geq \mathsf{val}\text{\eqref{eq:sipr}}- \frac{\epsilon}{1-\delta} (1+T) - \mu R_\mu^2 \left( \frac{ \epsilon}{1-\delta} + \frac{\epsilon^2}{2(1-\delta)^2} \right) \\ & \geq \mathsf{val}\text{\eqref{eq:sipr}}- r_\mu(\frac{\epsilon}{1-\delta}), \end{align} the second inequality following from the fact that $\lVert \hat{\theta} \rVert \leq R_\mu$ and $\lVert \theta^K \rVert \leq R_\mu$, as $\hat{\theta}, \theta^K \in \mathcal{X}_\mu$. \end{proof} \section{Feedback control based on approximate value functions} \label{sec:control} In the previous section, we have seen how to compute subsolutions of the HJB equation based on convex semi-infinite programming, and how to deduce a lower bound on the minimal travel time.
In this section, we focus on how subsolutions of the HJB equation, which approximate the value function $V^*$, enable one to recover a near-optimal control for the minimal time control problem \eqref{eq:system}-\eqref{eq:inf}. \subsection{Controller design and existence of trajectories} For a given continuously differentiable function $V \in C^1(\mathbb{R}^{n+1})$, we define the set-valued maps \begin{align} \mathcal{U}_V(t,x) &= \underset{u \in U}{\mathsf{argmin}} \: \nabla_x V (t,x)^\top f(t,x,u) \label{eq:selection} \\ \mathcal{I}_V(t,x) &= \{ u \in \mathcal{U}_V(t,x) \colon f(t,x,u) \in T_X(x) \}, \end{align} where $T_X(x)$ is the contingent cone to $X$ at point $x$ (see Introduction). In line with previous works designing feedback controllers based on approximate value functions \cite{henrion_nonlinear_2008,jones_polynomial_2023}, we are interested in the trajectories satisfying the following differential inclusion depending on the function $V \in C^1(\mathbb{R}^{n+1})$: \begin{align} \dot{x}_V(t) = f(t,x_V(t),u_V(t)) \text{ with } u_V(t) \in \mathcal{U}_V(t,x_V(t)). \label{eq:diffinclusion} \tag{\mbox{$CL_V$}} \end{align} Intuitively, such a feedback control pushes the system in the descent direction of the function $V$. The following proposition confirms that, if the function $V \in C^1(\mathbb{R}^{n+1})$ is optimal in problem \eqref{eq:lpc1}, then any minimal time trajectory satisfies the differential inclusion \eqref{eq:diffinclusion} with respect to $V$. \begin{proposition} Under Assumptions~\ref{as:convexF}-\ref{as:finite}, we consider an optimal trajectory $(x^*(\cdot),u^*(\cdot))$ of the minimal time control problem \eqref{eq:system}-\eqref{eq:inf} starting from $(0,x_0)$, with hitting time $\tau^* = V^*(0,x_0)$. If the linear program \eqref{eq:lpc1}, for $\mathcal{F} = C^1(\mathbb{R}^{n+1})$, admits an optimal solution $V$, then, for almost every $t \in [0, \tau^*],$ \begin{align} u^*(t) \in \mathcal{I}_V(t,x^*(t)) \subset \mathcal{U}_{V}(t,x^*(t)). \label{eq:optcommand} \end{align} In particular, the trajectory $(x^*(\cdot),u^*(\cdot))$ satisfies the differential inclusion \eqref{eq:diffinclusion}. \label{prop:opttraj} \end{proposition} \begin{proof} We define the function $\alpha(t) = V (t,x^*(t)) + t$, which is Lipschitz continuous, hence differentiable almost everywhere. We have that $\alpha'(t) = \partial_t V (t,x^*(t)) + 1 + \nabla_x V (t,x^*(t))^\top f(t,x^*(t),u^*(t))$, for almost all $t \in [0,\tau^*]$. Since $V$ is feasible in \eqref{eq:lpc1}, and therefore satisfies Eq.~\eqref{eq:subsol1}, and since $(x^*(t),u^*(t)) \in X \times U$ a.e. on $[0,\tau^*]$, we know that $\alpha'(t) \geq 0$ a.e. on $[0,\tau^*]$. This proves that the function $\alpha$ is non-decreasing over $[0,\tau^*]$. By optimality of $V$ in \eqref{eq:lpc1}, and due to Th.~\ref{th:duality} (Assumptions~\ref{as:convexF}-\ref{as:finite} are satisfied), $\alpha(0) = V(0,x_0) = \mathsf{val}\eqref{eq:lpc1} = \tau^*$. Moreover, $\alpha(\tau^*)= \tau^* +V(\tau^*,x^*(\tau^*)) \leq \tau^*$, since $V$ satisfies Eq.~\eqref{eq:subsol2} and $x^*(\tau^*) \in K$. From $\alpha(\tau^*) \leq \alpha(0)$, we obtain that $\alpha$ is constant on $[0,\tau^*]$. Hence, $\partial_t V(t,x^*(t)) + 1 + \nabla_x V (t,x^*(t))^\top f(t,x^*(t),u^*(t)) = 0$, meaning \begin{align} \nabla_x V (t,x^*(t))^\top f(t,x^*(t),u^*(t)) = -(\partial_t V (t,x^*(t)) + 1), \text{ a.e. on } [0,\tau^*].
\label{eq:val} \end{align} As $V$ satisfies Eq.~\eqref{eq:subsol1}, we have that $\nabla_x V (t,x^*(t))^\top f(t,x^*(t),u) \geq -(\partial_t V (t,x^*(t)) + 1)$ for all $t \in [0,\tau^*]$ and for all $u\in U$. Together with Eq.~\eqref{eq:val}, we deduce that $u^*(t) \in \mathcal{U}_V(t,x^*(t))$ for almost all $t \in [0, \tau^*]$. Based on this fact, Lemma~\ref{lem:tangentcone} yields that for almost all $t \in [0, \tau^*]$, $f(t,x^*(t),u^*(t)) \in T_X(x^*(t))$. Therefore, for almost all $t \in [0, \tau^*]$, $u^*(t) \in \mathcal{I}_V(t,x^*(t))$. \end{proof} We just saw that whenever $V \in C^1(\mathbb{R}^{n+1})$ is optimal in the linear program \eqref{eq:lpc1}, any minimal time trajectory is a solution of the differential inclusion \eqref{eq:diffinclusion} associated with the function $V$. However, we may not be able to compute such an optimal function exactly in practice, if only because it may not exist. The next theorem states the existence of closed-loop trajectories following \eqref{eq:diffinclusion}, for any function $V \in C^1(\mathbb{R}^{n+1})$. \begin{theorem} Under Assumptions~\ref{as:convexF}-\ref{as:finite}, if $V \in C^1(\mathbb{R}^{n+1})$ is such that for any $(t,x) \in \mathbb{R}_+ \times X, \mathcal{I}_V(t,x) \neq \emptyset$, then there exists a trajectory $(x_V(\cdot),u_V(\cdot))$ starting at $(0,x_0)$, satisfying the differential inclusion \eqref{eq:diffinclusion} over $[0,\infty)$ and such that $x_V(t) \in X$ for almost all $t \in [0,\infty)$. \label{th:existencetraj} \end{theorem} \begin{proof} We introduce an auxiliary control system to reduce to a time-invariant system with a convex control set, so as to fit in the setting of \cite[Th.~6.6.6]{aubin_jean-pierre_viability_1991}. In what follows, we use the notation $y = (t,x)$ again. We introduce two objects: the set-valued map $\hat{U}(y) = \{ f(y,u), u \in U \}$ and the function $\hat{f}(y,v) = \begin{pmatrix} 1 \\ v \end{pmatrix}$ for $y \in \mathbb{R}^{n+1}$ and $v \in \mathbb{R}^{n}$. According to the terminology introduced in \cite[Def.~6.1.3]{aubin_jean-pierre_viability_1991}, $(\hat{U},\hat{f})$ is a Marchaud control system, as (i) $\{(y,v) \in \mathbb{R}^{2n+1} \colon v \in \hat{U}(y)\} $ is closed, (ii) $\hat{f}$ is continuous, (iii) the velocity set $\{1 \} \times f(y,U)$ is convex according to Assumption~\ref{as:convexF}, and (iv) $\hat{f}$ has linear growth, and so does $\hat{U}$, since $f$ is Lipschitz continuous and $U$ is bounded. We introduce $\mathcal{C} = \mathbb{R}_+ \times X$ and define the regulation map $REG(y) = \{ v \in \hat{U}(y) \colon \hat{f}(y,v) \in T_{\mathcal{C}}(y) \}$. We also introduce the set-valued map $SEL(y) = \underset{v \in \hat{U}(y)}{\text{argmin}} \: \nabla_x V(y)^\top v$. We now prove that the graph of $SEL$ is closed. For any converging sequence $(y_k,v_k) \rightarrow (\bar{y},\bar{v})$ with $v_k \in SEL(y_k)$, we see that for all $k \in \mathbb{N}$, $v_k = f(y_k,u_k)$ for some $u_k \in U$ and $\nabla_x V(y_k)^\top f(y_k,u_k) = h(y_k)$, where $h(y_k) = \min_{u \in U} \nabla_x V(y_k)^\top f(y_k,u)$. Up to extracting a subsequence of $u_k$, we can assume that $u_k \rightarrow \bar{u}$, as $U$ is compact. Note that $h$ is continuous, by application of the Maximum Theorem \cite[Th.~2.1.6]{aubin_jean-pierre_viability_1991}, insofar as (i) $(y,u) \mapsto \nabla_x V(y)^\top f(y,u)$ is continuous, therefore lower and upper semicontinuous, (ii) the set-valued map $M(y)= U$ is compact-valued, and lower and upper semicontinuous since it is constant.
By continuity of $h$, $\nabla V$ and $f$, we conclude that $\nabla_x V(\bar{y})^\top \bar{v} = \nabla_x V(\bar{y})^\top f(\bar{y},\bar{u}) = h(\bar{y}) = \underset{v \in \hat{U}(\bar{y})}{\min} \: \nabla_x V(\bar{y})^\top v$, meaning that $\bar{v} = f(\bar{y},\bar{u}) \in SEL(\bar{y})$. We notice that if $u \in \mathcal{I}_V(y)$, then $v = f(y,u) \in REG(y) \cap SEL(y)$. As $\mathcal{I}_V(y) \neq \emptyset$ for all $y \in \mathcal{C}$ (by assumption), $REG(y) \cap SEL(y) \neq \emptyset$. Together with the closedness of the graph of $SEL$, this means $SEL$ is a selection procedure of $REG$, according to the terminology of \cite[Def.~6.5.2]{aubin_jean-pierre_viability_1991}, and has convex values. We underline that $REG(y) \neq \emptyset$, for all $y \in \mathcal{C}$, \textit{i.e.}, $\mathcal{C}$ is a viability domain for $(\hat{U},\hat{f})$. As $(0, x_0) \in \mathcal{C}$, \cite[Th.~6.6.6]{aubin_jean-pierre_viability_1991} yields the existence of a solution $(y(\cdot), v(\cdot))$ such that $y(t) \in \mathcal{C}$, $v(t) \in REG(y(t))$ and \begin{align} v(t) \in SEL(y(t)) \cap REG(y(t)), \label{eq:argmin} \end{align} for almost all $t \in [0, \infty)$. We notice first that $y_1(0) = 0$ and $\dot{y}_1(t) = 1$ for almost all $t \geq 0$, thus $y_1(t) = t$. Hence, we can indeed see $y(t)$ as $(t,x(t))$, with $x(0) = x_0$ and $\dot{x}(t) = v(t)$. Moreover, $v(t) = f(t,x(t),u(t))$ for a given $u(t) \in U$, since $v(t) \in \hat{U}(y(t)) = f(t,x(t),U)$ a.e. on $[0,\infty)$. We deduce from $v(t) \in SEL(y(t))$, which comes from \eqref{eq:argmin}, that $u(t) \in \mathcal{U}_V(t,x(t))$. Moreover, we deduce from $y(t) \in \mathcal{C}$ that $x(t) \in X$ a.e. on $[0,\infty)$. \end{proof} \begin{remark} The condition $\mathcal{I}_V(t,x) \neq \emptyset$ in Th.~\ref{th:existencetraj} may appear restrictive, because it is not evident why a vector $f(t,x,u_V)$ minimizing $\nabla_x V (t,x)^\top f(t,x,u)$ over $u \in U$ would belong to $T_X(x)$. However, we have seen that under the hypotheses of Prop.~\ref{prop:opttraj}, Eq.~\eqref{eq:optcommand} yields $\mathcal{I}_V(t,x) \neq \emptyset$. Moreover, should the condition $\mathcal{I}_V(t,x) \neq \emptyset$ not be satisfied, we could enlarge the definition of $\mathcal{U}_V(t,x)$ in $\mathcal{U}_{V,\epsilon}(t,x) = \underset{u \in U}{\mathsf{argmin}_\epsilon} \: \nabla_x V (t,x)^\top f(t,x,u)$, so that for $\epsilon > 0$ large enough, $\mathcal{I}_{V, \epsilon}(t,x) = \{ u \in \mathcal{U}_{V,\epsilon}(t,x) \colon f(t,x,u) \in T_X(x) \}\neq \emptyset$. \end{remark} \subsection{Performance of the feedback controller depending on the value function approximation error} Previously, we introduced closed-loop trajectories satisfying the differential inclusion \eqref{eq:diffinclusion} with respect to a function $V \in C^1(\mathbb{R}^{n+1})$. In this section, we state some performance guarantees on those trajectories, depending on some properties of the function $V$. In the following, we assume that, up to an enlargement of the time horizon, the system can reach the target set starting from any initial condition $(t,x) \in [0, T] \times X$, and that the associated value function is Lipschitz. \begin{assumption} There exists a time $T^\sharp \geq T$ such that the minimal time control problem \eqref{eq:system}-\eqref{eq:inf} defined over $[0, T^\sharp]$ has a value function $V^\sharp$ which takes finite values over $Y = [0,T] \times X$, and is Lipschitz continuous. 
\label{as:regV} \end{assumption} We emphasize that, under Assumption~\ref{as:regV}, $V^*(t,x) < \infty$ implies $V^*(t,x) = V^\sharp(t,x)$ for any $(t,x) \in Y$. Since $V^\sharp$ is Lipschitz continuous over $Y \subset \mathbb{R}^{n+1}$, it admits a Lipschitz continuous extension over $\mathbb{R}^{n+1}$ \cite[Chap.~3, Th.~1]{gariepy_measure_2015}. We identify the value function with its extension to $\mathbb{R}^{n+1}$, so that we can speak of the Clarke generalized derivative $\partial^c V^\sharp(y)$ of $V^\sharp$ at $y \in Y$. For any $V\in C^1(\mathbb{R}^{n+1})$, we introduce the notation \begin{align} \lVert \nabla V - \nabla V^\sharp \rVert_\infty = \sup_{y \in Y } \sup_{g\in\partial^c V^\sharp(y)} \lVert \nabla V(y) - g \rVert_2. \end{align} We also define the constant $C_f = \sup_{(t,x,u) \in Y \times U} \lVert f(t,x,u) \rVert <\infty$. \begin{theorem} Let $V\in C^1(\mathbb{R}^{n+1})$ be a continuously differentiable function, and let $(x_V(\cdot),u_V(\cdot))$ be a closed-loop trajectory starting at $(0,x_0)$ satisfying the differential inclusion \eqref{eq:diffinclusion} and the state constraints over $[0, T]$. We define $t_V = \sup \{t \in [0,T] \colon x_V([0,t])\subset X \setminus K \}$. Then, under Assumptions~\ref{as:convexF}-\ref{as:regV}, \begin{align} V^\sharp(t, x_V(t)) \leq (\tau^* - t) + 2 t (1 + C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty \quad \forall t \in[0, t_V], \label{eq:timeerror} \end{align} where $\tau^* = V^*(0,x_0) = V^\sharp(0,x_0) \leq t_V$. In particular, we notice that \begin{align} V^\sharp(\tau^*, x_V(\tau^*)) \leq 2 \tau^* (1 + C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty. \label{eq:timeerrortau} \end{align} \label{th:perf} \end{theorem} In Eq.~\eqref{eq:timeerrortau}, $V^\sharp(\tau^*, x_V(\tau^*))$ measures how far the closed-loop trajectory $(x_V(\cdot),u_V(\cdot))$ is from the target set $K$ at the moment when the time-optimal trajectory reaches $K$. As a corollary, we give a condition for the closed-loop trajectory $(x_V(\cdot),u_V(\cdot))$ to effectively reach the target set $K$, with a bounded delay compared to the time-optimal trajectory. \begin{corollary} Under the same hypotheses as Th.~\ref{th:perf}, if $\lVert \nabla V - \nabla V^\sharp \rVert_\infty \leq \frac{1 - \tau^*/T}{2 (1 + C_f)}$, then \begin{align} x_V(t_V) \in K \text{ with } t_V \in [\tau^*, \frac{1}{1 - 2 (1 + C_f) \lVert \nabla V - \nabla V^\sharp\rVert_\infty}\tau^*]. \end{align} \label{cor:perf} \end{corollary} We underline that the hitting time $t_V \geq \tau^*$ converges to the minimal time $\tau^*$ when the approximation error $\lVert \nabla V - \nabla V^\sharp\rVert_\infty$ vanishes. \begin{proof}[Proof of Th.~\ref{th:perf} and Cor.~\ref{cor:perf}] For any Lipschitz continuous function $F: \mathbb{R}^{n+1} \to \mathbb{R}$, we recall that $\partial^c F(y)$ denotes the Clarke generalized derivative of $F$ at $y$, and we define $H_F$ as \begin{align} \label{eq:defHmin} H_F(y) = 1 + \min_{\begin{subarray}{c} u \in U \\ g \in \partial^c F(y) \end{subarray}} \{ g^\top \begin{pmatrix} 1 \\ f(y,u) \end{pmatrix} \}. \end{align} The minimum is attained by continuity of the objective, and by compactness of $U$ and $\partial^c F(y)$ (see \cite{clarke_generalized_1975}). Note also that for any $V \in C^1(\mathbb{R}^{n+1})$, for any $y = (t,x) \in \mathbb{R}^{n+1}$, $H_V(t,x) = 1 + \partial_t V(t,x) + \min_{u \in U} \nabla_x V(t,x)^\top f(t,x,u)$, and the argmin is $\mathcal{U}_V(t,x)$.
By application of the Maximum Theorem \cite[Th.~2.1.6]{aubin_jean-pierre_viability_1991}, we know that $H_{F}$ is lower semi-continuous, since (i) $\partial^c F(y)$ is a compact-valued and upper semi-continuous set-valued map \cite{clarke_generalized_1975}, therefore so is $y \mapsto U \times \partial^c F(y)$, and (ii) $(y,u,g) \mapsto g^\top \begin{pmatrix} 1 \\ f(y,u) \end{pmatrix} $ is continuous. First, we take any $y_1 = (t_1,x_1) \in [0, T] \times X \setminus K$, and we prove that $H_{V^\sharp}(t_1,x_1) \leq 0$. According to Assumption~\ref{as:regV}, $V^\sharp(t_1,x_1) < \infty$, and according to Th.~\ref{th:exopti} applied to the control system \eqref{eq:system}-\eqref{eq:inf} on the interval $[0 , T^\sharp]$, there exists an optimal trajectory $(x(\cdot), u(\cdot))$ over $[t_1, t_2]$ (with $t_2 > t_1$ since $x_1 \notin K$) starting from $(t_1, x_1)$. By definition, $V^\sharp(t_1,x_1) = t_2 - t_1$. We can also prove that for all $t \in [t_1, t_2]$, $V^\sharp(t,x(t)) = t_2 - t$: (i) the trajectory restricted to $[t, t_2]$ yields an admissible trajectory starting from $(t,x(t))$, therefore $V^\sharp(t,x(t)) \leq t_2 - t$, and (ii) for an optimal trajectory $(\tilde{x}(\cdot), \tilde{u}(\cdot))$ starting from $(t,x(t))$ over $[t, t_3]$, the trajectory following $(x(\cdot), u(\cdot))$ over $[t_1, t]$ and $(\tilde{x}(\cdot), \tilde{u}(\cdot))$ over $[t, t_3]$ is admissible and starting from $(t_1, x_1)$, therefore, $V^\sharp(t,x(t)) + (t-t_1) \geq V^\sharp(t_1,x_1) = t_2 - t_1$, giving $V^\sharp(t,x(t)) \geq t_2 - t$. As $\alpha(t) = V^\sharp(t,x(t)) = t_2 - t$ for all $t \in [t_1, t_2]$, we deduce that \begin{align} \alpha'(t) = -1 \text{ a.e. on } [t_1, t_2]. \label{eq:constantderivative} \end{align} Moreover, since $V^\sharp$ is Lipschitz continuous by assumption, and $t \mapsto (t, x(t))$ is Lipschitz continuous as $x(t)$ is differentiable a.e. with a bounded derivative, Lemma~\ref{lem:clarkedifferential} gives: $\alpha'(t) = \frac{d (V^\sharp(t,x(t)))}{dt} \geq \min_{g \in \partial^c V^{\sharp}(t,x(t))} g^\top \begin{pmatrix} 1 \\ \dot{x}(t) \end{pmatrix}$ a.e. on $[t_1, t_2]$. Using that $\dot{x}(t) = f(t,x(t),u(t))$ a.e. on $[t_1, t_2]$, and the definition of $H_{V^\sharp}$: \begin{align} \alpha'(t) \geq \min_{g \in \partial^c V^{\sharp}(t,x(t))} g^\top \begin{pmatrix} 1 \\ f(t,x(t),u(t)) \end{pmatrix} \geq H_{V^\sharp}(t,x(t)) - 1, \end{align} a.e. on $[t_1, t_2]$. Combining this with Eq.~\eqref{eq:constantderivative}, we deduce that for almost all $t \in [t_1, t_2]$, $H_{V^\sharp}(t,x(t)) \leq 0$. By lower semi-continuity of $H_{V^{\sharp}}$ (see above), and by continuity of $x(\cdot)$, \begin{align} H_{V^{\sharp}}(t_1,x_1) \leq 0. \label{eq:negeps} \end{align} Second, still for any $(t_1,x_1) \in [0, T] \times X \setminus K$, we observe that there exists $(g,u_1) \in \partial^c V^\sharp(t_1,x_1) \times U$ such that $H_{V^\sharp}(t_1,x_1) = 1 + g^\top \begin{pmatrix} 1 \\ f(t_1,x_1,u_1) \end{pmatrix}$; indeed, we already mentioned that the minimum in \eqref{eq:defHmin} is attained. Therefore, for any $V \in C^1(\mathbb{R}^{n+1})$, \begin{align}1 + \partial_t V(t_1,x_1) + \nabla_x V (t_1,x_1)^\top f(t_1,x_1,u_1) & = H_{V^{\sharp}}(t_1,x_1) + (\nabla V(t_1,x_1) - g)^\top \begin{pmatrix} 1 \\ f(t_1,x_1,u_1)\end{pmatrix} \\ & \leq H_{V^{\sharp}}(t_1,x_1) + \lVert \nabla V - \nabla V^\sharp \rVert_\infty (1+C_f), \label{eq:chasles} \end{align} the inequality being due to the Cauchy-Schwarz inequality and the definition of $\lVert \nabla V - \nabla V^\sharp \rVert_\infty$.
We know that $H_{V}(t_1,x_1) \leq \partial_t V(t_1,x_1) + 1 + \nabla_x V (t_1,x_1)^\top f(t_1,x_1,u_1)$ by definition of $H_{V}(t_1,x_1)$ (as $u_1 \in U$), therefore Eq.~\eqref{eq:chasles} gives $H_{V}(t_1,x_1) \leq H_{V^{\sharp}}(t_1,x_1) + (1+C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty$. Using this inequality and Eq.~\eqref{eq:negeps}, we deduce that for all $(t_1,x_1) \in [0, T] \times X \setminus K$, \begin{align} H_{V}(t_1,x_1) \leq (1+C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty. \label{eq:boundeps} \end{align} Third, according to the hypotheses of the theorem, we take any $V\in C^1(\mathbb{R}^{n+1})$, and any closed-loop trajectory $(x_V(\cdot),u_V(\cdot))$ starting at $(0,x_0)$ satisfying the differential inclusion \eqref{eq:diffinclusion} and the state constraints over $[0, T]$. We then study the evolution of $V^\sharp$ over this trajectory. As $x_V(t)$ is Lipschitz continuous, Lemma~\ref{lem:clarkedifferential} yields the existence of $g(t) \in \partial^c V^\sharp(t,x_V(t))$ for almost all $t \in [0, T]$, such that \begin{align} \frac{d}{dt}\left(V^\sharp(t,x_V(t))\right) & \leq g(t)^\top \begin{pmatrix} 1 \\ f(t,x_V(t),u_V(t)) \end{pmatrix} \text{ a.e. on } [0, T]. \end{align} As $u_V(t) \in \mathcal{U}_V(t,x_V(t))$, we know that $H_V(t,x_V(t)) = 1 + \nabla V(t,x_V(t))^\top \begin{pmatrix} 1 \\ f(t,x_V(t),u_V(t)) \end{pmatrix}$, and therefore, \begin{align} \frac{d}{dt}\left(V^\sharp(t,x_V(t))\right) & \leq -1 + H_V(t,x_V(t)) + (g(t) - \nabla V(t,x_V(t)))^\top \begin{pmatrix} 1 \\ f(t,x_V(t),u_V(t)) \end{pmatrix}. \end{align} We deduce, using the Cauchy-Schwarz inequality and the definition of $\lVert \nabla V - \nabla V^\sharp \rVert_\infty$, \begin{align} \frac{d}{dt}\left(V^\sharp(t,x_V(t))\right) \leq -1 + H_V(t,x_V(t)) + (1+C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty, \label{eq:evvcirc} \end{align} for almost all $t \in [0, T]$. Moreover, for all $t \in [0, t_V)$, $x_V(t) \notin K$. Therefore, we can apply Eq.~\eqref{eq:boundeps} to deduce, in combination with Eq.~\eqref{eq:evvcirc}, that for almost all $t \in [0, t_V]$, $\frac{d}{dt}\left(V^\sharp(t,x_V(t))\right) \leq -1 + 2 (1+C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty$. By integration, we deduce that for all $t \in [0, t_V]$, $V^\sharp(t, x_V(t)) - \tau^* \leq - t + 2 t (1+C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty$, as $V^\sharp(0,x_V(0)) = V^\sharp(0,x_0) = \tau^*$. This proves Eq.~\eqref{eq:timeerror}. Fourth and finally, we prove the corollary. Due to the definition of $t_V$, the following (non-exclusive) alternative holds: either $x_V(t_V) \in K$ or $t_V = T$. Moreover, if $\lVert \nabla V - \nabla V^\sharp \rVert_\infty \leq \frac{1 - \tau^*/T}{2 (1 + C_f)}$, then $V^\sharp(t, x_V(t)) - \tau^* \leq - t + t (1 - \tau^*/T)$ and $V^\sharp(t, x_V(t)) \leq \tau^* (1 - t/T)$ for all $t \in [0, t_V]$. We notice that if $t_V= T$, then $V^\sharp(t_V, x_V(t_V)) \leq 0$, \textit{i.e.}, $x_V(t_V) \in K$. Returning to the aforementioned alternative, we deduce that $x_V(t_V) \in K$. Moreover, this fact combined with Eq.~\eqref{eq:timeerror} gives us that $0 \leq (\tau^* - t_V) + 2 t_V (1 + C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty$, hence \begin{align} t_V \left(1 - 2 (1 + C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty \right) \leq \tau^*.
\label{eq:ineqtv} \end{align} By assumption, $1 - 2 (1 + C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty \geq \tau^*/T >0$; we can thus divide Eq.~\eqref{eq:ineqtv} by this quantity to obtain the result of the corollary: $t_V \leq \tau^*/\left( 1 - 2 (1 + C_f) \lVert \nabla V - \nabla V^\sharp \rVert_\infty \right)$. \end{proof} In the previous theorem and the corollary, we saw that the suboptimality, in terms of hitting time, of a closed-loop trajectory $(x_V(\cdot),u_V(\cdot))$ satisfying the differential inclusion \eqref{eq:diffinclusion} decreases as the approximation error $\lVert \nabla V - \nabla V^\sharp \rVert_\infty$ decreases. Furthermore, we see that the closed-loop trajectory comes close to optimality when the approximation error vanishes. We now study a sufficient condition under which the approximation error $\lVert \nabla V_d - \nabla V^\sharp \rVert_\infty$ can be made arbitrarily small, using a polynomial $V_d(t,x)$ of sufficiently large degree $d \in \mathbb{N}$. \subsection{A sufficient regularity condition for the existence of near-optimal controllers based on polynomials} In the case where the value function is twice differentiable, there exist polynomials $V_d$ with such a vanishing approximation error $\lVert \nabla V_d - \nabla V^\sharp \rVert_\infty$, and that are near-optimal solutions in the hierarchy of semi-infinite programs $\eqref{eq:lprd}$. \begin{theorem} \label{th:convergenceC2hjb} Under Assumptions~\ref{as:convexF}-\ref{as:regV}, if the value function $V^\sharp$ belongs to $C^2(\mathbb{R}^{n+1} | Y)$, and is a subsolution to the HJB equation, then there exist a sequence of polynomials $(V_d(t,x))_{d\in\mathbb{N}^*}$, with $V_d(t,x) \in \mathbb{R}_d[t,x_1,\dots, x_n]$, and two constants $c_1, c_2 >0$, such that for all $d \in \mathbb{N}^*$, \begin{itemize} \item The polynomial $V_d(t,x)$ is feasible and $\frac{c_1}{d}$-optimal in the problems \eqref{eq:lpc1} and \eqref{eq:lprd}, \item The following inequality holds: $\lVert \nabla V_d - \nabla V^\sharp \rVert_\infty \leq \frac{c_2}{d}$. \end{itemize} \end{theorem} Under these hypotheses, the polynomials $V_d(t,x)$ are subsolutions to the HJB equation, and form a maximizing sequence of the problem \eqref{eq:lpc1}; we also notice that the hierarchy of semi-infinite programs \eqref{eq:lprd} converges in $O(\frac{1}{d})$ in terms of objective value. Moreover, according to Cor.~\ref{cor:perf}, for any sequence of closed-loop trajectories $(x_{V_d}(\cdot), u_{V_d}(\cdot))$, the associated hitting times converge to the minimal time $\tau^*$: this is a minimizing sequence of trajectories for the minimal time control problem \eqref{eq:system}-\eqref{eq:inf}. \begin{proof} By definition of $C^2(\mathbb{R}^{n+1} |Y)$, there exists a function $Q \in C^2(\mathbb{R}^{n+1})$ such that $V^\sharp(y) = Q(y)$ and $\nabla V^\sharp(y) = \nabla Q (y)$ for all $y \in Y$. In application of Lemma~\ref{lem:polapprox}, as $Q$ has a locally Lipschitz gradient since it is twice continuously differentiable, there exists a constant $A > 0$, and a sequence of polynomials $(w_d(t,x))_{d \in \mathbb{N}^*}$ with $w_d(t,x) \in \mathbb{R}_d[t,x_1,\dots,x_n] $ and such that for all $(t,x) \in Y$, $| w_d(t,x) - Q(t,x) | \leq \frac{A}{d}$ and $\lVert \nabla w_d(t,x) - \nabla Q(t,x) \rVert_2 \leq \frac{A}{d}$. With $\alpha_d = \frac{A(1+C_f)}{d}$, and $\beta_d = \frac{A}{d}(1 + T + T C_f)$, we define the polynomial $V_d(t,x) = w_{d}(t,x) + \alpha_d t - \beta_{d} \in \mathbb{R}_d[t,x_1,\dots,x_n]$.
First, we notice that $\lVert \nabla V_d - \nabla V^\sharp \rVert_\infty \leq \lVert \nabla w_{d} - \nabla V^\sharp \rVert_\infty + \alpha_d \leq \frac{A ( 2 + C_f)}{d}$ for all $d\geq 1$. This proves the second point of the theorem, having defined the constant $c_2 = A(2+C_f)$, which is independent of $d$. We now prove the first point. For all $d \geq 1$, and $(t,x,u) \in [0, T] \times X \times U$, \begin{align} \partial_t V_d(t,x) + 1 + \nabla_x V_d(t,x)^\top f(t,x,u) &= \alpha_d + \partial_t V^\sharp(t,x) + 1 + \nabla_x V^\sharp(t,x)^\top f(t,x,u) \\ & \quad + (\nabla w_{d}(t,x) - \nabla V^\sharp(t,x))^\top \begin{pmatrix} 1 \\ f(t,x,u) \end{pmatrix} \\ & \geq \alpha_d + (\nabla w_{d}(t,x) - \nabla V^\sharp(t,x))^\top \begin{pmatrix} 1 \\ f(t,x,u) \end{pmatrix}, \end{align} as $V^\sharp$ is a subsolution to the HJB equation, hence satisfies Eq.~\eqref{eq:subsol1}. Using the Cauchy-Schwarz inequality, we obtain $\partial_t V_d(t,x) + 1 + \nabla_x V_d(t,x)^\top f(t,x,u) \geq \alpha_d - \lVert \nabla w_d(t,x) - \nabla V^\sharp(t,x) \rVert_2 (1 + C_f) \geq \alpha_d - \frac{A}{d} (1 + C_f) = 0$. This proves that $V_d$ satisfies Eq.~\eqref{eq:subsol1}. It also satisfies Eq.~\eqref{eq:subsol2}, because for any $(t,x) \in [0, T] \times K$, \begin{align} V_d(t, x) &= w_{d}(t,x) + \alpha_d t - \beta_d \\ & \leq V^\sharp(t,x) + \frac{A}{d} + \alpha_d t - \beta_d \\ & \leq V^\sharp(t,x) + \frac{A}{d} + \alpha_d T - \beta_d \\ & \leq V^\sharp(t,x) = 0, \end{align} since $\frac{A}{d} + \alpha_d T - \beta_d = 0$ by definition of $\beta_d$, and since $x \in K$. We deduce that $V_d$ is feasible in \eqref{eq:lprd}. Its objective value is $V_d(0,x_0) \geq w_{d}(0, x_0) - \beta_d \geq V^\sharp(0, x_0) - \frac{A}{d} - \beta_d = V^\sharp(0, x_0) - \frac{c_1}{d}$, where $c_1 = A(2 + T + T C_f)$. As $V^\sharp(0, x_0) = V^*(0,x_0)$ due to Assumption~\ref{as:finite}, $V_d(0,x_0) \geq V^*(0,x_0) - \frac{c_1}{d} \geq \mathsf{val}\eqref{eq:lpc1} - \frac{c_1}{d} \geq \mathsf{val} \eqref{eq:lprd} - \frac{c_1}{d}$, and we therefore conclude that $V_d$ is $\frac{c_1}{d}$-optimal in \eqref{eq:lpc1} and $\eqref{eq:lprd}$. \end{proof} \begin{remark} Admittedly, the hypothesis in Th.~\ref{th:convergenceC2hjb} that the value function $V^\sharp$ belongs to $C^2(\mathbb{R}^{n+1} | Y)$ is stringent. It is worth noting, however, that there exist systems that satisfy this hypothesis. Here is an example: $\dot{x}(t) = u(t)$, $x(t) \in X = [0,1]^2$, $\lVert u(t) \rVert \leq 1 $ and $K = \{0 \} \times [0,1]$. The value function associated with the horizon $T = \infty$ is $V^\sharp(t,x) = x_1$. \end{remark} \section{Illustrative examples} \label{sec:numerics} We implemented and tested the proposed methodology on three minimal time control problems: a generalization of the Zermelo problem, a regatta problem and a generalization of the Brockett integrator. The numerical examples in this section were processed with our \verb+Julia+ package \verb+MinTimeControl.jl+\footnote{This package is available at \url{github.com/aoustry/MinTimeControl.jl}}. In this implementation of Algorithm~\ref{chap:alg:CP}, the master problem \eqref{eq:master} is solved with the simplex algorithm of the commercial solver \verb+Gurobi 10.0+ \cite{gurobi_optimization_llc_gurobi_2021}. At each iteration, we add a maximum of 100 points to the set $\mathcal{Y}^k$.
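
To give a concrete picture of this step, the following sketch shows how a master problem of the form \eqref{eq:master} could be assembled with the \verb+JuMP+ modeling layer; it is only an illustration of the structure of the problem, under the assumption that the data \verb+c+, the current finite set of cuts $\mathcal{Y}^k$ and the parameter \verb+mu+ are given, and the actual implementation in \verb+MinTimeControl.jl+ may differ.
\begin{verbatim}
# Illustrative sketch of the regularized master problem of the CP algorithm.
using JuMP, LinearAlgebra
import Gurobi

function solve_master(c::Vector{Float64},
                      cuts::Vector{Tuple{Vector{Float64},Float64}},
                      mu::Float64)
    N = length(c)
    model = Model(Gurobi.Optimizer)
    set_silent(model)
    @variable(model, theta[1:N])
    # Strongly concave objective: c' theta - (mu/2) * ||theta||^2
    @objective(model, Max,
               dot(c, theta) - (mu / 2) * sum(theta[i]^2 for i in 1:N))
    # One linear cut  a' theta + b <= 0  per element (a, b) of the finite set Y^k
    for (a, b) in cuts
        @constraint(model, dot(a, theta) + b <= 0)
    end
    optimize!(model)
    return value.(theta)
end
\end{verbatim}
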
The separation oracle \eqref{eq:oracle} is implemented with a random sampling scheme (with 500,000 samples at each iteration to detect violated constraints), and with the global optimization solver \verb+SCIP 8+ \cite{bestuzheva_scip_2021}, for the certification at the last iterate. This solver is used with a relative tolerance $\delta = 10^{-4}$, and with a time limit of 10,000~s. We also mention that we compute a heuristic trajectory based on the particularities of each problem; this heuristic is not optimal, but provides an upper bound $T$ on the minimum time, and therefore, a relevant time horizon $[0,T]$. The trajectory resulting from the heuristic is used to initialize the set $\mathcal{Y}^0$, in the sense that we enforce the HJB inequality at some points of this trajectory. During the iterations of the algorithm, we obtain functions $V_{\theta^k}(t,x)$ and we simulate the associated feedback trajectory defined by the differential inclusion \eqref{eq:diffinclusion}; if the obtained trajectory reaches the target set, it gives us an upper bound on the minimal time. Those trajectories are also used to enrich the set $\mathcal{Y}^k$. For all the numerical experiments, the regularization parameter is $\mu = 10^{-5}$, and we use the tolerance $\epsilon = 10^{-3}$. Table~\ref{tab:zermelo}, Table~\ref{tab:regatta} and Table~\ref{tab:brockett} present the numerical results for three different applications. The different columns of these tables are the following: \begin{itemize} \item ``$d \in \mathbb{N}$'' is the degree of the polynomial basis used. \item ``Estimated value of \eqref{eq:lprd}'' stands for the value $V_\theta(0,x_0)$, where $\theta$ is the output of Algorithm~\ref{chap:alg:CP}, using the sampling oracle. This estimated value of \eqref{eq:lprd} is not an exact lower bound, since this sampling oracle does not provide the guarantee that $\theta$ is indeed feasible in \eqref{eq:sip}. \item ``Certified lower bound for \eqref{eq:lprd}'' stands for $V_\theta(0,x_0) - \hat{\phi}(\theta)(1+T)$, where $\theta$ is as defined above and $\hat{\phi}(\theta)$ is a guaranteed upper bound on $\phi(\theta)$, the feasibility error of $\theta$ in \eqref{eq:sip}, computed by the global optimization solver \verb+SCIP 8+. As $V_\theta(t,x) + \hat{\phi}(\theta)(t-1-T) $ is therefore feasible in \eqref{eq:lprd}, the value $V_\theta(0,x_0) - \hat{\phi}(\theta)(1+T)$ is a guaranteed lower bound on $\mathsf{val}$\eqref{eq:lprd}, and, therefore, on $V^*(0,x_0)$. \item ``Value feedback control \eqref{eq:diffinclusion}'' is the hitting time of the best feasible control generated along the iterations: either the heuristic control at the first iteration, or the closed-loop controlled trajectory defined by \eqref{eq:diffinclusion} associated with $V = V_{\theta^k}$ at iteration $k$ of Algorithm~\ref{chap:alg:CP}. \item ``Solution time (in $s$)'' is the total computational time of the heuristic control, of the iterations of Algorithm~\ref{chap:alg:CP} including the sampling oracle, and of the closed-loop trajectory simulation. Therefore, this is the computational time needed to obtain the estimated value of \eqref{eq:lprd} (second column), and the best feasible control (fourth column). \item ``Iterations number'' is the total number of iterations of Algorithm~\ref{chap:alg:CP}.
\item ``Certification time (in $s$)'' is the computational time of the global optimization solver \verb+SCIP 8+, playing the role of the $\delta$-oracle, to compute the aforementioned bound $\hat{\phi}(\theta)$, and deduce the certified lower bound (third column). \end{itemize} \subsection{A time-dependent Zermelo problem} We consider a time-dependent nonlinear system with $n = 2$ and $m = 2$, defined by \begin{align} \dot{x}_1(t) &= u_1(t) + \frac{1}{2} (1+ t) \: \sin(\pi x_2 (t)) \\ \dot{x}_2(t) &= u_2(t), \end{align} with the state constraint set $X = [-1,1] \times [-1,0]$ and the control set $U = B(0,1)$. This is the celebrated Zermelo problem, but with a river flow gaining in intensity over time. Fig.~\ref{fig:flow} gives a representation of this flow. The initial condition is $x(0) = (0,-1)$, and the target set is $K = B(0,r)$, for $r = 0.05$. The travel time associated with the heuristic control, which consists in following a straight trajectory, is $1.261$ (see Fig.~\ref{fig:zermelocontrol}). Table~\ref{tab:zermelo} presents the numerical results for different values of $d$. We see that the value of the linear semi-infinite program \eqref{eq:lprd} quickly converges as $d$ increases: starting from $d = 6$, the first four digits of the estimated value (second column) reach a plateau which corresponds to the value (1.100) of the best feasible trajectory we generate with our feedback control. As regards the certified lower bound, the best value (1.092) is obtained for $d=5$. For larger $d$, we see that increasing $d$ deteriorates the tightness of the best certified bound. This is due to the fact that the separation problem becomes more difficult, with two consequences: (i) the sampling fails to detect unsatisfied constraints, so Algorithm~\ref{chap:alg:CP} stops with a solution that has a real infeasibility $\phi(\theta)$ larger than $\epsilon$ (the targeted tolerance), and (ii) the global optimization solver called afterwards does not manage to solve the separation problem to global optimality within the time limit (case $d \in \{7,8\}$), and only yields a large upper bound $\hat{\phi}(\theta)$ on the true infeasibility $\phi(\theta)$. We notice that as soon as $d \geq 3$, the feedback control defined by \eqref{eq:diffinclusion} (see Sect.~\ref{sec:control}) yields a trajectory that is $13\%$ faster than the heuristic trajectory. In summary, we obtain a certified optimality gap of $0.7\%$ for this minimal time control problem. \begin{figure} \caption{Representation of the water flow in the Zermelo problem} \label{fig:flow} \end{figure} \begin{table}[ht!]
\centering \begin{tabular}{|c|ccc|ccc|} \hline \makecell{$d \in \mathbb{N}$} & \makecell{\textbf{Estimated} \\ \textbf{value of} \eqref{eq:lprd}} & \makecell{\textbf{Certified} \\ \textbf{LB for} \eqref{eq:lprd}} & \makecell{\textbf{Value feedback} \\ \textbf{control} \eqref{eq:diffinclusion}} & \makecell{\textbf{Solution} \\ \textbf{time (in $s$)}} & \makecell{\textbf{Iterations} \\ \textbf{number}} & \makecell{\textbf{Certification} \\ \textbf{time (in $s$)}} \\\hline 2 & 0.952 & 0.945 & 1.261 & 2 & 4 & 1 \\ 3 & 1.064 & 1.044 & 1.101 & 12 & 14 & 12 \\ 4 & 1.096 & 1.084 & 1.100 & 22 & 17 & 1530 \\ 5 & 1.099 & 1.092 & 1.100 & 54 & 22 & 4000 \\ 6 & 1.100 & 1.059 & 1.100 & 60 & 18 & 2330 \\ 7 & 1.100 & 1.051 & 1.100 & 105 & 24 & TL \\ 8 & 1.100 & 0.690 & 1.100 & 215 & 31 & TL \\ \hline \end{tabular} \caption{Time-dependent Zermelo problem: lower and upper bounds, and computational times for various degrees of the SIP hierarchy \eqref{eq:lprd}} \label{tab:zermelo} \end{table} \begin{figure} \caption{Time-dependent Zermelo problem: heuristic control and feedback control ($d=6$)} \label{fig:zermelocontrol} \end{figure} In the special case of this non-polynomial controlled system, a polynomial reformulation exists, at the price of increasing the dimension of the system to $n = 4$: \begin{align} \dot{x}_1(t) &= u_1(t) + \frac{1}{2} (1+ t) \: x_4(t) \\ \dot{x}_2(t) &= u_2(t) \\ \dot{x}_3(t) &= -\pi x_4(t)u_2(t) \\ \dot{x}_4(t) &= \pi x_3(t)u_2(t), \end{align} with the state constraint set $\hat{X} = [-1,1] \times [-1,0] \times [-1,1] \times [-1,0] $, the control set $\hat{U} = B(0,1)$, the terminal set $\hat{K} = B(0,r) \times [-1,1] \times [-1,0]$, and the initial condition $x_0 = (0,-1,-1,0)$. The dynamics maintain the equalities $x_3(t) = \cos(\pi x_2(t))$ and $x_4(t) = \sin(\pi x_2(t))$. We are therefore able to compare our approach with the sum-of-squares (SOS) hierarchy, which consists of replacing the semi-infinite inequalities in \eqref{eq:lprd} with SOS positivity certificates. For each order $k$ of the hierarchy, i.e., for a maximal degree $d = 2k$ of the polynomial basis, this yields a semi-definite programming problem that we solve with the solver \verb+CSDP+, used with the package \verb+SumOfSquares.jl+. We obtain a polynomial $V(t,x_1,x_2,x_3,x_4)$ that is a solution of the corresponding relaxation. Based on this polynomial, we can also generate a feedback-controlled trajectory that solves the differential inclusion \eqref{eq:diffinclusion}. \begin{table}[ht!] \centering \begin{tabular}{|c|ccc|ccc|} \hline & \multicolumn{3}{|c|}{\textbf{SIP hierarchy}} & \multicolumn{3}{|c|}{\textbf{SOS hierarchy}}\\ \hline \makecell{\footnotesize{$d \in \mathbb{N}$}} & \makecell{\textbf{Est./Cert.} \\ \textbf{LB } } & \makecell{\textbf{Val. feedback} \\ \textbf{control} \eqref{eq:diffinclusion} } & \makecell{\textbf{Sol./Cert.} \\ \textbf{time (in $s$)}} & \makecell{\textbf{Cert.} \\ \textbf{LB } } & \makecell{\textbf{Val. feedback} \\ \textbf{control} \eqref{eq:diffinclusion} } & \makecell{\textbf{Sol.} \\ \textbf{time (in $s$) }} \\\hline 2 & 0.952/0.945 & 1.261& 2/1 & 0.533 & 1.261 & $\leq 1$\\ 4 & 1.096/1.084 & 1.101 & 22/1530 & 1.064 & 1.105 & 1 \\ 6 & 1.100/1.059 & 1.100 & 60/2330 & 1.099 & 1.100 & 12\\ 8 & 1.100/0.690 & 1.100& 215/TL & 1.100 & 1.100 & 190\\ \hline \end{tabular} \caption{Time-dependent Zermelo problem: comparing the SIP and the SOS approaches} \label{tab:zermeloSOS} \end{table} Table~\ref{tab:zermeloSOS} compares the performance of the SIP and the SOS approaches.
\begin{table}[ht!] \centering \begin{tabular}{|c|ccc|ccc|} \hline & \multicolumn{3}{|c|}{\textbf{SIP hierarchy}} & \multicolumn{3}{|c|}{\textbf{SOS hierarchy}}\\ \hline \makecell{\footnotesize{$d \in \mathbb{N}$}} & \makecell{\textbf{Est./Cert.} \\ \textbf{LB } } & \makecell{\textbf{Val. feedback} \\ \textbf{control} \eqref{eq:diffinclusion} } & \makecell{\textbf{Sol./Cert.} \\ \textbf{time (in $s$)}} & \makecell{\textbf{Cert.} \\ \textbf{LB } } & \makecell{\textbf{Val. feedback} \\ \textbf{control} \eqref{eq:diffinclusion} } & \makecell{\textbf{Sol.} \\ \textbf{time (in $s$) }} \\\hline 2 & 0.952/0.945 & 1.261& 2/1 & 0.533 & 1.261 & $\leq 1$\\ 4 & 1.096/1.084 & 1.101 & 22/1530 & 1.064 & 1.105 & 1 \\ 6 & 1.100/1.059 & 1.100 & 60/2330 & 1.099 & 1.100 & 12\\ 8 & 1.100/0.690 & 1.100& 215/TL & 1.100 & 1.100 & 190\\ \hline \end{tabular} \caption{Time-dependent Zermelo problem: comparing the SIP and the SOS approaches} \label{tab:zermeloSOS} \end{table} Table~\ref{tab:zermeloSOS} compares the performance of the SIP and the SOS approaches. We see that for low-degree polynomials ($d \leq 4$), the semi-infinite hierarchy gives better lower bounds than the SOS hierarchy, although at a higher computational cost in the case $d=4$. For $d \in \{6,8\}$, the lower bound of the SOS hierarchy is tight, while only the estimated lower bound of the SIP hierarchy is tight: to obtain a certified lower bound, the SOS hierarchy performs better. For these values of the degree $d$, the optional certification step (the call to the global optimization solver) is costly in the proposed approach. For this first example, where the SOS hierarchy is applicable since a polynomial reformulation of the dynamical system exists, the SIP approach is thus slower than the SOS hierarchy. \subsection{A regatta toy-model} We consider a time-dependent nonlinear (and non-polynomial) system with $n = 2$ and $m = 1$, defined by \begin{align} \dot{x}_1(t) &= \mathsf{windspeed}(t) \: \mathsf{polar}\left[ u(t)\right] \: \cos(u(t) + \mathsf{windangle}(t)) \\ \dot{x}_2(t) &= \mathsf{windspeed}(t) \: \mathsf{polar}\left[ u(t)\right] \: \sin(u(t)+\mathsf{windangle}(t)), \end{align} where $\mathsf{windspeed}(t) = 2 + t$, $\mathsf{windangle}(t) = \frac{\pi}{2}(1-0.4 t)$ and $\mathsf{polar}[u] = |\sin(\frac{2u}{3})|$. In this model, the control $u(t)$ represents the relative angle between the heading of the boat and the direction of origin of the wind. The evolution of the wind direction over time is depicted in Fig.~\ref{fig:wind}. The polar curve of this toy model of a sailing boat is represented in Fig.~\ref{fig:polar}; this figure clearly shows that the model does not satisfy Assumption~\ref{as:convexF}. Although the absence of a duality gap between the control problem and the LP problem \eqref{eq:lpc1} is therefore not guaranteed, we see in Table~\ref{tab:regatta} that if such a gap exists in this case, it is small (below $1.6 \%$). The state constraint set is $X = [-1,1]^2$, and the control set is $U = [-\pi, \pi]$. The initial condition is $x(0) = (0,-1)$, and the target set is $K = B(0,r)$, with $r = 0.05$. The travel time associated with the heuristic control, which consists in following a straight trajectory, is $1.278$ (see Fig.~\ref{fig:regattacontrol}). \begin{figure} \caption{Regatta problem: wind direction at different times} \label{fig:wind} \end{figure} \begin{figure} \caption{Regatta problem: the polar curve of the sailing boat} \label{fig:polar} \end{figure} \begin{table}[ht!]
\centering \begin{tabular}{|c|ccc|ccc|} \hline \makecell{$d \in \mathbb{N}$} & \makecell{\textbf{Estimated} \\ \textbf{value of} \eqref{eq:lprd}} & \makecell{\textbf{Certified} \\ \textbf{LB for} \eqref{eq:lprd}} & \makecell{\textbf{Value feedback} \\ \textbf{control} \eqref{eq:diffinclusion}} & \makecell{\textbf{Solution} \\ \textbf{time (in $s$)}} & \makecell{\textbf{Number of} \\ \textbf{iterations}} & \makecell{\textbf{Certification} \\ \textbf{time (in $s$)}} \\\hline 2 & 0.834 & 0.829 & 1.278 & 6 & 6 & 2 \\ 3 & 0.896 & 0.880 & 0.912 & 16 & 10 & 56 \\ 4 & 0.904 & 0.896 & 0.915 & 31 & 13 & 498 \\ 5 & 0.907 & 0.774 & 0.912 & 52 & 16 & 1020 \\ 6 & 0.907 & 0.799 & 0.912 & 100 & 22 & 1930 \\ 7 & 0.908 & 0.691 & 0.912 & 190 & 29 & 7600 \\ 8 & 0.908 & 0.000 & 0.911 & 312 & 33 & TL \\\hline \end{tabular} \caption{Regatta problem: lower and upper bounds, and computational times for various degrees of the SIP hierarchy \eqref{eq:lprd}} \label{tab:regatta} \end{table} \begin{figure} \caption{Regatta problem: heuristic control and feedback control ($d=6$)} \label{fig:regattacontrol} \end{figure} We see that the highest estimated value of \eqref{eq:lprd}, reached for $d=7$ and $d=8$, is $0.5\%$ lower than the value ($0.913$) of the best feasible trajectory obtained with the feedback controller for $d=6$. This feedback controller yields a trajectory which is $29\%$ faster than the heuristic trajectory. As regards the certified lower bound, $d = 4$ yields the best result ($0.896$), at the price of a running time of $498$ s for the exact oracle (\verb+SCIP 8+). For the same reasons as in the previous application, a larger $d$ does not necessarily yield a better certified lower bound within the time limit. In summary, we obtain a certified optimization gap of $1.6\%$ for this minimal time control problem. \subsection{A generalized Brockett integrator} For $n \in \mathbb{N}^*$, $m = n-1$, and a given continuous mapping $q\colon \mathbb{R}^n \to \mathbb{R}^m$, we consider the following generalization of the Brockett integrator \cite{loheac_time-optimal_2017}, \begin{align} \dot{x}_i(t) &= u_i(t) \quad \forall i \in \{1, \dots, n-1\} \\ \dot{x}_n(t) &= q(x(t))^\top u(t). \end{align} In particular, we study this system for $n = 6$ and $q(x) = \Bigl( 2/(2+x_4),\,-x_1,\,-\cos(x_1 x_3),\,\exp(x_2),\,x_1 x_2 x_6\Bigr)$. The state constraint set is $X = [-1,1]^5$, and the control set is $U = B(0,1)$. The initial condition is $x_0 = \frac{1}{2} \mathbf{1}$, and the target set is $K = B(0,r)$, with $r = 0.05$. The travel time associated with the heuristic control is $1.377$. \begin{table}[ht!]
\centering \begin{tabular}{|c|ccc|ccc|} \hline \makecell{$d \in \mathbb{N}$} & \makecell{\textbf{Estimated} \\ \textbf{value of} \eqref{eq:lprd}} & \makecell{\textbf{Certified} \\ \textbf{LB for} \eqref{eq:lprd}} & \makecell{\textbf{Value feedback} \\ \textbf{control} \eqref{eq:diffinclusion}} & \makecell{\textbf{Solution} \\ \textbf{time (in $s$)}} & \makecell{\textbf{Number of} \\ \textbf{iterations}} & \makecell{\textbf{Certification} \\ \textbf{time (in $s$)}} \\\hline 2 & 1.071 & 0.763 & 1.071 & 220 & 28 & 73 \\ 3 & 1.072 & 0.000 & 1.071 & 5630 & 133 & TL \\ 4 & 1.072 & 0.000 & 1.070 & 147400 & 319 & TL \\ \hline \end{tabular} \caption{Generalized Brockett integrator: lower and upper bounds, and computational times for various degrees of the SIP hierarchy \eqref{eq:lprd}} \label{tab:brockett} \end{table} Since this system has a larger dimension than the other two examples, the computation times are longer for the same degree $d$. Already for $d=2$, we obtain an estimated value of \eqref{eq:lprd} that is within $0.1\%$ of the value of the feedback control ($1.070$). This feedback control yields an improvement of $22\%$ over the heuristic trajectory. Note that the estimated values of \eqref{eq:lprd} computed by Algorithm~\ref{chap:alg:CP} with the (inexact) sampling oracle are slightly larger than the value of the best trajectory we computed: these estimates are therefore not valid lower bounds, but only estimates of the value of the minimum time control problem. Regarding the certification of lower bounds, the global optimization solver \verb+SCIP 8+ fails to produce tight upper and lower bounds on $\phi(\theta)$, the infeasibility of the solution $\theta$ returned by Algorithm~\ref{chap:alg:CP}. Therefore, the resulting certified lower bounds are not tight either. In summary, we obtain a certified optimization gap of $29\%$ for this minimal time control problem. \section{Discussion} We apply the dual approach in minimal time control, which consists in searching for maximal subsolutions of the HJB equation, to generic nonlinear, possibly non-polynomial, controlled systems. The basis functions used to generate these subsolutions are polynomials, which are subject to semi-infinite constraints. We prove the theoretical convergence of the resulting hierarchy of semi-infinite linear programs, and our numerical tests on three different systems show good convergence properties in practice. These results show that the use of a random sampling oracle allows a good approximation of the value of the control problem.
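To make the sampling-oracle mechanism concrete, the following self-contained Julia sketch applies the cutting-plane scheme of Algorithm~\ref{chap:alg:CP} to a toy semi-infinite LP, namely maximizing $\theta_1+\theta_2$ subject to $\theta_1\cos z+\theta_2\sin z\le 1$ for all $z\in[0,2\pi]$. It only illustrates the structure of the algorithm; the tolerance, the sample size and the LP solver (\verb+GLPK+ called through \verb+JuMP+) are illustrative assumptions and differ from the settings used in our experiments.
\begin{verbatim}
using JuMP, GLPK

function solve_toy_sip(; nsamples = 200, tol = 1e-4, maxiter = 100)
    model = Model(GLPK.Optimizer)
    @variable(model, -10 <= theta[1:2] <= 10)  # crude bounds keep the early LPs bounded
    @objective(model, Max, theta[1] + theta[2])
    for iter in 1:maxiter
        optimize!(model)
        th = value.(theta)
        # Sampling oracle: estimate the worst violation of the semi-infinite
        # constraint theta_1*cos(z) + theta_2*sin(z) <= 1 over sampled z.
        zs = 2pi .* rand(nsamples)
        vals = [th[1]*cos(z) + th[2]*sin(z) - 1 for z in zs]
        viol, idx = findmax(vals)
        viol <= tol && return th, iter          # approximately feasible: stop
        z = zs[idx]                             # add the most violated cut
        @constraint(model, cos(z)*theta[1] + sin(z)*theta[2] <= 1)
    end
    error("no convergence within $maxiter iterations")
end

theta, niter = solve_toy_sip()
println("theta = ", theta, " after ", niter, " iterations")  # expect about (0.71, 0.71)
\end{verbatim}
Roughly speaking, in our setting the scalar index $z$ is replaced by the variables indexing the semi-infinite constraints of \eqref{eq:lprd}, and the final certification replaces the sampled violation estimate by the bound $\hat{\phi}(\theta)$ computed by the $\delta$-oracle.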
For small systems, it is even possible to obtain tight and certified lower bounds, based on a global optimization solver. Finally, the numerical experiments also show that the computed subsolutions of the HJB equation help to recover near-optimal controls in closed-loop form. As illustrated in these numerical experiments, the advantage of our approach based on semi-infinite programming, compared to the sum-of-squares approach, is its ability to handle non-polynomial systems. In the numerical example where a polynomial reformulation of the system was possible, the sum-of-squares approach was, however, faster. A promising avenue for continuing this work is to investigate the use of other bases of functions to search for an approximate value function, resulting in other semi-infinite programming hierarchies with convergence guarantees. In particular, it would be relevant to use non-differentiable functions in the basis to improve the approximation capabilities for non-differentiable value functions. Another avenue of research is to extend the approach and the theoretical results to a generic optimal control problem. \section*{Acknowledgments} The authors would like to thank Maxime Dupuy, Leo Liberti and Claudia D'Ambrosio for fruitful discussions and advice. \appendix \section{Technical lemmata} \begin{lemma} We consider a compact set $Z \subset \mathbb{R}^p$, the family of compact sets $Z_\delta = \{ z \in \mathbb{R}^p \: \colon \: d(z,Z) \leq \delta \}$ for $\delta \geq 0$, and a continuous function $\psi \in C(\mathbb{R}^p)$. Then, the function $\Psi(\delta) = \min_{z \in Z_\delta} \psi(z)$ is continuous at $0$. \label{lem:valuefunction} \end{lemma} \begin{proof} First of all, we notice that the function $\delta \mapsto \min_{z \in Z_\delta} \psi(z)$ is well defined, since $\psi$ is continuous and $Z_\delta$ is compact. As $Z_{\delta_1} \subset Z_{\delta_2}$ for any $\delta_1 \leq \delta_2$, the function $\Psi$ is non-increasing, which proves that the following limit exists: \begin{align} \underset{\delta \rightarrow 0^+}{\lim} \Psi(\delta) = \Psi(0^+) \leq \Psi(0). \label{eq:limitright} \end{align} We take a positive sequence $(\delta_k) \in \mathbb{R}_{++}^\mathbb{N}$ such that $\delta_k \rightarrow 0$. Hence, $\Psi(\delta_k) \rightarrow \Psi(0^+)$ by definition of the right limit. For any $k \in \mathbb{N}$, we pick $z_k \in Z_{\delta_k}$ such that $\psi(z_k) = \Psi(\delta_k)$. The sequence $(\delta_k)$ being bounded, we can introduce an upper bound $\Bar{\delta}$. Hence, every element of the sequence $(z_k)$ belongs to the compact set $Z_{\Bar{\delta}}$ and, up to the extraction of a subsequence, converges to a point $z$ such that $\psi(z) = \Psi(0^+)$, by continuity of $\psi$ and uniqueness of the limit. As $d(z_k,Z)$, the distance between $z_k$ and the compact set $Z$, is bounded above by $\delta_k$ and is non-negative, it converges to $0$. By continuity of the distance, we deduce that $d(z,Z) = 0$ and, thus, $\Psi(0^+) = \psi(z) \geq \Psi(0)$. Together with Eq.~\eqref{eq:limitright}, this yields $\Psi(0^+) = \Psi(0)$. \end{proof} \begin{lemma} Let $Q \in C^1(\mathbb{R}^{p})$ be a continuously differentiable function with a locally Lipschitz gradient. Let $Z \subset \mathbb{R}^{p}$ be a compact set. Then, there exist a constant $A > 0$ and a sequence of polynomials $(w_d(x))_{d \in \mathbb{N}^*}$ such that for all $d \in \mathbb{N}^*$, $w_d \in \mathbb{R}_d[x_1,\dots,x_p]$ and \begin{align} \sup_{x \in Z} | w_d (x) - Q (x) | &\leq \frac{A}{d}, \\ \sup_{x \in Z} \lVert \nabla w_d (x) - \nabla Q (x) \rVert &\leq \frac{A}{d}. \end{align} \label{lem:polapprox} \end{lemma} We underline that the constant $A$ implicitly depends on $p$, $Q$ and $Z$, but not on the polynomial $w_d(x)$, nor on its degree $d$. \begin{proof} We introduce a constant $R > 0$ such that $Z \subset B(0,R)$, and the function $\tilde{\omega} = \omega \ast \mathbf{1}_{B(0,R+1)}$, where $\omega$ is the mollifier introduced in the proof of Th.~\ref{th:smoothvf}. We notice that $\tilde{\omega} \in C^\infty(\mathbb{R}^p)$ is supported on $\tilde{Z} = B(0, R +2)$ and is constant, equal to $1$, over $B(0,R)$. We define $\tilde{Q}(x) = Q(x) \tilde{\omega}(x)$, and we notice that (i) $\tilde{Q}$ is supported on the compact set $\tilde{Z}$, which contains $Z$, and (ii) for all $x \in Z$, $\tilde{Q}(x) = Q(x)$ and $\nabla \tilde{Q}(x) = \nabla Q(x)$.
Applying \cite[Th.~1]{bagby_multivariate_2002} to the function $\tilde{Q}$, that has a compact support, we know that there exists a constant $C$ such that for any $d \geq 1$, there exists a polynomial $w_d(x)$ of degree at most $d$ such that \begin{align} \label{eq:bounding1} \sup_{x \in Z} |w_d(x) - \tilde{Q}(x) | \leq \frac{C}{d} \kappa(\frac{1}{d}) \leq C \kappa(\frac{1}{d}), \\ \sup_{x \in Z} |\partial_i(w_d - \tilde{Q})(x) | \leq C \kappa(\frac{1}{d}) \label{eq:bounding2} \end{align} where $\kappa(\delta) = \underset{1 \leq i \leq p }{\sup} \: \underset{\begin{subarray}{c} (x,y) \in \mathbb{R}^p \times \mathbb{R}^p \\ |x-y|\leq \delta \end{subarray}}{\sup} |\partial_i \tilde{Q}(x) - \partial_i \tilde{Q}(y)|$. We define $\tilde{Z} = \{ x \in \mathbb{R}^p \colon d(x,Z) \leq 1 \}$. Since $\partial \tilde{Q}$ is uniformly null outside $\tilde{Z}$, and assuming that $\delta \leq 1$, we notice that \begin{align*} \kappa(\delta) = \underset{1 \leq i \leq p }{\sup} \: \underset{\begin{subarray}{c} x,y \in \tilde{Z} \times \mathbb{R}^p \\ |x-y|\leq \delta \end{subarray}}{\sup} |\partial_i \tilde{Q}(x) - \partial_i \tilde{Q}(y)| = \underset{1 \leq i \leq p }{\sup} \: \underset{\begin{subarray}{c} x,y \in \tilde{Z} \times \tilde{Z} \\ |x-y|\leq \delta \end{subarray}}{\sup} |\partial_i \tilde{Q}(x) - \partial_i \tilde{Q}(y)|. \end{align*} We note that $\partial_i \tilde{Q}(x) =\tilde{\omega}(x) \partial_i {Q} (x) + Q(x) \partial_i {\tilde{\omega}} (x)$, and therefore, $|\partial_i \tilde{Q}(x) - \partial_i \tilde{Q}(y)| = | \tilde{\omega}(x) (\partial_i {Q} (x) - \partial_i {Q} (y)) + \partial_i {Q} (y)(\tilde{\omega}(x) - \tilde{\omega}(y)) + Q(x) (\partial_i {\tilde{\omega}} (x) - \partial_i {\tilde{\omega}} (y)) + \partial_i {\tilde{\omega}} (y)(Q(x) - Q(y))|$. We use, then, the triangle inequality and the facts that (i) $\tilde{\omega}$ is $C^\infty$, therefore bounded, Lipschitz continuous, and with a Lipschitz-continuous gradient over $\tilde{Z}$ and (ii) $Q$ is continuously differentiable, therefore bounded, and Lipschitz continuous over $\tilde{Z}$; by assumption it has a Lipschitz continuous gradient over the compact set $\tilde{Z}$. We deduce that $\partial_i \tilde{Q}$ is Lipschitz continuous over $\tilde{Z}$: there exists $L_i>0$ such that $\underset{\begin{subarray}{c} x,y \in \tilde{Z} \times \tilde{Z} \\ |x-y|\leq \delta \end{subarray}}{\sup} |\partial_i \tilde{Q}(x) - \partial_i \tilde{Q}(y)| \leq L_i \delta$, for all $\delta \in [0, 1]$. Defining $L = \max_i L_i$, we deduce $\kappa(\delta) \leq L \delta$. Then Eq.~\eqref{eq:bounding1} reads $\sup_{x \in Z} |w_d(x) - \tilde{Q}(x) | \leq \frac{CL}{d}$, and Eq.~\eqref{eq:bounding2} reads $\sup_{x \in Z} |\partial_i (w_d - \tilde{Q})(x) | \leq \frac{CL}{d}$ for all $i \in \{1, \dots, p \}$. We also deduce that $\sup_{x \in Z} \lVert \nabla w_d (x) - \nabla \tilde{Q} (x) \lVert \leq \frac{C L p}{d}$. Defining $A = C L p$, and noticing that for all $x \in Z$, $\tilde{Q}(x) = Q(x)$ and $\nabla \tilde{Q}(x) = \nabla Q(x)$, one obtains the claimed statement. \end{proof} \begin{lemma} Under Assumption~\ref{as:convexF} and Assumption~\ref{as:finite}, we consider an admissible trajectory $(x(\cdot),u(\cdot))$ over $[0, t_1]$ of the minimal time control problem \eqref{eq:system}-\eqref{eq:inf} starting from $(0,x_0)$. Then, for almost all $t \in [0, t_1]$, $f(t,x(t),u(t)) \in T_X(x(t))$. 
\label{lem:tangentcone} \end{lemma} \begin{proof} We reduce to a time-invariant controlled system: we define, for any $y = (t,x) \in \mathbb{R}^{n+1}$ and $u \in \mathbb{R}^m$, $\tilde{f}(y,u) = \begin{pmatrix} 1 \\ f(y,u) \end{pmatrix}$, and the constant set-valued map $\tilde{U}(y) = U$. The control system $(\tilde{f},\tilde{U})$ is a Marchaud control system \cite[Def.~6.1.3]{aubin_jean-pierre_viability_1991}, since: (i) the graph of $\tilde{U}$ is closed, (ii) $\tilde{f}$ is continuous, (iii) the velocity subsets $\{ \tilde{f}(y,u) \colon u \in \tilde{U}(y) \}$ are convex due to Assumption~\ref{as:convexF}, and (iv) the function $f$ has linear growth since it is Lipschitz continuous and the set-valued map $\tilde{U}$ has bounded values, so that $\tilde{f}$ also has linear growth. We define the set $\mathcal{C} = \mathbb{R}_+ \times X$ and notice that the control $u(\cdot)$ regulates a trajectory $y(t) = \begin{pmatrix} t \\ x(t) \end{pmatrix}$ that remains in $\mathcal{C}$. Therefore, according to \cite[Th.~6.1.4]{aubin_jean-pierre_viability_1991}, for almost all $t \in [0, t_1]$, $u(t) \in \{ u \in \tilde{U}(y(t)) \colon \tilde{f}(y(t),u) \in T_\mathcal{C}(y(t)) \}$. We notice that $ T_\mathcal{C}(y(t)) \subset \mathbb{R} \times T_X(x(t))$, implying that $f(y(t),u(t)) \in T_X(x(t))$. \end{proof} \begin{lemma} For any locally Lipschitz continuous function $F \colon \mathbb{R}^p \to \mathbb{R}$ and any Lipschitz continuous curve $y \colon [0, T] \to \mathbb{R}^p$, the function $t \mapsto F(y(t))$ is differentiable a.e.\ and satisfies \begin{align} \min_{g \in \partial^c F(y(t))} g^\top \dot{y}(t) \leq \frac{d}{dt}(F(y(t))) \leq \max_{g \in \partial^c F(y(t))} g^\top \dot{y}(t), \label{eq:chainrulineq} \end{align} for almost all $t \in [0, T]$. \label{lem:clarkedifferential} \end{lemma} The particular functions $F$ for which these three quantities are equal are called path-differentiable in \cite{bolte_conservative_2021}. \begin{proof} First, we notice that the functions $t \mapsto y(t)$ and $t \mapsto F(y(t))$ are Lipschitz continuous, therefore differentiable a.e.\ on $[0, T]$ by the Rademacher theorem \cite{gariepy_measure_2015}. Hence, for almost all $t \in [0, T]$, both $y$ and $F \circ y$ are differentiable at $t$. We consider such a $t$ and show that \eqref{eq:chainrulineq} holds for this particular $t$, which proves the lemma. Since $y$ is differentiable at $t$, $r(h) = y(t+h) - y(t) - h \dot{y}(t)$ is in $o_{h\to 0}(h)$. Since $s \mapsto F(y(s))$ is differentiable at $t$, the following holds: \begin{align} \frac{d}{dt}(F(y(t))) &= \underset{h \rightarrow 0, h >0}{\lim} \frac{F(y(t+h)) - F(y(t))}{h} \\ &= \underset{h \rightarrow 0, h >0}{\lim} \frac{F(y(t) + h \dot{y}(t) + r(h)) - F(y(t))}{h} \label{eq:limitderiv} \end{align} Since $r(h) = o_{h\to 0}(h)$ and $F$ is locally Lipschitz, we know that $\underset{h \rightarrow 0, h >0}{\lim} \frac{F(y(t)) - F(y(t) + r(h)) }{h} = 0$.
Summing this with Eq.~\eqref{eq:limitderiv}, we deduce that \begin{align} \frac{d}{dt}(F(y(t))) &= \underset{h \rightarrow 0, h >0}{\lim} \frac{F(y(t) + r(h) + h \dot{y}(t) ) - F(y(t) + r(h))}{h} \\ & \leq \underset{\begin{subarray}{c} y' \rightarrow y(t) \\ h \rightarrow 0, h> 0 \end{subarray}}{\limsup} \frac{F(y' + h \dot{y}(t)) - F(y')}{h} = F^\circ(y(t); \dot{y}(t)), \end{align} where $F^\circ(y;v) = \underset{\begin{subarray}{c} y' \rightarrow y \\ h \rightarrow 0, h> 0 \end{subarray}}{\limsup} \frac{F(y' + h v) - F(y')}{h}$ is the Clarke directional derivative of $F$ at $y \in \mathbb{R}^p$ in the direction $v \in \mathbb{R}^p$. The inequality follows from the fact that $y(t) +r(h) \rightarrow y(t)$. By a property of the Clarke subdifferential \cite{clarke_generalized_1975}, we also know that $F^\circ(y;v) = \max_{g \in \partial^c F(y)} g^\top v$. Hence, in particular, \begin{align} \frac{d}{dt}(F(y(t))) \leq \max_{g \in \partial^c F(y(t))} g^\top \dot{y}(t). \label{eq:ineqclarkemax} \end{align} The reasoning that proved Eq.~\eqref{eq:ineqclarkemax} also applies to $-F$, which is locally Lipschitz as well and such that $s \mapsto (-F)(y(s))$ is differentiable at $t$. Therefore, \begin{align} \frac{d}{dt}(-F(y(t))) \leq \max_{g \in \partial^c (-F)(y(t))} g^\top \dot{y}(t). \end{align} As $\partial^c (-F)(y(t)) = - \partial^c F(y(t))$ by a property of the Clarke subdifferential, we deduce that \begin{align} -\frac{d}{dt}(F(y(t))) \leq \max_{g \in \partial^c F(y(t))} - g^\top \dot{y}(t) = - \min_{g \in \partial^c F(y(t))} g^\top \dot{y}(t), \end{align} and therefore $\frac{d}{dt}(F(y(t))) \geq \min_{g \in \partial^c F(y(t))} g^\top \dot{y}(t)$. \end{proof} \end{document}
\begin{document} \date{\today} \begin{abstract} For a partial Galois extension of commutative rings we give a seven-term exact sequence, which is an analogue of the Chase-Harrison-Rosenberg sequence. \end{abstract} \maketitle \section{Introduction} The concept of a Galois extension of commutative rings was introduced in the same paper by M. Auslander and O. Goldman \cite{AG} in which they laid the foundations for separable extensions of commutative rings and defined the Brauer group of a commutative ring. Later, in \cite{CHR}, S. U. Chase, D. K. Harrison and A. Rosenberg developed the Galois theory of commutative rings by giving several equivalent definitions of a Galois extension, establishing a Galois correspondence, and specifying, to the case of a Galois extension, the seven-term exact Amitsur cohomology sequence given by S. U. Chase and A. Rosenberg in \cite{CR}. The Chase-Harrison-Rosenberg sequence can be viewed as a common generalization of the two most fundamental facts from Galois cohomology of fields: Hilbert's Theorem 90 and the isomorphism of the relative Brauer group with the second cohomology group of the Galois group. Since then much attention has been paid to the sequence and its parts, with more constructive proofs, generalizations and analogues in various contexts. Our point of view is to replace global actions by partial ones. The latter are becoming an object of intensive research and have their origins in the theory of operator algebras, where they, together with the corresponding crossed products and partial representations, form the essential ingredients of a new and successful method to study $C^*$-algebras generated by partial isometries, initiated by R. Exel in \cite{E-1}, \cite{E-2}, \cite{E0}, \cite{E2} and \cite{E1}. The first algebraic results on these new concepts, established in \cite{E1}, \cite{DEP}, \cite{St1}, \cite{St2}, \cite{KL} and \cite{DE}, and the development of a Galois theory of partial actions in~\cite{DFP}, stimulated a growing algebraic activity around partial actions (see the surveys~\cite{D3} and~\cite{F2}). In particular, partial Galois theoretic results have been obtained in \cite{BP}, \cite{CaenDGr}, \cite{CaenJan}, \cite{FrP}, \cite{KuoSzeto}, \cite{PRSantA}, and applications of partial actions were found to graded algebras in~\cite{DE} and \cite{DES}, to tiling semigroups in~\cite{KL1}, to Hecke algebras in~\cite{E3}, to automata theory in \cite{DNZh}, to restriction semigroups in~\cite{CornGould} and \cite{Kud} and to Leavitt path algebras in~\cite{GR}. In addition, the interpretation of the famous R. Thompson's groups as partial action groups on finite binary words permitted J.-C. Birget to study algorithmic problems for them \cite{Birget}. Amongst the recent advances we mention a remarkable application of the theory of partial actions to paradoxical decompositions and to algebras related to separated graphs \cite{AraE1}, its efficient use in the study of the Carlsen-Matsumoto $C^*$-algebras associated to arbitrary subshifts~\cite{DE2} and of the Steinberg algebras~\cite{BeuGon2}, as well as the proof of an algebraic version of the Effros-Hahn conjecture on the ideals in partial crossed products~\cite{DE3}.
The general notion of a continuous twisted partial action of a locally compact group on a $C^*$-algebra, introduced in \cite{E0} and adapted to the abstract ring theoretic context in \cite{DES}, contains multipliers which satisfy a sort of $2$-cocycle identity, and it was natural to ask what kind of cohomology theory would suit it. The answer was given in \cite{DK}, where the partial cohomology of groups was introduced and studied together with its relation to the cohomology of inverse semigroups, showing also that it fits nicely the theory of partial projective group representations developed in \cite{DN}, \cite{DN2} and \cite{DoNoPi}. Note that partial group cohomology turned out to be useful to study ideals of global reduced $C^*$-crossed products \cite{KennedySchafhauser}. Having at hand partial Galois theory and partial group cohomology, we may now ask what would be the analogue of the Chase-Harrison-Rosenberg exact sequence in the context of a partial Galois extension of commutative rings. The purpose of the present paper is to answer this question. The additional new ingredients include a partial action of the Galois group $G$ on the disjoint union of the Picard groups of all direct summands of $R$ (see Section~\ref{phi1phi2phi3}), as well as partial representations of $G$ (see Section~\ref{phi6}). In Section~\ref{prelim} we recall, for the reader's convenience, some facts used in the paper, whereas the homomorphisms of the sequence are given in Sections~\ref{phi1phi2phi3}, \ref{phi4phi5} and~\ref{phi6}. Our proofs are constructive and the partial case is essentially more laborious than the classical one, so that in this article we only build up the homomorphisms, and the proof of the exactness of the sequence will be given in a forthcoming paper. Throughout this work the word ring means an associative ring with an identity element. For any ring $R$ by an $R$-module we mean a left unital $R$-module. If $R$ is commutative, we shall consider an $R$-module $M$ as a central $R$-$R$-bimodule, i.e., an $R$-$R$-bimodule $M$ with $mr = rm$ for all $m\in M$ and $r\in R.$ We write that $M$ is a {\it f.g.p. $R$-module} if $M$ is a (left) projective and finitely generated $R$-module, and by a {\it faithfully projective $R$-module} we mean a faithful, f.g.p. $R$-module. For a monoid (or a ring) $T,$ the group of its units (i.e., invertible elements) is denoted by ${\mathcal U}(T).$ In what follows, unless otherwise stated, $R$ will denote a commutative ring and unadorned $\otimes$ will mean $\otimes_R.$ \section{Preliminaries}\label{prelim} In this section we give some definitions and results which will be used in the paper. All modules over commutative rings are considered as central bimodules. \subsection{The Brauer group of a commutative ring} Recall that an $R$-algebra $A$ is called \emph{separable} if $A$ is a projective module over its enveloping algebra $A^e=A\otimes A^{\rm{op}},$ where $A^{\rm{op}}$ denotes the opposite algebra of $A.$ If $A$ is faithful as an $R$-module we can identify $R$ with $R1_A$, and if, in addition, its center $C(A)$ is equal to $R$ we say that $A$ is \emph{central}. Moreover, $A$ is called an \emph{Azumaya} $R$-algebra if $A$ is central and separable. Equivalently, $A$ is a faithfully projective $R$-module and $A\otimes A^{\rm{op}} \simeq {\rm End}_R (A)$ as $R$-algebras (see \cite[Theorem 2.1(c)]{AG}).
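For instance, for every faithfully projective $R$-module $P$ the endomorphism algebra ${\rm End}_R(P)$ is an Azumaya $R$-algebra; in particular, so is the matrix algebra $M_n(R)\cong {\rm End}_R(R^n)$ for any $n\ge 1.$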
In \cite{AG} the following equivalence relation was defined on the class of all Azumaya $R$-algebras: $A\sim B$ if there exist faithfully projective $R$-modules $P$ and $Q$ such that $$A \otimes {\rm End}_R(P)\cong B \otimes {\rm End}_R(Q)$$ as $R$-algebras. Let $[A]$ denote the equivalence class containing $A$, and $B(R)$ the set of all such equivalence classes. Then $B(R)$ has a natural structure of a multiplicative abelian group, whose multiplication is induced by the tensor product of $R$-algebras, that is, $$[A][B]=[A \otimes B],\ \text{for all}\ [A],[B]\in B(R).$$ Its identity element is $[R]$ and $[A]^{-1}=[A^{\rm{op}}]$, for all $[A]\in B(R)$. This group is called the {\it Brauer group of $R$.} According to \cite{AG}, for any commutative $R$-algebra $S,$ the map from $B(R)$ to $B(S)$, given by $[A]\mapsto [A\otimes S]$, is a well defined group homomorphism. Its kernel is denoted by $B(S/R)$ and called the {\it relative Brauer group of $S$ over $R$}. If $A$ is an Azumaya $R$-algebra whose equivalence class in $B(R)$ belongs to $B(S/R),$ we say that $A$ is {\it split} by $S,$ or that $S$ is a {\it splitting ring} for $A.$ For any nonempty subset $X$ of a ring $B$ and any subring $V$ of $B,$ we denote by $C_V(X)=\{y\in V\,|\, xy= yx\ \text{for all}\ x\in X\}$ the {\it centralizer} of $X$ in $V.$ In particular, if $X=V,$ then $C_V(V)$ is the {\it center} $C(V)$ of $V.$ It is known that a commutative $R$-subalgebra $B$ of an $R$-algebra $A$ is a {\it maximal commutative subalgebra} if and only if $C_A(B)=B.$ \subsection{Partial cohomology of groups} Let $G$ be a group. A {\it unital twisted partial action} of $G$ on $R$ is a triple $$\alpha=(\{D_g\}_{g\in G}, \{\alpha_g\}_{g\in G}, \{\omega_{g,h}\}_{(g,h)\in G\times G}),$$ such that for every $g\in G,$ $D_g$ is an ideal of $R$ generated by a (not necessarily non-zero) idempotent $1_g,$ $\alpha_g\colon D_{g^{-1}}\to D_g$ is a ring isomorphism, for each pair $(g,h)\in G\times G,$ $\omega_{g,h}\in {\mathcal U}(D_gD_{gh}),$ and for all $g,h,l\in G$ the following statements are satisfied: \smallskip \noindent (i) $D_1=R$ and $\alpha_1$ is the identity map of $R,$ \noindent (ii) $\alpha_g(D_{g^{-1}}D_h)=D_gD_{gh},$ \noindent (iii) $\alpha_g \circ \alpha_h(t)=\omega_{g,h}\alpha_{gh}(t)\omega_{g,h}^{-1}$ for any $t\in D_{h^{-1}} D_{(gh)^{-1}},$ \noindent (iv) $\omega_{1,g}=\omega_{g,1}=1_g$ and \noindent (v) $\alpha_g(1_{g^{-1}}\omega_{h,l})\omega_{g,hl}=\omega_{g,h}\omega_{gh,l}.$ \smallskip Using (v) one obtains \begin{equation}\label{afom}\alpha_g(\omega_{g^{-1},g})=\omega_{g,g^{-1}},\ \text{for any}\ g\in G.\end{equation} In \cite{DES} the authors defined twisted partial actions of groups on algebras in a more general setting, in which the $D_g$'s are not necessarily unital rings. In what follows we shall use only unital twisted partial actions.
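Note that in the global case, in which $D_g=R$ and $1_g=1$ for all $g\in G,$ each $\alpha_g$ is an automorphism of $R,$ condition (iii) reduces to $\alpha_g\circ\alpha_h=\alpha_{gh}$ (since $R$ is commutative), and condition (v) becomes the classical $2$-cocycle identity $\alpha_g(\omega_{h,l})\,\omega_{g,hl}=\omega_{g,h}\,\omega_{gh,l},$ so that one recovers the usual notion of a twisted (global) action of $G$ on $R.$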
If $(\{D_g\}_{g\in G}, \{\alpha_g\}_{g\in G}, \{\omega_{g,l}\}_{(g,l)\in G\times G})$ is a twisted partial action of $G$ on $R,$ the family of partial isomorphisms $(D_g,\alpha_g)_{g\in G}$ forms a {\it partial action}\footnote{In the sense defined in \cite{DE}.} which we denote by $\alpha.$ Then, the family $\omega=\{\omega_{g,h}\}_{(g,h)\in G\times G}$ is called a {\it twisting} of $\alpha,$ and the above twisted partial action will be denoted by $(\alpha, \omega).$ If $R$, in particular, is a multiplicative monoid, then one obtains from the above definition the concept of a unital twisted partial action of a group on a monoid. \smallskip We recall from \cite{DK} the following. \begin{defi}\label{defn-cochain} Let $T$ be a commutative ring or monoid, $n\in{\mathbb N}, n>0,$ and $\alpha=(T_g, \alpha_g)_{g\in G}$ a unital partial action of $G$ on $T$. An $n$-cochain of $G$ with values in $T$ is a function $f:G^n\to T,$ such that $f(g_1,\dots,g_n)\in{\mathcal U}(T1_{g_1}1_{g_1g_2}\dots1_{g_1g_2\dots g_n}).$ A $0$-cochain is an element of ${\mathcal U}(T)$. \end{defi} Denote the set of all $n$-cochains by $C^n(G,\alpha, T)$. This set is an abelian group via the point-wise multiplication. Its identity is the map $(g_1,\dots , g_n)\mapsto 1_{g_1}1_{g_1g_2}\dots1_{g_1g_2\dots g_n}$ and the inverse of $f\in C^n(G,\alpha,T)$ is $f^{-1}(g_1,\dots,g_n)=f(g_1,\dots,g_n)^{-1}$, where $f(g_1,\dots,g_n)^{-1}$ is the inverse of $f(g_1,\dots,g_n)$ in $T1_{g_1}1_{g_1g_2}\dots1_{g_1g_2\dots g_n},$ for all $g_1,\dots, g_n\in G$. \begin{defi}{\rm (}{\bf The coboundary homomorphism}{\rm)} Given $n\in{\mathbb N}, n>0,$ $f\in C^n(G,\alpha, T)$ and $g_1,\dots,g_{n+1}\in G$, set \begin{align}\label{pcob} (\delta^nf)(g_1,\dots,g_{n+1})=&\,\alpha_{g_1}\left(f(g_2,\dots,g_{n+1})1_{g^{-1}_1}\right) \prod_{i=1}^nf(g_1,\dots , g_ig_{i+1}, \dots,g_{n+1})^{(-1)^i}\notag\\ &f(g_1,\dots,g_n)^{(-1)^{n+1}}. \end{align} Here, the inverse elements are taken in the corresponding ideals. If $n=0$ and $t$ is an invertible element of $T$, we set $(\delta^0t)(g)=\alpha_g(t1_{g^{-1}})t^{-1},$ for all $g\in G$. \end{defi} \begin{prop}\cite[Proposition 1.5]{DK}\label{pcobh} $\delta^n$ is a group homomorphism from $C^n(G,\alpha, T)$ to $C^{n+1}(G,\alpha, T)$ such that $ (\delta^{n+1}\delta^nf)(g_1,g_2,\dots, g_{n+2})=1_{g_1}1_{g_1g_2}\dots1_{g_1g_2\dots g_{n+2}}, $ for any $n\in {\mathbb N},$ $f\in C^n(G,\alpha,T)$ and $g_1,g_2,\dots, g_{n+2} \in G.$ \end{prop} \begin{defi}\label{defn-cohomology} For $n\in{\mathbb N},$ we define the groups $Z^n(G,\alpha,T)=\ker{\delta^n}$ of partial $n$-cocycles, $B^n(G,\alpha,T)={\rm im}\,{\delta^{n-1}}$ of partial $n$-coboundaries, and $H^n(G,\alpha,T)=\displaystyle\frac{\ker{\delta^n}}{{\rm im}\,{\delta^{n-1}}}$ of partial $n$-cohomologies of $G$ with values in $T$, $n\ge 1.$ For $n=0$ we define $H^0(G,\alpha,T)=Z^0(G,\alpha,T)=\ker{\delta^0}$. \end{defi} \begin{exe} \begin{align*} H^0(G,\alpha,T)&=Z^0(G,\alpha,T)=\{t\in{\mathcal U}(T)\mid{\alpha}_g(t 1_{g^{-1}}) = t 1_g, \ \forall g \in G \},\\ B^1(G,\alpha,T)&=\{ f\in C^1(G,\alpha,T) \mid f(g) = {\alpha}_g(t 1_{g^{-1}}) t^{-1}, \ \text{for some}\ t \in {\mathcal U}(T) \} .
\end{align*} We have $ ( \delta^{1} f) (g,h) = \alpha _g ( f(h) 1_{g^{-1}} ) f(gh)^{-1}f(g)$ for $f \in C^1 (G,\alpha,T),$ so that \begin{align*} Z^1(G,\alpha, T) &=\{f\in C^1 (G,\alpha,T)\mid f(gh)1_g=f(g) \, \alpha _g ( f(h) 1_{g^{-1}} ), \ \forall g,h\in G \},\end{align*} moreover $B^2 (G,\alpha,T)$ is the group \begin{align*} &\left\{ w \in C^2 (G,\alpha, T) \mid \exists f \in C^1 (G,\alpha, T)\ \text{with}\ w(g,h) = \alpha _g ( f(h) 1_{g^{-1}} )f(g) f(gh)^{-1} \right\}. \end{align*} For $n=2$ we obtain $$ ( \delta^{2} w) (g,h,l) = \alpha _g ( w(h,l) 1_{g^{-1}} ) \; w(gh,l)^{-1} \; w(g,hl) \; w(g,h)^{-1}, $$ with $w \in C^2 (G,\alpha,T),$ and $Z^2(G,\alpha,T)$ is \begin{align*} &\{w\in C^2 (G,\alpha, T) \mid \alpha _g ( w(h,l) 1_{g^{-1}} ) \; w(g,hl) = w(gh,l) \, w(g,h),\ \forall g,h,l \in G \}. \end{align*} Hence, the elements of $Z^2(G,\alpha,T)$ are exactly the twistings for $\alpha.$ $\blacksquare$ \end{exe} Two cocycles $f,f'\in Z^n(G,\alpha,T)$ are called {\it cohomologous} if they differ by an $n$-coboundary. \begin{remark}\label{normalized} Notice that a $1$-cocycle is always normalized, i.e. $f(1) = 1_T.$ Indeed, taking $g=h=1$ in the $1$-cocycle equality we immediately see that $f(1)= f(1)^2,$ so $f(1)$ must be $1_T,$ as $f(1) \in {\mathcal U} (T).$ \end{remark} \subsection{Partial Galois extensions} Let $G$ be a finite group and $\alpha=(D_g, \alpha_g)_{g\in G}$ a unital (non-twisted) partial action of $G$ on $R$. The subring of {\it invariants} of $R$ under $\alpha$ was introduced in \cite{DFP} as \begin{equation}\label{inva}R^\alpha=\{r\in R\,|\, \alpha_g(r1_{g^{-1}})=r1_g \ \text{for all}\ g\in G\}.\end{equation} Notice that $R^\alpha=H^0(G,\alpha,R).$ The ring extension $R\supseteq R^\alpha$ is called an {\it $\alpha$-partial Galois extension} if for some $m\in {\mathbb N}$ there exists a subset $\{x_i, y_i\,|\, 1\le i\le m\}$ of $R$ such that $\displaystyle\sum_{i=1}^{m}x_i\alpha_g(y_i 1_{g^{-1}})=\delta_{1,g}, \ g\in G.$ As in \cite{DFP}, we call the set $\{x_i, y_i\,|\, 1\le i\le m\}$ a {\it partial Galois coordinate system} of $R\supseteq R^\alpha.$ The {\it trace map} ${\rm tr}_{R/R^\alpha}:R\to R^\alpha$ is given by $x\mapsto\sum_{g\in G}\alpha_g(x1_{g^{-1}}).$ By \cite[Remark 3.4]{DFP} there exists $c\in R$ such that ${\rm tr}_{R/R^\alpha}(c)=1,$ provided that the extension $R\supseteq R^\alpha$ is $\alpha$-partial Galois (here we write $1=1_R = 1_{R^\alpha}$). The \emph{partial skew group ring} $R\star_\alpha G$ is defined as the set of all formal sums $\sum_{g\in G} r_g\delta_g$, $r_g\in D_g,$ with the usual addition and the multiplication determined by the rule $$(r_g\delta_g)(r'_h\delta_h)= r_g\alpha_g(r'_h1_{g^{-1}})\delta_{gh}.$$ It is shown in \cite[Theorem 4.1]{DFP} that $R\supseteq R^\alpha$ is a partial Galois extension if and only if $R$ is a f.g.p. $R^\alpha$-module and the map \begin{equation}\label{jota} j\colon R\star_\alpha G \to {\rm End}_{R^\alpha}(R),\ \ j\left (\sum_{g\in G} r_g\delta_g\right)(r)=\sum_{g\in G} r_g\alpha_g(r1_{g^{-1}}) \end{equation} is an $R^\alpha$-algebra and an $R$-module isomorphism.
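Observe that in the global case, i.e. $D_g=R$ and $1_g=1$ for all $g\in G,$ the subring of invariants is the usual fixed ring $R^G,$ the above condition reads $\sum_{i=1}^{m}x_i\alpha_g(y_i)=\delta_{1,g}$ for all $g\in G,$ which is the classical Galois coordinate condition for Galois extensions of commutative rings, and $R\star_\alpha G$ becomes the ordinary skew group ring.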
\section{On generalizations of the Picard group}\label{Picard} To construct our version of the seven-term sequence, we need some generalizations of the concept of the Picard group of a commutative ring. First, we recall the following. \begin{defi} The abelian group of all $R$-isomorphism classes of f.g.p. $R$-modules of rank 1, with binary operation given by $[P][Q]=[P\otimes Q],$ is denoted by ${\bf Pic}(R).$ The identity in ${\bf Pic}(R)$ is $[R]$ and the inverse of $[P]$ in ${\bf Pic}(R)$ is $[P^*],$ where $M^*={\rm Hom}_R(M,R)$ for any $R$-module $M.$ \end{defi} Recall that if $P$ is a faithfully projective $R$-module, then $[P] \in {\bf Pic}(R)$ exactly when the map $R\to {\rm End}_R (P),$ given by $r \mapsto m_r$, where $ m_r(p)=r p$ for all $p\in P$, is an isomorphism of rings (see \cite[Lemma I.5.1]{DI}). We also recall the following. \begin{prop}\label{ht}\cite[Hom-Tensor Relation I.2.4]{DI} Let $A$ and $B$ be $R$-algebras, let $M$ be a f.g.p. $A$-module and let $N$ be a f.g.p. $B$-module. Then for any $A$-module $M'$ and any $B$-module $N'$, the map $$\psi\colon {\rm Hom}_A(M,M')\otimes {\rm Hom}_B(N,N')\to {\rm Hom}_{(A\otimes B)}(M\otimes N,M'\otimes N' ),$$ induced by $(f\otimes g)(m\otimes n)=f(m)\otimes g(n),$ for all $m\in M,\, n\in N$, is an $R$-module isomorphism. If $M=M'$ and $N=N',$ then $\psi$ is an $R$-algebra isomorphism. $\blacksquare$\end{prop} Let $\Lambda$ be a unital commutative $R$-algebra. We give the following. \begin{defi} A $\Lambda$-$\Lambda$-bimodule $P$ is called $R$-partially invertible if $P$ is central as an $R$-$R$-bimodule, and \begin{itemize} \item $P$ is a left f.g.p. $\Lambda$-module, \item there is an $R$-algebra epimorphism $\Lambda^{\rm{op}} \to {\rm End}_\Lambda(P).$ \end{itemize} Let $$[P]=\{ M\,|\, M\ \text{is a}\ \Lambda\text{-}\Lambda\text{-bimodule and}\ M\cong P\ \text{as}\ \Lambda\text{-}\Lambda\text{-bimodules} \}.$$ We denote by ${\bf PicS}_R(\Lambda)$ the set of the isomorphism classes $[P]$ of $R$-partially invertible $\Lambda$-$\Lambda$-bimodules. Finally, we set ${\bf PicS}_R (R):={\bf PicS} (R).$ \end{defi} \begin{prop}\label{ccom} The product $[P][Q]=[P\otimes_\Lambda Q]$ endows ${\bf PicS}_{R}(\Lambda)$ with the structure of a semigroup. \end{prop} {\bf Proof.} We shall show that $[P\otimes_\Lambda Q]\in {\bf PicS}_{R}(\Lambda),$ for any $[P], [Q]\in {\bf PicS}_{R}(\Lambda).$ Notice that $P\otimes_\Lambda Q$ is a left f.g.p. $\Lambda$-module. Indeed, there are free finitely generated left $\Lambda$-modules $F_1,F_2$ and $\Lambda$-modules $M_1,M_2$ such that $P\oplus M_1=F_1$ and $Q\oplus M_2=F_2.$ Now consider $M_1$ and $F_1$ as central $\Lambda$-$\Lambda$-bimodules; then by tensoring the two previous relations we see that there exists a left $\Lambda$-module $M$ such that $(P\otimes_\Lambda Q)\oplus M=F_1\otimes_\Lambda F_2,$ and the assertion follows. By assumption there are $R$-algebra epimorphisms $\xi_1\colon \Lambda^{\rm{op}} \to {\rm End}_\Lambda(P)$ and $\xi_2\colon \Lambda^{\rm{op}} \to {\rm End}_\Lambda(Q).$ It follows from Proposition \ref{ht} that $\xi_1\otimes \xi_2 \colon \Lambda^{\rm{op}}\otimes_\Lambda \Lambda^{\rm{op}}\to {\rm End}_{\Lambda}(P\otimes_\Lambda Q)$ is an $R$-algebra epimorphism.
Since $\Lambda^{\rm{op}}\ni \lambda \mapsto \lambda\otimes_\Lambda 1_\Lambda \in \Lambda^{\rm{op}}\otimes_\Lambda \Lambda^{\rm{op}} $ is an $R$-algebra isomorphism, we conclude that $ \Lambda^{\rm{op}} \ni \lambda\mapsto \xi_1(\lambda)\otimes \xi_2(1_\Lambda )\in {\rm End}_{\Lambda}(P\otimes_\Lambda Q)$ is an $R$-algebra epimorphism. $\blacksquare$ Throughout the paper, by ${\rm Spec}(R)$ we mean, as usual, the set of all prime ideals of $R$. \begin{defi} We say that a f.g.p. central $R$-$R$-bimodule $P$ has rank less than or equal to one, if for any ${\mathfrak p}\in {\rm Spec}(R)$ one has $P_{\mathfrak p}=0$ or $P_{\mathfrak p}\cong R_{\mathfrak p}$ as $R_{\mathfrak p}$-modules. In this case we write ${\rm rk}_{R}(P)\le 1.$ \end{defi} The following result characterizes ${\bf PicS} (R).$ \begin{prop}\label{carpic} We have \begin{equation*}{\bf PicS} (R)=\{[E]\,|\, E\ \text{is a f.g.p. central $R$-$R$-bimodule and}\ {\rm rk}_{R}(E)\le 1\}. \end{equation*} \end{prop} {\bf Proof.} Let $E$ be a f.g.p. central $R$-$R$-bimodule such that ${\rm rk}_{R}(E)\le 1,$ and consider the map $$m_R \colon R \ni r \mapsto m_r\in {\rm End}_R(E),\,\, m_r(x)=rx,\, r\in R, x\in E.$$ Via localization it is easy to show that $m_R$ is an $R$-algebra epimorphism. Conversely, if $[E]\in {\bf PicS} (R),$ then $E$ is a f.g.p. central $R$-$R$-bimodule and there is an $R$-algebra epimorphism $R\to {\rm End}_R(E).$ Thus, for any ${\mathfrak p}\in {\rm Spec}(R)$ there is an $R$-algebra epimorphism $$R_{{\mathfrak p}}\to {\rm End}_{R_{{\mathfrak p}}} (R^{n_{\mathfrak p}}_{{\mathfrak p}})\simeq M_{{n_{\mathfrak p}}} (R_{{\mathfrak p}}),$$ where $n_{\mathfrak p}={\rm rk}_{R_{\mathfrak p}}(E_{\mathfrak p}),$ which gives $n_{\mathfrak p}\le 1$. $\blacksquare$ \medskip \begin{remark}\label{unit} Notice that ${\mathcal U}( {\bf PicS}(R))={\bf Pic}(R).$ Indeed, the inclusion ${\mathcal U}( {\bf PicS}(R))\supseteq{\bf Pic}(R)$ is trivial. On the other hand, for $[E] \in {\mathcal U}( {\bf PicS}(R))$ there exists $[P] \in {\bf PicS}(R)$ with $E\otimes P \cong R,$ so that $E_{\mathfrak p} \otimes P_{\mathfrak p} \cong R_{\mathfrak p} $ for each prime ${\mathfrak p} $ in $R.$ Then $E_{\mathfrak p} \neq 0$ and we see by Proposition~\ref{carpic} that ${\rm rk}_{R_{\mathfrak p}}\, (E_{\mathfrak p})=1$ for each prime ${\mathfrak p} ,$ and thus $[E] \in {\bf Pic}(R).$ \end{remark} Given an inverse semigroup $S,$ we denote the inverse of $s\in S$ by $s^*.$ We proceed with the following fact. \begin{prop}\label{picinv}The set ${\bf PicS} (R)$ with the binary operation induced by the tensor product is a commutative inverse monoid with $0$. Moreover $[E^*]=[E]^*,$ for all $[E]\in {\bf PicS} (R).$ \end{prop} {\bf Proof.}
It follows from Proposition \ref{ccom} and Proposition \ref{carpic} that ${\bf PicS} (R)$ is a commutative monoid with $0.$ Take $[M]\in {\bf PicS} (R).$ By Proposition \ref{ht} we obtain $(M^*)_{\mathfrak p}\cong (M_{\mathfrak p})^*={\rm Hom}_{R_{\mathfrak p}}(M_{\mathfrak p}, R_{\mathfrak p}),$ for all ${\mathfrak p} \in {\rm Spec}(R),$ and hence $[M^*]\in {\bf PicS} (R)$ thanks to Proposition \ref{carpic}. Now we prove that $[M][M^*][M]=[M]$ and $[M^*][M][M^*]=[M^*].$ Recall that $M \otimes M^* \cong {\rm End}_{R}(M),$ since $M$ is a f.g.p. $R$-module (see \cite[Lemma I.3.2 (a)]{DI}), and we get $[M][M^*][M]=[{\rm End}_{R}(M)][M].$ There is an $R$-module homomorphism $$\kappa\colon {\rm End}_{R}(M) \otimes M\ni f\otimes m\mapsto f(m)\in M,$$ and via localization we will prove that $\kappa$ is an isomorphism. Indeed, take ${\mathfrak p}\in {\rm Spec}(R)$; then there are two cases to consider. \emph{Case 1: $M_{\mathfrak p}=0$.} In this case $\kappa_{\mathfrak p} \colon 0\to 0$ is clearly an $R_{\mathfrak p}$-module isomorphism. \emph{Case 2: $M_{\mathfrak p}\cong R_{\mathfrak p}.$} Here $\kappa_{\mathfrak p} \colon R_{\mathfrak p}\otimes_{R_{\mathfrak p}} R_{\mathfrak p}\ni r'_{\mathfrak p}\otimes_{R_{\mathfrak p}} r_{\mathfrak p}\mapsto r'_{\mathfrak p} r_{\mathfrak p}\in R_{\mathfrak p}$ is an $R_{\mathfrak p}$-module isomorphism. From this we conclude that $[M][M^*][M]=[M],$ for all $[M]\in {\bf PicS} (R).$ Finally, since $M$ is a f.g.p. $R$-module, there is an $R$-module isomorphism $M\cong (M^*)^* $ (see \cite[Theorem V.4.1]{MC}), and consequently $[M^*][M][M^*]=[M^*][(M^*)^*][M^*]=[M^*].$ $\blacksquare$ By Proposition \ref{picinv} and Clifford's Theorem (see for instance \cite{CP}), ${\bf PicS} (R)$ is a semilattice of abelian groups. In particular, $${\bf PicS} (R)=\bigcup\limits_{\zeta \in F(R)}{\bf PicS}_\zeta (R),$$ where $F(R)$ is a semilattice isomorphic to the semilattice of the idempotents of ${\bf PicS} (R).$ Therefore, to describe ${\bf PicS} (R)$ we need to know its idempotents. \smallskip We recall that given an inverse semigroup $S,$ its idempotents form a commutative subsemigroup which is a semilattice with respect to the natural order, given by $e\leq f \Leftrightarrow ef=e.$ \smallskip Let $T$ be a commutative ring. For a $T$-module $M$ denote by ${\rm Ann}_T(M)$ the annihilator of $M$ in $T$. If $M$ is a finitely generated $T$-module, the sets ${\rm Supp}_T (M)=\{\mathfrak{p}\in {\rm Spec}(T)\,|\, M_{\mathfrak p}\ne 0\}$ and $V({\rm Ann}_T(M))=\{\mathfrak{p}\in {\rm Spec}(T)\,|\, \mathfrak{p}\supseteq {\rm Ann}_T(M)\}$ coincide (see e.g. \cite[p. 25-26]{HM}). The following lemma characterizes the idempotents of ${\bf PicS} (R).$ \begin{lema} \label{idemp} Let $M$ be a f.g.p. $R$-module and $I_M={\rm Ann}_R(M)$. Then, the following statements are equivalent: \begin{itemize} \item[(i)] $M\otimes M\cong M$. \item[(ii)] $M\cong R/{I_M}$. \item[(iii)] $M\cong Re$ (and $I_M=R(1-e)$), for some idempotent $e$ of $R$. \end{itemize} \end{lema} {\bf Proof.} (i)$\Rightarrow$(ii) It easily follows from the dual basis lemma that $M$ is a faithfully projective $\left(R/I_M\right)$-module. Moreover, the $R$-module isomorphism $M\otimes M\cong M$ implies $M\otimes_{R/I_M} M\cong M$ as $R/I_M$-modules, and hence ${\rm rk}_{R/I_M}(M)\le 1$.
Moreover, ${\rm Supp}_{R/I_M} (M)=V(\bar{0})={\rm Spec}(R/I_M),$ and since $M$ is a faithfully projective $R/I_M$-module, we obtain $[M] \in {\bf Pic} (R/I_M).$ Being an idempotent, $[M]$ must be the identity element of ${\bf Pic} (R/I_M), $ so that there is an $R/I_M$-module isomorphism $M\cong R/I_M, $ which is clearly an isomorphism of $R$-modules. (ii)$\Rightarrow$(iii) Since $M\cong R/{I_M}$ is f.g.p. as an $R$-module, the exact sequence $$0\to I_M\to R\to R/{I_M}\to 0$$ splits; thus the ideal $I_M$ is a direct summand of $R$ and the assertion easily follows. (iii)$\Rightarrow$(i) It is clear. $\blacksquare$ Let ${\bf I_p}(R)$ be the semilattice of the idempotents of $R$ with respect to the product. If the $R$-modules $Re$ and $Rf$ are isomorphic, where $e,f \in {\bf I_p}(R)$, then their annihilators in $R$ coincide, i.e. $(1-e)R= (1-f)R.$ This yields $e=f,$ and it follows by Lemma \ref{idemp} that the map $e \mapsto [eR]$ is an isomorphism of ${\bf I_p}(R)$ with the semilattice of the idempotents of ${\bf PicS} (R).$ Consequently, the components of ${\bf PicS} (R)$ can be indexed by the idempotents of $R,$ and $${\bf PicS} (R) = \bigcup\limits_{e\in {\bf I_p}(R)}{\bf Pic S}_e (R)$$ gives the decomposition of ${\bf PicS} (R)$ as a semilattice of abelian groups. Thus, if for each $e\in {\bf I_p}(R)$ we denote by $[M_e]$ the identity element of ${\bf PicS }_e(R),$ then $[M_e][M_f]=[M_{ef}]$ and ${\bf PicS} _e(R){\bf PicS} _f(R)\subseteq {\bf PicS}_{ef} (R),$ for all $e,f\in {\bf I_p}(R)$ (this can also be seen directly from Lemma \ref{idemp}). Now we will describe the components of ${\bf PicS} (R).$ For this, note that for any $R$-module $N$ we have ${\rm Ann}_R(N)={\rm Ann}_R({\rm End}_R(N)),$ and if $N$ is projective it follows from the dual basis lemma that ${\rm Ann}_R(N)={\rm Ann}_R(N^*).$ \begin{lema} \label{dec} ${\bf PicS} _e(R)=\{[N]\in {\bf PicS} (R)\mid{\rm Ann}_R(N)=R(1-e) \} \cong {\bf Pic} (Re),$ for all $e\in {\bf I_p}(R).$ In particular, the identity element of ${\bf PicS} _e(R)$ is $[Re].$ \end{lema} {\bf Proof.} For the first equality let $[N]\in {\bf PicS} _e(R).$ Then, there are $R$-module isomorphisms ${\rm End}_R(N)\cong N\otimes N^*\cong M_e\cong Re,$ where the last isomorphism follows from Lemma \ref{idemp}, and hence ${\rm Ann}_R(N)={\rm Ann}_R({\rm End}_R(N))=R(1-e).$ Conversely, if ${\rm Ann}_R(N)=R(1-e)$ we have $N\otimes M_e\cong N\otimes Re\cong Ne=Ne\oplus N(1-e)=N,$ as $R$-modules, so $[N]\in{\bf PicS} _e(R). $ Thus for any $[N]\in{\bf PicS} _e(R),$ its representative is a faithfully projective $Re$-module, and the isomorphism ${\bf PicS} _e(R) \cong {\bf Pic} (Re)$ is now trivial. Finally, notice that the image of $e$ in $R_{\mathfrak p} $ is either $0$ (if $e \in {\mathfrak p}$) or the identity of $R_{\mathfrak p} $ (if $e \notin {\mathfrak p}$). Hence $ (e R )_{\mathfrak p} $ is free of rank $0$ or $1$ and, consequently, $[eR] \in {\bf PicS}(R).$ $\blacksquare$ Summarizing, we have the following.
\begin{teo} \label{picde} The (disjoint) union \begin{equation}\label{pics}{\bf PicS} (R) = \bigcup\limits_{e\in {\bf I_p}(R)}{\bf Pic S}_e (R)\cong\bigcup\limits_{e\in {\bf I_p}(R)}{\bf Pic} (Re) \end{equation} gives the decomposition of ${\bf PicS} (R)$ as a semilattice of abelian groups, whose structural homomorphisms are given by $\varepsilon _{e,f}: {\bf Pic} (Re) \to {\bf Pic} (Rf),$ $[M] \mapsto [M \otimes Rf],$ where $e,f \in {\bf I_p}(R),$ $ e \geq f.$ $\blacksquare$ \end{teo} We point out the following. \begin{lema} \label{iguald}For any $g\in G,$ we have: \begin{itemize} \item[(i)] ${\bf PicS}_{1_g}(R)\cong {\bf Pic}(D_g).$ \item[(ii)] Let $g\in G$ and $[M]\in {\bf PicS}(R).$ If $1_gm=m$ for all $m\in M$ and $M_{\mathfrak p}\cong (D_g)_{\mathfrak p}$ as $R_{\mathfrak p}$-modules for all ${\mathfrak p}\in{\rm Spec}(R),$ then $[M]\in {\bf Pic}(D_g).$ \item [(iii)] ${\bf PicS} (D_g) = \bigcup\limits_{e\in {\bf I_p}(R)\atop e1_g=e}{\bf Pic} (Re),$ for any $g\in G .$ \end{itemize} \end{lema} {\bf Proof.} Item (i) is clear from Theorem \ref{picde}. (ii) Notice that $M$ is a f.g.p. $D_g$-module. Let ${\mathfrak p}\in {\rm Spec}(D_g).$ Since we have a ring isomorphism $D_g\cong R/{\rm Ann}_R(D_g),$ we may consider $M$ as an $R/{\rm Ann}_R(D_g)$-module and make the identification ${\mathfrak p}=\bar{{\mathfrak p}}_1={\mathfrak p}_1/{\rm Ann}_R(D_g),$ where ${\mathfrak p}_1\in{\rm Spec}(R)$ and ${\mathfrak p}_1$ contains ${\rm Ann}_R(D_g).$ Thus, it follows from the assumption that there exist $(D_g)_{\bar{\mathfrak p}_1}$-module isomorphisms $M_{\mathfrak p}\cong M_{{\bar {\mathfrak p}}_1}\cong (D_g)_{\bar{\mathfrak p}_1}\cong (D_g)_{\mathfrak p}$, which imply $[M]\in {\bf Pic}(D_g).$ (iii) Since \eqref{pics} holds for any commutative ring, we get $${\bf PicS} (D_g) = \bigcup\limits_{e\in {\bf I_p}(D_g)}{\bf Pic} (D_ge)=\bigcup\limits_{e \in {\bf I_p}(D_g)}{\bf Pic} (Re).$$ Moreover, $e\in {\bf I_p}(D_g)$ exactly when $e $ is an element of ${\bf I_p}(R)$ and $e1_g=e.$ $\blacksquare$ \section{A partial action on ${\bf PicS}(R)$ and the sequence $H^1(G,\alpha,R)\stackrel{\varphi_1}\to {\bf Pic}(R^\alpha)\stackrel {\varphi_2}\to {\bf Pic}(R )\cap {\bf Pic S}(R)^{\alpha^*} \stackrel{\varphi_3}\to H^2(G,\alpha, R)$} \label{phi1phi2phi3} \subsection{A partial action on ${\bf PicS}(R)$} Let $\alpha=(D_g, \alpha_g)_{g\in G}$ be a unital partial action of a group $G$ on $R.$ It is known that $\alpha_g(1_h1_{g^{-1}})=1_g1_{gh},$ for all $g,h\in G$ (see \cite[p. 1939]{DE}). Then for any $y\in R,$ we have \begin{equation}\label{prodp}\alpha_g(\alpha_h(y1_{h^{-1}})1_{g^{-1}})=\alpha_{gh}(y1_{(gh)^{-1}})1_g, \ \text{for all}\ g,h\in G.
\end{equation} In what follows $\alpha=(D_g, \alpha_g)_{g\in G}$ will be a fixed unital partial action of the group $G$ on the ring $R.$ The next result will help us in the construction of a partial action on ${\bf PicS}(R).$ \begin{lema}\label{gaction} Let $E$ and $F$ be central $R$-$R$-bimodules and $g\in G.$ Suppose that $1_{g^{-1}}x=x$ and $1_{g^{-1}}y=y$ for all $x\in E$ and $y\in F.$ Denote by $E_g$ the set $E$ where the (central) action of $R$ is given by $$r\bullet x_g= \alpha_{g^{-1}}(r1_g)x=x_g\bullet r,\,\, r\in R,\,\, x_g\in E_g.$$ Then \begin{itemize} \item[(i)] $E_g$ is an $R$-module and $(E_g)_{\mathfrak p}=(E_{\mathfrak p})_g$ as $R$-modules, where the action of $R$ on $(E_{\mathfrak p})_g$ is $r\bullet \frac{x}{s}=\frac{\alpha_{g^{-1}}(r1_g)x}{s},$ for any $x\in E,\,{\mathfrak p} \in {\rm Spec}(R), \,s\in R\setminus {\mathfrak p}.$ \item[(ii)] ${\rm Hom}_R(E,F)= {\rm Hom}_R(E_{g},F_{g })$ as sets. In particular, we have ${\rm Iso}_R(E,F)= {\rm Iso}_R( E_{g },F_{g } )$ and ${\rm End}_R(E) = {\rm End}_R(E_{g } ).$ \item[(iii)] If $E$ is a f.g.p. $R$-module, so too is $E_g.$ \item[(iv)] There is an $R$-module isomorphism $(E\otimes F)_g\cong E_g\otimes F_g.$ \item[(v)] If ${\rm rk}(E)\le 1,$ then ${\rm rk}(E_g)\le 1.$ \item[(vi)] For any $[M]\in {\bf Pic}(D_{g^{-1}}),$ $[M_g]\in{\bf Pic}(D_{g}). $ \end{itemize} \end{lema} {\bf Proof.} Item (i) is clear. (ii) Obviously ${\rm Hom}_R(E,F)\subseteq {\rm Hom}_R( E_{g},F_{g } ).$ Let $f\in {\rm Hom}_R( E_{g },F_{g }),\, r\in R$ and $x\in E_{g } .$ Then $f(rx)=f(r1_{g^{-1}} x)=f(\alpha_{g^{-1}}(r'1_{g})x)=f(r'\bullet x)=r'\bullet f(x)=rf(x),$ where $r'\in R$ is such that $\alpha_{g^{-1}}(r'1_{g})=r1_{g^{-1}}.$ (iii) For any $f\in E^*,$ the map $$\alpha_g\circ f\colon E_g \ni x\mapsto \alpha_g \circ f(x)=\alpha_g(f(x)1_{g^{-1}})\in R$$ is an element of $(E_g)^*.$ Indeed, $$\alpha_g \circ f(r\bullet x)=\alpha_g(f(r\bullet x)1_{g^{-1}})=\alpha_g(\alpha_{g^{-1}}(r1_{g})f(x))=r(\alpha_g \circ f)(x).$$ Suppose that $E$ is a f.g.p. $R$-module. Then, there are $f_i\in E^*$ and $x_i\in E$ such that $x=\sum\limits_{i}f_i(x)x_i=\sum\limits_{i}f_i(x)1_{g^{-1}}x_i=\sum\limits_{i}(\alpha_g\circ f_i(x))\bullet x_i,$ for any $x\in E$, which implies that $E_g$ is a f.g.p. $R$-module, with dual basis $\{\alpha_g\circ f_i, x_i\}.$ (iv) The map $E_g\times F_g\ni (x,y)\mapsto (x\otimes y)_g\in (E\otimes F)_g$ is $R$-balanced, therefore it induces a well defined $R$-module map \begin{equation}\label{iotag}\iota_g\colon E_g\otimes_R F_g\ni x\otimes y \mapsto (x\otimes y)_g \in(E\otimes F)_g\end{equation} which is clearly bijective. (v) Take ${\mathfrak p} \in {\rm Spec}(R).$ We have two cases to consider. \emph{Case 1:} $E_{\mathfrak p}=0.$ In this case we have $(E_g)_{\mathfrak p}\stackrel{{\rm (i)}}{=}(E_{\mathfrak p})_g=0.$ \emph{Case 2:} $E_{\mathfrak p}\cong R_{\mathfrak p}$ as $R_{\mathfrak p}$-modules.
Since $1_{g^{-1}}x=x$ for all $x\in E,$ we have $1_{g^{-1}} r_{\mathfrak p}= r_{\mathfrak p}$ for all $r_{\mathfrak p}\in R_{\mathfrak p},$ which implies $1_{g^{-1}}\notin{\mathfrak p},$ because the image of $1_{g^{-1}}$ in $R_{\mathfrak p}$ is either $0$ or the identity of $R_{\mathfrak p} $, and thus $E_{\mathfrak p}\cong R_{\mathfrak p} = (D_{g^{-1}})_{\mathfrak p}\ \text{as}\ R_{\mathfrak p}\text{-modules.}$ Finally, using (i) we get $$(E_g)_{\mathfrak p}=(E_{\mathfrak p})_g\cong((D_{g^{-1}})_{\mathfrak p})_g=((D_{g^{-1}})_g)_{\mathfrak p}\cong (D_{g})_{\mathfrak p} ,$$ where the latter isomorphism is given by $\alpha _g.$ This ensures that ${\rm rk}(E_g)\le 1.$ (vi) In the proof of item (iii) we saw that if $\{f_i, m_i\}$ is a dual basis for $M,$ then $\{\alpha_g\circ f_i, m_i\}$ is a dual basis for the $D_g$-module $M_g.$ On the other hand, for the same reason as in the proof of item (ii), we have that $D_{g} \cong D_{g^{-1}}\cong {\rm End}_{D_{g^{-1}}}(M)={\rm End}_{D_{g}}(M_g)$ as rings. Since $M$ is faithful, so too is $M_g,$ and the ring isomorphism $D_g\cong {\rm End}_{D_{g}}(M_g)$ implies $[M_g]\in {\bf Pic}(D_{g}). $ $\blacksquare$ \begin{lema}\label{igual} For any $g\in G$ set $$X_g=\{[1_gE]\,|\, [E]\in \,{\bf PicS}(R)\}=[D_g]{\bf PicS}(R).$$ Then, $X_g$ is an ideal of ${\bf PicS} (R)$ and \begin{itemize} \item[(i)] $X_g=\{[E]\in {\bf PicS} (R)\,|\, E=1_gE\},$ \item[(ii)] for any $[E]\in X_{g^{-1}}$ we have $[E_g]\in X_g.$ \end{itemize} \end{lema} {\bf Proof.} (i) It is clear that $X_g\supseteq\{[E]\in {\bf PicS}(R)\,|\, E=1_gE\}.$ On the other hand, given $[E]\in X_g $ there exist $[F]\in {\bf PicS} (R)$ and an $R$-module isomorphism $\varphi_g\colon 1_gF\to E.$ This leads to $E=\varphi_g(1_gF)=1_g\varphi_g(1_gF)=1_gE.$ (ii) Notice that $1_g\bullet x_g=1_{g^{-1}}x_g=x_g$ for any $x_g \in E_g,$ so $1_g\bullet E_g=E_g.$ By Lemma \ref{gaction}, $E_g$ is a f.g.p. $R$-module and ${\rm rk}(E_g) \le 1,$ hence $[E_g] \in {\bf PicS}(R).$ Thus, using item (i) we conclude that $[E_g]\in X_g.$ $\blacksquare$ \begin{teo} \label{psem} The family $\alpha^*=( X_g, \alpha_g^*)_{g\in G},$ where $\alpha_g^*\colon X_{g^{-1}}\ni [E]\mapsto [E_g]\in X_{g},$ defines a partial action of $G$ on ${\bf PicS} (R).$ \end{teo} {\bf Proof.} By Lemmas \ref{gaction} and \ref{igual} the map $\alpha_g^*$ is a well defined semigroup homomorphism, for all $g\in G.$ Clearly $X_1={\bf PicS} (R)$ and $\alpha_1^*={\rm id}_{{\bf PicS} (R)}.$ We need to show that $\alpha_{gh}^*$ is an extension of $\alpha_g^*\circ \alpha_h^*.$ If $[E]\in X_{h^{-1}}$ is such that $[E_h]\in X_{g^{-1}},$ then $E=1_{h^{-1}}E$ and $$E=E_h=1_{g^{-1}}\bullet E_h=\alpha_{h^{-1}}(1_{g^{-1}}1_h)E_h=1_{(gh)^{-1}}1_{h^{-1}}E=1_{(gh)^{-1}}E.$$ Thus $E =1_{(gh)^{-1}}E,$ which shows that ${\rm dom}(\alpha_g^*\circ \alpha_h^*)\subseteq {\rm dom}\,\alpha^*_{gh},$ thanks to item (i) of Lemma \ref{igual}. Furthermore, we have $\alpha_g^*\circ \alpha_h^*([E])=[(E_h)_g],$\, $\alpha^*_{gh}([E])=[E_{gh}],$ and $(E_h)_g=E_{gh}$ as sets.
\begin{remark}\label{ggaction} It follows from Theorem \ref{psem} and {\rm (\ref{prodp})} that there is an $R$-module isomorphism $$(D_{(gh)^{-1}}\otimes P)_{gh}\otimes D_g\cong (D_{g^{-1}}\otimes (D_{h^{-1}}\otimes P)_h)_g,$$ for any $R$-module $P$ and $g,h\in G.$ \end{remark}

The subset of invariants of ${\bf PicS}(R)$ (see equation (\ref{inva})) is given by \begin{equation}\label{fixx}{\bf PicS}(R)^{\alpha^*}=\{[E]\in {\bf PicS}(R)\,|\, (D_{g^{-1}}\otimes E)_g\cong D_g\otimes E, \,\,\text{for all}\,\,g\in G\}. \end{equation}

\begin{prop} ${\bf PicS}(R)^{\alpha^*}$ contains the zero element $0$ and is a commutative inverse submonoid of ${\bf PicS}(R).$ \end{prop}

{\bf Proof. } Evidently $0\in {\bf PicS}(R)^{\alpha^*}.$ Moreover, for any $[E],[N]\in {\bf PicS}(R)^{\alpha^*},$ we have $\alpha^*_g([D_{g^{-1}}\otimes (E\otimes N)])=\alpha^*_g([D_{g^{-1}}\otimes E])\alpha^*_g([D_{g^{-1}}\otimes N])=[D_g\otimes E][D_g\otimes N]=[D_g\otimes (E\otimes N)],$ and so $[E][N]\in {\bf PicS}(R)^{\alpha^*}.$ Given any element $[E]$ of ${\bf PicS}(R)^{\alpha^*},$ we need to show that $[E^*]$ is also in ${\bf PicS}(R)^{\alpha^*}.$ If $[E]\in X_{g^{-1}},$ for some $g\in G,$ then $[E^*]=[E^*][E][E^*]\in X_{g^{-1}}.$ Since $[E^*][E][E^*]=[E^*]$ and $[E][E^*][E]=[E],$ then $[(E^*)_g][E_g][(E^*)_g]=[(E^*)_g]$ and $[E_g][E^*_g][E_g]=[E_g],$ thanks to (iv) of Lemma \ref{gaction}. Thus, \begin{equation}\label{star}[(E^*)_g]=[E_g]^*=[(E_g)^*],\end{equation} where the last equality follows from Proposition \ref{picinv}. Therefore for any $[E]\in {\bf PicS}(R)^{\alpha^*}$ we get \begin{align*}\{\alpha^*_g([D_{g^{-1}}\otimes E^*])\}^* &=[(D_{g^{-1}}\otimes E^*)_g]^*=[((D_{g^{-1}}\otimes E)^*)_g]^* \stackrel{\eqref{star}}{=}[(D_{g^{-1}}\otimes E)_g]\\ &=[D_g\otimes E]=[D_g\otimes E]^{**}=[D_g\otimes E^*]^*,\end{align*} hence $\alpha^*_g([D_{g^{-1}}\otimes E^*])=[D_g\otimes E^*],$ and $[E^*]\in {\bf PicS}(R)^{\alpha^*}.$ Finally, since $\alpha_g$ is a ring isomorphism from $D_{g^{-1}}$ onto $D_g,$ we have $\alpha_g^*([D_{g^{-1}}])=[D_g],\, g\in G,$ or equivalently $[R]\in {\bf PicS}(R)^{\alpha^*}.$ ${\blacksquare}$

\begin{remark}\label{unxg} Recall that by definition $X_g={\bf PicS}(R)[D_g],$ and thus by Theorem \ref{picde} we have that $X_g=\bigcup\limits_{e\in {\bf I_p}(R)}{\bf Pic}(Re)[D_g]=\bigcup\limits_{e\in {\bf I_p}(R)}{\bf Pic}(D_ge),$ where the last equality holds because the map ${\bf Pic}(Re)\ni [M]\mapsto [M\otimes D_ge]\in {\bf Pic}(D_ge)$ is a group epimorphism. Consequently, $$X_g=\bigcup\limits_{e\in {\bf I_p}(R)}{\bf Pic}(D_ge)=\bigcup\limits_{e\in {\bf I_p}(R)}{\bf Pic}(Re1_g)=\bigcup\limits_{e\in {\bf I_p}(D_g)\atop e1_g=e}{\bf Pic}(Re)={\bf PicS}(D_g),$$ thanks to {\rm (iii)} of Lemma \ref{iguald}.
Finally, Remark~\ref{unit} implies that ${\mathcal U}(X_g)={\bf Pic}(D_g).$ \end{remark}

In all that follows $G$ will stand for a finite group and $R\supseteq R^\alpha$ will be an $\alpha$-partial Galois extension. In particular, it follows from \cite[Theorem 4.1 (ii)]{DFP} that $R$ and any $D_g,\, g\in G,$ are f.g.p. $R^\alpha$-modules. We also recall two well known results that will be used several times in the sequel. The proof of the first can be easily obtained by localization. For the second one we refer to the literature.

\begin{lema}\label{proje} Let $M,N$ be $R$-modules such that $M$ and $M\otimes N$ are f.g.p. $R$-modules, and $M_{\mathfrak p}\ne 0$ for all ${\mathfrak p}\in {\rm Spec}(R).$ Then $N$ is also a f.g.p. $R$-module. \end{lema}

\begin{lema}\label{cancel}\cite[Chapter I, Lemma 3.2 (b)]{DI} Let $M$ be a faithfully projective $R$-module. Then there is an $R$-$R$-bimodule isomorphism $M^*\otimes_{{\rm End}_R(M)} M\cong R.$ Consequently, if $N,N'$ are $R$-modules such that $M\otimes N\cong M\otimes N'$ as $R$-modules, then $N\cong N'$ as $R$-modules. ${\blacksquare}$ \end{lema}

\subsection{The sequence $H^1(G,\alpha,R)\stackrel{\varphi_1}\to{\bf Pic}(R^\alpha)\stackrel{\varphi_2}\to {\bf PicS}(R)^{\alpha^*}\cap {\bf Pic}(R)\stackrel{\varphi_3}\to H^2(G,\alpha,R)$}

For any $R\star_\alpha G$-module $M,$ as in \cite[page 82]{DFP} we denote $$M^G=\{m\in M\,|\, (1_g\delta_g)m=1_gm, \,\,\text{for all}\,\, g\in G\}.$$ It can be seen that $R$ is an $R\star_\alpha G$-module via $(r_g\delta_g)\rhd r=r_g\alpha_g(r1_{g^{-1}}),$ for each $g\in G$ and $r\in R,\,r_g\in D_g,$ and this action induces an $R\star_\alpha G$-module structure on $R\otimes_{R^\alpha} M^G.$ We have the following.

\begin{teo}\label{homp} There is a group homomorphism $\varphi_1\colon H^1(G,\alpha, R)\to {\bf Pic}(R^\alpha).$ \end{teo}

{\bf Proof. } Take $f\in Z^1(G,\alpha, R)$ and define $\theta_f\in {\rm End}_{R^\alpha}(R\star_\alpha G)$ by $\theta_f(r_g\delta_g)=r_gf(g)\delta_g$ for all $r_g\in D_g,\, g\in G.$ Then $\theta_f$ is an $R^\alpha$-algebra homomorphism because \begin{align*} \theta_f(r_g\delta_g)\theta_f(r_h\delta_h)&=(r_gf(g)\delta_g)(r_hf(h)\delta_h)=r_gf(g)\alpha_g(r_hf(h)1_{g^{-1}})\delta_{gh}\\ &=r_g\alpha_g(r_h1_{g^{-1}})f(g)\alpha_g(f(h)1_{g^{-1}})\delta_{gh} =r_g\alpha_g(r_h1_{g^{-1}})f(gh)\delta_{gh}\\ &=\theta_f((r_g\delta_g)(r_h\delta_h)), \end{align*} for all $r_g\in D_g,\, r_h\in D_h,\, g,h\in G.$ Hence, we may define an $R\star_\alpha G$-module $R_f$ by $R_f=R$ as sets and \begin{equation*}\label{rf}(r_g\delta_g)\cdot r=\theta_f(r_g\delta_g)\rhd r,\,\,\,\text{for any}\,\, r\in R,\, r_g\in D_g,\, g\in G.\end{equation*} In particular $R_f=R$ as $R$-modules, as $f$ is normalized in view of Remark~\ref{normalized}. By (iii) of \cite[Theorem 4.1]{DFP} there is an $R$-module isomorphism $R\otimes_{R^\alpha}R_f^G\cong R_f$. Since $R$ is a f.g.p. $R^\alpha$-module, we conclude that $R_f^G$ is a f.g.p. $R^\alpha$-module by Lemma \ref{proje}. Finally, via localization we see from the last isomorphism that ${\rm rk}_{R^\alpha}(R_f^G)=1,$ so $[R_f^G]\in {\bf Pic}(R^\alpha).$ Define \begin{equation*}\label{1homo}\varphi_1\colon H^1(G,\alpha, R)\ni {\rm cls}(f)\mapsto [R_f^G]\in {\bf Pic}(R^\alpha).\end{equation*} We will check that $\varphi_1$ is a well defined group homomorphism.
If $f\in B^1(G,\alpha, R),$ there exists $a\in {\mathcal U}(R)$ such that $f(g)=\alpha_g(a1_{g^{-1}})a^{-1},$ for all $g\in G.$ In this case one has $r\in R_f^G$ if and only if $ar\in R^\alpha.$ Thus, multiplication by $a$ gives an $R^{\alpha}$-module isomorphism $R_f^G\cong R^\alpha,$ which yields $[R_f^G]=[R^\alpha]\in {\bf Pic}(R^\alpha).$ Therefore, to prove that $\varphi_1$ is well defined, it is enough to show that $\varphi_1$ preserves products. For any $f,g\in Z^1(G,\alpha, R)$ there is a chain of $R^\alpha$-module isomorphisms $$R\otimes_{R^{\alpha}}(R_f^G\otimes_{R^\alpha} R_g^G)\cong (R\otimes_{R^\alpha} R_f^G)\otimes (R\otimes_{R^\alpha} R_g^G)\cong R_f\otimes R_g\cong R\otimes R\cong R_{fg}\cong R\otimes_{R^{\alpha}} R_{fg}^G,$$ and recalling that $R$ is a f.g.p. $R^\alpha$-module we have $R_f^G\otimes_{R^\alpha}R_g^G\cong R_{fg}^G$ as $R^\alpha$-modules, by Lemma \ref{cancel}. ${\blacksquare}$

\begin{prop} There is a group homomorphism ${\bf Pic}(R^\alpha)\stackrel{\varphi_2}{\to}{\bf PicS}(R)^{\alpha^*}\cap {\bf Pic}(R).$ \end{prop}

{\bf Proof. } For any $[E]\in {\bf Pic}(R^\alpha)$ set $\varphi_2([E])=[R\otimes_{R^\alpha} E].$ Clearly $\varphi_2$ is a well defined group homomorphism from ${\bf Pic}(R^\alpha)$ to ${\bf Pic}(R)$. We shall check that ${\rm im}\,\varphi_2\subseteq {\bf PicS}(R)^{\alpha^*}.$ There are $R$-module isomorphisms \begin{equation}\label{priso}D_g\otimes(R\otimes_{R^\alpha} E)\cong D_g\otimes_{R^\alpha}E\,\,\text{ and}\,\, (D_{g^{-1}}\otimes (R\otimes_{R^\alpha} E))_{g}\cong(D_{g^{-1}}\otimes_{R^\alpha} E)_{g}.\end{equation} Furthermore, the map determined by \begin{equation}\label{seiso} D_g\otimes_{R^\alpha}E\ni d\otimes_{R^\alpha} x\mapsto \alpha_{g^{-1}}(d)\otimes_{R^\alpha} x\in (D_{g^{-1}}\otimes_{R^\alpha} E)_g\end{equation} is also an $R$-module isomorphism. Then, combining \eqref{priso} and \eqref{seiso} we obtain an $R$-module isomorphism $D_{g}\otimes (R\otimes_{R^\alpha} E)\cong (D_{g^{-1}}\otimes (R\otimes_{R^\alpha} E))_{g},$ for all $g\in G,$ and hence $\varphi_2([E])\in {\bf PicS}(R)^{\alpha^*}.$ ${\blacksquare}$

Now, we proceed with the construction of $\varphi_3.$ First, for any $R$-module $M$ we identify $M\otimes D_g$ with $MD_g,$ via the $R$-module isomorphism $M\otimes D_g\ni x\otimes d\mapsto xd\in MD_g$, for any $g\in G.$ Now, let $[E]\in {\bf PicS}(R)^{\alpha^*}\cap {\bf Pic}(R).$ Then, by \eqref{fixx} there is a family of $R$-module isomorphisms \begin{equation}\label{psiii}\{\psi_g\colon ED_g\to (ED_{g^{-1}})_g\}_{g\in G} \;\;\text{with}\;\; \psi_g(rx)=\alpha_{g^{-1}}(r1_g)\psi_g(x), \end{equation} where $r\in R,\, x\in ED_g,\, g\in G.$ Thus the maps $\psi^{-1}_g\colon (ED_{g^{-1}})_g\to ED_g,\,\, g\in G,$ satisfy \begin{equation}\label{psii}\psi^{-1}_g(rx)=\psi^{-1}_g(\alpha_g(r1_{g^{-1}})\bullet x)=\alpha_g(r1_{g^{-1}})\psi^{-1}_g(x),\,\,\text{ for all}\,\,r\in R,\,\,x\in ED_{g^{-1}}.\end{equation}
We shall prove that $\psi_{(gh)^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}\colon ED_gD_{gh}\to ED_gD_{gh}$ is well defined and is an element of ${\mathcal U}({\rm End}_{D_gD_{gh}}(ED_gD_{gh})).$ From (\ref{psii}) we have $\psi^{-1}_{g^{-1}}(ED_gD_{gh})\subseteq ED_{g^{-1}}D_{h}\subseteq {\rm dom}\,\psi^{-1}_{h^{-1}}\cap\,{\rm dom}\,\psi_{g^{-1}},$ and also $\psi^{-1}_{h^{-1}}(ED_{g^{-1}}D_{h})\subseteq ED_{h^{-1}g^{-1}}D_{h^{-1}}\subseteq {\rm dom}\,\psi_{h^{-1}g^{-1}}\cap\,{\rm dom}\,\psi_{h^{-1}}.$ This yields that the map $\psi_{(gh)^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}$ is well defined and $\psi_{g^{-1}}\psi_{h^{-1}}\psi^{-1}_{(gh)^{-1}}$ is its inverse. Now we check that $\psi_{(gh)^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}$ is $D_gD_{gh}$-linear. Take $d\in D_gD_{gh}$ and $x\in ED_gD_{gh}.$ Using (\ref{psii}) and (\ref{psiii}) we get the following \begin{align*} &\psi_{(gh)^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}(dx) \stackrel{d\in D_gD_{gh}}{=} \psi_{(gh)^{-1}}(\alpha_{h^{-1}}(\alpha_{g^{-1}}(d)))\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}(x) =\\ &\psi_{(gh)^{-1}}(\alpha_{h^{-1}g^{-1}}(d))\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}(x)= d\,\psi_{(gh)^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}(x). \end{align*} Since $[E]\in {\bf Pic}(R)$ then $[ED_gD_{gh}]\in {\bf Pic}(D_gD_{gh})$, and thus ${\rm End}_{D_gD_{gh}}(ED_gD_{gh})\cong D_gD_{gh}.$ Moreover, $\psi_{(gh)^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}$ is an invertible element of ${\rm End}_{D_gD_{gh}}(ED_gD_{gh}),$ and hence there exists $\omega_{g,h}\in {\mathcal U}(D_gD_{gh})$ such that $\psi_{(gh)^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}(x)=\omega_{g,h}x,$ for all $x\in ED_gD_{gh}.$

Summarizing, for an element $[E]\in {\bf PicS}(R)^{\alpha^*}\cap {\bf Pic}(R)$ we have found a map $\omega_{[E]}=\omega\colon G\times G\ni (g,h)\mapsto \omega_{g,h}\in {\mathcal U}(D_gD_{gh})\subseteq R.$ We shall see that $\omega\in Z^2(G,\alpha,R).$ Take $g,h,l\in G$ and $x\in ED_gD_{gh}D_{ghl}.$ Then \begin{align*} \omega_{g,hl}\alpha_g(\omega_{h,l}1_{g^{-1}})x\stackrel{\eqref{psiii}}{=}&\,\omega_{g,hl}\psi_{g^{-1}}(\omega_{h,l}\psi^{-1}_{g^{-1}}(x))\\ =&\,\psi_{(ghl)^{-1}}\psi^{-1}_{(hl)^{-1}}\psi^{-1}_{g^{-1}}\psi_{g^{-1}}(\omega_{h,l}\psi^{-1}_{g^{-1}}(x))\\ =&\,\psi_{(ghl)^{-1}}\psi^{-1}_{(hl)^{-1}}(\omega_{h,l}\psi^{-1}_{g^{-1}}(x))\\ =&\,\psi_{(ghl)^{-1}}\psi^{-1}_{(hl)^{-1}}(\psi_{(hl)^{-1}}\psi^{-1}_{l^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}(x))\\ =&\,\psi_{(ghl)^{-1}}\psi^{-1}_{l^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}(x)\\ =&\,\psi_{(ghl)^{-1}}\psi^{-1}_{l^{-1}}\psi^{-1}_{(gh)^{-1}}\psi_{(gh)^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}(x)\\ =&\,\omega_{gh,l}\omega_{g,h}x. \end{align*}
Notice that $[ED_gD_{gh}D_{ghl}]\in {\bf Pic}(D_gD_{gh}D_{ghl})$ and, in particular, $ED_gD_{gh}D_{ghl}$ is a faithful $D_gD_{gh}D_{ghl}$-module. Since $\omega_{g,hl}\alpha_g(\omega_{h,l}1_{g^{-1}}),\ \omega_{gh,l}\omega_{g,h}\in D_gD_{gh}D_{ghl},$ we obtain that $\omega_{g,hl}\alpha_g(\omega_{h,l}1_{g^{-1}})=\omega_{gh,l}\omega_{g,h},$ as desired.

\begin{claim}\label{IsoChoice} ${\rm cls}(\omega)$ does not depend on the choice of the isomorphisms.\end{claim}

Let $\{\lambda_g\,|\, g\in G\}$ be another choice of $R$-isomorphisms $ED_g\to (ED_{g^{-1}})_g.$ Then $\lambda_g(rx)=\alpha_{g^{-1}}(r1_g)\lambda_g(x)$ and $\lambda^{-1}_g(rx)=\alpha_g(r1_{g^{-1}})\lambda^{-1}_g(x)$, for all $g\in G,\,r\in R$ and $x$ belonging to the corresponding domain. Let $\tilde\omega\colon G\times G\to R$ also be defined by $$\tilde\omega(g,h)x=\lambda_{(gh)^{-1}}\lambda^{-1}_{h^{-1}}\lambda^{-1}_{g^{-1}}(x),\,\,\,\text{ for any}\,\,\,x\in ED_gD_{gh}.$$ We shall prove that ${\rm cls}(\omega)={\rm cls}(\tilde\omega)$ in $H^2(G,\alpha,R).$ Since $\lambda_g\psi^{-1}_g\colon (ED_{g^{-1}})_g\to (ED_{g^{-1}})_{g}$ is $D_{g}$-linear and $[(ED_{g^{-1}})_g]\in {\bf Pic}(D_g),$ there exists $u_g\in {\mathcal U}(D_g)$ such that $\lambda_g\psi^{-1}_g$ is the multiplication by $u_g.$ Then the map $u\colon G\ni g\mapsto u_g\in R$ belongs to $C^1(G,\alpha,R)$, and for any $x\in ED_gD_{gh}$ we have \begin{align*} \omega^{-1}_{g,h}\tilde\omega_{g,h}x&=\psi_{g^{-1}}\psi_{h^{-1}}\psi^{-1}_{(gh)^{-1}}(\tilde\omega_{g,h}x)\\ &=(\psi_{g^{-1}}\lambda^{-1}_{g^{-1}})\lambda_{g^{-1}}(\psi_{h^{-1}}\lambda^{-1}_{h^{-1}})\lambda^{-1}_{g^{-1}}(\lambda_{g^{-1}}\lambda_{h^{-1}}\psi^{-1}_{(gh)^{-1}})(\tilde\omega_{g,h}x)\\ &=u^{-1}_{g^{-1}}(\lambda_{g^{-1}}u^{-1}_{h^{-1}}\lambda^{-1}_{g^{-1}})(\lambda_{g^{-1}}\lambda_{h^{-1}}\psi^{-1}_{(gh)^{-1}})(\!\!\!\underbrace{\tilde\omega_{g,h}}_{\in\, D_gD_{gh}}\!\!\!\!x)\\ &=u^{-1}_{g^{-1}}(\lambda_{g^{-1}}u^{-1}_{h^{-1}}\lambda^{-1}_{g^{-1}})\tilde\omega_{g,h}(\lambda_{g^{-1}}\lambda_{h^{-1}}\psi^{-1}_{(gh)^{-1}})(x)\\ &=u^{-1}_{g^{-1}}(\lambda_{g^{-1}}u^{-1}_{h^{-1}}\lambda^{-1}_{g^{-1}})(\lambda_{(gh)^{-1}}\lambda^{-1}_{h^{-1}}\lambda^{-1}_{g^{-1}}\lambda_{g^{-1}}\lambda_{h^{-1}}\psi^{-1}_{(gh)^{-1}})(x)\\ &=u^{-1}_{g^{-1}}(\lambda_{g^{-1}}u^{-1}_{h^{-1}}\lambda^{-1}_{g^{-1}})(\lambda_{(gh)^{-1}}\psi^{-1}_{(gh)^{-1}})(x)\\ &=u^{-1}_{g^{-1}}(\lambda_{g^{-1}}u^{-1}_{h^{-1}}\lambda^{-1}_{g^{-1}})(u_{(gh)^{-1}}x)\\ &=u^{-1}_{g^{-1}}\lambda_{g^{-1}}[u^{-1}_{h^{-1}}\alpha_{g^{-1}}(u_{(gh)^{-1}}1_{g})\lambda^{-1}_{g^{-1}}(x)]\\ &=u^{-1}_{g^{-1}}\alpha_g(u^{-1}_{h^{-1}}1_{g^{-1}})u_{(gh)^{-1}}x\\ &=v_{g}\alpha_g(v_{h}1_{g^{-1}})v^{-1}_{(gh)}x, \end{align*}
where $v_{g}=u^{-1}_{g^{-1}}.$ Since the map $v\colon G\ni g\mapsto v_g\in R$ belongs to $C^1(G,\alpha,R),$ this shows that ${\rm cls}(\omega)={\rm cls}(\tilde\omega)$ in $H^2(G,\alpha,R),$ as desired.

Let $\varphi_3\colon {\bf PicS}(R)^{\alpha^*}\cap {\bf Pic}(R)\ni [E]\mapsto {\rm cls}(\omega)\in H^2(G,\alpha, R).$

\begin{claim} $\varphi_3$ does not depend on the choice of the representative of $[E],$ for any $[E]\in{\bf PicS}(R)^{\alpha^*}\cap {\bf Pic}(R).$ \end{claim}

Let $[E]=[F]\in {\bf PicS}(R)^{\alpha^*}\cap {\bf Pic}(R),$ and let $\{\psi_g\,|\, g\in G\},\,\{\lambda_g\,|\, g\in G\}$ be families of $R$-isomorphisms $ED_g\to (ED_{g^{-1}})_g,$ $FD_g\to (FD_{g^{-1}})_g$ inducing the $(2,\alpha)$-cocycles $\omega,\tilde{\omega},$ respectively. Let also $\Omega\colon E\to F$ be an $R$-module isomorphism. Then $\Omega(ED_g)=\Omega(E)D_g=FD_g$ and one obtains the $R$-module isomorphisms $\Omega|_{ED_g}\colon ED_g\to FD_g$ and $\Omega|_{ED_{g^{-1}}}\colon (ED_{g^{-1}})_g\to (FD_{g^{-1}})_g$ (thanks to (ii) of Lemma \ref{gaction}), for any $g\in G.$ Thus, the family $\{\Omega\psi_g\Omega^{-1}|_{FD_g}\colon FD_g\to (FD_{g^{-1}})_g\,|\, g\in G\}$ induces a $(2,\alpha)$-cocycle which is cohomologous to $\tilde{\omega},$ in view of Claim~\ref{IsoChoice}, and we may suppose that $\Omega\psi_g\Omega^{-1}=\lambda_g$ on $FD_g,\,g\in G.$ Hence, given $x\in FD_gD_{gh}$ we have \begin{align*}\tilde{\omega}_{g,h}x=\lambda_{(gh)^{-1}}\lambda^{-1}_{h^{-1}}\lambda^{-1}_{g^{-1}}(x)= \Omega\psi_{(gh)^{-1}}\psi^{-1}_{h^{-1}}\psi^{-1}_{g^{-1}}\Omega^{-1}(x)=\Omega(\omega_{g,h}\Omega^{-1}(x))=\omega_{g,h}x, \end{align*} which implies $\tilde{\omega}_{g,h}={\omega}_{g,h},$ because $[FD_gD_{gh}]\in {\bf Pic}(D_gD_{gh})$, so that $FD_gD_{gh}$ is a faithful $D_gD_{gh}$-module. We conclude that in general ${\rm cls}(\omega)={\rm cls}(\tilde\omega)$ and $\varphi_3$ is well defined.

\begin{claim} $\varphi_3$ is a group homomorphism.\end{claim}

Let $[E],[F]\in {\bf PicS}(R)^{\alpha^*}\cap{\bf Pic}(R)$ with $\varphi_3([E])={\rm cls}(\omega)$ and $\varphi_3([F])={\rm cls}(\omega').$ Consider families of $R$-module isomorphisms $\{\phi_g\colon ED_g\to (ED_{g^{-1}})_g\}_{g\in G}$ and $\{\lambda_g\colon FD_g\to (FD_{g^{-1}})_g\}_{g\in G}$ defining ${\rm cls}(\omega)$ and ${\rm cls}(\omega')$, respectively. Notice that $ED_g\otimes FD_g=(E\otimes F)D_g,$ for all $g\in G,$ and by (ii) and (iv) of Lemma \ref{gaction}, $(ED_{g^{-1}})_g\otimes (FD_{g^{-1}})_g\cong ((E\otimes F)D_{g^{-1}})_g,\, g\in G,$ via the map $\iota_g$ defined in \eqref{iotag}.
Then $$\iota_g\circ (\phi_g\otimes \lambda_g)\colon (E\otimes F)D_g\to ((E\otimes F)D_{g^{-1}})_g,\quad g\in G,$$ is an $R$-module isomorphism which induces an element $u\in Z^2(G,\alpha, R),$ where \begin{align*} u_{g,h}\, x\otimes y &=[\iota_{(gh)^{-1}}\!\circ\! (\phi_{(gh)^{-1}}\otimes \lambda_{(gh)^{-1}})][\iota_{h^{-1}}\circ(\phi_{h^{-1}}\otimes \lambda_{h^{-1}})]^{-1}[\iota_{g^{-1}}\!\circ (\phi_{g^{-1}}\otimes \lambda_{g^{-1}})]^{-1}(x\otimes y)\\ &=(\phi_{(gh)^{-1}}\phi^{-1}_{h^{-1}}\phi^{-1}_{g^{-1}}\otimes \lambda_{(gh)^{-1}}\lambda^{-1}_{h^{-1}}\lambda^{-1}_{g^{-1}})(x\otimes y)\\ &=(\omega_{g,h}\otimes \omega'_{g,h})(x\otimes y)=(\omega_{g,h}\omega'_{g,h})(x\otimes y), \end{align*} for all $x\in ED_gD_{gh},\, y\in FD_gD_{gh},\, g,h\in G.$ Then $u=\omega\omega'$ and $\varphi_3$ is a group homomorphism.

\section{The sequence $H^2(G,\alpha,R)\stackrel{\varphi_4}\to B(R/R^\alpha)\stackrel{\varphi_5}\to H^1(G,\alpha^*,{\bf PicS}(R))$}\label{phi4phi5}

We start this section by giving some preliminary results that help us to construct the homomorphism $\varphi_4$. First of all we recall from \cite{DES} that the partial crossed product $R\star_{\alpha,\omega}G$ for the unital twisted partial action $(\alpha,\omega)$ of $G$ on $R$ is the direct sum $\bigoplus_{g\in G}D_g\delta_g$, in which the $\delta_g$'s are symbols, with the multiplication defined by the rule $$(r_g\delta_g)\cdot(r'_h\delta_h) = r_g\alpha_g(r'_h1_{g^{-1}})\omega_{g,h}\delta_{gh},$$ for all $g,h\in G$, $r_g\in D_g$ and $r'_h\in D_h$. If, in particular, the twisting $\omega$ is trivial, then we recover the partial skew group ring $R\star_\alpha G$ as given in Subsection 2.3.

\begin{prop}\label{iso} If $\omega, \tilde{\omega}\in Z^2(G,\alpha, R)$ are cohomologous, there is an isomorphism of $R^\alpha$-algebras and $R$-modules $R\star_{\alpha,\omega} G\cong R\star_{\alpha,\tilde{\omega}} G.$ \end{prop}

{\bf Proof. } There exists $u\in C^1(G,\alpha, R),$ $u\colon G\ni g\mapsto u_g\in {\mathcal U}(D_g)\subseteq R,$ such that $\omega_{g,h}=\tilde{\omega}_{g,h}\,u_g\alpha_g(u_h1_{g^{-1}})u^{-1}_{gh},$ for all $g,h\in G.$ Take $a_g\in D_g,$ set $\varphi(a_g\delta_g)=a_gu_g\delta_g$ and extend $\varphi$ to $\varphi\colon R\star_{\alpha,\omega} G\to R\star_{\alpha,\tilde{\omega}} G$ by $R$-linearity. Clearly $\varphi$ is bijective with inverse $a_g\delta_g\mapsto a_gu^{-1}_g\delta_g,$ so we only need to prove that $\varphi$ preserves products. We have \begin{align*} \varphi((a_g\delta_g)(b_h\delta_h))&=\varphi(a_g\alpha_g(b_h1_{g^{-1}})\omega_{g,h}\delta_{gh})= a_g\alpha_g(b_h1_{g^{-1}})\omega_{g,h}u_{gh}\delta_{gh}\\ &=a_g\alpha_g(b_h1_{g^{-1}})\tilde{\omega}_{g,h}u_g\alpha_g(u_h1_{g^{-1}})\delta_{gh}=(a_gu_g\delta_{g})(b_hu_h\delta_{h}) =\varphi(a_g\delta_g)\varphi(b_h\delta_h). \end{align*} ${\blacksquare}$
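As an illustration of how the twisting enters products in $R\star_{\alpha,\omega}G$ (recorded here only for the reader's convenience), note that for all $g,h\in G$ one has $$(1_g\delta_g)(1_h\delta_h)=1_g\alpha_g(1_h1_{g^{-1}})\omega_{g,h}\delta_{gh}=1_g1_{gh}\,\omega_{g,h}\delta_{gh}=\omega_{g,h}\delta_{gh},$$ since $\alpha_g(1_h1_{g^{-1}})=1_g1_{gh}$ and $\omega_{g,h}\in{\mathcal U}(D_gD_{gh}).$ Manipulations of this kind appear repeatedly in the computations below.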
We also give the next.

\begin{prop}\label{rpiso} If $\varphi\colon R\star_{\alpha,\omega} G\to R\star_{\alpha} G$ is an isomorphism of $R^\alpha$-algebras and $R$-modules, then $\omega$ is cohomologous to $1_{\alpha}=\{1_g1_{gh}\}_{g,h\in G}.$ \end{prop}

{\bf Proof. } To avoid confusion, we write $R\star_{\alpha,\omega} G=\bigoplus\limits_{g\in G}D_g\delta_g,$ $R\star_{\alpha} G=\bigoplus\limits_{g\in G}D_g\delta'_g$ and identify $R=R\delta_1=R\delta'_1.$ For $r\in R$ we have $\varphi(r\delta_1)=r\delta'_1\in R$ and $$(1_g\delta_g)r(\omega^{-1}_{g^{-1},g}\delta_{g^{-1}}) =\alpha_g(r1_{g^{-1}})\alpha_g(\omega^{-1}_{g^{-1},g})\omega_{g,g^{-1}}\delta_{1} \stackrel{(\ref{afom})}{=}\alpha_g(r1_{g^{-1}})\delta_1.$$ Hence, $\varphi(1_g\delta_g)r\varphi(\omega^{-1}_{g^{-1},g}\delta_{g^{-1}})=\varphi(\alpha_g(r1_{g^{-1}})\delta_1)=\alpha_g(r1_{g^{-1}})\delta'_1=(1_g\delta'_g)r(1_{g^{-1}}\delta'_{g^{-1}}).$ From this we obtain $$(1_{g^{-1}}\delta'_{g^{-1}})\varphi(1_g\delta_g)r\varphi(\omega^{-1}_{g^{-1},g}\delta_{g^{-1}})=(1_{g^{-1}}\delta'_{g^{-1}})(1_g\delta'_g)r(1_{g^{-1}}\delta'_{g^{-1}})=r(1_{g^{-1}}\delta'_{g^{-1}}),$$ and, multiplying both sides of the equality on the right by $\varphi(1_g\delta_g),$ we get \begin{align*} r[(1_{g^{-1}}\delta'_{g^{-1}})\varphi(1_g\delta_g)]&=[(1_{g^{-1}}\delta'_{g^{-1}})\varphi(1_g\delta_g)r\varphi(\omega^{-1}_{g^{-1},g}\delta_{g^{-1}})]\varphi(1_g\delta_g)\\ &=(1_{g^{-1}}\delta'_{g^{-1}})\varphi(1_g\delta_g)r\varphi[(\omega^{-1}_{g^{-1},g}\delta_{g^{-1}})(1_g\delta_g)]\\ &=(1_{g^{-1}}\delta'_{g^{-1}})\varphi(1_g\delta_g)r\varphi(1_{g^{-1}}\delta_1)\\ &=(1_{g^{-1}}\delta'_{g^{-1}})\varphi(1_g\delta_g)\varphi(1_{g^{-1}}\delta_1)r\\ &=(1_{g^{-1}}\delta'_{g^{-1}})\varphi((1_g\delta_g)(1_{g^{-1}}\delta_1))r\\ &=(1_{g^{-1}}\delta'_{g^{-1}})\varphi(1_g\delta_g)r, \end{align*} so $(1_{g^{-1}}\delta'_{g^{-1}})\varphi(1_g\delta_g)\in C_{R\star_{\alpha}G}(R).$ Since $R$ is commutative and $R\supseteq R^\alpha$ is an $\alpha$-partial Galois extension, \cite[Lemma 2.1(vi) and Proposition 3.2]{PS} imply that $R\star_{\alpha,\tilde\omega} G$ is $R^\alpha$-Azumaya and $C_{R\star_{\alpha,\tilde\omega} G}(R)=R$ for arbitrary $\tilde{\omega};$ in particular, this holds for $R\star_{\alpha} G.$ Thus, $$(1_{g^{-1}}\delta'_{g^{-1}})\varphi(1_g\delta_g)=r_g,\ \,\text{for some}\ \,r_g\in R,$$ and, multiplying both sides of the last equality on the left by $1_{g}\delta'_{g},$ we obtain $\varphi(1_g\delta_g)=u_g\delta'_g,$ where $u_g=\alpha_g(r_g1_{g^{-1}})\in D_g,$ for every $g\in G.$ On the other hand there exists $W=\sum\limits_{h\in G} a_h\delta_h$ such that $1_g\delta'_g=\varphi(W)=\sum\limits_{h\in G}a_hu_h\delta'_h,$ then $1_g\delta'_g=a_gu_g\delta'_g$ and $u_g\in {\mathcal U}(D_g).$ Set $u\colon G\ni g\mapsto u_g\in {\mathcal U}(D_g)\subseteq R;$ then $u\in C^1(G,\alpha, R)$ and $$\omega_{g,h}u_{gh}\delta'_{gh}=\varphi(\omega_{g,h}\delta_{gh})=\varphi(1_g\delta_g)\varphi(1_h\delta_h)=(u_g\delta'_g)(u_h\delta'_h)=u_g\alpha_g(u_h1_{g^{-1}})\delta'_{gh}.$$ From this we conclude that $\omega_{g,h}u_{gh}=u_g\alpha_g(u_h1_{g^{-1}}),$ hence $\omega$ is cohomologous to $1_{\alpha}.$ ${\blacksquare}$
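Taken together, Propositions \ref{iso} and \ref{rpiso} show that $R\star_{\alpha,\omega}G\cong R\star_{\alpha}G$ as $R^\alpha$-algebras and $R$-modules if and only if $\omega\in B^2(G,\alpha,R).$ We record this consequence only for orientation; it will not be invoked explicitly below.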
\begin{prop}\label{pb} Let $\omega\in Z^2(G,\alpha, R).$ Then $R\star_{\alpha,\omega} G$ is an Azumaya $R^\alpha$-algebra and $[R\star_{\alpha,\omega} G]\in B(R/R^\alpha).$ \end{prop}

{\bf Proof. } As mentioned above, the facts that $R$ is commutative and $R\supseteq R^\alpha$ is an $\alpha$-partial Galois extension imply that $R\star_{\alpha,\omega} G$ is $R^\alpha$-Azumaya and $C_{R\star_{\alpha,\omega} G}(R)=R.$ The latter implies that $R$ is a maximal commutative $R^\alpha$-subalgebra of $R\star_{\alpha,\omega} G.$ On the other hand, by \cite[Theorem 4.2]{DFP} the extension $R\supseteq R^\alpha$ is separable, and finally \cite[Theorem 5.6]{AG} tells us that $R\star_{\alpha,\omega} G$ is split by $R,$ which means that $[R\star_{\alpha,\omega} G]\in B(R/R^\alpha).$ ${\blacksquare}$

It follows from Propositions \ref{iso} and \ref{pb} that there is a well defined function \begin{equation}\label{fun} \varphi_4\colon H^2(G,\alpha, R)\ni {\rm cls}(\omega)\mapsto [R\star_{\alpha,\omega} G]\in B(R/R^\alpha). \end{equation}

\subsection{$\varphi_4$ is a homomorphism}

In this subsection we follow the ideas of \cite[Chapter IV]{DI} to prove that the map $\varphi_4$ defined in (\ref{fun}) is a group homomorphism. For this we need a series of lemmas.

\begin{lema}\label{iso1} Let $\omega\in Z^2(G,\alpha,R).$ Then there exists an $R^\alpha$-algebra isomorphism $(R\star_{\alpha,\omega}G)^{op}\cong R\star_{\alpha,\omega^{-1}} G.$ \end{lema}

{\bf Proof. } Define $\phi\colon (R\star_{\alpha,\omega}G)^{op}\to R\star_{\alpha,\omega^{-1}} G$ by $\phi(r_g\delta_g)=\alpha_{g^{-1}}(r_g\omega_{g,g^{-1}})\delta_{g^{-1}},$ for all $g\in G,\, r_g\in D_g.$ Note that $\phi$ is an $R^\alpha$-module isomorphism with inverse $R\star_{\alpha,\omega^{-1}} G\ni r_{g^{-1}}\delta_{g^{-1}}\mapsto \alpha_g(r_{g^{-1}}\omega^{-1}_{g^{-1},g})\delta_g\in (R\star_{\alpha,\omega}G)^{op}$. For $g,h\in G,\,r_g\in D_g$ and $t_h\in D_h$ we have \begin{align*} \phi[(r_g\delta_g)\circ(t_h\delta_h)]&=\phi(t_h\alpha_h(r_g1_{h^{-1}})\omega_{h,g}\delta_{hg})\\ &=\alpha_{g^{-1}h^{-1}}(t_h\alpha_h(r_g1_{h^{-1}})\omega_{h,g}\omega_{hg,g^{-1}h^{-1}})\delta_{g^{-1}h^{-1}}\\ &=\alpha_{g^{-1}}[\alpha_{h^{-1}}(t_h\alpha_h(r_g1_{h^{-1}})\omega_{h,g}\omega_{hg,g^{-1}h^{-1}})]\delta_{g^{-1}h^{-1}}\\ &\stackrel{\rm (v)}{=}\alpha_{g^{-1}}[\alpha_{h^{-1}}(t_h\alpha_h(r_g1_{h^{-1}})\alpha_h(\omega_{g,g^{-1}h^{-1}})\omega_{h,h^{-1}})]\delta_{g^{-1}h^{-1}}\\ &\stackrel{(\ref{afom})}{=}\alpha_{g^{-1}}(\alpha_{h^{-1}}(t_h)r_g\omega_{g,g^{-1}h^{-1}}\omega_{h^{-1},h})\delta_{g^{-1}h^{-1}}. \end{align*}
On the other hand, \begin{align*} \phi(r_g\delta_g)\phi(t_h\delta_h)&=(\alpha_{g^{-1}}(r_g\omega_{g,g^{-1}})\delta_{g^{-1}})(\alpha_{h^{-1}}(t_h\omega_{h,h^{-1}})\delta_{h^{-1}})\\ &=(\alpha_{g^{-1}}(r_g)\omega_{g^{-1},g})(\alpha_{g^{-1}}(\alpha_{h^{-1}}(t_h)\omega_{h^{-1},h})\omega^{-1}_{g^{-1},h^{-1}}\delta_{g^{-1}h^{-1}})\\ &=\alpha_{g^{-1}}(r_g\alpha_{h^{-1}}(t_h)\omega_{h^{-1},h})\omega_{g^{-1},g}\omega^{-1}_{g^{-1},h^{-1}}\delta_{g^{-1}h^{-1}}\\ &\stackrel{\rm (v)}{=}\alpha_{g^{-1}}(r_g\alpha_{h^{-1}}(t_h)\omega_{h^{-1},h})\alpha_{g^{-1}}(\omega_{g,g^{-1}h^{-1}})\delta_{g^{-1}h^{-1}}\\ &=\alpha_{g^{-1}}(r_g\alpha_{h^{-1}}(t_h)\omega_{h^{-1},h}\omega_{g,g^{-1}h^{-1}})\delta_{g^{-1}h^{-1}}, \end{align*} and we conclude that $\phi$ is multiplicative. ${\blacksquare}$

From now on, in order to simplify notation, we will denote $R^e=R\otimes_{R^\alpha}R$.

\begin{lema}\label{idem} There is a family of orthogonal idempotents $e_g\in R^e,$ $g\in G,$ satisfying the following properties: \begin{equation}\label{i} (1\otimes_{R^\alpha}\alpha_g(x1_{g^{-1}}))e_g=(x\otimes_{R^\alpha}1)e_g,\,\,\text{for all}\,\,x\in R,\end{equation} \begin{equation}\label{ii} \sum\limits_{g\in G}e_g=1_{R^e}.\end{equation} \end{lema}

{\bf Proof. } By (iv) of \cite[Theorem 4.1]{DFP} the map $\psi\colon R^e\to \prod_{g\in G}D_g,$ given by $\psi(x\otimes_{R^\alpha} y)=(x\alpha_g(y1_{g^{-1}}))_{g\in G},$ is an isomorphism of $R$-algebras. Take $v_g=(x_h)_{h\in G}\in \prod_{h\in G}D_h,\, g\in G,$ where $x_h=\delta_{h,g}1_g.$ Then the set $\{e_g=\psi^{-1}(v_{g^{-1}})\,|\, g\in G\}$ is a family of orthogonal idempotents in $R^e.$ Using the isomorphism $\psi$ we check \eqref{i} and \eqref{ii}. ${\blacksquare}$

\begin{lema}\label{iso2} For any $\omega\in Z^2(G,\alpha, R)$ there is an $R$-module isomorphism $R\star_{\alpha,\omega} G\cong R^e.$ \end{lema}

{\bf Proof. } Let $\{e_g\,|\, g\in G\}$ be the family of pairwise orthogonal idempotents constructed in Lemma \ref{idem}, and let $\eta\colon R\star_{\alpha,\omega} G\to R^e$ be defined by $\eta(\sum_{g\in G}r_g\delta_g)=\sum_{g\in G}(r_g\otimes_{R^\alpha}1)e_{g^{-1}}.$ Clearly $\eta$ is $R$-linear and we only need to check that $\eta$ is an isomorphism. If $\sum_{g\in G}(r_g\otimes_{R^\alpha}1)e_{g^{-1}}=0,$ then $(r_h\otimes_{R^\alpha}1)e_{h^{-1}}=0,$ for any $h\in G.$ Applying the isomorphism $\psi$ from the proof of Lemma \ref{idem} we conclude that $r_h=0$ and $\eta$ is injective. Now we prove the surjectivity. By applying $\psi$ we get $(1\otimes_{R^\alpha} r1_{g^{-1}})e_{g^{-1}}=(1\otimes_{R^\alpha}r)e_{g^{-1}},$ $r\in R,\, g\in G.$ Then for any $r,s\in R,$ we obtain \begin{align*}r\otimes_{R^\alpha}s&\stackrel{\eqref{ii}}{=}\sum_{g\in G}(r\otimes_{R^\alpha}s)e_{g^{-1}}=\sum_{g\in G}(r\otimes_{R^\alpha}s1_{g^{-1}})e_{g^{-1}}\\ &\stackrel{\eqref{i}}{=}\sum_{g\in G}(r\alpha_g(s1_{g^{-1}})\otimes_{R^\alpha}1)e_{g^{-1}}=\eta\left(\sum_{g\in G}r\alpha_g(s1_{g^{-1}})\delta_g\right). \end{align*} ${\blacksquare}$
\medskip

By Lemma \ref{iso2} the map $\bar\eta\colon {\rm End}_{R^\alpha}(R\star_{\alpha,\omega}G)\ni f\mapsto \eta f\eta^{-1}\in {\rm End}_{R^\alpha}(R^e)$ is an $R^\alpha$-algebra isomorphism.

Now we prove that the tensor product of partial Galois extensions is also a partial Galois extension. More precisely, we have the following.

\begin{prop}\label{tensorgal} Let $G$ and $H$ be finite groups and $\alpha=(D_g, \alpha_g)_{g\in G},\,\theta=(I_h, \theta_h)_{h\in H}$ (unital) partial actions of $G$ and $H$ on commutative rings $R_1$ and $R_2,$ respectively. Assume that $R_1^\alpha=R_2^\theta=k$ and suppose that the ring extensions $R_1\supseteq k$ and $R_2\supseteq k$ are $\alpha$- and $\theta$-partial Galois, respectively. Then $R_1\otimes_k R_2\supseteq k\otimes_k k=k$ is an $(\alpha\otimes_k\theta)$-partial Galois extension. \end{prop}

{\bf Proof. } In this proof unadorned $\otimes$ means $\otimes_k.$ Note that $\alpha\otimes\theta=(D_g\otimes I_h, \alpha_g\otimes\theta_h)_{(g,h)\in G\times H}$ is a partial action of $G\times H$ on the ring $R_1\otimes R_2.$ Consider $x=\sum_{i}u_i\otimes v_i\in k\otimes k= R_1^\alpha\otimes R_2^\theta.$ Then for all $(g,h)\in G\times H$ we have $(\alpha_g\otimes\theta_h)(x(1_{g^{-1}}\otimes 1_{h^{-1}}))=\sum_{i}u_i1_{g}\otimes v_i1_{h}=x(1_g\otimes 1_h),$ and $(R_1\otimes R_2)^{\alpha\otimes\theta}\supseteq R_1^\alpha\otimes R_2^\theta.$ Conversely, take $c_1\in R_1$ and $c_2\in R_2$ such that ${\rm tr}_{R_1/R_1^\alpha}(c_1)=1_{k}={\rm tr}_{R_2/R_2^\theta}(c_2).$ Now let $x\in (R_1\otimes R_2)^{\alpha\otimes\theta}$ and write $x(c_1\otimes c_2)=\sum\limits_{i=1}^{m}t_i\otimes s_i.$ Then,
\[ \begin{array}{ccl} x&=&x(1_{R_1}\otimes 1_{R_2})=\sum_{(g,h)\in G\times H}x(\alpha_g(1_{g^{-1}}c_1)\otimes \theta_h(1_{h^{-1}}c_2))\\ &=& \sum_{(g,h)\in G\times H}x(1_g\otimes 1_h)(\alpha_g\otimes\theta_h)(1_{g^{-1}}c_1\otimes 1_{h^{-1}}c_2)\\ &=&\sum_{(g,h)\in G\times H}(\alpha_g\otimes\theta_h)(x(c_1\otimes c_2)(1_{g^{-1}}\otimes 1_{h^{-1}}))\\ &=&\sum_{i=1}^{m}\sum_{(g,h)\in G\times H}(\alpha_g\otimes\theta_h)(t_i1_{g^{-1}}\otimes s_i1_{h^{-1}})\\ &=&\sum_{i=1}^{m}\sum_{(g,h)\in G\times H}\alpha_g(t_i1_{g^{-1}})\otimes\theta_h(s_i1_{h^{-1}})\\ &=&\sum_{i=1}^{m}{\rm tr}_{R_1/R_1^\alpha}(t_i)\otimes{\rm tr}_{R_2/R_2^\theta}(s_i)\in R_1^\alpha\otimes R_2^\theta. \end{array} \]
Thus $(R_1\otimes R_2)^{\alpha\otimes\theta}=R_1^\alpha\otimes R_2^\theta=k.$ Finally, let $m,n\in {\mathbb N}$ and let $\{x_i, y_i\,|\, 1\le i\le m\}$ and $\{u_j, z_j\,|\, 1\le j\le n\}$ be the $\alpha$- and $\theta$-partial Galois coordinate systems for the extensions $R_1\supseteq k$ and $R_2\supseteq k,$ respectively. Then for all $(g,h)\in G\times H$ we see that \begin{align*} &\sum_{i,j}(x_i\otimes u_j)(\alpha_g\otimes\theta_h)((y_i\otimes z_j)(1_{g^{-1}}\otimes 1_{h^{-1}}))=\sum_{i,j}(x_i\otimes u_j)(\alpha_g(y_i1_{g^{-1}})\otimes\theta_h(z_j1_{h^{-1}}))\\ &=\sum_{i}x_i\alpha_g(y_i1_{g^{-1}})\otimes \sum_ju_j\theta_h(z_j1_{h^{-1}})=\delta_{1,g}\otimes \delta_{1,h}=\delta_{(1,1),(g,h)}. \end{align*} We conclude that the set $\{x_i\otimes u_j,\, y_i\otimes z_j\,|\, 1\le i\le m,\, 1\le j\le n\}$ is an $\alpha\otimes\theta$-partial Galois coordinate system for the extension $R_1\otimes R_2\supseteq k\otimes k.$ ${\blacksquare}$
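We point out, purely as an illustration, that when both partial actions are global, Proposition \ref{tensorgal} recovers the classical fact that the tensor product over $k$ of a $G$-Galois extension and an $H$-Galois extension of commutative rings is a $(G\times H)$-Galois extension of $k.$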
With the same hypotheses and notation as in Proposition \ref{tensorgal}, we have the following.

\begin{prop}\label{isotens} There is an isomorphism of $k$-algebras $$(R_1\star_{\alpha,\omega} G)\otimes_k (R_2\star_{\theta,\tilde{\omega}} H)\cong (R_1\otimes_{k}R_2)\star_{\alpha\otimes\theta,\,\omega\otimes\tilde{\omega}}(G\times H).$$ \end{prop}

{\bf Proof. } Here again unadorned $\otimes$ means $\otimes_k.$ We denote $$S=(R_1\otimes R_2)\star_{\alpha\otimes\theta,\,\omega\otimes\tilde{\omega}}(G\times H)=\bigoplus\limits_{(g,h)\in G\times H}(D_g\otimes I_h)\epsilon_{(g,h)},$$ $S_1=R_1\star_{\alpha,\omega} G=\bigoplus\limits_{g\in G}D_g\delta_g$ and $S_2=R_2\star_{\theta,\tilde{\omega}} H=\bigoplus\limits_{h\in H}I_h\delta'_h.$ For $(g,h)\in G\times H,$ the map $$D_g\delta_g\times I_h\delta'_h\ni (a_g\delta_g,b_h\delta'_h)\mapsto (a_g\otimes b_h)\epsilon_{(g,h)}\in S,$$ extended by $k$-linearity to $S_1\times S_2,$ is clearly a bilinear $k$-balanced map. Hence, it induces a bijective $k$-linear map $\xi\colon S_1\otimes S_2\to S$ such that $a_g\delta_g\otimes b_h\delta'_h\mapsto (a_g\otimes b_h)\epsilon_{(g,h)}.$ The fact that $\xi$ preserves products is straightforward. ${\blacksquare}$

\begin{prop}\label{2co} Given $\omega\in Z^2(G,\alpha, R),$ we have $\omega\otimes_{R^\alpha}\omega^{-1}\in B^2(G\times G,\alpha\otimes_{R^\alpha}\alpha, R^e).$ Thus if $\tilde\omega\in Z^2(G,\alpha, R),$ then $\omega\otimes_{R^\alpha}\tilde\omega$ is cohomologous to $\omega\tilde\omega\otimes_{R^\alpha}1_{\alpha}.$ \end{prop}

{\bf Proof. } By Proposition \ref{tensorgal}, $R^e$ is an $\alpha\otimes_{R^\alpha}\alpha$-partial Galois extension of $R^\alpha$ with respect to the partial action of $G\times G.$ Then, using the isomorphisms appearing in Proposition \ref{isotens}, Lemmas \ref{iso1} and \ref{iso2}, \cite[Theorem 2.1 (c)]{AG} and (iv) of \cite[Theorem 4.1]{DFP}, we obtain a chain of $R^\alpha$-algebra isomorphisms
\begin{align*} &(R^e)\star_{\alpha\otimes_{R^\alpha}\alpha,\, \omega\otimes_{R^\alpha}\omega^{-1}}(G\times G)\to(R\star_{\alpha,\omega} G)\otimes_{R^\alpha}(R\star_{\alpha,\omega^{-1}} G) \stackrel{{\rm id}\otimes_{R^\alpha}\phi^{-1}}{\to}\\ &(R\star_{\alpha,\omega} G)\otimes_{R^\alpha}(R\star_{\alpha,\omega} G)^{\rm op}\stackrel{\Gamma}{\to}\, {\rm End}_{R^\alpha}(R\star_{\alpha,\omega} G)\stackrel{\bar\eta}{\to}\,{\rm End}_{R^\alpha}(R^e)\stackrel{j^{-1}}{\to}\\ &\,(R^e)\star_{\alpha\otimes\alpha}(G\times G), \end{align*} where $\Gamma$ is given by $\Gamma(x\otimes y)\colon z\mapsto xzy$, for all $x,y,z\in R\star_{\alpha,\omega} G$, and $j$ is given by \eqref{jota}. By Proposition \ref{rpiso} one only needs to show that the above composition restricted to $R^e$ is the identity. We have \begin{align*} (r_1\otimes_{R^\alpha} t_1)\delta_{(1,1)}&\mapsto r_1\delta_1\otimes_{R^\alpha} t_1\delta_1\mapsto y=r_1\delta_1\otimes_{R^\alpha} t_1\delta_{1}\mapsto \Gamma(y)\mapsto j^{-1}(\eta\Gamma(y)\eta^{-1}). \end{align*} For a fixed $a\in G$ we compute the image of $\eta\Gamma(y)\eta^{-1}$ on $(r_a\otimes_{R^\alpha}1)e_{a^{-1}}\in R^e,\, r_a\in D_a.$ We see that $\eta\Gamma(y)\eta^{-1}((r_a\otimes_{R^\alpha}1)e_{a^{-1}})$ \begin{align*} =&\,\eta\Gamma(y)(r_a\delta_a)=\eta((r_1\delta_1)(r_a\delta_a)(t_1\delta_{1}))=\eta(r_1r_a\alpha_{a}(t_11_{a^{-1}})\delta_{a})\\ =&\,(r_1r_a\alpha_{a}(t_11_{a^{-1}})\otimes_{R^\alpha}1)e_{a^{-1}} =(r_1r_a\otimes_{R^\alpha}1)(\alpha_{a}(t_11_{a^{-1}})\otimes_{R^\alpha}1)e_{a^{-1}}\\ \stackrel{\eqref{i}}{=}&\,(r_1r_a\otimes_{R^\alpha}1)(1\otimes_{R^\alpha} t_11_{a^{-1}})e_{a^{-1}}= (r_1r_a\otimes_{R^\alpha} t_1)(1\otimes_{R^\alpha}\alpha_{a^{-1}}(1_a))e_{a^{-1}}\\ \stackrel{\eqref{i}}{=}&\,(r_1\otimes_{R^\alpha} t_1)(r_a\otimes_{R^\alpha}1)e_{a^{-1}}=j((r_1\otimes_{R^\alpha} t_1)\delta_{(1,1)})[(r_a\otimes_{R^\alpha}1)e_{a^{-1}}]. \end{align*} Then $\eta\Gamma(y)\eta^{-1}(x)=j((r_1\otimes_{R^\alpha} t_1)\delta_{(1,1)})x,$ for all $x\in R^e,$ and we conclude that $$j^{-1}(\eta\Gamma(y)\eta^{-1})=(r_1\otimes_{R^\alpha} t_1)\delta_{(1,1)}.$$ Hence, the composition is $R^e$-linear and $\omega\otimes_{R^\alpha}\omega^{-1}\in B^2(G\times G,\alpha\otimes_{R^\alpha}\alpha, R^e).$ Finally, for any $\tilde\omega\in Z^2(G,\alpha, R),$ we have $(\omega\otimes_{R^\alpha}\tilde\omega)(\tilde\omega\otimes_{R^\alpha}\tilde\omega^{-1})=\omega\tilde\omega\otimes_{R^\alpha}1_\alpha,$ and this yields that $\omega\otimes_{R^\alpha}\tilde\omega$ is cohomologous to $\omega\tilde\omega\otimes_{R^\alpha}1_\alpha.$ ${\blacksquare}$

\begin{teo}\label{homo} Let $R$ be an $\alpha$-partial Galois extension of $R^\alpha.$ Then the map $\varphi_4\colon H^2(G,\alpha, R)\ni {\rm cls}(\omega)\mapsto [R\star_{\alpha,\omega} G]\in B(R/R^\alpha)$ is a group homomorphism. \end{teo}

{\bf Proof. } In this proof unadorned $\otimes$ will mean $\otimes_{R^\alpha}.$ Let ${\rm cls}(\omega), {\rm cls}(\tilde\omega)\in H^2(G,\alpha, R).$
By Propositions \ref{isotens}, \ref{iso} and \ref{2co} we have \begin{align*} [R\star_{\alpha,\omega} G][R\star_{\alpha,\tilde\omega} G]&=[(R\otimes R)\star_{\alpha\otimes\alpha,\,\omega\otimes\tilde\omega}(G\times G)]=[(R\otimes R)\star_{\alpha\otimes\alpha,\,\omega\tilde\omega\otimes 1_{\alpha}}(G\times G)]\\ &=[R\star_{\alpha,\omega\tilde\omega} G][R\star_{\alpha,1_{\alpha}} G]=[R\star_{\alpha,\omega\tilde\omega} G][{\rm End}_{R^\alpha}(R)]=[R\star_{\alpha,\omega\tilde\omega} G], \end{align*} which gives $[R\star_{\alpha,\omega} G][R\star_{\alpha,\tilde\omega} G]=[R\star_{\alpha,\omega\tilde\omega} G],$ and the assertion follows. ${\blacksquare}$

\subsection{The construction of $B(R/R^\alpha)\stackrel{\varphi_5}\to H^1(G,\alpha^*,{\bf PicS}(R))$}

We recall that unadorned $\otimes$ stands for $\otimes_R.$ Let $\alpha^*=(X_g,\alpha^*_g)_{g\in G}$ be the partial action of $G$ on ${\bf PicS}(R)$ constructed in Theorem \ref{psem}. Since ${\mathcal U}({\bf PicS}(R))={\bf Pic}(R),$ we have that $B^1(G,\alpha^*, {\bf PicS}(R))$ is the group $$\{f\in C^1(G,\alpha^*,{\bf PicS}(R))\mid f(g)=\alpha^*_g([P][D_{g^{-1}}])[P^{*}], \ \text{for some}\ [P]\in {\bf Pic}(R)\},$$ and $Z^1(G,\alpha^*, {\bf PicS}(R))$ is given by \begin{align*} \{f\in C^1(G,\alpha^*,{\bf PicS}(R))\mid f(gh)[D_g]=f(g)\,\alpha^*_g(f(h)[D_{g^{-1}}]),\ \forall\, g,h\in G\}.\end{align*}

\begin{remark}\label{trivi} Let $f\in C^1(G,\alpha^*,{\bf PicS}(R)),\, g\in G$ and ${\mathfrak p}\in {\rm Spec}(R).$ We shall make a little abuse of notation by writing $f(g)_{\mathfrak p}$ for a representative of the class $f(g)$ localized at ${\mathfrak p}.$ \end{remark}

\medskip

We proceed with the construction of $\varphi_5.$ Take $[A]\in B(R/R^\alpha).$ Then by \cite[Theorem 5.7]{AG} there is an Azumaya $R^\alpha$-algebra equivalent to $A$ containing $R$ as a maximal commutative subalgebra. Hence, we assume that $A$ contains $R$ as a maximal commutative subalgebra. By \cite[Theorem 4.2]{DFP} $R\supseteq R^\alpha$ is separable; moreover, \cite[Theorem 5.6]{AG} tells us that $A$ is a faithfully projective $R$-module and there is an $R$-algebra isomorphism $R\otimes_{R^\alpha} A^{\rm op}\cong {\rm End}_R(A).$ On the other hand, $D_g\otimes A$ is a faithfully projective $D_g$-module, thus by Proposition \ref{ht} there is an $R$-algebra isomorphism ${\rm End}_{D_g}(D_g\otimes_R A)\cong D_g\otimes {\rm End}_R(A),$ for any $g\in G.$ Consequently, we have an $R$-algebra isomorphism ${\rm End}_{D_g}(D_g\otimes A)\cong D_g\otimes_{R^\alpha} A^{\rm op}.$ Therefore, by \cite[Proposition I.3.3]{DI} the functor \begin{equation}\label{cequiv}\_\_\otimes_{D_g}(D_g\otimes A)\colon {}_{D_g}{\rm Mod}\to {}_{(D_g\otimes_{R^\alpha} A^{\rm op})}{\rm Mod}\end{equation} determines a category equivalence.
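Since \eqref{cequiv} is an equivalence of categories, every left $D_g\otimes_{R^\alpha}A^{\rm op}$-module is isomorphic to one of the form $M\otimes_{D_g}(D_g\otimes A)$ for some $D_g$-module $M,$ and such an $M$ is determined up to $D_g$-module isomorphism. We make this explicit only to guide the reader; it is precisely the property used below to attach to each $g\in G$ a module $M^{(g)}.$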
It is clear that $D_g\otimes A$ is a left $R\otimes_{R^\alpha}A^{\rm op}$-module via $(r\otimes_{R^\alpha} a)(d'\otimes a')=rd'\otimes a'a.$ Moreover, let $(D_{g^{-1}}\otimes A)^g = D_{g^{-1}}\otimes A$ (as sets) be endowed with a left $R\otimes_{R^\alpha}A^{\rm op}$-module structure via \begin{equation}\label{ngaction} (r\otimes_{R^\alpha} a)\bullet (d'\otimes a')=\alpha_{g^{-1}}(r1_g)d'\otimes a'a,\end{equation} for any $g\in G,\, r\in R,\, a\in A,\, d'\in D_{g^{-1}}.$ Restricting, we obtain left $D_g\otimes_{R^\alpha}A^{\rm op}$-module structures on $D_g\otimes A$ and $(D_{g^{-1}}\otimes A)^g,$ respectively. Moreover, $D_{g^{-1}}\otimes A$ is also a right $R\otimes_{R^\alpha} A^{\rm op}$-module via \begin{equation}\label{ngaction1}(d\otimes a)(r\otimes_{R^\alpha} a')= dr\otimes a'a.\end{equation} Furthermore, we denote by $_{h}(D_{g^{-1}}\otimes A)_I$ the $R$-$R$-bimodule $D_{g^{-1}}\otimes A,$ where the actions of $R$ are induced by (\ref{ngaction}) and (\ref{ngaction1}), and $h\in\{g,1_G\}.$ It follows from (\ref{ngaction}) that $(D_{g^{-1}}\otimes A)^g$ is an object in ${}_{D_g\otimes_{R^\alpha} A^{\rm op}}{\rm Mod}$ and by \eqref{cequiv} there is a $D_g$-module $M^{(g)}$ such that \begin{equation}\label{unico}(D_{g^{-1}}\otimes A)^g\cong M^{(g)}\otimes_{D_g}(D_g\otimes A)\cong M^{(g)}\otimes A,\ \,\text{as}\,\ (D_g\otimes_{R^\alpha} A^{\rm op})\text{-modules},\end{equation} where $M^{(g)}$ is considered as an $R$-module via the map $r\mapsto r1_g,\, r\in R.$ Our aim is to show that $[M^{(g)}]\in {\bf PicS}(R),$ for all $g\in G.$ From (\ref{unico}) we see that \begin{equation}\label{n2}(D_{g^{-1}}\otimes A)^g\cong M^{(g)}\otimes A\ \,\text{as}\ \,R\otimes_{R^\alpha} A^{\rm op}\text{-modules.}\end{equation} As $R$-modules we have $(D_{g^{-1}}\otimes A)^g=(D_{g^{-1}}\otimes A)_g.$ Since $D_{g^{-1}}$ is a f.g.p. $R$-module, we have that $D_{g^{-1}}\otimes A$ is a f.g.p. $R$-module too, and by (iii) of Lemma \ref{gaction} we conclude that $(D_{g^{-1}}\otimes A)^g$ is also a f.g.p. $R$-module. Since $A_{\mathfrak p}\ne 0,$ for all ${\mathfrak p}\in {\rm Spec}(R),$ Lemma \ref{proje} and \eqref{n2} imply that $M^{(g)}$ is a f.g.p. $R$-module.
Now we prove that ${\rm rk}_{\mathfrak p}(M^{(g)}_{\mathfrak p})\le 1,$ for all ${\mathfrak p}\in {\rm Spec}(R).$ Since $A_{\mathfrak p}\cong R_{\mathfrak p}^{n_{\mathfrak p}},$ for some $n_{\mathfrak p}\ge 1$, then $(D_{g^{-1}}\otimes A)_{\mathfrak p}\cong (D_{g^{-1}})_{\mathfrak p}^{n_{\mathfrak p}}.$ By (i) of Lemma \ref{gaction} we get $((D_{g^{-1}}\otimes A)_g)_{\mathfrak p}\cong (D_{g})_{\mathfrak p}^{n_{\mathfrak p}},$ which using \eqref{n2} implies $$(M^{(g)}_{\mathfrak p})\otimes_{R_{\mathfrak p}}A_{\mathfrak p}\cong (D_{g})_{\mathfrak p}^{n_{\mathfrak p}}\cong (D_{g})_{\mathfrak p}\otimes_{R_{\mathfrak p}}R_{\mathfrak p}^{n_{\mathfrak p}}\cong (D_{g})_{\mathfrak p}\otimes_{R_{\mathfrak p}}A_{\mathfrak p}.$$ We conclude that \begin{equation}\label{rigual}{\rm rk}_{\mathfrak p}(M^{(g)}_{\mathfrak p})={\rm rk}_{\mathfrak p}((D_{g})_{\mathfrak p})\le 1,\ \,\text{for any}\ \,{\mathfrak p}\in {\rm Spec}(R),\end{equation} and by Proposition \ref{carpic} we have $[M^{(g)}]\in {\bf PicS}(R).$ In addition, since $M^{(g)}$ is a (unital) $D_g$-module, we also have that $[M^{(g)}]\in X_g=[D_g]{\bf PicS}(R).$

\smallskip

Set $f_A\colon G\ni g\mapsto [M^{(g)}]\in {\bf PicS}(R).$ Notice that $f=f_A$ is well defined by Lemma \ref{proje} and $M^{(1)}=R$ satisfies (\ref{unico}). We shall check that $f\in Z^1(G,\alpha^*, {\bf PicS}(R)).$ Using (\ref{n2}), Remark \ref{ggaction} and (iv) of Lemma \ref{gaction} we obtain $R$-module isomorphisms \begin{align*} (D_g\otimes M^{(gh)})\otimes A&\cong D_g\otimes (M^{(gh)}\otimes A)\cong D_g\otimes (D_{(gh)^{-1}}\otimes A)_{gh}\\ &\cong [D_{g^{-1}}\otimes (D_{h^{-1}}\otimes A)_{h}]_g\cong [D_{g^{-1}}\otimes (M^{(h)}\otimes A)]_g\\ &\cong (D_{g^{-1}}\otimes M^{(h)})_g\otimes (D_{g^{-1}}\otimes A)_g\cong (D_{g^{-1}}\otimes M^{(h)})_g\otimes M^{(g)}\otimes A. \end{align*} Finally, by Lemma \ref{cancel} we get \begin{equation}\label{1cocy}M^{(gh)}\otimes D_g\cong (M^{(h)}\otimes D_{g^{-1}})_g\otimes M^{(g)},\end{equation} which gives $f(gh)[D_g]=f(g)\alpha^*_g(f(h)[D_{g^{-1}}]).$ Taking $h=g^{-1}$ in (\ref{1cocy}) we obtain that $D_g\cong (M^{(g^{-1})}\otimes D_{g^{-1}})_g\otimes M^{(g)},$ as $R$-modules. Thus, $[M^{(g)}]\in {\mathcal U}(X_g)$ and we conclude that $f\in Z^1(G,\alpha^*, {\bf PicS}(R)).$

\medskip

We define $\varphi_5\colon B(R/R^\alpha)\ni [A]\mapsto {\rm cls}(f_A)\in H^1(G,\alpha^*, {\bf PicS}(R)).$

\begin{claim} $\varphi_5$ is well defined.\end{claim}

Suppose $[A]=[B]\in B(R/R^\alpha),$ where $A$ and $B$ contain $R$ as a maximal commutative subalgebra (see \cite[Theorem 5.7]{AG}). There are faithfully projective $R^\alpha$-modules $P,Q$ such that $A\otimes_{R^\alpha} {\rm End}_{R^\alpha}(P)\cong B\otimes_{R^\alpha} {\rm End}_{R^\alpha}(Q)$ as $R^\alpha$-algebras.
It is proved in \cite[page 127]{DI} that this leads to the existence of a f.g.p. $R$-module $N$ with ${\rm rk}(N)=1$ satisfying \begin{equation}\label{clasic}(A\otimes_{R^\alpha} P^*)\otimes N\cong B\otimes_{R^\alpha} Q^*\ \,\text{as}\ \,R\text{-modules.}\end{equation} We know that there are $D_g$-modules $M^{(g)}, W^{(g)}$ such that \begin{equation}\label{mequiv1} (D_{g^{-1}}\otimes A)^g\cong M^{(g)}\otimes_R A\ \,\text{as}\ \,R\otimes_{R^\alpha} A^{\rm op}\text{-modules} \end{equation} and \begin{equation}\label{mequiv2} (D_{g^{-1}}\otimes B)^g\cong W^{(g)}\otimes_R B\ \,\text{as}\ \,R\otimes_{R^\alpha} B^{\rm op}\text{-modules}, \end{equation} for each $g\in G.$ Let $f_A,f_B\colon G\to {\bf PicS}(R)$ be defined by $$f_A(g)=[M^{(g)}],\qquad f_B(g)=[W^{(g)}],\qquad g\in G.$$ We must show that ${\rm cls}(f_A)={\rm cls}(f_B)$ in $H^1(G,\alpha^*, {\bf PicS}(R)).$ Since for any $R^\alpha$-module $P$ one has $(D_{g^{-1}}\otimes A\otimes_{R^\alpha} P^*)_g\cong (D_{g^{-1}}\otimes A)_g\otimes_{R^\alpha} P^*$ as $R$-modules, there are $R$-module isomorphisms \begin{align*} [D_{g^{-1}}\otimes (B\otimes_{R^\alpha} Q^*)]_g&\stackrel{(\ref{clasic})}{\cong} [D_{g^{-1}}\otimes (A\otimes_{R^\alpha} P^*)\otimes N]_g\cong [D_{g^{-1}}\otimes N\otimes (A\otimes_{R^\alpha} P^*)]_g\\ &\cong (N\otimes D_{g^{-1}})_g\otimes (D_{g^{-1}}\otimes A)_g\otimes_{R^\alpha} P^*\stackrel{(\ref{mequiv1})}{\cong} (N\otimes D_{g^{-1}})_g\otimes M^{(g)}\otimes A\otimes_{R^\alpha} P^*. \end{align*} On the other hand, \begin{equation}\label{isoB} [D_{g^{-1}}\otimes (B\otimes_{R^\alpha} Q^*)]_g\cong (D_{g^{-1}}\otimes B)_g\otimes_{R^\alpha} Q^*\stackrel{(\ref{mequiv2})}{\cong} W^{(g)}\otimes B\otimes_{R^\alpha} Q^*. \end{equation} But $[N]\in {\bf Pic}(R),$ and, as $R$-modules, one has \begin{align*} &[(D_{g^{-1}}\otimes N)_g\otimes N^*\otimes M^{(g)}]\otimes (N\otimes A\otimes_{R^\alpha} P^*)\cong [(D_{g^{-1}}\otimes N)_g\otimes M^{(g)}\otimes A]\otimes_{R^\alpha} P^*\\ &\cong [D_{g^{-1}}\otimes (B\otimes_{R^\alpha} Q^*)]_g\stackrel{(\ref{isoB})}{\cong} W^{(g)}\otimes B\otimes_{R^\alpha} Q^*\stackrel{(\ref{clasic})}{\cong} W^{(g)}\otimes (A\otimes_{R^\alpha} P^*)\otimes N\\ &\cong W^{(g)}\otimes (N\otimes A\otimes_{R^\alpha} P^*). \end{align*} Since $A$ is a faithfully projective $R$-module and $P^*$ is a faithfully projective $R^\alpha$-module, $A\otimes_{R^\alpha} P^*$ is a faithfully projective $R=R\otimes_{R^\alpha}R^\alpha$-module. Therefore, $N\otimes A\otimes_{R^\alpha} P^*$ is also a faithfully projective $R$-module, and by Lemma \ref{cancel} $$(D_{g^{-1}}\otimes N)_g\otimes N^*\otimes M^{(g)}\cong W^{(g)}$$ as $R$-modules, which is equivalent to saying that $f_B(g)=\alpha_g^*([N][D_{g^{-1}}])[N]^{-1}f_A(g).$ This shows that $\varphi_5$ is well defined.

\medskip

\begin{teo} $\varphi_5$ is a group homomorphism.\end{teo}
{\bf Proof. } Let $[A_1],[A_2]\in B(R/R^\alpha)$ and suppose that $R$ is a maximal commutative subalgebra of $A_i,\, i=1,2.$ Let $B=A_1\otimes_{R^\alpha}A_2.$ By \cite[Theorem 4.2]{DFP} the extension $R\supseteq R^\alpha$ is separable, so let $e\in R^e$ be a separability idempotent for $R.$ Then ${\rm End}_{R^\alpha}(Be)$ is an Azumaya $R^\alpha$-algebra with $C_{{\rm End}_{R^\alpha}(Be)}(B)={\rm End}_{B}(Be)\cong (eBe)^{\rm op},$ via the $R^\alpha$-algebra map $f\mapsto ef(e).$ It follows from \cite[Theorem II.4.3]{DI} that $(eBe)^{\rm op}$ is an Azumaya $R^\alpha$-algebra and $B\otimes_{R^\alpha}(eBe)^{\rm op}\cong {\rm End}_{R^\alpha}(Be)$ as $R^\alpha$-algebras. We conclude that $[B]=[eBe]$ in $B(R/R^\alpha).$ Following the procedure given in \cite[page 128]{DI} we also check that $R$ is a maximal commutative $R^\alpha$-subalgebra of $eBe.$ Thus, $[A_1][A_2]=[eBe]$ and there are $D_g$-modules $M^{(g)}, W_1^{(g)}, W_2^{(g)}$ such that \begin{equation}\label{iisso}(D_{g^{-1}}\otimes eBe)^g\cong M^{(g)}\otimes eBe,\quad (D_{g^{-1}}\otimes A_1)^g\cong W_1^{(g)}\otimes A_1,\quad (D_{g^{-1}}\otimes A_2)^g\cong W_2^{(g)}\otimes A_2, \end{equation} for any $g\in G,$ as $R\otimes_{R^\alpha}(eBe)^{\rm op}$-, $R\otimes_{R^\alpha}(A_1)^{\rm op}$- and $R\otimes_{R^\alpha}(A_2)^{\rm op}$-modules, respectively.

Let $\tilde\alpha=(\tilde{D}_g, \tilde\alpha_g)_{g\in G}$ denote the partial action of $G$ on $R\otimes_{R^\alpha}R,$ where $$\tilde{D}_g=D_g\otimes_{R^\alpha}D_g\ \,\text{and}\ \,\tilde\alpha_g\colon \tilde{D}_{g^{-1}}\to \tilde{D}_g\ \,\text{is induced by}\ \,x\otimes_{R^\alpha}y\mapsto \alpha_g(x)\otimes_{R^\alpha}\alpha_g(y).$$ Since $e_g=(1_g\otimes_{R^\alpha}1_g)e$ satisfies $$e_g(d\otimes_{R^\alpha}1_g-1_g\otimes_{R^\alpha}d)=e(d\otimes_{R^\alpha}1-1\otimes_{R^\alpha}d)(1_g\otimes_{R^\alpha}1_g)=0,$$ for all $d\in D_g,\, g\in G,$ then $e_g$ is a separability idempotent for the commutative $R^\alpha$-algebra $D_g.$ The fact that $\tilde\alpha_g$ is a ring isomorphism implies that $\tilde\alpha_g(e_{g^{-1}})\in D_g\otimes_{R^\alpha}D_g$ is another separability idempotent for $D_g$. Since separability idempotents for commutative algebras are unique, \begin{equation}\label{egg}\tilde\alpha_g(e_{g^{-1}})=e_g,\ \,\text{for all}\ \,g\in G.\end{equation} On the other hand, $$(\tilde{D}_{g^{-1}}\otimes_{R^e}Be)^g=((D_{g^{-1}}\otimes_{R^\alpha}D_{g^{-1}})\otimes_{R^e}Be)^g$$ is an $R^e\otimes_{R^\alpha}(eBe)^{\rm op}$-module via the action induced by \begin{equation}\label{actiongg} [(r_1\otimes_{R^\alpha}r_2)\otimes_{R^\alpha}b]\bullet [(d_1\otimes_{R^\alpha}d_2)\otimes_{R^e}b']= \tilde\alpha_{g^{-1}}[(r_1\otimes_{R^\alpha}r_2)1_{\tilde g}](d_1\otimes_{R^\alpha}d_2)\otimes_{R^e}b'b, \end{equation} for all $r_1,r_2\in R,\, d_1,d_2\in D_{g^{-1}},\, b\in eBe,\, b'\in Be$ and $1_{\tilde g}=1_g\otimes_{R^\alpha}1_g,\, g\in G.$

\medskip

Then, there is an $R^e\otimes_{R^\alpha}(eBe)^{\rm op}$-module isomorphism \begin{equation*} (\tilde{D}_{g^{-1}}\otimes_{R^e}Be)^g\cong (\tilde{D}_{g^{-1}}\otimes_{R^e}B)^g e. \end{equation*}
\varepsilonnd{equation*} Notice that for any $R^e{\otimes}_{R^\alpha}(eBe)^{op}$-module $M,$ the abelian group $(e{\otimes}_{R^\alpha} 1_{eBe})M$ is an $ R{\otimes}_{R^\alpha}(eBe)^{op}$-module via $$(r{\otimes}_{R^\alpha} ebe)\cdot((e{\otimes}_{R^\alpha}1_{eBe})m)=((r{\otimes}_{R^\alpha} 1_R)e{\otimes}_{R^\alpha} ebe)m,$$ for all $m\in M,\, r\in R$ and $b\in B.$ In particular, for $M= ({\times}lde{D}_{g{}^{-1}} {\otimes}_{R^e}B)^g e$ we see that \betagin{align*} (e{\otimes}_{R^\alpha}1_{eBe})\bullet( {\times}lde{ D}_{g{}^{-1}} {\otimes}_{R^e}B)^g e &={\times}lde\alpha_{g{}^{-1}}(e_g)( {\times}lde{D}_{g{}^{-1}} {\otimes}_{R^e}B)^g e\\ &=e_{g{}^{-1}}({\times}lde{ D}_{g{}^{-1}} {\otimes}_{R^e}B)^g e\\ &=e({\times}lde{D}_{g{}^{-1}} {\otimes}_{R^e}B)^g e \varepsilonnd{align*} is an $R{\otimes}_{R^\alpha}(eBe)^{\rm{op}}$-module via \betagin{align*} (r{\otimes}_{R^\alpha} ebe)\cdot(e(d_1{\otimes}_{R^\alpha}d_2){\otimes}_{R^e} b'e) = &((r{\otimes}_{R^\alpha} 1_R)e{\otimes}_{R^\alpha} ebe)\bullet (e(d_1{\otimes}_{R^\alpha}d_2){\otimes}_{R^e} b'e)\\ \sigmatackrel{\varepsilonqref{actiongg},\varepsilonqref{egg}} =&[{\times}lde\alpha_{g{}^{-1}}((r{\otimes}_{R^\alpha} 1)e_g)](d_1{\otimes}_{R^\alpha}d_2){\otimes}_{R^e} b'ebe\\ =&(\alpha_{g{}^{-1}}(r1_g){\otimes}_{R^\alpha} 1_R)e(d_1{\otimes}_{R^\alpha}d_2){\otimes}_{R^e} b'ebe\\ =&\alpha_{g{}^{-1}}(r1_g)e(d_1{\otimes}_{R^\alpha}d_2){\otimes}_{R^e} b'ebe. \varepsilonnd{align*} Also $$[{\times}lde{D}_{g{}^{-1}}{\otimes}_{R^e}eBe]^g=[e{\times}lde{D}_{g{}^{-1}}{\otimes}_{R^e}Be]^g$$ is an $ R{\otimes}_{R^\alpha}(eBe)^{\rm{op}}$-module via \betagin{equation}{\longrightarrow}bel{emod}(r{\otimes}_{R^\alpha} ebe) {\mathbf l}acktriangleright (e(d_1{\otimes}_{R^\alpha}d_2){\otimes}_{R^e}b'e)=\alpha_{g{}^{-1}}(r1_g)e(d_1{\otimes}_{R^\alpha}d_2){\otimes}_{R^e}b'ebe.\varepsilonnd{equation} We conclude that \betagin{equation}{\longrightarrow}bel{isso} { e({\times}lde{D}_{g{}^{-1}} {\otimes}_{R^e}B)^g e\cong [{\times}lde{D}_{g{}^{-1}}{\otimes}_{R^e}eBe]^g },\varepsilonnd{equation} as $ R{\otimes}_{R^\alpha}(eBe)^{op}$-modules. \sigmamallskip Moreover we have the next. 
\begin{claim}\label{iso11} There is an $ R{\otimes}_{R^\alpha}(eBe)^{\rm{op}}$-module isomorphism $$(D_{g{}^{-1}}{\otimes} eBe)^g\cong(\tilde{D}_{g{}^{-1}}{\otimes}_{R^e} eBe)^g,$$ where $ R{\otimes}_{R^\alpha}(eBe)^{\rm{op}}$ acts on $(\tilde{D}_{g{}^{-1}}{\otimes}_{R^e} eBe)^g$ as in {\rm(\ref{emod}).} \end{claim} Indeed, the map $(D_{g{}^{-1}}{\otimes} eBe)^g \stackrel{\varsigma}{\to} (\tilde{D}_{g{}^{-1}}{\otimes}_{R^e} eBe)^g,$ determined by $$d{\otimes} ebe\mapsto (1_{g{}^{-1}}{\otimes}_{R^\alpha} d){\otimes}_{R^e} ebe$$ is a well defined $ { R {\otimes} }_{R^\alpha}(eBe)^{\rm{op}}$-module isomorphism whose inverse $$ (\tilde{D}_{g{}^{-1}}{\otimes}_{R^e} eBe)^g\stackrel{\varsigma^*}{\to} (D_{g{}^{-1}}{\otimes} eBe)^g,$$ is induced by $$(d_1{\otimes}_{R^\alpha} d_2){\otimes}_{R^e} ebe\mapsto d_1d_2{\otimes} ebe.$$ In fact, since $e\in R^e$ is the separability idempotent for $R,$ we have $(d_1{\otimes}_{R^\alpha} d_2)e=(1_{g{}^{-1}}{\otimes}_{R^\alpha}d_1 d_2)e.$ Hence, \begin{align*} \varsigma\varsigma^*((d_1{\otimes}_{R^\alpha} d_2){\otimes}_{R^e} ebe)&=\varsigma(d_1d_2{\otimes} ebe) =(1_{g{}^{-1}}{\otimes}_{R^\alpha} d_1d_2){\otimes}_{R^e} ebe \\ &=(1_{g{}^{-1}}{\otimes}_{R^\alpha} d_1d_2)e{\otimes}_{R^e} ebe =(d_1{\otimes}_{R^\alpha} d_2)e{\otimes}_{R^e} ebe\\ &=(d_1{\otimes}_{R^\alpha} d_2){\otimes}_{R^e} ebe, \end{align*} and $$\varsigma^*\varsigma(d_1{\otimes} ebe)=\varsigma^*((1_{g{}^{-1}}{\otimes}_{R^\alpha } d_1){\otimes}_{R^e} ebe)=d_1{\otimes} ebe. $$ Now we prove that $\varsigma$ is ${R {\otimes}}_{R^\alpha}(eBe)^{\rm{op}}$-linear. For \begin{align*} \varsigma((r{\otimes}_{R^\alpha} ebe)\!\bullet\! (d{\otimes} eb'e))&=\!\varsigma(\alpha_{g{}^{-1}}(r1_g)d{\otimes} eb'ebe)\! =\!(1_{g{}^{-1}}\!{\otimes}_{R^\alpha}\alpha_{g{}^{-1}}(r1_g)d){\otimes}_{R^e} eb'ebe, \end{align*} and \begin{align*} (r{\otimes}_{R^\alpha} ebe) \blacktriangleright \varsigma( d{\otimes} eb'e) &=(r{\otimes}_{R^\alpha} ebe) \blacktriangleright ((1_{g{}^{-1}}{\otimes}_{R^\alpha } d){\otimes}_{R^e} eb'e)\\ &=\alpha_{g{}^{-1}}(r1_g)e(1_{g{}^{-1}}{\otimes}_{R^\alpha}d){\otimes}_{R^e}eb'ebe\\ &=(1_{g{}^{-1}}{\otimes}_{R^\alpha}\alpha_{g{}^{-1}}(r1_g))e(1_{g{}^{-1}}{\otimes}_{R^\alpha}d){\otimes}_{R^e}eb'ebe\\ &=(1_{g{}^{-1}}{\otimes}_{R^\alpha}\alpha_{g{}^{-1}}(r1_g))(1_{g{}^{-1}}{\otimes}_{R^\alpha}d){\otimes}_{R^e}eb'ebe\\ &=(1_{g{}^{-1}}{\otimes}_{R^\alpha}\alpha_{g{}^{-1}}(r1_g)d) {\otimes}_{R^e} eb'ebe, \end{align*} which ends the proof of the claim. We still need the following.
\begin{claim}\label{1iso} There is an $(R^e){\otimes}_{R^\alpha}(eBe)^{\rm{op}}$-module isomorphism $$[\tilde{D}_{g{}^{-1}}{\otimes}_{R^e}(A_1{\otimes}_{R^\alpha} A_2)]^g\cong (D_{g{}^{-1}}{\otimes} A_1)^g{\otimes}_{R^\alpha} (D_{g{}^{-1}}{\otimes} A_2)^g,$$ where the action of $(R^e){\otimes}_{R^\alpha}(eBe)^{op}$ on $(D_{g{}^{-1}}{\otimes} A_1)^g{\otimes}_{R^\alpha} (D_{g{}^{-1}}{\otimes} A_2)^g$ is induced by \begin{align*}&[(r_1{\otimes}_{R^\alpha}r_2){\otimes}_{R^\alpha} (x{\otimes}_{R^\alpha}y)]\bullet [(d_1{\otimes} a_1){\otimes}_{R^\alpha}(d_2{\otimes} a_2)]\\ =&(\alpha_{g{}^{-1}}(r_11_g)d_1{\otimes} a_1x){\otimes}_{R^\alpha}(\alpha_{g{}^{-1}}(r_21_g)d_2{\otimes} a_2y),\end{align*} for $r_1,r_2\in R, d_1,d_2\in D_{g{}^{-1}}, x{\otimes}_{R^\alpha}y\in eBe, a_1\in A_1$ and $a_2\in A_2.$ \end{claim} Indeed, we have a well defined (additive) group homomorphism $$\chi:(\tilde{D}_{g{}^{-1}}{\otimes}_{R^e} (A_1{\otimes}_{R^\alpha} A_2))^g\to (D_{g{}^{-1}}{\otimes} A_1)^g{\otimes}_{R^\alpha} (D_{g{}^{-1}}{\otimes} A_2)^g,$$ determined by $$(d_i{\otimes}_{R^\alpha} d'_i) {\otimes}_{R^e} (a_j{\otimes}_{R^\alpha}a'_j)\mapsto (d_i{\otimes} a_j){\otimes}_{R^\alpha}(d'_i{\otimes} a'_j), $$ which has $$(d_i{\otimes} a_i){\otimes}_{R^\alpha} (d_j{\otimes}_{R^\alpha}a_j)\mapsto (d_i{\otimes}_{R^\alpha} d_j){\otimes}_{R^e}(a_i{\otimes} a_j), $$ as an inverse. The fact that $\chi$ is $(R^e){\otimes}_{R^\alpha}(eBe)^{op}$-linear is straightforward. By (\ref{iisso}), Claim \ref{iso11}, (\ref{isso}) and Claim \ref{1iso} we have the $R{\otimes}_{R^\alpha} (eBe)^{\rm{op}}$-module isomorphisms \begin{align*} M^{(g)}{\otimes} eBe&\cong (D_{g{}^{-1}}{\otimes} eBe)^g \cong[\tilde{D}_{g{}^{-1}}{\otimes}_{R^e} { eBe]^g}\\ &\cong e[\tilde{D}_{g{}^{-1}}{\otimes}_{R^e} { B]^g } e \cong e[(D_{g{}^{-1}}{\otimes} { A_1)^g} {\otimes}_{R^\alpha} (D_{g{}^{-1}}{\otimes} {A_2)^g} ]e\\ &\cong e[(W_1^{(g)}{\otimes} A_1){\otimes}_{R^\alpha} (W_2^{(g)}{\otimes} A_2)]e \cong e\{W_1^{(g)}{\otimes} [(A_1{\otimes}_{R^\alpha} A_2){\otimes} W_2^{(g)}]\}e\\ &\cong e[(W_1^{(g)}{\otimes} W_2^{(g)}){\otimes} (A_1{\otimes}_{R^\alpha} A_2)]e \cong e[(W_1^{(g)}{\otimes} W_2^{(g)}){\otimes} (A_1{\otimes}_{R^\alpha} A_2)e]\\ &\cong e[(A_1{\otimes}_{R^\alpha} A_2)e {\otimes}(W_1^{ (g) }{\otimes} W_2^{(g)})] \cong e(A_1{\otimes}_{R^\alpha} A_2)e {\otimes}(W_1^{ (g) } {\otimes} W_2^{(g)})\\ &\cong (W_1^{(g)}{\otimes} W_2^{(g)}){\otimes} eBe. \end{align*} Finally, Lemma \ref{cancel} implies $M^{(g)}\cong W_1^{(g)} {\otimes} W_2^{(g)},$ and we conclude that $\varphi_5$ is a group homomorphism. $\blacksquare$ \section{Two partial representations $G\to{\bf PicS}_{R^\alpha}(R)$ and the homomorphism $H^1(G,\alpha^*,{\bf PicS}(R)) \stackrel{\varphi_6}\to H^3(G,\alpha, R)$}\label{phi6} For the reader's convenience we recall from \cite{DEP} the concept of a partial representation.
\begin{defi}\label{defn-par-repr} A (unital) partial representation of $G$ into an algebra (or, more ge\-ne\-ral\-ly, a monoid) $S$ is a map $\Phi: G \to S$ which satisfies the following properties, for all $g,h\in G,$ \vspace*{2mm} \noindent (i) $\Phi (g{}^{-1}) \Phi (g) \Phi (h) = \Phi (g{}^{-1}) \Phi (g h),$ \vspace*{2mm} \noindent (ii) $\Phi (g ) \Phi (h) \Phi (h{}^{-1}) = \Phi (g h) \Phi ( h {}^{-1} ),$ \vspace*{2mm} \noindent (iii) $\Phi (1_G ) = 1_S.$ \end{defi} For any $g\in G$ denote by $_{g}{(D_{g{}^{-1}})}_I$ the $R$-$R$-bimodule $D_{g{}^{-1}}$ regarded as an $R$-$R$-bimodule with new action $*$ given by $$r*d=\alpha_{g{}^{-1}}(r1_{g})d,\,\,\,\,\text{and} \,\,\, d*r=dr, \,\,\, \text{for any}\,\,\, r\in R,\,d\in D_{g{}^{-1}},$$ (analogously we define ${}_{I}{(D_{g})}_{g{}^{-1}};$ notice that $D_g={}_{I}{(D_{g})}_I$ as $R$-$R$-bimodules). Then using (iii) of Lemma \ref{gaction}, we get that $ [_{g}{(D_{g{}^{-1}})}_I]\in {\bf PicS}_{R^\alpha}(R).$ We set \begin{equation}\label{fi0}\Phi_0\colon G\ni g\mapsto [_{g}{(D_{g{}^{-1}})}_I]\in {\bf PicS}_{R^\alpha}(R).\end{equation} Some useful properties of $\Phi_0$ are given in the following. \begin{prop}\label{pr1} Let $\Phi_0$ be as in {\rm(\ref{fi0})}. Then, \begin{itemize} \item $\Phi_0$ is a partial representation of $G$ in ${\bf PicS}_{R^\alpha}(R)$ with \begin{equation}\label{dg}\Phi_0(g)\Phi_0(g{}^{-1})=[D_g],\end{equation} for all $g\in G,$ \item $\Phi_0(g)[D_{h}]=[D_{gh}]\Phi_0(g),$ for any $g,h\in G.$ In particular, \begin{equation}\label{fiab}\Phi_0(g)[D_{g{}^{-1}}]=\Phi_0(g)=[D_g]\Phi_0(g),\end{equation} \item for any $g\in G$ and $[P]\in X_{g{}^{-1}}$ there is an $R$-$R$-bimodule isomorphism \begin{equation}\label{isoa}P_g\cong \Phi_0(g){\otimes} P{\otimes} \Phi_0(g{}^{-1}).\end{equation} \end{itemize} \end{prop} {\bf Proof. } It is clear that $\Phi_0(1)=[R].$ Now let $g,h\in G$ and consider the map $\phi_{g,h}\colon _{g}{(D_{g{}^{-1}})}_I {\otimes}\, _{h}{(D_{h{}^{-1}})}_I {\otimes}\, _{h{}^{-1}}{(D_{h})}_I \to _{gh}\!\!(D_{(gh){}^{-1}})_I {\otimes}\, _{h{}^{-1}}{(D_{h})}_I, $ defined by $$\phi_{g,h}(a_{g{}^{-1}}{\otimes} b_{h{}^{-1}} {\otimes} c_h)= \alpha_{h{}^{-1}}(a_{g{}^{-1}}1_h)b_{h{}^{-1}}{\otimes} c_h.$$ Then, $\phi_{g,h}$ is $R$-$R$-linear because for $r_1,r_2\in R$ we have \begin{align*}\phi_{g,h}(r_1 *a_{g{}^{-1}}{\otimes} b_{h{}^{-1}} {\otimes} c_h \ast r_2)&=\phi_{g,h}(\alpha_{g{}^{-1}}(r_11_{g})a_{g{}^{-1}}{\otimes} b_{h{}^{-1}} {\otimes} c_hr_2)\\ &=\alpha_{h{}^{-1}}(\alpha_{g{}^{-1}}(r_11_{g})a_{g{}^{-1}}1_h)b_{h{}^{-1}}{\otimes} c_hr_2\\ &=\alpha_{(gh){}^{-1}}(r_11_{gh})\alpha_{h{}^{-1}}(a_{g{}^{-1}}1_h)b_{h{}^{-1}}{\otimes} c_hr_2\\ &=r_1*\phi_{g,h}(a_{g{}^{-1}}{\otimes} b_{h{}^{-1}} {\otimes} c_h) \ast r_2.\end{align*} Moreover, the map $ _{gh}{(D_{(gh){}^{-1}})}_I {\otimes}\, _{h{}^{-1}}{(D_{h})}_I\to{}_{g}{(D_{g{}^{-1}})}_I {\otimes}{}_{h}{(D_{h{}^{-1}})}_I {\otimes}{}_{h{}^{-1}}{(D_{h})}_I$ induced by $$x_{(gh){}^{-1}}{\otimes} x_h\to \alpha_h(x_{(gh){}^{-1}}1_{h{}^{-1}}){\otimes} 1_{h{}^{-1}}{\otimes} x_h,$$ for all $x_{(gh){}^{-1}}\in\, _{gh}{(D_{(gh){}^{-1}})}_I,\, x_h\in _{h{}^{-1}}(D_{h})_I $ is the inverse of $\phi_{g,h}.$ This yields that $\Phi_0(g)\Phi_0(h)\Phi_0(h{}^{-1})=\Phi_0(gh)\Phi_0(h{}^{-1}).$ In a similar way, the map $ _{g{}^{-1}}{(D_{g})}_I {\otimes}\, _{gh}\!(D_{(gh){}^{-1}})_I\to _{g{}^{-1}}\!\!(D_{g})_I {\otimes}\, _{g}{(D_{g{}^{-1}})}_I {\otimes} _{h}{(D_{h{}^{-1}})}_I, $ such that $$a_{g}{\otimes} b_{(gh){}^{-1}} \mapsto a_{g} {\otimes}\alpha_h(b_{(gh){}^{-1}}1_{h{}^{-1}}) {\otimes} 1_{h{}^{-1}},$$ is an $R$-$R$-bimodule isomorphism with inverse $$ x_g {\otimes} y_{g{}^{-1}} {\otimes} z_{h{}^{-1}} \mapsto x_g {\otimes} \alpha _{h{}^{-1}} (y_{g{}^{-1} } 1_h) z_{h{}^{-1}},$$ and we obtain $\Phi_0(g{}^{-1})\Phi_0(gh)=\Phi_0(g{}^{-1})\Phi_0(g)\Phi_0(h).$ To prove (\ref{dg}) one can check that the map determined by $${}_{g}{(D_{g{}^{-1}})}_I\, {\otimes}\, _{g{}^{-1}}{(D_{g})}_I \ni a_{g{}^{-1}}{\otimes} b_{g} \mapsto \alpha_g(a_{g{}^{-1}}) b_{g}\in D_{g}$$ is a well defined $R$-$R$-bimodule isomorphism whose inverse is $D_g \ni d\to 1_{g{}^{-1}}{\otimes} d\in\, _{g}{(D_{g{}^{-1}})}_I\, {\otimes} {}_{g{}^{-1}}{(D_{g})}_I.$ The second item follows from the first and (2), (3) of \cite{DEP}. To check the last item consider the map $$P_g\ni p\stackrel{\nu}{\to} 1_{g{}^{-1}}{\otimes} p{\otimes} 1_g\in{}_{g}{(D_{g{}^{-1}})}_I {\otimes} \, P {\otimes}{} _{g{}^{-1}}{(D_{g})}_I.$$ Then, for $r_1,r_2\in R$ we have \begin{align*}\nu(r_1\bullet p\bullet r_2) &=1_{g{}^{-1}}{\otimes} r_1\bullet p\bullet r_2{\otimes} 1_g=1_{g{}^{-1}} {\otimes} \alpha_{g{}^{-1}}(r_11_g)p\alpha_{g{}^{-1}}(r_21_g){\otimes} 1_g\\ &=\alpha_{g{}^{-1}}(r_11_g){\otimes} p{\otimes} r_21_g= r_1\ast 1_{g{}^{-1}}{\otimes} p{\otimes} 1_g \ast r_2=r_1 \ast \nu(p) \ast r_2.\end{align*} Hence, $\nu$ is an $R$-$R$-bimodule isomorphism with inverse induced by $$a_{g{}^{-1}}{\otimes} p{\otimes} b_g = a_{g{}^{-1}}{\otimes} p{\otimes} \alpha_{g{}^{-1}}( b_g) \ast 1_g = 1_{g{}^{-1}} {\otimes} a_{g{}^{-1}}p\alpha_{g{}^{-1}}(b_g) {\otimes} 1_g \mapsto a_{g{}^{-1}}p\alpha_{g{}^{-1}}(b_g),$$ for all $a_{g{}^{-1}}{\otimes} p{\otimes} b_g\in\! _{g}{(D_{g{}^{-1}})}_I {\otimes}\, P {\otimes} \, _{g{}^{-1}}{(D_{g})}_I.$ $\blacksquare$ Now we construct another partial representation of $G$ in ${\bf PicS}_{R^\alpha}(R).$ \begin{lema} \label{pr2}For any $f\in Z^1(G,\alpha^*, {\bf PicS}(R))$ set $\Phi_f=f\Phi_0\colon G\to {\bf PicS}_{R^\alpha}(R),$ that is $\Phi_f(g)=f(g)\Phi_0(g),$ for any $g\in G.$ Then, \begin{itemize} \item $\Phi_f$ is a partial representation, \item $\Phi_f(g)\Phi_f(g{}^{-1})=[D_g],$ for all $g\in G.$ \end{itemize} Moreover, writing $\Phi_f(g)=[J_g],$ we have that $D_g\cong {\rm End}_{D_g}(J_g),$ as $R$- and $D_g$-algebras, for any $g\in G.$ \end{lema} {\bf Proof. } First of all we have $\Phi_f(1)=[R].$ Now, let $g,h\in G.$ Then, \begin{align*}\Phi_f(g{}^{-1})\Phi_f(gh)&=f(g{}^{-1})\Phi_0(g{}^{-1})f(gh)\Phi_0(gh)\\ &\stackrel {(\ref{fiab})}=f(g{}^{-1})\Phi_0(g{}^{-1})[D_g]f(gh)\Phi_0(gh)\\ &=f(g{}^{-1})\Phi_0(g{}^{-1})f(g)\alpha^*_g(f(h)[D_{g{}^{-1}}])\Phi_0(gh)\\ &\stackrel{(\ref{isoa})}{=}f(g{}^{-1})\Phi_0(g{}^{-1})f(g)\Phi_0(g)f(h)[D_{g{}^{-1}}]\Phi_0(g{}^{-1})\Phi_0(gh) \\ &\stackrel{(\ref{fiab})}=f(g{}^{-1})\Phi_0(g{}^{-1})f(g)\Phi_0(g)f(h)\Phi_0(g{}^{-1})\Phi_0(gh)\\ &=f(g{}^{-1})\Phi_0(g{}^{-1})f(g)\Phi_0(g)f(h)\Phi_0(g{}^{-1})\Phi_0(g)\Phi_0(h)\\ &\stackrel{(\ref{dg})}=f(g{}^{-1})\Phi_0(g{}^{-1})f(g)\Phi_0(g)f(h)[D_{g{}^{-1}}]\Phi_0(h)\\ &\stackrel{(\ref{fiab})}=f(g{}^{-1})\Phi_0(g{}^{-1})f(g)\Phi_0(g)f(h)\Phi_0(h)\\ &=\Phi_f(g{}^{-1})\Phi_f(g)\Phi_f(h).\end{align*} Analogously, it can be shown that $\Phi_f(gh)\Phi_f(h{}^{-1})=\Phi_f(g)\Phi_f(h)\Phi_f(h{}^{-1}).$ Indeed, since $f\in C^1 (G,\alpha^*,{\bf PicS}(R))$ we have $[D_{h{}^{-1}}]f(h{}^{-1})=f(h{}^{-1}),$ and by the second item of Proposition \ref{pr1} we obtain $\Phi_0(gh)[D_{h{}^{-1}}]=[D_g]\Phi_0(gh).$ Thus, \begin{align*}\Phi_f(gh)\Phi_f(h{}^{-1})&=f(gh)\Phi_0(gh)f(h{}^{-1})\Phi_0(h{}^{-1})\\ &=f(gh)[D_g]\Phi_0(gh)f(h{}^{-1})\Phi_0(h{}^{-1})\\ &\stackrel{(\ref{isoa})}{=}f(g)\Phi_0(g)f(h)[D_{g{}^{-1}}]\Phi_0(g{}^{-1})\Phi_0(gh)f(h{}^{-1})\Phi_0(h{}^{-1})\\ &\stackrel{(\ref{fiab})}=f(g)\Phi_0(g)f(h)\Phi_0(g{}^{-1})\Phi_0(gh)f(h{}^{-1})\Phi_0(h{}^{-1})\\ &\stackrel{(\ref{dg})}=f(g)\Phi_0(g)f(h)[D_{g{}^{-1}}]\Phi_0(h)f(h{}^{-1})\Phi_0(h{}^{-1})\\ &=f(g)\Phi_0(g)f(h)\Phi_0(h)f(h{}^{-1})\Phi_0(h{}^{-1})\\ &=\Phi_f(g)\Phi_f(h)\Phi_f(h{}^{-1}). \end{align*} With respect to the second item we have $\Phi_f(g)\Phi_f(g{}^{-1})=f(g)\Phi_0(g)f(g{}^{-1})\Phi_0(g{}^{-1})\stackrel{(\ref{isoa})}{=}f(g)\alpha^*_g(f(g{}^{-1}))=f(gg{}^{-1})[D_g]=[R][D_g]=[D_g].$ Finally, by definition $[J_g]= f(g) \Phi_0(g),$ and $J_g\cong D_g{\otimes} J_g$ as left $R$- and $D_g$-modules, for all $g\in G$. Therefore, after identifying $D_g={\rm End}_{D_g}(D_g)$, the $R$-algebra epimorphism $ R\to {\rm End}_R(J_g)$ induces an $R$- and $D_g$-algebra epimorphism $\xi\colon D_g\to {\rm End}_{D_g}(J_g),$ thanks to Proposition \ref{ht}. Via localization we will check that $\xi$ is injective. If for any $g\in G$ the ring $D_g$ is semi-local, { then ${\bf Pic}(D_g)$ is trivial (see, for example, \cite[Ex. 2.22 (D)]{L}), and by Remark \ref{unxg} we have that} $f(g)\cong D_g,$ as $ D_g$-modules as well as $ R $-modules. Then \begin{equation}\label{anj}J_g\cong \, _g (D_{g{}^{-1}})_I\,\,\,\text{as $R$-$R$-bimodules.}\end{equation} Moreover, after localizing by a prime ideal of $R^\alpha , $ we obtain $R^\alpha$-module isomorphisms $D_{g{}^{-1}}\cong (R^\alpha)^m\cong D_g$ for some $m\in{\mathbb N},$ thanks to the facts that the maps $\alpha_g$ are isomorphisms of $R^\alpha$-modules and localization is an exact functor. Since in this case any $D_g$ is semi-local, \eqref{anj} implies that the map $\xi \colon (R^\alpha)^m\to {\rm End}_{(R^\alpha)^m}(_g(R^\alpha)^m)$ is an epimorphism of $R^\alpha$-algebras. On the other hand, the left $R^\alpha$-modules $(R^\alpha)^m$ and $_g(R^\alpha)^m$ are isomorphic via $r \mapsto {\alpha}_{g{}^{-1}} (r),$ and we have ${\rm End}_{(R^\alpha)^m}(_g(R^\alpha)^m) \cong (R^\alpha)^m,$ and $\xi $ must be an isomorphism of $R^\alpha$-modules.
Finally, since $\xi $ is $R$-linear, it is an isomorphism of $R$- and $D_g$-algebras, for any $g\in G.$ $\blacksquare$ In what follows we shall write $\Phi_f (g) =[J_g].$ \begin{remark}\label{jgfiel} Localizing by ideals in ${\rm Spec}(R^\alpha)$ and using \eqref{anj} we see that $J_g$ is a faithful $D_g$-module. Then if we ignore the right $R$-module structure of $J_g$ and use the last item of Lemma~\ref{pr2} we obtain that $[J_g]\in {\bf Pic}(D_g),$ for any $f\in Z^1(G,\alpha^*, {\bf PicS}(R))$ and $g\in G.$ In particular, the map $m_{D_g}\colon D_g\to {\rm End}_{D_g}(J_g)$ given by left multiplication is a $D_g$-algebra isomorphism. \end{remark} \begin{remark}\label{jgdg} Let $f$ be an element of $ Z^1(G,\alpha^*, {\bf PicS}(R))$ and write $f(g)=[ M_g ].$ Then $J_g= M_g {\otimes} \, _{g}{(D_{g{}^{-1}})}_I$ and \begin{equation}\label{jgtwits}x_gr=\alpha_g(r1_{g{}^{-1}})x_g,\,\,\text{for any}\,\, x_g\in J_g,\,\, r\in R,\end{equation} and the map \begin{equation}\label{jgdgi} J_g\ni x\to x{\otimes} 1_{g{}^{-1}}\in J_g{\otimes} D_{g{}^{-1}} \end{equation} is an $R$-$R$-bimodule isomorphism. Furthermore, if $D_g$ is semi-local for any $g\in G,$ then by \eqref{anj} there is an $R$-$R$-bimodule isomorphism $\gamma_g\colon _{g}{(D_{g{}^{-1}})}_I\ni d\to \gamma_g(d)\in J_g .$ Therefore, setting $u_g=\gamma_g(1_{g{}^{-1}}),$ we have, for any $x\in J_g,$ that $$x=\gamma_g(d)=\gamma_g(\alpha_g(d)* 1_{g{}^{-1}})=\alpha_g(d)u_g,$$ for some $d\in D_{g{}^{-1}}.$ We conclude that $J_g=D_gu_g,$ $u_g$ is a free generator of $J_g$ over $D_g$ and $u_gr=\alpha_g(r1_{g{}^{-1}})u_g,$ for all $ r\in R,\,g\in G,$ in view of (\ref{jgtwits}). \end{remark} We know from Proposition~\ref{pr1} and Lemma~\ref{pr2} that there are $R$-$R$-bimodule isomorphisms ${}_{h{}^{-1}}(D_{h})_{I}{\otimes} {}_{g{}^{-1}}(D_{g})_I\cong D_{h{}^{-1}}{\otimes} \,{}_{h{}^{-1} g{}^{-1}}(D_{g h})_{I}\,\, \,\,\text{and}\,\,\,\, J_g{\otimes} \, D_h \cong D_{gh}{\otimes} J_g,$ for all $g,h\in G.$ For further reference, we shall construct these isomorphisms explicitly. \begin{lema}\label{iisoo} The map $\varrho\colon {}_{h{}^{-1}}(D_{h})_{I}{\otimes} {}_{g{}^{-1}}(D_{g})_I\to D_{h{}^{-1}}{\otimes} \,{}_{h{}^{-1} g{}^{-1}}(D_{g h})_{I},$ induced by \begin{equation*}\label{isovr}x_h{\otimes} y_g\mapsto 1_{h{}^{-1}}{\otimes} \alpha_g(x_h1_{g{}^{-1}})y_g, \end{equation*} is an $R$-$R$-bimodule isomorphism, and $\varrho{}^{-1}\colon D_{h{}^{-1}}{\otimes}\,{}_{h{}^{-1} g{}^{-1}}(D_{g h})_{I}\to {}_{h{}^{-1}}(D_{h})_{I}{\otimes} {}_{g{}^{-1}}(D_{g})_I$ is determined by $$\label{isovri}x_{h{}^{-1}}{\otimes} y_{gh}\mapsto\alpha_{g{}^{-1}}(y_{gh}1_g)\alpha_h(x_{h{}^{-1}}){\otimes} 1_{g},$$ for $g,h\in G.$ \end{lema} {\bf Proof. } Indeed, $\varrho$ is well defined, and using \eqref{prodp} one can show that $\varrho$ is an $R$-$R$-bimodule homomorphism.
Moreover, \begin{align*} x_h{\otimes} y_g&\stackrel{\varrho}{\mapsto} 1_{h{}^{-1}}{\otimes} \alpha_g(x_h1_{g{}^{-1}})y_g \stackrel{\varrho{}^{-1}}{\mapsto}\alpha_{g{}^{-1}}(\alpha_g(x_h1_{g{}^{-1}})y_g)1_h{\otimes} 1_g =x_h\alpha_{g{}^{-1}}(y_g){\otimes} 1_g =x_h{\otimes} y_g.\end{align*} On the other hand, \begin{align*}x_{h{}^{-1}}{\otimes} y_{gh}&\stackrel{\varrho{}^{-1}}{\mapsto} \alpha_{g{}^{-1}}(y_{gh}1_g)\alpha_h(x_{h{}^{-1}}){\otimes} 1_{g} \stackrel{\varrho}{\mapsto}1_{h{}^{-1}}{\otimes} \alpha_g(\alpha_{g{}^{-1}}(y_{gh}1_g)\alpha_h(x_{h{}^{-1}}))\\ &=1_{h{}^{-1}}{\otimes} y_{gh}\alpha_g(\alpha_h(x_{h{}^{-1}})1_{g{}^{-1}}) \stackrel{\eqref{prodp}}=1_{h{}^{-1}}{\otimes} y_{gh}\alpha_{gh}(x_{h{}^{-1}}1_{({gh}){}^{-1}})1_{g}\\ &=1_{h{}^{-1}}{\otimes} y_{gh}\alpha_{gh}(x_{h{}^{-1}}1_{(gh){}^{-1}})\alpha_{gh}(1_{h{}^{-1}}1_{(gh){}^{-1}}) =1_{h{}^{-1}}{\otimes} y_{gh}\alpha_{gh}(x_{h{}^{-1}}1_{(gh){}^{-1}})\\ &=1_{h{}^{-1}}{\otimes} (x_{h{}^{-1}}*y_{gh}) =x_{h{}^{-1}}{\otimes} y_{gh},\end{align*} as desired. $\blacksquare$ \begin{lema}\label{kapa} The map $ J_g{\otimes} D_h \stackrel{\kappa_{g,h}}\to D_{gh}{\otimes} J_g$ induced by \begin{equation}\label{kappa}a_{g}{\otimes} \, b_h\mapsto\alpha_g(b_h1_{g{}^{-1}}){\otimes} a_{g},\end{equation} for any $g,h\in G,$ is an $R$-$R$-bimodule isomorphism. \end{lema} {\bf Proof. } First, $\kappa_{g,h}$ is well defined by (\ref{jgtwits}). Notice that $\kappa_{g,h}$ is bijective with inverse $\iota_{g,h}\colon D_{gh}{\otimes} J_g\to J_g{\otimes} \, D_h, $ $a_{gh}{\otimes} b_g\mapsto b_g{\otimes} \alpha_{g{}^{-1}}(a_{gh}1_g),$ for all $a_{gh}\in D_{gh}$ and $b_g \in J_g.$ Indeed, $$\iota_{g,h}\circ \kappa_{g,h}(a_{g}{\otimes} \, b_h)=\iota_{g,h}(\alpha_g(b_h1_{g{}^{-1}}){\otimes} a_{g})=a_{g}{\otimes} b_h1_{g{}^{-1}}\stackrel{\eqref{jgtwits}}=1_ga_{g}{\otimes} b_h =a_{g}{\otimes} b_h.$$ In addition, $$ \kappa_{g,h}\circ \iota_{g,h}(a_{gh}{\otimes} b_g)= \kappa_{g,h}( b_g{\otimes} \alpha_{g{}^{-1}}(a_{gh}1_g))=a_{gh}1_g{\otimes} b_g= a_{gh} {\otimes} 1_g b_g= a_{gh}{\otimes} b_g.$$ Finally, to prove that $\kappa_{g,h}$ is $R$-$R$-linear take $r_1, r_2\in R.$ Then, \begin{align*} \kappa_{g,h}(r_1\cdot a_{g}{\otimes} \, b_hr_2)&=\alpha_g(b_hr_21_{g{}^{-1}}){\otimes} r_1\cdot a_{g}=\alpha_g(b_h1_{g{}^{-1}})\alpha_g(r_21_{g{}^{-1}}){\otimes} r_1\cdot a_{g}\\ &=\alpha_g(b_h1_{g{}^{-1}})r_1{\otimes} \alpha_g(r_21_{g{}^{-1}})\cdot a_{g}=r_1\alpha_g(b_h1_{g{}^{-1}}){\otimes} a_{g}r_2.\end{align*} This completes the proof.
$\blacksquare$ \subsection{The map $H^1(G,\alpha^*,{\bf PicS}(R)) \stackrel{\varphi_6}\to H^3(G,\alpha, R)$} Let $f\in Z^1(G,\alpha^*, {\bf PicS}(R))$ and $\Phi_f=f\Phi_0.$ Write $ \Phi _f (g)=[J_g].$ By Lemma \ref{pr2} the map $\Phi_f$ is a partial homomorphism such that $\Phi_f(g)\Phi_f(g{}^{-1})=[D_g].$ Then, there is a family of $R$-$R$-bimodule isomorphisms $$\{\chi _{g,h}\colon J_g{\otimes} J_h\to D_g{\otimes} J_{gh}\}_{g,h\in G}.$$ Consider the following diagram \begin{equation} \label{diag1} \xymatrix{J_g{\otimes} J_h{\otimes} J_l\ar[d]^{ \chi_{g,h}{\otimes}{\rm id}_l}\ar[r]^{{\rm id}_g{\otimes} \chi_{h,l}}& J_g{\otimes} D_{h}{\otimes} J_{hl} \ar[r]^{\kappa_{g,h}{\otimes} {\rm id }_{hl}}&\,\,\,D_{gh}{\otimes} J_g {\otimes} J_{hl}\,\,\ar[r]^{{\rm id}_{D_{gh}}{\otimes} \chi_{g,hl}} &\,\,\,\,D_{gh} {\otimes} D_g{\otimes} J_{ghl}\ar[d]^{{\tau}_{gh,g}{\otimes} {\rm id}_{ghl}} \\ D_g{\otimes} J_{gh}{\otimes} J_l\ar[rrr]^{{\rm id}_{D_g}{\otimes} \chi_{gh,l}}&&&D_{g}{\otimes} D_{gh}{\otimes} J_{ghl}}, \end{equation} for any $g,h,l\in G,$ where $\kappa_{g,h}$ is from Lemma \ref{kapa} and ${\tau}_{gh,g}$ is the twisting $u{\otimes} v\mapsto v{\otimes} u.$ We use diagram \eqref{diag1} to construct a cocycle in $Z^3(G,\alpha,R).$ Let $\tilde\omega(g,h,l)$ be the map obtained making a counterclockwise loop in \eqref{diag1}: $$({\rm id}_{D_g}{\otimes}\chi_{gh,l})\circ(\chi_{g,h}{\otimes}{\rm id}_l)\circ({\rm id}_g{\otimes} \chi_{h,l}){}^{-1}\circ(\kappa_{g,h}{\otimes} {\rm id }_{hl}){}^{-1}\circ({\rm id}_{D_{gh}}{\otimes} \chi_{g,hl}){}^{-1}\circ({\tau}_{gh,g}{\otimes} {\rm id}_{ghl}){}^{-1},$$ for all $g,h,l\in G.$ Evidently, $\tilde\omega(g,h,l)$ is a left $R$-linear automorphism of $D_g{\otimes} D_{gh}{\otimes} J_{ghl}.$ Moreover, from the fact that \begin{equation}\label{acom}(a_g{\otimes} b_{gh}{\otimes} c_{ghl})\cdot(t_g{\otimes} u_{gh}{\otimes} v_{ghl})=a_gt_g{\otimes} b_{gh}u_{gh}{\otimes} c_{ghl}v_{ghl}=a_gb_{gh}c_{ghl}t_g{\otimes} u_{gh}{\otimes} v_{ghl},\end{equation} for all $a_g,t_g\in D_g, b_{gh},u_{gh}\in D_{gh},c_{ghl}\in D_{ghl},$ $v_{ghl}\in J_{ghl},$ and $g,h,l\in G,$ we conclude that $\tilde\omega(g,h,l)$ is an invertible element of $${\rm End}_{D_g{\otimes} D_{gh}{\otimes} D_{ghl}}(D_g{\otimes} D_{gh}{\otimes} J_{ghl})\cong D_g{\otimes} D_{gh}{\otimes} {\rm End}_{D_{ghl}}( J_{ghl})\cong D_g{\otimes} D_{gh}{\otimes} D_{ghl},$$ where the last ring isomorphism follows from Remark \ref{jgfiel}. Thus there is a unique in\-ver\-ti\-ble element $\omega_1(g,h,l)\in {\mathcal U}(D_g{\otimes} D_{gh}{\otimes} D_{ghl})$ such that $\tilde\omega(g,h,l)(z)=\omega_1(g,h,l)z,$ for all $z\in D_g{\otimes} D_{gh}{\otimes} J_{ghl},$ and it follows from (\ref{acom}) that there is a unique $\omega(g,h,l)\in {\mathcal U}(D_gD_{gh}D_{ghl})$ satisfying $$\tilde\omega(g,h,l)z=\omega(g,h,l)z,\,\,\, g,h,l\in G,\,\,\, z\in D_g{\otimes} D_{gh}{\otimes} J_{ghl}. $$ We shall check that $\omega\in Z^3(G,\alpha, R),$ or equivalently \begin{equation}\label{cobordo}(\delta^3\omega)(g,h,l,t)=1_g1_{gh}1_{ghl}1_{ghlt},\,\,\text{ for all}\,\, g,h,l,t\in G,\end{equation} where $\delta^3$ is the coboundary operator given by (\ref{pcob}).
Since $(\delta^3\omega)(g,h,l,t)$ and $1_g1_{gh}1_{ghl}1_{ghlt}$ belong to the $R^\alpha$-module $R,$ equality \eqref{cobordo} holds if and only if for every ${\mathfrak p}\in {\rm Spec}(R^\alpha)$ the image of $( \delta^3 \omega )(g,h,l,t) $ in $(D_gD_{gh}D_{ghl}D_{ghlt})_{\mathfrak p}$ is $(1_g1_{gh}1_{ghl}1_{ghlt})_{\mathfrak p}.$ But if $D_g$ is semi-local for any $g\in G$, Remark \ref{jgdg} implies $J_g=D_gu_g,$ and it follows that $$\chi_{g,h}(a_gu_g{\otimes} b_h u_h)=a_g\alpha_g(b_h1_{g{}^{-1}})\chi_{g,h}(u_g{\otimes} u_h)=a_g\alpha_g(b_h1_{g{}^{-1}})\tilde\rho(g,h) (1_g{\otimes} u_{gh} ),$$ with $\tilde\rho(g,h)\in {\mathcal U}(D_g{\otimes} D_{gh}).$ One can write $\tilde\rho(g,h)=1_g{\otimes} \rho(g,h),$ where $\rho(g,h)$ belongs to ${\mathcal U}(D_gD_{gh}).$ In particular, we have a map $\rho\in C^2(G,\alpha, R)$ and \begin{equation*}\label{coci}\chi_{g,h}(a_gu_g{\otimes} b_h { u_h})=a_g\alpha_g(b_h1_{g{}^{-1}})\rho(g,h)1_g{\otimes} u_{gh}.\end{equation*} We conclude that $\chi{}^{-1}_{g,h}(1_g{\otimes} u_{gh})=\rho(g,h){}^{-1} u_g{\otimes} u_h,$ where $\rho(g,h){}^{-1}$ is the inverse of $\rho(g,h)$ in $D_gD_{gh}.$ Now we apply $\omega(g,h,l)$ to $1_{g}{\otimes} 1_{gh}{\otimes} u_{ghl}.$ \begin{align*} 1_{g}{\otimes} 1_{gh}{\otimes} u_{ghl}&\mapsto 1_{gh}{\otimes} 1_{g}{\otimes} u_{ghl} \mapsto 1_{gh}{\otimes} \rho{}^{-1}(g,hl)u_g{\otimes} u_{hl} \mapsto \rho(g,hl){}^{-1} u_g{\otimes} 1_h1_{g{}^{-1}}{\otimes} u_{hl}\\ &\stackrel{\eqref{jgtwits}}=\rho(g,hl){}^{-1} u_g{\otimes} 1_h{\otimes} u_{hl} \mapsto\rho(g,hl){}^{-1} \alpha_g(\rho(h,l){}^{-1}1_{g{}^{-1}})u_g{\otimes} u_h{\otimes} u_{l}\\ & \mapsto\rho(g,hl){}^{-1} \alpha_g(\rho(h,l){}^{-1}1_{g{}^{-1}})\rho(g,h)1_g{\otimes} u_{gh}{\otimes} u_{l}\\ & \mapsto\rho(g,hl){}^{-1} \alpha_g(\rho(h,l){}^{-1}1_{g{}^{-1}})\rho(g,h)\rho(gh,l)1_g{\otimes} 1_{gh}{\otimes} u_{ghl}. \end{align*} Thus, \begin{equation*}\label{3cocl}\omega(g,h,l)(1_g{\otimes} 1_{gh}{\otimes} u_{ghl})=(\delta^2 { \rho {}^{-1} } )(g,h,l)(1_g{\otimes} 1_{gh}{\otimes} u_{ghl}), \, g,h,l\in G.\end{equation*} Hence, $\omega(g,h,l)=(\delta^2 { \rho {}^{-1} } )(g,h,l)$ and it follows from Proposition \ref{pcobh} that $$(\delta^3\omega)(g,h,l,t)=1_g1_{gh}1_{ghl}1_{ghlt}.$$ This yields $\omega\in Z^3(G,\alpha,R).$ \begin{claim} The map $\varphi_6 \colon H^1(G,\alpha^*,{\bf PicS}(R))\ni {\rm cls}(f)\to {\rm cls}(\omega) \in H^3(G,\alpha,R)$ is well defined. \end{claim} {\bf Proof. } If one takes another family $\{\chi'_{g,h}\colon J_g{\otimes} J_h\to D_g{\otimes} J_{gh}\}_{g,h\in G}$ of $R$-$R$-bimodule isomorphisms, the map $\chi'_{g,h}\circ \chi{}^{-1}_{g,h}$ is an invertible element of ${\rm End}_{D_g{\otimes} D_{gh}}(D_g{\otimes} J_{gh}),$ and there exists $\sigma(g,h)\in {\mathcal U}(D_gD_{gh})$ such that $\chi'_{g,h}\circ \chi{}^{-1}_{g,h}(z)=\sigma(g,h)z,\,\, \text{ for all}\,\, z\in D_g{\otimes} J_{gh}.$ Thus, $\sigma\in C^2(G,\alpha, R),\, \chi_{g,h}=\sigma(g,h)\chi'_{g,h},$ and setting $\tilde\omega'(g,h,l)=({\rm id}_{D_g}{\otimes} \chi'_{gh,l})\circ(\chi'_{g,h}{\otimes}{\rm id}_l)\circ({\rm id}_g{\otimes} \chi'_{h,l}){}^{-1}\circ(\kappa_{g,h}{\otimes} {\rm id }_{hl}){}^{-1}\circ({\rm id}_{D_{gh}}{\otimes} \chi'_{g,hl}){}^{-1}\circ({\tau}_{gh,g}{\otimes} {\rm id}_{ghl}){}^{-1},$ we see, after localizing by ideals in ${\rm Spec}(R^\alpha)$, that \begin{equation*}\label{prodfam}\omega'(g,h,l)=(\delta^2\sigma{}^{-1})(g,h,l)\omega(g,h,l).\end{equation*} This implies that ${\rm cls}(\omega)={\rm cls}(\omega')$ in $H^3(G,\alpha,R).$ On the other hand, taking another representative $J'_g\in [J_g],$ for any $g\in G,$ we have families of $R$-$R$-bimodule isomorphisms $\{\chi'_{g,h}\colon J'_g{\otimes} J'_h\to D_g{\otimes} J'_{gh}\}_{g,h\in G}$ and $\{\zeta_g\colon J_g\to J'_g\}_{g\in G}.$ Let $\chi''_{g,h}= ({\rm id}_{D_g}{\otimes} \zeta_{gh})\circ\chi_{g,h}\circ (\zeta{}^{-1}_g{\otimes} \zeta{}^{-1}_h),\,\, g,h\in G.$ Thus, if $\omega'$ and $\omega''$ are the corresponding cocycles in $Z^3(G,\alpha,R)$ induced by the families $\{\chi'_{g,h}\colon J'_g{\otimes} J'_h\to D_g{\otimes} J'_{gh}\}_{g,h\in G}$ and $\{\chi''_{g,h}\colon J'_g{\otimes} J'_h\to D_g{\otimes} J'_{gh}\}_{g,h\in G}$ respectively, by the above we have ${\rm cls}(\omega')={\rm cls}(\omega'')$ in $H^3(G,\alpha,R).$ We shall prove that $\omega=\omega''.$ By localization we may assume that each $D_g,$ $g\in G,$ is a semi-local ring. Then, by Remark \ref{jgdg} there is $u_g\in J_g$ such that $J_g=D_gu_g$ and $J'_g=D_gu'_g,$ where $\zeta_g(u_g)=u'_g.$ Hence, the equality ${ \chi _{g,h} } (au_g{\otimes} bu_h)=a\alpha_g(b1_{g{}^{-1}})\rho(g,h)1_g{\otimes} u_{gh},$ for all $g,h\in G,$ and the definition of $\chi''_{g,h}$ imply $$\chi''_{g,h}(au'_g{\otimes} bu'_h)=a\alpha_g(b1_{g{}^{-1}})\rho(g,h)1_g{\otimes} u'_{gh},$$ and using the construction of $\omega$ and $\omega''$ we get $\omega=\omega''.$ Finally, if ${\rm cls}(f)={\rm cls}(f')\in H^1(G,\alpha^*,{\bf PicS}(R)),$ there exists $f_0\in B^1(G,\alpha^*,{\bf PicS}(R)) $ such that $f'=f_0f,$ and $[P]\in{\bf Pic}(R)$ with $f_0(g)=[P]\alpha^*_g([P^*][D_{g{}^{-1}}]),$ for all $g\in G.$ Hence, \begin{align*}\Phi_{f'}(g)= f'(g)\Phi_0(g)&=f_0(g)f(g)\Phi_0(g)\\ &=[P]\alpha^*_g([P^*][D_{g{}^{-1}}])f(g)\Phi_0(g)\\ &\stackrel{(\ref{fiab}),(\ref{isoa})}=[P]\Phi_0(g)[P^*]\underbrace{\Phi_0(g{}^{-1})f(g)\Phi_0(g)}_{\in\, {\rm PicS}(R)}\\ &=[P]\Phi_0(g)\Phi_0(g{}^{-1})f(g)\Phi_0(g)[P^*]\\ &\stackrel{(\ref{dg})}=[P]f(g)\Phi_0(g)[P^*]\\ &=[P]\Phi_f(g)[P^*].
\end{align*} Set $\Phi_f(g)=[J_g].$ Let $\{\chi _{g,h}\colon J_g{\otimes} J_h\to D_g{\otimes} J_{gh}\}_{g,h\in G}$ be a family of $R$-$R$-bimodule isomorphisms which come from $\Phi_f $, and $\omega\in Z^3(G,\alpha,R)$ determined by the $ \chi _{g,h}.$ Identifying $(P{\otimes} J_g{\otimes} P^*){\otimes} (P{\otimes} J_h{\otimes} P^*)\cong P{\otimes} J_g{\otimes} J_h{\otimes} P^*$, we choose the family of $R$-$R$-bimodule isomorphisms $$\{{\chi}'_{g,h}\colon P{\otimes} J_g{\otimes} J_h{\otimes} P^*\to P{\otimes} D_g{\otimes} J_{gh}{\otimes} P^*\}_{g,h\in G},$$ where ${\chi}'_{g,h}={\rm id}_P{\otimes} \chi _{g,h}{\otimes}{\rm id}_{P^*}\colon P{\otimes} J_g{\otimes} J_h{\otimes} P^*\to P{\otimes} D_g{\otimes} J_{gh}{\otimes} P^*,$ for all $g,h\in G.$ The isomorphisms $\{{\chi}'_{g,h}\}$ correspond to $\Phi_{f'}(g).$ Therefore, if $\omega'\in Z^3(G,\alpha,R)$ is induced by the family $\{{\chi}'_{g,h}\}_{g,h\in G},$ we get $$\omega'(g,h,l)\!=\!{\rm id}_P{\otimes} \omega(g,h,l){\otimes} {\rm id}_{P^*}\!\!\in\!{\rm End}_{R}(P){\otimes} {\rm End}_{ D_g{\otimes} D_{gh}{\otimes} D_{ghl}}\!(\! D_g{\otimes} D_{gh}{\otimes} J_{ghl})\!{\otimes} {\rm End}_{R}(P^*).$$ Finally, since the $R$-algebra isomorphisms ${\rm End}_{R}(P)\cong R\cong {\rm End}_{R}(P^*)$ send ${\rm id}_{P} $ and ${\rm id}_{P^*}$ to $1_R,$ we obtain that $\omega'$ coincides with $\omega.$ This shows that $\varphi_6$ is well defined. $\blacksquare$ \begin{teo} $\varphi_6\colon H^1(G,\alpha^*, {\bf PicS}(R))\to H^3(G,\alpha, R)$ is a group homomorphism. \end{teo} {\bf Proof. } Let $f,f'\in Z^1(G,\alpha^*, {\bf PicS}(R)).$ Write $\Phi _f (g)=[J_g]$ and $\Phi _{f'}(g)=[J'_g].$ Notice that $ f(g)=f(g)[D_g]=f(g)\Phi_0(g)\Phi_0(g{}^{-1})=[J_g{\otimes} {}_{g{}^{-1}}(D_g)_{I}]. $ Then $\Phi_{f f'} (g)=[J_g{\otimes} {}_{g{}^{-1}}(D_g)_{I}{\otimes} J'_g],$ for all $g\in G, $ and there are $R$-$R$-bimodule isomorphisms $$\{F_{g,h}\colon T_g{\otimes} T_h\to D_g{\otimes} T_{gh}\}_{g,h\in G},$$ where $T_g=J_g{\otimes}{}_{g{}^{-1}}(D_g)_{I}{\otimes} J'_g,$ $g\in G.$ We shall make a specific choice of the $F_{g,h}.$ Notice first that \begin{align*}T_g{\otimes} T_h& \stackrel{(\ref{jgdgi})}{\cong} (J_g{\otimes} {}_{g{}^{-1}}(D_g)_{I}{\otimes} J'_g {\otimes} D_{g{}^{-1}}){\otimes}[\underbrace{(J_h{\otimes} {}_{h{}^{-1}}(D_h)_{I})}_{\in {\bf PicS}(R)}{\otimes} J'_h] \\ &\cong (J_g{\otimes} {}_{g{}^{-1}}(D_g)_{I}{\otimes} J'_g {\otimes} D_{g{}^{-1}}){\otimes} [(J_h{\otimes} {}_{h{}^{-1}}(D_h)_{I}){\otimes} D_{g{}^{-1}}{\otimes} J'_h]. \end{align*} Moreover, ${}_{g{}^{-1}}(D_g)_{I} {\otimes} {}_{g}(D_{g{}^{-1}})_{I} \cong D_{g{}^{-1}}$ by (\ref{dg}), and we see that $T_g{\otimes} T_h$ is isomorphic to \begin{align*} &[(J_g{\otimes} {}_{g{}^{-1}}(D_g)_{I}{\otimes} \underbrace{(J'_g{\otimes} {}_{g{}^{-1}}(D_g)_{I}}_{\in\, {\bf PicS}(R)})] {\otimes} [\underbrace{{}_{g}(D_{g{}^{-1}})_{I}{\otimes} (J_h{\otimes} {}_{h{}^{-1}}(D_h)_I){\otimes} _{g{}^{-1}}(D_g)_{I}}_{\in \,{\bf PicS}(R)}] &\!\!\!{\otimes} _{g}(D_{g{}^{-1}})_{I}{\otimes} J'_h\end{align*} as $R$-$R$-bimodules.
Furthermore, since the elements in ${\bf PicS}(R)$ commute, there are $R$-$R$-bimodule isomorphisms \begin{align*} T_g{\otimes} T_h&\cong ( J_g{\otimes} {}_{g{}^{-1}}(D_g)_{I}){\otimes} [{}_{g}(D_{g{}^{-1}})_{I}{\otimes} (J_h{\otimes} {}_{h{}^{-1}}(D_h)_I){\otimes} {}_{g{}^{-1}}(D_g)_{I}]{\otimes}\\ &\,\,\,\,\,\,\,\,( J'_g{\otimes}{}_{g{}^{-1}}(D_g)_{I}){\otimes} {}_{g}(D_{g{}^{-1}})_{I}{\otimes} J'_h\\ &\cong J_g{\otimes} [{}_{g{}^{-1}}(D_g)_{I}{\otimes} {}_{g}(D_{g{}^{-1}})_{I}]{\otimes} [J_h{\otimes} {}_{h{}^{-1}}(D_h)_I {\otimes} {}_{g{}^{-1}}(D_g)_{I}]{\otimes} J'_g \\ &\,\,\,\,\,\,\,\, {\otimes} [{}_{g{}^{-1}}(D_g)_{I} {\otimes} {}_{g}(D_{g{}^{-1}})_{I}]{\otimes} J'_h\\ &\stackrel{(\ref{dg})}\cong ( J_g{\otimes} D_{g{}^{-1}}){\otimes} J_h{\otimes}{}_{h{}^{-1}}(D_h)_I {\otimes} {}_{g{}^{-1}}(D_g)_{I}{\otimes} (J'_g {\otimes} D_{g{}^{-1}}){\otimes} J'_h\\ & \stackrel{(\ref{jgdgi})}{\cong} ( J_g{\otimes} J_h){\otimes} [{}_{h{}^{-1}}(D_h)_I{\otimes} {}_{g{}^{-1}}(D_g)_{I}] {\otimes} (J'_g {\otimes} J'_h).\end{align*} Now, applying $\chi _{ g, h },$ ${\chi }' _{ g, h} $ and Lemma~\ref{isovr}, we get \begin{align*} T_{g}{\otimes} T_h&\cong D_g{\otimes} J_{gh}{\otimes} {}_{h{}^{-1}}(D_h)_I{\otimes} \underbrace{ ({}_{g{}^{-1}}(D_g)_{I}{\otimes} D_g)}{\otimes} J'_{gh}\\ &\cong D_g{\otimes} J_{gh}{\otimes} [{}_{h{}^{-1}}(D_h)_I{\otimes} {}_{g{}^{-1}}(D_g)_{I}]{\otimes} J'_{gh}\\ & {\cong} D_g{\otimes} J_{gh}{\otimes} (D_{h{}^{-1}}{\otimes} {}_{h{}^{-1} g{}^{-1}}(D_{gh})_{I}){\otimes} J'_{gh}\\ & \cong D_g{\otimes} (J_{gh}{\otimes} D_{h{}^{-1}}){\otimes} {}_{h{}^{-1} g{}^{-1}}(D_{gh})_{I} {\otimes} J'_{gh}\\ & \stackrel{(\ref{kappa})}{\cong} D_g{\otimes} D_g{\otimes} J_{gh}{\otimes} {}_{h{}^{-1} g{}^{-1}}(D_{gh})_{I}{\otimes} J'_{gh}\\ &\cong D_g{\otimes} J_{gh}{\otimes} {}_{h{}^{-1} g{}^{-1}}(D_{gh})_{I}{\otimes} J'_{gh} = D_g {\otimes} T_{gh}, \end{align*} and we pick the family $\{F_{g,h}\}_{g,h\in G}$ as the composition of the isomorphisms constructed above. By direct verification we obtain the following: \begin{claim} The values of $F_{g,h}$ are given by \begin{equation}\label{F} F_{g,h}((x_g{\otimes} d_g{\otimes} x'_g){\otimes} (x_h{\otimes} d_h{\otimes} x'_h))=\chi_{g,h}(x_g{\otimes} x_h){\otimes} d_g\alpha_g(d_h1_{g{}^{-1}}) \chi'_{g,h}(x'_g{\otimes} x'_h), \end{equation} for any $(x_g{\otimes} d_g{\otimes} x'_g){\otimes} (x_h{\otimes} d_h{\otimes} x'_h)\in T_g{\otimes} T_h,\, g,h\in G,$ where $ d_g\alpha_g(d_h1_{g{}^{-1}}) \chi'_{g,h}(x'_g{\otimes} x'_h)$ is considered in ${}_{h{}^{-1} g{}^{-1}}(D_{gh})_{I}{\otimes} J'_{gh},$ and is given by $\sum\limits_{i} d_g\alpha_g(d_h1_{g{}^{-1}})e'_{g,i}{\otimes} v'_{gh,i},$ in which ${ \chi ' _{g,h} } (x'_g{\otimes} x'_h)=\sum_{i}e'_{g,i}{\otimes} v'_{gh,i}.$ \end{claim} We shall also need the following. \begin{claim} The inverse of $F_{g,h}$ is given by \begin{equation}\label{Finv} d_g{\otimes} x_{gh}{\otimes} d_{gh}{\otimes} x'_{gh}\mapsto\sum_{i,j} (y_{g,i}{\otimes} 1_{g}{\otimes} y'_{g,j}){\otimes} ( z_{h,i}{\otimes} \alpha_{g{}^{-1}}(d_{gh}1_g) {\otimes} z'_{h,j}), \,\,g,h\in G, \end{equation} where $\chi_{g,h}(\sum_iy_{g,i}{\otimes} z_{h,i})=d_g{\otimes} x_{gh}$ and $\chi'_{g,h}(\sum_jy'_{g,j}{\otimes} z'_{h,j})=1_g{\otimes} x'_{gh},$ $g,h\in G.$ \end{claim} Let $V_{g,h}$ be the map defined by \eqref{Finv}.
Then, \begin{align*} d_g{\otimes} x_{gh}{\otimes} d_{gh}{\otimes} x'_{gh} &\stackrel{V_{g,h}}{\mapsto} \sum_{i,j} (y_{g,i}{\otimes} 1_{g}{\otimes} y'_{g,j}){\otimes} ( z_{h,i}{\otimes} \alpha_{g{}^{-1}}(d_{gh}1_g) {\otimes} z'_{h,j})\\ &\stackrel{F_{g,h}}{\mapsto} \sum_{i,j} [ \chi_{g,h}(y_{g,i}{\otimes} z_{h,i}){\otimes} d_{gh}1_g \chi'_{g,h}(y'_{g,j}{\otimes} z'_{h,j})]\\ &=(\sum_{i}\chi_{g,h}(y_{g,i}{\otimes} z_{h,i})){\otimes} d_{gh}1_g \sum_{j} \chi'_{g,h}(y'_{g,j}{\otimes} z'_{h,j})\\ &=d_g{\otimes} x_{gh}{\otimes} d_{gh}1_g{\otimes} x'_{gh}\\ &=d_g{\otimes} x_{gh}{\otimes} 1_{h{}^{-1}}\bullet d_{gh}{\otimes} x'_{gh}\\ &=d_g{\otimes} 1_gx_{gh}{\otimes} d_{gh}{\otimes} x'_{gh}\\ &=d_g{\otimes} x_{gh}{\otimes} d_{gh}{\otimes} x'_{gh}, \end{align*} and since $F_{g,h}$ is invertible, we conclude that $V_{g,h} =F{}^{-1}_{g,h}$ for all $g,h\in G.$ The automorphism $\tilde\omega_F$ induced by the family $\{F_{g,h}\}_{g,h\in G}$ is $\tilde\omega_F(g,h,l)=({\rm id}_{D_g}{\otimes} F_{gh,l})\circ(F_{g,h}{\otimes}{\rm id}_l)\circ({\rm id}_g{\otimes} F_{h,l}){}^{-1}\circ(\kappa_{g,h}{\otimes} {\rm id }_{hl}){}^{-1}\circ({\rm id}_{D_{gh}}{\otimes} F_{g,hl}){}^{-1}\circ({\tau}_{gh,g}{\otimes} {\rm id}_{ghl}){}^{-1}.$ Hence, for any $u_{ghl}= 1_{g}{\otimes}1_{gh}{\otimes} x_{ghl}{\otimes} d_{ghl} {\otimes} x'_{ghl}\in {\rm dom}\,\tilde\omega_F(g,h,l),$ there is a unique $\omega_F(g,h,l)\in$\linebreak$ {\mathcal U}(D_gD_{gh}D_{ghl}),$$\, g,h,l\in G$, such that $\tilde\omega_F(g,h,l) ( u_{ghl} ) =\omega_F(g,h,l)u_{ghl}$. \begin{claim} $\omega_F=\omega\omega',$ or equivalently $\tilde\omega_F(g,h,l) (u_{ghl}) = \omega\omega'(g,h,l)u_{ghl},$ for all $g,h,l\in G.$ \end{claim} First, we calculate the value $\tilde\omega(g,h,l)(1_g{\otimes}1_{gh}{\otimes} x_{ghl}),$ where $x_{ghl}\in J_{ghl}.$ For this we denote $$\chi{}^{-1}_{g,hl}(1_g{\otimes} x_{ghl})=\sum_iy_{g,i}{\otimes} z_{hl,i},\,\,\,\,\,\,\,\,\chi{}^{-1}_{h,l}(1_h{\otimes} z_{hl,i}) =\sum_m u_{h,m}^{(i)}{\otimes} v_{l,m}^{(i)}$$ and $$\chi_{g,h}(y_{g,i}{\otimes} u_{h,m}^{(i)})=\sum\limits_ks_{g,k}^{(i,m)}{\otimes} t_{gh,k}^{(i,m)},\,\,\,\,\,\,\,\, \chi_{gh,l} ( t_{gh,k}^{(i,m)}{\otimes} v_{l,m}^{(i)} )= \sum_p c_{gh,p}^{(i,m,k)}{\otimes} e_{ghl,p} ^{(i,m,k)}.$$ Then, \begin{align*} 1_g{\otimes} 1_{gh}{\otimes} x_{ghl}&\mapsto 1_{gh}{\otimes} 1_{g}{\otimes} x_{ghl} \mapsto 1_{gh}{\otimes} \chi{}^{-1}_{g,hl}(1_g{\otimes} x_{ghl})\\ &=\sum\limits_i (1_{gh}{\otimes} y_{g,i}{\otimes} z_{hl,i}) \mapsto \sum\limits_i (y_{g,i} {\otimes} 1_{h} {\otimes} z_{hl,i})\\ &\mapsto \sum\limits_i [y_{g,i} {\otimes} \chi{}^{-1}_{h,l}(1_{h} {\otimes} z_{hl,i})] = \sum\limits_{i,m} [y_{g,i} {\otimes}(u_{h,m}^{(i)}{\otimes} v_{l,m}^{(i)})]\end{align*} \begin{align*} &\mapsto \sum\limits_{i,m} [\chi _{g,h}(y_{g,i} {\otimes} u_{h,m}^{(i)}){\otimes} v_{l,m}^{(i)})] = \sum\limits_{i,m,k} [(s_{g,k}^{(i,m)}{\otimes} t_{gh,k}^{(i,m)}){\otimes} v_{l,m}^{(i)})]\\ &\mapsto \sum\limits_{i,m,k} [s_{g,k}^{(i,m)}{\otimes} \chi _{gh,l}( t_{gh,k}^{(i,m)}{\otimes} v_{l,m}^{(i)})]\\ &= \sum\limits_{i,m,k,p} [s_{g,k}^{(i,m)}{\otimes} (c_{gh,p}^{(i,m,k)}{\otimes} { e_{ghl,p}^{(i,m,k)} } )]\\ &= 1_g{\otimes} 1_{gh}{\otimes} \sum\limits_{i,m,k,p} (s_{g,k}^{(i,m)}c_{gh,p}^{(i,m,k)} { e_{ghl,p}^{(i,m,k)} } ).
\end{align*} Since $\tilde\omega(g,h,l)(1_g{\otimes}1_{gh}{\otimes} x_{ghl})=\omega(g,h,l)(1_g{\otimes}1_{gh}{\otimes} x_{ghl}),$ the uniqueness of $\omega(g,h,l)$ implies \begin{equation} \label{omega} 1_g{\otimes} 1_{gh}{\otimes} \sum\limits_{i,m,k,p} (s_{g,k}^{(i,m)}c_{gh,p}^{(i,m,k)} { e_{ghl,p}^{(i,m,k)} } )=1_g{\otimes}1_{gh}{\otimes} \omega(g,h,l)x_{ghl},\end{equation} for all $g,h,l \in G.$ Analogously, denoting $$\chi'{}^{-1}_{g,hl}(1_g{\otimes} {x'}_{ghl})=\sum\limits_j{y'}_{g,j}{\otimes} z'_{hl,j},\,\,\,\,\,\,\,\,\chi'{}^{-1}_{h,l}(1_h{\otimes} z'_{hl,j}) = \sum\limits_n {u'}_{h,n}^{(j)}{\otimes} {v'}_{l,n}^{(j)}$$ and $$\chi'_{g,h}({y'}_{g,j}{\otimes} {u'}_{h,n}^{(j)})=\sum\limits_{k'}{s'}_{g,{k'}}^{(j,n)}{\otimes} {t'}_{gh,{k'}}^{(j,n)},\,\,\,\,\,\,\,\, \chi'_{gh,l}({ t'}_{gh,{k'}}^{(j,n)}{\otimes} {v'}_{l,n}^{(j)} )= \sum_{p'} {c'}_{gh,p'}^{(j,n,{k'})}{\otimes} {e'}_{ghl,{p'}} ^{(j,n,{k'})},$$ we obtain $ 1_g{\otimes} 1_{gh}{\otimes}\sum\limits_{j,n,{k'},{p'}} { {s'}_{g, {k'} }^{(j,n)} } {c'}_{gh,{p'}}^{(j,n,{k'})} {e'}_{ghl,{p'}}^{(j,n,k')}= 1_g{\otimes} 1_{gh}{\otimes} \omega'(g,h,l)x'_{ghl},$ which implies \begin{equation} \label{omegal} \sum\limits_{j,n,{k'},{p'}} {s'}_{g,k'}^{(j,n)}{c'}_{gh,{p'}}^{(j,n,{k'})} {e'}_{ghl,{p'}} ^{(j,n,k')}= \omega'(g,h,l)x'_{ghl}. \end{equation} Now we use (\ref{F}), (\ref{Finv}) and diagram (\ref{diag1}) to calculate $\tilde\omega_F(g,h,l)u_{ghl}.$ We have that \begin{align*} u_{ghl}&\mapsto 1_{gh}{\otimes}1_{g}{\otimes} x_{ghl}{\otimes} d_{ghl} {\otimes} x'_{ghl} \mapsto 1_{gh}{\otimes} F{}^{-1}_{g,hl}(1_{g}{\otimes} x_{ghl}{\otimes} d_{ghl} {\otimes} x'_{ghl})\\ &=\sum_{i,j} 1_{gh}{\otimes}(y_{g,i}{\otimes} 1_g{\otimes} y'_{g,j}){\otimes}(z_{hl,i}{\otimes}\alpha_{g{}^{-1}}(d_{ghl}1_g){\otimes} z'_{hl,j})\\ &\mapsto\sum_{i,j} \kappa{}^{-1}_{g,h}(1_{gh}{\otimes}(y_{g,i}{\otimes} 1_g{\otimes} y'_{g,j})){\otimes}(z_{hl,i}{\otimes}\alpha_{g{}^{-1}}(d_{ghl}1_g){\otimes} z'_{hl,j}) \\ &= \sum_{i,j} \underbrace{ y_{g,i}{\otimes} 1_g{\otimes} y'_{g,j} }_{\in T_g}{\otimes} 1_{h} 1_{g{}^{-1}}{\otimes}(z_{hl,i}{\otimes}\alpha_{g{}^{-1}}(d_{ghl}1_g){\otimes} z'_{hl,j})\\ &= \sum_{i,j} y_{g,i}{\otimes} 1_g{\otimes} y'_{g,j}{\otimes} 1_{h} {\otimes}(z_{hl,i}{\otimes}\alpha_{g{}^{-1}}(d_{ghl}1_g){\otimes} z'_{hl,j})\\ &\mapsto\sum_{i,j} (y_{g,i}{\otimes} 1_g{\otimes} y'_{g,j}){\otimes} F{}^{-1}_{h,l}[1_{h} {\otimes} (z_{hl,i} {\otimes} \alpha_{g{}^{-1}}(d_{ghl}1_g){\otimes} z'_{hl,j}) ]\\ &= \sum_{i,j,m,n}(y_{g,i}{\otimes} 1_g{\otimes} y'_{g,j}){\otimes} [(u_{h,m}^{(i)}{\otimes} 1_h{\otimes} {u'}_{h,n}^{(j)}){\otimes}(v_{l,m}^{(i)}{\otimes}\alpha_{h{}^{-1}}(\alpha_{g{}^{-1}}(d_{ghl}1_g)1_h){\otimes} {v'}_{l,n}^{(j)})]\end{align*} \begin{align*} &\mapsto\sum_{i,j\atop m,n}F_{g,h}[(y_{g,i}{\otimes} 1_g{\otimes} y'_{g,j}){\otimes} (u_{h,m}^{(i)}{\otimes} 1_h{\otimes} {u'}_{h,n}^{(j)})]{\otimes}(v_{l,m}^{(i)}{\otimes}\alpha_{h{}^{-1}}(\alpha_{g{}^{-1}}(d_{ghl}1_g)1_h){\otimes} {v'}_{l,n}^{(j)})]\\ &= \sum_{i,j\atop m,n} \chi _{g,h}(y_{g,i}{\otimes} u_{h,m}^{(i)}){\otimes} \underbrace{1_g} 1_{gh} \chi '_{g,h}(y'_{g,j}{\otimes} {u'}_{h,n}^{(j)}) {\otimes} (v_{l,m}^{(i)}{\otimes}\alpha_{h{}^{-1}}(\alpha_{g{}^{-1}}(d_{ghl}1_g)1_h){\otimes} {v'}_{l,n}^{(j)})\\ &= \sum_{i,j\atop m,n} \chi _{g,h}(y_{g,i}{\otimes} u_{h,m}^{(i)}){\otimes} 1_{gh} \chi '_{g,h}(y'_{g,j}{\otimes} {u'}_{h,n}^{(j)}) {\otimes}
(v_{l,m}^{(i)}{\otimes}\alpha_{h{}^{-1}}(\alpha_{g{}^{-1}}(d_{ghl}1_g)1_h){\otimes} {v'}_{l,n}^{(j)})\\ &= \sum_{i,j\atop m,n} \chi _{g,h}(y_{g,i}{\otimes} u_{h,m}^{(i)}){\otimes} 1_{gh} \chi '_{g,h}(y'_{g,j}{\otimes} {u'}_{h,n}^{(j)}){\otimes} (v_{l,m}^{(i)}{\otimes}\underbrace{\alpha_{ (gh){}^{-1}}(d_{ghl}1_{gh})}_{\in\,\,{}_{l{}^{-1}}(D_l)_{I} }1_{h{}^{-1}}{\otimes} {v'}_{l,n}^{(j)})\\ &= \sum_{i,j\atop m,n} \chi_{g,h}(y_{g,i}{\otimes} u_{h,m}^{(i)}){\otimes} 1_{gh} \chi'_{g,h}(y'_{g,j}{\otimes} {u'}_{h,n}^{(j)}){\otimes} (\underbrace{v_{l,m}^{(i)}}_{\in\,\,J_l }1_{l{}^{-1} h{}^{-1}}{\otimes}\alpha_{ (gh){}^{-1}}(d_{ghl}1_{gh}){\otimes} {v'}_{l,n}^{(j)})\\ &=\sum_{i,j\atop m,n} \chi_{g,h}(y_{g,i}{\otimes} u_{h,m}^{(i)}){\otimes} 1_{gh} \chi'_{g,h}(y'_{g,j}{\otimes} {u'}_{h,n}^{(j)}){\otimes} (v_{l,m}^{(i)}{\otimes}\alpha_{ (gh){}^{-1}}(d_{ghl}1_{gh}){\otimes} {v'}_{l,n}^{(j)})\\ &=\sum_{i,j,k\atop n,m,{k'}} [ (s_{g,k}^{(i,m)}{\otimes} t_{gh,k}^{(i,m)}){\otimes} ( 1_{gh} {s'}_{g,{k'}}^{(j,n)}{\otimes} {t'}_{gh,k'}^{(j,n)}){\otimes} (v_{l,m}^{(i)}{\otimes}\alpha_{ (gh){}^{-1}}(d_{ghl}1_{gh}){\otimes} {v'}_{l,n}^{(j)})]\\ &\mapsto \sum_{i,j,k\atop n,m,{k'}} s_{g,k}^{(i,m)}{\otimes} F_{gh, l} [( t_{gh,k}^{(i,m)}{\otimes} 1_{gh} {s'}_{g,{k'}}^{(j,n)}{\otimes} {t'}_{gh,k'}^{(j,n)}){\otimes} (v_{l,m}^{(i)}{\otimes}\alpha_{ (gh){}^{-1}}(d_{ghl}1_{gh}){\otimes} {v'}_{l,n}^{(j)})] \\ &= \sum_{i,j,k\atop n,m,{k'}} s_{g,k}^{(i,m)}{\otimes} \chi_{gh,l} (t_{gh,k}^{(i,m)}{\otimes} v_{l,m}^{(i)}){\otimes} {s'}_{g,{k'}}^{(j,n)}\alpha_{gh}(\alpha_{ (gh){}^{-1}}(d_{ghl}1_{gh}))\chi'_{gh,l}( {t'}_{gh,k'}^{(j,n)}{\otimes} {v'}_{l,n}^{(j)})\\ &= \sum_{i,j,k\atop n,m,{k'}} s_{g,k}^{(i,m)}{\otimes} \chi_{gh,l} (t_{gh,k}^{(i,m)}{\otimes} v_{l,m}^{(i)}){\otimes} {s'}_{g,{k'}}^{(j,n)}d_{ghl}\chi'_{gh,l}( {t'}_{gh,k'}^{(j,n)}{\otimes} {v'}_{l,n}^{(j)})\\ & = \sum_{i,m,k} s_{g,k}^{(i,m)}{\otimes} \chi_{gh,l} (t_{gh,k}^{(i,m)}{\otimes} v_{l,m}^{(i)}){\otimes} d_{ghl}\sum_{j,n ,{k'}}{s'}_{g,{k'}}^{(j,n)}\chi'_{gh,l}( {t'}_{gh,k'}^{(j,n)}{\otimes} {v'}_{l,n}^{(j)})\\ & = 1_g{\otimes} 1_{gh}{\otimes} \sum\limits_{i,m,k,p} (s_{g,k}^{(i,m)}c_{gh,p}^{(i,m,k)} e_{ghl,p}^{(i,m,k)}){\otimes} d_{ghl}{\otimes} \sum\limits_{j,n,k',p'} ({s'}_{g,{k'}}^{(j,n)}{c'}_{gh,{p'}}^{(j,n,k')} {e'}_{ghl,{p'}} ^{(j,n,k')})\\ &\stackrel{(\ref{omega},\ref{omegal})}= 1_g{\otimes} 1_{gh}{\otimes} \omega(g,h,l)x_{ghl}{\otimes} d_{ghl}{\otimes} \omega'(g,h,l)x'_{ghl}\\ &=\omega(g,h,l) \omega'(g,h,l) (1_g{\otimes} 1_{gh}{\otimes} x_{ghl}{\otimes} d_{ghl}{\otimes} x'_{ghl})\\ &=\omega(g,h,l) \omega'(g,h,l) u_{ghl}. \end{align*} Therefore, $\varphi_6$ is a group homomorphism. $\blacksquare$ \begin{thebibliography}{9} \bibitem{AraE1} P.\ Ara, R.\ Exel, Dynamical systems associated to separated graphs, graph algebras, and paradoxical decompositions, {\it Advances Math.}, {\bf 252} (2014), 748--804. \bibitem{AG} M. Auslander, O. Goldman, The Brauer group of a commutative ring, {\it Trans. Amer. Math. Soc.} {\bf 97} (1960) 367--409. \bibitem{BP} D.\ Bagio, A.\ Paques, Partial groupoid actions: globalization, Morita theory and Galois theory, {\it Comm. Algebra}, {\bf 40}, (2012), 3658--3678. \bibitem{BeuGon2} V.\ Beuter, D.\ Gon\c calves, The interplay between Steinberg algebras and partial skew rings, arXiv:1706.00127 (2017). \bibitem{Birget} J.-C. Birget, The Groups of Richard Thompson and Complexity, {\it Internat. J. Algebra Comput.}, {\bf 14} (2004), Nos. 5\& 6, 569--626.
\bibitem{CaenDGr} S. Caenepeel and E. D. Groot, Galois corings applied to partial Galois theory, {\it Proc. ICMA-2004, Kuwait Univ.} (2005), 117--134. \bibitem{CaenJan} S. Caenepeel, K. Janssen, Partial (Co)Actions of Hopf Algebras and Partial Hopf-Galois Theory, {\it Commun. Algebra} {\bf 36} (2008), 2923--2946. \bibitem{CR} S. U. Chase, A. Rosenberg, Amitsur cohomology and the Brauer group, {\it Mem. Am. Math. Soc.} {\bf 52} (1965) 34--79. \bibitem{CHR} S.U. Chase, D.K. Harrison, A. Rosenberg, Galois theory and Galois cohomology of commutative rings, {\it Mem. Amer. Math. Soc.} {\bf 52} (1965) 15--33. \bibitem{CornGould} C. Cornock, V. Gould, Proper restriction semigroups and partial actions, {\it J. Pure Appl. Algebra} {\bf 216} (2012), 935--949. \bibitem{CP} A. H. Clifford, G. B. Preston, Algebraic Theory of Semigroups I, Amer. Math. Soc., Providence, 1964. \bibitem{DI} F. DeMeyer, E. Ingraham, Separable algebras over commutative rings, {\it Lect. notes in math.}, {\bf 181}, Springer (1971). \bibitem{D3} M.\ Dokuchaev, Recent developments around partial actions, {\it S\~ao Paulo J. Math. Sci.} (to appear), arXiv:1801.09105v2. \bibitem{DE} M. Dokuchaev, R. Exel, Associativity of crossed products by partial actions, enveloping actions and partial representations, {\it Trans. Amer. Math. Soc.} {\bf 357 } (2005) 1931--1952. \bibitem{DE2} M.\ Dokuchaev, R.\ Exel, Partial actions and subshifts, {\it J.~Funct.~Analysis}, {\bf 272} (2017), 5038--5106. \bibitem{DE3} M.\ Dokuchaev, R.\ Exel, The ideal structure of algebraic partial crossed products, {\it Proc.\ London Math. Soc.}, {\bf 115}, (1) (2017), 91–134. \bibitem{DES} M. Dokuchaev, R. Exel, J.J. Sim\'on, Crossed products by twisted partial actions and graded algebras, {\it J. Algebra} {\bf 320} (2008) 3278--3310. \bibitem{DEP} M. Dokuchaev, R. Exel, P. Piccione, Partial representations and partial group algebras, {\it J. Algebra}, {\bf 226} (1) (2000), 505--532. \bibitem{DFP} M. Dokuchaev, M.\ Ferrero, A.\ Paques, Partial actions and Galois theory, {\it J. Pure Appl. Algebra} {\bf 208} (2007) 77--87. \bibitem{DK} M.\ Dokuchaev, M.\ Khrypchenko, Partial cohomology of groups, {\it J. Algebra} {\bf 427} (2015) 142--182. \bibitem{DN} M.\ Dokuchaev, B.\ Novikov, Partial projective representations and partial actions, {\it J. Pure Appl. Algebra} {\bf 214} (2010), 251--268. \bibitem{DN2} M.\ Dokuchaev, B.\ Novikov, Partial projective representations and partial actions II, {\it J. Pure Appl. Algebra} {\bf 216} (2012), 438--455. \bibitem{DoNoPi} M.\ Dokuchaev, B.\ Novikov, H.\ Pinedo, The partial Schur Multiplier of a group, {\it J. Algebra}, {\bf 392} (2013), 199--225. \bibitem{DNZh} M. Dokuchaev, B. Novikov, G. Zholtkevych, Partial actions and automata, {\it Algebra Discrete Math.,} {\bf 11} (2011), (2), 51--63. \bibitem{E-1} R.\ Exel, Circle actions on $C^*$-algebras, partial automorphisms and generalized Pimsner-Voiculescu exact sequences, {\it J. Funct. Anal.} {\bf 122} (1994), (3), 361--401. \bibitem{E-2} R.\ Exel, The Bunce-Deddens algebras as crossed products by partial automorphisms, {\it Bol. Soc. Brasil. Mat. (N.S.)}, {\bf 25} (1994), 173--179. \bibitem{E0} R.\ Exel, Twisted partial actions: a classification of regular $C^*$-algebraic bundles, {\it Proc.\ London Math. Soc.}, {\bf 74} (1997), (3), 417--443. \bibitem{E2} R.\ Exel, Amenability for Fell Bundles, {\it J.\ Reine Angew.\ Math.} {\bf 492} (1997), 41--73. 
\bibitem{E1} R.\ Exel, Partial actions of groups and actions of inverse semigroups, {\it Proc.\ Am.\ Math.\ Soc.} {\bf 126} (1998), (12), 3481--3494. \bibitem{E3} R.\ Exel, Hecke algebras for protonormal subgroups, {\it J. Algebra,} {\bf 320} (2008), 1771--1813. \bibitem{F2} M.\ Ferrero, Partial actions of groups on algebras, a survey, {\it S\~ao Paulo J. Math. Sci.} {\bf 3}, (2009), (1), 95--107. \bibitem{FrP} D.\ Freitas, A.\ Paques, On partial Galois Azumaya extensions, {\it Algebra Discrete Math.,} {\bf 11}, (2011), 64--77. \bibitem{GR} D.\ Gon\c calves, D.\ Royer, Leavitt path algebras as partial skew group rings, {\it Commun. Algebra} {\bf 42 } (2014) 127--143. \bibitem{KL} J.\ Kellendonk, M.\ V.\ Lawson, Partial actions of groups, {\it Internat. J. Algebra Comput.}, {\bf 14} (2004), (1), 87--114. \bibitem{KL1} J.\ Kellendonk, M.\ V.\ Lawson, Tiling semigroups, {\it J. Algebra}, {\bf 224} (2000), 140--150. \bibitem{KennedySchafhauser} M.\ Kennedy, C.\ Schafhauser, Noncommutative boundaries and the ideal structure of reduced crossed products, {\it Preprint}, arXiv:1710.02200, (2017). \bibitem{Kud} G.\ Kudryavtseva, Partial monoid actions and a class of restriction semigroups, {\it J. Algebra}, {\bf 429} (2015), 342--370. \bibitem{KuoSzeto} J.-M. Kuo, G. Szeto, The structure of a partial Galois extension, {\it Monatsh Math}, 175, No. {\bf 4,} (2014), 565--576. \bibitem{L} T. Y.\ Lam, Lectures on Modules and Rings, $2^{\rm{nd}}$ Edition {\it Springer} (1998). \bibitem{MC} S. Maclane, Homology, Springer-Verlag, Berlin (1963). \bibitem{HM} H. Matsumura, Commutative ring theory, {\it Camb. Univ. press.}, 8, Cambridge Studies in Advanced Mathematics (1989). \bibitem{PRSantA} A.\ Paques, V.\ Rodrigues, A.\ Sant'Ana, Galois correspondences for partial Galois Azumaya extensions, {\it J. Algebra Appl.}, {\bf 10}, (2011), (5), 835--847. \bibitem{PS} A. Paques, A. Sant'Ana, When is a crossed product by a twisted partial action Azumaya?, {\it Comm. Algebra} {\bf 38} (2010) 1093--1103. \bibitem{St1} B.\ Steinberg, Inverse semigroup homomorphisms via partial group actions, {\it Bull. Aust. Math. Soc.}, {\bf 64}, (2001), No.1, 157--168. \bibitem{St2} B.\ Steinberg, Partial actions of groups on cell complexes, {\it Monatsh. Math.}, {\bf 138} (2003), (2), 159--170. \end{thebibliography} \end{document}
\begin{document} \newcommand{\qbc}[2]{ {\left [{#1 \atop #2}\right ]}} \newcommand{\anbc}[2]{{\left\langle {#1 \atop #2} \right\rangle}} \newcommand{\end{enumerate}}{\end{enumerate}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray*}}{\begin{eqnarray*}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\bm}[1]{{\mbox{\boldmath $#1$}}} \newcommand{{\cal S}}{{\cal S}} \newcommand{{\lambda/\mu}}{{\lambda/\mu}} \newcommand{{\cal L}_P}{{\cal L}_P} \newcommand{{(P,\omega)}}{{(P,\omega)}} \newcommand{{P,\omega}}{{P,\omega}} \newcommand{\mathfrak{S}}{\mathfrak{S}} \newcommand{\mathrm{maj}}{\mathrm{maj}} \newcommand{\,(\mathrm{mod}\,2)}{\,(\mathrm{mod}\,2)} \newcommand{\textcolor{blue}}{\textcolor{blue}} \newcommand{\textcolor{green}}{\textcolor{green}} \newcommand{\textcolor{magenta}}{\textcolor{magenta}} \newcommand{\textcolor{brown}}{\textcolor{brown}} \newcommand{\textcolor{purple}}{\textcolor{purple}} \newcommand{\textcolor{nice}}{\textcolor{nice}} \newcommand{\textcolor{orange}}{\textcolor{orange}} \definecolor{brown}{cmyk}{0,0,.35,.65} \definecolor{purple}{rgb}{.5,0,.5} \definecolor{nice}{cmyk}{0,.5,.5,0} \definecolor{orange}{cmyk}{0,.35,.65,0} \begin{centering} \textcolor{red}{\Large\bf Some Remarks on Sign-Balanced and Maj-Balanced Posets}\\[.2in] \textcolor{blue}{Richard P. Stanley}\footnote{Partially supported by NSF grant \#DMS-9988459.}\\ Department of Mathematics\\ Massachusetts Institute of Technology\\ Cambridge, MA 02139\\ \emph{e-mail:} [email protected]\\[.2in] \textcolor{magenta}{version of 14 January 2004}\\[.2in] \end{centering} \vskip 10pt \section{Introduction.} \label{sec:intro} Let $P$ be an $n$-element poset (partially ordered set), and let $\omega:P\rightarrow [n]=\{1,2,\dots,n\}$ be a bijection, called a \emph{labeling} of $P$. We call the pair $(P,\omega)$ a \emph{labelled poset}. A \emph{linear extension} of $P$ is an order-preserving bijection $f:P\rightarrow [n]$. We can regard $f$ as defining a permutation $\pi=\pi(f)$ of the set $[n]$ given by $\pi(i)=j$ if $f(\omega^{-1}(j))=i$. We write $\pi$ in the customary way as a word $a_1a_2\cdots a_n$, where $\pi(i)=a_i=\omega(f^{-1}(i))$. We will say for instance that $f$ is an \emph{even linear extension} of $(P,\omega)$ if $\pi$ is an even permutation (i.e., an element of the alternating group $\mathfrak{A}_n$). Let $\ep$ denote the set of linear extensions of $P$, and set $\lpw=\{\pi(f)\st f\in\ep\}$ We say that $\ppw$ is \emph{sign-balanced} if $\lpw$ contains the same number of even permutations as odd permutations. Note that the parity of a linear extension $f$ depends on the labeling $\omega$. However, the notion of sign-balanced depends only on $P$, since changing the labeling of $P$ simply multiplies the elements of $\lpw$ by a fixed permutation in $\sn$, the symmetric group of all permutations of $[n]$. Thus we can simply say that $P$ is sign-balanced without specifying $\omega$. We say that a function $\vartheta:\ep\rightarrow\ep$ is \emph{parity-reversing} (respectively, \emph{parity-preserving}) if for all $f\in\ep$, the permutations $\pi(f)$ and $\pi(\vartheta(f))$ have opposite parity (respectively, the same parity). 
Note that the properties of parity-reversing and parity-preserving do not depend on $\omega$; indeed, $\vartheta$ is parity-reversing (respectively, parity-preserving) if and only if for all $f\in\ep$, the permutation $\vartheta f\circ f^{-1}\in\sn$ is odd (respectively, even), Sign-balanced posets were first considered by Ruskey \cite{ruskey}. He established the following result, which shows that many combinatorially occuring classes of posets, such as geometric lattices and Eulerian posets, are sign-balanced. \begin{theorem} \label{thm:p-r} \emph{Suppose $\#P\geq 2$. If every nonminimal element of the poset $P$ is greater than at least two minimal elements, then $P$ is sign-balanced.} \end{theorem} \textbf{Proof.} Let $\pi=a_1a_2a_3\cdots a_n\in\lpw$. Let $\pi'= \pi(1,2) = a_2a_1a_3\cdots a_n\in \sn$. (We always multiply permutations from right to left.) By the hypothesis on $P$, we also have $\pi'\in\lpw$. The map $\pi\mapsto \pi'$ is a parity-reversing involution (i.e., exactly one of $\pi$ and $\pi'$ is an even permutation) on $\lpw$, and the proof follows. $\Box$ The above proof illustrates what will be our basic technique for showing that a poset $P$ is sign-balanced, viz., giving a bijection $\sigma:\lpw \rightarrow \lpw$ such that $\pi$ and $\sigma(\pi)$ have opposite parity for all $\pi\in\lpw$. Equivalently, we are giving a parity-reversing bijection $\vartheta:\ep\rightarrow\ep$. In 1992 Ruskey \cite[{\S}5, item~6]{ruskey2} conjectured as to when the product $\bm{m}\times \bm{n}$ of two chains of cardinalities $m$ and $n$ is sign-balanced, viz., $m,n>1$ and $m\equiv n\,(\mathrm{mod}\,2)$. Ruskey proved this when $m$ and $n$ are both even by giving a simple parity-reversing involution, which we generalize in Proposition~\ref{prop:alch} and Corollary~\ref{cor:dom}. Ruskey's conjecture for $m$ and $n$ odd was proved by D. White \cite{white}, who also computed the ``imbalance'' between even and odd linear extensions in the case when exactly one of $m$ and $n$ is even (stated here as Theorem~\ref{thm:white}). None of our theorems below apply to the case when $m$ and $n$ are both odd. Ruskey \cite[{\S}5, item~5]{ruskey2} also asked what order ideals $I$ (defined below) of $\bm{m}\times \bm{n}$ are sign-balanced. Such order ideals correspond to integer partitions $\lambda$ and will be denoted $P_\lambda$; the linear extensions of $P_\lambda$ are equivalent to standard Young tableaux (SYT) of shape $\lambda$. White \cite{white} also determined some additional $\lambda$ for which $P_\lambda$ is sign-balanced, and our results below will give some further examples. In Sections~\ref{sec:maj} and \ref{sec:hl} we consider some analogous questions for the parity of the major index of a linear extension of a poset $P$. Given $\pi=a_1a_2\cdots a_n\in\lpw$, let inv$(f)$ denote the number of \emph{inversions} of $\pi$, i.e., $$ \mathrm{inv}(\pi) = \#\{(i,j)\st i<j,\ a_i>a_j\}. $$ Let \beq I_{P,\omega}(q) = \sum_{\pi\in\lpw}q^{\mathrm{inv}(f)}, \label{eq:ipq} \eeq the generating function for linear extensions of $\ppw$ by number of inversions. Since $f$ is an even linear extension if and only if inv$(f)$ is an even integer, we see that $P$ is sign-balanced if and only if $I_{P,\omega}(-1)=0$. In general $I_{P,\omega}(q)$ seems difficult to understand, even when $P$ is known to be sign-balanced. I am grateful to Marc van Leeuwen for his many helpful suggestions regarding Section~\ref{sec:ptn}. 
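These definitions are easy to experiment with by brute force. The following Python sketch (an illustration only; the example poset, its labeling, and all function names are ours) enumerates the linear extensions of a small labelled poset, collects the coefficients of the polynomial (\ref{eq:ipq}), and reads off sign-balance from $I_{P,\omega}(-1)$.
\begin{verbatim}
from itertools import permutations

def linear_extensions(elements, covers):
    """Brute force: all orderings of `elements` compatible with the
    cover relations `covers` (pairs (a, b) meaning a < b)."""
    for order in permutations(elements):
        pos = {x: i for i, x in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in covers):
            yield order

def inversions(word):
    return sum(1 for i in range(len(word)) for j in range(i + 1, len(word))
               if word[i] > word[j])

# Example: P = product of two 2-element chains with the identity labelling
# omega, so the word of a linear extension f is just (f^{-1}(1),...,f^{-1}(n)).
elements = [1, 2, 3, 4]
covers = [(1, 2), (1, 3), (2, 4), (3, 4)]

coeffs = {}                  # coefficients of I_{P,omega}(q), keyed by degree
for word in linear_extensions(elements, covers):
    k = inversions(word)
    coeffs[k] = coeffs.get(k, 0) + 1

imbalance = sum((-1) ** k * c for k, c in coeffs.items())
print(coeffs, imbalance)     # {0: 1, 1: 1} 0, so this poset is sign-balanced
\end{verbatim}
Here the two linear extensions have $0$ and $1$ inversions, so $I_{P,\omega}(q)=1+q$ and $I_{P,\omega}(-1)=0$, in agreement with Ruskey's result for the product of two chains of even cardinalities mentioned above.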
\section{Promotion and evacuation.} Promotion and evacuation are certain bijections on the set $\ep$ of linear extensions of a finite poset $P$. They were originally defined by M.-P.\ Sch\"utzenberger \cite{schut} and have subsequently arisen in many different situations (e.g., \cite[{\S}5]{g-e}\cite[{\S}8]{haiman}\cite[{\S}4]{haiman2}\cite[{\S}3]{leeu}). To be precise, the original definitions of promotion and evacuation require an insignificant reindexing to become bijections. We will incorporate this reindexing into our definition. Let $f:P\rightarrow [n]$ be a linear extension of the poset $P$. Define a maximal chain $u_0<u_1<\cdots<u_\ell$ of $P$, called the \emph{promotion chain} of $f$, as follows. Let $u_0=f^{-1}(1)$. Once $u_i$ is defined let $u_{i+1}$ be that element $u$ covering $u_i$ (i.e., $u_i<u$ and no $s\in P$ satisfies $u_i<s<u$) for which $f(u)$ is minimal. Continue until reaching a maximal element $u_\ell$ of $P$. Now define the \emph{promotion} $g=\partial f$ of $f$ as follows. If $t\neq u_i$ for all $i$, then set $g(t)=f(t)-1$. If $0\leq i\leq \ell-1$, then set $g(u_i) = f(u_{i+1})-1$. Finally set $g(u_\ell)=n$. Figure~\ref{fig1} gives an example, with the elements in the promotion chain of $f$ circled. (The vertex labels in Figure~\ref{fig1} are the values of a linear extension and are unrelated to the (irrelevant) labeling $\omega$.) It is easy to see that $\partial f\in\ep$ and that the map $\partial:\ep\rightarrow\ep$ is a bijection. \begin{figure} \caption{The promotion operator $\partial$} \label{fig1} \end{figure} \begin{lemma} \label{lemma1} \emph{Let $P$ be an $n$-element poset. Then the promotion operator $\partial: \ep\rightarrow \ep$ is parity-reversing if and only if the length $\ell$ (or cardinality $\ell+1$) of every maximal chain of $P$ satisfies $n\equiv \ell\,(\mathrm{mod}\,2)$. Similarly, $\partial$ is parity-preserving if and only if the length $\ell$ of every maximal chain of $P$ satisfies $n\equiv \ell+1\,(\mathrm{mod}\,2)$.} \end{lemma} \textbf{Proof.} Let $f\in\ep$, and let $u_0<u_1<\dots<u_\ell$ be the promotion chain of $f$. Then $(\partial f)f^{-1}$ is a product of two cycles, viz., $$ (\partial f)f^{-1}= (n,n-1,\dots,1)(b_0,b_1,\dots,b_\ell), $$ where $b_i=f(u_i)$. This permutation is odd if and only if $n\equiv\ell\,(\mathrm{mod}\,2)$, and the proof follows since every maximal chain of $P$ is the promotion chain of some linear extension. $\ \Box$ \begin{corollary} \label{cor:sbmc} \emph{Let $P$ be an $n$-element poset, and suppose that the length $\ell$ of every maximal chain of $P$ satisfies $n\equiv \ell\,(\mathrm{mod}\,2)$. Then $P$ is sign-balanced.} \end{corollary} \textbf{Proof.} By the previous lemma, $\partial$ is parity-reversing. Since it is also a bijection, $\ep$ must contain the same number of even linear extensions as odd linear extensions. $\ \Box$ We now consider a variant of promotion known as evacuation. For any linear extension $g$ of an $m$-element poset $Q$, let $u_0<u_1<\cdots<u_\ell$ be the promotion chain of $g$, so $\partial g(u_\ell)=m$. Define $\rho_g(Q)=Q-\{u_\ell\}$. The restriction of $\partial g$ to $\rho_g(Q)$, which we also denote by $\partial g$, is a linear extension of $\rho_g(Q)$. Let $$ \mu_{g,k}(Q)=\rho_{\partial^kg}\,\rho_{\partial^{k-1}g}\cdots \rho_{\partial g}\,\rho_g(Q). $$ Now let $\#P=n$ and define the \emph{evacuation} evac$(f)$ of $f$ to be the linear extension of $P$ whose value at the unique element of $\mu_{f,k-1}(P)-\mu_{f,k}(P)$ is $n-k+1$, for $1\leq k\leq n$.
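Promotion is straightforward to implement directly from the definition, which makes Lemma~\ref{lemma1} easy to test on small posets. The following Python sketch (ours; the encoding of posets and linear extensions is chosen only for convenience) performs one promotion step and reports the parity of $(\partial f)f^{-1}$.
\begin{verbatim}
def promotion(f, covers):
    """One application of the promotion operator.  `f` maps poset elements
    to 1..n; `covers` lists pairs (a, b) with b covering a."""
    n = len(f)
    element_of = {v: t for t, v in f.items()}
    chain = [element_of[1]]                       # u_0 = f^{-1}(1)
    while True:
        above = [b for a, b in covers if a == chain[-1]]
        if not above:                             # reached a maximal element
            break
        chain.append(min(above, key=lambda t: f[t]))
    g = {t: f[t] - 1 for t in f if t not in chain}
    for i in range(len(chain) - 1):
        g[chain[i]] = f[chain[i + 1]] - 1
    g[chain[-1]] = n
    return g

def parity(f, g):
    """Parity (0 = even, 1 = odd) of the permutation g f^{-1} of 1..n."""
    word = [g[t] for t in sorted(f, key=f.get)]
    return sum(1 for i in range(len(word)) for j in range(i + 1, len(word))
               if word[i] > word[j]) % 2

# Example: the poset a < c, b < c.  Every maximal chain has length 1 and
# n = 3, so by Lemma 1 promotion should be parity-reversing.
covers = [('a', 'c'), ('b', 'c')]
f = {'a': 1, 'b': 2, 'c': 3}
g = promotion(f, covers)
print(g, parity(f, g))    # {'b': 1, 'a': 2, 'c': 3} 1  (odd, as predicted)
\end{verbatim}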
Figure~\ref{fig2} gives an example of evac$(f)$, where we circle the values of evac$(f)$ as soon as they are determined. A remarkable theorem of Sch\"utzenberger \cite{schut} asserts that evac is an involution (and hence a bijection $\ep\rightarrow\ep$). \begin{figure} \caption{The evacuation operator evac.} \label{fig2} \end{figure} We say that the poset $P$ is \emph{consistent} if for all $t\in P$, the lengths of all maximal chains of the principal order ideal $\Lambda_t := \{s\in P\st s\leq t\}$ have the same parity. Let $\nu(t)$ denote the length of the longest chain of $\Lambda_t$, and set $$ \Gamma(P) = \sum_{t\in P}\nu(t). $$ We also say that a permutation $\sigma$ of a finite set has \emph{parity} $k\in\zz$ if either $\sigma$ and $k$ are both even or $\sigma$ and $k$ are both odd. Equivalently, inv$(\sigma)\equiv k\,(\mathrm{mod}\,2)$. \begin{proposition} \label{prop:conev} \emph{Suppose that $P$ is consistent. Then} evac$:\ep \rightarrow\ep$ \emph{is parity-preserving if ${n\choose 2}-\Gamma(P)$ is even, and parity-reversing if ${n\choose 2}-\Gamma(P)$ is odd.} \end{proposition} \textbf{Proof.} The evacuation of a linear extension $f$ of an $n$-element poset $P$ consists of $n$ promotions $\delta_1,\dots,\delta_n$, where $\delta_i$ is applied to a certain subposet $P_{i-1}$ of $P$ with $n-i+1$ elements. Let $f_i$ be the linear extension of $P$ whose restriction to $P_i$ agrees with $\delta_i \delta_{i-1}\cdots \delta_1$, and whose value at the unique element of $P_{j-1}-P_j$ for $j\leq i$ is $n-i+1$. Thus $f_0=f$ and $f_n= \mathrm{evac}(f)$. (Figure~\ref{fig2} gives an example of the sequence $f_0,\dots,f_5$.) Let $u_i$ be the end (top) of the promotion chain for the promotion $\delta_i$. Thus $\{u_1,u_2,\dots,u_n\}=P$. Lemma~\ref{lemma1} shows that if $P$ is consistent, then $f_if_{i-1}^{-1}$ has parity $n-i+1-(\nu(u_i)+1)$. Hence the parity of evac$(f)f^{-1}$ is given by $$ \sum_{i=1}^n (n-i-\nu(u_i)) ={n\choose 2}-\sum_{t\in P}\nu(P) = {n\choose 2}-\Gamma(P), $$ from which the proof follows. $\ \Box$ \begin{corollary} \label{cor:cons} \emph{Suppose that $P$ is consistent and ${n\choose 2}-\Gamma(P)$ is odd. Then $P$ is sign-balanced.} \end{corollary} \textsc{Note.} In \cite[pp.\ 50--51]{rs:thesis}\cite[Cor.\ 19.5]{rs:mem} it was shown using the theory of $P$-partitions that the number $e(P)$ of linear extensions of $P$ is even if $P$ is graded of rank $\ell$ (i.e., every maximal chain of $P$ has length $\ell$) and $n-\ell$ is even, and it was stated that it would be interesting to give a direct proof. Our Corollary~\ref{cor:sbmc} gives a direct proof of a stronger result. Similarly in \cite[Cor.~4.6]{rs:thesis}\cite[Cor.~19.6]{rs:mem} it was stated (in dual form) that if for all $t\in P$ all maximal chains of $\Lambda_t$ have the same length, and if ${n\choose 2}-\Gamma(P)$ is odd, then $e(P)$ is even. Corollary~\ref{cor:cons} gives a direct proof of a stronger result. \section{Partitions.} \label{sec:ptn} In this section we apply our previous results and obtain some new results for certain posets corresponding to (integer) partitions. We first review some notation and terminology concerning partitions. Further details may be found in \cite[Ch.\ 7]{ec2}. Let $\lambda= (\lambda_1,\lambda_2,\dots)$ be a partition of $n$, denoted $\lambda\vdash n$ or $|\lambda|=n$. Thus $\lambda_1\geq\lambda_2\geq\cdots\geq 0$ and $\sum \lambda_i =n$. We can identify $\lambda$ with its \emph{diagram} $\{(i,j)\in \pp\times \pp\st 1\leq j\leq\lambda_i\}$. 
Let $\mu$ be another partition such that $\mu\subseteq\lambda$, i.e., $\mu_i\leq \lambda_i$ for all $i$. Define the \emph{skew partition} or \emph{skew diagram} $\lambda/\mu$ by $$ \lambda/\mu = \{ (i,j)\in \pp\times \pp\st \mu_i+1\leq j\leq \lambda_i\}. $$ Write $|\lm|=n$ to denote that $|\lambda|-|\mu|=n$, i.e., $n$ is the number of squares in the shape $\lm$, drawn as a Young diagram \cite[p.\ 29]{ec1}. We can regard $\lm$ as a subposet of $\pp\times\pp$ (with the usual coordinatewise ordering). We write $P_\lm$ for this poset. As a set it is the same as $\lm$, but the notation $P_\lm$ emphasizes that we are considering it to be a poset. In this section we will only be concerned with ``ordinary'' shapes $\lambda$, but in Section~\ref{sec:maj} skew shapes $\lambda/\mu$ will arise as a special case of Proposition~\ref{prop:slabps}. The posets $P_\lambda$ are consistent for any $\lambda$, so we can ask for which $P_\lambda$ evacuation is parity-reversing, i.e., for which $P_\lambda$ the number ${n\choose 2}-\Gamma(P_\lambda)$ is odd. To this end, the \emph{content} $c(i,j)$ of the cell $(i,j)$ is defined by $c(i,j)=j-i$ \cite[p.~373]{ec2}. Also let ${\cal O}(\mu)$ denote the number of odd parts of the partition $\mu$. An \emph{order ideal} of a poset $P$ is a subset $K\subseteq P$ such that if $t\in K$ and $s<t$, then $s\in K$. Similarly a \emph{dual order ideal} or \emph{filter} of $P$ is a subset $F\subseteq P$ such that if $s\in F$ and $t>s$, then $t\in F$. If we successively remove two-element chains from $P_\lambda$ which are dual order ideals of the poset from which they are removed, then eventually we reach a poset core$_2(P_\lambda)$, called the \emph{2-core} of $P_\lambda$, that contains no dual order ideals which are two-element chains. The 2-core is \emph{unique}, i.e., independent of the order in which the dual order ideals are removed, and is given by $P_{\delta_k}$ for some $k\geq 1$, where $\delta_k$ denotes the ``staircase shape'' $(k-1,k-2,\dots,1)$. For further information see \cite[Exer.~7.59]{ec2}. \begin{proposition} Let $\lambda\vdash n$. The following numbers all have the same parity. \be \item[(a)] $\Gamma(P_\lambda)$ \item[(b)] $\sum_{t\in P_\lambda}c(t)$ \item[(c)] $\frac 12({\cal O}(\lambda) - {\cal O}(\lambda'))$ \item[(d)] $\frac 12(n-{k\choose 2})$, where ${k\choose 2}=\#\mathrm{core}_2(P_\lambda)$ \ee Hence if $a_\lambda$ denotes any of the above four numbers, then evacuation is parity-reversing on $P_\lambda$ if and only if ${n\choose 2}-a_\lambda$ is odd. \end{proposition} \textbf{Proof.} It is easy to see that if $t\in P_\lambda$, then $\nu(t) \equiv c(t)\,(\mathrm{mod}\,2)$. Hence (a) and (b) have the same parity. It is well-known and easy to see \cite[Exam.\ 3, p.\ 11]{macd} that $$ \sum_{t\in P_\lambda}c(t) = \sum {\lambda_i\choose 2} -\sum {\lambda'_i\choose 2}. $$ Since $\sum \lambda_i=\sum \lambda'_i$, we have $$ \sum_{t\in P_\lambda}c(t) = \frac 12\left( \sum \lambda_i^2 -\sum \left(\lambda'_i\right)^2\right). $$ Since $a^2\equiv 0,1\,(\mathrm{mod}\,4)$ depending on whether $a$ is even or odd, we see that (b) and (c) have the same parity. If we remove from $P_\lambda$ a 2-element dual order ideal which is also a chain, then we remove exactly one element with an odd content. A 2-core is self-conjugate and hence has an even content sum. Hence the number of odd contents of $P_\lambda$ is congruent modulo $2$ to the number of dominos that must be removed from $P_\lambda$ in order to reach core$_2(P_\lambda)$. It follows that (b) and (d) have the same parity, completing the proof.
$\ \Box$ It can be shown \cite{rs:amm} that if $t(n)$ denotes the number of partitions $\lambda\vdash n$ for which $a_\lambda$ is even, then $t(n)= \frac 12(p(n)+f(n))$, where $p(n)$ denotes the total number of partitions of $n$ and $$ \sum_{n\geq 0}f(n)x^n = \prod_{i\geq 1}\frac{1+x^{2i-1}} {(1-x^{4i})(1+x^{4i-2})^2}. $$ Hence the number $g(n)$ of partitions $\lambda\vdash n$ for which evac is parity-reversing on $P_\lambda$ is given by $$ g(n) = \left\{ \begin{array}{rl} \frac 12(p(n)+f(n)), & \mathrm{if}\ {n\choose 2}\ \mathrm{is\ odd}\\[.1in] \frac 12(p(n)-f(n)), & \mathrm{if}\ {n\choose 2}\ \mathrm{is\ even} \end{array} \right. $$ We conclude this section with some applications of the theory of domino tableaux. A \emph{standard domino tableau} (SDT) of shape $\lambda\vdash 2n$ is a sequence $$ \emptyset=\lambda^0\subset \lambda^1 \subset \cdots\subset \lambda^n=\lambda $$ of partitions such that each skew shape $\lambda^i/\lambda^{i-1}$ is a \emph{domino}, i.e., two squares with an edge in common. Each of these dominos is either horizontal (two squares in the same row) or vertical (two squares in the same column). Let $\dla$ denote the set of all SDT of shape $\lambda$. Given $D\in\dla$, define ev$(D)$ to be the number of vertical dominos in even columns of $D$, where an \emph{even column} means the $2i$th column for some $i\in\pp$. For the remainder of this section, fix the labeling $\omega$ of $P_\lambda$ to be the usual ``reading order,'' i.e., the first row of $\lambda$ is labelled $1,2,\dots,\lambda_1$; the second row is labelled $\lambda_1+1, \lambda_1+2,\dots, \lambda_1+\lambda_2$, etc. We write $I_\lambda(q)$ for $I_{P_\lambda,\omega}(q)$ and set $I_\lambda=I_\lambda(-1)$, the \emph{imbalance} of the partition $\lambda$. It is shown in \cite[Thm.\ 12]{white} (by analyzing the formula that results from setting $q=-1$ in (\ref{eq:oiip})) that $$ I_\lambda = \sum_{D\in\dla} (-1)^{\mathrm{ev}(D)}. $$ \indent Let $\lambda\vdash n$. Lascoux, Leclerc and Thibon \cite[(27)]{l-l-t} define a certain class of symmetric functions $\tilde{G}^{(k)}_\lambda(x;q)$ (defined earlier by Carr\'e and Leclerc \cite{c-l} for the special case $k=2$ and $\lambda=2\mu$). We will only be concerned with the case $k=2$ and $q=-1$, for which we write $G_\lambda = \tilde{G}^{(2)}_\lambda(x;-1)$. The symmetric function $G_\lambda$ vanishes unless core$_2(\lambda)=\emptyset$, so we may assume $n=2m$. If core$_2(\lambda)=\emptyset$, then $G_\lambda$ is homogeneous of degree $m=n/2$. We will not define it here but only recall the properties relevant to us. The connection with the imbalance $I_\lambda$ is provided by the formula (immediate from the definition of $G_\lambda$ in \cite{l-l-t} together with \cite[Thm. 12]{white}) \beq [x_1\cdots x_m]G_\lambda = (-1)^{r(\lambda)}I_\lambda, \label{eq:sfg} \eeq where $[x_1\cdots x_m]F$ denotes the coefficient of $x_1\cdots x_m$ in the symmetric function $F$, and $r(\lambda)$ is the maximum number of vertical dominos that can appear in even columns of a domino tableau of shape $\lambda$. Also define $d(\lambda)$ to be the maximum number of disjoint vertical dominos that can appear in the diagram of $\lambda$, i.e., $$ d(\lambda) = \sum_i\left\lfloor \frac 12\lambda'_{2i}\right\rfloor. $$ Note that $d(\lambda)\geq r(\lambda)$, but equality need not hold in general. For instance, $d(4,3,1)=1$, $r(4,3,1)=0$. However, we do have $d(2\mu)= r(2\mu)$ for any partition $\mu$. 
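White's formula $I_\lambda=\sum_{D}(-1)^{\mathrm{ev}(D)}$ quoted above is easy to confirm by machine on small shapes. The following Python sketch (ours, purely for illustration) computes $I_\lambda$ in two independent ways: from standard Young tableaux via inversions of the reading word, and from standard domino tableaux via the signed count.
\begin{verbatim}
def syt(shape):
    """Yield the standard Young tableaux of `shape` as lists of rows."""
    n = sum(shape)
    rows = [[] for _ in shape]
    def fill(k):
        if k > n:
            yield [row[:] for row in rows]
            return
        for i, part in enumerate(shape):
            if len(rows[i]) < part and (i == 0 or len(rows[i-1]) > len(rows[i])):
                rows[i].append(k)
                yield from fill(k + 1)
                rows[i].pop()
    yield from fill(1)

def imbalance_syt(shape):
    """I_lambda = I_lambda(-1), from reading-word inversions of SYT."""
    total = 0
    for t in syt(shape):
        w = [x for row in t for x in row]
        inv = sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
                  if w[i] > w[j])
        total += (-1) ** inv
    return total

def imbalance_sdt(shape):
    """Sum of (-1)^ev(D) over standard domino tableaux D of `shape`."""
    shape = tuple(shape)
    def walk(mu, ev):
        if mu == shape:
            return (-1) ** ev
        mu, total = list(mu), 0
        for i in range(len(shape)):
            r = mu[i]
            # horizontal domino at the end of row i
            if r + 2 <= shape[i] and (i == 0 or mu[i-1] >= r + 2):
                nu = mu[:]; nu[i] += 2
                total += walk(tuple(nu), ev)
            # vertical domino in rows i, i+1, occupying column r+1
            if (i + 1 < len(shape) and mu[i+1] == r and r + 1 <= shape[i+1]
                    and (i == 0 or mu[i-1] >= r + 1)):
                nu = mu[:]; nu[i] += 1; nu[i+1] += 1
                total += walk(tuple(nu), ev + (1 if (r + 1) % 2 == 0 else 0))
        return total
    return walk((0,) * len(shape), 0)

for shape in [(2, 2), (3, 1), (4, 2), (3, 3, 2)]:
    print(shape, imbalance_syt(shape), imbalance_sdt(shape))
\end{verbatim}
The two columns agree; for instance the shape $(2,2)$ gives $0$ and $(3,1)$ gives $1$.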
Let us also note that our $r(\lambda)$ is denoted $d(\lambda)$ in \cite{white} and is defined only for $\lambda$ with an empty 2-core. \begin{theorem} \label{thm:kcor} (a) \emph{We have} $$ \sum_{\mu\vdash m} I_{2\mu} = 1 $$ \emph{for all $m\geq 1$.}\\ \indent (b) \emph{Let $v(\lambda)$ denote the maximum number of disjoint vertical dominos that fit in the shape $\lambda$. Equivalently,} $$ v(\lambda) = \sum_{i\geq 1} \left\lfloor \frac 12\lambda_i^\prime \right\rfloor. $$ \emph{Then} $$ \sum_{\lambda\vdash 2m} (-1)^{v(\lambda)}I_\lambda^2 =0. $$ \end{theorem} \textbf{Proof.} (a) Barbasch and Vogan \cite{b-v} and Garfinkle \cite{gar} define a bijection between elements $\pi$ of the hyperoctahedral group $B_m$, regarded as signed permutations of $1,2,\dots,m$, and pairs $(P,Q)$ of SDT of the same shape $\lambda\vdash 2m$. (See \cite[p.~25]{leeuwen} for further information.) A crucial property of this bijection, stated implicitly without proof in \cite{k-l-l-t} and proved by Shimozono and White \cite[Thm.\ 30]{s-w}, asserts that \beq \mathrm{tc}(\pi) = \frac 12(v(P)+v(Q)), \label{eq:srsk} \eeq where tc$(\pi)$ denotes the number of minus signs in $\pi$ and $v(R)$ denotes the number of vertical dominos in the SDT $R$. Carr\'e and Leclerc \cite[Def.\ 9.1]{c-l} define a symmetric function $H_\mu(x;q)$ which satisfies $H_\mu(x,-1)=(-1)^{v(\mu)} G_{2\mu}$. In \cite[Thm.\ 1]{k-l-l-t} is stated the identity \beq \sum_\mu H_\mu(x;q)=\prod_i \frac{1}{1-x_i} \prod_{i<j} \frac{1}{1-x_ix_j}\prod_{i\geq j}\frac{1}{1-qx_ix_j}. \label{eq:symqc} \eeq The proof of (\ref{eq:symqc}) in \cite{k-l-l-t} is incomplete, since it depends on a semistandard version of the $P=Q$ case of (\ref{eq:srsk}) (easily deduced from (\ref{eq:srsk})), which had not yet been proved. The proof of (\ref{eq:srsk}) in \cite{s-w} therefore completes the proof of (\ref{eq:symqc}). A generalization of (\ref{eq:symqc}) was later given by Lam \cite[Thm.~28]{lam}. Setting $q=-1$ in (\ref{eq:symqc}) gives $$ \sum_{\mu} (-1)^{v(\mu)} G_{2\mu} = \prod_i \frac{1}{(1-x_i)(1+x_i^2)} \prod_{i<j} \frac{1}{1-x_i^2 x_j^2}. $$ Taking the coefficient of $x_1\cdots x_m$ on both sides and using (\ref{eq:sfg}) together with $v(\mu)=d(2\mu)=r(2\mu)$ completes the proof. (b) It is easy to see that for any SDT $D$ we have $$ v(D) = v(\lambda)-2d(\lambda)+ 2\mathrm{ev}(D). $$ Thus by (\ref{eq:srsk}) we have \begin{eqnarray*} 0 & = & \sum_{\pi\in B_m}(-1)^{\mathrm{tc}(\pi)}\\ & = & \sum_{P,Q}(-1)^{\frac 12(v(P)+v(Q))}\\ & = & \sum_{\lambda\vdash 2m}\left(\sum_{D\in\dla}(-1)^{\frac 12 v(D)} \right)^2\\ & = & \sum_{\lambda\vdash 2m} (-1)^{v(\lambda)} \left(\sum_{D\in\dla}(-1)^{\mathrm{ev}(D)}\right)^2\\ & = & \sum_{\lambda\vdash 2m} (-1)^{v(\lambda)}I_\lambda^2. \ \ \Box \eeas In the same spirit as Theorem~\ref{thm:kcor} we have the following conjecture. \begin{conjecture} \label{conj:sytimb} \hspace{-.2in}\footnote{A combinatorial proof of (a) was found by Thomas Lam \cite{lam} after this paper was written. Later a combinatorial proof of both (a) and (b) was given by Jonas Sj\"ostrand \cite{sjo}. Sj\"ostrand's main result \cite[Thm.~2.3]{sjo} leads to further identities, such as $\sum_{\mu\vdash n}q^{v(\mu)}I_{2\mu}=1$, thereby generalizing our Theorem~\ref{thm:kcor}(a).} (a) For all $n\geq 0$ we have \beq \sum_{\lambda\vdash n} q^{v(\lambda)}t^{d(\lambda)} x^{v(\lambda')}y^{d(\lambda')} I_\lambda = (q+x)^{\lfloor n/2\rfloor}. 
\label{eq:opq} \eeq (b) If $n\not\equiv 1\,(\mathrm{mod}\,4)$, then $$ \sum_{\lambda\vdash n} (-1)^{v(\lambda)}t^{d(\lambda)}I_\lambda^2=0. $$ \end{conjecture} It is easy to see that $d(\lambda)=d(\lambda')$ for all $\lambda$. (E.g., consider the horizontal and vertical line segments in Figure~\ref{fig:horver}.) Hence the variable $y$ is superfluous in equation (\ref{eq:opq}), but we have included it for the sake of symmetry. In particular, if $F_n(q,t,x,y)$ denotes the left-hand side of (\ref{eq:opq}) then $$ F_n(q,0,x,y) = F_n(q,t,x,0) = F_n(q,0,x,0). $$ Note also that $d(\lambda)=0$ if and only $\lambda$ is a \emph{hook}, i.e., a partition of the form $(n-k,1^k)$. The case $t=0$ (or $y=0$, or $t=y=0$) of equation~(\ref{eq:opq}) follows from the following proposition, which in a sense ``explains'' where the right-hand side $(q+x)^{\lfloor n/2\rfloor}$ comes from. \begin{figure} \caption{$d(86655431)=d(86655431')$} \label{fig:horver} \end{figure} \begin{proposition} \label{prop:hooksum} \emph{For all $n\geq 0$ we have} \beq \sum_{\lambda=(n-k,1^k)} q^{v(\lambda)} x^{v(\lambda')} I_\lambda = (q+x)^{\lfloor n/2\rfloor}, \label{eq:hsum} \eeq \emph{where $\lambda$ ranges over all hooks $(n-k,1^k)$, $0\leq k\leq n-1$.} \end{proposition} \textbf{First proof.} Let $\lambda=(n-k,1^k)$. Let $\omega$ denote the ``reading order'' labeling of $P_\lambda$ as above. The set ${\cal L}_\pw$ consists of all permutations $1,a_2,\dots,a_m$, where $a_2,\dots,a_m$ is a \emph{shuffle} of the permutations $2,3,\dots,n-k$ and $n-k+1,n-k+2,\dots,n$. It follows e.g.\ from \cite[Prop.~1.3.17]{ec1} that $$ I_\lambda(q) = \qbc{n-1}{k}, $$ a $q$-binomial coefficient. Suppose first that $n=2m+1$. By \cite[Exer.~3.45(b)]{ec1}, $$ \qbc{n-1}{k}_{q=-1}= \left\{ \begin{array}{rl} \ds {m\choose j}, & k=2j\\[.2in] 0, & k=2j+1. \end{array} \right. $$ Note that if $\lambda=( n-2j,1^{2j})$, then $v(\lambda)=j$ and $v(\lambda')=m-j$. Hence \begin{eqnarray*} \sum_{\lambda=( n-k,1^k)} q^{v(\lambda)} x^{v(\lambda')} I_\lambda & = & \sum_{j=0}^m q^j x^{m-j}{m\choose j}\\ & = & (q+x)^m, \eeas as desired. The proof for $n$ even is similar and will be omitted. $\ \Box$ \textbf{Second proof.} Assume first that $n=2m$. We use an involution argument analogous to the proof of Theorem~\ref{thm:p-r} or to arguments in \cite[{\S}5]{white} and Section~\ref{sec:chains} of this paper. Let $T$ be an SYT of shape $\lambda= (n-k,1^k)$, which can be regarded as an element of ${\cal L}_{P_\lambda,\omega}$. Let $i$ be the least positive integer (if it exists) such that $2i-1$ and $2i$ appear in different rows and in different columns of $T$. Let $T'$ denote the SYT obtained from $T$ by transposing $2i-1$ and $2i$. Since multiplying by a transposition changes the sign of a permutation, we have $(-1)^{\mathrm{inv}(T)} + (-1)^{\mathrm{inv}(T')}=0$. The surviving SYT are obtained by first placing $1,2$ in the same row or column, then $3,4$ in the same row or column, etc. If $k=2j$ or $2j+1$, then the number of survivors is easily seen to be ${m-1\choose j}$. Because the entries of $T$ come in pairs $2i-1,2i$, the number of inversions of each surviving SYT is even. Moreover, if $k=2j$ then $v(\lambda)=j$ and $v(\lambda')=m-j$, while if $k=2j+1$ then $v(\lambda)=j+1$ and $v(\lambda')=m-1-j$. Hence \begin{eqnarray*} \sum_{\lambda=(n-k,1^k)} q^{v(\lambda)} x^{v(\lambda')} I_\lambda & = & \sum_{j=0}^{m-1}(q+x){m-1\choose j}q^j x^{m-1-j}\\ & = & (q+x)^m, \eeas as desired. The proof is similar for $n=2m+1$. 
Let $i$ be the least positive integer (if it exists) such that $2i$ and $2i+1$ (rather than $2i-1$ and $2i$) appear in different rows and in different columns of $T$. There are now no survivors when $k=2j+1$ and ${m\choose j}$ survivors when $k=2j$. Other details of the proof remain the same, so we get \begin{eqnarray*} \sum_{\lambda=(n-k,1^k)} q^{v(\lambda)} x^{v(\lambda')} I_\lambda & = & \sum_{j=0}^m {m-1\choose j}q^j x^{m-j}\\ & = & (q+x)^m, \eeas completing the proof. $\ \Box$ There are some additional properties of the symmetric functions $G_\lambda$ that yield information about $I_\lambda$. For instance, there is a product formula in \cite[Thm.\ 2]{k-l-l-t} for $\sum_{\mu} G_{2\mu\cup 2\mu}$, where $\mu$ ranges over all partitions and $$ 2\mu\cup 2\mu=(2\mu_1,2\mu_1,2\mu_2,2\mu_2, \dots), $$ which implies that $\sum_{\mu\vdash n}I_{2\mu\cup 2\mu}=0$. In fact, in \cite[Cor.\ 9.2]{c-l} it is shown that $G_{2\mu\cup 2\mu}(x)=\pm s_\mu(x_1^2, x_2^2,\dots)$, from which it follows easily that in fact $I_{2\mu\cup 2\mu}=0$. However, this result is just a special case of Corollary~\ref{cor:sbmc} and of Proposition~\ref{prop:conev}, so we obtain nothing new. Also relevant to us is an expansion of $G_\lambda$ into Schur functions due to Shimozono (see \cite[Thm.\ 18]{white}) for certain shapes $\lambda$, namely, those whose 2-quotient (in the sense e.g.\ of \cite[Exam.~I.1.8]{macd}) is a pair of rectangles. This expansion was used by White \cite[Cor.\ 20]{white} to evaluate $I_\lambda$ for such shapes. White \cite[{\S}8]{white} also gives a combinatorial proof, based on a sign-reversing involution, in the special case that $\lambda$ itself is a rectangle. We simply state here White's result for rectangles. \begin{theorem} \label{thm:white} \emph{Let $\lambda$ be an $m\times n$ rectangle. Then} $$ I_\lambda = \left\{ \begin{array}{rl} 1, & \mathrm{if}\ m=1\ \mathrm{or}\ n=1\\ 0, & \mathrm{if}\ m\equiv n\,(\mathrm{mod}\,2)\ \mathrm{and}\ m,n>1\\ \pm g^\mu, & m\not\equiv n\,(\mathrm{mod}\,2), \end{array} \right. $$ \emph{where $g^\mu$ denotes the number of shifted standard tableaux (as defined e.g.\ in \cite[Exam.~III.8.12]{macd}) of shape} $$ \mu = \left( \frac{m+n-1}{2}, \frac{m+n-3}{2}, \cdots, \frac{|n-m|+3}{2},\frac{|n-m|+1}{2}\right). $$ \emph{(An explicit ``hook length formula'' for any $g^\mu$ appears e.g.\ in the reference just cited.)} \end{theorem} It is natural to ask whether Theorem~\ref{thm:white} can be generalized to other partitions $\lambda$. In this regard, A. Eremenko and A. Gabrielov (private communication) have made a remarkable conjecture. Namely, if we fix the number $\ell$ of parts and parity of each part of $\lambda$, then there are integers $c_1,\dots,c_k$ and integer vectors $\gamma_1,\dots, \gamma_k\in\zz^\ell$ such that $$ I_\lambda = \sum_{i=1}^k c_i g^{\frac 12(\lambda+\gamma_i)}. $$ One defect of this conjecture is that the expression for $I_\lambda$ is not unique. We can insure uniqueness, however, by the additional condition that all the vectors $\gamma_i$ have coordinate sum 0 when $|\lambda|$ is even and $-1$ when $|\lambda|$ is odd (where $|\lambda|=\sum \lambda_i$). In this case, however, we need to define properly $g^\mu$ when $\mu$ isn't a strictly decreasing sequence of nonnegative integers. See the discussion preceding Conjecture~\ref{conj:egsf}. 
For instance, we have \begin{eqnarray*} I_{(2a,2b,2c)} & = & g^{(a,b,c)}-g^{(a+1,b,c-1)}\\ I_{(2a+1,2b,2c)} & = & g^{(a,b,c)}+g^{(a+1,b-1,c)}\\ I_{(2a,2b+1,c)} & = & 0\\ I_{(2a,2b,2c+1)} & = & -g^{(a+1,b-1,c)}-g^{(a+1,b,c-1)}\\ I_{(2a+1,2b+1,2c)} & = & g^{(a+1,b,c)}+g^{(a+1,b+1,c-1)}\\ I_{(2a+1,2b,2c+1)} & = & 0\\ I_{(2a,2b+1,2c+1)} & = & g^{(a+1,b,c)}+g^{(a,b+1,c)}\\ I_{(2a+1,2b+1,2c+1)} & = & g^{(a,b+1,c)}+g^{(a+1,b+1,c-1)}\\ I_{(2a,2b,2c,2d)} & = & g^{(a,b,c,d)}-g^{(a+1,b,c-1,d)}- g^{(a+1,b+1,c-1,d-1)}-2g^{(a+1,b,c,d-1)}. \eeas It is easy to see that $I_{(2a,2b+1,c)}=I_{(2a+1,2b,2c+1)}=0$, viz., the 2-cores of the partitions $(2a,2b+1,c)$ and $(2a+1,b,2c+1)$ have more than one square. More generally, we have verified by induction the formulas for $I_\mu$ when $\ell(\mu)\leq 3$. We have found a (conjectured) symmetric function generalization of the Eremenko-Gabrielov conjecture. If $f(x)$ is any symmetric function, define $$ f(x/x) = f(p_{2i-1}\rightarrow 2p_{2i-1},\ p_{2i}\rightarrow 0). $$ In other words, write $f(x)$ as a polynomial in the power sums $p_j$ and substitute $2p_{2i-1}$ for $p_{2i-1}$ and 0 for $p_{2i}$. In $\lambda$-ring notation, $f(x/x)=f(X-X)$. Let $Q_\mu$ denote Schur's shifted $Q$-function \cite[{\S}3.8]{macd}. The $Q_\mu$'s form a basis for the ring $\qq[p_1,p_3,p_5,\dots]$. Hence $f(x/x)$ can be written uniquely as a linear combination of $Q_\mu$'s. We mentioned above that the symmetric function $G_\lambda$ was originally defined only when core$_2(\lambda)=\emptyset$. We can extend the definition to any $\lambda$ as follows. The original definition has the form \beq G_\lambda(x) = \sum_D (-1)^{\mathrm{cospin}(D)}x^D, \label{eq:glax} \eeq summed over all semistandard domino tableaux of shape $\lambda$, where cospin$(\lambda)$ is a certain integer and $x^D$ a certain monomial depending on $\lambda$. If $\#\mathrm{core}_2(\lambda)=1$, then define $G_\lambda$ exactly as in (\ref{eq:glax}), except that we sum over all semistandard domino tableaux of the skew shape $\lambda/1$. If $\#\mathrm{core}_2(\lambda)>1$, then define $G_\lambda=0$. (In certain contexts it would be better to define $G_\lambda$ by (\ref{eq:glax}), summed over all semistandard domino tableaux of the skew shape $\lambda/\mathrm{core}_2(\lambda)$, but this is not suitable for our purposes.) Equation~(\ref{eq:sfg}) then continues to hold for any $\lambda\vdash n$, where $m=\lfloor n/2\rfloor$. We also need to define $G_\mu(x/x)$ properly when $\mu$ is not a strictly decreasing sequence of positive integers. The following definition seems to be correct, but perhaps some modification is necessary. Let $\mu=(\mu_1,\dots,\mu_k)\in \zz^k$. Trailing 0's are irrelvant and can be ignored, so we may assume $\mu_k>0$. If $\mu$ is not a sequence of distinct nonnegative integers, then $G_\mu(x/x)=0$. Otherwise $G_\mu(x/x) = \varepsilon_\mu G_\lambda(x/x)$, where $\lambda$ is the decreasing rearrangement of $\mu$ and $\varepsilon_\mu$ is the sign of the permutation that converts $\mu$ to $\lambda$. \begin{conjecture} \label{conj:egsf} Fix the number $\ell$ of parts and parity of each part of the partition $\lambda$. Then there are integers $c_1,\dots,c_k$ and integer vectors $\gamma_1,\dots,\gamma_k\in\zz^\ell$ such that \beq (-1)^{r(\lambda)} G_\lambda(x/x) = \sum_{i=1}^k c_i Q_{\frac 12(\lambda+\gamma_i)}(x). \label{eq:geconj} \eeq \end{conjecture} Let $\lambda\vdash 2n$ or $\lambda\vdash 2n+1$. Take the coefficient of $x_1x_2\cdots x_n$ on both sides of (\ref{eq:geconj}). 
By (\ref{eq:sfg}) the left-hand side becomes $2^nI_\lambda$. Moreover, if $\mu\vdash m$ then the coefficient of $x_1\cdots x_m$ in $Q_\mu$ is $2^m g^\mu$ \cite[(8.16)]{macd}. Hence Conjecture~\ref{conj:egsf} specializes to the Eremenko-Gabrielov conjecture. At present we have no conjecture for the values of the coefficients $c_i$. Here is a short table (due to Eremenko and Gabrielov for $I_\lambda$; they have extended this table to the case of four and five rows) of the three-row case of Conjecture~\ref{conj:egsf}. For simplicity we write $\pm$ for $(-1)^{r(\lambda)}$. \begin{eqnarray*} \pm G_{(2a,2b,2c)}(x/x) & = & Q_{(a,b,c)}(x) - Q_{(a+1,b,c-1)}(x)\\ \pm G_{(2a+1,2b,2c)}(x/x) & = & Q_{(a,b,c)}(x) + Q_{(a+1,b-1,c)}(x)\\ \pm G_{(2a,2b+1,2c)}(x/x) & = & 0\\ \pm G_{(2a,2b,2c+1)}(x/x) & = & -Q_{(a+1,b-1,c)}(x) -Q_{(a+1,b,c-1)}(x)\\ \pm G_{(2a+1,2b+1,2c)}(x/x) & = & Q_{(a+1,b,c)}(x) + Q_{(a+1,b+1,c-1)}(x)\\ \pm G_{(2a+1,2b,2c+1)}(x/x) & = & 0\\ \pm G_{(2a,2b+1,2c+1)}(x/x) & = & Q_{(a+1,b,c)}(x)+ Q_{(a,b+1,c)}(x)\\ \pm G_{(2a+1,2b+1,2c+1)}(x/x) & = & Q_{(a,b+1,c)}(x) +Q_{(a+1,b+1,c-1)}(x). \eeas \indent We now discuss some general properties of the polynomial $I_\lambda(q)$ and its value $I_\lambda(-1)$. Let $C(\lambda)$ denote the set of \emph{corner squares} of $\lambda$, i.e., those squares of the Young diagram of $\lambda$ whose removal still gives a Young diagram. Equivalently, Pieri's formula \cite[Thm.~7.15.7]{ec2} implies that \beq s_{\lambda/1}=\sum_{t\in C(\lambda)} s_{\lambda-t}. \label{eq:pieri} \eeq Let $f^\lambda$ denote the number of SYT of shape $\lambda$ \cite[Prop.~7.10.3]{ec2}, so \beq f^\lambda=\sum_{t\in C(\lambda)}f^{\lambda-t}. \label{eq:flrec} \eeq Note that $I_\lambda(1)=f^\lambda$, so $I_\lambda(q)$ is a (nonstandard) $q$-analogue of $f^\lambda$. The $q$-analogue of equation (\ref{eq:flrec}) is the following result. \begin{proposition} \label{prop:ilqrec} \emph{We have} $$ I_\lambda(q) = \sum_{t\in C(\lambda)} q^{b_\lambda(t)} I_{\lambda-t}(q), $$ \emph{where $b_\lambda(t)$ denotes the number of squares in the diagram of $\lambda$ in a lower row than that of $t$.} \end{proposition} \textbf{Proof.} We have by definition $$ I_\lambda(q) = \sum_T q^{\mathrm{inv}(\pi(T))}, $$ where $T$ ranges over all SYT of shape $\lambda$ and $\pi(T)$ is the permutation obtained by reading the entries of $T$ in the usual reading order, i.e., left-to-right and top-to-bottom when $T$ is written in ``English notation'' as in \cite{macd}\cite{ec1}\cite{ec2}. Suppose $\lambda\vdash n$. If $T$ is an SYT of shape $\lambda$, then the square $t$ occupied by $n$ is a corner square. The number of inversions $(i,j)$ of $\pi(T)=a_1\cdots a_m$ such that $a_i=n$ is equal to $b_\lambda(t)$, and the proof follows. $\ \Box$ Now let $D_1$ denote the linear operator on symmetric functions defined by $D_1(s_\lambda)=s_{\lambda/1}$. We then have the commutation relation \cite[Exercise~7.24(a)]{ec2} \beq D_1 s_1 - s_1 D_1 = I, \label{eq:dssd} \eeq the identity operator. This leads to many enumerative consequences, discussed in \cite{rs:dp}. There is an analogue of (\ref{eq:dssd}) related to $I_\lambda$, though we don't know of any applications. Define a linear operator $D(q)$ on symmetric functions by $$ D(q)s_\lambda =\sum_{t\in C(\lambda)} q^{b_\lambda(t)}s_{\lambda-t}. 
$$ Let $U(q)$ denote the adjoint of $D(q)$ with respect to the basis $\{s_\lambda\}$ of Schur functions, so $$ U(q)s_\mu=\sum_t q^{b_{\mu+t}(t)}s_{\mu+t}, $$ where $t$ ranges over all boxes that we can add to the diagram of $\mu$ to get the diagram of a partition $\mu+t$ (for which necessarily $t\in C(\mu+t)$). Note that $U(1)=s_1$ (i.e., multiplication by $s_1$) and $D(1) = D_1$ as defined above. It follows from Proposition~\ref{prop:ilqrec} that $$ U(q)^n\cdot 1 = \sum_{\lambda\vdash n} I_\lambda(q)s_\lambda, $$ where $U(q)^n\cdot 1$ denotes $U(q)^n$ acting on the symmetric function $1=s_{\emptyset}$. Write $U=U(-1)$ and $D=D(-1)$. Let $A$ be the linear operator on symmetric functions given by $As_\lambda = (2k(\lambda)+1)s_\lambda$, where $k(\lambda)=\#C(\lambda)$, the number of corner boxes of $\lambda$. \begin{proposition} \emph{We have $DU+UD=A$.} \end{proposition} \textbf{Proof.} The proof is basically a brute force computation. Write $\bar{\lambda}_i = \lambda_i+\lambda_{i+1} +\cdots$. Suppose $\mu$ is obtained from $\lambda$ by adding a box in row $r-1$ and deleting a box in row $s-1$, where $r<s$. Then the coefficient of $s_\mu$ in $(D(q)U(q)+U(q)D(q))s_\lambda$ is given by $$ \langle s_\mu,(D(q)U(q)+U(q)D(q))s_\lambda\rangle = q^{\bar{\lambda}_r}q^{\bar{\lambda}_s}+ q^{\bar{\lambda}_s}q^{\bar{\lambda}_r-1}, $$ which vanishes when $q=-1$. Similarly if $r>s$ we get $$ \langle s_\mu,(D(q)U(q)+U(q)D(q))s_\lambda\rangle = q^{\bar{\lambda}_s}q^{\bar{\lambda}_r+1}+ q^{\bar{\lambda}_r}q^{\bar{\lambda}_s}, $$ which again vanishes when $q=-1$. On the other hand, if $\lambda=\mu$ we have \begin{eqnarray*} \langle s_\lambda,(D(q)U(q)+U(q)D(q))s_\lambda\rangle & = & (c(\lambda)+1)q^{2\bar{\lambda}_r} + c(\lambda)q^{2\bar{\lambda}_r}\\ & = & (2c(\lambda)+1)q^{2\bar{\lambda}_r}. \eeas When $q=-1$ the right-hand side become $2c(\lambda)+1$, completing the proof. $\ \Box$ \section{Chains of order ideals.} \label{sec:chains} Suppose that $P$ is an $n$-element poset, and let $\alpha= (\alpha_1,\dots,\alpha_k)$ be a composition of $n$, i.e., $\alpha_i\in\pp =\{1,2,\dots\}$ and $\sum \alpha_i=n$. Define an $\alpha$-\emph{chain} of order ideals of $P$ to be a chain \beq \emptyset=K_0\subset K_1\subset\cdots\subset K_k=P \label{eq:ac} \eeq of order ideals satisfying $\#(K_i-K_{i-1})=\alpha_i$ for $1\leq i\leq k$. The following result is quite simple but has a number of consequences. \begin{proposition} \label{prop:alch} \emph{Let $P$ be an $n$-element poset and $\alpha$ a fixed composition of $n$. Suppose that for every $\alpha$-chain (\ref{eq:ac}) of order ideals of $P$, at least one subposet $K_i-K_{i-1}$ is sign-balanced. Then $P$ is sign-balanced.} \end{proposition} \textbf{Proof.} Let ${\cal C}$ be the $\alpha$-chain (\ref{eq:ac}). We say that a linear extension $f$ is ${\cal C}$-\emph{compatible} if $$ K_1=f^{-1}(\{ 1,\dots,\alpha_1\}),\ \ K_2-K_1= f^{-1}(\{\alpha_1+1,\dots,\alpha_1+\alpha_2\}), $$ etc. Let inv$({\cal C})$ be the minimum number of inversions of a ${\cal C}$-compatible linear extension. Clearly $$ \sum_f q^{\mathrm{inv}(f)} = q^{\mathrm{inv}({\cal C})} \prod_{i=1}^k I_{K_i-K_{i-1}}(q), $$ where the sum is over all ${\cal C}$-compatible $f$. Since every linear extension is compatible with a unique $\alpha$-chain, there follows \beq I_{P,\omega}(q) = \sum_{{\cal C}} q^{\mathrm{inv}({\cal C})} \prod_{i=1}^k I_{K_i-K_{i-1}}(q), \label{eq:oiip} \eeq where ${\cal C}$ ranges over all $\alpha$-chains of order ideals of $P$. The proof now follows by setting $q=-1$. 
$\ \Box$ Define a finite poset $P$ with $2m$ elements to be \emph{tilable by dominos} if there is a chain $\emptyset=K_0\subset K_1\subset \cdots\subset K_m=P$ of order ideals such that each subposet $K_i-K_{i-1}$ is a two-element chain. Similarly, if $\#P=2m+1$ and $1\leq j\leq m+1$ then we say that $P$ is \emph{$j$-tilable by dominos} if there is a chain $\emptyset=K_0\subset K_1\subset \cdots\subset K_{m+1}=P$ of order ideals such that $\#(K_i-K_{i-1})=2$ if $1\leq i\leq m+1$ and $i\neq j$ (so $\#(K_j-K_{j-1})=1$). Note that being tilable by dominos is stronger than the existence of a partition of $P$ into cover relations (or two element saturated chains). For instance, the poset $P$ with cover relations $a<c, b<c, a<d, b<d$ can be partitioned into the two cover relations $a<c$ and $b<d$, but $P$ is not tilable by dominos. When $n=2m$, we define a \emph{$P$-domino tableau} to be a chain $\emptyset=K_0\subset K_1\subset \cdots\subset K_m=P$ of order ideals such that $K_i-K_{i-1}$ is a two-element chain for $1\leq i\leq m$. Similarly, when $n=2m+1$, we define a (standard) \emph{$P$-domino tableau} to be a chain $\emptyset=K_0\subset K_1\subset \cdots\subset K_{m+1}=P$ of order ideals such that $K_i-K_{i-1}$ is a two-element chain for $1\leq i\leq m$ (so that $K_{m+1}-K_m$ consists of a single point). Thus for $\lambda\vdash 2n$, a $P_\lambda$-domino tableau coincides with our earlier definition of an SDT of shape $\lambda$. \begin{corollary} \label{cor:dom} {Let $\#P=2m$, and assume that $P$ is not tilable by dominos. Then $P$ is sign-balanced. Similarly if $\#P=2m+1\geq 3$ and $P$ is not $j$-tilable by dominos for some $j$, then $P$ is sign-balanced.} \end{corollary} \textbf{Proof.} Let $\alpha=(2,2,\dots,2)$ ($m$ 2's). If $\#P=2m$ and $P$ is not tilable by dominos, then for any $\alpha$-chain (\ref{eq:ac}) there is an $i$ for which $K_i-K_{i-1}$ consists of two disjoint points. Since a poset consisting of two disjoint points is sign-balanced, it follows from Proposition~\ref{prop:alch} that $P$ is sign-balanced. The argument is similar for $\#P=2m+1$. $\ \Box$ Corollary~\ref{cor:dom} was proved in a special case (the product of two chains with an even number of elements, with the $\hat{0}$ and $\hat{1}$ removed), using essentially the same proof as we have given, by Ruskey \cite[{\S}5, item~6]{ruskey2}. Corollary~\ref{cor:dom} is particularly useful for the posets $P_\lambda$. From this corollary and the definition of core$_2(\lambda)$ we conclude the following. \begin{corollary} \emph{If} core$_2(P_\lambda)$ \emph{consists of more than one element, then $P_\lambda$ is sign-balanced.} \end{corollary} It follows from \cite[Exer.~7.59(e)]{ec2} that if $f(n)$ denotes the number of partitions $\lambda\vdash n$ such that $\#$core$_2(\lambda)\leq 1$, then $$ \sum_{n\geq 0} f(n)x^n = \frac{1+x} {\prod_{i\geq 1}(1-x^{2i})^2}. $$ Standard partition asymptotics (e.g., \cite[Thm.\ 6.2]{andrews}) shows that $$ f(n) \sim \frac{C}{n^{5/4}}\exp\left( \pi\sqrt{2n/3}\right) $$ for some $C>0$. Since the total number $p(n)$ of partitions of $n$ satisfies $$ p(n) \sim \frac{C'}{n}\exp\left( \pi\sqrt{2n/3}\right), $$ it follows that $\lim_{n\geq 0} f(n)/p(n)=0$. Hence as $n\rightarrow \infty$, $P_\lambda$ is sign-balanced for almost all $\lambda\vdash n$. \section{Maj-balanced posets.} \label{sec:maj} If $\pi=a_1 a_2\cdots a_m$ is a permutation of $[n]$, then the \emph{descent set} $D(\pi)$ of $\pi$ is defined as $$ D(\pi) = \{ i\st a_i>a_{i+1}\}. 
$$ An element of $D(\pi)$ is called a \emph{descent} of $\pi$, and \emph{major index} maj$(\pi)$ is defined as $$ \maj(\pi) = \sum_{i\in D(\pi)} i. $$ The major index has many properties analogous to the number of inversions, e.g., a classic theorem of MacMahon states that inv and maj are equidistributed on the symmetric group $\sn$ \cite{foata}\cite{f-s}. Thus it is natural to try to find ``maj analogues'' of the results of the preceding sections. In general, the major index of a linear extension of a poset can be more tractable or less tractable than the number of inversions. Thus, for example, in Theorem~\ref{thm:majdom} we are able to completely characterize naturally labelled maj-balanced posets. An analogous result for sign-balanced partitions seems very difficult. On the other hand, since multiplying a permutation by a fixed permutation has no definite effect on the parity of the major index, many of the results for sign-balanced posets are false (Theorem~\ref{thm:p-r}, Lemma~\ref{lemma1}, Proposition~\ref{prop:conev}). Let $f$ be a linear extension of the labelled poset $\ppw$, and let $\pi=\pi(f)$ be the associated permutation of $[n]$. In analogy to our definition of inv$(f)$, define maj$(f)=\mathrm{maj}(\pi)$ and $$ W_{P,\omega}(q) = \sum_{f\in\ep}q^{\mathrm{maj}(f)} = \sum_{\pi\in\lpw}q^{\maj(\pi)}. $$ We say that $\ppw$ is \emph{maj-balanced} if $W_{P,\omega}(-1)=0$, i.e., if the number of linear extensions of $P$ with even major index equals the number with odd major index. Unlike the situation for sign-balanced posets, the property of being maj-balanced can depend on the labeling $\omega$. Thus an interesting special case is that of \emph{natural labelings}, for which $\omega(s)<\omega(t)$ whenever $s<t$ in $P$. We write $W_P(q)$ for $W_{P,\omega}(q)$ when $\omega$ is natural. It is a basic consequence of the theory of $P$-partitions \cite[Thm.~4.5.8]{ec1} that $W_P(q)$ does not depend on the choice of natural labeling of $P$. Figures~\ref{fig:cx}(a) and (b) show two different labelings of a poset $P$. The first labeling (which is natural) is not maj-balanced, while the second one is. Moreover, the dual poset $P^*$ to the poset $P$ in Figure~\ref{fig:cx}(b), whether naturally labelled or labelled the same as $P$, is maj-balanced. Contrast that with the trivial fact that the dual of a sign-balanced poset is sign-balanced. As a further example of the contrast between sign and maj-balanced posets, Figure~\ref{fig:cx}(c) shows a naturally labelled maj-balanced poset $Q$. However, if we adjoin an element $\hat{0}$ below every element of $Q$ and label it 0 (thus keeping the labeling natural) then we get a poset which is no longer maj-balanced. On the other hand, it is clear that such an operation has no effect on whether a poset is sign-balanced. (In fact, it leaves $I_{Q,\omega}(q)$ unchanged.) \begin{figure} \caption{Some counterexamples} \label{fig:cx} \end{figure} Corollary~\ref{cor:dom} carries over to the major index in the following way. \begin{theorem} \label{thm:majdom} (a) \emph{Let $P$ be naturally labelled. Then $W_P(-1)$ is equal to the number of $P$-domino tableaux. In particular, $P$ is maj-balanced if and only if there does not exist a $P$-domino tableau.} (b) \emph{A labelled poset $(\pw)$ is maj-balanced if there does not exist a $P$-domino tableau.} \end{theorem} \textbf{Proof.} (a) Let $\pi=a_1 \cdots a_m\in\lpw$. Let $i$ be the least number (if it exists) for which $\pi'=a_1 \cdots a_{2i} a_{2i+2} a_{2i+1} a_{2i+3}\cdots a_m\in \lpw$. 
Note that $(\pi')^\prime=\pi$. Now exactly one of $\pi$ and $\pi'$ has a descent at $2i+1$. The only other differences in the descent sets of $\pi$ and $\pi'$ occur (possibly) for the even numbers $2i$ and $2i+2$. Hence $(-1)^{\mathrm{maj}(\pi)}+(-1)^{\mathrm{maj}(\pi')} =0$. The surviving permutations $\sigma=b_1\cdots b_m$ in $\lpw$ are exactly those for which $\emptyset\subset \{b_1,b_2\} \subset \{b_1,\dots,b_4\}\subset\cdots$ is a $P$-domino tableau with $\omega^{-1}(b_{2i-1})<\omega^{-1}(b_{2i})$ in $P$. (If $n$ is even, then the $P$-domino tableau ends as $\{b_1,\dots,b_{n-2}\}\subset P$, while if $n$ is odd it ends as $\{b_1,\dots,b_{n-1}\}\subset P$.) Since $\omega$ is natural we have $b_{2i-1}<b_{2i}$ for all $i$, so maj$(\sigma)$ is even. Hence $W_P(-1)$ is equal exactly to the number of $P$-domino tableaux. (b) Regardless of the labeling $\omega$, if there does not exist a $P$-domino tableau then there will be no survivors in the argument of (a), so $W_P(-1)=0$. $\ \Box$ The converse to Theorem~\ref{thm:majdom}(b) is false. The labelled poset $(P,\omega)$ of Figure~\ref{fig:notdt} is tilable by dominos and is maj-balanced. \begin{figure} \caption{A maj-balanced labelled poset tilable by dominos} \label{fig:notdt} \end{figure} Given an $n$-element poset $P$ with dual $P^*$, set $\Delta(P)=\Gamma(P^*)$. In \cite[Thm~4.4]{rs:thesis}\cite[Prop.~18.4]{rs:mem}\cite[Thm.~4.5.2]{ec1} it is shown that the following two conditions are equivalent: \be \item[(i)] For all $t\in P$, all maximal chains of the principal dual order ideal $V_t=\{s\in P\st s\geq t\}$ have the same length. \item[(ii)] $q^{{n\choose 2}-\Delta(P)}W_P(1/q)=W_P(q)$. \ee It follows by setting $q=-1$ that if (i) holds and ${n\choose 2}-\Delta(P)$ is odd, then $P$ is maj-balanced. Corollary~\ref{cor:cons} suggests in fact the following stronger result. \begin{corollary} \label{cor:dcmb} \emph{Suppose that $P$ is naturally labelled and dual consistent (i.e., $P^*$ is consistent). If ${n\choose 2}-\Delta(P)$ is odd, then $P$ is maj-balanced.} \end{corollary} \textbf{Proof.} By Theorem~\ref{thm:majdom} we need to show that there does not exist a $P$-domino tableau. Given $t\in P$, let $\delta(t)$ denote the length of the longest chain of $V_t$, so $\Delta(P)= \sum_{t\in P}\delta(t)$. First suppose that $n=2m$, and assume to the contrary that $\emptyset=I_0\subset I_1\subset\cdots\subset I_m=P$ is a $P$-domino tableau. If $s,t\in I_i-I_{i-1}$ then by dual consistency $\delta(s)+\delta(t)\equiv 1\,(\mathrm{mod}\,2)$. Hence $\Delta(P)\equiv m\,(\mathrm{mod}\,2)$, so $$ {n\choose 2}-\Delta(P) \equiv m(2m-1)-m\equiv 0\,(\mathrm{mod}\,2), $$ a contradiction. Similarly if $n=2m+1$, then the existence of a $P$-domino tableau implies $\Delta(P)\equiv m\,(\mathrm{mod}\,2)$, so $$ {n\choose 2}-\Delta(P) \equiv m(2m+1)-m\equiv 0\,(\mathrm{mod}\,2), $$ again a contradiction. $\ \Box$ Now let ${\cal S}$ be a finite subset of solid unit squares with integer vertices in $\rr\times\rr$ such that the set $|\cs| =\bigcup_{S\in \cs}$ is simply-connected. For $S,T\in\cs$, define $S<T$ if the center vertices $(s_1,s_2)$ of $S$ and $(t_1,t_2)$ of $T$ satisfy either (a) $t_1=s_1$ and $t_2=s_2+1$ or (b) $t_1=s_1+1$ and $t_2=s_2$. Regard $\cs$ as a poset, denoted $P_\cs$, under the transitive (and reflexive) closure of the relation $<$. Figure~\ref{fig:pposet} gives an example, where (a) shows $\cs$ as a set of squares and (b) as a poset. Note that the posets $P_\lm$ are a special case. 
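Since $P$-domino tableaux will reappear below for the posets $P_{\cal S}$, here is a small brute-force check (ours; the example poset is arbitrary) of Theorem~\ref{thm:majdom}(a): it computes $W_P(-1)$ over all linear extensions of a naturally labelled poset and compares it with a direct count of $P$-domino tableaux.
\begin{verbatim}
from itertools import permutations

def is_ideal(subset, order):
    return all(a in subset for (a, b) in order if b in subset)

def maj_imbalance(n, order):
    """W_P(-1) for the naturally labelled poset on 1..n whose strict
    relations are listed in `order` (pairs (a, b) meaning a < b)."""
    total = 0
    for word in permutations(range(1, n + 1)):
        pos = {x: i for i, x in enumerate(word)}
        if all(pos[a] < pos[b] for (a, b) in order):
            maj = sum(i + 1 for i in range(n - 1) if word[i] > word[i + 1])
            total += (-1) ** maj
    return total

def count_domino_tableaux(n, order):
    """Number of P-domino tableaux: chains of order ideals growing by
    2-element chains (plus a final singleton when n is odd)."""
    def grow(ideal):
        if len(ideal) >= n - 1:            # at most one element left over
            return 1
        rest = [x for x in range(1, n + 1) if x not in ideal]
        return sum(grow(ideal | {s, t})
                   for s in rest for t in rest
                   if (s, t) in set(order) and is_ideal(ideal | {s, t}, order))
    return grow(frozenset())

# Example: the poset 1 < 3, 1 < 4, 2 < 4 (the labelling is natural).
order = [(1, 3), (1, 4), (2, 4)]
print(maj_imbalance(4, order), count_domino_tableaux(4, order))   # 1 1
\end{verbatim}
For this poset both numbers equal $1$; the unique $P$-domino tableau is $\emptyset\subset\{1,3\}\subset\{1,2,3,4\}$.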
\begin{figure} \caption{A set ${\cal S} \label{fig:pposet} \end{figure} A \emph{Schur labelling} $\omega$ of $P_\cs$ is a labeling that increases along rows and decreases along columns, as illustrated in Figure~\ref{fig:pposet}. For the special case $P_\lm$, Schur labelings play an important role in the expansion of skew Schur functions $s_\lm$ in terms of quasisymmetric functions \cite[pp.~360--361]{ec2}. Suppose that $\#P_\cs$ is even and that $P_\cs$ is tilable by dominos. Then $\cs$ itself is tilable by dominos in the usual sense. It is known (implicit, for instance, in \cite{thurston}, and more explicit in \cite{chaboud}) that any two domino tilings of $\cs$ can be obtained from each other by ``$2\times 2$ flips,'' i.e., replacing two horizontal dominos in a $2\times 2$ square by two vertical dominos or \emph{vice versa}. It follows that if $D$ is a domino tiling of $\cs$ with $v(D)$ vertical dominos, then $(-1)^{v(D)}$ depends only on $\cs$. Set sgn$(\cs)=(-1)^{v(D)}$ for any domino tiling of $\cs$. \begin{proposition} \label{prop:slabps} \emph{Let $\cs$ be as above, and let $\omega$ be a Schur labeling of $P_\cs$, where $\#P_\cs$ is even, say $\#P_\cs=n$. Then} sgn$(\cs)W_{P_\cs}(-1)$ \emph{is the number of $P_\cs$-domino tableaux.} \end{proposition} \textbf{Proof.} The proof parallels that of Theorem~\ref{thm:majdom}. Define the involution $\pi\mapsto \pi'$ as in the proof of Theorem~\ref{thm:majdom}. Each survivor $\sigma=b_1\cdots b_m$ corresponds to a $P_\cs$-domino tableau $D$. We have $b_{2i-1}>b_{2i}$ if and only if the domino labelled with $b_{2i-1}$ and $b_{2i}$ is vertical. As noted above, $(-1)^{v(D)}=\mathrm{sgn}({\cal S})$, independent of $D$. Hence $(-1)^{\maj(\sigma)}=\mathrm{sgn}(\sigma)$, and the proof follows as in Theorem~\ref{thm:majdom}(a). $\ \Box$ A result analogous to Proposition~\ref{prop:slabps} holds for $\#P_\cs$ odd (with essentially the same proof) provided $P_\cs$ has a $\hat{0}$ or $\hat{1}$. The special case $P_\lm$ of Proposition~\ref{prop:slabps} (and its analogue for $\#P_\cs$ odd) can also be proved using the theory of symmetric functions, notably, \cite[Prop.~7.19.11]{ec2} and the Murnaghan-Nakayama rule (\cite[Cor.~7.17.5]{ec2}). \section{Hook lengths.} \label{sec:hl} In this section we briefly discuss a class of posets $P$ for which $W_P(q)$, and sometimes even $I_{P,\omega}(q)$, can be explicitly computed. For this class of posets we get a simple criterion for being maj balanced and, if applicable, sign balanced. Following \cite[p.\ 84]{rs:mem}, an $n$-element poset $P$ is called a \emph{hook length poset} if there exist positive integers $h_1,\dots, h_n$, the \emph{hook lengths} of $P$, such that \beq W_P(q) = \frac{[n]!}{(1-q^{h_1}) \cdots(1-q^{h_n})}, \label{eq:wpqhl} \eeq where $[n]!=(1-q)(1-q^2)\cdots(1-q^n)$. It is easy to see that if $P$ is a hook length poset, then the multiset of hook lengths is unique. In general, if $P$ is an ``interesting'' hook length poset, then each element of $P$ should have a hook length associated to it in a ``natural'' combinatorial way. \textsc{Note.} We could just as easily have extended our definition to \emph{labelled} posets $(P,\omega)$, where now $$ W_\pw(q) = \frac{q^c\,[n]!}{(1-q^{h_1}) \cdots (1-q^{h_n})} $$ for some $c\in\nn$. However, little is known about the labelled situation except when we can reduce it to the case of natural labelings by subtracting certain constants from the values of $\sigma$. The following result is an immediate consequence of equation (\ref{eq:wpqhl}). 
\begin{proposition} \label{prop:hlsb} \emph{Suppose that $P$ is a hook length poset with hook lengths $h_1,\dots, h_n$. Then $P$ is maj-balanced if and only if the number of even hook lengths is less than $\lfloor n/2\rfloor$. If $P$ isn't maj-balanced, then the maj imbalance is given by} $$ W_P(-1) = \frac{\lfloor n/2\rfloor!}{\prod_{h_i\ \mathrm{even}}(h_i/2)}. $$ \end{proposition} It is natural to ask at this point what are the known hook length posets. The strongest work in this area is due to Proctor \cite{proc}\cite{proc2}. We won't state his remarkable results here, but let us note that his \emph{$d$-complete} posets encompass all known ``interesting'' examples of hook length posets. These include forests (i.e., posets for which every element is covered by at most one element) and the duals $P^\ast_\lambda$ of the posets $P_\lambda$ of Section~\ref{sec:ptn}. Bj\"orner and Wachs \cite[Thm.\ 1.1]{b-w} settle the question of what naturally labelled posets $(P,\omega)$ satisfy \beq I_{P,\omega}(q)=W_{P,\omega}(q). \label{eq:bw} \eeq Namely, $P$ is a forest and $\omega$ is a postorder labeling. Hence for postorder labelled forests, Proposition~\ref{prop:hlsb} holds also for $I_{P,\omega}(-1)$. Bj\"orner and Wachs also obtain less definitive results for arbitrary labelings, whose relevance to sign and maj imbalance we omit. \end{document}
\begin{document} \input{epsf} \draft \title{Nonlocality of Einstein-Podolsky-Rosen State in Wigner Representation} \author{Konrad Banaszek and Krzysztof W\'{o}dkiewicz\cite{unm}} \address{Instytut Fizyki Teoretycznej, Uniwersytet Warszawski, Ho\.{z}a~69, PL--00--681~Warszawa, Poland} \date{\today} \maketitle \begin{abstract} We demonstrate that the Wigner function of the Einstein-Podolsky-Rosen state, though positive definite, provides a direct evidence of the nonlocal character of this state. The proof is based on an observation that the Wigner function describes correlations in the joint measurement of the phase space displaced parity operator. \end{abstract} \pacs{PACS Number(s): 03.65.Bz, 42.50.Dv} Einstein Podolsky and Rosen (EPR) in their argument about completeness of quantum mechanics used the following wave function for a system composed of two particles\cite{epr}: \begin{equation} \label{eprfunction} \Psi (x_{1},x_{2}) = \int_{-\infty}^{\infty} e^{(2\pi i/h)(x_{1}-x_{2}+x_{0})p} \, \text{d}p. \end{equation} Despite its obvious simplicity, this wave function has not been explicitly used in arguments relating the nonlocality of quantum mechanics with the Bell inequalities. Following Bohm \cite{bohm} the EPR correlations have been analyzed with the help of a singlet state of two spin-1/2 particles. For this state the nonlocality of quantum correlations has been demonstrated \cite{bell}. Quantum correlations for position-momentum variables can be analyzed in phase space using the Wigner distribution function. Using this phase space approach to the EPR correlations Bell argued \cite{bellwigner}, that the original EPR wave function (\ref{eprfunction}) will not exhibit nonlocal effects because its joint Wigner distribution function $W(x_{1},p_{1};x_{2},p_{2})$ is positive everywhere and as such will allow for a local hidden variable description of momentum sign correlations. In local hidden variable theories these and analogous correlations can be written in a form of a statistical ensemble of two local realities $\sigma({\bf a},\lambda_{1})=\pm 1$ and $\sigma({\bf b},\lambda_{2})=\pm 1$, for two spatially separated detectors with certain settings labeled by ${\bf a}$ and ${\bf b}$: \begin{equation} \label{lhv} E({\bf a};{\bf b}) = \int \text{d}\lambda_{1} \int \text{d}\lambda_{2}\, \sigma({\bf a},\lambda_{1}) \, \sigma({\bf b},\lambda_{2})\, W(\lambda_{1};\lambda_{2}). \end{equation} In this relation $W(\lambda_{1};\lambda_{2})$ is a local, positive and normalized distribution of hidden variables $\lambda_{1}$ and $\lambda_{2}$. In the Wigner representation, these variables can be associated respectively with the phase space realities $(x_{1},p_{1})$ and $(x_{2},p_{2})$. Bell's argument against the nonlocality of the EPR wave function (\ref{eprfunction}) goes as follows. If the Wigner function of the system is positive everywhere it can be used to construct a local hidden variable correlation in a form given by (\ref{lhv}) and accordingly the Bell inequality is never violated. In order to emphasize this point Bell used a nonpositive Wigner function to show that the momentum sign correlation function will violate local realism. These examples indicated a relation between the locality and the positivity of the phase space Wigner function. The relation between the EPR correlations and the Wigner distribution function has been addressed in several papers \cite{cetto,kimble,leonhardt,johansen,cohen}. 
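For later use we recall the standard Clauser-Horne-Shimony-Holt (CHSH) consequence of the locality assumption (\ref{lhv}) \cite{CHSH}. Since the local realities take only the values $\pm 1$, for every pair $(\lambda_{1},\lambda_{2})$ one has
$$
\sigma({\bf a},\lambda_{1})\left[\sigma({\bf b},\lambda_{2})+\sigma({\bf b}',\lambda_{2})\right] + \sigma({\bf a}',\lambda_{1})\left[\sigma({\bf b},\lambda_{2})-\sigma({\bf b}',\lambda_{2})\right] = \pm 2,
$$
because one of the two brackets vanishes while the other equals $\pm 2$. Averaging this identity with the positive, normalized distribution $W(\lambda_{1};\lambda_{2})$ of Eq.~(\ref{lhv}) gives
$$
\left| E({\bf a};{\bf b})+E({\bf a};{\bf b}')+E({\bf a}';{\bf b})-E({\bf a}';{\bf b}') \right| \leq 2,
$$
which is the form of the Bell inequality that will be tested below.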
Although the singular character of the wave function (\ref{eprfunction}) and the corresponding unnormalized Wigner function has been criticized, the main point of the Bell argument relating the positivity of the Wigner function to the lack of nonlocality of such a state has not been questioned. It has been argued that the problem of normalization can be simply solved by a ``smoothing'' procedure of the original wave function (\ref{eprfunction}). An example of such a ``smoothing'' procedure, with a clear application to quantum optics, has been the use of a two-mode squeezed vacuum state produced in a process of nondegenerate optical parametric amplification (NOPA) \cite{reid}. The NOPA state has been generated experimentally \cite{kimble} and used to discuss the implications of the positivity of the phase space Wigner function on the Bell inequality \cite{leonhardt}. These discussions have led to rather ambiguous results. On the one hand, it has been argued that the quantum description for the system of the NOPA as well as for the system originally discussed by EPR is consistent with deterministic realism \cite{kimble}. From this remark one can conclude that the EPR wave function (\ref{eprfunction}) cannot be used to test direct violations of the Bell inequality. This rather vexing conclusion indicates that tests of quantum nonlocality have to rely not on the original EPR wave function but on Bohm's spin-1/2 system or on exotic states described by negative Wigner functions. On the other hand, attempts have been made to design an experiment which would reveal the nonlocality of the EPR state \cite{cetto,cohen}. The purpose of this Rapid Communication is to demonstrate that the positive definite Wigner function of the EPR state provides direct evidence of the nonlocality exhibited by this state. We shall show that the positivity or the negativity of the Wigner function has a rather weak relation to the locality or the nonlocality of quantum correlations. In fact we shall show that the NOPA wave function violates the Bell inequality and that the original EPR wave function (\ref{eprfunction}) exhibits strong nonlocality, but one should be careful with the singular limit of strong squeezing (in this limit the NOPA state reduces to the EPR state). The NOPA phase space will be parameterized by two complex coherent-state amplitudes $\alpha$ and $\beta$, corresponding respectively to $(x_{1},p_{1})$ and $(x_{2},p_{2})$. The starting point of our proof is the observation that the two-mode Wigner function $W(\alpha ;\beta)$ can be expressed as \begin{equation} \label{Eq:Walphabeta} W(\alpha ;\beta) = \frac{4}{\pi^2} \Pi(\alpha ;\beta) \end{equation} where $\Pi(\alpha ;\beta)$ is the quantum expectation value of the product of displaced parity operators: \begin{equation} \label{Eq:Pialphabeta} \hat{\Pi}(\alpha ;\beta) = \hat{D}_1(\alpha) (-1)^{\hat{n}_1} \hat{D}^{\dagger}_1(\alpha) \otimes \hat{D}_2(\beta) (-1)^{\hat{n}_2} \hat{D}_2^{\dagger}(\beta) . \end{equation} The connection of the parity operator $(-1)^{\hat n}$ with the Wigner function provides an equivalent definition of the latter \cite{wignerparity}, as well as a feasible quantum optical measurement scheme \cite{wignerparityexp}. In the above formula, $\hat{D}_1(\alpha)$ and $\hat{D}_2(\beta)$ denote the unitary phase space displacement operators for the subsystems $1$ and $2$.
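The same identity holds mode by mode: for a single mode, $W(\alpha)=\frac{2}{\pi}\langle \hat{D}(\alpha)(-1)^{\hat{n}}\hat{D}^{\dagger}(\alpha)\rangle$. As a purely numerical illustration of this parity form of the Wigner function (a minimal sketch on a truncated Fock basis, with variable names chosen here only for illustration), it can be checked for the vacuum state, whose Wigner function is $\frac{2}{\pi}e^{-2|\alpha|^{2}}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = 40                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
parity = np.diag((-1.0) ** np.arange(N))  # (-1)^n
vac = np.zeros(N); vac[0] = 1.0           # vacuum state |0>

def displaced_parity(alpha):
    # <0| D(alpha) (-1)^n D(alpha)^dagger |0>
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)
    return np.real(vac @ D @ parity @ D.conj().T @ vac)

for alpha in (0.0, 0.3, 0.6 + 0.4j):
    print(alpha,
          2 / np.pi * displaced_parity(alpha),      # parity form
          2 / np.pi * np.exp(-2 * abs(alpha) ** 2)) # vacuum Wigner function
\end{verbatim}
Up to the truncation error of the finite Fock basis, the two printed values agree.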
As the measurement of the parity operator yields only one of two values, $+1$ or $-1$, there is a close analogy between the measurement of the parity operator and that of spin-1/2 projectors. The solid angle defining the direction of the spin measurement is now replaced by the coherent displacement describing the shift in phase space. Consequently, all types of Bell's inequalities derived for a correlated pair of spin-1/2 particles can be immediately used to test the nonlocality of the NOPA wave function. The two NOPA field modes are equivalent to an entangled state of two harmonic oscillators. As Eq.~(\ref{Eq:Pialphabeta}) clearly demonstrates, the correlation functions measured in such experiments are given, up to a multiplicative constant, by the joint Wigner function of the system. As a consequence we have the fundamental relation: \begin{equation} E({\bf a};{\bf b})\equiv \Pi(\alpha ; \beta). \end{equation} The original EPR state (\ref{eprfunction}) is an unnormalizable delta function. In order to avoid problems arising from this singularity, we will consider a normalizable state that can be generated in a NOPA. Such a state is characterized by the dimensionless effective interaction time $r$ (the squeezing parameter). The Wigner function of this NOPA state is well known \cite{kimble,leonhardt}, and the corresponding correlation function is given by \begin{eqnarray} \Pi(\alpha;\beta) & = & \exp[-2 \cosh 2r (|\alpha|^2 + |\beta|^2) \nonumber \\ & & + 2 \sinh 2r (\alpha \beta + \alpha^\ast \beta^\ast)]. \end{eqnarray} The Wigner function of the original EPR state (\ref{eprfunction}) is obtained in the limit $r\rightarrow \infty$. The correlation function is measured for the four combinations of $\alpha=0,\sqrt{\cal J}$ and $\beta=0,-\sqrt{\cal J}$, where ${\cal J}$ is a positive constant characterizing the magnitude of the displacement. From these quantities we construct the combination \cite{CHSH}: \begin{eqnarray} \label{B} {\cal B} & = & \Pi(0;0) + \Pi(\sqrt{\cal J};0) + \Pi(0;-\sqrt{\cal J}) - \Pi(\sqrt{\cal J}; -\sqrt{\cal J}) \nonumber \\ & = & 1 + 2 \exp(-2{\cal J} \cosh 2r) - \exp(-4{\cal J}e^{2r}), \end{eqnarray} which for local theories satisfies the inequality $-2\le {\cal B} \le 2$. Let us note that one of the components of the above combination describes perfect correlations: $\Pi(0;0) = 1$, obtained for a direct measurement of the parity operator with no displacements applied. This is a manifestation of the fact that in the parametric process photons are always generated in pairs. As depicted in Fig.~\ref{Fig:B}, the result (\ref{B}) violates the upper bound imposed by local theories. With increasing $r$, the violation of Bell's inequality is observed for smaller ${\cal J}$. We will therefore perform an asymptotic analysis for large $r$ and ${\cal J} \ll 1$. In this regime we may approximate $\cosh 2r$ appearing in the argument of the first exponential in Eq.~(\ref{B}) just by $e^{2r}/2$. Then a straightforward calculation shows that the maximum value of ${\cal B}$ (for this particular selection of coherent displacements) is obtained for \begin{equation} {\cal J} e^{2r} = \frac{1}{3} \ln 2, \end{equation} and equals ${\cal B} = 1 + 3 \cdot 2^{-4/3} \approx 2.19$. Thus, in the limit $r\rightarrow \infty$, when the original EPR state is recovered, a significant violation of Bell's inequality takes place. This result has been obtained without any serious attempt to find the maximum violation (for this purpose one should consider a general quadruplet of displacements).
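For completeness, the elementary calculation behind this value is as follows: writing $u={\cal J}e^{2r}$ and using $\cosh 2r\approx e^{2r}/2$, Eq.~(\ref{B}) reduces to \[ {\cal B}\approx 1+2e^{-u}-e^{-4u}, \qquad \frac{d{\cal B}}{du}=-2e^{-u}+4e^{-4u}=0 \quad\Longrightarrow\quad u=\tfrac{1}{3}\ln 2, \] which gives ${\cal B}=1+2\cdot 2^{-1/3}-2^{-4/3}=1+3\cdot 2^{-4/3}$.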
Let us note that in order to observe the nonlocality of the EPR state, very small displacements have to be applied, decreasing like ${\cal J} \propto e^{-2r}$. This shows the subtleties related to the original EPR state (\ref{eprfunction}) and the need for considering its regularized version. This discussion shows that, despite conflicting claims, the original EPR wave function (\ref{eprfunction}) exhibits strong nonlocality. The violation of the Bell inequality is achieved for a state that is described by a positive Wigner function. This example puts to rest various conjectures relating the positivity or the negativity of the Wigner function to the violation of local realism. We have shown that in quantum mechanics, the correlation (\ref{lhv}) can be a Wigner function itself. This is due to the fact that the Wigner function can be directly associated with the parity operator. This operator can be measured in a photon-photon coincidence experiment. Evidently, the Wigner representation cannot serve as a model local hidden variable theory describing the joint parity measurement. A straightforward explanation of this fact is given by expressing the correlation function $\Pi(\alpha ; \beta)$ in the form analogous to Eq.~(\ref{lhv}): \begin{eqnarray} \Pi(\alpha ; \beta) & = & \int \text{d}^2 \lambda_1 \int \text{d}^2 \lambda_2 \, \frac{\pi}{2} \delta^{(2)}(\alpha - \lambda_1) \nonumber \\ & & \times \frac{\pi}{2} \delta^{(2)}(\beta - \lambda_2) W(\lambda_1 ; \lambda_2), \end{eqnarray} where $\lambda_1$ and $\lambda_2$ are now complex phase space look-alikes of hidden variables. Though the outcome of the parity measurement may be only $+1$ or $-1$, the analog of the local realities appearing in the Wigner representation is described by the unbounded delta functions \begin{eqnarray} \sigma({\bf a},\lambda_{1}) & \equiv & \frac{\pi}{2} \delta^{(2)}(\alpha - \lambda_1), \nonumber\\ \sigma({\bf b},\lambda_{2}) & \equiv & \frac{\pi}{2} \delta^{(2)}(\beta - \lambda_2), \end{eqnarray} which makes the Bell inequality void. A tempting aspect of the Wigner representation is the interpretation of quantum mechanics in classical-like terms in phase space. One well known difficulty with this approach is the negativity of the Wigner function \cite{ghosts}. The example discussed in this Communication shows that quantum mechanics manifests its nature also in another, equally important way: the Wigner representations of quantum observables cannot in general be interpreted as phase space distributions of possible experimental outcomes. In particular, the Wigner representation of the parity operator is not a bounded reality corresponding to the dichotomic result of the measurement. This enables the violation of Bell's inequalities even for quantum states described by positive definite Wigner functions. {\it Acknowledgements.} This research was partially supported by the Polish KBN grants and by Stypendium Krajowe dla M{\l}odych Naukowc\'{o}w Fundacji na rzecz Nauki Polskiej. \begin{references} \bibitem[*]{unm} Also at the Center of Advanced Studies and Department of Physics, University of New Mexico, Albuquerque NM 87131, USA. \bibitem{epr} A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. {\bf 47}, 777 (1935). \bibitem{bohm} D. Bohm, {\em Quantum Theory} (Dover, New York, 1989), Chap.~21. \bibitem{bell} J. S. Bell, Physics {\bf 1}, 195 (1965). \bibitem{bellwigner} J. S. Bell, {\em Speakable and unspeakable in quantum mechanics} (Cambridge University Press, Cambridge, 1987), Chap.~21. \bibitem{cetto} A. M. Cetto, L.
De La Pena, and E. Santos, Phys. Lett. {\bf 113A}, 304 (1985). \bibitem{kimble} Z. Y. Ou, S. F. Pereira, H. J. Kimble and K. C. Peng, Phys. Rev. Lett. {\bf 68}, 3663 (1992); Z. Y. Ou, S. F. Pereira and H. J. Kimble, Appl. Phys. B {\bf 55}, 265 (1992). \bibitem{leonhardt} U. Leonhardt, Phys. Lett. A {\bf 182}, 195 (1993); U. Leonhardt and J. A. Vaccaro, J. Mod. Opt. {\bf 42}, 939 (1995). \bibitem{johansen} L. M. Johansen, Phys. Lett. {\bf A236}, 173 (1997). \bibitem{cohen} O. Cohen, Phys. Rev. A {\bf 56}, 3484 (1997). \bibitem{reid} M. D. Reid and P. D. Drummond, Phys. Rev. Lett. {\bf 60}, 2731 (1988). \bibitem{wignerparity} A. Royer, Phys. Rev. A {\bf 15}, 449 (1977); H. Moya-Cessa and P. L. Knight, Phys. Rev. A {\bf 48}, 2479 (1993). \bibitem{wignerparityexp} K. Banaszek and K. W\'{o}dkiewicz, Phys. Rev. Lett. {\bf 76}, 4344 (1996); S. Wallentowitz and W. Vogel, Phys. Rev. A {\bf 53}, 4528 (1996). \bibitem{CHSH} J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. {\bf 23}, 880 (1969); J. S. Bell, in {\em Foundations of Quantum Mechanics}, ed. by B. d'Espagnat (Academic, New York, 1971). \end{references} \begin{figure} \caption{Plot of the combination ${\cal B}$ defined in Eq.~(\protect\ref{B}).} \label{Fig:B} \end{figure} \end{document}
\begin{document} \title{Barrier Coverage with Non-uniform Lengths to Minimize Aggregate Movements\footnote{This work was supported by the Australian Research Council (ARC) under the Discovery Projects funding scheme (DP150101134). Serge Gaspers is the recipient of an ARC Future Fellowship (FT140100048).}} \author[1,2]{Serge Gaspers} \author[3]{Joachim Gudmundsson} \author[3]{Juli\'{a}n Mestre} \author[1,3]{Stefan R\"{u}mmele} \affil[1]{UNSW Sydney, Australia} \affil[2]{Data61, CSIRO, Australia} \affil[3]{The University of Sydney, Australia} \Copyright{Serge Gaspers, Joachim Gudmundsson, Juli\'{a}n Mestre, and Stefan R\"{u}mmele} \subjclass{F.2.2 Nonnumerical Algorithms and Problems} \keywords{Barrier coverage, Sensor movement, Approximation, Parameterized complexity} \maketitle \begin{abstract} Given a line segment $I=[0,L]$, the so-called \emph{barrier}, and a set of $n$ sensors with varying ranges positioned on the line containing $I$, the \emph{barrier coverage} problem is to move the sensors so that they cover $I$, while minimising the total movement. In the case when all the sensors have the same radius the problem can be solved in $O(n \log n)$ time (Andrews and Wang, Algorithmica 2017). If the sensors have different radii the problem is known to be \ccfont{NP}-hard to approximate within a constant factor (Czyzowicz \textit{et~al.}, ADHOC-NOW 2009). We strengthen this result and prove that no polynomial time $\rho^{1-\varepsilon}$-approxi\-mation algorithm exists unless $\ccfont{P}=\ccfont{NP}$, where $\rho$ is the ratio between the largest radius and the smallest radius. Even when we restrict the number of sensors that are allowed to move by a parameter $k$, the problem turns out to be \W{1}-hard. On the positive side we show that a $((2+\varepsilon)\rho+2/\varepsilon)$-approximation can be computed in $O(n^3/\varepsilon^2)$ time and we prove fixed-parameter tractability when parameterized by the total movement assuming all numbers in the input are integers. \end{abstract} \section{Introduction} The original motivation for the problem of covering barriers comes from intrusion detection, where the goal is to guard the boundary (barrier) of a region in the plane. In this case the barrier can be described by a polygon and the initial position of the sensors can be anywhere in the plane. The barrier coverage problem, and many of its variants, has received much attention in the wireless sensor community, see for example~\cite{AroraEtal-exscal-05,ckl-lbcws-10,kla-bcws-05} and the recent surveys~\cite{tw-sbcds-14,wgwgc-sbcs-16}. Large scale barriers with more than a thousand sensors have been experimentally tested and evaluated~\cite{AroraEtal-exscal-05}. In a general setting of the barrier coverage problem each sensor has a fixed sensor radius and is initially placed in the plane and the cost of moving a sensor is proportional to the Euclidean distance it is moved. In this paper we consider the special case where we have $n$ sensors on the real line. Each sensor $i=1,\ldots, n$ has a location $x_i$ and a radius $r_i$. When located at $y_i$, the $i$-th sensor covers the \emph{interval} $B(y_i, r_i) = [y_i - r_i, y_i + r_i]$. The goal is to move around the sensor intervals to cover the interval $[0,L]$, the so-called \emph{barrier}. In other words, for each sensor, we need to decide its new location $y_i$ so that $ [0,L] \subseteq \bigcup_{i} B(y_i, r_i). 
$ The cost of the solution is the sum of sensor movements: $ \mathrm{cost}(y) = \sum_{i} |y_i - x_i|, $ and the objective is to find a feasible solution of minimum cost. \begin{figure} \caption{(left) Illustrating an instance with three sensors $\{1,2,3\}$.} \label{fig:intro} \end{figure} \subsection{Our Results and Related Work} Even though the barrier coverage problem, and many of its variants, has received a lot of attention from the wireless sensor community, not much is known from a theoretical point of view. In the literature three different optimisation criteria have been considered: minimize the sum of movements (min-sum), minimize the maximum movement (min-max), and minimize the number of sensors that move (min-num). Dobrev \textit{et~al.}~\cite{Dobrev-cbcrs-15} studied the min-sum and min-max versions in the case when the sensors' start positions can be anywhere in the plane and $k$ parallel barriers are required to be covered. However, they restricted the movement of the sensors to be perpendicular to the barriers. They gave an $O(kn^{k+1})$ time algorithm. If the barriers are allowed to be horizontal and vertical, then the problem is \ccfont{NP}-complete, even for two barriers. Most of the existing research has focussed on the special case when the barrier is a line segment $I$ and all the sensors are initially positioned on a line containing $I$. \paragraph*{The Min-Sum model.} If all intervals have the same radius, it is not difficult to show that any solution can be converted into one where $x_i < x_j$ if and only if $y_i < y_j$ without incurring any extra cost. Czyzowicz \textit{et~al.}~\cite{conf/adhoc/CzyzowiczKKLNOSUY10} gave an $O(n^2)$ time algorithm for this case, which was later improved to $O(n \log n)$ by Andrews and Wang~\cite{AndrewsW17}. Andrews and Wang also showed a matching $\Omega(n \log n)$ lower bound. When the radii are non-uniform, this is not the case anymore. In fact, Czyzowicz \textit{et~al.}~\cite{conf/adhoc/CzyzowiczKKLNOSUY10} showed that this variant of the problem is \ccfont{NP}-hard, and remarked that not even a $2$-approximation is possible in polynomial time. In fact, their hardness proof can be modified to show (Theorem~\ref{thm:inapproximabilty}) that no approximation within any factor is possible in polynomial time. The catch is that the instance used in the reduction needs to have some intervals that are very small and some intervals that are very large. This is a scenario that is not likely to happen in practice, so the question is whether there is an approximation algorithm whose factor depends on the ratio of the largest radius to the smallest radius. Let $\rho$ be the ratio between the largest radius $r_{\max} = \max_i r_i$ and the smallest radius $r_{\min} = \min_i r_i$. Theorem~\ref{thm:inapproximabilty} states that no $\rho^{1-\varepsilon}$-approximation algorithm exists for any $\varepsilon>0$ unless $\ccfont{P}=\ccfont{NP}$. On the positive side we show an $O(n^3/\varepsilon^2)$ time $((2+\varepsilon)\rho+2/\varepsilon)$-approximation algorithm for any given $\varepsilon>0$. The general idea is to look at ``order-preserving'' solutions, that is, solutions where the sensors covering the barrier maintain their individual order from left to right. This will be described in more detail in Section~\ref{sec:order-preserving}.
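Throughout, feasibility and cost of a candidate placement $y$ are exactly as defined above; purely as an illustration (a minimal sketch with variable names of our choosing, not the approximation algorithm itself), they can be checked as follows.
\begin{verbatim}
def covers_barrier(y, r, L):
    # sweep the sensor intervals [y_i - r_i, y_i + r_i] from left to right
    reach = 0.0
    for lo, hi in sorted((yi - ri, yi + ri) for yi, ri in zip(y, r)):
        if lo > reach:        # uncovered gap before this interval
            return False
        reach = max(reach, hi)
        if reach >= L:
            return True
    return reach >= L

def cost(y, x):
    # total movement of all sensors
    return sum(abs(yi - xi) for yi, xi in zip(y, x))
\end{verbatim}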
We also study the problem from the perspective of parameterized complexity and show that the problem is hard even if the number of intervals required to move is small, that is, the problem is \W{1}-hard when parameterized by the number of moved intervals. Complementing this, we provide a fixed-parameter tractable algorithm when the problem is parameterized by the budget, i.e., the target sum of movements. \paragraph*{The Min-Max and Min-Num models.} Czyzowicz \textit{et~al.}~\cite{conf/adhoc/CzyzowiczKKLNOSUY10} also considered the min-max version of the problem, where the aim is to minimize the maximum movement. If the sensors have the same radius, they gave an $O(n^2)$ time algorithm. Chen \textit{et~al.}~\cite{cglw-ammsm-13} improved the bound to $O(n \log n)$. In the same paper Chen \textit{et~al.}{} presented an $O(n^2 \log n)$ time algorithm for the case when the sensors have different radii. For the min-num version, Mehrandish \textit{et~al.}~\cite{Mehrandish-11} showed that the problem can be solved in polynomial time using dynamic programming if the sensor radii are uniform; otherwise the problem is \ccfont{NP}-hard. \section{Order-Preserving Approximations} \label{sec:order-preserving} Let $y$ be a solution to the barrier problem. We say a subset of intervals $S \subseteq \{1, \ldots, n\}$ is \emph{active} for a solution $y$ if the intervals in $S$ alone are enough to cover the barrier. Additionally, we say that $S$ is a \emph{minimal active set} if no proper subset of $S$ is active. Notice that in an optimal solution $y$, if $y_i \neq x_i$, then $i$ must belong to a minimal active set. Without loss of generality we assume that $x_1 \leq x_2 \leq \cdots \leq x_n$. We say a solution $y$ is \emph{order-preserving} if it has an active set $S$ such that for any $i, j \in S$ with $i < j$, we have $y_i < y_j$. Our algorithm is based on finding a nearly optimal order-preserving solution. First we show, in Section~\ref{ssec:order-preserving}, that there always exists an order-preserving solution that is a good approximation of the optimal unrestricted solution, and prove that our analysis is almost tight. Then, in Section~\ref{ssec:algorithm}, we show how to compute a nearly optimal order-preserving solution in polynomial time. \subsection{Quality of Order-Preserving Solutions} \label{ssec:order-preserving} The high level idea to prove that there exists an order-preserving solution that approximates the optimal solution is to start from an arbitrary optimal solution $y$ and progressively modify the positions of two overlapping active intervals so that they are in the right order and together cover the exact same portion of the barrier, as shown in Fig.~\ref{fig:order-preserving-example}. We refer to this process as the \emph{untangling} process. \begin{figure} \caption{Two overlapping intervals $i$ and $j$ being swapped. After the swap the union of the intervals covers the same section of the barrier but their centers swap order.} \label{fig:order-preserving-example} \end{figure} \begin{comment} Note that after swapping two overlapping active intervals, their centers swap order. Notice that because both intervals are active, before the swap, there cannot be another interval centered in between them. This means that after the swap, the large interval only swaps position with the smaller interval.
Unfortunately, the same is not true of the smaller interval: After the swap the smaller interval can be in disorder with one new interval; if this happens, however, the smaller interval is not needed anymore and we can move it to its original position and drop it from the active set. \end{comment} This untangling process continues until all overlapping active intervals are in order. Let us denote the resulting solution with $\hat{y}$. Our goal is to charge the cost of $\hat{y}$ to the intervals in such a way that the total charge an interval can receive is comparable to its contribution to the cost of $y$. More formally, we define a \emph{$\beta$-balanced cost sharing scheme} to be a function $\xi: S \rightarrow \mathbb{R}^+$, where \begin{enumerate}[i)] \item $\mathrm{cost}(\hat{y}) \leq \sum_{i \in S} \xi(i)$, and \item $\xi(i) \leq \beta\, |x_i - y_i|$ for all $i \in S$. \end{enumerate} It is easy to see that the existence of a well-balanced cost sharing scheme implies a good approximation guarantee. \begin{lemma}\label{lem:untangling} Let $\hat{y}$ be the result of untangling an optimal solution $y$. If $\hat{y}$ admits a $\beta$-balanced cost sharing scheme then $\hat{y}$ is $\beta$-approximate. \end{lemma} \begin{proof} We bound the cost of $\hat{y}$ as follows: $\mathrm{cost}(\hat{y}) \leq \sum_{i} \xi(i) \leq \sum_i \beta |x_i - y_i| = \beta \cdot \mathrm{cost}(y) = \beta \cdot \textsc{opt}$, where the first two inequalities follow from the definition of $\beta$-balancedness and the last equality follows from the fact that $y$ is optimal. \end{proof} To show the existence of a good cost sharing scheme, we will study the structure of an optimal solution $y$ and its untangling process leading to the order-preserving solution $\hat{y}$. Let $\gamma(i) \subseteq S$ be the set of indices $j$ that \emph{cross} $i$, that is, $ i < j \text{ and } y_i > y_j, \text{ or } i > j \text{ and } y_i < y_j$. Let $\widetilde{\gamma}(i)=\{j \in \gamma(i) : |x_i - y_i| \geq |x_j - y_j|\}$, that is, the set of sensors in $\gamma(i)$ that move at most as far as $i$. If $y_i<x_i$ we define $h(i)$ to be the $y$-rightmost sensor in $\widetilde{\gamma}(i)$, and we let $\ell(i)$ be the $y$-rightmost sensor in $\widetilde{\gamma}(i)$ to the left of or at $x_i$. See Figure~\ref{fig:structure}. Symmetrically, if $y_i\geq x_i$ we define $h(i)$ to be the $y$-leftmost sensor in $\widetilde{\gamma}(i)$, and $\ell(i)$ to be the $y$-leftmost sensor in $\widetilde{\gamma}(i)$ to the right of or at $x_i$. For the sake of brevity, when the interval $i$ is clear from context, we refer to $h(i)$ as $h$ and to $\ell(i)$ as $\ell$. Note that $\ell(i)$ is not well-defined in the case when there are no intervals between $x_i$ and $y_i$. Let us make some observations about the intervals. Figure~\ref{fig:structure} sums up these observations by depicting $i$ together with $\widetilde{\gamma}(i)$ with $\ell$ and $h$ highlighted. \begin{figure} \caption{The interval $i$ together with $\widetilde{\gamma}(i)$, with $\ell$ and $h$ highlighted.} \label{fig:structure} \end{figure} \begin{obs} \label{obs:close-gamma} Every $j \in \widetilde{\gamma}(i)$ must have $y_j \in [x_i - |x_i - y_i|, x_i + |x_i - y_i|]$. \end{obs} \begin{proof} Note that if $x_i = y_i$ then the claim is trivially true since $\widetilde{\gamma}(i) = \emptyset$. Without loss of generality assume $x_i > y_i$, since the case $x_i < y_i$ is symmetric. Since $j \in \widetilde\gamma(i)$ we have $|x_j - y_j| \leq |x_i - y_i|$, and it follows that $x_j < x_i$ and $y_j > y_i$.
Therefore, $y_j > y_i = x_i - |x_i - y_i|$ and $y_j < x_j + | x_i - y_i| < x_i + |x_i - y_i|$. \end{proof} \begin{obs} \label{obs:at-most-two} Let $y$ be an optimal solution and let $S$ be a minimal active set of~$y$. Every point stabs (intersects) at most two intervals in $S$. \end{obs} \begin{proof} If three active intervals in $S$ are stabbed by one point, then one of those intervals can be removed without making the solution infeasible, thus contradicting minimality of $S$. \end{proof} \begin{obs} \label{obs:overlaps-in-gamma} In an optimal solution $y$, if $y_i < x_i$ then the intervals $j \in \widetilde{\gamma}(i)$ such that $y_j > x_i$ do not overlap; similarly, if $y_i > x_i$ then the intervals $j \in \widetilde{\gamma}(i)$ such that $y_j < x_i$ do not overlap. \end{obs} \begin{proof} Without loss of generality assume $x_i > y_i$, since the case $x_i < y_i$ is symmetric. If there were two indices $j, j' \in \widetilde{\gamma}(i)$ that overlap in $y$ and $y_j > y_{j'} > x_i$, then we could reduce $y_{j'}$ by $r_j + r_{j'} - (y_j - y_{j'})$ to get another feasible solution with lower cost, since $x_j,x_{j'}<x_i$. See Figure~\ref{fig:observation} for an illustration. \end{proof} \begin{figure} \caption{Illustration of the proof of Observation~\ref{obs:overlaps-in-gamma}.} \label{fig:observation} \end{figure} \begin{obs} \label{obs:length-gamma} If $\ell$ is well defined for $i$ in an optimal solution $y$ then \[\sum_{j \in \tilde{\gamma}(i)} 2 r_j \leq 3 |x_i - y_i| + r_\ell + r_h.\] \end{obs} \begin{proof} Note that if $x_i = y_i$ then the claim is trivially true since $\widetilde{\gamma}(i) = \emptyset$. Without loss of generality assume $x_i > y_i$, since the case $x_i < y_i$ is symmetric. By Observation~\ref{obs:at-most-two} every point in the interval $[y_i, x_i]$ stabs at most two intervals from $\widetilde{\gamma}(i)$. By Observation~\ref{obs:overlaps-in-gamma} every point in the interval $[x_i, y_h]$ stabs at most one interval $j \in \widetilde{\gamma}(i)$ such that $y_j>x_i$. This accounts for the term $3 |x_i - y_i|$. Additionally, we have to add $r_h$ to account for the interval $[y_h,y_h+r_h]$ and $r_\ell$, since $\ell(i)$ might overlap the interval $[x_i,x_i+r_\ell]$. Let $j$ be the $y$-leftmost sensor in $\widetilde{\gamma}(i)$. We do not have to account for the fact that $j$ might extend to the left of $y_i$, that is, for the interval $[y_j-r_j,y_i]$. The reason is that $|y_i - y_j + r_j| < r_i$, and we have already (needlessly) counted the interval $[y_i, y_i + r_i]$ when assuming that every point of $[y_i, x_i]$ stabs at most two intervals from $\widetilde{\gamma}(i)$. It follows that $\sum_{j \in \tilde{\gamma}(i)} 2 r_j \leq 3 |x_i - y_i| + r_\ell + r_h$. \end{proof} \begin{obs} \label{obs:card-gamma} If $\ell$ is well defined for $i$ in an optimal solution $y$ then \[|\widetilde{\gamma}(i)| \leq 3 + \frac{ 3\, |x_i - y_i| - 2\, r_i - r_\ell - r_h}{2\, r_{\min}}.\] \end{obs} \begin{proof} From Observation~\ref{obs:length-gamma} we have $\sum_{j \in \tilde{\gamma}(i)} 2 r_j \leq 3 \, |x_i - y_i| + r_\ell + r_h$. Notice that each interval in $\tilde{\gamma}(i)$ has length at least $2 r_{\min}$; therefore the number of intervals in $\widetilde{\gamma}(i)$ is no more than $\sum_{j \in \tilde{\gamma}(i)} 2 r_j$ divided by $2 r_{\min}$. To get a better bound we count three intervals explicitly: $\ell(i)$, $h(i)$, and $j$, where $j$ is the $y$-leftmost sensor if $x_i> y_i$ or the rightmost otherwise. Note that if $x_i = y_i$ then the claim is trivially true since $\widetilde{\gamma}(i) = \emptyset$.
Without loss of generality assume $x_i > y_i$, since the case $x_i < y_i$ is symmetric. Ignoring $j$, we can adjust the bound from Observation~\ref{obs:length-gamma} as follows. Since by Observation~\ref{obs:at-most-two} every point stabs at most two intervals, only $j$ might overlap with $i$ in $y$. Hence, we only need to consider the interval $[y_i+r_i,x_i]$ where every point stabs at most two intervals from $\widetilde{\gamma}(i)$. Hence, ignoring the three explicitly counted intervals, the sum of the lengths of the remaining intervals of $\widetilde{\gamma}(i)$ can be bounded by $2 (|x_i - y_i| - r_i) + |x_i - y_i| + r_\ell + r_h - 2 r_\ell - 2 r_h$. Therefore, we have $|\widetilde{\gamma}(i)| \leq 3 + \frac{ 3 |x_i - y_i| - 2 r_i - r_\ell - r_h}{2 r_{\min}}$. \end{proof} Now everything is in place to describe our cost sharing schemes. Our first scheme is simpler to describe and is $(3 \rho + 4)$-balanced. Our second scheme is a refinement and is $\big((2 + \epsilon) \rho + 2/\epsilon \big)$-balanced for any $\epsilon > 0$. \begin{lemma}\label{lem:scheme-first-bound} For an optimal solution $y$ to the barrier problem there is an untangling $\hat{y}$ of $y$ such that there is a $(3 \rho + 4)$-balanced cost sharing scheme. \end{lemma} \begin{proof} The high level idea of our charging scheme is as follows: When $i$ swaps places with $j \in \tilde{\gamma}(i)$, we charge $i$ enough to pay for the movements of both $i$ and $j$. In particular if $\tilde{\gamma}(i) = \emptyset$ then we do not charge $i$ at all, that is, $\xi(i) = 0$. From now on we assume that $\tilde{\gamma}(i) \neq \emptyset$. For the analysis it will be useful to study how $i$ moves in the untangling process. If $y_i < x_i$ then swapping $i$ and $j \in \tilde{\gamma}(i)$ always moves $i$ to the right; similarly, if $y_i > x_i$ then swapping $i$ and $j \in \tilde{\gamma}(i)$ always moves $i$ to the left. On the other hand, when swapping $i$ and $j \in \gamma(i) \setminus \tilde{\gamma}(i)$, the interval $i$ can move either left or right. We consider two scenarios. If $\hat{y}_i$ ends up on the same side of $x_i$ as $y_i$ then $ |x_i - \hat{y}_i| \leq \sum_{j\in \gamma(i)\setminus \tilde{\gamma}(i)} 2 r_j + |x_i - y_i|$, so we charge $2r_j$ to each $j \in \gamma(i) \setminus \tilde{\gamma}(i)$ and $|x_i - y_i|$ to $i$. Thus, under this scenario, the total amount charged to $i$ is \begin{equation} \label{eq:xi-left} \xi(i) \leq 2r_i |\widetilde{\gamma}(i)| + |x_i - y_i| \end{equation} The second scenario is when $\hat{y}_i$ and $y_i$ end up on opposite sides of $x_i$ then $ |x_i - \hat{y}_i| \leq \sum_{j \in \gamma(i)} 2 r_j - |x_i - y_i|$, so we charge $\sum_{j \in \tilde{\gamma}(i)} 2 r_j - |x_i - y_i|$ to $i$ and $2r_j$ to each $j \in \gamma(i) \setminus \tilde{\gamma}(i)$. Thus, under this scenario, the total amount charged to $i$ is \begin{equation} \label{eq:xi-right} \xi(i) \leq 2r_i |\widetilde{\gamma}(i)| + \sum_{j \in \widetilde{\gamma}(i)} 2 r_j - |x_i - y_i|. \end{equation} The rest of the proof is broken up into four cases. \noindent Case 1: Intervals $i$ and $h(i)$ overlap in $y$. In this case $\widetilde{\gamma}(i) = \set{h(i)}$ and $\widetilde{\gamma}(h(i)) = \emptyset$. Furthermore, if there is another interval $i'$ such that $h(i') = h(i)$ then $i'$ and $h(i')$ do not overlap. 
Indeed, if $y_i$ lies in between $y_{i'}$ and $y_{h(i)}$ then $i'$ and $h(i)$ cannot overlap, since otherwise there is a point covered by $i$, $i'$, and $h(i)$; if $y_{i'}$ lies in between $y_i$ and $y_{h(i)}$ we get a similar contradiction, so it must be that $y_{h(i)}$ lies in between $y_i$ and $y_{i'}$. See Fig.~\ref{fig:case1}. This means that $i$ and $i'$ cross, so either $i \in \widetilde{\gamma}(i')$ or $i' \in \widetilde{\gamma}(i)$, which, together with $h(i)= h(i')$, yields a contradiction. Therefore, we can run the untangling process so that all pairs $i$ and $h(i)$ that overlap in $y$ are swapped first. Let $y'$ be the solution after these initial swaps are carried out. Then, \begin{align*} |x_i - {y'}_i| + |x_{h(i)} - {y'}_{h(i)}| & \leq |x_i - y_i| + |x_{h(i)} - y_{h(i)}| + 2 |r_i - r_{h(i)}|\\ & \leq 3 ( |x_i - y_i| + |x_{h(i)} - y_{h(i)}|) \\ & \leq 6 |x_i - y_i|. \end{align*} The first inequality is due to the fact that the additional cost comes from swapping $i$ and $h$, where at most one of them moves in a direction that increases the cost and they are overlapping. Hence the additional cost is bounded by $2 |r_i - r_{h(i)}|$. The second inequality is due to the fact that the movement $|x_i - y_i| + |x_{h(i)} - y_{h(i)}|$ needs to be larger than $|r_i - r_{h(i)}|$ for $i$ and $h$ to swap positions and both be active. The last inequality uses $|x_{h(i)} - y_{h(i)}| \leq |x_i - y_i|$, which holds since $h(i) \in \widetilde{\gamma}(i)$. Later on in the untangling process, $i$ and $h$ may be swapped with another interval, call it $j$, causing them to move further and to increase their contribution towards $\mathrm{cost}(\hat{y})$. If this happens, we charge the movement of $i$, or $h$, to $j$. Therefore, setting $\xi(i) = 6 |x_i - y_i|$ is enough to cover the contribution of $i$ and $h$ to the cost of $y$ that is not covered by other intervals. Obviously, the scheme so far is $(3 \rho + 4)$-balanced. \begin{figure} \caption{Case 1: the intervals $i$ and $h(i)$ overlap in $y$.} \label{fig:case1} \end{figure} The proofs of Cases 2 and 3 are deferred to the \longversion{appendix} \shortversion{long version~\cite{arxiv}} where it is shown that when $\ell$ is not well-defined (Case 2) or $\ell$ is well-defined and intervals $\ell$ and $i$ overlap in $y$ (Case 3), then $\frac{\xi(i)}{|x_i-y_i|} \leq 2 \rho + 1$. \noindent Case 4: $\ell$ is well-defined and intervals $i$ and $\ell$ do not overlap in $y$. The assumption implies $|x_i - y_i| \geq r_i + r_\ell$.
Since we will use Observation~\ref{obs:length-gamma} to bound $\sum_{j \in \widetilde{\gamma}(i)} 2r_j$, it follows that the sub-case when $i$ is charged the most is when $y_i$ and $\hat{y}_i$ are on opposite sides of $x_i$, so we start with the bound provided by~\eqref{eq:xi-right}: \begin{align*} \xi(i) & \leq 2r_i |\widetilde{\gamma}(i)| + \sum_{j \in \widetilde{\gamma}(i)} 2 r_j - |x_i - y_i| \\ & \leq r_i \left( 6 + \frac{3 |x_i - y_i| - 2r_i - r_\ell - r_h}{r_{\min}} \right) + 2|x_i - y_i| + r_\ell + r_h \\ & = \left( 3 \frac{r_i}{r_{\min}} + 2 + r_i \frac{6 - 2r_i/r_{\min} - r_\ell/r_{\min} - r_h/ r_{\min} + r_h / r_i + r_\ell / r_i}{|x_i - y_i|} \right) |x_i - y_i| \\ & \leq \left( 3 \frac{r_i}{r_{\min}} + 2 + \frac{4 - 2r_i/r_{\min} + 2 r_{\min} / r_i}{1 + r_{\min}/r_i} \right) |x_i - y_i| \\ & \leq \left( 3 \frac{r_i}{r_{\min}} + 4\right) |x_i - y_i|\\ & \leq \left( 3 \rho + 4\right) |x_i - y_i| \end{align*} where the second inequality follows from Observations~\ref{obs:length-gamma} and~\ref{obs:card-gamma}, the third inequality follows from $|x_i - y_i| \geq r_i + r_\ell$, the fourth inequality follows from the fact that the right hand side of the previous line decreases with $r_\ell$ and $r_h$, and so it is maximized when $r_\ell = r_h = r_{\min}$, and the fifth inequality follows from the fact that the third term inside the parentheses is a decreasing function for $r_i \geq r_{\min}$. This completes the proof of Lemma~\ref{lem:scheme-first-bound}. \end{proof} \begin{lemma} \label{lem:scheme-bound} For an optimal solution $y$ to the barrier problem there is an untangling $\hat{y}$ of $y$ such that there is a $\big((2 +\epsilon) \rho + 2/\epsilon\big)$-balanced charging scheme. \end{lemma} \begin{proof}[Proof sketch] The key insight to get this charging scheme is to realize that the intervals $j \in \widetilde{\gamma}(i)$ such that $y_i$ and $y_j$ end up on opposite sides of $x_i$ must have $|x_j - y_j| > 0$, so we can use some of this cost to pay for the distance $j$ moves when swapping places with $i$. If $|x_i - y_j| \geq \epsilon |x_i - y_i|$, then swapping $i$ and $j$ causes $j$ to move $2r_i$; we charge that to $j$ instead of $i$, as before. In this modified charging scheme $i$ gets charged $(1- \epsilon) \frac{r_i}{r_{\min}} |x_i - y_i|$ less because it does not pay for the movement of $j \in \widetilde{\gamma}(i)$ with $y_j > x_i (1+\epsilon)$. On the other hand, it has to pay for its own movement when swapped with some $j'$ such that $i \in \widetilde{\gamma}(j')$ and $|x_i - y_i| \geq \epsilon |x_{j'} - y_{j'}|$. However, it can be shown that the total extra charge that an interval $i$ is given is at most $\frac{2}{\epsilon} |x_i - y_i|$. Therefore, the scheme is $\big((2 +\epsilon) \rho + 2/\epsilon\big)$-balanced. \end{proof} Selecting $\epsilon$ appropriately gives an approximation factor of $2(\rho + \sqrt{2 \rho})$. We conclude this sub-section by showing that our analysis is almost tight. \begin{lemma} \label{lem:rho-gap} There is a family of instances where the ratio of the cost of the best order-preserving solution to the cost of the unrestricted optimal solution tends to $\rho$. \end{lemma} \begin{figure} \caption{The instance used in the proof of Lemma~\ref{lem:rho-gap}.} \label{fig:rho_lowerbound} \end{figure} \begin{proof} Consider the instance in Figure~\ref{fig:rho_lowerbound}. There are $\frac{L-2\rho}{2}$ unit-radius intervals covering $[0, L-2\rho]$ and one $\rho$-radius interval covering $[-\rho, \rho]$.
The optimal solution moves the long interval a distance of $L -\rho$ to the right to cover $[L - 2\rho, L]$, at a cost of $L - \rho$. On the other hand, the order-preserving solution involves moving each small interval $2\rho$ units to the right, at a cost of $2\rho \frac{L-2\rho}{2}$. For large enough $L$, the ratio of the costs of these solutions tends to $\rho$. \end{proof} As a closing note, we mention that our analysis of the current untangling procedure is nearly tight. Indeed, consider the instance in Figure~\ref{fig:rho_lowerbound2}. The optimal solution moves the long interval a distance of $L-\rho$ to the right. If there is a small gap between two consecutive small intervals, every interval will be active; therefore, in the untangled solution every small interval is moved a distance of $2\rho$ to the right. This means that the ratio of the cost of the untangled solution to $\textsc{opt}$ tends to $2 \rho$ as $L$ grows. \begin{figure} \caption{An instance showing that the analysis of the untangling procedure is nearly tight.} \label{fig:rho_lowerbound2} \end{figure} \subsection{Computing Good Order-Preserving Solutions} \label{ssec:algorithm} First we describe a pseudo-polynomial time algorithm for finding an optimal order-preserving solution. Then we show how to get a $(1+\epsilon)$-approximate order-preserving solution in strongly polynomial time. \begin{lemma} Assuming the coordinates defining the instance are integral, there is an $O(\textsc{opt}^2 n )$ time algorithm for computing an optimal order-preserving solution, where $\textsc{opt}$ is the value of said solution. \end{lemma} \begin{proof} Consider the following dynamic programming formulation where we let $T[i,b]$ be the largest value such that there is an order-preserving solution using the intervals $1, \ldots, i$ to cover $[0, T[i,b]]$ having cost at most $b$. For $i=0$ there is no active set and so $T[0, b] = 0$ for all $b$. For $i > 0$, if $i$ is not part of the active set of the solution that defines $T[i,b]$ then $T[i,b] = T[i-1, b]$. For $i > 0$, if $i$ is part of the active set in the optimal solution then we can condition on how much $i$ moves, say $k$ units either to the left or to the right. The most coverage that we can possibly get is to move $i$ to $y_i = T[i-1, b-k] + r_i$, which would allow a cover up to $T[i-1, b-k] + 2r_i$; however, this is only possible if $|T[i-1, b-k] + r_i - x_i| \leq k$. On the other hand, if $|T[i-1, b-k] + r_i - x_i| > k$ then it must be that $x_i < T[i-1, b-k] + r_i$ (otherwise $k$ needs to be larger) and the best coverage we can get is then $x_i + k$, which should be larger than $T[i-1, b-k]$. At this point it is straightforward to write a recurrence for $T[i,b]$ that can be computed in $O(b)$ time given the values for $T[i-1, \ast]$. There are $n \times \textsc{opt}$ dynamic programming states and each takes $O(\textsc{opt})$ time to compute. \end{proof} \begin{lemma} There is an $O(n^3/\epsilon^2)$ time algorithm for computing a $(1+\epsilon)$-approximate order-preserving solution. \end{lemma} \begin{proof} For $q = \frac{\epsilon \cdot \textsc{opt}}{n}$ we define the following objective function: $ \mathrm{cost}'(y) = \sum_{i} \left\lceil{\frac{|y_i - x_i|}{q}}\right\rceil. $ This new cost function is closely related to the original objective, namely: $ \mathrm{cost}(y) \leq q \cdot \mathrm{cost}'(y) \leq \mathrm{cost}(y) + q n. $ Using the same dynamic programming formulation as in the pseudo-polynomial time algorithm, we can optimize $\mathrm{cost}'$ in $O(n^3 / \epsilon^2)$ time.
Furthermore, the value of this solution under the original objective is at most $(1+\epsilon) \textsc{opt}$, so the claim follows. \end{proof} \section{Inapproximability Results} The known \ccfont{NP}-hardness proof for the barrier coverage problem~\cite{conf/adhoc/CzyzowiczKKLNOSUY10} is a reduction from 3-{\sc Partition}. The reduction takes an instance of 3-{\sc Partition} and creates an instance of the barrier coverage problem with integral values, $n+1$ different radii values, and $\rho = c n^d$ for some constants $c$ and $d$. Computing a 2-approximate solution in this instance is enough to decide the 3-{\sc Partition} instance. Therefore, there is no 2-approximation unless $\ccfont{P}=\ccfont{NP}$. In fact, the same reduction can be used to obtain inapproximability results in terms of $\rho$. \begin{theorem} \label{thm:inapproximabilty} There is no polynomial time $\rho^{1 - \epsilon}$-approximation algorithm for any constant $\epsilon > 0$ unless $\ccfont{P}=\ccfont{NP}$. \end{theorem} \begin{proof} As noted in~\cite{conf/adhoc/CzyzowiczKKLNOSUY10}, a similar reduction can be used to construct an instance with $\rho = \alpha c n^d$ for $\alpha > 1$ such that an $\alpha$-approximation is enough to decide the 3-{\sc Partition} instance. If we set $\alpha = (c n^d)^{\frac{1-\epsilon}{\epsilon}}$ then we get that $\alpha = \rho^{1-\epsilon}$ and the claim follows. \end{proof} \section{Parameterized Complexity} \label{sec:hardness} We show that the barrier coverage problem is hard, even if we only allow a small number of sensors to move. Formally, we show that the following problem is \W{1}-hard when parameterized by $k$. \begin{center} \begin{tabular}{|r p{0.8\columnwidth}|} \hline \multicolumn{2}{|l|}{\textsc{$k$-move-Barrier-Coverage}} \\ \textit{Instance:} & Sensors $(x_1,r_1),\dots,(x_n,r_n)$, $L\in\mathbb{R}$, $B\in\mathbb{R}$, and $k\in\mathbb{N}$. \\ \textit{Problem:} & Does there exist a barrier coverage $y$ of interval $[0,L]$ such that $\mathrm{cost}(y)\leq B$ and $\abs{\{i \mid x_i \neq y_i\}}\leq k$?\\ \hline \end{tabular} \end{center} To show \W{1}-hardness, we will reduce from \textsc{Exact-Cover}. \begin{center} \begin{tabular}{|r p{0.8\columnwidth}|} \hline \multicolumn{2}{|l|}{\textsc{Exact-Cover}} \\ \textit{Instance:} & Universe $U=\{u_1,\dots,u_m\}$, set of subsets $S=\{S_1,\dots,S_n\}\subseteq 2^U$, and $k\in \mathbb{N}$. \\ \textit{Problem:} & Does there exist $T=\{T_1,\dots,T_l\}\subseteq S$ such that $l\leq k$, $\bigcup_{i=1}^l T_i = U$, and $T_i\cap T_j =\emptyset$ for $1\leq i < j \leq l$?\\ \hline \end{tabular} \end{center} A special case of \textsc{Exact-Cover} is the problem \textsc{Perfect-Code}, which was shown to be \W{1}-hard when parameterized by $k$~\cite{DowneyF95} (\W{1}-membership was proved later~\cite{Cesati02}). Hence, \textsc{Exact-Cover} is \W{1}-hard when parameterized by $k$. Actually, \W{1}-hardness for \textsc{Perfect-Code} was shown for the case where one asks for a solution of size exactly $k$ and not, as in our problem definition, a solution of size at most~$k$. However, the proof can easily be adapted to our problem variant. \begin{theorem} \textsc{$k$-move-Barrier-Coverage} is \W{1}-hard when parameterized by~$k$. \end{theorem} \begin{proof} We reduce from \textsc{Exact-Cover}. Let $U=\{u_1,\dots,u_m\}$, $S=\{S_1,\dots,S_n\}\subseteq 2^U$, and $k$ be an instance of \textsc{Exact-Cover}. We construct an instance $(x_1,r_1)$, $\dots$, $(x_n,r_n)$, $L$ and $B$ for \textsc{$k$-move-Barrier-Coverage} as follows. 
For $1\leq i \leq n$ and $1\leq j \leq m$ we define \[ e_{i,j} = \begin{cases} (n+1)^{j-1} & \text{ if } u_j \in S_i, \\ 0 & \text{ otherwise.} \end{cases} \qquad\qquad d_{i,j} = \begin{cases} (n+1)^{j+m} & \text{ if } u_j \in S_i, \\ 0 & \text{ otherwise.} \end{cases} \] Our instance consists of intervals having radius $r_i = \frac{1}{2} \sum_{j=1}^m e_{i,j}$ and initial position $x_i= -r_i - \sum_{j=1}^m d_{i,j}$ for $1\leq i \leq n$. Furthermore, we set $L=\sum_{j=1}^m (n+1)^{j-1}$ and $B=\sum_{j=1}^m (n+1)^{j+m} + k \sum_{j=1}^m (n+1)^{j-1}$. This reduction can be constructed in polynomial time. Figure~\ref{fig:exact-cover} shows part of the reduction for a small example instance. For the correctness, first assume that the \textsc{Exact-Cover} instance is a yes-instance, i.e., there exists $T=\{T_1,\dots,T_l\}\subseteq S$ such that $l\leq k$, $\bigcup_{i=1}^l T_i = U$, and $T_i\cap T_j =\emptyset$ for $1\leq i < j \leq l$. Let $I\subseteq \{1,\dots,n\}$ be the indices of the intervals corresponding to sets $\{T_1,\dots,T_l\}$. By construction, $\abs{I} \leq k$. We have to show that $[0,L]$ can be covered by moving only the intervals identified by $I$ and that this solution has cost at most $B$. Since $\bigcup_{i=1}^l T_i = U$, for every $u_j \in U$ there exists exactly one $i\in I$ such that $u_j \in S_i$. Hence, $\sum_{i\in I} r_i = \frac{1}{2} \sum_{i\in I} \sum_{j=1}^m e_{i,j} = \frac{1}{2} \sum_{j=1}^m (n+1)^{j-1}$. Therefore, the total length of the selected intervals is exactly $L$ and we can cover $[0,L]$. Next, we consider the cost of this solution. Moving all the intervals identified by $I$ to the beginning of the barrier, that is, to position $-r_i$ for interval $i\in I$ results in cost $\sum_{j=1}^m (n+1)^{j+m}$. Again, the argument is that for every $u_j \in U$ there exists exactly one $i\in I$ such that $u_j \in S_i$. Hence, $\sum_{i\in I} \abs{-r_i-x_i} = \sum_{i\in I} \sum_{j=1}^m d_{i,j} = \sum_{j=1}^m (n+1)^{j+m}$. Additionally, the movement of these $k$ intervals to the exact position on $L$ can be bounded by $k L$ resulting in a total cost of at most $\sum_{j=1}^m (n+1)^{j+m} + k \sum_{j=1}^m (n+1)^{j-1}=B$. For the reverse direction, assume that there exists a barrier coverage $y$ of interval $[0,L]$ such that $\mathrm{cost}(y)\leq B$ and $\abs{\{i \mid x_i \neq y_i\}}\leq k$. Let $I\subseteq \{1,\dots,n\}$ be the indices of the moved intervals. We have to show that $T=\{S_i \mid i \in I\}$ is a solution for the \textsc{Exact-Cover} instance, that is, every element $u\in U$ is contained exactly once in the sets of $T$. Assume towards a contradiction, that this is not true. Let $u_c \in U$ be the element with the highest index such that $u_c$ is either not contained in $T$ or it occurs more than once. Since elements $u_{c+1},\dots,u_m$ occur exactly once, they contribute the length $\sum_{j=c+1}^{m}(n+1)^{j-1}$ towards covering $[0,L]$. Therefore, $\sum_{j=1}^{c}(n+1)^{j-1}$ remains to be covered. We have two cases: \begin{itemize} \item {\bf $u_c$ is not contained in $T$.} Then the maximum length we can cover is if every element $u_1,\dots,u_{c-1}$ is contained in every moved interval. Since $n \cdot \sum_{j=1}^{c-1}(n+1)^{j-1} = (n+1)^{c-1} - 1 < \sum_{j=1}^{c}(n+1)^{j-1}$, this is not enough and contradicts our assumption that $y$ is a barrier coverage. Hence, $u_c$ is contained in $T$. 
\item {\bf $u_c$ occurs in multiple moved intervals.} Since elements $u_{c+1},\dots,u_m$ occur exactly once, they contribute $\sum_{j=c+1}^{m}(n+1)^{j+m}$ to the total cost just for moving the corresponding intervals to the beginning of the barrier. Since $u_c$ occurs at least twice, it will contribute $2\cdot (n+1)^{c+m}$ to the total cost just for moving the corresponding intervals to the beginning of the barrier. But $2(n+1)^{c+m} + \sum_{j=c+1}^{m}(n+1)^{j+m} = (n+1)^{c+m} + \sum_{j=c}^{m}(n+1)^{j+m}$, which is larger than our budget $B$, because $B \leq \sum_{j=1}^m (n+1)^{j+m} + n \sum_{j=1}^m (n+1)^{j-1} < \sum_{j=0}^m (n+1)^{j+m} = \sum_{j=0}^{c-1} (n+1)^{j+m} + \sum_{j=c}^{m} (n+1)^{j+m}$ and $\sum_{j=0}^{c-1} (n+1)^{j+m} < (n+1)^{c+m}$. Hence, $u_c$ is contained exactly once in the sets of $T$, which contradicts our assumption. \end{itemize} Therefore, $T$ is indeed a solution for the \textsc{Exact-Cover} instance. \end{proof} \begin{figure} \caption{Part of the reduction from \textsc{Exact-Cover} for a small example instance.} \label{fig:exact-cover} \end{figure} Complementing this \W{1}-hardness result, we will show next that the problem is fixed-parameter tractable when parameterized by the budget $B$. To this end we have to change the problem to restrict the input to integers instead of real numbers. \begin{center} \begin{tabular}{|r p{0.8\columnwidth}|} \hline \multicolumn{2}{|l|}{\textsc{Barrier-Coverage}} \\ \textit{Instance:} & Sensors $(x_1,r_1),\dots,(x_n,r_n)$ with $x_i,r_i\in \mathbb{N}$ for each $i\in \{1,\dots,n\}$, $L\in\mathbb{N}$, and $B\in\mathbb{N}$. \\ \textit{Problem:} & Does there exist a barrier coverage $y$ of the interval $[0,L]$ such that $\mathrm{cost}(y)\leq B$?\\ \hline \end{tabular} \end{center} \begin{theorem} The \textsc{Barrier-Coverage} problem can be solved in $2^{2B^2(B+1)} \cdot n^{O(1)}$ time. \end{theorem} \begin{proof} Our algorithm is a branching algorithm, which, for any candidate sensor, branches on which integer point in the gaps (uncovered parts of the barrier) to move this sensor to (or whether to leave it at its original position). The crucial observations will be that we can bound both the number of candidate sensors that we need to consider moving into the gaps and the positions where they end up in the final configuration, in terms of the budget $B$. The total length of the gaps on the barrier is at most $B$; otherwise we have a trivial no-instance. Given a gap $G$, we only need to consider intervals that are at distance at most $B$ to the left and right of $G$, since intervals further away cost too much to move them into $G$. Assume the interval of $G$ is $[y_l,y_r]$. We consider the range left of $G$, that is $[y_l-B,y_l]$ (the right side is symmetric). At each point $p_i$ in $[y_l-B,y_l]$, we consider all the intervals whose right end equals $p_i$, that is, intervals $(x_j,r_j)$ with $x_j+r_j = p_i$. Let $S_i$ denote the set of these intervals. We would like to branch on which intervals (if any) from $S_i$ move into the gap $G$, but $|S_i|$ is not necessarily bounded by a function of $B$. Hence, we sort the intervals in $S_i$ by length and consider only the $B+1$ longest ones. This is sound, since our budget allows us to move at most $B$ intervals and, additionally, an interval from $S_i$ might need to remain stationary in order to cover $p_i$. Assume there exists an optimal solution in which interval $(x_j,r_j)\in S_i$ is moved to position $y_j \ne x_j$ and $(x_j,r_j)$ is not among the top $B+1$ longest ones. Then at most $B-1$ of the longest intervals in $S_i$ were moved.
This leaves at least two remaining intervals among the $B+1$ many. Assume $(x_k,r_k)$ is the shorter one of those two. Moving $(x_k,r_k)$ the same distance to the right as $(x_j,r_j)$ was moved, covers everything $(x_j,r_j)$ was covering and has the same cost. Additionally, $[x_k-r_k,x_k+r_k]$ is still covered by the longer interval which we did not move. Hence, to conclude, we need to consider at most $B+1$ intervals for each of the $B$ points left and right of a gap. The only thing remaining, is to show that it suffices to consider integer points for the solution. \longversion{By Lemma~\ref{lem:integer-solution} in the appendix, this is indeed the case.} \shortversion{The proof of this is deferred to the long version~\cite{arxiv}}. Therefore, for our branching algorithm, the total number of intervals to consider is bounded by $B$ and their possible new positions is bounded by the budget $B$ as well, which leads to fixed-parameter tractability in $B$ because $B$ decreases by at least one in each recursive call. \end{proof} \begin{comment} \section{Further Observations} \todo{This comments may or may not be mentioned in the introduction.} There is an example showing that if we insist on preserving the order of all intervals, then the ratio between the best such solution and $\textsc{opt}$ is unbounded. There is an example showing that even when restricted to active intervals, preserving the order of left/right-endpoint does not lead to a constant approximate solution. \end{comment} \section{Conclusion} We showed a $((2+\varepsilon)\rho+2/\varepsilon)$-approximation for the barrier coverage problem for the case when the sensors initially are on a line containing the barrier. This works well when the ratio between the largest radius and the smallest radius is small, but in theory the difference could be arbitrarily large. However, we also proved that no polynomial time $\rho^{1-\varepsilon}$-approxi\-mation algorithm exists unless $\ccfont{P}=\ccfont{NP}$. There are still several open problems for this special case that would be interesting to pursue. \begin{enumerate} \item Improve the approximation ratio analysis of an order-preserving solution. Ideally, down to $\rho + O(1)$. \item Determine if the problem is fixed-parameter tractable for parameter $k$ when the interval radii are $1, 2, \ldots, k$. \item Approximate the weighted version where each interval has a weight and we want to minimize $\sum_i w_i |x_i - y_i|$. \end{enumerate} \begin{comment} The original problem was motivated from the application of intrusion detection, where the goal is to guard the perimeter (barrier) of a region in the plane. In this case the barrier can be described by a polygon and the initial position of the sensors can be anywhere in the plane. This version has not been studied from a theoretical point of view. The only known attempt (to the best of our knowledge) is by Dobrev \textit{et~al.}~\cite{Dobrev-cbcrs-15} who considered the case when the sensors' start position can be anywhere in the plane and $k$ parallel barriers are required to be covered. However, they restricted the movement of the sensors to be perpendicular to the barriers. They showed an $O(kn^{k+1})$ time algorithm. If the barriers are allowed to be horizontal and vertical then the problem is \ccfont{NP}-complete. So the obvious question is, can we develop any positive algorithmic results for the general 2D version of the problem? Or is it \ccfont{NP}-hard to find any approximation? 
\end{comment} \longversion{ \appendix \section{Missing proofs} \begin{proof}[Proof complement of Lemma \ref{lem:scheme-first-bound}: Cases 2 and 3] We will now show that $\frac{\xi(i)}{|x_i-y_i|} \leq 2 \rho + 1$ when either $\ell$ is not well-defined (Case 2) or $\ell$ is well-defined and intervals $\ell$ and $i$ overlap in $y$ (Case 3), complementing the proof that the charging scheme is $(3 \rho + 4)$-balanced. \paragraph*{Case 2: $\ell$ is not well-defined.} Assume that $i$ and $h(i)$ do not overlap; otherwise we are in Case~1. This means $|x_i - y_i| \geq r_h$. If $\widetilde{\gamma}(i) = \set{h}$, it is easy to see that $ \frac{\xi(i)}{|x_i - y_i|} \leq \frac{2r_i}{r_h} + 1 \leq 2\rho + 1$, so let us assume the stronger property that $\abs{\widetilde{\gamma}(i)} \geq 2$. Since $\ell$ is not well-defined, there are no intervals in $\widetilde{\gamma}(i)$ that lie (in $y$) between $x_i$ and $y_i$. It follows that there are at least two intervals in $\widetilde{\gamma}(i)$ that lie on the side of $x_i$ opposite to $y_i$; let $j$ be the interval with $y_j$ closest to $x_i$. By Observation~\ref{obs:overlaps-in-gamma}, $j$ and $h$ and all the other intervals in between them cannot overlap. Using this fact, we can derive two inequalities: $$ |x_i - y_i| \geq r_j + r_h + 2 r_{\min} (|\widetilde{\gamma}(i)| - 2) \geq 2 r_{\min} (|\widetilde{\gamma}(i)| - 1)\text{, \quad and} $$ $$ \sum_{j \in \widetilde{\gamma}(i)} 2r_j \leq r_j + |x_i - y_i| + r_h. $$ Therefore, using~\eqref{eq:xi-left}~and~\eqref{eq:xi-right} we get \begin{align*} \frac{\xi(i)}{|x_i-y_i|} & \leq \frac{2 |\widetilde{\gamma}(i)| r_i + \max \{ |x_i - y_i|, \sum_{j \in \widetilde{\gamma}(i)} 2r_j - |x_i-y_i| \}}{|x_i - y_i|} \\ & \leq \frac{2 |\widetilde{\gamma}(i)| r_i}{|x_i - y_i|} + \max \left\{ 1, \frac{ r_j + r_h}{|x_i - y_i|} \right\} \\ & \leq \frac{2 |\widetilde{\gamma}(i)| r_i}{2r_{\min} (|\widetilde{\gamma}(i)| - 1)} + \max \left\{ 1, \frac{ r_j + r_h}{ r_j + r_h} \right\} \\ & \leq \frac{|\widetilde{\gamma}(i)| \rho}{ (|\widetilde{\gamma}(i)| - 1)} + \max \left\{ 1, \frac{ r_j + r_h}{ r_j + r_h} \right\} \\ & \leq 2 \rho + 1, \end{align*} where the second to last inequality follows from the fact that the previous expression is maximized when $r_i = r_{\max}$, and the last inequality from the fact that the previous expression is a decreasing function of $\abs{\widetilde{\gamma}(i)}$, so the maximum value is attained at $\abs{\widetilde{\gamma}(i)}=2$. Therefore, the charging scheme so far is $(3 \rho + 4)$-balanced. \begin{figure} \caption{Case 3: the intervals $i$ and $\ell$ overlap in $y$.} \label{fig:case3} \end{figure} \paragraph*{Case 3: $\ell$ is well-defined and intervals $\ell$ and $i$ overlap in $y$.} Since $\ell$ is well-defined and overlaps $i$, no other sensor from $\widetilde{\gamma}(i)$ can lie (in $y$) on the same side of $x_i$ as $y_i$; see Fig.~\ref{fig:case3}. Notice that because $\ell$ exists, the interval $i$ cannot overlap any interval that lies (in $y$) to the right of $\ell$.
Using this fact, we can derive the following inequality: $$ 2|x_i - y_i| \geq r_i + \sum_{\mathclap{j \in \widetilde{\gamma}(i) \setminus \set{\ell, h}}} 2r_j + r_h $$ Therefore, if we are under the regime of~\eqref{eq:xi-left}~then we have \begin{align*} \frac{\xi(i)}{|x_i-y_i|} & \leq \frac{2 |\widetilde{\gamma}(i)| r_i + |x_i - y_i| }{|x_i - y_i|} \\ & \leq \frac{4 |\widetilde{\gamma}(i)| r_i}{r_i + 2r_{\min} (|\widetilde{\gamma}(i)| - 2) + r_h} + 1 \\ & \leq \frac{4 |\widetilde{\gamma}(i)| \rho}{\rho + 2 |\widetilde{\gamma}(i)| - 1} + 1 \\ & \leq 2 \rho + 1, \\ \end{align*} where the second to last inequality follows from the fact that the previous expression is maximized when $r_i = r_{\max}$ and $r_h = r_{\min}$, and the last inequality, from the fact that the previous expression increases as $\abs{\widetilde{\gamma}(i)}$ increases, so the maximum is attained when $\abs{\widetilde{\gamma}(i)} \rightarrow \infty$. Finally, if we are under the regime of~\eqref{eq:xi-right} then we have \begin{align*} \frac{\xi(i)}{|x_i-y_i|} & \leq \frac{2 |\widetilde{\gamma}(i)| r_i + \sum_{j \in \widetilde{\gamma}(i)} 2r_j - |x_i - y_i| }{|x_i - y_i|} \\ & \leq \frac{4 |\widetilde{\gamma}(i)| r_i + 2 \sum_{j \in \widetilde{\gamma}(i)} 2r_j}{r_i + \sum_{j \in \widetilde{\gamma}(i) \setminus \set{\ell, h}} 2r_j + r_h} - 1 \\ & \leq \frac{(4 |\widetilde{\gamma}(i)|-2) r_i + 4 r_\ell + 2r_h+ 2 \left( r_i + \sum_{j \in \widetilde{\gamma}(i) \setminus \set{\ell, h}} 2r_j +r_h\right)}{r_i + \sum_{j \in \widetilde{\gamma}(i) \setminus \set{\ell, h}} 2r_j + r_h} - 1 \\ & \leq \frac{(4 |\widetilde{\gamma}(i)|-2) r_i + 4 r_\ell + 2r_h}{r_i + \sum_{j \in \widetilde{\gamma}(i) \setminus \set{\ell, h}} 2r_j + r_h} + 1 \\ & \leq \frac{(4 |\widetilde{\gamma}(i)|+2) \rho + 2}{\rho + 2 \abs{\widetilde{\gamma}(i)} -3} + 1 \\ & \leq 2 \rho + 3, \\ \end{align*} where the second to last inequality follows from the fact that the previous expression is maximized when $r_i = r_\ell = r_{\max}$ and the remaining intervals have radius $r_{\min}$, and the last expression increases as a function of $|\widetilde{\gamma}(i)|$ for $|\widetilde{\gamma}(i)|\geq 4$. Therefore, the charging scheme so far is $(3 \rho + 4)$-balanced. \end{proof}
\begin{lemma}\label{lem:integer-solution} There is an optimal solution $y$ for the \textsc{Barrier-Coverage} problem where each new sensor position $y_i\in y$ is an integer. \end{lemma} \begin{proof} We will show that any optimal solution can be converted into an optimal solution with no sensors at non-integral positions. The proof is by induction on the number $f$ of sensors at non-integral positions in an optimal solution $y$. If $f=0$ we are done. Suppose that if there is an optimal solution with at most $f-1$ sensors at non-integral positions, then there is an optimal solution with no sensors at non-integral positions. Let $\epsilon_l$ be the smallest distance that any non-integral sensor has to move to the left to become integral. Let $\epsilon_r$ be the smallest distance that any non-integral sensor has to move to the right to become integral. Among the non-integral sensors, either at least half of them have their movement cost reduced by moving to the left or more than half of them have their movement cost reduced by moving to the right.
Therefore, consider the following two solutions with at most $f-1$ non-integral sensors: in the first one, all non-integral sensors are moved a distance of $\epsilon_l$ to the left, and in the second one, all non-integral sensors are moved a distance of $\epsilon_r$ to the right. At least one of these two solutions has cost at most $\textrm{cost}(y)$. Moreover, it is easy to see that the barrier is covered in both solutions. Therefore, by our induction hypothesis, there is an optimal solution with no sensors at non-integral positions. \end{proof} } \end{document}
\begin{document} \newtheorem{thm}{Theorem}[section] \newtheorem{exam}[thm]{Example} \newtheorem{cor}[thm]{Corollary} \newtheorem{ques}[thm]{Question} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{ax}{Axiom} \newtheorem{defn}[thm]{Definition} \newtheorem{rem}{Remark}[section] \newtheorem{rems}{Remarks}[section] \newcommand{\thmref}[1]{Theorem~\ref{#1}} \newcommand{\secref}[1]{\S\ref{#1}} \newcommand{\lemref}[1]{Lemma~\ref{#1}} \newcommand{\bysame}{\mbox{\rule{3em}{.4 pt}}\,} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\sigma}{\sigma} \newcommand{{(X,Y)}}{{(X,Y)}} \newcommand{{S_X}}{{S_X}} \newcommand{{S_Y}}{{S_Y}} \newcommand{{S_X}Y}{{S_{X,Y}}} \newcommand{{S_X}gYy}{{S_{X|Y}(y)}} \newcommand{\Cw}[1]{{\hat C_#1(X|Y)}} \newcommand{{G(X|Y)}}{{G(X|Y)}} \newcommand{{P_{\mathcal{Y}}}}{{P_{\mathcal{Y}}}} \newcommand{\mathcal{X}}{\mathcal{X}} \newcommand{\widetilde}{\widetilde} \newcommand{\widehat}{\widehat}
\begin{frontmatter} \title{Hahn-Banach theorem for operators on the lattice normed $f$-algebras} \author{Abdullah Ayd{\i}n} \address{Department of Mathematics, Mu\c{s} Alparslan University, Mu\c{s}, Turkey}
\begin{abstract} Let $X$ and $E$ be $f$-algebras and $p:X \to E_+$ be a monotone vector norm. Then the triple $(X,p,E)$ is called a lattice-normed $f$-algebraic space. In this paper, we prove a generalization of the Hahn-Banach extension theorem for operators on lattice-normed $f$-algebras, in which the one-step extension is not similar to that of other Hahn-Banach theorems. Also, we give some applications and results. \end{abstract}
\begin{keyword} $f$-algebra\sep Hahn-Banach theorem\sep vector lattice 2010 AMS Mathematics Subject Classification: 47B60 \sep 47B65\sep 46A40 \end{keyword} \end{frontmatter}
\section{Introductory Facts}\label{hb1} The Hahn-Banach theorem has many applications in different fields of analysis and has attracted the attention of several authors, such as Vincent-Smith \cite{Vic} and Turan \cite{Tu}. In the present paper, we give an extension of the Hahn-Banach theorem for lattice normed $f$-algebras and some applications. The one-step extension in our theorem is not similar to that in other Hahn-Banach theorems. Vector lattices (i.e., Riesz spaces) are ordered vector spaces that have many applications in measure theory, operator theory, and economics. We assume that the reader is familiar with the elementary theory of vector lattices, and we refer to \cite{ABPO,LZ,Za} for information on vector lattices and for unexplained terminology. Moreover, all vector lattices are assumed to be real and Archimedean. A vector lattice $E$ is a {\em lattice-ordered algebra} (briefly, {\em $l$-algebra}) if $E$ is an associative algebra whose positive cone $E_+$ is closed under the algebra multiplication. A Riesz algebra $E$ is called an \textit{$f$-algebra} if $E$ additionally has the property that $x\wedge y=0$ implies $(x\cdot z)\wedge y=(z\cdot x)\wedge y=0$ for all $z\in E_+$. For an order complete (i.e., Dedekind complete) vector lattice $E$, the set $L_b(E)$ of all order bounded operators on $E$ and the set $C(X)$ of all real-valued continuous functions on a topological space $X$ are examples of lattice-ordered algebras. However, $L_b(E)$ is not an $f$-algebra: it is an Archimedean vector lattice but it is not commutative in general, whereas every Archimedean $f$-algebra is commutative; see for example \cite[Theorem 140.10.]{Za}.
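For a simple illustration of this failure of commutativity (a minimal example, with the specific operators chosen here only for concreteness), one may take $E=\mathbb{R}^2$ with the coordinatewise order and the positive (hence order bounded) operators $T(x_1,x_2)=(x_2,0)$ and $S(x_1,x_2)=(0,x_1)$: then $TS(x_1,x_2)=(x_1,0)$ while $ST(x_1,x_2)=(0,x_2)$, so $L_b(\mathbb{R}^2)$ is not commutative.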
Consider $Orth(E):=\{T\in L_b(E):x\perp y\ \text{implies}\ Tx\perp y\}$, the set of orthomorphisms on a vector lattice $E$. Then the space $Orth(E)$ is not only a vector lattice but also an $f$-algebra. On the other hand, a sublattice $A$ of an $f$-algebra $E$ is called an $f$-subalgebra of $E$ whenever it is also an $f$-algebra under the multiplication operation in $E$. In this paper, we assume that if a positive element has an inverse, then the inverse is also positive. We refer the reader to \cite{ABPO,Ay1,Ay2,Hu,P,Za} for much more information on $f$-algebras. Also, for more detailed information on the following example, we refer the reader to \cite[p.13]{BGKKKM}.
\begin{exam}\label{example of orh} Let $E$ be a vector lattice. An order bounded band preserving operator $T:D\to E$ on an order dense ideal $D\subseteq E$ is called an extended orthomorphism. Let $Orth^\infty(E)$ denote the set of all extended orthomorphisms: that is, denote by $\mathcal{M}$ the collection of all pairs $(D;\pi)$, where $D$ is an order dense ideal in $E$ and $\pi\in Orth(D,E)$. Then the space $Orth^\infty(E)$ is an $f$-algebra. Moreover, $Orth(E)$ is an $f$-subalgebra of $Orth^\infty(E)$. On the other hand, $\mathcal{L}(E)$ stands for the order ideal generated by the identity operator $I_E$ in $Orth(E)$. Then $\mathcal{L}(E)$ is an $f$-subalgebra of $Orth(E)$. \end{exam}
Recall that a net $(x_\alpha)_{\alpha\in A}$ in a vector lattice $X$ is called \textit{order convergent} (or shortly, \textit{$o$-convergent}) to $x\in X$, if there exists another net $(y_\beta)_{\beta\in B}$ satisfying $y_\beta \downarrow 0$ (i.e. $y_\beta \downarrow$ and $\inf(y_\beta)=0$), and for any $\beta\in B$ there exists $\alpha_\beta\in A$ such that $|x_\alpha-x|\leq y_\beta$ for all $\alpha\geq\alpha_\beta$. In this case, we write $x_\alpha\xrightarrow{o} x$. On the other hand, for a given positive element $u$ in a vector lattice $E$, a net $(x_\alpha)$ in $E$ is said to converge $u$-uniformly to the element $x\in E$ whenever, for every $\varepsilon>0$, there exists an index $\alpha_0$ such that $\lvert x_\alpha-x\rvert<\varepsilon u$ for every $\alpha\geq\alpha_0$. Moreover, $E$ is said to be $u$-uniformly complete if every $u$-uniform Cauchy net has a $u$-uniform limit; see \cite{LZ}. If $X$ is a vector space, $E$ is a vector lattice, and $p:X \to E_+$ is a vector norm (i.e. $p(x)=0\Leftrightarrow x=0$, $p(\lambda x)=|\lambda|p(x)$ for all $\lambda\in\mathbb{R}$, $x\in X$, and $p(x+y)\leq p(x)+p(y)$ for all $x,y\in X$), then the triple $(X,p,E)$ is called a {\em lattice-normed space}, abbreviated as $LNS$. A subset $Y$ of $X$ is called {\em $p$-closed} whenever, for every net $(y_\alpha)$ in $Y$ and $y\in X$ with $p(y_\alpha-y)\xrightarrow{o} 0$, we have $y\in Y$. Let $(X,p,E)$ and $(Y,q,F)$ be two $LNS$s. Then an operator $T:X\to Y$ is called a {\em dominated operator} if there is a positive operator $S:E\to F$ such that $q(T(x))\leq S(p(x))$ for all $x\in X$. In this case, $S$ is called a dominant of $T$. Take $maj(T)$ as the set of all dominants of the operator $T$. If there is a least element in $maj(T)$, then it is called the {\em exact dominant} of $T$ and denoted by $[T]$; for much more detailed information, see \cite{BGKKKM,Ku}. If $X$ is a decomposable space and $F$ is order complete, then the exact dominant exists; see \cite[Theorem 4.1.2.]{Ku}. Consider an $LNS$ $(X,p,E)$. Assume $X$ and $E$ are $f$-algebras, and the vector norm $p$ is monotone (i.e.
$|x|\leq |y|\Rightarrow p(x)\leq p(y)$), then the triple $(X,p,E)$ is said to be a {\em lattice normed $f$-algebra}, abbreviated as $LNFA$.
\begin{defn} Let $(X,p,E)$ be an $LNFA$ and $Y$ be an $f$-subalgebra of $X$. If $p(x\cdot y)=y\cdot p(x)$ holds for all $x\in X$ and $y\in Y$, then $p$ is said to be {\em $f$-subalgebraic-linear}. Also, we say that $(X,p,E)$ has the {\em $f$-subalgebraic-linear property}. \end{defn}
Recall that an element $x$ in a Riesz algebra is called \textit{nilpotent} if $x^n=0$ for some $n\in \mathbb{N}$. Moreover, an algebra $E$ is called \textit{semiprime} if the only nilpotent element in $E$ is zero.
\begin{lem}\label{inequality semiprime} Let $E$ be a semiprime $f$-algebra. Then $x\leq y$ and $x\leq z$ imply $x^2\leq y\cdot z$ for all $x,\ y,\ z\in E_+$. \end{lem} \begin{proof} Suppose $x,\ y,\ z$ are positive elements in $E$ such that $x\leq y$ and $x\leq z$. It follows from \cite[Theorem 3.2.(ii)]{P} that $x^2\leq y\cdot z$. \end{proof}
\begin{exam} Let $E$ be a vector lattice such that $x^2=x$ for all $x\in E_+$ and $p:\mathcal{L}(E)\to Orth(E)$ be the map defined by $T\mapsto p(T)=\lvert T\rvert$. Then one can see that $p$ is a vector norm and $\big(\mathcal{L}(E),p, Orth(E)\big)$ is an $LNS$. Moreover, since $\mathcal{L}(E)$ and $Orth(E)$ are $f$-algebras and $\lvert\cdot\rvert$ is monotone, $\big(\mathcal{L}(E),p, Orth(E)\big)$ is an $LNFA$. Take arbitrary $T,\ S\in \mathcal{L}(E)$. Then there exist positive scalars $\lambda_T$ and $\lambda_S$ such that $\lvert T\rvert\leq \lambda_T I$ and $\lvert S\rvert\leq \lambda_S I$ because $\mathcal{L}(E)$ is the order ideal generated by the identity operator $I_E$. So, by using \cite[Theorem 2.40.]{ABPO}, we have $$ p(S(T))=\lvert S(T)\rvert=\lvert S\rvert\big(\lvert T\rvert\big)\leq \lambda_S I\big(\lvert T\rvert\big)=\lambda_S \lvert T\rvert $$ and also $$ p(S(T))=\lvert S(T)\rvert= \lvert S\rvert\big(\lvert T\rvert\big)\leq \lvert S\rvert\big(\lambda_T\lvert I\rvert\big)=\lambda_T \lvert S\rvert. $$ So, it follows from Lemma \ref{inequality semiprime} and our assumption that $p(S(T))=\big[p(S(T))\big]^2\leq \lambda_S\lambda_T\lvert S\rvert \cdot \lvert T\rvert=\lambda_S\lambda_T\lvert S\rvert \cdot p(T)$ holds true because $Orth(E)$ is semiprime; see \cite[Theorem 142.5.]{Za}. Next, consider a new $LNFA$ $\big(\mathcal{L}(E)_+,q, Orth(E)\big)$, where $q(T)=\frac{1}{\lambda_T}p(T)$ for all $T\in \mathcal{L}(E)_+$. Then it follows from the above observation that the $LNFA$ space $\big(\mathcal{L}(E)_+,q,Orth(E)\big)$ has the $f$-subalgebraic-linear property. \end{exam}
For the following example, we consider \cite[Theorem 2.62.]{ABPO}. \begin{exam} Let $E$ be an $f$-algebra. Then we define a map $p$ from $E$ to $Orth(E)$ by $u\mapsto p(u)=p_u$ such that $p_u(x)=\lvert u\cdot x\rvert$ for each $x\in E$. So, by using \cite[Theorem 142.1.(ii)]{Za}, it is easy to see that $p$ is a vector norm and that $(E,p,Orth(E))$ is an $LNFA$ with the $f$-subalgebraic-linear property. \end{exam}
In this article, unless otherwise stated, all lattice normed $f$-algebras are assumed to have the $f$-subalgebraic-linear property.
\section{Main Results} We begin the section with the following definition. \begin{defn} Let $(X,p,E)$ be an $LNS$. Then an operator $T:X\to E$ is said to be {\em $E$-dominated} if it is dominated by $p$, that is, $$ \lvert T(x)\rvert\leq p(x) $$ for all $x\in X$. \end{defn} It can be seen that every dominated operator on $LNS$s is $E$-dominated because dominant operators are positive.
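For a simple illustration of $E$-dominatedness (a minimal example, with $X=E$ and $p=\lvert\cdot\rvert$ chosen here only for concreteness): in the $LNS$ $(E,\lvert\cdot\rvert,E)$, the identity operator $I_E$ is $E$-dominated, since $\lvert I_E(x)\rvert=\lvert x\rvert=p(x)$ for all $x\in E$; more generally, every orthomorphism $T$ on $E$ with $\lvert T\rvert\leq I_E$ is $E$-dominated, because $\lvert T(x)\rvert\leq \lvert T\rvert(\lvert x\rvert)\leq \lvert x\rvert=p(x)$.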
\begin{lem}\label{f algebra subspace} Let $X$ be an $f$-algebra and $Y$ be an $f$-subalgebra of $X$. Then, for any $w\in X_+$, the set $A=\{u+v\cdot w:u,v\in Y\}$ is also an $f$-subalgebra of $X$. \end{lem} \begin{proof} Firstly, we show that $A$ is a sublattice of $X$. Take an arbitrary $u+v\cdot w\in A$. Then we have $\lvert u+v\cdot w\rvert=\lvert u\rvert+\lvert v\rvert\cdot \lvert w\rvert=\lvert u\rvert+\lvert v\rvert\cdot w\in A$ because $\lvert u\rvert,\lvert v\rvert \in Y$. Thus, we get the desired result. Next, we show that $A$ is an $f$-subalgebra of $X$. For any positive elements $y_1+u_1\cdot w,\ y_2+u_2\cdot w\in A_+$, we have $$ (y_1+u_1\cdot w)\cdot(y_2+u_2\cdot w)=y_1\cdot y_2+(y_1\cdot u_2+y_2\cdot u_1+u_1\cdot u_2\cdot w)w\in A_+ $$ because $y_1\cdot y_2\in Y$, $y_1\cdot u_2+y_2\cdot u_1+u_1\cdot u_2\cdot w\in X$, $A\subseteq X$ and $X$ is an $f$-algebra. Thus, $A$ is an $l$-algebra. On the other hand, assume $(y_1+u_1\cdot w)\wedge(y_2+u_2\cdot w)=0$ for arbitrary $y_1+u_1\cdot w,\ y_2+u_2\cdot w\in A$. Then we have $[(y+u\cdot w)\cdot(y_1+u_1\cdot w)]\wedge(y_2+u_2\cdot w)=0$ for all $y+u\cdot w\in A_+$ because $A_+\subseteq X_+$ and $X$ is an $f$-algebra. Therefore, we obtain that $A$ is an $f$-subalgebra of $X$. \end{proof}
\begin{prop}\label{f algebra order complete} Let $X$ be an $f$-algebra and $Y$ be a $u$-uniformly complete $f$-subalgebra of $X$. Then, for any $w\in X_+$, the set $A=\{u+v\cdot w:u,v\in Y_+\}$ is also a $u$-uniformly complete $f$-subalgebra. \end{prop} \begin{proof} Suppose $Y$ is an $f$-subalgebra of $X$. Then, by applying Lemma \ref{f algebra subspace}, we see that $A$ is an $f$-subalgebra of $X$. On the other hand, take a $u$-uniform Cauchy net $(x_\alpha)$ in $A$. Then there exist two $u$-uniform Cauchy nets $(y_\alpha)$ and $(z_\alpha)$ in $Y_+$ with $x_\alpha=y_\alpha+z_\alpha\cdot w$ because of $y_\alpha\leq x_\alpha$ and $z_\alpha\leq x_\alpha$. So, there are $y, \ z\in Y$ such that $y_\alpha\xrightarrow{u} y$ and $z_\alpha\xrightarrow{u}z$ because $Y$ is $u$-uniformly complete. Therefore, we get $x_\alpha=y_\alpha+z_\alpha\cdot w\xrightarrow{u}y+z\cdot w$. As a result, $A$ is also $u$-uniformly complete. \end{proof}
\begin{thm}\label{basic theorem} Let $(X,p,E)$ be an $LNFA$ with $X$ being an $f$-subalgebra of an order complete $f$-algebra $E$, and let $G$ be a unital $f$-subalgebra of $X$. If $T:G\to E$ is an $E$-dominated operator and $G$ is $e$-uniformly complete, then there exists another $E$-dominated operator $\hat{T}:X\to E$ such that $\hat{T}(g)=T(g)$ for all $g\in G$. \end{thm} \begin{proof} First of all, if we take $T=0$ or $X=G$, then the proof is obvious. Suppose $G$ is a proper subspace of $X$ and $T\neq 0$. Then there is a vector $w$ in $X$ that is not in $G$. Without loss of generality, we assume $w\in X_+$. Then we consider the set $G_1=\{u+v\cdot w:u,v\in G\}$. Thus, by Lemma \ref{f algebra subspace}, we get that $G_1$ is also an $f$-subalgebra of $X$. Also, by iterating this extension, we can arrive at $X$ because $G$ is an $f$-subalgebra with the multiplicative unit. The one-step extension is not similar to that in other Hahn-Banach theorems. It can be observed that $v\cdot w$ may lie in $G$ for some $v\in G$; thus, the representation of elements of $G_1$ may not be unique, which causes difficulties in obtaining the one-step extension. Once this is done, by using Zorn's lemma and applying Proposition \ref{f algebra order complete}, we can get the extension of $\hat{T}$ to $X$. Now, consider elements $u,v \in G$. Since $T$ is an $E$-dominated operator,
we have $$ T(u)+T(v)=T(u+v)\leq p(u-w+w+v)\leq p(u-w)+p(w+v). $$ Hence, we get $T(u)-p(u-w)\leq p(w+v)-T(v)$. From there, by applying the order completeness of $E$, both $$ s=\sup\{T(u)-p(u-w):u\in G\} $$ and $$ r=\inf\{p(v+w)-T(v):v\in G\} $$ exist in $E$. So, it is also clear that $s\leq r$. Next, let us take any element $z\in E$ such that $s\leq z\leq r$ (for example, we can take $z=s$). Now, we define a map \begin{align*} \hat{T}:G_1&\to E \\(u+v\cdot w)&\mapsto\hat{T}(u+v\cdot w)=T(u)+v\cdot z. \end{align*} We need to show that $\hat{T}$ is a well-defined operator. To prove that, we first establish the $E$-dominatedness of $\hat{T}$. Let us apply the $e$-uniform completeness of $G$. Then we have that $(v+e)^{-1}$ exists for any positive element $v\in G_+$; see \cite[Theorem 146.3.]{Za}. Next, by using \cite[Theorem 11.1.]{P}, the inverse element $(v+\frac{1}{n}e)^{-1}$ exists in $G_+$ for all $n\in\mathbb{N}_+$. Then, for each $u\in G_+$ and $n\in\mathbb{N}$, we have $$ z\leq r\leq p(u\cdot(v+\frac{1}{n}e)^{-1}+w)-T(u\cdot (v+\frac{1}{n}e)^{-1}) $$ and so, by using the $f$-subalgebraic-linear property of $p$, we get $$ T(u)+(v+\frac{1}{n}e)\cdot z\leq p(u+w\cdot(v+\frac{1}{n}e))\leq p(u+w\cdot v)+\frac{1}{n}p(w). $$ Thus, we have $\hat{T}(u+v\cdot w)=T(u)+v\cdot z\leq p(u+v\cdot w)$ for any $u,v\in G_+$ because $E$ is an Archimedean vector lattice. Thus, $\hat{T}$ is $E$-dominated for arbitrary $u,v\in G_+$. Now, we show this for arbitrary $v\in G$. We can write $v=v^+-v^-$. By using the first observation, we can write \begin{equation} \hat{T}(u+v^+\cdot w)=T(u)+v^+\cdot z\leq p(u+v^+\cdot w) \end{equation} For the band $B_{v^+}$ generated by $v^+$, we consider the band projection $q:G\to B_{v^+}$. Then $q$ satisfies $q(v)=v^+$ and $q=q^2$, and it is a positive orthomorphism on $G$ because every order projection is a positive orthomorphism on vector lattices. By using \cite[Theorem 141.1.]{Za}, we can choose a positive element $t\in G_+$ such that $q(x)=x\cdot t$ for all $x\in G$. Thus we have a positive vector $t\in G_+$ so that $v^+=q(v)=v\cdot t$, and $t=e\cdot t=q(e)=q(q(e))=t^2$, and $v^+=q(v^+)=v^+\cdot t$, and $0=q(v^-)=v^-\cdot t$. Also, the equality $v^+=q(v)=v\cdot t$ implies $v^-+v=v^+=v\cdot t$, and so we get $v^-=v\cdot(t-e)$. Thus, we obtain the following two equalities \begin{equation} t\cdot(v^+\cdot z)=(t\cdot v^+)\cdot z=v^+\cdot z \end{equation} and \begin{equation} t\cdot(v^+\cdot w)=t\cdot v^+\cdot w=t\cdot(v\cdot t)\cdot w=t^2\cdot v\cdot w=t\cdot v\cdot w. \end{equation} It follows from $(1),\ (2)$ and $(3)$ and the $f$-subalgebraic-linear property of $p$ that \begin{eqnarray} t\cdot\big(T(u)+v^+\cdot z\big)\leq t\cdot p(u+v^+\cdot w)=p(t\cdot u+t\cdot v^+\cdot w)=t\cdot p(u+v\cdot w). \end{eqnarray} Repeating the same argument and using $s\leq z$, one can see that the following inequality holds: \begin{equation} (e-t)\cdot\big(T(u)-v^-\cdot z\big)\leq (e-t)\cdot p(u+v\cdot w). \end{equation} Therefore, by summing up the inequalities $(4)$ and $(5)$, we can get the following result \begin{equation} T(u)+v\cdot z\leq p(u+v\cdot w) \end{equation} for arbitrary $v\in G$ and $u\in G_+$. Lastly, one can argue similarly for an arbitrary element $u\in G$. Therefore, we get that $\hat{T}$ is $E$-dominated. Now, we show that $\hat{T}$ is well defined. Let us take arbitrary elements $u_1,\ u_2,\ v_1,\ v_2\in G$ such that $u_1+v_1\cdot w=u_2+v_2\cdot w$.
It follows from $(6)$ that $T(u_1-u_2)+(v_1-v_2)\cdot z\leq p\big((u_1-u_2)+(v_1-v_2)\cdot w\big)=p(0)=0$ and $T(u_2-u_1)+(v_2-v_1)\cdot z\leq p\big((u_2-u_1)+(v_2-v_1)\cdot w\big)=p(0)=0$. As a result, we get $\hat{T}(u_1+v_1\cdot w)=\hat{T}(u_2+v_2\cdot w)$. Therefore, we have obtained that the map $\hat{T}$ is well defined. On the other hand, by using the linearity of $T$, one can show that $\hat{T}$ is a linear map (operator) from $G_1$ to $E$. Explicitly, $\hat{T}$ is an $E$-dominated operator with respect to the $f$-subalgebraic-linear map $p$. By applying Zorn's lemma under the desired conditions, we obtain the extension of $\hat{T}$ to all of $X$. \end{proof}
Under the conditions of Theorem \ref{basic theorem}, we have the following results.
\begin{cor} If $(X,p,E)$ is a decomposable $LNFA$, then we have $[\hat{T}]=[T]$. \end{cor} \begin{proof} Since $T$ is an $E$-dominated operator, it is dominated. Indeed, since $\lvert T(g)\rvert\leq p(g)$, we have $p(T(g))\leq p(p(g))$ (for example, we can take the dominant $S=p$). Also, it follows from \cite[Theorem 4.1.2.]{Ku} that $T$ has the exact dominant $[T]$. Now, consider the $f$-subalgebra $G_1$ of $X$ in the proof of Theorem \ref{basic theorem}. For $v=0$, the additive unit, and $u\in G$, we have $$ \hat{T}(u)=T(u)\leq \lvert T(u)\rvert\leq S(p(u)) $$ and also $$ -\hat{T}(u)=-T(u)\leq \lvert T(u)\rvert\leq S(p(u)). $$ Therefore, we get $\lvert \hat{T}(u)\rvert\leq S(p(u))$ for each $u\in G$. Hence, $\hat{T}$ is also dominated by $S$, and so, we get $[\hat{T}]\leq [T]$. On the other hand, by considering $maj(T)$ and $maj(\hat{T})$, we have $[T]\leq [\hat{T}]$. As a result, we get the desired equality. \end{proof}
\begin{cor} Let $Y$ be a unital, $e$-uniformly complete and $p$-closed $f$-subalgebra of $X$. If every nonzero positive element of $Y$ has an inverse in $Y$, then, for each $y_0\notin Y$, there is a map $F:X\to E$ such that $F(Y)=0$ and $F(y_0)>0$. \end{cor} \begin{proof} Let us take the set $Y_1=\{u+v\cdot y_0:u,v\in Y\}$ and $w=\inf\{p(y+ y_0):y\in Y\}\geq0$. Then we show that $w\neq0$. Assume that this does not hold, i.e., $w=0$. Consider the set $A=\{p(y+y_0):y\in Y\}$, so that $w=\inf A$. For any $a_1,\ a_2\in A$, it is enough to show that $a_1\wedge a_2\in A$. To prove this, we consider \cite[Theorem 2.1.2]{Ku} and take the band $B$ generated by $a_1-a_1\vee a_2$. Then there is a band projection $\pi_B:E\to B$. Then we have another projection $\pi'_B$ on $X$ such that $\pi_B\big(p(x)\big)=p\big(\pi'_B(x)\big)$. So, we have \begin{eqnarray*} \pi_B(a_1)+\pi^d_B(a_2)&=&\pi_B(a_1\vee a_2+a_1\wedge a_2-a_2)+\pi^d_B(a_1\vee a_2+a_1\wedge a_2-a_1)\\&=& \pi_B(a_1\wedge a_2)+\pi^d_B(a_1\wedge a_2)\\&=&a_1\wedge a_2. \end{eqnarray*} Now, take $y_1,y_2\in Y$ so that $a_1=p(y_1+y_0)$ and $a_2=p(y_2+y_0)$. Thus, we can get \begin{eqnarray*} a_1\wedge a_2=\pi_B(a_1)+\pi^d_B(a_2)&=&\pi_B\big(p(y_1+y_0)\big)+\pi^d_B\big(p(y_2+y_0)\big)\\&=&p\big(\pi'_B(y_1+y_0)\big)+p\big(\pi'^d_B(y_2+y_0)\big)\\&=& p\big(\pi'_B(y_1+y_0)+\pi'^d_B(y_2+y_0)\big)\\&=& p\big(y_0+\pi'_B(y_1)+\pi'^d_B(y_2)\big). \end{eqnarray*} Therefore, we can see $a_1\wedge a_2\in A$. Thus, one can see $a_1\wedge a_2\leq a_1$ and $a_1\wedge a_2\leq a_2$. So, $A$ is a downward directed set. Therefore, we can take $A$ as a net in $E$. Since $p(y_\alpha-y_0)=p(y_0-y_\alpha)\downarrow 0$, we have $y_\alpha\xrightarrow{p}y_0$. Thus, we get $y_0\in Y$ because $Y$ is a $p$-closed set. This contradicts $y_0\notin Y$, and so we have $w>0$. Next, we define a map $T:Y_1\to E$ by $T(u+v\cdot y_0)=v\cdot w$. Then $T$ is linear and $T(Y)=0$.
Moreover, $T$ is also $E$-dominated. Indeed, we can write $p(u+v\cdot y_0)=v\cdot p(v^{-1}\cdot u+y_0)\geq v\cdot w=T(u+v\cdot y_0)$. It follows from Theorem \ref{basic theorem} that there exists a map from $X$ to $E$ satisfying the desired result. \end{proof}
For the next result, we consider the $f$-algebraic spaces $\mathcal{L}(E)\subseteq Orth(E)\subseteq Orth^\infty(E)$ in Example \ref{example of orh}.
\begin{cor} Let $E$ be an order complete vector lattice. Then $\big(Orth(E),\lvert\cdot\rvert,Orth^\infty(E)\big)$ is an $LNFA$. Moreover, if $T:\mathcal{L}(E)\to Orth^\infty(E)$ is an $E$-dominated operator, then it has an extension to $Orth(E)$. \end{cor} \begin{proof} Since $E$ is an order complete vector lattice, we see that $Orth^\infty(E)$ is an order complete $f$-algebra; see \cite[p.14]{BGKKKM}. Moreover, we can say that $\big(Orth(E),\lvert\cdot\rvert,Orth^\infty(E)\big)$ is an $LNFA$ because $Orth(E)$ is an $f$-subalgebra of $Orth^\infty(E)$ and $\lvert\cdot\rvert$ has the $f$-subalgebraic-linear property. By applying \cite[Theorem 3.1.]{WE}, we can see that $\mathcal{L}(E)$ is order complete because $E$ is order complete. Moreover, by using \cite[Theorem 42.6.]{LZ}, we also get that $\mathcal{L}(E)$ is $e$-uniformly complete because $\mathcal{L}(E)$ has the unit $I_E$. Then, we have an $E$-dominated extension of $T$ to $Orth(E)$. \end{proof} \end{document}
\begin{document} \title{Perturbation of frames in Banach spaces} \pagestyle{myheadings} \markboth{Stoeva}{perturbations}
\begin{abstract} In this paper we consider perturbation of $X_d$-Bessel sequences, $X_d$-frames, Banach frames, atomic decompositions and $X_d$-Riesz bases in separable Banach spaces. Equivalence between some perturbation conditions is investigated. \end{abstract}
\section{Introduction} From a practical point of view, it is very important to know what happens with a frame for a Hilbert space ${\cal H}$ when the elements of the frame are changed. Throughout the years, different conditions for closeness of two frames have been investigated, looking for weaker and weaker assumptions. The first perturbation results on Hilbert frames appeared in \cite{C95pert}, where it is proved that if $\seqgr[g]$ is a frame for ${\cal H}$ with lower bound $A$ and $\seqgr[f]$ satisfies the condition $\sum_{i=1}^\infty\|g_i-f_i\|_{\cal H}^2<A$, then $\seqgr[f]$ is also a frame for ${\cal H}$. Recall that $\seqgr[g]\subset {\cal H}$ is called a {\it frame for the Hilbert space ${\cal H}$ with bounds $A,B$} if $0<A\leq B<\infty$ and $A \|h\|^2\leq \sum_{i=1}^\infty | \langle h,g_i\rangle|^2\leq B\|h\|^2$ for every $h\in{\cal H}$. Further perturbation results on frames with weaker assumptions appeared in [2, 5\,-\,9]. Perturbation of sequences satisfying the upper frame inequality is considered in \cite{B}. Such perturbation results are important for the investigation of multipliers, which are very useful in signal processing. As far as it is known to the author, the best condition for perturbation of frames up to now was obtained by Casazza and Christensen \cite{CC}: \begin{thm} {\rm\cite{CC}} \label{thcc} Let $\seqgr[g]$ be a frame for ${\cal H}$ with bounds $A,B$ and let $\phi_i\newin{\cal H}$, $i\newin\mathbb N$. If there exist constants $\lambda_1, \lambda_2, \mu \geq 0$, such that $\max(\lambda_1 +\frac{\mu}{\sqrt{A}}, \lambda_2)<1$ and \begin{equation} \label{nerPW3} \left\| \sumgrd[i]{1}{n} c_i (g_i - \phi_i ) \right\| \leq \mu \left( \sumgrd[i]{1}{n} |c_i|^2 \right)^{\frac{1}{2}} + \lambda_1 \left\| \sumgrd[i]{1}{n} c_i g_i\right\| + \lambda_2 \left\| \sumgrd[i]{1}{n} c_i \phi_i\right\| \end{equation} for all finite scalar sequences $\{c_1, c_2, ..., c_n\} \ (n\in \mathbb N)$, then $\seq[\phi]$ is a frame for ${\cal H}$ with bounds \begin{equation} \label{pertcc} A\left(1-\frac{\lambda_1 +\lambda_2 +\mu/\sqrt{A}}{1+\lambda_2}\right)^2, \ \ \ B\left(1+\frac{\lambda_1 +\lambda_2 +\mu/\sqrt{B}}{1-\lambda_2}\right)^2. \end{equation} \end{thm} Motivated by (\ref{nerPW3}), Sun \cite{Sun} has considered perturbation of $G$-frames, which are sequences in Hilbert spaces, generalizing frames. In the present paper we generalize condition (\ref{nerPW3}) to Banach spaces (see (\ref{bcond}) and (\ref{bcondxd})) and obtain perturbation results for some generalizations of frames to Banach spaces. The paper is organized as follows. Section \ref{nd} contains notation and needed results. Perturbation of $X_d$-Bessel sequences, $X_d$-frames, Banach frames, atomic decompositions and $X_d$-Riesz bases is the topic of Section \ref{sperturb}. For each kind of the above sequences, we use the perturbation condition (\ref{bcond}) and determine appropriate additional assumptions on the constants $\mu, \lambda_1, \lambda_2$. Some of the results in this section generalize results from \cite{CC,CH}. Section \ref{sequiv} concerns connections between some conditions for closeness.
Equivalences with simpler perturbation conditions are proved: for $X_d$-Bessel sequences and $X_d$-frames, the $\mu$-term in (\ref{bcond}) is essential and the other two additions in (\ref{bcond}) can be omitted; for $X_d$-Riesz bases and Banach frames - both the $\mu$-term and the $\lambda_2$-term are essential, the $\lambda_1$-term can be omitted in some cases; for atomic decompositions - the $\mu$- and $\lambda_1$- terms can be reduced to one term. \section{Notation, definitions and needed results}\label{nd} Throughout the paper, $X$ and $Y$ denote Banach spaces, $X^*$ - the dual of $X$, $X_d$ - Banach space of scalar sequences. Recall that $X_d$ is called: {\it $BK$-space}, if the coordinate functionals are continuous; {\it $CB$-space}, if the canonical vectors form a Schauder basis for $X_d$; {\it $RCB$-space}, if it is reflexive $CB$-space. The canonical basis of a $CB$-space is denoted by $\seqgr[e]$. \begin{prop} \label{bkxdstar} {\rm \cite[p.\,201]{KA}} If $X_d$ is a $CB$-space, then $X_d^\circledast \mathrel{\mathop:}=\{ \{G e_i\}_{i=1}^\infty : G\in X_d^* \}$ with the norm $\|\{G e_i\}_{i=1}^\infty\|_{X_d^\circledast}\mathrel{\mathop:}=\|G\|_{X_d^*}$ is a $BK$-space isometrically isomorphic to $X_d^*$. \end{prop} Throughout the paper, when $X_d$ is a $CB$-space, $X_d^*$ is identified with $X_d^\circledast$. As usual, a scalar sequence $\seqgr[d]$ is called {\it finite}, when it has only finitely many non-zero elements. The notion {\it operator} is used for a linear mapping. It is said that an operator $F$ is defined from $X$ {\it onto} $Y$ if its range ${\cal R}({F})$ coincides with $Y$. An operator $G$, given by $G \seqgr[c]\mathrel{\mathop:}=\sum_{i=1}^\infty c_i g_i$ ($g_i\,\kern-0.4em\in\kern-0.15emY, i\newin\mathbb N$), is called {\it well defined from $X_d$ into $Y$} if the series $\sum_{i=1}^\infty c_i g_i$ converges in $Y$ for every $\seqgr[c]\in X_d$. The notation $\seqgr[g]\subset Y$ is used with the meaning $g_i\,\kern-0.4em\in\kern-0.15emY$, $\forall i\newin\mathbb N$. If the index set of a sequence or a sum is omitted, the set $\mathbb N$ should be understood. Let us recall the definitions of the sequences, whose perturbations are investigated in the present paper. \begin{defn} \label{defxdfr} Let $X_d$ be a $BK$-space and $\seq[g]\subset X^*$. If \begin{itemize} \item[\rm{($a$)}] $ \{ g_i(f) \} \in X_d$, \ $\forall f\,\kern-0.4em\in\kern-0.15emX$, \item[\rm{($b$)}] $\exists \ \, B\in (0,\infty) \ \ : \ \ \|\{g_i(f)\}\|_{X_d} \leq B \|f\|_X, \ \forall f\,\kern-0.4em\in\kern-0.15emX$, \end{itemize} then $\seq[g]$ is called an $X_d$-Bessel sequence for $X$ with bound $B$. If $\seq[g]$ is an $X_d$-Bessel sequence for $X$ with bound $B$ and there exists $A\in(0,B]$ such that $A\|f\|_X\leq \|\{g_i(f)\}\|_{X_d}$ for every $f\in X$, then $\seq[g]$ is called an $X_d$-frame for $X$ with bounds $A,B$. An $\ell^p$-frame is called a $p$-frame. When $\seq[g]$ is an $X_d$-frame for $X$ and there exists a bounded operator $S:X_d\to X$ such that $S\{g_i(f)\}=f$, $\forall f\in X$, then $(\seq[g], S)$ is called a {\it Banach frame for $X$ with respect to $X_d$} and $S$ is called a Banach frame operator for $\seq[g]$; $\seq[g]$ is also called a {\it Banach frame for $X$ w.r.t. $X_d$}. When $\seq[g]$ is an $X_d$-frame for $X$ and there exists $\seq[f]\subset X$ such that $f=\sum g_i(f) f_i$, $\forall f\in X$, then $(\seq[g],\seq[f])$ is called an atomic decomposition of $X$ with respect to $X_d$. 
\end{defn} \begin{defn} \label{q} A sequence $\seq[g]\subset Y$ is called an {\it $X_d$-Riesz basis for $Y$ with bounds $A,B$}, if it is complete in $Y$, $0<A\leq B<\infty$ and \begin{eqnarray} \label{bo} A \|\seq[c]\|_{X_d} \le \left\| \sum c_i g_i \right\|_{Y} \le B\|\seq[c]\|_{X_d}, \ \forall \seq[c]\in X_d. \end{eqnarray} \end{defn} Note that when $X_d$ is a $CB$-space, validity of (\ref{bo}) for all finite scalar sequences $\seqgr[c]$ implies validity of (\ref{bo}) for all $\seqgr[c]\in X_d$. While in the Hilbert space setting ($X$-Hilbert space and $X_d=\ell^2$) the concepts {\it $X_d$-frame, Banach frame} and {\it atomic decomposition} lead to the same one, namely a {\it frame for $X$}, in the Banach space setting this is not so. 1. {\it $X_d$-frame $\nRightarrow$ Banach frame w.r.t. $X_d$; \ \ \ $X_d$-frame $\nRightarrow$ atomic decomposition w.r.t. $X_d$}: \noindent Casazza has proved that there exist $p$-frames, which do not give rise to atomic decompositions: {\it For every $p\neq 2$, $p\in(1,\infty)$, there exist a Banach space $X$ and a $p$-frame $\seq[g]\subset X^*$ for $X$ such that there is no family $\seq[f]\subset X$ satisfying $f= \sum g_i (f) f_i, \forall f\,\kern-0.4em\in\kern-0.15emX$.} Moreover, this $p$-frame is not a Banach frame for $X$ w.r.t. $\ell^p$ (see the equivalence of {\rm(iii)} and {\rm(v)} in \cite[Proposition 3.4]{CCS}, valid for $CB$-spaces $X_d$). 2. {\it Banach frame for $X$ $\nRightarrow$ atomic decomposition of $X$}: \noindent A sequence $\seq[g]$ is a Banach frame for $X$ if and only if $\seq[g]$ is total on $X$, i.e., if and only if $g_i(x)=0, \forall i\in \mathbb N$, implies $x=0$ (for one of the directions see \cite[Lemma 2.6]{CCS}, the other direction is clear). Not every total sequence on $X$ gives rise to an atomic decomposition of $X$: if $\seq[z]$ denotes an orthonormal basis for a Hilbert space ${\cal H}$, then the sequence $\{z_i+z_{i+1}\}$ is total and thus, it is a Banach frame for ${\cal H}$ w.r.t. an appropriate $BK$-space $X_d$; however, $\{z_i+z_{i+1}\}$ does not give rise to an atomic decomposition of ${\cal H}$, see \cite[Example 2.8]{CCS}. 3. {\it Atomic decomposition of $X$ w.r.t.\,$X_d$ $\nRightarrow$ Banach frame for $X$ w.r.t.\,$X_d$}: \noindent If $X=c_0$, $X_d={\ell}^{\infty}$ and $\seqgr[g]$ denotes the sequence of coefficient functionals associated with the canonical basis $\seqgr[z]$ of $c_0$, then it is clear that $(\seq[g],\seq[z])$ is an atomic decomposition of $X$ w.r.t. $X_d$. However, $\seq[g]$ is not a Banach frame for $X$ w.r.t. $X_d$ (see \cite[Example 2.3 and Proposition 3.4]{CCS}). 4. {\it $X_d^*$-Riesz basis $\&$ $X_d$-$RCB$ $\Rightarrow$ Banach frame and atomic decomp.}: \noindent If $X_d$ is an $RCB$-space and $\seq[g]$ is an $X_d^*$-Riesz basis for $X^*$, then $\seq[g]$ is a Banach frame for $X$ w.r.t. $X_d$ and there exists $\seq[f]$ such that $(\seq[g],\seq[f])$ is an atomic decomposition of $X$ w.r.t. $X_d$ (see \cite{Srbasis}). Clearly, the converse does not hold in general. Let $\seq[g]\subset X^*$. The operators $U$ and $T$ given by $$ Uf=\{g_i(f)\}, \, f\in X, \ \mbox{and} \ T\seq[d]= \sum d_ig_i, $$ are called the {\it analysis operator for $\seq[g]$} and the {\it synthesis operator for $\seq[g]$}, respectively. We will use the following assertions. \begin{prop} {\rm \cite{CCS}} \label{prop22} Let $X_d$ be a $CB$-space (resp. $RCB$-space). A family $\seq[g] \subset X^* $ is an $X_d^*$-Bessel sequence (resp.
$X_d$-Bessel sequence) for $X$ with bound $B$ if and only if the synthesis operator $T$ is well defined and hence bounded from $X_d$ (resp. $X_d^*$) into $X^*$ and $ \|T\| \le B$. \end{prop} \begin{prop} {\rm \cite{Sthesis}} \label{exp} Let $X_d$ be an $RCB$-space and $\{g_i\}\subset X^*$. The sequence $\{g_i\}$ is an $X_d$-frame for $X$ if and only if the synthesis operator $T$ is well defined and hence bounded from $X_d^*$ onto $ X^*$. \end{prop} For the sake of completeness we give the proof. \noindent{\bf Proof: \ } Let $T$ and $U$ denote the synthesis and the analysis operator for $\seq[g]$, respectively. By Proposition \ref{prop22}, $\seq[g]$ is an $X_d$-Bessel sequence for $X$ if and only if $T$ is well defined from $X_d^*$ into $X^*$. Let now $\seq[g]$ be an $X_d$-Bessel sequence for $X$. Consider arbitrary $F\,\kern-0.4em\in\kern-0.15emX_d^*$ and the corresponding sequence $\{F(e_i)\}\,\kern-0.4em\in\kern-0.15emX_d^\circledast$ (see Proposition \ref{bkxdstar}). For every $f\,\kern-0.4em\in\kern-0.15emX$, $U^*(F) \,(f)=F(U f)= \sum g_i(f) \,F(e_i)=T \{F(e_i)\} \,(f)$ and hence $U^*(F)=T \{F(e_i)\} $. Thus, we can write $U^*=T$. Since $X_d$ is reflexive and $X$ is isomorphic to the closed subspace ${\cal R}(U)$ of $X_d$, $X$ is also reflexive and hence $T^*=U$. By \cite{Heuser}, the operator $U^*$ is surjective if and only if $U$ has a bounded inverse defined on ${\cal R}(U)$, which by \cite{KA} is equivalent to the validity of the lower $X_d$-frame inequality. \noindent{$\Box$} \begin{prop} {\rm \cite{CCS}} \label{propconv} Let $X_d$ be a $BK$-space and $\seq[g] \subset X^* $ be an $X_d$-frame for $X$. If there exists $\seq[f]\subset X$ such that $\sum c_i f_i$ converges in $X$ for every $\seq[c]\in X_d$ and $f=\sum c_i f_i$, $\forall f\in X$, then $\seq[g]$ is a Banach frame for $X$ w.r.t. $X_d$; when $X_d$ is a $CB$-space, the converse also holds. \end{prop} It is well known that a bounded operator $G:X \rightarrow X$ on a Banach space $X$, for which $\|G-Id_X\| < 1$, has a bounded inverse. An improved version of this result is given by Casazza and Christensen: \begin{prop} {\rm\cite{CC}} \label{imprinv} Let $G:X \rightarrow X$ be an operator. Assume that there exist constants $\lambda_1\in [0,1), \lambda_2 \in [0,1)$ such that \begin{equation*} \|Gx-x\| \leq \lambda_1 \|x\| + \lambda_2 \|Gx\|, \ \forall x \in X. \end{equation*} \noindent Then $G$ is bounded with bounded linear inverse $G^{-1}:X\to X$ and \begin{equation*} \frac{1-\lambda_2}{1+\lambda_1} \|x\| \leq \|G^{-1}x\| \leq \frac{1+\lambda_2}{1-\lambda_1} \|x\|, \ \forall x\in X. \end{equation*} \end{prop} \section{Perturbation results} \label{sperturb} Throughout the rest of the paper we assume that $\seq[g]\subset X^*$, $\seq[\phi]\subset X^*$, $\seq[f]\subset X$, $\seq[\psi]\subset X$. Let $X_d$ be an $RCB$-space and $\seq[g]$ be an $X_d$-Bessel sequence for $X$. Note that $\sum c_i g_i$ is not necessarily convergent for all $\seq[c]\in X_d$ and therefore, in general we can not generalize (\ref{nerPW3}) using $X_d$-norm of $\seq[c]$ instead of $\ell^2$-norm. Motivated by Proposition \ref{prop22}, which implies that $\sum d_i g_i$ converges for all $\seq[d]\,\kern-0.4em\in\kern-0.15emX_d^*$, we generalize condition (\ref{nerPW3}) using $X_d^*$-norm of scalar sequences. 
Thus, we consider a perturbation condition in the following form: \begin{itemize} \item[$(\mathcal{P}^*)$] \ $\exists \ \mu\geq 0, \lambda_1\geq 0, \lambda_2\geq 0,$ such that \begin{equation} \label{bcond} \left\| \sum d_i (\phi_i - g_i) \right\|_{X^*} \!\!\le \mu \left\| \seq[d]\right\|_{X_d^*} + \lambda_1 \left\| \sum d_i g_i \right\|_{X^*}\!\! + \lambda_2 \left\| \sum d_i \phi_i \right\|_{X^*}\!\! \end{equation} for all finite scalar sequences $\seq[d]$. \end{itemize} By analogy, when $\seq[f]$ is an $X_d^*$-Bessel sequence for $X^*$, we consider a perturbation condition of the form: \begin{itemize} \item[$(\mathcal{P})$] \ $\exists \ \mu\geq 0, \lambda_1\geq 0, \lambda_2\geq 0,$ such that \begin{equation} \label{bcondxd} \left\| \sum c_i (\psi_i - f_i) \right\|_{X} \le \mu \left\| \seq[c]\right\|_{X_d} + \lambda_1 \left\| \sum c_i f_i \right\|_{X} + \lambda_2 \left\| \sum c_i \psi_i \right\|_{X} \end{equation} for all finite scalar sequences $\seq[c]$. \end{itemize} When $\seq[g]$ is an $X_d$-Bessel sequence (resp. $\seq[f]$ is an $X_d^*$-Bessel sequence) with bound $B$ and $(\mathcal{P}^*)$ (resp. $(\mathcal{P})$) holds, denote $$\Delta=\frac{ B(\lambda_1+\lambda_2) + \mu }{1-\lambda_2}\,.$$ \subsection{Perturbation of ${\bf X_d}$-Bessel sequences} We begin with a result on the upper $X_d^*$-frame condition. \begin{prop} \label{con2xd} Let $X_d$ be a $CB$-space and $\seq[f]$ be an $X_d^*$-Bessel sequence for $X^*$ with bound $B$. Assume that \begin{itemize} \item[$\mathcal{A}_{1}:$] \ \ $(\mathcal{P})$ holds with $\lambda_2<1$. \end{itemize} Then $\seq[\psi]$ is an $X_d^*$-Bessel sequence for $X^*$ with bound $\widetilde{B}=B+\Delta$, the inequality in (\ref{bcondxd}) holds for all $\seq[c] \in X_d$ and $\{\psi_i-f_i \}$ is an $X_d^*$-Bessel sequence for $X^*$ with bound $\Delta$. \end{prop} \noindent{\bf Proof: \ } By the triangle inequality and (\ref{bcondxd}), for every finite sequence $\seq[c]$ we have \begin{equation*} (1-\lambda_2) \left\|\sum c_i \psi_i \right\|_{X} \leq \mu \left\| \seq[c]\right\|_{X_d} + (\lambda_1+1) \left\| \sum c_i f_i \right\|_{X}. \end{equation*} Using Proposition \ref{prop22}, for every $\seq[c]\in X_d$ and every $n> m>0$ one can conclude that \begin{equation}\label{finew} 0\leq \left\| \displaystyle{\sum_{i=m}^n} c_i \psi_i \right\|_{X} \leq \frac{ B(\lambda_1+1) + \mu}{1-\lambda_2} \left\| \sum_{i=m}^n c_i e_i\right\|_{X_d}\underset{n,m\to \infty}{\longrightarrow} 0. \end{equation} Therefore, the series $\sum c_i \psi_i$ converges for every $\seq[c] \in X_d$ and \begin{equation*} \left\| \displaystyle{\sum} c_i \psi_i \right\|_{X} \leq \frac{ B(\lambda_1+1) + \mu}{1-\lambda_2} \left\| \sum c_i e_i\right\|_{X_d}, \ \, \forall \seq[c]\,\kern-0.4em\in\kern-0.15emX_d. \end{equation*} Now Proposition \ref{prop22} implies that $\{\psi_i\}$ is an $X_d^*$-Bessel sequence for $X^*$ with bound $\widetilde{B}=\frac{B(\lambda_1+1) + \mu }{1-\lambda_2}=B+\Delta$. It also follows easily that $\sum c_i (\psi_i - f_i) $ converges for all $\seq[c] \in X_d$ and the inequality in (\ref{bcondxd}) holds for all $\seq[c] \in X_d$. Moreover, $$ \left\| \sum c_i (\psi_i - f_i) \right\|_{X} \le (\lambda_1 \, B + \mu + \lambda_2 \, \widetilde{B})\, \left\| \seq[c]\right\|_{X_d} = \Delta \left\| \seq[c]\right\|_{X_d}, \forall \seq[c]\in X_d, $$ which by Proposition \ref{prop22} implies that $\{\psi_i-f_i\}$ is an $X_d^*$-Bessel sequence for $X^*$ with bound $\Delta$.
\noindent{$\Box$} \begin{cor} \label{con2} Let $X_d$ be an $RCB$-space, $\seq[g]$ be an $X_d$-Bessel sequence for $X$ with bound $B$ and \begin{itemize} \item[$\mathcal{A}_{2}:$] \ \ $(\mathcal{P}^*)$ holds with $\lambda_2<1$. \end{itemize} Then $\seq[\phi]$ is an $X_d$-Bessel sequence for $X$ with bound $\widetilde{B}=B+\Delta$, the inequality in (\ref{bcond}) holds for all $\seq[d]\in X_d^*$ and $\{\phi_i-g_i \}$ is an $X_d$-Bessel sequence for $X$ with bound $\Delta$. \end{cor} \subsection{Perturbation of ${\bf X_d}$-frames} While a frame for a Hilbert space ${\cal H}$ is also a Banach frame for ${\cal H}$ and gives rise to an atomic decomposition of ${\cal H}$, the concepts {\it $X_d$-frame, Banach frame} and {\it atomic decomposition} are not the same in the Banach space setting. We consider perturbation of sequences of all these kinds. \begin{prop} \label{movepfrp5xd} Let $X_d$ be a $CB$-space and $\seq[f]$ be an $X_d^*$-frame for $X^*$ with bounds $A, B$. Assume that \begin{itemize} \item[$\mathcal{A}_{3}:$] \ \ $(\mathcal{P})$ holds with \, $\mu+\lambda_2 (A+B)+ \lambda_1 B<A$. \end{itemize} Then $\{ \psi _i \}$ is an $X_d^*$-frame for $X^*$ with bounds $\widetilde{A}=A- \Delta$, $\widetilde{B}=B+\Delta$. \end{prop} \noindent{\bf Proof: \ } The assumptions in $\mathcal{A}_{3}$ imply that $\lambda_2 <1$ and thus, by Proposition \ref{con2xd}, $\{\psi_i\}$ is an $X_d^*$-Bessel sequence for $X^*$ with bound $\widetilde{B}=B+\Delta$. Again by Proposition \ref{con2xd}, $\{\psi_i-f_i\}$ is an $X_d^*$-Bessel sequence for $X^*$ with bound $\Delta$ and therefore, \begin{equation*} \|\{g(\psi_i)\}\|_{X_d^*}\geq\|\{g(f_i)\}\|_{X_d^*} - \|\{g(\psi_i - f_i)\} \|_{X_d^*} \geq (A- \Delta) \|g\|_{X^*}, \end{equation*} where $A- \Delta=\frac{A-\mu-\lambda_2(A+B)-\lambda_1 B}{1-\lambda_2}>0$, which proves the lower $X_d^*$-frame inequality. \noindent{$\Box$} \begin{cor} \label{con22} Let $X_d$ be an $RCB$-space, $\seq[g]$ be an $X_d$-frame for $X$ with bounds $A$,$B$ and \begin{itemize} \item[$\mathcal{A}_{4}$\rm{:}] \ \ $(\mathcal{P^*})$ holds with $\mu+\lambda_2 (A+B)+ \lambda_1 B<A$.\end{itemize} Then $\seq[\phi]$ is an $X_d$-frame for $X$ with bounds $\widetilde{A}=A- \Delta$, $\widetilde{B}=B+\Delta$. \end{cor} {\bf Note} If one would like to perturb $X_d$-Bessel sequences or $X_d$-frames, keeping the new bounds close to the original ones - with difference smaller than $\varepsilon$ - one can add the restriction $\Delta<\varepsilon$. \subsection{Perturbation of Banach frames} If $(\seq[g], S)$ is a Banach frame for $X$, there are two possibilities for perturbation: 1. perturb the operator $S$; 2. perturb the sequence $\seq[g]$. \noindent Casazza and Christensen \cite{CC} have investigated perturbation of the Banach frame operator $S$: \begin{thm} {\rm \cite{CC}} Let $(\seq[g], S)$ be a Banach frame for $X$ with respect to $X_d$ with bounds $A,B$ and let the bounded operator $\widetilde{S}:X_d\to X$ satisfy the condition \begin{itemize} \item[$\mathcal{A}_5$\rm{:}] \ $\exists\, \beta_1, \beta_2, \nu \geq 0$ such that $\max(\beta_2, \beta_1+\nu B)<1$ and $\|S c-\widetilde{S} c\|_X\leq \nu \|c\|_{X_d}+ \beta_1 \|S c\|_X+ \beta_2 \|\widetilde{S} c\|_X,$ \ $\forall \, c\in X_d$. \end{itemize} Then there exists a sequence $\seq[\theta]\subset X^*$ such that $(\seq[\theta], \widetilde{S})$ is a Banach frame for $X$ with respect to $X_d$ with bounds $A\frac{1-\beta_2}{1+\beta_1+\nu B}$, $B\frac{1+\beta_2}{1-(\beta_1+\nu B)}$. \end{thm} We consider perturbation of the Banach-frame sequence.
\begin{thm} \label{wmove} Let $X_d$ be an $RCB$-space and $(\seq[g], S)$ be a Banach frame for $X$ with respect to $X_d$ with bounds $A,B$. Assume that \begin{itemize} \item[$\mathcal{A}_{6}$\rm{:}] \ \ $(\mathcal{P^*})$ holds with $\max(\lambda_2, \lambda_1+\mu \|S\|)<1$. \end{itemize} Then there exists an operator $\widetilde{S}:X_d\to X$ such that $(\seq[\phi], \widetilde{S})$ is a Banach frame for $X$ with respect to $X_d$ with bounds $\widetilde{A}=\frac{1- (\mu \|S\|+\lambda_1)}{(1+\lambda_2)\|S\|}$, $\widetilde{B}=B+\Delta$. \end{thm} \noindent{\bf Proof: \ } By Corollary \ref{con2}, $\seq[\phi]$ is an $X_d$-Bessel sequence for $X$ with bound $B+\Delta$ and hence, the operator $\widetilde{T}$, given by $\widetilde{T} \seq[d]:=\sum d_i \phi_i$, is well defined from $X_d^*$ into $X^*$ (see Proposition \ref{prop22}). Consider the sequence $\seq[f]:=\{S e_i\}$. By the isometrical isomorphism of $X_d^*$ and $X_d^\circledast$, $S^*(g) \in X_d^*$ corresponds to $\{g(f_i)\}=\{S^*(g)\,(e_i)\} \in X_d^\circledast, \ \forall g \in X^*$. Moreover, $\seq[f]$ is an $X_d^*$-frame for $X^*$ such that $g=\sum g(f_i)g_i$ for all $g \in X^*$ (see the proof of \cite[Proposition 3.4]{CCS}). Now we use an idea from \cite[Theorem 4]{CC}, namely, to apply Proposition \ref{imprinv} with appropriate operator $G$. Consider the bounded operator $\widetilde{T} S^* : X^* \rightarrow X^*$. Let $g \in X^*$. By Corollary \ref{con2}, the inequality in (\ref{bcond}) holds for all $\seq[d] \in X_d^*$; applying this inequality to the sequence $\{g(f_i)\}$, we get \begin{eqnarray*} \left\| g - \widetilde{T} S^* g \right\|_{X^*} &=& \left\| g - \widetilde{T} \{ g(f_i) \} \right\|_{X^*}= \left\| \sum g(f_i)g_i - \sum g(f_i) \phi_i \right\|_{X^*} \\ & \le & \mu \left\|\{g(f_i)\}\right\|_{X_d^\circledast} + \lambda_1 \left\| \sum g(f_i) g_i \right\|_{X^*} + \lambda_2 \left\| \sum g(f_i) \phi_i \right\|_{X^*}\\ &\le& (\mu \|S\|+\lambda_1) \|g\|_{X^*} + \lambda_2 \left\| \widetilde{T} S^* g \right\|_{X^*}. \end{eqnarray*} By Proposition \ref{imprinv}, the operator $\widetilde{T} S^*$ is invertible and the inverse is bounded with \begin{equation} \label{upperinv2} \|(\widetilde{T} S^*)^{-1}\| \leq \frac{1+\lambda_2}{1- (\mu \|S\|+\lambda_1)}. \end{equation} Thus, every $g\in X^*$ can be written as $g =\widetilde{T}S^*(\widetilde{T} S^*)^{-1} \,g$, which implies that $\widetilde{T}$ is onto $X^*$. Hence, by Proposition \ref{exp}, $\{\phi_i\}$ is an $X_d$-frame for $X$. Since $X_d$ is reflexive and $\seq[g]$ is an $X_d$-frame for $X$, the space $X$ is isomorphic to a closed subspace of $X_d$ and thus, $X$ is also reflexive. Let $\widetilde{S}$ denote the bounded operator $((\widetilde{T} S^*)^{-1})^*S:X_d\to X^{**}=X$ and let $\widetilde{U}$ denote the analysis operator for $\seq[\phi]$. Note that $\widetilde{U}=\widetilde{T}^*$ (see the proof of Proposition \ref{exp}). Therefore, $$\widetilde{S}\{\phi_i(f)\}=\widetilde{S}\widetilde{U}f= ((\widetilde{T} S^*)^{-1})^*S \widetilde{T}^*f= f, \, \forall f\in X,$$ and hence, $(\seq[\phi], \widetilde{S})$ is a Banach frame for $X$ with respect to $X_d$. Moreover, for every $f\in X$ we have \begin{equation*} \|f\|=\|\widetilde{S} \{\phi_i(f)\}\|\leq \|\widetilde{S}\|\, \|\{\phi_i(f)\}\| \leq \|S\|\,\frac{1+\lambda_2}{1- (\mu \|S\|+\lambda_1)} \, \|\{\phi_i(f)\}\| \end{equation*} and therefore, $\frac{1- (\mu \|S\|+\lambda_1)}{(1+\lambda_2)\|S\|} $ is a lower bound for $\seq[\phi]$. 
\noindent{$\Box$} {\bf Remark} If $\seq[g]$ is a frame for a Hilbert space ${\cal H}$ with bounds $A,B$, and $T_d$ denotes the synthesis operator for the canonical dual of $\seq[g]$, then ($\seq[g], T_d$) is a Banach frame for ${\cal H}$ with respect to $\ell^2$ with bounds $\sqrt{A},\sqrt{B}$. In this case Theorem \ref{wmove} gives Theorem \ref{thcc} - the perturbation conditions and the bounds are the same. Therefore, Theorem \ref{wmove} generalizes Theorem \ref{thcc}. \begin{cor} \label{wmove2} Let $X_d$ be an $RCB$-space, $\seq[g]$ be an $X_d$-frame for $X$ with bounds $A,B$ and $P$ be a bounded projection from $X_d$ onto ${\cal R}(U)$, where $U$ denotes the analysis operator for $\seq[g]$. Assume that \begin{itemize} \item[$\mathcal{A}_{7}$\rm{:}] \ $(\mathcal{P^*})$ holds with $\max(\lambda_2, \lambda_1+\mu \frac{\|P\|}{A})<1$. \end{itemize} Then there exists an operator $\widetilde{S}:X_d\to X$ such that $(\seq[\phi], \widetilde{S})$ is a Banach frame for $X$ with respect to $X_d$ with bounds $\widetilde{A}=\frac{A}{\|P\|}\frac{1- (\mu \frac{\|P\|}{A}+\lambda_1)}{(1+\lambda_2)}$, $\widetilde{B}=B+\Delta$. \end{cor} \noindent{\bf Proof: \ } The operator $U$ has bounded inverse $U^{-1}$ with $\|U^{-1}\|\leq 1/A$. Moreover, $S:=U^{-1}P$ is a Banach frame operator for $\seq[g]$ and $\|S\|\leq \|P\|/A$. Thus, $\mathcal{A}_{6}$ holds and Theorem \ref{wmove} implies that $\seq[\phi]$ is a Banach frame for $X$ w.r.t. $X_d$ with bounds $\frac{1-(\mu \|S\|+\lambda_1)}{(1+\lambda_2)\|S\|}\geq \frac{A}{\|P\|}\frac{1-(\mu\frac{\|P\|}{A}+\lambda_1)}{1+\lambda_2}$, $B+\Delta$. \noindent{$\Box$} \subsection{Perturbation of ${\bf X_d}$-Riesz bases} \begin{prop} \label{wmoveriesz} Let $X_d$ be an $RCB$-space and $\seq[f]$ be an $X_d$-Riesz basis for $X$ with bounds $A,B$. Assume that \begin{itemize} \item[$\mathcal{A}_{8}$\rm{:}] \ $(\mathcal{P})$ holds with $\max(\lambda_2, \lambda_1+\mu /A)<1$. \end{itemize} Then $\seq[\psi]$ is an $X_d$-Riesz basis for $X$ with bounds $\widetilde{A}=A- \frac{A(\lambda_1+\lambda_2)+ \mu}{1+\lambda_2}$, $\widetilde{B}=B+\Delta$. \end{prop} \noindent{\bf Proof: \ } First note that $X$ is isomorphic to $X_d$ (see \cite[Proposition 3.4]{Srbasis}) and thus, $X$ is also reflexive. By \cite[Proposition 4.7]{Srbasis}, $\seq[f]$ is an $X_d^*$-frame for $X^*$ with bounds $A,B$ and the analysis operator $U$ for $\seq[f]$ is injective with ${\cal R}(U)=X_d^*$. Corollary \ref{wmove2}, applied with $P$ being the Identity operator on $X_d^*$, implies that $\seq[\psi]$ is an $X_d^*$-frame for $X^*$ with bounds $A\,\frac{1- (\frac{\mu}{A}+\lambda_1)}{(1+\lambda_2)}$, $B+\Delta$. By Proposition \ref{exp}, $\seq[\psi]$ is complete in $X^{**}=X$. By Proposition \ref{prop22}, $\{\psi_i\}$ satisfies the upper $X_d$-Riesz basis inequality with the same upper bound $B+\Delta$. For the lower $X_d$-Riesz basis inequality, note that $\lambda_1<1$ and for every $\{c_i\} \in X_d$ one has \begin{equation*} \left\| \sum c_i f_i \right\| - \left\| \sum c_i \psi_i\right\| \le \lambda_1 \left\| \sum c_i f_i \right\| + \mu \left\|\seq[c]\right\| + \lambda_2 \left\| \sum c_i \psi_i \right\|, \end{equation*} \noindent which implies that \begin{eqnarray*} (1+\lambda_2) \left\| \sum c_i \psi_i\right\|_{X^*} &\geq& (1- \lambda_1) \left\| \sum c_i f_i \right\|_{X^*} - \mu \left\|\seq[c]\right\| \\ &\geq& \left( (1-\lambda_1)A - \mu \right) \left\|\seq[c]\right\|. 
\end{eqnarray*} Therefore $\seq[\psi]$ satisfies the lower $X_d$-Riesz basis condition with bound $\widetilde{A}=\frac{(1-\lambda_1)A - \mu}{1+\lambda_2}=A- \frac{A(\lambda_1+\lambda_2)+ \mu}{1+\lambda_2}.$ \noindent{$\Box$} Concerning the above proposition, note that if $A-\Delta>0$, then $A- \Delta$ is also a lower bound for the $X_d$-Riesz basis $\seq[\psi]$, but $A- \frac{ A(\lambda_1+\lambda_2) + \mu }{1+\lambda_2}$ is closer to the optimal one. {\bf Remark} If $\seq[g]$ is a Riesz basis for a Hilbert space ${\cal H}$, then Proposition \ref{wmoveriesz} becomes \cite[Corollary 2]{CC}. Therefore, Proposition \ref{wmoveriesz} is a generalization of \cite[Corollary 2]{CC} to Banach spaces. \subsection{Perturbation of atomic decompositions} Perturbation of atomic decompositions under ($\mathcal{P}$) with $\lambda_2=0$ is considered in \cite{CH}. Below we add the $\lambda_2$-term and assume (\ref{bcondxd}) only for a subspace of $X_d$. The reason to work with a subspace of $X_d$ is the following. Let $X_d$ be a $BK$-space and $(\seq[g], \seq[f])$ be an atomic decomposition of $X$ with respect to $X_d$. Clearly, $\sum c_i f_i$ converges in $X$ for every $\seq[c]=\{g_i(f)\}, f\in X$. However, $\sum c_i f_i$ need not converge for all $\seq[c]$ in $X_d$, and $\seq[f]$ might not be an $X_d^*$-Bessel sequence - an example can be found in \cite{LO}. That is why below we only assume that (\ref{bcondxd}) holds for $\seq[c]=\{g_i(f)\}, f\in X$, not necessarily for all $\seq[c]\in X_d$. Note that convergence of $\sum c_i f_i$ in $X$ for all $\seq[c]\in X_d$ implies that $\seq[g]$ is a Banach frame for $X$ w.r.t. $X_d$ (see Proposition \ref{propconv}). \begin{prop} \label{atdmove2} Let $X_d$ be an $RCB$-space and $(\seq[g], \seq[f])$ be an atomic decomposition of $X$ with respect to $X_d$ with bounds $A,B$. Assume that \begin{itemize} \item[$\mathcal{A}_{9}$\rm{:}] \ $\exists$ $\mu\geq 0, \lambda_1\geq 0, \lambda_2\geq 0,$ $\max(\lambda_2, \lambda_1+\mu B)<1$, such that (\ref{bcondxd}) holds for every $\seq[c]=\{g_i(f)\}, f\in X$. \end{itemize} Then there exists a sequence $\seq[\theta]\subset X^*$ such that $(\seq[\theta], \seq[\psi])$ is an atomic decomposition of $X$ with respect to $X_d$ with bounds $\widetilde{A}=A\frac{1-\lambda_2}{1+\lambda_1+\mu B}$, $\widetilde{B}=B\frac{1+\lambda_2}{1-(\lambda_1+\mu B)}$. \end{prop} \noindent{\bf Proof: \ } By $\mathcal{A}_{9}$, $\sum g_i(f)\psi_i$ converges in $X$ for every $f\in X$. Thus, we can consider the operator $G:X\to X$ given by $Gf:=\sum g_i(f) \psi_i$, $f\in X$. For every $f\in X$, one has \begin{eqnarray*} \left\| f-Gf \right\|_{X} &=& \left\| \sum g_i(f) f_i - \sum g_i(f) \psi_i \right\|_{X} \\ &\le& \mu \left\| \{g_i(f)\} \right\|_{X_d} + \lambda_1 \left\| \sum g_i(f) f_i \right\|_{X} + \lambda_2 \left\| \sum g_i(f) \psi_i \right\|_{X}\\ &\le& (\lambda_1+B\mu) \left\| f \right\|_{X} + \lambda_2 \left\| Gf \right\|_{X}. \end{eqnarray*} By Proposition \ref{imprinv}, $G$ is bounded with bounded inverse. For $i\in\mathbb N$, define $\theta_i:= (G^{-1})^*g_i\in X^*$. For every $f\in X$, one has $$\{\theta_i(f)\}=\{g_i(G^{-1}f)\}\in X_d,$$ \begin{equation*}\|\{\theta_i(f)\}\|_{X_d}=\|\{g_i(G^{-1}f)\}\|_{X_d} \left\{ \begin{array}{l} \leq B\|G^{-1}f\|\leq B\,\frac{1+\lambda_2}{1-(\lambda_1+\mu B)}\,\|f\|\\ \geq A\|G^{-1}f\|\geq A\,\frac{1-\lambda_2}{1+\lambda_1+\mu B}\,\|f\| \end{array} \right., \end{equation*} $$f=G(G^{-1}f)=\sum g_i(G^{-1}f)\psi_i =\sum \theta_i(f)\psi_i.$$ This concludes the proof.
\noindent{$\Box$} \begin{thm} \label{atdmove3} Let $X_d$ be an $RCB$-space and $(\seq[g], \seq[f])$ be an atomic decomposition of $X$ with respect to $X_d$ with bounds $A,B$. Assume that \begin{itemize} \item[$\mathcal{A}_{10}$\rm{:}] \ $\exists$ $\mu\geq 0, \lambda_1\geq 0, \lambda_2\geq 0,$ $\max(\lambda_2, \lambda_1+\mu B)<1$, such that (\ref{bcondxd}) holds for every $\{c_i\}_{i=1}^n=\{g_i(f)\}_{i=1}^n, n\in\mathbb N, f\in X$. \end{itemize} Then the conclusion of Proposition \ref{atdmove2} holds. \end{thm} \noindent{\bf Proof: \ } Let $f\in X$. We first prove the convergence of $\sum g_i(f)\psi_i$ in $X$. By $\mathcal{A}_{10}$, for $m>n$ we obtain \begin{equation*}\label{psiconv2} (1-\lambda_2) \left\| \sum_{i=n+1}^m g_i(f) \psi_i \right\|_{X} \le \mu \left\| \sum_{i=n+1}^m g_i(f)e_i\right\|_{X_d} + (\lambda_1 +1)\left\| \sum_{i=n+1}^m g_i(f) f_i \right\|_{X} \to 0 \end{equation*} when $m,n\to\infty$, which implies that $\sum g_i(f)\psi_i$ converges in $X$. Furthermore, consider $$ \left\| \sum_{i=1}^n g_i(f) (\psi_i - f_i) \right\|_{X} \!\!\!\le \mu \left\| \sum_{i=1}^n g_i(f) e_i\right\|_{X_d} \!\!\!\!+ \lambda_1 \left\| \sum_{i=1}^n g_i(f) f_i \right\|_{X} \!\!+ \lambda_2 \left\| \sum_{i=1}^n g_i(f) \psi_i \right\|_{X} $$ and take the limit as $n\to\infty$. This implies that (\ref{bcondxd}) holds for every $\seq[c]=\{g_i(f)\}$, $f\in X$, and therefore, $\mathcal{A}_{9}$ holds. The rest follows from Proposition \ref{atdmove2}. \noindent{$\Box$} Concerning the above theorem, note that if in addition $\sum c_i f_i$ converges in $X$ for all $\seq[c]\in X_d$ and (\ref{bcondxd}) holds for all finite sequences $\seq[c]$, then $\sum c_i \psi_i$ converges in $X$ for all $\seq[c]\in X_d$. Thus, the following assertion follows easily from Theorem \ref{atdmove3} and Proposition \ref{propconv}. \begin{cor} Let the assumptions of Theorem \ref{atdmove3} hold and assume in addition that in $\mathcal{A}_{10}$ condition (\ref{bcondxd}) holds for all finite sequences $\seq[c]$. If $\seq[g]$ is a Banach frame for $X$ w.r.t. $X_d$, then there exists a sequence $\seq[\theta]\subset X^*$, which is a Banach frame for $X$ w.r.t. $X_d$, and such that $(\seq[\theta], \seq[\psi])$ is an atomic decomposition of $X$ with respect to $X_d$. \end{cor} \section{Perturbation conditions} \label{sequiv} In this section we investigate how essential the terms in (\ref{bcond}) and (\ref{bcondxd}) are. We prove that some of the terms can be omitted and simpler perturbation conditions can be used. \subsection{$X_d^*$-Bessel sequences} We begin with the observation that the terms with $\lambda_1$ and $\lambda_2$ in (\ref{bcondxd}) are not essential for perturbation of $X_d^*$-Bessel sequences. \begin{prop} \label{bequiv} Let $X_d$ be a $CB$-space and $\seq[f]$ be an $X_d^*$-Bessel sequence for $X^*$ with bound $B$. Then $\mathcal{A}_{1}$ is equivalent to the following conditions for closeness: \begin{itemize} \item[$\mathcal{A}_{11}${\rm :}] \ $\exists \, \widetilde{\mu} \geq 0$ such that \begin{equation}\label{bs} \left\| \sum c_i (\psi_i - f_i) \right\|_{X} \le \widetilde{\mu} \| \{c_i\}\|_{X_d} \end{equation} for all finite sequences $\seq[c]$ (and hence for all $\seq[c]\in X_d$). \item[$\mathcal{A}_{12}${\rm :}] $\{g(\psi_i - f_i)\} \in X_d^*$, $\forall g\in X^*$, and $\exists \, \widetilde{\mu} \geq 0$ such that \begin{equation}\label{b2b} \| \{g(\psi_i - f_i)\} \|_{X_d^*} \le \widetilde{\mu} \| g\|_{X^*}, \ \forall g\in X^*.
\end{equation} \item[$\mathcal{A}_{13}${\rm :}] $\{g(\psi_i)\} \in X_d^*$, $\forall g\in X^*$, and $\exists \, \mu\geq 0, \lambda_1\geq 0, \lambda_2\in [0,1),$ such that \begin{equation}\label{bcond6cb} \| \{g(\psi_i - f_i)\} \|_{X_d^*} \le \mu \| g\|_{X^*} + \lambda_1 \| \{ g(f_i)\} \|_{X_d^*} + \lambda_2 \| \{g( \psi_i)\} \|_{X_d^*}, \ \forall g\in X^*. \end{equation} \end{itemize} \end{prop} \noindent{\bf Proof: \ } $\mathcal{A}_{11} \Leftrightarrow \mathcal{A}_{12} $: If one of the conditions $\mathcal{A}_{11} $ and $\mathcal{A}_{12}$ holds with $\widetilde{\mu}=0$, then $\seq[f]\equiv\seq[\psi]$ and thus the other condition also holds with $\widetilde{\mu}=0$. For the cases when $\widetilde{\mu}\neq 0$, the equivalence of $\mathcal{A}_{11} $ and $\mathcal{A}_{12}$ follows from Proposition \ref{prop22}. $\mathcal{A}_{11}\Rightarrow \mathcal{A}_{1}$ and $\mathcal{A}_{12} \Rightarrow \mathcal{A}_{13}$: obvious. $\mathcal{A}_{1} \Rightarrow \mathcal{A}_{11}$: Let $\mathcal{A}_{1}$ hold. By Proposition \ref{con2}, $\{\psi_i - f_i\}$ is an $X_d^*$-Bessel sequence for $X^*$ with bound $\Delta$ and now Proposition \ref{prop22} implies that (\ref{bs}) holds with $\widetilde{\mu}=\Delta\geq 0$. $\mathcal{A}_{13} \Rightarrow \mathcal{A}_{12}$: Let $\mathcal{A}_{13}$ hold. For every $g\in X^*$, it follows that $\{g(\psi_i - f_i)\} \in X_d^*$ and $$ \| \{g(\psi_i)\} \|_{X_d^*} \leq (\lambda_1+1 ) \| \{g(f_i)\} \|_{X_d^*} + \mu \|g\|_{X^*} + \lambda_2 \| \{g(\psi_i)\} \|_{X_d^*}. $$ Hence, $$(1-\lambda_2) \| \{g(\psi_i)\} \|_{X_d^*} \leq ((\lambda_1+1 )B + \mu)\|g\|_{X^*}, $$ which implies that $\seq[\psi]$ is an $X_d^*$-Bessel sequence for $X^*$ with bound $\widetilde{B}=\frac{(\lambda_1+1 )B + \mu}{1-\lambda_2}=B+\Delta$. Therefore, $$\| \{g(\psi_i - f_i)\} \|_{X_d^*} \leq (\mu + \lambda_1 B +\lambda_2 \widetilde{B})\|g\|_{X^*}=\Delta \|g\|_{X^*}, $$ i.e. $\mathcal{A}_{12}$ holds. \noindent{$\Box$} Note that if $\seq[f]$ and $\seq[\psi]$ are $X_d^*$-Bessel sequences for $X^*$, then $\{f_i-\psi_i\}$ is also an $X_d^*$-Bessel sequence for $X^*$ and thus, $\mathcal{A}_{12}$ is a natural perturbation assumption for $X_d^*$-Bessel sequences. \subsection{$X_d^*$-frames} As in the $X_d^*$-Bessel case, the terms with $\lambda_1$ and $\lambda_2$ are not essential for $X_d^*$-frames. Note that going from the $X_d^*$-Bessel case to the $X_d^*$-frame case, we add the restriction $\widetilde{\mu}<A$ to $\mathcal{A}_{11}$ and $\mathcal{A}_{12}$. This restriction is essential: if $\mathcal{A}_{11}$ holds with $\widetilde{\mu}=A$, then $\seq[\psi]$ need not be an $X_d^*$-frame for $X^*$. \begin{prop} \label{xdfrequiv} Let $X_d$ be a $CB$-space and $\seq[f]$ be an $X_d^*$-frame for $X^*$ with bounds $A,B$. Then $\mathcal{A}_{3}$ is equivalent to the following conditions for closeness: \begin{itemize} \item[$(a)$\rm{:}] \ $\mathcal{A}_{11}$ holds with $\widetilde{\mu}<A$. \item[$(b)$\rm{:}] \ $\mathcal{A}_{12}$ holds with $\widetilde{\mu}<A$. \item[$(c)$\rm{:}] \ $\exists \, \mu\geq 0, \lambda_1\geq 0, \lambda_2>0, \mu + \lambda_2(A+B) + \lambda_1 B<A$, such that (\ref{bcond6cb}) holds. \end{itemize} \end{prop} \noindent{\bf Proof: \ } $(a)\Leftrightarrow (b)$: Follows in the same way as in Proposition \ref{bequiv}. $\mathcal{A}_{3} \Rightarrow (a)$: Let $\mathcal{A}_{3}$ hold and hence, $\Delta\in [0,A)$. By Proposition \ref{con2}, $\{\psi_i - f_i\}$ is an $X_d^*$-Bessel sequence for $X^*$ with bound $\Delta$ and now Proposition \ref{prop22} implies that (\ref{bs}) holds with $\widetilde{\mu}=\Delta$.
$(a) \Rightarrow \mathcal{A}_{3}$ and $(b) \Rightarrow (c)$: obvious. $(c) \Rightarrow (b)$: Let $(c)$ hold and hence, $\Delta\in [0,A)$. By the proof of $\mathcal{A}_{13}\Rightarrow \mathcal{A}_{12}$, it follows that (\ref{b2b}) holds with $\widetilde{\mu}=\Delta$. \noindent{$\Box$} In a similar way, one can formulate assertions concerning the equivalence of perturbation conditions for $X_d$-Bessel sequences and $X_d$-frames. \subsection{Banach frames and ${\bf X_d}$-Riesz bases} For perturbation of $X_d$-Riesz bases and Banach frames, the $\lambda_1$-term can be omitted in certain cases. For general Banach frames it is clear that $\|S\|\geq 1/B$. The following proposition concerns the case when equality holds. \begin{prop} Let $X_d$ be an $RCB$-space and $(\seq[g], S)$ be a Banach frame for $X$ with respect to $X_d$ with bounds $A,B$. If $\|S\|=1/B$, then $\mathcal{A}_{6}$ is equivalent to the following condition: \begin{itemize} \item[$\widetilde{\mathcal{A}_{6}}$\rm{:}] \ $\exists \widetilde{\mu}\geq 0, \widetilde{\lambda}_2\geq 0, \max(\widetilde{\lambda}_2, \widetilde{\mu}/ B)<1$, such that \begin{equation*}\label{rcond} \left\| \sum d_i (\phi_i - g_i) \right\|_{X} \le \widetilde{\mu} \| \{d_i\}\|_{X_d^*} + \widetilde{\lambda_2} \left\| \sum d_i \phi_i \right\|_{X} \end{equation*} for all finite sequences $\{d_i\}$ (and hence for all $\seq[d]\in X_d^*$). \end{itemize} \end{prop} \noindent{\bf Proof: \ } For the implication ($\mathcal{A}_{6}$ $\Rightarrow$ $\widetilde{\mathcal{A}_6}$), take $\widetilde{\mu}:=\mu+\lambda_1 B$, $\widetilde{\lambda}_2:=\lambda_2$. The other implication is obvious. \noindent{$\Box$} \begin{prop} \label{r1} Let $X_d$ be an $RCB$-space, $\seq[f]$ be an $X_d$-Riesz basis for $X$ with bounds $A,B$. If $A=B$, then $\mathcal{A}_{8}$ is equivalent to the following condition: \begin{itemize} \item[$\widetilde{\mathcal{A}_{8}}$\rm{:}] \ $\exists \, \widetilde{\mu}\geq 0, \widetilde{\lambda}_2\geq 0, \max(\widetilde{\lambda}_2, \widetilde{\mu} /A)<1$, such that \begin{equation*} \left\| \sum c_i (\psi_i - f_i) \right\|_{X} \le \widetilde{\mu} \| \{c_i\}\|_{X_d} + \widetilde{\lambda_2} \left\| \sum c_i \psi_i \right\|_{X} \end{equation*} for all finite sequences $\{c_i\}$ (and hence for all $\seq[c]\in X_d$). \end{itemize} \end{prop} \noindent{\bf Proof: \ } For the implication ($\mathcal{A}_{8}$ $\Rightarrow$ $\widetilde{\mathcal{A}_8}$), take $\widetilde{\mu}:=\mu+\lambda_1 B$, $\widetilde{\lambda}_2:=\lambda_2$. The other implication is obvious. \noindent{$\Box$} \subsection{Atomic decompositions} Concerning perturbation of atomic decompositions, recall that $\mathcal{A}_{9}$ requires validity of (\ref{bcondxd}) only for $\seq[c]=\{g_i(f)\}$. In this case the $\lambda_1$-term is $\lambda_1\|f\|$. Thus, we can replace the $\lambda_1$- and the $\mu$-terms by a single term involving $\|f\|$: \begin{prop} \label{a1} Let $X_d$ be an $RCB$-space and $(\seq[g],\seq[f])$ be an atomic decomposition of $X$ with respect to $X_d$ with upper bound $B$. Then $\mathcal{A}_{9}$ is equivalent to the following condition: \begin{itemize} \item[$\widetilde{\mathcal{A}_{9}}$\rm{:}] \ $\exists \, \widetilde{\lambda}_1\geq 0, \widetilde{\lambda}_2\geq 0, \max(\widetilde{\lambda}_1, \widetilde{\lambda}_2)<1$, such that \begin{equation} \label{adeq} \left\| \sum g_i(f) (\psi_i - f_i) \right\|_{X} \le \widetilde{\lambda}_1 \| f\|_{X} + \widetilde{\lambda_2} \left\| \sum g_i(f) \psi_i \right\|_{X} \end{equation} for every $f\in X$.
\end{itemize} \end{prop} \noindent{\bf Proof: \ } For the implication ($\mathcal{A}_{9}$ $\Rightarrow$ $\widetilde{\mathcal{A}_9}$), take $\widetilde{\lambda}_1=\lambda_1 +\mu B$, $\widetilde{\lambda}_2=\lambda_2$. The other implication is clear. \noindent{$\Box$} \vspace{.1in} \noindent Diana T. Stoeva\\ Department of Mathematics \\ University of Architecture, Civil Engineering and Geodesy \\ Blvd. Christo Smirnenski 1,\\ 1046 Sofia \\ Bulgaria \\ Email: stoeva\_\,[email protected] \end{document}
\begin{document} \begin{abstract} A powerful tool for the investigation of Fano varieties is provided by exceptional collections in their derived categories. Proving the fullness of such a collection is generally a nontrivial problem, usually solved on a case-by-case basis, with the aid of a deep understanding of the underlying geometry. Likewise, when an exceptional collection is not full, it is not straightforward to determine whether its ``residual'' category, i.e., its right orthogonal, is the derived category of a variety. We show how one can use the existence of Bridgeland stability conditions on these residual categories (when they exist) to address these problems. We examine a simple case in detail: the quadric threefold $Q_3$ in $\mathbb{P}^{4}$. We also give an indication of how a variety of other classical results could be justified or re-discovered via this technique, e.g., the identification of the Kuznetsov component of the Fano threefold $Y_4$ with the derived category of a moduli space of spinor bundles. \end{abstract} \maketitle \section{Introduction} Semiorthogonal decompositions, originally introduced by Bondal and Orlov~\cite{bondal-orlov}, are one of the most insightful features triangulated categories can have. A classic example is the semiorthogonal decomposition produced via a full exceptional collection, i.e., via a collection of objects each of which generates a subcategory equivalent to the derived category of a point, interacting with each other with prescribed hom-vanishings and spanning the whole triangulated category. These exceptional objects behave like one-dimensional, simple generating blocks of their ambient category. Their existence is a specific feature of certain types of triangulated categories, notably the derived categories of some Fano varieties (see, e.g., \cite{Kuz08,Kuz18} and the references therein). Generally, when a triangulated category admits an exceptional collection, this collection is not full: it usually admits a semiorthogonal complement, its right orthogonal, sometimes called the residual category or, when the triangulated category is in fact the derived category of a variety, the Kuznetsov component (after \cite{Kuz15}). Kuznetsov components have been increasingly studied, as they somehow represent the non-trivial part of the derived categories they are embedded into, and have been seen and conjectured to encode subtle geometric information on their ambient varieties, notably (and conjecturally) their rationality properties \cite{Kuz08b}. \par In the same flavor, Bridgeland stability conditions on the derived category of an algebraic variety $X$ are often used to investigate the geometric properties of moduli spaces of sheaves and complexes of sheaves on $X$, and their very existence is an intense object of study on threefolds and higher dimensional varieties \cite{Bri01,Bri03,BM12,BM13}. Recently, Bayer-Lahoz-Macrì-Stellari \cite{BLMS} have shown how to induce a stability condition on the Kuznetsov component of a projective variety from a weak stability condition on its hosting derived category, provided that certain conditions are satisfied. These conditions are notably satisfied by Fano threefolds of Picard rank 1. \par We illustrate how the existence of stability conditions on a Kuznetsov component automatically allows one to prove certain results that, when already known, usually require case-by-case geometric techniques to be dealt with.
Namely, showing that a given exceptional collection is full or, more generally, that a certain triangulated subcategory exhausts the right orthogonal of an exceptional collection is usually a nontrivial problem, due to the a priori possible presence of subcategories which are invisible to numerical detection: the so-called phantom subcategories \cite{gorchinskiy-orlov}. A stability condition, including a ``positivity condition'' for nonzero objects, notably forbids the presence of phantom subcategories (or at least, as we will show, of phantomic summands, which is enough for our purposes), thus remarkably simplifying proofs of fullness. \par In particular, we will focus our attention on a simple example, while at the same time indicating how a few others can at least in principle be described in a similar fashion: the index 3 Fano threefold, i.e., a smooth quadric threefold $Q_3$ in $\mathbb{P}^4$. Quadrics are actually one of those few notable cases (among, e.g., projective spaces and Grassmannians) where exhibiting a full exceptional collection does not require sophisticated techniques such as stability conditions, see \cite{kapranov,sasha-quadrics}. We present $Q_3$ as a simple case study, to illustrate the effectiveness of the use of stability conditions (when they exist) in investigating fullness of exceptional collections. In particular, we show how in this case one rediscovers Kapranov's full exceptional collection $(S,\mathcal{O}_{Q_3},\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2))$, where $S$ is the spinor bundle over $Q_3$ \cite{kapranov}. Among other possible applications, we sketch how one could recover the equivalence between the Kuznetsov component $\mathrm{Ku}(Y_4)$ of ${\mathcal{D}^b}(Y_4)$, where $Y_4$ is the index 2 Fano threefold given by the complete intersection of two smooth generic quadric hypersurfaces in $\mathbb{P}^5$, and the derived category of the moduli space of spinor bundles on $Y_4$ \cite{bondal-orlov}, via the identification of this moduli space with a moduli space of Bridgeland-stable point-objects. Similarly, one sees how the numerical condition $\chi(v,v)=0$ for a numerical class $v$ of a cubic fourfold $W_4$ naturally shows up in exhibiting the equivalence between the Kuznetsov component $\mathrm{Ku}(W_4)$ and the derived category of a (possibly twisted) K3 surface \cite{addington-thomas,huybrechts,BLMNPS}. {\bf Acknowledgements.} The authors wish to thank Arend Bayer and Alexander Kuznetsov for useful comments on a first draft of this article. \tableofcontents \section{Notation and conventions} We assume that the reader is familiar with the fundamental notions from the theory of stability conditions on triangulated categories and of semiorthogonal decompositions. See \cite{bondal-orlov,bayer,MS} for an introduction. In this section we will simply set up the necessary notation. \par We will be working over the field $\mathbb{C}$ of complex numbers and we will denote the imaginary unit by $\sqrt{-1}$. By $(X,H)$ we will denote a primitively polarized, $n$-dimensional smooth projective variety $i\colon X\hookrightarrow \mathbb{P}^N$. For top dimensional cohomology classes of $X$, by a slight abuse of notation, we will simply write the class $\omega$ for the complex number $\int_X \omega$. \par We will denote the abelian category of coherent sheaves on $X$ by $\mathrm{Coh}(X)$, and its derived category by ${\mathcal{D}^b}(X)$.
The Grothendieck and the numerical Grothendieck groups of ${\mathcal{D}^b}(X)$ will be denoted by $K(X)$ and by $K_{\mathrm{num}}(X)$, respectively. More generally, we will write $K(\mathcal{D})$ and $K_{\mathrm{num}}(\mathcal{D})$ to denote, respectively, the Grothendieck and numerical Grothendieck groups of a numerically finite triangulated category $\mathcal{D}$, i.e., of a triangulated category $\mathcal{D}$ endowed with a Serre functor and such that for any two objects $E,F\in \mathcal{D}$ one has $\dim \mathrm{Hom}_{\mathcal{D}}(E,F[n])<+\infty$ for every $n\in \mathbb{Z}$ and $\dim \mathrm{Hom}_{\mathcal{D}}(E,F[n])=0$ for every $|n|>\!>0$. \par On ${\mathcal{D}^b}(X)$ we will consider the weak numerical stability condition $\sigma_H=(\mathrm{Coh}(X),\allowbreak Z_H)$, where $Z_H\colon K_{\mathrm{num}}(X)\to \mathbb{C}$ is defined by \[ Z_H(E)=-H^{n-1}\mathrm{ch_1}(E)+\sqrt{-1}\,H^n\mathrm{ch}_0(E). \] For a nonzero object $E$ in $\mathrm{Coh}(X)$, the slope of $E$ with respect to the weak stability function $Z_H$ is \[ \mu_H(E)=\frac{H^{n-1}\mathrm{ch_1}(E)}{H^n\mathrm{ch}_0(E)}, \] where we set $\mu_H(E)=+\infty$ if $\mathrm{ch}_0(E)=0$. \par For any $\beta\in \mathbb{R}$, we denote by $\mathrm{Coh}_H^\beta(X)$ the heart of the $t$-structure on ${\mathcal{D}^b}(X)$ obtained by tilting the standard heart $\mathrm{Coh}(X)$ with respect to the torsion pair $(\mathrm{Coh}(X)_{\mu_H\leq \beta},\mathrm{Coh}(X)_{\mu_H> \beta})$, where \[ \mathrm{Coh}(X)_{\mu_H\leq \beta}=\langle E\in \mathrm{Coh}(X)\colon\text{$E$ is $\sigma_H$-semistable with $\mu_H(E)\leq \beta$}\rangle \] \[ \mathrm{Coh}(X)_{\mu_H> \beta}=\langle E\in \mathrm{Coh}(X)\colon\text{$E$ is $\sigma_H$-semistable with $\mu_H(E)> \beta$}\rangle. \] For any $\alpha>0$, by $\sigma_{\alpha,\beta}=(\mathrm{Coh}_H^\beta(X),Z_{\alpha,\beta})$ we denote the weak stability condition on ${\mathcal{D}^b}(X)$ with heart $\mathrm{Coh}_H^{\beta}(X)$ and weak stability function \begin{align}\label{eq:weak-stability-function} Z_{\alpha, \beta}(E) &= -\int _X H^{n-2}\mathrm{ch}^{\beta+\sqrt{-1}\,\alpha}(E) \\ \notag &=\left(\frac{\alpha^2}{2}\mathrm{ch}_0^\beta(E)H^n-\mathrm{ch}_2^\beta(E)H^{n-2}\right)+\sqrt{-1}\,\left(\alpha\mathrm{ch}_1^\beta(E) H^{n-1}\right) \\ \notag &= \left(\frac{\alpha^2 - \beta^2}{2}\mathrm{ch}_0(E)H^n+ \beta \mathrm{ch}_1(E) H^{n-1} -\mathrm{ch}_2(E) H^{n-2}\right)\\ \notag &\qquad\qquad\qquad+ \sqrt{-1}\,\left(-\alpha\beta \mathrm{ch}_0(E)H^n+\alpha \mathrm{ch}_1(E)H^{n-1}\right) , \end{align} where, as customary, we write $\mathrm{ch}^{\gamma}(E)=e^{-\gamma H}\mathrm{ch}(E)$. The associated slope, for a nonzero object $E$ in $\mathrm{Coh}_H^\beta(X)$, is \[ \mu _{\alpha, \beta}(E) = \frac{\mathrm{ch}_2(E)H^{n-2}- \beta \mathrm{ch}_1(E) H^{n-1}+ \frac{\beta^2 - \alpha^2}{2}\mathrm{ch}_0(E)H^n}{\alpha \mathrm{ch}_1(E)H^{n-1}-\alpha\beta \mathrm{ch}_0(E)H^n} . \] Finally, for any $\mu\in \mathbb{R}$, we denote by $\mathrm{Coh}_{\alpha,\beta}^\mu(X)$ the heart of the $t$-structure on ${\mathcal{D}^b}(X)$ obtained by tilting the heart $\mathrm{Coh}_H^\beta(X)$ with respect to the torsion pair $(\mathrm{Coh}^\beta(X)_{\mu_{\alpha,\beta}\leq \mu},\mathrm{Coh}^\beta(X)_{\mu_{\alpha,\beta}> \mu})$.
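For the reader who wishes to experiment with these formulas, the following small \texttt{sympy} sketch (ours, and purely illustrative) evaluates $Z_{\alpha,\beta}$ and $\mu_{\alpha,\beta}$ in the threefold case $n=3$, recording a numerical class by the intersection numbers $(\mathrm{ch}_0H^3,\mathrm{ch}_1H^2,\mathrm{ch}_2H)$. The function names and the sample classes in the comments, namely line bundles on the quadric threefold $Q_3$ studied below (where $H^3=2$) and the class $2-H+\tfrac{1}{12}H^3$, are our own choices and not notation taken from the references.
\begin{verbatim}
# Minimal illustrative sketch (not part of the original sources): the weak
# stability function Z_{alpha,beta} and slope mu_{alpha,beta} on a polarized
# threefold, with a class recorded as (ch0*H^3, ch1*H^2, ch2*H).
import sympy as sp

alpha, n = sp.symbols('alpha n', real=True)

def Z(ch0H3, ch1H2, ch2H, a, b):
    # twisted Chern character ch^b = e^{-bH} ch, paired with powers of H
    ch1b = ch1H2 - b * ch0H3
    ch2b = ch2H - b * ch1H2 + b**2 / 2 * ch0H3
    return sp.Rational(1, 2) * a**2 * ch0H3 - ch2b + sp.I * a * ch1b

def slope(ch0H3, ch1H2, ch2H, a, b):
    re, im = sp.expand(Z(ch0H3, ch1H2, ch2H, a, b)).as_real_imag()
    return sp.oo if im == 0 else sp.simplify(-re / im)

# Assumed sample data on Q_3 (H^3 = 2): O(n) gives (2, 2n, n^2), while the
# class 2 - H + H^3/12 gives (4, -2, 0).
print(sp.expand(Z(2, 2*n, n**2, alpha, -sp.Rational(1, 2))))
#   equals alpha**2 - n**2 - n - 1/4 + I*alpha*(2*n + 1)
print(slope(2, 2*n, n**2, alpha, -sp.Rational(1, 2)))
#   equals (n**2 + n + 1/4 - alpha**2)/(alpha*(2*n + 1))
print(sp.expand(Z(4, -2, 0, alpha, -sp.Rational(1, 2))))
#   equals 2*alpha**2 + 1/2, purely real, so the slope of a shift is infinite
\end{verbatim}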
\begin{remark}\label{remark:four-cases} For an object $E\in {\mathcal{D}^b}(X)$ which is at the same time $\sigma_H$-semistable and $\sigma_{\alpha,\beta}$-semistable, the property of belonging to the doubly tilted heart $\mathrm{Coh}_{\alpha,\beta}^\mu(X)$ reduces to a pair of inequalities involving the slopes $\mu_{H}$ and $\mu_{\alpha,\beta}$, and the property of belonging to the standard heart $\mathrm{Coh}(X)$, of a suitable shift of $E$. Namely, for a $\sigma_H$- and $\sigma_{\alpha,\beta}$-semistable object $E$ in ${\mathcal{D}^b}(X)$ we have that $E\in \mathrm{Coh}_{\alpha,\beta}^\mu(X)$ precisely when one of the following four cases occurs: \[ \begin{cases} E\in \mathrm{Coh}(X), & \mu_H(E)>\beta, \quad \mu_{\alpha,\beta}(E)>\mu\\ \\ E[-1]\in \mathrm{Coh}(X),& \mu_H(E[-1])\leq \beta,\quad \mu_{\alpha,\beta}(E)>\mu\\ \\ E[-1]\in \mathrm{Coh}(X),& \mu_H(E[-1])> \beta,\quad \mu_{\alpha,\beta}(E[-1])\leq \mu\\ \\ E[-2]\in \mathrm{Coh}(X),& \mu_H(E[-2])\leq \beta,\quad \mu_{\alpha,\beta}(E[-1])\leq \mu\,. \end{cases} \] \end{remark} \begin{remark}\label{rem:in-codimension-3} If the integral cohomology of $X$ is generated by $H$ in degree $\leq 4$, one sees that a nonzero object $E$ in $\mathrm{Coh}_{\alpha,\beta}^\mu(X)$ with $Z_{\alpha,\beta}(E)=0$ is a coherent sheaf on $X$ supported in codimension at least 3. This is easy and well known, but as we were not able to locate a completely explicit proof in the literature we provide it here for the reader's convenience. If $E\in \mathrm{Coh}_{\alpha,\beta}^\mu(X)$, then $E$ fits into a distinguished triangle \[ E_{\leq \mu}[1]\to E\to E_{>\mu}\xrightarrow{+1} E_{\leq \mu}[2] \] in $\mathcal{D}^b(X)$, with $E_{> \mu}\in \mathrm{Coh}^\beta_H(X)_{\mu_{\alpha,\beta}>\mu}$ and $E_{\leq\mu}\in \mathrm{Coh}^\beta_H(X)_{\mu_{\alpha,\beta}\leq\mu}$. If $Z_{\alpha,\beta}(E)=0$, this gives $Z_{\alpha,\beta}(E_{>\mu})=Z_{\alpha,\beta}(E_{\leq \mu})=0$, as $0$ is the only possible common value for $Z_{\alpha,\beta}(E_{>\mu})$ and $Z_{\alpha,\beta}(E_{\leq \mu})$. But $Z_{\alpha,\beta}(E_{\leq \mu})$ can not be $0$ unless $E_{\leq \mu}=0$, so $E$ is an object in $\mathrm{Coh}^\beta_H(X)$ with $Z_{\alpha,\beta}(E)=0$. Now consider the distinguished triangle \[ F_{\leq \beta}[1]\to E\to F_{>\beta}\xrightarrow{+1} F_{\leq \beta}[2] \] in $\mathcal{D}^b(X)$, with $F_{> \beta}\in \mathrm{Coh}(X)_{\mu_{H}>\beta}$ and $F_{\leq\beta}\in \mathrm{Coh}(X)_{\mu_{H}\leq\beta}$. As $F_{> \beta}$ and $F_{\leq \beta}[1]$ lie in $\mathrm{Coh}^\beta_H(X)$, we have \[\mathrm{Im}(Z_{\alpha,\beta}(F_{>\beta}))\geq 0,\quad \mathrm{Im}(Z_{\alpha,\beta}(F_{\leq \beta}[1]))\geq 0. \] Additivity of $\mathrm{Im}(Z_{\alpha,\beta})$ then gives $\mathrm{Im}(Z_{\alpha,\beta}(F_{>\beta}))=\mathrm{Im}(Z_{\alpha,\beta}(F_{\leq \beta}[1]))=0$, and so \[\mathrm{Re}(Z_{\alpha,\beta}(F_{>\beta}))\leq 0,\quad \mathrm{Re}(Z_{\alpha,\beta}(F_{\leq \beta}[1]))\leq 0. \] Additivity again gives $\mathrm{Re}(Z_{\alpha,\beta}(F_{>\beta}))=\mathrm{Re}(Z_{\alpha,\beta}(F_{\leq \beta}[1]))=0$, and so $\mathrm{Re}(Z_{\alpha,\beta}(F_{\leq \beta}))=0$. Assume $F_{\leq \beta}$ is nonzero. Since $F_{\leq \beta}$ has finite slope, it must have positive rank. Hence, if $F_{\leq \beta}$ had a slope-semistable factor with slope strictly less than $\beta$, we would get $\mu_H(F_{\leq \beta})<\beta$ and so $\mathrm{Im}(Z_{\alpha,\beta}(F_{\leq \beta}))=\alpha\,\mathrm{ch}^\beta_1(F_{\leq \beta})H^{n-1}<0$, a contradiction.
Hence, $F_{\leq \beta}$ is actually a torsion-free slope semistable sheaf with slope $\mu_H(F_{\leq\beta})=\beta$. Using this we compute \begin{align*} \mathrm{Re}(Z_{\alpha,\beta}(F_{\leq \beta}))&=\frac{\alpha^2(\mathrm{ch}_0(F_{\leq \beta})H^n)^2+\Delta_H(F_{\leq \beta})}{2(\mathrm{ch}_0(F_{\leq \beta})H^n)} \\ &\geq \frac{\alpha^2}{2}\mathrm{ch}_0(F_{\leq \beta})H^n>0, \end{align*} a contradiction. Here, for an object $F$ in $\mathcal{D}^b(X)$ we have written $\Delta_H(F)$ for the $H$-discriminant \[ \Delta_H(F)=(\mathrm{ch}_1(F)H^{n-1})^2-2(\mathrm{ch}_0(F)H^n)(\mathrm{ch}_2(F)H^{n-2}) \] and we have used the Bogomolov-Gieseker-type inequality from \cite[Theorem 3.5]{BMS}: for $F$ a torsion-free slope semistable sheaf one has $\Delta_H(F)\geq 0$. The above contradiction shows that $E=F_{>\beta}$ and so $E$ is a coherent sheaf with $Z_{\alpha,\beta}(E)=0$. Looking at the real part of $Z_{\alpha,\beta}(E)$ we find \[ \frac{\alpha^2 - \beta^2}{2}\mathrm{ch}_0(E)H^n+ \beta \mathrm{ch}_1(E) H^{n-1} -\mathrm{ch}_2(E) H^{n-2}=0 \] for any $\alpha>0$ and any $\beta\in \mathbb{R}$. This implies $\mathrm{ch}_{\leq 2}(E)=0$ and so, as $E$ is a coherent sheaf, that $E$ is supported in codimension at least 3. \end{remark} \section{The Kuznetsov component of the quadric threefold and its Serre functor} Let $i\colon Q_3\hookrightarrow \mathbb{P}^4$ be a smooth quadric. The derived category of $\mathbb{P}^4$ has a semiorthogonal decomposition induced by a full exceptional collection \[ {\mathcal{D}^b}(\mathbb{P}^4)=\langle \mathcal{O}_{\mathbb{P}^4}, \mathcal{O}_{\mathbb{P}^4}(1),\mathcal{O}_{\mathbb{P}^4}(2),\mathcal{O}_{\mathbb{P}^4}(3),\mathcal{O}_{\mathbb{P}^4}(4)\rangle, \] see \cite{beilinson}. The residual category $\mathrm{Ku}(Q_3)$ is defined as the right orthogonal to the exceptional collection $(\mathcal{O}_{Q_3},\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2))$ in ${\mathcal{D}^b}(Q_3)$. See \cite{Kuz15} for details, where this residual category would be denoted as $\mathcal{A}_{Q_3}$. Here we use the notation $\mathrm{Ku}(Q_3)$ as these residual categories are most commonly known as \emph{Kuznetsov components}. \begin{remark}\label{rem:admissibility} If one has a semi-orthogonal decomposition $\mathcal{D}=\langle \mathcal{D}_1, \mathcal{D}_2\rangle$, then the subcategory $\mathcal{D}_1$ is \emph{left admissible}: the left adjoint $\iota^L_1$ to the inclusion functor $\iota _1: \mathcal{D}_1 \hookrightarrow \mathcal{D}$ is simply given by the projection functor $\tau_1\colon \mathcal{D}\to \mathcal{D}_1$ associated with the semiorthogonal decomposition. Remarkably, the converse is true: if $\mathcal{C}\hookrightarrow \mathcal{D}$ is a left admissible subcategory, then $\mathcal{C}$ is the left part of a semiorthogonal decomposition $\mathcal{D}=\langle \mathcal{C},{}^\perp\mathcal{C}\rangle$, where \[ {}^\perp\mathcal{C}=\{X\in \mathcal{D}\colon \mathrm{Hom}_{\mathcal{D}}(X,Y)=0, \quad \forall Y\in \mathcal{C}\} \] is the \emph{left-orthogonal} to $\mathcal{C}$; see \cite{bondal}. Dually, one has that a triangulated subcategory $\mathcal{C}\hookrightarrow \mathcal{D}$ is right admissible if and only if it is the right part of a semiorthogonal decomposition $\mathcal{D}=\langle \mathcal{C}^\perp,\mathcal{C}\rangle$. A subcategory $\mathcal{C}\hookrightarrow{\mathcal{D}}$ which is at the same time left and right admissible is simply called \emph{admissible}.
In this case one has \emph{two} semiorthogonal decompositions $\mathcal{D}=\langle \mathcal{C},{}^\perp\mathcal{C}\rangle=\langle \mathcal{C}^\perp,\mathcal{C}\rangle$. Notice that one generally has ${}^\perp\mathcal{C}\neq {\mathcal{C}}^\perp$. \end{remark} \begin{remark}\label{rem:admissibility-2} If the category $\mathcal{D}$ admits a Serre functor $\mathbb{S}$ and $\mathcal{D}_1\subseteq \mathcal{D}$ is left admissible, then also $\mathcal{D}_1$ has a Serre functor $\mathbb{S}_1$. More precisely, the inverse Serre functor on $\mathcal{D}_1$ is given by \[ \mathbb{S}_{1}^{-1}=\tau_1\circ \mathbb{S}^{-1}. \] The existence of Serre functors both on $\mathcal{D}$ and $\mathcal{D}_1$ immediately implies that $\mathcal{D}_1$ is also right admissible, and so admissible: the right adjoint $\iota_1^R$ to the inclusion functor $\iota_1$ is given by the composition \[ \iota_1^R = \mathbb{S}_1\circ \tau _1 \circ \mathbb{S}^{-1}. \] Dual considerations apply to the subcategory $\mathcal{D}_2$. \end{remark} \begin{remark}\label{rem:admissible-3} Let $\mathcal{A}\subseteq \mathcal{B}\subseteq\mathcal{C}$ be an inclusion of triangulated subcategories. It is easy to see that if both $\mathcal{A}$ and $\mathcal{B}$ are admissible subcategories of $\mathcal{C}$, then $\mathcal{A}$ is an admissible subcategory of $\mathcal{B}$. \end{remark} \begin{remark}\label{rem:numerical-semiorthogonality} Semiorthogonal decompositions have an immediate numerical counterpart. Assume $\mathcal{D}$ is a numerically finite triangulated category endowed with a Serre functor. Then a semiorthogonal decomposition $\mathcal{D}=\langle \mathcal{D}_1,\mathcal{D}_2\rangle$ induces a semiorthogonal decomposition \[ K_{\mathrm{num}}(\mathcal{D})=K_{\mathrm{num}}(\mathcal{D}_1)\oplus K_{\mathrm{num}}(\mathcal{D}_2). \] By this one means that one has a direct sum decomposition $K_{\mathrm{num}}(\mathcal{D})=K_{\mathrm{num}}(\mathcal{D}_1)\oplus K_{\mathrm{num}}(\mathcal{D}_2)$ of free $\mathbb{Z}$-modules, such that $K_{\mathrm{num}}(\mathcal{D}_1)$ is the right orthogonal to $K_{\mathrm{num}}(\mathcal{D}_2)$ with respect to the Euler pairing $\chi_{\mathcal{D}}$. Moreover $(K_{\mathrm{num}}(\mathcal{D}_i),\chi_{\mathcal{D}_i})\hookrightarrow (K_{\mathrm{num}}(\mathcal{D}),\chi_{\mathcal{D}})$ are injective morphisms of free $\mathbb{Z}$-modules endowed with nondegenerate bilinear pairings. \end{remark} \begin{remark} The semiorthogonal decomposition \[{\mathcal{D}^b}(Q_3)=\langle \mathrm{Ku}(Q_3),\mathcal{O}_{Q_3},\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2)\rangle\] is derived from the rectangular Lefschetz decomposition \[{\mathcal{D}^b}(\mathbb{P}^4)=\langle \mathcal{O}_{\mathbb{P}^4}, \mathcal{O}_{\mathbb{P}^4}(1),\mathcal{O}_{\mathbb{P}^4}(2),\mathcal{O}_{\mathbb{P}^4}(3),\mathcal{O}_{\mathbb{P}^4}(4)\rangle\] together with the spherical functor $i_*\colon {\mathcal{D}^b}(Q_3)\to {\mathcal{D}^b}(\mathbb{P}^4)$ induced by the divisorial embedding $i\colon Q_3\hookrightarrow \mathbb{P}^4$. Again, see \cite{Kuz15} for details. For later use, we explicitly note that the rectangular Lefschetz decomposition we are considering on ${\mathcal{D}^b}(\mathbb{P}^4)$ has length $m=5$ and that in going from the semiorthogonal decomposition of ${\mathcal{D}^b}(\mathbb{P}^4)$ to that of ${\mathcal{D}^b}(Q_3)$ we `lose' $d=2$ terms from the exceptional collection.
\end{remark} Following \cite{Kuz15} (see also the exposition in \cite{MS18}) one can easily determine a more explicit expression for the Serre functor $\mathbb{S}_{{\mathrm{Ku}}(Q_3)}$ than the somewhat implicit one given in Remark \ref{rem:admissibility-2}. We will use it in Section \ref{section:point-objects} to show that the Serre functor $\mathbb{S}_{\mathrm{Ku}(Q_3)}$ acts as the identity functor on the spinor bundle of $Q_3$. \begin{notation} We write \[ \mathrm{O}_{\mathrm{Ku}(Q_3)}\colon \mathrm{Ku}(Q_3)\to \mathrm{Ku}(Q_3) \] for the \emph{rotation} (or \emph{degree shift}) endofunctor of $\mathrm{Ku}(Q_3)$ given by \[ \mathrm{O}_{\mathrm{Ku}(Q_3)}\colon E\mapsto \tau_{\mathrm{Ku}(Q_3)}(E(1)). \] \end{notation} In our situation, $m=5$ and $d=2$, and so $m-d=3$; therefore \cite[Lemma 2.40]{MS18} gives that for any $0\leq n\leq 3$ one has $\mathrm{O}_{\mathrm{Ku}(Q_3)}^n(E)=\tau_{\mathrm{Ku}(Q_3)}(E(n))$. As an immediate consequence we get the following particular case of \cite[Example 2.2]{kuznetsov-smirnov}. \begin{lemma}\label{lemma:inverse-serre} The Serre functor on the residual category $\mathrm{Ku}(Q_3)$ is given by \[ \mathbb{S}_{\mathrm{Ku}(Q_3)}=\mathrm{O}^{-3}_{\mathrm{Ku}(Q_3)}[3]. \] \end{lemma} \begin{proof} The inverse Serre functor on ${\mathcal{D}^b}(Q_3)$ is $\mathbb{S}_{Q_3}^{-1}(E)=E(3)[-3]$ and so, by Remark \ref{rem:admissibility-2}, $\mathbb{S}_{\mathrm{Ku}(Q_3)}^{-1}(E)=\tau_{\mathrm{Ku}(Q_3)}(E(3))[-3]=\mathrm{O}_{\mathrm{Ku}(Q_3)}^3(E)[-3]$. \end{proof} \section{Point objects and numerical point objects} We now describe the numerical Grothendieck group of the Kuznetsov component $\mathrm{Ku}(Q_3)$ and determine an even codimensional numerical point object, i.e., a primitive eigenvector for the numerical action of the Serre functor on it. We will see in Section \ref{section:point-objects} that this numerical point object comes from an actual point object. If $\mathcal{A}$ is a numerically finite triangulated category with a Serre functor $\mathbb{S}$, then $\mathbb{S}$ induces an isometry of $\mathbb{Z}$-modules endowed with bilinear pairings \[ \mathbb{S}_{\mathrm{num}}\colon (K_{\mathrm{num}}(\mathcal{A}),\chi_{\mathcal{A}})\to (K_{\mathrm{num}}(\mathcal{A}),\chi_{\mathcal{A}}). \] \begin{definition} Let $\mathcal{A}$ be a numerically finite triangulated category with a Serre functor $\mathbb{S}$. Let $[d]\in \mathbb{Z}/2\mathbb{Z}$. A numerical class $[E]$ in $K_{\mathrm{num}}(\mathcal{A})$ is called a codimension $[d]$ \emph{numerical point object} if \[ \mathbb{S}_{\mathrm{num}}([E])=(-1)^{[d]}[E]. \] \end{definition} The definition of numerical point object is motivated by Bondal-Orlov's definition of point object in a triangulated category with a Serre functor, which we recall below. \begin{definition}[\cite{bondal-orlov}, Definition 4.1]\label{def:BO-point-objects} Let $\mathcal{A}$ be a triangulated category with a Serre functor $\mathbb{S}$, and let $d\in \mathbb{Z}$. An object $E$ in $\mathcal{A}$ is called a {\it codimension $d$ point object} if \begin{enumerate} \item $\mathrm{Hom}_{\mathcal{A}}(E,E)=\mathbb{C}$; \item $\mathrm{Hom}_{\mathcal{A}}(E,E[n])=0$ for every $n<0$; \item $\mathbb{S}_{\mathcal{A}}(E)\cong E[d]$.
\end{enumerate} \end{definition} Since the shift functor acts as the multiplication by $-1$ on $K_{\mathrm{num}}(\mathcal{A})$, one immediately sees that if $\mathcal{A}$ is numerically finite and $E$ is a codimension $d$ point object in $\mathcal{A}$, then the numerical class $[E]$ is a codimension $[d]$ numerical point object in $K_{\mathrm{num}}(\mathcal{A})$. With these premises, we can now look for numerical point objects in the Kuznetsov component $\mathrm{Ku}(Q_3)$. We will see in the next section that these numerical point objects come from actual point objects. The semiorthogonal decomposition ${\mathcal{D}^b}(Q_3)=\langle \mathrm{Ku}(Q_3),\mathcal{O}_{Q_3},\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2)\rangle$ induces a semiorthogonal decomposition at the level of numerical Grothendieck groups, with respect to the (non-symmetric) Euler pairing. Thus $K_{\mathrm{num}}(\mathrm{Ku}(Q_3))$ is naturally identified with the right orthogonal to the three Chern vectors $\mathrm{ch}(\mathcal{O}_{Q_3})=1$, $\mathrm{ch}(\mathcal{O}_{Q_3}(1))=e^H$ and $\mathrm{ch}(\mathcal{O}_{Q_3}(2))=e^{2H}$ in \[ K_{\mathrm{num}}(Q_3)\xrightarrow[\raisebox{4pt}{$\sim$}]{\mathrm{ch}} \mathbb{Z}\oplus \mathbb{Z}H\oplus \mathbb{Z}\frac{H^2}{2}\oplus \mathbb{Z}\frac{H^3}{12}\cong\mathbb{Z}\oplus \mathbb{Z}\oplus \mathbb{Z}\frac{1}{2}\oplus \mathbb{Z}\frac{1}{12} \] (see \cite{Fritzsche,Kuz08}), with respect to the pairing induced by the Euler form $\chi$ on ${\mathcal{D}^b}(Q_3)$. The Todd class of $Q_3$ is \[ \mathrm{td}_{Q_3}=1+\frac{3}{2}H+\frac{13}{12}H^2+\frac{1}{2}H^3, \] hence we see that the matrix representing the Euler pairing with respect to the $\mathbb{Q}$-basis $\{1,H,H^2,H^3\}$ of $K_{\mathrm{num}}(Q_3)\otimes \mathbb{Q}$ is \[ \left( \begin{matrix} 1/2 & 13/12 & 3/2 & 1\\ -13/12 & -3/2 & -1 & 0\\ 3/2 & 1 & 0 & 0\\ -1 & 0 & 0 & 0 \end{matrix} \right). \] The right orthogonal to $1,e^H,e^{2H}$ is therefore determined by the equation \[ \left( \begin{matrix} 1 & 0 & 0 & 0\\ 1 & 1 & 1/2 & 1/6\\ 1 & 2 & 2 & 4/3 \end{matrix} \right) \left( \begin{matrix} 1/2 & 13/12 & 3/2 & 1\\ -13/12 & -3/2 & -1 & 0\\ 3/2 & 1 & 0 & 0\\ -1 & 0 & 0 & 0 \end{matrix} \right) \left(\begin{matrix} a_0\\ a_1\\ a_2\\ a_3 \end{matrix} \right) =0 \] from which one immediately sees that a solution $(a_0,a_1,a_2,a_3)$ in $K_{\mathrm{num}}(Q_3)$ must be an integer multiple of the primitive lattice vector $(2,-1,0,\frac{1}{12})$. In other words, if $K_{\mathrm{num}}(\mathrm{Ku}(Q_3))$ is generated by the Chern character of an object $S$, one must have $\mathrm{ch}(S)=2-H+\frac{1}{12}H^3$. As we are going to see, this is precisely the Chern character of the spinor bundle on $Q_3$. \begin{lemma} The numerical action of $\mathbb{S}_{\mathrm{Ku}(Q_3)}$ on $K_{\mathrm{num}}(\mathrm{Ku}(Q_3))$ is the identity. In particular, via the isomorphism $K_{\mathrm{num}}(Q_3)\xrightarrow[\raisebox{4pt}{$\sim$}]{\mathrm{ch}} \mathbb{Z}\oplus \mathbb{Z}H\oplus \mathbb{Z}\frac{H^2}{2}\oplus \mathbb{Z}\frac{H^3}{12}$ the lattice vector $2-H+\frac{1}{12}H^3$ is an even codimensional numerical point object. \end{lemma} \begin{proof} Since $K_{\mathrm{num}}(\mathrm{Ku}(Q_3))\cong \mathbb{Z}$, the numerical action of the Serre functor has to be either the identity or minus the identity.
From the identity \[ \chi^{}_{\mathrm{Ku}(Q_3)}(v,\mathbb{S}_{\mathrm{Ku}(Q_3);\mathrm{num}}v)=\chi^{}_{\mathrm{Ku}(Q_3)}(v,v) \] and the nondegeneracy of $\chi_{\mathrm{Ku}(Q_3)}$ we see that $\mathbb{S}_{\mathrm{Ku}(Q_3);\mathrm{num}}=\mathrm{id}_{K_{\mathrm{num}}(\mathrm{Ku}(Q_3))}$: indeed, if the numerical action were $-\mathrm{id}$, the identity above would force $\chi_{\mathrm{Ku}(Q_3)}(v,v)=0$ for every $v$, contradicting nondegeneracy. \end{proof} \begin{remark} In higher rank situations, the existence of numerical point objects in the Kuznetsov component is a nontrivial requirement. For instance, for index 2 Fano threefolds with Picard rank 1 this requirement singles out $Y_4$ as the only case where odd codimensional numerical point objects exist, and the double covering $Y_2$ of $\mathbb{P}^3$ ramified over a quartic as the only case with even codimensional numerical point objects. Not surprisingly, the numerical point object for $Y_4$ is the Chern character $2-H+\frac{1}{12}H^3$ of the spinor bundles. On the other hand, every primitive vector in $K_{\mathrm{num}}(\mathrm{Ku}(Y_2))$ is an even codimensional point object. Indeed, the Serre functor of $\mathrm{Ku}(Y_2)$ is the composition of the shift by 2 with the involution $\tau$ of $Y_2$ (see \cite[Corollary 4.6]{Kuz15}) and the generator of the Picard group $\mathrm{Pic}(Y_2)$ is pulled back from $\mathbb{P}^3$, and so is $\tau$-invariant (see, e.g., \cite{welters}). \end{remark} \section{Spinor bundles as point objects}\label{section:point-objects} On an odd dimensional smooth quadric hypersurface $Q_{2m+1}\hookrightarrow \mathbb{P}^{2m+2}$ one has a distinguished rank $2^m$ vector bundle, induced by the spinor representation of $\mathfrak{so}(2m+3;\mathbb{C})$ associated with the quadratic form defining the quadric. This vector bundle is called the \emph{spinor bundle} and will be denoted as $S$. Similarly, on an even dimensional smooth quadric $Q_{2m}\hookrightarrow \mathbb{P}^{2m+1}$ one has two nonisomorphic rank $2^{m-1}$ spinor bundles $S^+,S^-$, induced by the two half-spin representations of $\mathfrak{so}(2m+2;\mathbb{C})$. See \cite{ottaviani,sasha-perry} for details. In what follows we will make use of various elementary properties of spinor bundles on odd quadrics. We point the reader towards \cite{ottaviani} for complete statements and proofs. \begin{lemma}[\cite{ottaviani}, Remark 2.9] The Chern character of the spinor bundle $S$ on the quadric threefold $Q_3$ is \[ \mathrm{ch}(S)=2-H+\frac{1}{12}H^3. \] In particular, we have $K_{\mathrm{num}}(\mathrm{Ku}(Q_3))=\mathbb{Z}[S]$, where $[S]$ is the numerical class of $S$. \end{lemma} \begin{lemma}\label{lemma:S-in-Ku} The spinor bundle $S$ is an object of the Kuznetsov component $\mathrm{Ku}(Q_3)$. \end{lemma} \begin{proof} We have to show that $\mathrm{Hom}_{{\mathcal{D}^b}(Q_3)}(\mathcal{O}(i),S[n])=0$, for every $i\in\{0,1,2\}$ and every $n\in \mathbb{Z}$. As $\mathcal{O}(i)$ and $S$ are locally free sheaves on $Q_3$, this is equivalent to \[ H^n(Q_3,S(-i))=0 \] for $i\in\{0,1,2\}$ and every $n\in \mathbb{Z}$. This is trivial for $n<0$ and for $n>3$, while for $0\leq n<3$ it is a particular case of \cite[Theorem 2.3]{ottaviani}. For $n=3$ we argue by Serre duality: \begin{align*} H^3(Q_3,S(-i))&=H^0(Q_3,S^*(i-3))^\vee\\ &=H^0(Q_3,S(i-2))^\vee\\ &=0, \end{align*} where we use the isomorphism $S^\ast\cong S(1)$ (see \cite[Theorem 2.8]{ottaviani}) and \cite[Theorem 2.3]{ottaviani} again.
\end{proof} \begin{lemma}\label{lemma:S-as-point-object} The spinor bundle $S$ on $Q_3$ is a 0-codimensional point object in ${\mathrm{Ku}}(Q_3)$ in the sense of Bondal-Orlov \cite{bondal-orlov}. \end{lemma} \begin{proof} We have to show that the three conditions from Definition \ref{def:BO-point-objects} are satisfied. The statements appearing in the first two conditions for the spinor bundle are classical: since $S$ is a locally free sheaf, we have \begin{align*} \mathrm{Hom}_{{\mathrm{Ku}}(Q_3)}(S,S[n])&=\mathrm{Hom}_{{\mathcal{D}^b}(Q_3)}(S,S[n])\\ &=\mathrm{Ext}^n_{\mathrm{Coh}(Q_3)}(S,S)\\ &=H^n(Q_3,S^\ast\otimes_{\mathcal{O}_{Q_3}}S). \end{align*} This immediately gives $\mathrm{Hom}_{{\mathrm{Ku}}(Q_3)}(S,S[n])=0$ for every $n<0$, while for $n=0$ it is known that $H^0(Q_3,S^\ast\otimes_{\mathcal{O}_{Q_3}}S)=\mathbb{C}$, see, e.g., \cite[Lemma 2.7]{ottaviani}. As far as the third condition is concerned, by rotating the short exact sequence of vector bundles \[ 0 \to S \to \mathcal{O}^4_{Q_3} \to S(1) \to 0 \] \cite[Theorem 2.8]{ottaviani} we obtain the distinguished triangle \[ \mathcal{O}^4_{Q_3} \to S(1) \to S[1] \overset{+1}{\to} \mathcal{O}^4_{Q_3}[1]. \] Here $\mathcal{O}^4_{Q_3}$ is an object in the triangulated subcategory of ${\mathcal{D}^b}(Q_3)$ spanned by the exceptional collection $(\mathcal{O}_{Q_3},\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2))$, while $S[1]$ is an object in the Kuznetsov component ${\mathrm{Ku}}(Q_3)$. Hence the image of $S(1)$ in ${\mathrm{Ku}}(Q_3)$ via the truncation functor $\tau_{\mathrm{Ku}(Q_3)}\colon {\mathcal{D}^b}(Q_3)\to {\mathrm{Ku}}(Q_3)$ is $S[1]$. As the composition $\tau_{\mathrm{Ku}(Q_3)}\circ (\mathcal{O}_{Q_3} (1)\otimes -)$ is the rotation functor ${\mathrm{O}}_{\mathrm{Ku}(Q_3)}$ for the Kuznetsov component, we obtain $\mathrm{O}_{\mathrm{Ku}(Q_3)} (S) = S[1]$. Therefore, Lemma \ref{lemma:inverse-serre} gives \[ \mathbb{S}_{\mathrm{Ku}(Q_3)} (S)= \mathrm{O}_{\mathrm{Ku}(Q_3)}^{-3}(S)[3] = S. \] \end{proof} The following result is classical. We reprove it using the fact that $S$ is a point object in the Kuznetsov component. \begin{corol} The spinor bundle $S$ is an exceptional object in ${\mathcal{D}^b}(Q_3)$ and so $(S,\mathcal{O}_{Q_3},\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2))$ is an exceptional collection. \end{corol} \begin{proof} We need to show that \[ \mathrm{Hom}_{{\mathcal{D}^b}(Q_3)}(S,S[n])\cong\begin{cases} \mathbb{C}\qquad&\text{if $n=0$}\\ 0\qquad&\text{if $n\neq 0$.} \end{cases} \] As $\mathrm{Ku}(Q_3)$ is a full subcategory of ${\mathcal{D}^b}(Q_3)$, by Lemma \ref{lemma:S-as-point-object} we only need to prove that $\mathrm{Hom}_{\mathrm{Ku}(Q_3)}(S,S[n])=0$ for $n>0$. By Lemma \ref{lemma:S-as-point-object} again, we have \begin{align*} \mathrm{Hom}_{\mathrm{Ku}(Q_3)}(S,S[n])&=\mathrm{Hom}_{\mathrm{Ku}(Q_3)}(S,\mathbb{S}_{\mathrm{Ku}(Q_3)}S[n])\\ &=\mathrm{Hom}_{\mathrm{Ku}(Q_3)}(S,S[-n])^\vee\\ &=0, \end{align*} for $n>0$. \end{proof} In order to exhibit a numerical stability condition on ${\mathrm{Ku}}(Q_3)$ we will make use of the following result. \begin{prop}[\cite{BLMS}, Proposition 5.1]\label{prop:induces} Let $(E_1,\dots,E_m)$ be an exceptional collection in a triangulated category $\mathcal{D}$, and let $\mathcal{K}=\langle\{E_i\}\rangle^\perp\subseteq \mathcal{D}$ be the corresponding right orthogonal. Let $\sigma=(\mathcal{A},Z)$ be a weak stability condition on $\mathcal{D}$, and let $\mathcal{A}_{\mathcal{K}}=\mathcal{A}\cap \mathcal{K}$.
Assume $\mathcal{D}$ has a Serre functor $S$ and that the following hold: \begin{enumerate} \item $E_i, S(E_i)[-1]\in \mathcal{A}$ for every $i\in\{1,\dots,m\}$; \item $Z(E_i)\neq 0$ for every $i\in\{1,\dots,m\}$; \item for any nonzero object $F\in \mathcal{A}_{\mathcal{K}}$ one has $Z(F)\neq 0$. \end{enumerate} Then $(Z\bigr\vert_{\mathcal{A}_{\mathcal{K}}},\mathcal{A}_{\mathcal{K}})$ is a stability condition on $\mathcal{K}$. \end{prop} We can now prove the following result. \begin{prop}\label{prop:stability-condition-on-ku} Let $0<\alpha<\frac{1}{2}$. Then the weak stability condition $(Z_{\alpha,-\frac{1}{2}},\allowbreak \mathrm{Coh}^0_{\alpha,-\frac{1}{2}}(Q_3))$ induces a numerical stability condition on $\mathrm{Ku}(Q_3)$, whose heart is $\mathrm{Ku}(Q_3)\cap \mathrm{Coh}^0_{\alpha,-\frac{1}{2}}(Q_3)$. \end{prop} \begin{proof} We check that conditions (1)-(3) in Proposition \ref{prop:induces} are satisfied. As the weak stability condition $(Z_{\alpha,-\frac{1}{2}},\mathrm{Coh}^0_{\alpha,-\frac{1}{2}}(Q_3))$ is numerical, so is the induced stability condition on $\mathrm{Ku}(Q_3)$. The exceptional collection of $\mathcal{D}^b(Q_3)$ defining $\mathrm{Ku}(Q_3)$ as its right orthogonal is $(\mathcal{O}_{Q_3},\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2))$, and the Serre functor of $\mathcal{D}^b(Q_3)$ is $\mathbb{S}_{Q_3}(E)=E(-3)[3]$. So, we need to check that \begin{equation}\label{eq:in-the-tilted-heart} \mathcal{O}_{Q_3}(n), \mathcal{O}_{Q_3}(n-3)[2]\in \mathrm{Coh}^0_{\alpha,-\frac{1}{2}}(Q_3), \qquad \forall n\in \{0,1,2\}. \end{equation} For any $n\in \mathbb{Z}$, the line bundle $\mathcal{O}_{Q_3}(n)$ is slope stable and satisfies $\Delta_H(\mathcal{O}_{Q_3}(n))=0$, where \[ \Delta_H(E)=(\mathrm{ch}_1(E)H^2)^2-2(\mathrm{ch}_0(E)H^3)(\mathrm{ch}_2(E)H). \] So from \cite[Proposition 2.14]{BLMS} we have that $\mathcal{O}(n)[k]$ is both $\sigma_H$- and $\sigma_{\alpha,-\frac{1}{2}}$-semistable for every $n,k\in \mathbb{Z}$. As $\mathcal{O}(n)[k]\in \mathrm{Coh}(Q_3)[k]$, joint $\sigma_H$- and $\sigma_{\alpha,-\frac{1}{2}}$-semistability reduces checking (\ref{eq:in-the-tilted-heart}) to checking the condition on the slopes from Remark \ref{remark:four-cases}. As \[ Z_H(\mathcal{O}(n)[k])=(-1)^k\left(-2n+2\sqrt{-1}\,\right) \] and \[ Z_{\alpha,-\frac{1}{2}}(\mathcal{O}(n)[k])=(-1)^k\left((\alpha^2 -n^2-n-\frac{1}{4})+\sqrt{-1}\,\alpha(2n+1) \right) \] we see that for any $n\in \{0,1,2\}$ and any $0<\alpha<\frac{1}{2}$ we have $\mathcal{O}(n)\in \mathrm{Coh}(Q_3)$ with \[ \mu_{H}(\mathcal{O}(n))=n>-\frac{1}{2},\quad \mu_{\alpha,-\frac{1}{2}}(\mathcal{O}(n))=\frac{n^2 +n+\frac{1}{4}-\alpha^2}{\alpha(2n+1)} >0, \] and $(\mathcal{O}(n-3)[2])[-2]\in\mathrm{Coh}(Q_3)$ with \[ \mu_{H}((\mathcal{O}(n-3)[2])[-2])=n-3\leq -\frac{1}{2}\] and \[ \mu _{\alpha, -\frac{1}{2}}(\mathcal{O}(n-3)[2][-1]) =\frac{n^2-5n+\frac{25}{4}-\alpha^2}{\alpha(2n-5)}\leq 0. \] Next, it is immediate that $Z_{\alpha,-\frac{1}{2}}(\mathcal{O}(n))\neq 0$ and $Z_{\alpha,-\frac{1}{2}}(\mathcal{O}(n-3)[2])\neq 0$ for any $n\in \{0,1,2\}$. Finally, to show that for any nonzero object $F\in \mathrm{Coh}^0_{\alpha,-\frac{1}{2}}(Q_3)\cap \mathrm{Ku}(Q_3)$ we have $Z_{\alpha,-\frac{1}{2}}(F)\neq 0$, recall from Remark \ref{rem:in-codimension-3} that a nonzero object $F\in \mathrm{Coh}^0_{\alpha,-\frac{1}{2}}(Q_3)$ with $Z_{\alpha,-\frac{1}{2}}(F)= 0$ is a coherent sheaf supported in dimension 0. This implies \[ \mathrm{Hom}_{{\mathcal{D}^b}(Q_3)}(\mathcal{O},F)=\mathrm{Hom}_{\mathrm{Coh}(Q_3)}(\mathcal{O},F)\neq 0, \] and so $F$ cannot be an object in $\mathrm{Ku}(Q_3)$. Thus conditions (1)-(3) of Proposition \ref{prop:induces} are all satisfied, and the claim follows.
\end{proof} Next, we show that the spinor bundle $S$ is $\sigma_{\alpha,-\frac{1}{2}}$-semistable for the values of $\alpha$ as in the statement of Proposition \ref{prop:stability-condition-on-ku}. To do this, we will use the explicit wall and chamber structure of the $(\alpha,\beta)$-half-plane, see \cite{maciocia,Barbara}. \begin{lem} {\bf (No-wall lemma)}\label{nowall} The $(\alpha,\beta)$-half-plane contains no walls for the truncated Chern vector $\mathrm{ch}_{\leq 2}(S)=2-H$ of the spinor bundle $S$ on $Q_3$. \end{lem} \begin{proof} Let $v=\mathrm{ch}_0+\mathrm{ch}_1+\mathrm{ch}_2$ be a truncated vector in the Mukai lattice. By \cite[Corollary 2.8]{maciocia}, if $\mathrm{ch}_0H^3>0$ and $F>0$, where \[ F=\frac{(\mathrm{ch}_1H^2)^2-2(\mathrm{ch}_0H^3)(\mathrm{ch}_2H)}{(\mathrm{ch}_0H^3)^2}, \] then every numerical wall for $v$ intersects the vertical line $\beta=\beta_0$, where \[ \beta_0=\frac{\mathrm{ch}_1 H^2}{\mathrm{ch}_0 H^3}-\sqrt{F}. \] If an actual wall exists, this must intersect this vertical line at some point $(\alpha_0,\beta_0)$ and so, by definition of wall, there exists a pair $(E,E_0)$ of $\sigma_{\alpha_0,\beta_0}$-semistable objects in $\mathrm{Coh}^{\beta_0}(X)$ with $\mathrm{ch}_{\leq 2}(E)=v$, and $E_0$ a subobject of $E$ with $\mu_{\alpha_0,\beta_0}(E_0)=\mu_{\alpha_0,\beta_0}(E)$. As \[ \mathrm{ch}_1^{\beta_0}(E)H^2=\mathrm{ch}_1H^2-\beta_0\mathrm{ch}_0H^3 =\sqrt{F} \mathrm{ch}_0H^3>0, \] for any $\alpha_0>0$, this gives $\mathrm{Im}(Z_{\alpha_0,\beta_0}(E))>0$. The destabilizing short exact sequence for $(E,E_0)$ and the additivity of $Z_{\alpha_0,\beta_0}$ then imply \[ 0<\mathrm{ch}_1^{\beta_0}(E_0)H^2<\mathrm{ch}_1^{\beta_0}(E)H^2=\sqrt{F} \mathrm{ch}_0H^3. \] The truncated Chern character for the spinor bundle $S$ is $2-H$. So $\beta_0=-1$ and $\sqrt{F} \mathrm{ch}_0H^3=2$ and for any potentially destabilizing subobject $E_0$ we get \[ 0<\mathrm{ch}_1^{-1}(E_0)H^2<2. \] As $\beta_0=-1$ is an integer, $\mathrm{ch}_1^{-1}(E_0)$ is an element of $H^2(X;\mathbb{Z})=\mathbb{Z}H$, so we have $\mathrm{ch}_1^{-1}(E_0)=n_0H$ for some integer $n_0$. Therefore, we obtain \[ 0<2n_0<2 \] which is clearly impossible. So no actual wall can exist for $\mathrm{ch}_{\leq 2}(S)$. \end{proof} \begin{corol} The spinor bundle $S$ on $Q_3$ is a Bridgeland semistable object in $\mathrm{Ku}(Q_3)$ with respect to the stability function $Z_{\alpha,-\frac{1}{2}}$, for any $\alpha>0$. Moreover, $S[1]$ lies in the heart $\mathrm{Coh}_{\alpha,-\frac{1}{2}}^0(Q_3)\cap \mathrm{Ku}(Q_3)$ of $\mathrm{Ku}(Q_3)$ with respect to the stability function $Z_{\alpha,-\frac{1}{2}}$. \end{corol} \begin{proof} The spinor bundle $S$ is slope stable (see, e.g., \cite{ottaviani}), so it is $\sigma_{\alpha, \beta}$-stable for $\alpha$ sufficiently large and any $\beta$ by \cite[Proposition 2.13]{BLMS}. Since there are no walls in the $(\alpha,\beta)$-half-plane (Lemma \ref{nowall}), we see that $S$ is also $\sigma_{\alpha, \beta}$-stable for any $\alpha,\beta$. As the shift preserves stability, also $S[1]$ is both $\sigma_H$- and $\sigma_{\alpha, \beta}$-semistable. We have: \[ Z_H(S)=2+4\sqrt{-1}\,\qquad\text{and}\qquad Z_{\alpha, -\frac{1}{2}}(S[1]) = -2\alpha^2 - \frac{1}{2} \] which yields $(S[1])[-1]\in \mathrm{Coh}(Q_3)$ with \[ \mu_H((S[1])[-1])=-\frac{1}{2}\leq -\frac{1}{2}; \qquad \mu_{\alpha,-\frac{1}{2}}(S[1])=+\infty \] and so, by Remark \ref{remark:four-cases}, $S[1]\in \mathrm{Coh}^0_{\alpha,-\frac{1}{2}}(Q_3)$.
\end{proof} This suggests the following definition, of which the spinor bundle on $Q_3$ is an example. \begin{definition} Let $\sigma=(\mathcal{A},Z)$ be a numerical stability condition on a numerically finite triangulated category $\mathcal{D}$ with a Serre functor $\mathbb{S}$. A $\sigma$-semistable object $E$ in $\mathcal{D}$ is called a $d$-codimensional \emph{Bridgeland point object} for $\sigma$ if \begin{enumerate} \item $\mathbb{S}(E)\cong E[d]$; \item the numerical class $[E]$ of $E$ is indecomposable in the effective numerical cone $K^+_{\mathrm{num}}(\mathcal{D}_{\phi_E})$ of $\mathcal{D}_{\phi_E}$, i.e., in the image of $\mathcal{D}_{\phi_E}$ in $K_{\mathrm{num}}(\mathcal{D})$, where $\phi_E$ is the phase of $E$ with respect to the stability function $Z$, and $\mathcal{D}_{\phi_E}\subseteq \mathcal{D}$ is the associated slice. \end{enumerate} \end{definition} \begin{lemma}\label{lemma:schur-type} A $d$-codimensional Bridgeland point object is a $d$-codimensional Bondal-Orlov point object. \end{lemma} \begin{proof} Let $E$ be a Bridgeland point object. The condition $\mathbb{S}(E)\cong E[d]$ is true by definition, while the Hom-vanishing $\mathrm{Hom}_{\mathcal{D}}(E,E[n])=0$ for every $n<0$ comes from the defining properties of the slicing $\{\mathcal{D}_\phi\}_{\phi\in \mathbb{R}}$ associated to the stability condition $(\mathcal{A},Z)$. To see that $\mathrm{Hom}_{\mathcal{D}}(E,E)\cong \mathbb{C}$ we use that the slice $\mathcal{D}_{\phi_E}$ is a full abelian subcategory of $\mathcal{D}$ and prove that \[ \mathrm{End}_{\mathcal{D}_{\phi_E}}(E)\cong \mathbb{C} \] by a Schur lemma-type argument. The category $\mathcal{D}_{\phi_E}$ is abelian, hence for every morphism $f\colon E \to E$ we have a short exact sequence \[ 0\to \mathrm{ker}_{\phi_E}(f)\to E \to E/\mathrm{ker}_{\phi_E}(f)\to 0 \] in $\mathcal{D}_{\phi_E}$. From this we get \[ [E]=[\mathrm{ker}_{\phi_E}(f)]+[E/\mathrm{ker}_{\phi_E}(f)] \] in $K^+_{\mathrm{num}}(\mathcal{D}_{\phi_E})$. Since in the presence of a numerical stability condition a semistable object (i.e., an object in a slice $\mathcal{D}_\phi$ for some $\phi$) cannot have zero numerical class unless it is the zero object, the fact that $E$ is a Bridgeland point object, and that the stability condition $\sigma$ is numerical, implies $\mathrm{ker}_{\phi_E}(f)=0$ or $\mathrm{ker}_{\phi_E}(f)=E$. Arguing in the same way for the image of $f$ in $\mathcal{D}_{\phi_E}$ one concludes that $f$ is either zero or an isomorphism, i.e., that $\mathrm{End}_{\mathcal{D}_{\phi_E}}(E)$ is a division ring. As we are working over the algebraically closed field $\mathbb{C}$, this implies $\mathrm{End}_{\mathcal{D}_{\phi_E}}(E)\cong \mathbb{C}$. \end{proof} \begin{remark} Since $Z(K^+_{\mathrm{num}}(\mathcal{D}_{\phi_E}))\subseteq Z(K_{\mathrm{num}}(\mathcal{D}))\cap \mathbb{R}_{> 0}e^{\sqrt{-1}\,\pi\phi_E}$, one sees that if $Z(E)$ is indecomposable in $Z(K_{\mathrm{num}}(\mathcal{D}))\cap \mathbb{R}_{> 0}e^{\sqrt{-1}\,\pi\phi_E}$, then surely $[E]$ is indecomposable in $K^+_{\mathrm{num}}(\mathcal{D}_{\phi_E})$. In concrete situations this often allows us to check the numerical indecomposability condition in the definition of Bridgeland point object via simple considerations on vectors in $\mathbb{C}$.
\end{remark} The fact that $S$ is an exceptional object in ${\mathcal{D}^b}(Q_3)$ can be equivalently stated by saying that the Fourier-Mukai transform \[ \Phi^S_{\mathcal{M}_S\to Q_3}\colon {\mathcal{D}^b}(\mathcal{M}_{S})\to {\mathcal{D}^b}(Q_3), \] where $\mathcal{M}_S$ is the one-point moduli space consisting of the equivalence class of the object $S$ alone, is fully faithful. While this way of formulating the exceptionality of $S$ is surely an overkill, it allows us to see the use we made of the property of $S$ of being a point object in $\mathrm{Ku}(Q_3)$ as a particular instance of the following proposition, which is a reformulation of \cite[Theorem 5.1]{Bri98}. \begin{prop}\label{prop:fully-faithful} Let $X$ be a smooth projective variety, and let $\mathcal{K}$ be an admissible triangulated subcategory of ${\mathcal{D}^b}(X)$ endowed with a numerical stability condition. Assume there exists a $d$-dimensional smooth projective variety which is a fine moduli space $\mathcal{M}_v$ of $d$-codimensional Bridgeland point objects in $\mathcal{A}_\mathcal{K}\subseteq \mathcal{K}$ of numerical class $v$ and let $\mathcal{E}\in {\mathcal{D}^b}(\mathcal{M}_v\times X)$ be the corresponding universal family. If for every two nonisomorphic objects $E_{y_1}$ and $E_{y_2}$ parametrised by $\mathcal{M}_v$ one has \begin{equation}\label{eq:orthogonality} \mathrm{Hom}_{{\mathcal{D}^b}(X)}(E_{y_1},E_{y_2}[n])=0, \qquad\text{for $0<n<d$}, \end{equation} then \[ \Phi_{\mathcal{M}_v\to X}^\mathcal{E}\colon {\mathcal{D}^b}(\mathcal{M}_v)\to {\mathcal{D}^b}(X) \] is fully faithful. \end{prop} \begin{proof} Let $E_y$ be the object of $\mathcal{K}$ corresponding to the point $y$ in $\mathcal{M}_v$. We have, by definition of universal family, that $E_y=\Phi_{\mathcal{M}_v\to X}^\mathcal{E}(\mathcal{O}_y)$. Therefore, since $\mathcal{K}$ is a full subcategory of ${\mathcal{D}^b}(X)$, in order to show that the assumptions of \cite[Theorem 5.1]{Bri98} are satisfied we need to show that \begin{enumerate} \item for any point $y\in \mathcal{M}_v$ one has $\mathrm{Hom}_{\mathcal{K}}(E_y,E_y)=\mathbb{C}$; \item for any point $y\in \mathcal{M}_v$ one has $\mathrm{Hom}_{\mathcal{K}}(E_y,E_y[n])=0$ for any $n<0$ and any $n>d$; \item for any two distinct points $y_1,y_2\in \mathcal{M}_v$ one has $\mathrm{Hom}_{\mathcal{K}}(E_{y_1},E_{y_2}[n])=0$, for any $n\in \mathbb{Z}$. \end{enumerate} Let $\mathcal{K}_\lambda\subseteq \mathcal{A}_{\mathcal{K}}$ be the slice of $\mathcal{K}$ containing the Bridgeland point objects of numerical class $v$ parametrised by $\mathcal{M}_v$. As the slice $\mathcal{K}_\lambda$ is a full abelian subcategory of $\mathcal{K}$, we have \[ \mathrm{Hom}_{\mathcal{K}}(E_{y_1},E_{y_2})=\begin{cases} \mathbb{C}&\text{if $y_1=y_2$}\\ 0&\text{if $y_1\neq y_2$} \end{cases} \] by the same Schur's Lemma argument as in Lemma \ref{lemma:schur-type}. By the properties of slicings, one has $\mathrm{Hom}_{\mathcal{K}}(E_{y_1},E_{y_2}[n])=0$, for any $y_1,y_2$ in $\mathcal{M}_v$ and any $n<0$. One then concludes using Serre duality in $\mathcal{K}$ together with the orthogonality assumption (\ref{eq:orthogonality}).
Examples of this phenomenon are the 0-dimensional moduli space of the spinor bundle on the smooth quadric threefold $Q_3$ (consisting of a single point), and the 1-dimensional moduli space of spinor bundles on the intersection $Y_4$ of two quadrics in $\mathbb{P}^5$ (which is a genus 2 curve). For $d=2$ the orthogonality condition (\ref{eq:orthogonality}) reduces to the numerical condition $\chi(v,v)=0$. This is notably the case for the numerical classes of a cubic fourfold $W_4$ leading to an equivalence between the Kuznetsov component $\mathrm{Ku}(W_4)$ and the derived category of a (possibly twisted) K3 surface \cite{addington-thomas,huybrechts,BLMNPS}.
\end{remark}
\section{Stability conditions loathe phantomic summands}
We will now show that the existence of a stability condition on a numerically finite triangulated category $\mathcal{D}$ with a Serre functor automatically excludes the possibility that $\mathcal{D}$ has a nontrivial orthogonal decomposition $\mathcal{D}=\mathcal{D}_1\oplus\mathcal{D}_2$ such that one of the two summands is numerically trivial. This fact will be used to prove the fullness of the exceptional collection $(S,\mathcal{O}_{Q_3},\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2))$ in Section \ref{sec:fullness}. Most of the proofs in this Section are straightforward and can be skipped; we only include them for the sake of completeness.
Recall that a semiorthogonal decomposition $\mathcal{D}=\langle\mathcal{D}_1,\mathcal{D}_2\rangle$ is called \emph{orthogonal} if one has
\[
\mathrm{Hom}_{\mathcal{D}}(E_1,E_2)=0
\]
for any $E_1\in \mathcal{D}_1$ and $E_2\in \mathcal{D}_2$. When this happens we write
\[
\mathcal{D} = \mathcal{D}_1 \oplus \mathcal{D}_2.
\]
The following is immediate from the definitions.
\begin{lemma}\label{lemma:orthogonal}
Let $\mathcal{A}\hookrightarrow \mathcal{B}$ be an admissible subcategory of the triangulated category $\mathcal{B}$. If $\mathcal{A}^\perp\subseteq {}^\perp\mathcal{A}$, then the semiorthogonal decomposition $\mathcal{B}=\langle \mathcal{A}^\perp,\mathcal{A}\rangle$ is an orthogonal decomposition.
\end{lemma}
An immediate application of \cite[Lemma 3.4]{Bri98} gives the following.
\begin{lemma}
Let $\mathcal{D} = \mathcal{D}_1 \oplus \mathcal{D}_2$ be an orthogonal decomposition of a triangulated category $\mathcal{D}$. Then every object $X$ in $\mathcal{D}$ can be decomposed as a biproduct $X=X_1\oplus X_2$, with $X_i\in \mathcal{D}_i$. Moreover, this decomposition is unique up to isomorphism.
\end{lemma}
From this, we obtain that $t$-structures are well-behaved with respect to orthogonal decompositions.
\begin{lemma}\label{lemma:t-structure-on-decomposition}
Let $\mathcal{D} = \mathcal{D}_1 \oplus \mathcal{D}_2$ be an orthogonal decomposition of a triangulated category $\mathcal{D}$, and let $\langle \mathcal{D}_{\leq 0}, \mathcal{D}_{\geq 0} \rangle$ be a $t$-structure on $\mathcal{D}$. Then
\[
\mathcal{D}_{i;\leq 0} = \mathcal{D}_{\leq 0} \cap \mathcal{D}_i , \qquad \mathcal{D}_{i;\geq 0} = \mathcal{D}_{\geq 0} \cap \mathcal{D}_i
\]
is a $t$-structure on $\mathcal{D}_i$ for $i=1,2$.
\end{lemma}
\begin{proof}
One easily sees that
\[
\mathcal{D}_{\leq 0}=\mathcal{D}_{1;\leq 0}\oplus \mathcal{D}_{2;\leq 0}.
\]
Namely, an object $X\in \mathcal{D}_{\leq 0}\subseteq \mathcal{D}$ can be written as $X=X_1\oplus X_2$ with $X_i\in \mathcal{D}_{i}$.
For any $Y\in \mathcal{D}_{\geq 1}$ one has
\[
0=\mathrm{Hom}_{\mathcal{D}}(Y,X)=\mathrm{Hom}_{\mathcal{D}}(Y,X_1)\oplus \mathrm{Hom}_{\mathcal{D}}(Y,X_2),
\]
and so $X_i\in \mathcal{D}_{\leq 0}$ for $i=1,2$. Similarly, one shows $ \mathcal{D}_{\geq 0}=\mathcal{D}_{1;\geq 0}\oplus \mathcal{D}_{2;\geq 0}$. Defining the truncation functor $\tau_{i;\leq0}\colon \mathcal{D}_i\to \mathcal{D}_{i;\leq 0}$ as the composition
\[
\mathcal{D}_i\hookrightarrow \mathcal{D}\xrightarrow{\tau_{\leq 0}} \mathcal{D}_{\leq 0} \xrightarrow{\pi_i} \mathcal{D}_{i;\leq 0},
\]
where $\pi_i$ is the projection functor, and similarly for $\tau_{i;\geq0}$, one sees that $(\mathcal{D}_{i;\leq 0},\allowbreak \mathcal{D}_{i;\geq 0})$ is a $t$-structure on $\mathcal{D}_i$.
\end{proof}
\begin{remark}
In the setup of the above lemma, if $\mathcal{A}$ denotes the heart of $\langle \mathcal{D}_{\leq 0},\allowbreak \mathcal{D}_{\geq 0} \rangle$, one immediately sees that the heart of the induced $t$-structure $\langle \mathcal{D}_{i;\leq 0},\allowbreak \mathcal{D}_{i;\geq 0} \rangle$ is $\mathcal{A}_i=\mathcal{A}\cap \mathcal{D}_i$. Moreover one has $\mathcal{A}=\mathcal{A}_1\oplus \mathcal{A}_2$.
\end{remark}
\begin{lemma}\label{lemma:bounded}
In the same assumptions as in Lemma \ref{lemma:t-structure-on-decomposition}, if the $t$-structure $\langle \mathcal{D}_{\leq 0}, \allowbreak\mathcal{D}_{\geq 0} \rangle$ on $\mathcal{D}$ is bounded, then the same is true for the induced $t$-structure $\langle \mathcal{D}_{i;\leq 0},\allowbreak \mathcal{D}_{i;\geq 0} \rangle$ on $\mathcal{D}_i$, for $i=1,2$.
\end{lemma}
\begin{proof}
From the definition of the truncation functors $\tau_{i;\leq0}, \tau_{i;\geq 0}$ in the proof of Lemma \ref{lemma:t-structure-on-decomposition}, and since the $t$-structure on $\mathcal{D}$ is bounded, for any $E_i\in \mathcal{D}_i\subseteq \mathcal{D}$, we have
\[
\tau_{i;\geq n}(E_i)=\tau_{\geq n}(E_i)= 0, \qquad \text{for $n>\!>0$}
\]
and
\[
\tau_{i;\leq n}(E_i)=\tau_{\leq n}(E_i)= 0, \qquad \text{for $n<\!<0$},
\]
which shows that the induced $t$-structure on $\mathcal{D}_i$ is bounded.
\end{proof}
\begin{remark}
By the argument in Lemma \ref{lemma:bounded} we see that for any object $E_i$ in $\mathcal{D}_i$, for $i=1,2$, we have
\[
\mathcal{H}^n_{\mathcal{A}_i}(E_i)=\mathcal{H}^n_{\mathcal{A}}(E_i)
\]
for any $n\in \mathbb{Z}$, where $\mathcal{A}$ and $\mathcal{A}_i$ denote the hearts of the $t$-structures under consideration on $\mathcal{D}$ and $\mathcal{D}_i$, respectively, and $\mathcal{H}^n_{\mathcal{A}}\colon \mathcal{D}\to\mathcal{A}$ and $\mathcal{H}^n_{\mathcal{A}_i}\colon \mathcal{D}_i\to\mathcal{A}_i$ are the corresponding cohomology functors.
\end{remark}
\begin{definition}
Let $\mathcal{D}$ be a numerically finite triangulated category with a Serre functor. We say that a triangulated subcategory $\mathcal{D}_{\mathrm{phan}}$ of $\mathcal{D}$ is a \emph{phantomic summand} if
\begin{enumerate}
\item there exists an orthogonal decomposition $\mathcal{D}=\mathcal{D}_{\mathrm{body}}\oplus\mathcal{D}_{\mathrm{phan}}$;
\item for every object $E$ in $\mathcal{D}_{\mathrm{phan}}$ one has $[E]=0$ in $K_{\mathrm{num}}(\mathcal{D})$, i.e., $K_{\mathrm{num}}(\mathcal{D}_{\mathrm{phan}})=0$.
\end{enumerate}
\end{definition}
\begin{lemma}\label{lemma:phantomic}
Let $\mathcal{D}$ be a numerically finite triangulated category with a Serre functor, endowed with a numerical stability condition. Then $\mathcal{D}$ has no nonzero phantomic summands.
\end{lemma}
\begin{proof}
Let $\sigma=(\mathcal{A},Z)$ be a numerical stability condition on $\mathcal{D}$. Assume we have a phantomic summand $\mathcal{D}_{\mathrm{phan}}$, and consider an orthogonal decomposition $\mathcal{D}=\mathcal{D}_{\mathrm{body}}\oplus\mathcal{D}_{\mathrm{phan}}$. By Lemmas \ref{lemma:t-structure-on-decomposition} and \ref{lemma:bounded}, $\mathcal{A}_{\mathrm{phan}}=\mathcal{A}\cap \mathcal{D}_{\mathrm{phan}}$ is the heart of a bounded $t$-structure on $\mathcal{D}_{\mathrm{phan}}$. If $E$ is a nonzero object in $\mathcal{A}_{\mathrm{phan}}$, then it is a nonzero object in $\mathcal{A}$ and so $Z(E)\neq 0$. On the other hand, if $E\in \mathcal{A}_{\mathrm{phan}}$, then $E\in\mathcal{D}_{\mathrm{phan}}$ and so $[E]=0$ in $K_{\mathrm{num}}(\mathcal{D}_{\mathrm{phan}})\subseteq K_{\mathrm{num}}(\mathcal{D})$. As the stability condition $\sigma$ is numerical, $Z\colon K(\mathcal{D})\to \mathbb{C}$ factors through $K_{\mathrm{num}}(\mathcal{D})$ and so $Z(E)=0$, a contradiction. This means that $\mathcal{A}_{\mathrm{phan}}=0$ and so $\mathcal{D}_{\mathrm{phan}}=0$.
\end{proof}
\begin{lemma}\label{lemma:i-is-equivalence}
Let $\mathcal{A}$ be an admissible subcategory of the triangulated category $\mathcal{B}$. If
\begin{enumerate}
\item $\mathcal{A}^\perp\subseteq {}^\perp\mathcal{A}$;
\item $\mathcal{B}$ is numerically finite and endowed with a Serre functor;
\item $\mathcal{B}$ has a numerical stability condition;
\item $K_{\mathrm{num}}(\mathcal{A})= K_{\mathrm{num}}(\mathcal{B})$;
\end{enumerate}
then $\mathcal{A}=\mathcal{B}$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:orthogonal}, we have an orthogonal decomposition $ \mathcal{B}\cong \mathcal{A}^{\perp}\oplus \mathcal{A}$, and so a direct sum decomposition $K_{\mathrm{num}}(\mathcal{B})=K_{\mathrm{num}}(\mathcal{A})\oplus K_{\mathrm{num}}(\mathcal{A}^{\perp})$, by Remark \ref{rem:numerical-semiorthogonality}. Our assumptions give $K_{\mathrm{num}}(\mathcal{A}^{\perp})=0$ and so $\mathcal{A}^\perp$ is phantomic. By Lemma \ref{lemma:phantomic}, $\mathcal{A}^\perp=0$ and so $\mathcal{B}=\mathcal{A}$.
\end{proof}
\section{Fullness of the standard exceptional collection on $Q_3$}\label{sec:fullness}
\begin{prop}\label{prop:collection-is-full}
The exceptional collection $(S,\mathcal{O}_{Q_3},\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2))$ of ${\mathcal{D}^b}(Q_3)$ is full.
\end{prop}
\begin{proof}
We have to show that $\langle S,\mathcal{O}_{Q_3},\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2)\rangle$ is a semiorthogonal decomposition of ${\mathcal{D}^b}(Q_3)$. By definition of ${\mathrm{Ku}}(Q_3)$, we have a semiorthogonal decomposition $\langle {\mathrm{Ku}}(Q_3),\mathcal{O}_{Q_3},\allowbreak\mathcal{O}_{Q_3}(1),\mathcal{O}_{Q_3}(2)\rangle$, and we know from Lemma \ref{lemma:S-in-Ku} that $S\in \mathrm{Ku}(Q_3)$. Therefore all we have to show is that the inclusion $\langle S\rangle \hookrightarrow \mathrm{Ku}(Q_3)$ is an equivalence.
Since $\langle S\rangle \hookrightarrow \mathrm{Ku}(Q_3)$ induces an isomorphism between $K_{\mathrm{num}}(\langle S\rangle)$ and $K_{\mathrm{num}}(\mathrm{Ku}(Q_3))$, and since we have a stability condition on $\mathrm{Ku}(Q_3)$ by Proposition \ref{prop:stability-condition-on-ku}, this is a particular case of Proposition \ref{prop:FM-equivalence} below, which is in turn a version of \cite[Theorem 5.4]{Bri98}.
\end{proof}
\begin{prop}\label{prop:FM-equivalence}
Let $X$ be a smooth projective variety, and let $\mathcal{K}$ be an admissible triangulated subcategory of ${\mathcal{D}^b}(X)$ endowed with a numerical stability condition. Assume there exists a $d$-dimensional smooth projective variety $\mathcal{M}_v$ which is a fine moduli space of $d$-codimensional Bridgeland point objects in $\mathcal{A}_{\mathcal{K}}\subseteq\mathcal{K}$ of numerical class $v$, and let $\mathcal{E}\in {\mathcal{D}^b}(\mathcal{M}_v\times X)$ be the corresponding universal family. If for every two nonisomorphic objects $E_{y_1}$ and $E_{y_2}$ parametrised by $\mathcal{M}_v$ one has
\begin{equation*}
\mathrm{Hom}_{{\mathcal{D}^b}(X)}(E_{y_1},E_{y_2}[n])=0, \qquad\text{for $0<n<d$},
\end{equation*}
then the Fourier-Mukai transform $\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}$ induces an equivalence $\Phi\colon {\mathcal{D}^b}(\mathcal{M}_v)\to \mathcal{K}$ if and only if it induces an isomorphism $\Phi_{\mathrm{num}}\colon K_{\mathrm{num}}(\mathcal{M}_v)\to K_{\mathrm{num}}(\mathcal{K})$.
\end{prop}
\begin{proof}
Since $\mathcal{M}_v$ is a moduli space of objects in $\mathcal{K}\subseteq {\mathcal{D}^b}(X)$, the Fourier-Mukai transform $\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}$ factors as
\[
\xymatrix{ {\mathcal{D}^b}(\mathcal{M}_v)\ar[r]^-{\Phi}\ar@/^1.5pc/[rr]^{\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}}&\mathcal{K}\ar[r]^-{j}&{\mathcal{D}^b}(X), }
\]
where $j\colon \mathcal{K}\to {\mathcal{D}^b}(X)$ is the inclusion. Namely, as $\mathcal{K}$ is admissible, to show that $\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}(F)\in \mathcal{K}$ for every $F\in \mathcal{D}^b(\mathcal{M}_v)$ we only need to show that
\[
\mathrm{Hom}_{\mathcal{D}^b(X)}(G,\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}(F))=0
\]
for any $G\in {}^\perp\mathcal{K}$. The Fourier-Mukai transform $\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}$ has both a left and a right adjoint, see \cite[Lemma 4.5]{Bri98}, so
\[
\mathrm{Hom}_{\mathcal{D}^b(X)}(G,\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}(F))= \mathrm{Hom}_{\mathcal{D}^b(\mathcal{M}_v)}((\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X})^L(G),F).
\]
But $(\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X})^L(G)=0$ for any $G\in {}^\perp\mathcal{K}$. Indeed, for any $y\in \mathcal{M}_v$ we have
\begin{align*}
\mathrm{Hom}_{\mathcal{D}^b(\mathcal{M}_v)}((\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X})^L(G),\mathcal{O}_y)&= \mathrm{Hom}_{\mathcal{D}^b(X)}(G,\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}(\mathcal{O}_y))\\ &=\mathrm{Hom}_{\mathcal{D}^b(X)}(G,E_y)\\ &=0,
\end{align*}
since for every $y\in \mathcal{M}_v$ we have $E_y\in\mathcal{K}$. As $\{\mathcal{O}_y\}_{y\in \mathcal{M}_v}$ is a spanning class for $\mathcal{D}^b(\mathcal{M}_v)$ this implies $(\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X})^L(G)=0$ and so $\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}(F)\in \mathcal{K}$. Clearly, if $\Phi$ is an equivalence, then it induces an isomorphism between $K_{\mathrm{num}}(\mathcal{M}_v)$ and $K_{\mathrm{num}}(\mathcal{K})$.
Vice versa, assume that $\Phi$ induces an isomorphism between $(K_{\mathrm{num}}(\mathcal{M}_v),\chi_{{\mathcal{D}^b}(\mathcal{M}_v)})$ and $(K_{\mathrm{num}}(\mathcal{K}),\chi_{\mathcal{K}})$, and denote by $\mathcal{F}$ the essential image of $\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}$ in $\mathcal{D}^b(X)$. As $\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}$ is fully faithful by Proposition \ref{prop:fully-faithful}, we obtain the diagram
\[
\mathcal{D}^b(\mathcal{M}_v)\xrightarrow[\raisebox{4pt}{$\sim$}]{\Phi} \mathcal{F}\subseteq \mathcal{K}\subseteq \mathcal{D}^b(X)
\]
and we are reduced to showing that $\mathcal{F}=\mathcal{K}$. The subcategory $\mathcal{K}$ is admissible by assumption, and $\mathcal{F}$ is admissible as the Fourier-Mukai transform $\Phi^{\mathcal{E}}_{\mathcal{M}_v\to X}$ has both a left and a right adjoint, see \cite[Lemma 4.5]{Bri98}. Therefore $\mathcal{F}$ is an admissible subcategory of $\mathcal{K}$, by Remark \ref{rem:admissible-3}. Denote by ${}^\perp \mathcal{F}$ and by $\mathcal{F}^\perp$ the left and right orthogonal of $\mathcal{F}$ in $\mathcal{K}$, respectively. As we have a numerical stability condition on $\mathcal{K}$ and $K_{\mathrm{num}}(\mathcal{F})=K_{\mathrm{num}}(\mathcal{K})$, in order to apply Lemma \ref{lemma:i-is-equivalence} we only need to show that $\mathcal{F}^\perp\subseteq {}^\perp \mathcal{F}$. Let therefore $F$ be an object in $\mathcal{F}^\perp$. As $E_y$ is a $d$-codimensional point object in $\mathcal{K}$ for every $y\in \mathcal{M}_v$ we have, for every integer $k$ and for every point $y$ in $\mathcal{M}_v$,
\begin{align*}
\mathrm{Hom}_{\mathcal{K}}(F,E_y[k])=\mathrm{Hom}_{\mathcal{K}}(F,\mathbb{S}_{\mathcal{K}}E_y[k-d]) =\mathrm{Hom}_{\mathcal{K}}(E_y[k-d],F)^\vee=0.
\end{align*}
Therefore,
\[
\mathcal{F}^\perp \subseteq {}^\perp\{E_y[k]\}_{y\in \mathcal{M}_v,k\in\mathbb{Z}}.
\]
Since $\{\mathcal{O}_y\}_{y\in \mathcal{M}_v}$ is a spanning class for ${\mathcal{D}^b}(\mathcal{M}_v)$ and $\Phi\colon {\mathcal{D}^b}(\mathcal{M}_v)\xrightarrow{\sim} \mathcal{F}$ is an equivalence, $\{E_y\}_{y\in \mathcal{M}_v}=\{\Phi(\mathcal{O}_y)\}_{y\in \mathcal{M}_v}$ is a spanning class for $\mathcal{F}$ and so \allowbreak ${}^\perp\{E_y[k]\}_{y\in \mathcal{M}_v,k\in\mathbb{Z}}={}^\perp\mathcal{F}$.
\end{proof}
\begin{remark}
Proposition \ref{prop:FM-equivalence} can be used to show that the Kuznetsov component $\mathrm{Ku}(Y_4)$ is equivalent to the derived category of a genus 2 curve. Namely, all the proofs given in the present article for the spinor bundle $S$ on $Q_3$ more or less verbatim apply to the spinor bundles on $Y_4$, to show that they are 1-codimensional Bridgeland stable objects in $\mathrm{Ku}(Y_4)$ with respect to the stability condition induced by $(\mathrm{Coh}(Y_4)_{\alpha,-\frac{1}{2}}^0,Z_{\alpha,-\frac{1}{2}}^0)$. The only additional check one needs, besides the properties of spinor bundles on quadrics recalled or derived above, is showing that the spinor bundles (and, more generally, spinor sheaves) on $Y_4$ are objects of the Kuznetsov component $\mathrm{Ku}(Y_4)$. Since $\mathrm{Ku}(Y_4)$ is defined as the right orthogonal to the exceptional collection $(\mathcal{O},\mathcal{O}(1))$ in ${\mathcal{D}^b}(Y_4)$, an argument like that of Lemma \ref{lemma:S-in-Ku} proves that this is equivalent to showing that $H^n(Y_4;S(-i))=0$ for $i\in\{0,1\}$ and every $n\in \mathbb{Z}$.
By definition, $S$ is the restriction to $Y_4$ of a spinor bundle over a 4-dimensional quadric $Q_4$ containing $Y_4$ (more generally, of a spinor sheaf when $Q_4$ is singular), and the required cohomological vanishing is then immediate from the short exact sequence
\[
0\to S_{Q_4}(-2) \to S_{Q_4}\to S\to 0
\]
and from the corresponding cohomological vanishing for $S_{Q_4}(-i)$. The genus 2 curve arises as the moduli space of such sheaves. This is classical and due to Bondal-Orlov \cite{bondal-orlov}: for every point $p$ in the pencil of 4-dimensional quadrics containing $Y_4$ corresponding to a smooth quadric $Q_{4,p}$ one has two nonisomorphic spinor bundles $S^+_{Q_{4,p}},S^-_{Q_{4,p}}$ whose restriction to $Y_4$ gives two nonisomorphic spinor bundles on $Y_4$. This realizes the moduli space of spinor bundles on $Y_4$ as a double cover of $\mathbb{P}^1$ ramified over the 6 points corresponding to singular quadrics in the pencil, i.e., as a hyperelliptic genus 2 curve. The results of Section \ref{section:point-objects} identify this moduli space with the moduli space of Bridgeland stable objects in $\mathrm{Coh}^0_{\alpha,-\frac{1}{2}}(Y_4)[-1]\cap \mathrm{Ku}(Y_4)$ with Chern character $2-H+\frac{1}{12}H^3$.
\end{remark}
\begin{remark}
The only missing ingredient in order to generalize Proposition \ref{prop:collection-is-full} to an arbitrary $n$-dimensional quadric hypersurface $Q_n$ is the existence of a numerical stability condition on the residual category of the exceptional collection $(\mathcal{O}_{Q_n},\mathcal{O}_{Q_n}(1),\dots,\mathcal{O}_{Q_n}(n-1))$ in ${\mathcal{D}^b}(Q_n)$. Of course, one knows from \cite{kapranov} that one has a full exceptional collection
\begin{equation}\label{eq:Kapranov-even}
(S^+,S^{-},\mathcal{O}_{Q_n},\mathcal{O}_{Q_n}(1),\dots,\mathcal{O}_{Q_n}(n-1))
\end{equation}
for even $n$ and
\begin{equation}\label{eq:Kapranov-odd}
(S,\mathcal{O}_{Q_n},\mathcal{O}_{Q_n}(1),\dots,\mathcal{O}_{Q_n}(n-1))
\end{equation}
for odd $n$, so that $\mathrm{Ku}(Q_n)$ is equivalent to the derived category of one or of two points depending on the parity of $n$, and so it surely has numerical stability conditions. But if one were able to prove a priori, i.e., without using the knowledge of the full exceptional collections (\ref{eq:Kapranov-even}-\ref{eq:Kapranov-odd}), that there existed a numerical stability condition on $\mathrm{Ku}(Q_n)$, one could use the constructions in this article to rediscover the full exceptional collections (\ref{eq:Kapranov-even}-\ref{eq:Kapranov-odd}).
\end{remark}
\begin{remark}
Fano threefolds of Picard rank 1 are known to have stability conditions (\cite{li:stability-conditions}, preceded by \cite{macri12} for the particular case of $\mathbb{P}^3$ and by \cite{schmidt13} for the case of the quadric threefold $Q_3$ considered in this article). We preferred not to use this stronger result and only use the weak stability condition (\ref{eq:weak-stability-function}) on ${\mathcal{D}^b}(Q_3)$ to induce a stability condition on $\mathrm{Ku}(Q_3)$ following \cite{BLMS}, as only a stability condition on the residual category is relevant to the constructions in this article, and weak stability conditions on the ambient category are much easier to obtain than actual stability conditions. For instance, (\ref{eq:weak-stability-function}) defines a weak stability condition on every smooth projective variety, regardless of its dimension.
It is therefore reasonable to conjecture that, in principle, proving the existence of a numerical stability condition on a residual category of the form $\mathrm{Ku}(X)$ should be possible in many cases, even without the presence of a true stability condition on ${\mathcal{D}^b}(X)$. Hopefully, this should apply for instance to the residual components $\mathrm{Ku}(Q_n)$ of smooth quadrics.
\end{remark}
\begin{remark}
Another possible application of Proposition \ref{prop:FM-equivalence} is the following. Let $W_4$ be a smooth cubic fourfold, and let $v$ be an indecomposable numerical class in $K_{\mathrm{num}}^+(W_4)$ with $\chi_{W_4}(v,v)=0$. Let $\sigma$ be a numerical stability condition on $\mathrm{Ku}(W_4)$. Since $\mathrm{Ku}(W_4)$ is a 2-Calabi-Yau category, i.e., $\mathbb{S}_{\mathrm{Ku}(W_4)}=[2]$, see, e.g., \cite{Kuz15}, we see that any $\sigma$-semistable object in $\mathcal{A}_{\mathrm{Ku}(W_4)}$ is a codimension 2 Bridgeland point object in $\mathrm{Ku}(W_4)$. If a fine moduli space $\mathcal{M}_v$ of $\sigma$-semistable objects in $\mathcal{A}_{\mathrm{Ku}(W_4)}$ exists which is a projective variety, it will be 2-dimensional. Indeed, the numerical condition $\chi_{W_4}(v,v)=0$ and Serre duality for $\mathrm{Ku}(W_4)$ give
\[
\mathrm{hom}(E,E[1])=\mathrm{hom}(E,E)+\mathrm{hom}(E,E[2])=2 \mathrm{hom}(E,E)=2,
\]
for any Bridgeland point object of numerical class $v$, where we write $\mathrm{hom}(F,G)$ for the dimension of the hom-space $\mathrm{Hom}_{\mathrm{Ku}(W_4)}(F,G)$. So, by Remark \ref{rem:addington-thomas} and Proposition \ref{prop:FM-equivalence}, if such an $\mathcal{M}_v$ exists then we have an equivalence
\[
\mathcal{D}^b(\mathcal{M}_v)\cong \mathrm{Ku}(W_4).
\]
In particular, $\mathcal{M}_v$ will be a K3 surface. See \cite{addington-thomas,huybrechts,BLMNPS} for additional information on this result.
\end{remark}
\end{document}
\begin{document}
\preprint{AIP/123-QED}
\title[]{Quantum dots as potential sources of strongly entangled photons for quantum networks}
\author{Christian Schimpf}
\author{Marcus Reindl}
\affiliation{Institute of Semiconductor and Solid State Physics, Johannes Kepler University, Linz 4040, Austria}
\author{Francesco Basso Basset}
\affiliation{Department of Physics, Sapienza University, 00185 Rome, Italy}
\author{Klaus D. Jöns}
\affiliation{Department of Physics, Paderborn University, 33098 Paderborn, Germany}
\author{Rinaldo Trotta}
\affiliation{Department of Physics, Sapienza University, 00185 Rome, Italy}
\author{Armando Rastelli}
\affiliation{Institute of Semiconductor and Solid State Physics, Johannes Kepler University, Linz 4040, Austria}
\date{\today}
\begin{abstract}
The generation and long-haul transmission of highly entangled photon pairs is a cornerstone of emerging photonic quantum technologies, with key applications such as quantum key distribution and distributed quantum computing. However, a natural limit for the maximum transmission distance is inevitably set by attenuation in the medium. A network of quantum repeaters containing multiple sources of entangled photons would allow this limit to be overcome. For this purpose, the requirements on the source's brightness and the photon pairs' degree of entanglement and indistinguishability are stringent. Despite the impressive progress made so far, a definitive scalable photon source fulfilling such requirements is still being sought. Semiconductor quantum dots excel in this context as sub-Poissonian sources of polarization entangled photon pairs. In this work we present the state of the art set by GaAs-based quantum dots and use them as a benchmark to discuss the challenges to overcome towards the realization of practical quantum networks.
\end{abstract}
\maketitle
\section{Introduction}
After decades of fundamental research, quantum entanglement has emerged as a pivotal concept in a variety of fields, such as quantum computation\cite{OBrien2009}, quantum communication\cite{Gisin2007,Kimble2008,Ekert1991} and quantum metrology\cite{Degen2017}. A multitude of quantum systems is being investigated, and photons stand out in many areas due to their robustness against environmental decoherence and their compatibility with existing optical fiber\cite{Xiang2019} and free-space\cite{Yin2020} infrastructure. Non-local correlations were demonstrated in several photonic degrees of freedom, such as time-bin\cite{Simon2005, Jayakumar2014}, time-energy\cite{Tanzilli2005}, orbital angular momentum \cite{Mair2001}, polarization\cite{Aspect1981}, spin-polarization\cite{Bock2018,Vasconcelos2020} or in a combination of them (``hyper-entanglement''\cite{Kwiat1997,Prilmuller2018}). In quantum information processing the manipulation and measurement of entangled qubits play a major role. Applications like quantum key distribution (QKD) with entangled qubits \cite{Ekert1991,Bennett1992,Acin2007} require high source brightness, a high degree of entanglement, transmission through a low-noise quantum channel and finally a straightforward measurement at the remote communication partners, all with minimal losses. These prerequisites could be met by polarization entangled photon pairs\cite{Wengerowsky2019}.
Besides the most prominent sources based on spontaneous parametric down-conversion (SPDC)\cite{Kwiat1995,Giustina2015,Shalm2015}, also semiconductor quantum dots (QDs)\cite{Akopian2006,Gurioli2019,Huo2013,Huber2018,Reimer2012,Juska2013} are capable of generating polarization entangled photon pairs with a fidelity to a maximally entangled state above $0.97$\cite{Huber2018}. The probabilistic emission characteristics of SPDC sources prohibit a high brightness in combination with a high degree of entanglement\cite{Orieux2017}. This is not the case for QDs due to their sub-Poissonian photon statistics \cite{Schweickert2018}. Furthermore, in a real-world context, applications like QKD require entanglement to be communicated over large distances\cite{Xiang2019,Yin2020} to be practically relevant. Most transmission channels, like optical fibers, are subject to attenuation (about \SI{0.2}{dB/km} for common telecom fibers), which severely limits the transmission range. This limitation can be alleviated by exploiting a concept of quantum communication\cite{Gisin2007}: the interconnection of multiple light sources in quantum networks \cite{Gisin2007,Kimble2008} via the realization of a cascaded quantum repeater scheme with entangled photons and quantum memories\cite{Briegel1998,Lloyd2001,Kimble2008}. In order to reach this goal, properties of the photon sources beyond the maximum entanglement fidelity become relevant, such as the photon indistinguishability \cite{Mandel1987}. In this work, we examine the key figures of merit of entangled photon pairs with an emphasis on the distribution of entanglement in a quantum network. We will start from the state of the art, focusing on GaAs QDs. Although the emission wavelength of about \SI{785}{nm} is currently non-ideal for efficient fiber-based applications, GaAs QDs represent an excellent model system for the ideas discussed here due to their performance. Finally, we will outline recent concepts and approaches towards the realization of a viable quantum network.
\section{Polarization entangled photon pairs from quantum dots}
A common scheme to generate entangled photon pairs with semiconductor QDs embedded in photonic structures (see Fig. \ref{fig:GaAs_QD}(a)) is to resonantly populate the biexciton (XX) state by a two-photon excitation (TPE) process \cite{Stufler2006}. The XX radiatively decays via the biexciton-exciton (X)-ground state cascade \cite{Benson2000}, as depicted in Fig. \ref{fig:GaAs_QD}(b). Ideally, the emitted photon pairs are in the maximally entangled Bell state $\ket{\phi^+}=1/\sqrt{2}\left( \ket{HH}+\ket{VV}\right)$, with $\ket{H}$ and $\ket{V}$ the horizontal and vertical polarization basis states, respectively. The fidelity $f_{\ket{\phi^+}}$ of the real photon pair's state to $\ket{\phi^+}$ is mostly determined by the fine structure splitting (FSS) $S$ and lifetime $T_\text{1,X}$ of the intermediate X state. In the absence of other dephasing mechanisms, the maximum achievable fidelity of an ensemble of photon pairs to a maximally entangled state is given by\cite{bassobasset2020arxiv}
\begin{equation}
f^{\text{max}}_{\ket{\phi^+}}=\frac{1}{4}\left(2-g_0^{(2)}+\frac{2\left(1-g_0^{(2)}\right)}{\sqrt{1+\left( S\,T_\text{1,X} / \hbar \right)^2}} \right),
\label{eq:fmax}
\end{equation}
where $g_0^{(2)}$ is the multi-photon emission probability.
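Equation \eqref{eq:fmax} is straightforward to evaluate numerically. The following Python sketch is an illustration only (it is not part of the cited analysis, and the function and variable names are placeholders); it evaluates the bound for parameter values of the order discussed below, with all other dephasing mechanisms neglected.
\begin{verbatim}
import numpy as np

HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV*s

def f_max(fss_eV, t1_x_s, g2=0.0):
    """Upper bound on the fidelity to |phi+> (Eq. (fmax)).
    fss_eV : fine structure splitting S (eV)
    t1_x_s : exciton lifetime T_{1,X} (s)
    g2     : multi-photon emission probability g_0^(2)
    """
    x = fss_eV * t1_x_s / HBAR_EV_S
    return 0.25 * (2.0 - g2 + 2.0 * (1.0 - g2) / np.sqrt(1.0 + x**2))

# Values of the order reported for GaAs QDs:
# S ~ 0.4 ueV, T_{1,X} ~ 270 ps, g_0^(2) ~ 8e-5.
print(f_max(0.4e-6, 270e-12, 8e-5))  # ~0.99
print(f_max(0.0, 270e-12, 8e-5))     # FSS erased, e.g. by strain tuning
\end{verbatim}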
In the case of GaAs QDs obtained by the Al droplet etching technique \cite{Gurioli2019}, a high in-plane symmetry\cite{Huo2013} results in average FSS values below \SI{5}{\micro eV}, while the weak lateral carrier confinement \cite{Huber2019} causes radiative lifetimes of about $T_\text{1,XX}=\SI{120}{ps}$ and $T_\text{1,X}=\SI{270}{ps}$. The wavelength of the emitted light lies around \SI{780}{nm} (see Fig. \ref{fig:GaAs_QD}(c)), with the XX and X photons separated by about \SI{2}{nm} (\SI{4}{meV}), allowing them to be split by color. In contrast to SPDC-based sources\cite{Jons2017}, the pair-generation probability $\epsilon$ and the $g_0^{(2)}$ of QDs are decoupled due to the sub-Poissonian emission characteristics\cite{Benson2000}. This led to demonstrated values of $g_0^{(2)}= \SI{8(2)e-5}{}$, with $\epsilon=0.9$ (measured via cross-correlation measurements between XX and X\cite{Wang2019}) under pulsed TPE\cite{Schweickert2018}, as illustrated by the corresponding auto-correlation measurement in Fig. \ref{fig:GaAs_QD}(d). Figure \ref{fig:GaAs_QD}(e) displays a resulting two-photon density matrix $\rho$ in the polarization space of the XX and X photons for an as-grown GaAs QD with $S\approx\SI{0.4}{\micro eV}$, acquired by full-state tomography\cite{James2001} under pulsed TPE at an excitation rate of $R=\SI{80}{MHz}$. The fidelity deduced from this matrix as $f_{\ket{\phi^+}}=\braket{\phi^+|\rho|\phi^+}$ is \SI{0.97(1)}{}. By utilizing a specifically designed piezo-electric actuator\cite{Trotta2015}, capable of restoring the in-plane symmetry and erasing the FSS of the QDs by strain, fidelity values up to \SI{0.978(5)}{} were demonstrated\cite{Huber2018}. These results suggest that a modest Purcell enhancement of a factor of 3 could alleviate remaining dephasing effects and push the fidelity up to \SI{0.99}{}, which would match the best results from SPDC sources\cite{Jons2017}. The minimum time delay $1/R$ between the pulses depends on the lifetimes $T_\text{1,XX}$ and $T_\text{1,X}$, allowing for excitation rates up to $R\approx\SI{1}{GHz}$ (without Purcell enhancement). This makes GaAs QDs a viable source for applications like QKD with entangled photons \cite{Ekert1991,Bennett1992,Acin2007,Jennewein2000}, where high source brightness and near-unity entanglement fidelity are required in order to reach practical secure key rates in the high MHz or even GHz range. In order to reach this goal, however, a high photon-pair extraction efficiency is required, which is naturally limited in semiconductor structures due to total internal reflection at the air/semiconductor interface. A simple approach to increase the pair extraction efficiency $\eta_E^{(2)}$ from less than $10^{-4}$ to about 0.01 is to embed the QDs in a lambda cavity defined between two distributed Bragg reflectors and to add a solid immersion lens on top\cite{Huber2016}, see Fig. \ref{fig:GaAs_QD}(a). Recently, circular Bragg resonators have demonstrated values of $\eta_E^{(2)}=\SI{0.65(4)}{}$\cite{Liu2019} and Purcell enhancement up to a factor of $11.3$\cite{Wang2019}. Although a non-ideal entanglement fidelity due to the high FSS was reported in Ref. \citenum{Liu2019}, these structures are compatible with the aforementioned strain tuning techniques, which could cancel the FSS to create a bright source of highly entangled photon pairs, applicable for QKD with key rates potentially in the GHz range.
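As a rough back-of-the-envelope illustration of how these figures combine (a sketch for orientation only, not a result of the cited works; the overall detection efficiency is an assumed placeholder), the raw rate of detected photon pairs is essentially the product of excitation rate, pair-generation probability, pair extraction efficiency and detection efficiency:
\begin{verbatim}
R   = 80e6   # pulsed TPE excitation rate (Hz), as in the tomography above
eps = 0.9    # pair-generation probability per pulse
eta = 0.65   # pair extraction efficiency of a circular Bragg resonator
det = 0.8    # assumed overall two-photon detection efficiency (placeholder)

pair_rate = R * eps * eta * det
print(f"{pair_rate / 1e6:.1f} MHz")  # a few tens of MHz before fiber losses
\end{verbatim}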
\begin{figure*}
\caption{\textbf{Compilation of measurements for GaAs QDs.}}
\label{fig:GaAs_QD}
\end{figure*}
A widely discussed and researched topic is the distribution of entanglement over basically arbitrary distances, for which sources operating at high pair emission rates are especially relevant. One approach is free-space transmission via satellites, where recently a distance of \SI{1120}{km} was covered \cite{Yin2020}. From the practical point of view it would be desirable to exploit the existing and well-established telecom optical fiber networks\cite{Xiang2019}. The obvious effect of fibers on the transmitted light is a uniform attenuation, which is about $\SI{0.2}{dB/km}$ for typical fibers in the telecom C-band (\SI{1550}{nm} wavelength). When transmitting polarization entangled photons through optical fibers, however, polarization mode dispersion (PMD)\cite{Antonelli2011} also has to be taken into account. PMD causes the principal states of polarization (PSPs) of the photons' wave packets to drift apart in time, leading to a degradation of the entanglement. The latter is proportional to the ratio of the total induced PMD $\tau$ between the two entangled photons to the length of the photon wave packets, given by $2\,T_1$. The maximum achievable fidelity to a perfectly entangled state, derived from the 2-qubit density matrix in polarization space\cite{Antonelli2011}, is then given by:
\begin{equation}
f_{\text{PMD}}(\tau)= \frac{1}{2}+\left( \frac{1}{2} + \frac{\tau}{4\,T_{1}}\right)e^{\frac{-\tau}{2T_{1}}}.
\end{equation}
For simplicity, we assume that $T_1=\text{min}(T_{\text{1,X}},T_{\text{1,XX}})$, which represents a worst case scenario. If the two entangled photons experience exactly the opposite relative drift due to a maximum mismatch of the input modes with the PSPs, the PMD contributions from the individual fibers add up to $\tau=\tau_1+\tau_2$. A typical value for the PMD of a single mode fiber is $D=\SI{0.1}{ps/\sqrt{km}}$ (e.g. the type ``SMF-28e+'' from ``Corning''), thus with the given lifetimes and a length of $l=\SI{200}{km}$ for each of the X and the XX photon's fibers, the maximum possible fidelity for $T_1=\SI{120}{ps}$ is still $f_{\text{PMD}}(2D\sqrt{l})>0.99$. For $T_1=\SI{10}{ps}$, the $f_{\text{PMD}}$ degrades to about $0.98$. For $T_1=\SI{1}{ps}$, which is approximately the case for sources based on the SPDC process, the maximum fidelity drops to about $0.79$, unless resorting to lossy measures like frequency filtering~\cite{Lim2016} or conversion to time-bin entanglement~\cite{Sanaka2002,Jayakumar2014}. If the input polarization modes are aligned with the PSPs, the total PMD reads $\tau=\tau_1-\tau_2$. Therefore, if $\tau_1=\tau_2$, a configuration exists which exactly cancels out the effect of PMD and preserves the entanglement. This could be achieved by realigning the input modes to the PSPs during operation using polarization controllers\cite{Simon1990}. An additional effect to consider in the context of a fiber-based network is chromatic dispersion, which leads to a temporal broadening of the wavefunctions\cite{Lenhard2016,Weber2019}. This lowers the success probability of two-photon interference, which we will discuss in the next section.
\section{Quantum dots in a quantum repeater based network}
Although the transport of highly polarization entangled photons through fibers is possible\cite{Wengerowsky2020}, the exponential attenuation will inevitably lead to an insufficient qubit rate. A possible solution to this problem is the realization of a quantum repeater scheme.
The common approach is based on the DLCZ protocol (and its variants), which relies on spin-photon entanglement~\cite{Duan2001}. However, the probabilistic nature of the entangling scheme limits the entanglement creation rate~\cite{Riebe2004}. Although an improved, deterministic version of the spin-photon based scheme was developed\cite{Humphreys2018}, the achieved rates are still modest. An alternative scheme relies on the use of entangled photon pair sources, like QDs, interfaced with quantum memories capable of receiving and storing entangled states to increase the qubit rate~\cite{Lloyd2001}. This scheme relies on a cascade of entanglement swapping processes \cite{Kirby2016,BassoBasset2019} among entangled photon pairs emitted by independent emitters. The teleportation of the entanglement is enabled by two-photon interference to perform a so-called Bell state measurement (BSM). The success of a BSM strongly depends on the photon indistinguishability, which, in turn, depends on the photon sources and can be experimentally accessed by probing the interference visibility in a Hong-Ou-Mandel (HOM) experiment\cite{Lenhard2016}. For maximum visibility the spatio-temporal wave packets of the two photons involved in the BSM have to be identical and pure, i.e. no other physical system should contain information about the photon's origin. The latter point plays a crucial role in the case of QDs exploiting the decay cascade for entangled photon generation. The XX and X photons are correlated in their decay times\cite{Simon2005,Huber2013,schll2020crux}, which limits the maximum possible indistinguishability for both the XX and X photons to
\begin{equation}
V_{\text{casc}}=\frac{1}{1+T_{\text{1,XX}}/T_{\text{1,X}}}.
\label{eq:Vcasc}
\end{equation}
Figure \ref{fig:GaAs_QD}(f) shows the result of a HOM experiment with two XX photons generated by a GaAs QD, upon excitation with two pulses separated by a $\SI{2}{ns}$ delay. The resulting visibility of $0.69$ is typical for both the XX and X photons under TPE\cite{Huber2016} and is close to the maximum according to Eq. \eqref{eq:Vcasc}. For comparison, for single X photons generated under resonant excitation a visibility of over $0.9$ is achieved for the same QDs\cite{Scholl2019,Reindl2019}. When interfacing two dissimilar QDs, the inherently stochastic nature of the epitaxial growth has to be considered, which primarily leads to varying emission energies. Further, imperfections in the solid-state environment of the QDs lead to inhomogeneous broadening due to charge noise\cite{Kuhlmann2013}. This results in a jitter of the central emission energy by $\delta E$ (full width at half maximum) around a mean value on timescales typically ranging from microseconds to milliseconds \cite{Schimpf2019}. The jitter leads to a degradation of the indistinguishability described by\cite{Kambs2018}:
\begin{equation}
V_{\delta E}=\frac{\hbar\,\text{Re}[\text{F}(z)]} {\sqrt{8\pi}\,\sigma\,T_1},
\label{eq:VE}
\end{equation}
with $\sigma=\delta E / (2\sqrt{2\ln 2})$ the standard deviation of the Gaussian distribution of the energy jitter and $\text{Re}[\text{F}(z)]$ the real part of the Faddeeva function of $z=i\,\hbar/(2\,\pi\,\sqrt{2}\,\sigma\,T_1)$. Figure \ref{fig:GaAs_QD}(g) shows the result of a HOM experiment with two X photons from GaAs QDs with $\delta E\approx \SI{4}{\micro eV}$ and a resulting visibility of $V=0.51(5)$~\cite{Reindl2017}. The center wavelengths were previously matched by tuning the energy of one QD via a monolithic piezo-electric actuator.
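The closed-form limits quoted so far (the PMD-limited fidelity of the previous section, the cascade limit of Eq. \eqref{eq:Vcasc} and the jitter limit of Eq. \eqref{eq:VE}) are easy to evaluate numerically. The following Python sketch is an illustration only and not part of the cited analyses; it assumes the Faddeeva-function convention $w(z)=e^{-z^2}\,\mathrm{erfc}(-iz)$ as implemented by \texttt{scipy.special.wofz}, and the parameter values are merely indicative.
\begin{verbatim}
import numpy as np
from scipy.special import wofz        # Faddeeva function w(z)

HBAR_EV_S = 6.582e-16                 # reduced Planck constant in eV*s

def f_pmd(tau_s, t1_s):
    """Maximum fidelity after a relative PMD delay tau between the photons."""
    r = tau_s / t1_s
    return 0.5 + (0.5 + r / 4.0) * np.exp(-r / 2.0)

def v_cascade(t1_xx_s, t1_x_s):
    """Indistinguishability limit set by the XX-X cascade, Eq. (Vcasc)."""
    return 1.0 / (1.0 + t1_xx_s / t1_x_s)

def v_jitter(dE_fwhm_eV, t1_s):
    """Indistinguishability limit set by Gaussian spectral jitter, Eq. (VE)."""
    sigma = dE_fwhm_eV / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    z = 1j * HBAR_EV_S / (2.0 * np.pi * np.sqrt(2.0) * sigma * t1_s)
    return HBAR_EV_S * np.real(wofz(z)) / (np.sqrt(8.0 * np.pi) * sigma * t1_s)

# PMD: D = 0.1 ps/sqrt(km), 200 km per arm, worst-case addition of both arms
tau = 2.0 * 0.1e-12 * np.sqrt(200.0)
for t1 in (120e-12, 10e-12, 1e-12):
    print("f_PMD :", t1, f_pmd(tau, t1))

# Cascade limit for T_{1,XX} = 120 ps and T_{1,X} = 270 ps  ->  ~0.69
print("V_casc:", v_cascade(120e-12, 270e-12))

# Jitter limit at T_1 = 270 ps for a few FWHM values (indicative only)
for dE in (0.5e-6, 1e-6, 4e-6):
    print("V_dE  :", dE, v_jitter(dE, 270e-12))
\end{verbatim}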
\begin{figure*}
\caption{\textbf{Entanglement fidelity in a chain of quantum relays.}}
\label{fig:Fidelity}
\end{figure*}
We will now discuss possible solutions to overcome the two major indistinguishability degrading mechanisms in QDs discussed so far: the partial temporal entanglement in the XX-X decay cascade (Eq.\eqref{eq:Vcasc}) and frequency jitter (Eq.\eqref{eq:VE}). Both effects are influenced by the radiative lifetimes $T_{\text{1,XX}}$ and $T_{\text{1,X}}$, which can be modified by exploiting the Purcell effect in a cavity\cite{Liu2019,Wang2019}. Figure \ref{fig:Fidelity} illustrates concatenated entanglement swapping processes with a depth of $L=\{1,2,3\}$, i.e. a chain of quantum relays forming the backbone of quantum repeaters. The number of QDs required is $2^L$, while the range covered scales with $2^L\,l_0$, with $l_0$ the total length of both fibers departing from one QD. This example serves as a demonstration of how the entanglement fidelity evolves over multiple layers of swapping operations with photons generated by QDs. Fig. \ref{fig:Fidelity}(a) depicts the final entanglement fidelity as a function of the Purcell factor $P$ and the energy jitter $\delta E$. In all simulations the natural linewidth of the exciton is set to \SI{2.4}{\micro eV} for $P=1$, corresponding to $T_{\text{1,X}}=\SI{270}{ps}$. Values of $P>15$ are impractical, as the total relaxation time of the QD then approaches the typical excitation pulse width of about $\SI{10}{ps}$. This primarily leads to an increasing re-excitation probability\cite{Gustin2018,Hanschke2018}, which is detrimental to the indistinguishability and the entanglement. In addition, PMD effects in optical fibers start to become relevant for such short wave packets. For the calculation of the fidelity we utilize the density matrix formalism for describing one entanglement swapping process with QDs\cite{BassoBasset2019} with a type of BSM which can detect two Bell states\cite{Mattle1996,bassobasset2020arxiv} ($\psi^+$ and $\psi^-$). In order to model a chain of entanglement swapping processes the formalism is applied recursively, assuming uncorrelated BSM success probabilities in successive steps. We simultaneously account for varying lifetimes caused by $P$ and a decreased BSM success rate due to $\delta E$ (see the supplementary material for details). From the simulations we observe that already for two swapping processes the homogeneous Purcell enhancement alone cannot recover the entanglement fidelity sufficiently, as it merely alleviates the impact of inhomogeneous broadening on the indistinguishability, but the visibility-degrading effect of the XX-X cascade is still at full force. Figure \ref{fig:Fidelity}(b) depicts the case of an energy-selective cavity\cite{Huber2013}, which enhances the XX decay rate by a factor of 7 compared to the X, so that $P_{X}=P$ and $P_{XX}=7P$. This approach could strongly increase the BSM success rate and therefore the resulting entanglement fidelity. However, the finite temporal width of the excitation pulse, whose minimum value is set by the limited spectral separation between X and XX and the necessity of suppressing laser stray light, sets a lower limit to the lifetimes - and therefore an upper limit to the Purcell enhancement - in order to limit re-excitation \cite{Gustin2018}. A compromise could be achieved by mild frequency filtering of the X photon, as illustrated in Fig. \ref{fig:Fidelity}(c).
Filtering partially erases the temporal information held by the X photon, leading to the same outcome as prolonging the X lifetime and hence decreasing the XX/X lifetime ratio, as with the selective Purcell enhancement. In the simulations we assume a filter with Lorentzian transmission characteristics and a FWHM of $\delta E_f$, and an energy jitter with a FWHM of $\delta E$ for both the X and the XX (see the supplementary material for details). We assume a frequency-selective cavity with fixed Purcell enhancements of $P_\text{X}=2$ and $P_\text{XX}=10$. As a result of the filtering, the effective lifetime of the X signal increases, while the impact of the energy jitter is simultaneously reduced. Note that for the values of $\delta E$ investigated here the interference visibility again drops for $\delta E_f$ values below the inhomogeneous broadening $\delta E$. In addition, in the presence of a finite FSS, the BSM success rate drops when the filtered linewidth is on the order of the FSS or below\cite{BassoBasset2019}. From the simulations we can observe that with a low inhomogeneous broadening ($<\SI{0.2}{\micro eV}$) and a moderate frequency filtering of about $\SI{4}{\micro eV}$ one could achieve an entanglement fidelity of approximately 0.93 at $L=2$ and 0.85 at $L=3$. A complete repeater scheme also requires quantum memories~\cite{Lvovsky2009}, which can store and retrieve a photonic qubit with high fidelity. To address the noise and bandwidth limitations of quantum memories, two groups invented a cascaded absorption memory scheme, which is intrinsically noise-free~\cite{Kaczmarek2018,Finkelstein2018}. Furthermore, the possibility of using an off-resonant Raman transition in this cascaded scheme allows for a large storage bandwidth, limited mainly by the available control laser power. Currently the main drawback of these schemes is the limited storage time, which is determined by the radiative lifetime of the upper state of the cascade (below \SI{100}{ns}). Another promising approach is to use rare-earth doped crystals as quantum memories~\cite{Afzelius2009}, featuring performance that equals, if not outperforms, that of cold atomic ensembles~\cite{Li2016,Cao2020} or trapped emitters in terms of efficiency~\cite{Hedges2010} and coherence times~\cite{Zhong2015}. These memories have shown a full quantum storage protocol with telecom-heralded quantum states of light~\cite{Seri2017}, and the first photonic quantum state transfer between nodes of different nature~\cite{Maring2017}. Furthermore, atomic frequency comb quantum memories were the first to be successfully interfaced with single photons emitted from a quantum dot~\cite{Tang2015}.
\section{Future outlook}
We conclude that bright and nearly on-demand sources of highly entangled photon pairs are on the verge of becoming reality. The groundwork has been laid through the development of semiconductor quantum dots (QDs) emitting highly entangled photons\cite{Huber2018}, of advanced optical cavity structures\cite{Liu2019,Wang2019} and of technology capable of manipulating the symmetry and emission energy of QDs\cite{Trotta2012,Trotta2015}. On-chip integration of QDs\cite{Dietrich2016} and the implementation of electric excitation schemes\cite{Salter2010} can further increase the practicality for emerging quantum technologies. The optimal wavelength (about \SI{1550}{nm}) for transporting entangled photons through fibers is currently determined by the established telecom fiber infrastructure.
Material systems to obtain QDs emitting at this wavelength are under development\cite{Olbrich2017,Xiang2019,zeuner2019arxiv} and existing sources with emission at shorter wavelengths could be adapted by frequency down-conversion\cite{Weber2019}. Recently, a basic GHz-clocked quantum relay with QDs emitting directly in the telecom C-band was demonstrated\cite{Anderson2020}. One of the greatest, yet most rewarding, challenges is the interfacing of dissimilar sources of entangled photons for multi-photon applications\cite{Vural2020} and long-haul entanglement distribution\cite{Xiang2019,Yin2020}. The physical limits to the indistinguishability\cite{Simon2005,schll2020crux} set by the currently employed cascade for entangled photon pair generation\cite{Benson2000,Huber2018} and fluctuations stemming from the solid-state environment of QDs\cite{Huo2013,Kuhlmann2013} pose intricate challenges for the years to come. As demonstrated in this work, the application of selective Purcell enhancement together with mild frequency filtering could alleviate the limits on the indistinguishability of the entangled photon pairs. The different emission energies and radiative lifetimes of the biexciton (XX) or exciton (X) in QDs could be matched by utilizing strain~\cite{Trotta2015} and electric~\cite{Trotta2013} degrees of freedom independently. Considering the quantum relay chains depicted in Fig. \ref{fig:Fidelity}, three strain degrees of freedom can cancel the fine structure splitting and adapt the central energy of the XX or X to the next neighbor's. The electric degree of freedom can simultaneously be used to fine-tune the respective radiative lifetime and therefore the shape of the photonic wave packet. By repeating this strategy through the whole relay chain for each QD, one could optimize the resulting entanglement fidelity of the final photon pair. With these tools at hand, the next leap towards the demonstration of a functional quantum network will be the interconnection of two dissimilar quantum dots via entanglement swapping \cite{Kirby2016,BassoBasset2019}. The following steps could be to interface the photons performing the Bell state measurement with quantum memories\cite{Lvovsky2009,Kaczmarek2018,Afzelius2009,Hedges2010,Zhong2015,Seri2017,Maring2017,Tang2015} and to use the resulting entangled photon pairs for quantum key distribution\cite{Ekert1991,Bennett1992,Acin2007} with efficiencies beating the direct transmission through fibers. From there on, the goal is to expand the system to a chain of multiple QDs and implement a quantum repeater scheme \cite{Briegel1998,Lloyd2001,Kimble2008} in order to enhance the resulting entanglement fidelity and efficiency compared to a repeater-less distribution scheme.
\section*{Data Availability}
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
\section*{References}
\end{document}
\begin{document}
\title{On weighted sums of numbers of convex polygons in point sets}
\begin{abstract}
Let $S$ be a set of $n$ points in general position in the plane, and let $X_{k,\ell}(S)$ be the number of convex $k$-gons with vertices in $S$ that have exactly~$\ell$ points of $S$ in their interior. We prove several equalities for the numbers $X_{k,\ell}(S)$. This problem is related to the Erd\H{o}s-Szekeres theorem. Some of the obtained equations also extend known equations for the numbers of empty convex polygons to polygons with interior points. Analogous results for higher dimensions are shown as well.
\end{abstract}
\section{Introduction}\label{intro}
\blfootnote{
\begin{minipage}[l]{0.3\textwidth} \hspace{-0.25cm} \includegraphics[trim=10cm 6cm 10cm 5cm,clip,scale=0.11]{eu_logo} \end{minipage} \hspace{-3.6cm} \begin{minipage}[l][1cm]{0.87\textwidth} This project has been supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 734922. \end{minipage} \hspace{0.0cm} \begin{minipage}[l]{0.97\textwidth} \noindent Research of C.H. was partially supported by project MTM2015-63791-R (MINECO/FEDER), and by project Gen. Cat. DGR 2017SGR1336. Research of D.O. was partially supported by project PAPIIT IN106318 and IN107218 as well as CONACyT 282280. Research of P.~P-L. was partially supported by project DICYT 041933PL Vicerrector\'ia de Investigaci\'on, Desarrollo e Innovaci\'on USACH (Chile), and Programa Regional STICAMSUD 19-STIC-02. Research of B.V. was partially supported by the Austrian Science Fund (FWF) within the collaborative DACH project \emph{Arrangements and Drawings} as FWF project \mbox{I 3340-N35}. \end{minipage} }
Let $S$ be a set of $n$ points in the Euclidean plane in general position, that is, no three points of $S$ are collinear. We denote by $h$ the number of extreme points of $S$, that is, the points of~$S$ that lie on the boundary of the convex hull of $S$. A $k$-gon (of $S$) is a simple polygon that is spanned by exactly $k$ points of $S$. A $k$-gon is called \emph{empty} if it does not contain any points of $S$ in its interior. The vertex set of an empty convex $k$-gon is sometimes also called a {\it{free}} set~\cite{EdelmanR00}. We denote by $\Xk(S)$ the number of empty convex $k$-gons in $S$, and more generally, we denote by $\Xkl(S)$ the number of convex $k$-gons in $S$ that have exactly $\ell$ points of $S$ in their interior. Further, a convex $k$-gon with $\ell$ interior points constitutes a subset of $(k+\ell)$ points of~$S$ whose convex hull does not contain any other points of $S$. Such sets are sometimes also called {\it{islands}}~\cite{islas11}. Figure~\ref{fig:smallexamples5_2} shows a set $S$ of $10$ points and its values $\Xkl(S).$ Here we are interested in relations among the values $\Xkl(S)$ and invariants among all point sets of given cardinality.
\begin{figure}
\caption{A set $S$ of $n=10$ points, with $h=5$ of them on the boundary of the convex hull, and the numbers of convex $k$-gons with $\ell$ interior points, $X_{k,\ell}(S)$.}
\label{fig:smallexamples5_2}
\end{figure}
Invariants for the values $\Xk(S)$ are well known.
The equation
\begin{equation} \label{eqn_edelman}
\sum\limits_{k\geq 3} (-1)^{k+1} \Xk(S) = \ \binom{n}{2}-n+1
\end{equation}
was proved by Edelman and Jamison in~\cite{EdelmanJ85}, actually in terms of the number $F_k$ of free sets of cardinality $k$ in a convex geometry (Theorem 4.5 in~\cite{EdelmanJ85}), stating $\sum_{k}(-1)^{k}F_k=0.$ Note that in our notation we omit the term for the empty set, $X_0:=1$, as well as $X_1(S)=n$ and $X_2(S)=\binom{n}{2}$, the numbers of points and edges spanned by $S$, respectively. In~\cite{EdelmanJ85}, Equation~(\ref{eqn_edelman}) is also attributed to J.~Lawrence. Moreover, it also holds in higher dimensions. The equation
\begin{equation} \label{eqn_ahrens}
\sum\limits_{k\geq 3} (-1)^{k+1} k \Xk(S) = \ 2\binom{n}{2}-h,
\end{equation}
which only depends on $n$ and on $h$, was first proved by Ahrens, Gordon, and McMahon~\cite{AhrensGM99}. Its higher dimensional version was proved by Edelman and Reiner~\cite{EdelmanR00} and by Klain~\cite{Klain99}. Pinchasi, Radoi\v{c}i\'c, and Sharir~\cite{prs_oecp_06} provided elementary geometric proofs for Equations~(\ref{eqn_edelman}) and~(\ref{eqn_ahrens}), by using a continuous motion argument of points. They further proved other equalities and inequalities involving the values $\Xk(S)$ and extensions to higher dimensions. As one main result, they showed that for planar point sets and $r \geq 2$, it holds that
\begin{equation} \label{eqn_tr}
\sum\limits_{k\geq 2r} (-1)^{k+1} \frac{k}{r} \binom{k-r-1}{r-1} \Xk(S) = \ -T_r(S)
\end{equation}
where $T_r(S)$ denotes the number of $r$-tuples of vertex-disjoint edges $(e_1,\dots , e_r)$ spanned by $S$ that lie in convex position, and are such that the region $\tau(e_1,\dots , e_r)$, formed by the intersection of the $r$ half-planes that are bounded by the lines supporting $(e_1,\dots , e_r)$ and contain the other edges, has no point of $S$ in its interior~\cite{prs_oecp_06}. For the point set $S$ of Figure~\ref{fig:smallexamples5_2} we have $T_2(S)=30$, $T_3(S)=25$, and $T_r(S)=0$ for $r\geq 4.$ In~\cite{prs_oecp_06}, the alternating sums of Equations (\ref{eqn_edelman}), (\ref{eqn_ahrens}) and (\ref{eqn_tr}) have been unified by the notion of the \emph{$r$-th alternating moment} $M_r(S)$ of $\{\Xk(S)\}_{k\geq 3}$ as
$$ M_r(S) := \left\{ \begin{array}{ll} \sum\limits_{k\geq 3} (-1)^{k+1} \Xk(S) & \mbox{ for } r = 0 \\ \sum\limits_{k\geq 3} (-1)^{k+1} \frac{k}{r} \binom{k-r-1}{r-1} \Xk(S) & \mbox{ for } r \geq 1. \ \footnotemark \end{array} \right. $$
\footnotetext{Note that Equation~(\ref{eqn_tr}) and its proof require that $k\geq 2r$.}
In this work, we take a similar approach to that of Pinchasi et al.~\cite{prs_oecp_06} and extend the above-mentioned results to $\Xkl(S)$, that is, to convex $k$-gons having a fixed number $\ell$ of points of $S$ in their interior. We denote by
\[ \ASum(S) := \sum\limits_{k\geq 3} (-1)^{k+1} \Xkl(S) \]
the \emph{alternating sum} of $\{\Xkl(S)\}_{k\geq 3}$. In Section~\ref{sec:alt_sums}, we mostly concentrate on the case of polygons with one interior point, i.e., $\ell=1$, where we prove that
$$ \ASum[1](S) = \sum\limits_{k\geq 3} (-1)^{k+1} X_{k,1}(S) = n-h $$
for any set $S$ of $n$ points with $h$ of them on the boundary of the convex hull. Further, we obtain equations for alternating sums of convex polygons with one given interior point $p \in S$ (denoted $A_1^p(S)$), and for alternating sums of convex polygons with one interior point that contain a given edge $e$ spanned by two points of $S$ on their boundary (denoted $A_1(S;e)$).
We also derive inequalities for $\sum_{k=3}^{t}(-1)^{k+1}X_{k,1}(S)$ for any given value $t\geq 3$, based on analogous inequalities for $X_{k}$ from~\cite{prs_oecp_06}. In Section~\ref{sec:weighted_sums}, we consider different weight functions $\fkl{k}{\ell}$ and general weighted sums over $\Xkl(S)$ of the form
$$ \ensuremath{F}(S) = \sum_{k\geq 3} \sum_{\ell\geq 0} \fkl{k}{\ell} \Xkl(S).$$
We show that for any function $\fkl{k}{\ell}$ with the property that
\begin{equation} \label{eq:rr_general1}
\fkl{k}{\ell} = \fkl{k+1}{\ell-1} + \fkl{k}{\ell-1},
\end{equation}
the value of the corresponding sum $\ensuremath{F}(S)$ is invariant over all sets $S$ of $n$ points. We present several functions $\fkl{k}{\ell}$ with this property. Among these functions, we find that for any point set $S$ of $n$ points in general position and for any $x \in \ensuremath{\mathbb R}$, it holds that
$$ \sum_{k=3}^n \sum_{\ell=0}^{n-k} x^k \left(1+x\right)^\ell X_{k,\ell} = \left(1+x\right)^n -1 - x\cdot n - x^2 \binom{n}{2}. $$
Note that Equation~(\ref{eqn_edelman}) is the special case $x=-1$; for this case and $\ell=0$, set the indeterminate form $\left(1+x\right)^\ell:=1$. We further show that the maximum number of linearly independent equations $F_j(S) =\sum_{k \geq 3} \sum_{\ell \geq 0} f_j(k,\ell) X_{k,\ell}$, where each $f_j(k,\ell)$ satisfies Equation (\ref{eq:rr_general1}), in terms of the variables $X_{k,\ell}$, is $n-2$. In Section~\ref{sec:moment_sums}, we relate the results of Section~\ref{sec:weighted_sums} to the moments from~\cite{prs_oecp_06}. We denote by $\mrk{k}$ the multiplicative factor of $(-1)^{k+1}X_k(S)$ in $M_r(S)$, that is,
\[ \mrk{k} := \left\{\begin{array}{ll} 1 & \mbox{for\ } r=0 \\ \frac{k}{r}\binom{k-r-1}{r-1} & \mbox{for\ } r \geq 1 \ \mbox{and $k \geq 2r$}\\ 0 & \mbox{otherwise} \end{array} \right. \]
We show the following relations, which only depend on the number $n$ of points. For any point set $S$ with cardinality $n$ and integer $0 \leq r\leq 2$ it holds that
\[ \FrSum(S) := \sum_{k\geq 3} \sum_{\ell=0}^r (-1)^{k-\ell+1} \mrk[r-\ell]{k-\ell} \Xkl(S) = \left\{\begin{array}{ll} \binom{n}{2}-n+1 & \mbox{for\ } r=0 \\ 2\binom{n}{2}-n & \mbox{for\ } r=1 \\ -\binom{n}{2}+n & \mbox{for\ } r=2 \\ \end{array}\right. \]
Finally, in Section~\ref{sec:higher_dimensions} we discuss the generalization of the obtained results to higher dimensions. Several more results, in particular on identities involving $X_{k,\ell}$ for special point configurations, can be found in the thesis~\cite{Torra19}.\\
An important argument that will be used in the proofs of this work is the continuous motion argument of points. Of course, this argument is not new for the analysis of configurations in combinatorial geometry (see for example~\cite{AichholzerGOR07,AndrzejakAHSW98,prs_oecp_06,Tverberg66}). The goal is to prove a property for all point sets in general position in the plane. To this end, the property is first shown to hold for some particular point set (usually a set of points in convex position). Then, one can move the points of the point set, such that only one point is moved at each instant of time, until any particular point configuration is reached. It remains to show that the property holds throughout. The combinatorial structure of the point set only changes when, during a point move, one point becomes collinear with two other points. It is hence sufficient to check that the property is maintained when one point crosses the edge spanned by two other points of the set.
An analogous proof strategy can be applied in $\ensuremath{\mathbb R}^d$ for $d>2$. \subsection{Related work} The problem studied in this work is related to one of the most famous problems in combinatorial geometry, namely the one of showing that every set of sufficiently many points in the plane in general position determines a convex $k$-gon. The problem of determining the smallest integer~$f(k)$ such that every set of $f(k)$ points contains a convex $k$-gon goes back to a question of Esther Klein and has become well known as the Erd\H{o}s-Szekeres problem. The fact that this number $f(k)$ exists for every $k$ and therefore, for given $k$, $\sum_{\ell \geq 0} \Xkl(S)\geq 1$ when $S$ has sufficiently many points, was first established in a seminal paper of Erd\H{o}s and Szekeres~\cite{ES_1935}, who proved the following bounds on $f(k)$~\cite{ES_1935,ES_1960}: \[ 2^{k-2} + 1 \le f(k) \le \binom{2k-4}{k-2} + 1 .\] Subsequently, many improvements have been presented on the upper bound; see~\cite{Morris} for a survey on this problem. Very recently, the problem has been almost settled by a work of Suk~\cite{Suk}, who showed that $f(k)=2^{k+o(k)}$. Another recent related work concerning the existence of convex $k$-gons in point sets is~\cite{DuqueMH18}. In a slight variation of the original problem, Erd\H{o}s suggested finding the minimum number $g(k)$ such that every set of $g(k)$ points in the plane in general position contains $k$ points which form an empty convex $k$-gon. It is easy to show that for empty triangles and empty convex quadrilaterals this number is $3$ and $5$, respectively. Harborth~\cite{Harborth} showed in 1978 that $g(5)=10$. Thus $X_{k,0}(S)\geq 1$ for $k=3,4,5$ when $S$ has at least $g(k)$ points. However, in 1983 Horton~\cite{Horton} constructed an infinite set of points with no empty convex $7$-gon, implying that $g(k)$ is infinite for any $k \geq 7$. Much later, Overmars \cite{Overmars} used a computer to find a set with $29$ points without an empty convex hexagon, and later Gerken \cite{Gerken} and Nicolas~\cite{Nicolas} proved that $g(6)$ is finite. A relaxed version of the empty convex polygon problem has been considered by Bialostocki, Dierker and Voxman \cite{Bialostocki-etal}. They conjectured that for any two integers $k\geq 3$ and $\ell \geq 1$, there exists a function $C(k,\ell)$ such that every set with at least $C(k,\ell)$ points contains a convex $k$-gon whose number of interior points is divisible by $\ell$. They also showed that their conjecture is true if $k \equiv 2 \pmod{\ell}$ or $k \geq \ell+3$. In parallel to questions concerning the existence of certain configurations in every large enough set of points in general position, the number of such configurations has also been a subject of research. For example, Fabila-Monroy and Huemer~\cite{RuyClemensIslands12} considered the number of $k$-islands in point sets. They showed that for any fixed $k$, their number is in $\Omega(n^2)$ for any $n$-point set in general position and in $O(n^2)$ for some such sets. The question of determining the number of convex $k$-gons contained in any $n$-point set in general position was raised in the 1970s by Erd\H{o}s and Guy~\cite{EG_1973}. The trivial solution for the case $k=3$ is $\binom{n}{3}$. However, already for convex \mbox{4-gons} this question turns out to be highly non-trivial, as it is related to the search for the minimum number of crossings in a straight-line drawing of the complete graph with $n$ vertices; see again~\cite{EG_1973}.
Erd\H{o}s also posed the corresponding question for the number $h_k(n)$ of empty convex $k$-gons~\cite{ER84}. Horton's construction implies $h_k(n) = 0$ for every $n$ and every $k \geq 7$, so it remains to consider the cases $k\!=\!3, \ldots, 6$. For the functions $h_3(n)$ and $h_4(n)$, asymptotically tight estimates are known. The currently best known bounds are $n^2 + \Omega(n\log^{2/3}n) \le h_3(n) \le 1.6196n^2+o(n^2)$ and $\frac{n^2}{2} + \Omega(n\log^{3/4}n) \le h_4(n) \le 1.9397n^2+o(n^2)$, where the lower bounds can be found in~\cite{Many5Holes_ARXIV} and the upper bounds are due to B\'{a}r\'{a}ny and Valtr~\cite{BV2004}. For $h_5(n)$ and $h_6(n)$, no matching bounds are known. The best known upper bounds $h_5(n)\le 1.0207n^2+o(n^2)$ and $h_6(n) \leq 0.2006n^2+o(n^2)$ can also be found in~\cite{BV2004}. The best known lower bound $h_6(n) \ge n/229 - 4$ is due to Valtr~\cite{v-ephpp-12}. It is widely conjectured that $h_5(n)$ grows quadratically in~$n$. However, despite many efforts in the last 30 years, only very recently has a superlinear bound of $h_5(n) = \Omega(n\log^{4/5}{n})$ been shown~\cite{abhkpsvv-slbnh-17}. A result of independent interest, due to Pinchasi, Radoi\v{c}i\'c, and Sharir~\cite{prs_oecp_06}, is that $h_4(n) \geq h_3(n) - \frac{n^2}{2} - O(n)$ and $h_5(n) \geq h_3(n) - n^2 - O(n)$. Consequently, any improvement of the constant 1 in the dominating term $n^2$ of the lower bound on $h_3(n)$ would imply a quadratic lower bound for $h_5(n)$. \section{Alternating sums}\label{sec:alt_sums} In this section, we concentrate on alternating sums of numbers of polygons with one interior point and possibly with some elements fixed. To this end, we introduce some more notation. Let $p$, $q$, and $r$ be three points of a set $S$ of $n$ points in general position in the plane. We denote by $\Delta{pqr}$ the triangle with vertices $p$, $q$, and $r$, and by a directed edge $e=pq$ the segment that connects $p$ and $q$ and is oriented from $p$ to $q$. We do not always specify if an edge is directed or undirected when it is clear from the context. We sometimes also write polygon instead of convex polygon since all considered polygons are convex. We say that a polygon lies to the left side of a directed edge $e$ if it is contained in the left closed half-plane that is bounded by the line through~$e$. For a fixed point $p\in S$, we denote by $\Xkl[k,1]^p(S)$ the number of convex $k$-gons spanned by $S$ that contain exactly $p$ in their interior, and by $\ASum[1]^p(S):= \sum_{k\geq 3} (-1)^{k+1} \Xkl[k,1]^p(S)$ the according alternating sum. Further, for a directed edge $e$, we denote by $\Xkl(S;e)$ the number of convex $k$-gons with $\ell$ interior points that have $e$ as a boundary edge and lie on the left side of $e$; and by $\ASum(S;e):= \sum_{k\geq 3} (-1)^{k+1} \Xkl(S;e)$ the according alternating sum. Likewise, $\Xkl(S;p)$ is the number of convex $k$-gons with $\ell$ interior points that have $p$ on their boundary. If more elements (points and/or edges) are required to be on the boundary of the polygons, then all those are listed after the semicolon in this notation. Further, if an element is required not to be on the boundary of the polygons, then it is listed with a minus. For example, $\Xkl(S;e,p_1,-p_2)$ denotes the number of convex $k$-gons with $\ell$ interior points that are on the left of $e$, and have $e$ and $p_1$ on their boundary but not $p_2$.
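The quantities $\Xkl(S)$ defined above are straightforward to compute by brute force for small point sets, which is convenient for checking the identities derived in this paper on concrete configurations. The following Python sketch is included only as an illustration; the function names are ad hoc, the code is not optimized, and it relies on the fact that, for points in general position, a point lies strictly inside a convex hull if and only if it lies strictly inside some triangle spanned by the defining points.
\begin{verbatim}
from itertools import combinations
from collections import defaultdict

def orient(a, b, c):
    # Signed area test: > 0 for a left turn, < 0 for a right turn.
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_triangle(p, a, b, c):
    # Strict containment of p in the triangle abc (general position assumed).
    s1, s2, s3 = orient(a, b, p), orient(b, c, p), orient(c, a, p)
    return (s1 > 0 and s2 > 0 and s3 > 0) or (s1 < 0 and s2 < 0 and s3 < 0)

def in_hull(p, pts):
    # For points in general position, p lies strictly inside conv(pts)
    # iff it lies strictly inside some triangle spanned by pts.
    return any(in_triangle(p, a, b, c) for a, b, c in combinations(pts, 3))

def count_X(S):
    # Returns a dictionary mapping (k, l) to X_{k,l}(S).
    X = defaultdict(int)
    for k in range(3, len(S) + 1):
        for T in combinations(S, k):
            # T spans a convex k-gon iff no chosen point lies inside the others.
            if any(in_hull(t, [u for u in T if u != t]) for t in T):
                continue
            ell = sum(1 for s in S if s not in T and in_hull(s, T))
            X[(k, ell)] += 1
    return dict(X)

# Triangle with one interior point: X_{3,0} = 3 and X_{3,1} = 1.
print(count_X([(0, 0), (4, 0), (0, 4), (1, 1)]))
\end{verbatim}
For a triangle with one interior point it reports $X_{3,0}(S)=3$ and $X_{3,1}(S)=1$, matching a count by hand.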
In~\cite{prs_oecp_06}, the authors show as a side result (in the proof of Theorem 2.2) that the alternating sum of empty convex $k$-gons incident to (the left side of) a directed edge $e=pq$ is 1 if there is at least one point of $S\backslash\{p,q\}$ on the left side of $e$, and 0 otherwise. As we will repeatedly use this result, we explicitly state it as a lemma here. We remark that in~\cite{prs_oecp_06}, $\ASum[0](S)$ and $\ASum[0](S;e)$ are denoted as $M_0(S)$ and $M_0(e)$, respectively. \begin{lemma}[\cite{prs_oecp_06}] \label{edgelemma} For any set $S$ of $n$ points in general position in the plane and any directed edge $e=pq$ spanned by two points of $S$, it holds that \[ \ASum[0](S;e) = \left\{\begin{array}{ll} 1 & \mbox{ if $e$ has at least one point of $S\backslash\{p,q\}$ to its left} \\ 0 & \mbox{ otherwise.} \\ \end{array}\right. \] \end{lemma} \begin{lemma}\label{lem:one_fixed_point} For any set $S$ of $n$ points in general position in the plane and any point $p \in S$ it holds that \[ \ASum[1]^p(S) = \left\{\begin{array}{ll} 0 & \mbox{ if $p$ is an extreme point of $S$} \\ 1 & \mbox{ otherwise.} \\ \end{array}\right. \] \end{lemma} \begin{proof} Obviously, if $p$ is an extreme point of $S$, then it cannot be in the interior of any polygon spanned by points of $S$ and hence $\ASum[1]^p(S) = 0$. So assume that $p$ is not an extreme point of~$S$. Note that every polygon in $S$ that contains exactly $p$ in its interior is an empty polygon in $S\setminus\{p\}$. If $p$ lies close enough to a convex hull edge $e$ of $S$ (an edge on the boundary of the convex hull of $S$), then $p$ is contained in exactly those polygons that are incident to $e$ and empty in $S\setminus\{p\}$. Hence, $\ASum[1]^p(S) = \ASum[0](S\setminus\{p\};e) = 1$ by Lemma~\ref{edgelemma}. Otherwise, if $p$ is located arbitrarily, consider a continuous path from $p$ to a position close enough to a convex hull edge of $S$, and move $p$ along this path. This path can be chosen such that it avoids all crossings in the line arrangement spanned by $S\setminus\{p\}$ and lies inside the convex hull of $S$. During this movement, $\ASum[1]^p(S)$ can only change when $p$ crosses an edge $qr$ spanned by two points of $S$. Further, changes can only be caused by changing numbers of convex $k$-gons that have $qr$ as an edge. More precisely, when $p$ is moved over~$qr$, from its left to its right side, then the alternating sum of polygons that $p$ ``stops being inside'' is $\ASum[0](S\setminus\{p\};qr) = 1$, and the alternating sum of polygons that~$p$ ``starts being inside'' is $\ASum[0](S\setminus\{p\};rq) = 1$ (note that $qr$ is not a convex hull edge of $S$). Hence, the value $\ASum[1]^p(S)$ is the same for all possible positions of $p$ on the path, including the final position for which we already showed $\ASum[1]^p(S) = 1$. \end{proof} \begin{theorem}\label{thm:one_point} Given a set $S$ of $n$ points in general position in the plane, $h$ of them extreme, it holds that $\ASum[1](S) = n-h$. \end{theorem} \begin{proof} Any convex $k$-gon counted in $\ASum[1](S)$ contains exactly one point in its interior. Hence, we can count the convex $k$-gons by their interior points and obtain that $\ASum[1](S)=\sum_{p\in S} \ASum[1]^p(S) = n-h$. \end{proof} In the previous result we used $\ASum[0](S;e)$ to obtain bounds on $\ASum[1]^p(S)$ and determine $\ASum[1](S)$. A possible approach for determining $\ASum[2](S)$ could be via $\ASum[1](S;e)$. In the following, we show why such an approach cannot work.
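Before doing so, we illustrate Theorem~\ref{thm:one_point} with a small explicit example, chosen only for concreteness. Let $S$ consist of the four vertices of a convex quadrilateral together with one point $p$ in its interior, in general position, so $n=5$ and $h=4$. For each of the two diagonals of the quadrilateral, $p$ lies in exactly one of the two triangles into which that diagonal splits the quadrilateral, so $p$ lies in the interior of exactly two of the four triangles spanned by the hull vertices; moreover, the quadrilateral itself contains $p$, and $S$ is not in convex position, so there is no convex $5$-gon. Hence $X_{3,1}(S)=2$, $X_{4,1}(S)=1$, and $X_{k,1}(S)=0$ for $k\geq 5$, which gives $\ASum[1](S) = 2-1 = 1 = n-h$, as predicted by Theorem~\ref{thm:one_point}.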
\begin{lemma}\label{one_fixed_edge_one_fixed_point} Given a set $S$ of $n$ points in general position in the plane, a point $p \in S$, and a directed edge $e=qr$ of $S\setminus\{p\}$, it holds that $\ASum[1]^p(S;e) \in \{ -1, 0, 1 \}$. \end{lemma} \begin{proof} The idea for obtaining $\ASum[1]^p(S;e)$ is to start with $\ASum[0](S\setminus \{p\};e) = 1$ and subtract from it the alternating sum of all convex polygons that stay empty when adding $p$ again to $S$. In the following, we denote the latter with $\ASum[0](S;e,-p)$. So we have \[ \ASum[1]^p(S;e) = \ASum[0](S\setminus \{p\};e) - \ASum[0](S;e,-p). \] As we are only counting polygons on the left side of $e$, we assume without loss of generality that $e$ is a convex hull edge of $S$, and $S$ lies to the left of $e$. Obviously, if $p$ is an extreme point of $S$, then it cannot be in the interior of any polygon spanned by points of $S$, and hence $\ASum[1]^p(S;e)=0$. Likewise, if the triangle $\Delta{pqr}$ contains any points of $S$ in its interior, $\ASum[1]^p(S;e)=0$. So assume that $p$ is not an extreme point of $S$ and that $\Delta{pqr}$ is interior-empty. Consider the supporting lines $\ell_q$ and $\ell_r$ of $pq$ and $pr$, respectively, and the four wedges bounded by these lines. One of the wedges contains the triangle $\Delta{pqr}$, one contains $p$ and $q$ but not $r$ (we call it the $q$-wedge), one contains $p$ and $r$ but not $q$ (we call it the $r$-wedge), and the last one contains $p$ but not $q$ and $r$ (we call it the $p$-wedge); see Figure~\ref{fig:wedges}. \begin{figure} \caption{Illustration of wedges.} \label{fig:wedges} \end{figure} Note that any convex polygon that has $e$ as an edge and a vertex in the $p$-wedge contains $p$. Hence, any polygon counted in $\ASum[0](S;e,-p)$ has all its vertices (except $q$ and $r$) in the interior of the $q$-wedge and the $r$-wedge. If both, the $q$-wedge and the $r$-wedge are empty of points (i.e., $p$ lies close enough to $e$) then $\ASum[0](S;e,-p)=0$ and $p$ is contained in exactly all polygons that are incident to $e$ and empty in $S\setminus\{p\}$. Hence, $\ASum[1]^p(S;e) = \ASum[0](S\setminus\{p\};e) = 1$ by Lemma~\ref{edgelemma}. So assume that at least one of the $q$-wedge and the $r$-wedge, without loss of generality\ the $r$-wedge, contains at least one point of $S$ in its interior. We first count all polygons for $\ASum[0](S;e,-p)$ that have all vertices except $q$ and $r$ in the interior of the $r$-wedge. The alternating sum of those polygons is $\ASum[0](S \cap H_q;e,-p)=\ASum[0](S \setminus \{p\} \cap H_q;e)$, where $H_q$ is the closed half-plane bounded by $\ell_q$ that contains $e$. By Lemma~\ref{edgelemma}, $\ASum[0](S \cap H_q;e,-p) = 1$. For counting the polygons in $\ASum[0](S;e,-p)$ that have at least one vertex in the $q$-wedge, we consider the points in the interior of the $q$-wedge in counterclockwise order around $p$ (such that $q$ is ``after'' the last point) and denote them by $p_1, \ldots p_m$. Then for every $i \in \{1, ... m\}$, consider the line $\ell_i$ through $p$ and $p_i$ and the closed half-plane $H_i$ bounded by $\ell_i$ that contains $qr$. Let $S_i=S\cap H_i$; see Figure~\ref{fig:qwedgesorting}. \begin{figure} \caption{Sorting in $q$-wedge and definition of $H_i$.} \label{fig:qwedgesorting} \end{figure} \begin{claim} \emph{ The alternating sum $\ASum[0](S_i;e,p_i,-p)$ of empty polygons lying in $H_i$ that contain the triangle $\Delta{p_iqr}$ and do not have $p$ as a vertex is $1$ if $S_i = \{p, q, r, p_i\}$, and $0$ otherwise. 
} \end{claim} We first complete the proof under the assumption that the claim holds and then prove the claim. Note that every counted polygon that has a vertex in the $q$-wedge, has a unique first such vertex in the cyclic order around $p$ for which we can count it. Hence, the total alternating sum over those polygons is the sum of $i\in\{1, \ldots, m\}$ of the alternating sums $\ASum[0](S_i;e,p_i,-p)$ with $p_i$ being the first used point in the $q$-wedge. Further, note that for any $i<m$ it holds that $p_m \in S_i$ and hence $S_i\neq \{p, q, r, p_i\}$. Thus, the alternating sum of those empty polygons with at least one vertex in the $q$-wedge is $\sum_{i=1}^m \ASum[0](S_i;e,p_i,-p) = \ASum[0](S_m;e,p_m,-p) \in \{0,1\}$. Altogether, we obtain $\ASum[0](S;e,-p) = \ASum[0](S \cap H_q;e,-p) + \ASum[0](S_m;e,p_m,-p) \in \{1,2\}$. Combining this with $\ASum[0](S\setminus \{p\};e) = 1$, we obtain that $\ASum[1]^p(S;e) = \ASum[0](S\setminus \{p\};e) - \ASum[0](S;e,-p) \in\{0,-1\}$ if at least one of the $q$-wedge and the $r$-wedge contains points of $S$. The lemma thus follows. {\it Proof of Claim.} If $S_i=\{p, q, r, p_i\}$, then the only such empty polygon in $H_i$ is the triangle $\Delta{p_iqr}$, and hence the alternating sum $\ASum[0](S_i;e,p_i,-p)$ is equal to 1. If $S_i\neq\{p, q, r, p_i\}$, then we consider the triangle $\Delta{p_iqr}$, which splits $H_i$ into three wedges; one is bounded by lines $\ell_i$ and $\ell_q$, another one by $\ell_q$ and~$\ell_r$, and the third one by $\ell_r$ and $\ell_i$. We distinguish three cases. Case 1. The triangle $\Delta{p_iqr}$ contains a point of $S_i$ in its interior. Then no polygon containing $\Delta{p_iqr}$ can be empty. Hence, the alternating sum $\ASum[0](S_i;e,p_i,-p)$ is 0. Case 2. All points of $S_i\setminus\{p_i,p,q,r\}$ lie in one of the other two wedges, without loss of generality the one bounded by $\ell_i$ and $\ell_q$. Then, each counted convex $k$-gon in $\ASum[0](S_i;e,p_i,-p)$ with $k\geq 4$ vertices corresponds to a convex $(k-1)$-gon incident to $qp_i$ and not having $r$ as a vertex. The alternating sum of those is $1$ by Lemma~\ref{edgelemma}. Inverting all signs and adding 1 for the triangle $\Delta{p_iqr}$, we obtain a total of $-1+1=0$. Case 3. Both wedges contain points of $S_i\setminus\{p_i,p,q,r\}$ in the interior (and the triangle $\Delta{p_iqr}$ is empty). We have three different types of polygons that we have to count: (i) the triangle $\Delta{p_iqr}$, (ii) the polygons having additional vertices in only one wedge, and (iii) the polygons having additional vertices in both wedges. For the latter, note that the union of the vertex sets of two empty polygons, one contained in each wedge, and containing the edge $qp_i$, respectively $p_ir$, gives a polygon that is counted in $\ASum[0](S_i;e,p_i,-p)$. Let $S_i^q$ and $S_i^r$ be the points of $S_i\setminus\{p\}$ in the wedges bounded by $\ell_i$ and $\ell_q$, and bounded by $\ell_r$ and $\ell_i$, respectively. Note that $\ASum[0](S_i^q;qp_i)= \ASum[0](S_i^r;p_ir) = 1$. Let $L=\sum_{k\geq 3, \ k \mbox{ odd}} \Xkl[k,0](S_i^q;qp_i)$ be the number of such convex $k$-gons with odd $k$. As $\ASum[0](S_i^q;qp_i)=1$, the respective number of even polygons is $L - \ASum[0](S_i^q;qp_i)= L-1$. Similarly, let $R=\sum_{k\geq 3, \ k \mbox{ odd}} \Xkl[k,0](S_i^r;p_ir)$ be the number of such convex $k$-gons in the other wedge with odd $k$. Then the number of even convex $k$-gons in that wedge is $R-1$. 
For polygons of type (iii), note that combining two polygons with the same parity in the number of vertices, we obtain a polygon with an odd number of vertices, while combining two polygons with different parities we obtain a polygon with an even number of vertices. Hence, the alternating sum for polygons of type (iii) is $(LR + (L-1)(R-1)) - (L(R-1) + (L-1)R) = (2LR -L-R+1)-(2LR-L-R)=1$. For polygons of type (ii), each polygon with an even number of vertices gives a polygon with an odd number of vertices when combined with the triangle $\Delta{p_iqr}$, and vice versa. Hence, the alternating sum for polygons of type (ii) is $((L-1)+(R-1)) - (R+L) = -2$. Finally, for type (i) we only have the triangle $\Delta{p_iqr}$, which contributes $+1$ to the alternating sum. Hence altogether we obtain an alternating sum of $+1 -2 +1 = 0$ also for the third case, which completes the proof of the claim. \end{proof} Note that $\ASum[1](S;e)$, the alternating sum of convex polygons having $e$ as an edge and exactly one interior point, highly depends on the position of the points of $S$ and cannot be expressed by the number of extreme and non-extreme points of $S$. On the other hand, $\ASum[1](S;e) = \sum_{p\in S} \ASum[1]^p(S;e)$ and hence we can use Lemma~\ref{one_fixed_edge_one_fixed_point} to derive bounds for $\ASum[1](S;e)$, analogous to Lemma~\ref{edgelemma} for~$\ASum[0](S;e)$. \begin{theorem}\label{one_fixed_edge_one_arb_point} For any set $S$ of $n$ points in general position in the plane, $h$ of them on the boundary of the convex hull, and any edge $e=qr$ of $S$ it holds that $\max\{h,4\}-n \leq \ASum[1](S;e) \leq 1$. \end{theorem} \begin{proof} For the upper bound, consider an arbitrary point $p\in S\setminus \{q,r\}$. Reconsider the proof of Lemma~\ref{one_fixed_edge_one_fixed_point} and the $q$-wedge and the $r$-wedge. Then, for $\ASum[1]^p(S;e)=1$ it is necessary that none of those wedges contains points of $S$ in the interior. As this can happen for at most one point of $S$ and as otherwise $\ASum[1]^p(S;e)\leq 0$, the upper bound follows. For the lower bound, note first that for any extreme point $p$ of $S$ we have $\ASum[1]^p(S;e)=0$. Hence, $\sum_{p\in S} \ASum[1]^p(S;e) \geq (-1)\cdot(n-h)$, which for $h\geq 4$ is the claimed lower bound. Further, if $e$ is not a convex hull edge of $S$, then the points on the non-considered side of $e$ (there is at least one) can be ignored, again implying the claimed bound. So assume that $h=3$ and $e$ is a convex hull edge. Let $p$ be a non-extreme point of $S$ that has maximum distance to~$e$. Then, the only convex polygon spanned by $S$ and having $e$ as an edge and that contains $p$ in its interior, is a triangle, since $S$ has $h=3$ extreme points. This implies that $\ASum[1]^p(S;e)\in\{0,1\}$ and hence $\sum_{p\in S} \ASum[1]^p(S;e) \geq (-1)\cdot(n-4)=4-n = \max\{4,h\}-n$. \end{proof} We remark that both bounds are tight in the sense that there exist arbitrary large point sets and edges obtaining them. For the lower bound, a quadrilateral with a concave chain of $n-4$ edges added close enough to one edge, and $e$ being the edge opposite to the chain gives an example; see Figure~\ref{fig:tight_bounds}. Actually, the latter also provides edges $f$ with $\ASum[1](S;f) = 1$; see Figure~\ref{fig:tight_bounds}. A different example for the upper bound is when the considered edge $g$ has a point $p$ sufficiently close to it. 
This guarantees that any non-empty polygon with $g$ as edge contains $p$, and hence $\ASum[1](S;g) = \ASum[0](S\setminus\{p\};g) = 1$. \begin{figure} \caption{Examples reaching the lower and upper bounds of Theorem~\ref{one_fixed_edge_one_arb_point} \label{fig:tight_bounds} \end{figure} \subsection{Inequalities} Pinchasi et al.~\cite{prs_oecp_06} derived several inequalities that involve the parameters $X_{k,0}$. In this section, we show analogous inequalities for the parameters $X_{k,1}.$ We will need the following lemma proved in~\cite{prs_oecp_06}. \begin{lemma}[\cite{prs_oecp_06}] \label{lm:m00_e_ineq} For any set $S$ of $n$ points in general position and a directed edge $e$ spanned by two points of $S$, if there is at least one point of $S$ to the left of $e$, then it holds that \begin{itemize} \item for each $t \geq 3$ odd, $$ X_{3,0}(S;e)-X_{4,0}(S;e)+X_{5,0}(S;e)-\ldots+X_{t,0}(S;e) \geq 1, $$ \item for each $t \geq 4$ even, $$ X_{3,0}(S;e)-X_{4,0}(S;e)+X_{5,0}(S;e)-\ldots-X_{t,0}(S;e) \leq 1, $$ with equality holding, in either case, if and only if $X_{t+1,0}(S;e)=0$. \end{itemize} \end{lemma} \begin{lemma}\label{lm:m01_p_ineq} For any set $S$ of $n$ points in general position and a point $p \in S$ in the interior of the convex hull of $S$, it holds that \begin{itemize} \item for each $t \geq 3$ odd, $$ X_{3,1}^p(S)-X_{4,1}^p(S)+\ldots+X_{t,1}^p(S) \geq 1, $$ \item for each $t \geq 4$ even, $$ X_{3,1}^p(S)-X_{4,1}^p(S)+\ldots-X_{t,1}^p(S) \leq 1, $$ with equality holding, in either case, if and only if $X_{t+1,1}^p(S)=0$. \end{itemize} \end{lemma} \begin{proof} First of all, by Lemma~\ref{lem:one_fixed_point}, we have that if $X_{t+1,1}^p(S)=0$, then equality holds. Let us then consider that $p$ is close enough to a convex hull edge $e$ of~$S$. We can apply the same argument as in the proof of Lemma~\ref{lem:one_fixed_point}. We get \begin{equation*} \left.\begin{aligned} &X_{3,1}^p(S)-X_{4,1}^p(S)+\ldots+(-1)^{t+1}X_{t,1}^p(S)\\ &= X_{3,0}(S \setminus \left\{p\right\}; e)-X_{4,0}(S \setminus \left\{p\right\}; e)+\ldots+(-1)^{t+1}X_{t,0}(S \setminus \left\{p\right\}; e), \end{aligned}\right. \end{equation*} and the result of Lemma~\ref{lm:m00_e_ineq} applies. Moreover, if we consider a continuous motion of $p$ which is sufficiently generic, proceeding exactly in the same manner as in the proof of Lemma~\ref{lem:one_fixed_point}, the value of $X_{3,1}^p(S)-X_{4,1}^p(S)+\ldots+(-1)^{t+1}X_{t,1}^p(S)$ does not change if $p$ is located arbitrarily inside $S$. \end{proof} \begin{theorem}\label{thm:m01_ineq} For any set $S$ of $n$ points in general position in the plane, $h$ of them on the boundary of the convex hull, we have \begin{itemize} \item for each $t \geq 3$ odd, $$ X_{3,1}(S)-X_{4,1}(S)+X_{5,1}(S)-\ldots+X_{t,1}(S) \geq n-h, $$ \item for each $t \geq 4$ even, $$ X_{3,1}(S)-X_{4,1}(S)+X_{5,1}(S)-\ldots-X_{t,1}(S) \leq n-h, $$ with equality holding, in either case, if and only if $X_{t+1,1}(S)=0$. \end{itemize} \end{theorem} \begin{proof} Observe that, by Theorem~\ref{thm:one_point}, if $X_{t+1,1}=0$ then the equality holds. Using the same argument as in Theorem \ref{thm:one_point}, it is clear that \begin{equation*} \left.\begin{aligned} &X_{3,1}(S)-X_{4,1}(S)+X_{5,1}(S)-\ldots+(-1)^{t+1}X_{t,1}(S)\\ & = \sum_{p \in S} \left(X_{3,1}^p(S)-X_{4,1}^p(S)+X_{5,1}^p(S)-\ldots+(-1)^{t+1}X_{t,1}^p(S)\right). \end{aligned}\right. \end{equation*} Let $t$ be odd. By Lemma \ref{lm:m01_p_ineq}, $X_{3,1}^p(S)-X_{4,1}^p(S)+\ldots+X_{t,1}^p(S) \geq 1$ if $p$ is an interior point of $S$. 
Therefore, \[ X_{3,1}(S)-X_{4,1}(S)+X_{5,1}(S)-\ldots+X_{t,1}(S) = \sum_{p \in S} \left(X_{3,1}^p(S)-X_{4,1}^p(S)+\ldots+X_{t,1}^p(S)\right) \geq n - h. \] If equality holds, then $X_{3,1}^p(S)-X_{4,1}^p(S)+X_{5,1}^p(S)-\ldots+X_{t,1}^p(S)=1$ if $p$ is an interior point of~$S$ and, by Lemma \ref{lm:m01_p_ineq}, $X_{t+1,1}^p(S)=0$ for these points. Therefore, $X_{t+1,1}^p(S)=0$ for all the points and, in consequence, $X_{t+1,1}(S)=0$. If $t$ is even, the proof is analogous, with the only difference that the inequality is $X_{3,1}^p(S)-X_{4,1}^p(S)+X_{5,1}^p(S)-\ldots+(-1)^{t+1}X_{t,1}^p(S) \leq 1$ if $p$ is an interior point of $S$. Therefore, the proof proceeds in the same manner but the direction of the inequalities is reversed. \end{proof} From Theorem \ref{thm:m01_ineq} with $t=4$ we obtain the following corollary. \begin{corollary} $X_{4,1} \geq X_{3,1} -n + h$. \end{corollary} \section{Weighted sums}\label{sec:weighted_sums} In this section, we consider sums of the form $\ensuremath{F}(S) = \sum_{k\geq 3} \sum_{\ell\geq 0} \fkl{k}{\ell} \Xkl(S)$. The following theorem is the main tool used throughout this section and the next one. \begin{theorem}\label{thm:fkl} For any function $\fkl{k}{\ell}$ that fulfills the equation \begin{equation} \label{equ:fkl} \fkl{k}{\ell} = \fkl{k+1}{\ell-1} + \fkl{k}{\ell-1}, \end{equation} the sum $\ensuremath{F}(S) := \sum_{k\geq 3} \sum_{\ell\geq 0} \fkl{k}{\ell} \Xkl(S)$ is invariant over all sets $S$ of $n$ points in general position, that is, $\ensuremath{F}(S)$ only depends on the cardinality of $S$. \end{theorem} \begin{proof} Consider a point set $S$ in general position. We claim that any continuous motion of the points of $S$ which is sufficiently generic does not change the value of $F(S)$. Suppose that three points $p,q,r \in S$ become collinear, with $r$ lying between $p$ and $q$; thus the only convex polygons spanned by $S$ that may change are those that have $pq$ as an edge and $r$ in their interior or those that have $p,q,$ and $r$ as vertices. Let $Q$ be a convex $k$-gon with $\ell$ interior points that contains $pq$ as an edge and $r$ in its interior. If $r$ moves outside of $Q$, then $Q$ has $\ell-1$ points in its interior and the $(k+1)$-gon~$Q'$, obtained by replacing the edge $pq$ of $Q$ by the polygonal path $prq$, starts being convex with $\ell-1$ points in its interior. Hence, in this movement we can assign each polygon counted in $X_{k,\ell}$ (which disappears) to one polygon counted in $X_{k,\ell-1}$ and to one counted in $X_{k+1,\ell-1}$ (which appear). Since $f(k,\ell)=f(k+1,\ell-1)+f(k,\ell-1)$, the movement of the point does not change the value of $F(S)$. Symmetrically, if $r$ moves inside $Q$ (with $\ell$ points in its interior), then $Q$ has $\ell+1$ points in its interior and the $(k+1)$-gon $Q'$, which also has $\ell$ points in its interior, stops being convex. Again, this does not change the value of $F(S)$. \end{proof} \begin{observation}\label{obs:lincomb} Let $f_1(k, \ell)$ and $f_2(k, \ell)$ be two functions that fulfill Equation~(\ref{equ:fkl}). Then every linear combination of $f_1(k,\ell)$ and $f_2(k,\ell)$ fulfills Equation~(\ref{equ:fkl}) as well. \end{observation} Using Theorem~\ref{thm:fkl}, we first derive several relations for the sum over all convex polygons, weighted by their number of interior points. Each of these relations is proved by showing that the function $f(k,\ell)$ satisfies~Equation~(\ref{equ:fkl}) and by evaluating $F(S)$ for a set of $n$ points in convex position.
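As a small worked instance of Theorem~\ref{thm:fkl} (included only for illustration), take $f(k,\ell)=2^{\ell}$, which satisfies $2^{\ell}=2^{\ell-1}+2^{\ell-1}$, and compare the two combinatorially different sets of four points in general position. For four points in convex position we have $X_{3,0}=4$ and $X_{4,0}=1$, hence $F(S)=4+1=5$; for a triangle with one interior point we have $X_{3,0}=3$ and $X_{3,1}=1$, hence $F(S)=3\cdot 2^{0}+1\cdot 2^{1}=5$ as well.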
Note that for point sets in convex position $X_{k,\ell}=0$ for $\ell\geq 1$. \begin{corollary}\label{cor:is} For any set $S$ of $n\geq 3$ points in general position, it holds that \[ \sum_{k=3}^{n} \sum_{\ell=0}^{n-3} 2^\ell \Xkl(S) = 2^n - \frac{n^2}{2} - \frac{n}{2} - 1. \] \end{corollary} \begin{proof} Let $\fkl{k}{\ell} = 2^\ell$. Then $2^\ell = 2^{\ell-1} + 2^{\ell-1} = \fkl{k+1}{\ell-1} + \fkl{k}{\ell-1}$. For a set $S$ of $n$ points in convex position we get $\sum_{k=3}^n 2^0 \Xkl[k,0] = \sum_{k=3}^n \binom{n}{k} = 2^n - \binom{n}{2} - \binom{n}{1} - \binom{n}{0} = 2^n - \frac{n^2}{2} - \frac{n}{2} - 1$. \end{proof} \begin{corollary}\label{cor:bms} For any set $S$ of $n$ points in general position, and every integer $3 \leq m \leq n$ it holds that \[ \sum_{k=3}^{m} \sum_{\ell=m-k}^{n-k} \binom{\ell}{m-k} \Xkl(S) = \binom{n}{m}. \] \end{corollary} \begin{proof} Let $\fkl{k}{\ell} = \binom{\ell}{m-k}$ and note that $m$ is fixed. Then \[ \fkl{k}{\ell} = \binom{\ell}{m-k} = \binom{\ell-1}{m-k-1} + \binom{\ell-1}{m-k} = \fkl{k+1}{\ell-1} + \fkl{k}{\ell-1}.\] For a set $S$ of $n$ points in convex position we get $ \sum_{k=3}^{m} \binom{0}{m-k} \Xkl[k,0](S) = \Xkl[m,0](S) = \binom{n}{m}$. \end{proof} \noindent Let $\{\Fib(n)\}_{n \in \mathbb{Z}}$ be the sequence of Fibonacci numbers, satisfying the recurrence relation \mbox{$\Fib(n)=\Fib(n-1)+\Fib(n-2)$}, with $\Fib(0)=0$ and $\Fib(1)=1$. \begin{corollary}\label{cor:fib1} For any set $S$ of $n\geq 3$ points in general position it holds that \[\sum_{k=3}^{n} \sum_{\ell=0}^{n-3} \Fib(k+2\ell) X_{k,\ell}(S) =\Fib(2n)-n-\binom{n}{2}.\] \end{corollary} \begin{proof} The function $f(k,\ell)=\Fib(k+2\ell)$ satisfies~Equation~(\ref{equ:fkl}) by definition of the Fibonacci numbers. Consider then a set $S$ of $n$ points in convex position and use the following identity; see~\cite{spivey}. \[\sum_{k=0}^{n}\Fib(k)\binom{n}{k}=\Fib(2n).\] Then, \[\sum_{k=3}^{n} \Fib(k+0) X_{k,0}(S) = \sum_{k=3}^{n} \Fib(k) \binom{n}{k} = \Fib(2n)-n-\binom{n}{2}.\] \end{proof} \begin{corollary}\label{cor:fib2} For any set $S$ of $n\geq 3$ points in general position it holds that \[\sum_{k=3}^{n} \sum_{\ell=0}^{n-3} (-1)^{k+\ell}\Fib(k-\ell) X_{k,\ell}(S) =-\Fib(n)+n-\binom{n}{2}.\] \end{corollary} \begin{proof} The function $f(k,\ell)=(-1)^{k+\ell}\Fib(k-\ell)$ satisfies~Equation~(\ref{equ:fkl}) by definition of the Fibonacci numbers. Consider then a set $S$ of $n$ points in convex position and use the following identity; see~\cite{spivey}. \[\sum_{k=0}^{n}(-1)^k \Fib(k)\binom{n}{k}=-\Fib(n).\] Then, \[\sum_{k=3}^{n} (-1)^{k+0} \Fib(k-0) X_{k,0}(S) = \sum_{k=3}^{n} (-1)^{k} \Fib(k) \binom{n}{k} = -\Fib(n)+n-\binom{n}{2}.\] \end{proof} \begin{corollary}\label{cor:cheby} Any set $S$ of $n\geq 3$ points in general position satisfies the following equations. \begin{equation}\label{equ:ChebyT} \sum_{k=3}^{n} \sum_{\ell=0}^{n-3} 2\cos\left(\frac{(2k+\ell)\pi}{3}\right) X_{k,\ell}(S) = \binom{n}{2}+n-2+ 2\cos\left(\frac{n \pi}{3}\right) \end{equation} \begin{equation}\label{equ:ChebyU} \sum_{k=3}^{n} \sum_{\ell=0}^{n-3} \frac{2}{\sqrt{3}}\sin\left(\frac{(2k+\ell)\pi}{3} \right) X_{k,\ell}(S) = \binom{n}{2}-n+ \frac{2}{\sqrt{3}}\sin\left(\frac{n \pi}{3}\right) \end{equation} \end{corollary} \begin{proof} Equations~(\ref{equ:ChebyT}) and~(\ref{equ:ChebyU}) are obtained using Chebyshev polynomials, see e.g.~\cite{mason}. 
The Chebyshev polynomials of the first kind, $T_m(x):=\cos(m \theta)$ with $x=\cos(\theta)$, satisfy the recurrence relation $T_m(x)=2x T_{m-1}(x) - T_{m-2}(x).$ For $x=\frac{1}{2}$, this gives the relation \[\cos\left( \frac{(m-1)\pi}{3}\right)= \cos\left(\frac{m \pi}{3}\right) + \cos\left( \frac{(m-2) \pi}{3}\right).\] Now set $m=2k+\ell+1$ and observe that the function $f_1(k,\ell):= 2\cos\left(\frac{(2k+\ell)\pi}{3}\right)$ satisfies~Equation~(\ref{equ:fkl}). Consider then a set $S$ of $n$ points in convex position. We use a binomial identity, which can be found in~\cite{binomial}, Equation (1.26), \[ \sum_{k=0}^{n} \binom{n}{k}\cos(k y) = 2^n \cos\left(\frac{n y}{2}\right)\left(\cos\left(\frac{y}{2}\right)\right)^n.\] With $y=\frac{2\pi}{3}$ and hence $\cos(\frac{y}{2})= \frac{1}{2}$ we obtain Equation~(\ref{equ:ChebyT}): \begin{eqnarray*} \sum_{k=3}^{n} f_1(k,0) X_{k,0}(S) & = & \sum_{k=0}^{n} 2\cos\left(\frac{2k \pi}{3}\right)\binom{n}{k} - \sum_{k=0}^{2} 2\cos\left(\frac{2k\pi}{3}\right) \binom{n}{k}\\ & = & 2\cos\left(\frac{n \pi}{3}\right) + \binom{n}{2}+n-2. \end{eqnarray*} To prove Equation~(\ref{equ:ChebyU}), consider the Chebyshev polynomial of the second kind, \[U_m(x):=\frac{\sin((m+1)\theta)}{\sin(\theta)},\] with $x=\cos(\theta)$. $U_m(x)$ satisfies $U_m(x)=2x U_{m-1}(x) - U_{m-2}(x)$. For $x=\frac{1}{2}$, this gives the relation \[\sin\left( \frac{m \pi}{3}\right)= \sin\left( \frac{(m+1) \pi}{3}\right) + \sin\left(\frac{(m-1) \pi}{3}\right).\] Now set $m=2k+\ell$ and observe that the function $f_2(k,\ell):= \frac{2}{\sqrt{3}}\sin\left(\frac{(2k+\ell)\pi}{3}\right)$ satisfies~Equation~(\ref{equ:fkl}). Consider then a set $S$ of $n$ points in convex position. We use a binomial identity, which can be found in~\cite{binomial}, Equation (1.27). \[ \sum_{k=0}^{n} \binom{n}{k}\sin(k y) = 2^n \sin\left(\frac{n y}{2}\right)\left(\cos\left(\frac{y}{2}\right)\right)^n.\] This identity with $y=\frac{2\pi}{3}$ completes the proof of Equation~(\ref{equ:ChebyU}): \begin{eqnarray*} \sum_{k=3}^{n} f_2(k,0) X_{k,0}(S) & = & \sum_{k=0}^{n} \frac{2}{\sqrt{3}} \sin\left(\frac{2k \pi}{3}\right) \binom{n}{k} - \sum_{k=0}^{2} \frac{2}{\sqrt{3}} \sin\left(\frac{2k \pi}{3}\right) \binom{n}{k} \\ & = & \frac{2}{\sqrt{3}}\sin\left(\frac{n \pi}{3}\right) + \binom{n}{2}-n. \end{eqnarray*}\end{proof} Next we show that the functions $f(k,\ell)$ can be expressed as a linear combination of the functions $f(k+i,0)$, for $i=0,\ldots,\ell.$ \begin{theorem}\label{thm:general} The general solution of the recurrence relation $\fkl{k}{\ell} = \fkl{k+1}{\ell-1} + \fkl{k}{\ell-1}$ is given by the equation \begin{equation} f(k,\ell) = \sum_{i=0}^{\ell} \binom{\ell}{i} \ f(k+i,0). \label{eq:rr_general} \end{equation} \end{theorem} \begin{proof} We prove the statement by induction on $\ell$. It is straightforward to verify the base cases $\ell=0$ and $\ell=1$. For the inductive step, let $\lambda \geq 0$ be given and suppose (\ref{eq:rr_general}) holds for $\ell = \lambda$. Then, \begin{eqnarray*} f(k,\lambda+1) &=& f(k+1,\lambda)+f(k,\lambda) = \sum_{i=0}^{\lambda} \binom{\lambda}{i} f(k+1+i,0)+\sum_{i=0}^{\lambda} \binom{\lambda}{i} f(k+i,0) = \nonumber\\ &=& \sum_{i=1}^{\lambda+1} \binom{\lambda}{i-1} f(k+i,0)+\sum_{i=0}^{\lambda} \binom{\lambda}{i} f(k+i,0) = \nonumber \\ & =& f(k,0)+\sum_{i=1}^{\lambda}\left[\binom{\lambda}{i}+ \binom{\lambda}{i-1} \right] f(k+i,0)+f(k+i,0) = \nonumber\\ &=& \sum_{i=0}^{\lambda+1} \binom{\lambda+1}{i} \ f(k+i,0). 
\end{eqnarray*} \end{proof} Note that there may be many functions $f(k,\ell)$ that fulfill Equation (\ref{equ:fkl}), thus many different sums $F(S)$ that only depend on $n$. An interesting point of view is to analyze how many of them are independent. \begin{proposition}\label{prop:indy} The maximum number of linearly independent equations of the form $F_j(S) =\sum_{k \geq 3} \sum_{\ell \geq 0} f_j(k,\ell) X_{k,\ell}$, where each $f_j(k,\ell)$ satisfies Equation (\ref{eq:rr_general}), in terms of the variables $X_{k,\ell}$ is $n-2$. \end{proposition} \begin{proof} Let $F_j(S) = \sum_{k=3}^{n} \sum_{\ell = 0}^{n-k} f_j(k,\ell) X_{k,\ell}$, for $1 \leq j \leq n-1$, be sums that only depend on~$n$, where $f_j(k,\ell)$ satisfies (\ref{eq:rr_general}) for all $j$. Let us consider the matrix \begin{equation*}M= \left( \begin{array}{ccccccc} f_1(3,0) & f_1(3,1) & \cdots & f_1(k,\ell) & \cdots & f_1(n-1,1) & f_1(n,0)\\ f_2(3,0) & f_2(3,1) & \cdots & f_2(k,\ell) & \cdots & f_2(n-1,1) & f_2(n,0)\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ f_j(3,0) & f_j(3,1) & \cdots & f_j(k,\ell) & \cdots & f_j(n-1,1) & f_j(n,0)\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ f_{n-1}(3,0) & f_{n-1}(3,1) & \cdots & f_{n-1}(k,\ell) & \cdots & f_{n-1}(n-1,1) & f_{n-1}(n,0) \end{array} \right), \label{eq:matrix1} \end{equation*} whose entries are the $f_j(k,\ell)$ for $3 \leq k \leq n$, $0 \leq \ell \leq n-k,$ and $1 \leq j \leq n-1.$ The function $f_j$ occupies the entries of row number $j$ of $M$. Let $C[h]$ be column number $h$ of $M$. The first column $C[1]$ contains the functions $f_j(k,\ell)$ for values $k=3$ and $\ell=0$, then the following columns are for the values $k=3$ and $\ell \geq 1$ in ascending order, then there are columns containing the functions with values $k=4$ and $\ell \geq 0$ in ascending order, and so on. This order implies that, for given values $k$ and $\ell$, matrix entries $f_j(k,\ell)$ are in column $C[h]$ of $M$, where $h=\sum_{i=3}^{k-1}(n-i+1)+\ell+1$.\\ It is sufficient to show that the matrix $M$ has rank $n-2$. Since each $f_j(k,\ell)$ satisfies (\ref{eq:rr_general}), we can express matrix $M$ as follows. \begin{equation*} M= \left( \begin{array}{ccccc} f_1(3,0) & \cdots & \sum_{i=0}^{\ell} \binom{\ell}{i} \ f_1(k+i,0) & \cdots & f_1(n,0)\\ f_2(3,0) & \cdots & \sum_{i=0}^{\ell} \binom{\ell}{i} \ f_2(k+i,0) & \cdots & f_2(n,0)\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ f_j(3,0) & \cdots & \sum_{i=0}^{\ell} \binom{\ell}{i} \ f_j(k+i,0) & \cdots & f_j(n,0)\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ f_{n-1}(3,0) & \cdots & \sum_{i=0}^{\ell} \binom{\ell}{i} \ f_{n-1}(k+i,0) & \cdots & f_{n-1}(n,0) \end{array} \right). \label{eq:mat1} \end{equation*} We observe that the columns corresponding to the functions with $\ell>0$ depend on the $n-2$ columns with $\ell=0$. With adequate operations these columns can be transformed to columns of zeros. 
That is, if we change each column $C[h]$ corresponding to the functions with $\ell>0$ in the following manner: \begin{equation*} C\left[{\sum_{s=3}^{k-1}(n-s+1)+\ell+1}\right] \ \rightarrow \ C\left[{\sum_{s=3}^{k-1}(n-s+1)+\ell+1}\right] - \sum_{i=0}^{\ell} \binom{\ell}{i} C\left[{\sum_{s=3}^{k+i-1}(n-s+1)+1}\right], \end{equation*} we obtain the matrix \begin{equation*} \left( \begin{array}{ccccccc} f_1(3,0) & 0 & \cdots & f_1(k,0) & \cdots & 0 & f_1(n,0)\\ f_2(3,0) & 0 & \cdots & f_2(k,0) & \cdots & 0 & f_2(n,0)\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ f_j(3,0) & 0 & \cdots & f_j(k,0) & \cdots & 0 & f_j(n,0)\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ f_{n-1}(3,0) & 0 & \cdots & f_{n-1}(k,0) & \cdots & 0 & f_{n-1}(n,0) \end{array} \right). \label{eq:mat} \end{equation*} Thus, there are only $n-2$ non-zero columns corresponding to the functions $f_j(k,0)$, implying that the maximum possible rank of $M$ is $n-2$. \end{proof} Using the general solution of Theorem \ref{thm:general}, we derive further relations for sums over all convex polygons. \begin{corollary}\label{cor:ps} For any point set $S$ of $n \geq 3$ points in general position and for any $x \in \ensuremath{\mathbb R}$, it holds that \begin{equation}\label{eq:ps} P_x(S) := \sum_{k=3}^n \sum_{\ell=0}^{n-k} x^k \left(1+x\right)^\ell X_{k,\ell} = \left(1+x\right)^n -1 - x\cdot n - x^2 \binom{n}{2}. \end{equation} \end{corollary} \begin{proof} Define $f(k+i,0)=x^{k+i}$, then $$ f(k,\ell)=\sum_{i=0}^{\ell} \binom{\ell}{i} f(k+i,0) = x^k \sum_{i=0}^{\ell} \binom{\ell}{i} x^i = x^k \cdot \left(1+x\right)^\ell. $$ The result then follows by considering a set of $n$ points in convex position and we have $$ \sum_{k=3}^n x^k \binom{n}{k} = (1+x)^n -1 - x \cdot n - x^2 \cdot \binom{n}{2}. $$\end{proof} \begin{observation} Corollary \ref{cor:is} is the particular case $x=1$ of Corollary \ref{cor:ps}. \end{observation} As shown in Proposition \ref{prop:indy}, the maximum number of possible linearly independent equations is $n-2$. From Corollary \ref{cor:ps}, we can obtain multiple equations of this form. Therefore, we evaluate the independence of these equations. \begin{proposition}\label{prop:indy2} For any point set $S$ of $n \geq 3$ points in general position, let $x_j \in \ensuremath{\mathbb R}_{\neq 0}$, with $1 \leq j \leq n-2$, be distinct values, and consider the $n-2$ equations $P_j(S) = \sum_{k=3}^n \sum_{\ell=0}^{n-k} x_j^k \left(1+x_j\right)^{\ell} X_{k,\ell}$. Then these equations are linearly independent. \end{proposition} \begin{proof} Using the argument of Proposition \ref{prop:indy}, it is sufficient to analyze the columns corresponding to the variables $X_{k,0}$. Thus, we consider the matrix \begin{equation*} \left( \begin{array}{ccccccc} x_1^3 & x_1^4 & \cdots & x_1^k & \cdots & x_1^n\\ x_2^3 & x_2^4 & \cdots & x_2^k & \cdots & x_2^n\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ x_j^3 & x_j^4 & \cdots & x_j^k & \cdots & x_j^n\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ x_{n-2}^3 & x_{n-2}^4 & \cdots & x_{n-2}^k & \cdots & x_{n-2}^n\\ \end{array} \right). 
\label{eq:vand3} \end{equation*} Now, we can divide each row $j$ by $x_j^3$ and we obtain \begin{equation*} \left( \begin{array}{ccccccc} 1 & x_1 & \cdots & x_1^{k-3} & \cdots & x_1^{n-3}\\ 1 & x_2 & \cdots & x_2^{k-3} & \cdots & x_2^{n-3}\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & x_j & \cdots & x_j^{k-3} & \cdots & x_j^{n-3}\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & x_{n-2} & \cdots & x_{n-2}^{k-3} & \cdots & x_{n-2}^{n-3}\\ \end{array} \right). \label{eq:vand} \end{equation*} This matrix is an $(n-2) \times (n-2)$ Vandermonde matrix and all $x_i$ are distinct. Therefore the rank of the matrix is $n-2$, implying that the $n-2$ equations are linearly independent. \end{proof} The following result is an immediate consequence of Proposition~\ref{prop:indy}, Corollary~\ref{cor:ps}, and Proposition~\ref{prop:indy2}. \begin{corollary}\label{cor:last} Let $S$ be a set of $n$ points in general position. Then any sum of the form $\sum_{k \geq 3} \sum_{\ell \geq 0} f(k,\ell) X_{k,\ell}$, where the function $f(k,\ell)$ fulfills Equation (\ref{equ:fkl}), can be expressed by $n-2$ sums of the form (\ref{eq:ps}) with distinct values $x \in \ensuremath{\mathbb R}$. \end{corollary} \section{Moment sums}\label{sec:moment_sums} Recall that Theorem~\ref{thm:one_point} states that for any set $S$ of $n$ points in general position, the alternating sum of the numbers of polygons with one interior point is $\ASum[1](S) = n-h$. Combining this with the first moment of the numbers of empty polygons $M_1(S) = 2\binom{n}{2}-h$ from~\cite{prs_oecp_06}, we observe that the difference $M_1(S) - \ASum[1](S) = 2\binom{n}{2} - n$ is again a function that only depends on the cardinality $n$ of~$S$ and hence is independent of the combinatorics of the underlying point set~$S$. In this section, we show that the latter observation can actually be extended to moment sums for convex polygons with at most two interior points. \begin{theorem}\label{thm:moment_sums} For any set $S$ of $n$ points in general position and for integers $0 \leq r\leq 2$, it holds that \[ \FrSum(S) := \sum_{ k\geq 3} \sum_{\ell=0}^r (-1)^{k-\ell+1} \mrk[r-\ell]{k-\ell} \Xkl(S) = \left\{\begin{array}{ll} \binom{n}{2}-n+1 & \mbox{for}\ r=0 \\ 2\binom{n}{2}-n & \mbox{for}\ r=1 \\ -\binom{n}{2}+n & \mbox{for}\ r= 2 \\ \end{array}\right. \] \end{theorem} \begin{proof} For $r \in \{0,1\}$, the statement follows from the results of~\cite{prs_oecp_06} and Theorem~\ref{thm:one_point}. For the case $r=2$, we have $$F_2(S)= \sum_{k \geq 3} \sum_{\ell=0}^{2} (-1)^{k-\ell+1} m_{2-\ell}(k-\ell)\Xkl(S)$$ $$= \sum_{k \geq 3} (-1)^{k+1}\frac{k(k-3)}{2}X_{k,0}+(-1)^k(k-1)X_{k,1}+(-1)^{k+1}X_{k,2}. $$ Let \[ f(k,\ell) := \left\{\begin{array}{ll} (-1)^{k-\ell+1} m_{2-\ell}(k-\ell) & \mbox{for}\ 0 \leq \ell \leq 2 \\ 0 & \mbox{for}\ 2 < \ell \\ \end{array}\right. \] Using $f(k,\ell)$, we can express the sum as $F_2(S) = \sum_{k\geq 3} \sum_{\ell\geq 0} f(k,\ell) \Xkl(S)$. We first show that $F_2(S)$ only depends on the cardinality $n$ of $S$ by proving that the function $f(k,\ell)$ fulfills Equation~(\ref{equ:fkl}) from Theorem~\ref{thm:fkl}, namely, for all $\ell>0$, $f(k+1,\ell-1) + f(k,\ell-1) = f(k,\ell)$. For $\ell > 3$, Equation~(\ref{equ:fkl}) is trivially true as all terms in the equation are equal to zero. 
For $\ell=3$, we have $f(k,\ell) = 0$ and \begin{eqnarray*} f(k+1,\ell-1) + f(k,\ell-1) & = & (-1)^{(k+1) - (\ell-1) + 1} m_{2-(\ell-1)}((k+1)-(\ell-1)) \\ && + (-1)^{k-(\ell-1)+1} m_{2-(\ell-1)}(k-(\ell-1)) \\ & = & (-1)^{k-\ell+3} m_{0}(k-\ell+2) + (-1)^{k-\ell+2} m_{0}(k-\ell+1) \ = \ 0. \\ \end{eqnarray*} For $0 < \ell \leq 2$, Equation~(\ref{equ:fkl}) for $f(k,\ell)$ can be stated in terms of the function $\mrk{k}$, which gives \begin{eqnarray} \label{eqn:mrkl} m_{2-\ell+1}(k-\ell+2) - m_{2-\ell+1}(k-\ell+1) & = & m_{2-\ell}(k-\ell). \end{eqnarray} We consider the cases $\ell=2$, $\ell=1$, and $\ell=0$ separately. \begin{itemize} \item Let $\ell=2$. The truth of Equation~(\ref{eqn:mrkl}) follows directly from $\mrk[1]{k}=k$ for $k \geq 2$ and $\mrk[0]{k}=1$. \item Let $\ell=1$. If $k=3$, then $m_2(k)=0$ and $m_2(k+1)=m_1(k-1)=2$; the statement holds. If $k\geq 4$, then $m_{2}(k+1)-m_{2}(k)=\frac{(k+1)}{2}(k-2)-\frac{k}{2}(k-3)=k-1=m_1(k-1)$; the statement holds. \item Let $\ell=0$. If $k=3$ then $m_3(k+2)=m_3(k+1)=m_2(k)=0$; the statement holds. If $k=4$, then $m_3(k+1)=0$ and $m_3(k+2)=m_2(k-1)=2$; the statement holds. If $k\geq 5$, then $m_{3}(k+2)-m_{3}(k+1)=\frac{k+2}{3}\binom{k-2}{2}-\frac{k+1}{3}\binom{k-3}{2}=\frac{k}{2}(k-3)=m_2(k)$; the statement holds. \end{itemize} This finishes the proof that $F_2(S)$ only depends on the cardinality $|S|=n$. So what remains to show is that $F_2(S) = -\binom{n}{2}+n$. To this end, let $S$ be a set of~$n$ points in convex position. Then, $X_{k,0}(S)=\binom{n}{k}$ and $X_{k,\ell}(S)=0$ for $\ell>0$. We get $$F_2(S)= \sum_{k \geq 3} \sum_{\ell=0}^{2} (-1)^{k-\ell+1}m_{2-\ell}(k-\ell)\Xkl(S)= \sum_{k=3}^{n} (-1)^{k+1}\frac{k(k-3)}{2}\binom{n}{k}.$$ Since $\sum_{k=0}^{n}(-1)^{k}k\binom{n}{k}=0$ and $\sum_{k=0}^{n}(-1)^{k}k^2\binom{n}{k}=0$, see~\cite{spivey}, we have $$F_2(S)=0-\sum_{k=0}^{2} (-1)^{k+1}\frac{k(k-3)}{2}\binom{n}{k}=-\binom{n}{2}+n.$$ \end{proof} \section{Higher dimensions}\label{sec:higher_dimensions} In this section, we consider the generalization of the previous results to $d$-dimensional Euclidean space $\ensuremath{\mathbb R}^d$, for $d\geq 3$. To this end, let $S$ be a set of $n\geq d+1$ points in $\ensuremath{\mathbb R}^d$ in general position, that is, no hyperplane contains more than $d$ points of~$S$. We again denote by $h$ the number of extreme points of $S$, that is, the points of~$S$ that lie on the boundary of the convex hull of~$S$. For each $k\geq d+1$, let $\Xk(S)$ be the number of empty convex $k$-vertex polytopes spanned by~$S$, and let $\Xkl(S)$ be the number of convex $k$-vertex polytopes of $S$ that have exactly $\ell$ points of~$S$ in their interior. In~\cite{prs_oecp_06}, the authors extend their results on alternating moments of point sets in the Euclidean plane to point sets $S$ in $\ensuremath{\mathbb R}^d$. They show that \begin{eqnarray*} M_0(S) \ := & \mathlarger{\sum}\limits_{k\geq d+1} (-1)^{k+d+1} \Xk(S) & = \ \sum\limits_{k=0}^d (-1)^{d-k} \binom{n}{k} \hspace{1.6cm} \mbox{(\cite{prs_oecp_06}, Theorem 4.1.)} \\ M_1(S) \ := & \mathlarger{\sum}\limits_{k\geq d+1} (-1)^{k+d+1} k \Xk(S) & = \ \sum\limits_{k=0}^d (-1)^{d-k} k\binom{n}{k} \ + \ i \hspace{0.5cm} \mbox{(\cite{prs_oecp_06}, Theorem 4.2.)} \end{eqnarray*} where $M_r(S)$ is again called the \emph{$r$-th alternating moment} of $\{\Xk(S)\}_{k\geq d+1}$ and $i=n-h$ is the number of interior points of $S$. The proofs of~\cite{prs_oecp_06} make use of a continuous motion argument of points in $\mathbb{R}^d$.
The same argument is applied in the following, so we refer the reader also to~\cite{prs_oecp_06}. \subsection{Alternating sums}\label{sec:alt_sums_rd} For an oriented facet $f$, let $\Xk(S;f)$ be the number of empty convex $k$-vertex polytopes that have $f$ as a boundary facet and lie on the positive side of $f$. In the proof of Theorem 4.2 in~\cite{prs_oecp_06}, the authors generalize Lemma~\ref{lem:one_fixed_point} to $\ensuremath{\mathbb R}^d$. The following lemma is implicit in~\cite{prs_oecp_06}. \begin{lemma}[\cite{prs_oecp_06}]\label{facetlemma} For any set $S$ of $n \geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$ and any oriented facet $f$ spanned by $d$ points of $S$, it holds that \[ \ASum[0](S;f) := \sum_{k\geq d+1} (-1)^{k+d+1} \Xk(S;f) \ = \ \left\{\begin{array}{ll} 1 & \mbox{ if $f$ has at least one point of $S$ } \\ & \mbox{ on the positive side} \\ 0 & \mbox{ otherwise.} \\ \end{array}\right. \] \end{lemma} In accordance to the planar case, we denote by $\ASum[1]^p(S)$ the alternating sum of convex polytopes that contain exactly $p$ in their interior. Using Lemma~\ref{facetlemma}, Lemma~\ref{lem:one_fixed_point} can be generalized to the following. \begin{lemma}\label{lem:one_fixed_point_rd} For any set $S$ of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$ and any point $p \in S$ it holds that \[ \ASum[1]^p(S) = \left\{\begin{array}{ll} 0 & \mbox{ if $p$ is an extreme point of $S$} \\ 1 & \mbox{ otherwise.} \\ \end{array}\right. \] \end{lemma} \begin{proof} If $p$ is an extreme point of $S$, then it cannot be in the interior of any convex polytope spanned by points of $S$ and hence $\ASum[1]^p(S) = 0$. So assume that $p$ is not an extreme point of $S$. If $p$ lies close enough to a convex hull facet $f$ of $S$, then $p$ is contained in exactly all polytopes that are incident to~$f$ and empty in $S\setminus\{p\}$. Hence, $\ASum[1]^p(S) = \ASum[0](S\setminus\{p\};f) = 1$ by Lemma~\ref{facetlemma}. Otherwise, if $p$ is located arbitrarily, consider a continuous path from $p$ to a position close enough to a convex hull facet of $S$ and move $p$ along this path. The path can be chosen such that it avoids all lower dimensional elements in the hyperplane arrangement spanned by $S\setminus\{p\}$ and lies inside the convex hull of~$S$. During this movement, $\ASum[1]^p(S)$ can only change when $p$ crosses a facet~$f$ spanned by $d$ points of $S$. Further, changes can only occur from changing amounts of polytopes that have $f$ as a facet. Similar to $\Xk(S;f)$, let $\Xk(S;f^-)$ be the number of empty convex $k$-vertex polytopes that have $f$ as a boundary facet and lie on the negative side of $f$. When $p$ is moved through $f$ (from its positive to its negative side), then the alternating sum of polytopes that $p$ ``stops being inside'' is $\ASum[0](S\setminus\{p\};f) = 1$, and the alternating sum of polytopes that $p$ ``starts being inside'' is $\ASum[0](S\setminus\{p\};f^-) = 1$ (note that $f$ is not a convex hull facet of~$S$). Hence, the value $\ASum[1]^p(S)$ is the same for all possible positions of $p$ on the path, including the final position for which we already showed $\ASum[1]^p(S) = 1$. \end{proof} Likewise, Theorem~\ref{thm:one_point} generalizes to $\ensuremath{\mathbb R}^d$, with an analogous proof as in Section~\ref{sec:alt_sums}. \begin{theorem}\label{thm:one_point_rd} Given a set $S$ of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$, $h$ of them extreme, it holds that $\ASum[1](S) = n-h$. 
\end{theorem} However, the results for $\ASum[1]^p(S;e)$ and $\ASum[1](S;e)$ do not carry over, as the cyclic ordering of the remaining points around $p$ in the proof of Lemma~\ref{one_fixed_edge_one_fixed_point} does not have a direct higher dimensional counterpart. \subsection{Weighted sums}\label{subsec:weighted_sums_rd} We next consider higher dimensional versions of the results from Section~\ref{sec:weighted_sums}. In the following, we denote by $\Xkl(S;f)$ the number of convex $k$-vertex polytopes with $\ell$ interior points, with vertices and interior points a subset of $S$, that have $f$ as a boundary facet and lie to the positive side of $f$. We denote by $\Xkl(S;p)$ the number of convex $k$-vertex polytopes with $\ell$ interior points that have $p$ on their boundary. The notation is again generalized to more required boundary elements by listing them all after $S$. For example, $\Xkl(S;f,p_1,p_2,p_3)$ denotes the number of convex $k$-vertex polytopes with $\ell$ interior points that have $f$, $p_1$, $p_2$, and $p_3$ on their boundary. Similar to Lemma~\ref{lem:one_fixed_point}, Theorem~\ref{thm:fkl} can directly be generalized to higher dimensions. \begin{theorem}\label{thm:fkl_rd} For any function $\fkl{k}{\ell}$ that fulfills the equation \begin{equation} \label{equ:fkl_rd} \fkl{k}{\ell} = \fkl{k+1}{\ell-1} + \fkl{k}{\ell-1}, \end{equation} the sum $\ensuremath{F}(S)= \sum_{\ell\geq 0} \sum_{k\geq 0} \fkl{k}{\ell} \Xkl(S)$ is invariant over all sets $S$ of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$, that is, $\ensuremath{F}(S)$ only depends on the cardinality of $S$ and the dimension $d$. \end{theorem} \begin{proof} Consider a point set $S$ in general position, an arbitrary point $p\in S$, and the numbers $\Xkl^p(S)$ of polytopes in $S$ that contain $p$ in the interior. When continuously moving the points of $S$, the values $\Xkl(S)$ change exactly when a point $p$ crosses a facet $f$ spanned by points of~$S$, from the positive to the negative side, or vice versa. Consider such a change and denote the resulting point set by $S'$. Assume without loss of generality that $p$ is to the positive side of $f$ in~$S$ and to the negative side of $f$ in $S'$. Note that in $S$, all polytopes with $f$ on their boundary and lying on the positive side of $f$ have $p$ in their interior (except for the empty simplex containing $p$ and $f$, which exists before and after the change). Further, for every $\ell\geq 1$ and every polytope (with facets $\{f,f_1,\ldots,f_r\}$) counted in $\Xkl(S;f)$, we have exactly one polytope that is counted in $\Xkl[k,\ell-1](S';f)$ (informally, the same polytope, just without $p$ in its interior), and one polytope counted in $\Xkl[k,\ell-1](S';f_1,\ldots,f_{r},p)$ (informally, the same polytope, just with $p$ added on the boundary outside $f$). Symmetrically, for every polytope in $S'$ that has $f$ on its boundary and $p$ in the interior, there are two polytopes in $S$ with one point less in the interior and zero and one point more, respectively, on the boundary. As $\fkl{k}{\ell} = \fkl{k+1}{\ell-1} + \fkl{k}{\ell-1}$, this implies that no such point move changes the sum $\ensuremath{F}(S)$. \end{proof} The proofs of the following formulae all go along the same lines as their planar counterparts and are omitted. \begin{corollary}\label{cor:is_rd} For any set $S$ of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$, it holds that \[ \sum_{k=d+1}^{n} \sum_{\ell=0}^{n-d-1} 2^\ell \Xkl(S) = 2^n - \sum_{k=0}^d \binom{n}{k}. 
\] \end{corollary} \begin{corollary}\label{cor:bms_rd} For any set $S$ of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$ and every integer $d+1 \leq m \leq n$ it holds that \[ \sum_{k=d+1}^{m} \sum_{\ell=m-k}^{n-k} \binom{\ell}{m-k} \Xkl(S) = \binom{n}{m}.\] \end{corollary} \begin{corollary}\label{cor:fib1_rd} For any set $S$ of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$, it holds that \[\sum_{k=d+1}^{n} \sum_{\ell=0}^{n-d-1} \Fib(k+2\ell) X_{k,\ell}(S) = \Fib(2n)- \sum_{k=0}^{d} \Fib(k) \binom{n}{k}.\] \end{corollary} \begin{corollary}\label{cor:fib2_rd} For any set $S$ of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$, it holds that \[ \sum_{k=d+1}^{n} \sum_{\ell=0}^{n-d-1} (-1)^{k+\ell}\Fib(k-\ell) X_{k,\ell}(S) =-\Fib(n) + \sum_{k=0}^{d} (-1)^{k+1} \Fib(k) \binom{n}{k}. \] \end{corollary} \begin{corollary}\label{cor_cheby_rd} For any set $S$ of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$, it holds that \begin{equation*}\label{equ:ChebyT_rd} \sum_{k=d\!+\!1}^{n} \sum_{\ell=0}^{n\!-\!d\!-\!1} 2\cos\left(\frac{(2k+\ell)\pi}{3}\right) X_{k,\ell}(S) = 2 \cos\left(\frac{n \pi}{3}\right) - 2\sum_{k=0}^{d} \binom{n}{k}\cos\left(\frac{2k\pi}{3}\right), \ \mbox{and} \end{equation*} \begin{equation*}\label{equ:ChebyU_rd} \sum_{k=d\!+\!1}^{n} \sum_{\ell=0}^{n\!-\!d\!-\!1} \frac{2}{\sqrt{3}}\sin\left(\frac{(2k+\ell)\pi}{3} \right) X_{k,\ell}(S) = \frac{2}{\sqrt{3}}\sin\left(\frac{n \pi}{3}\right) - \frac{2}{\sqrt{3}} \sum_{k=0}^{d} \binom{n}{k}\sin\left(\frac{2k \pi}{3}\right). \end{equation*} \end{corollary} \begin{corollary}\label{all_rd} For any set $S$ of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$ and for any $x \in \ensuremath{\mathbb R}$, it holds that \begin{equation} \sum_{k=d+1}^n \sum_{\ell=0}^{n-k} x^k \left(1+x\right)^\ell X_{k,\ell} = \left(1+x\right)^n -\sum_{k=0}^{d} x^k \binom{n}{k}. \label{eq:ps_rd} \end{equation} \end{corollary} From Equation~(\ref{eq:ps_rd}) the following can be deduced, analogous to Proposition~\ref{prop:indy} and Corollary~\ref{cor:last}. \begin{proposition}\label{prop:indy_rd} For any set $S$ of $n$ points in general position in $\ensuremath{\mathbb R}^d$, the maximum number of linearly independent equations $F_j(S) =\sum_{k \geq 3} \sum_{\ell \geq 0} f_j(k,\ell) X_{k,\ell}$, where each $f_j(k,\ell)$ satisfies Equation (\ref{equ:fkl_rd}), in terms of the variables $X_{k,\ell}$ is $n-d$. \end{proposition} \begin{corollary}\label{cor:last_rd} Let $S$ be a set of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$. Any sum $\sum_{k \geq 3} \sum_{\ell \geq 0} f(k,\ell) X_{k,\ell}$, where the function $f(k,\ell)$ fulfills Equation (\ref{equ:fkl_rd}), can be expressed in terms of $n-d$ equations of the form (\ref{eq:ps_rd}) with distinct values $x \in \ensuremath{\mathbb R}$. \end{corollary} \subsection{Moment sums}\label{subsec:moment_sums} Recall that Theorem~\ref{thm:one_point_rd} states that for any set $S$ of $n$ points in general position in $\ensuremath{\mathbb R}^d$, the alternating sum of the numbers of convex polytopes with one interior point is $\ASum[1](S) = n-h$. 
Combining this with the first moment of the numbers of empty convex polytopes $M_1(S) = \sum_{k=0}^d (-1)^{d-k} k\binom{n}{k} + i$ from~\cite{prs_oecp_06}, we observe that also in $\ensuremath{\mathbb R}^d$, the difference $M_1(S) - \ASum[1](S) = \sum_{k=0}^d (-1)^{d-k} k\binom{n}{k}$ is a function that only depends on the cardinality $n$ of $S$ and hence is independent of the combinatorics of the underlying point set $S$. For $r = 2$ and any point set $S$ in general position in $\ensuremath{\mathbb R}^d$ let \[ M_2(S) := \sum_{k\geq d+1} (-1)^{k+1} \frac{k}{r} \binom{k-r-1}{r-1} \Xk(S) = \sum_{k\geq d+1} (-1)^{k+1} \frac{k(k-3)}{2} \Xk(S) \] be the \emph{second alternating moment} of $\{\Xk(S)\}_{k\geq d+1}$. Then, Theorem~\ref{thm:moment_sums} can be generalized to~$\ensuremath{\mathbb R}^d$ in the following way. \begin{theorem}\label{thm:moment_sums_rd} For any set $S$ of $n\geq d+1$ points in general position in $\ensuremath{\mathbb R}^d$ and for integers $0\leq r\leq 2$ it holds that \[ \FrSum(S) := \sum_{k\geq d+1} \ \sum_{\ell=0}^r (-1)^{k-\ell+1+d} \mrk[r-\ell]{k-\ell} \Xkl(S) = \left\{\begin{array}{ll} \sum\limits_{k=0}^d (-1)^{d-k} \binom{n}{k} & \mbox{for}\ r=0 \\ \sum\limits_{k=0}^d (-1)^{d-k} k\binom{n}{k} & \mbox{for}\ r=1 \\ \sum_{k=0}^{d} (-1)^{d+k}\ \frac{k(k-3)}{2}\binom{n}{k} & \mbox{for}\ r=2.\\ \end{array}\right. \] \end{theorem} \noindent The proof of this theorem is identical to the proof of Theorem~\ref{thm:moment_sums}, except that the summation starts with $k=d+1$ instead of $k=3$. \end{document}
\begin{document} \title{ Extremal spectral results of planar graphs without vertex-disjoint cycles\footnote{Lin was supported by NSFC grant 12271162 and Natural Science Foundation of Shanghai (No. 22ZR1416300). Shi was supported by NSFC grant 12161141006.}} \author{{\bf Longfei Fang$^{a,b}$}, {\bf Huiqiu Lin$^a$}\thanks{Corresponding author: [email protected] (H. Lin).}, {\bf Yongtang Shi$^c$} \\ \small $^{a}$ School of Mathematics, East China University of Science and Technology, \\ \small Shanghai 200237, China\\ \small $^{b}$ School of Mathematics and Finance, Chuzhou University, \\ \small Chuzhou, Anhui 239012, China\\ \small $^{c}$Center for Combinatorics and LPMC, Nankai University, Tianjin 300071, China\\ } \date{} \maketitle {\flushleft\large\bf Abstract} Given a planar graph family $\mathcal{F}$, let ${\rm ex}_{\mathcal{P}}(n,\mathcal{F})$ and ${\rm spex}_{\mathcal{P}}(n,\mathcal{F})$ be the maximum size and maximum spectral radius over all $n$-vertex $\mathcal{F}$-free planar graphs, respectively. Let $tC_k$ be the disjoint union of $t$ copies of $k$-cycles, and $t\mathcal{C}$ be the family of $t$ vertex-disjoint cycles without length restriction. Tait and Tobin [Three conjectures in extremal spectral graph theory, J. Combin. Theory Ser. B 126 (2017) 137--161] determined that $K_2+P_{n-2}$ is the extremal spectral graph among all planar graphs with sufficiently large order $n$, which implies the extreme graphs of $spex_{\mathcal{P}}(n,tC_{\ell})$ and $spex_{\mathcal{P}}(n,t\mathcal{C})$ for $t\geq 3$ are $K_2+P_{n-2}$. In this paper, we first determine $spex_{\mathcal{P}}(n,tC_{\ell})$ and $spex_{\mathcal{P}}(n,t\mathcal{C})$ and characterize the unique extremal graph for $1\leq t\leq 2$, $\ell\geq 3$ and sufficiently large $n$. Secondly, we obtain the exact values of ${\rm ex}_{\mathcal{P}}(n,2C_4)$ and ${\rm ex}_{\mathcal{P}}(n,2\mathcal{C})$, which answers a conjecture of Li [Planar Tur\'an number of disjoint union of $C_3$ and $C_4$, arxiv:2212.12751v1 (2022)]. These present a new exploration of approaches and tools to investigate extremal problems of planar graphs. \begin{flushleft} \textbf{Keywords:} Spectral radius; Tur\'{a}n number; Planar graph; Vertex-disjoint cycles; Quadrilateral \end{flushleft} \textbf{AMS Classification:} 05C35; 05C50 \section{Introduction} Given a graph family $\mathcal{F}$, a graph is said to be \textit{$\mathcal{F}$-free} if it does not contain any $F\in\mathcal{F}$ as a subgraph. When $\mathcal{F}=\{F\}$, we write $F$-free instead of $\mathcal{F}$-free. One of the earliest results in extremal graph theory is the Tur\'{a}n's theorem, which gives the maximum number of edges in an $n$-vertex $K_k$-free graph. The \emph{Tur\'{a}n number} ${\rm ex}(n,\mathcal{F})$ is the maximum number of edges in an $\mathcal{F}$-free graph on $n$ vertices. F\"{u}redi and Gunderson \cite{Furedi} determined ${\rm ex}(n,C_{2k+1})$ for all $n$ and $k$. However, the exact value of ${\rm ex}(n,C_{2k})$ is still open. Erd\H{o}s \cite{Erdos} determined ${\rm ex}(n,tC_3)$ for $n\ge 400(t-1)^2$, and the unique extremal graph is characterized. Subsequently, Moon \cite{Moon} showed that Erd\H{o}s's result is still valid whenever $n>\frac{9t-11}{2}$. Erd\H{o}s and P\'{o}sa \cite{Erd} showed that ${\rm ex}(n,t\mathcal{C})=(2t-1)(n-t)$ for $t\ge 2$ and $n\ge 24t$. For more results on Tur\'{a}n-type problem, we refer the readers to the survey paper \cite{KC}. 
One extension of the classical Tur\'{a}n number is to study extremal spectral radius in a planar graph with a forbidden structure. The planar spectral extremal value of a given graph family $\mathcal{F}$, denoted by $spex_{\mathcal{P}}(n,\mathcal{F})$, is the maximum spectral radius over all $n$-vertex $\mathcal{F}$-free planar graphs. An $\mathcal{F}$-free planar graph on $n$ vertices with maximum spectral radius is called an \textit{extremal graph} to $spex_{\mathcal{P}}(n,\mathcal{F})$. Boots and Royle \cite{Boots} and independently Cao and Vince \cite{Cao} conjectured that $K_2+P_{n-2}$ is the unique planar graph with the maximum spectral radius where '+' means the join product. The conjecture was finally proved by Tait and Tobin \cite{Tait} for sufficiently large $n$. In order to study the spectral extremal problems on planar graphs, we first give a useful theorem which will be frequently used in the following. \begin{thm}\label{theorem1.1} Let $F$ be a planar graph and $n\ge \max\{2.16\times 10^{17}, 2|V(F)|\}$. If $F$ is a subgraph of $2K_1+P_{n/2}$, but not of $K_{2,n-2}$, then the extremal graph to $spex_{\mathcal{P}}(n,F)$ contains a copy of $K_{2,n-2}$. \end{thm} Let $tC_k$ be the disjoint union of $t$ copies of $k$-cycles, and $t\mathcal{C}$ be the family of $t$ vertex-disjoint cycles without length restriction. For $t\ge 3$, it is easy to check that $K_2+P_{n-2}$ is $tC_{\ell}$-free and $t\mathcal{C}$-free. Theorem \ref{theorem1.1} implies that $K_2+P_{n-2}$ is the extremal graph to $spex_{\mathcal{P}}(n,tC_{\ell})$ and $spex_{\mathcal{P}}(n,t\mathcal{C})$ for $t\ge 3$ and sufficiently large $n$. For three positive integers $n,n_1,n_2$ with $n\ge n_1>n_2$, let $$H(n_1,n_2):= \left\{ \begin{array}{ll} P_{n_1}\cup \frac{n-2-n_1}{n_2}P_{n_2} & \hbox{if $n_2 \mid (n-2-n_1)$,} \\ P_{n_1}\cup \lfloor\frac{n-2-n_1}{n_2}\rfloor P_{n_2}\cup P_{n-2-n_1-\lfloor\frac{n-2-n_1}{n_2}\rfloor n_2}~~~~~~~ & \hbox{otherwise.} \end{array} \right. $$ In the paper, we give answers to ${\rm spex}_{\mathcal{P}}(n,tC_{\ell})$ for $t\in \{1,2\}$ as follows. \begin{thm}\label{theorem1.2} For integers $\ell\ge 3$ and $n\ge \max\{2.16\times 10^{17},9\times2^{\ell-1}+3\}$, the graph $K_2+H(2\ell-3,\ell-2)$ is the extremal graph to $spex_{\mathcal{P}}(n,2C_{\ell})$. \end{thm} Theorem \ref{theorem1.4} implies that $K_2+H(3,1)$ is $2\mathcal{C}$-free for $n\ge 5$. By Theorem \ref{theorem1.2}, $K_2+H(3,1)$ is the extremal graph to $spex_{\mathcal{P}}(n,2C_3)$ for $n\ge 2.16\times 10^{17}$. Moreover, a planar graph is $2C_3$-free when it is $2\mathcal{C}$-free. Hence, one can easily get answer to ${\rm spex}_{\mathcal{P}}(n,2\mathcal{C})$. \begin{cor}\label{cor1.1} For $n\ge 2.16\times 10^{17}$, $K_2+H(3,1)$ is the extremal graph to $spex_{\mathcal{P}}(n,2\mathcal{C})$. \end{cor} We use $J_n$ to denote the graph obtained from $K_1+(n-1)K_1$ by embedding a maximum matching within its independent set. Nikiforov \cite {Nikiforov5} and Zhai and Wang \cite{Zhai-3} showed that $J_n$ is the extremal graph to $spex(n,C_4)$ for odd and even $n$, respectively. Clearly, $J_n$ is planar. This implies that $J_n$ is the extremal graph to $spex_{\mathcal{P}}(n,C_4)$. Nikiforov \cite{Nikiforov2} and Cioab\u{a}, Desai, and Tait \cite{Cioaba1} determined the spectral extremal graph among $C_k$-free graphs for odd $k$ and even $k\geq6$, respectively. Next we give the characterization of the spectral extremal graphs among $C_k$-free planar graphs. 
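To illustrate the shape of the extremal graphs, consider $\ell=10$ and suppose that $3\mid (n-6)$. Then the graph appearing in part (ii) of Theorem \ref{theorem1.3} below is $$K_2+H(4,3)=K_2+\Big(P_4\cup \frac{n-6}{3}P_3\Big),$$ a planar graph on $n$ vertices. Since $H(4,3)$ is a disjoint union of paths, every cycle of this graph passes through a vertex of the $K_2$, and its longest cycle uses both vertices of the $K_2$ together with the two longest paths of $H(4,3)$; it therefore has $4+3+2=9<10$ vertices, so the graph is indeed $C_{10}$-free.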
\begin{thm}\label{theorem1.3} For integers $\ell\ge 3$ and $n\ge \max\{2.16\times 10^{17},9\times2^{\lfloor\frac{\ell-1}{2}\rfloor}+3,\frac{625}{32}\lfloor\frac{\ell-3}{2}\rfloor^2+2\}$,\\ (i) $K_{2,n-2}$ is the unique extremal graph to $spex_{\mathcal{P}}(n,C_{3})$ for $\ell=3$;\\ (ii) $K_2+H(\lceil\frac{\ell-3}{2}\rceil,\lfloor\frac{\ell-3}{2}\rfloor)$ is the unique extremal graph to $spex_{\mathcal{P}}(n,C_{\ell})$ for $\ell\ge 5$. \end{thm} We also study another extension of the classical Tur\'{a}n number, i.e., the planar Tur\'{a}n number. Dowden \cite{DZ} initiated the following problem: what is the maximum number of edges in an $n$-vertex $\mathcal{F}$-free planar graph? This extremal number is called planar Tur\'{a}n number of $\mathcal{F}$ and denoted by ${\rm ex}_{\mathcal{P}}(n,\mathcal{F})$. The planar Tur\'{a}n number for short cycles are studied in \cite{DZ, DG, BT}, but ${\rm ex}_{\mathcal{P}}(n,C_k)$ is still open for general $k$. For more results on planar Tur\'{a}n-type problem, we refer the readers to a survey of Lan, Shi and Song \cite{B20}. It is easy to see that ${\rm ex}_{\mathcal{P}}(n,t\mathcal{C})=n-1$ for $t=1$. Lan, Shi and Song \cite{D} showed that ${\rm ex}_{\mathcal{P}}(n,t\mathcal{C})=3n-6$ for $t\ge 3$, and the double wheel $2K_1+C_{n-2}$ is an extremal graph. We prove the case of $t=2$, which will be used to prove our main theorems. \begin{thm}\label{theorem1.4} ${\rm ex}_{\mathcal{P}}(n,2\mathcal{C})=2n-1$ for $n\ge 5$. The extremal graphs are obtained from $2K_1+C_3$ and an independent set of size $n-5$ by joining each vertex of the independent set to any two vertices of the triangle. \end{thm} Moreover, Lan, Shi and Song \cite{D} also proved that ${\rm ex}_{\mathcal{P}}(n,tC_k)=3n-6$ for all $k,t\ge 3$. They \cite{Lan3} further showed that ${\rm ex}_{\mathcal{P}}(n,2C_3)=\lceil\frac{5n}{2}\rceil-5$ and obtained lower bounds of ${\rm ex}_{\mathcal{P}}(n,2C_k)$ for $k\ge 4$, which was improved by Li \cite{Li} for sufficiently large $n$ recently. Li \cite{Li} also conjectured that ${\rm ex}_{\mathcal{P}}(n,2C_4)\leq \frac{19}{7}(n-2)$ for $n\ge 23$, and the bound is sharp for $14\mid (n-2)$. In this paper, we determine the exact value of ${\rm ex}_{\mathcal{P}}(n,2C_4)$ for large $n$. \begin{thm}\label{theorem1.5} For $n\ge 2661$, $${\rm ex}_{\mathcal{P}}(n,2C_4)=\left\{ \begin{array}{ll} \frac{19n}{7}-6 & \hbox{if $7\mid n$,} \\ \big\lfloor\frac{19n-34}{7}\big\rfloor~~~~~~~ & \hbox{otherwise.} \end{array} \right. $$ \end{thm} \section{Proof of Theorem \ref{theorem1.1}} Let $A(G)$ be the adjacency matrix of a planar graph $G$, and $\rho(G)$ be its spectral radius, i.e., the maximum modulus of eigenvalues of $A(G)$. Throughout this section, let $G$ be an extremal graph to $spex_{\mathcal{P}}(n,F)$, and $\rho$ denote this spectral radius. By Perron-Frobenius theorem, there exists a positive eigenvector $X=(x_1,\ldots,x_n)^T$ corresponding to $\rho$. Choose $u'\in V(G)$ with $x_{u'}=\max\{x_i~|~i=1,2,\dots,n\}=1$. For a vertex $u$ and a positive integer $i$, let $N_i(u)$ denote the set of vertices at distance $i$ from $u$ in $G$. For two disjoint subset $S,T\subset V(G)$, denote by $G[S,T]$ the bipartite subgraph of $G$ with vertex set $S\cup T$ that consist of all edges with one endpoint in $S$ and the other endpoint in $T$. Set $e(S)=|E(G[S])|$ and $e(S,T)=|E(G[S,T])|$. Since $G$ is a planar graph, we have \begin{align}\label{align.-06} e(S)\leq 3|S|-6~~~\text{and}~~~e(S,T)\leq 2(|S|+|T|)-4. 
\end{align} In this section, we will always assume that $n\ge 2.16\times 10^{17}$. We first give the lower bound of $\rho$. \begin{lem}\label{lemma2.1} $\rho\ge\sqrt{2n-4}.$ \end{lem} \begin{proof} Note that $K_{2,n-2}$ is planar and $F$-free. Then, $\rho\ge \rho(K_{2,n-2})=\sqrt{2n-4}$, as $G$ is an extremal graph to $spex_{\mathcal{P}}(n,F)$. \end{proof} Set $L^{\lambda}=\{u\in V(G)~|~x_u\ge \frac{1}{10^3\lambda}\}$ for some constant $\lambda\geq \frac{1}{10^3}$. The following lemmas are used to give an upper bound for $|L^{1}|$ and a lower bound for degrees of vertices in $L^{1}$. \begin{lem}\label{lemma2.2} $|L^{\lambda}|\le \frac{\lambda n}{10^5}$. \end{lem} \begin{proof} By Lemma \ref{lemma2.1}, $\rho\geq\sqrt{2n-4}$. Hence, $$\frac{\sqrt{2n-4}}{10^3\lambda}\leq\rho x_u=\sum_{v\in N_G(u)}x_v\le d_G(u)$$ for each $u\in L^{\lambda}$. Summing this inequality over all vertices $u\in L^{\lambda}$, we obtain \begin{align*} |L^{\lambda}|\frac{\sqrt{2n-4}}{10^3\lambda}\le \sum_{u\in L^{\lambda}}d_G(u) \le \sum_{u\in V(G)}d_G(u) \le 2(3n-6). \end{align*} It follows that $|L^{\lambda}|\le 3\times 10^3\lambda\sqrt{2n-4}\le \frac{\lambda n}{10^5}$ as $n\ge 2.16\times 10^{17}$. \end{proof} \begin{lem}\label{lemma2.3} $|L^1|\leq 6\times 10^4$. \end{lem} \begin{proof} Let $u\in V(G)$ be an arbitrary vertex. For convenience, we use $N_i$, $L_i^{\lambda}$ and $\overline{L_i^{\lambda}}$ instead of $N_i(u)$, $N_i(u)\cap L^{\lambda}$ and $N_i(u)\setminus L^{\lambda}$, respectively. By Lemma \ref{lemma2.1}, $\rho\geq\sqrt{2n-4}$. Then \begin{eqnarray}\label{align.7} (2n-4)x_{u}\le \rho^2x_{u}=d_G(u)x_u+\sum_{v\in N_1}\sum_{w\in N_1(v)\setminus\{u\}}x_w. \end{eqnarray} Note that $N_1(v)\setminus\{u\}\subseteq N_1\cup N_2=L_1^{\lambda}\cup L_2^{\lambda}\cup\overline{L_1^{\lambda}}\cup \overline{L_2^{\lambda}}$. We can calculate $\sum_{v\in N_1}\sum_{w\in N_1(v)\setminus\{u\}}x_w$ according to two cases $w\in L_1^{\lambda}\cup L_2^{\lambda}$ or $w\in \overline{L_1^{\lambda}}\cup \overline{L_2^{\lambda}}$. We first consider the case $w\in L_1^{\lambda}\cup L_2^{\lambda}$. Clearly, $N_1=L_1^{\lambda}\cup\overline{L_1^{\lambda}}$ and $x_w\le 1$ for $w\in L_1^{\lambda}\cup L_2^{\lambda}$. We can see that \begin{eqnarray}\label{align.8} \sum_{v\in N_1}\sum_{w\in (L_1^{\lambda}\cup L_2^{\lambda})}\!\!x_w \leq \big(2e(L_1^{\lambda})+e(L_1^{\lambda},L_2^{\lambda})\big)+\sum_{v\in\overline{L_1^{\lambda}}}\sum_{w\in (L_1^{\lambda}\cup L_2^{\lambda})}\!\!\!x_w. \end{eqnarray} By Lemma \ref{lemma2.2}, we have $|L^{\lambda}|\le \frac{\lambda n}{10^5}$. Moreover, $L_1^{\lambda}\cup L_2^{\lambda}\subseteq L^{\lambda}$. Then, by (\ref{align.-06}), we have \begin{eqnarray}\label{align.9} 2e(L_1^{\lambda})+e(L_1^{\lambda},L_2^{\lambda})\leq 2(3|L_1^{\lambda}|-6)+(2(|L_1^{\lambda}|+|L_2^{\lambda}|)-4) < 8|L^{\lambda}|\leq \frac{8\lambda n}{10^5}. \end{eqnarray} Next, we consider the remain case $w\in \overline{L_1^{\lambda}}\cup \overline{L_2^{\lambda}}$. Clearly, $x_w\le \frac{1}{10^3\lambda}$ for $w\in\overline{L_1^{\lambda}}\cup\overline{L_2^{\lambda}}$. 
Then \begin{eqnarray}\label{align.10} \sum_{v\in N_1}\sum_{w\in\overline{L_1^{\lambda}}\cup\overline{L_2^{\lambda}}}\!\!\!x_w \le \Big(e(L_1^{\lambda},\overline{L_1^{\lambda}}\cup\overline{L_2^{\lambda}}) +2e(\overline{L_1^{\lambda}})+e(\overline{L_1^{\lambda}},\overline{L_2^{\lambda}})\Big)\frac{1}{10^3\lambda} < \frac{6n}{10^3\lambda}, \end{eqnarray} where $e(L_1^{\lambda},\overline{L_1^{\lambda}}\cup\overline{L_2^{\lambda}}) +2e(\overline{L_1^{\lambda}})+e(\overline{L_1^{\lambda}},\overline{L_2^{\lambda}})\le 2e(G)<6n$. Combining (\ref{align.7})-(\ref{align.10}), we obtain \begin{eqnarray}\label{align.-14} (2n-4)x_{u}< d_G(u)x_u+\sum_{v\in\overline{L_1^{\lambda}}}\sum_{w\in (L_1^{\lambda}\cup L_2^{\lambda})}\!\!\!x_w+\Big(\frac{8\lambda}{10}+\frac{60}{\lambda}\Big)\frac{n}{10^4}. \end{eqnarray} Now we prove that $d_G(u)\geq\frac{n}{10^4}$ for each $u\in L^1$. Suppose to the contrary that there exists a vertex $\widetilde{u}\in L^1$ with $d_G(\widetilde{u})<\frac{n}{10^4}$. Note that $x_{\widetilde{u}}\ge \frac{1}{10^3}$ as $\widetilde{u}\in L^1$. Setting $u=\widetilde{u}$, $\lambda=10$ and combining \eqref{align.-14}, we have \begin{eqnarray}\label{align.-15} \frac{2n-4}{10^3}<d_G(\widetilde{u})x_{\widetilde{u}}+\sum_{v\in\overline{L_1^{10}}}\sum_{w\in (L_1^{10}\cup L_2^{10})}x_w+\frac{14n}{10^4}. \end{eqnarray} By (\ref{align.-06}), we have \begin{eqnarray*} e\big(\overline{L_1^{10}},L_1^{10}\cup L_2^{10}\big) <2\big(|\overline{L_1^{10}}| +|L_1^{10}\cup L_2^{10}|\big)\leq 2\big(|N_1| +|L^{10}|\big) \leq\frac{4n}{10^4}, \end{eqnarray*} where $|N_1|=d_G(\widetilde{u})<\frac{n}{10^4}$ and $|L^{10}|\leq \frac{n}{10^4}$ by Lemma \ref{lemma2.2}. Combining this with $d_G(\widetilde{u})<\frac{n}{10^4}$ gives $$d_G(\widetilde{u})x_{\widetilde{u}}+\sum_{v\in\overline{L_1^{10}}}\sum_{w\in (L_1^{10}\cup L_2^{10})}x_w+\frac{14n}{10^4} \leq d_G(\widetilde{u})+e\big(\overline{L_1^{10}},L_1^{10}\cup L_2^{10}\big)+\frac{14n}{10^4}\leq \frac{19n}{10^4},$$ which contradicts \eqref{align.-15}. Therefore, $d_G(u)\geq\frac{n}{10^4}$ for each $u\in L^1$. Summing this inequality over all vertices $u\in L^{1}$, we obtain $$|L^1|\frac{n}{10^4}\le \sum_{u\in L^1}d_G(u)\le 2e(G)\leq 6n,$$ which yields that $|L^1|\leq 6\times 10^4$. \end{proof} For convenience, we use $L$, $L_i$ and $\overline{L_i}$ instead of $L^{1}$, $N_i(u)\cap L^{1}$ and $N_i(u)\setminus L^{1}$, respectively. \begin{lem}\label{lemma2.4} For every $u\in L$, we have $d_G(u)\ge(x_u -\frac{4}{1000})n$. \end{lem} \begin{proof} Let $\overline{L_1}'$ be the subset of $\overline{L_1}$ in which each vertex has at least $2$ neighbors in $L_1\cup L_2$. We first claim that $|\overline{L_1}'|\leq|L_1\cup L_2|^{2}$. If $|L_1\cup L_2|=1$, then $\overline{L_1}'$ is empty, as desired. It remains the case $|L_1\cup L_2|\geq 2$. Suppose to the contrary that $|\overline{L_1}'|> |L_1\cup L_2|^{2}$. Since there are only $\binom{|L_1\cup L_2|}{2}$ options for vertices in $\overline{L_1}'$ to choose a set of $2$ neighbors from $L_1\cup L_2$, we can find a set of $2$ vertices in $L_1\cup L_2$ with at least $\Big\lceil|\overline{L_1}'|/\binom{|L_1\cup L_2|}{2}\Big\rceil\ge3$ common neighbors in $\overline{L_1}'$. Moreover, one can observe that $u\notin L_1\cup L_2$ and $\overline{L_1}'\subseteq\overline{L_1}\subseteq N_1(u)$. Hence, $G$ contains a copy of $K_{3,3}$, contradicting that $G$ is planar. The claim holds. 
Thus, \begin{eqnarray*}\label{align31} e(\overline{L_1},L_1\cup L_2)\leq |\overline{L_1}\setminus \overline{L_1}'| +|L_1\cup L_2||\overline{L_1}'| \leq d_G(u)+(6\times10^4)^{3} \leq d_G(u)+\frac{n}{1000}, \end{eqnarray*} where the first inequality holds since each vertex of $\overline{L_1}\setminus \overline{L_1}'$ has at most one neighbor in $L_1\cup L_2$, the second last inequality holds as $|\overline{L_1}|\leq d_G(u)$, $|\overline{L_1}'|\leq|L_1\cup L_2|^{2}$ and $|L_1\cup L_2|\leq |L|\leq 6\times 10^4$, and the last inequality holds as $n\ge 2.16\times 10^{17}$. Setting $\lambda=1$ and combining the above inequality with \eqref{align.-14}, we have \begin{eqnarray*} (2n-4)x_u\leq d_G(u)+\Big(d_G(u)+\frac{n}{10^3}\Big)+\frac{61n}{10^4}, \end{eqnarray*} which yields that $d_G(u)\ge(n-2)x_u-\frac{71n}{2\times 10^4}\ge (x_u-\frac{4}{1000})n$. \end{proof} \begin{lem}\label{lemma2.5} There exists a vertex $u''\in L_1\cup L_2$ such that $x_{u''}\ge \frac{988}{1000}$. \end{lem} \begin{proof} Setting $u=u'$, $\lambda=1$ and combining \eqref{align.-14}, we have \begin{eqnarray*} 2n-4<d_G(u')+\sum_{v\in\overline{L_1}}\sum_{w\in L_1\cup L_2}\!\!\!x_w+\frac{61n}{10^4}, \end{eqnarray*} which yields that $$\sum_{v\in\overline{L_1}}\sum_{w\in L_1\cup L_2}x_w\geq 2n-4-\frac{61n}{10^4}-d_G(u')\geq\frac{993n}{1000}.$$ From Lemma \ref{lemma2.4} we have $d_G(u')\ge \frac{996n}{1000}$ as $u'\in L$. For simplicity, set $N_{L_1}(u')=N_{G}(u')\cap L_1$ and $d_{L_1}(u')=|N_{L_1}(u')|$. By Lemma \ref{lemma2.3}, $|L|\leq 6\times 10^4$. Then, $d_{L_1}(u')\leq |L_1|\leq |L|\leq \frac{n}{1000}$ as $n\ge 2.16\times 10^{17}$. It follows that $$d_{\overline{L_1}}(u')=d_G(u')-d_{L_1}(u')\ge d_G(u')-|L|\geq \frac{995n}{1000}.$$ Combining this with \eqref{align.-06} gives $$e(\overline{L_1}, L_1\cup L_2)\le e(\overline{L_1},L)-d_{\overline{L_1}}(u')\leq (2n-4)-\frac{995n}{1000}\leq \frac{1005n}{1000}. $$ By averaging, there is a vertex $u''$ such that $$x_{u''}\ge \frac{\sum_{v\in\overline{L_1}}\sum_{w\in (L_1\cup L_2)}x_w}{e(\overline{L_1}, L_1\cup L_2)}\ge \frac{\frac{993n}{1000}}{\frac{1005n}{1000}} \ge \frac{988}{1000},$$ as desired. \end{proof} Notice that $x_{u'}=1$ and $x_{u''}\geq \frac{988}{1000}$. By Lemma \ref{lemma2.4}, we have \begin{eqnarray}\label{align.a108} d_G(u')\ge \frac{996}{1000}n~~~\text{and}~~~d_G(u'')\ge \frac{984}{1000}n. \end{eqnarray} Now, let $D=\{u',u''\}$, $R=N_G(u')\cap N_G(u'')$, and $R_1=V(G)\setminus (D\cup R)$. Thus, $|R_1|\le (n-d_G(u'))+(n-d_G(u''))\le \frac{2n}{100}$. Now, we prove that the eigenvector entries of vertices in $R\cup R_1$ are small. \begin{lem}\label{lemma2.6} Let $u\in R\cup R_1$. Then $x_u\le \frac{1}{10}$. \end{lem} \begin{proof} For any vertex $u\in R_1$, we can see that \begin{eqnarray}\label{align.a8} d_D(u)\leq 1~~~\text{and}~~~d_{R}(u)\leq 2. \end{eqnarray} In fact, if $d_{R}(u)\geq 3$, then $G[N_G(u)\cup D]$ contains a copy of $K_{3,3}$, contradicting that $G$ is planar. By (\ref{align.a8}), $d_G(u)=d_D(u)+d_{R}(u)+d_{R_1}(u)\leq 3+d_{R_1}(u)$. Note that $|R_1|\le \frac{2n}{100}$ and $e(R_1)\leq 3|R_1|$ by \eqref{align.-06}. Thus, $$\rho\sum_{u\in R_1}x_u\le \sum_{u\in R_1}d_G(u)\le \sum_{u\in R_1}(3+d_{R_1}(u))\leq 3|R_1|+2e(R_1)\le 9|R_1|\le \frac{18n}{100},$$ which yields $\sum_{u\in R_1}x_u\le \frac{18n}{100\rho}$. For any $u\in R\cup R_1$, $d_{R}(u)\leq 2$ as $G$ is $K_{3,3}$-free. It follows that $$\rho x_u=\sum_{v\in N_G(u)}x_v\leq \sum_{v\in N_D(u)}x_v+\sum_{v\in N_{R}(u)}x_v+\sum_{v\in N_{R_1}(u)}x_v\le 4+\frac{18n}{100\rho}.$$ Dividing both sides by $\rho$, we get $x_u\le \frac{1}{10}$ as $\rho\geq \sqrt{2n-4}$. \end{proof} \begin{figure} \caption{A local structure of $G'$.
} \label{fig.-12} \end{figure} Now, we are ready to give the proof of Theorem \ref{theorem1.1}. \noindent{\bf Proof of Theorem \ref{theorem1.1}.} We first prove that $R_1$ is empty. Suppose to the contrary that $a=|R_1|>0$. Assume that $G^*$ is a planar embedding of $G[D\cup R]$, and $u_1,\dots,u_{|R|}$ are around $u'$ in clockwise order in $G^*$, with subscripts interpreted modulo $|R|$. Since $F$ is a subgraph of $2K_1+P_{n/2}$, we can see that $G[R]$ is $P_{n/2}$-free. Moreover, by \eqref{align.a108}, we have \begin{eqnarray}\label{align.32} |R|=|N_G(u')\cap N_G(u'')|\ge |N_G(u')|+|N_G(u'')|-n\ge \frac{980}{1000}n>\frac{n}{2}+2\geq|V(F)|. \end{eqnarray} Hence, there exists an integer $i\le |R|$ such that $u'u_iu''u_{i+1}u'$ is a face of the plane graph $G^*$. We modify the graph $G^*$ by joining each vertex in $R_1$ to each vertex in $D$ and making these edges cross the face $u'u_iu''u_{i+1}u'$, to obtain the graph $G'$ (see $G'$ in Figure \ref{fig.-12}). Then, $G'$ is a plane graph. Now, we give the following claim. \begin{claim}\label{cl1.1} (i) $\rho(G')>\rho$. (ii) $G'$ is $F$-free. \end{claim} \begin{proof} (i) Clearly, $G[R_1]$ is planar. Then, there exists a vertex $v_1\in R_1$ with $d_{R_1}(v_1)\leq 5$. Define $R_2=R_1\setminus\{v_1\}$. Repeat this step, we obtain a sequence of sets $R_1,R_2,\cdots,R_{a}$ with $d_{R_{i}}(v_{i})\leq 5$ for each $i\in\{1,\ldots,a\}$. Combining this with (\ref{align.a8}), we have \begin{eqnarray}\label{align.13} \sum_{w\in N_{D\cup R\cup R_i}(v_i)}x_w\le 1+\sum_{w\in N_R(v_i)}x_w+\sum_{w\in N_{R_i}(v_i)}x_w\le \frac{17}{10}, \end{eqnarray} where the last inequality holds as $x_w<\frac{1}{10}$ for any $w\in R\cup R_1$ by Lemma \ref{lemma2.6}. It is not hard to verify that in graph $G$ the set of edges incident to vertices in $R_1$ is $\bigcup_{i=1}^{a}\{wv_i~|~w\in N_{D\cup R\cup R_i}(v_i)\}$. Note that $x_{u'}+x_{u''}\ge \frac{1988}{1000}$. Combining these with (\ref{align.13}), we have $$\rho(G')-\rho\ge \frac{2}{X^TX}\sum_{i=1}^{a}x_{v_i}\left((x_{u'}+x_{u''})-\sum_{w\in N_{D\cup R\cup R_i}(v_i)}x_w\right)>0. $$ Thus, $\rho(G')>\rho$. (ii) Suppose to the contrary that $G'$ contains a subgraph $F'$ isomorphic to $F$. From the modification, we can see that $V(F')\cap R_1$ is not empty. Since $|R|>|V(F)|=|V(F')|$ by \eqref{align.32}, $$|R\setminus V(F')|=|R|-|R\cap V(F')|>|V(F')|-|V(F')\cap R|\geq|V(F')\cap R_1|.$$ Then, we may assume without loss of generality that $V(F')\cap R_1=\{v_1,\dots,v_b\}$ and $\{w_1,\dots,w_b\}\in R\setminus V(F')$. Clearly, $N_{G'}(v_i)=D\subseteq N_{G'}(w_i)$ for each $i\in \{1,\dots,b\}$. This indicates that a copy of $F$ is already present in $G$, which is a contradiction. \end{proof} By Claim \ref{cl1.1}, $G'$ is an $n$-vertex $F$-free plane graph. Moreover, $\rho(G')>\rho$, contradicting that $G$ is extremal to $spex_{\mathcal{P}}(n,F)$. Therefore, $R_1$ is empty, and thus $G$ contains a copy of $K_{2,n-2}$. ~~~~$\Box$ \section{Extremal spectral problems on planar graphs} In the case $F=C_3$, by Theorem \ref{theorem1.1}, $G$ contains a copy of $K_{2,n-2}$. We further obtain that $G\cong K_{2,n-2}$ as $G$ is triangle-free (otherwise, adding any edge increases triangles, a contradiction). In this section, we always assume that $G$ is an extremal graph to $spex_{\mathcal{P}}(n,F)$, where $F\in \{C_{\ell}~|~\ell\ge 5\}\cup \{2C_{\ell}~|~\ell\ge 3\}$. By a direct computation, we have $n\geq 9\times2^{\lfloor\frac{\ell-1}{2}\rfloor}+3\geq 4\ell\geq 2|V(F)|$. 
Moreover, $F$ is a subgraph of $2K_1+P_{n/2}$, but not of $K_{2,n-2}$. Hence, by Theorem \ref{theorem1.1}, $G$ contains a copy of $K_{2,n-2}$, where $V(2K_1)=\{u',u''\}$. We first prove that $u'$ is adjacent to $u''$ in $G$. \begin{lem}\label{lemma3.1} $u'u''\in E(G)$. \end{lem} \begin{proof} Suppose to the contrary that $u'u''\notin E(G)$. Assume that $G^*$ is a planar embedding of $G$, and $u_1,\dots,u_{n-2}$ are around $u'$ in clockwise order in $G^*$, with subscripts interpreted modulo $n-2$. If $R$ induces an cycle $u_1u_2\dots u_{n-2}u_1$, then we can easily check that $G^*$ contains a copy of $F$, a contradiction. Thus, there exists an integer $i\le n-2$ such that $u_iu_{i+1}\notin E(G^*[R])$. Furthermore, $u'u_iu''u_{i+1}u'$ is a face in $G^*$. We modify the graph $G^*$ by adding the edge $u'u''$ and making $u'u''$ cross the face $u'u_iu''u_{i+1}u'$, to obtain the graph $G'$. Clearly, $G'$ is a plane graph. We shall show that $G'$ is $F$-free. Suppose to the contrary that $G'$ contains a subgraph $F'$ isomorphic to $F$. If $F'=C_{\ell}$ for some $\ell\ge 5$, then $G'$ contains an $\ell$-cycle containing $u'u''$, say $u'u''u_1'u_2'\dots u_{\ell-2}'u'$. However, an ${\ell}$-cycle $u'u_1'u''u_2'\dots u_{\ell-2}'u'$ is already present in $G$, a contradiction. If $F'=2C_{\ell}$ for some $\ell\ge 3$, then $F'$ contains two vertex-disjoint $\ell$-cycles $C^1$ and $C^2$. From the modification, we can see that one of $C^1$ and $C^2$ (say $C^1$) contains the edge $u'u''$. This implies that $C^2$ is a subgraph of $G[R]$. However, $G[V(C^2)\cup \{u',u''\}]$ contains a $K_5$-minor, contradicting the fact that $G$ is planar. Hence, $G'$ is $F$-free. However, $\rho(G')>\rho$, contradicting that $G$ is extremal to $spex_{\mathcal{P}}(n,F)$. Therefore, $u'u''\in E(G)$. \end{proof} \begin{lem}\label{lemma3.2} $G[R]$ is a disjoint union of paths. \end{lem} \begin{proof} Theorem \ref{theorem1.1} and Lemma \ref{lemma3.1} imply that $u'$ and $u''$ are dominating vertices. Furthermore, since $G$ is $K_5$-minor-free and $K_{3,3}$-minor-free, we can see that $G[R]$ is $K_3$-minor-free and $K_{1,3}$-minor-free. This implies that $G[R]$ is an acyclic graph with maximum degree at most 2. Thus, $G[R]$ is a disjoint union of paths. \end{proof} We shall give characterizations of eigenvector entries of vertices in $R$ in the following lemmas. \begin{lem}\label{lemma3.3} For any vertex $u\in R$, we have $x_u\in [\frac{2}{\rho}, \frac{2}{\rho}+\frac{6}{\rho^2}]$. \end{lem} \begin{proof} Recall that $u'$ and $u''$ are dominating vertices of $G$. So, $x_{u'}=x_{u''}=1$. Hence \begin{eqnarray}\label{align.+14} \rho x_u=x_{u'}+x_{u''}+\sum_{v\in N_R(u)}x_v=2+\sum_{v\in N_R(u)}x_v. \end{eqnarray} Moreover, by Lemma \ref{lemma3.2}, $d_{R}(v)\le 2$ for any $v\in R$. Combining this with Lemma \ref{lemma2.6} and \eqref{align.+14}, we have $x_u\in \big[\frac{2}{\rho},\frac{3}{\rho}\big]$. Furthermore, by \eqref{align.+14}, $\rho x_u\in \big[2,2+\frac{6}{\rho}\big]$, which yields that $x_u\in \big[\frac{2}{\rho}, \frac{2}{\rho}+\frac{6}{\rho^2}\big]$. \end{proof} Now we give a transformation that we will use in subsequent proof. \begin{definition}\label{def3.1} Let $s_1$ and $s_2$ be two integers with $s_1\ge s_2\ge 1$, and let $H=P_{s_1}\cup P_{s_2}\cup H_0$, where $H_0$ is a disjoint union of paths. We say that $H^*$ is an $(s_1,s_2)$-transformation of $H$ if $$H^*:= \left\{ \begin{array}{ll} P_{s_1+1}\cup P_{s_2-1}\cup H_0 & \hbox{if $s_2\geq 2$,} \\ P_{s_1+s_2}\cup H_0~~~~~~~ & \hbox{if $s_2= 1$.} \end{array} \right. 
$$ \end{definition} Clearly, $H^*$ is a disjoint union of paths, which implies that $K_2+H^*$ is planar. If $G[R]\cong H$, then we shall show that $\rho(K_2+H^*)>\rho(K_2+H)$ for sufficiently large $n$. \begin{lem}\label{lemma3.4} Let $H$ and $H^*$ be defined as in Definition \ref{def3.1}. If $G[R]\cong H$, then $\rho(K_2+H^*)>\rho$ for $n\ge \max\{2.16\times 10^{17},9\times2^{s_2+1}+3\}$. \end{lem} \begin{proof} Assume that $P^1:=v_1v_2\dots v_{s_1}$ and $P^2:=w_1w_2\dots w_{s_2}$ are two components of $H$. Clearly, $G\cong K_2+H$ as $G[R]\cong H$. If $s_2=1$, then $H\subset H^*$, and so $G\subset K_2+H^*$. It follows that $\rho(K_2+H^*)>\rho$, and the result holds. Next, we deal with the case $s_2=2$. If $x_{v_1}\leq x_{w_1}$, then let $H'$ be obtained from $H$ by deleting the edge $v_{1}v_{2}$ and adding the edge $v_{2}w_{1}$. Clearly, $H'\cong H^*$. Moreover, \begin{eqnarray*} \rho(K_2+H^*)-\rho \geq\frac{X^T\big(A(K_2+H^*)-A(G)\big)X}{X^TX} \geq\frac{2}{X^TX}(x_{w_{1}}-x_{v_{1}})x_{v_{2}}\ge 0. \end{eqnarray*} Since $X$ is a positive eigenvector of $G$, we have $\rho x_{v_1}=2+x_{v_2}$. If $\rho(K_2+H^*)=\rho$, then $X$ is also a positive eigenvector of $K_2+H^*$, and so $\rho(K_2+H^*) x_{v_1}=2$, contradicting that $\rho x_{v_1}=2+x_{v_2}$. Thus, $\rho(K_2+H^*)>\rho$, and the result holds. The case $x_{v_1}>x_{w_1}$ is similar and hence omitted here. It remains the case $s_2\ge 3$. We give a claim that will prove useful. \begin{claim}\label{claim3.1} Let $i$ be a positive integer. Set $A_i=\big[\frac{2}{\rho}-\frac{6\times 2^i}{\rho^2},\frac{2}{\rho}+\frac{6\times 2^i}{\rho^2}\big]$ and $B_i=\big[-\frac{6\times 2^i}{\rho^{2}},\frac{6\times 2^i}{\rho^{2}}\big]$. Then, \\ (i) for any $i\in \{1,\dots,\lfloor\frac{s_1-1}{2}\rfloor\}$, $\rho^i(x_{v_{i+1}}-x_{v_i})\in A_i$;\\ (ii) for any $i\in \{1,\dots,\lfloor\frac{s_2-1}{2}\rfloor\}$, $\rho^i(x_{w_{i+1}}-x_{w_i})\in B_i$;\\ (iii) for any $i\in \{1,\dots,\lfloor\frac{s_2}{2}\rfloor\}$, $\rho^i(x_{v_{i}}-x_{w_i})\in B_i$. \end{claim} \begin{proof} (i) It suffices to prove that for any $i\in \{1,\dots,\lfloor\frac{s_1-1}{2}\rfloor\}$ and $j\in \{i,\dots,s_1-i-1\}$, \begin{eqnarray*} \rho^i(x_{v_{j+1}}-x_{v_j})\in \left\{ \begin{array}{ll} A_i & \hbox{if $j=i$,} \\ B_i & \hbox{if $i+1\leq j\leq s_1-i-1$.} \end{array} \right. \end{eqnarray*} We proceed by induction on $i$. Clearly, \begin{eqnarray}\label{align.+15} \rho x_{v_j}=\sum_{v\in N_G(v_j)}x_v= \left\{ \begin{array}{ll} 2+x_{v_2} & \hbox{if $j=1$,} \\ 2+x_{v_{j-1}}+x_{v_{j+1}} & \hbox{if $2\le j\le s_1-1$.} \end{array} \right. \end{eqnarray} By Lemma \ref{lemma3.3}, we have \begin{eqnarray}\label{align.110} \rho(x_{v_{j+1}}-x_{v_j})= \left\{ \begin{array}{ll} x_{v_{1}}+x_{v_{3}}-x_{v_2}\in A_1 & \hbox{if $j=1$,} \\ (x_{v_{j}}-x_{v_{j-1}})+(x_{v_{j+2}}-x_{v_{j+1}}) \in B_1 & \hbox{if $2\le j\le s_1-2$.} \end{array} \right. \end{eqnarray} So the result is true when $i=1$. Assume then that $2\le i\le \lfloor\frac{s_1-1}{2}\rfloor$, which implies that $s_1\geq 2i+1$. For $i\le j\le s_1-i-1$, $\rho(x_{v_{j+1}}-x_{v_j})=(x_{v_{j}}-x_{v_{j-1}})+(x_{v_{j+2}}-x_{v_{j+1}})$, and so \begin{eqnarray}\label{align.111} \rho^i(x_{v_{j+1}}-x_{v_j})=\rho^{i-1}(x_{v_{j}}-x_{v_{j-1}})+\rho^{i-1}(x_{v_{j+2}}-x_{v_{j+1}}). \end{eqnarray} By the induction hypothesis, $\rho^{i-1}(x_{v_{i}}-x_{v_{i-1}})\in A_{i-1}$ and $\rho^{i-1}(x_{v_{i+2}}-x_{v_{i+1}})\in B_{i-1}$. Setting $j=i$ and combining \eqref{align.111}, we have $\rho^i(x_{v_{i+1}}-x_{v_i})\in A_i$, as desired.
If $i+1\le j\le s_1-i-1$, then by the induction hypothesis, $\rho^{i-1}(x_{v_{j}}-x_{v_{j-1}})\in B_{i-1}$ and $\rho^{i-1}(x_{v_{j+2}}-x_{v_{j+1}})\in B_{i-1}$. Thus, by \eqref{align.111}, we have $\rho^i(x_{v_{j+1}}-x_{v_j})\in B_i$, as desired. Hence the result holds. The proof of (ii) is similar to that of (i) and hence omitted here. (iii) It suffices to prove that for any $i\in \{1,\dots,\lfloor\frac{s_2}{2}\rfloor\}$ and $j\in \{i,\dots,s_2-i\}$, $\rho^i (x_{v_{j}}-x_{w_j})\in B_i.$ We proceed by induction on $i$. Clearly, $$\rho x_{w_j}=\sum_{w\in N_G(w_j)}x_w= \left\{ \begin{array}{ll} 2+x_{w_2} & \hbox{if $j=1$,} \\ 2+x_{w_{j-1}}+x_{w_{j+1}} & \hbox{if $2\le j\le s_2-1$.} \end{array} \right. $$ Combining this with (\ref{align.+15}) and Lemma \ref{lemma3.3} gives $$\rho(x_{v_j}-x_{w_j})= \left\{ \begin{array}{ll} x_{v_{2}}-x_{w_2}\in\big[-\frac{6}{\rho^2},\frac{6}{\rho^2}\big]\subset B_1 & \hbox{if $j=1$,} \\ (x_{v_{j+1}}-x_{w_{j+1}})+(x_{v_{j-1}}-x_{w_{j-1}}) \in B_1 & \hbox{if $2\le j\le s_2-1$.} \end{array} \right. $$ So the claim is true when $i=1$. Assume then that $2\le i\le \lfloor\frac{s_2}{2}\rfloor$, which implies that $s_2\geq 2i$. If $i\le j\le s_2-i$, then $\rho(x_{v_{j}}-x_{w_j})=(x_{v_{j-1}}-x_{w_{j-1}})+(x_{v_{j+1}}-x_{w_{j+1}})$, and so \begin{eqnarray}\label{align.120} \rho^i(x_{v_{j}}-x_{w_j}) =\rho^{i-1}(x_{v_{j-1}}-x_{w_{j-1}})+\rho^{i-1}(x_{v_{j+1}}-x_{w_{j+1}}). \end{eqnarray} By the induction hypothesis, $\rho^{i-1}(x_{v_{j-1}}-x_{w_{j-1}})\in B_{i-1}$ and $\rho^{i-1}(x_{v_{j+1}}-x_{w_{j+1}})\in B_{i-1}$. Combining these with \eqref{align.120}, we have $\rho^i(x_{v_{j}}-x_{w_j})\in B_i$. Hence the result holds. \end{proof} Since $n\ge 9\times2^{s_2+1}+3$, we have $\rho\geq \sqrt{2n-4}> 6\times 2^{s_2/{2}}$, and so \begin{eqnarray*} \frac{2}{\rho^{i+1}}-\frac{6\times 2^i}{\rho^{i+2}}> \left(\frac{2}{\rho^{i+1}}-\frac{6\times 2^i}{\rho^{i+2}}\right)-\frac{6\times 2^i}{\rho^{i+2}}>0~~\text{for any}~~i\leq \frac{s_2}{2}. \end{eqnarray*} Combining this with Claim \ref{claim3.1}, we obtain \begin{eqnarray}\label{align.115} x_{v_{i+1}}-x_{v_{i}}\geq\frac{2}{\rho^{i+1}}-\frac{6\times 2^i}{\rho^{i+2}}>0~~\text{for any}~~i\leq \min\Big\{\frac{s_2}{2},\Big\lfloor\frac{s_1-1}{2}\Big\rfloor\Big\}, \end{eqnarray} and \begin{eqnarray}\label{align.16} x_{v_{i+1}}-x_{w_{i}}=(x_{v_{i+1}}-x_{v_{i}})+(x_{v_{i}}-x_{w_{i}})\geq \left(\frac{2}{\rho^{i+1}}-\frac{6\times 2^i}{\rho^{i+2}}\right)-\frac{6\times 2^i}{\rho^{i+2}}>0 \end{eqnarray} for any $i\leq \min\big\{\big\lfloor\frac{s_2}{2}\big\rfloor,\big\lfloor\frac{s_1-1}{2}\big\rfloor\big\}$. Similarly, \begin{eqnarray}\label{align.17} x_{w_{i+1}}>x_{w_{i}}~~\text{and}~~x_{w_{i+1}}>x_{v_{i}}~~\text{for any}~~i\leq \Big\lfloor\frac{s_2-1}{2}\Big\rfloor. \end{eqnarray} Recall that $s_2\geq 3$. Let $t_1,t_2$ be integers with $t_1,t_2\geq 1$ and $t_1+t_2=s_2-1$, and $H'$ be obtained from $H$ by deleting edges $v_{t_1}v_{t_1+1},w_{t_2}w_{t_2+1}$ and adding edges $v_{t_1}w_{t_2},v_{t_1+1}w_{t_2+1}$. Since $t_1+t_2=s_2-1$, we can see that $H'\cong H^*$. Moreover, \begin{eqnarray}\label{align.15} \rho(K_2+H^*)-\rho\geq\frac{X^T\big(A(K_2+H^*)-A(G)\big)X}{X^TX} \geq\frac{2}{X^TX}(x_{v_{t_1+1}}-x_{w_{t_2}})(x_{w_{t_2+1}}-x_{v_{t_1}}). \end{eqnarray} Now, we consider the following two cases to complete the proof. \noindent{{\bf{Case 1.}}} $s_2$ is odd. Set $t_1=\frac{s_2-1}{2}$. Then, $t_2=\frac{s_2-1}{2}$ as $t_1+t_2=s_2-1$.
By (\ref{align.16}) and (\ref{align.17}), we have $x_{v_{t_1+1}}>x_{w_{t_2}}$ and $x_{w_{t_2+1}}>x_{v_{t_1}}$, and so $\rho(K_2+H^*)>\rho$ by \eqref{align.15}, as desired. \noindent{{\bf{Case 2.}}} $s_2$ is even. We only consider the subcase $x_{w_{s_2/2}}\geq x_{v_{s_2/2}}$. The proof of the subcase $x_{w_{s_2/2}}<x_{v_{s_2/2}}$ is similar and hence omitted here. Set $t_1=\frac{s_2}{2}$. Then, $t_2=\frac{s_2-2}{2}$ as $t_1+t_2=s_2-1$. Moreover, $x_{w_{t_2+1}}\geq x_{v_{t_1}}$, as $x_{w_{s_2/2}}\geq x_{v_{s_2/2}}$. If $s_1=s_2$, then $s_1$ is even. This implies that $x_{v_{(s_1+2)/2}}=x_{v_{s_1/2}}$ by symmetry, that is, $x_{v_{t_1+1}}=x_{v_{t_1}}$. If $s_1\geq s_2+1$, then by \eqref{align.115}, $x_{v_{t_1+1}}>x_{v_{t_1}}$. In both situations, we have $x_{v_{t_1+1}}\geq x_{v_{t_1}}$. From (\ref{align.16}) we have $x_{v_{t_1}}>x_{w_{t_2}}$, which implies that $x_{v_{t_1+1}}>x_{w_{t_2}}$. Furthermore, we have $\rho(K_2+H^*)\geq\rho$ by \eqref{align.15}. If $\rho(K_2+H^*)=\rho$, then $X$ is a positive eigenvector of $K_2+H^*$, and so $\rho(K_2+H^*) x_{v_{t_1}}=2+x_{v_{t_1-1}}+x_{w_{t_2}}$. On the other hand, $\rho x_{v_{t_1}}=2+x_{v_{t_1-1}}+x_{v_{t_1+1}}$, since $X$ is a positive eigenvector of $G$. It follows that $x_{v_{t_1+1}}=x_{w_{t_2}}$, which is a contradiction. Thus, $\rho(K_2+H^*)>\rho$, as desired. This completes the proof of Lemma \ref{lemma3.4}. \end{proof} From Lemma \ref{lemma3.2}, we may assume that $G[R]$ is a disjoint union of $t\geq 2$ paths. Let $H$ be a disjoint union of $t$ paths. We use $n_i(H)$ to denote the order of the $i$-th longest path of $H$ for any $i\in \{1,\dots,t\}$. For convenience, we use $n_i$ instead of $n_i(G[R])$. Having Lemmas \ref{lemma3.3} and \ref{lemma3.4}, we are ready to complete the proof of Theorems \ref{theorem1.2} and \ref{theorem1.3}.\\ \noindent{\bf Proof of Theorem \ref{theorem1.2}.} We first give the following claim. \begin{claim}\label{cla5.2} If $H$ is a disjoint union of $t\geq 2$ paths, then $K_2+H$ is $2C_{\ell}$-free if and only if $n_1(H)\leq 2\ell-3$ and $n_2(H)\leq \ell-2$. \end{claim} \begin{proof} We first claim that $K_2+H$ is $2C_{\ell}$-free if and only if $P_{n_1(H)}\cup P_{n_2(H)}$ is $2P_{\ell-1}$-free. Equivalently, $K_2+H$ contains a copy of $2C_{\ell}$ if and only if $P_{n_1(H)}\cup P_{n_2(H)}$ contains a copy of $2P_{\ell-1}$. Assume that $K_2+H$ contains two vertex-disjoint $\ell$-cycles $C^1$ and $C^2$, and $V(K_2)=\{u',u''\}$. Since $H$ is acyclic, we can see that $C^i$ must contain at least one vertex of $u'$ and $u''$ for any $i\in \{1,2\}$. Without loss of generality, assume that $u'\in V(C^1)$ and $u''\in V(C^2)$. Then, $C^1-\{u'\}\cong C^2-\{u''\}\cong P_{\ell-1}$, and so $H$ contains a $2P_{\ell-1}$. We can further find that $P_{n_1(H)}\cup P_{n_2(H)}$ contains a $2P_{\ell-1}$. Conversely, assume that $P_{n_1(H)}\cup P_{n_2(H)}$ contains two vertex-disjoint paths $P^1$ and $P^2$ such that $P^1\cong P^2\cong P_{\ell-1}$. Thus, the subgraph induced by $V(P^1)\cup \{u'\}$ contains a copy of $C_{\ell}$. Similarly, the subgraph induced by $V(P^2)\cup \{u''\}$ contains a copy of $C_{\ell}$. This indicates that $K_2+H$ contains a copy of $2C_{\ell}$. So, the claim holds. Next, we claim that $P_{n_1(H)}\cup P_{n_2(H)}$ is $2P_{\ell-1}$-free if and only if $n_1(H)\leq 2\ell-3$ and $n_2(H)\leq \ell-2$. 
If $P_{n_1(H)}\cup P_{n_2(H)}$ is $2P_{\ell-1}$-free, then $n_1(H)\leq 2\ell-3$ (otherwise, $P_{n_1(H)}$ contains a copy of $2P_{\ell-1}$, a contradiction); $n_2(H)\leq \ell-2$ (otherwise, $P_{n_1(H)}\cup P_{n_2(H)}$ contains a copy of $2P_{\ell-1}$ as $n_1(H)\ge n_2(H)\geq \ell-1$, a contradiction). Conversely, if $n_1(H)\leq 2\ell-3$ and $n_2(H)\leq \ell-2$, then it is not hard to verify that $P_{n_1(H)}\cup P_{n_2(H)}$ is $2P_{\ell-1}$-free. This completes the proof of Claim \ref{cla5.2}. \end{proof} Recall that $n_i$ (resp. $n_i(H)$) is the order of the $i$-th longest path of $G[R]$ (resp. $H$) for any $i\in \{1,\dots,t\}$. By Claim \ref{cla5.2}, $n_1\leq 2\ell-3$ and $n_2\leq \ell-2$. We first claim that $n_1=2\ell-3$. Suppose to the contrary that $n_1\leq 2\ell-4$. Let $H'$ be an $(n_1,n_t)$-transformation of $G[R]$. Clearly, $n_1(H')=n_1(H)+1\leq 2\ell-3$ and $n_2(H')=n_2\leq \ell-2$. By Claim \ref{cla5.2}, $K_2+H'$ is $2C_{\ell}$-free. However, by Lemma \ref{lemma3.4}, we have $\rho(K_2+H')>\rho$, contradicting that $G$ is extremal to $spex_{\mathcal{P}}(n,2C_{\ell})$. Thus, $n_1=2\ell-3$, the claim holds. Note that $n_i\leq n_2\leq \ell-2$ for each $i\in \{2,\dots,t-1\}$. We then claim that $n_i=\ell-2$ for $i\in \{2,\dots,t-1\}$. If $\ell=3$, then $n_i=1$, as $n_i\leq \ell-2$, for each $i\in \{2,\dots,t\}$ and the claim holds trivially. Now let $\ell\geq 4$. Suppose to the contrary, then set $i_0=\min\{i~|~2\leq i\leq t-1,n_i\leq\ell-3\}.$ Let $H'$ be an $(n_{i_0},n_t)$-transformation of $G[R]$. Clearly, $n_1(H')=n_1=2\ell-3$ and $n_2(H')=\max\{n_2,n_{i_0}+1\}\leq \ell-2$. By Claim \ref{cla5.2}, $K_2+H'$ is $2C_{\ell}$-free. However, by Lemma \ref{lemma3.4}, we have $\rho(K_2+H')>\rho$, contradicting that $G$ is extremal to $spex_{\mathcal{P}}(n,2C_{\ell})$. So, the claim holds. Since $n_1=2\ell-3$, $n_i=\ell-2$ for $i\in \{2,\dots,t-1\}$ and $n_t\leq \ell-2$, we can see that $G[R]\cong H(2\ell-3,\ell-2)$. This completes the proof of Theorem \ref{theorem1.2}. ~~~~$\Box$\\ \noindent{\bf Proof of Theorem \ref{theorem1.3}.} It remains the case $\ell\geq 5$. We first give the following claim. \begin{claim}\label{cla5.3} If $H$ is a disjoint union of $t\geq 2$ paths, then $K_2+H$ is $C_{\ell}$-free if and only if $n_1(H)+n_2(H)\leq \ell\!-\!3$. \end{claim} \begin{proof} One can observe that the longest cycle in $K_2+H$ is of order $n_1(H)+n_2(H)+2$. Furthermore, $K_2+(P_{n_1(H)}\cup P_{n_2(H)})$ contains a cycle $C_{i}$ for each $i\in \{3,\dots,n_1(H)+n_2(H)+2\}$. Consequently, $n_1(H)+n_2(H)+2\leq \ell-1$ if and only if $K_2+H$ is $C_{\ell}$-free. Hence, the claim holds. \end{proof} By Claim \ref{cla5.3}, $n_1+n_2 \leq\ell-3$, which implies that $n_2\leq \lfloor\frac{\ell-3}{2}\rfloor$ as $n_1\geq n_2$. Since $\ell\geq 5$, we have $\lfloor\frac{\ell-3}{2}\rfloor\geq 1$, which implies that $\lfloor\frac{\ell-3}{2}\rfloor^2\geq \lfloor\frac{\ell-3}{2}\rfloor$ and $3\lfloor\frac{\ell-3}{2}\rfloor^2\geq 2\lfloor\frac{\ell-3}{2}\rfloor+1\geq \ell-3$. Consequently, since $n\ge \frac{625}{32}\lfloor\frac{\ell-3}{2}\rfloor^2+2>6\lfloor\frac{\ell-3}{2}\rfloor^2+2$, we have \begin{eqnarray}\label{align.30} n> \Big\lfloor\frac{\ell-3}{2}\Big\rfloor^2+2\Big\lfloor\frac{\ell-3}{2}\Big\rfloor+(\ell-3)+2\geq n_2^2+3n_2+n_1+2 \end{eqnarray} as $n_2\leq \lfloor\frac{\ell-3}{2}\rfloor$ and $n_1+n_2 \leq\ell-3$. On the other hand, $n-2=\sum_{i=1}^{t}n_i\leq n_1+(t-1)n_2$, which implies that $t\geq \frac{n-2-n_1}{n_2}+1\geq n_2+4$ by \eqref{align.30}. We first claim that $n_1+n_2=\ell-3$. 
Suppose to the contrary that $n_1+n_2 \leq\ell-4$. Let $H'$ be an $(n_1,n_t)$-transformation of $G[R]$. Clearly, $n_1(H')=n_1+1\leq \ell-3-n_2$ and $n_2(H')=n_2$. Then, $n_1(H')+n_2(H')\leq \ell-3$. By Claim \ref{cla5.3}, $K_2+H'$ is $C_{\ell}$-free. However, by Lemma \ref{lemma3.4}, we have $\rho(K_2+H')>\rho$, contradicting that $G$ is extremal to $spex_{\mathcal{P}}(n,C_{\ell})$. Hence, the claim holds. Secondly, we claim that $n_i=n_2$ for $i\in \{3,\dots,t-1\}$. Suppose to the contrary, then set $i_0=\min\{i~|~3\leq i\leq t-1,n_i\leq n_2-1\}$. Let $H'$ be an $(n_{i_0},n_t)$-transformation of $G[R]$. Clearly, $n_1(H')=n_1$ and $n_2(H')=\max\{n_2,n_{i_0}+1\}=n_2$, and so $n_1(H')+n_2(H')=\ell-3$. By Claim \ref{cla5.3}, $K_2+H'$ is $C_{\ell}$-free. However, by Lemma \ref{lemma3.4}, we have $\rho(K_2+H')>\rho$, contradicting that $G$ is extremal to $spex_{\mathcal{P}}(n,C_{\ell})$. Hence, the claim holds. It follows that $G=K_2+(P_{n_1}\cup(t-2)P_{n_2}\cup P_{n_t})$. Recall that $t\geq n_2+4$. Then, $G[R]$ contains at least $n_2+2$ paths of order $n_2$ (say $P^1,\dots,P^{n_2+2}$), where $P^{n_2+2}=w_1w_2\dots w_{n_2}$. Finally, we claim that $n_2=\lfloor\frac{\ell-3}{2}\rfloor$. Suppose to the contrary, then $n_2\leq \lfloor\frac{\ell-5}{2}\rfloor$ as $n_2\leq \lfloor\frac{\ell-3}{2}\rfloor$. Assume that $w'$ is an endpoint of $P_{n_1}$ and $w'w''\in E(P_{n_1})$ in $G$. Let $G'$ be obtained from $G$ by (i) deleting $w'w''$ and connecting $w'$ to an endpoint of $P^{n_2+1}$; (ii) deleting all edges of $P^{n_2+2}$ and connecting $w_i$ to an endpoint of $P^{i}$ for each $i\in \{1,\dots,n_2\}$. Then, $G'$ is obtained from $G$ by deleting $n_2$ edges and adding $n_2+1$ edges. By Lemma \ref{lemma3.3}, we obtain $$\frac{4}{\rho^2} < x_{u_i}x_{u_{j}}<\frac{4}{\rho^2}+\frac{24}{\rho^3}+\frac{36}{\rho^4}<\frac{4}{\rho^2}+\frac{25}{\rho^3}$$ for any vertices $u_i,u_{j}\in R$. Then $$\rho(G')-\rho\ge \frac{X^T(A(G')-A(G))X}{X^TX}>\frac{2}{X^TX}\left(\frac{4(n_2+1)}{\rho^2}-\frac{4n_2}{\rho^2}-\frac{25n_2}{\rho^3}\right)>0,$$ where $n_2<\lfloor\frac{\ell-3}{2}\rfloor\leq \frac{4}{25}\sqrt{2n-4}\leq \frac{4}{25}\rho$ as $n\ge \frac{625}{32}\lfloor\frac{\ell-3}{2}\rfloor^2+2$. So, $\rho(G')>\rho$. On the other hand, $G'\cong K_2+(P_{n_1-1}\cup(n_2+1)P_{n_2+1}\cup(t-n_2-4)P_{n_2}\cup P_{n_t})$. By Claim \ref{cla5.3}, $G'$ is $C_{\ell}$-free, contradicting that $G$ is extremal to $spex_{\mathcal{P}}(n,C_{\ell})$. So, the claim holds. Since $n_1+n_2=\ell-3$ and $n_2=\lfloor\frac{\ell-3}{2}\rfloor$, we have $n_1=\lceil\frac{\ell-3}{2}\rceil$. Moreover, since $n_i=\lfloor\frac{\ell-3}{2}\rfloor$ for $i\in \{2,\dots,t-1\}$ and $n_t\leq n_2$, we can further obtain that $G[R]\cong H(\lceil\frac{\ell-3}{2}\rceil,\lfloor\frac{\ell-3}{2}\rfloor)$, which implies that $G\cong K_2+H(\lceil\frac{\ell-3}{2}\rceil,\lfloor\frac{\ell-3}{2}\rfloor)$. This completes the proof of Theorem \ref{theorem1.3}. ~~~~$\Box$\\ \section{Proof of Theorem \ref{theorem1.4}} Above all, we shall introduce the \emph{Jordan Curve Theorem}: any simple closed curve $C$ in the plane partitions the rest of the plane into two disjoint arcwise-connected open sets (see \cite{Bondy}, P. 244). The corresponding two open sets are called the interior and the exterior of $C$. We denote them by $int(C)$ and $ext(C)$, and their closures by $Int(C)$ and $Ext(C)$, respectively. A \emph{plane graph} is a planar embedding of a planar graph. The Jordan Curve Theorem gives the following lemma. 
\begin{lem}\label{lemma4.1} Let $C$ be a cycle of a plane graph $G$, and let $x,y$ be two vertices of $G$ with $x\in int(C)$ and $y\in ext(C)$, then $xy\notin E(G)$. \end{lem} Let $G$ be a plane graph. A face in $G$ of size $i$ is called an $i$-face. Let $f_i(G)$ denote the number of $i$-faces in $G$, and let $f(G)$ denote $\sum_{i}f_i(G)$. \begin{lem}\label{lemma4.2}\emph{(Proposition 2.5 of \cite{Bondy}, P. 250)} Let $G$ be a planar graph, and let $f$ be an arbitrary face in some planar embedding of $G$. Then $G$ admits a planar embedding whose outer face has the same boundary as $f$. \end{lem} Let $\delta(G)$ be the minimum degree of a graph $G$. It is well known that every graph $G$ with $\delta(G)\ge 2$ contains a cycle. In the following, we give a more delicate characterization on planar graphs, which contains an important structural information of the extremal graphs in Theorem \ref{theorem1.4}. \begin{lem}\label{lemma4.3} Let $G$ be a plane graph on $n$ vertices with $\delta(G)\ge 3$. Then $G$ contains two vertex-disjoint cycles unless $G\in \{2K_1+C_3,K_1+C_{n-1}\}$. \end{lem} \begin{proof} We first deal with some trivial cases. Since $\delta(G)\ge 3$, we have $n\ge 1+\delta(G)\ge 4$. If $n=4$, then $G\cong K_1+C_3$. If $n=5$, then $2e(G)=\sum_{v\in V(G)}d_G(v)\ge3\times5=15$, and so $e(G)\ge 8$. On the other hand, $e(G)\le 3n-6=9$, since $G$ is planar. Thus, $e(G)\in\{8,9\}$. It is not hard to verify that $G\cong 2K_1+C_3$ when $e(G)=9$ and $G\cong K_1+C_4$ when $e(G)$=8, as desired. If $G$ is not connected, then $G$ contains at least two components $G_1$ and $G_2$ with $\delta(G_i)\ge 3$ for $i\in\{1,2\}$, which implies that each $G_i$ contains a cycle. Thus, $G$ contains two vertex-disjoint cycles, as desired. If $G$ has a cut vertex $v$, then $G-\{v\}$ has at least two components $G_3$ and $G_4$. Since $\delta(G)\ge 3$, we have $\delta(G_i)\ge 2$ for $i\in\{3,4\}$, which implies that both $G_3$ and $G_4$ contain a cycle. Thus, $G$ also contains two vertex-disjoint cycles. Next, we only need to consider the case that $G$ is a 2-connected graph of order $n\ge 6$. Since $G$ is 2-connected, each face of $G$ is a cycle. Let $C$ be a face of $G$ with minimum size $g$. By Lemma \ref{lemma4.2}, we may assume without loss of generality that $C$ is the outer face of $G$. Let $G_1=G-V(C)$. If $G_1$ contains a cycle, then $G$ contains two vertex-disjoint cycles, as desired. Now assume that $G_1$ is acyclic. Since $\delta(G)\ge 3$, we have $2e(G)=\sum_{v\in V(G)}d_G(v)\ge 3n$. This, together with Euler's formula $n-2=e(G)-f(G)$, gives $e(G)\le 3f(G)-6$. On the other hand, \begin{align*} 2e(G)=\sum_{i\ge g}if_i(G)\ge g\sum_{i\ge g}f_i(G)=gf(G). \end{align*} Hence, $gf(G)\le 2e(G)\le 6f(G)-12$, yielding $g\le \frac{6f(G)-12}{f(G)}<6$. Subsequently, we shall give several claims. \begin{claim}\label{claim4.1} We have $g=3$. \end{claim} \begin{figure} \caption{Two possible local structures of $G$. } \label{fig.-01} \end{figure} \begin{proof} Suppose to the contrary that $g\in \{4,5\}$, and let $C=v_1v_2\ldots v_gv_1$. We first consider the case that there exists a vertex of $G_1$ adjacent to two consecutive vertices of $C$. Without loss of generality, let $w_1\in V(G_1)$ and $\{w_1,v_1,v_2\}$ induces a triangle $C'$. More generally, we define $A=\{w\in V(G_1)~|~ v_1,v_2 \in N_C(w)\}$. Clearly, $w_1\in A$. We can select a vertex, say $w_1$, in $A$ such that $A\subseteq Ext(C')$ (see Figure \ref{fig.-01}). Notice that $C'$ is not a face of $G$, as $g\in \{4,5\}$. Then, $int(C')\neq \varnothing$. 
By Lemma \ref{lemma4.1}, every vertex in $int(C')$ has no neighbors in $ext(C')$. Moreover, by the definitions of $A$ and $w_1$, every vertex in $int(C')$ has at most one neighbor in $\{v_1,v_2\}$. It follows that every vertex in $int(C')$ has at least one neighbor in $int(C')$, as $\delta(G)\geq3$. Thus, $G[int(C')]$ is nonempty, that is, $G_1[int(C')]$ is nonempty. Recall that $G_1$ is acyclic. Then $G_1[int(C')]$ contains at least two pendant vertices, one of which (say $w_2$) is not adjacent to $w_1$. Hence, $w_2$ is also a pendant vertex of $G_1$, as $w_2$ has no neighbors in $ext(C')$. On the other hand, $w_2$ has at most one neighbor in $\{v_1,v_2\}$, and so $d_C(w_2)\leq1$. Therefore, $d_{G}(w_2)=d_{G_1}(w_2)+d_C(w_2)\leq2$, contradicting $\delta(G)\geq3$. Now it remains the case that each vertex of $G_1$ is not adjacent to two consecutive vertices of $C$. Note that $\delta(G)\ge 3$ and $G_1$ is acyclic. Then $G_1$ contains a vertex $w_0$ with $d_{G_1}(w_0)\le 1$, and thus $d_C(w_0)= d_G(w_0)-d_{G_1}(w_0)\ge 2$. Now, since $g\in \{4,5\}$, we may assume without loss of generality that $v_1,v_3\in N_C(w_0)$. Let $A'=\{w\in V(G_1)~|~ v_1,v_3 \in N_C(w)\}$. Clearly, $w_0\in A'$ and $v_2\notin N_C(w)$ for each $w\in A'$. Now, we can select a vertex, say $w_1$, in $A'$ such that $A'\subseteq Ext(C'')$, where $C''=w_1v_1v_2v_3w_1$ (see Figure \ref{fig.-01}). We can see that $int(C'')\neq\varnothing$ (otherwise, $d_G(v_2)=|\{v_1,v_3\}|=2$, a contradiction). By the definition of $w_1$, we have $int(C'')\cap A'=\varnothing$. Furthermore, every vertex in $int(C'')$ has no neighbors in $ext(C'')$ and has at most one neighbor in $\{v_1,v_2,v_3\}.$ Thus, every vertex in $int(C'')$ has at least one neighbor in $int(C'')$. By a similar argument as above, we can find a vertex $w_2\in int(C'')$ with $d_{G}(w_2)=d_{G_1}(w_2)+d_C(w_2)\leq2$, which contradicts $\delta(G)\geq3$. \end{proof} By Claim \ref{claim4.1}, the outer face of $G$ is a triangle $C=v_1v_2v_3v_1$. In the following, we denote $B_i=\{w\in V(G_1)~|~d_C(w)=i\}$ for $i\leq3$. Since $\delta(G)\geq3$, we have $w\in B_3$ for each isolated vertex $w$ of $G_1$, and $w\in B_2\cup B_3$ for each pendant vertex $w$ of $G_1$. \begin{claim}\label{claim4.2} $|B_3|\le 1$ and $|B_2|+|B_3|\ge 2$. \end{claim} \begin{proof} Since $C$ is the outer face of $G$, every vertex of $G_1$ lies in $int(C)$. Furthermore, since $G$ is planar, it is easy to see that $|B_3|\le 1$. This implies that $G_1$ contains at most one isolated vertex. Recall that $|G_1|=n-3\geq3$ and $G_1$ is acyclic. Then $G_1$ contains at least two pendant vertices $w_1$ and $w_2$. Therefore, $|B_2|+|B_3|\ge |\{w_1,w_2\}|=2$. \end{proof} \begin{claim}\label{claim4.3} Let $w_0,w_1$ be two vertices in $V(G_1)$ such that $N_C(w_0)\supseteq \{v_3\}$ and $N_C(w_1)\supseteq\{v_1,v_2\}$ (see Figure \ref{fig.-02}). Then \\ (i) $v_3,w_0\in ext(C''')$, where $C'''=w_1v_1v_2w_1$;\\ (ii) if $w_0\notin B_3$, then $G_1$ contains a pendant vertex in $ext(C''')$. \end{claim} \begin{figure} \caption{A local structure in Claim \ref{claim4.3} \label{fig.-02} \end{figure} \begin{proof} (i) Since $C$ is the outer face and $v_3\in V(C)\setminus V(C''')$, we have $v_3\in ext(C''')$. Furthermore, using $w_0v_3\in E(G)$ and Lemma \ref{lemma4.1} gives $w_0\in ext(C''')$. (ii) Since $w_0\notin B_3$, we have $d_C(w_0)\le 2$, and so $d_{G_1}(w_0)=d_G(w_0)-d_C(w_0)\ge 1$. By (i), we know that $w_0\in ext(C''')$. If $d_{G_1}(w_0)=1$, then $w_0$ is a desired pendant vertex. It remains the case that $d_{G_1}(w_0)\ge2$. 
Now, whether $w_1$ is a neighbor of $w_0$ or not, $w_0$ has at least one neighbor in $V(G_1)\cap ext(C''')$. Thus, $G_1[ext(C''')]$ is nonempty. Recall that $G_1$ is acyclic. Then $G_1[ext(C''')]$ contains at least two pendant vertices, one of which (say $w_2$) is not adjacent to $w_1$. Hence, $w_2$ is also a pendant vertex of $G_1$, as $w_2$ has no neighbors in $int(C''')$. \end{proof} \begin{figure} \caption{Two possible local structures in Claim \ref{claim4.4}.} \label{fig.-03} \end{figure} \begin{claim}\label{claim4.4} Let $w_1,w_2\in V(G_1)$ with $N_{C}(w_1)\cap N_{C}(w_2)\supseteq \{v_1,v_2\}$. Assume that $C'''=w_1v_1v_2w_1$ and $w_2\in int(C''')$ (see Figure \ref{fig.-03}). Then $G$ contains a cycle $C_{v_i}$ such that $V(C_{v_i})\subseteq Int(C''')$ and $V(C_{v_i})\cap V(C)=\{v_i\}$ for each $i\in\{1,2\}$. \end{claim} \begin{proof} We first claim that $N_{C}(w_2)=\{v_1,v_2\}$. By Claim \ref{claim4.3}, we know that $v_3\in ext(C''')$. Now, since $w_2\in int(C''')$, we have $w_2v_3\notin E(G)$ by Lemma \ref{lemma4.1}, and so $N_{C}(w_2)=\{v_1,v_2\}$. Furthermore, we have $d_{G_1}(w_2)\ge 1$. Then $G_1$ contains a path $P$ with endpoints $w_2$ and $w_3$, where $w_3$ is a pendant vertex of $G_1$. If $V(P)\not\subseteq int(C''')$, then by $w_2\in int(C''')$ and Lemma \ref{lemma4.1}, we have $V(P)\cap V(C''')=\{w_1\}$ as $v_1,v_2\notin V(G_1)$. Now let $P'$ be the subpath of $P$ with endpoints $w_2$ and $w_1$. Then $V(P')\setminus\{w_1\}\subseteq int(C''')$, and $G$ contains a cycle $C_{v_i}=v_iw_1P'w_2v_i$ for each $i\in\{1,2\}$, as desired. Next, assume that $V(P)\subseteq int(C''')$. Then, $w_3\in int(C''')$. By $v_3\in ext(C''')$ and Lemma \ref{lemma4.1}, we get that $w_3v_3\notin E(G)$, and so $w_3\notin B_3$. Moreover, $d_{G_1}(w_3)=1$ and $\delta(G)\geq3$ give $w_3\in B_2$. Thus, $N_C(w_3)=\{v_1,v_2\}$. Therefore, $G$ contains a cycle $C_{v_i}=v_iw_2Pw_3v_i$ for each $i\in\{1,2\}$, as desired. \end{proof} Having the above four claims, we are ready to give the final proof of Lemma \ref{lemma4.3}. By Claim \ref{claim4.2}, we have $|B_3|\leq1$ and $|B_2|\ge 1$. We may assume without loss of generality that $w_1\in B_2$ and $N_{C}(w_1)=\{v_1,v_2\}$. For each $i\in \{1,2\}$, let $\overline{i}\in \{1,2\}\setminus \{i\}$. Since $d_{C}(w_1)=2$, we have $d_{G_1}(w_1)\ge 1$. Hence, $G_1$ is nonempty, and so $G_1$ contains at least two pendant vertices. According to the size of $B_3$, we now distinguish two cases to complete the proof. \noindent{\bf{Case 1.}} $|B_3|= 1$. Assume that $B_3=\{w_0\}$. Then $N_C(w_0)=\{v_1,v_2,v_3\}$ (see Figure \ref{fig.-04}). We then consider two subcases according to the size of $B_2$. \begin{figure} \caption{Three possible structures in Case 1. } \label{fig.-04} \end{figure} \noindent{\bf{Subcase 1.1.}} $|B_2|=1$, that is, $B_2=\{w_1\}$. For each pendant vertex $w$ of $G_1$, we have $d_C(w)=d_G(w)-d_{G_1}(w)\geq2$; consequently, $w\in B_2\cup B_3=\{w_1,w_0\}$. This indicates that $G_1$ contains exactly two pendant vertices $w_1$ and $w_0$. Furthermore, we can see that $G_1$ contains no isolated vertices (otherwise, every isolated vertex of $G_1$ has at least three neighbors in $V(C)$ and so belongs to $B_3$, while the unique vertex $w_0\in B_3$ is a pendant vertex of $G_1$). Therefore, $G_1$ is a path of order $n-|C|$ with endpoints $w_1$ and $w_0$. Now we know that $G_1$ is a path with $|G_1|=n-3\geq3$. Let $N_{G_1}(w_0)=\{w_2\}$ and $P''=G_1-\{w_0\}$. Then $P''$ is a path with endpoints $w_1$ and $w_2$. Since $d_{G_1}(w_2)=2$, we have $d_{C}(w_2)\ge 1$.
If $w_2v_3\in E(G)$, then $G$ contains two vertex-disjoint cycles $v_{3}w_0w_2v_{3}$ and $w_1v_{1}v_{2}w_1$, as desired. If $w_2v_i\in E(G)$ for some $i\in \{1,2\}$, then $G$ contains two vertex-disjoint cycles $v_iw_1P''w_2v_i$ and $w_0v_{\overline{i}}v_3w_0$, as desired. \noindent{\bf{Subcase 1.2.}} $|B_2|\ge 2$. Let $w_2\in B_2\setminus \{w_1\}$. If $N_{C}(w_1)=N_{C}(w_2)$, then we may assume that $w_2\in int(C''')$ by the symmetry of $w_1$ and $w_2$, where $C'''=w_1v_1v_2w_1$. By Claim \ref{claim4.4}, $G$ contains a cycle $C_{v_1}$ such that $V(C_{v_1})\subseteq Int(C''')$ and $V(C_{v_1})\cap V(C)=\{v_1\}$. On the other hand, Claim \ref{claim4.3} implies that $w_0\in ext(C''')$. Hence, $w_0\notin V(C_{v_1})$. Therefore, $G$ contains two vertex-disjoint cycles $C_{v_1}$ and $w_0v_2v_3w_0$, as desired. It remains the case that $N_{C}(w_1)\neq N_{C}(w_2)$. Now $N_{C}(w_2)=\{v_i,v_3\}$ for some $i\in\{1,2\}$. We define $C'''=w_0v_1v_2w_0$ instead of the original one in Claim \ref{claim4.4}. Then $w_1\in int(C''')$. Moreover, $w_2\in ext(C''')$ as $w_2v_3\in E(G)$. By Claim \ref{claim4.4}, there exists a cycle $C_{v_{\overline{i}}}$ such that $V(C_{v_{\overline{i}}})\subseteq Int(C''')$ and $V(C_{v_{\overline{i}}})\cap V(C)=\{v_{\overline{i}}\}$. Therefore, $G$ contains two vertex-disjoint cycles $C_{v_{\overline{i}}}$ and $w_2v_iv_3w_2$, as desired. \noindent{\bf{Case 2.}} $|B_3|=0$. Recall that $A=\{w\in V(G_1)~|~ v_1,v_2\in N_C(w)\}$. Since $|B_3|=0$, we can see that $N_C(w)=N_C(w_1)=\{v_1,v_2\}$ for each $w\in A$. We may assume without loss of generality that $A\subseteq Int(C''')$ by the symmetry of vertices in $A$. By Claim \ref{claim4.3}, there exists a pendant vertex $w_3$ of $G_1$ in $ext(C''')$, which implies that $d_C(w_3)\geq2$. Since $|B_3|=0$, we have $w_3\in B_2$, and thus $B_2\supseteq\{w_1,w_3\}$. Moreover, $w_3\notin A$ as $A\subseteq Int(C''')$. Assume without loss of generality that $N_{C}(w_3)=\{v_1,v_3\}$ (see Figure \ref{fig.-05}). We also consider two subcases according to $|B_2|$. \begin{figure} \caption{Three possible structures in Case 2.} \label{fig.-05} \end{figure} \noindent{\bf{Subcase 2.1.}} $|B_2|=2$, that is, $B_2=\{w_1,w_3\}$. Since $d_C(w_3)=2$, we have $d_{G_1}(w_3)\geq1$, which implies that $G_1$ is non-empty and has at least two pendant vertices. On the other hand, since $\delta(G)\geq3$ while $B_3=\varnothing$, we can see that $G_1$ contains no isolated vertices, and $w\in B_2=\{w_1,w_3\}$ for each pendant vertex $w$ of $G_1$. Therefore, $G_1$ contains exactly two pendant vertices $w_1$ and $w_3$, more precisely, $G_1$ is a path with endpoints $w_1$ and $w_3$. Let $w$ be an arbitrary vertex in $V(G_1)\setminus \{w_1,w_3\}$. Then, $d_C(w)=d_G(w)-d_{G_1}(w)=d_G(w)-2\ge 1.$ If $wv_2\in E(G)$, then $G$ contains two vertex-disjoint cycles $v_2w_1P'''wv_2$ and $v_1w_3v_3v_1$, where $P'''$ is the subpath of $G_1$ from $w_1$ to $w$ (see Figure \ref{fig.-05}($a$)). If $wv_3\in E(G)$, then $G$ contains two vertex-disjoint cycles $v_3w_3P'''wv_3$ and $v_1w_1v_2v_1$, where $P'''$ is the subpath of $G_1$ from $w_3$ to $w$ (see Figure \ref{fig.-05}($a$)). If $N_C(w)=\{v_1\}$ for each $w\in V(G_1)\setminus \{w_1,w_3\},$ then $G\cong K_1+C_{n-1}$, as desired. \noindent{\bf{Subcase 2.2.}} $|B_2|\ge 3$. For each vertex $w\in B_2$, it is clear that $N_C(w)$ is one of $\{v_1,v_2\}$, $\{v_1,v_3\}$ and $\{v_2,v_3\}$. We first consider the case that there exist two vertices in $B_2$ which have the same neighbors in $C$. 
Without loss of generality, assume that we can find a vertex $w_2\in B_2$ with $N_C(w_2)=N_C(w_1)=\{v_1,v_2\}$. Then $w_2\in A$. Recall that $A\subseteq Int(C''')$ and $C'''=w_1v_1v_2w_1$ (see Figure \ref{fig.-05}($b$)). Then, we can further get that $w_2\in int(C''')$. By Claim \ref{claim4.4}, there exists a cycle $C_{v_2}$ such that $V(C_{v_2})\subset Int(C''')$ and $V(C_{v_2})\cap V(C)=\{v_2\}$. On the other hand, Claim \ref{claim4.3} implies that $w_3\in ext(C''')$. Hence, $w_3\notin V(C_{v_2})$. Therefore, $G$ contains two vertex-disjoint cycles $C_{v_2}$ and $w_3v_1v_3w_3$, as desired. Now it remains the case that any two vertices in $B_2$ have different neighborhoods in $C$. This implies that $|B_2|=3$ and we can find a vertex $w_2\in B_2$ with $N_{C}(w_2)=\{v_2,v_3\}$. Now we have $B_2=\{w_1,w_2,w_3\}$. Furthermore, since $\delta(G)\ge 3$ and $B_3=\varnothing$, we have $d_{G_1}(w)\geq1$ for each $w\in V(G_1)$, and if $d_{G_1}(w)=1$, then $w\in B_2$. Since $|B_2|=3$, we can see that $G_1$ has only one connected component, that is, $G_1$ is a tree and some $w_i$, say $w_2$, is a pendant vertex of $G_1$. Now, $G_1-\{w_2\}$ contains a subpath $P''''$ with endpoints $w_1$ and $w_3$ (see Figure \ref{fig.-05}($c$)). Then $G$ contains two vertex-disjoint cycles $v_1w_1P''''w_3v_1$ and $w_2v_2v_3w_2$, as desired. This completes the proof of Lemma \ref{lemma4.3}. \end{proof} Let $\mathcal{G}^*_n$ be the family of graphs obtained from $2K_1+C_3$ and an independent set of size $n-5$ by joining each vertex of the independent set to arbitrary two vertices of the triangle (see Figure \ref{fig.02}). Clearly, every graph in $\mathcal{G}^*_n$ is planar. Now, let $\mathcal{G}_n$ be the family of planar graphs obtained from $2K_1+C_3$ by iteratively adding vertices of degree 2 until the resulting graph has $n$ vertices. Then $\mathcal{G}^*_n\subseteq \mathcal{G}_n$. \begin{figure} \caption{An extremal graph in $\mathcal{G} \label{fig.02} \end{figure} \begin{lem}\label{lemma4.4} For any graph $G\in\mathcal{G}_n$, $G$ is $2\mathcal{C}$-free if and only if $G\in\mathcal{G}^*_n$. \end{lem} \begin{proof} Let $V_1:=\{v_1,v_2,v_3\}$ be the set of vertices of degree 4 and $V_2:=\{w_1,w_2\}$ be the set of vertices of degree 3 in $2K_1+C_3$, respectively. Then $V_1$ induces a triangle. We first show that every graph $G$ in $\mathcal{G}^*_n$ is $2\mathcal{C}$-free. It suffices to prove that every cycle of $G$ contains at least two vertices in $V_1$. Let $C$ be an arbitrary cycle of $G$. If $V(C)\subseteq V_1$, then there is nothing to prove. It remains the case that there exists a vertex $w\in V(C)\setminus V_1$. By the definition of $\mathcal{G}^*_n$, we can see that $N_C(w)\subseteq N_{G}(w)\subseteq V_1$. Note that $|N_C(w)|\geq2$. Hence, $C$ contains at least two vertices in $V_1$. In the following, we will show that every graph $G\in \mathcal{G}_n\setminus \mathcal{G}^*_n$ contains two vertex-disjoint cycles. By the definition of $\mathcal{G}_n$, $G$ is obtained from $2K_1+C_3$ by iteratively adding $n-5$ vertices $u_1,u_2,\dots,u_{n-5}$ of degree 2. Now, let $G_{n-5}=G$, and $G_{i-1}=G_i-\{u_i\}$ for $i\in\{1,2,\ldots,n-5\}$. Then $G_0\cong 2K_1+C_3$. Moreover, $|G_i|=i+5$ and $d_{G_i}(u_{i})=2$ for each $i\in \{1,2,\dots,n-5\}$. Now let $$i^*=\max\{i~|~0\leq i \leq n-5,~ G_i\in \mathcal{G}^*_{i+5}\}.$$ Since $G_0=2K_1+C_3\in \mathcal{G}^*_{5}$ and $G_{n-5}\notin \mathcal{G}^*_n$, we have $0\leq i^*\leq n-6$. 
By the choice of $i^*$, we know that $G_{i^*}\in \mathcal{G}^*_{i^*+5}$ and $G_{i^*+1}\notin \mathcal{G}^*_{i^*+6}$, which implies that $N_{G_{i^*}}(u_i)\subseteq V_1$ for each $i\leq i^*$ and $N_{G_{i^*+1}}(u_{i^*+1})\not\subseteq V_1$. \begin{figure} \caption{An extremal graph in $\mathcal{G}$} \label{fig.10} \end{figure} Now we may assume that $G_{n-5}$ is a planar embedding of $G$, and $G_0$ is a plane subgraph of $G_{n-5}$. Observe that $2K_1+C_3$ has six planar embeddings (see Figure \ref{fig.10}). Without loss of generality, assume that $G_0$ is the leftmost graph in Figure \ref{fig.10}. Then, $u_{i^*+1}$ lies in one of the following regions (see Figure \ref{fig.10}): $$ext(w_2v_1v_2w_2), int(w_2v_1v_3w_2), int(w_2v_2v_3w_2),$$ $$int(w_1v_1v_2w_1), int(w_1v_1v_3w_1), int(w_1v_2v_3w_1).$$ By Lemma \ref{lemma4.2}, we can assume that $u_{i^*+1}$ lies in the outer face, that is, $u_{i^*+1}\in ext(w_2v_1v_2w_2)$. For simplicity, we denote $C'=w_2v_1v_2w_2$. Let $u$ be an arbitrary vertex with $uv_3\in E(G_{i^*+1})$. Then by Lemma \ref{lemma4.1} and $v_3\in int(C')$, we have $u\in int(C')$, and thus $uu_{i^*+1}\notin E(G_{i^*+1})$. This implies that $N_{G_{i^*+1}}(u_{i^*+1})\subseteq V(C')\cup W_{12}$, where $W_{12}=\{u~|~u\in V(G_{i^*+1}),~N_{G_{i^*+1}}(u)=\{v_1,v_2\}\}$. Recall that $d_{G_{i^*+1}}(u_{i^*+1})=2$ and $N_{G_{i^*+1}}(u_{i^*+1})\not\subseteq V_1$. Then, $|N_{G_{i^*+1}}(u_{i^*+1})\cap\{v_1,v_2\}|\le 1$. If $|N_{G_{i^*+1}}(u_{i^*+1})\cap\{v_1,v_2\}|= 1$, then we may assume without loss of generality that $v_1\in N_{G_{i^*+1}}(u_{i^*+1})$, and $u'\in N_{G_{i^*+1}}(u_{i^*+1})\setminus \{v_1\}$. Since $N_{G_{i^*+1}}(u_{i^*+1})\subseteq V(C')\cup W_{12}$, we have $u'\in \{w_2\}\cup W_{12}$, and so $u'v_1\in E(G_{i^*+1})$. Thus, $G_{n-5}$ contains two vertex-disjoint cycles $u_{i^*+1}v_1u'u_{i^*+1}$ and $w_1v_2v_3w_1$, as desired. Now consider the case that $|N_{G_{i^*+1}}(u_{i^*+1})\cap\{v_1,v_2\}|=0$. This implies that $N_{G_{i^*+1}}(u_{i^*+1})\subseteq \{w_2\}\cup W_{12}$. Let $N_{G_{i^*+1}}(u_{i^*+1})=\{u',u''\}$. Then $u'v_1,u''v_1\in E(G_{i^*+1})$. Therefore, $G_{n-5}$ contains two vertex-disjoint cycles $u_{i^*+1}u'v_1u''u_{i^*+1}$ and $w_1v_2v_3w_1$. \end{proof} Given a graph $G$, let $\widetilde{G}$ be the largest induced subgraph of $G$ with minimum degree at least 3. It is easy to see that $\widetilde{G}$ can be obtained from $G$ by iteratively removing the vertices of degree at most 2 until the resulting graph has minimum degree at least 3 or is empty. It is well known that $\widetilde{G}$ is unique and does not depend on the order of vertex deletion (see \cite{Pittel}). In the following, we give the proof of Theorem \ref{theorem1.4}. \begin{proof} Let $n\geq5$ and $G$ be an extremal graph corresponding to ${\rm ex}_{\mathcal{P}}(n,2\mathcal{C})$. Observe that $K_2+(P_3\cup (n-5)K_1)$ is a planar graph which contains no two vertex-disjoint cycles (see Figure \ref{fig.02}). Thus, $e(G)\ge e(K_2+(P_3\cup (n-5)K_1))=2n-1$. If $\widetilde{G}$ is empty, then we define $G'$ as the graph obtained from $G$ by iteratively removing the vertices of degree at most 2 until the resulting graph has 4 vertices. By the planarity of $G'$, we have $e(G')\leq 3|G'|-6=6$, and thus $$e(G)\le e(G')+2(n-4)\leq6+2n-8=2n-2,$$ a contradiction. Now we know that $\widetilde{G}$ is nonempty. Then, $\widetilde{G}$ contains no two vertex-disjoint cycles as $\widetilde{G}\subseteq G$. By the definition of $\widetilde{G}$, we have $\delta(\widetilde{G})\ge 3$.
By Lemma \ref{lemma4.3}, we get that $\widetilde{G}\in \{2K_1+C_3,K_1+C_{|\widetilde{G}|-1}\}$. If $\widetilde{G}\cong K_1+C_{|\widetilde{G}|-1}$, then $$e(G)\le e(\widetilde{G})+2(n-|\widetilde{G}|)=2(|\widetilde{G}|-1)+2(n-|\widetilde{G}|)=2n-2,$$ a contradiction. Thus, $\widetilde{G}\cong 2K_1+C_3$. Now, $e(G)\le e(\widetilde{G})+2(n-5)=2n-1$. Therefore, $e(G)=2n-1$, which implies that ${\rm ex}_{\mathcal{P}}(n,2\mathcal{C})=2n-1$ and $G\in \mathcal{G}_n$. By Lemma \ref{lemma4.4}, we have $G\in \mathcal{G}^*_n$. This completes the proof of Theorem \ref{theorem1.4}. \end{proof} \section{Proof of Theorem \ref{theorem1.5}} We shall further introduce some notations on a plane graph $G$. A vertex or an edge of $G$ is said to be \emph{incident} with a face $F$, if it lies on the boundary of $F$. Clearly, every edge of $G$ is incident with at most two faces. A face of size $i$ is called an $i$-face. The numbers of $i$-faces and total faces are denoted by $f_i(G)$ and $f(G)$, respectively. Let $E_3(G)$ be the set of edges incident with at least one $3$-face, and particularly, let $E_{3,3}(G)$ be the set of edges incident with two $3$-faces. Moreover, let $e_{3}(G)$ and $e_{3,3}(G)$ denote the cardinalities of $E_{3}(G)$ and $E_{3,3}(G)$, respectively. We can easily see that $3f_3(G)=e_{3}(G)+e_{3,3}(G).$ Lan, Shi and Song proved that $ex_{\mathcal{P}}(n,K_1+P_3)\le \frac{12(n-2)}{5}$, with equality when $n\equiv12 \pmod{20}$ (see \cite{BT}), and $ex_{\mathcal{P}}(n,K_1+P_{k+1})\le \frac{13kn}{4k+2}-\frac{12k}{2k+1}$ for $ k\in \{3,4,5\}$ (see \cite{D}). For $k\ge 6$, one can easily see that $ex_{\mathcal{P}}(n,K_1+P_{k+1})=3n-6$. In \cite{Fang}, the authors obtained the following sharp result. \begin{lem}\label{lemma5.1}\emph{(\cite{Fang})} Let $n,k$ be two integers with $k\in \{2,3,4,5\}$ and $n\ge \frac{12}{6-k}+1$. Then $ex_{\mathcal{P}}(n,K_1+P_{k+1})\le \frac{24k}{7k+6}(n-2)$, with equality if $n\equiv{\frac{12(k+2)}{6-k}}\pmod{{\frac{28k+24}{6-k}}}$. \end{lem} To prove Theorem \ref{theorem1.5}, we also need an edge-extremal result on outerplanar graphs. Let ${\rm ex}_{\mathcal{OP}}(n,C_k)$ denote the maximum number of edges in an $n$-vertex $C_k$-free outerplanar graph. \begin{lem}\label{lemma5.2} \emph{(\cite{Fang2})} Let $n,k,\lambda$ be three integers with $n\geq k\ge 3$ and $\lambda=\lfloor\frac{kn-2k-1}{k^2-2k-1}\rfloor+1$. Then $${\rm ex}_{\mathcal{OP}}(n,C_k)=\left\{ \begin{array}{ll} 2n-\lambda+2\big\lfloor\frac{\lambda}{k}\big\rfloor-3 & \hbox{if $k\mid \lambda$,} \\ 2n-\lambda+2\big\lfloor\frac{\lambda}{k}\big\rfloor-2 & \hbox{otherwise.} \end{array} \right. $$ \end{lem} In particular, we can obtain the following corollary. \begin{cor}\label{cor5.1} $${\rm ex}_{\mathcal{OP}}(n-1,C_4)=\left\{ \begin{array}{ll} \frac{12}{7}n-5 & \hbox{if $7\mid n$,} \\ \big\lfloor\frac{12n-27}{7}\big\rfloor~~~ & \hbox{otherwise.} \end{array} \right. $$ \end{cor} \begin{figure} \caption{The constructions of $G,G_1,G_2,\ldots,G_a$. } \label{figu.2} \end{figure} For arbitrary integer $n\ge 4$, we can find a unique $(a,b)$ such that $a\ge 0$, $1\le b \le 7$ and $n-1=7a+b+2$. Let $G$ be a $9$-vertex outerplanar graph and $G_1,\ldots,G_a$ be $a$ copies of $G$ (see Figure \ref{figu.2}). Then, we define $G_0$ as the subgraph of $G$ induced by $\{u_1,u_2\}\cup \{v_1,v_2,\dots,v_b\}$. 
One can check that $|G_0|=b+2$ and $$ e(G_0)= \left\{ \begin{array}{ll} \frac{12(b+2)-23}{7} & \hbox{if $7\mid (b+2-6)$,} \\ \big\lfloor\frac{12(b+2)-15}{7}\big\rfloor~~~ & \hbox{otherwise.} \end{array} \right.$$ We now construct a new graph $G^*$ from $G_0,G_1,\ldots,G_a$ by identifying the edges $e_{2i}$ and $e_{2i+1}$ for each $i\in \{0,\ldots,a-1\}$. Clearly, $G^*$ is a connected $C_4$-free outerplanar graph with $$|G^*|=\sum_{i=0}^{a}|G_i|-2a=(2+b)+9a-2a=n-1.$$ Moreover, since $n\equiv b+2-6\,(\!\!\!\mod 7),$ we have $$ e(G^*)= \sum_{i=0}^{a}e(G_i)-a=e(G_0)+12a=\left\{ \begin{array}{ll} \frac{12}{7}n-5 & \hbox{if $7\mid n$,} \\ \big\lfloor\frac{12n-27}{7}\big\rfloor~~~ & \hbox{otherwise.} \end{array} \right.$$ Combining Corollary \ref{cor5.1}, $G^*$ is an extremal graph corresponding to ${\rm ex}_{\mathcal{OP}}(n-1,C_4)$. \begin{lem}\label{lemma5.3} Let $n\ge 2661$ and $G^{**}$ be an extremal plane graph corresponding to ${\rm ex}_{\mathcal{P}}(n,2C_4)$. Then $G^{**}$ contains at least fourteen quadrilaterals and all of them share exactly one vertex. \end{lem} \begin{proof} Note that $e(G^*)={\rm ex}_{\mathcal{OP}}(n-1,C_4)\ge \frac{12}{7}n-5$ and $G^*$ is a $C_4$-free outerplanar graph of order $n-1$. Then $K_1+G^*$ is an $n$-vertex $2C_4$-free planar graph, and thus \begin{align*} e(G^{**})\ge e(K_1+G^*)=e(G^*)+n-1\geq\frac{19}{7}n-6. \end{align*} On the other hand, by Lemma \ref{lemma5.1}, we have $$ex_{\mathcal{P}}(n,K_1+P_4)\le \frac{8}{3}(n-2).$$ Note that $\frac{19}{7}n-6>\frac{8}{3}(n-2)$ for $n\ge 2661$. Then $G^{**}$ contains a copy, say $H_1$, of $K_1+P_4$. Let $G_1$ be the graph obtained from $G^{**}$ by deleting all edges within $V(H_1)$. Since $|H_1|=5$, we have $e(G_1)\geq e(G^{**})-(3|H_1|-6)=e(G^{**})-9>\frac{8}{3}(n-2)$. Thus, $G_1$ contains a copy, say $H_2$, of $K_1+P_4$. Now we can obtain a new graph $G_2$ from $G_1$ by deleting all edges within $V(H_2)$. Note that $e(G^{**})-14\times9>\frac{8}{3}(n-2)$. Repeating above steps, we can obtain a graph sequence $G_1,G_2,\ldots,G_{14}$ and fourteen copies $H_1,H_2,\cdots,H_{14}$ of $K_1+P_4$ such that $H_i\subseteq G_{i-1}$ and $G_{i}$ is obtained from $G_{i-1}$ by deleting all edges within $V(H_i)$. This also implies that $G^{**}$ contains at least fourteen quadrilaterals. We next give four claims on those copies of $K_1+P_4$. \begin{claim}\label{claim5.1} Let $i,j$ be two integers with $1\le i<j\le 14$ and $v\in V(H_i)\cap V(H_j)$. Then, $V(H_i)\cap N_{H_j}(v)=\varnothing$. \end{claim} \begin{proof} Suppose to the contrary that there exists a vertex $w\in V(H_i)\cap N_{H_j}(v)$. Note that $v,w\in V(H_i)$. By the definition of $G_i$, whether $vw\in E(H_i)$ or not, we can see that $vw\notin E(G_i)$. On the other hand, note that $H_j\subseteq G_{j-1}\subseteq G_i$, then $vw\in E(H_j)\subseteq E(G_i)$, contradicting $vw\notin E(G_i).$ Hence, the claim holds. \end{proof} \begin{claim}\label{claim5.2} $|V(H_i)\cap V(H_j)|\in\{1,2\}$ for any two integers $i,j$ with $1\le i<j\le 14$. \end{claim} \begin{proof} If $H_i$ and $H_j$ are vertex-disjoint, then $G^{**}$ contains $2C_4$, a contradiction. Now suppose that there exist three vertices $v_1,v_2,v_3\in V(H_i)\cap V(H_j)$. Observe that $K_1+P_4$ contains no an independent set of size $3$. Then $H_j[\{v_1,v_2,v_3\}]$ is nonempty. Assume without loss of generality that $v_1v_2\in E(H_j)$. Then $v_2\in V(H_i)\cap N_{H_j}(v_1)$, which contradicts Claim \ref{claim5.1}. Therefore, $1\le |V(H_i)\cap V(H_j)|\le 2$. 
\end{proof} Now for convenience, a vertex $v$ in a graph $G$ is called a \emph{$2$-vertex} if $d_G(v)=2$, and a \emph{$2^+$-vertex} if $d_G(v)>2.$ Clearly, every copy of $K_1+P_4$ contains two 2-vertices and three $2^+$-vertices. \begin{claim}\label{claim5.3} Let $\mathcal{H}$ be the family of graphs $H_i$ ($1\leq i\leq14$) such that every 2-vertex in $H_i$ is a $2^+$-vertex in $H_1$. Then $|\mathcal{H}|\le 3$. \end{claim} \begin{proof} Note that $H_1$ contains only three $2^+$-vertices, say $v_1,v_2$ and $v_3$. Then every graph $H_i\in \mathcal{H}$ must contain two of $v_1,v_2$ and $v_3$ as $2$-vertices. Suppose to the contrary that $|\mathcal{H}|\ge 4$. By pigeonhole principle, there exist two graphs $H_{i_1},H_{i_2}\in \mathcal{H}$ such that they contain the same two 2-vertices, say $v_1,v_2$. It follows that $H_{i_j}-\{v_j\}$ contains a $4$-cycle for $j\in \{1,2\}$. By Claim \ref{claim5.2}, we have $V(H_{i_1})\cap V(H_{i_2})=\{v_1,v_2\}$, which implies that $H_{i_1}-\{v_1\}$ and $H_{i_2}-\{v_2\}$ are vertex-disjoint. Hence, $G^{**}$ contains two vertex-disjoint 4-cycles, a contradiction. \end{proof} \begin{claim}\label{claim5.4} Let $j$ be an integer with $2\leq j\leq14$ and $H_j\notin \mathcal{H}$. Then, there exists a vertex $v\in V(H_1)\cap V(H_{j})$ such that $d_{H_1}(v)\ge 3$ and $d_{H_{j}}(v)\ge 3$. \end{claim} \begin{proof} By Claim \ref{claim5.2}, we have $1\le |V(H_1)\cap V(H_j)|\le 2$. We first assume that $V(H_1)\cap V(H_j)=\{u\}$. If $d_{H_1}(u)\ge 3$ and $d_{H_j}(u)\ge3$, then there is nothing to prove. If $d_{H_1}(u)=2$, then $G^{**}$ contains two vertex-disjoint subgraphs $H_1-\{u\}$ and $H_j$, and thus $2C_4$, a contradiction. If $d_{H_j}(u)=2$, then we can similarly get a contradiction. Therefore, $|V(H_1)\cap V(H_j)|=2$. Now, assume that $V(H_1)\cap V(H_j)=\{u_1,u_2\}$. We first deal with the case $d_{H_j}(u_1)$ $=d_{H_j}(u_2)=2$. Since $H_j\notin \mathcal{H}$, one of $\{u_1,u_2\}$, say $u_1$, is a 2-vertex in $H_1$. Hence, $G^{**}$ contains two vertex-disjoint subgraphs $H_1-\{u_1\}$ and $H_j-\{u_2\}$, and so $2C_4$, a contradiction. Thus, there exists some $i\in \{1,2\}$ with $d_{H_j}(u_i)\geq 3$. If $d_{H_1}(u_i)\ge 3$, then we are done. If $d_{H_1}(u_i)=2$, then we define $H_j'$ as the subgraph of $H_j$ induced by $N_{H_j}(u_i)\cup\{u_i\}$. Since $d_{H_j}(u_i)\ge 3$, we can check that $H_j'$ always contains a $C_4$. Moreover, since $d_{H_1}(u_i)=2$, we can see that $H_1-\{u_i\}$ also contains a $C_4$. On the other hand, by Claim \ref{claim5.1}, we have $N_{H_j}(u_i)\cap V(H_1)=\varnothing$, which implies that $H_j'$ and $H_1-\{u_i\}$ are vertex-disjoint. Therefore, $G^{**}$ contains $2C_4$, a contradiction. \end{proof} By Claim \ref{claim5.3}, $|\mathcal{H}|\leq 3$, thus there are at least ten graphs in $\{H_j~|~2\leq j \leq 14\}\setminus \mathcal{H}$. However, $H_1$ has only three $2^+$-vertices. By Claim \ref{claim5.4} and pigeonhole principle, there exists a $2^+$-vertex $w$ in $H_1$ and four graphs, say $H_2,H_3,H_4,H_5$, of $\{H_j~|~2\leq j \leq 14\}\setminus \mathcal{H}$. By Claim \ref{claim5.1}, we get that $N_{H_j}(w)\cap V(H_i)=\varnothing$, and so $N_{H_j}(w)\cap N_{H_i}(w)=\varnothing$, for any $i,j$ with $1\leq i<j\leq 5$. If $G^{**}-\{w\}$ contains a quadrilateral $C'$, then there exists some $j'\le 5$ such that $N_{H_{j'}}(w)\cap V(C')=\varnothing$ as $|C'|=4$. Since $w$ is a $2^+$-vertex in $H_{j'}$, we can observe that the subgraph of $H_{j'}$ induced by $N_{H_{j'}}(w)\cup\{w\}$ must contain a $C_4$. 
Consequently, $G^{**}$ is not $2C_4$-free, a contradiction. Thus, $G^{**}-\{w\}$ is $C_4$-free, which implies that all quadrilaterals of $G^{**}$ share exactly one vertex. This completes the proof of Lemma \ref{lemma3.3}. \end{proof} Now we are ready to give the proof of Theorem \ref{theorem1.5}. \begin{proof} Recall that $G^*$ is an extremal graph corresponding to ${\rm ex}_{\mathcal{OP}}(n-1,C_4)$. Then $K_1+G^*$ is planar and $2C_4$-free. By Corollary \ref{cor5.1}, we have \begin{align}\label{align.-1} e(K_1+G^*)={\rm ex}_{\mathcal{OP}}(n-1,C_4)+n-1=\left\{ \begin{array}{ll} \frac{19}{7}n-6 & \hbox{if $7\mid n$,} \\ \big\lfloor\frac{19n-34}{7}\big\rfloor~ & \hbox{otherwise.} \end{array} \right. \end{align} To prove Theorem \ref{theorem1.5}, it suffices to show ${\rm ex}_{\mathcal{P}}(n,2C_4)=e(K_1+G^*).$ Since $G^{**}$ is an extremal plane graph corresponding to ${\rm ex}_{\mathcal{P}}(n,2C_4)$, we have $e(G^{**})\ge e(K_1+G^*)$. In the following, we show that $e(G^{**})\le e(K_1+G^*)$. \begin{figure} \caption{Two possible structures of $H(e)$. } \label{fig.-11} \end{figure} By Lemma \ref{lemma5.3}, all quadrilaterals of $G^{**}$ share a vertex $w$. Thus, $G^{**}-\{w\}$ is $C_4$-free. Assume that $d_{G^{**}}(w)=s$ and $w_1,\dots,w_s$ are around $w$ in clockwise order, with subscripts interpreted modulo $s$. Let $e$ be an arbitrary edge in $E_{3,3}(G^{**})$, that is, $e$ is incident with two 3-faces, say $F$ and $F'$. We define $H(e)$ as the plane subgraph induced by all edges incident with $F$ and $F'$. Clearly, $H(e)\cong K_1+P_3$ and so it contains a $C_4$. Recall that all quadrilaterals of $G^{**}$ share exactly one vertex $w$. Then, $w\in V(H_e)$ and $w$ is incident with at least one face of $F$ and $F'$ (see Figure \ref{fig.-11}). Note that $e$ is incident with $F$. Then, either $e=ww_i$ or $e=w_iw_{i+1}$ for some $i\in \{1,2,\dots,s\}$. By the choice of $e$, we have \begin{align}\label{align.-02} E_{3,3}(G^{**})\subseteq \{ww_{i},w_iw_{i+1}~|~1\leq i\leq s\}. \end{align} Assume first that $f_4(G^{**})=t\geq1$ and $F_1,\dots,F_t$ are 4-faces in $G^{**}$. Since every 4-face is a quadrilateral, $w$ is incident with each 4-face. Consequently, there exists $j_i\in \{1,\dots,s\}$ such that $ww_{j_i},ww_{j_i+1}$ are incident with $F_i$ for each $i\in \{1,\dots,t\}$. Thus, $ww_{j_i}\notin E_{3,3}(G^{**})$ for $1\le i \le t$. On the other hand, if $w_{j_i}w_{j_i+1}\in E_{3,3}(G^{**})$, then $H(w_{j_i}w_{j_i+1})$ contains a $C_4$, and so $w\in V(H(w_{j_i}w_{j_i+1}))$. This implies that $ww_{j_i}w_{j_i+1}w$ is a 3-face in $G^{**}$, contradicting the fact that $ww_{j_i},ww_{j_i+1}$ are incident with the 4-face $F_i$. Thus, we also have $w_{j_i}w_{j_i+1}\notin E_{3,3}(G^{**})$ for $1\le i \le t$. By the argument above, we can see that \begin{align}\label{align.-03} E_{3,3}(G^{**})\cap \{ww_{j_i},w_{j_i}w_{j_i+1}~|~1\le i \le t\}=\varnothing. \end{align} Using (\ref{align.-02}) and (\ref{align.-03}) gives $e_{3,3}(G^{**})\le 2s-2t=2s-2f_4(G^{**})$. Hence, \begin{align}\label{align.-04} 3f_3(G^{**})=e_3(G^{**})+e_{3,3}(G^{**})\le e(G^{**})+2s-2f_4(G^{**}). \end{align} On the other hand, \begin{align*} 2e(G^{**})=\sum_{i\ge 3}if_i(G^{**})\ge 3f_3(G^{**})+4f_4(G^{**})+5(f(G^{**})-f_3(G^{**})-f_4(G^{**})), \end{align*} which yields $f(G^{**})\le\frac{1}{5}\left(2e(G^{**})+2f_3(G^{**})+f_4(G^{**})\right).$ Combining this with Euler's formula $f(G^{**})=e(G^{**})-(n-2)$, we obtain \begin{align}\label{align.3} e(G^{**})\le \frac{5}{3}(n-2)+\frac{2}{3}f_3(G^{**})+\frac{1}{3}f_4(G^{**}). 
\end{align} If $f_4(G^{**})=t=0$, then (\ref{align.-04}) and (\ref{align.3}) hold directly. Combining (\ref{align.-04}) and (\ref{align.3}), we have $e(G^{**})\le \frac{15}{7}(n-2)+\frac{4}{7}s-\frac{1}{7}f_4(G^{**})$. Recall that $d_{G^{**}}(w)=s\leq n-1$. If $s\le n-2$, then $e(G^{**})\leq \lfloor\frac{19}{7}(n-2)\rfloor\leq e(K_1+G^*)$ by (\ref{align.-1}), as desired. If $s=n-1$, then $w$ is a dominating vertex of the planar graph $G^{**}$, which implies that $G^{**}-\{w\}$ is outerplanar. Recall that $G^{**}-\{w\}$ is $C_4$-free. Thus, $e(G^{**}-\{w\})\le {\rm ex}_{\mathcal{OP}}(n-1,C_4)$, and so $e(G^{**})\leq {\rm ex}_{\mathcal{OP}}(n-1,C_4)+n-1$. Combining (\ref{align.-1}), we get $e(G^{**})\leq e(K_1+G^*)$, as required. This completes the proof of Theorem \ref{theorem1.5}. \end{proof} \end{document}
\begin{document} \title{Quantile Treatment Effects and Bootstrap Inference under Covariate-Adaptive Randomization\thanks{We are grateful to Federico Bugni, Qu Feng, Sukjin Han, Yu-Chin Hsu, Shakeeb Khan, Frank Kleibergen, Michael Leung, Jia Li, Wenjie Wang, and seminar participants at NTU, Financial Econometrics and New Finance Conference at Zhejiang University, SH3 Conference on Econometrics at SMU, 2019 Shanghai Workshop of Econometrics, and Asian Meeting of the Econometric Society for their valuable comments. We also thank the editor and two anonymous referees for their valuable comments, which greatly improved our paper. Zhang acknowledges the financial support from Singapore Ministry of Education Tier 2 grant under grant no. MOE2018-T2-2-169 and the Lee Kong Chian fellowship. Any and all errors are our own.}} \begin{abstract} In this paper, we study the estimation and inference of the quantile treatment effect under covariate-adaptive randomization. We propose two estimation methods: (1) the simple quantile regression and (2) the inverse propensity score weighted quantile regression. For the two estimators, we derive their asymptotic distributions uniformly over a compact set of quantile indexes, and show that, when the treatment assignment rule does not achieve strong balance, the inverse propensity score weighted estimator has a smaller asymptotic variance than the simple quantile regression estimator. For the inference of method (1), we show that the Wald test using a weighted bootstrap standard error under-rejects. But for method (2), its asymptotic size equals the nominal level. We also show that, for both methods, the asymptotic size of the Wald test using a covariate-adaptive bootstrap standard error equals the nominal level. We illustrate the finite sample performance of the new estimation and inference methods using both simulated and real datasets. \\ \noindent \textbf{Keywords:} Bootstrap inference, quantile treatment effect \\ \noindent \textbf{JEL codes:} C14, C21 \end{abstract} \section{Introduction} The randomized control trial (RCT), as pointed out by \cite{AP08}, is one of the five most common methods (along with instrumental variable regressions, matching estimations, differences-in-differences, and regression discontinuity designs) for causal inference. Researchers can use the RCT to estimate not only average treatment effects (ATEs) but also quantile treatment effects (QTEs), which capture the heterogeneity of the sign and magnitude of treatment effects across the outcome distribution. For example, \cite{MS11} estimate the QTE of a teacher performance pay program on student learning via the difference of empirical quantiles of test scores between treatment and control groups. \cite{DGPR13} and \cite{BDGK15} estimate the QTEs of audits on endline pollution and of a group-lending microcredit program on informal borrowing, respectively, via linear quantile regressions (QRs). \cite{CDDP15} estimate the QTE of microcredit on various household outcomes via a minimum distance method. \cite{BNM18} estimate the QTE of being informed on energy use via the inverse propensity score weighted (IPW) QR. Except for \cite{CDDP15}, the other four papers all use the bootstrap to construct confidence intervals for their QTE estimates. However, RCTs have also been routinely implemented with covariate-adaptive randomization.
Individuals are first stratified based on some baseline covariates, and then, within each stratum, the treatment status is assigned (independent of covariates) to achieve some balance between the sizes of treatment and control groups; as examples, see \citet[Chapter 9]{IR05} for a textbook treatment of the topic, and \cite{DGK07} and \cite{B09} for two excellent surveys on implementing RCTs in development economics. To achieve such balance, treatment status for different individuals usually exhibits a (negative) cross-sectional \textit{dependence}. The standard inference procedures that rely on cross-sectional \textit{independence} are usually conservative and lacking power. How do we consistently estimate QTEs under covariate-adaptive randomization? What are the asymptotic distributions for the QTE estimators, and how do we conduct proper bootstrap inference? These questions are as yet unaddressed. We propose two ways to estimate QTEs: (1) the simple quantile regression (SQR) and (2) the IPW QR. We establish the weak limits for both estimators uniformly over a compact set of quantile indexes and show that the IPW estimator has a smaller asymptotic variance than the SQR estimator when the treatment assignment rule does not achieve strong balance.\footnote{We will define ``strong balance" in Section \ref{sec:setup}.} If strong balance is achieved, then the two estimators are asymptotically first-order equivalent. For inference, we show that the Wald test combined with weighted bootstrap based critical values can lead to under-rejection for method (1), but its asymptotic size equals the nominal level for method (2). We also study the covariate-adaptive bootstrap which respects the cross-sectional dependence when generating the bootstrap sample. The estimator based on the covariate-adaptive bootstrap sample can mimic that of the original sample in terms of the standard error. Thus, using proper covariate-adaptive bootstrap based critical values, the asymptotic size of the Wald test equals the nominal level for both estimators. As originally proposed by \cite{D74}, the QTE, for a fixed quantile index, corresponds to the horizontal difference between the marginal distributions of the potential outcomes for treatment and control groups. \cite{F07} studies the identification and estimation of QTE under unconfoundedness. Our estimators (1) and (2) directly follow those in \cite{D74} and \cite{F07}, respectively. \cite{SYZ10} first point out that, under covariate-adaptive randomization, the usual two-sample t-test for the ATE is conservative. They then propose a covariate-adaptive bootstrap which can produce the correct standard error. \cite{SY13} extend the results to generalized linear models. However, both groups of researchers parametrize the (transformed) conditional mean equation by a specific linear model and focus on a specific randomization scheme (covariate-adaptive biased coin method). \cite{MQLH18} derive the theoretical properties of ATE estimators based on general covariate-adaptive randomization under the linear model framework. \cite{BCS17} substantially generalize the framework to a fully nonparametric setting with a general class of randomization schemes. However, they mainly focus on the ATE and show that the standard two-sample t-test and the t-test based on the linear regression with strata fixed effects are conservative. They then obtain analytical estimators for the correct standard errors and study the validity of permutation tests. 
\cite{HHK11} study the IPW estimator for the ATE under adaptive randomization. However, they assume the treatment status is assigned completely independently across individuals. More recently, \cite{BCS18} study the estimation of ATE with multiple treatments and propose a fully saturated estimator. \cite{T18} study the estimation of ATE under an adaptive randomization procedure. Our paper complements the above papers in four aspects. First, we consider the estimation and inference of the QTE, which is a function of quantile index $\tau$. We rely on the empirical processes theories developed by \cite{VW96} and \cite{CCK14} to obtain uniformly weak convergence of our estimators over a compact set of $\tau$. Based on the uniform convergence, we can construct not only point-wise but also uniform confidence bands. Second, we study the asymptotic properties of the IPW estimator under covariate-adaptive randomization. When the treatment assignment rule does not achieve strong balance, the IPW estimator is more efficient than the SQR estimator. Third, we investigate the weighted bootstrap approximation to the asymptotic distributions of the SQR and IPW estimators. We show that the weighted bootstrap ignores the (negative) cross-sectional dependence due to the covariate-adaptive randomization and over-estimates the asymptotic variance for the SQR estimator. However, the asymptotic variance for the IPW estimator does not rely on the randomization scheme implemented. Thus, the asymptotic size of the Wald test using the IPW estimator paired with the weighted bootstrap based critical values equals the nominal level. Fourth, we investigate the covariate-adaptive bootstrap approximation to the asymptotic distributions of the SQR and IPW estimators. We establish that, using either estimator paired with its corresponding covariate-adaptive bootstrap based critical values, the asymptotic size of the Wald test equals the nominal level. \cite{SYZ10} first propose the covariate-adaptive bootstrap and establish its validity for the ATE in a linear regression model under the null hypothesis that the treatment effect is not only zero but also homogeneous.\footnote{We say the average treatment effect is homogeneous if the conditional average treatment effect given covariates is the same as the unconditional one.} We modify the covariate-adaptive bootstrap and establish its validity for the QTE in the nonparametric setting proposed by \cite{BCS17}. In addition, our results do not rely on the homogeneity of the treatment effect. Compared with the analytical inference, the two bootstrap inferences for QTEs we study in this paper avoid estimating the infinite-dimensional nuisance parameters such as the densities of the potential outcomes, and thus, the choices of tuning parameters. In addition, unlike the permutation tests studied in \cite{BCS17}, the validity of bootstrap inferences does not require either strong balance condition or studentization. In particular, such studentization is cumbersome in the QTE context. As the asymptotic variance for the IPW estimator does not depend on the treatment assignment rule implemented in RCTs, this estimator (and equivalently, the fully saturated estimator for the ATE) is suitable for settings where the knowledge of the exact treatment assignment rule is not available. Such scenario occurs when researchers are using an experiment that was run in the past and the randomization procedure may not have been fully described. 
It also occurs in subsample analysis, where sub-groups are defined using variables that may have not been used to form the strata and the treatment assignment rule for each sub-group becomes unknown. We illustrate this fact in the subsample analysis of the empirical application in Section \ref{sec:app}. The rest of the paper is organized as follows. In Section \ref{sec:setup}, we describe the model setup and notation. In Sections \ref{sec:SQR} and \ref{sec:ipw}, we discuss the asymptotic properties of estimators (1) and (2), respectively. In Sections \ref{sec:SBI} and \ref{sec:CABI}, we investigate the weighted and covariate-adaptive bootstrap approximations to the asymptotic distributions of estimators (1) and (2), respectively. In Section \ref{sec:sim}, we examine the finite-sample performance of the estimation and inference methods. In Section \ref{sec:guide}, we provide recommendations for practitioners. In Section \ref{sec:app}, we apply the new methods to estimate and infer the average and quantile treatment effects of iron efficiency on educational attainment. In Section \ref{sec:concl}, we conclude. We provide proofs for all results in an appendix. We study the strata fixed effects quantile regression estimator and provide additional simulation results in the second online supplement. \section{Setup and Notation} \label{sec:setup} First, denote the potential outcomes for treated and control groups as $Y(1)$ and $Y(0)$, respectively. The treatment status is denoted as $A$, where $A=1$ means treated and $A=0$ means untreated. The researcher can only observe $\{Y_i,Z_i,A_i\}_{i=1}^n$ where $Y_i = Y_i(1)A_i + Y_i(0)(1-A_i)$, and $Z_i$ is a collection of baseline covariates. Strata are constructed from $Z$ using a function $S: \text{Supp}(Z)\mapsto \mathcal{S}$, where $\mathcal{S}$ is a finite set. For $1\leq i \leq n$, let $S_i = S(Z_i)$ and $p(s) = \mathbb{P}(S_i = s)$. Throughout the paper, we maintain the assumption that $p(s)$ is fixed w.r.t. $n$ and is positive for every $s \in \mathcal{S}$.\footnote{We can also allow for the DGP to depend on $n$ so that $p_n(s) = \mathbb{P}_n(S_i = s)$ and $p(s) = \lim p_n(s)$. All the results in this paper still hold as long as $n(s)\rightarrow \infty$ a.s. Interested readers can refer to the previous version of this paper on arXiv for more detail.} We make the following assumption for the data generating process (DGP) and the treatment assignment rule. \begin{ass} \label{ass:assignment1} \begin{enumerate} \item $\{Y_i(1),Y_i(0),S_i\}_{i=1}^n$ is i.i.d. \item $\{Y_i(1),Y_i(0)\}^{n}_{i=1} \perp\!\!\!\perp \{A_i\}^{n}_{i=1}|\{S_i\}^{n}_{i=1}$. \item $\left\{\left\{ \frac{D_n(s)}{\sqrt{n}}\right\}_{s \in \mathcal{S}}\biggl|\{S_i\}^{n}_{i=1} \right\} \rightsquigarrow N(0,\Sigma_D)$ a.s., where \begin{align*} D_n(s) = \sum_{i =1}^n (A_i-\pi)1\{S_i = s\}\quad \text{and} \quad \Sigma_D = \text{diag}\{p(s)\gamma(s):s \in \mathcal{S}\} \end{align*} with $0 \leq \gamma(s) \leq \pi(1-\pi)$. \item $\frac{D_n(s)}{n(s)} = o_p(1)$ for $s \in \mathcal{S}$, where $n(s) =\sum_{i =1}^n1\{S_i=s\}$. \end{enumerate} \end{ass} Several remarks are in order. First, Assumptions \ref{ass:assignment1}.1--\ref{ass:assignment1}.3 are exactly the same as \citet[Assumption 2.2]{BCS17}. We refer interested readers to \cite{BCS17} for more discussion of these assumptions. Second, note that in Assumption \ref{ass:assignment1}.3 the parameter $\pi$ is the target proportion of treatment for each stratum and $D_n(s)$ measures the imbalance. 
\cite{BCS18} study the more general case that $\pi$ can take distinct values for different strata. Third, we follow the terminology in \cite{BCS17}, which follows that of \cite{E71} and \cite{HH12}, saying a treatment assignment rule achieves strong balance if $\gamma(s)=0$. Fourth, we do not require that the treatment status is assigned independently. Instead, we only require Assumption \ref{ass:assignment1}.3 or Assumption \ref{ass:assignment1}.4, which condition is satisfied by several treatment assignment rules such as simple random sampling (SRS), biased-coin design (BCD), adaptive biased-coin design (WEI), and stratified block randomization (SBR). \citet[Section 3]{BCS17} provide an excellent summary of these four examples. For completeness, we briefly repeat their descriptions below. Note that both BCD and SBR assignment rules achieve strong balance. Last, as $p(s)>0$, Assumption \ref{ass:assignment1}.3 implies Assumption \ref{ass:assignment1}.4. \begin{ex}[SRS] \label{ex:srs} Let $\{A_i\}_{i=1}^n$ be drawn independently across $i$ and of $\{S_i\}_{i=1}^n$ as Bernoulli random variables with success rate $\pi$, i.e., for $k=1,\cdots,n$, \begin{align*} \mathbb{P}\left(A_k = 1\big|\{S_i\}_{i=1}^n, \{A_{j}\}_{j=1}^{k-1}\right) = \mathbb{P}(A_k = 1) = \pi. \end{align*} Then, Assumption \ref{ass:assignment1}.3 holds with $\gamma(s) = \pi(1-\pi)$. \end{ex} \begin{ex}[WEI] \label{ex:wei} The design is first proposed by \cite{W78}. Let $n_{k-1}(S_k) = \sum_{i=1}^{k-1}1\{S_i = S_k\}$, $D_{k-1}(s) = \sum_{i=1}^{k-1}\left(A_i - \frac{1}{2} \right) 1\{S_i = s\}$, and \begin{align*} \mathbb{P}\left(A_k = 1\big| \{S_i\}_{i=1}^k,\{A_i\}_{i=1}^{k-1}\right) = \phi\biggl(\frac{D_{k-1}(S_k)}{n_{k-1}(S_k)}\biggr), \end{align*} where $\phi(\cdot):[-1,1] \mapsto [0,1]$ is a pre-specified non-increasing function satisfying $\phi(-x) = 1- \phi(x)$. Here, $\frac{D_0(S_1)}{0}$ is understood to be zero. Then, \cite{BCS17} show that Assumption \ref{ass:assignment1}.3 holds with $\pi = \frac{1}{2}$ and $\gamma(s) = \frac{1}{4}(1 - 4\phi'(0))^{-1}$. \end{ex} \begin{ex}[BCD] \label{ex:bcd} The treatment status is determined sequentially for $1 \leq k \leq n$ as \begin{align*} \mathbb{P}\left(A_k = 1| \{S_i\}_{i=1}^k,\{A_i\}_{i=1}^{k-1}\right) = \begin{cases} \frac{1}{2} & \text{if }D_{k-1}(S_k) = 0 \\ \lambda & \text{if }D_{k-1}(S_k) < 0 \\ 1-\lambda & \text{if }D_{k-1}(S_k) > 0, \end{cases} \end{align*} where $D_{k-1}(s)$ is defined as above and $\frac{1}{2}< \lambda \leq 1$. Then, \cite{BCS17} show that Assumption \ref{ass:assignment1}.3 holds with $\pi = \frac{1}{2}$ and $\gamma(s) = 0$. \end{ex} \begin{ex}[SBR] \label{ex:sbr} For each stratum, $\lfloor \pi n(s) \rfloor$ units are assigned to treatment and the rest is assigned to control. \cite{BCS17} then show that Assumption \ref{ass:assignment1}.3 holds with $\gamma(s) = 0$. \end{ex} Our parameter of interest is the $\tau$-th QTE defined as \begin{align*} q(\tau) = q_1(\tau) - q_0(\tau), \end{align*} where $\tau \in (0,1)$ is a quantile index and $q_j(\tau)$ is the $\tau$-th quantile of random variable $Y(j)$ for $j = 0,1.$ For inference, although we mainly focus on the Wald test for the null hypothesis that $q(\tau)$ equals some particular value, our method can also be used to test hypotheses involving multiple or even a continuum of quantile indexes. The following regularity conditions are common in the literature of quantile estimations. 
\begin{ass} \label{ass:tau} For $j=0,1$, denote $f_j(\cdot)$ and $f_j(\cdot|s)$ as the PDFs of $Y_i(j)$ and $Y_i(j)|S_i=s$, respectively. \begin{enumerate} \item $f_j(q_j(\tau))$ and $f_j(q_j(\tau)|s)$ are bounded and bounded away from zero uniformly over $\tau \in \Upsilon$ and $s \in \mathcal{S}$, where $\Upsilon$ is a compact subset of $(0,1)$. \item $f_j(\cdot)$ and $f_j(\cdot|s)$ are Lipschitz over $\{q_j(\tau):\tau \in \Upsilon\}.$ \end{enumerate} \end{ass} \section{Estimation} \label{sec:est} \subsection{Simple Quantile Regression} \label{sec:SQR} In this section, we propose to estimate $q(\tau)$ by a QR of $Y_i$ on $A_i$. Denote $\beta(\tau) = (\beta_0(\tau),\beta_1(\tau))^\prime$, $\beta_0(\tau) = q_0(\tau)$, and $\beta_1(\tau) = q(\tau)$. We estimate $\beta(\tau)$ by $\hat{\beta}(\tau)$, where \begin{align*} \hat{\beta}(\tau) = \argmin_{b = (b_0,b_1)^\prime \in \Re^2} \sum_{i=1}^{n}\rho_\tau\left(Y_i - \dot{A}_i'b\right), \end{align*} $\dot{A}_i = (1,A_i)^\prime$, and $\rho_\tau(u) = u(\tau - 1\{u\leq 0\})$ is the standard check function. We refer to $\hat{\beta}_1(\tau)$, the second element of $\hat{\beta}(\tau)$, as our SQR estimator for the $\tau$-th QTE. As $A_i$ is a dummy variable, $\hat{\beta}_1(\tau)$ is numerically the same as the difference between the $\tau$-th empirical quantiles of $Y$ in the treatment and control groups. \begin{thm} \label{thm:qr} If Assumptions \ref{ass:assignment1}.1--\ref{ass:assignment1}.3 and \ref{ass:tau} hold, then, uniformly over $\tau \in \Upsilon$, \begin{align*} \sqrt{n}\left(\hat{\beta}_1(\tau) - q(\tau)\right) \rightsquigarrow \mathcal{B}_{sqr}(\tau),~\text{as}~n\rightarrow \infty, \end{align*} where $\mathcal{B}_{sqr}(\cdot)$ is a Gaussian process with covariance kernel $\Sigma_{sqr}(\cdot,\cdot)$. The expression for $\Sigma_{sqr}(\cdot,\cdot)$ can be found in the Appendix. \end{thm} The asymptotic variance for $\sqrt{n}\left(\hat{\beta}_1(\tau) - \beta_1(\tau)\right)$ is $\zeta_Y^2(\pi,\tau) + \zeta_A^2(\pi,\tau) + \zeta_S^2(\tau)$, where \begin{align*} \zeta_Y^2(\pi,\tau) = \frac{\tau (1-\tau) - \mathbb{E}m_1^2(S,\tau)}{\pi f_1^2(q_1(\tau))} + \frac{\tau (1-\tau) - \mathbb{E}m_0^2(S,\tau)}{(1-\pi) f_0^2(q_0(\tau))}, \end{align*} \begin{align*} \zeta_A^2(\pi,\tau) = \mathbb{E}\gamma(S) \left(\frac{m_1(S,\tau)}{\pi f_1(q_1(\tau))} + \frac{m_0(S,\tau)}{(1-\pi) f_0(q_0(\tau))} \right)^2, \end{align*} \begin{align*} \zeta_S^2(\tau) = \mathbb{E}\left(\frac{m_1(S,\tau)}{f_1(q_1(\tau))}-\frac{m_0(S,\tau)}{ f_0(q_0(\tau))} \right)^2, \end{align*} and $m_j(s,\tau) = \mathbb{E}(\tau - 1\{Y(j)\leq q_j(\tau)\}|S=s)$. Note that, if the treatment assignment rule achieves strong balance or the stratification is irrelevant\footnote{It means $\mathbb{P}(Y(j) \leq q_j(\tau)|S=s) = \tau$ for $s \in \mathcal{S}, j=0,1$.} then $\zeta_A^2(\pi,\tau) = 0$. \subsection{Inverse Propensity Score weighted Quantile Regression} \label{sec:ipw} Denote $\hat{\pi}(s) = n_1(s)/n(s)$, $n_1(s) = \sum_{i=1}^n A_i 1\{S_i = s\}$, and $n(s) = \sum_{i=1}^n 1\{S_i = s\}$. Note $\hat{\pi}(S_i)$ is an estimator for the propensity score, i.e., $\pi$. In addition, Assumption \ref{ass:assignment1}.2 implies that the unconfoundedness condition holds. Thus, following the lead of \cite{F07}, we can estimate $q_j(\tau)$ by the IPW QR. Let \begin{align*} \hat{q}_1(\tau) = \argmin_q \frac{1}{n}\sum_{i=1}^n\frac{A_i}{\hat{\pi}(S_i)}\rho_\tau(Y_i - q) \quad \text{and} \quad \hat{q}_0(\tau) = \argmin_q\frac{1}{n}\sum_{i=1}^n \frac{1-A_i}{1-\hat{\pi}(S_i)}\rho_\tau(Y_i - q). 
\end{align*} We then estimate $q(\tau)$ by $\hat{q}(\tau) = \hat{q}_1(\tau) - \hat{q}_0(\tau)$. \begin{thm} \label{thm:ipw} If Assumptions \ref{ass:assignment1}.1, \ref{ass:assignment1}.2, \ref{ass:assignment1}.4 and \ref{ass:tau} hold, then, uniformly over $\tau \in \Upsilon$, \begin{equation*} \sqrt{n}\left(\hat{q}(\tau)-q(\tau)\right)\rightsquigarrow \mathcal{B}_{ipw}(\tau),~\text{as}~n\rightarrow \infty, \end{equation*} where $\mathcal{B}_{ipw}(\cdot)$ is a scalar Gaussian process with covariance kernel $\Sigma_{ipw}(\cdot,\cdot)$. The expression for $\Sigma_{ipw}(\cdot,\cdot)$ can be found in the Appendix. \end{thm} Two remarks are in order. First, the asymptotic variance for $\hat{q}(\tau)$ is \begin{align*} \zeta_Y^2(\pi,\tau) + \zeta_S^2(\tau). \end{align*} When strong balance is not achieved and the stratification is relevant, we have $\zeta_A^2(\pi,\tau)>0$. Thus, $\hat{q}(\tau)$ is more efficient than $\hat{\beta}_{1}(\tau)$ in the sense that \begin{align*} \Sigma_{ipw}(\tau,\tau) < \Sigma_{sqr}(\tau,\tau). \end{align*} When strong balance is achieved ($\gamma(s) = 0$), we have $\zeta_A^2(\pi,\tau)=0$. Thus, the two estimators are asymptotically first-order equivalent. Based on the same argument, one can potentially prove that, when strong balance is not achieved and the stratification is relevant, the IPW estimator for ATE has strictly smaller asymptotic variance than the simple two-sample difference and strata fixed effects estimators studied by \cite{BCS17}, and is asymptotically equivalent to the fully saturated linear regression estimator proposed by \cite{BCS18}. Second, since the amount of ``balance" of the treatment assignment rule does not play a role in the limiting distribution of the IPW estimator, Assumption \ref{ass:assignment1}.3 is replaced by Assumption \ref{ass:assignment1}.4. \section{Weighted Bootstrap } \label{sec:SBI} In this section, we approximate the asymptotic distributions of the SQR and IPW estimators via the weighted bootstrap. Let $\{\xi_i\}_{i=1}^n$ be a sequence of bootstrap weights which will be specified later. Further denote $n_1^w(s) = \sum_{i =1}^n\xi_iA_i1\{S_i=s\}$, $n^w(s) = \sum_{i =1}^n \xi_i 1\{S_i=s\}$, and $\hat{\pi}^w(s) = n_1^w(s)/n^w(s)$. The weighted bootstrap counterparts for the two estimators we study in this paper can then be written respectively as \begin{align*} \hat{\beta}^w(\tau) = \argmin_b \sum_{i=1}^n \xi_i \rho_\tau\left(Y_i - \dot{A}_i'b\right) \end{align*} and \begin{align*} \hat{q}^w(\tau) = \hat{q}_1^w(\tau) - \hat{q}_0^w(\tau), \end{align*} where \begin{align*} \hat{q}_1^w(\tau) = \argmin_q \sum_{i =1}^n \frac{\xi_iA_i}{\hat{\pi}^w(S_i)}\rho_\tau\left(Y_i - q\right) \quad \text{and} \quad \hat{q}_0^w(\tau) = \argmin_q \sum_{i =1}^n \frac{\xi_i(1-A_i)}{1-\hat{\pi}^w(S_i)}\rho_\tau\left(Y_i - q\right). \end{align*} The second element $\hat{\beta}_1^w(\tau)$ of $\hat{\beta}^w(\tau)$ and $\hat{q}^w(\tau)$ are the SQR and IPW bootstrap estimators for the $\tau$-th QTE, respectively. Next, we specify the bootstrap weights. \begin{ass} \label{ass:weight} Suppose $\{\xi_i\}_{i=1}^n$ is a sequence of nonnegative i.i.d. random variables with unit expectation and variance and a sub-exponential upper tail. \end{ass} The nonnegativity is required to maintain the convexity of the quantile regression objective function. The other conditions in Assumption \ref{ass:weight} are common for the weighted bootstrap approximation. In practice, we generate $\{\xi_i\}_{i=1}^n$ by the standard exponential distribution. 
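For concreteness, the following minimal sketch (ours, not part of the original paper) computes one weighted-bootstrap draw of the IPW estimator with standard exponential weights; all function and variable names (e.g., \texttt{weighted\_quantile}, \texttt{Y}, \texttt{A}, \texttt{S}) are our own, and each weighted check-function minimization is solved as a weighted $\tau$-quantile.
\begin{verbatim}
import numpy as np

def weighted_quantile(y, w, tau):
    # argmin_q sum_i w_i * rho_tau(y_i - q) is the weighted tau-quantile:
    # the smallest y whose cumulative weight reaches tau * (total weight).
    order = np.argsort(y)
    y, w = y[order], w[order]
    cw = np.cumsum(w)
    return y[np.searchsorted(cw, tau * cw[-1])]

def ipw_qte_weighted_bootstrap(Y, A, S, tau, rng):
    # One weighted-bootstrap draw of the IPW QTE estimator q^w(tau).
    xi = rng.exponential(1.0, size=len(Y))    # standard exponential weights
    w = np.zeros(len(Y))
    for s in np.unique(S):
        idx = (S == s)
        pi_w = xi[idx & (A == 1)].sum() / xi[idx].sum()   # pi^w(s)
        w[idx & (A == 1)] = xi[idx & (A == 1)] / pi_w
        w[idx & (A == 0)] = xi[idx & (A == 0)] / (1.0 - pi_w)
    q1 = weighted_quantile(Y[A == 1], w[A == 1], tau)
    q0 = weighted_quantile(Y[A == 0], w[A == 0], tau)
    return q1 - q0
\end{verbatim}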
The corresponding weighted bootstrap is also known as the Bayesian bootstrap. \begin{thm} \label{thm:b} If Assumptions \ref{ass:assignment1}.1--\ref{ass:assignment1}.3, \ref{ass:tau}, and \ref{ass:weight} hold, then uniformly over $\tau \in \Upsilon$ and conditionally on data, \begin{align*} \sqrt{n}\left(\hat{\beta}_1^w(\tau) - \hat{\beta}_1(\tau)\right) \rightsquigarrow \tilde{\mathcal{B}}_{sqr}(\tau),~\text{as}~n\rightarrow \infty, \end{align*} where $\tilde{\mathcal{B}}_{sqr}(\tau)$ is a Gaussian process. In addition, $\tilde{\mathcal{B}}_{sqr}(\tau)$ shares the same covariance kernel as $\mathcal{B}_{sqr}(\tau)$ defined in Theorem \ref{thm:qr} with $\gamma(s)$ there replaced by $\pi(1-\pi)$. If Assumptions \ref{ass:assignment1}.1, \ref{ass:assignment1}.2, \ref{ass:assignment1}.4, \ref{ass:tau}, and \ref{ass:weight} hold, then uniformly over $\tau \in \Upsilon$ and conditionally on data, \begin{align*} \sqrt{n}\left(\hat{q}^w(\tau) - \hat{q}(\tau)\right) \rightsquigarrow \mathcal{B}_{ipw}(\tau),~\text{as}~n\rightarrow \infty, \end{align*} where $\mathcal{B}_{ipw}(\tau)$ is the same Gaussian process defined in Theorem \ref{thm:ipw}. \end{thm} Four remarks are in order. First, the weighted bootstrap sample does not preserve the negative cross-sectional dependence in the original sample. Asymptotic variances of the weighted bootstrap estimators equal those of their original sample counterparts as if SRS were applied. In fact, the asymptotic variance for $\hat{\beta}^w_{1}(\tau)$ is \begin{align*} \zeta_Y^2(\pi,\tau) + \tilde{\zeta}_A^2(\pi,\tau) + \zeta_S^2(\tau), \end{align*} where \begin{align*} \tilde{\zeta}_A^2(\pi,\tau) = \mathbb{E}\pi(1-\pi) \left(\frac{m_1(S,\tau)}{\pi f_1(q_1(\tau))} + \frac{m_0(S,\tau)}{(1-\pi) f_0(q_0(\tau))} \right)^2. \end{align*} This asymptotic variance is intuitive as the weights $\xi_i$ are independent of each other, which implies that, conditionally on data, the bootstrap sample observations are independent. As $\gamma(s) \leq \pi (1-\pi)$, we have $$\zeta_A^2(\pi,\tau) \leq \tilde{\zeta}_A^2(\pi,\tau).$$ If the inequality is strict, then the weighted bootstrap overestimates the asymptotic variance of the SQR estimator, and thus, the Wald test constructed using the SQR estimator and its weighted bootstrap standard error is conservative. Second, the asymptotic distribution of the weighted bootstrap IPW estimator coincides with that of the original estimator. The asymptotic size of the Wald test constructed using the IPW estimator and its weighted bootstrap standard error then equals the nominal level. Theorem \ref{thm:ipw} shows that the asymptotic variance for $\hat{q}(\tau)$ is invariant to the treatment assignment rule applied. Thus, even though the weighted bootstrap sample ignores the cross-sectional dependence and behaves as if the treatment status is generated randomly, the asymptotic variance for $\hat{q}^w(\tau)$ is still \begin{align*} \zeta_Y^2(\pi,\tau) + \zeta_S^2(\tau). \end{align*} Third, the validity of the weighted bootstrap for the IPW estimator only requires Assumption \ref{ass:assignment1}.4 instead of \ref{ass:assignment1}.3, for the same reason mentioned after Theorem \ref{thm:ipw}. Fourth, it is possible to consider the conventional nonparametric bootstrap, which generates the bootstrap sample from the empirical distribution of the data. If the observations are i.i.d., \citet[Section 3.6]{VW96} show that the conventional bootstrap is first-order equivalent to a weighted bootstrap with Poisson(1) weights.
However, in the current setting, $\{A_i\}_{i \geq 1}$ is dependent. It is technically challenging to rigorously show that the above equivalence still holds. We leave it as an interesting topic for future research. \section{Covariate-Adaptive Bootstrap} \label{sec:CABI} In this section, we consider the covariate-adaptive bootstrap procedure as follows: \begin{enumerate} \item Draw $\{S_i^*\}_{i=1}^n$ from the empirical distribution of $\{S_i\}_{i=1}^n$ with replacement. \item Generate $\{A_i^*\}_{i=1}^n$ based on $\{S_i^*\}_{i=1}^n$ and the treatment assignment rule. \item For $A_i^* = a$ and $S_i^* = s$, draw $Y_i^*$ from the empirical distribution of $Y_i$ given $A_i=a$ and $S_i = s$ with replacement. \end{enumerate} First, Step 1 is the conventional nonparametric bootstrap. The bootstrap sample $\{S_i^*\}_{i=1}^n$ is obtained by drawing from the empirical distribution of $\{S_i\}_{i=1}^n$ with replacement $n$ times. Second, Step 2 follows the treatment assignment rule, and thus preserves the cross-sectional dependence structure in the bootstrap sample, even after conditioning on data. The weighted bootstrap sample, by contrast, is cross-sectionally independent given data. Third, Step 3 applies the conventional bootstrap procedure to the outcome $Y_i$ in the cell $(S_i,A_i) = (s,a) \in \mathcal{S} \times \{0,1\}$. Given that the original data contain $n_a(s)$ observations in this cell, in this step, the bootstrap sample $\{Y_i^*\}_{i: A_i^* =a, S_i^*=s}$ is obtained by drawing from the empirical distribution of these $n_a(s)$ outcomes with replacement $n^*_a(s)$ times, where $n^*_a(s) = \sum_{i=1}^n 1\{A_i^*=a, S_i^*=s\}.$ Unlike the conventional bootstrap, here both $n_a(s)$ and $n_a^*(s)$ are random and are not necessarily the same. Last, to implement the covariate-adaptive bootstrap, researchers need to know the treatment assignment rule for the original sample. Unlike in observational studies, such information is usually available for RCTs. If one only knows that the treatment assignment rule achieves strong balance, then Theorem \ref{thm:cab} below still holds, provided that the bootstrap sample is generated from any treatment assignment rule that achieves strong balance. Even worse, if no information on the treatment assignment rule is available, then one cannot implement the covariate-adaptive bootstrap inference. In this case, the weighted bootstrap for the IPW estimator can still provide a non-conservative Wald test, as shown in Theorem \ref{thm:b}. Using the bootstrap sample $\{Y_i^*,A_i^*,S_i^*\}_{i=1}^n$, we can estimate QTE by the two methods considered in the paper. Let $n_1^*(s) = \sum_{i =1}^nA_i^*1\{S_i^*=s\}$, $n^*(s) = \sum_{i =1}^n 1\{S_i^*=s\}$, $\hat{\pi}^*(s) = \frac{n_1^*(s)}{n^*(s)}$, and $\dot{A}_i^* = (1,A_i^*)'$. Then, the two bootstrap estimators can be written respectively as \begin{align*} \hat{\beta}^*(\tau) = \argmin_b \sum_{i=1}^n\rho_\tau\left(Y_i^* - \dot{A}_i^*b\right) \end{align*} and \begin{align*} \hat{q}^*(\tau) = \hat{q}_1^*(\tau) - \hat{q}_0^*(\tau), \end{align*} where \begin{align*} \hat{q}_1^* = \argmin_q \sum_{i =1}^n \frac{A_i^*}{\hat{\pi}^*(S_i^*)}\rho_\tau(Y_i^* - q) \quad \text{and} \quad \hat{q}_0^* = \argmin_q \sum_{i =1}^n \frac{1-A_i^*}{1-\hat{\pi}^*(S_i^*)}\rho_\tau(Y_i^* - q). \end{align*} The second element $\hat{\beta}_1^*(\tau)$ of $\hat{\beta}^*(\tau)$ and $\hat{q}^*(\tau)$ are the SQR and IPW bootstrap estimators for the $\tau$-th QTE, respectively. 
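To fix ideas, the three steps above can be coded as follows. This is only our illustrative sketch: \texttt{assign\_rule} stands for whatever randomization scheme was used in the original experiment (we show SBR as an example), and all names are ours rather than the authors'.
\begin{verbatim}
import numpy as np

def sbr(S, rng, pi=0.5):
    # Stratified block randomization: floor(pi * n(s)) treated units per stratum.
    A = np.zeros(len(S), dtype=int)
    for s in np.unique(S):
        idx = np.flatnonzero(S == s)
        n_treated = int(np.floor(pi * len(idx)))
        A[rng.choice(idx, size=n_treated, replace=False)] = 1
    return A

def covariate_adaptive_bootstrap(Y, A, S, rng, assign_rule=sbr):
    n = len(Y)
    # Step 1: resample the strata labels with replacement.
    S_star = rng.choice(S, size=n, replace=True)
    # Step 2: regenerate treatment status from S* under the original scheme.
    A_star = assign_rule(S_star, rng)
    # Step 3: within each cell (a, s), resample outcomes with replacement
    # from the original observations with A_i = a and S_i = s.
    Y_star = np.empty(n)
    for s in np.unique(S_star):
        for a in (0, 1):
            cell = (S_star == s) & (A_star == a)
            pool = Y[(S == s) & (A == a)]
            if cell.any():
                Y_star[cell] = rng.choice(pool, size=int(cell.sum()), replace=True)
    return Y_star, A_star, S_star
\end{verbatim}
The bootstrap estimators $\hat{\beta}_1^*(\tau)$ and $\hat{q}^*(\tau)$ are then computed from $(Y^*,A^*,S^*)$ exactly as in the original sample.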
Parallel to Assumption \ref{ass:assignment1}, we make the following assumption for the bootstrap sample. \begin{ass} \label{ass:bassignment} Let $D_n^*(s) = \sum_{i=1}^n (A_i^* - \pi)1\{S_i^* = s\}$. \begin{enumerate} \item $\left\{\left\{ \frac{D^*_n(s)}{\sqrt{n}}\right\}_{s \in \mathcal{S}}\biggl|\{S_i^*\}_{i=1}^n \right\} \rightsquigarrow N(0,\Sigma_D)$ a.s., where $\Sigma_D = \text{diag}\{p(s)\gamma(s):s \in \mathcal{S}\}$. \item $\sup_{s \in \mathcal{S} }\frac{|D^*_n(s)|}{\sqrt{n^*(s)}} = O_p(1)$, $\sup_{s \in \mathcal{S} }\frac{|D_n(s)|}{\sqrt{n(s)}} = O_p(1)$. \end{enumerate} \end{ass} Assumption \ref{ass:bassignment}.1 is a high-level assumption. Obviously, it holds for SRS. For WEI, this condition holds by the same argument in \citet[Lemma B.12]{BCS17} with the fact that $\frac{n^*(s)}{n(s)} \convP 1.$ For BCD, as shown in \citet[Lemma B.11]{BCS17}, \begin{align*} D_n^*(s) |\{S_i^*\}_{i=1}^n = O_p(1). \end{align*} Therefore, $D_n^*(s)/\sqrt{n^*(s)} \convP 0$ and Assumption \ref{ass:bassignment}.1 holds with $\gamma(s) = 0$. For SBR, it is clear that $|D_n^*(s)| \leq 1.$ Thus, Assumption \ref{ass:bassignment}.1 holds with $\gamma(s) = 0$ as well. In addition, as $p(s)>0$, based on the standard bootstrap results, we have $n^*(s)/n \convP p(s)$ and $n(s)/n \convP p(s)$. Therefore, Assumption \ref{ass:bassignment}.1 is sufficient for Assumption \ref{ass:bassignment}.2. Last, note that Assumption \ref{ass:bassignment}.2 implies Assumption \ref{ass:assignment1}.4. \begin{thm} \label{thm:cab} Suppose Assumptions \ref{ass:assignment1}.1, \ref{ass:assignment1}.2, \ref{ass:tau}, and \ref{ass:bassignment}.2 hold. Then, uniformly over $\tau \in \Upsilon$ and conditionally on data, \begin{align*} \sqrt{n}(\hat{q}^*(\tau) - \hat{q}(\tau)) \rightsquigarrow \mathcal{B}_{ipw}(\tau), ~\text{as}~n\rightarrow \infty. \end{align*} If, in addition, Assumptions \ref{ass:assignment1}.3 and \ref{ass:bassignment}.1 hold, then \begin{align*} \sqrt{n}\left(\hat{\beta}_1^*(\tau) - \hat{q}(\tau)\right) \rightsquigarrow \mathcal{B}_{sqr}(\tau),~\text{as}~n\rightarrow \infty. \end{align*} Here, $\mathcal{B}_{sqr}(\tau)$ and $\mathcal{B}_{ipw}(\tau)$ are two Gaussian processes defined in Theorem \ref{thm:qr} and \ref{thm:ipw}, respectively. \end{thm} Several remarks are in order. First, unlike the usual bootstrap estimator, the covariate-adaptive bootstrap SQR estimator is not centered around its corresponding counterpart from the original sample, but rather $\hat{q}(\tau)$. The reason is that the treatment status $A_i^*$ is not generated by bootstrap. In the linear expansion for the bootstrap estimator $\hat{\beta}_1^*(\tau)$, the part of the influence function that accounts for the variation generated by $A_i^*$ need not be centered. We also know from the proof of Theorem \ref{thm:ipw} that $\hat{q}(\tau)$ do not have an influence function that represents the variation generated by $A_i$. Thus, $\hat{q}(\tau)$ can be used to center $\hat{\beta}_1^*(\tau)$. Second, the choice of $\hat{q}(\tau)$ as the center is somehow ad-hoc. In fact, any estimator $\tilde{q}(\tau)$ that is first-order equivalent to $\hat{q}(\tau)$ in the sense that \begin{align*} \sup_{ \tau \in \Upsilon}|\tilde{q}(\tau) - \hat{q}(\tau)| = o_p(1/\sqrt{n}) \end{align*} can serve as the center for the bootstrap estimators $\hat{q}^*(\tau)$ and $\hat{\beta}_1^*(\tau) $. Third, when the treatment assignment rule achieves strong balance, $\hat{\beta}_1(\tau)$ and $\hat{q}(\tau)$ are first-order equivalent. 
In this case, $\hat{\beta}_1(\tau)$ can serve as the center for $\hat{\beta}_1^*(\tau)$ and various bootstrap inference methods are valid. On the other hand, when the treatment assignment rule does not achieve strong balance, $\hat{\beta}_1(\tau)$ and $\hat{q}(\tau)$ are not first-order equivalent. In this case, the asymptotic size of the percentile bootstrap for the SQR estimator using the quantiles of $\hat{\beta}_1^*(\tau)$ does not equal the nominal level. In the next section, we propose a way to compute the bootstrap standard error which does not depend on the choice of the center. Based on the bootstrap standard error, researchers can construct t-statistics and use standard normal critical values for inference. Fourth, for ATE, we can use the same bootstrap sample to compute the standard errors for the simple and strata fixed effects estimators proposed in \cite{BCS17} as well as the IPW estimator. We expect that all the results in this paper hold for the ATE as well. \section{Simulation} \label{sec:sim} We can summarize four bootstrap scenarios from the analysis in Sections \ref{sec:SBI} and \ref{sec:CABI}: (1) the SQR estimator with the weighted bootstrap, (2) the IPW estimator with either the weighted or covariate-adaptive bootstrap, (3) the SQR estimator with the covariate-adaptive bootstrap when the assignment rule achieves strong balance, and (4) the SQR estimator with the covariate-adaptive bootstrap when the assignment rule does not achieve strong balance. The results of Sections \ref{sec:SBI} and \ref{sec:CABI} imply that the bootstrap in scenario (1) produces conservative Wald-tests when the treatment assignment rule is not SRS. For scenarios (2) and (3), various bootstrap based inference methods are valid. However, for scenario (4), researchers should be careful about the centering issue. In particular, the percentile bootstrap inference using the quantiles of $\hat{\beta}_1^*$ is invalid. In the following, we propose one single bootstrap inference method that works for scenarios (2)--(4). In addition, the proposed method does not require the knowledge of the centering. We take the IPW estimator as an example. We can repeat the bootstrap estimation\footnote{For the IPW estimator, we can use either the weighted or covariate-adaptive bootstrap. For the SQR estimator, we can only use the covariate-adaptive bootstrap.} $B$ times and obtain $B$ bootstrap IPW estimates, denoted as $\{\hat{q}_b^*(\tau)\}_{b=1}^B$. Further denote $\hat{Q}(\alpha)$ as the $\alpha$-th empirical quantile of $\{\hat{q}_b^*(\tau)\}_{b=1}^B$. We can test the null hypothesis that $q(\tau) = q^0(\tau)$ via $1\left\{\left|\frac{\hat{q}(\tau) - q^0(\tau)}{\hat{\sigma}_n^*}\right| > z_{1-\alpha/2}\right\}$, where $\hat{q}(\tau)$, $z_{1-\alpha/2}$, and $\hat{\sigma}_n^*$ are the IPW estimator, the $(1-\alpha/2)$-th quantile of the standard normal distribution, and \begin{align*} \hat{\sigma}_n^* = \frac{\hat{Q}(0.975) - \hat{Q}(0.025)}{z_{0.975} - z_{0.025}}, \end{align*} respectively. In scenarios (2)--(4), the asymptotic size of such test equals the nominal level $\alpha$. In scenarios (2) and (3), we recommend the t-statistic and confidence interval using this particular bootstrap standard error (i.e., $\hat{\sigma}_n^*$) over other bootstrap inference methods (e.g., bootstrap confidence interval, percentile bootstrap confidence interval, etc.) because based on unreported simulations, they have better finite sample performance. 
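As an illustration only, the bootstrap standard error $\hat{\sigma}_n^*$ and the resulting Wald test can be sketched in Python as follows (assuming \texttt{scipy} is available and that the bootstrap estimates $\{\hat{q}_b^*(\tau)\}_{b=1}^B$ have already been computed and stored in the array \texttt{q\_boot}):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def bootstrap_se(q_boot):
    # sigma_n^* = (Q_hat(0.975) - Q_hat(0.025)) / (z_0.975 - z_0.025)
    q_hi, q_lo = np.quantile(q_boot, [0.975, 0.025])
    return (q_hi - q_lo) / (norm.ppf(0.975) - norm.ppf(0.025))

def wald_reject(q_hat, q_null, q_boot, alpha=0.05):
    # Reject H0: q(tau) = q_null when |t| exceeds z_{1 - alpha/2}.
    t_stat = (q_hat - q_null) / bootstrap_se(q_boot)
    return abs(t_stat) > norm.ppf(1 - alpha / 2)
\end{verbatim}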
\subsection{Data Generating Processes} We consider two DGPs with parameters $\gamma =4$, $\sigma =2$, and $\mu$, where $\mu$ will be specified later. \begin{enumerate} \item Let $Z_i$ be standardized $\text{Beta}(2,2)$ distributed, $S_i = \sum_{j=1}^41\{Z_i \leq g_j\}$, and $(g_1,\cdots,g_4) = (-0.25\sqrt{20},0,0.25\sqrt{20},0.5\sqrt{20})$. The outcome equation is $$Y_i = A_i \mu + \gamma Z_i + \eta_i,$$ where $\eta_i = \sigma A_i \varepsilon_{i,1} + (1-A_i)\varepsilon_{i,2}$ and $(\varepsilon_{i,1},\varepsilon_{i,2})$ are jointly standard normal. \item Let $Z_i$ be uniformly distributed on $[-2,2]$, $S_i = \sum_{j=1}^41\{Z_i \leq g_j\}$, and $(g_1,\cdots,g_4) = (-1,0,1,2)$. The outcome equation is \begin{align*} Y_i = A_i \mu+A_i\nu_{i,1} + (1-A_i)\nu_{i,0} + \eta_i, \end{align*} where $\nu_{i,0} = \gamma Z_i^2 1\{|Z_i|\geq 1\} + \frac{\gamma}{4}(2 - Z_i^2)1\{|Z_i|<1\}$, $\nu_{i,1} = -\nu_{i,0}$, $\eta_i = \sigma(1+Z_i^2)A_i\varepsilon_{i,1} + (1+Z_i^2)(1-A_i)\varepsilon_{i,2}$, and $(\varepsilon_{i,1},\varepsilon_{i,2})$ are mutually independent $T(3)/3$ distributed. \end{enumerate} When $\pi = \frac{1}{2}$, for each DGP, we consider four randomization schemes: \begin{enumerate} \item SRS: Treatment assignment is generated as in Example \ref{ex:srs}. \item WEI: Treatment assignment is generated as in Example \ref{ex:wei} with $\phi(x) = (1-x)/2$. \item BCD: Treatment assignment is generated as in Example \ref{ex:bcd} with $\lambda = 0.75$. \item SBR: Treatment assignment is generated as in Example \ref{ex:sbr}. \end{enumerate} When $\pi \neq 0.5$, BCD is not defined, and WEI is not defined in the original paper \citep{W78}. Recently, \cite{H16} generalized the adaptive biased-coin design (i.e., WEI) to multiple treatment values and unequal target treatment ratios. Here, for $\pi \neq 0.5$, we only consider SRS and SBR, as in \cite{BCS17}. We conduct the simulations with sample sizes $n=200$ and $400$. The numbers of simulation replications and bootstrap samples are both 1000. Under the null, $\mu=0$ and we compute the true parameters of interest using simulations with sample size $10^6$ and $10^4$ replications. Under the alternative, we perturb the true values by $\mu = 1$ and $\mu = 0.75$ for $n = 200$ and $400$, respectively. We report the results for the median QTE. Section \ref{sec:addsim} contains additional simulation results for the ATE and the QTEs with $\tau = 0.25$ and $0.75$. All the observations made in this section still apply. \subsection{QTE, $\pi = 0.5$} We consider Wald tests based on six t-statistics at the $5\%$ significance level. We construct the t-statistics using one of our two point estimates and an estimate of its standard error. We reject the null hypothesis when the absolute value of the t-statistic is greater than 1.96.
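Before detailing the six methods, we note for concreteness how one Monte Carlo draw from DGP1 can be generated; the Python sketch below is illustrative only, uses SRS with $\pi = 0.5$ for the assignment step (the other rules would replace that line), and assumes independent components for $(\varepsilon_{i,1},\varepsilon_{i,2})$.
\begin{verbatim}
import numpy as np

def draw_dgp1(n, mu, gamma=4.0, sigma=2.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Standardized Beta(2,2): the Beta(2,2) law has mean 1/2 and variance 1/20.
    Z = (rng.beta(2.0, 2.0, size=n) - 0.5) * np.sqrt(20.0)
    cuts = np.array([-0.25, 0.0, 0.25, 0.5]) * np.sqrt(20.0)
    S = (Z[:, None] <= cuts[None, :]).sum(axis=1)   # S_i = sum_j 1{Z_i <= g_j}
    A = (rng.uniform(size=n) < 0.5).astype(int)     # SRS with pi = 1/2
    eps1, eps2 = rng.standard_normal(n), rng.standard_normal(n)
    eta = sigma * A * eps1 + (1 - A) * eps2
    Y = A * mu + gamma * Z + eta
    return Y, A, S
\end{verbatim}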
The details about the point estimates and standard errors are as follows: \begin{enumerate} \item ``s/naive": the point estimator is computed by the SQR and its standard error $\hat{\sigma}_{naive}(\tau)$ is computed as \begin{align} \label{eq:stddev} \hat{\sigma}^2_{naive}(\tau) = & \frac{\tau(1-\tau) - \frac{1}{n}\sum_{i =1}^n\hat{m}^2_1(S_i,\tau) }{\pi\hat{f}_1^2(\hat{q}_1(\tau))}+\frac{\tau(1-\tau) - \frac{1}{n}\sum_{i =1}^n\hat{m}^2_0(S_i,\tau) }{(1-\pi)\hat{f}_0^2(\hat{q}_0(\tau))} \notag \\ & + \frac{1}{n}\sum_{i =1}^n \pi(1-\pi)\left(\frac{\hat{m}_1(S_i,\tau)}{\pi \hat{f}_1(\hat{q}_1(\tau))} + \frac{\hat{m}_0(S_i,\tau)}{(1-\pi) \hat{f}_0(\hat{q}_0(\tau))}\right)^2 \notag \\ & + \frac{1}{n}\sum_{i =1}^n \left(\frac{\hat{m}_1(S_i,\tau)}{ \hat{f}_1(\hat{q}_1(\tau))} - \frac{\hat{m}_0(S_i,\tau)}{ \hat{f}_0(\hat{q}_0(\tau))}\right)^2, \end{align} where $\hat{q}_j(\tau)$ is the $\tau$-th empirical quantile of $Y_i|A_i = j$, $$\hat{m}_{1}(s,\tau) = \frac{\sum_{i =1}^nA_i1\{S_i = s\}(\tau - 1\{Y_i \leq \hat{q}_1(\tau)\})}{n_1(s)},$$ $$\hat{m}_{0}(s,\tau) = \frac{\sum_{i =1}^n(1-A_i)1\{S_i = s\}(\tau - 1\{Y_i \leq \hat{q}_0(\tau)\})}{n(s)-n_1(s)}.$$ For $j=0,1$, $\hat{f}_j(\cdot)$ is computed by kernel density estimation using the observations $Y_i$ with $A_i = j$, a Gaussian kernel, and the bandwidth $h_j = 1.06\hat{\sigma}_jn_j^{-1/5}$, where $\hat{\sigma}_j$ is the standard deviation of the observations $Y_i$ with $A_i = j$ and $n_j = \sum_{i =1}^n 1\{A_i = j\}$. \item ``s/adj": exactly the same as the ``s/naive" method with one difference: $\pi(1-\pi)$ in \eqref{eq:stddev} is replaced by $\gamma(S_i)$. \item ``s/W": the point estimator is computed by the SQR and its standard error $\hat{\sigma}_W(\tau)$ is computed by the weighted bootstrap procedure. The bootstrap weights $\{\xi_i\}_{i=1}^n$ are generated from the standard exponential distribution. Denote $\{\hat{\beta}^w_{1,b}\}_{b=1}^B$ as the collection of $B$ weighted bootstrap SQR estimates. Then, \begin{align*} \hat{\sigma}_{W}(\tau) = \frac{\hat{Q}(0.975) - \hat{Q}(0.025)}{z_{0.975} - z_{0.025}}, \end{align*} where $\hat{Q}(\alpha)$ is the $\alpha$-th empirical quantile of $\{\hat{\beta}^w_{1,b}(\tau)\}_{b=1}^B$. \item ``ipw/W": the same as above with one difference: the estimation method for both the original and bootstrap samples is the IPW QR. \item ``s/CA": the point estimator is computed by the SQR and its standard error $\hat{\sigma}_{CA}(\tau)$ is computed by the covariate-adaptive bootstrap procedure. Denote $\{\hat{\beta}^*_{1,b}\}_{b=1}^B$ as the collection of $B$ estimates obtained by the SQR applied to the samples generated by the covariate-adaptive bootstrap procedure. Then, \begin{align*} \hat{\sigma}_{CA}(\tau) = \frac{\hat{Q}(0.975) - \hat{Q}(0.025)}{z_{0.975} - z_{0.025}}, \end{align*} where $\hat{Q}(\alpha)$ is the $\alpha$-th empirical quantile of $\{\hat{\beta}^*_{1,b}(\tau)\}_{b=1}^B$. \item ``ipw/CA": the same as above with one difference: the estimation method for both the original and bootstrap samples is the IPW QR. \end{enumerate} Tables \ref{tab:200_2} and \ref{tab:400_2} present the rejection probabilities (multiplied by 100) for the six t-tests under both the null hypothesis and the alternative hypothesis, with sample sizes $n=200$ and $400$, respectively. In these two tables, columns $M$ and $A$ indicate the DGP and the treatment assignment rule, respectively. From the rejection probabilities under the null, we can make five observations.
First, the naive t-test (``s/naive") is conservative for WEI, BCD, and SBR, which is consistent with the findings for ATE estimators by \cite{SYZ10} and \cite{BCS17}. Second, although the asymptotic size of the adjusted t-test (``s/adj") is expected to equal the nominal level, it does not perform well for DGP2. The main reason is that, in order to compute the standard error analytically, we must estimate nuisance parameters such as the unconditional densities of $Y(0)$ and $Y(1)$, which requires tuning parameters. We further compute the standard errors following \eqref{eq:stddev} with $\pi(1-\pi)$ and the tuning parameter $h_j$ replaced by $\gamma(S_i)$ and $1.06C_f\hat{\sigma}_jn_j^{-1/5}$, respectively, for some constant $C_f \in [0.5,1.5]$. Figure \ref{fig:BCD_50_200} plots the rejection probabilities of the ``s/adj" t-tests against $C_f$ for the BCD assignment rule with $n=200$, $\tau = 0.5$, and $\pi = 0.5$. We see that (1) the rejection probability is sensitive to the choice of bandwidth, (2) there is no universally optimal bandwidth across the two DGPs, and (3) the covariate-adaptive bootstrap t-tests (``s/CA"), represented by the dot-dashed lines, are quite stable across different DGPs and close to the nominal rejection rate. Third, the weighted bootstrap t-test for the SQR estimator (``s/W") is conservative, especially for the BCD and SBR assignment rules, which achieve strong balance. Fourth, the rejection probabilities of the weighted bootstrap t-test for the IPW estimator (``ipw/W") are close to the nominal rate even for sample size $n=200$, which is consistent with Theorem \ref{thm:b}. Last, the rejection rates for the two covariate-adaptive bootstrap t-tests (``s/CA" and ``ipw/CA") are close to the nominal rate, which is consistent with Theorem \ref{thm:cab}.
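To make the role of the tuning parameter concrete, the following illustrative Python sketch (not the exact implementation used in the simulations) computes the Gaussian-kernel estimate $\hat{f}_j(\hat{q}_j(\tau))$ with bandwidth $1.06C_f\hat{\sigma}_jn_j^{-1/5}$ that enters the ``s/naive" and ``s/adj" standard errors.
\begin{verbatim}
import numpy as np

def fhat_at_quantile(Y, A, j, tau, C_f=1.0):
    Yj = Y[A == j]                     # outcomes with A_i = j
    nj = len(Yj)
    q_hat = np.quantile(Yj, tau)       # tau-th empirical quantile of Y | A = j
    h = 1.06 * C_f * Yj.std(ddof=1) * nj ** (-1 / 5)
    u = (Yj - q_hat) / h
    # Gaussian kernel density estimate evaluated at q_hat.
    return np.mean(np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)) / h
\end{verbatim}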
\begin{table}[H] \centering \caption{$n = 200, \tau = 0.5, \pi=0.5$} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{c|l|cccccc|c|cccccc} \multicolumn{8}{c|}{$H_0$} & & \multicolumn{6}{c}{$H_1$} \\ \hline \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c|}{ipw/CA} & & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 4.5 & 4.5 & 4.7 & 4.4 & 4.4 & 3.9 & & 18.3 & 18.3 & 19.3 & 44.1 & 20.0 & 42.9 \\ & WEI & 1.2 & 4.0 & 1.4 & 4.3 & 3.7 & 3.5 & & 11.6 & 29.5 & 13.8 & 44.7 & 29.8 & 43.6 \\ & BCD & 0.2 & 5.7 & 0.3 & 4.1 & 4.4 & 3.9 & & 7.2 & 47.2 & 9.5 & 45.3 & 43.4 & 44.8 \\ & SBR & 0.1 & 5.7 & 0.1 & 4.6 & 4.5 & 4.4 & & 8.5 & 48.5 & 9.9 & 46.0 & 45.7 & 44.8 \\ \hline 2 & SRS & 0.4 & 0.4 & 4.7 & 5.2 & 5.2 & 5.3 & & 79.7 & 79.7 & 90.4 & 91.6 & 90.2 & 91.3 \\ & WEI & 0.6 & 0.6 & 4.5 & 5.8 & 5.2 & 5.7 & & 80.2 & 80.7 & 90.7 & 90.9 & 91.3 & 90.6 \\ & BCD & 1.0 & 1.0 & 4.5 & 5.1 & 5.0 & 5.3 & & 79.6 & 80.4 & 90.2 & 91.1 & 90.8 & 90.6 \\ & SBR & 0.8 & 1.1 & 4.8 & 5.3 & 4.6 & 4.7 & & 77.1 & 77.4 & 89.7 & 90.1 & 89.9 & 89.9 \\ \end{tabular} \end{adjustbox} \label{tab:200_2} \end{table} \begin{table}[H] \centering \caption{$n = 400, \tau = 0.5, \pi=0.5$} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{c|l|cccccc|c|cccccc} \multicolumn{8}{c|}{$H_0$} & & \multicolumn{6}{c}{$H_1$} \\ \hline \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c|}{ipw/CA} & & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 4.2 & 4.2 & 5.4 & 4.0 & 4.6 & 4.1 & & 21.8 & 21.8 & 23.2 & 50.2 & 23.5 & 50.2 \\ & WEI & 1.0 & 4.9 & 0.8 & 4.7 & 4.6 & 4.2 & & 14.7 & 35.6 & 16.0 & 50.3 & 35.0 & 50.7 \\ & BCD & 0.3 & 4.5 & 0.2 & 4.3 & 3.5 & 4.0 & & 8.9 & 52.6 & 11.7 & 50.2 & 49.3 & 49.6 \\ & SBR & 0.2 & 4.6 & 0.0 & 3.7 & 3.6 & 3.7 & & 8.9 & 55.0 & 10.9 & 51.8 & 52.4 & 51.9 \\ \hline 2 & SRS & 1.2 & 1.2 & 4.3 & 4.8 & 4.6 & 5.0 & & 89.7 & 89.7 & 95.6 & 95.6 & 95.7 & 95.7 \\ & WEI & 1.4 & 1.6 & 5.7 & 6.0 & 5.5 & 5.7 & & 89.2 & 89.2 & 95.4 & 94.8 & 95.1 & 94.8 \\ & BCD & 1.3 & 1.3 & 5.5 & 6.1 & 5.1 & 5.2 & & 88.7 & 88.9 & 95.2 & 95.4 & 95.7 & 95.6 \\ & SBR & 0.6 & 0.6 & 4.0 & 3.9 & 3.8 & 3.8 & & 90.0 & 90.2 & 95.4 & 95.4 & 95.8 & 95.7 \\ \end{tabular} \end{adjustbox} \label{tab:400_2} \end{table} \begin{figure} \caption{Rejection Probabilities Across Different Bandwidth Values} \label{fig:BCD_50_200} \end{figure} Turning to the rejection rates under the alternative in Tables \ref{tab:200_2} and \ref{tab:400_2}, we can make two additional observations. First, for BCD and SBR, the rejection probabilities (power) for ``ipw/W", ``s/CA", and ``ipw/CA" are close. This is because both BCD and SBR achieve strong balance. In this case, the two estimators we propose are asymptotically first-order equivalent. Second, for DGP1 with SRS and WEI assignment rules, ``ipw/CA" is more powerful than ``s/CA". 
This confirms our theoretical finding that the IPW estimator is \textit{strictly} more efficient than the SQR estimator when the treatment assignment rule does \textit{not} achieve strong balance. For DGP2, the three t-tests, i.e., ``ipw/W", ``s/CA", and ``ipw/CA", have similar power. \subsection{QTE, $\pi = 0.7$} Tables \ref{tab:200_70_2} and \ref{tab:400_70_2} show similar results with $\pi = 0.7$. The same comments as for Tables \ref{tab:200_2} and \ref{tab:400_2} still apply.
\begin{table}[H] \centering \caption{$n = 200, \tau = 0.5, \pi=0.7$} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{c|l|cccccc|c|cccccc} \multicolumn{8}{c|}{$H_0$} & & \multicolumn{6}{c}{$H_1$} \\ \hline \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c|}{ipw/CA} & & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 4.8 & 4.8 & 5.2 & 4.7 & 3.4 & 4.4 & & 17.0 & 17.0 & 17.2 & 42.5 & 16.7 & 40.6 \\ & SBR & 0.1 & 0.7 & 0.2 & 4.0 & 4.4 & 3.7 & & 4.3 & 21.2 & 6.0 & 45.5 & 45.7 & 43.4 \\ \hline 2 & SRS & 1.6 & 1.6 & 5.2 & 5.4 & 5.1 & 5.3 & & 77.1 & 77.1 & 89.1 & 90.3 & 89.5 & 89.4 \\ & SBR & 0.4 & 0.5 & 3.9 & 4.8 & 4.5 & 4.8 & & 76.0 & 76.9 & 89.2 & 91.1 & 90.1 & 90.0 \\ \end{tabular} \end{adjustbox} \label{tab:200_70_2} \end{table}
\begin{table}[H] \centering \caption{$n = 400, \tau = 0.5, \pi=0.7$} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{c|l|cccccc|c|cccccc} \multicolumn{8}{c|}{$H_0$} & & \multicolumn{6}{c}{$H_1$} \\ \hline \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c|}{ipw/CA} & & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 4.4 & 4.4 & 5.1 & 3.9 & 4.8 & 3.7 & & 18.4 & 18.4 & 18.7 & 47.9 & 19.4 & 46.6 \\ & SBR & 0.1 & 0.2 & 0 & 3.9 & 3.5 & 4 & & 4.2 & 22 & 5.9 & 49.8 & 50.5 & 48.2 \\ \hline 2 & SRS & 0.7 & 0.7 & 3.9 & 4.2 & 4.2 & 4.7 & & 86.7 & 86.7 & 93.9 & 93.3 & 94.1 & 93.6 \\ & SBR & 0.6 & 0.6 & 3.5 & 3.6 & 3.7 & 3.7 & & 88.3 & 88.8 & 94.8 & 95.2 & 95.5 & 95.2 \\ \end{tabular} \end{adjustbox} \label{tab:400_70_2} \end{table}
\subsection{Difference between Two QTEs} Last, we consider inference on $q(0.25)-q(0.75)$ when $\pi=0.5$: \begin{align*} H_0:~q(0.25)-q(0.75) = \text{the true value} \quad \text{vs.} \quad H_1:~q(0.25)-q(0.75) = \text{the true value}+\mu, \end{align*} where $\mu = 1$ and $0.75$ for sample sizes $200$ and $400$, respectively. The two estimators of the QTEs at $\tau=0.25$ and $0.75$ are correlated. We can compute the naive and adjusted standard errors for the SQR estimator by taking this covariance structure into account.\footnote{The formulas for the covariances can be found in the proofs of Theorems \ref{thm:qr} and \ref{thm:ipw}.} On the other hand, in addition to avoiding the tuning parameters, another advantage of bootstrap inference is that it does not require knowledge of this complicated covariance structure.
Researchers may construct the t-statistic using the difference of two QTE estimators with the corresponding weighted and covariate-adaptive bootstrap standard errors, which are calculated using the exact same procedure as in Sections \ref{sec:SBI} and \ref{sec:CABI}. Taking the SQR estimator as an example, we estimate $q(0.25)-q(0.75)$ via $\hat{\beta}_1(0.25) - \hat{\beta}_1(0.75)$ and the corresponding covariate-adaptive bootstrap standard error is \begin{align*} \hat{\sigma}_{CA} = \frac{\hat{Q}(0.975) - \hat{Q}(0.025)}{z_{0.975} - z_{0.025}}, \end{align*} where $\hat{Q}(\alpha)$ is the $\alpha$-th empirical quantile of $\{\hat{\beta}^*_{1,b}(0.25) - \hat{\beta}^*_{1,b}(0.75)\}_{b=1}^B$. Based on the rejection rates reported in Tables \ref{tab:200_50_diff} and \ref{tab:400_50_diff}, the general observations for the previous simulation results still apply. Although under the null, the rejection rates for ``ipw/W", ``S/CA", ``ipw/CA" in DGP2 are below the nominal 5\%, they gradually increase as the sample size increases from 200 to 400. \begin{table}[H] \centering \caption{$n = 200,q(0.25)-q(0.75)$} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{c|l|cccccc|c|cccccc} \multicolumn{8}{c|}{$H_0$} & & \multicolumn{6}{c}{$H_1$} \\ \hline \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c|}{ipw/CA} & & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 4.0 & 4.0 & 3.6 & 3.8 & 3.5 & 3.5 & & 15.6 & 15.6 & 14.9 & 19.4 & 16.0 & 19.4 \\ & WEI & 2.3 & 4.9 & 2.0 & 4.0 & 5.1 & 3.9 & & 11.3 & 17.9 & 11.0 & 19.0 & 16.0 & 18.6 \\ & BCD & 1.0 & 4.1 & 1.1 & 4.4 & 3.7 & 4.2 & & 9.9 & 20.7 & 10.1 & 22.0 & 20.6 & 21.4 \\ & SBR & 1.1 & 4.3 & 0.9 & 4.1 & 4.1 & 4.2 & & 9.4 & 21.8 & 8.7 & 17.3 & 20.0 & 17.2 \\ \hline 2 & SRS & 5.0 & 5.0 & 3.1 & 3.1 & 3.1 & 3.1 & & 53.7 & 53.7 & 47.1 & 48.4 & 47.8 & 48.2 \\ & WEI & 3.6 & 3.6 & 2.1 & 2.8 & 2.9 & 2.9 & & 57.0 & 57.7 & 47.6 & 49.8 & 50.3 & 50.0 \\ & BCD & 4.2 & 4.8 & 2.4 & 2.5 & 3.6 & 2.7 & & 58.0 & 59.4 & 49.1 & 52.0 & 52.8 & 50.8 \\ & SBR & 5.1 & 5.3 & 2.4 & 3.4 & 4.1 & 3.4 & & 55.5 & 57.0 & 46.5 & 46.5 & 50.5 & 45.6 \\ \end{tabular} \end{adjustbox} \label{tab:200_50_diff} \end{table} \begin{table}[H] \centering \caption{$n = 400,q(0.25)-q(0.75)$} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{c|l|cccccc|c|cccccc} \multicolumn{8}{c|}{$H_0$} & & \multicolumn{6}{c}{$H_1$} \\ \hline \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c|}{ipw/CA} & & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 3.8 & 3.8 & 3.9 & 5.1 & 3.7 & 5.0 & & 17.2 & 17.2 & 15.9 & 21.5 & 16.8 & 21.2 \\ & WEI & 2.0 & 4.2 & 2.4 & 3.3 & 4.4 & 3.5 & & 11.8 & 20.2 & 11.5 & 21.4 & 20.2 & 20.7 \\ & BCD & 1.4 & 4.4 & 1.4 & 4.3 & 4.4 & 4.1 & & 10.5 & 21.8 & 10.2 & 20.7 & 21.5 & 20.6 \\ & SBR & 0.8 & 3.8 & 0.8 & 3.9 & 3.7 & 3.8 & & 12.1 & 25.0 & 12.6 & 21.8 & 23.7 & 22.3 \\ \hline 2 & SRS & 5.3 & 5.3 & 3.9 & 4.7 & 4.3 & 4.8 & & 63.2 & 63.2 & 55.7 & 57.7 & 56.8 & 57.6 \\ & WEI & 5.4 & 5.8 & 3.4 & 3.7 & 4.1 & 3.5 & & 
63.6 & 64.4 & 55.6 & 58.0 & 58.0 & 58.5 \\ & BCD & 4.0 & 4.3 & 2.6 & 2.8 & 3.1 & 3.1 & & 62.1 & 63.3 & 54.7 & 55.7 & 57.4 & 56.0 \\ & SBR & 5.1 & 5.7 & 4.0 & 4.5 & 4.4 & 4.5 & & 61.1 & 62.0 & 52.4 & 51.3 & 56.0 & 53.0 \\ \end{tabular} \end{adjustbox} \label{tab:400_50_diff} \end{table}
\section{Guidance for Practitioners} \label{sec:guide} We recommend employing the t-statistic (or equivalently, the confidence interval) constructed using the IPW estimator and its weighted bootstrap standard error for inference in covariate-adaptive randomization, for the following four reasons. First, its asymptotic size equals the nominal level. Second, the IPW estimator has a smaller asymptotic variance than the SQR estimator when the treatment assignment rule does not achieve strong balance and the stratification is relevant.\footnote{In this case, for the ATE, the IPW estimator also has a strictly smaller asymptotic variance than the strata fixed effects estimator studied in \cite{BCS17}.} Third, compared with the covariate-adaptive bootstrap, the validity of the weighted bootstrap requires only the weaker condition that $\sup_{s \in \mathcal{S} }|D_n(s)/n(s)| = o_p(1)$. Fourth, this method does not require knowledge of the exact treatment assignment rule, and is thus suitable in settings where such information is lacking, e.g., when using someone else's RCT or conducting subsample analysis. When the treatment assignment rule achieves strong balance, the SQR estimator can also be used. But in this case, only the covariate-adaptive bootstrap standard error is valid. Last, the Wald test using the SQR estimator and the weighted bootstrap standard error is not recommended, as it is conservative when the treatment assignment rule introduces negative dependence (i.e., $\gamma(s) < \pi(1-\pi)$), such as WEI, BCD, and SBR.
\section{Empirical Application} \label{sec:app} We illustrate our methods by estimating and inferring the average and quantile treatment effects of iron supplementation on educational attainment. The dataset we use is the same as the one analyzed by \citet{CCFNT16} and \citet{BCS17}. \subsection{Data Description} The dataset consists of 215 students from one Peruvian secondary school during the 2009 school year. About two thirds of the students were assigned to the treatment group ($A=1$ or $A=2$). The remaining one third of the students were assigned to the control group ($A=0$). One half of the students in the treatment group were shown a video in which a physician encouraged iron supplements ($A=1$), and the other half were shown the same encouragement from a popular soccer player ($A=2$). These assignments were stratified by the number of years of secondary school completed ($\mathcal{S}=\{1,\cdots,5\}$). The field experiment used a stratified block randomization scheme with fractions $(1/3, 1/3, 1/3)$ for each group, which achieves strong balance ($\gamma(s)=0$). In the following, we focus on the observations with $A=0$ and $A=1$, and estimate only the treatment effect of exposure to the video in which a physician encourages iron supplements. The same practice was adopted in \cite{BCS17}. In this case, the target proportion of treatment is $\pi = 1/2$. As in \cite{CCFNT16}, it is also possible to combine the two treatment groups ($A=1$ and $A=2$) and compute the treatment effect of exposure to a video encouraging iron supplements delivered by either a physician or a popular soccer player. Last, one can use the method developed in \cite{BCS18} to estimate the ATEs under multiple treatment statuses.
However, in this setting, the estimation of the QTE and the validity of bootstrap inference have not yet been investigated and are interesting topics for future research. For each observation, we have three outcome variables: number of pills taken, grade point average, and cognitive ability measured by the average score across different Nintendo Wii games. For more details about the outcome variables, we refer interested readers to \cite{CCFNT16}. In the following, we focus on the grade point average only, as the other two outcomes are discrete. \subsection{Computation} \label{sec:comp} We consider three pairs of point estimates and their corresponding non-conservative standard errors: (1) the SQR estimator with the covariate-adaptive bootstrap standard error, (2) the IPW estimator with the covariate-adaptive bootstrap standard error, and (3) the IPW estimator with the weighted bootstrap standard error. We denote them as ``s/CA", ``ipw/CA", and ``ipw/W", respectively. For comparison, we also compute the SQR estimator with its weighted bootstrap standard error, which is denoted as ``s/W". The SQR estimator for the $\tau$-th QTE refers to $\hat{\beta}_1(\tau)$, the second element of $\hat{\beta}(\tau) = (\hat{\beta}_0(\tau),\hat{\beta}_1(\tau))$, where \begin{align*} \hat{\beta}(\tau) = \argmin_{b = (b_0,b_1)^\prime \in \Re^2} \sum_{i=1}^{n}\rho_\tau\left(Y_i - \dot{A}_i'b\right), \end{align*} $\dot{A}_i = (1,A_i)^\prime$, and $\rho_\tau(u) = u(\tau - 1\{u\leq 0\})$ is the standard check function. It is also just the difference between the $\tau$-th empirical quantiles of the treatment and control groups. The IPW estimator refers to $\hat{q}(\tau) = \hat{q}_1(\tau) - \hat{q}_0(\tau)$, where \begin{align*} \hat{q}_1(\tau) = \argmin_q \frac{1}{n}\sum_{i=1}^n\frac{A_i}{\hat{\pi}(S_i)}\rho_\tau(Y_i - q), \quad \hat{q}_0(\tau) = \argmin_q\frac{1}{n}\sum_{i=1}^n \frac{1-A_i}{1-\hat{\pi}(S_i)}\rho_\tau(Y_i - q), \end{align*} $\hat{\pi}(\cdot)$ denotes the propensity score estimator, $\hat{\pi}(s) = n_1(s)/n(s)$, $n_1(s) = \sum_{i=1}^n A_i 1\{S_i = s\}$, and $n(s) = \sum_{i=1}^n 1\{S_i = s\}$. The covariate-adaptive bootstrap standard error (``CA") refers to the standard error computed in Section \ref{sec:CABI}. In particular, we draw the covariate-adaptive bootstrap sample $(Y_i^*,A_i^*,S_i^*)_{i=1}^n$ following the procedure in Section \ref{sec:CABI}. We then recompute the SQR and IPW estimates using the bootstrap sample. We repeat the bootstrap estimation $B$ times and obtain $\{\hat{\beta}_{b,1}^*(\tau),\hat{q}_b^*(\tau)\}_{b=1}^B$. The standard errors for the SQR and IPW estimates are computed as \begin{align*} \hat{\sigma}_{sqr}(\tau) = \frac{\hat{Q}_{sqr}(0.975) - \hat{Q}_{sqr}(0.025)}{z_{0.975} - z_{0.025}} \quad \text{and} \quad \hat{\sigma}_{ipw}(\tau) = \frac{\hat{Q}_{ipw}(0.975) - \hat{Q}_{ipw}(0.025)}{z_{0.975} - z_{0.025}}, \end{align*} respectively, where $\hat{Q}_{sqr}(\alpha)$ and $\hat{Q}_{ipw}(\alpha)$ are the $\alpha$-th empirical quantiles of $\{\hat{\beta}_{b,1}^*(\tau)\}_{b=1}^B$ and $\{\hat{q}_b^*(\tau)\}_{b=1}^B$, respectively, and $z_{\alpha}$ is the $\alpha$-th quantile of the standard normal distribution, i.e., $z_{0.975} \approx 1.96$ and $z_{0.025} \approx -1.96$.
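For illustration, since minimizing a weighted check function over a scalar reduces to a weighted quantile, the IPW estimator can be sketched in Python as follows; the helper names are hypothetical, the sketch assumes every stratum contains both treated and control observations, and the actual implementation may differ.
\begin{verbatim}
import numpy as np

def weighted_quantile(y, w, tau):
    # Minimizer of sum_i w_i * rho_tau(y_i - q) over the scalar q.
    order = np.argsort(y)
    y, w = y[order], w[order]
    cw = np.cumsum(w)
    return y[np.searchsorted(cw, tau * cw[-1])]

def ipw_qte(Y, A, S, tau):
    # pi_hat(S_i) = n_1(S_i) / n(S_i) for each observation i.
    pi_hat = np.array([A[S == s].mean() for s in S])
    w1 = A / pi_hat
    w0 = (1 - A) / (1 - pi_hat)
    return weighted_quantile(Y, w1, tau) - weighted_quantile(Y, w0, tau)
\end{verbatim}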
The weighted bootstrap standard error for the IPW estimate can be computed in the same manner with only one difference: the covariate-adaptive bootstrap estimates $\{\hat{q}_b^*(\tau)\}_{b=1}^B$ are replaced by the weighted bootstrap estimates $\{\hat{q}_b^w(\tau)\}_{b=1}^B$, where for the $b$-th replication, $\hat{q}_b^w(\tau) = \hat{q}_{b,1}^w(\tau) - \hat{q}_{b,0}^w(\tau)$, \begin{align*} \hat{q}_{b,1}^w(\tau) = \argmin_q \frac{1}{n}\sum_{i=1}^n\frac{\xi_i^bA_i}{\hat{\pi}^w(S_i)}\rho_\tau(Y_i - q) \quad \text{and} \quad \hat{q}_{b,0}^w(\tau) = \argmin_q\frac{1}{n}\sum_{i=1}^n \frac{\xi_i^b(1-A_i)}{1-\hat{\pi}^w(S_i)}\rho_\tau(Y_i - q), \end{align*} $\{\xi_i^b\}_{i=1}^n$ is a sequence of i.i.d. standard exponentially distributed random variables, $\hat{\pi}^w(s) = n_1^w(s)/n^w(s)$, $n_1^w(s) = \sum_{i =1}^n\xi_i^bA_i1\{S_i=s\}$, and $n^w(s) = \sum_{i =1}^n \xi_i^b 1\{S_i=s\}$. Similarly, we compute the weighted bootstrap SQR estimates $\{\hat{\beta}_{b,1}^w(\tau)\}_{b=1}^B$ as the second elements of $\{\hat{\beta}_{b}^w(\tau)\}_{b=1}^B$, where \begin{align*} \hat{\beta}_{b}^w(\tau) = \argmin_{c = (c_0,c_1)^\prime \in \Re^2} \frac{1}{n}\sum_{i=1}^n \xi_i^b\rho_\tau(Y_i - \dot{A}_i'c). \end{align*} For the ATE, we also compute the SQR estimator with the adjusted standard error based on the analytical formula derived by \cite{BCS17}, i.e., ``s/adj". For the QTE estimates, we consider quantile indexes $\{0.1,0.15,\cdots,0.90\}$. The number of replications for the two bootstrap methods is $B=1000$. \subsection{Main Results} Table \ref{tab:gradesATE} shows the estimates with the corresponding standard errors in parentheses. From the table, we can make several remarks. First, for both the ATE and the QTE, the SQR and IPW estimates are very close to each other, and so are their standard errors computed via the analytical formula, the weighted bootstrap, and the covariate-adaptive bootstrap. This is consistent with our theory that, under strong balance, the two estimators are first-order equivalent. Second, although in theory the weighted bootstrap standard errors for the SQR estimators should be larger than those computed via the covariate-adaptive bootstrap, in this application they are very close. This is consistent with the finding in \cite{BCS17} that their adjusted p-value for the ATE estimate is close to the naive one. It implies that the stratification may be irrelevant for the full-sample analysis. Third, we do not compute the adjusted standard error for the QTEs as it requires tuning parameters. Fourth, the QTEs provide a new insight: among the three quantiles considered, the impact of supplementation on grade promotion is significantly positive only at the 25\% quantile. This may imply that the policy of reducing iron deficits is more effective for lower-ranked students.
\begin{table}[H] \centering \caption{Grade Point Average} \begin{tabular}{c|ccccc}\hline & s/adj & s/W & s/CA & ipw/W & ipw/CA \\ \hline ATE & 0.35$^{**}$(0.16) & 0.35$^{**}$(0.16) & 0.35$^{**}$(0.17) & 0.37$^{**}$(0.16) & 0.37$^{**}$(0.17) \\ QTE,25\% & & 0.43$^{***}$(0.15) & 0.43$^{***}$(0.15) & 0.43$^{***}$(0.15) & 0.43$^{***}$(0.15) \\ QTE,50\% & & 0.29(0.22) & 0.29(0.23) & 0.29(0.22) & 0.29(0.24) \\ QTE,75\% & & 0.35(0.25) & 0.35(0.24) & 0.36(0.25) & 0.36(0.25) \\ \hline \end{tabular}\\ $^{*}$ $p<0.1$, $^{**}$ $p<0.05$, $^{***}$ $p<0.01$. \label{tab:gradesATE} \end{table}
In order to provide more details on the QTE estimates, we plot the 95\% point-wise confidence band in Figure \ref{fig:quantileCI} with the quantile index ranging from 0.1 to 0.9.
The solid line and the shadow area represent the point estimate and its 95\% point-wise confidence interval, respectively. The confidence interval is constructed as $$[\hat{\beta}-1.96\hat{\sigma}(\hat{\beta}),\hat{\beta}+1.96\hat{\sigma}(\hat{\beta})],$$ where $\hat{\beta}$ and $\hat{\sigma}(\hat{\beta})$ are the point estimates and the corresponding standard errors described above. As expected, all four sets of results look very similar, and the estimates are significantly positive only at low quantiles (15\%--30\%). \begin{figure} \caption{95\% Point-wise Confidence Interval for Quantile Treatment Effects} \label{fig:quantileCI} \end{figure} \subsection{Subsample Results} \label{sec:apply_sub} Following \cite{CCFNT16}, we further split the sample into two subsamples based on whether the student is anemic, i.e., $Anem_i = 0$ or $1$. We anticipate that there is no treatment effect for the nonanemic students and positive effects for anemic ones. In this subsample analysis, the covariate-adaptive bootstrap is infeasible: within each subgroup, the strong-balance condition may be lost, and the treatment assignment rule is not necessarily SBR and is generally unknown.\footnote{As the anonymous referee pointed out, it is possible to implement the covariate-adaptive bootstrap on the full sample and pick out the observations in the subsample to construct a bootstrap subsample. The analysis can then be repeated on this covariate-adaptive bootstrap subsample. Establishing the validity of this procedure is left as a topic for future research.} However, the weighted bootstrap is still feasible as it does not require knowledge of the treatment assignment rule. According to Theorem \ref{thm:b}, the IPW estimator paired with the weighted bootstrap standard error is valid if \begin{align} \label{eq:D1} \sup_{s\in\mathcal{S}}\left|\frac{D_n^{(1)}(s)}{n^{(1)}(s)}\right|\equiv& \sup_{s\in\mathcal{S}}\left|\frac{\sum_{i =1}^n(A_i-\pi)1\{S_i=s\}1\{Anem_i=1\}}{\sum_{i =1}^n1\{S_i=s\}1\{Anem_i=1\}}\right|=o_p(1) \end{align} and \begin{align} \label{eq:D0} \sup_{s\in\mathcal{S}}\left|\frac{D_n^{(0)}(s)}{n^{(0)}(s)}\right|\equiv& \sup_{s\in\mathcal{S}}\left|\frac{\sum_{i =1}^n(A_i-\pi)1\{S_i=s\}1\{Anem_i=0\}}{\sum_{i =1}^n1\{S_i=s\}1\{Anem_i=0\}}\right|=o_p(1). \end{align} We maintain these mild conditions in this section. In our sample, \begin{align*} \sup_{s\in\mathcal{S}}\left|\frac{D_n^{(1)}(s)}{n^{(1)}(s)}\right| = 0 \quad \text{and} \quad \sup_{s\in\mathcal{S}}\left|\frac{D_n^{(0)}(s)}{n^{(0)}(s)}\right| = 0.071, \end{align*} which indicates that \eqref{eq:D1} and \eqref{eq:D0} are plausible.
\begin{table}[H] \centering \caption{Grade Point Average for Subsamples} \begin{tabular}{l|cc|cc}\hline & \multicolumn{2}{c|}{Anemic} & \multicolumn{2}{c}{Nonanemic} \\ \hline & s/W & ipw/W & s/W & ipw/W \\ \hline ATE & 0.67$^{***}$(0.23) & 0.69$^{***}$(0.20) & 0.13(0.23) & 0.19(0.20) \\ QTE, 25\% & 0.74$^{***}$(0.24) & 0.76$^{***}$(0.22) & 0.14(0.28) & 0.22(0.26) \\ QTE, 50\% & 1.05$^{***}$(0.29) & 1.05$^{***}$(0.27) & -0.14(0.29) & -0.14(0.27) \\ QTE, 75\% & 0.71$^{**}$(0.36) & 0.76$^{**}$(0.32) & 0.14(0.39) & 0.14(0.37) \\ \hline \end{tabular}\\ $^{*}$ $p<0.1$, $^{**}$ $p<0.05$, $^{***}$ $p<0.01$. \label{tab:gradesATE'} \end{table}
From Table \ref{tab:gradesATE'} and Figure \ref{fig:quantileCIn}, we see that the QTE estimates are significantly positive for the anemic students when the quantile index is roughly between 20\% and 75\%, but are insignificant for nonanemic students.
The lack of significance at very low and high quantiles for the anemic subsample may be due to a poor asymptotic normal approximation at extreme quantiles. Extending the inference on extremal QTEs in \cite{Z18} to the context of covariate-adaptive randomization is an interesting topic for future research. We also note that for both subsamples, the weighted bootstrap standard errors for the SQR estimators are larger than those for the IPW estimators, which is consistent with Theorem \ref{thm:b}. This implies that, for both subgroups, the stratification is relevant. \begin{figure} \caption{95\% Point-wise Confidence Interval for Anemic and Nonanemic Students} \label{fig:quantileCIn} \end{figure} \section{Conclusion} \label{sec:concl} This paper studies the estimation and bootstrap inference for QTEs under covariate-adaptive randomization. We show that the weighted bootstrap standard error is only valid for the IPW estimator, while the covariate-adaptive bootstrap standard error is valid for both the SQR and IPW estimators. In the empirical application, we find that the QTE of iron supplementation on grade promotion is trivial for nonanemic students, while the impact is significantly positive for middle-ranked anemic students.
\appendix \begin{center} \Large{Supplement to ``Quantile Treatment Effects and Bootstrap Inference under Covariate-Adaptive Randomization"} \end{center} \begin{abstract} This supplement gathers the supplementary material to the main paper. Sections \ref{sec:qr}, \ref{sec:ipw_pf}, \ref{sec:b}, and \ref{sec:cab} contain the proofs of Theorems \ref{thm:qr}, \ref{thm:ipw}, \ref{thm:b}, and \ref{thm:cab}, respectively. Section \ref{sec:lem} contains the proofs of the technical lemmas. A separate supplement contains the analysis of the strata fixed effects quantile regression estimator as well as additional simulation results. \end{abstract} \setcounter{page}{1} \renewcommand\thesection{\Alph{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0}
\section{Proof of Theorem \ref{thm:qr}} \label{sec:qr} Let $u=(u_0, u_1)' \in \Re^2$ and \begin{align*} L_n(u,\tau) = \sum_{i=1}^n\left[\rho_\tau(Y_i - \dot{A}_i'\beta(\tau) - \dot{A}_i'u/\sqrt{n}) - \rho_\tau(Y_i - \dot{A}_i'\beta(\tau))\right]. \end{align*} Then, by a change of variables, we have \begin{align*} \sqrt{n}(\hat{\beta}(\tau) - \beta(\tau)) = \argmin_u L_n(u,\tau). \end{align*} Notice that $L_n(u,\tau)$ is convex in $u$ for each $\tau$ and bounded in $\tau$ for each $u$. In the following, we aim to show that there exists $$g_n(u,\tau) = - u'W_n(\tau) + \frac{1}{2}u'Q(\tau)u$$ such that (1) for each $u$, \begin{align*} \sup_{\tau \in \Upsilon}|L_n(u,\tau) - g_n(u,\tau)| \convP 0; \end{align*} (2) the maximum eigenvalue of $Q(\tau)$ is bounded from above and the minimum eigenvalue of $Q(\tau)$ is bounded away from $0$, uniformly over $\tau \in \Upsilon$; (3) $W_n(\tau) \rightsquigarrow \tilde{\mathcal{B}}(\tau)$ uniformly over $\tau \in \Upsilon$, where $\tilde{\mathcal{B}}(\cdot)$ is some Gaussian process. Then by \citet[Theorem 2]{K09}, we have \begin{align*} \sqrt{n}(\hat{\beta}(\tau) - \beta(\tau)) = [Q(\tau)]^{-1}W_n(\tau) + r_n(\tau), \end{align*} where $\sup_{\tau \in \Upsilon}||r_n(\tau)|| = o_p(1)$. In addition, by (3), we have, uniformly over $\tau \in \Upsilon$, \begin{align*} \sqrt{n}(\hat{\beta}(\tau) - \beta(\tau)) \rightsquigarrow [Q(\tau)]^{-1}\tilde{\mathcal{B}}(\tau) \equiv \mathcal{B}(\tau). \end{align*} The second element of $\mathcal{B}(\tau)$ is the process $\mathcal{B}_{sqr}(\tau)$ stated in Theorem \ref{thm:qr}.
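Throughout the proofs below we repeatedly invoke Knight's identity \citep{K98}; with the convention $\rho_\tau(u) = u(\tau - 1\{u \leq 0\})$ adopted above, it can be stated as \begin{align*} \rho_\tau(u - v) - \rho_\tau(u) = -v\left(\tau - 1\{u \leq 0\}\right) + \int_0^v \left(1\{u \leq s\} - 1\{u \leq 0\}\right)ds, \end{align*} which yields the decompositions used in Step 1 below.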
In the following, we prove requirements (1)--(3) in three steps. \textbf{Step 1.} By Knight's identity \citep{K98}, we have \begin{align*} & L_n(u,\tau) \\ = & -u'\sum_{i=1}^{n}\frac{1}{\sqrt{n}}\dot{A}_i\left(\tau- 1\{Y_i\leq \dot{A}_i^{\prime}\beta(\tau)\}\right) + \sum_{i=1}^{n}\int_0^{\frac{\dot{A}_i^\prime u}{\sqrt{n}}}\left(1\{Y_i - \dot{A}_i^{\prime}\beta(\tau)\leq v\} - 1\{Y_i - \dot{A}_i^{\prime}\beta(\tau)\leq 0\} \right)dv \\ \equiv & -u'W_n(\tau) +Q_n(u,\tau), \end{align*} where \begin{align*} W_n(\tau) = \sum_{i=1}^{n}\frac{1}{\sqrt{n}}\dot{A}_i\left(\tau- 1\{Y_i\leq \dot{A}_i^{\prime}\beta(\tau)\}\right) \end{align*} and \begin{align*} Q_n(u,\tau) = & \sum_{i=1}^{n}\int_0^{\frac{\dot{A}_i^\prime u}{\sqrt{n}}}\left(1\{Y_i - \dot{A}_i^{\prime}\beta(\tau)\leq v\} - 1\{Y_i - \dot{A}_i^{\prime}\beta(\tau)\leq 0\} \right)dv \\ = & \sum_{i=1}^n A_i\int_0^{\frac{u_0+u_1}{\sqrt{n}}}\left(1\{Y_i(1) - q_1(\tau)\leq v\} - 1\{Y_i(1) - q_1(\tau)\leq 0\} \right)dv \\ & + \sum_{i=1}^n (1-A_i)\int_0^{\frac{u_0}{\sqrt{n}}}\left(1\{Y_i(0) - q_0(\tau)\leq v\} - 1\{Y_i(0) - q_0(\tau)\leq 0\} \right)dv \\ \equiv & Q_{n,1}(u,\tau) + Q_{n,0}(u,\tau). \end{align*} We first consider $Q_{n,1}(u,\tau)$. Following \cite{BCS17}, we define $\{(Y_i^s(1),Y_i^s(0)): 1\leq i \leq n\}$ as a sequence of i.i.d. random variables with marginal distributions equal to the distribution of $(Y_i(1),Y_i(0))|S_i = s$. The distribution of $Q_{n,1}(u,\tau)$ is the same as that of its counterpart in which units are ordered by strata and, within each stratum, ordered with $A_i = 1$ first and $A_i = 0$ second, i.e., \begin{align} \label{eq:Qn1} Q_{n,1}(u,\tau) \stackrel{d}{=} & \sum_{s \in \mathcal{S}} \sum_{i=N(s)+1}^{N(s)+n_1(s)}\int_0^{\frac{u_0+u_1}{\sqrt{n}}}\biggl(1\{Y_i^s(1) - q_1(\tau)\leq v\} - 1\{Y_i^s(1) - q_1(\tau)\leq 0\} \biggr)dv \notag \\ = & \sum_{s \in \mathcal{S}}\biggl[\Gamma^s_{n}(N(s) + n_1(s),\tau) - \Gamma^s_{n}(N(s),\tau) \biggr], \end{align} where $N(s) = \sum_{i=1}^n 1\{S_i < s\}$, $n_1(s) = \sum_{i=1}^n 1\{S_i = s\}A_i$, and \begin{align*} \Gamma^s_{n}(k,\tau) = \sum_{i=1}^{k}\int_0^{\frac{u_0+u_1}{\sqrt{n}}}\biggl(1\{Y_i^s(1) - q_1(\tau)\leq v\} - 1\{Y_i^s(1) - q_1(\tau)\leq 0\} \biggr)dv. \end{align*} In addition, note that \begin{align} \label{eq:Gamma} & \mathbb{P}(\sup_{t \in (0,1),\tau \in \Upsilon}|\Gamma^s_{n}(\lfloor nt \rfloor,\tau) - \mathbb{E}\Gamma^s_{n}(\lfloor nt \rfloor,\tau)|> \varepsilon) \notag \\ = & \mathbb{P}(\max_{1 \leq k \leq n}\sup_{\tau \in \Upsilon}|\Gamma^s_{n}(k,\tau) - \mathbb{E}\Gamma^s_{n}(k,\tau)|> \varepsilon) \notag \\ \leq & 3 \max_{1 \leq k \leq n}\mathbb{P}(\sup_{\tau \in \Upsilon}|\Gamma^s_{n}(k,\tau) - \mathbb{E}\Gamma^s_{n}(k,\tau)|> \varepsilon/3) \notag \\ \leq & 9 \mathbb{P}(\sup_{\tau \in \Upsilon}|\Gamma^s_{n}(n,\tau) - \mathbb{E}\Gamma^s_{n}(n,\tau)|> \varepsilon/30) \notag \\ \leq & \frac{270 \mathbb{E}\sup_{\tau \in \Upsilon}|\Gamma^s_{n}(n,\tau) - \mathbb{E}\Gamma^s_{n}(n,\tau)|}{\varepsilon} = o(1). \end{align} The first inequality holds due to Lemma \ref{lem:S} with $S_k = \Gamma^s_{n}(k,\tau) - \mathbb{E}\Gamma^s_{n}(k,\tau)$ and $||S_k|| = \sup_{\tau \in \Upsilon} |\Gamma^s_{n}(k,\tau) - \mathbb{E}\Gamma^s_{n}(k,\tau)|.$ The second inequality holds due to \citet[Theorem 1]{M93}.
To derive the last equality of \eqref{eq:Gamma}, we consider the class of functions \begin{align*} \mathcal{F} = \left\{\int_0^{\frac{u_0+u_1}{\sqrt{n}}}\biggl(1\{Y_i^s(1) - q_1(\tau)\leq v\} - 1\{Y_i^s(1) - q_1(\tau)\leq 0\} \biggr)dv: \tau\in \Upsilon \right\} \end{align*} with envelope $\frac{|u_0+u_1|}{\sqrt{n}}$ and \begin{align*} \sup_{f \in \mathcal{F}}\mathbb{E}f^2 \leq \sup_{\tau \in \Upsilon}\mathbb{E} \left[\frac{|u_0+u_1|}{\sqrt{n}}1\left\{|Y_i^s(1) - q_1(\tau)|\leq \frac{|u_0+u_1|}{\sqrt{n}}\right\} \right]^2 \lesssim n^{-3/2}. \end{align*} Note that $\mathcal{F}$ is a VC-class with a fixed VC index. Therefore, by \citet[Corollary 5.1]{CCK14}, \begin{align*} \mathbb{E}\sup_{\tau \in \Upsilon}|\Gamma^s_{n}(n,\tau) - \mathbb{E}\Gamma^s_{n}(n,\tau)| = n||\mathbb{P}_n - \mathbb{P}||_\mathcal{F} \lesssim n\left[\sqrt{\frac{\log(n)}{n^{5/2}}} + \frac{\log(n)}{n^{3/2}}\right] = o(1). \end{align*} Then, \eqref{eq:Gamma} implies that \begin{align*} \sup_{\tau \in \Upsilon}\left|Q_{n,1}(u,\tau) - \sum_{s\in \mathcal{S}}\mathbb{E}\biggl[\Gamma^s_{n}(\lfloor n(N(s)/n + n_1(s)/n)\rfloor,\tau) - \Gamma^s_{n}(\lfloor n(N(s)/n)\rfloor,\tau) \biggr]\right| = o_p(1), \end{align*} where, following the convention in the empirical process literature, \begin{align*} \mathbb{E}\biggl[\Gamma^s_{n}(\lfloor n(N(s)/n + n_1(s)/n)\rfloor,\tau) - \Gamma^s_{n}(\lfloor n(N(s)/n)\rfloor,\tau) \biggr] \end{align*} is interpreted as \begin{align*} \mathbb{E}\biggl[\Gamma^s_{n}(\lfloor nt_2\rfloor,\tau) - \Gamma^s_{n}(\lfloor nt_1\rfloor,\tau) \biggr]_{t_1 = \frac{N(s)}{n},\ t_2 = \frac{N(s) + n_1(s)}{n}}. \end{align*} In addition, $N(s)/n \convP F(s) \equiv \mathbb{P}(S_i < s)$ and $n_1(s)/n \convP \pi p(s)$. Thus, uniformly over $\tau \in \Upsilon$, \begin{align*} & \mathbb{E}\biggl[\Gamma^s_{n}(\lfloor n(N(s)/n + n_1(s)/n)\rfloor,\tau) - \Gamma^s_{n}(\lfloor n(N(s)/n)\rfloor,\tau) \biggr] \\ = & n_1(s) \int_0^{\frac{u_0 + u_1}{\sqrt{n}}}(F_1(q_1(\tau)+v|s) - F_1(q_1(\tau)|s))dv \\ \convP & \frac{\pi p(s) f_1(q_1(\tau)|s)(u_0+u_1)^2}{2}, \end{align*} where $F_1(\cdot|s)$ and $f_1(\cdot|s)$ are the conditional CDF and PDF of $Y(1)$ given $S=s$, respectively. Then, uniformly over $\tau \in \Upsilon$, \begin{align*} Q_{n,1}(u,\tau) \convP \sum_{s \in \mathcal{S}}\frac{\pi p(s) f_1(q_1(\tau)|s)(u_0+u_1)^2}{2} = \frac{\pi f_1(q_1(\tau))(u_0+u_1)^2}{2}. \end{align*} Similarly, we can show that, uniformly over $\tau \in \Upsilon$, \begin{align*} Q_{n,0}(u,\tau) \convP \frac{(1-\pi) f_0(q_0(\tau))u_0^2}{2}, \end{align*} and thus \begin{align*} Q_n(u,\tau) \convP \frac{1}{2}u'Q(\tau)u, \end{align*} where \begin{align} \label{eq:Qqr} Q(\tau) = \begin{pmatrix} \pi f_1(q_1(\tau)) + (1-\pi)f_0(q_0(\tau)) & \pi f_1(q_1(\tau)) \\ \pi f_1(q_1(\tau)) & \pi f_1(q_1(\tau)) \end{pmatrix}. \end{align} Then, \begin{align*} \sup_{\tau \in \Upsilon}|L_n(u,\tau) - g_n(u,\tau)| = \sup_{\tau \in \Upsilon}|Q_n(u,\tau) - \frac{1}{2}u'Q(\tau)u| = o_p(1). \end{align*} This concludes the first step. \textbf{Step 2}. Note that $\text{det}(Q(\tau)) = \pi(1-\pi)f_1(q_1(\tau))f_0(q_0(\tau))$, which is bounded and bounded away from zero. In addition, it can be shown that the two eigenvalues of $Q(\tau)$ are nonnegative. This leads to the desired result. \textbf{Step 3.} Let $e_1 = (1,1)^\prime$ and $e_0 = (1,0)^{\prime}$.
Then, we have \begin{align*} W_n(\tau) = &e_1 \sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}}A_i 1\{S_i = s\}(\tau - 1\{Y_i(1) \leq q_1(\tau)\})\\ & + e_0\sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}} (1-A_i) 1\{S_i = s\}(\tau - 1\{Y_i(0) \leq q_0(\tau)\}). \end{align*} Let $m_j(s,\tau) = \mathbb{E}(\tau - 1\{Y_i(j)\leq q_j(\tau)\}|S_i = s)$ and $\eta_{i,j}(s,\tau) = (\tau - 1\{Y_i(j) \leq q_j(\tau)\}) - m_j(s,\tau)$, $j = 0,1$. Then, \begin{align} \label{eq:W} W_n(\tau) = & \biggl[e_1 \sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}}A_i 1\{S_i = s\}\eta_{i,1}(s,\tau) + e_0\sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}} (1-A_i) 1\{S_i = s\}\eta_{i,0}(s,\tau)\biggr] \notag \\ & + \biggl[e_1 \sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}}(A_i-\pi) 1\{S_i = s\}m_1(s,\tau) - e_0\sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}} (A_i-\pi) 1\{S_i = s\}m_0(s,\tau)\biggr] \notag \\ & + \biggl[e_1 \sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}}\pi 1\{S_i = s\}m_1(s,\tau) + e_0\sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}} (1-\pi) 1\{S_i = s\}m_0(s,\tau) \biggr] \notag \\ \equiv & W_{n,1}(\tau) + W_{n,2}(\tau) + W_{n,3}(\tau). \end{align} By Lemma \ref{lem:Q}, uniformly over $\tau \in \Upsilon$, \begin{align*} (W_{n,1}(\tau), W_{n,2}(\tau), W_{n,3}(\tau)) \rightsquigarrow (\mathcal{B}_1(\tau),\mathcal{B}_2(\tau),\mathcal{B}_3(\tau)), \end{align*} where $(\mathcal{B}_1(\tau),\mathcal{B}_2(\tau),\mathcal{B}_3(\tau))$ are three independent two-dimensional Gaussian processes with covariance kernels $\Sigma_1(\tau_1,\tau_2)$, $\Sigma_2(\tau_1,\tau_2)$, and $\Sigma_3(\tau_1,\tau_2)$, respectively. Therefore, uniformly over $\tau \in \Upsilon$, \begin{align*} W_n(\tau) \rightsquigarrow \tilde{\mathcal{B}}(\tau), \end{align*} where $\tilde{\mathcal{B}}(\tau)$ is a two-dimensional Gaussian process with covariance kernel \begin{align*} \tilde{\Sigma}(\tau_1,\tau_2) = \sum_{j=1}^3\Sigma_j(\tau_1,\tau_2). 
\end{align*} Consequently, \begin{align*} \sqrt{n}(\hat{\beta}(\tau) - \beta(\tau)) \rightsquigarrow [Q(\tau)]^{-1}\tilde{\mathcal{B}}(\tau) \equiv \mathcal{B}(\tau), \end{align*} where $\mathcal{B}(\tau)$ is a two-dimensional Gaussian process with covariance kernel \begin{align*} \Sigma(\tau_1,\tau_2) = & [Q(\tau_1)]^{-1}\tilde{\Sigma}(\tau_1,\tau_2) [Q(\tau_2)]^{-1} \\ = & \frac{1}{\pi f_1(q_1(\tau_1))f_1(q_1(\tau_2))}[\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2)]\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \\ & + \frac{1}{(1-\pi) f_0(q_0(\tau_1))f_0(q_0(\tau_2))}[\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)]\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \\ & + \sum_{s \in \mathcal{S}}p(s)\gamma(s)\biggl[\frac{m_1(s,\tau_1)m_1(s,\tau_2)}{\pi^2 f_1(q_1(\tau_1))f_1(q_1(\tau_2))}\begin{pmatrix} 0 & 0 \\ 0& 1 \end{pmatrix} - \frac{m_1(s,\tau_1)m_0(s,\tau_2)}{\pi(1-\pi) f_1(q_1(\tau_1))f_0(q_0(\tau_2))}\begin{pmatrix} 0 & 0 \\ 1 & -1 \end{pmatrix} \\ & - \frac{m_0(s,\tau_1)m_1(s,\tau_2)}{\pi(1-\pi) f_0(q_0(\tau_1))f_1(q_1(\tau_2))}\begin{pmatrix} 0 & 1 \\ 0 & -1 \end{pmatrix} + \frac{m_0(s,\tau_1)m_0(s,\tau_2)}{(1-\pi)^2 f_0(q_0(\tau_1))f_0(q_0(\tau_2))}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \biggr] \\ & + \frac{\mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2)}{f_1(q_1(\tau_1))f_1(q_1(\tau_2))}\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} + \frac{\mathbb{E}m_1(S,\tau_1)m_0(S,\tau_2)}{f_1(q_1(\tau_1))f_0(q_0(\tau_2))}\begin{pmatrix} 0 & 0 \\ 1 & -1 \end{pmatrix} \\ & + \frac{\mathbb{E}m_0(S,\tau_1)m_1(S,\tau_2)}{f_0(q_0(\tau_1))f_1(q_1(\tau_2))}\begin{pmatrix} 0 & 1 \\ 0 & -1 \end{pmatrix}+\frac{\mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)}{f_0(q_0(\tau_1))f_0(q_0(\tau_2))}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}. \end{align*} Focusing on the $(2,2)$-element of $\Sigma(\tau_1,\tau_2)$, we can conclude that \begin{align*} \sqrt{n}(\hat{\beta}_1(\tau) - q(\tau)) \rightsquigarrow \mathcal{B}_{sqr}(\tau), \end{align*} where the Gaussian process $\mathcal{B}_{sqr}(\tau)$ has a covariance kernel \begin{align*} & \Sigma_{sqr}(\tau_1,\tau_2) \\ = & \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2)}{\pi f_1(q_1(\tau_1))f_1(q_1(\tau_2))} + \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)}{(1-\pi) f_0(q_0(\tau_1))f_0(q_0(\tau_2))} \\ & + \mathbb{E}\gamma(S)\biggl[\frac{m_1(S,\tau_1)m_1(S,\tau_2)}{\pi^2 f_1(q_1(\tau_1))f_1(q_1(\tau_2))}+\frac{m_1(S,\tau_1)m_0(S,\tau_2)}{\pi(1-\pi) f_1(q_1(\tau_1))f_0(q_0(\tau_2))} \\ &+ \frac{m_0(S,\tau_1)m_1(S,\tau_2)}{\pi(1-\pi) f_0(q_0(\tau_1))f_1(q_1(\tau_2))} + \frac{m_0(S,\tau_1)m_0(S,\tau_2)}{(1-\pi)^2 f_0(q_0(\tau_1))f_0(q_0(\tau_2))}\biggr] \\ & +\mathbb{E}\biggl[\frac{m_1(S,\tau_1)}{f_1(q_1(\tau_1))} - \frac{m_0(S,\tau_1)}{f_0(q_0(\tau_1))}\biggr] \biggl[\frac{m_1(S,\tau_2)}{f_1(q_1(\tau_2))} - \frac{m_0(S,\tau_2)}{f_0(q_0(\tau_2))}\biggr]. 
\end{align*} \section{Proof of Theorem \ref{thm:ipw}} \label{sec:ipw_pf} By a change of variables and Knight's identity, we have \begin{align*} \sqrt{n}(\hat{q}_1(\tau) - q_1(\tau)) = & \argmin_u L_n(u,\tau), \end{align*} where \begin{align*} L_n(u,\tau) \equiv & \sum_{i =1}^n \frac{A_i}{\hat{\pi}(S_i)}\left[\rho_\tau(Y_i - q_1(\tau) - \frac{u}{\sqrt{n}}) - \rho_\tau(Y_i - q_1(\tau))\right] \\ = & - L_{1,n}(\tau)u + L_{2,n}(u,\tau), \end{align*} \begin{align*} L_{1,n}(\tau) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \frac{A_i}{\hat{\pi}(S_i)}(\tau - 1\{Y_i \leq q_1(\tau)\}) \end{align*} and \begin{align*} L_{2,n}(u,\tau) = \sum_{i=1}^n \frac{A_i}{\hat{\pi}(S_i)}\int_0^{\frac{u}{\sqrt{n}}}(1\{Y_i \leq q_1(\tau)+v\} - 1\{Y_i \leq q_1(\tau)\})dv. \end{align*} We aim to show that there exists \begin{align} \label{eq:gipw} g_{ipw,n}(u,\tau) = - W_{ipw,n}(\tau) u + \frac{1}{2}Q_{ipw}(\tau)u^2 \end{align} such that (1) for each $u$, \begin{align*} \sup_{\tau \in \Upsilon}|L_n(u,\tau) - g_{ipw,n}(u,\tau)|\convP 0; \end{align*} (2) $Q_{ipw}(\tau)$ is bounded and bounded away from zero uniformly over $\tau \in \Upsilon$. In addition, as a corollary of claim (3) below, $\sup_{\tau \in \Upsilon}|W_{ipw,1,n}(\tau)| = O_p(1)$. Therefore, by \citet[Theorem 2]{K09}, we have \begin{align*} \sqrt{n}(\hat{q}_1(\tau) - q_1(\tau)) = Q_{ipw,1}^{-1}(\tau)W_{ipw,1,n}(\tau) + R_{ipw,1,n}(\tau), \end{align*} where $\sup_{\tau \in \Upsilon}|R_{ipw,1,n}(\tau)| = o_p(1)$. Similarly, we can show that \begin{align*} \sqrt{n}(\hat{q}_0(\tau) - q_0(\tau)) = Q_{ipw,0}^{-1}(\tau)W_{ipw,0,n}(\tau) + R_{ipw,0,n}(\tau), \end{align*} where $\sup_{\tau \in \Upsilon}|R_{ipw,0,n}(\tau)| = o_p(1)$. Then, \begin{align*} \sqrt{n}(\hat{q}(\tau) - q(\tau)) = Q_{ipw,1}^{-1}(\tau)W_{ipw,1,n}(\tau) - Q_{ipw,0}^{-1}(\tau)W_{ipw,0,n}(\tau) + R_{ipw,1,n}(\tau) - R_{ipw,0,n}(\tau). \end{align*} Last, we aim to show that (3) holds: uniformly over $\tau \in \Upsilon$, \begin{align*} Q_{ipw,1}^{-1}(\tau)W_{ipw,1,n}(\tau) - Q_{ipw,0}^{-1}(\tau)W_{ipw,0,n}(\tau) \rightsquigarrow \mathcal{B}_{ipw}(\tau), \end{align*} where $\mathcal{B}_{ipw}(\tau)$ is a scalar Gaussian process with covariance kernel $\Sigma_{ipw}(\tau_1,\tau_2)$. We prove claims (1)--(3) in three steps.
\textbf{Step 1.} For $L_{1,n}(\tau)$, we have \begin{align*} L_{1,n}(\tau) = & \frac{1}{\sqrt{n}}\sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{A_i}{\pi}1\{S_i=s\}(\tau - 1\{Y_i(1) \leq q_1(\tau)\}) \\ & - \sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{A_i1\{S_i = s\}(\hat{\pi}(s) - \pi)}{\sqrt{n}\hat{\pi}(s)\pi}(\tau - 1\{Y_i(1) \leq q_1(\tau)\}) \\ = & \frac{1}{\sqrt{n}}\sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{A_i}{\pi}1\{S_i=s\}(\tau - 1\{Y_i(1) \leq q_1(\tau)\}) \\ & - \sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{A_i1\{S_i = s\}D_n(s)}{n(s)\sqrt{n}\hat{\pi}(s)\pi}\eta_{i,1}(s,\tau) - \sum_{s \in \mathcal{S}} \frac{D_n(s)m_1(s,\tau)}{n(s)\sqrt{n}\hat{\pi}(s)\pi} D_n(s) - \sum_{s \in \mathcal{S}} \frac{D_n(s)m_1(s,\tau)}{\sqrt{n}\hat{\pi}(s)} \\ = & \sum_{s \in \mathcal{S}} \frac{1}{\sqrt{n}}\sum_{i =1}^n \frac{A_i 1\{S_i = s\}}{\pi}\eta_{i,1}(s,\tau) + \sum_{s \in \mathcal{S}} \frac{D_n(s)}{\sqrt{n}\pi}m_{1}(s,\tau) + \sum_{i=1}^n \frac{m_1(S_i,\tau)}{\sqrt{n}} \\ & - \sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{A_i1\{S_i = s\}D_n(s)}{n(s)\sqrt{n}\hat{\pi}(s)\pi}\eta_{i,1}(s,\tau) - \sum_{s \in \mathcal{S}} \frac{D_n(s)m_1(s,\tau)}{n(s)\sqrt{n}\hat{\pi}(s)\pi} D_n(s) - \sum_{s \in \mathcal{S}} \frac{D_n(s)m_1(s,\tau)}{\sqrt{n}\hat{\pi}(s)} \\ = & W_{ipw,1,n}(\tau)+ R_{ipw}(\tau), \end{align*} where \begin{align} \label{eq:wipw} W_{ipw,1,n}(\tau) = \sum_{s \in \mathcal{S}} \frac{1}{\sqrt{n}}\sum_{i =1}^n \frac{A_i 1\{S_i = s\}}{\pi}\eta_{i,1}(s,\tau) + \sum_{i=1}^n \frac{m_1(S_i,\tau)}{\sqrt{n}} \end{align} and \begin{align*} & R_{ipw}(\tau) \\ = & - \sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{A_i1\{S_i = s\}D_n(s)}{n(s)\sqrt{n}\hat{\pi}(s)\pi}\eta_{i,1}(s,\tau) - \sum_{s \in \mathcal{S}} \frac{D_n(s)m_1(s,\tau)}{n(s)\sqrt{n}\hat{\pi}(s)\pi} D_n(s) + \sum_{s \in \mathcal{S}} \frac{D_n(s)m_1(s,\tau)}{\sqrt{n}}\left(\frac{1}{\pi} - \frac{1}{\hat{\pi}(s)}\right) \\ = & - \sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{A_i1\{S_i = s\}D_n(s)}{n(s)\sqrt{n}\hat{\pi}(s)\pi}\eta_{i,1}(s,\tau), \end{align*} where we use the fact that $\hat{\pi}(s) - \pi = \frac{D_n(s)}{n(s)}$. By the same argument as in Claim (1) of the proof of Lemma \ref{lem:Q}, we have, for every $s\in\mathcal{S}$, \begin{align} \label{eq:eta1} \sup_{\tau \in \Upsilon}\left|\frac{1}{\sqrt{n}}\sum_{i =1}^n A_i 1\{S_i=s\}\eta_{i,1}(s,\tau)\right| \stackrel{d}{=} \sup_{\tau \in \Upsilon}\left|\frac{1}{\sqrt{n}}\sum_{i =N(s)+1}^{N(s)+n_1(s)} \tilde{\eta}_{i,1}(s,\tau)\right| = O_p(1), \end{align} where $\tilde{\eta}_{i,j}(s,\tau) = \tau - 1\{Y_i^s(j) \leq q_j(\tau) \} - m_j(s,\tau)$ for $j = 0,1$, and $\{Y_i^s(0),Y_i^s(1)\}_{i \geq 1}$ are the same as defined in Step 1 of the proof of Theorem \ref{thm:qr}. Because of \eqref{eq:eta1} and the fact that $\frac{D_n(s)}{n(s)} = o_p(1)$, we have \begin{align*} \sup_{\tau \in \Upsilon}|R_{ipw}(\tau)| = o_p(1). \end{align*} For $L_{2,n}(u,\tau)$, we have \begin{align*} L_{2,n}(u,\tau) \stackrel{d}{=} & \sum_{s \in \mathcal{S}} \frac{1}{\hat{\pi}(s)}\sum_{i=N(s)+1}^{N(s)+n_1(s)} \int_0^{\frac{u}{\sqrt{n}}}(1\{Y_i^s(1) \leq q_1(\tau)+v\} - 1\{Y_i^s(1) \leq q_1(\tau)\})dv \\ = & \sum_{s \in \mathcal{S}} \frac{1}{\hat{\pi}(s)} \left[\Gamma_n^s(N(s)+n_1(s),\tau) - \Gamma_n^s(N(s),\tau) \right], \end{align*} where \begin{align*} \Gamma_n^s(k,\tau) = \sum_{i=1}^k \int_0^{\frac{u}{\sqrt{n}}}(1\{Y_i^s(1) \leq q_1(\tau)+v\} - 1\{Y_i^s(1) \leq q_1(\tau)\})dv.
\end{align*} By the same argument in \eqref{eq:Gamma}, we can show that \begin{align*} \sup_{t \in (0,1),\tau \in \Upsilon}|\Gamma_n^s(\lfloor nt \rfloor,\tau) - \mathbb{E}\Gamma_n^s(\lfloor nt \rfloor,\tau)| = o_p(1). \end{align*} In addition, \begin{align*} \mathbb{E}\Gamma_n^s(N(s)+n_1(s),\tau) - \mathbb{E}\Gamma_n^s(N(s),\tau) \convP \frac{\pi p(s)f_1(q_1(\tau)|s)u^2}{2}. \end{align*} Therefore, \begin{align*} \sup_{\tau \in \Upsilon}\left|L_{2,n}(u,\tau) - \frac{f_1(q_1(\tau))u^2}{2}\right| = o_p(1), \end{align*} where we use the fact that $\hat{\pi}(s)-\pi = \frac{D_{n}(s)}{n(s)} = o_p(1)$ and \begin{align*} \sum_{s \in \mathcal{S}}p(s)f_1(q_1(\tau)|s) = f_1(q_1(\tau)). \end{align*} This establishes \eqref{eq:gipw} with $Q_{ipw,1}(\tau) = f_1(q_1(\tau))$ and $W_{ipw,n}(\tau)$ defined in \eqref{eq:wipw}. \textbf{Step 2.} Statement (2) holds by Assumption \ref{ass:tau}. \textbf{Step 3.} By a similar argument in Step 1, we have \begin{align*} W_{ipw,0,n}(\tau) = \sum_{s \in \mathcal{S}} \frac{1}{\sqrt{n}}\sum_{i =1}^n \frac{(1-A_i) 1\{S_i = s\}}{1-\pi}\eta_{i,0}(s,\tau) + \sum_{i=1}^n \frac{m_0(S_i,\tau)}{\sqrt{n}} \end{align*} and $Q_{ipw,0}(\tau) = f_0(q_0(\tau))$. Therefore, \begin{align} \label{eq:wipw2} \sqrt{n}(\hat{q} - q) = & \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n \left[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\right] \notag \\ & + \left[\frac{1}{\sqrt{n}}\sum_{i=1}^n\left(\frac{m_1(S_i,\tau)}{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau)}{f_0(q_0(\tau))}\right)\right] + R_{ipw,n}(\tau) \notag \\ = & \mathcal{W}_{n,1}(\tau)+\mathcal{W}_{n,2}(\tau)+ R_{ipw,n}(\tau) \end{align} where $\sup_{\tau \in \Upsilon}|R_{ipw,n}(\tau)| = o_p(1)$. Last, Lemma \ref{lem:Qipw} establishes that \begin{align*} (\mathcal{W}_{n,1}(\tau),\mathcal{W}_{n,2}(\tau)) \rightsquigarrow (\mathcal{B}_{ipw,1}(\tau),\mathcal{B}_{ipw,2}(\tau)), \end{align*} where $(\mathcal{B}_{ipw,1}(\tau),\mathcal{B}_{ipw,2}(\tau))$ are two mutually independent scalar Gaussian processes with covariance kernels \begin{align*} \Sigma_{ipw,1}(\tau_1,\tau_2) = \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2) }{\pi f_1(q_1(\tau_1))f_1(q_1(\tau_2))} + \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)}{(1-\pi)f_0(q_0(\tau_1))f_0(q_0(\tau_2)) } \end{align*} and \begin{align*} \Sigma_{ipw,2}(\tau_1,\tau_2) = \mathbb{E}\left(\frac{m_1(S,\tau_1)}{f_1(q_1(\tau_1))} - \frac{m_0(S,\tau_1)}{f_0(q_0(\tau_1))}\right)\left(\frac{m_1(S,\tau_2)}{f_1(q_1(\tau_2))} - \frac{m_0(S,\tau_2)}{f_0(q_0(\tau_2))}\right), \end{align*} respectively. In particular, the asymptotic variance for $\hat{q}$ is \begin{align*} \zeta_Y^2(\pi,\tau) + \zeta_S^2(\tau), \end{align*} where $\zeta_Y^2(\pi,\tau)$ and $\zeta_S^2(\tau)$ are the same as those in the proof of Theorem \ref{thm:qr}. \section{Proof of Theorem \ref{thm:b}} \label{sec:b} First, we consider the weighted bootstrap for the SQR estimator. Note that \begin{align*} \sqrt{n}(\hat{\beta}^w(\tau) - \beta(\tau)) = \argmin_{u}L_n^w(u,\tau), \end{align*} where \begin{align*} L_n^w(u,\tau) = \sum_{i=1}^n \xi_i\left[\rho_\tau(Y_i - \dot{A}_i'\beta(\tau) - \dot{A}_i'u/\sqrt{n}) - \rho_\tau(Y_i - \dot{A}_i'\beta(\tau))\right]. 
\end{align*} Similar to the proof of Theorem \ref{thm:qr}, we can show that \begin{align*} \sup_{ \tau \in \Upsilon}|L_n^w(u,\tau) - g_n^w(u,\tau)| \rightarrow 0, \end{align*} where \begin{align*} g_n^w(u,\tau) = - u'W_n^w(\tau) + \frac{1}{2}u'Q(\tau)u, \end{align*} \begin{align*} W_n^w(\tau) = \sum_{i=1}^n \frac{\xi_{i}}{\sqrt{n}}\dot{A}_i\left(\tau - 1\{Y_i \leq \dot{A}'\beta(\tau)\}\right), \end{align*} and $Q(\tau)$ is defined in \eqref{eq:Qqr}. Therefore, by \citet[Theorem 2]{K09}, we have \begin{align*} \sqrt{n}(\hat{\beta}^w(\tau) - \beta(\tau)) = [Q(\tau)]^{-1}W_n^w(\tau) + r_n^w(\tau), \end{align*} where $\sup_{\tau \in \Upsilon}||r_n^w(\tau)|| = o_p(1)$. By Theorem \ref{thm:qr}, \begin{align*} \sqrt{n}(\hat{\beta}^w(\tau) - \hat{\beta}(\tau)) = [Q(\tau)]^{-1}\sum_{i=1}^n \frac{\xi_{i}-1}{\sqrt{n}}\dot{A}_i\left(\tau - 1\{Y_i \leq \dot{A}'\beta(\tau)\}\right) + o_p(1), \end{align*} where the $o_p(1)$ term holds uniformly over $\tau \in \Upsilon$. In addition, Lemma \ref{lem:Qwqr} shows that, conditionally on data, the second element of $[Q(\tau)]^{-1}\sum_{i=1}^n \frac{\xi_{i}-1}{\sqrt{n}}\dot{A}_i\left(\tau - 1\{Y_i \leq \dot{A}'\beta(\tau)\}\right)$ converges to $\tilde{\mathcal{B}}_{sqr}(\tau)$ uniformly over $\tau \in \Upsilon$. This leads to the desired result for the weighted bootstrap simple quantile regression estimator. Next, we turn to the IPW estimator. Denote $\hat{q}_j^w(\tau)$, $j=0,1$ the weighted bootstrap counterpart of $\hat{q}_j(\tau)$. We have \begin{align*} \sqrt{n}(\hat{q}_1^w(\tau) - q_1(\tau)) = \argmin_u L_n^w(u,\tau), \end{align*} where \begin{align*} L_n^w(u,\tau) = & \sum_{i =1}^n \frac{\xi_iA_i}{\hat{\pi}^w(S_i)}\left[\rho_\tau(Y_i - q_1(\tau) - \frac{u}{\sqrt{n}}) - \rho_\tau(Y_i -q_1(\tau))\right] \\ \equiv & - L_{1,n}^w(\tau) u + L_{2,n}^w(u,\tau), \end{align*} where \begin{align*} L_{1,n}^w(\tau) = \frac{1}{\sqrt{n}}\sum_{i =1}^n \frac{\xi_iA_i}{\hat{\pi}^w(S_i)}(\tau - 1\{Y_i \leq q_1(\tau)\}) \end{align*} and \begin{align*} L_{2,n}^w(\tau) = \sum_{i=1}^n \frac{\xi_iA_i}{\hat{\pi}^w(S_i)}\int_0^{\frac{u}{\sqrt{n}}}(1\{Y_i \leq q_1(\tau)+v\} - 1\{Y_i \leq q_1(\tau)\})dv. 
\end{align*} Recall $$D_n^w(s) = \sum_{i =1}^n \xi_i(A_i - \pi)1\{S_i = s\}, \quad n^w(s) = \sum_{i =1}^n \xi_i1\{S_i = s\},$$ and $$\hat{\pi}^w(s) = \frac{\sum_{i=1}^n \xi_iA_i1\{S_i=s\}}{n^w(s)} = \pi + \frac{D_n^w(s)}{n^w(s)}.$$ Then, for $L_{1,n}^w(\tau)$, we have \begin{align*} L_{1,n}^w(\tau) = & \frac{1}{\sqrt{n}}\sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{\xi_iA_i}{\pi}1\{S_i=s\}(\tau - 1\{Y_i(1) \leq q_1(\tau)\}) \\ & - \sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{\xi_iA_i1\{S_i = s\}(\hat{\pi}^w(s) - \pi)}{\sqrt{n}\hat{\pi}^w(s)\pi}(\tau - 1\{Y_i(1) \leq q_1(\tau)\}) \\ = & \frac{1}{\sqrt{n}}\sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{\xi_iA_i}{\pi}1\{S_i=s\}(\tau - 1\{Y_i(1) \leq q_1(\tau)\}) \\ & - \sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{\xi_iA_i1\{S_i = s\}D^w_n(s)}{n^w(s)\sqrt{n}\hat{\pi}(s)\pi}\eta_{i,1}(s,\tau) - \sum_{s \in \mathcal{S}} \frac{D^w_n(s)m_1(s,\tau)}{n^w(s)\sqrt{n}\hat{\pi}^w(s)\pi} D^w_n(s) - \sum_{s \in \mathcal{S}} \frac{D^w_n(s)m_1(s,\tau)}{\sqrt{n}\hat{\pi}^w(s)} \\ = & \sum_{s \in \mathcal{S}} \frac{1}{\sqrt{n}}\sum_{i =1}^n \frac{\xi_iA_i 1\{S_i = s\}}{\pi}\eta_{i,1}(s,\tau) + \sum_{s \in \mathcal{S}} \frac{D^w_n(s)}{\sqrt{n}\pi}m_{1}(s,\tau) + \sum_{i=1}^n \frac{\xi_im_1(S_i,\tau)}{\sqrt{n}} \\ & - \sum_{s \in \mathcal{S}}D^w_n(s)\sum_{i=1}^n \frac{\xi_iA_i1\{S_i = s\}}{n^w(s)\sqrt{n}\hat{\pi}^w(s)\pi}\eta_{i,1}(s,\tau) - \sum_{s \in \mathcal{S}} \frac{D^w_n(s)m_1(s,\tau)}{n^w(s)\sqrt{n}\hat{\pi}^w(s)\pi} D^w_n(s) - \sum_{s \in \mathcal{S}} \frac{D^w_n(s)m_1(s,\tau)}{\sqrt{n}\hat{\pi}^w(s)} \\ = & W_{ipw,1,n}^w(\tau)+ R_{ipw}^w(\tau), \end{align*} where \begin{align} \label{eq:wipwc} W_{ipw,1,n}^w(\tau) = \sum_{s \in \mathcal{S}} \frac{1}{\sqrt{n}}\sum_{i =1}^n \frac{\xi_iA_i 1\{S_i = s\}}{\pi}\eta_{i,1}(s,\tau) + \sum_{i=1}^n \frac{\xi_im_1(S_i,\tau)}{\sqrt{n}} \end{align} and \begin{align*} & R_{ipw}^w(\tau) \\ = & - \sum_{s \in \mathcal{S}}D^w_n(s)\sum_{i=1}^n \frac{\xi_iA_i1\{S_i = s\}}{n^w(s)\sqrt{n}\hat{\pi}^w(s)\pi}\eta_{i,1}(s,\tau) - \sum_{s \in \mathcal{S}} \frac{D^w_n(s)m_1(s,\tau)}{n^w(s)\sqrt{n}\hat{\pi}^w(s)\pi} D^w_n(s) + \sum_{s \in \mathcal{S}} \frac{D^w_n(s)m_1(s,\tau)}{\sqrt{n}}(\frac{1}{\pi} - \frac{1}{\hat{\pi}^w(s)}) \\ = & - \sum_{s \in \mathcal{S}}D^w_n(s)\sum_{i=1}^n \frac{\xi_iA_i1\{S_i = s\}}{n^w(s)\sqrt{n}\hat{\pi}^w(s)\pi}\eta_{i,1}(s,\tau). \end{align*} In the following, we aim to show $D_n^w(s)/n^w(s) = o_p(1)$ and \begin{align*} \sup_{\tau \in \Upsilon, s \in \mathcal{S}}|\sum_{i=1}^{n}\xi_iA_i1\{S_i =s\}\eta_{i,1}(s,\tau)| = O_p(\sqrt{n}). \end{align*} For the first claim, we note that $n^w(s)/n(s) \convP 1$ and $D_n(s)/n(s) \convP 0$. Therefore, we only need to show \begin{align*} \frac{D_n^w(s) - D_n(s)}{n(s)} = \sum_{i=1}^n\frac{(\xi_i-1)(A_i-\pi)1\{S_i=s\}}{n(s)} \convP 0. \end{align*} As $n(s) \rightarrow \infty$ a.s., given data, \begin{align*} \frac{1}{n(s)}\sum_{i=1}^n(A_i - \pi)^21\{S_i = s\} = & \frac{1}{n}\sum_{i=1}^n\left(A_i - \pi - 2\pi(A_i - \pi) + \pi - \pi^2\right)1\{S_i=s\} \\ = & \frac{D_n(s) - 2\pi D_n(s)}{n(s)} + \pi(1-\pi) \convP \pi (1-\pi). \end{align*} Then, by the Lindeberg CLT, conditionally on data, \begin{align*} \frac{1}{\sqrt{n(s)}} \sum_{i =1}^n (\xi_i - 1)(A_i - \pi)1\{S_i = s\} \rightsquigarrow N(0,\pi(1-\pi)) = O_p(1), \end{align*} and thus \begin{align*} \frac{D_n^w(s) - D_n(s)}{n(s)} = O_p(n^{-1/2}(s)) = o_p(1). \end{align*} This leads to the first claim. 
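The conditional CLT used for the first claim is easy to visualize numerically. The following is a minimal Monte Carlo sketch, not part of the proof; the standard exponential weight distribution (mean one, variance one), the number of strata and all sample sizes are illustrative assumptions. Holding one realization of $\{A_i,S_i\}_{i=1}^n$ fixed, the statistic $n(s)^{-1/2}\sum_{i=1}^n(\xi_i-1)(A_i-\pi)1\{S_i=s\}$ is recomputed over repeated draws of the weights, and its simulated variance is compared with $\pi(1-\pi)$.
\begin{verbatim}
import numpy as np

# Monte Carlo sketch of the conditional CLT (illustrative assumptions only).
rng = np.random.default_rng(0)
n, pi, s0 = 5000, 0.5, 1
S = rng.integers(0, 3, size=n)          # three strata, labelled 0, 1, 2
A = rng.binomial(1, pi, size=n)         # treatment indicators, held fixed below
mask = (S == s0)
n_s = mask.sum()

reps = 2000
stats = np.empty(reps)
for b in range(reps):
    xi = rng.exponential(1.0, size=n)   # bootstrap weights, E[xi] = Var[xi] = 1
    stats[b] = np.sum((xi - 1.0) * (A - pi) * mask) / np.sqrt(n_s)

print("simulated variance:", stats.var())   # close to pi * (1 - pi) = 0.25
\end{verbatim}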
For the second claim, we note that \begin{align*} \sum_{i=1}^{n}\xi_iA_i1\{S_i =s\}\eta_{i,1}(s,\tau) = \sum_{i = N(s)+1}^{N(s)+n_1(s)}\xi_i \tilde{\eta}_{i,1}(s,\tau). \end{align*} We can show the RHS of the above display is $O_p(\sqrt{n})$ for all $s \in \mathcal{S}$ following the same argument used in Claim (1) of the proof of Lemma \ref{lem:Q}. Given these two claims and by noticing that \begin{align*} \hat{\pi}^w(s) - \pi = \frac{D^w_n(s)}{n^w(s)} = o_p(1), \end{align*} we have \begin{align*} \sup_{\tau \in \Upsilon}|R_{ipw}^w(\tau)| = o_p(1). \end{align*} Similar to the argument used to derive the limit of $L_{2,n}(\tau)$ in the proof of Theorem \ref{thm:ipw}, we can show that \begin{align*} \sup_{ \tau \in \Upsilon}|L_{2,n}^w(u,\tau) - \frac{f_1(q_1(\tau))u^2}{2}| = o_p(1). \end{align*} Therefore, \begin{align*} \sqrt{n}(\hat{q}_1^w(\tau) - q_1(\tau)) = \frac{W_{ipw,1,n}^w(\tau)}{f_1(q_1(\tau))} + R_1^w(\tau), \end{align*} where $\sup_{ \tau \in \Upsilon}|R_1^w(\tau)| = o_p(1)$. Similarly, \begin{align*} \sqrt{n}(\hat{q}_0^w(\tau) - q_0(\tau)) = \frac{W_{ipw,0,n}^w(\tau)}{f_0(q_0(\tau))} + R_0^w(\tau), \end{align*} where $$W_{ipw,0,n}^w(\tau) = \sum_{s \in \mathcal{S}} \frac{1}{\sqrt{n}}\sum_{i =1}^n \frac{\xi_i(1-A_i)1\{S_i = s\}}{1-\pi}\eta_{i,0}(s,\tau) + \sum_{i =1}^n\frac{\xi_im_0(S_i,\tau)}{\sqrt{n}}$$ and $\sup_{ \tau \in \Upsilon}|R_0^w(\tau)| = o_p(1)$. Therefore, \begin{align*} & \sqrt{n}(\hat{q}^w(\tau) - \hat{q}(\tau)) \\ = & \sum_{s \in \mathcal{S}}\frac{1}{\sqrt{n}}\sum_{i =1}^n (\xi_i-1)\biggl\{\frac{A_i1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))}-\frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi)f_0(q_0(\tau))} \\ & + \left[\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right]1\{S_i=s\}\biggr\} + o_p(1), \end{align*} where the $o_p(1)$ term holds uniformly over $\tau \in \Upsilon.$ In order to show the conditional weak convergence, we only need to show the conditionally stochastic equicontinuity and finite-dimensional convergence. The former can be shown in the same manner as Lemma \ref{lem:Qwqr}. For the latter, we note that \begin{align*} & \frac{1}{n}\sum_{s \in \mathcal{S}}\sum_{i =1}^n \biggl\{\frac{A_i1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))}-\frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi)f_0(q_0(\tau))} + \left[\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right]1\{S_i=s\}\biggr\}^2 \\ = & \sum_{s \in \mathcal{S}}\frac{1}{n}\sum_{i =1}^n \biggl\{\frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi)f_0(q_0(\tau))}\biggr\}^2 + \sum_{s \in \mathcal{S}}\frac{1}{n}\sum_{i =1}^n \biggl\{\frac{A_i1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))}\biggr\}^2 \\ & + \sum_{s \in \mathcal{S}}\frac{1}{n}\sum_{i =1}^n \biggl\{\left[\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right]1\{S_i=s\}\biggr\}^2 \\ & + \sum_{s \in \mathcal{S}}\frac{2}{n}\sum_{i =1}^n \biggl\{\frac{A_i1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))}\biggr\}\left[\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right] \\ & - \sum_{s \in \mathcal{S}}\frac{2}{n}\sum_{i =1}^n\biggl\{\frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi)f_0(q_0(\tau))}\biggr\} \left[\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right] \\ \convP & \zeta_Y^2(\pi,\tau) + \zeta_S^2(\tau). \end{align*} Note that the RHS of the above display is the same as the asymptotic variance of the original estimator $\hat{q}(\tau)$. 
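This agreement of variances can also be checked by simulation. The sketch below is purely illustrative and is not used in the argument: the data-generating process (normal potential outcomes, two equally likely strata, Bernoulli($\pi$) assignment), the exponential bootstrap weights, and all function names are assumptions made for the example only. It compares the spread of the weighted-bootstrap draws $\sqrt{n}(\hat{q}^w(\tau) - \hat{q}(\tau))$ with the Monte Carlo spread of $\sqrt{n}(\hat{q}(\tau) - q(\tau))$ at $\tau = 0.5$; the two standard deviations should be of comparable magnitude.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
pi, tau, n, reps = 0.5, 0.5, 2000, 300

def weighted_quantile(y, w, tau):
    # smallest y whose cumulative (normalized) weight reaches tau
    order = np.argsort(y)
    cw = np.cumsum(w[order]) / np.sum(w)
    return y[order][np.searchsorted(cw, tau)]

def ipw_qte(y, a, s, xi, tau):
    # IPW estimate of the tau-th quantile treatment effect with stratum-wise
    # propensity estimates; xi are the (bootstrap) observation weights
    pi_hat = {k: np.sum(xi * a * (s == k)) / np.sum(xi * (s == k)) for k in np.unique(s)}
    p = np.array([pi_hat[k] for k in s])
    q1 = weighted_quantile(y[a == 1], (xi / p)[a == 1], tau)
    q0 = weighted_quantile(y[a == 0], (xi / (1.0 - p))[a == 0], tau)
    return q1 - q0

def simulate(n):
    s = rng.integers(0, 2, n)                   # two equally likely strata
    a = rng.binomial(1, pi, n)                  # Bernoulli(pi) assignment
    y = a * (1.0 + s + rng.normal(size=n)) + (1 - a) * (s + rng.normal(size=n))
    return y, a, s

# sampling distribution of the IPW estimator
est = np.array([ipw_qte(*simulate(n), np.ones(n), tau) for _ in range(reps)])
# weighted bootstrap around one fixed sample
y, a, s = simulate(n)
qhat = ipw_qte(y, a, s, np.ones(n), tau)
boot = np.array([ipw_qte(y, a, s, rng.exponential(1.0, n), tau) for _ in range(reps)])
print("sampling sd of sqrt(n)(qhat - q)      :", np.sqrt(n) * est.std())
print("bootstrap sd of sqrt(n)(qhat_w - qhat):", np.sqrt(n) * (boot - qhat).std())
\end{verbatim}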
By the CLT conditional on data, we can establish the one-dimensional weak convergence. Then, by the Cram\'{e}r-Wold Theorem, we can extend such result to any finite dimension. This concludes the proof. \section{Proof of Theorem \ref{thm:cab}} \label{sec:cab} It suffices to prove the theorem with \begin{align*} \tilde{q}(\tau) = & q(\tau)+\biggl[\sum_{s \in \mathcal{S}} \sum_{i = \lfloor n F(s) \rfloor+1}^{\lfloor n (F(s)+\pi p(s)) \rfloor}\frac{\tilde{\eta}_{i,1}(s,\tau)}{n\pi f_1(q_1(\tau))} - \sum_{s \in \mathcal{S}} \sum_{i = \lfloor n (F(s)+\pi p(s)) \rfloor+1}^{\lfloor n (F(s)+ p(s)) \rfloor}\frac{\tilde{\eta}_{i,0}(s,\tau)}{n(1-\pi)f_0(q_0(\tau))}\biggr] \\ & + \biggl[\sum_{i=1}^n \frac{1}{n} \left(\frac{m_1(S_i,\tau) }{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau) }{f_0(q_0(\tau))}\right) \biggr], \end{align*} as we have shown in Theorem \ref{thm:ipw} that \begin{align*} \sup_{ \tau \in \Upsilon}|\tilde{q}(\tau) - \hat{q}(\tau)| = o_p(1/\sqrt{n}). \end{align*} We first consider the SQR estimator. Note that \begin{align*} \sqrt{n}(\hat{\beta}^*(\tau) - \beta(\tau)) = \argmin_{u} L^*_n(u,\tau), \end{align*} where $L^*_n(u,\tau) = \sum_{i=1}^n \left[\rho_\tau(Y_i^* - \dot{A}^{*'}_i\beta(\tau) - \dot{A}^{*'}_iu/\sqrt{n}) - \rho_\tau(Y_i^* - \dot{A}^{*'}_i\beta(\tau))\right]$. Then, $\hat{\beta}_1^*(\tau)$, the bootstrap counterpart of the SQR estimator, is just the second element of $\hat{\beta}^*(\tau)$. Similar to the proof of Theorem \ref{thm:qr}, \begin{align*} L^*_n(u,\tau) = -u'W_n^*(\tau) + Q_n^*(u,\tau), \end{align*} where \begin{align*} W_n^*(\tau) = \sum_{i =1}^n \frac{1}{\sqrt{n}}\dot{A}_i^*(\tau - 1\{Y_i^* \leq \dot{A}_i^{*'}\beta(\tau) \}) \end{align*} and \begin{align} \label{eq:Qstar} Q_n^*(u,\tau) = & \sum_{i=1}^{n}\int_0^{\frac{\dot{A}_i^{*\prime} u}{\sqrt{n}}}\left(1\{Y_i^* - \dot{A}_i^{*\prime}\beta(\tau)\leq v\} - 1\{Y_i^* - \dot{A}_i^{*\prime}\beta(\tau)\leq 0\} \right)dv \notag \\ = & \sum_{i=1}^n A^*_i\int_0^{\frac{u_0+u_1}{\sqrt{n}}}\left(1\{Y_i^*(1) - q_1(\tau)\leq v\} - 1\{Y_i^*(1) - q_1(\tau)\leq 0\} \right)dv \notag \\ & + \sum_{i=1}^n (1-A^*_i)\int_0^{\frac{u_0}{\sqrt{n}}}\left(1\{Y_i^*(0) - q_0(\tau)\leq v\} - 1\{Y_i^*(0) - q_0(\tau)\leq 0\} \right)dv \notag \\ \equiv & Q^*_{n,1}(u,\tau) + Q^*_{n,0}(u,\tau). \end{align} Define $\eta_{i,j}^*(s,\tau) = (\tau - 1\{Y_i^*(j)\leq q_j(\tau) \}) - m_j(s,\tau)$ and $\tilde{\eta}_{i,j}(s,\tau) = \tau - 1\{Y^s_i(j) \leq q_j(\tau)\} - m_j(s,\tau)$, $j = 0,1$, where $Y^s_i(j)$ is defined in the proof of Theorem \ref{thm:qr}. 
Then, we have \begin{align*} W_n^*(\tau) = &e_1 \sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}}A^*_i 1\{S^*_i = s\}(\tau - 1\{Y^*_i(1) \leq q_1(\tau)\})\\ & + e_0\sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}} (1-A^*_i) 1\{S^*_i = s\}(\tau - 1\{Y^*_i(0) \leq q_0(\tau)\}) \\ = & \biggl[e_1 \sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}}A^*_i 1\{S^*_i = s\}\eta^*_{i,1}(s,\tau) + e_0\sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}} (1-A^*_i) 1\{S^*_i = s\}\eta^*_{i,0}(s,\tau)\biggr] \notag \\ & + \biggl[e_1 \sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}}(A^*_i-\pi) 1\{S^*_i = s\}m_1(s,\tau) - e_0\sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}} (A_i^*-\pi) 1\{S_i^* = s\}m_0(s,\tau)\biggr] \notag \\ & + \biggl[e_1 \sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}}\pi 1\{S^*_i = s\}m_1(s,\tau) + e_0\sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{1}{\sqrt{n}} (1-\pi) 1\{S^*_i = s\}m_0(s,\tau) \biggr] \notag \\ \equiv & W^*_{n,1}(\tau) + W^*_{n,2}(\tau) + W^*_{n,3}(\tau). \end{align*} By Lemma \ref{lem:Rsfestar}, there exists a sequence of independent Poisson(1) random variables $\{\xi_i^s\}_{i \geq 1,s\in \mathcal{S}}$ such that $\{\xi_i^s\}_{i \geq 1,s\in \mathcal{S}} \perp\!\!\!\perp \{A_i^*,S_i^*,Y_i,A_i,S_i\}_{i\geq 1}$, \begin{align*} \sum_{i =1}^nA_i^*1\{S_i^*=s\}\eta_{i,1}^*(s,\tau) = \sum_{i = N(s)+1}^{N(s)+n_1(s)}\xi_i^s\tilde{\eta}_{i,1}(s,\tau) + R^*_1(s,\tau), \end{align*} and \begin{align*} \sum_{i =1}^n(1-A_i^*)1\{S_i^*=s\}\eta_{i,0}^*(s,\tau) = \sum_{i = N(s)+n_1(s)+1}^{N(s)+n(s)}\xi_i^s\tilde{\eta}_{i,0}(s,\tau) + R^*_0(s,\tau), \end{align*} where $\sup_{\tau \in \Upsilon}(|R^*_1(s,\tau)| + |R^*_0(s,\tau)|) = o_p(\sqrt{n(s)}) = o_p(\sqrt{n})$ for all $s \in \mathcal{S}$. Therefore, \begin{align*} (W^*_{n,1}(\tau),W^*_{n,2}(\tau),W^*_{n,3}(\tau)) \stackrel{d}{=} (\tilde{W}^*_{n,1}(\tau) + R(\tau),W^*_{n,2}(\tau), W^*_{n,3}(\tau)), \end{align*} where $\sup_{\tau \in \Upsilon}||R(\tau)|| = o_p(1)$ and \begin{align*} \tilde{W}^*_{n,1}(\tau) = e_1 \sum_{s \in \mathcal{S}} \sum_{i = N(s)+1}^{N(s)+n_1(s)}\frac{\xi_i^s}{\sqrt{n}}\tilde{\eta}_{i,1}(s,\tau) + e_0\sum_{s \in \mathcal{S}} \sum_{i = N(s)+n_1(s)+1}^{N(s)+n(s)}\frac{\xi_i^s}{\sqrt{n}}\tilde{\eta}_{i,0}(s,\tau). \end{align*} In addition, following the same argument as in the proof of Lemma \ref{lem:Q}, we can further show that \begin{align*} \tilde{W}^*_{n,1}(\tau) = W^{*\ast}_{n,1}(\tau) + R^\ast_n(\tau), \end{align*} where $\sup_{\tau \in \Upsilon}||R^\ast_n(\tau)||= o_p(1)$ and \begin{align*} W^{*\ast}_{n,1}(\tau) = e_1 \sum_{s \in \mathcal{S}} \sum_{i = \lfloor n F(s) \rfloor+1}^{\lfloor n (F(s)+\pi p(s)) \rfloor}\frac{\xi_i^s}{\sqrt{n}}\tilde{\eta}_{i,1}(s,\tau) + e_0\sum_{s \in \mathcal{S}} \sum_{i = \lfloor n (F(s)+\pi p(s)) \rfloor+1}^{\lfloor n (F(s)+ p(s)) \rfloor}\frac{\xi_i^s}{\sqrt{n}}\tilde{\eta}_{i,0}(s,\tau). \end{align*} By construction, $W^{*\ast}_{n,1}(\tau) \perp\!\!\!\perp (W^{*}_{n,2}(\tau),W^{*}_{n,3}(\tau))$. Also note that $\{S_i^*\}_{i=1}^n$ are the nonparametric bootstrap draws based on the empirical CDF of $\{S_i\}_{i=1}^n$.
Then, by \citet[Section 3.6]{VW96}, there exists a sequence of independent Poisson(1) random variables $\{\tilde{\xi}_i\}_{i \geq 1}$ that is independent of data, $\{A_i^*\}$ and $\{\xi^s_i\}_{i \geq 1,s\in \mathcal{S}}$ such that \begin{align*} \sup_{\tau \in \Upsilon}||W_{n,3}^*(\tau) - W_{n,3}^{**}(\tau)|| = o_p(1), \end{align*} where \begin{align*} W_{n,3}^{**}(\tau) = e_1 \sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{\tilde{\xi}_i}{\sqrt{n}}\pi 1\{S_i = s\}m_1(s,\tau) + e_0\sum_{s \in \mathcal{S}} \sum_{i=1}^n \frac{\tilde{\xi}_i}{\sqrt{n}} (1-\pi) 1\{S_i = s\}m_0(s,\tau). \end{align*} By Lemma \ref{lem:Qstar}, \begin{align*} Q_n^*(u,\tau) \convP \frac{1}{2}u'Q(\tau)u, \end{align*} where $Q(\tau)$ is defined in \eqref{eq:Qqr}. Then, by the same argument in the proof of Theorem \ref{thm:qr}, we have \begin{align*} \sqrt{n}(\hat{\beta}^*(\tau) - \beta(\tau)) = Q^{-1}(\tau)(W^{*\ast}_{n,1}(\tau)+W^{*}_{n,2}(\tau)+W^{**}_{n,3}(\tau)) + R^*(\tau), \end{align*} where $\sup_{\tau \in \Upsilon}||R^*(\tau)|| = o_p(1)$. Focusing on the second element of $\hat{\beta}^*(\tau)$, we have \begin{align*} \sqrt{n}(\hat{\beta}^*_1(\tau) - q(\tau)) = & \biggl[\sum_{s \in \mathcal{S}} \sum_{i = \lfloor n F(s) \rfloor+1}^{\lfloor n (F(s)+\pi p(s)) \rfloor}\frac{\xi_i^s\tilde{\eta}_{i,1}(s,\tau)}{\sqrt{n}\pi f_1(q_1(\tau))} - \sum_{s \in \mathcal{S}} \sum_{i = \lfloor n (F(s)+\pi p(s)) \rfloor+1}^{\lfloor n (F(s)+ p(s)) \rfloor}\frac{\xi_i^s\tilde{\eta}_{i,0}(s,\tau)}{\sqrt{n}(1-\pi)f_0(q_0(\tau))}\biggr] \\ & + \biggl[ \sum_{s \in \mathcal{S}} \frac{D_n^*(s)}{\sqrt{n}}\left(\frac{m_1(s,\tau)}{\pi f_1(q_1(\tau))} + \frac{m_0(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\right) \biggr] \\ & + \biggl[\sum_{i=1}^n \frac{\tilde{\xi}_i}{\sqrt{n}} \left(\frac{m_1(S_i,\tau) }{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau) }{f_0(q_0(\tau))}\right) \biggr] + R_1^*(\tau), \end{align*} where $\sup_{ \tau \in \Upsilon}|R_1^*(\tau)| = o_p(1)$. In addition, by definition, we have \begin{align*} \sqrt{n}(\tilde{q}(\tau) - q(\tau)) = & \biggl[\sum_{s \in \mathcal{S}} \sum_{i = \lfloor n F(s) \rfloor+1}^{\lfloor n (F(s)+\pi p(s)) \rfloor}\frac{\tilde{\eta}_{i,1}(s,\tau)}{\sqrt{n}\pi f_1(q_1(\tau))} - \sum_{s \in \mathcal{S}} \sum_{i = \lfloor n (F(s)+\pi p(s)) \rfloor+1}^{\lfloor n (F(s)+ p(s)) \rfloor}\frac{\tilde{\eta}_{i,0}(s,\tau)}{\sqrt{n}(1-\pi)f_0(q_0(\tau))}\biggr] \\ & + \biggl[\sum_{i=1}^n \frac{1}{\sqrt{n}} \left(\frac{m_1(S_i,\tau) }{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau) }{f_0(q_0(\tau))}\right) \biggr]. \end{align*} By taking the difference of the two displays above, we have \begin{align} \label{eq:betastar} \sqrt{n}(\hat{\beta}^*_1(\tau) - \tilde{q}(\tau)) = & \biggl[\sum_{s \in \mathcal{S}} \sum_{i = \lfloor n F(s) \rfloor+1}^{\lfloor n (F(s)+\pi p(s)) \rfloor}\frac{(\xi_i^s-1)\tilde{\eta}_{i,1}(s,\tau)}{\sqrt{n}\pi f_1(q_1(\tau))} - \sum_{s \in \mathcal{S}} \sum_{i = \lfloor n (F(s)+\pi p(s)) \rfloor+1}^{\lfloor n (F(s)+ p(s)) \rfloor}\frac{(\xi_i^s-1)\tilde{\eta}_{i,0}(s,\tau)}{\sqrt{n}(1-\pi)f_0(q_0(\tau))}\biggr] \notag \\ & + \biggl[ \sum_{s \in \mathcal{S}} \frac{D_n^*(s)}{\sqrt{n}}\left(\frac{m_1(s,\tau)}{\pi f_1(q_1(\tau))} + \frac{m_0(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\right) \biggr] \notag \\ & + \biggl[\sum_{i=1}^n \frac{\tilde{\xi}_i-1}{\sqrt{n}} \left(\frac{m_1(S_i,\tau) }{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau) }{f_0(q_0(\tau))}\right) \biggr] + R_1^*(\tau).
\end{align} Note that, conditionally on data, the first and third brackets on the RHS of the above display converge to Gaussian processes with covariance kernels \begin{align*} \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2)}{\pi f_1(q_1(\tau_1))f_1(q_1(\tau_2))} + \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)}{(1-\pi) f_0(q_0(\tau_1))f_0(q_0(\tau_2))} \end{align*} and \begin{align*} \mathbb{E}\biggl[\frac{m_1(S,\tau_1)}{f_1(q_1(\tau_1))} - \frac{m_0(S,\tau_1)}{f_0(q_0(\tau_1))}\biggr] \biggl[\frac{m_1(S,\tau_2)}{f_1(q_1(\tau_2))} - \frac{m_0(S,\tau_2)}{f_0(q_0(\tau_2))}\biggr], \end{align*} uniformly over $\tau \in \Upsilon$, respectively. In addition, by Assumption \ref{ass:bassignment}.1, conditionally on data (and thus on $\{S_i\}_{i=1}^n$), the second bracket on the RHS of \eqref{eq:betastar} converges to a Gaussian process with covariance kernel \begin{align*} \mathbb{E}\gamma(S)\biggl[\frac{m_1(S,\tau_1)}{\pi f_1(q_1(\tau_1))}+\frac{m_0(S,\tau_1)}{(1-\pi) f_0(q_0(\tau_1))}\biggr]\biggl[\frac{m_1(S,\tau_2)}{\pi f_1(q_1(\tau_2))}+\frac{m_0(S,\tau_2)}{(1-\pi) f_0(q_0(\tau_2))}\biggr], \end{align*} uniformly over $\tau \in \Upsilon$. Furthermore, we notice that these three Gaussian processes are independent. Therefore, we have, conditionally on data and uniformly over $\tau \in \Upsilon$, \begin{align*} \sqrt{n}(\hat{\beta}^*_1(\tau) - \tilde{q}(\tau)) \rightsquigarrow \mathcal{B}_{sqr}(\tau), \end{align*} where $\mathcal{B}_{sqr}(\tau)$ is defined in Theorem \ref{thm:qr}. This leads to the desired result for the simple quantile regression estimator. Next, we briefly describe the derivation for the IPW estimator. Following the proof of Theorem \ref{thm:ipw}, we have \begin{align*} \sqrt{n}(\hat{q}^*_1(\tau) - q_1(\tau)) = & \argmin_u L^*_n(u,\tau), \end{align*} where \begin{align*} L^*_n(u,\tau) \equiv & \sum_{i =1}^n \frac{A^*_i}{\hat{\pi}^*(S^*_i)}\left[\rho_\tau(Y^*_i - q_1(\tau) - \frac{u}{\sqrt{n}}) - \rho_\tau(Y^*_i - q_1(\tau))\right] \\ = & - L^*_{1,n}(\tau)u + L^*_{2,n}(u,\tau), \end{align*} and $\hat{\pi}^*(s) = \frac{n_1^*(s)}{n^*(s)}$. Then, we have \begin{align*} L^*_{1,n}(\tau) = W_{ipw,1,n}^*(\tau) + R_{ipw,1}^*(\tau), \end{align*} where \begin{align*} W_{ipw,1,n}^*(\tau) = \sum_{s\in \mathcal{S}}\frac{1}{\sqrt{n}}\sum_{i=1}^n \frac{A_i^*1\{S_i^*=s\}\eta_{i,1}^*(s,\tau)}{\pi} + \sum_{i=1}^n \frac{m_1(S_i^*,\tau)}{\sqrt{n}}, \end{align*} and \begin{align*} R_{ipw,1}^*(\tau) = - \sum_{i=1}^n \sum_{s \in \mathcal{S}}\frac{A_i^* 1\{S_i^*=s\}D_n^*(s)}{n^*(s)\sqrt{n}\hat{\pi}^*(s)\pi}\eta_{i,1}^*(s,\tau). \end{align*} By Lemma \ref{lem:Rsfestar}, $\sup_{ \tau \in \Upsilon}|R_{ipw,1}^*(\tau)| = o_p(1)$. In addition, by the same argument as above, we can show that \begin{align*} \sup_{ \tau \in \Upsilon}|W_{ipw,1,n}^*(\tau) - W_{ipw,1,n}^{**}(\tau)| = o_p(1), \end{align*} where \begin{align*} W_{ipw,1,n}^{**}(\tau) = & \sum_{s \in \mathcal{S}} \sum_{i = \lfloor n F(s) \rfloor+1}^{\lfloor n (F(s)+\pi p(s)) \rfloor}\frac{\xi_i^s\tilde{\eta}_{i,1}(s,\tau)}{\sqrt{n}\pi} + \sum_{i=1}^n \frac{\tilde{\xi}_im_1(S_i,\tau) }{\sqrt{n}}. \end{align*} Similar to Lemma \ref{lem:Qstar}, we can show that, uniformly over $\tau \in \Upsilon$, \begin{align*} L_{2,n}^*(u,\tau) \convP \frac{f_1(q_1(\tau))u^2}{2}. \end{align*} Therefore, \begin{align*} \sqrt{n}(\hat{q}^*_1(\tau) - q_1(\tau)) = \frac{W_{ipw,1,n}^{**}(\tau)}{f_1(q_1(\tau))} + R_{ipw,1}^{**}(\tau), \end{align*} where $\sup_{ \tau \in \Upsilon}|R_{ipw,1}^{**}(\tau)| = o_p(1)$.
Similarly, we can show \begin{align*} \sqrt{n}(\hat{q}^*_0(\tau) - q_0(\tau)) = \frac{W_{ipw,0,n}^{**}(\tau)}{f_0(q_0(\tau))} + R_{ipw,0}^{**}(\tau), \end{align*} where $\sup_{ \tau \in \Upsilon}|R_{ipw,0}^{**}(\tau)| = o_p(1)$ and \begin{align*} W_{ipw,0,n}^{**}(\tau) = & \sum_{s \in \mathcal{S}} \sum_{i = \lfloor n (F(s)+\pi p(s))\rfloor+1}^{\lfloor n (F(s)+p(s)) \rfloor}\frac{\xi_i^s\tilde{\eta}_{i,0}(s,\tau)}{\sqrt{n}(1-\pi)} + \sum_{i=1}^n \frac{\tilde{\xi}_im_0(S_i,\tau) }{\sqrt{n}}. \end{align*} Therefore, \begin{align*} \sqrt{n}(\hat{q}^*(\tau) - \tilde{q}(\tau)) = & \biggl[\sum_{s \in \mathcal{S}} \sum_{i = \lfloor n F(s) \rfloor+1}^{\lfloor n (F(s)+\pi p(s)) \rfloor}\frac{(\xi_i^s-1)\tilde{\eta}_{i,1}(s,\tau)}{\sqrt{n}\pi f_1(q_1(\tau))} - \sum_{s \in \mathcal{S}} \sum_{i = \lfloor n (F(s)+\pi p(s)) \rfloor+1}^{\lfloor n (F(s)+ p(s)) \rfloor}\frac{(\xi_i^s-1)\tilde{\eta}_{i,0}(s,\tau)}{\sqrt{n}(1-\pi)f_0(q_0(\tau))}\biggr] \notag \\ & + \biggl[\sum_{i=1}^n \frac{\tilde{\xi}_i-1}{\sqrt{n}} \left(\frac{m_1(S_i,\tau) }{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau) }{f_0(q_0(\tau))}\right) \biggr] + R_{ipw}^*(\tau), \end{align*} where $\sup_{ \tau \in \Upsilon}|R_{ipw}^{*}(\tau)| = o_p(1)$. Last, we can show that, conditionally on data and uniformly over $\tau \in \Upsilon$, the RHS of the above display weakly converges to the Gaussian process $\mathcal{B}_{ipw}(\tau)$, where $\mathcal{B}_{ipw}(\tau)$ is defined in Theorem \ref{thm:ipw}. \section{Technical Lemmas} \label{sec:lem} \begin{lem} \label{lem:S} Let $S_k$ be the $k$-th partial sum of a sequence of independent and identically distributed Banach space valued random variables. Then \begin{align*} \mathbb{P}(\max_{1 \leq k \leq n}||S_k|| \geq \varepsilon) \leq 3\max_{1 \leq k \leq n}\mathbb{P}(||S_k||\geq \varepsilon/3). \end{align*} \end{lem} When $S_k$ takes values in $\Re$, Lemma \ref{lem:S} is \citet[Exercise 2.3]{PLS08}. \begin{proof} First suppose $\max_k \mathbb{P}(||S_n - S_k||\geq 2\varepsilon/3)\leq 2/3$. In addition, define \begin{align*} A_k = \{ ||S_k|| \geq \varepsilon, ||S_j||< \varepsilon, 1\leq j < k\}. \end{align*} Then, \begin{align*} \mathbb{P}(\max_k ||S_k|| \geq \varepsilon) \leq & \mathbb{P}(||S_n|| \geq \varepsilon/3) + \sum_{k=1}^n\mathbb{P}(||S_n|| \leq \varepsilon/3,A_k) \\ \leq & \mathbb{P}(||S_n|| \geq \varepsilon/3) + \sum_{k=1}^n\mathbb{P}(||S_n-S_k|| \geq 2\varepsilon/3)\mathbb{P}(A_k) \\ \leq & \mathbb{P}(||S_n|| \geq \varepsilon/3) + \frac{2}{3}\mathbb{P}(\max_k ||S_k|| \geq \varepsilon). \end{align*} This implies \begin{align*} \mathbb{P}(\max_k ||S_k|| \geq \varepsilon) \leq 3\mathbb{P}(||S_n|| \geq \varepsilon/3). \end{align*} On the other hand, if $\max_k \mathbb{P}(||S_n - S_k||\geq 2\varepsilon/3) > 2/3$, then there exists $k_0$ such that $ \mathbb{P}(||S_n - S_{k_0}||\geq 2\varepsilon/3) > 2/3$. Thus, \begin{align*} \mathbb{P}(||S_n|| \geq \varepsilon/3) + \mathbb{P}(||S_{k_0}|| \geq \varepsilon/3) \geq 2/3. \end{align*} This implies \begin{align*} 3\max_{1 \leq k \leq n}\mathbb{P}(||S_k|| \geq \varepsilon/3) \geq 3 \max(\mathbb{P}(||S_n|| \geq \varepsilon/3), \mathbb{P}(||S_{k_0}|| \geq \varepsilon/3)) \geq 1 \geq \mathbb{P}(\max_{1 \leq k \leq n}||S_k|| \geq \varepsilon). \end{align*} This concludes the proof. \end{proof} \begin{lem} \label{lem:Q} Let $W_{n,j}(\tau)$, $j = 1,2,3$ be defined as in \eqref{eq:W}.
If Assumptions in Theorem \ref{thm:qr} hold, then uniformly over $\tau \in \Upsilon$, \begin{align*} (W_{n,1}(\tau), W_{n,2}(\tau), W_{n,3}(\tau)) \rightsquigarrow (\mathcal{B}_1(\tau),\mathcal{B}_2(\tau),\mathcal{B}_3(\tau)), \end{align*} where $(\mathcal{B}_1(\tau),\mathcal{B}_2(\tau),\mathcal{B}_3(\tau))$ are three independent two-dimensional Gaussian processes with covariance kernels $\Sigma_1(\tau_1,\tau_2)$, $\Sigma_2(\tau_1,\tau_2)$, and $\Sigma_3(\tau_1,\tau_2)$, respectively. The expressions for the three kernels are derived in the proof below. \end{lem} \begin{proof} We follow the general argument in the proof of \citet[Lemma B.2]{BCS17}. We divide the proof into two steps. In the first step, we show that \begin{align*} (W_{n,1}(\tau), W_{n,2}(\tau), W_{n,3}(\tau)) \stackrel{d}{=}( W^\star_{n,1}(\tau), W_{n,2}(\tau), W_{n,3}(\tau)) + o_p(1), \end{align*} where the $o_p(1)$ term holds uniformly over $\tau \in \Upsilon$, $ W^\star_{n,1}(\tau) \perp\!\!\!\perp (W_{n,2}(\tau),W_{n,3}(\tau))$, and, uniformly over $\tau \in \Upsilon$, \begin{align*} W^\star_{n,1}(\tau) \rightsquigarrow \mathcal{B}_1(\tau). \end{align*} In the second step, we show that \begin{align*} (W_{n,2}(\tau),W_{n,3}(\tau)) \rightsquigarrow (\mathcal{B}_2(\tau),\mathcal{B}_3(\tau)) \end{align*} uniformly over $\tau \in \Upsilon$ and $\mathcal{B}_2(\tau) \perp\!\!\!\perp \mathcal{B}_3(\tau)$. \textbf{Step 1.} Let $\tilde{\eta}_{i,j}(s,\tau) = \tau - 1\{Y_i^s(j) \leq q_j(\tau) \} - m_j(s,\tau)$, for $j = 0,1$, where $\{Y_i^s(0),Y_i^s(1)\}_{i \geq 1}$ are the same as defined in Step 1 in the proof of Theorem \ref{thm:qr}. In addition, denote \begin{align*} \tilde{W}_{n,1}(\tau) = e_1 \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s) + n_1(s)} \frac{1}{\sqrt{n}}\tilde{\eta}_{i,1}(s,\tau) + e_0\sum_{s \in \mathcal{S}} \sum_{i=N(s) + n_1(s)+1}^{N(s) + n(s)} \frac{1}{\sqrt{n}}\tilde{\eta}_{i,0}(s,\tau). \end{align*} Then, we have \begin{align*} \{W_{n,1}(\tau)|\{A_i,S_i\}_{i=1}^n\} \stackrel{d}{=} \{\tilde{W}_{n,1}(\tau)|\{A_i,S_i\}_{i=1}^n\}. \end{align*} Because both $W_{n,2}(\tau)$ and $W_{n,3}(\tau)$ are only functions of $\{A_i,S_i\}_{i=1}^n$, we have \begin{align*} (W_{n,1}(\tau),W_{n,2}(\tau),W_{n,3}(\tau)) \stackrel{d}{=} (\tilde{W}_{n,1}(\tau),W_{n,2}(\tau),W_{n,3}(\tau)). \end{align*} Let \begin{align*} W_{n,1}^\star(\tau) = e_1 \sum_{s \in \mathcal{S}} \sum_{i=\lfloor nF(s) \rfloor + 1}^{\lfloor n(F(s)+\pi p(s)) \rfloor} \frac{1}{\sqrt{n}}\tilde{\eta}_{i,1}(s,\tau) + e_0\sum_{s \in \mathcal{S}} \sum_{i=\lfloor n(F(s)+\pi p(s)) \rfloor+1}^{\lfloor n(F(s)+ p(s)) \rfloor} \frac{1}{\sqrt{n}}\tilde{\eta}_{i,0}(s,\tau). \end{align*} Note that $W_{n,1}^\star(\tau)$ is a function of $(Y_i^s(1),Y_i^s(0))_{i \geq 1}$ only, which is independent of $\{A_i,S_i\}_{i=1}^n$ by construction. Therefore, $W_{n,1}^\star(\tau) \perp\!\!\!\perp (W_{n,2}(\tau),W_{n,3}(\tau))$. Furthermore, note that \begin{align*} \frac{N(s)}{n} \convP F(s), \quad \frac{n_1(s)}{n} \convP \pi p(s), \quad \text{and} \quad \frac{n(s)}{n} \convP p(s). \end{align*} Denote $\Gamma_{n,j}(s,t,\tau) = \sum_{i =1}^{\lfloor nt \rfloor}\frac{1}{\sqrt{n}}\tilde{\eta}_{i,j}(s,\tau)$. 
In order to show $\sup_{\tau \in \Upsilon}|\tilde{W}_{n,1}(\tau) - W^\star_{n,1}(\tau)| = o_p(1)$ and $ W^\star_{n,1}(\tau) \rightsquigarrow \mathcal{B}_1(\tau)$, it suffices to show that, (1) for $j = 0,1$ and $s \in \mathcal{S}$, the stochastic processes \begin{align*} \{\Gamma_{n,j}(s,t,\tau): t\in (0,1), \tau \in \Upsilon \} \end{align*} in stochastically equicontinuous; and (2) $ W^\star_{n,1}(\tau)$ converges to $\mathcal{B}_1(\tau)$ in finite dimension. \textbf{Claim (1).} We want to bound \begin{align*} \sup|\Gamma_{n,j}(s,t_2,\tau_2) - \Gamma_{n,j}(s,t_1,\tau_1)|, \end{align*} where supremum is taken over $0 < t_1<t_2<t_1+\varepsilon < 1$ and $\tau_1<\tau_2<\tau_1+\varepsilon$ such that $\tau_1,\tau_1+\varepsilon \in \Upsilon.$ Note that, \begin{align} \label{eq:GW} & \sup|\Gamma_{n,j}(s,t_2,\tau_2) - \Gamma_{n,j}(s,t_1,\tau_1)| \notag \\ \leq & \sup_{0 < t_1<t_2<t_1+\varepsilon < 1,\tau \in \Upsilon}|\Gamma_{n,j}(s,t_2,\tau) - \Gamma_{n,j}(s,t_1,\tau)| + \sup_{t \in (0,1),\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\varepsilon}|\Gamma_{n,j}(s,t,\tau_2) - \Gamma_{n,j}(s,t,\tau_1)|. \end{align} Let $m = \lfloor nt_2 \rfloor - \lfloor nt_1 \rfloor \leq \lfloor n\varepsilon \rfloor+1$. Then, for an arbitrary $\delta>0$, by taking $\varepsilon = \delta^4$, we have \begin{align*} & \mathbb{P}(\sup_{0 < t_1<t_2<t_1+\varepsilon < 1,\tau \in \Upsilon}|\Gamma_{n,j}(s,t_2,\tau) - \Gamma_{n,j}(s,t_1,\tau)| \geq \delta) \\ = & \mathbb{P}(\sup_{0 < t_1<t_2<t_1+\varepsilon < 1,\tau \in \Upsilon} |\sum_{i=\lfloor nt_1 \rfloor+1}^{i=\lfloor nt_2 \rfloor}\tilde{\eta}_{i,j}(s,\tau)| \geq \sqrt{n}\delta) \\ = & \mathbb{P}(\sup_{0 < t \leq \varepsilon,\tau \in \Upsilon} |\sum_{i=1}^{\lfloor nt \rfloor}\tilde{\eta}_{i,j}(s,\tau)| \geq \sqrt{n}\delta) \\ \leq & \mathbb{P}(\max_{1 \leq k \leq \lfloor n\varepsilon \rfloor}\sup_{\tau \in \Upsilon} |S_k(\tau)| \geq \sqrt{n}\delta) \\ \leq & \frac{270\mathbb{E}\sup_{\tau \in \Upsilon} |\sum_{i=1}^{\lfloor n\varepsilon \rfloor}\tilde{\eta}_{i,j}(s,\tau)|}{\sqrt{n}\delta} \\ \lesssim & \frac{\sqrt{n\varepsilon}}{\sqrt{n} \delta} \lesssim \delta, \end{align*} where in the first inequality, $S_k(\tau) = \sum_{i=1}^k\tilde{\eta}_{i,j}(s,\tau)$ and the second inequality holds due to the same argument in \eqref{eq:Gamma}. For the third inequality, denote \begin{align*} \mathcal{F} = \{\tilde{\eta}_{i,j}(s,\tau): \tau \in \Upsilon \} \end{align*} with an envelope function $F = 2$. In addition, because $\mathcal{F}$ is a VC-class with a fixed VC-index, we have \begin{align*} J(1,\mathcal{F}) < \infty, \end{align*} where \begin{align*} J(\delta,\mathcal{F}) = \sup_Q \int_0^\delta \sqrt{1 + \log N(\varepsilon||F||_{Q,2},\mathcal{F},L_2(Q))}d\varepsilon, \end{align*} $N(\varepsilon||F||_{Q,2},\mathcal{F},L_2(Q))$ is the covering number, and the supremum is taken over all discrete probability measures $Q$. Therefore, by \citet[Theorem 2.14.1]{VW96} \begin{align*} \frac{270\mathbb{E}\sup_{\tau \in \Upsilon} |\sum_{i=1}^{\lfloor n\varepsilon \rfloor}\tilde{\eta}_{i,j}(s,\tau)|}{\sqrt{n}\delta} \lesssim \frac{\sqrt{\lfloor n\varepsilon \rfloor}\left[ \mathbb{E}\sqrt{\lfloor n\varepsilon \rfloor}||\mathbb{P}_{\lfloor n\varepsilon \rfloor} - \mathbb{P}||_{\mathcal{F}}\right]}{\sqrt{n}\delta} \lesssim \frac{\sqrt{\lfloor n\varepsilon \rfloor}J(1,\mathcal{F})}{\sqrt{n}\delta}. 
\end{align*} For the second term on the RHS of \eqref{eq:GW}, by taking $\varepsilon = \delta^4$, we have \begin{align*} & \mathbb{P}(\sup_{t \in (0,1),\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\varepsilon}|\Gamma_{n,j}(s,t,\tau_2) - \Gamma_{n,j}(s,t,\tau_1)| \geq \delta) \\ = & \mathbb{P}(\max_{1 \leq k \leq n}\sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\varepsilon}|S_k(\tau_1,\tau_2)| \geq \sqrt{n}\delta) \\ \leq & \frac{270\mathbb{E}\sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\varepsilon} |\sum_{i=1}^{n}(\tilde{\eta}_{i,j}(s,\tau_2) - \tilde{\eta}_{i,j}(s,\tau_1))|}{\sqrt{n}\delta} \lesssim \delta \sqrt{\log(\frac{C}{\delta^2})}, \end{align*} where in the first equality, $S_k(\tau_1,\tau_2) = \sum_{i=1}^k (\tilde{\eta}_{i,j}(s,\tau_2) - \tilde{\eta}_{i,j}(s,\tau_1))$ and the first inequality follows the same argument as in \eqref{eq:Gamma}. For the last inequality, denote \begin{align*} \mathcal{F} = \{\tilde{\eta}_{i,j}(s,\tau_2) - \tilde{\eta}_{i,j}(s,\tau_1): \tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\varepsilon\} \end{align*} with a constant envelope function $F = C$ and \begin{align*} \sigma^2 = \sup_{f \in \mathcal{F}}\mathbb{E}f^2 \in [c_1\varepsilon,c_2\varepsilon], \end{align*} for some constant $0<c_1<c_2<\infty$. Last, $\mathcal{F}$ is nested by some VC class with a fixed VC index. Therefore, by \citet[Corollary 5.1]{CCK14}, \begin{align*} & \frac{270\mathbb{E}\sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\varepsilon} |\sum_{i=1}^{n}(\tilde{\eta}_{i,j}(s,\tau_2) - \tilde{\eta}_{i,j}(s,\tau_1))|}{\sqrt{n}\delta} \\ \lesssim & \frac{\sqrt{n}\mathbb{E}||\mathbb{P}_n - \mathbb{P}||_\mathcal{F}}{\delta} \lesssim \sqrt{\frac{\sigma^2 \log(\frac{C}{\sigma})}{ \delta^2}} + \frac{C\log(\frac{C}{\sigma})}{\sqrt{n}\delta} \lesssim \delta \sqrt{\log(\frac{C}{\delta^2})}, \end{align*} where the last inequality holds by letting $n$ be sufficiently large. Note that $\delta \sqrt{\log(\frac{C}{\delta^2})} \rightarrow 0$ as $\delta \rightarrow 0$. This concludes the proof of Claim (1). \textbf{Claim (2).} For a single $\tau$, by the triangular CLT, \begin{align*} W_{n,1}^\star(\tau) \rightsquigarrow N(0,\Sigma_1(\tau)), \end{align*} where $\Sigma_1(\tau) = \pi[\tau(1-\tau) - \mathbb{E}m^2_1(S,\tau)] e_1 e_1^\prime + (1-\pi)[\tau(1-\tau) -\mathbb{E}m^2_0(S,\tau)]e_0e_0^\prime.$ The convergence in finite dimension can be proved by using the Cram\'{e}r-Wold device. In particular, we can show that the covariance kernel is \begin{align*} \Sigma_1(\tau_1,\tau_2) = & \pi[\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E} m_1(S,\tau_1)m_1(S,\tau_2)] e_1 e_1^\prime\\ & + (1-\pi)[\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)]e_0e_0^\prime. \end{align*} This concludes the proof of Claim (2), and thus leads to the desired results in Step 1. \textbf{Step 2.} We first consider the marginal distributions for $W_{n,2}(\tau)$ and $W_{n,3}(\tau)$. 
For $W_{n,2}(\tau)$, by Assumption \ref{ass:assignment1} and the fact that $m_j(s,\tau)$ is continuous in $\tau \in \Upsilon$ $j=0,1$, we have, conditionally on $\{S_i\}_{i=1}^n$, \begin{align} \label{eq:B2} W_{n,2}(\tau) = \sum_{s \in \mathcal{S}}\frac{D_n(s)}{\sqrt{n}}[ e_1 m_1(s,\tau) - e_0m_0(s,\tau)] \rightsquigarrow \mathcal{B}_2(\tau), \end{align} where $\mathcal{B}_2(\tau)$ is a two-dimensional Gaussian process with covariance kernel \begin{align*} & \Sigma_2(\tau_1,\tau_2) \\ = &\sum_{s \in \mathcal{S}}p(s)\gamma(s)\biggl[ e_1 e_1^\prime m_1(s,\tau_1)m_1(s,\tau_2) - e_1 e_0^\prime m_1(s,\tau_1)m_0(s,\tau_2) \\ &- e_0 e_1^\prime m_0(s,\tau_1)m_1(s,\tau_2)+e_0e_0^\prime m_0(s,\tau_1)m_0(s,\tau_2) \biggr]. \end{align*} For $W_{n,3}(\tau)$, by the fact that $m_j(s,\tau)$ is continuous in $\tau \in \Upsilon$ $j=0,1$, we have that, uniformly over $\tau \in \Upsilon$, \begin{align} \label{eq:B3} W_{n,3}(\tau) = \frac{1}{\sqrt{n}}\sum_{i=1}^n[ e_1 \pi m_1(S_i,\tau) + e_0 (1-\pi)m_0(S_i,\tau)] \rightsquigarrow \mathcal{B}_3(\tau), \end{align} where $\mathcal{B}_3(\tau)$ a two-dimensional Gaussian process with covariance kernel \begin{align*} \Sigma_3(\tau_1,\tau_2) = & e_1 e_1^\prime \pi^2 \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2) + e_1 e_0^\prime \pi(1-\pi) \mathbb{E}m_1(S,\tau_1)m_0(S,\tau_2) \\ &+ e_0 e_1^\prime \pi(1-\pi) \mathbb{E}m_0(S,\tau_1)m_1(S,\tau_2) + e_0 e_0^\prime (1-\pi)^2 \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2). \end{align*} In addition, we note that, for any fixed $\tau$, \begin{align*} \mathbb{P}(W_{n,2}(\tau) \leq w_1,W_{n,3}(\tau) \leq w_2) = & \mathbb{E}\mathbb{P}(W_{n,2}(\tau) \leq w_1|\{S_i\}_{i=1}^n)1\{W_{n,3}(\tau) \leq w_2 \} \\ = & \mathbb{E}\mathbb{P}(N(0,\Sigma_2(\tau,\tau))\leq w_1)1\{W_{n,3}(\tau) \leq w_2 \} + o(1) \\ = & \mathbb{P}(N(0,\Sigma_3(\tau,\tau))\leq w_2)\mathbb{P}(N(0,\Sigma_2(\tau,\tau))\leq w_1) + o(1). \end{align*} This implies $\mathcal{B}_2(\tau) \perp\!\!\!\perp \mathcal{B}_3(\tau)$. By the Cram\'{e}r-Wold device, we can show that \begin{align*} (W_{n,2}(\tau),W_{n,3}(\tau)) \rightsquigarrow (\mathcal{B}_2(\tau),\mathcal{B}_3(\tau)) \end{align*} jointly in finite dimension, where by an abuse of notation, $\mathcal{B}_2(\tau)$ and $\mathcal{B}_3(\tau)$ have the same marginal distributions of those in \eqref{eq:B2} and \eqref{eq:B3}, respectively, and $\mathcal{B}_2(\tau) \perp\!\!\!\perp \mathcal{B}_3(\tau)$. Last, because both $W_{n,2}(\tau)$ and $W_{n,3}(\tau)$ are tight marginally, so be the joint process $(W_{n,2}(\tau),W_{n,3}(\tau))$. This concludes the proof of Step 2, and thus the whole lemma. \end{proof} \begin{lem} \label{lem:Qipw} Let $\mathcal{W}_{n,j}(\tau)$, $j = 1,2$ be defined as in \eqref{eq:wipw2}. If Assumptions in Theorem \ref{thm:ipw} hold, then uniformly over $\tau \in \Upsilon$, \begin{align*} (\mathcal{W}_{n,1}(\tau),\mathcal{W}_{n,2}(\tau)) \rightsquigarrow (\mathcal{B}_{ipw,1}(\tau),\mathcal{B}_{ipw,2}(\tau)), \end{align*} where $(\mathcal{B}_{ipw,1}(\tau),\mathcal{B}_{ipw,2}(\tau))$ are two independent two-dimensional Gaussian processes with covariance kernels $\Sigma_{ipw,1}(\tau_1,\tau_2)$ and $\Sigma_{ipw,2}(\tau_1,\tau_2)$, respectively. The expressions for $\Sigma_{ipw,1}(\tau_1,\tau_2)$ and $\Sigma_{ipw,2}(\tau_1,\tau_2)$ are derived in the proof below. \end{lem} \begin{proof} The proofs of weak convergence and the independence between $(\mathcal{B}_{ipw,1}(\tau),\mathcal{B}_{ipw,2}(\tau))$ are similar to that in Lemma \ref{lem:Q}, and thus, are omitted. Next, we focus on deriving the covariance kernels. 
First, similar to the argument in the proof of Lemma \ref{lem:Q}, \begin{align*} \mathcal{W}_{n,1}(\tau) \stackrel{d}{=}\sum_{s \in \mathcal{S}} \sum_{i=N(s)+1}^{N(s)+n_1(s)}\frac{1}{\sqrt{n}\pi f_1(q_1(\tau))}\tilde{\eta}_{i,1}(s,\tau)-\sum_{s \in \mathcal{S}} \sum_{i=N(s)+n_1(s)+1}^{N(s)+n(s)}\frac{1}{\sqrt{n}(1-\pi)f_0(q_0(\tau))}\tilde{\eta}_{i,0}(s,\tau). \end{align*} Because $(\tilde{\eta}_{i,1}(s,\tau),\tilde{\eta}_{i,0}(s,\tau))$ are independent across $i$, $n_1(s)/n \convP \pi p(s)$, and $(n(s) - n_1(s))/n \convP (1-\pi)p(s)$, we have \begin{align*} \Sigma_{ipw,1}(\tau_1,\tau_2) = \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2) }{\pi f_1(q_1(\tau_1))f_1(q_1(\tau_2))} + \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)}{(1-\pi)f_0(q_0(\tau_1))f_0(q_0(\tau_2)) }. \end{align*} Obviously, \begin{align*} \Sigma_{ipw,2}(\tau_1,\tau_2) = \mathbb{E}\left(\frac{m_1(S,\tau_1)}{f_1(q_1(\tau_1))} - \frac{m_0(S,\tau_1)}{f_0(q_0(\tau_1))}\right)\left(\frac{m_1(S,\tau_2)}{f_1(q_1(\tau_2))} - \frac{m_0(S,\tau_2)}{f_0(q_0(\tau_2))}\right). \end{align*} \end{proof} \begin{lem} \label{lem:Qwqr} If Assumptions \ref{ass:assignment1} and \ref{ass:tau} hold, then conditionally on data, the second element of $[Q(\tau)]^{-1}\sum_{i=1}^n \frac{\xi_{i}-1}{\sqrt{n}}\dot{A}_i\left(\tau - 1\{Y_i \leq \dot{A}_i'\beta(\tau)\}\right)$ weakly converges to $\tilde{\mathcal{B}}_{sqr}(\tau)$, where $\tilde{\mathcal{B}}_{sqr}(\tau)$ is a Gaussian process with covariance kernel $\tilde{\Sigma}_{sqr}(\cdot,\cdot)$ defined in Theorem \ref{thm:b}. \end{lem} \begin{proof} We denote the second element of $[Q(\tau)]^{-1}\sum_{i=1}^n \frac{\xi_{i}-1}{\sqrt{n}}\dot{A}_i\left(\tau - 1\{Y_i \leq \dot{A}_i'\beta(\tau)\}\right)$ as \begin{align*} \frac{1}{\sqrt{n}}\sum_{i=1}^n \sum_{s \in \mathcal{S}}(\xi_i - 1)\mathcal{J}_i(s,\tau), \end{align*} where \begin{align*} \mathcal{J}_i(s,\tau) = \mathcal{J}_{i,1}(s,\tau) + \mathcal{J}_{i,2}(s,\tau) + \mathcal{J}_{i,3}(s,\tau), \end{align*} \begin{align*} \mathcal{J}_{i,1}(s,\tau) = \frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}, \end{align*} \begin{align*} \mathcal{J}_{i,2}(s,\tau) = F_1(s,\tau) (A_i - \pi)1\{S_i = s\}, \end{align*} $$F_1(s,\tau) = \frac{m_1(s,\tau)}{\pi f_1(q_1(\tau))}+\frac{m_0(s,\tau)}{(1-\pi) f_0(q_0(\tau))},$$ and \begin{align*} \mathcal{J}_{i,3}(s,\tau) = \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)1\{S_i = s\}. \end{align*} In order to show the weak convergence, we only need to show (1) conditional stochastic equicontinuity and (2) conditional convergence of the finite-dimensional distributions. We divide the proof into two steps accordingly. \textbf{Step 1.} In order to show the conditional stochastic equicontinuity, it suffices to show that, for any $\varepsilon>0$, as $n \rightarrow \infty$ followed by $\delta \rightarrow 0$, \begin{align*} \mathbb{P}_\xi\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_i(s,\tau_2) - \mathcal{J}_i(s,\tau_1))\right| \geq \varepsilon\right) \convP 0, \end{align*} where $\mathbb{P}_\xi(\cdot)$ means that the probability operator is with respect to $\xi_1,\cdots,\xi_n$ and conditional on data.
Note \begin{align*} & \mathbb{E}\mathbb{P}_\xi\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_i(s,\tau_1) - \mathcal{J}_i(s,\tau_1))\right| \geq \varepsilon\right) \\ = & \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_i(s,\tau_2) - \mathcal{J}_i(s,\tau_1))\right| \geq \varepsilon\right)\\ \leq & \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,1}(s,\tau_2) - \mathcal{J}_{i,1}(s,\tau_1))\right| \geq \varepsilon/3\right) \\ & + \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,2}(s,\tau_2) - \mathcal{J}_{i,2}(s,\tau_1))\right| \geq \varepsilon/3\right)\\ & + \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,3}(s,\tau_2) - \mathcal{J}_{i,3}(s,\tau_1))\right| \geq \varepsilon/3\right). \end{align*} Further note that \begin{align*} \sum_{i =1}^n (\xi_i-1)\mathcal{J}_{i,1}(s,\tau) \stackrel{d}{=} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\frac{(\xi_i-1)\tilde{\eta}_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \sum_{i=n(s) +n_1(s)+ 1}^{N(s)+n(s)}\frac{(\xi_i-1)\tilde{\eta}_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))} \end{align*} By the same argument in Claim (1) in the proof of Lemma \ref{lem:Q}, we have \begin{align*} & \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,1}(s,\tau_2) - \mathcal{J}_i(s,\tau_1))\right| \geq \varepsilon/3\right) \\ \leq & \frac{3\mathbb{E}\sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,1}(s,\tau_2) - \mathcal{J}_{i,1}(s,\tau_1))\right|}{\varepsilon} \\ \leq & \frac{3\sqrt{c_2\delta\log(\frac{C}{c_1\delta})} + \frac{3C\log(\frac{C}{c_1\delta})}{\sqrt{n}}}{\varepsilon}, \end{align*} where $C$, $c_1< c_2$ are some positive constants that are independent of $(n,\varepsilon,\delta)$. By letting $n\rightarrow \infty$ followed by $\delta \rightarrow 0$, the RHS vanishes. For $\mathcal{J}_{i,2}$, we note that $F_1(s,\tau)$ is Lipschitz in $\tau$. Therefore, \begin{align*} & \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,2}(s,\tau_2) - \mathcal{J}_{i,2}(s,\tau_1))\right| \geq \varepsilon/3\right) \\ \leq & \sum_{s \in \mathcal{S}}\mathbb{P}\left(C\delta\left|\frac{1}{\sqrt{n}} \sum_{i =1}^n (\xi_i - 1)(A_i - \pi)1\{S_i = s\}\right| \geq \varepsilon/3 \right) \rightarrow 0 \end{align*} as $n \rightarrow \infty$ followed by $\delta \rightarrow 0$, where we use the fact that \begin{align*} \sup_{s \in \mathcal{S} }\left|\frac{1}{\sqrt{n}} \sum_{i =1}^n (\xi_i - 1)(A_i - \pi)1\{S_i = s\}\right| = O_p(1). \end{align*} To see this claim, we note that, conditionally on data, \begin{align*} \frac{1}{n}\sum_{i=1}^n(A_i - \pi)^21\{S_i = s\} = & \frac{1}{n}\sum_{i=1}^n\left(A_i - \pi - 2\pi(A_i - \pi) + \pi - \pi^2\right)1\{S_i=s\} \\ = & \frac{D_n(s) - 2\pi D_n(s)}{n} + \pi(1-\pi)\frac{n(s)}{n} \convP \pi (1-\pi)p(s). 
\end{align*} Then, by the Lindeberg CLT, conditionally on data, \begin{align*} \frac{1}{\sqrt{n}} \sum_{i =1}^n (\xi_i - 1)(A_i - \pi)1\{S_i = s\} \rightsquigarrow N(0,\pi(1-\pi)p(s)) = O_p(1). \end{align*} Last, by the standard maximal inequality (e.g., \citet[Theorem 2.14.1]{VW96}) and the fact that \begin{align*} \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right) \end{align*} is Lipschitz in $\tau$, we have, as $n \rightarrow \infty$ followed by $\delta \rightarrow 0$, \begin{align*} \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,3}(s,\tau_2) - \mathcal{J}_{i,3}(s,\tau_1))\right| \geq \varepsilon/3\right) \rightarrow 0. \end{align*} This concludes the proof of the conditionally stochastic equicontinuity. \textbf{Step 2.} We focus on the one-dimension case and aim to show that, conditionally on data, for fixed $\tau \in \Upsilon$, \begin{align*} \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n (\xi_i-1) \mathcal{J}_i(s,\tau) \rightsquigarrow \mathcal{N}(0,\tilde{\Sigma}_{sqr}(\tau,\tau)). \end{align*} The finite-dimensional convergence can be established similarly by the Cram\'{e}r-Wold device. In view of Lindeberg-Feller central limit theorem, we only need to show that (1) \begin{align*} \frac{1}{n}\sum_{i =1}^n[\sum_{s \in \mathcal{S}}\mathcal{J}_i(s,\tau)]^2 \convP \zeta_Y^2(\pi,\tau) + \tilde{\xi}_A^{2}(\pi,\tau) + \xi_S^2(\pi,\tau) \end{align*} and (2) \begin{align*} \frac{1}{n}\sum_{i =1}^n[\sum_{s \in \mathcal{S}}\mathcal{J}_i(s,\tau)]^2 \mathbb{E}_\xi(\xi-1)^21\{|\sum_{s \in \mathcal{S}}(\xi_i - 1)\mathcal{J}_i(s,\tau)| \geq \varepsilon \sqrt{n}\} \rightarrow 0. \end{align*} (2) is obvious as $|\mathcal{J}_i(s,\tau)|$ is bounded and $\max_i|\xi_i-1| \lesssim \log(n)$ as $\xi_i$ is sub-exponential. Next, we focus on (1). 
We have \begin{align*} & \frac{1}{n}\sum_{i =1}^n[\sum_{s \in \mathcal{S}}\mathcal{J}_i(s,\tau)]^2 \\ = & \frac{1}{n}\sum_{i=1}^n\sum_{s \in \mathcal{S}}\biggl\{\biggl[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\biggr] \\ & + F_1(s,\tau)(A_i -\pi)1\{S_i=s\} + \biggl[ \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)1\{S_i = s\}\biggr]\biggr\}^2 \\ \equiv & \sigma_1^2 + \sigma_2^2 + \sigma_3^2 + 2\sigma_{12} + 2\sigma_{13} + 2 \sigma_{23}, \end{align*} where \begin{align*} \sigma_1^2 = \frac{1}{n}\sum_{s \in \mathcal{S}}\sum_{i =1}^n \biggl[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\biggr]^2, \end{align*} \begin{align*} \sigma_2^2 = \frac{1}{n}\sum_{s \in \mathcal{S}}F^2_1(s,\tau)\sum_{i =1}^n (A_i-\pi)^21\{S_i = s\}, \end{align*} \begin{align*} \sigma_3^2 = \frac{1}{n}\sum_{i =1}^n \biggl[ \left(\frac{m_1(S_i,\tau)}{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau)}{f_0(q_0(\tau))}\right)\biggr]^2, \end{align*} \begin{align*} \sigma_{12} = \frac{1}{n}\sum_{i=1}^n\sum_{s \in \mathcal{S}}\biggl[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\biggr]F_1(s,\tau)(A_i - \pi)1\{S_i=s\}, \end{align*} \begin{align*} \sigma_{13} = \frac{1}{n}\sum_{i=1}^n\sum_{s \in \mathcal{S}}\biggl[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\biggr] \biggl[ \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)\biggr], \end{align*} and \begin{align*} \sigma_{23} = \frac{1}{n}\sum_{i=1}^n\sum_{s \in \mathcal{S}}F_1(s,\tau)(A_i - \pi)1\{S_i=s\}\biggl[ \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)\biggr]. \end{align*} For $\sigma_1^2$, we have \begin{align*} \sigma_1^2 = & \frac{1}{n}\sum_{s \in \mathcal{S}}\sum_{i =1}^n \biggl[\frac{A_i 1\{S_i = s\}\eta^2_{i,1}(s,\tau)}{\pi^2 f^2_1(q_1(\tau))} + \frac{(1-A_i)1\{S_i = s\}\eta^2_{i,0}(s,\tau)}{(1-\pi)^2 f^2_0(q_0(\tau))}\biggr] \\ \stackrel{d}{=} & \frac{1}{n}\sum_{s \in \mathcal{S}} \sum_{i=N(s)+1}^{N(s)+n_1(s)}\frac{\tilde{\eta}^2_{i,1}(s,\tau)}{\pi^2 f^2_1(q_1(\tau))} + \frac{1}{n}\sum_{s \in \mathcal{S}} \sum_{i=N(s)+n_1(s)+1}^{N(s)+n(s)}\frac{\tilde{\eta}^2_{i,0}(s,\tau)}{(1-\pi)^2 f^2_0(q_0(\tau))} \\ \convP & \frac{\tau(1-\tau) - \mathbb{E}m_1^2(S,\tau)}{\pi f_1^2(q_1(\tau))} + \frac{\tau(1-\tau) - \mathbb{E}m_0^2(S,\tau)}{(1-\pi) f_0^2(q_0(\tau))} = \zeta_Y^2(\pi,\tau), \end{align*} where the second equality holds due to the rearrangement argument in Lemma \ref{lem:Q} and the convergence in probability holds due to uniform convergence of the partial sum process. For $\sigma_2^2$, by Assumption \ref{ass:assignment1}, \begin{align*} \sigma_2^2 = \frac{1}{n}\sum_{s \in \mathcal{S}}F_1^2(s,\tau)(D_n(s) - 2\pi D_n(s) + \pi(1-\pi)n(s)) \convP \pi(1-\pi)\mathbb{E}F_1^2(S_i,\tau) = \tilde{\xi}_{A}^{2}(\pi,\tau). \end{align*} For $\sigma_3^2$, by the law of large numbers, \begin{align*} \sigma_3^2 \convP \mathbb{E}\biggl[ \left(\frac{m_1(S_i,\tau)}{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau)}{f_0(q_0(\tau))}\right)\biggr]^2 = \xi_S^2(\pi,\tau).
\end{align*} For $\sigma_{12}$, we have \begin{align*} \sigma_{12} = & \frac{1}{n}\sum_{s \in \mathcal{S}}(1-\pi)F_1(s,\tau)\sum_{i=1}^n\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{1}{n}\sum_{s \in \mathcal{S}}\pi F_1(s,\tau)\sum_{i=1}^{n}\frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))} \\ \stackrel{d}{=} & \frac{1}{n}\sum_{s \in \mathcal{S}}(1-\pi)F_1(s,\tau)\sum_{i=N(s)+1}^{N(s)+n_1(s)}\frac{\tilde{\eta}_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{1}{n}\sum_{s \in \mathcal{S}}\pi F_1(s,\tau)\sum_{i=N(s)+n_1(s)+1}^{N(s)+n(s)}\frac{\tilde{\eta}_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))} \convP 0, \end{align*} where the last convergence holds because by Lemma \ref{lem:Q}, \begin{align*} \frac{1}{n}\sum_{i=N(s)+1}^{N(s)+n_1(s)}\tilde{\eta}_{i,1}(s,\tau) \convP 0, \quad \text{and} \quad \frac{1}{n}\sum_{i=N(s)+n_1(s)+1}^{N(s)+n(s)}\tilde{\eta}_{i,0}(s,\tau) \convP 0. \end{align*} By the same argument, we can show that \begin{align*} \sigma_{13} \convP 0. \end{align*} Last, for $\sigma_{23}$, by Assumption \ref{ass:assignment1}, \begin{align*} \sigma_{23} = \sum_{s \in \mathcal{S}}F_1(s,\tau)\biggl[ \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)\biggr]\frac{D_n(s)}{n} \convP 0. \end{align*} Therefore, conditionally on data, \begin{align*} \frac{1}{n}\sum_{i =1}^n[\sum_{s \in \mathcal{S}}\mathcal{J}_i(s,\tau)]^2 \convP \zeta_Y^2(\pi,\tau) + \tilde{\xi}_A^{2}(\pi,\tau) + \xi_S^2(\pi,\tau). \end{align*} \end{proof} \begin{lem} \label{lem:Rsfestar} If Assumptions \ref{ass:assignment1}.1 and \ref{ass:assignment1}.2 hold, $\sup_{ s \in \mathcal{S}}\frac{|D_n^*(s)|}{\sqrt{n^*(s)}} = O_p(1)$, $\sup_{ s \in \mathcal{S}}\frac{|D_n(s)|}{\sqrt{n(s)}} = O_p(1)$, and $n(s) \rightarrow \infty$ for all $s \in \mathcal{S}$, a.s., then there exists a sequence of Poisson(1) random variables $\{\xi_i^s\}_{i \geq 1,s\in \mathcal{S}}$ independent of $\{A_i^*,S_i^*,Y_i,A_i,S_i\}_{i\geq 1}$ such that \begin{align*} \sum_{i =1}^nA_i^*1\{S_i^*=s\}\eta_{i,1}^*(s,\tau) = \sum_{i = N(s)+1}^{N(s)+n_1(s)}\xi_i^s\tilde{\eta}_{i,1}(s,\tau) + R^*_1(s,\tau), \end{align*} where $\sup_{\tau \in \Upsilon, s \in \mathcal{S}}|R^*_1(s,\tau)/\sqrt{n(s)}| = o_p(1).$ In addition, \begin{align} \label{eq:targetF9} \sup_{s \in \mathcal{S},\tau \in \Upsilon}|\sum_{i =1}^nA_i^*1\{S_i^*=s\}\eta_{i,1}^*(s,\tau)|/\sqrt{n(s)} = O_p(1). \end{align} \end{lem} \begin{proof} Recall $\{Y^s_i(0),Y^s_i(1)\}_{i=1}^n $ as defined in the proof of Theorem \ref{thm:qr} and $$\tilde{\eta}_{i,j}(s,\tau) = \tau - 1\{Y^s_i(j) \leq q_j(\tau)\} - m_j(s,\tau),$$ $j = 0,1$. In addition, let $\Psi_n = \{\eta_{i,1}(s,\tau)\}_{i=1}^n$, $$\mathbb{N}_n = \{n(s)/n,n_1(s)/n, n^*(s)/n,n_1^*(s)/n\}_{s \in \mathcal{S}}$$ and given $\mathbb{N}_n$, $\{M_{ni}\}_{i=1}^n$ be a sequence of random variables such that the $n_1(s) \times 1$ vector $$M_n^1(s) = (M_{n,N(s)+1}, \cdots,M_{n,N(s)+n_1(s)})$$ and the $(n(s) - n_1(s)) \times 1$ vector $$M_n^0(s) = (M_{n,N(s)+n_1(s)+1}, \cdots,M_{n,N(s)+n(s)})$$ satisfy: \begin{enumerate} \item $M_n^1(s) = \sum_{i=1}^{n_1^*(s)}m_i$ and $M_n^0(s) = \sum_{i=1}^{n^*(s) - n_1^*(s)}m_i'$, where $\{m_i\}_{i=1}^{n_1^*(s)}$ and $\{m_i'\}_{i=1}^{n^*(s) - n_1^*(s)}$ are $n_1^*(s)$ i.i.d. multinomial$(1,n_1^{-1}(s),\cdots,n_1^{-1}(s))$ random vectors and $n^*(s) - n_1^*(s)$ i.i.d. 
multinomial $(1,(n(s) -n_1(s))^{-1},\cdots,(n(s) -n_1(s))^{-1})$ random vectors, respectively; \item $M_n^0(s) \perp\!\!\!\perp M_n^1(s)|\mathbb{N}_n;$ and \item $\{M_n^0(s),M_n^1(s)\}_{s \in \mathcal{S}}$ are independent across $s$ given $\mathbb{N}_n$ and are independent of $\Psi_n$. \end{enumerate} Recall that, by \cite{BCS17}, the original observations can be rearranged according to $s \in \mathcal{S}$ and then within strata, treatment group first and then the control group. Then, given $\mathbb{N}_n$, Step 3 in Section \ref{sec:CABI} implies that the bootstrap observations $\{Y_i^*\}_{i=1}^n$ can be generated by drawing with replacement from the empirical distribution of the outcomes in each $(s,a)$ cell for $(s,a) \in \mathcal{S} \times \{0,1\}$, $n^*_{a}(s)$ times, $a = 0,1$, where $n_0^*(s) = n^*(s) - n_1^*(s)$. Therefore, \begin{align} \label{eq:Astar} \sum_{i =1}^n A_i^*1\{S_i^*=s\}\eta^*_{i,1}(s,\tau) = \sum_{i=N(s)+1}^{N(s)+n_1(s)} M_{ni} \tilde{\eta}_{i,1}(s,\tau). \end{align} Following the standard approach in dealing with the nonparametric bootstrap, we want to approximate $$M_{ni}, i=N(s)+1,\cdots,N(s)+n_1(s)$$ by a sequence of i.i.d. Poisson(1) random variables. We construct this sequence as follows. Let $\widetilde{M}_n^1(s) =\sum_{i=1}^{\widetilde{N}(n_1(s))}m_i$, where $\widetilde{N}(k)$ is a Poisson number with mean $k$ and is independent of $\mathbb{N}_n$. The $n_1(s)$ elements of vector $\widetilde{M}_n^1(s)$ is denoted as $\{\widetilde{M}_{ni}\}_{i=N(s)+1}^{N(s)+n_1(s)}$, which is a sequence of i.i.d. Poisson(1) random variables, given $\mathbb{N}_n$. Therefore, \begin{align*} \{\widetilde{M}_{ni}, i=N(s)+1,\cdots,N(s)+n_1(s) |\mathbb{N}_n\} \equiv \{\xi_i^s, i=N(s)+1,\cdots,N(s)+n_1(s)|\mathbb{N}_n\} \end{align*} where $\{\xi^s_i\}_{i=1}^n$, $s \in \mathcal{S}$ are i.i.d. sequences of Poisson(1) random variables such that $\{\xi^s_i\}_{i=1}^n$ are independent across $s \in \mathcal{S}$ and against $\mathbb{N}_n$. Following the argument in \citet[Section 3.6]{VW96}, given $n_1(s)$, $n_1^*(s)$, and $\widetilde{N}(n_1(s)) = k$, $|\xi_i^s - M_{ni}|$ is binomially $(|k-n_1^*(s)|,n_1(s)^{-1})$-distributed. 
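The coupling between the multinomial counts $M_{ni}$ and the Poissonized counts $\xi_i^s$ can be visualized with a short simulation. The sketch below is illustrative only: the cell size and the simplification $n_1^*(s) = n_1(s)$ are hypothetical choices, and each multinomial draw $m_i$ is represented by a uniformly chosen cell index. Both count vectors are built from the same stream of draws, so they differ only through the $|\widetilde{N}(n_1(s)) - n_1^*(s)|$ extra (or missing) draws.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the multinomial/Poisson coupling (hypothetical sizes).
rng = np.random.default_rng(2)
n1 = 10_000                  # n_1(s): treated observations in stratum s
n1_star = n1                 # n_1^*(s): bootstrap draws for this cell (set equal here)

N_tilde = rng.poisson(n1)                                 # Poissonized number of draws
draws = rng.integers(0, n1, size=max(N_tilde, n1_star))   # shared multinomial draws m_i

M = np.bincount(draws[:n1_star], minlength=n1)    # M_{ni}: multinomial(n1_star) counts
xi = np.bincount(draws[:N_tilde], minlength=n1)   # xi_i^s: i.i.d. Poisson(1) counts

print("|N_tilde - n1_star|   :", abs(int(N_tilde) - n1_star))   # O(sqrt(n1))
print("max_i |xi_i - M_{ni}| :", int(np.abs(xi - M).max()))     # typically <= 2
print("# units with xi != M  :", int(np.sum(xi != M)))          # also O(sqrt(n1))
\end{verbatim}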
In addition, there exists a sequence $\ell_n = O(\sqrt{n(s)})$ such that \begin{align*} \mathbb{P}(|\widetilde{N}(n_1(s)) - n_1^*(s)| \geq \ell_n) \leq & \mathbb{P}(|\widetilde{N}(n_1(s)) - n_1(s)| \geq \ell_n/3) + \mathbb{P}(|n_1^*(s) - n_1(s)| \geq 2\ell_n/3) \\ \leq & \mathbb{E}\mathbb{P}(|N(n_1(s)) - n_1(s)| \geq \ell_n/3|n_1(s)) + \mathbb{P}(|n_1^*(s) - n_1(s)| \geq 2\ell_n/3) \\ \leq & \varepsilon/3 + \mathbb{P}(|n_1^*(s) - n_1(s)| \geq 2\ell_n/3) \\ \leq & \varepsilon/3 + \mathbb{P}(|D_n^*(s)| + |D_n(s)| + \pi|n^*(s) - n(s)| \geq 2\ell_n/3) \\ \leq & 2\varepsilon/3 + \mathbb{P}(\pi|n^*(s) - n(s)| \geq \ell_n/3) \\ \leq & \varepsilon, \end{align*} where the first inequality holds due to the union bound inequality, the second inequality holds by the law of iterated expectation, the third inequality holds because (1) conditionally on data, $\widetilde{N}(n_1(s)) - n_1(s) = O_p(\sqrt{n_1(s)})$ and (2) $n_1(s)/n(s) = \pi+ \frac{D_n(s)}{n(s)} \rightarrow \pi>0$ as $n(s) \rightarrow \infty$ , the fourth inequality holds by the fact that $$n_1^*(s) - n_1(s) = D_n^*(s) - D_n(s) + \pi(n^*(s) - n(s)),$$ the fifth inequality holds because by Assumptions \ref{ass:assignment1} and \ref{ass:bassignment}, $|D_n^*(s)| + |D_n(s)| = O_p(\sqrt{n(s)})$, and the sixth inequality holds because $\{S_i^*\}_{i=1}^n$ is generated from $\{S_i\}_{i=1}^n$ by the standard bootstrap procedure, and thus, by \citet[Theorem 3.6.1]{VW96}, \begin{align*} n^*(s) - n(s) = \sum_{i=1}^n(M^w_{ni}-1)(1\{S_i = s\} - p(s)) = O_p(\sqrt{n(s)}), \end{align*} where $(M^w_{n1},\cdots,M^w_{nn})$ is independent of $\{S_i\}_{i=1}^n$ and multinomially distributed with parameters $n$ and (probabilities) $1/n,\cdots,1/n$. Therefore, by direct calculation, as $n \rightarrow \infty$, \begin{align*} & \mathbb{P}(\max_{N(s)+1 \leq i \leq N(s)+n_1(s)}|\xi_{i}^s - M_{ni}|>2) \notag \\ \leq & \mathbb{P}(\max_{N(s)+1 \leq i \leq N(s)+n_1(s)}|\xi_{i}^s - M_{ni}|>2, n_1(s) \geq n(s)\varepsilon) + \mathbb{P}(n_1(s) \leq n(s)\varepsilon) \notag \\ \leq & \varepsilon + \mathbb{E}\sum_{i=N(s)+1}^{N(s)+n_1(s)}\mathbb{P}(|\xi_{i}^s - M_{ni}|>2,|N(n_1(s)) - n_1^*(s)| \leq \ell_n, n_1(s) \geq n(s) \varepsilon|n_1(s),n_1^*(s),n(s)) + \varepsilon \notag \\ \leq & 2\varepsilon + \mathbb{E}n_1(s)\mathbb{P}(\text{bin}(\ell_n,n^{-1}_1(s))>2|n_1(s),n_1^*(s),n(s))1\{n_1(s) \geq n(s) \varepsilon\} \rightarrow 2\varepsilon, \end{align*} where we use the fact that \begin{align*} & n_1(s)\mathbb{P}(\text{bin}(\ell_n,n^{-1}_1(s))>2|n_1(s),n_1^*(s),n(s))1\{n_1(s) \geq n(s) \varepsilon\} \\ \lesssim & n_1(s)\left(\frac{\ell_n}{n(s)}\right)^3\left(\frac{n(s)}{n_1(s)}\right)^31\{n_1(s) \geq n(s) \varepsilon\} \lesssim \frac{1}{\sqrt{n(s)}\varepsilon^3} \rightarrow 0. \end{align*} Because $\varepsilon$ is arbitrary, we have \begin{align} \label{eq:F92} \mathbb{P}\left(\max_{N(s)+1 \leq i \leq N(s)+n_1(s)}|\xi_{i}^s - M_{ni}|>2\right) \rightarrow 0. \end{align} Note that $|\xi_{i}^s - M_{ni}| = \sum_{j=1}^\infty 1\{|\xi_i^s - M_{ni}|\geq j\}$. Let $I_n^j(s)$ be the set of indexes $i \in \{N(s)+1,\cdots,N(s)+n_1(s)\}$ such that $|\xi_i^s - M_{ni}|\geq j$. Then, $\xi_i^s - M_{ni} = \text{sign}(\widetilde{N}(n_1(s))-n^*_1(s))\sum_{j=1}^\infty 1\{i \in I_n^j(s)\}$. Thus, \begin{align} \label{eq:boot} \frac{1}{\sqrt{n(s)}}\sum_{i=N(s) + 1}^{N(s)+n_1(s)}(\xi_i^s-M_{ni})\tilde{\eta}_{i,1}(s,\tau) = \text{sign}(\widetilde{N}(n_1(s))-n^*_1(s))\sum_{j=1}^\infty \left[ \frac{\#I_n^j(s)}{\sqrt{n(s)}}\frac{1}{\#I_n^j(s)}\sum_{i \in I_n^j(s)} \tilde{\eta}_{i,1}(s,\tau)\right]. 
\end{align} In the following, we aim to show that the RHS of \eqref{eq:boot} converges to zero in probability uniformly over $s \in \mathcal{S},\tau \in \Upsilon$. First, note that, by \eqref{eq:F92}, $\max_{N(s)+1 \leq i \leq N(s)+n_1(s)}|\xi_i^s - M_{ni}| \leq 2$ occurs with probability approaching one. On the event that $\max_{N(s)+1 \leq i \leq N(s)+n_1(s)}|\xi_i^s - M_{ni}| \leq 2$, only the first two terms of the summation over $j$ on the RHS of \eqref{eq:boot} can be nonzero. In addition, for any $j$, we have $j (\#I_n^j(s)) \leq |\widetilde{N}(n_1(s)) - n_1^*(s)| = O_p(\sqrt{n(s)})$, and thus, $\frac{\#I_n^j(s)}{\sqrt{n(s)}} = O_p(1)$ for $j = 1,2$. Therefore, it suffices to show that, for $j = 1,2$, \begin{align*} \sup_{s \in \mathcal{S},\tau \in \Upsilon}\left|\frac{1}{\#I_n^j(s)}\sum_{i \in I_n^j(s)} \tilde{\eta}_{i,1}(s,\tau)\right| = o_p(1). \end{align*} Note that \begin{align} \label{eq:omega} \frac{1}{\#I_n^j(s)}\sum_{i \in I_n^j(s)} \tilde{\eta}_{i,1}(s,\tau) = \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\omega_{ni}\tilde{\eta}_{i,1}(s,\tau), \end{align} where $\omega_{ni} = \frac{1\{|\xi_i^s - M_{ni}|\geq j\}}{\#I_n^j(s)}$, $i=N(s) + 1,\cdots,N(s)+n_1(s)$ and by construction, $\{\omega_{ni}\}_{i=N(s) + 1}^{N(s)+n_1(s)}$ is independent of $\{\eta_{i,1}(s,\tau)\}_{i=1}^{n}$. In addition, because $\{\omega_{ni}\}_{i=N(s) + 1}^{N(s)+n_1(s)}$ is exchangeable conditional on $\mathbb{N}_n$, it is also exchangeable unconditionally. Third, $\sum_{i=N(s) + 1}^{N(s)+n_1(s)}\omega_{ni} = 1$ and $\max_{i=N(s) + 1,\cdots,N(s)+n_1(s)}|\omega_{ni}| \leq 1/\#I_n^j(s) \convP 0$. Then, by the same argument as in the proof of \citet[Lemma 3.6.16]{VW96}, for some $r \in (0,1)$ and any $n_0 = N(s)+1,\cdots,N(s)+n_1(s)$, we have \begin{align} \label{eq:3617} & \mathbb{E}\left(\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\sum_{i=N(s) + 1}^{N(s)+n_1(s)}\omega_{ni}\tilde{\eta}_{i,1}(s,\tau)\right|^r|\Psi_n,\mathbb{N}_n\right) \notag \\ \leq & (n_0 -1)\mathbb{E}\left[\max_{N(s)+n_0 \leq i \leq N(s)+n_1(s)}\omega_{ni}^r|\mathbb{N}_n\right] \left[\frac{1}{n_1(s)}\sum_{i=N(s) + 1}^{N(s)+n_1(s)}\sup_{\tau \in \Upsilon, s\in \mathcal{S}}|\tilde{\eta}_{i,1}(s,\tau)|^r\right] \notag \\ & + (n_1(s)\mathbb{E}(\omega_{ni}|\mathbb{N}_n))^r\max_{n_0 \leq k \leq n_1(s)}\mathbb{E}\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=N(s)+n_0}^{N(s)+k} \tilde{\eta}_{R_j(N(s),n_1(s)),1}(s,\tau)\right|^r|\mathbb{N}_n,\Psi_n\right], \end{align} where $(R_{k_1+1}(k_1,k_2),\cdots,R_{k_1+k_2}(k_1,k_2))$ is uniformly distributed on the set of all permutations of $k_1+1,\cdots,k_1+k_2$ and independent of $\mathbb{N}_n$ and $\Psi_n$. First note that $\sup_{s \in \mathcal{S},\tau \in \Upsilon}|\eta_{i,1}(s,\tau)|$ is bounded and $$\max_{N(s)+1 \leq i \leq N(s)+n_1(s)}\omega_{ni}^r \leq 1/(\#I_n^j(s))^r \convP 0.$$ Therefore, the first term on the RHS of \eqref{eq:3617} converges to zero in probability for every fixed $n_0$. For the second term, because $\omega_{ni}|\mathbb{N}_n$ is exchangeable, \begin{align*} n_1(s)\mathbb{E}(\omega_{ni}|\mathbb{N}_n) = \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\mathbb{E}(\omega_{ni}|\mathbb{N}_n) = 1. \end{align*} In addition, let $\mathbb{S}_n(k_1,k_2)$ be the $\sigma$-field generated by all functions of $\{\tilde{\eta}_{i,1}(s,\tau)\}_{i\geq 1}$ that are symmetric in their $k_1+1$ to $k_1+k_2$ arguments.
Then, \begin{align*} & \max_{n_0 \leq k \leq n_1(s)}\mathbb{E}\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=N(s)+n_0}^{N(s)+k} \tilde{\eta}_{R_j(N(s),n_1(s)),1}(s,\tau)\right|^r|\mathbb{N}_n, \Psi_n\right] \\ = & \max_{n_0 \leq k \leq n_1(s)}\mathbb{E}\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=N(s)+n_0}^{N(s)+k} \tilde{\eta}_{j,1}(s,\tau)\right|^r|\mathbb{N}_n,\mathbb{S}_n(N(s),n_1(s))\right] \\ \leq & 2\mathbb{E}\left\{\max_{n_0 \leq k }\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=N(s)+1}^{N(s)+k} \tilde{\eta}_{j,1}(s,\tau)\right|^r\right]|\mathbb{N}_n,\mathbb{S}_n(N(s),n_1(s))\right\} \\ = & 2\mathbb{E}\left\{\max_{n_0 \leq k }\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^{k} \tilde{\eta}_{j,1}(s,\tau)\right|^r\right]|\mathbb{N}_n,\mathbb{S}_n(0,n_1(s))\right\}, \end{align*} where the inequality holds by Jensen's inequality and the triangle inequality, and the last equality holds because $\{\tilde{\eta}_{j,1}(s,\tau)\}_{j\geq 1}$ is an i.i.d. sequence. Taking expectations on both sides, we obtain that \begin{align} \label{eq:kk} & \mathbb{E}\max_{n_0 \leq k \leq n_1(s)}\mathbb{E}\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=N(s)+n_0}^{N(s)+k} \tilde{\eta}_{R_j(N(s),n_1(s)),1}(s,\tau)\right|^r|\mathbb{N}_n, \Psi_n\right] \notag \\ \leq & 2\mathbb{E}\max_{n_0 \leq k \leq n}\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^{k} \tilde{\eta}_{j,1}(s,\tau)\right|^r\right]. \end{align} By the usual maximal inequality, as $k \rightarrow \infty$, \begin{align*} \sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^{k} \tilde{\eta}_{j,1}(s,\tau)\right| \stackrel{a.s.}{\longrightarrow} 0, \end{align*} which implies that, as $n_0 \rightarrow \infty$, \begin{align*} \max_{n_0 \leq k \leq n}\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^{k} \tilde{\eta}_{j,1}(s,\tau)\right|^r\right] \leq \max_{n_0 \leq k }\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^{k} \tilde{\eta}_{j,1}(s,\tau)\right|^r\right] \stackrel{a.s.}{\longrightarrow} 0. \end{align*} In addition, $\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^{k} \tilde{\eta}_{j,1}(s,\tau)\right|$ is bounded. Then, by the bounded convergence theorem, we have, as $n_0 \rightarrow \infty$, \begin{align*} \mathbb{E}\max_{n_0 \leq k \leq n}\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^{k} \tilde{\eta}_{j,1}(s,\tau)\right|^r\right] \rightarrow 0, \end{align*} which implies that \begin{align*} \mathbb{E}\max_{n_0 \leq k \leq n_1(s)}\mathbb{E}\left[\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=N(s)+n_0}^{N(s)+k} \tilde{\eta}_{R_j(N(s),n_1(s)),1}(s,\tau)\right|^r|\mathbb{N}_n, \Psi_n\right] \convP 0. \end{align*} Therefore, the second term on the RHS of \eqref{eq:3617} converges to zero in probability as $n_0 \rightarrow \infty$. Then, as $n \rightarrow \infty$ followed by $n_0 \rightarrow \infty$, \begin{align*} \mathbb{E}\left(\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\sum_{i=N(s) + 1}^{N(s)+n_1(s)}\omega_{ni}\tilde{\eta}_{i,1}(s,\tau)\right|^r|\Psi_n,\mathbb{N}_n\right) \convP 0. \end{align*} Hence, by the Markov inequality and \eqref{eq:omega}, we have \begin{align*} \sup_{s \in \mathcal{S},\tau \in \Upsilon}\left|\frac{1}{\#I_n^j(s)}\sum_{i \in I_n^j(s)} \tilde{\eta}_{i,1}(s,\tau)\right| \convP 0.
\end{align*} Consequently, following \eqref{eq:boot} \begin{align} \label{eq:Mtilde-M} \sup_{s \in \mathcal{S},\tau \in \Upsilon}\left\vert\sum_{i=N(s) + 1}^{N(s)+n_1(s)}(\xi_i^s-M_{ni})\tilde{\eta}_{i,1}(s,\tau)\right \vert = o_p(\sqrt{n(s)}). \end{align} This concludes the first part of this Lemma. For the second part, we note \begin{align*} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\widetilde{M}_{ni}\tilde{\eta}_{i,1}(s,\tau) \stackrel{d}{=} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\xi^s_i\tilde{\eta}_{i,1}(s,\tau) \stackrel{d}{=} \sum_{i= 1}^{n_1(s)}\xi^s_i\tilde{\eta}_{i,1}(s,\tau), \end{align*} where the second equality holds because $\{\xi_i^s,\tilde{\eta}_{i,1}(s,\tau)\}_{i \geq 1} \perp\!\!\!\perp \{N(s),n_1(s),n(s)\}$. Then, conditionally on $\{N(s),n_1(s),n(s)\}$ and uniformly over $s \in \mathcal{S}$, the usual maximal inequality (\citet[Theorem 2.14.1]{VW96}) implies \begin{align} \label{eq:xi} \sup_{\tau \in \Upsilon}|\sum_{i=N(s) + 1}^{N(s)+n_1(s)}\widetilde{M}_{ni}\tilde{\eta}_{i,1}(s,\tau)| \stackrel{d}{=} \sup_{\tau \in \Upsilon}|\sum_{i= 1}^{n_1(s)}\xi^s_i\tilde{\eta}_{i,1}(s,\tau)| = O_p(\sqrt{n(s)}). \end{align} Combining \eqref{eq:Astar}, \eqref{eq:Mtilde-M}, and \eqref{eq:xi}, we establish \eqref{eq:targetF9}. This concludes the proof. \end{proof} \begin{lem} \label{lem:Qstar} If Assumptions \ref{ass:assignment1}.1 and \ref{ass:assignment1}.2 hold, $\sup_{ s \in \mathcal{S}}\frac{|D_n^*(s)|}{\sqrt{n^*(s)}} = O_p(1)$, $\sup_{ s \in \mathcal{S}}\frac{|D_n(s)|}{\sqrt{n(s)}} = O_p(1)$, and $n(s) \rightarrow \infty$ for all $s \in \mathcal{S}$, a.s., then, uniformly over $\tau \in \Upsilon$, \begin{align*} Q^*_{n}(u,\tau) \convP \frac{1}{2}u'Qu. \end{align*} \end{lem} \begin{proof} Recall $Q^*_{n,1}(u,\tau)$ and $Q^*_{n,0}(u,\tau)$ defined in \eqref{eq:Qstar}. We focus on $Q^*_{n,1}(u,\tau)$. Recall the definition of $M_{ni}$ in the proof of Lemma \ref{lem:Rsfestar}. We have \begin{align} \label{eq:l21nstar'qr} Q^*_{n,1}(u,\tau) = & \sum_{s \in \mathcal{S}} \sum_{i = N(s)+1}^{N(s)+n_1(s)}M_{ni}\int_0^{\frac{u_0 + u_1}{\sqrt{n}}}(1\{Y^s_i(1) - q_1(\tau) \leq v\} - 1\{Y^s_i(1) - q_1(\tau) \leq 0\})dv \notag \\ = & \sum_{s \in \mathcal{S}} \sum_{i = N(s)+1}^{N(s)+n_1(s)}M_{ni}[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s)] + \sum_{s \in \mathcal{S}} \sum_{i = N(s)+1}^{N(s)+n_1(s)}M_{ni}\mathbb{E}\phi_i(u,\tau,s), \end{align} where $\phi_i(u,\tau,s) = \int_0^{\frac{u_0 + u_1}{\sqrt{n}}}(1\{Y^s_i(1) - q_1(\tau) \leq v\} - 1\{Y^s_i(1) - q_1(\tau) \leq 0\})dv$. Similar to \eqref{eq:Mtilde-M}, we have \begin{align} \label{eq:term1qr} & \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right] \notag \\ = & \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\xi_i^s\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right] + \sum_{s \in \mathcal{S}}r_n(u,\tau,s), \end{align} where $\{\xi_i^s\}_{i=1}^n$ is a sequence of i.i.d. Poisson(1) random variables and is independent of everything else, and \begin{align*} r_n(u,\tau,s) = \text{sign}(\widetilde{N}(n_1(s)) - n_1^*(s))\sum_{j=1}^\infty\frac{\#I_{n}^j(s)}{\sqrt{n(s)}}\frac{1}{\#I_{n}^j(s)}\sum_{i \in I_n^j(s)}\sqrt{n(s)}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right]. 
\end{align*} We aim to show \begin{align} \label{eq:rnqr} \sup_{\tau \in \Upsilon, s\in \mathcal{S}}|r_n(u,\tau,s)| = o_p(1). \end{align} Recall that the proof of Lemma \ref{lem:Rsfestar} relies on \eqref{eq:kk} and the fact that $$\mathbb{E}\sup_{n(s) \geq k \geq n_0}\sup_{\tau \in \Upsilon, s \in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^k \tilde{\eta}_{j,1}(s,\tau)\right| \rightarrow 0.$$ Using the same argument and replacing $\tilde{\eta}_{j,1}(s,\tau)$ by $\sqrt{n(s)}\left[\phi_j(u,\tau,s) - \mathbb{E}\phi_j(u,\tau,s) \right]$, in order to show \eqref{eq:rnqr}, we only need to verify that, as $n \rightarrow \infty$ followed by $n_0 \rightarrow \infty$, \begin{align*} \mathbb{E}\sup_{n(s) \geq k \geq n_0}\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{i=1}^k\sqrt{n(s)}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right]\right| \rightarrow 0. \end{align*} Note that $\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{i=1}^k\sqrt{n(s)}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right]\right|$ is bounded by $|u_0| + |u_1|$. It suffices to show that, for any $\varepsilon>0$, as $n(s) \rightarrow \infty$ followed by $n_0 \rightarrow \infty$, \begin{align} \label{eq:targetp_qr} \mathbb{P}\left(\sup_{n(s) \geq k \geq n_0}\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{i=1}^k\sqrt{n(s)}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right]\right| \geq \varepsilon \right) \rightarrow 0. \end{align} Define the class of functions $\mathcal{F}_n$ as \begin{align*} \mathcal{F}_n = \{\sqrt{n(s)}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right]: \tau \in \Upsilon, s\in \mathcal{S} \}. \end{align*} Then, $\mathcal{F}_n$ is contained in a VC-class with a fixed VC-index. In addition, for fixed $u$, $\mathcal{F}_n$ has a bounded (and independent of $n$) envelope function $F = |u_0| + |u_1|$. Last, define $\mathcal{I}_l = \{2^l,2^l+1,\cdots,2^{l+1}-1\}$.
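For intuition, the role of the dyadic blocks is that, on each block, the normalization $1/k$ can be replaced by its smallest value: for generic summands $Z_i$ (a standard blocking device, stated here only for readability),
\begin{align*}
\sup_{k \in \mathcal{I}_l}\left|\frac{1}{k}\sum_{i=1}^k Z_i\right| \leq \frac{1}{2^l}\sup_{k \leq 2^{l+1}}\left|\sum_{i=1}^k Z_i\right|,
\end{align*}
which is the step used in the second inequality of the display below, and the resulting block-wise bounds are summable over $l$.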
Then, \begin{align*} & \mathbb{P}\left(\sup_{n(s) \geq k \geq n_0}\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{i=1}^k\sqrt{n(s)}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right]\right| \geq \varepsilon \biggl|n(s)\right) \\ \leq & \sum_{l=\lfloor \log_2(n_0) \rfloor}^{\lfloor \log_2(n(s)) \rfloor+1}\mathbb{P}\left(\sup_{k \in \mathcal{I}_l}\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{i=1}^k\sqrt{n(s)}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right]\right| \geq \varepsilon \biggl|n(s)\right) \\ \leq & \sum_{l=\lfloor \log_2(n_0) \rfloor}^{\lfloor \log_2(n(s)) \rfloor+1}\mathbb{P}\left(\sup_{k \leq 2^{l+1}}\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\sum_{i=1}^k\sqrt{n(s)}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right]\right| \geq \varepsilon 2^l \biggl| n(s)\right) \\ \leq & \sum_{l=\lfloor \log_2(n_0) \rfloor}^{\lfloor \log_2(n(s)) \rfloor+1}9\mathbb{P}\left(\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\sum_{i=1}^{2^{l+1}}\sqrt{n(s)}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right]\right| \geq \varepsilon 2^l/30 \biggl| n(s) \right) \\ \leq & \sum_{l=\lfloor \log_2(n_0) \rfloor}^{\lfloor \log_2(n(s)) \rfloor+1}\frac{270 \mathbb{E}\left(\sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\sum_{i=1}^{2^{l+1}}\sqrt{n(s)}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right] \right|\biggl| n(s)\right)}{\varepsilon 2^l} \\ \leq & \sum_{l=\lfloor \log_2(n_0) \rfloor}^{\lfloor \log_2(n(s)) \rfloor+1}\frac{C_1}{\varepsilon 2^{l/2}} \\ \leq & \frac{2C_1}{\varepsilon \sqrt{n_0}} \rightarrow 0, \end{align*} where the first inequality holds by the union bound, the second inequality holds because on $\mathcal{I}_l$, $2^{l+1} \geq k \geq 2^l$, the third inequality follows from the same argument as in the proof of Theorem \ref{thm:qr}, the fourth inequality is due to the Markov inequality, the fifth inequality follows from a standard maximal inequality such as \citet[Theorem 2.14.1]{VW96}, where the constant $C_1$ is independent of $(l,\varepsilon,n)$, and the last inequality holds by summing the geometric series; the final convergence to zero follows by letting $n_0 \rightarrow \infty$. Because $\varepsilon$ is arbitrary, we have established \eqref{eq:targetp_qr}, and thus, \eqref{eq:rnqr}, which further implies that \begin{align*} \sup_{\tau \in \Upsilon, s\in \mathcal{S}}|r_n(u,\tau,s)| = o_p(1). \end{align*} In addition, for the leading term of \eqref{eq:term1qr}, we have \begin{align*} & \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\xi_i^s\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right] \\ = & \sum_{s \in \mathcal{S}}\left[ \Gamma_n^{s*}(N(s)+n_1(s),\tau)- \Gamma_n^{s*}(N(s),\tau)\right], \end{align*} where \begin{align*} \Gamma_n^{s*}(k,\tau) = & \sum_{i=1}^k \xi_i^s\int_0^{\frac{u_0+u_1}{\sqrt{n}}}\left(1\{Y_i^s(1) \leq q_1(\tau)+v\} - 1\{Y_i^s(1) \leq q_1(\tau)\} \right)dv \\ & - k \mathbb{E}\left[\int_0^{\frac{u_0+u_1}{\sqrt{n}}}\left(1\{Y_i^s(1) \leq q_1(\tau)+v\} - 1\{Y_i^s(1) \leq q_1(\tau)\} \right)dv\right]. \end{align*} By the same argument as in \eqref{eq:Qn1}, we can show that \begin{align*} \sup_{0 < t \leq 1,\tau \in \Upsilon}|\Gamma_n^{s*}(\lfloor nt \rfloor,\tau) | = o_p(1), \end{align*} where we need to use the fact that the Poisson(1) random variable has an exponential tail and thus \begin{align*} \mathbb{E}\sup_{i\in\{1,\cdots,n\},s\in \mathcal{S}} \xi_i^s = O(\log(n)).
\end{align*} Therefore, \begin{align} \label{eq:l21nstar''qr} \sup_{\tau \in \Upsilon}\left|\sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni}\left[\phi_i(u,\tau,s) - \mathbb{E}\phi_i(u,\tau,s) \right]\right| = o_p(1). \end{align} For the second term on the RHS of \eqref{eq:l21nstar'qr}, we have \begin{align} \label{eq:l21nstar'''qr} \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni}\mathbb{E}\phi_i(u,\tau,s) = & \sum_{s \in \mathcal{S}}n_1^*(s)\mathbb{E}\phi_i(u,\tau,s) \notag \\ = & \sum_{s \in \mathcal{S}} \pi p(s) \frac{f_1(q_1(\tau)|s)}{2}(u_0+u_1)^2 +o(1) \notag \\ = & \frac{\pi f_1(q_1(\tau))(u_0+u_1)^2}{2} +o(1), \end{align} where the $o(1)$ term holds uniformly over $\tau \in \Upsilon$, the first equality holds because $\sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni} = n^*_1(s)$ and the second equality holds by the same calculation as in \eqref{eq:Qn1} and the facts that $n^*(s)/n \convP p(s)$ and \begin{align*} \frac{n^*_1(s)}{n} = \frac{D_n^*(s)+\pi n^*(s)}{n} \convP \pi p(s). \end{align*} Combining \eqref{eq:l21nstar'qr}--\eqref{eq:rnqr}, \eqref{eq:l21nstar''qr}, and \eqref{eq:l21nstar'''qr}, we have \begin{align*} Q^*_{n,1}(u,\tau) \convP \frac{\pi f_1(q_1(\tau))(u_0+u_1)^2}{2}, \end{align*} uniformly over $\tau \in \Upsilon.$ By the same argument, we can show that, uniformly over $\tau \in \Upsilon$, \begin{align*} Q^*_{n,0}(u,\tau) \convP \frac{(1-\pi) f_0(q_0(\tau))u^2_0}{2}. \end{align*} This concludes the proof. \end{proof} \begin{center} \Large{Second Supplement to ``Quantile Treatment Effects and Bootstrap Inference under Covariate-Adaptive Randomization": Strata Fixed Effects Quantile Regression Estimation and Additional Simulation Results} \end{center} \begin{abstract} This supplement collects the theoretical results for the strata fixed effects quantile regression estimator and reports additional simulation results. Section \ref{sec:sfe} describes the estimation, weighted bootstrap, and covariate-adaptive bootstrap inference procedures for the strata fixed effects quantile regression estimator. Sections \ref{sec:proofsfe}--\ref{sec:proofcabsfe} prove Theorems \ref{thm:sfe}--\ref{thm:cabsfe}, respectively. Section \ref{sec:lem2} contains the proofs of the technical lemmas. Section \ref{sec:addsim} contains additional simulation results. \end{abstract} \setcounter{page}{1} \renewcommand\thesection{\Alph{section}} \renewcommand{0}{0} \setcounter{equation}{0} \section{Quantile Regression with Strata Fixed Effects} \label{sec:sfe} The strata fixed effects estimator for the ATE is obtained by a linear regression of the outcome $Y_i$ on the treatment status $A_i$, controlling for the strata dummies $\{1\{S_i=s\}\}_{s \in \mathcal{S}}$. \cite{BCS17} point out that, by the Frisch-Waugh-Lovell theorem, this estimator is equal to the coefficient in the linear regression of $Y_i$ on $\tilde{A}_i$, in which $\tilde{A}_i$ is the residual of the projection of $A_i$ on the strata dummies. Unlike the expectation, the quantile operator is nonlinear. Therefore, we cannot consistently estimate QTEs by a linear QR of $Y_i$ on $A_i$ and strata dummies. Instead, motivated by this equivalence, we propose to run the QR of $Y_i$ on $\tilde{A}_i$. Formally, let $\tilde{A}_i = A_i - \hat{\pi}(S_i)$ and $\dot{\tilde{A}}_i = (1,\tilde{A}_i)'$, where $\hat{\pi}(s) = n_1(s)/n(s)$, $n_1(s) = \sum_{i=1}^n A_i 1\{S_i = s\}$, and $n(s) = \sum_{i=1}^n 1\{S_i = s\}$.
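For concreteness, a minimal sketch of this construction is given below (in Python; the function and variable names, and the use of the \texttt{QuantReg} routine from \texttt{statsmodels} as the check-function minimizer, are our own illustrative choices rather than part of the formal development); the formal definition of the estimator follows.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def sfe_qte(Y, A, S, tau):
    """Sketch of the SFE quantile regression estimator at quantile tau.

    Y: outcomes, A: binary treatment indicators, S: stratum labels.
    Returns the fitted (beta0, beta1); beta1 estimates the QTE q(tau).
    """
    Y, A, S = map(np.asarray, (Y, A, S))
    # hat{pi}(s) = n_1(s)/n(s): treated fraction within each stratum
    pi_hat = {s: A[S == s].mean() for s in np.unique(S)}
    A_tilde = A - np.array([pi_hat[s] for s in S])  # tilde{A}_i
    X = sm.add_constant(A_tilde)                    # (1, tilde{A}_i)'
    fit = sm.QuantReg(Y, X).fit(q=tau)              # minimizes the check function rho_tau
    return fit.params                               # second entry is hat{beta}_{sfe,1}(tau)
\end{verbatim}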
Then, the strata fixed effects (SFE) estimator for the QTE is $\hat{\beta}_{sfe,1}(\tau)$, where \begin{align*} \hat{\beta}_{sfe}(\tau) \equiv \left(\hat{\beta}_{sfe,0}(\tau),\hat{\beta}_{sfe,1}(\tau) \right)'= \argmin_{b = (b_0,b_1)^\prime \in \Re^2}\sum_{i =1}^n \rho_\tau\left(Y_i - \dot{\tilde{A}}_i^\prime b\right). \end{align*} \begin{thm} \label{thm:sfe} If Assumptions \ref{ass:assignment1}.1--\ref{ass:assignment1}.3 and \ref{ass:tau} hold and $p(s)>0$ for all $s \in \mathcal{S}$, then, uniformly over $\tau \in \Upsilon$, \begin{equation*} \sqrt{n}\left(\hat{\beta}_{sfe,1}(\tau)-q(\tau)\right)\rightsquigarrow \mathcal{B}_{sfe}(\tau),~\text{as}~n\rightarrow \infty, \end{equation*} where $\mathcal{B}_{sfe}(\cdot)$ is a Gaussian process with covariance kernel $\Sigma_{sfe}(\cdot,\cdot)$. The expression for $\Sigma_{sfe}(\cdot,\cdot)$ can be found in the proof of this theorem. \end{thm} In particular, the asymptotic variance of $\hat{\beta}_{sfe,1}(\tau)$ is \begin{align*} \zeta_Y^2(\pi,\tau) + \zeta_A^{\prime 2}(\pi,\tau) + \zeta_S^2(\tau), \end{align*} where $\zeta_Y^2(\pi,\tau)$ and $\zeta_S^2(\tau)$ are the same as those defined below Theorem \ref{thm:qr}, \begin{align*} \zeta_A^{\prime 2}(\pi,\tau)= & \mathbb{E}\gamma(S)\biggl[(m_1(S,\tau)-m_0(S,\tau))\left(\frac{1-\pi}{\pi f_1(q_1(\tau))} - \frac{\pi}{(1-\pi) f_0(q_0(\tau))} \right) \\ & + q(\tau)\left(\frac{f_1(q_1(\tau)|S)}{f_1(q_1(\tau))}-\frac{f_0(q_0(\tau)|S)}{f_0(q_0(\tau))}\right) \biggr]^2. \end{align*} Three remarks are in order. First, if the treatment assignment rule achieves strong balance, then $\zeta_A^{\prime 2}(\pi,\tau) = 0$ and the asymptotic variances for $\hat{\beta}_{1}(\tau)$ and $\hat{\beta}_{sfe,1}(\tau)$ are the same. Second, if the treatment assignment rule does not achieve strong balance, then it is difficult to compare the asymptotic variances of $\hat{\beta}_{1}(\tau)$ and $\hat{\beta}_{sfe,1}(\tau)$. Based on our simulation results in Section \ref{sec:addsim}, the SFE estimator usually has a smaller standard error. Third, in order to analytically compute the asymptotic variance of $\hat{\beta}_{sfe,1}(\tau)$, one needs to nonparametrically estimate not only the unconditional densities $f_j(\cdot)$ but also the conditional densities $f_j(\cdot|s)$ for $j = 0,1$ and $s \in \mathcal{S}.$ However, this difficulty can be avoided by the covariate-adaptive bootstrap inference considered in Section \ref{sec:CABI}. We can compute the weighted bootstrap counterpart of the strata fixed effects estimator: \begin{align*} \hat{\beta}_{sfe}^w(\tau) = \argmin_b \sum_{i=1}^n \xi_i \rho_\tau\left(Y_i - \dot{\tilde{A}}_i^{w'}b\right), \end{align*} where $\dot{\tilde{A}}^w_i = (1,\tilde{A}^w_i)'$, $\tilde{A}^w_i = A_i - \hat{\pi}^w(S_i)$, and $\hat{\pi}^w(\cdot)$ is defined in Section \ref{sec:SBI}. The second element of $\hat{\beta}_{sfe}^w(\tau)$ is our bootstrap estimator of the QTE. \begin{thm} \label{thm:sfeb} If Assumptions \ref{ass:assignment1}--\ref{ass:weight} hold and $p(s)>0$ for all $s \in \mathcal{S}$, then uniformly over $\tau \in \Upsilon$ and conditionally on data, \begin{align*} \sqrt{n}\left(\hat{\beta}_{sfe,1}^w(\tau) - \hat{\beta}_{sfe,1}(\tau)\right) \rightsquigarrow \tilde{\mathcal{B}}_{sfe}(\tau),~\text{as}~n\rightarrow \infty, \end{align*} where $\tilde{\mathcal{B}}_{sfe}(\tau)$ is a Gaussian process with covariance kernel being equal to that of $\mathcal{B}_{sfe}(\tau)$ defined in Theorem \ref{thm:sfe} with $\gamma(s)$ being replaced by $\pi(1-\pi)$.
\end{thm} Similar to the SQR estimator, the weighted bootstrap fails to capture the cross-sectional dependence due to the covariate-adaptive randomization, and thus, overestimates the asymptotic variance of the SFE estimator. We can also implement the covariate-adaptive bootstrap. Let \begin{align*} \hat{\beta}_{sfe}^*(\tau) = \argmin_b \sum_{i=1}^n \rho_\tau\left(Y_i^* - \dot{\tilde{A}}_i^{*'}b\right), \end{align*} where $\dot{\tilde{A}}_i^* = (1,\tilde{A}_i^*)'$, $\tilde{A}_i^* = A_i^* - \hat{\pi}^*(S_i^*)$, $\hat{\pi}^*(s) = \frac{n_1^*(s)}{n^*(s)}$, and $(Y_i^*,A_i^*,S_i^*)_{i=1}^n$ is the covariate-adaptive bootstrap sample generated via the procedure described in Section \ref{sec:CABI}. The second element $\hat{\beta}_{sfe,1}^*(\tau)$ of $\hat{\beta}_{sfe}^*(\tau)$ is the covariate-adaptive SFE estimator. \begin{thm} \label{thm:cabsfe} If Assumptions \ref{ass:assignment1}, \ref{ass:tau}, and \ref{ass:bassignment} hold and $p(s)>0$ for all $s \in \mathcal{S}$, then, uniformly over $\tau \in \Upsilon$ and conditionally on data, \begin{align*} \sqrt{n}\left(\hat{\beta}_{sfe,1}^*(\tau) - \hat{q}(\tau)\right) \rightsquigarrow \mathcal{B}_{sfe}(\tau),~\text{as}~n\rightarrow \infty. \end{align*} \end{thm} Unlike the weighted bootstrap, the covariate-adaptive bootstrap can mimic the cross-sectional dependence, and thus, produces an asymptotically valid standard error for the SFE estimator. \section{Proof of Theorem \ref{thm:sfe}} \label{sec:proofsfe} Define $\tilde{\beta}_1(\tau) = q(\tau)$, $\tilde{\beta}_0(\tau) = \pi q_1(\tau) + (1-\pi)q_0(\tau)$, $\tilde{\beta}(\tau) = (\tilde{\beta}_0(\tau),\tilde{\beta}_1(\tau))^\prime$, and $\breve{A}_i = (1,A_i - \pi)^\prime$. For arbitrary $b_0$ and $b_1$, let $u_0 = \sqrt{n}(b_0-\tilde{\beta}_0(\tau))$, $u_1 = \sqrt{n}(b_1-\tilde{\beta}_1(\tau))$, $u=(u_0, u_1)' \in \Re^2$, and \begin{align*} L_{sfe,n}(u,\tau) = \sum_{i=1}^n\left[\rho_\tau(Y_i - \breve{A}'_i\tilde{\beta}(\tau) - (\dot{\tilde{A}}_i'b - \breve{A}'_i\tilde{\beta}(\tau))) - \rho_\tau(Y_i - \breve{A}'_i\tilde{\beta}(\tau))\right]. \end{align*} Then, by the change of variables, we have that \begin{align*} \sqrt{n}(\hat{\beta}_{sfe}(\tau) - \tilde{\beta}(\tau)) = \argmin_{u} L_{sfe,n}(u,\tau). \end{align*} Notice that $L_{sfe,n}(u,\tau)$ is convex in $u$ for each $\tau$ and bounded in $\tau$ for each $u$. In the following, we aim to show that there exists $$g_{sfe,n}(u,\tau) = - u'W_{sfe,n}(\tau) + \frac{1}{2}u'Q_{sfe}(\tau)u$$ such that (1) for each $u$, \begin{align*} \sup_{\tau \in \Upsilon}|L_{sfe,n}(u,\tau) - g_{sfe,n}(u,\tau)-h_{sfe,n}(\tau)| \convP 0, \end{align*} where $h_{sfe,n}(\tau)$ does not depend on $u$; (2) the maximum eigenvalue of $Q_{sfe}(\tau)$ is bounded from above and the minimum eigenvalue of $Q_{sfe}(\tau)$ is bounded away from $0$ uniformly over $\tau \in \Upsilon$; (3) $W_{sfe,n}(\tau) \rightsquigarrow \tilde{\mathcal{B}}(\tau)$ uniformly over $\tau \in \Upsilon$ for some $\tilde{\mathcal{B}}(\tau)$.\footnote{We abuse the notation and denote the weak limit of $W_{sfe,n}(\tau)$ as $\tilde{\mathcal{B}}(\tau)$. This limit is different from the weak limit of $W_n(\tau)$ in the proof of Theorem \ref{thm:qr}.} Then by \citet[Theorem 2]{K09}, we have \begin{align*} \sqrt{n}(\hat{\beta}_{sfe}(\tau) - \tilde{\beta}(\tau)) = [Q_{sfe}(\tau)]^{-1}W_{sfe,n}(\tau) + r_{sfe,n}(\tau), \end{align*} where $\sup_{\tau \in \Upsilon}||r_{sfe,n}(\tau)|| = o_p(1)$.
In addition, by (3), we have, uniformly over $\tau \in \Upsilon$, \begin{align*} \sqrt{n}(\hat{\beta}_{sfe}(\tau) - \tilde{\beta}(\tau)) \rightsquigarrow [Q_{sfe}(\tau)]^{-1}\tilde{\mathcal{B}}(\tau) \equiv \mathcal{B}(\tau). \end{align*} The second element of $\mathcal{B}(\tau)$ is $\mathcal{B}_{sfe}(\tau)$ stated in Theorem \ref{thm:sfe}. Next, we prove requirements (1)--(3) in three steps. \textbf{Step 1.} By Knight's identity \citep{K98}, we have \begin{align*} & L_{sfe,n}(u,\tau) \\ = & -\sum_{i=1}^{n}(\dot{\tilde{A}}_i'(\tilde{\beta}(\tau) + \frac{u}{\sqrt{n}}) - \breve{A}'_i\tilde{\beta}(\tau))\left(\tau- 1\{Y_i\leq \breve{A}'_i\tilde{\beta}(\tau)\}\right) \\ & + \sum_{i=1}^{n}\int_0^{\dot{\tilde{A}}_i'(\tilde{\beta}(\tau) + \frac{u}{\sqrt{n}}) - \breve{A}'_i\tilde{\beta}(\tau)}\left(1\{Y_i - \breve{A}'_i\tilde{\beta}(\tau)\leq v\} - 1\{Y_i - \breve{A}'_i\tilde{\beta}(\tau)\leq 0\} \right)dv\\ \equiv & -L_{1,n}(u,\tau) + L_{2,n}(u,\tau). \end{align*} \textbf{Step 1.1.} We first consider $L_{1,n}(u,\tau)$. Note that $\tilde{\beta}_1(\tau) = q(\tau)$ and \begin{align} \label{eq:l1} & L_{1,n}(u,\tau) \notag \\ = & \sum_{i=1}^{n} \sum_{s \in \mathcal{S}} A_i 1\{S_i = s\}\left(\frac{u_0}{\sqrt{n}} + (1-\hat{\pi}(s))\frac{u_1}{\sqrt{n}} + (\pi - \hat{\pi}(s))q(\tau) \right)\left(\tau- 1\{Y_i(1)\leq q_1(\tau)\}\right) \notag \\ & + \sum_{i=1}^{n} \sum_{s \in \mathcal{S}} (1-A_i) 1\{S_i = s\}\left(\frac{u_0}{\sqrt{n}} -\hat{\pi}(s)\frac{u_1}{\sqrt{n}} + (\pi - \hat{\pi}(s))q(\tau) \right)\left(\tau- 1\{Y_i(0)\leq q_0(\tau)\}\right) \notag \\ \equiv & L_{1,1,n}(u,\tau) + L_{1,0,n}(u,\tau). \end{align} Let $\iota_1 = (1,1-\pi)'$ and $\iota_0 = (1,-\pi)'$. Note that $\hat{\pi}(s) - \pi = \frac{D_n(s)}{n(s)}$. Then, for $ L_{1,1,n}(u,\tau)$, we have \begin{align} \label{eq:l11} & L_{1,1,n}(u,\tau) \notag \\ = & \sum_{i=1}^{n} \sum_{s \in \mathcal{S}} A_i 1\{S_i = s\}\left[\frac{u'\iota_1}{\sqrt{n}} + (\pi - \hat{\pi}(s))\left(q(\tau)+\frac{u_1}{\sqrt{n}}\right) \right]\left(\tau- 1\{Y_i(1)\leq q_1(\tau)\}\right) \notag \\ = & \frac{u'\iota_1}{\sqrt{n}}\sum_{i=1}^{n} \sum_{s \in \mathcal{S}} A_i 1\{S_i = s\}\left(\tau- 1\{Y_i(1)\leq q_1(\tau)\}\right) \notag \\ &- \sum_{s \in \mathcal{S}} \frac{D_n(s)}{\sqrt{n}}\frac{u_1}{n(s)}\sum_{i =1}^nA_i1\{S_i = s\}\left(\tau- 1\{Y_i(1)\leq q_1(\tau)\}\right) \notag \\ & + \sum_{s \in \mathcal{S}} (\pi - \hat{\pi}(s))q(\tau)\sum_{i =1}^nA_i1\{S_i = s\}\left(\tau- 1\{Y_i(1)\leq q_1(\tau)\}\right) \notag \\ = & \sum_{s \in \mathcal{S}}\frac{u'\iota_1}{\sqrt{n}}\sum_{i=1}^{n} \biggl[ A_i 1\{S_i = s\}\eta_{i,1}(s,\tau) + (A_i-\pi) 1\{S_i = s\}m_{1}(s,\tau) + \pi 1\{S_i = s\}m_{1}(s,\tau)\biggr] \notag \\ & - \sum_{s \in \mathcal{S}}\frac{D_n(s)}{\sqrt{n}}\frac{u_1}{n(s)}\sum_{i =1}^n\biggl[A_i 1\{S_i=s\}\eta_{i,1}(s,\tau)+ (A_i - \pi)1\{S_i=s\}m_1(s,\tau) + \pi1\{S_i=s\}m_1(s,\tau)\biggr] + h_{1,1}(\tau) \notag \\ = & \sum_{s \in \mathcal{S}}\frac{u'\iota_1}{\sqrt{n}}\sum_{i=1}^{n} \biggl[ A_i 1\{S_i = s\}\eta_{i,1}(s,\tau) + (A_i-\pi) 1\{S_i = s\}m_{1}(s,\tau) + \pi 1\{S_i = s\}m_{1}(s,\tau)\biggr] \notag \\ & - \sum_{s \in \mathcal{S}} \frac{u_1D_n(s) \pi m_1(s,\tau)}{\sqrt{n}} + h_{1,1}(\tau) + R_{sfe,1,1}(u,\tau), \end{align} where \begin{align*} h_{1,1}(\tau) = \sum_{s \in \mathcal{S}} (\pi - \hat{\pi}(s))q(\tau)\sum_{i =1}^nA_i1\{S_i = s\}\left(\tau- 1\{Y_i(1)\leq q_1(\tau)\}\right) \end{align*} and \begin{align*} R_{sfe,1,1}(u,\tau) = - \sum_{s \in \mathcal{S}}\frac{u_1D_n(s)}{\sqrt{n} n(s)}\sum_{i =1}^n\biggl[A_i
1\{S_i=s\}\eta_{i,1}(s,\tau)+ (A_i - \pi)1\{S_i=s\}m_1(s,\tau)\biggr]. \end{align*} By the same argument as in Lemma \ref{lem:Q} and Assumption \ref{ass:assignment1}.3, we have for every $s\in\mathcal{S}$, \begin{align} \label{eq:eta1_sfe} \sup_{\tau \in \Upsilon}\left|\frac{1}{\sqrt{n}}\sum_{i =1}^n A_i 1\{S_i=s\}\eta_{i,1}(s,\tau)\right| = O_p(1) \end{align} and \begin{align*} \sup_{\tau \in \Upsilon}\left|\frac{1}{\sqrt{n}}\sum_{i =1}^n\biggl[(A_i - \pi)1\{S_i=s\}m_1(s,\tau)\biggr]\right| = \sup_{\tau \in \Upsilon}\left|\frac{D_n(s)m_1(s,\tau)}{\sqrt{n}}\right| = O_p(1). \end{align*} In addition, note that $n(s)/n \convP p(s)$. Therefore, \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,1,1}(u,\tau)| = O_p(\frac{1}{\sqrt{n}}) = o_p(1). \end{align*} Similarly, we have \begin{align} \label{eq:l10} & L_{1,0,n}(u,\tau) \notag \\ = & \sum_{s \in \mathcal{S}}\frac{u'\iota_0}{\sqrt{n}}\sum_{i=1}^{n} \biggl[ (1-A_i) 1\{S_i = s\}\eta_{i,0}(s,\tau) - (A_i-\pi) 1\{S_i = s\}m_{0}(s,\tau) + (1-\pi) 1\{S_i = s\}m_{0}(s,\tau)\biggr] \notag \\ & - \sum_{s \in \mathcal{S}} \frac{u_1D_n(s) (1-\pi) m_0(s,\tau)}{\sqrt{n}} + h_{1,0}(\tau) + R_{sfe,1,0}(u,\tau), \end{align} where \begin{align*} h_{1,0}(\tau) = \sum_{s \in \mathcal{S}} (\pi - \hat{\pi}(s))q(\tau)\sum_{i =1}^n(1-A_i)1\{S_i = s\}\left(\tau- 1\{Y_i(0)\leq q_0(\tau)\}\right), \end{align*} \begin{align*} R_{sfe,1,0}(u,\tau) = - \sum_{s \in \mathcal{S}}\frac{u_1D_n(s)}{\sqrt{n} n(s)}\sum_{i =1}^n\biggl[(1-A_i) 1\{S_i=s\}\eta_{i,0}(s,\tau)- (A_i - \pi)1\{S_i=s\}m_0(s,\tau)\biggr], \end{align*} and \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,1,0}(u,\tau)| = O_p(\frac{1}{\sqrt{n}}) = o_p(1). \end{align*} Combining \eqref{eq:l1}, \eqref{eq:l11}, \eqref{eq:l10} and letting $\iota_2 = (1,1-2\pi)'$, we have \begin{align} \label{eq:l1end} L_{1,n}(u,\tau) = & \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}}\sum_{i =1}^n\biggl[u'\iota_1 A_i 1\{S_i=s\}\eta_{i,1}(s,\tau) + u'\iota_0(1-A_i)1\{S_i=s\}\eta_{i,0}(s,\tau) \biggr] \notag \\ & + \sum_{s \in \mathcal{S}}u'\iota_2 \frac{D_n(s)}{\sqrt{n}}\left(m_1(s,\tau) - m_0(s,\tau)\right) \notag \\ & + \frac{1}{\sqrt{n}}\sum_{i =1}^n\left(u'\iota_1 \pi m_1(S_i,\tau) + u'\iota_0 (1-\pi) m_0(S_i,\tau)\right) \notag \\ & + R_{sfe,1,1}(u,\tau)+R_{sfe,1,0}(u,\tau) + h_{1,1}(\tau) + h_{1,0}(\tau). \end{align} \textbf{Step 1.2.} Next, we consider $L_{2,n}(u,\tau)$. Denote $E_n(s) = \sqrt{n}(\hat{\pi}(s) -\pi)$. Then, \begin{align*} \{E_n(s)\}_{s \in \mathcal{S}} = \left\{\frac{D_n(s)}{\sqrt{n}}\frac{n}{n(s)}\right\}_{s \in \mathcal{S}} \rightsquigarrow \mathcal{N}(0,\Sigma_D'), \end{align*} where $\Sigma_D'=\text{diag}(\gamma(s)/p(s):s\in\mathcal{S})$; in particular, $E_n(s) = O_p(1)$ for each $s \in \mathcal{S}$. In addition, \begin{align} \label{eq:l2} & L_{2,n}(u,\tau) \notag \\ = & \sum_{s \in \mathcal{S}}\sum_{i=1}^n A_i 1\{S_i = s\}\int_0^{\frac{u'\iota_1}{\sqrt{n}} - \frac{E_n(s)}{\sqrt{n}}\left(q(\tau)+\frac{u_1}{\sqrt{n}}\right)}\left(1\{Y_i(1) \leq q_1(\tau) + v\} - 1\{Y_i(1) \leq q_1(\tau)\}\right)dv \notag \\ & + \sum_{s \in \mathcal{S}}\sum_{i=1}^n (1-A_i) 1\{S_i = s\}\int_0^{\frac{u'\iota_0}{\sqrt{n}} - \frac{E_n(s)}{\sqrt{n}}\left(q(\tau)+\frac{u_1}{\sqrt{n}}\right)}\left(1\{Y_i(0) \leq q_0(\tau) + v\} - 1\{Y_i(0) \leq q_0(\tau)\}\right)dv \notag \\ \equiv & L_{2,1,n}(u,\tau) + L_{2,0,n}(u,\tau).
\end{align} By the same argument as in \eqref{eq:Qn1}, we have \begin{align} \label{eq:L2sfe} L_{2,1,n}(u,\tau) \stackrel{d}{=} & \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\int_0^{\frac{u'\iota_1}{\sqrt{n}} - \frac{E_n(s)}{\sqrt{n}}\left(q(\tau)+\frac{u_1}{\sqrt{n}}\right)}\left(1\{Y_i^s(1) \leq q_1(\tau) + v\} - 1\{Y_i^s(1) \leq q_1(\tau)\}\right)dv \notag \\ \equiv & \sum_{s \in \mathcal{S}}\left[\Gamma_n^s(N(s)+n_1(s),\tau,E_n(s)) - \Gamma_n^s(N(s),\tau,E_n(s))\right], \end{align} where \begin{align*} \Gamma_n^s(k,\tau,e) = \sum_{i=1}^k \int_0^{\frac{u' \iota_1 - e(q(\tau)+\frac{u_1}{\sqrt{n}})}{\sqrt{n}}}\left(1\{Y_i^s(1) \leq q_1(\tau) + v\} - 1\{Y_i^s(1) \leq q_1(\tau)\} \right)dv. \end{align*} We want to show that, for any sufficiently large constant $M$, \begin{align} \label{eq:Gamma_sfe} \sup_{0 < t \leq 1,\tau \in \Upsilon, |e| \leq M}|\Gamma_n^s(\lfloor nt \rfloor,\tau,e) - \mathbb{E}\Gamma_n^s(\lfloor nt \rfloor,\tau,e)| = o_p(1). \end{align} By the same argument as in \eqref{eq:Gamma}, it suffices to show that \begin{align*} \sup_{\tau \in \Upsilon, |e| \leq M}n||\mathbb{P}_n - \mathbb{P}||_\mathcal{F} = o_p(1), \end{align*} where \begin{align*} \mathcal{F} = \left\{\int_0^{\frac{u' \iota_1 - e(q(\tau)+\frac{u_1}{\sqrt{n}})}{\sqrt{n}}}\left(1\{Y_i^s(1) \leq q_1(\tau) + v\} - 1\{Y_i^s(1) \leq q_1(\tau)\} \right)dv: \tau \in \Upsilon, |e| \leq M \right\} \end{align*} with an envelope $F=\frac{|u_0| + |u_1| + M\sup_{ \tau \in \Upsilon}|q(\tau)| + \frac{|u_1|}{\sqrt{n}}}{\sqrt{n}}$. Note that \begin{align*} \sup_{f \in \mathcal{F}}\mathbb{E}f^2 \leq & \sup_{\tau \in \Upsilon}\mathbb{E} \left[\frac{|u_0| + |u_1| + M|q(\tau)| + \frac{|u_1|}{\sqrt{n}}}{\sqrt{n}}1\left\{|Y_i^s(1) - q_1(\tau)|\leq \frac{|u_0| + |u_1| + M|q(\tau)| + \frac{|u_1|}{\sqrt{n}}}{\sqrt{n}}\right\} \right]^2 \\ \lesssim & n^{-3/2}, \end{align*} and $\mathcal{F}$ is a VC-class with a fixed VC index. Then, by \citet[Corollary 5.1]{CCK14}, \begin{align} \label{eq:gammasfe} \mathbb{E}\sup_{\tau \in \Upsilon, |e| \leq M}|\Gamma^s_{n}(n,\tau,e) - \mathbb{E}\Gamma^s_{n}(n,\tau,e)| = n\mathbb{E}||\mathbb{P}_n - \mathbb{P}||_\mathcal{F} \lesssim n\left[\sqrt{\frac{\log(n)}{n^{5/2}}} + \frac{\log(n)}{n^{3/2}}\right] = o(1). \end{align} In addition, we have \begin{align} \label{eq:Gamma_sfe1} \mathbb{E}\Gamma_n^s(\lfloor nt \rfloor,\tau,e) = & \lfloor nt \rfloor \int_0^{\frac{u' \iota_1 - e(q(\tau)+\frac{u_1}{\sqrt{n}})}{\sqrt{n}}} [F_1(q_1(\tau)+v|s) - F_1(q_1(\tau)|s)]dv \notag \\ = & t \frac{f_1(q_1(\tau)|s)}{2}(u' \iota_1 - eq(\tau))^2 + o(1), \end{align} where $F_j(\cdot|s)$ and $f_j(\cdot|s)$, $j=0,1$ are the conditional CDF and PDF for $Y(j)$ given $S=s$, respectively, and the $o(1)$ term holds uniformly over $\{\tau \in \Upsilon, |e| \leq M\}$. Combining \eqref{eq:Gamma_sfe} and \eqref{eq:Gamma_sfe1} with the fact that $\frac{n_1(s)}{n} \convP \pi p(s)$, we have \begin{align} \label{eq:l21} L_{2,1,n}(u,\tau) = & \sum_{s \in \mathcal{S}}\pi p(s)\frac{f_1(q_1(\tau)|s)}{2}(u' \iota_1 - E_n(s)q(\tau))^2 + R'_{sfe,2,1}(u,\tau) \notag \\ = & \frac{\pi f_1(q_1(\tau))}{2}(u' \iota_1)^2 - \sum_{s \in \mathcal{S}}f_1(q_1(\tau)|s)\frac{\pi D_n(s)u' \iota_{1}}{\sqrt{n}}q(\tau) + h_{2,1}(\tau) + R_{sfe,2,1}(u,\tau), \end{align} where \begin{align*} \sup_{\tau \in \Upsilon}|R'_{sfe,2,1}(u,\tau)| = o_p(1), \quad \sup_{\tau \in \Upsilon}|R_{sfe,2,1}(u,\tau)| = o_p(1), \end{align*} and \begin{align*} h_{2,1}(\tau) = \sum_{s \in \mathcal{S}}\frac{\pi f_1(q_1(\tau)|s)}{2}p(s)E^2_n(s)\tilde{\beta}^2_1(\tau).
\end{align*} Similarly, we have \begin{align} \label{eq:l20} L_{2,0,n}(u,\tau) = & \frac{(1-\pi)f_0(q_0(\tau))}{2}(u' \iota_0)^2 - \sum_{s \in \mathcal{S}}(1-\pi)f_0(q_0(\tau)|s)\frac{D_n(s)u'\iota_0}{\sqrt{n}}q(\tau) \notag \\ & + h_{2,0}(\tau) + R_{sfe,2,0}(u,\tau), \end{align} where \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,2,0}(u,\tau)| = o_p(1) \quad \text{and} \quad h_{2,0}(\tau) = \sum_{s \in \mathcal{S}}\frac{(1-\pi)f_0(q_0(\tau)|s)}{2}p(s)E^2_n(s)\tilde{\beta}^2_1(\tau). \end{align*} Combining \eqref{eq:l2}, \eqref{eq:l21}, and \eqref{eq:l20}, we have \begin{align} \label{eq:l2end} L_{2,n}(u,\tau) = & \frac{1}{2}u'Q_{sfe}(\tau)u - \sum_{s \in \mathcal{S}} q(\tau) \left[f_1(q_1(\tau)|s)\pi u'\iota_1 +f_0(q_0(\tau)|s)(1-\pi) u'\iota_0 \right]\frac{D_n(s)}{\sqrt{n}} \notag \\ & + R_{sfe,2,1}(u,\tau)+R_{sfe,2,0}(u,\tau) + h_{2,1}(\tau) + h_{2,0}(\tau), \end{align} where \begin{align*} Q_{sfe}(\tau) = & \pi f_1(q_1(\tau)) \iota_1 \iota_1' + (1-\pi)f_0(q_0(\tau))\iota_0 \iota_0' \\ = & \begin{pmatrix} \pi f_1(q_1(\tau)) + (1-\pi) f_0(q_0(\tau)) & \pi(1-\pi)(f_1(q_1(\tau)) - f_0(q_0(\tau)) ) \\ \pi(1-\pi)(f_1(q_1(\tau)) - f_0(q_0(\tau)) ) & \pi(1-\pi)((1-\pi)f_1(q_1(\tau)) + \pi f_0(q_0(\tau))) \end{pmatrix}. \end{align*} \textbf{Step 1.3.} Last, by combining \eqref{eq:l1end} and \eqref{eq:l2end}, we have \begin{align*} L_{sfe,n}(u,\tau) = - u'W_{sfe,n}(\tau) + \frac{1}{2}u'Q_{sfe}(\tau)u + R_{sfe}(u,\tau) + h_{sfe,n}(\tau), \end{align*} where \begin{align} \label{eq:wsfe} & W_{sfe,n}(\tau) \notag \\ = &\frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}}\sum_{i =1}^n\biggl[\iota_1 A_i 1\{S_i=s\}\eta_{i,1}(s,\tau) + \iota_0(1-A_i)1\{S_i=s\}\eta_{i,0}(s,\tau) \biggr] \notag \\ & + \sum_{s \in \mathcal{S}}\biggl\{\iota_2 \left(m_1(s,\tau) - m_0(s,\tau)\right) + q(\tau)\biggl[f_1(q_1(\tau)|s)\pi \iota_1 + f_0(q_0(\tau)|s)(1-\pi) \iota_0\biggr]\biggr\}\frac{D_n(s)}{\sqrt{n}}\notag \\ & + \frac{1}{\sqrt{n}}\sum_{i =1}^n\left(\iota_1 \pi m_1(S_i,\tau) + \iota_0 (1-\pi) m_0(S_i,\tau)\right) \notag \\ \equiv & W_{sfe,n,1}(\tau) + W_{sfe,n,2}(\tau)+W_{sfe,n,3}(\tau), \end{align} \begin{align*} R_{sfe}(u,\tau) = R_{sfe,1,1}(u,\tau) + R_{sfe,1,0}(u,\tau) + R_{sfe,2,1}(u,\tau) + R_{sfe,2,0}(u,\tau) \end{align*} such that $\sup_{\tau \in \Upsilon}|R_{sfe}(u,\tau)| = o_p(1)$, and \begin{align*} h_{sfe,n}(\tau) = h_{1,1}(\tau)+h_{1,0}(\tau)+h_{2,1}(\tau)+h_{2,0}(\tau). \end{align*} This concludes the proof of Step 1. \textbf{Step 2}. Note that $\text{det}(Q_{sfe}(\tau)) = \pi(1-\pi) f_0(q_0(\tau))f_1(q_1(\tau))$, which is bounded and bounded away from zero. In addition, $Q_{sfe}(\tau)$ is positive semidefinite, being the sum of two positive semidefinite rank-one matrices, and its trace is bounded uniformly over $\tau \in \Upsilon$. Together with the determinant bound, this implies that the maximum eigenvalue of $Q_{sfe}(\tau)$ is bounded from above and its minimum eigenvalue is bounded away from zero, which is the desired result. \textbf{Step 3}. Lemma \ref{lem:Qsfe} establishes the joint weak convergence \begin{align*} (W_{sfe,n,1}(\tau), W_{sfe,n,2}(\tau), W_{sfe,n,3}(\tau)) \rightsquigarrow (\mathcal{B}_{sfe,1}(\tau),\mathcal{B}_{sfe,2}(\tau),\mathcal{B}_{sfe,3}(\tau)), \end{align*} where $(\mathcal{B}_{sfe,1}(\tau),\mathcal{B}_{sfe,2}(\tau),\mathcal{B}_{sfe,3}(\tau))$ are three independent two-dimensional Gaussian processes with covariance kernels $\Sigma_{1}(\tau_1,\tau_2)$, $\Sigma_{2}(\tau_1,\tau_2)$, and $\Sigma_{3}(\tau_1,\tau_2)$, respectively.
Therefore, uniformly over $\tau \in \Upsilon$, \begin{align*} W_{sfe,n}(\tau) \rightsquigarrow \tilde{\mathcal{B}}(\tau), \end{align*} where $\tilde{\mathcal{B}}(\tau)$ is a two-dimensional Gaussian process with covariance kernel $$\tilde{\Sigma}(\tau_1,\tau_2) = \sum_{j=1}^3\Sigma_{j}(\tau_1,\tau_2).$$ Consequently, \begin{align*} \sqrt{n}(\hat{\beta}_{sfe}(\tau) - \tilde{\beta}(\tau)) \rightsquigarrow \mathcal{B}(\tau) \equiv Q_{sfe}^{-1}(\tau)\tilde{\mathcal{B}}(\tau), \end{align*} where $\Sigma(\tau_1,\tau_2)$, the covariance kernel of $\mathcal{B}(\tau)$, has the expression that \begin{align*} & \Sigma(\tau_1,\tau_2) \\ = & Q_{sfe}^{-1}(\tau_1)\tilde{\Sigma}(\tau_1,\tau_2)Q_{sfe}^{-1}(\tau_2) \\ = & \biggl\{ \frac{1}{\pi f_1(q_1(\tau_1))f_1(q_1(\tau_2))}\left[\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2)\right]\begin{pmatrix} \pi^2 & \pi \\ \pi & 1 \end{pmatrix} \\ & + \frac{1}{(1-\pi) f_0(q_0(\tau_1))f_0(q_0(\tau_2))}\left[\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)\right]\begin{pmatrix} (1-\pi)^2 & \pi-1 \\ \pi-1 & 1 \end{pmatrix} \biggr\}\\ & + \biggl\{\mathbb{E}\gamma(S)\biggl[(m_1(S,\tau_1) - m_0(S,\tau_1))\begin{pmatrix} \frac{\pi}{f_0(q_0(\tau_1))} + \frac{1-\pi}{f_1(q_1(\tau_1))} \\ \frac{1-\pi}{\pi f_1(q_1(\tau_1))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau_1))} \end{pmatrix} +q(\tau_1)\frac{f_1(q_1(\tau_1)|S)}{f_1(q_1(\tau_1))}\begin{pmatrix} \pi \\ 1 \end{pmatrix} \\ & + q(\tau_1)\frac{f_0(q_0(\tau_1)|S)}{f_0(q_0(\tau_1))}\begin{pmatrix} 1-\pi \\ -1 \end{pmatrix}\biggr] \times \biggl[(m_1(S,\tau_2) - m_0(S,\tau_2))\begin{pmatrix} \frac{\pi}{f_0(q_0(\tau_2))} + \frac{1-\pi}{f_1(q_1(\tau_2))} \\ \frac{1-\pi}{\pi f_1(q_1(\tau_2))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau_2))} \end{pmatrix} \\ & +q(\tau_2)\frac{f_1(q_1(\tau_2)|S)}{f_1(q_1(\tau_2))}\begin{pmatrix} \pi \\ 1 \end{pmatrix} + q(\tau_2)\frac{f_0(q_0(\tau_2)|S)}{f_0(q_0(\tau_2))}\begin{pmatrix} 1-\pi \\ -1 \end{pmatrix}\biggr] \biggr\}\\ & + \biggl\{\mathbb{E}\biggl[\frac{m_1(S,\tau_1)}{f_1(q_1(\tau_1))}\begin{pmatrix} \pi \\ 1 \end{pmatrix} + \frac{m_0(S,\tau_1)}{f_0(q_0(\tau_1))}\begin{pmatrix} 1-\pi \\ -1 \end{pmatrix} \biggr] \biggl[\frac{m_1(S,\tau_2)}{f_1(q_1(\tau_2))}\begin{pmatrix} \pi \\ 1 \end{pmatrix} + \frac{m_0(S,\tau_2)}{f_0(q_0(\tau_2))}\begin{pmatrix} 1-\pi \\ -1 \end{pmatrix} \biggr]' \biggr\}. \end{align*} By checking the $(2,2)$-element of $\Sigma(\tau_1,\tau_2)$, we have \begin{align*} & \Sigma_{sfe}(\tau_1,\tau_2) \\ = &\frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2)}{\pi f_1(q_1(\tau_1))f_1(q_1(\tau_2))} + \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)}{(1-\pi) f_0(q_0(\tau_1))f_0(q_0(\tau_2))} \\ & + \mathbb{E}\gamma(S)\biggl[(m_1(S,\tau_1) - m_0(S,\tau_1))\left(\frac{1-\pi}{\pi f_1(q_1(\tau_1))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau_1))}\right) + q(\tau_1)\left(\frac{f_1(q(\tau_1)|S)}{f_1(q_1(\tau_1))} -\frac{f_0(q(\tau_1)|S)}{f_0(q_0(\tau_1))}\right)\biggr] \\ &\times \biggl[(m_1(S,\tau_2) - m_0(S,\tau_2))\left(\frac{1-\pi}{\pi f_1(q_1(\tau_2))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau_2))}\right) + q(\tau_2)\left(\frac{f_1(q(\tau_2)|S)}{f_1(q_2(\tau_2))} -\frac{f_0(q(\tau_2)|S)}{f_0(q_0(\tau_2))}\right)\biggr] \\ & +\mathbb{E}\biggl[\frac{m_1(S,\tau_1)}{f_1(q_1(\tau_1))} - \frac{m_0(S,\tau_1)}{f_0(q_0(\tau_1))}\biggr] \biggl[\frac{m_1(S,\tau_2)}{f_1(q_1(\tau_2))} - \frac{m_0(S,\tau_2)}{f_0(q_0(\tau_2))}\biggr]. 
\end{align*} \section{Proof of Theorem \ref{thm:sfeb}} \label{sec:proofsfeb} Note that \begin{align*} \sqrt{n}(\hat{\beta}_{sfe}^w(\tau)-\tilde{\beta}(\tau)) = \argmin_u L_{sfe,n}^w(u,\tau), \end{align*} where \begin{align*} L_{sfe,n}^w(u,\tau) = \sum_{i=1}^n \xi_{i}\biggl[\rho_\tau(Y_i - \dot{\tilde{A}}_i^{w \prime}(\tilde{\beta}(\tau) + \frac{u}{\sqrt{n}})) - \rho_\tau(Y_i - \breve{A}_i^{\prime}\tilde{\beta}(\tau)) \biggr], \end{align*} $\dot{\tilde{A}}_i^w = (1,\tilde{A}_i^w)'$, $\tilde{A}_i^w = A_i - \hat{\pi}^w(S_i)$, and $$\hat{\pi}^w(s) = \frac{\sum_{i =1}^n \xi_i A_i1\{S_i = s\}}{\sum_{i =1}^n \xi_i 1\{S_i = s\}}.$$ Similar to the proof of Theorem \ref{thm:sfe}, we divide the proof into two steps. In the first step, we show that there exists $$g_{sfe,n}^w(u,\tau) = - u'W_{sfe,n}^w(\tau) + \frac{1}{2}u'Q_{sfe}(\tau)u$$ and $h_{sfe,n}^w(\tau)$ independent of $u$ such that for each $u$ \begin{align*} \sup_{\tau \in \Upsilon}|L^w_{sfe,n}(u,\tau) - g_{sfe,n}^w(u,\tau) - h_{sfe,n}^w(\tau)|\convP 0. \end{align*} In addition, we will show that $\sup_{\tau \in \Upsilon}||W_{sfe,n}^w(\tau)|| = O_p(1)$. Then, by \citet[Theorem 2]{K09}, we have \begin{align*} \sqrt{n}(\hat{\beta}^w_{sfe}(\tau)-\tilde{\beta}(\tau)) = [Q_{sfe}(\tau)]^{-1}W_{sfe,n}^w(\tau) + R_{sfe,n}^w(\tau), \end{align*} where \begin{align*} \sup_{\tau \in \Upsilon}||R_{sfe,n}^w(\tau)|| = o_p(1). \end{align*} In the second step, we show that, conditionally on data, \begin{align*} \sqrt{n}(\hat{\beta}^w_{sfe,1}(\tau)-\hat{\beta}_{sfe,1}(\tau)) \rightsquigarrow \tilde{\mathcal{B}}_{sfe}(\tau). \end{align*} \textbf{Step 1.} Following Step 1 in the proof of Theorem \ref{thm:sfe}, we have \begin{align*} L^w_{sfe,n}(u,\tau) \equiv - L_{1,n}^w(u,\tau) + L_{2,n}^w(u,\tau), \end{align*} where \begin{align*} & L_{1,n}^w(u,\tau) \\ = & \sum_{i =1}^n \sum_{s \in \mathcal{S}}\xi_iA_i 1\{S_i=s\}\left(\frac{u_0}{\sqrt{n}} + (1 - \hat{\pi}^w(s))\frac{u_1}{\sqrt{n}} + (\pi - \hat{\pi}^w(s))q(\tau)\right)(\tau - 1\{Y_i \leq q_1(\tau)\}) \\ & + \sum_{i =1}^n \sum_{s \in \mathcal{S}}\xi_i(1-A_i)1\{S_i=s\}\left(\frac{u_0}{\sqrt{n}}-\hat{\pi}^w(s)\frac{u_1}{\sqrt{n}} + (\pi - \hat{\pi}^w(s))q(\tau)\right)(\tau - 1\{Y_i \leq q_0(\tau)\}) \\ \equiv & L_{1,1,n}^w(u,\tau) + L_{1,0,n}^w(u,\tau), \end{align*} \begin{align*} & L_{2,n}^w(u,\tau) \\ = & \sum_{s \in \mathcal{S}} \sum_{i =1}^n \xi_iA_i 1\{S_i=s\}\int_0^{\frac{u'\iota_1}{\sqrt{n}} - \frac{E_n^w(s)}{\sqrt{n}}\left(q(\tau)+\frac{u_1}{\sqrt{n}}\right) }\left(1\{Y_i \leq q_1(\tau)+v \} - 1\{Y_i \leq q_1(\tau)\}\right)dv \\ & + \sum_{s \in \mathcal{S}} \sum_{i =1}^n \xi_i(1-A_i) 1\{S_i=s\}\int_0^{\frac{u'\iota_0}{\sqrt{n}} - \frac{E_n^w(s)}{\sqrt{n}}\left(q(\tau)+\frac{u_1}{\sqrt{n}}\right) }\left(1\{Y_i \leq q_0(\tau)+v \} - 1\{Y_i \leq q_0(\tau)\}\right)dv \\ \equiv & L_{2,1,n}^w(u,\tau) + L_{2,0,n}^w(u,\tau), \end{align*} and $E_n^w(s) = \sqrt{n}(\hat{\pi}^w(s) - \pi)$. \textbf{Step 1.1.} Recall that $\iota_1 = (1,1-\pi)'$ and $\iota_0=(1,-\pi)'$. In addition, denote $\hat{\pi}^w(s) - \pi = \frac{D_n^w(s)}{n^w(s)}$, where \begin{align*} D_n^w(s) = \sum_{i =1}^n \xi_i(A_i-\pi)1\{S_i = s\} \quad \text{and} \quad n^w(s) = \sum_{i =1}^n \xi_i1\{S_i = s\}. 
\end{align*} Then, we have \begin{align} \label{eq:l11nc} & L_{1,1,n}^w(u,\tau) \notag \\ = & \sum_{s \in \mathcal{S}} \frac{u'\iota_1}{\sqrt{n}} \sum_{i=1}^n \xi_i \left[A_i 1\{S_i =s\}\eta_{i,1}(s,\tau)+ \pi1\{S_i=s\}m_1(s,\tau)\right] + \sum_{s \in \mathcal{S}}\frac{u'\iota_2 D_n^w(s)m_1(s,\tau)}{\sqrt{n}} \notag \\ &+h_{1,1}^w(\tau) + R_{sfe,1,1}^w(u,\tau), \end{align} where $\eta_{i,1}(s,\tau) = (\tau - 1\{Y_i(1) \leq q_1(\tau)\}) - m_1(s,\tau)$, \begin{align*} h_{1,1}^w(\tau) = \sum_{s\in \mathcal{S}}(\pi - \hat{\pi}^w(s))q(\tau)\left(\sum_{i =1}^n\xi_i A_i 1\{S_i=s\}(\tau - 1\{Y_i \leq q_1(\tau)\})\right), \end{align*} and \begin{align} \label{eq:Rsfe11c} R_{sfe,1,1}^w(u,\tau)= - \sum_{s \in \mathcal{S}}\frac{u_1D_n^w(s)}{\sqrt{n}n^w(s)}\left\{\sum_{i =1}^n \xi_i\left[A_i 1\{S_i=s\}\eta_{i,1}(s,\tau)+(A_i-\pi)1\{S_i=s\}m_1(s,\tau)\right] \right\}. \end{align} By Lemma \ref{lem:Rsfe11c}, we have \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,1,1}^w(u,\tau)| = o_p(1). \end{align*} Similarly, we have \begin{align} \label{eq:l10nc} & L_{1,0,n}^w(u,\tau) \notag \\ = & \sum_{s \in \mathcal{S}} \sum_{i=1}^n \xi_i \biggl\{\frac{u'\iota_0}{\sqrt{n}}\left[(1-A_i) 1\{S_i =s\}\eta_{i,0}(s,\tau)+ (1-\pi)1\{S_i=s\}m_0(s,\tau)\right] - \frac{u'\iota_2}{\sqrt{n}}(A_i - \pi)1\{S_i = s\}m_0(s,\tau)\biggr\} \notag \\ &+h_{1,0}^w(\tau) + R_{sfe,1,0}^w(u,\tau), \end{align} where \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,1,0}^w(u,\tau)| = o_p(1). \end{align*} Combining \eqref{eq:l11nc} and \eqref{eq:l10nc}, we have \begin{align*} & L^w_{1,n}(u,\tau) \\ = & \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}}\sum_{i =1}^n \xi_i\biggl[u'\iota_1A_i1\{S_i = s\}\eta_{i,1}(s,\tau) + u'\iota_0(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau) \\ & + u'\iota_2 (A_i - \pi)1\{S_i = s\}(m_1(s,\tau) - m_0(s,\tau)) + 1\{S_i = s\}(u'\iota_1 \pi m_1(s,\tau) + u'\iota_0(1-\pi)m_0(s,\tau)) \biggr] \\ & +R_{sfe,1,1}^w(u,\tau)+R_{sfe,1,0}^w(u,\tau) + h_{1,1}^w(\tau) +h_{1,0}^w(\tau). \end{align*} Furthermore, by Lemma \ref{lem:L2c}, we have \begin{align} \label{eq:l21nc} L^w_{2,1,n}(u,\tau) = \frac{\pi f_1(q_1(\tau))}{2}(u'\iota_1)^2 - \sum_{s \in \mathcal{S}}f_1(q_1(\tau)|s)\frac{\pi D_n^w(s) u'\iota_1}{\sqrt{n}}q(\tau) + h_{2,1}^w(\tau) + R_{sfe,2,1}^w(u,\tau) \end{align} and \begin{align} \label{eq:l20nc} L^w_{2,0,n}(u,\tau) = \frac{(1-\pi) f_0(q_0(\tau))}{2}(u'\iota_0)^2 - \sum_{s \in \mathcal{S}}f_0(q_0(\tau)|s)\frac{(1-\pi) D_n^w(s) u'\iota_0}{\sqrt{n}}q(\tau) + h_{2,0}^w(\tau) + R_{sfe,2,0}^w(u,\tau), \end{align} where \begin{align*} h_{2,1}^w(\tau) = \sum_{s \in \mathcal{S}} \frac{\pi f_1(q_1(\tau)|s)}{2}p(s)(E_n^w(s))^2q^2(\tau), \end{align*} \begin{align*} h_{2,0}^w(\tau) = \sum_{s \in \mathcal{S}} \frac{(1-\pi) f_0(q_0(\tau)|s)}{2}p(s)(E_n^w(s))^2q^2(\tau), \end{align*} \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,2,1}^w(u,\tau)| = o_p(1), \end{align*} and \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,2,0}^w(u,\tau)| = o_p(1). \end{align*} Therefore, \begin{align*} L_{2,n}^w(u,\tau) = & \frac{1}{2}u'Q_{sfe}(\tau)u - \sum_{s \in \mathcal{S}}q(\tau)\left[f_1(q_1(\tau)|s)\pi u'\iota_1 + f_0(q_0(\tau)|s)(1-\pi) u'\iota_0 \right]\frac{D_n^w(s)}{\sqrt{n}} \\ & + R_{sfe,2,1}^w(u,\tau) + R_{sfe,2,0}^w(u,\tau) + h_{2,1}^w(\tau)+h_{2,0}^w(\tau).
\end{align*} Combining \eqref{eq:l11nc}, \eqref{eq:l10nc}, \eqref{eq:l21nc}, and \eqref{eq:l20nc}, we have \begin{align*} L_{sfe,n}^w(u,\tau) = - u'W_{sfe,n}^w(\tau) + \frac{1}{2}u'Q_{sfe}u + \tilde{R}_{sfe,n}^w(u,\tau) + h_{sfe,n}^w(\tau), \end{align*} where \begin{align*} & W_{sfe,n}^w(\tau) \\ = & \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n \xi_i \biggl[\iota_1A_i 1\{S_i=s\}\eta_{i,1}(s,\tau) + \iota_0(1-A_i)1\{S_i=s\}\eta_{i,0}(s,\tau)\biggr] \\ & + \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}}\sum_{i =1}^n \xi_i\biggl\{\iota_2(m_1(s,\tau)- m_0(s,\tau)) + q(\tau)\biggl[f_1(q_1(\tau)|s)\pi \iota_1 + f_0(q_0(\tau)|s)(1-\pi)\iota_0 \biggr] \biggr\}\\ &\times(A_i - \pi)1\{S_i = s\} + \frac{1}{\sqrt{n}}\sum_{i=1}^n\xi_i(\iota_1 \pi m_1(S_i,\tau) + \iota_0(1-\pi)m_0(S_i,\tau)), \end{align*} \begin{align*} h_{sfe,n}^w(\tau) = h_{1,1}^w(\tau)+h_{1,0}^w(\tau)+h_{2,1}^w(\tau)+h_{2,0}^w(\tau), \end{align*} and $$\sup_{\tau \in \Upsilon}|\tilde{R}_{sfe,n}^w(u,\tau)| = o_p(1).$$ In addition, by Lemma \ref{lem:wboot}, $\sup_{\tau \in \Upsilon}||W_{sfe,n}^w(\tau)|| = O_p(1)$. Then, by \citet[Theorem 2]{K09}, we have \begin{align*} \sqrt{n}(\hat{\beta}^w_{sfe}(\tau)-\tilde{\beta}(\tau)) = [Q_{sfe}(\tau)]^{-1}W_{sfe,n}^w(\tau) + R_{sfe,n}^w(\tau), \end{align*} where \begin{align*} \sup_{\tau \in \Upsilon}||R_{sfe,n}^w(\tau)|| = o_p(1). \end{align*} This concludes Step 1. \textbf{Step 2.} We now focus on the second element of $\hat{\beta}^w_{sfe}(\tau)$. From Step 1, we know that \begin{align*} & \sqrt{n}(\hat{\beta}^w_{sfe,1}(\tau) - q(\tau)) = \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n \xi_i \mathcal{J}_i(s,\tau) + R_{sfe,n,1}^w(\tau), \end{align*} where \begin{align*} \mathcal{J}_i(s,\tau) = & \left[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\right] \\ & + \biggl\{\left(\frac{1-\pi}{\pi f_1(q_1(\tau))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau))}\right)(m_1(s,\tau)- m_0(s,\tau)) \\ & + q(\tau)\biggl[\frac{f_1(q_1(\tau)|s)}{f_1(q_1(\tau))} - \frac{f_0(q_0(\tau)|s)}{f_0(q_0(\tau))} \biggr] \biggr\}(A_i - \pi)1\{S_i = s\} \\ & + \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)1\{S_i = s\} \end{align*} and \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,n,1}^w(\tau)| = o_p(1). \end{align*} By \eqref{eq:wsfe}, we have \begin{align*} & \sqrt{n}(\hat{\beta}_{sfe,1}(\tau) - q(\tau)) = \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n \mathcal{J}_i(s,\tau) + R_{sfe,n,1}(\tau), \end{align*} where \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,n,1}(\tau)|= o_p(1). \end{align*} Taking the difference of the above two equations, we have \begin{align*} \sqrt{n}(\hat{\beta}^w_{sfe,1}(\tau) - \hat{\beta}_{sfe,1}(\tau)) = \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n (\xi_i-1) \mathcal{J}_i(s,\tau) + R^w(\tau), \end{align*} where \begin{align*} \sup_{\tau \in \Upsilon}|R^w(\tau)| = o_p(1).
\end{align*} Lemma \ref{lem:Qsfec} shows that, conditionally on data, \begin{align*} \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n (\xi_i-1) \mathcal{J}_i(s,\tau) \rightsquigarrow \tilde{\mathcal{B}}_{sfe}(\tau), \end{align*} where $\tilde{\mathcal{B}}_{sfe}(\tau)$ is a Gaussian process with covariance kernel \begin{align} \label{eq:sigma_sfe_tilde} & \tilde{\Sigma}_{sfe}(\tau_1,\tau_2) \notag\\ = &\frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2)}{\pi f_1(q_1(\tau_1))f_1(q_1(\tau_2))} + \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)}{(1-\pi) f_0(q_0(\tau_1))f_0(q_0(\tau_2))} \notag\\ & + \mathbb{E}\pi(1-\pi)\biggl[(m_1(S,\tau_1) - m_0(S,\tau_1))\left(\frac{1-\pi}{\pi f_1(q_1(\tau_1))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau_1))}\right) \notag\\ & + q(\tau_1)\left(\frac{f_1(q(\tau_1)|S)}{f_1(q_1(\tau_1))} -\frac{f_0(q(\tau_1)|S)}{f_0(q_0(\tau_1))}\right)\biggr] \notag\\ &\times \biggl[(m_1(S,\tau_2) - m_0(S,\tau_2))\left(\frac{1-\pi}{\pi f_1(q_1(\tau_2))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau_2))}\right) + q(\tau_2)\left(\frac{f_1(q(\tau_2)|S)}{f_1(q_2(\tau_2))} -\frac{f_0(q(\tau_2)|S)}{f_0(q_0(\tau_2))}\right)\biggr] \notag\\ & +\mathbb{E}\biggl[\frac{m_1(S,\tau_1)}{f_1(q_1(\tau_1))} - \frac{m_0(S,\tau_1)}{f_0(q_0(\tau_1))}\biggr] \biggl[\frac{m_1(S,\tau_2)}{f_1(q_1(\tau_2))} - \frac{m_0(S,\tau_2)}{f_0(q_0(\tau_2))}\biggr]. \end{align} This concludes the proof for the SFE estimator. \section{Proof of Theorem \ref{thm:cabsfe}} \label{sec:proofcabsfe} Recall the definition of $\tilde{\beta}(\tau) = (\tilde{\beta}_0(\tau),\tilde{\beta}_1(\tau))'$ in the proof of Theorem \ref{thm:sfe}. Let $u_0 = \sqrt{n}(b_0-\tilde{\beta}_0(\tau))$, $u_1 = \sqrt{n}(b_1-\tilde{\beta}_1(\tau))$ and $u=(u_0, u_1)' \in \Re^2$. Then, \begin{align*} \sqrt{n}(\hat{\beta}^*_{sfe}(\tau)-\tilde{\beta}(\tau)) = \argmin_u L_{sfe,n}^*(u,\tau), \end{align*} where \begin{align*} L_{sfe,n}^*(u,\tau) = \sum_{i =1}^n \left[ \rho_\tau(Y_i^* - \dot{\tilde{A}}_i^{* \prime}(\tilde{\beta}(\tau) + \frac{u}{\sqrt{n}})) - \rho_\tau(Y_i^* - \breve{A}_i^{* \prime}\tilde{\beta}(\tau))\right] \end{align*} and $\breve{A}_i^* = (1, A_i^* - \pi)'$. Following the proof of Theorem \ref{thm:sfe}, we divide the current proof into two steps. In the first step, we show that there exist $$g_{sfe,n}^*(u,\tau) = - u'W_{sfe,n}^*(\tau) + \frac{1}{2}u'Q_{sfe}(\tau)u$$ and $h_{sfe,n}^*(\tau)$ independent of $u$ such that for each $u$ \begin{align*} \sup_{\tau \in \Upsilon}|L^*_{sfe,n}(u,\tau) - g_{sfe,n}^*(u,\tau) - h_{sfe,n}^*(\tau)|\convP 0. \end{align*} In addition, we show that $\sup_{\tau \in \Upsilon}||W_{sfe,n}^*(\tau)|| = O_p(1)$. Then, by \citet[Theorem 2]{K09}, we have \begin{align*} \sqrt{n}(\hat{\beta}^*_{sfe}(\tau)-\tilde{\beta}(\tau)) = [Q_{sfe}(\tau)]^{-1}W_{sfe,n}^*(\tau) + R_{sfe,n}^*(\tau), \end{align*} where \begin{align*} \sup_{\tau \in \Upsilon}||R_{sfe,n}^*(\tau)|| = o_p(1). \end{align*} In the second step, we show that, conditionally on data, \begin{align*} \sqrt{n}(\hat{\beta}^*_{sfe,1}(\tau)-\hat{q}(\tau)) \rightsquigarrow \mathcal{B}_{sfe}(\tau). 
\end{align*} \textbf{Step 1.} Following Step 1 in the proof of Theorem \ref{thm:sfe}, we have \begin{align*} L^*_{sfe,n}(u,\tau) \equiv - L_{1,n}^*(u,\tau) + L_{2,n}^*(u,\tau), \end{align*} where \begin{align*} & L_{1,n}^*(u,\tau) \\ = & \sum_{i =1}^n \sum_{s \in \mathcal{S}}A_i^*1\{S_i^*=s\}\left(\frac{u_0}{\sqrt{n}} + (1 - \hat{\pi}^*(s))\frac{u_1}{\sqrt{n}} + (\pi - \hat{\pi}^*(s))q(\tau)\right)(\tau - 1\{Y_i^* \leq q_1(\tau)\}) \\ & + \sum_{i =1}^n \sum_{s \in \mathcal{S}}(1-A_i^*)1\{S_i^*=s\}\left(\frac{u_0}{\sqrt{n}}-\hat{\pi}^*(s)\frac{u_1}{\sqrt{n}} + (\pi - \hat{\pi}^*(s))q(\tau)\right)(\tau - 1\{Y_i^* \leq q_0(\tau)\}) \\ \equiv & L_{1,1,n}^*(u,\tau) + L_{1,0,n}^*(u,\tau), \end{align*} \begin{align*} & L_{2,n}^*(u,\tau) \\ = & \sum_{s \in \mathcal{S}} \sum_{i =1}^n A_i^* 1\{S_i^*=s\}\int_0^{\frac{u'\iota_1}{\sqrt{n}} - \frac{E_n^*(s)}{\sqrt{n}}\left(q(\tau)+\frac{u_1}{\sqrt{n}}\right) }\left(1\{Y_i^* \leq q_1(\tau)+v \} - 1\{Y_i^* \leq q_1(\tau)\}\right)dv \\ & + \sum_{s \in \mathcal{S}} \sum_{i =1}^n (1-A_i^*) 1\{S_i^*=s\}\int_0^{\frac{u'\iota_0}{\sqrt{n}} - \frac{E_n^*(s)}{\sqrt{n}}\left(q(\tau)+\frac{u_1}{\sqrt{n}}\right) }\left(1\{Y_i^* \leq q_0(\tau)+v \} - 1\{Y_i^* \leq q_0(\tau)\}\right)dv \\ \equiv & L_{2,1,n}^*(u,\tau) + L_{2,0,n}^*(u,\tau), \end{align*} and $E_n^*(s) = \sqrt{n}(\hat{\pi}^*(s) - \pi)$. \textbf{Step 1.1.} Recall that $\iota_1 = (1,1-\pi)'$ and $\iota_0=(1,-\pi)'$. In addition, $\hat{\pi}^*(s) - \pi = \frac{D_n^*(s)}{n^*(s)}$. Then, \begin{align} \label{eq:l11nstar} & L_{1,1,n}^*(u,\tau) \notag \\ = & \sum_{s \in \mathcal{S}} \frac{u'\iota_1}{\sqrt{n}} \sum_{i=1}^n \left[A_i^*1\{S_i^*=s\}\eta_{i,1}^*(s,\tau)+ (A_i^* - \pi)1\{S_i^*=s\}m_1(s,\tau)+ \pi1\{S_i^*=s\}m_1(s,\tau)\right] \notag \\ & - \sum_{s \in \mathcal{S}}\frac{u_1 D^*_n(s)\pi m_1(s,\tau)}{\sqrt{n}}+h_{1,1}^*(\tau) + R_{sfe,1,1}^*(u,\tau), \end{align} where $\eta_{i,1}^*(s,\tau) = (\tau - 1\{Y_i^*(1) \leq q_1(\tau)\}) - m_1(s,\tau)$, \begin{align*} h_{1,1}^*(\tau) = \sum_{s\in \mathcal{S}}(\pi - \hat{\pi}^*(s))q(\tau)\left(\sum_{i =1}^n A_i^*1\{S_i^*=s\}(\tau - 1\{Y_i^* \leq q_1(\tau)\})\right), \end{align*} and \begin{align} \label{eq:Rsfe11star} R_{sfe,1,1}^*(u,\tau)= - \sum_{s \in \mathcal{S}}\frac{u_1D_n^*(s)}{\sqrt{n}n^*(s)}\left\{\sum_{i =1}^n A_i^*1\{S_i^*=s\}\eta_{i,1}^*(s,\tau)+(A_i^*-\pi)1\{S_i^*=s\}m_1(s,\tau) \right\}. \end{align} Note that \begin{align*} \sup_{s \in \mathcal{S},\tau \in \Upsilon}|\sum_{i =1}^n(A_i^*-\pi)1\{S_i^*=s\}m_1(s,\tau)| = \sup_{s \in \mathcal{S},\tau \in \Upsilon}|D_n^*(s)m_1(s,\tau)| = O_p(\sqrt{n}). \end{align*} In addition, Lemma \ref{lem:Rsfestar} shows \begin{align*} \sup_{s \in \mathcal{S},\tau \in \Upsilon}|\sum_{i =1}^nA_i^*1\{S_i^*=s\}\eta_{i,1}^*(s,\tau)| = O_p(\sqrt{n(s)}). \end{align*} Therefore, we have \begin{align*} & \sup_{ \tau \in \Upsilon}|R_{sfe,1,1}^*(u,\tau)| \\ \leq & \sum_{s \in \mathcal{S}}\sup_{s \in \mathcal{S} }\left|\frac{u_1 D_n^*(s)}{\sqrt{n}n^*(s)}\right|\biggl[\sup_{s \in \mathcal{S},\tau \in \Upsilon}|\sum_{i =1}^nA_i^*1\{S_i^*=s\}\eta_{i,1}^*(s,\tau)| + \sup_{s \in \mathcal{S},\tau \in \Upsilon}|\sum_{i =1}^n(A_i^*-\pi)1\{S_i^*=s\}m_1(s,\tau)|\biggr] \\ = & O_p(1/\sqrt{n}). 
\end{align*} Similarly, we have \begin{align} \label{eq:l10nstar} & L_{1,0,n}^*(u,\tau) \notag \\ = & \sum_{s \in \mathcal{S}} \frac{u'\iota_0}{\sqrt{n}} \sum_{i=1}^n \left[(1-A_i^*)1\{S_i^*=s\}\eta_{i,0}^*(s,\tau) - (A_i^* - \pi)1\{S_i^*=s\}m_0(s,\tau)+ (1-\pi)1\{S_i^*=s\}m_0(s,\tau)\right] \notag \\ & - \sum_{s \in \mathcal{S}}\frac{u_1 D^*_n(s)(1-\pi) m_0(s,\tau)}{\sqrt{n}}+h_{1,0}^*(\tau) + R_{sfe,1,0}^*(u,\tau), \end{align} where \begin{align*} h_{1,0}^*(\tau) = \sum_{s\in \mathcal{S}}(\pi - \hat{\pi}^*(s))q(\tau)\left(\sum_{i =1}^n (1-A_i^*)1\{S_i^*=s\}(\tau - 1\{Y_i^* \leq q_0(\tau)\})\right), \end{align*} and \begin{align} \label{eq:Rsfe10star} R_{sfe,1,0}^*(u,\tau)= - \sum_{s \in \mathcal{S}}\frac{u_1D_n^*(s)}{\sqrt{n}n^*(s)}\left\{\sum_{i =1}^n (1-A_i^*)1\{S_i^*=s\}\eta_{i,0}^*(s,\tau) -(A_i^*-\pi)1\{S_i^*=s\}m_0(s,\tau) \right\} \end{align} such that \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,1,0}^*(u,\tau)| = O_p(1/\sqrt{n}). \end{align*} Therefore, \begin{align*} L_{1,n}^*(u,\tau) = & \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}}\sum_{i =1}^n\left[u'\iota_1 A_i^*1\{S_i^*=s\}\eta_{i,1}^*(s,\tau) + u'\iota_0(1-A_i^*)1\{S_i^*=s\}\eta_{i,0}^*(s,\tau) \right] \\ & + \sum_{s \in \mathcal{S}} u'\iota_2 \frac{D_n^*(s)}{\sqrt{n}}\left(m_1(s,\tau) - m_0(s,\tau)\right) \\ & + \frac{1}{\sqrt{n}}\sum_{i=1}^n(u'\iota_1\pi m_1(S_i^*,\tau) + u'\iota_0(1-\pi)m_0(S_i^*,\tau)) \\ & + R_{sfe,1,1}^*(u,\tau) + R_{sfe,1,0}^*(u,\tau) + h_{1,1}^*(\tau) + h_{1,0}^*(\tau). \end{align*} Furthermore, by Lemma \ref{lem:L2star}, we have \begin{align} \label{eq:l21nstar} L^*_{2,1,n}(u,\tau) = \frac{\pi f_1(q_1(\tau))}{2}(u'\iota_1)^2 - \sum_{s \in \mathcal{S}}f_1(q_1(\tau)|s)\frac{\pi D_n^*(s) u'\iota_1}{\sqrt{n}}q(\tau) + h_{2,1}^*(\tau) + R_{sfe,2,1}^*(u,\tau) \end{align} and \begin{align} \label{eq:l20nstar} L^*_{2,0,n}(u,\tau) = \frac{(1-\pi) f_0(q_0(\tau))}{2}(u'\iota_0)^2 - \sum_{s \in \mathcal{S}}f_0(q_0(\tau)|s)\frac{(1-\pi) D_n^*(s) u'\iota_0}{\sqrt{n}}q(\tau) + h_{2,0}^*(\tau) + R_{sfe,2,0}^*(u,\tau), \end{align} where \begin{align*} h_{2,1}^*(\tau) = \sum_{s \in \mathcal{S}} \frac{\pi f_1(q_1(\tau)|s)}{2}p(s)(E_n^*(s))^2q^2(\tau), \end{align*} \begin{align*} h_{2,0}^*(\tau) = \sum_{s \in \mathcal{S}} \frac{(1-\pi) f_0(q_0(\tau)|s)}{2}p(s)(E_n^*(s))^2q^2(\tau), \end{align*} \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,2,1}^*(u,\tau)| = o_p(1), \end{align*} and \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,2,0}^*(u,\tau)| = o_p(1). \end{align*} Therefore, \begin{align*} L_{2,n}^*(u,\tau) = & \frac{1}{2}u'Q_{sfe}(\tau)u - \sum_{s \in \mathcal{S}}q(\tau)\left[f_1(q_1(\tau)|s)\pi u'\iota_1 + f_0(q_0(\tau)|s)(1-\pi) u'\iota_0 \right]\frac{D_n^*(s)}{\sqrt{n}} \\ & + R_{sfe,2,1}^*(u,\tau) + R_{sfe,2,0}^*(u,\tau) + h_{2,1}^*(\tau)+h_{2,0}^*(\tau).
\end{align*} Combining \eqref{eq:l11nstar}, \eqref{eq:l10nstar}, \eqref{eq:l21nstar}, and \eqref{eq:l20nstar}, we have \begin{align*} L_{sfe,n}^*(u,\tau) = - u'W_{sfe,n}^*(\tau) + \frac{1}{2}u'Q_{sfe}u + \tilde{R}_{sfe,n}^*(u,\tau) + h_{sfe,n}^*(\tau), \end{align*} where \begin{align*} & W_{sfe,n}^*(\tau) \\ = & \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n \biggl[\iota_1A_i^*1\{S_i^*=s\}\eta_{i,1}^*(s,\tau) + \iota_0(1-A_i^*)1\{S_i^*=s\}\eta_{i,0}^*(s,\tau)\biggr] \\ & + \sum_{s \in \mathcal{S}}\biggl\{\iota_2(m_1(s,\tau)- m_0(s,\tau)) + q(\tau)\biggl[f_1(q_1(\tau)|s)\pi \iota_1 + f_0(q_0(\tau)|s)(1-\pi)\iota_0 \biggr] \biggr\}\frac{D_n^*(s)}{\sqrt{n}} \\ & + \frac{1}{\sqrt{n}}\sum_{i=1}^n(\iota_1 \pi m_1(S_i^*,\tau) + \iota_0(1-\pi)m_0(S_i^*,\tau)), \end{align*} \begin{align*} h_{sfe,n}^*(\tau) = h_{1,1}^*(\tau)+h_{1,0}^*(\tau)+h_{2,1}^*(\tau)+h_{2,0}^*(\tau), \end{align*} and $$\sup_{\tau \in \Upsilon}|\tilde{R}_{sfe,n}^*(u,\tau)| = o_p(1).$$ By Lemma \ref{lem:Qsfestar}, $\sup_{\tau \in \Upsilon}|W_{sfe,n}^*(\tau)| = O_p(1)$. Then, by \citet[Theorem 2]{K09}, we have \begin{align*} \sqrt{n}(\hat{\beta}^*_{sfe}(\tau)-\tilde{\beta}(\tau)) = [Q_{sfe}(\tau)]^{-1}W_{sfe,n}^*(\tau) + R_{sfe,n}^*(\tau), \end{align*} where \begin{align*} \sup_{\tau \in \Upsilon}||R_{sfe,n}^*(\tau)|| = o_p(1). \end{align*} This concludes Step 1. \textbf{Step 2.} We now focus on the second element of $\hat{\beta}^*_{sfe}(\tau)$. From Step 1, we know that \begin{align*} & \sqrt{n}(\hat{\beta}^*_{sfe,1}(\tau) - q(\tau)) \\ = & \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n \left[\frac{A_i^*1\{S_i^* = s\}\eta_{i,1}^*(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i^*)1\{S_i^* = s\}\eta_{i,0}^*(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\right] \\ & + \sum_{s \in \mathcal{S}}\biggl\{\left(\frac{1-\pi}{\pi f_1(q_1(\tau))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau))}\right)(m_1(s,\tau)- m_0(s,\tau)) + q(\tau)\biggl[\frac{f_1(q_1(\tau)|s)}{f_1(q_1(\tau))} - \frac{f_0(q_0(\tau)|s)}{f_0(q_0(\tau))} \biggr] \biggr\}\frac{D_n^*(s)}{\sqrt{n}} \\ & + \frac{1}{\sqrt{n}}\sum_{i =1}^n\left(\frac{m_1(S_i^*,\tau)}{f_1(q_1(\tau))} - \frac{m_0(S_i^*,\tau)}{f_0(q_0(\tau))}\right) + R_{sfe,n,1}^*(\tau) \\ \equiv & W_{sfe,n,1}^*(\tau) + W_{sfe,n,2}^*(\tau) +W_{sfe,n,3}^*(\tau) + R_{sfe,n,1}^*(\tau), \end{align*} where \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,n,1}^*(\tau)| = o_p(1). \end{align*} By \eqref{eq:wipw2}, we have \begin{align*} & \sqrt{n}(\hat{q}(\tau) - q(\tau)) \\ = & \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n \left[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\right] \\ & + \frac{1}{\sqrt{n}}\sum_{i =1}^n\left(\frac{m_1(S_i,\tau)}{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau)}{f_0(q_0(\tau))}\right) + R_{ipw,n}(\tau) \\ \equiv & \mathcal{W}_{n,1}(\tau) + \mathcal{W}_{n,2}(\tau) + R_{ipw,n}(\tau), \end{align*} where \begin{align*} \sup_{\tau \in \Upsilon}|R_{ipw,n}(\tau)|= o_p(1). \end{align*} Taking the difference of the above two equations, we have \begin{align} \label{eq:betasfestar-qhat} \sqrt{n}(\hat{\beta}^*_{sfe,1}(\tau) - \hat{q}(\tau)) = (W_{sfe,n,1}^*(\tau)-\mathcal{W}_{n,1}(\tau)) + W_{sfe,n,2}^*(\tau) +(W_{sfe,n,3}^*(\tau)-\mathcal{W}_{n,2}(\tau)) + R^*(\tau), \end{align} where \begin{align*} \sup_{\tau \in \Upsilon}|R^*(\tau)| = o_p(1). 
\end{align*} Lemma \ref{lem:Qsfestar} shows that, conditionally on data, \begin{align*} (W_{sfe,n,1}^*(\tau)-\mathcal{W}_{n,1}(\tau)), W_{sfe,n,2}^*(\tau), (W_{sfe,n,3}^*(\tau)-\mathcal{W}_{n,2}(\tau)) \rightsquigarrow (\mathcal{B}_1(\tau),\mathcal{B}_2(\tau),\mathcal{B}_3(\tau)), \end{align*} where $(\mathcal{B}_1(\tau),\mathcal{B}_2(\tau),\mathcal{B}_3(\tau))$ are three independent Gaussian processes and $\sum_{j=1}^3\mathcal{B}_j(\tau) \stackrel{d}{=} \mathcal{B}_{sfe}(\tau).$ This concludes the proof. \section{Technical Lemmas} \label{sec:lem2} \begin{lem} \label{lem:Qsfe} Let $W_{sfe,n,j}(\tau)$, $j = 1,2,3$ be defined as in \eqref{eq:wsfe}. If Assumptions in Theorem \ref{thm:sfe} hold, then uniformly over $\tau \in \Upsilon$, \begin{align*} (W_{sfe,n,1}(\tau), W_{sfe,n,2}(\tau), W_{sfe,n,3}(\tau)) \rightsquigarrow (\mathcal{B}_{sfe,1}(\tau),\mathcal{B}_{sfe,2}(\tau),\mathcal{B}_{sfe,3}(\tau)), \end{align*} where $(\mathcal{B}_{sfe,1}(\tau),\mathcal{B}_{sfe,2}(\tau),\mathcal{B}_{sfe,3}(\tau))$ are three independent two-dimensional Gaussian process with covariance kernels $\Sigma_{sfe,1}(\tau_1,\tau_2)$, $\Sigma_{sfe,2}(\tau_1,\tau_2)$, and $\Sigma_{sfe,3}(\tau_1,\tau_2)$, respectively. The expressions for the three kernels are derived in the proof below. \end{lem} \begin{proof} The proofs of weak convergence and the independence among $(\mathcal{B}_{sfe,1}(\tau),\mathcal{B}_{sfe,2}(\tau),\mathcal{B}_{sfe,3}(\tau))$ are similar to that in Lemma \ref{lem:Q}, and thus, are omitted. In the following, we focus on deriving the covariance kernels. First, similar to the argument in the proof of Lemma \ref{lem:Q}, \begin{align*} W_{sfe,n,1}(\tau) \stackrel{d}{=} \iota_1 \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\frac{1}{\sqrt{n}}\tilde{\eta}_{i,1}(s,\tau) + \iota_0 \sum_{s \in \mathcal{S}} \sum_{i=N(s)+n_1(s) + 1}^{N(s)+n(s)}\frac{1}{\sqrt{n}}\tilde{\eta}_{i,0}(s,\tau). \end{align*} Therefore, \begin{align*} \Sigma_{1}(\tau_1,\tau_2) = & \pi[\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2)]\iota_1\iota_1' \\ & + (1-\pi)[\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)]\iota_0\iota_0'. \end{align*} For $W_{sfe,n,2}(\tau)$, we have \begin{align*} \Sigma_{2}(\tau_1,\tau_2) = &\mathbb{E}\gamma(S)\biggl[\iota_2(m_1(S,\tau_1) - m_0(S,\tau_1))+q(\tau_1)\biggl(f_1(q_1(\tau_1)|S)\pi \iota_1 + f_0(q_0(\tau_1)|S)(1-\pi) \iota_0\biggr) \biggr] \\ & \times \biggl[\iota_2(m_1(S,\tau_2) - m_0(S,\tau_2))+q(\tau_2)\biggl(f_1(q_1(\tau_2)|S)\pi \iota_1 + f_0(q_0(\tau_2)|S)(1-\pi) \iota_0\biggr) \biggr]'. \end{align*} Next, we have \begin{align*} \Sigma_{3}(\tau_1,\tau_2) = \mathbb{E}(\iota_1\pi m_1(S,\tau_1) + \iota_0(1-\pi)m_0(S,\tau_1))(\iota_1\pi m_1(S,\tau_2) + \iota_0(1-\pi)m_0(S,\tau_2))'. \end{align*} In addition, \begin{align*} [Q_{sfe}(\tau)]^{-1} = \begin{pmatrix} \frac{1-\pi}{f_0(q_0(\tau))} + \frac{\pi}{f_1(q_1(\tau))} & \frac{1}{f_1(q_1(\tau))}-\frac{1}{f_0(q_0(\tau))}\\ \frac{1}{f_1(q_1(\tau))}-\frac{1}{f_0(q_0(\tau))} & \frac{1}{(1-\pi)f_0(q_0(\tau))} + \frac{1}{\pi f_1(q_1(\tau))} \end{pmatrix}. 
\end{align*} Therefore, \begin{align*} & \Sigma(\tau_1,\tau_2) \\ = & \biggl\{ \frac{1}{\pi f_1(q_1(\tau_1))f_1(q_1(\tau_2))}\left[\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2)\right]\begin{pmatrix} \pi^2 & \pi \\ \pi & 1 \end{pmatrix} \\ & + \frac{1}{(1-\pi) f_0(q_0(\tau_1))f_0(q_0(\tau_2))}\left[\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)\right]\begin{pmatrix} (1-\pi)^2 & \pi-1 \\ \pi-1 & 1 \end{pmatrix} \biggr\}\\ & + \biggl\{\mathbb{E}\gamma(S)\biggl[(m_1(S,\tau_1) - m_0(S,\tau_1))\begin{pmatrix} \frac{\pi}{f_0(q_0(\tau_1))} + \frac{1-\pi}{f_1(q_1(\tau_1))} \\ \frac{1-\pi}{\pi f_1(q_1(\tau_1))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau_1))} \end{pmatrix} +q(\tau_1)\frac{f_1(q_1(\tau_1)|S)}{f_1(q_1(\tau_1))}\begin{pmatrix} \pi \\ 1 \end{pmatrix} \\ & + q(\tau_1)\frac{f_0(q_0(\tau_1)|S)}{f_0(q_0(\tau_1))}\begin{pmatrix} 1-\pi \\ -1 \end{pmatrix}\biggr] \times \biggl[(m_1(S,\tau_2) - m_0(S,\tau_2))\begin{pmatrix} \frac{\pi}{f_0(q_0(\tau_2))} + \frac{1-\pi}{f_1(q_1(\tau_2))} \\ \frac{1-\pi}{\pi f_1(q_1(\tau_2))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau_2))} \end{pmatrix} \\ & +q(\tau_2)\frac{f_1(q_1(\tau_2)|S)}{f_1(q_1(\tau_2))}\begin{pmatrix} \pi \\ 1 \end{pmatrix} + q(\tau_2)\frac{f_0(q_0(\tau_2)|S)}{f_0(q_0(\tau_2))}\begin{pmatrix} 1-\pi \\ -1 \end{pmatrix}\biggr] \biggr\}\\ & + \biggl\{\mathbb{E}\biggl[\frac{m_1(S,\tau_1)}{f_1(q_1(\tau_1))}\begin{pmatrix} \pi \\ 1 \end{pmatrix} + \frac{m_0(S,\tau_1)}{f_0(q_0(\tau_1))}\begin{pmatrix} 1-\pi \\ -1 \end{pmatrix} \biggr] \biggl[\frac{m_1(S,\tau_2)}{f_1(q_1(\tau_2))}\begin{pmatrix} \pi \\ 1 \end{pmatrix} + \frac{m_0(S,\tau_2)}{f_0(q_0(\tau_2))}\begin{pmatrix} 1-\pi \\ -1 \end{pmatrix} \biggr]' \biggr\}. \end{align*} \end{proof} \begin{lem} \label{lem:Rsfe11c} Recall the definition of $R^w_{sfe,1,1}(u,\tau)$ in \eqref{eq:Rsfe11c}. If Assumptions \ref{ass:assignment1} and \ref{ass:tau} hold, then \begin{align*} \sup_{\tau \in \Upsilon}|R^w_{sfe,1,1}(u,\tau)| = o_p(1). \end{align*} \end{lem} \begin{proof} We divide the proof into two steps. In the first step, we show that $\sup_{ s \in \mathcal{S}}|D_n^w(s)| = O_p(\sqrt{n})$. In the second step, we show that \begin{align} \label{eq:Mn} \sup_{\tau \in \Upsilon,s \in \mathcal{S}}|\sum_{i =1}^n \xi_iA_i 1\{S_i = s\}\eta_{i,1}(s,\tau)| = O_p(\sqrt{n}). \end{align} Then, \begin{align*} & \sup_{\tau \in \Upsilon}|R^w_{sfe,1,1}(u,\tau)| \\ \leq & \sum_{s \in \mathcal{S}}\frac{|u_1|}{n^w(s)} \sup_{ s \in \mathcal{S}}\left|\frac{D_n^w(s)}{\sqrt{n}}\right|\biggl[\sup_{\tau \in \Upsilon, s \in \mathcal{S}}\biggl|\sum_{i =1}^n \xi_iA_i 1\{S_i = s\}\eta_{i,1}(s,\tau)\biggr| + \sup_{ s \in \mathcal{S}}|D_n^w(s)| \biggr] \\ = & O_p(1/\sqrt{n}), \end{align*} as $n^w(s)/n \convP p(s) > 0$. \textbf{Step 1.} Because \begin{align*} \sup_{s \in \mathcal{S} }|D_n(s)| = O_p(\sqrt{n}), \end{align*} we only need to bound the difference $D_n^w(s) - D_n(s)$. Note that \begin{align} \label{eq:Dnc} &n(s)^{-1/2}D_n^w(s)-n(s)^{-1/2}D_n(s)= n^{-1/2}\sum_{i=1}^n(\xi_i-1)(A_i-\pi)1\{S_i=s\}. \end{align} We aim to prove that, if $n(s) \rightarrow \infty$ and $D_n(s)/n(s) = o_p(1)$, then conditionally on data, for $s \in \mathcal{S}$, \begin{align} \label{eq:clt} n(s)^{-1/2}\sum_{i=1}^n(\xi_i-1)(A_i-\pi)1\{S_i =s\} \rightsquigarrow N(0,\pi(1-\pi)) \end{align} and they are independent across $s\in\mathcal{S}$. The independence is straightforward because \begin{align*} \frac{1}{n(s)}\sum_{i =1}^n (\xi_i-1)^2(A_i-\pi)^21\{S_i =s\}1\{S_i =s'\} = 0 \quad \text{for} \quad s \neq s'. 
\end{align*} For the limiting distribution, let $\mathcal{D}_n = \{Y_i,A_i,S_i\}_{i=1}^n$ denote data. According to the Lindeberg-Feller central limit theorem, \eqref{eq:clt} holds because (1) \begin{align*} n(s)^{-1}\sum_{i=1}^n\mathbb{E}[(\xi_i-1)^2(A_i-\pi)^21\{S_i=s\}|\mathcal{D}_n]=&n(s)^{-1}\sum_{i=1}^n(A_i-2A_i\pi+\pi^2)1\{S_i=s\}\\ =&n(s)^{-1}\sum_{i=1}^n(A_i-\pi-2(A_i-\pi)\pi+\pi-\pi^2)1\{S_i=s\}\\ =&\frac{1-2\pi}{n(s)}D_n(s)+\pi(1-\pi)\\ \convP&\pi(1-\pi), \end{align*} and (2) for every $\varepsilon>0$, \begin{align*} &n(s)^{-1}\sum_{i=1}^n(A_i- \pi)^21\{S_i = s\}\mathbb{E}\left[(\xi_i-1)^21\{|\xi_i-1|(A_i- \pi)^21\{S_i = s\}>\varepsilon\sqrt{n(s)}\}|\mathcal{D}_n\right]\\ \leq &4 \mathbb{E}(\xi_i-1)^2 1\{2|\xi_i - 1| \geq \varepsilon\sqrt{n(s)}\}\rightarrow 0, \end{align*} where we use the fact that $|A_i - \pi|1\{S_i = s\} \leq 2$ and $n(s) \rightarrow \infty$. This concludes the proof of Step 1. \textbf{Step 2.} By the same rearrangement argument and the fact that $\{\xi_i\}_{i=1}^n \perp\!\!\!\perp \mathcal{D}_n$, we have \begin{align*} \sup_{\tau \in \Upsilon, s \in \mathcal{S}}\biggl| \frac{1}{n}\sum_{i=1}^{n} \xi_i A_i1\{S_i = s\}\eta_{i,1}(s,\tau) \biggr| \stackrel{d}{=} \sup_{\tau \in \Upsilon, s \in \mathcal{S}}\biggl| \frac{1}{n}\sum_{i=N(s) + 1}^{N(s)+n_1(s)} \xi_i \tilde{\eta}_{i,1}(s,\tau)\biggr|. \end{align*} Let $\Gamma_{n,1}(s,t,\tau) = \sum_{i =1}^{\lfloor nt \rfloor}\frac{\xi_i \tilde{\eta}_{i,1}(s,\tau)}{\sqrt{n}}$ and $\mathcal{F} = \{\xi_i \tilde{\eta}_{i,1}(s,\tau): \tau \in \Upsilon, s \in \mathcal{S}\}$ with envelope $F_i = C\xi_i$ and $||F_i||_{P,2}<\infty$. By Lemma \ref{lem:S} and \citet[Theorem 2.14.1]{VW96}, for any $\varepsilon>0$, we can choose $M$ sufficiently large such that \begin{align*} \mathbb{P}(\sup_{0 < t \leq 1,\tau \in \Upsilon, s\in \mathcal{S}}|\Gamma_{n,1}(s,t,\tau)| \geq M)\leq &\frac{270 \mathbb{E}\sup_{\tau \in \Upsilon, s\in \mathcal{S}}|\Gamma_{n,1}(s,1,\tau)|}{M} \\ = & \frac{270 \mathbb{E}\sqrt{n}||\mathbb{P}_n - \mathbb{P}||_\mathcal{F}}{M} \lesssim \frac{J(1,\mathcal{F})||F_i||_{P,2}}{M} < \varepsilon. \end{align*} Therefore, \begin{align*} \sup_{0 < t \leq 1,\tau \in \Upsilon, s\in \mathcal{S}}|\Gamma_{n,1}(s,t,\tau)| = O_p(1) \end{align*} and \begin{align} \label{eq:l5} \sup_{\tau \in \Upsilon, s \in \mathcal{S}}\biggl| \frac{1}{n}\sum_{i=1}^{n} \xi_i A_i1\{S_i = s\}\eta_{i,1}(s,\tau) \biggr| \stackrel{d}{=} & \sup_{\tau \in \Upsilon, s \in \mathcal{S}}\frac{1}{\sqrt{n}}\left|\Gamma_{n,1}\left(s,\frac{N(s)+n_1(s)}{n},\tau\right) - \Gamma_{n,1}\left(s,\frac{N(s)}{n},\tau\right)\right| \notag \\ = & O_p(1/\sqrt{n}). \end{align} This concludes the proof of Step 2. \end{proof} \begin{lem} \label{lem:L2c} If Assumptions \ref{ass:assignment1} and \ref{ass:tau} hold, then \ref{eq:l21nc} and \ref{eq:l20nc} hold. \end{lem} \begin{proof} We focus on \eqref{eq:l21nc}. 
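Before expanding \eqref{eq:l21nc}, we record a brief numerical illustration of the conditional central limit theorem \eqref{eq:clt} established in Step 1 above. The sketch below is not part of the formal argument: the sample size, the number of strata, the value of $\pi$, the use of simple random sampling for $A_i$, and the standard exponential multipliers are illustrative assumptions only.
\begin{verbatim}
import numpy as np

# Minimal sketch of the conditional CLT (eq:clt): hold one draw of the data
# (A_i, S_i) fixed, redraw mean-one, variance-one multipliers xi_i, and check
# that n(s)^{-1/2} sum_i (xi_i - 1)(A_i - pi) 1{S_i = s} has variance close to
# pi(1 - pi). All concrete numbers are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n, pi, n_strata, B = 2000, 0.5, 4, 5000

S = rng.integers(0, n_strata, size=n)      # one fixed draw of the strata labels
A = rng.binomial(1, pi, size=n)            # one fixed draw of the assignments

s = 0                                      # a single stratum
idx = (S == s).astype(float)
n_s = int(idx.sum())

draws = np.empty(B)
for b in range(B):
    xi = rng.exponential(1.0, size=n)      # standard exponential multipliers
    draws[b] = ((xi - 1.0) * (A - pi) * idx).sum() / np.sqrt(n_s)

print("simulated variance:", draws.var(), " target pi(1-pi):", pi * (1 - pi))
\end{verbatim}
Conditionally on the data, the variance of each summand is $(A_i-\pi)^2 1\{S_i = s\}$, so the simulated variance should match $n(s)^{-1}\sum_{i: S_i = s}(A_i - \pi)^2$, which equals $\pi(1-\pi)$ exactly when $\pi = 1/2$.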
Note that \begin{align} \label{eq:l21nc'} & L_{2,1,n}^w(u,\tau) \notag \\ = & \sum_{s \in \mathcal{S}} \sum_{i=1}^n \xi_iA_i 1\{S_i = s\} \int_0^{\frac{u'\iota_1}{\sqrt{n}}- \frac{E_n^w(s)}{\sqrt{n}}\left(q(\tau) + \frac{u_1}{\sqrt{n}}\right)}\left(1\{Y_i(1) \leq q_1(\tau)+v\} - 1\{Y_i(1) \leq q_1(\tau)\} \right)dv \notag \\ = & \sum_{s \in \mathcal{S}}\sum_{i=1}^{n} \xi_iA_i 1\{S_i = s\}[\phi_i(u,\tau,s,E_n^w(s)) - \mathbb{E}\phi_i(u,\tau,s,E_n^w(s)|S_i=s)] \notag \\ & + \sum_{s \in \mathcal{S}}\sum_{i=1}^{n} \xi_iA_i 1\{S_i=s\}\mathbb{E}\phi_i(u,\tau,s,E_n^w(s)|S_i=s), \notag \\ \end{align} where by Lemma \ref{lem:Rsfe11c}, $E_n^w(s) = \sqrt{n}(\hat{\pi}^w(s) - \pi) = \frac{n}{n^w(s)}\frac{D_n^w(s)}{\sqrt{n}} = O_p(1)$, \begin{align*} \phi_i(u,\tau,s,e) = \int_0^{\frac{u'\iota_1}{\sqrt{n}}- \frac{e}{\sqrt{n}}\left(q(\tau) + \frac{u_1}{\sqrt{n}}\right)}\left(1\{Y_i(1) \leq q_1(\tau)+v\} - 1\{Y_i(1) \leq q_1(\tau)\} \right)dv, \end{align*} and $ \mathbb{E}\phi_i(u,\tau,s,E_n^w(s)|S_i=s)$ is interpreted as $ \mathbb{E}(\phi_i(u,\tau,s,e)|S_i=s)$ with $e$ being evaluated at $E_n^w(s)$. For the first term on the RHS of \eqref{eq:l21nc'}, by the rearrangement argument in Lemma \ref{lem:Q}, we have \begin{align*} & \sum_{s \in \mathcal{S}}\sum_{i=1}^{n}\xi_{i}A_i 1\{S_i = s\}[\phi_i(u,\tau,s,E_n^w(s)) - \mathbb{E}\phi_i(u,\tau,s,E_n^w(s)|S_i=s)] \\ \stackrel{d}{=} & \sum_{s \in \mathcal{S}}\sum_{i=N(s)+1}^{N(s)+n_1(s)}\xi_{i}[\phi_i^s(u,\tau,s,E_n^w(s)) - \mathbb{E}\phi_i^s(u,\tau,s,E_n^w(s))] , \end{align*} where \begin{align*} \phi_i^s(u,\tau,s,e) = \int_0^{\frac{u'\iota_1}{\sqrt{n}}- \frac{e}{\sqrt{n}}\left(q(\tau) + \frac{u_1}{\sqrt{n}}\right)}\left(1\{Y^s_i(1) \leq q_1(\tau)+v\} - 1\{Y^s_i(1) \leq q_1(\tau)\} \right)dv. \end{align*} Similar to \eqref{eq:gammasfe}, we can show that, as $n \rightarrow \infty$, \begin{align} \label{eq:term1xi} \sup_{\tau \in \Upsilon, s\in \mathcal{S}}\left|\sum_{i=N(s)+1}^{N(s)+n_1(s)}\xi_{i}\left[\phi_i^s(u,\tau,s,E_n^w(s)) - \mathbb{E}\phi_i^s(u,\tau,s,E_n^w(s))\right]\right| = o_p(1). \end{align} For the second term in \eqref{eq:l21nc'}, we have \begin{align} \label{eq:l21nc'''} & \sum_{s \in \mathcal{S}}\sum_{i=1}^{n} \xi_i A_i 1\{S_i=s\}\mathbb{E}\phi_i(u,\tau,s,E_n^w(s)|S_i=s) \notag \\ = & \sum_{s \in \mathcal{S}} \frac{\sum_{i=1}^{n} \xi_i\pi 1\{S_i=s\}}{n} n\mathbb{E}\phi^s_i(u,\tau,s,E_n^w(s)) + \sum_{s \in \mathcal{S}}\frac{D_n^w(s)}{n} n\mathbb{E}\phi^s_i(u,\tau,s,E_n^w(s)) \notag \\ = & \sum_{s \in \mathcal{S}} \pi p(s) \left[\frac{f_1(q_1(\tau)|s)}{2}(u'\iota_1 - E_n^w(s)q(\tau))^2 + o_p(1)\right] + \sum_{s \in \mathcal{S}}\frac{D_n^w(s)}{n}\left[\frac{f_1(q_1(\tau)|s)}{2}(u'\iota_1 - E_n^w(s)q(\tau))^2 + o_p(1)\right] \notag \\ = & \frac{\pi f_1(q_1(\tau))}{2}(u'\iota_1)^2 - \sum_{s \in \mathcal{S}}f_1(q_1(\tau)|s)\frac{\pi D_{n}^w(s)u'\iota_1}{\sqrt{n}}q(\tau) + h_{2,1}^w(\tau) + o_p(1), \end{align} where the $o_p(1)$ term holds uniformly over $(\tau,s) \in \Upsilon \times \mathcal{S}$. The second equality holds by the same calculation in \eqref{eq:Gamma_sfe1} and the fact that $\sum_{i =1}^n \xi_i1\{S_i = s\}/n \convP p(s)$. The last inequality holds because $\frac{D_n^w(s)}{n} = o_p(1)$, $E_n^w(s) = \frac{n}{n^w(s)}\frac{D_n^w(s)}{\sqrt{n}} = O_p(1)$, $ \frac{n}{n^w(s)} \convP 1/p(s)$, and \begin{align*} h_{2,1}^w(\tau) = \sum_{s \in \mathcal{S}} \frac{\pi f_1(q_1(\tau)|s)}{2}p(s)(E_n^w(s))^2q^2(\tau). 
\end{align*} Combining \eqref{eq:l21nc'}--\eqref{eq:l21nc'''}, we have \begin{align*} L_{2,1,n}^w(u,\tau) = \frac{\pi f_1(q_1(\tau))}{2}(u'\iota_1)^2 - \sum_{s \in \mathcal{S}}f_1(q_1(\tau)|s)\frac{\pi D_{n}^w(s)u'\iota_1}{\sqrt{n}}q(\tau) + h_{2,1}^w(\tau) + R_{sfe,2,1}^w(u,\tau), \end{align*} where \begin{align*} h_{2,1}^w(\tau) = \sum_{s \in \mathcal{S}} \frac{\pi f_1(q_1(\tau)|s)}{2}p(s)(E_n^w(s))^2q^2(\tau) \end{align*} and \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,2,1}^w(u,\tau)| = o_p(1). \end{align*} This concludes the proof. \end{proof} \begin{lem} \label{lem:wboot} If Assumptions \ref{ass:assignment1} and \ref{ass:tau} hold, then $\sup_{\tau \in \Upsilon}|| W_{sfe,n}^w(\tau) || = O_p(1)$. \end{lem} \begin{proof} It suffices to show that \begin{align} \label{eq:wboot_4} \sup_{\tau \in \Upsilon, s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i =1}^n \xi_i A_i 1\{S_i=s\}\eta_{i,1}(s,\tau)\right| = O_p(1) \end{align} \begin{align} \label{eq:wboot_4'} \sup_{\tau \in \Upsilon, s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i =1}^n \xi_i(1-A_i)1\{S_i=s\}\eta_{i,0}(s,\tau)\right| = O_p(1), \end{align} \begin{align} \label{eq:wboot_5} \sup_{ s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i =1}^n\xi_i(A_i - \pi)1\{S_i = s\}\right| =O_p(1), \end{align} and \begin{align} \label{eq:wboot_6} \sup_{\tau \in \Upsilon}\left\Vert \frac{1}{\sqrt{n}}\sum_{i=1}^n\xi_i(\iota_1 \pi m_1(S_i,\tau) + \iota_0(1-\pi)m_0(S_i,\tau))\right\Vert = O_p(1). \end{align} Note that \eqref{eq:wboot_4} holds by the argument in step 2 in the proof of Lemma \ref{lem:Rsfe11c}, \eqref{eq:wboot_4'} holds similarly, \eqref{eq:wboot_5} holds by \eqref{eq:Dnc} and \eqref{eq:clt}, and \eqref{eq:wboot_6} holds by the usual maximal inequality, e.g., \citet[Theorem 2.14.1]{VW96}. This concludes the proof. \end{proof} \begin{lem} \label{lem:Qsfec} If Assumptions \ref{ass:assignment1} and \ref{ass:tau} hold, then conditionally on data, \begin{align*} \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n (\xi_i-1) \mathcal{J}_i(s,\tau) \rightsquigarrow \tilde{\mathcal{B}}_{sfe}(\tau), \end{align*} where $\tilde{\mathcal{B}}_{sfe}(\tau)$ is a Gaussian process with covariance kernel $\tilde{\Sigma}_{sfe}(\cdot,\cdot)$ defined in \eqref{eq:sigma_sfe_tilde}. \end{lem} \begin{proof} In order to show the weak convergence, we only need to show (1) conditional stochastic equicontinuity and (2) conditional convergence in finite dimension. We divide the proof into two steps accordingly. \textbf{Step 1.} In order to show the conditional stochastic equicontinuity, it suffices to show that, for any $\varepsilon>0$, as $n \rightarrow \infty$ followed by $\delta \rightarrow 0$, \begin{align*} \mathbb{P}_\xi\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_i(s,\tau_2) - \mathcal{J}_i(s,\tau_1))\right| \geq \varepsilon\right) \convP 0, \end{align*} where $\mathbb{P}_\xi(\cdot)$ means that the probability operator is with respect to $\xi_1,\cdots,\xi_n$ and conditional on data. 
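In practice, a conditional probability of the form $\mathbb{P}_\xi(\cdot)$ is approximated by holding the data fixed and redrawing only the multipliers. The following minimal sketch makes this operation concrete; the statistic, the standard exponential multiplier distribution (the choice used for the weighted bootstrap in the simulations), and all numerical values are illustrative assumptions rather than part of the procedure studied here.
\begin{verbatim}
import numpy as np

# Sketch of a Monte Carlo approximation of a conditional probability P_xi(.):
# the array J is a stand-in for the fixed, data-dependent quantities J_i(s,tau)
# in the text; it is held fixed while the multipliers xi are redrawn.
rng = np.random.default_rng(1)
n, B, eps = 500, 2000, 2.0
J = rng.normal(size=n)                 # placeholder for the data-dependent array

def stat(xi, J):
    # |n^{-1/2} sum_i (xi_i - 1) J_i|, the building block of the suprema above
    return abs(((xi - 1.0) * J).sum() / np.sqrt(len(J)))

hits = sum(stat(rng.exponential(1.0, size=n), J) >= eps for _ in range(B))
print("Monte Carlo estimate of P_xi(stat >= eps):", hits / B)
\end{verbatim}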
Note \begin{align*} & \mathbb{E}\mathbb{P}_\xi\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_i(s,\tau_2) - \mathcal{J}_i(s,\tau_1))\right| \geq \varepsilon\right) \\ = & \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_i(s,\tau_2) - \mathcal{J}_i(s,\tau_1))\right| \geq \varepsilon\right)\\ \leq & \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,1}(s,\tau_2) - \mathcal{J}_{i,1}(s,\tau_1))\right| \geq \varepsilon/3\right) \\ & + \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,2}(s,\tau_2) - \mathcal{J}_{i,2}(s,\tau_1))\right| \geq \varepsilon/3\right)\\ & + \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,3}(s,\tau_2) - \mathcal{J}_{i,3}(s,\tau_1))\right| \geq \varepsilon/3\right), \end{align*} where \begin{align*} \mathcal{J}_{i,1}(s,\tau) = \frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}, \end{align*} \begin{align*} \mathcal{J}_{i,2}(s,\tau) = F_1(s,\tau) (A_i - \pi)1\{S_i = s\}, \end{align*} $$F_1(s,\tau) = \left(\frac{1-\pi}{\pi f_1(q_1(\tau))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau))}\right)(m_1(s,\tau)- m_0(s,\tau)) + q(\tau)\biggl[\frac{f_1(q_1(\tau)|s)}{f_1(q_1(\tau))} - \frac{f_0(q_0(\tau)|s)}{f_0(q_0(\tau))} \biggr],$$ \begin{align*} \mathcal{J}_{i,3}(s,\tau) = \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)1\{S_i = s\}. \end{align*} Further note that \begin{align*} \sum_{i =1}^n (\xi_i-1)\mathcal{J}_{i,1}(s,\tau) \stackrel{d}{=} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\frac{(\xi_i-1)\tilde{\eta}_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \sum_{i=N(s) +n_1(s)+ 1}^{N(s)+n(s)}\frac{(\xi_i-1)\tilde{\eta}_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}. \end{align*} By the same argument in Claim (1) in the proof of Lemma \ref{lem:Q}, we have \begin{align*} & \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,1}(s,\tau_2) - \mathcal{J}_{i,1}(s,\tau_1))\right| \geq \varepsilon/3\right) \\ \leq & \frac{3\mathbb{E}\sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,1}(s,\tau_2) - \mathcal{J}_{i,1}(s,\tau_1))\right|}{\varepsilon} \\ \leq & \frac{3\sqrt{c_2\delta\log(\frac{C}{c_1\delta})} + \frac{3C\log(\frac{C}{c_1\delta})}{\sqrt{n}}}{\varepsilon}, \end{align*} where $C$, $c_1< c_2$ are some positive constants that are independent of $(n,\varepsilon,\delta)$. By letting $n\rightarrow \infty$ followed by $\delta \rightarrow 0$, the RHS vanishes. For $\mathcal{J}_{i,2}$, we note that $F_1(s,\tau)$ is Lipschitz in $\tau$.
Therefore, \begin{align*} & \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,2}(s,\tau_2) - \mathcal{J}_{i,2}(s,\tau_1))\right| \geq \varepsilon/3\right) \\ \leq & \sum_{s \in \mathcal{S}}\mathbb{P}\left(C\delta\left|\frac{1}{\sqrt{n}} \sum_{i =1}^n (\xi_i - 1)(A_i - \pi)1\{S_i = s\}\right| \geq \varepsilon/3 \right) \rightarrow 0 \end{align*} as $n \rightarrow \infty$ followed by $\delta \rightarrow 0$, in which we use the fact that, by \eqref{eq:clt}, \begin{align*} \sup_{s \in \mathcal{S} }\left|\frac{1}{\sqrt{n}} \sum_{i =1}^n (\xi_i - 1)(A_i - \pi)1\{S_i = s\}\right| = O_p(1). \end{align*} Last, by the standard maximal inequality (e.g., \citet[Theorem 2.14.1]{VW96}) and the fact that \begin{align*} \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right) \end{align*} is Lipschitz in $\tau$, we have, as $n \rightarrow \infty$ followed by $\delta \rightarrow 0$, \begin{align*} \mathbb{P}\left( \sup_{\tau_1,\tau_2 \in \Upsilon, \tau_1 < \tau_2 < \tau_1+\delta,s \in \mathcal{S}}\left|\frac{1}{\sqrt{n}}\sum_{i=1}^n (\xi_i - 1)(\mathcal{J}_{i,3}(s,\tau_2) - \mathcal{J}_{i,3}(s,\tau_1))\right| \geq \varepsilon/3\right) \rightarrow 0 \end{align*} This concludes the proof of the conditional stochastic equicontinuity. \textbf{Step 2.} We focus on the one-dimension case and aim to show that, conditionally on data, for fixed $\tau \in \Upsilon$, \begin{align*} \frac{1}{\sqrt{n}}\sum_{s \in \mathcal{S}} \sum_{i =1}^n (\xi_i-1) \mathcal{J}_i(s,\tau) \rightsquigarrow \mathcal{N}(0,\tilde{\Sigma}_{sfe}(\tau,\tau)). \end{align*} The finite-dimensional convergence can be established similarly by the Cram\'{e}r-Wold device. In view of Lindeberg-Feller central limit theorem, we only need to show that (1) \begin{align*} \frac{1}{n}\sum_{i =1}^n[\sum_{s \in \mathcal{S}}\mathcal{J}_i(s,\tau)]^2 \convP \zeta_Y^2(\pi,\tau) + \tilde{\xi}_A^{\prime 2}(\pi,\tau) + \xi_S^2(\pi,\tau) \end{align*} and (2) \begin{align*} \frac{1}{n}\sum_{i =1}^n[\sum_{s \in \mathcal{S}}\mathcal{J}_i(s,\tau)]^2 \mathbb{E}_\xi(\xi-1)^21\{|\sum_{s \in \mathcal{S}}(\xi_i - 1)\mathcal{J}_i(s,\tau)| \geq \varepsilon \sqrt{n}\} \rightarrow 0. \end{align*} (2) is obvious as $|\mathcal{J}_i(s,\tau)|$ is bounded and $\max_i|\xi_i-1| \lesssim \log(n)$ as $\xi_i$ is sub-exponential. Next, we focus on (1). 
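The $\log(n)$ bound on $\max_{i \leq n}|\xi_i - 1|$ invoked for condition (2) can also be checked numerically; the following minimal sketch, included as an aside before we verify (1), uses standard exponential multipliers and an arbitrary grid of sample sizes, both illustrative assumptions.
\begin{verbatim}
import numpy as np

# Sketch: for i.i.d. sub-exponential multipliers (here standard exponential),
# max_i |xi_i - 1| grows like log(n), the rate used to dismiss condition (2).
rng = np.random.default_rng(2)
for n in [10**3, 10**4, 10**5, 10**6]:
    xi = rng.exponential(1.0, size=n)
    print(n, "max |xi - 1| =", round(float(np.abs(xi - 1.0).max()), 2),
          " log(n) =", round(float(np.log(n)), 2))
\end{verbatim}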
We have \begin{align*} & \frac{1}{n}\sum_{i =1}^n[\sum_{s \in \mathcal{S}}\mathcal{J}_i(s,\tau)]^2 \\ = & \frac{1}{n}\sum_{i=1}^n\sum_{s \in \mathcal{S}}\biggl\{\biggl[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\biggr] \\ & + F_1(s,\tau)(A_i -\pi)1\{S_i=s\} + \biggl[ \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)1\{S_i = s\}\biggr]\biggr\}^2 \\ \equiv & \sigma_1^2 + \sigma_2^2 + \sigma_3^2 + 2\sigma_{12} + 2\sigma_{13} + 2 \sigma_{23}, \end{align*} where \begin{align*} \sigma_1^2 = \frac{1}{n}\sum_{s \in \mathcal{S}}\sum_{i =1}^n \biggl[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\biggr]^2, \end{align*} \begin{align*} \sigma_2^2 = \frac{1}{n}\sum_{s \in \mathcal{S}}F^2_1(s,\tau)\sum_{i =1}^n (A_i-\pi)^21\{S_i = s\}, \end{align*} \begin{align*} \sigma_3^2 = \frac{1}{n}\sum_{i =1}^n \biggl[ \left(\frac{m_1(S_i,\tau)}{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau)}{f_0(q_0(\tau))}\right)\biggr]^2, \end{align*} \begin{align*} \sigma_{12} = \frac{1}{n}\sum_{i=1}^n\sum_{s \in \mathcal{S}}\biggl[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\biggr]F_1(s,\tau)(A_i - \pi)1\{S_i=s\}, \end{align*} \begin{align*} \sigma_{13} = \frac{1}{n}\sum_{i=1}^n\sum_{s \in \mathcal{S}}\biggl[\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\biggr] \biggl[ \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)\biggr], \end{align*} and \begin{align*} \sigma_{23} = \frac{1}{n}\sum_{i=1}^n\sum_{s \in \mathcal{S}}F_1(s,\tau)(A_i - \pi)1\{S_i=s\}\biggl[ \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)\biggr]. \end{align*} For $\sigma_1^2$, we have \begin{align*} \sigma_1^2 = & \frac{1}{n}\sum_{s \in \mathcal{S}}\sum_{i =1}^n \biggl[\frac{A_i 1\{S_i = s\}\eta^2_{i,1}(s,\tau)}{\pi^2 f^2_1(q_1(\tau))} + \frac{(1-A_i)1\{S_i = s\}\eta^2_{i,0}(s,\tau)}{(1-\pi)^2 f^2_0(q_0(\tau))}\biggr] \\ \stackrel{d}{=} & \frac{1}{n}\sum_{s \in \mathcal{S}} \sum_{i=N(s)+1}^{N(s)+n_1(s)}\frac{\tilde{\eta}^2_{i,1}(s,\tau)}{\pi^2 f^2_1(q_1(\tau))} + \frac{1}{n}\sum_{s \in \mathcal{S}} \sum_{i=N(s)+n_1(s)+1}^{N(s)+n(s)}\frac{\tilde{\eta}^2_{i,0}(s,\tau)}{(1-\pi)^2 f^2_0(q_0(\tau))} \\ \convP & \frac{\tau(1-\tau) - \mathbb{E}m_1^2(S,\tau)}{\pi f_1^2(q_1(\tau))} + \frac{\tau(1-\tau) - \mathbb{E}m_0^2(S,\tau)}{(1-\pi) f_0^2(q_0(\tau))} = \zeta_Y^2(\pi,\tau), \end{align*} where the equality in distribution holds due to the rearrangement argument in Lemma \ref{lem:Q} and the convergence in probability holds due to uniform convergence of the partial sum process. For $\sigma_2^2$, by Assumption \ref{ass:assignment1}, \begin{align*} \sigma_2^2 = \frac{1}{n}\sum_{s \in \mathcal{S}}F_1^2(s,\tau)(D_n(s) - 2\pi D_n(s) + \pi(1-\pi)n(s)) \convP \pi(1-\pi)\mathbb{E}F_1^2(S_i,\tau) = \tilde{\xi}_{A}^{\prime 2}(\pi,\tau). \end{align*} For $\sigma_3^2$, by the law of large numbers, \begin{align*} \sigma_3^2 \convP \mathbb{E}\biggl[ \left(\frac{m_1(S_i,\tau)}{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau)}{f_0(q_0(\tau))}\right)\biggr]^2 = \xi_S^2(\pi,\tau).
\end{align*} For $\sigma_{12}$, we have \begin{align*} \sigma_{12} = & \frac{1}{n}\sum_{s \in \mathcal{S}}(1-\pi)F_1(s,\tau)\sum_{i=1}^n\frac{A_i 1\{S_i = s\}\eta_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{1}{n}\sum_{s \in \mathcal{S}}\pi F_1(s,\tau)\sum_{i=1}^{n}\frac{(1-A_i)1\{S_i = s\}\eta_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))} \\ \stackrel{d}{=} & \frac{1}{n}\sum_{s \in \mathcal{S}}(1-\pi)F_1(s,\tau)\sum_{i=N(s)+1}^{N(s)+n_1(s)}\frac{\tilde{\eta}_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \frac{1}{n}\sum_{s \in \mathcal{S}}\pi F_1(s,\tau)\sum_{i=N(s)+n_1(s)+1}^{N(s)+n(s)}\frac{\tilde{\eta}_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))} \convP 0, \end{align*} where the last convergence holds because by Lemma \ref{lem:Q}, \begin{align*} \frac{1}{n}\sum_{i=N(s)+1}^{N(s)+n_1(s)}\tilde{\eta}_{i,1}(s,\tau) \convP 0 \quad \text{and} \quad \frac{1}{n}\sum_{i=N(s)+n_1(s)+1}^{N(s)+n(s)}\tilde{\eta}_{i,0}(s,\tau) \convP 0. \end{align*} By the same argument, we can show that \begin{align*} \sigma_{13} \convP 0. \end{align*} Last, for $\sigma_{23}$, by Assumption \ref{ass:assignment1}, \begin{align*} \sigma_{23} = \sum_{s \in \mathcal{S}}F_1(s,\tau)\biggl[ \left(\frac{m_1(s,\tau)}{f_1(q_1(\tau))} - \frac{m_0(s,\tau)}{f_0(q_0(\tau))}\right)\biggr]\frac{D_n(s)}{n} \convP 0. \end{align*} Therefore, we have \begin{align*} \frac{1}{n}\sum_{i =1}^n[\sum_{s \in \mathcal{S}}\mathcal{J}_i(s,\tau)]^2 \convP \zeta_Y^2(\pi,\tau) + \tilde{\xi}_A^{\prime 2}(\pi,\tau) + \xi_S^2(\pi,\tau). \end{align*} \end{proof} \begin{lem} \label{lem:L2star} Recall $R_{sfe,2,1}^*(u,\tau)$ and $R_{sfe,2,0}^*(u,\tau)$ defined in \eqref{eq:l21nstar} and \eqref{eq:l20nstar}, respectively. If Assumptions in Theorem \ref{thm:cab} hold, then \eqref{eq:l21nstar} and \eqref{eq:l20nstar} hold and \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,2,1}^*(u,\tau)| = o_p(1) \quad \text{and} \quad \sup_{\tau \in \Upsilon}|R_{sfe,2,0}^*(u,\tau)| = o_p(1). \end{align*} \end{lem} \begin{proof} We focus on \eqref{eq:l21nstar}. Following the definition of $M_{ni}$ in the proof of Lemma \ref{lem:Rsfestar} and the argument in the Step 1.2 of the proof of Theorem \ref{thm:sfe}, we have \begin{align} \label{eq:l21nstar'} & L_{2,1,n}^*(u,\tau) \notag \\ = & \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni}\int_0^{\frac{u'\iota_1}{\sqrt{n}}- \frac{E_n^*(s)}{\sqrt{n}}\left(q(\tau) + \frac{u_1}{\sqrt{n}}\right)}\left(1\{Y_i^s(1) \leq q_1(\tau)+v\} - 1\{Y_i^s(1) \leq q_1(\tau)\} \right)dv \notag \\ = & \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni}\left[\phi_i(u,\tau,s,E_n^*(s)) - \mathbb{E}\phi_i(u,\tau,E_n^*(s)) \right] + \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni}\mathbb{E}\phi_i(u,\tau,s,E_n^*(s)), \end{align} where $E_n^*(s) = \sqrt{n}(\hat{\pi}^*(s) - \pi) = \frac{n}{n^*(s)}\frac{D_n^*(s)}{\sqrt{n}} = O_p(1)$, \begin{align*} \phi_i(u,\tau,s,e) = \int_0^{\frac{u'\iota_1}{\sqrt{n}}- \frac{e}{\sqrt{n}}\left(q(\tau) + \frac{u_1}{\sqrt{n}}\right)}\left(1\{Y_i^s(1) \leq q_1(\tau)+v\} - 1\{Y_i^s(1) \leq q_1(\tau)\} \right)dv, \end{align*} and $ \mathbb{E}\phi_i(u,\tau,s,E_n^*(s))$ is interpreted as $ \mathbb{E}\phi_i(u,\tau,s,e)$ with $e$ being evaluated at $E_n^*(s)$. 
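The integrand defining $\phi_i$ is the second term of Knight's identity, $\rho_\tau(y - t) - \rho_\tau(y) = -t(\tau - 1\{y < 0\}) + \int_0^t \left(1\{y \leq v\} - 1\{y \leq 0\}\right)dv$, the standard device behind the $-L_{1} + L_{2}$ decompositions of the objective functions used above. The following minimal numerical check of the identity, with arbitrary illustrative values of $(y, t, \tau)$, is included only as a sanity check and is not part of the proof.
\begin{verbatim}
import numpy as np

# Numerical check of Knight's identity
#   rho_tau(y - t) - rho_tau(y)
#     = -t*(tau - 1{y < 0}) + int_0^t (1{y <= v} - 1{y <= 0}) dv,
# whose integral term has the same form as phi_i above.
def rho(u, tau):
    return u * (tau - float(u < 0))

def knight_rhs(y, t, tau, grid=200000):
    v = np.linspace(0.0, t, grid)
    integrand = (y <= v).astype(float) - float(y <= 0)
    integral = integrand.mean() * t        # Riemann approximation of int_0^t
    return -t * (tau - float(y < 0)) + integral

rng = np.random.default_rng(3)
for _ in range(5):
    y, t, tau = rng.normal(), rng.normal(), rng.uniform(0.05, 0.95)
    lhs = rho(y - t, tau) - rho(y, tau)
    print(abs(lhs - knight_rhs(y, t, tau)) < 1e-3)   # prints True for each draw
\end{verbatim}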
For the first term on the RHS of \eqref{eq:l21nstar'}, similar to \eqref{eq:Mtilde-M}, we have \begin{align} \label{eq:term1} & \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni}\left[\phi_i(u,\tau,s,E_n^*(s)) - \mathbb{E}\phi_i(u,\tau,s,E_n^*(s)) \right] \notag \\ = & \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\xi_i^s\left[\phi_i(u,\tau,s,E_n^*(s)) - \mathbb{E}\phi_i(u,\tau,s,E_n^*(s)) \right] + \sum_{s \in \mathcal{S}}r_n(u,\tau,s,E_n^*(s)), \end{align} where $\{\xi_i^s\}_{i=1}^n$ is a sequence of i.i.d. Poisson(1) random variables and is independent of everything else, and \begin{align*} r_n(u,\tau,s,e) = \text{sign}(N(n_1(s)) - n_1(s))\sum_{j=1}^\infty\frac{\#I_{n}^j(s)}{\sqrt{n}}\frac{1}{\#I_{n}^j(s)}\sum_{i \in I_n^j(s)}\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]. \end{align*} We aim to show \begin{align} \label{eq:rn} \sup_{|e| \leq M,\tau \in \Upsilon, s\in \mathcal{S}}|r_n(u,\tau,s,e)| = o_p(1), \end{align} Recall that the proof of Lemma \ref{lem:Rsfestar} relies on \eqref{eq:kk} and the fact that $$\mathbb{E}\sup_{n \geq k \geq n_0}\sup_{\tau \in \Upsilon, s \in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^k \tilde{\eta}_{j,1}(s,\tau)\right| \rightarrow 0.$$ Using the same argument and replacing $\tilde{\eta}_{j,1}(s,\tau)$ by $\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]$, in order to show \eqref{eq:rn}, we only need to verify that, as $n \rightarrow \infty$ followed by $n_0 \rightarrow \infty$, \begin{align*} \mathbb{E}\sup_{n \geq k \geq n_0}\sup_{|e| \leq M,\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^k\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]\right| \rightarrow 0 \end{align*} Because $\sup_{|e| \leq M,\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^k\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]\right|$ is bounded as shown below, it suffices to show that, for any $\varepsilon>0$, as $n \rightarrow \infty$ followed by $n_0 \rightarrow \infty$, \begin{align} \label{eq:targetp} \mathbb{P}\left(\sup_{n \geq k \geq n_0}\sup_{|e| \leq M,\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^k\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]\right| \geq \varepsilon \right) \rightarrow 0. \end{align} Define the class of functions $\mathcal{F}_n$ as \begin{align*} \mathcal{F}_n = \{\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]: |e| \leq M,\tau \in \Upsilon, s\in \mathcal{S} \}. \end{align*} Then, $\mathcal{F}_n$ is nested by a VC-class with fixed VC-index. In addition, for fixed $u$, $\mathcal{F}_n$ has a bounded (and independent of $n$) envelope function $$F = |u'\iota_1|+M\left(\max_{\tau \in \Upsilon}|q(\tau)| + \left|u_1\right|\right).$$ Last, define $\mathcal{I}_l = \{2^l,2^l+1,\cdots,2^{l+1}-1\}$. 
Then, \begin{align*} & \mathbb{P}\left(\sup_{n \geq k \geq n_0}\sup_{|e| \leq M,\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^k\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]\right| \geq \varepsilon\right) \\ \leq & \sum_{l=\lfloor \log_2(n_0) \rfloor}^{\lfloor \log_2(n) \rfloor+1}\mathbb{P}\left(\sup_{k \in \mathcal{I}_l}\sup_{|e| \leq M,\tau \in \Upsilon, s\in \mathcal{S}}\left|\frac{1}{k}\sum_{j=1}^k\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]\right| \geq \varepsilon \right) \\ \leq & \sum_{l=\lfloor \log_2(n_0) \rfloor}^{\lfloor \log_2(n) \rfloor+1}\mathbb{P}\left(\sup_{k \leq 2^{l+1}}\sup_{|e| \leq M,\tau \in \Upsilon, s\in \mathcal{S}}\left|\sum_{j=1}^k\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]\right| \geq \varepsilon 2^l \right) \\ \leq & \sum_{l=\lfloor \log_2(n_0) \rfloor}^{\lfloor \log_2(n) \rfloor+1}9\mathbb{P}\left(\sup_{|e| \leq M,\tau \in \Upsilon, s\in \mathcal{S}}\left|\sum_{j=1}^{2^{l+1}}\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]\right| \geq \varepsilon 2^l/30 \right) \\ \leq & \sum_{l=\lfloor \log_2(n_0) \rfloor}^{\lfloor \log_2(n) \rfloor+1}\frac{270 \mathbb{E}\sup_{|e| \leq M,\tau \in \Upsilon, s\in \mathcal{S}}\left|\sum_{j=1}^{2^{l+1}}\sqrt{n}\left[\phi_i(u,\tau,s,e) - \mathbb{E}\phi_i(u,\tau,s,e) \right]\right|}{\varepsilon 2^l} \\ \leq & \sum_{l=\lfloor \log_2(n_0) \rfloor}^{\lfloor \log_2(n) \rfloor+1}\frac{C_1}{\varepsilon 2^{l/2}} \\ \leq & \frac{2C_1}{\varepsilon \sqrt{n_0}} \rightarrow 0, \end{align*} where the first inequality holds by the union bound, the second inequality holds because on $\mathcal{I}_l$, $2^{l+1} \geq k \geq 2^l$, the third inequality follows the same argument in the proof of Theorem \ref{thm:qr}, the fourth inequality is due to the Markov inequality, the fifth inequality follows the standard maximal inequality such as \citet[Theorem 2.14.1]{VW96} and the constant $C_1$ is independent of $(l,\varepsilon,n)$, and the last inequality holds by letting $n \rightarrow \infty$. Because $\varepsilon$ is arbitrary, we have established \eqref{eq:targetp}, and thus, \eqref{eq:rn}, which further implies that \begin{align*} \sup_{\tau \in \Upsilon, s\in \mathcal{S}}|r_n(u,\tau,s,E_n^*(s))| = o_p(1), \end{align*} For the leading term of \eqref{eq:term1}, we have \begin{align*} & \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}\xi_i^s\left[\phi_i(u,\tau,s,E_n^*(s)) - \mathbb{E}\phi_i(u,\tau,s,E_n^*(s)) \right] \\ = & \sum_{s \in \mathcal{S}}\left[ \Gamma_n^{s*}(N(s),\tau,E_n^*(s)) - \Gamma_n^{s*}(N(s)+n_1(s),\tau,E_n^*(s))\right], \end{align*} where \begin{align*} \Gamma_n^{s*}(k,\tau,e) = & \sum_{i=1}^k \xi_i^s\int_0^{\frac{u'\iota_1 - e(q(\tau)+\frac{u_1}{\sqrt{n}})}{\sqrt{n}}}\left(1\{Y_i^s(1) \leq q_1(\tau)+v\} - 1\{Y_i^s(1) \leq q_1(\tau)\} \right)dv \\ & - k \mathbb{E}\left[\int_0^{\frac{u'\iota_1 - e(q(\tau)+\frac{u_1}{\sqrt{n}})}{\sqrt{n}}}\left(1\{Y_i^s(1) \leq q_1(\tau)+v\} - 1\{Y_i^s(1) \leq q_1(\tau)\} \right)dv\right]. \end{align*} By the same argument in \eqref{eq:Gamma_sfe}, we can show that \begin{align*} \sup_{0 < t \leq 1,\tau \in \Upsilon, |e| \leq M}|\Gamma_n^{s*}(k,\tau,e) | = o_p(1), \end{align*} where we need to use the fact that the Poisson(1) random variable has an exponential tail and thus \begin{align*} \mathbb{E}\sup_{i\in\{1,\cdots,n\},s\in \mathcal{S}} \xi_i^s = O(\log(n)). 
\end{align*} Therefore, \begin{align} \label{eq:l21nstar''} \sup_{\tau \in \Upsilon}\left|\sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni}\left[\phi_i(u,\tau,s,E_n^*(s)) - \mathbb{E}\phi_i(u,\tau,E_n^*(s)) \right]\right| = o_p(1). \end{align} For the second term on the RHS of \eqref{eq:l21nstar'}, we have \begin{align} \label{eq:l21nstar'''} \sum_{s \in \mathcal{S}} \sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni}\mathbb{E}\phi_i(u,\tau,s,e) = & \sum_{s \in \mathcal{S}}n_1^*(s)\mathbb{E}\phi_i(u,\tau,s,e) \notag \\ = & \sum_{s \in \mathcal{S}} \pi p(s) \frac{f_1(q_1(\tau)|s)}{2}(u'\iota_1 - eq(\tau))^2 +o(1), \end{align} where the $o(1)$ term holds uniformly over $(\tau,e) \in \Upsilon \times [-M,M]$, the first equality holds because $\sum_{i=N(s) + 1}^{N(s)+n_1(s)}M_{ni} = n^*_1(s)$ and the second equality holds by the same calculation in \eqref{eq:Gamma_sfe1} and the facts that $n^*(s)/n \convP p(s)$ and \begin{align*} \frac{n^*_1(s)}{n} = \frac{D_n^*(s)+\pi n^*(s)}{n} \convP \pi p(s). \end{align*} Combining \eqref{eq:l21nstar}, \eqref{eq:l21nstar'}, \eqref{eq:l21nstar''}, \eqref{eq:l21nstar'''}, and the facts that $E_n^*(s) = \frac{n}{n^*(s)}\frac{D_n^*(s)}{\sqrt{n}}$ and $ \frac{n}{n^*(s)} \convP 1/p(s)$, we have \begin{align*} L_{2,1,n}^*(u,\tau) = \frac{\pi f_1(q_1(\tau))}{2}(u'\iota_1)^2 - \sum_{s \in \mathcal{S}}f_1(q_1(\tau)|s)\frac{\pi D_{n}^*(s)u'\iota_1}{\sqrt{n}}q(\tau) + h_{2,1}^*(\tau) + R_{sfe,2,1}^*(u,\tau), \end{align*} where \begin{align*} h_{2,1}^*(\tau) = \sum_{s \in \mathcal{S}} \frac{\pi f_1(q_1(\tau)|s)}{2}p(s)(E_n^*(s))^2q^2(\tau) \end{align*} and \begin{align*} \sup_{\tau \in \Upsilon}|R_{sfe,2,1}^*(u,\tau)| = o_p(1). \end{align*} This concludes the proof. \end{proof} \begin{lem} \label{lem:Qsfestar} Recall the definition of $(W_{sfe,n,1}^*(\tau)-\mathcal{W}_{n,1}(\tau), W_{sfe,n,2}^*(\tau), W_{sfe,n,3}^*(\tau)-\mathcal{W}_{n,2}(\tau))$ in \eqref{eq:betasfestar-qhat}. If Assumptions in Theorem \ref{thm:cab} hold, then conditionally on data, \begin{align*} (W_{sfe,n,1}^*(\tau)-\mathcal{W}_{n,1}(\tau), W_{sfe,n,2}^*(\tau), W_{sfe,n,3}^*(\tau)-\mathcal{W}_{n,2}(\tau)) \rightsquigarrow (\mathcal{B}_1(\tau),\mathcal{B}_2(\tau),\mathcal{B}_3(\tau)), \end{align*} where $(\mathcal{B}_1(\tau),\mathcal{B}_2(\tau),\mathcal{B}_3(\tau))$ are three independent Gaussian processes with covariance kernels \begin{align*} \Sigma_1(\tau_1,\tau_2) = \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_1(S,\tau_1)m_1(S,\tau_2)}{\pi f_1(q_1(\tau_1))f_1(q_1(\tau_2))} + \frac{\min(\tau_1,\tau_2) - \tau_1\tau_2 - \mathbb{E}m_0(S,\tau_1)m_0(S,\tau_2)}{(1-\pi) f_0(q_0(\tau_1))f_0(q_0(\tau_2))}, \end{align*} \begin{align*} & \Sigma_2(\tau_1,\tau_2) \\ = & \mathbb{E}\gamma(S)\biggl[(m_1(S,\tau_1) - m_0(S,\tau_1))\left(\frac{1-\pi}{\pi f_1(q_1(\tau_1))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau_1))}\right) + q(\tau_1)\left(\frac{f_1(q(\tau_1)|S)}{f_1(q_1(\tau_1))} -\frac{f_0(q(\tau_1)|S)}{f_0(q_0(\tau_1))}\right)\biggr]\\ &\times \biggl[(m_1(S,\tau_2) - m_0(S,\tau_2))\left(\frac{1-\pi}{\pi f_1(q_1(\tau_2))} - \frac{\pi}{(1-\pi)f_0(q_0(\tau_2))}\right) + q(\tau_2)\left(\frac{f_1(q(\tau_2)|S)}{f_1(q_2(\tau_2))} -\frac{f_0(q(\tau_2)|S)}{f_0(q_0(\tau_2))}\right)\biggr], \end{align*} and \begin{align*} \Sigma_3(\tau_1,\tau_2) = \mathbb{E}\biggl[\frac{m_1(S,\tau_1)}{f_1(q_1(\tau_1))} - \frac{m_0(S,\tau_1)}{f_0(q_0(\tau_1))}\biggr] \biggl[\frac{m_1(S,\tau_2)}{f_1(q_1(\tau_2))} - \frac{m_0(S,\tau_2)}{f_0(q_0(\tau_2))}\biggr], \end{align*} respectively. 
\end{lem} \begin{proof} Let $\mathcal{A}_n = \{(A_i^*,S_i^*, A_i, S_i): i =1,\cdots,n\}$. Following the definition of $M_{ni}$ and arguments in the proof of Lemma \ref{lem:Rsfestar}, we have \begin{align*} & \{W_{sfe,n,1}^*(\tau)-\mathcal{W}_{n,1}(\tau)|\mathcal{A}_n\} \\ \stackrel{d}{=} &\left\{\sum_{s \in \mathcal{S}} \frac{1}{\sqrt{n}}\left[\sum_{i=N(s) + 1}^{N(s)+n_1(s)}(M_{ni}-1)\left(\frac{\tilde{\eta}_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))}\right) - \sum_{i=N(s) +n_1(s)+ 1}^{N(s)+n(s)}(M_{ni}-1)\left(\frac{\tilde{\eta}_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\right)\right] \biggl|\mathcal{A}_n\right\}\\ = & \left\{\sum_{s \in \mathcal{S}}\frac{1}{\sqrt{n}}\left[\sum_{i=N(s) + 1}^{N(s)+n_1(s)}(\xi_{i}^s-1)\frac{\tilde{\eta}_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \sum_{i=N(s) +n_1(s)+ 1}^{N(s)+n(s)}(\xi_{i}^s-1)\frac{\tilde{\eta}_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))}\right] + R_1(\tau) \biggl|\mathcal{A}_n\right\}, \end{align*} where $\sup_{\tau \in \Upsilon}|R_1(\tau)| = o_p(1)$ and $\{\xi_{i}^s\}_{i=1}^n$, $s \in \mathcal{S}$ are sequences of i.i.d. Poisson(1) random variables that are independent of $\mathcal{A}_n$ and across $s \in \mathcal{S}$. In addition, by the same argument in the proof of Lemma \ref{lem:Q}, we have \begin{align*} & \sum_{s \in \mathcal{S}}\frac{1}{\sqrt{n}}\left[\sum_{i=N(s) + 1}^{N(s)+n_1(s)}(\xi_{i}^s-1)\frac{\tilde{\eta}_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \sum_{i=N(s) +n_1(s)+ 1}^{N(s)+n(s)}(\xi_{i}^s-1)\frac{\tilde{\eta}_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))} \right] \\ = & \sum_{s \in \mathcal{S}}\frac{1}{\sqrt{n}}\left[\sum_{i= \lfloor nF(s)\rfloor + 1}^{\lfloor n(F(s)+\pi p(s))\rfloor}(\xi_{i}^s-1)\frac{\tilde{\eta}_{i,1}(s,\tau)}{\pi f_1(q_1(\tau))} - \sum_{i=\lfloor n(F(s)+\pi p(s))\rfloor+1}^{\lfloor n(F(s)+ p(s))\rfloor}(\xi_{i}^s-1) \frac{\tilde{\eta}_{i,0}(s,\tau)}{(1-\pi) f_0(q_0(\tau))} \right] + R_2(\tau) \\ \equiv & W_{1}^*(\tau) + R_2(\tau), \end{align*} where $\sup_{\tau \in \Upsilon}|R_2(\tau)| = o_p(1)$. Because both $ W_{sfe,n,2}^*(\tau)$ and $W_{sfe,n,3}^*(\tau)-\mathcal{W}_{n,2}(\tau)$ are in the $\sigma$-field generated by $\mathcal{A}_n$, we have \begin{align*} & (W_{sfe,n,1}^*(\tau)-\mathcal{W}_{n,1}(\tau), W_{sfe,n,2}^*(\tau), W_{sfe,n,3}^*(\tau)-\mathcal{W}_{n,2}(\tau)) \\ \stackrel{d}{=} & (W_{1}^*(\tau) + R_1(\tau)+R_2(\tau), W_{sfe,n,2}^*(\tau), W_{sfe,n,3}^*(\tau)-\mathcal{W}_{n,2}(\tau)). \end{align*} In addition, note that $\{\xi_i^s\}_{i=1}^n$ and $\{\tilde{\eta}_{i,1}(s,\tau),\tilde{\eta}_{i,1}(s,\tau)\}_{i=1}^n$ are independent of $\mathcal{A}_n$, therefore, $W_{1}^*(\tau) \perp\!\!\!\perp (W_{sfe,n,2}^*(\tau),W_{sfe,n,3}^*(\tau)-\mathcal{W}_{n,2}(\tau))$. Applying \citet[Theorem 2.9.6]{VW96} to each segment $$\lfloor nF(s)\rfloor + 1,\cdots,\lfloor n(F(s)+\pi p(s))\rfloor \quad \text{or} \quad \lfloor n(F(s)+\pi p(s))\rfloor+1,\cdots,\lfloor n(F(s)+ p(s))\rfloor$$ for $s \in \mathcal{S}$ and noticing that $\{\tilde{\eta}_{i,1}(s,\tau)\}_{i=1}^n$ and $\{\tilde{\eta}_{i,0}(s,\tau)\}_{i=1}^n$ are two i.i.d. sequences for each $s \in \mathcal{S}$, independent of each other, and independent across $s$, we have, conditionally on $\{\tilde{\eta}_{i,1}(s,\tau),\tilde{\eta}_{i,0}(s,\tau)\}_{i=1}^n$, $s \in \mathcal{S}$, \begin{align*} W_1^*(\tau) \rightsquigarrow \mathcal{B}_1(\tau) \end{align*} with the covariance kernel $\Sigma_1(\tau_1,\tau_2)$. For $W^*_{sfe,n,2}(\tau)$, we note that it depends on data only through $\{S_i^*\}_{i=1}^n$. 
By Assumption \ref{ass:bassignment}, \begin{align*} W^*_{sfe,n,2}(\tau)|\{S_i^*\}_{i=1}^n \rightsquigarrow \mathcal{B}_2(\tau) \end{align*} with the covariance kernel $\Sigma_2(\tau_1,\tau_2)$. Last, for $W^*_{sfe,n,3}(\tau) - \mathcal{W}_{n,2}(\tau)$, note that $\{S_i^*\}$ is sampled by the standard bootstrap procedure. Therefore, directly applying \citet[Theorem 3.6.2]{VW96}, we have \begin{align*} W^*_{sfe,n,3}(\tau) - \mathcal{W}_{n,2}(\tau) = \frac{1}{\sqrt{n}}\sum_{i =1}^n (\xi'_i-1) \biggl[\frac{m_1(S_i,\tau)}{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau)}{f_0(q_0(\tau))}\biggr]+R_3(\tau), \end{align*} where $\sup_{\tau \in \Upsilon}|R_3(\tau)| = o_p(1)$, $\{\xi_i'\}_{i=1}^n$ is a sequence of i.i.d. Poisson(1) random variables that is independent of the data and $\{\xi^s_i\}_{i=1}^n$, $s \in \mathcal{S}$. By \citet[Theorem 3.6.2]{VW96}, conditionally on data $\{S_i\}_{i=1}^n$, \begin{align*} \frac{1}{\sqrt{n}}\sum_{i =1}^n (\xi'_i-1) \biggl[\frac{m_1(S_i,\tau)}{f_1(q_1(\tau))} - \frac{m_0(S_i,\tau)}{f_0(q_0(\tau))}\biggr] \rightsquigarrow \mathcal{B}_3(\tau), \end{align*} where $\mathcal{B}_3(\tau)$ has the covariance kernel $\Sigma_3(\tau_1,\tau_2)$. Furthermore, $\mathcal{B}_2(\tau)$ and $\mathcal{B}_3(\tau)$ are independent as $\Sigma_2(\tau_1,\tau_2)$ is not a function of $\{S_i^*\}_{i=1}^n$. This concludes the proof. \end{proof} \section{Additional Simulation Results} \label{sec:addsim} \subsection{DGPs} We consider the following four DGPs with parameters $\gamma =4$, $\sigma =2$, and $\mu$, which will be specified later. DGPs 1 and 3 correspond to DGPs 1 and 2 in Section \ref{sec:sim} in the main paper. \begin{enumerate} \item Let $Z_i$ be standardized $\text{Beta}(2,2)$ distributed, $S_i = \sum_{j=1}^4 1\{Z_i \leq g_j\}$, and $(g_1,\cdots,g_4) = (-0.25\sqrt{20},0,0.25\sqrt{20},0.5\sqrt{20})$. The outcome equation is $$Y_i = A_i \mu + \gamma Z_i + \eta_i,$$ where $\eta_i = \sigma A_i \varepsilon_{i,1} + (1-A_i)\varepsilon_{i,2}$ and $(\varepsilon_{i,1},\varepsilon_{i,2})$ are jointly standard normal. \item Let $S_i$ be the same as in DGP 1. The outcome equation is \begin{align*} Y_i = A_i \mu+\gamma Z_i A_i - \gamma(1-A_i)(\log(Z_i+3)1\{Z_i \leq 0.5\}) + \eta_i, \end{align*} where $\eta_i = \sigma A_i \varepsilon_{i,1} + (1-A_i)\varepsilon_{i,2}$ and $(\varepsilon_{i,1},\varepsilon_{i,2})$ are jointly standard normal. \item Let $Z_i$ be uniformly distributed on $[-2,2]$, $S_i = \sum_{j=1}^4 1\{Z_i \leq g_j\}$, and $(g_1,\cdots,g_4) = (-1,0,1,2)$. The outcome equation is \begin{align*} Y_i = A_i \mu+A_im_{i,1} + (1-A_i)m_{i,0} + \eta_i, \end{align*} where $m_{i,0} = \gamma Z_i^2 1\{|Z_i|\geq 1\} + \frac{\gamma}{4}(2 - Z_i^2)1\{|Z_i|<1\}$, $\eta_i = \sigma(1+Z_i^2)A_i\varepsilon_{i,1} + (1+Z_i^2)(1-A_i)\varepsilon_{i,2}$, and $(\varepsilon_{i,1},\varepsilon_{i,2})$ are mutually independent $T(3)/3$ distributed. \item Let $Z_i$ be normally distributed with mean $0$ and variance $4$, $S_i = \sum_{j=1}^4 1\{Z_i \leq g_j\}$, $(g_1,\cdots,g_4) = (2\Phi^{-1}(0.25),2\Phi^{-1}(0.5),2\Phi^{-1}(0.75),\infty)$, and $\Phi(\cdot)$ is the standard normal CDF. The outcome equation is \begin{align*} Y_i = A_i \mu+A_i m_{i,1} + (1-A_i)m_{i,0} + \eta_i, \end{align*} where $m_{i,0} = -\gamma Z_i^2/4$, $m_{i,1} = \gamma Z_i^2/4$, \begin{align*} \eta_i = \sigma(1 + 0.5 \exp(-Z_i^2/2))A_i\varepsilon_{i,1} + (1+0.5\exp(-Z_i^2/2))(1-A_i)\varepsilon_{i,2}, \end{align*} and $(\varepsilon_{i,1},\varepsilon_{i,2})$ are jointly standard normal.
\end{enumerate} When $\pi = \frac{1}{2}$, for each DGP, we consider four randomization schemes: \begin{enumerate} \item SRS: Treatment assignment is generated as in Example \ref{ex:srs}. \item WEI: Treatment assignment is generated as in Example \ref{ex:wei} with $\phi(x) = (1-x)/2$. \item BCD: Treatment assignment is generated as in Example \ref{ex:bcd} with $\lambda = 0.75$. \item SBR: Treatment assignment is generated as in Example \ref{ex:sbr}. \end{enumerate} When $\pi \neq 0.5$, we focus on SRS and SBR. We conduct the simulations with sample sizes $n=200$ and $400$. The numbers of simulation replications and bootstrap samples are 1000. Under the null, $\mu=0$ and the true parameters of interest are computed by simulations with $10^6$ sample size and $10^4$ replications. Under the alternative, we perturb the true values by $\mu = 1$ and $\mu = 0.75$ for $n = 200$ and $400$, respectively. We consider the following eight t-statistics. \begin{enumerate} \item ``s/naive": the point estimator is computed by the simple QR and its standard error $\sigma_{naive}$ is computed as \begin{align} \label{eq:stddev1} \sigma^2_{naive} = & \frac{\tau(1-\tau) - \frac{1}{n}\sum_{i =1}^n\hat{m}^2_1(S_i,\tau) }{\pi\hat{f}_1^2(\hat{q}_1(\tau))}+\frac{\tau(1-\tau) - \frac{1}{n}\sum_{i =1}^n\hat{m}^2_0(S_i,\tau) }{(1-\pi)\hat{f}_0^2(\hat{q}_0(\tau))} \notag \\ & + \frac{1}{n}\sum_{i =1}^n \pi(1-\pi)\left(\frac{\hat{m}_1(S_i,\tau)}{\pi \hat{f}_1(\hat{q}_1(\tau))} + \frac{\hat{m}_0(S_i,\tau)}{(1-\pi) \hat{f}_0(\hat{q}_0(\tau))}\right)^2 \notag \\ & + \frac{1}{n}\sum_{i =1}^n \left(\frac{\hat{m}_1(S_i,\tau)}{ \hat{f}_1(\hat{q}_1(\tau))} - \frac{\hat{m}_0(S_i,\tau)}{ \hat{f}_0(\hat{q}_0(\tau))}\right)^2, \end{align} where $\hat{q}_j(\tau)$ is the $\tau$-the empirical quantile of $Y_i|A_i = j$, $$\hat{m}_{i,1}(s,\tau) = \frac{\sum_{i =1}^nA_i1\{S_i = s\}(\tau - 1\{Y_i \leq \hat{q}_1(\tau)\})}{n_1(s)},$$ $$\hat{m}_{i,0}(s,\tau) = \frac{\sum_{i =1}^n(1-A_i)1\{S_i = s\}(\tau - 1\{Y_i \leq \hat{q}_0(\tau)\})}{n(s)-n_1(s)},$$ and for $j=0,1$, $\hat{f}_j(\cdot)$ is computed by the kernel density estimation using the observations $Y_i$ provided that $A_i = j$, bandwidth $h_j = 1.06\hat{\sigma}_jn_j^{-1/5}$, and the Gaussian kernel function, where $\hat{\sigma}_j$ is the standard deviation of the observations $Y_i$ provided that $A_i = j$, and $n_j = \sum_{i =1}^n \{A_i = j\}$, $j=0,1$. \item ``s/adj": exactly the same as the ``s/naive" method with one difference: replacing $\pi(1-\pi)$ in $\sigma^2_{naive}$ by $\gamma(S_i)$. \item ``s/W": the point estimator is computed by the simple QR and its standard error $\sigma_B$ is computed by the weighted bootstrap procedure. The bootstrap weights $\{\xi_i\}_{i=1}^n$ are generated from the standard exponential distribution. Denote $\{\hat{\beta}^w_{1,b}\}_{b=1}^B$ as the collection of $B$ estimates obtained by the simple QR applied to the samples generated by the weighted bootstrap procedure. Then, \begin{align*} \sigma_{B} = \frac{\hat{Q}(0.9) - \hat{Q}(0.1)}{\Phi^{-1}(0.9) - \Phi^{-1}(0.1)}, \end{align*} where $\Phi(\cdot)$ is the standard normal CDF and $\hat{Q}(\tau)$ is the $\tau$-th empirical quantile of $\{\hat{\beta}^w_{1,b}\}_{b=1}^B$. \item ``sfe/W": the same as above with one difference: the estimation method for both the original and bootstrap samples is the QR with strata fixed effects. \item ``ipw/W": the same as above with one difference: the estimation method for both the original and bootstrap samples is the inverse propensity score weighted QR. 
\item ``s/CA": the point estimator is computed by the simple QR and its standard error $\sigma_{CA}$ is computed by the covariate-adaptive bootstrap procedure. Denote $\{\hat{\beta}^*_{1,b}\}_{b=1}^B$ as the collection of $B$ estimates obtained by the simple QR applied to the samples generated by the covariate-adaptive bootstrap procedure. Then, \begin{align*} \sigma_{CA} = \frac{\hat{Q}(0.9) - \hat{Q}(0.1)}{\Phi^{-1}(0.9) - \Phi^{-1}(0.1)}, \end{align*} where $\hat{Q}(\tau)$ is the $\tau$-th empirical quantile of $\{\hat{\beta}^*_{1,b}\}_{b=1}^B$. \item ``sfe/CA": the same as above with one difference: the estimation method for both the original and bootstrap samples is the QR with strata fixed effects. \item ``ipw/CA": the same as above with one difference: the estimation method for both the original and bootstrap samples is the inverse propensity score weighted QR. \end{enumerate} \subsection{QTE, $H_0$, $\pi = 0.5$} \begin{table}[H] \centering \caption{$H_0$, $n = 200$, $\tau = 0.25$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.042 & 0.042 & 0.051 & 0.039 & 0.047 & 0.046 & 0.044 & 0.046 \\ & WEI & 0.011 & 0.038 & 0.018 & 0.043 & 0.046 & 0.037 & 0.047 & 0.047 \\ & BCD & 0.004 & 0.041 & 0.010 & 0.043 & 0.043 & 0.045 & 0.048 & 0.048 \\ & SBR & 0.003 & 0.047 & 0.003 & 0.047 & 0.054 & 0.049 & 0.046 & 0.046 \\ \hline 2 & SRS & 0.045 & 0.045 & 0.060 & 0.062 & 0.066 & 0.056 & 0.069 & 0.069 \\ & WEI & 0.023 & 0.037 & 0.049 & 0.056 & 0.066 & 0.068 & 0.064 & 0.068 \\ & BCD & 0.021 & 0.037 & 0.032 & 0.049 & 0.057 & 0.063 & 0.059 & 0.057 \\ & SBR & 0.025 & 0.042 & 0.037 & 0.050 & 0.054 & 0.057 & 0.054 & 0.053 \\ \hline 3 & SRS & 0.042 & 0.042 & 0.045 & 0.045 & 0.054 & 0.055 & 0.044 & 0.058 \\ & WEI & 0.042 & 0.043 & 0.037 & 0.044 & 0.045 & 0.045 & 0.043 & 0.045 \\ & BCD & 0.052 & 0.056 & 0.044 & 0.050 & 0.057 & 0.057 & 0.057 & 0.055 \\ & SBR & 0.046 & 0.053 & 0.041 & 0.043 & 0.048 & 0.052 & 0.048 & 0.047 \\ \hline 4 & SRS & 0.054 & 0.054 & 0.048 & 0.046 & 0.049 & 0.046 & 0.043 & 0.048 \\ & WEI & 0.050 & 0.051 & 0.045 & 0.035 & 0.047 & 0.051 & 0.043 & 0.055 \\ & BCD & 0.056 & 0.059 & 0.040 & 0.030 & 0.049 & 0.047 & 0.044 & 0.048 \\ & SBR & 0.061 & 0.065 & 0.044 & 0.032 & 0.053 & 0.057 & 0.051 & 0.053 \\ \end{tabular} \label{tab:200_1} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n = 200$, $\tau = 0.5$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.045 & 0.045 & 0.047 & 0.043 & 0.044 & 0.044 & 0.039 & 0.039 \\ & WEI & 0.012 & 0.040 & 0.014 & 0.044 & 0.043 & 0.037 & 0.041 & 0.035 \\ & BCD & 0.002 & 0.057 & 0.003 & 0.040 & 0.041 & 0.044 & 0.039 & 0.039 \\ & SBR & 0.001 & 0.057 & 0.001 & 0.045 & 0.046 & 0.045 & 0.045 & 0.044 \\ \hline 2 & SRS & 0.045 & 0.045 & 0.057 & 0.066 & 0.061 & 0.048 & 0.064 & 0.066 \\ & WEI & 0.033 & 0.065 & 0.037 & 0.056 & 0.065 & 0.065 & 0.056 & 0.061 \\ & BCD & 0.022 & 0.062 & 0.027 & 0.048 & 0.056 & 0.057 & 0.057 & 0.054 \\ & SBR & 0.017 & 0.050 & 0.017 & 0.040 & 0.046 & 0.048 & 0.048 & 0.046 \\ \hline 3 & SRS & 
0.004 & 0.004 & 0.047 & 0.045 & 0.052 & 0.052 & 0.047 & 0.053 \\ & WEI & 0.006 & 0.006 & 0.045 & 0.050 & 0.058 & 0.052 & 0.053 & 0.057 \\ & BCD & 0.010 & 0.010 & 0.045 & 0.050 & 0.051 & 0.050 & 0.050 & 0.053 \\ & SBR & 0.008 & 0.011 & 0.048 & 0.048 & 0.053 & 0.046 & 0.051 & 0.047 \\ \hline 4 & SRS & 0.013 & 0.013 & 0.050 & 0.036 & 0.051 & 0.055 & 0.035 & 0.043 \\ & WEI & 0.011 & 0.011 & 0.043 & 0.033 & 0.051 & 0.049 & 0.043 & 0.052 \\ & BCD & 0.013 & 0.013 & 0.049 & 0.041 & 0.053 & 0.055 & 0.047 & 0.052 \\ & SBR & 0.013 & 0.013 & 0.040 & 0.033 & 0.047 & 0.046 & 0.044 & 0.045 \\ \end{tabular} \label{tab:200_2_2} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n = 200$, $\tau = 0.75$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.052 & 0.052 & 0.053 & 0.044 & 0.044 & 0.048 & 0.041 & 0.042 \\ & WEI & 0.012 & 0.042 & 0.014 & 0.043 & 0.046 & 0.037 & 0.039 & 0.045 \\ & BCD & 0.002 & 0.047 & 0.002 & 0.051 & 0.054 & 0.055 & 0.053 & 0.053 \\ & SBR & 0.001 & 0.026 & 0.003 & 0.030 & 0.035 & 0.030 & 0.033 & 0.035 \\ \hline 2 & SRS & 0.052 & 0.052 & 0.066 & 0.057 & 0.058 & 0.053 & 0.048 & 0.058 \\ & WEI & 0.021 & 0.045 & 0.027 & 0.047 & 0.052 & 0.057 & 0.051 & 0.054 \\ & BCD & 0.013 & 0.046 & 0.025 & 0.051 & 0.060 & 0.067 & 0.061 & 0.060 \\ & SBR & 0.008 & 0.036 & 0.012 & 0.037 & 0.046 & 0.046 & 0.046 & 0.050 \\ \hline 3 & SRS & 0.058 & 0.058 & 0.048 & 0.054 & 0.047 & 0.058 & 0.054 & 0.051 \\ & WEI & 0.053 & 0.055 & 0.041 & 0.044 & 0.047 & 0.047 & 0.048 & 0.046 \\ & BCD & 0.042 & 0.043 & 0.026 & 0.026 & 0.033 & 0.033 & 0.032 & 0.034 \\ & SBR & 0.048 & 0.052 & 0.040 & 0.036 & 0.046 & 0.051 & 0.043 & 0.048 \\ \hline 4 & SRS & 0.044 & 0.044 & 0.057 & 0.059 & 0.062 & 0.053 & 0.051 & 0.065 \\ & WEI & 0.034 & 0.034 & 0.044 & 0.029 & 0.053 & 0.048 & 0.044 & 0.054 \\ & BCD & 0.029 & 0.032 & 0.040 & 0.019 & 0.045 & 0.047 & 0.043 & 0.047 \\ & SBR & 0.034 & 0.037 & 0.042 & 0.025 & 0.051 & 0.055 & 0.049 & 0.051 \\ \end{tabular} \label{tab:200_3} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n = 400$, $\tau = 0.25$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.047 & 0.047 & 0.053 & 0.041 & 0.039 & 0.049 & 0.040 & 0.040 \\ & WEI & 0.009 & 0.043 & 0.017 & 0.041 & 0.042 & 0.045 & 0.044 & 0.043 \\ & BCD & 0.002 & 0.042 & 0.003 & 0.037 & 0.040 & 0.035 & 0.036 & 0.037 \\ & SBR & 0.002 & 0.043 & 0.004 & 0.034 & 0.034 & 0.036 & 0.032 & 0.030 \\ \hline 2 & SRS & 0.046 & 0.046 & 0.056 & 0.059 & 0.059 & 0.055 & 0.057 & 0.059 \\ & WEI & 0.035 & 0.046 & 0.046 & 0.056 & 0.062 & 0.065 & 0.061 & 0.060 \\ & BCD & 0.030 & 0.044 & 0.037 & 0.055 & 0.065 & 0.060 & 0.060 & 0.057 \\ & SBR & 0.026 & 0.049 & 0.042 & 0.058 & 0.067 & 0.063 & 0.062 & 0.066 \\ \hline 3 & SRS & 0.044 & 0.044 & 0.039 & 0.041 & 0.042 & 0.042 & 0.041 & 0.043 \\ & WEI & 0.042 & 0.045 & 0.048 & 0.041 & 0.048 & 0.051 & 0.046 & 0.049 \\ & BCD & 0.039 & 0.040 & 0.041 & 0.040 & 0.044 & 0.046 & 0.047 & 0.048 \\ & SBR & 0.048 & 0.051 & 0.046 & 0.048 & 0.052 & 0.056 & 0.056 & 0.055 \\ \hline 4 & SRS & 0.056 
& 0.056 & 0.039 & 0.042 & 0.041 & 0.041 & 0.043 & 0.042 \\ & WEI & 0.052 & 0.055 & 0.038 & 0.034 & 0.045 & 0.042 & 0.044 & 0.044 \\ & BCD & 0.054 & 0.058 & 0.040 & 0.026 & 0.045 & 0.044 & 0.045 & 0.043 \\ & SBR & 0.061 & 0.068 & 0.049 & 0.027 & 0.047 & 0.054 & 0.055 & 0.051 \\ \end{tabular} \label{tab:400_1} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n = 400$, $\tau = 0.5$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.042 & 0.042 & 0.054 & 0.046 & 0.040 & 0.046 & 0.050 & 0.041 \\ & WEI & 0.010 & 0.049 & 0.008 & 0.047 & 0.047 & 0.046 & 0.043 & 0.042 \\ & BCD & 0.003 & 0.045 & 0.002 & 0.043 & 0.043 & 0.035 & 0.039 & 0.040 \\ & SBR & 0.002 & 0.046 & 0.000 & 0.035 & 0.037 & 0.036 & 0.036 & 0.037 \\ \hline 2 & SRS & 0.050 & 0.050 & 0.055 & 0.049 & 0.047 & 0.051 & 0.052 & 0.050 \\ & WEI & 0.018 & 0.048 & 0.025 & 0.041 & 0.046 & 0.045 & 0.048 & 0.045 \\ & BCD & 0.011 & 0.042 & 0.011 & 0.041 & 0.046 & 0.045 & 0.046 & 0.043 \\ & SBR & 0.017 & 0.051 & 0.014 & 0.042 & 0.050 & 0.053 & 0.047 & 0.050 \\ \hline 3 & SRS & 0.012 & 0.012 & 0.043 & 0.046 & 0.048 & 0.046 & 0.050 & 0.050 \\ & WEI & 0.014 & 0.016 & 0.057 & 0.055 & 0.060 & 0.055 & 0.058 & 0.057 \\ & BCD & 0.013 & 0.013 & 0.055 & 0.059 & 0.061 & 0.051 & 0.053 & 0.052 \\ & SBR & 0.006 & 0.006 & 0.040 & 0.040 & 0.039 & 0.038 & 0.039 & 0.038 \\ \hline 4 & SRS & 0.019 & 0.019 & 0.056 & 0.052 & 0.064 & 0.056 & 0.051 & 0.061 \\ & WEI & 0.018 & 0.018 & 0.060 & 0.046 & 0.065 & 0.064 & 0.062 & 0.066 \\ & BCD & 0.015 & 0.015 & 0.057 & 0.046 & 0.066 & 0.063 & 0.059 & 0.067 \\ & SBR & 0.021 & 0.021 & 0.057 & 0.043 & 0.060 & 0.062 & 0.062 & 0.062 \\ \end{tabular} \label{tab:400_2_2} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n = 400$, $\tau = 0.75$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.051 & 0.051 & 0.056 & 0.055 & 0.056 & 0.052 & 0.055 & 0.054 \\ & WEI & 0.007 & 0.041 & 0.014 & 0.055 & 0.053 & 0.051 & 0.050 & 0.051 \\ & BCD & 0.006 & 0.038 & 0.004 & 0.046 & 0.048 & 0.041 & 0.042 & 0.046 \\ & SBR & 0.004 & 0.033 & 0.002 & 0.044 & 0.043 & 0.042 & 0.043 & 0.042 \\ \hline 2 & SRS & 0.048 & 0.048 & 0.073 & 0.055 & 0.061 & 0.060 & 0.057 & 0.059 \\ & WEI & 0.020 & 0.039 & 0.024 & 0.046 & 0.053 & 0.048 & 0.051 & 0.053 \\ & BCD & 0.012 & 0.048 & 0.020 & 0.050 & 0.051 & 0.057 & 0.055 & 0.051 \\ & SBR & 0.011 & 0.047 & 0.014 & 0.046 & 0.052 & 0.050 & 0.052 & 0.052 \\ \hline 3 & SRS & 0.054 & 0.054 & 0.050 & 0.045 & 0.052 & 0.049 & 0.044 & 0.052 \\ & WEI & 0.053 & 0.055 & 0.049 & 0.047 & 0.053 & 0.050 & 0.049 & 0.054 \\ & BCD & 0.059 & 0.063 & 0.038 & 0.041 & 0.045 & 0.044 & 0.043 & 0.043 \\ & SBR & 0.049 & 0.051 & 0.042 & 0.044 & 0.043 & 0.049 & 0.049 & 0.049 \\ \hline 4 & SRS & 0.054 & 0.054 & 0.057 & 0.053 & 0.063 & 0.055 & 0.056 & 0.063 \\ & WEI & 0.047 & 0.051 & 0.055 & 0.043 & 0.064 & 0.055 & 0.061 & 0.059 \\ & BCD & 0.049 & 0.051 & 0.054 & 0.033 & 0.063 & 0.062 & 0.056 & 0.063 \\ & SBR & 0.046 & 0.048 & 0.047 & 0.026 & 0.051 & 0.057 & 0.056 & 0.053 \\ \end{tabular} 
\label{tab:400_3} \end{table} \subsection{QTE, $H_1$, $\pi = 0.5$} \begin{table}[H] \centering \caption{$H_1$, $n = 200$, $\tau = 0.25$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.191 & 0.191 & 0.203 & 0.354 & 0.356 & 0.205 & 0.340 & 0.342 \\ & WEI & 0.126 & 0.257 & 0.147 & 0.359 & 0.358 & 0.279 & 0.345 & 0.350 \\ & BCD & 0.105 & 0.372 & 0.122 & 0.379 & 0.375 & 0.361 & 0.369 & 0.365 \\ & SBR & 0.099 & 0.400 & 0.114 & 0.378 & 0.382 & 0.411 & 0.375 & 0.368 \\ \hline 2 & SRS & 0.284 & 0.284 & 0.315 & 0.352 & 0.376 & 0.319 & 0.345 & 0.378 \\ & WEI & 0.270 & 0.319 & 0.314 & 0.356 & 0.364 & 0.359 & 0.363 & 0.369 \\ & BCD & 0.282 & 0.333 & 0.304 & 0.361 & 0.375 & 0.390 & 0.385 & 0.383 \\ & SBR & 0.290 & 0.346 & 0.296 & 0.335 & 0.361 & 0.387 & 0.358 & 0.356 \\ \hline 3 & SRS & 0.712 & 0.712 & 0.694 & 0.688 & 0.698 & 0.704 & 0.677 & 0.686 \\ & WEI & 0.701 & 0.707 & 0.678 & 0.685 & 0.680 & 0.699 & 0.687 & 0.674 \\ & BCD & 0.712 & 0.720 & 0.673 & 0.686 & 0.695 & 0.699 & 0.698 & 0.698 \\ & SBR & 0.672 & 0.684 & 0.659 & 0.639 & 0.647 & 0.673 & 0.647 & 0.638 \\ \hline 4 & SRS & 0.166 & 0.166 & 0.124 & 0.112 & 0.132 & 0.135 & 0.131 & 0.128 \\ & WEI & 0.166 & 0.170 & 0.126 & 0.098 & 0.125 & 0.144 & 0.139 & 0.133 \\ & BCD & 0.165 & 0.176 & 0.126 & 0.094 & 0.155 & 0.157 & 0.145 & 0.157 \\ & SBR & 0.167 & 0.175 & 0.122 & 0.088 & 0.139 & 0.145 & 0.133 & 0.140 \\ \end{tabular} \label{tab:200_1'} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n = 200$, $\tau = 0.5$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.183 & 0.183 & 0.193 & 0.443 & 0.441 & 0.200 & 0.431 & 0.429 \\ & WEI & 0.116 & 0.295 & 0.138 & 0.442 & 0.447 & 0.298 & 0.437 & 0.436 \\ & BCD & 0.072 & 0.472 & 0.095 & 0.450 & 0.453 & 0.434 & 0.446 & 0.448 \\ & SBR & 0.085 & 0.485 & 0.099 & 0.463 & 0.460 & 0.457 & 0.453 & 0.448 \\ \hline 2 & SRS & 0.267 & 0.267 & 0.256 & 0.359 & 0.366 & 0.265 & 0.358 & 0.371 \\ & WEI & 0.248 & 0.346 & 0.247 & 0.358 & 0.394 & 0.346 & 0.378 & 0.389 \\ & BCD & 0.229 & 0.402 & 0.233 & 0.358 & 0.396 & 0.388 & 0.395 & 0.392 \\ & SBR & 0.232 & 0.404 & 0.234 & 0.365 & 0.392 & 0.399 & 0.401 & 0.391 \\ \hline 3 & SRS & 0.797 & 0.797 & 0.904 & 0.897 & 0.916 & 0.902 & 0.897 & 0.913 \\ & WEI & 0.802 & 0.807 & 0.907 & 0.903 & 0.909 & 0.913 & 0.902 & 0.906 \\ & BCD & 0.796 & 0.804 & 0.902 & 0.910 & 0.911 & 0.908 & 0.911 & 0.906 \\ & SBR & 0.771 & 0.774 & 0.897 & 0.896 & 0.901 & 0.899 & 0.894 & 0.899 \\ \hline 4 & SRS & 0.176 & 0.176 & 0.312 & 0.269 & 0.317 & 0.316 & 0.297 & 0.316 \\ & WEI & 0.171 & 0.175 & 0.289 & 0.255 & 0.307 & 0.309 & 0.297 & 0.298 \\ & BCD & 0.169 & 0.174 & 0.299 & 0.262 & 0.313 & 0.329 & 0.311 & 0.316 \\ & SBR & 0.163 & 0.165 & 0.283 & 0.255 & 0.304 & 0.302 & 0.298 & 0.298 \\ \end{tabular} \label{tab:200_2_2'} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n = 200$, $\tau = 0.75$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & 
\multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.198 & 0.198 & 0.215 & 0.362 & 0.358 & 0.216 & 0.353 & 0.355 \\ & WEI & 0.143 & 0.293 & 0.153 & 0.361 & 0.368 & 0.315 & 0.362 & 0.364 \\ & BCD & 0.108 & 0.377 & 0.131 & 0.356 & 0.360 & 0.355 & 0.353 & 0.353 \\ & SBR & 0.079 & 0.386 & 0.105 & 0.397 & 0.396 & 0.381 & 0.403 & 0.386 \\ \hline 2 & SRS & 0.268 & 0.268 & 0.315 & 0.386 & 0.439 & 0.322 & 0.391 & 0.434 \\ & WEI & 0.238 & 0.339 & 0.285 & 0.396 & 0.430 & 0.390 & 0.417 & 0.428 \\ & BCD & 0.209 & 0.407 & 0.263 & 0.398 & 0.428 & 0.425 & 0.428 & 0.418 \\ & SBR & 0.206 & 0.427 & 0.267 & 0.439 & 0.455 & 0.450 & 0.465 & 0.456 \\ \hline 3 & SRS & 0.698 & 0.698 & 0.607 & 0.594 & 0.619 & 0.634 & 0.609 & 0.622 \\ & WEI & 0.668 & 0.673 & 0.607 & 0.606 & 0.616 & 0.631 & 0.623 & 0.624 \\ & BCD & 0.690 & 0.698 & 0.607 & 0.612 & 0.616 & 0.635 & 0.618 & 0.621 \\ & SBR & 0.669 & 0.675 & 0.596 & 0.614 & 0.633 & 0.617 & 0.631 & 0.630 \\ \hline 4 & SRS & 0.163 & 0.163 & 0.158 & 0.122 & 0.167 & 0.173 & 0.140 & 0.169 \\ & WEI & 0.144 & 0.152 & 0.152 & 0.105 & 0.175 & 0.169 & 0.152 & 0.178 \\ & BCD & 0.133 & 0.138 & 0.151 & 0.085 & 0.170 & 0.177 & 0.173 & 0.172 \\ & SBR & 0.146 & 0.154 & 0.143 & 0.090 & 0.175 & 0.171 & 0.177 & 0.180 \\ \end{tabular} \label{tab:200_3'} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n = 400$, $\tau = 0.25$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.206 & 0.206 & 0.229 & 0.403 & 0.417 & 0.231 & 0.401 & 0.405 \\ & WEI & 0.163 & 0.332 & 0.173 & 0.408 & 0.413 & 0.337 & 0.408 & 0.413 \\ & BCD & 0.121 & 0.430 & 0.143 & 0.420 & 0.422 & 0.421 & 0.419 & 0.413 \\ & SBR & 0.128 & 0.451 & 0.144 & 0.428 & 0.429 & 0.458 & 0.426 & 0.423 \\ \hline 2 & SRS & 0.312 & 0.312 & 0.345 & 0.422 & 0.415 & 0.351 & 0.416 & 0.416 \\ & WEI & 0.312 & 0.352 & 0.332 & 0.405 & 0.424 & 0.378 & 0.408 & 0.426 \\ & BCD & 0.299 & 0.378 & 0.333 & 0.392 & 0.405 & 0.403 & 0.415 & 0.413 \\ & SBR & 0.330 & 0.389 & 0.345 & 0.401 & 0.407 & 0.426 & 0.410 & 0.406 \\ \hline 3 & SRS & 0.763 & 0.763 & 0.734 & 0.730 & 0.740 & 0.738 & 0.732 & 0.738 \\ & WEI & 0.763 & 0.764 & 0.739 & 0.739 & 0.748 & 0.744 & 0.746 & 0.746 \\ & BCD & 0.781 & 0.783 & 0.760 & 0.760 & 0.768 & 0.772 & 0.774 & 0.767 \\ & SBR & 0.766 & 0.773 & 0.745 & 0.739 & 0.744 & 0.763 & 0.751 & 0.744 \\ \hline 4 & SRS & 0.177 & 0.177 & 0.129 & 0.108 & 0.136 & 0.127 & 0.121 & 0.133 \\ & WEI & 0.170 & 0.176 & 0.129 & 0.096 & 0.139 & 0.139 & 0.131 & 0.143 \\ & BCD & 0.178 & 0.185 & 0.132 & 0.089 & 0.141 & 0.141 & 0.139 & 0.138 \\ & SBR & 0.180 & 0.186 & 0.129 & 0.102 & 0.134 & 0.147 & 0.135 & 0.133 \\ \end{tabular} \label{tab:400_1'} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n = 400$, $\tau = 0.5$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.218 & 0.218 & 0.232 & 0.504 & 0.502 & 0.235 & 0.497 & 0.502 \\ & WEI & 0.147 & 0.356 & 0.160 & 0.503 & 0.503 & 0.350 & 0.498 & 0.507 \\ & BCD & 
0.089 & 0.526 & 0.117 & 0.498 & 0.502 & 0.493 & 0.495 & 0.496 \\ & SBR & 0.089 & 0.550 & 0.109 & 0.520 & 0.518 & 0.524 & 0.526 & 0.519 \\ \hline 2 & SRS & 0.301 & 0.301 & 0.309 & 0.402 & 0.426 & 0.306 & 0.413 & 0.423 \\ & WEI & 0.287 & 0.387 & 0.281 & 0.402 & 0.418 & 0.372 & 0.411 & 0.420 \\ & BCD & 0.268 & 0.451 & 0.262 & 0.400 & 0.443 & 0.434 & 0.434 & 0.441 \\ & SBR & 0.260 & 0.433 & 0.252 & 0.403 & 0.421 & 0.418 & 0.431 & 0.420 \\ \hline 3 & SRS & 0.897 & 0.897 & 0.956 & 0.957 & 0.956 & 0.957 & 0.956 & 0.957 \\ & WEI & 0.892 & 0.892 & 0.954 & 0.944 & 0.948 & 0.951 & 0.942 & 0.948 \\ & BCD & 0.887 & 0.889 & 0.952 & 0.949 & 0.954 & 0.957 & 0.954 & 0.956 \\ & SBR & 0.900 & 0.902 & 0.954 & 0.954 & 0.954 & 0.958 & 0.962 & 0.957 \\ \hline 4 & SRS & 0.234 & 0.234 & 0.345 & 0.317 & 0.351 & 0.353 & 0.339 & 0.343 \\ & WEI & 0.222 & 0.224 & 0.336 & 0.326 & 0.352 & 0.352 & 0.335 & 0.358 \\ & BCD & 0.226 & 0.230 & 0.346 & 0.321 & 0.349 & 0.368 & 0.359 & 0.365 \\ & SBR & 0.238 & 0.242 & 0.369 & 0.350 & 0.380 & 0.379 & 0.374 & 0.377 \\ \end{tabular} \label{tab:400_2_2'} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n = 400$, $\tau = 0.75$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.218 & 0.218 & 0.237 & 0.430 & 0.435 & 0.242 & 0.438 & 0.435 \\ & WEI & 0.163 & 0.321 & 0.176 & 0.441 & 0.437 & 0.344 & 0.433 & 0.432 \\ & BCD & 0.136 & 0.422 & 0.152 & 0.421 & 0.420 & 0.417 & 0.417 & 0.416 \\ & SBR & 0.103 & 0.446 & 0.124 & 0.459 & 0.459 & 0.448 & 0.463 & 0.461 \\ \hline 2 & SRS & 0.300 & 0.300 & 0.337 & 0.445 & 0.479 & 0.335 & 0.449 & 0.479 \\ & WEI & 0.258 & 0.369 & 0.313 & 0.446 & 0.465 & 0.414 & 0.453 & 0.463 \\ & BCD & 0.247 & 0.462 & 0.295 & 0.451 & 0.476 & 0.483 & 0.481 & 0.477 \\ & SBR & 0.227 & 0.444 & 0.276 & 0.472 & 0.490 & 0.471 & 0.496 & 0.492 \\ \hline 3 & SRS & 0.763 & 0.763 & 0.710 & 0.702 & 0.707 & 0.712 & 0.701 & 0.715 \\ & WEI & 0.773 & 0.776 & 0.696 & 0.701 & 0.700 & 0.720 & 0.709 & 0.706 \\ & BCD & 0.753 & 0.755 & 0.705 & 0.716 & 0.720 & 0.720 & 0.717 & 0.726 \\ & SBR & 0.746 & 0.750 & 0.684 & 0.699 & 0.705 & 0.692 & 0.709 & 0.708 \\ \hline 4 & SRS & 0.209 & 0.209 & 0.199 & 0.140 & 0.221 & 0.208 & 0.149 & 0.221 \\ & WEI & 0.201 & 0.208 & 0.191 & 0.110 & 0.203 & 0.206 & 0.178 & 0.204 \\ & BCD & 0.195 & 0.200 & 0.199 & 0.121 & 0.213 & 0.224 & 0.213 & 0.220 \\ & SBR & 0.198 & 0.203 & 0.198 & 0.114 & 0.229 & 0.214 & 0.230 & 0.225 \\ \end{tabular} \label{tab:400_3'} \end{table} \subsection{QTE, $H_0$, $\pi = 0.7$} \begin{table}[H] \centering \caption{$H_0$, $n = 200$, $\tau = 0.25$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.042 & 0.042 & 0.046 & 0.042 & 0.036 & 0.036 & 0.039 & 0.039 \\ & SBR & 0.002 & 0.014 & 0.005 & 0.053 & 0.052 & 0.049 & 0.050 & 0.047 \\ \hline 2 & SRS & 0.037 & 0.037 & 0.051 & 0.059 & 0.057 & 0.061 & 0.057 & 0.064 \\ & SBR & 0.032 & 0.036 & 0.042 & 0.046 & 0.048 & 0.055 & 0.055 & 0.055 \\ \hline 3 & SRS & 0.046 & 0.046 & 0.046 & 0.047 & 0.039 & 0.045 & 0.049 & 0.043 \\ & SBR & 0.040 & 0.044 & 0.032 & 0.031 
& 0.034 & 0.041 & 0.037 & 0.040 \\ \hline 4 & SRS & 0.098 & 0.098 & 0.067 & 0.075 & 0.069 & 0.062 & 0.057 & 0.066 \\ & SBR & 0.057 & 0.066 & 0.043 & 0.016 & 0.062 & 0.061 & 0.066 & 0.064 \\ \end{tabular} \label{tab:200_1_70} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n = 200$, $\tau = 0.5$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.048 & 0.048 & 0.052 & 0.045 & 0.047 & 0.034 & 0.040 & 0.044 \\ & SBR & 0.001 & 0.007 & 0.002 & 0.039 & 0.040 & 0.044 & 0.038 & 0.037 \\ \hline 2 & SRS & 0.057 & 0.057 & 0.065 & 0.051 & 0.058 & 0.050 & 0.051 & 0.053 \\ & SBR & 0.022 & 0.034 & 0.021 & 0.053 & 0.053 & 0.050 & 0.059 & 0.053 \\ \hline 3 & SRS & 0.016 & 0.016 & 0.052 & 0.046 & 0.054 & 0.051 & 0.048 & 0.053 \\ & SBR & 0.004 & 0.005 & 0.039 & 0.038 & 0.048 & 0.045 & 0.046 & 0.048 \\ \hline 4 & SRS & 0.009 & 0.009 & 0.046 & 0.037 & 0.049 & 0.046 & 0.045 & 0.051 \\ & SBR & 0.004 & 0.005 & 0.036 & 0.016 & 0.052 & 0.049 & 0.043 & 0.046 \\ \end{tabular} \label{tab:200_2_2_70} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n = 200$, $\tau = 0.75$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.052 & 0.052 & 0.057 & 0.045 & 0.049 & 0.044 & 0.040 & 0.043 \\ & SBR & 0.002 & 0.008 & 0.004 & 0.033 & 0.034 & 0.036 & 0.036 & 0.036 \\ \hline 2 & SRS & 0.042 & 0.042 & 0.061 & 0.055 & 0.067 & 0.047 & 0.055 & 0.068 \\ & SBR & 0.006 & 0.014 & 0.009 & 0.029 & 0.037 & 0.042 & 0.039 & 0.040 \\ \hline 3 & SRS & 0.056 & 0.056 & 0.043 & 0.038 & 0.054 & 0.048 & 0.046 & 0.054 \\ & SBR & 0.055 & 0.057 & 0.048 & 0.042 & 0.050 & 0.053 & 0.052 & 0.052 \\ \hline 4 & SRS & 0.019 & 0.019 & 0.038 & 0.032 & 0.046 & 0.045 & 0.042 & 0.042 \\ & SBR & 0.022 & 0.022 & 0.044 & 0.028 & 0.045 & 0.044 & 0.038 & 0.042 \\ \end{tabular} \label{tab:200_3_70} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n = 400$, $\tau = 0.25$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.044 & 0.044 & 0.054 & 0.039 & 0.041 & 0.038 & 0.040 & 0.042 \\ & SBR & 0.003 & 0.015 & 0.003 & 0.051 & 0.052 & 0.043 & 0.046 & 0.046 \\ \hline 2 & SRS & 0.034 & 0.034 & 0.057 & 0.058 & 0.054 & 0.062 & 0.058 & 0.053 \\ & SBR & 0.031 & 0.034 & 0.040 & 0.044 & 0.049 & 0.051 & 0.051 & 0.051 \\ \hline 3 & SRS & 0.037 & 0.037 & 0.029 & 0.034 & 0.036 & 0.033 & 0.033 & 0.039 \\ & SBR & 0.045 & 0.049 & 0.037 & 0.037 & 0.042 & 0.044 & 0.040 & 0.041 \\ \hline 4 & SRS & 0.073 & 0.073 & 0.044 & 0.054 & 0.046 & 0.045 & 0.048 & 0.041 \\ & SBR & 0.065 & 0.076 & 0.036 & 0.014 & 0.060 & 0.058 & 0.062 & 0.060 \\ \end{tabular} \label{tab:400_1_70} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n = 400$, $\tau = 0.5$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & 
\multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.044 & 0.044 & 0.051 & 0.037 & 0.039 & 0.048 & 0.036 & 0.037 \\ & SBR & 0.001 & 0.002 & 0.000 & 0.035 & 0.039 & 0.035 & 0.040 & 0.040 \\ \hline 2 & SRS & 0.062 & 0.062 & 0.062 & 0.049 & 0.049 & 0.059 & 0.041 & 0.048 \\ & SBR & 0.015 & 0.029 & 0.015 & 0.034 & 0.040 & 0.040 & 0.042 & 0.037 \\ \hline 3 & SRS & 0.007 & 0.007 & 0.039 & 0.036 & 0.042 & 0.042 & 0.042 & 0.047 \\ & SBR & 0.006 & 0.006 & 0.035 & 0.037 & 0.036 & 0.037 & 0.041 & 0.037 \\ \hline 4 & SRS & 0.013 & 0.013 & 0.046 & 0.029 & 0.061 & 0.053 & 0.035 & 0.054 \\ & SBR & 0.009 & 0.010 & 0.033 & 0.025 & 0.056 & 0.054 & 0.052 & 0.050 \\ \end{tabular} \label{tab:400_2_2_70} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n = 400$, $\tau = 0.75$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.049 & 0.049 & 0.053 & 0.046 & 0.050 & 0.043 & 0.048 & 0.050 \\ & SBR & 0.001 & 0.006 & 0.002 & 0.038 & 0.041 & 0.037 & 0.036 & 0.036 \\ \hline 2 & SRS & 0.050 & 0.050 & 0.065 & 0.050 & 0.049 & 0.056 & 0.052 & 0.052 \\ & SBR & 0.010 & 0.019 & 0.015 & 0.041 & 0.048 & 0.042 & 0.041 & 0.041 \\ \hline 3 & SRS & 0.044 & 0.044 & 0.031 & 0.042 & 0.039 & 0.032 & 0.038 & 0.039 \\ & SBR & 0.057 & 0.059 & 0.040 & 0.036 & 0.044 & 0.043 & 0.043 & 0.043 \\ \hline 4 & SRS & 0.034 & 0.034 & 0.051 & 0.046 & 0.049 & 0.051 & 0.046 & 0.051 \\ & SBR & 0.028 & 0.028 & 0.044 & 0.040 & 0.045 & 0.045 & 0.045 & 0.046 \\ \end{tabular} \label{tab:400_3_70} \end{table} \subsection{QTE, $H_1$, $\pi = 0.7$} \begin{table}[H] \centering \caption{$H_1$, $n = 200$, $\tau = 0.25$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.152 & 0.152 & 0.176 & 0.359 & 0.313 & 0.187 & 0.343 & 0.339 \\ & SBR & 0.065 & 0.186 & 0.100 & 0.346 & 0.336 & 0.357 & 0.341 & 0.338 \\ \hline 2 & SRS & 0.314 & 0.314 & 0.334 & 0.361 & 0.325 & 0.347 & 0.367 & 0.365 \\ & SBR & 0.309 & 0.334 & 0.336 & 0.355 & 0.368 & 0.383 & 0.375 & 0.376 \\ \hline 3 & SRS & 0.704 & 0.704 & 0.671 & 0.665 & 0.626 & 0.685 & 0.663 & 0.691 \\ & SBR & 0.697 & 0.716 & 0.663 & 0.671 & 0.669 & 0.702 & 0.686 & 0.688 \\ \hline 4 & SRS & 0.136 & 0.136 & 0.097 & 0.094 & 0.129 & 0.106 & 0.093 & 0.122 \\ & SBR & 0.116 & 0.127 & 0.081 & 0.050 & 0.103 & 0.107 & 0.105 & 0.106 \\ \end{tabular} \label{tab:200_70_1'} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n = 200$, $\tau = 0.5$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.170 & 0.170 & 0.172 & 0.411 & 0.425 & 0.167 & 0.407 & 0.406 \\ & SBR & 0.043 & 0.212 & 0.060 & 0.445 & 0.455 & 0.457 & 0.435 & 0.434 \\ \hline 2 & SRS & 0.287 & 0.287 & 
0.280 & 0.371 & 0.364 & 0.275 & 0.374 & 0.360 \\ & SBR & 0.258 & 0.327 & 0.236 & 0.367 & 0.387 & 0.372 & 0.383 & 0.381 \\ \hline 3 & SRS & 0.771 & 0.771 & 0.891 & 0.882 & 0.903 & 0.895 & 0.883 & 0.894 \\ & SBR & 0.760 & 0.769 & 0.892 & 0.896 & 0.911 & 0.901 & 0.904 & 0.900 \\ \hline 4 & SRS & 0.145 & 0.145 & 0.265 & 0.218 & 0.305 & 0.264 & 0.241 & 0.301 \\ & SBR & 0.128 & 0.136 & 0.235 & 0.177 & 0.288 & 0.290 & 0.284 & 0.287 \\ \end{tabular} \label{tab:200_70_2'} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n = 200$, $\tau = 0.75$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.181 & 0.181 & 0.183 & 0.342 & 0.340 & 0.188 & 0.340 & 0.338 \\ & SBR & 0.072 & 0.175 & 0.076 & 0.353 & 0.364 & 0.342 & 0.357 & 0.357 \\ \hline 2 & SRS & 0.279 & 0.279 & 0.321 & 0.404 & 0.427 & 0.341 & 0.400 & 0.427 \\ & SBR & 0.243 & 0.341 & 0.293 & 0.430 & 0.451 & 0.430 & 0.454 & 0.435 \\ \hline 3 & SRS & 0.662 & 0.662 & 0.586 & 0.559 & 0.599 & 0.605 & 0.569 & 0.592 \\ & SBR & 0.631 & 0.639 & 0.572 & 0.564 & 0.597 & 0.594 & 0.601 & 0.598 \\ \hline 4 & SRS & 0.150 & 0.150 & 0.201 & 0.164 & 0.199 & 0.208 & 0.189 & 0.211 \\ & SBR & 0.143 & 0.145 & 0.193 & 0.166 & 0.206 & 0.206 & 0.208 & 0.205 \\ \end{tabular} \label{tab:200_70_3'} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n = 400$, $\tau = 0.25$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.181 & 0.181 & 0.192 & 0.351 & 0.354 & 0.202 & 0.346 & 0.351 \\ & SBR & 0.083 & 0.233 & 0.113 & 0.392 & 0.392 & 0.407 & 0.394 & 0.392 \\ \hline 2 & SRS & 0.362 & 0.362 & 0.406 & 0.403 & 0.415 & 0.408 & 0.415 & 0.424 \\ & SBR & 0.350 & 0.381 & 0.388 & 0.412 & 0.426 & 0.426 & 0.422 & 0.419 \\ \hline 3 & SRS & 0.781 & 0.781 & 0.743 & 0.751 & 0.758 & 0.746 & 0.750 & 0.759 \\ & SBR & 0.791 & 0.797 & 0.752 & 0.765 & 0.777 & 0.781 & 0.778 & 0.779 \\ \hline 4 & SRS & 0.160 & 0.160 & 0.082 & 0.072 & 0.112 & 0.097 & 0.095 & 0.116 \\ & SBR & 0.133 & 0.154 & 0.091 & 0.044 & 0.119 & 0.119 & 0.121 & 0.120 \\ \end{tabular} \label{tab:400_70_1'} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n = 400$, $\tau = 0.5$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.184 & 0.184 & 0.187 & 0.468 & 0.479 & 0.194 & 0.460 & 0.466 \\ & SBR & 0.042 & 0.220 & 0.059 & 0.486 & 0.498 & 0.505 & 0.480 & 0.482 \\ \hline 2 & SRS & 0.322 & 0.322 & 0.298 & 0.405 & 0.404 & 0.303 & 0.412 & 0.400 \\ & SBR & 0.262 & 0.342 & 0.237 & 0.376 & 0.399 & 0.385 & 0.389 & 0.389 \\ \hline 3 & SRS & 0.867 & 0.867 & 0.939 & 0.930 & 0.933 & 0.941 & 0.932 & 0.936 \\ & SBR & 0.883 & 0.888 & 0.948 & 0.952 & 0.952 & 0.955 & 0.952 & 0.952 \\ \hline 4 & SRS & 0.209 & 0.209 & 0.327 & 0.275 & 0.354 & 0.341 & 0.308 & 0.351 \\ & SBR & 0.194 & 0.217 & 0.310 & 0.256 & 0.365 & 0.364 & 0.359 & 
0.356 \\ \end{tabular} \label{tab:400_70_2'} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n = 400$, $\tau = 0.75$} \begin{tabular}{l|l|cccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{l}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.217 & 0.217 & 0.224 & 0.411 & 0.409 & 0.219 & 0.411 & 0.408 \\ & SBR & 0.103 & 0.246 & 0.107 & 0.419 & 0.418 & 0.400 & 0.421 & 0.420 \\ \hline 2 & SRS & 0.335 & 0.335 & 0.378 & 0.485 & 0.505 & 0.384 & 0.468 & 0.501 \\ & SBR & 0.278 & 0.384 & 0.329 & 0.479 & 0.500 & 0.487 & 0.504 & 0.493 \\ \hline 3 & SRS & 0.708 & 0.708 & 0.661 & 0.628 & 0.665 & 0.665 & 0.629 & 0.672 \\ & SBR & 0.705 & 0.706 & 0.652 & 0.631 & 0.665 & 0.673 & 0.672 & 0.673 \\ \hline 4 & SRS & 0.205 & 0.205 & 0.226 & 0.221 & 0.245 & 0.234 & 0.234 & 0.240 \\ & SBR & 0.205 & 0.205 & 0.249 & 0.209 & 0.248 & 0.258 & 0.256 & 0.258 \\ \end{tabular} \label{tab:400_70_3'} \end{table} \subsection{ATE, $\pi = 0.5$} \begin{table}[H] \centering \caption{$H_0$, $n=200$, $\pi = 0.5$} \begin{tabular}{c|c|ccccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{sfe/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.059 & 0.057 & 0.051 & 0.061 & 0.055 & 0.057 & 0.053 & 0.048 & 0.049 \\ & WEI & 0.006 & 0.048 & 0.062 & 0.004 & 0.068 & 0.068 & 0.051 & 0.065 & 0.065 \\ & BCD & 0.001 & 0.089 & 0.056 & 0.000 & 0.058 & 0.058 & 0.071 & 0.056 & 0.056 \\ & SBR & 0.000 & 0.067 & 0.061 & 0.000 & 0.064 & 0.064 & 0.059 & 0.061 & 0.061 \\ \hline 2 & SRS & 0.062 & 0.061 & 0.061 & 0.061 & 0.059 & 0.062 & 0.060 & 0.057 & 0.059 \\ & WEI & 0.027 & 0.060 & 0.050 & 0.029 & 0.046 & 0.054 & 0.057 & 0.052 & 0.053 \\ & BCD & 0.014 & 0.058 & 0.053 & 0.016 & 0.053 & 0.052 & 0.052 & 0.052 & 0.049 \\ & SBR & 0.006 & 0.045 & 0.044 & 0.006 & 0.045 & 0.045 & 0.045 & 0.045 & 0.045 \\ \hline 3 & SRS & 0.057 & 0.056 & 0.068 & 0.055 & 0.061 & 0.061 & 0.056 & 0.064 & 0.065 \\ & WEI & 0.049 & 0.050 & 0.057 & 0.052 & 0.057 & 0.056 & 0.048 & 0.053 & 0.053 \\ & BCD & 0.057 & 0.058 & 0.057 & 0.057 & 0.063 & 0.063 & 0.057 & 0.056 & 0.057 \\ & SBR & 0.055 & 0.058 & 0.056 & 0.057 & 0.060 & 0.061 & 0.055 & 0.055 & 0.055 \\ \hline 4 & SRS & 0.066 & 0.067 & 0.077 & 0.068 & 0.069 & 0.063 & 0.063 & 0.070 & 0.063 \\ & WEI & 0.065 & 0.067 & 0.070 & 0.066 & 0.067 & 0.068 & 0.069 & 0.067 & 0.070 \\ & BCD & 0.068 & 0.068 & 0.067 & 0.065 & 0.061 & 0.068 & 0.065 & 0.065 & 0.065 \\ & SBR & 0.055 & 0.055 & 0.055 & 0.057 & 0.057 & 0.058 & 0.057 & 0.057 & 0.057 \\ \end{tabular} \label{tab:ate_200_50} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n=200$, $\pi = 0.5$} \begin{tabular}{c|c|ccccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{sfe/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.062 & 0.058 & 0.047 & 0.064 & 0.054 & 0.054 & 0.062 & 0.048 & 0.047 \\ & WEI & 0.006 & 0.056 & 0.049 & 0.007 & 0.052 & 0.052 & 0.062 & 0.050 & 0.051 \\ & BCD & 0.000 & 0.063 & 0.054 & 0.000 & 0.055 & 0.055 & 0.044 & 
0.054 & 0.054 \\ & SBR & 0.000 & 0.053 & 0.054 & 0.000 & 0.056 & 0.057 & 0.054 & 0.055 & 0.055 \\ 2 & SRS & 0.049 & 0.048 & 0.039 & 0.048 & 0.043 & 0.046 & 0.049 & 0.042 & 0.045 \\ & WEI & 0.018 & 0.050 & 0.049 & 0.018 & 0.047 & 0.048 & 0.051 & 0.044 & 0.043 \\ & BCD & 0.012 & 0.050 & 0.043 & 0.012 & 0.043 & 0.045 & 0.046 & 0.042 & 0.042 \\ & SBR & 0.008 & 0.049 & 0.049 & 0.009 & 0.047 & 0.047 & 0.050 & 0.049 & 0.049 \\ 3 & SRS & 0.048 & 0.050 & 0.051 & 0.049 & 0.050 & 0.049 & 0.054 & 0.050 & 0.053 \\ & WEI & 0.049 & 0.049 & 0.050 & 0.047 & 0.047 & 0.047 & 0.051 & 0.051 & 0.051 \\ & BCD & 0.045 & 0.049 & 0.048 & 0.048 & 0.050 & 0.046 & 0.049 & 0.049 & 0.049 \\ & SBR & 0.051 & 0.053 & 0.052 & 0.050 & 0.055 & 0.054 & 0.057 & 0.057 & 0.057 \\ 4 & SRS & 0.057 & 0.056 & 0.058 & 0.053 & 0.056 & 0.056 & 0.055 & 0.055 & 0.055 \\ & WEI & 0.059 & 0.059 & 0.061 & 0.056 & 0.057 & 0.061 & 0.058 & 0.063 & 0.060 \\ & BCD & 0.050 & 0.050 & 0.051 & 0.054 & 0.053 & 0.053 & 0.052 & 0.052 & 0.052 \\ & SBR & 0.058 & 0.058 & 0.059 & 0.058 & 0.055 & 0.059 & 0.058 & 0.057 & 0.057 \\ \end{tabular} \label{tab:ate_200_50'} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n=400$, $\pi = 0.5$} \begin{tabular}{c|c|ccccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{sfe/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.063 & 0.061 & 0.042 & 0.063 & 0.043 & 0.045 & 0.055 & 0.042 & 0.042 \\ & WEI & 0.005 & 0.050 & 0.050 & 0.006 & 0.052 & 0.052 & 0.052 & 0.050 & 0.050 \\ & BCD & 0.000 & 0.067 & 0.052 & 0.000 & 0.059 & 0.059 & 0.051 & 0.059 & 0.059 \\ & SBR & 0.000 & 0.059 & 0.058 & 0.000 & 0.057 & 0.057 & 0.063 & 0.060 & 0.060 \\ \hline 2 & SRS & 0.061 & 0.057 & 0.055 & 0.058 & 0.055 & 0.054 & 0.061 & 0.054 & 0.051 \\ & WEI & 0.018 & 0.051 & 0.064 & 0.019 & 0.063 & 0.064 & 0.052 & 0.064 & 0.064 \\ & BCD & 0.009 & 0.045 & 0.046 & 0.006 & 0.046 & 0.047 & 0.043 & 0.049 & 0.049 \\ & SBR & 0.014 & 0.062 & 0.060 & 0.016 & 0.065 & 0.065 & 0.063 & 0.063 & 0.063 \\ \hline 3 & SRS & 0.050 & 0.049 & 0.050 & 0.050 & 0.049 & 0.051 & 0.052 & 0.048 & 0.048 \\ & WEI & 0.046 & 0.047 & 0.049 & 0.047 & 0.046 & 0.047 & 0.048 & 0.047 & 0.046 \\ & BCD & 0.049 & 0.049 & 0.049 & 0.049 & 0.050 & 0.050 & 0.050 & 0.050 & 0.050 \\ & SBR & 0.055 & 0.056 & 0.056 & 0.059 & 0.058 & 0.059 & 0.055 & 0.056 & 0.056 \\ \hline 4 & SRS & 0.057 & 0.057 & 0.055 & 0.056 & 0.056 & 0.059 & 0.054 & 0.051 & 0.056 \\ & WEI & 0.051 & 0.051 & 0.053 & 0.052 & 0.054 & 0.054 & 0.051 & 0.051 & 0.052 \\ & BCD & 0.056 & 0.056 & 0.056 & 0.054 & 0.056 & 0.056 & 0.054 & 0.053 & 0.053 \\ & SBR & 0.056 & 0.058 & 0.058 & 0.055 & 0.056 & 0.057 & 0.057 & 0.057 & 0.057 \\ \end{tabular} \label{tab:ate_400_50} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n=400$, $\pi = 0.5$} \begin{tabular}{c|c|ccccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{sfe/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.422 & 0.422 & 0.964 & 0.416 & 0.968 & 0.966 & 0.415 & 0.964 & 0.962 \\ & WEI & 0.387 & 0.732 & 0.969 & 0.393 & 0.969 & 0.969 & 0.732 & 0.967 & 0.968 \\ & BCD & 0.341 & 0.962 & 0.971 & 0.350 & 0.969 & 0.968 & 0.955 & 0.968 & 
0.968 \\ & SBR & 0.357 & 0.967 & 0.967 & 0.368 & 0.966 & 0.966 & 0.967 & 0.965 & 0.965 \\ \hline 2 & SRS & 0.572 & 0.568 & 0.806 & 0.579 & 0.795 & 0.805 & 0.568 & 0.796 & 0.805 \\ & WEI & 0.577 & 0.723 & 0.813 & 0.575 & 0.814 & 0.810 & 0.728 & 0.811 & 0.808 \\ & BCD & 0.606 & 0.809 & 0.813 & 0.618 & 0.817 & 0.821 & 0.802 & 0.810 & 0.810 \\ & SBR & 0.601 & 0.828 & 0.829 & 0.603 & 0.832 & 0.836 & 0.830 & 0.834 & 0.834 \\ \hline 3 & SRS & 0.804 & 0.801 & 0.803 & 0.798 & 0.798 & 0.799 & 0.804 & 0.803 & 0.803 \\ & WEI & 0.804 & 0.804 & 0.806 & 0.802 & 0.800 & 0.803 & 0.803 & 0.803 & 0.803 \\ & BCD & 0.816 & 0.818 & 0.820 & 0.822 & 0.825 & 0.825 & 0.819 & 0.819 & 0.819 \\ & SBR & 0.821 & 0.823 & 0.823 & 0.816 & 0.820 & 0.819 & 0.822 & 0.822 & 0.822 \\ \hline 4 & SRS & 0.228 & 0.230 & 0.229 & 0.225 & 0.227 & 0.228 & 0.234 & 0.226 & 0.226 \\ & WEI & 0.229 & 0.230 & 0.230 & 0.225 & 0.223 & 0.228 & 0.233 & 0.235 & 0.234 \\ & BCD & 0.221 & 0.224 & 0.225 & 0.227 & 0.225 & 0.231 & 0.231 & 0.231 & 0.233 \\ & SBR & 0.224 & 0.226 & 0.225 & 0.224 & 0.225 & 0.230 & 0.235 & 0.235 & 0.235 \\ \end{tabular} \label{tab:ate_400_50'} \end{table} \subsection{ATE, $\pi = 0.7$} \begin{table}[H] \centering \caption{$H_0$, $n=200$, $\pi = 0.7$} \begin{tabular}{c|c|ccccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{sfe/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.050 & 0.045 & 0.056 & 0.051 & 0.056 & 0.062 & 0.046 & 0.054 & 0.055 \\ & SBR & 0.000 & 0.004 & 0.051 & 0.000 & 0.061 & 0.064 & 0.064 & 0.060 & 0.059 \\ \hline 2 & SRS & 0.048 & 0.055 & 0.074 & 0.055 & 0.049 & 0.056 & 0.045 & 0.049 & 0.057 \\ & SBR & 0.013 & 0.030 & 0.041 & 0.013 & 0.024 & 0.051 & 0.056 & 0.049 & 0.051 \\ \hline 3 & SRS & 0.059 & 0.060 & 0.066 & 0.060 & 0.060 & 0.064 & 0.058 & 0.055 & 0.064 \\ & SBR & 0.051 & 0.053 & 0.052 & 0.053 & 0.045 & 0.057 & 0.056 & 0.056 & 0.055 \\ \hline 4 & SRS & 0.057 & 0.057 & 0.056 & 0.058 & 0.056 & 0.068 & 0.054 & 0.057 & 0.058 \\ & SBR & 0.047 & 0.050 & 0.044 & 0.051 & 0.037 & 0.054 & 0.054 & 0.055 & 0.055 \\ \end{tabular} \label{tab:ate_200_70} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n=200$, $\pi = 0.7$} \begin{tabular}{c|c|ccccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{sfe/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.329 & 0.328 & 0.934 & 0.336 & 0.943 & 0.946 & 0.326 & 0.941 & 0.941 \\ & SBR & 0.220 & 0.631 & 0.938 & 0.233 & 0.946 & 0.949 & 0.932 & 0.943 & 0.943 \\ \hline 2 & SRS & 0.581 & 0.578 & 0.687 & 0.582 & 0.619 & 0.756 & 0.571 & 0.601 & 0.758 \\ & SBR & 0.598 & 0.699 & 0.747 & 0.599 & 0.686 & 0.768 & 0.752 & 0.766 & 0.764 \\ \hline 3 & SRS & 0.773 & 0.779 & 0.758 & 0.769 & 0.741 & 0.784 & 0.773 & 0.729 & 0.782 \\ & SBR & 0.771 & 0.773 & 0.772 & 0.777 & 0.763 & 0.782 & 0.782 & 0.780 & 0.781 \\ \hline 4 & SRS & 0.149 & 0.154 & 0.121 & 0.153 & 0.140 & 0.168 & 0.154 & 0.141 & 0.165 \\ & SBR & 0.144 & 0.151 & 0.129 & 0.153 & 0.118 & 0.175 & 0.172 & 0.170 & 0.169 \\ \end{tabular} \label{tab:ate_200_70'} \end{table} \begin{table}[H] \centering \caption{$H_0$, $n=400$, $\pi = 0.7$} \begin{tabular}{c|c|ccccccccc} 
\multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{sfe/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.062 & 0.059 & 0.065 & 0.061 & 0.056 & 0.056 & 0.062 & 0.060 & 0.061 \\ & SBR & 0.000 & 0.000 & 0.034 & 0.000 & 0.039 & 0.040 & 0.045 & 0.045 & 0.044 \\ \hline 2 & SRS & 0.052 & 0.050 & 0.087 & 0.054 & 0.055 & 0.052 & 0.050 & 0.057 & 0.051 \\ & SBR & 0.013 & 0.029 & 0.040 & 0.012 & 0.027 & 0.044 & 0.042 & 0.044 & 0.042 \\ \hline 3 & SRS & 0.042 & 0.041 & 0.049 & 0.045 & 0.043 & 0.052 & 0.040 & 0.040 & 0.046 \\ & SBR & 0.028 & 0.028 & 0.031 & 0.029 & 0.025 & 0.032 & 0.035 & 0.036 & 0.034 \\ \hline 4 & SRS & 0.053 & 0.055 & 0.043 & 0.058 & 0.053 & 0.058 & 0.055 & 0.050 & 0.056 \\ & SBR & 0.050 & 0.051 & 0.043 & 0.051 & 0.035 & 0.054 & 0.055 & 0.055 & 0.053 \\ \end{tabular} \label{tab:ate_400_70} \end{table} \begin{table}[H] \centering \caption{$H_1$, $n=400$, $\pi = 0.7$} \begin{tabular}{c|c|ccccccccc} \multicolumn{1}{c}{M} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{s/naive} & \multicolumn{1}{c}{s/adj} & \multicolumn{1}{c}{sfe/adj} & \multicolumn{1}{c}{s/W} & \multicolumn{1}{c}{sfe/W} & \multicolumn{1}{c}{ipw/W} & \multicolumn{1}{c}{s/CA } & \multicolumn{1}{c}{sfe/CA} & \multicolumn{1}{c}{ipw/CA} \\ \hline 1 & SRS & 0.384 & 0.380 & 0.972 & 0.381 & 0.971 & 0.976 & 0.382 & 0.970 & 0.973 \\ & SBR & 0.250 & 0.736 & 0.970 & 0.254 & 0.972 & 0.972 & 0.967 & 0.973 & 0.974 \\ \hline 2 & SRS & 0.616 & 0.628 & 0.753 & 0.622 & 0.693 & 0.796 & 0.617 & 0.690 & 0.795 \\ & SBR & 0.659 & 0.759 & 0.806 & 0.665 & 0.740 & 0.827 & 0.817 & 0.827 & 0.827 \\ \hline 3 & SRS & 0.818 & 0.817 & 0.805 & 0.812 & 0.793 & 0.821 & 0.816 & 0.793 & 0.829 \\ & SBR & 0.833 & 0.838 & 0.836 & 0.831 & 0.824 & 0.840 & 0.838 & 0.839 & 0.837 \\ \hline 4 & SRS & 0.177 & 0.172 & 0.145 & 0.180 & 0.162 & 0.195 & 0.181 & 0.171 & 0.186 \\ & SBR & 0.181 & 0.190 & 0.164 & 0.184 & 0.142 & 0.202 & 0.202 & 0.202 & 0.200 \\ \end{tabular} \label{tab:ate_400_70'} \end{table} \end{document}
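The following minimal sketch (ours, not the authors' code) illustrates the interquantile-range bootstrap standard error $\sigma_B$ (and, identically, $\sigma_{CA}$) used by the ``s/W'', ``sfe/W'', ``ipw/W'' and the corresponding ``/CA'' t-statistics described before the tables; the function name iqr_bootstrap_se and the placeholder normal draws standing in for the bootstrap QR estimates are our own choices.

import numpy as np
from scipy.stats import norm

def iqr_bootstrap_se(beta_boot, lo=0.1, hi=0.9):
    # sigma_B = (Q(0.9) - Q(0.1)) / (Phi^{-1}(0.9) - Phi^{-1}(0.1)),
    # where Q is the empirical quantile function of the bootstrap estimates.
    q_lo, q_hi = np.quantile(beta_boot, [lo, hi])
    return (q_hi - q_lo) / (norm.ppf(hi) - norm.ppf(lo))

# Placeholder draws; in the simulations these would be the B = 1000 estimates
# beta^w_{1,b} (weighted bootstrap) or beta^*_{1,b} (covariate-adaptive bootstrap).
rng = np.random.default_rng(0)
beta_boot = rng.normal(loc=0.0, scale=0.3, size=1000)
beta_hat, null_value = 0.05, 0.0  # point estimate and null value (placeholders)
t_stat = (beta_hat - null_value) / iqr_bootstrap_se(beta_boot)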
\begin{document} \pagestyle{plain} \title{\textbf{On the double EPW sextic associated to a Gushel-Mukai fourfold}} \begin{abstract} In analogy to the case of cubic fourfolds, we discuss the conditions under which the double cover $\tilde{Y}_A$ of the EPW sextic hypersurface associated to a Gushel-Mukai fourfold is birationally equivalent to a moduli space of (twisted) stable sheaves on a K3 surface. In particular, we prove that $\tilde{Y}_A$ is birational to the Hilbert scheme of two points on a K3 surface if and only if the Gushel-Mukai fourfold is Hodge-special with discriminant $d$ such that the negative Pell equation $\mathcal{P}_{d/2}(-1)$ is solvable in $\Z$. \end{abstract} \blfootnote{\textup{2010} \textit{Mathematics Subject Classification}: \textup{14J35}, \textup{14J28}, \textup{18E30}.} \blfootnote{\textit{Key words and phrases}: Gushel-Mukai fourfolds, K3 surfaces, Derived categories, Double EPW sextics.} \section{Introduction} The geometry of Gushel-Mukai (GM) varieties has been recently studied by Debarre and Kuznetsov in \cite{DK1}, \cite{DK2}, and from a categorical point of view by Kuznetsov and Perry in \cite{KP}. Of particular interest is the case of GM fourfolds, which are smooth intersections of dimension four of the cone over the Grassmannian $\G(2,5)$ with a quadric hypersurface in an eight-dimensional linear space over $\C$. Indeed, these Fano fourfolds have many similarities with cubic fourfolds; for instance, it is unknown whether the very general GM fourfold is irrational, even though there are rational examples (see \cite[Section 4]{R}, \cite[Section 3]{P} and \cite[Section 7]{DIM}). In \cite{DIM} Debarre, Iliev and Manivel investigated the period map and the period domain of GM fourfolds, in analogy to the work done by Hassett for cubic fourfolds. In particular, they proved that period points of \emph{Hodge-special} GM fourfolds (see Definition \ref{defHodge-spe}) form a countable union of irreducible divisors in the period domain, depending on the discriminant of the possible labellings (see Section 2.3). It is not difficult to check that the discriminant of a Hodge-special GM fourfold is an integer $\equiv 0,2$ or $4 \pmod 8$ (see \cite[Lemma 6.1]{DIM}). Furthermore, the non-special cohomology of a Hodge-special GM fourfold $X$ is Hodge isometric (up to a Tate twist) to the degree-two primitive cohomology of a polarized K3 surface if and only if the discriminant $d$ of $X$ also satisfies the following numerical condition: \begin{equation*}\tag{$\ast\ast$} 8 \nmid d \text{ and the only odd primes which divide }d \text{ are } \equiv 1 \pmod 4. \end{equation*} The first result of this paper is a generalization of the previous property to the twisted case, as done by Huybrechts for cubic fourfolds in \cite{Huy}. \begin{thm} \label{thm3intro} A GM fourfold $X$ has an associated twisted K3 surface in the cohomological sense (see Definition \ref{deftwist}) if and only if the discriminant $d$ of $X$ satisfies \begin{equation*}\tag{$\ast\ast'$} d=\prod_{i}p_i^{n_i} \text{ with } n_i \equiv 0 \pmod 2 \text{ for } p_i \equiv 3 \pmod 4. \end{equation*} \end{thm} On the other hand, a general GM fourfold $X$ has an associated hyperk\"ahler variety, as cubic fourfolds have their Fano variety of lines.
Indeed, $X$ determines a triple $(V_6,V_5,A)$ of Lagrangian data, where $V_6 \supset V_5$ are six- and five-dimensional vector spaces, respectively, and $A \subset \bigwedge^3V_6$ is a Lagrangian subspace with respect to the symplectic structure induced by the wedge product, with no decomposable vectors (see \cite[Theorem 3.16]{DK1}). Conversely, it is possible to reconstruct an ordinary and a special GM variety from Lagrangian data with $A$ containing no decomposable vectors (see \cite[Theorem 3.10 and Proposition 3.13]{DK1}). The data of $A$ determines a stratification by subschemes of the form $Y_A^{\geq 3} \subset Y_A^{\geq 2} \subset Y_A^{\geq 1} \subset \p(V_6)$, where $Y_A^{\geq 1}$ is an Eisenbud-Popescu-Walter (EPW) sextic hypersurface (see Section 2.2). Moreover, if $Y_A^{\geq 3}$ is empty, then the double cover $\tilde{Y}_A$ of the EPW sextic is a hyperk\"ahler fourfold deformation equivalent to the Hilbert scheme of length-two subschemes on a K3 surface. Actually, in order to guarantee the smoothness of $\tilde{Y}_A$, it is enough to avoid the divisor $\mathcal{D}_{10}''$ in the period domain, by \cite[Remark 5.29]{DK2}. The second main result is the following theorem, whose analogue for cubic fourfolds was proven by Addington in \cite{Add}. Let $\lambda_1$ and $\lambda_2$ be the classes in the topological K-theory of a GM fourfold defined in \eqref{basis}. \begin{thm} \label{thm1} Let $X$ be a Hodge-special GM fourfold such that $Y_A^{\geq 3}=\emptyset$. Consider the following propositions: \begin{enumerate} \item[$(\emph{a})$] $X$ has discriminant $d$ satisfying $(\ast\ast)$; \item[$(\emph{b})$] the transcendental lattice $T_X$ is Hodge isometric to the transcendental lattice $T_S(-1)$ for some K3 surface $S$, or equivalently, there is a hyperbolic plane $U=\langle \kappa_1,\kappa_2 \rangle$ primitively embedded in the algebraic part of the Mukai lattice; \item[$(\emph{c})$] $\tilde{Y}_A$ is birational to a moduli space of stable sheaves on $S$. \end{enumerate} Then $(\emph{a})$ implies $(\emph{b})$, and $(\emph{b})$ is equivalent to $(\emph{c})$. Moreover, $(\emph{b})$ implies $(\emph{a})$ if either $H^{2,2}(X,\Z)$ has rank $3$, or there is an element $\tau$ in the hyperbolic plane $U$ such that $\langle \lambda_1, \lambda_2, \tau \rangle$ has discriminant $\equiv 2$ or $4 \pmod 8$. \end{thm} \noindent In Section 3.3 we discuss a counterexample showing that the reverse implication of Theorem \ref{thm1} does not hold in full generality. More precisely, we show that there are GM fourfolds satisfying condition (b), but without a Hodge-associated K3 surface. In particular, we deduce that property (b) is not always divisorial and that there are period points of K3 type corresponding to GM fourfolds without a Hodge-associated K3 surface. We also prove the natural extension of Theorem \ref{thm1} to the twisted case, as in \cite{Huy} for cubic fourfolds. \begin{thm} \label{thm4} Let $X$ be a Hodge-special GM fourfold with discriminant $d$ such that $Y_A^{\geq 3} = \emptyset$. Then $\tilde{Y}_A$ is birational to a moduli space of stable twisted sheaves on a K3 surface $S$ if and only if $d$ satisfies $(\ast\ast')$. \end{thm} Finally, we determine the numerical condition on the discriminant $d$ of a Hodge-special GM fourfold in order to have $\tilde{Y}_A$ birational to the Hilbert scheme $S^{[2]}$ on a K3 surface $S$; this condition is stricter than that of $(\ast\ast)$, as proved in \cite{Add} for cubic fourfolds (see Remark \ref{d=50}).
\begin{thm} \label{thm2} Let $X$ be a Hodge-special GM fourfold of discriminant $d$ such that $Y_A^{\geq 3}=\emptyset$. Then $\tilde{Y}_A$ is birational to the Hilbert square $S^{[2]}$ of a K3 surface $S$ if and only if $d$ satisfies the condition \begin{equation*}\tag{$\ast\ast\ast$} \label{ast3} a^2d=2n^2+2 \quad \text{for some }a,n \in \Z. \end{equation*} \end{thm} The strategy to prove these results relies on the definition of the Mukai lattice for the \emph{Kuznetsov component} (or the K3 category), which is a noncommutative K3 surface arising from the semiorthogonal decomposition of the derived category of a GM variety constructed in \cite{KP} (see Section 2.4). The Mukai lattice is defined as done by Addington and Thomas in \cite{AT} for cubic fourfolds; in fact, we can prove the analogue of their results, using the vanishing lattice of a GM fourfold instead of the primitive degree-four lattice of cubic fourfolds. In particular, following the work of Addington, this allows us to apply Propositions 4 and 5 of \cite{Add} and then to prove Theorems \ref{thm1} and \ref{thm2}. On the other hand, we obtain that if a general GM fourfold has a homological associated K3 surface, then there is a Hodge-theoretic associated K3 surface (see Theorem \ref{thmHomHodge} for a more precise statement). \begin{rw} In \cite[Proposition 2.1]{IM}, Iliev and Madonna prove that if a smooth double EPW sextic is birational to the Hilbert square $S^{[2]}$ on a K3 surface $S$ with polarization of degree-$d$, then the negative Pell equation $\mathcal{P}_{d/2}(-1): n^2-\frac{d}{2}a^2=-1$ is solvable. Thus Theorem \ref{thm2} is consistent with this necessary condition: indeed, condition $(\ast\ast\ast)$ can be rewritten as $n^2-\frac{d}{2}a^2=-1$ (see also Remark \ref{linkIM}). Finally, in \cite[Corollary 7.6]{DM}, Debarre and Macrì prove that the Hilbert square of a general polarized K3 surface of degree-$d$ is isomorphic to a double EPW sextic if and only if the Pell equation $\mathcal{P}_{d/2}(-1)$ is solvable and the equation $\mathcal{P}_{2d}(5): n^2-(2d)a^2=5$ is not. By Theorem \ref{thm2}, we see that the birationality to this Hilbert scheme is obtained by relaxing the second condition on $\mathcal{P}_{2d}(5)$. \end{rw} \begin{plan} In Section 2 we recall some preliminary facts about Hodge-special GM fourfolds and their Kuznetsov component. In Section 3.1 we define the Mukai lattice and we reinterpret the definition of Hodge-special GM fourfolds with a certain discriminant via their Mukai lattice. In Section 3.2 we characterize the condition of having a Hodge-associated K3 surface by the existence of a primitively embedded hyperbolic lattice in the algebraic part of the Mukai lattice, for general Hodge-special GM fourfolds and for non-general GM fourfolds satisfying a precise condition. Then, we prove Theorem \ref{thm3intro}. In Section 3.3 we discuss the construction of GM fourfolds which do not realize the equivalence of Theorem \ref{thm1}. Section 4.1 is devoted to the proofs of Theorem \ref{thm1} and Theorem \ref{thm4}. In Section 4.2 we prove Theorem \ref{thm2}. \end{plan} \section{Background on Gushel-Mukai fourfolds} The aim of this section is to recall some definitions and properties concerning Hodge-special GM fourfolds and to fix the notation. Our main references are \cite{DIM}, \cite{DK1}, \cite{DK2} and \cite{KP}. \subsection{Geometry of GM fourfolds} Let $V_5$ be a $5$-dimensional complex vector space; we denote by $\G(2,V_5)$ the Grassmannian of $2$-dimensional subspaces of $V_5$, viewed in $\p(\bigwedge^2 V_5) \cong \p^9$ via the Pl\"ucker embedding.
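For orientation, we recall the standard dimension counts behind these identifications (elementary facts recorded here for the reader's convenience):
\[
\dim \bigwedge\nolimits^2 V_5=\binom{5}{2}=10, \qquad \dim \G(2,V_5)=2(5-2)=6,
\]
so that $\p(\bigwedge^2 V_5)\cong\p^9$, while the cone over $\G(2,V_5)$ introduced just below sits in a $\p^{10}$ and has dimension $7$; intersecting it with a codimension-two linear subspace and a quadric therefore has expected dimension four.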
Let $\CG(2,V_5) \subset \p(\C \oplus \bigwedge^2 V_5) \cong \p^{10}$ be the cone over $\G(2,V_5)$ with vertex $\nu:=\p(\C)$. \begin{dfn} A \textbf{GM fourfold} is a smooth intersection of dimension four $$X=\CG(2,V_5) \cap \p(W) \cap Q,$$ where $W$ is a $9$-dimensional vector subspace of $\C \oplus \bigwedge^2 V_5$ and $Q$ is a quadric hypersurface in $\p(W)\cong \p^{8}$. \end{dfn} \noindent Notice that $\nu$ does not belong to $X$, because $X$ is smooth. Thus, the linear projection from $\nu$ defines a regular map $$\gamma_X: X \rightarrow \G(2,V_5)$$ called the \textbf{Gushel map}. We denote by $\mathcal{U}_X$ the pullback via $\gamma_X$ of the tautological rank-$2$ subbundle of $\G(2,V_5)$. We can distinguish two cases: \begin{itemize} \item If the linear space $\p(W)$ does not contain $\nu$, then $X$ is isomorphic to a quadric section of the linear section of the Grassmannian $\G(2,V_5)$ given by the projection of $\p(W)$ to $\p(\bigwedge^2 V_5)$. GM fourfolds of this form are \textbf{ordinary}. \item If $\nu$ is in $\p(W)$, the linear space $\p(W)$ is a cone over $\p(W/\C) \cong \p^7 \subset \p(\bigwedge^2 V_5)$. Then, $X$ is a double cover of the linear section $\p(W/\C) \cap \G(2,V_5)$, branched along its intersection with a quadric. In this case, we say that $X$ is \textbf{special}. \end{itemize} We denote by $\sigma_{i,j} \in H^{2(i+j)}(\G(2,V_5),\Z)$ the Schubert cycles on $\G(2,V_5)$ for every $3 \geq i \geq j \geq 0$ and we set $\sigma_i:=\sigma_{i,0}$. The restriction $h:=\gamma_X^*\sigma_1$ of the hyperplane class on $\p(\C \oplus \bigwedge^2 V_5)$ defines a natural polarization of degree-$10$ on $X$. An easy computation using the adjunction formula shows that $X$ is a Fano fourfold with canonical class $-2h$. The moduli stack $\mathcal{M}_4$ of GM fourfolds is a Deligne-Mumford stack of finite type over $\C$, of dimension $24$ (see \cite[Proposition A.2]{KP}). \subsection{Period map and period points} We recall the Hodge numbers of $X$: \[ \begin{tabular}{ccccccccccccccc} &&&&&&&1\\ &&&&&&0&&0&\\ &&&&&0&&1&&0\\ &&&&0&&0&&0&&0\\ &&&0&&1&&22&&1&&0 \end{tabular} \] (see \cite[Lemma 4.1]{IMani}). Notice that $H^{\bullet}(X,\Z)$ is torsion free by \cite[Proposition 3.4]{DK2}. The classes $h^2$ and $\gamma_X^*\sigma_{2}$ span the image of the embedding of the rank-two lattice $H^4(\G(2,V_5),\Z)$ in $H^4(X,\Z)$. The \textbf{vanishing lattice} of $X$ is the sublattice $$H^4(X,\Z)_{00}:=\left\lbrace x \in H^4(X,\Z): x \cdot \gamma_X^{*}(H^4(\G(2,V_5),\Z))=0 \right\rbrace.$$ By \cite[Proposition 5.1]{DIM}, we have an isomorphism of lattices $$H^4(X,\Z)_{00} \cong E_8^2 \oplus U^2 \oplus I_{2,0}(2)=:\Lambda.$$ Here $U$ is the hyperbolic plane $(\Z^2,\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix})$, $E_8$ is the unique even unimodular lattice of signature $(8,0)$ and $I_{r,s}:=I_1^r \oplus I_1(-1)^s$, where $I_1$ is the lattice $\Z$ with bilinear form $(1)$ (given a lattice $L$ and a non-zero integer $m$, we denote by $L(m)$ the lattice $L$ with the intersection form multiplied by $m$). We point out that the lattice $I_{2,0}(2)$ is also denoted by $A_1^2$ in the literature. Let $e$ and $f$ be two classes in $I_{22,2}$ of square $2$ with $e \cdot f=0$, which generate the orthogonal complement of $\Lambda$ in $I_{22,2}$. The choice of an isometry $\phi: H^4(X,\Z)\cong I_{22,2}$ sending $\gamma_X^*\sigma_{1,1}$ and $\gamma_X^*(\sigma_2-\sigma_{1,1})$ to $e$ and $f$ respectively, and such that $\phi(H^4(X,\Z)_{00}) = \Lambda$, determines a \textbf{marking} for $X$.
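For later reference, we also record two elementary consequences of this description (a direct check from the definitions above): the Gram matrix of $I_{2,0}(2)$ is
\[
\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix},
\]
so that $\Lambda$ has signature $(20,2)$ and discriminant group $(\Z/2\Z)^2$, while $e$ and $f$ span a positive definite rank-two lattice; this is consistent with the signature $(22,2)$ of $I_{22,2}$.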
Notice that the Hodge structure on the vanishing lattice is of K3 type. Let $\text{\~{O}}(\Lambda)$ be the subgroup of $\text{O}(\Lambda)$ consisting of the isometries acting trivially on the discriminant group $d(\Lambda)$. The groups $\text{\~{O}}(\Lambda)$ and $\text{O}(\Lambda)$ act properly discontinuously on the complex variety \begin{equation} \label{locperiodom} \Omega:= \lbrace w \in \p(\Lambda \otimes \C): w \cdot w =0, w \cdot \bar{w}<0 \rbrace. \end{equation} The \textbf{global period domain} is the quotient $\mathcal{D}:=\text{\~{O}}(\Lambda) \setminus \Omega$, which is an irreducible quasi-projective variety of dimension $20$. We observe that two markings differ by the action of an element in $\text{\~{O}}(\Lambda)$. It follows that the \textbf{period map} $p: \mathcal{M}_4 \rightarrow \mathcal{D}$, which sends $X$ to the class of the one-dimensional subspace $H^{3,1}(X)$, is well-defined. As a map of stacks, $p$ is dominant with $4$-dimensional smooth fibers (see \cite[Theorem 4.4]{DIM}). The \textbf{period point} of $X$ is the image $p(X)$ in $\mathcal{D}$. As proved in \cite{DK2}, the period point of a general GM fourfold is determined by that of its associated double EPW sextic. More precisely, let $(V_6,V_5,A)$ be the Lagrangian data of $X$, as defined in the introduction. By the work of O'Grady, we can consider the closed subschemes \[ Y_A^{\geq l}:=\lbrace [U_1] \in \p(V_6): \text{dim}(A \cap (U_1 \wedge \bigwedge\nolimits^2 V_6)) \geq l \rbrace \quad \text{for } l \geq 0. \] Since $A$ has no decomposable vectors, $Y_A:=Y_A^{\geq 1}$ is a normal sextic hypersurface, called an EPW sextic, which is singular along the integral surface $Y_A^{\geq 2}$. Moreover, $Y_A^{\geq 3}$ is finite and it is the singular locus of $Y_A^{\geq 2}$, while $Y_A^{\geq 4}$ is empty (see \cite[Proposition B.2]{DK1}). Let $\tilde{Y}_A$ be the double cover of the EPW sextic $Y_A$ branched over $Y_A^{\geq 2}$. If $Y_A^{\geq 3}$ is empty (e.g.\ for generic $A$), then the \textbf{double EPW sextic} $\tilde{Y}_A$ is a smooth hyperk\"ahler fourfold, deformation equivalent to the Hilbert scheme of two points on a K3 surface (see \cite[Theorem 1.1]{OG2}). In this case, the period point of $\tilde{Y}_A$ coincides with $p(X)$, as explained in the following result. \begin{thm}\emph{\cite[Theorem 5.1]{DK2}} \label{periodEPW} Let $X$ be a GM fourfold with associated Lagrangian data $(V_6,V_5,A)$. Assume that the double EPW sextic $\tilde{Y}_A$ is smooth (i.e.\ $Y_A^{\geq 3} = \emptyset$). Then, there is an isometry of Hodge structures $$H^4(X,\Z)_{00} \cong H^2(\tilde{Y}_A,\Z)_0(-1),$$ where $H^2(\tilde{Y}_A,\Z)_0$ is the degree-two primitive cohomology of $\tilde{Y}_A$ equipped with the Beauville-Bogomolov-Fujiki form $q$. \end{thm} \subsection{Hodge-special GM fourfolds} \begin{dfn} \label{defHodge-spe} A GM fourfold $X$ is \textbf{Hodge-special} if $H^{2,2}(X) \cap H^4(X,\Q)_{00} \neq 0$. \end{dfn} \noindent Equivalently, $X$ is Hodge-special if and only if $H^{2,2}(X,\Z)$ contains a rank-three primitive sublattice $K$ containing $\gamma_X^*(H^4(\G(2,V_5),\Z))$. Such a lattice $K$ is a \textbf{labelling} for $X$ and the discriminant of the labelling is the determinant of the intersection matrix on $K$. We say that $X$ \emph{has discriminant $d$} if it has a labelling of discriminant $d$. Notice that $d$ is positive and $d \equiv 0,2$ or $4 \pmod 8$ (see \cite[Lemma 6.1]{DIM}).
More precisely, the period point of a Hodge-special GM fourfold with discriminant $d$ belongs to an irreducible divisor $\mathcal{D}_d$ in $\mathcal{D}$ if $d \equiv 0 \pmod 4$, or to the union of two irreducible divisors $\mathcal{D}_d'$ and $\mathcal{D}_d''$ in $\mathcal{D}$ if $d \equiv 2 \pmod 8$ (see \cite[Corollary 6.3]{DIM}). The hypersurfaces $\mathcal{D}_d'$ and $\mathcal{D}_d''$ are interchanged by the involution $r_{\mathcal{D}}$, defined on $\mathcal{D}$ by exchanging $e$ and $f$. Let $X$ be a Hodge-special GM fourfold with a labelling $K$ of discriminant $d$. The orthogonal $K^{\perp}$ of $K$ in $I_{22,2}$ is the \textbf{non-special lattice} of $X$; it is equipped with a Hodge structure induced by the Hodge structure on $H^4(X,\Z)$. A pseudo-polarized K3 surface $S$ of degree $d$ is \textbf{Hodge-associated} to $(X,K)$ if the non-special cohomology $K^{\perp}$ is Hodge-isometric to the primitive cohomology lattice $H^2(S,\Z)_0(-1)$. As proved in \cite[Proposition 6.5]{DIM}, this is equivalent to $d$ satisfying $(\ast\ast)$. Moreover, if $p(X)$ is not in $\mathcal{D}_8$, then the pseudo-polarization is a polarization (see \cite[Proposition 6.5]{DIM}). \subsection{Kuznetsov component and K-theory} The analogy between GM fourfolds and cubic fourfolds is also reflected in their derived categories. Indeed, we denote by $\D(X)$ the bounded derived category of coherent sheaves on a GM fourfold $X$. By \cite[Proposition 4.2]{KP}, there exists a semiorthogonal decomposition of the form $$\D(X)= \langle \A_X, \mathcal{O}_X, \mathcal{U}_X^*, \mathcal{O}_X(1),\mathcal{U}_X^*(1) \rangle,$$ where $\mathcal{A}_X$ is the right orthogonal to the subcategory generated by the exceptional objects \begin{equation} \label{excol} \mathcal{O}_X,\mathcal{U}_X^*,\mathcal{O}_X(1),\mathcal{U}_X^*(1), \end{equation} in $\D(X)$. We say that $\A_X$ is the \textbf{Kuznetsov component} (or the \textbf{K3 category}) associated to $X$. The Kuznetsov component has the same Serre functor and the same Hochschild homology as the derived category of a K3 surface (see \cite[Proposition 4.5 and Lemma 5.3]{KP}). In particular, the Kuznetsov component can be viewed as a noncommutative K3 surface. Moreover, if $X$ is an ordinary GM fourfold containing a quintic del Pezzo surface, then there exist a K3 surface $S$ and an equivalence $\A_X \xrightarrow{\sim} \D(S)$ (see \cite[Theorem 1.2]{KP}). We denote by $K_0(\A_X)$ the Grothendieck group of $\A_X$ and by $\chi$ the Euler pairing. The numerical Grothendieck group of $\A_X$ is given by the quotient $K_0(\A_X)_{\text{num}}:=K_0(\A_X)/\ker\chi$. By the additivity with respect to semiorthogonal decompositions, we have the orthogonal direct sum $$K_0(X)_{\text{num}}=K_0(\A_X)_{\text{num}} \oplus \langle [\mathcal{O}_X],[\mathcal{U}_X^*],[\mathcal{O}_X(1)],[\mathcal{U}_X^*(1)] \rangle_{\text{num}} \cong K_0(\A_X)_{\text{num}} \oplus \Z^{4}$$ with respect to $\chi$. In particular, since the Hodge conjecture holds for $X$ (see \cite{CM}), it follows that $$\text{rank}(K_0(\A_X)_{\text{num}})=\sum_k h^{k,k}(X,\Q) - 4=4+h^{2,2}(X,\Q)-4= h^{2,2}(X,\Q).$$ We recall the following lemma, which will be useful to study the relation between the Mukai lattice of $\A_X$ and the vanishing cohomology of $X$.
\begin{lemma}\emph{\cite[Lemma 5.14 and Lemma 5.16]{KP}} \label{lemmabase} If $X$ is a non Hodge-special GM fourfold, then $K_0(\A_X)_{\emph{num}} \cong \Z^2$ and it admits a basis such that the Euler form with respect to this basis is given by \[ \begin{pmatrix} -2 & 0 \\ 0 & -2 \end{pmatrix}. \] \end{lemma} We end this section with the explicit computation of the basis of Lemma \ref{lemmabase}. The Todd class of a GM fourfold $X$ is $$\td(X)=1+h+\left( \frac{2}{3}h^2-\frac{1}{12}\gamma_X^*\sigma_2\right) +\frac{17}{60}h^3 + \frac{1}{10}h^4.$$ Let $P$ be a point in $X$, $L$ be a line lying on $X$, $\Sigma$ be the zero locus of a regular section of $\mathcal{U}_X^{\ast}$, $S$ be the complete intersection of two hyperplanes in $X$ and $H$ be a hyperplane section of $X$. Since $X$ is not Hodge-special, the structure sheaves of these subvarieties give a basis for the numerical Grothendieck group. Thus, an element $\kappa$ in $K_0(X)_{\text{num}}$ can be written as $$\kappa=a[\mathcal{O}_X]+ b[\mathcal{O}_H]+c[\mathcal{O}_S]+d[\mathcal{O}_{\Sigma}] +e[\mathcal{O}_L]+f[\mathcal{O}_P],$$ for $a,b,c,d,e,f \in \Z$. A computation using Riemann-Roch gives that $\kappa$ belongs to $K_0(\A_X)$ if and only if it is a linear combination of the following classes: \begin{align} \label{basis} \lambda_1 &:=2[\mathcal{O}_X]-[\mathcal{O}_{\Sigma}]-2[\mathcal{O}_L]+[\mathcal{O}_P] \\ \lambda_2 &:=4[\mathcal{O}_X]-2[\mathcal{O}_H]-[\mathcal{O}_S]-5[\mathcal{O}_L]+5[\mathcal{O}_P].\notag \end{align} It is easy to verify that the matrix they define with respect to the Euler form is as in Lemma \ref{lemmabase}. \begin{rmk} Let $C$ be a generic conic in a GM fourfold $X$; we denote by $\mathcal{O}_C$ its structure sheaf. Notice that $\lambda_1$ is the class in the K-theory of $X$ of the projection of $\mathcal{O}_C(1)$ in $\A_X$. Indeed, the projection $\text{pr}: \D(X) \to \A_X$ is given by the composition $\text{pr}:=\mathbb{L}_{\mathcal{O}_X}\mathbb{L}_{\mathcal{U}_X^*}\mathbb{L}_{\mathcal{O}_X(1)}\mathbb{L}_{\mathcal{U}^*_X(1)}$ of the left mutation functors with respect to the exceptional objects (see for example \cite{BLMS}, Section 3 for the definition of left mutation). Performing this computation, we get that $$[\text{pr}(\mathcal{O}_C(1))]=[\mathcal{O}_C(1)]-[\mathcal{O}_X(1)]+[\mathcal{U}_X^*]+[\mathcal{O}_X],$$ which has the same Chern character of $\lambda_1$. The second element $\lambda_2$ should be the class of an object in $\A_X$ obtained as the image of $\text{pr}(\mathcal{O}_C(1))$ via an autoequivalence of $\A_X$. \end{rmk} \section{Mukai lattice for the Kuznetsov component} In this section we describe the Mukai lattice of the Kuznetsov component. The main results of Section 3.1 are Proposition \ref{mukaivsvan}, where we prove that the vanishing lattice is Hodge isometric to the orthogonal of the lattice generated by $\lambda_1$ and $\lambda_2$ in the Mukai lattice, and Corollary \ref{correticoli}, where we determine Hodge-special GM fourfolds by their Mukai lattice. In Section 3.2 we relate the condition of having an associated K3 surface with the Mukai lattice (Theorem \ref{thmK3U}); as a consequence, we get Theorem \ref{thmHomHodge}, where we prove that the existence of a homological associated K3 surface implies that there is a Hodge-theoretic associated K3 surface for general Hodge-special GM fourfolds. Then we prove Theorem \ref{thm3intro}. We follow the methods introduced in \cite{AT} and \cite{Huy} for cubic fourfolds. 
\subsection{Mukai lattice and vanishing lattice} Let $X$ be a GM fourfold. We denote by $K(X)_{\text{top}}$ the topological K-theory of $X$, which is endowed with the Euler pairing $\chi$. As recalled in Section 2.2, the group $H^{\bullet}(X,\Z)$ is torsion-free; by \cite[Section 2.5]{AH} (see also \cite{AT}, Theorem 2.1), it follows that $K(X)_{\text{top}}$ is torsion-free as well. Inspired by \cite{AT}, we define the \textbf{Mukai lattice} of the Kuznetsov component $\A_X$ as the abelian group $$\KT:=\lbrace \kappa \in K(X)_{\text{top}}: \chi([\mathcal{O}_X(i)],\kappa)=\chi([\mathcal{U}_X^*(i)],\kappa)=0 \, \text{ for }i=0,1 \rbrace$$ with the Euler form $\chi$. We point out that $\KT$ is torsion-free, because $K(X)_{\text{top}}$ is. We recall that the Mukai vector of an element $\kappa$ of $K(X)_{\text{top}}$ is given by $$v(\kappa)=\ch(\kappa)\cdot\sqrt{\td(X)}$$ and it induces an isomorphism of $\Q$-vector spaces $v: K(X)_{\text{top}}\otimes \Q \cong H^{\bullet}(X,\Q)$. We define the weight-zero Hodge structure on the Mukai lattice by pulling back the natural Hodge structure via the isomorphism $$\KT \otimes \C \rightarrow \bigoplus_{p=0}^4 H^{2p}(X,\C)(p)$$ induced by $v$. It is also convenient to consider the Mukai lattice $\KT(-1)$ with weight-two Hodge structure $\bigoplus_{p+q=2}\tilde{H}^{p,q}(\A_X)$ and Euler form with reversed sign. In the following, we will use both conventions according to the situation. The \emph{Néron-Severi lattice} of $\A_X$ is $$N(\A_X)= \tilde{H}^{1,1}(\A_X,\Z):= \tilde{H}^{1,1}(\A_X)\cap \KT$$ and the \emph{transcendental lattice} $T(\A_X)$ is the orthogonal complement of the Néron-Severi lattice with respect to $\chi$. We observe that by \cite[Theorem 1.2]{KP}, there exist GM fourfolds $X$ such that the associated Kuznetsov component $\A_X$ is equivalent to the derived category of a K3 surface $S$. Moreover, any equivalence $\A_X \xrightarrow{\sim} \D(S)$ induces an isometry of Hodge structures $\KT(-1) \cong K(S)_{\text{top}}$, by the same argument used in \cite[Section 2.3]{AT}. We set $\tilde{\Lambda}:=U^4 \oplus E_8(-1)^2$ and we recall that $K(S)_{\text{top}}$ is isomorphic as a lattice to $\tilde{\Lambda}$. Since the definition of $\KT$ does not depend on $X$ (any two GM fourfolds are deformation equivalent), we deduce that the Euler form is symmetric on $\KT$ and that $\KT$ is isomorphic as a lattice to $\tilde{\Lambda}(-1)= U^4\oplus E_8^2$. We denote by $\langle \lambda_1,\lambda_2 \rangle^{\perp}$ the orthogonal complement, with respect to the Euler pairing, of the sublattice of $\KT$ generated by the objects $\lambda_1, \lambda_2$ determined in \eqref{basis}. In the next result, we explain the relation of this lattice with the vanishing lattice $H^4(X,\Z)_{00}$. \begin{prop} \label{mukaivsvan} Let $X$ be a GM fourfold.
Then the Mukai vector $v$ induces an isometry of Hodge structures $$\langle \lambda_1,\lambda_2 \rangle^{\perp} \cong H^4(X,\Z)_{00}(2)=\langle h^2,\gamma_X^*\sigma_2 \rangle^{\perp}.$$ Moreover, for every set of $n$ objects $\zeta_1,\dots,\zeta_n$ in $K(\A_X)_{\emph{top}}$, the Mukai vector induces the isometry $$\langle \lambda_1,\lambda_2, \zeta_1,\dots,\zeta_n \rangle^{\perp} \cong \langle h^2,\gamma_X^*\sigma_2,c_2(\zeta_1),\dots,c_2(\zeta_n)\rangle^{\perp}.$$ \end{prop} \begin{proof} By definition $\kappa$ belongs to $\langle \lambda_1,\lambda_2 \rangle^{\perp} \subset \KT$ if and only if \begin{equation} \begin{cases} \label{sist2} \chi(\mathcal{O}_X,\kappa)=\chi(\mathcal{O}_X(1),\kappa)=\chi(\mathcal{U}^*_X,\kappa)=\chi(\mathcal{U}^*_X(1),\kappa)=0\\ \chi(\lambda_1,\kappa)=\chi(\lambda_2,\kappa)=0. \end{cases} \end{equation} The Chern character of $\kappa$ has the form $$\ch(\kappa)=k_0+k_2h+k_4+k_6h^3+k_8h^4 \quad \text{for }k_0,k_2,k_6,k_8 \in \Q \text{ and } k_4 \in H^4(X,\Q).$$ Thus, using Riemann-Roch, we can express the conditions \eqref{sist2} as a linear system in the variables $k_0,k_2,k_4 \cdot h^2,k_4 \cdot \gamma_X^*\sigma_{1,1},k_6,k_8$. Since the equations are linearly independent, we obtain that the system \eqref{sist2} has a unique solution, i.e. $$k_0=k_2=k_6=k_8=0 \quad \text{and} \quad k_4 \cdot h^2=k_4 \cdot \gamma_X^*\sigma_{1,1}=0.$$ In particular, $\ch(\kappa)$ belongs to $\langle 1,h,h^2, \gamma_X^*\sigma_2,h^3,h^4 \rangle^{\perp}=\langle h^2,\gamma_X^*\sigma_2\rangle^{\perp}$ in $H^4(X,\Q)$. Since $k_4 \cdot h=0$, $v(\kappa)=k_4$, i.e.\ $v(\kappa)$ is in the sublattice $\langle h^2,\gamma_X^*\sigma_2\rangle^{\perp}$ of $H^4(X,\Q)$. Since the lowest-degree term of the Mukai vector is integral (see \cite[Section 2.5]{AH} and \cite[Proposition 3.4]{DK2}), we conclude that $\kappa$ belongs to $\langle \lambda_1,\lambda_2 \rangle^{\perp}$ if and only if $v(\kappa)$ is in $H^4(X,\Z)_{00}$. By \cite[Section 2.5]{AH}, we have $v: \langle \lambda_1,\lambda_2 \rangle^{\perp} \rightarrow H^4(X,\Z)_{00}(2)$ is injective. It remains to prove the surjectivity. It is possible to argue as in the proof of \cite[Proposition 2.3]{AT} using \cite[Section 2.5]{AH}. We propose an alternative way. We observe that the lattices $\langle \lambda_1,\lambda_2 \rangle^{\perp}$ and $H^4(X,\Z)_{00}$ have both rank-$22$. Notice that $\langle \lambda_1,\lambda_2 \rangle^{\perp}$ has signature $(20,2)$. Moreover, the discriminant group of $\langle \lambda_1,\lambda_2 \rangle^{\perp}$ is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^2$, because the Mukai lattice is unimodular. On the other hand, by Section 2.2 (see \cite[Proposition 5.1]{DIM}), we deduce that $H^4(X,\Z)_{00}$ and $\langle \lambda_1,\lambda_2 \rangle^{\perp}$ have the same signature and isomorphic discriminant groups. Since the genus of such a lattice contains only one element by \cite[Theorem 1.14.2]{Ni}, we conclude that $v$ is an isometry which preserves the Hodge structures, as we wanted. For the second part of the proposition, let $v(\zeta_i)=z_0+z_2h+z_4+z_6h^3+z_8h^4$ with $z_0,z_2,z_6,z_8 \in \Q$ and $z_4 \in H^4(X,\Q)$. Using the previous computation, we have $$0=\chi(\zeta_i,\kappa)=\int_X \exp\left(h \right)v(\zeta_i)^*\cdot k_4= k_4 \cdot z_4$$ for every $\kappa$ in $\langle \lambda_1,\lambda_2, \zeta_1,\dots,\zeta_n \rangle^{\perp}$. 
Since $z_4$ is by definition a linear combination of $c_2(\zeta_i)$, $h^2$ and $\gamma_X^*\sigma_2$, using again that $k_4$ is in $H^4(X,\Z)_{00}$, we deduce that $k_4 \cdot z_4=0$ if and only if $k_4 \cdot c_2(\zeta_i)=0$. This completes the proof of the statement. \end{proof} We point out that the lattice $\langle \lambda_1,\lambda_2 \rangle$ has a primitive embedding in $\KT$ by \cite[Corollary 1.12.3]{Ni}. By Proposition \ref{mukaivsvan}, we have the isomorphism of lattices $$\langle \lambda_1,\lambda_2 \rangle^{\perp} \cong H^4(X,\Z)_{00} \cong E_8^2 \oplus U^2 \oplus I_{2,0}(2).$$ On the other hand, the lattice $\langle \lambda_1,\lambda_2 \rangle$ is isomorphic to $I_{0,2}(2)$. Notice that by \cite[Theorem 1.14.4]{Ni}, there exists a unique (up to isomorphism) primitive embedding $$i: I_{0,2}(2) \hookrightarrow \tilde{\Lambda}(-1)=E_8^2 \oplus U^4.$$ Let us denote by $f_1,f_2$ the standard generators of $I_{0,2}(2)$ and by $u_1,v_1$ (resp.\ $u_2,v_2$) the standard basis of the first (resp.\ the second) hyperbolic plane $U$. Then, we define $i$ setting $$i(f_1)=u_1-v_1 \quad i(f_2)=u_2-v_2.$$ The orthogonal complement of $I_{0,2}(2)$ via $i$ is $$I_{0,2}(2)^{\perp}\cong E_8^2 \oplus U^2 \oplus I_{2,0}(2).$$ In particular, we have an isometry $\phi: \KT \cong \tilde{\Lambda}(-1)$ such that \begin{equation} \label{isophi} \phi(\lambda_1)=i(f_1), \quad \phi(\lambda_2)=i(f_2), \quad \phi(\langle \lambda_1,\lambda_2 \rangle^{\perp})\cong I_{0,2}(2)^{\perp}\cong E_8^2 \oplus U^2 \oplus I_{2,0}(2), \end{equation} which is equivalent to the data of a marking for $X$. Hence, we can write $p(X)=[\phi_{\C}(\tilde{H}^{2,0}(\A_X))]$. Now, we prove that the isomorphism of Proposition \ref{mukaivsvan} extends to the quotients $\KT/\langle \lambda_1,\lambda_2 \rangle$ and $H^4(X,\Z)/\langle h^2,\gamma_X^*\sigma_2 \rangle$. The proof is analogous to that of \cite[Proposition 2.4]{AT}. \begin{prop} \label{c2quoz} The second Chern class induces a group isomorphism $$\bar{c}_2: \frac{K(\A_X)_{\emph{top}}}{\langle \lambda_1,\lambda_2 \rangle} \rightarrow \frac{H^4(X,\Z)}{\langle h^2,\gamma_X^*\sigma_2 \rangle}.$$ \end{prop} \begin{proof} The composition of the projection $p: H^4(X,\Z) \twoheadrightarrow H^4(X,\Z)/\langle h^2,\gamma_X^*\sigma_2 \rangle$ with $c_2$ is a group homomorphism, because $$c_2(\kappa_1 + \kappa_2)=c_2(\kappa_1)+c_1(\kappa_1)c_1(\kappa_2)+c_2(\kappa_2)=c_2(\kappa_1)+mh^2+c_2(\kappa_2) \quad \text{for }m \in \Z.$$ Since the second Chern classes of $\lambda_1$ and $\lambda_2$ are respectively $$c_2(\lambda_1)=2 h^2 \quad \text{and} \quad c_2(\lambda_2)=-\gamma_X^*\sigma_{1,1},$$ it follows that $\langle \lambda_1,\lambda_2 \rangle$ is in the kernel of $p \circ c_2$. In particular, the induced morphism $\bar{c}_2$ of the statement is well-defined. Notice that $\bar{c}_2$ is injective. Indeed, let $\kappa$ be an element in $\KT$ such that $c_2(\kappa)$ belongs to the sublattice $\langle h^2,\gamma_X^*\sigma_2 \rangle$. In particular, $\kappa$ is an element of $K(X)_{\text{top}}$ such that $\ch(\kappa)$ belongs to $H^0(X,\Z) \oplus H^2(X,\Z) \oplus \Z\langle h^2,\gamma_X^*\sigma_2 \rangle \oplus H^6(X,\Z) \oplus H^8(X,\Z)$. Then $\kappa$ is a linear combination of $[\mathcal{O}_X], [\mathcal{O}_H], [\mathcal{O}_S], [\mathcal{O}_{\Sigma}],[\mathcal{O}_L],[\mathcal{O}_P]$ with the notation of Section 2.4, because $X$ is AK-compatible (see \cite[Section 5]{KP}). 
Since it belongs to $\KT$, by the same computation done at the end of Section 2.4, we deduce that $\kappa$ is a linear combination of $\lambda_1$ and $\lambda_2$, as we claimed. Finally, we show that $\bar{c}_2$ is surjective. Let $T$ be a class in $H^4(X,\Z)$. By \cite[Theorem 2.1(3)]{AT}, there exists $\tau$ in $K(X)_{\text{top}}$ such that $v(\tau)$ is the sum of $-T$ with higher degree terms. Then the projection $\text{pr}(\tau)$ of $\tau$ in $\KT$ is a linear combination of $\tau$ and the classes of the exceptional objects in \eqref{excol}. Since the Chern classes of the exceptional objects are all multiples of $h^i$ and $\gamma_X^*\sigma_{1,1}$, it follows that $c_2(\text{pr}(\tau))$ differs from $c_2(\tau)$ by a linear combination of $h^2$ and $\gamma_X^*\sigma_{1,1}$. We conclude that $\bar{c}_2(\text{pr}(\tau))=c_2(\tau)=T$ in $H^4(X,\Z)/\langle h^2,\gamma_X^*\sigma_2 \rangle$. \end{proof} \begin{rmk} Notice that the image of the algebraic K-theory $K(\A_X)$ in $\KT$ is contained in $N(\A_X)$. However, we do not know whether the opposite inclusion holds, because it is not clear whether every Hodge class in $H^{2,2}(X,\Z)$ comes from an algebraic cycle with integral coefficients. In the case of cubic fourfolds the integral Hodge conjecture holds by the work of Voisin (see \cite{Voi}); thus, in \cite[Proposition 2.4]{AT} this fact is used to prove that the $(1,1)$ part of the Hodge structure on the Mukai lattice is identified with $K(\A_X)_{\text{num}}$. Voisin's argument should work also for GM fourfolds, but it requires a description of the intermediate Jacobian of a GM threefold, as done in \cite[Theorem 5.6]{MT} and \cite[Theorem 1.4]{Dru} in the case of cubic threefolds. Another approach would be first to construct Bridgeland stability conditions for the Kuznetsov component (e.g.\ as in \cite{BLMS} for the Kuznetsov component of a cubic fourfold), and then to deduce the integral Hodge conjecture by an argument on moduli spaces of stable objects with given Mukai vector, along the same lines as in \cite{BLMSpre}, where the argument is developed for cubic fourfolds. \end{rmk} Finally, we need the following lemma, which is a consequence of Proposition \ref{c2quoz}; the proof is the same as that of \cite[Proposition 2.5]{AT}, so we skip it. \begin{lemma} \label{lemmauguale} Let $\kappa_1, \dots, \kappa_n$ be in $K(\A_X)_{\emph{top}}$; we define the sublattices $$M_K:=\langle \lambda_1,\lambda_2,\kappa_1,\dots,\kappa_n \rangle \subset K(\A_X)_{\emph{top}}$$ and $$M_H:=\langle h^2, \gamma_X^*\sigma_2,c_2(\kappa_1),\dots,c_2(\kappa_n) \rangle \subset H^4(X,\Z).$$ \begin{enumerate} \item An element $\kappa$ of $K(\A_X)_{\emph{top}}$ is in $M_K$ if and only if $c_2(\kappa)$ is in $M_H$. \item $M_H$ is primitive if and only if $M_K$ is. \item $M_H$ is non-degenerate if and only if $M_K$ is. \item If $M_K$ is in $N(\A_X)$, then $M_K$ and $M_H$ are non-degenerate. \item If $M_K$ and $M_H$ are non-degenerate, then $M_H$ has signature $(r,s)$ if and only if $M_K$ has signature $(r-2,s+2)$, and they have isomorphic discriminant groups. \end{enumerate} \end{lemma} \begin{cor} \label{correticoli} The period point of a Hodge-special GM fourfold $X$ belongs to the divisor $\mathcal{D}_d$ (resp.\ to the union of the divisors $\mathcal{D}_d'$ and $\mathcal{D}_d''$) for $d \equiv 0 \pmod 4$ (resp.\ for $d \equiv 2 \pmod 8$) if and only if there exists a primitive sublattice $M_K$ of $N(\A_X)$ of rank $3$ and discriminant $d$ which contains $\langle \lambda_1,\lambda_2 \rangle$.
\end{cor} \begin{proof} As recalled in Section 2.3, the period point of $X$ satisfies the condition of the statement if and only if there is a labelling $M_H$ of $H^{2,2}(X,\Z)$ with discriminant $d$. The claim follows from Lemma \ref{lemmauguale}. \end{proof} \subsection{Associated (twisted) K3 surface and Mukai lattice} The first result of this section characterizes period points of general Hodge-special GM fourfolds by their Mukai lattice. It is analogous to \cite[Theorem 3.1]{AT} for cubic fourfolds and the proof develops in a similar fashion. \begin{thm} \label{thmK3U} Let $X$ be a Hodge-special GM fourfold. If $X$ admits a Hodge-associated K3 surface, then $N(\A_X)$ contains a copy of the hyperbolic plane. Moreover, the converse holds assuming one of the following conditions: \begin{enumerate} \item $X$ is general (i.e.\ $H^{2,2}(X,\Z)$ has rank-$3$); \item There is an element $\tau$ in the hyperbolic plane such that $\langle \lambda_1, \lambda_2, \tau \rangle$ has discriminant $d \equiv 2$ or $4 \pmod 8$. \end{enumerate} \end{thm} \begin{proof} Assume that $X$ has a Hodge-associated K3 surface; as recalled in the introduction and in Section 2.3, there exists a labelling $M_H$ whose discriminant $d$ satisfies $(\ast\ast)$. Equivalently, by Corollary \ref{correticoli}, there exists a primitive sublattice $M_K$ in $N(\A_X)$ of rank-$3$ containing $\langle \lambda_1,\lambda_2 \rangle$, with same discriminant $d$. Thus, there exists a rank-one primitive sublattice $\Z w$ and a primitive embedding $j: \Z w \hookrightarrow U^3 \oplus E_8^2$ with $w^2=-d$, such that $M_K^{\perp}$ in $\KT$ is isomorphic to $\Z w^{\perp}$. Adding $U$ to both sides of $j$, we get the primitive embedding of $U \oplus \Z w$ in $\tilde{\Lambda}(-1)$. Since $U \oplus \Z w$ and $M_K \subset \KT \cong \tilde{\Lambda}(-1)$ have isomorphic orthogonal complements, they have isomorphic discriminant groups by \cite[Corollary 1.6.2]{Ni}. Since one contains $U$, they are isomorphic by \cite[Corollary 1.13.4]{Ni}. In particular, we conclude that $U$ is contained in $M_K \subset N(\A_X)$, as we wanted. Conversely, let $X$ be as in the second part of the statement and let $\kappa_1,\kappa_2$ be two classes in $N(\A_X)$ spanning a copy of $U$. Notice that $\langle \lambda_1,\lambda_2 \rangle$ is negative definite and $U$ is indefinite; hence, the lattice $\langle \lambda_1,\lambda_2,\kappa_1,\kappa_2 \rangle$ has rank three or four. We distinguish these two cases. \begin{rank3} Let $M_K$ be the saturation of $\langle \lambda_1,\lambda_2,\kappa_1,\kappa_2 \rangle$ and we denote by $d$ its discriminant. We have the inclusions $U \subset M_K \subset \KT \cong \tilde{\Lambda}(-1)$. Since $U$ is unimodular, there exists a rank-one sublattice $\Z w$ with $w^2=-d$ such that $M_K \cong U \oplus \Z w$. On the other hand, the orthogonal to $U$ in $\KT$ is an even unimodular lattice of signature $(19,3)$; thus it is isomorphic to $U^3 \oplus E_8^2$. As a consequence, $M_K^{\perp}$ in $\KT$ is isomorphic to $\Z w^{\perp}$ in $U^3 \oplus E_8^2$. As observed before, this is equivalent to the existence of a labelling $M_H$ for $X$ of discriminant $d$ satisfying condition $(\ast\ast)$. This ends the proof in the rank-three case. In particular, this proves the statement for $X$ general. \end{rank3} \begin{rank4} Consider the rank-three lattices of the form $\langle \lambda_1,\lambda_2, x\kappa_1 +y\kappa_2 \rangle$, where $x$ and $y$ are integers not both zero. 
We define the quadratic form $$Q(x,y):=\begin{cases} \text{disc}(\langle \lambda_1,\lambda_2, x\kappa_1 +y\kappa_2 \rangle) & \text{ if } x \neq 0 \text{ or } y \neq 0 \\ 0 & \text{ if } x=y=0. \end{cases}$$ We observe that the second Chern class $c_2(x\kappa_1 +y\kappa_2)$ is in $H^{2,2}(X)$; hence, by the Hodge-Riemann bilinear relations and Lemma \ref{lemmauguale}, it follows that $Q(x,y)$ is positive unless $x=y=0$. Let \[ \begin{pmatrix} -2 & 0 & k & m \\ 0 & -2 & l & n \\ k & l & 0 & 1 \\ m & n & 1 & 0 \end{pmatrix} \] be the matrix defined by the Euler pairing on the lattice $\langle \lambda_1,\lambda_2,\kappa_1,\kappa_2 \rangle$. We have \begin{align*} Q(x,y)&= \begin{vmatrix} -2 & 0 & kx+my \\ 0 & -2 & lx+ny \\ kx+my & lx+ny & 2xy \end{vmatrix} \\ &=8xy+2(kx+my)^2+2(lx+ny)^2 \\ &=(2k^2+2l^2)x^2+(8+4km+4ln)xy+(2m^2+2n^2)y^2. \end{align*} We set $$A:=2k^2+2l^2, \quad B:=8+4km+4ln, \quad C:=2m^2+2n^2.$$ We denote by $h$ the greatest common divisor of $A, B$ and $C$; notice that $h$ is even. We set $$a=A/h, \quad b=B/h, \quad c=C/h$$ and we have $Q(x,y)=hq(x,y)$, where $$q(x,y)=ax^2+bxy+cy^2.$$ In the next lemmas we prove that $h$ satisfies $(\ast\ast)$ and that there exist integers $x$ and $y$ such that $q(x,y)$ represents a prime $p \equiv 1 \pmod 4$. \begin{lemma} \label{lemma1} The only odd primes that divide the greatest common divisor $h$ of the coefficients of $Q$ are $\equiv 1 \pmod 4$. Moreover, we have $8 \nmid h$. \end{lemma} \begin{proof} Let $\Z[\sqrt{-1}]$ be the domain of Gaussian integers with the Euclidean norm $|\:|$. We set $$\alpha:= k+l\sqrt{-1} \quad \text{and} \quad \gamma:=m+n\sqrt{-1}.$$ We rewrite the coefficients of $Q$ as $$A=2|\alpha|^2, \quad B=4\text{Re}(\alpha\bar{\gamma})+8, \quad C=2|\gamma|^2.$$ Suppose that $p$ is an odd prime which is not congruent to $1$ modulo $4$, i.e.\ $p \equiv 3 \pmod 4$. Then $p$ is prime in $\Z[\sqrt{-1}]$ (see \cite[Proposition 4.18]{Cox}). Thus if $p$ divides $A=2\alpha\bar{\alpha}$, then $p$ divides $\alpha$. In particular, $p$ divides $\text{Re}(\alpha\bar{\gamma})$; so $p$ does not divide $\text{Re}(\alpha\bar{\gamma})+2$. It follows that $p$ does not divide $B$ and we conclude that $p \nmid h$. For the second part, we observe that $8 \mid h$ if and only if $k, l, m, n$ are even. In this case, we have $8 \mid Q(x,y)$ for every $x, y \in \Z$. However, the assumption we made in item 2 of the theorem excludes this possibility. \end{proof} \begin{lemma} \label{lemma2} We have $a \not\equiv 3 \pmod 4$, $c \not\equiv 3 \pmod 4$, and $b$ is even. \end{lemma} \begin{proof} By definition we have $$k^2+l^2=\frac{h}{2}a \quad \text{and} \quad m^2+n^2=\frac{h}{2}c.$$ Notice that if an odd prime $\equiv 3 \pmod 4$ divides a sum of two squares, then it has to appear with even exponent (see \cite[Corollary 5.14]{NZ}). Since by Lemma \ref{lemma1} the only odd primes dividing $h$ are $\equiv 1 \pmod 4$, a prime $\equiv 3 \pmod 4$ appears in the prime factorization of $a$ and $c$ only with even exponent. This gives the first part of the claim. Now, we prove that $b$ is odd if and only if $8 \mid h$. This implies the desired statement by the second part of Lemma \ref{lemma1}. Assume that $b$ is odd. Since $$B=4(2 + \text{Re}(\alpha \bar{\gamma}))=hb,$$ we have $4 \mid h$. Thus, $4$ divides $A=2|\alpha|^2$ and $C=2|\gamma|^2$. It follows that $(1 + \sqrt{-1}) \mid \alpha$ and $(1 + \sqrt{-1}) \mid \gamma$, which implies that $2 \mid \alpha \bar{\gamma}$. We conclude that $8 \mid B$ and thus $8 \mid h$.
Conversely, assume that $8 \mid h$. Then $8$ divides $A=2|\alpha|^2$ and $C=2|\gamma|^2$, so $2 \mid \alpha$ and $2 \mid \gamma$ in $\Z[\sqrt{-1}]$; hence $4 \mid \text{Re}(\alpha\bar{\gamma})$ and $B=4(2+\text{Re}(\alpha\bar{\gamma}))=8(1+2r)$ for some $r \in \Z$. Notice that $2 \nmid h/8$, because otherwise $2 \mid B/8$, in contradiction with the fact that $B=8(1+2r)$. Since $$\frac{B}{8}=1+2r=\frac{h}{8}b,$$ we conclude that $b$ is odd. \end{proof} \begin{lemma} \label{lemma3} There exist integers $x$ and $y$ such that $q(x,y)$ is a prime $p \equiv 1 \pmod 4$. \end{lemma} \begin{proof} We adapt part of the proof of \cite[Proposition 3.3]{AT} to our case. Let us list all the possible forms $q(x,y)$ modulo $4$, using the restrictions given by Lemma \ref{lemma2}: \begin{center} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{For $b \equiv 0 \pmod 4$:} \\ \hline $a \equiv 0 \pmod 4$ & $0, y^2, 2y^2$ \\ \hline $a \equiv 1 \pmod 4$ & $x^2, x^2+y^2$ \\ \hline $a \equiv 2 \pmod 4$ & $2x^2, 2x^2+2y^2$ \\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{For $b \equiv 2 \pmod 4$:} \\ \hline $a \equiv 0 \pmod 4$ & $2xy, 2xy+2y^2$ \\ \hline $a \equiv 1 \pmod 4$ & $x^2+2xy+y^2, x^2+2xy+2y^2$ \\ \hline $a \equiv 2 \pmod 4$ & $2x^2+2xy, 2x^2+2xy+y^2, 2x^2+2xy+2y^2$ \\ \hline \end{tabular} \end{center} Notice that we have excluded the cases $$x^2+2y^2, \quad x^2+2xy, \quad 2x^2+y^2, \quad 2xy+y^2,$$ because by completing the square we get $$x^2+2xy=(x+y)^2-y^2 \equiv (x+y)^2+3y^2 \pmod 4$$ and $$x^2+2y^2=(x+y)^2-2xy+y^2 \equiv (x+y)^2+2xy+y^2 \equiv 2(x+y)^2+3x^2 \pmod 4,$$ which is not possible by Lemma \ref{lemma2}. We exclude the cases corresponding to a non-primitive form, i.e.\ $$0,2y^2, 2x^2, 2x^2+2y^2, 2xy, 2xy+2y^2, 2x^2+2xy, 2x^2+2xy+2y^2.$$ In the other cases, we find that $q$ can represent only numbers which are $\equiv 0$ or $1 \pmod 4$ (i.e.\ $y^2, x^2, (x+y)^2, x^2+2xy+2y^2, 2x^2+2xy+y^2$), or $\equiv 0, 1$ or $2 \pmod 4$ (i.e.\ $x^2+y^2$). Since a primitive positive definite form represents infinitely many primes, it must represent a prime $\equiv 1 \pmod 4$. \end{proof} \end{rank4} We observe that $h$ satisfies $(\ast\ast)$ by Lemma \ref{lemma1}. Thus, by Lemma \ref{lemma3} we conclude that there exist some integers $x$ and $y$, not both zero, such that the discriminant of the lattice $\langle \lambda_1,\lambda_2, x\kappa_1 +y\kappa_2 \rangle$ satisfies $(\ast\ast)$. This observation yields the statement. Indeed, if $M_K$ is the saturation of this lattice, then its discriminant still satisfies condition $(\ast\ast)$, because $\text{discr}(\langle \lambda_1,\lambda_2, x\kappa_1 +y\kappa_2 \rangle)=i^2 \text{discr}(M_K)$, where $i$ denotes the index of $\langle \lambda_1,\lambda_2, x\kappa_1 +y\kappa_2 \rangle$ in $M_K$, and $M_K$ has rank three. By the same argument used at the end of the rank-three case, we deduce that $X$ has a Hodge-associated K3 surface. \end{proof} In Section 3.3 we give examples of GM fourfolds having a primitively embedded hyperbolic plane in the algebraic part of the Mukai lattice, but without a Hodge-associated K3 surface. A consequence of Theorem \ref{thmK3U} is that the condition of having a homological associated K3 surface implies the existence of a Hodge-associated K3 surface for general GM fourfolds. This is analogous to the easy implication of \cite[Theorem 1.1]{AT}. \begin{thm} \label{thmHomHodge} Let $X$ be a GM fourfold such that $\A_X$ is equivalent to the derived category of a K3 surface $S$. Under the hypothesis of the second part of Theorem \ref{thmK3U}, the GM fourfold $X$ has a discriminant $d$ satisfying $(\ast\ast)$. \end{thm} \begin{proof} Assume that there is an equivalence $\Phi: \A_X \xrightarrow{\sim} \D(S)$ where $S$ is a K3 surface.
We observe that $K(S)_{\text{num}}$ contains a copy of the hyperbolic plane spanned by the classes of the structure sheaf of a point and the ideal sheaf of a point. Since $\Phi$ induces an isometry of Hodge structures $\KT(-1) \cong K(S)_{\text{top}}$, it follows that $U$ is contained in $N(\A_X)$. Applying Theorem \ref{thmK3U}, we deduce the proof of the result. \end{proof} In the last part of this section we show that period points of Hodge-special GM fourfolds with an associated twisted K3 surface are organized in a countable union of divisors determined by the value of the discriminant. The argument essentially follows \cite[Section 2]{Huy}. To this end, given a GM fourfold $X$, we consider the Mukai lattice $\KT(-1)$ with the weight-two Hodge structure and Euler pairing with reversed sign. Accordingly, the local period domain is given by $$\Omega:= \lbrace w \in \p(I_{2,0}(2)^{\perp} \otimes \C): w \cdot w =0, w \cdot \bar{w}>0 \rbrace$$ changing the sign of the definition in $\eqref{locperiodom}$ and identifying $\Lambda=I_{2,0}(2)^{\perp}$. We set $\mathcal{Q}=\lbrace x \in \p(\tilde{\Lambda}\otimes \C): x^2=0, (x.\bar{x})>0 \rbrace$ and we consider the canonical embedding of $\Omega$ in $\mathcal{Q}$. We recall that a point $x$ of $\mathcal{Q}$ is of \emph{K3 type} (resp.\ \emph{twisted K3 type}) if there exists a K3 surface $S$ (resp.\ a twisted K3 surface $(S,\alpha)$) such that $\tilde{\Lambda}$ with the Hodge structure defined by $x$ is Hodge isometric to $\tilde{H}(S,\Z)$ (resp.\ $\tilde{H}(S,\alpha,\Z)$) (see \cite[Definition 2.5]{Huy}). By \cite[Lemma 2.6]{Huy}, a point $x \in \mathcal{Q}$ is of K3 type (resp.\ of twisted K3 type) if and only if there exists a primitive embedding of $U$ (resp.\ an embedding of $U(n)$) in the $(1,1)$-part of the Hodge structure defined by $x$ on $\tilde{\Lambda}$. We denote by $\mathcal{D}_{\text{K3}}$ (resp.\ $\mathcal{D}_{\text{K3}'}$) the set of points of $\Omega$ of K3 type (resp.\ of twisted K3 type). \begin{dfn} \label{deftwist} A GM fourfold $X$ has an associated twisted K3 surface if the period point $p(X)$ comes from a point in $\mathcal{D}_{\text{K3}'}$. \end{dfn} \begin{rmk} Notice that if $X$ has a Hodge-associated K3 surface, then it corresponds to a point $x$ of K3 type. In fact, it follows from the first part of Theorem \ref{thmK3U} and \cite[Lemma 2.6]{Huy}. Moreover, the converse holds for general Hodge-special GM fourfolds and for GM fourfolds satisfying condition 2 in Theorem \ref{thmK3U}. On the other hand, in Section 3.3 we see that a GM fourfold with period point of K3 type does not necessarily have a Hodge-associated K3 surface. \end{rmk} \begin{proof}[Proof of Theorem \ref{thm3intro}] The proof is analogous to that of \cite[Proposition 2.10]{Huy}. As done in \cite[Proposition 2.8]{Huy}, we have $$\mathcal{D}_{\text{K3}'}=\Omega \cap \bigcup\limits_{\substack{0 \neq \varepsilon \in \tilde{\Lambda} \\\chi(\varepsilon,\varepsilon)=0}} \varepsilon^{\perp}.$$ Assume that $x$ is a twisted K3 type point. By the previous observation, there exists an isotropic non trivial element $\varepsilon$ in $\tilde{\Lambda}$. We consider the lattice $\langle \lambda_1, \lambda_2, \varepsilon \rangle$ in $\tilde{\Lambda}$, with Euler pairing given by $$ \begin{pmatrix} -2 & 0 & x \\ 0 & -2 & y \\ x & y & 0 \end{pmatrix}. $$ Notice that $\langle \lambda_1, \lambda_2, \varepsilon \rangle$ has discriminant $2x^2+2y^2$, which satisfies condition $(\ast\ast')$. 
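For completeness, the discriminant just claimed follows from a direct expansion of the displayed Gram matrix along the first row (no input beyond the matrix itself is needed):
\[
\det\begin{pmatrix} -2 & 0 & x \\ 0 & -2 & y \\ x & y & 0 \end{pmatrix}=-2(0-y^2)+x(0+2x)=2x^2+2y^2.
\]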
Then, let $M_K$ be the saturation of $\langle \lambda_1, \lambda_2, \varepsilon \rangle$ in $\tilde{\Lambda}$. If $d$ is the discriminant of $M_K$ and $i$ is the index of $\langle \lambda_1, \lambda_2, \varepsilon \rangle$ in its saturation, then we have $$2x^2+2y^2=i^2d.$$ It follows that also $d$ verifies condition $(\ast\ast')$, as we wanted. The reverse implication is proved by following the same argument in the opposite direction. \end{proof} \subsection{Extending Theorem \ref{thmK3U}: a counterexample} In this section we show that there are examples of GM fourfolds having a primitively embedded hyperbolic plane in the Néron-Severi lattice, but which cannot have a Hodge-associated K3 surface. Consistently with Theorem \ref{thmK3U}, our examples have $\text{rank}(N(\A_X))=4$ and their period points belong only to divisors corresponding to discriminants $\equiv 0 \pmod 8$. Assume that $X$ is a GM fourfold such that $N(\A_X)=\langle \lambda_1,\lambda_2,\tau_1,\tau_2 \rangle$ with Euler form given by \begin{equation} \label{retcountex} \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & -4 & -1-2n \\ 0 & 0 & -1-2n & -2(1+n^2) \end{pmatrix} \end{equation} (here we consider the Mukai lattice $\KT(-1)$ with weight-two Hodge structure and quadratic form with reversed sign) with $n \in \N$. The matrix \eqref{retcountex} has signature $(2,2)$, compatibly with the Hodge Index Theorem. Notice that the classes $$\kappa_1:=\lambda_1+\lambda_2+\tau_1 \quad \text{and} \quad \kappa_2:=\lambda_1+n\lambda_2+\tau_2$$ span a copy of $U$ in $N(\A_X)$: indeed, $\chi(\kappa_1,\kappa_1)=2+2-4=0$, $\chi(\kappa_2,\kappa_2)=2+2n^2-2(1+n^2)=0$ and $\chi(\kappa_1,\kappa_2)=2+2n-(1+2n)=1$. However, it is easy to see that every labelling of $X$ will have discriminant congruent to $0$ modulo $8$; hence, we cannot find a labelling with discriminant satisfying $(\ast\ast)$. It follows that $X$ cannot have a Hodge-associated K3 surface. It remains to prove that such a GM fourfold exists. The issue is that there is only a conjectural description of the image of the period map of GM fourfolds as the complement of some divisors in the period domain (see \cite[Question 9.1]{DIM}). As a consequence, the existence of GM fourfolds satisfying the above conditions is not a priori guaranteed. However, we do not need such a strong result in order to prove that there exist at least one value of $n$ and a GM fourfold with the required properties, as shown in the rest of this section. The key points are that we can reconstruct a GM fourfold from a given smooth double EPW sextic and that the union over $n$ of the sets of period points whose algebraic part contains a lattice as in \eqref{retcountex} is dense in a divisor with discriminant $16$. To this end, we first need the next lemma, where we study the conditions on $n$ needed to avoid the divisor $\mathcal{D}_8$. The motivation will be clear later, but essentially it comes from the fact that $\mathcal{D}_8$ contains period points of nodal GM fourfolds by \cite[Section 7.6]{DIM}, which we want to avoid as they are not smooth. \begin{lemma} \label{no8} The period point of a GM fourfold $X$ with $N(\A_X)$ as in \eqref{retcountex} is not in $\mathcal{D}_8$ if and only if $n \neq 0, 1$. \end{lemma} \begin{proof} We actually prove that $p(X)$ is in $\mathcal{D}_8$ if and only if $n=0$ or $n=1$. A lattice spanned by $\lambda_1, \lambda_2, x \tau_1+y\tau_2$ has discriminant $$Q(x,y):=4(-4x^2-2(1+n^2)y^2-2xy(1+2n))=-8(2x^2+(1+n^2)y^2+xy(1+2n)).$$ Notice that if $n=0$ (resp.\ $n=1$), then $Q(x,y)=-8$ for $(x,y)=(0,1)$ (resp.\ $(x,y)=\pm(1,-1)$). This implies that $p(X)$ is in $\mathcal{D}_8$.
On the other hand, assume $n>1$; then $-Q(x,y)/8 \geq 2x^2+5y^2+5xy$. The reduction of the latter term is $2x^2+xy+2y^2$, thus the smallest value it represents is $2$ (see \cite{Baker}, Section 5.2). We deduce that $-Q(x,y)/8$ does not represent $1$ for $n>1$. This implies the statement. \end{proof} We now prove that there is a value of $n \in \N$ as in Lemma \ref{no8} and a Hodge structure on $\tilde{\Lambda}$ having algebraic part given by the lattice in \eqref{retcountex}, defining a period point in the image of the period map of GM fourfolds. Let us introduce some notation. Fix $d:=-16$ and $n>1$. We denote by $\mathcal{S}_{n}$ the set of period points in $\mathcal{D}_d$ defined by the lattice $N_{n}$ as in \eqref{retcountex}. More precisely, this is the locus in $\mathcal{D}$ coming from $\p(N_{n}^{\perp} \otimes \C) \subset \p(\tilde{\Lambda} \otimes \C)$. We set $$\mathcal{S}:=\bigcup_{n>1} \mathcal{S}_{n}$$ and we denote by $\mathcal{S}' \subset \mathcal{S}$ the locus of period points with algebraic part of rank-four. Thus, points in $\mathcal{S}'$ are very general points of $\mathcal{S}$ and their algebraic part is equal to a lattice $N_{n}$. \begin{lemma} \label{speriamosiavero} The intersection of $\mathcal{S}'$ with the image of the period map $p$ is non empty. \end{lemma} \begin{proof} Using the notation of \cite{DM}, let $\mathcal{M}_2^{(1)}$ be the moduli space of (smooth) hyperk\"ahler fourfolds deformation equivalent to the Hilbert square of a K3 surface, with polarization of degree-$2$ and divisibility $1$. Their period domain is $\mathcal{D}$. By \cite[Theorem 6.1 and Example 6.3]{DM}, the image of the period map $p_2^{(1)}: \mathcal{M}_2^{(1)} \hookrightarrow \mathcal{D}$ is equal to the complement of the divisors $\mathcal{D}_2$, $\mathcal{D}_8$ and $\mathcal{D}_{10}''$. As by definition and Lemma \ref{no8}, the period points we are considering are not in these divisors, it follows that $\mathcal{S}'$ is contained in the image of the period map $p_2^{(1)}$. We denote by $\mathcal{U}_2^{(1)}$ the Zariski open subset of $\mathcal{M}_2^{(1)}$ parametrizing smooth double EPW sextics. By \cite[Theorem 8.1]{DIM}, the Zariski open subset $p_2^{(1)}(\mathcal{U}_2^{(1)})$ of $\mathcal{D}$ meets $\mathcal{D}_d$. As a consequence, if we set $$D_{2,d}^{(1)}:=\mathcal{D}_d \cap p_2^{(1)}(\mathcal{M}_2^{(1)}),$$ which is a hypersurface in $p_2^{(1)}(\mathcal{M}_2^{(1)})$, then $$U^{(1)}_{2,d}:=D_{2,d}^{(1)} \cap p_2^{(1)}(\mathcal{U}_2^{(1)}) \neq \emptyset.$$ Moreover, $U^{(1)}_{2,d}$ is a Zariski open set in $D_{2,d}^{(1)}$. We claim that $$U^{(1)}_{2,d} \cap \bigcup_{n>1} \mathcal{S}_{n} \neq \emptyset.$$ Indeed, we set $$S_{n}:= \mathcal{S}_{n} \cap D_{2,d}^{(1)} \subset D_{2,d}^{(1)};$$ the union $\bigcup_{n>1} S_{n}$ of these Hodge loci is dense in $D_{2,d}^{(1)}$ (with respect to the Euclidean topology) by \cite[Proposition 5.20]{Voilibro}. As $U^{(1)}_{2,d}$ is Zariski open in $D_{2,d}^{(1)}$, we deduce the claim (the same argument is used in \cite{AHTV}, at the end of the proof of Theorem 1). Thus, there exists $n$ such that $$U^{(1)}_{2,d} \cap \mathcal{S}_{n} \neq \emptyset.$$ As the set above is Zariski open in $\mathcal{S}_{n}$, it contains a very general point of $\mathcal{S}_{n}$, which belongs to $\mathcal{S}'$. It follows that $$p_2^{(1)}(\mathcal{U}_2^{(1)}) \cap \mathcal{S}' \neq \emptyset.$$ For every $x \in p_2^{(1)}(\mathcal{U}_2^{(1)}) \cap \mathcal{S}'$, we denote by $\tilde{Y}_A$ a smooth double EPW sextic such that $p_2^{(1)}([\tilde{Y}_A])=x$. 
Notice that the corresponding Lagrangian subspace $A$ has no decomposable vectors, because otherwise the period point of $\tilde{Y}_A$ would have been in $\mathcal{D}_8$ by \cite[Remark 5.29]{DK2}, which is not possible. Finally, we observe that there exists a GM fourfold $X$ such that its associated double EPW sextic is precisely $\tilde{Y}_A$. Indeed, $\tilde{Y}_A$ determines a six-dimensional $\C$-vector space $V_6$ and a Lagrangian subspace $A \subset \bigwedge^3V_6$ without decomposable vectors. The choice of a five-dimensional subspace $V_5 \subset V_6$ with $A \cap \bigwedge^3V_5$ of the right dimension defines a Lagrangian data, which by \cite[Theorem 3.10, Proposition 3.13, and Theorem 3.16]{DK1} determines a GM fourfold $X$, as we wanted. \end{proof} Applying Lemma \ref{speriamosiavero}, we deduce that there are GM fourfolds $X$ with $U \subset N(\A_X)$, but without a Hodge-associated K3 surface. \begin{rmk} More generally, one can consider two integers $k$ and $l$ such that $(k,l) \neq (1,0), (0,1)$, $m, n \in \Z$ and the lattices $N_{k,l,m,n}:=\langle \lambda_1,\lambda_2,\tau_1,\tau_2 \rangle$ with Euler form given by \begin{equation} \label{retcountexgeneral} \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & -2(k^2+l^2) & 1-2km-2ln \\ 0 & 0 & 1-2km-2ln & -2(m^2+n^2) \end{pmatrix}. \end{equation} It is possible to show that there are infinite values of $k, l, m, n$ such that the matrix $N_{k,l,m,n}$ has the right signature and determines a period point not in $\mathcal{D}_8$. Then, the argument in Lemma \ref{speriamosiavero} applies in this situation fixing $d=-8(k^2+l^2)$. Notice that the matrix in \eqref{retcountex} is precisely $N_{1,1,1,n}$. Again, the classes $$\kappa_1:=k\lambda_1+l\lambda_2+\tau_1 \quad \text{and} \quad \kappa_2:=m\lambda_1+n\lambda_2+\tau_2$$ span a copy of $U$ in $N(\A_X)$. In the basis $\lambda_1,\lambda_2,\kappa_1,\kappa_2$, the Euler form is represented by \begin{equation} \label{matrixrmqpedante} \begin{pmatrix} 2 & 0 & 2k & 2m \\ 0 & 2 & 2l & 2n \\ 2k & 2l & 0 & 1 \\ 2m & 2n & 1 & 0 \end{pmatrix}. \end{equation} Now, we claim that for every $\tau=a \kappa_1+b \kappa_2$ with $a, b \in \Z$, the lattice $\langle \lambda_1,\lambda_2,\tau \rangle$ has discriminant $\equiv 0 \pmod 8$. Indeed, it is enough to notice that $\chi(\tau,\tau)$, $\chi(\lambda_1,\tau)$ and $\chi(\lambda_2,\tau)$ are even. From the above observation, we deduce that the lattices $N_{k,l,m,n}$ as in \eqref{retcountexgeneral} represent all the possible intersection matrices of $N(\A_X)$ in the rank-four case that do not satisfy the condition in item 2 of Theorem \ref{thmK3U}. Indeed, there is not a class in the embedded $U$ which gives rise to a labelling of discriminant $\equiv 2, 4 \pmod 8$. \end{rmk} \begin{rmk} We have proved that there are divisors of the form $\mathcal{D}_{8t}$, whose elements cannot have a Hodge-associated K3 surface as $(\ast \ast)$ does not hold, but containing period points of GM fourfolds $X$ with $U \subset N(\A_X)$. In particular, the latter property does not hold for all the elements in $\mathcal{D}_{8t}$. It follows that the condition of having an embedded $U$ in $N(\A_X)$ is not divisorial, in contrast to what happens for cubic fourfolds. \end{rmk} \section{Associated double EPW sextic} The aim of this section is to prove Theorems \ref{thm1}, \ref{thm4} and \ref{thm2} stated in the introduction. 
We follow the argument of \cite{Add} and of \cite{Huy} for the twisted case; in particular, we define a Markman embedding for $H^2(\tilde{Y}_A,\Z)$ in $\tilde{\Lambda}$ and we apply Propositions 4 and 5 of \cite{Add}. \subsection{Proof of Theorems \ref{thm1} and \ref{thm4}} Assume that $X$ is a GM fourfold with Lagrangian data $(V_6,V_5,A)$ such that $\tilde{Y}_A$ is smooth. Before starting with the proofs, we need the following lemma, which relates the sublattice $\langle \lambda_1 \rangle^{\perp}$ of $\KT$ (equipped with the induced Hodge structure) and $H^2(\tilde{Y}_A,\Z)$. \begin{lemma} \label{mukaivsprim} There exists an isometry of Hodge structures between the lattices $\langle \lambda_1 \rangle^{\perp} \subset K(\A_X)_{\emph{top}}$ and $H^2(\tilde{Y}_A,\Z)(1)$. \end{lemma} \begin{proof} Composing the isometry of Proposition \ref{mukaivsvan} with that of Theorem \ref{periodEPW}, we obtain the Hodge isometry $$f: \langle \lambda_1,\lambda_2 \rangle^{\perp} \cong H^2(\tilde{Y}_A,\Z)_0(1).$$ Notice that twisting by $1$, we have shifted by two the weight of the Hodge structure on the primitive cohomology and we have reversed the sign of $q$; in particular, $f$ is an isometry of weight zero Hodge structures. Now, we observe that $\langle \lambda_1 \rangle^{\perp}$ is isomorphic to $E_8^2 \oplus U^3 \oplus \Z(u_1+v_1)$ via the marking $\phi$ defined in \eqref{isophi}. On the other hand, by \cite[Proposition 6]{Beau}, we have the isometry $$H^2(\tilde{Y}_A,\Z) \cong H^2(S,\Z) \oplus \Z\delta \cong E_8(-1)^2 \oplus U^3 \oplus I_{0,1}(2),$$ where $S$ is a degree-two K3 surface and $q(\delta)=-2$. Twisting by $1$, we get $$\psi: H^2(\tilde{Y}_A,\Z)(1) \cong E_8^2 \oplus U^3 \oplus \Z\delta, \quad \text{with }q(\delta)=2,$$ using that $U(-1)\cong U$. In particular, $\langle \lambda_1 \rangle^{\perp}$ and $H^2(\tilde{Y}_A,\Z)(1)$ are isomorphic lattices. Let $h_A$ be the polarization class on $\tilde{Y}_A$ which satisfies $q(h_A)=-2$ in $H^2(\tilde{Y}_A,\Z)(1)$ (see \cite{OG2}, eq.\ (1.3)). We define an isometry $g: \langle \lambda_1, \lambda_2 \rangle^{\perp} \oplus \langle \lambda_2 \rangle \cong H^2(\tilde{Y}_A,\Z)_0(1) \oplus \langle h_A \rangle $ such that $$g(\lambda_2)=h_A \quad \text{and} \quad g(\langle \lambda_1,\lambda_2 \rangle^{\perp})= f(\langle \lambda_1,\lambda_2 \rangle^{\perp})=H^2(\tilde{Y}_A,\Z)_0(1).$$ Notice that $g$ preserves the Hodge structures, because $f$ does and $g$ sends the $(0,0)$ class $\lambda_2$ to the $(0,0)$ class $h_A$. In particular, $g$ defines an isomorphism of Hodge structures $\langle \lambda_1 \rangle^{\perp} \cong H^2(\tilde{Y}_A,\Q)(1)$ over $\Q$. We claim that $g$ extends to an isometry $\langle \lambda_1 \rangle^{\perp} \cong H^2(\tilde{Y}_A,\Z)(1)$ over $\Z$. Indeed, we set $S_1:=H^2(\tilde{Y}_A,\Z)_0(1)$, $S_2:=\langle \lambda_1,\lambda_2 \rangle^{\perp}$ and $L:=H^2(\tilde{Y}_A,\Z)(1)$. We denote by $K_1$ and $K_2$ the orthogonal complements of $S_1$ and $S_2$ in $L$. By definition, we have $K_1\cong \langle h_A \rangle$ and $K_2 \cong \langle \lambda_2 \rangle$. We set $$H_1:=\frac{L}{S_1 \oplus K_1} \subset d(S_1) \oplus d(K_1) \quad \text{and} \quad H_2:=\frac{L}{S_2 \oplus K_2} \subset d(S_2) \oplus d(K_2);$$ recall that $$d(S_i) \cong \frac{\Z}{2\mathbb{Z}} \oplus \frac{\Z}{2\mathbb{Z}} \quad \text{ and } \quad d(K_i) \cong \frac{\Z}{2\mathbb{Z}}.$$ Let $H_{i,S}$ and $H_{i,K}$ be the projections of $H_i$ in $d(S_i)$ and $d(K_i)$, respectively.
Then, there is an isomorphism $\gamma_i: H_{i,S} \cong H_{i,K}$, given by the composition of the inverse of the projection on the first factor with the projection to the second factor. By definition $H_{i,K}$ is a subgroup of $d(K_i) \cong \Z/2\mathbb{Z}$. We exclude the case $H_{i,K}=0$. Then we have $H_{i,K}=d(K_i)$. We list the generators of the subgroups of $d(S_i) \oplus d(K_i)$ mapping to $\Z/2\mathbb{Z}$ via the two projections: $$(1,0,1), (0,1,1) \text{ and } (1,1,1).$$ Since $H_i$ is isotropic with respect to $q:=q_{S_i} \oplus q_{K_i}$, we exclude $(1,1,1)$, because $$q((1,1,1))= \frac{1}{2}+\frac{1}{2}-\frac{1}{2}=-\frac{1}{2} \neq 0 \mod \mathbb{Z}.$$ Moreover, recall that by \cite[Proposition 1.4.1(b)]{Ni}, we have $$d(L)\cong \frac{H_i^{\perp}}{H_i},$$ where $H_i^{\perp}$ is the orthogonal to $H_i$ in $d(S_i) \oplus d(K_i)$. This condition implies that $H_i=\langle (0,1,1) \rangle$. Indeed, assume that $H_i=\langle (1,0,1) \rangle$. Writing explicitly the generators of the discriminant groups we have $$d(K_i)=\langle \frac{f_2}{2} \rangle, \quad d(S_i)=\langle \frac{g_1}{2},\frac{g_2}{2} \rangle, \quad d(L)=\langle \frac{g_1}{2} \rangle, \quad H_i=\langle \frac{g_1}{2}+\frac{f_2}{2} \rangle.$$ However, we have $$\frac{H_i^{\perp}}{H_i}=\frac{\langle \frac{g_1}{2}+\frac{f_2}{2},\frac{g_2}{2} \rangle}{\langle \frac{g_1}{2}+\frac{f_2}{2} \rangle}=\langle \frac{g_2}{2} \rangle$$ giving a contradiction. Now, recall that by \cite[Corollary 1.5.2]{Ni}, the isometry $f$ extends to an isometry of $L$ if and only if there exists an isometry $f': K_1 \to K_2$ such that the diagram \[ \xymatrix{ d(S_1)& \ar[l]_{\supset} H_{1,S} \ar[d]^{\bar{f}} \ar[r]^-{\gamma_1}_{\cong} & H_{1,K} \ar[d]^{\bar{f'}} \ar[r]^-{=} & d(K_1) \\ d(S_2)& \ar[l]_{\supset} H_{2,S} \ar[r]^-{\gamma_2}_{\cong} & H_{2,K} \ar[r]^-{=} &d(K_2).} \] commutes, where $\bar{f}$ and $\bar{f'}$ are induced by $f$ and $f'$ on the discriminant groups. So, we consider the isometry $f': K_1 \to K_2$ sending $h_A$ to $\lambda_2$; notice that $f'$ acts trivially on the discriminant group. On the other hand, the isometry $f$ acts either as the identity on $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ or it exchanges the two factors. Assume we are in the first case. Then, it follows that $\bar{f'} \circ \gamma_1 ((0,1))=\gamma_2 \circ \bar{f}((0,1))$. In the second case, we replace the marking $\phi$ with the marking $\phi': \KT \cong \tilde{\Lambda}(-1)$, such that $\phi'(\lambda_1)=f_2$ and $\phi'(\lambda_2)=f_1$. By the same argument explained above, we have $H_2=\langle (1,0,1) \rangle$ and $H_{2,S}=\langle (1,0) \rangle$. It follows that $$\gamma_2 \circ \bar{f}((0,1))= \gamma_2((1,0))=H_{2,K}=\bar{f'} \circ \gamma_1 ((0,1)).$$ Then \cite[Corollary 1.5.2]{Ni} applies and we deduce that the isometry $f$ extends to an isometry of $L$. It follows that $g$ is well-defined over $\Z$, which concludes the proof. \end{proof} \begin{rmk} \label{Memb2} In the same way we can prove that there is a Hodge isometry $\langle \lambda_2 \rangle^{\perp} \cong H^2(\tilde{Y}_A,\Z)(1)$ which extends $f$ and sends $\lambda_1$ to $h_A$. \end{rmk} By Lemma \ref{mukaivsprim}, it follows that there is a primitive embedding $$H^2(\tilde{Y}_A,\Z) \hookrightarrow \KT(-1).$$ By \cite[Section 9]{Mark}, it is unique up to isometry of $\tilde{\Lambda}$. Thus it defines a \textbf{Markman embedding} as discussed in \cite[Section 1]{Add}.
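For concreteness, the lattice-theoretic identification used in the proof of Lemma \ref{mukaivsprim} can also be checked by hand under the marking $\phi$ of \eqref{isophi}; this is a supplementary verification and is not needed in what follows. Writing a class of $\tilde{\Lambda}(-1)=E_8^2\oplus U^4$ as $au_1+bv_1+w'$, with $w'$ in the remaining summands, we have
\[
(au_1+bv_1+w')\cdot(u_1-v_1)=b-a,
\]
so the orthogonal complement of $\phi(\lambda_1)=u_1-v_1$ is $E_8^2\oplus U^3\oplus \Z(u_1+v_1)$ with $(u_1+v_1)^2=2$, which is precisely the abstract lattice $E_8^2\oplus U^3\oplus \Z\delta$, $q(\delta)=2$, underlying $H^2(\tilde{Y}_A,\Z)(1)$.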
\begin{proof}[Proof of Theorem \ref{thm1}] If $d$ satisfies $(\ast\ast)$, then $N(\A_X)$ contains a copy of the hyperbolic plane $U$ by Theorem \ref{thmK3U}. This proves that (a) implies (b). Recall that $T_X$ is Hodge isometric to $T_{\tilde{Y}_A}(-1)$ by Theorem \ref{periodEPW}. Then (b) is equivalent to (c) by Proposition 4 in \cite{Add}. Assume that $X$ is as in the second part of the statement. Then, by Theorem \ref{thmK3U}, $d$ satisfies $(\ast\ast)$ if and only if $U \subset N(\A_X)$. The statement follows by applying \cite[Proposition 4]{Add} as before. \end{proof} \begin{rmk} As observed in \cite{Add} for cubic fourfolds, under the hypothesis of Theorem \ref{thm1}, $\tilde{Y}_A$ is birational to a moduli space of Bridgeland stable objects if and only if $d$ satisfies $(\ast\ast)$, by \cite[Theorem 1.2(c)]{BM}. \end{rmk} \begin{rmk} As observed in \cite[Remark 5.29]{DK2}, the image of the closure of the locus of smooth GM fourfolds having singular associated double EPW sextic is precisely the divisor $\mathcal{D}_{10}''$. We claim that there exist Hodge-special GM fourfolds with smooth associated double EPW sextic. Indeed, by \cite[Section 7.2]{DIM} this is clear for general GM fourfolds containing a $\tau$-plane: their period points lie in the divisor $\mathcal{D}_{12}$ and they do not belong to $\mathcal{D}_{10}''$, by the generality assumption. Now, let $d$ be a positive integer $\equiv 0,2,4 \pmod 8$. Assume $d >12$ if $d \equiv 0 \pmod 4$, resp.\ $d\geq 10$ if $d\equiv 2 \pmod 8$. By \cite[Theorem 8.1]{DIM}, the image of the period map meets all divisors $\mathcal{D}_d$, $\mathcal{D}_d'$ and $\mathcal{D}_d''$ for the respective values of the discriminant. More precisely, for every $d$ as before, they construct a GM fourfold $X_0$ whose period point $p(X_0)$ belongs to the intersection of $\mathcal{D}_{10}''$ with $\mathcal{D}_d$ (resp.\ $\mathcal{D}_d'$ or $\mathcal{D}_d''$) if $d \equiv 0 \pmod 4$ (resp.\ $d \equiv 2 \pmod 8$). Consider the case $d \equiv 0 \pmod 4$. Since the period map is dominant (see Section 2.2), there exists an open dense subset $U$ of $\mathcal{D}$ containing $p(X_0)$ such that $U \subseteq p(\mathcal{M}_4)$. Notice that $U \cap \mathcal{D}_d$ is open in $\mathcal{D}_d$ and it contains $p(X_0)$. Moreover, it is not contained in $\mathcal{D}_d \cap \mathcal{D}_{10}''$, because the latter has codimension $1$ in $\mathcal{D}_d$. It follows that $(U \cap \mathcal{D}_d)\setminus \mathcal{D}_{10}'' \neq \emptyset$. The same argument applies to the case $d \equiv 2 \pmod 8$ and it completes the proof of the claim. \end{rmk} \begin{rmk} Assume that $X$ is a Hodge-special GM fourfold such that $\tilde{Y}_A$ is smooth. Notice that the period point defined by the Hodge structure on $K(\A_X)_{\text{top}}(-1)$ is of K3 type if and only if $\tilde{Y}_A$ is birational to a moduli space of stable sheaves on a K3 surface. \end{rmk} As in \cite[Proposition 4.1]{Huy} in the case of cubic fourfolds, we can prove the twisted version of Theorem \ref{thm1}. \begin{proof}[Proof of Theorem \ref{thm4}] Assume that $\tilde{Y}_A$ is birational to a moduli space $M(v)$ of $\alpha$-twisted stable sheaves on a K3 surface $S$, where $v$ is primitive in $\tilde{H}^{1,1}(S,\alpha,\Z)$ and $(v,v)=2$.
Using the embedding $H^2(\tilde{Y}_A,\Z) \hookrightarrow \KT(-1)$ and the Torelli theorem for hyperk\"ahler manifolds, this is equivalent to the existence of an isometry of Hodge structures $\KT(-1) \cong \tilde{H}(S,\alpha,\Z)$, which restricts to $$H^2(\tilde{Y}_A,\Z) \cong H^2(M(v),\Z)\cong v^{\perp} \hookrightarrow \tilde{H}(S,\alpha,\Z).$$ Equivalently, by Theorem \ref{thm3intro}, $X$ is Hodge-special with a labelling of discriminant $d$ satisfying condition $(\ast\ast')$. This proves one direction. On the other hand, assume that $p(X)$ belongs to a divisor with $d$ satisfying $(\ast\ast')$. Then the image $v$ of $\lambda_1$ under $H^2(\tilde{Y}_A,\Z) \hookrightarrow \KT(-1) \cong \tilde{H}(S,\alpha,\Z)$ is primitive. Since the induced moduli space $M(v)$ is non-empty and the Hodge isometry $H^2(\tilde{Y}_A,\Z)\cong v^{\perp} \cong H^2(M(v),\Z)$ extends to $\tilde{\Lambda}$, we conclude the desired statement. \end{proof} \subsection{Proof of Theorem \ref{thm2}} First, we need the following lemma, which is analogous to \cite[Lemma 9]{Add} and which we will also use in Section 4.2. \begin{lemma} \label{lemmaformaeulfacile} Let $X$ be a Hodge-special GM fourfold of discriminant $d$ such that $d \equiv 2\text{ or }4 \pmod 8$. Then there exists an element $\tilde{\tau}$ in $N(\A_X)$ such that $\langle \lambda_1,\lambda_2,\tilde{\tau} \rangle$ is a primitive sublattice of discriminant $d$ with Euler pairing given, respectively, by \[ \begin{pmatrix} -2 & 0 & 1 \\ 0 & -2 & 0 \\ 1 & 0 & 2k \end{pmatrix} \quad \text{or} \quad \begin{pmatrix} -2 & 0 & 0 \\ 0 & -2 & 1 \\ 0 & 1 & 2k \end{pmatrix} \quad \text{with }d=2+8k, \] \[ \begin{pmatrix} -2 & 0 & 1 \\ 0 & -2 & 1 \\ 1 & 1 & 2k \end{pmatrix} \quad \text{with }d=4+8k. \] \end{lemma} \begin{proof} By Corollary \ref{correticoli}, there exists an element $\tau \in N(\A_X)$ such that $\langle \lambda_1,\lambda_2,\tau \rangle$ is a primitive sublattice of discriminant $d$ with Euler pairing given by \[ \begin{pmatrix} -2 & 0 & a \\ 0 & -2 & b \\ a & b & c \end{pmatrix}.\] Notice that $c$ is even, because $N(\A_X)$ is an even lattice. Assume that $d \equiv 2 \pmod 8$; then one of $a$ and $b$ is even and the other is odd. Assume that $b$ is even. Replacing $\tau$ with $\tau'=\tau +(b/2) \lambda_2$, we get a new basis with Euler form \[ \begin{pmatrix} -2 & 0 & a \\ 0 & -2 & 0 \\ a & 0 & c' \end{pmatrix}. \] We can write $a=4s+e$ with $s \in \Z$ and $e=\pm 1$. Then, the basis $\lambda_1,\lambda_2, \tilde{\tau}=\tau'+2s\lambda_1$ has Euler pairing \[ \begin{pmatrix} -2 & 0 & e \\ 0 & -2 & 0 \\ e & 0 & 2k \end{pmatrix}. \] If $e=-1$, we replace $\tilde{\tau}$ with $-\tilde{\tau}$ and reduce to the case $e=1$. If $a$ is even, by the same argument we obtain a basis with the second matrix in the statement. This proves the claim in the case $d \equiv 2 \pmod 8$. The case $d \equiv 4 \pmod 8$ works in the same way. \end{proof} \begin{proof}[Proof of Theorem \ref{thm2}] Assume that there exist a K3 surface $S$ and a birational equivalence $\tilde{Y}_A \dashrightarrow S^{[2]}$. By Lemma \ref{mukaivsprim} and \cite[Proposition 5]{Add}, this is equivalent to the existence of an element $w$ in $N(\A_X)$ such that the Euler pairing of $K:=\langle\lambda_1,\lambda_2,w \rangle$ has the form \[ \begin{pmatrix} -2 & 0 & 1 \\ 0 & -2 & n \\ 1 & n & 0 \end{pmatrix} \quad \text{for some }n \in \Z. \] In particular, the discriminant of $K$ is $2n^2+2$.
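Explicitly, expanding along the first row (a routine check of the value just used),
\[
\det\begin{pmatrix} -2 & 0 & 1 \\ 0 & -2 & n \\ 1 & n & 0 \end{pmatrix}=-2(0-n^2)+1\cdot(0+2)=2n^2+2.
\]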
Let $M_K$ be the saturation of $K$; if $a$ is the index of $K$ in $M_K$ and $d$ is the discriminant of $M_K$, we have $\text{discr}(K)=a^2d$, as we wanted. Conversely, assume that $d$ satisfies condition \eqref{ast3}. Then there exist integers $n$ and $a$ such that $a^2d=2n^2+2$. Firstly, we observe that $2n^2+2$ satisfies $(\ast\ast)$. Indeed, every odd prime $p$ dividing $n^2+1$ is $\equiv 1 \pmod 4$, and $8 \nmid 2n^2+2$. It follows that $a$ is the product of odd primes $\equiv 1 \pmod 4$; in particular, $a \equiv 1 \pmod 4$. Suppose firstly that $d \equiv 2 \pmod 8$; then $n$ is even. Indeed, assume that $n \equiv 1 \pmod 4$ (resp.\ $n \equiv 3 \pmod 4$). It follows that $n^2+1 \equiv 2 \pmod 4$; then $d \equiv 4 \pmod 8$, which is impossible. Furthermore, by Lemma \ref{lemmaformaeulfacile}, there is an element $\tau \in N(\A_X)$ such that the sublattice $\langle \lambda_1,\lambda_2,\tau \rangle$ has Euler pairing of the form \[ \begin{pmatrix} -2 & 0 & 1 \\ 0 & -2 & 0 \\ 1 & 0 & 2k \end{pmatrix} \quad \text{or}\quad \begin{pmatrix} -2 & 0 & 0 \\ 0 & -2 & 1 \\ 0 & 1 & 2k \end{pmatrix}. \] Assume that we are in the first case. We set $$w:=\frac{a-1}{2} \lambda_1 + \frac{n}{2} \lambda_2 + a\tau \in N(\A_X),$$ where $n/2$ is an integer, because $n$ is even. An easy computation shows that $$\chi(\lambda_1,w)=1 \quad \text{and} \quad \chi(w)=0.$$ By \cite[Proposition 5]{Add}, it follows that $\tilde{Y}_A$ is birational to $S^{[2]}$ for a K3 surface $S$. If we are in the second case, we consider the Markman embedding $H^2(\tilde{Y}_A,\Z) \subset \KT(-1)$ defined by the Hodge isometry $\langle \lambda_2 \rangle^{\perp} \cong H^2(\tilde{Y}_A,\Z)(1)$ (see Remark \ref{Memb2}). Setting $$w:=\frac{n}{2} \lambda_1 + \frac{a-1}{2} \lambda_2 + a\tau \in N(\A_X),$$ the proof follows from \cite[Proposition 5]{Add}. Now assume that $d \equiv 4 \pmod 8$; then $n$ is odd. Indeed, if $n \equiv 0 \pmod 4$ (resp.\ $n \equiv 2 \pmod 4$), then $n^2+1 \equiv 1 \pmod 4$. Since $a^2d/2=n^2+1$ and $a \equiv 1 \pmod 4$, we conclude that $d/2 \equiv 1 \pmod 4$, which is impossible. By Lemma \ref{lemmaformaeulfacile}, there is an element $\tau \in N(\A_X)$ such that the sublattice $\langle \lambda_1,\lambda_2,\tau \rangle$ has Euler pairing of the form \[ \begin{pmatrix} -2 & 0 & 1 \\ 0 & -2 & 1 \\ 1 & 1 & 2k \end{pmatrix} \text{ with }d=4+8k. \] We set $$w:=\frac{a-1}{2} \lambda_1 + \frac{a-n}{2} \lambda_2 + a\tau \in N(\A_X).$$ Notice that $(a-n)/2$ is integral, because $n$ is odd. Arguing as before, we conclude the proof of the result. \end{proof} \begin{rmk} \label{d=50} As seen in the proof of Theorem \ref{thm2}, condition \eqref{ast3} implies condition $(\ast\ast)$. On the other hand, condition \eqref{ast3} is stricter than condition $(\ast\ast)$. For example, $d=50$ satisfies $(\ast\ast)$, but not \eqref{ast3}. \end{rmk} \begin{rmk} \label{linkIM} In \cite[Proposition 2.1]{IM}, they proved that if a smooth double EPW sextic is birational to the Hilbert scheme $S^{[2]}$ on a K3 surface $S$ with polarization of the degree-$d$, then the negative Pell equation $$\mathcal{P}_{d/2}(-1): n^2-\frac{d}{2}a^2=-1$$ is solvable in $\Z$. This condition is actually condition \eqref{ast3} in the case of the double EPW associated to a GM fourfold. Notice also that the K3 surface $S$, realizing the birational equivalence between $\tilde{Y}_A$ and $S^{[2]}$ in Theorem \ref{thm2}, has a pseudo-polarization of degree-$d$. Indeed, the hypothesis implies that there is a rank-three sublattice $M_K$ of $N(\A_X)$. 
Moreover, it contains a copy of the hyperbolic plane and $H^2(S,\Z) \cong U^{\perp} \subset N(\A_X)$, as explained in the proof of \cite[Proposition 5]{Add}. Then the generator of $U^{\perp} \cap M_K$ has degree $d$, as desired. Furthermore, if $p(X) \notin \mathcal{D}_8$, then there are no classes of square $2$ in $H^4(X,\Z)_{00} \cap H^{2,2}(X,\Z)$. In this case, the pseudo-polarization is a polarization class on $S$. \end{rmk} Dipartimento di Matematica ``F.\ Enriques'', Universit\`a degli Studi di Milano, Via Cesare Saldini 50, 20133 Milano, Italy \\ \indent E-mail address: \texttt{[email protected]}\\ \indent URL: \texttt{http://www.mat.unimi.it/users/pertusi} \end{document}
\begin{document} \author{Ozlem Ersoy Hepson$^{1}$ and Idris Dag$^{2}$ \\ Department of Mathematics-Computer$^{1}$\\ Computer Engineering Department$^{2}$\\ Eski\c{s}ehir Osmangazi \"{U}niversity, Eski\c{s}ehir, Turkey} \date{} \title{{\large \textbf{Numerical Solution of Singularly Perturbed Problems via both Galerkin and Subdomain Galerkin Methods}}} \maketitle \begin{abstract} In this paper, numerical solutions of singularly perturbed boundary value problems are given by using variants of the finite element method. Both the Galerkin and the subdomain Galerkin methods, based on quadratic B-spline functions, are applied over a geometrically graded mesh. Results for some test problems are compared with the analytical solutions of the singularly perturbed problem. \textbf{Keywords: }Subdomain Galerkin, graded mesh, spline, singularly perturbed. \end{abstract} \section{Introduction} This paper contains numerical solutions of one-dimensional singular perturbation problems \begin{equation} -\varepsilon u^{\prime \prime }+p(x)u^{\prime }+q(x)u=f(x),\text{ \ }x\in \lbrack 0,1] \label{1} \end{equation} with boundary conditions \begin{equation} u\left( 0\right) =\lambda \text{ and }u\left( 1\right) =\beta ,\text{ } \lambda ,\text{ }\beta \in \mathbb{R} \label{2} \end{equation} where $\varepsilon $ is a small positive parameter and $p(x),$ $q(x),$ $f(x)$ are sufficiently smooth functions with $p(x)\geq p^{\ast }>0,$ $q(x)\geq q^{\ast }>0.$ These problems depend on $\varepsilon $ in such a way that the solution varies rapidly in some parts and varies slowly in some other parts. So, typically there are thin transition layers where the solution can jump abruptly, while away from the layers the solution behaves regularly and varies slowly. The numerical treatment of singular perturbation problems is far from trivial because of the boundary layer behavior of the solution. There is a wide variety of asymptotic techniques for solving singular perturbation problems. These problems occur in many areas of engineering and applied mathematics such as chemical reactor theory, optimal control, quantum mechanics, fluid mechanics, reaction-diffusion processes, aerodynamics, heat transport problems with large Peclet numbers and Navier--Stokes flows with large Reynolds numbers, etc. Many authors have studied this problem and tried to overcome the above-mentioned difficulties. M. K. Kadalbajoo and Vikas Gupta \cite{kadalbajoo} proposed a B-spline collocation method on a non-uniform mesh of Shishkin type to solve singularly perturbed two-point boundary value problems with a turning point exhibiting twin boundary layers. J. Vigo-Aguiar and S. Natesan \cite{aguiar} considered a class of singularly perturbed two-point boundary-value problems for second-order ordinary differential equations. They suggested an iterative non-overlapping domain decomposition method in order to obtain numerical solutions to these problems. Tirmizi et al. \cite{Tirmizi} proposed a generalized scheme based on quartic non-polynomial spline functions designed for the numerical solution of singularly perturbed two-point boundary-value problems. D. J. Fyfe \cite{fyfe} used cubic splines on equal and unequal intervals and compared the results. He observed that very little advantage is gained by using unequal intervals. M. K. Kadalbajoo and K. C. Patidar \cite{kadalbajoo1} gave some difference schemes using splines in tension. They showed that these methods are second-order accurate.
Employing coordinate stretching, a Galerkin-spectral method was applied to singularly perturbed boundary value problems by W. Liu and T. Tang \cite{wenbin}. G. Beckett and J. A. Mackenzie \cite{beckett} gave a $p$th order Galerkin finite element method on a non-uniform grid. In their study the grid is constructed by equidistributing a strictly positive monitor function. After an appropriate selection of the monitor function parameters, they obtained numerical solutions that are insensitive to the perturbation parameter. The definitions of B-splines over the geometrically graded mesh were given in \cite{dag}. Dag and Sahin \cite{Sahin} set up the finite element method employing the quadratic and the cubic B-splines to form the trial functions. In this article, we use the finite element method with quadratic B-splines. After giving the expressions of these B-splines over the geometrically graded mesh, we apply the quadratic Galerkin and the quadratic subdomain Galerkin methods to Eq.~(\ref{1}). Briefly, the outline is as follows. In Section 2, the numerical methods are given. Numerical experiments are carried out for one test problem and the errors of the methods are compared in Section 3. Finally, a conclusion is given in the last section. \section{B-spline Galerkin Methods} For numerical purposes, let us divide the solution domain $\left[ 0,1\right] $ into subintervals by the knots $x_{m}$ such that \begin{equation*} 0=x_{0}<x_{1}<\cdots <x_{N}=1 \end{equation*} where $x_{m+1}=x_{m}+h_{m}$ and $h_{m}$ is the size of the interval $\left[ x_{m},x_{m+1}\right] $ with the relation $h_{m}=\sigma h_{m-1}.$ Here $\sigma $ is the mesh ratio constant. To construct the geometrically graded mesh, determination of the first element size $h_{0}$ is necessary. Since \begin{equation*} h_{0}+h_{1}+\cdots +h_{N-1}=1 \end{equation*} it is easy to write \begin{equation*} h_{0}=\frac{1}{1+\sigma +\sigma ^{2}+\cdots +\sigma ^{N-1}}. \end{equation*} This partition will be uniform if the mesh ratio $\sigma $ is taken as unity. To obtain a finer mesh at the left boundary, $\sigma $ must be chosen with $\sigma >1.$ On the other hand, to make the mesh size smaller at the right boundary, $\sigma $ must be chosen with $\sigma <1$. The selection of $\sigma $ will be made experimentally. \subsection{Quadratic B-spline Galerkin method (QM)} The expression of the quadratic B-splines over the geometrically graded mesh may be given in the following form \cite{dag}: \begin{equation} \begin{tabular}{l} $Q_{m-1}$ \\ $Q_{m}$ \\ $Q_{m+1}$ \end{tabular} =\dfrac{1}{h_{m}^{2}}\left \{ \ \begin{array}{l} (h_{m}-\xi )^{2}\sigma , \\ h_{m}^{2}+2h_{m}\sigma \xi -\left( 1+\sigma \right) \xi ^{2}, \\ \xi ^{2} \end{array} \ \right. \label{3} \end{equation} where $\xi =x-x_{m}$ and $0\leq \xi \leq h_{m}.$ A quadratic B-spline covers $3$ elements. Any quadratic B-spline $Q_{m}$ and its derivatives vanish outside of the interval $\left[ x_{m-1},x_{m+2}\right] $ and therefore an element is covered by $3$ successive quadratic B-splines. The set of the quadratic B-splines $\left \{ Q_{-1},Q_{0},\ldots ,Q_{N}\right \} $ forms a basis for the functions defined on the solution domain \cite{prenter}. Hence, an approximation $u_{N}$ to the analytical solution $u$ can be written as \begin{equation} u_{N}=\sum \limits_{m=-1}^{N}\delta _{m}Q_{m} \label{4} \end{equation} where $\delta _{m}$ are unknown parameters.
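As an aside for readers who wish to experiment with this construction, the following minimal Python sketch (added here for illustration only; it is not part of the original scheme, and the function names are ours) builds the geometrically graded mesh and evaluates the three local quadratic B-splines of Eq.~(\ref{3}) on one element. At $\xi =0$ it returns the values $(\sigma ,1,0)$, consistent with the nodal relations given in Eq.~(\ref{5}) below.
\begin{verbatim}
import numpy as np

def graded_mesh(N, sigma):
    """Knots of the geometrically graded mesh on [0,1] with h_m = sigma*h_{m-1}."""
    h0 = 1.0 / sum(sigma**i for i in range(N))   # h_0 = 1/(1+sigma+...+sigma^(N-1))
    h = h0 * sigma**np.arange(N)                 # element sizes h_0, ..., h_{N-1}
    x = np.concatenate(([0.0], np.cumsum(h)))    # knots x_0 = 0, ..., x_N = 1
    return x, h

def local_quadratic_bsplines(xi, hm, sigma):
    """Values of Q_{m-1}, Q_m, Q_{m+1} on [x_m, x_{m+1}], with xi = x - x_m."""
    Qm1 = sigma * (hm - xi)**2 / hm**2
    Qm  = (hm**2 + 2.0*hm*sigma*xi - (1.0 + sigma)*xi**2) / hm**2
    Qp1 = xi**2 / hm**2
    return Qm1, Qm, Qp1

if __name__ == "__main__":
    N, sigma = 20, 0.75                 # sigma < 1 gives finer elements at the right boundary
    x, h = graded_mesh(N, sigma)
    print(h.sum())                      # element sizes add up to 1
    print(local_quadratic_bsplines(0.0, h[3], sigma))   # (sigma, 1, 0) at xi = 0
\end{verbatim}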
By substituting the values of the $Q_{m}$ at the knots $x_{m}$ into Eq.~(\ref{4}), the nodal value $u$ and its derivative $u^{\prime }$ are expressed in terms of $\delta _{m}$ by \begin{equation} \begin{array}{ll} u_{m}=u(x_{m}) & =\sigma \delta _{m-1}+\delta _{m}, \\ u_{m}^{\prime }=u^{\prime }(x_{m}) & =\dfrac{2\sigma }{h_{m}}(\delta _{m}-\delta _{m-1}). \end{array} \label{5} \end{equation} Multiplying both sides of the differential equation by a weight function $v$ and integrating over the interval $[x_{m},x_{m+1}]$, the following equation is obtained: \begin{equation*} \dint \limits_{x_{m}}^{x_{m+1}}\left( -\varepsilon vu^{\prime \prime }(x)+p(x)vu^{\prime }(x)+q(x)vu(x)\right) dx=\dint \limits_{x_{m}}^{x_{m+1}}vf(x)dx. \end{equation*} Applying integration by parts to the first term of the above integral gives \begin{equation*} \dint \limits_{x_{m}}^{x_{m+1}}(\varepsilon v^{\prime }u^{\prime }(x)+vp(x)u^{\prime }+vq(x)u(x))dx-\varepsilon vu^{\prime }(x)\overset{ x_{m+1}}{\underset{x_{m}}{\mid }}-\dint \limits_{x_{m}}^{x_{m+1}}vf(x)dx=0. \end{equation*} Choosing the weight function $v$ as the B-splines $Q_{i},$ $i=m-1,m,m+1,$ and using (\ref{4}), the following equation is obtained: \begin{equation} \dsum \limits_{j=m-1}^{m+1}\Big[ \dint \limits_{0}^{h_{m}}\left( \varepsilon \phi _{i}^{\prime }\phi _{j}^{\prime }+p\phi _{i}\phi _{j}^{\prime }+q\phi _{i}\phi _{j}\right) d\xi -\varepsilon \phi _{i}\phi _{j}^{\prime }\overset{h_{m}}{\underset{0}{\mid }}\Big] \delta _{j}-\dint \limits_{0}^{h_{m}}\phi _{i}f(x_{m}+\xi )d\xi =0 \label{3.6} \end{equation} Accordingly, the following element integrals are computed: \begin{equation*} \begin{array}{ll} a_{ij}=\dint \limits_{0}^{h_{m}}\phi _{i}^{\prime }\phi _{j}^{\prime }d\xi , & r_{ij}=\phi _{i}\phi _{j}^{\prime }\overset{h_{m}}{\underset{0}{\mid }}, \\ & \\ b_{ij}=\dint \limits_{0}^{h_{m}}\phi _{i}\phi _{j}^{\prime }d\xi , & f_{i}=\dint \limits_{0}^{h_{m}}\phi _{i}f(x_{m}+\xi )d\xi , \\ & \\ c_{ij}=\dint \limits_{0}^{h_{m}}\phi _{i}\phi _{j}d\xi , & \end{array} \end{equation*} where $i,j=m-1,m,m+1$, $\alpha =\sigma $ denotes the mesh ratio, and \begin{eqnarray*} A^{(m)} &=&\frac{2}{3h_{m}} \begin{bmatrix} 2\alpha ^{2} & \alpha (1-\alpha ) & -\alpha \\ \alpha (1-\alpha ) & 2(1-\alpha +\alpha ^{2}) & \alpha -2 \\ -\alpha & \alpha -2 & 2 \end{bmatrix} \\ B^{(m)} &=& \begin{bmatrix} \frac{-\alpha ^{2}}{2} & \frac{1}{6}\alpha \left( 3\alpha -1\right) & \frac{ \alpha }{6} \\ -\frac{1}{6}\alpha \left( 3\alpha +5\right) & \frac{1}{2}\alpha ^{2}-\frac{1 }{2} & \frac{5}{6}\alpha +\frac{1}{2} \\ \frac{-\alpha }{6} & \frac{1}{6}\alpha -\frac{1}{2} & \frac{1}{2} \end{bmatrix} \\ C^{(m)} &=&h_{m} \begin{bmatrix} \frac{1}{5}\alpha ^{2}h_{m} & \frac{1}{30}\alpha h_{m}\left( 4\alpha +9\right) & \frac{\alpha }{30} \\ \frac{1}{30}\alpha h_{m}\left( 4\alpha +9\right) & \frac{8}{15}\alpha ^{2}+ \frac{11}{5}\alpha +\frac{8}{15} & \frac{3}{10}\alpha +\frac{2}{15} \\ \frac{\alpha }{30} & \frac{3}{10}\alpha +\frac{2}{15} & \frac{1}{5}h_{m} \end{bmatrix} \\ R^{(m)} &=&\frac{1}{h_{m}} \begin{bmatrix} 2\alpha ^{2} & 0 & 0 \\ 2\alpha (3\alpha +2) & -4\alpha & 2\alpha \\ 0 & 0 & 0 \end{bmatrix} \end{eqnarray*} and \begin{equation*} F^{(m)}=\frac{1}{h_{m}^{2}} \begin{bmatrix} \vartheta _{1} & \vartheta _{1} & \vartheta _{1} \\ \vartheta _{2} & \vartheta _{2} & \vartheta _{2} \\ \vartheta _{3} & \vartheta _{3} & \vartheta _{3} \end{bmatrix} \end{equation*} where \begin{eqnarray*} \vartheta _{1} &=&-\alpha \left( 2h_{m}-2e^{h_{m}}+h_{m}^{2}+2\right) \\ \vartheta _{2} &=&2(\alpha +1)(1-e^{h_{m}})+2h_{m}(\alpha +e^{h_{m}})-h_{m}^{2}(1-\alpha e^{h_{m}}) \\ \vartheta _{3} &=&\left( e^{h_{m}}\left( h_{m}^{2}-2h_{m}+2\right) -2\right) \end{eqnarray*} In terms of the local matrices $A^{(i)},$ $B^{(i)},$ $C^{(i)},$ $R^{(i)}$ and $F^{(i)}$, equation (\ref{3.6}) can be represented in the following form: \begin{equation*} (\varepsilon A^{(i)}+pB^{(i)}+qC^{(i)}-\varepsilon R^{(i)})\delta ^{(i)}=F^{(i)}, \end{equation*} where \begin{equation*} \delta ^{(i)}=(\delta _{m-1}^{(i)},\delta _{m}^{(i)},\delta _{m+1}^{(i)}),\text{ } F^{(i)}=(f_{m-1}^{(i)},f_{m}^{(i)},f_{m+1}^{(i)}). \end{equation*} By combining the local matrices defined on $[x_{i},x_{i+1}],$ $ i=0,\ldots ,N-1$, the global system over $[x_{0},x_{N}]$ is obtained as follows: \begin{equation} (\varepsilon A+pB+qC-\varepsilon R)\delta =F \label{3.7} \end{equation} The matrix $A$ is \begin{equation*} A=\left[ \begin{array}{cccccccccc} \sigma _{0,0}^{(0)} & \sigma _{0,1}^{(0)} & \sigma _{0,2}^{(0)} & & & & & & & \\ \sigma _{1,0}^{(0)} & \sigma _{1,1}^{\ast (1)} & \sigma _{1,2}^{\ast (1)} & \sigma _{1,3}^{\ast (1)} & & & & & & \\ \sigma _{2,0}^{(0)} & \sigma _{2,1}^{\ast (1)} & \sigma _{2,2}^{\ast (2)} & \sigma _{2,3}^{\ast (2)} & \sigma _{2,4}^{\ast (2)} & & & & & \\ & \sigma _{3,0}^{\ast (1)} & \sigma _{3,1}^{\ast (2)} & \sigma _{3,2}^{\ast (3)} & \sigma _{3,3}^{\ast (3)} & \sigma _{3,4}^{\ast (3)} & & & & \\ & & \ddots & \ddots & \ddots & \ddots & \ddots & & & \\ & & & \sigma _{i,i-2}^{\ast (i-1)} & \sigma _{i,i-1}^{\ast (i)} & \sigma _{i,i}^{\ast (i+1)} & \sigma _{i,i+1}^{\ast (i+1)} & \sigma _{i,i+2}^{\ast (i+1)} & & \\ & & & & \ddots & \ddots & \ddots & \ddots & \ddots & \\ & & & & & \sigma _{n-1,n-3}^{\ast (n-2)} & \sigma _{n-1,n-2}^{\ast (n-1)} & \sigma _{n-1,n-1}^{\ast (n)} & \sigma _{n-1,n}^{\ast (n)} & \sigma _{n-1,n+1}^{(n)} \\ & & & & & & \sigma _{n,n-2}^{\ast (n-1)} & \sigma _{n,n-1}^{\ast (n-1)} & \sigma _{n,n}^{\ast (n-1)} & \sigma _{n,n+1}^{(n)} \\ & & & & & & & \sigma _{n+1,n-1}^{(n)} & \sigma _{n+1,n}^{(n)} & \sigma _{n+1,n+1}^{(n)} \end{array} \right] \end{equation*} where \begin{eqnarray*} && \begin{array}{lll} \sigma _{1,1}^{\ast (1)}=\sigma _{1,1}^{(0)}+\sigma _{1,1}^{(1)}, & & \sigma _{1,2}^{\ast (1)}=\sigma _{1,2}^{(0)}+\sigma _{1,2}^{(1)}, \\ & & \\ \sigma _{2,1}^{\ast (1)}=\sigma _{2,1}^{(0)}+\sigma _{2,1}^{(1)}, & & \sigma _{2,2}^{\ast (2)}=\sigma _{2,2}^{(0)}+\sigma _{2,2}^{(1)}+\sigma _{2,2}^{(2)}, \\ & & \\ \sigma _{2,3}^{\ast (2)}=\sigma _{2,3}^{(2)}+\sigma _{2,3}^{(1)}, & & \sigma _{i,i-1}^{\ast (i)}=\sigma _{i,i-1}^{(i-1)}+\sigma _{i,i-1}^{(i)}, \end{array} \\ && \begin{array}{lll} \sigma _{i,i}^{\ast (i)}=\sigma _{i,i}^{(i-1)}+\sigma _{i,i}^{(i)}+\sigma _{i,i}^{(i+1)}, & & \sigma _{i,i+1}^{\ast (i)}=\sigma _{i,i+1}^{(i)}+\sigma _{i,i+1}^{(i+1)}, \\ & & \\ \sigma _{n-1,n-2}^{\ast (n-1)}=\sigma _{n-1,n-2}^{(n-2)}+\sigma _{n-1,n-2}^{(n-1)}, & & \sigma _{n-1,n-1}^{\ast (n)}=\sigma _{n-1,n-1}^{(n-2)}+\sigma _{n-1,n-1}^{(n-1)}+\sigma _{n-1,n-1}^{(n)}, \\ & & \\ \sigma _{n-1,n}^{\ast (n)}=\sigma _{n-1,n}^{(n-1)}+\sigma _{n-1,n}^{(n)}, & & \sigma _{n,n-1}^{\ast (n-1)}=\sigma _{n,n-1}^{(n-1)}+\sigma _{n,n-1}^{(n)}, \\ & & \\ \sigma _{n,n}^{\ast (n-1)}=\sigma _{n,n}^{(n-1)}+\sigma _{n,n}^{(n)}. & & \end{array} \end{eqnarray*} The matrices $B,$ $C$ and $R$ are obtained similarly. The vector $F$ is computed as follows.
\begin{equation*} F=\left[ \begin{array}{l} f_{0}^{0} \\ f_{1}^{0}+f_{1}^{1} \\ f_{2}^{0}+f_{2}^{1}+f_{2}^{2} \\ \vdots \\ f_{i}^{i-1}+f_{i}^{i}+f_{i}^{i+1} \\ \vdots \\ f_{n-2}^{n-1}+f_{n-2}^{n-2}+f_{n-2}^{n-3} \\ f_{n-1}^{n-1}+f_{n-1}^{n-2} \\ f_{n}^{n-1} \end{array} \right] \end{equation*} The matrix system (\ref{3.7}) has $N+1$ equations and $N+2$ unknowns. In order to solve this system, the numbers of equations and unknowns must be equal. From the boundary conditions (\ref{2}) and Eq.~(\ref{5}) it is easy to write \begin{equation*} \delta _{-1}=\frac{\lambda -\delta _{0}}{\alpha },\text{ \ }\delta _{N}=\beta -\alpha \delta _{N-1}. \end{equation*} Using these equalities, $\delta _{-1}$ and $\delta _{N}$ can be eliminated from the system, and then the matrix equation (\ref{3.7}) can be solved with the Thomas algorithm. Substituting the obtained parameters $\delta _{m}$ into Eq.~(\ref{5}), the numerical solution is found at the knots $x_{m}.$ \subsection{Quadratic B-spline Subdomain Galerkin method (SGM)} If each side of equation (\ref{1}) is multiplied by the weight function $V_{m}$ and integrated over the interval $[x_{m},x_{m+1}]$, then the following integral is obtained: \begin{equation*} \dint \limits_{x_{m}}^{x_{m+1}}[-\varepsilon u^{\prime \prime }(x)+pu^{\prime }(x)+qu(x)-f(x)]dx=0. \end{equation*} Here the weight function is \begin{equation*} V_{m}=\left \{ \begin{array}{cc} 1, & x_{m}\leq x<x_{m+1} \\ 0, & \text{otherwise.} \end{array} \right. \end{equation*} Applying integration by parts to the first term in the above integral gives \begin{equation*} -\varepsilon u^{\prime }(x)\overset{x_{m+1}}{\underset{x_{m}}{\mid }}+pu(x) \overset{x_{m+1}}{\underset{x_{m}}{\mid }}+q\dint \limits_{x_{m}}^{x_{m+1}}u(x)dx=\dint \limits_{x_{m}}^{x_{m+1}}f(x)dx. \end{equation*} Substituting the nodal values of $u$ and its derivative $u^{\prime }$ into the last equation, the following expression is obtained: \begin{equation} \begin{array}{l} -\varepsilon \Big( \dsum \limits_{j=m-1}^{m+1}\phi _{j}^{\prime }\overset{h_{m}}{\underset{0}{\mid }}\delta _{j}\Big) +p(x_{m}+\xi )\Big( \dsum \limits_{j=m-1}^{m+1}\phi _{j}\overset{h_{m}}{\underset{0}{\mid }}\delta _{j}\Big) +q(x_{m}+\xi )\dsum \limits_{j=m-1}^{m+1}\Big( \dint \limits_{x_{m}}^{x_{m+1}}\phi _{j}dx\Big) \delta _{j} \\ =\dint \limits_{x_{m}}^{x_{m+1}}f(x)dx \end{array} \label{4.1} \end{equation} With the help of the nodal values of the quadratic B-spline shape functions defined on the geometrically graded intervals $[x_{m},x_{m+1}]$, we have \begin{eqnarray} \dint \limits_{x_{m}}^{x_{m+1}}u^{\prime \prime }(x)dx &=&u^{\prime }(x) \overset{x_{m+1}}{\underset{x_{m}}{\mid }}=\frac{2\alpha }{h_{m+1}}\delta _{m+1}+(-\frac{2\alpha }{h_{m+1}}-\frac{2\alpha }{h_{m}})\delta _{m}+\frac{ 2\alpha }{h_{m}}\delta _{m-1} \label{5.1} \\ \dint \limits_{x_{m}}^{x_{m+1}}u^{\prime }(x)dx &=&u(x)\overset{x_{m+1}}{ \underset{x_{m}}{\mid }}=\delta _{m+1}+(\alpha -1)\delta _{m}-\alpha \delta _{m-1} \label{5.2} \\ \dint \limits_{x_{m}}^{x_{m+1}}u(x)dx &=&\dint \limits_{0}^{h_{m}}(\dsum \limits_{j=m-1}^{m+1}\phi _{j}\delta _{j})d\xi =\delta _{m-1}\dint \limits_{x_{m}}^{x_{m+1}}\phi _{m-1}dx+\delta _{m}\dint \limits_{x_{m}}^{x_{m+1}}\phi _{m}dx+\delta _{m+1}\dint \limits_{x_{m}}^{x_{m+1}}\phi _{m+1}dx \label{5.3} \end{eqnarray} Substituting the values of $Q_{m-1}$, $Q_{m}$, $Q_{m+1}$ into (\ref{5.3}), the integrals are computed as follows.
\begin{eqnarray*} \dint \limits_{x_{m}}^{x_{m+1}}\phi _{m-1}dx &=&\frac{1}{3}\alpha h_{m} \\ \dint \limits_{x_{m}}^{x_{m+1}}\phi _{m}dx &=&\frac{2}{3}h_{m}\left( \alpha +1\right) \\ \dint \limits_{x_{m}}^{x_{m+1}}\phi _{m+1}dx &=&\frac{1}{3}h_{m} \end{eqnarray*} Replacing these integrals in (\ref{5.3}), the following result is obtained: \begin{equation*} \dint \limits_{x_{m}}^{x_{m+1}}u(x)dx=\frac{1}{3}\alpha h_{m}\delta _{m-1}+ \frac{2}{3}h_{m}\left( \alpha +1\right) \delta _{m}+\frac{1}{3}h_{m}\delta _{m+1} \end{equation*} Applying the subdomain Galerkin procedure, the terms in Eq.~(\ref{4.1}) are replaced by their equivalents from (\ref{5.1}), (\ref{5.2}) and (\ref{5.3}). This substitution yields the following system: \begin{equation*} \begin{array}{l} (-\dfrac{2\alpha \varepsilon }{h_{m}}+\alpha (\dfrac{1}{3} h_{m}q(x)-p(x)))\delta _{m-1}+(\dfrac{2\varepsilon }{h_{m}}(1+\alpha )+(\alpha -1)p(x)+\dfrac{2}{3}h_{m}\left( \alpha +1\right) q(x))\delta _{m} \\ \\ +(-\dfrac{2\varepsilon }{h_{m}}+p(x)+\dfrac{1}{3}h_{m}q(x))\delta _{m+1}=\dint \limits_{x_{m}}^{x_{m+1}}f(x)dx \end{array} \end{equation*} After the necessary operations, this system can be written in matrix form as \begin{equation} \mathbf{AX=F} \label{7} \end{equation} where \begin{equation} A=\left[ \begin{array}{cccccccc} \alpha _{01} & \alpha _{02} & \alpha _{03} & & & & & \\ & \alpha _{11} & \alpha _{12} & \alpha _{13} & & & & \\ & & \alpha _{21} & \alpha _{22} & \alpha _{23} & & & \\ & & & \alpha _{31} & \alpha _{32} & \alpha _{33} & & \\ & & & \ddots & \ddots & \ddots & & \\ & & & & & \alpha _{n1} & \alpha _{n2} & \alpha _{n3} \end{array} \right] , \label{7.1} \end{equation} where \begin{equation*} \begin{array}{l} \alpha _{m1}=-\dfrac{2\alpha \varepsilon }{h_{m}}-\alpha p_{m}+\dfrac{1}{3} \alpha h_{m}q_{m}, \\ \\ \alpha _{m2}=\dfrac{2\varepsilon }{h_{m}}(1+\alpha )+(\alpha -1)p_{m}+\dfrac{ 2}{3}h_{m}\left( \alpha +1\right) q_{m}, \\ \\ \alpha _{m3}=-\dfrac{2\varepsilon }{h_{m}}+p_{m}+\dfrac{1}{3} h_{m}q_{m} \end{array} \end{equation*} and \begin{eqnarray*} X &=&\left[ \begin{array}{cccccc} \delta _{-1}, & \delta _{0}, & \delta _{1}, & \cdots , & \delta _{n-1}, & \delta _{n} \end{array} \right] ^{T} \\ F &=&\left[ \begin{array}{cccccc} f_{0}, & f_{1}, & f_{2}, & \cdots , & f_{n-1}, & f_{n} \end{array} \right] ^{T} \end{eqnarray*} \begin{equation*} f_{m}=f(x_{m}),\text{ \ }m=0,1,\cdots ,N. \end{equation*} The matrix system (\ref{7}) has $N+1$ equations and $N+2$ unknowns. In order to solve this system, the numbers of equations and unknowns must be equal. From the boundary conditions (\ref{2}) and Eq.~(\ref{5}) it is easy to write \begin{equation*} \delta _{-1}=\frac{\lambda -\delta _{0}}{\alpha },\text{ \ }\delta _{N}=\beta -\alpha \delta _{N-1}. \end{equation*} Using these equalities, $\delta _{-1}$ and $\delta _{N}$ can be eliminated from the system, and then the matrix equation (\ref{7.1}) can be solved with the Thomas algorithm. Substituting the obtained parameters $\delta _{m}$ into Eq.~(\ref{5}), the numerical solution is found at the knots $x_{m}.$ \section{Numerical Experiments} We have tested the accuracy of the numerical methods on two examples. Errors are measured with the norm \begin{equation*} L_{\infty }=\left \vert u-u_{N}\right \vert _{\infty }=\max_{j}\left \vert u_{j}-\left( u_{N}\right) _{j}\right \vert . \end{equation*} Since the boundary layers are at the right boundary in both examples, in order to minimize the error, we have searched the interval $\left( 0,1\right) $ for the best choice of the mesh ratio $\sigma .$ Solution profiles are illustrated in Figs. 1-4 for the first example and in Figs. 5-8 for the second example. These figures are graphed for $N=20$ and two different $\varepsilon .$ In order to see the success of the numerical methods more clearly, the exact solutions and the obtained results are illustrated together in all figures. In all figures, a continuous line is used for the exact solutions and the lines $\cdots $o$\cdots $o$\cdots $ and $\cdots $+$\cdots $+$\cdots $ are used for QM and SGM, respectively. Using a uniform mesh leads to oscillations in the solution profiles because of the boundary layer, as seen in Figs. 1, 3, 5 and 7. As observed from Figs. 2, 4, 6 and 8, after the best choice of the mesh ratio $\sigma ,$ these oscillations disappear. Using various $\varepsilon $ and $N,$ the calculated numerical errors are tabulated and compared in Table 1 and Table 2 for the first and the second examples, respectively. \textbf{Example. }Our example is \begin{equation*} \begin{tabular}{c} $-\varepsilon u^{\prime \prime }+u^{\prime }=\exp (x),$ \\ \\ $u(0)=u(1)=0$ \end{tabular} \end{equation*} with the exact solution \begin{equation*} u(x)=\frac{1}{1-\varepsilon }\left[ \exp (x)-\frac{1-\exp (1-1/\varepsilon )+\left( \exp (1)-1\right) \exp \left( \left( x-1\right) /\varepsilon \right) }{1-\exp (-1/\varepsilon )}\right] . \end{equation*} taken from \cite{lorenz}. \section{Conclusion} Quadratic B-spline Galerkin and subdomain Galerkin algorithms are applied to singularly perturbed problems. Difficulties arising from the modelling of the boundary layers in numerical methods are overcome by using B-splines over the geometrically graded mesh. The simplicity of the adaptation of the B-splines and the acceptable accuracy of the obtained solutions can be noted as advantages of the given numerical methods. Consequently, for the numerical solution of differential equations having boundary layers, B-spline Galerkin-type methods over the geometrically graded mesh are advisable. \end{document}
\begin{document} \title[ENERGY-MINIMIZING MAPS FROM NONNEGATIVE RICCI CURVATURE]{ENERGY-MINIMIZING MAPS FROM MANIFOLDS WITH NONNEGATIVE RICCI CURVATURE} \author[JAMES DIBBLE]{JAMES DIBBLE} \address{Department of Mathematics, University of Iowa, 14 MacLean Hall, Iowa City, IA 52242} \email{[email protected]} \thanks{Most of these results appear in my doctoral dissertation \cite{Dibble2014}. I'm thankful to my dissertation advisor, Xiaochun Rong, as well as Christopher Croke, Penny Smith, and Charles Frohman for many helpful discussions. An anonymous referee pointed out a mistake that I made while reformulating some of the results in \cite{Dibble2014}. Parts of this work were done while I visited Capital Normal University.} \subjclass[2010]{Primary 53C21 and 53C24; Secondary 53C22 and 53C43} \date{} \begin{abstract} The energy of any $C^1$ representative of a homotopy class of maps from a compact and connected Riemannian manifold with nonnegative Ricci curvature into a complete Riemannian manifold with no conjugate points is bounded below by a constant determined by the asymptotic geometry of the target, with equality if and only if the original map is totally geodesic. This conclusion also holds under the weaker assumption that the domain is finitely covered by a diffeomorphic product, and its universal covering space splits isometrically as a product with a flat factor, in a commutative diagram that follows from the Cheeger--Gromoll splitting theorem. \end{abstract} \maketitle \section{Introduction} Eells--Sampson \cite{EellsSampson1964} showed that a $C^2$ map from a compact manifold with nonnegative Ricci curvature into a complete manifold with nonpositive sectional curvature is harmonic if and only if it is totally geodesic. Hartman \cite{Hartman1967} further showed that every such harmonic map minimizes energy within its homotopy class. This paper presents a partial generalization of these results under the additional assumption that the original map is homotopic to a totally geodesic map. This is done by showing that the asymptotic geometry of $N$ yields a lower bound for the energy of maps in a given homotopy class that is realized by, and only by, totally geodesic maps. In the special case of a domain with nonnegative Ricci curvature and a target with no conjugate points, this takes the following form. \begin{theorem}\label{nnrc theorem} Let $M$ be a compact and connected $C^2$ Riemannian manifold with nonnegative Ricci curvature, $N$ a complete Riemannian manifold with no conjugate points, and $[F]$ a homotopy class of maps from $M$ to $N$. Then, for the flat semi-Finsler manifold $K$ and totally geodesic surjection $S : M \to K$ constructed in subsection 3.5, the following holds: For any $C^1$ map $f \in [F]$, $E(f) \geq E(S)$, with equality if and only if $f$ is totally geodesic. \end{theorem} \noindent The Cheeger--Gromoll splitting theorem \cite{CheegerGromoll1971} states that the Riemannian universal covering space of $M$ splits isometrically as a product $M_0 \times \mathbb{R}^k$, while the semi-Finsler universal covering space of $K$ is $\mathbb{R}^m$. The above map $S$ lifts to a totally geodesic map $M_0 \times \mathbb{R}^k \to \mathbb{R}^m$ that is constant on each $M_0$-fiber and an affine surjection on each $\mathbb{R}^k$-fiber. Moreover, if $f$ is totally geodesic, then $K$ is Riemannian and embeds isometrically in $N$ in such a way that $f = S$. 
The more general version of Theorem \ref{nnrc theorem} holds for compact domains that are finitely covered by a product $M_1 \times \mathbb{T}^k$ in a commutative diagram of the form \eqref{diagram1} that, by the Cheeger--Gromoll splitting theorem, holds whenever $M$ has nonnegative Ricci curvature. An example is given to show that such domains may have some negative Ricci curvature. These results build on the work of Croke--Fathi \cite{CrokeFathi1990} relating energy and intersection. Without curvature assumptions on $M$ and $N$, they proved a lower bound for the energy of any $C^1$ representative of a homotopy class of maps $[F]$ from $M$ to $N$, one which is realized only by maps called homotheties. They define the \textbf{intersection} of a map $f : M \to N$ to be \[ i(f) = \lim_{t \to \infty} \frac{1}{t} \int_{SM} \phi_t(v) \,d\mathrm{Liou}_{SM}(v)\textrm{,} \] where $\mathrm{Liou}_{SM}$ is the Liouville measure on the unit sphere bundle $\pi : SM \to M$, $\Phi : SM \times \mathbb{R} \to SM$ is the geodesic flow, $\Phi_t(\cdot) = \Phi(\cdot,t)$, and \begin{align*} \phi_t(v) = \min \{ L(\gamma) \,\big|\, \gamma : [0,t] \to N &\textrm{ is endpoint-fixed homotopic to the}\\ &\textrm{curve } s \mapsto f \big( \pi \circ \Phi_s(v) \big) \textrm{ for } 0 \leq s \leq t \}\textrm{.} \end{align*} Intersection turns out to be invariant under homotopy, so one may define the \textbf{intersection} of $[F]$ by $i([F]) = i(f)$ for any $f \in [F]$. If $g$ and $h$ denote the Riemannian metrics on $M$ and $N$, respectively, a \textbf{homothety} is a map $f : M \to N$ such that both $f^*(h) = cg$ for some $c \geq 0$ and the image under $f$ of each geodesic in $M$ minimizes length within its endpoint-fixed homotopy class. For each $n \in \mathbb{N}$, denote by $c_n$ the volume of the unit sphere $S^n \subseteq \mathbb{R}^{n+1}$, where, by convention, $S^0 = \{ -1, 1 \} \subseteq \mathbb{R}$ has volume $c_0 = 2$. \begin{theorem}[Croke--Fathi]\label{croke--fathi} Let $M$ and $N$ be Riemannian manifolds and $n = \dim(M)$. If $[F]$ is a homotopy class of maps from $M$ to $N$, then, for any $C^1$ map $f \in [F]$, \[ E(f) \geq \frac{n}{2 c_{n-1}^2 \mathrm{vol}(M)} i^2([F])\textrm{,} \] with equality if and only if $f$ is a homothety. \end{theorem} \noindent There are natural generalizations of energy and length to maps into semi-Finsler manifolds. For the class of maps to which Theorem \ref{nnrc theorem} applies, the intersection is a constant multiple, depending only on the dimensions of the factors in $M_1 \times \mathbb{T}^k$, of the length the totally geodesic surjection $S$. For a more general class of NNRC-like domains, energy may be used to identify totally geodesic maps in addition to certain types of homotheties. A version of this phenomenon is recorded in Theorem \ref{main theorem}, the statement of which is rather technical. When $N$ has no conjugate points, a slightly simpler statement holds. \begin{theorem}\label{main theorem for no conjugate points} Let $M$ be a compact $n$-dimensional $C^1$ Riemannian manifold and $\psi_1 : M_1 \times \mathbb{T}^k \to M$ a finite covering map, where $M_1 \times \mathbb{T}^k$ appears in a NNRC diagram \eqref{diagram1} that commutes isometrically and in which the manifold $M_0$ is compact. Let $N$ be a Riemannian manifold with no conjugate points and $[F]$ a homotopy class of maps from $M$ to $N$. 
Then, for the flat semi-Finsler manifold $K$ and the totally geodesic surjection $S : M \to K$ constructed in subsection 3.5, the following holds: For any $C^1$ map $f \in [F]$, \[ E(f) \geq E(S) \geq \frac{k c_k^2 c_{n-1}}{n c_n^2 c_{k-1}^2} \frac{L^2(S)}{\mathrm{vol}(M)} \geq \frac{1}{c_{n-1}} \frac{L^2(S)}{\mathrm{vol}(M)}\textrm{.} \] Moreover, each of the following holds: \noindent \textbf{(a)} $E(f) = E(S)$ if and only if $f$ is totally geodesic; \noindent \textbf{(b)} $E(f) = \frac{k c_k^2 c_{n-1}}{n c_n^2 c_{k-1}^2} \frac{L^2(S)}{\mathrm{vol}(M)}$ if and only if $f \circ \psi_1$ is constant along each $M_1$-fiber and a homothety along each $\mathbb{T}^k$-fiber; \noindent \textbf{(c)} $E(f) = \frac{1}{c_{n-1}} \frac{L^2(S)}{\mathrm{vol}(M)}$ if and only if $f$ is a homothety. \end{theorem} \noindent The final inequality, which is simply a statement about the volumes of spheres, is included because, in principle, the corresponding geometric conditions are different. However, the inequality is strict if and only if $\inf_{f \in [F]} E(f) > 0$ and $k < n$, in which case the energy level $\frac{1}{c_{n-1}} \frac{L^2(S)}{\mathrm{vol}(M)}$ is unattainable within $[F]$. \subsection*{Organization of the paper} The second section contains background information. Subsection 2.1 defines the energy and length of a map between manifolds and, following the work of Croke, recharacterizes them using Santal\'{o}'s formula. Subsection 2.2 describes the asymptotic norm of a $\mathbb{Z}^k$-equivariant metric on $\mathbb{Z}^k$. Subsection 2.3 discusses totally geodesic maps into manifolds with no conjugate points and, in particular, uses the Poincar\'{e} recurrence theorem to characterize totally geodesic maps from manifolds with finite volume that act trivially on the fundamental group. Subsection 2.4 defines the class of domains to which Theorem \ref{nnrc theorem} generalizes, which are those that are finitely covered by a diffeomorphic product in a commutative diagram inspired by the Cheeger--Gromoll splitting theorem. Subsection 2.5 is about the beta and gamma functions and the volumes of spheres. The main theorems are proved in the third section. Subsection 3.1 contains the most general result to be proved, the statement of which is rather long and technical. Subsection 3.2 defines the semi-Finsler torus and affine surjection associated to a homotopy class of maps. Subsection 3.3 relates the intersection of a homotopy class to the length of that affine surjection. Subsection 3.4 contains the proof of the main theorem. Subsection 3.5 specializes to the case of targets with no conjugate points. \section{Preliminaries} \subsection{Energy and length} All manifolds in the paper are assumed to be $C^1$, and those satisfying curvature bounds are assumed to be $C^2$. The \textbf{energy density} of a $C^1$ map $f : M \to N$ between Riemannian manifolds $(M,g)$ and $(N,h)$ is the function $e_f = \frac{1}{2} \,\mathrm{trace} \!<\!\cdot,\cdot\!>_{f^{-1}(TN)}$, where $f^{-1}(TN)$ is the pull-back bundle $\coprod_{x \in M} T_{f(x)} N \to M$ and $<\!\cdot,\cdot\!>_{f^{-1}(TN)}$ is the bundle pseudo-metric obtained by pulling back $h$ via $f$. The \textbf{energy} of $f$ is $E(f) = \int_M e_f \,d \mathrm{vol}_M$. Croke \cite{Croke1987} observed that, as a trace, $e_f(x)$ may be computed as an average over the unit sphere $S_x M \subseteq T_x M$, endowed with its usual round metric. 
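As a quick plausibility check of this averaging identity (made precise in the lemma below), one can verify numerically that for a linear map $A$ the average of $\|Av\|^2$ over the round unit sphere equals $\tfrac{1}{n}\mathrm{trace}(A^{\mathsf{T}}A)$. The following short Python sketch is an editorial illustration only, and is not part of the original argument; the matrix $A$ simply stands in for a differential $df_x$ expressed in orthonormal frames.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3                        # dimensions of the model tangent spaces
A = rng.standard_normal((m, n))    # stands in for the differential df_x

# Uniform sample on the unit sphere S^{n-1}: normalize Gaussian vectors.
v = rng.standard_normal((200000, n))
v /= np.linalg.norm(v, axis=1, keepdims=True)

spherical_average = np.mean(np.sum((v @ A.T)**2, axis=1))  # average of |A v|^2 over S^{n-1}

# e_f(x) = (n / (2 c_{n-1})) * integral over S^{n-1} of |A v|^2
#        = (n/2) * spherical average, which should agree with (1/2) trace(A^T A):
print((n / 2) * spherical_average, 0.5 * np.trace(A.T @ A))
\end{verbatim}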
\begin{lemma}[Croke]\label{energy density is a trace} Let $M$ and $N$ be Riemannian manifolds with $n = \dim(M)$, $f : M \rightarrow N$ a $C^1$ map, and $x \in M$. Then \begin{equation}\label{energy density} \begin{aligned} e_f(x) &= \frac{n}{2c_{n-1}} \int_{S_x M} \|v\|_{f^{-1}(TN)}^2 \,d\mathrm{vol}_{S_x M}\\ &= \frac{n}{2c_{n-1}} \int_{S_x M} \|df(v)\|_N^2 \,d\mathrm{vol}_{S_x M}\textrm{.} \end{aligned} \end{equation} \end{lemma} \noindent The expressions on the right-hand side of \eqref{energy density} depend only on the norm on $TN$. If $N$ is endowed with only a Finsler semi-norm $\| \cdot \|_N$, then the \textbf{energy density} of a $C^1$ map $f : M \to N$ is the function on $M$ defined by \eqref{energy density}, and the \textbf{energy} of $f$ is $E(f) = \int_M e_f \,d \mathrm{vol}_M$. Since the Liouville measure is locally the product $\mathrm{Liou}_{SM} = \mathrm{vol}_M \times \mathrm{vol}_{S^{n-1}}$, one has (cf. \cite{Croke1987}) that \begin{equation}\label{energy integral} \begin{aligned} E(f) &= \frac{n}{2c_{n-1}} \int_M \int_{S_x M} \|df(v)\|_N^2 \,d\mathrm{vol}_{S_x M} \,d\mathrm{vol}_M\\ &= \frac{n}{2c_{n-1}} \int_{SM} \|df(v)\|_N^2 \,d\mathrm{Liou}_{SM} \end{aligned} \end{equation} \noindent Jost \cite{Jost1997} gave definitions of energy density and energy for maps from measure spaces into metric spaces. Centore \cite{Centore2000} showed that the above definitions agree with Jost's in the Finsler setting and yield a sensible energy functional, in that its minimizers have vanishing Laplacian. By analogy with energy, the \textbf{length density} of a $C^1$ map $f : M \to N$ at a point $x \in M$ is defined here to be \[ \ell_f(x) = \sqrt{\frac{n}{2c_{n-1}}} \int_{S_x M} \|df(v)\|_N \,d\mathrm{vol}_{S_x M}\textrm{,} \] and the \textbf{length} of $f$ is $L(f) = \int_M \ell_f \,d \mathrm{vol}_M$, so that \[ L(f) = \sqrt{\frac{n}{2c_{n-1}}} \int_{SM} \|df(v)\|_N \,d\mathrm{Liou}_{SM}\textrm{.} \] When $n = 1$, this definition agrees with the usual length of a curve in a semi-Finsler manifold. According to the Cauchy--Schwarz inequality, $e_f \geq \frac{1}{c_{n-1}} \ell_f^2$ and \begin{equation}\label{cauchy--schwarz} E(f) \geq \frac{1}{\mathrm{vol}(SM)} L^2(f) = \frac{1}{c_{n-1}\mathrm{vol}(M)} L^2(f)\textrm{,} \end{equation} with equality if and only if $\| df \|$ is constant. When $M$ has boundary $\partial M \neq \emptyset$, denote by $\nu$ the inward-pointing unit normal vector field along the $C^1$ portion of $\partial M$ and $S^+ \partial M$ the inward-pointing unit vectors there. That is, \[ S^+ \partial M = \{ w \in S(\partial M) \,\big|\, g(w,\nu) > 0 \}\textrm{.} \] For each $w \in S^+ \partial M$, denote by $l(w) \in (0,\infty]$ the supremum of times at which the geodesic $t \mapsto \pi \circ \Phi_t(w)$ is defined and $\varsigma_w$ the restriction of that geodesic to $[0,l(w)]$, so that $l(w) = L(\varsigma_w)$. Set $U = \{ (w,t) \in S^+ \partial M \times [0,\infty) \,\big|\, t \leq l(w) \}$. Denote by $\mathrm{vol}_{S^+ \partial M}$ the measure obtained by restricting $\mathrm{vol}_{\partial M} \times \mathrm{vol}_{S^{n-1}}$ to $S^+ \partial M$. Santal\'{o} showed that, at $(w,t) \in U$, \[ \Phi|_U^*(d\mathrm{Liou}_{SM}) = g(w,\nu) \, d\mathrm{vol}_{S^+ \partial M} \, dt\textrm{;} \] Consequently, integrals over $X = \Phi(U)$ may be reformulated as integrals over $S^+ \partial M$ \cite{Santalo1952,Santalo1976}. \begin{theorem}[Santal\'{o}'s formula] Let $M$ be a complete Riemannian manifold with boundary. 
If $f : X \to [0,\infty)$ is integrable, where $X = \Phi(U)$ as above, then \[ \int_X f \,d\mathrm{Liou}_{SM} = \int_{S^+ \partial M} \big[ \int_0^{l(w)} f(\Phi_t(w)) \, dt \big] g(w,\nu) \,d\mathrm{vol}_{S^+ \partial M}(w) \textrm{.} \] \end{theorem} \noindent Applying Santal\'{o}'s formula to \eqref{energy integral}, Croke \cite{Croke1984,Croke1987} observed that the energy of a map is bounded below by an integral over $S^+ \partial M$, with equality if geodesics in all directions reach $\partial M$ in finite time. \begin{lemma}[Croke]\label{energy as a santalo integral} Let $M$ be a complete Riemannian manifold with boundary. If $f : M \to N$ is a $C^1$ map into a Riemannian manifold, then \[ E(f) \geq \frac{n}{2c_{n-1}} \int_{S^+ \partial M} E(\varsigma_w) g(w,\nu) \,d\mathrm{vol}_{S^+ \partial M} (w)\textrm{,} \] with equality if $\Phi|_U : U \to SM$ is surjective. \end{lemma} \noindent The above conclusion also holds when $N$ is a semi-Finsler manifold, and one may similarly bound the length integral. \subsection{Asymptotic semi-norm of a periodic metric on $\mathbb{Z}^m$} It was shown by Burago \cite{Burago1992} that any $\mathbb{Z}^m$-equivariant Riemannian metric on $\mathbb{R}^m$ is within finite Gromov--Hausdorff distance of a normed space. In Section 8.5 of \cite{BuragoBuragoIvanov2001}, this is partially generalized to distance functions on $\mathbb{Z}^m$. \begin{theorem}[Burago--Burago--Ivanov]\label{asymptotic norm} Let $d : \mathbb{Z}^m \times \mathbb{Z}^m \to [0,\infty)$ be a distance function that's equivariant with respect to the action of $\mathbb{Z}^m$ on itself by addition. Then there exists a unique semi-norm $\| \cdot \|_\infty$ on $\mathbb{R}^m$ such that \[ \| v \|_\infty = \lim_{n \to \infty} \frac{d(0,nv)}{n} \] for all $v \in \mathbb{Z}^m$. This semi-norm has the following properties: \noindent \textbf{(i)} $\frac{d(0,v)}{\|v\|_\infty} \to 1$ uniformly as $\|v\|_\infty \to \infty$; \noindent \textbf{(ii)} If $d$ is the orbit metric of a free and properly discontinuous action, with compact quotient, of $\mathbb{Z}^m$ on a length space, then $\| \cdot \|_\infty$ is a norm. \end{theorem} \noindent The above terminology is defined in \cite{BuragoBuragoIvanov2001}. The semi-norm $\| \cdot \|_\infty$ is the \textbf{asymptotic semi-norm} of $d$. If $N$ is a Riemannian manifold and $G$ is a finitely generated and torsion-free Abelian subgroup of $\pi_1(N)$, then, with respect to a fixed isomorphism $G \cong \mathbb{Z}^m$, the action of $G$ on the Riemannian universal covering space of $N$ induces an orbit metric on $\mathbb{Z}^m$. Though the corresponding asymptotic semi-norm $\| \cdot \|_\infty$ depends on the choice of isomorphism, in what follows the appropriate isomorphisms will be implicitly understood. Elementary arguments show that $\| \cdot \|_\infty$ is independent of the choice of basepoint for $\pi_1(N)$. The following is a routine application of the triangle inequality. \begin{lemma}\label{triangle inequality} Let $N$ be a Riemannian manifold and $G \cong \mathbb{Z}^m$ a finitely generated and torsion-free Abelian subgroup of $\pi_1(N)$. Then the corresponding orbit metric and asymptotic semi-norm satisfy $\|v\|_\infty \leq d(0,v)$ for all $v \in \mathbb{Z}^m$. \end{lemma} \subsection{Totally geodesic maps into manifolds with no conjugate points} A map $f : M \to N$ between length spaces is \textbf{totally geodesic} if the composition $f \circ \gamma : (a,b) \rightarrow N$ is a geodesic whenever $\gamma : (a,b) \rightarrow M$ is a geodesic.
When $M$ and $N$ are Riemannian, an elementary argument shows that totally geodesic maps have full regularity (see \cite{Dibble2014} for details). The same doesn't necessarily hold for Finsler $N$; for instance, there are singular geodesics in $\mathbb{R}^2$ with respect to the constant Finsler norm $ae_1 + be_2 \mapsto |a| + |b|$. If $N$ is Riemannian and $\gamma : [a,b] \rightarrow N$ is a geodesic, then \textbf{$\gamma(a)$ and $\gamma(b)$ are conjugate along $\gamma$} if there exists a nontrivial normal Jacobi field $J$ along $\gamma$ that vanishes at the endpoints. A Riemannian manifold $N$ has \textbf{no conjugate points} if no two points of $N$ are conjugate along any geodesic connecting them. A complete $N$ has no conjugate points exactly when each pair of points in its universal covering space $\hat{N}$ is connected by a unique minimal geodesic (see \cite{O'Sullivan1974} and \cite{GromollKlingenbergMeyer1968}). In this case, $\hat{N}$ is diffeomorphic to $\mathbb{R}^n$, $N$ is aspherical, and $\pi_1(N)$ is torsion-free \cite{Hurewicz1936}. According to the classical theorem of Cartan--Hadamard, manifolds with nonpositive sectional curvature have no conjugate points, as lengths of Jacobi fields are convex. \begin{lemma}\label{totally geodesic into no conjugate points} Let $M$ be a complete and connected Riemannian manifold with finite volume and $N$ a length space that admits a locally isometric covering from a space $\tilde{N}$ in which every geodesic is minimal. If $f : M \to N$ is a totally geodesic map with trivial induced homomorphism on $\pi_1(M)$, then $f$ is constant. \end{lemma} \begin{proof} Assuming the result is false, one may find an open $U \subseteq TM$, contained in a set of the form $\{ v \in TM \,\big|\, \pi(v) \in U_0, \| v \| \leq C_0 \}$ with compact closure, such that $df \neq 0$ on $U$. Fix $v \in U$, and write $x = \pi(v)$. Let \[ C = \max \{ \| df(w) \| \,\big|\, \pi(w) \in \overline{U}_0, \| w \| \leq C_0 \} > 0\textrm{,} \] $V = U \cap TB(x,\varepsilon/C)$, and $D = \min_{w \in \overline{V}} \| df(w) \| > 0$. By the Poincar\'{e} recurrence theorem, there exist $T > 2\varepsilon/D$ and $w \in V$ such that $z = \Phi_T(w) \in V$. Write $y = \pi(w)$. If $\alpha$ and $\beta$ are minimal geodesics from $x$ to $y$ and $z$, respectively, then $L(\alpha),L(\beta) < \varepsilon/C$. Since the image of the concatenation $\alpha \cdot \Phi(w,\cdot)|_{[0,T]} \cdot \beta^{-1}$ under $f$ is homotopically trivial in $N$, it lifts to $\tilde{N}$. The lifts of $f \circ \alpha$, $f \circ \Phi(w,\cdot)|_{[0,T]}$, and $f \circ \beta^{-1}$ are minimal geodesics in $\tilde{N}$ which satisfy $L \big( f \circ \Phi(w,\cdot)|_{[0,T]} \big) > 2\varepsilon$, $L(f \circ \alpha) < \varepsilon$, and $L(f \circ \beta^{-1}) < \varepsilon$. This contradicts the triangle inequality. \end{proof} \noindent The requirement that $M$ have finite volume cannot be dropped from Lemma \ref{totally geodesic into no conjugate points}, as shown by the universal covering map onto any complete and connected manifold with no conjugate points. \subsection{Commutative diagrams} This subsection describes the types of domains to which Theorem \ref{nnrc theorem} generalizes. Let $M_0$ and $M_1$ be $C^1$ manifolds, $\pi_0 : M_0 \times \mathbb{R}^k \to \mathbb{R}^k$ and $\pi_1 : M_1 \times \mathbb{T}^k \to \mathbb{T}^k$ projection onto the second components, and $\psi : M_0 \times \mathbb{R}^k \to M_1 \times \mathbb{T}^k$ and $\phi : \mathbb{R}^k \to \mathbb{T}^k$ covering maps. 
The \textbf{NNRC diagram} \begin{equation}\label{diagram1} \begin{gathered} \xymatrix{ M_0 \times \mathbb{R}^k \ar[r]^-{\pi_0} \ar[d]^-\psi & \mathbb{R}^k \ar[d]^-\phi \\ M_1 \times \mathbb{T}^k \ar[r]^-{\pi_1} & \mathbb{T}^k} \end{gathered} \end{equation} \noindent is said to \textbf{commute isometrically} if it commutes, $M_0$, $M_0 \times \mathbb{R}^k$, and $M_1 \times \mathbb{T}^k$ are connected Riemannian manifolds, $M_0 \times \mathbb{R}^k$ has a product metric with a flat $\mathbb{R}^k$-factor, and $\psi$ and $\phi$ are local isometries. It's emphasized that $M_1 \times \mathbb{T}^k$ might not have a product metric and that $\pi_1$ is not necessarily a Riemannian submersion. This definition is motivated by the following result of Cheeger--Gromoll \cite{CheegerGromoll1972}, a consequence of their celebrated splitting theorem. \begin{theorem}[Cheeger--Gromoll]\label{cheeger--gromoll splitting theorem} Let $M$ be a compact $C^2$ Riemannian manifold with nonnegative Ricci curvature. Then $M$ is finitely and locally isometrically covered by a Riemannian manifold $M_1 \times \mathbb{T}^k$ in a diagram of the form \eqref{diagram1} that commutes isometrically, in which $M_0$ is compact and simply connected. \end{theorem} \begin{proof} This is almost exactly the statement of Theorem 9 in \cite{CheegerGromoll1972}, which holds in the case of nonnegative Ricci curvature by Theorem 2 of \cite{CheegerGromoll1971}. However, it's not immediately clear that the diffeomorphism $\hat{M} \to M_1 \times \mathbb{T}^k$ therein makes the diagram \eqref{diagram1} commute. One may verify that the diffeomorphism they construct is canonical and takes the image of each $M_0$-fiber to the correct $M_1$-fiber. \end{proof} \begin{remark} As in \cite{CheegerGromoll1972}, let $\mathbb{Z}$ act on the isometric product $S^2 \times \mathbb{R}$ by rotation by irrational multiples of $2\pi$ in the first component and translations in the second. Then the quotient is diffeomorphic to $S^2 \times S^1$ but not finitely covered by an isometric product. Modifying this so that the initial metric on $S^2$ is no longer round, but still admits an isometric $S^1$ action, one may obtain a diagram \eqref{diagram1} that commutes isometrically but in which $M_1 \times \mathbb{T}^k$ has some negative Ricci curvature. \end{remark} When the diagram \eqref{diagram1} commutes, $\psi$ restricts on each $M_0$-fiber to a covering map onto an $M_1$-fiber. Moreover, there exist a covering map $\chi : M_0 \to M_1$ and a diffeomorphism $\varphi : M_0 \times \mathbb{R}^k \to M_0 \times \mathbb{R}^k$ such that the diagram \begin{equation}\label{diagram2} \begin{gathered} \xymatrix{ M_0 \times \mathbb{R}^k \ar[r]^-\varphi \ar[dr]_-{\chi \times \phi} & M_0 \times \mathbb{R}^k \ar[r]^-{\pi_0} \ar[d]^-\psi & \mathbb{R}^k \ar[d]^-\phi \\ & M_1 \times \mathbb{T}^k \ar[r]^-{\pi_1} & \mathbb{T}^k} \end{gathered} \end{equation} \noindent commutes. The map $\chi$ may be taken as $\rho_1 \circ \psi \circ \hat{\iota}_{\hat{x}}$ for any $\hat{x} \in \mathbb{R}^k$, where $\hat{\iota}_{\hat{x}}$ is inclusion and $\rho_1$ is projection onto the first component of $M_1 \times \mathbb{T}^k$. The map $\varphi$ is a lift of $\chi \times \phi$ along $\psi$, which may be shown to exist by general homotopy theory. 
\begin{lemma}\label{bounded diameter} Suppose the diagram \eqref{diagram1} commutes isometrically, $M_1$ is compact, $N$ is a Riemannian manifold with Riemannian universal covering map $\pi : \hat{N} \to N$, and $f : M_1 \times \mathbb{T}^k \to N$ is a continuous function such that $f \circ \iota_z(\cdot) = f(\cdot,z)$ induces the trivial homomorphism on $\pi_1(M_1)$ for each $z \in \mathbb{T}^k$. Then there exist maps $\hat{f}$ and $\hat{F}$ such that diagram \begin{equation}\label{diagram3} \begin{gathered} \xymatrix{ M_0 \times \mathbb{R}^k \ar[r]^-{\hat{F}} \ar[d]^-\psi \ar[dr]^-{\hat{f}} & \hat{N} \ar[d]^-\pi \\ M_1 \times \mathbb{T}^k \ar[r]^{f} & N } \end{gathered} \end{equation} commutes. Moreover, for any $R \geq 0$, there exists $C \geq 0$ such that any lift $\hat{F}$ satisfies $\mathrm{diam} \big( \hat{F}(M_0 \times B(\hat{z},R)) \big) \leq C$ for all $\hat{z} \in \mathbb{R}^k$. \end{lemma} \begin{proof} Write $\hat{f} = f \circ \psi$. The topological assumption on $f$ implies that $\hat{f}$ lifts to a map $\hat{F} : M_0 \times \mathbb{R}^k \to \hat{N}$ such that the diagram \eqref{diagram3} commutes. The homotopy lifting property implies that $\hat{F} \circ \varphi$ is constant on each fiber of $\chi \times \mathrm{id} : M_0 \times \mathbb{R}^k \to M_1 \times \mathbb{R}^k$. It follows that $\hat{F} \circ \varphi$ descends to a map $F : M_1 \times \mathbb{R}^k \to \hat{N}$ such that the diagram \[ \xymatrix{ M_0 \times \mathbb{R}^k \ar[r]^-{\varphi} \ar[d]^-{\chi \times \mathrm{id}} & M_0 \times \mathbb{R}^k \ar[r]^-{\hat{F}} \ar[d]^\psi \ar[dr]^(0.6){\hat{f}} & \hat{N} \ar[d]^-\pi \\ M_1 \times \mathbb{R}^k \ar[r]^{\mathrm{id} \times \phi} \ar@/^0.4pc/[rru]^(0.35){F} & M_1 \times \mathbb{T}^k \ar[r]^{f} & N } \] \noindent commutes. Fix $R \geq 0$. Define $D : \mathbb{T}^k \to [0,\infty)$ by $D(z) = \mathrm{diam} \big( F(M_1 \times B(\hat{z},R)) \big)$ for any $\hat{z} \in \phi^{-1}(z)$. Since $\phi$ and $\pi$ are local isometries, $D$ is well defined. One may show that $D$ is independent of the choice of lift $\hat{F}$. By continuity, $D$ is bounded above by some $C \geq 0$. The result follows. \end{proof} \noindent It will also help to characterize certain totally geodesic maps from finite-volume $M_1 \times \mathbb{T}^k$ into manifolds with no conjugate points. Note that, when the diagram \eqref{diagram1} commutes isometrically, each $M_1$-fiber of $M_1 \times \mathbb{T}^k$ must be totally geodesic. \begin{lemma}\label{totally geodesic from nnrc into no conjugate points} Suppose the diagram \eqref{diagram1} commutes isometrically, $M_1 \times \mathbb{T}^k$ has finite volume, $N$ is a Riemannian manifold such that every geodesic in $\hat{N}$ is minimal, and $f : M_1 \times \mathbb{T}^k \to N$ is a continuous function such that $f \circ \iota_z$ induces the trivial homomorphism on $\pi_1(M_1)$ for each $z \in \mathbb{T}^k$. Then $f$ is totally geodesic if and only if $f$ is constant along each $M_1$-fiber and, for each $p \in M_1$, $f \circ \iota_p$ is totally geodesic. \end{lemma} \begin{proof} Suppose $f$ is constant along each $M_1$-fiber and, for each $p \in M_1$, $f \circ \iota_p$ is totally geodesic. Since $\phi$ is a local isometry and $\pi_1 \circ \psi = \phi \circ \pi_0$, $\hat{f}$ is constant along each $M_0$-fiber and each $\hat{f} \circ \iota_{\hat{p}}$ is totally geodesic. Since $M_0 \times \mathbb{R}^k$ has a product metric, $\hat{f}$ is totally geodesic, which, since $\psi$ is a local isometry, means $f$ is as well. Conversely, suppose $f$ is totally geodesic. 
Note that $f$ must be $C^1$. Since $M_1 \times \mathbb{T}^k$ has finite volume and each $M_1$-fiber is totally geodesic, the coarea formula implies that almost all $M_1$-fibers have finite volume. By Lemma \ref{totally geodesic into no conjugate points} and a continuity argument, $f$ must be constant along each $M_1$-fiber. A short additional argument now shows that each $f \circ \iota_p$ is totally geodesic. \end{proof} \begin{lemma}\label{fundamental domains} Suppose the diagram \eqref{diagram1} commutes. If $\chi$ is finite with $\kappa_0$ sheets and $B \subseteq \mathbb{R}^k$ is a fundamental domain of $\phi$, then $M_0 \times B$ is the union of $\kappa_0$ fundamental domains of $\psi$. \end{lemma} \begin{proof} From the diagram \eqref{diagram2}, one sees that, whenever $A \subseteq M_0$ and $B \subseteq \mathbb{R}^k$ are fundamental domains of $\chi$ and $\phi$, respectively, $\varphi(A \times B)$ is a fundamental domain of $\psi$. For any $\hat{p} \in M_0$, $\pi_0 \circ \varphi \circ \hat{\iota}_{\hat{p}}$ is a deck transformation of $\phi$. Therefore, $B_0 = (\rho_0 \circ \varphi \circ \hat{\iota}_{\hat{p}})^{-1}(B)$ is a fundamental domain of $\phi$. For any $\hat{z} \in \mathbb{R}^k$, $\rho_0 \circ \varphi \circ \hat{\iota}_{\hat{z}}$ is an automorphism of $M_0$. Thus $M_0 \times B = \varphi(M_0 \times B_0)$, from which the result follows. \end{proof} \begin{remark} If the diagram \eqref{diagram1} commutes isometrically, $\Gamma$ is the deck transformation group of $\psi$, and $\mathscr{I}(M_0)$ and $\mathscr{I}(\mathbb{R}^k)$ are the isometry groups of $M_0$ and $\mathbb{R}^k$, respectively, then $\Gamma \subseteq \mathscr{I}(M_0) \times \mathscr{I}(\mathbb{R}^k)$. \end{remark} \begin{remark} Suppose the diagram \eqref{diagram1} commutes isometrically, $M_0$ has finite volume, and $\chi$ has $\kappa_0$ sheets. Since $\psi$ restricts on each $M_0$-fiber to a local isometry $\chi_{\hat{z}} = \rho_1 \circ \psi \circ \hat{\iota}_{\hat{z}}$ onto a totally geodesic $M_1$-fiber of $M_1 \times \mathbb{T}^k$, and the number of sheets of $\chi_{\hat{z}}$ must be constant in $\hat{z} \in \mathbb{R}^k$, each $M_1$-fiber has volume $\frac{1}{\kappa_0} \mathrm{vol}(M_0)$. \end{remark} \subsection{Beta and gamma functions} An inequality about the volumes of the unit spheres $S^n \subseteq \mathbb{R}^{n+1}$ will be used in the proof of Theorem \ref{main theorem}. \begin{lemma}\label{volumes of spheres} Let $n \geq k$. Then $n c_n^2 c_{k-1}^2 \leq k c_k^2 c_{n-1}^2$, with equality if and only if $n = k$. \end{lemma} \noindent Let $\mathbb{C}_+ = \{ z \in \mathbb{C} \,\big|\, \mathrm{Re}(z) > 0 \}$. Define the \textbf{beta function} $B : \mathbb{C}_+ \times \mathbb{C}_+ \to \mathbb{C}$ by \[ B(x,y) = \int_0^1 t^{x-1}(1-t)^{y-1} \,dt \] and the \textbf{gamma function} $\Gamma : \mathbb{C}_+ \to \mathbb{C}$ by \[ \Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \,dt\textrm{.} \] The gamma function continuously extends $(n-1)!$, as $\Gamma(1) = 1$ and $\Gamma(z+1) = z\Gamma(z)$ for all $z \in \mathbb{C}_+$. These are related to each other, and to the volumes of spheres, by the following well-known equalities. \begin{lemma}\label{beta and gamma} Each of the following holds: \noindent \textbf{(a)} $B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ for all $x,y \in \mathbb{C}_+$; \noindent \textbf{(b)} $c_{n-1} = \frac{2\pi^{n/2}}{\Gamma(\frac{n}{2})}$ for all $n \in \mathbb{N}$; \noindent \textbf{(c)} $B(\frac{k}{2},\frac{l}{2}) = \frac{2c_{k+l-1}}{c_{k-1} c_{l-1}}$ for all $k,l \in \mathbb{N}$. 
\end{lemma} \noindent The following is an immediate consequence of Theorem 1 in \cite{BustozIsmail1986}. \begin{theorem}[Bustoz--Ismail]\label{bustoz--ismail} The function $x \mapsto \sqrt{x}\frac{\Gamma(x)}{\Gamma(x + \frac{1}{2})}$ is strictly decreasing on $(0,\infty)$. \end{theorem} \noindent Theorem \ref{bustoz--ismail} and Lemma \ref{beta and gamma}(a) together imply the following inequality for the beta function, which has, to the best of my knowledge, gone unremarked upon in the literature. \begin{lemma}\label{beta inequality} For any $x,y \geq 0$, $B(x+\frac{1}{2},y) < \sqrt{\frac{x}{x+y}} B(x,y)$. \end{lemma} \noindent This may be used to prove Lemma \ref{volumes of spheres}. \begin{proof}[Proof of Lemma \ref{volumes of spheres}] If $n = k$, then equality is clear. If $n > k$, write $l = n - k$. Then Lemma \ref{beta inequality} implies that \[ B^2 \big( \frac{k + l}{2},\frac{l}{2} \big) < \frac{k}{k+l} B^2 \big( \frac{k}{2},\frac{l}{2} \big)\textrm{,} \] which by Lemma \ref{beta and gamma}(c) is equivalent to \[ \frac{4c_{k+l}^2}{c_k^2 c_{l-1}^2} < \frac{4kc_{k+l-1}^2}{(k+l)c_{k-1}^2 c_{l-1}^2}\textrm{.} \] This is equivalent to the strict form of the desired inequality. \end{proof} \begin{remark}\label{gurland's inequality} The inequality $B(\frac{k+1}{2},\frac{l}{2}) < \sqrt{\frac{k}{k+l}} B(\frac{k}{2},\frac{l}{2})$ for $k,l \in \mathbb{N}$ may also be derived from Gurland's inequality $\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})} < \frac{n}{\sqrt{2n+1}}$ \cite{Gurland1956}. \end{remark} \section{Proof of the main theorems} \subsection{Statement of the main technical result} Theorem \ref{main theorem for no conjugate points} is built upon the following result, which contains no assumptions on the target manifold. \begin{theorem}\label{main theorem} Let $M$ be a compact $n$-dimensional $C^1$ Riemannian manifold and $\psi_1 : M_1 \times \mathbb{T}^k \to M$ a finite covering map, where $M_1 \times \mathbb{T}^k$ appears in a NNRC diagram \eqref{diagram1} that commutes isometrically and in which the manifold $M_0$ is compact. Let $N$ be a Riemannian manifold and $[F]$ a homotopy class of maps from $M$ to $N$ such that $F \circ \psi_1$ acts trivially on $\pi_1(M_1)$. Then, for the flat semi-Finsler torus $\mathbb{T}^m$ and totally geodesic surjection $T : \mathbb{T}^k \to \mathbb{T}^m$ constructed in subsection 3.2, the following holds: For any $C^1$ map $f \in [F]$, \[ E(f) \geq \mathrm{vol}(M) e_T \geq \frac{1}{c_{k-1}} \mathrm{vol}(M)\ell_T^2 \geq \frac{n c_n^2 c_{k-1}}{k c_k^2 c_{n-1}^2} \mathrm{vol}(M)\ell_T^2\textrm{.} \] Moreover, each of the following holds: \noindent \textbf{(a)} If $E(f) = \mathrm{vol}(M) e_T$, then $f$ is totally geodesic; \noindent \textbf{(b)} $E(f) = \frac{1}{c_{k-1}} \mathrm{vol}(M)\ell_T^2$ if and only if $f \circ \psi_1$ is constant along each $M_1$-fiber and a homothety along each $\mathbb{T}^k$-fiber; \noindent \textbf{(c)} $E(f) = \frac{n c_n^2 c_{k-1}}{k c_k^2 c_{n-1}^2} \mathrm{vol}(M)\ell_T^2$ if and only if $f$ is a homothety and either $M_1$ is a point or $f$ is constant. \end{theorem} \noindent The next three subsections are devoted to proving this, while the last specializes to the case of targets with no conjugate points. \subsection{The semi-Finsler torus and totally geodesic surjection associated to $[F]$} Throughout this section, $M$ and $[F]$ will be assumed to satisfy the hypotheses of Theorem \ref{main theorem}. The flat metric on $\mathbb{T}^k$ will be denoted by $g$. 
The constant $\kappa_1$ will denote the number of sheets of the covering map $\psi_1 : M_1 \times \mathbb{T}^k \to M$, which is assumed to be a local isometry. For a fixed $f \in [F]$, let $\tilde{f} = f \circ \psi_1$ and $\hat{f} = \tilde{f} \circ \psi$. The topological assumption on $F \circ \psi_1$ is equivalent to the statement that $\tilde{f} \circ \iota_z$ induces the trivial homomorphism on $\pi_1(M_1)$ for all $z \in \mathbb{T}^k$, where $\iota$ denotes inclusion. Fix $\hat{p} \in M_0$, and let $(p,x) = \psi(\hat{p},0)$. Then \[ G = \tilde{f}_*(\pi_1(M_1 \times \mathbb{T}^k)) = (\tilde{f} \circ \iota_x)_*(\pi_1(M_1)) (\tilde{f} \circ \iota_p)_*(\pi_1(\mathbb{T}^k)) = (\tilde{f} \circ \iota_p)_*(\pi_1(\mathbb{T}^k)) \] is a free Abelian group of rank $0 \leq m \leq k$. Since the metric on $\mathbb{R}^k$ is flat, one may suppose that $\phi$ is the quotient map induced by a lattice $\Gamma_1$ equal to the span, with integer coefficients, of a set of vectors $V = \{ v_1,\ldots,v_k \}$. Each geodesic segment $t \mapsto tv_i$, where $t \in [0,1]$, descends via $\phi$ to a closed geodesic $s_i$ based at $x$, and $\{[s_1],\ldots,[s_k]\}$ is a minimal generating set for $\pi_1(\mathbb{T}^k)$. Let $\sigma_i = \tilde{f} \circ \iota_p \circ s_i$. Without loss of generality, one may suppose that $G$ is generated by $[\sigma_1],\ldots,[\sigma_m]$. Denote by $d_G$ the orbit metric induced by the action of $G \cong \mathbb{Z}^m$ on $\hat{N}$, by $d_{\mathbb{Z}^m}$ the induced metric on $\mathbb{Z}^m$, and by $\|\cdot\|_\infty$ the corresponding asymptotic semi-norm on $\mathbb{R}^m$. Endow $\mathbb{T}^m = \mathbb{R}^m / \mathbb{Z}^m$ with the constant semi-Finsler metric $\|\cdot\|_\infty$. For each $j = 1,\ldots,m$, let $w_j$ be the vector in $\mathbb{Z}^m$ corresponding to $[\sigma_j]$. There exist integers $s_{ij} \in \mathbb{Z}$ such that $[\sigma_i] = \sum_{j=1}^m s_{ij} [\sigma_j]$ for each $i = 1,\ldots,k$. Define a linear map $\hat{T} : \mathbb{R}^k \to \mathbb{R}^m$ by $\hat{T}(v_i) = \sum_{j=1}^m s_{ij} w_j$. Then $\hat{T}$ descends to a totally geodesic surjection $T : \mathbb{T}^k \to \mathbb{T}^m$. Let $\tilde{S} : M_1 \times \mathbb{T}^k \to \mathbb{T}^m$ be the map that's constant on each $M_1$-fiber and agrees with $T$ on each $\mathbb{T}^k$-fiber. Let $\hat{F} : M_0 \times \mathbb{R}^k \to \hat{N}$ be a lift of $\tilde{f}$ as in Lemma \ref{bounded diameter}, and let $\hat{d}$ be the distance function on $\mathbb{R}^k$ obtained as the quotient of the pull-back $\hat{F}^*(d_{\hat{N}})$ by $M_0$. The following relates $\|\hat{T}(\cdot)\|_\infty$ to $\hat{d}$. The basic idea of the proof is to use Theorem \ref{asymptotic norm}(i), but one must account for $v$ that point in irrational directions. \begin{lemma}\label{asymptotic inequalities} Each of the following holds: \noindent \textbf{(a)} There exists $D \geq 0$ such that, for all $v \in T_{\hat{z}} \mathbb{R}^k$, $\|\hat{T}(v)\|_\infty \leq \hat{d} (\hat{z}, \hat{z} + v) + D$; \noindent \textbf{(b)} For all $v \in T_{\hat{z}} \mathbb{R}^k$, $\| \hat{T}(v) \|_\infty = \lim_{t \to \infty} \frac{\hat{d}(\hat{z},\hat{z} + tv)}{t}$. \end{lemma} \begin{proof} \textbf{(a)} By Lemma \ref{bounded diameter}, there exists $D_0 \geq 0$ such that \[ d_{\hat{N}}(\hat{F}(\hat{q}_0,\hat{z}_0),\hat{F}(\hat{q}_1,\hat{z}_1)) \leq D_0 \] whenever $d_{\mathbb{R}^k}(\hat{z}_0,\hat{z}_1) \leq \mathrm{diam}(\mathbb{T}^k)$. 
Fix $u_0,u_1 \in \Gamma_1$ such that \[ d_{\mathbb{R}^k}(\hat{z}, u_0),d_{\mathbb{R}^k}(\hat{z} + v, u_1) \leq \mathrm{diam}(\mathbb{T}^k)\textrm{.} \] For $i = 0,1$, let $\alpha_i : [0,1] \to M_1 \times \mathbb{T}^k$ be the loop $\alpha_i(s) = (p,\phi(su_i))$, and let $\hat{\alpha}_i : [0,1] \to M_0 \times \mathbb{R}^k$ be the lift of $\alpha_i$ along $\psi$ satisfying $\hat{\alpha}_i(0) = (\hat{p},0)$. Then \begin{equation}\label{inequality1} |d_{\hat{N}}(\hat{F} \circ \hat{\alpha}_1(1),\hat{F} \circ \hat{\alpha}_0(1)) - \hat{d} (\hat{z}, \hat{z} + v)| \leq 2D_0\textrm{.} \end{equation} Write $u_i = \sum_{j=1}^k c_{ij} v_j$, so that $\pi_1 \circ \alpha_i \in \sum_{j=1}^k c_{ij} [s_j]$. Then $\tilde{f} \circ \alpha_i \in \sum_{j=1}^m c_{ij}[\sigma_j]$. By construction, \begin{equation}\label{inequality2} \begin{aligned} d_{\hat{N}}(\hat{F} \circ \hat{\alpha}_1(1),\hat{F} \circ \hat{\alpha}_0(1)) &= d_G([\tilde{f} \circ \alpha_1],[\tilde{f} \circ \alpha_0])\\ &= d_{\mathbb{Z}^m} \big(\sum_{j=1}^m c_{1j} w_j, \sum_{j=1}^m c_{0j} w_j \big)\\ &= d_{\mathbb{Z}^m}(0,\hat{T}(u_1 - u_0))\\ &\geq \|\hat{T}(u_1 - u_0)\|_\infty\textrm{,} \end{aligned} \end{equation} where the final inequality follows from Lemma \ref{triangle inequality}. Write $\|\hat{T}\|_\infty = \max_{|\hat{w}| = 1} \|\hat{T}(\hat{x} + \hat{w})\|_\infty$ and $D_1 = \|\hat{T}\|_\infty \mathrm{diam}(\mathbb{T}^k)$. Since $\hat{T}$ is linear, \begin{equation}\label{inequality3} \begin{aligned} \big| \|\hat{T}(u_1 - u_0)\|_\infty - \|\hat{T}(v)\|_\infty \big| &\leq \|\hat{T}(\hat{z} + v) - \hat{T}(u_1)\|_\infty + \|\hat{T}(\hat{z}) - \hat{T}(u_0)\|_\infty\\ &\leq 2D_1\textrm{.} \end{aligned} \end{equation} The result follows from \eqref{inequality1}-\eqref{inequality3} with $D = 2D_0 + 2D_1$. \noindent \textbf{(b)} If $v = 0$, the result is clear, so let $v \neq 0$. The argument is simplified by first proving two special cases. Suppose $v = \sum_{i=1}^m a_i v_i$ for $a_i \in \mathbb{R}$. Fix $\varepsilon > 0$. By Theorem \ref{asymptotic norm}(i), there exists $C \geq 0$ such that \[ \big| d_{\mathbb{Z}^m}(0,w) - \|w\|_\infty \big| \leq \varepsilon \|w\|_\infty \] whenever $w \in \mathbb{Z}^m$ satisfies $\|w\|_\infty \geq C$. Let $t_0 = (C+2D_1)/\|v\|_\infty$. For any $t \geq t_0$, there exist $u_0,u_1 \in \Gamma_1$, as in the proof of (a), such that \eqref{inequality1}-\eqref{inequality3} hold with respect to the corresponding $\alpha_i$. By \eqref{inequality3}, $\|\hat{T}(u_1 - u_0)\|_\infty \geq C$. So \begin{equation}\label{inequality4} \big| d_{\mathbb{Z}^m}(0,\hat{T}(u_1 - u_0)) - \|\hat{T}(u_1 - u_0)\|_\infty \big| \leq \varepsilon \|\hat{T}(u_1 - u_0)\|_\infty\textrm{.} \end{equation} As in \eqref{inequality2}, $d_{\mathbb{Z}^m}(0,\hat{T}(u_1 - u_0)) = d_{\hat{N}}(\hat{F} \circ \hat{\alpha}_1(1), \hat{F} \circ \hat{\alpha}_0(1))$. This and \eqref{inequality1}, \eqref{inequality3}, and \eqref{inequality4} imply \[ \big| \hat{d} (\hat{z}, \hat{z} + tv) - \|\hat{T}(tv)\|_\infty \big| \leq \varepsilon\|\hat{T}(tv)\|_\infty + 2D_0 + 2(1+\varepsilon)D_1\textrm{.} \] The conclusion follows in this case. The case $v = \sum_{i=m+1}^k a_i v_i$ is easier, since one may take $u_1 - u_0 = \sum_{i=m+1}^k b_i v_i$ for $b_i \in \mathbb{Z}$. 
It follows that $\hat{d} (\hat{z}, \hat{z} + tv) \leq 2D_0$ and, consequently, \[ \|\hat{T}(v)\|_\infty = 0 = \lim_{t \to \infty} \frac{\hat{d} (\hat{z}, \hat{z} + tv)}{t}\textrm{.} \] The general case follows by writing $v = v_0 + v_1$, where $v_0 = \sum_{i=1}^m a_i v_i$ and $v_1 = \sum_{i=m+1}^k a_i v_i$. Then $\|\hat{T}(v)\|_\infty = \|\hat{T}(v_0)\|_\infty$. The proof is completed by applying the triangle inequality to $\hat{d}$. \end{proof} \subsection{Length and intersection} Write $l = \mathrm{dim}(M_0)$, so that $n = \mathrm{dim}(M) = k + l$. As the results of this subsection will all hold trivially when $n = 0$, suppose $n > 0$, which ensures that $i([F])$ is well defined. Define constants \[ d_{kl} = \left\{ \begin{array}{ccc} 0 & \textrm{if} & k = 0 \\ \frac{c_{k+l}}{c_k} \sqrt{\frac{2c_{k-1}}{k}} & \textrm{if} & k > 0 \end{array} \right. \textrm{.} \] \begin{theorem}\label{length and intersection} Length and intersection are related by $i([F]) = d_{kl} \mathrm{vol}(M) \ell_T$. \end{theorem} \begin{proof} If $k = 0$, the result holds trivially, so suppose $k > 0$. One may show that the length of $T$ is independent of the choice of representative $f \in [F]$ used in its construction, so suppose, without loss of generality, that $f$ is $C^1$. Let $C \geq 0$ be an upper bound for $|df|$ on the unit sphere bundle $SM$. Then $\phi_t(w) \leq Ct$ for all $w \in SM$. Let $\hat{F} : M_0 \times \mathbb{R}^k \to \hat{N}$ be a lift of $\tilde{f} = f \circ \psi_1$ guaranteed by Lemma \ref{bounded diameter}. It follows from Lemma \ref{asymptotic inequalities}(b) that, for each $w \in S_x M$, $\tilde{w} \in S_{\tilde{x}}(M_1 \times \mathbb{T}^k)$, and $\hat{w} = (\hat{u},\hat{v}) \in S_{\hat{x}} (M_0 \times \mathbb{R}^k)$ such that $\psi_*(\hat{w}) = \tilde{w}$ and $(\psi_1)_*(\tilde{w}) = w$, \[ \|dT(\tilde{w})\|_\infty = \|\hat{T}(\hat{v})\|_\infty = \lim_{t \to \infty} \frac{d_{\hat{N}}(\hat{F}(\hat{p},\hat{x} + t\hat{v}),\hat{F}(\hat{p},\hat{x}))}{t} = \lim_{t \to \infty} \frac{\phi_t(w)}{t}\textrm{.} \] Thus \begin{equation}\label{intersection integral} \begin{aligned} i([F]) &= \int_{SM} \lim_{t \to \infty} \frac{\phi_t(w)}{t} \,d\mathrm{Liou}_{SM}\\ &= \frac{1}{\kappa_1}\int_{M_1 \times \mathbb{T}^k} \int_{S_{(\tilde{p},\tilde{x})} (M_1 \times \mathbb{T}^k)} \|d\tilde{S}(\tilde{w})\|_\infty \,d\mathrm{vol}_{S_{(\tilde{p},\tilde{x})}}(M_1 \times \mathbb{T}^k) d\mathrm{vol}_{M_1 \times \mathbb{T}^k}\textrm{,} \end{aligned} \end{equation} where the first equality follows from the bounded convergence theorem. Note that \begin{equation}\label{integral over sphere bundle} \int_{S_{\tilde{x}} \mathbb{T}^k} \|dT(\tilde{v})\|_\infty d\mathrm{vol}_{S_{\tilde{x}}} = \sqrt{\frac{2c_{k-1}}{k}} \ell_T\textrm{.} \end{equation} If $l = 0$, then $n = k$, $M_1 \times \mathbb{T}^k \cong \mathbb{T}^k$, and the result follows immediately. Suppose $l > 0$. Let $\rho : \{ (\tilde{u},\tilde{v}) \in S_{(\tilde{p},\tilde{x})} (M_1 \times \mathbb{T}^k) \,\big|\, \tilde{v} \neq 0 \} \to S_{\tilde{x}} \mathbb{T}^k$ be defined by $\rho(\tilde{w}) = d\pi_0(\tilde{w})/|d\pi_0(\tilde{w})|$, i.e, $\rho(\tilde{u},\tilde{v}) = \tilde{v}/|\tilde{v}|$. 
Applying the coarea formula with respect to $\rho$ yields \[ \begin{aligned} \int_{S_{(\tilde{p},\tilde{x})} (M_1 \times \mathbb{T}^k)} \|d\tilde{S}&(\tilde{w})\|_\infty \,d\mathrm{vol}_{S_{(\tilde{p},\tilde{x})} (M_1 \times \mathbb{T}^k)}\\ &= \int_{S_{\tilde{x}} \mathbb{T}^k} \|dT(\tilde{v})\|_\infty \int_{\rho^{-1}(\tilde{v})} |d\pi_0(\tilde{w})|^k \,d\mathrm{vol}_{\rho^{-1}(\tilde{v})} d\mathrm{vol}_{S_{\tilde{x}} \mathbb{T}^k}\textrm{.} \end{aligned} \] Another application of the coarea formula shows that \[ \int_{\rho^{-1}(\tilde{v})} |d\pi_0(\tilde{w})|^k d\mathrm{vol}_{\rho^{-1}(\tilde{v})} = c_{l-1} \int_0^1 r^k (1-r^2)^{\frac{l}{2} - 1} \,dr = \frac{1}{2}c_{l-1} B \big( \frac{k+1}{2},\frac{l}{2} \big)\textrm{.} \] Thus \begin{equation}\label{integral over product sphere bundle} \begin{aligned} \int_{S_{(\tilde{p},\tilde{x})} (M_1 \times \mathbb{T}^k)} \|d\tilde{S}(\tilde{w})\|_\infty \,&d\mathrm{vol}_{S_{(\tilde{p},\tilde{x})} (M_1 \times \mathbb{T}^k)}\\ &= \frac{1}{2} c_{l-1} B \big( \frac{k+1}{2},\frac{l}{2} \big) \int_{S_{\tilde{x}} \mathbb{T}^k} \|dT(\tilde{v})\|_\infty d\mathrm{vol}_{S_{\tilde{x}} \mathbb{T}^k}\textrm{.} \end{aligned} \end{equation} The result follows from \eqref{intersection integral}-\eqref{integral over product sphere bundle} and Lemma \ref{beta and gamma}(c). \end{proof} \subsection{Main inequalities} This subsection begins with a simple integral inequality, the proof of which is elementary. \begin{lemma}\label{elementary inequality} Let $f : [a,b] \to \mathbb{R}$ be a measurable function, and let $a = x_0 < x_1 < \cdots < x_n = b$ be a partition of $[a,b]$ for some $n \geq 1$. Then \[ \sum_{i=0}^{n-1} \frac{[ \int_{x_i}^{x_{i+1}} f(t) \,dt ]^2}{x_{i+1} - x_i} \geq \frac{[\int_a^b f(t) \,dt]^2}{b-a}\textrm{.} \] \end{lemma} \noindent It is now possible to prove Theorem \ref{main theorem}. \begin{proof}[Proof of Theorem \ref{main theorem}] Without loss of generality, suppose that the map $f \in [F]$ used in the construction of $S$ is $C^1$. Let $\hat{F} : M_0 \times \mathbb{R}^k \to \hat{N}$ be a lift of $\tilde{f}$ of the form in Lemma \ref{bounded diameter}. Denote by $\kappa_0$ the number of sheets of the covering map $\chi$ in diagram \eqref{diagram3} and, as before, by $\kappa_1$ the number of sheets of $\psi_1 : M_1 \times \mathbb{T}^k \to M$. For each $r \in \mathbb{N}$, define a parallelotope $P_r = \{ \sum_{i=1}^k t_i v_i \,\big|\, 0 \leq t_i \leq r \}$, where, as before, $\Gamma_1 = \{ \sum_{i=1}^k m_i v_i \,\big|\, m_i \in \mathbb{Z} \}$. By Lemma \ref{fundamental domains}, $M_0 \times P_r$ is the union of $\kappa_0 r^k$ fundamental domains of $\psi$. Thus \begin{equation}\label{equation1} \begin{aligned} E(f) &= \frac{1}{\kappa_1 \kappa_0 r^k} E(\hat{F}|_{M_0 \times P_r})\\ & \geq \frac{1}{\kappa_1 \kappa_0 r^k} \int_{M_0} E(\hat{F} \circ \iota_{\hat{q}}|_{P_r}) \,d\mathrm{vol}_{M_0} \end{aligned} \end{equation} for each $\hat{q} \in M_0$. For such $\hat{q}$ and $w \in S^+ \partial P_r$, define $\varsigma_{\hat{q},w} : [0,l(w)] \to N$ by $\varsigma_{\hat{q},w} = \hat{F} \circ \iota_{\hat{q}} \circ \gamma_w$, where $\gamma_w$ is the geodesic in $\mathbb{R}^k$ with initial vector $w$. 
By approximating $P_r$ from within by a sequence of smooth submanifolds with boundary that round off its edges, applying the condition for equality in Lemma \ref{energy as a santalo integral} to each, and taking a limit, it follows that \[ E(\hat{F} \circ \iota_{\hat{q}}|_{P_r}) = \frac{k}{2c_{k-1}} \int_{S^+ \partial P_r} E(\varsigma_{\hat{q},w})g(w,\nu)\,d\mathrm{vol}_{S^+ \partial P_r}\textrm{.} \] Therefore, \begin{equation}\label{equation2} E(f) \geq \frac{1}{\kappa_1 \kappa_0 r^k} \int_{M_0} \frac{k}{2c_{k-1}} \int_{S^+ \partial P_r} E(\varsigma_{\hat{q},w}) g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_r}\textrm{,} \end{equation} with equality if and only if $\hat{F}$ is constant along each $M_0$-fiber. By Lemma \ref{asymptotic inequalities}(a), there exists $D \geq 0$ such that \[ \frac{L^2(\varsigma_{\hat{q},w})}{2l(w)} \geq \frac{1}{2}l(w) \|\hat{T}(w)\|_\infty^2 - D\|\hat{T}(w)\|_\infty \] for all $\hat{q}$ and $w$. This and inequality \eqref{cauchy--schwarz} imply that \begin{equation}\label{equation3} E(\varsigma_{\hat{q},w}) \geq \frac{1}{2} l(w) \|\hat{T}(w)\|_\infty^2 - D\|\hat{T}(w)\|_\infty\textrm{.} \end{equation} Combining \eqref{equation2} and \eqref{equation3} yields \begin{equation}\label{equation4} \begin{aligned} E(f) \geq \frac{\mathrm{vol}(M_0)}{\kappa_1 \kappa_0 r^k} \frac{k}{2c_{k-1}} \int_{S^+ \partial P_r} &\frac{1}{2}l(w) \|\hat{T}(w)\|_\infty^2 g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_r}\\ &- \frac{D\|\hat{T}\|_\infty \mathrm{vol}(M_0)}{\kappa_1 \kappa_0} \frac{k}{2c_{k-1}} \frac{\mathrm{vol}(S^+ \partial P_r)}{r^k}\textrm{.} \end{aligned} \end{equation} Since $\hat{T}$ is linear, $E(\hat{T}|_{P_r}) = \mathrm{vol}(P_r)e_{\hat{T}}$. By Lemma \ref{energy as a santalo integral}, rounding off the edges of $P_r$ as in the proof of \eqref{equation2}, one has that \[ e_{\hat{T}} = \frac{E(\hat{T}|_{P_r})}{\mathrm{vol}(P_r)} = \frac{k}{2c_{k-1}} \frac{1}{\mathrm{vol}(P_r)} \int_{S^+ \partial P_r} \frac{1}{2}l(w) \|\hat{T}(w)\|_\infty^2 g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_r}\textrm{.} \] Since $e_T = e_{\hat{T}}$, \begin{equation}\label{equation5} \mathrm{vol}(M)e_T = \frac{\mathrm{vol}(M_0)}{\kappa_1 \kappa_0 r^k} \frac{k}{2c_{k-1}} \int_{S^+ \partial P_r} \frac{1}{2}l(w) \|\hat{T}(w)\|_\infty^2 g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_r}\textrm{.} \end{equation} An elementary argument shows that $\mathrm{vol}(S^+ \partial P_r)/r^k \to 0$ as $r \to \infty$. The inequality $E(f) \geq \mathrm{vol}(M)e_T$ follows from this, \eqref{equation4}, and \eqref{equation5} by letting $r \to \infty$. \noindent \textbf{(a)} Suppose $E(f) = \mathrm{vol}(M)e_T$. Fix $r \in \mathbb{N}$. For each $R = (r_1,\ldots,r_k) \in \{ 0,\ldots,r-1 \}^k$, let $Q_R$ be the translation of $P_1$ by $\sum_{i=1}^k r_i v_i$. By Lemma \ref{energy as a santalo integral}, \[ \int_{S^+ \partial P_r} E(\varsigma_{\hat{q},w}) g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_r} = \sum_R E(\hat{F} \circ \iota_{\hat{q}}|_{Q_R}) \] for each $\hat{q} \in M_0$. By symmetry, \[ \frac{1}{r^k} \int_{M_0} \,\int_{S^+ \partial P_r} E(\varsigma_{\hat{q},w}) g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_r} = \int_{M_0} E(\hat{F} \circ \iota_{\hat{q}}|_{P_1}) \,d\mathrm{vol}_{M_0} \] for each $r \in \mathbb{N}$. 
Combining this with the arguments used to prove \eqref{equation1}-\eqref{equation5} yields \begin{align*} \mathrm{vol}(M)e_T &= E(f)\\ &\geq \frac{1}{\kappa_1 \kappa_0} \frac{k}{2c_{k-1}} \int_{M_0} \,\int_{S^+ \partial P_1} E(\varsigma_{\hat{q},w}) g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_1} \,d\mathrm{vol}_{M_0}\\ &= \liminf_{r \to \infty} \frac{1}{\kappa_1 \kappa_0} \frac{k}{2c_{k-1}} \frac{1}{r^k} \int_{M_0} \, \int_{S^+ \partial P_r} E(\varsigma_{\hat{q},w}) g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_r} \,d\mathrm{vol}_{M_0}\\ &\geq \mathrm{vol}(M)e_T \textrm{.} \end{align*} Thus all of the above are equalities. In particular, equality holds in \eqref{equation2} for $r = 1$, which means $\hat{F}$ is constant along each $M_0$-fiber. For each $\Omega = (\omega_1,\ldots,\omega_k) \in \{0,1\}^k$ and $r \in \mathbb{N}$, denote by $\mu_\Omega : S\mathbb{R}^k \to \{0,1\}$ the indicator function of $S(rQ_\Omega)$, $l_\Omega : S\mathbb{R}^k \to [0,\infty)$ the function that takes each $w$ to the length of the line segment $rQ_\Omega \cap \gamma_w(\mathbb{R})$, and $L_\Omega : S\mathbb{R}^k \to [0,\infty)$ the function that takes each $w$ to $L_\Omega(w) = \int_{-\infty}^\infty (\mu_\Omega \circ \gamma_w') \|\varsigma_{\hat{q},w}'\|_\infty \,dt$. Then \begin{align*} \int_{S^+ \partial P_r} \frac{L^2(\varsigma_{\hat{q},w})}{2l(w)} g(w,\nu) d\mathrm{vol}_{S^+ \partial P_r} &= \int_{S P_r} \frac{L_0^2(w)}{2l_0^2(w)} \,d\mathrm{Liou}_{S P_r}\\ &= \frac{1}{2^k} \int_{S P_{2r}} \big[ \sum_{\Omega \in \{ 0,1 \}^k} \mu_\Omega(w) \frac{L_\Omega^2(w)}{2l_\Omega^2(w)} \big] \,d\mathrm{Liou}_{S P_{2r}}\\ &= \frac{1}{2^k} \int_{S^+ \partial P_{2r}} \big[ \sum_{l_\Omega \neq 0} \frac{L_\Omega^2(w)}{2l_\Omega(w)} \big] g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_{2r}}\\ &\geq \frac{1}{2^k} \int_{S^+ \partial P_{2r}} \frac{L^2(\varsigma_{\hat{q},w})}{2l(w)} g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_{2r}}\textrm{,} \end{align*} where the first and third equalities follow from Santal\'{o}'s formula and the inequality from Lemma \ref{elementary inequality}. Therefore, \begin{align*} \mathrm{vol}(M)e_T &= \frac{\mathrm{vol}(M_0)}{\kappa_1 \kappa_0} \frac{k}{2c_{k-1}} \int_{S^+ \partial P_1} E(\varsigma_{\hat{q},w}) g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_1}\\ &\geq \frac{\mathrm{vol}(M_0)}{\kappa_1 \kappa_0} \frac{k}{2c_{k-1}} \int_{S^+ \partial P_1} \frac{L^2(\varsigma_{\hat{q},w})}{2l(w)} g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_1}\\ &\geq \liminf_{r \to \infty} \frac{\mathrm{vol}(M_0)}{\kappa_1 \kappa_0} \frac{k}{2c_{k-1}} \frac{1}{2^{kr}} \int_{S^+ \partial P_{2^r}} \frac{L^2(\varsigma_{\hat{q},w})}{2l(w)} g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_{2^r}}\\ &\geq \liminf_{r \to \infty} \frac{k\mathrm{vol}(M_0)}{2^{kr+1}\kappa_1 \kappa_0 c_{k-1}} \int_{S^+ \partial P_{2^r}} \big[ \frac{1}{2}l(w)\|\hat{T}(w)\|_\infty^2 - D\|\hat{T}(w)\|_\infty \big] g(w,\nu) \,d\mathrm{vol}_{S^+ \partial P_{2^r}}\\ &= \mathrm{vol}(M)e_T \textrm{.} \end{align*} By the condition for equality in \eqref{cauchy--schwarz}, each $\varsigma_{\hat{q},w}$ must have constant speed. Lemma \ref{asymptotic inequalities}(a) implies that each $\varsigma_{\hat{q},w}$ has speed at least $\|\hat{T}(w)\|_\infty$. This, the above equalities, and \eqref{equation5} imply that each $\varsigma_{\hat{q},w}$ has speed $\|\hat{T}(w)\|_\infty$. 
It follows from the argument used to prove \eqref{inequality2}, and in particular Lemma \ref{triangle inequality}, that $\varsigma_{\hat{q},w}$ is a geodesic whenever $w \in S^+ \partial P_r$ satisfies $\gamma_w(s) - \gamma_w(0) \in \Gamma_1$ for some $s > 0$. The density of these rational vectors implies that each $\varsigma_{\hat{q},w}$ is a geodesic. By Lemma \ref{totally geodesic from nnrc into no conjugate points}, $f$ is totally geodesic. \noindent \textbf{(b)} If $E(f) = \frac{1}{c_{k-1}} \mathrm{vol}(M)\ell_T^2$, then, by part (a) and Lemma \ref{totally geodesic from nnrc into no conjugate points}, $\tilde{f}$ is constant along each $M_1$-fiber. For each $p \in M_1$, $\tilde{f} \circ \iota_p$ has energy $\frac{1}{c_{k-1}} \mathrm{vol}(M)\ell_T^2$ and, by Theorem \ref{length and intersection}, intersection $\frac{2c_{k-1}}{k}\mathrm{vol}(\mathbb{T}^k)\ell_T^2$. By Theorem \ref{croke--fathi}, $\tilde{f} \circ \iota_p$ is a homothety. Conversely, if $\tilde{f}$ is constant along each $M_1$-fiber and a homothety along each $\mathbb{T}^k$-fiber, then it follows from Lemma \ref{asymptotic inequalities}(b) that $E(f) = \frac{1}{c_{k-1}} \mathrm{vol}(M)\ell_T^2$. \noindent \textbf{(c)} If $E(f) = \frac{n c_n^2 c_{k-1}}{k c_k^2 c_{n-1}^2} \mathrm{vol}(M)\ell_T^2$, then $E(f) = \frac{1}{c_{k-1}} \mathrm{vol}(M)\ell_T^2$. By part (b), $\tilde{f}$ is constant along each $M_1$-fiber and a homothety along each $\mathbb{T}^k$-fiber. If $\ell_T = 0$, then $E(f) = 0$, so $f$ is constant. If $\ell_T > 0$, then $n c_n^2 c_{k-1}^2 = k c_k^2 c_{n-1}^2$. Lemma \ref{volumes of spheres} implies that $n = k$, so $M_1$ is a point. Thus $\tilde{f}$ and, consequently, $f$ are homotheties. Conversely, suppose $\tilde{f}$ is a homothety. If $f$ is constant, then it's clear that $E(f) = \frac{n c_n^2 c_{k-1}}{k c_k^2 c_{n-1}^2} \mathrm{vol}(M)\ell_T^2$. If $M_1$ is a point, then $k = n$, and, by part (b), $E(f) = \frac{n c_n^2 c_{k-1}}{k c_k^2 c_{n-1}^2} \mathrm{vol}(M)\ell_T^2$. \end{proof} \begin{lemma}\label{semi-Riemannian norm} Suppose that every geodesic in $\hat{N}$ is minimal. If $[F]$ contains a totally geodesic map, then $\|\cdot\|_\infty$ is induced by a semi-inner product. \end{lemma} \begin{proof} One may suppose that the map $f$ is totally geodesic. By Lemma \ref{totally geodesic into no conjugate points}, $\hat{F}$ is constant along each $M_0$-fiber and totally geodesic along each $\mathbb{R}^k$-fiber. Therefore, $\hat{F}(M_0 \times \mathbb{R}^k)$ is a totally geodesic and flat $m$-dimensional submanifold of $\hat{N}$. Since each geodesic in $\hat{N}$ is minimal, it follows from Lemma \ref{asymptotic inequalities}(b) that $\|\cdot\|_\infty$ is induced by the pull-back of the inner product on $\mathbb{R}^m$ under the linear map $\hat{T}$. \end{proof} \begin{remark} If $N$ is compact and the induced homomorphism of $[F]$ is non-trivial on $\pi_1(M)$, then, by modifying the proof of Theorem 8.3.19 in \cite{BuragoBuragoIvanov2001}, one may show that $\|\cdot\|_\infty$ is a norm. \end{remark} \begin{remark} Lemma \ref{volumes of spheres}, and consequently the inequality in Remark \ref{gurland's inequality}, may be obtained without using the results of Bustoz--Ismail or Gurland by applying Theorems \ref{croke--fathi} and \ref{main theorem}{b} and Theorem \ref{length and intersection} to the projection $\mathbb{T}^k \times \mathbb{T}^l \to \mathbb{T}^k$. \end{remark} \subsection{Targets with no conjugate points} The aim of this subsection is to prove Theorem \ref{main theorem for no conjugate points}. 
Note that Theorem \ref{nnrc theorem} is an immediate consequence of Theorems \ref{cheeger--gromoll splitting theorem} and \ref{main theorem for no conjugate points}. Suppose that $\pi_1(N)$ is torsion-free. Since $M_0$ is compact, the covering map $\chi : M_0 \to M_1$ in diagram \eqref{diagram2} must be finite. It follows that $\pi_1(M_1)$ is finite and, consequently, $F \circ \psi_1$ acts trivially on $\pi_1(M_1)$. Moreover, $H = f_*(\pi_1(M))$, being a torsion-free group containing $G \cong \mathbb{Z}^m$ as a finite-index subgroup, must be a Bieberbach group \cite{Charlap1986}. It follows that there exist a compact flat manifold $K$ and a finite covering map $\phi_1 : \mathbb{T}^m \to K$. Endow $K$ with the flat semi-Finsler metric $\|\cdot\|_\infty$, so that $\phi_1$ is a local isometry. Since the universal cover of $K$ is contractible, there exists a map $M \to K$ with surjective induced homomorphism. Applying the classical theorem of Eells--Sampson \cite{EellsSampson1964}, or the results of \cite{Dibble2018c}, one finds that this map is homotopic to a totally geodesic surjection $S : M \to K$. By construction, $\tilde{S}$ is a lift of $S$ to $M_1 \times \mathbb{T}^k \to \mathbb{T}^m$. It is clear that $E(S) = \mathrm{vol}(M)e_T$. Applying the coarea formula as in the proof of Theorem \ref{length and intersection}, one may show that \[ L^2(S) = \frac{n c_n^2 c_{k-1}}{k c_k^2 c_{n-1}} \mathrm{vol}(M)^2 \ell_T^2\textrm{.} \] Thus the inequalities in Theorem \ref{main theorem for no conjugate points} are equivalent to those in Theorem \ref{main theorem}. When $N$ has no conjugate points, $\pi_1(N)$ is torsion-free, and the above hold. It remains to prove that, when $N$ has no conjugate points, the converse of Theorem \ref{main theorem}(a) and the simpler version of (c) in Theorem \ref{main theorem for no conjugate points} hold. Suppose in this case that $f$ is totally geodesic. By Lemma \ref{totally geodesic from nnrc into no conjugate points}, $\tilde{f}$ is constant along each $M_1$-fiber and totally geodesic along each $\mathbb{T}^k$-fiber. For each $\tilde{q} \in M_1$ and $w \in S\mathbb{T}^k$, the geodesic $t \mapsto \tilde{f}(\tilde{q},\exp(tw))$ has minimal length within its endpoint-fixed homotopy class and, by Lemma \ref{asymptotic inequalities}(b), speed $\|dT(w)\|_\infty$. A direct computation, simplified by Lemma \ref{semi-Riemannian norm}, shows that $E(f) = E(\tilde{f})/\kappa_1 = E(S)$, which proves the converse to (a). If $f$ is a homothety, then $f$ is totally geodesic and, by Lemma \ref{totally geodesic from nnrc into no conjugate points}, $\tilde{f}$ is constant along each $M_1$-fiber. If $f$ is non-constant, then, since there exists $a \geq 0$ such that $f^*(h) = ah$, $M_1$ must be a point. Theorem \ref{main theorem for no conjugate points}(c) now follows from Theorem \ref{main theorem}(c). \end{document}
\begin{document} \begin{frontmatter} \title{An upwind method for genuine weakly hyperbolic systems} \author[1]{Naveen Kumar Garg} \ead{[email protected], [email protected]} \author[2]{Michael Junk} \ead{[email protected]} \author[3]{S.V. Raghurama Rao} \ead{[email protected]} \author[4]{M. Sekhar} \ead{[email protected]} \address[1]{Research Scholar, IISc Mathematical Initiative (IMI), Indian Institute of Science, Bangalore, India} \address[2]{Professor, Fachbereich Mathematik und Statistik, Universit\"at Konstanz, Germany} \address[3]{Associate Professor, Department of Aerospace Engineering, Indian Institute of Science, Bangalore, India} \address[4]{Professor, Department of Civil Engineering, Indian Institute of Science, Bangalore, India} \begin{abstract} In this article, we develop an upwind scheme based on Flux Difference Splitting (FDS) that uses Jordan canonical forms to simulate genuine weakly hyperbolic systems. The theory of Jordan canonical forms is used to complete the defective set of linearly independent eigenvectors. The proposed FDS-J scheme captures the various shocks accurately. \end{abstract} \begin{keyword} {Weakly hyperbolic systems} \sep {Jordan canonical forms} \sep{Upwind scheme} \end{keyword} \end{frontmatter} \section{Introduction} Central and upwind discretization schemes are two popular categories of numerical methods for simulating hyperbolic conservation laws. A system is said to be hyperbolic if its Jacobian matrix has all real eigenvalues together with a complete set of linearly independent eigenvectors. Upwind schemes based on Flux Difference Splitting (FDS) are usually more accurate than other schemes. Two popular schemes in this category are the approximate Riemann solvers of Roe \cite{Roe} and Osher \cite{Osher}, both of which depend heavily on the eigenvector structure. Their application is therefore limited to systems which possess a complete set of linearly independent eigenvectors. Several other numerical schemes also depend strongly on the eigenstructure and thus share the same difficulty. Recently, an attempt was made \cite{Smith_et_al} to extend the Roe scheme to weakly hyperbolic systems by adding a perturbation parameter $\epsilon$ that makes such systems strictly hyperbolic. In this article we develop an upwind method based on the concept of flux difference splitting together with Jordan forms, which we accordingly name the FDS-J scheme, to simulate genuine weakly hyperbolic systems. We use the theory of Jordan canonical forms to complete the defective set of linearly independent (LI) eigenvectors. The pressureless gas dynamics system, which is weakly hyperbolic, is considered first; it is known to produce delta shocks in the density variable. Next, we consider the {\em modified Burgers' system} as given in \cite{Capdeville}; for this system too, delta shocks occur exactly at the locations where normal shocks occur in the primary variables. Similarly, other types of discontinuities, namely $\delta^{\prime}$-shocks and $\delta^{\prime\prime}$-shocks, are observed if we further extend the modified Burgers' system as given in \cite{Shelkovich} and \cite{Joseph}. The FDS-J solver captures these shocks accurately. A comparison is made with the simple Local Lax-Friedrichs (LLF) method \cite{LLF}. The contribution of the generalized eigenvectors does not appear explicitly in the final FDS-J scheme for the genuine weakly hyperbolic systems considered here.
This is because, for each system considered, all eigenvalues are equal, with algebraic multiplicity (AM) greater than one, and give rise to a single Jordan block. \section{1-D Pressureless system} Consider the one-dimensional pressureless gas dynamics system \begin{equation}\label{1D_Pressureless_system} \frac{\partial \boldsymbol{U}}{\partial t} + \frac{\partial \boldsymbol{F} \left( \boldsymbol{U} \right)} {\partial x} \ = \ 0 \end{equation} where $\boldsymbol{U}$ is the conserved variable vector and $\boldsymbol{F} \left( \boldsymbol{U} \right)$ is the flux vector, defined by \begin{equation*} \boldsymbol{U} = \begin{bmatrix} \rho \\[0.3em] \rho u \end{bmatrix} \ \mbox{and} \ \boldsymbol{F}(\boldsymbol{U}) = \begin{bmatrix} \rho u \\[0.3em] \rho u^{2} \end{bmatrix} \end{equation*} This system can also be written in quasilinear form as follows. \begin{equation} \frac{\partial \boldsymbol{U}}{\partial t} + \boldsymbol{A} \frac{\partial \boldsymbol{U}} {\partial x} \ = \ 0 \ \end{equation} Here $\boldsymbol{A}$ is the Jacobian matrix of the pressureless system, given by \begin{equation*} \boldsymbol{A} = \begin{bmatrix} \ 0 & 1 \\[0.3em] \ -u^{2} & 2 u \end{bmatrix} \ \end{equation*} The eigenvalues of the Jacobian matrix $\boldsymbol{A}$ are $\lambda_{1} = \lambda_{2} = u$, so the algebraic multiplicity (AM) of this eigenvalue is 2, and we have to examine the eigenvector space to see whether $\boldsymbol{A}$ has a complete set of linearly independent eigenvectors. The analysis of the matrix $\boldsymbol{A}$ shows that the given system is weakly hyperbolic, as there is no complete set of linearly independent eigenvectors; the only eigenvector is \begin{equation*} \boldsymbol{R}_{1} = \begin{bmatrix} \ 1 \\[0.3em] \ u \end{bmatrix} \end{equation*} Since the given system does not have a complete set of linearly independent (LI) eigenvectors, it is difficult to apply any upwind scheme based on either the Flux Vector Splitting (FVS) method or the Flux Difference Splitting (FDS) method. However, from the theory of Jordan canonical forms we can still recover a complete set of LI generalized eigenvectors. \section{Jordan canonical forms and FDS for Pressureless Gas Dynamics}\label{Jordan_forms} Every square matrix is similar to a triangular matrix with all eigenvalues on its main diagonal. A square matrix is similar to a diagonal matrix if and only if it has a complete set of LI eigenvectors. However, every square matrix is similar to a {\em Jordan} matrix. An $n\times n$ matrix $\boldsymbol{J}$ with a repeated eigenvalue $\lambda$ is called a {\em Jordan} matrix of order $n$ if each of its diagonal entries is $\lambda$, each entry on the {\em superdiagonal} is $1$ and every other entry is zero. Here we provide a brief procedure for reducing a given square matrix to a Jordan matrix. \subsection{Re-visit of typical cases} Let $\boldsymbol{A}$ be an $n \times n$ matrix with $n$ real eigenvalues $\lambda_{1}, \lambda_{2}, \lambda_{3}, \cdots, \lambda_{n}$. The following typical cases may arise: Case $1$: All $\lambda_{i}$, where $1\leq i \leq n$, are distinct. In this case the matrix $\boldsymbol{A}$ has a complete set of LI eigenvectors and hence is similar to a diagonal matrix. Case $2$: Some of the $\lambda_{i}$ are equal, say $\lambda_{1} \ = \ \lambda_{2} \ = \ \lambda_{3} \ = \ \cdots \ = \ \lambda_{p} \ = \ \lambda$, where $p$ is a natural number $\leq n$. Then one of the following sub-cases occurs.
Sub-case $1$: If the algebraic multiplicity (AM) of the eigenvalue $\lambda$, which is $p$ in the assumed case, is equal to its geometric multiplicity (GM), and moreover if this is true for every subset of equal eigenvalues, then the square matrix $\boldsymbol{A}$ is again similar to a diagonal matrix. Sub-case $2$: Now consider the case in which the GM is strictly less than the AM; in that case the set of LI eigenvectors is not complete. Here we can invoke the theory of Jordan canonical forms to recover a full LI set of generalized eigenvectors and to make the given square matrix similar to a Jordan matrix, which is not much different from a diagonal matrix.\\ \textbf{Definition:} An $n \times n$ matrix is called a defective matrix if it does not possess a full set of linearly independent eigenvectors. \\ \textbf{Procedure to find generalized eigenvectors:} In this article we mainly focus on systems which belong to the category discussed in Sub-case $2$. If all eigenvalues of a given defective matrix are equal, and further if there is only a single Jordan block corresponding to the given matrix, then the following steps recover a full set of LI generalized eigenvectors: \\ $(i)$ For an eigenvalue $\lambda$, compute the ranks of the matrices $\boldsymbol{A}-\lambda \boldsymbol{I}$, \ \ $(\boldsymbol{A}-\lambda \boldsymbol{I})^2$, $\cdots$, and find the least positive integer $s$ such that $rank(\boldsymbol{A}-\lambda \boldsymbol{I})^s \ = \ rank(\boldsymbol{A}-\lambda \boldsymbol{I})^{s+1}$. There is a single Jordan block only if $s$ equals the dimension of the given matrix. \\ $(ii)$ Once $s$ is equal to the dimension of the defective matrix, the generalized eigenvectors can be computed from the system of equations $\boldsymbol{A}\boldsymbol{P} = \boldsymbol{P}\boldsymbol{J}$, where \begin{equation*} {\boldsymbol J(\lambda)} = \begin{bmatrix} \ \lambda & 1 & \\[0.3em] \ & \ddots & \ddots & \\[0.3em] \ & & \ddots & 1 \\[0.3em] \ & & & \lambda \end{bmatrix}_{s\times s} \ \end{equation*} \\ Let $\boldsymbol{P} = [\boldsymbol{X_{1}}, \boldsymbol{X_{2}}, \boldsymbol{X_{3}}, ..., \boldsymbol{X_{s}}]$, where the $\boldsymbol{X}_{i}$ are column vectors to be evaluated. Then, \begin{equation} \boldsymbol{A}\big[\boldsymbol{X}_1, \boldsymbol{X}_2, \boldsymbol{X}_3, \ldots, \boldsymbol{X}_{s}\big] \ = \ \big[\boldsymbol{X}_1, \boldsymbol{X}_2, \boldsymbol{X}_3, \ldots, \boldsymbol{X}_{s}\big] \begin{bmatrix} \ \lambda & 1 & \\[0.3em] \ & \ddots & \ddots & \\[0.3em] \ & & \ddots & 1 \\[0.3em] \ & & & \lambda \end{bmatrix}_{s\times s} \ \end{equation} gives \begin{align}\label{formula_to_find_gen_eig} \begin{split} \boldsymbol{A}\boldsymbol{X}_{1} \ &= \ \lambda \boldsymbol{X}_{1} \\ \boldsymbol{A}\boldsymbol{X}_{2} \ &= \ \lambda \boldsymbol{X}_{2} \ + \ \boldsymbol{X}_{1} \\ \boldsymbol{A}\boldsymbol{X}_{3} \ &= \ \lambda \boldsymbol{X}_{3} \ + \ \boldsymbol{X}_{2} \\ \vdots \\ \boldsymbol{A}\boldsymbol{X}_{s} \ &= \ \lambda \boldsymbol{X}_{s} \ + \ \boldsymbol{X}_{s-1} \end{split} \end{align} Now we can compute all true and generalized eigenvectors from the system of relations (\ref{formula_to_find_gen_eig}). For the present case, $u$ is the repeated eigenvalue with algebraic multiplicity (AM) equal to 2, and on computing the ranks of the matrices $\boldsymbol{A}- u\boldsymbol{I}$, \ \ $(\boldsymbol{A}- u\boldsymbol{I})^2$ and $(\boldsymbol{A}- u\boldsymbol{I})^{3}$, we find $rank(\boldsymbol{A}- u \boldsymbol{I})^2 \ = \ 0 \ = \ rank(\boldsymbol{A}- u \boldsymbol{I})^{3}$.
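Steps $(i)$ and $(ii)$ above can be carried out symbolically for the pressureless Jacobian. The following is a minimal sketch for this purpose (it assumes the Python package sympy and is not part of the scheme itself):
\begin{verbatim}
# Symbolic check of the Jordan structure of the pressureless Jacobian
# (a sketch assuming sympy; not part of the numerical scheme).
import sympy as sp

u, a, b = sp.symbols('u a b', real=True)
A = sp.Matrix([[0, 1], [-u**2, 2*u]])    # Jacobian of the pressureless system
N = A - u*sp.eye(2)                      # A - lambda*I with lambda = u

# step (i): ranks of successive powers; they stabilize at s = 2 = dim(A)
print(N.rank(), (N**2).rank(), (N**3).rank())   # 1 0 0

# step (ii): solve the chain relation (A - u*I) X2 = X1 with X1 = (1, u)^T
X1 = sp.Matrix([1, u])
X2 = sp.Matrix([a, b])
print(sp.solve(list(N*X2 - X1), [a, b]))  # b = 1 + u*a, i.e. X2 = (a, 1 + u*a)^T
\end{verbatim}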
Hence $s = 2$ in this case, so there is one Jordan block of order $2$. On expanding the relation $\boldsymbol{A}\boldsymbol{P} = \boldsymbol{P}\boldsymbol{J}$, we get \begin{equation}\label{relation_to_find_gen_eigenvectors} \boldsymbol{A}[\boldsymbol{X}_{1} \ \ \boldsymbol{X}_{2}] \ = \ [\boldsymbol{X}_{1} \ \ \boldsymbol{X}_{2}] [\boldsymbol{J}_{1} \ \ \boldsymbol{J}_{2}] \end{equation} where the $\boldsymbol{X}_{i}$ are linearly independent $2 \times 1$ column vectors. Similarly, the $\boldsymbol{J}_{i}$ are the column vectors which form the Jordan matrix $\boldsymbol{J}$ and are given by \begin{equation} \boldsymbol{J}_{1} = \begin{bmatrix} \ \lambda \\[0.3em] \ 0 \end{bmatrix}_{2 \times 1}, \ \boldsymbol{J}_{2} = \begin{bmatrix} \ 1 \\[0.3em] \ \lambda \end{bmatrix}_{2 \times 1} \ \end{equation} Writing (\ref{relation_to_find_gen_eigenvectors}) out column by column, we get the following relations for the eigenvectors: \begin{align}\label{def_gen_eigenvectors_2} \begin{split} \boldsymbol{A}\boldsymbol{X}_{1} \ &= \ \lambda \boldsymbol{X}_{1} \\ \boldsymbol{A}\boldsymbol{X}_{2} \ &= \ \lambda \boldsymbol{X}_{2} \ + \ \boldsymbol{X}_{1} \end{split} \end{align} The first relation of (\ref{def_gen_eigenvectors_2}) gives $\boldsymbol{X}_{1} = \boldsymbol{R}_{1}$, and on using this value in the second relation of (\ref{def_gen_eigenvectors_2}), we get \begin{equation*} \boldsymbol{X}_{2} \ = \ \boldsymbol{R}_{2} = \begin{bmatrix} \ x_{1} \\[0.3em] \ 1 + ux_{1} \end{bmatrix} \end{equation*} which is a generalized eigenvector of the pressureless gas dynamics system for any $x_{1} \in {\rm I\!R}$. \subsection{Formulation of an FDS scheme for the Pressureless System} System (\ref{1D_Pressureless_system}) can be written in quasi-linear form as \begin{equation}\label{Quasi_form_pressureless_system} \frac{\partial \boldsymbol{U}}{\partial t} + \boldsymbol{A} \frac{\partial \boldsymbol{U}} {\partial x} \ = \ 0 \end{equation} Now, because of the non-linearity of the Jacobian matrix $\boldsymbol{A}$, it is difficult to solve the above system directly. However, locally, inside each cell, $\boldsymbol{A}$ can be linearized to form a constant matrix $\boldsymbol{\bar{A}}$, which is a function of the left and right state variables $\boldsymbol{U}_L$ and $\boldsymbol{U}_R$, {\em i.e.}, $\boldsymbol{\bar{A}} \ = \ \boldsymbol{\bar{A}}\left(\boldsymbol{U}_L,\boldsymbol{U}_R\right)$. Thus, (\ref{Quasi_form_pressureless_system}) becomes \begin{equation}\label{Quasi-linearised_eqn} \frac{\partial \boldsymbol{U}}{\partial t} + \boldsymbol{\bar{A}} \frac{\partial \boldsymbol{U}} {\partial x} \ = \ 0 \end{equation} On comparing (\ref{1D_Pressureless_system}) and (\ref{Quasi-linearised_eqn}), we get \begin{equation} d\boldsymbol{F} = \boldsymbol{\bar{A}}d\boldsymbol{U} \end{equation} The finite difference analogue of the above differential relation is \begin{equation}\label{conservation1_for_pressureless_system} \bigtriangleup{\boldsymbol{F}} = \boldsymbol{\bar{A}}\bigtriangleup{\boldsymbol{U}} \end{equation} where \begin{align} \begin{split} \bigtriangleup{\boldsymbol{F}} \ &= \ \boldsymbol{F}_R - \boldsymbol{F}_L \\ \bigtriangleup{\boldsymbol{U}} \ &= \ \boldsymbol{U}_R - \boldsymbol{U}_L \end{split} \end{align} In the above equations, the subscripts $R$ and $L$ represent the right and left states respectively. Relation (\ref{conservation1_for_pressureless_system}) ensures the conservation property.
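Although $\boldsymbol{\bar{A}}$ is defective, the eigenvector $\boldsymbol{R}_{1}$ and the generalized eigenvector $\boldsymbol{R}_{2}$, evaluated at the linearized state, still span ${\rm I\!R}^{2}$, so that any jump $\bigtriangleup{\boldsymbol{U}}$ can be expanded in them. A minimal numerical illustration of this fact is given below (it assumes the Python package numpy; the values of $\bar{u}$, $x_{1}$ and the jump are placeholders only):
\begin{verbatim}
# Illustration only (assumes numpy): {R1, R2} is a basis of R^2 for any
# linearized velocity u_bar, so Delta U = alpha1*R1 + alpha2*R2 is solvable.
import numpy as np

u_bar, x1 = 1.2, 0.0                      # placeholder linearized state
R1 = np.array([1.0, u_bar])
R2 = np.array([x1, 1.0 + u_bar * x1])
P = np.column_stack([R1, R2])
print(np.linalg.det(P))                   # = 1.0, independent of u_bar and x1

dU = np.array([0.3, -0.1])                # a sample jump U_R - U_L
alpha = np.linalg.solve(P, dU)            # coefficients alpha1, alpha2
print(np.allclose(alpha[0] * R1 + alpha[1] * R2, dU))   # True
\end{verbatim}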
As already explained, the present system is weakly hyperbolic, but on the basis of the above procedure we can construct a basis of true and generalized eigenvectors in which to expand the column vector $\bigtriangleup{\boldsymbol{U}}$, i.e., \begin{equation} \bigtriangleup{\boldsymbol{U}} \ = \ \sum_{i = 1}^{2} \bar\alpha_{i}\boldsymbol{\bar{R}}_{i} \end{equation} where the $\bar{\alpha}_{i}$ are the coefficients attached to the two LI eigenvectors of the given system. On using the above equation in (\ref{conservation1_for_pressureless_system}), we get \begin{equation}\label{conservation2_for_pressureless_system} \bigtriangleup{\boldsymbol{F}} = \boldsymbol{\bar{A}}\sum_{i = 1}^{2} \bar\alpha_{i}\boldsymbol{\bar{R}}_{i} \end{equation} For weakly hyperbolic systems, $\boldsymbol{\bar{A}}$ is non-diagonalizable, so that \begin{equation*} \boldsymbol{\bar{A}} \boldsymbol{\bar{R}}_{i} \ \neq \ \bar \lambda_{i} \boldsymbol{\bar{R}}_{i} \ \textrm{ for some } i \ \end{equation*} We now have $\boldsymbol{\bar{R}}_{2}$ as a generalized eigenvector, with \begin{equation*} \boldsymbol{\bar{A}} \boldsymbol{\bar{R}}_{1} \ = \ \bar \lambda_{1} \boldsymbol{\bar{R}}_{1} \ ~~\mbox{and}~~ \boldsymbol{\bar{A}} \boldsymbol{\bar{R}}_{2} \ = \ \bar \lambda_{2} \boldsymbol{\bar{R}}_{2} \ + \ \boldsymbol{\bar{R}}_{1} \end{equation*} On using the above relations in (\ref{conservation2_for_pressureless_system}), we get \begin{equation*} \bigtriangleup{\boldsymbol{F}} \ = \ \bar\alpha_{1} \bar \lambda_{1} \boldsymbol{\bar{R}}_{1} \ + \ \bar\alpha_{2} \bar \lambda_{2} \boldsymbol{\bar{R}}_{2} \ + \ \bar\alpha_{2} \boldsymbol{\bar{R}}_{1} \end{equation*} We now use the standard Courant splitting of the eigenvalues, $\bar \lambda^{\pm}_{i} = \tfrac{1}{2}\left(\bar \lambda_{i} \pm |\bar \lambda_{i}|\right)$, so that \begin{equation*} \bar \lambda^{+}_{i} - \bar \lambda^{-}_{i} = |\bar \lambda_{i}| \ \end{equation*} After splitting each eigenvalue into a positive and a negative part, $\bigtriangleup{\boldsymbol{F}}^{+}$ and $\bigtriangleup{\boldsymbol{F}}^{-}$ can be written as \begin{equation} \label{delta_F_positive} \bigtriangleup{\boldsymbol{F}}^{+} \ = \ \bar\alpha_{1} \bar \lambda^{+}_{1} \boldsymbol{\bar{R}}_{1} \ + \ \bar\alpha_{2} \bar \lambda^{+}_{2} \boldsymbol{\bar{R}}_{2} \ + \ \bar\alpha_{2} \boldsymbol{\bar{R}}_{1} \end{equation} and \begin{equation} \label{delta_F_negative} \bigtriangleup{\boldsymbol{F}}^{-} \ = \ \bar\alpha_{1} \bar \lambda^{-}_{1} \boldsymbol{\bar{R}}_{1} \ + \ \bar\alpha_{2} \bar \lambda^{-}_{2} \boldsymbol{\bar{R}}_{2} \ + \ \bar\alpha_{2} \boldsymbol{\bar{R}}_{1} \end{equation} Taking a cue from traditional flux difference splitting methods, we now write the interface flux as \begin{equation} \boldsymbol{F}_{I} = \frac{1}{2} \left[\boldsymbol{F}_{L} + \boldsymbol{F}_{R} \right] - \frac{1}{2} \left[ \left(\bigtriangleup{\boldsymbol{F}}^{+} \ - \ \bigtriangleup{\boldsymbol{F}}^{-}\right) \right] \end{equation} On using (\ref{delta_F_positive}) and (\ref{delta_F_negative}) in the upwinding part of the FDS formulation for the pressureless system, we get \begin{equation} \ \ \bigtriangleup{\boldsymbol{F}}^{+} - \bigtriangleup{\boldsymbol{F}}^{-} \ = \ \sum_{i = 1}^{2} \bar\alpha_{i} |\bar \lambda_{i}| \boldsymbol{\bar{R}}_{i} \end{equation} Since both eigenvalues are the same, the above relation becomes \begin{equation}\label{flux_differencing} \bigtriangleup{\boldsymbol{F}}^{+} - \bigtriangleup{\boldsymbol{F}}^{-} \ = \ |\bar \lambda|\bigtriangleup{\boldsymbol{U}} \end{equation} Now $\bigtriangleup{U_{2}}$ is equal to $\bigtriangleup({\rho u})$, which can be further expressed as
\begin{equation} \bigtriangleup(\rho u) \ = \ \bar{u}\bigtriangleup{\rho} \ + \ \bar{\rho}\bigtriangleup{u} \end{equation} where $\bar{u}$ is some average of $u_L$ and $u_R$ and $\bar{\rho}$ is another average of $\rho_L$ and $\rho_R$, both to be determined. We now have \begin{equation}\label{U_2_average_relation} \rho_{R}u_{R} - \rho_{L}u_{L} \ = \ \bar{u}(\rho_{R} - \rho_{L}) \ + \ \bar{\rho}(u_{R} - u_{L}) \end{equation} We need to find average values for both the density and the velocity variables, and both of them should satisfy relation (\ref{U_2_average_relation}) in order to obtain meaningful interface fluxes inside each cell. Consider again the relation $\bigtriangleup{\boldsymbol{F}} = \boldsymbol{\bar{A}}\bigtriangleup{\boldsymbol{U}}$, which in expanded form can be written as \begin{equation} \label{conservation_eq} \begin{bmatrix} \bigtriangleup (\rho u) \\[0.3em] \bigtriangleup (\rho u^{2}) \end{bmatrix} \ = \ \begin{bmatrix} 0 & 1 \\[0.3em] -{\bar u}^{2} & 2 \bar{u} \end{bmatrix} \begin{bmatrix} \bigtriangleup (\rho) \\[0.3em] \bigtriangleup (\rho u) \end{bmatrix} \end{equation} The first relation is automatically satisfied for any average values. From the second relation, we get \begin{equation}\label{momentum_eqn} \bigtriangleup(\rho u^{2}) \ = \ -{\bar u}^{2} \bigtriangleup (\rho) + 2 \bar{u} \bigtriangleup (\rho u) \end{equation} where \begin{equation} \bigtriangleup (\rho) \ = \ (\rho)_R \ - \ (\rho)_L \end{equation} \begin{equation} \bigtriangleup (\rho u) \ = \ (\rho u)_R \ - \ (\rho u)_L \end{equation} \begin{equation} \bigtriangleup (\rho u^2) \ = \ (\rho u^2)_R \ - \ (\rho u^2)_L \end{equation} After rearranging terms we obtain \begin{equation*} {\bar{u}}^2 \bigtriangleup (\rho) \ - \ 2 \bar{u} \bigtriangleup (\rho u) \ + \ \bigtriangleup (\rho u^{2})\ = \ 0 \end{equation*} which is a quadratic equation in $\bar{u}$, the solution of which, after a little algebra, is obtained as \begin{equation} \label{u_bar} \bar{u} \ = \ \frac{\sqrt{\rho_L} u_L \ \pm \ \sqrt{\rho_R} u_R }{\sqrt{\rho_L} \ \pm \ \sqrt{\rho_R}} \end{equation} We discard the root with negative signs in both the numerator and the denominator, as it is not physical and may become unbounded as $\sqrt{\rho_R} \rightarrow \sqrt{\rho_L}$ or vice-versa. Thus the average value of $u$ is defined as \begin{equation} \bar{u} \ = \ \frac{\sqrt{\rho_L} u_L \ + \ \sqrt{\rho_R} u_R }{\sqrt{\rho_L} \ + \ \sqrt{\rho_R}} \end{equation} On using $\bar{u}$ in relation (\ref{U_2_average_relation}) we get \begin{equation} \rho_{R}u_{R} - \rho_{L}u_{L} \ = \frac{\sqrt{\rho_L} u_L \ + \ \sqrt{\rho_R} u_R }{\sqrt{\rho_L} \ + \ \sqrt{\rho_R}}(\rho_{R} - \rho_{L}) \ + \ \bar{\rho}(u_{R} - u_{L}) \end{equation} Now we use $(\rho_{R} - \rho_{L}) \ = \ (\sqrt{\rho_R} + \sqrt{\rho_L})(\sqrt{\rho_R} - \sqrt{\rho_L})$ in the above equation and, after rearranging terms, we get \begin{equation} \bar{\rho} = (\sqrt{\rho_R}\sqrt{\rho_L}) \end{equation} Since density is always positive, the average value $\bar{\rho}$ equals $\sqrt{\rho_R\rho_L}$. One can check that relation (\ref{U_2_average_relation}) is satisfied exactly by the above-defined averages of the density and velocity variables. As the interface flux is now completely defined, the final update formula in the finite volume framework is written as follows.
\begin{equation} \boldsymbol{U}^{n+1}_{j} = \boldsymbol{U}^{n}_{j} - \frac{\Delta t}{\Delta x} \left[ \boldsymbol{F}^{n}_{j+\frac{1}{2}} - \boldsymbol{F}^{n}_{j-\frac{1}{2}} \right] \end{equation} \subsection{Numerical examples} Here we consider two test cases for 1-D pressureless gas dynamics. The first test case is taken from \cite{Chen_&_Liu}, with initial conditions $(\rho_L, u_L) = (1.0, 1.5)$, $(\rho_R, u_R) = (0.2, 0.0)$, the initial discontinuity placed at $x_{o}=0.0$, and all solutions obtained at the final time $t=0.2$ units. In this case, a $\delta$-shock develops in the density variable, and our FDS-J scheme captures this feature accurately, as seen in Figure \ref{Pressureless_delta_shocks_d}. The formation of a step discontinuity in the velocity variable is shown in Figure \ref{Pressureless_delta_shocks_u}. The second test case is taken from \cite{Bouchut_Jin_&_Li}. It is designed to check the positivity property and the maximum principle for the density and velocity variables respectively. For this problem, the FDS-J scheme generates insufficient numerical diffusion. To obtain a meaningful solution, we use Harten's entropy fix \cite{Harten_entropy_fix}, which increases the diffusion in the scheme, {\em i.e.}, \begin{align} \begin{split} |\tilde{\lambda}| \ &= \ |\lambda| \ \ \ \textrm{if} \ \ \ |\lambda| \geq \epsilon \ \textrm{and} \ \\ |\tilde{\lambda}| \ &= \ \dfrac{1}{2}\big(\frac{\lambda^{2}}{\epsilon} + \epsilon \big) \ \ \textrm{if} \ \ |\lambda| < \epsilon \end{split} \end{align} for some small value of $\epsilon$. The density plot is shown in Figure \ref{positivity_test}. \begin{figure} \caption{ (a) Results of the FDS-J scheme for the pressureless system: formation of a $\delta$-shock in the density variable; (b) formation of a step discontinuity in the velocity variable.} \label{Pressureless_delta_shocks_d} \label{Pressureless_delta_shocks_u} \end{figure} \begin{figure} \caption{Density results for the positivity problem using the FDS-J scheme for the pressureless gas dynamics system. } \label{positivity_test} \end{figure} \section{Modified Burgers' system} Next we consider the modified Burgers' system, which is formed by augmenting the inviscid Burgers' equation with the equation obtained by differentiating it, yielding a $2\times2$ system. Let us consider the one-dimensional inviscid Burgers' equation \begin{equation}\label{Burgers'_1} u_{t} \ + \ f_{x}(u) \ = \ 0 \end{equation} where $u$ is the conserved variable and $f(u)$ is the flux function, given by $f(u) = \frac{1}{2} u^{2}$. On differentiating the above equation with respect to $x$, we obtain \begin{equation} (u_{t})_{x} \ + \ (f_{x}(u))_{x} \ = \ 0 \end{equation} This can further be written as \begin{equation} (u_{x})_{t} \ + \ (f^{\prime}(u)u_{x})_{x} \ = \ 0 \end{equation} or \begin{equation}\label{Burgers'_2} v_{t} \ + \ g_{x}(u) \ = \ 0 \end{equation} where we define $v = u_{x}$ and $g(u) = f^{\prime}(u)v $.
Equations (\ref{Burgers'_1}) and (\ref{Burgers'_2}) together form the $2\times2$ system \begin{equation} \frac{\partial \boldsymbol{U}}{\partial t} \ + \ \boldsymbol{A} \frac{\partial \boldsymbol{U}}{\partial x} \ = \ 0 \end{equation} where $\boldsymbol{U}$ is a column vector and $\boldsymbol{A}$ is a $2\times2$ matrix, {\em i.e.}, \begin{equation*} \boldsymbol{U} = \begin{bmatrix} \ u \\[0.3em] \ v \end{bmatrix} \ \textrm{and} \ \boldsymbol{A} = \begin{bmatrix} \ u && 0 \\[0.3em] \ v && u \end{bmatrix} \end{equation*} The eigenvalues of the Jacobian matrix $\boldsymbol{A}$ are $\lambda_{1} = u = \lambda_{2}$, so the algebraic multiplicity (AM) of the eigenvalue $u$ is 2. For $v \neq 0$, analysis of the matrix $\boldsymbol{A}$ shows that the given system is weakly hyperbolic, as it has only one LI eigenvector, which is given by \begin{equation} \boldsymbol{R}_{1} = \begin{bmatrix} \ 0 \\[0.3em] \ 1 \end{bmatrix} \end{equation} We find that there is one Jordan block of order two, since $rank(\boldsymbol{A} - u \boldsymbol{I})^{2} \ = \ 0 \ = \ rank(\boldsymbol{A} - u \boldsymbol{I})^{3}$. As in the previous case, in order to find a generalized eigenvector we need to solve the relation $\boldsymbol{A}\boldsymbol{P} = \boldsymbol{P}\boldsymbol{J}$. After a little algebra, $\boldsymbol{R}_{2}$ comes out as \begin{equation} \boldsymbol{R}_{2} = \begin{bmatrix} \ \dfrac{1}{v} \\[0.8em] \ x_{2} \end{bmatrix} \end{equation} where $x_{2} \in {\rm I\!R}$. \subsection{Formulation of FDS scheme for Modified Burgers' system} An analysis similar to that for the pressureless gas dynamics system is valid for the modified Burgers' system up to equation (\ref{flux_differencing}), which here reads \begin{equation}\label{flux_differencing_2} \bigtriangleup{\boldsymbol{F}}^{+} - \bigtriangleup{\boldsymbol{F}}^{-} \ = \ |\bar \lambda|\bigtriangleup{\boldsymbol{U}} \end{equation} In this case $\bigtriangleup{\boldsymbol{U}}$ is defined as \begin{equation} \bigtriangleup{\boldsymbol{U}} \ = \begin{bmatrix} \ \bigtriangleup{u} \\[0.3em] \ \bigtriangleup{v} \end{bmatrix} \end{equation} and $\bar{\lambda} = \bar{u}$. In order to solve (\ref{flux_differencing_2}) fully, we need to find the average value of $u$ from the relation $\bigtriangleup{\boldsymbol{F}} \ = \ \boldsymbol{\bar{A}} \bigtriangleup{\boldsymbol{U}}$. In expanded form it can be written as \begin{equation} \begin{bmatrix} \bigtriangleup (\frac{1}{2} u^{2}) \\[0.3em] \bigtriangleup (u v) \end{bmatrix} \ = \ \begin{bmatrix} \bar u && 0 \\[0.3em] \bar v && \bar u \end{bmatrix} \begin{bmatrix} \bigtriangleup (u) \\[0.3em] \bigtriangleup (v) \end{bmatrix} \end{equation} From the first equation, we get \begin{equation}\label{1_eqn_aug_Burgers'} \bigtriangleup \left(\frac{1}{2} u^{2}\right) \ = \ \bar u \bigtriangleup (u) \end{equation} or \begin{equation} \left(\frac{1}{2} u^{2}_{R} - \frac{1}{2} u^{2}_{L}\right) \ = \ \bar{u}(u_{R} - u_{L}) \end{equation} If $u_{L} \neq u_{R}$, then $\bar{u} = \dfrac{(u_{L} + u_{R})}{2}$; otherwise the same value $\bar{u} = \dfrac{(u_{L} + u_{R})}{2}$ holds in a limiting sense. The second relation, which would determine $\bar{v}$, need not be solved, as the interface flux requires only $\bar{u}$. It is important to note that even if $v = 0$, relation (\ref{flux_differencing_2}) still holds. \subsection{Numerical examples} We consider some numerical test cases from \cite{Capdeville} for the modified Burgers' system.
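Before presenting the test cases, the pieces derived above can be assembled into a few lines of code. The following is a minimal first-order sketch of the resulting FDS-J update for the modified Burgers' system; it assumes the Python package numpy, and the boundary treatment and entropy-fix threshold are illustrative choices rather than part of the method:
\begin{verbatim}
# Minimal first-order sketch of the FDS-J update for the modified
# Burgers' system (assumes numpy). Boundary treatment and the
# entropy-fix threshold eps are illustrative choices only.
import numpy as np

def flux(u, v):
    # F(U) = (u^2/2, u*v)
    return np.array([0.5 * u**2, u * v])

def interface_flux(UL, UR, eps=1.0e-6):
    uL, vL = UL
    uR, vR = UR
    lam = 0.5 * (uL + uR)                # average eigenvalue, bar{u}
    alam = abs(lam)
    if alam < eps:                       # Harten's entropy fix
        alam = 0.5 * (lam**2 / eps + eps)
    dU = np.array([uR - uL, vR - vL])
    return 0.5 * (flux(uL, vL) + flux(uR, vR)) - 0.5 * alam * dU

def update(U, dt, dx):
    # U has shape (2, N); simple transmissive boundaries
    Ue = np.concatenate([U[:, :1], U, U[:, -1:]], axis=1)
    F = np.array([interface_flux(Ue[:, i], Ue[:, i + 1])
                  for i in range(Ue.shape[1] - 1)]).T
    return U - dt / dx * (F[:, 1:] - F[:, :-1])
\end{verbatim}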
The first test case has smooth initial conditions, given by \begin{eqnarray*} \boldsymbol{U}(x,0) \ = \ \left\{ \begin{array}{l} \frac{1}{2} + \sin(\pi x) \\ \pi \cos(\pi x) \end{array} \right. \forall x \in [0,2] \end{eqnarray*} with a 2-periodic boundary condition. Near the time $t = \frac{3}{2\pi}$, the system develops a normal shock and a $\delta$-shock in the $u$ and $v$ variables respectively. Theoretically, $v = \pi \cos(\pi x)$ may be zero at the points $x = \frac{1}{2}, \frac{3}{2}$, but computationally this does not occur. Results with the FDS-J scheme are given in Figures \ref{Modified Burgers'_normal_shock} and \ref{Modified Burgers'_delta_shocks}. Next we present results with the Local Lax-Friedrichs (LLF) method, a simple central solver, in Figures \ref{Modified Burgers'_normal_shock_comp} and \ref{Modified Burgers'_delta_shocks_comp}. The second test case, with initial conditions $(u_L, v_L) = (-2.0, 1.0)$, $(u_R, v_R) = (4.0, -2.0)$ and the initial discontinuity at $x_{o} = 1.0$, contains a sonic point. The final solutions are obtained at time $t=0.125$ units, as in \cite{Smith_et_al}. Harten's entropy fix is employed to obtain meaningful solutions, and the results are given in Figures \ref{Modified Burgers'_test2_u} and \ref{Modified Burgers'_test2_v}. \begin{figure} \caption{ (a) Formation of a normal shock in the $u$-variable and (b) formation of a $\delta$-shock in the $v$-variable, for the modified Burgers' system.} \label{Modified Burgers'_normal_shock} \label{Modified Burgers'_delta_shocks} \end{figure} \begin{figure} \caption{ LLF scheme with 500 points: (a) formation of a normal shock in the $u$-variable and (b) formation of a $\delta$-shock in the $v$-variable, for the modified Burgers' system.} \label{Modified Burgers'_normal_shock_comp} \label{Modified Burgers'_delta_shocks_comp} \end{figure} \begin{figure} \caption{ (a) Results for the sonic point problem in the $u$-variable and (b) in the $v$-variable, for the FDS-J scheme with the modified Burgers' system.} \label{Modified Burgers'_test2_u} \label{Modified Burgers'_test2_v} \end{figure} \section{Further modified Burgers' system} Shelkovich \cite{Shelkovich} shows the existence of $\delta^{\prime}$-shocks in addition to $\delta$-shocks. These shocks occur in the system formed by taking one more derivative of the second equation of the modified Burgers' system, leading to a $3\times3$ system. Similarly, Joseph \cite{Joseph} shows the existence of $\delta^{\prime\prime}$-shocks in the solution of a $4\times4$ system. Let us consider again both equations of the modified Burgers' system \begin{equation} u_{t} \ + \ f_{x}(u) \ = \ 0 \end{equation} and \begin{equation} v_{t} \ + \ g_{x}(u) \ = \ 0 \end{equation} On differentiating the second equation with respect to $x$, we get \begin{equation} w_{t} \ + \ (v^{2} + uw)_{x} \ = \ 0 \end{equation} where $w = v_{x}$. Differentiating once more, with $z = w_{x}$, we have \begin{equation} z_{t} \ + \ (3vw + uz)_{x} \ = \ 0 \end{equation} In quasi-linear form, the above set of four equations can be written as \begin{equation} \boldsymbol{U}_{t} \ + \ \boldsymbol{A}\boldsymbol{U}_{x} \ = \ 0 \end{equation} where $\boldsymbol{U}$ is a $4\times1$ column vector and $\boldsymbol{A}$ is the Jacobian matrix given below: \begin{equation} \boldsymbol{A} = \begin{bmatrix} \ u && 0 && 0 && 0 \\[0.3em] \ v && u && 0 && 0 \\[0.3em] \ w && 2v && u && 0 \\[0.3em] \ z && 3w && 3v && u \end{bmatrix} \end{equation} The eigenvalues of the matrix $\boldsymbol{A}$ are $u,u,u,u$, and for $v\neq0, w\neq0, z\neq0$ the matrix $\boldsymbol{A}$ is weakly hyperbolic.
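This weak hyperbolicity can be verified symbolically. The following minimal sketch (assuming the Python package sympy; it is not part of the numerical method) computes the ranks of the powers of $\boldsymbol{A} - u\boldsymbol{I}$ and the corresponding eigenspace:
\begin{verbatim}
# Symbolic check of the Jordan structure of the 4x4 Jacobian
# (a sketch assuming sympy; generic ranks, valid for v != 0).
import sympy as sp

u, v, w, z = sp.symbols('u v w z', real=True)
A = sp.Matrix([[u, 0,   0,   0],
               [v, u,   0,   0],
               [w, 2*v, u,   0],
               [z, 3*w, 3*v, u]])
N = A - u*sp.eye(4)

# generically the ranks are 3, 2, 1, 0 -> a single Jordan block of order 4
print([(N**k).rank() for k in range(1, 5)])

# the eigenspace of lambda = u is spanned by e_4 alone
print(N.nullspace())
\end{verbatim}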
Indeed it has only one LI eigenvector $\boldsymbol{e}_{4}$. In this case also we find that there is only one Jordan block of order $4$ as $rank(\boldsymbol{A} - u \boldsymbol{I})^{4} \ = \ 0 \ = \ rank(\boldsymbol{A} - u \boldsymbol{I})^{5}$. This means for present system, a Jordan chain of order four corresponding to eigenvalue $\lambda = u$ will form, {\em i.e.}, \begin{align} \begin{split} \boldsymbol{A} \boldsymbol{R}_{1} \ &= \ \lambda \boldsymbol{R}_{1} \\ \boldsymbol{A} \boldsymbol{R}_{2} \ &= \ \lambda \boldsymbol{R}_{2} \ + \ \boldsymbol{R}_{1} \\ \boldsymbol{A} \boldsymbol{R}_{3} \ &= \ \lambda \boldsymbol{R}_{3} \ + \ \boldsymbol{R}_{2} \\ \boldsymbol{A} \boldsymbol{R}_{4} \ &= \ \lambda \boldsymbol{R}_{4} \ + \ \boldsymbol{R}_{3} \end{split} \end{align} where $\boldsymbol{R}_{1} = \boldsymbol{e}_{4}$ and on using $\boldsymbol{R}_{1}$ in the second relation, $\boldsymbol{R}_{2}$ comes out as $(0,0,\frac{1}{3v},x_{4})^{t}$ with $x_{4}$ as a real constant. Similarly, on using $\boldsymbol{R}_{2}$ in next relation, $\boldsymbol{R}_{3}$ comes out equal to $(0,\frac{1}{6v^{2}},\frac{x_{4}}{3v}-\frac{w}{6v^{3}},y_{4})$, where $x_{4}$ is already defined and $y_{4}$ is another real constant. Finally, last expression gives $\boldsymbol{R}_{4} = (\frac{1}{6v^{3}}, \frac{x_{4}}{6v^{2}}, \frac{y_{4}}{3v} - \frac{z}{18v^{4}} - \frac{w x_{4}}{6v^{3}}, t_{4})^{t}$. Let $\boldsymbol{P}$ denote a matrix with column vectors $[\boldsymbol{R}_{1} | \boldsymbol{R}_{2} | \boldsymbol{R}_{3} | \boldsymbol{R}_{4}]$ and one can check determinant of $\boldsymbol{P}$ is $\frac{1}{108v^{6}} \neq 0$. \subsection{Formulation of FDS scheme for Further Modified Burgers' System} In this case $\bigtriangleup{\boldsymbol{F}}$ is written as, \begin{equation*} \bigtriangleup{\boldsymbol{F}} \ = \ \bar\alpha_{1} \bar \lambda \boldsymbol{\bar{R}}_{1} \ + \ \bar\alpha_{2}(\bar \lambda \boldsymbol{\bar{R}}_{2} \ + \ \boldsymbol{\bar{R}}_{1}) \ + \ \bar\alpha_{3}(\bar\lambda \boldsymbol{\bar{R}}_{3} \ + \ \boldsymbol{\bar{R}}_{2}) \ + \ \bar\alpha_{4}(\bar \lambda \boldsymbol{\bar{R}}_{4} \ + \ \boldsymbol{\bar{R}}_{3}) \end{equation*} After splitting each of the eigenvalues into a positive part and a negative part, $\bigtriangleup{\boldsymbol{F}}^{+}$ and $\bigtriangleup{\boldsymbol{F}}^{-}$ can be written as \begin{equation} \bigtriangleup{\boldsymbol{F}}^{+} \ = \ \bar\alpha_{1} \bar \lambda^{+} \boldsymbol{\bar{R}}_{1} \ + \ \bar\alpha_{2}(\bar \lambda^{+} \boldsymbol{\bar{R}}_{2} \ + \ \boldsymbol{\bar{R}}_{1}) \ + \ \bar\alpha_{3}(\bar\lambda^{+} \boldsymbol{\bar{R}}_{3} \ + \ \boldsymbol{\bar{R}}_{2}) \ + \ \bar\alpha_{4}(\bar \lambda^{+} \boldsymbol{\bar{R}}_{4} \ + \ \boldsymbol{\bar{R}}_{3}) \end{equation} and \begin{equation} \bigtriangleup{\boldsymbol{F}}^{-} \ = \ \bar\alpha_{1} \bar \lambda^{-} \boldsymbol{\bar{R}}_{1} \ + \ \bar\alpha_{2}(\bar \lambda^{-} \boldsymbol{\bar{R}}_{2} \ + \ \boldsymbol{\bar{R}}_{1}) \ + \ \bar\alpha_{3}(\bar\lambda^{-} \boldsymbol{\bar{R}}_{3} \ + \ \boldsymbol{\bar{R}}_{2}) \ + \ \bar\alpha_{4}(\bar \lambda^{-} \boldsymbol{\bar{R}}_{4} \ + \ \boldsymbol{\bar{R}}_{3}) \end{equation} \begin{equation}\label{flux_differencing_3} \Rightarrow \bigtriangleup{\boldsymbol{F}}^{+} - \bigtriangleup{\boldsymbol{F}}^{-} \ = \ |\bar \lambda|\bigtriangleup{\boldsymbol{U}} \end{equation} In this case $\bigtriangleup{\boldsymbol{U}}$ is defined as, \begin{equation} \bigtriangleup{\boldsymbol{U}} \ = \begin{bmatrix} \ \bigtriangleup{u} \\[0.3em] \ \bigtriangleup{v} \\[0.3em] \ \bigtriangleup{w} 
\\[0.3em] \ \bigtriangleup{z} \end{bmatrix} \end{equation} and $\bar{\lambda} = \bar{u}$. In order to solve (\ref{flux_differencing_3}) fully, we need to find the average value of $u$. In this case also, the average value of $u$ turns out to be $\dfrac{u_{L} + u_{R}}{2}$. We take the same test case as considered for the modified Burgers' system, with the smooth initial conditions \begin{eqnarray*} \boldsymbol{U}(x,0) \ = \ \left\{ \begin{array}{l} \frac{1}{2} + \sin(\pi x) \\ \pi \cos(\pi x) \\ -\pi^{2} \sin(\pi x) \\ -\pi^{3} \cos(\pi x) \end{array} \right. \forall x \in [0,2] \end{eqnarray*} As already explained, at time $t = \frac{3}{2\pi}$ the given system develops a normal shock and a $\delta$-shock in the $u$ and $v$ variables, respectively. Similarly, at the same position where the normal shock forms, the third variable $w$ develops a $\delta^{\prime}$-shock and the fourth variable $z$ a $\delta^{\prime \prime}$-shock. Results for the FDS-J scheme are compared with the simple central LLF solver and are given in Figures \ref{delta_dash_shock} and \ref{delta_double_dash_shock}.
\begin{figure} \caption{ Comparison of FDS-J scheme with LLF scheme for further modified Burgers' system: (a) represents formation of $\delta^{\prime}$-shock in w-variable and (b) represents formation of $\delta^{\prime\prime}$-shock in z-variable.} \label{delta_dash_shock} \label{delta_double_dash_shock} \end{figure}
\section{Summary} In this study, we attempted to develop a Flux Difference Splitting scheme for genuinely weakly hyperbolic systems to simulate various shocks, including $\delta$-shocks, $\delta^{\prime}$-shocks and $\delta^{\prime\prime}$-shocks. The newly constructed FDS-J scheme, developed using Jordan canonical forms together with an upwind flux difference splitting method, is capable of recognizing these shocks accurately. For the considered weakly hyperbolic systems, there is no direct contribution of the generalized eigenvectors in the final formulation of the scheme. \end{document}
\begin{document} \title{L\"uders Channels and the Existence of Symmetric Informationally Complete Measurements} \author{John B.\ DeBrota} \author{Blake C.\ Stacey} \affiliation{University of Massachusetts Boston, Morrissey Boulevard, Boston MA 02125, USA} \date{\today} \begin{abstract} The L\"uders rule provides a way to define a quantum channel given a quantum measurement. Using this construction, we establish an if-and-only-if condition for the existence of a $d$-dimensional Symmetric Informationally Complete quantum measurement (a SIC) in terms of a particular depolarizing channel. Moreover, the channel in question satisfies two entropic optimality criteria. \end{abstract} \maketitle \section{Introduction} A \textit{minimal informationally complete} (MIC) quantum measurement for a $d$ dimensional Hilbert space $\mathcal{H}_d$ is a set of linearly independent positive semidefinite operators $\{E_i\}$, $i=1,\ldots,d^2$, which sum to the identity~\cite{DeBrota:2018a, DeBrota:2018b}. If every element in a MIC is proportional to a rank-$n$ projector, we say the MIC itself is \textit{rank-n}. If the Hilbert--Schmidt inner products ${\rm tr}\, E_iE_j$ equal one constant for all $i\neq j$ and another constant when $i=j$, we say the MIC is \textit{equiangular}. A \textit{symmetric informationally complete} (SIC) quantum measurement is a rank-1 equiangular MIC~\cite{Zauner:1999, Renes:2004, Scott:2010, Fuchs:2017}. When a SIC $\{H_i\}$ exists, one can show that $H_i=\frac{1}{d}\Pi_i$ where $\Pi_i$ are rank-1 projectors and that \begin{equation} {\rm tr}\, H_iH_j=\frac{d\delta_{ij}+1}{d^2(d+1)}\;. \end{equation} The theory of quantum channels provides a means to discuss the fully general way in which quantum states may be transformed. A standard result \cite{Nielsen:2010} has it that a quantum channel $\mathcal{E}$ may always be specified by a set of operators $\{A_i\}$, called \textit{Kraus operators}, such that for a quantum state $\rho$, \begin{equation} \mathcal{E}(\rho)=\sum_iA_i\rho A_i^\dag\;. \end{equation} Consider a physicist Alice who is preparing to send a quantum system through a channel that she models by $\mathcal{E}$. Alice initially describes her quantum system by assigning to it a density matrix $\rho$. The state $\mathcal{E}(\rho)$ encodes Alice's expectations for measurements that can potentially be performed after the system is sent through the channel. More specifically, let Alice's channel be a \emph{L\"uders MIC channel} (LMC) associated with the MIC $\{E_i\}$, which may be understood in the following way. Alice plans to apply the MIC $\{E_i\}$, and upon obtaining the result of that measurement, invoke the L\"uders rule~\cite{Barnum:2000, Busch:2009} to obtain a new state for her system, \begin{equation} \rho'_i := \frac{1}{{\rm tr}\, \rho E_i} \sqrt{E_i} \rho \sqrt{E_i}\;, \end{equation} where we have introduced the \emph{principal Kraus operators} $\{\sqrt{E_i}\}$, the unique positive semidefinite square roots of the MIC elements. Before applying her MIC, Alice can write the \emph{post-channel state} \begin{equation} \mathcal{E}(\rho) := \sum_i p(E_i) \rho'_i = \sum_i \sqrt{E_i} \rho \sqrt{E_i}\;, \end{equation} which is a weighted average of the states from which Alice plans to select the actual state she will ascribe to the system after making the measurement. (For more on the broader conceptual context of this operation, see~\cite{Fuchs:2012, Stacey:2019}.) 
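As a concrete illustration (ours, not part of the original derivation), the following short numerical sketch builds the LMC of the qubit SIC obtained from tetrahedral Bloch vectors and checks that it acts as $\rho\mapsto(I+\rho)/(d+1)$, the depolarizing action derived below. The helper \texttt{psd\_sqrt} computes the principal Kraus operators.
\begin{verbatim}
import numpy as np

# Pauli matrices and the four tetrahedral Bloch vectors defining the qubit SIC
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
a = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

def psd_sqrt(M):
    # Unique positive semidefinite square root (principal Kraus operator of a MIC element)
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

# SIC elements H_i = (1/d) Pi_i, with Pi_i = (I + a_i . sigma)/2 rank-1 projectors
H = [0.25 * (I2 + v[0] * sx + v[1] * sy + v[2] * sz) for v in a]
assert np.allclose(sum(H), I2)          # the MIC sums to the identity
K = [psd_sqrt(Hi) for Hi in H]

def lmc(rho, kraus):
    # Luders MIC channel: rho -> sum_i sqrt(E_i) rho sqrt(E_i)
    return sum(k @ rho @ k.conj().T for k in kraus)

# A random pure qubit state; the SIC channel should act as rho -> (I + rho)/(d + 1)
psi = np.random.randn(2) + 1j * np.random.randn(2)
rho = np.outer(psi, psi.conj()) / np.linalg.norm(psi) ** 2
print(np.allclose(lmc(rho, K), (I2 + rho) / 3.0))   # True
\end{verbatim}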
LMCs are a proper subset of all quantum channels as many valid channels are unrelated to a MIC and do not admit a representation in terms of principal Kraus operators. For example, a unitary channel is not an LMC. Throughout this paper, we will frequently use the fact that any MIC element $E_i$ is proportional to a density matrix $E_i:=e_i\rho_i$, where we call the proportionality constants $\{e_i\}$ the \textit{weights} of the MIC. Because the $\{E_i\}$ sum to the identity, the weights $\{e_i\}$ sum to the trace of the identity, which is just the dimension $d$. We refer to the LMC obtained from a SIC as the \textit{SIC channel} $\mathcal{E}_{\rm SIC}$. We may characterize the SIC channel in any dimension in which a SIC exists using the convenient notion of a \textit{dual basis}. Given a basis for a vector space, any vector in that space is uniquely identified by its inner products with the basis elements. These inner products are the coefficients in the expansion of the vector over the elements of the dual basis. Likewise, the inner products with the elements of the dual basis are the coefficients in the expansion over the original basis. A consequence of this is that, if $\{H_i\}$ denotes the original basis and $\{\widetilde{H}_j\}$ denotes its dual basis, then \begin{equation} {\rm tr}\, H_i \widetilde{H}_j = \delta_{ij}. \end{equation} It follows that if $\{H_i\}$ is a SIC, then the dual basis is given by \begin{equation} \widetilde{H}_j=(d+1)\Pi_j-I\;, \end{equation} so we may write an operator $X\in\mathcal{L}(\mathcal{H}_d)$ in the SIC basis as \begin{equation} \begin{split} X &= \sum_j({\rm tr}\, X\widetilde{H}_j)H_j\\ &= (d+1)\sum_j({\rm tr}\, X\Pi_j)H_j - ({\rm tr}\, X)I\;. \end{split} \end{equation} Noting that $({\rm tr}\, X\Pi_j)H_j=\frac{1}{d}\Pi_jX\Pi_j$ and that $\sqrt{H_j}=\frac{1}{\sqrt{d}}\Pi_j$, we obtain \begin{equation}\label{sicchannel} \mathcal{E}_{\rm SIC}(X) = \frac{1}{d}\sum_j\Pi_jX\Pi_j = \frac{({\rm tr}\, X)I+X}{d+1}\;, \end{equation} and so for any quantum state $\rho$, \begin{equation} \mathcal{E}_{\rm SIC}(\rho) = \frac{I+\rho}{d+1}\;. \end{equation} Going forward, given an LMC $\mathcal{E}$ and input state $\rho$, let $\lambda$ denote the eigenvalue spectrum of the post-channel state $\mathcal{E}(\rho)$ and $\lambda_{\rm max}$ denote the maximum eigenvalue of this state. We use the notation $\overline{f(\ketbra{\psi}{\psi})}$ to denote the average value of the function $f(\ketbra{\psi}{\psi})$ over all pure state inputs $\ketbra{\psi}{\psi}$ with respect to the Haar measure. We now prove a lemma applicable to arbitrary LMCs upon which our later results rely. \begin{lemma}\label{lemma} Let $\mathcal{E}$ be an LMC. Then \begin{equation} \overline{\lambda}_{\rm max}\geq \frac{2}{d+1}\;. \end{equation} \end{lemma} \begin{proof} For an arbitrary pure state $\rho=\ketbra{\psi}{\psi}$, \begin{equation} \mathcal{E}(\ketbra{\psi}{\psi}) = \sum_i\sqrt{E_i}\ketbra{\psi}{\psi}\sqrt{E_i}\;. \end{equation} We may lower bound $\lambda_{\rm max}$ given such an input as follows: \begin{equation} \lambda_{\rm max} \geq \bra{\psi}\mathcal{E}(\ketbra{\psi}{\psi})\ket{\psi} = \sum_i|\bra{\psi}\sqrt{E_i}\ket{\psi}|^2\;. \end{equation} If we now average over all pure states with the Haar measure, we will produce a generic lower bound on the average maximal eigenvalue of the post-channel state: \begin{equation}\label{lowerbndavg} \overline{\lambda}_{\rm max} \geq\sum_i\int_{\mathcal{H}}|\bra{\psi}\sqrt{E_i}\ket{\psi}|^2 d\Omega_{\psi}\;. 
\end{equation} We can evaluate this integral using a known property of the Haar measure \cite{Renes:2004}. Integrating a tensor power over pure state projectors gives a result proportional to the projector $P_{\rm sym}$ onto the symmetric subspace of $\mathcal{H}_d\otimes\mathcal{H}_d$. Consequently, \begin{equation}\label{sumsym} \overline{\lambda}_{\rm max}\geq \frac{2}{d(d+1)}\sum_i{\rm tr}\,\!\!\left[\left(\sqrt{E_i} \otimes \sqrt{E_i}\right)P_{\rm sym}\right]. \end{equation} Using the fact that \begin{equation} P_{\rm sym}=\frac{1}{2}\left(I\otimes I+\sum_{kl}\ketbra{l}{k}\otimes\ketbra{k}{l}\right)\;, \end{equation} one may verify that \begin{equation} {\rm tr}\,\!\!\left[\left(\sqrt{E_i}\otimes \sqrt{E_i}\right)P_{\rm sym}\right]=\frac{1}{2}\left({\rm tr}\,\sqrt{E_i}\right)^2 +\frac{e_i}{2}\;. \end{equation} Thus equation \eqref{sumsym} becomes \begin{equation} \overline{\lambda}_{\rm max}\geq \frac{1}{d(d+1)}\left(\sum_i\left({\rm tr}\,\sqrt{E_i}\right)^2+\sum_ie_i\right)\;. \end{equation} Now since ${\rm tr}\,\sqrt{E_i}=\sqrt{e_i}{\rm tr}\,\sqrt{\rho_i}$ and ${\rm tr}\,\sqrt{\rho_i}\geq1$, \begin{equation} \overline{\lambda}_{\rm max}\geq\frac{2}{d(d+1)}\sum_ie_i=\frac{2}{d+1}\;. \end{equation} \end{proof} The following theorem reveals that the SIC channel's action is unique to SICs among LMCs. \begin{theorem}\label{existenceproof} A SIC exists in dimension $d$ iff there is an LMC with action $\mathcal{E}(\rho)=\frac{I+\rho}{d+1}$ for all $\rho$. \end{theorem} \begin{proof} If a SIC exists, take $\mathcal{E}=\mathcal{E}_{\rm SIC}$. For the other direction, we will first demonstrate that the MIC which gives rise to this LMC must be rank-$1$. Having established this, we will be able to see that the unitaries relating different Kraus operators for this LMC are directly related to the MIC weights. This will allow us to show that the principal Kraus operators have the Gram matrix of a SIC and must form a SIC themselves. For any pure state input, \begin{equation} \lambda=\left(\frac{2}{d+1},\frac{1}{d+1},\ldots,\frac{1}{d+1}\right)\;. \end{equation} Thus the lower bound in Lemma \ref{lemma} is saturated. This can only occur when ${\rm tr}\,\sqrt{\rho_i}=1$ for all $i$ which implies that the MIC is rank-1. In \hyperref[quasiSIC]{Appendix A} we define and construct the \textit{quasi-SICs}, that is, sets of Hermitian, but not necessarily postive semidefinite, matrices $\{Q_i\}$ which have the same Hilbert--Schmidt inner products as SIC projectors, and we demonstrate that they furnish a Hermitian basis of constant-trace Kraus operators $A_i$ which give the same action as $\mathcal{E}$. Any other set of Kraus operators with the same effect will be related to this set by a unitary remixing, and since $\mathcal{E}$ is an LMC, we must have \begin{equation}\label{remix} \sqrt{E_i}=\sum_j[U]_{ij}A_j\;. \end{equation} Since we know the MIC is rank-$1$, we can trace both sides of this expression to obtain the identity $\sqrt{de_i}=\sum_j[U]_{ij}$. Furthermore, since the $A_j$ form a Hermitian basis, one may see that every element of $U$ must be real, rendering $U$ an orthogonal matrix. Then \begin{equation} \begin{split} {\rm tr}\, \!\left(\sqrt{E_i}\sqrt{E_j}\right) &= \frac{1}{d}\sum_{k,l}[U]_{ik}[U]_{jl}{\rm tr}\, Q_k Q_l\\ &= \frac{1}{d}\sum_{k,l}[U]_{ik}[U]_{jl}\frac{d\delta_{kl}+1}{d+1}\\ &=\frac{d\delta_{ij}+d\sqrt{e_ie_j}}{d(d+1)}\;. 
\end{split} \end{equation} When $i=j$, we have \begin{equation} e_i=\frac{1+e_i}{d+1}\implies e_i=\frac{1}{d}\;, \end{equation} and so \begin{equation}\label{sqrtgram} {\rm tr}\, \!\left(\sqrt{E_i}\sqrt{E_j}\right)=\frac{d\delta_{ij}+1}{d(d+1)}\;, \end{equation} and because $E_i$ is rank-$1$, \begin{equation} {\rm tr}\,(E_iE_j)=\frac{d\delta_{ij}+1}{d^2(d+1)}\;, \end{equation} that is, the MIC is a SIC. \end{proof} \section{Depolarizing L\"uders MIC Channels} The SIC channel falls within a class of channels called \textit{depolarizing channels}~\cite{Wilde:2017}. A depolarizing channel is a channel \begin{equation} \mathcal{E}_\alpha(\rho) = \alpha\rho+\frac{1-\alpha}{d}I\;,\qquad \frac{-1}{d^2-1}\leq\alpha\leq1\;. \end{equation} The SIC channel corresponds to $\alpha=\frac{1}{d+1}$. One might wish to know when an LMC is a depolarizing channel. From Theorem \ref{existenceproof}, we know the only LMC with $\alpha=\frac{1}{d+1}$ is the SIC channel. What range of $\alpha$ are achievable by LMCs? The answer to this question is any $\frac{1}{d+1}\leq\alpha<1$. To see this, note that the eigenvalue spectrum for a depolarizing channel given a pure state input is \begin{equation}\label{depolarizingspectrum} \lambda\left(\mathcal{E}_\alpha(\ketbra{\psi}{\psi})\right) = \left(\alpha+\frac{1-\alpha}{d},\frac{1-\alpha}{d},\ldots, \frac{1-\alpha}{d}\right)\;. \end{equation} Recall the lower bound on the average maximal eigenvalue for any LMC given a pure state input from Lemma \ref{lemma} is $\frac{2}{d+1}$. As the spectrum for a depolarizing channel is constant for pure state inputs, the lower bound on the average is the lower bound for any pure state input. If $\lambda_{\rm max} = \frac{1-\alpha}{d}$, then $\frac{1-\alpha}{d} \geq \alpha+\frac{1-\alpha}{d} \implies \alpha\leq0$. The more negative $\alpha$ is, the larger the maximal eigenvalue would be, so the largest it can get is when $\alpha = \frac{-1}{d^2-1}$, in which case $\lambda_{\rm max} = \frac{d}{d^2-1}<\frac{2}{d+1}$. So, $\lambda_{\rm max} = \alpha+\frac{1-\alpha}{d} \geq \frac{2}{d+1} \implies \alpha \geq \frac{1}{d+1}$. When $\alpha=1$, the channel is the identity channel, in other words, not depolarizing at all. It is easy to prove that were this to be implemented by an LMC, it would require $\sqrt{E_i}=\frac{1}{d}I$, but this does not lead to a linearly independent set and is not a MIC. If a SIC exists, however, a depolarizing LMC exists for any $\frac{1}{d+1} \leq \alpha < 1$, as the next proposition shows. \begin{prop} Suppose a SIC exists in dimension $d$. For a nonzero $\beta\in\left[\frac{-1}{d-1},1\right]$ satisfying \begin{equation} \alpha=1-\frac{\left(\sqrt{1-\beta+d\beta}-\sqrt{1-\beta}\right)^2}{d+1}\;, \end{equation} or, equivalently, \begin{equation} \begin{split} \beta = \frac{1}{d^2}& \Bigr ( (d-2)(d+1)(1-\alpha)\\ & +2\sqrt{(d+1)(1-\alpha)(1-\alpha+d^2\alpha)} \Bigr )\;, \end{split} \end{equation} the MIC \begin{equation}\label{sicmix} E_i=\frac{\beta}{d}\Pi_i+\frac{1-\beta}{d^2}I\;, \end{equation} where $\Pi_i$ is a SIC projector, gives rise to the LMC \begin{equation}\label{depolarizingLMC} \mathcal{E}_\alpha(\rho) = \alpha\rho+\frac{1-\alpha}{d}I\;,\qquad \frac{1}{d+1}\leq\alpha<1\;. 
\end{equation} \end{prop} \noindent One may check that the principal Kraus operators associated with the MIC elements \eqref{sicmix} are given by \begin{equation}\label{principalkraus} A_i=\frac{\sqrt{1-\beta+d\beta}-\sqrt{1-\beta}}{d}\Pi_i + \frac{\sqrt{1-\beta}}{d}I\;, \end{equation} and then a routine calculation and the characterization of the SIC channel from Theorem \ref{existenceproof} confirms the claim of the proposition. \begin{remark} When $\beta = 1$, the MIC $\{E_i\}$ is the original SIC, whereas when $\beta$ equals its minimum allowed value $-1/(d-1)$, it is the rank-$(d-1)$ equiangular MIC \begin{equation}\label{minbeta} E_i = \frac{1}{d(d-1)}(-\Pi_i+I)\;, \end{equation} indirectly noted in prior work~\cite{DeBrota:2018b, Zhu:2016, DeBrota:2017} for extremizing a nonclassicality measure based on negativity of quasi-probability. \end{remark} Do any LMCs give rise to depolarizing channels in dimensions where one does not have access to a SIC? If we replace the SIC projector in equation \eqref{principalkraus} with a quasi-SIC, we may form Kraus operators effecting the same depolarizing channel, \begin{equation}\label{quasikraus} K_i=\frac{\sqrt{1-\beta+d\beta}-\sqrt{1-\beta}}{d}Q_i + \frac{\sqrt{1-\beta}}{d}I\;. \end{equation} From \eqref{qsicsquaresum}, one can check that these will square to a valid MIC. For arbitrary $\beta$, however, $K_i$ may fail to be positive semidefinite and would therefore not be a principal Kraus operator. From the definition of a quasi-SIC, one sees that the eigenvalues of $Q_i$ are bounded below by $-1$. Even in this worst case, one can easily derive that $K_i$ will be positive semidefinite for any nonzero $\beta\leq\frac{3}{d+3}$. This range of $\beta$ entitles any $\alpha\geq\frac{d^2-d-1}{d^2-1}$. (The minimal $\alpha$ is obtained from the most negative $\beta$.) When $d=2$, this minimal $\alpha$ matches the lower bound achieved by the SIC channel because every quasi-SIC is a SIC in this dimension, but for all $d>2$ the inequality is strict and monotonically increases with dimension. In practice, the minimal eigenvalue among all of the quasi-SIC operators one constructs will be significantly larger than $-1$, and so, depending on how close to a SIC one can make their quasi-SIC, one should be able to get significantly closer to the SIC bound than the $\alpha$ we have derived. Fully classifying the MICs giving depolarizing LMCs for particular values of $\alpha>\frac{1}{d+1}$ appears to be a difficult problem; it is not clear what properties these MICs must satisfy. For example, squaring the $K_i$ operators from equation \eqref{quasikraus} results in MICs which are dependent on one's quasi-SIC implementation and need not be equiangular as the family in equation \eqref{sicmix} was. All principal Kraus operators which give rise to a depolarizing channel with a given $\beta$ (and corresponding $\alpha$) will be related to the operators \eqref{quasikraus} by way of a unitary remixing satisfying \begin{equation}\label{remix2} \sqrt{E_i}=\sum_j[U]_{ij}K_j\; \end{equation} for some unitary $U$. As in the proof of Theorem \ref{existenceproof}, all the elements of the unitary must be real and so it is actually an orthogonal matrix. We have not been able to identify any further necessary characteristics of the $U$ in the completely general case, but the following notable restriction yielded further structure. A MIC is \emph{unbiased} if the traces of all the elements are equal, that is, if $e_i=\frac{1}{d}$ for all $i$. 
MICs in this class have the property that their measurement outcome probabilities for the ``garbage state'' $\frac{1}{d}I$ input is the flat probability distribution over $d^2$ outcomes. From the standpoint of \cite{DeBrota:2018a}, this means they preserve the intution that the state $\frac{1}{d}I$ should correspond to a prior with complete outcome indifference in a reference process scenario and accordingly warrant special attention. If we demand that $\{E_i\}$ be unbiased, then it is necessary, but not sufficient, that the orthogonal matrix remixing \eqref{quasikraus} be doubly quasistochastic (see \hyperref[dblyquasi]{Appendix B}). \section{Entropic Optimality} One way to evaluate the performance of a quantum channel is by using measures based on von Neumann entropy, \begin{equation} S(\rho)=-{\rm tr}\,\rho\log\rho\;. \end{equation} In this section, we consider two such, proving in each case an optimality result for LMCs constructed from SICs. To understand the conceptual significance of the bounds we will derive, consider again Alice who is preparing to send a quantum system through an LMC. Alice initially ascribes the quantum state $\rho$ to her system, and before sending the system through the channel, she computes $\mathcal{E}(\rho)$. After eliciting a measurement outcome, Alice will update her quantum-state assignment, not to $\mathcal{E}(\rho)$ but rather to whichever $\rho'_i$ corresponds to the outcome $E_i$ that actually transpires. The state $\mathcal{E}(\rho)$ will generally be mixed, while $\rho'_i$ will be a pure state in the case of a rank-1 MIC. This change from mixed to pure represents a sharpening of Alice's expectations about her quantum system. We can quantify this in entropic terms, even for MICs that are not rank-1. In fact, for pure state inputs we can calculate Alice's \emph{typical sharpening of expectations} by averaging the post-channel von Neumann entropy over the possible input states using the Haar measure, denoted $\overline{S(\mathcal{E}(\ketbra{\psi}{\psi}))}$. We will see that SIC channels give the largest possible typical sharpening of expectations. In the following we make use of a partial ordering on real vectors arranged in nonincreasing order called \textit{majorization}~\cite{Marshall:2011}. A real vector $x$ rearranged into nonincreasing order is written as $x^\downarrow$. Then we say a vector $x$ majorizes a vector $y$, denoted $x\succ y$, if all of the leading partial sums of $x^\downarrow$ are greater than or equal to the leading partial sums of $y^\downarrow$ and if the sum of all the elements of each is equal. Explicitly, $x\succ y$ if \begin{equation} \sum_{i=1}^kx^\downarrow_i\geq\sum_{i=1}^ky^\downarrow_i\;, \end{equation} for $k=1\ldots N-1$ and $\sum_ix_i=\sum_iy_i$. Speaking heuristically, if $x \succ y$, then $y$ is a flatter vector than $x$. A \textit{Schur convex} function is a function $f$ satisfying the implication $x\succ y\implies f(x)\geq f(y)$. A function is \textit{strictly} Schur convex if the inequality is strict when $x^\downarrow\neq y^\downarrow$. When the inequality is reversed the function is called \textit{Schur concave}. \begin{theorem}\label{avgentropybnd} Let $\mathcal{E}$ be an LMC. $\overline{S(\mathcal{E}(\ketbra{\psi}{\psi}))} \leq \log(d+1) - \frac{2}{d+1}\log 2$ with equality achievable if a SIC exists in dimension $d$. \end{theorem} \begin{proof} From Lemma \ref{lemma}, we know that the average maximal eigenvalue for the output of an arbitrary LMC given a pure state input is lower bounded by $\frac{2}{d+1}$. 
This implies \begin{equation}\label{avgmaj} \begin{split} \overline{\lambda}&\succ \left(\overline{\lambda}_{\rm max}, \frac{1-\overline{\lambda}_{\rm max}}{d-1}, \ldots,\frac{1-\overline{\lambda}_{\rm max}}{d-1}\right)\\ & \succ \left(\frac{2}{d+1},\frac{1}{d+1},\ldots,\frac{1}{d+1}\right)\;. \end{split} \end{equation} The Shannon entropy $H(P)=-\sum_iP_i\log P_i$ is a concave and Schur concave function of probability distributions. Furthermore, the von Neumann entropy of a density matrix is equal to the Shannon entropy of its eigenvalue spectrum. Using these facts we have \begin{equation} \begin{split} \overline{S(\mathcal{E}(\ketbra{\psi}{\psi}))}&=\overline{H\big(\lambda(\mathcal{E}(\ketbra{\psi}{\psi}))\big)}\\ &\leq H\!\left(\overline{\lambda(\mathcal{E}(\ketbra{\psi}{\psi}))}\right)\\ &\leq \log(d+1) - \frac{2}{d+1}\log 2\;. \end{split} \end{equation} If a SIC exists, taking $\mathcal{E}=\mathcal{E}_{\rm SIC}$ achieves this upper bound. \end{proof} Theorem \ref{avgentropybnd} would have been more forceful if the upper bound were saturated ``only if'' a SIC exists, but we were unable to demonstrate this property, and so we leave it as a conjecture: \begin{conjecture}\label{onlyif} Equality is achievable in the statement of Theorem \ref{avgentropybnd} only if a SIC exists in dimension $d$. \end{conjecture} \noindent We were, however, able to prove a strong SIC optimality result in the setting of bipartite systems, applicable for example to Bell-test scenarios. The \emph{entropy exchange} for a channel $\mathcal{E}$ upon input by state $\rho$ is defined~\cite{Nielsen:2010} to be the von Neumann entropy of the result of sending one half of a purification of $\rho$, $\ket{\Psi_\rho}$, through the channel: \begin{equation} S(\rho,\mathcal{E}) := S\big(I\otimes\mathcal{E}(\ketbra{\Psi_\rho}{\Psi_{\rho}})\big)\;. \end{equation} \begin{theorem}\label{entropyexchangethm} Let $\mathcal{E}$ be an LMC. Then $S\!\left(\frac{1}{d}I,\mathcal{E}\right) \leq \log d+\frac{d-1}{d}\log(d+1)$ with equality achievable iff a SIC exists in dimension $d$. \end{theorem} \begin{proof} The purification of the state $\frac{1}{d}I$ is the maximally entangled state $\ket{M\!E}:=\frac{1}{\sqrt{d}}\sum_i\ket{ii}$. Let $\lambda$ be the eigenvalues of $I\otimes\mathcal{E}(\ketbra{M\!E}{M\!E})$ arranged in nonincreasing order. We may lower bound the maximal eigenvalue as follows: \begin{equation}\label{maxbound} \begin{split} \lambda_{\rm max} &\geq\bra{M\!E}I\otimes \mathcal{E}(\ketbra{M\!E}{M\!E})\ket{M\!E}\\ &=\frac{1}{d^2}\sum_{ijkl}\bra{ii}I \otimes \mathcal{E}(\ketbra{jj}{kk})\ket{ll}\\ &=\frac{1}{d^2}\sum_{ijkl}\bra{ii}(\ketbra{j}{k} \otimes \mathcal{E}(\ketbra{j}{k}))\ket{ll}\\ &=\frac{1}{d^2}\sum_{il}\bra{i}\mathcal{E}(\ketbra{i}{l})\ket{l}\\ &=\frac{1}{d^2}\sum_{ijl}\bra{i}\sqrt{E_j}\ketbra{i}{l}\sqrt{E_j}\ket{l}\\ &=\frac{1}{d^2}\sum_j\left({\rm tr}\,\!\sqrt{E_j}\right)^2 \geq \frac{1}{d^2}\sum_je_j=\frac{1}{d}\;. \end{split} \end{equation} Thus, \begin{equation} \begin{split} \lambda &\succ \left(\lambda_{\rm max}, \frac{1-\lambda_{\rm max}}{d^2-1}, \ldots,\frac{1-\lambda_{\rm max}}{d^2-1}\right)\\ &\succ \left(\frac{1}{d},\frac{1}{d(d+1)},\ldots,\frac{1}{d(d+1)}\right)\;. \end{split} \end{equation} The upper bound now follows from the Schur concavity of von Neumann entropy. If a SIC exists, it is easy to verify that \begin{equation} I\otimes\mathcal{E}_{\rm SIC}(\ketbra{M\!E}{M\!E}) = \frac{\frac{1}{d}I\otimes I+\ketbra{M\!E}{M\!E}}{d+1} \end{equation} which saturates the upper bound. 
Von Neumann entropy is strictly Schur concave~\cite{Bosyk:2016}, so the upper bound is saturated iff $\lambda = \left(\frac{1}{d}, \frac{1}{d(d+1)}, \ldots, \frac{1}{d(d+1)} \right)$. Equation \eqref{maxbound} shows that $\ket{M\!E}$ is the maximal eigenstate and that $\{E_j\}$ is a rank-$1$ MIC in the same way as in Theorem \ref{existenceproof}. By the spectral decomposition, we may write \begin{equation} I\otimes\mathcal{E}(\ketbra{M\!E}{M\!E}) = \frac{1}{d}\ketbra{M\!E}{M\!E}+\frac{1}{d(d+1)}\sum_{i=2}^{d^2}P_i \end{equation} where $P_i$ are projectors into the other $d^2-1$ eigenstates. As the full set of projectors forms a resolution of the identity, we have \begin{equation} \sum_{i=2}^{d^2}P_i=I\otimes I-\ketbra{M\!E}{M\!E}\;, \end{equation} so \begin{equation} I\otimes\mathcal{E}(\ketbra{M\!E}{M\!E}) = \frac{\frac{1}{d}I\otimes I+\ketbra{M\!E}{M\!E}}{d+1}\;. \end{equation} It follows from \eqref{qsicproperty} in \hyperref[quasiSIC]{Appendix A} that \begin{equation}\label{MEidentity} \ketbra{M\!E}{M\!E} = \frac{d+1}{d^2}\sum_{i=1}^{d^2}Q^T_i\otimes Q_i-\frac{1}{d}I \otimes I\;, \end{equation} where the $Q_i$ are elements of a quasi-SIC. From the previous expression we now have \begin{equation}\label{spectral} I\otimes\mathcal{E}(\ketbra{M\!E}{M\!E})=\frac{1}{d^2}\sum_iQ^T_i\otimes Q_i\;. \end{equation} Applying $I\otimes\mathcal{E}$ directly to equation \eqref{MEidentity} gives us \begin{equation}\label{direct} \begin{split} I\otimes\mathcal{E}(\ketbra{M\!E}{M\!E}) &=\frac{d+1}{d^2}\sum_iQ^T_i\otimes\mathcal{E}(Q_i)-\frac{1}{d}I\otimes I\\ &=\frac{1}{d^2}\sum_iQ^T_i\otimes[(d+1)\mathcal{E}(Q_i)-I]\;, \end{split} \end{equation} where we used that every LMC is unital and that $\frac{1}{d}\sum_iQ^T_i=I$. Comparing equations \eqref{spectral} and \eqref{direct}, we may see that \begin{equation} Q_i=(d+1)\mathcal{E}(Q_i)-I \end{equation} by multiplying both sides by $\widetilde{Q}^T_j\otimes I$ and tracing over the first subsystem. The quasi-SICs form a basis for operator space, so it follows by linearity that \begin{equation} \mathcal{E}(\rho)=\frac{I+\rho}{d+1}\;, \end{equation} and so by Theorem \ref{existenceproof} we are done. \end{proof} \section{Conclusions} In prior works we have emphasized the importance of MICs as a special class of measurements. The considerations of this paper developed from the idea that MICs may naturally furnish important classes of quantum channels as well. We affirmed this intuition with the introduction of LMCs which enabled us to discover several new ways in which SICs occupy a position of optimality among all MICs, supposing they exist. The appearance of additional equivalences with SIC existence plays two important roles. First, it should aid those trying to prove the SIC existence conjecture in all finite dimensions, and second, to our minds, it suggests that LMCs are a more important family of quantum channels than has been realized. We hope this work will inspire more study of LMCs and other types of channels derived from MICs not investigated here. One example of such an alternative is a procedure where, when the agent implementing the channel applies the MIC, they \emph{reprepare} the measured system in such a way that they ascribe a fixed quantum state to it, the choice of new state being made based on the measurement outcome. 
The action of such a channel is \begin{equation} \mathcal{E}(\rho) = \sum_i ({\rm tr}\, \rho E_i) \sigma_i, \end{equation} where the states $\{\sigma_i\}$ are the new preparations applied in consequence to the measurement outcomes. Channels defined by a POVM and a set of repreparations are known as \emph{entanglement-breaking channels}~\cite{Ruskai:2003}. When the POVM is a MIC, we can speak of an \emph{entanglement-breaking MIC channel} (EBMC). EBMCs coincide with LMCs for rank-1 MICs and repreparations proportional to the MIC, but not in general. While earlier work already gives some indication that SIC channels are significant among EBMCs~\cite{DeBrota:2018a}, we suspect that there is much more to be discovered about EBMCs as a class. \section*{Acknowledgments} We gratefully acknowledge helpful conversations with Gabriela Barreto Lemos, Christopher A.\ Fuchs and Jacques Pienaar. Also we thank two anonymous referees for identifying points that required clarification and locating an error in our original proof of Lemma \ref{lemma}. This research was supported in part by the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. JBD was supported by grant FQXi-RFP-1811B of the Foundational Questions Institute. \section*{Postscript} Due to a breakdown of our university email system, it was not until after \textsl{Physical Review A} published this article that we became aware of the preprint ``Entanglement Breaking Rank'' by Pandey \textit{et al.}~\cite{Pandey:2018}. Their Corollary 3.3 is equivalent to our Theorem 1, albeit proved from a different starting point. They consider all channels having the same action as $\mathcal{E}_{\rm SIC}$ and ask when those channels can be achieved using only $d^2$ rank-1 Kraus operators. We consider channels defined by $d^2$ Kraus operators (of arbitrary rank) and ask when they can have the action of the SIC channel. We regret this oversight, and we commend their paper to the reader's attention. The silver lining is that we can now say the SIC problem has attracted sufficient interest that the literature is not easy to keep up with. Moreover, our attention having been called back to this paper after an interlude thinking about other aspects of SICs, we now believe that Conjecture 1 can be proven for the special case of unbiased LMCs in $d = 2$. We now sketch the argument here. Consider a MIC whose elements are constructed by taking the orbit of an operator under the action of a discrete group of unitaries. Such a MIC is known as \emph{group covariant} and is necessarily unbiased. If $\{\ket{\psi_j}\}$ is a set covariant with respect to the same group as the MIC, then the post-channel states $\{\mathcal{E}(\ketbra{\psi_j}{\psi_j})\}$ are unitarily equivalent and thus isospectral. We have the eigenvalue bound \begin{equation} \lambda_{\rm max}(\mathcal{E}(\ketbra{\psi_0}{\psi_0})) \geq \bra{\psi_0}\mathcal{E}(\ketbra{\psi_0}{\psi_0}) \ket{\psi_0}\;. \end{equation} In the case that the states $\{\ket{\psi_j}\}$ comprise a SIC and the LMC is unbiased and rank-1, that is, the MIC elements take the form $E_i=\frac{1}{d}\ketbra{\phi_i}{\phi_i}$, we can evaluate this bound by transferring the unitary transformations from the LMC elements to the SIC states: \begin{equation} \lambda_{\rm max}(\mathcal{E}(\ketbra{\psi_0}{\psi_0})) \geq \frac{1}{d} \sum_i |\braket{\phi_0}{\psi_i}|^4 = \frac{2}{d+1}\;. 
\end{equation} Supposing the entropic bound in Theorem 2 is saturated, then the MIC must be rank-1, and the average maximum eigenvalue is equal to $2/(d+1)$. If a SIC exists, then the discrete average of the maximum eigenvalue over the SIC-state inputs is equal to the continuous average over all pure states, because a SIC is a 2-design. Therefore, if a SIC exists and the entropic bound is saturated, then the maximum eigenvalue of each post-channel state for any SIC-state input is exactly $2/(d+1)$. In addition, the eigenvector of the post-channel state corresponding to this eigenvalue is the SIC-state input itself. Eigenvalue information is most helpful in $d = 2$. Knowing the maximum eigenvalue fixes the only other eigenvalue, and from the above, we have the complete eigendecomposition of the post-channel state for any SIC-state input. From here, we can essentially do quantum channel tomography, fixing by linearity the action of the channel. A careful study of Bloch-sphere geometry shows that an unbiased rank-1 MIC in $d = 2$ is necessarily group covariant, and in fact is unitarily equivalent to a MIC covariant under the Pauli group. Therefore, knowing that an LMC in $d = 2$ is unbiased and that the entropic bound is saturated, we know the MIC is group covariant, and the above argument applies. \appendix \section*{Appendix A}\label{quasiSIC} Here we define and construct the quasi-SICs which furnish the Kraus operators needed in Theorem \ref{existenceproof} and which were referenced in Theorem \ref{entropyexchangethm}. Although SIC existence is not assured, one may always form a \emph{quasi-SIC} in any finite dimension $d$. A quasi-SIC is a set of Hermitian operators obeying the same Hilbert--Schmidt inner product condition as the SIC projectors. As positivity is not demanded, it is relatively easy to construct a quasi-SIC as follows~\cite{Appleby:2017}. Start with an orthonormal basis for the Lie algebra $\mathfrak{su}(d)$ of traceless Hermitian operators. With the Hilbert--Schmidt inner product this space is a $(d^2-1)$-dimensional Euclidean space, so it is possible to construct a regular simplex $\{B_i\}$ consisting of $d^2$ normalized traceless Hermitian operators. In this case ${\rm tr}\, B_iB_j = \frac{-1}{d^2-1}$ when $i\neq j$. Then the operators \begin{equation} Q_i = \sqrt{\frac{d-1}{d}}B_i+\frac{1}{d}I \end{equation} form a quasi-SIC. It turns out that $A_i=\frac{1}{\sqrt{d}}Q_i$ give a set of Kraus operators such that \begin{equation} \mathcal{E}(\rho)=\sum_jA_j\rho A_j^\dag=\frac{I+\rho}{d+1}\;, \end{equation} or, more generally, for an arbitrary operator $X$, \begin{equation}\label{qsicaction} \mathcal{E}(X)=\frac{({\rm tr}\, X)I+X}{d+1}\;, \end{equation} that is, equivalent to the action of $\mathcal{E}_{\rm SIC}$. To see this, first observe from Corollary 1 in \cite{Appleby:2015} that \begin{equation}\label{qsicproperty} \begin{split} \frac{1}{d}\sum_iQ_i\otimes Q_i^T &= \frac{2}{d+1}P_{\rm sym}^{T_B}\\ &= \frac{1}{d+1}\left(I^{\otimes 2}+\sum_{kl}\ketbra{kk}{ll}\right), \end{split} \end{equation} where $T_B$ indicates the partial transpose over the second subsystem. 
Then, with the help of the vectorized notation for an operator $\kket{A}:=\sum_iA\otimes I\ket{i}\ket{i}$ and the identity $\kket{BAB}=B\otimes B^T\kket{A}$, we have \begin{equation} \begin{split} \kket{\mathcal{E}(X)} &= \frac{1}{d}\sum_iQ_i\otimes Q_i^T\kket{X}\\ &=\frac{1}{d+1}\left(I^{\otimes 2}+\sum_{kl}\ketbra{kk}{ll}\right)\kket{X}\\ &=\frac{1}{d+1}\left(\kket{X}+\sum_{klm}\ketbra{k}{l}X\ket{m}\otimes\ketbra{k}{l}I\ket{m}\right)\\ &=\frac{1}{d+1}\left(\kket{X}+\sum_{km}\bra{m}X\ket{m}\ket{k}\ket{k}\right)\\ &=\frac{1}{d+1}\left(\kket{X}+\kket{({\rm tr}\, X)I}\right)\\ &= \biggr\lvert\frac{({\rm tr}\, X)I+X}{d+1}\biggr\rangle\!\!\!\biggr\rangle\;, \end{split} \end{equation} which is equivalent to \eqref{qsicaction}. Sending $X=I$ through equation \eqref{qsicaction} reveals the identity \begin{equation}\label{qsicsquaresum} \frac{1}{d}\sum_iQ_i^2=I\;, \end{equation} which, since the quasi-SICs are Hermitian, is equivalent to the requirement that Kraus operators satisfy $\sum_iA_i^\dag A_i=I$. \section*{Appendix B}\label{dblyquasi} A doubly quasistochastic matrix is a matrix of real numbers whose rows and columns sum to $1$. If we assume that $\{E_j\}$ is an unbiased MIC, $E_i=\frac{1}{d}\rho_i$, we will now show that $U$ is furthermore doubly quasistochastic. The Gram matrix for the $K_i$ operators \eqref{quasikraus} is \begin{equation} {\rm tr}\, K_iK_j=(1/d-\gamma)\delta_{ij}+\gamma \end{equation} where \begin{equation} \gamma=\frac{d-1-(d-2)\beta+2\sqrt{(1-\beta)(1-\beta+d\beta)}}{d(d+1)}\;. \end{equation} Since $e_i=1/d={\rm tr}\, E_i={\rm tr}\,\sqrt{E_i}\sqrt{E_i}$, we have \begin{equation} \begin{split} \frac{1}{d}&=\sum_{jk}[U]_{ij}[U]_{ik}{\rm tr}\, K_jK_k\\ &=\sum_{jk}[U]_{ij}[U]_{ik}\big((1/d-\gamma)\delta_{jk}+\gamma\big)\\ &=(1/d-\gamma)\sum_{jk}[U]_{ij}[U]_{ik}\delta_{jk} + \gamma\left(\sum_j[U]_{ij}\right)^{\!2}\\ &=(1/d-\gamma)+\gamma\left(\sum_j[U]_{ij}\right)^{\!2}\;, \end{split} \end{equation} from which we obtain \begin{equation}\label{usum2} \sum_j[U]_{ij}=1\;. \end{equation} Now note that \begin{equation} {\rm tr}\, K_i=\frac{\sqrt{1-\beta+d\beta}+(d-1)\sqrt{1-\beta}}{d}\;. \end{equation} Tracing both sides of \eqref{remix2} reveals that ${\rm tr}\,\sqrt{E_i}={\rm tr}\, K_i$ is a constant. Corollary 3 from \cite{Appleby:2015} then asserts that \begin{equation} \begin{split} \sum_i\sqrt{E_i} &= \sum_j\left(\sum_i[U]_{ij}\right)K_j\\ &= \sqrt{d(1/d-\gamma)+d^3\gamma}I\;. \end{split} \end{equation} Summing equation \eqref{quasikraus} gives \begin{equation} \sum_iK_i=\left(\sqrt{1-\beta+d\beta}+(d-1)\sqrt{1-\beta}\right)I\;. \end{equation} As the $K_j$ form a linearly independent set, combining the previous two equations requires \begin{equation} \sum_i[U]_{ij}=\frac{\sqrt{1-d\gamma+d^3\gamma}}{\sqrt{1-\beta+d\beta} + (d-1)\sqrt{1-\beta}} = 1\;. \end{equation} Thus $U$ is doubly quasistochastic, as claimed. \end{document}
\begin{document} \title{Explaining Deep Learning Hidden Neuron Activations using Concept Induction} \begin{abstract} One of the current key challenges in Explainable AI is in correctly interpreting activations of hidden neurons. It seems evident that accurate interpretations thereof would provide insights into the question what a deep learning system has internally \emph{detected} as relevant on the input, thus lifting some of the black box character of deep learning systems. The state of the art on this front indicates that hidden node activations appear to be interpretable in a way that makes sense to humans, at least in some cases. Yet, systematic automated methods that would be able to first hypothesize an interpretation of hidden neuron activations, and then verify it, are mostly missing. In this paper, we provide such a method and demonstrate that it provides meaningful interpretations. It is based on using large-scale background knowledge -- a class hierarchy of approx. 2 million classes curated from the Wikipedia Concept Hierarchy -- together with a symbolic reasoning approach called \emph{concept induction} based on description logics that was originally developed for applications in the Semantic Web field. Our results show that we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer of a Convolutional Neural Network through a hypothesis and verification process. \end{abstract} \section{Introduction} \label{introduction} The origins of Artificial Intelligence trace back several decades ago, and AI has been successfully applied to multiple complex tasks such as image classification \cite {ramprasath2018image}, speech recognition \cite{graves2014towards}, language translation \cite{auli2013joint}, drug design \cite{segler2018generating}, treatment diagnosis \cite{choi2019artificial}, and climate sciences \cite{liu2016application}, as an instance for just a few. Artificially intelligent machines reach exceptional performance levels in learning to solve more and more complex computational problems by possessing the capabilities of learning, thinking, and adapting -- mimicking human behavior to some extent, making them crucial for future development. Despite their success in a wide variety of tasks, there is a general distrust of their results. Powerful AI machines particularly Deep Neural Networks, are hard to explain and are often referred to as "Black Boxes" simply because there are no clear human-understandable explanations as to why the network gave the particular output. Many cases have been reported; for example, In 2019 Apple co-founder Steve Wozniak accused Apple Card of gender discrimination, claiming that the card gave him a credit limit that was ten times higher than that of his wife, even though the couple shares all property.\footnote{https://worldline.com/en/home/knowledgehub/blog/2021/january/ever-heard-of-the-aI-black-box-problem.html}. In CEO image search, while 27\% of US CEOs were women, only 11\% of the top image results for “CEOs” were featured as women.\footnote{https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans} In continuation to the mentioned observation, the output of a network's classification can be altered by introducing Adversarial examples \cite{bau2020understanding}, and there are many more attack techniques. 
There is thus a need to understand the reasoning behind how a system behaves and generates an output in a human-interpretable way, especially since the popularity of these systems has grown to such an extent that they are responsible for decisions previously taken by human beings in safety-critical situations, for example in self-driving cars \cite{chen2017end} and in drug discovery and treatment recommendations \cite{rifaioglu2020deepscreen,hariri2021deep}. Explainable AI has been pursued for several years already, and the quest for efficient algorithms to generate human-understandable explanations has led to a significant number of contributions based on different approaches, including the summarization of internal units \cite{zhou2018interpreting,bau2020understanding}. Improvements in deep learning show that neurons in the hidden layer of a neural network can detect human-interpretable concepts that were not explicitly taught to the network, such as objects, parts, gender, context, sentiment, etc.\ \cite{bau2018identifying,karpathy2015visualizing,qi2017pointnet}.
In our approach, which we present in this paper, we make central use of \emph{concept induction} \cite{DBLP:journals/ml/LehmannH10}, which has been developed for use in the Semantic Web field and is based on deductive reasoning over description logics, i.e., over logics relevant to ontologies, knowledge graphs and generally the Semantic Web field \cite{DBLP:books/crc/Hitzler2010,DBLP:journals/cacm/Hitzler21}. In a nutshell -- and more details are given below -- a concept induction system accepts three inputs, a set of positive examples $P$, a set of negative examples $N$, and a knowledge base (or ontology) $K$, all expressed as description logic theories, and where we have $x$ occurring as instances (constants) in $K$ for all $x\in P\cup N$. It then returns description logic class expressions $E$ such that $K\models E(p)$ for all $p\in P$ and $K\not\models E(q)$ for all $q\in N$. If no such class expressions exist, then it returns approximations for $E$ together with a number of accuracy measures. In this paper, for scalability reasons, we use the heuristic concept induction system ECII \cite{DBLP:conf/aaai/SarkerH19} together with a background knowledge base that consists only of a class hierarchy, however with approximately 2 million classes, as presented in \cite{DBLP:conf/kgswc/SarkerSHZNMJRA20}. Given a hidden neuron, $P$ is a set of inputs to the deep learning system that activate the neuron, and $N$ is a set of inputs that do not activate the neuron. Inputs are annotated with classes from the background knowledge for concept induction, however these annotations and the background knowledge are not part of the input to the deep learning system. As we will see below, this approach is able to provide meaningful explanations for hidden neuron activation.
The rest of this paper is organized as follows. Section \ref{relatedWork} discusses relevant work in the field of generating explanations using knowledge graphs. Section \ref{researchMethod} presents our study design, and Section \ref{resultsDiscussion} discusses the results of our study along with the findings and their implications. Finally, Section \ref{conclusion} sums up the paper and proposes some possibilities for future research.
\section{Related Work} \label{relatedWork} Explainable AI has been intensively studied since the 1970s \cite{mueller2019explanation}, and a model's explainability can be understood in many ways: as being interpretable, understandable, justified, and evaluable.
One segment of explainable AI methods focuses on interpreting the inner workings of black box models, such as identifying input features by training explanation networks that generate human-readable explanations \cite{hendricks2016generating}, or training alternative models that summarize the behavior of a complex network \cite{ribeiro2016should}. Other approaches include the use of salience maps, where the explanations summarize the contribution of each pixel to predictions \cite{bach2015pixel}, visual cues \cite{xu2015show,selvaraju2017grad}, or counterfactuals \cite{byrne2019counterfactuals}. The literature demonstrates that combinations of neurons can encode meaningful and insightful information \cite{kim2018interpretability,bau2017network}. Justifying the result of a neural network requires a well-defined language that incorporates elements of reasoning over knowledge bases to create human-understandable, yet unbiased explanations \cite{doran2017does}.
Knowledge graphs and the structured web represent a valuable form of machine-readable, domain-specific knowledge; available connected datasets can serve as a knowledge base for an AI system to explain its decisions to its users in a better way. The Web Ontology Language (OWL) provides a basis for verbose descriptions of entities and their relationships through description logics \cite{allemang2011semantic}. Deep deductive reasoning of this kind can be described as the generation of complex description logic class expressions over a knowledge graph; it relies on rich concept hierarchies, which play an important role in generating human-readable, satisfactory explanations.
We briefly discuss some recent works combining deep networks with logical reasoning. In \cite{zhou2018interpretable,kim2018interpretability,zhou2018interpreting}, methods have been proposed which demonstrate that adding semantic annotations to label objects that activate neurons in the hidden layers of common CNN architectures provides human-readable explanations. Nonetheless, these approaches still fall short of producing deeper explanations generated over more expressive background knowledge. \cite{procko2022exploration} follows the effort of \cite{sarker2017explaining} by semi-automating the DL-Learner tool, which provides explanations for ML algorithms using semantic background knowledge. However, while DL-Learner is a very useful system that produces theoretically correct results, it has significant performance issues in some scenarios: a single run of DL-Learner can easily take over two hours, whereas our scenario easily necessitates thousands of such runs. The main motivation of the proposed work is to automate the assignment of human-interpretable explanations for the activations of neurons in the hidden (dense) layer of CNNs, using Wikipedia's rich class hierarchy of around 2 million classes together with the concept induction system ECII, which improves running time over DL-Learner by 1-2 orders of magnitude while maintaining accuracy of results.
\section{Research Method} \label{researchMethod} This work implements an approach for explaining the activation patterns of neurons in a hidden layer of a CNN, i.e., the dense layer in this case, using the Resnet50V2 architecture and the ECII concept induction algorithm for explanation generation. We also tested other architectures to achieve better accuracy and found that Resnet50V2 gives the highest accuracy. The following subsections discuss the steps followed for implementing the system in more detail.
\begin{figure*} \caption{Examples of images collected for neuron-5 from google using the lists of concepts for Case-1.} \caption{Images that didn't activate the neuron-5 for Case-1.} \caption{Images that activate the neuron-5 for Case-1.} \caption{Case - I} \label{pics:neuron5dataset_1} \label{pics:neuron5_1_ntActivated} \label{pics:neuron5_1_Activated} \end{figure*} \subsection{Training Convolutional Neural Network} \label{TrainingCNN} \subsubsection{Dataset} \label{dataset} 1) The ADE20K \cite{zhou2019semantic} semantic segmentation dataset from the Massachusetts Institute of Technology contains more than 27K scene-based images from the SUN and Places databases, extensively annotated with pixel-level objects and object part labels. There are 150 semantic categories including sky, road, grass, and discrete objects like person, car, and bed. The current version of the dataset contains the following: \begin{itemize} \item 25,574 for training and 2,000 for testing from 365 scenes. \item 707,868 unique object along with their WordNet definition and hierarchy. \item 193,238 parts of annotated objects and parts of parts. \item Polygon annotations with attributes, annotation time, and depth order. \end{itemize} We only considered the subset of scenes in the ADE20k Dataset; the classes that were considered for this work are ten classes with the highest number of images -- bathroom, bedroom, building facade, conference room, dining room, highway, kitchen, living room, skyscraper, and street. 2) For verification purposes of the activation pattern of each neuron in corresponding to identified concepts for that respective neuron, we used Google images -- simply because the system should be easy to use for any user. It should be able to detect concepts and give us the reasoning for its classification category of any random image from the largest crawling search engine. \begin{figure*} \caption{Examples of images collected for neuron-5 from google using the lists of concepts for Case-2.} \caption{Images that activate the neuron-5 for Case-2.} \caption{Images that didn't activate the neuron-5 for Case-2.} \caption{Case - II} \label{pics:neuron5dataset_2} \label{pics:neuron5_2_activations} \label{pics:neuron5_2_ntactivations} \end{figure*} \subsubsection {Tested Networks} \label{testednetworks} We analyzed many Convolutional neural network (CNN) architectures to achieve better and higher accuracy such as Vgg16 \cite{simonyan2014very}, InceptionV3 \cite{szegedy2016rethinking}; in Resnet we tried different versions like -- Resnet50, and Resnet50V2, Resnet101, Resnet152V2 architecture \cite{He2015,he2016identity}. Each neural network was fine-tuned with a dataset of 6187 images (training and validation set) of size 224*224 for 20 epochs to classify images into 10 scene categories using the ADE20K dataset. The optimization algorithm used was Adam, with a categorical cross-entropy loss function and a learning rate of 0.001. The accuracy achieved by each architecture along with validation accuracy is summarized in table \ref{accuracy_of_networks}. 
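The following TensorFlow/Keras sketch outlines how this fine-tuning, and the dense-layer activation read-out used in the subsequent subsections, can be set up. It is a minimal illustration rather than the exact training code: the classification head (global pooling, a 64-unit dense layer, dropout, 10-way softmax) and the dropout rate are assumptions based on the description above, and random tensors stand in for the ADE20K images.
\begin{verbatim}
import tensorflow as tf

# Backbone plus an assumed classification head matching the description in the text.
base = tf.keras.applications.ResNet50V2(include_top=False, weights="imagenet",
                                        input_shape=(224, 224, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
dense = tf.keras.layers.Dense(64, activation="relu", name="dense_64")(x)
x = tf.keras.layers.Dropout(0.5)(dense)           # dropout rate assumed
out = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(base.input, out)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Random tensors stand in for the 6187 ADE20K training images (224x224, 10 classes);
# the actual training runs for 20 epochs.
x_train = tf.random.uniform((8, 224, 224, 3))
y_train = tf.one_hot(tf.random.uniform((8,), maxval=10, dtype=tf.int32), depth=10)
model.fit(x_train, y_train, epochs=1)

# Dense-layer activation read-out: one 64-dimensional vector per test image.
extractor = tf.keras.Model(model.input, model.get_layer("dense_64").output)
test_images = tf.random.uniform((16, 224, 224, 3))   # stand-in for the 1370 test images
acts = extractor.predict(test_images)                # shape (num_images, 64)

# Candidate neurons: more than 50% of the images give a strictly positive activation.
candidates = [n for n in range(64) if (acts[:, n] > 0).sum() > acts.shape[0] / 2]
\end{verbatim}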
\begin{table} \centering \begin{tabular}{lrr} \toprule Architectures & Training acc & Validation acc \\ \midrule Vgg16 & 80.05\% & 46.22\% \\ InceptionV3 & 89.02\% & 51.43\% \\ Resnet50 & 35.01\% & 26.56\% \\ \textbf{Resnet50V2} &\textbf{92.47\%} &\textbf{87.50\%} \\ Resnet101 & 53.97\% & 53.57\% \\ Resnet152V2 & 94.53\% & 51.04\% \\ \bottomrule \end{tabular} \caption{Performance of different architectures on the ADE20K dataset} \label{accuracy_of_networks} \end{table}
Clearly, ResNet50V2 achieved the highest accuracy -- 92.47\% on the training dataset and 87.50\% on the validation dataset -- proving to be the best network of all.
\subsubsection {Activations of Trained Network} \label{activationsTrainedNw} We tested the Resnet50V2 with 1370 images and retrieved the activations of the dense layer, i.e., the layer before the output layer. Technically, the layer before the output layer is the dropout layer, but we chose not to analyze the activations of the dropout layer, since the dropout layer is a mask that negates the contribution of some neurons towards the next layer and leaves all others unmodified. For each of the 1370 images, the dense layer yields the activations of 64 neurons, which contribute to the final decision of classifying the image as one of the 10 classes.
\subsubsection {Candidate Set of Neurons} \label{candidadateSet} Next, out of the 64 neurons, we chose a candidate set of neurons based on the following criterion -- only neurons having more than 50\% of their activation values $>$ 0, i.e., a neuron should have at least 680 values (about half of the 1370 total images) that are greater than 0. Choosing such neurons simply means that these are frequently activated nodes, which are a good choice to analyze before exploring any other possibilities. Following this idea, the neurons selected for analysis were neurons 4, 5, 6, 7, 9, 11, 12, 13, 15, 16, 22, 23, 27, 29, 34, 35, 36, 37, 39, 45, 52, 54, 55, 56, 58, 59, 60, 62, 63.
\subsubsection {ECII - Preliminaries} \label{eciiprelim} As mentioned, concept induction is an explanation generation algorithm over description logics which takes three inputs -- a positive set of images, a negative set of images, and a knowledge base. For our approach, we use ECII, which improves on DL-Learner's running time by 1-2 orders of magnitude. For a given neuron, the positive set consists of images that activate the neuron, and the negative set of images that do not activate it. How do we decide that an image activates a neuron, and is therefore positive, or does not, and is therefore negative? To decide on activation, we considered a threshold on the activation values, with the following three different criteria:
\begin{itemize} \item CASE-I -- the positive set will have images with $>=$ 50\% of the highest activation value; say, if the highest activation value for neuron\_x is 12, then all images (out of the 1370 in total) having an activation value of 6 or more form the positive set, and the negative set will have the images that were $<$ 50\%, i.e., all the images with activation less than 6, including 0. \item CASE-II -- the positive set will have images with $>=$ 50\% of the highest activation value, and the negative set the images with activation exactly zero, i.e., excluding images with 0 $<$ activation $<$ 50\% of the highest value.
\item CASE-III -- the positive set contains all images with activation $>$ 0, i.e., every image whose activation value lies in the range 0 $<$ activation $\leq$ highest value, and the negative set contains only the images with activation equal to zero.
\end{itemize}
For the knowledge base, we mapped all 1370 images to Wikipedia's rich class hierarchy of 2 million classes.
\subsubsection{ECII - Analysis}
Now that we have a knowledge base and a set of positive and negative images for each of the three cases, we run ECII with these three inputs for every chosen candidate neuron and every case defined above. ECII returns a list of class expressions that best describe the positive set of images while excluding the negative images, sorted by coverage score. The coverage score is defined by the formula
$$\mathrm{coverage}(E) = \frac{|P\cap Z_1| + |N\cap Z_2|}{|P\cup N|},$$
where
\begin{center}
$Z_1 = \{p\in P:\ K\models E(p)\}$, \\
$Z_2 = \{n\in N:\ K\not\models E(n)\}$, \\
$P$ is the set of all positive instances, \\
$N$ is the set of all negative instances, and \\
$K$ is the knowledge base provided to ECII as input.
\end{center}
We chose to look at the first 50 expressions returned by ECII in text format, simply because the full list of expressions contains many duplicate concepts.
\begin{example}
An example of the explanations ECII came up with looks like\\
solution 1: $\exists$ imageContains.((WN\_Table) $\sqcap$ (Bed))\\
solution 3: $\exists$ imageContains.(:WN\_Table)
\end{example}
indicating the presence of a table and a bed in images from the positive set. Since solutions can contain overlapping concepts, we collected all distinctive keywords as concepts (in this case -- Table and Bed) from the list, resulting in a reduced list of concepts. This reduced list of concepts for each neuron gives us an intuition of what contributes towards the activation of the respective neuron.
\begin{example}
As an example, consider the list of class expressions returned by ECII for neuron 5 and its corresponding reduced list of concepts:\\
solution 1: $\exists$ :imageContains.(:WN\_Table)\\ solution 2: $\exists$ :imageContains.(:Floor)\\ solution 3: $\exists$ :imageContains.(:WN\_Floor)\\ solution 4: $\exists$ :imageContains.(:WN\_Flooring)\\ solution 5: $\exists$ :imageContains.(:Window)\\ solution 6: $\exists$ :imageContains.(:WN\_Window)\\ solution 7: $\exists$ :imageContains.((:WN\_Flooring) $\sqcap$ (:Window))\\ solution 8: $\exists$ :imageContains.((:Window) $\sqcap$ (:Floor))\\ solution 9: $\exists$ :imageContains.((:WN\_Flooring) $\sqcap$ (:Floor))\\ solution 10: $\exists$ :imageContains.((:Ceiling) $\sqcap$ (:WN\_Table))\\ solution 11: $\exists$ :imageContains.(:Ceiling)\\ solution 12: $\exists$ :imageContains.(:WN\_Ceiling)\\ solution 13: $\exists$ :imageContains.(:WN\_Windowpane)\\ solution 14: $\exists$ :imageContains.(:WN\_Leg)\\ solution 15: $\exists$ :imageContains.(:Picture)\\ solution 16: $\exists$ :imageContains.(:WN\_Painting)\\ solution 17: $\exists$ :imageContains.(:WN\_Picture)\\ solution 18: $\exists$ :imageContains.(:Leg)\\ solution 19: $\exists$ :imageContains.((:WN\_Table) $\sqcap$ (:Leg))\\ solution 20: $\exists$ :imageContains.((:WN\_Painting) $\sqcap$ (:WN\_Ceiling))\\ solution 21: $\exists$ :imageContains.((:WN\_Leg) $\sqcap$ (:WN\_Window))\\ solution 22: $\exists$ :imageContains.(:Chair)\\ solution 23: $\exists$ :imageContains.(:WN\_Chair)\\ solution 24: $\exists$ :imageContains.(:WN\_Lamp)\\ solution 25: $\exists$ :imageContains.((:WN\_Lamp) $\sqcap$ (:WN\_Floor))\\ solution 26: $\exists$ :imageContains.((:WN\_Windowpane) $\sqcap$ (:WN\_Painting))\\ solution 27: $\exists$ :imageContains.(:Back)\\ solution 28: $\exists$ :imageContains.(:WN\_Back)\\ solution 29: $\exists$ :imageContains.((:Back) $\sqcap$ (:WN\_Flooring))\\ solution 30: $\exists$ :imageContains.((:WN\_Floor) $\sqcap$ (:WN\_Back))\\ solution 31: $\exists$ :imageContains.((:WN\_Windowpane) $\sqcap$ (:WN\_Ceiling))\\ solution 32: $\exists$ :imageContains.((:Ceiling) $\sqcap$ (:Leg))\\ solution 33: $\exists$ :imageContains.((:Floor) $\sqcap$ (:Table))\\ solution 34: $\exists$ :imageContains.(:Table)\\ solution 35: $\exists$ :imageContains.((:WN\_Back) $\sqcap$ (:WN\_Windowpane))\\ solution 36: $\exists$ :imageContains.((:Chair) $\sqcap$ (:Ceiling))\\ solution 37: $\exists$ :imageContains.(:Arm)\\ solution 38: $\exists$ :imageContains.(:WN\_Arm)\\ solution 39: $\exists$ :imageContains.((:WN\_Window) $\sqcap$ (:WN\_Lamp))\\ solution 40: $\exists$ :imageContains.((:Back) $\sqcap$ (:Window))\\ solution 41: $\exists$ :imageContains.((:WN\_Floor) $\sqcap$ (:WN\_Windowpane))\\ solution 42: $\exists$ :imageContains.((:Back) $\sqcap$ (:Floor))\\ solution 43: $\exists$ :imageContains.((:WN\_Window) $\sqcap$ (:WN\_Floor))\\ solution 44: $\exists$ :imageContains.((:Chair) $\sqcap$ (:WN\_Table))\\ solution 45: $\exists$ :imageContains.(:Top)\\ solution 46: $\exists$ :imageContains.(:WN\_Top)\\ solution 47: $\exists$ :imageContains.((:Table) $\sqcap$ (:WN\_Chair))\\ solution 48: $\exists$ :imageContains.((:Floor) $\sqcap$ (:WN\_Chair))\\ solution 49: $\exists$ :imageContains.((:Leg) $\sqcap$ (:Picture))\\ solution 50: $\exists$ :imageContains.(:WN\_Cabinet)\\ And after eliminating duplicate concepts from the above class expressions, we get a reduced list of concepts as -- \textbf{arm, back, cabinet, ceiling, chair, floor, flooring, lamp, leg, painting, picture, table, top, window, windowpane} \end{example} The next step would be to verify the activation of neurons by collecting some more data that is more generic and could serve as solid 
verification, and testing it through the model to see whether we obtain the same activations for the neurons. To this end, we collected Google images corresponding to the resulting keyword list for each neuron; in this case, images of arm, back, cabinet, ceiling, chair, floor, flooring, lamp, leg, painting, picture, table, top, window, and windowpane were collected from the Google search engine.
\subsubsection{Collection of Google Images}
We used a Python script to download Google images for each keyword in the list. For each keyword, it collects the first 200 images that appear in the Google search. After that, we manually checked for duplicates and removed them; after cleaning, at least 140 images remain for each keyword. For example, for the keyword `base', the Google search returns all kinds of images, from bed frames to military bases, while a search for a keyword like `edifice' mostly returns images of a particular watch model named Edifice, which is not what we wanted. Nevertheless, we take the Google results as they appear and evaluate our model on them.
\subsubsection{Activations on Google Images}
Once the Google dataset is ready for all neurons (29 in total, as the chosen candidate set), we pass this new dataset for each neuron through our trained model -- Resnet50V2 -- and retrieve the activations of the dense layer.
\begin{figure*} \caption{Examples of images collected for neuron-5 from Google using the list of concepts for Case-3.} \caption{Images that activate neuron-5 for Case-3.} \caption{Images that did not activate neuron-5 for Case-3.} \caption{Case - III} \label{pics:neuron5dataset_3} \label{pics:neuron5_3Activations} \label{pics:neuron5_3ntActivations} \end{figure*}
\section{Results and Discussion} \label{resultsDiscussion}
We analyzed the three different criteria introduced in Section \ref{eciiprelim} (ECII -- Preliminaries) to determine which one best captures the activation of a neuron:
\begin{itemize}
\item CASE-I -- the positive set contains the images with activation $\geq$ 50\% of the highest activation value, and the negative set contains the images with activation $<$ 50\%.
\item CASE-II -- the positive set contains the images with activation $\geq$ 50\% of the highest activation value, and the negative set contains only the images with activation equal to zero.
\item CASE-III -- the positive set contains all images with activation $>$ 0, and the negative set contains only the images with activation equal to zero.
\end{itemize}
ECII was run for every candidate neuron and each case in turn, with Wikipedia as the knowledge base; in total we performed $29\times 3 = 87$ ECII analyses. From each analysis we obtained a list of class expressions sorted by coverage score for the respective neuron, inspected the first 50 expressions, and reduced them to a shorter list by eliminating duplicate keywords. These keywords indicate the concepts whose presence activates the neuron. Table \ref{tab:listofConcepts} lists the concepts obtained from ECII for each case for neuron number 5.
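To make the construction of the ECII inputs concrete, the following sketch shows how the positive and negative image sets can be derived from the stored activation values of a single neuron (the 50\% threshold and the zero baseline follow the three cases above; the function and variable names are our own illustrative choices, not the script actually used).
\begin{verbatim}
def split_pos_neg(activations, case):
    """Partition images into ECII positive/negative sets for one neuron.

    activations: dict mapping an image identifier to the activation value
    of the target neuron (1370 entries in our setting).
    case: "I", "II" or "III", following the three criteria above.
    """
    highest = max(activations.values())
    half = 0.5 * highest

    if case == "I":
        pos = [img for img, a in activations.items() if a >= half]
        neg = [img for img, a in activations.items() if a < half]
    elif case == "II":
        pos = [img for img, a in activations.items() if a >= half]
        neg = [img for img, a in activations.items() if a == 0]
    else:  # CASE-III
        pos = [img for img, a in activations.items() if a > 0]
        neg = [img for img, a in activations.items() if a == 0]
    return pos, neg
\end{verbatim}
Note that for Case II the images with an activation strictly between zero and the 50\% threshold are deliberately placed in neither set, exactly as described above.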
\begin{table}[t] \centering \begin{tabular}{lll} \toprule Case-I & Case-II & Case-III \\ \midrule arm & arm & cabinet\\ back & back & ceiling \\ cabinet & cabinet & chair \\ ceiling & ceiling & curtain\\ chair & chair & cushion\\ floor & floor & drapery\\ flooring & flooring & floor\\ lamp & lamp & flooring\\ leg & leg & lamp\\ painting & painting & leg\\ picture & picture & painting\\ table & table & picture\\ top & wall & shade\\ window & windowpane & table\\ windowpane & &table\_lamp\\ & & wall\\ & & windowpane\\ \bottomrule \end{tabular} \caption{List of concepts activating neuron\_5 for Cases I, II, and III.} \label{tab:listofConcepts} \end{table}
To verify whether these concepts actually play a role in the neurons' contribution to the output of the network, we collected Google images corresponding to the reduced list of concepts for each neuron. In the case of neuron number 5, CASE--I, Google images were collected for \textbf{arm, back, cabinet, ceiling, chair, floor, flooring, lamp, leg, painting, picture, table, top, window, windowpane.} For CASE--II, Google images were collected for \textbf{arm, back, cabinet, ceiling, chair, floor, flooring, lamp, leg, painting, picture, table, wall, windowpane.} For CASE--III, Google images were collected for \textbf{cabinet, ceiling, chair, curtain, cushion, drapery, floor, flooring, lamp, leg, painting, picture, shade, table, table\_lamp, wall, windowpane.} Around 200 images were collected for each concept in the list, making a total of around 4000--5000 images for each neuron and case; in total, roughly $4000\times 87 = 348{,}000$ images were collected. The new Google image dataset was divided in an 80--20 ratio; 80\% of the images were passed through the trained model for verification, and the activations of the dense layer (the layer before the output layer) were analyzed for each neuron. In total there were 87 sets of dense-layer activations; each set contains the values of all 64 neurons, and we only inspect the activation value of the neuron under consideration. The activation percentage for each neuron is summarized case-wise in Table \ref{activationGoogle}. Figures \ref{pics:neuron5dataset_1}, \ref{pics:neuron5dataset_2} and \ref{pics:neuron5dataset_3} show examples of the Google image dataset collected for neuron\_5 in each case, along with images that activated the neuron and images that did not.
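The activation test itself can be sketched as follows. This is a minimal illustration assuming a trained Keras model whose 64-unit dense layer is named \texttt{dense\_64}, as in the sketch of Section \ref{testednetworks}; counting an image as activating the neuron when its activation is strictly positive is likewise an assumption about the exact threshold used.
\begin{verbatim}
import numpy as np
import tensorflow as tf

def activation_percentage(model, images, neuron_index,
                          layer_name="dense_64"):
    """Percentage of images whose target-neuron activation is positive.

    images: preprocessed image batch of shape (N, 224, 224, 3).
    neuron_index: index (0..63) of the dense-layer neuron of interest.
    """
    # Sub-model that outputs the dense-layer activations instead of the
    # final softmax probabilities.
    feature_model = tf.keras.Model(
        inputs=model.input,
        outputs=model.get_layer(layer_name).output)

    dense_acts = feature_model.predict(images)      # shape (N, 64)
    activated = dense_acts[:, neuron_index] > 0     # one boolean per image
    return 100.0 * float(np.mean(activated))
\end{verbatim}
Running such a helper over the Google images collected for a given neuron and case yields percentages of the kind reported in Table \ref{activationGoogle}.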
\begin{table} \centering \begin{tabular}{lrrr} \toprule & Case-I & Case-II & Case-III \\ \midrule neuron4 &100\% &100\% &100\%\\ neuron5 &35.01\% &36.84\% &38.38\%\\ neuron6 &0.44\% &0.40\% &0.24\%\\ neuron7 &6.31\% &7.48\% &5.71\%\\ neuron9 &99.90\% &100\% &99.97\%\\ neuron11 &99.00\% &99.00\% &99.00\%\\ neuron12 &95.20\% &95.20\% &95.20\%\\ neuron13 &0.05\% &0.06\% &0.05\%\\ neuron15 &99.93\% &99.96\% &100\%\\ neuron16 &99.94\% &99.97\% &99.97\%\\ neuron22 &37.32\% &26.00\% &26.24\%\\ neuron23 &100\% &100\% &100\%\\ neuron27 &99.64\% &99.65\% &99.60\%\\ neuron29 &0.67\% &0.67\% &0.67\%\\ neuron34 &56.46\% &56.46\% &57.58\%\\ neuron35 &16.32\% &16.31\% &9.25\%\\ neuron36 &0\% &0\% &0\%\\ neuron37 &0\% &0\% &0\%\\ neuron39 &0.24\% &0.20\% &0.63\%\\ neuron45 &0\% &0.09\% &0\%\\ neuron52 &0.18\% &0.24\% &0.17\%\\ neuron54 &0\% &0\% &0\%\\ neuron55 &53.81\% &47.88\% &38.36\%\\ neuron56 &4.16\% &4.06\% &2.22\%\\ neuron58 &1.80\% &1.80\% &1.68\%\\ neuron59 &0\% &0\% &0\%\\ neuron60 &100\% &99.96\% &100\%\\ neuron62 &100\% &100\% &100\%\\ neuron63 &100\% &100\% &100\%\\ \bottomrule \end{tabular} \caption{Activation percentage for each neuron with Google Images for all three cases.} \label{activationGoogle} \end{table} Some observations from the table -- \begin{itemize} \item 11 neurons -- neuron number 4, 9, 11, 12, 15, 16, 23, 27, 60, 62, and 63 got activated by more than 90\% activations in all the three cases. \item 10 neurons -- neuron numbers 6, 13, 29, 36, 37, 39, 45, 52, 54, 59 were below 1\% activations in all three cases. \item the rest activations are in the range of 1 -- 56.52\%. \end{itemize} We can say that the criteria we chose for deciding the activation for the positive and negative set of images -- as two inputs for ECII, doesn't have much impact on the activation percentage of the neurons as there is a slight difference in the percentage value of Ist, IInd and IIIrd Case. Table \ref{activationGoogle_eval} shows the evaluation of 29 neurons for the remaining 20\% of the Google Image Dataset. The activation percentage for each neuron is listed for all three cases. At this point, we can say that by our hypothesis and our verification process -- neurons get activated by the presence of concepts and concepts plays a role in deciding the output given by the network. 
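The held-out evaluation reported in Table \ref{activationGoogle_eval} corresponds to the following sketch, which reuses the \texttt{activation\_percentage} helper from above (the fixed random seed and the use of scikit-learn's splitter are our own illustrative assumptions, not necessarily the tooling actually used).
\begin{verbatim}
from sklearn.model_selection import train_test_split

def split_and_evaluate(model, google_images, neuron_index):
    """Evaluate one neuron on an 80/20 split of its Google-image dataset.

    google_images: preprocessed image array of shape (N, 224, 224, 3)
    collected for this neuron's concept list.
    Returns the activation percentages on the 80% verification part and
    on the held-out 20% evaluation part.
    """
    verify_imgs, eval_imgs = train_test_split(
        google_images, test_size=0.2, random_state=0)
    return (activation_percentage(model, verify_imgs, neuron_index),
            activation_percentage(model, eval_imgs, neuron_index))
\end{verbatim}
Keeping the two splits fixed in this way allows the percentages in Tables \ref{activationGoogle} and \ref{activationGoogle_eval} to be compared directly, neuron by neuron.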
\begin{table} \centering \begin{tabular}{lrrr} \toprule & Case-I & Case-II & Case-III \\ \midrule neuron4 &100\% &100\% &100\%\\ neuron5 &34.49\% &37.79\% &38.10\%\\ neuron6 &0.5\% &0.53\% &0.19\%\\ neuron7 &0.14\% &0.19\% &0.20\%\\ neuron9 &100\% &100\% &100\%\\ neuron11 &98.83\% &98.83\% &98.83\%\\ neuron12 &94\% &94\% &94\%\\ neuron13 &0.20\% &0.22\% &0.20\%\\ neuron15 &100\% &99.85\% &100\%\\ neuron16 &100\% &100\% &100\%\\ neuron22 &35.13\% &23.06\% &25.62\%\\ neuron23 &100\% &100\% &100\%\\ neuron27 &99.74\% &99.45\% &99.77\%\\ neuron29 &1 \% &1\% &1\%\\ neuron34 &56.74\% &56.74\% &58.17\%\\ neuron35 &14.29\% &17.62\% &9.55\%\\ neuron36 &0.14\% &0.14\% &0.22\%\\ neuron37 &0.18\% &0.20\% &0.13\%\\ neuron39 &0.58\% &0.39\% &0.5\%\\ neuron45 &0.12\% &0.27\% &0.19\%\\ neuron52 &0.36\% &0.19\% &0.33\%\\ neuron54 &0.19\% &0.22\% &0.12\%\\ neuron55 &55.37\% &46.27\% &35.39\%\\ neuron56 &4.82\% &4.32\% &2.17\%\\ neuron58 &1.33\% &1.33\% &1.40\%\\ neuron59 &0.12\% &0.14\% &0.22\%\\ neuron60 &99.88\% &100\% &99.84\%\\ neuron62 &100\% &100\% &100\%\\ neuron63 &100\% &100\% &100\%\\ \bottomrule \end{tabular} \caption{Activation percentage after doing evaluation for each neuron with Google Images for all three cases.} \label{activationGoogle_eval} \end{table} \section{Conclusion and Future Work} \label{conclusion} This paper is an effort toward recognizing the activation pattern of neurons in the hidden layer of CNN architecture with the presence of abstract concepts. A novel approach using ECII as an explanation generation algorithm and Wikipedia as background knowledge was shown to quantify how well a concept is recognized across the latest convolutional layer (specifically the dense layer) of a CNN. Through our verification and evaluation using Google Images, we have also reported on promising activation percentages to support our hypothesis. Future work will incorporate the studying of the remaining neurons and will study the effect of the different thresholds for activation of the neuron. We will need to automate the whole process of getting the human-understandable explanation for the output of the network; given the classification of the network (output of the network) as an input, it should output the activation concepts for the neurons to limit the human-intervention and explain the decision of the network efficiently. \noindent\emph{Acknowledgement.} This work was supported by the U.S. Department of Commerce, National Science Foundation, under award number 2033521. \end{document}
\begin{document}
\title{Regional topology and approximative solutions of difference and differential equations}
\begin{abstract} We introduce a topology, which we call the regional topology, on the space of all real functions on a given locally compact metric space. Next we obtain new versions of Schauder's fixed point theorem and Ascoli's theorem. We use these theorems and the properties of the iterated remainder operator to establish conditions under which there exist solutions, with prescribed asymptotic behavior, of some difference and differential equations. \\ {\bf Key words:} regional topology, difference equation, differential equation, remainder operator, approximative solution, prescribed asymptotic behavior, Schauder's theorem, Ascoli's theorem.\\ {\bf AMS Subject Classification:} $39A10$ \end{abstract}
\section{Introduction}
Let $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$ denote the set of positive integers, the set of all integers and the set of real numbers, respectively. Assume \[ m\in\mathbb{N}, \quad t_0\in[0,\infty), \quad I=[t_0,\infty). \] In this paper we consider difference equations of the form \begin{equation}\label{E1}\tag{E1} \Delta^mx_n=a_nf(n,x_n)+b_n \end{equation} \[ n\in\mathbb{N},\quad a_n,\ b_n \in\mathbb{R},\quad f:\mathbb{N}\times\mathbb{R}\to\mathbb{R}, \] and differential equations of the form \begin{equation}\label{E2}\tag{E2} x^{(m)}(t)=a(t)f(t,x(t))+b(t) \end{equation} \[ t\in I,\quad a(t),\ b(t)\in\mathbb{R},\quad f:I\times\mathbb{R}\to\mathbb{R}. \] We are interested in the existence of solutions with prescribed asymptotic behavior. More precisely, we establish conditions under which, for a given $\alpha\in(-\infty,0]$ and a solution $y$ of the equation $\Delta^my=b$, there exists a solution $x$ of \eqref{E1} such that $x=y+\mathrm{o}(n^\alpha)$. Analogously, we establish conditions under which, for a given $\alpha\in(-\infty,0]$ and a solution $y$ of the equation $y^{(m)}=b$, there exists a solution $x$ of \eqref{E2} such that $x=y+\mathrm{o}(t^\alpha)$. We obtain these results using a new version of the Schauder fixed point theorem and a new version of the Ascoli theorem. First, we introduce the notions of regional norm and regional topology. In the case of function spaces, the regional topology is the topology of uniform convergence. This topology is nonlinear, but it is almost linear; see Remark \ref{top 03}. Using this topology we can state our version of the Schauder theorem. Next, in function spaces, we introduce the notions of families of functions that are homogeneous at infinity and stable at infinity. The first notion generalizes the notion of equiconvergence at infinity; see Remark \ref{homogeneous} and Remark \ref{equiconvergence}. We use the notion of homogeneity at infinity to state our version of the Ascoli theorem. Our approach to the study of the existence of solutions with prescribed asymptotic behavior is based on the iterated remainder operator. In the discrete case, the properties of this operator were presented in \cite{J Migda 2010 a}, \cite{J Migda 2014 a} and \cite{J Migda 2014 c}. We establish the basic properties of this operator in the continuous case in the first part of Section 5. If $y$ is a sequence with known properties and $x$ is a solution of \eqref{E1} such that $x=y+\mathrm{o}(1)$, then we say that $x$ is a solution with prescribed asymptotic behavior. Writing the equality $x=y+\mathrm{o}(1)$ in the form $y=x+\mathrm{o}(1)$, we may say that $y$ is an approximative solution of \eqref{E1}.
In Section 4, using the iterated remainder operator we establish conditions under which a given solution $y$ of the equation $\Delta^my=b$ is an approximative solution of \eqref{E1}. Moreover, the technique of iterated remainder operator allow us to change $\mathrm{o}(1)$ by $\mathrm{o}(n^\alpha)$ for a given nonpositive real $\alpha$. Hence we obtain an approximative solution which better approximates the solution $x$. In Section 5, we obtain an analogous result in the continuous case. In Section 6, we give some additional remarks on the regional topology. This paper is a continuation of a cycle of papers \cite{MJ Migda 2001}-\cite{J Migda 2014 b}. Our studies, in continuous case, were inspired by papers \cite{Ehrnstrom, Lipovan, Mingarelli, Mustafa 2006} (devoted to asymptotically linear solutions) and by papers \cite{Hallam, Kong, Philos 2004, Trench 1984} (devoted to asymptotically polynomial solutions). \section{Notation and terminology} If $p,k\in\mathbb{Z}$, $p\leq k$, then $\mathbb{N}_p$, $\mathbb{N}(p,k)$ denote the sets defined by \[ \mathbb{N}_p=\{p,p+1,\dots\}, \qquad \mathbb{N}(p,k)=\{p,p+1,\dots,k\}. \] Let $X$ be a set. We denote by \[ \mathrm{F}(X), \quad \mathrm{F}_b(X) \] the space of all functions $f:X\to\mathbb{R}$ and the space of all bounded functions $f\in\mathrm{F}(X)$ respectively. Let $f,g\in\mathrm{F}(X)$, $F\subset\mathrm{F}(X)$. Then \[ |f|, \quad f+g, \quad f-g, \quad fg \] denotes the functions defined by a standard way. If $Z\subset X$, then $f|Z$ denotes the restriction $f|Z:Z\to\mathbb{R}$. Moreover, \[ F-F=\{f-g:\ f,g\in F\}, \qquad F+g=\{f+g:\ f\in F\}. \] We say that $F$ is pointwise bounded if for any $t\in X$ the set $F(t)=\{f(t):\ f\in F\}$ is bounded. If $X$ is a topological space, then $\mathrm{C}(X)$ denotes the space of all continuous functions $f\in\mathrm{F}(X)$. \\ Assume $X$ is a locally compact metric space. A function $f\in\mathrm{F}(X)$ is called vanishing at infinity if for any $\varepsilon>0$ there exists a compact subset $Z$ of $X$ such that $|f(t)|<\varepsilon$ for any $t\notin Z$. The space of all vanishing at infinity functions $f\in\mathrm{F}(X)$ we denote by $\mathrm{F}_0(X)$. Moreover \[ \mathrm{C}_0(X)=\mathrm{C}(X)\cap\mathrm{F}_0(X). \] Note that any continuous and vanishing at infinity function is bounded. \\ Assume $p\in\mathbb{N}$, $t_0\in[0,\infty)$, $I=[t_0,\infty)$. Then \[ \mathrm{F}_0(\mathbb{N}_p)=\{x\in\mathrm{F}(\mathbb{N}_p):\ \lim_{n\to\infty}x_n=0\}, \quad \mathrm{F}_0(I)=\{f\in\mathrm{F}(I):\ \lim_{t\to\infty}f(t)=0\}. \] Moreover, for $\alpha\in\mathbb{R}$ we use the following notation \[ \mathrm{o}(n^\alpha)=\{x\in\mathrm{F}(\mathbb{N}_p):\ \lim_{n\to\infty}x_nn^{-\alpha}=0\}, \qquad \mathrm{o}(t^\alpha)=\{f\in\mathrm{C}(I):\ \lim_{t\to\infty}f(t)t^{-\alpha}=0\} \] Let $X$ be a metric space. For a subset $A$ of $X$ and $\varepsilon>0$, we define an $\varepsilon$-framed interior of $A$ by \[ \mathop{\mathrm{Int}}(A,\varepsilon)=\{x\in X:\ \mathrm{B}(x,\varepsilon)\subset A\}, \] where $\mathrm{B}(x,\varepsilon)$ denotes an open ball of radius $\varepsilon$ about $x$. Moreover, we define an $\varepsilon$-ball about $A$ by \[ \mathrm{B}(A,\varepsilon)=\bigcup_{x\in A}\mathrm{B}(x,\varepsilon). \] A subset $A$ of $X$ is called an $\varepsilon$-net for a subset $Z$ of $X$ if $Z\subset\mathrm{B}(A,\varepsilon)$. A subset $Z$ of $X$ is said to be totally bounded if for any $\varepsilon>0$ there exist a finite $\varepsilon$-net for $Z$. \\ Let $d$ denotes a metric in $X$. 
A family $F\subset\mathrm{F}(X)$ is called equicontinuous if for any $\varepsilon>0$ there exists a $\delta>0$ such that the condition $s,t\in X$, $d(s,t)<\delta$ implies $|f(s)-f(t)|<\varepsilon$ for any $f\in F$. For $m\in\mathbb{N}$ we use the rising factorial notation \[ n^{\mathrm{o}verline{m}}=n(n+1)\dots(n+m-1)\quad \text{with} \quad n^{\mathrm{o}verline{0}}=1. \] \section{Regional topology} Let $X$ be a set. For a function $f\in\mathrm{F}(X)$ we define a generalized norm $\|f\|\in[0,\infty]$ by \[ \|f\|=\sup\{|f(t)|:\ t\in X\}. \] We say that a subset $F$ of $\mathrm{F}(X)$ is ordinary if $\|f-g\|<\infty$ for any $f,g\in F$. We regard every ordinary subset $F$ of $\mathrm{F}(X)$ as a metric space with metric defined by \begin{equation}\label{dfg} d(f,g)=\|f-g\|. \end{equation} Let $U\subset\mathrm{F}(X)$. We say that $U$ is regionally open if $U\cap F$ is open in $F$ for any ordinary subset $F$ of $\mathrm{F}(X)$. The family of all regionally open subsets is a topology on $\mathrm{F}(X)$ which we call the regional topology. We regard any subset of $F(X)$ as a topological space with topology induced by the regional topology. \\ Note that a subset $U$ of $\mathrm{F}(X)$ is a neighborhood of $f\in U$ if and only if there exists an $\varepsilon>0$ such that $\mathrm{B}(f,\varepsilon)\subset U$. Hence, a sequence $(f_n)$ in $\mathrm{F}(X)$ is convergent to $f$ if and only if for any the sequence $\|f_n-f\|$ is convergent to zero. Therefore we can say that the regional topology in $\mathrm{F}(X)$ is the topology of uniform convergence. \\ More generally, let $X$ be a real vector space. We say that a function $\|\cdot\|:X\to[0,\infty]$ is regional norm if the condition $\|x\|=0$ is equivalent to $x=0$ and for any $x,y\in X$ and $\alpha\in\mathbb{R}$ we have \[ \|\alpha x\|=|\alpha|\|x\|, \quad \|x+y\|\leq\|x\|+\|y\|. \] Hence, the notion of regional norm generalizes the notion of usual norm. If a regional norm on $X$ is given, then we say that $X$ is a regional normed space. If there exists a vector $x\in X$ such that $\|x\|=\infty$, then we say that $X$ is extraordinary. \\ Assume $X$ is a regional normed space. We say that a subset $Z$ of $X$ is ordinary if $\|x-y\|<\infty$ for any $x,y\in Z$. We regard every ordinary subset $Z$ of $X$ as a metric space with metric defined by \eqref{dfg}. Analogously as above we define a regional topology on $X$. Let \[ X_0=\{x\in X:\ \|x\|<\infty\}. \] Obviously $X_0$ is a linear subspace of $X$. Moreover, the regional norm induces a usual norm on $X_0$. We say that $X$ is a Banach regional space if $X_0$ is complete. \\ For $p\in X$ let \[ X_p=p+X_0. \] Then $X_p$ is a maximal ordinary subset of $X$ which contain $p$. It is easy to see that $X_p$ is the equivalence class of $p$ under the relation defined by \[ x\equiv y \Leftrightarrow \|x-y\|<\infty. \] We say that $X_p$ is an ordinary component or a region of the space $X$. It is easy to see that $X_p$ is also a connected component of $p$ in $X$. From topological point of view, the space $X$ is a disjoint union of all its regions. In particular, every region $X_p$ is open and closed subset of $X$. Moreover, for any $p\in X$ the translation \[ T_p:X_0\to X_p, \qquad T_p(x)=x+p \] is an isometry of $X_0$ onto $X_p$. Hence a metric space $X_p$ is metrically equivalent to the normed space $X_0$. Note also, that the translation $T_p$ preserves convexity of subsets. Now, we are ready to state and prove a generalization of the following theorem. 
\\ \textbf{Theorem}\ \textbf{(Schauder fixed point theorem)} \textit{Assume $Q$ is a closed and convex subset of a Banach space $X$, a map $A: Q\to Q$ is continuous and the set $AQ$ is totally bounded. Then there exists a point $x\in Q$ such that $Ax=x$.} \begin{theorem}[Generalized Schauder theorem]\label{GST} Assume $Q$ is a closed and convex subset of a regional Banach space $X$, a map $A: Q\to Q$ is continuous and the set $AQ$ is ordinary and totally bounded. Then there exists a point $x\in Q$ such that $Ax=x$. \end{theorem} \textbf{Proof.} Choose $p\in AQ$. Since $AQ$ is ordinary, we have \begin{equation}\label{AQ} AQ\subset X_p. \end{equation} Let \[ Q_p=Q\cap X_p, \qquad Q_0=Q_p-p. \] Then $Q_p$ is closed and convex. Moreover, by \eqref{AQ}, $AQ_p\subset Q_p$. Let \[ A':Q_p\to Q_p, \qquad A'x=Ax, \qquad U:Q_p\to Q_0, \qquad Ux=x-p. \] Then $Q_0=T_p^{-1}Q_p$, and \[ T_p^{-1}: X_p\to X_0 \] is an isometry and preserves convexity. Hence $Q_0$ is a closed and convex subset of a Banach space $X_0$. Let \[ B:Q_0\to Q_0, \qquad B=U\circ A'\circ U^{-1}. \] Then $B$ is continuous and \begin{equation}\label{BQ} BQ_0=(U\circ A'\circ U^{-1})Q_0=(U\circ A')(U^{-1}Q_0)=(U\circ A')Q_p=U(A'Q_p). \end{equation} Moreover, \[ A'Q_p=AQ_p\subset AQ. \] Hence $A'Q_p$ is totally bounded and $U$ is an isometry. Therefore, by \eqref{BQ}, $BQ_0$ is totally bounded. Thus, by the Schauder fixed point theorem, there exists a point $y\in Q_0$ such that $By=y$. Let $x=U^{-1}y$. Then \[ x=U^{-1}y=U^{-1}By=U^{-1}UA'U^{-1}y=A'U^{-1}y=A'x=Ax. \] The proof is complete. \quad $\mathrm{B}ox$. \begin{remark} In Example \ref{ord ess} we show, that the assumption of ordinarity of the set $AQ$, in the above theorem, is essential. \end{remark} Note that, as in the functional case, a subset $U$ of $X$ is a neighborhood of a point $x$ if and only if there exists an $\varepsilon>0$ such that $\mathrm{B}(x,\varepsilon)\subset U$. Hence, we have the following remark, which will be used in the proof of Theorem \ref{rho}. \begin{remark}\label{continuity} If $T$ is a topological space, then a map $\varphi:X\to T$ is continuous at a point $x$ if and only if for any neighborhood $V$ of $\varphi(x)$ there exists an $\varepsilon>0$ such that $\varphi(\mathrm{B}(x,\varepsilon))\subset V$. \end{remark} Now, assume that $X$ is a locally compact metric space and $F\subset\mathrm{F}(X)$. We say that $F$ is: \begin{itemize} \item[] vanishing at infinity if for any $\varepsilon>0$ there exists a compact subset $Z$ of $X$ such that $|f(s)|<\varepsilon$ for every $s\notin Z$ and every $f\in F$. \item[] equicontinuous at infinity if for any $\varepsilon>0$ there exists a compact subset $Z$ of $X$ such that $|f(s)-f(t)|<\varepsilon$ for every $s,t\notin Z$ and every $f\in F$. \item[] stable at infinity if for any $\varepsilon>0$ there exists a compact subset $Z$ of $X$ such that $|f(s)-g(s)|<\varepsilon$ for any $s\notin Z$ and $f,g\in F$ \item[] homogeneous at infinity if for any $\varepsilon>0$ there exists a compact subset $Z$ of $X$ such that $|(f-g)(s)-(f-g)(t)|<\varepsilon$ for every $s,t\notin Z$ and every $f,g\in F$. \end{itemize} \begin{remark}\label{homogeneous} Obviously, every family which is vanishing at infinity is also stable at infinity and equicontinuous at infinity. Moreover, every stable at infinity and every equicontinuous at infinity family is homogeneous at infinity. It is easy to see that a family $F\subset\mathrm{F}(X)$ is stable at infinity if and only if $F-F$ is vanishing at infinity. 
Similarly $F$ is homogeneous at infinity if and only if $F-F$ is equicontinuous at infinity. \end{remark} \begin{example} Let \[ f,h:\mathbb{R}\to\mathbb{R}, \qquad f(t)=\exp(-t^2), \qquad h(t)=t^2, \] \[ F=\{f\}, \qquad G=\{f, f+1\}, \qquad H=\{h\}, \qquad K=\{h, h+1\}. \] Then $F$ is vanishing at infinity, $G$ is equicontinuous at infinity but not vanishing at infinity. $H$ is stable at infinity but not equicontinuous at infinity, $K$ is homogeneous at infinity but not stable at infinity and not equicontinuous at infinity. \end{example} In \cite{Avramescu 2002}, Avramescu use the term evanescent solution for solution vanishing at infinity. If $X=\mathbb{N}$, then the notion of equicontinuity at infinity is equivalent to the notion of uniformly Cauchy family of sequences, see \cite{Cheng 1993}. Now, we are ready to generalize the following theorem. \\ \textbf{Theorem (Ascoli theorem)} \textit{If $X$ is a compact metric space, then every pointwise bounded and equicontinuous subset $F$ of $\mathrm{C}(X)$ is totally bounded. } \begin{theorem}[Generalized Ascoli theorem]\label{GAT} If $X$ is a locally compact metric space, then every equicontinuous, pointwise bounded and homogeneous at infinity subset $F$ of $\mathrm{C}(X)$ is totally bounded. \end{theorem} \textbf{Proof.} Let $h\in F$ and \[ G=F-h=\{f-h: f\in F\}. \] Then $G$ is pointwise bounded, equicontinuous and equicontinuous at infinity. Let $\varepsilon>0$. Choose a compact $Z\subset X$ such that \[ |g(s)-g(t)|<\varepsilon \] for $g\in G$ and $s,t\notin Z$. Choose $s\in X\setminus Z$ and let \[ Y=Z\cup\{s\}. \] Then $Y$ is a compact subset od the space $X$. By the Ascoli theorem, there exist \[ g_1,\dots,g_n\in G \] such that the family $g_1|Y,\dots,g_n|Y$ is an $\varepsilon$-net for the set $\{g|Y: g\in G\}$. Let $g\in G$. Then there exists an index $i\in\mathbb{N}(1,n)$ such that $|g(y)-g_i(y)|\leq\varepsilon$ for any $y\in Y$. Let $t\in X\setminus Y$. Then \[ |g(t)-g_i(t)|\leq|g(t)-g(s)|+|g(s)-g_i(s)|+|g_i(s)-g_i(t)|\leq 3\varepsilon. \] Hence $\{g_1,\dots,g_n\}$ is a $3\varepsilon$-net for $G$. Therefore the set $G$ is totally bounded. Thus the set $F=G+h$ is also totally bounded. \quad $\mathrm{B}ox$. Theorem \ref{GAT} generalizes the Compactness criterion of C. Avramescu (see \cite[page 1164]{Philos 2004} and Remark \ref{equiconvergence}). In this paper, we do not need the Arzela theorem which states that if $X$ is a compact metric space, then every totally bounded subset of $\mathrm{C}(X)$ is equicontinuous. Now, we present the last `topological' theorem which will be used in the proofs of our main theorems. \begin{theorem}\label{rho} Let $X$ be a locally compact metric space, $h\in\mathrm{F}(X)$, $\rho\in\mathrm{F}_0(X)$ and \[ F=\{f\in\mathrm{F}(X):\ |f-h|\leq|\rho|\}. \] Then $F$ is closed, convex, pointwise bounded and stable at infinity. Moreover \begin{enumerate} \item[$(a)$] if $\rho$ is continuous, then $F$ is ordinary, \item[$(b)$] if $X$ is discrete, then $F$ is compact and ordinary. \end{enumerate} \end{theorem} \textbf{Proof.} Obviously, $F$ is convex, pointwise bounded and stable at infinity. Using Remark \ref{continuity}, it is easy to see that the map \[ \mathrm{F}(X)\to\mathrm{F}(X), \qquad f\mapsto f-h \] is continuous. Similarly, using Remark \ref{continuity}, we can see that for any $x\in X$, the evaluation \[ e_x:\mathrm{F}(X)\to\mathbb{R}, \qquad e_x(f)=f(x) \] is continuous. Hence, for any nonnegative $\alpha$, the set \[ \{f\in\mathrm{F}(X):\ |f(x)-h(x)|\leq\alpha\} \] is closed in $\mathrm{F}(X)$. 
Therefore the intersection \[ F=\bigcap\limits_{x\in X}\{f\in\mathrm{F}(X):\ |f(x)-h(x)|\leq|\rho(x)|\} \] is closed. Assume $\rho$ is continuous, then $\|\rho\|<\infty$ and \[ \|f-g\|\leq\|f-h\|+\|g-h\|\leq 2\|\rho\| \] for any $f,g\in F$. Hence $F$ is ordinary. \\ Now, assume that $X$ is discrete. Then $\rho$ is continuous and, by (a), $F$ is ordinary. Let $G$ be defined by \[ G=\{f\in\mathrm{F}(X):\ |f|\leq\rho\}. \] Choose an $\varepsilon>0$. Every compact subset of $X$ is finite. Since $\rho$ is vanishing at infinity, there exists a finite subset $Z$ of $X$ such that $|\rho(x)|\leq\varepsilon$ for any $x\notin Z$. For any $z\in Z$ choose a finite $\varepsilon$-net $H_z$ for the interval $[-\rho(z),\rho(z)]$ and let \[ H=\{g\in G:\ g(z)\in H_z\ \text{ for }\ z\in Z \text{ and }\ g(x)=0\ \text{ for }\ x\notin Z \}. \] Then $H$ is a finite $\varepsilon$-net for $G$. Hence $G$ is a complete and totally bounded metric space and so, $G$ is compact. Therefore $F=G+h$ is also compact. \quad $\mathrm{B}ox$. \section{Approximative solutions of difference equations} In this section we establish fundamental properties of the iterated remainder operator $r^m$ in the discrete case. Next, in Theorem \ref{T1}, we obtain our first main result. \begin{lemma}\label{dL2} Assume $m\in\mathbb{N}$, $x\in\mathrm{SQ}$ and \begin{equation}\label{dEL1} \sum_{n=1}^\infty n^{m-1}|x_n|<\infty. \end{equation} Then there exists exactly one sequence $z\in\mathrm{SQ}$ such that \begin{equation}\label{Dmz} z_n=\mathrm{o}(1) \qquad \text{and} \qquad \Delta^mz=x. \end{equation} The sequence $z$ is defined by \begin{equation}\label{zn} z_n=(-1)^m\sum_{j=n}^\infty\frac{(j-n+1)^{\mathrm{o}verline{m-1}}}{(m-1)!}x_j. \end{equation} Moreover, if $k\in\mathbb{N}(0,m-1)$, then \begin{equation}\label{Dkzn} \Delta^kz_n=(-1)^{m-k}\sum_{j=n}^\infty \frac{(j-n+1)^{\mathrm{o}verline{m-1-k}}}{(m-1-k)!}x_j. \end{equation} \end{lemma} \textbf{Proof.} By Lemma 4 in \cite{J Migda 2014 b}, there exists exactly one sequence $z\in\mathrm{SQ}$ such that \eqref{Dmz} is satisfied. The sequence $z$ is defined by \begin{equation}\label{znb} z_n=(-1)^m\sum_{k=0}^\infty\frac{(k+1)(k+2)\cdots(k+m-1)}{(m-1)!}x_{n+k}. \end{equation} Replacing $n+k$ by $j$ in \eqref{znb} we obtain \eqref{zn}. Moreover, \eqref{Dkzn} is a consequence of the proof of Lemma 3 in \cite{J Migda 2014 b}. \quad $\mathrm{B}ox$. \begin{lemma}\label{dL3} Assume $m\in\mathbb{N}$, $\alpha\in(-\infty,0]$, $x\in\mathrm{SQ}$, \[ \sum_{n=1}^\infty n^{m-1-\alpha}|x_n|<\infty \] and $z\in\mathrm{SQ}$ is defined by \eqref{zn}. Then $z_n=\mathrm{o}(n^\alpha)$. \end{lemma} \textbf{Proof.} The assertion follows from Lemma 4.2 of \cite{J Migda 2014 a}. \quad $\mathrm{B}ox$. \begin{definition}\label{def1} Assume $m,p\in\mathbb{N}$, $x\in\mathrm{F}(\mathbb{N}_p)$ and \[ \sum_{n=p}^\infty n^{m-1}|x_n|<\infty. \] We define $r^mx\in\mathrm{F}(\mathbb{N}_p)$ by \begin{equation}\label{rmxn} r^mx_n=\sum_{j=n}^\infty\frac{(j-n+1)^{\mathrm{o}verline{m-1}}}{(m-1)!}x_j. \end{equation} \end{definition} \begin{lemma}\label{rmL} Assume $m,p\in\mathbb{N}$, $k\in\mathbb{N}(0,m)$, $\alpha\in(-\infty,0]$, \[ \sum\limits_{n=p}^\infty n^{m-1-\alpha}|x_n|<\infty, \] and $n\in\mathbb{N}_p$. Then we have \begin{equation}\label{rm|x|} r^m|x|_n\leq\sum_{j=n}^\infty j^{m-1}|x_j|, \end{equation} \begin{equation}\label{Dkrm} \Delta^kr^mx=(-1)^kr^{m-k}x=\mathrm{o}(n^{\alpha-k}), \end{equation} \begin{equation}\label{Dmrmx} \Delta^mr^mx=(-1)^mx, \qquad r^mx=\mathrm{o}(n^{\alpha}). 
\end{equation} \end{lemma} \textbf{Proof.} Obviously, \eqref{rm|x|} is a consequence of \eqref{rmxn}. If $k\in\mathbb{N}(0,m-1)$, then, by \eqref{zn} and \eqref{Dkzn}, we have \[ \Delta^kr^mx=(-1)^{-k}r^{m-k}x=(-1)^kr^{m-k}x. \] Moreover, by \eqref{Dmz}, we get $\Delta^mr^mx=(-1)^mx$. Using Lemma \ref{dL3} and the equality \[ m-1-\alpha=(m-k)-1-(\alpha-k) \] we have \[ r^{m-k}x=\mathrm{o}(n^{\alpha-k}). \] Hence, we obtain \eqref{Dkrm}. Taking $k=m$ and $k=0$ in \eqref{Dkrm} we obtain \eqref{Dmrmx}. \quad $\mathrm{B}ox$ \begin{theorem}\label{T1} Assume $m,p\in\mathbb{N}$, $U\subset\mathbb{R}$, $M\geq 1$, $\mu>0$, $\alpha\in(-\infty,0]$, \begin{equation}\label{function f} f:\mathbb{N}_p\times\mathbb{R}\to\mathbb{R}, \qquad \|f|\mathbb{N}_p\times U\|\leq M, \end{equation} $f|\mathbb{N}_p\times U$ is continuous, $a,b,y\in\mathrm{F}(\mathbb{N}_p)$, $\Delta^my=b$, \begin{equation}\label{Mmu} M\sum_{n=p}^\infty n^{m-1-\alpha}|a_n|\leq\mu, \quad \text{and} \quad y(\mathbb{N}_p)\subset\mathop{\mathrm{Int}}(U,\mu). \end{equation} Then there exists a sequence $x\in\mathrm{F}(\mathbb{N}_p)$ such that $x_n=y_n+\mathrm{o}(n^\alpha)$ and \[ \Delta^mx_n=a_nf(n,x_n)+b_n \] for $n\geq p$. \end{theorem} \textbf{Proof.} Let $\rho=r^m|a|$ and \begin{equation}\label{Q} Q=\{x\in\mathrm{F}(\mathbb{N}_p):\ |x-y|\leq M\rho\}. \end{equation} Let $x\in Q$ and $n\in\mathbb{N}_p$. Then, using \eqref{rm|x|} and \eqref{Mmu}, we have \[ |x_n-y_n|\leq M\rho_n=Mr^m|a|_n\leq M\sum_{j=n}^\infty j^{m-1}|a_j|\leq\mu. \] Hence, using the inclusion $y(\mathbb{N}_p)\subset\mathop{\mathrm{Int}}(U,\mu)$, we obtain $x(\mathbb{N}_p)\subset U$. Therefore, by \eqref{function f}, \begin{equation}\label{fM} |f(n,x_n)|\leq M \end{equation} for any $x\in Q$ and $n\in\mathbb{N}_p$. For $n\geq p$ let \[ \bar{x}_n=a_nf(n,x_n). \] Then $|\bar{x}|\leq M|a|$. Thus $r^m|\bar{x}|\leq Mr^m|a|=M\rho$. Now, we define a sequence $Ax$ by \[ Ax=y+(-1)^mr^m\bar{x}. \] Then \[ |Ax-y|=|r^m\bar{x}|\leq r^m|\bar{x}|\leq M\rho. \] Hence, by \eqref{Q}, $Ax\in Q$. Thus \[ AQ\subset Q. \] Let $\varepsilon>0$. There exist $q\in\mathbb{N}_p$ and $\alpha>0$ such that \[ M\sum_{n=q}^\infty n^{m-1}|a_n|<\varepsilon \qquad\text{and}\qquad \alpha\sum_{n=p}^{q}n^{m-1}|a_n|<\varepsilon. \] Let \[ W=\{(n,t):\ n\in\mathbb{N}(p,q),\quad |t-y_n|\leq\mu\} \] Then $W$ is compact and, by \eqref{Mmu}, $W\subset\mathbb{N}_p\times U$. Hence $f|W$ is uniformly continuous. Therefore there exists $\delta>0$ such that if $(n,t_1)$, $(n,t_2)\in W$ and $|t_1-t_2|<\delta$, then \[ |f(n,t_1)-f(n,t_2)|<\alpha. \] Choose $z\in Q$ such that\: $\|x-z\|<\delta$. Let $\bar{z}=a_nf(n,z_n)$. Then \[ \|Ax-Az\|=\sup_{n\geq p}|r^m\bar{x}_n-r^m\bar{z}_n|= \sup_{n\geq p}|r^m(\bar{x}-\bar{z})_n|\leq r^m|\bar{x}-\bar{z}|_p \] \[ \leq\sum_{n=p}^\infty n^{m-1}|\bar{x}_n-\bar{z}_n|\leq \sum_{n=p}^qn^{m-1}|\bar{x}_n-\bar{z}_n|+ \sum_{n=q}^\infty n^{m-1}|\bar{x}_n-\bar{z}_n| \] \[ \leq\alpha\sum_{n=p}^qn^{m-1}|a_n|+2M\sum_{n=q}^\infty n^{m-1}|a_n|<3\varepsilon. \] Hence, the mapping $A: Q\rightarrow Q$ is continuous. By Theorem \ref{rho}, $Q$ is a compact subset of the metric space \[ \mathrm{F}(\mathbb{N}_p)_y=y+\mathrm{F}_b(\mathbb{N}_p). \] Hence $AQ\subset Q$ is totally bounded. Therefore, by Theorem \ref{GST}, there exists $x\in Q$ such that $Ax=x$. Then \[ x=y+(-1)^mr^m\bar{x} \] and, by \eqref{Dmrmx}, \[ \Delta^mx=\Delta^my+\Delta^m((-1)^mr^m\bar{x})=b+\bar{x}. \] Therefore \[ \Delta^mx_n=a_nf(n,x_n)+b_n \] for $n\geq p$. 
Moreover, using \eqref{Dmrmx}, we have \[ x=y+(-1)^mr^m\bar{x}=y+\mathrm{o}(n^\alpha). \] The proof is complete. \quad $\mathrm{B}ox$. \section{Approximative solutions of differential equations} In this section we establish fundamental properties of the iterated remainder operator $r^m$ in the continuous case. Next, in Theorem \ref{T2}, we obtain our second main result. \begin{lemma}\label{L1} Let $a\in\mathbb{R}$, $b\in(a,\infty)$, $f\in\mathrm{C}[a,\infty)$, $m\in\mathbb{N}(0)$. Then \[ \int_a^b\int_t^b\frac{(s-t)^m}{m!}f(s)dsdt= \int_a^b\frac{(s-a)^{m+1}}{(m+1)!}f(s)ds. \] \end{lemma} \textbf{Proof.} Let $H:[a,b]\times[a,b]\to\mathbb{R}$, \[ H(t,s)=\frac{(s-t)^m}{m!}f(s). \] Then $H$ is continuous and \[ \int_a^b\int_t^b\frac{(s-t)^m}{m!}f(s)dsdt=\int_a^b\int_t^bH(t,s)dsdt \] \[ =\int_a^b\int_a^sH(t,s)dtds=\int_a^bf(s)\int_a^s\frac{(s-t)^m}{m!}dtds \] \[ =\int_a^bf(s)\left[-\frac{(s-t)^{m+1}}{(m+1)!}\right]_a^sds= \int_a^b\frac{(s-a)^{m+1}}{(m+1)!}f(s)ds. \] \quad $\mathrm{B}ox$. \begin{lemma}\label{L2} Assume $m\in\mathbb{N}$, $t_0\in[0,\infty)$, $f\in\mathrm{C}[t_0,\infty)$ and \begin{equation}\label{EL1} \int_{t_0}^\infty s^{m-1}|f(s)|ds<\infty. \end{equation} Then there exists exactly one function $F:[t_0,\infty)\to\mathbb{R}$ such that \begin{equation}\label{Fmf} F=\mathrm{o}(1) \qquad \text{and} \qquad F^{(m)}=f. \end{equation} The function $F$ is defined by \begin{equation}\label{Ft} F(t)=(-1)^m\int_t^\infty\frac{(s-t)^{m-1}}{(m-1)!}f(s)ds. \end{equation} Moreover, if $k\in\mathbb{N}(0,m-1)$, then \begin{equation}\label{Fkt} F^{(k)}(t)=(-1)^{m-k}\int_t^\infty\frac{(s-t)^{m-1-k}}{(m-1-k)!}f(s)ds. \end{equation} \end{lemma} \textbf{Proof.} Let $\varphi_0=\psi_0=f$. For $k\in\mathbb{N}(1,m)$ let $\varphi_k, \psi_k:[t_0,\infty)\to\mathbb{R}$, \[ \varphi_k(t)=\int_t^\infty\varphi_{k-1}(s)ds, \qquad \psi_k(t)=\int_t^\infty\frac{(s-t)^{k-1}}{(k-1)!}f(s)ds. \] By (\ref{EL1}), the integrals $\psi_k$ are convergent. Using Lemma \ref{L1} it is easy to see that $\varphi_k=\psi_k$ for $k\in\mathbb{N}(0,m)$. Let $F=(-1)^m\psi_m$. Obviously $\varphi_k'=-\varphi_{k-1}$ for $k\in\mathbb{N}(1,m)$. Hence \begin{equation}\label{Fk} F^{(k)}=(-1)^{m-k}\psi_{m-k} \end{equation} for $k\in\mathbb{N}(1,m)$. By (\ref{EL1}), we have \[ \int_{t_0}^\infty|\psi_{m-1}(s)|ds<\infty. \] Hence \[ \psi_m(t)=\int_t^\infty\psi_{m-1}(s)ds=\mathrm{o}(1). \] Therefore $F=\mathrm{o}(1)$. Taking $k=m$ in \eqref{Fk} we obtain \eqref{Fmf}. Moreover, \eqref{Fkt} is a consequence of \eqref{Fk}. Now assume \[ G:[t_0,\infty)\to\mathbb{R}, \quad G^{(m)}=f \quad \text{and} \quad G=\mathrm{o}(1). \] Then $(G-F)^{(m)}=0$ and so $G-F\in\mathop{\mathrm{Pol}}(m-1)$. Moreover $G-F=\mathrm{o}(1)$. Since $\mathop{\mathrm{Pol}}(m-1)\cap\mathrm{o}(1)=0$, we obtain $G-F=0$. The proof is complete. \quad $\mathrm{B}ox$. \begin{lemma}\label{L3} Assume $m\in\mathbb{N}$, $t_0\in[0,\infty)$, $\alpha\in(-\infty,0]$, $f\in\mathrm{C}[t_0,\infty)$, \[ \int_{t_0}^\infty s^{m-1-\alpha}|f(s)|ds<\infty \] and $F:[t_0,\infty)\to\mathbb{R}$ is defined by \eqref{Ft}. Then $F(t)=\mathrm{o}(t^\alpha)$. \end{lemma} \textbf{Proof.} Let $g,G:[t_0,\infty)\to\mathbb{R}$, \[ g(s)=s^{-\alpha}f(s), \qquad G(t)=\int_t^\infty\frac{(s-t)^{m-1}}{(m-1)!}|g(s)|ds. \] Then, by Lemma \ref{L2}, $G=\mathrm{o}(1)$. 
Moreover, \[ |t^{-\alpha}F(t)|= t^{-\alpha}\left|\int_t^\infty\frac{(s-t)^{m-1}}{(m-1)!}f(s)ds\right|= \left|\int_t^\infty\frac{(s-t)^{m-1}}{(m-1)!}t^{-\alpha}f(s)ds\right| \] \[ \leq\int_t^\infty\frac{(s-t)^{m-1}}{(m-1)!}t^{-\alpha}|f(s)|ds\leq \int_t^\infty\frac{(s-t)^{m-1}}{(m-1)!}s^{-\alpha}|f(s)|ds=G(t)=\mathrm{o}(1). \] Hence $F(t)=\mathrm{o}(t^\alpha)$. \quad $\mathrm{B}ox$. \begin{definition}\label{def2} Assume $m\in\mathbb{N}$, $f\in\mathrm{C}[t_0,\infty)$ and \[ \int_{t_0}^\infty s^{m-1}|f(s)|ds<\infty. \] We define $r^mf:[t_0,\infty)\to\mathbb{R}$ by \begin{equation}\label{rmf-def} (r^mf)(t)=\int_t^\infty\frac{(s-t)^{m-1}}{(m-1)!}f(s)ds. \end{equation} \end{definition} \begin{lemma}\label{L4} Assume $m\in\mathbb{N}$, $k\in\mathbb{N}(0,m)$, $\alpha\in(-\infty,0]$, $f\in\mathrm{C}[t_0,\infty)$ and \[ \int_{t_0}^\infty s^{m-1-\alpha}|f(s)|ds<\infty. \] Then for $t\geq t_0$ we have \begin{equation}\label{rm|f|} r^m|f|(t)\leq\int_{t}^\infty s^{m-1}|f(s)|ds \end{equation} Moreover, \begin{equation}\label{rmfk} (r^mf)^{(k)}=(-1)^kr^{m-k}f=\mathrm{o}(t^{\alpha-k}), \end{equation} \begin{equation}\label{rmfm} (r^mf)^{(m)}=(-1)^mf, \qquad r^mf=\mathrm{o}(t^{\alpha}). \end{equation} \end{lemma} \textbf{Proof.} Inequality \eqref{rm|f|} is an easy consequence of \eqref{rmf-def}. By \eqref{rmf-def}, \eqref{Ft} and \eqref{Fkt}, we have \[ (r^mf)^{(k)}=(-1)^kr^{m-k}f. \] Using Lemma \ref{L3}, we obtain the right equality in \eqref{rmfk}. Taking $k=m$ and $k=0$ in \eqref{rmfk} we obtain \eqref{rmfm}. \quad $\mathrm{B}ox$. \begin{theorem}\label{T2} Assume $m\in\mathbb{N}$, $U\subset\mathbb{R}$, $M\geq 1$, $t_0\in[0,\infty)$, $\mu>0$, $\alpha\in(-\infty,0]$, \begin{equation}\label{cfunction f} I=[t_0,\infty), \qquad f:I\times\mathbb{R}\to\mathbb{R}, \qquad \|f|I\times U\|\leq M, \end{equation} $f|I\times U$ is continuous, $a,b\in\mathrm{C}(I)$, $y\in\mathrm{C}^m(I)$, $y^{(m)}=b$, \begin{equation}\label{int} M\int_{t_0}^\infty s^{m-1-\alpha}|a(s)|ds\leq\mu, \quad \text{and} \quad y(I)\subset\mathop{\mathrm{Int}}(U,\mu). \end{equation} Then there exists a function $x\in\mathrm{C}^m(I)$ such that $x(t)=y(t)+\mathrm{o}(t^\alpha)$ and \[ x^{(m)}(t)=a(t)f(t,x(t))+b(t) \] for $t>t_0$. \end{theorem} \textbf{Proof.} Define a function $\rho$ and a subset $Q$ of $\mathrm{C}(I)$ by \[ \rho=r^m|a|, \qquad Q=\{x\in\mathrm{C}(I):\ |x-y|\leq M\rho\}. \] By \eqref{rmf-def} and \eqref{int} we have $M\rho\leq\mu$. Let $x\in Q$. Then $|x-y|\leq\mu$. Hence, using the inclusion $y(I)\subset\mathop{\mathrm{Int}}(U,\mu)$, we get $x(I)\subset U$. Therefore, by \eqref{cfunction f}, we have \begin{equation}\label{ftxM} |f(t,x(t))|\leq M \end{equation} for any $x\in Q$ and $t\in I$. For $t\geq t_0$ let \begin{equation}\label{barx} \bar{x}(t)=a(t)f(t,x(t)). \end{equation} Then $|\bar{x}|\leq M|a|$. Thus $r^m|\bar{x}|\leq Mr^m|a|=M\rho$. Now, we define a function $Ax$ by \[ Ax=y+(-1)^mr^m\bar{x}. \] Then \[ |Ax-y|=|r^m\bar{x}|\leq r^m|\bar{x}|\leq M\rho. \] Hence $Ax\in Q$. Thus \[ AQ\subset Q. \] Let $\varepsilon>0$. By \eqref{int}, there exist $k\in I$ and $\alpha>0$ such that \[ M\int_{k}^\infty s^{m-1}|a(s)|ds<\varepsilon \qquad\text{and}\qquad \alpha\int_{t_0}^{k}s^{m-1}|a(s)|ds<\varepsilon. \] Let \[ W=\{(t,s):\ t\in[t_0,k],\quad |s-y(t)|\leq\mu\} \] Then $W$ is compact and $f|W$ is uniformly continuous. Hence there exists $\delta>0$ such that if $(t,s_1)$, $(t,s_2)\in W$ and $|s_1-s_2|<\delta$, then \[ |f(t,s_1)-f(t,s_2)|<\alpha. \] Choose $z\in Q$ such that\: $\|x-z\|<\delta$. Let $\bar{z}=a(t)f(t,z(t))$. 
Then, using \eqref{rm|f|} and \eqref{ftxM}, we obtain \[ \|Ax-Az\|=\sup_{t\geq t_0}|(r^m\bar{x})(t)-(r^m\bar{z})(t)|= \sup_{t\geq t_0}|r^m(\bar{x}-\bar{z})(t)|\leq r^m|\bar{x}-\bar{z}|(t_0) \] \[ \leq\int_{t_0}^\infty s^{m-1}|\bar{x}-\bar{z}|(s)ds\leq \int_{t_0}^ks^{m-1}|\bar{x}-\bar{z}|(s)ds+ \int_k^\infty s^{m-1}|\bar{x}-\bar{z}|(s)ds \] \[ \leq\alpha\int_{t_0}^ks^{m-1}|a(s)|ds+2M\int_k^\infty s^{m-1}|a(s)|ds<3\varepsilon. \] Hence, the mapping $A: Q\rightarrow Q$ is continuous. Assume $x\in Q$. If $t_1,t_2\in I$, then \[ |A(x)(t_1)-A(x)(t_2)|=|r^m\bar{x}(t_1)-r^m\bar{x}(t_2)| \] \[ =\left|\int_{t_1}^\infty r^{m-1}\bar{x}(s)ds- \int_{t_2}^\infty r^{m-1}\bar{x}(s)ds\right|= \left|\int_{t_1}^{t_2}r^{m-1}\bar{x}(s)ds\right|\leq \int_{t_1}^{t_2}r^{m-1}|\bar{x}|(s)ds. \] Moreover, if $t\in I$, then \[ r^{m-1}|\bar{x}|(t)\leq\int_t^\infty s^{m-2}|\bar{x}(s)|ds\leq \int_{t_0}^\infty s^{m-2}|a(s)f(s,x(s))|ds \] \[ \leq M\int_{t_0}^\infty s^{m-1}|a(s)|ds\leq\mu. \] Hence \[ |A(x)(t_1)-A(x)(t_2)|\leq\mu\int_{t_1}^{t_2}ds=|t_2-t_1|\mu. \] Therefore the family $AQ$ is equicontinuous. By Theorem \ref{rho}, $Q$ is closed, convex, pointwise bounded, and stable at infinity. Hence $AQ$ is pointwise bounded, equicontinuous and stable at infinity. By Theorem \ref{GAT}, $AQ$ is totally bounded. By Theorem \ref{GST}, there exists $x\in Q$ such that $Ax=x$. Then \begin{equation}\label{y+rmx} x=y+(-1)^mr^m\bar{x}. \end{equation} Using \eqref{rmfm}, we obtain \[ x^{(m)}=y^{(m)}+((-1)^mr^m\bar{x})^{(m)}=b+\bar{x}. \] Therefore, by \eqref{barx}, we get \[ x^{(m)}(t)=a(t)f(t,x(t))+b(t) \] for $t>t_0$. Moreover, using \eqref{y+rmx} and \eqref{rmfm}, we have \[ x=y+(-1)^mr^m\bar{x}=y+\mathrm{o}(t^\alpha). \] The proof is complete. \quad $\mathrm{B}ox$. \section{Remarks} In this section, we give some additional remarks on the regional topology. In particular, in Remark \ref{top 03} we show that if $X$ is an extraordinary regional space, then the regional topology is almost linear but not linear. In Example \ref{ord ess} we show, that in Theorem \ref{GST}, the assumption of ordinarity of the set $AQ$ is essential. At the end of the section we show that if $X$ is a locally compact but noncompact metric space, then the notion of equicontinuity at infinity of a family $F\subset\mathrm{F}(X)$ is equivalent to the known notion of equiconvergence at infinity. \\ Let $X$ be a regional normed space. The following remark is a consequence of the fact, that $X$ is a topological disjoint union of all its regions. \begin{remark}\label{top 01} A subset $Y$ of $X$ is closed in $X$ if and only if, $Y\cap X_p$ is closed in $X_p$ for any region $X_p$. If $Z$ is a topological space, then a map $\varphi:X\to Z$ is continuous if and only if $\varphi|X_p$ is continuous for any region $X_p$. If $Y\subset X$ and $p\in X$, then the set $Y\cap X_p$ is closed and open in $Y$. \end{remark} Let $Z$ be a linear subspace of $X$ such that \[ Z\cap X_0=\{0\}, \] $q\in X$, and $Y=q+Z$. If $p\in Y$, then $Y\cap X_p=\{p\}$. Hence, by Remark \ref{top 01}, we obtain: \begin{remark}\label{top 02} If $Z$ is a linear subspace of $X$ such that $Z\cap X_0=\{0\}$, $q\in X$, then the topology on $q+Z$ induced from $X$ is discrete. \end{remark} \begin{example} A formula \[ \|(x,y)\|= \begin{cases} |y| &\text{if $x=0$}\\ \infty &\text{if $x\neq 0$} \end{cases} \] define a regional norm on the space $X=\mathbb{R}^2$. A subset $L$ of $X$ is a region in $X$ if and only if $L$ is a vertical line. 
If $L$ is a vertical line, then the topology induced on $L$ from $X$ is the usual Euclidean topology. On the other hand, the topology induced from $X$ on any nonvertical line $L$ is discrete. Moreover, if $\varphi:\mathbb{R}\to\mathbb{R}$ is an arbitrary function, then the topology induced on the graph \[ G(\varphi)=\{(x,\varphi(x)):\ x\in\mathbb{R}\} \] is discrete. \end{example} \begin{example}\label{[x,y]} Let $x,y\in X$ and let $I$ denote the line segment \[ I=[x,y]=x+[0,y-x]=x+[0,1](y-x)=\{x+\lambda(y-x):\ \lambda\in[0,1]\}. \] If $\|y-x\|<\infty$, then \[ [0,y-x]=[0,1](y-x)\subset X_0. \] Hence, in this case, $I$ is topologically equivalent to standard interval $[0,1]$. Moreover, $I$ is a closed subset of $X_x$ and, by Remark \ref{top 01}, $I$ is closed in $X$. \\ Assume $\|y-x\|=\infty$. Then \[ X_0\cap\mathbb{R}(y-x)=\{0\} \] and, by Remark \ref{top 02}, the topology induced on $x+\mathbb{R}(y-x)$ from $X$ is discrete. Moreover \[ I\subset x+\mathbb{R}(y-x). \] Hence, in this case, the topology induced on $I$ is discrete. If $p\in X$, then $I\cap X_p$ is one point or empty subset of $X_p$ and, by Remark \ref{top 01}, $I$ is closed in $X$. \end{example} Using Remark \ref{top 01} it is not difficult to see that the addition $X\times X\to X$ is continuous. Moreover, if $p\in X$, and $\lambda\in\mathbb{R}$, then the translation and $x\mapsto p+x$ and the homothety $x\mapsto \lambda x$ are continuous maps from $X$ to $X$. Let $\mu:\mathbb{R}\times X\to X$ denote the multiplication by scalars. If there exists an $x\in X$ such that $\|x\|=\infty$, then $\mu(\mathbb{R}\times\{x\})=\mathbb{R} x$ and, by Remark \ref{top 02}, the topology induced on $\mathbb{R} x$ is discrete. Hence, the restriction $\mu|\mathbb{R}\times\{x_0\}$ is discontinuous. This implies the discontinuity of $\mu$. Hence we have \begin{remark}\label{top 03} The addition \[ X\times X\to X, \qquad (x,y)\mapsto x+y \] is continuous. If $p\in X$, then the translation \[ X\to X, \qquad x\mapsto p+x \] is a homeomorphism. If $\lambda$ is a nonzero scalar, the the homothety \[ X\to X, \qquad x\mapsto \lambda x \] is a homeomorphism. If $X$ is extraordinary, then the multiplication by scalars \[ \mathbb{R}\times X\to X, \qquad (\lambda,x)\mapsto\lambda x \] is discontinuous. \end{remark} Every region $X_p$ is topologically equivalent to the normed space $X_0$. Hence $X_p$ is connected. Moreover, $X_p$ is open and closed subset of $X$. Hence $X_p$ is a maximal connected subset of $X$. Therefore any connected subset of $X$ is contained in certain region. \\ Obviously, $X$ is a Hausdorff space and so, any compact subset of $X$ is closed. Moreover, if $C\subset X$ is compact and $p\in X$, then $C_p=C\cap X_p$ is closed and open in $C$. Hence $C_p$ is compact and \[ C\subset C_{p_1}\cup\cdots\cup C_{p_n} \] for some $p_1,\dots,p_n\in X$. \\ Now, assume that $C\subset X$ is compact and convex. Let $x,y\in C$ and let $I=[x,y]$. If $\|y-x\|=\infty$, then, by Example \ref{[x,y]}, $I$ is closed and discrete subset of $C$. Hence $I$ is compact and discrete. It is impossible. This means that $C$ is ordinary. Obviously, any convex and ordinary subset of $X$ is connected. \\ Therefore we obtain: \begin{remark}\label{top 04} Any connected subset of $X$ is ordinary. Any compact subset of $X$ is a finite sum of ordinary compact subsets. Any compact and convex subset of $X$ is connected and ordinary. 
\end{remark} \begin{example}\label{ord ess} Let $X$ be a regional normed space, $x,y\in X$, $\|y-x\|=\infty$, \[ Q=[x,y], \qquad A:Q\to Q, \qquad Az= \begin{cases} x &\text{for $z\in(x,y]$}\\ y &\text{for $z=x$} \end{cases}. \] Then $Q$ is closed and convex, $A$ is continuous, $AQ=\{x,y\}$ is totally bounded and $Az\neq z$ for any $z\in Q$. \end{example} Assume that $X$ is a locally compact, noncompact metric space. \\ We say that a real number $p$ is a limit at infinity of a function $f:X\to\mathbb{R}$ if for any $\varepsilon>0$ there exists a compact subset $Z$ of $X$ such that $|f(t)-p|<\varepsilon$ for any $t\notin Z$. Then we write \[ p=\lim_{t\to\infty}f(t) \] and say that $f$ is convergent at infinity. We say that a family $F\subset\mathrm{F}(X)$ is equiconvergent at infinity if all functions $f\in F$ are convergent at infinity and for any $\varepsilon>0$ there exists a compact $Z\subset X$ such that \[ \left|f(s)-\lim_{t\to\infty}f(t)\right|<\varepsilon \] for any $f\in F$ and $s\notin Z$. \\ In the next remark we show that a family $F\subset\mathrm{F}(X)$ is equiconvergent at infinity if and only if, it is equcontinuous at infinity. \begin{remark}\label{equiconvergence} Obviously every equiconvergent at infinity family $F\subset F(X)$ is equcontinuous at infinity. If a family $F\subset F(X)$ is equicontinuous at infinity, then for any natural $n$ there exists a compact $Z_n\subset X$ such that $|f(s)-f(t)|<1/n$ for every $f\in F$ and $s,t\notin Z_n$. For $n\in\mathbb{N}$ let \[ K_n=Z_1\cup\dots\cup Z_n. \] Then $K_n$ is compact. Choose a sequence $(t_n)$ in $X$ such that $t_n\notin K_n$ for any $n$. If $f\in F$, then $(f(t_n))$ is a Cauchy sequence and there exists a limit $p$ of this sequence. It is easy to see that \[ p=\lim_{t\to\infty}f(t) \] and $F$ is equiconvergent at infinity. \end{remark} Note that if $X$ is a compact space, then every family $F\subset\mathrm{F}(X)$ is equicontinuous at infinity but, in this case, equiconvergence at infinity is not defined. \end{document}
\begin{document} \title[Class number one criterion]{Class number one criterion for some non-normal totally real cubic fields} \author[Jun Ho Lee] {Jun Ho Lee} \address{School of Mathematics, Korea Institute for Advanced Study, Hoegiro 85, Dongdaemun-gu, Seoul 130-722, Republic of Korea} \email{[email protected]} \thanks{2010 {\it Mathematics Subject Classification.} Primary 11R42; Secondary 11R16, 11R29.} \thanks{{\it key words and phrases.} cubic fields, class number, zeta function.} \maketitle \begin{abstract} Let ${\{K_m\}_{m\geq 4}}$ be the family of non-normal totally real cubic number fields defined by the irreducible cubic polynomial $f_m(x)=x^3-mx^2-(m+1)x-1$, where $m$ is an integer with $m\geq 4$. In this paper, we will give a class number one criterion for $K_m$. \end{abstract} \section{Introduction} \label{sec:introduction} It has been known for a long time that there exists close connection between prime producing polynomials and class number one problem for some number fields. Rabinowitsch\cite{Ra} proved that for a prime number $q$, the class number of $\mathbb{Q}(\sqrt{1-4q})$ is equal to one if and only if $k^2+k+q$ is prime for every $k=0,1,\ldots,q-2.$ For real quadratic fields, many authors\cite{BK1,BK2,Mo,Yo} considered the connection between prime producing polynomials and class number. For the simplest cubic fields, Kim and Hwang\cite{KH} gave a class number one criterion which is related to some prime producing polynomials. The aim of this paper is to give a class number one criterion for some non-normal totally real cubic fields. Its criterion provides some polynomials having almost prime values in a given interval. The method done in this paper is basically same as one in \cite{BK1,BK2,KH}. Let $\zeta_K(s)$ be the Dedekind zeta function of an algebraic number field $K$ and $\zeta_K(s,P)$ be the partial zeta function for the principal ideal class $P$ of $K$. Then we have \begin{displaymath} \zeta_K(-1) \leq \zeta_K(-1,P). \end{displaymath} Halbritter and Pohst\cite{HP} developed a method of expressing special values of the partial zeta functions of totally real cubic fields as a finite sum involving norm, trace, and 3-fold Dedekind sums. Their result has been exploited by Byeon\cite{B} to give an explicit formula for the values of the partial zeta functions of the simplest cubic fields. Kim and Hwang\cite{KH} gave a class number one criterion for the simplest cubic fields by estimating the value $\zeta_K(-1)$ and combining Byeon's result. In this paper, we will do this kind of work in some non-normal totally real cubic fields. First, we apply Halbritter and Pohst's formula to our cubic fields, and then evaluate the upper bound of $\zeta_K(-1)$ by using Siegel's formula. Finally, combining this computation, we give a class number one criterion for some non-normal totally real cubic fields. Halbritter and Pohst\cite{HP} proved: \begin{thm}\label{theorem:1.1} Let $K$ be a totally real cubic field with discriminant $\Delta$. For $\alpha \in K$, the conjugates are denoted by $\alpha'$ and $\alpha''$, respectively. Furthermore, for $\alpha \in K$, let $\textup{Tr}(\alpha):=\alpha+\alpha'+\alpha''$ and $\textup{N}(\alpha):=\alpha\cdot\alpha'\cdot\alpha''$. Let $\widehat{K}:=K(\sqrt{\Delta}),k\in \mathbb{N}, k\geq2$, and $\{\epsilon_1,\epsilon_2\}$ be a system of fundamental units of $K$. Define $L$ by $L:=\textup{ln}|\epsilon_1/\epsilon_1''|\,\textup{ln}|\epsilon_2'/\epsilon_2''|- \textup{ln}|\epsilon_1'/\epsilon_1''|\,\textup{ln}|\epsilon_2/\epsilon_2''|$. 
Let $W$ be an integral ideal of $K$ with basis $\{w_1,w_2,w_3\}$. Let $\rho=\widetilde{w_3}$ for a dual basis $\widetilde{w_1},\widetilde{w_2},\widetilde{w_3}$ of $W$ subject to $$\textup{Tr}(w_i \widetilde{w}_j)=\delta_{ij}(1\leq i, j \leq 3).$$ For $j=1,2$, set \begin{displaymath} E_j=\left(\begin{array}{rrr} 1 & 1 & 1\\ \epsilon_j & \epsilon_j' & \epsilon_j''\\ \epsilon_1\epsilon_2 & \epsilon_1'\epsilon_2' & \epsilon_1''\epsilon_2''\end{array}\right) \end{displaymath} and \begin{displaymath} B_\rho=\left(\begin{array}{rrr} \rho w_1 & \rho w_2 & \rho w_3\\ \rho' w_1' & \rho' w_2' & \rho' w_3'\\ \rho'' w_1'' & \rho'' w_2'' & \rho'' w_3'' \end{array}\right). \end{displaymath} For $\tau_1, \tau_2 \in K$, $\nu =1,2$, set $$M(k,\nu,\tau_1,\tau_2):=0$$ if $\textup{det} E_\nu=0$, otherwise \begin{eqnarray*} \lefteqn{M(k,\nu,\tau_1,\tau_2)} \\ & &: = \textup{sign}(L)(-1)^\nu [\widehat{K}:\mathbb{Q}]^{-1}\frac{(2\pi i)^{3k}}{(3k)!}\textup{N}(\rho)^k\\ & & \ \ \ \cdot \sum_{m_1=0}^{3k} \sum_{m_2=0}^{3k} \binom{3k}{m_1,m_2}\\ & & \ \ \ \cdot \{\frac{\textup{det}E_\nu}{|\textup{det}(E_\nu B_\rho)|^3} \mathbf{B}(3,m_1,m_2,3k-(m_1+m_2),(E_\nu B_\rho)^*,\mathbf{0})\\ & & \ \ \ \cdot \sum_{\kappa_1=0}^{k-1} \sum_{\kappa_2=0}^{k-1} \sum_{\mu_1=0}^{k-1} \sum_{\mu_2=0}^{k-1} \binom{m_1-1}{k-1-(\kappa_1+\kappa_2),k-1-(\mu_1+\mu_2)}\\ & & \ \ \ \cdot \binom{m_2-1}{\kappa_1,\mu_1} \binom{3k-1-(m_1+m_2)}{\kappa_2,\mu_2}\\ & & \ \ \ \cdot \textup{Tr}_{\widehat{K}/\mathbb{Q}}(\tau_1^{\kappa_1+\kappa_2} \tau_1'\,^{\mu_1+\mu_2} \tau_1''\,^{3k-2-(m_1+\kappa_1+\kappa_2+\mu_1+\mu_2)}\\ & & \ \ \ \cdot \tau_2^{\kappa_2} \tau_2'\,^{\mu_2} \tau_2''\,^{3k-1-(m_1+m_2+\kappa_2+\mu_2)})\}, \end{eqnarray*} where $(E_\nu B_\rho)^*$ denotes the transposed matrix of $(E_\nu B_\rho)$, and \begin{eqnarray*} \lefteqn{C(k,\nu,\tau_1,\tau_2)} \\ & & : = \textup{sign}(L)(-1)^{\nu+1}\frac{(2\pi i)^{3k}}{12\cdot(3k-2)(k-1)!^3 }\textup{N}(\rho)^k\\ & & \ \ \ \cdot \widetilde{B}_{3k-2}(0)|\textup{det}B_\rho|^{-1}\textup{sign}(\textup{det}E_\nu)\\ & & \ \ \ \cdot\{\textup{sign}((\tau_1\tau_2-\tau_1'\tau_2')(\tau_1-\tau_1'))+\textup{sign}((\tau_1'\tau_2'-\tau_1''\tau_2'')(\tau_1'-\tau_1''))\\ & & \ \ \ + \textup{sign}((\tau_1''\tau_2''-\tau_1\tau_2)(\tau_1''-\tau_1))+\textup{sign}(\tau_1''(\tau_1-\tau_1')(\tau_2'-\tau_2))\\ & & \ \ \ + \textup{sign}(\tau_1(\tau_1'-\tau_1'')(\tau_2''-\tau_2'))+\textup{sign}(\tau_1'(\tau_1''-\tau_1)(\tau_2-\tau_2''))\\ & & \ \ \ + \textup{N}(\tau_2)[\textup{sign}(\tau_1''(\tau_2-\tau_2')(\tau_1\tau_2-\tau_1'\tau_2'))\\ & & \ \ \ + \textup{sign}(\tau_1(\tau_2'-\tau_2'')(\tau_1'\tau_2'-\tau_1''\tau_2''))+ \textup{sign}(\tau_1'(\tau_2''-\tau_2)(\tau_1''\tau_2''-\tau_1\tau_2))]\}. \end{eqnarray*} Define \begin{eqnarray*} \zeta(k,W,\epsilon_1,\epsilon_2):= M(k,1,\epsilon_1,\epsilon_2)+M(k,2,\epsilon_2,\epsilon_1)\\ + C(k,1,\epsilon_1,\epsilon_2)+C(k,2,\epsilon_2,\epsilon_1). \end{eqnarray*} Let $\zeta_K(s,K_0)$ be the partial zeta function of an absolute ideal class $K_0$ of $K$ and $W\in K_0^{-1}$. Then we have \begin{equation}\label{eqn:1} \zeta_K(2k,K_0)=\frac{1}{2}\textup{Norm}(W)^{2k}\zeta(2k,W,\epsilon_1,\epsilon_2). \end{equation} \end{thm} \textit{Remark} 1. For $k,l,m \in \mathbb{Z}$, \begin{equation*} \binom{k}{l,m}:= \begin{cases}\frac{k!}{l! m! (k-(l+m))!} & \textup{if} \ \ k,l,m,k-(l+m)\in \mathbb{N}\cup \{0\}\\ (-1)^{l+m}\binom{l+m}{l} & \textup{if} \ \ k=-1 \ \textup{and} \ \ l,m\in \mathbb{N}\cup \{0\}\\ 0 & \textup{otherwise}.\\ \end{cases} \end{equation*} \textit{Remark} 2. 
Let $A=(a_{ij})_{n,n}$ be a regular $(n,n)$-matrix with integral coefficients, $(A_{ij})_{n,n}:=(\textup{det}A)A^{-1}$. Let \begin{equation*} \widetilde{B}_r(x):=\begin{cases} B_r(x-[x]) & \textup{if} \ \ r=0 \,\, \textup{or} \,\, r\geq2 \,\, \textup{or} \,\, r=1\wedge x\in \!\!\!\!\! / \ \mathbb{Z} \\ 0 & \textup{if} \ \ r=1\wedge x\in \mathbb{Z}, \end{cases} \end{equation*} where $B_r(y)$ is defined as usual by $ze^{yz}(e^z-1)^{-1}=\sum_{r=0}^\infty B_r(y)z^r/r!$. Then, for $\mathbf{r}=(r_1, \ldots, r_n)\in (\mathbb{N}\cup \{0\})^n$, \begin{equation*} \mathbf{B}(n,\mathbf{r},A,\mathbf{0})=\sum_{\kappa_1=0}^{|\textup{det}A|-1}\cdots\sum_{\kappa_n=0}^{|\textup{det}A|-1}\prod_{i=1}^n \widetilde{B}_{r_i}(\frac{1}{\textup{det}A}\sum_{j=1}^n A_{ij}\kappa_j). \end{equation*} Next, we introduce Siegel's formula for the values of the Dedekind zeta function of a totally real algebraic number field at negative odd integers. For an ideal $I$ of the ring of integers $\mathcal{O}_K$, we define the sum of ideal divisors function $\sigma_r(I)$ by \begin{equation} \sigma_r(I)=\sum_{J \mid I} N_{K/\mathbb{Q}}(J)^r, \end{equation} where $J$ runs over all ideals of $\mathcal{O}_K$ which divide $I$. Note that, if $K=\mathbb{Q}$ and $I=(n)$, our definition coincides with the usual sum of divisors function \begin{equation} \sigma_r(n)=\sum_{d \mid n \atop d>0} d^r. \end{equation} Now let $K$ be a totally real algebraic number field. For $l,b=1,2, \ldots ,$ we define \begin{equation}\label{eqn:6} S_l^{K}(2b)= \sum_{ \begin{smallmatrix} \nu \in \mathcal{D}_K^{-1} \\ \nu \gg\, 0 \\ \scriptsize{\textup{Tr}_{K/\mathbb{Q}}(\nu)=\, l} \\ \end{smallmatrix}} \sigma_{2b-1}((\nu)\mathcal{D}_K), \end{equation} where $\mathcal{D}_K$ is the different of $K$. At this moment, we remark that this is a finite sum. Siegel\cite{Si} proved: \begin{thm}\label{theorem:1.2} Let $b$ be a natural number, $K$ a totally real algebraic number field of degree $n$, and $h=2bn$. Then \begin{equation} \zeta_K(1-2b)=2^n \sum_{l=1}^r b_l(h)S_l^K(2b). \end{equation} The numbers $r\geq 1$ and $b_1(h), \ldots , b_r(h) \in \mathbb{Q}$ depend only on $h$. In particular, \begin{equation} r = \textup{dim}_\mathbb{C}\mathcal{M}_h, \end{equation} where $\mathcal{M}_h$ denotes the space of modular forms of weight $h$. Thus by a well-known formula, \begin{equation*} r=\begin{cases} [\frac{h}{12}] & \mbox{ if } \ \ h \equiv 2\ (\mbox{\rm mod}\, 12)\\ [\frac{h}{12}]+1 & \mbox{ if } \ \ h \equiv \!\!\!\!\! / \ 2\ (\mbox{\rm mod}\, 12).\\ \end{cases} \end{equation*} \end{thm} Now, we will introduce our target fields. Let $m (\geq 4)$ be a rational integer and $K_m$(or simply $K$)$=\mathbb{Q}(\alpha)$ be the non-normal totally real cubic number field (whose arithmetic was studied in \cite{Lou1}) associated with the irreducible cubic polynomial \begin{equation}\label{eqn:2} f_m(x)=x^3-mx^2-(m+1)x-1\in \mathbb{Z}[x] \end{equation} of positive discriminant \begin{equation*} D_m=(m^2+m-3)^2-32>0 \end{equation*} and with three distinct real roots $\alpha_3 < \alpha_2 < \alpha_1 = \alpha$. We borrow known results for arithmetic of $K_m$. \begin{thm}\label{theorem:1.3} \begin{enumerate} \item[{\rm(1)}] The set $\{1, \alpha, {\alpha}^2\}$ forms an integral basis of the ring $\mathcal{O}_K$ of algebraic integers of $K$ if and only if one of the following conditions holds true: \begin{enumerate} \item[{\rm(i)}] $m \equiv \!\!\!\!\! / \,\, 3\,(\mbox{\rm mod}\, 7)$ and $D_m$ is square-free, \item[{\rm(ii)}] $m \equiv 3\,(\mbox{\rm mod}\,7)$, $m \equiv \!\!\!\!\! 
/ \,\, 24\,(\mbox{\rm mod}\, 7^2)$ and $\frac{D_m}{7^2}$ is square-free. \end{enumerate} \item[{\rm(2)}] The full group of algebraic units of $K_m$ is ${<}{-1},\alpha,{\alpha+1}{>}$. \end{enumerate} \end{thm} \begin{proof} See \cite{Lou1}. \end{proof} \section{class number one criterion for $K_m$} In this section, to have the value of $\zeta_K(-1,P)$, we apply Theorem \ref{theorem:1.1} to $K_m$. On the other hand, we evaluate the upper bound of $\zeta_K(-1)$ by using Theorem \ref{theorem:1.2}. Finally, combining these results, we give a class number one criterion for $K_m$. We take $W=\mathcal{O}_K=(\alpha)$. Since the ideal class containing $\mathcal{O}_K$ is the principal ideal class $P$, by $(\ref{eqn:1})$, we have \begin{equation*} \zeta_K(2,P)=\frac{1}{2}\, \zeta(2,\mathcal{O}_K,\alpha,{\alpha+1}). \end{equation*} By definition, \begin{eqnarray*} \zeta(2,\mathcal{O}_K,\alpha,\alpha+1)= M(2,1,\alpha,\alpha+1)+M(2,2,\alpha+1,\alpha)\\ + C(2,1,\alpha,\alpha+1)+C(2,2,\alpha+1,\alpha). \end{eqnarray*} Let $\{\widetilde{w_1},\widetilde{w_2},\widetilde{w_3}\}$ be a dual basis of $\mathcal{O}_K$. Then, by a simple computation, we get \begin{eqnarray*} \rho=\widetilde{w_3}=\frac{-1}{D_{m}}\{(m^3+5m^2+5m+4)+(2m^3+7m^2+7m+9)\alpha\\-2(m^2+3m+3)\alpha^2\} \end{eqnarray*} This makes it possible to determine matrices $E_1, E_2$ and $B_\rho$. Now, we note that 3-fold Dedekind sum $\mathbf{B}(3,m_1,m_2,6-(m_1+m_2),(E_\nu B_\rho)^*,\mathbf{0})$ vanishes when $m_1$ or $m_2$ is odd. Next, we need the computation for trace. This computation is very long but elementary. Combining these data, we have \begin{eqnarray*} \lefteqn{M(2,1,\alpha,\alpha+1)=-(4m^9+54m^8+304m^7+979m^6}\\\nonumber &&+2119m^5+3234m^4+3327m^3+2067m^2+72m-714)\pi^6/2835D_m^{3/2} \end{eqnarray*} \begin{eqnarray*} \lefteqn{M(2,2,\alpha+1,\alpha)=(4m^9+54m^8+304m^7+985m^6}\\\nonumber &&+2137m^5+3204m^4+3237m^3+2091m^2+144m-714)\pi^6/2835D_m^{3/2} \end{eqnarray*} On the other hand, the calculation of $C(2,1,\alpha,\alpha+1)$(resp.~$C(2,2,\alpha+1,\alpha)$) is simpler than one of $M(2,1,\alpha,\alpha+1)$(resp.~$M(2,2,\alpha+1,\alpha)$). In fact, \begin{equation*} C(2,1,\alpha,\alpha+1)=\frac{2\pi^6}{45D_m^{3/2}}, \ C(2,2,\alpha+1,\alpha)=-\frac{2\pi^6}{45D_m^{3/2}}. \end{equation*} Then, by collecting these results, we have the following theorem. \begin{thm} Let $m(\geq4)$ be an integer which satisfies the conditions of Theorem $\ref{theorem:1.3}$ and $K_m$ the non-normal totally real cubic field defined by $(\ref{eqn:2})$. Let $P$ be the principal ideal class of $K_m$. Then we have \begin{equation*} \zeta_K(2,P)=\frac{m(m^5+3m^4-5m^3-15m^2+4m+12)\pi^6}{945(D_m)^{3/2}}. \end{equation*} Moreover, by a functional equation, \begin{equation*} \zeta_K(-1,P)=-\frac{m(m^5+3m^4-5m^3-15m^2+4m+12)}{7560}. \end{equation*} \end{thm} Next, by Theorem~\ref{theorem:1.2}, noting that $b_1(8)=-1/504$(cf.\ \cite{Za}), we have \begin{equation*} \zeta_K(-1)=-\frac{8}{504} S_1^K(2)=-\frac{8}{504}\sum_{\nu \in S_1} \sigma_1((\nu) \mathcal{D}_K), \end{equation*} where \begin{equation*} S_1:=\{\nu \in K|\, \nu \in \mathcal{D}_K^{-1}, \nu \gg 0, \textup{Tr}_{K/\mathbb{Q}}(\nu)=1\}. \end{equation*} Let $T$ be the set of integral points in $(s,t)$-plane corresponding to $S_1$ by one-to-one correspondence in [4, Proposition 2.1]. 
This set has been completely determined in [4, Theorem 2.3] as follows: \begin{eqnarray*} T \;=&\{& (1,1),\hspace{0.5cm}(1,2),\hspace{1cm} \ldots\hspace{1cm},\hspace{0.5cm} (1,m-1),\\ && (2,2),\hspace{0.5cm}(2,3),\hspace{1cm}\ldots \hspace{1cm},\hspace{0.5cm}(2,m),\\ && (3,3),\hspace{0.5cm}(3,4),\hspace{1cm}\ldots \hspace{1cm},\hspace{0.5cm}(3,m),\\ && \ldots\ldots\hspace{2cm}\ldots\ldots \hspace{1.5cm}\ldots\ldots,\\ && (m-2,m-2),(m-2,m-1),(m-2,m),\\ && (m-1,m) \: \}.\hspace{2cm} \end{eqnarray*} Furthermore, by (26) of \cite{CKL}, \begin{equation*} N((\nu) \mathcal{D}_K)= |f_m(s,t)|, \end{equation*} where \begin{eqnarray*} \lefteqn{ f_m(s,t)=(-s^2+(t+1)s)m^2+((t-2)s^2-(t^2-t)s-(t^2+t))m}\\\nonumber & & \ \ \ \ \ \ \ \ \ \ \ \ +(s^3-2s^2-(t^2-3t-1)s+t^3-t-1). \end{eqnarray*} One can easily check that $f_m(s,t) > 1$ for all $(s,t)\in T$, so that $\sigma_1((\nu)\mathcal{D}_K)\geq 1+N((\nu)\mathcal{D}_K)$ for every $\nu \in S_1$, with equality if and only if $(\nu)\mathcal{D}_K$ is a prime ideal. Therefore, we have the following inequalities: \begin{eqnarray}\label{eqn:3} \ \ \ \ \zeta_K(-1) & \leq & -\frac{8}{504}\sum_{\nu \in S_1}(1+N((\nu)\mathcal{D}_K))\\\nonumber & = & -\frac{8}{504}\{\sharp S_1+\sum_{\nu \in S_1}N((\nu)\mathcal{D}_K)\}\\\nonumber & = & -\frac{8}{504}\{\frac{1}{2}(m^2+m-6)+\sum_{(s,t) \in T}f_m(s,t)\}\\\nonumber & = & -\frac{8}{504}\{\frac{1}{2}(m^2+m-6)+\sum_{t=1}^{m-1}f_m(1,t)\\\nonumber & & \ \ \ \ \ \ \ \ \ +\sum_{s=2}^{m-2} \sum_{t=s}^m f_m(s,t)+f_m(m-1,m)\}\\\nonumber & = &-\frac{m(m^5+3m^4-5m^3-15m^2+4m+12)}{7560} \\\nonumber & = & \zeta_K(-1,P), \end{eqnarray} and equality holds in $(\ref{eqn:3})$ if and only if $(\nu) \mathcal{D}_K$ is a prime ideal for all $\nu \in S_1$. Combining these computations, we obtain a class number one criterion for $K_m$. \begin{thm} Let $m(\geq4)$ be an integer which satisfies the conditions of Theorem $\ref{theorem:1.3}$ and $K_m$ the non-normal totally real cubic field defined by $(\ref{eqn:2})$. Then we have \begin{equation*} h_K=1 \ \textup{if and only if} \ (\nu) \mathcal{D}_K \ \textup{is a prime ideal for all} \ \nu \in S_1. \end{equation*} \end{thm} On the other hand, Louboutin\cite{Lou1} showed: $$m=4,5,6,8\ \textup{are all the values of} \ m \ \textup{such that}\ h_K=1.$$ Therefore, we can conclude that \begin{equation*} m=4,5,6,8 \ \textup{if and only if} \ (\nu) \mathcal{D}_K \ \textup{is a prime ideal for all} \ \nu \in S_1. \end{equation*} \textit{Remark} 3. Unlike in the simplest cubic fields, which are Galois extensions of $\mathbb{Q}$, the norm $N((\nu) \mathcal{D}_K)= |f_m(s,t)|$ is not necessarily prime even when $(\nu) \mathcal{D}_K$ is a prime ideal for each $\nu \in S_1$. For example, $f_m(2,3)=({2m-5})^2$ for the point $(2,3)$ in $T$, but if $m=4,5,6,8$, then the integral ideal $(\nu) \mathcal{D}_K$ corresponding to the point $(2,3)$ is prime. Furthermore, when $m=4,5,6,8$, $f_m(s,t)$ is prime for all points of $T$ except $(2,3)$. \textbf{Acknowledgements.} The author would like to thank the referee for careful reading of the manuscript and helpful comments. \end{document}
\begin{document} \title{Very well--covered graphs via the Rees algebra} \author{Marilena Crupi, Antonino Ficarra} \address{Marilena Crupi, Department of mathematics and computer sciences, physics and earth sciences, University of Messina, Viale Ferdinando Stagno d'Alcontres 31, 98166 Messina, Italy} \email{[email protected]} \address{Antonino Ficarra, Department of mathematics and computer sciences, physics and earth sciences, University of Messina, Viale Ferdinando Stagno d'Alcontres 31, 98166 Messina, Italy} \email{[email protected]} \subjclass[2020]{13D02, 13P10, 13F55, 13H10, 05C75} \keywords{Rees algebras, Normality, Monomial ideals, Edge ideals, Very well--covered graphs} \maketitle \begin{abstract} A very well--covered graph is a well--covered graph without isolated vertices such that the size of its minimal vertex covers is half of the number of vertices. If $G$ is a Cohen--Macaulay very well--covered graph, we deeply investigate some algebraic properties of the cover ideal of $G$ via the Rees algebra associated to the ideal, and especially when $G$ is a whisker graph. \end{abstract} \section{Introduction} In this article, a \textit{graph} will always mean a finite undirected graph without loops or multiple edges. Let $G$ be a graph with the \textit{vertex set} $V(G)=\{x_1,\dots,x_n\}$ and the \textit{edge set} $E(G)$. Let $K$ be a field and let $R=K[x_1,\dots,x_n]$ be the standard graded polynomial ring with coefficients in $K$. If $x_i\in V(G)$, the \textit{neighborhood} of $x_i$ is the set \[N(x_i)=\{x_j\in V(G):x_ix_j\in E(G)\}.\] By abuse of notation, we use an edge $e=\{x_i, x_j\}$ interchangeably with the monomial $x_ix_j\in R$. Let $W \subseteq V(G)$. $W$ is called a \emph{vertex cover} if $e\cap W\ne\emptyset$ for all $e\in E(G)$, and it is called a \emph{minimal vertex cover} if no proper subset of $W$ is a vertex cover of $G$. The set of all minimal vertex covers of $G$ is denoted by $\mathcal{C}(G)$. Attached to $G$ \cite{RV1, RV} there are the \emph{edge ideal} of $G$, defined as \[I(G) = (x_ix_j\ :\ x_ix_j \in E(G)),\] and the \emph{cover ideal} of $G$, defined as \[J(G) = (x_{i_1}\cdots x_{i_s}\ :\ \{x_{i_1},\ldots, x_{i_s}\}\in\mathcal{C}(G)). \] The cover ideal $J(G)$ is the \emph{Alexander dual} of the edge ideal $I(G)$ and conversely. Indeed, the minimal primes of the edge ideals, correspond to the minimal vertex covers of their underlying graph. Hereafter, for an integer $n\ge1$, we set $[n]=\{1,\dots,n\}$. A graph $G$ is \emph{well--covered} or \emph{unmixed} if all its minimal vertex covers have the same cardinality. In particular, all the associated primes of $I(G)$ have the same height \cite{RV}. By \cite[Corollary 3.4]{GV}, for a well--covered graph $G$ without isolated vertices we have \[2\cdot\min\{|C|:C\in\mathcal{C}(G)\}\ \ge\ \vert V(G)\vert.\] If equality holds, the graph $G$ is called \textit{very well--covered}. Note that, if $G$ is a very well--covered graph, then its number of vertices is even. Finally, a graph $G$ is called \emph{Cohen–Macaulay} over the field $K$ if $R/I(G)$ is a Cohen--Macaulay ring, $R=K[x_i: x_i\in V(G)]$. It is well--known that a Cohen--Macaulay graph is well--covered \cite[Proposition 7.2.9]{RV}. The fundamental Eagon--Reiner theorem \cite[Theorem 8.1.9]{JT} says that $R/I(G)$ is a Cohen--Macaulay ring if and only if $J(G)$ has a linear resolution. Very well--covered graphs have been studied from view points of both Commutative Algebra and Combinatorics. 
See, for instance, \cite{CRT2011,OF,GV,KTY2018,KPFTY,KPTY2022,MMCRTY2011,Fak}. In \cite{CF2023} we deeply studied very well--covered graphs by means of \textit{Betti splittings} \cite{FHT2009}. Recently, many authors have employed the Betti splitting technique for studying algebraic and combinatorial properties of classes of monomial ideals (see, for instance, \cite{CFts1,CFL,FH2023} and references therein). In the present article, we continue the algebraic study of the class of Cohen--Macaulay very well--covered graphs started in \cite{CF2023}. For a graph $G$ in this class, our main tool will be the Rees algebra of the cover ideal $J(G)$. We show that if $G$ is a Cohen--Macaulay very well--covered graph, then the Rees algebra of $J(G)$ is a normal Cohen--Macaulay domain and, as a consequence, we obtain some relevant properties of the behavior of the powers of $J(G)$ when $G$ is a \emph{whisker graph}. Adding a \emph{whisker} to a graph $G$ at a vertex $v$ means adding a new vertex $w$ and an edge $vw$ to the set $E(G)$. If a whisker is added to every vertex of $G$, then the resulting graph, denoted by $G^\ast$, is called the \emph{whisker graph} or \emph{suspension} of $G$. It is important to point out that the whisker graph $G^\ast$ of a graph $G$ with $n$ vertices is a very well--covered Cohen--Macaulay graph with $2n$ vertices (see, for instance, \cite{KPFTY} and references therein). In \cite[Question 6.6]{HVT2023} it is asked in which way attaching whiskers to a graph $H$ gives rise to a graph $G$ such that $J(G)$ has linear powers. In Corollary \ref{Cor:J(G)LinQuot} we partially answer this question. See also \cite[Corollary 4.5]{LW2023} and \cite[Theorem 2.3]{MF2014}. Here is an outline of the article. In Section \ref{sec:2} we discuss a normality criterion for squarefree monomial ideals (Criterion \ref{Thm:CritNormality}). This result is borrowed from \cite{NQBM}. Section \ref{sec:3} deeply investigates the Rees algebra $\mathcal{R}(J(G))$ of $J(G)$, with $G$ a Cohen--Macaulay very well--covered graph. Our main result states that $\mathcal{R}(J(G))$ is a normal Cohen--Macaulay domain (Theorem \ref{Thm:R(J(G))CMnormal}). To obtain this result we use Criterion \ref{Thm:CritNormality} as well as the structure theorem of Cohen--Macaulay very well--covered graphs (Characterization \ref{char:veryWellCGCM}) stated in \cite{CRT2011}. In Section \ref{sec:4}, if $G$ is a whisker graph with $2n$ vertices, we prove that $J(G)$ satisfies the $\ell$-exchange property (Theorem \ref{Thm:J(G)ell-exchangeProp}). As a consequence, we show that $J(G)^k$ has linear quotients, for all $k\ge 1$, and hence that $J(G)$ has linear powers (Corollary \ref{Cor:J(G)LinQuot}). Furthermore, if $G$ is a whisker graph, we show that each power of $J(G)$ has homological linear quotients (Theorem \ref{Thm:HSJ(G)LinQuotRees}). This result supports a conjecture stated in \cite[Conjecture 4.4]{CF2023}. Moreover, as applications of the previous results, we compute the limit depth, the depth stability and the analytic spread of $J(G)$. Finally, if $G$ is a whisker graph with $2n$ vertices, we get a partial result on the structure of the reduced Gr\"obner basis of the presentation ideal of $\mathcal{R}(J(G))$ (Corollary \ref{cor:grobner}). At present we do not know the reduced Gr\"obner basis of the presentation ideal of $\mathcal{R}(J(G))$ in general.
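For very small graphs the presentation ideal can, of course, be computed directly by elimination. The following fragment is a rough illustrative sketch (in Python with SymPy; it is not the code behind the experiments mentioned below) for the whisker graph of a single edge, where $J(G)=(x_1x_2,\,x_1y_2,\,x_2y_1)$ and the variable $T_j$ stands for $t_{u_j}$; the $t$-free part of its output consists, up to sign and order, of the two binomials $x_2t_{u_2}-y_2t_{u_1}$ and $x_1t_{u_3}-y_1t_{u_1}$, in agreement with Corollary \ref{cor:grobner}.
\begin{verbatim}
# Illustrative elimination computation of the presentation ideal of R(J(G))
# for the whisker graph of a single edge, so J(G) = (x1*x2, x1*y2, x2*y1).
from sympy import symbols, groebner

x1, y1, x2, y2, t, T1, T2, T3 = symbols('x1 y1 x2 y2 t T1 T2 T3')
u = [x1*x2, x1*y2, x2*y1]                      # minimal generators of J(G)
rels = [T - ui*t for T, ui in zip((T1, T2, T3), u)]

# eliminate t: lex order with t largest, then keep the t-free basis elements
gb = groebner(rels, t, x1, y1, x2, y2, T1, T2, T3, order='lex')
print([p for p in gb.exprs if not p.has(t)])
# expected output: [x2*T2 - y2*T1, x1*T3 - y1*T1]
\end{verbatim}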
However, our experiments in \textit{Macaulay2} \cite{GDS} suggest that for a suitable monomial order, the reduced Gr\"obner basis is quadratic and hence that $\mathcal{R}(J(G))$ is Koszul (Conjecture \ref{Conj:R(J(G))Koszul}), for any Cohen--Macaulay very well--covered graph with $2n$ vertices. \section{A normality criterion for monomial ideals}\label{sec:2} Let $I$ be an ideal of a domain $R$. An element $f\in R$ is \textit{integral over $I$} if it satisfies an equation of the type $$ f^k+a_1f^{k-1}+\dots+a_{k-1}f+a_k=0, \ \ \ a_i\in I^i. $$ The set of all these elements, denoted by $\overline{I}$, is an ideal containing $I$ and called the \textit{integral closure} of $I$. We say that $I$ is \textit{integrally closed} if $\overline{I}=I$, and we say that $I$ is \textit{normal} if all its powers $I^k$, $k\ge1$, are integrally closed. Let $I$ be an ideal of a commutative ring $R$ generated by $u_1, \ldots, u_m$. The \textit{Rees algebra} of $I$, denoted by $\mathcal{R}(I)$ or $R[It]$, is the subring of $R[t]$, defined as follows $$ \mathcal{R}(I)= R[It]= R[u_1t, \ldots, u_mt]= \bigoplus_{k\ge0}I^kt^k \subset\ R[t], $$ where $t$ is a new variable. We quote the next fundamental result from \cite{HV} (see, also, \cite[Theorem 4.3.17]{RV}). \begin{Theorem} \label{thm:HVnormality} Let $I$ be an ideal of a normal domain $R$. Then the following are equivalent: \begin{enumerate} \item[\em(a)] $I$ is a normal ideal; \item[\em(b)] the Rees algebra $\mathcal{R}(I)$ is normal. \end{enumerate} \end{Theorem} This property will be crucial in the sequel. Now, let $R=K[x_1,\dots,x_n]$ be the standard graded polynomial ring with coefficients in a field $K$ and let $I$ be a monomial ideal of $R$. As usual we denote by $G(I)=\{u_1,\dots,u_m\}$ the unique minimal set of monomial generators of $I$. Then the Rees algebra of $I$ is the following $K$-algebra $$ \mathcal{R}(I)\ =\ K[x_1,\dots,x_n,u_1t,\dots,u_mt]\ \subset\ R[t]. $$ The next criterion quickly follows from \cite[Theorem 3.1]{NQBM} (see, also, \cite[Theorem 3.1]{NBR2022}). \begin{Criterion}\label{Thm:CritNormality} Let $I_1,I_2\subset K[x_2,\dots,x_{n}]$ be two squarefree monomial ideals. If $I_1\subseteq I_2$ are normal ideals, then $I=I_1+x_1I_2\subset R$ is a normal squarefree monomial ideal. \end{Criterion} Indeed, by \cite[Theorem 3.1]{NQBM}, it is enough to check that $I_1+I_2$ is normal and that $\textup{gcd}(x_1,u)=1$ for all $u\in G(I_1)\cup G(I_2)$. Since $I_1\subseteq I_2$, the first assertion follows because $I_1+I_2=I_2$ is normal by hypothesis. The second assertion follows because the generators of $I_1$ and $I_2$ are monomials of $K[x_2,\dots,x_n]$. \section{The Rees algebra}\label{sec:3} In this section we study the Rees algebra of the vertex cover ideal of a Cohen--Macaulay very well--covered graph. The main result in this section is the following. \begin{Theorem}\label{Thm:R(J(G))CMnormal} Let $G$ be a Cohen--Macaulay very well--covered graph. Then the Rees algebra $\mathcal{R}(J(G))$ is a normal Cohen--Macaulay domain. \end{Theorem} In order to prove it, we recall the following fundamental algebraic characterization of Cohen--Macaulay very well--covered graphs. \begin{Characterization}\label{char:veryWellCGCM} \textup{(\cite{CRT2011}, \cite[Lemma 3.1]{MMCRTY2011}).} Let $G$ be a very well--covered graph with $2n$ vertices. Then, the following conditions are equivalent. \begin{enumerate}[label=\textup{(\alph*)}] \item $G$ is Cohen--Macaulay. 
\item There exists a labeling of $V(G)=\{x_1,\dots,x_n,y_1,\dots,y_n\}$ such that \begin{enumerate}[label=\textup{(\roman*)}] \item $X=\{x_1,\dots,x_n\}$ is a minimal vertex cover of $G$ and $Y=\{y_1,\dots,y_n\}$ is a maximal independent set of $G$, \item $x_iy_i\in E(G)$ for all $i\in[n]$, \item if $x_iy_j\in E(G)$ then $i\le j$, \item if $x_iy_j\in E(G)$ then $x_ix_j\notin E(G)$, \item if $z_ix_j,y_jx_k\in E(G)$ then $z_ix_k\in E(G)$ for any distinct $i,j,k$ and $z_i\in\{x_i,y_i\}$. \end{enumerate} \end{enumerate} \end{Characterization} From now on, if $G$ is a Cohen--Macaulay very well--covered graph with $2n$ vertices, we tacitly assume that its set of vertices $V(G)=\{x_1,\dots,x_n,y_1,\dots,y_n\}$ satisfies the conditions \textup{(i)-(v)} of Characterization \ref{char:veryWellCGCM}, without having to relabel it. See \cite{CRT2011, CF2023} for more details on this topic. Hereafter, denote by $S$ the polynomial ring $K[x_1,\dots,x_n,y_1,\dots,y_n]$ in the $2n$ variables $x_1,\dots,x_n,y_1,\dots,y_n$ with coefficients in the field $K$. Let $F\subseteq[n]$ be a non empty set. We set ${\bf x}_F=\mathfrak{p}rod_{i\in F}x_i$, ${\bf y}_F=\mathfrak{p}rod_{i\in F}y_i$. Otherwise, we set ${\bf x}_{\emptyset}={\bf y}_{\emptyset}=1$. The \textit{support} of a monomial $u\in S$ is the set $$ \supp(u)=\{x_i:x_i\ \textit{divides}\ u\}\cup\{y_j:y_j\ \textit{divides}\ u\}. $$ If $W \subseteq V(G)$, we denote by $G\textup{set}minus W$ the subgraph of $G$ with the vertices of $W$ and their incident edges deleted. The following results were proved in \cite[Lemma 2.2, Proposition 2.1]{CF2023}. \begin{Lemma}\label{Lem:minimalGeneratorsI(G)^vee}\label{Prop:GremoveVerticesVeryWC} Let $G$ be a Cohen--Macaulay very well--covered graph with $2n$ vertices. \begin{enumerate} \item[\textup{(a)}] For each $u\in G(J(G))$ there exists a unique subset $F$ of $[n]$ such that $u={\bf x}_F{\bf y}_{[n]\textup{set}minus F}$. \item[\textup{(b)}] For any $A\subseteq[n]$, $G\textup{set}minus\{x_i,y_i:i\in A\}$ is a Cohen--Macaulay very well--covered graph. \end{enumerate} \end{Lemma} For a subset $C$ of $X\cup Y=\{x_1,\dots,x_n,y_1,\dots,y_n\}$, we define $$ {\bf z}_C= {\bf x}_{C_x}{\bf y}_{C_y}, $$ where $C_x=\{i:x_i\in C\}$ and $C_y=\{j:y_j\in C\}$. The next result holds true. \begin{Lemma}\label{Lemma:J(G)Decompx_1} Let $G$ be a Cohen--Macaulay very well--covered graph with $2n$ vertices. Then \begin{equation}\label{eq:J(G)x_1Decomp} J(G)\ =\ {\bf z}_{N(x_1)}J(G_1)+x_1J(G\textup{set}minus\{x_1,y_1\}), \end{equation} where $G_1=G\textup{set}minus\{x_{i},y_{i}:i\in N(x_1)_x\cup N(x_1)_y\}$. \end{Lemma} \begin{proof} The proof is similar to that of \cite[Proposition 2.3]{CF2023}. We include it for completeness. Let $u\in G(J(G))$. By Lemma \ref{Lem:minimalGeneratorsI(G)^vee}(a), either $x_1$ divides $u$ or $y_1$ divides $u$. \\ \textsc{Case 1.} Suppose $x_1$ divides $u$. Note that $N(y_1)=\{x_1\}$. Indeed, by Characterization \ref{char:veryWellCGCM}(i), $N(y_1)$ is a subset of $X$, since $Y$ is a maximal independent set. Moreover, by (iii) if $x_iy_1\in E(G)$ then $i\le 1$. Hence, $N(y_1)=\{x_1\}$. Consequently ${\bf z}_{N(y_1)}=x_1$ and the support $C'$ of $u/{\bf z}_{N(y_1)}=u/x_1$ is a vertex cover of $G\textup{set}minus\{x_1,y_1\}$. But $C'$ is a minimal vertex cover, for $u/x_1$ has degree $n-1$ and $G\textup{set}minus\{x_1,y_1\}$ is a Cohen--Macaulay very well--covered graph with $2(n-1)$ vertices (Lemma \ref{Prop:GremoveVerticesVeryWC}(b)). 
Thus $u/x_1\in G(J(G\textup{set}minus\{x_1,y_1\}))$ and so $u\in G(x_1J(G\textup{set}minus\{x_1,y_1\}))$. \\ \textsc{Case 2.} Suppose $y_1$ divides $u$. Since the support $C$ of $u$ is a minimal vertex cover of $G$ and $x_1\notin C$, then $z_i\in C$ for all $z_i\in N(x_1)$. Consequently, the support $C_1$ of $u/{\bf z}_{N(x_1)}$ is a vertex cover of $G_1$. But $C_1$ is a minimal vertex cover of $G_1$, for $|C_1|=n-|N(x_1)|$ and $G_1$ is a Cohen--Macaulay very well--covered graph with $2(n-|N(x_1)|)$ vertices (Lemma \ref{Prop:GremoveVerticesVeryWC}(b)). Hence $u\in G({\bf z}_{N(x_1)}J(G_1))$. These two cases show the inclusion ``$\subseteq$" in equation (\ref{eq:J(G)x_1Decomp}). The other inclusion is acquired as in the last part of the proof of \cite[Proposition 2.3]{CF2023}. \end{proof} \begin{Corollary}\label{Cor:normal} Let $G$ be a Cohen--Macaulay very well--covered graph with $2n$ vertices. Then $J(G)$ is a normal ideal. \end{Corollary} \begin{proof} By Lemma \ref{Lemma:J(G)Decompx_1}, equation (\ref{eq:J(G)x_1Decomp}) holds. Set $J=J(G)$, $J_1={\bf z}_{N(x_1)\textup{set}minus y_1}J(G_1)$ and $J_2=J(G\textup{set}minus\{x_1,y_1\})$. Thus $$ J\ =\ y_1J_1+x_1J_2. $$ Since $y_1J_1,J_2\subset K[x_2,\dots,x_n,y_1,y_2,\dots,y_n]$, it is enough to show that $J_1\subseteq J_2$. Then $y_1J_1\subset J_2$ and the result follows from Criterion \ref{Thm:CritNormality} and induction on $n$. Let $u\in G(J_1)$. We must prove that $u\in G(J_2)$, too. That is, we must show that $C=\supp(u)$ is a minimal vertex cover of $G\textup{set}minus\{x_1,y_1\}$. It is enough to prove $C$ is a vertex cover of $G\textup{set}minus\{x_1,y_1\}$. Minimality follows because $|C|=n-1$. Hence, we must prove that $e\cap C\ne\emptyset$ for all edges $e\in E(G\textup{set}minus\{x_1,y_1\})$. Let $e\in E(G\textup{set}minus\{x_1,y_1\})$. Since $y_1u\in G(J)$, it follows that $C\cup y_1$ is a minimal vertex cover of $G$. Hence $e\cap (C\cup y_1)\ne\emptyset$. Therefore $e\cap C\ne\emptyset$ because $y_1\notin e$. Our assertion follows. \end{proof} Finally, we are in the position to prove the main result in the section. \begin{proof}[Proof of Theorem \ref{Thm:R(J(G))CMnormal}] By Corollary \ref{Cor:normal}, $J(G)$ is a normal ideal. Hence, the Rees algebra $\mathcal{R}(J(G))$ is normal (Theorem \ref{thm:HVnormality}). Next, by a theorem of Hochster \cite{Hoc72}, since $\mathcal{R}(J(G))$ is a normal affine semigroup ring, it follows that $\mathcal{R}(J(G))$ is Cohen--Macaulay. \end{proof} The \textit{toric ring} of $G(J(G))$ is the $K$-algebra $K[J(G)]=K[u:u\in G(J(G))]\subset S$. \begin{Corollary} Let $G$ be a Cohen--Macaulay very well--covered graph. Then the toric ring $K[J(G)]$ is a normal Cohen--Macaulay domain. \end{Corollary} \begin{proof} Since $J(G)$ is generated in one degree, the statement follows from Theorem \ref{Thm:R(J(G))CMnormal} together with \cite[Proposition 4.3.42]{RV}. \end{proof} Now let $I$ be an ideal of a noetherian ring $R$. As usual, denote by $V(I)$ the set of prime ideals containing $I$ and by $\textup{Ass}(I)$ the set of associated prime ideals of $R/I$. For all $P\in \Spec(R)$, we denote by $\mathfrak{m}_P$ the maximal ideal of the local ring $R_P$. Recall that $I$ satisfies the \textit{persistence property} (with respect to associated ideals) if $$ \textup{Ass}(I)\subseteq\textup{Ass}(I^2)\subseteq\textup{Ass}(I^3)\subseteq\cdots. $$ In \cite{HQ15}, Herzog and Qureshi introduced the notion of \textit{strong persistence property}. More in detail, let $P\in V(I)$. 
We say that $I$ satisfies the \textit{strong persistence property with respect to $P$} if for all $k$ and all $f\in (I^k_P:\mathfrak{m}_P) \setminus I^k_P$ there exists $g\in I_P$ such that $fg\notin I^{k+1}_P$. The ideal $I$ is said to satisfy the \textit{strong persistence property} if it satisfies the strong persistence property for all $P\in V(I)$. One can verify that the strong persistence property implies the persistence property \cite{HQ15} (see, also, \cite[Proposition 2.1]{NQBM}). Theorem \ref{Thm:R(J(G))CMnormal} yields the next result. \begin{Corollary} Let $G$ be a Cohen--Macaulay very well--covered graph. Then $J(G)$ satisfies the strong persistence property, and in particular, the persistence property. \end{Corollary} \begin{proof} The assertion follows from Theorem \ref{Thm:R(J(G))CMnormal} and \cite[Corollary 1.6]{HQ15}. \end{proof} \section{Whisker graphs}\label{sec:4} In this section we study some algebraic properties of the powers of the cover ideals of a special class of Cohen--Macaulay very well--covered graphs. Our main tool is the so-called $\ell$-exchange property introduced in \cite{HHV2005}. Let $I\subset R=K[x_1,\dots,x_n]$ be a monomial ideal generated in one degree, and let $K[I]=K[u:u\in G(I)]$ be the toric ring of $G(I)$. Then $K[I]$ has the presentation $\psi:T=K[t_u:u\in G(I)]\rightarrow K[I]$ defined by $\psi(t_u)=u$ for all $u\in G(I)$. The kernel $\Ker(\psi)=J$ is called the \textit{toric ideal} of $K[I]$. Fix a monomial order $>$ on $T$. We say that the monomial $t_{u_1}\cdots t_{u_N}\in T$ is \textit{standard with respect to $>$} if $t_{u_1}\cdots t_{u_N}$ does not belong to the initial ideal, $\text{in}_>(J)$, of the toric ideal $J$ of $K[I]$. Let $u\in S$ be a monomial. The \textit{$x_i$-degree} and the \textit{$y_i$-degree} of $u$ are the integers $\deg_{x_i}(u)=\max\{j:x_i^j\ \textit{divides}\ u\}$ and $\deg_{y_i}(u)=\max\{j:y_i^j\ \textit{divides}\ u\}$, respectively. \begin{Definition}{\rm(}\cite[Definition 3.3]{DHQ}{\rm)}. The equigenerated monomial ideal $I\subset R$ satisfies the \textit{$\ell$-exchange property with respect to $>$}, if the following condition is satisfied: for all standard monomials $t_{u_1}\cdots t_{u_N}$, $t_{v_1}\cdots t_{v_N}\in T$ of degree $N$ such that \begin{enumerate} \item[\em(i)] $\deg_{x_i}(u_1\cdots u_N)=\deg_{x_i}(v_1\cdots v_N)$, for all $1\le i\le j-1$ with $j\le n-1$, \item[\em(ii)] $\deg_{x_j}(u_1\cdots u_N)<\deg_{x_j}(v_1\cdots v_N)$, \end{enumerate} there exist $h$ and $k$ with $j< h \le n$ and $1\le k\le N$, such that $x_j(u_k/x_h)\in G(I)$. \end{Definition} The following lemmata will be needed later. \begin{Lemma}\label{Lem:againCover} Let $G$ be a Cohen--Macaulay very well--covered graph with $2n$ vertices. Let $C\in\mathcal{C}(G)$ be such that $C_y\ne\emptyset$ and let $i=\min C_y$. Then $(C\setminus y_i)\cup x_i\in\mathcal{C}(G)$. \end{Lemma} \begin{proof} First we prove that $C'=(C\setminus y_i)\cup x_i$ is a vertex cover of $G$. Let $e\in E(G)$; we must show that $e\cap C'$ is non empty. Since $C$ is a vertex cover of $G$, we have $e\cap C\ne\emptyset$. If $\{x_i,y_i\}\cap e=\emptyset$, then $e\cap C'\ne\emptyset$, too. If $x_i\in e$ then $e\cap C'$ contains $x_i$ and therefore the intersection is non empty. Finally, suppose $y_i\in e$ but $x_i\notin e$. Since $Y$ is a maximal independent set, it follows that $N(y_i)\subseteq X$. Hence, $e=x_jy_i$ for some $j$. By Characterization \ref{char:veryWellCGCM}(iii) we have $j\le i$.
Thus $j<i$, because $x_i\notin e$. Since $i=\min C_y$ and $j<i$, it follows from Lemma \ref{Lem:minimalGeneratorsI(G)^vee}(a) that $x_j\in C$. Hence $x_j\in e\cap C'$ and again the intersection is non empty. The fact that $C'$ is a minimal vertex cover of $G$ follows because $|C'|=n$. \end{proof} \begin{Lemma}\label{Lemma:degx+degy=const} Let $G$ be a Cohen--Macaulay very well--covered graph with $2n$ vertices. Then, for all $k\ge1$, all $i\in[n]$ and all $u\in G(J(G)^k)$ we have $$ \deg_{x_i}(u)+\deg_{y_i}(u)=k. $$ \end{Lemma} \begin{proof} By Lemma \ref{Lem:minimalGeneratorsI(G)^vee}(a), for all $u\in G(J(G))$, we have $$ \deg_{x_i}(u)+\deg_{y_i}(u)=1\ \ \text{for all}\ \ 1\le i\le n. $$ Since $J(G)$ is generated in a single degree, the minimal generators of $J(G)^k$ are the products $u=u_1\cdots u_k$ of $k$ arbitrary monomials of $G(J(G))$. Hence, for all $1\le i\le n$, we have $\deg_{x_i}(u)+\deg_{y_i}(u)=\sum_{j=1}^k[\deg_{x_i}(u_j)+\deg_{y_i}(u_j)]=k$. \end{proof} Now, we consider a special but wide class of Cohen--Macaulay very well--covered graphs. Let $H$ be a graph on the vertex set $X=\{x_1,\dots,x_n\}$ and take a new set of variables $Y=\{y_1,\dots,y_n\}$. Then, the \textit{whisker graph} $G=H^*$ of $H$ is the graph obtained from $H$ by attaching to each vertex $x_i$ a new vertex $y_i$ and the edge $x_iy_i$. The edge $x_iy_i$ is called a \textit{whisker}. More in detail, the \textit{whisker graph} $G=H^*$ of $H$ is the graph on the vertex set $X\cup Y=\{x_1,\dots,x_n\}\cup\{y_1,\dots,y_n\}$ and the edge set $E(G)\cup\{x_1y_1,x_2y_2,\dots,x_ny_n\}$. We have already underlined in the introduction that the whisker graph of a given graph with $n$ vertices is a Cohen--Macaulay very well--covered graph with $2n$ vertices (see, also, \cite[Corollary 4.3]{CF2023}, proof). With the same notation as before, the next result holds true. \begin{Lemma}\label{Lem:againCoverWhisker} Let $G$ be a whisker graph with vertex set $X\cup Y$. Then, for all $C\in\mathcal{C}(G)$ and all $y_i\in C$ we have that $(C\textup{set}minus y_i)\cup x_i\in\mathcal{C}(G)$. \end{Lemma} \begin{proof} By assumption $G=H^*$, for some graph $H$ on the vertex set $X= \{x_1, \ldots, x_n\}$, thus a Cohen--Macaulay very well--covered graph with the $2n$ vertices $x_1,\dots,x_n,$ $y_1,\dots,$ $y_n$. Since for all $i$, the only vertex adjacent to $y_i$ is $x_i$, then for any labeling of $X\cup Y$, the conditions (i)--(v) of Characterization \ref{char:veryWellCGCM} are satisfied. Hence, if $C\in\mathcal{C}(G)$ and $y_i\in C$, we can choose a labeling such that $\min_yC=i$. The assertion follows by applying Lemma \ref{Lem:againCover}. \end{proof} From now on, when we tell about a whisker graph $G$ with $2n$ vertices, we implicitly assume that \begin{enumerate} \item[-] $G$ is the whisker graph associated to a given graph whose vertex set is the set $X=\{x_1,\dots,x_n\}$, and with whiskers $x_iy_i$, $i=1, \ldots, n$, that is, $V(G)=X\cup Y$, with $Y=\{y_1,\dots,y_n\}$. \item[-] $G$ is a very well--covered Cohen--Macaulay graph whose vertex set $X\cup Y$ satisfies the conditions (i)--(v) of Characterization \ref{char:veryWellCGCM}. \end{enumerate} \begin{Theorem}\label{Thm:J(G)ell-exchangeProp} Let $G$ be a whisker graph with $2n$ vertices. Then $J(G)$ satisfies the $\ell$-exchange property with respect to the lexicographic order $>_{\lex}$ induced by $x_1>y_1>x_2>y_2>\cdots>x_n>y_n$. \end{Theorem} \begin{proof} Set $z_{2p-1}=x_{p}$ and $z_{2p}=y_{p}$, for $p=1,\dots,n$. Then $$ z_{1}>z_{2}>z_{3}>z_{4}>\dots>z_{2n-1}>z_{2n}. 
$$ We prove the following slightly more general statement. \\ $(*)$ For all monomials $t_{u_1}\cdots t_{u_N}$, $t_{v_1}\cdots t_{v_N}$ of $K[t_u:u\in G(J(G))]$ such that \begin{enumerate} \item[(i)] $\deg_{z_i}(u_1\cdots u_N)=\deg_{z_i}(v_1\cdots v_N)$, for all $1\le i\le j-1$ with $j\le 2n-1$, \item[(ii)] $\deg_{z_j}(u_1\cdots u_N)<\deg_{z_j}(v_1\cdots v_N)$, \end{enumerate} there exist $h$ and $k$, with $j<h\le 2n$ and $1\le k\le N$, such that $z_j(u_k/z_h)\in G(J(G))$. Let $t_{u_1}\cdots t_{u_N}$, $t_{v_1}\cdots t_{v_N}$ monomials of $K[t_u:u\in G(J(G))]$ satisfying the conditions (i) and (ii). We claim that the integer $j$ is odd. Suppose for a contradiction that $j$ is even, then $j=2p$ for some $p\in[n]$. Thus $z_j=y_p$ and \begin{equation}\label{eq:degxy1} \deg_{y_p}(u_1\cdots u_N)<\deg_{y_p}(v_1\cdots v_N). \end{equation} On the other hand, since $J(G)$ is generated in a single degree, $u_1\cdots u_N$ and $v_1\cdots v_N$ belong to $G(J(G)^N)$. Thus, Lemma \ref{Lemma:degx+degy=const} gives \begin{align} \label{eq:degxy2}\deg_{x_p}(u_1\cdots u_N)+\deg_{y_p}(u_1\cdots u_N)=\deg_{x_p}(v_1\cdots v_N)+\deg_{y_p}(v_1\cdots v_N)=N. \end{align} Equations (\ref{eq:degxy1}) and (\ref{eq:degxy2}) yield $\deg_{x_p}(u_1\cdots u_N)>\deg_{x_p}(v_1\cdots v_N)$, but this contradicts condition (i), since $x_p=z_{j-1}$. Hence $j$ is odd, and so $z_j=x_{p}$ for some $p\in[n]$. Since $\deg_{x_p}(u_1\cdots u_N)<\deg_{x_p}(v_1\cdots v_N)\le N$, by Lemma \ref{Lemma:degx+degy=const} it follows that $\deg_{y_p}(u_1\cdots u_N)>0$. Hence, there exists $k$ with $1\le k\le N$ such that $y_p$ divides $u_k$. By Lemma \ref{Lem:againCoverWhisker}, it follows that $x_p(u_k/y_p)\in G(J(G))$. Since $y_p=z_{j+1}$, and $z_j>z_{j+1}$, the claim $(*)$ is proved. \end{proof} Recall that an ideal $I$ of a polynomial ring $R=K[x_1,\dots,x_n]$ is said to have \textit{linear powers} if $I^k$ has linear resolution, for all $k\ge1$. Moreover, a monomial ideal $I$ of $R$, has \textit{linear quotients} if for some order $u_1,\dots,u_m$ of its minimal generating set $G(I)$, all colon ideals $(u_1,\dots,u_{\ell-1}):u_{\ell}$, $\ell=2,\dots,m$, are generated by a subset of the set of variables $\{x_1,\dots,x_n\}$. As a first consequence of Theorem \ref{Thm:J(G)ell-exchangeProp} we prove that the cover ideal of a whisker graph $G$ has linear powers. Such a result has been recently obtained in \cite[Corollary 4.5]{LW2023} (see, also, \cite[Theorem 2.3]{MF2014}) by showing that the ordinary powers of $J(G)$ are weakly polymatroidal, which implies having linear powers. \begin{Corollary}\label{Cor:J(G)LinQuot} Let $G$ be a whisker graph with $2n$ vertices. Then, \begin{enumerate} \item[\em(a)] for all $k\ge1$, $J(G)^k$ has linear quotients with respect to the lexicographic order $>_{\lex}$ induced by $x_1>y_1>x_2>y_2>\cdots>x_n>y_n$. \item[\em(b)] $J(G)$ has linear powers. In particular, the depth function $\depth S/J(G)^k$ is a non-increasing function of $k$, that is, $\depth S/J(G)^k\ge\depth S/J(G)^{k+1}$ for all $k\ge1$. \end{enumerate} \end{Corollary} \begin{proof} (a) Since $J(G)$ is generated in a single degree, each minimal monomial generator of $J(G)^N$ is a product $u_1\cdots u_N$ of $N$ arbitrary, non necessarily distinct, monomials $u_i\in G(J(G))$. Let $u=u_1\cdots u_N\in G(J(G)^N)$, where each $u_i\in G(J(G))$. Setting $P=(v_1\cdots v_N:v_i\in G(J(G)),v_1\cdots v_N>_{\lex}u_1\cdots u_N)$, we must prove that the ideal $P:u$ is generated by variables. Let $v=v_1\cdots v_N\in G(P)$. 
Using the labeling $z_i$ on the variables, given in the proof of Theorem \ref{Thm:J(G)ell-exchangeProp}, by the definition of $>_{\lex}$, for some $i$ and $j$ we have \begin{enumerate} \item[(i)] $\deg_{z_i}(v_1\cdots v_N)=\deg_{z_i}(u_1\cdots u_N)$, for all $1\le i\le j-1$ with $j\le 2n-1$, \item[(ii)] $\deg_{z_j}(v_1\cdots v_N)>\deg_{z_j}(u_1\cdots u_N)$. \end{enumerate} Hence, by the property $(*)$ proved in Theorem \ref{Thm:J(G)ell-exchangeProp}, there exist integers $k$ and $h$ such that $z_j=x_{h}$ and $x_{h}(u_k/y_h)\in G(J(G))$. Since $x_h>y_h$, we have $x_{h}(u_k/y_{h})>_{\lex}u_k$. Consequently, $$ u'=x_h(u/y_h)=u_1\cdots u_{k-1}\cdot x_h(u_k/y_h)\cdot u_{k+1}\cdots u_N\in P $$ and $x_h=u'/\textup{gcd}(u',u)\in P:u$ divides the monomial $v/\textup{gcd}(v,u)\in P$. Indeed, the set $\{v/\textup{gcd}(v,u):v\in G(P)\}$ generates $P:u$ (\cite[Proposition 1.2.2]{JT}). Hence, we see that $P:u$ is generated by variables, as desired.\\ (b) That $J(G)$ has linear powers follows from (a) and the fact that all powers $J(G)^k$ are monomial ideals generated in a single degree. The claim about the non-increasingness of the function $\depth S/J(G)^k$ follows from \cite[Proposition 10.3.4]{JT}. \end{proof} Let us briefly recall the concept of \textit{homological shift ideal} \cite{Bay019,BJT019,CF2023,F2,F2Pack,FH2023,HMRZ021a,HMRZ021b}. For ${\bf a}=(a_1,\dots,a_n)\in{\NZQ Z}_{\ge0}^n$, we set ${\bf x^a}=x_1^{a_1}\cdots x_n^{a_n}$. Let $I\subset S$ be a monomial ideal, then $\textup{HS}_i(I)=({\bf x^a}:\beta_{i,\bf a}(I)\ne0)$ is the \emph{$i$th homological shift ideal} of $I$, where $\beta_{i,\bf a}(I)$ is a multigraded Betti number. Note that $\textup{HS}_0(I)=I$ and $\textup{HS}_i(I)=0$ if $i<0$ or $i>\textup{proj}\mathfrak{p}hantom{.}\!\textup{dim}(I)$. A basic goal is to determine those homological properties satisfied by all $\textup{HS}_i(I)$, the so-called \textit{homological shift properties} of $I$ \cite{F2}. In \cite{CF2023}, we posed the following conjecture: \begin{Conjecture}\label{Conj:PowersHSCMverywell} \textup{(\cite[Conjecture 4.4]{CF2023}).} Let $G$ be a Cohen--Macaulay very well--covered graph with 2n vertices. Then $\textup{HS}_k(J(G)^\ell)$ has linear quotients, for all $k\ge0$, and all $\ell\ge1$. \end{Conjecture} In \cite{CF2023}, we gave a positive answer to this conjecture for $\ell=1$ \cite[Theorem 4.4]{CF2023} and for all Cohen--Macaulay bipartite graphs \cite[Corollary 4.11]{CF2023}. Now we prove that the powers of cover ideals of whisker graphs have homological linear quotients, partially answering Conjecture \ref{Conj:PowersHSCMverywell}. \begin{Theorem}\label{Thm:HSJ(G)LinQuotRees} Let $G$ be a whisker graph with $2n$ vertices. Then, for all $\ell\ge1$ and all $k\ge0$, $\textup{HS}_k(J(G)^\ell)$ has linear quotients with respect to the lexicographic order $>_{\lex}$ induced by $x_1>x_2>\cdots>x_n>y_1>y_2>\cdots>y_n$. \end{Theorem} \begin{proof} Let $>$ be the lexicographic order induced by $x_1>y_1>x_2>y_2>\cdots>x_n>y_n$. Then, by Corollary \ref{Cor:J(G)LinQuot}(a), $J(G)^\ell$ has linear quotients with respect to $>$ for all $\ell\ge1$. Let $u\in G(J(G)^\ell)$, we define $$ \textup{set}(u)\ =\ \{i\ :\ z_i\in\{x_i,y_i\},z_i\in(v\in G(J(G)^\ell):v>u):(u)\}. $$ The definition of the order $>$ and Lemma \ref{Lemma:degx+degy=const} imply that the set of variables generating the ideal $(v\in G(J(G)^\ell):v>u):(u)$ is a subset of $X=\{x_1,\dots,x_n\}$. 
Thus, by \cite[Proposition 1.2]{F2} we have $$ \textup{HS}_k(J(G)^\ell)\ =\ \big({\bf x}_Fu\ :\ u\in G(J(G)^{\ell}),\ F\subseteq\textup{set}(u),\ |F|=k\big). $$ Let ${\bf x}_Dv\in G(\textup{HS}_k(J(G)^{\ell}))$, $D\subseteq\textup{set}(v)$, $v\in G(J(G)^{\ell})$ and consider the colon ideal $$ P\ =\ ({\bf x}_Fu\in G(\textup{HS}_k(J(G)^{\ell})):{\bf x}_Fu>_{\lex}{\bf x}_Dv):({\bf x}_Dv). $$ We must prove that $P$ is generated by variables. Let ${\bf x}_Fu\in G(\textup{HS}_k(J(G)^{\ell}))$, $F\subseteq\textup{set}(u)$, $u\in G(J(G)^\ell)$, such that ${\bf x}_Fu>_{\lex}{\bf x}_Dv$. Let $h=\textup{lcm}({\bf x}_Fu,{\bf x}_Dv)/({\bf x}_Dv)$. If $\deg(h)=1$, $h$ is a variable. Assume $\deg(h)>1$. Let $z_i$ be the labeling on the variables such that $z_1=x_1$, $z_2=x_2$, $\dots$, $z_n=x_n$, $z_{n+1}=y_1$, $z_{n+2}=y_2$, $\dots$, $z_{2n}=y_n$. Then, by definition of $>_{\lex}$, there exists $p$ such that $\deg_{z_j}({\bf x}_Fu)=\deg_{z_j}({\bf x}_Dv)$ for all $j<p$ and \begin{equation}\label{eq:ineqdeg} \deg_{z_p}({\bf x}_Fu)>\deg_{z_p}({\bf x}_Dv). \end{equation} Now, we distinguish two cases. \\ \textsc{Case 1.} Suppose $z_p=x_i$ for some $i$. We claim that \begin{equation}\label{eq:boundell} \deg_{x_i}({\bf x}_Fu)\le\ell. \end{equation} Indeed, by Lemma \ref{Lemma:degx+degy=const} and the structure of ${\bf x}_Fu$, it follows that $\deg_{x_i}({\bf x}_Fu)\le\ell+1$. Suppose by contradiction that $\deg_{x_i}({\bf x}_Fu)=\ell+1$, then $i\in\textup{set}(u)$. Necessarily $y_i$ must divide $u$. But this would imply that $\deg_{x_i}({\bf x}_Fu)+\deg_{y_i}({\bf x}_Fu)$ exceeds $\ell+1$, which is impossible. Hence, equation (\ref{eq:boundell}) follows. By Lemma \ref{Lemma:degx+degy=const} and equations (\ref{eq:ineqdeg}) and (\ref{eq:boundell}), we have $\ell\ge\deg_{x_i}({\bf x}_Fu)>\deg_{x_i}({\bf x}_Dv)$ and $\deg_{y_i}({\bf x}_Dv)>0$. Writing $v=v_1v_2\cdots v_\ell$, with each $v_q\in G(J(G))$, we have that $y_i$ divides $v_q$ for some $q$. Then $i\in\textup{set}(v)$. Indeed, $x_i(v_q/y_i)\in G(J(G))$ and $v'=x_i(v/y_i)=v_1\cdots v_{q-1}(x_i(v_q/y_i))v_{q+1}\cdots v_{\ell}\in G(J(G)^\ell)$. We distinguish two cases. \\ \textsc{Subcase 1.1.} Let $i\notin D$. Then ${\bf x}_Dv'\in G(\textup{HS}_k(J(G)^\ell))$ and ${\bf x}_Dv'>_{\lex}{\bf x}_Dv$. Moreover, $\textup{lcm}({\bf x}_Dv',{\bf x}_Dv)/({\bf x}_Dv)=x_i\in P$ divides $h$. \\ \textsc{Subcase 1.2.} Let $i\in D$. Then $\deg_{x_i}({\bf x}_Dv)+\deg_{y_i}({\bf x}_Dv)=\ell+1$ (Lemma \ref{Lemma:degx+degy=const}). Since by (\ref{eq:ineqdeg}) and (\ref{eq:boundell}) $\deg_{x_i}({\bf x}_Dv)<\ell$, it follows that $\deg_{y_i}(v)\ge2$. Therefore, there exist $h_1\ne h_2$ such that $y_i$ divides $v_{h_1}$ and $v_{h_2}$. Set $v'=v_1\cdots (x_i(v_{h_1})/y_i)\cdots v_{h_2}\cdots v_\ell$. Then, it follows that $v'\in G(J(G)^\ell)$ and $D\subseteq\textup{set}(v')$. Thus ${\bf x}_Dv'>_{\lex}{\bf x}_Dv$ and moreover $\textup{lcm}({\bf x}_Dv',{\bf x}_Dv)/({\bf x}_Dv)=x_i\in P$ divides $h$. \\ \textsc{Case 2.} Suppose $z_p=y_i$ for some $i$. For all $j$ such that $\deg_{y_j}({\bf x}_Fu)>\deg_{y_j}({\bf x}_Dv)$, since $\deg_{x_j}({\bf x}_Fu)=\deg_{x_j}({\bf x}_Dv)$, Lemma \ref{Lemma:degx+degy=const} gives \begin{align*} \ell+1=\deg_{x_j}({\bf x}_Fu)+\deg_{y_j}({\bf x}_Fu)>\deg_{x_j}({\bf x}_Dv)+\deg_{y_j}({\bf x}_Dv)=\ell. \end{align*} Hence $j\notin D$ and $\deg_{y_j}({\bf x}_Fu)-\deg_{y_j}({\bf x}_Dv)=1$. Let $j_1,\dots,j_t$ be the integers such that $\deg_{y_{j_s}}({\bf x}_Fu)>\deg_{y_{j_s}}({\bf x}_Dv)$, $s=1,\dots,t$. 
Then, the above argument shows that $h=y_{j_1}y_{j_2}\cdots y_{j_t}$ and $j_s\notin D$ for all $s=1,\dots,t$. Since $\deg(h)>1$ we have $t\ge2$. As before $v'=x_{j_2}\cdots x_{j_t}v/(y_{j_2}\cdots y_{j_t})\in G(J(G)^{\ell})$, $D\subseteq\textup{set}(v')$ and ${\bf x}_Dv'>_{\lex}{\bf x}_Dv$. Finally $\textup{lcm}({\bf x}_Dv',{\bf x}_Dv)/({\bf x}_Dv)=y_{j_1}\in P$ divides $h$ . The above \textsc{Cases 1} and 2 show that $P$ is generated by variables, as wanted. \end{proof} Another relevant consequence of Theorem \ref{Thm:J(G)ell-exchangeProp} concerns the \textit{limit depth} of $J(G)$. The role of the Rees algebra of the cover ideal $J(G)$ will be crucial to calculating it. Let $I$ be a graded ideal of a polynomial ring $R$ with $n$ variables. By a theorem of Brodmann \cite{Brod79}, $\depth R/I^k$ is constant for $k$ large enough. This eventually constant value is called the \textit{limit depth} of $I$, and it is denoted by $\lim_{k\rightarrow\infty}\depth R/I^k$. The \textit{depth stability} of $I$, denoted by $\textup{dstab}(I)$, is the least integer $k_0$ such that $\depth R/I^k=\depth R/I^{k_0}$ for all $k\ge k_0$. Brodmann proved that $$ \lim_{k\rightarrow\infty}\depth R/I^k\le n-\ell(I), $$ where $\ell(I)$ is the \textit{analytic spread} of $I$, that is, the Krull dimension of the fiber ring $\mathcal{R}(I)/\mathfrak{m}\mathcal{R}(I)$, where $\mathfrak{m}$ is the maximal graded ideal of $R$. If the Rees algebra of $I$ is Cohen--Macaulay, then by \cite[Proposition 10.3.2]{JT} (see, also, \cite{EH83} combined with \cite[Proposition 1.1]{Huneke82}), we have \begin{equation}\label{eq:limdepthCM} \lim_{k\rightarrow\infty}\depth R/I^k=n-\ell(I). \end{equation} Hence, we have \begin{Theorem}\label{Thm:ell(J(G))R(J(G))} Let $G$ be a whisker graph with $2n$ vertices. Then $$ \lim_{k\rightarrow\infty}\depth S/J(G)^k\ =\ n-1. $$ Moreover, $\textup{dstab}(J(G))\le n$ and $\ell(J(G))=n+1$. \end{Theorem} For the proof of this result, we need the next more general lemma. Let $u\in S$ be a monomial. Using the notation in Section \ref{sec:3}, we have $\supp(u)_x=\{i:x_i\ \textit{divides}\ u\}$ and $\supp(u)_y=\{i:y_i\ \textit{divides}\ u\}$. \begin{Lemma}\label{Lem:min(y)=i} Let $G$ be a Cohen--Macaulay very well--covered graph with $2n$ vertices. Then, for all $i\in[n]$, there exists $u\in G(J(G))$ such that $\min\supp(u)_y=i$. \end{Lemma} \begin{proof} We proceed by induction on $n\ge1$. For the base case, $J(G)=(x_1,y_1)$ and the statement holds. Now, let $n>1$. Then, by Lemma \ref{Lemma:J(G)Decompx_1} we have $J(G)={\bf z}_{N(x_1)}J(G_1)+x_1J(G\textup{set}minus\{x_1,y_1\})$. If $u\in G({\bf z}_{N(x_1)}J(G_1))$, then $\min\supp(u)_y=1$. Let $i\in[n]$ with $i\ne1$. Since $G\textup{set}minus\{x_1,y_1\}$ is a Cohen--Macaulay very well--covered graph (Lemma \ref{Lem:minimalGeneratorsI(G)^vee}(b)), by induction there exists $v\in G(J(G\textup{set}minus\{x_1,y_1\}))$ with $\min\supp(v)_y=i$. Setting $u=x_1v$, we have $u\in G(J(G))$ and $\min\supp(u)_y=\min\supp(v)_y=i$. The proof is complete. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm:ell(J(G))R(J(G))}] For any $N\ge1$ and any $u_1\cdots u_N\in G(J(G)^N)$, with $u_i\in G(J(G))$, let $P=(v_1\cdots v_N:v_i\in G(J(G)),v_1\cdots v_N>_{\lex}u_1\cdots u_N)$ and denote by $s(u_1,\dots,u_N)$ the number of variables generating the colon ideal $P:u_1\cdots u_N.$ Since $J(G)^N$ has linear quotients with respect to $>_{\lex}$ (Corollary \ref{Cor:J(G)LinQuot}(a)) we have that $\depth S/J(G)^{N}=2n-\max\{s(u_1,\dots,u_N)+1:u_1,\dots,u_N\in G(J(G))\}$. 
This follows from \cite[Corollary 8.2.2]{JT} and the Auslander--Buchsbaum formula. The definition of the order $>_{\lex}$ and Lemma \ref{Lemma:degx+degy=const} imply that the set of variables generating the ideal $P:u_1\cdots u_N$ is a subset of $X=\{x_1,\dots,x_n\}$. Pick $n$ monomials $u_i={\bf x}_{F_i}{\bf y}_{[n]\textup{set}minus F_i}$, where $\min([n]\textup{set}minus F_i)=i$, for $i=1,\dots,n$. The existence of these monomials follows from Lemma \ref{Lem:min(y)=i}. By Lemma \ref{Lem:againCover}, $x_i(u_i/y_i)\in G(J(G))$ for $i=1,\dots,n$. Hence, $s(u_1,\dots,u_n)=n$. This shows that $\depth S/J(G)^n=n-1$. We claim that $\depth S/J(G)^N=n-1$ for all $N\ge n$. It is enough to consider $u_1,\dots,u_n$ and $N-n$ arbitrary monomials $v_{n+1},\dots,v_{N}\in G(J(G))$. Then $s(u_1,\dots,u_n,v_{n+1},\dots,v_n)=n$ and $\depth S/J(G)^N=n-1$. Hence, $\textup{dstab}(J(G))\le n$. Moreover, from Theorem \ref{Thm:R(J(G))CMnormal} and equation (\ref{eq:limdepthCM}), since $S$ is a polynomial ring in $2n$ variables, $\ell(J(G))=2n -\lim_{k\rightarrow\infty}\depth S/J(G)^k = n+1$. \end{proof} We close the section with some remarks on the reduced Gr\"obner basis of the presentation ideal $J$ of $\mathcal{R}(J(G))$. Hereafter, we follow closely \cite[Section 6.4.1]{EHGB}. Let $G$ be a Cohen--Macaulay very well--covered graph with $2n$ vertices $X\cup Y=\{x_1,\dots,x_n,y_1,\dots,y_n\}$. Let $G(J(G))=\{u_1,\dots,u_m\}$ and let $\mathcal{R}(J(G))$ be the Rees algebra of $J(G)$. Let ${\bf x}=x_1,\dots,x_n$, ${\bf y}=y_1,\dots,y_n$ and ${\bf t}=t_{u_1},\dots,t_{u_m}$. Then the Rees algebra $\mathcal{R}(J(G))$ has the presentation $$ \varphi: S' =K[{\bf x},{\bf y},{\bf t}]\rightarrow\mathcal{R}(J(G)) $$ defined by setting $\varphi(x_i)=x_i$, $\varphi(y_i)=y_i$, for $1\le i\le n$, and $\varphi(t_{u_j})=u_jt$ for $1\le j\le m$. The ideal $J=\Ker(\varphi)$ is called the \textit{presentation ideal} of $\mathcal{R}(J(G))$. Analogously, the toric ring $K[J(G)]=K[u_1,\dots,u_m]$ has the presentation $$ \mathfrak{p}si:T=K[{\bf t}]\rightarrow K[u_1,\dots,u_m] $$ defined by setting $\mathfrak{p}si(t_{u_j})=u_j$ for $1\le j\le m$. The ideal $L=\Ker(\mathfrak{p}si)$ is called the \textit{toric ideal} of $K[u_1,\dots,u_m]$. Let $\mathfrak{m}=(x_1,\dots,x_n,y_1,\dots,y_n)$ be the graded maximal ideal of $S$. Since $J(G)$ is generated in a single degree, the fiber ring $\mathcal{R}(J(G))/\mathfrak{m}\mathcal{R}(J(G))$ is isomorphic to the toric ring $K[J(G)]$.\\ Let $>'$ be an arbitrary monomial order on $T$ and let $>_{\lex}$ be the lexicographic order on $S$ induced by $x_1>y_1>x_2>y_2>\dots>x_n>y_n$. We define the monomial order $>'_{\lex}$ as follows: for two monomials $w_1t_{u_1}^{a_1}\cdots t_{u_m}^{a_m}$ and $w_2t_{u_1}^{b_1}\cdots t_{u_m}^{b_m}$ in $S'$, with $w_1,w_2\in S$, we set $w_1t_{u_1}^{a_1}\cdots t_{u_m}^{a_m}>_{\lex}'w_2t_{u_1}^{b_1}\cdots t_{u_m}^{b_m}$ if and only if $w_1>_{\lex}w_2$ or $w_1=w_2$ and $t_{u_1}^{a_1}\cdots t_{u_m}^{a_m}>'t_{u_1}^{b_1}\cdots t_{u_m}^{b_m}$. According to \cite[Section 2]{EHGB} the order $>'_{\lex}$ is the product order of $>'$ and $>_{\lex}$. With the notation above, from Theorem \ref{Thm:J(G)ell-exchangeProp} and \cite[Theorem 6.24]{EHGB}, we get the next result. \begin{Corollary}\label{cor:grobner} Let $G$ be a whisker graph with $2n$ vertices. 
Then the reduced Gr\"obner basis of the presentation ideal $J$ of $\mathcal{R}(J(G))$ with respect to $>_{\lex}'$ consists of all binomials belonging to the reduced Gr\"obner basis of $L$ with respect to $>'$ together with the binomials $$ x_it_{u}-y_it_{x_i(u/y_i)}, $$ where $u,x_i(u/y_i)\in G(J(G))$. \end{Corollary} The statement of Corollary \ref{cor:grobner} seems to be true for all Cohen--Macaulay very well--covered graphs. \begin{Example} \rm Consider the graph $G$ with 12 vertices depicted below \begin{picture}(90,150)(-90,-50) \mathfrak{p}ut(-10,60){\circle*{4}} \mathfrak{p}ut(40,60){\circle*{4}} \mathfrak{p}ut(-20,65){\textit{$y_1$}} \mathfrak{p}ut(30,65){\textit{$y_2$}} \mathfrak{p}ut(-10,10){\circle*{4}} \mathfrak{p}ut(40,10){\circle*{4}} \mathfrak{p}ut(-20,0){\textit{$x_1$}} \mathfrak{p}ut(33,0){\textit{$x_2$}} \mathfrak{p}ut(90,10){\circle*{4}} \mathfrak{p}ut(92,5){\textit{$x_3$}} \mathfrak{p}ut(90,60){\circle*{4}} \mathfrak{p}ut(90,65){\textit{$y_3$}} \mathfrak{p}ut(-10,60){\line(0,-1){50}} \mathfrak{p}ut(40,60){\line(0,-1){50}} \mathfrak{p}ut(90,60){\line(0,-1){50}} \mathfrak{p}ut(90,10){\line(1,1){50}} \mathfrak{p}ut(144,7){\textit{$x_4$}} \mathfrak{p}ut(140,10){\circle*{4}} \mathfrak{p}ut(140,65){\textit{$y_4$}} \mathfrak{p}ut(140,60){\circle*{4}} \mathfrak{p}ut(193,9){\textit{$x_5$}} \mathfrak{p}ut(190,10){\circle*{4}} \mathfrak{p}ut(190,65){\textit{$y_5$}} \mathfrak{p}ut(190,60){\circle*{4}} \mathfrak{p}ut(190,60){\line(0,-1){50}} \mathfrak{p}ut(240,0){\textit{$x_6$}} \mathfrak{p}ut(240,10){\circle*{4}} \mathfrak{p}ut(240,65){\textit{$y_6$}} \mathfrak{p}ut(240,60){\circle*{4}} \mathfrak{p}ut(240,60){\line(0,-1){50}} \mathfrak{p}ut(140,60){\line(0,-1){50}} \mathfrak{p}ut(190,60){\line(-2,-1){100}} \mathfrak{p}ut(190,60){\line(-1,-1){50}} \mathfrak{p}ut(240,60){\line(-2,-1){100}} \mathfrak{p}ut(240,60){\line(-3,-1){150}} \mathfrak{q}bezier(-10,10)(25,0)(40,10) \mathfrak{q}bezier(-10,10)(45,-20)(90,10) \mathfrak{q}bezier(-10,10)(45,-40)(140,10) \mathfrak{q}bezier(-10,10)(45,-40)(190,10) \mathfrak{q}bezier(-10,10)(45,-40)(240,10) \mathfrak{q}bezier(40,10)(65,0)(90,10) \mathfrak{q}bezier(40,10)(55,-12)(140,10) \mathfrak{q}bezier(40,10)(55,-12)(190,10) \mathfrak{q}bezier(40,10)(55,-12)(240,10) \mathfrak{p}ut(113,-30){\textit{$G$}} \end{picture}\vspace*{-0,3cm} By Characterization \ref{char:veryWellCGCM}, one verifies that $G$ is a Cohen--Macaulay very well--covered graph with $12$ vertices. We have \begin{align*} I(G)\ &\ =\ (x_1y_1, x_2y_2, x_3y_3, x_4y_4, x_5y_5, x_6y_6, x_1x_2, x_1x_3, x_1x_4, x_1x_5,\\ &\mathfrak{p}hantom{\ =(..}x_1x_6, x_2x_3, x_2x_4, x_2x_5, x_2x_6, x_3y_4,x_3y_5,x_3y_6, x_4y_5, x_4y_6),\\[4pt] J(G)\ &\ =\ (x_1x_2x_3x_4x_5x_6,\, x_1x_2x_3x_4x_5y_6,\, x_1x_2x_3x_4y_5x_6,\, x_1x_2x_3x_4y_5y_6, \\ &\mathfrak{p}hantom{\ =(..} x_1x_2x_3y_4y_5y_6,\, x_1x_2y_3y_4y_5y_6,\, x_1y_2x_3x_4x_5x_6,\, y_1x_2x_3x_4x_5x_6). \end{align*} We order the monomials $u_1,\dots,u_8$ of $G(J(G))$ with respect to the lexicographic order induced by $x_1>y_1>\dots>x_6>y_6$. Thus, for instance $u_1=x_1x_2x_3x_4x_5x_6$, $u_2=x_1x_2x_3x_4x_5y_6$ and so on. Now, let $$ \varphi: S' =K[{\bf x},{\bf y},{\bf t}]\rightarrow\mathcal{R}(J(G)) $$ be the map defined by setting $\varphi(x_i)=x_i$, $\varphi(y_i)=y_i$, for $1\le i\le 6$, and $\varphi(t_{u_j})=u_jt$ for $1\le j\le 8$. Furthermore, let $T=K[{\bf t}]$. 
Let $>'_{\lex}$ be the product order of the lexicographic order $>_{\lex}$ on $S$ induced by $x_1>y_1>\dots>x_6>y_6$, and the lexicographic order $>'$ on $T$ such that $t_{u_i}>'t_{u_j}$ if and only if $u_i>_{\lex}u_j$. Using \textit{Macaulay2} \cite{GDS}, we find that the reduced Gr\"obner basis of $\ker(\varphi)$ with respect to the order $>'_{\lex}$ is: \begin{align*} \mathcal{G}\ =\ \{& x_{6}t_{u_2}-y_{6}t_{u_1},\,\,x_{5}t_{u_3}-y_{5}t_{u_1},\,\,x_{6}t_{u_4}-y_{6}t_{u_3},\,\,x_{5}t_{u_4}-y_{5}t_{u_2},\,\,x_{4}t_{u_5}-y_{4}t_{u_4},\,\, \\&x_{3}t_{u_6}-y_{3}t_{u_5},\,\,x_{2}t_{u_7}-y_{2}t_{u_1},\,\,x_{1}t_{u_8}-y_{1}t_{u_1},\,\,t_{u_1}t_{u_4}-t_{u_2}t_{u_3}\}. \end{align*} \end{Example} Our experiments using \textit{Macaulay2} \cite{GDS} suggest the next conjecture. \begin{Conjecture}\label{Conj:R(J(G))Koszul} Let $G$ be a Cohen--Macaulay very well--covered graph with $2n$ vertices. Then the presentation ideal of the Rees algebra of $J(G)$ has a quadratic reduced Gr\"obner basis with respect to the product order $>_{\lex}'$ of the lexicographic order $>_{\lex}$ on $S$ induced by $x_1>y_1>\dots>x_n>y_n$, and the lexicographic order $>'$ on $T$ such that $t_{u_i}>'t_{u_j}$ if and only if $u_i>_{\lex}u_j$. In particular, $\mathcal{R}(J(G))$ is a Koszul algebra. \end{Conjecture} \emph{Acknowledgment}. We thank S.A. Seyed Fakhari and M. Nasernejad for their comments and helpful suggestions that allowed us to improve the quality of the paper. \end{document}
\begin{document} \title{Besov Spaces and Frames on Compact Manifolds } \author{Daryl Geller\\ \footnotesize\texttt{{[email protected]}}\\ Azita Mayeli \\ \footnotesize\texttt{{[email protected]}} } \maketitle \begin{abstract} We show that one can characterize the Besov spaces on a smooth compact oriented Riemannian manifold, for the full range of indices, through a knowledge of the size of frame coefficients, using the smooth, nearly tight frames we have constructed in \cite{gm2}. \footnotesize{Keywords and phrases: \textit{Wavelets, Frames, Spectral Theory, Besov Spaces, Manifolds, Pseudodifferential Operators.}} \end{abstract} \section{Introduction} In \cite{gm2}, we have constructed smooth, nearly tight frames on $({\bf M},g)$, a general smooth, compact oriented Riemannian manifold without boundary. Our goal in this article is to show that one can characterize the (inhomogeneous) Besov spaces on ${\bf M}$, for the full range of indices, through a knowledge of the size of frame coefficients, using the frames we have constructed. (We hope to consider Triebel-Lizorkin spaces in a forthcoming article.) Our methods, in addition to using the results of \cite{gm2}, are largely adapted from those of Frazier and Jawerth \cite{FJ1}, who gave a similar characterization of Besov spaces on $\RR^n$. However, as we shall explain below, some new ideas are needed on manifolds. Let us briefly review our construction of smooth, nearly tight frames on ${\bf M}$. Say $f^0 \in {\cal S}(\RR^+)$ (the space of restrictions to $\RR^+$ of functions in ${\mathcal S}(\RR)$). Say $f^0 \not\equiv 0$, and let \[f(s) = sf^0(s).\] One then has the {\em Calder\'on formula}: if $c \in (0,\infty)$ is defined by \[c = \int_0^{\infty} |f(t)|^2 \frac{dt}{t} = \int_0^{\infty} t|f^0(t)|^2 dt,\] then for all $s > 0$, \begin{equation} \label{cald} \int_0^{\infty} |f(ts)|^2 \frac{dt}{t} = c < \infty. \end{equation} Discretizing (\ref{cald}), if $a \neq 1$ is sufficiently close to $1$, one obtains a special form of {\em Daubechies' condition}: for all $s > 0$, \begin{equation} \label{daub} 0 < A_a \leq \sum_{j=-\infty}^{\infty} |f(a^{2j} s)|^2 \leq B_a < \infty, \end{equation} where \begin{equation} \label{daubest} A_a = \frac{c}{2|\log a|} \left(1 - O(|(a-1)^2 (\log|a-1|)|\right),\:\:\: B_a= \frac{c}{2|\log a|}\left(1 + O(|(a-1)^2 (\log|a-1|)|)\right). \end{equation} ((\ref{daubest}) was proved in \cite{gm1}, Lemma 7.6) In particular, $B_a/A_a$ converges nearly quadratically to $1$ as $a \rightarrow 1$. For example, Daubechies calculated that if $f(s) = se^{-s}$ and $a=2^{1/3}$, then $B_a/A_a = 1.0000$ to four significant digits. Our general program is to construct (smooth, nearly tight) frames, and analogues of continuous wavelets, on much more general spaces, by replacing the positive number $s$ in (\ref{cald}) and (\ref{daub}) by a positive self-adjoint operator $T$ on a Hilbert space ${\cal H}$. If $P$ is the projection onto the null space of $T$, by the spectral theorem we obtain the relations \begin{equation} \label{gelmay1} \int_0^{\infty} |f|^2(tT) \frac{dt}{t} = c(I-P) \end{equation} and \begin{equation} \label{gelmay2} A_a(I-P) \leq \sum_{j=-\infty}^{\infty} |f|^2(a^{2j} T) \leq B_a(I-P). \end{equation} (The integral in (\ref{gelmay1}) and the sum in (\ref{gelmay2}) converge strongly. In (\ref{gelmay2}), $\sum_{j=-\infty}^{\infty} := \lim_{M, N \rightarrow \infty} \sum_{j=-M}^N$, taken in the strong operator topology.) (\ref{gelmay1}) and (\ref{gelmay2}) were justified in section 2 of our earlier article \cite{gmcw}. 
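To illustrate the size of the constant in the Calder\'on formula, consider the model choice $f^0(s) = e^{-s}$, so that $f(s) = se^{-s}$ as in the example just mentioned (this elementary computation is included only for orientation). Here
\[ c = \int_0^{\infty} t|f^0(t)|^2 dt = \int_0^{\infty} te^{-2t} dt = \frac{1}{4}, \]
so that, by (\ref{daubest}), $A_a$ and $B_a$ are both equal to $\frac{1}{8|\log a|}$, up to a relative error of size $O(|(a-1)^2 (\log|a-1|)|)$.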
In \cite{gmcw} and \cite{gm2}, we looked at the situation in which $T$ is the Laplace-Beltrami operator $\Delta$ on $L^2({\bf M})$. We constructed smooth, nearly tight frames in this context. Here $P$ is the projection onto the one-dimensional space of constant functions. We constructed continuous wavelets on ${\bf M}$ in \cite{gmcw}. To see how frames can be obtained from (\ref{gelmay2}), suppose that, for any $t > 0$, $K_t$ is the Schwartz kernel of $f(t^2T)$. Thus, if $F \in L^2({\bf M})$, \begin{equation} \label{schkerf} [f(t^2T)F](x) = \int_{\bf M} F(y) K_t(x,y) d\mu(y), \end{equation} where $\mu$ is the measure on ${\bf M}$ arising from integration with respect to the volume form on ${\bf M}$. Say now that $\int_{\bf M} F = 0$, so that $F = (I-P)F$. By (\ref{gelmay2}), \begin{equation} \label{aasumba0} A_a \langle F,F \rangle \leq \langle \sum_j |f|^2(a^{2j} T)F, F \rangle \leq B_a \langle F,F \rangle . \end{equation} Thus \begin{equation} \label{aasumba} A_a \langle F,F \rangle \leq \sum_j \langle f(a^{2j} T)F, f(a^{2j} T)F \rangle \leq B_a \langle F,F \rangle , \end{equation} so that \begin{equation} \label{kerfaj} A_a \langle F,F \rangle \leq \sum_j \int|\int K_{a^j}(x,y) F(y)d\mu(y)|^2 d\mu(x) \leq B_a \langle F,F \rangle . \end{equation} Now, pick $b > 0$, and for each $j$, write ${\bf M}$ as a disjoint union of measurable sets $E_{j,k}$ with diameter at most $ba^j$. Take $x_{j,k} \in E_{j,k}$. It is then reasonable to expect that, for any $\epsilon > 0$, if $b$ is sufficiently small, and if $x_{j,k} \in E_{j,k}$, then \begin{equation} \label{kerfajap} (A_a-\epsilon) \langle F,F \rangle \leq \sum_j \sum_{k}|\int K_{a^j}(x_{j,k},y) F(y)d\mu(y)|^2 \mu(E_{j,k}) \leq (B_a + \epsilon) \langle F,F \rangle , \end{equation} which means \begin{equation} \label{mtvest} (A_a-\epsilon) \langle F,F \rangle \leq \sum_j \sum_{k} |(F,\phi_{j,k})|^2 \leq (B_a + \epsilon) \langle F,F \rangle , \end{equation} where \begin{equation} \label{phjkdf} \phi_{j,k}(x)=[\mu(E_{j,k})]^{1/2} \overline{K_{a^j}}(x_{j,k},x). \end{equation} In our earlier article \cite{gm2}, we showed that (\ref{mtvest}) indeed holds, provided the $E_{j,k}$ are also ``not too small'' (precisely, if they satisfy (\ref{measgeq}) directly below). In fact, in Theorem \ref{framainfr} of that article, we showed (a more general form of) the following result: \begin{theorem} \label{framain} Fix $a >1$, and say $c_0 , \delta_0 > 0$. Suppose $f \in {\mathcal S}(\RR^+)$, and $f(0) = 0$. Suppose that the Daubechies condition $(\ref{daub})$ holds. Then there exists a constant $C_0 > 0$ $($depending only on ${\bf M}, f, a, c_0$ and $\delta_0$$)$ as follows:\\ For $t > 0$, let $K_t$ be the kernel of $f(t^2\Delta)$. Say $0 < b < 1$. Suppose that, for each $j \in \ZZ$, we can write ${\bf M}$ as a finite disjoint union of measurable sets $\{E_{j,k}: 1 \leq k \leq N_j\}$, where: \begin{equation} \label{diamleq} \mbox{the diameter of each } E_{j,k} \mbox{ is less than or equal to } ba^j, \end{equation} and where: \begin{equation} \label{measgeq} \mbox{for each } j \mbox{ with } ba^j < \delta_0,\: \mu(E_{j,k}) \geq c_0(ba^j)^n. \end{equation} $($In \cite{gm2} we show that such $E_{j,k}$ exist provided $c_0$ and $\delta_0$ are sufficiently small, independent of the values of $a$ and $b$.$)$ For $1 \leq k \leq N_j$, define $\phi_{j,k}$ by (\ref{phjkdf}).
Then if $P$ denotes the projection in $L^2({\bf M})$ onto the space of constants, we have \[(A_a-C_0b) \langle F,F \rangle \leq \sum_j \sum_{k} |(F,\phi_{j,k})|^2 \leq (B_a + C_0b) \langle F,F \rangle ,\] for all $F \in (I-P)L^2({\bf M})$. In particular, if $A_a - C_0b > 0$, then $\left\{\phi_{j,k}\right\}$ is a frame for $(I-P)L^2({\bf M})$, with frame bounds $A_a - C_0b$ and $B_a + C_0b$. \end{theorem} Thus, in these circumstances, if $b$ is sufficiently small, $\{\phi_{j,k}\}$ is a frame, in fact a smooth, nearly tight frame, since \[ \frac{B_a + C_0b}{A_a - C_0b} \sim \frac{B_a}{A_a} = 1 + O(|(a-1)^2 (\log|a-1|)|). \] To justfiy the formal argument leading from (\ref{aasumba0}) to (\ref{phjkdf}), and to go beyond the $L^2$ theory, one needs the following information about the kernel $K_t$, which we established in Lemma \ref{manmolcw} of our earlier paper \cite{gmcw} (see also the remark following the proof of that lemma): \begin{lemma} \label{manmol} Say $f(0) = 0$. Then for every pair of $C^{\infty}$ differential operators $X$ $($in $x)$ and $Y$ $($in $y)$ on ${\bf M}$, and for every integer $N \geq 0$, there exists $C_{N,X,Y}$ as follows. Suppose $\deg X = j$ and $\deg Y = k$. Then \begin{equation} \label{diagest} t^{n+j+k} \left|\left(\frac{d(x,y)}{t}\right)^N XYK_t(x,y)\right| \leq C_{N,X,Y} \end{equation} for all $t > 0$ and all $x,y \in {\bf M}$. (The result holds even without the hypothesis that $f(0) = 0$, provided we look only at $t \in (0,1]$.) \end{lemma} The main results are Theorems \ref{besmain2} and \ref{besmain3} below, whose precise statements can be read now. To summarize them: fix any $M_0 > 0$. We study frame expansions for the space $B_{p,0}^{\alpha q}$, consisting of distributions $F$ in the Besov space $B_{p}^{\alpha q}$ on ${\bf M}$ for which $F1 = 0$. We assume that the $E_{jk}$ satisfy the conditions of Theorem \ref{framain} above for $b$ sufficiently small, and also that, if $0 < p < 1$, and if $ba^j \geq \delta_0$, then $\mu(E_{j,k}) \geq {\cal C}$ (for some ${\cal C} > 0$). (Such sets $E_{j,k}$ are easily constructed.) We assume that $f(s) = s^l f_0(s)$ for some $f_0 \in {\cal S}(\RR^+)$, and for $l$ sufficiently large, depending on the indices $p,q, \alpha$ (so that $f(t^2\Delta) = t^{2l}\Delta^l f_0(t^2\Delta)$). We let the $\phi_{j,k}$ be as in Theorem \ref{framain}, and let $\varphi_{j,k}(x)= \phi_{j,k}(x)/[\mu(E_{j,k})]^{1/2} = \overline{K_{a^j}}(x_{j,k},x)$. We then show that a distribution $F$, of order at most $M_0$ and satisfying $F1 = 0$, is in $B_{p,0}^{\alpha q}$ if and only if \[ (\sum_{j = -\infty}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k}) |\langle F, \varphi_{j,k} \rangle |^p]^{q/p})^{1/q} < \infty;\] further this expression furnishes a norm on $B_{p,0}^{\alpha q}$ which is equivalent to the usual norm. 
Moreover, if $F \in B_{p,0}^{\alpha q}$, there exist constants $r_{j,k}$ with \begin{equation} \label{rjknrmdf} (\sum_{j = -\infty}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k})|r_{j,k}|^p]^{q/p})^{1/q} < \infty \end{equation} such that \begin{equation} \label{besexpF} F = \sum_{j=-\infty}^{\infty} \sum_k \mu(E_{j,k}) r_{j,k} \varphi_{j,k} \end{equation} with convergence in $B_p^{\alpha q}$; and the infimum of the sums in (\ref{rjknrmdf}), taken over all collections of numbers $\{r_{j,k}\}$ for which (\ref{besexpF}) holds, defines a norm on $B_{p,0}^{\alpha q}$, which is equivalent to the usual norm.\\ In addition to Lemma \ref{manmol}, our main tools will be the characterization of Besov spaces on $\RR^n$ by Frazier and Jawerth \cite{FJ1}, and the characterization of these spaces on ${\bf M}$ by Seeger and Sogge \cite{SS}. Our methods are largely adapted from those of Frazier and Jawerth. There are, however, at least three major differences:\\ \ \\ 1. We need to find replacements, on ${\bf M}$, for the condition that a function have numerous vanishing moments. Specifically, note that if $g \in C_c^{\infty}(\RR^n)$, then $\Delta^l g$ has $2l-1$ vanishing moments for any $l \geq 1$, if $\Delta$ is the usual Laplacian. In order to make effective use of our frames in Besov spaces, we need an analogue of this on ${\bf M}$. Say $g \in C^{\infty}({\bf M})$; what replacement condition does $\Delta^l g$ satisfy, if, as usual, $\Delta$ is the Laplace-Beltrami operator on ${\bf M}$? We will find an effective replacement in Lemma \ref{fjan2} of the next section. These considerations explain why we need to use functions $f$ of the form $f(s) = s^l f_0(s)$ for $l$ sufficiently large, depending on the indices.\\ \ \\ 2. If one knows appropriate information about the size of the frame coefficients of a function $F$, then, by adapting the methods of Frazier-Jawerth and by using the results of Seeger-Sogge, we learn only that $SF$ (not $F$) is in the desired Besov space, where \begin{equation} \label{sover} SF = \sum_j \sum_k \mu(E_{j,k}) \langle F,\phi_{j,k} \rangle \phi_{j,k}; \end{equation} another step is then required. Although, for $b$ sufficiently small, $SF$ is an excellent approximation to a multiple of $F$ (since the $\{\phi_{j,k}\}$ are a nearly tight frame), it generally does not {\em equal} a multiple of $F$. To conclude that $F$ itself is in the desired Besov space, we will need to use the theory of pseudodifferential operators (in Theorem \ref{besmain1} below).\\ \ \\ 3. We need to show that $S$ is bounded on the Besov spaces, and a technical issue arises when the index $p$ lies between $0$ and $1$. Since the $p$-triangle inequality generally becomes more and more wasteful when one splits quantities more finely (e.g. if we write $x > 0$ as $\frac{x}{N} + \ldots + \frac{x}{N}$ ($N$ terms), then we find the wasteful estimate $x^p = (\frac{x}{N} + \ldots + \frac{x}{N})^p \leq N\frac{x^p}{N^p} = N^{1-p}x^p$), and since we must use fine grids (that is, we must take $b$ to be small), a rather subtle ``regrouping'' (or ``amalgamation'') argument is needed at one point (Theorem \ref{sumopbes} below).\\ \ \\ In order for the notation to be fully analogous to that in \cite{FJ1}, we shall need to adapt the notation that we used in our earlier article \cite{gm2}; through much of this article, we will write $E^j_k = E_{-j,k}$, $x^j_k = x_{-j,k}$, $\varphi^j_k = \varphi_{-j,k}$. (The notations on the right sides of these equations were used frequently in \cite{gm2}.) 
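To see the moment count asserted in point 1 above, note that if $g \in C_c^{\infty}(\RR^n)$ and $|\alpha| \leq 2l-1$, then, since $\Delta$ is formally self-adjoint and $g$ has compact support,
\[ \int_{\RR^n} x^{\alpha} (\Delta^l g)(x) dx = \int_{\RR^n} g(x) \Delta^l(x^{\alpha}) dx = 0, \]
because each application of $\Delta$ lowers the degree of a polynomial by two, so that $\Delta^l(x^{\alpha})$ vanishes identically when $|\alpha| \leq 2l-1$. We record this standard observation here only to motivate the replacement condition obtained in Lemma \ref{fjan2}.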
\subsection{Historical Comments} Although we are adapting the methods of Frazier and Jawerth \cite{FJ1}, we should note that they were not working with nearly tight frames, but rather with the $\varphi$-transform. Characterizations of Besov spaces on $\RR^n$, which are similar to ours, were obtained by Gr\"ochenig \cite{groch} (see also \cite{fg0}, \cite{fg1}, \cite{fg2}) through use of frames, and by Meyer \cite{meyer}, through use of bases of orthonormal wavelets. In \cite{dahsch}, Dahmen and Schneider used parametric liftings from standard bases on the unit cube to obtain biorthogonal wavelet bases on manifolds which are the disjoint union of smooth parametric images of the standard cube. Using these bases, they obtained characterizations of the Besov spaces $B_p^{\alpha q}({\bf M})$, for $0 < p \leq \infty$, $q \geq 1$, and $\alpha > 0$. Their results hold on manifolds with less than $C^{\infty}$ regularity (for a range of $\alpha$); also, they applied their methods to the discretization of elliptic operator equations. We consider neither of these topics here. However, our methods have the advantage of holding for all ${\bf M}$, and all $p,q, \alpha$. Our frames have the advantage of being nearly tight, and admitting a space-frequency analysis. Moreover our results are coordinate-free, in the sense that our frames are constructed without patching the manifold with charts. We presume that this would lead to greater stability in applications, if data is moving around the manifold in time, since one does not have to worry about data moving from chart to chart, although this presumed advantage has not been established. In \cite{narc2}, Narcowich, Petrushev and Ward obtain a characterization of both Besov and Triebel-Lizorkin spaces, through the size of frame coefficients, in the special case ${\bf M} = S^n$. As frames they use the ``needlets'' that they constructed in \cite{narc1}. We discussed the similarities, advantages and disadvantages of these frames as compared to ours on $S^n$, in section 3 of our earlier article \cite{gm2}. They proved and used a result similar to our Lemma \ref{manmol}, and our methods (based on adapting the ideas in \cite{FJ1}) are rather similar to theirs. However, on the sphere, they constructed tight frames, so they did not need to deal with the issues \#1,2 and 3 above. Han and Sawyer \cite{hansaw} define Besov spaces on general spaces of homogeneous type, for a range of indices. In \cite{hanyang}, Han and Yang give a characterization of these spaces using frames which they construct. These frames cannot be expected to be nearly tight, nor (on ${\bf M}$) have they been shown to admit a space-frequency analysis. Further, in the very general situation of \cite{hanyang}, there are no derivatives, so results are obtained there only for smoothness index $\alpha \in (0,1)$. \section{Integrating Products} We shall need the following basic facts, from section 3 of \cite{gmcw}, about ${\bf M}$ and its geodesic distance $d$. For $x \in {\bf M}$, we let $B(x,r)$ denote the ball $\{y: d(x,y) < r\}$. \begin{proposition} \label{ujvj} Cover ${\bf M}$ with a finite collection of open sets $U_i$ $(1 \leq i \leq I)$, such that the following properties hold for each $i$: \begin{itemize} \item[$(i)$] there exists a chart $(V_i,\phi_i)$ with $\overline{U}_i \subseteq V_i$; and \item[$(ii)$] $\phi_i(U_i)$ is a ball in $\RR^n$. \end{itemize} Choose $\delta > 0$ so that $3\delta$ is a Lebesgue number for the covering $\{U_i\}$. 
Then, there exist $c_1, c_2 > 0$ as follows:\\ For any $x \in {\bf M}$, choose any $U_i \supseteq B(x,3\delta)$. Then, in the coordinate system on $U_i$ obtained from $\phi_i$, \begin{equation} \label{rhoeuccmp2} d(y,z) \leq c_2|y-z| \end{equation} for all $y,z \in U_i$; and \begin{equation} \label{rhoeuccmp} c_1|y-z| \leq d(y,z) \end{equation} for all $y,z \in B(x,\delta)$. \end{proposition} We fix collections $\{U_i\}$, $\{V_i\}$, $\{\phi_i\}$ and also $\delta$ as in Proposition \ref{ujvj}, once and for all. \begin{itemize} \item Notation as in Proposition \ref{ujvj}, there exist $c_3, c_4 > 0$, such that, whenever $x \in {\bf M}$ and $0 < r \leq \delta$, \begin{equation} \label{ballsn} c_3r^n \leq \mu(B(x,r)) \leq c_4r^n \end{equation} and such that, whenever $x \in {\bf M}$ and $r > \delta$, \begin{equation} \label{ballsn1} c_3 \delta^n \leq \mu(B(x,r)) \leq \mu({\bf M}) \leq c_4r^n. \end{equation} \item For any $N > n$ there exists $C_N$ such that, for all $x \in {\bf M}$ and $t > 0$, \begin{equation} \label{ptestm} \int_{\bf M} [1 + d(x,y)/t]^{-N} d\mu(y) \leq C_N t^n. \end{equation} \item For any $N > n$ there exists $C_N^{\prime}$ such that, for all $x \in {\bf M}$ and $t > 0$, \begin{equation} \label{ptestm1} \int_{d(x,y) \geq t} d(x,y)^{-N} d\mu(y) \leq C_N^{\prime} t^{n-N}. \end{equation} \end{itemize} In Lemma 3.3 of \cite{FJ1}, Frazier and Jawerth proved, in essence, the following key lemma on $\RR^n$, for which we must find analogues on ${\bf M}$. \begin{lemma} \label{fjrn} Say $L, M$ are integers with $L \geq -1$ and $M \geq L+n+1$. Then there exists $C > 0$ as follows. Supppose $\varphi_1 \in C(\RR^n)$ and $\varphi_2 \in C^{L+1}(\RR^n)$ satisfy, for some $\sigma, \nu \in \ZZ$ with $\sigma \geq \nu$, \[ |\varphi_1(x)| \leq (1+2^{\sigma}|x|)^{-M}, \] \begin{equation} \label{momfjrn} \int x^{\alpha} \varphi_1(x) dx = 0 \mbox{ whenever } |\alpha| \leq L, \end{equation} and \[ |\partial^{\gamma}\varphi_2(x)| \leq (1+2^{\nu}|x|)^{L+n+1-M} \mbox{ whenever } |\gamma| = L+1. \] Then \[ |(\varphi_1 * \varphi_2)(x)| \leq C 2^{-\sigma(L+n+1)}(1+2^{\nu}|x|)^{L+n+1-M}. \] \end{lemma} To clarify, (\ref{momfjrn}) is the empty condition if $L = -1$. We will need two different analogues of this lemma. The first is a straightforward adaptation of Lemma (\ref{fjrn}) and its proof. \begin{lemma} \label{fjan1} Say $L, M$ are integers with $L \geq -1$ and $M \geq L+n+1$. Then there exists $C > 0$ as follows. Say $\sigma \in \ZZ$, $\nu \in \RR$ with $2^{\sigma} \geq a^{\nu}$. Say $x_0 \in {\bf M}$. Select one of the charts $U_i$ (as in Proposition \ref{ujvj}) with $B(x_0,3\delta) \subseteq U_i$. Suppose, that in local coordinates on $U_i$, $Q$ is a dyadic cube of side $2^{-\sigma}$, and $3Q \subseteq B(x_0,\delta)$. (Here $3Q$ is the cube with the same center as $Q$ and $3$ times the side length $l(Q)$ of $Q$.) Suppose also that $\varphi_1 \in C({\bf M})$ satisfies the following conditions: \[ \supp \varphi_1 \subseteq 3Q; \] \[ |\varphi_1(y)| \leq 1 \mbox{ for all } y \in {\bf M}; \] \begin{equation} \label{momfjm} \int y^{\alpha} \varphi_1(y) d\mu(y) = 0 \mbox{ whenever } |\alpha| \leq L, \end{equation} Also suppose $x_1 \in {\bf M}$, that $\varphi_2 \in C^{L+1}({\bf M})$, and that for all $y \in 3Q$, \[ |\partial^{\gamma}\varphi_2(y)| \leq (1+a^{\nu}d(y,x_1))^{L+n+1-M} \mbox{ whenever } |\gamma| = L+1. \] Then, \[ |\int_{\bf M}(\varphi_1 \varphi_2)(y)d\mu(y)| \leq C 2^{-\sigma(L+n+1)}(1+a^{\nu}d(x_0,x_1))^{L+n+1-M}. 
\] \end{lemma} To clarify, (\ref{momfjm}) is the empty condition if $L = -1$.\\ {\bf Proof} In this proof, $C$ will always denote a constant which depends only on $L$ and $M$ (and $n$, $a$ and $({\bf M},g)$, of course); it may change from one line to the next. Surely $|\int_{\bf M}(\varphi_1 \varphi_2)(y)d\mu(y)| = |\int_{3Q}(\varphi_1 \varphi_2)(y)d\mu(y)|$. We write \begin{eqnarray*} |\int_{3Q}(\varphi_1 \varphi_2)(y)d\mu(y)| & = & |\int_{3Q}\varphi_1(y)[\varphi_2(y) - \sum_{|\gamma| \leq L} \frac{\partial^{\gamma} \varphi_2(x_0)(y-x_0)^{\gamma}}{\gamma!}] d\mu(y)|\\ & \leq & C(\int_{\{y \in 3Q: d(y,x_0) \leq c_1d(x_0,x_1)/2c_2\}} + \int_{\{y \in 3Q: d(y,x_0) > c_1d(x_0,x_1)/2c_2\}})|x_0-y|^{L+1} \Phi(y) d\mu(y)\\ & = & I + II, \end{eqnarray*} where $c_1$ and $c_2$ are as in Proposition \ref{ujvj}, and where $\Phi(y) = \sup_{0 < \epsilon < 1} \sum_{|\gamma|=L+1}|\partial^{\gamma}\varphi_2(x_0 + \epsilon(y-x_0))|$. In I, we have that whenever $0 < \epsilon < 1$, then \begin{equation} \label{linsegest} d(x_0 + \epsilon(y-x_0),x_0) \leq c_2|(x_0 + \epsilon(y-x_0)-x_0| \leq c_2|y-x_0| \leq (c_2/c_1)d(y,x_0) \leq d(x_0,x_1)/2, \end{equation} so that in I, by the hypotheses, \[ |\Phi(y)| \leq C (1+a^{\nu}d(x_0,x_1))^{L+n+1-M}. \] In II we just note that \[ |\Phi(y)| \leq 1. \] Accordingly \[ I \leq C (1+a^{\nu}d(x_0,x_1))^{L+n+1-M}\int_{3Q} |x_0-y|^{L+1} d\mu(y) \leq C 2^{-\sigma(L+n+1)}(1+a^{\nu}d(x_0,x_1))^{L+n+1-M}, \] while if $a^{\nu}d(x_0,x_1) \leq 1$, we have \[ II \leq \int_{3Q} |x_0-y|^{L+1}d\mu(y) \leq C 2^{-\sigma(L+n+1)} \leq C 2^{-\sigma(L+n+1)}(1+a^{\nu}d(x_0,x_1))^{L+n+1-M}. \] Finally, say $a^{\nu}d(x_0,x_1) > 1$. Note that $1 \leq C[2^{\sigma}|x_0-y|]^{-1}$ for all $y \in 3Q$. Raising this to the $M$th power, and then using (\ref{ptestm1}) and the assumption that $2^{\sigma} \geq a^{\nu}$, we see that \begin{eqnarray} II & \leq & C 2^{-\sigma M}\int_{\{y \in 3Q: d(y,x_0) > c_1 d(x_0,x_1)/2c_2\}}|x_0-y|^{L-M+1}d\mu(y)\nonumber\\ & \leq & C 2^{-\sigma M}\int_{d(y,x_0) > c_1 d(x_0,x_1)/2c_2}d(y,x_0)^{L-M+1}d\mu(y)\label{IIstrt}\\ & \leq & C 2^{-\sigma M} d(x_0,x_1)^{L+n+1-M} \nonumber\\ & = & C 2^{-\sigma(L+n+1)} [2^{\sigma} d(x_0,x_1)]^{L+n+1-M} \nonumber\\ & \leq & C 2^{-\sigma(L+n+1)}(1+a^{\nu}d(x_0,x_1))^{L+n+1-M},\label{IIend} \end{eqnarray} as claimed.\\ In our second analogue of Lemma \ref{fjrn}, instead of assuming that $\varphi_1$ is supported in a chart and satisfies familiar moment conditions there, as we did in Lemma \ref{fjan1}, we will instead allow $\varphi_1$ to be supported anywhere in ${\bf M}$. The moment conditions will be replaced by an assumption that $\varphi = \Delta^l \Phi$ for another well-behaved function $\Phi$. Formally, in this lemma, the role of $L$ in Lemma \ref{fjan1} will be played by $2l-1$. \begin{lemma} \label{fjan2} Say $l, M$ are integers with $l \geq 0$ and $M > n$. Then there exists $C > 0$ as follows. Say $\sigma, \nu \in \RR$ with $\sigma \geq \nu$. Say $x_0 \in {\bf M}$, and suppose that $\varphi_1 = \Delta^l\Phi$, where $\Phi \in C^{2l}({\bf M})$ satisfies: \[ |\Phi(y)| \leq (1+a^{\sigma}d(y,x_0))^{-M}. \] Also suppose $x_1 \in {\bf M}$, that $\varphi_2 \in C^{2l}({\bf M})$, and that for all $y \in {\bf M}$, \[ |\Delta^l \varphi_2(y)| \leq (1+a^{\nu}d(y,x_1))^{n-M}.\] Then, \[ |\int_{\bf M}(\varphi_1 \varphi_2)(y)d\mu(y)| \leq C a^{-\sigma n}(1+a^{\nu}d(x_0,x_1))^{n-M}. 
\] \end{lemma} {\bf Proof} In this proof, $C$ will always denote a constant which depends only on $l, M$ and $n$ (and $a, ({\bf M},g)$, of course); it may change from one line to the next. First we observe that we may take $l = 0$. Indeed, if this case were known, then we could prove the general result simply by noting that $\Delta$ is self-adjoint, so that \[ \int_{\bf M}(\varphi_1 \varphi_2)(y)d\mu(y) = \int_{\bf M}(\Delta^l \Phi \varphi_2)(y)d\mu(y) = \int_{\bf M}(\Phi \Delta^l \varphi_2)(y)d\mu(y). \] So we may assume $l=0$. We have \[ |\int_{\bf M}(\varphi_1 \varphi_2)(y)d\mu(y)| \leq I+II, \] where we are setting \[ I = \int_{d(y,x_0) \leq d(x_0,x_1)/2}|\varphi_1(y)\varphi_2(y)| d\mu(y), \] \[ II = \int_{d(y,x_0) > d(x_0,x_1)/2}|\varphi_1(y)\varphi_2(y)| d\mu(y). \] We shall show that I and II are less than or equal to $C a^{-\sigma n}(1+a^{\nu}d(x_0,x_1))^{n-M}$, and then we will be done. For I and II we need to estimate $\varphi_2(y)$. We shall use the evident estimates: \begin{equation} \label{estIphi} \mbox{In I, } |\varphi_2(y)| \leq (1+a^{\nu}d(x_0,x_1))^{n-M}, \end{equation} and \begin{equation} \label{estIIphi} \mbox{In II, } |\varphi_2(y)| \leq 1. \end{equation} From (\ref{estIphi}) and the hypotheses on $\varphi_1 = \Phi$, we find that \[ I \leq (1+a^{\nu}d(x_0,x_1))^{n-M}\int_{\bf M} (1+a^{\sigma}d(y,x_0))^{-M}d\mu(y) \leq Ca^{-\sigma n}(1+a^{\nu}d(x_0,x_1))^{n-M} \] as needed. (We have used (\ref{ptestm}).) For II, we have the estimate \[ II \leq \int_{d(y,x_0) > d(x_0,x_1)/2}(1+a^{\sigma}d(y,x_0))^{-M}d\mu(y)\] Suppose first that $a^{\nu}d(x_0,x_1) \leq 1$. Then we can just note that, by (\ref{ptestm}), \[ II \leq C a^{-\sigma n } \leq C a^{-\sigma n}(1+a^{\nu}d(x_0,x_1))^{n-M}. \] If, instead, $a^{\nu}d(x_0,x_1) > 1$, we find that \begin{eqnarray*} II & \leq & C a^{-\sigma M}\int_{d(y,x_0) > d(x_0,x_1)/2}d(y,x_0)^{-M}d\mu(y)\\ & \leq & C a^{-\sigma n} (1+a^{\nu}d(x_0,x_1))^{n-M} \end{eqnarray*} as an argument just like the one beginning with (\ref{IIstrt}) and ending with (\ref{IIend}) shows. This completes the proof.\\ Next we need analogues of Lemma 3.4 of \cite{FJ1}. After one multiplies by certain constants, that lemma states: \begin{lemma} \label{fjrn2} If $Q$ is a dyadic cube of $\RR^n$, let $x_Q$ denote its center. Let $1 \leq p \leq \infty$, and suppose $\sigma, \eta \in \ZZ$, $\eta \leq \sigma$. Suppose $F(x) = \sum_{l(Q) = 2^{-\sigma}} s_Q f_Q(x)$, where \[ |f_Q(x)| \leq 2^{-\sigma n}(1+2^{\eta}|x-x_Q|)^{-n-1}. \] Then \[ \|F\|_{L^p} \leq C 2^{-\eta n} (\sum_{l(Q) = 2^{-\sigma}} 2^{-\sigma n}|s_Q|^p)^{1/p}. \] \end{lemma} Here is our first analogue. \begin{lemma} \label{fjan3} Let $1 \leq p \leq \infty$, and as usual fix $a > 1$. Also fix $b > 0$. Then there exists $C > 0$ as follows. Suppose $\eta \in \RR$, $j \in \ZZ$, $\eta \leq j$. Write ${\bf M}$ as a finite disjoint union of measurable subsets $\{E^j_k: 1 \leq k \leq {\cal N}_j\}$, each of diameter less than $ba^{-j}$. For each $k$ with $1 \leq k \leq {\cal N}_j$, select any $x^j_k \in E^j_k$. Suppose that, for $x \in {\bf M}$, $F(x) = \sum_{k=1}^{{\cal N}_j} s_{j,k} f_{j,k}(x)$, where \[ |f_{j,k}(x)| \leq \mu(E^j_k)(1+a^{\eta}d(x,x^j_k))^{-n-1}. \] Then \[ \|F\|_{L^p} \leq C a^{-\eta n} (\sum_{k} \mu(E^j_k)|s_{j,k}|^p)^{1/p}. 
\] \end{lemma} {\bf Proof} We have \begin{equation} \label{sjkamu} \|F\|_p \leq (\int_{\bf M} [\sum_{k=1}^{{\cal N}_j} |s_{j,k}| \mu(E^j_k)(1+a^{\eta}d(x,x^j_k))^{-n-1}]^p d\mu(x))^{1/p} \end{equation} Let $Y_j$ be the finite measure space $\{1,\ldots,{\cal N}_j\}$ with measure $\lambda$ where $\lambda(\{k\}) = \mu(E^j_k)$ for each $k \in Y_j$, and define ${\cal K}: L^p(Y_j) \rightarrow L^p({\bf M})$ by ${\cal K} r(x) = \int_{Y_j} K(x,k)r(k) d\lambda(k)$ for $r \in L^p(Y_j)$, where \[ K(x,k) = (1+a^{\eta}d(x,x^j_k))^{-n-1}. \] By standard arguments, $\|{\cal K} r\|_p \leq M\|r\|_p$, where $M$ is any number satisfying \begin{equation} \label{mkxkmu} M \geq \int_{\bf M} K(x,k) d\mu(x) \end{equation} for all $k \in Y_j$ and also \begin{equation} \label{mkxkla} M \geq \int_{Y_j} K(x,k) d\lambda(k) \end{equation} for all $x \in {\bf M}$. By (\ref{sjkamu}), $\|F\|_p \leq \|{\cal K}r\|_p$ if $r(k) = |s_{j,k}|$. Thus we need only show that we may take $M = C a^{-\eta n}$. But, for this $M$, (\ref{mkxkmu}) holds by (\ref{ptestm}). As for (\ref{mkxkla}), choose $B > \max(2b,1)$. If $x \in {\bf M}$, we have \begin{eqnarray} \int_{Y_j} |K(x,k)| d\lambda(k) & \leq & C\sum_{k} \mu(E^j_k)(B+a^{\eta}d(x,x^j_k))^{-n-1} \label{m2stt}\\ & \leq & C \sum_{k} \int_{E^j_k}(B+a^{\eta}d(x,y))^{-n-1} d\mu(y) \nonumber \\ & \leq & C\int_{\bf M}(1+a^{\eta}d(x,y))^{-n-1} d\mu(y) \nonumber \\ & \leq & Ca^{-\eta n} \label{m2end} \end{eqnarray} as desired. Here we have used the assumption that $\eta \leq j$, the fact that the diameter of $E^j_k$ is at most $ba^{-j}$, and (\ref{ptestm}). This completes the proof.\\ Our second analogue deals only with $L^p$ norms of functions defined on finite sets. \begin{lemma} \label{fjan4} Let $1 \leq p \leq \infty$, and as usual fix $a > 1$. Also fix $b > 0$. Then there exists $C > 0$ as follows. Say $j \in \ZZ$. Select sets $E^j_k$ and points $x^j_k$ as in Lemma \ref{fjan3}. Let $Y_j$ again be the finite measure space $\{1,\ldots,{\cal N}_j\}$ with measure $\lambda$ where $\lambda(\{k\}) = \mu(E^j_k)$ for each $k \in Y_j$. Suppose $U_i$ is one of the charts of Proposition \ref{ujvj}. Say $B(x_0,3\delta) \subseteq U_i$. Say $\sigma \in \ZZ$, and let ${\cal Q}_{\sigma}$ denote the set of dyadic cubes of length $2^{-\sigma}$ in $U_i$ which are contained in $B(x_0,\delta)$. (We are using local coordinates on $U_i$.) \\ (a) Say $2^{\sigma} \geq a^{j}$. Suppose that, for $k \in Y_j$, $F(k) = \sum_{Q \in {\cal Q}_{\sigma}} s_Q f_{Q}(k)$, where \[ |f_Q(k)| \leq 2^{-\sigma n}(1+a^{j}d(x_Q,x^j_k))^{-n-1}. \] Then \[ \|F\|_{L^p(Y_j)} \leq C a^{-jn}(\sum_{l(Q) = 2^{-\sigma}} 2^{-\sigma n}|s_{Q}|^p)^{1/p}. \] (b) Suppose instead $2^{\sigma} \leq a^{j}$. Suppose that, for $k \in Y_j$, $F(k) = \sum_{Q \in {\cal Q}_{\sigma}} s_Q f_{Q}(k)$, where \[ |f_Q(k)| \leq 2^{-\sigma n}(1+2^{\sigma}d(x_Q,x^j_k))^{-n-1}. \] Then \[ \|F\|_{L^p(Y_j)} \leq C 2^{-\sigma n}(\sum_{l(Q) = 2^{-\sigma}} 2^{-\sigma n}|s_{Q}|^p)^{1/p}. \] \end{lemma} {\bf Proof} Say ${\cal Q}_{\sigma} = \{Q_1,\ldots,Q_{I(\sigma)}\}$, and let $X_{\sigma} = \{1,\ldots,I(\sigma)\}$, with measure $\tau$, where $\tau(m) = 2^{-\sigma n}$ for each $m \in X_{\sigma}$. In either (a) or (b), we define an operator ${\cal K}: L^p(X_{\sigma}) \rightarrow L^p(Y_j)$ by ${\cal K} r(k) = \int_{X_{\sigma}} K(k,m)r(m) d\tau(m)$ for $r \in L^p(X_{\sigma})$, where in (a), \begin{equation} \label{kkma} K(k,m) = (1+a^{j}d(x_{Q_m},x^j_k))^{-n-1}, \end{equation} while in (b) \begin{equation} \label{kkmb} K(k,m) = (1+2^{\sigma}d(x_{Q_m},x^j_k))^{-n-1}.
\end{equation} With these definitions of $K(k,m)$, in either (a) or (b), $\|F\|_p \leq \|{\cal K}r\|_p$ if $r(m) = |s_{Q_m}|$. On the other hand, $\|{\cal K} r\|_p \leq M\|r\|_p$, where $M$ is any number satisfying \begin{equation} \label{mkkmla} M \geq \int_{Y_j} K(k,m) d\lambda(k) \end{equation} for all $m \in X_{\sigma}$ and also \begin{equation} \label{mkkmtau} M \geq \int_{X_{\sigma}} K(k,m) d\tau(m) \end{equation} for all $k \in Y_j$. In (a), where $K(k,m)$ is given by (\ref{kkma}), we need only show that we may take $M = C a^{-jn}$. The fact that (\ref{mkkmla}) then holds follows just as in the argument starting with (\ref{m2stt}) and ending with (\ref{m2end}). Similarly, for (\ref{mkkmtau}), say $k \in Y_j$. Since $Q_m \subseteq B(x_0,\delta)$, the diameter of $Q_m$ is at most $c2^{-\sigma}$ for some $c$. Choose $B > \max(2c,1)$. Then \begin{eqnarray*} \int_{X_{\sigma}} K(k,m) d\tau(m) & \leq & C\sum_{m} 2^{-\sigma n}(B+a^{j}d(x_{Q_m},x^j_k))^{-n-1}\\ & \leq & C \sum_{m} \int_{Q_m}(B+a^{j}d(y,x^j_k))^{-n-1} d\mu(y)\\ & \leq & C B \int_{\bf M}(1+a^{j}d(y,x^j_k))^{-n-1} d\mu(y) \\ & \leq & C a^{-jn} \end{eqnarray*} as desired. Here we have used the assumption that $2^{-\sigma} \leq a^{-j}$, the fact that the diameter of $Q_m$ is at most $c2^{-\sigma}$, and (\ref{ptestm}). In (b), where $K(k,m)$ is given by (\ref{kkmb}), we need only show that we may take $M = C 2^{-\sigma n}$. This, however, follows in a similar manner to (a), if one now uses the assumption that $a^{-j} \leq 2^{-\sigma}$. This completes the proof. \section{Besov spaces} We will need the following simple fact about operators on $l^q(\NN)$, for $0 < q \leq 1$, which is again adapted from arguments in \cite{FJ1}. \begin{proposition} \label{lqconv} Suppose $0 < q \leq \infty$. Say $K: \NN \times \NN \rightarrow \RR$ is nonnegative. If $z$ is a nonnegative sequence, define the nonnegative sequence ${\cal K}z$ by \[ ({\cal K}z)(r) = \sum_{s=0}^{\infty} K(r,s) z(s). \] Let $M_q$ be a number satisfying \[ M_q \geq [\sum_{s=0}^{\infty} K(r,s)^q]^{1/q} \] for all $r$, and also \[ M_q \geq [\sum_{r=0}^{\infty} K(r,s)^q]^{1/q} \] for all $s$. Then:\\ (a) If $1 \leq q \leq \infty$, then for every nonnegative sequence $z$, $\|{\cal K}z\|_q \leq M_1\|z\|_q$.\\ (b) If $0 < q < 1$, then for every nonnegative sequence $z$, $\|{\cal K}z\|_q \leq M_q\|z\|_q$.\\ (Here, $\|z\|_q$ denotes the $l^q(\NN)$ ``norm'' of $z$, which could be $\infty$; and all nonnegative numbers here are allowed to be $\infty$. Also, here and elsewhere, we follow the usual rules for interpreting the expressions when $q=\infty$.) \end{proposition} {\bf Proof} (a) is of course well known. For (b), note that, by the $q$-triangle inequality, \[ ({\cal K}z)(r)^q \leq \sum_{s=0}^{\infty} K(r,s)^q z(s)^q. \] By the known case $q=1$ of the proposition, we now see that \[ \|({\cal K}z)^q\|_1 \leq M_q^q \|z^q\|_1. \] Raising both sides to the $1/q$ power, we obtain the desired result.\\ For the rest of this section, we fix $a > 1$. We also fix $\alpha, p, q$ with $-\infty < \alpha < \infty$ and $0 < p,q \leq \infty$. \\ We use the notation for inhomogeneous Besov spaces $B_p^{\alpha q}$ on $\RR^n$ from \cite{FJ1}. Thus, on $\RR^n$, one takes any $\Phi \in {\cal S}$ supported in the closed unit ball, which does not vanish anywhere in the ball of radius $5/6$ centered at $0$.
One also takes functions $\varphi_{\nu} \in {\cal S}$ for $\nu \geq 1$, supported in the annulus $\{\xi: 2^{\nu-1} \leq |\xi| \leq 2^{\nu+1}\}$, satisfying $|\varphi_{\nu}(\xi)| \geq c > 0$ for $3/5 \leq 2^{-\nu}|\xi| \leq 5/3$ and also $|\partial^{\gamma} \varphi_{\nu}| \leq c_{\gamma} 2^{-\nu \gamma}$ for every multiindex $\gamma$. The Besov space $B_p^{\alpha q}(\RR^n)$ is then the space of $F \in {\cal S}'(\RR^n)$ such that \[ \|F\|_{B_p^{\alpha q}} = \|\check{\Phi}*F\|_{L^p} + \left(\sum_{\nu = 0}^{\infty} (2^{\nu \alpha}\|\check{\varphi}_{\nu}*F\|_{L^p})^q \right)^{1/q} < \infty. \] (Here we use the usual conventions if $p$ or $q$ is $\infty$. The definition of $B_p^{\alpha q}(\RR^n)$ is independent of the choices of $\Phi, \varphi_{\nu}$ (\cite{Peet}, page 49). Moreover, $B_p^{\alpha q}(\RR^n)$ is a quasi-Banach space, and the inclusion $B_p^{\alpha q} \subseteq {\cal S}'$ is continuous (\cite{Trieb}, page 48). In particular the space $B^{\alpha}_{\infty,\infty}(\RR^n) = {\cal C}^{\alpha}(\RR^n)$, which is the usual H\"older space if $0 < \alpha < 1$, or in general a H\"older-Zygmund space for $\alpha > 0$ (\cite{Trieb}, page 51). It is not hard to see, by using the definition and the Fourier transform, that if $K \subseteq \RR^n$ is compact, and if $N$ is sufficiently large, then \begin{equation} \label{ccNbes} \{F \in C^N: \mbox{ supp}F \subseteq K\} \subseteq B_p^{\alpha q} \end{equation} where the inclusion map is continuous if we regard the left side as a subspace of $C^N$. Pseudodifferential operators of order $0$ are bounded on the Besov spaces (\cite{P}); in particular, if $\psi \in C_c^{\infty}(\RR^n)$, the mapping $F \rightarrow \psi F$ is a bounded map on $B_p^{\alpha q}(\RR^n)$. Moreover, if $\eta: \RR^n \rightarrow \RR^n$ is a diffeomorphism which equals the identity outside a compact set, then one can define $F \circ \eta$ for $F \in B_p^{\alpha q}(\RR^n)$, and the map $F \rightarrow F \circ \eta$ is bounded on the Besov spaces (\cite{Trieb}, chapter 2.10). These facts then enable one to define $B_p^{\alpha q}({\bf M})$: let $(W_i, \chi_i)$ be a finite atlas on ${\bf M}$ with charts $\chi_i$ mapping $W_i$ into the unit ball on $\RR^n$, and suppose $\{\zeta_i\}$ is a partition of unity subordinate to the $W_i$. Then one defines $B_p^{\alpha q}({\bf M})$ to be the space of distributions $F$ on ${\bf M}$ for which \[ \|F\|_{B_p^{\alpha q}({\bf M})} = \sum_i \|(\zeta_i F) \circ \chi_i^{-1}\|_{B_p^{\alpha q}({\bf R}^n)} < \infty. \] This definition does not depend on the choice of charts or partition of unity (\cite{Triebman}).\\ It will be convenient to fix a spanning set of the differential operators on ${\bf M}$ of degree less than or equal to $J$ (for any fixed $J$). Recall that we have already fixed a finite set ${\mathcal P}$ of real $C^{\infty}$ vector fields on ${\bf M}$, whose elements span the tangent space at each point. For any integer $L \geq 1$, we let \begin{equation} \label{pmdfopdf} {\mathcal P}^J = \{X_1\ldots X_M: X_1,\ldots,X_M \in {\mathcal P}, 1 \leq M \leq J\} \cup \{\mbox{the identity map}\}. \end{equation} (In particular, ${\mathcal P}^1$ is what we have previously called ${\mathcal P}_0$.) \\ \begin{lemma} \label{besov1} Fix $b > 0$. Also fix an integer $l \geq 1$ with \begin{equation} \label{lgqgint} 2l > \max(n(1/p-1)_+ - \alpha, \alpha). \end{equation} where here $x_+ = \max(x,0)$. Fix $M$ with $(M-2l-n)p > n+1$ if $0 < p < 1$, $M-2l-n > n+1$ otherwise. Then there exists $C > 0$ as follows. Say $j \in \ZZ$. 
Select sets $E^j_k$ and points $x^j_k$ as in Lemma \ref{fjan3}. Suppose that, for each $j \geq 0$, and each $k$, \begin{equation} \label{var2lph} \varphi^j_k = (a^{-2j}\Delta)^l\Phi^j_k, \end{equation} where $\Phi^j_k \in C^{\infty}({\bf M})$ satisfies the following conditions: \begin{equation} \label{xphip} |X\Phi^j_k(y)| \leq a^{j (\deg X + n)}(1+a^{j}d(y,x^j_k))^{-M} \mbox{ whenever } X \in {\mathcal P}^{4l}. \end{equation} Then, for every $F$ in the inhomogeneous Besov space $B_p^{\alpha q}({\bf M})$, if we let \[ s_{j,k} = \langle F, \varphi^j_k \rangle, \] then \begin{equation} \label{besov1way} (\sum_{j = 0}^{\infty} a^{j\alpha q} [\sum_k \mu(E^j_k)|s_{j,k}|^p]^{q/p})^{1/q} \leq C\|F\|_{B_p^{\alpha q}}. \end{equation} \end{lemma} {\bf Proof} Cover ${\bf M}$ by a finite collection of open sets $\{W_r\}$, where each $W_r$ has the form $B(x_r,\delta)$ for some $x_r \in {\bf M}$. Let $\{\zeta_r\}$ be a partition of unity subordinate to the $\{W_r\}$. Let $Z_r = \supp \zeta_r \subseteq W_r$. Then \cite{Trieb} the map $F \rightarrow \zeta_r F$ is continuous from $B_p^{\alpha q}$ to itself. Without loss, we may therefore assume that $\supp F \subseteq Z$ where $Z$ is a compact subset of $W=B(x_0,\delta)$ for some $x_0$. Choose a chart $U_i$, as in Proposition \ref{ujvj}, with $B(x_0,3\delta) \subseteq U_i$. Select $\zeta \in C_c^{\infty}(W)$ with $\zeta \equiv 1$ in a neighborhood of $Z$. In local coordinates on $U_i$, $U_i$ is a ball in $\RR^n$. By \cite{FJ1} (changing their notation slightly by multiplying by certain constants) we may write \[ F = \sum_{m \in \ZZ^n} s_m b_m + \sum_{\sigma=0}^{\infty} \sum_{l(Q)=2^{-\sigma}} s_Q a_Q \] with convergence in $B_p^{\alpha q}$. Here, if $Q_{0m}$ is the dyadic cube of side $1$ with ''lower left corner'' $m$, \[ \mbox{supp} b_m \subseteq 3Q_{0m}, \] \[ |\partial ^{\gamma} b_m| \leq 1 \mbox{ if } |\gamma| \leq 2l, \] \[ \mbox{supp} a_Q \subseteq 3Q, \] \[ |\partial ^{\gamma} a_Q| \leq |Q|^{-|\gamma|/n} \mbox{ if } |\gamma| \leq 2l, \] \[ \int x^{\gamma} a_Q(x)dx = 0 \mbox{ if } |\gamma| \leq 2l-1,\] and finally \[ (\sum_{m \in \ZZ^n} |s_m|^p)^{1/p} + [\sum_{\sigma=0}^{\infty} 2^{\sigma \alpha q}[\sum_{l(Q)=2^{-\sigma}} 2^{-\sigma n}|s_Q|^p]^{q/p}]^{1/q} \leq C\|F\|_{B_p^{\alpha q}}. \] Since $F = \zeta F$, we have \begin{equation} \label{fsmsq} F = \sum_{m \in \ZZ^n} s_m \zeta b_m + \sum_{\sigma=0}^{\infty} \sum_{l(Q)=2^{-\sigma}} s_Q \zeta a_Q \end{equation} with convergence in $B_p^{\alpha q}$, hence in ${\cal E}'({\bf M})$. Now, the (Euclidean) distance from supp$\zeta$ to $W^c$ is positive. Thus there exists $\sigma_0$ with the property that if $\sigma \geq \sigma_0$, $l(Q) = 2^{-\sigma}$ and $3Q \cap \mbox{supp}\zeta \neq \oslash$, then supp$\zeta + 3Q \subseteq W$. (Note that $\sigma_0$ does not depend on $F$.) Accordingly, if $l(Q) \leq 2^{-\sigma_0}$, then either $\zeta a_Q \equiv 0$ or $3Q \subseteq W$. Moreover, only finitely many cubes $3Q$ with $2^{-\sigma_0} < l(Q) \leq 1$ intersect the compact set supp$\zeta$; let ${\cal Q}_0$ denote the collection of such cubes. Thus we may write \begin{equation} \label{ff0sq} \zeta F = F_0 + \sum_{\sigma=\sigma_0}^{\infty} \sum_{l(Q)=2^{-\sigma},3Q \subseteq W} s_Q \zeta a_Q \end{equation} where \begin{equation} \label{foqodf} F_0 = \sum_{m \in \ZZ^n, Q_{0m} \in {\cal Q}_0} s_m \zeta b_m + \sum_{\sigma=0}^{\sigma_0-1} \sum_{l(Q)=2^{-\sigma}, Q \in {\cal Q}_0} s_Q \zeta a_Q \end{equation} Let $c = \log_2 a$. 
Then, since the series in (\ref{fsmsq}) converges to $F$ in ${\cal E}'({\bf M})$, \[ s_{j,k} = \langle F_0,\varphi^j_k \rangle + \sum_{\sigma_0 \leq \sigma \leq jc}\: \sum_{l(Q)=2^{-\sigma},3Q \subseteq W} s_Q \langle \zeta a_Q, \varphi^j_k \rangle + \sum_{\sigma > jc}\: \sum_{l(Q)=2^{-\sigma},3Q \subseteq W} s_Q \langle \zeta a_Q, \varphi^j_k \rangle. \] For each $j \geq 0$, let $Y_j$ be the finite measure space $\{1,\ldots,{\cal N}_j\}$ with measure $\lambda$ where $\lambda(\{k\}) = \mu(E^j_k)$ for each $k \in Y_j$. For each $j \geq 0$, define $s_j: Y_j \rightarrow \CC$ by $s_j(k) = s_{j,k}$. Also, for each $Q$ with $3Q \subseteq W$, define $u^Q_j: Y_j \rightarrow \CC$ by $u^Q_j(k) = \langle \zeta a_Q, \varphi^j_k \rangle$. Finally, define $u^0_j: Y_j \rightarrow \CC$ by $u^0_j(k) = \langle F_0,\varphi^j_k \rangle$. Then \begin{equation} \label{sjsumal} s_j = u^0_j + \sum_{\sigma_0 \leq \sigma \leq jc}\: \sum_{l(Q)=2^{-\sigma},3Q \subseteq W} s_Q u^Q_j + \sum_{\sigma > jc}\: \sum_{l(Q)=2^{-\sigma},3Q \subseteq W} s_Q u^Q_j. \end{equation} Define $h$ on $U_i$ by $d\mu = h dx$ there. Note that if $3Q \subseteq W$ then $\int x^{\gamma} (a_Q/h) d\mu = 0$ if $|\gamma| \leq 2l-1$. Now say that $\sigma > jc$, i.e., that $2^{\sigma} \geq a^{j}$. Then by Lemma \ref{fjan1} (replacing ``$\nu$'' in that lemma by $j$, and with $a_Q/h = \varphi_1$ and $h\zeta\varphi^j_k = a^{j(2l+n)}\varphi_2$), then \begin{equation} \label{uqjest1} |u^Q_j(k)| = |\langle \zeta a_Q, \varphi^j_k \rangle| = |\langle a_Q/h , h\zeta\varphi^j_k \rangle| \leq Ca^{j(2l+n)} 2^{-\sigma(2l+n)}(1+a^{j}d(x_Q,x^j_k))^{2l+n-M}. \end{equation} Say instead $\sigma_0 \leq \sigma \leq jc$, so that $2^{\nu} < a^{j}$. Then by Lemma \ref{fjan2} (replacing ``$\nu$'' in that lemma by $\sigma/c$, replacing ``$\sigma$'' in that lemma by $j$, and with $\Phi^j_k = a^{jn}\Phi$, $\varphi^j_k = a^{jn-2jl}\varphi_1$, and $\zeta a_Q = 2^{2\sigma l}\varphi_2$) we have \begin{equation} \label{uqjest2} |u^Q_j(k)| = |\langle \zeta a_Q, \varphi^j_k \rangle| \leq C2^{2\sigma l} a^{-2jl}(1+2^{\sigma}d(x_Q,x^j_k))^{n-M}. \end{equation} Moreover, if $Q$ is one of the finitely many cubes in ${\cal Q}_0$, then again by Lemma \ref{fjan2} (taking ``$\nu$'' in that lemma to be $0$, replacing ``$\sigma$'' in that lemma by $j$, and with $\Phi^j_k = a^{jnl}\Phi$, $\varphi^j_k = a^{jn-2jl}\varphi_1$, and $C\zeta b_m = \varphi_2$ if $Q = Q_{0m}$ has side length $1$, or $C\zeta a_Q = \varphi_2$ otherwise) we have \begin{equation} \label{uqjest3} |u^0_j(k)| \leq C{\cal T} a^{-2jl}(1+d(x_Q,x^j_k))^{n-M}, \end{equation} where \[ {\cal T} = \sum_{m \in \ZZ^n, Q_{0m} \in {\cal Q}_0} |s_m| + \sum_{\sigma=0}^{\sigma_0-1} \sum_{l(Q)=2^{-\sigma}, Q \in {\cal Q}_0} |s_Q|. \] Say now $p \geq 1$, and let $\|\:\|_p$ denote $L^p(Y_j)$ norm. From (\ref{sjsumal}) we obtain \[ \|s_j\|_p \leq \|u^0_j\|_p + \sum_{\sigma_0 \leq \sigma \leq jc}\: \|\sum_{l(Q)=2^{-\sigma},3Q \subseteq W} s_Q u^Q_j\|_p + \sum_{\sigma > jc}\: \|\sum_{l(Q)=2^{-\sigma},3Q \subseteq W} s_Q u^Q_j\|_p. \] Let \[ A_j = \|s_j\|_p = [\sum_{k=1}^{{\cal N}_j} \mu(E^j_k)|s_{j,k}|^p]^{1/p},\:\: B_{\sigma} = [\sum_{l(Q)=2^{-\sigma},3Q \subseteq W} 2^{-\sigma n}|s_Q|^p]^{1/p}. \] Then by (\ref{uqjest1}), (\ref{uqjest2}), (\ref{uqjest3}) and Lemma \ref{fjan4}, we see that \begin{equation} \label{ajbnuest} A_j \leq C({\cal T}a^{-2jl} + \sum_{\sigma_0 \leq \sigma \leq jc} a^{-2jl} 2^{2\sigma l}B_{\sigma} + \sum_{\sigma > jc} a^{2jl} 2^{-2\sigma l}B_{\sigma}). 
\end{equation} (In using Lemma \ref{fjan4}, we have noted that $3Q \subseteq W \Rightarrow Q \subseteq W$.) Now also write \[ A^{\alpha}_j = a^{j\alpha} A_j,\:\: B^{\alpha}_{\sigma} = 2^{\sigma \alpha} B_{\sigma}. \] Then, by (\ref{ajbnuest}), \begin{equation} \label{ajbnalest} A_j^{\alpha} \leq C({\cal T} a^{-j(2l-\alpha)}+ \sum_{\sigma=\sigma_0}^{\infty} K(j,\sigma) B^{\alpha}_{\sigma}), \end{equation} where \[ K(j,\sigma) = a^{-j(2l-\alpha)}2^{\sigma(2l-\alpha)} \mbox{ if } \sigma_0 \leq \sigma \leq jc, \] \[ K(j,\sigma) = a^{j(2l+\alpha)}2^{-\sigma(2l+\alpha)} \mbox{ if } \sigma > jc. \] By (\ref{lgqgint}), $2l$ is more than $\max(\alpha,-\alpha)$. Recall also that $c = \log_2 a$. Thus, by Proposition \ref{lqconv}, $\|(A_j^{\alpha})\|_q \leq C({\cal T}+\|(B_{\sigma}^{\alpha})\|_q)$. Consequently \begin{equation} \label{bescmp1} (\sum_{j = 0}^{\infty} a^{j\alpha q} [\sum_k \mu(E^j_k)|s_{j,k}|^p]^{q/p})^{1/q} \leq C[(\sum_{m \in \ZZ^n} |s_m|^p)^{1/p} + (\sum_{\sigma=0}^{\infty} 2^{\sigma \alpha q}[\sum_{l(Q)=2^{-\sigma}} 2^{-\sigma n}|s_Q|^p]^{q/p})^{1/q}] \leq C\|F\|_{B_p^{\alpha q}} \end{equation} as desired, at least if $p \geq 1$. If instead $0 < p < 1$, we evaluate each side of (\ref{sjsumal}) at $k$ (for $1 \leq k \leq {\cal N}_j$), and use the $p$-triangle inequality, to obtain \begin{equation} \label{sjsumalp} |s_j(k)|^p \leq |u^0_j(k)|^p + \sum_{\sigma_0 \leq \sigma \leq jc}\: \sum_{l(Q)=2^{-\sigma},3Q \subseteq W} |s_Q|^p |u^Q_j(k)|^p + \sum_{\sigma > jc}\: \sum_{l(Q)=2^{-\sigma},3Q \subseteq W} |s_Q|^p |u^Q_j(k)|^p. \end{equation} Let $A_j$, $B_{\sigma}$ be as above. Integrating both sides of (\ref{sjsumalp}) over $Y_j$, using (\ref{uqjest1}), (\ref{uqjest2}), (\ref{uqjest3}), and using Lemma \ref{fjan4} (taking ``$p$'' in that lemma to be $1$), we find \[ A_j^p \leq C({\cal T}^p a^{-2jlp} + \sum_{\sigma_0 \leq \sigma \leq jc} a^{-2jlp} 2^{2\sigma lp}B_{\sigma}^p + \sum_{\sigma > jc} a^{j(2l+n)p-jn} 2^{-\sigma(2l+n)p+\sigma n}B_{\sigma}^p). \] Let $A^{\alpha}_j$, $B^{\alpha}_{\sigma}$ be as above. We find that \begin{equation} \label{ajbnalpest} (A_j^{\alpha})^p \leq C({\cal T}^p a^{-j(2l-\alpha)p}+ \sum_{\sigma=\sigma_0}^{\infty} K(j,\sigma) (B^{\alpha}_{\sigma})^p), \end{equation} where now \[ K(j,\sigma) = a^{-j(2l-\alpha)p}2^{\sigma(2l-\alpha)p} \mbox{ if } \sigma_0 \leq \sigma \leq jc, \] \[ K(j,\sigma) = a^{j[(2l+n+\alpha)p -n]}2^{-\sigma[(2l+n+\alpha)p -n]} \mbox{ if } \sigma > jc. \] By (\ref{lgqgint}), $2l$ is more than $\max(\alpha,n/p-n-\alpha)$. Recall also that $c = \log_2 a$. Thus, by Proposition \ref{lqconv}, $\|(A_j^{\alpha})^p\|_{q/p} \leq C({\cal T}^p+\|(B_{\sigma}^{\alpha})^p\|_{q/p})$. Upon raising both sides to the $1/p$ power, one again obtains (\ref{bescmp1}), as desired.\\ \ \\ {\bf Remark} Presumably the assumptions in the last lemma can be weakened: assuming only $2m > n(1/p-1)_+ - \alpha$, we conjecture that one should be able to replace the assumption (\ref{var2lph}) by the assumption that $\varphi^j_k = (a^{-2j}\Delta)^m\Phi^j_k$, where $\Phi^j_k$ satisfies (\ref{xphip}) for all $X \in {\mathcal P}^{2(l+m)}$ (not ${\mathcal P}^{4l}$). \begin{lemma} \label{besov2} Fix $b > 0$. Also fix an integer $l \geq 1$ with \begin{equation} \label{lgqgintac} 2l > n(1/p-1)_+ - \alpha. \end{equation} where here $x_+ = \max(x,0)$. Fix $M$ with $(M-n)p > n+1$ if $0 < p < 1$, $M-n > n+1$ otherwise. If $0 < p < 1$, we also fix a number $\rho > 0$. Then there exists $C > 0$ as follows. Say $j \in \ZZ$.
Select sets $E^j_k$ and points $x^j_k$ as in Lemma \ref{fjan3}. If $0 < p < 1$, we assume that, for all $j,k$, \begin{equation} \label{ejkro} \mu(E^j_k) \geq \rho a^{-jn} \end{equation} Suppose that, for each $j \geq 0$, and each $k$, $\varphi^j_k = (a^{-2j}\Delta)^l\Phi^j_k$, where $\Phi^j_k \in C^{\infty}({\bf M})$ satisfies the following conditions: \[ |X\Phi^j_k(y)| \leq a^{j (\deg X + n)}(1+a^{j}d(y,x^j_k))^{-M} \mbox{ whenever } X \in {\mathcal P}^{4l}. \] Suppose that $\{s_{j,k}: j \geq 0, 1 \leq k \leq {\cal N}_j\}$ satisfies \[ (\sum_{j = 0}^{\infty} a^{j\alpha q} [\sum_k \mu(E^j_k)|s_{j,k}|^p]^{q/p})^{1/q} < \infty. \] Then $\sum_{j=0}^{\infty} \sum_k \mu(E^j_k) s_{j,k} \varphi^j_k$ converges in $B_p^{\alpha q}({\bf M})$, and \begin{equation} \label{besov2way} \|\sum_{j=0}^{\infty} \sum_k \mu(E^j_k) s_{j,k} \varphi^j_k\|_{B_p^{\alpha q}} \leq C(\sum_{j = 0}^{\infty} a^{j\alpha q} [\sum_k \mu(E^j_k)|s_{j,k}|^p]^{q/p})^{1/q}. \end{equation} \end{lemma} {\bf Proof} In \cite{SS}, Seeger and Sogge gave an equivalent characterization of $B_p^{\alpha q}$. (We change their notation a little; what we shall call $\beta_{k-1}(s^2)$, they called $\beta_k(s)$.) Choose $\beta_0 \in C_c^{\infty}((1/4,16))$, with the property that for any $s > 0$, $\sum_{\nu=-\infty}^{\infty} \beta^2_0(2^{-2\nu}s) = 1$. For $\nu \geq 1$, define $\beta_{\nu} \in C_c^{\infty}((2^{2\nu-2},2^{2\nu+4}))$, by $\beta_{\nu}(s) = \beta_0(2^{-2\nu}s)$. Also, for $s > 0$, define the smooth function $\beta_{-1}(s)$ by $\beta_{-1}(s) = \sum_{\nu=-\infty}^{-1} \beta(2^{-2\nu}s)$. (Note that $\beta_{-1}(s) = 0$ for $s > 4$.) Then (\cite{SS}), for $F \in C^{\infty}({\bf M})$, $\|F\|_{B_p^{\alpha q}}$ is equivalent to the $l^q$ norm of the sequence $\{ 2^{\nu\alpha}\|\beta_{\nu}(\Delta)F\|_p: -1 \leq \nu \leq \infty\}$. For $\nu \geq -1$, let $J_{\nu}$ be the kernel of $\beta_{\nu}(\Delta)$. Using the eigenfunction expansion of $\beta_{\nu}(\Delta)$ (see (\ref{kerexp}) of \cite{gmcw}), one sees at once that $J_{-1}(x,y)$ is smooth in $(x,y)$. Moreover, for $\nu \geq 0$, $\beta_{\nu} \in {\cal S}(\RR^+)$ and $\beta_{\nu}(0) = 0$; so $J_{\nu}$ is smooth as well. For any integer $I \geq 0$, we set $\beta_0^I(s) = \beta_0(s)/s^I$, and define $\beta_{\nu}^I$ by $\beta_{\nu}^I(s) = \beta_0^I(2^{-2\nu}s)$. Then $\beta_{\nu}^I(s) = 2^{2\nu I}\beta_{\nu}(s)/s^I$, so that $\beta_{\nu}(\Delta) = 2^{-2\nu I} \beta_{\nu}^I(\Delta)\Delta^I $. Thus, if $J_{\nu}^I$ is the kernel of $\beta_{\nu}^I(\Delta)$, we have \begin{equation} \label{jjdyi} J_{\nu}(x,y) = 2^{-2{\nu}I}\Delta_y^I J_{\nu}^I(x,y), \end{equation} where $\Delta_y$ means $\Delta$ as applied in the $y$ variable. Also, by Lemma \ref{manmol}, since $\beta_{\nu}^I(\Delta) = \beta_0^I(2^{-2\nu}\Delta)$, we know the following: for every pair of $C^{\infty}$ differential operators $X$ (in $x$) and $Y$ (in $y$) on ${\bf M}$, and for every integer $N \geq 0$, and for any fixed $I$, there exists $C$ such that for all $\nu$, \begin{equation} \label{jliest} |XY J_{\nu}^I(x,y)| \leq C 2^{\nu(n+\deg X + \deg Y)}(1+2^\nu d(x,y))^{-N} \end{equation} for all $x,y \in {\bf M}$. In proving (\ref{besov2way}), we may assume that for all but finitely many $j$, all $s_{j,k} = 0$. For if we can prove the inequality (\ref{besov2way}) in that case, it will follow at once that the partial sums of $\sum_{j=0}^{\infty} \sum_k \mu(E^j_k) s_{j,k} \varphi^j_k$ form a Cauchy sequence in the quasi-Banach space $B_p^{\alpha q}({\bf M})$. 
Thus the series will converge in that quasi-Banach space, and moreover the inequality (\ref{besov2way}) will hold in full generality. In fact (\ref{besov2way}) shows that the convergence is unconditional. With this assumption, we may let $F = \sum_{j=0}^{\infty} \sum_k \mu(E^j_k) s_{j,k} \varphi^j_k$. Let $c = \log_a 2$. For $\nu \geq -1$, we have \begin{equation} \label{beftot} \beta_{\nu}(\Delta)F = \sum_{0 \leq j \leq \nu c}\: \sum_k s_{j,k} \mu(E^j_k)\beta_{\nu}(\Delta)\varphi^j_k + \sum_{j > \nu c}\: \sum_k s_{j,k} \mu(E^j_k)\beta_{\nu}(\Delta)\varphi^j_k. \end{equation} Of course, in each term, $[\beta_{\nu}(\Delta)\varphi^j_k](z) = \int J_{\nu}(x,y) \varphi^j_k(y) d\mu(y)$. Suppose that $x \in {\bf M}$. Now say that $j > \nu c$, i.e., that $a^j \geq 2^{\nu}$. Then by Lemma \ref{fjan2} (replacing ``$\sigma$'' in that lemma by $j$, ``$x_0$'' by $x^j_k$, replacing ``$\nu$'' in that lemma by $\nu c$, and with $\Phi^j_k = a^{jn} \Phi$, $\varphi^j_k = a^{jn-2jl}\varphi_1$, and $J_{\nu}(x,y) = 2^{\nu (n+2l)}\varphi_2(y)$, then \begin{equation} \label{bdfest1} |\mu(E^j_k)\beta_{\nu}(\Delta)\varphi^j_k(x)| \leq Ca^{-2jl} 2^{\nu(n+2l)}\mu(E^j_k)(1+2^{\nu}d(x,x^j_k))^{n-M}. \end{equation} Say instead $0 \leq j \leq \nu c$, so that $a^j \leq 2^{\nu}$. Select $I$ with \[ 2I > \max(\alpha,2l). \] Then by Lemma \ref{fjan2} (replacing ``$\sigma$'' in that lemma by $\nu c$, replacing ``$\nu$'' in that lemma by $j$, and with $J^I_{\nu}(x,y) = 2^{\nu n}\Phi(y)$, $J_{\nu}(x,y) = 2^{\nu n -2\nu I}\varphi_1(y)$, and $\varphi^j_k = a^{j(n+2l)}\varphi_2$) we have \begin{equation} \label{bdfest2} |\mu(E^j_k)\beta_{\nu}(\Delta)\varphi^j_k(x)| \leq Ca^{j(n+2I)} 2^{-2\nu I}\mu(E^j_k)(1+a^j d(x,x^j_k))^{n-M}, \end{equation} since $a^{2jl} \leq a^{2jI}$. Say now $p \geq 1$. From (\ref{beftot}) we obtain \[ \|\beta_{\nu}(\Delta)F\|_p \leq \sum_{0 \leq j \leq \nu c}\: \|\sum_k s_{j,k} \mu(E^j_k)\beta_{\nu}(\Delta)\varphi^j_k\|_p + \sum_{j > \nu c}\: \|\sum_k s_{j,k} \mu(E^j_k)\beta_{\nu}(\Delta)\varphi^j_k\|_p. \] Let \[ A_{\nu} = \|\beta_{\nu}(\Delta)F\|_p,\:\: B_{j} = [\sum_k \mu(E^j_k)|s_{j,k}|^p]^{1/p}. \] Then by (\ref{bdfest1}), (\ref{bdfest2}), and Lemma \ref{fjan3}, we see that \begin{equation} \label{ajbnuest3} A_{\nu} \leq C(\sum_{0 \leq j \leq \nu c} a^{2jI} 2^{-2\nu I}B_{j} + \sum_{j > \nu c} a^{-2jl} 2^{2\nu l}B_{j}). \end{equation} Now also write \[ A^{\alpha}_{\nu} = 2^{\nu \alpha} A_{\nu},\:\: B^{\alpha}_{j} = a^{j \alpha} B_{j}. \] Then, by (\ref{ajbnuest3}), \begin{equation} \label{ajbnalest3} A_{\nu}^{\alpha} \leq C(\sum_{j=0}^{\infty} K(\nu,j) B^{\alpha}_{j}), \end{equation} where \[ K(\nu,j) = 2^{-\nu(2I-\alpha)}a^{j(2I-\alpha)} \mbox{ if } 0 \leq j \leq \nu c, \] \[ K(\nu,j) = 2^{\nu(2l+\alpha)}a^{-j(2l+\alpha)} \mbox{ if } j > \nu c. \] By (\ref{lgqgintac}), $2l$ is more than $-\alpha$, and we have also taken $I$ to satisfy $2I > \alpha$. Recall also that $c = \log_a 2$. Thus, by Proposition \ref{lqconv}, $\|(A_{\nu}^{\alpha})\|_q \leq C\|(B_{j}^{\alpha})\|_q$. Consequently the $l^q$ norm of the sequence $\{2^{\nu \alpha} \|\beta_{\nu}(\Delta)F\|_p\}$ is less than or equal to $C(\sum_{j = 0}^{\infty} a^{j\alpha q} [\sum_k \mu(E^j_k)|s_{j,k}|^p]^{q/p})^{1/q}$, which, because of the result of Seeger-Sogge, gives the lemma, at least if $p \geq 1$. 
If instead $0 < p < 1$, we evaluate each side of (\ref{beftot}) at $x$ (for each $x \in {\bf M}$), and use the $p$-triangle inequality, to obtain \[ |\beta_{\nu}(\Delta)F(x)|^p \leq \sum_{0 \leq j \leq \nu c}\: \sum_k |s_{j,k}|^p \mu(E^j_k)^p|\beta_{\nu}(\Delta)\varphi^j_k(x)|^p + \sum_{j > \nu c}\: \sum_k |s_{j,k}|^p \mu(E^j_k)^p|\beta_{\nu}(\Delta)\varphi^j_k(x)|^p \] so, by the assumption (\ref{ejkro}), \begin{align} &|\beta_{\nu}(\Delta)F(x)|^p \nonumber \\ &\leq C (\sum_{0 \leq j \leq \nu c}\: \sum_k |s_{j,k}|^p a^{jn(1-p)}\mu(E^j_k)|\beta_{\nu}(\Delta)\varphi^j_k(x)|^p + \sum_{j > \nu c}\: \sum_k |s_{j,k}|^p a^{jn(1-p)}\mu(E^j_k)|\beta_{\nu}(\Delta)\varphi^j_k(x)|^p) \label{beftotp} \end{align} Let $A_{\nu}$, $B_{j}$ be as above. Integrating both sides of (\ref{beftotp}) over ${\bf M}$, using (\ref{bdfest1}), (\ref{bdfest2}), and Lemma \ref{fjan3} (taking ``$p$'' in that lemma to be $1$), we find \begin{equation} \label{anupbjp} A_{\nu}^p \leq C(\sum_{0 \leq j \leq \nu c} a^{2jIp} 2^{-2\nu Ip}B_{j}^p + \sum_{j > \nu c} a^{-j(2l+n)p+jn} 2^{\nu(2l+n)p-\nu n}B_{j}^p). \end{equation} Let $A^{\alpha}_{\nu}$, $B^{\alpha}_{j}$ be as above. We find that \begin{equation} \label{ajbnalpest2} (A_{\nu}^{\alpha})^p \leq C(\sum_{j=0}^{\infty} K(\nu,j) (B^{\alpha}_{j})^p), \end{equation} where now \[ K(\nu,j) = 2^{-\nu(2I-\alpha)p}a^{j(2I-\alpha)p} \mbox{ if } 0 \leq j \leq \nu c, \] \[ K(\nu,j) = 2^{\nu[(2l+n+\alpha)p -n]}a^{-j[(2l+n+\alpha)p -n]} \mbox{ if } j \geq \nu c. \] By (\ref{lgqgintac}), $2l$ is more than $n/p-n-\alpha$; also $2I > \alpha$. Recall also that $c = \log_a 2$. Thus, by Proposition \ref{lqconv}, $\|(A_{\nu}^{\alpha})^p\|_{q/p} \leq C(\|(B_{j}^{\alpha})^p\|_{q/p})$. Upon raising both sides to the $1/p$ power, one again obtains (\ref{besov2way}), as desired.\\ For any $x \in {\bf M}$, and any integer $I, J \geq 1$, we let \begin{equation} \label{mxdefL} {\mathcal M}_{x,t}^{IJ} = \{\varphi \in C_0^{\infty}({\bf M}): t^{n+\deg Y} |(\frac{d(x,y)}{t})^N Y\varphi(y)|\leq 1 \mbox{ whenever } y \in {\bf M}, \: 0 \leq N \leq J \mbox{ and } Y \in {\mathcal P}^I\}. \end{equation} (This space is a variant of a space of molecules, as defined earlier in \cite{gil} and \cite{han2}. In the notation of our earlier article \cite{gm2}, ${\mathcal M}_{x,t}^{n+2,1} = {\mathcal M}_{x,t}$.) Note that, if for each $x \in {\bf M}$, we define the functions $\varphi^t_x, \psi^t_x$ on ${\bf M}$ by $\varphi^t_x(y) = K_t(x,y)$ and $\psi^t_x(y) = K_t(y,x)$ (notation as in Lemma \ref{manmol}), then by Lemma \ref{manmol}, for each $I, J \geq 1$, there exists $C_{IJ} > 0$ such that $\varphi^t_x$ and $\psi^t_x$ are in $C_{IJ}{\mathcal M}_{x,t}^{IJ}$ {\em for all} $x \in {\bf M}$ and all $t > 0$.\\ We recall Theorem \ref{sumopthmfr} of our earlier article \cite{gm2}: \begin{theorem} \label{sumopthm} Fix $a>1$. Then there exist $C_1, C_2 > 0$ as follows. For each $j \in \ZZ$, write ${\bf M}$ as a finite disjoint union of measurable subsets $\{E^j_k: 1 \leq k \leq {\cal N}_j\}$, each of diameter less than $a^{-j}$. For each $j,k$, select any $x^j_k \in E^j_k$, and select $\varphi^j_k$, $\psi^j_k$ with \\$\varphi^j_k , \psi^j_k\in \mathcal{M}^{n+2,1}_{x^j_k, a^{-j}}$. For $F \in C^1({\bf M})$, we claim that we may define \begin{equation} \label{sumopdf2} SF = S_{\{\varphi^j_k\},\{\psi^j_k\}}F = \sum_{j}\sum_k \mu(E^j_k) \langle F, \varphi^j_k \rangle \psi^j_k.
\end{equation} $($Here, and in similar equations below, the sum in $k$ runs from $k = 1$ to $k = {\cal N}_j$.$)$ Indeed: \begin{itemize} \item[$(a)$] For any $F\in C^1({\bf M})$, the series defining $SF$ converges absolutely, uniformly on $\bf{M}$, \item[$(b)$] $\parallel SF\parallel_2\leq C_2\parallel F\parallel_2$ for all $F\in C^1(\bf{M})$.\\ Consequently, $S$ extends to be a bounded operator on $L^2({\bf M})$, with norm less than or equal to $C_2$. \item[$(c)$] If $F \in L^2({\bf M})$, then \begin{equation} \label{uncndl2} SF = \sum_{j}\sum_k \mu(E^j_k) \langle F, \varphi^j_k \rangle \psi^j_k \end{equation} where the series converges unconditionally. \item[$(d)$] If $F,G \in L^2({\bf M})$, then \begin{equation}\label{eq:wk} \langle SF,G \rangle = \sum_{j}\sum_{k}\mu(E^j_k) \langle F, \varphi^j_k \rangle \langle \psi^j_k,G \rangle , \end{equation} where the series converges absolutely.\\ \end{itemize} \end{theorem} This result, which was proved by use of the $T(1)$ theorem, explains the $L^2$ theory of the summation operator $S$. On Besov spaces, we have the following result for summation operators, where now we consider the sum over nonnegative $j$: \begin{theorem} \label{sumopbes} Fix an integer $l \geq 1$ with $2l > \max(n(1/p-1)_+ - \alpha, \alpha)$. Also fix $J$ with $(J-2l-n)p > n+1$ if $0 < p < 1$, $J-2l-n > n+1$ if $p \geq 1$. Then there exists $C > 0$ as follows. For each integer $j \geq 0$, write ${\bf M}$ as a finite disjoint union of measurable subsets $\{E^j_k: 1 \leq k \leq {\cal N}_j\}$, each of diameter less than $a^{-j}$, select $x^j_k \in E^j_k$, select $\Phi^j_k, \Psi^j_k \in \mathcal{M}_{x^j_k, a^{-j}}^{4l,J}$ and set $\varphi^j_k = (a^{-2j}\Delta)^l\Phi^j_k$, $\psi^j_k = (a^{-2j}\Delta)^l\Psi^j_k$. By Theorem \ref{sumopthm}, we may then define \begin{equation} \label{sumopdf4} S^{\prime}F = S^{\prime}_{\{\varphi^j_k\},\{\psi^j_k\}}F = \sum_{j=0}^{\infty}\sum_k \mu(E^j_k) \langle F, \varphi^j_k \rangle \psi^j_k, \end{equation} at first for $F \in C^1({\bf M})$, and Theorem \ref{sumopthm} applies.\\ Then, if $F \in B_p^{\alpha q}$, the series in (\ref{sumopdf4}) converges in $B_p^{\alpha q}$, to a distribution $S'F \in B_p^{\alpha q}$, such that $(S'F)(1) = 0$. Moreover, $\|S'F\|_{B_p^{\alpha q}} \leq C\|F\|_{B_p^{\alpha q}}$. \end{theorem} {\bf Proof} Setting $s_{j,k} = \langle F, \varphi^j_k \rangle$, we see by Lemmas \ref{besov1} and \ref{besov2} that the series in (\ref{sumopdf4}) converges to a distribution $S'F$ in $B_p^{\alpha q}$, and that \begin{equation} \label{sumbes1} \|S'F\|_{B_p^{\alpha q}} = \|\sum_{j=0}^{\infty}\sum_k \mu(E^j_k) \langle F, \varphi^j_k \rangle \psi^j_k\|_{B_p^{\alpha q}} \leq C(\sum_{j = 0}^{\infty} a^{j\alpha q} [\sum_k \mu(E^j_k)|\langle F, \varphi^j_k \rangle|^p]^{q/p})^{1/q} \leq C\|F\|_{B_p^{\alpha q}}, \end{equation} {\em provided} that $p \geq 1$, or alternatively that $0 < p < 1$ and (\ref{ejkro}) holds. If $0 < p < 1$ and (\ref{ejkro}) does not hold, we at least know that the second inequality in (\ref{sumbes1}) holds. For the first inequality, we need to regroup. Say then that $0 < p < 1$. By the discussion before Theorem \ref{framainfr} of \cite{gm2}, for each $j \geq 0$, there exists a finite covering of ${\bf M}$ by disjoint measurable sets ${\cal F}^j_1,\ldots,{\cal F}^j_{L(j)}$, such that whenever $1 \leq i \leq L(j)$, there is a $y^j_i \in {\bf M}$ with $B(y^j_i,2a^{-j}) \subseteq {\cal F}^j_i \subseteq B(y^j_i,4a^{-j})$. Fix $j$ for now. 
For each $k$ with $1 \leq k \leq {\cal N}_j$, select a number $i_{j,k}$ with $1 \leq i_{j,k} \leq L(j)$, such that ${\cal F}^j_{i_{j,k}} \cap E^j_k \neq \emptyset$. For $1 \leq i \leq L(j)$, let $S_{j,i} = \{k: i_{j,k} = i\}$, and then let \[ {\cal E}^j_i = \cup_{k \in S_{j,i}} E^j_k. \] Then we have that \[ B(y^j_i,a^{-j}) \subseteq {\cal E}^j_i \subseteq B(y^j_i,5a^{-j}). \] The second inclusion here is evident from the facts that, firstly, ${\cal F}^j_i \subseteq B(y^j_i,4a^{-j})$, secondly, that ${\cal F}^j_{i} \cap E^j_k \neq \emptyset$ whenever $k \in S_{j,i}$, and thirdly, that each $E^j_k$ has diameter less than $a^{-j}$. For the first inclusion, say $x \in B(y^j_i,a^{-j})$; we need to show that $x \in {\cal E}^j_i$. Choose $k$ with $x \in E^j_k$. Since $E^j_k$ has diameter less than $a^{-j}$, surely $E^j_k \subseteq B(y^j_i,2a^{-j}) \subseteq {\cal F}^j_i$. Thus ${\cal F}^j_i$ is the only one of the sets ${\cal F}^j_1,\ldots,{\cal F}^j_{L(j)}$ which $E^j_k$ intersects, and so $k$ must be in $S_{j,i}$. Accordingly $x \in E^j_k \subseteq {\cal E}^j_i$, as claimed. For each $j, i$ let \[ r^{j,i} = \max_{k \in S_{j,i}} |\langle F, \varphi^j_k \rangle|. \] Then, for some $k \in S_{j,i}$, $r^{j,i} = |\langle F, \varphi^j_k \rangle|$. Now $d(x^j_k, y^j_i) \leq 5a^{-j}$, so it is evident from (\ref{mxdefL}) that, for some absolute constant $C_0$, $\mathcal{M}_{x^j_k, a^{-j}}^{4l,J} \subseteq C_0\mathcal{M}_{y^j_i, a^{-j}}^{4l,J}$; in particular, $\Phi^j_k \in C_0\mathcal{M}_{y^j_i, a^{-j}}^{4l,J}$. Also, $\mbox{diam}{\cal E}^j_i \leq 10a^{-j}$. Thus, by Lemma \ref{besov1}, \begin{equation} \label{rjifbpq} (\sum_{j = 0}^{\infty} a^{j\alpha q} [\sum_i \mu({\cal E}^j_i)|r^{j,i}|^p]^{q/p})^{1/q} \leq C\|F\|_{B_p^{\alpha q}}. \end{equation} Let ${\bf F}$ be any finite subset of $\{(j,k): j \geq 0, 1 \leq k \leq {\cal N}_j\}$. Define $s_{j,k} = \langle F, \varphi^j_k \rangle$ if $(j,k) \in {\bf F}$; otherwise, let $s_{j,k} = 0$. Also, for each $j,i$ with $1 \leq i \leq L(j)$, let \[ s^{j,i} = \max_{k \in S_{j,i}} |s_{j,k}|. \] Then only finitely many of the $s^{j,i}$ are nonzero, and we always have $0 \leq s^{j,i} \leq r^{j,i}$. Therefore by (\ref{rjifbpq}), in order to show the convergence of the series for $S'F$ in $B_p^{\alpha q}$, and that $\|S'F\|_{B_p^{\alpha q}} \leq C\|F\|_{B_p^{\alpha q}}$, it is sufficient to show that \begin{equation} \label{sjksjiok} \|\sum_{j=0}^{\infty}\sum_k s_{j,k} \mu(E^j_k) \psi^j_k\|_{B_p^{\alpha q}} \leq C(\sum_{j = 0}^{\infty} a^{j\alpha q} [\sum_i \mu({\cal E}^j_i)|s^{j,i}|^p]^{q/p})^{1/q}, \end{equation} where here $C$ is independent of ${\bf F}$ (and our choices of $E^j_k, x^j_k, \Phi^j_k, \Psi^j_k, {\cal E}^j_i$). Let $G = \sum_{j=0}^{\infty}\sum_k s_{j,k} \mu(E^j_k) \psi^j_k$. Let $\beta_{\nu}$ and $c$ be as in the proof of Lemma \ref{besov2}. We have from (\ref{beftot}) that, for each $x \in {\bf M}$, \begin{equation} \label{beftotabs} |\beta_{\nu}(\Delta)G(x)| \leq \sum_{0 \leq j \leq \nu c}\: \sum_{i=1}^{L(j)}\sum_{k \in S_{j,i}} \mu(E^j_k)|s_{j,k}| |\beta_{\nu}(\Delta)\psi^j_k(x)| + \sum_{j > \nu c}\: \sum_{i=1}^{L(j)}\sum_{k \in S_{j,i}}\mu(E^j_k)|s_{j,k}| |\beta_{\nu}(\Delta)\psi^j_k(x)|. \end{equation} For each $j, i$, and each $x \in {\bf M}$, let \[ H^{j,i}(x) = \max_{k \in S_{j,i}}|\beta_{\nu}(\Delta)\psi^j_k(x)|.
\] From (\ref{beftotabs}), we see that \begin{equation} \label{beftotamlg} |\beta_{\nu}(\Delta)G(x)| \leq \sum_{0 \leq j \leq \nu c}\: \sum_{i=1}^{L(j)} s^{j,i} \mu({\cal E}^j_i)H^{j,i}(x) + \sum_{j > \nu c}\: \sum_{i=1}^{L(j)}s^{j,i} \mu({\cal E}^j_i)H^{j,i}(x). \end{equation} Note once again that if $k \in S_{j,i}$, then $d(x^j_k, y^j_i) \leq 5a^{-j}$; and therefore, $\Psi^j_k \in C_0\mathcal{M}_{y^j_i, a^{-j}}^{4l,J}$ for some absolute constant $C_0$. Note also that $H^{j,i}(x) = |\beta_{\nu}(\Delta)\psi^j_k(x)|$ for some $k \in S_{j,i}$. Thus, the reasoning leading to (\ref{bdfest1}) and (\ref{bdfest2}) shows that if $j > \nu c$, then \begin{equation} \label{bdfest3} \mu({\cal E}^j_i) H^{j,i}(x) \leq Ca^{-2jl} 2^{\nu(n+2l)}\mu({\cal E}^j_i)(1+2^{\nu}d(x,y^j_i))^{n-J}, \end{equation} while if we select $I$ with $2I > \max(\alpha, 2l)$, and if $0 \leq j \leq \nu c$, then \begin{equation} \label{bdfest4} \mu({\cal E}^j_i) H^{j,i}(x) \leq Ca^{j(n+2I)} 2^{-\nu(2I)}\mu({\cal E}^j_i)(1+a^j d(x,y^j_i))^{n-J}. \end{equation} Since $B(y^j_i,a^{-j}) \subseteq {\cal E}^j_i$ for all $j, i$, there exists $\rho > 0$ such that $\mu({\cal E}^j_i) \geq \rho a^{-jn}$ for all $j \geq 0$. Consequently, the reasoning leading to (\ref{beftotp}) shows that \[ |\beta_{\nu}(\Delta)G(x)|^p \leq C (\sum_{0 \leq j \leq \nu c}\: \sum_i |s^{j,i}|^p a^{jn(1-p)}\mu({\cal E}^j_i) |H^{j,i}(x)|^p + \sum_{j > \nu c}\: \sum_i |s^{j,i}|^p a^{jn(1-p)}\mu({\cal E}^j_i) |H^{j,i}(x)|^p). \] Now set \[ A_{\nu} = \|\beta_{\nu}(\Delta)G\|_p,\:\: B_{j} = [\sum_i \mu({\cal E}^j_i)|s^{j,i}|^p]^{1/p},\:\: A^{\alpha}_{\nu} = 2^{\nu \alpha} A_{\nu},\:\: B^{\alpha}_{j} = a^{j \alpha} B_{j}. \] Just as in the proof of Lemma \ref{besov2}, (\ref{bdfest3}), (\ref{bdfest4}) and Lemma \ref{fjan3} (taking ``$p$'' in that lemma to be $1$) show that (\ref{anupbjp}) and (\ref{ajbnalpest2}) hold, with $K(\nu,j)$ as just after (\ref{ajbnalpest2}). This again gives $\|(A_{\nu}^{\alpha})^p\|_{q/p} \leq C(\|(B_{j}^{\alpha})^p\|_{q/p})$. Upon raising both sides to the $1/q$ power, one obtains (\ref{sjksjiok}), as claimed. Finally, the fact that $(S'F)(1) = 0$ follows from the fact that each term of the summation in (\ref{sumopdf4}) vanishes when applied to $1$ (since the assumption that $l \geq 1$ implies that each $\psi^j_k$ has integral zero), and the fact that the series in (\ref{sumopdf4}) converges in $B_p^{\alpha q}({\bf M})$, and hence in ${\cal E}'({\bf M})$. This completes the proof. \begin{definition} \label{bes0} We let $B_{p,0}^{\alpha q}({\bf M}) = \{F \in B_p^{\alpha q}({\bf M}): F1 = 0\}$. (Here $F1$ is the result of applying the distribution $F$ to the constant function $1$.) \end{definition} \begin{theorem} \label{besmain1} Say $c_0 , \delta_0 > 0$. Fix an integer $l \geq 1$ with $2l > \max(n(1/p-1)_+ - \alpha, \alpha)$. Say $f_0 \in {\cal S}(\RR^+)$, and let $f(s) = s^l f_0(s)$. Suppose also that the Daubechies condition (\ref{daub}) holds. Then there exist constants $C > 0$ and $0 < b_0 < 1$ as follows:\\ Say $0 < b < b_0$. Suppose that, for each $j$, we can write ${\bf M}$ as a finite disjoint union of measurable sets $\{E_{j,k}: 1 \leq k \leq N_j\}$, and that (\ref{diamleq}), (\ref{measgeq}) hold. (In the notation of Theorem \ref{framainfr} of \cite{gm2}, this is surely possible if $c_0 \leq c_0^{\prime}$ and $\delta_0 \leq 2\delta$.) Select $x_{j,k} \in E_{j,k}$ for each $j,k$. For $t > 0$, let $K_t$ be the kernel of $f(t^2\Delta)$. Set \[ \varphi_{j,k}(y) = \overline{K}_{a^j}(x_{j,k},y).
\] By Lemma \ref{manmol}, there is a constant $C_0$ (independent of the choice of $b$ or the $E_{j,k}$), such that $\varphi_{j,k} \in C_0{\mathcal M}_{x_{j,k},a^j}$ for all $j,k$. Thus, we may form the summation operator $S$ with \begin{equation} \label{sumjopdfbes} SF = S_{\{\varphi_{j,k}\},\{\varphi_{j,k}\}}F = \sum_{j}\sum_k \mu(E_{j,k}) \langle F, \varphi_{j,k} \rangle \varphi_{j,k}, \end{equation} at first for $F \in C^1({\bf M})$, and Theorem \ref{sumopthm} applies. Then:\\ If $F \in B_{p,0}^{\alpha q}$, then the series in (\ref{sumjopdfbes}) converges in $B_{p,0}^{\alpha q}$, and $S: B_{p,0}^{\alpha q} \rightarrow B_{p,0}^{\alpha q}$ is bounded and {\em invertible}. We have $\|SF\|_{B_p^{\alpha q}} \leq C\|F\|_{B_p^{\alpha q}}$ and $\|S^{-1}F\|_{B_p^{\alpha q}} \leq C\|F\|_{B_p^{\alpha q}}$ for all $F \in B_{p,0}^{\alpha q}$. \end{theorem} {\bf Proof} By the last sentence of Lemma \ref{manmol}, if we let $K^0_t(x,y)$ be the kernel of $f_0(t^2\Delta)$, and if we let $\eta_{j,k}(y) = \overline{K}^0_{a^j}(x_{j,k},y)$ for $j \leq 0$, then for every $I,J$ there exists $C_{IJ}$ with $\eta_{j,k} \in C_{IJ}{\mathcal M}^{IJ}_{x_{j,k},a^j}$. Also (for instance, by looking at eigenfunction expansions, as in (\ref{kerexp}) of \cite{gmcw}), one has that $\varphi_{j,k} = (a^{2j}\Delta)^l \eta_{j,k}$. Thus, by Theorem \ref{sumopbes}, if $F \in B_p^{\alpha q}$, the series \begin{equation} \label{sumjopdfbesneg} \sum_{j \leq 0}\sum_k \mu(E_{j,k}) \langle F, \varphi_{j,k} \rangle \varphi_{j,k} := S'F \end{equation} converges in $B_{p,0}^{\alpha q}$, and $\|S'F\|_{B_{p,0}^{\alpha q}} \leq C\|F\|_{B_p^{\alpha q}}$. We shall next show that, if $F \in B_p^{\alpha q}$, then the series \begin{equation} \label{sprprf} \sum_{j > 0}\sum_k \mu(E_{j,k}) \langle F, \varphi_{j,k} \rangle \varphi_{j,k} \end{equation} also converges in $B_{p,0}^{\alpha q}$, and if we call the sum of this series $S''F$, then $\|S''F\|_{B_{p,0}^{\alpha q}} \leq C\|F\|_{B_p^{\alpha q}}$. Since $S = S' + S''$, this will tell us that $\|SF\|_{B_{p,0}^{\alpha q}} \leq C\|F\|_{B_p^{\alpha q}}$. More generally, let us show the following:\\ \ \\ (*) There exist $N, C_0$ (independent of our choices of $b$, $E_{j,k}$, $x_{j,k}$) as follows. Suppose $\psi_{j,k}, \Psi_{j,k}$ are smooth functions on ${\bf M}$ (for all $j > 0$ and $1 \leq k \leq N(j)$), which satisfy \begin{equation} \label{jposway} \|\psi_{j,k}\|_{C^N} \leq C_1 a^{-2j},\:\: \|\Psi_{j,k}\|_{C^N} \leq C_1 a^{-2j}, \end{equation} for all $j,k$, for some constant $C_1 > 0$. Then, if $F \in B_p^{\alpha q}$, the series \begin{equation} \label{jposway1} \sum_{j > 0}\sum_k \mu(E_{j,k}) \langle F, \psi_{j,k} \rangle \Psi_{j,k} \end{equation} converges in $B_{p,0}^{\alpha q}$, and if we call the sum of this series $S''F = S^{\prime \prime}_{\{\psi_{j,k}\},\{\Psi_{j,k}\}}F$, then \begin{equation} \label{jposway2} \|S''F\|_{B_{p,0}^{\alpha q} }\leq C_0 C_1\|F\|_{B_p^{\alpha q}}. \end{equation} Note that our $\varphi_{j,k}$ do satisfy (\ref{jposway}), since, as we noted in section 4 of \cite{gmcw}, $\lim_{t \rightarrow \infty} t K_{\sqrt t}(x,y) = 0$ in $C^{\infty}({\bf M} \times {\bf M})$.\\ To prove (*), we need only note that, by (\ref{ccNbes}) and a partition of unity argument, one has (for some $N$) a continuous inclusion $C^N({\bf M}) \subseteq B_p^{\alpha q}({\bf M})$. Also, since the inclusion $B_p^{\alpha q} \subseteq {\cal S}'(\RR^n)$ is continuous, we have a continuous inclusion $B_p^{\alpha q}({\bf M}) \subseteq {\cal E}'({\bf M})$.
In particular, for some $N$, \begin{equation} \label{bescn} \|G\|_{B_p^{\alpha q}} \leq C\|G\|_{C^N} \end{equation} for all $G \in C^N$, while \begin{equation} \label{besepcn} |\langle F, G \rangle| \leq C\|F\|_{B_p^{\alpha q}}\|G\|_{C^N} \end{equation} for all $F \in B_p^{\alpha q}$ and all $G \in C^{\infty}$. In particular, in (\ref{jposway1}), \[\sum_k \mu(E_{j,k}) |\langle F, \psi_{j,k} \rangle|\|\Psi_{j,k}\|_{C^N} \leq CC^2_1 a^{-4j}\|F\|_{B_p^{\alpha q}}\sum_k \mu(E_{j,k}) \leq CC^2_1 \mu({\bf M}) a^{-4j}\|F\|_{B_p^{\alpha q}}.\] Therefore the series in (\ref{jposway1}) converges absolutely in $C^N$, hence in $B_p^{\alpha q}$, and we have the estimate (\ref{jposway2}) as well.\\ To complete the proof, we return to the notation we used in the proof of Theorem \ref{framainfr} of \cite{gm2} (taking ``${\mathcal J}$'' in that proof to be $\ZZ$, and setting $Q = Q^{\ZZ}$). We wish to show that $Q$, when restricted to $C^{\infty}({\bf M})$, has a bounded extension to an operator $Q: B_p^{\alpha q} \rightarrow B_{p,0}^{\alpha q}$, and that, as operators on $B_p^{\alpha q}$, \[ \|Q-S\| \leq Cb \] (where as usual $C$ is independent of our choices of $b$, the $E_{j,k}$ and the $x_{j,k}$). Now, $C^{\infty}$ is dense in $B_p^{\alpha q}$ (for instance, by Theorem 7.1 (a) of \cite{FJ1}; the constructions in that paper show that the building blocks can be taken to be smooth). Thus it is enough to show that $\|(Q-S)F\| \leq Cb\|F\|$ for all $F \in C^{\infty}$ (where, until further notice, $\|\:\|$ means the $B_p^{\alpha q}$ norm). Say then that $F \in C^{\infty}$. As in the proof of Theorem \ref{framainfr} of \cite{gm2}, we may assume that $\delta_0 \leq \delta$. Again put $\Omega_b = \log_a(\delta_0/b)$. In the proof of Theorem \ref{framainfr} of \cite{gm2}, we have obtained a formula for $\langle (Q-S)F, F \rangle$. This expression may be polarized, and a formula for $(Q-S)F$ may be obtained from it. We see that $(Q-S)F = \sum_{i=1}^N I_i + II$, where in the notation of the proof of Theorem \ref{framainfr} of \cite{gm2}, \[ I_i = b \int_{\mathcal B} \int_0^1 [ S_{\{\varphi_{j,k}^{w,s}\},\{\psi_{j,k}^{w,s}\}}F + S_{\{\psi_{j,k}^{w,s}\},\{\varphi_{j,k}^{w,s}\}}F ] dsdw \] and where \[ II = \sum_{j \geq \Omega_b} \int_{\bf M} \langle F, \Phi_{x,a^j} \rangle \Phi_{\cdot,a^j} d\mu(x) + \sum_{j \geq \Omega_b} \sum_k \mu(E_{j,k}) \langle F, \varphi_{j,k} \rangle \varphi_{j,k}. \] (Note that $I_i$ and $II$ are functions. In the first term of $II$ we have represented the independent variable as $\cdot$.) In $I_i$ we note, from the explicit expressions for the $\varphi_{j,k}^{w,s}$ and $\psi_{j,k}^{w,s}$, and by using $K^0_t$ as in the first paragraph of this proof, that we can write $\varphi_{j,k}^{w,s} = (a^{-2j}\Delta)^l\eta_{j,k}^{w,s}$, $\psi_{j,k}^{w,s} = (a^{-2j}\Delta)^l\Psi_{j,k}^{w,s}$, where for any $I,J$ there exists $C_{IJ}$ with $\eta_{j,k}^{w,s}, \Psi_{j,k}^{w,s} \in C_{IJ}\mathcal{M}_{x_{j,k}, a^j}^{IJ}$. By what we have already seen in this proof, this implies that $\|I_i\| \leq Cb\|F\|$. Also, $I_i(1)=0$. In $II$ we note that, for some $C$, $\|\Phi_{z,a^j}\|_{C^N} \leq Ca^{-2j}$ (for all $j \geq \Omega_b$ and all $z \in {\bf M}$), while $\|\varphi_{j,k}\|_{C^N} \leq Ca^{-2j}$ (for all $j \geq \Omega_b$ and all $k$). By (\ref{bescn}) and (\ref{besepcn}), we see that \[ \|II\| \leq C \|F\| \sum_{j \geq \Omega_b}a^{-4j} \leq Cb \|F\|, \] for $0 < b < 1$, since $\Omega_b = \log_a(\delta_0/b)$. Moreover, $II(1) = 0$. We have, then, that for $F \in C^{\infty}$, $\|QF\| \leq \|SF\| + Cb\|F\|$. 
So $Q: B_p^{\alpha q} \rightarrow B_p^{\alpha q}$ is bounded. Also, we see that $\|Q-S\| \leq Cb$. Further, since $Q=S+\sum_{i=1}^N I_i + II$, we know $Q(1)=0$. To complete the proof, it suffices to show that $Q: B_{p,0}^{\alpha q} \rightarrow B_{p,0}^{\alpha q}$ is invertible. Indeed, if $\|\:\|$ now denotes the norm of an operator on $B_{p,0}^{\alpha q}$, we will then have that $\|I-Q^{-1}S\| \leq Cb\|Q^{-1}\|$, and the theorem will follow for $b$ sufficiently small. For $\lambda \in \RR$, let $H(\lambda) = \sum_{j=-\infty}^{\infty} |f|^2(a^{2j} \lambda^2) = \sum_{j=-\infty}^{\infty} (a^{j}\lambda)^{4l}g_0(a^j \lambda)$, if we write $g_0(\xi) = |f_0|^2(\xi^2)$. Since $g_0$ and all its derivatives are bounded and decay rapidly at $\infty$, $H$ is a smooth even function on $\RR^+ \setminus \{0\}$. By Daubechies' criterion, $1/H = G$, say, is a smooth even function on $\RR^+ \setminus \{0\}$. Now, let $u$ equal either $G$ or $H$. Note that $u(\lambda) = u(a^{-1}\lambda)$, so that $u$ is actually a bounded smooth function on $\RR^+ \setminus \{0\}$. Moreover, if $\lambda > 0$, we may choose an integer $m$ with $a^{m} \leq \lambda \leq a^{m+1}$. Since $u(\lambda) = u(a^{-m}\lambda)$, for any $k$ we have \[ |u^{(k)}(\lambda)| = a^{-km} |u^{(k)}(a^{-m}\lambda)| \leq a^{k}\lambda^{-k}M, \] where $M = \max_{1 \leq \lambda \leq a} |u^{(k)}(\lambda)|$. This implies that $\|\lambda^k u^{(k)}(\lambda)\|_{\infty} < \infty$ for any $k$. Now choose an even function $v \in C_c^{\infty}(\RR)$ with $\supp v \subseteq (-\lambda_1,\lambda_1)$ and $v \equiv 1$ in a neighborhood of $0$. (Here $\lambda_1$ is the smallest positive eigenvalue of $\Delta$). Then the even function $u_1 := (1-v)u \in C^{\infty}(\RR)$ is in $S_1^0(\RR)$, so $u_1(\sqrt{\Delta}) \in OPS^0_{1,0}({\bf M})$. Moreover, $u_1(\sqrt{\Delta}) = u(\sqrt{\Delta})$ on $C_0^{\infty}$ (:= $(I-P)C^{\infty}({\bf M})$). Recall that, here, $u= G$ or $H$; set $G_1 = (1-v)G$, $H_1 = (1-v)H$. Note further that $Q = H(\sqrt{\Delta}) = H_1(\sqrt{\Delta})$, first when acting on finite linear combinations of non-constant eigenfunctions of $\Delta$, then on $C_0^{\infty}$, since such finite linear combinations are dense in $C_0^{\infty}$, and the operators are bounded on $L^2$. Moreover $G_1(\sqrt{\Delta})H_1(\sqrt{\Delta}) = H_1(\sqrt{\Delta})G_1(\sqrt{\Delta}) = (H_1G_1)(\sqrt{\Delta}) = I$, first when acting on finite linear combinations of non-constant eigenfunctions of $\Delta$, then on $C_0^{\infty}$, and finally on $B_{p,0}^{\alpha q}$, since by \cite{P}, operators in $OPS^0_{1,0}({\bf M})$ are bounded on $B_p^{\alpha q}$. This shows that $Q$ is indeed invertible on $B_{p,0}^{\alpha q}$, and establishes the theorem.\\ In Theorem \ref{besmain1}, we have again required the condition (\ref{measgeq}), that $\mu(E_{j,k}) \geq c_0(ba^j)^n$, whenever $ba^j < \delta_0$. In the next theorem, if $0 < p < 1$, we will require a mild condition on $\mu(E_{j,k})$ if $ba^j \geq \delta_0$, namely that \begin{equation} \label{measgeq2} \mu(E_{j,k}) \geq {\cal C} \mbox{ whenever } ba^j \geq \delta_0, \end{equation} (for some ${\cal C} > 0$). Sets $E_{j,k}$ which satisfy (\ref{diamleq}), (\ref{measgeq}) and (\ref{measgeq2}) are easily constructed. Indeed, set $t = ba^j/2 \geq \delta_0/2$, and, as in the second bullet point prior to Theorem \ref{framainfr} of \cite{gm2}, select a finite covering of ${\bf M}$ by disjoint measurable sets $E_1,\ldots,E_N$, such that whenever $1 \leq k \leq N$, there is a $y_k \in {\bf M}$ with $B(y_k,t) \subseteq E_k \subseteq B(y_k,2t)$.
Then, by (\ref{ballsn}) and (\ref{ballsn1}), there is a constant $c_0^{\prime}$, depending only on ${\bf M}$, such that $\mu(E_k) \geq c_0^{\prime}\min(\delta^n,(\delta_0/2)^n)$, as desired. \begin{theorem} \label{besmain2} Say $c_0 , \delta_0, M_0, {\cal C} > 0$. Fix an integer $l \geq 1$ with $2l > \max(n(1/p-1)_+ - \alpha, \alpha)$. Say $f_0 \in {\cal S}(\RR^+)$, and let $f(s) = s^l f_0(s)$. Suppose also that the Daubechies condition (\ref{daub}) holds. Then there exist constants $C_1 > 0$ and $0 < b_0 < 1$ as follows:\\ Say $0 < b < b_0$. Then there exists a constant $C_2 > 0$ as follows:\\ Suppose that, for each $j$, we can write ${\bf M}$ as a finite disjoint union of measurable sets $\{E_{j,k}: 1 \leq k \leq N_j\}$, and that (\ref{diamleq}), (\ref{measgeq}) hold. If $0 < p < 1$, we suppose that (\ref{measgeq2}) holds as well. Select $x_{j,k} \in E_{j,k}$ for each $j,k$. For $t > 0$, let $K_t$ be the kernel of $f(t^2\Delta)$. Set \[ \varphi_{j,k}(y) = \overline{K}_{a^j}(x_{j,k},y). \] Suppose $F$ is a distribution on ${\bf M}$ of order at most $M_0$, and that $F1=0$. Then the following are equivalent:\\ (i) $F \in B_{p,0}^{\alpha q}$;\\ (ii) $(\sum_{j = -\infty}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k}) |\langle F, \varphi_{j,k} \rangle |^p]^{q/p})^{1/q} < \infty$. Further\\ \begin{equation} \label{nrmeqiv1} \|F\|_{B_p^{\alpha q}}/C_2 \leq (\sum_{j = -\infty}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k}) |\langle F, \varphi_{j,k} \rangle |^p]^{q/p})^{1/q} \leq C_1\|F\|_{B_p^{\alpha q}}. \end{equation} Moreover, if $p \geq 1$, then $C_2$ may be chosen to be independent of the choice of $b$ with $0 < b < b_0$. \end{theorem} {\bf Proof} Note first that, if $\eta_{j,k}$ is as in the first paragraph of the proof of Theorem \ref{besmain1}, then as we noted there for every $I,J$ there exists $C_{IJ}$ with $\eta_{j,k} \in C_{IJ}{\mathcal M}^{IJ}_{x_{j,k},a^j}$, and $\varphi_{j,k} = (a^{2j}\Delta)^l \eta_{j,k}$. Say now that $F \in B_{p,0}^{\alpha q}$. As we noted in the first paragraph of the proof of Theorem \ref{sumopbes}, it follows from Lemma \ref{besov1} that \[ (\sum_{j = -\infty}^{0} a^{-j\alpha q} [\sum_k \mu(E_{j,k}) |\langle F, \varphi_{j,k} \rangle |^p]^{q/p})^{1/q} \leq C\|F\|_{B_p^{\alpha q}}. \] As for the terms in (ii) with $j > 0$, we note that, as in section 4 of \cite{gmcw}, $\lim_{t \rightarrow \infty} t^M K_{\sqrt t}(x,y) = 0$ in $C^{\infty}({\bf M} \times {\bf M})$, for any $M$. Consequently, for any $M,N$, \begin{equation} \label{vphcnm} \|\varphi_{j,k}\|_{C^N} \leq C a^{-Mj}. \end{equation} We choose any $M > -\alpha$. Then, by (\ref{bescn}) and (\ref{besepcn}), we see that \[ (\sum_{j = 1}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k}) |\langle F, \varphi_{j,k} \rangle |^p]^{q/p})^{1/q} \leq C\|F\|_{B_p^{\alpha q}} \mu({\bf M})^{1/p} (\sum_{j = 1}^{\infty} a^{-j\alpha q - jMq})^{1/q} \leq C\|F\|_{B_p^{\alpha q}}. \] This proves that $(i) \Rightarrow (ii)$, and also establishes the rightmost inequality in (\ref{nrmeqiv1}). Say, conversely, that $(ii)$ holds. Our first step will be to show that the series for $SF$ (as in (\ref{sumjopdfbes})) converges in $B_{p,0}^{\alpha q}$. For this, it will be enough to show that the series for $S'F$ and $S''F$ (as in (\ref{sumjopdfbesneg}) and (\ref{sprprf})) each converge in $B_{p,0}^{\alpha q}$.
Lemma \ref{besov2}, with $s_{j,k}$ in that Lemma being our $\langle F, \varphi_{-j,k} \rangle$, implies that the series for $S'F$ does converge in $B_{p,0}^{\alpha q}$, and, moreover, that \begin{equation} \label{sprfiii} \|S'F\|_{B_p^{\alpha q}} \leq C(\sum_{j = -\infty}^{0} a^{-j\alpha q} [\sum_k \mu(E_{j,k}) |\langle F, \varphi_{j,k} \rangle |^p]^{q/p})^{1/q}. \end{equation} (If $0 < p < 1$, we need to note that, since we are assuming (\ref{measgeq}) and (\ref{measgeq2}), we have that (\ref{ejkro}) holds for some $\rho > 0$. Since $\rho$ depends on $b$, the constant $C$ in (\ref{sprfiii}) depends on $b$ as well, if $0 < p < 1$.) As for $S''F$, by (\ref{bescn}), it is enough to show that the series for it converges absolutely in $C^N$ (for any fixed $N$). By (\ref{vphcnm}), we need only show:\\ \ \\ (*) Suppose that $\{r_{j,k}: 1 \leq j < \infty, 1 \leq k \leq N(j)\}$ are constants. If $M$ is sufficiently large, then \begin{equation} \label{cnmjpq} \sum_{j=1}^{\infty}a^{-Mj} \sum_k \mu(E_{j,k})|r_{j,k}| \leq C(\sum_{j = 1}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k}) |r_{j,k}|^p]^{q/p})^{1/q}. \end{equation} Here $C$ depends only on $c_0, \delta_0$ if $p \geq 1$ and only on $c_0, \delta_0, b$ if $0 < p < 1$.\\ \ \\ To see this, let $a_j = \sum_k \mu(E_{j,k})|r_{j,k}|$, $d_j = [\sum_k \mu(E_{j,k}) |r_{j,k}|^p]^{1/p}$; we begin by showing that $a_j \leq C d_j$. If $p \geq 1$, we let $Y_j$ be the finite measure space $\{1,\ldots,N_j\}$ with measure $\lambda$, where $\lambda(\{k\}) = \mu(E_{j,k})$. Applying H\"older's inequality on $Y_j$, we find that $a_j \leq \mu({\bf M})^{1/p'} d_j$, as claimed. If, instead, $0 < p < 1$, the $p$-triangle inequality implies that \[ a_j^p \leq \sum_k \mu(E_{j,k})^p|r_{j,k}|^p \leq C\sum_k \mu(E_{j,k}) |r_{j,k}|^p = Cd_j^p \] where now $C$ depends on $b$. (We have noted that, if $ba^j < \delta_0$ then $1 \leq \mu(E_{j,k})^{1-p}/[c_0 (ba^j)^n]^{1-p} \leq \mu(E_{j,k})^{1-p}/[c_0 b^n]^{1-p}$ by (\ref{measgeq}), while if $ba^j \geq \delta_0$, then $1 \leq \mu(E_{j,k})^{1-p}/[{\cal C}]^{1-p}$ by (\ref{measgeq2}).) To prove (\ref{cnmjpq}), it is enough, then, to show that, for $M$ sufficiently large, \begin{equation} \label{cnmjpq1} \sum_{j=1}^{\infty}a^{-Mj} a_j \leq C(\sum_{j = 1}^{\infty} a^{-j\alpha q} a_j^q)^{1/q}, \end{equation} for the right side of (\ref{cnmjpq1}) is less than or equal to $C(\sum_{j = 1}^{\infty} a^{-j\alpha q} d_j^q)^{1/q}$, as we have seen, which gives (\ref{cnmjpq}) at once. But (\ref{cnmjpq1}) is true for {\em any} nonnegative constants $a_j$ (for $M$ sufficiently large), for the following reason. If $0 < q \leq 1$, the $q$-triangle inequality tells us that $(\sum_{j=1}^{\infty}a^{-Mj} a_j)^q \leq \sum_{j=1}^{\infty}a^{-Mjq} a_j^q \leq \sum_{j = 1}^{\infty} a^{-j\alpha q} a_j^q$ (the last step holding as soon as $M \geq \alpha$), as claimed. On the other hand, if $q > 1$, and $(-M + \alpha)q' < -1$, then (\ref{cnmjpq1}) follows by writing $a^{-Mj} a_j = a^{(-M+\alpha)j}(a^{-\alpha j}a_j)$, and using H\"older's inequality. In all, then, the series for $SF$ converges in $B_{p,0}^{\alpha q}$. Moreover, if we call the sum of this series $S_0F$, we have from (\ref{sprfiii}) and (\ref{cnmjpq}) that \begin{equation} \label{sofbpq} \|S_0F\|_{B_p^{\alpha q}} \leq C(\sum_{j = -\infty}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k}) |\langle F, \varphi_{j,k} \rangle |^p]^{q/p})^{1/q}, \end{equation} with $C$ independent of $b$ if $p \geq 1$. To complete the proof we need only prove that if (ii) holds, and $b$ is sufficiently small, then $F \in B_{p,0}^{\alpha q}$. For then we will surely have that $S_0 F = SF$.
By Theorem \ref{besmain1}, $S: B_{p,0}^{\alpha q} \rightarrow B_{p,0}^{\alpha q}$ is invertible, with $\|S^{-1}\|$ independent of $b$ (for $b$ sufficiently small). Thus the leftmost inequality in (\ref{nrmeqiv1}) will follow from (\ref{sofbpq}), with $C_2$ independent of $b$ if $p \geq 1$ (and $b$ is sufficiently small). We have assumed that $F$ is a distribution of order at most $M_0$, and we claim that this implies that $F \in B_p^{\gamma q}$ for some $\gamma \in \RR$. To see this, let $\beta_{\nu}$ be as in the proof of Lemma \ref{besov2}; we need to show that $\{ 2^{\nu\gamma}\|\beta_{\nu}(\Delta)F\|_p: -1 \leq \nu \leq \infty\}$ is in $l^q$ for some $\gamma \in \RR$. As in the proof of Lemma \ref{besov2}, for $\nu \geq -1$, let $J_{\nu}$ be the kernel of $\beta_{\nu}(\Delta)$. The arguments in the second paragraph of the proof of Lemma \ref{besov2} (specifically (\ref{jliest}), with $I = 0$, in which case $J_{\nu}^I = J_{\nu}$), show that $\|J_{\nu}\|_{C^{M_0}({\bf M} \times {\bf M})} \leq C_02^{\nu(n+M_0)}$ for some $C_0 > 0$. For any fixed $x \in {\bf M}$, let $J_{\nu,x}(y) = J_{\nu}(x,y)$. Then, for any $x \in {\bf M}$, \[ |[\beta_{\nu}(\Delta)F](x)| = |F(J_{\nu,x})| \leq C_12^{\nu(n+M_0)}, \] for some $C_1 > 0$. Thus \[ \|\beta_{\nu}(\Delta)F\|_p \leq C_1\mu({\bf M})^{1/p} 2^{\nu(n+M_0)}. \] Therefore $\{ 2^{\nu\gamma}\|\beta_{\nu}(\Delta)F\|_p: -1 \leq \nu \leq \infty\}$ is in $l^q$ if $\gamma + n + M_0 < 0$, that is, if $\gamma < -n -M_0$. Fix $\gamma < \min(-n-M_0, \alpha)$; then $F \in B_p^{\gamma q}$, and $\gamma \leq \alpha$. In fact, since we are assuming that $F(1) = 0$, $F \in B_{p,0}^{\gamma q}$. \\ By Theorem \ref{besmain1}, we may choose $b_0$ sufficiently small that $S: B_{p,0}^{\alpha q} \rightarrow B_{p,0}^{\alpha q}$ and $S: B_{p,0}^{\gamma q} \rightarrow B_{p,0}^{\gamma q}$ are both invertible if $0 < b < b_0$. For such $b$, since $S_0F \in B_{p,0}^{\alpha q} \subseteq B_{p,0}^{\gamma q}$, there is a unique $F_1 \in B_{p,0}^{\alpha q}$ with $SF_1 = S_0F$, and a unique $F_2 \in B_{p,0}^{\gamma q}$ with $SF_2 = S_0F$. Now $S_0F$ is the sum (in $B_{p,0}^{\alpha q}$) of the series in (\ref{sumjopdfbes}). Since $F \in B_{p,0}^{\gamma q}$, that series converges in $B_{p,0}^{\gamma q}$ to $SF$. Thus $S_0F = SF$ (as elements of $B_{p,0}^{\gamma q}$), so $F_2 = F$. Also $F_1 \in B_{p,0}^{\alpha q} \subseteq B_{p,0}^{\gamma q}$, and $SF_1 = S_0F$, so $F_1 = F_2 = F$. But then $F = F_1 \in B_{p,0}^{\alpha q}$, as desired. This completes the proof. \begin{theorem} \label{besmain3} Say $c_0 , \delta_0, {\cal C} > 0$. Fix an integer $l \geq 1$ with $2l > \max(n(1/p-1)_+ - \alpha, \alpha)$. Say $f_0 \in {\cal S}(\RR^+)$, and let $f(s) = s^l f_0(s)$. Suppose also that the Daubechies condition (\ref{daub}) holds. Then there exist constants $C_1 > 0$ and $0 < b_0 < 1$ as follows:\\ Say $0 < b < b_0$. Then there exists a constant $C_2 > 0$ as follows:\\ Suppose that, for each $j$, we can write ${\bf M}$ as a finite disjoint union of measurable sets $\{E_{j,k}: 1 \leq k \leq N_j\}$, and that (\ref{diamleq}), (\ref{measgeq}) hold. If $0 < p < 1$, we assume that (\ref{measgeq2}) holds as well. Select $x_{j,k} \in E_{j,k}$ for each $j,k$. For $t > 0$, let $K_t$ be the kernel of $f(t^2\Delta)$. Set \[ \varphi_{j,k}(y) = \overline{K}_{a^j}(x_{j,k},y).
\] Then:\\ If $F \in B_{p,0}^{\alpha q}$, there exist constants $r_{j,k}$ with $(\sum_{j = -\infty}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k})|r_{j,k}|^p]^{q/p})^{1/q} < \infty$ such that \begin{equation} \label{dfexp} F = \sum_{j=-\infty}^{\infty} \sum_k \mu(E_{j,k}) r_{j,k} \varphi_{j,k}, \end{equation} with convergence in $B_p^{\alpha q}$. Further\\ \begin{equation} \label{nrmeqiv2} \|F\|_{B_p^{\alpha q}}/C_2 \leq \inf\{(\sum_{j = -\infty}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k})|r_{j,k}|^p]^{q/p})^{1/q}:(\ref{dfexp}) \mbox{ holds}\} \leq C_1\|F\|_{B_p^{\alpha q}}. \end{equation} Moreover, if $p \geq 1$, then $C_2$ may be chosen to be independent of the choice of $b$ with $0 < b < b_0$. \end{theorem} {\bf Proof} We choose $b_0 > 0$ sufficiently small that $S: B_{p,0}^{\alpha q} \rightarrow B_{p,0}^{\alpha q}$ is invertible for $0 < b < b_0$. For such $b$, if $F \in B_{p,0}^{\alpha q}$, then $S^{-1}F \in B_{p,0}^{\alpha q}$, so \begin{equation} \label{ssinvf} F = S(S^{-1}F) = \sum_{j}\sum_k \mu(E_{j,k}) \langle S^{-1}F, \varphi_{j,k} \rangle \varphi_{j,k}, \end{equation} so that (\ref{dfexp}) holds with $r_{j,k} = \langle S^{-1}F, \varphi_{j,k} \rangle$. By Theorem \ref{besmain2} and Theorem \ref{besmain1}, $(\sum_{j = -\infty}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k})|r_{j,k}|^p]^{q/p})^{1/q} \leq C_1^{\prime} \|S^{-1}F\|_{B_p^{\alpha q}} \leq C_1\|F\|_{B_p^{\alpha q}}.$ That leaves only the leftmost inequality in (\ref{nrmeqiv2}) to prove. We need only show that, for any $F$ as in (\ref{dfexp}) (convergence in $B_p^{\alpha q}$), we have the inequality \begin{equation} \label{frjkbpq} \|F\|_{B_p^{\alpha q}} \leq C_2(\sum_{j = -\infty}^{\infty} a^{-j\alpha q} [\sum_k \mu(E_{j,k})|r_{j,k}|^p]^{q/p})^{1/q}, \end{equation} with $C_2$ independent of $0 < b < b_0$ if $p \geq 1$. It is enough to do this in each of two cases: (i) if $r_{j,k} = 0$ whenever $j > 0$; and (ii) if $r_{j,k} = 0$ whenever $j \leq 0$. Case (i) follows at once from Lemma \ref{besov2} (and, if $0 < p < 1$, the hypotheses (\ref{measgeq}), (\ref{measgeq2}), which imply (\ref{ejkro})). Case (ii) follows from (*) of the proof of Theorem \ref{besmain2}, since the inequality (\ref{cnmjpq}) there shows that $\|F\|_{C^N}$ is less than or equal to the right side of (\ref{frjkbpq}), for any fixed $N$. This completes the proof. \ \\ STONY BROOK UNIVERSITY \end{document}
\begin{document} \begin{abstract} Fix an irrational number $\alpha$, and consider a random walk on the circle in which at each step one moves to $x+\alpha$ or $x-\alpha$ with probabilities $1/2, 1/2$, provided the current position is $x$. If an observable is given, we can study a process called an additive functional of this random walk. One can formulate certain relations between the regularity of the observable and the Diophantine properties of $\alpha$ which imply the central limit theorem. It is proven here that for every Liouville angle there exists a smooth observable such that the central limit theorem fails. We also construct a Liouville angle such that the central limit theorem fails for some analytic observable. For Diophantine angles a counterexample is given as well. An interesting question remains open. \end{abstract} \maketitle \section{Introduction} Fix $\alpha\in \mathbb R$, and consider a Markov process $(Y_n^\alpha)_{n\ge 1}$ defined on some probability space $(\Omega, \mathcal F, \mathbb P)$ with the evolution governed by the transition kernel \begin{equation}\label{E:1.1} p(x, \cdot ) = \frac 1 2 \delta_{x+\alpha} + \frac 12 \delta_{x-\alpha}, \quad p : \mathbb S^1 \times \mathcal B (\mathbb S^1 ) \to [0,1], \end{equation} whose initial distribution, i.e. the distribution of $Y_1^\alpha$, is the Lebesgue measure (here $\mathcal B (\mathbb S^1 )$ stands for the $\sigma$-algebra of Borel subsets of $\mathbb S^1$). One can easily verify that the process is stationary. More work is needed to show that the Lebesgue measure is the unique possible choice for the law of $Y_1^\alpha$ to make the process stationary (see e.g. Theorem 7 and Remark 8 in \cite{Szarek_Zdunik_2016b}). In particular $(Y_n^\alpha)$ is ergodic, which means that if $A\in \mathcal B (\mathbb S^1 )$ is such that $p(x, A)=1$ for Lebesgue a.e. $x\in A$, then $A$ is of Lebesgue measure $0$ or $1$ (see e.g. Section 5 in \cite{Hairer_2006}, page 37, for characterizations of ergodicity and the relation to the notion of ergodicity in dynamical systems). This paper is devoted to the central limit theorem (\textbf{CLT} for short) for additive functionals of $(Y_n^\alpha)$, i.e. processes of the form $\big(\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)\big)$, where a function $\varphi : \mathbb S^1 \to \mathbb R$ is usually called an observable. For convenience we assume that $\int \varphi(x)dx=0$. We say that \textbf{CLT} holds for the process if $$\frac{\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)}{\sqrt n} \Rightarrow \mathcal N (0, \sigma) \quad \textrm{as $n\to \infty$}$$ for some $\sigma>0$. The validity of \textbf{CLT} depends on Diophantine properties of $\alpha$. An angle $\alpha$ is called Diophantine of type $(c,\gamma)$, $c>0$, $\gamma\ge 2$ if \begin{equation}\label{diophantine} \bigg|\alpha - \frac p q \bigg| \ge \frac{c}{q^\gamma} \quad \textrm{for all $p, q\in \mathbb Z$, $q\not=0$.} \end{equation} An angle $\alpha$ is Liouville if it is not Diophantine of type $(c,\gamma)$ for any choice of $c>0$, $\gamma \ge 2$. These and similar processes have been widely studied in the literature. \begin{itemize} \item Kesten \cite{Kesten_1960, Kesten_1961} investigated the limit distribution of $$D_N(x,\alpha)=\sum_{n=0}^{N-1} \varphi(x+n\alpha) - N\int_{\mathbb{S}^1}\varphi(x)dx,$$ where $\varphi$ is the characteristic function of some interval and $(x,\alpha)$ is uniformly distributed in $\mathbb S^1\times \mathbb S^1$.
This was later generalized to higher dimensions by Dolgopyat and Fayad \cite{Dolgopyat_Fayad_2014, Dolgopyat_Fayad_2020}. \item Sinai and Ulcigrai \cite{Sinai_Ulcigrai_2008} considered a similar problem when $\varphi$ is a non-integrable meromorphic function. \item In the above examples a point in the space is chosen randomly, and thus one speaks of a spatial \textbf{CLT}. One can also fix a point in the space $x\in \mathbb S^1$, an angle $\alpha$ and, given $N$, pick randomly an integer $n\in [1, N]$. The question arises what the limit distribution of $D_n(x,\alpha)$ is as $N$ grows. Limit theorems of this kind are called temporal. The first limit theorem of this flavour was proven by Beck \cite{Beck_2010, Beck_2011}. For further development see e.g. \cite{Dolgopyat_Sarig_2017}, \cite{Bromberg_Ulcigrai_2018}, \cite{Dolgopyat_Sarig_2020}. \item Sinai \cite{Sinai_1999} considered a situation where one draws $+\alpha$ or $-\alpha$ with a probability distribution depending on the position in the circle (the method was to study a related random walk in random environment). He proved the unique ergodicity and stability of the process when $\alpha$ is Diophantine. Recently Dolgopyat et al. \cite{Dolgopyat_Fayad_Saprykina_2021} studied the behaviour in the Liouvillean case. \item Borda \cite{Borda_2021} considered an even more general situation where several angles are given and one chooses one of them randomly. Given $p\in (0,1]$, he formulated certain Diophantine conditions implying \textbf{CLT} for all $\varphi$ in the class of $p$-H{\"o}lder functions. Thus the author was concerned with what assumptions one should put on the angles of rotation to ensure \textbf{CLT} for all observables in a given class. \end{itemize} The situation here resembles the one from the last point, but we rather address the question of how regular an observable should be for \textbf{CLT} to hold when $\alpha$ is given. Namely, using a celebrated result of Kipnis and Varadhan \cite{Kipnis_Varadhan_1986}, we prove the following statement. \begin{prop}\label{P:1} Let us assume $\alpha$ to be Diophantine of type $(c,\gamma)$, $\gamma \ge 2$. If a non-constant function $\varphi \in C^{r}$, $r>\gamma-1/2$ (possibly $r=\infty$), is such that $\int\varphi(x)dx=0$ then there exists $\sigma>0$ such that $$\frac{\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)}{\sqrt{n}} \Rightarrow \mathcal N (0, \sigma).$$ In particular, \textbf{CLT} holds if $\alpha$ is Diophantine of an arbitrary type and $\varphi$ is smooth. \end{prop} \noindent The result is included for the sake of completeness, not because of novelty. This (or a slightly different) statement has been proven independently by several authors using various methods related to harmonic analysis (section 8 in \cite{Derriennic_Lin_2001}, section 7.5 in \cite{Weber_2009}, \cite{Zdunik_2017}, \cite{Borda_2021}). By Proposition \ref{P:1} \textbf{CLT} holds if $\varphi$ is smooth and $\alpha$ is Diophantine of an arbitrary type. It is then natural to ask whether for every Liouville $\alpha$ there exists a smooth $\varphi$ for which \textbf{CLT} fails. It is also natural to ask whether \textbf{CLT} can fail for analytic observables. This leads us to the following theorems, showing a dichotomy between the behaviour of Liouville and Diophantine random rotations, similar to the one appearing in smooth conjugacy results for circle diffeomorphisms (see the beginning of Chapter I.3 in \cite{deMelo_vanStrien_1993}).
\begin{thm}\label{T:1} There exists an irrational $\alpha$ and $\varphi \in C^\omega(\mathbb S^1)$ such that \textbf{CLT} fails. \end{thm} \noindent Note that by Proposition \ref{P:1} the angle in the assertion must be Liouville. \begin{thm}\label{T:2} Let $\alpha$ be an irrational number. Let us assume there exist $c>0$, $\gamma>5$ such that $$\bigg|\alpha - \frac{p}{q} \bigg| \le \frac{c}{q^\gamma} \quad \textrm{for infinitely many $p,q \in \mathbb Z$, $q\not = 0$.} $$ Let $r$ be the largest positive integer with $r<\frac{\gamma}{2}-\frac 3 2$. Then there exists $\varphi \in C^r$ such that \textbf{CLT} fails. \end{thm} \noindent The only reason for making the assumption $\gamma>5$ is to ensure that $\frac{\gamma}{2}-\frac 3 2$ is greater than $1$, so that the condition $r<\frac{\gamma}{2}-\frac 3 2$ is satisfied for at least one positive integer $r$. A slightly changed proof of Theorem \ref{T:2} yields the following. \begin{thm}\label{T:3} Let $\alpha$ be Liouville. Then there exists $\varphi\in C^\infty(\mathbb S^1)$ such that \textbf{CLT} fails. \end{thm} Let us end this section with an interesting open problem. An angle $\alpha$ is called badly approximable when it is Diophantine of type $(c, 2)$ for some $c>0$ (for instance, every quadratic irrational is badly approximable). Proposition \ref{P:1} yields that if $\varphi$ is $C^2$ then the additive functional satisfies \textbf{CLT}. Unfortunately, Theorem \ref{T:2} does not give any counterexample in that case. This leads to a natural question: does \textbf{CLT} hold if $\alpha$ is badly approximable (e.g. $\alpha$ is the golden ratio) and $\varphi$ is $C^1$? \section{The Poisson equation and central limit theorem}\label{S:3} One of the methods of proving \textbf{CLT} for additive functionals of Markov chains is the Gordin-Lif\v{s}ic method \cite{Gordin_Lifsic_1978}, which is roughly explained in the present section (note that in \cite{Weber_2009}, \cite{Zdunik_2017}, \cite{Borda_2021} different techniques have been used). Before that let us define the operator $$ T\varphi(x)=\frac 1 2 \varphi(x+\alpha) + \frac 12 \varphi(x-\alpha), \quad \varphi\in B(\mathbb S^1), \ T: B(\mathbb S^1)\rightarrow B(\mathbb S^1), $$ where $B(\mathbb S^1)$ is the space of Borel measurable functions. By the very definition of a Markov process, if $(Y_n^\alpha)$ is defined on $(\Omega, \mathcal F, \mathbb P)$ then \begin{equation}\label{E:dual} \mathbb E ( \varphi(Y^\alpha_{n+1} ) | Y_n^\alpha ) = \int_{\mathbb{S}^1} p( Y_n^\alpha, dy) \varphi(y) = T\varphi (Y_n^\alpha), \quad n\ge 1, \end{equation} where $p$ is the transition function (\ref{E:1.1}). Let $\varphi : \mathbb S^1 \rightarrow \mathbb R$ be a square integrable function (with respect to the Lebesgue measure) with $\int \varphi(x)dx=0$. To show the convergence of $\frac{1}{\sqrt{n}}(\varphi(Y_1)+\cdots+\varphi(Y_n))$ to the normal distribution we solve the so-called Poisson equation\footnote{In dynamical systems theory this equation (with $T$ replaced by a Koopman operator) is called a cohomological equation. The name ``Poisson equation'' is more common in the theory of stochastic processes, probably due to the fact that writing down the corresponding equation for the Brownian motion, which is a continuous time Markov process, gives $\frac 1 2 \Delta \psi = \varphi$, where $\Delta$ is the Laplace operator. Note that $\frac 1 2 \Delta$ is the infinitesimal generator of the Brownian motion.} $T\psi - \psi =\varphi$, where $\psi\in L^2(\mathbb S^1)$ is unknown.
If the solution $\psi$ exists then we can write $$\varphi(Y_1)+\cdots+\varphi(Y_n)$$ \begin{equation}\label{E:P1.1} =\big[(T\psi(Y_1)-\psi(Y_2))+\cdots+(T\psi(Y_{n-1})-\psi(Y_n))\big]+(T\psi(Y_n) - \psi (Y_1)). \end{equation} When divided by $\sqrt{n}$, the second term tends to zero in probability. It is sufficient then to show \textbf{CLT} for the first process, which is an ergodic, stationary martingale by (\ref{E:dual}). For such processes \textbf{CLT} is valid (see \cite{Brown_1971}). Thus the assertion follows provided the solution of the Poisson equation exists. Observe that $(I-T) u_n=(1-\cos(2\pi n \alpha)) u_n$ for $u_n(x)=\exp(2\pi i n x)$, $x\in \mathbb S^1$, $n\in\mathbb Z$. Therefore the trigonometric system $(u_n)_{n\in\mathbb Z}$ is also the orthonormal system of eigenvectors of $I-T$ with corresponding eigenvalues $1-\cos(2\pi n \alpha)$, $n\in\mathbb Z$. We deduce that the $n$-th Fourier coefficient of $(I-T)\psi$, $\psi \in L^2(\mathbb S^1)$, is of the form $(1-\cos(2\pi n \alpha))\hat{\psi}(n)$, $n\in \mathbb Z$. This yields a recipe to find $\psi$ when $\varphi$ is given. Namely, $\psi$ should be a square integrable function whose Fourier coefficients are \begin{equation}\label{fourier} \hat{\psi}(n) = \frac{-\hat{\varphi}(n)}{1-\cos(2\pi\alpha n)}, \quad n\in\mathbb Z \setminus \{0\}, \end{equation} while $\hat{\psi}(0)$ is an arbitrary real number. Note that we also use here the assumption that $\hat{\varphi}(0)=\int\varphi(x)dx=0$. Indeed, $1-\cos(0)=0$ implies that we must have $\hat{\varphi}(0)=0$ to solve the equation. What remains is to show the convergence \begin{equation}\label{condition1} \sum_{n\in\mathbb Z\setminus \{0\}} \frac{|\hat{\varphi}(n)|^2}{(1-\cos(2\pi\alpha n))^2}<\infty, \end{equation} to make sure that the object with Fourier coefficients (\ref{fourier}) is indeed a square integrable function. In fact the solution of the Poisson equation does not have to exist for \textbf{CLT} to hold. Note that the processes under consideration are reversible, which means that the distributions of the random vectors $(Y^\alpha_1, \ldots, Y^\alpha_n)$ and $(Y^\alpha_n, \ldots, Y^\alpha_1)$ are the same for every natural $n$ or, equivalently, that the operator $T$ is self-adjoint. In the celebrated paper \cite{Kipnis_Varadhan_1986} (see Theorem 1.3 therein) the authors proved that the condition $\varphi\in \textrm{Im}(I-T)$ can be relaxed to \begin{equation}\label{Kipnis_Varadhan} \varphi \in \textrm{Im}(\sqrt{I-T}), \end{equation} where $\sqrt{I-T}$ is the square root of $I-T$ (recall that the square root of a positive semidefinite, self-adjoint operator $P$ acting on a Hilbert space is the operator $\sqrt{P}$ with the property $(\sqrt{P})^2=P$). Since the $n$-th Fourier coefficient of the function $(I-T)\psi$, $\psi \in L^2(\mathbb S^1)$ is given by $(1-\cos(2\pi n \alpha))\hat{\psi}(n)$, we easily deduce that $\sqrt{I-T}$ is well defined on $L^2(\mathbb S^1)$ and the $n$-th Fourier coefficient of the function $\sqrt{I-T}\psi$, $\psi \in L^2(\mathbb S^1)$, is given by $\sqrt{1-\cos(2\pi n \alpha)}\hat{\psi}(n)$. Thus (\ref{Kipnis_Varadhan}) leads to the condition \begin{equation}\label{condition2} \sum_{n\in\mathbb Z \setminus \{0\}} \frac{|\hat{\varphi}(n)|^2}{1-\cos(2\pi\alpha n)}<\infty, \end{equation} weaker than (\ref{condition1}).
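For illustration, consider the single-frequency observable $\varphi(x)=\cos(2\pi m x)$ with a fixed $m\neq 0$ (this simple example only serves to make the above conditions concrete; it is subsumed by Proposition \ref{P:2} below). Here $\hat{\varphi}(\pm m)=\frac 1 2$ and all other Fourier coefficients vanish, so both (\ref{condition1}) and (\ref{condition2}) reduce to finitely many terms and hold for every irrational $\alpha$. In fact the Poisson equation is then solved explicitly by the trigonometric polynomial $$\psi(x)=-\frac{\cos(2\pi m x)}{1-\cos(2\pi m \alpha)},$$ since $T\psi=\cos(2\pi m\alpha)\,\psi$. The difficulties studied in this paper appear only when infinitely many frequencies $n$ with $n\alpha$ very close to $\mathbb Z$ carry non-negligible coefficients $\hat{\varphi}(n)$.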
Moreover, \cite{Kipnis_Varadhan_1986} (see (1.1) therein) delivers a formula for $\sigma$, which reads here as $$\sigma^2=\sum_{n\in \mathbb Z \setminus \{0\}} \frac{1+\cos(2\pi \alpha n)}{1-\cos(2\pi \alpha n)} |\hat{\varphi}(n)|^2.$$ Clearly, $\sigma^2<\infty$ if (\ref{condition2}) is satisfied and $\sigma>0$ if $\varphi$ is non-constant. We are now in a position to prove Proposition \ref{P:1}. We recall the statement for the convenience of the reader. \setcounter{prop}{0} \begin{prop} Let us assume $\alpha$ to be Diophantine of type $(c,\gamma)$, $\gamma \ge 2$. If a non-constant function $\varphi \in C^{r}$, $r>\gamma-1/2$ (possibly $r=\infty$), is such that $\int\varphi(x)dx=0$ then there exists $\sigma>0$ such that $$\frac{\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)}{\sqrt{n}} \Rightarrow \mathcal N (0, \sigma).$$ In particular, \textbf{CLT} holds if $\alpha$ is Diophantine of an arbitrary type and $\varphi$ is smooth. \end{prop} \begin{proof} We are going to prove that (\ref{condition2}) is satisfied. Fix $\alpha$ and $\varphi$ as above. Clearly $\sum_{n\in \mathbb Z} |\hat{\varphi}(n)|^2<\infty$ since $\varphi$ is square integrable, therefore the problem is when $\cos(2\pi \alpha n)$ is close to 1, which happens exactly when $\alpha n$ is close to some integer. To handle this we will use the fact that $\alpha$ is Diophantine of type $(c, \gamma)$. This means \begin{equation}\label{E:5.1} \bigg|\alpha - \frac{p}{n}\bigg| \ge \frac{c}{n^\gamma} \quad \textrm{for all $p, n\in \mathbb Z$, $n\not = 0$.} \end{equation} By Taylor's formula $|\cos(2 \pi (p+x))-1|=\frac{(2\pi x)^2}{2}+o(x^2)$ for $p\in \mathbb Z$. As a consequence there exists $\eta>0$ such that $$\big|\cos(2 \pi \alpha n)-1 \big|\ge 2\pi \eta|n\alpha - p|^2 \ge \frac{2\pi \eta c^2}{n^{2(\gamma-1)}}$$ for an arbitrary $n \in\mathbb Z\setminus\{0\}$. If $\varphi \in C^r$ then $|\hat{\varphi}(n)|\le C|n|^{-r}$ for some constant $C$, thus $$ \frac{|\hat{\varphi}(n)|^2}{1-\cos(2\pi\alpha n)} \le \frac{C^2}{2\pi \eta c^2} |n|^{-2r+2(\gamma-1)}$$ for every $n\neq 0$. It is immediate that if $r>\gamma-\frac 12$, then the series (\ref{condition2}) is convergent. This implies \textbf{CLT} by Theorem 1.3 in \cite{Kipnis_Varadhan_1986}. \end{proof} Clearly, if $\varphi$ is a trigonometric polynomial, then the series (\ref{condition2}) becomes a finite sum and thus the condition is trivially satisfied. This yields another proposition, which will be used in the proof of Theorem \ref{T:2}. \begin{prop} \label{P:2} Let us assume $\alpha$ to be irrational. If $\varphi$ is a non-constant trigonometric polynomial with $\int\varphi(x)dx=0$ then there exists $\sigma>0$ such that $$\frac{\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)}{\sqrt{n}} \Rightarrow \mathcal N (0, \sigma).$$ \end{prop} \section{Auxiliary results} In the proofs three lemmas will be pivotal. Given an integer $q\ge 1$ and $\eta\in (0,1/2)$, define $G_q^\eta$ to be the subset of $\mathbb S^1$ containing all points whose distance from the set $\{ 0, \frac{1}{q}, \ldots, \frac{ q-1}{q} \}$ (where $\cos(2\pi q x)$ attains the value $1$) is less than $\frac {\eta}{q}$. Clearly $\textrm{Leb}(G_q^\eta)=2\eta$ whatever $q$ is. Recall that $(Y^{\alpha}_n)$ stands for the Markov process defined on some probability space $(\Omega, \mathcal F, \mathbb P)$ with transition function (\ref{E:1.1}) and $Y_1^\alpha \sim \textrm{Leb}$. \begin{lem}\label{L:1} Let $\alpha=\frac p q$, $\varphi(x)=2^{-q} \cos(2 \pi q x)$ and let $s\in (0,1)$. Let $N$ be an arbitrary natural number with $2^{-q-1}N^{1-s}>2$.
If $\alpha'$ is sufficiently close to $\alpha$ then $$\mathbb P \bigg(\frac{\varphi(Y_1^{\alpha'})+\cdots+\varphi(Y_N^{\alpha'})}{N^{s}} > 2 \bigg) > \frac{1}{6}.$$ \end{lem} \noindent Note that the assertion is more difficult to obtain when $s$ is close to $1$. \begin{proof} The result is a consequence of the invariance of $\varphi$ under the rotation by the angle $\alpha$. In particular the set $G_q^\eta$ is invariant for every $\eta>0$. Take $N$ as in the statement, and choose $\alpha'$ so close to $\alpha$ that $x+n\alpha' \in G_q^{1/6}$ for $|n|\le N$ and $x\in G_q^{1/12}$. By the definition of $G_q^\eta$, the value of $\varphi$ on $G_q^{1/6}$ is greater than or equal to $2^{-q}\cos(2\pi/6)\ge 2^{-q}\cdot 1/2$. Thus $\varphi(x+n\alpha')\ge 2^{-q}\cdot 1/2=2^{-q-1}$ for $|n|\le N$ and $x\in G_q^{1/12}$. This yields $$\{ Y^{\alpha'}_1 \in G_q^{1/12} \} \subseteq \bigg\{ \frac{\varphi(Y_1^{\alpha'})+\cdots+\varphi(Y_N^{\alpha'})}{N} > 2^{-q-1} \bigg\}$$ $$ = \bigg\{ \frac{\varphi(Y_1^{\alpha'})+\cdots+\varphi(Y_N^{\alpha'})}{N^{s}} > 2^{-q-1}N^{1-s} \bigg\}$$ \noindent Using the facts that $Y_1^{\alpha'}\sim \textrm{Leb}$, $\textrm{Leb}(G_q^{1/12})=1/6$ and $2^{-q-1}N^{1-s}>2$ we have $$\mathbb P \bigg(\frac{\varphi(Y_1^{\alpha'})+\cdots+\varphi(Y_N^{\alpha'})}{N^{s}} > 2 \bigg)$$ $$\ge \mathbb P \bigg( \frac{\varphi(Y_1^{\alpha'})+\cdots+\varphi(Y_N^{\alpha'})}{N^{s}} > 2^{-q-1}N^{1-s} \bigg) \ge \mathbb P (Y_1^{\alpha'} \in G_q^{1/12}) = \frac{1}{6},$$ which yields the assertion. \end{proof} A slightly different lemma is the following. \begin{lem}\label{L:3} Let $\alpha$ be an irrational number, $s\in (1/2, 1)$, $c>0$, $\gamma\ge 2$. If $\alpha$ satisfies $$\bigg|\alpha - \frac p q \bigg|\le \frac{c}{q^\gamma}$$ for some pair of integers $p,q$, $q\not= 0$, then $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_N^{\alpha})}{N^s} > \frac{\sqrt{2}}{2\cdot (16 c)^{1-s}} \bigg) > \frac{1}{8},$$ where $\varphi(x)=q^{-(\gamma-1)(1-s)}\cos (2\pi q x)$, $N=\lfloor\frac{q^{\gamma-1}}{16c}\rfloor$. \end{lem} \begin{proof} If $|\alpha - \frac p q|\le \frac{c}{q^\gamma}$ and $|k|\le \frac{q^{\gamma-1}}{16c}$ then \begin{equation}\label{L:3.1} \bigg|k\alpha- k\frac p q \bigg|\le |k| \frac{c}{q^\gamma} \le \frac{1}{16q}. \end{equation} Thus $z+n\alpha \in G^{1/8}_q$ for all $z\in G^{1/16}_q$ and integers $n$ with $|n|\le N$. On the other hand, the value of $\varphi$ on $G^{1/8}_q$ is greater than or equal to $q^{-(\gamma-1)(1-s)}\cos(\frac{2 \pi}{8})=\frac{\sqrt{2}}{2}\cdot q^{-(\gamma-1)(1-s)}$. By the same reasoning as in the proof of Lemma \ref{L:1} we have $$\{Y_1^\alpha \in G^{1/16}_q \} \subseteq \bigg\{ \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_N^{\alpha})}{N^s} > \frac{\sqrt{2}}{2\cdot (16c)^{1-s}} \bigg\}$$ and consequently $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_N^{\alpha})}{N^s} > \frac{\sqrt{2}}{2\cdot (16c)^{1-s}} \bigg)\ge \mathbb P (Y_1^\alpha\in G^{1/16}_q) =\frac{1}{8}.$$ \end{proof} Take a rational $\alpha=p/q$ (written in irreducible form) and the corresponding process $(Y_n^\alpha)$. If the initial point $Y_1^\alpha$ is already known, then we also know that each $Y^\alpha_n$, $n\in \mathbb N$, is contained almost surely in the orbit of $Y_1^\alpha$ under the rotation by the angle $\alpha$, $\{Y_1^\alpha, Y_1^\alpha+\alpha, \ldots, Y_1^\alpha+(q-1)\alpha\}$ (this set is finite, since $\alpha$ is rational). The process $(Y^\alpha_n)$ can therefore be treated as a finite state Markov chain.
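To illustrate the finite chain picture in the simplest case (this example is not used in the proofs), let $\alpha=\frac 1 3$ and fix the initial point $Y_1^\alpha=x$. The chain then moves on the three states $x$, $x+\frac 1 3$, $x+\frac 2 3$, and its transition matrix is $$P=\left(\begin{array}{ccc} 0 & 1/2 & 1/2\\ 1/2 & 0 & 1/2\\ 1/2 & 1/2 & 0\end{array}\right),$$ a circulant matrix with eigenvalues $\cos(2\pi j/3)$, $j=0,1,2$, that is $1, -\frac 1 2, -\frac 1 2$. The uniform distribution on the three states is stationary, and the spectral gap yields exponential convergence towards it; this is the simplest instance of the estimate (\ref{E:exp.conv.}) established below for every odd $q$.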
If $q$ is odd, then the process $(Y^\alpha_n)$ treated as a finite state Markov chain is aperiodic and irreducible. Its stationary distribution is the uniform distribution on the set $\{Y_1^\alpha, Y_1^\alpha+\alpha, \ldots, Y_1^\alpha+(q-1)\alpha\}$ (every state is of measure $1/q$). It follows from Theorem 8.9 (page 131) in \cite{Billingsley_1995} that \begin{equation}\label{E:exp.conv.} |\mathbb P (Y^\alpha_n = Y_1^\alpha+i\alpha) - 1/q| \le A \rho^n \quad \textrm{for $i=0,1, \ldots, q-1$}, \end{equation} where the constants $A$ and $\rho\in(0,1)$ are independent of the initial point (since neither the structure of the state space nor the transition probabilities depend on it). Let $\varphi(x)=a\cos(2\pi q' x)$ for some $a>0$ and $q'$ not a multiple of $q$. Since $p/q$ is assumed to be in an irreducible form, $p/q\cdot q'$ is not an integer and thus we have $$1/q\sum_{i=0}^{q-1} \varphi(x+i\alpha)=0$$ for every $x\in \mathbb S^1$, which is equivalent to saying that the integral of $\varphi$ with respect to the stationary distribution of $(Y_n^\alpha)$ (treated as a finite state Markov chain) equals zero. Moreover, using (\ref{E:exp.conv.}) gives $$\bigg| \mathbb E\big( \varphi(Y^\alpha_n) \ \big| \ Y^\alpha_1 \big) \bigg| = \bigg| \sum_{i=0}^{q-1} \mathbb P ( Y^\alpha_n = Y^\alpha_1+i\alpha ) \cdot \varphi(Y^\alpha_1+i\alpha) - 1/q\sum_{i=0}^{q-1} \varphi(Y^\alpha_1+i\alpha ) \bigg|$$ $$\le \sum_{i=0}^{q-1} \|\varphi\|_\infty \big| \mathbb P ( Y^\alpha_n = Y^\alpha_1+i\alpha ) - 1/q \big| \le A q \|\varphi \|_\infty \rho^n$$ for $n\ge 1$. Thus \begin{equation}\label{E:3.1} \sum_{n=1}^\infty \bigg| \mathbb E\big( \varphi(Y^\alpha_n) \ \big| \ Y^\alpha_1 \big) \bigg| \le \frac{A q \|\varphi \|_\infty}{1-\rho} \quad \textrm{a.s.} \end{equation} The next lemma is essentially a consequence of the central limit theorem for finite state irreducible and aperiodic Markov chains. However, using (\ref{E:3.1}) we may deduce it in a simpler way. \begin{lem}\label{L:2} Let $\alpha=\frac p q$ be rational (in irreducible form) with $q$ odd. Let $\varphi(x)=a\cos(2\pi q' x)$ for some $a>0$ and $q'$ not a multiple of $q$. If $s>1/2$ then for every $\varepsilon>0$ and $\delta>0$ there exists $N$ such that $$\mathbb P \bigg( \frac{\big| \varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)\big|}{n^s} > \delta \bigg) < \varepsilon$$ for $n \ge N$. \end{lem} \begin{proof} It follows from the Chebyshev inequality.
We have $$ \mathbb P \bigg( \frac{\big|\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)\big|}{n^s} > \delta \bigg) \le \frac{\mathbb E \big( \varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n) \big)^2}{\delta^2 n^{2s}} $$ $$= \frac{ \big| \mathbb E \big(\sum_{i=1}^n \varphi^2(Y_i^\alpha) + 2\sum_{i=1}^{n-1} \varphi(Y^\alpha_i)(\varphi(Y^\alpha_{i+1})+\cdots+\varphi(Y^\alpha_n) ) \big)\big|}{\delta^2 n^{2s}}$$ $$\le \sum_{i=1}^n \frac{\mathbb E\varphi^2(Y_i^\alpha)}{\delta^2 n^{2s}} + 2\sum_{i=1}^{n-1} \frac{\big| \mathbb E \big(\varphi(Y^\alpha_i) \mathbb E[ \varphi(Y^\alpha_{i+1})+\cdots+\varphi(Y^\alpha_n) | Y^\alpha_i ]\big)\big|}{\delta^2n^{2s}}$$ $$\le \frac{1}{\delta^2 n^{2s-1}} \int\varphi^2(x)dx + 2\sum_{i=1}^{n-1} \frac{\mathbb E \big( \big|\varphi(Y^\alpha_i)\big| \cdot \big| \mathbb E[ \varphi(Y^\alpha_{i+1})+\cdots+\varphi(Y^\alpha_n) | Y^\alpha_i ]\big|\big)}{\delta^2n^{2s}}$$ $$\le\frac{1}{\delta^2 n^{2s-1}} \int\varphi^2(x)dx + 2 \sum_{i=1}^{n-1} \frac{\mathbb E \bigg( \big|\varphi(Y^\alpha_i)\big| \cdot \bigg( \big|\mathbb E[ \varphi(Y^\alpha_{i+1})| Y^\alpha_i ]\big| +\cdots+ \big|\mathbb E[ \varphi(Y^\alpha_n) | Y^\alpha_i ]\big|\bigg)\bigg)}{\delta^2n^{2s}}.$$ By (\ref{E:3.1}), the stationarity of the process and the bound $|\varphi|\le\|\varphi\|_\infty$, each of the expectations in the last sum does not exceed $\frac{A q \|\varphi \|^2_\infty}{1-\rho}$, thus the second term is bounded by $$n\cdot \frac{2A q \|\varphi \|^2_\infty}{(1-\rho)\delta^2n^{2s}}=\frac{2A q \|\varphi \|^2_\infty}{(1-\rho)\delta^2n^{2s-1}}.$$ The entire expression tends to zero since $s>1/2$. The assertion follows. \end{proof} \section{Proof of Theorem \ref{T:1}} Fix an arbitrary $s\in (\frac 1 2, 1)$. We are going to construct an angle $\alpha$ and an observable $\varphi$ with $\int\varphi(x)dx=0$ such that there exist infinitely many $n$'s with $$\mathbb P \bigg( \frac{\varphi(Y_1^\alpha)+\cdots+\varphi(Y_n^\alpha)}{n^{s}} > 1 \bigg) > \frac{1}{12}.$$ Consequently the process does not satisfy \textbf{CLT} since \textbf{CLT} would imply that the above quantity tends to zero. First we shall define inductively a sequence of numbers $\alpha_k$ convergent to some $\alpha$ along with certain observables $\varphi_k$. Then we will put $\varphi=\sum_k \varphi_k$ and use some relations between $\alpha_k$ and $\varphi_k$ established during the induction process to get the above assertion. Put $\alpha_1=\frac 1 3=\frac {p_1}{q_1}$ (when we represent a rational number as a fraction of integers we always assume it to be in an irreducible form, so here $p_1=1$ and $q_1=3$), and set $\varphi_1(x)=2^{-q_1} \cos(2\pi q_1 x)$. Take $N_1$ so large that $2^{-q_1-1}N_1^{1-s}>2$ and apply Lemma \ref{L:1} to obtain an angle $\alpha_2=\frac{p_2}{q_2}$, with $q_2>q_1$ and $q_2$ odd, such that \begin{equation}\label{E:4.1} \mathbb P \bigg( \frac{\varphi_1(Y_1^{\alpha_2})+\cdots+\varphi_1(Y_{N_1}^{\alpha_2})}{N_1^{s}} >2 \bigg) >\frac 1 6. \end{equation} Define $\varphi_2(x)=2^{-q_2}\cos(2\pi q_2 x)$. Take $N_2>N_1$ so large that $2^{-q_2-1} N_2^{1-s}>2$.
Clearly $q_1$ is not a multiple of $q_2$, hence by Lemma \ref{L:2} we can assume that $N_2$ is so large that \begin{equation}\label{E:4.11} \mathbb P \bigg( \frac{\big|\varphi_1(Y_1^{\alpha_2})+\cdots+\varphi_1(Y_{N_2}^{\alpha_2})\big|}{N_2^{s}} >\frac 1 4 \bigg) < \frac 1 4 \cdot \frac{1}{6}.\end{equation} Again use Lemma \ref{L:1} to obtain an angle $\alpha_3=\frac{p_3}{q_3}$, with $q_3>q_2$ and $q_3$ odd, such that \begin{equation}\label{E:4.13} \mathbb P \bigg( \frac{\varphi_2(Y_1^{\alpha_3})+\cdots+\varphi_2(Y_{N_2}^{\alpha_3})}{N_2^{s}} > 2 \bigg) > \frac{1}{6}. \end{equation} We also assume that the number $\alpha_3$ is so close to $\alpha_2$ that (\ref{E:4.1}) and (\ref{E:4.11}) still hold with $\alpha_2$ replaced by $\alpha_3$. This combined with (\ref{E:4.13}) gives $$\mathbb P \bigg( \frac{\varphi_i(Y_1^{\alpha_3})+\cdots+\varphi_i(Y_{N_i}^{\alpha_3})}{N_i^{s}} >2 \bigg) > \frac 1 6, \quad \textrm{for $i=1,2$,}$$ and $$\mathbb P \bigg( \frac{\big|\varphi_1(Y_1^{\alpha_3})+\cdots+\varphi_1(Y_{N_2}^{\alpha_3})\big|}{N_2^{s}} >\frac 1 4 \bigg) < \frac 1 4 \cdot \frac{1}{6}.$$ Assume $\alpha_k=\frac{p_k}{q_k}$, $N_i$, $\varphi_i$ are already defined, $k\ge 3$, $i<k$. These objects satisfy the relations \begin{equation}\label{E:4.51} \mathbb P \bigg( \frac{\big|\varphi_i(Y_1^{\alpha_{k}})+\cdots+\varphi_i(Y_{N_j}^{\alpha_{k}})\big|}{N_j^{s}} >\frac{1}{4^i} \bigg) < \frac{ 1}{4^i} \cdot \frac{1}{6} \quad \textrm{for $j=1,\ldots, k-1$, $i<j$,} \end{equation} and \begin{equation}\label{E:4.41} \mathbb P \bigg( \frac{\varphi_i(Y_1^{\alpha_{k}})+\cdots+\varphi_i(Y_{N_i}^{\alpha_{k}})}{N_i^{s}} > 2 \bigg) > \frac{1}{6} \quad \textrm{for $i=1,\ldots, k-1$.} \end{equation} \noindent Define $\varphi_k(x)=2^{-q_k} \cos(2 \pi q_k x)$ and take $N_k>N_{k-1}$ so large that $2^{-q_k-1} N_k^{1-s}>2$ and \begin{equation}\label{E:4.2} \mathbb P \bigg( \frac{\big|\varphi_i(Y_1^{\alpha_k})+\cdots+\varphi_i(Y_{N_k}^{\alpha_k})\big|}{N_k^{s}} >\frac{1}{4^i} \bigg) < \frac{ 1}{4^i} \cdot \frac {1}{6} \end{equation} for $i=1,\ldots, k-1$, by Lemma \ref{L:2}. Use Lemma \ref{L:1} to get a number $\alpha_{k+1}=\frac{p_{k+1}}{q_{k+1}}$, with $q_{k+1}>q_k$, $q_{k+1}$ odd, such that \begin{equation}\label{E:4.3} \mathbb P \bigg( \frac{\varphi_k(Y_1^{\alpha_{k+1}})+\cdots+\varphi_k(Y_{N_k}^{\alpha_{k+1}})}{N_k^{s}} > 2 \bigg) > \frac{1}{6}. \end{equation} We should take care that $\alpha_{k+1}$ is so close to $\alpha_k$ that (\ref{E:4.51}), (\ref{E:4.41}) and (\ref{E:4.2}) still hold with $\alpha_k$ replaced by $\alpha_{k+1}$. With this modification, (\ref{E:4.51}) and (\ref{E:4.2}) become \begin{equation}\label{E:4.5} \mathbb P \bigg( \frac{\big|\varphi_i(Y_1^{\alpha_{k+1}})+\cdots+\varphi_i(Y_{N_j}^{\alpha_{k+1}})\big|}{N_j^{s}} >\frac{1}{4^i} \bigg) < \frac{ 1}{4^i} \cdot \frac{1}{6} \quad \textrm{for $j=1,\ldots, k$, $i<j$}, \end{equation} while (\ref{E:4.41}) and (\ref{E:4.3}) can be rewritten as \begin{equation}\label{E:4.4} \mathbb P \bigg( \frac{\varphi_i(Y_1^{\alpha_{k+1}})+\cdots+\varphi_i(Y_{N_i}^{\alpha_{k+1}})}{N_i^{s}} > 2 \bigg) > \frac{1}{6} \quad \textrm{for $i=1,\ldots, k$.} \end{equation} This completes the induction. Observe that there is no inconsistency in assuming that $q_{k+1}$'s grow so fast that \begin{equation}\label{E:4.8} 2^{-q_{k+1}} N_i^{1-s}<4^{-(k+1-i)} \quad \textrm{for $i=1,\ldots, k$.} \end{equation} This way the sequences of numbers $(\alpha_k)$, $(N_k)$ and functions $(\varphi_k)$ are defined. Set $\alpha=\lim_{k\to \infty} \alpha_k$ and $\varphi=\sum_{k=1}^\infty \varphi_k$.
When passing to the limit, inequality (\ref{E:4.5}) becomes \begin{equation}\label{E:4.6} \mathbb P \bigg( \frac{\big|\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_j}^{\alpha})\big|}{N_j^{s}} \ge \frac{1}{4^i} \bigg) \le \frac{ 1}{4^i} \cdot \frac {1}{6} \quad \textrm{for $j>1$, $i<j$}, \end{equation} while (\ref{E:4.4}) yields \begin{equation}\label{E:4.7} \mathbb P \bigg( \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_i}^{\alpha})}{N_i^{s}} \ge 2 \bigg) \ge \frac{1}{6} \quad \textrm{for $i\ge 1$.} \end{equation} The function $\varphi$ is analytic. Indeed, by design $$\varphi(x) = \sum_{k=-\infty}^\infty c_k e^{2\pi i k x},$$ where $c_k= \tfrac12\|\varphi_j\|_\infty=2^{-q_j-1}$ if $|k|=q_j$ and zero otherwise. Thus the Fourier coefficients of $\varphi$ decay exponentially fast, which implies that $\varphi$ is analytic\footnote{Indeed, $\varphi$ is defined as a series on the circle, however by the exponential convergence it can be extended to an annular neighbourhood of the unit circle in the complex plane $\mathbb{C}$. Then $\varphi$ becomes a sum of holomorphic functions convergent uniformly on compact subsets of this annulus. Theorem 10.28 (page 214) in \cite{Rudin_1987} implies that $\varphi$ is holomorphic.}. Obviously $\int \varphi(x)dx=0$ by the Lebesgue convergence theorem. Observe also that (\ref{E:4.8}) combined with $\|\varphi_i\|_\infty=2^{-q_i}$ yields \begin{equation}\label{E:4.9} \sum_{i>k} \|\varphi_i\|_\infty N_k^{1-s}<\sum_{i=1}^\infty 4^{-i}<\frac 1 2. \end{equation} We are in a position to complete the proof, i.e. to show that $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^{s}} \ge 1 \bigg) \ge \frac {1}{12}$$ for every $k$. To this end fix $k$ and write $$\frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^{s}} = \sum_{i\le k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^{s}}$$ $$+ \sum_{i>k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^{s}}.$$ From (\ref{E:4.9}) it easily follows that the absolute value of the second summand on the right-hand side is less than $\frac 1 2$ almost surely. Thus $$\mathbb P \bigg ( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^{s}} \ge 1 \bigg ) \ge \mathbb P \bigg( \sum_{i\le k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^{s}} \ge 3/2 \bigg)$$ $$ \ge \mathbb P \bigg ( \frac{\varphi_k(Y_1^{\alpha})+\cdots+\varphi_k(Y_{N_k}^{\alpha})}{N_k^{s}}\ge 2 \bigg ) -\sum_{i<k} \mathbb P \bigg ( \frac{\big|\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})\big|}{N_k^{s}} \ge \frac{1}{4^i} \bigg). $$ By (\ref{E:4.6}) and (\ref{E:4.7}) it follows that $$\mathbb P \bigg ( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^{s}} \ge 1 \bigg ) \ge \frac{1}{6} - \sum_{i=1}^\infty\frac{1}{4^i} \cdot \frac{1}{6} = \frac{1}{9}\ge\frac{1}{12},$$ which is the desired assertion. \section{Proof of Theorems \ref{T:2} and \ref{T:3}} The entire section is devoted to the proof of Theorem \ref{T:2}. In the end we will give a short remark how to change the proof to get Theorem \ref{T:3}. Fix an irrational $\alpha$ and numbers $c>0$, $\gamma\ge 2$ such that \begin{equation}\label{E:6.1} \bigg|\alpha - \frac{p}{q} \bigg| \le \frac{c}{q^\gamma} \end{equation} for infinitely many pairs $p,q \in \mathbb Z$, $q\not = 0$. Take $r$ to be the largest possible integer with $r<\frac{\gamma}{2}-\frac 3 2$.
The function $s\longmapsto (\gamma-1)(1-s)-1$ is decreasing, $s\in [\frac 1 2, 1)$, and its value at $s=\frac 1 2$ is $\frac{\gamma}{2}-\frac 3 2$, thus by continuity we can choose $s>\frac 1 2$ such that $r<(\gamma-1)(1-s)-1$. For this choice of $s$ we are going to construct an observable $\varphi$ with $\int\varphi(x)dx=0$ such that $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_n^{\alpha})}{n^s} > \frac{\sqrt{2}}{4\cdot (16 c)^{1-s}} \bigg) > \frac{1}{16}$$ for infinitely many $n$'s. Consequently \textbf{CLT} is violated. Take arbitrary $p_1, q_1 \in \mathbb Z$, $q_1\not = 0$, satisfying (\ref{E:6.1}). Set $\varphi_1(x) = q_1^{-(\gamma-1)(1-s)}\cos(2 \pi q_1 x)$ and apply Lemma \ref{L:3} to get \begin{equation}\label{E:6.3} \mathbb P \bigg( \frac{\varphi_1(Y_1^{\alpha})+\cdots+\varphi_1(Y_{N_1}^{\alpha})}{N_1^s} > \frac{\sqrt{2}}{2\cdot (16c)^{1-s}} \bigg) > \frac{1}{8}, \end{equation} where $N_1=\lfloor\frac{q_1^{\gamma-1}}{16c}\rfloor$. By Proposition \ref{P:2} the additive functional $(\varphi_1(Y^\alpha_1)+\cdots+\varphi_1(Y^\alpha_n))$ satisfies \textbf{CLT}, thus for $N$ sufficiently large \begin{equation}\label{E:6.2} \mathbb P \bigg( \frac{\varphi_1(Y_1^{\alpha})+\cdots+\varphi_1(Y_N^{\alpha})}{N^s} > \frac{\sqrt{2}}{4 \cdot (16c)^{1-s}} \cdot \frac{1}{4} \bigg) < \frac{1}{8}\cdot\frac{1}{4}. \end{equation} Let us take $p_2, q_2\in \mathbb Z$, $q_2\not = 0$, such that (\ref{E:6.1}) holds, $N_2=\lfloor\frac{q_2^{\gamma-1}}{16c}\rfloor$ satisfies (\ref{E:6.2}) and \begin{equation} q_2^{-(\gamma-1)(1-s)} \cdot N_1^{1-s}<\frac{\sqrt{2}}{4 \cdot (16c)^{1-s}} \cdot \frac 1 4 \end{equation} (this will imply that the inequality (\ref{E:6.3}) is not affected too much when $\varphi_1$ is replaced by $\varphi_1+\varphi_2$). Lemma \ref{L:3} yields $$\mathbb P \bigg( \frac{\varphi_2(Y_1^{\alpha})+\cdots+\varphi_2(Y_{N_2}^{\alpha})}{N_2^s} > \frac{\sqrt{2}}{2\cdot (16c)^{1-s}} \bigg) >\frac{1}{8},$$ where $\varphi_2(x)=q_2^{-(\gamma-1)(1-s)}\cos(2\pi q_2 x)$. Assume $N_k$, $p_k$, $q_k$ are already defined. Let us choose a pair $q_{k+1}, p_{k+1} \in \mathbb Z$ with (\ref{E:6.1}), where $q_{k+1}>q_k$ is so large that \begin{equation}\label{E:6.5} q_{k+1}^{-(\gamma-1)(1-s)} \cdot N_i^{1-s}< \frac{\sqrt{2}}{4\cdot (16c)^{1-s}} \cdot 4^{-(k+1-i)} \quad \textrm{for $i=1,\ldots, k$.} \end{equation} Moreover, using Lemma \ref{L:2} we demand that $q_{k+1}$ is so large that \begin{equation}\label{E:6.4} \mathbb P \bigg( \frac{\varphi_j(Y_1^{\alpha})+\cdots+\varphi_j(Y_{N_{k+1}}^{\alpha})}{N_{k+1}^s} > \frac{\sqrt{2}}{4 \cdot (16c)^{1-s}}\cdot \frac{1}{4^j} \bigg) < \frac{1}{8}\cdot\frac{1}{4^j} \quad \textrm{for $j \le k$,} \end{equation} where $N_{k+1}=\lfloor\frac{q_{k+1}^{\gamma-1}}{16c}\rfloor$. Finally we use Lemma \ref{L:3} to get \begin{equation}\label{E:6.6} \mathbb P \bigg( \frac{\varphi_{k+1}(Y_1^{\alpha})+\cdots+\varphi_{k+1}(Y_{N_{k+1}}^{\alpha})}{N_{k+1}^s} > \frac{\sqrt{2}}{2\cdot (16 c)^{1-s}} \bigg) > \frac{1}{8}, \end{equation} where $\varphi_{k+1}(x)=q_{k+1}^{-(\gamma-1)(1-s)}\cos (2\pi q_{k+1} x)$. \noindent When the induction is complete put $$\varphi(x)=\sum_{k=1}^\infty \varphi_k(x)=\sum_{k=1}^\infty q_k^{-(\gamma-1)(1-s)}\cos(2 \pi q_k x).$$ By assumption $r<(\gamma-1)(1-s)-1$, therefore we can take $\varepsilon>0$ so that $r=(\gamma-1)(1-s)-(1+\varepsilon)$. If one differentiates this series $r$ times, then it still converges uniformly (the terms decay at least like $q_k^{-(1+\varepsilon)}$). Therefore Theorem 7.17 (page 152) in \cite{Rudin_1976} yields that $\varphi$ is $C^r$.
Now it remains to show that $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^s} > \frac{\sqrt{2}}{4\cdot (16 c)^{1-s}} \bigg) > \frac{1}{16}$$ for every $k\in\mathbb N$. We proceed analogously to the proof of Theorem \ref{T:1}. Fix $k$. We have $$\frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^s} = \sum_{i\le k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^s}$$ $$+\sum_{i> k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^s}.$$ The application of (\ref{E:6.5}) yields that the second term is bounded by $\frac{\sqrt{2}}{8\cdot (16 c)^{1-s}}$ a.s. Therefore $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^s} > \frac{\sqrt{2}}{4\cdot (16 c)^{1-s}} \bigg)$$ $$\ge \mathbb P \bigg( \sum_{i\le k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^s} > \frac{3\sqrt{2}}{8\cdot (16 c)^{1-s}} \bigg)$$ $$\ge \mathbb P \bigg( \frac{\varphi_k(Y_1^{\alpha})+\cdots+\varphi_k(Y_{N_k}^{\alpha})}{N_k^s} > \frac{\sqrt{2}}{2\cdot (16 c)^{1-s}} \bigg)$$ $$-\sum_{i<k} \mathbb{P} \bigg( \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^s} > \frac{\sqrt{2}}{4\cdot (16 c)^{1-s}}\cdot \frac{1}{4^i} \bigg). $$ The application of (\ref{E:6.4}) and (\ref{E:6.6}) yields Theorem \ref{T:2}. To prove Theorem \ref{T:3}, observe that for a Liouville number $\alpha$ there exist sequences of integers $p_k$, $q_k$ with $$\bigg|\alpha - \frac{p_k}{q_k} \bigg| \le \frac{1}{q_k^{k}} \quad \textrm{for every $k$.}$$ The only difference from the proof of Theorem \ref{T:2} is that the pairs $p,q$ are chosen from this sequence. Then again $\varphi=\sum_m \varphi_m$, and the series remains uniformly convergent after differentiating it $r$ times, for an arbitrary $r$. This implies $\varphi\in C^\infty$. The rest remains unchanged. \end{document}
\begin{document} \title{\textbf{Scaling entropy and automorphisms with purely point spectrum}\footnote{Partially supported by the grants RFBR-08-01-00379-a and RFBR-09-01-12175-ofi-m.}} \rightline{\it \textbf{To the memory of my friend Misha Birman}} \begin{abstract} We study the dynamics of metrics generated by measure-preserving transformations. We consider sequences of average metrics and $\epsilon$-entropies of the measure with respect to these metrics. The main result, which gives a criterion for checking that the spectrum of a transformation is purely point, is that the {\it scaling sequence for the $\epsilon$-entropies with respect to the averages of an admissible metric is bounded if and only if the automorphism has a purely point spectrum}. This paper is one of a series of papers by the author devoted to the asymptotic theory of sequences of metric measure spaces and its applications to ergodic theory. \end{abstract} \tableofcontents \section{Introduction} Among the many mathematical and nonmathematical problems we have been discussing with Misha Birman for many years after our acquaintance began in the early 1960s, the most intriguing one was the parallel between scattering theory and ergodic theory. Recently, I have returned to the (yet nonexistent) ``ergodic scattering theory'' and some forgotten questions related to it. However, this paper deals with another subject, which also correlates with M.~Sh.~Birman's research. It is well known that the problem of deciding whether or not the spectrum of a given (say, differential) operator is discrete is quite difficult. Several remarkable early papers by M.~Sh.~Birman (in the first place, \cite{MB}) dealt with exactly this problem. In ergodic theory and the theory of dynamical systems, this problem (whether or not the system of eigenfunctions is complete) is also very difficult. In what follows, we use the terminology common in ergodic theory that slightly differs from the one adopted in operator theory: we use the term ``discrete spectrum'' as a synonym for ``pure point spectrum.'' This is justified by the fact that a discrete spectrum in the sense of operator theory almost never appears in the theory of dynamical systems, since almost always one deals with unitary operators. As an example demonstrating the difficulty of this problem, we can mention the theory of substitutions, or stationary adic transformations \cite{V1}, on which an extensive literature exists. The most intriguing problems concern nonstationary adic transformations with subexponential growth of the number of vertices in the corresponding Bratteli--Vershik diagram (see \cite{V1}). The simplest and most popular example of such a transformation is the Pascal automorphism defined in \cite{V2}; in this case, the measure space (i.\,e., the phase space of the dynamical system) is the space of infinite paths in the Pascal graph endowed with a Bernoulli measure, and the transformation sends a path to its successor in the natural lexicographic order. In spite of many efforts, we still do not know whether the corresponding unitary operator has a discrete (i.\,e., pure point) or mixed spectrum, or whether, as seems most likely, its spectrum is purely continuous. Attempts to directly construct its eigenfunctions failed; another approach, based on the characterization of systems with discrete spectrum in terms of Kushnirenko's sequential entropy \cite{Ku}, has not been carried out.
In the paper \cite{V6} (for more details, see \cite{V3,V4,VG}), we suggested a new notion, the so-called \emph{scaling entropy}, which generalizes the notion of Kolmogorov's entropy. The main point is that we suggest {\it averaging the shifts of the metric} with respect to a given transformation and then computing the {\it $\varepsilon$-entropy} of the average metric. The class of increasing sequences of positive integers that normalize the growth of this $\varepsilon$-entropy does not depend on the choice of an admissible metric, so that the asymptotics of the growth of these sequences is a new metric invariant of automorphisms. Admissible metrics are measurable metrics satisfying certain conditions, which may or may not involve the automorphism (see Section~2). It is important that {\it we consider not merely the $\varepsilon$-entropy of a metric, but the $\varepsilon$-entropy of a metric in a measure space}. Admissible metrics play the same role as measurable generating partitions in the classical theory of Kolmogorov's entropy according to Sinai's definition. At first sight, the difference between using partitions and metrics looks rather technical: a partition determines a semimetric of a special form (a so-called ``cut semimetric,'' see \cite{D}). However, our approach has two important differences from the classical theory. First, we use the $\varepsilon$-entropy of the iterated metric on a measure space rather than the entropy of a partition; this is a generalization of Kolmogorov's entropy, which allows one to distinguish automorphisms with zero entropy. The second, more important, difference is that we use the average metric (rather than the supremum of metrics, which corresponds to the supremum of partitions), which has no interpretation in terms of partitions and which contains more information about the automorphism than the supremum of metrics. In this paper, we give a necessary and sufficient condition for an automorphism to have a discrete spectrum in terms of the scaling sequence. The condition is that this sequence is bounded. This result generalizes a theorem of S.~Ferenczi \cite{Fe,FeP}, who considered the measure-theoretic complexity of symbolic systems by analogy with the ordinary complexity in symbolic dynamics. Our approach substantially differs from that of \cite{Fe}: we consider an arbitrary admissible metric rather than the Hamming metric only, and, most importantly, introduce the average metric and show that it is admissible in many cases and, in particular, for the Hamming metric. This criterion, i.e., the boundedness of the growth of the scaling entropy, should first be applied to adic transformations, e.g., to the Pascal automorphism (see \cite{V3}). Although it has not yet been applied to checking that the spectrum of the Pascal automorphism is not discrete,\footnote{See the footnote in Subsection~7.2.} the corresponding combinatorics is already developed and described in the recent paper \cite{MM}, where a lower bound is obtained on the scaling sequence of the sup metric for the same Pascal automorphism. Supposedly, one can extend this bound ($\ln n$) to the average metric using the same techniques. A wider context is presented in our papers \cite{V3,V4}, where we suggest a plan for the study of the {\it dynamics of metrics in a measure space} as a source of new invariants of automorphisms.
It is important that the notion of scaling entropy provides an answer to the question of whether or not the spectrum of a given transformation is discrete. Sections 2--4 are of a general nature and are intended not only for the purposes of this paper, which is devoted mostly to automorphisms with discrete spectrum. Here we introduce our main objects: admissible metrics, averages, scaling sequences, and the scaling entropy of an automorphism. In Section~5, we study the dynamics of metrics on a group and find conditions under which the average metric is admissible. The main result is given in Section~6, where we present a criterion for checking whether the spectrum of an automorphism is completely or partially discrete. In the last section, we sketch possible applications and links to the ordinary construction of entropy, and make general remarks about the dynamics of metrics. \section{Admissible metrics} We consider various metrics and semimetrics on a measure space $(X,{\frak A},\mu)$. In what follows, it is assumed to be a standard space with continuous measure $\mu$ and $\sigma$-algebra $\frak A$ of $\bmod 0$ classes of measurable sets, i.e., a Lebesgue space with continuous measure in the sense of Rokhlin (see \cite{Ro}). The space $X\times X$ is endowed with the $\sigma$-algebra $\frak A \times \frak A$ and the measure $\mu \times \mu$. We define a class of (semi)metrics on a measure space, which plays an important role in what follows. \begin{definition} A measurable function $\rho:X\times X\rightarrow {\Bbb R}_+$ is called an admissible (semi)metric if {\rm 1)} $\rho$ is a (semi)metric in the ordinary sense on a subset $X'\subset X$ of full measure ($\mu X'=1$), i.e., $\rho(x,y)\geq 0$, $\rho(x,y)=\rho(y,x)$, $\rho(x,y)+\rho(y,z)\geq \rho(x,z)$ for all triples $(x,y,z)\in X'\times X' \times X'$, $(\mu\times\mu)\{(x,x),x\in X'\}=0$, and $$\int_X\int_X \rho(x,y)d\mu(x)d\mu(y)<\infty.$$ In order to formulate the next condition, observe that if $\rho$ satisfies Condition~1), then the partition $\psi_{\rho}$ of the space $X$ into the classes of points $C_x=\{y:\rho(x,y)=0\}$, $x \in X$, is measurable. Hence we have a well-defined quotient space $X_\rho \equiv X/{\psi_{\rho}}$ endowed with the quotient metric denoted by the same letter $\rho$ and the quotient measure $\mu_{\psi_{\rho}}\equiv\mu_{\rho}$. For the quotient space $(X_\rho,\rho,\mu_{\rho})$, Condition~1) is still satisfied. {\rm 2)} The (completion of the) metric space $(X,\rho)$ (if $\rho$ is a metric) or $(X_{\rho},\rho)$ (if $\rho$ on $X$ is a semimetric) is a Polish (= metric, separable, complete) space with a Borel probability measure $\mu$ (respectively, $\mu_{\rho}$). \end{definition} Following the measure-theoretic tradition, we must identify (semi)metrics (and hence the corresponding spaces) if they coincide almost everywhere as measurable functions on the space $(X\times X,\mu \times \mu)$. Of course, a (semi)metric that coincides almost everywhere with an admissible (semi)metric is admissible.\footnote{We can define the notion of an {\it almost metric} as a measurable function for which all the metric axioms are satisfied for almost all pairs or triples of points.
As F.~Petrov noticed, for each almost metric there exists an admissible metric in our sense that coincides with it almost everywhere.} Condition {\rm 2)} means that the $\sigma$-algebra of Borel sets in the metric space $(X, \rho)$ (or $(X_{\rho}, \rho)$ in the case of a semimetric) is dense in the $\sigma$-algebra $\frak A$ of all measurable sets and, therefore, the measure $\mu$ (respectively, $\mu_{\rho}$) is a Borel probability measure. It is obvious from the definition that a semimetric $\rho$ is admissible if and only if the corresponding metric in the quotient space $X_{\rho}$ is admissible. An equivalent definition of an admissible metric is as follows: almost every pair of points can be separated by balls of positive measure, or, in other words, the $\sigma$-subalgebra generated by the open balls $\bmod 0$ separates points of the space. One can also formulate the admissibility condition in terms of the notion of a pure function from our paper \cite{VCl}: \begin{lemma} A metric $\rho$ is admissible if and only if it satisfies Condition~1) and, regarded as a function of two variables, is pure in the sense of \cite{VCl}; the latter means that the partition of the space into the equivalence classes of the relation $x\sim y \Leftrightarrow \mu\{z: \rho(x,z)= \rho(y,z)\}=1$ is the partition into separate points $\bmod 0$. In other words, the map $x\mapsto \rho(x,\cdot)$ from the measure space $(X,\mu)$ to the classes of $\bmod 0$ equal functions of one variable is injective. \footnote{In other words, almost every point is uniquely determined by the collection of distances from this point to all points of some set of full measure (which may depend on the point).} If $\rho$ is a semimetric, then this condition must hold for the metric $\rho$ on the quotient space $(X_{\rho},\rho,\mu_{\rho})$. \end{lemma} Indeed, the purity condition implies that the $\sigma$-algebra of sets generated by the balls separates points and hence is dense in the $\sigma$-algebra ${\frak A}$ of the space $(X,{\frak A},\mu)$; this also implies the separability. The converse immediately follows from the definition of an admissible metric. It is well known (see \cite{Ro}) that if $(X,\rho)$ is a Polish space, then every nondegenerate Borel probability measure $\mu$ on $X$ turns $(X,\rho)$ into a Lebesgue space. In other words, the metric $\rho$ on a Polish space $(X,\rho)$ endowed with a Borel probability measure $\mu$ is an admissible metric on the space $(X,\mu)$. As in our other papers, in the definition of an admissible metric we reverse the tradition and {\it consider various metrics and semimetrics on a fixed measure space rather than various measures on a given metric space}. Recall that triples $(X,\rho, \mu)$, consisting of a metric space endowed with a measure, were called $mm$-spaces in M.~Gromov's book \cite{Gr}, and metric triples, or Gromov triples, in the paper \cite{VU}. It is useful to regard admissible metrics as {\it densities of some finite measures equivalent to the measure $\mu\times \mu$ on $X\times X$}: $$dM_{\mu,\rho}=\rho(x,y)d\mu(x)d\mu(y).$$ If we set $\int_{X\times X} \rho(x,y)d\mu(x)d\mu(y)=1$, which can be done by normalizing the metric, then the new measure is also a probability measure. Observe the following important property of admissible metrics implied by this interpretation. \begin{theorem} For almost every point $x\in X$, there is a uniquely defined $\bmod 0$ conditional measure $\mu^x$ on $X$, which is given by the formula $\mu^x(A)=\int_A\rho(x,y)d\mu(y)$ for every measurable set $A\subset X$.
The family of measures $\{\mu^x; x\in X\}$ satisfies the condition $$\mu(A)=\int_X \mu^x(A)d\mu(x).$$ The metric $\rho$, regarded as a metric on the measure space $(X,\mu^x)$, is admissible. \end{theorem} \begin{proof} Consider the new measure $M_{\mu,\rho}$ on the space $X\times X$ and the measurable partition into the classes of points $C^x=\{(x,*)\}\subset X\times X$, and use the classical theorem on the existence of conditional measures (see, e.g., \cite{Ro}), which implies the desired formula and the uniqueness $\bmod 0$ of the family of conditional measures. Now consider the space $(X,\rho,\mu^x)$ for a fixed $x$; the metric $\rho$ on this space is admissible since the admissibility of a metric is obviously preserved under replacing a measure with an equivalent one. \end{proof} The conditional measure $\mu^x(A)$ can be interpreted as the ``average distance,'' or the conditional expectation of the distance from the set $A$ to the point $x$. In these terms, the condition that the metric, regarded as a function of two variables, is pure means that the conditional expectations do not coincide $\mod 0$ for almost all pairs of points. Now we can give a convenient criterion of admissibility. \begin{statement} A measurable function $\rho(\cdot,\cdot)$ satisfying Condition~1) of Definition~1 is admissible if and only if the following non-degeneracy condition holds: for $\mu$-almost all $x$, the measure of arbitrary balls of positive radius is positive: $\forall \epsilon >0, \quad \mu\{y:\rho(x,y)<\epsilon\}>0$, which is equivalent to: $$\mu^x([0,\epsilon])>0$$ for almost all $x$ and all positive $\epsilon$. \end{statement} \begin{proof} The fact that an admissible metric satisfies the condition in question was observed above. Now assume that this condition is satisfied. We must prove that the space $(X,\rho)$ (if $\rho$ is a metric) or the quotient space $X_{\rho}$ (if $\rho$ is a semimetric) is separable. It suffices to consider the case of a metric. The condition stated in the proposition implies that for every $\epsilon$ there exists $\delta=\delta(\epsilon)$ with $\lim_{\epsilon \to 0} \delta(\epsilon)=0$ such that some set of measure $>1-\delta$ contains a finite $\epsilon$-net. But this means exactly that the space is separable and the measure is concentrated on a $\sigma$-compact set. \end{proof} \textbf{Several examples.} 1.~An important example is the following class of semimetrics, which was in fact intensively used in entropy theory. Every partition $\xi$ of a space $(X,\mu)$ into finitely or countably many measurable sets gives rise to a semimetric: $$\rho_{\xi}(x,y)=1-\delta_{\xi(x),\xi(y)},$$ where $\xi(z)$ is the element of $\xi$ that contains $z$ (so that $\rho_\xi(x,y)=0$ if $x$ and $y$ lie in the same element of $\xi$, and $\rho_\xi(x,y)=1$ otherwise). In this case, $X_\rho$ is a finite or countable metric space. Such (finite) semimetrics are called {\it cuts}, and their linear combinations are called {\it cut semimetrics} (in the terminology of \cite{D}). It is easy to see that cut (semi)metrics are admissible. 2.~The very important metric defined by the formula $\rho(x,y)={\rm const}$ for $x \ne y$ determines a discrete uncountable space. We will call it the constant metric. {\it The constant metric on a space with continuous measure is not admissible.} In this case, the $\sigma$-algebra generated by the open sets is trivial and does not separate points of the space, i.e., a point is not determined by the collection of distances to the other points.
3.~The condition defining an admissible metric can be strengthened by requiring, in condition~1), that $$\int_X\int_X \rho(x,y)^p d\mu(x)d\mu(y)<\infty$$ for $p>1$; in this case, we say that the metric is {\it $p$-admissible}. Let us say that $\infty$-admissible metrics (semimetrics) are {\it bounded}; this class of admissible metrics will be most useful in what follows. 4.~In combinatorial examples, it often suffices to consider metrics with which the space is compact or precompact (or, in the case of a semimetric, quasi-compact). For example, adic transformations \cite{V1,V2} act in the space of infinite paths of an $\Bbb N$-graded graph, which is a totally disconnected compact space. The set of admissible metrics on a Lebesgue space $(X,\mu)$ with continuous measure is a convex cone ${\cal R}(X,\mu)={\cal R}$ with respect to the operation of taking a linear combination of metrics with nonnegative coefficients. This cone is a canonical object (by the uniqueness of a Lebesgue space up to isomorphism) and plays a role similar to the role of the simplex of Borel probability measures in topological dynamics. In many cases, it suffices to consider admissible (semi)metrics that produce compact spaces (after completion), but we do not exclude the case of a noncompact space. One may consider different topologies on the cone ${\cal R}$; the most natural of them is the weak topology, in which an $\varepsilon$-neighborhood of a metric $\rho$ is the collection of metrics $$ \Bigl\{\theta: (\mu\times\mu)\{(x,y):|\rho(x,y)-\theta(x,y)|<\varepsilon\}>1-\varepsilon\Bigr\}. $$ The property of being an admissible metric is invariant under measure-preserving transformations: if a metric $\rho$ is admissible and a transformation $T$ of the space $(X,\mu)$ preserves the measure $\mu$, then the image $\rho^T$ of $\rho$, defined by the formula $\rho^T(\cdot,\cdot)=\rho(T\cdot,T\cdot)$, is also admissible. Thus the group of (classes of) measure-preserving transformations acts on the cone ${\cal R}$ in a natural way. \section{Average and maximal metrics} Let $T$ be a measure-preserving transformation (in what follows, it will be an automorphism). When considering automorphisms or groups of automorphisms in spaces with admissible semimetrics, it is natural to assume that there exists an invariant set of full measure on which the (semi)metric is admissible in the sense of our definition. In addition to the notion of admissible (semi)metrics, we define the class of {\it $T$-admissible metrics}. The $T$-admissibility condition must be invariant, in the sense that if $\cal M$ is the class of $T$-admissible metrics, then $$V{\cal M}\equiv \{\rho: \rho(x,y)=\rho_1(Vx,Vy),\;\rho_1 \in \cal M\}$$ is the class of $VTV^{-1}$-admissible metrics in the same sense. Among many possible versions, we choose the class of admissible (semi)metrics for which $T$ is a {\it Lipschitz transformation} almost everywhere: there exists a positive constant $C$ such that for $(\mu\times \mu)$-almost all pairs $(x,y)$, the condition $$\rho(Tx,Ty)\leq C\rho(x,y)$$ holds; let us say that metrics from this class are Lipschitz $T$-admissible (semi)metrics. The choice of an appropriate class depends on the problem under consideration and the properties of the automorphism. For example, in the case of adic automorphisms, it is most convenient to consider the class of Lipschitz metrics. This class can also be defined for countable groups of automorphisms.
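
To make the Lipschitz condition concrete, here is a minimal numerical sketch (in Python; the rotation angle, the sample size, and the helper names are illustrative choices of ours, not part of the text) for the simplest case: a circle rotation is an isometry of the arc-length metric $\rho(x,y)=\min(|x-y|,1-|x-y|)$ on ${\Bbb R}/{\Bbb Z}$, hence Lipschitz $T$-admissible with constant $C=1$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha = np.sqrt(2) - 1                 # an irrational rotation angle (illustrative)

def rho(x, y):
    """Arc-length metric on the circle R/Z."""
    d = np.abs(x - y) % 1.0
    return np.minimum(d, 1.0 - d)

def T(x):
    """Rotation x -> x + alpha (mod 1)."""
    return (x + alpha) % 1.0

x, y = rng.random(10_000), rng.random(10_000)
# The rotation is an isometry, hence Lipschitz with C = 1:
print(np.max(np.abs(rho(T(x), T(y)) - rho(x, y))))   # numerically ~ 0
\end{verbatim}
As noted in Section~7, the Pascal adic transformation satisfies a condition of the same kind almost everywhere for the natural $2$-adic metric, with some constant $C$.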
For an arbitrary admissible semimetric $\rho$, we have defined the partition $\psi_{\rho}$ of the space $(X,\mu)$. In the same way, given an arbitrary automorphism $T$ of the space $(X,\mu)$, we consider the $T$-invariant partition $\psi_{\rho}^T=\bigvee_{k=0}^{\infty} T^k \psi_{\rho}$. We say that the semimetric $\rho$ is {\it generating} for $T$ if $\psi_{\rho}^T$ is the partition into separate points $\bmod 0$ (which we denote by $\varepsilon $). If $\rho$ is the semimetric generated by a finite partition, then this is the ordinary definition of a generator (see \cite{Ro1}). Let us define the average metric and the $\sup$-metric for a given automorphism. \begin{definition} Let $T$ be an automorphism of a space $(X,\mu)$, and let $\rho$ be an admissible metric. The average metric $\hat\rho_n^T$ is defined by the formula $$\hat \rho_n^T(x,y)=\frac{1}{n}\sum_{k=0}^{n-1}\rho(T^kx,T^ky).$$ The $\sup$-metric is defined by the formula $$ {\bar \rho}_n^T(x,y)=\sup_{0\le k<n}\rho (T^k x,T^k y). $$ \end{definition} The following important result is a direct corollary of the pointwise ergodic theorem. \begin{theorem} For any automorphism $T$ and any admissible (semi)metric $\rho$, the limit of the sequence of average (semi)metrics $\rho^T_n$, which we denote by $\hat \rho$, exists almost everywhere in the space $(X\times X, \mu \times \mu)$: $$\hat \rho^T (x,y)=\lim_{n \to \infty}\frac{1}{n}\sum_{k=0}^{n-1}\rho(T^kx,T^ky);$$ $\hat \rho$ is a metric if and only if $\rho$ is a metric or a generating semimetric. \end{theorem} The existence of the limit follows from the fact that the integral $\int_X\int_X\rho(x,y)d\mu(x)d\mu(y)$ is finite and the ergodic theorem. \begin{definition} The metric $$ \hat \rho^T(x,y)=\lim_{n\to \infty} \frac{1}{n}\sum_{k=0}^{n-1}\rho(T^kx,T^ky) $$ (the limit exists $(\mu\times\mu)$-a.e.) is called the {\it average}, or the $l^1$-average of $\rho$ with respect to the automorphism $T$. The metric $$\bar \rho^T(x,y)=\sup_{k\geq 0} \rho(T^k x, T^k y)$$ is called the limiting $\sup$-metric of $\rho$ with respect to $T$. \end{definition} It is clear that $\hat \rho^T$ and $\bar \rho^T$ satisfy all conditions of the definition of a (semi)metric; in what follows, we will consider only admissible metrics or generating semimetrics $\rho$, so that $\hat \rho^T$, and $\bar \rho^T$ for an ergodic automorphism $T$, are metrics. The superscript $T$ in the notation for the average and $\sup$-metrics will be omitted if the automorphism is clear from the context. It is also obvious that $\hat \rho \leq \bar \rho$. However, the metrics $\hat \rho, \bar \rho$ may not be admissible even if the (semi)metric $\rho$ is admissible. In what follows, we will mainly consider the {\it average metric} $\hat \rho$. In most interesting cases, namely if $T$ is weakly mixing, i.e., its spectrum in the orthogonal complement to the space of constants is continuous, this metric is constant and hence not admissible; the $\sup$-metric may be constant even for automorphisms with discrete spectrum. However, in all cases we will be interested not in the limiting metrics themselves, but in the {\it asymptotic behavior of the averages $\hat\rho_n^T$ as $n\to \infty$}. As we will see, automorphisms with discrete spectrum stand apart, since for them the average metric is often admissible.
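
As a small illustration of the definition of $\hat\rho_n^T$ and of the ergodic-theorem limit above (and anticipating the circle-rotation example computed in closed form below), the following Python sketch evaluates the average metric of the cut semimetric $\rho(x,y)=|\chi_A(x)-\chi_A(y)|$, $A=[0,1/2)$, along an irrational rotation; the angle, the test points, and the helper names are our illustrative choices.
\begin{verbatim}
import numpy as np

alpha = (np.sqrt(5) - 1) / 2               # irrational rotation angle (illustrative)
in_A = lambda t: (np.asarray(t) % 1.0) < 0.5   # indicator of A = [0, 1/2)

def rho(x, y):
    """Cut semimetric |chi_A(x) - chi_A(y)| of the two-set partition {A, X \ A}."""
    return np.abs(in_A(x).astype(float) - in_A(y).astype(float))

def rho_avg(x, y, n):
    """Average metric (1/n) sum_{k<n} rho(T^k x, T^k y) for T: t -> t + alpha (mod 1)."""
    k = np.arange(n)
    return rho(x + k * alpha, y + k * alpha).mean()

x, y = 0.1, 0.3
for n in (10, 1_000, 100_000):
    print(n, rho_avg(x, y, n))
# The averages approach m[A Delta (A + x - y)] = 2 * min(|x - y|, 1 - |x - y|) = 0.4,
# the invariant metric computed in closed form in the example below.
\end{verbatim}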
It is clear that $\hat \rho$ is nothing else but the projection of the function $\rho$, regarded as an element of the space $L^1(X\times X,\mu \times\mu)$, to the subspace of $(T\times T)$-invariant functions, i.e., the expectation of the metric $\rho$ with respect to the subspace of invariant functions on $X\times X$. This space consists of constants if and only if $T$ has no nontrivial eigenfunctions (in other words, $T$ is weakly mixing). In this case, $\hat \rho$ is almost everywhere a constant, which is equal to the average $\rho$-distance between the points of the space $X$. At the same time, if $T$ is not weakly mixing, then the spectrum of $T$ contains a discrete component and $\hat \rho$ may be a nonconstant {\it $T$-invariant (semi)metric}. In this case, one may obtain bounds on $\hat \rho$ using Fourier analysis. Definitions and Lemma~1 imply the following lemma. \begin{lemma} Let $\rho$ be an admissible Lipschitz metric for an automorphism $T$; then the average metric $\hat \rho$ is also admissible and Lipschitz for $T$. \end{lemma} We do not prove this assertion, because we do not need it in this paper. Here is an example of computing the average metric generated by the cut semimetric in the case of a rotation of the circle. \textbf{Example}. Let $X={\Bbb T}^1={\Bbb R}/\Bbb Z$, and let $\lambda\in X$ be an irrational number. Consider the semimetric $\rho(x,y)=|\chi_A(x)-\chi_A(y)|$, where $\chi_A$ is the indicator of a measurable set $A\subset X$; the semimetric $\rho(x,y)$ is $T$-admissible in the sense of our definition for the shift $T_{\lambda}$ by any irrational number $\lambda$. The corresponding average metric is shift-invariant and looks as follows: $\hat\rho(x,y)=m[(A+x)\Delta (A+y)]=m[A\Delta (A+x-y)]$; it is obviously admissible. \section{Entropy and scaling entropy} \subsection{The $\varepsilon$-entropy of a measure in a metric space} Recall that the $\varepsilon$-entropy of a compact metric space $(X,\rho)$ is the function $\varepsilon\mapsto H_{\rho}(\varepsilon)$ whose value is equal to the minimum number of points in an $\varepsilon$-net of $(X,\rho)$. \begin{definition} The $\varepsilon$-entropy of a measure space $(X,\mu)$ with an admissible metric $\rho$ is the function $$\varepsilon\mapsto H_{\rho,\varepsilon}(\mu)=\inf \{H(\nu): k_{\rho}(\nu,\mu)<\varepsilon\},$$ where $\nu$ ranges over the set of all discrete measures with finite entropy and $k_{\rho}(\cdot,\cdot)$ is the Kantorovich metric on the space of Borel probability measures on $(X,\rho)$. The entropy of a discrete measure $\nu=\sum_i c_i \delta_{x_i}$ is defined in the usual way: $H(\nu)=-\sum_i c_i\ln c_i$. \end{definition} For our purposes, it is more convenient to use in the above definition another characteristic instead of $H_{\rho,\varepsilon}(\mu)$, namely, $$ H'_{\rho,\varepsilon}(\mu)= \min\Bigl\{\ln k: \exists X', \mu(X')>1-\varepsilon,\; \exists \{x_i\}_1^k: X'\subset \bigcup_{i=1}^k V_{\varepsilon}(x_i)\Bigr\}, $$ where $V_{\varepsilon}(x)$ is the $\varepsilon$-ball centered at $x$; thus $\{x_1,x_2,\dots, x_k\}$ is an $\varepsilon$-net in $X'$. The finiteness of $H'$ follows from the fact that a Borel probability measure in a Polish space is concentrated, up to $\varepsilon$, on a compact set; $H'$ is more convenient for computations than $H$. In this paper, we use the following simple inequality.
\begin{lemma} For every compact metric space $(X,\rho)$ and every nondegenerate Borel measure $\mu$, the following inequality holds: $$H_{\rho,(d+1)\varepsilon}(\mu)\leq H'_{\rho,\varepsilon}(\mu),$$ where $d$ is the diameter of the space. \end{lemma} \begin{proof} Assume that the diameter of the compact space does not exceed $1$. Assume that the measure of a set $X'$ is greater than $1-\varepsilon$ and $X'\subset \bigcup_{i=1}^k V_{\varepsilon}(x_i)$. Thus the points $x_1,\dots, x_k$ form an $\varepsilon$-net in $X'$. Consider the discrete measure $\nu$ supported by the points $x_1,\dots, x_k$ with charges $\nu(x_i)$ equal to the measures $\mu(V_{\varepsilon}(x_i))$ of the corresponding balls (if two balls have a nonempty intersection, then we distribute the measure of the intersection proportionally between their centers). Choose an arbitrary point $x_{\infty}$ and set its charge equal to $1-\mu(X')$. Now consider the Monge--Kantorovich transportation problem with input measure $\mu$ and output measure $\nu$. It is easy to see that we have in fact determined an admissible plan $\Psi$ for this problem: the transportation from a point $x\in X'$ goes to the points $x_i$ for which $x\in V_{\varepsilon}(x_i)$, and the remaining part of the measure $\mu$ on the set $X\setminus X'$ goes to the point $x_{\infty}$. It is easy to compute the cost of this plan; this gives a bound on the Kantorovich distance between the measures $\nu$ and $\mu$: $$k_{\rho}(\nu,\mu)\leq\varepsilon(1-\varepsilon)+\varepsilon<2\varepsilon.$$ On the other hand, we have $H(\nu)\leq \ln k =H'_{\rho,\varepsilon}(\mu)$. \end{proof} \subsection{Scaling sequence and scaling entropy} Let us define the notion of a scaling sequence for the entropy of an automorphism. If an automorphism $T$ is fixed, we omit the superscript in the notation for the average metric and write simply $\hat\rho_n$. \begin{definition} Let $T$ be an automorphism of a Lebesgue space $(X,\mu)$ with a $T$-invariant measure $\mu$. By definition, the class of scaling sequences for the automorphism $T$ and a given (semi)metric $\rho$ on $X$ is the class, denoted by ${\cal H}_{\rho,\varepsilon}(T)$, of increasing sequences of positive numbers $\{c_n, n\in {\Bbb N}\}$ defined by $${\cal H}_{\rho,\varepsilon}(T)= \Bigl\{\{c_n\}: 0<\liminf_{n\to \infty}\frac{H_{\hat\rho_n,\varepsilon}(\mu)}{c_n}\leq \limsup_{n\to \infty}\frac{H_{\hat \rho_n,\varepsilon}(\mu)}{c_n} <\infty\Bigr\}.$$ \end{definition} In many cases, the class of scaling sequences for a given metric $\rho$ does not depend on $\varepsilon$ for all sufficiently small $\varepsilon$. In this case, it is obvious that all sequences $\{c_n\}$ from ${\cal H}_{\rho}(T)$ are equivalent. \begin{definition} Assume that for a given ergodic automorphism $T$ of a space $(X,\mu)$ there exists a (semi)metric $\rho_0$ such that the class of scaling sequences for $\rho_0$ is the maximal one (i.e., for any other (semi)metric, sequences $\{c_n\}$ from the corresponding class grow no faster than for $\rho_0$). In symbols, we write this fact as $${\cal H}_{\rho_{0}}(T)=\sup_{\rho}{\cal H}_{\rho}(T).$$ Then we say that ${\cal H}_{\rho_{0}}(T)$ is the class of scaling sequences for the automorphism $T$, and that the metric $\rho_0$ is $T$-maximal. \end{definition} It seems that such a metric exists for every automorphism. If we have chosen some $T$-maximal scaling sequence and the corresponding limit of entropies does exist, then it is called the {\it scaling entropy}.
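
The quantities entering these definitions can be estimated empirically. Here is a minimal computational sketch (in Python; the greedy covering, the sample sizes, and the helper names are our illustrative choices, and the routine only gives an upper bound on $H'_{\rho,\varepsilon}(\mu)$ with ball centers taken from the sample) for the average metrics $\hat\rho_n$ of a circle rotation with a cut semimetric.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
alpha = (np.sqrt(5) - 1) / 2               # irrational rotation angle (illustrative)

def eps_entropy(D, eps):
    """Greedy upper bound on H'_{rho,eps}(mu): ln of the number of eps-balls
    (centered at sample points) needed to cover all but an eps-fraction of
    the empirical measure, given the pairwise distance matrix D."""
    uncovered = np.ones(D.shape[0], dtype=bool)
    k = 0
    while uncovered.mean() > eps:
        gains = ((D < eps) & uncovered).sum(axis=1)  # newly covered points per center
        best = int(gains.argmax())
        uncovered &= D[best] >= eps
        k += 1
    return np.log(k)

pts = rng.random(200)                      # sample of the uniform (Haar) measure on the circle
for n in (1, 10, 100, 1000):
    orbit = (pts[:, None] + np.arange(n) * alpha) % 1.0
    B = (orbit < 0.5).astype(float)        # chi_A(T^k x) for A = [0, 1/2)
    D = (B @ (1.0 - B).T + (1.0 - B) @ B.T) / n   # empirical average metric rho_hat_n
    print(n, eps_entropy(D, eps=0.1))
# The printed values stay bounded as n grows: the rotation has purely point spectrum,
# in line with the criterion proved in Section 6.
\end{verbatim}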
\begin{conjecture} For every ergodic automorphism $T$, a generic $T$-admissible Lipschitz metric is $T$-maximal. In particular, for a $K$-automorphism (i.e., an automorphism with completely positive entropy), the scaling sequence is equivalent to the sequence $c_n=h(T)n$, where $h(T)$ is the entropy of $T$, for every Lipschitz metric. \end{conjecture} A preparatory result in this direction was obtained in \cite{Ke}. In this paper, we will prove that for an automorphism with purely discrete spectrum and a $T$-admissible metric, the class ${\cal H}_{\rho}(T)$ of scaling sequences is the class of bounded sequences. \section{Invariant metrics on groups and averages of admissible metrics} \subsection{Invariant metrics and discrete spectrum} Let us recall some known facts about ergodic automorphisms with discrete spectrum. It obviously follows from the character theory of commutative groups that the spectrum of a translation on a compact Abelian group is discrete. By the classical von Neumann theorem, the converse is also true: an ergodic automorphism with discrete spectrum is metrically isomorphic to the translation $T$ on a compact Abelian group $G$ endowed with the Haar measure $m$ by an element whose powers form a dense subgroup: $$x\mapsto Tx=x+g,\quad \operatorname{Cl}\{ng,\, n\in \Bbb Z\}=G$$ (we use the additive notation). Note that on a compact group $G$ there are many metrics that are invariant under the whole group of translations and determine the standard group topology. We will need the following assertion (which is, possibly, partially known). \begin{statement} The spectrum of an ergodic automorphism $T$ of a measure space $(X,\mu)$ for which there exists a $T$-invariant admissible semimetric $\rho$ contains a discrete component. Moreover, if $\rho$ is a metric, then the spectrum of $T$ is discrete and, consequently, $T$ is isomorphic to a translation on a compact Abelian group. \end{statement} \begin{proof} Since an admissible (semi)metric lies in the space $L^1(X\times X, \mu \times \mu)$, the tensor square of the operator $U_T$, which corresponds to the automorphism $T\times T$, has nonconstant eigenfunctions. This can happen only if the spectrum of the unitary operator $U_T$ contains a discrete component, and the first claim is proved. In other words, $T$ is an extension of some quotient automorphism with discrete spectrum, which may coincide with $T$ itself. This means that $T$ is a skew product over a base with discrete spectrum. Denote by $H \subset L^2(X,\mu)$ the subspace spanned by all eigenfunctions of $U_T$. All invariant functions of the operator $U_T\otimes U_T$ belong to the tensor square $H\otimes H$; hence these functions, regarded as functions of two variables, do not change when the argument ranges over a fiber of the skew product. But if these fibers are not single point sets, it follows that the metric does not distinguish points in fibers and hence is a semimetric. Thus if $\rho$ is a metric, then each fiber necessarily consists of a single point, the spectrum of $T$ is purely discrete, and, by the ergodicity, $T$ is isomorphic to a translation on a group. \end{proof} Let us supplement this proof with an important refinement. Assume that the automorphism $T$ has an orbit that is everywhere dense with respect to the metric $\rho$, i.e., $T$ is topologically transitive (though we do not assume that it is a priori continuous). It is clear from above that this condition follows in fact from the existence of an invariant metric. 
Since the metric is admissible, we may assume without loss of generality that $(X,\rho)$ is a Polish space. Consider the dense orbit $O=\{T^nx,\, n \in \Bbb Z\}$ of some point $x$. The restriction of $\rho$ to $O$ is a translation-invariant metric on the group $\Bbb Z$, and all translations are isometries. Hence the completion of $O$ is an Abelian group to which we can extend the translations and their limits. Therefore $X$ is a Polish monothetic group.\footnote{A topological group that contains a dense infinite cyclic subgroup is called monothetic. Note that there are many non-locally compact monothetic groups on which there is an invariant metric, but there is no invariant measure. A recent example is the Urysohn universal space regarded as a commutative group, see \cite{CV}.} Obviously, the measure $\mu$ is invariant under the action of the closure of the group of powers of $T$, i.e., it is an invariant probability measure on the whole group. Hence, by Weil's theorem, $X$ is a compact commutative group and $T$ is the translation by an element whose powers are everywhere dense. \subsection{Admissible invariant metrics} As we have already observed, for weakly mixing automorphisms, the average of every metric is constant, since there are no nonconstant invariant metrics. However, for automorphisms whose spectra contain a discrete component or are purely discrete, there are many invariant (semi)metrics. Hence, in order to study such automorphisms, we should investigate the question of when the average metric for an automorphism $T$ with a discrete spectrum is admissible. Given a translation-invariant metric $\rho$ on a compact commutative group with the Haar measure, consider the function $\phi_{\rho}(r)=\rho(x,x+r)=\rho(0,r)$. In the example from Section~3, it had the form $\phi(r)\equiv\hat\rho(x,x+r)= m[A\Delta (A+r)]$. One can easily write down necessary and sufficient algebraic conditions on a measurable function $\phi$ that guarantee that it can be written as $\phi_{\rho}$ for a measurable invariant metric: $$\phi(\textbf{0})=0,\;\phi(x)\geq 0, \;\phi(-x)=\phi(x),\; \phi(x)+\phi(y)\geq \phi(x+y).$$ We will not need these conditions; the only important fact is that the admissibility condition from Lemma~1 can easily be reformulated in terms of this function. \begin{theorem} An invariant measurable (semi)metric on a commutative compact group (satisfying Condition~1 from Definition~1) is admissible if and only if any of the following conditions holds. {\rm1.} The corresponding function $\phi_{\rho}$ is measurable, and $$\mu \{z: \phi_{\rho}(z)\ne \phi_{\rho}(g+z)\}>0$$ for almost all $g$. {\rm2.} $$\operatorname{ess\,inf}_{g\in V\setminus 0}\phi_{\rho}(g)= 0,$$ where $V$ is an arbitrary neighborhood of the zero of the group in the standard topology. \end{theorem} \begin{proof} Condition~1 is exactly equivalent to the condition of Lemma~1. It is useful to give a direct proof that this condition is necessary. Assume that it is not satisfied, i.e., for all elements $g$ from some set of positive Haar measure, $\phi_{\rho}(z)=\phi_{\rho}(g+z)$ for almost all $z$. Therefore, the measurable function $\phi$ is constant on cosets of the subgroup generated by $g$. However, every set of positive Haar measure in a nondiscrete Abelian compact group on which there is an ergodic translation contains an element $g$ that generates a dense cyclic subgroup. Then, since the function $\phi_{\rho}$ is measurable, it follows that it is constant almost everywhere.
But if the function $\phi_{\rho}$ is constant, then the metric $\rho$ is also constant and hence not admissible. Let us prove that Condition~2 of the theorem is necessary. Consider the function $\phi_{\rho}$ for an invariant admissible metric $\rho$. Assume that $\theta=\operatorname{ess\,inf}$ from Condition~2 is positive; then, using the invariance of $\rho$, we can construct a continuum of points lying at a fixed positive $\rho$-distance greater than $\theta$, which contradicts the admissibility. For the same reason, the set of values of $\phi_{\rho}$ is dense in some neighborhood of the zero. The fact that Condition~2 is sufficient follows from Proposition~1. \end{proof} The last assertion implies the following corollary. \begin{corollary} For every admissible invariant metric there exists a sequence of group elements that converges to zero both in the standard topology and with respect to the (semi)metric. \end{corollary} \subsection{Admissibility of the average metric} Now we can explicitly write down the condition that guarantees the admissibility of the average, i.e., invariant, metric in terms of the original metric. Consider an arbitrary measurable (not necessarily admissible) metric $\rho$ on a compact commutative group $G$ and an ergodic translation $T$ by an element $g$. Let us write down an expression for the average metric: \begin{eqnarray*} {\hat \rho}^T(x,y)&=&\lim_{n\to \infty} \frac{1}{n}\sum_{k=0}^{n-1} \rho (x+kg,y+kg)=\int_G \rho(g+x,g+y)dm(g)\\ &=&\int_G \rho(z,z+y-x)dm(z) \end{eqnarray*} (we have used the fact that the measure $m$ is invariant). Hence the function $\phi(r)$, regarded as a function of $r$, is measurable and has the form $$ \phi(r)=\int_G \rho(z,z+r)dm(z)=\hat \rho(x,y);\quad y-x=r.$$ Obviously, $\hat \rho$ is a measurable function on the group $G\times G$. Now we can check the admissibility condition for the average metric $\hat \rho$. \begin{definition} We say that an admissible metric $\rho$ on a compact commutative group $G$ with Haar measure $m$ is semicontinuous at zero in mean if $$\liminf_{r\to 0}\int_G \rho(x,x+r)dm(x)=0;$$ we say that it is semicontinuous at zero in measure if the following condition holds (in which ``$\mathrm{meas}$'' means convergence in measure): $$\liminf_{r\to 0}(\mathrm{meas})\, \rho(x,x+r)=0.$$ \end{definition} Note that the second condition follows from the first one, and that both conditions are stated in purely group terms, i.e., do not depend on the particular translation $T$. Thus we have the following admissibility criterion for the average metric. \begin{statement} The average metric $\hat\rho^T$ for an ergodic translation $T$ on a compact commutative group $G$ is admissible if and only if the original (semi)metric $\rho$ is admissible and semicontinuous at zero in mean. \end{statement} \begin{proof} The ``if'' part is proved above; the ``only if'' part follows from the equality $\phi_{\hat\rho}(r)=\int_G \rho(x,x+r)dm(x)$ and the previous proposition. \end{proof} Now we are ready to formulate and prove the following important fact. \begin{theorem} For every bounded admissible metric on a compact commutative group $G$, the average metric is admissible. \end{theorem} \begin{proof} Assume that the metric $\rho$ is not semicontinuous at zero in mean, i.e., there exists a positive number $c>0$ such that $$\liminf_{r\to 0}\int_G \rho(x,x+r)dm(x)> c.$$ Assume that the metric is normalized so that the diameter of the space $X$ is equal to 1.
Then it follows from our assumption that for sufficiently small (i.e., belonging to a small neighborhood of the zero in the group $G$) $r$ there exists a subset of $X$ (depending on $r$) of measure $\alpha$, where $\alpha>\frac{c}{2}$ does not depend on $r$, on which $\rho(x,x+r)>\frac{c}{2}$. Indeed, $1 \cdot \alpha +(1-\alpha)\frac{c}{2}\ge\int_G \rho(x,x+r)dm(x)> c$. But then, since the group is compact, there is a set of positive measure of points $x$ for which the inequality $\rho(x,x+r)>\frac{c}{2}$ holds for all sufficiently small $r$ from some set of positive measure. This in turn contradicts the admissibility of the metric $\rho$ in the formulation of Proposition~1: arbitrarily small values $\phi(r)$ for small $r$ cannot interlace with values greater than $\frac{c}{2}$ by the triangle inequality. Thus we have proved that $\rho$ is semicontinuous at zero in mean, and the previous proposition shows that the average metric $\hat\rho$ is admissible. \end{proof} \noindent{\bf Question.} Does there exist an unbounded admissible metric $\rho$ on the circle $S^1={\Bbb R}/{\Bbb Z}$ for which the average metric $\hat \rho$ is not admissible? Does there exist an unbounded $p$-admissible metric, with $1<p<\infty$, on a compact Abelian group for which the average metric is not admissible?\footnote{While the paper was in press, F.~Petrov and P.~Zatitskiy proved that the average of any (in particular, unbounded) admissible metric on the circle is admissible. Thus the additional assumptions that the average metric is admissible in Proposition~2 and Theorem~5 are superfluous.} \section{Criterion for the discreteness and continuity of the spectrum in terms of the scaling entropy} Now we formulate our main result. \begin{theorem} For an ergodic automorphism with discrete spectrum realized as a translation on a compact commutative group with an arbitrary bounded admissible metric, or, more generally, with a metric for which the average metric is admissible, the scaling sequence is bounded. \end{theorem} \begin{proof} Since the set $B_{\varepsilon}=\phi^{-1}([0,\varepsilon])$ is, by definition, the ball of radius $\varepsilon$ centered at $\textbf{0}\in G $ in the metric $\hat \rho$, which is admissible by assumption, it follows that $m B_{\varepsilon}>0$, since the metric is nondegenerate (see the definition of an admissible metric). But the sum $B_{\varepsilon}+B_{\varepsilon}$, like the sum $A+A$ for every symmetric set $A$ of positive Haar measure in a locally compact group (note that $B_{\varepsilon}$ is symmetric since $\phi$ is even), contains a neighborhood $V$ of the zero in the standard topology (see, e.g., \cite{We}). It follows from the triangle inequality that $B_{\varepsilon}+B_{\varepsilon}\subset B_{2\varepsilon}$, so that $V \subset B_{2\varepsilon}$. Since $\varepsilon>0$ is arbitrary, we see that {\it the topology on $G$ determined by the average metric $\hat\rho$ coincides with the standard topology}, i.e., in the topology determined by $\hat \rho$, the group $G$ is compact and contains a finite $\varepsilon$-net for every $\varepsilon$. The pointwise a.e. convergence $$\lim_{n \to \infty} \frac{1}{n}\sum_{k=0}^{n-1} \rho(x+kg,y+kg)={\hat \rho}^T(x,y),$$ which follows from the pointwise ergodic theorem, implies that the number of points in an $\varepsilon$-net for $(G,\hat\rho_n)$ tends to the number of points in an $\varepsilon$-net for $(G,\hat \rho)$. This means that the sequence $H_{\hat\rho_n,\varepsilon}(m)$ converges to $H_{\hat \rho,\varepsilon}(m)$.
From the inequality of Lemma~3 we see that the sequence $H_{\hat\rho_n,\varepsilon}(m)$ is bounded as $n \to \infty$, and thus the scaling sequence for the automorphism $T$, which acts on the metric triple $(G,\rho, m)$, is bounded. \end{proof} Combining this theorem with the previous one, we obtain the following result. \begin{theorem} An ergodic automorphism $T$ has a discrete spectrum if and only if the scaling sequence for $T$ is bounded for some, and hence for every, bounded admissible metric. \end{theorem} \begin{proof} Above we have proved that if an automorphism has a discrete spectrum and the average metric is admissible, then the scaling sequence is bounded. But the average metric is always admissible provided that the original metric is bounded and admissible. Assume that the scaling sequence is bounded for an automorphism $T$ and an admissible metric $\rho$. Recall that the average metric is indeed a metric (and not a semimetric). Consequently, the space $(X,\hat \rho)$ is precompact, and hence the metric $\hat \rho$ is admissible. Since it is $T$-invariant, it follows from Theorem~1 that $T$ has a purely discrete spectrum. \end{proof} Combining the last theorems with the previous results yields a criterion for the discreteness and continuity of the spectrum in terms of the automorphism $T$ and an arbitrary admissible metric. \begin{theorem} Let $T$ be an ergodic automorphism, and let $\rho$ be a bounded admissible (semi)metric. If the corresponding scaling sequence is unbounded, then the spectrum of $T$ contains a continuous component. If the scaling sequence is unbounded for every admissible (semi)metric, then the spectrum of $T$ is purely continuous. \end{theorem} In the next section, we will show how one could apply this criterion. \section{Comparison with the traditional approach, the Pascal automorphism, and concluding remarks} \subsection{Supremum metrics} The entropy theory of dynamical systems, developed mainly by Kolmogorov, Sinai, and Rokhlin, essentially uses the tools of the theory of measurable partitions. In Sinai's definition, the entropy appears as an asymptotic invariant of the dynamics of finite partitions under the automorphism: $$\lim_n\frac{H(\bigvee_{k=0}^{n-1} T^k\xi)}n=h(T,\xi).$$ As a result of this theory, the study of the class of automorphisms with completely positive entropy developed into a separate field, whose methods do not apply to automorphisms with zero entropy. For example, one cannot obtain a new invariant for such automorphisms following the same scheme. This can be seen from the following simple fact. \begin{statement} For every transformation $T$ and every increasing sequence of positive numbers $\{c_n,\, n\in \Bbb N\}$ satisfying the condition $\lim_n\frac{c_n}{n}=0$, there exists a generating partition $\xi$ such that $$\lim_n\frac{H(\bigvee_{k=1}^{n} T^k\xi)}{c_n}=\infty.$$ \end{statement} This means that the maximal growth of the entropy $H(\bigvee_{k=1}^{n} T^k\xi)$ over generating partitions either is linear (for automorphisms with positive entropy) or, even when it is sublinear, is arbitrarily close to linear for every automorphism. Thus we obtain no new information. The metric corresponding to the supremum (product) $\xi_n=\bigvee_{k=1}^n T^k\xi$ of partitions is the supremum of the shifted metrics: ${\bar \rho}_n^T(x,y)=\sup_{0\le k<n}\rho (T^k x,T^k y)$. Hence, following our plan, we can use the $\varepsilon$-entropy of the metric ${\bar \rho}_n^T(x,y)$ instead of the entropy of the partition $\xi_n$ itself.
Then, using the definitions from Section~4, for a given metric $\rho$ we can introduce an analog of the function ${\cal H}_{\rho,\varepsilon}(T)$ with the metric $\hat\rho_n$ replaced by $\bar \rho_n$: $${ \cal \bar H}_{\rho,\varepsilon}(T)= \Bigl\{\{c_n\}: 0<\liminf_{n\to \infty}\frac{H_{\bar\rho_n,\varepsilon}(\mu)}{c_n}\leq \limsup_{n\to \infty}\frac{H_{\bar \rho_n,\varepsilon}(\mu)}{c_n} <\infty\Bigr\}.$$ In this way we define the class of {\it $\sup$-scaling sequences} $\bar c_n$ for a given metric $\rho$. This also allows us to extend the classical entropy theory following the above scheme. Though it is somewhat easier to deal with the sup-metric than with the average metric, the former is much less useful than the latter. The metric $\bar \rho$ more often happens to be constant for an automorphism with discrete spectrum, while, as we have seen, $\hat \rho$ is always admissible if the original metric is bounded. Let us illustrate the important difference between the operations of taking the average and supremum metrics by the following example. \noindent \textbf{Example.} Let $T$ be an irrational rotation of the unit circle, and let $\rho$ be the semi-metric corresponding to a generating two-block partition (i.e., a partition into two sets of positive measure); the semi-metric $\rho$ is $T$-admissible, hence, as we have seen, $\hat \rho$ is an invariant admissible metric. At the same time, $\bar \rho$ is the constant metric. This means that the scaling sequence $c_n$ is bounded, but $\bar c_n$ (the scaling sequence for the sup-metric) is not; namely, we have $\bar c_n\sim \ln n$. Thus the difference manifests itself even in the case of a discrete spectrum. Does it make sense to use intermediate averages, e.g., the $l^p$-averages $$\lim_{n\to\infty}\Bigl[\frac{1}{n}\sum_{k=0}^{n-1}\rho(T^k x,T^k y)^p\Bigr]^{\frac{1}{p}}=\hat \rho^p(x,y)$$ for $p\in (1,\infty)$, instead of $l^1$? Apparently, they do not lead to any new effects: these metrics behave in the same way as the $l^1$-average metric. For instance, in the above example, the $p$-average of the metric determined by a two-block partition of the unit circle is $\hat \rho^p(x,y)=\{m[A\Delta (A+x-y)]\}^{\frac{1}{p}}$. For $p=\infty$ (i.e., the sup-metric), the picture is completely different, as in other interpolation theories. Thus in entropy theory, the use of average metrics substantially supplements the classical considerations. \subsection{Application of the discreteness criterion} As noted above, the problem of determining whether or not the spectrum of an automorphism is discrete is not at all simple. Theorems~5--7 provide convenient non-spectral criteria for checking that the spectrum is not purely discrete; for this, one should bound the entropy from below, for one admissible metric satisfying the conditions of Section~2, by some sequence that tends to infinity with $n$, however slowly. One of the intriguing examples of automorphisms for which the discreteness of the spectrum has been neither proved nor disproved since the 1980s is the Pascal automorphism. It was introduced by the author in 1980 (see \cite{V1,V2}) as an example of an {\it adic transformation}, and is defined as a natural transformation in the space of paths in the Pascal graph regarded as a Bratteli--Vershik diagram with lexicographic ordering of paths. One can give a short combinatorial description of this transformation by encoding these paths with sequences of zeros and ones and identifying the space of paths with the compact space $X=\{0;1\}^{\infty}=\textbf{Z}_2$.
Then the Pascal automorphism is defined by the formula $$T(\{1^i0^j1**\})=\{0^{j-1}1^{i+1}0**\};$$ here $i\geq 0$, $j>0$, and the domain of $T$ and $T^{-1}$ is the whole $X$ except for the countable set of sequences having finitely many zeros or ones. The most natural metric on $X$ is the 2-adic metric $\rho(\{x_k\},\{y_k\})=2^{-n}$, where $n$ is the first index $k$ with $x_k\ne y_k$. This metric is admissible, and the Pascal automorphism satisfies the Lipschitz condition almost everywhere. The orbits of this automorphism coincide with the orbits of the action of the infinite symmetric group. The Bernoulli measures are $T$-invariant. The spectrum of the Pascal transformation was studied in the papers \cite{Pe1,Pe2,Me,Rue}, where some interesting properties were established (e.g., it was proved that $T$ is loosely Bernoulli, the complexity of $T$ was computed, etc.), but the question about the type of the spectrum remains open.\footnote{{\it Note on the translation:} The answer is now known --- the spectrum of the Pascal automorphism is continuous.} In \cite{V3,V4} it was conjectured that the study of the behavior of scaling sequences may turn out to be useful. The corresponding plan was carried out in \cite{MM}, but in that paper a logarithmic lower bound was obtained on the scaling sequence for the sup-metric, and not for the average metric; this is not sufficient for the conclusion that the spectrum is not discrete. Nevertheless, one may hope that the combinatorics developed in \cite{MM} will help to prove that the scaling sequence is unbounded also for the $\varepsilon$-entropy of the average metric, which, by our theorem, would imply that the spectrum is not discrete. There are many adic transformations similar to the Pascal automorphism for which the same question is also of great interest. For example, if we replace the Pascal graph with its multidimensional analog or the Young graph, we will obtain automorphisms that supposedly have continuous spectra. As observed above, in order to prove that there are no nontrivial eigenfunctions, one should obtain a growing lower bound on the scaling sequence not for one, but for all (or for some representative set of) bounded admissible (semi)metrics. \subsection{The dynamics of metrics} Recall that the general approach that consists in studying the asymptotic behavior of metrics is not exhausted by considering the asymptotics of the $\varepsilon$-entropy of the average or supremum metric, i.e., does not reduce to studying the growth of scaling sequences; this is only its simplest version. In fact, we consider the original measure space $(X,{\frak A},\mu)$ with an action of an automorphism $T$ (or a group of automorphisms $G$), fix an appropriate metric $\rho$, and study the sequence of metric triples $$(X,\rho^T_n,\mu), \quad\mbox{where } \rho_n^T(x,y)=\frac{1}{n}\sum_{k=0}^{n-1}\rho(T^k x,T^k y). $$ The conjecture is that, for a fixed measure and a fixed automorphism (or group of automorphisms), the asymptotic properties of this sequence of metric triples do not depend (or weakly depend) on the choice of an individual admissible metric from a wide class. These properties include not only the scaling entropy, but also more complicated characteristics of the sequence, say the mutual properties of several consecutive metric triples. Since the classification of metric triples up to measure-preserving isometry is known (see \cite{Gr,VU}), one may hope to apply it to this problem. In this field there are many traditional and nontraditional questions.
For example, what is the distribution of the fluctuations of the sequence of average metrics, regarded as functions of two variables on $(X\times X,\mu\times \mu)$, as they converge to the constant metric (for weakly mixing transformations, e.g., $K$-automorphisms)? What can be said about the asymptotic properties of neighboring pairs of metric triples (with indices $n$ and $n+1$)? Etc. In conclusion, it is worth mentioning that the concept of scaling entropy appeared in connection with the classification of filtrations in \cite{V6} and was used in \cite{VG}. In terms of the present paper, the scaling entropy for filtrations, i.e., decreasing sequences of measurable partitions or $\sigma$-algebras, is the scaling entropy for an action of a locally finite group such as $\sum {\Bbb Z}/2$ instead of an action of $\Bbb Z$ considered here. The definitions we have given for an action of $\Bbb Z$ essentially coincide with those given in \cite{V6} for locally finite groups. \end{document}
\begin{document} \begin{frontmatter} \begin{fmbox} \dochead{Research} \title{CLeaR: An Adaptive Continual Learning Framework for Regression Tasks} \author[ addressref={aff1}, corref={aff1}, email={[email protected]} ]{\inits{YH}\fnm{Yujiang} \snm{He}} \author[ addressref={aff1}, email={[email protected]} ]{\inits{BS}\fnm{Bernhard} \snm{Sick}} \address[id=aff1]{ \orgdiv{Intelligent Embedded Systems (IES) Group}, \orgname{University of Kassel}, \street{Wilhelmsh\"{o}her Allee 71 - 73}, \city{Kassel}, \cny{Germany} } \begin{abstractbox} \begin{abstract} Catastrophic forgetting means that a trained neural network model gradually forgets the previously learned tasks when it is retrained on new tasks. Overcoming this forgetting problem is a major challenge in machine learning. Numerous continual learning algorithms are very successful in incremental learning of classification tasks, where new samples with their labels appear frequently. However, as far as we know, there is currently no research that addresses the catastrophic forgetting problem in regression tasks. This problem has emerged as one of the primary constraints in some applications, such as renewable energy forecasts. This article clarifies problem-related definitions and proposes a new methodological framework that can forecast targets and update itself by means of continual learning. The framework consists of forecasting neural networks and buffers, which store newly collected data from a non-stationary data stream in an application. The changed probability distribution of the data stream, once the framework has identified it, will be learned sequentially. The framework is called CLeaR (\textbf{C}ontinual \textbf{Lea}rning for \textbf{R}egression Tasks), where components can be flexibly customized for a specific application scenario. We design two sets of experiments to evaluate the CLeaR framework concerning fitting error (training), prediction error (test), and forgetting ratio. The first one is based on an artificial time series to explore how hyperparameters affect the CLeaR framework. The second one is designed with data collected from European wind farms to evaluate the CLeaR framework's performance in a real-world application. The experimental results demonstrate that the CLeaR framework can continually acquire knowledge in the data stream and improve the prediction accuracy. The article concludes with further research issues arising from requirements to extend the framework. \end{abstract} \begin{keyword} \kwd{Continual learning} \kwd{Renewable energy forecasts} \kwd{Regression tasks} \kwd{Deep neural networks} \end{keyword} \end{abstractbox} \end{fmbox} \end{frontmatter} \input{sections/Introduction.tex} \input{sections/Forecasts.tex} \input{sections/CLeaR.tex} \input{sections/Experiment.tex} \input{sections/Conclusion.tex} \begin{backmatter} \section*{Funding} This work was supported within the C/sells RegioFlexMarkt Nordhessen (03SIN119) project and the Digital-Twin-Solar (03EI6024E) project, funded by BMWi: Deutsches Bundesministerium f\"{u}r Wirtschaft und Energie/German Federal Ministry for Economic Affairs and Energy.
\section*{Abbreviations} CL: Continual Learning; MSE: Mean Square Error; EWC: Elastic Weight Consolidation; SI: Synaptic Intelligence; LWF: Learning Without Forgetting; Online-EWC: Online Elastic Weight Consolidation; PCA: Principal Component Analysis; AE: Autoencoder; WF: Wind farm; \section*{Availability of data and materials} The datasets generated and analyzed during the current study are available from \url{https://www.uni-kassel.de/eecs/ies/downloads} or the corresponding author on reasonable request. \section*{Ethics approval and consent to participate} Not applicable \section*{Competing interests} The authors declare that they have no competing interests. \section*{Consent for publication} Not applicable \section*{Authors' contributions} YH wrote the majority of the manuscript and was responsible for implementing the CLeaR and experiments. BS suggested the artificial data and the first experiment and was responsible for double-checking the manuscript. All authors read and approved the final manuscript. \section*{Authors' information} Not applicable \end{backmatter} \end{document}
\begin{document} \title{Interpolation in extensions of first-order logic} \begin{abstract} We prove a generalization of Maehara's lemma to show that the extensions of classical and intuitionistic first-order logic with a special type of geometric axioms, called singular geometric axioms, have Craig's interpolation property. As a corollary, we obtain a direct proof of interpolation for (classical and intuitionistic) first-order logic with identity, as well as interpolation for several mathematical theories, including the theory of equivalence relations, (strict) partial and linear orders, and various intuitionistic order theories such as apartness and positive partial and linear orders. \end{abstract} Craig's interpolation theorem \cite{Craig1957} is a central result in first-order logic. It asserts that for any theorem $A \to B$ there exists a formula $C$, called \emph{interpolant}, such that $A \to C$ and $C \to B$ are also theorems and $C$ only contains non-logical symbols that are contained in both $A$ and $B$ (and if $A$ and $B$ have no non-logical symbols in common, then either $\neg A$ is a theorem or $B$ is). The aim of this paper is to extend interpolation beyond first-order logic. In particular, we show how to prove interpolation in extensions of intuitionistic and classical sequent calculi with \emph{singular geometric rules}, a special case of geometric rules investigated in \cite{Negri2003}. Interpolation for singular geometric rules will be obtained by generalizing a standard result, reportedly due to Maehara in \cite{Takeuti1987} and known as \textquotedblleft Maehara's lemma\textquotedblright{} \cite{Maehara1960}.\footnote{In this work we shall not consider semantic methods to prove interpolation. These have been applied extensively to non-classical logics in \cite{GabbayMaksimova2005}; there are also proofs of interpolation for non-classical logics that are more similar to our approach, especially \cite{BaazIemhoff2005,FittingKuznets2015,Kuznets2018}.} The proof of Maehara's lemma for intuitionistic and classical first-order logic requires cut elimination. This clearly challenges the project of proving the lemma for systems extending first-order logic with axioms, since such systems are not generally cut-free (cf.~\cite[\S 4.5]{TroelstraSchwichtenberg2000} and \cite [\S 6.3]{NegrivonPlato2001} for different approaches to non-logical axioms). For example, in the calculus $\mathsf{LK_e}$, an extension of Gentzen's $\mathsf{LK}$ for first-order logic with identity, cuts on identities $s = t$ are not eliminable (cf. Theorem 6 in \cite{Takeuti1987}, where these cuts are called \textquotedblleft inessential\textquotedblright{}). Fortunately, interpolation can still be proved for first-order logic with identity. The drawback of the existing proofs, however, is that they are indirect, in the sense that the interpolant is not built using exclusively the rules of the calculus. 
In \cite{TroelstraSchwichtenberg2000}, for example, a translation is used to reduce interpolation for first-order logic with identity to the case of pure first-order logic.\footnote{For other proofs of interpolation via translation see \cite{RasgaCarnielliSernadas2009} and \cite{BonacinaJohansson2015}.} A different route is taken in \cite{Gallier2015}, using the method of \textquotedblleft axioms in the context,\textquotedblright{} where interpolation is again not proved directly in $\mathsf{LK_e}$, but in a variant of $\mathsf{LK}$, equivalent to $\mathsf{LK_e}$, in which all derivable sequents have the axioms governing the identity predicate in the context.\footnote{Thanks to a referee for bringing this to our attention.} Besides the use of such indirect maneuvers, these approaches are specifically designed for first-order logic with identity and it is not entirely obvious how to adapt them to other extensions of first-order logic. On the other hand, in this paper interpolation is proved via a generalization of Maehara's lemma to a class of extensions of first-order logic (which includes first-order logic with identity as a particular case) and using no other means than the rules of the calculus (Lemma \ref{th:maehara_singular}). Our generalization of Maehara's lemma is based on previous work by Negri and von Plato who have shown (in a series of papers starting from \cite{NegrivonPlato1998}) how to recover cut elimination (as well as the admissibility of other structural rules) for extensions of the calculi $\mathsf{G3c}$ and $\mathsf{m\textnormal{-}G3i}$ for classical and intuitionistic first-order logic. Of particular interest for the present work are the extensions with geometric rules, investigated in \cite{Negri2003}.\footnote{We depart from Negri's approach in taking the intuitionistic single-succedent calculus $\mathsf{G3i}$ instead of the multi-succedent $\mathsf{m\textnormal{-}G3i}$ of \cite{Negri2003}; in Theorem \ref{prop:structural_properties_geometric_in} we will also prove, along the way, that cut elimination holds for geometric extensions of $\mathsf{G3i}$.} Once cut elimination is recovered in this way, we impose a singularity condition on geometric rules to isolate those containing at most one non-logical predicate (identity will be counted as logical). Our main result is to show that Maehara's lemma holds when $\mathsf{G3c}$ and $\mathsf{G3i}$ are extended with singular geometric rules (Lemma \ref{th:maehara_singular}). Then interpolation follows easily from the generalized Maehara's lemma (Theorem \ref{CraigSingular}). Finally, we consider applications of Theorem \ref{CraigSingular} and we show that singular geometric rules include many interesting extensions of intuitionistic and classical first-order logic, especially (classical and intuitionistic) first-order logic with identity, the theory of equivalence relations, (strict) partial and linear orders, the theory of apartness and the theory of positive partial and linear orders. \section{Classical and intuitionistic sequent calculi} The language $\mathcal{L}$ is a first-order language with individual constants and no functional symbols. Terms ($s,t,u,\dots$) are either variables ($x,y,z,\dots$) or individual constants ($a,b,c,\dots$). $\mathcal{L}$ also contains denumerably many $k$-ary predicates $P^k,Q^k,R^k, \dots$ for each $k \geq 0$. $\mathcal{L}$ may also contain the identity. We agree that all predicates, except identity, are non-logical.
Moreover, it is convenient to have two propositional constants $\bot$ (falsity) and $\top$ (truth). Formulas are built up from atoms $P^k (t_1, \Deltaots , t_k)$, the constants $\bot$ and $\top$ using logical operators $\wedge$, $\vee$, $\to$, $\exists$ and $\forall$ as usual. We use $P,Q,R,\Deltaots$ for atoms, $A,B,C,\Deltaots$ for formulas and $\Gamma, \mathcal{D}elta, \Pi , \Deltaots$ for (possibly empty) finite multisets of formulas. The negation $\neg A$ of a formula $A$ is defined as $A \to \bot$. We also agree that $\Gamma,\mathcal{D}elta$ is an abbreviation for $\Gamma \cup \mathcal{D}elta$ (where $\cup$ is the multiset union) and $\bigwedge\Gamma$ ($\bigvee\Gamma$) stands for the conjunction (disjunction, respectively) of all formulas in $\Gamma$. Moreover, if $\Gamma$ is empty, then $\bigwedge\Gamma\equiv\top$ and $\bigvee\Gamma\equiv\bot$, where $\equiv$ indicates syntactic identity (up to $\alpha$-congruence) between expressions of the object-language. The substitution of a variable $x$ with a term $t$ in a term $s$ (in a formula $A$, in a multiset $\Gamma$) will be indicated as $s [\,{}_{x}^{t}\,]$ ($A [\,{}_{x}^{t}\,]$ and $\Gamma [\,{}_{x}^{t}\,]$, respectively) and defined as usual. To indicate the simultaneous substitution of the list of variables $x_1 , \Deltaots , x_n$ (abbreviated in $\bar{x}$) with the list of terms $t_1, \Deltaots , t_n$ (abbreviated in $\bar{t}$), we use $[\,{}_{\bar{x}}^{\bar{t}}\,]$ in place of $[\,{}_{x_1\,\Deltaots\,x_n}^{t_1\,\Deltaots\,t_n}\,]$. Later on, we shall also need a more general notion of substitution of terms for terms (not just variables) which will be proved to preserve derivability (Lemma \ref{th:substitution_general}). Finally, let $\mathsf{FV} (A)$ be the set of free variables of a formula $A$ and let $\mathsf{Con} (A)$ be the set of its individual constants. We agree that the set of terms $\mathsf{Ter} (A)$ of $A$ is $\mathsf{FV} (A) \cup \mathsf{Con} (A)$. Moreover, if $\mathsf{Rel}(A)$ is the set of non-logical predicates of $A$ then we define the language $\mathcal{L}(A)$ of $A$ as $\mathsf{Ter} (A) \cup \mathsf{Rel}(A)$. Notice that $= \; \notin \mathcal{L} (A)$, for all $A$. Such notions are immediately extended to multisets of formulas $\Gamma$, by letting $\mathsf{FV} (\Gamma)$ to be defined as $\bigcup_{A\in\Gamma} \mathsf{FV} (A)$, and analogously for $\mathsf{Con} (\Gamma)$, $\mathsf{Ter} (\Gamma)$, $\mathsf{Rel}(\Gamma)$ and $\mathcal{L} (\Gamma)$. The calculus $\mathsf{Gc}$ ($\mathsf{Gi}$) is a variant of $\mathsf{LK}$ ($\mathsf{LI}$) for classical (intuitionistic, respectively) first-order logic, originally introduced by Gentzen in \cite{Gentzen1969a}. In the literature, especially in \cite{TroelstraSchwichtenberg2000} and \cite{NegrivonPlato2001}, $\mathsf{Gc}$ and $\mathsf{Gi}$ are commonly referred to as $\mathsf{G3c}$ and $\mathsf{G3i}$ but we will omit \textquoteleft $\mathsf{3}$\textquoteright{} in the interest of readability. Moreover, we will write $\mathsf{G}$ to refer to either $\mathsf{Gc}$ or $\mathsf{Gi}$. A sequent in $\mathsf{Gc}$ is a pair $\langle \Gamma,\mathcal{D}elta \rangle$ of multisets, indicated as $\Gamma\Rightarrow\mathcal{D}elta$. The calculus $\mathsf{Gc}$ consists of the following initial sequents and logical rules (where $y$ is an \emph{eigenvariable} in $R\forall$ and $L\exists$, i.e. 
$y$ must not occur free in the conclusion of these rules): \begin{center} \begin{tabular}{cc} \multicolumn{2}{c}{\emph{The calculus $\mathsf{Gc}$}}\\\\ \multicolumn{2}{c}{$P , \Gamma\Rightarrow\Delta , P$}\\\\ $\infer[\infrule{L\bot}]{\bot , \Gamma\Rightarrow\Delta}{}$ & $\infer[\infrule{R\top}]{\Gamma\Rightarrow\Delta , \top}{}$ \\\\ $\infer[\infrule{L\wedge}]{A \wedge B , \Gamma\Rightarrow\Delta}{A , B , \Gamma\Rightarrow\Delta}$ & $\infer[\infrule{R\wedge}]{\Gamma\Rightarrow\Delta , A \wedge B}{\Gamma\Rightarrow\Delta , A & \Gamma\Rightarrow\Delta , B}$ \\\\ $\infer[\infrule{L\vee}]{A \vee B , \Gamma\Rightarrow\Delta}{A , \Gamma\Rightarrow\Delta & B , \Gamma\Rightarrow\Delta}$ & $\infer[\infrule{R\vee}]{\Gamma\Rightarrow\Delta , A \vee B}{\Gamma\Rightarrow\Delta , A , B}$ \\\\ $\infer[\infrule{L\to}]{A \to B , \Gamma\Rightarrow\Delta}{\Gamma\Rightarrow\Delta , A & B , \Gamma\Rightarrow\Delta}$ & $\infer[\infrule{R\to}]{\Gamma\Rightarrow\Delta , A \to B}{A , \Gamma\Rightarrow\Delta , B}$ \\\\ $\infer[\infrule{L\forall}]{\forall x A , \Gamma\Rightarrow\Delta}{A [\, {}_{x}^{t} \,] , \forall x A , \Gamma\Rightarrow\Delta}$ & $\infer[\infrule{R\forall}]{\Gamma\Rightarrow\Delta ,\forall x A}{\Gamma\Rightarrow\Delta , A [\,{}_{x}^{y}\,]}$ \\\\ $\infer[\infrule{L\exists}]{\exists x A , \Gamma\Rightarrow\Delta}{A [\,{}_{x}^{y}\,] , \Gamma\Rightarrow\Delta}$ & $\infer[\infrule{R\exists}]{\Gamma\Rightarrow\Delta ,\exists x A}{\Gamma\Rightarrow\Delta , \exists x A , A [\,{}_{x}^{t}\,]}$ \end{tabular} \end{center} Sequents in $\mathsf{Gi}$ are defined as in $\mathsf{Gc}$, except that $\Delta$ must contain exactly one formula. The calculus $\mathsf{Gi}$ has the following initial sequents and logical rules (again, $y$ is an \emph{eigenvariable} in $R\forall$ and $L\exists$).
\begin{center} \begin{tabular}{cc} \multicolumn{2}{c}{\emph{The calculus $\mathsf{Gi}$}}\\\\ \multicolumn{2}{c}{$P , \Gamma \Rightarrow P$}\\\\ $\infer[\infrule{L\bot}]{\bot , \Gamma \Rightarrow C}{}$&$\infer[\infrule{R\top}]{\Gamma \Rightarrow \top}{}$ \\\\ $\infer[\infrule{L\wedge}]{A \wedge B , \Gamma \Rightarrow C}{A , B , \Gamma \Rightarrow C}$ & $\infer[\infrule{R\wedge}]{\Gamma \Rightarrow A \wedge B}{\Gamma \Rightarrow A & \Gamma \Rightarrow B}$ \\\\ $\infer[\infrule{L\vee}]{A \vee B , \Gamma \Rightarrow C}{A , \Gamma \Rightarrow C & B , \Gamma \Rightarrow C}$ & $\infer[\infrule{R\vee_1}]{\Gamma \Rightarrow A \vee B}{\Gamma \Rightarrow A}$ \quad $\infer[\infrule{R\vee_2}]{\Gamma \Rightarrow A \vee B}{\Gamma \Rightarrow B}$ \\\\ $\infer[\infrule{L\to}]{A \to B , \Gamma \Rightarrow C}{A \to B , \Gamma \Rightarrow A & B , \Gamma \Rightarrow C}$ & $\infer[\infrule{R\to}]{\Gamma \Rightarrow A \to B}{A , \Gamma \Rightarrow B}$ \\\\ $\infer[\infrule{L\forall}]{\forall x A , \Gamma \Rightarrow C}{A [\, {}_{x}^{t} \,] , \forall x A , \Gamma \Rightarrow C}$ & $\infer[\infrule{R\forall}]{\Gamma \Rightarrow \forall x A}{\Gamma \Rightarrow A [\,{}_{x}^{y}\,]}$ \\\\ $\infer[\infrule{L\exists}]{\exists x A , \Gamma \Rightarrow C}{A [\,{}_{x}^{y}\,] , \Gamma \Rightarrow C}$ & $\infer[\infrule{R\exists}]{\Gamma \Rightarrow \exists x A}{\Gamma \Rightarrow A [\,{}_{x}^{t}\,]}$ \end{tabular} \end{center} A derivation in $\mathsf{G}$ is a tree of sequents which grows according to the rules of $\mathsf{G}$ and whose leaves are initial sequents or conclusions of a 0-premise rule. A derivation of a sequent is a derivation concluding that sequent and a sequent is derivable when there is a derivation of it. As usual, we consider only \emph{pure-variable derivations}: bound and free variables are kept distinct, and no two rule instances have the same variable as \emph{eigenvariable}, see \cite[p. 38]{TroelstraSchwichtenberg2000}. The height $h$ of a derivation is defined inductively as follows: the derivation height of an initial sequent or of a conclusion of a 0-premise rule is 0, the derivation height of a derivation of a conclusion of a one-premise rule is the derivation height of its premise plus 1, and the derivation height of a derivation of a conclusion of a $n$-premise rule ($n \Gammaeq 2$) is the maximum of the derivation heights of its premises plus 1. A sequent is $h$-derivable if it is derivable with a derivation of height less than or equal to $h$. A rule is admissible if the conclusion is derivable whenever the premises are derivable; a rule is height-preserving admissible if the conclusion is $h$-derivable whenever the premises are $h$-derivable. Derivations will be denoted by $\mathcal{D} , \mathcal{D}_1, \mathcal{D}_2, \Deltaots$. We agree to use $\mathcal{D} \vdash \Gamma\Rightarrow\mathcal{D}elta$ to indicate that $\mathcal{D}$ is a derivation in $\mathsf{G}$ of $\Gamma\Rightarrow\mathcal{D}elta$ and $\vdash \Gamma\Rightarrow\mathcal{D}elta$ to indicate that $\Gamma\Rightarrow\mathcal{D}elta$ is derivable; finally, $\vdash^h \Gamma\Rightarrow\mathcal{D}elta$ indicates that $\Gamma\Rightarrow\mathcal{D}elta$ is $h$-derivable. We will use a double-line rule of the form $$ \infer=[\infrule R]{\Gamma\Rightarrow\mathcal{D}elta}{\Pi\Rightarrow\Sigma} $$ to indicate that $\Gamma\Rightarrow\mathcal{D}elta$ is derivable from $\Pi\Rightarrow\Sigma$ by a (possibly empty) sequence of instances of the rule \mbox{\it{R}}. 
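As a small worked instance of these definitions (the example is ours), consider the following derivation in $\mathsf{Gi}$, where $P$ and $Q$ are atoms: $$ \infer[\infrule{R\to}]{\Rightarrow P \to (Q \to P)}{\infer[\infrule{R\to}]{P \Rightarrow Q \to P}{Q , P \Rightarrow P}} $$ The topmost sequent is an initial sequent and thus has height $0$; the two applications of $R\to$ then yield $\vdash^2 \; \Rightarrow P \to (Q \to P)$.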
It is easy to see that sequents of the form $A , \Gamma\Rightarrow\Delta , A$, for an arbitrary formula $A$, are derivable in $\mathsf{G}$ (where $\Delta$ is empty for $\mathsf{Gi}$). The following structural rules for $\mathsf{Gc}$ (weakening, contraction and cut) are valid in the standard semantics of $\mathsf{Gc}$. \begin{center} \begin{tabular}{cc} \multicolumn{2}{c}{\emph{Structural rules of} $\mathsf{Gc}$}\\\\ $\infer[\infrule{Wkn}]{A, \Gamma \Rightarrow \Delta}{\Gamma \Rightarrow \Delta}$ & $\infer[\infrule{Wkn}]{\Gamma \Rightarrow \Delta , A}{\Gamma \Rightarrow \Delta}$\\\\ $\infer[\infrule{Ctr}]{A, \Gamma \Rightarrow \Delta}{A, A , \Gamma \Rightarrow \Delta}$ & $\infer[\infrule{Ctr}]{\Gamma \Rightarrow \Delta , A}{\Gamma \Rightarrow \Delta , A, A}$\\\\ \multicolumn{2}{c}{$\infer[\infrule{Cut}]{\Gamma , \Pi \Rightarrow \Delta , \Sigma}{\Gamma \Rightarrow \Delta , A & A , \Pi \Rightarrow \Sigma}$} \end{tabular} \end{center} However, we can safely leave them out without impairing the completeness of $\mathsf{Gc}$, since they are all admissible in it. In fact, weakening and contraction are also height-preserving admissible. Regarding $\mathsf{Gi}$, the structural rules are: \begin{center} \begin{tabular}{cc} \multicolumn{2}{c}{\emph{Structural rules of} $\mathsf{Gi}$}\\\\ $\infer[\infrule{Wkn}]{A, \Gamma \Rightarrow C}{\Gamma \Rightarrow C}$ & $\infer[\infrule{Ctr}]{A, \Gamma \Rightarrow C}{A, A , \Gamma \Rightarrow C}$\\\\ \multicolumn{2}{c}{$\infer[\infrule{Cut}]{\Gamma , \Delta \Rightarrow C}{\Gamma \Rightarrow A & A , \Delta \Rightarrow C}$} \end{tabular} \end{center} These rules are also valid in the model-theoretic semantics for intuitionistic logic, but just like in the classical case, they are all admissible in $\mathsf{Gi}$ (again, weakening and contraction are height-preserving admissible) and there is no need to take any of them as primitive. The proof of the admissibility of the structural rules in either calculus requires some preparatory results. First, the height-preserving admissibility of substitution in $\mathsf{G}$. \begin{lemma}\label{sub} In $\mathsf{G}$, if $\, \vdash^h \Gamma\Rightarrow\Delta$ and $t$ is free for $x$ in $\Gamma , \Delta$ then $\, \vdash^h \Gamma [\,{}_{x}^{t}\,] \Rightarrow \Delta [\,{}_{x}^{t}\,]$. \end{lemma} Second, the so-called inversion lemma. Intuitively, a rule is invertible when it can be applied backwards, from the conclusion to its premises, and it is height-preserving invertible when it is invertible with the preservation of the derivation height (for a precise definition of a height-preserving invertible rule see \cite[p. 76-77]{TroelstraSchwichtenberg2000}). \begin{lemma}\label{inv} In $\mathsf{Gc}$ all rules are height-preserving invertible. In $\mathsf{Gi}$ all rules, except $R\vee$, $L\to$ and $R\exists$, are height-preserving invertible. However, $L\to$ is height-preserving invertible with respect to its right premise. \end{lemma} With the height-preserving admissibility of substitution and the inversion lemma, it is possible to prove the admissibility of the structural rules. \begin{thm}\label{prop:structural_properties_logic} In $\mathsf{G}$ weakening and contraction are height-preserving admissible. Moreover, cut is admissible.
\end{thm} The proofs of Lemma \ref{sub}, Lemma \ref{inv}, and Theorem \ref{prop:structural_properties_logic} are standard and the interested reader is referred to \cite{TroelstraSchwichtenberg2000} and \cite{NegrivonPlato2001}. \subsection{From axioms to rules} Extensions of $\mathsf{G}$ are not, in general, cut-free; this means that Theorem \ref{prop:structural_properties_logic} does not necessarily hold in the presence of new initial sequents or rules. For example, a natural way to extend $\mathsf{Gc}$ to cover first-order logic with identity is to allow derivations to start with initial sequents of the form $\Rightarrow s = s$ and $s = t , P [{}_{x}^{s}] \Rightarrow P [{}_{x}^{t}]$, corresponding to the reflexivity of identity and Leibniz's principle of indiscernibility of identicals, respectively (we call these sequents $S_1$ and $S_2$). Notice that $S_2$ is in fact a scheme which becomes $s = t , s = s \Rightarrow t = s$ when $P$ is $x = s$. From this, via cut on $\Rightarrow s = s$, one derives $s = t \Rightarrow t = s$, namely the symmetry of identity. However, such a sequent has no derivation without cut. Therefore, cut is not admissible in $\mathsf{Gc} + \{ S_1 , S_2 \}$, though it is admissible in the underlying system $\mathsf{Gc}$. In \cite{NegrivonPlato1998} Negri and von Plato have shown how to recover cut elimination for (classical) first-order logic with identity by transforming $S_1$ and $S_2$ into an equivalent pair of rules of the form: $$ \infer[\infrule{\textnormal{\emph{Ref}}}_=]{\Gamma\Rightarrow\Delta}{s=s,\Gamma\Rightarrow\Delta} \qquad \infer[\infrule{\textnormal{\emph{Repl}}}_=]{s=t,P[{}_{x}^{s}],\Gamma\Rightarrow\Delta}{P[{}_{x}^{t}],s=t,P[{}_{x}^{s}],\Gamma\Rightarrow\Delta} $$ If one replaces $S_1$ and $S_2$ with the corresponding rules, it is easy to derive $s = t \Rightarrow t = s$ without any application of cut. More generally, cut elimination holds in $\mathsf{Gc} + \{\textnormal{\emph{Ref}} , \textnormal{\emph{Repl}} \}$ (cf. Theorem 4.2 in \cite{NegrivonPlato1998} and \cite[\S 6.5]{NegrivonPlato2001}). This result can be, and has been, extended in different directions. Here we are particularly interested in the fact, established by \cite{Negri2003}, that cut elimination holds in extensions of $\mathsf{Gc}$ with geometric rules (of which the rules of identity are special cases). The result will be reviewed briefly below, while for a more thorough discussion on this topic the reader is referred to \cite{Negri2003} or the monograph \cite{NegrivonPlato2011}. In \cite{Negri2003} Negri also showed that cut elimination holds for geometric theories formulated as extensions of the multi-succedent calculus $\mathsf{m\textnormal{-}G3i}$ for intuitionistic logic, introduced in \cite{Dragalin1988}. For our purposes, however, it is better to work in $\mathsf{Gi}$ as the underlying logical calculus for intuitionistic logic. In this way we can rely on the proof of Maehara's lemma for $\mathsf{Gi}$ already available in the literature (whereas to our knowledge no attempt has been made to obtain a similar result for $\mathsf{m\textnormal{-}G3i}$). In fact, it is not entirely obvious how to prove Maehara's lemma for $\mathsf{m\textnormal{-}G3i}$. Working in $\mathsf{Gi}$ is thus more advantageous as far as Maehara's lemma is concerned, but one needs first to make sure that cut elimination holds in the presence of geometric rules.
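Returning for a moment to the identity rules displayed above, the cut-free derivation of the symmetry sequent $s = t \Rightarrow t = s$ mentioned there can be spelled out as follows (this is our own sketch, obtained by instantiating $P$ to $x = s$ in \emph{Repl}$_=$): $$ \infer[\infrule{\textnormal{\emph{Ref}}}_=]{s = t \Rightarrow t = s}{\infer[\infrule{\textnormal{\emph{Repl}}}_=]{s = t , s = s \Rightarrow t = s}{t = s , s = t , s = s \Rightarrow t = s}} $$ The topmost sequent is an initial sequent, since the atom $t = s$ occurs on both sides, and no application of cut is needed.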
Accordingly, after introducing geometric rules, we will show that the standard cut-elimination procedure goes through with minor adjustments in geometric extensions of $\mathsf{Gi}$ (Theorem \ref{prop:structural_properties_geometric_in}). \subsection{Geometric theories} A \emph{geometric axiom} is a formula following the \emph{geometric axiom scheme} below: $$\forall \bar{x} (P_1 \wedge \dots \wedge P_n \to \exists \bar{y}_1 M_1 \vee \dots \vee \exists \bar{y}_m M_m)$$ where each $P_j$ is an atom and each $M_i$ is a conjunction of a list of atoms $Q_{i_{1}} , \dots , Q_{i_{\ell}}$ and none of the variables in any $\bar{y}_i$ are free in the $P_j$s. We shall conveniently abbreviate $Q_{i_{1}} , \dots , Q_{i_{\ell}}$ as $\mathbf{Q}_i$. In a geometric axiom, if $m = 0$ then the consequent of $\to$ becomes $\bot$, whereas if $n = 0$ the antecedent of $\to$ becomes $\top$. A \emph{geometric theory} is a theory containing only geometric axioms. An $m$-premise \emph{geometric rule}, for $m\geq 0$, is a rule following the \emph{geometric rule scheme} below: $$ \infer[\infrule{R}]{P_1 , \dots , P_n , \Gamma \Rightarrow \Delta}{\mathbf{Q}^*_1 , P_1 , \dots , P_n , \Gamma \Rightarrow \Delta & \cdots & \mathbf{Q}^*_m , P_1 , \dots , P_n , \Gamma \Rightarrow \Delta} $$ where each $\mathbf{Q}^*_i$ is obtained from $\mathbf{Q}_i$ by replacing every variable in $\bar {y}_i$ with a variable which does not occur free in the conclusion. Such variables will be called the \emph{eigenvariables} of $R$. Without loss of generality, we assume that each $\bar{y}_i$ consists of a single variable. In sequent calculus a geometric theory can be formulated by adding on top of $\mathsf{G}$ finitely many geometric rules (recall that $\Delta$ contains exactly one formula in $\mathsf{Gi}$). Moreover, geometric rules are assumed to satisfy a natural closure property for contraction (see \cite[6.1.7]{NegrivonPlato2001}).
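Before stating this closure property, it may help to see the rule scheme at work on a concrete axiom (the instance below is ours). The density axiom for a binary relation $R$, $\forall x \forall y (x R y \to \exists z (x R z \wedge z R y))$, is geometric with $n = m = 1$, $P_1 \equiv x R y$ and $\mathbf{Q}_1$ consisting of the atoms $x R z$ and $z R y$; the corresponding geometric rule is $$ \infer[\infrule{\textnormal{\emph{Dens}}}]{s R t , \Gamma \Rightarrow \Delta}{s R z , z R t , s R t , \Gamma \Rightarrow \Delta} $$ where the eigenvariable $z$ does not occur free in the conclusion.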
\begin{defn}[Closure condition]\label{closurecondition} If a geometric extension $\mathsf{G}'$ of $\mathsf{G}$ contains a rule where a substitution instance of the principal formulas produces a rule with repetition of the form: $$ \infer[\infrule R]{P_1,\Deltaots,P_{n-2},P,P,\Gamma\Rightarrow\mathcal{D}elta}{\mathbf{Q}_1^*,P_1,\Deltaots,P_{n-2},P,P,\Gamma\Rightarrow\mathcal{D}elta & \cdots & \mathbf{Q}_m^*,P_1,\Deltaots,P_{n-2},P,P,\Gamma\Rightarrow\mathcal{D}elta} $$ then $\mathsf{G}'$ contains or is closed under the following contracted instance of the rule: $$ \infer[\infrule R^c]{P_1,\Deltaots,P_{n-2},P,\Gamma\Rightarrow\mathcal{D}elta}{\mathbf{Q}_1^*,P_1,\Deltaots,P_{n-2},P,\Gamma\Rightarrow\mathcal{D}elta & \cdots & \mathbf{Q}_m^*,P_1,\Deltaots,P_{n-2},P,\Gamma\Rightarrow\mathcal{D}elta} $$ \end{defn} As an illustration, we consider the rule \emph{Trans}$_\leqslant$ in the theory $\mathsf{PO}$ (see $\S$ \ref{PO}): $$ \infer[\infrule{Trans_\leqslant}]{s \leqslant t , t \leqslant u , \Gamma \xRightarrow{} \mathcal{D}elta}{s \leqslant u , s \leqslant t , t \leqslant u , \Gamma \xRightarrow{} \mathcal{D}elta} $$ Clearly, as an instance of such a rule we have: $$ \infer[\infrule{Trans_\leqslant}]{s \leqslant s , s \leqslant s , \Gamma \xRightarrow{} \mathcal{D}elta}{s \leqslant s , s \leqslant s , s \leqslant s , \Gamma \xRightarrow{} \mathcal{D}elta} $$ Hence $\mathsf{PO}$ has to be closed under the following contracted instance $$ \infer[\infrule{Trans_\leqslant^c}]{s \leqslant s , \Gamma \xRightarrow{} \mathcal{D}elta}{s \leqslant s , s \leqslant s , \Gamma \xRightarrow{} \mathcal{D}elta} $$ For $\mathsf{PO}$ we don't need to add the contracted rule \emph{Trans}$_\leqslant^c$, because it is admissible thanks to rule \emph{Ref}$_\leqslant$. In general, however, this is not the case. Let $\mathsf{G^g}$ be any extension of $\mathsf{G}$ with finitely many geometric rules satisfying the closure condition (from now on, we will tacitly assume that the closure condition is always met). We now show that cut elimination and the admissibility of the structural rules hold in $\mathsf{G^g}$. Although we will heavily rely on \cite{Negri2003}, we start by introducing a more general notion of substitution that allows an arbitrary term $u$ (possibly a constant) to be replaced by a term $t$. In the presence of such general substitutions, special care is needed in order to maintain the height-preserving admissibility of substitutions. In particular, general substitutions are height-preserving admissible, provided that the replaced term $u$ does not occur essentially in the calculus. Intuitively, a term $u$ occurs essentially in a rule $R$ when $u$ cannot be replaced (by an arbitrary term), namely when $u$ is a constant and $u$ already occurs in the axiom from which $R$ is obtained. More precisely, \begin{defn} A constant $u$ occurs essentially in a geometric axiom $A$ if and only if, for some $t\not\equiv u$, $A [\,{}_{u}^{t}\,]$ is not an instance of the axiom $A$. \end{defn} \noindent We also agree that a term $u$ occurs essentially in a geometric rule $R$ when it does so in the corresponding axiom. For example, in the geometric axiom $\neg 1 \leqslant 0$ of non-degenerate partial orders (see \cite[p. 
116]{NegrivonPlato2011}) both $1$ and $0$ occur essentially; hence they also occur essentially in the corresponding geometric rule \emph{Non-deg}: $$ \infer[\infrule{\textnormal{\emph{Non-deg}}}]{1 \leqslant 0 , \Gamma\Rightarrow\mathcal{D}elta}{} $$ Now we show that the general substitution $[\,{}_{u}^{t}\,]$ is height-preserving admissible in $\mathsf{G^g}$, provided that $u$ occurs essentially in none of its geometric rule. \begin{lemma}\label{th:substitution_general} In $\mathsf{G^g}$, if $\, \vdash^n \Gamma\Rightarrow\mathcal{D}elta$, $t$ is free for $u$ in $\Gamma,\mathcal{D}elta$, and $u$ does not occur essentially in any rule of $\mathsf{G^g}$, then $\, \vdash^n \Gamma [{}_{u}^{t}] \Rightarrow \mathcal{D}elta [{}_{u}^{t}]$. \end{lemma} \begin{proof} If $u$ is a variable, the claim holds by extending Lemma \ref{sub} to $\mathsf{G^g}$. Otherwise, let $u$ be an individual constant. We can think of the derivation $\mathcal{D}$ of $\Gamma\Rightarrow\Delta$ as $$ \infer[{[{}^{u}_{z}]}]{\Gamma'[{}_{z}^{u}]\Rightarrow\Delta'[{}^{u}_{z}]}{\Gamma'\Rightarrow\Delta'} $$ where $\Gamma'\Rightarrow\Delta'$ is like $\Gamma\Rightarrow\Delta$ save that it has a fresh variable $z$ in place of $u$. Note that this is always feasible for purely logical derivations, and it is feasible for derivations involving geometric rules as long as these rules do not involve essentially the constant $u$. We transform $\mathcal{D}$ into $$ \infer[{[{}^{t}_{z}]}]{\Gamma'[{}_{z}^{t}]\Rightarrow\Delta'[{}^{t}_{z}]}{\Gamma'\Rightarrow\Delta'} $$ where $t$ is free for $z$ since we assumed it is free for $u$ in $\Gamma\Rightarrow\mathcal{D}elta$. We have thus found a derivation ($\mathcal{D}[{}_{u}^{t}]$) of $\Gamma [{}_{u}^{t}] \Rightarrow \mathcal{D}elta [{}_{u}^{t}]$ that has the same height as the derivation $\mathcal{D}$ of $\Gamma\Rightarrow\Delta$. \end{proof} We can now show that Lemma \ref{inv} and Theorem \ref{prop:structural_properties_logic} still hold in $\mathsf{G^g}$. In fact, for $\mathsf{Gc^g}$ a proof has already been given in \cite{Negri2003}. \begin{thm}[Negri]\label{prop:structural_properties_geometric_cl} In $\mathsf{Gc^g}$ all the geometric and logical rules are height-preserving invertible. Moreover, weakening and contraction are height-preserving admissible and cut is admissible. \end{thm} At this point we need to show that the same holds for $\mathsf{Gi}$. A similar result has been proved by Negri in \cite{Negri1999} for a subclass of geometric rules, called universal rules. In fact, Negri only considers specific instances of universal rules expressing the axioms of the constructive theory of apartness and excess, see \S \ref{apart} and \S \ref{sectPPO}. Moreover, in \cite{Negri1999} only the quantifier-free version of $\mathsf{Gi}$ is considered. Here we extend Negri's result and show the admissibility of the structural rules for the full calculus $\mathsf{Gi}$ extended by arbitrary geometric rules. Then, \begin{thm}\label{prop:structural_properties_geometric_in} In $\mathsf{Gi^g}$ all the geometric rules and all logical rules, except $R\vee$, $L\to$ and $R\exists$, are height-preserving invertible. However, $L\to$ is height-preserving invertible with respect to its right premise. Moreover, weakening and contraction are height-preserving admissible and cut is admissible. \end{thm} \begin{proof} The proof of height-preserving invertibility of the geometric and logical rules for $\mathsf{Gi^g}$ does not differ substantially from that for $\mathsf{Gi}$ and is left to the reader. 
We take a closer look at the admissibility of the structural rules. \ \emph{Weakening}. To show that weakening is height-preserving admissible in $\mathsf{Gi^g}$, we need to extend the proof for $\mathsf{Gi}$ with the cases arising from geometric rules $R$. These cases can be dealt with as for geometric rules over $\mathsf{m\textnormal{-}Gi}$ and $\mathsf{Gc}$ \cite[Thm. 2]{Negri2003}. In particular, if $R$ is an $m$-premise ($m\geq 1$) geometric rule with a variable condition on $y$, we replace $y$ with a fresh variable not occurring in the weakening formula, then we apply the inductive hypothesis and, finally, we apply $R$. If $R$ is an $m$-premise ($m\geq 1$) geometric rule without variable condition, we can apply the inductive hypothesis directly and then $R$. Finally, if $R$ is a 0-premise geometric rule, the conclusion of weakening is obtained directly by $R$. \ \emph{Contraction}. Once again, the new cases arising from the addition of geometric rules to $\mathsf{Gi}$ are similar to the cases in which these rules are added to $\mathsf{m\textnormal{-}Gi}$ or to $\mathsf{Gc}$ \cite[Thm. 4]{Negri2003}. This means we have three cases: either (i) none, (ii) exactly one, or (iii) both of the occurrences of the contraction formula are principal in the final step of the derivation of the premise. The first case can be dealt with by induction, the second by inversion, and the third by the closure condition. \ \emph{Cut}. To show that cut is admissible we need to prove that if $\, \vdash \Gamma \Rightarrow A$ and $\, \vdash A, \Delta \Rightarrow C$ then $\, \vdash \Gamma , \Delta \Rightarrow C$. The proof is by induction on the weight of the cut formula $A$ with a sub-induction on the sum of the heights of the derivations of the two premises (cut-height, for short). As for the proof of the admissibility of Cut over $\mathsf{m\textnormal{-}Gi^g}$ \cite[Thm. 5]{Negri2003}, we consider only the new cases arising from the geometric rules $R$. \begin{enumerate} \item The left premise of \emph{Cut} is by a 0-premise geometric rule $R$. Hence the conclusion of \emph{Cut} is also the conclusion of an instance of $R$. \item The right premise is by a 0-premise geometric rule $R$ and the cut formula is not principal in it. We proceed as in case 1. \item The right premise is by an instance of a 0-premise geometric rule $R$ and the cut formula is principal in it. In this case we know that $A$ is atomic (or $\top$ or $\bot$) and we consider the last step of the derivation of the left premise. If it is by a 0-premise (logical or geometric) rule or it is an initial sequent, we proceed as in case 1.\footnote{Observe that, unlike the cases of $\mathsf{m\textnormal{-}Gi^g}$ and $\mathsf{Gc^g}$, the cut formula $A$ must be principal in the left premise when this premise is an initial sequent.} If the left premise is by an $m$-premise ($m\geq 1$) logical or geometric rule, then the cut formula is not principal in it and we can permute the \emph{cut} upwards in the left premise (if the last rule applied in the left premise has \emph{eigenvariables}, we rename them before permuting the cut to avoid clashes). \item If the cut formula is not principal in the left or in the right premise and that premise is by an $m$-premise (for $m\geq 1$) geometric rule $R$, then, after having renamed any \emph{eigenvariable} of $R$ to avoid clashes, we permute the cut upwards with respect to this premise.
\item Finally, if the cut formula is principal in both premises, neither premise has been derived by a geometric rule and we proceed as for $\mathsf{Gi}$. \end{enumerate} \end{proof} \section{Singular geometric theories} To prove interpolation in extensions of first-order logic, the class of geometric rules seems too large. Thus, we restrict our attention to a proper sub-class of it and we introduce the class of singular geometric theories. In the next section we will prove (Lemma \ref{th:maehara_singular}) that Maehara's lemma holds for singular geometric extensions of first-order logic. A \emph{singular geometric axiom} is a geometric axiom with at most one non-logical predicate and no constant occurring essentially. A \emph{singular geometric theory} is a theory containing only singular geometric axioms. In sequent calculus a singular geometric theory can be formulated by extending $\mathsf{G}$ with finitely many geometric rules of form: $$ \infer[\infrule{R}]{P_1 , \Deltaots , P_n , \Gamma \Rightarrow \mathcal{D}elta}{\mathbf{Q}^*_1 , P_1 , \Deltaots , P_n , \Gamma \Rightarrow \mathcal{D}elta & \cdots & \mathbf{Q}^*_m , P_1 , \Deltaots , P_n , \Gamma \Rightarrow \mathcal{D}elta} $$ where no constant occurs essentially and that satisfy the following singularity condition: \begin{equation}\tag{$\star$} |\mathsf{Rel} (\mathbf{Q}^*_1 , \Deltaots , \mathbf{Q}^*_m , P_1 , \Deltaots , P_n)| \leq 1 \end{equation} \noindent Singular geometric axioms are ubiquitous in mathematics. Here, for example, is an incomplete list of singular geometric axioms for a binary relation $R$ (the list is partly taken from \cite[p. 48-50]{Casari2006}). \ \begin{tabular}{ll} $R$ is reflexive & $\forall x (\top \to x R x)$\\ $R$ is irreflexive & $\forall x (x R x \to \bot)$\\ $R$ is transitive & $\forall x \forall y \forall z (x R y \wedge y R z \to x R z)$\\ $R$ is intransitive & $\forall x \forall y \forall z (x R y \wedge y R z \wedge x R z \to \bot)$\\ $R$ is co-transitive & $\forall x \forall y \forall z(x R y\to xR z \vee z Ry)$\\ $R$ is splitting & $\forall x \forall y \forall z (x R y \to xR z \vee y R z)$\\ $R$ is symmetric & $\forall x \forall y (x R y \to y R x)$\\ $R$ is asymmetric & $\forall x \forall y (x R y \wedge y R x \to \bot)$\\ \end{tabular} \begin{tabular}{ll} $R$ is anti-symmetric & $\forall x \forall y (x R y \wedge y R x \to x = y)$\\ $R$ is trichotomy & $\forall x \forall y (\top \to x = y \vee x R y \vee y R x)$\\ $R$ is linear & $\forall x \forall y (\top \to x R y \vee y R x)$\\ $R$ is Euclidean & $\forall x \forall y \forall z (x R z \wedge y R z \to x R y)$\\ $R$ is left-unique & $\forall x \forall y \forall z (x R z \wedge y R z \to x = y)$\\ $R$ is right-unique & $\forall x \forall y \forall z (z R x \wedge z R y \to x = y)$\\ $R$ is connected & $\forall x \forall y \forall z (x R y \wedge x R z \to y R z \vee z R y)$\\ $R$ is nilpotent & $\forall x \forall y \forall z (x R z \wedge z R y \to \bot)$\\ $R$ is a left ideal & $\forall x \forall y \forall z (x R y \to x R z)$\\ $R$ is a right ideal & $\forall x \forall y \forall z (x R y \to z R y)$\\ $R$ is rectangular & $\forall x \forall y \forall z \forall v (x R z \wedge v R y \to x R y)$\\ $R$ is dense & $\forall x \forall y (x R y \to \exists z (x R z \wedge z R y))$\\ $R$ is total & $\forall x \exists y (\top \to x R y)$\\ $R$ is confluent & $\forall x \forall y \forall z (x R y \wedge x R z \to \exists u (y R u \wedge z R u))$\\ $R$ is left-oriented & $\forall x \forall y (\top \to \exists z (z R x \wedge z R y))$\\ $R$ is 
right-oriented & $\forall x \forall y (\top \to \exists z (x R z \wedge y R z))$\\ \end{tabular} \ It is evident that a number of important classical and intuitionistic mathematical theories are singular geometric. Regarding the classical ones, the theory of partial orders ($R$ is reflexive, transitive and anti-symmetric), the theory of linear orders ($R$ is a linear partial order), as well as the theories of strict partial orders ($R$ is irreflexive and transitive) and strict linear orders ($R$ is a trichotomic strict partial order) are singular geometric. Constructive singular geometric theories, on the other hand, include von Plato's theories of positive partial orders \cite{vonPlato2001} ($R$ is irreflexive and co-transitive) and positive linear orders ($R$ is an asymmetric positive partial order), as well as the theory of apartness ($R$ is irreflexive and splitting). Also the theory of equivalence relations ($R$ is reflexive, transitive and symmetric) falls within the class of singular geometric theories. Finally, the fact that a relation $R$ is functional (total and right-unique) can be axiomatized using singular geometric axioms. Singular geometric axioms are important in logic, too. Specifically, the axioms of identity are singular geometric. \ \begin{tabular}{ll} $=$ is reflexive & $\forall x (x = x)$\\ $=$ satisfies the indiscernibility of identicals & $\forall x \forall y (x = y \wedge P [{}_{z}^{x}] \to P [{}_{z}^{y}])$\\ \end{tabular} \ Notice that the indiscernibility of identicals satisfies the singularity condition ($\star$) because identity is a logical predicate. Hence, first-order logic with identity is a singular geometric theory. Cut elimination for singular geometric rules clearly follows from cut elimination for geometric rules. More precisely, let $\mathsf{G^s}$ be any extension of $\mathsf{G}$ with singular geometric rules. Then: \begin{cor}\label{prop:structural_properties_singular_geometric} All derivability properties expressed in Lemma \ref{th:substitution_general}, Theorem \ref{prop:structural_properties_geometric_cl} and Theorem \ref{prop:structural_properties_geometric_in} hold for $\mathsf{G^s}$. \end{cor} \begin{proof} Straightforward, since all singular geometric rules are geometric. \end{proof} \section{Interpolation with singular geometric rules} The standard proof of interpolation in sequent calculi rests on a result due to Maehara which appeared (in Japanese) in \cite{Maehara1960} and was later made available to an international readership by Takeuti in his \cite{Takeuti1987}. While interpolation is a result about logic, regardless of the formal system (sequent calculus, natural deduction, axiom system, etc.), Maehara's lemma is a \textquotedblleft sequent-calculus version\textquotedblright{} of interpolation. Although originally Maehara proved his lemma for $\mathsf{LK}$, it is easy to adapt the proof so that it holds also in $\mathsf{G}$ (cf. \cite[\S 4.4]{TroelstraSchwichtenberg2000}). We recall from \cite{TroelstraSchwichtenberg2000} some basic definitions. \begin{defn}[partition, split-interpolant]\label{partition} A \emph{partition of a sequent $\Gamma\Rightarrow\Delta$} is an expression $\Gamma_1 \; ; \; \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$, where $\Gamma = \Gamma_1 , \Gamma_2$ and $\Delta = \Delta_1 ,\Delta_2$ (where $=$ is understood as multiset identity).
A \emph{split-interpolant} of a partition $\Gamma_1 \; ; \; \Gamma_2 \Rightarrow \mathcal{D}elta_1 \; ; \; \mathcal{D}elta_2$ is a formula $C$ such that: \begin{enumerate} \item[I] \qquad$\vdash \Gamma_1 \Rightarrow \mathcal{D}elta_1 , C$ \item[II] \qquad$\vdash C , \Gamma_2 \Rightarrow \mathcal{D}elta_2$ \item[III] \qquad$\mathcal{L}(C) \subseteq \mathcal{L} (\Gamma_1 , \mathcal{D}elta_1) \cap \mathcal{L} (\Gamma_2 , \mathcal{D}elta_2)$ \end{enumerate} We use $\Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \mathcal{D}elta_1 \; ; \; \mathcal{D}elta_2$ to indicate that $C$ is a split-interpolant for $\Gamma_1 \; ; \; \Gamma_2 \Rightarrow \mathcal{D}elta_1 \; ; \; \mathcal{D}elta_2$. \end{defn} Moreover, we say that a $C$ satisfying conditions (I) and (II) satisfies the derivability conditions for being a split-interpolant for the partition $\Gamma_1 \; ; \; \Gamma_2 \Rightarrow \mathcal{D}elta_1 \; ; \; \mathcal{D}elta_2$, whereas if $C$ satisfies (III) we say that it satisfies the language condition for being a split-interpolant for the same partition. \begin{lemma}[Maehara]\label{th:maehara_logic} In $\mathsf{Gc}$ every partition $ \Gamma_1\; ; \;\Gamma_2\Rightarrow\mathcal{D}elta_1\; ; \;\mathcal{D}elta_2 $ of a derivable sequent $\Gamma\Rightarrow\mathcal{D}elta$ has a split-interpolant. In $\mathsf{Gi}$ every partition $\Gamma_1\; ; \;\Gamma_2\Rightarrow\; ; \; A $ of a derivable sequent $\Gamma\Rightarrow A$ has a split-interpolant. \end{lemma} The proof is by induction on the height $h$ of the derivation. If $h = 0$ then $\Gamma \Rightarrow \mathcal{D}elta$ is an initial sequent or a conclusion of a 0-premise rule and the proof is as in \cite{TroelstraSchwichtenberg2000}.\footnote{Notice, however, that the proof given in \cite{TroelstraSchwichtenberg2000} contains a misprint and the split-interpolant for the partition of the initial sequent $\Gamma_1 , P \; ; \; \Gamma_2 \Rightarrow \mathcal{D}elta_1 , P \; ; \; \mathcal{D}elta_2$ (their notation adjusted to ours) is $\bot$, and not $\bot \to \bot$ as stated in \cite[p.117]{TroelstraSchwichtenberg2000}.} If $h=n+1$ one uses as induction hypothesis the fact that any partition of the premises of a rule $R$ has a split-interpolant. For a detailed proof the reader is again referred to \cite{TroelstraSchwichtenberg2000}. From Maehara's lemma it is immediate to prove Craig's interpolation theorem. \begin{thm}[Craig]\label{Craig} If $A \Rightarrow B$ is derivable in $\mathsf{G}$ then there exists a $C$ such that $\vdash A \Rightarrow C$ and $\vdash C \Rightarrow B$ and $\mathcal{L}(C) \subseteq \mathcal{L} (A) \cap \mathcal{L} (B)$. \end{thm} \begin{proof} Let $A \Rightarrow B$ be derivable in $\mathsf{G}$ and consider the partition $A \; ; \; \varnothing \Rightarrow \varnothing \; ; \; B$ of $A \Rightarrow B$. By Lemma \ref{th:maehara_logic}, this partition has a split-interpolant, namely there exists a $C$ such that $A \; ; \; \varnothing \xRightarrow{C} \varnothing \; ; \; B$. Hence $\vdash A \Rightarrow C$ and $\vdash C \Rightarrow B$ and $\mathcal{L}(C) \subseteq \mathcal{L} (A) \cap \mathcal{L} (B)$ by Definition \ref{partition}. \end{proof} Of any calculus for which Theorem \ref{Craig} holds, we say that it has the interpolation property. Now we extend Lemma \ref{th:maehara_logic} to extensions of $\mathsf{G}$ with singular geometric rules. In the proof of Lemma \ref{th:maehara_singular} we shall only consider singular geometric rules where each $\mathbf{Q}^*_i$ is a single atom $Q^*_i$. 
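(To make the restriction concrete, with an illustration of our own: a rule such as \emph{Trans}$_\leqslant$ above has a single active atom in its premise and thus falls under the restricted format, whereas the rule obtained from the confluence axiom $\forall x \forall y \forall z (x R y \wedge x R z \to \exists u (y R u \wedge z R u))$, namely $$ \infer[\infrule{\textnormal{\emph{Conf}}}]{s R t , s R v , \Gamma \Rightarrow \Delta}{t R u , v R u , s R t , s R v , \Gamma \Rightarrow \Delta} $$ with eigenvariable $u$, is still singular but has a two-atom $\mathbf{Q}^*_1$.)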
More precisely, we consider singular geometric rules of the form $$ \infer[\infrule{R}]{P_1 , \dots , P_n , \Gamma \Rightarrow \Delta}{Q_1^* , P_1 , \dots , P_n , \Gamma \Rightarrow \Delta & \cdots & Q_m^* , P_1 , \dots , P_n , \Gamma \Rightarrow \Delta} $$ where $\Delta$ consists of exactly one formula in $\mathsf{Gi}$. This allows some notational simplification and will significantly improve the readability of the proof. It does not impair the generality of the result. \begin{lemma}\label{th:maehara_singular} In $\mathsf{Gc^s}$ every partition $\Gamma_1 \; ; \; \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ of a derivable sequent $\Gamma\Rightarrow\Delta$ has a split-interpolant. In $\mathsf{Gi^s}$ every partition $\Gamma_1 \; ; \; \Gamma_2 \Rightarrow \; ; \; A$ of a derivable sequent $\Gamma\Rightarrow A$ has a split-interpolant. \end{lemma} \begin{proof} The proof extends that of Lemma \ref{th:maehara_logic}. Let $R$ be a singular geometric rule with $m$ premises and let $\Pi , \Gamma\Rightarrow\Delta$ be its conclusion, where $\Pi$ is the multiset $P_1,\dots,P_n$ of the atomic principal formulas of $R$, if any. We consider the following generic partition of the conclusion: $$ \Pi_1,\Gamma_1\; ; \;\Pi_2,\Gamma_2\Rightarrow\Delta_1\; ; \;\Delta_2 $$ where $\Pi_1,\Pi_2=\Pi$ and $\Gamma_1,\Gamma_2=\Gamma$ and $\Delta_1,\Delta_2=\Delta$, and where $\Delta_1=\varnothing$ and $\Delta_2=A$ for $\mathsf{Gi^s}$. Moreover, let $\Theta$ be the multiset $Q_1^*,\dots,Q_m^*$ of active formulas of $R$, if any. We organize the proof in three exhaustive cases: \begin{enumerate} \item\label{case1} $\mathsf{Rel}(\Theta,\Pi)\subseteq\mathsf{Rel}(\Pi_1,\Gamma_1,\Delta_1)$; \item\label{case2} $\mathsf{Rel}(\Theta,\Pi)\subseteq\mathsf{Rel}(\Pi_2,\Gamma_2,\Delta_2)$; \item\label{case3} $\mathsf{Rel}(\Theta,\Pi)\not\subseteq\mathsf{Rel}(\Pi,\Gamma,\Delta)$. \end{enumerate} Observe that these three cases are exhaustive since singular geometric rules have at most one non-logical predicate in their principal and active formulas and, therefore, when Case 3 does not hold at least one of Cases 1 and 2 holds. We give a proof of the three cases for $\mathsf{Gc}$, and then we show the modifications needed for $\mathsf{Gi}$. \ \emph{Case \ref{case1} for $\mathsf{Gc^s}$}. If $R$ is an $m$-premise rule with $m\geq1$, then by the inductive hypothesis (IH) every partition of each of the $m$ premises of $R$ has a split-interpolant. In particular, for each $k\in\{1,\dots,m\}$, there is a $C_k$ such that: \begin{itemize} \item[(I$_k$)] $\vdash Q^*_k , \Pi_1,\Pi_2, \Gamma_1 \Rightarrow \Delta_1 , C_k $ \item[(II$_k$)] $\vdash C_k , \Gamma_2 \Rightarrow \Delta_2 $ \item[(III$_k$)] $\mathcal{L} (C_k) \subseteq \mathcal{L} (Q^*_k , \Pi_1,\Pi_2, \Gamma_1 , \Delta_1) \cap \mathcal{L} (\Gamma_2 , \Delta_2)$ \end{itemize} If, instead, $R$ is a 0-premise rule then (I$_1$), (II$_1$), and (III$_1$) hold trivially for $C_1\equiv\bot$. We start by assuming that $\Pi_2$ is the non-empty multiset $P_{i_{j+1}} , \dots , P_{i_{n}}$, and then we show the modifications needed when $\Pi_2=\varnothing$.
Consider now the following derivation $\mathcal{D}_1$, where the topmost sequents are derivable by (I$_1$)--(I$_m$): \begin{small} \begin{equation}\label{firstcase1} \infer[\infrule{R\to}]{\Pi_1,\Gamma_1\Rightarrow\Delta_1, \bigwedge\Pi_2 \rightarrow \bigvee_{i=1}^m C_i}{ \infer=[\infrule{L\wedge}]{\bigwedge\Pi_2,\Pi_1,\Gamma_1\Rightarrow\Delta_1, \bigvee_{i=1}^m C_i } {\infer[\infrule R]{\Pi_2,\Pi_1,\Gamma_1\Rightarrow\Delta_1, \bigvee_{i=1}^m C_i }{ \infer=[\infrule R\lor]{Q_1^*,\Pi_2,\Pi_1,\Gamma_1\Rightarrow\Delta_1, \bigvee_{i=1}^m C_i}{ \infer=[\infrule Wkn]{Q_1^*,\Pi_2,\Pi_1,\Gamma_1\Rightarrow\Delta_1,C_1,\dots,C_m}{Q_1^*,\Pi_2,\Pi_1,\Gamma_1\Rightarrow\Delta_1,C_1}}& \deduce{\phantom{a}}{\dots}& \infer=[\infrule R\lor ]{Q_m^*,\Pi_2,\Pi_1,\Gamma_1\Rightarrow\Delta_1, \bigvee_{i=1}^m C_i}{ \infer=[\infrule Wkn]{Q_m^*,\Pi_2,\Pi_1,\Gamma_1\Rightarrow\Delta_1,C_1,\dots,C_m}{Q_m^*,\Pi_2,\Pi_1,\Gamma_1\Rightarrow\Delta_1,C_m}} } } } \end{equation} \end{small} Notice that the application of $R$ is legitimate because by assumption $R$ is applicable to $Q_i^*,\Pi,\Gamma\Rightarrow\Delta$ and none of the \emph{eigenvariables} of the $Q_i^*$'s can occur free in any $C_k$, since $\mathcal L(C_k)\subseteq\mathcal L(\Gamma_2,\Delta_2)$. Notice also that in some particular cases the double-line stands for the empty sequence of instances, e.g., the steps by $R\lor$ when $R$ is a 0- or 1-premise rule. Consider another derivation $\mathcal{D}_2$, where the left-topmost sequents are initial sequents since $\Pi_2=P_{i_{j+1}} , \dots , P_{i_{n}}$ and the right-topmost ones are derivable by (II$_1$)--(II$_m$): {\small \begin{equation}\label{secondcase1} \infer[\infrule{L\to}]{ \bigwedge\Pi_2 \to \bigvee_{i=1}^m C_i , \Pi_2 ,\Gamma_2\Rightarrow\Delta_2 } { \infer=[\infrule{R\wedge}] { \Pi_2 ,\Gamma_2\Rightarrow\Delta_2 , \bigwedge\Pi_2 } { \Pi_2,\Gamma_2\Rightarrow\Delta_2 , P_{i_{j+1}} & \cdots & \Pi_2,\Gamma_2\Rightarrow\Delta_2 , P_{i_{n}} } & \infer=[\infrule{Wkn}] { \bigvee_{i=1}^m C_i , \Pi_2 ,\Gamma_2\Rightarrow\Delta_2 } { \infer=[\infrule{L\vee}] { \bigvee_{i=1}^m C_i , \Gamma_2\Rightarrow\Delta_2 } { C_1 , \Gamma_2\Rightarrow\Delta_2 & \cdots & C_m , \Gamma_2\Rightarrow\Delta_2 } } } \end{equation}} When $\Pi_2=\varnothing$ we modify $\mathcal{D}_1$ by using left weakening instead of $L\wedge$ to add $\bigwedge\Pi_2$ (i.e., $\top$) to the antecedent, and we modify $\mathcal{D}_2$ by deriving the conclusion of $R\wedge$ by an instance of $R\top$ instead of by instances of $R\wedge$. Let $t_1 , \dots , t_\ell$ be all terms such that $t_1 , \dots , t_\ell \in \mathsf{Ter} (\bigwedge\Pi_2\to \bigvee_{i=1}^m C_i)$ and $(\bullet)\;t_1 , \dots , t_\ell \notin \mathsf{Ter}(\Pi_1 , \Gamma_1 , \Delta_1) \cap \mathsf{Ter} (\Pi_2 , \Gamma_2 , \Delta_2)$. We use $\overline{t}$ to denote $t_1,\dots,t_\ell$. We show that $$ (\ddag)\qquad t_1 , \dots , t_\ell \notin \mathsf{Ter} ( \Pi_1 , \Gamma_1 , \Delta_1) $$ For each $k\leq m$, (III$_k$) entails that $\mathsf{Ter}(C_k)\subseteq \mathsf{Ter}(\Gamma_2,\Delta_2)$. Hence $\mathsf{Ter}(\bigwedge\Pi_2\to \bigvee_{i=1}^m C_i)\subseteq \mathsf{Ter}(\Pi_2,\Gamma_2,\Delta_2)$. By this and ($\bullet$) we immediately get that $(\ddag)$ holds.
Let now $\bar{z}$ be variables $z_1 , \dots , z_\ell$ not occurring in $\mathcal{D}_1$ and $\mathcal{D}_2$. Lemma \ref{th:substitution_general} applied to $\mathcal D_1$ shows that: $$ \vdash \Pi_1, \Gamma_1 \Rightarrow \Delta_1 , ( \bigwedge\Pi_2\to \bigvee_{i=1}^m C_i ) [\,{}_{\bar{t}}^{\bar{z}}\,] $$ Here ($\ddag$) ensures that the substitution $[\,{}_{\bar{t}}^{\bar{z}}\,]$ has no effect on $\Pi_1, \Gamma_1,\Delta_1$. By $\ell$ applications of $R\forall$ to the derivable sequent above we obtain: $$ \textnormal{(I}_C) \qquad \vdash \Pi_1 , \Gamma_1 \Rightarrow \Delta_1 , \forall \bar{z}(( \bigwedge\Pi_2\to \bigvee_{i=1}^m C_i ) [\,{}_{\bar{t}}^{\bar{z}}\,]) $$ Moreover, by applying $\ell$ instances of left weakening and then $\ell$ instances of $L\forall$ to the conclusion of $\mathcal D_2$ we obtain: $$\textnormal{(II}_C) \qquad\vdash \forall \bar{z}(( \bigwedge\Pi_2\to \bigvee_{i=1}^m C_i )[\,{}_{\bar{t}}^{\bar{z}}\,]) ,\Pi_2 , \Gamma_2 \Rightarrow \Delta_2 $$ Let $C$ be $\forall \bar{z}(( \bigwedge\Pi_2\to \bigvee_{i=1}^m C_i )[\,{}_{\bar{t}}^{\bar{z}}\,])$. By (I$_C$) and (II$_C$), we have established that $C$ satisfies the derivability conditions for being a split-interpolant of the given partition. We now show that it also satisfies the language condition, namely: $$ \textnormal{(III}_C) \quad \mathcal{L} (C) \subseteq \mathcal{L} (\Pi_1, \Gamma_1, \Delta_1) \cap \mathcal{L}(\Pi_2 , \Gamma_2, \Delta_2) $$ First, if $s$ is a term in $\mathsf{Ter} (C)$, it is a term occurring in $\bigwedge\Pi_2\to \bigvee_{i=1}^m C_i$ that is not in the list $\bar{t}$. By $(\bullet)$, we have: $$\text{(III}.1_C)\quad s \in \mathsf{Ter}(\Pi_1 , \Gamma_1,\Delta_1)\cap\mathsf{Ter}(\Pi_2 ,\Gamma_2,\Delta_2)$$ Next, we show that: $$(\textnormal{III}.2_C)\qquad\mathsf{Rel}(C)\subseteq\mathsf{Rel}(\Pi_1,\Gamma_1,\Delta_1)\cap\mathsf{Rel}(\Pi_2,\Gamma_2,\Delta_2) $$ By assumption, we are in Case \ref{case1}, i.e., $ \mathsf{Rel}(\Theta,\Pi)\subseteq\mathsf{Rel}(\Pi_1,\Gamma_1,\Delta_1)$. The following set-theoretic reasoning shows that ($\textnormal{III}.2_C$) holds: $\begin{array}{lc} \mathsf{Rel}(C)&\stackrel{\textnormal{III}_k}\subseteq\\\noalign{\smallskip} \mathsf{Rel}(\Pi_2)\cup(\mathsf{Rel}(\Theta , \Pi_1,\Pi_2, \Gamma_1 , \Delta_1) \cap \mathsf{Rel} (\Gamma_2 , \Delta_2))&\stackrel{\textnormal{distrib.}}=\\\noalign{\smallskip} \mathsf{Rel}(\Theta, \Pi_1,\Pi_2, \Gamma_1 , \Delta_1) \cap \mathsf{Rel} (\Pi_2,\Gamma_2 , \Delta_2)&\stackrel{\text{Case }\ref{case1}}=\\\noalign{\smallskip} \mathsf{Rel}( \Pi_1, \Gamma_1 , \Delta_1) \cap \mathsf{Rel} (\Pi_2,\Gamma_2 , \Delta_2) \end{array}$ \ We conclude that: $$ \framebox{ \infer { \Pi_1 , \Gamma_1 \; ; \; \Pi_2 , \Gamma_2 \xRightarrow{\forall \bar{z} (( \bigwedge\Pi_2\to\bigvee_{i=1}^mC_i)[\,{}_{\bar{t}}^{\bar{z}}\,])} \Delta_1 \; ; \; \Delta_2 } { Q^*_1 , \Pi_1 , \Pi_2 , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_1} \Delta_1 \; ; \; \Delta_2 & \cdots & Q^*_m , \Pi_1 , \Pi_2 , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_m} \Delta_1 \; ; \; \Delta_2 } } $$ Observe that when $\Pi_2=\varnothing$ the split-interpolant of the conclusion can be simplified as follows: $$ \framebox{ \deduce{\forall \bar{z} (( \bigvee_{i=1}^mC_i)[\,{}_{\bar{t}}^{\bar{z}}\,])}{} } $$ \ \emph{Case \ref{case2} for $\mathsf{Gc^s}$}.
The proof differs from that of Case \ref{case1} substantially only as far as the derivability conditions are concerned. Thus, we give a detailed analysis of these and leave to the reader the task of checking that also the language condition is satisfied. By IH every partition of each premise of an $m$-premise rule $R$ ($m\geq 1$) has a split-interpolant. In particular, for all $k\in\{1,\dots,m\}$, there is a $C_k$ such that: \begin{itemize} \item[(I$_k$)] $\vdash \Gamma_1 \Rightarrow \Delta_1 , C_k $ \item[(II$_k$)] $\vdash C_k , Q^*_k ,\Pi_1,\Pi_2 , \Gamma_2 \Rightarrow \Delta_2 $ \item[(III$_k$)] $\mathcal{L} (C_k) \subseteq \mathcal{L} (\Gamma_1 , \Delta_1) \cap \mathcal{L} (Q^*_k , \Pi_1,\Pi_2 , \Gamma_2 , \Delta_2)$ \end{itemize} In case $R$ is a 0-premise rule, (I$_1$), (II$_1$), and (III$_1$) hold by setting $C_1\equiv\top$. Let $\mathcal{D}_1$ be the following derivation, where the topmost sequents are derivable by (II$_1$)--(II$_m$): \begin{small} \begin{equation}\label{firstcase2} \infer=[\infrule{L\wedge}] { \bigwedge_{i=1}^m C_i\wedge\bigwedge\Pi_1, \Pi_2 , \Gamma_2 \Rightarrow \Delta_2 } { \infer[\infrule{R}] { C_1 , \dots , C_m , \Pi_1 , \Pi_2 , \Gamma_2 \Rightarrow \Delta_2 } { \infer=[\infrule{Wkn}] { C_1 , \dots , C_m , Q_1^* , \Pi_1 , \Pi_2 , \Gamma_2 \Rightarrow \Delta_2 } { C_1 , Q_1^* , \Pi_1 , \Pi_2 , \Gamma_2 \Rightarrow \Delta_2 } & \cdots & \infer=[\infrule{Wkn}] { C_1 , \dots , C_m , Q_m^* , \Pi_1 , \Pi_2 , \Gamma_2 \Rightarrow \Delta_2 } { C_m , Q_m^* , \Pi_1 , \Pi_2 , \Gamma_2 \Rightarrow \Delta_2 } } } \end{equation} \end{small} Consider now another derivation $\mathcal{D}_2$ where the left topmost sequents are derivable by (I$_1$)--(I$_m$) and the right ones are initial sequents (we take $P_{i_1},\dots,P_{i_j}= \Pi_1$ if $\Pi_1\neq\varnothing$; else, as we did in (\ref{secondcase1}), we derive the conclusion of the right top-most instance(s) of $R\wedge$ by $R\top$): {\small \begin{equation}\label{secondcase2} \infer[\infrule{R\wedge}] { \Pi_1 , \Gamma_1 \Rightarrow \Delta_1 , \bigwedge_{i=1}^m C_i\wedge\bigwedge\Pi_1 } { \infer=[\infrule{Wkn}] { \Pi_1 , \Gamma_1 \Rightarrow \Delta_1 , \bigwedge_{i=1}^m C_i } { \infer=[\infrule{R\wedge}] { \Gamma_1 \Rightarrow \Delta_1 , \bigwedge_{i=1}^m C_i } { \Gamma_1 \Rightarrow \Delta_1 , C_1 & \cdots & \Gamma_1 \Rightarrow \Delta_1 , C_m } } & \infer=[\infrule{R\wedge}]{ \Pi_1 , \Gamma_1 \Rightarrow \Delta_1 , \bigwedge\Pi_1} {\Pi_1 , \Gamma_1 \Rightarrow \Delta_1 , P_{i_{1}} & \cdots & \Pi_1 , \Gamma_1 \Rightarrow \Delta_1 , P_{i_{j}}} } \end{equation} } Let $\bar{t}$ be all terms $t_1 , \dots , t_\ell$ such that $t_1 , \dots , t_\ell \in \mathsf{Ter}( \bigwedge_{i=1}^m C_i\wedge\bigwedge\Pi_1)$ and $t_1 , \dots , t_\ell \notin \mathsf{Ter} (\Pi_1 , \Gamma_1 , \Delta_1) \cap \mathsf{Ter} (\Pi_2 , \Gamma_2 , \Delta_2)$. As in the previous case, it is easy to show that: $$ (\ddag)\qquad t_1 , \dots , t_\ell \notin \mathsf{Ter}(\Pi_2,\Gamma_2,\Delta_2) $$ Moreover, let $\bar{z}$ be variables $z_1 , \dots , z_\ell$ new to $\mathcal{D}_1$ and $\mathcal{D}_2$.
We reason analogously to the previous case to obtain: $$ \textnormal{(I}_C) \qquad \vdash\Pi_1 , \Gamma_1 \Rightarrow \Delta_1 , \exists \bar{z}((\bigwedge_{i=1}^m C_i\wedge\bigwedge\Pi_1)[\,{}_{\bar{t}}^{\bar{z}}\,]) $$ As above, thanks to ($\ddag$), we also obtain: $$ \textnormal{(II}_C) \qquad \vdash \exists \bar{z}((\bigwedge_{i=1}^m C_i\wedge\bigwedge\Pi_1)[\,{}_{\bar{t}}^{\bar{z}}\,]),\Pi_2 , \Gamma_2 \Rightarrow \Delta_2 $$ Let $C$ be $\exists \bar{z}((\bigwedge_{i=1}^m C_i\wedge\bigwedge\Pi_1)[\,{}_{\bar{t}}^{\bar{z}}\,])$. Given that $\mathsf{Rel}(\Theta,\Pi)\subseteq\mathsf{Rel}(\Pi_2,\Gamma_2,\Delta_2)$, and given that we have quantified away all terms in $\bar{t}$, we have: $$ \textnormal{(III}_C) \qquad \mathcal{L} (C) \subseteq \mathcal{L} (\Pi_1, \Gamma_1 , \Delta_1) \cap \mathcal{L} (\Pi_2, \Gamma_2 , \Delta_2) $$ We conclude that $C$ is a split-interpolant of the given partition. $$ \framebox{ \infer { \Pi_1 , \Gamma_1 \; ; \; \Pi_2 , \Gamma_2 \xRightarrow{\exists \bar{z}((\bigwedge_{i=1}^m C_i\wedge\bigwedge\Pi_1)[\,{}_{\bar{t}}^{\bar{z}}\,])} \Delta_1 \; ; \; \Delta_2 } { \Gamma_1 \; ; \; Q^*_1 , \Pi_1 , \Pi_2 , \Gamma_2 \xRightarrow{C_1} \Delta_1 \; ; \; \Delta_2 & \dots & \Gamma_1 \; ; \; Q^*_m , \Pi_1 , \Pi_2 , \Gamma_2 \xRightarrow{C_m} \Delta_1 \; ; \; \Delta_2 } } $$ As for the previous case, when $\Pi_1=\varnothing$ we have a simpler split-interpolant of the conclusion: $$ \framebox{ \deduce{\exists \bar{z}((\bigwedge_{i=1}^m C_i)[\,{}_{\bar{t}}^{\bar{z}}\,])}{} } $$ \ \emph{Case \ref{case3} for $\mathsf{Gc^s}$.} We can proceed either as in Case \ref{case1} or as in Case \ref{case2}. If we proceed as in Case \ref{case1}, we obtain the following split-interpolant: $$ \framebox{ \infer { \Pi_1 , \Gamma_1 \; ; \; \Pi_2 , \Gamma_2 \xRightarrow{\forall \bar{z} (( \bigwedge\Pi_2\to\bigvee_{i=1}^mC_i)[\,{}_{\bar{t}}^{\bar{z}}\,])} \Delta_1 \; ; \; \Delta_2 } { Q^*_1 , \Pi_1 , \Pi_2 , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_1} \Delta_1 \; ; \; \Delta_2 & \cdots & Q^*_m , \Pi_1 , \Pi_2 , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_m} \Delta_1 \; ; \; \Delta_2 } } $$ The proof that the formula $C$ presented above is the split-interpolant of the conclusion is exactly as for Case \ref{case1}, save for the relational part (III.2$_C$) of the language condition. In this case we are assuming that $\mathsf{Rel}(\Theta,\Pi)\not\subseteq\mathsf{Rel}(\Pi,\Gamma,\Delta)$. This immediately implies $$ (+)\quad \mathsf{Rel}(\Pi_1,\Pi_2)=\varnothing $$ and, together with the fact that $|\mathsf{Rel}(\Theta)|\leq 1$, it implies $$ (++)\quad \mathsf{Rel}(\Theta)\cap\mathsf{Rel}(\Pi_2,\Gamma_2,\Delta_2)=\varnothing $$ Hence, we can show that (III.2$_C$) holds via the following set-theoretic reasoning: $\begin{array}{lc} \mathsf{Rel}(C)&\stackrel{\textnormal{III}_k}\subseteq\\\noalign{\smallskip} \mathsf{Rel}(\Pi_2)\cup(\mathsf{Rel}(\Theta , \Pi_1,\Pi_2, \Gamma_1 , \Delta_1) \cap \mathsf{Rel} (\Gamma_2 , \Delta_2))&\stackrel{\textnormal{distrib.}}=\\\noalign{\smallskip} \mathsf{Rel}(\Theta , \Pi_1,\Pi_2, \Gamma_1 , \Delta_1) \cap \mathsf{Rel} (\Pi_2,\Gamma_2 , \Delta_2)&\stackrel{(+),(++)}=\\\noalign{\smallskip} \mathsf{Rel}( \Gamma_1 , \Delta_1) \cap \mathsf{Rel} (\Gamma_2 , \Delta_2) \end{array}$ \ \emph{Case \ref{case1} for $\mathsf{Gi^s}$}.
The proof is the same as for Case \ref{case1} in $\mathsf{Gc^s}$ (with $\Delta_1=\varnothing$ and $\Delta_2= A$) save for the derivations $\mathcal{D}_1$ and $\mathcal{D}_2$ presented in (\ref{firstcase1}) and (\ref{secondcase1}), which are not $\mathsf{Gi^s}$-derivations. It is immediate to see that we can obtain a $\mathsf{Gi^s}$-derivation from the derivation in (\ref{firstcase1}) by simply omitting the instances of weakening and applying directly instances of $R\lor$ to the leaves. On the other hand, the derivation $\mathcal{D}_2$ presented in (\ref{secondcase1}) becomes a $\mathsf{Gi^s}$-derivation by simply dropping the singleton multiset $\Delta_2$ from the left top-most sequents and then adding an instance of weakening on the left premise of $L\to$. \ \emph{Case \ref{case2} for $\mathsf{Gi^s}$}. The proof is the same as for Case \ref{case2} in $\mathsf{Gc^s}$, since the derivations presented in (\ref{firstcase2}) and (\ref{secondcase2}) are $\mathsf{Gi^s}$-derivations when $\Delta_1=\varnothing$ and $\Delta_2=A$. \ \emph{Case \ref{case3} for $\mathsf{Gi^s}$}. We may proceed as for Case \ref{case1} for $\mathsf{Gi^s}$ save for the relational part (III.2$_C$) of the language condition, where we reason as in Case \ref{case3} for $\mathsf{Gc^s}$. \end{proof} From Lemma \ref{th:maehara_singular} it is immediate to conclude that singular geometric extensions of classical and intuitionistic logic satisfy the interpolation theorem, namely: \begin{thm}\label{CraigSingular} $\mathsf{G^s}$ has the interpolation property. \end{thm} \section{Applications} We now consider some corollaries of Theorem \ref{CraigSingular} in which the strategy for building interpolants provided in Lemma \ref{th:maehara_singular} is applied. Notice that in the theories considered in this section all contracted instances are admissible and, hence, we can ignore them; see the discussion after Definition \ref{closurecondition}. \subsection{First-order logic with identity} We start with first-order logic with identity. Recall that a cut-free calculus for classical first-order logic with identity has been presented in \cite{NegrivonPlato1998} by adding on top of $\mathsf{Gc}$ the rules \emph{Ref}$_=$ and \emph{Repl}$_=$ corresponding to the reflexivity of $=$ and Leibniz's principle of the indiscernibility of identicals, respectively. In intuitionistic theories, on the other hand, identity is often treated differently, and we will provide a constructively more acceptable treatment of identity later, in dealing with apartness. In general, however, nothing prevents us from building intuitionistic first-order logic with identity in a parallel fashion to the classical case. This is, for example, the route taken in \cite{TroelstraSchwichtenberg2000} and we will follow suit. More specifically, let $\mathsf{G}^=$ be $\mathsf{G} + \{ \textnormal{\emph{Ref}}_= , \textnormal{\emph{Repl}}_=\}$. Notice that, since \emph{Ref}$_=$ and \emph{Repl}$_=$ are geometric rules, cut elimination holds in $\mathsf{Gi}^=$ by virtue of Theorem \ref{prop:structural_properties_geometric_in}. Moreover, since they are also singular geometric, it follows from our Theorem \ref{CraigSingular} that $\mathsf{G}^=$ has the interpolation property, i.e.: \begin{cor} $\mathsf{G}^=$ has the interpolation property. \end{cor} \begin{proof} We determine the split-interpolants by applying the procedures given in the proof of Lemma \ref{th:maehara_singular}.
The rule \emph{Ref}$_=$ can be treated as an instance of Case \ref{case1} with $\Pi_2=\varnothing$ (obviously, it could also have been treated as an instance of Case \ref{case2}). Depending on whether both $s\in\mathsf{Ter}(C)$ and $s\not\in\mathsf{Ter} (\Gamma_1 , \Delta_1)$ or not, we then have, respectively: $$ \framebox{ \infer { \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{\forall z(C[{}_{s}^{z}])} \Delta_1 \; ; \; \Delta_2 } { s=s, \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2 } \qquad \infer { \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2 } { s=s, \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2 } } $$ For \emph{Repl}$_=$, there are four possible partitions of the conclusion: \begin{itemize} \item $s = t , P [{}_{x}^{s}] , \Gamma_1 \; ; \; \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $\Gamma_1 \; ; \; s = t , P [{}_{x}^{s}] , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $P [{}_{x}^{s}] , \Gamma_1 \; ; \; s = t , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $s = t , \Gamma_1 \; ; \; P [{}_{x}^{s}] , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \end{itemize} Accordingly, we need to consider four sub-cases. As in Case \ref{case1} of Lemma \ref{th:maehara_singular}, when $\Pi_2=\varnothing$, the interpolant for the first partition is as follows: $$\framebox{ \infer{s=t,P[{}_{x}^{s}], \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2 }{P[{}_{x}^{t}],s=t,P[{}_{x}^{s}], \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} }$$ The interpolant for the second partition is obtained by reasoning as in Case \ref{case2} of Lemma \ref{th:maehara_singular} with $\Pi_1=\varnothing$: $$\framebox{ \infer{\Gamma_1 \; ; \; s=t,P[{}_{x}^{s}],\Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2 }{\Gamma_1 \; ; \; P[{}_{x}^{t}],s=t,P[{}_{x}^{s}], \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} }$$ The interpolant for the third partition is found as in Case \ref{case1} of Lemma \ref{th:maehara_singular}, depending on whether $t\in\mathsf{Ter} (P[{}_{x}^{s}] , \Gamma_1 , \Delta_1)$ (left derivation in the box below) or not (right derivation in the box below).
\begin{small} $$ \framebox{ \infer { P[{}_{x}^{s}] , \Gamma_1 \; ; \; s=t , \Gamma_2 \xRightarrow{s = t \to C} \Delta_1 \; ; \; \Delta_2 } { P[{}_{x}^{t}],s=t,P[{}_{x}^{s}] , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2 } \quad \infer { P[{}_{x}^{s}] , \Gamma_1 \; ; \; s=t , \Gamma_2 \xRightarrow{\forall z (s = z \to C[{}_{t}^{z}])} \Delta_1 \; ; \; \Delta_2 } { P[{}_{x}^{t}],s=t,P[{}_{x}^{s}] , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2 } }$$\end{small} Lastly, the interpolant for the fourth partition is found as in Case \ref{case2} of Lemma \ref{th:maehara_singular}, depending on whether $t\in\mathsf{Ter} (P[{}_{x}^{s}] , \Gamma_2 , \Delta_2)$ or not: \begin{small} $$ \framebox{ \infer { s=t , \Gamma_1 \; ; \; P[{}_{x}^{s}], \Gamma_2 \xRightarrow{s = t \wedge C} \Delta_1 \; ; \; \Delta_2 } { \Gamma_1 \; ; \; P[{}_{x}^{t}],s=t,P[{}_{x}^{s}] ,\Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2 } \quad \infer { s=t , \Gamma_1 \; ; \; P[{}_{x}^{s}], \Gamma_2 \xRightarrow{\exists z (s = z \wedge C[{}_{t}^{z}])} \Delta_1 \; ; \; \Delta_2 } { \Gamma_1 \; ; \; P[{}_{x}^{t}],s=t,P[{}_{x}^{s}] ,\Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2 } } $$ \end{small} \end{proof} \subsection{Equivalence relations}\label{EQ} In a perfectly parallel fashion, we obtain the theory of equivalence relations by adding to $\mathsf{G}$ the rules corresponding to the reflexivity, transitivity and symmetry of a binary relation $\sim$. Thus, $\mathsf{EQ} = \mathsf{G} + \{\textnormal{\emph{Ref}}_\sim \, , \, \textnormal{\emph{Trans}}_\sim \, , \, \textnormal{\emph{Sym}}_\sim\}$. \begin{center} \begin{tabular}{cc} $ \infer[\infrule{Ref_\sim}]{\Gamma \Rightarrow \Delta}{s \sim s , \Gamma \Rightarrow \Delta} $ & $ \infer[\infrule{Trans_\sim}]{s \sim t , t \sim u , \Gamma \xRightarrow{} \Delta}{s \sim u , s \sim t , t \sim u , \Gamma \xRightarrow{} \Delta} $ \\\\ \multicolumn{2}{c}{ $\infer[\infrule{Sym_\sim}]{s \sim t , \Gamma \xRightarrow{} \Delta}{t \sim s , s \sim t , \Gamma \xRightarrow{} \Delta}$ } \end{tabular} \end{center} From the fact that these rules are singular geometric, it follows that: \begin{cor} $\mathsf{EQ}$ has the interpolation property. \end{cor} \begin{proof} The case of \emph{Ref}$_\sim$ is like that for \emph{Ref}$_=$ in $\mathsf{G}^=$, the only difference being that, when $\sim$ is not in $\mathsf{Rel}(\Gamma,\Delta)$, the rule \emph{Ref}$_\sim$ becomes an instance of Case \ref{case3}.\footnote{Otherwise, it is an instance of Case \ref{case1} or of Case \ref{case2}, and then the split-interpolant of the conclusion can be determined as we have shown for \emph{Ref}$_=$, except for the use of the existential quantifier when we have an instance of Case \ref{case2} only and we must quantify away $s$.} We consider in detail the cases of \emph{Trans}$_\sim$ and \emph{Sym}$_\sim$.
Regarding \emph{Trans}$_\sim$, there are four possible partitions of the conclusion: \begin{itemize} \item $s \sim t , t \sim u , \Gamma_1 \; ; \; \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $\Gamma_1 \; ; \; s \sim t , t \sim u , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $s \sim t , \Gamma_1 \; ; \; t \sim u , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $t \sim u , \Gamma_1 \; ; \; s \sim t , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \end{itemize} For the first two partitions, we find the split-interpolant by reasoning as in Case \ref{case1} with $\Pi_2=\varnothing$ and Case \ref{case2} with $\Pi_1=\varnothing$, respectively. Hence, the split-interpolants for the first and second partitions are, respectively: \noindent \scalebox{0.95000}{$$ \framebox{ \infer{s \sim t , t \sim u , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2}{s \sim u , s \sim t , t \sim u , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2}\qquad \infer{\Gamma_1 \; ; \; s \sim t , t \sim u , \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2}{\Gamma_1 \; ; \; s \sim u , s \sim t , t \sim u , \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} } $$} For the last two partitions we can proceed as in Case \ref{case1} or as in Case \ref{case2}. By proceeding as in Case \ref{case1} we find the following split-interpolants, assuming, respectively, $u\not\in\mathsf{Ter} (s \sim t , \Gamma_1 , \Delta_1)$ and $s\not\in\mathsf{Ter} (t \sim u , \Gamma_1 , \Delta_1)$: \begin{footnotesize} $$ \framebox{\infer{s \sim t , \Gamma_1 \; ; \; t \sim u, \Gamma_2 \xRightarrow{\forall z (t \sim z \to C[{}_{u}^{z}])} \Delta_1 \; ; \; \Delta_2}{s \sim u, s \sim t , t \sim u,\Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} \quad \infer{t \sim u , \Gamma_1 \; ; \; s \sim t, \Gamma_2 \xRightarrow{\forall z (z \sim t \to C[{}_{s}^{z}])} \Delta_1 \; ; \; \Delta_2}{s \sim u, s \sim t , t \sim u ,\Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} } $$ \end{footnotesize} \noindent If, instead, $u\in\mathsf{Ter} (s \sim t , \Gamma_1 , \Delta_1)$ or $s\in\mathsf{Ter} (t \sim u , \Gamma_1 , \Delta_1)$, then we do not quantify them away and we have: \begin{footnotesize} $$ \framebox{\infer{s \sim t , \Gamma_1 \; ; \; t \sim u, \Gamma_2 \xRightarrow{t \sim u \to C} \Delta_1 \; ; \; \Delta_2}{s \sim u, s \sim t , t \sim u,\Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} \quad \infer{t \sim u , \Gamma_1 \; ; \; s \sim t, \Gamma_2 \xRightarrow{s \sim t \to C} \Delta_1 \; ; \; \Delta_2}{s \sim u, s \sim t , t \sim u ,\Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} } $$ \end{footnotesize} Regarding \emph{Sym}$_\sim$, there are two possible partitions of the conclusion: \begin{itemize} \item $s \sim t , \Gamma_1 \; ; \; \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $\Gamma_1 \; ; \; s \sim t , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \end{itemize} We find the split-interpolant by reasoning as in Case \ref{case1} with $\Pi_2=\varnothing$ and Case \ref{case2} with $\Pi_1=\varnothing$, respectively.
Hence we have: \begin{small}$$ \framebox{ \infer{s \sim t , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2}{t \sim s , s \sim t , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2}\qquad \infer{\Gamma_1 \; ; \; s \sim t , \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2}{\Gamma_1 \; ; \; t \sim s , s \sim t , \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} } $$\end{small}\end{proof} \subsection{Partial and linear orders}\label{PO} Now we consider some well-known order theories. We start with partial orders. In sequent calculus, the theory of partial orders is obtained by extending $\mathsf{Gc^=}$ with the following rules corresponding to the axioms of reflexivity, transitivity and anti-symmetry of a binary relation $\leqslant$. Thus, let $\mathsf{PO} = \mathsf{Gc^=} + \{\textnormal{\emph{Ref}}_\leqslant \, , \, \textnormal{\emph{Trans}}_\leqslant \, , \, \textnormal{\emph{Anti-sym}}_\leqslant\}$: \begin{center} \begin{tabular}{cc} $ \infer[\infrule{Ref_\leqslant}]{\Gamma \Rightarrow \Delta}{s \leqslant s , \Gamma \Rightarrow \Delta} $ & $ \infer[\infrule{Trans_\leqslant}]{s \leqslant t , t \leqslant u , \Gamma \xRightarrow{} \Delta}{s \leqslant u , s \leqslant t , t \leqslant u , \Gamma \xRightarrow{} \Delta} $ \\\\ \multicolumn{2}{c}{ $\infer[\infrule{Anti\textnormal{-}sym_\leqslant}]{s \leqslant t , t \leqslant s , \Gamma \xRightarrow{} \Delta}{s = t , s \leqslant t , t \leqslant s , \Gamma \xRightarrow{} \Delta}$ } \end{tabular} \end{center} Linear orders are obtained by assuming that the partial order $\leqslant$ is also linear, i.e.\ $\mathsf{LO} = \mathsf{PO} + \{ \textnormal{\emph{Lin}}_\leqslant\}$. $$ \infer[\infrule{Lin_\leqslant}]{\Gamma \Rightarrow \Delta}{s \leqslant t , \Gamma \Rightarrow \Delta & t \leqslant s , \Gamma \Rightarrow \Delta} $$ Both $\mathsf{PO}$ and $\mathsf{LO}$ are singular geometric theories, hence: \begin{cor} $\mathsf{LO}$ (hence, $\mathsf{PO}$) has the interpolation property. \end{cor} \begin{proof} The procedures for building the interpolants for \emph{Ref}$_\leqslant$ and \emph{Trans}$_\leqslant$ are the same as those for \emph{Ref}$_\sim$ and \emph{Trans}$_\sim$, respectively, in $\mathsf{EQ}$; that for \emph{Anti-sym}$_\leqslant$ is like that for \emph{Trans}$_\sim$, save that here there is no need to quantify away any term occurring in the split-interpolant. For \emph{Lin}$_\leqslant$, only one partition of the conclusion has to be considered, namely $\Gamma_1 \; ; \; \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$.
Its interpolant can be found as in Case \ref{case3} of Lemma \ref{th:maehara_singular} with $\Pi_2=\varnothing$, provided that $\leqslant$ is not in $\mathsf{Rel}(\Gamma,\Delta)$.\footnote{Else, we proceed as in Case \ref{case1} or \ref{case2} and, as for rule \emph{Ref}$_\sim$, in the latter case, when we have to quantify away $s$ and $t$ we do it via existential quantifiers.} Assuming that both $s$ and $t$ are in $\mathsf{Ter}(C_1,C_2)$ but not in $\mathsf{Ter}(\Gamma_1,\Delta_1)$: $$ \framebox{ \infer { \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{\forall z_1\forall z_2((C_1\lor C_2)[{}_{s}^{z_1}{}_{t}^{z_2}])} \Delta_1 \; ; \; \Delta_2 } { s\leqslant t, \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_1} \Delta_1 \; ; \; \Delta_2&t\leqslant s, \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_2} \Delta_1 \; ; \; \Delta_2 } } $$ If, instead, $s$ or $t$ is in $\mathsf{Ter}(\Gamma_1,\Delta_1)$, or if it is not in $\mathsf{Ter}(C_1,C_2)$, then it is not quantified away. \end{proof} Unlike $\mathsf{G^=}$ and $\mathsf{EQ}$, the underlying logical calculus of both $\mathsf{PO}$ and $\mathsf{LO}$ is the classical one. The reason is that linearity is intuitionistically contentious and normally it requires a different, more constructively acceptable, axiomatization, which will be considered in Section \ref{sectPPO}. \subsection{Strict partial and linear orders} The theory of strict partial orders consists of the axioms of first-order logic with identity plus the irreflexivity and transitivity of $<$. As we did for $\mathsf{PO}$ and $\mathsf{LO}$, we consider this theory to be based on classical logic, i.e.\ we add on top of $\mathsf{Gc}^=$ the following rules: \begin{center} \begin{tabular}{cc} $\infer[\infrule{Irref_<}]{s < s , \Gamma \xRightarrow{} \Delta}{ } \qquad \infer[\infrule{Trans_<}]{s < t , t < u , \Gamma \xRightarrow{} \Delta}{s < u , s < t , t < u , \Gamma \xRightarrow{} \Delta} $ \end{tabular} \end{center} Let $\mathsf{SPO}$ be $\mathsf{Gc}^= + \{\textnormal{\emph{Irref}}_< , \textnormal{\emph{Trans}}_<\}$. Strict linear orders are then obtained by assuming that $<$ is also trichotomic, i.e.\ $\mathsf{SLO} = \mathsf{SPO} + \{ \textnormal{\emph{Trich}}_<\}$: $$ \infer[\infrule{Trich_<}]{\Gamma \xRightarrow{} \Delta}{s = t , \Gamma \xRightarrow{} \Delta & s < t , \Gamma \xRightarrow{} \Delta & t < s , \Gamma \xRightarrow{} \Delta} $$ \begin{cor} $\mathsf{SLO}$ (hence, $\mathsf{SPO}$) has the interpolation property. \end{cor} \begin{proof} We show how to find the interpolants for \emph{Irref}$_<$ and \emph{Trich}$_<$, while \emph{Trans}$_<$ is identical to \emph{Trans}$_\sim$. We start with \emph{Irref}$_<$.
There are two possible partitions of its conclusion, namely \begin{itemize} \item $s < s , \Gamma_1 \; ; \; \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $\Gamma_1 \; ; \; s < s , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \end{itemize} As in Case \ref{case1} with $\Pi_2=\varnothing$ (and $m=0$) and as in Case \ref{case2} with $\Pi_1=\varnothing$ (and $m=0$) of Lemma \ref{th:maehara_singular}, we find the split-interpolant for each partition as follows: \ $$ \framebox{ \infer{s < s , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{\bot} \Delta_1 \; ; \; \Delta_2}{ } \qquad \infer{\Gamma_1 \; ; \; s < s , \Gamma_2 \xRightarrow{\top} \Delta_1 \; ; \; \Delta_2}{\phantom{A^7} } } $$ \ Regarding \emph{Trich}$_<$, we need to consider only one partition of the conclusion, namely $\Gamma_1 \; ; \; \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$, whose interpolant can be found as in Case \ref{case3} of Lemma \ref{th:maehara_singular} with $\Pi_2=\varnothing$ when $<$ is not in $\mathsf{Rel}(\Gamma,\Delta)$.\footnote{Else, as for rule \emph{Ref}$_\sim$, we proceed as in one of Cases \ref{case1} and \ref{case2}.} Assuming that both $s$ and $t$ are in $\mathsf{Ter}(C_1,C_2,C_3)$ but not in $\mathsf{Ter}(\Gamma_1,\Delta_1)$: $$ \scalebox{0.90000}{\framebox{ \infer { \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{\forall z_1\forall z_2((C_1\lor C_2\lor C_3)[{}_{s}^{z_1}{}_{t}^{z_2}])} \Delta_1 \; ; \; \Delta_2 } { s=t,\Gamma_1\; ; \;\Gamma_2\xRightarrow{C_1}\Delta_1\; ; \;\Delta_2& s< t, \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_2} \Delta_1 \; ; \; \Delta_2&t< s, \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_3} \Delta_1 \; ; \; \Delta_2 } } } $$ If $s$ or $t$ is in $\mathsf{Ter}(\Gamma_1,\Delta_1)$, or if it is not in $\mathsf{Ter}(C_1,C_2,C_3)$, then it is not quantified away. \begin{comment} For the case of \emph{Trans}$_<$ there are four possible partitions of the conclusion: \begin{itemize} \item $s < t , t < u , \Gamma_1 \; ; \; \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $\Gamma_1 \; ; \; s < t , t < u , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $s < t , \Gamma_1 \; ; \; t < u , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \item $t < u , \Gamma_1 \; ; \; s < t , \Gamma_2 \Rightarrow \Delta_1 \; ; \; \Delta_2$ \end{itemize} For the first two partitions, we find the split-interpolant as follows by reasoning as in Case \ref{case1} with $\Pi_2=\varnothing$ and Case \ref{case2} with $\Pi_1=\varnothing$, respectively.
More specifically, a split-interpolant for the first and second partitions is: \noindent \scalebox{0.95000}{$$ \framebox{ \infer{s < t , t < u , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2}{s < u , s < t , t < u , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2}\qquad \infer{\Gamma_1 \; ; \; s < t , t < u , \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2}{\Gamma_1 \; ; \; s < u , s < t , t < u , \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} } $$} Finally, if we use Case \ref{case1} of Lemma \ref{th:maehara_singular} we find the split-interpolants for the remaining partitions as follows, assuming, respectively, $u\not\in\mathsf{Ter} (s<t , \Gamma_1 , \Delta_1)$ and $s\not\in\mathsf{Ter} (t<u , \Gamma_1 , \Delta_1)$: \begin{footnotesize} $$ \framebox{\infer{s < t , \Gamma_1 \; ; \; t < u, \Gamma_2 \xRightarrow{\forall z (t < z \to C[{}_{u}^{z}])} \Delta_1 \; ; \; \Delta_2}{s < u, s < t , t < u,\Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} \quad \infer{t < u , \Gamma_1 \; ; \; s < t, \Gamma_2 \xRightarrow{\forall z (z < t \to C[{}_{s}^{z}])} \Delta_1 \; ; \; \Delta_2}{s < u, s < t , t < u ,\Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C} \Delta_1 \; ; \; \Delta_2} } $$ \end{footnotesize} \end{comment} \end{proof} \subsection{Apartness}\label{apart} We noticed earlier that in intuitionistic theories the identity relation is not always treated as in classical logic. In particular, identity is defined in terms of the more constructively acceptable relation of apartness. Apartness was originally introduced by Brouwer (and later axiomatized by Heyting in \cite{Heyting1956}) to express inequality between real numbers in the constructive analysis of the continuum: whereas saying that two real numbers $a$ and $b$ are unequal only means that the assumption $a = b$ is contradictory, to say that $a$ and $b$ are apart expresses the constructively stronger requirement that their distance on the real line can be effectively measured, i.e. that $|\, a - b \,| > 0$ has a constructive proof. Classically, inequality and apartness coincide, but intuitionistically two real numbers can be unequal without being apart. The theory of apartness consists of intuitionistic first-order logic plus the irreflexivity and splitting of $\neq$. Following \cite{Negri1999}, the theory of apartness is formulated by adding on top of $\mathsf{Gi}$ the following rules:\footnote{Notice that Negri's underlying calculus is a quantifier-free version of $\mathsf{Gi}$.} \begin{center} \begin{tabular}{cc} $\infer[\infrule{Irref_{\neq}}]{s \neq s , \Gamma \xRightarrow{} A}{ } \qquad \infer[\infrule{Split_{\neq}}]{s \neq t ,\Gamma \xRightarrow{} A}{s \neq u , s \neq t, \Gamma \xRightarrow{} A&t \neq u , s \neq t, \Gamma \xRightarrow{} A} $ \end{tabular} \end{center} Let $\mathsf{AP} = \mathsf{Gi} + \{ \textnormal{\emph{Irref}}_{\neq} \, , \, \textnormal{\emph{Split}}_{\neq}\}$. Given that these two rules are singular geometric rules, it follows that: \begin{cor}\label{ap} $\mathsf{AP}$ has the interpolation property. \end{cor} \begin{proof} As above, we show how to find the interpolants for \emph{Irref}$_{\neq}$ and \emph{Split}$_{\neq}$. The former is identical to that of \emph{Irref}$_<$ in $\mathsf{SPO}$.
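For concreteness, we record the two partitions of the conclusion of \emph{Irref}$_{\neq}$ together with their split-interpolants, which are $\bot$ and $\top$ exactly as for \emph{Irref}$_<$ (here $\Delta_1=\varnothing$ and $\Delta_2=A$): $$ \framebox{ \infer{s \neq s , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{\bot} \; ; \; A}{ } \qquad \infer{\Gamma_1 \; ; \; s \neq s , \Gamma_2 \xRightarrow{\top} \; ; \; A}{\phantom{A^7} } } $$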
In the case of \emph{Split}$_{\neq}$, there are two possible partitions of the conclusion: \begin{itemize} \item $s \neq t , \Gamma_1 \; ; \; \Gamma_2 \Rightarrow \; ; \; A$ \item $\Gamma_1 \; ; \; s \neq t , \Gamma_2 \Rightarrow \; ; \; A$ \end{itemize} For the first partition, we use Case \ref{case1} of Lemma \ref{th:maehara_singular} with $\Pi_2=\varnothing$. Thus, if $u\not\in\mathsf{Ter}(s\neq t,\Gamma_1)$ and $u\in \mathsf{Ter}(C_1,C_2)$, a split-interpolant for the first partition is: $$ \framebox{ \infer{s\neq t, \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{\forall z(C_1[{}_{u}^{z}]\lor C_2[{}_{u}^{z}])} \; ; \; A}{s \neq u , s \neq t ,\Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_1} \; ; \; A\quad &t \neq u , s \neq t, \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_2} \; ; \; A } } $$ For the second partition, we use Case \ref{case2} of Lemma \ref{th:maehara_singular} with $\Pi_1=\varnothing$. Thus, if $u\not\in\mathsf{Ter}(s\neq t,\Gamma_2,A)$, a split-interpolant for the second partition is: $$ \framebox{ \infer{ \Gamma_1 \; ; \; s\neq t,\Gamma_2 \xRightarrow{\exists z(C_1[{}_{u}^{z}]\wedge C_2[{}_{u}^{z}])} \; ; \; A}{\Gamma_1 \; ; \; s \neq u , s \neq t , \Gamma_2 \xRightarrow{C_1} \; ; \; A\quad & \Gamma_1 \; ; \; t \neq u , s \neq t,\Gamma_2 \xRightarrow{C_2} \; ; \; A } } $$ When $u$ is, respectively, in $\mathsf{Ter}(s\neq t,\Gamma_1)$ or in $\mathsf{Ter}(s\neq t,\Gamma_2,A)$, as well as when it is not in $\mathsf{Ter}(C_1,C_2)$, we do not quantify it away.\end{proof} \subsection{Positive partial and linear orders}\label{sectPPO} Just as apartness is a positive version of inequality, so excess $\nleqslant$ is a positive version of the negation of a partial order $\leqslant$. The excess relation was introduced by von Plato in \cite{vonPlato2001} and has been further investigated by Negri in \cite{Negri1999}. The theory of positive partial orders consists of intuitionistic first-order logic plus the irreflexivity and co-transitivity of $\nleqslant$.\footnote{Co-transitivity and splitting should not be confused. In particular, splitting (along with irreflexivity) gives symmetry, whereas co-transitivity does not. This is what distinguishes apartness (which is symmetric) from excess (which in general is not).} Let $\mathsf{PPO} = \mathsf{Gi} + \{ \textnormal{\emph{Irref}}_{\nleqslant} \, , \, \textnormal{\emph{Co-trans}}_{\nleqslant}\}$: \begin{center} \begin{tabular}{cc} $\infer[\infrule{Irref_{\nleqslant}}]{s \nleqslant s , \Gamma \xRightarrow{} A}{ } \qquad \infer[\infrule{Co\textnormal{-}trans_{\nleqslant}}]{s \nleqslant t ,\Gamma \xRightarrow{} A}{s \nleqslant u , s \nleqslant t, \Gamma \xRightarrow{} A&u \nleqslant t , s \nleqslant t, \Gamma \xRightarrow{} A} $ \end{tabular} \end{center} The theory of positive linear orders extends the theory of positive partial orders with the asymmetry of $\nleqslant$. Specifically, let $\mathsf{PLO} = \mathsf{PPO} + \{\textnormal{\emph{Asym}}_\nleqslant\}$: $$ \infer[\infrule Asym_{\nleqslant}]{s \nleqslant t,t \nleqslant s,\Gamma\Rightarrow A}{} $$ Given that all these rules are singular geometric, it follows from Theorem \ref{CraigSingular} that: \begin{cor} $\mathsf{PPO}$ and $\mathsf{PLO}$ have the interpolation property. \end{cor} \begin{proof} The cases of \emph{Irref$\,{}_{\nleqslant}$} and of \emph{Co-trans$\,{}_{\nleqslant}$} are like the analogous cases for rules \emph{Irref$\,{}_{\neq}$} and \emph{Split$\,{}_{\neq}$}, and the split-interpolants can be obtained from those in the proof of Corollary \ref{ap}.
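For instance, spelling this out for the first partition $s \nleqslant t , \Gamma_1 \; ; \; \Gamma_2 \Rightarrow \; ; \; A$ of the conclusion of \emph{Co-trans}$_{\nleqslant}$: if $u\not\in\mathsf{Ter}(s\nleqslant t,\Gamma_1)$ and $u\in\mathsf{Ter}(C_1,C_2)$, we obtain, exactly as for \emph{Split}$_{\neq}$, $$ \framebox{ \infer{s\nleqslant t, \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{\forall z(C_1[{}_{u}^{z}]\lor C_2[{}_{u}^{z}])} \; ; \; A}{s \nleqslant u , s \nleqslant t ,\Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_1} \; ; \; A\quad &u \nleqslant t , s \nleqslant t, \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{C_2} \; ; \; A } } $$ and similarly for the second partition, with an existential quantifier in place of the universal one.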
For rule \emph{Asym$\,{}_{\nleqslant}$} we have four possible partitions of the conclusion: \begin{itemize} \item $s \nleqslant t , t \nleqslant s , \Gamma_1 \; ; \; \Gamma_2 \Rightarrow \; ; \; A$ \item $\Gamma_1 \; ; \; s \nleqslant t , t \nleqslant s , \Gamma_2 \Rightarrow \; ; \; A$ \item $s \nleqslant t , \Gamma_1 \; ; \; t \nleqslant s , \Gamma_2 \Rightarrow \; ; \; A$ \item $t \nleqslant s , \Gamma_1 \; ; \; s \nleqslant t , \Gamma_2 \Rightarrow \; ; \; A$ \end{itemize} Their split-interpolants are like those for rule \emph{Anti-sym}$_\leqslant$, except that here we have a 0-premise rule. For the first and second partitions we have, respectively: $$ \framebox{ \infer{s \nleqslant t , t \nleqslant s , \Gamma_1 \; ; \; \Gamma_2 \xRightarrow{\bot} \; ; \; A}{\phantom{C}}\qquad\qquad \infer{\Gamma_1 \; ; \; s \nleqslant t , t \nleqslant s , \Gamma_2 \xRightarrow{\top} \; ; \; A}{\phantom{C}} } $$ \noindent Finally, for the last two partitions we have, respectively: $$ \framebox{\infer{s \nleqslant t , \Gamma_1 \; ; \; t \nleqslant s, \Gamma_2 \xRightarrow{t \nleqslant s \to \bot} \; ; \; A}{\phantom{C}} \quad \infer{t \nleqslant s , \Gamma_1 \; ; \; s \nleqslant t, \Gamma_2 \xRightarrow{s \nleqslant t\to\bot} \; ; \; A}{\phantom{C}} } $$ \end{proof} To conclude, we have shown (Lemma \ref{th:maehara_singular}) how to extend Maehara's lemma to extensions of classical and intuitionistic sequent calculi with singular geometric rules, and we have provided a number of interesting examples of singular geometric rules that are important both in logic and in mathematics, especially in order theories. In particular, we have shown that Lemma \ref{th:maehara_singular} covers first-order logic with identity and its extensions with the theories of (strict) partial and linear orders. We have also proved that the same holds for the intuitionistic theory of apartness, as well as for the theories of positive partial and linear orders. Along the way, we have also provided a cut-elimination theorem for geometric extensions $\mathsf{Gi^g}$ of the intuitionistic single-succedent calculus $\mathsf{Gi}$. \textbf{Acknowledgements:} We are very grateful to Birgit Elbl for precious comments and helpful discussions on various points. We also thank an anonymous referee for valuable suggestions that have helped to generalize our main result as well as to improve its exposition. \end{document}
\begin{document} \title{Harmonic measure for biased random walk in a supercritical Galton--Watson tree} \begin{abstract} We consider random walks $\lambda$-biased towards the root on a Galton--Watson tree, whose offspring distribution $(p_k)_{k\geq 1}$ is non-degenerate and has finite mean $m>1$. In the transient regime $0<\lambda <m$, the loop-erased trajectory of the biased random walk defines the $\lambda$-harmonic ray, whose law is the $\lambda$-harmonic measure on the boundary of the Galton--Watson tree. We answer a question of Lyons, Pemantle and Peres \cite{LPP97} by showing that the $\lambda$-harmonic measure has a.s.~strictly larger Hausdorff dimension than the visibility measure, which is the harmonic measure corresponding to the simple forward random walk. We also prove that the average number of children of the vertices along the $\lambda$-harmonic ray is a.s.~bounded below by $m$ and bounded above by $m^{-1}\sum k^2 p_k$. Moreover, at least for $0<\lambda \leq 1$, the average number of children of the vertices along the $\lambda$-harmonic ray is a.s.~strictly larger than that of the $\lambda$-biased random walk trajectory. We observe that the latter is not monotone in the bias parameter~$\lambda$. \noindent {\bf Keywords.} random walk, harmonic measure, Galton--Watson tree, stationary measure. \noindent{\bf AMS 2010 Classification Numbers.} 60J15, 60J80. \end{abstract} \section{Introduction} Consider a Galton--Watson tree ${\mathbb T}$ rooted at $e$ with a non-degenerate offspring distribution $(p_k)_{k\geq 0}$. We suppose that $p_0=0$, $p_k<1$ for all $k\geq 1$, and the mean offspring number $m=\sum_{k\geq 1} k p_k \in (1,\infty)$. So the Galton--Watson tree ${\mathbb T}$ is supercritical and leafless. Let ${\mathcal T}$ be the space of all infinite rooted trees with no leaves. The law of ${\mathbb T}$ is called the Galton--Watson measure $\mathbf{GW}$ on ${\mathcal T}$. For every vertex $x$ in ${\mathbb T}$, let $\nu(x)$ stand for its number of children. We denote by $x_*$ the parent of $x$ and by $xi$, $1\leq i\leq \nu(x)$, the children of $x$. For $\lambda\geq 0$, conditionally on ${\mathbb T}$, the $\lambda$-biased random walk $(X_n)_{n\geq 0}$ on ${\mathbb T}$ is a Markov chain starting from the root $e$, such that, from the vertex $e$, all transitions to its children are equally likely, whereas for every vertex $x\in {\mathbb T}$ different from $e$, \begin{align*} P_{\mathbb T}(X_{n+1}=x_* \mid X_n=x) &= \frac{\lambda}{\nu(x)+\lambda},\\ P_{\mathbb T}(X_{n+1}=xi \mid X_n=x) &= \frac{1}{\nu(x)+\lambda}, \quad \mbox{ for every } 1\leq i\leq \nu(x). \end{align*} Note that $\lambda=1$ corresponds to the simple random walk on ${\mathbb T}$, and $\lambda=0$ corresponds to the simple \emph{forward} random walk with no backtracking. Lyons established in~\cite{Ly90} that $(X_n)_{n\geq 0}$ is almost surely transient if and only if $\lambda<m$. Throughout this work, we assume $\lambda <m$ and hence the $\lambda$-biased random walk is always transient. For a vertex $x\in {\mathbb T}$, let $|x|$ stand for the graph distance from the root $e$ to $x$.
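As a quick illustration of the transience criterion (a simple special case, not needed in the sequel): if the offspring distribution were concentrated on a single value $d\geq 2$, i.e.~$p_d=1$ (a degenerate case excluded by our assumptions), then $(|X_n|)_{n\geq 0}$ would be a nearest-neighbour chain on $\{0,1,2,\dots\}$ satisfying \begin{equation*} P_{\mathbb T}\big(|X_{n+1}|=|X_n|+1 \,\big|\, |X_n|=k\big) = \frac{d}{d+\lambda}, \qquad k\geq 1, \end{equation*} so it escapes to infinity precisely when $d/(d+\lambda)>\lambda/(d+\lambda)$, that is, when $\lambda<d=m$, in agreement with Lyons' criterion.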
Let $\partial {\mathbb T}$ denote the boundary of ${\mathbb T}$, which is defined as the set of infinite rays in ${\mathbb T}$ emanating from the root. Since $(X_n)_{n\geq 0}$ is transient, its loop-erased trajectory defines a unique infinite ray $\Xi_\lambda \in \partial {\mathbb T}$, whose distribution is called the $\lambda$-harmonic measure. We call $\Xi_\lambda$ the $\lambda$-harmonic ray in ${\mathbb T}$. For different rays $\xi, \eta\in \partial {\mathbb T}$, let $\xi\wedge \eta$ denote the vertex common to both $\xi$ and $\eta$ that is farthest from the root. We define the metric \begin{equation*} d(\xi,\eta)\colonequals \exp(-|\xi\wedge \eta|) \mbox{ for } \xi, \eta\in \partial {\mathbb T}, \xi\neq \eta. \end{equation*} Under this metric, the boundary $\partial {\mathbb T}$ has a.s.~Hausdorff dimension $\log m$. Lyons, Pemantle and Peres \cite{LPP95,LPP96} showed the dimension drop of harmonic measure: for all $0\leq \lambda<m$, the Hausdorff dimension of the $\lambda$-harmonic measure is a.s.~a constant $d_\lambda< \log m$. The 0-harmonic measure associated with the simple forward random walk was called \emph{visibility measure} in \cite{LPP95}. Its Hausdorff dimension is a.s.~equal to the constant $\sum_{k\geq 1} (\log k)p_k =\mathbf{GW}[\log \nu]$, where we write $\nu=\nu(e)$ for the offspring number of the root under $\mathbf{GW}$. Recently, Berestycki, Lubetzky, Peres and Sly~\cite{BLPS} applied the dimension drop result $d_1<\log m$ to show cutoff for the mixing time of simple random walk on a random graph starting from a typical vertex. The Hausdorff dimension of the 0-harmonic measure was similarly used in~\cite{BLPS} and independently used by Ben-Hamou and Salez in~\cite{BHS} to determine the mixing time of the non-backtracking random walk on a random graph. The primary result of this work answers a question of Ledrappier posed in~\cite{LPP97}. This question is also stated as Question 17.28 in Lyons and Peres' book \cite{LP-book}. \begin{theorem} \label{thm:dim-harm} For all $\lambda\in(0,m)$, we have $d_\lambda > \mathbf{GW}[\log \nu]$, meaning that the Hausdorff dimension of the $\lambda$-harmonic measure is a.s.~strictly larger than the Hausdorff dimension of the 0-harmonic measure. Moreover, \begin{equation*} \lim_{\lambda \to 0^+} d_\lambda = \mathbf{GW}[\log \nu] \quad \mbox{ and } \quad \lim_{\lambda \to m^-} d_\lambda = \log m. \end{equation*} \end{theorem} When $\lambda$ increases to the critical value $m$, it is non-trivial that the support of the $\lambda$-harmonic measure has its Hausdorff dimension tending to that of the whole boundary. Besides, Jensen's inequality implies $\mathbf{GW}[\log \nu]> -\log\mathbf{GW}[\nu^{-1}]$. The preceding theorem thus improves the lower bound $d_\lambda>-\log\mathbf{GW}[\nu^{-1}]$ shown by Vir\'ag in Corollary 7.2 of~\cite{V2000}. The proof of Theorem~\ref{thm:dim-harm} originates from the construction of a probability measure $\mu_{\mathsf{HARM}_\lambda}$ on~${\mathcal T}$ that is stationary and ergodic for the harmonic flow rule. In Section \ref{sec:harm-invar} below, its Radon--Nikod\'ym derivative with respect to $\mathbf{GW}$ is given by \eqref{eq:density-Harm}.
Note that an equivalent formula has also been obtained independently by Rousselin \cite{Rou}. We derive afterwards an explicit expression for the dimension $d_\lambda$, and prove Theorem~\ref{thm:dim-harm} in Section~\ref{sec:dim-harm}. Our way of finding the harmonic-stationary measure $\mu_{\mathsf{HARM}_\lambda}$ is inspired by a recent work of A\"id\'ekon~\cite{Aid}, in which he found the explicit stationary measure of the environment seen from a $\lambda$-biased random walk. This makes it possible to apply the ergodic theory on Galton--Watson trees developed in \cite{LPP95} to the biased random walk. After introducing the escape probability of the $\lambda$-biased random walk on a tree in Section~\ref{sec:esc}, we will give a precise description of A\"id\'ekon's stationary measure in Section~\ref{sec:agw}. Apart from the Hausdorff dimension of the harmonic measure, another quantity of interest is the average number of children of the vertices visited by the harmonic ray $\Xi_\lambda$ or by the $\lambda$-biased random walk $(X_n)_{n\geq 0}$ on ${\mathbb T}$. For an infinite path $\overset{\rightarrow}{x}=(x_k)_{k\geq 0}$ in ${\mathbb T}$, if the limit \begin{equation*} \lim_{n\to \infty} \frac{1}{n}\sum_{k=0}^n \nu(x_k) \end{equation*} exists, we call it the average number of children of the vertices along the path $\overset{\rightarrow}{x}$. Section~\ref{sec:average-nb-child} will be devoted to comparing the average number of children of the vertices along different random paths in ${\mathbb T}$. The main results in this direction are summarized as follows. \begin{theorem} \label{thm:nb-children} \begin{enumerate} \item[(i)] For all $\lambda\in(0,m)$, the average number of children of the vertices along the $\lambda$-harmonic ray $\Xi_\lambda$ is a.s.~strictly larger than $m$, and strictly smaller than $m^{-1} \sum k^2 p_k$; \item[(ii)] The average number of children of the vertices along the $\lambda$-biased random walk $(X_n)_{n\geq 0}$ is a.s.~strictly smaller than $m$ when $\lambda \in(0,1)$, equal to $m$ when $\lambda=0$ or $1$, and strictly larger than $m$ when $\lambda\in (1,m)$; \item[(iii)] For $\lambda \in (0,1]$, the average number of children of the vertices along the $\lambda$-harmonic ray $\Xi_\lambda$ is a.s.~strictly larger than the average number of children of the vertices along the $\lambda$-biased random walk $(X_n)_{n\geq 0}$. \end{enumerate} \end{theorem} Assertion (iii) above is a direct consequence of assertions (i) and (ii). We conjecture that the same result holds for all $\lambda\in(0,m)$, not merely for $\lambda \in (0,1]$. Assertion (i) in Theorem \ref{thm:nb-children} was first suggested by some numerical calculations in the case $\lambda=1$ mentioned at the end of Section 17.10 in \cite{LP-book}. By the strong law of large numbers, the average number of children seen by the simple forward random walk is a.s.~equal to $m$. On the other hand, the uniform measure on the boundary of ${\mathbb T}$ can be defined by putting mass~1 uniformly on the vertices of level $n$ in ${\mathbb T}$ and taking the weak limit as $n\to \infty$. We say that a random ray in ${\mathbb T}$ is uniform if it is distributed according to the uniform measure on $\partial {\mathbb T}$.
When $\sum (k\log k)p_k <\infty$, the uniform measure on $\partial {\mathbb T}$ has a.s.~Hausdorff dimension $\log m$, and the uniform ray in ${\mathbb T}$ can be identified with the distinguished infinite path in a size-biased Galton--Watson tree. In particular, the average number of children seen by the uniform ray in ${\mathbb T}$ is equal to $m^{-1} \sum k^2 p_k$. For more details we refer the reader to Section 6 of \cite{LPP95} or Chapter 17 of \cite{LP-book}.

The FKG inequality for product measures (also known as the Harris inequality) turns out to be extremely useful in proving Theorem \ref{thm:nb-children}. In Section~\ref{sec:average-nb-child}, assertion (i) will be derived from Propositions \ref{prop:child-harm-visi} and \ref{prop:child-harm-unif}, while assertion (ii) will be shown as Proposition~\ref{prop:child-rw-path}. It is worth pointing out that the average number of children seen by the $\lambda$-biased random walk is \emph{not} monotone with respect to $\lambda$: its right continuity at $0$ (established in Proposition~\ref{prop:child-rw-path-limit-0}), together with assertion (ii) in Theorem \ref{thm:nb-children}, implies that this average cannot be nondecreasing on the whole of $(0,1)$. This lack of monotonicity might be explained by two opposing effects of a small bias $\lambda$: on the one hand, it helps the random walk escape to infinity faster, and a high-degree path favours the escape of the $\lambda$-biased random walk; on the other hand, a small bias implies less backtracking, so the $\lambda$-biased random walk spends less time on high-degree vertices.

We close this introduction by mentioning that the following question from \cite{LPP97} remains open.

\noindent{\bf Question 1.} \textit{Is the dimension $d_\lambda$ of the $\lambda$-harmonic measure nondecreasing for $\lambda\in(0,m)$?}

Taking into account the previous discussion, we find it intriguing to ask a similar question:

\noindent{\bf Question 2.} \textit{Is the average number of children of the vertices along the $\lambda$-harmonic ray in~${\mathbb T}$ nondecreasing for $\lambda\in(0,m)$? Does the same monotonicity hold for the average number of children of the vertices along the $\lambda$-biased random walk, when $\lambda \in [1,m)$?}

\section{Escape probability and the effective conductance}
\label{sec:esc}

For a tree $T\in {\mathcal T}$ rooted at $e$, we define $T_*$ as the tree obtained by adding to $e$ an extra adjacent vertex $e_*$, called the parent of $e$. The new tree $T_*$ is naturally rooted at $e_*$. For a vertex $u\in T$, the descendant tree $T_u$ of $u$ is the subtree of $T$ consisting of $u$ and of all its descendants in $T$. By definition, $T_u$ is rooted at $u$. Unless otherwise stated, we assume $\lambda \in (0,m)$ in the rest of the paper. Under the probability measure $P_T$, let $(X_n)_{n \geq 0}$ denote a $\lambda$-biased random walk on $T_*$. For any vertex $u\in T$, define $\tau_u\colonequals \min\{n\geq 0 \colon X_n=u\}$, the hitting time of~$u$, with the usual convention that $\min \emptyset=\infty$.
Let
\begin{equation*}
\beta_\lambda(T)\colonequals P_T(\tau_{e_*}=\infty\mid X_0=e)= P_T(\forall n\geq 1, X_n\neq e_* \mid X_0=e)
\end{equation*}
be the probability of never visiting the parent $e_*$ of $e$ when starting from~$e$. For notational ease, we suppress the dependence on $\lambda$ of the escape probability and write $\beta(T)=\beta_\lambda(T)$. For $\mathbf{GW}$-a.e.~$T$, $0<\beta(T)<1$. By coupling with a biased random walk on ${\mathbb N}$, we see that $\beta(T)>1-\lambda$. Moreover, Lemma 4.2 of \cite{Aid} shows that
\begin{equation}
\label{eq:Aid-lemma}
0< \mathbf{GW}\bigg[\frac{1}{\lambda-1+\beta(T)}\bigg]<\infty.
\end{equation}
For a vertex $u\in T$ with $|u|=1$, the probability that a $\lambda$-harmonic ray in $T$ passes through $u$ is
\begin{equation*}
\frac{\beta(T_u)}{\sum_{|w|=1}\beta(T_w)}.
\end{equation*}
If the tree $T$ is viewed as an electric network in which every edge linking a vertex of level $n$ to a vertex of level $n+1$ has conductance $\lambda^{-n}$, then ${\mathcal C}_\lambda(T)$ denotes the effective conductance of $T$ from its root to infinity. As for the escape probability, we write ${\mathcal C}(T)$ for ${\mathcal C}_\lambda(T)$ to simplify the notation. Using the link between reversible Markov chains and electric networks, we know that
\begin{equation}
\label{eq:conductance}
\beta(T)={\mathcal C}(T_*)=\frac{{\mathcal C}(T)}{\lambda +{\mathcal C}(T)} \quad \mbox{ and }\quad {\mathcal C}(T)= \frac{\lambda \beta(T)}{1- \beta(T)}.
\end{equation}
This relationship between $\beta(T)$ and ${\mathcal C}(T)$ will be used repeatedly. Since ${\mathcal C}(T)>\beta(T)$, the lower bound ${\mathcal C}(T)>1-\lambda$ also holds. Moreover, for all $x\in {\mathbb R}$,
\begin{equation*}
\frac{\lambda^{-1} x \, {\mathcal C}(T)}{(\lambda-1+{\mathcal C}(T))(1+\lambda^{-1} x)+\lambda^{-1} x}= \frac{\beta(T)x}{\lambda-1+\beta(T)+x}.
\end{equation*}
Taking $x={\mathcal C}(T')$ for another tree $T'$ yields the identity
\begin{equation}
\label{eq:cond-symmetry}
\frac{\beta(T'){\mathcal C}(T)}{\lambda-1+\beta(T')+{\mathcal C}(T)}=\frac{\beta(T){\mathcal C}(T')}{\lambda-1+\beta(T)+{\mathcal C}(T')}.
\end{equation}
Using \eqref{eq:conductance} we can also verify that
\begin{align}
\label{eq:cond-identity}
(\lambda-1+\beta(T)+{\mathcal C}(T'))(1+\lambda^{-1} {\mathcal C}(T)) & =\lambda(1+\lambda^{-1} {\mathcal C}(T))(1+\lambda^{-1} {\mathcal C}(T'))-1 \nonumber \\
& = (\lambda-1+\beta(T')+{\mathcal C}(T))(1+\lambda^{-1} {\mathcal C}(T')).
\end{align}
The following integrability result will be used to prove the inequality $d_\lambda>\mathbf{GW}[\log \nu]$.

\begin{lemma}
\label{lem:mean-resistance-log}
For $0<\lambda<m$, we have $\mathbf{GW}\big[\log \frac{1}{\beta(T)}\big]<\infty$.
\end{lemma}

\begin{proof}
Let $T_1,\ldots, T_\nu$ be the descendant trees of the children of the root in $T$. By the parallel law of conductances, ${\mathcal C}(T)=\sum_{i=1}^\nu \beta(T_i)$.
Recall that
\begin{equation*}
\frac{1}{\beta(T)}=\frac{{\mathcal C}(T)+\lambda}{{\mathcal C}(T)} = 1+\frac{\lambda}{\sum_{i=1}^\nu \beta(T_i)}.
\end{equation*}
Taking $x=\lambda (\sum_{i=1}^{\nu} \beta(T_i))^{-1}$ and $x_0=\lambda \nu^{-1}\leq x$ in the inequality $\log(1+x)\leq \log x +\log (1+x_0^{-1})$, we deduce that
\begin{equation*}
\log \frac{1}{\beta(T)} \leq \log (1+\lambda^{-1} \nu) +\log \lambda +\log \frac{1}{\sum_{i=1}^{\nu} \beta(T_i)}.
\end{equation*}
Let $\varepsilon\in (0,1)$. Then,
\begin{equation*}
\log \frac{1}{\sum_{i=1}^{\nu} \beta(T_i)} \leq \bigg(\log \frac{1}{\beta(T_1)}\bigg){\bf 1}_{\{\beta(T_i)\leq \varepsilon, \forall i\geq 2\}}+\log\varepsilon^{-1}.
\end{equation*}
By convention, the indicator function above is equal to 1 on the event $\{\nu=1\}$. Truncating at $n$ and taking expectations gives
\begin{equation*}
\mathbf{GW}\bigg[n\wedge \log \frac{1}{\beta(T)} \bigg] \leq \mathbf{GW}\big[\log (\lambda+\nu)\big] +\log \varepsilon^{-1}+ \mathbf{GW}\bigg[n \wedge \log \frac{1}{\beta(T)} \bigg] \mathbf{GW}\big[q_\varepsilon ^{\nu-1}\big],
\end{equation*}
where $q_\varepsilon\colonequals \mathbf{GW}(\beta(T)\leq \varepsilon)$. Since $q_\varepsilon \to 0$ as $\varepsilon \to 0$, we can take $\varepsilon$ small enough that
\begin{equation*}
A_\varepsilon \colonequals \mathbf{GW}[q_\varepsilon ^{\nu-1}]<1.
\end{equation*}
Hence, we obtain
\begin{equation*}
\mathbf{GW}\bigg[n\wedge \log \frac{1}{\beta(T)} \bigg] \leq \frac{\mathbf{GW}\big[\log (\lambda+\nu)\big] +\log \varepsilon^{-1}}{1-A_\varepsilon}.
\end{equation*}
Letting $n\to \infty$ finishes the proof.
\end{proof}

\section{Stationary measure of the tree seen from the random walk}
\label{sec:agw}

We set up some notation before presenting A\"id\'ekon's stationary measure. For a rooted tree $T\in {\mathcal T}$, its boundary $\partial T$ is the set of all rays starting from the root. Clearly, one can identify $\partial T_*$ with $\partial T$. Let
\begin{equation*}
{\mathcal T}^* \colonequals \{(T, \xi)\mid T\in {\mathcal T}, \xi=(\xi_n)_{n\geq 0} \in \partial T \}
\end{equation*}
denote the space of trees with a marked ray. By definition, $\xi_0$ coincides with the root vertex of~$T$. If $T_1$ and $T_2$ are two trees rooted respectively at $e_1$ and $e_2$, we define $T_1 \!\!-\!\!\!\bullet T_2$ as the tree rooted at the root $e_2$ of $T_2$ formed by joining the roots of $T_1$ and $T_2$ by an edge. The root $e_2$ is the parent of $e_1$ in $T_1 \!\!-\!\!\!\bullet T_2$, so we will not distinguish $e_2$ from $(e_1)_*$. Given a ray $\xi\in \partial T$, there is a unique tree $T^+$ such that $T =T_{\xi_1} \!\!-\!\!\!\bullet T^+$. Therefore, ${\mathcal T}^*$ is in bijection with the space
\begin{equation*}
\big \{(T \!\!-\!\!\!\bullet T^+, \xi)\mid T, T^+ \in {\mathcal T}, \xi=(\xi_n)_{n\geq 0} \in \partial T \big\}.
\end{equation*}
Introducing a marked ray helps us to keep track of the past trajectory of the biased random walk. In particular, the initial starting point of the random walk, towards which the bias is exerted, is represented by the marked ray at infinity.
To be more precise, if we assign a vertex $u\in T$ to be the new root of the tree $T$, the re-rooted tree will be written as $\mathsf{ReRoot}(T,u)$. Given $\xi=(\xi_n)_{n\geq 0} \in \partial T$, we say that $x$ is the $\xi$-parent of $y$ in $T$ if $x$ becomes the parent of $y$ in the tree $\mathsf{ReRoot}(T,\xi_n)$ for all sufficiently large $n$. A random walk on $T$ is \emph{$\lambda$-biased towards $\xi$} if at each step it moves to the $\xi$-parent of its current position with probability $\lambda$ times that of moving to any given one of the other neighbors. We consider the Markov chain on ${\mathcal T}^*$ that, starting from some fixed tree $T$ with a marked ray $\xi=(\xi_n)_{n\geq 0}$, is isomorphic to a random walk on $T$ $\lambda$-biased towards $\xi$. Recall that $\nu(\xi_0)$ is the number of edges incident to the root. The transition probabilities $\mathbf{p}_{\mathsf{RW}_\lambda}$ of this Markov chain are defined as follows (a schematic one-step sampler is sketched below):
\begin{itemize}
\item If $T'=\mathsf{ReRoot}(T, x)$ and $\xi'=(x,\xi_0,\xi_1, \xi_2, \ldots)$, where $x$ is a vertex adjacent to $\xi_0$ and different from $\xi_1$, then
\[
\mathbf{p}_{\mathsf{RW}_\lambda}((T,\xi), (T',\xi'))=\frac{1}{\nu(\xi_0)-1+\lambda};
\]
\item If $T'=\mathsf{ReRoot}(T, \xi_1)$ and $\xi'=(\xi_1,\xi_2,\ldots)$, then
\[
\mathbf{p}_{\mathsf{RW}_\lambda}((T,\xi), (T',\xi'))=\frac{\lambda}{\nu(\xi_0)-1+\lambda};
\]
\item Otherwise, $\mathbf{p}_{\mathsf{RW}_\lambda}((T,\xi), (T',\xi'))=0$.
\end{itemize}
We proceed to define the environment measure that is invariant under re-rooting along a $\lambda$-biased random walk. Let ${\mathbb T}$ and ${\mathbb T}^+$ be two independent Galton--Watson trees with offspring distribution $(p_k)_{k\geq 0}$. We write $e$ for the root vertex of ${\mathbb T}$, and $e^+$ for the root vertex of ${\mathbb T}^+$. Let $\nu^+$ denote the number of children of $e^+$ in ${\mathbb T}^+$. Similarly, let $\nu$ denote the number of children of $e$ in ${\mathbb T}$. Note that the number of children of $e^+$ in ${\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+$ is $\nu^+ +1$. Conditionally on $({\mathbb T}, {\mathbb T}^+)$, let $\mathcal{R}$ be a random ray in ${\mathbb T}$ distributed according to the $\lambda$-harmonic measure on $\partial {\mathbb T}$. We assume that $({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+, \mathcal{R})$ is defined under the probability measure $P$.
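For illustration only, and under conventions of our own choosing (this plays no role in the construction above), one step of the kernel $\mathbf{p}_{\mathsf{RW}_\lambda}$ can be sampled from the purely local data at the current root, namely the list of neighbors of $\xi_0$ together with the neighbor $\xi_1$ lying on the marked ray. A minimal Python sketch reads as follows.
\begin{verbatim}
import random

# One step of the chain with kernel p_{RW_lambda}: from the current root xi_0,
# move to the neighbor xi_1 on the marked ray with weight lam, and to each of
# the remaining nu(xi_0)-1 neighbors with weight 1; the total weight is
# nu(xi_0)-1+lam, matching the transition probabilities displayed above.
def one_step(neighbors, ray_neighbor, lam):
    weights = [lam if x == ray_neighbor else 1.0 for x in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

# Example: the current root has neighbors a, b, c and the marked ray goes
# through c, so c is chosen with probability lam/(2+lam), and a, b each with
# probability 1/(2+lam).
print(one_step(["a", "b", "c"], "c", lam=2.0))
\end{verbatim}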
\begin{figure}[ht]
\begin{center}
\includegraphics[width=11cm]{agw}
\caption{\label{fig:agw} The random tree ${\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+$ rooted at $e^+$ with a marked ray $\mathcal{R}$}
\end{center}
\end{figure}

\begin{definition}
The $\lambda$-augmented Galton--Watson measure $\mathbf{AGW}_\lambda$ is defined as the probability measure on ${\mathcal T}^*$ that is absolutely continuous with respect to the law of $({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+, \mathcal{R})$ with density
\begin{equation}
\label{eq:density}
c_\lambda ^{-1} \frac{(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)},
\end{equation}
where
\begin{equation*}
c_\lambda= E \bigg[ \frac{(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)}\bigg]
\end{equation*}
is the normalizing constant.
\end{definition}

It follows from the inequality $\lambda-1+{\mathcal C}({\mathbb T}^+)>0$ that
\begin{equation*}
c_\lambda=E \bigg[ \frac{(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)}\bigg] < E\big[\lambda+\nu^+ \big]=\lambda+m.
\end{equation*}
Let ${\mathbb T}^+_1,\ldots, {\mathbb T}^+_{\nu^+}$ denote the descendant trees of the children of $e^+$ in ${\mathbb T}^+$. With a slight abuse of notation, let ${\mathbb T}_1,\ldots, {\mathbb T}_{\nu}$ denote the descendant trees of the children of $e$ inside ${\mathbb T}$. See Fig.~\ref{fig:agw} for a schematic illustration. By the parallel law of conductances,
\begin{equation}
\label{eq:C-Bsum}
{\mathcal C}({\mathbb T}^+)= \sum_{i=1}^{\nu^+} \beta({\mathbb T}^+_i) \quad \mbox{and} \quad {\mathcal C}({\mathbb T})= \sum_{i=1}^{\nu} \beta({\mathbb T}_i).
\end{equation}
We will frequently use the branching property that, conditionally on $\nu^+$, the trees ${\mathbb T},{\mathbb T}^+_1,\ldots, {\mathbb T}^+_{\nu^+}$ are independent and identically distributed according to $\mathbf{GW}$. According to Theorem 4.1 in \cite{Aid}, the $\lambda$-augmented Galton--Watson measure $\mathbf{AGW}_\lambda$ is the asymptotic distribution of the environment seen from the $\lambda$-biased random walk on ${\mathbb T}$.

\begin{proposition}
\label{prop:invariance}
The Markov chain with transition probabilities $\mathbf{p}_{\mathsf{RW}_\lambda}$ and initial distribution $\mathbf{AGW}_\lambda$ is stationary.
\end{proposition}

\begin{proof}
Let $F\colon {\mathcal T} \to {\mathbb R}^+$ and $G\colon {\mathcal T}^* \to {\mathbb R}^+$ be nonnegative measurable functions. Let $(\tilde {\mathbb T} \!\!-\!\!\!\bullet \tilde {\mathbb T}^+, \tilde{\mathcal{R}})$ denote the tree with a marked ray obtained from $({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+, \mathcal{R})$ by performing a one-step transition according to $\mathbf{p}_{\mathsf{RW}_\lambda}$.
It suffices to show that
\begin{equation*}
E\Big[\frac{(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F(\tilde {\mathbb T}^+)G(\tilde {\mathbb T}, \tilde{\mathcal{R}})\Big] = E\Big[\frac{(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F({\mathbb T}^+)G({\mathbb T}, \mathcal{R})\Big].
\end{equation*}
To compute the left-hand side, we need to distinguish two different situations.

{\bf Case I}: There exists $1\leq i\leq \nu^+$ such that the root of ${\mathbb T}^+_i$ becomes the new root of $\tilde {\mathbb T} \!\!-\!\!\!\bullet \tilde {\mathbb T}^+$. For each $i\in [1,\nu^+]$, it happens with probability $1/(\nu^+ +\lambda)$. In this case,
\begin{equation*}
\tilde {\mathbb T}^+={\mathbb T}^+_i \quad \mbox{and}\quad \tilde {\mathbb T}={\mathbb T}\!\!-\!\!\!\bullet {\mathbb T}_{\neq i}^+,
\end{equation*}
where ${\mathbb T}_{\neq i}^+$ stands for the tree rooted at $e^+$ containing only the descendant trees $\{{\mathbb T}^+_j, 1\leq j \leq \nu^+, j\neq i\}$ together with the edges connecting their roots to $e^+$. It is easy to see that ${\mathbb T}^+_i$ and ${\mathbb T}\!\!-\!\!\!\bullet {\mathbb T}_{\neq i}^+$ are two i.i.d.~Galton--Watson trees. Meanwhile, $\tilde{\mathcal{R}}\in \partial \tilde {\mathbb T}$ is the ray $\mathcal{R}^+$ obtained by adding the vertex $e^+$ to the beginning of the sequence $\mathcal{R}$. We set accordingly
\begin{align*}
I & \colonequals E\bigg[ \frac{(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \sum_{i=1}^{\nu^+} \frac{1}{\nu^+ +\lambda } F({\mathbb T}^+_i)G({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq i}, \mathcal{R}^+)\bigg] \\
&= E\bigg[ \frac{\beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \sum_{i=1}^{\nu^+} F({\mathbb T}^+_i)G({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq i}, \mathcal{R}^+)\bigg].
\end{align*}
Given ${\mathbb T}$ and ${\mathbb T}^+$, we let $\mathcal{R}_{\neq i}$ be a random ray in the tree ${\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq i}$ distributed according to the $\lambda$-harmonic measure on the tree boundary. Then $\mathcal{R}^+$ can be identified with $\mathcal{R}_{\neq i}$ conditionally on $\{\mathcal{R}_{\neq i}\in \partial {\mathbb T}\}$.
We see that $I$ is equal to
\begin{align*}
& E\bigg[\frac{\beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \sum_{i=1}^{\nu^+} F({\mathbb T}^+_i)G({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq i}, \mathcal{R}_{\neq i}){\bf 1}_{\{\mathcal{R}_{\neq i}\in \partial {\mathbb T}\}}\frac{{\mathcal C}({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq i})}{\beta({\mathbb T})}\bigg]\\
=\,\,& E\bigg[\frac{1}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \sum_{i=1}^{\nu^+} F({\mathbb T}^+_i)G({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq i}, \mathcal{R}_{\neq i}){\bf 1}_{\{\mathcal{R}_{\neq i}\in \partial {\mathbb T}\}} {\mathcal C}({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq i})\bigg].
\end{align*}
By symmetry, we deduce further that
\begin{align*}
I &= E\bigg[ \frac{{\mathcal C}({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq 1})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F({\mathbb T}^+_1)G({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq 1},\mathcal{R}_{\neq 1})\Big({\bf 1}_{\{\mathcal{R}_{\neq 1}\in \partial {\mathbb T}\}} + \sum_{i=2}^{\nu^+} {\bf 1}_{\{\mathcal{R}_{\neq 1}\in \partial {\mathbb T}^+_i\}}\Big) \bigg] \\
&= E\bigg[ \frac{{\mathcal C}({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq 1})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F({\mathbb T}^+_1)G({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq 1},\mathcal{R}_{\neq 1})\bigg].
\end{align*}
As $\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)=\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}^+ _i)=\beta({\mathbb T}^+_1)+{\mathcal C}({\mathbb T} \!\!-\!\!\!\bullet {\mathbb T}^+_{\neq 1})$, we obtain from the previous display that
\begin{equation*}
I = E\bigg[ \frac{{\mathcal C}({\mathbb T})}{\lambda-1+\beta({\mathbb T}^+)+{\mathcal C}({\mathbb T})} F({\mathbb T}^+)G({\mathbb T}, \mathcal{R})\bigg].
\end{equation*}
Using \eqref{eq:cond-symmetry} and \eqref{eq:conductance}, we therefore get
\begin{align*}
I &= E\bigg[ \frac{\beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)}\frac{{\mathcal C}({\mathbb T}^+)}{\beta({\mathbb T}^+)} F({\mathbb T}^+)G({\mathbb T},\mathcal{R})\bigg]\\
&= E\bigg[ \frac{\beta({\mathbb T}) (\lambda +{\mathcal C}({\mathbb T}^+))}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F({\mathbb T}^+)G({\mathbb T},\mathcal{R})\bigg].
\end{align*}

{\bf Case II}: The vertex $e$ becomes the new root of $\tilde {\mathbb T} \!\!-\!\!\!\bullet \tilde {\mathbb T}^+$, which happens with probability $\lambda/(\nu^+ +\lambda)$.
In this case, if $\mathcal{R}$ passes through the root of ${\mathbb T}_k$ for some integer $k\in [1,\nu]$, then
\begin{equation*}
\tilde {\mathbb T}= {\mathbb T}_k \quad \mbox{and} \quad \tilde {\mathbb T}^+= {\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k},
\end{equation*}
where ${\mathbb T}_{\neq k}$ stands for the tree rooted at $e$ formed by all the descendant trees $\{{\mathbb T}_\ell, 1\leq \ell\leq \nu, \ell\neq k\}$ together with the edges connecting their roots to $e$. As in the previous case, ${\mathbb T}_k$ and ${\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k}$ are two independent Galton--Watson trees. But $\tilde{\mathcal{R}}$ is now the ray $\mathcal{R}^-$ obtained by deleting $e$ from the beginning of the sequence $\mathcal{R}$. We thus set
\begin{align*}
I\!I & \colonequals E\bigg[ \frac{(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \frac{\lambda}{\nu^+ +\lambda} \sum_{k=1}^{\nu} F({\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k})G({\mathbb T}_k, \mathcal{R}^-){\bf 1}_{\{\mathcal{R}^-\in \partial {\mathbb T}_k\}}\bigg] \\
&= E\bigg[ \frac{\lambda \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \sum_{k=1}^{\nu} F({\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k})G({\mathbb T}_k, \mathcal{R}^-){\bf 1}_{\{\mathcal{R}^-\in \partial {\mathbb T}_k\}}\bigg].
\end{align*}
Given ${\mathbb T}$ and ${\mathbb T}^+$, we let $\mathcal{R}_{k}$ be a random ray in the tree ${\mathbb T}_k$ distributed according to the $\lambda$-harmonic measure. It follows that
\begin{align*}
I\!I & = E\bigg[ \sum_{k=1}^{\nu} \frac{\lambda \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F({\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k})G({\mathbb T}_k, \mathcal{R}_k)\frac{\beta({\mathbb T}_k)}{{\mathcal C}({\mathbb T})}\bigg] \\
&= E\bigg[ \sum_{k=1}^{\nu} \frac{\beta({\mathbb T}_k)}{(\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+))(1+\lambda^{-1} {\mathcal C}({\mathbb T}))} F({\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k})G({\mathbb T}_k, \mathcal{R}_k)\bigg].
\end{align*}
Using the identity \eqref{eq:cond-identity}, we see that
\begin{eqnarray*}
(\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+))(1+\lambda^{-1} {\mathcal C}({\mathbb T}))&= & \big(\lambda-1+\beta({\mathbb T}^+)+{\mathcal C}({\mathbb T})\big)\big(1+\lambda^{-1} {\mathcal C}({\mathbb T}^+)\big)\\
&=& \big(\lambda-1+\beta({\mathbb T}_k)+{\mathcal C}({\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k})\big)\big(1+\lambda^{-1} {\mathcal C}({\mathbb T}^+)\big).
\end{eqnarray*}
Together with \eqref{eq:conductance}, this implies
\begin{align*}
I\!I & = E\bigg[ \sum_{k=1}^{\nu} \frac{\beta({\mathbb T}_k)(1+\lambda^{-1} {\mathcal C}({\mathbb T}^+))^{-1}}{\lambda-1+\beta({\mathbb T}_k)+{\mathcal C}({\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k})} F({\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k})G({\mathbb T}_k, \mathcal{R}_k)\bigg] \\
& = E\bigg[ \sum_{k=1}^{\nu} \frac{\beta({\mathbb T}_k)(1-\beta({\mathbb T}^+))}{\lambda-1+\beta({\mathbb T}_k)+{\mathcal C}({\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k})} F({\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k})G({\mathbb T}_k, \mathcal{R}_k)\bigg] .
\end{align*}
Observe that the root of ${\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k}$ has $\nu$ children. For any integer $m\geq k$, the conditional law of $({\mathbb T}_k, {\mathbb T}^+ \!\!-\!\!\!\bullet {\mathbb T}_{\neq k})$ given $\{\nu=m\}$ is the same as that of $({\mathbb T},{\mathbb T}^+)$ conditionally on $\{\nu^+=m\}$. Hence, we obtain
\begin{align*}
I\!I & = E\bigg[ \sum_{k=1}^{\nu^+} \frac{\beta({\mathbb T})(1-\beta({\mathbb T}_k^+))}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F({\mathbb T}^+)G({\mathbb T}, \mathcal{R})\bigg] \\
& = E\bigg[\frac{\beta({\mathbb T})(\nu^+ -{\mathcal C}({\mathbb T}^+))}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F({\mathbb T}^+)G({\mathbb T}, \mathcal{R})\bigg] .
\end{align*}
Finally, adding up Cases I and II, we have
\begin{align*}
& E\Big[\frac{(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F(\tilde {\mathbb T}^+)G(\tilde {\mathbb T}, \tilde{\mathcal{R}})\Big] \\
&= E\bigg[ \frac{\beta({\mathbb T}) (\lambda +{\mathcal C}({\mathbb T}^+))}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F({\mathbb T}^+)G({\mathbb T},\mathcal{R})\bigg]+ E\bigg[\frac{\beta({\mathbb T})(\nu^+ -{\mathcal C}({\mathbb T}^+))}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F({\mathbb T}^+)G({\mathbb T}, \mathcal{R})\bigg]\\
&= E\Big[\frac{(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} F({\mathbb T}^+)G({\mathbb T}, \mathcal{R})\Big],
\end{align*}
which completes the proof of the stationarity.
\end{proof}

We write $\overset{\rightarrow}{x}$ for an infinite path $(x_n)_{n\geq 0}$ in $T$. Let $\mathsf{RW}_\lambda\times \mathbf{AGW}_\lambda$ be the probability measure on the space
\begin{equation*}
\big\{(\overset{\rightarrow}{x},(T,\xi))\mid (T,\xi)\in {\mathcal T}^*, \overset{\rightarrow}{x}\subset T \big\}
\end{equation*}
that is associated to the Markov chain considered in Proposition~\ref{prop:invariance}. It is given by choosing a tree $T$ with a marked ray $\xi$ according to $\mathbf{AGW}_\lambda$, and then independently running on $T$ a random walk $\lambda$-biased towards $\xi$.
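The density \eqref{eq:density} is expressed through the escape probability $\beta({\mathbb T})$ and the effective conductance ${\mathcal C}({\mathbb T}^+)$. Purely as a numerical illustration (this plays no role in the proofs), these quantities can be approximated on a finite truncation of a given tree by combining \eqref{eq:conductance} with the parallel law \eqref{eq:C-Bsum}. The following minimal Python sketch makes the ad hoc assumption that the tree is encoded as a child-list dictionary and assigns $\beta=1$ to the vertices at the truncation depth; it therefore overestimates the escape probability of the corresponding infinite tree, and deeper truncations give smaller, hence more accurate, values.
\begin{verbatim}
# Approximate the escape probability beta(T) and the effective conductance C(T)
# of the lambda-biased walk on a truncated tree, using the parallel law
# C(T) = sum_i beta(T_i) together with beta(T) = C(T)/(lambda + C(T)) from
# Section 2.  The tree is a dictionary mapping a vertex to its list of children;
# a vertex without an entry is a truncation leaf, where beta is set to 1
# (escape is assumed certain beyond the truncation depth).
def beta_and_C(tree, root, lam):
    children = tree.get(root, [])
    if not children:                      # truncation boundary
        return 1.0, float("inf")
    C = sum(beta_and_C(tree, child, lam)[0] for child in children)  # parallel law
    return C / (lam + C), C

# Example: a root e with two children u and v, where u has two further children.
tree = {"e": ["u", "v"], "u": ["u1", "u2"]}
print(beta_and_C(tree, "e", lam=1.5))
\end{verbatim}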
\section{Harmonic-stationary measure}
\label{sec:harm-invar}

Let $\mathrm{HARM}^T_\lambda$ be the flow on the vertices of $T$ corresponding to the $\lambda$-harmonic measure on $\partial T$, so that $\mathrm{HARM}_\lambda^T(u)$ coincides with the mass given by the $\lambda$-harmonic measure to the set of all rays passing through the vertex $u$. We denote by $\mathsf{HARM}_\lambda$ the transition probabilities of the Markov chain on ${\mathcal T}$ that goes from a tree $T$ to the descendant tree $T_u$, $|u|=1$, with probability
\begin{equation*}
\mathrm{HARM}_\lambda^T(u)= \frac{\beta(T_u)}{\sum_{|w|=1}\beta(T_w)} = \frac{\beta(T_u)}{{\mathcal C}(T)}.
\end{equation*}
The existence of a $\mathsf{HARM}_\lambda$-stationary probability measure $\mu_{\mathsf{HARM}_\lambda}$ that is absolutely continuous with respect to $\mathbf{GW}$ was established in Lemma 5.2 of \cite{LPP96}. Taking into account the stationary measure of the environment $\mathbf{AGW}_\lambda$, we can construct $\mu_{\mathsf{HARM}_\lambda}$ as an induced measure by considering the $\lambda$-biased random walk at the exit epochs. See~\cite[Section 8]{LPP95} and~\cite[Section 5]{LPP96} for more details. According to Proposition 5.2 of \cite{LPP95}, $\mu_{\mathsf{HARM}_\lambda}$ is equivalent to $\mathbf{GW}$ and the associated $\mathsf{HARM}_\lambda$-Markov chain is ergodic. Ergodicity further implies that $\mu_{\mathsf{HARM}_\lambda}$ is the unique $\mathsf{HARM}_\lambda$-stationary probability measure absolutely continuous with respect to $\mathbf{GW}$. Thanks to this uniqueness, we can identify $\mu_{\mathsf{HARM}_\lambda}$ via the next result.

\begin{lemma}
\label{lemma:harm-invariance}
For every $x>0$, set
\begin{equation*}
\kappa_\lambda(x)\colonequals \mathbf{GW}\bigg[\frac{\beta(T)x}{\lambda-1+\beta(T)+x}\bigg]= E\bigg[\frac{\beta({\mathbb T})x}{\lambda-1+\beta({\mathbb T})+x}\bigg].
\end{equation*}
The finite measure $\kappa_\lambda({\mathcal C}(T))\mathbf{GW}(\mathrm{d}T)$ is $\mathsf{HARM}_\lambda$-stationary.
\end{lemma}

\begin{proof}
The function $\kappa_\lambda \colon {\mathbb R}^+ \to {\mathbb R}^+$ is bounded and strictly increasing. Indeed, for $\mathbf{GW}$-a.e.~$T$ we have $\lambda-1+\beta(T)> 0$, so the function
\begin{equation*}
\frac{\beta(T)x}{\lambda-1+\beta(T)+x}
\end{equation*}
is strictly increasing in $x$ and bounded above by $\beta(T)$. Thus, $\kappa_\lambda(x) < \mathbf{GW}[\beta(T)]< 1$. We write $\nu$ for the offspring number of the root of $T$. Conditionally on the event $\{\nu=k\}$, let $T_1,\ldots, T_k$ denote the descendant trees of the children of the root.
In order to prove the $\mathsf{HARM}_\lambda$-stationarity, we must verify that, for any bounded measurable function $F$ on ${\mathcal T}$, the integral $\int F(T)\kappa_\lambda({\mathcal C}(T))\mathbf{GW}(\mathrm{d}T)$ is equal to
\begin{align*}
I \colonequals & \sum_{k=1}^\infty p_k \sum_{i=1}^k \int F(T_i)\kappa_\lambda({\mathcal C}(T))\frac{\beta(T_i)}{\beta(T_1)+\cdots+\beta(T_k)} \mathbf{GW}(\mathrm{d}T\mid \nu=k) \\
= & \sum_{k=1}^\infty k p_k \int F(T_1)\kappa_\lambda({\mathcal C}(T))\frac{\beta(T_1)}{\beta(T_1)+\cdots+\beta(T_k)} \mathbf{GW}(\mathrm{d}T\mid \nu=k).
\end{align*}
Using the definition of $\kappa_\lambda$ and the branching property, we see that $I$ is given by
\begin{align*}
& \sum_{k=1}^\infty k p_k \int F(T_1)\frac{\beta(T_0)\beta(T_1)}{\lambda-1+\beta(T_0)+\beta(T_1)+\cdots+\beta(T_k)} \mathbf{GW}(\mathrm{d}T\mid \nu=k)\mathbf{GW}(\mathrm{d}T_0) \\
=& \sum_{k=1}^\infty k p_k \int F(T_1)\frac{\beta(T_0)\beta(T_1)}{\lambda-1+\beta(T_0)+\beta(T_1)+\cdots+\beta(T_k)} \mathbf{GW}(\mathrm{d}T_0)\mathbf{GW}(\mathrm{d}T_1)\cdots \mathbf{GW}(\mathrm{d}T_k) \\
=& \sum_{k=1}^\infty p_k \int F(T_1)\frac{\beta(T_1)(\beta(T_0)+\beta(T_2)+\cdots+\beta(T_k))}{\lambda-1+\beta(T_0)+\beta(T_1)+\cdots+\beta(T_k)} \mathbf{GW}(\mathrm{d}T_0)\mathbf{GW}(\mathrm{d}T_1)\cdots \mathbf{GW}(\mathrm{d}T_k) \\
=& \int F(T_1) \frac{\beta(T_1){\mathcal C}(T)}{\lambda-1+\beta(T_1)+{\mathcal C}(T)}\mathbf{GW}(\mathrm{d}T)\mathbf{GW}(\mathrm{d}T_1).
\end{align*}
Hence, it follows from \eqref{eq:cond-symmetry} that
\begin{equation*}
I= \int F(T_1) \frac{\beta(T){\mathcal C}(T_1)}{\lambda-1+\beta(T)+{\mathcal C}(T_1)}\mathbf{GW}(\mathrm{d}T) \mathbf{GW}(\mathrm{d}T_1) =\int F(T_1)\kappa_\lambda({\mathcal C}(T_1))\mathbf{GW}(\mathrm{d}T_1),
\end{equation*}
which finishes the proof.
\end{proof}

We deduce from the preceding lemma that the Radon--Nikod\'ym derivative of $\mu_{\mathsf{HARM}_\lambda}$ with respect to $\mathbf{GW}$ is a.s.
\begin{equation}
\label{eq:density-Harm}
\frac{\mathrm{d}\mu_{\mathsf{HARM}_\lambda}}{\mathrm{d}\mathbf{GW}} (T)= \frac{1}{h_\lambda} \kappa_\lambda({\mathcal C}(T))= \frac{1}{h_\lambda} \int \frac{\beta(T'){\mathcal C}(T)}{\lambda-1+\beta(T')+{\mathcal C}(T)} \mathbf{GW}(\mathrm{d}T'),
\end{equation}
where the normalizing constant is
\begin{equation*}
h_\lambda = \int \frac{\beta(T'){\mathcal C}(T)}{\lambda-1+\beta(T')+{\mathcal C}(T)} \mathbf{GW}(\mathrm{d}T)\mathbf{GW}(\mathrm{d}T') = E\bigg[\frac{\beta({\mathbb T}){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg].
\end{equation*}
Writing $\mathcal{R}(T)={\mathcal C}(T)^{-1}$ for the effective resistance, one can reformulate \eqref{eq:density-Harm} as
\begin{equation*}
\frac{\mathrm{d}\mu_{\mathsf{HARM}_\lambda}}{\mathrm{d}\mathbf{GW}} (T)= \frac{1}{h_\lambda} \int \frac{\lambda^{-1}}{(\lambda-1)\mathcal{R}(T)\mathcal{R}(T')+\mathcal{R}(T)+\mathcal{R}(T')+\lambda^{-1}} \mathbf{GW}(\mathrm{d}T').
\end{equation*}
When $\lambda=1$, this coincides with the expression for the same density in Section 8 of~\cite{LPP95}. As can be seen from the proof of Lemma~\ref{lemma:harm-invariance}, the measure $\mu_{\mathsf{HARM}_\lambda}$ defined by~\eqref{eq:density-Harm} is still $\mathsf{HARM}_\lambda$-stationary when $p_0>0$ is allowed. We also point out that the proof of Proposition 17.31 in \cite{LP-book} (corresponding to the case $\lambda=1$) can be adapted to derive \eqref{eq:density-Harm} from the construction of $\mu_{\mathsf{HARM}_\lambda}$ by inducing. In a recent work \cite{Rou}, Rousselin develops a general method to construct explicit stationary measures for a certain class of Markov chains on trees. Applying his result to the $\mathsf{HARM}_\lambda$-Markov chain considered above gives the same formula \eqref{eq:density-Harm}; see Theorem 4.1 in \cite{Rou}.

\section{Dimension of the harmonic measure}
\label{sec:dim-harm}

Let $\mathsf{T}$ be a random tree distributed as $\mu_{\mathsf{HARM}_\lambda}$, and let $\Theta$ be the $\lambda$-harmonic ray in $\mathsf{T}$. If we denote the vertices along $\Theta$ by $\Theta_0, \Theta_1, \ldots$, then, according to the flow property of the harmonic measure, the sequence of descendant trees $(\mathsf{T}_{\Theta_n})_{n\geq 0}$ is a stationary $\mathsf{HARM}_\lambda$-Markov chain. In what follows, we write $\mathsf{HARM}_\lambda\times \mu_{\mathsf{HARM}_\lambda}$ for the law of $(\Theta, \mathsf{T})$ on the space $\{(\xi, T)\mid T\in {\mathcal T}, \xi \in \partial T\}$. Recall that the ergodicity of $\mathsf{HARM}_\lambda\times \mu_{\mathsf{HARM}_\lambda}$ results from Proposition 5.2 in \cite{LPP95}. As shown in \cite[Section 5]{LPP95}, the Hausdorff dimension $d_\lambda$ of the $\lambda$-harmonic measure coincides with the entropy
\begin{equation*}
\mathrm{Entropy}_{\mathsf{HARM}_\lambda}(\mu_{\mathsf{HARM}_\lambda}) \colonequals \int \log \frac{1}{\mathrm{HARM}_\lambda^T(\xi_1)} \mathsf{HARM}_\lambda\times \mu_{\mathsf{HARM}_\lambda}(\mathrm{d}\xi,\mathrm{d}T).
\end{equation*}
Thus, by \eqref{eq:conductance} we have
\begin{eqnarray*}
d_\lambda &= & \int \log \frac{{\mathcal C}(T)}{\beta(T_{\xi_1})} \mathsf{HARM}_\lambda\times \mu_{\mathsf{HARM}_\lambda}(\mathrm{d}\xi,\mathrm{d}T)\\
&=& \int \log \frac{\lambda\beta(T)}{\beta(T_{\xi_1})(1-\beta(T))} \mathsf{HARM}_\lambda\times \mu_{\mathsf{HARM}_\lambda}(\mathrm{d}\xi,\mathrm{d}T).
\end{eqnarray*}
By stationarity,
\begin{equation}
\label{eq:dim-formula}
d_\lambda = \int \log \frac{\lambda}{1-\beta(T)} \mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T) = \int \log \big({\mathcal C}(T)+\lambda \big) \mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T),
\end{equation}
provided that the integral $\int \log \beta(T)^{-1} \mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T)$ is finite.
Using the explicit form \eqref{eq:density-Harm} of $\mu_{\mathsf{HARM}_\lambda}$, we see that this integral is equal to
\begin{equation*}
h_\lambda^{-1} E \bigg[ \frac{\beta({\mathbb T}){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \log \frac{1}{\beta({\mathbb T}^+)}\bigg],
\end{equation*}
in which the expectation is less than
\begin{equation}
\label{eq:product-expectation}
E \bigg[ \frac{\beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})} \Big({\mathcal C}({\mathbb T}^+)\log \frac{1}{\beta({\mathbb T}^+)}\Big)\bigg]= E\bigg[\frac{\beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})} \bigg] \cdot E \bigg[ \frac{\lambda\beta({\mathbb T}^+)}{1-\beta({\mathbb T}^+)} \log \frac{1}{\beta({\mathbb T}^+)}\bigg].
\end{equation}
Notice that for $x\in (0,1)$,
\begin{equation*}
0<\frac{x}{1-x}\log\frac{1}{x}<1.
\end{equation*}
Hence, the product in \eqref{eq:product-expectation} is bounded by
\begin{equation*}
\mathbf{GW}\bigg[\frac{\lambda \beta(T)}{\lambda-1+\beta(T)} \bigg],
\end{equation*}
which is finite according to \eqref{eq:Aid-lemma}. Therefore, the formula \eqref{eq:dim-formula} is justified. By \eqref{eq:density-Harm} again, we obtain
\begin{eqnarray*}
d_\lambda &=& h_\lambda^{-1} \int \log \big({\mathcal C}(T)+\lambda\big) \frac{\beta(T'){\mathcal C}(T)}{\lambda-1+\beta(T')+{\mathcal C}(T)} \mathbf{GW}(\mathrm{d}T)\mathbf{GW}(\mathrm{d}T') \\
&=& h_\lambda^{-1} E\bigg[\log ({\mathcal C}({\mathbb T}^+)+\lambda) \frac{\beta({\mathbb T}){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg].
\end{eqnarray*}
Now let us prove Theorem~\ref{thm:dim-harm}, starting with the inequality $d_\lambda>\mathbf{GW}[\log \nu]$. Recall that the function $\kappa_\lambda$ is strictly increasing. The FKG inequality implies that
\begin{equation*}
E\bigg[\log ({\mathcal C}({\mathbb T}^+)+\lambda) \frac{\beta({\mathbb T}){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg] > E\big[\log ({\mathcal C}({\mathbb T}^+)+\lambda)\big] \times E\bigg[\frac{\beta({\mathbb T}){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg].
\end{equation*}
In view of the previous formula for $d_\lambda$, it suffices to prove
\begin{equation*}
\mathbf{GW}[\log ({\mathcal C}(T)+\lambda)]\geq \mathbf{GW}[\log \nu].
\end{equation*}
In fact, the strict inequality holds. Recall that $T_1,\ldots, T_\nu$ stand for the descendant trees of the children of the root in $T$, and notice that
\begin{equation*}
{\mathcal C}(T)+\lambda=\frac{{\mathcal C}(T)}{\beta(T)}=\frac{\sum_{i=1}^\nu \beta(T_i)}{\beta(T)}.
\end{equation*}
By strict concavity of the logarithm,
\begin{equation*}
\log \sum_{i=1}^{\nu} \beta(T_i) \geq \frac{1}{\nu} \sum_{i=1}^{\nu} \log (\nu \beta(T_i))=\log \nu + \frac{1}{\nu} \sum_{i=1}^{\nu} \log \beta(T_i),
\end{equation*}
with equality if and only if all $\beta(T_i)$, $1\leq i\leq \nu$, are equal.
But this condition for equality cannot hold for $\mathbf{GW}$-almost every $T$. Meanwhile, by the branching property and the integrability provided by Lemma~\ref{lem:mean-resistance-log},
\begin{equation*}
\mathbf{GW}\left[\frac{1}{\nu} \sum_{i=1}^{\nu} \log \beta(T_i)\right]=\mathbf{GW}[\log \beta(T)].
\end{equation*}
Therefore,
\begin{equation*}
\mathbf{GW}[\log ({\mathcal C}(T)+\lambda)] > \mathbf{GW}[\log \nu].
\end{equation*}
To complete the proof of Theorem~\ref{thm:dim-harm}, it remains to examine the asymptotic behavior of $d_\lambda$. When $\lambda\to 0^+$, a.s.~$\beta({\mathbb T})\to 1$ and ${\mathcal C}({\mathbb T}^+)=\sum_{i=1}^{\nu^+} \beta({\mathbb T}^+_i) \to \nu^+$. Since
\begin{equation}
\label{eq:upper-bd-harm-density}
\frac{\beta({\mathbb T}){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \leq \beta({\mathbb T}) \leq 1,
\end{equation}
we can use Lebesgue's dominated convergence theorem to get $\lim_{\lambda \to 0^+} h_\lambda= 1$. Similarly, it follows from
\begin{equation*}
\log({\mathcal C}({\mathbb T}^+)+\lambda)\frac{\beta({\mathbb T}){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \leq \log({\mathcal C}({\mathbb T}^+)+\lambda) \leq \log(\nu^+ +m)
\end{equation*}
that $\lim_{\lambda \to 0^+} d_\lambda = E[\log \nu^+]= \mathbf{GW}[\log \nu]$. When $\lambda\to m^-$, a.s.~$\beta({\mathbb T})\to 0$ and ${\mathcal C}({\mathbb T}^+)\to 0$. We have seen that the FKG inequality yields the lower bound
\begin{equation*}
d_\lambda > E[\log({\mathcal C}({\mathbb T}^+)+\lambda)].
\end{equation*}
Using dominated convergence again, we obtain
\begin{equation*}
\lim_{\lambda \to m^-} E[\log({\mathcal C}({\mathbb T}^+)+\lambda)]=\log m.
\end{equation*}
On the other hand, recall that $d_\lambda <\log m$. Consequently, $d_\lambda \to \log m$ as $\lambda \to m^-$.

\section{Average number of children along a random path}
\label{sec:average-nb-child}

Recall that for every vertex $x$ in a tree $T$, we write $\nu(x)$ for its number of children. Birkhoff's ergodic theorem implies that for $\mathsf{HARM}_\lambda\times \mu_{\mathsf{HARM}_\lambda}$-a.e.~$(\xi, T)$,
\begin{equation*}
\lim_{n\to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \nu(\xi_k)= \int \nu(e)\mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T)= h_\lambda^{-1} E\bigg[ \frac{\nu^+ \beta({\mathbb T}){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg].
\end{equation*}
The last expectation is finite, as we derive from \eqref{eq:upper-bd-harm-density} that
\begin{equation*}
\frac{\nu^+ \beta({\mathbb T}){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \leq \nu^+ .
\end{equation*}
Since $\mu_{\mathsf{HARM}_\lambda}$ is equivalent to $\mathbf{GW}$, the convergence above also holds for $\mathsf{HARM}_\lambda\times \mathbf{GW}$-a.e.~$(\xi, T)$. Hence, the average number of children of the vertices visited by the $\lambda$-harmonic ray in a Galton--Watson tree is the same as the $\mu_{\mathsf{HARM}_\lambda}$-mean offspring number of the root. For every $k\geq 1$, we set
\begin{equation*}
A(k) \colonequals E\bigg[ \frac{\beta({\mathbb T})\sum_{i=1}^{k}\beta({\mathbb T}_i^+)}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k}\beta({\mathbb T}_i^+)} \bigg].
\end{equation*}
The sequence $(A(k))_{k\geq 1}$ is strictly increasing. Moreover,
\begin{equation*}
\frac{A(k)}{k}=E\bigg[ \frac{\beta({\mathbb T})\beta({\mathbb T}_1^+)}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k}\beta({\mathbb T}_i^+)} \bigg]
\end{equation*}
is strictly decreasing in $k$.

\begin{proposition}
\label{prop:child-harm-visi}
For $0<\lambda<m$,
\begin{equation*}
\int \nu(e)\mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T) > m.
\end{equation*}
Furthermore, $\int \nu(e)\mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T) \to m$ as $\lambda\to 0^+$.
\end{proposition}

\begin{proof}
The first assertion, reformulated as
\begin{equation*}
E\bigg[ \frac{\nu^+ \beta({\mathbb T})\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)} \bigg] > E[\nu^+] \cdot E\bigg[ \frac{\beta({\mathbb T})\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)} \bigg],
\end{equation*}
is a simple consequence of the FKG inequality, since
\begin{equation*}
\mathbf{GW}[\nu A(\nu)]>\mathbf{GW}[\nu] \cdot \mathbf{GW}[A(\nu)].
\end{equation*}
When $\lambda\to 0^+$, a.s.~$\beta({\mathbb T})\to 1$ and ${\mathcal C}({\mathbb T}^+)\to \nu^+$. Using Lebesgue's dominated convergence theorem, we have seen at the end of Section~\ref{sec:dim-harm} that $\lim_{\lambda \to 0^+} h_\lambda= 1$. The same argument applies to the convergence of
\begin{equation*}
E\bigg[ \frac{\nu^+ \beta({\mathbb T}){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg]
\end{equation*}
towards $E[\nu^+]=m$.
\end{proof}

Under $\mathbf{GW}$, we define a random variable $\hat \nu$ having the size-biased distribution of $\nu$.

\begin{proposition}
\label{prop:child-harm-unif}
For $0<\lambda<m$,
\begin{equation*}
\int \nu(e)\mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T) < \mathbf{GW}[\hat\nu]= m^{-1}\textstyle \sum k^2 p_k.
\end{equation*}
If we assume further that $\sum k^3 p_k<\infty$, then $\int \nu(e)\mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T) \to \mathbf{GW}[\hat\nu]$ as $\lambda\to m^-$.
\end{proposition}

\begin{proof}
Since $\int \nu(e)\mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T) < \infty$, the first inequality is trivial when $\sum k^2 p_k=\infty$, so we may assume $\sum k^2 p_k<\infty$ throughout the proof.
The inequality in the first assertion can be written as
\begin{equation*}
E[\nu^+] \cdot E\bigg[\frac{\nu^+ \beta({\mathbb T})\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)} \bigg]< E[(\nu^+)^2] \cdot E\bigg[\frac{\beta({\mathbb T})\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)} \bigg].
\end{equation*}
By conditioning on $\nu^+$, we see that it is equivalent to
\begin{equation*}
\mathbf{GW}[A(\hat \nu)]< \mathbf{GW}[\hat \nu] \cdot \mathbf{GW}\bigg[\frac{A(\hat \nu)}{\hat \nu}\bigg],
\end{equation*}
which results from the FKG inequality. For the second assertion, remark that
\begin{eqnarray*}
E\bigg[ \frac{\nu^+ \beta({\mathbb T})\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)}{m-1} \bigg] &=& \frac{\mathbf{GW}[\nu^2]\cdot \mathbf{GW}[\beta(T)]^2}{m-1} ,\\
E\bigg[ \frac{\beta({\mathbb T})\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)}{m-1} \bigg] &=& \frac{\mathbf{GW}[\nu]\cdot \mathbf{GW}[\beta(T)]^2}{m-1}.
\end{eqnarray*}
When the offspring distribution $p$ admits a second moment, Proposition 3.1 of \cite{BHOZ} shows that
\begin{equation*}
\frac{\beta(T)}{\mathbf{GW}[\beta(T)]}
\end{equation*}
is uniformly bounded in $L^2(\mathbf{GW})$. Using this fact, we can verify that
\begin{equation*}
\lim_{\lambda\to m^-} h_\lambda \cdot E\bigg[ \frac{\beta({\mathbb T})\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)}{m-1} \bigg]^{-1} = 1.
\end{equation*}
Under the third moment condition $\sum k^3 p_k<\infty$, we similarly have
\begin{equation*}
\lim_{\lambda\to m^-} E\bigg[ \frac{\nu^+ \beta({\mathbb T})\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)} \bigg] \cdot E\bigg[ \frac{\nu^+ \beta({\mathbb T})\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)}{m-1} \bigg]^{-1}=1.
\end{equation*}
Therefore,
\begin{equation*}
\int \nu(e)\mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T) \to \frac{\mathbf{GW}[\nu^2]}{\mathbf{GW}[\nu]} =\mathbf{GW}[\hat\nu]
\end{equation*}
as $\lambda\to m^-$.
\end{proof}

We now turn to the average number of children seen by the $\lambda$-biased random walk. First of all, as remarked in \cite[Section 8]{LPP95}, the ergodicity of $\mathsf{HARM}_\lambda\times \mu_{\mathsf{HARM}_\lambda}$ implies that $\mathsf{RW}_\lambda\times \mathbf{AGW}_\lambda$ is also ergodic. For a tree $T$ rooted at $e$, let $\nu^+(e)$ denote the number of children of the root minus 1.
Since
\begin{equation*}
E\bigg[\frac{\nu^+(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg] = E\bigg[\frac{(\lambda+\nu^+){\mathcal C}({\mathbb T}^+)}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg] \leq \lambda+ E[\nu^+]<\infty,
\end{equation*}
it follows from Birkhoff's ergodic theorem that for $\mathsf{RW}_\lambda\times \mathbf{AGW}_\lambda$-a.e.~$(\overset{\rightarrow}{x}, (T,\xi))$,
\begin{equation}
\label{eq:average-nb-child-walk}
\lim_{n\to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \nu(x_k)= \int \nu^+(e) \mathbf{AGW}_\lambda(\mathrm{d}T,\mathrm{d}\xi)= c_\lambda^{-1} E\bigg[\frac{\nu^+(\lambda+\nu^+) \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg].
\end{equation}
Using arguments similar to those in the last remark on page 600 of \cite{LPP95}, we deduce that the average number of children seen by the $\lambda$-biased random walk on ${\mathbb T}$ is a.s.~given by the same integral $\int \nu^+(e) \mathbf{AGW}_\lambda(\mathrm{d}T,\mathrm{d}\xi)$.

\begin{proposition}
\label{prop:child-rw-path}
We have
\begin{equation*}
\int \nu^+(e) \mathbf{AGW}_\lambda(\mathrm{d}T,\mathrm{d}\xi) \quad \left\{
\begin{aligned}
<m &\quad \mbox{when $0< \lambda < 1$}; \\
=m &\quad \mbox{when $\lambda \in \{0,1\}$}; \\
>m &\quad \mbox{when $1< \lambda <m$}. \\
\end{aligned} \right.
\end{equation*}
\end{proposition}

\begin{proof}
For every integer $k\geq 1$ we set
\begin{equation*}
B_\lambda(k) \colonequals E\bigg[ \frac{(\lambda+k)\beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k}\beta({\mathbb T}_i^+)} \bigg].
\end{equation*}
Clearly, we have
\begin{equation*}
\int \nu^+(e) \mathbf{AGW}_\lambda(\mathrm{d}T,\mathrm{d}\xi)= m \frac{\mathbf{GW}[B_\lambda(\hat \nu)]}{\mathbf{GW}[B_\lambda(\nu)]}.
\end{equation*}
When $\lambda\in \{0,1\}$, $B_\lambda(k)= 1$ for all $k$. We will show that the sequence $(B_\lambda(k))_{k\geq 1}$ is strictly decreasing when $0<\lambda <1$, and strictly increasing when $1<\lambda<m$. Therefore, by the FKG inequality, $\mathbf{GW}[B_\lambda(\hat \nu)]>\mathbf{GW}[B_\lambda(\nu)]$ when $1<\lambda<m$, and $\mathbf{GW}[B_\lambda(\hat \nu)]<\mathbf{GW}[B_\lambda(\nu)]$ when $0<\lambda <1$. To get the claimed monotonicity of the sequence $(B_\lambda(k))_{k\geq 1}$, notice that
\begin{equation*}
B_\lambda(k+1)= E\bigg[ \frac{(\lambda+k)\beta({\mathbb T})+\beta({\mathbb T}^+_{k+1})}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k+1}\beta({\mathbb T}_i^+)} \bigg].
\end{equation*}
Simple calculations give
\begin{align*}
B_\lambda(k+1)-B_\lambda(k) & = E\bigg[ \frac{-(\lambda+k)\beta({\mathbb T})\beta({\mathbb T}^+_{k+1})+\beta({\mathbb T}^+_{k+1})(\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k}\beta({\mathbb T}_i^+))}{(\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k+1}\beta({\mathbb T}_i^+))(\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k}\beta({\mathbb T}_i^+))} \bigg]\\
& = E\bigg[ \frac{\beta({\mathbb T}^+_{k+1})(\lambda-1)(1-\beta({\mathbb T}))}{(\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k+1}\beta({\mathbb T}_i^+))(\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k}\beta({\mathbb T}_i^+))} \bigg]\\
& \qquad + E\bigg[ \frac{-k\beta({\mathbb T}^+_{k+1})\beta({\mathbb T})+ \beta({\mathbb T}^+_{k+1})\sum_{i=1}^k\beta({\mathbb T}_i^+)}{(\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k+1}\beta({\mathbb T}_i^+))(\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{k}\beta({\mathbb T}_i^+))} \bigg].
\end{align*}
Since the last expectation vanishes, $B_\lambda(k+1)-B_\lambda(k)<0$ if and only if $\lambda<1$.
\end{proof}

As a consequence, when $0<\lambda\leq 1$, we have
\begin{equation*}
\int \nu(e)\mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T)> m \geq \int \nu^+(e) \mathbf{AGW}_\lambda(\mathrm{d}T,\mathrm{d}\xi).
\end{equation*}
The next result, together with Proposition~\ref{prop:child-rw-path}, shows that $\int \nu^+(e) \mathbf{AGW}_\lambda(\mathrm{d}T,\mathrm{d}\xi)$ is not monotone with respect to $\lambda$.

\begin{proposition}
\label{prop:child-rw-path-limit-0}
As $\lambda\to 0^+$, $\int \nu^+(e) \mathbf{AGW}_\lambda(\mathrm{d}T,\mathrm{d}\xi)$ converges to $m$.
\end{proposition}

\begin{proof}
Note that
\begin{equation*}
\frac{\beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)} \leq 1.
\end{equation*}
By Lebesgue's dominated convergence it follows that $\lim_{\lambda\to 0^+} c_\lambda=1$. Similarly, we have
\begin{equation*}
\lim_{\lambda\to 0^+} E\bigg[\frac{\lambda \nu^+ \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)} \bigg]=0.
\end{equation*}
On the other hand,
\begin{equation*}
E\bigg[\frac{(\nu^+)^2 \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)} \bigg]=E\bigg[\frac{\nu^+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)} \bigg],
\end{equation*}
to which we can apply Lebesgue's dominated convergence again to get
\begin{equation*}
\lim_{\lambda\to 0^+} E\bigg[\frac{(\nu^+)^2 \beta({\mathbb T})}{\lambda-1+\beta({\mathbb T})+\sum_{i=1}^{\nu^+}\beta({\mathbb T}_i^+)} \bigg]=E[\nu^+]=m.
\end{equation*}
In view of \eqref{eq:average-nb-child-walk}, the proof is thus finished.
\end{proof} {\bf e}gin{proposition} \longrightarrowbel{prop:child-rw-path-limit-m} Assume that $\sum k^3 p_k<\infty$. Then, {\bf e}gin{equation*} \lim_{\longrightarrowmbda{\mathcal T}o m-} \int {\mathcal N}u^+(e) \mathbf{AGW}_\longrightarrowmbda(\mathrm{d}T,\mathrm{d}\xi) = \frac{m^2+ \sum k^2 p_k}{2m}. \end{equation*} \end{proposition} {\bf e}gin{proof} As for the analogous result in Proposition~\ref{prop:child-harm-unif}, we can use the uniform boundedness in $L^2(\mathbf{GW})$ of ${\bf e}ta(T)/\mathbf{GW}[{\bf e}ta(T)]$ to see that {\bf e}gin{equation*} \lim_{\longrightarrowmbda{\mathcal T}o m^-} c_\longrightarrowmbda \cdot E\bigg[ \frac{(\longrightarrowmbda+{\mathcal N}u^+) {\bf e}ta({\mathbb T})}{\longrightarrowmbda-1} \bigg]^{-1} = 1= \lim_{\longrightarrowmbda{\mathcal T}o m^-} E\bigg[\frac{{\mathcal N}u^+(\longrightarrowmbda+{\mathcal N}u^+) {\bf e}ta({\mathbb T})}{\longrightarrowmbda-1+{\bf e}ta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg] \cdot E\bigg[\frac{{\mathcal N}u^+(\longrightarrowmbda+{\mathcal N}u^+) {\bf e}ta({\mathbb T})}{\longrightarrowmbda-1} \bigg]^{-1}. \end{equation*} Hence, it follows from {\bf e}gin{equation*} E\bigg[ \frac{(\longrightarrowmbda+{\mathcal N}u^+) {\bf e}ta({\mathbb T})}{\longrightarrowmbda-1} \bigg]^{-1} E\bigg[\frac{{\mathcal N}u^+(\longrightarrowmbda+{\mathcal N}u^+) {\bf e}ta({\mathbb T})}{\longrightarrowmbda-1} \bigg]= \frac{E[{\mathcal N}u^+(\longrightarrowmbda+{\mathcal N}u^+)]}{E[\longrightarrowmbda+{\mathcal N}u^+]}= \frac{\longrightarrowmbda m+\sum k^2 p_k}{\longrightarrowmbda+m} \end{equation*} that {\bf e}gin{equation*} \lim_{\longrightarrowmbda{\mathcal T}o m^-} c_\longrightarrowmbda^{-1} E\bigg[\frac{{\mathcal N}u^+(\longrightarrowmbda+{\mathcal N}u^+) {\bf e}ta({\mathbb T})}{\longrightarrowmbda-1+{\bf e}ta({\mathbb T})+{\mathcal C}({\mathbb T}^+)} \bigg]= \lim_{\longrightarrowmbda{\mathcal T}o m^-} \frac{\longrightarrowmbda m+\sum k^2 p_k}{\longrightarrowmbda+m} = \frac{m^2+ \sum k^2 p_k}{2m}, \end{equation*} which finishes the proof by \eqref{eq:average-nb-child-walk}. \end{proof} Combining Propositions \ref{prop:child-harm-visi}, \ref{prop:child-harm-unif}, \ref{prop:child-rw-path-limit-0} and \ref{prop:child-rw-path-limit-m}, we see that {\bf e}gin{equation*} \lim_{\longrightarrowmbda{\mathcal T}o 0^+} \bigg(\int {\mathcal N}u(e)\mu_{\mathsf{HARM}_\longrightarrowmbda}(\mathrm{d}T)- \int {\mathcal N}u^+(e) \mathbf{AGW}_\longrightarrowmbda(\mathrm{d}T,\mathrm{d}\xi)\bigg) = 0, \end{equation*} and if $\sum k^3 p_k<\infty$, {\bf e}gin{equation*} \lim_{\longrightarrowmbda{\mathcal T}o m^-} \bigg(\int {\mathcal N}u(e)\mu_{\mathsf{HARM}_\longrightarrowmbda}(\mathrm{d}T)- \int {\mathcal N}u^+(e) \mathbf{AGW}_\longrightarrowmbda(\mathrm{d}T,\mathrm{d}\xi) \bigg)= \frac{\sum k^2 p_k -m^2}{2m} >0. \end{equation*} As mentioned in the introduction, we conjecture that for all $\longrightarrowmbda \in (0,m)$, {\bf e}gin{equation*} \int {\mathcal N}u(e)\mu_{\mathsf{HARM}_\longrightarrowmbda}(\mathrm{d}T)- \int {\mathcal N}u^+(e) \mathbf{AGW}_\longrightarrowmbda(\mathrm{d}T,\mathrm{d}\xi)>0. \end{equation*} {\mathcal N}oindentndent{\bf Remark.} If we consider the average \emph{reciprocal} number of children of vertices along an infinite path in ${\mathbb T}$, the FKG inequality implies that for all $\longrightarrowmbda \in (0,m)$, {\bf e}gin{equation*} \int \frac{1}{{\mathcal N}u^+(e)} \mathbf{AGW}_\longrightarrowmbda(\mathrm{d}T,\mathrm{d}\xi) > \frac{1}{m}. 
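For instance, for the offspring distribution with $p_1=p_3=\tfrac{1}{2}$ (so that $\sum k^3 p_k<\infty$ trivially), one has $m=2$ and $\sum k^2 p_k=5$, whence
\begin{equation*}
\lim_{\lambda\to m^-} \int \nu^+(e) \mathbf{AGW}_\lambda(\mathrm{d}T,\mathrm{d}\xi)= \frac{m^2+\sum k^2 p_k}{2m}=\frac{9}{4}
\qquad \mbox{and} \qquad
\frac{\sum k^2 p_k-m^2}{2m}=\frac{1}{4};
\end{equation*}
that is, in this example the gap between the two averages tends to $1/4$ as $\lambda\to m^-$, whereas it vanishes as $\lambda\to 0^+$.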
\noindent{\bf Remark.} If we consider the average \emph{reciprocal} number of children of vertices along an infinite path in ${\mathbb T}$, the FKG inequality implies that for all $\lambda \in (0,m)$,
\begin{equation*}
\int \frac{1}{\nu^+(e)} \mathbf{AGW}_\lambda(\mathrm{d}T,\mathrm{d}\xi) > \frac{1}{m}.
\end{equation*}
We also have
\begin{equation*}
\int \frac{1}{\nu(e)} \mu_{\mathsf{HARM}_\lambda}(\mathrm{d}T) > \frac{1}{m} \quad \mbox{for all } \lambda \in (0,m),
\end{equation*}
by applying the FKG inequality similarly as in the proof of Proposition~\ref{prop:child-harm-unif}.

\noindent{\bf Acknowledgment.} The author thanks Elie A\"id\'ekon and Pierre Rousselin for fruitful discussions. He is also indebted to an anonymous referee for several useful suggestions.

\begin{thebibliography}{99}

\bibitem{Aid} {\sc E.~A\"id\'ekon}. Speed of the biased random walk on a Galton--Watson tree. {\it Probab.~Theory Relat.~Fields} {\bf 159} (2014), 597--617.

\bibitem{BHOZ} {\sc G.~Ben Arous, Y.~Hu, S.~Olla and O.~Zeitouni}. Einstein relation for biased random walk on Galton--Watson trees. {\it Ann.~Inst.~H.~Poincar\'e Probab.~Statist.} {\bf 49} (2013), 698--721.

\bibitem{BHS} {\sc A.~Ben-Hamou and J.~Salez}. Cutoff for nonbacktracking random walks on sparse random graphs. {\it Ann.~Probab.} {\bf 45} (2017), 1752--1770.

\bibitem{BLPS} {\sc N.~Berestycki, E.~Lubetzky, Y.~Peres and A.~Sly}. Random walks on the random graph. {\it Ann.~Probab.} {\bf 46} (2018), 456--490.

\bibitem{Ly90} {\sc R.~Lyons}. Random walks and percolation on trees. {\it Ann.~Probab.} {\bf 18} (1990), 931--958.

\bibitem{LPP95} {\sc R.~Lyons, R.~Pemantle and Y.~Peres}. Ergodic theory on Galton--Watson trees: Speed of random walk and dimension of harmonic measure. {\it Erg.~Theory Dynam.~Syst.} {\bf 15} (1995), 593--619.

\bibitem{LPP96} {\sc R.~Lyons, R.~Pemantle and Y.~Peres}. Biased random walks on Galton--Watson trees. {\it Probab.~Theory Relat.~Fields} {\bf 106} (1996), 249--264.

\bibitem{LPP97} {\sc R.~Lyons, R.~Pemantle and Y.~Peres}. Unsolved problems concerning random walks on trees. {\it IMA Vol.~Math.~Appl.} {\bf 84} (1997), 223--237.

\bibitem{LP-book} {\sc R.~Lyons and Y.~Peres}. {\it Probability on Trees and Networks}. Cambridge University Press, New York, 2016, xv+699 pp.

\bibitem{Rou} {\sc P.~Rousselin}. Invariant measures, Hausdorff dimension and dimension drop of some harmonic measures on Galton--Watson trees. {\it Electron.~J.~Probab.} {\bf 23} (2018), no.~46, 1--31.

\bibitem{V2000} {\sc B.~Vir\'ag}. On the speed of random walks on graphs. {\it Ann.~Probab.} {\bf 28} (2000), 379--394.

\end{thebibliography}

\end{document}
\begin{document} \title{Injective types in univalent mathematics}
\begin{abstract} We investigate the injective types and the algebraically injective types in univalent mathematics, both in the absence and in the presence of propositional resizing. Injectivity is defined by the surjectivity of the restriction map along any embedding, and algebraic injectivity is defined by a given section of the restriction map along any embedding. Under propositional resizing axioms, the main results are easy to state: (1)~Injectivity is equivalent to the propositional truncation of algebraic injectivity. (2)~The algebraically injective types are precisely the retracts of exponential powers of universes. (2a)~The algebraically injective sets are precisely the retracts of powersets. (2b)~The algebraically injective \m{(n+1)}-types are precisely the retracts of exponential powers of universes of \m{n}-types. (3)~The algebraically injective types are also precisely the retracts of algebras of the partial-map classifier. From~(2) it follows that any universe is embedded as a retract of any larger universe. In the absence of propositional resizing, we have similar results that have subtler statements which need to keep track of universe levels rather explicitly, and are applied to get the results that require resizing.
\noindent {\bf Keywords.} Injective type, flabby type, Kan extension, partial-map classifier, univalent mathematics, univalence axiom.
\noindent {\bf MSC 2010.} 03B15, 03B35, 03G30, 18A40, 18C15. \end{abstract}
\section{Introduction} We investigate the injective types and the algebraically injective types in univalent mathematics, both in the absence and in the presence of propositional resizing axioms. These notions of injectivity are about the extension problem \begin{diagram} X & & \rInto^j & & Y \\ & \rdTo_f & & \ldEto & \\ & & D. & & \end{diagram} The injectivity of a type \m{D:\mathcal{U}} is defined by the surjectivity of the restriction map \m{(-) \mathrel{\circ} j} along any embedding~\m{j}: \M{ \Pi(X,Y : \mathcal{U})\, \Pi (j : X \hookrightarrow Y)\, \Pi(f : X \to D)\, \exists (g : Y \to D)\, g \mathrel{\circ} j = f, } so that we get an \emph{unspecified} extension~\m{g} of~\m{f} along~\m{j}. The algebraic injectivity of \m{D} is defined by a given section \m{(-) \mid j} of the restriction map \m{(-) \mathrel{\circ} j}, following Bourke's terminology~\cite{bourke:2017}. By \m{\Sigma{-}\Pi}-distributivity, this amounts to \M{ \Pi(X,Y : \mathcal{U})\, \Pi (j : X \hookrightarrow Y)\, \Pi(f : X \to D)\, \Sigma (f \mid j : Y \to D), f \mid j \mathrel{\circ} j = f, } so that we get a \emph{designated} extension~\m{f \mid j} of~\m{f} along~\m{j}. Formally, in this definition, \m{f \mid j} can be regarded as a variable, but we instead think of the symbol ``\m{\mid}'' as a binary operator. For the sake of generality, we work without assuming or rejecting the principle of excluded middle, and hence without assuming the axiom of choice either. Moreover, we show that the principle of excluded middle holds if and only if all pointed types are algebraically injective, and, assuming resizing, if and only if all inhabited types are injective, so that there is nothing interesting to say about (algebraic) injectivity in its presence. That pointedness and inhabitedness are needed is seen by considering the embedding \m{\mathbb{0} \hookrightarrow \mathbb{1}}.
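Indeed, an extension of the unique map \m{f : \mathbb{0} \to D} along the embedding \m{j : \mathbb{0} \hookrightarrow \mathbb{1}} is just a map \m{g : \mathbb{1} \to D}, and the extension condition \m{g \mathrel{\circ} j = f} holds automatically, because any two maps defined on \m{\mathbb{0}} are equal by function extensionality. Hence algebraic injectivity of \m{D} applied to this embedding gives a designated point \m{(f \mid j)(\operatorname{\star}) : D}, while injectivity gives an unspecified one, that is, the inhabitedness of \m{D}.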
Under propositional resizing principles~\cite{hottbook} (Definitions~\ref{resizing} and~\ref{omega:resizing} below), the main results are easy to state: \begin{enumerate} \item Injectivity is equivalent to the propositional truncation of algebraic injectivity. (This can be seen as a form of choice that just holds, as it moves a propositional truncation inside a \m{\Pi}-type to outside the \m{\Pi}-type, and may be related to \cite{kenney:2011}.) \item The algebraically injective types are precisely the retracts of exponential powers of universes. Here by an exponential power of a type \m{B} we mean a type of the form \m{A \to B}, also written \m{B^A}. In particular, \begin{enumerate} \item The algebraically injective sets are precisely the retracts of powersets. \item The algebraically injective \m{(n+1)}-types are precisely retracts of exponential powers of the universes of \m{n}-types. \end{enumerate} Another consequence is that any universe is embedded as a retract of any larger universe. \item The algebraically injective types are also precisely the underlying objects of the algebras of the partial-map classifier. \end{enumerate} In the absence of propositional resizing, we have similar results that have subtler statements that need to keep track of universe levels rather explicitly. Most constructions developed in this paper are in the absence of propositional resizing. We apply them, with the aid of a notion of algebraic flabbiness, which is related to the partial-map classifier, to derive the results that rely on resizing mentioned above. \paragraph{Acknowledgements.} Mike Shulman has acted as a sounding board over the years, with many helpful remarks, including in particular the suggestion of the terminology \emph{algebraic injectivity} from~\cite{bourke:2017} for the notion we consider here. \section{Underlying formal system} \label{foundations} Our handling of universes has a model in \m{\infty}-toposes following Shulman~\cite{2019arXiv190407004S}. It differs from that of the HoTT book~\cite{hottbook}, and Coq~\cite{coq}, in that we don't assume cumulativity, and it agrees with that of Agda~\cite{agda}. \subsection{Our univalent type theory} Our underlying formal system can be considered to be a subsystem of that used in UniMath~\cite{unimath}. \begin{enumerate} \item We work within an intensional Martin-L\"of type theory with types \m{\mathbb{0}} (empty type), \m{\mathbb{1}} (one-element type with \m{\operatorname{\star}:\mathbb{1}}), \m{\mathbb{N}} (natural numbers), and type formers \m{+} (binary sum), \m{\Pi} (product), \m{\Sigma} (sum) and \m{\operatorname{Id}} (identity type), and a hierarchy of type universes ranged over by \m{\mathcal{U},\mathcal{V},\mathcal{W},\mathcal{T}}, closed under them in a suitable sense discussed below. We take these as required closure properties of our formal system, rather than as an inductive definition. \item We assume a universe \m{\mathcal{U}_0}, and for each universe \m{\mathcal{U}} we assume a successor universe \m{\mathcal{U}^+} with \m{\mathcal{U} : \mathcal{U}^+}, and for any two universes \m{\mathcal{U},\mathcal{V}} a least upper bound \m{\mathcal{U} \sqcup \mathcal{V}}. We stipulate that we have \m{\mathcal{U}_0 \sqcup \mathcal{U} = \mathcal{U}} and \m{\mathcal{U} \sqcup \mathcal{U}^+ = \mathcal{U}^+} definitionally, and that the operation \m{(-)\sqcup(-)} is definitionally idempotent, commutative, and associative, and that the successor operation \m{(-)^+} distributes over \m{(-)\sqcup(-)} definitionally. 
\item We don't assume that the universes are cumulative on the nose, in the sense that from \m{X : \mathcal{U}} we would be able to deduce that \m{X : \mathcal{U} \sqcup \mathcal{V}} for any \m{\mathcal{V}}, but we also don't assume that they are not. However, from the assumptions formulated below, it follows that for any two universes \m{\mathcal{U},\mathcal{V}} there is a map \m{\operatorname{lift}_{\mathcal{U},\mathcal{V}} : \mathcal{U} \to \mathcal{U} \sqcup \mathcal{V}}, for instance \m{X \mapsto X + \mathbb{0}_\mathcal{V}}, which is an embedding with \m{\operatorname{lift} X \simeq X} if univalence holds (we cannot write the identity type \m{\operatorname{lift} X = X}, as the left- and right-hand sides live in the different types \m{\mathcal{U}} and \m{\mathcal{U} \sqcup \mathcal{V}}, which are not (definitionally) the same in general).
\item We stipulate that we have copies \m{\mathbb{0}_\mathcal{U}} and \m{\mathbb{1}_\mathcal{V}} of the empty and singleton types in each universe \m{\mathcal{U}} (with the subscripts often elided).
\item We stipulate that if \m{X : \mathcal{U}} and \m{Y : \mathcal{V}}, then \m{X+Y : \mathcal{U} \sqcup \mathcal{V}}.
\item We stipulate that if \m{X : \mathcal{U}} and \m{A : X \to \mathcal{V}} then \m{\Pi_X A : \mathcal{U} \sqcup \mathcal{V}}. We abbreviate this product type as \m{\Pi A} when \m{X} can be inferred from \m{A}, and sometimes we write it verbosely as \m{\Pi (x:X), A \, x}. In particular, for types \m{X : \mathcal{U}} and \m{Y : \mathcal{V}}, we have the function type \m{X \to Y : \mathcal{U} \sqcup \mathcal{V}}.
\item The same type stipulations as for \m{\Pi}, and the same grammatical conventions apply to the sum type former \m{\Sigma}. In particular, for types \m{X : \mathcal{U}} and \m{Y : \mathcal{V}}, we have the cartesian product \m{X \times Y : \mathcal{U} \sqcup \mathcal{V}}.
\item We assume the \m{\eta} rules for \m{\Pi} and \m{\Sigma}, namely that \m{f = \lambda x, f \, x} holds definitionally for any \m{f} in a \m{\Pi}-type and that \m{z=(\operatorname{pr}_1 z , \operatorname{pr}_2 z)} holds definitionally for any \m{z} in a \m{\Sigma} type, where \m{\operatorname{pr}_1} and \m{\operatorname{pr}_2} are the projections.
\item For a type \m{X} and points \m{x,y:X}, the identity type \m{\operatorname{Id}_{X} x \, y} is abbreviated as \m{\operatorname{Id} x \, y} and often written \m{x =_X y} or simply \m{x = y}. The elements of the identity type \m{x=y} are called identifications or paths from \m{x} to~\m{y}.
\item When making definitions, definitional equality is written ``$\eqdef$''. When it is invoked, it is written e.g.\ ``\m{x = y} definitionally''. This is consistent with the fact that any definitional equality \m{x = y} gives rise to an element of the identity type \m{x = y} and should therefore be unambiguous.
\item When we say that something is the case by construction, this means we are expanding definitional equalities.
\item We tacitly assume univalence~\cite{hottbook}, which gives function extensionality (pointwise equal functions are equal) and propositional extensionality (logically equivalent subsingletons are equal).
\item We work with the existence of propositional, or subsingleton, truncations as an assumption, also tacit. The HoTT book~\cite{hottbook}, instead, defines type formation \df{rules} for propositional truncation as a syntactical construct of the formal system.
Here we take propositional truncation as an axiom for any pair of universes \m{\mathcal{U},\mathcal{V}}: \textcolor{darkblue}{ \begin{align*} \Pi (X:\mathcal{U})\, \Sigma & (\trunc{X} : \mathcal{U}), \\ & \mathrel{\phantom{\times}} \text{\m{\trunc{X}} is a proposition} \times (X \to \trunc{X}) \\ & \times \bracket{\Pi (P : \mathcal{V}), \text{\m{P} is a proposition} \to (X \to P) \to \trunc{X} \to P}. \end{align*}} We write \m{\mid x \mid} for the insertion of \m{x:X} into the type \m{\trunc{X}} by the assumed function \m{X \to \trunc{X}}. We also denote by \m{\bar{f}} the function \m{\trunc{X} \to P} obtained by the given ``elimination rule'' \m{(X \to P) \to \trunc{X} \to P} applied to a function \m{f:X \to P}. The universe \m{\mathcal{U}} is that of types we truncate, and \m{\mathcal{V}} is the universe where the propositions we eliminate into live. Because the existence of propositional truncations is an assumption rather than a type formation rule, its so-called ``computation'' rule \M{\bar{f} \mid x \mid = f x} doesn't hold definitionally, of course, but is established as a derived identification, by the definition of proposition. \end{enumerate} \subsection{Terminology and notation} \label{existence:terminology} We assume that the readers are already familiar with the notions of univalent mathematics, e.g.\ from the HoTT book~\cite{hottbook}. The purpose of this section is to establish terminology and notation only, particularly regarding our modes of expression that diverge from the HoTT book. \begin{enumerate} \item A type \m{X} is a singleton, or contractible, if there is a designated \m{c:X} with \m{x = c} for all \m{x:X}: \M{ \text{\m{X} is a singleton} \eqdef \Sigma (c : X), \Pi (x:X), x = c. } \item A proposition, or subsingleton, or truth value, is a type with at most one element, meaning that any two of its elements are identified: \M{ \text{\m{X} is a proposition} \eqdef \Pi(x,y:X), x=y. } \item By an unspecified element of a type \m{X} we mean a (specified) element of its propositional truncation~\m{\trunc{X}}. We say that a type is inhabited if it has an unspecified element. If the type \m{X} codifies a mathematical statement, we say that \m{X} holds in an unspecified way to mean the assertion \m{\trunc{X}}. For example, if we say that the type \m{A} is a retract of the type \m{B} in an unspecified way, what we mean is that \m{\trunc{\text{\m{A} is a retract of \m{B}}}}. \item Phrases such as ``there exists'', ``there is'', ``there is some'', ``for some'' etc.\ indicate a propositionally truncated \m{\Sigma}, and symbolically we write \M{(\exists (x:X), A \, x) \eqdef \trunc{\Sigma (x:X), A \, x}.} For emphasis, we may say that there is an unspecified \m{x:X} with \m{A\,x}. When the meaning of existence is intended to be (untruncated) \m{\Sigma}, we use phrases such as ``there is a designated'', ``there is a specified'', ``there is a distinguished'', ``there is a given'', ``there is a chosen'', ``for some chosen'', ``we can find'' etc. The statement that there is a unique \m{x:X} with \m{A \, x} amounts to the assertion that the type \m{\Sigma (x:X), A \, x} is a singleton: \M{ (\exists! (x:X), A \, x) \eqdef \text{the type \m{\Sigma (x:X), A \, x} is a singleton}. } That is, there is a unique pair \m{(x,a)} with \m{x:X} and \m{a : A\, x}. This doesn't need to be explicitly propositionally truncated, because singleton types are automatically propositions. 
The statement that there is at most one \m{x:X} with \m{A \, x} amounts to the assertion that the type \m{\Sigma (x:X), A \, x} is a subsingleton (so we have at most one pair \m{(x,a)} with \m{x:X} and \m{a : A\, x}). \item We often express a type of the form \m{\Sigma(x:X), A \, x} by phrases such as ``the type of \m{x:X} with \m{A \, x}''. For example, if we define the fiber of a point \m{y:Y} under a function \m{f : X \to Y} to be the type \m{f^{-1}(y)} of points \m{x:X} that are mapped by \m{f} to a point identified with \m{y}, it should be clear from the above conventions that we mean \M{ f^{-1}(y) \eqdef \Sigma (x : X), f x = y. } Also, with the above terminological conventions, saying that the fibers of \m{f} are singletons (that is, that \m{f} is an equivalence) amounts to the same thing as saying that for every \m{y:Y} there is a unique \m{x:X} with \m{f(x)=y}. Similarly, we say that such an \m{f} is an embedding if for every \m{y:Y} there is at most one \m{x:X} with \m{f(x)=y}. In passing, we remark that, in general, this is stronger than \m{f} being left-cancellable, but coincides with left-cancellability if the type \m{Y} is a set (its identity types are all subsingletons). \item We sometimes use the mathematically more familiar ``maps to'' notation~\m{\mapsto} instead of type-theoretical lambda notation \m{\lambda} for defining nameless functions. \item Contrarily to an existing convention among some practitioners, we will not reserve the word \df{is} for mathematical statements that are subsingleton types. For example, we say that a type is algebraically injective to mean that it comes equipped with suitable data, or that a type \m{X} is a retract of a type \m{Y} to mean that there are designated functions \m{s : X \to Y} and \m{r : Y \to X}, and a designated pointwise identification \m{r \mathrel{\circ} s \sim \operatorname{id}}. \item Similarly, we don't reserve the words \df{theorem}, \df{lemma}, \df{corollary} and \df{proof} for constructions of elements of subsingleton types, and all our constructions are indicated by the word proof, including the construction of data or structure. Because \df{proposition} is a semantical rather than syntactical notion in univalent mathematics, we often have situations when we know that a type is a proposition only much later in the mathematical development. An example of this is univalence. To know that this is a proposition, we first need to state and prove many lemmas, and even if these lemmas are propositions themselves, we will not know this at the time they are stated and proved. For instance, knowing that the notion of being an equivalence is a proposition requires function extensionality, which follows from univalence. Then this is used to prove that univalence is a proposition. \end{enumerate} \subsection{Formal development} A computer-aided formal development of the material of this paper has been performed in Agda~\cite{agda}, occasionally preceded by pencil and paper scribbles, but mostly directly in the computer with the aid of Agda's interactive features. This paper is an unformalization of that development. We emphasize that not only numbered statements in this paper have formal counterparts, but also the comments in passing, and that the formal version has more information than what we choose to report here. We have two versions. 
One of them~\cite{injective:blackboard} is in \df{blackboard style}, with the ideas in the order they have come to our mind over the years, in a fairly disorganized way, and with local assumptions of univalence, function extensionality, propositional extensionality and propositional truncation. The other one~\cite{injective:article} is in \df{article style}, with univalence and existence of propositional truncations as global assumptions, and functional and propositional extensionality derived from univalence. This second version follows closely this paper (or rather this paper follows closely that version), organized in a way more suitable for dissemination, repeating the blackboard definitions, in a definitionally equal way, and reproducing the proofs and constructions that we consider to be relevant while invoking the blackboard for the routine, unenlightening ones. The blackboard version also has additional information that we have chosen not to include in the article version of the Agda development or this paper. An advantage of the availability of a formal version is that, whatever steps we have omitted here because we considered them to be obvious or routine, can be found there, in case of doubt. \section{Injectivity with universe levels} As discussed in the introduction, in the absence of propositional resizing we are forced to keep track of universe levels rather explicitly. \begin{definition} We say that a type \m{D} in a universe \m{\mathcal{W}} is \df{\m{\mathcal{U},\mathcal{V}}-injective} to mean \M{ \Pi(X : \mathcal{U})\, \Pi(Y : \mathcal{V})\, \Pi (j : X \hookrightarrow Y)\, \Pi(f : X \to D),\, \exists (g : Y \to D), g \mathrel{\circ} j \sim f, } and that it is \df{algebraically \m{\mathcal{U},\mathcal{V}}-injective} to mean \M{ \Pi(X : \mathcal{U}) \,\Pi(Y : \mathcal{V})\, \Pi (j : X \hookrightarrow Y) \, \Pi(f : X \to D) ,\, \Sigma (f \mid j : Y \to D), f \mid j \mathrel{\circ} j \sim f. } \end{definition} \noindent Notice that, because we have function extensionality, pointwise equality~\m{\sim} of functions is equivalent to equality, and hence equal to equality by univalence. But it is more convenient for the purposes of this paper to work with pointwise equality in these definitions. \section{The algebraic injectivity of universes} Let \m{\mathcal{U},\mathcal{V},\mathcal{W}} be universes, \m{X:\mathcal{U}} and \m{Y : \mathcal{V}} be types, and \m{f : X \to \mathcal{W}} and \m{j : X \to Y} be given functions, where \m{j} is not necessarily an embedding. We define functions \m{f \edown j} and \m{f \eup j} of type \m{Y \to \mathcal{U} \sqcup \mathcal{V} \sqcup \mathcal{W}} by \textcolor{darkblue}{\begin{eqnarray*} (f \edown j) \, y & \eqdef & \Sigma (w : j^{-1}(y)), f(\operatorname{pr}_1 w), \\ (f \eup j) \, y & \eqdef & \Pi (w : j^{-1}(y)), f(\operatorname{pr}_1 w). \end{eqnarray*}} \begin{lemma} If \m{j} is an embedding, then both \m{f \edown j} and \m{f \eup j} are extensions of \m{f} along~\m{j} up to equivalence, in the sense that \M{(f \edown j \mathrel{\circ} j) \, x \simeq f x \simeq (f \eup j \mathrel{\circ} j) \, x,} and hence extensions up to equality if \m{\mathcal{W}} is taken to be \m{\mathcal{U} \sqcup \mathcal{V}}, by univalence. \end{lemma} \noindent Notice that if \m{\mathcal{W}} is kept arbitrary, then univalence cannot be applied because equality is defined only for elements of the same type. 
\begin{proof} Because a sum indexed by a subsingleton is equivalent to any of its summands, and similarly a product indexed by a subsingleton is equivalent to any of its factors, and because a map is an embedding precisely when its fibers are all subsingletons. \end{proof} \noindent We record this corollary: \begin{lemma} \label{ref:16:1} The universe \m{\mathcal{U} \sqcup \mathcal{V}} is algebraically \m{\mathcal{U},\mathcal{V}}-injective, in at least two ways. \end{lemma} \noindent And in particular, e.g.\ \m{\mathcal{U}} is \m{\mathcal{U},\mathcal{U}}-injective, but of course \m{\mathcal{U}} doesn't live in \m{\mathcal{U}} and doesn't even have a copy in \m{\mathcal{U}}. For the following, we say that \m{y : Y} is not in the image of \m{j} to mean that \m{j \, x \ne y} for all \m{x:X}. \begin{proposition} For \m{y:Y} not in the image of \m{j}, we have \m{(f \edown j) \, y \simeq \mathbb{0}} and \m{(f \eup j) \, y \simeq \mathbb{1}}. \end{proposition} \noindent With excluded middle, this would give that the two extensions have the same sum and product as the non-extended map, respectively, but excluded middle is not needed, as it is not hard to see: \begin{remark} We have canonical equivalences \m{\Sigma f \simeq \Sigma (f \edown j)} and \m{\Pi f \simeq \Pi (f \eup j)}. \end{remark} Notice that the functions \m{f}, \m{f \edown j} and \m{f \eup j}, being universe valued, are type families, and hence the notations \m{\Sigma f}, \m{\Sigma(f \edown j)}, \m{\Pi f} and \m{\Pi(f \eup j)} are just particular cases of the notations for the sum and product of a type family. The two extensions are left and right Kan extensions in the following sense, without the need to assume that \m{j} is an embedding. First, a map \m{f:X \to \mathcal{U}}, when \m{X} is viewed as an \m{\infty}-groupoid and hence an \m{\infty}-category, and when \m{\mathcal{U}} is viewed as the \m{\infty}-generalization of the category of sets, can be considered as a sort of \m{\infty}-presheaf, because its functoriality is automatic: If we define \M{f [ p ] \eqdef \operatorname{transport} f p} of type \m{f\, x \to f\, y} for \m{p : \operatorname{Id} \, x \, y}, then for \m{q : \operatorname{Id} \, y \, z} we have \M{ f [ \operatorname{refl}_x ] = \operatorname{id}_{f \, x}, \qquad\qquad f [p \operatorname{\bullet} q] = f [q] \mathrel{\circ} f [p]. } Then we can consider the type of transformations between such \m{\infty}-presheaves \m{f : X \to \mathcal{W}} and \m{f' : X \to \mathcal{W}'} defined by \M{ f \mathrel{\,\,\preceq\,\,} f' \eqdef \Pi (x : X), f \, x \to f' x, } which are automatically natural in the sense that for all \m{\tau: f \mathrel{\,\,\preceq\,\,} f'} and \m{p : \operatorname{Id} \, x \, y}, \M{ \tau_y \mathrel{\circ} f [ p ] = f' [p] \mathrel{\circ} \tau_x. } It is easy to check that we have the following canonical transformations: \begin{remark} \m{f \edown j \mathrel{\,\,\preceq\,\,} f \eup j} if \m{j} is an embedding. \end{remark} It is also easy to see that, without assuming \m{j} to be an embedding, \begin{enumerate} \item \m{f \mathrel{\,\,\preceq\,\,} f \edown j \mathrel{\circ} j}, \item \m{f \eup j \mathrel{\circ} j \mathrel{\,\,\preceq\,\,} f}. 
\end{enumerate} These are particular cases of the following constructions, which are evident and canonical, even if they may be a bit laborious: \begin{remark} For any \m{g : Y \to \mathcal{T}}, we have canonical equivalences \begin{enumerate} \item \m{(f \edown j \mathrel{\,\,\preceq\,\,} g) \simeq (f \mathrel{\,\,\preceq\,\,} g \mathrel{\circ} j),} \quad i.e.\ \m{f \edown j} is a left Kan extension, \item \m{(g \mathrel{\,\,\preceq\,\,} f \eup j) \simeq (g \mathrel{\circ} j \mathrel{\,\,\preceq\,\,} f),} \quad i.e.\ \m{f \eup j} is a right Kan extension. \end{enumerate} \end{remark} We also have that the left and right Kan extension operators along an embedding are themselves embeddings, as we now show. \begin{theorem} For any types \m{X,Y:\mathcal{U}} and any embedding \m{j : X \to Y}, left Kan extension along \m{j} is an embedding of the function type \m{X \to \mathcal{U}} into the function type \m{Y \to \mathcal{U}}. \end{theorem} \begin{proof} Define \m{s : (X \to \mathcal{U}) \to (Y \to \mathcal{U})} and \m{r : (Y \to \mathcal{U}) \to (X \to \mathcal{U})} by \M{ \begin{array}{lll} s \, f & \eqdef & f \edown j, \\ r \, g & \eqdef & g \mathrel{\circ} j. \end{array} } By function extensionality, we have that \m{r (s \, f) = f}, because \m{s} is a pointwise-extension operator as \m{j} is an embedding, and by construction we have that \m{s (r \, g) = (g \mathrel{\circ} j) \edown j}. Now define \m{\kappa : \Pi (g : Y \to \mathcal{U}), s(r \,g) \mathrel{\,\,\preceq\,\,} g} by \M{ \kappa \, g \, y \, ((x , p) , C) \eqdef \operatorname{transport} \, g \, p \, C } for all \m{g : Y \to \mathcal{U}}, \m{y : Y}, \m{x : X}, \m{p : j \, x = y} and \m{C : g(j \, x)}, so that \m{\operatorname{transport} \, g \, p \, C} has type \m{g \, y }, and consider the type \M{ M \eqdef \Sigma (g : Y \to \mathcal{U})\,\Pi(y:Y), \text{the map \m{\kappa \, g \, y : s (r \, g) \, y \to g \, y} is an equivalence.} } Because the notion of being an equivalence is a proposition and because products of propositions are propositions, the first projection \M{\operatorname{pr}_1 : M \to (Y \to \mathcal{U})} is an embedding. To complete the proof, we show that there is an equivalence \m{\phi : (X \to \mathcal{U}) \to M} whose composition with this projection is \m{s}, so that \m{s}, being the composition of two embeddings, is itself an embedding. We construct \m{\phi} and its inverse \m{\gamma} by \M{ \begin{array}{lll} \phi \, f & \eqdef & (s f , \varepsilon \, f), \\ \gamma \, (g , e) & \eqdef & r \, g, \end{array} } where \m{\varepsilon \, f} is a proof that the map \m{\kappa \, (s f) \, y} is an equivalence for every \m{y : Y}, to be constructed shortly. Before we know this construction, we can see that \m{\gamma (\phi \, f) = r (s \, f) = f} so that \m{\gamma \mathrel{\circ} \phi \sim \operatorname{id}}, and that \m{\phi (\gamma (g , e)) = (s(r g) , \varepsilon (r g))}. To check that the pairs \m{(s(r g) , \varepsilon (r g))} and \m{(g , e)} are equal and hence \m{\phi \mathrel{\circ} \gamma \sim \operatorname{id}}, it suffices to check the equality of the first components, because the second components live in subsingleton types. But \m{e \, y} says that \m{s (r \, g) \, y \simeq g \, y} for any \m{y:Y}, and hence by univalence and function extensionality, \m{s (r \, g) = g}. Thus the functions \m{\phi} and \m{\gamma} are mutually inverse. 
Now, \m{\operatorname{pr}_1 \mathrel{\circ} \phi = s} definitionally using the $\eta$-rule for \m{\Pi}, so that indeed \m{s} is the composition of two embeddings, as we wanted to show. It remains to show that the map \m{\kappa \, (s f) \, y : s(r(s \, f)) \, y \to (s \, f) \, y} is indeed an equivalence. The domain and codomain of this function amount, by construction, to respectively \M{ \begin{array}{lll} A & \eqdef & \Sigma (t : j^{-1}(y)), \Sigma (w : j^{-1}(j (\operatorname{pr}_1 t))), f (\operatorname{pr}_1 w)\\ B & \eqdef & \Sigma (w : j^{-1}(y)), f(\operatorname{pr}_1 w). \end{array} } We construct an inverse \m{\delta : B \to A} by \M{ \delta \, ((x , p),C) \eqdef ((x , p) , (x , \operatorname{refl}_{j \, x}) , C). } It is routine to check that the functions \m{\kappa \, (s f) \, y} and \m{\delta} are mutually inverse, which concludes the proof.
\end{proof} The proof of the theorem below follows the same pattern as the previous one with some portions ``dualized'' in some sense, and so we are slightly more economic with its formulation this time. \begin{theorem} For any types \m{X,Y:\mathcal{U}} and any embedding \m{j : X \to Y}, the right Kan extension operation along \m{j} is an embedding of the function type \m{X \to \mathcal{U}} into the function type \m{Y \to \mathcal{U}}. \end{theorem} \begin{proof} Define \m{s : (X \to \mathcal{U}) \to (Y \to \mathcal{U})} and \m{r : (Y \to \mathcal{U}) \to (X \to \mathcal{U})} by \M{ \begin{array}{lll} s \, f & \eqdef & f \eup j, \\ r \, g & \eqdef & g \mathrel{\circ} j. \end{array} } By function extensionality, we have that \m{r (s \, f) = f}, and, by construction, \m{s (r \, g) = (g \mathrel{\circ} j) \eup j}. Now define \m{\kappa : \Pi (g : Y \to \mathcal{U}), g \mathrel{\,\,\preceq\,\,} s(r \,g) } by \M{ \kappa \, g \, y \, C (x , p) \eqdef \operatorname{transport} \, g \, p^{-1} \, C } for all \m{g : Y \to \mathcal{U}}, \m{y : Y}, \m{C : g \, y}, \m{x : X}, \m{p : j \, x = y}, so that \m{\operatorname{transport} \, g \, p^{-1} \, C} has type \m{g (j \, x) }, and consider the type \M{ M \eqdef \Sigma (g : Y \to \mathcal{U})\,\Pi(y:Y), \text{the map \m{\kappa \, g \, y : g \, y \to s (r \, g) \,y} is an equivalence.} } Then the first projection \m{\operatorname{pr}_1 : M \to (Y \to \mathcal{U})} is an embedding. To complete the proof, we show that there is an equivalence \m{\phi : (X \to \mathcal{U}) \to M} whose composition with this projection is \m{s}, so that it follows that \m{s} is an embedding. We construct \m{\phi} and its inverse \m{\gamma} by \M{ \begin{array}{lll} \phi \, f & \eqdef & (s f , \varepsilon \, f), \\ \gamma \, (g , e) & \eqdef & r \, g, \end{array} } where \m{\varepsilon \, f} is a proof that the map \m{\kappa \, (s f) \, y} is an equivalence for every \m{y : Y}, so that \m{\phi} and \m{\gamma} are mutually inverse by the argument of the previous proof. To prove that the map \m{\kappa \, (s f) \, y : (s \, f) \, y \to s(r(s \, f)) \, y} is an equivalence, notice that its domain and codomain amount, by construction, to respectively \M{ \begin{array}{lll} A & \eqdef & \Pi (w : j^{-1}(y)), f(\operatorname{pr}_1 w), \\ B & \eqdef & \Pi (t : j^{-1}(y)), \Pi (w : j^{-1}(j (\operatorname{pr}_1 t))), f (\operatorname{pr}_1 w). \end{array} } We construct an inverse \m{\delta : B \to A} by \M{ \delta \, C \, (x , p) \eqdef C (x , p) (x , \operatorname{refl}_{j \, x}). } It is routine to check that the functions \m{\kappa \, (s f) \, y} and \m{\delta} are mutually inverse, which concludes the proof.
\end{proof} The left and right Kan extensions trivially satisfy \m{f \edown \operatorname{id} \sim f} and \m{f \eup \operatorname{id} \sim f} because the identity map is an embedding, by the extension property, and so are contravariantly functorial in view of the following. \begin{remark} \label{iterated} For types \m{X : \mathcal{U}}, \m{Y : \mathcal{V}} and \m{Z : \mathcal{W}}, and functions \m{j : X \to Y}, \m{k : Y \to Z} and \m{f : X \to \mathcal{U} \sqcup \mathcal{V} \sqcup \mathcal{W}}, we have canonical identifications \M{ \begin{array}{lll} f \edown (k \mathrel{\circ} j) & \sim & (f \edown j) \edown k, \\ f \eup (k \mathrel{\circ} j) & \sim & (f \eup j) \eup k. \end{array} } \end{remark} \begin{proof} This is a direct consequence of the canonical equivalences \M{ \begin{array}{lll} (\Sigma (t : \Sigma B) , C \, t) \simeq (\Sigma (a : A)\, \Sigma (b : B \, a), C(a,b)) \\ (\Pi (t : \Sigma B) , C \, t) \simeq (\Pi (a : A)\, \Pi (b : B \, a), C(a,b)) \end{array} } for arbitrary universes \m{\mathcal{U},\mathcal{V},\mathcal{W}} and \m{A:\mathcal{U}}, \m{B: A \to \mathcal{V}}, and \m{C : \Sigma \, B \to \mathcal{W}}. \end{proof} The above and the following are applied in work on compact ordinals (reported in our repository~\cite{TypeTopology}). \begin{remark} For types \m{X : \mathcal{U}} and \m{Y : \mathcal{V}}, and functions \m{j : X \to Y}, \m{f : X \to \mathcal{W}} and \m{f' : X \to \mathcal{W}'}, if the type \m{f \, x} is a retract of \m{f' \, x} for any \m{x:X}, then the type \m{(f \eup j) \, y} is a retract of \m{(f' \eup j) \, y} for any \m{y : Y}. \end{remark} \noindent The construction is routine, and presumably can be performed for left Kan extensions too, but we haven't paused to check this. \section{Constructions with algebraically injective types} Algebraic injectives are closed under retracts: \begin{lemma} If a type \m{D} in a universe \m{\mathcal{W}} is algebraically \m{\mathcal{U},\mathcal{V}}-injective, then so is any retract \m{D' : \mathcal{W}'} of \m{D} in any universe \m{\mathcal{W}'}. \end{lemma} \noindent In particular, any type equivalent to an algebraically injective type is itself algebraically injective, without the need to invoke univalence. \begin{proof} \M{\begin{diagram}[p=0.4em] X & & \rTo^j & & Y \\ & \rdTo^f\rdTo(2,4)_{s \mathrel{\circ} f} & & \ldEto^{f \mid j}\ldTo(2,4)_{(s \mathrel{\circ} f) \mid j} & \\ & & D' & & \\ & & \dTo^s \uTo_r & & \\ & & D. \end{diagram}} \noindent For a given section-retraction pair \m{(s,r)}, the construction of the extension operator for \m{D'} from that of \m{D} is given by \m{f \mid j \eqdef r \mathrel{\circ} ((s \mathrel{\circ} f) \mid j)}. \end{proof} \begin{lemma} The product of any family \m{D_a} of algebraically \m{\mathcal{U},\mathcal{V}}-injective types in a universe \m{\mathcal{W}}, with indices \m{a} in a type \m{A} of any universe \m{\mathcal{T}}, is itself algebraically \m{\mathcal{U},\mathcal{V}}-injective. \end{lemma} \noindent In particular, if a type \m{D} in a universe \m{\mathcal{W}} is algebraically \m{\mathcal{U},\mathcal{V}}-injective, then so is any exponential power \m{A \to D : \mathcal{T} \sqcup \mathcal{W}} for any type \m{A} in any universe \m{\mathcal{T}}. 
\begin{proof} We construct the extension operator \m{(-)\mid(-)} of the product \m{\Pi D : \mathcal{T} \sqcup \mathcal{W}} in a pointwise fashion from the extension operators \m{(-)\mid_a(-)} of the algebraically injective types \m{D_a}: For \m{f : X \to \Pi D}, we let \m{f \mid j : Y \to \Pi D} be \M{ (f \mid j) \, y \eqdef a \mapsto ((x \mapsto f \, x \, a) \mid_a j) \, y. } \end{proof} \begin{lemma} Every algebraically \m{\mathcal{U},\mathcal{V}}-injective type \m{D:\mathcal{W}} is a retract of any type \m{Y:\mathcal{V}} into which it is embedded. \end{lemma} \begin{proof} \M{\begin{diagram} D & & \rInto^j & & Y \\ & \rdTo_\operatorname{id} & & \ldEto_{r \eqdef \operatorname{id} \mid j} & \\ & & D. & & \end{diagram}} \noindent We just extend the identity function along the embedding to get the desired retraction~\m{r}. \end{proof} The following is a sort of \m{\infty}-Yoneda embedding: \begin{lemma} The identity type former \m{\operatorname{Id}_X} of any type \m{X:\mathcal{U}} is an embedding of the type~\m{X} into the type~\m{X \to \mathcal{U}}. \end{lemma} \begin{proof} To show that the \m{\operatorname{Id}}-fiber of a given \m{A : X \to \mathcal{U}} is a subsingleton, it suffices to show that if it is pointed then it is a singleton. So let \m{(x,p):\Sigma (x : X), \operatorname{Id} x = A} be a point of the fiber. Applying \m{\Sigma}, seen as a map of type \m{(X \to \mathcal{U}) \to \mathcal{U}}, to the identification~\m{p : \operatorname{Id} \, x = A}, we get an identification \M{ \operatorname{ap} \, \Sigma \, p : \Sigma (\operatorname{Id} x) = \Sigma A, } and hence, being equal to the singleton type \m{\Sigma (\operatorname{Id} x)}, the type \m{\Sigma A} is itself a singleton. Hence we have \M{\begin{array}{llll} A \, x & \simeq & \operatorname{Id} x \mathrel{\,\,\preceq\,\,} A & \text{By the Yoneda Lemma~\cite{rijke:msc},} \\ & = & \Pi (y : X), \operatorname{Id} \, x \, y \to A \, y & \text{by definition of \m{\mathrel{\,\,\preceq\,\,}},} \\ & \simeq & \Pi (y : X), \operatorname{Id} \, x \, y \simeq A \, y & \text{because \m{\Sigma A} is a singleton (Yoneda corollary),} \\ & \simeq & \Pi (y : X), \operatorname{Id} \, x \, y = A \, y & \text{by univalence,} \\ & \simeq & \operatorname{Id} \, x = A & \text{by function extensionality.} \end{array} } So by a second application of univalence we get \m{A \, x = (\operatorname{Id} \, x = A)}. Hence, applying \m{\Sigma} on both sides, we get \m{\Sigma A = (\Sigma (x : X), \operatorname{Id} \, x = A)}. Therefore, because the type \m{\Sigma A} is a singleton, so is the fiber \m{\Sigma (x : X), \operatorname{Id} \, x = A} of~\m{A}. \end{proof} \begin{lemma} \label{ref:16:3} If a type \m{D} in a universe \m{\mathcal{U}} is algebraically \m{\mathcal{U},\mathcal{U}^+}-injective, then \m{D} is a retract of the exponential power \m{D \to \mathcal{U}} of \m{\mathcal{U}}. \end{lemma} \begin{proof} \M{\begin{diagram} D & & \rInto^\operatorname{Id} & & (D \to \mathcal{U}) \\ & \rdTo_\operatorname{id} & & \ldEto_{r \eqdef \operatorname{id} \mid \operatorname{Id}} & \\ & & D. & & \end{diagram}} \noindent This is obtained by combining the previous two constructions, using the fact that \m{D \to \mathcal{U}} lives in the successor universe \m{\mathcal{U}^+}. \end{proof} \section{Algebraic flabbiness and resizing constructions} We now discuss resizing constructions that don't assume resizing axioms.
The above results, when combined together in the obvious way, almost give directly that the algebraically injective types are precisely the retracts of exponential powers of universes, but there is a universe mismatch. Keeping track of the universes to avoid the mismatch, what we get instead is a resizing construction without the need for resizing axioms: \begin{lemma} Algebraically \m{\mathcal{U},\mathcal{U}^+}-injective types \m{D:\mathcal{U}} are algebraically \m{\mathcal{U},\mathcal{U}}-injective too. \end{lemma} \begin{proof} By the above constructions, we first get that \m{D}, being algebraically \m{\mathcal{U},\mathcal{U}^+}-injective, is a retract of \m{D \to \mathcal{U}}. But then \m{\mathcal{U}} is algebraically \m{\mathcal{U},\mathcal{U}}-injective, and, being a power of \m{\mathcal{U}}, so is \m{D \to \mathcal{U}}. Finally, being a retract of \m{D \to \mathcal{U}}, we have that \m{D} is algebraically \m{\mathcal{U},\mathcal{U}}-injective. \end{proof} This is resizing down and so is not surprising. Of course, such a construction can be performed directly by considering an embedding \m{\mathcal{U} \to \mathcal{U}^+}, but the idea is to generalize it to obtain further resizing-for-free constructions, and, later, resizing-for-a-price constructions. We achieve this by considering a notion of flabbiness as data, rather than as property as in the 1-topos literature (see e.g.\ Blechschmidt~\cite{Blechschmidt:2018}). The notion of flabbiness considered in topos theory is defined with truncated \m{\Sigma}, that is, the existential quantifier \m{\exists} with values in the subobject classifier \m{\Omega}. We refer to the notion defined with untruncated \m{\Sigma} as algebraic flabbiness. \begin{definition} We say that a type \m{D : \mathcal{W}} is \df{algebraically \m{\mathcal{U}}-flabby} if \M{ \Pi (P : \mathcal{U}), \text{if \m{P} is a subsingleton then \m{\Pi(f : P \to D)\, \Sigma (d : D)\, \Pi(p : P), d = f \, p}.} } \end{definition} \noindent This terminology is more than a mere analogy with algebraic injectivity: notice that flabbiness and algebraic flabbiness amount to simply injectivity and algebraic injectivity with respect to the class of embeddings \m{P \to \mathbb{1}} with \m{P} ranging over subsingletons: \begin{diagram} P & & \rInto & & \mathbb{1} \\ & \rdTo_f & & \ldEto & \\ & & D. & & \end{diagram} Notice also that an algebraically flabby type \m{D} is pointed, by considering the case when \m{f} is the unique map \m{\mathbb{0} \to D}. \begin{lemma} \label{for27:1} If a type \m{D} in the universe \m{\mathcal{W}} is algebraically \m{\mathcal{U},\mathcal{V}}-injective, then it is algebraically \m{\mathcal{U}}-flabby. \end{lemma} \begin{proof} Given a subsingleton \m{P:\mathcal{U}} and a map \m{f : P \to D}, we can take its extension \m{f \mid \operatorname{!}: \mathbb{1} \to D} along the unique map \m{!:P \to \mathbb{1}}, because it is an embedding, and then we let \m{d \eqdef (f \mid \operatorname{!})\, \operatorname{\star}}, and the extension property gives \m{d = f \, p} for any \m{p:P}. \end{proof} The interesting thing about this is that the universe~\m{\mathcal{V}} is forgotten, and then we can put any other universe below \m{\mathcal{U}} back, as follows. \begin{lemma} \label{for27:2} If a type \m{D} in the universe \m{\mathcal{W}} is algebraically \m{\mathcal{U} \sqcup \mathcal{V}}-flabby, then it is also algebraically \m{\mathcal{U},\mathcal{V}}-injective. 
\end{lemma} \begin{proof} Given an embedding \m{j : X \to Y} of types \m{X:\mathcal{U}} and \m{Y:\mathcal{V}}, a map \m{f : X \to D} and a point \m{y:Y}, in order to construct \m{(f \mid j) \, y} we consider the map \m{f_y : j^{-1}(y) \to D} defined by \m{(x,p) \mapsto f\,x}. Because the fiber \m{j^{-1}(y) : \mathcal{U} \sqcup \mathcal{V}} is a subsingleton as \m{j} is an embedding, we can apply algebraic flabbiness to get \m{d_y : D} with \m{d_y = f_y (x,p)} for all \m{(x,p):j^{-1}(y)}. By the construction of \m{f_y} and the definition of fiber, this amounts to saying that for any \m{x : X} and \m{p : j \, x = y}, we have \m{d_y = f \, x}. Therefore we can take \M{(f \mid j) \, y \eqdef d_y,} because we then have \M{(f \mid j) (j \, x) = d_{j \, x} = f_{j \, x} (x , \operatorname{refl}_{j \, x}) = f \, x} for any \m{x:X}, as required. \end{proof} \noindent We then get the following resizing construction by composing the above two conversions between algebraic flabbiness and injectivity: \begin{lemma} If a type \m{D} in the universe \m{\mathcal{W}} is algebraically \m{(\mathcal{U} \sqcup \mathcal{T}),\mathcal{V}}-injective, then it is also algebraically \m{\mathcal{U},\mathcal{T}}-injective. \end{lemma} \noindent In particular, algebraic \m{\mathcal{U},\mathcal{V}}-injectivity gives algebraic \m{\mathcal{U},\mathcal{U}}- and \m{\mathcal{U}_0,\mathcal{U}}-injectivity. So this is no longer necessarily resizing down, by taking \m{\mathcal{V}} to be e.g.\ the first universe~\m{\mathcal{U}_0}. \section{Injectivity of subuniverses} We now apply algebraic flabbiness to show that any subuniverse closed under subsingletons and under sums, or alternatively under products, is also algebraically injective. \begin{definition} By a \df{subuniverse} of \m{\mathcal{U}} we mean a projection \m{\Sigma \, A \to \mathcal{U}} with \m{A : \mathcal{U} \to \mathcal{T}} subsingleton-valued and the universe \m{\mathcal{T}} arbitrary. By a customary abuse of language, we also sometimes refer to the domain of the projection as the subuniverse. Closure under subsingletons means that \m{A\,P} holds for any subsingleton \m{P:\mathcal{U}}. Closure under sums amounts to saying that if \m{X:\mathcal{U}} satisfies \m{A} and every \m{Y \, x} satisfies \m{A} for a family \m{Y : X \to \mathcal{U}}, then so does \m{\Sigma \, Y}. Closure under products is defined in the same way with \m{\Pi} in place of \m{\Sigma}. \end{definition} \noindent Notice that \m{A} being subsingleton-valued is precisely what is needed for the projection to be an embedding, and that all embeddings are of this form up to equivalence (more precisely, every embedding of any two types is the composition of an equivalence into a sum type followed by the first projection). \begin{lemma} Any subuniverse of \m{\mathcal{U}} which is closed under subsingletons and sums, or alternatively under subsingletons and products, is algebraically \m{\mathcal{U}}-flabby and hence algebraically \m{\mathcal{U},\mathcal{U}}-injective. \end{lemma} \begin{proof} Let \m{\Sigma\,A} be a subuniverse of \m{\mathcal{U}}, let \m{P:\mathcal{U}} be a subsingleton and \m{f : P \to \Sigma \, A} be given. Then define \begin{quote} (1)~\m{ X \eqdef \Sigma (\operatorname{pr}_1 \mathrel{\circ} f)} \qquad or \qquad (2)~\m{X \eqdef \Pi (\operatorname{pr}_1 \mathrel{\circ} f)} \end{quote} according to whether we have closure under sums or products.
Because \m{P}, being a subsingleton, satisfies \m{A}, and because the values of the map \m{\operatorname{pr}_1 \mathrel{\circ} f : P \to \mathcal{U}} satisfy \m{A} by definition of subuniverse, we have \m{a : A\, X} by the sum or product closure property, and \m{d \eqdef (X,a)} has type \m{\Sigma \,A}. To conclude the proof, we need to show that \m{d = f\,p} for any \m{p:P}. Because the second component \m{a} lives in a subsingleton by definition of subuniverse, it suffices to show that the first components are equal, that is, that \m{X = \operatorname{pr}_1 (f p)}. But this follows by univalence, because a sum indexed by a subsingleton is equivalent to any of its summands, and a product indexed by a subsingleton is equivalent to any of its factors. \end{proof} We index \m{n}-types from \m{n=-2} as in the HoTT book, where the \m{-2}-types are the singletons. We have the following as a corollary. \begin{theorem} The subuniverse of \m{n}-types in a universe \m{\mathcal{U}} is algebraically \m{\mathcal{U}}-flabby, in at least two ways, and hence algebraically \m{\mathcal{U},\mathcal{U}}-injective. \end{theorem} \begin{proof} We have a subuniverse because the notion of being an \m{n}-type is a proposition. For \m{n=-2}, the subuniverse of singletons is itself a singleton, and hence trivially injective. For \m{n>-2}, the \m{n}-types are known to be closed under subsingletons and both sums and products. \end{proof} \noindent In particular: \begin{enumerate} \item The type \m{\Omega_\mathcal{U}} of subsingletons in a universe \m{\mathcal{U}} is algebraically \m{\mathcal{U},\mathcal{U}}-injective. (Another way to see that \m{\Omega_\mathcal{U}} is algebraically injective is that it is a retract of the universe by propositional truncation. The same would be the case for \m{n}-types if we were assuming \m{n}-truncations, which we are not.) \item Powersets, being exponential powers of \m{\Omega_\mathcal{U}}, are algebraically \m{\mathcal{U},\mathcal{U}}-injective. \end{enumerate} An anonymous referee suggested the following additional examples: (i) The subuniverse of subfinite types, i.e., subtypes of types for which there is an unspecified equivalence with \m{\operatorname{Fin}(n)} for some~\m{n}. This subuniverse is closed under both \m{\Pi} and \m{\Sigma}. (ii) Reflective subuniverses, as they are closed under \m{\Pi}. (iii) Any universe \m{\mathcal{U}} seen as a subuniverse of \m{\mathcal{U} \sqcup \mathcal{V}}. \section{Algebraic flabbiness with resizing axioms} Returning to size issues, we now apply algebraic flabbiness to show that propositional resizing gives unrestricted algebraic injective resizing. \begin{definition} \label{resizing} The propositional resizing principle, from \m{\mathcal{U}} to \m{\mathcal{V}}, that we consider here says that every proposition in the universe \m{\mathcal{U}} has an equivalent copy in the universe~\m{\mathcal{V}}. By propositional resizing without qualification, we mean propositional resizing between any of the universes involved in the discussion. \end{definition} This is consistent because it is implied by excluded middle, but, as far as we are aware, there is no known computational interpretation of this axiom. A model in which excluded middle fails but propositional resizing holds is given by Shulman~\cite{MR3340541}.
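To make the consistency claim explicit, note that excluded middle decides any proposition \m{P : \mathcal{U}}, and a decided proposition is equivalent to a copy of the singleton or of the empty type in any universe \m{\mathcal{V}}:
\M{ P \simeq \mathbb{1}_\mathcal{V} \text{ if \m{P} holds}, \qquad P \simeq \mathbb{0}_\mathcal{V} \text{ if \m{P} fails}, }
so that \m{P} has an equivalent copy in \m{\mathcal{V}}, which is precisely propositional resizing from \m{\mathcal{U}} to \m{\mathcal{V}}.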
We begin with the following construction, which says that algebraic flabbiness is universe independent in the presence of propositional resizing: \begin{lemma} If propositional resizing holds, then the algebraic \m{\mathcal{V}}-flabbiness of a type in any universe gives its algebraic \m{\mathcal{U}}-flabbiness. \end{lemma} \begin{proof} Let \m{D:\mathcal{W}} be a type in any universe \m{\mathcal{W}}, let \m{P : \mathcal{U}} be a proposition and \m{f : P \to D}. By resizing, we have an equivalence \m{\beta : Q \to P} for a suitable proposition \m{Q:\mathcal{V}}. Then the algebraic \m{\mathcal{V}}-flabbiness of \m{D} gives a point \m{d:D} with \m{d = (f \mathrel{\circ} \beta) \, q} for all \m{q : Q}, and hence with \m{d = f \, p} for all \m{p : P}, because we have \m{p=\beta \, q} for \m{q = \alpha \, p} where \m{\alpha} is a quasi-inverse of \m{\beta}, which establishes the algebraic \m{\mathcal{U}}-flabbiness of~\m{D}. \end{proof} And from this it follows that algebraic injectivity is also universe independent in the presence of propositional resizing: we convert back-and-forth between algebraic injectivity and algebraic flabbiness. \begin{lemma} \label{universe:independence} If propositional resizing holds, then for any type \m{D} in any universe \m{\mathcal{W}}, the algebraic \m{\mathcal{U},\mathcal{V}}-injectivity of \m{D} gives its algebraic \m{\mathcal{U}',\mathcal{V}'}-injectivity. \end{lemma} \begin{proof} We first get the \m{\mathcal{U}}-flabbiness of \m{D} by~\ref{for27:1}, and then its \m{\mathcal{U}' \sqcup \mathcal{V}'}-flabbiness by the previous lemma, and finally its algebraic \m{\mathcal{U}',\mathcal{V}'}-injectivity by~\ref{for27:2}. \end{proof} As an application of this and of the algebraic injectivity of universes, we get that any universe is a retract of any larger universe. We remark that for types that are not sets, sections are not automatically embeddings~\cite{MR3548859}. But we can choose the retraction so that the section is an embedding in our situation. \begin{lemma} \label{canonical} We have an embedding of any universe \m{\mathcal{U}} into any larger universe \m{\mathcal{U} \sqcup \mathcal{V}}. \end{lemma} \begin{proof} For example, we have the embedding given by \m{X \mapsto X + \mathbb{0}_\mathcal{V}}. We don't consider an argument that this is indeed an embedding to be entirely routine without a significant amount of experience in univalent mathematics, even if this may seem obvious. Nevertheless, it is certainly safe to leave it as a challenge to the reader, and a proof can be found in~\cite{injective:article} in case of doubt. \end{proof} \begin{theorem} If propositional resizing holds, then any universe \m{\mathcal{U}} is a retract of any larger universe \m{\mathcal{U} \sqcup \mathcal{V}} with a section that is an embedding. \end{theorem} \begin{proof} The universe \m{\mathcal{U}} is algebraically \m{\mathcal{U},\mathcal{U}}-injective by~\ref{ref:16:1}, and hence it is algebraically \m{\mathcal{U}^+,(\mathcal{U} \sqcup \mathcal{V})^+}-injective by~\ref{universe:independence}, which has the right universe assignments to apply the construction~\ref{ref:16:3} that gives a retraction from an embedding of an injective type into a larger type, in this case the embedding of the universe \m{\mathcal{U}} into the larger universe \m{\mathcal{U} \sqcup \mathcal{V}} constructed in~\ref{canonical}. \end{proof}
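Explicitly, the section is the embedding \m{s \eqdef X \mapsto X + \mathbb{0}_\mathcal{V}} of~\ref{canonical}, and the retraction produced by this construction is the designated extension of the identity function of \m{\mathcal{U}} along it,
\M{ r \eqdef \operatorname{id}_{\mathcal{U}} \mid s : \mathcal{U} \sqcup \mathcal{V} \to \mathcal{U}, }
so that \m{r \, (X + \mathbb{0}_\mathcal{V}) = X} for every \m{X : \mathcal{U}}, by the defining property of the extension operator.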
As mentioned above, we almost have that the algebraically injective types are precisely the retracts of exponential powers of universes, up to a universe mismatch. This mismatch is side-stepped by propositional resizing. The following is one of the main results of this paper: \begin{theorem} \df{(First characterization of algebraic injectives.)} If propositional resizing holds, then a type \m{D} in a universe \m{\mathcal{U}} is algebraically \m{\mathcal{U},\mathcal{U}}-injective if and only if \m{D} is a retract of an exponential power of \m{\mathcal{U}} with exponent in \m{\mathcal{U}}. \end{theorem} \noindent We emphasize that this is a logical equivalence ``if and only if'' rather than an \m{\infty}-groupoid equivalence ``\m{\simeq}''. More precisely, the theorem gives two constructions in opposite directions. So this characterizes the types that \df{can} be equipped with algebraic-injective structure. \begin{proof} \m{(\Rightarrow)}: Because \m{D} is algebraically \m{\mathcal{U},\mathcal{U}}-injective, it is algebraically \m{\mathcal{U},\mathcal{U}^+}-injective by resizing, and hence it is a retract of \m{D \to \mathcal{U}} because it is embedded into it by the identity type former, by taking the extension of the identity function along this embedding. \m{(\Leftarrow)}: If \m{D} is a retract of \m{X \to \mathcal{U}} for some given \m{X:\mathcal{U}}, then, because \m{X \to \mathcal{U}}, being an exponential power of the algebraically \m{\mathcal{U},\mathcal{U}}-injective type \m{\mathcal{U}}, is itself algebraically \m{\mathcal{U},\mathcal{U}}-injective, so is \m{D}, being a retract of this power. \end{proof} We also have that any algebraically injective \m{(n+1)}-type is a retract of an exponential power of the universe of \m{n}-types. We establish something more general first. \begin{lemma} Under propositional resizing, for any subuniverse \m{\Sigma \, A} of a universe \m{\mathcal{U}} closed under subsingletons, we have that any algebraically \m{\mathcal{U},\mathcal{U}}-injective type \m{X:\mathcal{U}} whose identity types \m{x=_X x'} all satisfy the property \m{A} is a retract of the type \m{X \to \Sigma \, A}. \end{lemma} \begin{proof} Because the first projection \m{j : \Sigma \, A \to \mathcal{U}} is an embedding by the assumption, so is the map \m{k \eqdef j \mathrel{\circ} (-) : (X \to \Sigma A) \to (X \to \mathcal{U})} by a general property of embeddings. Now consider the map \m{l : X \to (X \to \Sigma \, A)} defined by \m{x \mapsto (x' \mapsto (x=x', p \, x \, x'))}, where \m{p \, x \, x' : A(x=x')} is given by the assumption. We have that \m{k \mathrel{\circ} l = \operatorname{Id}_X} by construction. Hence \m{l} is an embedding because \m{k} and \m{\operatorname{Id}_X} are, where we are using the general fact that if \m{g \mathrel{\circ} f} and \m{g} are embeddings then so is the factor~\m{f}. But \m{X}, being algebraically \m{\mathcal{U},\mathcal{U}}-injective by assumption, is algebraically \m{\mathcal{U},(\mathcal{U}^+ \sqcup \mathcal{T})}-injective by resizing, and hence so is the exponential power \m{X \to \Sigma \, A}, and therefore we get the desired retraction by extending its identity map along~\m{l}. \end{proof} Using this, we get the following as an immediate consequence.
\begin{theorem} \df{(Characterization of algebraic injective \m{(n+1)}-types.)} If propositional resizing holds, then an \m{(n+1)}-type \m{D} in \m{\mathcal{U}} is algebraically \m{\mathcal{U},\mathcal{U}}-injective if and only if \m{D} is a retract of an exponential power of the universe of \m{n}-types in \m{\mathcal{U}}, with exponent in \m{\mathcal{U}}. \end{theorem} \begin{corollary} The algebraically injective sets in \m{\mathcal{U}} are the retracts of powersets of (arbitrary) types in \m{\mathcal{U}}, assuming propositional resizing. \end{corollary} \noindent Notice that the powerset of any type is a set, because \m{\Omega_\mathcal{U}} is a set and because sets (and more generally \m{n}-types) form an exponential ideal. \section{Injectivity in terms of algebraic injectivity in the absence of resizing} We now compare injectivity with algebraic injectivity. The following observation follows from the fact that retractions are surjections: \begin{lemma} If a type \m{D} in a universe \m{\mathcal{W}} is algebraically \m{\mathcal{U},\mathcal{V}}-injective, then it is \m{\mathcal{U},\mathcal{V}}-injective \end{lemma} \noindent The following observation follows from the fact that propositions are closed under products. \begin{lemma} Injectivity is a proposition. \end{lemma} \noindent But of course algebraic injectivity is not. From this we immediately get the following by the universal property of propositional truncation: \begin{lemma} For any type \m{D} in a universe \m{\mathcal{W}}, the truncation of the algebraic \m{\mathcal{U},\mathcal{V}}-injectivity of \m{D} gives its \m{\mathcal{U},\mathcal{V}}-injectivity. \end{lemma} In order to relate injectivity to the propositional truncation of algebraic injectivity in the other direction, we first establish some facts about injectivity that we already proved for algebraic injectivity. These facts cannot be obtained by reduction (in particular products of injectives are not necessarily injective, in the absence of choice, but exponential powers are). \begin{lemma} \label{embedding-||retract||} Any \m{\mathcal{W},\mathcal{V}}-injective type \m{D} in a universe \m{\mathcal{W}} is a retract of any type in \m{\mathcal{V}} it is embedded into, in an unspecified way. \end{lemma} \begin{proof} Given \m{Y:\mathcal{V}} with an embedding \m{j : D \to Y}, by the \m{\mathcal{W},\mathcal{V}}-injectivity of \m{D} there is an \df{unspecified} \m{r : Y \to D} with \m{r \mathrel{\circ} j \sim \operatorname{id}}. Now, if there is a \df{specified} \m{r : Y \to D} with \m{r \mathrel{\circ} j \sim \operatorname{id}} then there is a specified retraction. Therefore, by the functoriality of propositional truncation on objects applied to the previous statement, there is an unspecified retraction. \end{proof} \begin{lemma} If a type \m{D' : \mathcal{U}'} is a retract of a type \m{D : \mathcal{U}} then the \m{\mathcal{W},\mathcal{T}}-injectivity of \m{D} implies that of \m{D'}. \end{lemma} \begin{proof} Let \m{r : D \to D'} and \m{s : D' \to D} be the given section retraction pair, and, to show that \m{D'} is \m{\mathcal{W},\mathcal{T}}-injective, let an embedding \m{j : X \to Y} and a function \m{f : X \to D'} be given. By the injectivity of \m{D}, we have some unspecified extension \m{f' : Y \to D} of \m{s \mathrel{\circ} f : X \to D}. If such a designated extension is given, then we get the designated extension \m{r \mathrel{\circ} f'} of \m{f}. 
By the functoriality of propositional truncation on objects and the previous two statements, we get the required, unspecified extension. \end{proof} The universe assignments in the following are probably not very friendly, but we are aiming for maximum generality. \begin{lemma} If a type \m{D : \mathcal{W}} is \m{(\mathcal{U} \sqcup \mathcal{T}),(\mathcal{V} \sqcup \mathcal{T})}-injective, then the exponential power \m{A \to D} is \m{\mathcal{U},\mathcal{V}}-injective for any \m{A:\mathcal{T}}. \end{lemma} \begin{proof} For a given embedding \m{j : X \to Y} and a given map \m{f : X \to (A \to D)}, take the exponential transpose \m{g : X \times A \to D} of \m{f}, then extend it along the embedding \m{j \times \operatorname{id} : X \times A \to Y \times A} to get \m{g' : Y \times A \to D} and then back-transpose to get \m{f' : Y \to (A \to D)}, and check that this construction of \m{f'} does give an extension of \m{f} along \m{j}. For this, we need to know that if \m{j} is an embedding then so is \m{j \times \operatorname{id}}, but this is not hard to check. The result then follows by the functoriality-on-objects of the propositional truncation. \end{proof} \begin{lemma} If a type \m{D:\mathcal{U}} is \m{\mathcal{U},\mathcal{U}^+} injective, then it is a retract of \m{D \to \mathcal{U}} in an unspecified way. \end{lemma} \begin{proof} This is an immediate consequence of~\ref{embedding-||retract||} and the fact that the identity type former \m{\operatorname{Id}_X : X \to (X \to \mathcal{U})} is an embedding. \end{proof} With this we get an almost converse to the fact that truncated algebraic injectivity implies injectivity: the universe levels are different in the converse: \begin{lemma} If a type \m{D:\mathcal{U}} is \m{\mathcal{U},\mathcal{U}^+}-injective, then it is algebraically \m{\mathcal{U},\mathcal{U}^+}-injective in an unspecified way. \end{lemma} So, in summary, regarding the relationship between injectivity and truncated algebraic injectivity, so far we know that \begin{quote} if \m{D} is algebraically \m{\mathcal{U},\mathcal{V}}-injective in an unspecified way then it is \m{\mathcal{U},\mathcal{V}}-injective, \end{quote} and, not quite conversely, \begin{quote} if \m{D} is \m{\mathcal{U},\mathcal{U}^+}-injective then it is algebraically \m{\mathcal{U},\mathcal{U}}-injective in an unspecified way. \end{quote} Therefore, using propositional resizing, we get the following characterization of a particular case of injectivity in terms of algebraic injectivity. \begin{proposition} \label{worse} \df{(Injectivity in terms of algebraic injectivity.)} If propositional resizing holds, then a type \m{D : \mathcal{U}} is \m{\mathcal{U},\mathcal{U}^+}-injective if and only if it is algebraically \m{\mathcal{U},\mathcal{U}^+}-injective in an unspecified way. \end{proposition} \noindent We would like to do better than this. For that purpose, we consider the partial-map classifier in conjunction with flabbiness and resizing. \section{Algebraic flabbiness via the partial-map classifier} We begin with a generalization~\cite{MR3695545} of a familiar construction in \m{1}-topos theory~\cite{MR1173017}. \begin{definition} The lifting \m{\mathcal{L}_{\mathcal{T}} \, X : \mathcal{T}^+ \sqcup \mathcal{U}} of a type \m{X:\mathcal{U}} with respect to a universe \m{\mathcal{T}} is defined by \M{ \mathcal{L}_{\mathcal{T}}\, X \eqdef \Sigma (P : \mathcal{T}), (P \to X) \times \text{\m{P} is a subsingleton}. 
} \end{definition} When the universes \m{\mathcal{T}} and \m{\mathcal{U}} are the same and the last component of the triple is omitted, we have the familiar canonical correspondence \M{ (X \to \mathcal{T}) \simeq (\Sigma (P : \mathcal{T}), P \to X) } that maps \m{A : X \to \mathcal{T}} to \m{P \eqdef \Sigma \, A} and the projection \m{\Sigma \, A \to X}. If the universe~\m{\mathcal{U}} is not necessarily the same as \m{\mathcal{T}}, then the equivalence becomes \M{ (\Sigma (A : X \to \mathcal{T} \sqcup \mathcal{U}), \Sigma(T : \mathcal{T}), T \simeq \Sigma \, A) \simeq (\Sigma (P : \mathcal{T}), P \to X). } This says that although the total space \m{\Sigma \, A} doesn't live in the universe \m{\mathcal{T}}, it must have a copy in \m{\mathcal{T}}. What the third component of the triple does is to restrict the above equivalences to the subtype of those \m{A} whose total spaces \m{\Sigma \, A} are subsingletons. If we define the type of partial maps by \M{(X \rightharpoonup Y) \eqdef \Sigma (A : \mathcal{T}), (A \hookrightarrow X) \times (A \to Y), } where \m{A \hookrightarrow X} is the type of embeddings, then for any \m{X,Y : \mathcal{T}}, we have an equivalence \M{ (X \rightharpoonup Y) \simeq (X \to \mathcal{L}_{\mathcal{T}} \, Y), } so that \m{\mathcal{L}_{\mathcal{T}}} is the partial-map classifier for the universe \m{\mathcal{T}}. When the universe~\m{\mathcal{U}} is not necessarily the same as~\m{\mathcal{T}}, the lifting classifies partial maps in~\m{\mathcal{U}} whose embeddings have fibers with copies in~\m{\mathcal{T}}. This is a sort of an \m{\infty}-monad ``across universes''~\cite{TypeTopology}, and modulo providing coherence data, which we haven't done at the time of writing, but which is not needed for our purposes. We could call this a ``wild monad'', but we will refer to it as simply a monad with this warning. In order to discuss the lifting in more detail, we first characterize its equality types. We denote the projections from \m{\mathcal{L}_{\mathcal{T}} \, X} by \M{ \begin{array}{llll} \delta (P , \phi , i) & \eqdef & P & \text{(domain of definition),} \\ \upsilon (P , \phi , i) & \eqdef & \phi & \text{(value function),} \\ \sigma(P , \phi , i) & \eqdef & i & \text{(subsingleton-hood of the domain of definition).} \end{array} } For \m{l , m : \mathcal{L}_{\mathcal{T}} \, X}, define \M{ (l \backsimeq m) \eqdef \Sigma (e : \delta \, l \simeq \delta \, m), \upsilon \, l = \upsilon \, m \mathrel{\circ} e, } as indicated in the commuting triangle \M{\begin{diagram}[p=0.4em] \delta l & & \rTo^e & & \delta m \\ & \rdTo_{v l} & & \ldTo_{v m} & \\ & & X & & \end{diagram}} \begin{lemma} The canonical transformation \m{(l = m) \to (l \backsimeq m)} that sends \m{\operatorname{refl}_l} to the identity equivalence paired with \m{\operatorname{refl}_{\upsilon \, l}} is an equivalence. \end{lemma} The unit \m{\eta : X \to \mathcal{L}_\mathcal{T} X} is given by \M{\eta_X \, x = (\mathbb{1}, (p \mapsto x), i)} where \m{i} is a proof that \m{\mathbb{1}} is a proposition. \begin{lemma} The unit \m{\eta_X : X\to\mathcal{L}_\mathcal{T} X} is an embedding. \end{lemma} \begin{proof} This is easily proved using the above characterization of equality. \end{proof} \begin{lemma} The unit satisfies the unit equations for a monad. \end{lemma} \begin{proof} Using the above characterization of equality, the left and right unit laws amount to the fact that the type \m{\mathbb{1}} is the left and right unit for the operation \m{(-)\times(-)} on types. 
\end{proof} \noindent Next, \m{\mathcal{L}_\mathcal{T}} is functorial by mapping a function \m{f : X \to Y} to the function \m{\mathcal{L}_\mathcal{T} f : \mathcal{L}_\mathcal{T} X \to \mathcal{L}_\mathcal{T} Y} defined by \M{ \mathcal{L}_\mathcal{T} f (P , \phi , i) = (P , f \mathrel{\circ} \phi , i). } This commutes with identities and composition definitionally. We define the multiplication \m{\mu_X : \mathcal{L}_{\mathcal{T}} (\mathcal{L}_{\mathcal{T}}\, X) \to \mathcal{L}_{\mathcal{T}}\, X} by \M{ \begin{array}{lll} \delta (\mu (P , \phi , i)) & \eqdef & \Sigma (p : P), \delta (\phi \, p), \\ \upsilon (\mu (P , \phi , i)) & \eqdef & (p , q) \mapsto \upsilon (\phi \, p) \, q , \\ \sigma (\mu (P , \phi , i)) & \eqdef & \text{because subsingletons are closed under sums.} \\ \end{array} } \begin{lemma} The multiplication satisfies the associativity equation for a monad. \end{lemma} \begin{proof} Using the above characterization of equality, we see that this amounts to the associativity of \m{\Sigma}, which says that for \m{P:\mathcal{T}}, \m{Q: X \to \mathcal{T}}, \m{R : \Sigma \, Q \to \mathcal{T}} we have \m{(\Sigma (t : \Sigma \, Q), R \, t) \simeq (\Sigma (p : P)\, \Sigma (q : Q \, p), R(p,q))}. \end{proof} \noindent The naturality conditions for the unit and multiplication are even easier to check, and we omit the verification. We now turn to algebras. We omit the direct verification of the following. \begin{lemma} Let \m{X:\mathcal{U}} be any type. \begin{enumerate} \item A function \m{\alpha : \mathcal{L}_\mathcal{T} X \to X}, that is, a functor algebra, amounts to a family of functions \m{\bigsqcup_P : (P \to X) \to X} with \m{P : \mathcal{T}} ranging over subsingletons. We will write \m{\bigsqcup_P \phi} as \m{\bigsqcup_{p : P} \, \phi \, p}. \item The unit law for monad algebras amounts to, for any \m{x:X}, \M{ \bigsqcup_{p : \mathbb{1}} x = x, } which is equivalent to, for all subsingletons \m{P}, functions \m{\phi : P \to X} and points \m{p_0 : P}, \M{ \bigsqcup_{p : P} \phi \, p = \phi \, p_0. } Therefore a functor algebra satisfying the unit law amounts to the same thing as algebraic flabbiness data. In other words, the algebraically \m{\mathcal{T}}-flabby types are the algebras of the pointed functor \m{(\mathcal{L}_\mathcal{T},\eta)}. In particular, monad algebras are algebraically flabby. \item The associativity law for monad algebras amounts to, for any subsingleton \m{P : \mathcal{T}} and family \m{Q : P \to \mathcal{T}} of subsingletons, and any \m{\phi : \Sigma \, Q \to X}, \M{ \bigsqcup_{t : \Sigma Q} \phi \, t = \bigsqcup_{p : P} \bigsqcup_{q : Q \, p} \phi (p ,q). } \end{enumerate} \end{lemma} \noindent So the associativity law for algebras plays no role in flabbiness. But of course we can have algebraic flabbiness data that is associative, such as not only the free algebra \m{\mathcal{L}_\mathcal{T} X}, but also the following two examples that connect to the opening development of this paper on the injectivity of universes, in particular the construction~\ref{iterated}: \begin{lemma} The universe \m{\mathcal{T}} is a monad algebra of \m{\mathcal{L}_\mathcal{T}} in at least two ways, with \m{\bigsqcup = \Sigma} and \m{\bigsqcup = \Pi}. \end{lemma} We now apply these ideas to injectivity. \begin{lemma} Any algebraically \m{\mathcal{T},\mathcal{T}^+}-injective type \m{D:\mathcal{T}} is a retract of \m{\mathcal{L}_\mathcal{T} D}. \end{lemma} \begin{proof} Because the unit is an embedding, and so we can extend the identity of~\m{D} along it. 
\end{proof} \begin{theorem} \df{(Second characterization of algebraic injectives.)} With propositional resizing, a type \m{D:\mathcal{T}} is algebraically \m{\mathcal{T},\mathcal{T}}-injective if and only if it is a retract of a monad algebra of \m{\mathcal{L}_\mathcal{T}}. \end{theorem} \begin{proof} \m{(\mathbb{R}ightarrow)}: Because \m{D} is algebraically \m{\mathcal{T},\mathcal{T}}-injective, it is algebraically \m{\mathcal{T},\mathcal{T}^+}-injective by resizing, and hence it is a retract of \m{\mathcal{L}_\mathcal{T} D}. \m{(\Leftarrow)}: Algebraic injectivity is closed under retracts. \end{proof} \begin{definition} \label{omega:resizing} Now, instead of propositional resizing, we consider the propositional impredicativity of the universe \m{\mathcal{U}}, which says that the type \m{\Omega_\mathcal{U}} of propositions in \m{\mathcal{U}}, which lives in the next universe \m{\mathcal{U}^+}, has an equivalent copy in \m{\mathcal{U}}. We refer to this kind of impredicativity as \m{\Omega}-resizing. \end{definition} It is not hard to see that propositional resizing implies \m{\Omega}-resizing for all universes other than the first one~\cite{TypeTopology}, and so all the assumption of \m{\Omega}-resizing does is to account for the first universe too. \begin{lemma} Under \m{\Omega}-resizing, for any type \m{X:\mathcal{T}}, the type \m{\mathcal{L}_{\mathcal{T}} X : \mathcal{T}^+} has an equivalent copy in the universe \m{\mathcal{T}}. \end{lemma} \begin{proof} We can take \m{\Sigma (p : \Omega'), \operatorname{pr}_1(\rho \, p) \to X} where \m{\rho : \Omega' \to \Omega_\mathcal{T}} is the given equivalence. \end{proof} We apply this lifting machinery to get the following, which doesn't mention lifting in its formulation. \begin{theorem} \label{better} (Characterization of injectivity in terms of algebraic injectivity.) In the presence of \m{\Omega}-resizing, the \m{\mathcal{T},\mathcal{T}}-injectivity of a type \m{D} in a universe \m{\mathcal{T}} is equivalent to the propositional truncation of its algebraic \m{\mathcal{T},\mathcal{T}}-injectivity. \end{theorem} \begin{proof} We already know that the truncation of algebraic injectivity (trivially) gives injectivity. For the other direction, let $L$ be a resized copy of \m{\mathcal{L}_\mathcal{T} D} in the universe \m{\mathcal{T}}. Composing the unit with the equivalence given by resizing, we get an embedding \m{D \to L}, because embeddings are closed under composition and equivalences are embeddings. Hence \m{D} is a retract of \m{L} in an unspecified way by the injectivity of~\m{D}, by extending its identity. But \m{L}, being equivalent to a free algebra, is algebraically injective, and hence, being a retract of \m{L} in an unspecified way, \m{D} is algebraically injective in an unspecified way, because retracts of algebraically injectives are algebraically injective, by the functoriality of truncation on objects. \end{proof} As an immediate consequence, by reduction to the above results about algebraic injectivity, we have the following corollary. \begin{theorem} Under \m{\Omega}-resizing and propositional resizing, if a type \m{D} in a universe \m{\mathcal{T}} is \m{\mathcal{T},\mathcal{T}}-injective , then it is also \m{\mathcal{U},\mathcal{V}}-injective for any universes \m{\mathcal{U}} and \m{\mathcal{V}}. 
\end{theorem} \begin{proof} The type \m{D} is algebraically \m{\mathcal{T},\mathcal{T}}-injective in an unspecified way, and so by functoriality of truncation on objects and algebraic injective resizing, it is algebraically \m{\mathcal{U},\mathcal{V}}-injective in an unspecified way, and hence it is \m{\mathcal{U},\mathcal{V}}-injective. \end{proof} At the time of writing, we are not able to establish the converse. In particular, we don't have the analogue of~\ref{universe:independence}. \section{The equivalence of excluded middle with the (algebraic) injectivity of all pointed types} Algebraic flabbiness can also be applied to show that all pointed types are (algebraically) injective if and only if excluded middle holds, where for injectivity resizing is needed as an assumption, but for algebraic injectivity it is not. The decidability of a type \m{X} is defined to be the assertion \m{X + (X \to \mathbb{0})}, which says that we can exhibit a point of \m{X} or else tell that \m{X} is empty. The principle of excluded middle in univalent mathematics, for the universe \m{\mathcal{U}}, is taken to mean that all subsingleton types in \m{\mathcal{U}} are decidable: \M{ \operatorname{EM}_\mathcal{U} \eqdef \Pi (P : \mathcal{U}), \text{\m{P} is a subsingleton \m{\to P + (P \to \mathbb{0})}.} } As discussed in the introduction, we are not assuming or rejecting this principle, which is independent of the other axioms. Notice that, in the presence of function extensionality, this principle is a subsingleton, because products of subsingletons are subsingletons and because \m{P + (P \to \mathbb{0})} is a subsingleton for any subsingleton \m{P}. So in the following we get data out of a proposition. \begin{lemma} If excluded middle holds in the universe \m{\mathcal{U}}, then every pointed type \m{D} in any universe \m{\mathcal{W}} is algebraically \m{\mathcal{U}}-flabby. \end{lemma} \begin{proof} Let \m{d} be the given point of \m{D} and \m{f : P \to D} be a function with subsingleton domain. If we have a point \m{p : P}, then we can take \m{f \, p} as the flabbiness witness. Otherwise, if \m{P \to \mathbb{0}}, we can take \m{d} as the flabbiness witness. \end{proof} \noindent For the converse, we use the following. \begin{lemma} If the type \m{P + (P \to \mathbb{0}) + \mathbb{1}} is algebraically \m{\mathcal{W}}-flabby for a given subsingleton \m{P} in a universe \m{\mathcal{W}}, then \m{P} is decidable. \end{lemma} \begin{proof} Denote by \m{D} the type \m{P + (P \to \mathbb{0}) + \mathbb{1}} and let \m{f : P + (P \to \mathbb{0}) \to D} be the inclusion. Because \m{P + (P \to \mathbb{0})} is a subsingleton, the algebraic flabbiness of \m{D} gives \m{d : D} with \m{d = f \, z} for all \m{z : P + (P \to \mathbb{0})}. Now, by definition of binary sum, \m{d} must be in one of the three components of the sum that defines~\m{D}. If it were in the third component, namely \m{\mathbb{1}}, then \m{P} couldn't hold, because if it did we would have \m{p:P} and hence, omitting the inclusions into sums, and considering \m{z=p}, we would have, \m{d = f p = p}, because \m{f} is the inclusion, which is not in the \m{\mathbb{1}} component. But also \m{P \to \mathbb{0}} couldn't hold, because if it did we would have \m{\phi:P \to \mathbb{0}} and hence, again omitting the inclusion, and considering \m{z=\phi}, we would have \m{d = f \, \phi = \phi}, which again is not in the \m{\mathbb{1}} component. 
But it is impossible for both \m{P} and \m{P \to \mathbb{0}} to fail, because this would mean that we would have functions \m{P \to \mathbb{0}} (the failure of \m{P}) and \m{(P \to \mathbb{0}) \to \mathbb{0}} (the failure of \m{P \to \mathbb{0}}), and so we could apply the second function to the first to get a point of the empty type, which is not available. Therefore \m{d} can't be in the third component, and so it must be in the first or the second, which means that \m{P} is decidable. \end{proof} \noindent From this we immediately conclude the following: \begin{lemma} If all pointed types in a universe \m{\mathcal{W}} are algebraically \m{\mathcal{W}}-flabby, then excluded middle holds in~\m{\mathcal{W}}. \end{lemma} \noindent And then we have the same situation for algebraically injective types, by reduction to algebraic flabbiness: \begin{lemma} If excluded middle holds in the universe \m{\mathcal{U} \sqcup \mathcal{V}}, then any pointed type \m{D} in any universe \m{\mathcal{W}} is algebraically \m{\mathcal{U},\mathcal{V}}-injective. \end{lemma} \noindent Putting this together with some universe specializations, we have the following construction. \begin{theorem} All pointed types in a universe \m{\mathcal{U}} are algebraically \m{\mathcal{U},\mathcal{U}}-injective if and only if excluded middle holds in~\m{\mathcal{U}}. \end{theorem} \noindent And we have a similar situation with injective types. \begin{lemma} If excluded middle holds, then every inhabited type of any universe is injective with respect to any two universes. \end{lemma} \begin{proof} Because excluded middle gives algebraic injectivity, which in turn gives injectivity. \end{proof} \noindent Without resizing, we have the following. \begin{lemma} If every inhabited type \m{D:\mathcal{W}} is \m{\mathcal{W},\mathcal{W}^+}-injective, then excluded middle holds in the universe \m{\mathcal{W}}. \end{lemma} \begin{proof} Given a proposition \m{P}, we have that the type \m{D \eqdef P + (P \to \mathbb{0}) + \mathbb{1}_{\mathcal{W}}} is injective by the assumption. Hence it is algebraically injective in an unspecified way by Proposition~\ref{worse}. And so it is algebraically flabby in an unspecified way. By the lemma, \m{P} is decidable in an unspecified way, but then it is decidable because the decidability of a proposition is a proposition. \end{proof} \noindent With resizing we can do better: \begin{lemma} Under \m{\Omega}-resizing, if every inhabited type in a universe \m{\mathcal{U}} is \m{\mathcal{U},\mathcal{U}}-injective, then excluded middle holds in \m{\mathcal{U}}. \end{lemma} \begin{proof} Given a proposition \m{P}, we have that the type \m{D \eqdef P + (P \to \mathbb{0}) + \mathbb{1}_{\mathcal{U}}} is injective by the assumption. Hence it is algebraically injective in an unspecified way by Theorem~\ref{better}. And so it is algebraically flabby in an unspecified way. By the lemma, \m{P} is decidable in an unspecified way, and hence decidable. \end{proof} \begin{theorem} Under \m{\Omega}-resizing, all inhabited types in a universe \m{\mathcal{U}} are \m{\mathcal{U},\mathcal{U}}-injective if and only if excluded middle holds in~\m{\mathcal{U}}. \end{theorem} \noindent It would be interesting to get rid of the resizing assumption, which, as we have seen, is not needed for the equivalence of the algebraic injectivity of all pointed types with excluded middle. \end{document}
\begin{document} \begin{abstract} The aim of this paper is to clarify and generalize techniques of \cite{Sh1} (see also \cite{Pr1} and \cite{Pr2}). Roughly speaking, we prove that for local Fano contractions the existence of complements can be reduced to the existence of complements for lower dimensional projective Fano varieties. \end{abstract} \title{The first main theorem on complements: from global to local} \section*{Introduction} The aim of this paper is to clarify and generalize techniques of \cite[Sect.~7]{Sh1} (see also \cite{Pr1}, \cite{Pr2}). We prove that for local Fano contractions the existence of complements can be reduced to the existence of complements for lower dimensional projective Fano varieties. The main conjecture on $n$-complements (Conjecture~\cite[1.3]{Sh1}) states that they are bounded in each given dimension. \par Roughly speaking, an $n$-complement is a ``good'' member of the multiple anti-log canonical linear system. A multitude of examples support the conjecture \cite{Abe}, \cite{Is}, \cite{IP}, \cite{KeM}, \cite{Ko-SGT}, \cite{MP}, \cite{Pr2}, \cite{Sh}, \cite{Sh1}. As was noticed in \cite{Sh}, complements have good structures which are related to restrictions of linear systems and Kawamata-Viehweg vanishing. The latter essentially explains the tricky structure of $n$-complement boundaries (cf. inequality in \eref{def-coplements-n} below). In the main conjecture we consider log pairs $(X/Z,D)$ consisting of Fano contractions $X/Z$ and boundaries $D$. To use induction in a proof of the conjecture we need to divide log pairs and their complements into two types with respect to the dimension of the base $Z$, namely, local whenever $\dim(Z)>0$, and global otherwise. Equivalently, in the global case $Z$ is a point and $X$ is a projective log Fano. We prove, for local log Fano contractions, the existence of an $n$-complement, where $n\in{\mathbb N}N$ and the set ${\mathbb N}N$ comes from lower dimensional projective log Fano varieties. This is called the first main theorem on complements (see Theorem~\ref{result-Fano-1} below): from global to local. The proof uses the LogMMP, so it is conditional in dimensions $n=\dim(X)\ge 4$ and unconditional for $n\le 3$. The core idea is to extend an $n$-complement from a central fiber of a good modification for $(X/Z,D)$ (cf. the proof of Theorem~5.6 and Example~5.2 in \cite{Sh}). Moreover, such an approach allows us to control some numerical invariants of complements: e.g., indices, their type (exceptional or non-exceptional), and their regularity (cf. \cite[Sect. 7]{Sh1}). The second theorem, from local to global, will be discussed in the next paper. Its prototype is the global case in \cite{Sh1} (cf. also tigers in \cite{KeM}) that uses local and inductive complements \cite[Sect. 2]{Sh1}. An elementary but really generic case of the second theorem is Theorem~\ref{result-Fano-2}. It is a modification of the first one. Other cases show that the main difficulty of the Borisov-Alekseev conjecture (see \ref{conjecture-boundedness-log-Fano}) concerns $\varepsilon_d$-log terminal log Fano varieties of dimension $d$, namely, that they are bounded for some $\varepsilon_d>0$ depending on the dimension $d$. For instance, in dimension $2$, $\varepsilon_2=6/7$. The paper is organized as follows. Section~1 is auxiliary. In Section~2 we introduce the very important notion of exceptional pairs. In Section~3 we prove the main result (Theorem~\ref{result-Fano-1}). Some corollaries and applications are discussed in Section~4.
Finally, in Section~5 we present the global version of Theorem~\ref{result-Fano-1}. \section{Preliminaries} \subsection*{Notation}\quad \newline\noindent \begin{tabular}{lp{12cm}} ${\EuScript{K}}(X)$& the function field of the variety $X$;\\ $D_1\approx D_2$& prime divisors $D_1$, $D_2$ give the same discrete valuation of ${\EuScript{K}}(X)$;\\ $K_X$& the canonical (Weil) divisor; we will frequently write $K$ if no confusion is likely. \\ \end{tabular} \par \noindent All varieties are assumed to be algebraic and defined over ${\mathbb C}$, the field of complex numbers. A \textit{contraction} (or \textit{extraction}, if we start with $X$ instead of $Y$) is a projective morphism of normal varieties $f\colon Y\to X$ such that $f_*{\mathcal O}_Y={\mathcal O}_X$. A \textit{blow-up} is a birational extraction. We will use the standard abbreviations and notation of the Minimal Model Program, such as MMP, lc, klt, plt, $\equiv$, $\sim$, $\down{\cdot}$, $\up{\cdot}$, $\fr{\cdot}$, ${\overline{NE}}(X/Z)$, $\dis{E,D}$, $\discr{X,D}$, $\totaldiscr{X,D}$; see \cite{KMM}, \cite{Ut}, \cite{Ko}. Everywhere below, unless we specify otherwise, a \textit{boundary} means a ${\mathbb Q}$-boundary, i.e. a ${\mathbb Q}$-Weil divisor $D=\sum d_iD_i$ such that $0\le d_i\le 1$ for all $i$. A \textit{log variety} (\textit{log pair}) $(X/Z\ni o,D)$ is, by definition, a contraction $X\to Z$, considered locally near the fiber over $o\in Z$, together with a boundary $D$ on $X$. By the dimension of a log pair $(X/Z\ni o,D)$ we mean the dimension of the total space $X$. \par \begin{definition}[\cite{Sh}] \label{definition-complements} Let $(X/Z,D)$ be a log variety. Then \para \label{def-coplements-num} a \textit{numerical complement} is an ${\mathbb R}$-boundary $D'\ge D$ such that $K+D'$ is lc and numerically trivial; \para \label{def-coplements-R} an ${\mathbb R}$-\textit{complement} is an ${\mathbb R}$-boundary $D'\ge D$ such that $K+D'$ is lc and ${\mathbb R}$-linearly trivial; \para \label{def-coplements-Q} a ${\mathbb Q}$-\textit{complement} is a ${\mathbb Q}$-boundary $D'\ge D$ such that $K+D'$ is lc and ${\mathbb Q}$-linearly trivial. \para \label{def-coplements-n} Write $D=S+B$, where $S=\down{D}$, $B=\fr{D}$. Then an $n$-\textit{complement} is a ${\mathbb Q}$-boundary $D^+$ such that $K+D^+$ is lc, $n(K+D^+)\sim 0$ and $nD^+\ge nS+\down{(n+1)B}$. \end{definition} Note that an ${\mathbb R}$-complement can be considered as an $n$-complement for $n=\infty$ because the limit of the inequality in \eref{def-coplements-n} for $n\to \infty$ gives us $D'\ge D$. All these definitions can also be given in the more general situation when $D$ is an ${\mathbb R}$-subboundary (i.e. an ${\mathbb R}$-divisor $D=\sum d_iD_i$ with $d_i\le 1$ for all $i$). Obviously, there are the following implications: \begin{quote} $\exists$ ${\mathbb Q}$-complement $\Longrightarrow$ $\exists$ ${\mathbb R}$-complement $\Longrightarrow$ $\exists$ numerical complement. \end{quote} The simple example below shows that an $n$-complement is not necessarily a ${\mathbb Q}$-complement (even not a numerical complement). \begin{example} Let $P_1$, $P_2$, $P_3$ be three distinct points on ${\mathbb P}^1$. Put $D:=P_1+(\frac{1}{2}+\varepsilon)P_2+(\frac{1}{2}-\varepsilon)P_3$ and $D':=P_1+\frac{1}{2}P_2+\frac{1}{2}P_3$, where $0<\varepsilon\ll 1$. Then $K+D'$ is a $2$-complement of the log divisor $K+D$. However, $D'\ge D$ does not hold, i.e. $K+D'$ is not a ${\mathbb Q}$-complement of the log divisor $K+D$.
\end{example} Under additional restrictions on the coefficients of $D$ (for example, if $D$ is \textit{standard}, see \eref{definition-NNN}) we have $D^+\ge D$ in \eref{def-coplements-n}; see \cite[2.7]{Sh1} or \cite{Pr3}. Therefore $D^+$ is a ${\mathbb Q}$-complement in this case. The question of the existence of complements naturally arises for varieties of Fano or Calabi-Yau type, i.e. for varieties with nef anti-log canonical divisor. However, the nef property of $-(K+D)$ does not guarantee the existence of complements \cite[1.1]{Sh1}. \begin{proposition}[{\cite[5.5]{Sh}}] Let $(X/Z\ni o,D)$ be a log variety. Assume that $K+D$ is lc and $-(K+D)$ is (semi)ample over $Z$. Then near the fiber over $o$ there exists a ${\mathbb Q}$-complement of the log divisor $K+D$. \end{proposition} \parag \label{definition-NNN} Fix a subset ${\Phi}\subset [0,1]$. We will write simply $D\in{\Phi}$ if all the coefficients of $D$ are contained in ${\Phi}$. For example, we can consider ${\Phi}={\MMM}_{\mt{\mathbf{sm}}}:=\{1-1/m \mid m\in{\mathbb N}\cup\{\infty\}\}$ (this is called the case of \textit{standard coefficients}). However, some of our statements and conjectures can be formulated for another choice of ${\Phi}$ (see \eref{eq-def-N-M} below). \parag \label{assumptions-NNN} Let $(X,D)$ be a projective log variety such that: \para $K_X+D$ is lc; \para $D\in{\Phi}$; \para $-(K_X+D)$ is nef and big; \para there exists some ${\mathbb Q}$-complement of $K_X+D$ (this condition holds if $-(K_X+D)$ is semi-ample; for example, by \cite[3-1-2]{KMM} this holds if $K_X+D$ is klt). \par Such a pair we call a \textit{log Fano variety}. \parag Notation as above. Define the minimal complementary number by \setcounter{equation}{\value{subsubsection}} \begin{equation} \label{eq-compl-def} \compl{X,D}:=\min\{m\mid K_X+D\ \text{is $m$-complementary}\}. \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent and consider the set \begin{multline*} {\mathbb N}N_d({\Phi}):=\{m\in{\mathbb N}\mid\exists\ \text{a log Fano variety} \ (X,D)\ \text{of dimension $d$}\\ \text{such that} \ D\in{\Phi}\ \text{and}\ \compl{X,D}=m\}. \end{multline*} For example, ${\mathbb N}N_1([0,1])=\{1, 2, 3, 4, 6\}$ (see \cite{Sh}). Taking products with ${\mathbb P}^1$, one can show that ${\mathbb N}N_{d-1}({\Phi})\subset{\mathbb N}N_d({\Phi})$. For inductive purposes we put ${\mathbb N}N_0([0,1])=\{1,2\}$. By induction we define \setcounter{equation}{\value{subsubsection}} \begin{multline} \label{eq-def-N-M} {\MMM}_{\mt{\mathbf{m}}}^1:={\MMM}_{\mt{\mathbf{sm}}},\quad N_1:=\max{\mathbb N}N_1({\MMM}_{\mt{\mathbf{m}}}^1),\\ {\MMM}_{\mt{\mathbf{m}}}^d:={\MMM}_{\mt{\mathbf{sm}}}\cup\left[1-\frac{1}{N_{d-1}+1},1\right], \quad N_d=\max\left( \bigcup_{k=1}^d{\mathbb N}N_d({\MMM}_{\mt{\mathbf{m}}}^k) \right). \end{multline} \setcounter{subsubsection}{\value{equation}}\noindent We do not exclude the case $N_d=\infty$ (and then ${\MMM}_{\mt{\mathbf{m}}}^d:={\MMM}_{\mt{\mathbf{sm}}}$), however, we hope that $N_d<\infty$ (see \ref{conjecture-boundedness-complements} below). By \cite[5.2]{Sh} we have \[ N_1=6,\quad {\MMM}_{\mt{\mathbf{m}}}^2={\MMM}_{\mt{\mathbf{sm}}}\cup [6/7,1]. \] It was proved in \cite{Sh1} that $N_2$ is finite. By construction, $N_d\ge N_{d'}$ and ${\MMM}_{\mt{\mathbf{m}}}^d\subset{\MMM}_{\mt{\mathbf{m}}}^{d'}$ if $d\ge d'$. \begin{lemma}[cf.
{\cite[2.7]{Sh1}}] \label{ge} If $\alpha\in {\MMM}_{\mt{\mathbf{m}}}^d$, then for any $n\le N_{d-1}$ we have \[ \down{(n+1)\alpha}\ge n\alpha. \] \end{lemma} \begin{proof} If $\alpha\in{\MMM}_{\mt{\mathbf{sm}}}$, then $\alpha=1-1/m$ for some $m\in{\mathbb N}$. In this case we write $n\alpha=q+k/m$, where $q=\down{n\alpha}$ and $k/m=\fr{n\alpha}$, $k\in{\mathbb Z}$, $0\le k\le m-1$. Then \[ \down{(n+1)\alpha}=\down{q+k/m+1-1/m}=\left\{ \begin{array}{ll} q\quad&\text{if}\ k=0,\\ q+1\quad&\text{otherwise.} \end{array} \right. \] In both cases $\down{(n+1)\alpha}\ge q+k/m=n\alpha$. Assume that $\alpha\notin{\MMM}_{\mt{\mathbf{sm}}}$. Then $\alpha>1-\frac{1}{N_{d-1}+1}$ and \[ \down{(n+1)\alpha}\ge \down{n+1-\frac{n+1}{N_{d-1}+1}}\ge n\ge n\alpha. \] \end{proof} \begin{corollary} \label{ge-1} Let $(X,D)$ be a log pair such that $D\in {\MMM}_{\mt{\mathbf{m}}}^d$ and let $D^+$ be an $n$-complement with $n\le N_{d-1}$. Then $D^+\ge D$. \end{corollary} \begin{lemma}[cf. {\cite[Lemma~4.2]{Sh}}] \label{coeff-diff} Let $(X,D)$ be an lc log pair, let $S:=\down{D}$ and $B:=\fr{D}$. Assume that $K+S$ is plt and $D\in {\MMM}_{\mt{\mathbf{m}}}^{d}$ for some $d$ (resp. $D\in {\MMM}_{\mt{\mathbf{sm}}}$). Then $\Diff{S}{B}\in {\MMM}_{\mt{\mathbf{m}}}^{d}$ (resp. $\Diff{S}{B}\in {\MMM}_{\mt{\mathbf{sm}}}$). \end{lemma} \begin{proof} Write $B=\sum b_jB_j$, $0<b_j<1$. Let $\alpha$ be a coefficient of $\Diff{S}{B}$. Then by \cite[3.10]{Sh}, \setcounter{equation}{\value{subsubsection}} \begin{equation} \label{eq-alpha} \alpha=\frac{m-1}{m}+\sum_j\frac{b_jn_{j}}{m}, \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent where $m\in{\mathbb N}$, $n_{j}\in{\mathbb N}\cup\{0\}$. Since $K_{S}+\Diff{S}{B}$ is lc (see \cite[17.7]{Ut}), $\alpha\le 1$ and we may assume that $\alpha<1$. Using $b_j\ge 1/2$ one can easily show that in \eref{eq-alpha} $\sum n_j\le 1$ (see \cite[Lemma~4.2]{Sh}). If $n_j=0$ for all $j$ in \eref{eq-alpha}, then, obviously, $\alpha\in{\MMM}_{\mt{\mathbf{sm}}}$. Otherwise $n_{j_0}=1$ for some $j_0$ and $n_j=0$ for $j\neq j_0$ in \eref{eq-alpha}. Then $\alpha=\frac{m-1+b_{j_0}}{m}$. If $b_{j_0}\in{\MMM}_{\mt{\mathbf{sm}}}$, then $b_{j_0}=1-1/n$, $n\in{\mathbb N}$ and $\alpha=\frac{mn-1}{mn}\in{\MMM}_{\mt{\mathbf{sm}}}$. If $b_{j_0}\ge 1-\frac{1}{N_{d-1}+1}$, then $\alpha\ge b_{j_0}\ge 1-\frac{1}{N_{d-1}+1}$. In both cases $\alpha\in{\MMM}_{\mt{\mathbf{m}}}^d$. \end{proof} \begin{conjecture} \label{conjecture-boundedness-complements} Notation as in \eref{assumptions-NNN}. Then ${\mathbb N}N_d({\Phi})$ is finite. \end{conjecture} The proof of Conjecture~\ref{conjecture-boundedness-complements} in dimension two given in \cite{Sh1} relies heavily upon boundedness results for log del Pezzo surfaces \cite{A}; see also \cite{N3}. In arbitrary dimension there is the following \begin{conjecture} \label{conjecture-boundedness-log-Fano} Fix $\varepsilon>0$. Let $(X,D)$ be a normal projective log variety such that: \para $K+D$ is ${\mathbb Q}$-Cartier; \para $\totaldiscr{X,D}>-1+\varepsilon$; \para $-(K_X+D)$ is nef and big. \par Then $(X,\Supp{D})$ belongs to a finite number of algebraic families. \end{conjecture} This conjecture is known to be true for $\dim(X)=2$. For $\dim(X)\ge 3$ there are only particular results in this direction \cite{B}, \cite{BB}. A new approach to the proof of \ref{conjecture-boundedness-log-Fano} was proposed in \cite[Sect. 9]{KeM}.
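For the reader's convenience we record a small numerical illustration of the arithmetic behind Lemma~\ref{ge}; it is included only as a sanity check and is not used in the sequel. Take $d=2$, so that $N_{d-1}=N_1=6$. For the standard coefficient $\alpha=1-1/3=2/3$ and $n=4\le 6$ we get
\[
n\alpha=\frac{8}{3}=2+\frac{2}{3},\quad \down{(n+1)\alpha}=\down{10/3}=3\ge\frac{8}{3}=n\alpha,
\]
which is the case $k\ne 0$ in the proof. For the non-standard coefficient $\alpha=13/15\in[6/7,1]\subset{\MMM}_{\mt{\mathbf{m}}}^2$ and $n=6$ we get
\[
\down{(n+1)\alpha}=\down{91/15}=6\ge\frac{78}{15}=n\alpha,
\]
in accordance with the second case of the proof.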
\begin{conjecture}[Inductive Conjecture] \label{conjecture-inductive-complements} Let $(X,D)$ be such as in \eref{assumptions-NNN} (in particular $D\in{\Phi}$). Assume that there exists a ${\mathbb Q}$-complement of $K+D$ which is not klt. Then $K+D$ has an $n$-complement for $n\in{\mathbb N}N_{d-1}({\Phi})$. Moreover, this new complement also can be taken non-klt. \end{conjecture} We may expect Conjecture~\ref{conjecture-inductive-complements} for ${\Phi}={\MMM}_{\mt{\mathbf{sm}}}$ or ${\Phi}={\MMM}_{\mt{\mathbf{m}}}^d$, where $d=\dim(X)$. In general, it fails \cite[2.4]{Sh1}, \cite[8.1.2]{Pr3}. At the moment, this conjecture is proved for $\dim(X)=2$ and ${\Phi}={\MMM}_{\mt{\mathbf{m}}}^2$, \cite{Sh1} (even in a stronger form). \section{Exceptionality} \begin{definition} We say that a contraction $f\colon X\to Z$ is of \textit{local type}, if $\dim(Z)>0$. Otherwise (i.e. $Z$ is a point) we say that the contraction $f\colon X\to Z$ is of \textit{global type}. \end{definition} Thus a contraction of local type can be either birational or of fiber type. In this case we are interested in the structure of $f\colon X\to Z$ near the fixed fiber $f^{-1}(o)$, $o\in Z$ and usually we assume that $X$ is a sufficiently small neighborhood of the fiber over $o$. \begin{definition}[{\cite[Sect. 5]{Sh}}, {\cite[1.5]{Sh1}}] \label{def-exc-loc} Let $(X/Z\ni o,\Delta)$ be a log variety of local type. Assume that $K+\Delta$ has at least one ${\mathbb Q}$-complement near the fiber over $o$. Then $(X/Z\ni o,\Delta)$ is said to be \textit{exceptional} if for any ${\mathbb Q}$-complement $K+\Delta^+$ of $K+\Delta$ near the fiber over $o$ there exists at most one (prime) divisor $E$ of ${\EuScript{K}}(X)$ with $\dis{E,\Delta^+}=-1$. \end{definition} Clearly, to be exceptional depends on the choice of the base point $o\in Z$. As an immediate consequence of the definition we have \begin{lemma} \label{lemma-crepant-ex} Let $(X/Z\ni o,\Delta)$ and $(X'/Z\ni o,\Delta')$ be log varieties (of local or global type) and let $f\colon X\to X'$ be a contraction over $Z$. Assume that $K_{X'}+\Delta'$ is ${\mathbb Q}$-Cartier and $\Delta$ is a crepant pull back of $\Delta'$ (i.e. $f^*(K_{X'}+\Delta')=K_{X}+\Delta$ and $f_*\Delta=\Delta'$). Then $(X/Z\ni o,\Delta)$ is exceptional if and only if $(X'/Z\ni o,\Delta')$ is. \end{lemma} \begin{proof} Follows by \cite[3.10]{Ko}. \end{proof} \begin{proposition} \label{two-divisors} Let $(X/Z\ni o,\Delta)$ be a non-exceptional log variety of local type and let $D$, $D'$ be ${\mathbb Q}$-complements such that both $K+D$ and $K+D'$ are not klt. Let $S$ and $S'$ are divisors of ${\EuScript{K}}(X)$ such that $\dis{S,D}=-1$ and $\dis{S',D'}=-1$. Assume that $S\not\approx S'$. Then there exists a ${\mathbb Q}$-complement $G$ of $K+\Delta$ such that $\dis{S,G}=\dis{E,G}=-1$ for some $E\not\approx S$. \end{proposition} \begin{proof}[Proof (cf. {\cite[2.7]{MP}}, {\cite[2.4]{IP}})] Note that $D'-D$ is ${\mathbb Q}$-Cartier and numerically trivial over $Z$. Put $D(\alpha):=D+\alpha(D'-D)$. Then $D(0)=D$, $D(1)=D'$ and $K+D(\alpha)$ is a ${\mathbb Q}$-complement for all $0\le\alpha\le 1$ (by convexity of the lc property see \cite[1.4.1]{Sh} or \cite[2.17.1]{Ut}). Fix an effective Cartier divisor $L$ on $Z$ (passing through $o$) and put $F:=f^*L$. For $0\le\alpha\le 1$, define a function \[ \varphisigma(\alpha):=\sup\{\beta\mid K+D(\alpha)+\beta F\quad \text{is lc}\}, \] and put $T(\alpha):=D(\alpha)+\varphisigma(\alpha)F$. 
Fix some log resolution of $(X,D+D'+F)$ and let $\sum E_i$ be the union of the exceptional divisor and the proper transform of $\Supp{D+D'+F}$. Then $\varphisigma(\alpha)$ can be computed as \[ \varphisigma(\alpha)=\max_{E_i}\{\beta\mid \dis{E_i,D(\alpha)+\beta F}\ge -1\}. \] (see e.~g. \cite[0-2-12]{KMM}). In particular, $\varphisigma(\alpha)\in{\mathbb Q}$. Hence $K+T(\alpha)$ is a ${\mathbb Q}$-complement. By the above, $\beta=\varphisigma(\alpha)$ can be computed from linear inequalities $\dis{E_i, D(\alpha)+\beta F}\ge -1$, where $E_i$ runs through a finite number of prime divisors $E_i$. Therefore the function $\varphisigma(\alpha)$ is piecewise linear and continuous in $\alpha$ and so are the coefficients of $T(\alpha)$. By construction, $K+T(\alpha)$ is not klt for all $0\le\alpha\le 1$. We claim that $\dis{S,T(0)}=-1$. Indeed, $T(0)=D+\varphisigma(0)F\ge D$. Thus $\dis{S,T(0)}\le \dis{S,D}=-1$. Since $K+T(0)$ is lc, $\dis{S,T(0)}=-1$. Now, take \[ \alpha_0:=\sup\{\alpha\mid \dis{S,T(\alpha)}=-1\}. \] By the above discussions $\alpha_0$ is rational (and $\dis{S,T(\alpha_0)}=-1$). If $\alpha_0=1$, then we put $G:=T(1)$ and $E=S'$. Otherwise, for any $\alpha>\alpha_0$, $\dis{S,T(\alpha)}>-1$. Hence there is a divisor $E\not\approx S$ of ${\EuScript{K}}(X)$ such that $\dis{E,T(\alpha)}=-1$. Again we can take $E$ to be a component of $\sum E_i$. Thus $E$ does not depend on $\alpha$ if $0<\alpha-\alpha_0\ll 1$. Obviously, $\dis{E,T(\alpha_0)}=-1$ and we can put $G:=T(\alpha_0)$. \end{proof} \begin{corollary} \label{two-divisors-non-except} Let $(X/Z\ni o,\Delta)$ be a non-exceptional log variety of local type, let $D\ge \Delta$ be a ${\mathbb Q}$-complement such that $K+D$ is not klt and let $S$ be a divisor of ${\EuScript{K}}(X)$ such that $\dis{S,D}=-1$. Then there is a ${\mathbb Q}$-complement $G\ge \Delta$ such that $\dis{S,G}=\dis{E,G}=-1$ for some divisor $E\not\approx S$ of ${\EuScript{K}}(X)$. \end{corollary} \begin{proof} Since $(X/Z\ni o,\Delta)$ is non-exceptional, there is a ${\mathbb Q}$-complement $D'\ge \Delta$ such that $\dis{S',D'}=-1$ for some $S'\not\approx S$. Then one can apply Proposition~\ref{two-divisors}. \end{proof} \begin{corollary} \label{two-divisors-except} Let $(X/Z\ni o,\Delta)$ be an exceptional log variety of local type. Then there exists a uniquely defined divisor $S$ of ${\EuScript{K}}(X)$ such that for any ${\mathbb Q}$-complement $D$ one has $\dis{E,D}>-1$ whenever $E\not\approx S$ in ${\EuScript{K}}(X)$. \end{corollary} We call the divisor $S$ defined in \ref{two-divisors-except} the \textit{central divisor} of an exceptional log pair $(X/Z\ni o,\Delta)$. \begin{corollary} \label{dimension} Let $(X/Z\ni o,\Delta)$ be a exceptional log variety of local type, let $S$ be the central divisor. Then the center of $S$ on $X$ is contained in the fiber over $o$. \end{corollary} \begin{proof} Let $K+D$ be a ${\mathbb Q}$-complement such that $\dis{S,D}=-1$ and let $H$ be a general hyperplane section of $Z$ passing through $o$. Since $f^*H$ does not contain the center of $S$, $\operatorname{mult}_Sf^*H=0$ and $\dis{S,D}=\dis{S,D+cf^*H}=-1$ for all $c$. Take $c$ so that $K+D+cf^*H$ is maximally lc. Then, as in the proof of Proposition~\ref{two-divisors}, $\dis{E,D+cf^*H}=-1$ for some $E\not\approx S$, a contradiction. \end{proof} \begin{example} \label{ex-sing} Consider a log canonical singularity $X\ni o$ (i.e. $X=Z$ and $\Delta=0$). 
Then it is exceptional if and only if for any boundary $B$ on $X$ such that $K+B$ is lc there exists at most one divisor $E$ of ${\EuScript{K}}(X)$ with $\dis{E,B}=-1$. For example, a two-dimensional log terminal singularity is exceptional if and only if it is of type ${\mathbb E}_6$, ${\mathbb E}_7$ or ${\mathbb E}_8$ (see \cite[5.2.3]{Sh}, \cite{MP}). \end{example} In the global case Definition~\ref{def-exc-loc} has a different form: \begin{definition} Let $(X,\Delta)$ be a log variety of global type. Assume that $K+\Delta$ has at least one $n$-complement. Then $(X,\Delta)$ is said to be \textit{exceptional} if any ${\mathbb Q}$-complement $K+\Delta^+$ of $K+\Delta$ is klt (i.e. $\dis{E,\Delta^+}>-1$ for any divisor $E$ of ${\EuScript{K}}(X)$). \end{definition} \begin{examples} \label{ex-global} (i) Let $X={\mathbb P}^1$, $Z=\mt{pt}$ and let $\Delta=\sum_{i=1}^r (1-1/m_i)P_i$, $m_i\in{\mathbb N}$, where $P_1$,\dots, $P_r$ are different points. The divisor $-(K+\Delta)$ is nef if and only if $\sum_{i=1}^r (1-1/m_i)\le 2$. In this case, the collection $(m_1,\dots,m_r)$ gives us an exceptional pair if and only if it is (up to permutations) one of the following: \[ \begin{array}{lllllllllll} &&&E_6:&(2,3,3)&&E_7:&(2,3,4)&&E_8:&(2,3,5)\\ &&&\widetilde E_6:&(3,3,3)& \quad&\widetilde E_7:&(2,4,4)&\quad&\widetilde E_8:&(2,3,6)\\ &&&\widetilde D_4:&(2,2,2,2)&&&&&&\\ \end{array} \] \par (ii) Let $X={\mathbb P}^d$, $Z=\mt{pt}$ and let $\Delta=\sum_{i=1}^{d+2} (1-1/m_i)\Delta_i$, $m_i\in{\mathbb N}$, where $\Delta_1$,\dots, $\Delta_{d+2}$ are hyperplanes in ${\mathbb P}^d$. The log divisor $-(K+\Delta)$ is nef if and only if $\sum 1/m_i\le 1$. If $(X,\Delta)$ is exceptional, then $-(K+\Delta_j+\sum_{i\ne j}(1-1/m_i)\Delta_i)$ is not nef for all $j$. Hence $\sum_{i\ne j} 1/m_i>1$. In this situation it is easy to prove the existence of a constant $\operatorname{Const}(d)$ such that $m_j\le \operatorname{Const}(d)$ for all $j$ (cf. \cite[8.16]{Ko}). Therefore there are only a finite number of possibilities for exceptional collections $(m_1,\dots,m_{d+2})$. \end{examples} Examples above and many other facts (see \cite{Sh1}, \cite{MP}, \cite{IP}, \cite{Pr2}, \cite{Is}) show that in general we may expect the following principle: \begin{itemize} \item non-exceptional pairs have good properties of $|-m(K+D)|$ for some small $m$; \item exceptional pairs can be classified. \end{itemize} \section{Fano contractions} In this section we prove Theorem~\ref{result-Fano-1} below. The two dimensional version of this result was proved by the second author in \cite{Sh}. Later it was generalized in \cite{Sh1}, \cite{Pr2}. \begin{theorem}[Local case] \label{result-Fano-1} Let ${\Phi}:={\MMM}_{\mt{\mathbf{m}}}^d$ (or ${\Phi}:={\MMM}_{\mt{\mathbf{sm}}}$) and let $(X/Z\ni o,D)$ be a $d$-dimensional log variety of local type such that \para $D\in{\Phi}$; \para $K+D$ is klt; \para $-(K+D)$ is nef and big over $Z$. \par Let $f\colon X\to Z$ be the structure morphism. Assume LogMMP in dimension $d$. Then there exists a non-klt $n$-complement of $K+D$ near $f^{-1}(o)$ for $n\in{\mathbb N}N_{d-1}({\Phi})$. Moreover, if $(X/Z\ni o,D)$ is non-exceptional and Conjecture~\ref{conjecture-inductive-complements} holds in dimensions $d'\le d-1$ for ${\Phi}={\MMM}_{\mt{\mathbf{m}}}^{d}$ (resp. ${\Phi}={\MMM}_{\mt{\mathbf{sm}}}$), then $K+D$ is $n$-complementary near $f^{-1}(o)$ for $n\in{\mathbb N}N_{d-2}({\Phi})$. This complement also can be taken non-exceptional. \end{theorem} In the non-exceptional case we expect more precise results. 
In this case the existence of complements should depend on the topological structure of the essential exceptional divisor (see \cite[Sect.~7]{Sh1}). \begin{example} \label{ex-RDP} Let $(Z\ni o)$ be a two-dimensional DuVal (RDP) singularity, let $D=0$, and let $f=\mt{id}$. There is a non-klt $n$-complement of $K_Z$ for some $n\in{\mathbb N}N_1({\MMM}_{\mt{\mathbf{sm}}})=\{1,2,3,4,6\}$ (see \cite[5.2.3]{Sh}). The singularity is non-exceptional if it is of type $A_n$ or $D_n$. In these cases there is a non-klt $n$-complement for $n\in{\mathbb N}N_0({\MMM}_{\mt{\mathbf{sm}}})=\{1,2\}$. \end{example} The rough idea of the proof is very easy: we construct some special blow-up of $X$ with irreducible exceptional divisor $S$ (Proposition~\ref{constr-plt-blowup}) and then apply inductive properties of complements (Proposition~\ref{prodolj}) to reduce the problem to a low dimensional (but possibly projective) variety $S$. \begin{lemma} \label{weak-log-Fano} Let $(X/Z,D)$ be a log variety such that $K_X+D$ is klt and $-(K_X+D)$ nef and big over $Z$. Then there exists an effective ${\mathbb Q}$-divisor $D^{\mho}$ such that $K_X+D+D^{\mho}$ is again klt and $-(K_X+D+D^{\mho})$ ample over $Z$. \end{lemma} \begin{proof} Follows by Kodaira's lemma (see e.~g. \cite[0-3-3, 0-3-4]{KMM}). \end{proof} \begin{corollary} \label{weak-log-Fano-cor} Notation and assumptions as in \ref{weak-log-Fano}. Then the Mori cone ${\overline{NE}}(X/Z)$ is polyhedral and generated by contractible extremal rational curves. \end{corollary} \begin{definition} \label{def_plt_blowup} Let $(X,\Delta)$ be a log pair and let $g\colon Y\to X$ be a blow-up such that the exceptional locus of $g$ contains exactly one irreducible divisor, say $S$. Assume that $K_Y+\Delta_Y+S$ is plt and $-(K_Y+\Delta_Y+S)$ is $g$-ample. Then $g\colon (Y\supset S)\to X$ is called a \textit{purely log terminal (plt) blow-up} of $(X,\Delta)$. \end{definition} \textsc{Warning:} In contrast with log terminal modifications \cite[3.1]{Sh2} purely log terminal blow-ups are not log crepant. \begin{remark*} Let $(X\ni o,D)$ be an exceptional singularity. Then by Corollary~\ref{two-divisors-except} there is at most one plt blow-up (see \cite[Prop.~6]{Pr1}). \end{remark*} \begin{proposition}[{\cite{Pr1}}, {\cite{Pr3}}, cf. {\cite{Sh3}}] \label{constr-plt-blowup} Let $(X,\Delta+\Delta^0)$ be a log variety such that $X$ is ${\mathbb Q}$-factorial, $\Delta\ge 0$, $\Delta^0\ge 0$, $K+\Delta+\Delta^0$ is lc but not plt and $K+\Delta$ is klt. (We do not claim that $\Delta$ and $\Delta^0$ have no common components). Assume LogMMP in dimension $\dim(X)$. Then there exists a plt blow-up $g\colon (Y\supset S)\to X$ of $(X,\Delta)$ such that \para \label{inductive-blow-up-i} $K_Y+\Delta_Y+S+\Delta^0_Y=g^{*}(K+\Delta+\Delta^0)$ is lc; \para \label{inductive-blow-up-ii} $K_Y+\Delta_Y+S+(1-\varphiepsilon)\Delta^0_Y$ is plt and anti-ample over $X$ for any $\varphiepsilon>0$; \para \label{inductive-blow-up-iii} $Y$ is ${\mathbb Q}$-factorial and $\rho (Y/X)=1$. \end{proposition} Such a blow-up we call an \textit{inductive blow-up} of $K_X+\Delta+\Delta^0$. It is important to note that this definition depends on $\Delta$ and $\Delta^0$, not just on $\Delta+\Delta^0$. Such blow-ups are very useful in the theory of complements. In the local case one can construct a boundary $\Delta^0$ as in Proposition~\ref{constr-plt-blowup} just by taking the pull-back of some ${\mathbb Q}$-divisor on $Z$. In the global case the problem of finding $\Delta^0$ is not so easy. 
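Before giving the proof, we include an elementary two-dimensional example illustrating Definition~\ref{def_plt_blowup} and properties \eref{inductive-blow-up-i}--\eref{inductive-blow-up-iii}; it is added here only for orientation and is not used below. Let $X={\mathbb C}^2$ with $o\in X$ the origin, $\Delta=0$ and $\Delta^0=\{xy=0\}$ the union of the two coordinate lines, so that $K+\Delta+\Delta^0$ is lc but not plt, while $K+\Delta=K$ is klt. Let $g\colon Y\to X$ be the blow-up of $o$ with exceptional divisor $S\simeq{\mathbb P}^1$. Since $\operatorname{mult}_o\Delta^0=2$, we have
\[
g^{*}(K+\Delta+\Delta^0)=K_Y+\Delta^0_Y+S,
\]
which is lc; this is \eref{inductive-blow-up-i}. For any $\varepsilon>0$ we have $K_Y+S+(1-\varepsilon)\Delta^0_Y\equiv-\varepsilon\Delta^0_Y$ over $X$ and $\Delta^0_Y\cdot S=2>0$, so this log divisor is plt and anti-ample over $X$; this is \eref{inductive-blow-up-ii}. Finally, $Y$ is smooth and $\rho(Y/X)=1$; this is \eref{inductive-blow-up-iii}. In particular, $g$ is a plt blow-up of $(X,0)$ in the sense of Definition~\ref{def_plt_blowup}: $K_Y+S$ is plt and $-(K_Y+S)\cdot S=2>0$, so $-(K_Y+S)$ is ample over $X$. In agreement with the warning above, $g$ is not log crepant with respect to $(X,\Delta)=(X,0)$, since $g^{*}K_X=K_Y-S$.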
\begin{proof} First take a log terminal modification $h\colon V\to X$ of $(X, \Delta+\Delta^0)$ (see \cite{Sh}, \cite[17.10]{Ut}). Write \[ h^*(K+\Delta+\Delta^0)=K_V+\Delta_V+\Delta^0_V+E, \] where $\Delta_V$ and $\Delta^0_V$ are proper transforms of $\Delta$ and $\Delta^0$, respectively, and $E$ is exceptional. One can take $h$ so that $E$ is reduced and $E\ne 0$ (see \cite[17.10]{Ut}, \cite[3.1]{Sh2}). We claim that $K_V+\Delta_V+E$ cannot be nef over $X$. Indeed, write \[ h^*(K+\Delta)=K_V+\Delta_V+\sum \alpha_iE_i,\quad \text{where}\quad \alpha_i<1\quad \text{for all}\quad i. \] This give us $h^*\Delta^0=\Delta^0_V+\sum (1-\alpha_i)E_i$, so \[ K_V+\Delta_V+E\equiv-\Delta^0_V\equiv \sum (1-\alpha_i)E_i\quad \text{over}\quad X, \] where $\sum (1-\alpha_i)E_i$ is effective, exceptional and $\ne 0$. This divisor cannot be $h$-nef (see e.~g. \cite[1.1]{Sh}). Now, run $(K_V+\Delta_V+E)$-MMP over $X$. At the last step we get a birational contraction $g\colon Y\to X$ which satisfies \eref{inductive-blow-up-i}--\eref{inductive-blow-up-iii}. \end{proof} \para \label{start} We prove Theorem~\ref{result-Fano-1} by induction on $d$. So assume that \ref{result-Fano-1} holds if $\dim(X)<d$. To begin the proof, replace $X$ with its ${\mathbb Q}$-factorialization (see \cite[6.11.1]{Ut}). This preserves all our assumptions. Next, take $D^{\mho}$ as in Lemma~\ref{weak-log-Fano} and put $D^{\triangledown}:=D^{\mho}+cf^*H$, where $H$ is an effective Cartier divisor on $Z$ passing through $o$ and $c$ is the log canonical threshold $c=c(X,D+D^{\mho},f^*H)$ (the maximal such that $K+D+D^{\mho}+cf^*H$ is lc). Then \para \label{triangld-lc} $K+D+D^{\triangledown}$ is anti-ample over $Z$, lc and not klt. \par Note that $D$ and $D^{\triangledown}$ can have common components. Now, we distinguish two cases: \begin{enumerate} \item[(A)] $K+D+D^{\triangledown}$ is plt (and $\down{D+D^{\triangledown}}\ne 0$); \item[(B)] $K+D+D^{\triangledown}$ is not plt. \end{enumerate} \par In case (B) we consider an inductive blow-up $g\colon \widehat X\to X$ of $(X,D+D^{\triangledown})$. Let $S$ be the (irreducible) exceptional divisor. By \cite[5.4]{Sh} (or \cite[19.2]{Ut}) it is sufficient to prove the existence of required complements on $\widehat X$. Write \setcounter{equation}{\value{subsubsection}} \begin{equation} \label{eq-first-def-D} \begin{array}{l} g^*(K+D+D^{\triangledown})=K_{\widehat X}+\Delta+S+\widehat D^{\triangledown},\\ g^*(K+D)=K_{\widehat X}+\Delta+aS, \end{array} \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent where $\widehat D^{\triangledown}$ and $\Delta$ are proper transforms of $D^{\triangledown}$ and $D$, respectively, and $a<1$. Note that $\Delta+aS$ is not necessarily a boundary. \par In case (A) we put $\widehat X=X$, $g:=\mt{id}$, $S=\down{D+D^{\triangledown}}$. In this case $S$ is irreducible by the Connectedness Lemma \cite[17.4]{Ut} and because $S$ is normal \cite[17.5]{Ut}. Define $\Delta$ from $D=\Delta+aS$, where $0\le a<1$ and $S$ is not a component of $\Delta$, and put $\widehat D^{\triangledown}:=D+D^{\triangledown}-S-\Delta$. In both cases we have by \eref{triangld-lc} and \eref{eq-first-def-D} the following (see \cite[3.10]{Ko}): \para \label{condition-KDS} $K_{\widehat X}+\Delta+S+\widehat D^{\triangledown}$ is lc, not klt, $K_{\widehat X}+\Delta+aS$ is klt and both $-(K_{\widehat X}+\Delta+S+\widehat D^{\triangledown})$ and $-(K_{\widehat X}+\Delta+aS)$ are nef and big over $Z$. \begin{lemma} \label{M-klt} Notation as above. 
There exist $\delta_0>0$ and a boundary $M$ on $\widehat X$ such that \para \label{lemma-i} $\Delta+aS\le M\le \Delta+S+(1-\delta_0)\widehat D^{\triangledown}$; \para \label{lemma-ii} $K+M$ is klt; \para \label{lemma-iii} $-(K+M)$ is nef and big over $Z$. \par In particular, the Mori cone ${\overline{NE}}(\widehat X/Z)$ is polyhedral. \end{lemma} \begin{proof} By \eref{triangld-lc}, $K+D+(1-\delta_0) D^{\triangledown}$ is klt and anti-ample over $Z$ for sufficiently small positive $\delta_0$. Take $M$ as the crepant pull-back \setcounter{equation}{\value{subsubsection}} \begin{multline} \label{eq-multline-1} K_{\widehat X}+M= g^*(K+D+(1-\delta_0) D^{\triangledown})=\\ g^*(K+D)+(1-\delta_0)\bigl(g^*(K+D+D^{\triangledown})- g^*(K+D)\bigr)=\\ K_{\widehat X}+\Delta+aS+(1-\delta_0)\bigl((K_{\widehat X}+\Delta+S+\widehat D^{\triangledown}) -(K_{\widehat X}+\Delta+aS)\bigr). \end{multline} \setcounter{subsubsection}{\value{equation}}\noindent In other words, \begin{multline*} M=\Delta + aS+(1-\delta_0)(S+\widehat D^{\triangledown}-aS)=\\ \Delta+\bigl(1-\delta_0(1-a)\bigr)S+(1-\delta_0) \widehat D^{\triangledown}. \end{multline*}\noindent From \eref{eq-multline-1} we obtain that $K+M$ is klt \cite[3.10]{Ko}, anti-nef and anti-big over $Z$. \eref{lemma-i} holds if $a\le 1-\delta_0(1-a)$, i.e. for $0<\delta_0\ll 1$. \end{proof} \para \label{plt-and-anti-ample} Further, take $0<\lambda \ll \delta_0$ and put \[ \widehat D^{\lambda}:=(1-\lambda)\widehat D^{\triangledown}. \] We claim that the log divisor $K_{\widehat X}+\Delta+S+\widehat D^{\lambda}$ is plt and anti-ample over $Z$. \par Indeed, in case (B), since $\rho (\widehat X/X)=1$, curves in the fibers of $g$ generate an extremal ray, say $R$. Then $R\cdot (K_{\widehat X}+\Delta+S+\widehat D^{\triangledown})=0$ (and $K_{\widehat X}+\Delta+S+\widehat D^{\triangledown} $ is strictly negative on all extremal rays $\ne R$, see \eref{triangld-lc} and \eref{eq-first-def-D}). Further, by \eref{eq-first-def-D} $\widehat D^{\triangledown}\equiv -(1-a)S$ over $X$ and this divisor is positive on $R$. Thus $K_{\widehat X}+\Delta+S+\widehat D^{\lambda}$ is strictly negative on all extremal rays of ${\overline{NE}}(\widehat X/Z)$ for sufficiently small positive $\lambda$. By Kleiman criterion, it is anti-ample. Finally, $K_{\widehat X}+\Delta+S+\widehat D^{\lambda}$ is plt because $\widehat D^{\lambda}\le \widehat D^{\triangledown}$. In case (A), our claim obviously follows by \eref{triangld-lc}. \par Note that $M\le \Delta+S+\widehat D^{\lambda}$ by \eref{lemma-i}. \par Fix some set $F_1,\dots,F_r$ of prime divisors on $\widehat X$. For $n\gg 0$, take a general member $F\in |-n(K_{\widehat X}+\Delta+S+\widehat D^{\lambda})-\sum F_i|$ and put $B:=\widehat D^{\lambda}+\frac1n(F+\sum F_i)$. We can take $F_1,\dots, F_r$ and $n$ so that \para $K+\Delta+S+B$ is plt; \para \label{generators} components of $B$ generate $N^1(\widehat X/Z)$. \par By construction, we have \para $K+\Delta+S+B\equiv 0$ over $Z$. \par Take $\varphiepsilon>0$ so that $K+\Delta+S+(1+\varphiepsilon)B$ is plt (see \cite[2.17]{Ut}) and $M\le \Delta+S+(1-\varphiepsilon)B$ (i.e. $1-\delta_0\le (1-\varphiepsilon)(1-\lambda)$, see proof of Lemma~\ref{M-klt}). 
Run $(K+\Delta+S+(1+\varepsilon)B)$-MMP over $Z$: \[ \vcenter{\hbox{ \begin{picture}(215,80) \put(65,70){\vector(-2,-1) {40}} \put(25,30){\vector(2,-1){40}} \put(110,70){\line(2,-1){10}} \put(125,62.5){\line(2,-1){10}} \put(140,55){\vector(2,-1){10}} \put(150,30){\vector(-2,-1) {40}} \put(90,75){\makebox(0,0){$\widehat X$}} \put(10,40){\makebox(0,0){$X$}} \put(170,40){\makebox(0,0){$\ov X$}} \put(90,3){\makebox(0,0){$Z$}} \put(40,66){\makebox(0,0)[c]{\scriptsize $g$}} \put(40,13){\makebox(0,0)[c]{\scriptsize $f$}} \put(135,14){\makebox(0,0)[c]{\scriptsize $q$}} \end{picture} }} \] We will use $\ov\square$ to denote the proper transform on $\ov X$ of a divisor $\square$ on $\widehat X$. For each extremal ray $R$ we have $R\cdot B<0$ and $R\cdot (K+\Delta+S)>0$. Therefore any contraction is either flipping or divisorial and contracts a component of $B$. In particular, no divisorial contraction contracts $S$. At the end we get the situation when $(K+\Delta+S+(1+\varepsilon)B)$ is nef over $Z$ (we do not exclude the case $\ov X=Z$). Since $K+\Delta+S+B\equiv 0$, $-(K+\Delta+S)$ is also nef over $Z$. \begin{lemma} \label{NE} We can run $(K+\Delta+S+(1+\varepsilon)B)$-MMP so that on each step there is a boundary $M\le \Delta+S+(1-\varepsilon)B$ such that $K+M$ is klt and $-(K+M)$ is nef and big over $Z$. \end{lemma} \begin{proof} By Lemma~\ref{M-klt} such a boundary exists on the first step. If $K+\Delta+S+(1+\varepsilon)B \equiv \varepsilon B$ is not nef over $Z$, then $-(K+\Delta+S+(1-\varepsilon)B)\equiv \varepsilon B$ is also not nef over $Z$. Put \[ t_0:=\sup\{t\mid -(K+M+t(\Delta+S+(1-\varepsilon)B-M))\quad\text{is nef}\}. \] By Lemma~\ref{weak-log-Fano-cor} this supremum is a maximum and is achieved on some extremal ray. Hence $t_0$ is rational and $0< t_0<1$. Consider the boundary $M^0:=M+t_0(\Delta+S+(1-\varepsilon)B-M)$. Then $-(K+M^0)$ is nef over $Z$ and $M^0\le \Delta+S+(1-\varepsilon)B$. We claim that $-(K+M^0)$ is also big over $Z$. Assume the opposite. By the Base Point Free Theorem, $-(K+M^0)$ is semi-ample over $Z$ and defines a contraction $\varphi\colon \widehat X\to W$ onto a lower-dimensional variety. Let $C$ be a general curve in a fiber. Then $C\cdot (K+M^0)=C\cdot (K+\Delta+S+B)=0$, so $C\cdot (\Delta+S+B-M^0)=0$. Since $C$ is nef, $\varepsilon C\cdot B\le C\cdot (\Delta+S+B-M^0)=0$ and $C\cdot B=0$. By \eref{generators}, $C\equiv 0$, a contradiction. \par Further, ${\overline{NE}}(\widehat X/Z)$ is polyhedral, so there is an extremal ray $R$ such that $R\cdot (K+M^0)=0$ and $\varepsilon R\cdot B=-R\cdot (K+\Delta+S+(1-\varepsilon)B)<0$. Hence $R\cdot (K+\Delta+S+(1+\varepsilon)B)<0$. Let $h\colon \widehat X\to Y$ be the contraction of $R$. Put $M^0_Y:=h_*M^0$. Then $K+M^0=h^*(K_Y+M^0_Y)$. Therefore $K_Y+M^0_Y$ is ${\mathbb Q}$-Cartier, klt and $-(K_Y+M^0_Y)$ is nef and big over $Z$. If $h$ is divisorial, we can continue the process replacing $\widehat X$ with $Y$ and $M$ with $M^0_Y$. Assume that $h$ is a flipping contraction and let \[ \vcenter{\hbox{ \begin{picture}(85,50) \put(14,35){\vector(1,-1){21}} \put(66,35){\vector(-1,-1){21}} \put(15,40){\line(1,0){6}} \put(26,40){\line(1,0){6}} \put(37,40){\line(1,0){6}} \put(48,40){\line(1,0){6}} \put(59,40){\vector(1,0){6}} \put(7,40){\makebox(0,0){$\widehat X$}} \put(77,40){\makebox(0,0){$X^+$}} \put(40, 6){\makebox(0,0){$Y$}} \put(19,20){\makebox(0,0){\scriptsize$h$}} \put(62.1,20.1){\makebox(0,0){\scriptsize$h^+$}} \end{picture}}} \] be the flip.
Take $M:={h^+}^{-1}(M^0_Y)$. Again we have that $-(K_{X^+}+{M}^+)=-{h^+}^*(K_Y+M^0_Y)$ is nef and big over $Z$. Thus we can continue the process replacing $X$ with $X^+$. \end{proof} Finally, we get on $\ov X$ \para \label{ov-plt} $K+\ov\Delta+\ov S$ is plt; \para \label{ov-nef} $-(K+\ov\Delta+\ov S)$ is nef over $Z$. \begin{lemma} \label{ov-big} Notation as above. Then $-(K+\ov\Delta+\ov S)$ is semi-ample over $Z$. Moreover, if $-(K+\ov\Delta+\ov S)$ is not ample, then it defines a birational contraction over $Z$ with the exceptional locus contained in $\Supp{\ov B}$. In particular, $-(K_{\ov S}+\Diff{\ov S}{\ov\Delta}) =-(K+\ov\Delta+\ov S)|_{\ov S}$ is big (and nef) over $q(\ov S)$. \end{lemma} \begin{proof} By Lemma~\ref{NE} and the Base Point Free Theorem, $-(K+\ov\Delta+\ov S)$ is semi-ample. Thus for some $n\in{\mathbb N}$ the linear system $|-n(K+\ov\Delta+\ov S)|$ defines a contraction $\ov X\to W$. For any curve $C$ in a fiber we have $C\cdot \ov B=0$. Since the components of $\ov B$ generate $N^1(\ov X/Z)$ (see \eref{generators}), we have that $C\cdot \ov B_i<0$ for some component $\ov B_i$ of $\ov B$. Hence $C\subset\Supp{\ov B}$. \end{proof} Note that $q\colon \ov S\to q(\ov S)$ is also a contraction: \begin{lemma} $q_*{\mathcal O}_{\ov S}={\mathcal O}_{q(\ov S)}$ and $q(\ov S)=f(g(S))$ is normal. \end{lemma} \begin{proof} See the proof of Lemma~3.6 in \cite{Sh}. \end{proof} By Lemma~\ref{coeff-diff}, $\Diff{\ov S}{\ov\Delta}\in{\Phi}$ (recall that we put ${\Phi}={\MMM}_{\mt{\mathbf{m}}}^d$ or ${\Phi}={\MMM}_{\mt{\mathbf{sm}}}$). \begin{lemma} \label{return} Assume that near $q^{-1}(o)$ there exists an $n$-complement $K_{\ov S}+\Diff{\ov S}{\ov\Delta}^+$ of $K_{\ov S}+\Diff{\ov S}{\ov\Delta}$. Then near $q^{-1}(o)$ there exists an $n$-complement $K+D^+$ of $K+D$. Moreover, if $K_{\ov S}+\Diff{\ov S}{\ov\Delta}^+$ is not klt, then $K+D^+$ is not exceptional. \end{lemma} \begin{proof} By Proposition~\ref{prodolj} any $n$-complement of $K_{\ov S}+\Diff{\ov S}{\ov\Delta}$ can be extended to an $n$-complement of $K+\ov\Delta+\ov S$. By \ref{bir-prop} we can pull-back complements of $K+\Delta+S$ under divisorial contractions because they are $(K+\Delta+S)$-positive. Finally, note that the proper transform of an $n$-complement under a flip is again an $n$-complement. Indeed, the inequality in \eref{def-coplements-n}, obviously, is preserved under any birational map which is an isomorphism in codimension one. The log canonical property is preserved by \cite[2.28]{Ut}. \end{proof} \begin{lemma} \label{dim-q(S)>0} If $\dim(q(\ov S))>0$, then \para $(X/Z\ni o,D)$ is not exceptional; \para there is a non-exceptional $n$-complement of $K+D$ with $n\in{\mathbb N}N_{d-2}({\Phi})$. \end{lemma} \begin{proof} (i) follows by Corollary~\ref{dimension}. Note that $(\ov S/q(\ov S)\ni o,\Diff{\ov S}{\ov\Delta})$ satisfies the conditions of our theorem (see Lemma~\ref{coeff-diff}). By inductive hypothesis we may assume that there is a non-klt $n$-complement of $K_{\ov S}+\Diff{\ov S}{\ov\Delta}$ for $n\in{\mathbb N}N_{d-2}({\Phi})$. The rest follows by Lemma~\ref{return}. \end{proof} \parag \label{non-exceptional-} Going back to the proof of Theorem~\ref{result-Fano-1}, assume that $(X/Z\ni o, D)$ is non-exceptional (i.e. there exists a non-exceptional complement $K+D+\Upsilon$) and $q(\ov S)=o$. We have to show only that there exists a non-exceptional $n$-complement of $K+D$ with $n\in{\mathbb N}N_{d-2}({\Phi})$. By Lemma~\ref{dim-q(S)>0} we may assume that $q(\ov S)=o$, i.e. $\ov S$ is projective. 
By Corollary~\ref{two-divisors-non-except} we can take $\Upsilon$ so that $\dis{S,D+\Upsilon}=-1$ (and $\dis{E,D+\Upsilon}=-1$ for some $E\not\approx S$). Let $\widehat\Upsilon$ and $\ov\Upsilon$ be proper transforms of $\Upsilon$ on $\widehat X$ and $\ov X$, respectively. Then \[ g^*(K+D+\Upsilon)=K_{\widehat X}+\Delta+S+\widehat\Upsilon. \] Moreover, $\dis{E,\Delta+S+\widehat\Upsilon}=\dis{E,\ov\Delta+\ov S+\ov\Upsilon}=-1$, because $K_{\widehat X}+\Delta+S+\widehat\Upsilon\equiv 0$. Thus $K_{\ov X}+\ov\Delta+\ov S+\ov\Upsilon$ is not plt (near $q^{-1}(o)$). \begin{lemma} \label{non-klt} Assumptions as in \eref{non-exceptional-}. Then $K_{\ov S}+\Diff{\ov S}{\ov\Delta+\ov\Upsilon}$ is not klt. \end{lemma} \begin{proof} By the Adjunction \cite[17.6]{Ut} it is sufficient to prove that $K+\ov\Delta+\ov S+\ov\Upsilon$ is not plt near $\ov S$. Taking into account the discussion above, we see that this is a consequence of Lemma~\ref{connect} below. \end{proof} By Lemma~\ref{non-klt} and Conjecture~\ref{conjecture-inductive-complements} we obtain that there is a non-klt $n$-complement of $K_{\ov S}+\Diff{\ov S}{\ov\Delta}$ with $n\in{\mathbb N}N_{d-2}({\Phi})$. By Lemma~\ref{return} this proves Theorem~\ref{result-Fano-1}. The following example illustrates the proof of Theorem~\ref{result-Fano-1}: \begin{example} As in Example~\ref{ex-RDP}, let $(Z\ni o)$ be a two-dimensional Du Val (RDP) singularity, let $D=0$, and let $f=\mt{id}$. In this case, $g\colon \widehat X\to X$ is a weighted blow-up (with suitable weights) and $\widehat X\dasharrow \ov X$ is the identity map. Hence $S\simeq{\mathbb P}^1$. Write $\Diff S{0}=\sum_{i=1}^r (1-1/m_i)P_i$, where $P_1,\dots,P_r$ are different points. We have the following correspondence between types of $(Z\ni o)$ and collections $(m_1,\dots,m_r)$ (see \ref{ex-sing} and \ref{ex-global}): \par \begin{center} \begin{tabular}{c||c|c|c|c|c} $(Z\ni o)$&$A_n$&$D_n$&$E_6$&$E_7$&$E_8$\\ \hline $(m_1,\dots,m_r)$&$r\le 2$&$(2,2,m)$&$(2,3,3)$&$(2,3,4)$&$(2,3,5)$\\ \end{tabular} \end{center} \par \noindent Thus $(Z\ni o)$ is exceptional if and only if it is of type $E_6$, $E_7$ or $E_8$. \end{example} \begin{lemma}[see \cite{Pr2}, cf. {\cite[6.9]{Sh}}, {\cite[Proposition~2.1]{F}}] \label{connect} Let $(X/Z\ni o,D)$ be a log variety and let $f\colon X\to Z$ be the structure morphism. Assume that \para $K+D$ is lc and not plt near $f^{-1}(o)$; \para $K+D\equiv 0$ over $Z$; \para there is an irreducible component $S\subset\down{D}$ such that $f(S)\ne Z$. \par Assume also LogMMP in dimension $\dim(X)$. Then $K+D$ is not plt near $S\cap f^{-1}(o)$. \end{lemma} \begin{corollary} \label{main-corollary} Notation as in Theorem~\ref{result-Fano-1}. The following are equivalent: \para \label{(i)} $(X/Z\ni o, D)$ is an exceptional pair (of local type); \para \label{(ii)} $q(\ov S)=o$ and $(\ov S,\Diff{\ov S}{\ov D})$ is an exceptional pair (of global type). \end{corollary} \begin{proof} \eref{(i)} $\Longrightarrow$ \eref{(ii)} follows by Lemma~\ref{dim-q(S)>0} and Lemma~\ref{return}. The converse implication follows by Lemma~\ref{non-klt}. \end{proof} Define \[ \compll{X,D}:=\min\{m\mid \ \text{there is a non-klt $m$-complement of $K+D$}\}. \] \begin{corollary} Notation and assumptions as in Theorem~\ref{result-Fano-1}. Assume that $(X/Z\ni o, D)$ is exceptional. Then \[ \compll{X,D}=\compl{\ov S,\Diff{\ov S}{\ov D}}. \] \end{corollary} \begin{proof} The inequality $\le$ follows by Lemma~\ref{return}, so we show $\ge$. Let $K+D^+$ be a non-klt $n$-complement of $K+D$.
Then $D^+\ge D$. By Corollary~\ref{two-divisors-except} $\dis{S,D^+}=-1$. Consider the crepant pull-back $g^*(K+D^+)=K_{\widehat X}+\Delta+S+\Upsilon$ and let $\ov \Upsilon$ be the proper transform of $\Upsilon$ on $\ov X$. Then $K_{\ov S}+\Diff{\ov S}{\ov \Delta+\ov \Upsilon}$ is an $n$-complement of $K_{\ov S}+\Diff{\ov S}{\ov \Delta}$. \end{proof} Note that for non-exceptional contractions we have only $\compll{X,D}\le \compl{\ov S,\Diff{\ov S}{\ov D}}$: \begin{example} Let $(X\ni o)$ be a terminal $cE_8$-singularity given by the equation $x_1^2+x_2^3+x_3^5+x_4^r=0$, $\gcd(r,30)=1$ and let $g\colon (\widehat X,S)\to X$ be the weighted blow-up with weights $(15r,10r,6r,30)$. Then $S={\mathbb P}^2$ and $\Diff S0=\frac12 L_1+ \frac23 L_2+\frac45 L_3+\frac{r-1}r L_4$, where $L_1,\dots,L_4$ are lines on ${\mathbb P}^2$ in general position. Then $\compll{X,0}=1$ because $(X\ni o)$ is a $cDV$-singularity. On the other hand, $\compl{\ov S,\Diff{\ov S}0}=6$. \end{example} \section{Exceptional Fano contractions} In this section we study exceptional Fano contractions such as in Theorem~\ref{result-Fano-1}. \begin{proposition} \label{prop-exc-1} Notation and assumptions as in Theorem~\ref{result-Fano-1}. Assume also Conjecture~\ref{conjecture-boundedness-complements} in dimensions $\le d-1$. Assume that $(X/Z\ni o, D)$ is exceptional. Then \[ \dis{E,D}\ge -1+\delta_d\quad\text{for any}\quad E\not\approx S, \] where $\delta_d>0$ is a constant which depends only on $d$. \end{proposition} \begin{proof} Let $K+D^+$ be a non-klt $n$-complement with $n\in{\mathbb N}N_{d-1}({\Phi})$. Then $D^+\ge D$ (see \ref{ge-1}). By definition of exceptional contractions $\dis{S,D^+}=-1$ and $\dis{E,D^+}>-1$ for all $E\not\approx S$. Hence $\dis{E,D^+}\ge-1+1/n$ because $n\dis{E,D^+}$ is an integer. Since $D^+\ge D$, $\dis{E,D}\ge \dis{E,D^+}$. Thus we can take $\delta_d:=1/\max({\mathbb N}N_{d-1}({\MMM}_{\mt{\mathbf{m}}}^{d-1}))$. \end{proof} Assuming $D\in{\MMM}_{\mt{\mathbf{sm}}}$, we obtain \begin{corollary} Notation and assumptions as in \ref{prop-exc-1}. Let $D_i$ be a component of $D$ and let $d_i=1-1/m_i$ be its coefficient. If $D_i\not\approx S$, then $m_i\le 1/\delta_d$ and therefore there is a finite number of possibilities for $d_i$. \end{corollary} \begin{corollary}[cf. \cite{Ko-SGT}] Assume LogMMP in dimensions $\le d$ and Conjecture~\ref{conjecture-inductive-complements} in dimensions $\le d-1$. Let $(X\ni o)$ be a $d$-dimensional klt singularity and let $F=\sum F_i$ be an effective reduced Weil ${\mathbb Q}$-Cartier divisor on $X$ passing through $o$. Then we have either $c_o(X,F)=1$ or $c_o(X,F)\le 1-1/N_{d-1}$, where $c_o(X,F)$ is the log canonical threshold of $(X,F)$ \cite{Sh} (see also \cite{Ko}) and $N_{d-1}$ is such as in \eref{eq-def-N-M}. \end{corollary} This corollary is non-trivial only if Conjecture~\ref{conjecture-boundedness-complements} holds in dimension $\le d-1$. \begin{proof} Put $c:=c_o(X,F)$ and assume that $1-1/N_{d-1}<c<1$. Theorem~\ref{result-Fano-1} gives us that there is an $n$-complement $K+B$ of $K+cF$, where $n\le N_{d-1}$. Let $c^+_i$ be the coefficient of $F_i$ in $B$. By \eref{def-coplements-n}, $c^+_i\ge 1$. Hence $F\le B$ and $K+F$ is lc, a contradiction. \end{proof} In the case $1-1/(N_{d-2}+1)\le c=c_o(X,F)<1$, the pair $(X,cF)$ is exceptional. We expect that there are only a finite number of possibilities for $c\in\left[1-1/(N_{d-2}+1),1\right)$ in any dimension. For example, this method gives us (see e.~g.
\cite[6.1.3]{Pr3}) that in dimension $d=2$ the set of all values of $c_o(X,F)$ in the interval $[2/3,1]$ is $\left\{2/3,7/10,3/4,5/6,1\right\}$. \begin{theorem} \label{theorem-cover} Fix $\varepsilon>0$. Let $(X/Z\ni o, D)$ be a $d$-dimensional log variety of local type such that \para $D\in{\MMM}_{\mt{\mathbf{sm}}}$ (i.e. $D=\sum (1-1/m_i)D_i$, where $m_i\in{\mathbb N}\cup\{\infty\}$ and $D_i$'s are prime divisors); \para \label{assumption-totaldiscr} $\totaldiscr{X,D}>-1+\varepsilon$; \para $-(K+D)$ is nef and big over $Z$; \para $(X/Z\ni o, D)$ is exceptional. \par Let $\varphi\colon X'\to X$ be a finite cover such that \para $X'$ is normal and irreducible; \para $\varphi$ is \'etale in codimension one outside of $\Supp{D}$; \para \label{assumption-divides} the ramification index of $\varphi$ at the generic point of components of $\varphi^{-1}(D_i)$ divides $m_i$. \par Assume also LogMMP in dimensions $\le d$ and Conjectures~\ref{conjecture-boundedness-log-Fano}, \ref{conjecture-boundedness-complements} and \ref{conjecture-inductive-complements} for ${\MMM}_{\mt{\mathbf{sm}}}$ in dimension $d-1$. Then the degree of $\varphi$ is bounded by a constant $\operatorname{Const}(d,\varepsilon)$. \end{theorem} \begin{proof} We will use the notation of the proof of Theorem~\ref{result-Fano-1}. Taking the fiber product with a ${\mathbb Q}$-factorialization we can reduce the situation to the case when $X$ is ${\mathbb Q}$-factorial. Note also that $\varphi^{-1}(f^{-1}(o))$ is connected (because $X$ is considered as a germ near $f^{-1}(o)$ and $X'$ is irreducible). Consider the base change \[ \label{CD-main} \begin{CD} X'@>\varphi>>X\\ @Vf'VV @VfVV\\ Z'@>\pi>>Z\\ \end{CD} \] where $X'\stackrel{f'}\to Z'\stackrel{\pi}\to Z$ is the Stein factorization. Then $f'\colon X'\to Z'$ is a contraction and $\pi\colon Z'\to Z$ is a finite morphism. Define $D'$ and $D^{\triangledown\prime}$ by \setcounter{equation}{\value{subsubsection}} \begin{equation} \label{eq-var-preimage} \begin{array}{l} K_{X'}+D'=\varphi^*(K+D),\\ K_{X'}+D'+D^{\triangledown\prime}=\varphi^*(K+D+D^{\triangledown}) \end{array} \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent (see \cite[Sect.~2]{Sh}). This means that, for example, the coefficient of a component $D_{i,j}'$ of $\varphi^{-1}(D_i)$ in $D'$ is as follows \[ d'_{i,j}=1-r_{i,j}(1-(1-1/m_i))=1-r_{i,j}/m_i, \] where $r_{i,j}$ is the ramification index at the generic point of $D_{i,j}'$. Then by \eref{assumption-divides}, $D'\in {\MMM}_{\mt{\mathbf{sm}}}$ (and $D^{\triangledown\prime}\ge 0$). Obviously, $K_{X'}+D'+D^{\triangledown\prime}$ is anti-ample over $Z'$. \para First we consider case (A) (i.e. when $K+D+D^{\triangledown}$ is plt, $S:=\down{D+D^{\triangledown}}\ne 0$, $\widehat X=X$, $g:=\mt{id}$). We put $S':=\down{D'+D^{\triangledown\prime}}=\varphi^{-1}(S)$. By Lemma~\ref{dim-q(S)>0}, $S$ is compact and $S\subset f^{-1}(o)$. Applying \cite[Sect.~2]{Sh} (or \cite[20.3]{Ut}) we get that $K_{X'}+D'+D^{\triangledown\prime}$ is plt. By the Connectedness Lemma \cite[17.4]{Ut} and the Adjunction \cite[17.6]{Ut}, $S'$ is connected, irreducible and normal. Define $\Delta'$ from $D'=\Delta'+a'S'$, where $0\le a'<1$. Let $\ov{X}'$ be the normalization of $\ov{X}$ in the function field of $X'$.
There is the commutative diagram \[ \vcenter{\hbox{ \begin{picture}(98,50) \put(28,5){\vector(1,0){52}} \put(88,37){\line(0,-1){4}} \put(88,30){\line(0,-1){4}} \put(88,23){\line(0,-1){4}} \put(88,16){\vector(0,-1){4}} \put(20,37){\line(0,-1){4}} \put(20,30){\line(0,-1){4}} \put(20,23){\line(0,-1){4}} \put(20,16){\vector(0,-1){4}} \put(28,45){\vector(1,0){52}} \put(20,45){\makebox(0,0){$X'$}} \put(88,45){\makebox(0,0){$X$}} \put(88,5){\makebox(0,0){$\ov X$}} \put(20,5){\makebox(0,0){$\ov X'$}} \put(54,10){\makebox(0,0){\scriptsize$\ov\varphi$}} \put(59,50){\makebox(0,0){\scriptsize$\varphi$}} \put(13,25){\makebox(0,0){\scriptsize$\psi '$}} \put(96,25){\makebox(0,0){\scriptsize$\psi$}} \end{picture} }} \] where $\ov \varphi\colon \ov{X}'\to \ov{X}$ is a finite morphism and $\psi\colon X\dasharrow \ov X$ and $\psi'\colon X'\dasharrow \ov X'$ are birational maps such that both $\psi^{-1}$ and $\psi^{\prime -1}$ do not contract divisors. Hence $\ov\varphi$ has ramification divisor supported only over $\Supp{\psi_*(D)}\subset\ov S\cup\Supp{\ov \Delta}$ and the ramification index of $\ov\varphi$ at the generic point of a component over $\psi_*(D_i)$ is equal to the ramification index of $\varphi$ at the generic point of the corresponding component over $D_i$. Applying $\psi_*$ and $\psi_*'$ to \eref{eq-var-preimage}, we obtain \setcounter{equation}{\value{subsubsection}} \begin{equation} \begin{array}{l} \label{eq-gather-X-D} K_{\ov X'}+\ov D'=\ov \varphi^*\left(K_{\ov X}+\ov D\right),\\ K_{\ov X'}+\ov D'+\ov D^{\triangledown\prime}= \ov \varphi^*\left(K_{\ov X}+\ov D+\ov D^{\triangledown}\right), \end{array} \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent where $\ov D':=\psi_*' D'$ and $\ov D^{\triangledown\prime}:=\psi_*'D^{\triangledown\prime}$. Recall that $\ov S=\down{\ov D+\ov D^{\triangledown}}$ is irreducible. Now, \eref{eq-gather-X-D} yields \setcounter{equation}{\value{subsubsection}} \begin{equation} \label{eq-X-prime} K_{\ov X'}+\ov \Delta'+\ov S'=\ov \varphi^*\left(K_{\ov X}+\ov \Delta+\ov S\right), \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent where $\ov \Delta':=\psi_*' \Delta'$ and $\ov S':=\psi_*' S'$. By \cite[Sect.~2]{Sh} and \eref{ov-plt} (see also \cite[20.3]{Ut}), $K_{\ov X'}+\ov \Delta'+\ov S'$ is plt. Moreover, \eref{ov-nef} and Lemma~\ref{ov-big} give us that $-(K_{\ov X'}+\ov \Delta'+\ov S')$ is nef and big over $Z'$. It is sufficient to prove the boundedness of the degree of the restriction $\ov \phi=\ov\varphi|_{\ov S'} \colon\ov S'\to \ov S$. Indeed, $\deg \varphi=(\deg \ov \phi)r$, where $r$ is the ramification index over $S$. By \eref{assumption-totaldiscr} and \eref{assumption-divides}, $r$ is bounded. Now, we consider log pairs $\pair{\ov S,\Diff{\ov S}{\ov\Delta}}$ and $\pair{\ov S',\Diff{\ov S'}{\ov\Delta'}}$. \par Restricting \eref{eq-X-prime} to $\ov S'$, we obtain \[ K_{\ov S'}+\Diff{\ov S'}{\ov\Delta'}=\ov \phi^*\left(K_{\ov S}+ \Diff{\ov S}{\ov\Delta}\right). \] In particular, $\pair{K_{\ov S'}+\Diff{\ov S'}{\ov\Delta'}}^{d-1}=\pair{\deg \ov \phi}\pair{K_{\ov S}+ \Diff{\ov S}{\ov\Delta}}^{d-1}$. Both sides of this equality are positive by Lemma~\ref{ov-big}. \para \label{para-A} By the proof of Theorem~\ref{result-Fano-1}, there is an $n$-complement $K_{\ov X}+\ov \Delta+\ov S+\ov \Upsilon$ of $K_{\ov X}+\ov \Delta+\ov S$ with $n\le \max{\mathbb N}N_{d-1}({\MMM}_{\mt{\mathbf{sm}}})<\infty$.
Define $\ov \Upsilon'$ from \[ \label{eq-X-prime-Up} K_{\ov X'}+\ov \Delta'+\ov S'+\ov \Upsilon'= \ov \varphi^*\left(K_{\ov X}+\ov \Delta+\ov S+ \ov \Upsilon\right), \] and put $\Theta:=\Diff{\ov S}{\ov\Delta+\ov \Upsilon}$ and $\Theta':=\Diff{\ov S'}{\ov\Delta'+\ov \Upsilon'}$. Then $K_{\ov S}+\Theta$ and $K_{\ov S'}+\Theta'$ are $n$-complements. Since $K_{\ov S}+\Theta$ is klt (see Corollary~\ref{main-corollary}), we have \[ \totaldiscr{\ov S, \Diff{\ov S}{\ov\Delta}}\ge -1+\frac1n\ge -1+\beta, \quad \text{where} \quad \beta=\frac{1}{\max{\mathbb N}N_{d-1}({\MMM}_{\mt{\mathbf{sm}}})}. \] Similarly, \[ \totaldiscr{\ov S', \Diff{\ov S'}{\ov\Delta'}}\ge -1+\beta. \] By \ref{conjecture-boundedness-log-Fano}, $\left(\ov S,\Supp{\Diff{\ov S}{\ov\Delta}}\right)$ and $\left(\ov S',\Supp{\Diff{\ov S'}{\ov\Delta'}}\right)$ belongs to a finite number of algebraic families. Taking into account that $\Diff{\ov S}{\ov\Delta}, \Diff{\ov S'}{\ov\Delta'}\in{\MMM}_{\mt{\mathbf{sm}}}$ (see Lemma~\ref{coeff-diff}) and $\Diff{\ov S}{\ov\Delta}\le \Theta$, $\Diff{\ov S'}{\ov\Delta'}\le \Theta'$, we see that so are $\pair{\ov S,\Diff{\ov S}{\ov\Delta}}$ and $\pair{\ov S',\Diff{\ov S'}{\ov\Delta'}}$. This gives us that $\deg\ov \phi$ is bounded. Now, we consider case (B). Let $\widehat X'$ be the normalization of a dominant component of $\widehat X\times_X X'$ and let $S'$ be the proper transform of $S$ on $\widehat X'$. We claim that $g'\colon (\widehat X'\supset S')\to X'$ is a plt blow-up of $(X',D')$. Consider the base change \setcounter{equation}{\value{subsubsection}} \begin{equation} \label{CD1} \begin{CD} \widehat X'@>\widehat \varphi>>\widehat X\\ @Vg'VV @VgVV\\ X'@>\varphi>>\phantom{.}X.\\ \end{CD} \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent It is clear that $\widehat\varphi\colon \widehat X'\to \widehat X$ is finite and its ramification divisor can be supported only in $S\cup \Supp{D}$. Then $S'$ is the exceptional divisor of the blow-up $g'\colon \widehat X'\to X'$. We have \setcounter{equation}{\value{subsubsection}} \begin{equation} \label{for-2} K_{\widehat X'}+\Delta'+S'=\widehat \varphi^*(K_{\widehat X}+\Delta+S), \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent where $\Delta'$ is a boundary. This divisor is plt \cite[2.2]{Sh}, \cite[20.3]{Ut} and anti-ample over $X'$. By the Adjunction \cite[17.6]{Ut}, $S'$ is normal. On the other hand, $S'$ is connected near the fiber over $o'\in Z'$. Indeed, $-\pair{K_{\widehat X'}+\widehat D'+\widehat D^{\triangledown\prime}}$ is nef and big over $Z'$, by \eref{CD1} and \eref{condition-KDS}. Since $S'\subset\down{\widehat D'+\widehat D^{\triangledown\prime}}$, it is connected by the Connectedness Lemma \cite[17.4]{Ut}. This proves our claim. 
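\par Let us also record, for convenience, the coefficient computation behind \eref{for-2}; it is the same standard formula (cf. \cite[Sect.~2]{Sh}) as the one written out after \eref{eq-var-preimage}, and the notation $r_E$ below is introduced only for this remark. If a prime divisor on $\widehat X$ has coefficient $e$ in $\Delta+S$ and $E$ is a prime divisor on $\widehat X'$ lying over it with ramification index $r_E$, then the coefficient of $E$ in $\Delta'+S'$ equals
\[
1-r_E(1-e).
\]
For $e=1$ (the divisor $S$) this gives coefficient $1$, so $S'$ is reduced; for $e=1-1/m_i$ (a component of $\Delta$) it gives $1-r_E/m_i\ge 0$, since, as for $\ov\varphi$ above, $r_E$ coincides with the corresponding ramification index of $\varphi$ and hence divides $m_i$ by \eref{assumption-divides}; and $\widehat\varphi$ is unramified over the remaining divisors. This is why $\Delta'$ in \eref{for-2} is a boundary with standard coefficients.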
\par Now, as in case (A) we consider the commutative diagram \[ \vcenter{\hbox{ \begin{picture}(98,50) \put(28,5){\vector(1,0){52}} \put(88,37){\line(0,-1){4}} \put(88,30){\line(0,-1){4}} \put(88,23){\line(0,-1){4}} \put(88,16){\vector(0,-1){4}} \put(20,37){\line(0,-1){4}} \put(20,30){\line(0,-1){4}} \put(20,23){\line(0,-1){4}} \put(20,16){\vector(0,-1){4}} \put(28,45){\vector(1,0){52}} \put(20,45){\makebox(0,0){$\widehat X'$}} \put(88,45){\makebox(0,0){$\widehat X$}} \put(88,5){\makebox(0,0){\phantom{.}$\ov X$.}} \put(20,5){\makebox(0,0){$\ov X'$}} \put(54,10){\makebox(0,0){\scriptsize$\ov\varphi$}} \put(59,50){\makebox(0,0){\scriptsize$\widehat\varphi$}} \put(13,25){\makebox(0,0){\scriptsize$\psi '$}} \put(96,25){\makebox(0,0){\scriptsize$\psi$}} \end{picture} }} \] Similar to case (A), $\pair{\ov S,\Diff{\ov S}{\ov\Delta}}$ and $\pair{\ov S',\Diff{\ov S'}{\ov\Delta'}}$ are bounded. Hence we may assume that $\deg \ov \phi$ is bounded, where $\ov \phi=\ov\varphi|_{\ov S'} \colon\ov S'\to \ov S$. It remains to show that the ramification index $r$ of $\ov\varphi$ at the generic point of $S'$ is bounded. Clearly, $r$ is equal to the ramification index of $\widehat\varphi$ at the generic point of $S'$. Similar to \eref{eq-first-def-D} write \setcounter{equation}{\value{subsubsection}} \begin{equation} \label{eq-a-prime} \begin{array}{l} g'^*(K_{X'}+D')=K_{\widehat X'}+\Delta'+a'S'. \end{array} \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent Then \setcounter{equation}{\value{subsubsection}} \begin{equation} \label{eq-a-final} 1-a'=r(1-a)\ge r(1+\discr{X,D})> r\varepsilon \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent (see \cite[Sect.~2]{Sh} or \cite[proof of 20.3]{Ut}). We claim that $(S', \Diff{S'}{\Delta'})$ belongs to a finite number of algebraic families. Note that we cannot apply \ref{conjecture-boundedness-log-Fano} directly because $-(K_{S'}+\Diff{S'}{\Delta'})$ is not necessarily nef. As in case (A), take an $n$-complement $K_{\widehat X}+\Delta+S+\widehat\Upsilon$ with $n\le \max{\mathbb N}N_{d-1}({\MMM}_{\mt{\mathbf{sm}}})$. Similar to \eref{eq-var-preimage} define $\widehat\Upsilon'$ and $\widehat D^{\lambda\prime}$ (see \eref{plt-and-anti-ample}): \begin{gather*} K_{\widehat X'}+\Delta'+S'+\widehat\Upsilon'= \widehat\varphi^*(K_{\widehat X}+\Delta+S+\widehat\Upsilon)\\ K_{\widehat X'}+\Delta'+S'+\widehat D^{\lambda\prime}= \widehat\varphi^*(K_{\widehat X}+\Delta+S+\widehat D^{\lambda})\ . \end{gather*} Then $K_{S'}+\Diff{S'}{\Delta'+\widehat\Upsilon'}\equiv 0$ and by \eref{plt-and-anti-ample}, $K_{S'}+\Diff{S'}{\Delta'+\widehat D^{\lambda\prime}}$ is anti-ample. Hence $-\pair{K_{S'}+\Diff{S'}{\Delta'+\alpha \widehat D^{\lambda\prime}+ (1-\alpha)\widehat\Upsilon'}}$ is ample for any $\alpha>0$. Note that \[ \totaldiscr{S', \Diff{S'}{\Delta'+\widehat\Upsilon'}}\ge -1+1/n. \] Thus we can apply Conjecture~\ref{conjecture-boundedness-log-Fano} to $\pair{S', \Diff{S'}{\Delta'+\alpha \widehat D^{\lambda\prime}+ (1-\alpha)\widehat\Upsilon'}}$ for small positive $\alpha$. We obtain that $S'$ is bounded. Now, as in \eref{para-A} we see that so is $(S', \Diff{S'}{\Delta'})$. Take a sufficiently general curve $\ell$ in a general fiber of $g'|_{S'}\colon S'\to g'(S')$. From \eref{eq-a-prime} we have \setcounter{equation}{\value{subsubsection}} \begin{equation} \label{eq-diff-ell-de} -(K_{S'}+\Diff{S'}{\Delta'})\cdot\ell=-(1-a')S'\cdot\ell. \end{equation} \setcounter{subsubsection}{\value{equation}}\noindent Clearly, $-(K_{S'}+\Diff{S'}{\Delta'})\cdot \ell$ depends only on $(S',\Diff{S'}{\Delta'})$, but not on $\widehat X'$. So we may assume that $-(K_{S'}+\Diff{S'}{\Delta'})\cdot \ell$ is fixed. Recall that the coefficients of $\Diff{S'}{\Delta'}$ are standard (see~\cite[3.9]{Sh}, \cite[16.6]{Ut}), so we can write $\Diff{S'}{\Delta'}=\sum_{i=1}^{s} (1-1/m_i)\Xi_i'$, where $m_i\in{\mathbb N}$, $s\ge 0$. Put $m':=\mt{l.c.m.}(m_1,\dots , m_{s})$. By~\cite[3.9]{Sh} both $m'S'$ and $m'(K_{S'}+\Diff{S'}{\Delta'})$ are Cartier along $\ell$. So \eref{eq-diff-ell-de} can be rewritten as $N=(1-a')k$, where $N=-m'\ell \cdot (K_{S'}+\Diff{S'}{\Delta'})$ is a fixed natural number and $k=-m'(\ell\cdot S')$ is also a natural number. Thus by \eref{eq-a-final} $N=(1-a')k>kr\varepsilon\ge r\varepsilon$. This gives us that $r<N/\varepsilon$ is bounded and proves the theorem. \end{proof} Now, we present a few corollaries of Theorem~\ref{result-Fano-1} and Theorem~\ref{theorem-cover}. We concentrate our attention on the three-dimensional case (then all required conjectures are known to be true, see \cite{Sh1} and \cite{A}). Recall that in this case a non-exceptional contraction such as in Theorem~\ref{result-Fano-1} has a $1$-, $2$-, $3$-, $4$-, or $6$-complement. Put $X=Z$ and $D=0$ in Theorem~\ref{theorem-cover}. We obtain \begin{corollary} \label{singularity-pi-1} Let $(Z\ni o,D)$ be a three-dimensional exceptional klt singularity such that $\totaldiscr{Z,D}>-1+\varepsilon$ and $D\in{\MMM}_{\mt{\mathbf{sm}}}$. Then \para \label{singularity-pi-1-i} the order of the algebraic fundamental group $\pi_1^{\mt{alg}}(Z\setminus \operatorname{Sing}(Z))$ is bounded by a constant $\operatorname{Const}(\varepsilon)$; \para the index of $K_Z+D$ is bounded by a constant $\operatorname{Const}(\varepsilon)$; \para for any exceptional divisor $E$ over $Z$ we have either $\dis{E}>0$ or $\dis{E}\in \mathfrak{M}(\varepsilon)$, where $\mathfrak{M}(\varepsilon)\subset (-1,0]$ is a subset which depends only on $\varepsilon$. \end{corollary} Note that without the assumption of exceptionality, $\pi_1^{\mt{alg}}(Z\setminus \operatorname{Sing}(Z))$ is not bounded, however it is finite \cite[Th. 3.6]{SW}. The assertion of \eref{singularity-pi-1-i} also holds for the topological fundamental group $\pi_1$ under the assumption that $\pi_1(Z\setminus \operatorname{Sing}(Z))$ is finite. M.~Reid has informed us that the finiteness of $\pi_1(Z\setminus \operatorname{Sing}(Z))$ for three-dimensional log terminal singularities was proved by N.~Shepherd-Barron (unpublished). \begin{corollary}[\cite{Pr2}] Fix $\varepsilon>0$. Let $(X/Z\ni o,D)$ be a three-dimensional log variety of local type such that $K+D$ is ${\mathbb Q}$-Cartier and $-(K+D)$ is $f$-nef and $f$-big. Assume that $f$ is exceptional and $\totaldiscr{X}>-1+\varepsilon$. \para If $\dim(Z)\ge 2$, then $\pi_1^{\mt{alg}}(Z\setminus \operatorname{Sing}(Z))$ is bounded by a constant $\operatorname{Const}(\varepsilon)$. \para If $\dim(Z)=1$, then the multiplicity of the central fiber $f^{-1}(o)$ is bounded by a constant $\operatorname{Const}(\varepsilon)$. \end{corollary} \begin{corollary}[\cite{Sh1}] \label{cor-small} Fix $\varepsilon>0$. Let $(X/Z\ni o,D)$ be a three-dimensional exceptional log pair such that the structure morphism $f\colon X\to Z\ni o$ is a small contraction (i.e. $f$ contracts only a finite number of curves), $\totaldiscr{X,D}>-1+\varepsilon$, $D\in {\MMM}_{\mt{\mathbf{m}}}^3$ and $-(K+D)$ is nef and big over $Z$. Then \para \label{corollary-rho-i} $\rho(X/Z)$ and $\rho^{\mt{an}}(X/Z)$ are bounded by $\operatorname{Const}(\varepsilon)$; \para \label{corollary-rho-ii} the number of components of the central fiber $f^{-1}(o)$ is bounded by $\operatorname{Const}'(\varepsilon)$. \end{corollary} \begin{proof} Notation as in the proof of Theorem~\ref{result-Fano-1}. Take some $n$-complement $K_{\widehat X}+\Delta+S+\Upsilon$ with $n\le N_2$. Run $(K_{\widehat X}+\Delta+\Upsilon)$-MMP. For each extremal ray $R$ we have $R\cdot S>0$. Hence $S$ is not contracted. At the end we get a model $p\colon \widetilde X\to Z$ with $p$-nef $K_{\widetilde X}+\widetilde \Delta+\widetilde\Upsilon\equiv -\widetilde S$. Since $K_{\widehat X}+\Delta+S+\Upsilon$ is numerically trivial, for any divisor $E$ of ${\EuScript{K}}(X)$, we have $\dis{E,\Delta+S+\Upsilon}=\dis{E,\widetilde \Delta+\widetilde S+\widetilde\Upsilon}$ (cf. \cite[3.10]{Ko}). This shows that $K_{\widetilde X}+\widetilde \Delta+\widetilde S+\widetilde\Upsilon$ is plt. Further, by Lemma~\ref{dim-q(S)>0}, $p(\widetilde S)=o$. Since $-\widetilde S$ is nef over $Z$, we see that $\widetilde S$ coincides with the fiber over $o$. By construction, $n\pair{K_{\widetilde S}+\Diff{\widetilde S}{\widetilde \Delta+\widetilde\Upsilon}}\sim 0$ and $K_{\widetilde S}+\Diff{\widetilde S}{\widetilde \Delta+\widetilde\Upsilon}$ is klt (by the Adjunction \cite[17.6]{Ut}). Therefore \[ \totaldiscr{\widetilde S, \Diff{\widetilde S}{\widetilde \Delta+\widetilde\Upsilon}}\ge -1+1/n, \quad n\le N_2. \] Obviously, $\Diff{\widetilde S}{\widetilde \Delta+\widetilde\Upsilon}\ne 0$. By \cite{A}, $\widetilde S$ belongs to a finite number of algebraic families. Thus we may assume that $\rho(\widetilde S)$ is bounded by $\operatorname{Const}(\varepsilon)$. \par Now, consider the exact sequence \[ 0\longrightarrow{\mathbb Z}\longrightarrow {\mathcal O}^{\mt{an}}_{\widetilde X} \stackrel{\exp}{\longrightarrow} {\mathcal O}^{\mt{an}*}_{\widetilde X}\longrightarrow 0. \] By Kawamata-Viehweg vanishing $R^ip_*{\mathcal O}^{\mt{an}}_{\widetilde X}=0$ for $i>0$. Hence, $\operatorname{Pic}^{\mt{an}}\left({\widetilde X}\right)=H^2\left({\widetilde X},{\mathbb Z}\right)$. Similarly, $H^2\left(\widetilde S,{\mathbb Z}\right)=\operatorname{Pic}\left(\widetilde S\right)$. Since $\widetilde S=p^{-1}(o)$ is a topological retract of ${\widetilde X}$, $H^2({\widetilde X},{\mathbb Z})=H^2(\widetilde S,{\mathbb Z})$. Hence $\rho^{\mt{an}}(\widetilde X)$ is bounded, and so is $\rho^{\mt{an}}(\widehat X)$ (because $\widehat X\dasharrow \widetilde X$ is a sequence of flips). This shows \eref{corollary-rho-i}. To prove \eref{corollary-rho-ii} one can use that $\rho^{\mt{an}}(X/Z)$ is equal to the number of components of $f^{-1}(o)$ (by the same arguments as above, see \cite[(1.3)]{Mo}). \end{proof} \begin{corollary} Fix $\varepsilon>0$. Let $(Z\ni o,D)$ be a three-dimensional exceptional klt singularity such that $\totaldiscr{Z,D}>-1+\varepsilon$ and $D\in {\MMM}_{\mt{\mathbf{m}}}^3$. Then for its ${\mathbb Q}$-factorialization $f\colon X\to Z$ one has \para $\rho(X/Z)$ and $\rho^{\mt{an}}(X/Z)$ are bounded by $\operatorname{Const}(\varepsilon)$; \para the number of components of $f^{-1}(o)$ is bounded by $\operatorname{Const}'(\varepsilon)$.
\end{corollary} Note that for non-exceptional flipping contractions the number of components of the fiber is not bounded even in the terminal case \cite[13.7]{KoM}. We present an example of a flopping contraction as in Corollary~\ref{cor-small}: \begin{example} Let $(Z\ni o)$ be a hypersurface singularity given by the equation $x_1^3+x_2^3+x_3^5+x_4^5=0$ in ${\mathbb C}^4$. By \cite{IP} $(Z\ni o)$ is exceptional (and canonical). It is easy to see also that it is not ${\mathbb Q}$-factorial. Let $f\colon X\to Z$ be a ${\mathbb Q}$-factorialization \cite[6.11.1]{Ut}. By Lemma~\ref{lemma-crepant-ex}, $(X/Z\ni o,0)$ is exceptional. Hence it satisfies the conditions of Corollary~\ref{cor-small} (with $D=0$). \end{example} Many examples of exceptional singularities can be found in \cite{MP} and \cite{IP}. Finally, we propose an example of an exceptional Fano contraction $f\colon X\to Z$ with $\dim(X)>\dim(Z)$. \begin{example}[{\cite[Sect. 7]{Pr3}}] Starting with ${\mathbb P}^1\times{\mathbb C}^1$, blow up points on a fiber of the projection ${\mathbb P}^1\times{\mathbb C}^1\to {\mathbb C}^1$ so that we obtain a fibration $f^{\min}\colon X^{\min}\to {\mathbb C}^1$ with the central fiber having the following dual graph \[ \begin{array}{ccccccccccc} &&&&\stackrel{-3}{\circ}&&&&&&\\ &&&&|&&&&&&\\ \stackrel{-2}{\circ}&\text{---}&\stackrel{-2}{\circ}&\text{---}&\stackrel{-b}{\circ} &\text{---}&\stackrel{-2}{\circ}&\text{---}&\stackrel{-1}{\bullet}&\text{---}& \underbrace{\stackrel{-3}{\circ} \text{---}\stackrel{-2}{\circ}\text{---} \cdots\text{---}\stackrel{-2}{\circ}}_{b-2}\\ \end{array} \] where $b\ge 2$. Now, contract the curves corresponding to the white vertices. We obtain an extremal contraction $f\colon X\to {\mathbb C}^1$ with two log terminal points. The canonical divisor $K_X$ is $3$-complementary, but not $1$- or $2$-complementary \cite[Sect. 7]{Pr3}. Hence $f$ is exceptional. \end{example} \section{Global case} In this section we modify Theorem~\ref{result-Fano-1} to the global case. In contrast with the local case, here we also have to assume the existence of a boundary with rather ``bad'' singularities. Theorem~\ref{result-Fano-2} is a special case of Conjecture~\ref{conjecture-inductive-complements}. \begin{theorem}[Global case] \label{result-Fano-2} Let $(X,D)$ be a $d$-dimensional log variety of global type such that \para $K+D$ is klt; \para $-(K+D)$ is nef and big; \para $D\in{\Phi}$, where ${\Phi}={\MMM}_{\mt{\mathbf{m}}}^d$ or ${\MMM}_{\mt{\mathbf{sm}}}$. \par Assume that there is a boundary $D^\flat$ such that \para $K+D+D^\flat$ is not klt; \para $-(K+D+D^\flat)$ is nef and big. \par Assume LogMMP in dimension $d$. Then there exists a non-klt $n$-complement of $K+D$ for $n\in{\mathbb N}N_{d-1}({\Phi})$. \end{theorem} \begin{proof} First, replace $X$ with its ${\mathbb Q}$-factorialization. Then as in Lemma~\ref{weak-log-Fano} we take $D^\mho\ge 0$ such that $-(K+D+D^\flat+D^\mho)$ is ample (but $K+D+D^\flat+D^\mho$ is not necessarily lc). Next we put $D^\triangledown=t(D^\flat+D^\mho)$, $0<t\le 1$ so that \para $K+D+D^\triangledown$ is lc but not klt (i.e. $t$ is the log canonical threshold $c(X,D,D^\flat+D^{\mho})$). \par Now, the proof is similar to the proof of Theorem~\ref{result-Fano-1}. \end{proof} \begin{corollary}[cf. {\cite[2.8]{Sh1}}] Let $(X,D)$ be a $d$-dimensional log variety of global type such that \para $K+D$ is klt; \para $-(K+D)$ is nef and big; \para $D\in{\Phi}$, where ${\Phi}={\MMM}_{\mt{\mathbf{m}}}^d$ or ${\MMM}_{\mt{\mathbf{sm}}}$. \para $(-(K+D))^d>d^d$. \par Assume LogMMP in dimension $d$. Then there exists a non-klt $n$-complement of $K+D$ for $n\in{\mathbb N}N_{d-1}({\Phi})$. \end{corollary} \begin{proof} A boundary $D^\flat$ such as in Theorem~\ref{result-Fano-2} exists by Riemann-Roch (see e.~g. \cite[6.7.1]{Ko}). \end{proof} Many examples of exceptional log del Pezzo surfaces can be found in \cite{Sh1}, \cite{Abe}, \cite{KeM} and \cite{Pr3}. \section{Appendix} In this section we give two very useful properties of complements. We will use Definition \eref{def-coplements-n} for the case when $D$ is a subboundary, i.e. a ${\mathbb Q}$-divisor (not necessarily effective) with coefficients $d_i\le 1$. \begin{proposition1}[{\cite[2.13]{Sh1}}] \label{bir-prop} Fix $n\in{\mathbb N}$. Let $f\colon Y\to X$ be a birational contraction and let $D$ be a subboundary on $Y$ such that \para \label{prop-i} $K_Y+D$ is nef over $X$; \para \label{prop-ii} $f(D)=\sum d_if(D_i)$ is a boundary whose coefficients satisfy the inequality \[ \down{(n+1)d_i}\ge nd_i. \] Assume that $K_X+f(D)$ is $n$-complementary. Then $K_Y+D$ is also $n$-complementary. \end{proposition1} \begin{proof} Let us consider the crepant pull-back $K_Y+D'=f^*(K_X+f(D)^+)$, $f_*D'=f(D)^+$. Write $D'=S'+B'$, where $S'$ is reduced, $S'$, $B'$ have no common components, and $\down{B'}\le 0$. We claim that $K_Y+D'$ is an $n$-complement of $K_Y+D$. The only thing we need to check is that $nB'\ge \down{(n+1)\fr{D}}$. From \eref{prop-ii} we have $f(D)^+\ge f(D)$. This gives us that $D'\ge D$ (because $D-D'$ is $f$-nef; see \cite[1.1]{Sh}). Finally, since $nD'$ is an integral divisor, we have \[ nD'\ge nS'+\down{(n+1)B'}\ge n\down{D}+\down{(n+1)\fr{D}}. \] \end{proof} The following is a refinement of \cite[Proof of 5.6]{Sh} and \cite[19.6]{Ut}. \begin{proposition1}[{\cite{Pr2}}] \label{prodolj} Fix $n\in{\mathbb N}$. Let $(X/Z\ni o,D=S+B)$ be a log variety. Set $S:=\down{D}$ and $B:=\fr{D}$. Assume that \para $K_X+D$ is plt; \para $-(K_X+D)$ is nef and big over $Z$; \para $S\ne 0$ near $f^{-1}(o)$; \para \label{prop-2-iv} the coefficients of $D=\sum d_iD_i$ satisfy the inequality \[ \down{(n+1)d_i}\ge nd_i. \] Further, assume that near $f^{-1}(o)\cap S$ there exists an $n$-complement $K_S+\Diff SB^+$ of $K_S+\Diff SB$. Then near $f^{-1}(o)$ there exists an $n$-complement $K_X+S+B^+$ of $K_X+S+B$ such that $\Diff SB^+=\Diff S{B^+}$. \end{proposition1} \begin{proof} Let $g\colon Y\to X$ be a log resolution. Write $K_Y+S_Y+A=g^*(K_X+S+B)$, where $S_Y$ is the proper transform of $S$ on $Y$ and $\down{A}\le 0$. By the Inversion of Adjunction \cite[17.6]{Ut}, $S$ is normal and $K_S+\Diff SB$ is plt. In particular, $g_S\colon S_Y\to S$ is a birational contraction. Therefore we have \[ K_{S_Y}+\Diff{S_Y}{A}=g_S^*(K_S+\Diff SB). \] Note that $\Diff{S_Y}{A}=A|_{S_Y}$, because $Y$ is smooth. It is easy to show (see \cite[4.7]{Pr3}) that the coefficients of $\Diff SB$ satisfy the inequality \eref{prop-2-iv}. So we can apply Proposition~\ref{bir-prop} to $g_S$. We get an $n$-complement $K_{S_Y}+\Diff{S_Y}A^+$ of $K_{S_Y}+\Diff{S_Y}A$. In particular, by \eref{def-coplements-n}, there exists \[ \Theta\in \left|-nK_{S_Y}-\down{(n+1) \Diff{S_Y}A}\right| \] such that \[ n\Diff{S_Y}A^+= \down{(n+1)\Diff{S_Y}A}+\Theta. \] By Kawamata-Viehweg Vanishing, \begin{multline*} R^1h_*({\mathcal O}_{Y}(-nK_Y-(n+1)S_Y-\down{(n+1)A}))=\\ R^1h_*({\mathcal O}_{Y}(K_Y+\up{-(n+1)(K_Y+S_Y+A)}))=0, \end{multline*} where $h$ denotes the composition $f\circ g\colon Y\to Z$. From the exact sequence \begin{multline*} 0\longrightarrow{\mathcal O}_{Y}(-nK_Y-(n+1)S_Y-\down{(n+1)A})\\ \longrightarrow{\mathcal O}_{Y}(-nK_Y-nS_Y-\down{(n+1)A})\\ \longrightarrow{\mathcal O}_{S_Y}(-nK_{S_Y}-\down{(n+1)A}|_{S_Y}) \longrightarrow 0 \end{multline*} we get surjectivity of the restriction map \begin{multline*} H^0(Y,{\mathcal O}_{Y}(-nK_Y-nS_Y-\down{(n+1)A})) \longrightarrow\\ H^0(S_Y,{\mathcal O}_{S_Y}(-nK_{S_Y}-\down{(n+1)A}|_{S_Y})). \end{multline*} Therefore there exists a divisor \[ \Xi\in\left|-nK_Y-nS_Y-\down{(n+1)A}\right| \] such that $\Xi|_{S_Y}=\Theta$. Set \[ A^+:=\frac1{n}(\down{(n+1)A}+\Xi). \] Then $n(K_Y+S_Y+A^+)\sim 0$ and $(K_Y+S_Y+A^+)|_{S_Y}= K_{S_Y}+\Diff{S_Y}A^+$. Note that we cannot apply the Inversion of Adjunction on $Y$ because $A^+$ can have negative coefficients. So we put $B^+:=g_*A^+$. Again we have $n(K_X+S+B^+)\sim 0$ and $(K_X+S+B^+)|_S=K_S+\Diff SB^+$. We have to show only that $K_X+S+B^+$ is lc. Assume that $K_X+S+B^+$ is not lc. Then $K_X+S+B+\alpha(B^+-B)$ is also not lc for some $\alpha<1$. It is clear that $-(K_X+S+B+\alpha(B^+-B))$ is nef and big over $Z$. By the Inversion of Adjunction \cite[17.6]{Ut}, $K_X+S+B+\alpha(B^+-B)$ is plt near $S\cap f^{-1}(o)$. Hence $LCS(X,S+B+\alpha(B^+-B))=S$ near $S\cap f^{-1}(o)$. On the other hand, by the Connectedness Lemma \cite[17.4]{Ut}, $LCS(X,S+B+\alpha(B^+-B))$ is connected near $f^{-1}(o)$. Thus $K_X+S+B+\alpha(B^+-B)$ is plt. This contradiction proves the proposition. \end{proof} \end{document}
\betaegin{document} \muaketitle \betaegin{abstract} We construct a central Lie group extension for the Lie group of compactly supported sections of a Lie group bundle over a sigma-compact base manifold. This generalises a result of the paper ``Central extensions of groups of sections'' by Neeb and Wockel, where the base manifold is assumed to be compact. In the second part of the paper, we show that this extension is universal and obtain a generalisation of a corresponding result in the paper ''Universal central extensions of gauge algebras and groups'' by Janssens and Wockel, where again (in the case of Lie group extensions) the base manifold is assumed compact. \etand{abstract} {\frak{o}otnotesize {\betaf Jan Milan Eyni}, Universit\"{a}t Paderborn, Institut f\"{u}r Mathematik, Warburger Str.\ 100, 33098 Paderborn, Germany; \,{\thetat [email protected]}\\[2mm] \sigmaection*{Introduction and notation} Central extensions play an important role in the theory of infinite-dimensional Lie groups. For example, every Banach-Lie algebra $\frak{g}$ is a central extension $\frak{z}(\frak{g}) \hookrightarrow \frak{g} \rhoightarrow \alphad(\frak{g})$, where the centre $\frak{z}(\frak{g})$ and $\alphad(\frak{g})$ are integrable to a Banach-Lie group; integrability of $\frak{g}$ corresponds to the existence of a corresponding central Lie group extensions (see \chiircte{van-Est:1964}). Inspired by the seminal work by van Est and Korthagen, Neeb elaborated the general theory of central extensions of Lie groups that are modelled over locally convex spaces in 2002 (see \chiircte{Neeb:2002}). In particular, Neeb showed that certain central extensions of Lie algebras can be integrated to central extensions of Lie groups: If the central extension of a locally convex Lie algebra $V\hookrightarrow \hat{\frak{g}}\rhoightarrow \frak{g}$ (with a sequentially complete locally convex space $V$) is represented by a continuous Lie algebra cocycle $\omega\chiolon \frak{g}^2 \rhoightarrow V$ and $G$ is a Lie group with Lie algebra $\frak{g}$, one considers the so-called period homomorphism \betaegin{align*} \pier_\omega \chiolon \pi_2(G) \rhoightarrow V,~ [\sigma]\mus \iotant_{\sigma} \omega^l \etand{align*} where $\omega^l \iotan \gammao^2(G,V)$ is the canonical left invariant $2$-form on $G$ with $\omega^l_1(v,w)=\omega(v,w)$ and $\sigma$ is a smooth representative of the homotopy class $[\sigma]$. One writes $\gammap_\omega$ for the image of the period homomorphism and calls it the period group of $\omega$. The important result from \chiircte{Neeb:2002} is that if $\gammap_\omega$ is a discrete subgroup of $V$ and the adjoined action of $\frak{g}$ on $\hat{\frak{g}}$ integrates to a smooth action of $G$ on $\hat{\frak{g}}$, then $V\hookrightarrow \hat{\frak{g}}\rhoightarrow \frak{g}$ integrates to a central extension of Lie groups (see \chiircte[Proposition 7.6 and Theorem 7.12]{Neeb:2002}). In the following, a Lie group is always assumed to be modelled over a Hausdorff locally convex space. Given two central Lie group extensions $Z_1\hookrightarrow \hat{G}_1 \xira{q_1} G$ and $Z_2\hookrightarrow \hat{G}_2 \xira{q_2} G$, we call a Lie group homomorphism $\pih \chiolon \hat{G}_1 \rhoightarrow \hat{G}_2$ a morphism of Lie group extensions if $q_1 = q_2 \chiirc \pih$. In an analogous way one defines a morphism of Lie algebra extensions. In this way one obtains categories of Lie group extensions and Lie algebra extensions, respectively, and an object in these categories is called universal if it is the initial one. 
In 2002 Neeb showed that under certain conditions a central extension of a Lie group is universal in the category of Lie group extensions if its corresponding Lie algebra extension is universal in the category of Lie algebra extensions (see \chiircte[Recognition Theorem (Theorem 4.13)]{Neeb:2002a}). The natural next step was to apply the general theory to different types of Lie groups that are modelled over locally convex spaces. Important infinite-dimensional Lie groups are current groups. These are groups of the form $C^\iotanfty(M,G)$ where $M$ is a compact finite-dimensional manifold and $G$ is a Lie group. In 2003 Maier and Neeb constructed a universal central extensions for current groups (see \chiircte{Maier:2003}) by reducing the problem to the case of loop groups $C^\iotanfty(\muathbb{S}^1,G)$. The compactness of $M$ is a strong condition but it is not possible to equip $C^\iotanfty(M,G)$ with a reasonable Lie group structure if $M$ is non-compact. Although one has a natural Lie group structure on the group $C^\iotanfty_c(M,G)$ of compactly supported smooth functions from a $\sigmaigma$-compact manifold $M$ to a Lie group $G$. In this situation, $C^\iotanfty_c(M,G)$ is the inductive limit of the Lie groups $C_K^\iotanfty(M,G):= \sigmaet{f\iotan C^\iotanfty(M,G): \sigmaupp(f)\sigmaubs K}$ where $K$ runs through a compact exhaustion of $M$. The Lie algebra of $C^\iotanfty_c(M,G)$ is given by $C^\iotanfty_c(M,\frak{g})$. In this context, $C^\iotanfty_c(M,\frak{g})$ is equipped with the canonical direct limit topology in the category of locally convex spaces. In 2004, Neeb constructed a universal central extension for $C^\iotanfty_c(M,G)$ in important cases (see \chiircte{Neeb:2004}). It is possible to turn the group $\gammag(M,\muathcal{G})$ of sections of a Lie group bundle $\muathcal{G}$ over a compact base manifold $M$ into a Lie group by using the construction of the Lie group structure of the gauge group from \chiircte{Wockel:2007} (see \chiircte[Appendix A]{Neeb:2009}). The Lie algebra of $\gammag(M,\muathcal{G})$ is the Lie algebra $\gammag(M,\frak{G})$ of sections of the Lie algebra bundle $\frak{G}$ that corresponds to $\muathcal{G}$. Hence the question arises if it is possible to construct central extensions for these groups of sections. This is indeed the case and was done in 2009 by Neeb and Wockel in \chiircte{Neeb:2009}. As mentioned above, one way to show the universality of a Lie group extension is to show the universality of the corresponding Lie algebra extension and then use the Recognition Theorem from \chiircte{Neeb:2002a}. In the resent paper \chiircte{Janssens:2013} from 2013, Janssens and Wockel constructed a universal central extension of the Lie algebra $\gammag_c(M,\frak{G})$ of compactly supported sections in a Lie algebra bundle over a $\sigmaigma$-compact manifold. They also applied this result to the central extension constructed in \chiircte{Neeb:2009}: By assuming the base manifold $M$ to be compact they obtained a universal Lie algebra extension that corresponds to the Lie group extension described in \chiircte{Neeb:2009}; they were able to show the universality of this Lie group extension. In 2013, Sch{\"u}tt generalised the construction of the Lie group structure from \chiircte{Wockel:2007} by endowing the gauge group of a principal bundle over a not necessary compact base manifold $M$ with a Lie group structure, under mild hypotheses (see \chiircte{Schuett:2013}). 
It is clear that we can use an analogous construction to endow the group of compactly supported sections of a Lie group bundle over a $\sigmaigma$-compact manifold with a Lie group structure. Similarly, Neeb and Wockel already generalised the construction of the Lie group structure on a gauge group with compact base manifold from \chiircte{Wockel:2007} to the case of section groups over compact base manifolds. The principal aim of this paper is to construct a central extension of the Lie group of compactly supported smooth sections on a $\sigmaigma$-compact manifold such that its corresponding Lie algebra extension is represented by the Lie algebra cocycle described in \chiircte{Janssens:2013}. This generalises the corresponding result from \chiircte{Neeb:2009} to the case where the base manifold is non-compact. The proof, which combines arguments from \chiircte{Neeb:2004} and \chiircte{Neeb:2009} with new ideas, is discussed in Section \rhoef{GruChLieEx} and Section \rhoef{Gru1234}. The main result is Theorem \rhoef{GruMainIII} where we show that the canonical cocycle \betaegin{align*} \omega \chiolon \gammag_c(M,\frak{G})^2 \rhoightarrow \gammao^1_c(M,\muathbb{V})/d\gammag_c(M,\frak{G}),~ (\gamma,\eta)\mus [\kappa(\gamma,\eta)] \etand{align*} can be integrated to a cocycle of Lie groups. This result generalises \chiircte[Theorem 4.24]{Neeb:2009} to the case of a non-compact base manifold. The first step is to show that the period group of $\omega$ is a discrete subgroup of $\gammao^1_c(M,\muathbb{V})/d\gammag_c(M,\frak{G})$. This will be discussed in Theorem \rhoef{GruMainI} and is the complementary result to \chiircte[Theorem 4.14]{Neeb:2009}. Then we will integrate the adjoint action of $\gammag_c(M,\frak{G})$ on $\gammag_c(M,\frak{G}) \thetaimes_{\omega} \omegal{\mathcal{O}mega}^1(M,\muathbb{V})$ to a smooth action of $\gammag_c(M,\muathcal{G})$ on $\gammag_c(M,\frak{G}) \thetaimes_{\omega_M} \omegal{\mathcal{O}mega}^1(M,\muathbb{V})$ this is the complementary result to the statements in \chiircte[Section 4.2 (Part about general Lie algebra bundles]{Neeb:2009}. Considering a compact manifold $M$ in our consideration in Section \rhoef{Gru1234} yields an alternative argumentation for the result in \chiircte[Section 4.2 (Part about general Lie algebra bundles]{Neeb:2009}\frak{o}otnote{Our arguments about the discreteness of the image of the period map (Section \rhoef{GruChLieEx}) dose not yield an alternative argumentation in the compact case.}. Especially we do not have to assume the typical fiber $G$ of the Lie group bundle $\muathcal{G}$ to be $1$-connected.\frak{o}otnote{An earlier version of this paper, contained a more complicated argumentation that required the group $G$ to be semisimple.} In the second part of the paper (Section \rhoef{GruZweiterTeil}) we turn to the question of universality. Once constructed, the central extension it is not hard to show its universality because mainly we can use the arguments from the compact case (\chiircte{Janssens:2013}). In the following we fix our notation: \betaegin{compactenum} \iotatem If $H \hookrightarrow P \xira{q} M$ is a principal bundle with right action $R \chiolon P \thetaimes H \rhoightarrow P$ we write $VP:=\kappaer(Tq)$ for the vertical bundle of $TP$ and $V_pP:=T_pP\chiapp VP$ for the vertical space in $p \iotan P$. Analogously if $HP \sigmaubs TP$ is a principal connection ($HP\omegapluslus VP = TP$ and $TR_hH_pP = H_{ph}P$), we write $H_pP:=T_pP \chiapp HP$ for the horizontal space in $p \iotan P$. 
\iotatem \lambdaeftarrowbel{Gruaaaa}Let $H \hookrightarrow P \xira{q} M$ be a finite-dimensional principal bundle over a connected $\sigma$-compact manifold $M$ with right action $R \chiolon P \thetaimes H \rhoightarrow P$ and a principal connection $HP \sigmaubs TP$. Given a finite-dimensional linear representation $\rho \chiolon H \rhoightarrow \muathcal{G}L(V)$ and $k\iotan \muathbb{N}_0$, we write \betaegin{align*} \mathcal{O}mega^k(P,V)_\rho = \sigmaet{\thetaheta \iotan \mathcal{O}mega^k(P,V): (\frak{o}rall g\iotan H)~\rho(g) \chiirc R_g^\alphast\thetaheta = \theta} \etand{align*} for the space of $H$-invariant $k$-forms on $P$ and $\mathcal{O}mega^k(P,V)_\rho^\thetaext{hor}$ for the space of $H$-invariant $k$-forms that are horizontal with respect to $HP$ ($\etaxists i:~v_i \iotan V_pP\muathbb{R}ightarrow \theta(v_1,\deltaots,v_k)=0$) (cf. \chiircte[Definition 3.3]{Baum:2014}). Moreover, given a compact set $K \sigmaubs M$ we define $\mathcal{O}mega^k_K(P,V)_\rho := \sigmaet{\theta \iotan \mathcal{O}mega^k_K(P,V)_\rho: \sigmaupp (\theta) \sigmaubs q^{-1}(K)}$ and write $\mathcal{O}mega^k_K(P,V)_\rho^\thetaext{hor}$ for the analogous subspace in the horizontal case. We emphasise that these forms are in general not compactly supported in $P$ its self. As mentioned in the introduction we equip these spaces with the natural Fr{\'e}chet-topology and write $\mathcal{O}mega^k_c(P,V)_\rho$ respectively $\mathcal{O}mega^k_c(P,V)_\rho^\thetaext{hor}$ for the locally convex inductive limit of the spaces $\mathcal{O}mega^k_K(P,V)_\rho$ respectively $\mathcal{O}mega^k_K(P,V)_\rho^\thetaext{hor}$. This convention also clarifies what we mean by $C^\iotanfty(P,V)_\rho$ respectively $C_c^\iotanfty(P,V)_\rho$. \iotatem In Lemma \rhoef{GruRealisation}, we recall that if $\muathbb{V}$ is the vector bundle associated to a principal bundle as in (\rhoef{Gruaaaa}), then the canonical isomorphism of chain complexes $\mathcal{O}mega_c^{\scriptscriptstyle \betaullet}(P,V)_{\rho}^\thetaext{hor} \chiolonng \mathcal{O}mega^{\scriptscriptstyle \betaullet}_c(M,\muathbb{V})$ (see e.g. \chiircte[Theorem 3.5]{Baum:2014}) induces isomorphisms of locally convex spaces $\mathcal{O}mega_c^k(P,V)_{\rho}^\thetaext{hor} \chiolonng \mathcal{O}mega^k_c(M,\muathbb{V})$. \item Given a finite-dimensional vector bundle $V\hookrightarrow \muathbb{V}\xira{q}M$ over a $\sigma$-compact manifold $M$, a compact set $K\sigmaubs M$ and $k \iotan \muathbb{N}_0$ we write $\gammao^k_K(M,\muathbb{V})$ for the space of $k$-forms on $M$ with values in the vector bundle $\muathbb{V}$ and support in $K$. Using the identification $\gammao^k(M,\muathbb{V}) \chiolonng \gammag(M,\gammal^k T^\alphast M \omegatimesmes V)$ we give these spaces the locally convex vector topology described in \chiircte{Gloeckner} and equip $\gammao_c^k(M,\muathbb{V})$ with the canonical inductive limit topology. \item Given a manifold $M$, we write $C^\iotanfty_p(\muathbb{R},M)$ for the set of proper smooth maps from $\muathbb{R}$ to $M$. However, if $F$ is the total space of a fibre bundle $E\hookrightarrow F\xira{q}M$, then we define $C^\iotanfty_p(\muathbb{R},F):= \sigmaet{f\iotan C^\iotanfty(\muathbb{R},F): q\chiirc f\iotan C^\iotanfty_p(\muathbb{R},M)}$. \etand{compactenum} \sigmaection{Construction of the Lie group extension}\lambdaeftarrowbel{GruChLieEx} We introduce the following conventions: \betaegin{convention}\lambdaeftarrowbel{GruConvention9898} \betaegin{compactenum} \iotatem All finite-dimensional manifolds are assumed to be $\sigmaigma$-compact. 
\iotatem Analogously to \chiircte[p. 385 and p.388]{Neeb:2009} we consider following setting\frak{o}otnote{In \chiircte{Neeb:2009} Neeb and Wockel also consider situations where the Lie groups $H$ and $G$ can be infinite-dimensional locally exponential Lie groups. See also Theorem \rhoef{GruMainI.2}, where we discuss the infinite-dimensional case.}: If not defined otherwise, $H\hookrightarrow P \xira{q}M$ denotes a finite-dimensional principal bundle (see also Theorem \rhoef{GruMainI.2} for the case where $H$ is infinite-dimensional) over a connected non-compact, $\sigma$-compact manifold $M$ and $\frak{h}$ the Lie algebra of $H$\frak{o}otnote{Like in \chiircte{Neeb:2004} it is crucial for our proof that the manifold $M$ is not compact. Hence our argumentation is not an alternative for the proof of \chiircte{Neeb:2009}}. Moreover let $G$ be a finite-dimensional Lie group (see also Theorem \rhoef{GruMainI.2} for the case where $H$ is infinite-dimensional) with Lie algebra $\frak{g}$ and $\kappa_\frak{g} \chiolon \frak{g}\thetaimes \frak{g} \rhoightarrow V(\frak{g})=:V$ be the universal invariant bilinear map on $\frak{g}$ (see e.g. \chiircte[Chapter 4]{Gundogan:2011}). Let $\rho_G \chiolon H \thetaimes G \rhoightarrow G$ be a smooth action of $H$ on $G$ by Lie group automorphisms and $\rho_\frak{g} \chiolon H \thetaimes \frak{g} \rhoightarrow \frak{g}$ be the derived action on $\frak{g}$ by Lie algebra automorphisms ($\rho_\frak{g}(h,{\scriptscriptstyle \betaullet})= L(\rho_G(h,{\scriptscriptstyle \betaullet})) \iotan \muathcal{A}ut(\frak{g})$). We find a unique map $\rho_V \chiolon H \thetaimes V \rhoightarrow V$ that is linear in the second argument and fulfils $\rho_V(h,\kappa_\frak{g}(x,y)) = \kappa_\frak{g}(\rho_\frak{g}(h,x),\rho_\frak{g}(h,y))$ for $x,y\iotan \frak{g}$ and $h \iotan H$. The vector space $V$ is generated by elements of the form $\kappaappa_\frak{g}(x,y)$ with $x,y\iotan \frak{g}$. To see that $\rhoho_V$ is also a representation we show $\rhoho_V(g,\rhoho_V(h,\kappaappa_\frak{g}(x,y))) = \rhoho_V(gh,\kappaappa_\frak{g}(x,y))$ for $x,y\iotan \frak{g}$ and $g,h \iotan H$: \betaegin{align*} &\rhoho_V(g,\rhoho_V(h,\kappaappa_\frak{g}(x,y))) = \rhoho_V(g, \kappaappa_\frak{g}(\rhoho_\frak{g}(h,x), \rhoho_\frak{g}(h,y)))\\ =& \kappaappa_\frak{g}(\rhoho_\frak{g}(g,\rhoho_{\frak{g}}(h,x)), \rhoho_\frak{g}(g,\rhoho_{\frak{g}}(h,y))) = \rhoho_V(gh,\kappaappa_\frak{g}(x,y)). \etand{align*} Because we can find a basis of $V$ consisting of vectors of the form $\kappaappa_\frak{g}(x,y)$ the smoothness of $\rhoho_V$ follows. We write $\muathcal{G}:= P\thetaimes_{\rho_G} G$ for the associated Lie group bundle (the definition of a Lie group bundle (respectively associated Lie group bundle) is completely analogous to the definition of a vector bundle (respectively associated Lie group bundle), just in the category of Lie groups), $\frak{G}: = P\thetaimes_{\rho_\frak{g}} \frak{g}$ for the associated Lie algebra bundle and $\muathbb{V}: = P\thetaimes_{\rho_V} V$ for the associated vector bundle to $H\hookrightarrow P \rhoightarrow M$. Let $VP$ be the vertical bundle of $TP$. We fix a principal connection $HP \sigmaubs TP$ on the principal bundle $P$ and write $\pir_h\chiolon TP\rhoightarrow HP$ for the projection onto the horizontal bundle. As pointed out in \chiircte[p. 385]{Neeb:2009} it is no loose of generality to assume the total space $P$ to be connected. Hence we do so in this paper. 
\item Let $D_{\rho_\frak{g}} \colon C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \rightarrow \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$, $f\mapsto df\circ\mathrm{pr}_h$ and $D_{\rho_V} \colon C^\infty_c(P,V)_{\rho_V} \rightarrow \Omega^1_c(P,V)_{\rho_V}^\text{hor}$, $f\mapsto df\circ\mathrm{pr}_h$ be the absolute derivatives corresponding to $HP$ (cf. \cite[Definition 3.8]{Baum:2014}). Moreover let $d_\frak{G} \colon \Gamma_c(M,\frak{G}) \rightarrow \Omega^1_c(M, \frak{G})$ and $d_\mathbb{V} \colon \Gamma_c(M, \mathbb{V}) \rightarrow \Omega^1_c(M, \mathbb{V})$ be the induced covariant derivatives on the Lie algebra bundle $\frak{G}$ and the vector bundle $\mathbb{V}$ respectively (cf. \cite[p. 100 ff]{Baum:2014} and Lemma \ref{GruRealisation}).
\end{compactenum}
\end{convention}
In \cite[Appendix A]{Neeb:2009}, where $M$ is compact, Neeb and Wockel endowed the group of sections $\Gamma(M,\mathcal{G})$ of a Lie group bundle $\mathcal{G}$ that comes from a principal bundle $P$ with a Lie group structure. They used the identification $\Gamma(M,\mathcal{G})\cong C^\infty(P,G)_{\rho_G}$ and endowed the group $C^\infty(P,G)_{\rho_G}$ of equivariant smooth maps from $P$ to $G$ with a Lie group structure by using the construction of a Lie group structure on the gauge group $\mathrm{Gau}(P)$ described in \cite{Wockel:2007}. To this end they replaced the conjugation action of the structure group on itself by the Lie group action $\rho_G$. In the following Definition \ref{GruLieGruppenStruktur}, we proceed analogously in the case where $M$ is non-compact but $\sigma$-compact. As the construction from \cite{Neeb:2009} is based on \cite{Wockel:2007}, our analogous definition is based on \cite[Chapter 4]{Schuett:2013}, because \cite[Chapter 4]{Schuett:2013} is the generalisation of \cite{Wockel:2007} to the non-compact case.
\begin{definition}\label{GruLieGruppenStruktur}
\begin{compactenum}
\item We equip the group
\begin{align*}
C^\infty_c(P,G)_{\rho_G} = \{&\phi\in C^\infty(P,G): (\exists K \subseteq M \text{ compact})~ \operatorname{supp}(\phi)\subseteq q^{-1}(K)\\
&\text{ and } (\forall h\in H,\ p \in P) ~ \rho_G(h)(\phi(ph)) =\phi(p)\}
\end{align*}
with the infinite-dimensional Lie group structure described in \cite[Chapter 4]{Schuett:2013}. We just replace the conjugation of $H$ on itself by the action $\rho_G$ of $H$ on $G$. We emphasise that the functions $f\in C^\infty_c(P,G)_{\rho_G}$ are not compactly supported in $P$ itself. The Lie algebra of $C^\infty_c(P,G)_{\rho_G}$ is given by the locally convex Lie algebra
\begin{align*}
&C^\infty_c(P,\frak{g})_{\rho_\frak{g}}= \{f\in C^\infty(P,\frak{g})_{\rho_\frak{g}}: (\exists K \subseteq M \text{ compact})~ \operatorname{supp}(f)\subseteq q^{-1}(K)\}\\
=& \varinjlim C^\infty_K(P,\frak{g})_{\rho_\frak{g}},
\end{align*}
where $K$ runs through the compact subsets of $M$.
\item From \cite[Chapter 4]{Schuett:2013} (cf. \cite[Theorem 3.5]{Baum:2014} and Lemma \ref{GruRealisation}) we know $\Gamma_c(M,\frak{G}) \cong C^\infty_c(P,\frak{g})_{\rho_\frak{g}}$ in the sense of topological vector spaces. Now, we endow $\Gamma_c(M,\mathcal{G})$ with the Lie group structure that turns the group isomorphism $\Gamma_c(M,\mathcal{G})\cong C^\infty_c(P,G)_{\rho_G}$ into an isomorphism of Lie groups.
Hence $\Gamma_c(M,\mathcal{G})$ becomes an infinite-dimensional Lie group modelled over the locally convex space $\Gamma_c(M,\frak{G})$.
\end{compactenum}
\end{definition}
In the following definition we fix our notation for the quotient principal bundle. For details on the well-known concept of quotient principal bundles see e.g. \cite[Proposition 2.2.20]{Gundogan:2011}.
\begin{definition}\label{GruQuotientenBuendel}
Let $N := \ker(\rho_V) \subseteq H$ and $H/N \hookrightarrow P/N \xrightarrow{\overline{q}} M$ be the quotient bundle with projection $\overline{q} \colon P/N \rightarrow M,~ pN \mapsto q(p)$ and right action $\overline{R} \colon H/N \times P/N \rightarrow P/N,~ ([g],pN) \mapsto (pg)N$. We write $\overline{H}:=H/N$ and $\overline{P}:=P/N$. Let $\overline{\rho}_V\colon H/N \rightarrow \mathrm{GL}(V)$ be the factorisation of $\rho_V$ over $N$ and $\pi \colon P \rightarrow P/N$ the orbit projection. If $\psi\colon q^{-1}(U)\rightarrow U\times H$ is a trivialisation of $P$, then $\psi'\colon \overline{q}^{-1}(U)\rightarrow U\times \overline{H}$, $pN \mapsto (q(p), [\mathrm{pr}_2\circ \psi(p)])$ is a typical trivialisation of $\overline{P}$. It is well known that $\mathbb{V}$ is isomorphic to the bundle associated to $\overline{H} \hookrightarrow \overline{P}\xrightarrow{\overline{q}} M$ via $\overline{\rho}_V$ (see e.g. \cite[Remark 2.2.21]{Gundogan:2011}). Moreover, we write $H\overline{P}:= T\pi(HP)$ for the canonical principal connection on $\overline{P}$ that comes from $P$ (see \ref{GruIsoQuotientBundle} (a)). We mention that $\pi^\ast \colon \Omega^k_c(\overline{P},V)^\text{hor}_{\overline{\rho}_V} \rightarrow \Omega_c^k(P,V)^\text{hor}_{\rho_V}$ is an isomorphism of topological vector spaces and induces an isomorphism of chain complexes (see \ref{GruIsoQuotientBundle} (c)).
\end{definition}
\begin{convention}
Analogously to \cite{Neeb:2009} we introduce the following convention. We assume that an identity-neighbourhood of $H$ acts trivially on $V$ via $\rho_V$ (cf. \cite[p. 385]{Neeb:2009}). Hence $\overline{H}$ is a discrete Lie group. Moreover, we even assume $\overline{H}$ to be finite (cf. \cite[p. 386, p. 398 f and Theorem 4.14]{Neeb:2009}).
\end{convention}
\begin{definition}
Let $H\hookrightarrow P \rightarrow M$ be a principal bundle with connected total space $P$ and $\rho_V \colon H \times V \rightarrow V$ a linear representation. Moreover fix a connection $HP$ on $TP$ and let $D_{\rho_V}$ be the induced absolute derivative of the associated vector bundle $\mathbb{V}$.
\begin{compactenum}
\item We define
\begin{align*}
&Z^1_{dR,c}(P,V)_{\rho_V}:= \{\theta \in \Omega_c^1(P,V)^\text{hor}_{\rho_V}: D_{\rho_V}\theta =0\}, \\
&B^1_{dR,c}(P,V)_{\rho_V}:= D_{\rho_V}(C^\infty_c(P,V)_{\rho_V})
\end{align*}
and equip these spaces with the induced topology of $\Omega_c^1(P,V)^\text{hor}_{\rho_V}$.
\item We define
\begin{align*}
&Z^1_{dR,c}(P,V)_{\text{fix}}:= Z^1_{dR,c}(P,V) \cap \Omega^1_c(P,V)_{\rho_V},\\
&B^1_{dR,c}(P,V)_{\text{fix}}:= B^1_{dR,c}(P,V) \cap \Omega_c^1(P,V)_{\rho_V}
\end{align*}
and equip these spaces with the induced topology of $\Omega_c^1(P,V)_{\rho_V}$.
\end{compactenum}
\end{definition}
\begin{lemma}\label{GruFIX88}
Let $H\hookrightarrow P \rightarrow M$ be a principal bundle and $\rho_V \colon H \times V \rightarrow V$ a linear representation. Moreover fix a connection $HP$ on $TP$ and let $D_{\rho_V}$ be the induced absolute derivative of the associated vector bundle $\mathbb{V}$.
\begin{compactenum}
\item If $H$ is discrete we have
\begin{align*}
Z^1_{dR,c}(P,V)_{\rho_V} = Z^1_{dR,c}(P,V)_\text{fix} \text{ and } B^1_{dR,c}(P,V)_{\rho_V} \subseteq B^1_{dR,c}(P,V)_\text{fix}.
\end{align*}
Because in this situation all forms on $P$ are horizontal, the topologies on $Z^1_{dR,c}(P,V)_{\rho_V}$ and $Z^1_{dR,c}(P,V)_\text{fix}$ coincide.
\item \label{GruFIXB1} If $H$ is finite we get $B^1_{dR,c}(P,V)_{\rho_V} = B^1_{dR,c}(P,V)_\text{fix}$. Again the topologies on these subspaces coincide, because $\Omega^1_{c}(P,V)_{\rho_V}^\text{hor}$ and $\Omega^1_{c}(P,V)_{\rho_V}$ are exactly the same topological vector spaces.
\end{compactenum}
\end{lemma}
\begin{proof}
\begin{compactenum}
\item If $H$ is discrete there is only one connection on $P$, namely $HP=TP$. Hence $D_{\rho_V}$ becomes the usual exterior derivative.
\item Let $n:=\#H$ and $\theta \in B^1_{dR,c}(P,V)_\text{fix}$ with $\theta = df$ for $f\in C^\infty_c(P,V)$. For $\phi \in C^\infty_c(P,V)$ and $g\in H$ we write $g.\phi := \rho_V(g)\circ R_g^\ast \phi$ and get $\frac{1}{n} \cdot \sum_{g\in H} g.f \in C^\infty_c(P,V)_{\rho_{V}}$. Moreover $d(\frac{1}{n} \cdot \sum_{g\in H} g.f) = \theta$. Hence $B^1_{dR,c}(P,V)_{\rho_V} = B^1_{dR,c}(P,V)_\text{fix}$.
\end{compactenum}
\end{proof}
\begin{lemma}\label{GruBkdlc89}
Let $H\hookrightarrow P \xrightarrow{q} M$ be a principal bundle with finite structure group $H$ and connected total space $P$. Moreover let $\rho_V \colon H \times V \rightarrow V$ be a finite-dimensional linear representation, $HP$ a connection on $TP$ and $D_{\rho_V}$ the induced absolute derivative of the associated vector bundle $\mathbb{V}$.
\begin{compactenum}
\item The map $q$ is proper. Hence in this case the forms in $\Omega^k_c(P,V)$ are exactly the compactly supported forms on $P$.
\item The space $B^1_{dR,c}(P,V) = dC^\infty_c(P,V)$ is a closed subspace of $\Omega^1_c(P,V)$.
\end{compactenum}
\end{lemma}
\begin{proof}
\begin{compactenum}
\item \cite[Lemma 10.2.11]{Napier:2011} tells us that if $F\hookrightarrow \mathbb{F} \xrightarrow{q} M$ is a continuous fibre bundle of finite-dimensional topological manifolds and $F$ is finite, then $q$ is a proper map (a more general statement in the setting of topological spaces can be found in \cite[Exercise A.75]{Lee:2013}, but it is not needed for our considerations).
\item \cite[Lemma IV.11]{Neeb:2004} tells us that, if $M$ is a connected finite-dimensional manifold and $V$ a finite-dimensional vector space, then $B^1_{dR,c}(M,V)=dC^\infty_c(M,V)$ is a closed subspace of $\Omega^1_c(M,V)$.
\end{compactenum}
\end{proof}
For the corresponding statement to the following lemma in the case of a compact base manifold, compare \cite[p. 385 f]{Neeb:2009}.
\begin{lemma}\label{GruAbgeschlossenDrVff}
The subspace $D_{\rho_V}C^\infty_c(P,V)_{\rho_V} \subseteq \Omega_c^1(P,V)_{\rho_V}^\text{hor}$ is closed.
\end{lemma}
\begin{proof}
The lemma simply says that $d\Gamma_c(M,\mathbb{V})$ is closed in $\Omega_c^1(M,\mathbb{V})$. Hence it is enough to show that the subspace $dC^\infty_c(\overline{P},V)_{\overline{\rho}_V}$ is closed in $\Omega_c^1(\overline{P},V)_{\overline{\rho}_V}^\text{hor} = \Omega_c^1(\overline{P},V)_{\overline{\rho}_V}$. From Lemma \ref{GruBkdlc89} we know that $B^1_{dR,c}(\overline{P},V)$ is closed in $\Omega^1_c(\overline{P},V)$. We calculate
\begin{align*}
&dC^\infty_c(\overline{P},V)_{\overline{\rho}_V} = B^1_{dR,c}(\overline{P},V)_{\text{fix}} = \bigcap_{g\in \overline{H}} \{\theta \in B^1_{dR,c}(\overline{P},V): \overline{\rho}_V(g)\circ \overline{R}_g^\ast \theta =\theta\}\\
= &\bigcap_{g\in \overline{H}} (\overline{\rho}_V(g)\circ \overline{R}_g^\ast - \mathrm{id})^{-1}(\{0\}).
\end{align*}
We see that $dC^\infty_c(\overline{P},V)_{\overline{\rho}_V}$ is closed in $\Omega^1_c(\overline{P},V)$. Because the topology of $\Omega^1_c(\overline{P},V)_{\overline{\rho}_V}$ is finer than the induced topology of $\Omega^1_c(\overline{P},V)$, the space $dC^\infty_c(\overline{P},V)_{\overline{\rho}_V}$ is also closed in $\Omega^1_c(\overline{P},V)_{\overline{\rho}_V}$.
\end{proof}
\begin{definition}
Let $H\hookrightarrow P \rightarrow M$ be a principal bundle with connected total space $P$ and $\rho_V \colon H \times V \rightarrow V$ a linear representation. Moreover fix a connection $HP$ on $TP$ and let $D_{\rho_V}$ be the induced absolute derivative of the associated vector bundle $\mathbb{V}$.
\begin{compactenum}
\item If the quotient group $H/\ker(\rho_V)$ is finite (this of course includes the case where the group $H$ is finite) we define
\begin{align*}
H^1_{dR,c}(P,V)_{\rho_V}:=Z^1_{dR,c}(P,V)_{\rho_V} / B^1_{dR,c}(P,V)_{\rho_V} \cong H^1_{dR,c}(M,\mathbb{V}).
\end{align*}
Because of Lemma \ref{GruAbgeschlossenDrVff} this is a Hausdorff locally convex space.
\item We have a canonical $H$-module structure on $H^1_{dR,c}(P,V)$ given by $H\times H^1_{dR,c}(P,V) \rightarrow H^1_{dR,c}(P,V)$, $(h,[\theta]) \mapsto [\rho_V(h)\circ R_h^\ast \theta]$. As usual we call the fixed points of this action $\rho_V$-invariant. If the group $H$ is finite we define
\begin{align*}
H^1_{dR,c}(P,V)_{\text{fix}}:= \{ [\theta] \in H^1_{dR,c}(P,V): [\theta]\text{ is }\rho_V\text{-invariant}\}
\end{align*}
and because of Lemma \ref{GruBkdlc89} the space $H^1_{dR,c}(P,V)_{\text{fix}}$ becomes a Hausdorff locally convex space as a closed subspace of the Hausdorff locally convex space $H^1_{dR,c}(P,V)$.
\end{compactenum}
\end{definition}
It is possible to show the following Lemma \ref{GruFIX} by a more abstract argument, using that under certain conditions the fixed-point functor is exact, as it was done in the compact case in \cite[Remark 4.12]{Neeb:2009}.
\begin{lemma}\label{GruFIX}
Let $H\hookrightarrow P \rightarrow M$ be a principal bundle and $\rho_V \colon H \times V \rightarrow V$ a linear representation. If $H$ is finite we get
\begin{align*}
H^1_{dR,c}(P,V)_\text{fix} \cong Z^1_{dR,c}(P,V)_{\text{fix}} / B^1_{dR,c}(P,V)_{\text{fix}}
\end{align*}
in the sense of topological vector spaces.
\end{lemma}
\begin{proof}
Let $n:=\#H$. We consider the linear map $\psi \colon Z^1_{dR,c}(P,V)_\text{fix} \rightarrow H^1_{dR,c}(P,V)_\text{fix},~ \theta \mapsto [\theta]$.
The map $\psi$ is continuous because the inclusion $Z^1_{dR,c}(P,V)_\text{fix} \hookrightarrow \Omega^1_{c}(P,V)$ is continuous and so the canonical map $Z^1_{dR,c}(P,V)_\text{fix} \rightarrow H^1_{dR,c}(P,V)$ is continuous. If $[\theta] \in H^1_{dR,c}(P,V)_\text{fix}$ with $\theta = df$ for $f\in C^\infty_c(P,V)$, then $[\theta] = [d (\frac{1}{n} \sum_{g\in H}g.f)]$ and $d (\frac{1}{n} \sum_{g\in H}g.f) \in B^1_{dR,c}(P,V)_\text{fix}$, so $\ker(\psi) \subseteq B^1_{dR,c}(P,V)_\text{fix}$. Obviously $B^1_{dR,c}(P,V)_\text{fix} \subseteq \ker(\psi)$. Now we show that $\psi$ is surjective. If $[\theta] \in H^1_{dR,c}(P,V)_\text{fix}$, then $[\theta] = [\frac{1}{n}\cdot \sum_{g\in H} g.\theta]$ and $\frac{1}{n}\cdot \sum_{g\in H} g.\theta \in Z^1_{dR,c}(P,V)_\text{fix}$. Hence $\psi$ factors through a continuous bijective linear map $\overline{\psi} \colon Z^1_{dR,c}(P,V)_{\text{fix}} / B^1_{dR,c}(P,V)_{\text{fix}} \rightarrow H^1_{dR,c}(P,V)_\text{fix}$. It remains to show that $\overline{\psi}$ is also open. We define
\begin{align*}
\tau \colon H_{dR,c}^1(P,V)\rightarrow Z^1_{dR,c}(P,V)_{\text{fix}} / B^1_{dR,c}(P,V)_{\text{fix}},\quad [\theta] \mapsto \left[\frac{1}{n} \cdot \sum_{g\in H} g.\theta\right].
\end{align*}
Obviously $\tau|_{H_{dR,c}^1(P,V)_\text{fix}}$ is inverse to $\overline{\psi}$. The map
\begin{align*}
\Omega^1_{c}(P,V) \rightarrow \Omega^1_{c}(P,V)_{\rho_V},~\theta \mapsto \frac{1}{n} \cdot \sum_{g\in H} g.\theta
\end{align*}
is continuous, because the action $g.\theta = \rho_V(g)\circ R_g^\ast \theta$ does not enlarge the support of a given form. Hence $\tau$ is continuous and $\overline{\psi}$ is open.
\end{proof}
\begin{corollary}\label{GruCorolllaryol}
Considering the principal bundle $\overline{H}\hookrightarrow \overline{P} \rightarrow M$ with the action $\overline{\rho}_V$ we have
\begin{align*}
H^1_{dR,c}(\overline{P},V)_\text{fix} \cong Z^1_{dR,c}(\overline{P},V)_{\text{fix}} / B^1_{dR,c}(\overline{P},V)_{\text{fix}}\cong H^1_{dR,c}(\overline{P},V)_{\overline{\rho}_V}.
\end{align*}
\end{corollary}
The following lemma is a generalisation of considerations in \cite[p. 399 and Remark 4.12]{Neeb:2009} from the compact case to the non-compact case.
\begin{lemma}
\begin{compactenum}
\item If we endow $H^1_{dR,c}(M,V)$ with the canonical $\overline{H}$-module structure $\overline{H}\times H^1_{dR,c}(M,V) \rightarrow H^1_{dR,c}(M,V),~ (h,[\theta]) \mapsto [\overline{\rho}_V(h) \circ \theta]$, the map $\overline{q}^\ast \colon H^1_{dR,c}(M,V) \rightarrow H^1_{dR,c}(\overline{P},V)$ becomes an isomorphism of $\overline{H}$-modules, and $H^1_{dR,c}(M,V)_\text{fix} \cong_{\overline{q}^\ast} H^1_{dR,c}(\overline{P},V)_\text{fix}$.
\item We have
\begin{align}\label{GruIsoDeRham}
H^1_{dR,c}(M,V_\text{fix}) \cong H^1_{dR,c}(M,V)_\text{fix}
\end{align}
if we write $V_\text{fix}$ for the subspace of fixed points in $V$ under the action $\overline{\rho}_V$.
\item The map
\begin{align*}
H^1_{dR,c}(M,V_\text{fix}) \rightarrow H_{dR,c}^1(P,V)_{\rho_V},~ [\theta] \mapsto [q^\ast \theta]
\end{align*}
is an isomorphism of topological vector spaces.
\end{compactenum}
\end{lemma}
\begin{proof}
\begin{compactenum}
\item For $\overline{h}\in \overline{H}$ we calculate
\begin{align*}
\overline{q}^\ast [\overline{\rho}_V(\overline{h}) \circ \theta] = [\overline{\rho}_V(\overline{h}) \circ \overline{q}^\ast \theta] = [\overline{\rho}_V(\overline{h}) \circ (\overline{q}\circ \overline{R}_{\overline{h}})^\ast \theta] = [\overline{\rho}_V(\overline{h}) \circ \overline{R}_{\overline{h}}^\ast \overline{q}^\ast \theta] = \overline{h}.\overline{q}^\ast [\theta].
\end{align*}
Hence $\overline{q}^\ast$ is an isomorphism of $\overline{H}$-modules. Now the second assertion follows from Lemma \ref{GruIsoEndlCover}.
\item We exchange $P$ with $M$ and the action $g.\theta= \rho_V(g)\circ R_g^\ast\theta$ with $g.\theta= \overline{\rho}_V(g)\circ \theta$ in the proof of Lemma \ref{GruFIX} and get
\begin{align*}
H^1_{dR,c}(M,V)_\text{fix} \cong Z^1_{dR,c}(M,V)_\text{fix} /B^1_{dR,c}(M,V)_\text{fix}.
\end{align*}
Now we show that the isomorphism $\phi \colon \Omega^1_{c}(M,V_\text{fix})\rightarrow \Omega_{c}^1(M,V)_\text{fix}$, $\theta\mapsto\theta$ is a homeomorphism, where $\Omega_{c}^1(M,V)_\text{fix}$ is equipped with the topology induced from $\Omega_{c}^1(M,V)$. Given a compact set $K\subseteq M$ the map $\Omega^1_{K}(M,V_\text{fix})\rightarrow \Omega_{K}^1(M,V)$ is continuous. Hence $\Omega^1_{c}(M,V_\text{fix})\rightarrow \Omega_{c}^1(M,V)$ is continuous. Therefore $\phi$ is continuous. Considering the continuous map $\Omega^1_{c}(M,V)\rightarrow \Omega^1_{c}(M,V_\text{fix})$, $\theta \mapsto \sum_{\overline{h} \in \overline{H}} \overline{h}.\theta$, we see that $\phi$ is an isomorphism of topological vector spaces. Now the assertion follows from $Z^1_{dR,c}(M,V)_\text{fix}=Z^1_{dR,c}(M,V_\text{fix})$ and $B^1_{dR,c}(M,V)_\text{fix}=B^1_{dR,c}(M,V_\text{fix})$.
\item We have the commutative diagram
\begin{align*}
\begin{xy}\xymatrixcolsep{5pc}
\xymatrix{
H_{dR,c}^1(M,V_\text{fix}) \ar[rr]^-{q^\ast} \ar[d]&& H_{dR,c}^1(P,V)_{\rho_V}\\
H_{dR,c}^1(M,V)_\text{fix} \ar[r]^-{\overline{q}^\ast} & H^1_{dR,c}(\overline{P},V)_\text{fix} \ar[r] & H^1_{dR,c}(\overline{P},V)_{\overline{\rho}_V} \ar[u]^-{\pi^\ast}.
}
\end{xy}
\end{align*}
The assertion now follows from (a), (b) and Corollary \ref{GruCorolllaryol}.
\end{compactenum}
\end{proof}
\begin{convention}
From now on we write $q_\ast \colon H_{dR,c}^1(P,V)_{\rho_V} \rightarrow H^1_{dR,c}(M,V_\text{fix})$ for the inverse of $q^\ast \colon H^1_{dR,c}(M,V_\text{fix}) \rightarrow H_{dR,c}^1(P,V)_{\rho_V},~ [\theta] \mapsto [q^\ast \theta]$.
\end{convention}
\begin{remark}\label{GruVollststaendigno}
Given an infinite-dimensional Lie group $G$ with Lie algebra $\frak{g}$, a trivial locally convex $\frak{g}$-module $\frak{z}$ and a Lie algebra cocycle $\omega\colon \frak{g}\times \frak{g} \rightarrow \frak{z}$, \cite[Theorem 7.12]{Neeb:2002} gives us conditions under which we can integrate $\omega$ to a Lie group cocycle of the Lie group $G$. These conditions were recalled in the introduction to this thesis (see p. xiv).
Theorem 7.12 in \cite{Neeb:2002} is formulated in the case where $\frak{z}$ is sequentially complete\footnote{See Lemma \ref{GruFolgenvoll2} for conditions that guarantee the sequential completeness of $\overline{\Omega}^1(M,\mathbb{V})$.}. However, it also holds in a special case where $\frak{z}$ is not sequentially complete: Let $E$ be a Mackey complete space, $F\subseteq E$ a closed subspace and $\frak{z}= E/F$. If $\omega$ lifts to a continuous bilinear map $\alpha \colon \frak{g}\times \frak{g} \rightarrow E$, then the results of \cite{Neeb:2002} stay valid. To see this we make the following consideration: Let $\omega^l$ be the left invariant $2$-form on $G$ corresponding to $\omega$. The completeness of $\frak{z}$ is only used to guarantee the existence of weak integrals in the following settings:
\begin{compactenum}
\item $\int_{{\sigma}}\omega^l=\int_M\sigma^\ast \omega^l$, where $M$ is a $2$-dimensional manifold (namely $M=\mathbb{S}^2$) or a simplex and $\sigma\colon M\rightarrow G$ is a smooth map (see \cite[Chapter 5 and 6]{Neeb:2002}),
\item $\int_0^1\omega^l(f(t))dt$, where $f \colon [0,1]\rightarrow TG\oplus TG$ is a smooth map into the Whitney sum (see \cite[Chapter 7]{Neeb:2002}).
\end{compactenum}
The integrals $\int_{{\sigma}}\omega^l$ and $\int_0^1\omega^l(f(t))dt$ are weak integrals, and such integrals do not have to exist in an arbitrary locally convex space; they do exist in sequentially complete (respectively Mackey complete) locally convex spaces. This is the reason why Neeb assumes $\frak{z}$ to be sequentially complete. Now we consider the situation where $\frak{z}$ is not itself sequentially complete, but $\frak{z} = E/F$ with a Mackey complete locally convex space $E$ and a closed subspace $F$, and $\omega = \pi \circ \alpha$ is a Lie algebra cocycle with the canonical projection $\pi \colon E \rightarrow E/F$ and a continuous bilinear map $\alpha \colon \frak{g}^2\rightarrow E$. We show the existence of the weak integral $\int_{\sigma}\omega^l$. We define $\tilde{\alpha}\colon \frak{g}^2 \rightarrow E,~ (v,w) \mapsto \frac{1}{2} \alpha(v,w) - \frac{1}{2} \alpha(w,v)$ (cf. \cite[Remark 2.2]{Neeb:2009}). We get $\pi \circ \tilde{\alpha} = \omega$ and $\tilde{\alpha}$ is a continuous Lie algebra 2-cochain. Let $\tilde{\alpha}^l \in \Omega^2(G,E)$ be the left invariant differential form on $G$ that comes from $\tilde{\alpha}$. We get $\omega^l = \pi \circ \tilde{\alpha}^l$ and the weak integral $\int_\sigma \omega^l$ is given by
\begin{align*}
\int_{M} \sigma^\ast \omega^l = \pi \left( \int_{M}\sigma^\ast \tilde{\alpha}^l\right).
\end{align*}
The existence of the weak integral $\int_0^1\omega^l(f(t))dt$ follows analogously.
\end{remark}
\begin{definition}\label{Gruomega}
We define the locally convex spaces
\begin{align*}
&\overline{\Omega}^1_c(P,V)_{\rho_V}^\text{hor} := \Omega^1_c(P,V)_{\rho_V}^\text{hor} / D_{\rho_V}C^\infty_c(P,V)_{\rho_V} \text{ and}\\
&\overline{\Omega}^1_c(M,\mathbb{V}) := \Omega^1_c(M,\mathbb{V}) / d_\mathbb{V}\Gamma_c(M,\mathbb{V}).
\end{align*}
With Lemma \ref{GruIsoQuotientBundle} and Lemma \ref{GruRealisation} we get
\begin{align*}
\overline{\Omega}^1_c(M,\mathbb{V}) \cong \overline{\Omega}^1_c(P,V)_{\rho_V}^\text{hor} \cong \overline{\Omega}^1_c(\overline{P},V)_{\overline{\rho}_V}^\text{hor}.
\end{align*}
\end{definition}
\begin{remark}\label{GruRmarkallslsgfla}
\begin{compactenum}
\item Considering the vector bundle $V(\frak{G})$ from \cite{Janssens:2013}, we have a vector bundle isomorphism $\mathbb{V} \rightarrow V(\frak{G})$ given by
\begin{align*}
\phi\colon P\times_{\rho_V} V = \mathbb{V}& \rightarrow V(\frak{G}) = V(P \times_{\rho_\frak{g}} \frak{g}),\\
[p,\kappa_\frak{g}(x,y)] &\mapsto \kappa_{\frak{G}_{q(p)}}([p,x],[p,y]) \text{ for } x,y \in\frak{g}.
\end{align*}
In fact, $\phi$ is well-defined, because given $p\in P$ there exists a unique linear map $\phi_p \colon V=V(\frak{g}) \rightarrow V(\frak{G}_{q(p)})$ given by $\phi_p(\kappa_\frak{g}(x,y)) = \kappa_{\frak{G}_{q(p)}}([p,x],[p,y])$, and given $x\in M$ the map $(P\times_{\rho_V}V)_x \rightarrow V(\frak{G}_x)$, $[p,v] \mapsto \phi_p(v)$ is well-defined. The bundle morphism $\phi$ is smooth, because in the canonical charts $\phi$ is locally given by $U\times V \rightarrow U\times V$, $(x_0,\kappa_\frak{g}(x,y)) \mapsto (x_0,\kappa_\frak{g}(x,y))$ for a domain $U \subseteq M$ of a trivialisation of $P$, $x_0\in U$ and $x,y \in \frak{g}$. Hence $\phi$ is locally given by the identity $U\times V \rightarrow U\times V$.
\item \label{GruRmarkallslsgflab} Given $\theta \in \Omega^1_c(P,\frak{g})^\text{hor}_{\rho_\frak{g}}$ and $f \in C_c^\infty(P,\frak{g})_{\rho_\frak{g}}$, we have $\kappa_\frak{g}\circ (\theta,f) \in \Omega^1_c(P,V)_{\rho_V}^\text{hor}$. In fact, obviously $\kappa_\frak{g}\circ (\theta,f)$ is horizontal and ``compactly supported'' with respect to the principal bundle $P \xrightarrow{q}M$. Moreover, given $h\in H$, $p \in P$ and $v \in T_pP$, we calculate
\begin{align*}
&R_h^\ast (\kappa_\frak{g}\circ (\theta,f))_p(v) = \kappa_\frak{g}( \theta_{ph}(TR_h(v)), f(ph))\\
=&\kappa_\frak{g}(\rho_\frak{g}(h^{-1}). \theta_p(v), \rho_\frak{g}(h^{-1}). f(p) ) = \rho_V(h^{-1}).\kappa_\frak{g}(\theta_p(v), f(p)).
\end{align*}
Therefore the map $\tilde{\kappa}_\frak{g} \colon \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \times C_c^\infty(P,\frak{g})_{\rho_\frak{g}} \rightarrow \Omega^1_c(P,V)_{\rho_V}^\text{hor}$, $(\theta,f) \mapsto \kappa_\frak{g}\circ (\theta,f)$ makes sense and we obtain the commutative diagram
\begin{align*}
\begin{xy}\xymatrixcolsep{5pc}
\xymatrix{
\Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \times C_c^\infty(P,\frak{g})_{\rho_\frak{g}} \ar[dd] \ar[r]^-{\tilde{\kappa}_\frak{g}}& \Omega^1_c(P,V)_{\rho_V}^{\mathrm{hor}} \ar[d]\\
& \Omega^1_c(M,\mathbb{V}) \ar[d]\\
\Omega^1_c(M,\frak{G}) \times \Gamma_c(\frak{G}) \ar[r]^-{\tilde{\kappa}_{\frak{G}}} & \Omega^1_c(M,V(\frak{G})),
}
\end{xy}
\end{align*}
where the lower horizontal arrow is given by the map $\tilde{\kappa}_{\frak{G}}$ described in \cite[Lemma 3.23]{Eyni:2014} and the vertical arrows are the canonical isomorphisms of topological vector spaces. In particular $\tilde{\kappa}_\frak{g}$ is continuous. Moreover we write $\tilde{\kappa}_\frak{g}(\eta,\theta):=\tilde{\kappa}_\frak{g}(\theta,\eta)$ for $\theta \in \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ and $\eta\in \Gamma_c(\frak{G})$.
\item The map $C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \times \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \rightarrow \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$, $(\eta, \theta)\mapsto [\eta, \theta]$ with $[\eta, \theta]_{p}(w)= [\eta(p), \theta_p(w)]$ for $p \in P$ and $w\in T_pP$ makes sense: obviously $[\eta,\theta]$ is horizontal and $[\eta,\theta] \in \Omega^1_c(P,\frak{g})$, and because $\rho_\frak{g}$ acts by Lie algebra automorphisms on $\frak{g}$, the form $[\eta,\theta]$ is also $\rho_\frak{g}$-invariant. Under the canonical isomorphisms of topological vector spaces $\Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \cong \Omega^1_c(M,\frak{G})$ and $C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \cong \Gamma_c(M,\frak{G})$ this map corresponds to the map $\Omega^1_c(M,\frak{G}) \times \Gamma_c(M,\frak{G}) \rightarrow \Omega^1_c(M,\frak{G})$, $(\theta,\eta)\mapsto [\theta,\eta]$ with $[\theta,\eta]_x(v) = [\theta_x(v), \eta(x)]_{\frak{G}_x}$ for $x\in M$ and $v\in T_xM$. We define $[\eta,\theta]:=-[\theta,\eta]$ for $\theta \in \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ and $\eta\in \Gamma_c(\frak{G})$.
\item We write $\mathrm{pr}_h\colon TP \rightarrow HP$ for the projection onto the horizontal bundle. We see directly that $D_{\rho_\frak{g}} \colon C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \rightarrow \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$, $f\mapsto df\circ \mathrm{pr}_h$ is a Lie connection.
\item We define the map $\beta \colon C_c^\infty(P,\frak{g})_{\rho_\frak{g}}\times C_c^\infty(P,\frak{g})_{\rho_\frak{g}} \rightarrow \Omega_c^1(P,V)_{\rho_V}^\text{hor}$, $\beta(f,g) = \tilde{\kappa}_\frak{g}(D_{\rho_\frak{g}}f , g) + \tilde{\kappa}_\frak{g}(D_{\rho_\frak{g}} g, f)$.
Because $D_{\rho_V}$ and $D_{\rho_\frak{g}}$ are induced by the same principal connection on $P$, we obtain a commutative diagram
\begin{align*}
\begin{xy}\xymatrixcolsep{5pc}
\xymatrix{
C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \times C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \ar[r]^-{\beta} \ar[d]^-{(\kappa_\frak{g})_\ast}& \Omega^1_c(P,V)_{\rho_V}^\text{hor}\\
C^\infty_c(P,V)_{\rho_V} \ar[ur]_-{D_{\rho_V}} &
}
\end{xy}
\end{align*}
where $(\kappa_\frak{g})_\ast(f,g) = \kappa_\frak{g}\circ (f,g)$.
\end{compactenum}
\end{remark}
\begin{definition}\label{GruDeffffomegaMMM}
We define the map
\begin{align*}
\omega_M \colon C^\infty_c(P,\frak{g})_{\rho_{\frak{g}}} \times C^\infty_c(P,\frak{g})_{\rho_{\frak{g}}} \rightarrow \overline{\Omega}_c^1(P,V)_{\rho_V}^\text{hor},~ (f,g)\mapsto [\kappa_\frak{g}(f,D_{\rho_\frak{g}}g)],
\end{align*}
which is the analogue of the cocycle $\omega$ defined in the compact case in \cite[Proposition 2.1]{Neeb:2009}. Because $D_{\rho_\frak{g}}$ is linear, $D_{\rho_\frak{g}}(C_K^\infty(P,\frak{g})_{\rho_\frak{g}}) \subseteq \Omega^1_K(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ and $D_{\rho_\frak{g}}(f) = df\circ \mathrm{pr}_h$, we get that $D_{\rho_\frak{g}} \colon C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \rightarrow \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ is continuous. Considering Remark \ref{GruRmarkallslsgfla} (\ref{GruRmarkallslsgflab}) we see that $\omega_M$ is continuous. Repeating the argumentation of the proof of \cite[Proposition 2.1]{Neeb:2009} we see that $\omega_M$ is anti-symmetric and a cocycle.
\end{definition}
\begin{remark}\label{Gru89Remarkdkdf}
In this remark we suppose that $\frak{g}$ is perfect. The map $(\kappa_\frak{g})_\ast \colon C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \times C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \rightarrow C^\infty_c(P,V)_{\rho_V}$ corresponds to the universal continuous invariant bilinear form $\kappa_{\frak{G}} \colon \Gamma_c(\frak{G}) \times \Gamma_c(\frak{G}) \rightarrow \Gamma_c(V(\frak{G}))$ from \cite[Theorem 3.20]{Eyni:2014}, and the absolute derivative $D_{\rho_V}$ corresponds to the covariant derivative $d$ constructed in \cite[Theorem 3.24]{Eyni:2014}; in particular we have $d=d_{\mathbb{V}}$. Hence our Lie algebra cocycle $\omega_M$ from Definition \ref{GruDeffffomegaMMM} corresponds to the cocycle $\omega_\nabla$ from \cite[Chapter 1, (1.1)]{Janssens:2013}.
\end{remark}
In \cite{Neeb:2009} Neeb and Wockel used Lie group homomorphisms that are pull-backs by horizontal lifts of smooth loops $\alpha \colon \mathbb{S}^1 \rightarrow M$ to reduce the proof of the discreteness of the period group to the case $M=\mathbb{S}^1$ (see \cite[Definition 4.2 and Remark 4.3]{Neeb:2009}). But this approach does not work in the non-compact case. Instead we want to use the results from \cite{Neeb:2004} on current groups on non-compact manifolds. Hence we use pull-backs by horizontal lifts of proper maps $\alpha \colon \mathbb{R} \rightarrow M$ (see the next definition). The definition corresponding to the following Definition \ref{GruDerPullBack} in the case of a compact base manifold is \cite[Definition 4.2]{Neeb:2009}.
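Before we turn to this pull-back construction, we illustrate $\omega_M$ in the simplest possible situation; this paragraph is only an orientation for the reader, and the trivial bundle and the trivial action occurring in it are assumptions made solely for this illustration (they are not used in the proofs below). Let $P = M\times H$ be the trivial principal bundle with its canonical flat principal connection and let $\rho_G$ be the trivial action of $H$ on $G$. Then the invariance condition forces every $f\in C^\infty_c(P,\frak{g})_{\rho_\frak{g}}$ to be constant along the fibres, so that $C^\infty_c(P,\frak{g})_{\rho_\frak{g}}\cong C^\infty_c(M,\frak{g})$, $\mathbb{V}\cong M\times V$ and $D_{\rho_\frak{g}}$ corresponds to the ordinary exterior derivative on $M$. Under these identifications
\begin{align*}
\omega_M(f,g) = [\kappa_\frak{g}(f,dg)] \in \overline{\Omega}^1_c(M,V),
\end{align*}
which recovers the cocycle of the compactly supported current algebra $C^\infty_c(M,\frak{g})$ from \cite{Neeb:2004}.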
\begin{definition}\label{GruDerPullBack}
We fix $x_0 \in M$, $p_0 \in P_{x_0}$ and $\alpha \in C^\infty_p(\mathbb{R}, M)$ with $\alpha(0)=x_0$. Let $\hat\alpha\in C^\infty(\mathbb{R},P)$ be the unique horizontal lift of $\alpha$ with $\hat\alpha(0) = p_0$. We define the group homomorphism
\begin{align*}
\hat{\alpha}_G^\ast \colon C^\infty_c(P,G)_{\rho_G} \rightarrow C^\infty_c(\mathbb{R}, G),~ \phi\mapsto \phi\circ \hat{\alpha}
\end{align*}
and the Lie algebra homomorphism
\begin{align*}
\hat{\alpha}_\frak{g}^\ast \colon C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \rightarrow C^\infty_c(\mathbb{R}, \frak{g}),~ f\mapsto f\circ \hat{\alpha}.
\end{align*}
In this context the maps in $C^\infty_c(\mathbb{R}, G)$ respectively $C^\infty_c(\mathbb{R}, \frak{g})$ are compactly supported in $\mathbb{R}$ itself. These maps make sense, because given $\phi\in C^\infty_c(P,G)$ we have $\operatorname{supp} (\phi) \subseteq q^{-1}(L)$ for a compact set $L \subseteq M$. We get $\operatorname{supp}(\phi\circ \hat{\alpha}) \subseteq \alpha^{-1}(L)$, because if $\phi(\hat{\alpha}(t)) \neq 1$, we get $\hat{\alpha}(t) \in q^{-1}(L)$ and so $\alpha(t)= q\circ \hat{\alpha}(t) \in L$. Hence $t\in \alpha^{-1}(L)$. Now we take the closure and use that $\alpha^{-1}(L)$ is compact, because $\alpha$ is proper. Moreover we define the integration map
\begin{align*}
I_\alpha \colon \overline{\Omega}_c^1(P,V)_{\rho_V}^\text{hor} \rightarrow V,~ [\theta] \mapsto \int_{\mathbb{R}}\hat{\alpha}^\ast \theta.
\end{align*}
This map is well-defined: Let $\theta \in \Omega^1_c(P,V)_{\rho_V}$ with $\operatorname{supp}(\theta) \subseteq q^{-1}(L)$ for a compact set $L \subseteq M$. We get $\operatorname{supp}(\hat{\alpha}^\ast \theta) \subseteq \alpha^{-1}(L)$, because if $(\hat{\alpha}^\ast \theta)_t \neq 0$ we get $\hat{\alpha}(t) \in q^{-1}(L)$ and so $\alpha(t)= q\circ \hat{\alpha}(t)\in L$. Moreover, because $\hat{\alpha}$ is horizontal,
\begin{align*}
(\hat{\alpha}^\ast D_{\rho_V}f)(t)= (D_{\rho_V}f)_{\hat{\alpha}(t)}(\hat{\alpha}'(t)) = (df)_{\hat{\alpha}(t)}(\hat{\alpha}'(t))= (f\circ \hat{\alpha})'(t).
\end{align*}
Hence $\int_\mathbb{R}(\hat{\alpha}^\ast D_{\rho_V}f) = \int_{\mathbb{R}}(f\circ \hat{\alpha})'(t)\, dt =0$ for $f\in C^\infty_c(P,V)_{\rho_V}$, because $f\circ \hat{\alpha}$ has compact support in $\mathbb{R}$.
\end{definition}
The following remark is obvious.
\begin{remark}\label{GruIntervall}
Let $W= \bigcup_{i=1}^n I_i$ be a union of finitely many closed intervals in $\mathbb{R}$. Then $W$ is a submanifold with boundary. In fact, let $W = \bigcup_{j\in J}C_j$ be the disjoint union of the connected components of $W$. For $j \in J$ and $x\in C_j$ we find $i_j$ with $x\in I_{i_j}$. Hence $I_{i_j}\subseteq C_j$. If $j_1 \neq j_2$ then $I_{i_{j_1}}\cap I_{i_{j_2}} = \emptyset$ and so $i_{j_1} \neq i_{j_2}$. Therefore $\#J \leq n$. Obviously the sets $C_j$ are intervals.
\end{remark}
The following lemma is a modification of the proofs of \cite[Lemma 3.7 and Corollary 3.10]{Schuett:2013}.
\begin{lemma}\label{GruInterCoverSub}
Let $(U_i)_{i\in\mathbb{N}}$ be a relatively compact open cover of $\mathbb{R}$ with $U_i \neq \emptyset$. Then there exists an open cover $(W_i)_{i\in \mathbb{N}}$ of $\mathbb{R}$ such that $W_i \subseteq U_i$, $W_i\neq \emptyset$ and $\overline{W}_i$ is a submanifold with boundary.
\end{lemma}
\begin{proof}
Let $K_n:=[-n,n]$ for $n \in \mathbb{N}$.
For all $x\in K_1$ there exists $i_x\in \mathbb{N}$ such that $x\in U_{i_x}$. Let $\overline{B}_{\varepsilon_x}(x)\subseteq U_{i_x}$. We find $x_1,\dots,x_{N_1}$ such that $K_1 \subseteq \bigcup_{k=1}^{N_1}B_{\varepsilon_{x_k}}(x_k)$. We define $V_{1,k}:= B_{\varepsilon_{x_k}}(x_k)$ and $U_{i_{1,k}}:=U_{i_{x_k}}$ for $k=1,\dots, N_1$. We have $K_1 \subseteq \bigcup_{k=1}^{N_1}V_{1,k}$ and $\overline{V}_{1,k}\subseteq U_{i_{1,k}}$. We can argue analogously for the compact set $K_n\setminus K_{n-1}$ with $n\geq 2$ and find open intervals $V_{n,1},\dots,V_{n,N_n}$ and indices $i_{n,k}$ such that $K_n\setminus K_{n-1} \subseteq \bigcup_{k=1}^{N_n}V_{n,k}$ and $\overline{V}_{n,k}\subseteq U_{i_{n,k}}$. We obtain $\mathbb{R} \subseteq \bigcup_{n=1}^\infty \bigcup_{k=1}^{N_n} V_{n,k}$. For $i \in \mathbb{N}$ we define $I_i:=\{(n,k): i_{n,k}=i\}$. We have $\#I_i<\infty$, because $U_i$ is relatively compact. Now we define
\begin{align*}
W_i:=
\begin{cases}
\bigcup_{(n,k) \in I_i}V_{n,k} &:I_i\neq \emptyset\\
J &:I_i= \emptyset,
\end{cases}
\end{align*}
where $J$ is an arbitrary non-degenerate open interval that is contained in $U_i$. We obtain $\bigcup_{i\in \mathbb{N}}W_i= \mathbb{R}$ and $W_i \subseteq U_i$ for all $i\in \mathbb{N}$. Moreover $W_i$ is a finite union of open intervals. Let $W_i= \bigcup_{j=1}^nJ_j$ with intervals $J_j$. We have $\overline{W}_i= \bigcup_{j=1}^n\overline{J}_j$. Hence $\overline{W}_i$ is a manifold with boundary (see Remark \ref{GruIntervall}).
\end{proof}
In the following lemma we use the concept of weak products of infinite-dimensional Lie groups (cf. \cite[Section 7]{Glockner:2003} respectively \cite[Section 4]{Glockner:2007}).
\begin{lemma}\label{GruDerPullBackSmooth}
In the situation of Definition \ref{GruDerPullBack} the group homomorphism
$$\hat{\alpha}_G^\ast \colon C^\infty_c(P,G)_{\rho_G} \rightarrow C^\infty_c(\mathbb{R}, G),~ \phi\mapsto \phi\circ \hat{\alpha}$$
is in fact a Lie group homomorphism and the corresponding Lie algebra homomorphism is given by $\hat{\alpha}^\ast_\frak{g} \colon C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \rightarrow C^\infty_c(\mathbb{R}, \frak{g}),~ f\mapsto f\circ \hat{\alpha}$.
\end{lemma}
\begin{proof}
Using the construction of the Lie group structure described in \cite[Chapter 4]{Schuett:2013}, we can argue in the following way. Let $(\overline{V}_i,\sigma_i)_{i\in \mathbb{N}}$ be a locally finite compact trivialising system of $H\hookrightarrow P \xrightarrow{q}M$ (see \cite[Definition 3.6 and Corollary 3.10]{Schuett:2013}). We define $U_i:= \alpha^{-1}(V_i)$ for $i \in \mathbb{N}$. Because the map $\alpha$ is proper, $(\alpha^{-1}(\overline{V}_i))_{i\in \mathbb{N}}$ is a compact locally finite cover of $\mathbb{R}$, and hence so is $(\overline{U}_i)_{i\in \mathbb{N}}$. We use Lemma \ref{GruInterCoverSub} and find a cover $(W_i)_{i\in \mathbb{N}}$ of $\mathbb{R}$ such that $W_i \subseteq U_i$ and $(\overline{W}_i )_{i\in \mathbb{N}}$ is a compact locally finite cover of $\mathbb{R}$ by submanifolds with boundary. Moreover we have $\overline{W}_i \subseteq \alpha^{-1}(\overline{V}_i)$ for all $i \in \mathbb{N}$.
Now $(\overline{W}_i, \mathrm{id}|^\mathbb{R}_{\overline{W}_i})_{i\in \mathbb{N}}$ is a compact locally finite trivialising system of the trivial principal bundle $\{1\}\hookrightarrow \mathbb{R} \xrightarrow{\mathrm{id}}\mathbb{R}$ with the trivial action $\{1\}\times G \rightarrow G$. We get the following diagram
\begin{align}\label{GruKomDiaGlatt}
\begin{xy}
\xymatrixcolsep{5pc}\xymatrix{
C^\infty_c(P,G)_{\rho_G} \ar@{_{(}->}[d]_{f\mapsto (f\circ \sigma_i)_{i}} \ar[r]^{\hat{\alpha}^\ast_G}& C^\infty_c(\mathbb{R},G) \ar@{_{(}->}[d]^{f\mapsto (f|_{W_i})_{i}}\\
\prod^\ast_{i\in \mathbb{N}} C^\infty(\overline{V}_i,G) \ar[r]^{(\psi_i)_{i\in \mathbb{N}}}& \prod^{\ast}_{i\in \mathbb{N}} C^\infty(\overline{W}_i,G)
}
\end{xy}
\end{align}
where the group homomorphisms $\psi_i$ are given by the diagram
\begin{align*}
\begin{xy}
\xymatrixcolsep{5pc}\xymatrix{
C^\infty(\overline{V}_i,G) \ar[r]^{\psi_i} \ar[d]_{\theta_i}& C^\infty(\overline{W}_i,G)\\
C^\infty(\overline{V}_i\times H, G) \ar[d]_{f\mapsto f\circ \phi_i}&\\
C^\infty(P|_{\overline{V}_i},G)\ar[uur]_{f\mapsto f\circ \hat{\alpha}|_{\overline{W}_i}}&
}
\end{xy}
\end{align*}
with $\theta_i \colon C^\infty(\overline{V}_i,G)\rightarrow C^\infty(\overline{V}_i\times H, G),~f\mapsto ((x,h)\mapsto \rho_G(h).f(x))$ and $\phi_i$ the inverse of $\overline{V}_i\times H \rightarrow P|_{\overline{V}_i},~ (x,h)\mapsto \sigma_i(x)h$. Defining $\tau^i\colon \overline{W}_i \rightarrow\overline{V}_i\times H,~ \tau^i:= \phi_i\circ \hat{\alpha}|_{\overline{W}_i}$ and $\tau^i_j:=\mathrm{pr}_j\circ \tau^i$ for $j \in \{1,2\}$, the map $\psi_i \colon C^\infty(\overline{V}_i,G) \rightarrow C^\infty(\overline{W}_i, G)$ is given by
\begin{align*}
f \mapsto {\rho}_G (\mathrm{pr}_2\circ \phi_i \circ \hat{\alpha}|_{\overline{W}_i}({\scriptscriptstyle \bullet})). (f\circ \mathrm{pr}_1\circ \phi_i \circ \hat{\alpha}|_{\overline{W}_i}({\scriptscriptstyle \bullet})) = {\rho}_G(\tau^i_2({\scriptscriptstyle \bullet})).(f\circ \tau^i_1({\scriptscriptstyle \bullet})).
\end{align*}
In order to show that (\ref{GruKomDiaGlatt}) is commutative let $f\in C^\infty_c(P,G)_{\rho_G}$. Then
\begin{align*}
\rho_G(h).f\circ\sigma_i(x) = f(\sigma_i(x).h) =f(\phi_i^{-1}(x,h))
\end{align*}
for all $(x,h) \in \overline{V}_i \times H$. Hence $\psi_i(f\circ \sigma_i) = f\circ\hat{\alpha}|_{\overline{W}_i}$. To show that $\psi_i$ is a Lie group homomorphism it is enough to show that $C^\infty(\overline{V}_i, G) \times \overline{W}_i \rightarrow G,~ (f,x)\mapsto \rho_G(\tau^i_2(x), f(\tau^i_1(x)))$ is smooth (\cite{Alzaareer:2013} respectively \cite[Theorem 2.25]{Schuett:2013}). The map $C^\infty(\overline{V}_i,G) \times \overline{V}_i \rightarrow G,~ (f,y) \mapsto f(y)$ is smooth (see \cite{Alzaareer:2013} respectively \cite[Theorem 2.26]{Schuett:2013}) and so $C^\infty(\overline{V}_i,G) \times \overline{W}_i \rightarrow H\times G,~(f,x)\mapsto (\tau^i_2(x), f(\tau^i_1(x)))$ is smooth. It remains to show that $L(\hat{\alpha}^\ast_G)$ is given by $C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \rightarrow C^\infty_c(\mathbb{R},\frak{g}),~ f\mapsto f\circ \hat{\alpha}$. To this end let $f\in C^\infty_c(P,\frak{g})_{\rho_\frak{g}}$.
We calculate
\begin{align*}
&L(\hat{\alpha}^\ast_G)(f) = \frac{\partial}{\partial t}\bigg|_{t=0} \hat{\alpha}_G^\ast(\exp(tf)) = \frac{\partial}{\partial t}\bigg|_{t=0} (\exp(tf) \circ \hat{\alpha})\\
= &\frac{\partial}{\partial t}\bigg|_{t=0} (\exp_G\circ (t\cdot f) \circ \hat{\alpha}) \overset{\ast}{=} \frac{\partial}{\partial t}\bigg|_{t=0} ( (t\cdot f) \circ \hat{\alpha}) = f\circ \hat{\alpha},
\end{align*}
where $\ast$ follows from
\begin{align*}
\mathrm{ev}_p\left(\frac{\partial}{\partial t}\bigg|_{t=0} (\exp_G\circ (t\cdot f) \circ \hat{\alpha}) \right)= \frac{\partial}{\partial t}\bigg|_{t=0} \left(\exp_G (t\cdot f (\hat{\alpha}(p)))\right) = \mathrm{ev}_p\left(\frac{\partial}{\partial t}\bigg|_{t=0} ( (t\cdot f) \circ \hat{\alpha}) \right)
\end{align*}
for $p \in \mathbb{R}$. Now the assertion follows from \cite[Proposition 4.5]{Glockner:2007} respectively \cite[Corollary 2.38]{Schuett:2013}.
\end{proof}
\begin{definition}[Cf. proof of Lemma V.10 in \cite{Neeb:2004}]
We define the cocycle
\begin{align*}
&\omega_\mathbb{R} \colon C^\infty_c(\mathbb{R},\frak{g})^2 \rightarrow \overline{\Omega}_c^1(\mathbb{R}, V)= H_{dR,c}^1(\mathbb{R},V) \rightarrow V,\\
& (f,g) \mapsto [\kappa_\frak{g}(f,g')] \mapsto \int_\mathbb{R} \kappa_\frak{g}(f(t), g'(t))\, dt.
\end{align*}
\end{definition}
The following Lemmas \ref{GruLemma1}, \ref{GruLemmaInteggM} and \ref{GruLemma2} are used to prove Lemma \ref{GruWichtigLemma}, a generalisation of \cite[Lemma V.16]{Neeb:2004} from the case of a current group to the case of a group of sections. In the case of a compact base manifold, a statement corresponding to the following Lemma \ref{GruLemma1} is given by equation (9) in \cite[Remark 4.3]{Neeb:2009}.
\begin{lemma}\label{GruLemma1}
Given $x_0 \in M$, $p_0 \in P_{x_0}$ and $\alpha \in C^\infty_p(\mathbb{R}, M)$ with $\alpha(0)=x_0$ we get
\begin{align}\label{GruGL1CCC}
I_\alpha \circ \omega_M = \omega_\mathbb{R} \circ (\hat{\alpha}^\ast_\frak{g} \times \hat{\alpha}^\ast_\frak{g} ).
\end{align}
Hence the following diagram commutes:
\begin{align*}
\begin{xy}
\xymatrixcolsep{5pc}\xymatrix{
C^\infty_c(P, \frak{g})_{\rho_\frak{g}}^2 \ar[r]^{\omega_M} \ar[d]_{\hat{\alpha}^\ast_\frak{g} \times \hat{\alpha}_\frak{g}^\ast}& \overline{\Omega}_c^1(P,V)_{\rho_V}^\text{hor} \ar[d]^{I_\alpha} \\
C^\infty_c(\mathbb{R} , \frak{g})^2 \ar[r]^{\omega_\mathbb{R}} & V.
}
\end{xy}
\end{align*}
\end{lemma}
\begin{proof}
For $g\in C^\infty_c(P, \frak{g})_{\rho_\frak{g}}$ we have
\begin{align*}
(\hat{\alpha}^\ast D_{\rho_\frak{g}}g)(t) = D_{\rho_\frak{g}} g(\hat{\alpha}'(t)) = (g\circ \hat{\alpha})'(t),
\end{align*}
because $\hat{\alpha}$ is a horizontal map. For $f,g\in C^\infty_c(P, \frak{g})_{\rho_\frak{g}}$ we get
\begin{align*}
&I_\alpha(\omega_M (f,g)) = I_\alpha([\kappa_\frak{g}(f,D_{\rho_\frak{g}}g)]) = \int_\mathbb{R}\hat{\alpha}^\ast \kappa_\frak{g}(f,D_{\rho_\frak{g}}g) = \int_\mathbb{R} \kappa_\frak{g}(f\circ \hat{\alpha} , \hat{\alpha}^\ast D_{\rho_\frak{g}}g)\\
=& \int_\mathbb{R} \kappa_\frak{g}(f\circ \hat{\alpha}(t) , (g\circ \hat{\alpha})'(t))\, dt = \omega_\mathbb{R} \circ (\hat{\alpha}^\ast_\frak{g} \times \hat{\alpha}^\ast_\frak{g})(f,g).
\end{align*}
\end{proof}
The following Lemma \ref{GruLemmaInteggM} can be found in \cite[Remark C.2 (a)]{Neeb:2009}.
\begin{lemma}\label{GruLemmaInteggM}
Let $\phi \colon G_1 \rightarrow G_2$ be a Lie group homomorphism and $\frak{g}_i$ the Lie algebra of $G_i$ for $i \in \{1,2\}$. Moreover let $V$ be a trivial $\frak{g}_i$-module and $\omega \in Z^2_c(\frak{g}_2,V)$. Then we get
\begin{align}\label{GruGL2IntegMapCCC}
\mathrm{per}_{\omega} \circ \pi_2(\phi) = \mathrm{per}_{L(\phi)^\ast \omega}
\end{align}
as an equation in the set of group homomorphisms from $\pi_2(G_1)$ to $V$.
\end{lemma}
The following lemma corresponds to the first equation in \cite[Remark 4.3 (10)]{Neeb:2009}.
\begin{lemma}\label{GruLemma2}
Let $x_0\in M$ and $p_0 \in P_{x_0}$ be base points and $\alpha \in C_p^\infty(\mathbb{R},M)$. Then
\begin{align}\label{GruGL4IntegMapCCC}
I_\alpha \circ \mathrm{per}_{\omega_M} =\mathrm{per}_{I_\alpha \circ \omega_M} \colon \pi_2(C^\infty_c(P,G)_{\rho_G}) \rightarrow V.
\end{align}
\end{lemma}
\begin{proof}
Let $\omega_M^l \in \Omega^2(C^\infty_c(P,G)_{\rho_G} , \overline{\Omega}^1_c(P,V)_{\rho_V}^\text{hor})$ be the corresponding left invariant 2-form of $\omega_M \in Z^2_{ct}(C^\infty_c(P,\frak{g})_{\rho_{\frak{g}}}, \overline{\Omega}^1_c(P,V)_{\rho_V}^\text{hor})$. Then $I_\alpha \circ \omega_M^l \in \Omega^2_{dR}(C^\infty_c(P,G)_{\rho_G},V)$ is left invariant and
\begin{align*}
(I_\alpha \circ \omega_M^l)_1(f,g) = I_\alpha ((\omega_M^l)_1(f,g)) =I_\alpha \circ \omega_M(f,g)
\end{align*}
for $f,g\in C^\infty_c(P,\frak{g})_{\rho_{\frak{g}}}=T_1C_c^\infty(P,G)_{\rho_G}$. Hence $(I_\alpha \circ \omega_M)^l = I_\alpha \circ \omega_M^l$. For $[\sigma]\in \pi_2(C^\infty_c(P,G)_{\rho_G})$ with a smooth representative $\sigma$ we get
\begin{align*}
\mathrm{per}_{I_\alpha \circ \omega_M}([\sigma]) = \int_{\mathbb{S}^2}\sigma^\ast (I_\alpha \circ \omega_M^l) = \int_{\mathbb{S}^2} I_\alpha \circ \sigma^\ast \omega_M^l = I_\alpha \left( \int_{\mathbb{S}^2} \sigma^\ast \omega_M^l\right) = I_\alpha( \mathrm{per}_{\omega_M} ([\sigma])).
\end{align*}
\end{proof}
The following lemma is a generalisation of \cite[Lemma V.16]{Neeb:2004} in the case of a finite-dimensional Lie group.
\begin{lemma}\label{GruWichtigLemma}
For a proper map $\alpha \in C_p^\infty(\mathbb{R},M)$ and base points $x_0\in M$ and $p_0 \in P_{x_0}$ the following diagram commutes:
\begin{align*}
\begin{xy}
\xymatrixcolsep{5pc}\xymatrix{
\pi_2(C^\infty_c(P,G)_{\rho_G}) \ar[r]^-{\mathrm{per}_{\omega_M}} \ar[d]_-{\pi_2(\hat{\alpha}^\ast_G)} & \overline{\Omega}_c^1(P,V)_{\rho_V}^{\text{hor}} \ar[d]^-{I_\alpha}\\
\pi_2(C^\infty_c(\mathbb{R},G)) \ar[r]_-{\mathrm{per}_{\omega_{\mathbb{R}}}} & V.
}
\end{xy}
\end{align*}
\end{lemma}
\begin{proof}
We calculate
\begin{align}\label{GruGL3IntegMapCCC}
I_\alpha \circ \mathrm{per}_{\omega_M} \overset{(\ref{GruGL4IntegMapCCC})}{=} \mathrm{per}_{I_\alpha \circ \omega_M} \overset{(\ref{GruGL1CCC})}{=} \mathrm{per}_{\omega_{\mathbb{R}}\circ (\hat{\alpha}^\ast \times \hat{\alpha}^\ast)} \overset{(\ref{GruGL2IntegMapCCC})}{=} \mathrm{per}_{\omega_{\mathbb{R}}} \circ \pi_2(\hat{\alpha}_G^\ast).
\end{align}
\end{proof}
\begin{lemma}\label{GruIntegralLemma}
Let $x_0 \in M$ and $p_0 \in P_{x_0}$ be base points and $\alpha \in C_p^\infty(\mathbb{R},M)$ with $\alpha(0)=x_0$. Moreover, let $\hat{\alpha} \in C^{\infty}(\mathbb{R},P)$ be the unique horizontal lift of $\alpha$ to $P$ with $\hat{\alpha}(0)=p_0$.
\begin{compactenum}
\item We have the commutative diagram
\begin{align*}
\begin{xy}
\xymatrixcolsep{5pc}\xymatrix{
H_{dR,c}^1(M,V_\text{fix}) \ar[d]_-{\cong}^{q^\ast} \ar[r]^-{I_\alpha}_-{[\theta] \mapsto \int_\mathbb{R} \alpha^\ast \theta} & V \ar[d]^-{\mathrm{id}}\\
H_{dR,c}^1(P , V)_{\rho_V} \ar[r]^-{I_\alpha}_-{[\theta] \mapsto \int_\mathbb{R} \hat{\alpha}^\ast \theta} & V.
}
\end{xy}
\end{align*}
\item Given $[\theta] \in H^1_{dR,c}(M,V_\text{fix})$ we have $\int_\mathbb{R} \hat{\alpha}^\ast q^\ast \theta = \int_{\mathbb{R}} \alpha^\ast \theta$, respectively
\begin{align*}
\int_\mathbb{R} \hat{\alpha}^\ast \theta = \int_\mathbb{R} \alpha^\ast q_\ast \theta
\end{align*}
for all $[\theta] \in H_{dR,c}^1(P,V)_{\rho_V}$.
\end{compactenum}
\end{lemma}
\begin{proof}
\begin{compactenum}
\item Given $[\theta] \in H^1_{dR,c}(M,V_\text{fix})$, we calculate
\begin{align*}
\int_\mathbb{R} \hat{\alpha}^\ast (q^\ast \theta) = \int_\mathbb{R} (q \circ \hat{\alpha})^\ast \theta = \int_\mathbb{R} \alpha^\ast \theta.
\end{align*}
\item Clear.
\end{compactenum}
\end{proof}
The following Lemma \ref{GruDiskLemma} comes from \cite[Corollary IV.21]{Neeb:2004}.
\begin{lemma}\label{GruDiskLemma}
If $\Gamma \subseteq V$ is a discrete subgroup then
\begin{align*}
H^1_{dR,c}(M,\Gamma) := \{[\theta] \in H^1_{dR,c}(M,V): (\forall \alpha \in C^\infty_p(\mathbb{R},M))~ \int_\mathbb{R} \alpha^\ast \theta \in \Gamma \}
\end{align*}
is a discrete subgroup of $\overline{\Omega}^1_c(M,V)$.
\end{lemma}
The following statement can be found in the proof of \cite[Proposition V.19]{Neeb:2004}.
\begin{lemma}\label{GruRLemma}
Because $\kappa_\frak{g}$ is universal and $V$ is finite-dimensional, it is well known that $\Pi_{\omega_\mathbb{R}}=\mathrm{im}(\mathrm{per}_{\omega_\mathbb{R}})$ is a discrete subgroup of $\overline{\Omega}^1_c(\mathbb{R},V) = H_{dR,c}^1(\mathbb{R},V) \cong V$.
\end{lemma}
\begin{proof}
We argue exactly as in the proof of \cite[Proposition V.19]{Neeb:2004}, by combining \cite[Theorem II.9]{Maier:2003} and \cite[Lemma V.11]{Neeb:2004}.
\end{proof}
\begin{remark}\label{GruproppeperinPFP}
Because $\overline{q} \colon \overline{P} \rightarrow M$ is a finite covering, $\overline{q}$ is a proper map and so a curve $\overline{\alpha}\colon \mathbb{R} \rightarrow \overline{P}$ is proper if and only if $\overline{q} \circ \overline{\alpha} \colon \mathbb{R} \rightarrow M$ is proper. Hence the maps in $C^\infty_p(\mathbb{R},\overline{P})$ are proper in the usual sense.
\end{remark}
\begin{lemma}\label{GruIntegralLemmaPiStern}
Let $\overline{\alpha}\colon \mathbb{R} \rightarrow \overline{P}$ be a proper map. We define $\overline{x}_0:= \overline{\alpha}(0)$, $x_0:=\overline{q}(\overline{x}_0)$ and $\alpha := \overline{q} \circ \overline{\alpha}$.
Moreover let $p_0 \in P$ with ${\pi}(p_0)=\overline{x}_0$ and let $\hat{\alpha}\colon \mathbb{R} \rightarrow P$ be the unique horizontal lift of $\alpha$ to $P$ with $\hat{\alpha}(0)= p_0$. Then the diagram
\begin{align*}
\begin{xy}
\xymatrixcolsep{5pc}\xymatrix{
\overline{\Omega}_c^1(\overline{P},V)_{\overline{\rho}_V} \ar[d]_-{\cong}^{\pi^\ast} \ar[r]_-{[\theta] \mapsto \int_\mathbb{R} \overline{\alpha}^\ast \theta} & V \ar[d]^-{\mathrm{id}}\\
\overline{\Omega}_c^1(P , V)_{\rho_V}^\text{hor} \ar[r]^-{I_\alpha}_-{[\theta] \mapsto \int_\mathbb{R} \hat{\alpha}^\ast \theta} & V
}
\end{xy}
\end{align*}
commutes.
\end{lemma}
\begin{proof}
We have $\pi \circ\hat{\alpha} = \overline{\alpha}$, because $\overline{\alpha}$ is the unique horizontal lift of $\alpha$ to $\overline{P}$ with $\overline{\alpha}(0)= \overline{x}_0$ and $\pi\circ \hat{\alpha}$ is also a horizontal lift of $\alpha$ to $\overline{P}$ that maps $0$ to $\overline{x}_0$. Hence
\begin{align*}
\int_\mathbb{R} \hat{\alpha}^\ast (\pi^\ast \theta) = \int_\mathbb{R} (\pi \circ \hat{\alpha})^\ast \theta = \int_\mathbb{R} \overline{\alpha}^\ast \theta
\end{align*}
for $\theta \in \Omega^1_c(\overline{P},V)_{\overline{\rho}_V}$.
\end{proof}
The proof of the following lemma is similar to the proof of \cite[Lemma A.1]{Neeb:2004}.
\begin{lemma}\label{GruKompaktRegulaer}
Given a compact set $L \subseteq C^\infty_c(P,G)_{\rho_G}$ we find a compact set $K\subseteq M$ such that $L \subseteq C^\infty_K(P,G)_{\rho_G}$.
\end{lemma}
\begin{proof}
From \cite[Theorem 4.18]{Schuett:2013} we know that the map $\exp_\ast \colon C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \rightarrow C^\infty_c(P,G)_{\rho_G},~ f\mapsto \exp_G \circ f$ is a local diffeomorphism around $0$. Given a compact set $K\subseteq M$ we have
\begin{align}\label{GruExpPasst}
\exp_\ast(C^\infty_K(P,\frak{g})_{\rho_\frak{g}}) \subseteq C^\infty_K(P,G)_{\rho_G}.
\end{align}
Let $U\subseteq C^\infty_c(P,G)_{\rho_G}$ be a $1$-neighbourhood and $V\subseteq C^\infty_c(P,\frak{g})_{\rho_\frak{g}}$ be a $0$-neighbourhood such that $\exp_\ast|_V^U$ is a diffeomorphism. We write $\varphi:= \left(\exp_\ast|_V^U\right)^{-1}$. If $L \subseteq U$ is a compact set, then $\varphi (L)$ is a compact subset of $C^\infty_c(P,\frak{g})_{\rho_\frak{g}}$. Because $C^\infty_c(P,\frak{g})_{\rho_\frak{g}}$ is a strict LF-space we find a compact subset $K\subseteq M$ such that $\varphi (L)\subseteq C^\infty_K(P,\frak{g})_{\rho_{\frak{g}}} \cap V$ (see \cite[Theorem 6.4]{Wengenroth:2003} or \cite[Remark 6.2 (d)]{Glockner:2008a}). Hence with (\ref{GruExpPasst}) we get $L \subseteq C^\infty_K(P,G)_{\rho_G}$. Now let $L \subseteq C^\infty_c(P,G)_{\rho_G}$ be an arbitrary compact subset. Let $W\subseteq C^\infty_c(P,G)_{\rho_G}$ be a $1$-neighbourhood such that $\overline{W}\subseteq U$. Because $L$ is compact we find $n\in \mathbb{N}$ and $g_i \in C^\infty_c(P,G)_{\rho_G}$ such that $L \subseteq \bigcup_{i=1}^n g_i\cdot \overline{W}$. Defining the compact set $L_i:= L\cap g_i\cdot \overline{W}$, we get $L\subseteq \bigcup_{i=1}^n L_i$. Let $i \in \{1,\dots,n\}$. We get $g_i^{-1} \cdot L_i \subseteq \overline{W} \subseteq U$. Hence we find a compact set $K_1\subseteq M$ with $g_i^{-1} \cdot L_i \subseteq C^\infty_{K_1}(P,G)_{\rho_G}$.
Let $K_2\subseteq M$ be compact with $\mathrm{supp}(g_i)\subseteq q^{-1}(K_2)$, and set $K_i:= K_1\cup K_2$ and $K:= \bigcup_{i=1}^n K_i$. We get
\begin{align*}
L_i \subseteq g_i \cdot C^\infty_{K_1}(P,G)_{\rho_G} \subseteq C^\infty_{K_i}(P,G)_{\rho_G} \subseteq C^\infty_{K}(P,G)_{\rho_G}.
\end{align*}
Hence $L= \bigcup_{i=1}^n L_i \subseteq C^\infty_{K}(P,G)_{\rho_G}$.
\end{proof}

In \cite[Remark IV.17]{Neeb:2004} (where $M$ is non-compact) Neeb extends a smooth loop $\alpha \colon [0,1]\rightarrow M$ by a smooth proper map $\gamma \colon [0,\infty[ \rightarrow M$ to a proper map $\widetilde{\alpha} \colon \mathbb{R} \rightarrow M$ such that for all $1$-forms $\theta$ with compact support one gets $\int_\alpha\theta= \int_{\widetilde{\alpha}} \theta$. This construction is also used in the proof of the following Theorem \ref{GruAllFormsAreClosed}. An analogous theorem was proved in the compact case in \cite[Proposition 4.11]{Neeb:2009}.

\begin{theorem}\label{GruAllFormsAreClosed}
If $M$ is non-compact, we have
\begin{align*}
\mathrm{im}(\mathrm{per}_{\omega_M}) = \Pi_{\omega_M} \subseteq H^1_{dR,c}(M,\mathbb{V}) \subseteq \overline{\Omega}^1_c(M,\mathbb{V}).
\end{align*}
This means that all forms in $\Pi_{\omega_M}$ are closed.
\end{theorem}

\begin{proof}
Because $\pi^\ast \colon {\Omega}^{\scriptscriptstyle \bullet}_c(\overline{P},V)_{\overline{\rho}_V} \rightarrow {\Omega}^{\scriptscriptstyle \bullet}_c(P,V)_{\rho_V}^\text{hor}$ is an isomorphism of chain complexes, it is enough to show $\Pi_{\omega_M} \subseteq H^1_{dR,c}(\overline{P},V) \subseteq \overline{\Omega}^1_c(\overline{P},V)$. To this end let $[\theta] \in \Pi_{\omega_M}$ and let $\overline{\alpha}_0, \overline{\alpha}_1 \colon [0,1]\rightarrow \overline{P}$ be closed smooth curves in a point $\overline{x}_0 \in \overline{P}$ that are homotopic relative $\{0,1\}$ by a smooth homotopy $\overline{F}\colon [0,1]^2 \rightarrow \overline{P}$. From Lemma \ref{GrugeschlForm} we see that it is enough to show $\int_{\overline{\alpha}_0} \theta=\int_{\overline{\alpha}_1} \theta$. By composing $\overline{\alpha}_i$ respectively $\overline{F}(s,{\scriptscriptstyle \bullet})$ with a strictly increasing diffeomorphism $\phi \colon [0,1]\rightarrow [0,1]$ whose jets vanish in $0$ and $1$, we can assume that in a local chart all derivatives of $\overline{\alpha}_i$ and $\overline{F}(s,{\scriptscriptstyle \bullet})$ vanish in $0$ and $1$, respectively, because $\int_{\overline{\alpha}_i}\theta = \int_{\overline{\alpha}_i\circ \phi}\theta$ (orientation-preserving reparametrisation does not change line integrals). Because $M$ is non-compact, we find a proper map $\overline{\gamma} \colon [0,\infty[ \rightarrow \overline{P}$ such that $\overline{\gamma}(0)=\overline{x}_0$ and in a local chart all derivatives vanish in $0$ (see \cite[Lemma IV.5]{Neeb:2004} and compose with $x\mapsto x^3$). For $i \in \{0,1\}$ we define the smooth map
\begin{align*}
\overline{\alpha}_i^\mathbb{R} \colon \mathbb{R} \rightarrow \overline{P},~ t \mapsto
\begin{cases}
\overline{\gamma}(-t) &:t<0\\
\overline{\alpha}_i(t)&:t \in [0,1]\\
\overline{\gamma}(t-1)&: t>1.
\end{cases}
\end{align*}
Moreover, we define the smooth homotopy
\begin{align*}
\overline{F}^\mathbb{R} \colon [0,1] \times \mathbb{R} \rightarrow \overline{P}, ~ (s,t) \mapsto
\begin{cases}
\overline{\gamma}(-t) &:t<0\\
\overline{F}(s,t)&:t \in [0,1]\\
\overline{\gamma}(t-1)&: t>1.
\end{cases}
\end{align*}
Hence we have $\overline{\alpha}_i^\mathbb{R}, \overline{F}^\mathbb{R}(s,{\scriptscriptstyle \bullet}) \in C^\infty_p(\mathbb{R},\overline{P})$ for $i \in \{0,1\}$ and $s\in [0,1]$ (see Remark \ref{GruproppeperinPFP}). We define $\alpha_i:=\overline{q}\circ\overline{\alpha}_i$, $F:=\overline{q}\circ \overline{F}$, $\alpha_i^\mathbb{R}:= \overline{q}\circ \overline{\alpha}^\mathbb{R}_i$, $F^\mathbb{R}:= \overline{q}\circ \overline{F}^\mathbb{R}$, $\gamma:= \overline{q}\circ \overline{\gamma}$ and $x_0:=\overline{q}(\overline{x}_0)$. The curves $\alpha_0$ and $\alpha_1$ are closed curves in $x_0$ and are homotopic relative $\{0,1\}$ by the homotopy $F$, because $\alpha_i(j)=\overline{q}(\overline{x}_0) =x_0$ and $F(i,{\scriptscriptstyle \bullet})=\overline{q}\circ \overline{F}(i,{\scriptscriptstyle \bullet})=\overline{q}\circ \overline{\alpha}_i = \alpha_i$ for $j,i \in \{0,1\}$. Moreover
\begin{align*}
\alpha_i^\mathbb{R}(t)= \begin{cases} \gamma(-t) &:t<0\\ \alpha_i(t)&:t \in [0,1]\\ \gamma(t-1)&: t>1, \end{cases}
\qquad
F^\mathbb{R}(s,t)= \begin{cases} \gamma(-t) &:t<0\\ F(s,t)&:t \in [0,1]\\ \gamma(t-1)&: t>1 \end{cases}
\end{align*}
and $\alpha_i^\mathbb{R}, F^\mathbb{R}(s,{\scriptscriptstyle \bullet}) \in C^\infty_p(\mathbb{R},M)$. We choose $p_0 \in \pi^{-1}(\{\overline{x}_0\})$. Now let $\hat{\alpha}_i^\mathbb{R} \colon \mathbb{R} \rightarrow P$ be the unique horizontal lift of $\alpha_i^\mathbb{R}$ to $P$ with $\hat{\alpha}^\mathbb{R}_i(0)=p_0$ and let $\hat{F}^\mathbb{R}\colon [0,1]\times \mathbb{R} \rightarrow P$ be the unique horizontal lift of $F^\mathbb{R}$ to $P$ such that $\hat{F}^\mathbb{R}(s,0) = p_0$ for all $s\in [0,1]$. The map $\hat{F}^\mathbb{R}$ is not necessarily a homotopy relative $\{0,1\}$, but we have $\hat{F}^\mathbb{R}(0,s) = \hat{\alpha}_0^\mathbb{R}(s)$ and $\hat{F}^\mathbb{R}(1,s) = \hat{\alpha}_1^\mathbb{R}(s)$ for all $s \in [0,1]$. For $i\in \{0,1\}$ we have
\begin{align}\label{GruIntegGleichung}
\int_{\overline{\alpha}_i}\theta =\int_{\overline{\alpha}_i^\mathbb{R}}\theta = \int_{\hat{\alpha}^\mathbb{R}_i} \pi^\ast \theta,
\end{align}
where the last equation follows from Lemma \ref{GruIntegralLemmaPiStern}. Because of (\ref{GruIntegGleichung}) and Lemma \ref{GruWichtigLemma} it is enough to show $\pi_2((\hat{\alpha}_0^\mathbb{R})^\ast)= \pi_2((\hat{\alpha}_1^\mathbb{R})^\ast)$ as group homomorphisms from $\pi_2(C^\infty_c(P,G)_{\rho_G})$ to $\pi_2(C^\infty_c(\mathbb{R},G))$. From \cite[Theorem A.7]{Neeb:2004} we get $\pi_2(C^\infty_c(\mathbb{R},G)) = \pi_2(C_c(\mathbb{R},G))$. We set $I:=[0,1]$. Let $\sigma \colon I^2 \rightarrow C^\infty_c(P,G)_{\rho_G}$ be continuous with $\sigma|_{\partial I^2} =c_{1_G}$.
Because $\pi_2((\hat{\alpha}_i^\mathbb{R})^\ast)([\sigma]) = [\sigma({\scriptscriptstyle \bullet})\circ \hat{\alpha}_i^\mathbb{R}]$ for $i\in \{0,1\}$, it is enough to show
\begin{align*}
[\sigma({\scriptscriptstyle \bullet})\circ \hat{\alpha}_0^\mathbb{R}] = [\sigma({\scriptscriptstyle \bullet})\circ \hat{\alpha}_1^\mathbb{R}]
\end{align*}
in $\pi_2(C_c(\mathbb{R},G))$. Hence we have to construct a continuous map $H\colon [0,1]\times I^2 \rightarrow C_c(\mathbb{R},G)$ with $H(0,{\scriptscriptstyle \bullet})= \sigma({\scriptscriptstyle \bullet}) \circ \hat{\alpha}^\mathbb{R}_0$, $H(1,{\scriptscriptstyle \bullet})=\sigma({\scriptscriptstyle \bullet}) \circ \hat{\alpha}^\mathbb{R}_1$ and $H(s,x) =c_{1_G}$ for all $s\in [0,1]$ and $x\in \partial I^2$. We define $H(s,x)= \sigma(x)\circ \hat{F}^\mathbb{R}(s,{\scriptscriptstyle \bullet})$ for $s\in [0,1]$ and $x \in I^2$. Because $\sigma|_{\partial I^2}= c_{1_G}$, it is left to show that $H$ is continuous. Let $K\subseteq M$ be compact such that $\mathrm{im}(\sigma) = \sigma(I^2) \subseteq C^\infty_K(P,G)_{\rho_G}$ (see Lemma \ref{GruKompaktRegulaer}). For $f\in C^\infty_K(P,G)_{\rho_G}$ we have $\mathrm{supp}(f\circ \hat{\alpha}^\mathbb{R}_i) \subseteq (\alpha_i^\mathbb{R})^{-1}(K)$ as well as $\mathrm{supp}(f\circ \hat{F}^\mathbb{R}(s,{\scriptscriptstyle \bullet})) \subseteq F^\mathbb{R}(s,{\scriptscriptstyle \bullet})^{-1}(K)$ for $s\in [0,1]$. Hence $\mathrm{supp}(\sigma(x)\circ \hat{F}^\mathbb{R}(s,{\scriptscriptstyle \bullet})) \subseteq F^\mathbb{R}(s,{\scriptscriptstyle \bullet})^{-1}(K)$ for $x\in I^2$ and $s\in [0,1]$. We have
\begin{align*}
{F^\mathbb{R}}^{-1}(K) &= F^\mathbb{R}|_{[0,1]\times [0,1]}^{-1} (K) \cup F^\mathbb{R}|_{[0,1]\times\, ]-\infty,0]}^{-1}(K) \cup F^\mathbb{R}|_{[0,1]\times[1,\infty[}^{-1}(K)\\
&=F^\mathbb{R}|_{[0,1]\times [0,1]}^{-1} (K) \cup \big([0,1] \times (-\gamma^{-1}(K))\big) \cup \big([0,1] \times (\gamma^{-1}(K)+1)\big).
\end{align*}
Hence ${F^\mathbb{R}}^{-1} (K)\subseteq [0,1]\times \mathbb{R}$ is compact. Therefore
\begin{align*}
L:= \bigcup_{s\in [0,1]} F^\mathbb{R}(s,{\scriptscriptstyle \bullet})^{-1} (K) = \mathrm{pr}_2({F^\mathbb{R}}^{-1}(K))\subseteq \mathbb{R}
\end{align*}
is compact. We have $\mathrm{supp}(\sigma(x)\circ \hat{F}^\mathbb{R}(s,{\scriptscriptstyle \bullet})) \subseteq L$ for all $x\in I^2$ and $s\in [0,1]$. Thus $\mathrm{im}(H) \subseteq C_L(\mathbb{R},G)$. Therefore it is enough to show that $H\colon [0,1]\times I^2 \rightarrow C_L(\mathbb{R},G) \subseteq C(\mathbb{R},G),~ (s,x) \mapsto \sigma(x)\circ \hat{F}^\mathbb{R}(s,{\scriptscriptstyle \bullet})$ is continuous. We know that $\tau \colon [0,1]\rightarrow C(\mathbb{R},P),~ s\mapsto \hat{F}^\mathbb{R}(s,{\scriptscriptstyle \bullet})$ is continuous and so the assertion follows from the commutative diagram
\begin{align*}
\begin{xy}
\xymatrixcolsep{5pc}\xymatrix{
[0,1]\times I^2 \ar[r]^{\tau \times \sigma} \ar[ddr]^H& C(\mathbb{R},P) \times C^\infty_K(P,G)_{\rho_G} \ar@{_{(}->}[d]\\
& C(\mathbb{R},P)\times C(P,G)\ar[d]^{(\alpha,f)\mapsto f\circ \alpha}\\
&C(\mathbb{R},G). }
\end{xy}
\end{align*}
\end{proof}

The following Theorem \ref{GruMainI} corresponds to the Reduction Theorem \cite[Theorem 4.14]{Neeb:2009} (compact base manifold $M$) in the case of a finite-dimensional Lie group $G$ and a finite-dimensional principal bundle $P$. See also Theorem \ref{GruMainI.2}.

\begin{theorem}\label{GruMainI}
The period group $\Pi_{\omega_M} = \mathrm{im}(\mathrm{per}_{\omega_M})$ is discrete in $\overline{\Omega}_c^1(M,\mathbb{V})$.
\end{theorem}

\begin{proof}
Because $q^\ast \colon H^1_{dR,c}(M,V_\text{fix}) \rightarrow H^1_{dR,c}(P,V)_{\rho_V}$ is an isomorphism of topological vector spaces and $\Pi_{\omega_M} \subseteq H^1_{dR,c}(M,\mathbb{V}) = H^1_{dR,c}(P,V)_{\rho_V}$, it is sufficient to show that $\Pi_{\omega_M}$ is a discrete subgroup of $H^1_{dR,c}(M,V)$ (Lemma \ref{GruIntegralLemma}). With Lemma \ref{GruDiskLemma} and Lemma \ref{GruRLemma} it is enough to show
\begin{align}\label{GruTada}
\Pi_{\omega_M} \subseteq H_{dR,c}^1(M,\Pi_{\omega_\mathbb{R}}).
\end{align}
Let $\beta \in \Pi_{\omega_M}$, $\alpha \in C^\infty_p(\mathbb{R},M)$ and $[\sigma]\in \pi_2(C^\infty_c(P,G)_{\rho_G})$ with $\beta = \mathrm{per}_{\omega_M}([\sigma])$. Using Lemma \ref{GruIntegralLemma} and Lemma \ref{GruWichtigLemma} we get
\begin{align*}
\int_\mathbb{R} \alpha^\ast q_\ast \beta = \int_\mathbb{R} \hat{\alpha}^\ast \beta = I_\alpha \circ \mathrm{per}_{\omega_M} ([\sigma]) = \mathrm{per}_{\omega_\mathbb{R}} \circ \pi_2(\hat{\alpha}^\ast) ([\sigma]) \in \Pi_{\omega_\mathbb{R}}.
\end{align*}
Hence we get (\ref{GruTada}).
\end{proof}

\begin{theorem}\label{GruMainI.2}
If the structure group $H$ and the Lie group $G$ are not finite-dimensional but locally exponential Lie groups modelled over locally convex spaces and $\omega \colon \frak{g}^2 \rightarrow V $ is not necessarily the universal continuous invariant bilinear form but just a continuous invariant bilinear form with values in a Fr{\'e}chet space, then $\Pi_\omega$ is discrete if $\Pi_{\omega_\mathbb{R}}$ is discrete in $V$. We emphasise that the base manifold $M$ still has to be $\sigma$-compact and finite-dimensional and $\overline{H}$ still has to be finite, but we do not need to assume $\pi_2(G)$ to be trivial.
\end{theorem}

\begin{proof}
The only point where it was necessary to assume $G$ to be finite-dimensional and $\omega$ to be universal was Lemma \ref{GruRLemma}.
\end{proof}

\section{Integration of the Lie algebra action and the main result}\label{Gru1234}

In the case of a compact base manifold (\cite[Section 4.2 (part about general Lie algebra bundles)]{Neeb:2009}) Neeb and Wockel integrated the adjoint action of $\Gamma(\frak{G})$ on $\widehat{\Gamma(\frak{G})}:=\overline{\Omega}^1(M,\mathbb{V})\times_\omega \Gamma(\frak{G})$ given by
\begin{align*}
\Gamma(\frak{G}) \times \widehat{\Gamma(\frak{G})} \rightarrow \widehat{\Gamma(\frak{G})},~ (\eta,([\alpha],\gamma)) \mapsto [\eta,([\alpha],\gamma)]_{\omega} = (\omega(\eta, \gamma), [\eta,\gamma])
\end{align*}
to a Lie group action of $\Gamma(\mathcal{G})$ on $\widehat{\Gamma(\frak{G})}$.
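For the reader's orientation we note, purely as an aside that is not used later: on the central extension $\widehat{\Gamma(\frak{G})}$ the bracket is the usual cocycle-twisted bracket $[([\alpha],\eta),([\beta],\gamma)]_\omega = (\omega(\eta,\gamma),[\eta,\gamma])$, so the displayed formula is precisely the adjoint action of the subalgebra $\{0\}\times\Gamma(\frak{G})$, the central summand $\overline{\Omega}^1(M,\mathbb{V})$ acting trivially:
\begin{align*}
\mathrm{ad}_{(0,\eta)}([\alpha],\gamma) = [(0,\eta),([\alpha],\gamma)]_\omega = (\omega(\eta,\gamma),[\eta,\gamma]).
\end{align*}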
As a first step in their proof, Neeb and Wockel integrated the covariant derivative $d_\frak{G}\colon \Gamma(\frak{G}) \rightarrow \Omega^1(M,\frak{G})$ to a smooth map from $\Gamma(\mathcal{G})$ to $\Omega^1(M,\frak{G})$. As the absolute derivative is the sum of $d\colon C^\infty(P,\frak{g})_{\rho_\frak{g}} \rightarrow \Omega^1(P,\frak{g})$ and $C^\infty(P,\frak{g})_{\rho_\frak{g}} \rightarrow \Omega^1(P,\frak{g}),~ f\mapsto \rho_\ast(Z)\wedge f$, where $Z\colon TP \rightarrow L(H)=:\frak{h}$ is the connection form, $\rho_\ast = L(\rho_\frak{g})\colon \frak{h}\rightarrow \mathrm{der}(\frak{g})$ and $(\rho_\ast(Z)\wedge f)_p(v) = \rho_\ast(Z(v)).f(p)$, they integrated these summands separately. The image of the exterior derivative does not lie in $\Omega^1(M,\frak{g})_\frak{g}^\text{hor}$, but in the space $\Omega^1(M,\frak{g})_\frak{g}$, and in some sense the summand $f\mapsto \rho_\ast(Z)\wedge f$ annihilates the vertical parts of $df$. The exterior derivative $d\colon C^\infty(P,\frak{g})_{\rho_\frak{g}} \rightarrow\Omega^1(M,\frak{g})_\frak{g}$ integrates to the left logarithmic derivative $\delta \colon C^\infty(P,G)_{\rho_G} \rightarrow\Omega^1(M,\frak{g})_\frak{g}$, $\phi \mapsto \delta(\phi)$ with $\delta(\phi)_p(v)= T\lambda_{\phi(p)^{-1}}\circ T\phi(v)$ (\cite{Neeb:2009}). The integration of the second summand is more complicated, and Neeb and Wockel assumed the Lie group $G$ to be $1$-connected (in the special case of the gauge group they did not need this assumption, see \cite[Theorem 4.21]{Neeb:2009}). In the second step they used an exponential law to obtain the integrated action.

Because our base manifold is not compact, we consider the adjoint action of $\Gamma_c(\frak{G})$ on $\widehat{\Gamma_c(\frak{G})}:=\overline{\Omega}_c^1(M,\mathbb{V})\times_{\omega_M} \Gamma_c(\frak{G})$ given by
\begin{align*}
\Gamma_c(\frak{G}) \times \widehat{\Gamma_c(\frak{G})} \rightarrow \widehat{\Gamma_c(\frak{G})},~ (\eta,([\alpha],\gamma)) \mapsto (\omega_M(\eta, \gamma), [\eta,\gamma]).
\end{align*}
With the canonical identifications (see Remark \ref{GruRmarkallslsgfla}) the adjoint action is given by
\begin{gather}\label{GruAdacttitit234}
C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \times (\overline{\Omega}_c^1(P,V)_{\rho_V}^\text{hor} \times_{\omega_M} C^\infty_c(P,\frak{g})_{\rho_\frak{g}}) \rightarrow \overline{\Omega}_c^1(P,V)_{\rho_V}^\text{hor} \times_{\omega_M} C^\infty_c(P,\frak{g})_{\rho_\frak{g}}\nonumber\\
(g , ([\alpha],f)) \mapsto ([\kappa_\frak{g}(g,D_{\rho_\frak{g}}(f))], \mathrm{ad}(g,f)).
\end{gather}
We have to integrate this action to a Lie group action of $(\Gamma_c(\mathcal{G}))_0$ on $\widehat{\Gamma_c(\frak{G})}$.\footnote{In an earlier version of this paper, we presented a more complicated argument for this result, and our proof required the group $G$ to be semisimple. In the present version we do not assume $G$ to be semisimple.} Like Neeb and Wockel we have to integrate the covariant derivative $d_\frak{G}\colon \Gamma_c(\frak{G}) \rightarrow \Omega_c^1(M,\frak{G})$ to a smooth map from $\Gamma_c(\mathcal{G})$ to $\Omega_c^1(M,\frak{G})$.
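To fix intuition for the left logarithmic derivative $\delta$ mentioned above, we record a standard example (this aside is not needed in the sequel): if $G\subseteq\mathrm{GL}_n(\mathbb{R})$ is a matrix Lie group, then $T\lambda_{g^{-1}}$ is simply left multiplication by the matrix $g^{-1}$, so
\begin{align*}
\delta(\phi)_p(v) = \phi(p)^{-1}\cdot T_p\phi(v), \qquad\text{and for a curve } \phi\colon\mathbb{R}\rightarrow G \text{ one recovers the familiar expression } \phi(t)^{-1}\,\phi'(t).
\end{align*}
This is also consistent with the identity $d_1\delta(f)=df$ used in the footnote to Theorem \ref{Grunctandk} below, which makes precise the sense in which $\delta$ ``integrates'' the exterior derivative $d$.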
But we will not describe the absolute derivative via the connection form $Z$ as the sum of the exterior derivative $d$ and the map $f\mapsto \rho_\ast(Z)\wedge f$. Instead we use the principal connection $HP$ and write $D_{\rho_\frak{g}}= \mathrm{pr}_h^\ast\circ d$, where $\mathrm{pr}_h$ is the projection onto the horizontal bundle and $(\mathrm{pr}_h^\ast\circ d)(f)(v)= df(\mathrm{pr}_h(v))$. In Theorem \ref{Grunctandk} we will show that the map $\Delta:=\mathrm{pr}_h^\ast \circ \delta \colon C^\infty_c(P,G)_{\rho_G} \rightarrow \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ is smooth and that its derivative in $1$ is given by the absolute derivative $D_{\rho_\frak{g}}$. One could show the smoothness of $\delta \colon C^\infty_c(P,G)_{\rho_{G}} \rightarrow \Omega^1_c(P,\frak{g})_{\frak{g}}$ and $\mathrm{pr}_h^\ast \colon \Omega^1_c(P,\frak{g})_{\frak{g}} \rightarrow \Omega^1_c(P,\frak{g})_{\frak{g}}^\text{hor}$ separately, but it is more convenient to show the smoothness of $\Delta$ directly, because we work in the non-compact case and $\Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ is an inductive limit (compare Lemma \ref{Grukljgdfljhg}).

\begin{remark}
In \cite[Chapter 4.2, p.~408]{Neeb:2009} Neeb and Wockel define $\chi^Z(f)v:= \chi(Z(v),f(p))$ for $f\in \Gamma\mathcal{G} = C^\infty(P, G)_{\rho_G}$, $v\in T_pP$, $Z$ the connection form of $P$ and a smooth map $\chi \colon \frak{h} \times G \rightarrow \frak{g}$ that is linear in the first argument. If the connection on $P$ is not trivial, then $TP \rightarrow \frak{g},~ v \mapsto \chi(Z(v),f(p))$ is not in $\Omega^1(M,\frak{G})= \Omega^1(P,\frak{g})^\text{hor}_{\rho_\frak{g}}$, because it is not horizontal unless it is constantly $0$. However, the image of the map $\delta^\nabla(f) = \delta(f)+ \chi^Z(f^{-1})$ does lie in $\Omega^1(P,\frak{g})^\text{hor}_{\rho_\frak{g}}$, because the image of its derivative in $1$ lies in $\Omega^1(P,\frak{g})^\text{hor}_{\rho_\frak{g}}$, $\delta^\nabla$ is a $1$-cocycle with respect to the adjoint action of $C^\infty(P, G)_{\rho_G}$ on $\Omega^1(P,\frak{g})_{\rho_\frak{g}}$, and $\Omega^1(P,\frak{g})^\text{hor}_{\rho_\frak{g}}$ is invariant under this action. In any case, this consideration is not important for our argument, because we use a different description of the absolute derivative, as mentioned above.
\end{remark}

\begin{lemma}\label{GruLemmareresk09834t}
Let $V$ be a locally convex space and $G$ a locally convex Lie group. Moreover let $\mu \colon G\times V \rightarrow V$ be a map that is continuous and linear in the second argument. Let $f\colon G \rightarrow V$ be a map that is smooth on a $1$-neighbourhood. If $f(hg) = f(g)+ \mu(g^{-1},f(h))$ for $g,h \in G$, then $f$ is smooth.
\end{lemma}

\begin{proof}
Let $U\subseteq G$ be a $1$-neighbourhood such that $f|_U$ is smooth and let $g\in G$. Then $Ug$ is a $g$-neighbourhood and given $z\in Ug$ we define $h:= z g^{-1}\in U$. Hence $z = hg$. Now we calculate
\begin{align*}
&f(z)= f(hg) = f(g)+ \mu(g^{-1},f(h)) = f(g)+ \mu(g^{-1},f(zg^{-1})) \\
=&f(g) + \mu(g^{-1},f|_U\circ \varrho_{g^{-1}}(z)).
\end{align*}
Hence
\begin{align*}
f|_{Ug}= f(g)+ \mu(g^{-1},{\scriptscriptstyle \bullet})\circ f|_U \circ \varrho_{g^{-1}}|_{Ug}.
\end{align*}
\end{proof}

\begin{lemma}\label{GruAdInvarianzOmega}
We consider the map
\begin{align*}
\mu\colon C^\infty_c(P,G)_{\rho_G} \times \Omega^1_c(P,\frak{g}) \rightarrow \Omega^1_c(P,\frak{g}),~ (\phi,\theta) \mapsto \mathrm{Ad}^G_{\phi}.\theta
\end{align*}
with $\mathrm{Ad}^G_{\phi}.\theta \colon TP \rightarrow \frak{g},~ v\mapsto \mathrm{Ad}^G_{\phi(\pi(v))}.\theta(v)$, where $\pi \colon TP \rightarrow P$ is the canonical projection. The subspace $\Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ is $\mu$-invariant. In this context $\mathrm{Ad}^G$ means the adjoint action of $G$ on $\frak{g}$.
\end{lemma}

\begin{proof}
Given $\theta\in \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}$ and $\phi \in C_c^\infty(P,G)_{\rho_G}$ we show $\mu(\phi, \theta)\in \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}$. Let $h\in H$, $p \in P$ and $v\in T_pP$. We calculate
\begin{align*}
&(R_h^\ast \mu (\phi, \theta))_p(v) = \mathrm{Ad}^G(\phi(ph), \theta_{ph}(TR_h(v))) = \mathrm{Ad}^G(\rho_G(h^{-1}).\phi(p), \rho_\frak{g}(h^{-1}).\theta_p(v))\\
=& T\lambda_{\rho_G(h^{-1}).\phi(p)} \circ T\varrho_{\rho_G(h^{-1}).\phi(p)^{-1}} \circ T_1\rho_G(h^{-1})(\theta_p(v))\\
=& T_1\big(\rho_G(h^{-1})(\phi(p)) \cdot \rho_G(h^{-1})({\scriptscriptstyle \bullet}) \cdot \rho_G(h^{-1}) (\phi(p)^{-1}) \big) (\theta_p(v))\\
=& T_1\big( \rho_G(h^{-1})\circ I_{\phi(p)} \big)(\theta_p(v)) = \rho_\frak{g}(h^{-1})\circ \mathrm{Ad}^G_{\phi(p)} (\theta_p(v)),
\end{align*}
where $I_{\phi(p)}(g) = \phi(p)g\phi(p)^{-1}$ is the conjugation on $G$. Obviously $\mu(\phi, \theta)$ is horizontal if $\theta$ is so.
\end{proof}

\begin{definition}
We define the map
\begin{align*}
\mathrm{Ad}^G_\ast \colon C^\infty_c(P,G)_{\rho_G} \times \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \rightarrow \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor},~ (\phi,\theta) \mapsto \mathrm{Ad}^G_{\phi}.\theta
\end{align*}
with $\mathrm{Ad}^G_{\phi}.\theta \colon TP \rightarrow \frak{g},~ v\mapsto \mathrm{Ad}^G_{\phi\circ \pi(v)}.\theta(v)$ and $\pi \colon TP \rightarrow P$ the canonical projection.
\end{definition}

\begin{lemma}\label{Gruddkdfjhlemmsdg78}
The map $\mathrm{Ad}^G_\ast \colon C^\infty_c(P,G)_{\rho_G} \times \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \rightarrow \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ is continuous and linear in the second argument.
\end{lemma}

\begin{proof}
Let $\phi \in C^\infty_c(P,G)_{\rho_G}$ and $K\subseteq M$ be compact. It is enough to show that
\begin{align*}
\mathrm{Ad}^G_\ast(\phi,{\scriptscriptstyle \bullet}) \colon \Omega^1_K(P,\frak{g})_{\rho_\frak{g}} \rightarrow \Omega^1_K(P,\frak{g})_{\rho_\frak{g}}
\end{align*}
is continuous, because $\mathrm{Ad}^G_\ast(\phi,{\scriptscriptstyle \bullet})$ is linear and $\mathrm{Ad}^G_\ast(\phi,{\scriptscriptstyle \bullet})(\Omega^1_K(P,\frak{g})_{\rho_\frak{g}})\subseteq \Omega^1_K(P,\frak{g})_{\rho_\frak{g}}$. The map $f\colon TP \times \frak{g} \rightarrow \frak{g},~ (v,w)\mapsto \mathrm{Ad}^G(\phi\circ\pi(v), w)$ is smooth.
From \cite[Lemma 4.3.2]{Glockner:a} we know that
\begin{align*}
f_\ast \colon C^\infty(TP,\frak{g}) \rightarrow C^\infty(TP,\frak{g}),~ \theta \mapsto f\circ (\mathrm{id},\theta)
\end{align*}
is continuous. We can embed $\Omega^1_K(P,\frak{g})$ into $C^\infty(TP,\frak{g})$. Hence we are done.
\end{proof}

\begin{definition}\label{GruLeftLogarithmic}
Let $\pi \colon TP \rightarrow P$ be the canonical projection and $\mathrm{pr}_h\colon TP \rightarrow HP$ the projection onto the horizontal bundle.
\begin{compactenum}
\item We define
\begin{align*}
\delta \colon C^\infty_c(P,G)_{\rho_G} \rightarrow \Omega^1(P,\frak{g}),~ \phi\mapsto \delta(\phi)
\end{align*}
with $\delta(\phi)(v) = T\lambda_{\phi(\pi(v))^{-1}} \circ T\phi (v)$ for $v\in TP$ (cf. \cite[38.1]{Kriegl:1997}).
\item We define
\begin{align*}
\mathrm{pr}_h^\ast \colon \Omega^1(P,\frak{g}) \rightarrow \Omega^1(P,\frak{g})^\text{hor},~ \theta \mapsto \theta \circ \mathrm{pr}_h.
\end{align*}
\end{compactenum}
\end{definition}

Statement (b) of the following lemma is well known and can be found in \cite[38.1]{Kriegl:1997}.

\begin{lemma}\label{Grukdjllr65rglkdf}
\begin{compactenum}
\item We have
\begin{align*}
\delta(C^\infty_c(P,G)_{\rho_G}) \subseteq \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}.
\end{align*}
\item Given $f,g \in C^\infty_c(P,G)_{\rho_G}$ we have
\begin{align*}
\delta(f\cdot g) = \delta(g) + \mathrm{Ad}^G_\ast(g^{-1}, \delta(f)).
\end{align*}
\end{compactenum}
\end{lemma}

\begin{proof}
\begin{compactenum}
\item Let $\phi \in C_c^\infty(P,G)_{\rho_G}$, $h \in H$, $p \in P$ and $w\in T_pP$. We calculate
\begin{align*}
&(R_h^\ast \delta(\phi))_p(w) = \delta(\phi)_{ph}(TR_h(w)) = T\lambda_{\phi(ph)^{-1}} (T\phi (TR_h(w))) \\
=& T(\lambda_{\phi(ph)^{-1}} \circ \phi \circ R_h)(w) =:\dagger.
\end{align*}
For $x \in P$ we have
\begin{align*}
&\lambda_{\phi(ph)^{-1}} \circ \phi \circ R_h(x) = (\rho_G(h^{-1}). \phi(p))^{-1} \cdot (\rho_G(h^{-1}). \phi(x)) \\
=& \rho_G(h^{-1}). (\phi(p)^{-1} \cdot \phi(x)) = \rho_G(h^{-1}) \circ \lambda_{\phi(p)^{-1}} \circ \phi (x).
\end{align*}
We conclude
\begin{align*}
\dagger = \rho_\frak{g}(h^{-1})\circ T\lambda_{\phi(p)^{-1}} \circ T\phi(w) = \rho_\frak{g}(h^{-1})\circ \delta(\phi)_p(w).
\end{align*}
\item The assertion follows directly from \cite[38.1]{Kriegl:1997}.
\end{compactenum}
\end{proof}

\begin{definition}
Let $\mathrm{pr}_h\colon TP=VP\oplus HP \rightarrow HP$ be the projection onto the horizontal bundle $HP$. We define
\begin{align*}
\Delta\colon C_c^\infty(P,G)_{\rho_G} \rightarrow \Omega^1(P,\frak{g})_{\rho_\frak{g}}^\text{hor},~ \phi \mapsto \mathrm{pr}^\ast_h \circ \delta(\phi) = \delta(\phi)\circ \mathrm{pr}_h.
\end{align*}
\end{definition}

As in \cite{Schuett:2013} we use the concept of weak products of infinite-dimensional Lie groups, described in \cite[Section 7]{Glockner:2003} respectively \cite[Section 4]{Glockner:2007}, in the following considerations. The following lemma is basically \cite[Corollary 2.38]{Schuett:2013}, but with modified assumptions.
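Before stating it, we briefly recall the construction (see the references just cited for details): the weak direct product $\prod^\ast_{i\in\mathbb{N}} G_i$ of locally convex Lie groups $G_i$ is the subgroup
\begin{align*}
\prod^\ast_{i\in\mathbb{N}} G_i = \Big\{ (g_i)_{i\in\mathbb{N}} \in \prod_{i\in\mathbb{N}} G_i : g_i = 1 \text{ for all but finitely many } i \Big\},
\end{align*}
equipped with the Lie group structure modelled on the locally convex direct sum $\bigoplus_{i\in\mathbb{N}} \frak{g}_i$ of the corresponding Lie algebras.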
\begin{lemma}\label{Grudkdkdlsjlf098}
For $i \in \mathbb{N}$ let $G_i$ be a locally convex Lie group, $E_i$ a locally convex space and $f_i \colon G_i \rightarrow E_i$ a smooth map such that $f_i(1)=0$. In this situation there exists an open $1$-neighbourhood $U\subseteq \prod^\ast_{i\in \mathbb{N}} G_i$ such that the map $f\colon \prod^\ast_{i\in \mathbb{N}} G_i \rightarrow \bigoplus_{i\in \mathbb{N}} E_i$, $(g_i)_i \mapsto (f_i(g_i))_{i}$ is smooth on $U$.
\end{lemma}

\begin{proof}
Given $i \in \mathbb{N}$ let $\phi_i \colon U_i \subseteq G_i \rightarrow V_i \subseteq \frak{g}_i$ be a chart around $1$ with $\phi_i(1)=0$. We have the commutative diagram
\begin{align*}
\begin{xy}\xymatrixcolsep{9pc}
\xymatrix{
\prod^\ast_{i\in \mathbb{N}}U_i \ar[r]^-{f|_{\prod^\ast_{i\in \mathbb{N}}U_i} = (f_i|_{U_i})_{i\in \mathbb{N}}} \ar[d]_-{(\phi_i)_{i\in \mathbb{N}}} & \bigoplus_{i\in \mathbb{N}} E_i \\
\bigoplus_{i\in \mathbb{N}} V_i \ar[ur]_-{(f_i\circ \phi_i^{-1})_{i\in\mathbb{N}}} }
\end{xy}
\end{align*}
Now the assertion follows from \cite[Proposition 7.1]{Glockner:2003}.
\end{proof}

\begin{remark}\label{Gruruckdsld}
Let $(\overline{V_i},\sigma_i)_{i\in \mathbb{N}}$ be a compact locally finite trivializing system of the principal bundle $H\hookrightarrow P \xrightarrow{q}M$ in the sense of \cite[Definition 3.6]{Schuett:2013} respectively \cite{Wockel:2007}. We follow \cite[Remark 3.5]{Schuett:2013} respectively \cite{Wockel:2007} and define, as usual, the smooth map $\beta_{\sigma_i}\colon q^{-1}(\overline{V}_i) \rightarrow H$ by the equation $\sigma_i(q(p)) \cdot \beta_{\sigma_i}(p)= p$ for all $p \in q^{-1}(\overline{V}_i)$. Obviously we have $\beta_{\sigma_i}(ph)= \beta_{\sigma_i}(p)\cdot h$ for all $h\in H$. Moreover we define the smooth cocycle $\beta_{i,j}\colon \overline{V}_i\cap \overline{V}_j \rightarrow H$ by the equation $\sigma_i(x)\cdot \beta_{i,j}(x) = \sigma_j(x)$. We have $\beta_{i,j}(x)^{-1} = \beta_{j,i}(x)$ and $\beta_{\sigma_i}(p)^{-1}\cdot \beta_{i,j}(q(p)) = \beta_{\sigma_j}(p)^{-1}$ for $p \in q^{-1}(\overline{V}_i\cap \overline{V}_j)$.
\end{remark}

The proof of the following lemma is similar to the proof of \cite[Proposition 4.6]{Schuett:2013}, where, among other results, Sch{\"u}tt constructed a topological embedding from the compactly supported gauge algebra $\mathrm{gau}_c(P,\frak{g})_\frak{g}$ into a direct sum $\bigoplus_{i\in\mathbb{N}}C^\infty(\overline{V}_i,\frak{g})$ of locally convex spaces. However, the following lemma differs from \cite[Proposition 4.6]{Schuett:2013}, because we deal with horizontal differential forms and these need some extra considerations.

\begin{lemma}\label{Grukljgdfljhg}
Let $(\overline{V_i},\sigma_i)_{i\in \mathbb{N}}$ be a compact locally finite trivializing system in the sense of \cite[Definition 3.6]{Schuett:2013}. The map
\begin{align*}
\Omega_c^1(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \rightarrow \bigoplus_{i\in \mathbb{N}} \Omega^1(\overline{V}_i,\frak{g}),~ \theta \mapsto (\sigma_i^\ast \theta)_{i\in \mathbb{N}}
\end{align*}
is a topological embedding.
\end{lemma}

\begin{proof}
We define
\begin{align*}
\Omega_\oplus:=\Big\{(\eta_i)_i \in \bigoplus_{i\in\mathbb{N}} \Omega^1(\overline{V}_i,\frak{g}): (\eta_i)_x = \rho_\frak{g}(\beta_{i,j}(x))\circ (\eta_j)_x ~\text{for}~ x\in \overline{V}_i \cap \overline{V}_j \Big\}
\end{align*}
and the map
\begin{align*}
\Phi\colon \Omega_c^1(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \rightarrow \bigoplus_{i\in \mathbb{N}} \Omega^1(\overline{V}_i,\frak{g}),~ \theta \mapsto (\sigma_i^\ast \theta)_{i\in \mathbb{N}}.
\end{align*}
Let us show $\mathrm{im}(\Phi)\subseteq \Omega_\oplus$. For $x \in \overline{V}_i\cap \overline{V}_j$ and $v \in T_xM$ we have $\sigma_j(x)=\sigma_i(x)\beta_{i,j}(x)$ and
\begin{align}\label{Grunbxml121}
Tq(T\sigma_j(v) - T(R_{\beta_{i,j}(x)} \circ\sigma_i)(v)) = T(q\circ \sigma_j)(v) - T(q\circ R_{\beta_{i,j}(x)} \circ \sigma_i)(v) =0.
\end{align}
Because $\theta$ is $\rho_\frak{g}$-invariant and horizontal, we can calculate
\begin{align*}
&(\sigma_i^\ast \theta)_x(v) = \theta_{\sigma_i(x)}(T\sigma_i(v)) = \rho_\frak{g}(\beta_{i,j}(x))\circ \theta_{\sigma_i(x)\beta_{i,j}(x)} (TR_{\beta_{i,j}(x)}(T\sigma_i(v)))\\
\overset{(\ref{Grunbxml121})}{=}\ & \rho_\frak{g}(\beta_{i,j}(x))\circ \theta_{\sigma_j(x)} (T\sigma_j(v)) = \rho_\frak{g}(\beta_{i,j}(x))\circ (\sigma_j^\ast \theta)_{x} (v).
\end{align*}
Analogously to \cite[Proposition 4.6]{Schuett:2013} we can argue as follows: the map $\Phi$ is linear, $\Omega^1_c(P,\frak{g})_{\rho_{\frak{g}}}^\text{hor} = \varinjlim \Omega_K^1(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ and $(\overline{V}_i)_i$ is locally finite, hence the map $\Phi$ is continuous. Now let $(\lambda_i)_{i\in\mathbb{N}}$ be a partition of unity on $M$ subordinate to the cover $(V_i)_i$. Given $\eta\in\Omega^1(\overline{V}_i,\frak{g})$ we define $\widetilde{\lambda_i\eta} \in \Omega^1(P,\frak{g})$ by
\begin{align*}
\widetilde{\lambda_i\eta}_p(w):=
\begin{cases}
\lambda_i(q(p))\cdot \rho_\frak{g}(\beta_{\sigma_i}(p)^{-1}). \eta_{q(p)}(Tq(w)) &: p \in q^{-1}(V_i)\\
0&: \text{else}.
\end{cases}
\end{align*}
With Remark \ref{Gruruckdsld} we get $\widetilde{\lambda_i\eta}\in \Omega^1_{\mathrm{supp}(\lambda_i)}(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ and $\sum_{i\in\mathbb{N}} \widetilde{\lambda_i\eta_i}\in\Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ for $(\eta_i)_i \in \bigoplus_{i\in \mathbb{N}} \Omega^1(\overline{V}_i,\frak{g})$. The map $\Psi\colon \bigoplus_{i\in \mathbb{N}} \Omega^1(\overline{V}_i,\frak{g}) \rightarrow \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$, $(\eta_i)_i \mapsto \sum_{i\in\mathbb{N}}\widetilde{\lambda_i\eta_i}$ is continuous, because it is linear and the inclusions $\Omega^1_{\mathrm{supp}(\lambda_i)}(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \hookrightarrow \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ are continuous. Let $(\eta_i)_i \in \Omega_\oplus$. As in \cite[Proposition 4.6]{Schuett:2013} we get
\begin{align*}
\Psi((\eta_i)_i)_p(w) = \rho_\frak{g}(\beta_{\sigma_{i_0}}(p)^{-1} ). (\eta_{i_0})_{q(p)} (Tq(w))
\end{align*}
if $p \in q^{-1}(V_{i_0})$ and $w\in T_pP$.
Abusing notation, we write $\Phi:=\Phi|^{\Omega_\oplus}$ and $\Psi:=\Psi|_{\Omega_\oplus}$. One easily sees $\Phi\circ\Psi=\mathrm{id}_{\Omega_\oplus}$. It is left to show $\Psi\circ\Phi=\mathrm{id}_{\Omega_c^1(P,\frak{g})_{\rho_\frak{g}}^\text{hor}}$. Let $\theta\in \Omega_c^1(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$, $p \in P$, $w\in T_pP$ and $i \in\mathbb{N}$ with $p \in q^{-1}(V_i)$. We calculate
\begin{align}\label{Gru92023}
Tq(w- TR_{\beta_{\sigma_i}(p)} T\sigma_i Tq(w)) = Tq(w) - T(q\circ R_{\beta_{\sigma_i}(p)} \circ \sigma_i )(Tq(w)) =0.
\end{align}
Now we get
\begin{align*}
&\Psi\circ\Phi(\theta)_p (w) = \rho_\frak{g}(\beta_{\sigma_i}(p)^{-1}).(\sigma_i^\ast \theta)_{q(p)}(Tq(w))\\
=&\rho_\frak{g}(\beta_{\sigma_i}(p)^{-1}). \theta_{\sigma_i(q(p))}(T\sigma_i Tq(w))\\
=&\rho_\frak{g}(\beta_{\sigma_i}(p)^{-1}). \rho_\frak{g}(\beta_{\sigma_i}(p)). \theta_{\sigma_i(q(p))\beta_{\sigma_i}(p)}(TR_{\beta_{\sigma_i}(p)}T\sigma_i Tq(w)) \\
\overset{(\ref{Gru92023})}{=}\ & \theta_{\sigma_i(q(p))\beta_{\sigma_i}(p)}(w) = \theta_p(w).
\end{align*}
\end{proof}

\begin{remark}\label{GruPunkteTrennenene}
Let $M$ be an $m$-dimensional manifold, $D\subseteq TM$ a $d$-dimensional subbundle, $p_0 \in M$ and $w_0 \in D_{p_0}$. Then there exists a smooth curve $\gamma\colon [-1,1] \rightarrow M$ such that $\gamma(0)=p_0$, $\gamma'(0)=w_0$ and $\gamma'(t)\in D_{\gamma(t)}$ for all $t\in [-1,1]$. In fact, let $\psi\colon TU \rightarrow U\times \mathbb{R}^m$ be a trivialisation with $\psi(D)=U\times \mathbb{R}^d\times\{0\}$ and set $v_0:=\mathrm{pr}_2\circ \psi(w_0) \in \mathbb{R}^d\times\{0\}$. Then $X\colon U \rightarrow TU$, $x\mapsto \psi^{-1}(x,v_0)$ is a smooth vector field on $U$ and $\mathrm{im}(X)\subseteq D$. Let $\widetilde{\gamma}\colon [-\varepsilon,\varepsilon] \rightarrow U$ be the integral curve of $X$ with $\widetilde{\gamma}(0)=p_0$. Then $\widetilde{\gamma}'(0)= X(p_0) = w_0$ and obviously $\widetilde{\gamma}'(t) \in D_{\widetilde{\gamma}(t)}$ for all $t$. Now let $\phi \colon [-1, 1] \rightarrow [-\varepsilon,\varepsilon]$ be a diffeomorphism with $\phi(0)=0$ and $\phi'(0)=1$. Then $\gamma:=\widetilde{\gamma}\circ \phi$ is as needed.
\end{remark}

\begin{lemma}\label{GruPullsepPoints}
The pullbacks $\gamma^\ast \colon \Omega^1(P,\frak{g})^\text{hor}\rightarrow C^\infty([-1,1],\frak{g})$, $\theta \mapsto \gamma^\ast \theta$ along horizontal curves $\gamma \colon [-1,1]\rightarrow P$ (i.e.\ $\gamma'(t)\in H_{\gamma(t)}P$) separate the points of $\Omega^1(P,\frak{g})^\text{hor}$.
\end{lemma}

\begin{proof}
Let $\theta \in \Omega^1(P,\frak{g})^\text{hor}$ with $\gamma^\ast\theta=0$ for all horizontal curves $\gamma\colon [-1,1] \rightarrow P$. Let $p \in P$ and $w\in T_pP$. We show $\theta_p(w)=0$. Because $\theta$ is horizontal, we can assume $w \in H_pP$. We use Remark \ref{GruPunkteTrennenene} and find a horizontal curve $\gamma \colon [-1,1]\rightarrow P$ with $\gamma(0)=p$ and $\gamma'(0)=w$. Hence $\theta_p(w) = \theta_p(\gamma'(0)) = \gamma^\ast\theta(0) =0$.
\end{proof}

One can easily deduce the following observation, Remark \ref{GruMussdochnichtsein}, from \cite[Theorem 1.11]{Wockel:2007}, but in the special case of a current group on a compact interval the argument from \cite[Theorem 1.11]{Wockel:2007} becomes much simpler.

\begin{remark}\label{GruMussdochnichtsein}
Let $(U_i)_{i=1,\dots,n}$ be an open cover of the space $[-1,1]$ such that the sets $\overline{U}_i$ are submanifolds with boundary, and let $G$ be a finite-dimensional Lie group. Then the map $\Phi \colon C^\infty([-1,1],G) \rightarrow \prod_{i=1}^n C^\infty(\overline{U}_i,G)$, $\phi \mapsto (\phi|_{\overline{U}_i})_i$ is an injective morphism of Lie groups whose image is a Lie subgroup of $\prod_{i=1}^n C^\infty(\overline{U}_i,G)$, and $\Phi|^{\mathrm{im}(\Phi)}$ is an isomorphism of Lie groups. We define $\Psi \colon C^\infty([-1,1], \frak{g})\rightarrow \prod_{i=1}^nC^\infty(\overline{U}_i,\frak{g})$, $f\mapsto (f|_{\overline{U}_i})_i$. Let $\exp\colon V_\frak{g} \subseteq \frak{g} \rightarrow U_G\subseteq G$ be the exponential function of $G$ restricted to a $0$-neighbourhood such that it is a diffeomorphism. We define the open sets $\mathcal{U}:= C^\infty([-1,1],U_G)\subseteq C^\infty([-1,1],G)$ and $\mathcal{V}:=C^\infty([-1,1],V_\frak{g})\subseteq C^\infty([-1,1],\frak{g})$. Let $\tau_1\colon C^\infty([-1,1],U_G) \rightarrow C^\infty([-1,1],V_\frak{g})$, $\phi \mapsto (\exp|_{V_\frak{g}}^{U_G})^{-1} \circ\phi$ and $\tau_2\colon \prod_{i=1}^n C^\infty(\overline{U}_i,U_G) \rightarrow \prod_{i=1}^nC^\infty(\overline{U}_i,V_\frak{g})$, $(\phi_i)_i \mapsto ((\exp|_{V_\frak{g}}^{U_G})^{-1} \circ\phi_i)_i$ be the canonical charts. We obtain the commutative diagram
\begin{align*}
\begin{xy}\xymatrixcolsep{5pc}
\xymatrix{
C^\infty([-1,1],U_G) \ar[r]^-{\Phi|_{\mathcal{U}}} \ar[d]^{\tau_1} & \prod_{i=1}^nC^\infty(\overline{U}_i,U_G)\ar[d]^-{\tau_2}\\
C^\infty([-1,1],V_\frak{g}) \ar[r]^-{\Psi|_{\mathcal{V}}}& \prod_{i=1}^nC^\infty(\overline{U}_i,V_\frak{g}) }
\end{xy}
\end{align*}
and calculate
\begin{align*}
&\tau_2\left(\mathrm{im}(\Phi) \cap \prod_{i=1}^nC^\infty(\overline{U}_i,U_G)\right) = \tau_2\left( \Phi(C^\infty([-1,1],U_G)) \right)\\
=& \Psi(C^\infty([-1,1],V_\frak{g})) = \mathrm{im}(\Psi)\cap \prod_{i=1}^nC^\infty(\overline{U}_i,V_\frak{g}).
\end{align*}
The space
\begin{align*}
\mathrm{im}(\Psi) = \{(f_i)_i: f_i(x)=f_j(x) ~\text{for}~ x\in \overline{U}_i\cap\overline{U}_j\}
\end{align*}
is closed in $\prod_{i=1}^nC^\infty(\overline{U}_i,\frak{g})$. Hence $\mathrm{im}(\Phi)$ is a Lie subgroup of $\prod_{i=1}^nC^\infty(\overline{U}_i,G)$.
In the commutative diagram
\begin{align*}
\begin{xy}\xymatrixcolsep{5pc}
\xymatrix{
C^\infty([-1,1],U_G) \ar[r]^-{\Phi|_{\mathcal{U}}} \ar[d]^{\tau_1} & \mathrm{im}(\Phi) \cap \prod_{i=1}^nC^\infty(\overline{U}_i,U_G)\ar[d]^-{\tau_2}\\
C^\infty([-1,1],V_\frak{g}) \ar[r]^-{\Psi|_{\mathcal{V}}}& \mathrm{im}(\Psi)\cap \prod_{i=1}^nC^\infty(\overline{U}_i,V_\frak{g}) }
\end{xy}
\end{align*}
the lower horizontal arrow is a diffeomorphism, because $\Psi \colon C^\infty([-1,1],\frak{g}) \rightarrow \mathrm{im}(\Psi)$ is a continuous bijective linear map between Fr{\'e}chet spaces. Now the assertion follows.
\end{remark}

The following theorem is a generalisation of \cite[Proposition V.7]{Neeb:2004} in the case of a finite-dimensional codomain.

\begin{theorem}\label{Grunctandk}
\begin{compactenum}
\item The map $\Delta \colon C_c^\infty(P,G)_{\rho_G} \rightarrow \Omega_c^1(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ is smooth.
\item For the smooth map $\Delta \colon C_c^\infty(P,G)_{\rho_G} \rightarrow \Omega_c^1(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ we have $d_1\Delta(f)=D_{\rho_\frak{g}}f$.\footnote{To show this equation we use Lemma \ref{GruPullsepPoints}. If $H$ is infinite-dimensional, then we cannot apply Lemma \ref{GruPullsepPoints}. Instead one can show that $\delta \colon C_c^\infty(P,G)_{\rho_G} \rightarrow \Omega_c^1(P,\frak{g})$ is smooth and $d_1\delta(f)=df$ (with an argument similar to the one in this proof, because pullbacks along smooth maps (not necessarily horizontal ones) separate the points in $\Omega_c^1(P,\frak{g})$). Obviously $\mathrm{pr}_h^\ast$ is linear. Because $\Delta \colon C_c^\infty(P,G)_{\rho_G} \rightarrow \Omega_c^1(P,\frak{g})_{\rho_\frak{g}}^\text{hor}$ is smooth, also $\Delta = \mathrm{pr}_h^\ast\circ \delta \colon C_c^\infty(P,G)_{\rho_G} \rightarrow \Omega_c^1(P,\frak{g})^\text{hor}$ is smooth. Now one sees that for this coarser topology we have $d_1\Delta(f)=\mathrm{pr}_h^\ast(df)$. Then also the derivative with respect to the finer topology on the codomain ($\Omega_c^1(P,\frak{g})^\text{hor}_{\rho_\frak{g}}$) has to be given by this equation. We remind the reader that the topology on $\Omega_c^1(P,\frak{g})^\text{hor}_{\rho_\frak{g}}$ is not the topology induced by $\Omega_c^1(P,\frak{g})$, but the inductive limit topology, whereas on $\Omega_c^1(P,\frak{g})^\text{hor}$ the inductive limit topology and the induced topology really do coincide.}
\end{compactenum}
\end{theorem}

\begin{proof}
\begin{compactenum}
\item Because of Lemma \ref{GruLemmareresk09834t}, Lemma \ref{Gruddkdfjhlemmsdg78} and Lemma \ref{Grukdjllr65rglkdf} it is enough to show the smoothness of $\Delta$ on a $1$-neighbourhood. Let $(\sigma_i,\overline{V}_i)_{i\in\mathbb{N}}$ be a locally finite compact trivialising system in the sense of \cite[Definition 3.6]{Schuett:2013} (the existence follows from \cite[Corollary 3.10]{Schuett:2013}).
With the help of Lemma \ref{Grudkdkdlsjlf098} and Lemma \ref{Grukljgdfljhg} it is enough to construct smooth maps $\psi_i\colon C^\infty(\overline{V}_i, G)\rightarrow \Omega^1(\overline{V}_i,\frak{g})$ such that the diagram
\begin{align*}
\begin{xy}\xymatrixcolsep{5pc}
\xymatrix{
C_c^\infty(P,G)_{\rho_G} \ar[d]_-{\phi\mapsto (\phi\circ\sigma_i)_i} \ar[r]^-{\Delta} & \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \ar[d]^-{\theta \mapsto (\sigma_i^\ast\theta)_i}\\
\prod^\ast_{i\in\mathbb{N}} C^\infty(\overline{V}_i,G) \ar[r]^-{(\psi_i)_i}& \bigoplus_{i\in\mathbb{N}} \Omega^1(\overline{V}_i,\frak{g}) }
\end{xy}
\end{align*}
commutes. Let $\tau_i \colon q^{-1}(V_i)\rightarrow V_i\times H$, $p \mapsto (q(p), \phi_i(p))$ be the inverse of $(x,h)\mapsto \sigma_i(x)h$. Then $\sigma_i(x)= \tau_i^{-1}(x,1)$. For $f\in C^\infty(\overline{V}_i,G)$ we define
\begin{align*}
\widetilde{f} \colon q^{-1}(V_i)\rightarrow G,~ p \mapsto \rho_G(\phi_i(p) , f(q(p))).
\end{align*}
If $f \in C^\infty_c(P,G)_{\rho_G}$, then $\widetilde{f\circ \sigma_i} = f|_{q^{-1}(V_i)}$. We define $\psi_i\colon C^\infty(\overline{V}_i, G)\rightarrow \Omega^1(\overline{V}_i,\frak{g})$ by
\begin{align*}
\psi_i(f)_x(v) = T\lambda_{f(x)^{-1}} T\widetilde{f}(\mathrm{pr}_h(T_x\sigma_i(v))).
\end{align*}
At first we show that the above diagram commutes. For $f\in C^\infty_c(P,G)_{\rho_G}$ we calculate
\begin{align*}
\psi_i(f\circ\sigma_i)_x(v) = T\lambda_{(f\circ\sigma_i(x))^{-1}} Tf (\mathrm{pr}_h \circ T_x\sigma_i(v)) = \sigma_i^\ast(\delta(f)\circ \mathrm{pr}_h)_x(v).
\end{align*}
It is left to show the smoothness of $\psi_i$. Because we can embed $\Omega^1(\overline{V}_i,\frak{g})$ into $C^\infty(T\overline{V}_i, \frak{g})$, we show that
\begin{align*}
C^\infty(\overline{V}_i ,G)\times TV_i \rightarrow \frak{g}, ~(f,v)\mapsto T\lambda_{f(x)^{-1}} T\widetilde{f}(\mathrm{pr}_h(T_x\sigma_i(v)))
\end{align*}
is smooth. Let $m\colon G\times G\rightarrow G$ be the multiplication on $G$ and $n\colon G\rightarrow TG,~ g\mapsto 0_g$ the zero section. Given $f \in C^\infty(\overline{V}_i ,G)$ and $v\in T_x\overline{V}_i$ we calculate
\begin{align*}
\psi_i(f)(v) = Tm\big(n(f(\pi(v))^{-1} ), T\widetilde{f}(\mathrm{pr}_h T\sigma_i(v)) \big).
\end{align*}
The map $\mathrm{ev}\colon C^\infty(\overline{V}_i,G) \times \overline{V}_i \rightarrow G$, $(f,x)\mapsto f(x)$ is smooth (see \cite[Lemma 121]{Alzaareer:2013}). Therefore it is left to show the smoothness of $C^\infty(\overline{V}_i,G) \times Tq ^{-1} (TV_i) \rightarrow TG,~ (f ,v)\mapsto T\widetilde{f}(v)$. The map $\mathrm{ev}^q\colon C^\infty(\overline{V}_i,G) \times q^{-1}(V_i) \rightarrow G$, $(f,p) \mapsto f\circ q (p)$ is smooth, because $\mathrm{ev}$ is smooth. We have $T(f\circ q)(v)= T\mathrm{ev}^q(f, {\scriptscriptstyle \bullet})(v) = T\mathrm{ev}^q(n(f),v)$, where $n$ here denotes the zero section of $C^\infty(\overline{V}_i,G)$. Hence
\begin{align*}
T\mathrm{ev}^q \circ (n,\mathrm{id}) \colon C^\infty(\overline{V}_i,G) \times Tq^{-1}(TV_i) \rightarrow TG,~ (f ,v) \mapsto Tf \circ Tq (v)
\end{align*}
is smooth. With $T\widetilde{f} = T\rho_G \circ (T\phi_i, Tf\circ Tq)$ the assertion follows from the smoothness of $T\mathrm{ev}^q \circ (n,\mathrm{id})$.
\item We write $\delta^l\colon C^\infty([-1,1],G) \rightarrow C^\infty([-1,1],\frak{g})$ for the classical left logarithmic derivative.
It is known that $d_{c_1}\delta^l(f) =f'$ for $f\in C^\infty([-1,1],\frak{g})$ (\cite[Lemma 2.4]{Glockner:2012}). Given a horizontal curve $\gamma\colon [-1,1]\rightarrow P$ we define the maps
\begin{align*}
&\gamma^\ast_G\colon C^\infty_c(P,G)_{\rho_G} \rightarrow C^\infty([-1,1],G),~ \phi \mapsto \phi\circ \gamma,\\
&\gamma^\ast_\frak{g} \colon C^\infty_c(P,\frak{g})_{\rho_\frak{g}} \rightarrow C^\infty([-1,1],\frak{g}),~ f\mapsto f\circ \gamma \quad\text{and}\\
&\gamma^\ast_\Omega \colon \Omega^1_c(P,\frak{g})_{\rho_\frak{g}}^\text{hor} \rightarrow C^\infty([-1,1],\frak{g}),~ \theta \mapsto \gamma^\ast\theta.
\end{align*}
As in Lemma \ref{GruDerPullBackSmooth} one shows that $\gamma^\ast_G$ is a smooth Lie group homomorphism with $L(\gamma^\ast_G) = \gamma_\frak{g}^\ast$ (see Remark \ref{GruMussdochnichtsein}). The diagram
\begin{align}\label{GruPullCommd98}
\begin{xy}\xymatrixcolsep{5pc}
\xymatrix{
C^\infty_c(P,G)_{\rho_G}\ar[r]^-{\Delta} \ar[d]_-{\gamma_G^\ast} &\Omega_c^1(P,\frak{g})^\text{hor}_{\rho_\frak{g}} \ar[d]^-{\gamma_\Omega^\ast}\\
C^\infty([-1,1],G) \ar[r]^-{\delta^l} & C^\infty([-1,1],\frak{g}) }
\end{xy}
\end{align}
commutes, because
\begin{align*}
(\gamma_\Omega^\ast \Delta(f))(t)= \delta(f) (\mathrm{pr}_h(\gamma'(t))) = \delta(f) (\gamma'(t)) =T\lambda_{f\circ\gamma(t) ^{-1}} \circ Tf(\gamma'(t)) = \delta^l(f\circ \gamma)(t).
\end{align*}
Let $f\in C^\infty_c(P,\frak{g})_{\rho_\frak{g}}$. We want to show $d_1\Delta(f) = D_{\rho_\frak{g}}f$. Because of Lemma \ref{GruPullsepPoints} it is enough to show $\gamma_\Omega^\ast(d_1\Delta(f)) = \gamma_\Omega^\ast(D_{\rho_\frak{g}}f)$ for an arbitrary horizontal curve $\gamma \colon [-1,1]\rightarrow P$. Because $\gamma_\Omega^\ast$ is continuous and linear and the diagram (\ref{GruPullCommd98}) commutes, we can calculate
\begin{align*}
\gamma_\Omega^\ast(d_1\Delta(f)) = d_1(\gamma_\Omega^\ast\circ\Delta)(f) = d_1(\delta^l\circ \gamma^\ast_G)(f)= d_{c_1}(\delta^l)(L(\gamma^\ast_G)(f)) = (f\circ\gamma)'.
\end{align*}
Now we use that $\gamma$ is horizontal and obtain
\begin{align*}
\gamma_\Omega^\ast(D_{\rho_\frak{g}} f)_t = D_{\rho_\frak{g}}f(\gamma'(t)) = df(\gamma'(t)) = (f\circ\gamma)'(t) = \gamma_\Omega^\ast(d_1\Delta(f))_t
\end{align*}
for $t\in [-1,1]$.
\end{compactenum}
\end{proof}

The proof of the following Lemma \ref{GruMainII} is analogous to the first part of \cite[Proposition III.3]{Maier:2003}.

\begin{lemma}\label{GruMainII}
In the following we write $\mathrm{Ad}$ for the adjoint action of $C_c^\infty(P,G)_{\rho_G}$ on $C^\infty_c(P,\frak{g})_{\rho_\frak{g}}$. The map
\begin{align*}
\mathcal{A}\colon &C^\infty_c(P,G)_{\rho_G} \times (\overline{\Omega}_c^1(P,V)_{\rho_V}^\text{hor} \times_{\omega_M} C^\infty_c(P,\frak{g})_{\rho_\frak{g}}) \rightarrow \overline{\Omega}_c^1(P,V)_{\rho_V}^\text{hor} \times_{\omega_M} C^\infty_c(P,\frak{g})_{\rho_\frak{g}},\\
&(\phi , ([\alpha],f)) \mapsto ([\alpha] - [\kappa_\frak{g}(\Delta(\phi),f)], \mathrm{Ad}(\phi,f))
\end{align*}
is a smooth group action and its associated Lie algebra action is given by the adjoint action described in (\ref{GruAdacttitit234}). Hence, the adjoint action of $\Gamma_c(M,\frak{G})$ on the extension $\widehat{\Gamma_c(M,\frak{G})}:=\overline{\Omega}_c^1(M,\mathbb{V}) \times_{\omega_M} \Gamma_c(M,\frak{G})$ represented by $\omega_M$ integrates to a Lie group action of $\Gamma_c(M,\mathcal{G})$ on $\widehat{\Gamma_c(M,\frak{G})}$.
\end{lemma}

\begin{proof}
The smoothness of $\mathcal{A}$ follows from the smoothness of $\Delta$. We show that $\mathcal{A}$ is a group action. For $\phi,\psi \in C^\infty_c(P,G)_{\rho_G}$ we have
\begin{align*}
\Delta(\phi \psi) =\Delta(\psi) + \mathrm{Ad}^G_\ast(\psi^{-1}, \Delta(\phi)),
\end{align*}
and for $v,w\in \frak{g}$ and $g\in G$ we have
\begin{align}\label{Grunnnnnnf}
\kappa_\frak{g}(v,\mathrm{Ad}^G_g w) = \kappa_\frak{g}(\mathrm{Ad}^G_{g^{-1}}v,w).
\end{align}
In this context $\mathrm{Ad}^G$ is the adjoint action of $G$ on $\frak{g}$. Now we calculate for $\alpha \in \Omega^1_c(P,V)_{\rho_V}^\text{hor}$
\begin{align*}
&\mathcal{A}(\phi \cdot \psi ,([\alpha],f)) = ([\alpha]- [\kappa_\frak{g}(\Delta(\phi\psi),f)], \mathrm{Ad}_{\phi\psi}f)\\
=&([\alpha] - [\kappa_\frak{g}(\Delta(\psi) , f)] - [\kappa_\frak{g} (\mathrm{Ad}^G_\ast(\psi^{-1}, \Delta(\phi)), f)] , \mathrm{Ad}_\phi . \mathrm{Ad}_\psi . f) \\
\overset{(\ref{Grunnnnnnf})}{=}\ &([\alpha] - [\kappa_\frak{g}(\Delta(\psi) , f)] - [\kappa_\frak{g} (\Delta(\phi) , \mathrm{Ad}_{\psi} f)] , \mathrm{Ad}_\phi . \mathrm{Ad}_\psi . f)\\
=& \mathcal{A}(\phi, ([\alpha]- [\kappa_\frak{g}(\Delta(\psi), f)], \mathrm{Ad}_{\psi}f))\\
=& \mathcal{A}(\phi, \mathcal{A}(\psi,([\alpha],f))).
\end{align*}
The action of $C^\infty_c(P,\frak{g})_{\rho_\frak{g}}$ associated to $\mathcal{A}$ is given by the adjoint action described in (\ref{GruAdacttitit234}), because $(-[\kappa_\frak{g}(D_{\rho_\frak{g}}(g),f)], \mathrm{ad}(g,f)) = ([\kappa_\frak{g}(g,D_{\rho_\frak{g}}(f))], \mathrm{ad}(g,f))$ for $f,g \in C^\infty_c(P,\frak{g})_{\rho_\frak{g}}$.
\end{proof}

\begin{theorem}\label{GruMainIII}
If $\overline{H}$ is finite, then we find a Lie group extension
\begin{align*}
\overline{\Omega}_c^1(M,\mathbb{V})/\Pi_{\omega_M} \hookrightarrow \widehat{\Gamma_c(M,\mathcal{G})_0} \rightarrow \Gamma_c(M,\mathcal{G})_0
\end{align*}
that corresponds to the central Lie algebra extension represented by $\omega_M$.
\end{theorem}

\begin{proof}
We simply need to put Theorem \ref{GruMainI}, Lemma \ref{GruMainII}, \cite[Proposition 7.6]{Neeb:2002} and \cite[Theorem 7.12]{Neeb:2002} together (the last two statements were recalled in the introduction to this thesis).
\end{proof}

\section{Universality of the Lie group extension}\label{GruZweiterTeil}

In this section we want to prove \cite[Theorem I.2]{Janssens:2013} in the case where $M$ is not compact but $\sigma$-compact (as in \cite[Theorem I.2]{Janssens:2013}, $M$ still has to be connected). In the first part of \cite{Janssens:2013} Janssens and Wockel showed that the cocycle $\omega_M\colon \Gamma_c(M,\frak{G})^2 \rightarrow \overline{\Omega}^1_c(M,\mathbb{V})$ (see \cite[p.
129 (1.1)]{Janssens:2013} and Remark \ref{Gru89Remarkdkdf}) is universal if $\frak{g}$ is semisimple and $M$ is a $\sigma$-compact manifold. In the second part of the paper they assumed the base manifold $M$ to be compact and obtained a universal cocycle $\Gamma(M,\frak{G})^2 \rightarrow \overline{\Omega}^1(M,\mathbb{V})$. Then they showed that, under certain conditions, a given Lie group bundle $G\hookrightarrow \mathcal{G}\rightarrow M$ with finite-dimensional Lie group $G$ is associated to the principal frame bundle $\mathrm{Aut}(G) \hookrightarrow \mathrm{Fr}(\mathcal{G}) \rightarrow M$. Hence they were able to use \cite[Theorem 4.24]{Neeb:2009} to integrate the universal Lie algebra cocycle $\Gamma(M,\frak{G})^2 \rightarrow \overline{\Omega}^1(M,\mathbb{V})$ to a Lie group cocycle $Z\hookrightarrow \widehat{\Gamma(M,\mathcal{G})_0} \rightarrow \Gamma(M,\mathcal{G})_0$. At this point it was crucial that $M$ was compact and connected in order to apply \cite[Theorem 4.24]{Neeb:2009}. Once the Lie group extension was constructed, Janssens and Wockel proved its universality by using the Recognition Theorem from \cite{Neeb:2002a} (see \cite[Theorem I.2]{Janssens:2013}).

To generalise \cite[Theorem I.2]{Janssens:2013} to the case where $M$ is connected and not compact, much of the proofs of \cite{Janssens:2013} can be transferred to the case of a non-compact base manifold by using Theorem \ref{GruMainIII} instead of \cite[Theorem 4.24]{Neeb:2009}. But our proof is shorter, because we can use our Theorem \ref{GruMainIII}, which holds for section groups and not just for gauge groups, while \cite[Theorem 4.24]{Neeb:2009} holds only for gauge groups. Hence we do not have to reduce the statement to the case of gauge groups, as was done in \cite{Janssens:2013}. We mention that in this section we assume the typical fibre $G$ of the Lie group bundle to be connected, while in \cite{Janssens:2013} Janssens and Wockel assume $\pi_0(G)$ to be finitely generated.

\begin{convention}
In this section $G$ is a connected semisimple finite-dimensional Lie group. As in the rest of the paper, $M$ is still a connected, non-compact, $\sigma$-compact finite-dimensional manifold.
\end{convention}

Analogously to \cite[p. 130]{Janssens:2013} we consider the following setting\footnote{In \cite{Janssens:2013} the Lie group $G$ is not assumed to be connected. Instead Janssens and Wockel assume $\pi_0(G)$ to be finitely generated.}:

\begin{definition}[Cf. p. 130 in \cite{Janssens:2013}]\label{GruUni1}
Let $G$ be a connected finite-dimensional semisimple Lie group with Lie algebra $\frak{g}$ and let $G\hookrightarrow \mathcal{G} \xrightarrow{q} M$ be a Lie group bundle. As in \cite[11.3.1]{Hilgert:2012} we turn $\mathrm{Aut}(G)$ into a finite-dimensional Lie group. In particular, $\mathrm{Aut}(G)$ becomes a Lie group such that $L\colon \mathrm{Aut}(G)\rightarrow \mathrm{Aut}(\frak{g})$ is an isomorphism onto a closed subgroup (\cite[Lemma 11.3.3]{Hilgert:2012}) and $\mathrm{Aut}(G)$ acts smoothly on $G$.
\end{definition}

\begin{lemma}\label{GruUni2}
The Lie group bundle $G\hookrightarrow \mathcal{G}\xrightarrow{q} M$ is isomorphic to the Lie group bundle associated to the frame principal bundle $\mathrm{Aut}(G) \hookrightarrow \mathrm{Fr}(\mathcal{G}) \rightarrow M$ (cf. \cite[p. 130]{Janssens:2013}).
Obviously all manifolds involved are $\sigma$-compact, because $M$ is $\sigma$-compact and $\mathrm{Aut}(G)$ is homeomorphic to a closed subgroup of $\mathrm{Aut}(\mathfrak{g})$.
\end{lemma}

\begin{definition}
We define $V=V(\mathfrak{g})$. In the situation considered in this subsection the map $\rho_V \colon \mathrm{Aut}(G)\times V \rightarrow V,~ (\varphi,\kappa_{\mathfrak{g}}(v,w)) \mapsto \kappa_{\mathfrak{g}}(L(\varphi)(v),L(\varphi)(w))$ is the smooth automorphic action $\rho_V$ described in Convention \ref{GruConvention9898}.
\end{definition}

\begin{lemma}[Cf.\ p.~130 in \cite{Janssens:2013}]\label{GruActTriv}
The identity component of $\mathrm{Aut}(G)$ acts trivially on $V$ under the representation $\rho_V \colon \mathrm{Aut}(G)\times V \rightarrow V,~ (\varphi,\kappa_{\mathfrak{g}}(v,w)) \mapsto \kappa_{\mathfrak{g}}(L(\varphi)(v),L(\varphi)(w))$.
\end{lemma}

\begin{proof}
Obviously it is enough to show that $(\mathrm{Aut}(\mathfrak{g}))_0$ acts trivially under $\rho \colon\mathrm{Aut}(\mathfrak{g}) \times V \rightarrow V,~(\varphi,\kappa_{\mathfrak{g}}(x, y))\mapsto \kappa_{\mathfrak{g}}(\varphi(x),\varphi(y))$. For $\check{\rho}\colon \mathrm{Aut}(\mathfrak{g}) \rightarrow GL(V),~ \varphi \mapsto \rho(\varphi,{\scriptscriptstyle \bullet})$, $x,y \in \mathfrak{g}$ and $f\in \mathrm{der}(\mathfrak{g})$ we have $L(\check{\rho})(f)(\kappa_{\mathfrak{g}}(x, y)) = d_{\mathrm{id}} \rho ({\scriptscriptstyle \bullet},\kappa_{\mathfrak{g}}(x, y))(f)$. Defining $\mathrm{ev}_x \colon \mathrm{Aut}(\mathfrak{g}) \rightarrow \mathfrak{g},~ \varphi \mapsto \varphi(x)$ for $x\in \mathfrak{g}$, we get
\begin{align*}
\rho({\scriptscriptstyle \bullet},\kappa_{\mathfrak{g}}(x, y)) = \kappa_{\mathfrak{g}}\circ (\mathrm{ev}_x,\mathrm{ev}_y).
\end{align*}
We have $d_{\mathrm{id}} \mathrm{ev}_x(f) = \frac{\partial}{\partial t}\big|_{t=0} \exp(tf)(x) = f(x)$. Hence
\begin{align*}
&d_{\mathrm{id}} \rho({\scriptscriptstyle \bullet},\kappa_{\mathfrak{g}}(x,y)) (f) = \kappa_{\mathfrak{g}}(\mathrm{ev}_x(\mathrm{id}), d_{\mathrm{id}} \mathrm{ev}_y(f)) + \kappa_{\mathfrak{g}}(d_{\mathrm{id}} \mathrm{ev}_x(f) , \mathrm{ev}_y(\mathrm{id}))\\
=&\kappa_{\mathfrak{g}}(x,f(y)) + \kappa_{\mathfrak{g}}(f(x),y).
\end{align*}
Because $\mathfrak{g}$ is semisimple, $\mathrm{der}(\mathfrak{g}) = \mathrm{inn}(\mathfrak{g})$. For $z\in \mathfrak{g}$ we calculate
\begin{align*}
L(\check{\rho})(\mathrm{ad}_z) (\kappa_{\mathfrak{g}}(x,y)) = \kappa_{\mathfrak{g}}(x,[y,z]) + \kappa_{\mathfrak{g}}([x,z],y) = \kappa_{\mathfrak{g}}(x,[y,z]) + \kappa_{\mathfrak{g}}(x,[z,y]) =0.
\end{align*}
Hence $\check{\rho}|_{\mathrm{Aut}(\mathfrak{g})_0} = \mathrm{id}_V$.
\end{proof}

Analogously to \cite[p.~130]{Janssens:2013}, we need the following requirement:

\begin{convention}\label{GruConventioooo}
In the following we assume $\overline{\mathrm{Aut}(G)}:= \mathrm{Aut}(G)/\ker(\rho_V)$ to be finite.
\end{convention}

\begin{definition}
Combining Convention \ref{GruConventioooo}, Lemma \ref{GruActTriv} and Theorem \ref{GruMainIII}, we find a Lie group extension
\begin{align*}
\overline{\Omega}_c^1(M,\mathbb{V})/\Pi_{\omega_M} \hookrightarrow \widehat{\Gamma_c(M,\mathcal{G})_0} \rightarrow \Gamma_c(M,\mathcal{G})_0
\end{align*}
that corresponds to the central Lie algebra extension represented by $\omega_M$. We write $Z:=\overline{\Omega}_c^1(M,\mathbb{V})/\Pi_{\omega_M}$.
If $\pi \colon \widetilde{\Gamma_c(M,\mathcal{G})_0} \rightarrow \Gamma_c(M,\mathcal{G})_0$ is the universal covering homomorphism and $Z\hookrightarrow H \rightarrow \widetilde{\Gamma_c(M,\mathcal{G})_0}$ the pullback extension, then \cite[Remark 7.14.]{Neeb:2002} tells us that we get a central extension of Lie groups
\begin{align*}
E:= Z\times \pi_1(\Gamma_c(M,\mathcal{G})_0) \hookrightarrow H \rightarrow \Gamma_c(M,\mathcal{G})_0.
\end{align*}
Its corresponding Lie algebra extension is represented by ${\omega_M}$.
\end{definition}

The following theorem is the analogue of \cite[Theorem I.2.]{Janssens:2013} for a non-compact base manifold and connected typical fibre.

\begin{theorem}
The central Lie group extension $Z\times \pi_1(\Gamma_c(M,\mathcal{G})_0) \hookrightarrow H \rightarrow \Gamma_c(M,\mathcal{G})_0$ is universal for all abelian Lie groups modelled over locally convex spaces.
\end{theorem}

\begin{proof}
The statement \cite[Theorem 4.13]{Neeb:2002a} and the analogous statement \cite[Theorem III.1]{Janssens:2013} are formulated for sequentially complete, respectively Mackey complete, spaces. But completeness is only assumed in order to guarantee the existence of the period map $\mathrm{per}_\omega$ and the existence of period maps of the form $\mathrm{per}_{\gamma\circ \omega}$ for continuous linear maps $\gamma \colon \mathfrak{z}\rightarrow \mathfrak{a}$. Obviously the period maps $\mathrm{per}_{\gamma\circ \omega}$ exist if the period map $\mathrm{per}_\omega$ exists. Hence, with Remark \ref{GruVollststaendigno}, we do not need to assume the completeness of the spaces. Therefore it is left to show that $H$ is simply connected. Using \cite[Remark 5.12]{Neeb:2002} we have the long exact homotopy sequence
\begin{align*}
&\pi_2(\Gamma_c(M,\mathcal{G})_0) \xrightarrow{\delta_2} \pi_1(Z\times \pi_1(\Gamma_c(M,\mathcal{G})_0)) \xrightarrow{i} \pi_1(H) \xrightarrow{p} \pi_1(\Gamma_c(M,\mathcal{G})_0) \\
&\xrightarrow{\delta_1} \pi_0(Z\times \pi_1(\Gamma_c(M,\mathcal{G})_0)).
\end{align*}
We show $i=0$: Calculating
\begin{align*}
\pi_1(Z\times \pi_1(\Gamma_c(M,\mathcal{G})_0)) = \pi_1(\overline{\Omega}^1_c(M,\mathbb{V})/\Pi_{\omega_M}) = \Pi_{\omega_M}
\end{align*}
and using \cite[Proposition 5.11]{Neeb:2002}, we conclude that $\delta_2$ is surjective. Hence $i=0$. From
\begin{align*}
\pi_0(Z\times \pi_1(\Gamma_c(M,\mathcal{G})_0))= \pi_1(\Gamma_c(M,\mathcal{G})_0),
\end{align*}
we get that $\delta_1$ is injective. Therefore $p=0$. Thus $\pi_1(H)=0$.
\end{proof}

\appendix
\section{Some differential topology}\label{GruAppendix}

\begin{lemma}\label{GruRealisation}
Let $H\hookrightarrow P\xrightarrow{q} M$ be a finite-dimensional smooth principal bundle (with $\sigma$-compact total space $P$), $\rho\colon H \times V\rightarrow V$ a finite-dimensional smooth linear representation and $\mathbb{V}:=P\times_{\rho}V$ the associated vector bundle.
\begin{compactenum}
\item The canonical isomorphism of vector spaces (see e.g.\ \cite[Satz 3.5]{Baum:2014}) $\Phi \colon \Omega^k(P,V)_\rho^{\mathrm{hor}} \rightarrow \Omega^k(M,\mathbb{V})$, $\omega\mapsto \widetilde{\omega}$ (with $\widetilde{\omega}_x(v_1,\dots,v_k)=\omega_{\sigma(x)}(T\sigma(v_1),\dots, T\sigma(v_k))$ for a local section $\sigma\colon U \rightarrow P$ of $P\xrightarrow{q}M$ and $x\in U$) is in fact an isomorphism of topological vector spaces.
\item The analogous isomorphism of vector spaces $\Phi \colon \Omega^k_c(P,V)_\rho^{\mathrm{hor}} \rightarrow \Omega_c^k(M,\mathbb{V})$, $\omega\mapsto\widetilde{\omega}$ is also an isomorphism of topological vector spaces.
\end{compactenum}
\end{lemma}

\begin{proof}
\begin{compactenum}
\item We choose an atlas $\psi_i \colon q^{-1}(U_i)\rightarrow U_i\times H$, $i\in I$, of trivialisations of $P$. Let $\sigma_i:= \psi_i^{-1}({\scriptscriptstyle \bullet},1_H)$ be the canonical section corresponding to $\psi_i$. As $\Omega^k(P,V)_\rho^{\mathrm{hor}}$ and $\Omega^k(M,\mathbb{V})$ are Fr\'echet spaces, it is enough to show the continuity of $\Phi$ (open mapping theorem). The topology on $\Omega^k(M,\mathbb{V}) = \Gamma(\mathrm{Alt}^k(TM,\mathbb{V}))$ is initial with respect to the maps $\Gamma(\mathrm{Alt}^k(TM,\mathbb{V}))\rightarrow \Gamma(\mathrm{Alt}^k(TU_i,\mathbb{V}|_{U_i}))$, $\eta \mapsto \eta|_{U_i}$. Given $\omega\in \Omega^k(P,V)_\rho^{\mathrm{hor}}$, $x\in U_i$ and $v\in T_xU_i$, we have $(\widetilde{\omega}|_{U_i})_x(v) =[\sigma_i(x),\sigma_i^\ast\omega_x(v)]$. Because $\Gamma(\mathrm{Alt}^k(TU_i,\mathbb{V}|_{U_i}))\cong \Gamma(\mathrm{Alt}^k(TU_i,V)) \cong \Omega^k(U_i,V)$, it is enough to show the continuity of $\Omega^k(P,V)_{\rho_V}^{\mathrm{hor}} \rightarrow \Omega^k(U_i,V)$, $\omega \mapsto \sigma_i^\ast \omega$. The map $C^\infty((TP)^k,V)\rightarrow C^\infty((TU_i)^k,V)$, $f\mapsto f\circ (T\sigma_i\times \dots\times T\sigma_i)$ is continuous (see \cite{Glockner:a}). Now the assertion follows, because we can embed $\Omega^k(P,V)_{\rho_V}^{\mathrm{hor}}$ into $C^\infty((TP)^k,V)$.
\item The analogous map from $\Omega^k(P,V)_\rho^{\mathrm{hor}}$ to $\Omega^k(M,\mathbb{V})$ is continuous. Hence, given a compact set $K \subseteq M$, the corresponding map from $\Omega^k_K(P,V)_\rho^{\mathrm{hor}}$ to $\Omega_K^k(M,\mathbb{V})$ is continuous. Therefore $\Phi$ is continuous. The same argument shows that the inverse of $\Phi$ is continuous.
\end{compactenum}
\end{proof}

The basic consideration in the following remark seems to be part of the folklore.

\begin{remark}\label{GruIsoQuotientBundle}
Given the situation of Definition \ref{GruQuotientenBuendel}, the following holds.
\begin{compactenum}
\item The vertical bundle of $\overline{H} \hookrightarrow \overline{P} \xrightarrow{\overline{q}} M$ is given by $V\overline{P} = T\pi(VP)$ and $H\overline{P}:= T\pi(HP)$ is a principal connection on $\overline{P}$.
\item Given $k \in \mathbb{N}_0$, the pullback $\pi^\ast \colon \Omega^k(\overline{P},V)_{\overline{\rho}_V}^{\mathrm{hor}} \rightarrow \Omega^k(P,V)_{\rho_V}^{\mathrm{hor}},~ \theta \mapsto \pi^\ast \theta$ is an isomorphism of topological vector spaces and an isomorphism of chain complexes.
\item Given $k \in \mathbb{N}_0$, the pullback $\pi^\ast \colon \Omega_c^k(\overline{P},V)_{\overline{\rho}_V}^{\mathrm{hor}} \rightarrow \Omega_c^k(P,V)_{\rho_V}^{\mathrm{hor}},~ \theta \mapsto \pi^\ast\theta$ is an isomorphism of topological vector spaces and an isomorphism of chain complexes.
\end{compactenum}
\end{remark}

\begin{proof}
\begin{compactenum}
\item First we show $T\pi(VP) \subseteq \ker(T\overline{q})$. For $v\in VP$ we get $T\overline{q} (T \pi(v))= T (\overline{q} \circ \pi) (v) = T q (v) =0$. To see $\ker(T\overline{q}) \subseteq T\pi(VP)$, let $T\overline{q} (w) = 0$. We find $v\in TP$ with $T\pi (v) = w$. Hence $T\overline{q}(T\pi(v)) = T(\overline{q} \circ \pi) (v) = Tq(v) = 0$. Thus $v\in VP$ and so $w\in T\pi(VP)$.

Now we show that $T\pi(HP)$ is a smooth subbundle of $T\overline{P}$. Let $\overline{x}\in \overline{P}$. Obviously $(T\pi(HP))_{\overline{x}} := T_{\overline{x}}\overline{P} \cap T\pi(HP)$ is closed under scalar multiplication. Let $v,w \in (T\pi(HP))_{\overline{x}} = T_{\overline{x}}\overline{P}\cap T\pi(HP)$. We find $p_1,p_2 \in P$, ${v}_1\in H_{p_1}P$ and $w_2 \in H_{p_2}P$ with $T_{p_1}\pi (v_1) = v$ and $T_{p_2}\pi (w_2) = w$. Hence $\pi(p_1)=\overline{x}=\pi(p_2)$. Therefore we find $n \in N$ with $p_1=p_2\cdot n$ and $\widetilde{w}\in T_{p_1}P$ with $TR_n(\widetilde{w}) = w_2$. Now we calculate
\begin{align*}
&v+w = T_{p_1}\pi(v_1) + T_{p_2} \pi (w_2) = T_{p_1}\pi(v_1) + T_{p_2} \pi \circ T_{p_1}R_n (\widetilde{w}) \\
= &T_{p_1}\pi(v_1) + T_{p_1} (\pi \circ R_n) (\widetilde{w}) = T_{p_1}\pi(v_1) + T_{p_1} \pi (\widetilde{w}) = T_{p_1}\pi (v_1 +\widetilde{w}).
\end{align*}
Hence $(T\pi(HP))_{\overline{x}}$ is a linear subspace. Now we can show that $H\overline{P}$ is a smooth subbundle. Let $\overline{p} \in \overline{P}$. Because $\pi$ is a submersion, we find a smooth local section $\tau \colon \widetilde{V} \rightarrow P$ of $\pi$ on an open $\overline{p}$-neighbourhood $\widetilde{V}\subseteq \overline{P}$. We define $p:=\tau (\overline{p})$ and find a smooth local frame $\sigma_1, \dots ,\sigma_m \colon \widetilde{U} \rightarrow TP$ of the smooth subbundle $HP$ on a $p$-neighbourhood $\widetilde{U}\subseteq P$. Without loss of generality we can assume $\tau (\widetilde{V}) \subseteq \widetilde{U}$. Given $i \in \{1,\dots, m\}$, we define the smooth map
\begin{align*}
\overline{\sigma}_i \colon \widetilde{V} \rightarrow T\overline{P},~ \overline{x}\mapsto T\pi(\sigma_i \circ \tau (\overline{x})).
\end{align*}
The map $\overline{\sigma}_i$ is a local section of the tangent bundle $T\overline{P}$, because given $\overline{x}\in \widetilde{V}$ we have $\sigma_i \circ \tau (\overline{x}) \in T_{\tau(\overline{x})}P$ and so $\overline{\sigma}_i (\overline{x}) \in T_{\pi(\tau(\overline{x}))} \overline{P} = T_{\overline{x}} \overline{P}$. Let $\overline{x}\in \widetilde{V}$. Next we show that $(\overline{\sigma}_i(\overline{x}))_{i=1,\dots,m}$ is a basis of $(T\pi(HP))_{\overline{x}} = T_{\overline{x}} \overline{P}\cap T\pi(HP)$. Let $\lambda_i \in \mathbb{R}$ with $\sum_{i=1}^m \lambda_i \cdot \overline{\sigma}_i(\overline{x}) =0$. We conclude $T_{\tau(\overline{x})} \pi \big(\sum_{i=1}^m \lambda_i \cdot {\sigma}_i(\tau( \overline{x}))\big)=0$. And so
\begin{align*}
Tq\left(\sum_{i=1}^m \lambda_i \cdot {\sigma}_i(\tau( \overline{x}))\right) = T\overline{q} \left(T\pi\left(\sum_{i=1}^m \lambda_i \cdot {\sigma}_i(\tau( \overline{x}))\right)\right) = 0.
\end{align*}
Therefore $\sum_{i=1}^m \lambda_i \cdot {\sigma}_i(\tau( \overline{x})) \in V_{\tau(\overline{x})}P \cap H_{\tau(\overline{x})}P = \{0\}$, and hence $\lambda_i =0$ for $i=1,\dots,m$. Let $p \in P$ with $\pi(p)=\overline{x}$.
One easily sees that the linear map $(T_p\pi)|_{H_pP} \colon H_pP \rightarrow (T\pi(HP))_{\overline{x}}$ is a surjection (see above). Because $m = \dim(H_pP)$, the linearly independent system $(\overline{\sigma}_i(\overline{x}))_{i=1, \dots,m}$ is a basis of $(T\pi(HP))_{\overline{x}}$.

Now let us show that $H\overline{P}:= T\pi(HP)$ is a principal connection on $\overline{P}$. Because $\pi$ is a submersion and $T_pP = H_pP \oplus V_pP$, we get $V_{\overline{x}}\overline{P} + H_{\overline{x}}\overline{P} = T_{\overline{x}} \overline{P}$ for $\overline{x}\in \overline{P}$. If $T_{p}\pi(v)=T_{p'}\pi(w)$ with $v\in V_pP$, $w\in H_{p'}P$ and $\pi(p)=\pi(p')=:\overline{p}$, we get
\begin{align*}
T_{\overline{p}}\overline{q}(T_p\pi(v)) = T_{\overline{p}}\overline{q} (T_{p'}\pi(w)).
\end{align*}
Hence $0=Tq(v)=Tq(w)$. Thus $w \in V_{p'}P$. Therefore $w =0$ and so $T_p\pi(v)=T_{p'}\pi(w)=0$ in $T_{\overline{p}}\overline{P}$. We conclude $V\overline{P} \oplus H\overline{P} = T\overline{P}$.

It is left to show that $H\overline{P}$ is invariant under the action of $\overline{H}$. Obviously it is enough to show $T_{\overline{x}}\overline{R}_{[g]} (H_{\overline{x}}\overline{P}) \subseteq H_{\overline{x}.[g]}\overline{P}$ for $\overline{x} \in \overline{P}$ and $[g]\in \overline{H}$. Let $v\in H_{\overline{x}}\overline{P}$. We find $p \in P$ and $w \in H_pP$ with $v= T_p\pi(w)$. With $\overline{R}_{[g]}\circ \pi = \pi \circ R_g$ and $\pi(pg)= \overline{x}.[g]$ we calculate
\begin{align*}
&T\overline{R}_{[g]}(v) = T_p(\overline{R}_{[g]} \circ \pi)(w) = T_{pg}\pi \circ T_pR_g(w) \in T_{pg} \pi (H_{pg}P)\\
\subseteq& T_{\overline{x}.[g]}\overline{P}\cap T\pi(HP) = H_{\overline{x}.[g]} \overline{P}.
\end{align*}
\item First we show that $\pi^\ast$ makes sense. Without loss of generality we assume $k =1$. Let $\theta \in \Omega^1(\overline{P},V)_{\overline{\rho}_V}^{\mathrm{hor}}$. We have $\pi \circ R_g = \overline{R}_{[g]} \circ \pi$. Hence
\begin{align*}
&\rho_V(g) \circ R_g^\ast \pi^\ast \theta = \overline{\rho}_V([g]) \circ (\pi\circ R_g)^\ast \theta = \overline{\rho}_V([g]) \circ (\overline{R}_{[g]} \circ \pi)^\ast \theta\\
=&\pi^\ast(\overline{\rho}_V([g]) \circ \overline{R}_{[g]} ^\ast \theta) = \pi^\ast \theta.
\end{align*}
Moreover, if $v\in V_pP$, we get $T_p\pi(v) \in V_{\pi(p)}\overline{P}$ and so $\pi^\ast \theta_p(v) = \theta_{\pi(p)} (T_p\pi(v)) =0$.

We show that $\pi^\ast$ is bijective. It is clear that $\pi^\ast$ is injective, because $\pi$ is a submersion. To see that $\pi^\ast$ is surjective, let $\eta \in \Omega^1(P,V)_{\rho_V}^{\mathrm{hor}}$; we define $\theta \in \Omega^1(\overline{P},V)^{\mathrm{hor}}_{\overline{\rho}_V}$ by $\theta_{\pi(p)}(T_p\pi(v)) := \eta_p(v)$ for $p \in P$ and $v\in T_pP$. To see that this is well-defined, we choose $p,r\in P$, $v\in T_pP$ and $w\in T_rP$ with $\pi(p)=\pi(r)$ and $T_p\pi(v) = T_r\pi(w)$. We find $n\in N$ with $p = r.n$. Because $\eta_{r.n} (T_rR_n(w)) = \eta_r(w)$ ($N=\ker(\rho_V)$), it is enough to show $\eta_p(v) = \eta_p(T_rR_n(w))$. We have $\pi \circ R_n= \overline{R}_{[n]}\circ\pi = \pi$. Hence $T\pi \circ TR_n = T\pi$. Thus $T_p\pi(T_rR_n(w)) = T_r\pi(w) = T_p\pi(v)$. Therefore we find $x \in \ker(T_p\pi)$ with $T_rR_n(w)+x =v$ in $T_pP$. Hence $T_pq(x) = 0$, because $Tq = T\overline{q} \circ T\pi$. So $x \in V_pP$ and hence $\eta_p(x)=0$.
The form $\theta$ is $\overline{\rho}_V$-invariant, because for $g\in H$, $p \in P$ and $v\in T_pP$ we get
\begin{align*}
&(\overline{\rho}_V([g])\circ \overline{R}_{[g]}^\ast \theta)_{\pi(p)} (T_p\pi(v)) = \overline{\rho}_V([g])\circ \theta_{\pi(p).[g]} (T\overline{R}_{[g]}(T_p\pi(v)))\\
=& \rho_V(g) \circ \theta_{\pi(p.g)}(T_{p.g}\pi(TR_g(v)))\\
=&\rho_V(g)\circ \eta_{p.g}(TR_g(v)) = \eta_p(v) = \theta_{\pi(p)}(T\pi(v)).
\end{align*}
Moreover $\theta$ is horizontal, because given $u \in V_{\overline{p}}\overline{P}$ with $\overline{p}\in \overline{P}$, we find $p \in P$ with $\pi(p)= \overline{p}$ and $v\in V_pP$ with $u = T_p\pi(v)$. Hence $\theta_{\overline{p}}(u)= \theta_{\pi(p)}(T\pi(v)) = \eta_p(v)=0$. Obviously we have $\pi^\ast \theta =\eta$.

In order to show that $\pi^\ast$ is an isomorphism of chain complexes, we choose $p\in P$ and $v,w\in T_pP$ and calculate
\begin{align*}
&(\pi^\ast D_{\overline{\rho}_V}\theta)_p(v,w) = (D_{\overline{\rho}_V}\theta)_{\pi(p)}(T\pi(v),T\pi(w))\\
=& (d\theta)_{\pi(p)} (\mathrm{pr}_h\circ T\pi (v),\mathrm{pr}_h\circ T\pi (w)) = (d\theta)_{\pi(p)} (T\pi \circ \mathrm{pr}_h(v),T\pi \circ \mathrm{pr}_h(w))\\
=&(\pi^\ast d\theta)_p(\mathrm{pr}_h(v), \mathrm{pr}_h(w)) =(D_{\rho_V}\pi^\ast \theta)_p(v,w).
\end{align*}
It is left to show that $\pi^\ast$ is a homeomorphism. Because the corresponding spaces are Fr\'echet spaces, it is enough to show the continuity of $\pi^\ast$. We can embed $\Omega^k(\overline{P},V)_{\overline{\rho}_V}^{\mathrm{hor}}$ into $C^\infty((T\overline{P})^k,V)$ and $\Omega^k(P,V)_{\rho_V}^{\mathrm{hor}}$ into $C^\infty((TP)^k,V)$. The map $C^\infty((T\overline{P})^k,V) \rightarrow C^\infty((TP)^k,V)$, $f\mapsto f\circ (T\pi \times \cdots \times T\pi)$ is continuous (see \cite{Glockner:a}). Now the assertion follows.
\item This follows from (b) and the fact that $\pi^\ast(\Omega_K^k(\overline{P},V)_{\overline{\rho}_V}^{\mathrm{hor}}) = \Omega^k_K(P,V)_{\rho_V}^{\mathrm{hor}}$ for a compact set $K\subseteq M$.
\end{compactenum}
\end{proof}

The statement in the following lemma also seems to be well known, but we did not find a source for this exact result. Its proof uses techniques from the proof of \cite[Theorem 1.5]{Rosenberg:1997}; see also \cite[Chapter 6]{Bott:1982}.

\begin{lemma}\label{GruIsoEndlCover}
If $q \colon \hat{M} \rightarrow M$ is a smooth finite manifold covering, then $q^\ast\colon \Omega^1_c(M,V) \rightarrow \Omega^1_c(\hat{M}, V),~ \theta \mapsto q^\ast \theta$ induces a well-defined isomorphism of topological vector spaces $H^1_{dR,c}(M,V) \rightarrow H^1_{dR,c}(\hat{M},V),~[\theta] \mapsto [q^\ast \theta]$. Therefore $\overline{q}^\ast\colon H^1_{dR,c}(M,V) \rightarrow H^1_{dR,c}(\overline{P}, V),~ [\theta] \mapsto [\overline{q}^\ast \theta]$ is an isomorphism of topological vector spaces.
\end{lemma}

\begin{proof}
We use the notation $\Omega_K^k(\hat{M},V):= \{\theta \in \Omega^k(\hat{M},V): \mathrm{supp}(\theta)\subseteq q^{-1}(K)\}$ for a compact subset $K\subseteq M$. Let $n$ be the order of the covering. The first step is to define a continuous linear map $q_\ast \colon \Omega_c^k(\hat{M},V) \rightarrow \Omega_c^k(M,V)$ for $k \in \mathbb{N}_0$. Without loss of generality let $k=1$. Let $\theta \in \Omega^1_c(\hat{M},V)$.
Given $y\in M$, we find a $y$-neighbourhood $V_y \subseteq M$ that is evenly covered by open sets $U_{y,i}\subseteq \hat{M}$, $i=1,\dots,n$. We have diffeomorphisms $q_i^y:=q|_{U_{y,i}}^{V_y}$. Then
\begin{align*}
\widetilde{\theta}^y:= \frac 1n \sum_{i=1}^n (q_i^y)_\ast \theta|_{U_{y,i}}
\end{align*}
is a $1$-form on $V_y$, where $((q_i^y)_\ast \theta|_{U_{y,i}})_x(v) = \theta\big(T(q_i^y)^{-1}(v)\big)$ for $x\in V_y$ and $v \in T_xV_y$. We define $q_\ast \theta:= \widetilde{\theta} \in \Omega^1_c(M,V)$ by $\widetilde{\theta}_x:=\widetilde{\theta}^y_x$ for $x\in V_y$. Now we show that this is well defined. Let $x\in V_y \cap V_{y'}$ for $y' \in M$ with a $y'$-neighbourhood $V_{y'}$ that is evenly covered by $(U_{y',i})_{i=1,\dots,n}$. After renumbering the sets $U_{y',i}$ we get
\begin{align*}
q|_{U_{y,i}}^{-1} = q|_{U_{y',i}}^{-1}
\end{align*}
on $V_y \cap V_{y'}$ for $i=1,\dots,n$. Hence
\begin{align*}
\widetilde{\theta}^y_x = \frac 1n \sum_i ((q_i^y)_\ast \theta|_{U_{y,i}})_x = \frac 1n \sum_i ((q_i^{y'})_\ast \theta|_{U_{y',i}})_x = \widetilde{\theta}^{y'}_x \quad\text{for } x\in V_{y}\cap V_{y'}.
\end{align*}
We note that $q$ is a proper map, because it is a finite covering. Let $\mathrm{supp} (\theta)\subseteq q^{-1}(K)$ for a compact set $K \subseteq M$. If $y \notin K$, then $q^{-1}(\{y\}) \cap q^{-1}(K) = \emptyset$. Hence $q^{-1}(\{y\}) \cap \mathrm{supp}(\theta) =\emptyset$. It follows that
\begin{align*}
(q_\ast \theta)_y = \widetilde{\theta}_y^y = \frac 1n \sum_i ((q_i^y)_\ast \theta|_{U_{y,i}})_y = \frac 1n \sum_i \theta_{(q_i^y)^{-1} (y)}\circ T(q_i^y)^{-1} =0.
\end{align*}
Hence $M\setminus K \subseteq M\setminus \{x\in M: (q_\ast \theta)_x\neq 0\}$. Therefore $\{x\in M: (q_\ast \theta)_x \neq 0\} \subseteq K$ and so $\mathrm{supp}(q_\ast \theta) \subseteq K$. Obviously $q_\ast$ is linear. Moreover $q_\ast$ is continuous, because the analogous map from $\Omega^1(\hat{M},V)$ to $\Omega^1(M,V)$ is continuous and $q_\ast(\Omega_K^1(\hat{M}, V)) \subseteq \Omega_K^1(M,V)$. Moreover $q_\ast$ is a homomorphism of chain complexes: given $y\in M$ and $v,w \in T_yM$, we calculate
\begin{align*}
(q_\ast d\theta)_y(v,w) = \frac 1n \sum_i ((q_i^y)_\ast d\theta|_{U_{y,i}})_y(v,w) = \frac 1n \sum_i ( d (q_i^y)_\ast \theta|_{U_{y,i}})_y(v,w) = (d q_\ast \theta)_y(v,w).
\end{align*}
Now we show
\begin{align}\label{Grusur}
q_\ast \circ q^\ast = \mathrm{id}_{\Omega^1_c(M,V)}.
\end{align}
Given $\theta \in \Omega_c^1(M,V)$, $y \in M$ and $v\in T_yM$, we calculate
\begin{align*}
&(q_\ast q^\ast \theta)_y(v) =\frac 1n \sum_i ((q_i^y)_\ast q^\ast \theta|_{U_{y,i}})_y(v) = \frac 1n \sum_i (q^\ast \theta|_{U_{y,i}})_{(q_i^y)^{-1} (y)} (T(q_i^y)^{-1} (v))\\
=&\frac 1n \sum_i \theta_{q((q_i^y)^{-1} (y))} (Tq\circ T(q_i^y)^{-1} (v)) = \theta_y(v).
\end{align*}
Hence $q_\ast \circ q^\ast = \mathrm{id}_{\Omega^1_c(M,V)}$. We know that $q^\ast$ descends to a continuous linear map $q^\ast \colon H^1_{dR,c}(M,V)\rightarrow H^1_{dR,c}(\hat{M},V)$ and, because $q_\ast$ is a homomorphism of chain complexes, we get a map $q_\ast \colon H^1_{dR,c}(\hat{M},V) \rightarrow H^1_{dR,c}(M,V)$. With equation (\ref{Grusur}) we see
\begin{align*}
q_\ast \circ q^\ast = \mathrm{id}_{H^1_{dR,c}(M,V)}.
\end{align*}
Hence $q_\ast$ is surjective. It remains to show that $q_\ast \colon H^1_{dR,c}(\hat{M},V) \rightarrow H^1_{dR,c}(M,V)$ is also injective. To this end we show $q_\ast (B^1_c(\hat{M},V)) = B^1_c(M,V)$. Given $f\in C^\infty_c(M,V)$ we calculate
\begin{align*}
q_\ast (d q^\ast f) =q_\ast q^\ast df =df.
\end{align*}
\end{proof}

The proof of Lemma \ref{GrugeschlForm} is similar to the proof of \cite[Lemma II.10 (1)]{Neeb:2004}.

\begin{lemma}\label{GrugeschlForm}
Let $M$ be a connected finite-dimensional manifold, $E$ a finite-dimensional vector space and $\theta \in \Omega^1(M,E)$. If for all closed smooth curves $\alpha_0,\alpha_1 \colon [0,1]\rightarrow M$ such that $\alpha_0$ is homotopic to $\alpha_1$ relative $\{0,1\}$ we have
\begin{align*}
\int_{\alpha_0} \theta = \int_{\alpha_1} \theta,
\end{align*}
then $\theta \in Z^1_{dR}(M,E)$.
\end{lemma}

\begin{proof}
Let $q\colon \widetilde{M} \rightarrow M$ be the universal smooth covering of $M$. First we show that $q^\ast \theta$ is exact. To this end we show that $q^\ast \theta$ is conservative. Let $\gamma\colon [0,1] \rightarrow \widetilde{M}$ be a smooth closed curve at a point $p_0 \in \widetilde{M}$ and $q(p_0)=:x_0 \in M$. Because $\widetilde{M}$ is simply connected, we find a homotopy $H$ from $\gamma$ to $c_{p_0}$ relative $\{0,1\}$. Hence $q\circ \gamma$ is homotopic to $c_{x_0} = q\circ c_{p_0}$ relative $\{0,1\}$. Therefore we get
\begin{align*}
\int_\gamma q^\ast \theta = \int_{[0,1]} \gamma^\ast q^\ast \theta = \int_{[0,1]} (q\circ \gamma)^\ast \theta \overset{(\ast)}{=} \int_{[0,1]} c_{x_0}^\ast \theta =0.
\end{align*}
Equation $(\ast)$ follows from the assumptions of the lemma. Because $q^\ast \theta$ is exact, we find $f\in C^\infty(\widetilde{M},E)$ with $q^\ast \theta =df$. Hence we get
\begin{align*}
q^\ast d\theta = dq^\ast \theta = ddf =0.
\end{align*}
Therefore $d\theta =0$, because $q$ is a submersion.
\end{proof}

\begin{definition}\label{GrustrictSES}
Let $M$ be a connected $\sigma$-compact finite-dimensional manifold. Using \cite[Lemma IV.4]{Neeb:2004} we find a sequence $(K_n)_{n\in \mathbb{N}}$ of compact equidimensional submanifolds with boundary of $M$ such that $K_n\subseteq \mathring{K}_{n+1}$, $\bigcup_{n\in \mathbb{N}} K_n = M$ and the connected components of $M\setminus K_n$ are not relatively compact in $M$. We call such a sequence $(K_n)_{n\in \mathbb{N}}$ a {\it saturated exhaustive sequence}. The sequence is called {\it strict} if there exists $N\in \mathbb{N}$ such that for all $n\geq N$ the canonical map $\pi_0(M\setminus K_{n+1}) \rightarrow \pi_0(M\setminus K_n)$ is injective.
\end{definition}

\begin{lemma}\label{GruFolgenvoll}
Let $M$ be a $\sigma$-compact finite-dimensional manifold, $V$ a finite-dimensional vector space and $(K_n)_{n\in \mathbb{N}}$ a strict saturated exhaustive sequence for $M$. Then the space $\Omega_c^1(M,V)/dC_c^\infty(M,V)$ is complete.
\end{lemma}

\begin{proof}
The space $\Omega_c^1(M,V)$ is the strict inductive limit of the Fr\'echet spaces $(\Omega^1_{K_n}(M,V))_{n\geq N}$, where $N$ is chosen as in Definition \ref{GrustrictSES}.
Using \cite[Lemma B.4]{Neeb:2004} and \cite[Lemma IV.10]{Neeb:2004} we get
\begin{align*}
&\overline{\Omega}_c^1(M,V):= \Omega_c^1(M,V)/dC_c^\infty(M,V) = \underset{\rightarrow}{\lim}\,\Omega^1_{K_n}(M,V) / (dC^\infty_c(M,V)\cap \Omega^1_{K_n}(M,V))\\
=& \underset{\rightarrow}{\lim}\,\Omega_{K_n}^1(M,V)/dC_{K_n}^\infty(M,V).
\end{align*}
Because the spaces $\Omega_{K_n}^1(M,V)/dC_{K_n}^\infty(M,V)=:\overline{\Omega}_{K_n}^1(M,V)$ are Fr\'echet spaces, it is enough to show that the inductive limit is strict. Therefore we have to show that the image $(\Omega^1_{K_n}(M,V)+ dC^\infty_{K_{n+1}}(M,V))/ dC^\infty_{K_{n+1}}(M,V)$ of the continuous linear injection $\overline{\Omega}^1_{K_n}(M,V) \rightarrow \overline{\Omega}^1_{K_{n+1}}(M,V),~ [\theta]\mapsto [\theta]$ is closed in $\overline{\Omega}^1_{K_{n+1}}(M,V)$. For $\alpha \in C^\infty(\mathbb{S}^1,M)$ we define the continuous linear map
\begin{align*}
I_\alpha \colon \overline{\Omega}^1_{K_{n+1}}(M,V) \rightarrow V,~ [\theta]\mapsto \int_\alpha \theta.
\end{align*}
We show
\begin{align*}
\overline{\Omega}_{K_n}^1(M,V) \cong (\Omega^1_{K_n}(M,V)+ dC^\infty_{K_{n+1}}(M,V))/ dC^\infty_{K_{n+1}}(M,V)= \bigcap_{\alpha \in C^\infty(\mathbb{S}^1, M\setminus \mathring{K}_n)} I_\alpha ^{-1} (\{0\}).
\end{align*}
One inclusion is trivial. We mention that $M\setminus \mathring{K}_n$ is a submanifold with boundary of $M$. Now let $[\theta] \in \overline{\Omega}^1_{K_{n+1}}(M,V)$ with $\int_\alpha \theta|_{M\setminus \mathring{K}_n} =0$ for all $\alpha \in C^\infty(\mathbb{S}^1,M\setminus \mathring{K}_n)$. Hence $\theta|_{M\setminus \mathring{K}_n}$ is conservative and we find $f\in C^\infty(M\setminus \mathring{K}_n, V)$ with $\theta|_{M\setminus \mathring{K}_n} = df$. Because $\mathrm{supp}(\theta|_{M\setminus \mathring{K}_{n}}) \subseteq K_{n+1}$, we get $df|_{M\setminus K_{n+1}}=0$. Hence $f$ is constant on each connected component of $M\setminus K_{n+1}$. Because the saturated exhaustive sequence $(K_n)_{n\in \mathbb{N}}$ is strict, we can subtract the constant value of $f$ on each connected component and may assume $\mathrm{supp}(f)\subseteq K_{n+1}$. Now we extend $f$ to a smooth map $\widetilde{f} \colon M\rightarrow V$, which is possible because $M\setminus \mathring{K}_n$ is a closed submanifold. Obviously $\mathrm{supp} (\theta - d\widetilde{f}) \subseteq K_n$ and hence $[\theta]= [\theta-d\widetilde{f}] \in \overline{\Omega}^1_{K_n}(M,V) \subseteq \overline{\Omega}^1_{K_{n+1}}(M,V)$.
\end{proof}

\begin{lemma}\label{GruFolgenvoll2}
In the situation of Section \ref{GruChLieEx}, the space $\Omega_c^1(M,\mathbb{V})/d_{\mathbb{V}}\Gamma_c(M,\mathbb{V})$ is sequentially complete if $\overline{P}$ admits a strict saturated exhaustive sequence.
\end{lemma}

\begin{proof}
It is enough to show that $\Omega^1_c(\overline{P},V)_{\overline{\rho}_V}/dC^\infty_c(\overline{P},V)_{\overline{\rho}_V}$ is sequentially complete.
To this end we show that $\psi \colon \Omega^1_c(\overline{P},V)_{\overline{\rho}_V} \rightarrow (\Omega_c^1(\overline{P},V)/ dC^\infty_c(\overline{P},V))_{\mathrm{fix}}, ~ \omega \mapsto [\omega]$ is surjective and $\ker(\psi) = dC^\infty_c(\overline{P},V)_{\overline{\rho}_V}$, where $(\Omega_c^1(\overline{P},V)/ dC^\infty_c(\overline{P},V))_{\mathrm{fix}}$ stands for the fixed points of the natural action of $\overline{H}$ on $\Omega_c^1(\overline{P},V)/ dC^\infty_c(\overline{P},V)$. The inclusion $dC^\infty_c(\overline{P},V)_{\overline{\rho}_V} \subseteq \ker(\psi)$ is clear. Let $n:=\#\overline{H}$, $[\omega] \in (\Omega_c^1(\overline{P},V)/ dC^\infty_c(\overline{P},V))_{\mathrm{fix}}$ and $[\omega]=[df]$ for $f\in C^\infty_c(\overline{P},V)$. Then $[\omega] = [d \tfrac 1n \sum_{g\in \overline{H}} g.f]$. In order to show that $\psi$ is surjective, let $\omega \in \Omega^1_c(\overline{P},V)$ be such that $[\omega]$ is $\rho_V$-invariant. Then $[\omega] = [\tfrac 1n \sum_{g\in \overline{H}} g.\omega]$ and $\tfrac 1n \sum_{g\in \overline{H}} g.\omega \in \Omega^1_c(\overline{P},V)_{\overline{\rho}_V}$. Hence
\begin{align*}
\Omega^1_c(\overline{P},V)_{\overline{\rho}_V}/dC^\infty_c(\overline{P},V)_{\overline{\rho}_V} \rightarrow (\Omega_c^1(\overline{P},V)/ dC^\infty_c(\overline{P},V))_{\mathrm{fix}}, ~ [\omega] \mapsto [\omega]
\end{align*}
is a continuous vector space isomorphism. It is even an isomorphism of topological vector spaces, because $(\Omega_c^1(\overline{P},V)/ dC^\infty_c(\overline{P},V))_{\mathrm{fix}} \rightarrow \Omega^1_c(\overline{P},V)_{\overline{\rho}_V}/dC^\infty_c(\overline{P},V)_{\overline{\rho}_V},~ [\omega] \mapsto [\tfrac 1n \sum_{g \in \overline{H}} g.\omega]$ is a continuous right-inverse. Hence $\Omega^1_c(\overline{P},V)_{\overline{\rho}_V}/dC^\infty_c(\overline{P},V)_{\overline{\rho}_V}$ is sequentially complete if and only if
\begin{align*}
(\Omega_c^1(\overline{P},V)/ dC^\infty_c(\overline{P},V))_{\mathrm{fix}}
\end{align*}
is sequentially complete. Because $\overline{q}$ is proper, $\overline{P}$ is $\sigma$-compact. For the same reason as in \cite[Chapter 2, p.~385]{Neeb:2009} we can assume $\overline{P}$ to be connected. Now Lemma \ref{GruFolgenvoll} shows that $\Omega_c^1(\overline{P},V)/ dC^\infty_c(\overline{P},V)$ is sequentially complete. Hence the assertion follows from
\begin{align*}
&(\Omega_c^1(\overline{P},V)/ dC^\infty_c(\overline{P},V))_{\mathrm{fix}} = \bigcap_{g\in \overline{H}} \{\omega \in \Omega_c^1(\overline{P},V)/ dC^\infty_c(\overline{P},V): \overline{\rho}_V(g)\circ \overline{R}_g^\ast \omega =\omega\}\\
=&\bigcap_{g\in \overline{H}} (\overline{\rho}_V(g)\circ \overline{R}_g^\ast - \mathrm{id})^{-1}\{0\}.
\end{align*}
\end{proof}

\begin{thebibliography}{DD}

\bibitem{Alzaareer:2013} Alzaareer, H., {\it Lie groups of mappings on non-compact spaces and manifolds}, Dissertation, Universit{\"a}t Paderborn (2013).

\bibitem{Baum:2014} Baum, H., ``Eichfeldtheorie. Eine Einf\"uhrung in die Differentialgeometrie auf Faserb\"undeln,'' Second Edition, Springer, 2014.

\bibitem{Bott:1982} Bott, R.; Tu, L. W., ``Differential Forms in Algebraic Topology,'' Springer, 1982.

\bibitem{Eyni:2014} Eyni, J. M., {\it Universal continuous bilinear forms for compactly supported sections of Lie algebra bundles and universal continuous extensions of certain current algebras}.

\bibitem{Glockner:a} Gl{\"o}ckner, H.; Neeb, K.-H., ``Infinite-Dimensional Lie Groups,'' book in preparation.
\bibitem{Glockner:2003} Gl{\"o}ckner, H., {\it Lie groups of measurable mappings}, Can. J. Math. {\bf 55} (2003), 969--999.

\bibitem{Glockner:2004} Gl{\"o}ckner, H., {\it Lie groups over non-discrete topological fields}, arXiv:math/0408008v1, 2004.

\bibitem{Glockner:2007} Gl{\"o}ckner, H., {\it Direct limits of infinite-dimensional Lie groups compared to direct limits in related categories}, J. Funct. Anal. {\bf 245} (2007), 19--61.

\bibitem{Glockner:2008a} Gl\"ockner, H., {\it Homotopy groups of ascending unions of infinite-dimensional manifolds}, arXiv:0812.4713v2 [math.AT], 2010.

\bibitem{Glockner:2013} Gl\"ockner, H., {\it Differentiable mappings between spaces of sections}, arXiv:1308.1172v1, 2013.

\bibitem{Glockner:2012} Gl{\"o}ckner, H., {\it Regularity properties of infinite-dimensional Lie groups, and semiregularity}, arXiv:1208.0715v2, 2014.

\bibitem{Gundogan:2011} G{\"u}ndo\u{g}an, H., ``Classification and Structure Theory of Lie Algebras of Smooth Sections,'' Logos, Berlin (2011).

\bibitem{Hilgert:2012} Hilgert, J.; Neeb, K.-H., ``Structure and Geometry of Lie Groups,'' Springer Monographs in Mathematics, 2012.

\bibitem{Janssens:2013} Janssens, B.; Wockel, C., {\it Universal central extensions of gauge algebras and groups}, J. Reine Angew. Math. {\bf 682} (2013), 129--139.

\bibitem{Kriegl:1997} Kriegl, A.; Michor, P. W., ``The Convenient Setting of Global Analysis,'' Mathematical Surveys and Monographs {\bf 53}, 1997.

\bibitem{Lee:2013} Lee, J. M., ``Introduction to Smooth Manifolds,'' Second Edition, Springer, 2013.

\bibitem{Maier:2003} Maier, P.; Neeb, K.-H., {\it Central extensions of current groups}, Math. Ann. {\bf 326} (2003), 367--415.

\bibitem{Michor:1980} Michor, P., ``Manifolds of Differentiable Mappings,'' Shiva Publishing, 1980.

\bibitem{Neeb:2009} Neeb, K.-H.; Wockel, C., {\it Central extensions of groups of sections}, Ann. Glob. Anal. Geom. {\bf 36} (2009), 381--418.

\bibitem{Neeb:2002} Neeb, K.-H., {\it Central extensions of infinite-dimensional Lie groups}, Ann. Inst. Fourier (Grenoble) {\bf 52} (2002), no. 5, 1365--1442.

\bibitem{Neeb:2002a} Neeb, K.-H., {\it Universal central extensions of Lie groups}, Acta Appl. Math. {\bf 73} (2002), 175--219.

\bibitem{Neeb:2004} Neeb, K.-H., {\it Current groups for non-compact manifolds and their central extensions}, IRMA Lect. Math. Theor. Phys. {\bf 5}, de Gruyter, Berlin (2004), 109--183.

\bibitem{Rosenberg:1997} Rosenberg, S., ``The Laplacian on a Riemannian Manifold,'' London Mathematical Society Student Texts {\bf 31}, Cambridge University Press, 1997.

\bibitem{Schuett:2013} Sch{\"u}tt, J., {\it Symmetry Groups of Principal Bundles Over Non-Compact Bases}, arXiv:1310.8538, 2013.

\bibitem{Napier:2011} Napier, T.; Ramachandran, M., ``An Introduction to Riemann Surfaces,'' Birkh{\"a}user, 2010.

\bibitem{van-Est:1964} van Est, W. T.; Korthagen, Th. J., {\it Non-enlargible Lie algebras}, Nederl. Akad. Wet., Proc., Ser. A {\bf 67} (1964), 15--31.

\bibitem{Wengenroth:2003} Wengenroth, J., ``Derived Functors in Functional Analysis,'' Springer, 2003.

\bibitem{Wockel:2007} Wockel, C., {\it Lie group structures on symmetry groups of principal bundles}, J. Funct. Anal. {\bf 251} (2007), 254--288.

\end{thebibliography}
\end{document}
\begin{document} \begin{abstract} Ordinary first-order logic has the property that two formulas $\phi$ and $\psi$ have the same meaning in a structure if and only if the formula $\phi \iff \psi$ is true in the structure. We prove that independence-friendly logic does not have this property. \end{abstract} \title{``Iff'' is not expressible in independence-friendly logic} \section{Introduction} The \emph{meaning} of a first-order formula $\phi$ in a structure $\A$ is just the set of valuations that make the formula true in $\A$. That is, \[ \phi^\A = \setof{\vec a \in \^NA}{\A \models \phi[\vec a]}, \] where $A$ is the universe of $\A$, and $N$ is the number of variables in $\phi$. Given a structure $\A$ and any two first-order formulas $\phi$ and $\psi$, \[ \phi^\A = \psi^\A \] if and only if \(\A \models \phi \iff \psi\). Thus first-order logic is able to express the concept of ``if and only if.'' Independence-friendly logic (IF logic) is a conservative extension of first-order logic that has the same expressive power as existential second-order logic \cite{Hintikka:1989, Hintikka:1996}. In IF logic the truth of a sentence is defined via a game between two players, \abelard\ ($\forall$) and \eloise\ ($\exists$). The additional expressivity is obtained by modifying the quantifiers and connectives of an ordinary first-order sentence in order to restrict the information available to the existential player, \eloise, in the associated semantic game. In IF logic, only the information available to \eloise\ is restricted, which means existential quantifiers are not dual to universal quantifiers. To compensate, negation symbols are only allowed before atomic formulas. Generalized independence-friendly logic (IFG logic) is a variant of independence-friendly logic in which the information available to both players can be restricted, making existential quantifiers dual to universal quantifiers and allowing any formula to be negated \cite{Dechesne:2005}. Since there are IFG-sentences that are neither true nor false, it is unclear whether IFG logic can express the concept of ``if and only if.'' For instance, one can define \(\phi \hiff{J} \psi\) as an abbreviation for the formula \[ ({\hneg\phi} \hor{J} \psi) \hand{J} (\phi \hor{J} {\hneg\psi}) \] (the subscripts indicate what information is unavailable to the players at each move), but does this formula assert that $\phi$ and $\psi$ are logically equivalent? Is \(\phi \hiff{J} \psi\) true in a structure exactly when $\phi$ and $\psi$ have the same meaning in that structure? If not, is there some other syntactical combination of $\phi$ and $\psi$ that is true exactly when $\phi$ and $\psi$ have the same meaning? The answer to all of these questions is no. \section{IFG logic} \begin{defn} Given a first-order signature $\sigma$, an \emph{atomic IFG-formula}\index{IFG-formula!atomic|mainidx} is a pair $\tuple{\phi, X}$ where $\phi$ is an atomic first-order formula and $X$ is a finite set of variables that includes every variable that appears in $\phi$ (and possibly more). \end{defn} \begin{defn} Given a first-order signature $\sigma$, the language $\mathscr L_\mathrm{IFG}^\sigma$\index{$\mathscr L_\mathrm{IFG}^\sigma$|mainidx}\index{IFG-formula|(} is the smallest set of formulas such that: \begin{enumerate} \item Every atomic IFG-formula is in $\mathscr L_\mathrm{IFG}^\sigma$. \item If $\tuple{\phi, Y}$ is in $\mathscr L_\mathrm{IFG}^\sigma$ and \(Y \subseteq X\), then $\tuple{\phi, X}$ is in $\mathscr L_\mathrm{IFG}^\sigma$. 
\item If $\tuple{\phi, X}$ is in $\mathscr L_\mathrm{IFG}^\sigma$, then $\tuple{\hneg\phi, X}$ is in $\mathscr L_\mathrm{IFG}^\sigma$. \item If $\tuple{\phi, X}$ and $\tuple{\psi, X}$ are in $\mathscr L_\mathrm{IFG}^\sigma$, and \(Y \subseteq X\), then $\tuple{\phi \hor{Y} \psi, X}$ is in $\mathscr L_\mathrm{IFG}^\sigma$. \item If $\tuple{\phi, X}$ is in $\mathscr L_\mathrm{IFG}^\sigma$, \(x \in X\), and \(Y \subseteq X\), then $\tuple{\hexists{x}{Y}\phi, X}$ is in $\mathscr L_\mathrm{IFG}^\sigma$. \end{enumerate} Above $X$ and $Y$ are finite sets of variables. \end{defn} From now on we will make certain assumptions about IFG-formulas that will allow us to simplify our notation. First, we will assume that the set of variables of $\mathscr L_\mathrm{IFG}^\sigma$ is $\setof{v_n}{n \in \omega}$. Second, since it does not matter much which particular variables appear in a formula, we will assume that variables with smaller indices are used before variables with larger indices. More precisely, if $\tuple{\phi, X}$ is a formula, \(v_j \in X\), and \(i \leq j\), then \(v_i \in X\). By abuse of notation, if $\tuple{\phi, X}$ is a formula and \(\abs X = N\), then we will say that $\phi$ has $N$ variables and write $\phi$ for $\tuple{\phi, X}$. As a shorthand, we will call $\phi$ an IFG$_N$-formula\index{IFG$_N$-formula}. Let \[ \mathscr L^\sigma_{\mathrm{IFG}_N} = \setof{\phi \in \mathscr L^\sigma_{\mathrm{IFG}}}{\phi \text{ has $N$ variables}}\index{$\mathscr L^\sigma_{\mathrm{IFG}_N}$}. \] Third, sometimes we will write $\phi \hor{J} \psi$ instead of $\phi \hor{Y} \psi$ and $\hexists{v_n}{J}\phi$ instead of $\hexists{v_n}{Y}\phi$, where \(J = \setof{j}{v_j \in Y}\). Finally, we will use \(\phi \hand{J} \psi\) to abbreviate \(\hneg(\hneg\phi \hor{J} \hneg\psi)\) and \(\hforall{v_n}{J} \phi\) to abbreviate \(\hneg\hexists{v_n}{J}\hneg\phi\). Truth and falsity for IFG-sentences are defined in terms of a two-player, win-loss game of imperfect information. Given an IFG-sentence, \eloise's goal is to verify the sentence, and \abelard's goal is to falsify it. The sentence is true if \eloise\ has a winning strategy, and it is false if \abelard\ has a winning strategy. For example, consider a structure $\A$ with universe $A$ and the ordinary first-order sentence \[ \forall v_0 \exists v_1[v_0 \not= v_1]. \] First \abelard\ chooses an element $a$ to be the value of the variable $v_0$, then \eloise\ chooses an element $b$ to be the value of the variable $v_1$. If \(a \not= b\), \eloise\ wins; otherwise, \abelard\ wins. If $\A$ has more than one element \eloise\ can win every play of the game; hence the sentence is true. If $\A$ has only one element, then \abelard\ will win every play; hence the sentence is false. Now consider the IFG$_2$-sentence \[ \hforall{v_0}{\emptyset}\hexists{v_1}{\set{v_0}}[v_0 \not= v_1]. \] The subscripts indicate what information is unavailable to the players at each move. The game begins as before with \abelard\ choosing an \(a \in A\) to be the value of $v_0$. Next \eloise\ chooses an element \(b \in A\) to be the value of $v_1$, but this time she must make her choice in ignorance of the value of $v_0$. Let us assume $A$ has more than one element. On the one hand, \eloise\ does not have a winning strategy because she might blindly choose the same element as \abelard. Therefore the sentence is not true. On the other hand, \abelard\ does not have a winning strategy either, because \eloise\ might get lucky and choose a different element than the one he chose. 
Therefore the sentence is not false.

Now we define the game semantics for formulas with free variables. Consider the IFG$_2$-formula \[ \hexists{v_1}{\set{v_0}}[v_0 \not= v_1]. \] In order for us to decide who wins a given play of the semantic game, at the end of the game every variable must have a value. Since $v_0$ is free, neither player has the opportunity to choose its value. To get around this difficulty, before the game begins we will assign random values to all the free variables. In fact, instead of assigning values only to the free variables, we will assign values to all of the variables. Thus the first move of the game is to choose a valuation \(\vec a \in \^NA\). Play proceeds with the players modifying the initial valuation until an atomic formula is reached, at which point the game ends and the final valuation is used to determine the winner.

In the above example, play begins with values for $v_0$ and $v_1$ being chosen at random. Then \eloise\ attempts to modify the value of $v_1$ so that it is different from the value of $v_0$. Unfortunately for her, she is not allowed to see the value of $v_0$, so her task is no easier than before. However, suppose an oracle revealed to \eloise\ that the initial valuation belonged to a subset $V$ of the space of all valuations $\^2A$. \eloise\ might be able to use this information to devise a winning strategy. For example, suppose \(A = \set{0,1}\). Then \(\^2A = \set{00, 01, 10, 11}\). If the oracle tells \eloise\ that the initial valuation belongs to the set \(V = \set{00, 01}\), then \eloise\ will know to choose 1 for the value of $v_1$. Thus \eloise\ has a winning strategy for the game that begins by choosing the initial valuation from $V$ instead of from $\^2A$. A set of valuations, such as $V$, is called a \emph{team}. We say that the formula \(\hexists{v_1}{\set{v_0}}[v_0 \not= v_1]\) is true in $\A$ relative to $V$, and that $V$ is a winning team for this formula in $\A$.

Disjunctions and conjunctions are moves for the players, as well. In the game corresponding to the formula \(\psi_1 \hor{Y} \psi_2\), \eloise\ must choose which disjunct she wishes to verify without knowing the values of the variables in $Y$. Dually, in the game corresponding to \(\psi_1 \hand{Y} \psi_2\), \abelard\ chooses which conjunct \eloise\ must verify, but his choice is not allowed to depend on the variables in $Y$. Negation is handled by having the players switch roles. \eloise\ attempts to verify ${\hneg\psi}$ by falsifying $\psi$, and \abelard\ attempts to falsify ${\hneg\psi}$ by verifying $\psi$.

In general, if $\phi$ is an IFG$_N$-formula and \(V,W \subseteq \^NA\) are teams, then $\phi$ is true in $\A$ relative to $V$, denoted \(\A \modelt \phi[V]\), if and only if \eloise\ has a winning strategy for the semantic game, given that she knows the initial valuation belongs to $V$. Dually, $\phi$ is false in $\A$ relative to $W$, denoted \(\A \modelf \phi[W]\), if and only if \abelard\ has a winning strategy, given that he knows the initial valuation belongs to $W$. In the first case, we say that $V$ is a \emph{winning team} (or \emph{trump}) for $\phi$ in $\A$, and in the second case, that $W$ is a \emph{losing team} (or \emph{cotrump}) for $\phi$ in $\A$.

Finally, we need to connect the game semantics for IFG-sentences with the game semantics for IFG-formulas. If $\phi$ is an IFG-sentence, then the initial valuation is irrelevant because the value of every variable is modified during the course of the game.
If $\phi$ has free variables, then in order to have a winning strategy, a player must be able to win no matter what the initial valuation is. Therefore we define \(\A \modelt \phi\) if and only if \(\A \modelt \phi[\^NA]\) and \(\A \modelf \phi\) if and only if \(\A \modelf \phi[\^NA]\). In the future, we will abbreviate similar statements by writing \(\A \modeltf \phi\) if and only if \(\A \modeltf \phi[\^NA]\).

It is worth noting that restricting the information available to the players does not affect their moves, only their strategies. Therefore, restricting the information available to one player does not help his or her opponent. \eloise\ has a winning strategy if and only if she wins regardless of how \abelard\ plays. Withholding information from her makes it harder for her to have a winning strategy, but withholding information from \abelard\ does not make it easier.

We hope this summary of the game semantics for IFG logic is sufficient. A more rigorous treatment can be found in \cite[Section 1.2]{Mann:2007a} or \cite[Section 1.3]{Mann:2007}.

\section{Trump semantics}

Wilfrid Hodges made an important breakthrough when he found a way to define a Tarski-style semantics for independence-friendly logic \cite{Hodges:1997a, Hodges:1997b}. We now recall the necessary details.

\begin{defn} Two valuations \(\vec a, \vec b \in\, \^NA\) \emph{agree outside of \(J \subseteq N\)}, denoted \(\vec a \approx_J \vec b\)\index{\(\vec a \approx_J \vec b\)\quad $\vec a$ and $\vec b$ agree outside of $J$|mainidx}, if
\[ \vec a \restrict (N\setminus J) = \vec b \restrict(N\setminus J). \]
\end{defn}

\begin{defn} Given any set $V$, a \emph{cover} of $V$ is a collection of sets $\mathscr U$ such that \(V = \bigcup\mathscr U\). A \emph{disjoint cover} is a cover whose members are pairwise disjoint. \end{defn}

\begin{defn} Let \(V \subseteq \^NA\), and let $\mathscr U$ be a cover of $V$. Then $\mathscr U$ is called \emph{$J$-saturated}\index{cover!$J$-saturated|mainidx} if every \(U \in \mathscr U\) is closed under $\approx_J$. That is, for every \(\vec a, \vec b \in V\), if \(\vec a \approx_J \vec b\) and \(\vec a \in U \in \mathscr U\), then \(\vec b \in U\). \end{defn}

\begin{defn} Define a partial operation $\bigcup_J$\index{$\bigcup_J \mathscr U$|mainidx} on sets of teams by declaring \(\bigcup_J \mathscr U = \bigcup \mathscr U\) whenever $\mathscr U$ is a $J$-saturated disjoint cover of $\bigcup \mathscr U$ and letting $\bigcup_J \mathscr U$ be undefined otherwise. Thus the formula \(V = \bigcup_J \mathscr U\) asserts that $\mathscr U$ is a $J$-saturated disjoint cover of $V$. We will use the notation \(V_1 \cup_J V_2\)\index{$V_1 \cup_J V_2$|mainidx} to abbreviate \(\bigcup_J \set{V_1, V_2}\). \end{defn}

\begin{defn} A function \(f\colon V \to A\) is \emph{independent of $J$}, denoted \(f\colon V \toind{J} A\)\index{$f\colon V \toind{J} A$ \quad $f$ is independent of $J$|mainidx}, if \(f(\vec{a}) = f(\vec{b})\) whenever \(\vec{a} \approx_J \vec{b}\). \end{defn}

\begin{defn} Let \(\vec a \in \^NA\). For every \(n < N\) and \(b \in A\), define $\vec a(n:b)$ to be the valuation that is like $\vec a$ except the $n$th value has been changed to $b$, i.e.,
\[ \vec a(n:b) = \vec a \restrict(N\setminus\set{n}) \cup \set{\tuple{n,b}}. \]
Let \(V,W \subseteq \^NA\) and \(f\colon V \to A\). Define
\begin{align*}
V(n:f) &= \setof{\vec a(n:f(\vec a))}{\vec a \in V}, \\
W(n:A) &= \setof{\vec a(n:b)}{\vec a \in W,\, b \in A}.
\end{align*}
\end{defn}

The next theorem has appeared in many different forms in the literature.
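Before stating it, the combinatorial notions just defined can be checked mechanically on small instances. The following Python sketch is a minimal illustration of our own (the function names are ours and do not come from the cited sources); it uses the two-element base set and the team \(V = \set{00,01}\) from the game discussed above.

\begin{verbatim}
N = 2                                 # number of variables; base set A = {0, 1}

def agree_outside(a, b, J):
    # a ~_J b : the valuations agree on every coordinate not in J
    return all(a[i] == b[i] for i in range(N) if i not in J)

def is_disjoint_cover(cover, V):
    return set().union(*cover) == V and sum(map(len, cover)) == len(V)

def is_J_saturated(cover, J):
    # every member of the cover is closed under ~_J within the union
    V = set().union(*cover)
    return all(b in U for U in cover for a in U
               for b in V if agree_outside(a, b, J))

def independent_of(f, V, J):
    # f : V -> A is independent of J, i.e. constant on ~_J classes
    return all(f[a] == f[b] for a in V for b in V if agree_outside(a, b, J))

# V = {00, 01} split into {00} and {01}: a disjoint cover that is
# {}-saturated (trivially) but not {1}-saturated, because 00 ~_{1} 01
# while the two valuations lie in different members of the cover.
V = {(0, 0), (0, 1)}
cover = [{(0, 0)}, {(0, 1)}]
print(is_disjoint_cover(cover, V))    # True
print(is_J_saturated(cover, set()))   # True:  V = {00} u_{} {01}
print(is_J_saturated(cover, {1}))     # False: this cover is not {1}-saturated

# Eloise's choice of a value for v_1 in the earlier game must not depend
# on v_0, i.e. it must be independent of J = {0}; a constant choice works.
f = {a: 1 for a in V}
print(independent_of(f, V, {0}))      # True
\end{verbatim}

The last check is precisely the constraint on \eloise's choice function for $v_1$ in the game for \(\hexists{v_1}{\set{v_0}}[v_0 \not= v_1]\) described above.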
Hodges' original formulation (for IF logic) appears in \cite[Theorem 7.5]{Hodges:1997a}. Dechesne's version for IFG logic appears in \cite[Theorem 5.3.5]{Dechesne:2005}. \begin{thm}[Hodges] \label{trump semantics} \index{$\A \modeltf \phi[V]$|mainidx} Let $\phi$ be an IFG$_N$-formula, let $\A$ be a suitable structure, and let \(V,W \subseteq \^NA\). \begin{itemize} \item If $\phi$ is atomic, then \begin{itemize} \item[(+)] \(\A \modelt \phi[V]\) if and only if for every \(\vec a \in V\), \(\A \models \phi[\vec a]\), \item[($-$)] \(\A \modelf \phi[W]\) if and only if for every \(\vec b \in W\), \(\A \not\models \phi[\vec b]\). \end{itemize} \item If $\phi$ is ${\hneg\psi}$, then \begin{itemize} \item[(+)] \(\A \modelt {\hneg\psi[V]}\) if and only if \(\A \modelf \psi[V]\), \item[($-$)] \(\A \modelf {\hneg\psi[W]}\) if and only if \(\A \modelt \psi[W]\). \end{itemize} \item If $\phi$ is $\psi_1 \hor{J} \psi_2$, then \begin{itemize} \item[(+)] \(\A \modelt \psi_1 \hor{J} \psi_2[V]\) if and only if \(\A \modelt \psi_1[V_1]\) and \( \A \modelt \psi_2[V_2]\) for some \(V = V_1 \cup_J V_2\), \item[($-$)] \(\A \modelf \psi_1 \hor{J} \psi_2[W]\) if and only if \(\A \modelf \psi_1[W]\) and \( \A \modelf \psi_2[W]\). \end{itemize} \item If $\phi$ is $\hexists{v_n}{J}\psi$, then \begin{itemize} \item[(+)] \(\A \modelt \hexists{v_n}{J}\psi[V]\) if and only if \(\A \modelt \psi[V(n:f)]\) for some \(f\colon V \toind{J} A\), \item[($-$)] \(\A \modelf \hexists{v_n}{J}\psi[W]\) if and only if \(\A \modelf \psi[W(n:A)]\). \end{itemize} \end{itemize} \end{thm} \begin{proof} By two simultaneous inductions on the subformulas of $\phi$. A full proof using the present notation can be found in \cite[Theorem 1.32]{Mann:2007}. \end{proof} \section{IFG-cylindric set algebras} We introduced IFG-cylindric set algebras in \cite{Mann:2007a, Mann:2007} as a way to study the algebra of IFG logic. Recall from \cite[p.~2]{Henkin:1981} or from \cite[Definition 4.3.4 on p.~154]{Henkin:1985} that the universe of $\Cyls_{N}(\A)$, the $N$-dimensional cylindric set algebra over $\A$, consists of the meanings of all the $N$-variable, first-order formulas expressible in the language of $\A$, where the meaning of a formula is defined by \[ \phi^\A = \setof{\vec a \in \^NA}{\A \models \phi[\vec a]}. \] Similarly, the universe of \(\Cyls_{\mathrm{IFG}_{N}}(\A)\), the $N$-dimensional IFG-cylindric set algebra over $\A$, consists of the meanings of all the IFG$_N$-formulas expressible in the language of $\A$, where the meaning of an IFG$_N$-formula is given by \[ \norm{\phi}_\A = \tuple{\trump{\phi}_\A, \cotrump{\phi}_\A}, \] \[ \trump{\phi}_\A = \setof{V \subseteq \^NA}{\A \modelt \phi[V]}, \qquad \cotrump{\phi}_\A = \setof{W \subseteq \^NA}{\A \modelf \phi[W]}. \] More generally, we can define IFG-cylindric set algebras without reference to a base structure $\A$. \begin{defn} An \emph{IFG-cylindric power set algebra}\index{independence-friendly cylindric power set algebra|mainidx} is an algebra whose universe is \(\powerset(\powerset(\^NA)) \times \powerset(\powerset(\^NA))\), where $A$ is a set and $N$ is a natural number. The set $A$ is called the \emph{base set}\index{base set|mainidx}, and the number $N$ is called the \emph{dimension}\index{dimension|mainidx} of the algebra. Every element $X$ of an IFG-cylindric power set algebra is an ordered pair of sets of teams. 
We will use the notation $X^+$\index{$X^+$ \quad truth coordinate of $X$|mainidx} to refer to the first coordinate of the pair, and $X^-$\index{$X^-$ \quad falsity coordinate of $X$|mainidx} to refer to the second coordinate. There are a finite number of operations: \begin{itemize} \item the constant \(0 = \tuple{\set{\emptyset}, \powerset(\^NA)}\); \item the constant \(1 = \tuple{\powerset(\^NA), \set{\emptyset}}\); \item for all \(i,j < N\), the constant $D_{ij}$\index{$D_{ij}$ \quad diagonal element|mainidx} is defined by \begin{itemize} \item[(+)] \(D_{ij}^+ = \powerset(\setof{\vec a \in\, \^NA}{a_i = a_j})\), \item[($-$)] \(D_{ij}^- = \powerset(\setof{\vec a \in\, \^NA}{a_i \not= a_j})\); \end{itemize} \item if \(X = \tuple{X^+,\, X^-}\), then \(\n{X}\index{$\n{X}$ \quad negation of $X$|mainidx} = \tuple{X^-, X^+}\); \item for every \(J \subseteq N\), the binary operation $+_J$\index{$X +_J Y$|mainidx} is defined by \begin{itemize} \item[(+)] \(V \in (X +_J Y)^+\) if and only if \(V = V_1 \cup_J V_2\) for some \(V_1 \in X^+\) and \(V_2 \in Y^+\), \item[($-$)] \((X +_J Y)^- = X^- \cap Y^-\); \end{itemize} \item for every \(J \subseteq N\), the binary operation $\cdot_J$\index{$X \cdot_J Y$|mainidx} is defined by \begin{itemize} \item[(+)] \((X \cdot_J Y)^+ = X^+ \cap Y^+\), \item[($-$)] \(W \in (X \cdot_J Y)^-\) if and only if \(W = W_1 \cup_J W_2\) for some \(W_1 \in X^-\) and \(W_2 \in Y^-\); \end{itemize} \item for every \(n < N\) and \(J \subseteq N\), the unary operation $C_{n,J}$\index{$C_{n,J}(X)$ \quad cylindrification of $X$|mainidx} is defined by \begin{itemize} \item[(+)] \(V \in C_{n,J}(X)^+\) if and only if \(V(n:f) \in X^+\) for some \(f\colon V \toind{J} A\), \item[($-$)] \(W \in C_{n,J}(X)^- \) if and only if \(W(n:A) \in X^-\). \end{itemize} \end{itemize} \end{defn} \begin{defn} An \emph{IFG-cylindric set algebra}\index{independence-friendly cylindric set algebra|mainidx} (or \emph{IFG-algebra}, for short) is any subalgebra of an IFG-cylindric power set algebra. An \emph{IFG$_N$-cylindric set algebra}\index{IFG$_N$-cylindric set algebra|mainidx} (or \emph{IFG$_N$-algebra}) is an IFG-cylindric set algebra of dimension $N$. \end{defn} The operations $+_\emptyset$ and $+_N$ are of particular interest. Since every disjoint cover of $V$ is $\emptyset$-saturated, \(V \in (X +_\emptyset Y)^+\) if and only if there is a disjoint cover \(V = V_1 \cup V_2\) such that \(V_1 \in X^+\) and \(V_2 \in Y^+\). At the other extreme, \(V = V_1 \cup_N V_2\) if and only if \(V_1 = V\) and \(V_2 = \emptyset\) or vice versa. Also, the element \(\Omega = \tuple{\set{\emptyset}, \set{\emptyset}}\) is present in most, but not all, IFG-algebras. \section{Suits and double suits} Meanings of IFG-formulas have the property that \(\trump{\phi} \cap\ \cotrump{\phi} = \set{\emptyset}\), and \(V' \subseteq V \in \norm{\phi}^\pm\) implies \(V' \in \norm{\phi}^\pm\). These facts inspire the following definitions. \begin{defn} A nonempty set $X^* \subseteq\powerset(\^NA)$ is called a \emph{suit}\index{suit|mainidx} if \(V' \subseteq V \in X^*\) implies \(V' \in X^*\). A \emph{double suit}\index{double suit|mainidx} is a pair $\tuple{X^+, X^-}$ of suits such that \(X^+ \cap X^- = \set{\emptyset}\). \end{defn} \begin{defn} An IFG-algebra is \emph{suited}\index{IFG-cylindric set algebra!suited|mainidx} if all of its elements are pairs of suits. It is \emph{double-suited}\index{IFG-cylindric set algebra!double-suited|mainidx} if all of its elements are double suits. 
\end{defn} \begin{prop}[Proposition 2.10 in \cite{Mann:2007a}] \label{subalgebra generated suited} The IFG$_N$-algebra generated by a set of pairs of suits is a suited IFG$_N$-algebra. \end{prop} \begin{prop}[Proposition 2.11 in \cite{Mann:2007a}] \label{subalgebra generated double-suited} The IFG$_N$-algebra generated by a set of double suits is a double-suited IFG$_N$-algebra. In particular, \(\Cyls_{\mathrm{IFG}_{N}}(\A)\) is a double-suited IFG$_N$-algebra. \end{prop} For the rest of the paper we will only be concerned with double suits. The next proposition is a summary of results from \cite[Section 2.5]{Mann:2007a}. \begin{prop}\label{section 2.5} If $X$ and $Y$ are double suits, \begin{enumerate} \item \(X +_J 0 = X = X \cdot_J 1\), \item \(X +_J 1 = 1\) and \(X \cdot_J 0 = 0\), \item \(X +_N Y = \tuple{X^+ \cup Y^+,\, X^- \cap Y^-}\) and \(X \cdot_N Y = \tuple{X^+ \cap Y^+,\, X^- \cup Y^-}\). \end{enumerate} \end{prop} Part (c) implies that $+_N$ and $\cdot_N$ are lattice operations. Thus we can define a partial order on any double-suited IFG$_N$-algebra by declaring \(X \leq Y\) if and only if \(X +_N Y = Y\). It follows that \(X \leq Y\) if and only if \(X^+ \subseteq Y^+\) and \(Y^- \subseteq X^-\). \begin{prop}\label{X < Omega,Y} If $X$ and $Y$ are double suits, \(X \leq \Omega\), and \(X \leq Y\), then \(X +_J Y = Y\). \end{prop} \begin{proof} If \(X \leq \Omega\), then \((X +_J Y)^+ = Y^+\). To see why, suppose \(X \leq \Omega\) and \(V \in (X +_J Y)^+\). Then \(V = V_1 \cup_J V_2\) for some \(V_1 \in X^+\) and \(V_2 \in Y^+\), but since \(X^+ = \set{\emptyset}\) we must have \(V_1 = \emptyset\) and \(V_2 = V\). Hence \(V \in Y^+\). Conversely, suppose \(V \in Y^+\). Then \(V = \emptyset \cup_J V\), where \(\emptyset \in X^+\) and \(V \in Y^+\), so \(V \in (X +_J Y)^+\). If \(X \leq Y\), then \(Y^- \subseteq X^-\), so \((X +_J Y)^- = X^- \cap Y^- = Y^-\). \end{proof} Whereas an ordinary cylindric algebra is an expansion of a Boolean algebra, we should not expect the same to be true for IFG-algebras because of the failure of the law of excluded middle in IFG logic. Somewhat miraculously, double-suited IFG-algebras do have an underlying structure that is as close to being a Boolean algebra as possible without satisfying the complementation axioms. \begin{defn} A \emph{De Morgan algebra}\index{De Morgan algebra|mainidx} \(\A = \seq{A; 0, 1, {\hneg}\,, \join, \meet}\) is a bounded distributive lattice with an additional unary operation $\hneg\ $ that satisfies \(\hneg{\hneg x} = x\) and \(\hneg(x \join y) = {\hneg x} \meet {\hneg y}\). \end{defn} \begin{defn} A \emph{Kleene algebra}\index{Kleene algebra|mainidx} is a De Morgan algebra that satisfies the additional axiom \(x \meet {\hneg x} \leq y \join {\hneg y}\). \end{defn} \begin{thm}[Theorem 2.31 in \cite{Mann:2007a}] \label{Kleene reduct} The reduct of a double-suited IFG$_N$-algebra to the signature \(\tuple{0, 1, \n{ }, +_N, \cdot_N}\) is a Kleene algebra. \end{thm} Given a set $A$, let $\Suit(\^NA)$\index{$\Suit(\^NA)$ \quad set of suits in \(\powerset(\powerset(\^NA))\)} denote the set of all suits in \(\powerset(\powerset(\^NA))\), and let $\DSuit(\^NA)$\index{$\DSuit(\^NA)$ \quad set of double-suits in \(\powerset(\powerset(\^NA)) \times \powerset(\powerset(\^NA))\)} denote the set of all double suits in \(\powerset(\powerset(\^NA)) \times \powerset(\powerset(\^NA))\). Since the meaning of every IFG-formula is a double suit, the universe of \(\Cyls_{\mathrm{IFG}_{N}}(\A)\) is contained in \(\DSuit(\^NA)\). 
Therefore \(\abs{\DSuit(\^NA)}\) gives an upper bound for the size of $\Cyls_{\mathrm{IFG}_{N}}(\A)$. Cameron\index{Cameron, P.} and Hodges\index{Hodges, W.} \cite{Cameron:2001} count suits and double suits in the case when \(N = 1\). The results of their calculations are shown in Table \ref{counting suits and double suits}, where \(m = \abs{A}\), \(f(m) = \abs{\Suit(A)}\), and \(g(m) = \abs{\DSuit(A)}\). They remark that ``one can think of the ratio of $g(m)$ to $2^m$ as measuring the expressive strength of [IFG logic] compared with ordinary first-order logic---always bearing in mind that [IFG logic] may have a rather unorthodox notion of what is worth expressing'' \cite[p.~679]{Cameron:2001}. Cameron\index{Cameron, P.} and Hodges\index{Hodges, W.} also prove that given any finite set $A$, there is a structure $\A$ such that the universe of $\Cyls_{\mathrm{IFG}_{N}}(\A)$ is exactly $\DSuit(\^NA)$ \cite[Corollary 3.4]{Cameron:2001}. \input{table-Counting_Suits} \begin{prop} \label{all double suits} Let $\A$ be a finite structure with at least two elements, and in which every element is named by a constant symbol. Then the universe of \(\Cyls_{\mathrm{IFG}_{N}}(\A)\) is exactly \(\DSuit(\^NA)\).\index{$\DSuit(\^NA)$ \quad double-suited IFG$_N$-cylindric set algebra over $A$} \end{prop} \begin{proof} For every \(\vec a \in \^NA\), let $\phi_{\vec a}$ be the formula \(v_0 = a_0 \hand{\emptyset} \cdots \hand{\emptyset} v_{N-1} = a_{N-1}\). For every \(V \subseteq \^NA\), let $\phi_V$ be the formula \(\bigvee_{\!/\emptyset} \setof{\phi_{\vec a}}{\vec a \in V}\). Then \begin{align*} \norm{\phi_{\vec a}} &= \tuple{\powerset(\set{\vec a}), \powerset(\^NA \setminus \set{\vec a})}, \\ \norm{\phi_V} &= \tuple{\powerset(V), \powerset(\^NA \setminus V)}. \end{align*} Let $X$ be a double suit, and let \(X^+ = \powerset(V_0) \cup \cdots \cup \powerset(V_{k-1})\). Let $\phi$ be the formula \( \phi_{V_0} \hor{N} \cdots \hor{N} \phi_{V_{k-1}}. \) Then \(\trump{\phi} = X^+\). Similarly, there is a formula $\psi$ such that \(\cotrump{\psi} = X^-\). Let $c$ be a constant symbol naming one of the elements of $\A$, let $\chi$ be the formula \(v_0 = c \hor{N} v_0 \not= c\), and let \(V = V_0 \cup \cdots \cup V_{k-1}\). Then \(\norm{\chi} = \Omega\), and \begin{align*} X' &= \norm{\phi \hor{N} \chi} = \tuple{X^+, \set{\emptyset}}, \\ X'' &= \norm{\psi \hand{N} \chi} = \tuple{\set{\emptyset}, X^-}, \\ Y &= \norm{\phi_V} = \tuple{\powerset(V), \powerset(\^NA \setminus V)}. \end{align*} It suffices to show that \begin{align*} \norm{(\phi \hor{N} \chi) \hand{N} ((\psi \hand{N} \chi) \hor{N} \phi_V)} &= X' \cdot_N (X'' +_N Y) \\ &= \tuple{X^+ \cap \powerset(V),\ X^- \cap \powerset(\^NA \setminus V)} \\ &= \tuple{X^+\!,\, X^-} \\ &= X. \end{align*} Therefore \(X \in \Cyls_{\mathrm{IFG}_{N}}(\A)\). \end{proof} At this point, it is natural to ask which double suits can be the meanings of ordinary first-order formulas (that is, IFG-formulas whose independence sets are all empty). Ordinary first-order formulas have the property that \(\A \modelt \phi[V]\) if and only if \(\A \modelt \phi[\set{\vec a}]\) for all \(\vec a \in V\). Hence if \(\A \modelt \phi[V]\) and \(\A \modelt \phi[V']\), then \(\A\modelt \phi[V \cup V']\). It follows that the set of winning teams for an ordinary first-order formula $\phi$ is simply the power set of the set of valuations that satisfy $\phi$. That is, \[ \trump{\phi}_\A = \powerset(\phi^\A). 
\] Ordinary first-order formulas also have the property that for every \(\vec a \in \^NA\) either \(\A \modelt \phi[\set{\vec a}]\) or \(\A \modelf \phi[\set{\vec a}]\). These facts inspire the following definitions. \begin{defn} A double suit $X$ is \emph{flat}\index{suit!flat|mainidx} if there is a \(V \subseteq \^NA\) such that \(X^+ = \powerset(V)\). \end{defn} \begin{prop}\label{flat absorption} If $X$ and $Y$ are double suits, \(X \leq Y\), and $Y$ is flat, then \(X +_J Y = Y\). \end{prop} \begin{proof} Suppose \(X \leq Y\) and \(Y^+ = \powerset(V)\). If \(V' \in (X +_J Y)^+\), then \(V' = V_1 \cup_J V_2\) for some \(V_1 \in X^+\) and \(V_2 \in Y^+\). Hence \(V_1, V_2 \subseteq V\) because \(X^+ \subseteq Y^+ = \powerset(V)\). Thus \(V' \subseteq V\), which implies \(V' \in Y^+\). Conversely, if \(V' \in Y^+\), then \(V' = \emptyset \cup_J V'\), where \(\emptyset \in X^+\) and \(V' \in Y^+\), so \(V' \in (X +_J Y)^+\). Also, since \(Y^- \subseteq X^-\) we have \((X +_J Y)^- = X^- \cap Y^- = Y^-\). \end{proof} \begin{defn} A double suit $X$ is \emph{perfect}\index{double suit!perfect|mainidx} if there is a \(V \subseteq \^NA\) such that \[ X = \tuple{\powerset(V),\, \powerset(\^NA \setminus V)}. \] \end{defn} In \cite{Mann:2008aa}, we showed that an IFG-formula $\phi$ is equivalent to an ordinary first-order formula in a structure $\A$ if and only if $\norm{\phi}_\A$ is perfect. It is worth noting that $\Cyls_{\mathrm{IFG}_{N}}(\A)$ is generated by its perfect elements because it is generated by the meanings of atomic formulas, which are all perfect. \section{$\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})$ is hereditarily simple} Let $\mathbf{2}$ be the structure with universe $\set{0,1}$ in which both elements are named by constant symbols. Then \(\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2}) = \DSuit_1(\set{0,1})\). The distributive lattice structure of $\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})$ is shown in Figure \ref{CsIFG(2)b}, where the join operation is $+_{\set{0}}$ and the meet operation is $\cdot_{\set{0}}$, \begin{align*} A &= \tuple{\powerset(\set{0}) \cup \powerset(\set{1}),\, \set{\emptyset}}, \\ B &= \tuple{\powerset(\set{0}),\, \set{\emptyset}}, \\ C &= \tuple{\powerset(\set{1}),\, \set{\emptyset}}, \\ \norm{v_0 = 0} &= \tuple{\powerset(\set{0}),\, \powerset(\set{1})}, \\ \norm{v_0 = 1} &= \tuple{\powerset(\set{1}),\, \powerset(\set{0})}. \end{align*} The goal of this section is to show that $\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})$ is hereditarily simple. \input{figure-CsIFG2} In \cite[Proposition 2.5]{Mann:2007a}, we proved that an IFG$_N$-sentence can have one of only three possible meanings: 0, $\Omega$, and 1. Thus, if we think of $C_{0,J_0}\ldots C_{N-1,J_{N-1}}$ as a single operation that quantifies (cylindrifies) all of the variables of an IFG$_N$-formula, then the range of that operation is the IFG$_N$-algebra \(\set{0, \Omega, 1}\). \begin{prop}[Proposition 2.51 in \cite{Mann:2007a}] \label{C_0,J...C_N-1,J double suit} If $X$ is a double suit, then \[ C_{0,J_0}\ldots C_{N-1,J_{N-1}}(X) = \begin{cases} 1 & \text{if \(X \not\leq \Omega\)}, \\ \Omega & \text{if \(0 < X \leq \Omega\)}, \\ 0 & \text{if \(X = 0\)}. \end{cases} \] \end{prop} \begin{lem} \label{congruence 0, Omega, 1} Let $\cong$ be a congruence on a double-suited IFG$_N$-algebra. If 0, $\Omega$ (if present), or 1 is congruent to any other element, then $\cong$ is the total congruence. \end{lem} \begin{proof} First we will show that if \(0 \cong 1\), then $\cong$ is the total congruence.
If \(0 \cong 1\), then for every $X$ we have \(X = X +_\emptyset 0 \cong X +_\emptyset 1 = 1\). Hence $\cong$ is the total congruence. Next we will show that if \(0 \cong \Omega\) or \(1 \cong \Omega\), then $\cong$ is the total congruence. If \(0 \cong \Omega\), then \(\Omega = \n{\Omega} \cong \n{0} = 1\). Similarly, if \(1 \cong \Omega\), then \(\Omega = \n{\Omega} \cong \n{1} = 0\). Now suppose \(0 \not= X \cong 0\). Then either \[ C_{0,\emptyset}\ldots C_{N-1,\emptyset}(X) = 1 \quad \text{or} \quad C_{0,\emptyset}\ldots C_{N-1,\emptyset}(X) = \Omega. \] In the first case, \(0 = C_{0,\emptyset}\ldots C_{N-1,\emptyset}(0) \cong C_{0,\emptyset}\ldots C_{N-1,\emptyset}(X) = 1\). In the second case, \(0 = C_{0,\emptyset}\ldots C_{N-1,\emptyset}(0) \cong C_{0,\emptyset}\ldots C_{N-1,\emptyset}(X) = \Omega\). Either way, $\cong$ is the total congruence. In addition, if \(1 \not= X \cong 1\), then \(0 \not= \n{X} \cong 0\), so $\cong$ is the total congruence. Finally, if \(\Omega \not= X \cong \Omega\), then either \(X \not\leq \Omega\) or \(\n{X} \not\leq \Omega\). Hence either \[ \Omega = C_{0,\emptyset}\ldots C_{N-1,\emptyset}(\Omega) \cong C_{0,\emptyset}\ldots C_{N-1,\emptyset}(X) = 1 \] or \[ \Omega = C_{0,\emptyset}\ldots C_{N-1,\emptyset}(\Omega) \cong C_{0,\emptyset}\ldots C_{N-1,\emptyset}(\n{X}) = 1. \] Thus $\cong$ is the total congruence. \end{proof} \begin{lem} \label{congruence X < Omega < Y} Let $\cong$ be a congruence on any double-suited IFG$_N$-algebra that includes $\Omega$. If \(X < \Omega < Y\) and \(X \cong Y\), then $\cong$ is the total congruence. \end{lem} \begin{proof} If \(X < \Omega < Y\), and \(X \cong Y\), then \(\Omega = X +_N \Omega \cong Y +_N \Omega = Y\), so by the previous lemma $\cong$ is the total congruence. \end{proof} \begin{lem} \label{congruence Omega < X < Y} Let $\cong$ be a nontrivial congruence on any double-suited IFG$_N$-algebra that includes $\Omega$. Then there exist elements $X$ and $Y$ such that \(X \cong Y\) and \(\Omega \leq X < Y\). \end{lem} \begin{proof} Since $\cong$ is nontrivial, there exist distinct elements $X''$ and $Y''$ such that \(X'' \cong Y''\). Either \((X'')^+ \not= (Y'')^+\) or \((X'')^- \not= (Y'')^-\). In the first case, let \(X' = X'' +_N \Omega\) and \(Y' = Y'' +_N \Omega\); in the second case, let \(X' = \n{(X'')} +_N \Omega\) and \(Y' = \n{(Y'')} +_N \Omega\). In both cases, \((X')^+ \not= (Y')^+\), \(X' \cong Y'\), and \(\Omega \leq X',Y'\). Interchanging $X'$ and $Y'$ if necessary, we may assume \((Y')^+ \not\subseteq (X')^+\). Now let \(X = X' = X' +_N X'\) and \(Y = X' +_N Y'\). Then \(X \cong Y\) and \(\Omega \leq X < Y\). \end{proof} \begin{thm} \label{CsIFG(2) is simple} $\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})$ is simple. \end{thm} \begin{proof} By \lemref{congruence Omega < X < Y} it suffices to consider the congruences generated by pairs of elements from the interval above $\Omega$. Using the technique of perspective edges, we can see that if \(A \cong B\), then \(C \cong \Omega\), because \(A \cdot_N C = C\) and \(B \cdot_N C = \Omega\). Thus $\cong$ is the total congruence by \lemref{congruence 0, Omega, 1}. Similarly, if \(A \cong C\) then \(B \cong \Omega\). Finally, if \(B \cong C\) then \(B \cong A\) because \(B = B +_N \norm{v_0 = 0}\) and \(A = C +_N \norm{v_0 = 0}\). \end{proof} \begin{prop} \label{subalgebras of CsIFG(2)} The proper subalgebras of $\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})$ are \(\set{0,1}\), \(\set{0, \Omega, 1}\), and those shown in Figure \ref{figure:subalgebras of CsIFG(2)}.
\input{figure-subalgebras_of_CsIFG2} \end{prop} \begin{proof} It is easy to check that the IFG$_1$-algebras \(\set{0,1}\) and \(\set{0, \Omega, 1}\) are subalgebras of \(\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})\). Consider the subalgebra \(\gen{A} = \set{0, \n{A}, \Omega, A, 1}\). Recall that since \(\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})\) is double-suited, \(X +_J 0 = X\) and \(X +_J 1 = 1\). Also, \(X \leq \Omega\) and \(X \leq Y\) imply \(X +_J Y = Y\). Thus \(\n{A} +_J \n{A} = \n{A}\), \(\n{A} +_J \Omega = \Omega\) and \(\n{A} +_J A = A\). To finish showing \(\gen{A} = \set{0, \n{A}, \Omega, A, 1}\) is closed under $+_\emptyset$ and $+_{\set{0}}$, it suffices to perform a few calculations. It is easily checked that \begin{alignat*}{2} A +_\emptyset A &= 1, &\qquad A +_{\set{0}} A &= A. \end{alignat*} For example, \(\set{0,1} \in (A +_\emptyset A)^+\) because \(\set{0,1} = \set{0} \cup_\emptyset \set{1}\), where \(\set{0} \in A^+\) and \(\set{1} \in A^+\). \(A +_{\set 0} A = A\) because $+_{\set 0}$ is a lattice join operation. Finally by \propref{C_0,J...C_N-1,J double suit}, \(C_{0,J}(0) = 0\), \(C_{0,J}(\n A) = C_{0,J}(\Omega) = \Omega\), and \(C_{0,J}(A) = C_{0,J}(1) = 1\). Now consider \(\gen{B} = \set{0, \n{B}, \Omega, B, 1}\). Since $B$ is flat \(B +_\emptyset B = B\). All the other calculations are the same as for $\gen A$. Similarly \(\gen{C} = \set{0, \n{C}, \Omega, C, 1}\). To show \(\gen{A,B} = \set{0, \n{A}, \n{B}, \Omega, B, A, 1}\) observe that \begin{alignat*}{2} A +_\emptyset B &= 1, &\qquad A +_{\set{0}} B &= A. \end{alignat*} Similarly \(\gen{A,C} = \set{0, \n{A}, \n{C}, \Omega, C, A, 1}\). To show \(\gen{B,C} = \set{0, \n{A}, \n{B}, \n{C}, \Omega, C, B, A, 1}\), observe that \begin{alignat*}{2} B +_\emptyset C &= 1, &\qquad B +_{\set{0}} C &= A, \\ \n{B} +_\emptyset \n{C} &= \Omega, &\qquad \n{B} +_{\set{0}} \n{C} &= \Omega. \end{alignat*} Finally, note that if $\D$ is a subalgebra of \(\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})\) that includes $\norm{v_0 = 0}$, then \(\norm{v_0 = 1} = \n{\norm{v_0 = 0}} \in \D\). Thus $\D$ includes all of the perfect elements in \(\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})\). Hence \(\D = \Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})\). Similarly, if \(\norm{v_0 = 1} \in \D\), then \(\D = \Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})\). \end{proof} \begin{thm} \label{CsIFG(2) hereditarily simple} $\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})$ is hereditarily simple. \end{thm} \begin{proof} It follows from \lemref{congruence 0, Omega, 1} and \lemref{congruence X < Omega < Y} that the subalgebras \(\set{0, \Omega, 1}\), $\gen{A}$, $\gen{B}$, and $\gen{C}$ are all simple. To show the subalgebra $\gen{A, B}$ is simple, by \lemref{congruence Omega < X < Y} it suffices to show that the congruence $\Cg(A,B)$ generated by $A$ and $B$ is the total congruence. Observe that if \(A \cong B\), then \(1 = A +_\emptyset A \cong B +_\emptyset B = B\), so $\Cg(A,B)$ is the total congruence. A similar argument shows that the subalgebra $\gen{A,C}$ is simple. Finally, to prove the subalgebra $\gen{B,C}$ and $\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})$ are simple it suffices to show that the congruences $\Cg(A,B)$ and $\Cg(A,C)$ are both the total congruence. But the calculations are the same as before, so we are done. \end{proof} \section{$\Cyls_{\mathrm{IFG}_{1}}(\mathbf{3})$ is not hereditarily simple} Let $\mathbf{3}$ be the structure with universe $\set{0,1,2}$ in which all three elements are named by constant symbols. 
Then \(\Cyls_{\mathrm{IFG}_{1}}(\mathbf{3}) = \DSuit_1(\set{0,1,2})\), which has 55 elements. Part of the lattice structure of $\Cyls_{\mathrm{IFG}_{1}}(\mathbf{3})$ is shown in Figure \ref{CsIFG(3)}. For simplicity, we only show the interval above $\Omega$. Furthermore, we omit the falsity coordinate and denote each truth coordinate by listing the maximal winning teams. For example, the vertex labeled $\set{0,1}, \set{2}$ denotes the element \(\tuple{\powerset(\set{0,1}) \cup \powerset(\set{2}), \set{\emptyset}}\), and the vertex labeled $\emptyset$ denotes \(\tuple{\set{\emptyset}, \set{\emptyset}} = \Omega\). Readers familiar with the cover of \cite{Balbes:1974} will recognize that the lattice in Figure \ref{CsIFG(3)} is isomorphic to the free distributive 1-lattice on three generators. To obtain the full lattice structure of \(\Cyls_{\mathrm{IFG}_{1}}(\mathbf 3)\) it is necessary to flip the figure upside-down to get the interval below $\Omega$, then fill in the sides with every possible double suit incomparable to $\Omega$. \input{figure-CsIFG3} The goal of this section is to show that \(\Cyls_{\mathrm{IFG}_{1}}(\mathbf{3})\) is simple, but not hereditarily simple. In fact, every IFG$_N$-algebra whose universe is the collection of all double suits over a set $A$ is simple. \begin{prop} \label{finite constant structures have simple cylindric set algebras} \(\DSuit(\^NA)\) is simple. \end{prop} \begin{proof} Let $\cong$ be a nontrivial congruence on \(\DSuit(\^NA)\), and let $X$ and $Y$ be distinct elements of \(\DSuit(\^NA)\) such that \(X \cong Y\). Without loss of generality we may assume that there exists a \(V \in Y^+ \setminus X^+\). Let \(Z = \tuple{\powerset(\^NA \setminus V), \powerset(V)}\). Since \(V \notin X^+\) we know that for every \(U \in X^+\) there is an \(\vec a \in V \setminus U\). Hence \(U \cup (\^NA\setminus V) \not= \^NA\). Thus \(1 \not= X +_\emptyset Z \cong Y +_\emptyset Z = 1\). Therefore $\cong$ is the total congruence. \end{proof} Recall that in the proof that $\Cyls_{\mathrm{IFG}_{1}}(\mathbf{2})$ is hereditarily simple, we used the fact that \(A +_\emptyset A = 1\) but \(B +_\emptyset B \not= 1\). For any element $X$ of an IFG-algebra, let $nX$ be an abbreviation for \( \underbrace{X +_\emptyset \cdots +_\emptyset X}_{n} \). \begin{defn} The \emph{order}\index{order of an element|mainidx} of an element $X$ is the least positive integer $n$ such that \(nX = 1\). If no such positive integer exists then the order of $X$ is infinite. \end{defn} \begin{lem} \label{congruent different order} Let $\cong$ be a congruence on a double-suited IFG$_N$-algebra. If any two elements of different order are congruent, then $\cong$ is the total congruence. \end{lem} \begin{proof} Let \(X \cong Y\). If the order of $X$ is less than the order of $Y$, then for some positive integer $n$ we have \(1 = nX \cong nY \not= 1\). \end{proof} We know by \propref{all double suits} and \propref{finite constant structures have simple cylindric set algebras} that $\Cyls_{\mathrm{IFG}_{1}}(\mathbf{3})$ is simple, but we can verify this directly by using the lemmas and the technique of perspective edges. For example, if \(\set{0},\set{1} \cong \set{0,1}\), then \(\set{0}, \set{1}, \set{2} \cong \set{0,1},\set{2}\). But \(\set{0}, \set{1}, \set{2}\) has order 2, while \(\set{0,1},\set{2}\) has order 1, so by \lemref{congruent different order} we have that $\cong$ is the total congruence. \begin{thm} \label{CsIFG_1(3) is not hereditarily simple} \(\Cyls_{\mathrm{IFG}_{1}}(\mathbf{3})\) is not hereditarily simple.
\end{thm} \begin{proof} Let \(A = \tuple{\powerset(\set{0,1}),\ \set{\emptyset}}\) and \(B = \tuple{\powerset(\set{0}) \cup \powerset(\set{1}),\ \set{\emptyset}}\). The subalgebra \(\gen{B} = \set{0, \n{A}, \n{B}, \Omega, B, A, 1}\) is closed under $+_\emptyset$ and $+_{\set 0}$ because \begin{alignat*}{2} A +_\emptyset A &= A, &\qquad A +_{\set{0}} A &= A, \\ A +_\emptyset B &= A, &\qquad A +_{\set{0}} B &= A, \\ B +_\emptyset B &= A, &\qquad B +_{\set{0}} B &= B, \\ A +_\emptyset \n{A} &= A, &\qquad A +_{\set{0}} \n{A} &= A, \\ A +_\emptyset \n{B} &= A, &\qquad A +_{\set{0}} \n{B} &= A, \\ B +_\emptyset \n{A} &= B, &\qquad B +_{\set{0}} \n{A} &= B, \\ B +_\emptyset \n{B} &= B, &\qquad B +_{\set{0}} \n{B} &= B, \\ \n{A} +_\emptyset \n{A} &= \n{A}, &\qquad \n{A} +_{\set{0}} \n{A} &= \n{A}, \\ \n{A} +_\emptyset \n{B} &= \n{B}, &\qquad \n{A} +_{\set{0}} \n{B} &= \n{B}, \\ \n{B} +_\emptyset \n{B} &= \n{B}, &\qquad \n{B} +_{\set{0}} \n{B} &= \n{B}. \end{alignat*} All of the $+_{\set 0}$ calculations are easy to check by looking at the lattice. The $+_\emptyset$ calculations require some computation. First, \(A +_\emptyset A = A\) because $A$ is flat. Second, \(A +_\emptyset B = A\) because \(\set{0, 1} = \set{0} \cup_\emptyset \set{1}\), where \(\set{0} \in A^+\) and \(\set{1} \in B^+\), while \((A +_\emptyset B)^- = A^- \cap B^- = A^-\). Similarly, \(B +_\emptyset B = A\). The remaining $+_\emptyset$ calculations all follow from \propref{X < Omega,Y} or \propref{flat absorption}. Finally, the set is closed under $C_{0,J}$ by \propref{C_0,J...C_N-1,J double suit}. Let $\cong$ denote the equivalence relation that makes \(A \cong B\) and \(\n{A} \cong \n{B}\), but makes no other pair of distinct elements equivalent. To verify that $\cong$ is a congruence, observe that $\cong$ is preserved under $\n{\,}$ because \(\n{A} \cong \n{B}\) and \(\n{(\n{A})} = A \cong B = \n{(\n{B})}\). It is preserved under $C_{0,J}$ because \(C_{0,J}(A) = 1 = C_{0,J}(B)\) and \(C_{0,J}(\n{A}) = \Omega = C_{0,J}(\n{B})\). Finally, the calculations above show that $\cong$ is preserved under $+_\emptyset$ and $+_{\set{0}}$. Thus $\cong$ is a nontrivial, non-total congruence. Therefore $\gen{B}$ is not simple. \end{proof} \section{``Iff'' is not expressible in IFG logic} Let \(\phi \himplies{J} \psi\)\index{$\phi \himplies{J} \psi$} be an abbreviation for \({\hneg\phi} \hor{J} \psi\), and let \(\phi \hiff{J} \psi\)\index{$\phi \hiff{J} \psi$} be an abbreviation for \[ (\phi \himplies{J} \psi) \hand{J} (\psi \himplies{J} \phi). \] It will be useful to know when \(\A \modeltf \phi \himplies{J} \psi[V]\) and \(\A \modeltf \phi \hiff{J} \psi[V]\). It follows immediately from the definitions that for \(\phi \himplies{J} \psi\), \begin{itemize} \item[(+)] \(\A \modelt \phi \himplies{J} \psi[V]\) if and only if \(\A \modelf \phi[V_1]\) and \(\A \modelt \psi[V_2]\) for some \(V = V_1 \cup_J V_2\), \item[($-$)] \(\A \modelf \phi \himplies{J} \psi[W]\) if and only if \(\A \modelt \phi[W]\) and \(\A \modelf \psi[W]\). \end{itemize} Similarly for \(\phi \hiff{J} \psi\), \begin{itemize} \item[(+)] \(\A \modelt \phi \hiff{J} \psi[V]\) if and only if \(\A \modelf \phi[V_1]\) and \(\A \modelt \psi[V_2]\) for some \(V = V_1 \cup_J V_2\), and \(\A \modelt \phi[V_3]\) and \(\A \modelf \psi[V_4]\) for some \(V = V_3 \cup_J V_4\), \item[($-$)] \(\A \modelf \phi \hiff{J} \psi[W]\) if and only if \(\A \modelt \phi[W_1]\), \(\A \modelf \psi[W_1]\), \(\A \modelf \phi[W_2]\), and \(\A \modelt \psi[W_2]\) for some \(W = W_1 \cup_J W_2\).
\end{itemize} In particular, the semantics for \(\phi \himplies{N} \psi\) are \begin{itemize} \item[(+)] \(\A \modelt \phi \himplies{N} \psi[V]\) if and only if \(\A \modelf \phi[V]\) or \(\A \modelt \psi[V]\), \item[($-$)] \(\A \modelf \phi \himplies{N} \psi[W]\) if and only if \(\A \modelt \phi[W]\) and \(\A \modelf \psi[W]\), \end{itemize} and for \(\phi \hiff{N} \psi\), \begin{itemize} \item[(+)] \(\A \modelt \phi \hiff{N} \psi[V]\) if and only if \(\A \modelt \phi[V]\) and \(\A \modelt\psi[V]\), or \(\A \modelf \phi[V]\) and \(\A \modelf \psi[V]\), \item[($-$)] \(\A \modelf \phi \hiff{N} \psi[W]\) if and only if \(\A \modelt \phi[W]\) and \(\A \modelf \psi[W]\), or \(\A \modelf \phi[W]\) and \(\A \modelt \psi[W]\). \end{itemize} For example, \(\A \modelt \phi \hiff{N} \psi[V]\) if and only if \(\A \modelt ({\hneg\phi} \hor{N} \psi) \hand{N} (\phi \hor{N} {\hneg\psi})[V]\) if and only if \(\A \modelt ({\hneg\phi} \hor{N} \psi)[V]\) and \(\A \modelt (\phi \hor{N} {\hneg\psi})[V]\) if and only if \(\A \modelf \phi[V]\) or \(\A \modelt \psi[V]\), and \(\A \modelt \phi[V]\) or \(\A \modelf \psi[V]\) if and only if \(\A \modelf \phi[V]\) and \(\A \modelf \psi[V]\), or \(\A \modelt \psi[V]\) and \(\A \modelt \phi[V]\). \begin{prop} \label{hiff_0} For any IFG$_N$-formulas $\phi$ and $\psi$, \(\A \modelt \phi \hiff{\emptyset} \psi\) if and only if \(\norm{\phi}_\A = \norm{\psi}_\A\) and both are perfect. \end{prop} \begin{proof} Suppose \(\A \modelt (\phi \hiff{\emptyset} \psi)[\^NA]\). Then there exist \(V, V' \subseteq \^NA\) such that \(\A \modelt \phi[V]\), \(\A \modelf \psi[\^NA\setminus V]\), \(\A \modelf \phi[V']\), and \(\A \modelt \psi[\^NA \setminus V']\). Thus \(V \cap V' = \emptyset\) and \((\^NA \setminus V) \cap (\^NA \setminus V') = \emptyset\). Therefore \(V' = \^NA \setminus V\), and \(\norm{\phi}_\A = \tuple{\powerset(V), \powerset(\^NA \setminus V)} = \norm{\psi}_\A\). Conversely, if \(\norm{\phi}_\A = \norm{\psi}_\A = \tuple{\powerset(V), \powerset(\^NA \setminus V)}\) for some \(V \subseteq \^NA\), then the disjoint cover \(\^NA = (\^NA \setminus V) \cup V\) witnesses both \(\A \modelt (\phi \himplies{\emptyset} \psi)[\^NA]\) and \(\A \modelt (\psi \himplies{\emptyset} \phi)[\^NA]\), hence \(\A \modelt (\phi \hiff{\emptyset} \psi)[\^NA]\). \end{proof} \begin{prop} \label{hiff_N} For any IFG$_N$-formulas $\phi$ and $\psi$, \(\A \modelt \phi \hiff{N} \psi\) if and only if \(\norm{\phi}_\A = \norm{\psi}_\A \in \set{0,1}\). \end{prop} \begin{proof} Suppose \(\A \modelt (\phi \hiff{N} \psi)[\^NA]\). Then \(\A \modelt \phi[\^NA]\) and \(\A \modelt\psi[\^NA]\), in which case \(\norm{\phi}_\A = \norm{\psi}_\A = 1 \), or \(\A \modelf \phi[\^NA]\) and \(\A \modelf \psi[\^NA]\), in which case \(\norm{\phi}_\A = \norm{\psi}_\A = 0\). Conversely, if \(\norm{\phi}_\A = \norm{\psi}_\A = 1\), then \(\A \modelt \phi[\^NA]\) and \(\A \modelt \psi[\^NA]\), and if \(\norm{\phi}_\A = \norm{\psi}_\A = 0\), then \(\A \modelf \phi[\^NA]\) and \(\A \modelf \psi[\^NA]\); in either case \(\A \modelt (\phi \hiff{N} \psi)[\^NA]\). \end{proof} \begin{defn} An \emph{IFG$_N$-schema}\index{IFG$_N$-schema|mainidx} involving $k$ formula variables is an element of the smallest set $\Xi$ satisfying the following conditions. \begin{enumerate} \item The formula variables \(\alpha_0, \ldots, \alpha_{k-1}\) belong to $\Xi$. \item For all \(i,j < N\), the formula \(v_i = v_j\) belongs to $\Xi$. \item If $\xi$ belongs to $\Xi$, then ${\hneg\xi}$ belongs to $\Xi$. \item If $\xi_1$ and $\xi_2$ belong to $\Xi$, and \(J \subseteq N\), then \(\xi_1 \hor{J} \xi_2\) belongs to $\Xi$. \item If $\xi$ belongs to $\Xi$, \(n < N\), and \(J \subseteq N\), then \(\hexists{v_n}{J}\xi\) belongs to $\Xi$. \end{enumerate} Note that the symbols \(\alpha_0, \ldots, \alpha_{k-1}\) are distinct from the usual variables $v_0$, \ldots, $v_{N-1}$. If $\xi$ is an IFG$_N$-schema involving $k$ formula variables, and \(\phi_0, \ldots, \phi_{k-1}\) are IFG$_N$-formulas, then the IFG$_N$-formula \(\xi(\phi_0, \ldots, \phi_{k-1})\) is called an \emph{instance}\index{instance of a schema|mainidx} of $\xi$.
\end{defn} \begin{defn} Every IFG$_N$-schema $\xi$ has a corresponding term $T_\xi$ in the language of IFG$_N$-algebras. $T_\xi$ is defined recursively as follows: \begin{enumerate} \item \(T_{\alpha_i} = X_i\), \item \(T_{v_i = v_j} = D_{ij}\), \item \(T_{\hneg\,\xi} = \n{(T_\xi)}\), \item \(T_{\xi_1 \hor{J} \xi_2} = T_{\xi_1} +_J T_{\xi_2}\), \item \(T_{\hexists{v_n}{J}\xi} = C_{n,J}(T_\xi)\). \end{enumerate} \end{defn} \begin{lem} \label{schema terms} Let $\xi$ be an IFG$_N$-schema involving $k$ formula variables, and let $T_\xi$ be its corresponding term. Then for any IFG$_N$-formulas \(\phi_0, \ldots, \phi_{k-1}\) and any suitable structure $\A$, \[ \norm{\xi(\phi_0, \ldots, \phi_{k-1})} = T_\xi^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}). \] \end{lem} \begin{proof} If $\xi$ is a formula variable $\alpha_i$, then \(T_\xi = X_i\), so \[ \norm{\xi(\phi_0, \ldots, \phi_{k-1})} = \norm{\phi_i} = T_\xi^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}). \] If $\xi$ is \(v_i = v_j\), then \(T_\xi = D_{ij}\), so \[ \norm{\xi(\phi_0, \ldots, \phi_{k-1})} = \norm{v_i = v_j} = D_{ij}^{\Cyls_{\mathrm{IFG}_{N}}(\A)} = T_\xi^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}). \] Now assume that {\allowdisplaybreaks \begin{align*} \norm{\xi(\phi_0, \ldots, \phi_{k-1})} &= T_\xi^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}), \\ \norm{\xi_1(\phi_0, \ldots, \phi_{k-1})} &= T_{\xi_1}^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}), \\ \norm{\xi_2(\phi_0, \ldots, \phi_{k-1})} &= T_{\xi_2}^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}). \intertext{Then} \norm{\hneg\xi(\phi_0, \ldots, \phi_{k-1})} &= \n{\norm{\xi(\phi_0, \ldots, \phi_{k-1})}} \\ &= \n{\left(T_\xi^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}})\right)} \\ &= T_{\hneg\,\xi}^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}), \\[10pt] \norm{\xi_1 \hor{J} \xi_2(\phi_0, \ldots, \phi_{k-1})} &= \norm{\xi_1(\phi_0, \ldots, \phi_{k-1})} +_J \norm{\xi_2(\phi_0, \ldots, \phi_{k-1})} \\ &= T_{\xi_1}^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}) \\ &\qquad +_J T_{\xi_2}^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}) \\ &= T_{\xi_1 \hor{J} \xi_2}^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}), \\[10pt] \norm{\hexists{v_n}{J}\xi(\phi_0, \ldots, \phi_{k-1})} &= C_{n,J}\norm{\xi(\phi_0, \ldots, \phi_{k-1})} \\ &= C_{n,J}\left(T_\xi^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}})\right) \\ &= T_{\hexists{v_n}{J}\xi}^{\Cyls_{\mathrm{IFG}_{N}}(\A)}(\norm{\phi_0}, \ldots, \norm{\phi_{k-1}}). \end{align*}} \end{proof} \begin{prop} \label{term operation implies hereditarily simple} Any double-suited IFG-algebra that has a term operation $T(X,Y)$ such that \(T(X,Y) = 1\) if and only if \(X = Y\) is hereditarily simple. \end{prop} \begin{proof} Suppose $\C$ is a double-suited IFG-algebra that has such a term oper\-ation. Then for any \(X \not= Y\) we have \(\tuple{1, Z} = \tuple{T(X,X), T(X,Y)} \in \Cg(X,Y)\), where $Z$ is some element different than 1. Hence $\Cg(X,Y)$ is the total congruence. Thus $\C$ is simple. Furthermore, the sentence \[ \forall X \forall Y[ T(X,Y) = 1 \iff X = Y] \] is universal, and so must hold in every subalgebra of $\C$. Hence $\C$ is hereditarily simple. 
\end{proof} \begin{thm} \label{no schema} There is no IFG$_1$-schema $\xi$ involving two formula variables such that for every pair of IFG$_1$-formulas $\phi$ and $\psi$, and every suitable structure $\A$, we have \[ \A \modelt \xi(\phi, \psi) \quad \text{iff} \quad \norm{\phi}_\A = \norm{\psi}_\A. \] \end{thm} \begin{proof} Suppose $\xi$ were such a schema. Then the corresponding term $T_\xi$ would have the property that for any $\A$ and any \(\norm{\phi}_\A, \norm{\psi}_\A \in \Cyls_{\mathrm{IFG}_{1}}(\A)\), \begin{align*} T_\xi^{\Cyls_{\mathrm{IFG}_{1}}(\A)}(\norm{\phi}_\A, \norm{\psi}_\A) = 1 \quad &\text{iff} \quad \norm{\xi(\phi,\psi)}_\A = 1 \\ &\text{iff} \quad \A \modelt \xi(\phi,\psi) \\ &\text{iff} \quad \norm{\phi}_\A = \norm{\psi}_\A. \end{align*} Thus every \(\Cyls_{\mathrm{IFG}_{1}}(\A)\) would be hereditarily simple. However \(\Cyls_{\mathrm{IFG}_{1}}(\mathbf{3})\) is not hereditarily simple. \end{proof} \input{mann.bbl} \end{document}
\begin{document} \newcommand{1502.05252}{1502.05252} \allowdisplaybreaks \renewcommand{054}{054} \FirstPageHeading \ShortArticleName{Eigenvalue Estimates of the ${\mathop{\rm spin}^c}$ Dirac Operator and Harmonic Forms} \ArticleName{Eigenvalue Estimates of the $\boldsymbol{{\mathop{\rm spin}^c}}$ Dirac Operator\\ and Harmonic Forms on K\"ahler--Einstein Manifolds} \Author{Roger NAKAD~$^\dag$ and Mihaela PILCA~$^{\ddag\S}$} \AuthorNameForHeading{R.~Nakad and M.~Pilca} \Address{$^\dag$~Notre Dame University-Louaiz\'e, Faculty of Natural and Applied Sciences,\\ \hphantom{$^\dag$}~Department of Mathematics and Statistics, P.O. Box 72, Zouk Mikael, Lebanon} \EmailD{\href{mailto:[email protected]}{[email protected]}} \URLaddressD{\url{http://www.iecn.u-nancy.fr/~nakad/}} \Address{$^\ddag$~Fakult\"at f\"ur Mathematik, Universit\"at Regensburg,\\ \hphantom{$^\ddag$}~Universit\"atsstra{\ss}e~31, 93040 Regensburg, Germany} \EmailD{\href{mailto:[email protected]}{[email protected]}} \URLaddressD{\url{http://www.mathematik.uni-regensburg.de/pilca/}} \Address{$^\S$~Institute of Mathematics ``Simion Stoilow'' of the Romanian Academy,\\ \hphantom{$^\S$}~21, Calea Grivitei Str, 010702-Bucharest, Romania} \ArticleDates{Received March 03, 2015, in f\/inal form July 02, 2015; Published online July 14, 2015} \Abstract{We establish a lower bound for the eigenvalues of the Dirac operator def\/ined on a~compact K\"ahler--Einstein manifold of positive scalar curvature and endowed with particular ${\mathop{\rm spin}^c}$ structures. The limiting case is characteri\-zed by the existence of K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinors in a certain subbundle of the spinor bundle. Moreover, we show that the Clif\/ford multiplication between an ef\/fective harmonic form and a K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinor f\/ield vanishes. This extends to the ${\mathop{\rm spin}^c}$ case the result of A.~Moroianu stating that, on a~compact K\"ahler--Einstein manifold of complex dimension $4\ell+3$ carrying a complex contact structure, the Clif\/ford multiplication between an ef\/fective harmonic form and a~K\"ahlerian Killing spinor is zero.} \Keywords{${\mathop{\rm spin}^c}$ Dirac operator; eigenvalue estimate; K\"ahlerian Killing spinor; parallel form; harmonic form} \Classification{53C27; 53C25; 53C55; 58J50; 83C60} \section{Introduction} The geometry and topology of a compact Riemannian spin manifold $(M^n, g)$ are strongly related to the existence of special spinor f\/ields and thus, to the spectral properties of a fundamental operator called the Dirac operator $D$ \cite{AS, lich0}. A.~Lichnerowicz \cite{lich0} proved, under the weak condition of the positivity of the scalar curvature, that the kernel of the Dirac operator is trivial. Th.~Friedrich \cite{fr1} gave the following lower bound for the f\/irst eigenvalue $\lambda$ of $D$ on a~compact Riemannian spin manifold $(M^n, g)$: \begin{gather}\label{fried} \lambda^2\geq\frac{n}{4(n-1)}\underset{M}{\inf}\, S, \end{gather} where $S$ denotes the scalar curvature, assumed to be nonnegative. Equality holds if and only if the corresponding eigenspinor $\varphi$ is parallel (if $\lambda=0$) or a Killing spinor of Killing constant $-\frac{\lambda}{n}$ (if $\lambda\neq0$), i.e., if $\nabla_X\varphi = -\frac{\lambda}{n} X\cdot\varphi$, for all vector f\/ields $X$, where ``$\cdot$'' denotes the Clif\/ford multiplication and $\nabla$ is the spinorial Levi-Civita connection on the spinor bundle $\Sigma M$ (see also~\cite{hijconf}). 
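As an illustration of the sharpness of \eqref{fried}, one may consider the round sphere $\mathbb{S}^n$ of constant sectional curvature $1$, for which $S=n(n-1)$, so that \eqref{fried} reads
\begin{gather*}
\lambda^2\geq\frac{n}{4(n-1)}\, n(n-1)=\frac{n^2}{4},
\end{gather*}
and this lower bound is attained by the real Killing spinors of the round sphere.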
Killing (resp.\ parallel) spinors force the underlying metric to be Einstein (resp.\ Ricci f\/lat). The classif\/ication of complete simply-connected Riemannian spin manifolds with real Killing (resp.\ parallel) spinors was done by C.~B\"ar~\cite{baer} (resp.\ M.Y.~Wang~\cite{wang}). Useful geometric information has also been obtained by restricting parallel and Killing spinors to hypersurfaces \cite{Ba, HMZ1, HMZ2, HMZ02, HZ1, HZ2}. O.~Hijazi proved that the Clif\/ford multiplication between a harmonic $k$-form $\beta$ ($k \neq 0, n$) and a Killing spinor vanishes. In particular, the equality case in~\eqref{fried} cannot be attained on a K\"ahler spin manifold, since the Clif\/ford multiplication between the K\"ahler form and a Killing spinor is never zero. Indeed, on a K\"ahler compact manifold $(M^{2m},g,J)$ of complex dimension $m$ and complex structure $J$, K.-D.~Kirchberg~\cite{kirch86} showed that the f\/irst eigenvalue $\lambda$ of the Dirac operator satisf\/ies \begin{gather}\label{kirchoddeven} \lambda^2\geq\begin{cases} \dfrac{m+1}{4m}\underset{M}{\inf}\, S,& \text{if $m$ is odd,} \\ \dfrac{m}{4(m-1)}\underset{M}{\inf} \, S,& \text{if $m$ is even.} \end{cases} \end{gather} Kirchberg's estimates rely essentially on the decomposition of $\Sigma M$ under the action of the K\"ahler form $\Omega$. In fact, we have $\Sigma M = \oplus_{r=0}^m\Sigma_r M$, where~$\Sigma_r M$ is the eigenbundle corresponding to the eigenvalue $i(2r-m)$ of $\Omega$. The limiting manifolds of \eqref{kirchoddeven} are also characterized by the existence of spinors satisfying a certain dif\/ferential equation similar to the one fulf\/illed by Killing spinors. More precisely, in odd complex dimension $m=2\ell+1$, it is proved in \cite{hij, kirch2,kirch} that the metric is Einstein and the corresponding eigenspinor $\varphi$ of $\lambda$ is a K\"ahlerian Killing spinor, i.e., $\varphi=\varphi_{\ell}+\varphi_{\ell+1}\in\Gamma(\Sigma_{\ell} M\oplus \Sigma_{\ell+1} M)$ and it satisf\/ies \begin{gather} \begin{split} &\nabla_X \varphi_\ell = -\frac{\lambda}{2(m+1)} (X+iJX)\cdot \varphi_{\ell+1}, \\ &\nabla_X \varphi_{\ell+1} = -\frac{\lambda}{2(m+1)} (X-iJX)\cdot \varphi_{\ell}, \end{split}\label{ecodd} \end{gather} for any vector f\/ield $X$. We point out that the existence of spinors of the form $\varphi=\varphi_{\ell'}+\varphi_{\ell'+1}\in\Gamma(\Sigma_{\ell'}M\oplus\Sigma_{\ell'+1}M)$ satisfying \eqref{ecodd} implies that $m$ is odd and they lie in the middle, i.e., $\ell' =\frac{m-1}{2}$. If the complex dimension is even, $m=2\ell$, the limiting manifolds are characterized by constant scalar curvature and the existence of so-called anti-holomorphic K\"ahlerian twistor spinors $\varphi_{\ell-1}\in\Gamma(\Sigma_{\ell-1}M)$, i.e., satisfying for any vector f\/ield $X$: $\nabla_X \varphi_{\ell-1}= -\frac{1}{2m}(X+iJX)\cdot D\varphi_{\ell-1}$. The limiting manifolds for Kirchberg's inequalities \eqref{kirchoddeven} have been geometrically described by A.~Moroianu in~\cite{am_odd} for~$m$ odd and in~\cite{am_even} for~$m$ even. In \cite{pilcapaper}, this result is extended to limiting manifolds of the so-called ref\/ined Kirchberg inequalities, obtained by restricting the square of the Dirac operator to the eigenbundles~$\Sigma_r M$. When~$m$ is even, the limiting manifold cannot be Einstein. Thus, on compact K\"ahler--Einstein manifolds of even complex dimension, K.-D.~Kirchberg~\cite{kircheven} impro\-ved~\eqref{kirchoddeven} to the following lower bound \begin{gather}\label{kirchke} \lambda^2 \geq \frac{m+2}{4m} S.
\end{gather} Equality is characterized by the existence of holomorphic or anti-holomorphic spinors. When $m$ is odd, A.~Moroianu extended the above mentioned result of O.~Hijazi to K\"ahler manifolds, by showing that the Clif\/ford multiplication between a harmonic ef\/fective form of nonzero degree and a K\"ahlerian Killing spinor vanishes. We recall that the manifolds of complex dimension $m =4\ell+3$ admitting K\"ahlerian Killing spinors are exactly the K\"ahler--Einstein manifolds carrying a complex contact structure (cf.~\cite{ks, am_odd, ms}). In the present paper, we extend this result of A.~Moroianu to K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinors (see Theorem~\ref{eff}). In this more general setting dif\/f\/iculties occur due to the fact that the connection on the ${\mathop{\rm spin}^c}$ bundle, hence its curvature, the Dirac operator and its spectrum, do not only depend on the geometry of the manifold, but also on the connection of the auxiliary line bundle associated with the ${\mathop{\rm spin}^c}$ structure. \looseness=-1 $ \mathrm{Spin}^c$ geometry became an active f\/ield of research with the advent of Seiberg--Witten theory, which has many applications to $4$-dimensional geometry and topology \cite{don,gursky,lebrun1,lebrun2,SW3, SW2}. From an intrinsic point of view, almost complex, Sasaki and some classes of CR manifolds carry a~canonical ${\mathop{\rm spin}^c}$ structure. In particular, every K\"ahler manifold is ${\mathop{\rm spin}^c}$ but not necessarily spin. For example, the complex projective space $\mathbb C P^m$ is spin if and only if $m$ is odd. Moreover, from the extrinsic point of view, it seems that it is more natural to work with ${\mathop{\rm spin}^c}$ structures rather than spin structu\-res~\cite{hmu, nakadthesis,JRRN}. For instance, on K\"ahler--Einstein manifolds of positive scalar curvature, O.~Hijazi, S.~Montiel and F.~Urbano \cite{hmu} constructed ${\mathop{\rm spin}^c}$ structures carrying K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinors, i.e., spinors satisfying \eqref{ecodd}, where the covariant derivative is the ${\mathop{\rm spin}^c}$ one. In \cite{her}, M.~Herzlich and A.~Moroianu extended Friedrich's estimate \eqref{fried} to compact Riemannian ${\mathop{\rm spin}^c}$ manifolds. This new lower bound involves only the conformal geometry of the manifold and the curvature of the auxiliary line bundle associated with the ${\mathop{\rm spin}^c}$ structure. The limiting case is characterized by the existence of a ${\mathop{\rm spin}^c}$ Killing or parallel spinor, such that the Clif\/ford multiplication of the curvature form of the auxiliary line bundle with this spinor is proportional to it. In this paper, we give an estimate for the eigenvalues of the ${\mathop{\rm spin}^c}$ Dirac operator, by restric\-ting ourselves to compact K\"ahler--Einstein manifolds endowed with particular ${\mathop{\rm spin}^c}$ structures. More precisely, we consider $(M^{2m},g,J)$ a compact K\"ahler--Einstein manifold of positive scalar curvature $S$ and of index $p\in\mathbb{N}^*$. We endow $M$ with the ${\mathop{\rm spin}^c}$ structure whose auxiliary line bundle is a tensorial power $\mathcal{L}^q$ of the $p$-$th$ root $\mathcal L$ of the canonical bundle $K_M$ of $M$, where $q \in \mathbb Z$, $p+q\in2\mathbb{Z}$ and $|q|\leq p$. 
Our main result is the following: \begin{thm}\label{globestimke} Let $(M^{2m}, g)$ be a compact K\"ahler--Einstein manifold of index $p$ and positive scalar curvature $S$, carrying the ${\mathop{\rm spin}^c}$ structure given by $\mathcal{L}^q$ with $q+p\in 2\mathbb{Z}$, where $\mathcal{L}^p=K_M$. We assume that $p \geq |q|$ and the metric is normalized such that its scalar curvature equals $4m(m+1)$. Then, any eigenvalue $\lambda$ of $D^2$ is bounded from below as follows \begin{gather}\label{global} \lambda\geq \left(1-\frac{q^2}{p^2}\right) (m+1)^2. \end{gather} Equality is attained if and only if $b:=\frac{q}{p}\cdot\frac{m+1}{2}+\frac{m-1}{2}\in\mathbb{N}$ and there exists a K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinor in $\Gamma(\Sigma_{b}M\oplus \Sigma_{b+1} M)$. \end{thm} Indeed, this is a consequence of more ref\/ined estimates for the eigenvalues of the square of the ${\mathop{\rm spin}^c}$ Dirac operator restricted to the eigenbundles~$\Sigma_r M$ of the spinor bundle (see Theorem~\ref{estimke}). The proof of this result is based on a ref\/ined Schr\"odinger--Lichnerowicz~${\mathop{\rm spin}^c}$ formula (see Lemma~\ref{refinedd}) written on each such eigenbundle~$\Sigma_rM$, which uses the decomposition of the covariant derivative acting on spinors into its holomorphic and antiholomorphic part. This formula has already been used in literature, for instance by K.-D.~Kirchberg~\cite{kircheven}. The limiting manifolds of~\eqref{global} are characterized by the existence of K\"ahlerian Killing~${\mathop{\rm spin}^c}$ spinors in a certain subbundle~$\Sigma_r M$. In particular, this gives a positive answer to the conjectured relationship between ${\mathop{\rm spin}^c}$ K\"ahlerian Killing spinors and a lower bound for the eigenvalues of the~${\mathop{\rm spin}^c}$ Dirac operator, as stated in \cite[Remark~16]{hmu}. Let us mention here that the Einstein condition in Theorem~\ref{globestimke} is important in order to establish the estimate~\eqref{global}, since otherwise there is no control over the estimate of the term given by the Clif\/ford action of the curvature form of the auxiliary line bundle of the~${\mathop{\rm spin}^c}$ structure (see~\eqref{fa-ke}). \section{Preliminaries and notation} In this section, we set the notation and brief\/ly review some basic facts about ${\mathop{\rm spin}^c}$ and K\"ahler geometries. For more details we refer to the books \cite{bookspin, fr_book,spin,am_lectures}. Let $(M^{n}, g)$ be an $n$-dimensional closed Riemannian ${\mathop{\rm spin}^c}$ manifold and denote by $\Sigma M$ its complex spinor bundle, which has complex rank equal to $2^{[\frac{n}{2}]}$. The bundle $\Sigma M$ is endowed with a Clif\/ford multiplication denoted by ``$\cdot$'' and a scalar product denoted by $\langle \cdot, \cdot\rangle$. Given a~${\mathop{\rm spin}^c}$ structure on~$(M^{n}, g)$, one can check that the determinant line bundle $\mathrm{det}(\Sigma M)$ has a root~$L$ of index~$2^{[\frac {n}{2}]-1}$. This line bundle~$L$ over~$M$ is called the auxiliary line bundle associated with the ${\mathop{\rm spin}^c}$ structure. The connection~$\nabla^A$ on~$\Sigma M$ is the twisted connection of the one on the spinor bundle (induced by the Levi-Civita connection) and a f\/ixed connection~$A$ on~$L$. The ${\mathop{\rm spin}^c}$ Dirac operator $D^A$ acting on the space of sections of $\Sigma M$ is def\/ined by the composition of the connection $\nabla^A$ with the Clif\/ford multiplication. For simplicity, we will denote $\nabla^A$ by $\nabla$ and~$D^A$ by~$D$. 
In local coordinates: \begin{gather*} D =\sum_{j=1}^{n} e_j \cdot \nabla_{e_j}, \end{gather*} where $\{e_j\}_{j=1,\dots, n}$ is a local orthonormal basis of~$TM$. $D$ is a f\/irst-order elliptic operator and is formally self-adjoint with respect to the $L^2$-scalar product. A useful tool when examining the ${\mathop{\rm spin}^c}$ Dirac operator is the Schr\"{o}dinger--Lichnerowicz formula \begin{gather} D^2 = \nabla^*\nabla + \frac 14 S + \frac{1}{2}F_A \cdot, \label{sl} \end{gather} where $\nabla^*$ is the adjoint of $\nabla$ with respect to the $L^2$-scalar product and $F_A$ is the curvature (imaginary-valued) $2$-form on $M$ associated to the connection $A$ def\/ined on the auxiliary line bundle $L$, which acts on spinors by the extension of the Clif\/ford multiplication to dif\/ferential forms. We recall that the complex volume element $\omega_{\mathbb{C}}=i^{[\frac{n+1}{2}]} e_1\wedge \cdots \wedge e_n$ acts as the identity on the spinor bundle if~$n$ is odd. If~$n$ is even, $\omega_{\mathbb C}^2=1$. Thus, under the action of the complex volume element, the spinor bundle decomposes into the eigenspaces $\Sigma^{\pm} M$ corresponding to the~$\pm 1$ eigenspaces, the {\it positive} (resp. {\it negative}) spinors. Every spin manifold has a trivial ${\mathop{\rm spin}^c}$ structure, by choosing the trivial line bundle with the trivial connection whose curvature~$F_A$ vanishes. Every K\"ahler manifold $(M^{2m},g,J)$ has a canonical ${\mathop{\rm spin}^c}$ structure induced by the complex structure~$J$. The complexif\/ied tangent bundle decomposes into $T^{\mathbb{C}} M = T_{1,0} M\oplus T_{0,1} M,$ the $i$-eigenbundle (resp.~$(-i)$-eigenbundle) of the complex linear extension of~$J$. For any vector f\/ield $X$, we denote by $X^{\pm}:=\frac{1}{2}(X\mp iJX)$ its component in $T_{1,0} M$, resp.~$T_{0,1} M$. The spinor bundle of the canonical ${\mathop{\rm spin}^c}$ structure is def\/ined~by \begin{gather*} \Sigma M = \Lambda^{0,*} M =\overset{m}{\underset{r=0}{\oplus}} \Lambda^r (T_{0,1}^* M), \end{gather*} and its auxiliary line bundle is $L = (K_M)^{-1}= \Lambda^m (T_{0,1}^* M)$, where $K_M=\Lambda^{m,0}M$ is the canonical bundle of $M$. The line bundle $L$ has a canonical holomorphic connection, whose curvature form is given by $- i\rho$, where $\rho$ is the Ricci form def\/ined, for all vector f\/ields~$X$ and~$Y$, by $\rho(X, Y) = \mathrm{Ric}(JX, Y)$ and $\mathrm{Ric}$ denotes the Ricci tensor. Let us mention here the sign convention we use to def\/ine the Riemann curvature tensor, respectively the Ricci tensor: $R_{X,Y}:=\nabla_{X}\nabla_{Y} -\nabla_{Y}\nabla_{X}-\nabla_{[X,Y]}$ and ${\mathop{\rm Ric}}(X,Y):=\sum\limits_{j=1}^{2m}R(e_j,X,Y,e_j)$, for all vector f\/ields $X$, $Y$ on~$M$, where $\{e_j\}_{j=1,\dots, 2m}$ is a local orthonormal basis of the tangent bundle. Similarly, one def\/ines the so called anti-canonical ${\mathop{\rm spin}^c}$ structure, whose spinor bundle is given by $\Lambda^{*, 0} M =\oplus_{r=0}^m \Lambda^r (T_{1, 0}^* M)$ and the auxiliary line bundle by~$K_M$. The spinor bundle of any other ${\mathop{\rm spin}^c}$ structure on $M$ can be written as \begin{gather*} \Sigma M = \Lambda^{0, *} M \otimes \mathbb L, \end{gather*} where $\mathbb L^2 = K_M\otimes L$ and $L$ is the auxiliary line bundle associated with this ${\mathop{\rm spin}^c}$ structure. 
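For instance, if $M$ is spin and the auxiliary line bundle $L$ is chosen to be trivial, then $\mathbb L$ is a square root of the canonical bundle and one recovers the classical identif\/ication of the spinor bundle of a K\"ahler spin manifold (see, e.g.,~\cite{hit}):
\begin{gather*}
\Sigma M = \Lambda^{0, *} M \otimes K_M^{\frac{1}{2}}.
\end{gather*}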
The K\"ahler form $\Omega$, def\/ined as $\Omega(X,Y)=g(JX,Y)$, acts on $\Sigma M$ via Clif\/ford multiplication and this action is locally given by \begin{gather}\label{defomega} \Omega\cdot \psi = \frac{1}{2} \sum_{j=1}^{2m} e_j\cdot Je_j\cdot\psi, \end{gather} for all $\psi\in\Gamma(\Sigma M)$, where $\{e_1, \dots, e_{2m}\}$ is a local orthonormal basis of $\mathrm{TM}$. Under this action, the spinor bundle decomposes as follows \begin{gather}\label{decomp} \Sigma M =\overset{m}{\underset{r=0}{\oplus}} \Sigma_r M, \end{gather} where $\Sigma_r M$ denotes the eigenbundle to the eigenvalue $i(2r-m)$ of $\Omega$, of complex rank $\binom{m}{k}$. It is easy to see that $\Sigma_r M \subset \Sigma^+ M$ (resp.\ $\Sigma_r M \subset \Sigma^-M$) if and only if $r$ is even (resp.~$r$ is odd). Moreover, for any $X \in \Gamma(TM)$ and $\varphi \in \Gamma(\Sigma_r M)$, we have $X^+ \cdot\varphi \in \Gamma(\Sigma_{r+1}M)$ and $X^-\cdot \varphi \in \Gamma(\Sigma_{r-1} M)$, with the convention $\Sigma_{-1}M=\Sigma_{m+1}M=M\times\{0\}$. Thus, for any ${\mathop{\rm spin}^c}$ structure, we have $\Sigma_r M = \Lambda^{0, r} M\otimes \Sigma_0 M$. Hence, $(\Sigma_0M)^2 = K_M \otimes L,$ where $L$ is the auxiliary line bundle associated with the ${\mathop{\rm spin}^c}$ structure. For example, when the manifold is spin, we have $(\Sigma_0M)^2 = K_M$ \cite{hit, kirch86}. For the canonical ${\mathop{\rm spin}^c}$ structure, since $L = (K_M)^{-1}$, it follows that $\Sigma_0 M$ is trivial. This yields the existence of parallel spinors (the constant functions) lying in $\Sigma_0M$, cf.~\cite{Moro1}. Associated to the complex structure $J$, one def\/ines the following operators \begin{gather*} D^+ =\sum_{j=1}^{2m}e_j^+\cdot \nabla_{e_j^-},\qquad D^- =\sum_{j=1}^{2m}e_j^-\cdot \nabla_{e_j^+}, \end{gather*} which satisfy the relations \begin{gather*} D=D^+ +D^-, \qquad (D^+)^2=0, \qquad (D^-)^2=0, \qquad D^+D^- + D^-D^+ =D^2. \end{gather*} When restricting the Dirac operator to $\Sigma_{r}M$, it acts as \begin{gather*} D=D^+ +D^-\colon \ \Gamma(\Sigma_{r}M) \to \Gamma(\Sigma_{r-1}M\oplus\Sigma_{r+1}M). \end{gather*} Corresponding to the decomposition $TM\otimes\Sigma_{r}M\cong \Sigma_{r-1}M\oplus \Sigma_{r+1}M\oplus \mathrm{Ker}_r$, where $\mathrm{Ker}_r$ denotes the kernel of the Clif\/ford multiplication by tangent vectors restricted to $\Sigma_{r}M$, we have, as in the spin case (for details see, e.g., \cite[equation~(2.7)]{pilcapaper}), the following Weitzenb\"ock formula relating the dif\/ferential operators acting on sections of $\Sigma_{r}M$: \begin{gather*} \nabla^{*}\nabla=\frac{1}{2(r+1)}D^-D^+ +\frac{1}{2(m-r+1)}D^+D^- + T_r^*T_r, \end{gather*} where $T_r$ is the so-called K\"ahlerian twistor operator and is def\/ined by \begin{gather*} T_r\varphi:= \nabla \varphi +\frac{1}{2(m-r+1)}e_j\otimes e_j^+\cdot D^-\varphi+\frac{1}{2(r+1)}e_j\otimes e_j^-\cdot D^+\varphi. \end{gather*} This decomposition further implies the following identity for $\varphi\in\Gamma(\Sigma_{r}M)$, by the same argument as in \cite[Lemma 2.5]{pilcapaper}, \begin{gather}\label{ident} |\nabla\varphi|^2= \frac{1}{2(r+1)}|D^+\varphi|^2 +\frac{1}{2(m-r+1)}|D^-\varphi|^2+|T_r\varphi|^2. \end{gather} Hence, we have the inequality \begin{gather}\label{ineg} |\nabla\varphi|^2\geq \frac{1}{2(r+1)}|D^+\varphi|^2 +\frac{1}{2(m-r+1)}|D^-\varphi|^2. \end{gather} Equality in \eqref{ineg} is attained if and only if $T_r\varphi=0$, in which case $\varphi$ is called a K\"ahlerian twistor spinor. 
The Lichnerowicz--Schr\"odinger formula~\eqref{sl} yields the following: \begin{lem}\label{alg1} Let $(M^{2m},g,J)$ be a compact K\"ahler manifold endowed with any ${\mathop{\rm spin}^c}$ structure. If $\varphi$ is an eigenspinor of $D^2$ with eigenvalue $\lambda$, $D^2\varphi=\lambda\varphi$, and satisfies \begin{gather}\label{inegalg1} |\nabla\varphi|^2\geq\frac{1}{j}|D\varphi|^2, \end{gather} for some real number $j>1$, and $(S +2 F_A) \cdot \varphi = c \varphi$, where $c$ is a positive function, then \begin{gather}\label{firstineq1} \lambda\geq\frac{j}{4(j-1)}\underset{M}{\inf} \, c. \end{gather} Moreover, equality in \eqref{firstineq1} holds if and only if the function $c$ is constant and equality in \eqref{inegalg1} holds at all points of the manifold. \end{lem} Let $\{e_1,\dots, e_{2m}\}$ be a local orthonormal basis of $M^{2m}$. We implicitly use the Einstein summation convention over repeated indices. We have the following formulas for contractions that hold as endomorphisms of $\Sigma_r M$: \begin{gather}\label{kecontr1} e_j^+\cdot e_j^-=-2r, \qquad e_j^-\cdot e_j^+=-2(m-r), \\ \label{kecontr3} e_j\cdot \mathrm{Ric}(e_j)=-S, \qquad e_j^-\cdot \mathrm{Ric}(e_j^+)=-\frac{S}{2}-i\rho, \qquad e_j^+\cdot \mathrm{Ric}(e_j^-)=-\frac{S}{2}+i\rho. \end{gather} The \looseness=-1 identities \eqref{kecontr1} follow directly from~\eqref{defomega}, which gives the action of the K\"ahler form and has~$\Sigma_r M $ as eigenspace to the eigenvalue $i(2r-m)$, implying that $ie_j\cdot Je_j=2i\Omega=-2(2r-m)$, and from the fact that $e_j\cdot e_j=-2m$. The identities~\eqref{kecontr3} are obtained from the following identities \begin{gather*} e_j\cdot {\mathop{\rm Ric}}(e_j)=e_j\wedge {\mathop{\rm Ric}}(e_j)-g({\mathop{\rm Ric}}(e_j),e_j)=-S,\\ ie_j\cdot {\mathop{\rm Ric}}(Je_j)=ie_j\wedge {\mathop{\rm Ric}}(Je_j)-ig({\mathop{\rm Ric}}(Je_j),e_j)=2i\rho. \end{gather*} The ${\mathop{\rm spin}^c}$ Ricci identity, for any spinor $\varphi$ and any vector f\/ield $X$, is given by \begin{gather}\label{ricident} e_i \cdot\mathcal{R}^A_{e_i,X} \varphi= \frac{1}{2}\mathrm{Ric}(X) \cdot\varphi-\frac{1}{2} (X\lrcorner F_A)\cdot\varphi, \end{gather} where $\mathcal{R}^A$ denotes the ${\mathop{\rm spin}^c}$ spinorial curvature, def\/ined with the same sign convention as above, namely $\mathcal{R}^A_{X,Y}:=\nabla^A_{X}\nabla^A_{Y} -\nabla^A_{Y}\nabla^A_{X}-\nabla^A_{[X,Y]}$. For a~proof of the ${\mathop{\rm spin}^c}$ Ricci identity we refer to \cite[Section~3.1]{fr_book}. For any vector f\/ield $X$ parallel at the point where the computation is done, the following commutator rules hold \begin{gather}\label{nablax} [\nabla_X,D]=-\frac{1}{2}\mathrm{Ric}(X)\cdot+\frac{1}{2}(X\lrcorner F_A)\cdot, \\\label{nablax+} [\nabla_{X},D^+]=-\frac{1}{2}\mathrm{Ric}(X^+)\cdot+\frac{1}{2}\big(X^+\lrcorner F^{1,1}_A\big)\cdot +\frac{1}{2}\big(X^-\lrcorner F_A^{0,2}\big)\cdot, \\ \label{nablax-} [\nabla_{X},D^-]=-\frac{1}{2}\mathrm{Ric}(X^-)\cdot+\frac{1}{2}\big(X^-\lrcorner F^{1,1}_A\big)\cdot+\frac{1}{2}\big(X^+\lrcorner F_A^{2,0}\big)\cdot, \end{gather} where the $2$-form $F_A$ is decomposed as $F_A=F_A^{2,0}+F_A^{1,1}+F_A^{0,2}$, into forms of type $(2,0)$, $(1,1)$, respectively $(0,2)$. 
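Let us already point out that for the ${\mathop{\rm spin}^c}$ structures considered in Section~\ref{eigen}, the curvature form of the auxiliary line bundle is proportional to the Ricci form of the K\"ahler metric, hence of type $(1,1)$:
\begin{gather*}
F_A=\frac{q}{p}\, i\rho=F_A^{1,1}, \qquad F_A^{2,0}=F_A^{0,2}=0,
\end{gather*}
so that only the $(1,1)$-part of $F_A$ contributes to the commutators \eqref{nablax+} and \eqref{nablax-} in that case.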
The identity \eqref{nablax} is obtained from the following straightforward computation \begin{gather*} \nabla_X(D\varphi) =\nabla_X( e_j\cdot \nabla_{e_j}\varphi)= e_j\cdot \mathcal{R}^A_{X,e_j}\varphi+ e_j\cdot \nabla_{e_j}\nabla_X\varphi\\ \hphantom{\nabla_X(D\varphi)}{} \overset{\eqref{ricident}}{=}-\frac{1}{2}{\mathop{\rm Ric}}(X)\cdot\varphi+\frac{1}{2}(X\lrcorner F_A)\cdot\varphi+D(\nabla_X \varphi). \end{gather*} The identity \eqref{nablax+} follows from the identities \begin{gather*} \nabla_{X^+}(D^+\varphi) =\nabla_{X^+}( e_i^+\cdot \nabla_{e_i^-}\varphi)= e_i^+\cdot \mathcal{R}^A_{X^+,e_i^-}\varphi+ e_i^+\cdot \nabla_{e_i^-}\nabla_{X^+}\varphi\\ \hphantom{\nabla_{X^+}(D^+\varphi)}{} =-\frac{1}{2}\mathrm{Ric}(X^+)\cdot\varphi+\frac{1}{2}\big(X^+\lrcorner F^{1,1}_A\big)\cdot\varphi+D^+(\nabla_{X^+} \varphi), \\ \nabla_{X^-}(D^+\varphi) =\nabla_{X^-}\big( e_i^+\cdot \nabla_{e_i^-}\varphi\big)= e_i^+\cdot \mathcal{R}^A_{X^-,e_i^-}\varphi+ e_i^+\cdot \nabla_{e_i^-}\nabla_{X^-}\varphi\\ \hphantom{\nabla_{X^-}(D^+\varphi)}{} =\frac{1}{2}\big(X^-\lrcorner F^{0,2}_A\big)\cdot\varphi+D^+(\nabla_{X^-} \varphi). \end{gather*} The identity \eqref{nablax-} follows either by an analogous computation or by conjugating~\eqref{nablax+}. On a K\"ahler manifold $(M,g,J)$ endowed with any ${\mathop{\rm spin}^c}$ structure, a spinor of the form $\varphi_r+\varphi_{r+1}\in \Gamma(\Sigma_r M\oplus \Sigma_{r+1} M)$, for some $0\leq r\leq m$, is called a {\it K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinor} if there exists a non-zero real constant $\alpha$, such that the following equations are satisf\/ied, for all vector f\/ields $X$, \begin{gather}\label{KKSSdefinition} \nabla_X\varphi_r= \alpha X^-\cdot\varphi_{r+1},\qquad \nabla_X\varphi_{r+1} = \alpha X^+\cdot\varphi_{r}. \end{gather} K\"ahlerian Killing spinors lying in $\Gamma(\Sigma_{m} M\oplus \Sigma_{m+1} M) = \Gamma(\Sigma_m M)$ or in $\Gamma(\Sigma_{-1} M\oplus \Sigma_{0} M) = \Gamma(\Sigma_0 M)$ are just parallel spinors. A direct computation shows that each K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinor is an eigenspinor of the square of the Dirac operator. More precisely, the following equalities hold \begin{gather}\label{kkseigen1} D\varphi_r= -2(r+1)\alpha \varphi_{r+1},\qquad D\varphi_{r+1}= -2(m-r)\alpha \varphi_{r}, \end{gather} which further yield \begin{gather}\label{kkseigen2} D^2\varphi_r=4(m-r)(r+1)\alpha^2\varphi_r,\qquad D^2\varphi_{r+1}=4(m-r)(r+1)\alpha^2\varphi_{r+1}. \end{gather} In \cite{hmu}, the authors gave examples of ${\mathop{\rm spin}^c}$ structures on compact K\"ahler--Einstein manifolds of positive scalar curvature, which carry K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinors lying in $\Sigma_r M\oplus \Sigma_{r+1} M$, for $r\neq \frac{m\pm1}{2}$, in contrast to the spin case, where K\"ahlerian Killing spinors may only exist for $m$ odd in the middle of the decomposition \eqref{decomp}. We brief\/ly describe these ${\mathop{\rm spin}^c}$ structures here. If the f\/irst Chern class $c_1(K_M)$ of the canonical bundle of the K\"ahler manifold $M$ is a non-zero cohomology class, the greatest number $p\in \mathbb N^*$ such that \begin{gather*} \frac 1p c_1 (K_M) \in H^2 (M, \mathbb Z), \end{gather*} is called the {\it $($Fano$)$ index} of the manifold $M$. One can thus consider a $p$-th root of the canonical bundle $K_M$, i.e., a complex line bundle $\mathcal L$, such that $\mathcal L^p = K_M$.
In \cite{hmu}, O.~Hijazi, S.~Montiel and F.~Urbano proved the following: \begin{thm}[\protect{\cite[Theorem~14]{hmu}}] \label{KKSS} Let $M$ be a $2m$-dimensional K\"ahler--Einstein compact mani\-fold with scalar curvature $4m(m+1)$ and index $p \in \mathbb N^*$. For each $0 \leq r \leq m+1$, there exists on $M$ a ${\mathop{\rm spin}^c}$ structure with auxiliary line bundle given by $\mathcal L^q$, where $q = \frac{p}{m+1} (2r-m-1) \in \mathbb Z$, and carrying a K\"ahlerian Killing spinor $\psi_{r-1} + \psi_r \in \Gamma(\Sigma_{r-1} M \oplus \Sigma_{r} M)$, i.e., it satisfies the first-order system \begin{gather*} \nabla_X \psi_{r} = - X^+ \cdot \psi_{r-1},\qquad \nabla_X \psi_{r-1} = - X^- \cdot \psi_{r}, \end{gather*} for all $X \in \Gamma(TM)$. \end{thm} For example, if $M$ is the complex projective space $\mathbb C P^m$ of complex dimension $m$, then $p = m+1$ and $\mathcal L$ is just the tautological line bundle. We f\/ix $0 \leq r \leq m+1$ and we endow~$\mathbb C P^m$ with the ${\mathop{\rm spin}^c}$ structure whose auxiliary line bundle is given by $\mathcal L^q$ where $q = \frac{p}{m+1} (2r-m-1) = 2r-m-1 \in \mathbb Z$. For this ${\mathop{\rm spin}^c}$ structure, the space of K\"ahlerian Killing spinors in $\Gamma(\Sigma_{r-1}M \oplus \Sigma_rM)$ has dimension $\binom{m+1}{r}$. A K\"ahler manifold carrying a complex contact structure necessarily has odd complex dimension $m = 2\ell+1$ and its index $p$ equals $\ell+1$. We f\/ix $0 \leq r \leq m+1$ and we endow $M$ with the ${\mathop{\rm spin}^c}$ structure whose auxiliary line bundle is given by $\mathcal L^q$ where $q = \frac{p}{m+1} (2r-m-1) = r-\ell-1 \in \mathbb Z$. For this ${\mathop{\rm spin}^c}$ structure, the space of K\"ahlerian Killing spinors in $\Gamma(\Sigma_{r-1} M\oplus \Sigma_{r}M)$ has dimension $1$. In these examples, for $r=0$ (resp. $r = m+1$), we get the canonical (resp. anticanonical) ${\mathop{\rm spin}^c}$ structure for which K\"ahlerian Killing spinors are just parallel spinors. \section[Eigenvalue estimates for the ${\mathop{\rm spin}^c}$ Dirac operator on K\"ahler--Einstein manifolds]{Eigenvalue estimates for the $\boldsymbol{{\mathop{\rm spin}^c}}$ Dirac operator\\ on K\"ahler--Einstein manifolds}\label{eigen} In this section, we give a lower bound for the eigenvalues of the ${\mathop{\rm spin}^c}$ Dirac operator on a K\"ahler--Einstein manifold endowed with particular ${\mathop{\rm spin}^c}$ structures. More precisely, let $(M^{2m},g,J)$ be a~compact K\"ahler--Einstein manifold of index $p\in\mathbb{N}^*$ and of positive scalar curvature $S$, endowed with the ${\mathop{\rm spin}^c}$ structure given by $\mathcal{L}^q$, where $\mathcal L$ is the $p$-$th$ root of the canonical bundle and $q+p\in2\mathbb{Z}$ (among all powers $\mathcal L^q$, only those satisfying $p+q \in 2\mathbb Z$ provide us a ${\mathop{\rm spin}^c}$ structure, cf.~\cite[Section~7]{hmu}). The curvature form $F_A$ of the induced connection $A$ on $\mathcal{L}^q$ acts on the spinor bundle as $\frac{q}{p}i\rho$. Since $(M^{2m},g,J)$ is K\"ahler--Einstein, it follows that $\rho=\frac{S}{2m}\Omega$, where $\Omega$ is the K\"ahler form. Hence, for each $0\leq r\leq m$, we have \begin{gather}\label{fa-ke} (S +2 F_A )\cdot \varphi_r =\left(1-\frac{q}{p}\cdot\frac{2r-m}{m}\right) S \varphi_r, \qquad \forall\, \varphi_r\in\Gamma(\Sigma_{r}M). 
\end{gather} Let us denote by $c_r:=1-\frac{q}{p}\cdot \frac{2r-m}{m}$ and \begin{gather*} a_1\colon \ \{0,\dots,m\} \to \mathbb{R}, \qquad a_1(r):=\frac{r+1}{2r+1} c_r,\\ a_2\colon \ \{0,\dots,m\} \to \mathbb{R},\qquad a_2(r):=\frac{m-r+1}{2m-2r+1}c_r. \end{gather*} With the above notation, the following result holds: \begin{prop}\label{estimgen} Each eigenvalue $\lambda_r$ of $D^2$ restricted to $\Sigma_{r}M$ with associated eigenspinor~$\varphi_r$ satisfies the inequality \begin{gather}\label{ineggen} \lambda_r\geq \max \big (\min \big (a_1(r), a_1(r-1)\big ), \min\big( a_2(r), a_2(r+1)\big)\big)\cdot \frac{S}{2}. \end{gather} Moreover, the equality case is characterized as follows: \begin{enumerate}[$a)$]\itemsep=0pt \item $D^2\varphi_r=a_1(r)\frac{S}{2}\varphi_r \Longleftrightarrow T_r\varphi_r=0$, $D^-\varphi_r=0$; \item $D^2\varphi_r=a_1(r-1)\frac{S}{2}\varphi_r \Longleftrightarrow T_{r-1}(D^-\varphi_r)=0$; \item $D^2\varphi_r=a_2(r)\frac{S}{2}\varphi_r \Longleftrightarrow T_r\varphi_r=0$, $D^+\varphi_r=0$; \item $D^2\varphi_r=a_2(r+1)\frac{S}{2}\varphi_r \Longleftrightarrow T_{r+1}(D^+\varphi_r)=0$. \end{enumerate} \end{prop} \begin{proof} For $0\leq r\leq m$ we have: $(S +2 F_A )\cdot \varphi_r = c_r S \varphi_r$, $\forall\varphi_r\in\Gamma(\Sigma_{r}M)$. Let $r\in\{0,\dots,m\}$ be f\/ixed, $\lambda_r$ be an eigenvalue of $D^2|_{\Sigma_r M}$ and $\varphi_r\in\Gamma(\Sigma_{r}M)$ be an eigenspinor: $D^2\varphi_r=\lambda_r\varphi_r$. We distinguish two cases. i) If $D^-\varphi_r=0$, then $|D\varphi_r|^2=|D^+\varphi_r|^2$ and \eqref{ineg} implies \begin{gather*} |\nabla\varphi_r|^2\geq \frac{1}{2(r+1)}|D^+\varphi_r|^2=\frac{1}{2(r+1)}|D\varphi_r|^2. \end{gather*} By Lemma~\ref{alg1}, it follows that \begin{gather*}\lambda_r\geq\frac{r+1}{2(2r+1)}c_rS. \end{gather*} ii) If $D^-\varphi_r\neq 0$, then we consider $\varphi_r^-:= D^-\varphi_r$, which satisf\/ies $D^2\varphi_r^-=\lambda_r\varphi_r^-$ and $D^-\varphi_r^-=0$, so in particular $|D\varphi_r^-|^2=|D^+\varphi_r^-|^2$. We now apply the argument in i) to $\varphi_r^-\in\Gamma(\Sigma_{r-1}M)$. By~\eqref{ineg}, it follows that \begin{gather*} |\nabla\varphi_r^-|^2\geq \frac{1}{2r}|D^+\varphi_r^-|^2=\frac{1}{2r}|D\varphi_r^-|^2. \end{gather*} Applying again Lemma~\ref{alg1}, we obtain $\lambda_r\geq\frac{r}{2(2r-1)} c_{r-1} S$. Hence, we have showed that $\lambda_r\geq \min\big (a_1(r), a_1(r-1)\big)\frac{S}{2}$. The same argument applied to the cases when $D^+\varphi_r=0$ and $D^+\varphi_r\neq 0$ proves the inequality $\lambda_r\geq \min\big (a_2(r), a_2(r+1)\big ) \frac{S}{2}$. Altogether we obtain the estimate in Proposition~\ref{estimgen}. The characterization of the equality cases is a direct consequence of Lemma~\ref{alg1}, identity~\eqref{ident} and the description of the limiting case of inequality~\eqref{ineg}. \end{proof} \begin{Remark} The inequality \eqref{ineggen} can be expressed more explicitly, by determining the maximum according to several possible cases. However, since in the sequel we will ref\/ine this eigenvalue estimate, we are only interested in the characterization of the limiting cases, which will be used later in the proof of the equality case of the estimate (\ref{global}). \end{Remark} In order to ref\/ine the estimate (\ref{ineggen}), we start by the following two lemmas. 
\begin{lem} \label{lemmacit} Let $(M^{2m},g,J)$ be a compact K\"ahler--Einstein manifold of index $p$ and of positive scalar curvature $S$, endowed with a ${\mathop{\rm spin}^c}$ structure given by $\mathcal{L}^q$, where $q+p\in2\mathbb{Z}$. For any spinor field $\varphi$ and any vector field $X$, the ${\mathop{\rm spin}^c}$ Ricci identity is given by \begin{gather} \label{kericspinc} e_j \cdot\mathcal{R}^A_{e_j,X} \varphi = \frac{1}{2}\mathrm{Ric}(X) \cdot\varphi- \frac{S}{4m}\frac{q}{p} (X\lrcorner i\Omega)\cdot\varphi, \end{gather} and it can be refined as follows \begin{gather}\label{kericspinc+-} e^-_j \cdot\mathcal{R}^A_{e^+_j,X^-} \varphi =\frac{1}{2}\mathrm{Ric}(X^-)\cdot \varphi -\frac{S}{4m}\frac{q}{p} X^-\cdot\varphi, \\ \label{kericspinc-+} e^+_j \cdot\mathcal{R}^A_{e^-_j,X^+} \varphi = \frac{1}{2}\mathrm{Ric}(X^+)\cdot \varphi +\frac{S}{4m}\frac{q}{p} X^+\cdot\varphi. \end{gather} \end{lem} \begin{proof} Since the curvature form $F_A$ of the ${\mathop{\rm spin}^c}$ structure acts on the spinor bundle as $\frac{q}{p}i\rho=\frac{q}{p} \frac{S}{2m} i\Omega$, \eqref{kericspinc} follows directly from the Ricci identity \eqref{ricident}. The ref\/ined identities \eqref{kericspinc+-} and \eqref{kericspinc-+} follow by replacing $X$ in \eqref{kericspinc} with $X^-$, respectively $X^+$, which is possible since both sides of the identity are complex linear in $X$, and by taking into account that when decomposing $e_j=e_j^+ + e^-_j$, the following identities (and their analogue for $X^+$) hold: $e_j \cdot\mathcal{R}^A_{e^-_j,X^-}=0$ and $e^+_j \cdot\mathcal{R}^A_{e^+_j,X^-}=0$. These last two identities are a consequence of the $J$-invariance of the curvature tensor, i.e., $\mathcal{R}^A_{JX,JY}=\mathcal{R}^A_{X,Y}$, for all vector f\/ields~$X$,~$Y$, as this implies $\mathcal{R}^A_{e^-_j,X^-}=\mathcal{R}^A_{Je^-_j,JX^-}=(-i)^2\mathcal{R}^A_{e^-_j,X^-}$ and also $e^+_j \cdot\mathcal{R}^A_{e^+_j,X^-}=Je^+_j \cdot\mathcal{R}^A_{Je^+_j,X^-}=i^2 e^+_j \cdot\mathcal{R}^A_{e^+_j,X^-}$, so they both vanish. In order to obtain the second term on the right hand side of~\eqref{kericspinc+-} and~\eqref{kericspinc-+}, we use the following identities of endomorphisms of the spinor bundle: $X^-\lrcorner i\Omega=X^-$ and $X^+\lrcorner i\Omega=-X^+$. \end{proof} \begin{lem}\label{refinedd} Under \looseness=-1 the same assumptions as in Lemma~{\rm \ref{lemmacit}}, the refined Schr\"odinger--Lichne\-ro\-wicz formula for ${\mathop{\rm spin}^c}$ K\"ahler manifolds for the action on each eigenbundle $\Sigma_{r}M$ is given by \begin{gather}\label{sl1} 2\nabla^{{1,0}^*}\nabla^{1,0}=D^2-\frac{S}{4}-\frac{i}{2}\rho-\frac{m-r}{2m}\frac{q}{p}S, \\ \label{sl2} 2\nabla^{{0,1}^*}\nabla^{0,1}=D^2-\frac{S}{4}+\frac{i}{2}\rho+\frac{r}{2m}\frac{q}{p}S, \end{gather} where $\nabla^{1, 0}$ $($resp.\ $\nabla^{0,1})$ is the holomorphic $($resp.\ antiholomorphic$)$ part of $\nabla$, i.e., the projections of~$\nabla$ onto the following two components \begin{gather*} \nabla\colon \ \Gamma(\Sigma_{r}M)\to \Gamma(\Lambda^{1,0}M\otimes\Sigma_{r}M)\oplus \Gamma(\Lambda^{0,1}M\otimes\Sigma_{r}M). \end{gather*} They are locally defined, for all vector fields~$X$, by \begin{gather*} \nabla^{1, 0}_X = g(X, e_i^-) \nabla_{e_i^+}=\nabla_{X^+} \qquad \text{and}\qquad \nabla^{0, 1}_X = g(X, e_i^+) \nabla_{e_i^-}=\nabla_{X^-}, \end{gather*} where $\{e_1,\dots, e_{2m}\}$ is a local orthonormal basis of~$TM$. 
\end{lem} \begin{proof} Let $\{e_1,\dots, e_{2m}\}$ be a local orthonormal basis of $TM$ (identif\/ied with $\Lambda^1 M$ via the met\-ric~$g$), parallel at the point where the computation is made. We recall that the formal adjoints $\nabla^{{1,0}^*}$ and $\nabla^{{0,1}^*}$ are given by the following formulas (for a proof, see, e.g., \cite[Lemma~20.1]{am_lectures}) \begin{gather*} \nabla^{{1,0}^*}\colon \ \Gamma(\Lambda^{1,0}M\otimes\Sigma_{r}M) \longrightarrow \Gamma(\Sigma_{r}M), \qquad \nabla^{{1,0}^*}(\alpha\otimes \varphi)=(\delta\alpha)\varphi-\nabla_{\alpha}\varphi,\\ \nabla^{{0,1}^*}\colon \ \Gamma(\Lambda^{0,1}M\otimes\Sigma_{r}M) \longrightarrow \Gamma(\Sigma_{r}M), \qquad \nabla^{{0,1}^*}(\alpha\otimes \varphi)=(\delta\alpha)\varphi-\nabla_{\alpha}\varphi. \end{gather*} We thus obtain for the corresponding Laplacians \begin{gather}\label{laplac} \nabla^{{1,0}^*}\nabla^{1,0}\varphi =\nabla^{{1,0}^*}(e^-_j\otimes \nabla_{e^+_j}\varphi)=- \nabla_{e^-_j}\nabla_{e^+_j}\varphi, \end{gather} since $\delta e^-_j=0$, as the basis is parallel at the given point, and $g(\cdot, e^-_j)\in\Lambda^{1,0}M$. Analogously, or by conjugation, we have $\nabla^{{0,1}^*}\nabla^{0,1}\varphi =- \nabla_{e^+_j}\nabla_{e^-_j}\varphi$. We now prove~\eqref{sl1}; the identity \eqref{sl2} is obtained by a similar computation: \begin{gather*} 2\nabla^{{1,0}^*}\nabla^{1,0} \overset{\eqref{laplac}}{=}-2g(e_i,e_j)\nabla_{e_i^-}\nabla_{e_j^+}=(e_i\cdot e_j + e_j\cdot e_i) \cdot \nabla_{e_i^-}\nabla_{e_j^+}\\ \hphantom{2\nabla^{{1,0}^*}\nabla^{1,0}}{} =D^+D^- + e_j\cdot e_i\cdot (\nabla_{e_j^+}\nabla_{e_i^-} - \mathcal{R}^A_{e_j^+,e_i^-})=D^+D^- + D^-D^+ + e_j^-\cdot e^+_i\cdot \mathcal{R}^A_{e_i^-, e_j^+}\\ \hphantom{2\nabla^{{1,0}^*}\nabla^{1,0}}{} \overset{\eqref{kericspinc-+}}{=}D^2+e^-_j\cdot \left( \frac{1}{2}\mathrm{Ric}(e_j^+)+\frac{S}{4m}\frac{q}{p}e_j^+\right)\\ \hphantom{2\nabla^{{1,0}^*}\nabla^{1,0}}{} \overset{\eqref{kecontr1},~\eqref{kecontr3}}{=}D^2 -\frac{1}{2}\left(\frac{S}{2}+i\rho\right)-\frac{m-r}{2m}\frac{q}{p}S.\tag*{\qed} \end{gather*} \renewcommand{\qed}{} \end{proof} \begin{thm}\label{estimke} Let $(M^{2m}, g, J)$ be a compact K\"ahler--Einstein manifold of index $p$ and positive scalar curvature $S$, carrying the ${\mathop{\rm spin}^c}$ structure given by $\mathcal{L}^q$ with $q+p\in 2\mathbb{Z}$, where $\mathcal{L}^p=K_M$. We assume that $p\geq |q|$. Then, for each $r\in\{0, \dots, m\}$, any eigenvalue $\lambda_r$ of $D^2|_{\Gamma(\Sigma_r M )}$ satisfies the inequality \begin{gather}\label{eqestimke} \lambda_r\geq e(r)\frac{S}{2}, \end{gather} where \begin{gather*} e\colon \ [0,m]\to\mathbb{R}, \qquad e(x)=\begin{cases} \displaystyle e_1(x)=\frac{m-x}{m}\left(1+\frac{q}{p}\right), & \displaystyle\text{if} \ \ x\leq \left(1+\frac{q}{p}\right)\frac{m}{2}, \\ \displaystyle e_2(x)=\frac{x}{m}\left(1-\frac{q}{p}\right), & \displaystyle\text{if} \ \ x\geq \left(1+\frac{q}{p}\right)\frac{m}{2}. \end{cases} \end{gather*} Moreover, equality is attained if and only if the corresponding eigenspinor $\varphi_r\in\Gamma(\Sigma_r M)$ is an antiholomorphic spinor: $\nabla^{1,0}\varphi_r=0$, if $r\leq \left(1+\frac{q}{p}\right)\frac{m}{2}$, respectively a holomorphic spinor: $\nabla^{0,1}\varphi_r=0$, if $r \geq \big(1+\frac{q}{p}\big)\frac{m}{2}$. \end{thm} \begin{proof} First we notice that our assumption $|q|\leq p$ implies that the lower bound in~\eqref{eqestimke} is non-negative and that $0\leq \big(1+\frac{q}{p}\big)\frac{m}{2}\leq m$.
The formulas~\eqref{sl1} and~\eqref{sl2} applied to $\varphi_r$ yield, after taking the scalar product with $\varphi_r$ and integrating over $M$, the following inequalities \begin{gather*}\lambda_r\geq \frac{m-r}{m}\left(1+\frac{q}{p}\right)\frac{S}{2},\qquad \lambda_r\geq \frac{r}{m}\left(1-\frac{q}{p}\right)\frac{S}{2},\end{gather*} and equality is attained if and only if the corresponding eigenspinor $\varphi_r$ satisf\/ies $\nabla^{1,0}\varphi_r=0$, resp.~$\nabla^{0,1}\varphi_r=0$. Hence, for any $0\leq r\leq m$ we obtain the following lower bound: \begin{gather*} \lambda_r\geq \max\left(\frac{m-r}{m}\left(1+\frac{q}{p}\right),\frac{r}{m}\left(1-\frac{q}{p}\right)\right)\frac{S}{2}= e(r)\frac{S}{2}.\tag*{\qed} \end{gather*} \renewcommand{\qed}{} \end{proof} \begin{Remark}\label{comp} Let us denote $\frac{q}{p}\cdot\frac{m+1}{2}+\frac{m-1}{2}$ by $b$. Comparing the estimate given by Theorem~\ref{estimke} with the estimate from Proposition~\ref{estimgen}, we obtain for $r\leq b$ \begin{gather*}e(r)-a_1(r)=\frac{r\big((m+1)\frac{q}{p}+m-1-2r\big)}{m(2r+1)}=-\frac{2r(r-b)}{m(2r+1)}.\end{gather*} Hence, for $r \leq b$, we have $e(r)\geq a_1(r)$, with equality if\/f $r=0$ or $r=b\in\mathbb{N}$. Similarly, for $r\geq b+1$, we compute \begin{gather*}e(r)-a_2(r)=\frac{2(m-r)(r-b-1)}{m(2m-2r+1)}.\end{gather*} Hence, for $r\geq b+1$, we have $e(r)\geq a_2(r)$, with equality if\/f $r=b+1\in\mathbb{N}$ or $r=m$. \end{Remark} Theorem~\ref{estimke} implies the global lower bound for the eigenvalues of the ${\mathop{\rm spin}^c}$ Dirac operator acting on the whole spinor bundle in Theorem \ref{globestimke}. We are now ready to prove this result. \begin{proof}[Proof of Theorem~\ref{globestimke}] \hspace{0.01cm} Since the lower bound established in Theorem~\ref{estimke} decreases on $\big(0,\big(1+\frac{q}{p}\big)\frac{m}{2}\big)$ and increases on $\big(\big(1+\frac{q}{p}\big)\frac{m}{2},m\big)$, we obtain the following global estimate \begin{gather*}\lambda\geq e\left(\left(1+\frac{q}{p}\right)\frac{m}{2}\right)\frac{S}{2}=\frac{1}{2}\left(1-\frac{q^2}{p^2}\right)\frac{S}{2}.\end{gather*} However, this estimate is not sharp. Otherwise, this would imply that $\big(1+\frac{q}{p}\big)\frac{m}{2}\in\mathbb{N}$ and the limiting eigenspinor would be, according to the characterization of the equality case in Theo\-rem~\ref{estimke}, both holomorphic and antiholomorphic, hence parallel and, in particular, harmonic. This fact together with the Lichnerowicz--Schr\"odinger formula~\eqref{sl} and the fact that the scalar curvature is positive leads to a contradiction. We now assume that there exists an $r\in\mathbb{N}$, such that $b<r< \big(1+\frac{q}{p}\big)\frac{m}{2}$ and the equality in~\eqref{eqestimke} is attained. We obtain a contradiction as follows. Let $\varphi_r$ be the corresponding eigenspinor: $D^2\varphi_r=e_1(r)\frac{S}{2}\varphi_r$ and $\nabla^{1,0}\varphi_r=0$. Then $D^+\varphi_r\in\Sigma_{r+1}M$ is also an eigenspinor of $D^2$ to the eigenvalue $e_1(r)\frac{S}{2}$ (note that $D^+\varphi_r\neq 0$, otherwise $\varphi_r$ would be a harmonic spinor and we could conclude as above). However, for all $r>b$, the strict inequality $e_2(r+1)>e_1(r)$ holds. Since $r+1>\big(1+\frac{q}{p}\big)\frac{m}{2}$, this contradicts the estimate \eqref{eqestimke}. The same argument as above shows that there exists no $r\in\mathbb{N}$, such that $\big(1+\frac{q}{p}\big)\frac{m}{2}<r<b+1 $ and the equality in \eqref{eqestimke} is attained.
Hence, we obtain the following global estimate \begin{gather*} \lambda\geq e_1(b)\frac{S}{2}=e_2(b+1)\frac{S}{2}=\frac{m+1}{2m}\left(1-\frac{q^2}{p^2}\right)\frac{S}{2}=\left(1-\frac{q^2}{p^2}\right)(m+1)^2. \end{gather*} According to Theorem~\ref{estimke}, the equality is attained if and only if $b \in \mathbb N$ and the corresponding eigenspinors $\varphi_{b}\in\Gamma(\Sigma_{b} M)$ and $\varphi_{b+1}\in\Gamma(\Sigma_{b+1}M)$ to the eigenvalue $\big(1-\frac{q^2}{p^2}\big)(m+1)^2$ are antiholomorphic resp. holomorphic spinors: $\nabla^{1,0}\varphi_{b}=0$, $\nabla^{0,1}\varphi_{b+1}=0$. In particular, this implies $D^-\varphi_{b}=0$ and $D^+\varphi_{b+1}=0$. By Remark~\ref{comp}, we have: $e_1(b)=a_1(b)$ and $e_2(b+1)=a_2(b+1)$. Hence, the characterization of the equality case in Proposition~\ref{estimgen} yields $T_{b}\varphi_{b}=0$ and $T_{b+1}\varphi_{b+1}=0$, which further imply \begin{gather}\label{twis1} \nabla_X \varphi_{b}=-\frac{1}{2(b+1)}X^-\cdot D^+\varphi_{b}=-\frac{1}{2(b+1)}X^-\cdot D\varphi_{b}, \\ \nabla_X \varphi_{b+1}=-\frac{1}{2(m-b)}X^+\cdot D^-\varphi_{b+1}=-\frac{1}{2(m-b)}X^+\cdot D\varphi_{b+1}.\nonumber \end{gather} We now show that the spinors $\varphi_{b}+\frac{1}{(m+1)\left(1+\frac{q}{p}\right)}D\varphi_{b}\in\Gamma(\Sigma_{b} M\oplus \Sigma_{b+1} M)$ and $\varphi_{b+1}+\frac{1}{(m+1)\left(1-\frac{q}{p}\right)}D\varphi_{b+1}\in\Gamma(\Sigma_{b+1} M\oplus \Sigma_{b} M)$ are K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinors. Note that for $q=0$ (corresponding to the spin case), it follows that $\varphi_{b}+\frac{1}{m+1}D\varphi_{b}, \varphi_{b+1}+\frac{1}{m+1}D\varphi_{b+1}\in\Gamma(\Sigma_{b} M\oplus \Sigma_{b+1} M)$ are eigenspinors of the Dirac operator corresponding to the smallest possible eigenvalue $m+1$, i.e., K\"ahlerian Killing spinors. From \eqref{twis1} it follows \begin{gather}\label{twiss1} \nabla_X \varphi_{b}=-X^-\cdot \frac{1}{(m+1)\left(1+\frac{q}{p}\right)} D\varphi_{b}. \end{gather} Applying \eqref{nablax+} to $\varphi_{b}$ in this case for ${\mathop{\rm Ric}}=\frac{S}{2m}g=2(m+1)g$ and $F_A=\frac{q}{p}\frac{S}{2m}i\Omega=2(m+1)\frac{q}{p}i\Omega$, we get \begin{gather}\label{twiss2} \nabla_X (D^+\varphi_{b})=-(m+1)\left(1+\frac{q}{p}\right)X^+\cdot \varphi_{b}. \end{gather} According to the def\/ining equation \eqref{KKSSdefinition} of a K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinor, equa\-tions~\eqref{twiss1} and~\eqref{twiss2} imply that the spinor $\varphi_{b}+\frac{1}{(m+1)\big(1+\frac{q}{p}\big)}D\varphi_{b}\in\Gamma(\Sigma_{b} M\oplus \Sigma_{b+1} M)$ is a K\"ahlerian Killing~${\mathop{\rm spin}^c}$ spinor. A similar computation yields that $\varphi_{b+1}+\frac{1}{(m+1)\big(1-\frac{q}{p}\big)}D\varphi_{b+1}$ is a K\"ahlerian Killing~${\mathop{\rm spin}^c}$ spinor. Conversely, if $\varphi_{b}+\varphi_{b+1}\in \Gamma(\Sigma_{b}M\oplus \Sigma_{b+1} M)$ is a K\"ahlerian Killing~${\mathop{\rm spin}^c}$ spinor, then according to~\eqref{kkseigen2}, $\varphi_{b}$ and $\varphi_{b+1}$ are eigenspinors of $D^2$ to the eigenvalue $4(m-b)(b+1)=\big(1-\frac{q^2}{p^2}\big)(m+1)^2$. This concludes the proof. 
\end{proof} \begin{Remark} If $q=0$, which corresponds to the spin case, the assumption $p \geq \vert q \vert = 0$ is trivial and we recover from Theorem~\ref{estimke} and Theorem~\ref{globestimke} Kirchberg's estimates on K\"ahler--Einstein spin manifolds: the lower bound \eqref{kirchoddeven} for $m$ odd, namely $\lambda^2\geq \frac{m+1}{4m}S=e\big(\frac{m+1}{2}\big)\frac{S}{2}$, and the lower bound~\eqref{kirchke} for $m$ even, namely $\lambda^2\geq \frac{m+2}{4m}S=e\big(\frac{m}{2}+1\big)\frac{S}{2}$. In the latter case, when~$m$ is even, the equality in \eqref{global} cannot be attained, as $b=\frac{m}{2}-\frac{1}{2}\notin\mathbb{N}$. Also for $r=\frac{m}{2}$ the inequality~\eqref{eqestimke} is strict, since otherwise it would imply, according to the characterization of the equality case in Theorem~\ref{estimke}, that the corresponding eigenspinor $\varphi\in\Sigma_{\frac{m}{2}}M$ is parallel, in contradiction to the positivity of the scalar curvature. Note that the same argument as in the proof of Theorem~\ref{globestimke} shows that there cannot exist an eigenspinor~$\varphi\in \Sigma_{\frac{m}{2}}M$ of $D^2$ to an eigenvalue strictly smaller than the lowest bound for $r=\frac{m}{2}\pm 1$, since otherwise~$D^+\varphi$ and~$D^-\varphi$ would either be eigenspinors or would vanish, leading in both cases to a contradiction. Hence, from the estimate~\eqref{eqestimke} and the fact that the function~$e_1$ decreases on $\big(0,\big(1+\frac{q}{p}\big)\frac{m}{2}\big)$ and~$e_2$ increases on $\big(\big(1+\frac{q}{p}\big)\frac{m}{2},m\big)$, it follows that the lowest possible bound for $\lambda^2$ in this case is given by $e_1\big(\frac{m}{2}-1\big)\frac{S}{2}=e_2\big(\frac{m}{2}+1\big)\frac{S}{2}=\frac{m+2}{4m}S$. If $q=- p$ (resp.~$q = p$), which corresponds to the canonical (resp.\ anti-canonical) ${\mathop{\rm spin}^c}$ structure, the lower bound in Theorem~\ref{globestimke} equals $0$ and is attained by the parallel spinors in~$\Sigma_{0}M$ (resp.~$\Sigma_{m}M$), cf.~\cite{Moro1}. \end{Remark} \section{Harmonic forms on limiting K\"ahler--Einstein manifolds} In this section we give an application for the eigenvalue estimate of the ${\mathop{\rm spin}^c}$ Dirac operator established in Theorem~\ref{globestimke}. Namely, we extend to ${\mathop{\rm spin}^c}$ spinors the result of A.~Moroianu \cite{am} stating that the Clif\/ford multiplication between a harmonic ef\/fective form of nonzero degree and a~K\"ahlerian Killing spinor vanishes. As above, $(M^{2m}, g)$ denotes a $2m$-dimensional K\"ahler--Einstein compact manifold of index $p$ and normalized scalar curvature $4m(m+1)$, which carries the ${\mathop{\rm spin}^c}$ structure given by $\mathcal{L}^q$ with $q+p\in 2\mathbb{Z}$, where $\mathcal{L}^p=K_M$. We call $M$ a {\it limiting mani\-fold} if equality in \eqref{global} is achieved on $M$, which is by Theorem~\ref{globestimke} equivalent to the existence of a~K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinor in $\Sigma_{r-1}M\oplus \Sigma_{r} M$, where $r-1=\frac{q}{p}\cdot\frac{m+1}{2}+\frac{m-1}{2}\in\mathbb{N}$. Let $\psi=\psi_{r-1}+\psi_{r}\in \Gamma(\Sigma_{r-1} M \oplus \Sigma_{r} M)$ be such a spinor, i.e., $\Omega \cdot \psi_{r-1} = i (2r-2-m) \psi_{r-1}$, $\Omega \cdot \psi_{r} = i (2r-m) \psi_{r}$ and the following equations are satisf\/ied \begin{gather*} \nabla_{X^+}\psi_{r} =- X^+ \cdot \psi_{r-1},\qquad \nabla_{X^-} \psi_{r-1} =- X^-\cdot \psi_{r}.
\end{gather*} By \eqref{kkseigen1}, we have \begin{gather*}D\psi_{r}=2(m-r+1)\psi_{r-1}, \qquad D\psi_{r-1}=2r\psi_{r}.\end{gather*} Recall that a form $\omega$ on a K\"ahler manifold is called {\it effective} if $\Lambda \omega = 0$, where $\Lambda$ is the adjoint of the operator $L\colon\Lambda^{*} M \longrightarrow \Lambda^{*+2} M $, $L(\omega):=\omega\wedge\Omega$. More precisely, $\Lambda$ is given by the formula: $\Lambda = -2 \overset{2m}{\underset{j=1}{\sum}} e_j^+ \lrcorner e_j^- \lrcorner$. Moreover, one can check that \begin{gather*} (\Lambda L - L \Lambda ) \omega = (m-t) \omega, \qquad \forall\, \omega \in \Lambda^t M. \end{gather*} \begin{lem} Let $\psi=\psi_{r-1}+\psi_{r}\in\Gamma(\Sigma_{r-1} M \oplus \Sigma_{r} M)$ be a K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinor and~$\omega$ a harmonic effective form of type $(k, k')$. Then, we have \begin{gather}\label{domeg1} D(\omega\cdot\psi_{r})=2(-1)^{k+k'} (m-r+1-k') \omega\cdot\psi_{r-1}, \\ \label{domeg2} D(\omega\cdot\psi_{r-1})=2(-1)^{k+k'} (r-k) \omega\cdot\psi_{r}. \end{gather} \end{lem} \begin{proof} The following general formula holds for {\color{black}{any form}} $\omega$ of degree $\text{deg}(\omega)$ and any spinor $\varphi$ \begin{gather*}D(\omega\cdot\varphi)=(d\omega+\delta\omega)\cdot\varphi+(-1)^{\text{deg}(\omega)} \omega\cdot D\varphi -2\sum_{j=1}^{2m}(e_j\lrcorner\omega)\cdot \nabla_{e_j}\varphi.\end{gather*} Applying this formula to an ef\/fective harmonic form $\omega$ of type $(k, k')$ and to the components of the K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinor $\psi$, we obtain \begin{gather*} D(\omega\cdot\psi_{r}) = (-1)^{k+k'} \omega\cdot D\psi_{r-1}-2\sum_{j=1}^{2m}(e_j\lrcorner\omega)\cdot \nabla_{e_j}\psi_{r}\\ \hphantom{D(\omega\cdot\psi_{r})}{} =(-1)^{k+k'} 2(m-r+1) \omega\cdot\psi_{r-1}+2\sum_{j=1}^{2m}(e^-_j\lrcorner\omega)\cdot e^+_j\cdot \psi_{r-1}\\ \hphantom{D(\omega\cdot\psi_{r})}{} =2(-1)^{k+k'} \left[(m-r+1) \omega\cdot\psi_{r-1}+\left(\sum_{j=1}^{2m} e^+_j\wedge(e^-_j\lrcorner\omega)\right)\cdot\psi_{r-1}\right]. \end{gather*} Since $\omega$ is ef\/fective, we have for any spinor $\varphi$ that \begin{gather*}(e^-_j\lrcorner\omega)\cdot e^+_j\cdot \varphi=(-1)^{k+k'-1} \big(e^+_j\wedge(e^-_j\lrcorner\omega)+e^+_j\lrcorner e^-_j\lrcorner\omega \big)\cdot \varphi.\end{gather*} Thus, we conclude $D(\omega\cdot\psi_{r}) = 2 (-1)^{k+k'} (m-r+1-k') \omega\cdot \psi_{r-1}$. Analogously we obtain $D(\omega\cdot\psi_{r-1})=2(-1)^{k+k'} (r-k) \omega\cdot\psi_{r}.$ \end{proof} Now, we are able to state the main result of this section, which extends the result of A.~Moroianu mentioned in the introduction to the~${\mathop{\rm spin}^c}$ setting: \begin{thm} \label{eff} On a compact K\"ahler--Einstein limiting manifold, the Clifford multiplication of a~harmonic effective form of nonzero degree with the corresponding K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinor vanishes. \end{thm} \begin{proof} Equations \eqref{domeg1} and \eqref{domeg2} imply that \begin{gather*} D^2 (\omega \cdot \psi) = 4 (r-k)(m-r+1-k') \omega\cdot \psi. \end{gather*} Note that for all values of $k,k' \in\{0,\dots,m\}$ and $r\in\{0,\dots, m+1\}$, either $4 (r-k)(m-r+1-k') \leq 0$, or $4 (r-k)(m-r+1-k') < 4r(m-r+1)$, which for $r=b+1$ is exactly the lower bound obtained in Theorem~\ref{globestimke} for the eigenvalues of~$D^2$. This shows that $\omega\cdot\psi=0$. 
\end{proof} K\"ahler--Einstein manifolds carrying a complex contact structure are examples of odd-di\-men\-sional K\"ahler manifolds with K\"ahlerian Killing ${\mathop{\rm spin}^c}$ spinors in $\Sigma_{r-1} M \oplus\Sigma_r M $ for the ${\mathop{\rm spin}^c}$ structure (described in the introduction) whose auxiliary line bundle is given by $\mathcal L^q$ and $q = r-\ell-1$, where $m=2\ell+1$. Thus, the result of A.~Moroianu is obtained as a special case of Theorem~\ref{eff}. \subsection*{Acknowledgments} The f\/irst named author gratefully acknowledges the f\/inancial support of the Berlin Mathematical School (BMS) and would like to thank the University of Potsdam, especially Christian B\"ar and his group, for their generous support and friendly welcome during summer 2013 and summer 2014. The f\/irst named author thanks also the Faculty of Mathematics of the University of Regensburg for its support and hospitality during his two visits in July 2013 and July 2014. The authors are very much indebted to Oussama Hijazi and Andrei Moroianu for many useful discussions. Both authors thank the editor and the referees for carefully reading the paper and for providing constructive comments, which substantially improved it. \pdfbookmark[1]{References}{ref} \LastPageEnding \end{document}
\begin{document} \title[The Fibonacci sequence via the $\sum -$ transform]{The Fibonacci sequence via the $\sum -$ transform} \author{Dejenie A. Lakew} \address{John Tyler Community College\\ Department of Mathematics} \email{[email protected]} \urladdr{http://www.jtcc.edu} \date{January 1, 2014} \subjclass[2010]{Primary 44A10, 65Q10, 65Q30} \keywords{$\sum -$ transform, discrete differential equations, Fibonacci sequence, infinite order differential equation} \begin{abstract} In this short article, we study several problems described as initial value problems of discrete differential equations and develop a transform method, called the $\sum -$ transform, a discrete version of the continuous Laplace transform, to generate solutions, given by explicit formulas, to these initial value problems. In particular, we show how the method generates the traditionally known numbers called the Fibonacci sequence as the solution to an initial value problem of a discrete differential equation. \end{abstract} \maketitle \section{\protect Introduction} In this short article we introduce a discrete analogue of the continuous Laplace transform, called the $\sum -$ transform or discrete Laplace transform (DLT), to study solutions to some initial value problems of discrete difference equations. The solutions will be sequences of numbers, not necessarily integers, given by closed-form expressions in $n$. The Fibonacci numbers will be a particular case of the general sequences we obtain through this process. \ The Fibonacci numbers are one of the wonders of old mathematics. They appear in several natural settings, such as the arrangement of leaves on trees, the foliage of flowers, the reproduction of some species, etc. These Fibonacci numbers are given by the sequence : \begin{equation*} 1,\text{ }1,\text{ }2,\text{ }3,\text{ }5,\text{ }8,\text{ }13,\text{ }21, \text{ }34,..... \end{equation*} The pattern is that each number is the sum of the two preceding numbers, which can be written as : \begin{equation} \left\{ \begin{array}{c} a_{n+2}=a_{n+1}+a_{n}\text{ \ } \\ \text{\ \ } \\ a_{1}=1,\text{ }a_{2}=1\text{ \ \ for }n=1,\text{ }2,\text{ }3,.... \end{array} \right. \label{Fibeqn} \end{equation} \ We can rewrite equation \ref{Fibeqn} as \begin{equation*} \left\{ \begin{array}{c} \Delta a_{n+1}=a_{n}\text{\ } \\ a_{1}=1,a_{2}=1 \end{array} \right. \end{equation*} We know that the Fibonacci sequence has a formula that generates the numbers. That formula is : \begin{equation} a_{n}=\frac{\left( 1+\sqrt{5}\right) ^{n}-\left( 1-\sqrt{5}\right) ^{n}}{ 2^{n}\sqrt{5}} \label{Fib} \end{equation} In discrete mathematics or sequence courses we ask students to verify, using mathematical induction, that the given formula indeed generates the Fibonacci numbers. It is also a known fact that the quotients of consecutive numbers of the sequence converge: \begin{equation*} \frac{a_{n+1}}{a_{n}}\longrightarrow \frac{1+\sqrt{5}}{2}\text{ as } n\rightarrow \infty \end{equation*} \ We use a powerful method, the $\sum -$ transform or the \textit{discrete version of the Laplace transform} (for more on the transform, see \cite{DL}), which generates solutions to many sequential or discrete initial value problems of difference equations that are prevalent in discrete mathematics and its applications.
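As a quick numerical sanity check of the closed form \eqref{Fib} and of the limit of the consecutive quotients, one may run the following short Python script; it is included only as an illustration (the helper names are ours) and is not used in the development below.
\begin{verbatim}
# Sanity check (illustration only): the closed form
#   a_n = ((1 + sqrt(5))**n - (1 - sqrt(5))**n) / (2**n * sqrt(5))
# reproduces the recursion a_{n+2} = a_{n+1} + a_n with a_1 = a_2 = 1,
# and the quotients a_{n+1}/a_n approach (1 + sqrt(5))/2.
from math import sqrt

def a_closed(n):
    s5 = sqrt(5.0)
    return ((1 + s5)**n - (1 - s5)**n) / (2**n * s5)

def a_recursive(n):
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for n in range(1, 16):
    assert round(a_closed(n)) == a_recursive(n)

print(a_closed(15) / a_closed(14))   # ~1.618033..., the golden ratio
\end{verbatim}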
We will also see how the method is used to find solutions of other kinds of discrete differential equations, such as: \begin{equation*} \left\{ \begin{array}{c} a_{n+1}=\lambda a_{n}+\beta \\ a_{1}=a(1)\text{, }n=1,\text{ }2,\text{ }3,.. \end{array} \right. \end{equation*} \ and \begin{equation*} \left\{ \begin{array}{c} a_{n+2}=a_{n+1}+a_{n} \\ a_{1}=a\left( 1\right) ,\text{ }a_{2}=a\left( 2\right) \text{, }n=1,\text{ } 2,\text{ }3,... \end{array} \right. \end{equation*} \section{\protect The $\sum -$Transform} \begin{definition} Let \begin{equation*} f: \mathbb{N} \rightarrow \mathbb{R} \end{equation*} be a sequence and let $s>0$. \ We define the $\sum -$transform or the discrete Laplace transform of $f$ by \begin{equation*} \ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) :=\dsum\limits_{n=1}^{\infty }f\left( n\right) e^{-sn} \end{equation*} provided the series converges. \end{definition} \begin{theorem} (Existence of the $\sum -$ transform). Let $f: \mathbb{N} \rightarrow \mathbb{R} $ be a sequence such that \begin{equation*} \mid f\left( n\right) \mid \leq \alpha e^{s_{0}n}\text{ for }\alpha >0,\text{ }s_{0}>0 \end{equation*} Then \begin{equation*} \dsum\limits_{n=1}^{\infty }f\left( n\right) e^{-sn} \end{equation*} is absolutely convergent and hence is convergent. Therefore, for such a sequence, the discrete Laplace transform \begin{equation*} \ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) \end{equation*} exists finitely for $s>s_{0}$. \end{theorem} \begin{proof} Since \begin{equation*} \mid \dsum\limits_{n=1}^{\infty }f\left( n\right) e^{-sn}\mid \leq \dsum\limits_{n=1}^{\infty }\alpha e^{\left( s_{0}-s\right) n}=\frac{\alpha }{e^{s-s_{0}}-1}<+\infty \end{equation*} for $s>s_{0}$, the series converges absolutely and hence converges. In particular, any sequence which is polynomial in $n$ satisfies such an exponential bound for every $s_{0}>0$, so sequences which are polynomials in $n$ have convergent $\sum -$ transforms (discrete Laplace transforms). \end{proof} \ \begin{proposition} (Transform of translate of a sequence). For $k\in \mathbb{N} $, \begin{equation*} \ell _{d}\left\{ f\left( n+k\right) \right\} \left( s\right) =e^{ks}\ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) -\dsum\limits_{i=1}^{k}f\left( i\right) e^{\left( k-i\right) s} \end{equation*} \end{proposition} \begin{proof} Let $f: \mathbb{N} \rightarrow \mathbb{R} $ be a sequence.
Then \begin{eqnarray*} \ell _{d}\left\{ f\left( n+k\right) \right\} \left( s\right) &=&\dsum\limits_{n=1}^{\infty }f\left( n+k\right) e^{-sn} \\ &=&\dsum\limits_{m=k+1}^{\infty }f\left( m\right) e^{-s\left( m-k\right) } \\ &=&e^{sk}\dsum\limits_{m=k+1}^{\infty }f\left( m\right) e^{-sm} \\ &=&e^{ks}\dsum\limits_{m=1}^{\infty }f\left( m\right) e^{-sm}-\dsum\limits_{i=1}^{k}f\left( i\right) e^{\left( k-i\right) s} \\ &=&e^{ks}\ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) -\dsum\limits_{i=1}^{k}f\left( i\right) e^{\left( k-i\right) s} \end{eqnarray*} \end{proof} \begin{corollary} \begin{equation*} \ell _{d}\left\{ f\left( n+1\right) \right\} \left( s\right) =e^{s}\ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) -f\left( 1\right) \end{equation*} \end{corollary} \begin{proposition} For $0<a<e^{s}$ we have \begin{equation*} \ell _{d}\left\{ a^{n-1}\right\} \left( s\right) =\frac{1}{e^{s}-a} \end{equation*} \end{proposition} \begin{proof} From the definition, \begin{eqnarray*} \ell _{d}\left\{ a^{n-1}\right\} \left( s\right) &=&\dsum\limits_{n=1}^{\infty }e^{-sn}a^{n-1} \\ &=&\dsum\limits_{n=1}^{\infty }e^{-sn}e^{\left( n-1\right) \ln a} \\ &=&a^{-1}\dsum\limits_{n=1}^{\infty }e^{-sn}e^{n\ln a} \\ &=&a^{-1}\dsum\limits_{n=1}^{\infty }e^{-\left( s-\ln a\right) n} \\ &=&a^{-1}\frac{1}{e^{s-\ln a}-1} \\ &=&\frac{1}{e^{s}-a} \end{eqnarray*} \end{proof} \begin{example} For $a=5$, \begin{equation*} \ell _{d}\left\{ 5^{n-1}\right\} \left( s\right) =\frac{1}{e^{s}-5} \end{equation*} \end{example} \begin{definition} Let $f: \mathbb{N} \rightarrow \mathbb{R} $ be a sequence. The discrete derivative of $f$, denoted $\triangle f$, is defined by \begin{equation*} \triangle f\left( n\right) :=f\left( n+1\right) -f\left( n\right) \end{equation*} \end{definition} \begin{proposition} (Transform of a discrete derivative of a sequence). \begin{equation*} \ell _{d}\left\{ \triangle f\left( n\right) \right\} \left( s\right) =\left( e^{s}-1\right) \ell _{d}\left\{ f\left( n\right) \right\} -f\left( 1\right) \end{equation*} \end{proposition} \begin{proof} \begin{equation*} \ell _{d}\left\{ \triangle f\left( n\right) \right\} \left( s\right) =\dsum\limits_{n=1}^{\infty }\triangle f\left( n\right) e^{-sn}=\dsum\limits_{n=1}^{\infty }\left( f\left( n+1\right) -f(n)\right) e^{-sn} \end{equation*} \begin{eqnarray*} &=&\dsum\limits_{n=1}^{\infty }f\left( n+1\right) e^{-sn}-\dsum\limits_{n=1}^{\infty }f\left( n\right) e^{-sn} \\ &=&\ell _{d}\left\{ f\left( n+1\right) \right\} -\ell _{d}\left\{ f\left( n\right) \right\} \\ &=&\left( e^{s}-1\right) \ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) -f\left( 1\right) \end{eqnarray*} \end{proof} \ \ Next we define a discrete convolution operator on sequences which will later be useful in solving discrete initial value problems. \ \begin{definition} Let $f,$ $g: \mathbb{N} \rightarrow \mathbb{R} $ be two sequences. Then the discrete convolution of $f$ and $g$, denoted $ \left( f\ast g\right) \left( n\right) $, is defined by \begin{equation*} \left( f\ast g\right) \left( n\right) :=\dsum\limits_{k=1}^{n-1}f\left( k\right) g\left( n-k\right) \end{equation*} \end{definition} \begin{example} $\left( n\ast 1\right) =\frac{n^{2}-n}{2}$ \end{example} \begin{example} $\left( n\ast n\right) =\frac{n^{3}-n}{6}$ \end{example} \begin{proposition} (Transform of a discrete convolution).
\begin{equation*} \ell _{d}\left\{ \left( f\ast g\right) \left( n\right) \right\} \left( s\right) =\ell _{d}\left\{ f\left( n\right) \right\} \ell _{d}\left\{ g\left( n\right) \right\} \end{equation*} \end{proposition} \begin{proof} From the product of the two series: \begin{equation*} \left( \dsum\limits_{n=1}^{\infty }a_{n}x^{n}\right) \left( \dsum\limits_{n=1}^{\infty }b_{n}x^{n}\right) =\dsum\limits_{n=2}^{\infty }c_{n}x^{n} \end{equation*} where $c_{n}=\dsum\limits_{k=1}^{n-1}a_{k}b_{n-k}$, we have \begin{eqnarray*} \ell _{d}\left\{ \left( f\ast g\right) \left( n\right) \right\} \left( s\right) &=&\dsum\limits_{n=1}^{\infty }\left( f\ast g\right) \left( n\right) e^{-sn} \\ &=&\dsum\limits_{n=2}^{\infty }\left( \dsum\limits_{k=1}^{n-1}f\left( k\right) g\left( n-k\right) \right) e^{-sn} \\ &=&\left( \dsum\limits_{n=1}^{\infty }f\left( n\right) e^{-sn}\right) \left( \dsum\limits_{n=1}^{\infty }g\left( n\right) e^{-sn}\right) \\ &=&\ell _{d}\left\{ f\left( n\right) \right\} \ell _{d}\left\{ g\left( n\right) \right\} \end{eqnarray*} \end{proof} \ \begin{corollary} \begin{equation*} \ell _{d}\left\{ \dsum\limits_{k=1}^{n-1}f\left( k\right) \right\} \left( s\right) =\frac{\ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) }{e^{s}-1} \end{equation*} \end{corollary} \ \begin{proof} This follows from the fact that, choosing $g\equiv 1$, we have \begin{equation*} \left( f\ast g\right) \left( n\right) =f\left( n\right) \ast 1=\dsum\limits_{k=1}^{n-1}f\left( k\right) \end{equation*} and $\ell _{d}\left\{ 1\right\} \left( s\right) =\frac{1}{e^{s}-1}$. Taking the transform of both sides and applying the previous proposition, we obtain the result. \end{proof} \begin{proposition} For a sequence $f: \mathbb{N} \rightarrow \mathbb{R} $, \begin{equation*} \ell _{d}\left\{ nf\left( n\right) \right\} \left( s\right) =-\frac{d}{ds} \left( \ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) \right) \end{equation*} \end{proposition} \begin{proof} \begin{eqnarray*} \frac{d}{ds}\left( \ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) \right) &=&\frac{d}{ds}\dsum\limits_{n=1}^{\infty }f\left( n\right) e^{-sn} \\ &=&\dsum\limits_{n=1}^{\infty }\left( -nf\left( n\right) e^{-sn}\right) \\ &=&-\ell _{d}\left\{ nf\left( n\right) \right\} \left( s\right) \end{eqnarray*} \end{proof} \begin{corollary} For $k\in \mathbb{N} $, \begin{equation*} \ell _{d}\left\{ n^{k}f\left( n\right) \right\} \left( s\right) =\left( -1\right) ^{k}\frac{d^{k}}{ds^{k}}\ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) \end{equation*} \end{corollary} \begin{remark} By taking $f\equiv 1$, we get the relation: \begin{eqnarray*} \ \ell _{d}\left\{ n^{k}\right\} \left( s\right) &=&\left( -1\right) ^{k} \frac{d^{k}}{ds^{k}}\ell _{d}\left\{ 1\right\} \left( s\right) \\ &=&\left( -1\right) ^{k}\frac{d^{k}}{ds^{k}}\left( \frac{1}{e^{s}-1}\right) \end{eqnarray*} \end{remark} \section{IVPs of discrete differential equations.} In this section we solve initial value problems of discrete differential equations using the $\sum -$ transform; the Fibonacci numbers are treated in the next section. \begin{proposition} (\cite{DL}) \begin{equation*} \ell _{d}\left\{ \frac{1}{n}\right\} \left( s\right) =s-\ln \left( e^{s}-1\right) \end{equation*} \ for $s>0$. \end{proposition} \begin{proposition} For $n\geq 2$, the IVP : \begin{equation*} \left\{ \begin{array}{c} \triangle f\left( n\right) =\frac{1}{n^{2}} \\ f\left( 2\right) =2 \end{array} \right.
\end{equation*} has solution given by \begin{equation*} f\left( n\right) =1+\dsum\limits_{k=1}^{n-1}\frac{1}{k^{2}} \end{equation*} \end{proposition} \begin{proof} Re-writing the difference equation as : $n\triangle f\left( n\right) =\frac{1 }{n}$, taking the transform of both sides and using corollary $3.3$ we get \begin{equation*} \frac{d}{ds}\ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) + \frac{e^{s}}{e^{s}-1}\ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) =-\frac{s-\ln \left( e^{s}-1\right) }{e^{s}-1}. \end{equation*} Again solving for $\ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) $, we have \begin{equation*} \ \ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) =\frac{1}{ e^{s}-1}-\frac{1}{e^{s}-1}\int \left( s-\ln \left( e^{s}-1\right) \right) ds. \end{equation*} \begin{equation*} \Rightarrow \text{ }f\left( n\right) =1-\ell _{d}^{-1}\left\{ \frac{1}{ e^{s}-1}\right\} \ast \ell _{d}^{-1}\left\{ \int \left( s-\ln \left( e^{s}-1\right) \right) ds\right\} \end{equation*} \begin{eqnarray*} &=&1-\left( 1\ast \left( -\frac{1}{n^{2}}\right) \right) \\ &=&1+\dsum\limits_{k=1}^{n-1}\frac{1}{k^{2}}. \end{eqnarray*} \end{proof} \ \ \ \begin{proposition} The second order IVP: \begin{equation*} \left\{ \begin{array}{c} \triangle ^{2}f\left( n\right) =n \\ \text{ \ }f\left( 1\right) =1,\text{ \ }\triangle f\left( 1\right) =2 \end{array} \right. \end{equation*} has solution given by : \begin{equation*} f\left( n\right) =2n-1+\frac{n\left( n-1\right) \left( n-2\right) }{6} \end{equation*} \end{proposition} \begin{proof} First, \begin{eqnarray*} \triangle ^{2}f\left( n\right) &=&\triangle \left( \triangle f\left( n\right) \right) \\ &=&f\left( n+2\right) -2f\left( n+1\right) +f\left( n\right) \end{eqnarray*} and using the initial conditions we get: \begin{equation*} \ell _{d}\left\{ \triangle ^{2}f\left( n\right) \right\} \left( s\right) =\left( e^{2s}-2e^{s}+1\right) \ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) -e^{s}-1. \end{equation*} \begin{equation*} \Rightarrow \text{ \ \ \ \ \ \ \ }\left( e^{s}-1\right) ^{2}\ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) -e^{s}-1=\frac{e^{s}}{\left( e^{s}-1\right) ^{2}} \end{equation*} \begin{equation*} \Rightarrow \text{ \ \ \ \ \ \ \ }\ell _{d}\left\{ f\left( n\right) \right\} \left( s\right) =\frac{1}{\left( e^{s}-1\right) ^{2}}+\frac{e^{s}}{\left( e^{s}-1\right) ^{2}}+\frac{e^{s}}{\left( e^{s}-1\right) ^{4}} \end{equation*} \ Then taking the inverse transform and using convolutions, we get the solution as : \begin{eqnarray*} f\left( n\right) &=&\left( 1\ast 1\right) +n+\frac{n\left( n-1\right) \left( n-2\right) }{6} \\ &=&2n-1+\frac{n\left( n-1\right) \left( n-2\right) }{6} \end{eqnarray*} \end{proof} \ \ \section{ The Fibonacci numbers via the $\sum -$transform.} \begin{proposition} (The main result ) The sequence of numbers: \begin{equation*} 1,\text{ }1,\text{ }2,\text{ }3,\text{ }5,\text{ }8,\text{ }13,\text{ }.... \end{equation*} usually called the Fibonacci sequence are generated by the formula : \begin{equation*} a_{n}=\frac{\left( 1+\sqrt{5}\right) ^{n}-\left( 1-\sqrt{5}\right) ^{n}}{ 2^{n}\sqrt{5}} \end{equation*} where $n\in \mathbb{N} $. \end{proposition} \ \begin{proof} First, we all know, the numbers can be represented in a recursive way by : \begin{equation*} \left\{ \begin{array}{c} a_{n+2}=a_{n+1}+a_{n}\text{ \ for }n=1,2,3,.... \\ \text{\ }a_{1}=1,\text{ }a_{2}=1. \end{array} \right. 
\end{equation*} \ In turn, the above recursive definition can also be described as an initial value problem of a second order discrete difference equation given by : \begin{equation*} \left\{ \begin{array}{c} \Delta ^{2}a_{n}+\Delta a_{n}=a_{n} \\ a_{1}=1,\text{ }a_{2}=1 \end{array} \right. \end{equation*} \ We will see in a moment that this sequence is a special case of a general sequence whose formula will be obtained from the recursive expression : \begin{equation*} \left\{ \begin{array}{c} a_{n+2}=a_{n+1}+a_{n}\text{ \ for }n=1,2,3,.... \\ \text{\ }a_{1}=a(1),\text{ }a_{2}=a(2) \end{array} \right. \end{equation*} \ where $a_{1}=a(1)$ and $a_{2}=a(2)$ are the two given initial conditions. The pattern that will be observed from the latter sequence is that the general term of the sequence will appear as a linear combination of the two initial conditions $a_{1}$ and $a_{2}$: \begin{equation*} a_{n}=\gamma _{n}a_{1}+\beta _{n}a_{2} \end{equation*} \ in which the coefficients $\gamma _{n}$ and $\beta _{n}$ themselves satisfy the Fibonacci recursion. I call these numbers Fibonacci-like numbers. \ Applying the discrete Laplace transform to both sides of the latter recursive equation : \begin{eqnarray*} \ell _{d}\{a_{n+2}\}(s) &=&\ell _{d}\{a_{n+1}+a_{n}\}(s) \\ &=&\ell _{d}\{a_{n+1}\}(s)+\ell _{d}\{a_{n}\}(s). \end{eqnarray*} \ From the results above (see also \cite{DL}) we have transforms of these types: \begin{eqnarray*} \ell _{d}\{a_{n+2}\}(s) &=&e^{2s}\ell _{d}\{a_{n}\}(s)-e^{s}a_{1}-a_{2} \\ \ell _{d}\{a_{n+1}\}(s)+\ell _{d}\{a_{n}\}(s) &=&e^{s}\ell _{d}\{a_{n}\}(s)-a_{1}+\ell _{d}\{a_{n}\}(s) \end{eqnarray*} $\Rightarrow $ \begin{equation*} e^{2s}\ell _{d}\{a_{n}\}(s)-e^{s}a_{1}-a_{2}=e^{s}\ell _{d}\{a_{n}\}(s)-a_{1}+\ell _{d}\{a_{n}\}(s) \end{equation*} Therefore, rearranging, we have : \begin{equation*} \ell _{d}\{a_{n}\}(s)=\frac{a_{1}e^{s}+a_{2}-a_{1}}{e^{2s}-e^{s}-1} \end{equation*} But \begin{equation*} e^{2s}-e^{s}-1=\left( e^{s}-\frac{\left( 1+\sqrt{5}\right) }{2}\right) \left( e^{s}-\frac{\left( 1-\sqrt{5}\right) }{2}\right) \end{equation*} and \begin{eqnarray*} \frac{a_{1}e^{s}+a_{2}-a_{1}}{e^{2s}-e^{s}-1} &=&\frac{a_{1}e^{s}+a_{2}-a_{1} }{\left( e^{s}-\frac{\left( 1+\sqrt{5}\right) }{2}\right) \left( e^{s}-\frac{ \left( 1-\sqrt{5}\right) }{2}\right) } \\ &=&\frac{a_{1}\left( \sqrt{5}-1\right) +2a_{2}}{2\sqrt{5}\left( e^{s}-\frac{ \left( 1+\sqrt{5}\right) }{2}\right) }+\frac{a_{1}\left( \sqrt{5}+1\right) -2a_{2}}{2\sqrt{5}\left( e^{s}-\frac{\left( 1-\sqrt{5}\right) }{2}\right) } \end{eqnarray*} \ Therefore, taking the inverse discrete Laplace transform, we have : \begin{equation*} a_{n}=\ell _{d}^{-1}\left\{ \frac{a_{1}\left( \sqrt{5}-1\right) +2a_{2}}{2 \sqrt{5}\left( e^{s}-\frac{\left( 1+\sqrt{5}\right) }{2}\right) }+\frac{ a_{1}\left( \sqrt{5}+1\right) -2a_{2}}{2\sqrt{5}\left( e^{s}-\frac{\left( 1- \sqrt{5}\right) }{2}\right) }\right\} \end{equation*} \begin{equation*} \ \ \ =\frac{1}{2^{n}\sqrt{5}}\left( \begin{array}{c} \left( a_{1}\left( \sqrt{5}-1\right) +2a_{2}\right) \left( 1+\sqrt{5}\right) ^{n-1} \\ +\left( a_{1}\left( \sqrt{5}+1\right) -2a_{2}\right) \left( 1-\sqrt{5} \right) ^{n-1} \end{array} \right) \end{equation*} \ Therefore, the sequence \begin{equation*} a_{n}=\frac{1}{2^{n}\sqrt{5}}\left( \begin{array}{c} \left( a_{1}\left( \sqrt{5}-1\right) +2a_{2}\right) \left( 1+\sqrt{5}\right) ^{n-1} \\ +\left( a_{1}\left( \sqrt{5}+1\right) -2a_{2}\right) \left( 1-\sqrt{5} \right) ^{n-1} \end{array} \right) \end{equation*} \ is a solution to the recursive
equation of the Fibonacci-like numbers. Rearranging the expression, we get a linear combination of $a_{1}$ and $a_{2}$ as : \begin{eqnarray*} a_{n} &=&\frac{\left( \left( \sqrt{5}-1\right) \left( 1+\sqrt{5}\right) ^{n-1}+\left( \sqrt{5}+1\right) \left( 1-\sqrt{5}\right) ^{n-1}\right) }{2^{n}\sqrt{5}}a_{1} \\ &&+\frac{\left( \left( 1+\sqrt{5}\right) ^{n-1}-\left( 1-\sqrt{5}\right) ^{n-1}\right) }{2^{n-1}\sqrt{5}}a_{2} \end{eqnarray*} \ with the coefficients of $a_{1}$ and $a_{2}$ given by: \begin{equation*} \gamma _{n}=\frac{\left( \left( \sqrt{5}-1\right) \left( 1+\sqrt{5}\right) ^{n-1}+\left( \sqrt{5}+1\right) \left( 1-\sqrt{5}\right) ^{n-1}\right) }{2^{n}\sqrt{5}} \end{equation*} and \begin{equation*} \beta _{n}=\frac{\left( \left( 1+\sqrt{5}\right) ^{n-1}-\left( 1-\sqrt{5} \right) ^{n-1}\right) }{2^{n-1}\sqrt{5}} \end{equation*} \ We note that when $n=1$, the coefficient of $a_{1}$ is $1$ and that of $a_{2}$ is zero, and therefore the term is just $a_{1}$. Likewise, when $n=2$, the coefficient of $a_{1}$ is zero and that of $a_{2}$ is one, and again the term is $a_{2}$. The coefficients themselves form sequences which are Fibonacci-like numbers. \ Coming back to our original question of extracting a formula that generates the Fibonacci sequence, we look at the above general sequence with the two fixed initial conditions : \begin{equation*} a_{1}=1=a_{2} \end{equation*} \ These initial conditions provide the following formula, which generates the well known Fibonacci numbers (\ref{Fib}) that are known to be integers: \begin{equation*} a_{n}=\frac{\left( 1+\sqrt{5}\right) ^{n}-\left( 1-\sqrt{5}\right) ^{n}}{ 2^{n}\sqrt{5}},\text{ }n\in \mathbb{N} \end{equation*} \ \end{proof} \ Sequences obtained with other initial conditions will generate numbers which are not necessarily integers but are still governed by the recursive definition indicated at the beginning.\newline Therefore we have sequences that are Fibonacci-like but not necessarily integer valued. Note that, by induction on the recursion, any integer initial conditions $a_{1}$, $a_{2}$ produce an integer-valued sequence; for instance, $a_{1}=2$, $a_{2}=1$ gives the Lucas numbers $2,\text{ }1,\text{ }3,\text{ }4,\text{ }7,...$, so the Fibonacci sequence is not the only integer-valued sequence of this type. \ \begin{proposition} Let $\lambda \left( \neq 1\right) \in \mathbb{R} $. The solution to the recursive equation: \begin{equation*} \left\{ \begin{array}{c} a_{n+1}=\lambda a_{n}+\beta \\ a\left( 1\right) =a_{1},\text{ }n\in \mathbb{N} \end{array} \right. \end{equation*} is given by \begin{equation*} a_{n}=\left( a_{1}+\frac{\beta }{\lambda -1}\right) \lambda ^{n-1}+\frac{ \beta }{1-\lambda },\text{ }n\in \mathbb{N} \end{equation*} \end{proposition} \ \begin{proof} Using the discrete Laplace transform on both sides of the equation in the proposition, we have: \begin{eqnarray*} \ell _{d}\{a_{n}\} &=&\frac{a_{1}}{e^{s}-\lambda }+\frac{\beta }{\left( e^{s}-1\right) \left( e^{s}-\lambda \right) } \\ &=&\frac{a_{1}}{e^{s}-\lambda }+\frac{\beta }{\left( \lambda -1\right) \left( e^{s}-\lambda \right) }+\frac{\beta }{\left( 1-\lambda \right) \left( e^{s}-1\right) } \\ &=&\frac{\left( a_{1}+\frac{\beta }{\lambda -1}\right) }{e^{s}-\lambda }+ \frac{\beta }{\left( 1-\lambda \right) \left( e^{s}-1\right) } \end{eqnarray*} Then taking the inverse discrete Laplace transform, we have: \begin{equation*} a_{n}=\left( a_{1}+\frac{\beta }{\lambda -1}\right) \lambda ^{n-1}+\frac{ \beta }{1-\lambda },\text{ }n\in \mathbb{N} \end{equation*} \ Note that this is why we restrict $\lambda \neq 1$.
\ The case $\lambda =1$ is handled in the following way: consider the equation : \begin{equation*} a_{n+1}=a_{n}+\beta ,\text{ \ }n=1,\text{ }2,\text{ }3,... \end{equation*} \ Taking the discrete Laplace transform of both sides and solving for $ \ell _{d}\{a_{n}\}$, we get \begin{equation*} \ell _{d}\{a_{n}\}=\frac{a_{1}}{e^{s}-1}+\frac{\beta }{\left( e^{s}-1\right) ^{2} } \end{equation*} Applying the inverse discrete Laplace transform, we have \begin{eqnarray*} a_{n} &=&\ell _{d}^{-1}\left\{ \frac{a_{1}}{e^{s}-1}\right\} +\ell _{d}^{-1}\left\{ \frac{\beta }{\left( e^{s}-1\right) ^{2}}\right\} \\ &=&a_{1}\ell _{d}^{-1}\left\{ \frac{1}{e^{s}-1}\right\} +\beta \ell _{d}^{-1}\left\{ \frac{1}{\left( e^{s}-1\right) ^{2}}\right\} \\ &=&a_{1}+\beta \left( 1\ast 1\right) \\ &=&a_{1}+\beta \left( n-1\right) \end{eqnarray*} \ where $1\ast 1$ is the convolution of the constant sequence $1$ with itself, which is $n-1$. Therefore this case has the solution given by : \begin{equation*} a_{n}=\beta \left( n-1\right) +a_{1},\text{ \ }n\in \mathbb{N} \end{equation*} \end{proof} \section{The $\infty -$order initial value problem (IVP$_{\infty }$)} In this section we investigate the infinite order differential operator $ \sum_{k=0}^{\infty }\frac{D^{k}}{k!}$ with an infinite number of initial conditions, where $D=\frac{d}{dx}$. An infinite order initial value problem for this differential operator can be stated as follows: \begin{equation*} IVP_{\infty }:\left\{ \begin{array}{c} \sum_{k=0}^{\infty }\frac{D^{k}}{k!}f(x)=g(x) \\ f^{\left( j\right) }\left( x_{0}\right) =y_{0,j},\text{ \ }j=0,\text{ }1, \text{ }2,... \end{array} \right. \end{equation*} \begin{proposition} The infinite order IVP$_{\infty }$: \begin{equation*} \left\{ \begin{array}{c} \sum_{k=0}^{\infty }\frac{D^{k}}{k!}f(x)=\cos (x),\text{ \ for }x\in \lbrack 0,\infty ) \\ f^{\left( k\right) }\left( 0\right) =0,\text{ }\forall k\in \mathbb{N} \cup \{0\} \end{array} \right. \end{equation*} has a solution given by \begin{equation*} f(x)=\cos \left( x-1\right) u(x-1), \end{equation*} where $u$ denotes the Heaviside unit step function. \end{proposition} \begin{proof} Using the Laplace transform : \begin{eqnarray*} \int_{0}^{\infty }e^{-sx}\sum_{k=0}^{\infty }\frac{D^{k}}{k!}f(x)dx &=&\int_{0}^{\infty }e^{-sx}\cos \left( x\right) dx \\ &=&\frac{s}{1+s^{2}} \end{eqnarray*} But the left side of the equation becomes : \begin{eqnarray*} \sum_{k=0}^{\infty }\frac{s^{k}}{k!}\int_{0}^{\infty }e^{-sx}f(x)dx &=&F(s)\sum_{k=0}^{\infty }\frac{s^{k}}{k!} \\ &=&e^{s}F(s) \end{eqnarray*} Thus \begin{equation*} e^{s}F(s)=\frac{s}{1+s^{2}} \end{equation*} \ which implies \begin{equation*} F(s)=e^{-s}\frac{s}{1+s^{2}} \end{equation*} Taking the inverse Laplace transform, we get \begin{equation*} f\left( x\right) =\cos (x-1)u(x-1), \end{equation*} which is the required solution defined on the half line $[0,\infty )$. \end{proof} \end{document}
\begin{document} \title[Unique continuation for the BO eq.]{Uniqueness Properties of Solutions to the Benjamin-Ono equation and related models.} \author{C. E. Kenig} \address[C. E. Kenig]{Department of Mathematics\\University of Chicago\\ Chicago, Il. 60637 \\USA.} \email{[email protected]} \author{G. Ponce} \address[G. Ponce]{Department of Mathematics\\ University of California\\ Santa Barbara, CA 93106\\ USA.} \email{[email protected]} \author{L. Vega} \address[L. Vega]{UPV/EHU\\Dpto. de Matem\'aticas\\Apto. 644, 48080 Bilbao, Spain, and Basque Center for Applied Mathematics, E-48009 Bilbao, Spain.} \email{[email protected]} \keywords{Benjamin-Ono equation, unique continuation } \subjclass{Primary: 35Q53. Secondary: 35B05} \begin{abstract} We prove that if $u_1,\,u_2$ are solutions of the Benjamin-Ono equation defined in $ (x,t)\in\mathbb R \times [0,T]$ which agree in an open set $\Omega\subset \mathbb R \times [0,T]$, then $u_1\equiv u_2$. We extend this uniqueness result to a general class of equations of Benjamin-Ono type in both the initial value problem and the initial periodic boundary value problem. This class of 1-dimensional non-local models includes the intermediate long wave equation. Finally, we present a slightly stronger version of our uniqueness results for the Benjamin-Ono equation. \end{abstract} \maketitle \section{Introduction} We consider the initial value problem (IVP) for the Benjamin-Ono (BO) equation \begin{equation}\label{BO} \begin{aligned} \begin{cases} & \partial_t u - \mathcal H \partial_x^2 u + u\partial_x u = 0,\qquad (x,t) \in \, \mathbb R\times \mathbb R,\\ &u(x,0)=u_0(x), \end{cases} \end{aligned} \end{equation} where $u=u(x,t) $ is a real-valued function, and $\mathcal H$ denotes the Hilbert transform \begin{equation}\label{H} \begin{aligned} \mathcal Hf(x)& :=\frac{1}{\pi}\,\mathrm{p.v.}\Big(\frac{1}{x}\ast f\Big)(x) \\ \\ &:=\frac{1}{\pi}\lim_{\epsilon\downarrow 0}\int_{|y|>\epsilon} \frac{f(x-y)}{y}dy=(-i\,\mathrm{sgn}(\xi)\widehat{f}(\xi))^{\vee}(x) \end{aligned} \end{equation} The BO equation was first deduced by Benjamin \cite{Be} and Ono \cite{On} as a model for long internal gravity waves in deep stratified fluids. Later, it was shown to be a completely integrable system (see \cite {AbFo}, \cite {CoWi} and references therein). In particular, real solutions of the IVP \eqref{BO} satisfy infinitely many conservation laws, which provide an a priori estimate for the $H^{n/2}$-norm, $n\in \mathbb Z^+$. \vskip.1in The problem of finding the minimal regularity measured in the Sobolev scale $\,H^s(\mathbb R)$, $\,s\in\mathbb R,$ required to guarantee that the IVP \eqref{BO} is locally or globally well-posed (WP) in $\,H^s(\mathbb R)$ has been extensively studied, see \cite{ABFS}, \cite{Io1}, \cite{Po}, \cite{KoTz1}, \cite{KeKo}, \cite{Ta}, \cite{BuPl} and \cite{IoKe} where global WP was established in $H^0(\mathbb R) = L^2(\mathbb R)$, (for further details and results regarding the well-posedness of the IVP \eqref{BO} we refer to \cite{MoPi} and to \cite{IT} for a different proof of the result in \cite{IoKe}). We remark that a result established in \cite{MoSaTz} (see also \cite{KoTz2}) implies that no well-posedness result in $H^s(\mathbb R), s\in\mathbb R$, for the IVP \eqref{BO} can be established by using solely a contraction principle argument.
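As a simple illustration of \eqref{H}, and of the fact, used later, that the Hilbert transform arises as a boundary value of analytic functions, we recall the standard example (included only for the reader's convenience; it is not needed in the proofs)
\begin{equation*}
\mathcal H\Big(\frac{1}{1+x^2}\Big)(x)=\frac{x}{1+x^2}.
\end{equation*}
This can be verified directly from the principal value integral in \eqref{H} (it is the classical identity between the Poisson and the conjugate Poisson kernels at height $1$), and it reflects the fact that
\begin{equation*}
\frac{1}{1+x^2}+i\,\mathcal H\Big(\frac{1}{1+x^2}\Big)(x)=\frac{1+ix}{1+x^2}=\frac{i}{x+i}
\end{equation*}
is the boundary value on $\mathbb R$ of the function $z\mapsto i/(z+i)$, which is analytic and bounded in the upper half-plane.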
It was first shown in \cite{Io1} and \cite{Io2} that polynomial decay of the data may not be preserved by the solution flow of the BO equation. The results in \cite{Io1} and \cite{Io2} which present some unique continuation properties of the BO equation have been extended to fractional order weighted Sobolev spaces and have shown to be optimal in \cite{FP} and \cite{FLP}. More precisely, using the notation $$ Z_{s,r}:=H^s(\mathbb R)\cap L^2(|x|^{2r}dx),\;\dot{Z}_{s,r}=Z_{s,r}\cap \{f\in L^1(\mathbb R):\widehat {f}(0)=0\}, $$ with $\,s,r>0$ one has the results :\newline (i) \cite{FP} The IVP \eqref{BO} is locally WP in $Z_{s,r}$ for $s\gammaeq r\in [1,5/2)$ and if $\,u\in C([0,T]: Z_{5/2,2})$ is a solution of \eqref{BO} s.t. $u(\cdot, t_j)\in Z_{5/2,5/2}$, $\,j=1,2$ with $t_1,\,t_2\in [0,T],\,t_1\neq t_2$, then $\,u\in C([0,T]: \dot Z_{5/2,2})$.\newline (ii) \cite{FP} The IVP \eqref{BO} is locally WP in $\dot Z_{s,r}$ $s\gammaeq r\in [5/2,7/2)$.\newline (iii) \cite{FP} If $\,u\in C([0,T]:\dot Z_{7/2,3})$ is a solution of \eqref{BO} s.t. $\,\exists\, t_1,\,t_2, \,t_3\in [0,T], \,t_1<t_2<t_3$ with $u(\cdot, t_j)\in Z_{7/2,7/2},\,j=1,2,3 $, then $\,u\equiv 0$.\newline (iv) \cite{FLP} The IVP \eqref{BO} has solutions $\,u\in C([0,T]:\dot Z_{7/2,3}),\,u\not\equiv 0,$ for which $\,\exists\, t_1,\,t_2,\in [0,T], \,t_1<t_2,$ with $u(\cdot, t_j)\in Z_{7/2,7/2}, \,j=1,2$. \vskip.1in Our first main result in this work is the following theorem: \betaegin{equation}gin{theorem}\label{TH1} Let $u_1,\;u_2$ be solutions to the IVP \eqref{BO} for $ (x,t)\in \mathbb R\times [0,T]$ such that \betaegin{equation}gin{equation} \label{m1} u_1,\;u_2\in C([0,T]:H^s(\mathbb R)) \cap C^1((0,T):H^{s-2}(\mathbb R)),\;\;s>5/2. \end{equation} If there exists an open set $\,\Omega {\mathcal S}ubset \mathbb R\times [0,T]$ such that \betaegin{equation}\label{H1a} u_1(x,t)=u_2(x,t),\;\;\;\;(x,t)\in\Omega, \end{equation} then, \betaegin{equation}\label{result1} u_1(x,t)=u_2(x,t),\;\;\;\;(x,t)\in \mathbb R\times [0,T]. \end{equation} In particular, if $u_1$ vanishes in $\Omega$, then $u_1\equiv 0$. \end{theorem} \betaegin{equation}gin{remark} \label{rr1} (i) Under the same hypotheses, Theorem \ref{TH1} applies to solutions of the generalized BO equation \betaegin{equation}\label{gBO} \betaegin{equation}gin{aligned} \partialartial_t u - \mathcal H \partialartial_x^2 u + \partialartial_x f(u) = 0,\qquad (x,t) \in & ~ \mathbb R\times \mathbb R, \end{aligned} \end{equation} with $f:\mathbb R\to\mathbb R$ smooth enough and $f(0)=0$. In particular, it applies for $\,f(u)=u^k,\;k=2,3,4,...$ for which the well posedness of the associated IVP was considered in \cite{ABFS}, \cite{KeTa}, \cite{KeKo}, \cite{KPV1}, \cite{Vent1}, \cite{Vent2}, see also \cite{LiPo}. \vskip.1in (ii) The hypothesis \eqref{m1} guarantees that the solutions satisfy the equation \eqref{BO} point-wise, which will be required in our proof.\vskip.1in (iii) A similar result to that described in Theorem \ref{TH1} for the IVP associated to the generalized Korteweg-de Vries equation \betaegin{equation}\label{gKdV} \betaegin{equation}gin{aligned} \partialartial_t u + \partialartial_x^3 u + \partialartial_x u^{k} = 0,\qquad (x,t) \in & ~ \mathbb R\times \mathbb R,\;k=2,3,...., \end{aligned} \end{equation} was established in \cite{SaSc}, and for some evolution equations of Schr\"odinger type in \cite{Iz}. In both cases, their proofs are based on appropriate forms of the so called Carleman estimates. 
Our proof of Theorem \ref{TH1} is elementary and relies on simple properties of the Hilbert transform as a boundary value of analytic functions. \vskip.1in (iv) We observe that the unique continuation in (iii) before the statement of Theorem \ref{TH1} applies to a single solution of the BO equation but not to any two solutions as in Theorem \ref{TH1}. This is due to the fact that the argument in the proof there depends upon the whole symmetry structure of the BO equation. \vskip.1in (v) Theorem \ref{TH1} can be seen as a corollary of the following linear result whose proof is exactly the one given below for Theorem \ref{TH1} : Assume that $\,k,\,j\in \mathbb Z^+\cup\{0\}$ and that $$ a_m :\mathbb R\times [0,T]\to\mathbb R,\;m=0,1,..., k,\;\;\,\text{and}\,\;\;\,b:\mathbb R\times [0,T]\to\mathbb R $$ are continuous functions with $\,b(\cdot)$ never vanishing on $(x,t)\in \mathbb R\times [0,T],$ and consider the IVP \betaegin{equation}\label{general} \betaegin{equation}gin{aligned} \betaegin{equation}gin{cases} &\displaystyle \partialartial_t w - b(x,t)\,\mathcal H\partialartial_x^j w +{\mathcal S}um_{m=0}^ka_m(x,t)\partialartial_x^mw= 0,\\ \\ &w(x,0)=w_0(x). \end{cases} \end{aligned} \end{equation} \end{remark} \betaegin{equation}gin{theorem}\label{TH3} Let $$\,w\in C([0,T]:H^s(\mathbb R)) \cap C^1((0,T):H^{s-2}(\mathbb R)),\;\,\;\;s>\max\{k;j\}+1/2,$$ be a solution to the IVP \eqref{general}. If there exists an open set $\,\Omega {\mathcal S}ubset \mathbb R\times [0,T]$ such that \betaegin{equation}\label{HA1} w(x,t)=0,\;\;\;\;(x,t)\in\Omega, \end{equation} then, \betaegin{equation}\label{result22} w(x,t)=0\;\;\;\;(x,t)\in \mathbb R\times [0,T]. \end{equation} \end{theorem} \vskip.1in \betaegin{equation}gin{remark}\label{ole} (i) In particular, applying Theorem \ref{TH3} to the difference of two solutions $\,u_1,\,u_2\,$ of the Burgers-Hilbert (BH) equation (see \cite{BiHu}) \betaegin{equation} \label{BH} \partialartial_t u - \mathcal H u + u\partialartial_x u = 0,\qquad (x,t) \in \, \mathbb R\times \mathbb R, \end{equation} one sees that the result in Theorem \ref{TH1}, with $\,s>3/2$, holds for the IVP associated to the BH equation \eqref{BH}. \vskip.1in (ii) The result of Theorem \ref{TH1} extends to solutions of the initial periodic boundary value problem (IPBVP) associated to the generalized BO equation \betaegin{equation}\label{gBO-PBVP} \betaegin{equation}gin{aligned} \betaegin{equation}gin{cases} & \partialartial_t u - \mathcal H \partialartial_x^2 u + \partialartial_x f(u) = 0,\qquad (x,t) \in \mathbb S^1\times \mathbb R,\\ &u(x,0)=u_0(x), \end{cases} \end{aligned} \end{equation} with $ \,f(\cdot)\,$ as in part (i) of this remark. More precisely : \end{remark} \betaegin{equation}gin{theorem}\label{TH2} Let $u_1,\;u_2$ be solutions of the IPBVP \eqref{gBO-PBVP} in $ (x,t)\in \mathbb S^1\times [0,T]$ such that \betaegin{equation}gin{equation} \label{m2} u_1,\;u_2\in C([0,T]:H^s(\mathbb S^1)) \cap C^1((0,T):H^{s-2}(\mathbb S^1)),\;s>5/2. \end{equation} If there exists an open set $\,\Omega {\mathcal S}ubset \mathbb S^1\times [0,T]$ such that \betaegin{equation}\label{HB1} u_1(x,t)=u_2(x,t),\;\;\;\;(x,t)\in\Omega, \end{equation} then, \betaegin{equation}\label{result} u_1(x,t)=u_2(x,t),\;\;\;\;(x,t)\in \mathbb S^1\times [0,T]. \end{equation} In particular, if $u_1$ vanishes in $\Omega$, then $u_1\equiv 0$. \end{theorem} \vskip.1in \betaegin{equation}gin{remark} The well-posedness of the initial IPBVP \eqref{gBO-PBVP} has been studied in \cite{Mo1}, \cite{Mo2} and \cite{MoRi3}. 
\end{remark} \betaigskip Next, we consider the Intermediate Long Wave (ILW) equation \betaegin{equation}\label{ILW} \betaegin{equation}gin{aligned} \partialartial_t u - \mathcal L_\delta \partialartial_x^2 u + \frac1{\delta}\partialartial_x u + u\partialartial_x u = 0,\qquad (x,t) \in & ~ \mathbb R\times \mathbb R, \end{aligned} \end{equation} where $u=u(x,t)$ is a real-valued function, $\,\delta>0\,$ and \betaegin{equation}\label{T} \mathcal L_\delta f(x) :=-\frac1{2\delta}\,\rm{p.v.} \int \rm{coth}\it \left(\frac{\partiali(x-y)}{2\delta}\right)f(y)dy. \end{equation} Note that $\mathcal L_\delta$ is a multiplier operator with $\partialartial_x\mathcal L_\delta$ having symbol \betaegin{equation} \label{symbol} {\mathcal S}igma(\partialartial_x\mathcal L_\delta)=\widehat{\partialartial_x\mathcal L_\delta} =2\partiali \mathbf xi \,\rm{coth}\,(2\partiali \delta \mathbf xi). \end{equation} The ILW equation \eqref{ILW} describes long internal gravity waves in a stratified fluid with finite depth represented by the parameter $\,\delta$, see \cite{KKD}, \cite{Jo}, \cite{JE}. Also, the ILW equation has been proven to be complete integrable, see \cite{KSA} and \cite{KAS}. In \cite{ABFS} it was proven that solutions of the ILW as $\delta \to \infty$ (deep-water limit) converge to solutions of the BO equation with the same initial data. Also, in \cite{ABFS} it was shown that if $u_{\delta}(x,t)$ denotes the solution of the ILW equation \eqref{ILW}, then \betaegin{equation} \label{scaleKdV} v_{\delta}(x,t)=\,\frac{3}{\delta} \,u_{\delta}\betaig(x,\frac{3}{\delta} t\Big) \end{equation} converges as $\delta\to 0$ (shallow-water limit) to the solution of the KdV equation, i.e. \eqref{gKdV} with $k=2$, with the same initial data. For further comments on general properties of the ILW equation we refer to the recent survey \cite{Sa} and references therein. The well-posedness of the IVP associated to the ILW equation \eqref{ILW} was studied in \cite{ABFS} and more recently in \cite{MoVe}. Our next theorem extends the result in Theorem \ref{TH1} to solution of the IVP associated to the ILW\eqref{ILW}: \betaegin{equation}gin{theorem}\label{TH5} Let $u_1,\;u_2$ be solutions to \eqref{ILW} in $ (x,t)\in \mathbb R\times [0,T]$ such that \betaegin{equation}gin{equation} \label{m1a} u_1,\;u_2\in C([0,T]:H^s(\mathbb R)) \cap C^1((0,T):H^{s-2}(\mathbb R)),\;\;s>5/2. \end{equation} If there exists an open set $\,\Omega {\mathcal S}ubset \mathbb R\times [0,T]$ such that \betaegin{equation}\label{H1} u_1(x,t)=u_2(x,t),\;\;\;\;(x,t)\in\Omega, \end{equation} then, \betaegin{equation}\label{result2} u_1(x,t)=u_2(x,t),\;\;\;\;(x,t)\in \mathbb R\times [0,T]. \end{equation} In particular, if $u_1$ vanishes in $\Omega$, then $u_1\equiv 0$. \end{theorem} \betaegin{equation}gin{remark} The observations in (i) and (v) in Remark \ref{rr1} and (ii) in Remark \ref{ole} apply, after some simple modifications, to the ILW equation \eqref{ILW}. \end{remark} \vskip.1in Next, we present the following slight improvement of Theorem \ref{TH1} and Theorem \ref{TH2} : \betaegin{equation}gin{theorem}\label{TH10} Let $u_1,\;u_2$ be solutions to \eqref{BO} in $ (x,t)\in \mathbb R\times [0,T]$ such that \betaegin{equation}gin{equation} \label{m10} u_1,\;u_2\in C([0,T]:H^s(\mathbb R)) \cap C^1((0,T):H^{s-2}(\mathbb R)),\;\;s>5/2. 
\end{equation} If there exists an open set $\,I {\mathcal S}ubset \mathbb R,\;0\in I\,$ such that \betaegin{equation}\label{H10} u_1(x,0)=u_2(x,0),\;\;\;\;\;\;\;\;x\in I, \end{equation} and for each $\,N\in\mathbb Z^+$ \betaegin{equation}gin{equation} \label{H15} \int_{|x|\leq R}\;|\partialartial_tu_1(x,0)-\partialartial_tu_2(x,0)|^2dx\leq c_N\,R^N\;\;\;\;\;\;\text{as}\;\;\;\;\;R \downarrow \,0, \end{equation} then, \betaegin{equation}\label{result11} u_1(x,t)=u_2(x,t),\;\;\;\;(x,t)\in \mathbb R\times [0,T]. \end{equation} \end{theorem} \betaegin{equation}gin{theorem}\label{TH11} Let $u_1,\;u_2$ be solutions of the IPBVP \eqref{gBO-PBVP} in $ (x,t)\in \mathbb S^1\times [0,T]{\mathcal S}imeq \mathbb R/\mathbb Z\times[0,T]$ such that \betaegin{equation}gin{equation} \label{m22} u_1,\;u_2\in C([0,T]:H^s(\mathbb S^1)) \cap C^1((0,T):H^{s-2}(\mathbb S^1)),\;s>5/2. \end{equation} If there exists an open set $\,I {\mathcal S}ubset [-1/2,1/2]$ with $\,0\in I$ such that \betaegin{equation}\label{HB12} u_1(x,0)=u_2(x,0),\;\;\;\;x\in I, \end{equation} and for each $\,N\in\mathbb Z^+$ \betaegin{equation}gin{equation} \label{H20} \int_{|x|\leq R}\,\;|\partialartial_tu_1(x,0)-\partialartial_tu_2(x,0)|^2d\theta\leq c_N\,R^N\;\;\;\;\;\;\text{as}\;\;\;\;\;R \downarrow \,0, \end{equation} then, \betaegin{equation}\label{result23} u_1(x,t)=u_2(x,t),\;\;\;\;(x,t)\in \mathbb S^1\times [0,T]. \end{equation} \end{theorem} \vskip.1in \betaegin{equation}gin{remark} It will be clear form our proof of Theorem \ref{TH10} that a similar argument provides the proof of Theorem \ref{TH11} which will be omitted. \end{remark} The rest of this paper is organized as follows : section 2 contains some preliminary estimates required for Theorem \ref{TH1} as well as its proof. It also includes the modification needed to extend the argument in the proof of Theorem \ref{TH1} from the IVP to the IPBVP to prove Theorem \ref{TH2}. Section 3 contains the proof of Theorem \ref{TH5}, and section 4 consists of the proof of Theorem \ref{TH10}. {\mathcal S}ection{Proof of Theorem \ref{TH1}} To prove Theorem \ref{TH1} we need the following result from complex analysis whose proof follows directly from Schwarz reflection principle: \betaegin{equation}gin{proposition} \label{pro1} Let $I{\mathcal S}ubseteq \mathbb R$ be an open interval, $\,b\in(0,\infty]$ and \betaegin{equation}\label{sets} D_b=\{z=x+iy\in \mathbb C:0<y<b\},\;\;L=\{x+i0\in \mathbb C:x\in I\}. \end{equation} Let $F:D_b\cup L\to\mathbb C$ be a continuous function such that $\,F \betaig|_{D_b}$ is analytic. If $\,F \betaig|_{L}\equiv 0$, then $\,F\equiv 0$. \end{proposition} As a consequence we have \betaegin{equation}gin{corollary} \label{col1} Let $f\in H^s(\mathbb R),\,s>1/2$ be a real valued function. If there exists an open set $I{\mathcal S}ubset \mathbb R$ such that $$f(x)=\mathcal Hf(x)=0,\;\;\;\;\;\;\;\forall \,x\in I, $$ then $f\equiv 0$. \end{corollary} \betaegin{equation}gin{proof} Denoting $U=U(x,y)$ the harmonic extension of $f$ to the upper half-plane $D$, one sees that its harmonic conjugate $V=V(x,y)$ has boundary value $V(x,0)=\mathcal Hf(x)$ with \betaegin{equation} \label{HT} (\widehat{f+i\mathcal H f})(\mathbf xi)=2\,\chi_{[0,\infty)}(\mathbf xi)\,\widehat{f}(\mathbf xi),\;\;\;\;\;\;\;\;\widehat{f}\in L^1(\mathbb R). \end{equation} Thus, $F:=U+iV$ is continuous on $\overline D_{\infty}$ and analytic on $D_{\infty}$ with $\,F \betaig|_{L}\equiv 0$. 
Hence, Proposition \ref{pro1} yields the desired result.
\end{proof}

\begin{proof}[Proof of Theorem \ref{TH1}]
Defining $w(x,t)=(u_1-u_2)(x,t)$ one has that
\begin{equation}\label{eq1}
\partial_tw-\mathcal H \partial_x^2 w+\partial_xu_2\,w+u_1\,\partial_xw=0,\;\;\;(x,t)\in\mathbb R\times[0,T].
\end{equation}
By hypotheses \eqref{m1} and \eqref{H1a} there exist open intervals $I,\,J\subset \mathbb R$ such that
\begin{equation}\label{zeros1}
\begin{aligned}
w(x,t)&=\partial_xw(x,t)\\
&=\partial_tw(x,t)=\partial_x^2w(x,t)=0,\;\;\;\;\;\;\;\;\;\;(x,t)\in I\times J\subset \Omega.
\end{aligned}
\end{equation}
Thus, the equation \eqref{eq1} tells us that
\begin{equation}\label{zeros2}
\mathcal H \partial^2_xw(x,t)=0,\;\;(x,t)\in I\times J\subset \Omega.
\end{equation}
Combining \eqref{zeros1} and \eqref{zeros2} and fixing $t^*\in J$ it follows that
\begin{equation}\label{zeros3}
\partial_x^2w(x,t^*)=\mathcal H \partial^2_xw(x,t^*)=0,\;\;x\in I,
\end{equation}
with $\partial_x^2w(\cdot,t^*),\;\mathcal H \partial^2_xw(\cdot,t^*)\in H^{s-2}(\mathbb R)$, $s-2>1/2$.
\vskip.1in
Therefore, using Corollary \ref{col1} one has that $\partial_x^2w(\cdot,t^*)\equiv 0$; since $w(\cdot,t^*)\in L^2(\mathbb R)$, this implies that $w(\cdot,t^*)\equiv 0$, which completes the proof.
\end{proof}

To extend the previous argument to prove Theorem \ref{TH2} we need the following result from complex analysis:

\begin{proposition}\label{pro2}
Let $J\subset [-\pi, \pi]$ be an open non-empty interval and
$$
B_1(0)=\{z=x+iy\in \mathbb C : |z|<1\},\;\,A=\{ z\in \mathbb C : |z|=1,\,\arg(z)\in J\}.
$$
Let $F: B_1(0)\cup A\to \mathbb C$ be a continuous function such that $F\big|_{B_1(0)}$ is analytic. If $F\big|_A\equiv 0$, then $F\equiv 0$.
\end{proposition}

\begin{proof}
The proof follows from Proposition \ref{pro1} by considering $F\circ T(z)$, where $T$ is a fractional linear transformation mapping the upper half-plane to the unit disk $B_1(0)$.
\end{proof}

\section{Proof of Theorem \ref{TH5}}

First, we shall prove the following result:

\begin{corollary}\label{col11}
Let $f\in H^s(\mathbb R),\,s>3/2$, be a real valued function. If there exists an open set $I\subset \mathbb R$ such that
$$
f(x)=\mathcal L_{\delta}\partial_x f(x)=0,\;\;\;\;\;\;\;\forall \,x\in I,
$$
with $\mathcal L_{\delta}$ as in \eqref{T}, \eqref{symbol}, then $f\equiv 0$.
\end{corollary}

\begin{proof}
We define
\begin{equation}\label{a1}
F(x)=\partial_xf(x)+i \mathcal L_{\delta}\partial_x f(x),\;\;\;\;x\in\mathbb R,
\end{equation}
and consider its Fourier transform
\begin{equation}\label{a2}
\begin{aligned}
\widehat{F}(\xi)&=\widehat{(\partial_xf+i\mathcal L_{\delta}\partial_xf)}(\xi)\\
&=2\pi i \xi \,(1+\coth(2\pi\delta \xi))\,\widehat{f}(\xi)\\
&=2\pi i\xi \Big(1+\frac{e^{2\pi\delta\xi}+e^{-2\pi\delta\xi}}{e^{2\pi\delta\xi}-e^{-2\pi\delta\xi}}\,\Big)\,\widehat{f}(\xi)\\
&=-4\pi i \xi \,\frac{e^{4\pi\delta \xi}}{1-e^{4\pi \delta \xi}}\,\widehat{f}(\xi).
\end{aligned}
\end{equation}
We observe that by considering $\partial_xf$, with $f\in H^s(\mathbb R)$, $s>3/2$, one cancels the singularity at $\xi=0$ introduced by $\coth(2\pi\delta\xi)$. By hypothesis and \eqref{a2} one concludes that $\widehat{F}\in L^1(\mathbb R)$ and has exponential decay for $\xi<0$. Hence,
\begin{equation}\label{a3}
F(x)=\int_{-\infty}^{\infty}\;e^{2\pi i \xi x}\,\widehat{F}(\xi)\,d\xi
\end{equation}
has an analytic extension
\begin{equation}\label{a4}
F(x+iy)=\int_{-\infty}^{\infty}\;e^{2\pi i \xi (x+iy)}\,\widehat{F}(\xi)\,d\xi
\end{equation}
to the strip
$$
D_{2\delta}=\{z=x+iy\in \mathbb C\,:\,0<y<2\delta\},
$$
with $F$ continuous on
$$
\{z=x+iy\,:\,0\leq y<2\delta\},
$$
by the hypothesis on $f$. Now, Proposition \ref{pro1} leads to the desired result.
\end{proof}

\begin{proof}[Proof of Theorem \ref{TH5}]
Once Corollary \ref{col11} is available, the proof of Theorem \ref{TH5} is similar to that given for Theorem \ref{TH1}, and therefore it will be omitted.
\end{proof}

\section{Proof of Theorem \ref{TH10}}

To prove Theorem \ref{TH10} we need an auxiliary lemma:

\begin{lemma}\label{lemma1}
Let $f\in L^2(\mathbb R)$ be a real valued function. If there exists an open set $I \subset \mathbb R,\;0\in I,$ such that
\begin{equation}\label{H12}
f(x)=0,\;\;\;\;x\in I,
\end{equation}
and for each $N\in\mathbb Z^+$
\begin{equation}\label{H11}
\int_{|x|\leq R}\;|\,\mathcal H f(x)|^2dx\leq c_N\,R^N\;\;\;\;\;\;\text{as}\;\;\;\;\;R \downarrow \,0,
\end{equation}
then
\begin{equation}\label{result10}
f(x)=0,\;\;\;\;x\in \mathbb R.
\end{equation}
\end{lemma}

\begin{proof}
Consider the analytic function $F=F(x+iy)$ defined in $\mathbb R\times(0,\infty)$ with boundary values
$$F(x+i0)=-\mathcal Hf(x)+if(x).$$
Since $F\big|_{I}$ is real, we can use the Schwarz reflection principle to find $\widetilde {F}$ analytic in $I\times (-\infty,\infty)$ with $\widetilde{F}=F$ on $I\times [0,\infty)$.
\vskip.07in
We observe that $\Re\,\widetilde{F}(x+i0)=-\mathcal H f(x)$, $x\in I$, with $\mathcal H f\big|_{I}\in C^{\infty}$, by the support property of $f$, and that, by assumption \eqref{H11}, $\partial^j_x\mathcal H f(0)=0$, $j\in\mathbb Z^+\cup\{0\}$. Hence
$$
\frac{\partial^j}{\partial z^j}\,\widetilde {F}(0,0)=0,\;\;\;\;\;\;\;j=0,1,2,\dots
$$
Since all derivatives of the analytic function $\widetilde F$ vanish at the origin, $\widetilde F$ vanishes identically on the connected component of $I\times(-\infty,\infty)$ containing it; hence $F\equiv 0$ in the upper half-plane, and taking boundary values gives $f\equiv 0$, which completes the proof.
\end{proof} \vskip.1in \betaegin{equation}gin{proof}[Proof of Theorem \ref{TH10}] Defining $w(x,t)=(u_1-u_2)(x,t)$ it follows that \betaegin{equation}\label{eq11} \partialartial_tw-\mathcal H \partialartial_x^2 w+\partialartial_xu_1 \,w+u_2\,\partialartial_xw=0,\;\;\;(x,t)\in\mathbb R\times[0,T]. \end{equation} Since $\,w(x,0)=0,\;x\in I$, one has that $\partialartial_x^jw(x,0)=0,\;x\in I$, $\,j\in \mathbb Z^+\cup\{0\}$, and using \eqref{eq11} $$ \mathcal H \partialartial_x^2 w(x,0)=\partialartial_tw(x,0) $$ We now apply the hypothesis \eqref{H11} and Lemma \ref{lemma1} to conclude that $\,\partialartial_x^2w(x,0)=0,\;x\in\mathbb R$. \end{proof} \noindent\underline{\betaf Acknowledgements.} C.E.K. was supported by the NSF grant DMS-1800082. L.V. was supported by an ERCEA Advanced Grant 2014 669689 - HADE, by the MINECO and by BCAM Severo Ochoa excellence accreditation SEV-2013-0323. project MTM2014-53850-P. \betaegin{equation}gin{thebibliography}{9} \betaibitem{ABFS} L. Abdelouhab, J. L. Bona, M. Felland, and J.-C. Saut, \emph{Nonlocal models for nonlinear dispersive waves}, Physica D. {\betaf 40} (1989) 360--392. \betaibitem{AbFo} M. J. Ablowitz and A. S. Fokas, \emph {The inverse scattering transform for the Benjamin-Ono equation, a pivot for multidimensional problems}, Stud. Appl. Math. {\betaf 68} (1983) 1--10. \betaibitem{Be} T. B. Benjamin, \emph{Internal waves of permanent form in fluids of great depth}, J. Fluid Mech. {\betaf 29} (1967) 559--592. \betaibitem{BiHu} J. Biello and J. K. Hunter, \emph{Nonlinear Hamiltonian waves with constant frequency and surface waves on vorticity discontinuities}, Comm. Pure Appl. Math. {\betaf 63} 2009, 303--336. \betaibitem{BuPl} N. Burq and F. Planchon, \emph{On the well-posedness of the Benjamin-Ono equation}, Math. Ann. {\betaf 340} (2008) 497--542. \betaibitem{CoWi} R. R. Coifman and M. Wickerhauser, \emph{The scattering transform for the Benjamin-Ono equation}, Inverse Problems {\betaf 6} (1990) 825--860. \betaibitem{FP} G. Fonseca and G. Ponce, \emph{The IVP for the Benjamin-Ono equation in weighted Sobolev spaces}, J. Funct. Anal. {\betaf 260} (2010) 436--459. \betaibitem{FLP} G. Fonseca, F. Linares, and G. Ponce, \emph{The IVP for the Benjamin-Ono equation in weighted Sobolev spaces II}, J. Funct. Anal. {\betaf 262} (2012) 2031--2049. \betaibitem{GLM} Z. Guo, Y. Lin, and L. Molinet, \emph{Well-posedness in energy space for the periodic modified Benjamin?Ono equation}, J. Diff. Eqs.{\betaf 256} (2014) 2778--2806. \betaibitem{IT} M. Ifrim and D. Tataru, \emph{Well-posedness and dispersive decay of small data solutions for the Benjamin-Ono equation}, pre-print arXiv:1701.08476 \betaibitem{IoKe} A. D. Ionescu and C. E. Kenig, \emph{Global well- posedness of the Benjamin-Ono equation on low-regularity spaces}, J. Amer. Math. Soc. {\betaf 20} (2007) 753--798. \betaibitem{Io1} R. J. Iorio, \emph{On the Cauchy problem for the Benjamin-Ono equation}, Comm. Partial Diff. Eqs. {\betaf 11} (1986) 1031--1081. \betaibitem{Io2} R. J. Iorio, \emph{Unique continuation principle for the Benjamin-Ono equation}, Diff. and Int. Eqs. {\betaf 16} (2003) 1281--1291. \betaibitem{Jo} R. I. Joseph, \emph{Solitary waves in a finite depth fluid}, J. Phys. A 11 (1978) L97. \betaibitem{JE} R. I. Joseph,and R. Egri \emph{Multi-soliton solutions in a finite depth fluid}, J. Phys. A 10 (1977) L225 \betaibitem{Iz} V. Izakov \emph{Carleman type estimates in an anisotropic case and applications,} J. Diff . Eqs. {\betaf 105} (1993) 217--238. \betaibitem{KeKo} C. E. 
Kenig and K. D. Koenig, \emph{On the local well-posedness of the Benjamin-Ono and modified Benjamin-Ono equations}, Math. Res. Letters {\betaf 10} (2003,) 879--895. \betaibitem{KeTa} C. E. Kenig and H. Takaoka, \emph{Global well-posedness of the modified Benjamin-Ono equation with initial data in $H^{1/2}$}, Int. Math. Res. Not. Art. ID 95702 (2006) 1--44. \betaibitem{KPV1} C. E. Kenig, G. Ponce, and L. Vega, \emph{On the generalized Benjamin-Ono equation}, Trans. Amer. Math. Soc. . {\betaf 342} (1994) 155--172. \betaibitem{KoTz1} H. Koch and N. Tzvetkov, \emph{On the local well-posedness of the Benjamin-Ono equation on $H^{s}(\mathbb{R})$}, Int. Math. Res. Not. {\betaf 26} (2003) 1449-1464. \betaibitem{KoTz2} H. Koch and N. Tzvetkov, \emph{Nonlinear wave interactions for the Benjamin-Ono equation.}, Int. Math. Res. Not. {\betaf 30} (2005) 1833--1847. \betaibitem{KSA} Y. Kodama, J. Satsuma and M.J. Ablowitz, \emph{Nonlinear intermediate long-wave equation: analysis and method of solution}, Phys.Rev. Lett. 46 (1981), 687-690. \betaibitem{KAS} Y. Kodama, M.J. Ablowitz and J. Satsuma, \emph{Direct and inverse scattering problems of the nonlinear intermediate long wave equation}, J. Math. Physics 23 (1982), 564-576. \betaibitem{KKD} T. Kubota, D.R.S Ko and L.D. Dobbs, \emph{Weakly nonlinear, long internal gravity wavesin stratified fluids of finite depth}, J. Hydronautics {\betaf 12} (1978), 157-165. \betaibitem{LiPo} F. Linares and G. Ponce, \emph{Introduction to nonlinear dispersive equations}, second edition, Springer New York, 2014. \betaibitem{Mo1} L. Molinet \emph{Global well-posedness in the energy space for the Benjamin-Ono equation on the circle}, Math. Ann. {\betaf 337} (2007) 353--383. \betaibitem{Mo2} L. Molinet \emph{Global well-posedness in $L^2$ for the periodic Benjamin-Ono equation}, Amer. J. Math. {\betaf 130} (2008) 635--683. \betaibitem{Mo3} L. Molinet \emph{Sharp ill-posedness result for the periodic Benjamin-Ono equation}, J. Funct. Anal. {\betaf 257} (2009), 348--3516. \betaibitem{MoPi} L. Molinet and D. Pilod, \emph{The Cauchy problem for the Benjamin-Ono equation in $L^2$ revisited}, Anal. PDE {\betaf 5} (2012) 365--395. \betaibitem{MoRi1} L. Molinet and F. Ribaud, \emph{Well-posedness results for the generalized Benjamin-Ono equation with small initial data}, J. Math. Pures et Appl. {\betaf 83} (2004) 277--311. \betaibitem{MoRi2} L. Molinet and F. Ribaud, \emph{Well-posedness results for the Benjamin-Ono equation with arbitrary large initial data}, Int. Math. Res. Not. {\betaf 70} (2004) 3757--3795. \betaibitem{MoRi3} L. Molinet and F. Ribaud, \emph{Well-posedness in $H^1$ for generalized Benjamin-Ono equations on the circle}, Discrete Contin. Dyn. Syst. {\betaf 23} (2009) 1295--1311. \betaibitem{MoSaTz} L. Molinet, J.C. Saut, and N. Tzvetkov, \emph{Ill-posedness issues for the Benjamin-Ono and related equations}, SIAM J. Math. Anal. {\betaf 33} (2001) 982--988. \betaibitem{MoVe} L. Molinet, and S. Vento, \emph{Improvement of the energy method for strongly nonresonant dispersive equations and applications}, Anal. PDE 8 (2015), no. 6, 1455--1495 \betaibitem{On} H. Ono, \emph{Algebraic solitary waves on stratified fluids}, J. Phy. Soc. Japan {\betaf 39} (1975) 1082--1091. \betaibitem{Po} G. Ponce, \emph{On the global well-posedness of the Benjamin-Ono equation}, Diff. and Int. Eqs. {\betaf 4} (1991) 527--542. \betaibitem{Sa} J.-C. 
Saut, \emph{Benjamin-Ono and intermediate long wave equations : modeling, IST and PDE}, pre-print Fields Institute (2017) \betaibitem{SaSc} J.C. Saut and B.Scheurer, \emph{Unique continuation for evolution equations,} J. Diff. Eqs. \textbf{66} (1987), 118--137. \betaibitem{Ta} T. Tao, \emph{Global well-posedness of the Benjamin-Ono equation on $H^{1}$}, Journal Hyp. Diff. Eqs. {\betaf 1} (2004) 27--49. \betaibitem{Vent1} S. Vento, \emph{Sharp well-posedness results for the generalized Benjamin-Ono equations with higher nonlinearity}, Diff. and Int. Eqs. {\betaf 22} (2009) 425--446. \betaibitem{Vent2} S. Vento, \emph{Well-posedness of the generalized Benjamin-Ono equations with arbitrary large initial data in the critical space}, Int. Math. Res. Not. {\betaf 2} (2010) 297--319. \betaibitem{Wh} G. B. Whitham, \emph{Variational methods and applications to water waves}, Proc.R. Soc. Lond. Ser. A., 299 (1967), 6-25. \end{thebibliography} \end{document}
\begin{document}
\title{Functional affine-isoperimetry and an inverse logarithmic Sobolev inequality \footnote{Keywords: affine isoperimetric inequality, logarithmic Sobolev inequality. 2010 Mathematics Subject Classification: 52A20, }}
\author{S. Artstein-Avidan\thanks{Partially supported by BSF grant No. 2006079 and by ISF grant No. 865/07}, B. Klartag\thanks{Partially supported by an ISF grant and an IRG grant}, C. Sch\"utt and E. Werner\thanks{Partially supported by NSF grant and BSF grant No. 2006079}}
\date{}
\maketitle
\begin{abstract}
We give a functional version of the affine isoperimetric inequality for log-concave functions which may be interpreted as an inverse form of a logarithmic Sobolev inequality for entropy. A linearization of this inequality gives an inverse inequality to the Poincar\'e inequality for the Gaussian measure.
\end{abstract}

\section{Introduction}

There is a general approach to extend invariants of convex bodies to the corresponding invariants of functions \cite{ArtKlarMil, Fradelizi+Meyer, Klar, Milman}. We investigate here the affine surface area and the affine isoperimetric inequality and their corresponding invariants for log-concave functions. The affine isoperimetric inequality corresponds to an inequality that may be viewed as an inverse logarithmic Sobolev inequality for entropy. A linearization of this inequality yields an inverse inequality to a Poincar\'e inequality.

Logarithmic Sobolev inequalities provide upper bounds for the entropy. There is a vast amount of literature on logarithmic Sobolev inequalities and related topics, e.g. \cite{BarKol, BobLed, Car, Gross, LatOle, LYZ2002, Stam}. We quote only the sharp logarithmic Sobolev inequality for the Lebesgue measure on $\mathbb R^{n}$ (see, e.g., \cite{BePe})
\begin{eqnarray}\label{logsob1}
\int_{\operatorname{supp}(f)}|f|^{2} \ln(|f|^2)dx -\left(\int_{\mathbb R^n}|f|^{2}dx\right)\ln\left( \int_{\mathbb R^n}|f|^{2}dx\right) \leq \frac{n}{2}\ln\left(\frac{2}{\pi e n}\int_{\mathbb R^n}\|\nabla f\|^{2}dx \right),
\end{eqnarray}
with equality if and only if $f(x) = (2 \pi)^{-(n/4)} \exp(-\| x - b \|^2/4)$ for a vector $b \in \mathbb R^n$. Here, and throughout the paper, $\|\cdot\|$ denotes the standard Euclidean norm and $\langle \cdot, \cdot\rangle$ denotes the standard scalar product on $\mathbb R^n$. This inequality is directly equivalent to the Gross logarithmic Sobolev inequality \cite{BePe, Gross}
\begin{equation}\label{logsob2}
\int_{\operatorname{supp}(h)} |h|^2 \ln\left(\frac{|h|}{\|h\|_{L^2(\gamma_{n})}}\right) d \gamma_{n} \leq \int_{ \mathbb R^n} \|\nabla h\|^{2}d\gamma_{n},
\end{equation}
where $\gamma_{n}$ is the normalized Gauss measure on $\mathbb R^{n}$, $d\gamma_{n}=(2\pi)^{-\frac{n}{2}}e^{-\frac{\|x\|^{2}}{2}}dx$. Equation (\ref{logsob2}) becomes an equality if and only if $h(x)=c e^{\langle a, x \rangle}$ with $c > 0$ and $a \in \mathbb R^n$.

We will now integrate by parts and rewrite the logarithmic Sobolev inequality as an upper bound for the entropy in terms of the Laplacian of the function. The main result in this note shall be a lower bound for the entropy in terms of the Laplacian, the difference between the two bounds being an interchange of integration and logarithm and the replacement of the arithmetic mean of the eigenvalues of the Hessian by the geometric mean. We shall need some more notation.
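Before introducing this notation, let us illustrate the equality case of (\ref{logsob2}) with a simple one-dimensional computation, included here only as a check. Take $n=1$ and $h(x)=e^{ax}$ with $a\in\mathbb R$. Since $\int_{\mathbb R} e^{tx}\, d\gamma_{1}=e^{t^{2}/2}$, and differentiating this identity in $t$ gives $\int_{\mathbb R} x\, e^{tx}\, d\gamma_{1}=t\, e^{t^{2}/2}$, we obtain
\begin{eqnarray*}
\|h\|_{L^2(\gamma_{1})}^{2} &=& \int_{\mathbb R} e^{2ax}\, d\gamma_{1} \ = \ e^{2a^{2}}, \\
\int_{\mathbb R} |h|^2 \ln\left(\frac{|h|}{\|h\|_{L^2(\gamma_{1})}}\right) d \gamma_{1} &=& \int_{\mathbb R} e^{2ax}\left(ax-a^{2}\right) d\gamma_{1} \ = \ 2a^{2}e^{2a^{2}}-a^{2}e^{2a^{2}} \ = \ a^{2}e^{2a^{2}}, \\
\int_{\mathbb R} \|\nabla h\|^{2}\, d\gamma_{1} &=& a^{2}\int_{\mathbb R} e^{2ax}\, d\gamma_{1} \ = \ a^{2}e^{2a^{2}},
\end{eqnarray*}
so that both sides of (\ref{logsob2}) coincide, in accordance with the equality condition stated above.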
Let $(X, \mu)$ be a measure space and let $f: X \rightarrow \mathbb R$ be a measurable function. Denote the support of $f$ by $\operatorname{supp}(f)= \{x: f(x) \neq 0\}$. Then the {\em entropy} of $f$, $\operatorname{Ent}(f) $, is defined (whenever it makes sense) by \begin{equation}\label{entropy} \operatorname{Ent}(f) = \int_{\operatorname{supp}(f)} |f| \ln(|f|) d \mu - \|f\|_{L^1(X, \mu)} \ln \|f\|_{L^1(X, \mu)} = \int_{\operatorname{supp}(f)} f \ln\left(\frac{|f|}{\|f\|_{L^1(X, \mu)}}\right) d \mu, \end{equation} where $\|f\|_{L^1(X, \mu)} = \|f\|_{L^1( \mu)}= \int_X |f| d \mu$. In particular, if $\|f\|_{L^1(X, \mu)} = 1$, $$ \operatorname{Ent}(f) = \int_{{\hbox{supp}}(f)} |f| \ln(|f|) d \mu . $$ If $f$ is a positive function, we get in (\ref{logsob1}) \begin{eqnarray}\label{logsob3} \operatorname{Ent}(f) &=&\int_{\mathbb R^n}f \ln(f)dx -\left(\int_{\mathbb R^n}fdx\right)\ln\left(\int_{\mathbb R^n}fdx\right) \nonumber \\ &\leq& \frac{n}{2}\ln\left(\frac{2}{\pi e n} \int_{\mathbb R^n}\|\nabla \sqrt{f}\|^{2}dx \right) = \frac{n}{2}\ln\left(\frac{1}{2\pi e n} \int_{\mathbb R^n}\frac{\|\nabla f\|^{2}}{f}dx \right). \end{eqnarray} For a sufficiently smooth function $f$ defined on $\mathbb R^{n}$, we denote the Hessian of $f$ by ${\nabla^2}\left(f\right) = \left(\frac{\partial ^2 f}{\partial x_i \partial x_j} \right)_{i,j=1,\ldots,n}$. Note that \begin{equation}\label{Fisher22} \int_{{{\hbox{supp}} (f)}}\frac{\|\nabla f\|^{2}}{f}dx = \int_{{\hbox{supp}} (f)} f \ \bigg(\operatorname{tr} \left({\nabla^2} \left(-\ln f\right)\right)\bigg)dx. \end{equation} For $f \geq 0$ with $\int f dx=1$, this is the {\em Fisher information}. Equation (\ref{Fisher22}) is easily verified using integration by parts. The logarithmic Sobolev inequality (\ref{logsob3}), together with (\ref{Fisher22}), becomes \begin{equation}\label{LogSob55} \operatorname{Ent}(f)+\ln((2\pi e)^{\frac{n}{2}}) \leq \frac{n}{2}\ln\left[\frac{1}{n} \int_{{\hbox{supp}} (f)} f \ \bigg(\operatorname{tr} \left({\nabla^2} \left(-\ln f\right)\right)\bigg)dx\right]. \end{equation} The main goal in this paper is to prove, for log-concave functions, a converse of inequality (\ref{LogSob55}). A function $f: \mathbb R^n \rightarrow \mathbb R$ is called log-concave if it takes the form $\exp(-\Psi)$ for a convex function $\Psi: \mathbb R^n \rightarrow \mathbb R \cup \{ \infty \}$. We shall usually assume also that the function is upper semi-continuous. This converse log Sobolev inequality is stated in the following theorem. It relates entropy to a new expression, which can be thought of as an affine invariant version of Fisher information. The inequality is obtained by suitably applying and analysing the affine isoperimetric inequality, which, for convex bodies $K$ in $\mathbb R^n$, gives an upper bound for the affine surface area. Affine surface area measures and their related inequalities (see below for the definition and statements) have attracted considerable attention recently e.g. \cite{GaZ, LR2, Lu2, SW2002, Z3}. \begin{theo} Let $f:\mathbb R^{n}\rightarrow [0, \infty)$ be an upper semi-continuous log-concave function which belongs to $C^2({{\cal A}} \def\cb{{\cal B}} \def\cc{{\cal C}l S}upp(f)) {\cal A}} \def\cb{{\cal B}} \def\cc{{\cal C}p L^1(\mathbb R^n, dx)$ and such that $f \ln f $ and $f \ln{\rm det}\left( {\nabla^2} \left(-\ln f\right)\right)\bigg) \in L^1({\hbox{supp}} (f), dx)$. 
Then \begin{eqnarray*} \int_{{\hbox{supp}} (f)} f \ \ln \bigg({\rm det} \left( {\nabla^2}\left(-\ln f\right)\right)\bigg) dx \leq 2 \bigg[ \operatorname{Ent}(f) + \|f\|_{L^1( dx)} \ln(2 \pi e)^\frac{n}{2} \bigg]. \end{eqnarray*} There is equality for $f(x)=C e^{-\langle A x, x \rangle}$, where $C>0$ and $A$ is an $n \times n$ positive-definite matrix of determinant one. \label{thm1} \end{theo} It is important to note the affine invariant nature of Theorem \ref{thm1}. Both the left-hand side and the right-hand side are invariant under volume-preserving linear transformations. This is not the case with the logarithmic Sobolev inequality. The expression on the right-hand side of (\ref{LogSob55}) involves the arithmetric mean $\frac{1}{n}\ \left(\operatorname{tr} \left({\nabla^2} \left(-\ln f\right)\right) \right)$ of the eigenvalues of ${\nabla^2} \left(-\ln f\right)$. The expression on the left-hand side of Theorem \ref{thm1} can be written as $n \ \ln \bigg({\rm det}\left( {\nabla^2}\left(-\ln f\right)\right)\bigg)^\frac{1}{n}$ and involves the geometric mean of the eigenvalues of ${\nabla^2} \left(-\ln f\right)$. Thus, we get from an upper bound for the entropy to a lower bound for the entropy by interchanging integration and logarithm and by replacing the arithmetic mean of the eigenvalues of the Hessian by its geometric mean. As the entropy for the Gaussian random variable $g(x)=\frac{1}{(2\pi)^\frac{n}{2}}e^{-\frac{\|x\|^{2}}{2}}$ is $\operatorname{Ent}(g) = - \ln(2 \pi e)^\frac{n}{2}$, Theorem \ref{thm1} immediately implies the following corollary. \vskip 3mm \begin{cor} Let $f:\mathbb R^{n}\rightarrow\mathbb [0, \infty)$ be a log-concave function such that $ f \in C^2(\mathbb R^n)$, $ \|f\|_{L^1( dx)} =1$ and such that $f \ln f $ and $ f \ln \left({\rm det} {\nabla^2} \left(-\ln f\right)\right) \in L^1({\hbox{supp}} (f), dx)$. Then $$ \int_{{\hbox{supp}} (f)} f \ln \bigg( {\rm det} \left( {\nabla^2} \left(-\ln f\right)\right)\bigg) dx \leq 2\big( \operatorname{Ent}(f) -\operatorname{Ent}(g)\big), $$ with equality for $f(x)=e^{-\pi \langle Ax, x \rangle}$ for a positive-definite matrix $A$ of determinant one. \label{cor_2} \end{cor} The expression $\operatorname{Ent}(f) -\operatorname{Ent}(g)$ is called the entropy gap. The linearization of Theorem \ref{thm1} yields the following corollary, and alternative proof of which, together with a generalization, is also given below in Section \ref{lin}. \begin{cor}\label{CorPoin} For all functions $\varphi\in C^{2}( \mathbb R^{n}){\cal A}} \def\cb{{\cal B}} \def\cc{{\cal C}p L^{2}(\mathbb R^{n}, \gamma ma_{n})$ with $\|{\nabla^2} \varphi \|_{HS}\in L^{2}(\mathbb R^{n},\gamma ma_{n})$ we have \begin{equation}\label{CorPoin1_} \int_{\mathbb R^{n}} \left[ \left\|\nabla \varphi\right\|^2_{} - \frac{\|{\nabla^2} \varphi\|_{HS}^2}{2} \right] d\gamma ma_n \le \mbox{Var}_{\gamma ma_n}(\varphi). \end{equation} Here, $\|\ \|_{HS}$ denotes the Hilbert-Schmidt norm and $\mbox{Var}_{\gamma ma_n}(\varphi)= \int_{\mathbb R^{n}} \varphi^2 d\gamma ma_n - \left(\int_{\mathbb R^{n}} \varphi d\gamma ma_n\right)^2$ is the variance. There is equality for all polynomials of degree $2$. \end{cor} The Poincar\'e inequality for the Gauss measure is ( see \cite{Beck1}) $$ \int_{\mathbb R^{n}}|f|^{2}d\gamma ma_{n} -\left(\int_{\mathbb R^{n}}fd\gamma ma_{n}\right)^{2} \leq\int_{\mathbb R^{n}}\|\nabla f\|^{2}d\gamma ma_{n}. $$ Hence, the inequality of Corollary \ref{CorPoin} gives a reverse Poincar\'e inequality. 
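As a quick check of the equality assertion in Corollary \ref{CorPoin}, included here only for illustration, take $n=1$ and $\varphi(x)=x^{2}$. Then
\begin{eqnarray*}
\int_{\mathbb R} \left[ \left\|\nabla \varphi\right\|^2 - \frac{\|{\nabla^2} \varphi\|_{HS}^2}{2} \right] d\gamma_1 &=& \int_{\mathbb R} 4x^{2}\, d\gamma_1 - \frac{4}{2} \ = \ 4-2 \ = \ 2, \\
\mbox{Var}_{\gamma_1}(\varphi) &=& \int_{\mathbb R} x^{4}\, d\gamma_1 - \left(\int_{\mathbb R} x^{2}\, d\gamma_1\right)^{2} \ = \ 3-1 \ = \ 2,
\end{eqnarray*}
so that (\ref{CorPoin1_}) holds with equality; the same computation, carried out for a general quadratic polynomial, confirms the equality case stated in Corollary \ref{CorPoin}.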
We shall also give an alternative proof of Corollary \ref{CorPoin}, which generalizes to the following family of inequalities (which we state only in the one dimensional case for simplicity) \begin{theo}\label{gen-inv-poinc} For all $m$ and all $\varphi \in C^{m,2}(\mathbb R)$ with $\int \varphi d\gamma ma = 0$, one has \[\int \sum_{j=0}^{m-1} \frac{\left(\varphi^{(2j+1)}\right)^2}{(2j+1)!} d\gamma ma \le \int \sum_{j=0}^m \frac{\left(\varphi^{(2j)}\right)^2}{(2j)!} d\gamma ma \le \int \sum_{j=0}^m \frac{\left(\varphi^{(2j+1)}\right)^2}{(2j+1)!} d\gamma ma .\] \end{theo} \noindentndent Here $\gamma ma = \gamma ma_1$ denotes the one-dimensional standard Gaussian distribution, and $C^{m,2}(\mathbb R)$ means functions which are $m$ times continuously differentiable whose respective derivatives belong to $L_2$. Our results are formulated and proved for functions that are sufficiently smooth. However, they can be generalized to functions that are not necessarily satisfying any $C^2$-assumptions. We then need to replace the second derivatives by the generalized second derivatives (compare e.g. \cite{SW2002}). \section{Affine isoperimetry for $s$-concave functions} \begin{defn} \label{def1} Let $s, n \in \mathbb{N}$. We say that $f: {\mathbb R}^n\rightarrow [0, \infty)$ is $s$-concave, and denote $f\in Conc_s({\mathbb R}^n)$, if $f$ is upper semi continuous, $\overline{{\hbox{supp}} (f)}$ is a convex body (convex, compact and with non-empty interior) and $f^\frac{1}{s}$ is concave on $\overline{{\hbox{supp}} (f)}$. The class $Conc_s^{(2)}(\mathbb R^n)$ shall consist of such $f\in Conc_s(\mathbb R^n)$ which are twice continuously differentiable in the interior of their support. \end{defn} Note that for every $f\in Conc_s(\mathbb R^n)$ there exists a constant $C>0$ such that $0\leq f\leq C$. In particular, such an $f$ is integrable. As in \cite{ArtKlarMil}, we associate with a function $f \in Conc_s(\mathbb R^n)$ a the convex body $K_{s}(f)$ in $\mathbb R^{n}\times \mathbb R^{s}$ given by \begin{equation}\label{def.Ksf} K_s(f):=\big\{(x,y) \in \mathbb R^n \times \mathbb R^s: x \in {\hbox{supp}} (f), \|y\| \leq f^\frac{1}{s}(x)\big\}.\end{equation} A special function in the class $Conc_s(\mathbb R^n)$, which will play the role of the Euclidean ball in convexity, is \[g_s(x):=(1- \|x\|^2)_+^\frac{s}{2} \] where, for $a \in \mathbb{R}$, $a_+ = \max \{a, 0 \}$. It follows immediately from the definition that $K_s(g_s)=B^{n+s}_2$, the $(n+s)$-dimensional Euclidean unit ball centred at the origin. By Fubini's theorem, we have that for all $f \in Conc_s(\mathbb R^n)$ \[ \text{vol}_{n+s}\left(K_s(f)\right) = \text{vol}_s(B_2^s) \int_{\mathbb R^n} f dx. \] An important affine invariant quantity in convex geometric analysis is the affine surface area which, for a convex body $K\subset \mathbb R^n$ with a smooth boundary is defined by \begin{equation} \label{def:affine} as_{1}(K)=\int_{\partial K} \kappa_K(x)^{\frac{1}{n+1}} d\mu_{ K}(x). \end{equation} Here, $\kappa (x) =\kappa_{K}(x)$ is the generalized Gaussian curvature at the point $x$ in $\partial K$, the boundary of $K$, and $\mu=\mu_K$ is the surface area measure on the boundary $\partial K$. See e.g. \cite{Lu1991, MW1, SW1990} for extensions of the definition of affine surface area to an arbitrary convex body in $\mathbb R^n$. For a function $f\in Conc_s(\mathbb R^n)$, we define \begin{equation}\label{def:aff} as_1^{(s)}(f)=as_1\left(K_s(f) \right). 
\end{equation} Our first goal is to give a precise formula for $as_1^{(s)}(f)$ in terms of derivatives of the function $f$. This is done in the next proposition. There, for $x, y >0$, $$B(x,y)= \int_0^1 t^{x-1}(1-t)^{y-1} dt$$ is the Beta function. \begin{prop}\label{prop.s-affine-sa} Let $s\in \mathbb{N}$ and $f \in Conc_s^{(2)}(\mathbb R^n)$. Then \begin{equation*} as_1^{(s)}(f)=c_s \int_{{\hbox{supp}} (f)} \left| {\rm det} \left( {{\nabla^2} f^\frac{1}{s} } \right) \right|^\frac{1}{n+s+1} f^{\frac{(s-1)(n+s)}{s(n+s+1)} }\ dx. \end{equation*} Here, $c_s=(s-1) \text{vol}_{n-1}(B^{s-1}_2) B\left(\frac{s-1}{2},\frac{1}{2}\right)$ if $s \neq 1$ and $c_1=2$. \end{prop} In order to derive the formula for $as_1^{(s)}(f)$, we have to compute the affine surface area of the body $K_s(f)$. To this end, we compute the curvature of this body, which is circular in $s$ directions, and is behaving like $f^{1/s}$ in the other directions. We make use of the following well known lemma. \begin{lem} \label{lem.kappa}(\cite{Tho}, p. 93, exercise 12.13) Let $h: \mathbb R^n \rightarrow [0,+\infty)$ be twice continuously differentiable. Let $x=(t,h(t))\in \mathbb R^{n}\times \mathbb R$ be a point on the graph of $h$. Then, with the appropriate orientation, the Gauss curvature $\kappa$ at $x$ is $$ \kappa(x)=\frac{ {\rm det} ({\nabla^2} h)}{(1 + \|\nabla h \|^2)^{\frac{n+2}{2}}}. $$ \end{lem} \vskip 3mm We shall apply Lemma \ref{lem.kappa} to the boundary of a convex body $K$. We consider only the orientation that gives nonnegative curvature. Thus, for a point $x \in \partial K$ whose boundary is described locally by the convex function $h$ we can use the formula \begin{equation}\label{curv} \kappa(x) =\left| \frac{ {\rm det} \left({\nabla^2} h \right)}{(1 + \|\nabla h \|^2)^{\frac{n+2}{2}}}\right|. \end{equation} We shall denote by $N_K(x)$ the outer unit normal vector to $\partial K$ at $x\in \partial K$. \begin{lem} \label{normal+kappa} Let $f \in Conc_s^{(2)}(\mathbb R^n)$. Then for all $x = (x_1,\ldots,x_{n+s}) \in \partial K_s(f)$ with $(x_1\ldots,x_n) \in {\hbox{int}}({{\cal A}} \def\cb{{\cal B}} \def\cc{{\cal C}l S}upp(f))$,\\ \noindentndent (i) $N_{K_s(f)}(x)= \frac{\left( f^\frac{1}{s}\nabla f^\frac{1}{s}, -x_{n+1}, \dots , -x_{n+s} \right) }{f^\frac{1}{s}\left(1+ \| \nabla f^\frac{1}{s}\|^2\right)^\frac{1}{2}}$, \\ \noindentndent (ii) $\kappa_{K_s(f)}(x)= \left|\frac{ {\rm det} \left({\nabla^2} \ f^\frac{1}{s}\right)}{f^\frac{s-1}{s} \left(1 +\|\nabla f^\frac{1}{s}\|^2\right)^\frac{n+s+1}{2}}\right|.$\\ Here, $f$ is evaluated at $(x_1,\ldots,x_n) \in \mathbb R^n$. \end{lem} \begin{proof}[Proof of Lemma \ref{normal+kappa}] If $s=1$, (i) of the lemma follows immediately from elementary calculus and (ii) from Lemma \ref{lem.kappa}. Therefore, we can assume that $s \geq 2$. Since, by equation (\ref{def.Ksf}), the boundary of $K_s(f)$ is given by $\{(x, y ) \in \mathbb R^n \times \mathbb R^s: \|y\| = f^{1/s}(x)\}$, the boundary of $K_s(f)$ is the union of the graphs of the two mappings $$(x_1,\dots,x_n,x_{n+1},\dots, x_{n+s-1}) \rightarrow (x_1,\dots,x_n, x_{n+1},\dots, x_{n+s-1}, \pm x_{n+s}),$$ where, with $x = (x_1, \ldots, x_n)$, \begin{eqnarray}\label{xn+s} x_{n+s}&=&\bigg(f^\frac{2}{s}(x)-\sum_{i=n+1}^{n+s-1}x_i^2\bigg)^\frac{1}{2}. \end{eqnarray} Because of symmetry, it is enough to consider only the ``positive'' part of $\partial K_s(f)$, in which the last coordinate is non-negative. 
We will show that the outer normal and the curvature exist for $(x, y)$ with $x \in {\hbox{supp}} (f)$ and $\|y\|=f(x)^\frac{1}{s}$ (they may not exist for $x \in \partial \left({\hbox{supp}} (f)\right)$). Letting $g=f^{1/s}$ we have $$ x_{n+s}=\sqrt{g(x_{1},\dots,x_{n})^{2}-\sum_{i=n+1}^{n+s-1}x_{i}^{2}}. $$ As $f^\frac{1}{s}$ is everywhere differentiable on its support, we have for $i$ with $1\leq i\leq n$ and, provided $s \geq 2$, for $j$ with $n+1\leq j\leq n+s-1$, \begin{eqnarray}\label{diff1} \frac{\partial x_{n+s}}{\partial x_{i}} =\frac{g\frac{\partial g}{\partial x_{i}}}{\sqrt{g^{2}-\sum_{i=n+1}^{n+s-1}x_{i}^{2}}} =\frac{g\frac{\partial g}{\partial x_{i}}}{x_{n+s}} \hskip 5mm \text{and} \hskip 5mm \frac{\partial x_{n+s}}{\partial x_{j}} =-\frac{x_{j}}{\sqrt{g^{2}-\sum_{i=n+1}^{n+s-1}x_{i}^{2}}} =-\frac{x_{j}}{x_{n+s}}. \end{eqnarray} \vskip 2mm \noindentndent (i) Therefore we get for almost all $z \in \partial K_s(f)$ with (\ref{xn+s}) and (\ref{diff1}) \begin{eqnarray*} \noindentndent N_{K_s(f)}(z) = \frac{(\nabla x_{n+s}, -1)}{\left(1+ \| \nabla x_{n+s}\|^2\right)^\frac{1}{2}} = \frac{\left( f^\frac{1}{s}\nabla f^\frac{1}{s}, -x_{n+1}, \dots , -x_{n+s} \right) }{f^\frac{1}{s}\left(1+ \| \nabla f^\frac{1}{s}\|^2\right)^\frac{1}{2}}. \end{eqnarray*} \noindentndent (ii) We have for all $i$ with $1\leq i\leq n$, $$ \frac{\partial^{2} x_{n+s}}{\partial x_{i}^{2}} =\frac{g\frac{\partial^{2}g}{\partial x_{i}^{2}} +(\frac{\partial g}{\partial x_{i}})^{2}}{x_{n+s}} -\frac{g^{2}(\frac{\partial g}{\partial x_{i}})^{2}}{x_{n+s}^{3}} =\frac{g\frac{\partial^{2}g}{\partial x_{i}^{2}} }{x_{n+s}} -\frac{(\frac{\partial g}{\partial x_{i}})^{2} \sum_{j=n+1}^{n+s-1}x_{j}^{2}}{x_{n+s}^{3}}. $$ For $i\ne j$ with $1\leq i,j\leq n$, \begin{eqnarray*} \frac{\partial^{2} x_{n+s}}{\partial x_{i}\partial x_{j}} &=&\frac{g\frac{\partial^{2} g}{\partial x_{i}\partial x_{j}}+\frac{\partial g}{\partial x_{i}}\frac{\partial g}{\partial x_{j}}}{x_{n+s}} -\frac{g^{2}\frac{\partial g}{\partial x_{i}} \frac{\partial g}{\partial x_{j}}}{x_{n+s}^{3}} =\frac{g\frac{\partial^{2} g}{\partial x_{i}\partial x_{j}}}{x_{n+s}} -\frac{\frac{\partial g}{\partial x_{i}} \frac{\partial g}{\partial x_{j}} \sum_{j=n+1}^{n+s-1}x_{j}^{2}}{x_{n+s}^{3}}. \end{eqnarray*} For $1\leq i\leq n$ and $n+1\leq j\leq n+s-1$ $$ \frac{\partial^{2} x_{n+s}}{\partial x_{i}\partial x_{j}} =-\frac{x_{j}g\frac{\partial g}{\partial x_{i}}}{x_{n+s}^{3}}. $$ For $n+1\leq i\leq n+s-1$, $$ \frac{\partial^{2} x_{n+s}}{\partial x_{i}^{2}} =-\frac{1}{x_{n+s}}-\frac{x_{i}^{2}}{x_{n+s}^{3}} =-\frac{x_{n+s}^{2}+x_{i}^{2}}{x_{n+s}^{3}}. $$ For $i$ and $j$ with $n+1\leq i,j\leq n+s-1$ and $j \neq i$, $$ \frac{\partial^{2} x_{n+s}}{\partial x_{i}\partial x_{j}} =-\frac{x_{i}x_{j}}{x_{n+s}^{3}}. 
$$ We compute now the determinant of the following $[n+(s-1)]\times [n+(s-1)]$ matrix $$ \left(\begin{array}{cccccc} \frac{g\frac{\partial^{2}g}{\partial x_{1}^{2}} }{x_{n+s}} -\frac{(\frac{\partial g}{\partial x_{1}})^{2} \sum_{j=n+1}^{n+s-1}x_{j}^{2}}{x_{n+s}^{3}} &\dots & \frac{g\frac{\partial^{2} g}{\partial x_{1}\partial x_{n}}}{x_{n+s}} -\frac{\frac{\partial g}{\partial x_{1}} \frac{\partial g}{\partial x_{n}} \sum_{j=n+1}^{n+s-1}x_{j}^{2}}{x_{n+s}^{3}} & \hskip 5mm -\frac{x_{n+1}g\frac{\partial g}{\partial x_{1}}}{x_{n+s}^{3}} &\dots & -\frac{x_{n+s-1}g\frac{\partial g}{\partial x_{1}}}{x_{n+s}^{3}} \\ \vdots & &\vdots & \vdots & &\vdots \\ \frac{g\frac{\partial^{2} g}{\partial x_{n}\partial x_{1}}}{x_{n+s}} -\frac{\frac{\partial g}{\partial x_{n}} \frac{\partial g}{\partial x_{1}} \sum_{j=n+1}^{n+s-1}x_{j}^{2}}{x_{n+s}^{3}} &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots & \frac{g\frac{\partial^{2}g}{\partial x_{n}^{2}} }{x_{n+s}} -\frac{(\frac{\partial g}{\partial x_{n}})^{2} \sum_{j=n+1}^{n+s-1}x_{j}^{2}}{x_{n+s}^{3}} &\hskip 5mm -\frac{x_{n+1}g\frac{\partial g}{\partial x_{n}}}{x_{n+s}^{3}} &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots& -\frac{x_{n+s-1}g\frac{\partial g}{\partial x_{n}}}{x_{n+s}^{3}} \\ -\frac{x_{n+1}g\frac{\partial g}{\partial x_{1}}}{x_{n+s}^{3}} &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots & -\frac{x_{n+1}g\frac{\partial g}{\partial x_{n}}}{x_{n+s}^{3}} &-\frac{x_{n+s}^{2}+x_{n+1}^{2}}{x_{n+s}^{3}} &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots &-\frac{x_{n+s-1}x_{n+1}}{x_{n+s}^{3}} \\ \vdots & & \vdots&\vdots& &\vdots \\ -\frac{x_{n+s-1}g\frac{\partial g}{\partial x_{1}}}{x_{n+s}^{3}} & {\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots & -\frac{x_{n+s-1}g\frac{\partial g}{\partial x_{n}}}{x_{n+s}^{3}} & -\frac{x_{n+s-1}x_{n+1}}{x_{n+s}^{3}} &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots&-\frac{x_{n+s}^{2}+x_{n+s-1}^{2}}{x_{n+s}^{3}} \\ \end{array} \right) $$ For fixed $i$, $1\leq i\leq n$ we multiply each of the rows $n+1\leq j\leq n+s-1$ by $$ \frac{x_{j}}{g}\frac{\partial g}{\partial x_{i}} $$ and add them up. We obtain the vector $$ \left(-\frac{\frac{\partial g}{\partial x_{i}} \frac{\partial g}{\partial x_{1}} \sum_{j=n+1}^{n+s-1}x_{j}^{2}}{x_{n+s}^{3}},\dots, -\frac{\frac{\partial g}{\partial x_{i}} \frac{\partial g}{\partial x_{n}} \sum_{j=n+1}^{n+s-1}x_{j}^{2}}{x_{n+s}^{3}}, -\frac{x_{n+1}g\frac{\partial g}{\partial x_{i}}}{x_{n+s}^{3}}, \dots,-\frac{x_{n+s-1}g\frac{\partial g}{\partial x_{i}}}{x_{n+s}^{3}} \right) $$ and subtract it from the $i$-th row. 
The determinant does not change and we obtain $$ \left(\begin{array}{cccccc} \frac{g\frac{\partial^{2}g}{\partial x_{1}^{2}} }{x_{n+s}} &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots & \frac{g\frac{\partial^{2} g}{\partial x_{1}\partial x_{n}}}{x_{n+s}} & \hskip 5mm 0 &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots & 0 \\ \vdots & &\vdots & \vdots & &\vdots \\ \frac{g\frac{\partial^{2} g}{\partial x_{n}\partial x_{1}}}{x_{n+s}} &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots & \frac{g\frac{\partial^{2}g}{\partial x_{n}^{2}} }{x_{n+s}} &\hskip 5mm 0 &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots& 0 \\ -\frac{x_{n+1}g\frac{\partial g}{\partial x_{1}}}{x_{n+s}^{3}} &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots & -\frac{x_{n+1}g\frac{\partial g}{\partial x_{n}}}{x_{n+s}^{3}} &-\frac{x_{n+s}^{2}+x_{n+1}^{2}}{x_{n+s}^{3}} &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots &-\frac{x_{n+s-1}x_{n+1}}{x_{n+s}^{3}} \\ \vdots & & \vdots&\vdots& &\vdots \\ -\frac{x_{n+s-1}g\frac{\partial g}{\partial x_{1}}}{x_{n+s}^{3}} & {\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots & -\frac{x_{n+s-1}g\frac{\partial g}{\partial x_{n}}}{x_{n+s}^{3}} & -\frac{x_{n+s-1}x_{n+1}}{x_{n+s}^{3}} &{\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots&-\frac{x_{n+s}^{2}+x_{n+s-1}^{2}}{x_{n+s}^{3}} \\ \end{array} \right). $$ The determinant of this matrix equals, up to a sign, to \begin{equation}\label{determinant2} \frac{g^{n}}{x_{n+s}^{n+3(s-1)}} {\rm det}\left( \begin{array}{ccc} \frac{\partial^{2}g}{\partial x_{1}^{2}} &\dots& \frac{\partial^{2} g}{\partial x_{1}\partial x_{n}} \\ \dots & &\vdots \\ \frac{\partial^{2} g}{\partial x_{1}\partial x_{n}} & \dots& \frac{\partial^{2}g}{\partial x_{n}^{2}} \end{array} \right) {\rm det}\left(\begin{array}{ccc} x_{n+s}^{2}+x_{n+1}^{2} &\dots&x_{n+s-1}x_{n+1} \\ \vdots& &\vdots \\ x_{n+s-1}x_{n+1}& \dots& x_{n+s}^{2}+x_{n+s-1}^{2} \end{array} \right) \end{equation} It is left to evaluate the second determinant. To that end we use a well-known matrix determinant formula: For any dimension $m$ and $y \in \mathbb R^m$, \begin{equation} {\rm det}(Id + y \otimes y) = 1 + \| y \|^2 \label{eq_1051} \end{equation} where $y \otimes y$ is the matrix whose $(y_i y_j)_{i,j=1,\ldots,n}$. Consequently, for the second determinant in (\ref{determinant2}) we have $$ {\rm det}\left(\begin{array}{ccc} x_{n+s}^{2}+x_{n+1}^{2} &\dots&x_{n+s-1}x_{n+1} \\ \vdots& &\vdots \\ x_{n+s-1}x_{n+1}& \dots& x_{n+s}^{2}+x_{n+s-1}^{2} \end{array} \right) =\left(\sum_{i=n+1}^{n+s} \frac{x_{i}^{2}}{x_{n+s}^2} \right)x_{n+s}^{2s} =g^{2}x_{n+s}^{2(s-2)}. $$ Therefore we get for the expression (\ref{determinant2}) $$ \frac{g^{n+2}}{x_{n+s}^{n+s+1}} {\rm det}\left( \begin{array}{ccc} \frac{\partial^{2}g}{\partial x_{1}^{2}} &\dots& \frac{\partial^{2} g}{\partial x_{1}\partial x_{n}} \\ \vdots & &\vdots \\ \frac{\partial^{2} g}{\partial x_{1}\partial x_{n}} & \dots& \frac{\partial^{2}g}{\partial x_{n}^{2}} \end{array} \right). $$ Moreover \begin{eqnarray*} 1+\sum_{i=1}^{n+s-1}\left|\frac{\partial x_{n+s}}{\partial x_{i}} \right|^{2} &=&1+\sum_{i=1}^{n}\left|\frac{g\frac{\partial g}{\partial x_{i}}}{x_{n+s}} \right|^{2} +\sum_{i=n+1}^{n+s-1}\left|\frac{x_{i}}{x_{n+s}}\right|^{2} =\sum_{i=1}^{n}\left|\frac{g\frac{\partial g}{\partial x_{i}}}{x_{n+s}} \right|^{2} +\left|\frac{g}{x_{n+s}}\right|^{2} \\ &=&\left|\frac{g}{x_{n+s}}\right|^{2} \left(1+\sum_{i=1}^{n}\left|\frac{\partial g}{\partial x_{i}} \right|^{2}\right). 
\end{eqnarray*} Therefore, we get by (\ref{curv}) for the curvature \begin{eqnarray*} \kappa(z) &=&\frac{\frac{g^{n+2}}{x_{n+s}^{n+s+1}} {\rm det}\left( \begin{array}{ccc} \frac{\partial^{2}g}{\partial x_{1}^{2}} &\dots& \frac{\partial^{2} g}{\partial x_{1}\partial x_{n}} \\ \vdots & &\vdots \\ \frac{\partial^{2} g}{\partial x_{1}\partial x_{n}} & {\cal D}} \def\ce{{\cal E}} \def\cf{{\cal F}ots& \frac{\partial^{2}g}{\partial x_{n}^{2}} \end{array} \right)} {\left(\left|\frac{g}{x_{n+2}}\right|^{2} \left(1+\sum_{i=1}^{n}\left|\frac{\partial g}{\partial x_{i}} \right|^{2}\right)\right)^{\frac{n+s+1}{2}}} =\frac{ {\rm det}\left( \begin{array}{ccc} \frac{\partial^{2}g}{\partial x_{1}^{2}} &\dots& \frac{\partial^{2} g}{\partial x_{1}\partial x_{n}} \\ \vdots & &\vdots \\ \frac{\partial^{2} g}{\partial x_{1}\partial x_{n}} & \dots& \frac{\partial^{2}g}{\partial x_{n}^{2}} \end{array} \right)} {g^{s-1} \left(1+\sum_{i=1}^{n}\left|\frac{\partial g}{\partial x_{i}} \right|^{2}\right)^{\frac{n+s+1}{2}}}\\ &=& \frac{ \mbox{det} \left({\nabla^2} \ f^\frac{1}{s}\right)}{f^\frac{s-1}{s} \left(1 +\|\nabla f^\frac{1}{s}\|^2\right)^\frac{n+s+1}{2}}. \end{eqnarray*} This completes the proof of Lemma \ref{normal+kappa}. \end{proof} \begin{proof}[Proof of Proposition \ref{prop.s-affine-sa}] Denote by $\tilde{\partial} K_s(f)$ the collection of all points $(x_1,\ldots,x_{n+s}) \in \partial K_s(f)$ such that $(x_1,\ldots,x_n) \in {\hbox{int}}({{\cal A}} \def\cb{{\cal B}} \def\cc{{\cal C}l S}upp(f))$. Since there is no contribution to the integral of $as_1\left(K_s(f)\right)$ from $\partial K_s(f) \setminus \overline{\tilde{\partial} K_s(f)}$ (since the Gauss curvature vanishes on the part with full dimension, if exists) clearly $$ as_1^{(s)}(f) = as_1\left(K_s(f)\right)= \int_{\partial K_s(f)} \kappa_{K_s(f)} ^\frac{1}{n+s+1} \ d\mu_{K_s(f)} = \int_{\tilde{\partial} K_s(f)} \kappa_{K_s(f)} ^\frac{1}{n+s+1} \ d\mu_{K_s(f)}. $$ By Lemma \ref{normal+kappa} \begin{eqnarray} \label{as} as_1^{(s)}(f)&=& \int_{\tilde{\partial} K_s(f)}\frac{\left( {\rm det}\left({\nabla^2} (f^\frac{1}{s})\right) \right)^\frac{1}{n+s+1}}{\left(1 + \|\nabla f^\frac{1}{s}\| \right)^\frac{1}{2}}\ f ^{-\frac{s-1}{s(n+s+1)}} \ d\mu_{K_s(f)} \nonumber \\ &= & 2\ \int_{\mathbb R^{n+s-1}} f^\frac{1}{s}\ \left( \frac{{\rm det}\left( {\nabla^2}(f^\frac{1}{s})\right)} {f^ \frac{s-1}{s} } \right)^\frac{1}{n+s+1} \frac{ dx_1 \dots dx_{n+s-1} }{ |x_{n+s}|} \end{eqnarray} where $f$ is evaluated, of course, at $(x_1,\ldots,x_n)$. The last equality follows as the boundary of $K_s(f)$ consists of two, ``positive'' and ``negative'', parts. For $s=1$, we get \[ 2 \int_{\mathbb R^{n}} \left({\rm det}\left( {\nabla^2} f \right) \right)^\frac{1}{n+2} dx_1 \dots dx_{n}, \] hence $c_1=2$. For $s >1$, \begin{eqnarray*} \int_{\mathbb R^{s-1}}\frac{ dx_{n+1} \dots dx_{n+s-1} }{ |x_{n+s}|}&=& \int_{\mathbb R^{s-1}} f^{-\frac{1}{s}}\ \bigg(1-\sum_{i=n+1}^{n+s-1} \left(\frac{x_i}{f^\frac{1}{s}} \right)^2 \bigg)^{-\frac{1}{2}} \ dx_{n+1} \dots dx_{n+s-1}\\ &=& \int_{\sum_{i=n+1}^{n+s-1} y_i^2 \leq 1} \frac{f^\frac{s-1}{s}}{f^\frac{1}{s}}\ \bigg(1-\sum_{i=n+1}^{n+s-1} y_i^2 \bigg)^{-\frac{1}{2}} \ dy_{n+1} \dots dy_{n+s-1}\\ &=& \frac{f^\frac{s-1}{s}}{f^\frac{1}{s}}\ (s-1) \mbox{vol}_{s-1}\left(B_2^{s-1}\right) \ \int_0^1 \frac{r^{s-2} dr}{(1-r^2)^\frac{1}{2}}\\ &=& \frac{f^\frac{s-1}{s}}{f^\frac{1}{s}}\ (s-1) \mbox{vol}_{s-1}\left(B_2^{s-1}\right) \frac{1}{2}B\left(\frac{s-1}{2}, \frac{1}{2}\right). 
\end{eqnarray*} Thus (\ref{as}) becomes $$ as_1^{(s)}(f)=(s-1)\mbox{vol}_{s-1}\left(B_2^{s-1}\right) B\left(\frac{s-1}{2}, \frac{1}{2}\right) \ \int_{\mathbb R^{n}} f^\frac{s-1}{s}\ \left( \frac{ \mbox{det}\left(\mbox{Hess}(f^\frac{1}{s})\right)} {f^ \frac{s-1}{s} } \right)^\frac{1}{n+s+1} \ dx, $$ and the proof of Proposition \ref{prop.s-affine-sa} is complete. \end{proof} With the formula for $as_1^{(s)}(f)$ in hand, we may use the affine isoperimetric inequality for convex bodies to obtain the following corollary. \begin{cor} \label{cor:s-ineq} For all $s \in \mathbb{N}$ and for all $f \in Conc_s^{(2)}(\mathbb R^n)$ we have $$ \int_{{\hbox{supp}} (f)} \left| {\rm det} \left( {\nabla^2} f^\frac{1}{s} \right)\right|^\frac{1}{n+s+1} f^{\frac{s-1}{s}\left( \frac{n+s}{n+s+1}\right)} \ dx \leq d(n,s) \left(\int_{{\hbox{supp}} (f)} f dx\right)^\frac{n+s-1}{n+s+1}, $$ where $$d(n,s)=\pi^\frac{n}{n+s+1} \left(\frac{n+s}{s}\right)^\frac{n+s-1}{n+s+1} \left(\frac{\Gamma(\frac{s}{2})}{\Gamma(\frac{n+s}{2})}\right)^\frac{2}{n+s+1}. $$ Equality holds if and only if $f=(a + \langle b, x \rangle - \langle A x, x \rangle)_+^{s/2}$ for $a \in \mathbb R, b \in \mathbb R^n$ and a positive-definite matrix $A$. \end{cor} \begin{proof}[Proof of Corollary \ref{cor:s-ineq}] The affine isoperimetric inequality for convex bodies $K$ in $\mathbb R^n$ (see, e.g., \cite{Petty}) says that \begin{equation}\label{aii} \frac{as_1(K)}{as_1(B^n_2)} \leq \left(\frac{\mbox{vol}_n\left(K\right)}{\mbox{vol}_n\left(B^n_2\right)}\right)^\frac{n-1}{n+1}, \end{equation} with equality if and only if $K$ is an ellipsoid. We apply (\ref{aii}) to $K_s(f) \subset \mathbb R^{n+s}$ and get \begin{eqnarray*} \frac{as_1^{(s)}(f)}{as_1^{(s)}(g_s)} &=& \frac{as_1\left(K_s(f)\right)}{as_1\left(K_s(g_s)\right)} \\ &=& \frac{c_s}{as_1\left(B^{n+s}_2\right)} \int_{{\hbox{supp}} (f)} \bigg(\mbox{det}\big(\frac{\partial^2 f^\frac{1}{s}}{\partial x_i \partial x_j}\big)_{i,j =1,\ldots,n}\bigg)^\frac{1}{n+s+1} \ f^{\frac{(s-1)(n+s)}{s(n+s+1)}} dx \\ & \leq & \left(\frac{ \mbox{vol}_s\left(B^s_2\right) \ \int_{{\hbox{supp}} (f)}f dx}{\mbox{vol}_{n+s}\left(B^{n+s}_2\right)}\right)^\frac{n+s-1}{n+s+1}, \end{eqnarray*} with equality if and only if $f(x)=(a + \langle b, x \rangle - \langle A x, x \rangle)_+^{s/2}$ for $a \in \mathbb R, b \in \mathbb R^n$ and a positive-definite matrix $A$. This is rewritten as $$ \int_{{\hbox{supp}} (f)} \left({\rm det}\left({\nabla^2} ( f^\frac{1}{s})\right)\right)^\frac{1}{n+s+1} \ f^{\frac{(s-1)(n+s)}{s(n+s+1)}} dx \leq d(n,s) \ \left(\int_{{\hbox{supp}} (f)} f dx \right)^\frac{n+s-1}{n+s+1}, $$ where \begin{eqnarray*} \nonumber d(n,s)&=&\frac{(n+s)\ \mbox{vol}_{n+s}\left(B^{n+s}_2\right)}{c_s} \left(\frac{\mbox{vol}_{s}\left(B^{s}_2\right)}{\mbox{vol}_{n+s}\left(B^{n+s}_2\right)}\right)^\frac{n+s-1}{n+s+1} \\ &=&\pi^\frac{n}{n+s+1}\left(\frac{n+s}{s}\right)^\frac{n+s-1}{n+s+1}\ \left(\frac{\Gamma(\frac{s}{2})}{\Gamma(\frac{n+s}{2})}\right)^\frac{2}{n+s+1}. \end{eqnarray*} \end{proof} It follows immediately from the definition and from Proposition \ref{prop.s-affine-sa} that $as_1^{(s)}(f)$ is affine invariant and that it is a valuation: \begin{cor}\label{prop} Let $s \in \mathbb{N}$ and let $f \in Conc_s(\mathbb R^n) \cap C^2\left({\hbox{supp}} (f)\right)$.
\vskip 2mm \noindent (i) For all linear maps $A: \mathbb R^n \rightarrow \mathbb R^n$ with ${\rm det} A \neq 0$, and for all $\lambda \in \mathbb R$, we have \[ as_1^{(s)}((\lambda f) \circ A) = \frac{\lambda^\frac{n+s-1}{n+s+1}}{|{\rm det} A|}as_1^{(s)}(f).\] In particular, if $|{\rm det} A|=1$, $$ as_1^{(s)}(f \circ A) = as_1^{(s)}(f). $$ \noindent (ii) $as_1^{(s)}$ is a ``valuation'': If $\max (f_1,f_2)$ is $s$-concave, then \[ as_1^{(s)}(f_1)+ as_1^{(s)}(f_2) = as_1^{(s)}(\max (f_1, f_2)) + as_1^{(s)}(\min(f_1, f_2))\] \end{cor} \begin{proof}[Proof of Corollary \ref{prop}] (i) By Proposition \ref{prop.s-affine-sa}, \begin{eqnarray*} &&as_1^{(s)}((\lambda f) \circ A )\\ &&= c_s\ \int_{{\hbox{supp}} ( f\circ A)} \left|{\rm det}\left( {\nabla^2} \left( (\lambda f) \circ A\right)^\frac{1}{s} \right) \right|^\frac{1}{n+s+1} \ \lambda^{\frac{(s-1)(n+s)}{s(n+s+1)}} f(Ax)^{\frac{(s-1)(n+s)}{s(n+s+1)} }\ dx \\ &&= c_s\ \frac{ \lambda^{\frac{n+s-1}{n+s+1}}}{|{\rm det} A|} \ \int_{{\hbox{supp}} (f)} \left|{\rm det}\left( {\nabla^2} (f ^\frac{1}{s} )\right) \right|^\frac{1}{n+s+1} \ f^{\frac{(s-1)(n+s)}{s(n+s+1)} }\ dy\\ && = \frac{ \lambda^{\frac{n+s-1}{n+s+1}}}{|{\rm det} A|} as_1^{(s)}( f ). \end{eqnarray*} \noindent (ii) By (\ref{def:aff}) and since the affine surface area for convex bodies is a valuation \cite{Schuett1}, \begin{eqnarray*} as_1^{(s)}(f_1) + as_1^{(s)}(f_2) &=& as_1\left(K_s(f_1)\right) + as_1\left(K_s(f_2)\right) \\ &= & as_1\left(K_s(f_1) \cup K_s(f_2) \right) + as_1\left(K_s(f_1) \cap K_s(f_2) \right) \\ &= & as_1^{(s)}(\max (f_1, f_2)) + as_1^{(s)}(\min(f_1, f_2)), \end{eqnarray*} provided that $K_s(f_1) \cup K_s(f_2)$ is convex. \end{proof} \section{$\log$-concave functions} We would like to obtain an inequality corresponding to the one of Corollary \ref{cor:s-ineq} not only for $s$-concave functions but, more generally, for log-concave functions on $\mathbb R^n$, which are the natural functional extension of convex bodies. The union of all classes of $s$-concave functions over all $s$ is dense within log-concave functions in many natural topologies. Note that if a function $f$ is $s_0$-concave for some $s_0$, then it is $s$-concave for all $s\ge s_0$. Therefore, by Corollary \ref{cor:s-ineq}, we get that for any $s_0 \in \mathbb{N}$ and any $f \in Conc_{s_0}(\mathbb R^n) \cap C^2\left({\hbox{supp}} (f )\right)$ we have, for all $s\ge s_0$, \begin{eqnarray*} \int_{{\hbox{supp}} (f)} f^{\frac{(s-1)(n+s)}{s(n+s+1)}} \left|{\rm det}\left({\nabla^2} (f^\frac{1}{s})\right)\right|^\frac{1}{n+s+1} dx \leq d(n,s) \left(\int_{{\hbox{supp}} (f )} f dx\right)^\frac{n+s-1}{n+s+1}. \end{eqnarray*} Taking the limit as $s\rightarrow \infty$ one sees that the limit on both sides is simply $\int_{{\hbox{supp}} (f )} f dx$, so that one does not get an interesting inequality. However, we may take the derivative at $s=+\infty$ as in \cite{Klar} (the details are given in the proof below), and doing so, we obtain the inequality of Theorem \ref{thm1}. Before we present the proof of Theorem \ref{thm1}, we give an example in which both sides are computable. The computation is straightforward and left for the interested reader. \begin{exam}\label{exam1} Let $p>1$ and $f:\mathbb R^{n}\rightarrow\mathbb R$ be given by $f(x)=e^{-\sum_{i=1}^n |x_i|^p}$.
Then $$ \int_{\mathbb R^{n}} f \ \ln\bigg({\rm det}\left( {\nabla^2}\left(-\ln f\right)\right)\bigg) dx = n \ \left( \frac{2}{p} \ \Gamma\left(\frac{1}{p}\right) \right)^n \left( \ln \big(p(p-1)\big) + (p-2) \ \frac{\Gamma^{\prime}(\frac{1}{p})}{\Gamma(\frac{1}{p})}\right) $$ and $$ 2 \bigg[ \operatorname{Ent}(f) + \|f\|_{L^1( dx)} \ln(2 \pi e)^\frac{n}{2} \bigg] =n \ \left( \frac{2}{p} \ \Gamma\left(\frac{1}{p}\right) \right)^n \left( \ln \bigg(\frac{\pi e}{2 \Gamma(1+\frac{1}{p})^2}\bigg) - \frac{2}{p}\right). $$ Both expressions are equal when $p=2$. \end{exam} \vskip 3mm \begin{proof}[Proof of Theorem \ref{thm1}:] One is given a function $f$ which is log-concave and $C^2$-smooth in the interior of its support. In order to apply Corollary \ref{cor:s-ineq}, we modify $f$ slightly as follows: For $\varepsilon > 0$, set $$ f_{\varepsilon}(x) = f(x) \exp(-\varepsilon \| x \|^2) \chi_{\{f \geq \varepsilon\}}(x) \quad \quad \quad (x \in \mathbb R^n). $$ By a standard compactness argument, every log-concave function with compact support is $s_0$-concave for some $s_0$. Hence there exists $s_0 > 0$ such that $f_\varepsilon$ is $s$-concave for all $s \geq s_0$ and thus (\ref{s-ineq1}) holds for $f_\varepsilon$ and any $s \geq s_0$. We expand the left hand side and the right hand side of the inequality in Corollary \ref{cor:s-ineq} in terms of $\frac{1}{s}$. We have $$ \frac{\partial^2 f_{\varepsilon}^\frac{1}{s}}{\partial x_i \partial x_j}=\frac{1}{s} \frac{\partial}{\partial x_j}\left(f_{\varepsilon}^{\frac{1}{s}-1} \frac{ \partial f_{\varepsilon}}{\partial x_i}\right) = \frac{f_{\varepsilon}^{\frac{1}{s}-2}}{s} \left( f_{\varepsilon} \frac{\partial^2 f_{\varepsilon}}{\partial x_i \partial x_j} - \frac{\partial f_{\varepsilon}}{ \partial x_j} \frac{\partial f_{\varepsilon}}{ \partial x_i} + \frac{1}{s} \ \frac{\partial f_{\varepsilon}}{ \partial x_j} \frac{\partial f_{\varepsilon}}{ \partial x_i} \right) $$ Thus \begin{eqnarray*} {\nabla^2}(f_{\varepsilon}^{1/s}) &=& \frac{f_{\varepsilon}^\frac{1}{s}}{s} \left( \frac{f_{\varepsilon} {\nabla^2}(f_{\varepsilon}) - \nabla f_{\varepsilon} \otimes \nabla f_{\varepsilon} + \frac{1}{s} \nabla f_{\varepsilon} \otimes \nabla f_{\varepsilon} }{f_{\varepsilon}^2} \right) \\ &=&\frac{f_{\varepsilon}^\frac{1}{s}}{s} \ \left( {\nabla^2} \left(\ln f_{\varepsilon}\right) + \frac{1}{s} \frac{\nabla f_{\varepsilon} \otimes \nabla f_{\varepsilon} }{f_{\varepsilon}^2} \right) \end{eqnarray*} and hence $$ \mbox{det}\left(-\frac{\partial^2 f_{\varepsilon}^\frac{1}{s}}{\partial x_i \partial x_j}\right)_{i,j =1,\ldots,n} = \frac{f_{\varepsilon}^\frac{n}{s}}{s^n} \ \mbox{det} \left( -\left( {\nabla^2} \left(\ln f_{\varepsilon}\right) + \frac{1}{s} \frac{\nabla f_{\varepsilon} \otimes \nabla f_{\varepsilon} }{f_{\varepsilon}^2} \right) \right). $$ Thus the inequality of Corollary \ref{cor:s-ineq} is equivalent to \begin{eqnarray} \label{s-ineq1} &&\hskip -15mm \int_{{\hbox{supp}} (f_{\varepsilon})} \left| \mbox{det} \left( -\left( {\nabla^2} \left(\ln f_{\varepsilon}\right) + \frac{1}{s} \frac{\nabla f_{\varepsilon} \otimes \nabla f_{\varepsilon} }{f_{\varepsilon}^2} \right) \right)\right|^\frac{1}{n+s+1} \ f_{\varepsilon}^\frac{n+s-1}{n+s+1} dx \nonumber \\ &&\hskip 40mm \leq d(n,s) \ s ^\frac{n}{n+s+1} \ \left(\int_{{\hbox{supp}} (f_{\varepsilon})} f_{\varepsilon} dx\right)^\frac{n+s-1}{n+s+1}. 
\end{eqnarray} Applying again the formula for the determinant of a rank-one perturbation of a matrix, we have \begin{eqnarray}\label{expand:det} && {\rm det} \left( -\left( {\nabla^2} \left(\ln f_{\varepsilon}\right) + \frac{1}{s} \frac{\nabla f_{\varepsilon} \otimes \nabla f_{\varepsilon} }{f_{\varepsilon}^2} \right) \right) \nonumber \\ && ={\rm det} \left( - {\nabla^2} \left(\ln f_{\varepsilon}\right) \right) \left[ 1 +s^{-1} f_{\varepsilon}^{-2} \langle \left( {\nabla^2} \ln f_{\varepsilon} \right) ^{-1} \nabla f_{\varepsilon}, \nabla f_{\varepsilon} \rangle \right] \nonumber \\ && ={\rm det} \left( - {\nabla^2} \left(\ln f_{\varepsilon}\right) \right) + s^{-1} \alpha_{\varepsilon}(x), \end{eqnarray} where, for a fixed $\varepsilon$, the function $\alpha_{\varepsilon}(x)$ is defined by \eqref{expand:det} and is clearly bounded on the interior of the support of $f_{\varepsilon}$. We write, for the left hand side of (\ref{s-ineq1}), $$ f_\varepsilon^\frac{n+s-1}{n+s+1}= f_\varepsilon \ \left(f_\varepsilon^{-2}\right)^\frac{1}{n+s+1} $$ and on the right hand side $$ \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx\right)^\frac{n+s-1}{n+s+1}= \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx\right) \left( \int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx\right)^\frac{-2}{n+s+1}. $$ Moreover, \begin{eqnarray*} d(n,s) \ s^\frac{n}{n+s+1} &=& \left(s\pi \right)^\frac{n}{n+s+1}\left(\frac{n+s}{s}\right)^\frac{n+s-1}{n+s+1}\ \left(\frac{\Gamma(\frac{s}{2})}{\Gamma(\frac{n+s}{2})}\right)^\frac{2}{n+s+1} \\ &\leq& \left(2 \pi e\right)^\frac{n}{n+s+1} \left(1 +\frac{1}{3s}\right)^\frac{2}{n+s+1}, \end{eqnarray*} where we have used that for $x \rightarrow \infty$, \begin{eqnarray*}\label{gamma} \Gamma(x)= \sqrt{2\pi} \ x^{x-\frac{1}{2}}\ e^{-x}\ \left[1+ \frac{1}{12x} + \frac{1}{288x^2} + O\left(x^{-3}\right)\right], \end{eqnarray*} and we make the legitimate assumption that $s$ is sufficiently large. Thus, together with (\ref{expand:det}), it follows from (\ref{s-ineq1}) that \begin{eqnarray} \label{s-ineq2} &&\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon\left| f_\varepsilon^{-2}\left({\rm det} \left( - {\nabla^2} \left(\ln f_\varepsilon\right) \right) +s^{-1} \alpha_{\varepsilon}(x) \right)\right|^\frac{1}{n+s+1} dx \nonumber \\ &&\leq \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right) \left(\left(1+\frac{1}{3s}\right)^2 (2 \pi e)^n \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right)^{-2} \right)^\frac{1}{n+s+1}. \end{eqnarray} We estimate the left hand side of (\ref{s-ineq2}) from below by \begin{eqnarray}\label{s-ineq3} &&\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon\left| f_\varepsilon^{-2}\left({\rm det} \left( -\left( {\nabla^2} \ln f_\varepsilon\right) \right) + s^{-1} \alpha_{\varepsilon}(x) \right)\right|^\frac{1}{n+s+1} dx \nonumber \\ &&=\int_{{\hbox{supp}} (f_\varepsilon)}f_\varepsilon \exp\bigg(\frac{1}{n+s+1}\ln\bigg| f_\varepsilon^{-2}\bigg({\rm det} \left( -\left( {\nabla^2} \ln f_\varepsilon\right) \right) + s^{-1} \alpha_\varepsilon(x) \bigg) \bigg|\bigg) dx \nonumber\\ && \geq \int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon \bigg( 1 + \frac{1}{n+s+1} \ln \bigg| f_\varepsilon^{-2}\bigg({\rm det} \left( -\left( {\nabla^2} \ln f_\varepsilon\right) \right) +s^{-1} \alpha_{\varepsilon}(x) \bigg) \bigg|\bigg) dx.
\nonumber \end{eqnarray} We write the right hand side of (\ref{s-ineq2}) as \begin{eqnarray*} &&\left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right) \left(\left(1+\frac{1}{3s}\right)^2 (2 \pi e)^n \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right)^{-2} \right)^\frac{1}{n+s+1} \\ &&= \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right) \sum_{j=0}^\infty \frac{1} {j! (n+s+1)^j} \left( \ln\left(\frac{\left(1+\frac{1}{3s}\right)^2 (2 \pi e)^n}{\left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx\right)^2}\right) \right)^j. \end{eqnarray*} Therefore we get the following inequality \begin{eqnarray}\label{s-ineq4} \int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon \bigg( 1 + \frac{1}{n+s+1} \ln \bigg| f_\varepsilon^{-2}\bigg({\rm det} \left( -\left( {\nabla^2} \ln f_\varepsilon\right) \right)+ s^{-1} \alpha_{\varepsilon}(x) \bigg) \bigg|\bigg) dx \nonumber \\ \leq \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right) \sum_{j=0}^\infty \frac{1} {j! (n+s+1)^j} \left( \ln\left(\frac{\left(1+\frac{1}{3s}\right)^2 (2 \pi e)^n}{\left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx\right)^2}\right) \right)^j. \end{eqnarray} We subtract the first order term $\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx$ from both sides, multiply by $n+s+1$ and take the limit as $s\rightarrow \infty$. We get \begin{eqnarray*} && \liminf_{s \rightarrow \infty} \int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon \bigg( \ln \bigg| f_\varepsilon^{-2}\bigg({\rm det} \left( -\left( {\nabla^2} \ln f_\varepsilon\right) \right) + s^{-1} \alpha_{\varepsilon}(x) \bigg) \bigg|\bigg) dx \nonumber \\ && \leq \limsup_{s \rightarrow \infty} \int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon \bigg( \ln \bigg| f_\varepsilon^{-2}\bigg({\rm det} \left( -\left( {\nabla^2} \ln f_\varepsilon\right) \right) + s^{-1} \alpha_{\varepsilon}(x) \bigg) \bigg|\bigg) dx \nonumber \\ && \leq \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right) \limsup_{s \rightarrow \infty} \sum_{j=1}^\infty \frac{1} {j! (n+s+1)^{j-1}} \left( \ln\left(\frac{\left(1+\frac{1}{3s}\right)^2 (2 \pi e)^n}{\left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx\right)^2}\right) \right)^j \nonumber\\ &&= \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right) \ \ln\left(\frac{ (2 \pi e)^n}{\left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx\right)^2}\right).
\end{eqnarray*} In the interior of the support of $f_{\varepsilon}$, the matrix $-{\nabla^2} (\ln f_{\varepsilon})$ is at least $\varepsilon \, Id$, hence we can apply Fatou's lemma on the left hand side to get \begin{eqnarray*} && \int_{{\hbox{supp}} (f_\varepsilon)} \liminf_{s \rightarrow \infty} f_\varepsilon \bigg( \ln \bigg| f_\varepsilon^{-2}\bigg({\rm det} \left( -\left( {\nabla^2} \left(\ln f_\varepsilon\right) \right)_{i,j =1,\ldots,n} \right) + s^{-1} \alpha_{\varepsilon}(x) \bigg) \bigg|\bigg) dx \nonumber \\ &&\leq \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right) \ \ln\left(\frac{ (2 \pi e)^n}{\left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx\right)^2}\right), \end{eqnarray*} which simplifies to \begin{eqnarray}\label{s-ineq5} && \int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon \bigg( \ln \bigg({\rm det} \left( -\left( {\nabla^2} \ln f_\varepsilon\right) \right) \bigg) \bigg) dx \nonumber \\ && \leq \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right) \ \ln\left(\frac{ (2 \pi e)^n}{\left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx\right)^2}\right) + 2 \int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon \ln f_\varepsilon dx . \end{eqnarray} Now we pass to the limit $\varepsilon \rightarrow 0$ on both sides of (\ref{s-ineq5}). We deal with each of the three terms separately. For the first term, since $-\ln f_{\varepsilon} = -\ln f + \varepsilon \|\cdot \|^2/2$, we have \[ \int_{\{f\ge \varepsilon\} } f_\varepsilon \bigg( \ln \bigg({\rm det} \left(-{\nabla^2} (\ln f) + \varepsilon Id \right) \bigg) dx \ge \int_{\{f\ge \varepsilon\} } f_\varepsilon \bigg( \ln \bigg({\rm det} \left(-{\nabla^2} \ln f\right) \bigg) dx. \] Since the integral $ f \ln ({\rm det} (-{\nabla^2} \ln f ) )$ is assumed to belong to $L_1$ and $f_\varepsilon$ increases monotonically to $f$ as $\varepsilon \rightarrow 0$, the integrand is bounded by $ f \left|\ln ({\rm det} (-{\nabla^2} \ln f ) )\right|$ and by the dominated convergence theorem \[ \lim_{\varepsilon\rightarrow 0} \int_{\{f\ge \varepsilon\} } f_\varepsilon \bigg( \ln \bigg({\rm det} \left(-{\nabla^2} \ln f\right) \bigg) dx = \int_{{\hbox{supp}} f} f \bigg( \ln \bigg({\rm det} \left(-{\nabla^2} \ln f\right) \bigg) dx.\] Similarly, the monotone convergence theorem ensures that $$ \lim_{\varepsilon \rightarrow 0} \left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx \right) \ \ln\left(\frac{ (2 \pi e)^n}{\left(\int_{{\hbox{supp}} (f_\varepsilon)} f_\varepsilon dx\right)^2}\right)= \left(\int_{{\hbox{supp}} (f)} fdx \right) \ \ln\left(\frac{ (2 \pi e)^n}{\left(\int_{{\hbox{supp}} (f)} f dx\right)^2}\right). $$ We are left with showing that, for the entropy term, \[ \lim_{\varepsilon\rightarrow 0}\int f_\varepsilon \ln f_\varepsilon = \int f \ln f. \] This is straightforward from the definition of $f_\varepsilon$ and the assumptions on $f$, as \[ f_\varepsilon \ln f_\varepsilon = \left( e^{-\varepsilon \|x\|^2/2}f\ln f - \varepsilon f e^{-\varepsilon \|x\|^2/2} \|x\|^2/2\right) \chi_{\{f\ge \varepsilon\}}.\] For the first term, apply again the dominated convergence theorem, and the second term disappears since the second moment of $f_{\varepsilon}$ is bounded uniformly by the second moment of $f$.
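Indeed, to spell out the last step, recall the standard fact that an integrable log-concave function has finite second moment, so that \[ 0 \le \int_{\{f\ge \varepsilon\}} \varepsilon f e^{-\varepsilon \|x\|^2/2}\, \frac{\|x\|^2}{2}\, dx \le \frac{\varepsilon}{2} \int_{\mathbb R^n} f \|x\|^2\, dx \longrightarrow 0 \qquad (\varepsilon \rightarrow 0).\]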
We end up with \begin{eqnarray}\label{log-ineq5} && \int_{{\hbox{supp}} (f)} f \bigg( \ln \bigg({\rm det} \left( -\left( {\nabla^2} \ln f \right) \right) \bigg) \bigg) dx \nonumber \\ && \leq \left(\int_{{\hbox{supp}} (f)} f \right) \ \ln\left(\frac{ (2 \pi e)^n}{\left(\int_{{\hbox{supp}} (f)} f dx\right)^2}\right) + 2 \int_{{\hbox{supp}} (f)} f \ln f. \end{eqnarray} This completes the proof of the main inequality. The equality case is easily verified, and in particular follows from the affine invariance together with the computation in Example \ref{exam1}. \end{proof} \section{Linearization}\label{lin} In this section we prove Corollary \ref{CorPoin}, by means of linearization of our main inequality around its equality case. For convenience, we rewrite the inequality of Theorem \ref{thm1} in terms of a convex function $\psi: \mathbb R^{n} \rightarrow \mathbb R$ such that $f=e^{-\psi}$. We get \begin{eqnarray}\label{mainineq55} && \hskip -10mm \int_{\mathbb R^{n}} e^{-\psi} \ln({\rm det}({\nabla^2}(\psi))) dx \leq \nonumber\\ && \hskip -10mm 2 \left\{ -\int_{\mathbb R^{n}} e^{-\psi} {\psi} dx- \left(\int_{\mathbb R^{n}} e^{-\psi}dx\right) \ln\left( \int_{\mathbb R^{n}} e^{-\psi}dx \right) + \left( \int_{\mathbb R^{n}} e^{-\psi}dx \right) \ln(2 \pi e)^\frac{n}{2} \right\}. \end{eqnarray} Note that the support of $f$ is $\mathbb R^{n}$. We then linearize around the equality case $\psi(x) = \|x\|^2/2$. \begin{proof}[Proof of Corollary \ref{CorPoin}] We first prove the corollary for functions with bounded support. Thus, let $\varphi$ be a twice continuously differentiable function with bounded support and let $\psi(x) = \|x\|^2/2 + \varepsilon \varphi(x)$. Note that for sufficiently small $\varepsilon$ the function $\psi$ is convex. Therefore we can plug $\psi$ into inequality (\ref{mainineq55}) and develop in powers of $\varepsilon$. We first evaluate the left hand expression of (\ref{mainineq55}). Since $ {\nabla^2}\psi=I+\varepsilon\, {\nabla^2} \varphi $, we obtain for the left hand side $$ \int_{{\mathbb R}^n} e^{-\|x\|^2/2 - \varepsilon \varphi} \ln ({\rm det} (I + \varepsilon {\nabla^2} \varphi)) dx. $$ By Taylor's theorem this equals $$ \int_{{\mathbb R}^n} e^{-\|x\|^2/2} \left(1 - \varepsilon \varphi + \frac{\varepsilon^2}{2} \varphi^2 \right) \cdot \ln ({\rm det} (I + \varepsilon {\nabla^2} \varphi)) dx + O(\varepsilon^3). $$ For a matrix $A=(a_{i,j})_{i,j=1,\ldots,n}$, let $D(A) = \sum_{i =1}^n\sum_{j\neq i}^n [a_{i,i}a_{j,j} - a_{i,j}^2]$. Note that each $2\times 2$ principal minor is counted twice. Then $$ {\rm det}(I+\varepsilon {\nabla^2} \varphi) =1+\varepsilon \triangle \varphi +\frac{\varepsilon^{2}}{2}D({\nabla^2} \varphi)+O(\varepsilon^{3}) $$ where $\triangle \varphi = \operatorname{tr}( {\nabla^2} \varphi)$ is the Laplacian of $\varphi$.
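Let us also record, for a symmetric matrix $A=(a_{i,j})_{i,j=1,\ldots,n}$, the elementary identity \[ D(A) - \left(\operatorname{tr} A\right)^2 = \sum_{i =1}^n\sum_{j\neq i}^n \left[a_{i,i}a_{j,j} - a_{i,j}^2\right] - \Big(\sum_{i=1}^n a_{i,i}\Big)^2 = -\sum_{i,j=1}^n a_{i,j}^2 = -\|A\|_{HS}^2 , \] which, applied to $A={\nabla^2}\varphi$, gives $D({\nabla^2}\varphi)-(\triangle\varphi)^2=-\|{\nabla^2}\varphi\|_{2}^2$ and is used in the next computation.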
Therefore the left hand side equals \begin{eqnarray*} && \int_{{\mathbb R}^n} e^{-\|x\|^2/2} \left(1 - \varepsilon \varphi + \frac{\varepsilon^2}{2} \varphi^2 \right) \cdot \left( \varepsilon \triangle \varphi + \frac{\varepsilon^2}{2}D({\nabla^2} \varphi) - \frac{\varepsilon^2}{2} (\triangle \varphi)^2 \right)dx + O(\varepsilon^3) \\ &&= \varepsilon \int_{{\mathbb R}^n} e^{-\|x\|^2/2} \triangle \varphi dx+ \varepsilon^2 \int_{{\mathbb R}^n} e^{-\|x\|^2/2} \left[ - \varphi \triangle \varphi + \frac{D({\nabla^2} \varphi) - (\triangle \varphi)^2}{2} \right]dx + O(\varepsilon^3) \\ &&= \varepsilon \int_{\mathbb R^n} \left(\|x\|^2 - n\right) e^{-\|x\|^2/2} \varphi + \varepsilon^2 \int_{{\mathbb R}^n} e^{-\|x\|^2/2} \left[ - \varphi \triangle \varphi - \frac{\|{\nabla^2} \varphi\|_2^2}{2} \right] + O(\varepsilon^3). \end{eqnarray*} The last equality follows by integrating by parts twice. \par Now we evaluate the right hand side expression. First consider \begin{eqnarray*} \int_{\mathbb R^{n}} e^{-\psi}dx &=& \int_{\mathbb R^n} e^{-\|x\|^2/2}dx - \varepsilon \int_{\mathbb R^n} e^{-\|x\|^2/2} \varphi dx+ \varepsilon^2 \int_{\mathbb R^n} e^{-\|x\|^2/2} \frac{\varphi^2}{2}dx +O(\varepsilon^3). \end{eqnarray*} Next, \begin{eqnarray*} -\int_{\mathbb R^{n}} e^{-\psi} \psi dx &=& - \int_{\mathbb R^n} e^{-\|x\|^2/2} \left( 1 - \varepsilon \varphi + \varepsilon^2 \frac{\varphi^2}{2} \right) \cdot \left( \frac{\|x\|^2}{2} + \varepsilon \varphi \right) dx + O(\varepsilon^3) \\ &=& -\int_{\mathbb R^n} e^{-\|x\|^2/2}\frac{\|x\|^2}{2} + \varepsilon\left(\int_{{\mathbb R}^n} \varphi e^{-\|x\|^2/2} \left(\frac{\|x\|^2}{2}-1\right)dx\right)\\ &&+\varepsilon^2 \left( \int_{\mathbb R^n} \varphi^2 e^{-\|x\|^2/2} \left(1-\frac{\|x\|^2}{4}\right)dx\right) + O(\varepsilon^3) \end{eqnarray*} To treat $\left(\int_{\mathbb R^{n}} e^{-\psi}dx\right) \ln\left( \int_{\mathbb R^{n}} e^{-\psi}dx\right)$, we consider the function $g(y) = y\ln y$, which we will apply to $\int e^{-\|x\|^2/2-\varepsilon \varphi}$. We obtain \begin{eqnarray*} &&\int_{\mathbb R^n} e^{-\|x\|^2/2 - \varepsilon \varphi} dx \ \ln \left(\int_{\mathbb R^n} e^{-\|x\|^2/2 - \varepsilon \varphi}dx\right) \\ &&=\frac{n}{2}({2\pi})^{n/2}\ln({2\pi}) +\varepsilon \left( -\left(\frac{n}{2}\ln({2\pi})+1\right)\int_{\mathbb R^n} e^{-\|x\|^2/2} \varphi dx \right)\\ &&+\varepsilon^2\left( \left(\frac{n}{2}\ln ({2\pi}) +1\right)\int_{\mathbb R^n} e^{-\|x\|^2/2} \frac{\varphi^2}{2} dx+ \frac{1}{2({2\pi})^{n/2}}\left(\int_{\mathbb R^n} e^{-\|x\|^2/2} \varphi dx\right)^2 \right) + O(\varepsilon^3) \end{eqnarray*} Altogether, the right hand side equals \begin{eqnarray*} &&2 \left\{ \int_{\mathbb R^n} e^{-\|x\|^2/2-\varepsilon \varphi} (-\|x\|^2/2-\varepsilon\varphi)dx - \left(\int_{\mathbb R^n} e^{-\|x\|^2/2-\varepsilon\varphi}dx\right) \ln \left(\int_{\mathbb R^n} e^{-\|x\|^2/2-\varepsilon \varphi}dx \right) \right\} \\ &&+ \left( \int_{\mathbb R^n} e^{-\|x\|^2/2-\varepsilon \varphi}dx \right) n\ln(2 \pi e) \\&&= -\int_{\mathbb R^n} e^{-\|x\|^2/2}{\|x\|^2}dx - n(2\pi)^{n/2}\ln(2\pi) +n \ln(2\pi e) \int_{\mathbb R^n} e^{-\|x\|^2/2} dx\\ &&+ \varepsilon\left\{ 2\left(\int_{\mathbb R^n} \varphi e^{-\|x\|^2/2} (\frac{\|x\|^2}{2}-1)dx\right) - 2 \left( -(\frac{n}{2}\ln({2\pi})+1)\int_{\mathbb R^n} e^{-\|x\|^2/2} \varphi dx\right)\right.
\\ &&\left.- n \ln (2\pi e)\int e^{-\|x\|^2/2} \varphi\right\} \\ &&+\varepsilon^2\left\{ \int_{{\mathbb R}^n} \varphi^2 e^{-\|x\|^2/2} (1+\frac{n}{2}-\frac{1}{2}\|x\|^2)dx - \frac{1}{({2\pi})^{n/2}}\left(\int_{{\mathbb R}^n} e^{-\|x\|^2/2} \varphi dx\right)^2\right\} +O(\varepsilon^3). \end{eqnarray*} Since $$ \int_{{\mathbb R}^n} e^{-\|x\|^2/2} dx =({2\pi})^{n/2} \hskip 10mm \mbox{and} \hskip 10mm \int_{{\mathbb R}^n}\|x\|^{2} e^{-\|x\|^2/2} dx =n({2\pi})^{n/2}, $$ we get for the zeroth order term, \begin{eqnarray*} &&-\int_{\mathbb R^n} e^{-\|x\|^2/2}{\|x\|^2}dx - n(2\pi)^{n/2}\ln(2\pi) +n \ln(2\pi e) \int_{\mathbb R^n} e^{-\|x\|^2/2} dx \\ &&=-n(2\pi)^{n/2} -n(2\pi)^{n/2}\ln(2\pi) +n \ln(2\pi e) (2\pi)^{n/2}=0. \end{eqnarray*} Therefore, we get for the right hand side \begin{eqnarray*} && \varepsilon \int_{{\mathbb R}^n}\varphi e^{-\|x\|^2/2} {(\|x\|^2-n)} dx \\ &&+\varepsilon^2 \left\{ \int_{{\mathbb R}^n}\varphi^2 e^{-\|x\|^2/2} \left(\frac{n+2-\|x\|^2}{2}\right)dx - \frac{1}{({2\pi})^{n/2}}\left(\int_{{\mathbb R}^n} e^{-\|x\|^2/2} \varphi dx\right)^2\right\} +O(\varepsilon^3). \end{eqnarray*} The coefficients of $\varepsilon$ on the left and right hand side are the same and we discard them. We divide both sides by $\varepsilon^{2}$ and take the limit as $\varepsilon\rightarrow0$. Then \begin{eqnarray*} &&\hskip -15mm \int_{{\mathbb R}^n} e^{-\|x\|^2/2} \left[ - \varphi \triangle \varphi - \frac{\|{\nabla^2} \varphi\|_2^2}{2} \right]dx \\ &&\le \int_{{\mathbb R}^n}\varphi^2 e^{-\|x\|^2/2} (\frac{n+2-\|x\|^2}{2}) dx- \frac{1}{({2\pi})^{n/2}}\left(\int_{{\mathbb R}^n} e^{-\|x\|^2/2} \varphi dx\right)^2. \end{eqnarray*} If we want the right hand side to include the variance, we may write the inequality as follows \begin{eqnarray}\label{sohalt} &&\hskip -15mm \int_{{\mathbb R}^n} e^{-\|x\|^2/2} \left[ - \varphi \triangle \varphi - \frac{\|{\nabla^2} \varphi\|_2^2}{2} \right]dx \nonumber \\ && \le \int_{{\mathbb R}^n}\varphi^2 e^{-\|x\|^2/2} \left(\frac{n-\|x\|^2}{2}\right)dx + ({2\pi})^{n/2} \left[\int_{{\mathbb R}^n} \varphi^2 d\gamma_n - \left(\int_{{\mathbb R}^n} \varphi d\gamma_n\right)^2\right] \end{eqnarray} Now we integrate on the right by parts twice, noting that $(n-\|x\|^2)e^{-\|x\|^2/2} = -\triangle(e^{-\|x\|^2/2})$, so that the first term on the right hand side is \begin{eqnarray*} \int_{\mathbb R^n} e^{-\|x\|^2/2} \varphi^2 (\frac{n - \|x\|^2}{2}) dx &=&-\frac{1}{2}\int_{\mathbb R^n} e^{-\|x\|^2/2} \triangle(\varphi^2) dx \\ &=& -\int_{\mathbb R^n} e^{-\|x\|^2/2} (\varphi\triangle \varphi + \|\nabla \varphi\|^2)dx. \end{eqnarray*} We put this into (\ref{sohalt}) and get \[\int_{{\mathbb R}^n} e^{-\|x\|^2/2} \left[ \|\nabla \varphi\|^2 - \frac{\|{\nabla^2} \varphi\|_{HS}^2}{2} \right] dx \le {({2\pi})^{n/2}}\left[ \int_{{\mathbb R}^n} \varphi^2 d\gamma_n - \left(\int_{{\mathbb R}^n} \varphi d\gamma_n\right)^2\right], \] which we can rewrite as \[\int_{{\mathbb R}^n} \|\nabla \varphi\|^2 - \frac{\|{\nabla^2} \varphi\|_2^2}{2} d\gamma_n \le \int_{{\mathbb R}^n} \varphi^2 d\gamma_n - \left(\int_{{\mathbb R}^n} \varphi d\gamma_n\right)^2.\] Thus we have shown that the inequality holds for all twice continuously differentiable functions $\varphi$ with bounded support. One may extend it to all twice continuously differentiable functions $\varphi\in L^{2}(\mathbb R^{n},\gamma_{n})$ with $\| {\nabla^2} \varphi\|_{HS} \in L^{2}(\mathbb R^{n},\gamma_{n})$ by a standard approximation argument, as follows.
Let $\chi_{k}$ be a twice continuously differentiable function bounded between zero and one such that $\chi_{k}(x)=1$ for all $\|x\|\leq k$ and $\chi_{k}(x)=0$ for all $\|x\|>k+1$. Then, for all $k\in\mathbb N$ $$ \int_{\mathbb R^{n}} \left[ \|\nabla(\varphi\chi_{k}) \|^2 - \frac{\| {\nabla^2} \varphi\|_{HS}^2}{2} \right] d\gamma_{n} \le \int_{\mathbb R^{n}} (\varphi\chi_{k})^2 d\gamma_{n} - \left(\int_{\mathbb R^{n}}(\varphi\chi_{k}) d\gamma_{n}\right)^2, $$ or, equivalently, $$ \int_{\mathbb R^{n}} \|\nabla(\varphi\chi_{k})\|^2 d\gamma_{n} + \left(\int(\varphi\chi_{k}) d\gamma_{n}\right)^2 \le \int_{\mathbb R^{n}} (\varphi\chi_{k})^2 d\gamma_{n} +\int_{\mathbb R^{n}} \frac{\| {\nabla^2} \varphi\|_{HS}^2}{2} d\gamma_{n}. $$ It follows that \begin{eqnarray*} &&\hskip -15mm \liminf_{k\rightarrow\infty}\int_{\mathbb R^{n}} \|\nabla(\varphi\chi_{k})\|^2 d\gamma_{n} + \liminf_{k\rightarrow\infty}\left(\int(\varphi\chi_{k}) d\gamma_{n}\right)^2 \\ &&\hskip 15mm \le \limsup_{k\rightarrow\infty} \int_{\mathbb R^{n}} (\varphi\chi_{k})^2 d\gamma_{n} +\limsup_{k\rightarrow\infty}\int_{\mathbb R^{n}} \frac{\| {\nabla^2} \varphi\|_{HS}^2}{2} d\gamma_{n}. \end{eqnarray*} By Fatou's lemma and the dominated convergence theorem \begin{eqnarray*} &&\hskip -15mm\int_{\mathbb R^{n}} \liminf_{k\rightarrow\infty} \|\nabla(\varphi\chi_{k})\|^2 d\gamma_{n} + \left(\int_{\mathbb R^{n}}\lim_{k\rightarrow\infty}(\varphi\chi_{k}) d\gamma_{n}\right)^2 \\ && \hskip 15mm \le \int_{\mathbb R^{n}} \lim_{k\rightarrow\infty} (\varphi\chi_{k})^2 d\gamma_{n} +\int_{\mathbb R^{n}} \limsup_{k\rightarrow\infty} \frac{\| {\nabla^2} \varphi\|_{HS}^2}{2} d\gamma_{n}, \end{eqnarray*} which gives \begin{eqnarray*} \int_{\mathbb R^{n}} \|\nabla \varphi \|^2 d\gamma_{n} + \left(\int_{\mathbb R^{n}}\varphi d\gamma_{n}\right)^2 \le \int_{\mathbb R^{n}} \varphi^2 d\gamma_{n} +\int_{\mathbb R^{n}} \frac{\| {\nabla^2} \varphi\|_{HS}^2}{2} d\gamma_{n}. \end{eqnarray*} \end{proof} An alternative, direct proof of Corollary \ref{CorPoin} may be given by expanding ${\varphi} \in C^2(\mathbb R^n) \cap L^2(\mathbb R^n, \gamma_n)$ into Hermite polynomials. That is, denote by $h_0(x),h_1(x),\ldots$ the Hermite polynomials in one variable, normalized so that $\| h_i \|_{L^2(\gamma_1)} = 1$ for all $i$. We may decompose $$ {\varphi} = \sum_{i_1,\ldots,i_n = 0}^{\infty} a_{i_1,\ldots,i_n} \prod_{j=1}^n h_{i_j}(x_j) $$ where the convergence is in $L^2(\mathbb R^n, \gamma_n)$. Then the right-hand side of (\ref{CorPoin1_}) equals \begin{equation} \label{eq_1054} \sum_{i_1,\ldots,i_n = 0 \atop{(i_1,\ldots, i_n) \neq (0,\ldots,0)}}^{\infty} a_{i_1,\ldots,i_n}^2. \end{equation} Using the identity $h_i^{\prime} = \sqrt{i} \cdot h_{i-1}$, we see that the left-hand side of (\ref{CorPoin1_}) is \begin{equation} \int_{\mathbb R^{n}} \left[ \left\|\nabla \varphi\right\|^2 - \frac{\|{\nabla^2} \varphi\|_{HS}^2}{2} \right] d\gamma_n = \sum_{i_1,\ldots,i_n = 0}^{\infty} \left[ \frac{3}{2} \sum_{j=1}^n i_j - \frac{1}{2} \left( \sum_{j=1}^n i_j \right)^2 \right] a_{i_1,\ldots,i_n}^2. \label{eq_1110} \end{equation} We will use the simple fact that $x(3 - x) / 2 \leq 1$ for any integer $x \geq 1$, for $x = \sum_{j=1}^n i_j$. Comparing (\ref{eq_1054}) with (\ref{eq_1110}) and using the aforementioned simple fact, we deduce Corollary \ref{CorPoin}.
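For completeness, here is a brief sketch of the computation behind (\ref{eq_1110}), under the above normalization: by $h_i^{\prime} = \sqrt{i}\, h_{i-1}$ and orthonormality, \[ \int_{\mathbb R^{n}} \|\nabla \varphi\|^2\, d\gamma_n = \sum_{i_1,\ldots,i_n = 0}^{\infty} \Big(\sum_{j=1}^n i_j\Big)\, a_{i_1,\ldots,i_n}^2 \quad \mbox{and} \quad \int_{\mathbb R^{n}} \|{\nabla^2} \varphi\|_{HS}^2\, d\gamma_n = \sum_{i_1,\ldots,i_n = 0}^{\infty} \bigg[\Big(\sum_{j=1}^n i_j\Big)^2 - \sum_{j=1}^n i_j\bigg]\, a_{i_1,\ldots,i_n}^2, \] since $\sum_j i_j(i_j-1) + \sum_{j \neq k} i_j i_k = \big(\sum_j i_j\big)^2 - \sum_j i_j$; subtracting half of the second expression from the first yields the coefficient $\frac{3}{2} \sum_{j=1}^n i_j - \frac{1}{2} \big( \sum_{j=1}^n i_j \big)^2$ appearing in (\ref{eq_1110}).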
We also see that equality in (\ref{CorPoin1_}) holds if and only if ${\varphi}$ is a polynomial of degree at most $2$, because $x (3-x) / 2 = 1$ only for $x=1,2$. The proof of Theorem \ref{gen-inv-poinc} is along the exact same lines, using that all the derivatives are diagonalized by the Hermite polynomials with respect to the Gaussian measure, except that the inequality $x(3 - x) / 2 \leq 1$, which can be rewritten as $(x-1)(x-2)\ge 0$ for integers $x\ge 1$, is replaced by the more general inequality $(x-1)(x-2)\cdots (x-j)\ge 0$ for integers $x\ge 1$, with equality if and only if $x\in \{1, \ldots, j\}$. We remark that it is desirable to find an alternative, direct proof of Theorem \ref{thm1}, which does not rely on the affine isoperimetric inequality. {\bf Acknowledgement} We would like to thank Dario Cordero-Erausquin and Matthieu Fradelizi for helpful conversations. Part of the work was done during the authors' stay at the Fields Institute, Toronto, in the fall of 2010. The paper was finished while the last two named authors stayed at the Institute for Mathematics and its Applications, University of Minnesota, in the fall of 2011. Thanks go to both institutions for their hospitality. \vskip 5mm \vskip 2mm \noindent Shiri Artstein-Avidan\\ {\small School of Mathematical Sciences}\\ {\small Tel-Aviv University }\\ {\small Tel-Aviv 69978, Israel}\\ {\small \tt [email protected]} \\ \\ \noindent \and Bo'az Klartag\\ {\small School of Mathematical Sciences}\\ {\small Tel-Aviv University }\\ {\small Tel-Aviv 69978, Israel}\\ {\small \tt [email protected]} \\ \\ \noindent \and Carsten Sch\"utt\\ {\small Mathematisches Institut}\\ {\small Universit\"at Kiel}\\ {\small 24105 Kiel, Germany}\\ {\small \tt [email protected] }\\ \\ \noindent \and Elisabeth Werner\\ {\small Department of Mathematics \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Universit\'{e} de Lille 1}\\ {\small Case Western Reserve University \ \ \ \ \ \ \ \ \ \ \ \ \ UFR de Math\'{e}matique }\\ {\small Cleveland, Ohio 44106, U. S. A. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 59655 Villeneuve d'Ascq, France}\\ {\small \tt [email protected]}\\ \\ \end{document}
\begin{document} \begin{abstract} Let $X$ be a regular tame stack. If $X$ is locally of finite type over a field, we prove that the essential dimension of $X$ is equal to its generic essential dimension; this generalizes a previous result of P.~Brosnan, Z.~Reichstein and the second author. Now suppose that $X$ is locally of finite type over a $1$-dimensional noetherian local domain $R$ with fraction field $K$ and residue field $k$. We prove that $\operatorname{ed}_{k}X_{k} \le \operatorname{ed}_{K}X_{K}$ if $X\to \operatorname{Spec} R$ is smooth and $\operatorname{ed}_{k}X_{k} \le \operatorname{ed}_{K}X_{K}+1$ in general. \end{abstract} \maketitle \section{Introduction, and the statement of the main theorem} Let $k$ be a field, $X \arr \spec k$ an algebraic stack, $\ell$ an extension of $k$, $\xi \in X(\ell)$ an object of $X$ over $\ell$. If $k \subseteq L \subseteq \ell$ is an intermediate extension, we say, very naturally, that $L$ is a \emph{field of definition} of $\xi$ if $\xi$ descends to $L$. The \emph{essential dimension} $\operatorname{ed}_{k}\xi$, which is either a natural number or $+\infty$, is the minimal transcendence degree of a field of definition of $\xi$. If $X$ is of finite type then $\operatorname{ed}_{k}\xi$ is always finite. This number $\operatorname{ed}_{k}\xi$ is a very natural invariant, which measures, essentially, the number of independent parameters that are needed for defining $\xi$. The essential dimension $\operatorname{ed}_{k}X$ of $X$ is the supremum of the essential dimension of all objects over all extensions of $k$ (if $X$ is empty then $\operatorname{ed}_{k}X$ is $-\infty$). This number is the answer to the question ``how many independent parameters are needed to define the most complicated object of $X$?''. For example, this is a very natural question for the stack $\cM_{g}$ of smooth projective curves of genus~$g$. Essential dimension was introduced for classifying stacks of finite groups in \cite{buhler-reichstein}, with a rather more geometric language. Since then it has been actively investigated by many mathematicians. It has been studied for classifying stacks of positive-dimensional algebraic groups starting from \cite{reichstein-ed-algebraic-groups}, and for more general classes of geometric and algebraic objects in \cite{brosnan-reichstein-vistoli1}. See \cite{berhuy-favi-functorial,brosnan-reichstein-vistoli3, reichstein-ed-survey, merkurjev-ed-survey} for an overview of the subject. Suppose that $X$ is an integral algebraic stack which is locally of finite type over $k$. We can define the \emph{generic essential dimension} $\operatorname{g\mspace{1mu}ed}_{k}X$ (see \cite[Definition~3.3]{brosnan-reichstein-vistoli3}) as the supremum of all $\operatorname{ed}_{k}\xi$ taken over all $\xi \in X(\ell)$ such that the associated morphism $\xi\colon \spec \ell \arr X$ is dominant. For example, if $X$ has finite inertia and $X \arr M$ is its moduli space, then $M$ is an integral scheme over $k$; if $k(M)$ is its field of rational functions, the pullback $X_{k(M)} \arr \spec k(M)$ is a gerbe (the \emph{generic gerbe of $X$}), and \[ \operatorname{g\mspace{1mu}ed}_{k}X = \operatorname{ed}_{k(M)}X_{k(M)} + \dim M\,. \] So, if a generic object of $X$ has trivial automorphism group, then \[ \operatorname{g\mspace{1mu}ed}_{k}X = \dim M = \dim X\,. \] P.~Brosnan, Z.~Reichstein and the second author proved two results linking $\operatorname{ed}_{k}X$ and $\operatorname{ed}_{k(M)}X_{k(M)}$.
\begin{enumerate1} \item The genericity theorem (see \cite{brosnan-reichstein-vistoli1, brosnan-reichstein-vistoli-mixed-char, brosnan-reichstein-vistoli3, reichstein-vistoli-genericity}) says that if $X$ is a smooth integral tame Deligne--Mumford stack over $k$, then $\operatorname{ed}_{k}X = \operatorname{g\mspace{1mu}ed}_{k}X$. \item If $R$ is a DVR with quotient field $K$ and residue field $k$, and $X \arr \spec R$ is a finite tame étale gerbe, then $\operatorname{ed}_{k}X_{k} \leq \operatorname{ed}_{K}X_{K}$. This was proved in \cite{brosnan-reichstein-vistoli3, brosnan-reichstein-vistoli-mixed-char} (the proof in the first paper did not work when $R$ had mixed characteristics). \end{enumerate1} (1) plays a pivotal role in the calculation of the essential dimension of many stacks of geometric interest, such as stacks of smooth or stable curves, stacks of principally polarized abelian varieties \cite{brosnan-reichstein-vistoli3}, coherent sheaves on smooth curves \cite{biswas-dhillon-hoffmann-ed-coherent}, quiver representations \cite{scavia-quiver-ed} and polarized K3 surfaces \cite{gao-k3}. In this paper we extend both these results to \emph{weakly tame stacks}, in a somewhat more general form. Tame stacks have been defined by D.~Abramovich, M.~Olsson and the second author in \cite{dan-olsson-vistoli1}. They are stacks with finite inertia, whose automorphism group schemes for objects over a field are linearly reductive. In characteristic~$0$ these are the Deligne--Mumford stacks with finite inertia; but in positive characteristic they are not necessarily Deligne--Mumford, as there exist finite group schemes that are linearly reductive but not reduced, such as $\boldsymbol{\mu}_{p}$ in characteristic~$p$; and étale finite group schemes are linearly reductive if and only if their order is prime to the characteristic. We define weakly tame stacks as algebraic stacks, in the sense of \cite{laumon-moret-bailly}, whose automorphism group schemes for objects over a field are finite and linearly reductive, but whose inertia is not necessarily finite. \begin{theorem}\label{thm:genericity1} Let $X$ be a regular integral weakly tame stack, which is locally of finite type over a field $k$. Then \[ \operatorname{ed}_{k}X = \operatorname{g\mspace{1mu}ed}_{k}X\,. \] \end{theorem} \begin{theorem}\label{thm:genericity2} Let $S$ be a regular integral scheme, $X$ an integral, weakly tame stack, $X \arr S$ a smooth morphism. If $s\in S$ is a point with residue field $k(s)$, then \[ \operatorname{ed}_{k(s)} X_{k(s)}\le\operatorname{g\mspace{1mu}ed}_{k(S)} X_{k(S)}\,. \] \end{theorem} \begin{theorem}\label{thm:genericity3} Let $X$ be a regular integral weakly tame stack, which is locally of finite type over a noetherian $1$-dimensional local domain $R$ with fraction field $K$ and residue field $k$. Then \[ \operatorname{ed}_{k}X_{k} \le \operatorname{g\mspace{1mu}ed}_{K}X_{K}+1\,. \] \end{theorem} In fact, all these theorems have a more general formulation, which applies to individual objects, rather than whole stacks: see Section~\ref{sec:tame} for a discussion, and an application to an interesting geometric case, reduced local intersection curves. We only state this for Theorem~\ref{thm:genericity1}: see Theorem~\ref{thm:genericity4}. Theorem~\ref{thm:genericity1} generalizes (1) above.
Theorem~\ref{thm:genericity2} generalizes (2) above; not because $S$ is not assumed to be the spectrum of a DVR (in fact, the general case easily reduces to this), but mostly because $X$ is not assumed to be a gerbe over $S$. Theorem~\ref{thm:genericity3} is, as far as we know, the first result comparing essential dimensions in mixed characteristic without smoothness assumptions. Theorem~\ref{thm:genericity2} also implies a slightly weaker version of Theorem~\ref{thm:genericity1}, in which the morphism $X \arr \spec k$ is assumed to be smooth. Theorems \ref{thm:genericity2} and \ref{thm:genericity3} are related in spirit to the results in \cite{reichstein-scavia-specialization, reichstein-scavia-specialization-II}. Our approach to the proof of the genericity theorem is very different from those in the references above. The main tool is a new version of the valuative criterion for properness of morphisms of tame stacks (Theorem~\ref{thm:valuative}), which was proved, in a slightly weaker form, by the first author in \cite{giulio-section-birational} in characteristic~$0$. The proof of the general case will appear in \cite{giulio-angelo-lang-nishimura}. \section{Tame objects and weakly tame stacks}\label{sec:tame} Here, and for the rest of the paper, \emph{algebraic stacks} will be defined as in \cite{laumon-moret-bailly}; that is, we will assume that they have finite type diagonal. If $\ell$ is a field, we denote by $(\mathrm{Aff}/\ell)$ the category of affine schemes over $\ell$. Let $X$ be an algebraic stack, $\ell$ a field, and $\xi$ an object in $X(\ell)$. The functor of automorphisms $\mathop{\underline{\mathrm{Aut}}}\nolimits_{\ell}\xi\colon (\mathrm{Aff}/\ell)\op \arr \cat{Grp}$ is a group scheme of finite type. We say that $\xi$ is \emph{tame} if $\mathop{\underline{\mathrm{Aut}}}\nolimits_{\ell}\xi\colon (\mathrm{Aff}/\ell)\op \arr \cat{Grp}$ is finite and linearly reductive (see \cite[\S2]{dan-olsson-vistoli1} for a thorough discussion of finite linearly reductive group schemes). An algebraic stack $X$ is \emph{weakly tame} if every object over a field is tame; it is \emph{tame} if it is weakly tame and has finite diagonal (see \cite{dan-olsson-vistoli1}). The following is a strong form of Theorem~\ref{thm:genericity1}. \begin{theorem}\label{thm:genericity4} Let $X$ be a regular integral stack, which is locally of finite type over a field $k$. If $\ell$ is an extension of $k$ and $\xi$ is a tame object of $X(\ell)$, then \[ \operatorname{ed}_{k}\xi \leq \operatorname{g\mspace{1mu}ed}_{k}X\,. \] \end{theorem} If $X$ has finite diagonal, then there exists an open substack $X^{\mathrm{tame}} \subseteq X$ such that an object $\xi \in X(\ell)$ is tame if and only if $\xi$ is in $X^{\mathrm{tame}}(\ell)$ (see \cite[Proposition~3.6]{dan-olsson-vistoli1}); however, this fails in general if the diagonal of $X$ is only quasi-finite. Thus, for stacks with finite diagonal Theorem~\ref{thm:genericity1} and Theorem~\ref{thm:genericity4} are equivalent, but not in general. There are many geometrically natural stacks that are Deligne--Mumford in characteristic~$0$, but not in positive characteristic, as the corresponding objects may have infinitesimal automorphisms. Examples include polarized K3 surfaces, surfaces of general type, polarized torsors for abelian varieties, stable maps. In analyzing the essential dimension of these classes of geometric objects in positive characteristic it is essential to go outside the framework of Deligne--Mumford stacks. Here is an example.
Let $k$ be a field, $\ell$ an extension of $k$, $C$ a geometrically reduced, geometrically connected, and local complete intersection curve over $\ell$ of arithmetic genus $g \geq 2$. We are interested in the essential dimension $\operatorname{ed}_{k}C$, that is, the smallest transcendence degree of a field of definition of $C$ over $k$. If $\operatorname{char} k = 0$ and $C$ has a finite automorphism group, it is proved in \cite[Theorem~7.3]{brosnan-reichstein-vistoli3} that \begin{equation}\label{inequality} \operatorname{ed}_{k}C \leq \begin{cases} 3g-3 &\text{if }g \geq 3\\ 5 & \text{if }g = 2 \end{cases} \end{equation} We do not know if this holds in positive characteristic, but we can prove the following. Let us say that $C$ is \emph{tame} if $\mathop{\underline{\mathrm{Aut}}}\nolimits_{\ell}C$ is finite and linearly reductive. If $\operatorname{char} \ell = 0$, then the tame condition is automatic. \begin{proposition} The inequality \ref{inequality} above holds in any characteristic, if $C$ is tame. \end{proposition} \begin{proof} Let $\fM_{g}^{\mathrm{fin}}$ be the stack over $\spec k$ whose objects over a scheme $S$ are proper flat finitely presented morphisms $C \arr S$, whose fibers are geometrically reduced, geometrically connected, and local complete intersection curves of arithmetic genus $g$. By standard results in deformation theory, this is an integral smooth algebraic stack over $\spec k$. The integer $\operatorname{g\mspace{1mu}ed}_{k}\fM_{g}^{\mathrm{fin}} = \operatorname{g\mspace{1mu}ed}_{k}\cM_{g}$ is computed in \cite{brosnan-reichstein-vistoli3}; it is equal to $3g-3$ when $g \geq 3$, while it is $5$ when $g = 2$. Thus the result follows from Theorem~\ref{thm:genericity4}. \end{proof} One can show that the tame points of $\fM_{g}^{\mathrm{fin}}$ do not form an open substack, so Theorem~\ref{thm:genericity1} would not be sufficient for this application. Here is an example of a tame curve of the type above, whose automorphism group scheme is not reduced. \begin{example} Suppose that $\operatorname{char} k = p > 0$. Choose three rational points on $\PP^{1}_{k}$, say $0$, $1$ and $\infty$, and call $V \simeq \spec k[x]/\bigl((x-1)^{p}\bigr)$ the $p\th$ infinitesimal neighborhood of $1$. Take two copies of $\PP^{1}$, and glue the two copies of $V$. Call $C_{1}$ and $C_{2}$ two smooth curves with trivial automorphism groups which are not isomorphic to each other, and glue them to the union of two $\PP^{1}$, as in the picture below. \[ \includegraphics{curva} \] \noindent It is an exercise to show that the automorphism group scheme of the resulting curve is $\boldsymbol{\mu}_{p}$, which is finite and linearly reductive, but not reduced. \end{example} \section{Proofs of the theorems} Let us start with some preliminary results. \subsection*{A version of the valuative criterion for properness of tame algebraic stacks} A basic example of tame stacks is given by \emph{root stacks} (see \cite[Appendix~B2]{dan-tom-angelo2008}). We will need this in the following situation. Let $R$ be a DVR with uniformizing parameter $\pi$ and residue field $k \mathrel{\smash{\overset{\mathrm{\scriptscriptstyle def}} =}} R/(\pi)$. If $n$ is a positive integer, we will denote by $\radice[n]{\spec R}$ the $n\th$ root of the Cartier divisor $\spec k\subseteq \spec R$.
It is a stack over $\spec R$, such that given a morphism $\phi\colon T \arr \spec R$, the groupoid of liftings $T \arr \radice[n]{\spec R}$ is equivalent to the groupoid whose objects are triples $(L, s, \alpha)$, where $L$ is an invertible sheaf on $T$, $s \in L(T)$ is a global section of $L$, and $\alpha$ is an isomorphism $L^{\otimes n} \simeq \cO_{T}$, such that $\alpha(s^{\otimes n}) = \phi^\sharp(\pi)$. Alternatively, $\radice[n]{\spec R}$ can be described as the quotient stack $[\spec R[t]/(t^{n} - \pi)/\boldsymbol{\mu}_{n}]$, where the action of $\boldsymbol{\mu}_{n}$ on $\spec R[t]/(t^{n} - \pi)$ is by multiplication on $t$. The morphism $\rho\colon \radice[n]{\spec R} \arr \spec R$ is an isomorphism outside of $\spec k \subseteq \spec R$, while the reduced fiber $\rho^{-1}(\spec k)_{\mathrm{red}}$ is non-canonically isomorphic to the classifying stack $\cB_{k}\boldsymbol{\mu}_{n}$. Our version of the valuative criterion is as follows. \begin{theorem}\label{thm:valuative} Let $f\colon X \arr S$ be a proper morphism where $S$ is a scheme and $X$ a tame stack, $R$ a DVR with quotient field $K$. Suppose that we have a $2$-commutative square \[ \begin{tikzcd} \spec K \ar[d, hook]\rar & X \dar{f}\\ \spec R \rar & S \end{tikzcd} \] Then there exists a unique positive integer $n$ and a representable lifting $\radice[n]{\spec R} \arr X$, unique up to a unique isomorphism, of the given morphism $\spec R \arr S$, making the diagram \[\begin{tikzcd} & \spec K\rar\ar[dl, hook] \ & X\dar & \\ \radice[n]{\spec R}\rar\ar[rru] & \spec R\rar\ar[from=u,hook,crossing over] & S & \end{tikzcd}\] $2$-commutative. \end{theorem} The proof of the theorem will appear in \cite{giulio-angelo-lang-nishimura}. We say that an extension of DVRs $R\subseteq R'$ is \emph{weakly unramified} if its ramification index is $1$; in other words, if a uniformizing parameter of $R$ maps to a uniformizing parameter of $R'$. \begin{lemma}\label{loopram} Let $R \subseteq R'$ be a weakly unramified extension of DVRs, and let $K \subseteq K'$ be the fraction fields of $R$ and $R'$ respectively. Given a diagram \[ \begin{tikzcd} \spec K \ar[d, hook]\rar & X \dar{f}\\ \spec R \rar & S \end{tikzcd} \] with the same hypotheses as in Theorem~\ref{thm:valuative}, if there is an extension $\spec R'\to X$ of $\spec K' \to \spec K \to X$, then there is an extension $\spec R\to X$, too. \end{lemma} \begin{proof} By Theorem~\ref{thm:valuative}, there exists a unique positive integer $n$ with a unique representable extension $\radice[n]{\spec R} \to X$; we want to show that $n=1$. Since $R \subseteq R'$ is weakly unramified, the morphism $\radice[n]{\spec R'} \arr \radice[n]{\spec R}$ is representable, hence the composition $\radice[n]{\spec R'} \arr \radice[n]{\spec R} \arr X$ is representable, too. By hypothesis there exists an extension $\spec R' \arr X$, hence the uniqueness part of Theorem~\ref{thm:valuative} implies that $n=1$. \end{proof} \subsection*{Some easy commutative algebra} In the proof of the main theorem we will also use the following well-known facts, for which we do not know a reference. \begin{lemma}\label{lem:precut} Let $B$ be a regular local ring with $\dim B \ge 1$, and $b \in \frm_{B}$ a non-zero element.
There exists a surjective homomorphism $\phi\colon B \arr R$, where $R$ is a DVR, such that $\phi(b) \neq 0$. Furthermore, if $b \in \frm_{B} \setminus\frm_{B}^{2}$ we can find such a $\phi\colon B \arr R$ with $\phi(b) \in \frm_{R}\setminus\frm_{R}^{2}$. \end{lemma} \begin{proof} If $\dim B = 1$ then we take $B = R$. If $\dim B \geq 2$ we proceed by induction on $\dim B$; it is enough to show that there exists $c \in \frm_{B}\setminus \frm_{B}^{2}$ such that $b \notin (c)$, and $b \notin (c) + \frm_{B}^{2}$ if $b \in \frm_{B} \setminus \frm_{B}^{2}$. If $b \in \frm_{B}\setminus\frm_{B}^{2}$ it is enough to choose $c \in \frm_{B}$ so that the images of $b$ and $c$ in $\frm_{B}/\frm_{B}^{2}$ are linearly independent. In general, we know that the ring $B$ is a UFD. Let $x$ and $y$ be elements of $\frm_{B}$, whose images in $\frm_{B}/\frm_{B}^{2}$ are linearly independent; the elements $x + y^{d}$, for $d \geq 1$, are irreducible. We claim that they are pairwise non-associate: this implies that only finitely many of them can divide $b$, which implies the claim. To check this, assume that $x + y^{e} = u(x + y^{d})$ for some $u \in B \setminus \frm_{B}$ with $e > d > 0$. Reducing modulo $(y)$ we see that $u \equiv 1 \pmod{y}$, while reducing modulo $(x)$ and dividing by $y^{d}$, which is possible because $B/(x)$ is a domain, we get $u \equiv y^{e-d} \pmod{x}$, and this is clearly a contradiction. \end{proof} \begin{lemma}\label{cut} Let $A$ be a DVR, $B$ a regular local ring, $A \to B$ a local homomorphism. Assume that the induced homomorphism $\frm_{A}/\frm_{A}^{2}\to \frm_{B}/\frm_{B}^{2}$ is injective. There exists a surjective homomorphism $B\to R$ such that $A\to R$ is an injective, weakly unramified extension of DVRs. \end{lemma} The proof is immediate from Lemma~\ref{lem:precut}. \begin{lemma}\label{cut2} Let $U$ be an integral regular scheme, $V \subseteq U$ a nonempty open subset, $u \in U$. There exists a morphism $f\colon \spec R \arr U$, where $R$ is a DVR, which sends the closed point of\/ $\spec R$ to $u$, inducing an isomorphism $k(u) \simeq R/\frm_{R}$, and the generic point into $V$. \end{lemma} \begin{proof} If $u \in V$, this is clear. If not, call $I \subseteq \cO_{U,u}$ the radical ideal of the complement of the inverse image of $V$ in $\spec \cO_{U,u}$, take $b\in I \setminus \{0\}$, and apply Lemma~\ref{lem:precut}. \end{proof} \begin{lemma}\label{lem:inequality} Let $A$ be a noetherian $1$-dimensional local domain with fraction field $K$ and residue field $k$, $R$ a DVR with quotient field $L$ and residue field $\ell$. Let $A \subseteq R$ be a local embedding, inducing embeddings $k \subseteq \ell$ and $K \subseteq L$. Then \[ \operatorname{tr\mspace{1mu}deg}_{k}\ell \leq \operatorname{tr\mspace{1mu}deg}_{K}L\,. \] \end{lemma} \begin{proof} If $A$ is a DVR, this is \cite[Lemma 2.1]{brosnan-reichstein-vistoli-mixed-char} (there it is assumed that $A \subseteq R$ is weakly unramified, but this is not in fact used in the proof). In the general case, consider the normalization $\overline{A}$ of $A$ in $K$, and its localization $A'$ at the maximal ideal $\frm_{R}\cap \overline{A}$. By the Krull--Akizuki theorem $\overline{A}$ is a Dedekind domain, so $A'$ is a DVR.
Call $k'$ the residue field of $A'$; we have a factorization $k\subseteq k' \subseteq \ell$, and $k'$ is algebraic over $k$, so that $\operatorname{tr\mspace{1mu}deg}_{k}\ell = \operatorname{tr\mspace{1mu}deg}_{k'}\ell$; hence the general case reduces to the case of an extension of DVRs. \end{proof} \begin{lemma}\label{lem:inequality2} Let $k$ be a field, $R$ a DVR containing $k$, with residue field $\ell$ and fraction field $K$. Assume $\operatorname{tr\mspace{1mu}deg}_{k}\ell < +\infty$. Then \[ \operatorname{tr\mspace{1mu}deg}_{k}\ell < \operatorname{tr\mspace{1mu}deg}_{k}K\,. \] \end{lemma} \begin{proof} Suppose $u_{1}$, \dots,~$u_{n}$ are elements of $R$ whose images in $\ell$ are algebraically independent over $k$. If $\pi$ is a uniformizing parameter for $R$, it is immediate to show that $\pi$, $u_{1}$, \dots,~$u_{n}$ are algebraically independent over $k$. \end{proof} \subsection*{The proofs} Let us proceed with the proof of the theorems. For all of them, the first step is to reduce to the case in which $X$ is tame. The reduction is done almost word by word as in the beginning of the proof of \cite[Theorem~6.1]{brosnan-reichstein-vistoli3}. Let $\ell$ be a field, $\xi\colon \spec \ell \arr X$ an object of $X(\ell)$. By a result due essentially to Keel and Mori (\cite[Lemma~6.4]{brosnan-reichstein-vistoli3}) there exists a representable étale morphism $Y \arr X$, where $Y$ is a stack with finite inertia, and a lifting $\eta\colon \spec \ell \arr Y$ of $\xi$. The fact that the morphism $Y \arr X$ is representable implies that the induced homomorphism $\mathop{\underline{\mathrm{Aut}}}\nolimits_{\ell}\eta \arr\mathop{\underline{\mathrm{Aut}}}\nolimits_{\ell}\xi$ is injective, which implies that $\mathop{\underline{\mathrm{Aut}}}\nolimits_{\ell}\eta$ is linearly reductive. By \cite[Proposition~3.6]{dan-olsson-vistoli1} there exists a tame open substack $Y'$ of $Y$ containing $\eta$. The rest of the argument is identical to that in the proof of \cite[Theorem~6.1]{brosnan-reichstein-vistoli3}; so we can assume that $X$ is tame. The theorems are easy consequences of the following. Let $X \arr S$ be a dominant morphism locally of finite type, where $X$ is a regular integral tame stack, $S$ an integral locally noetherian scheme. \begin{lemma}\label{lem:DVR} Let $\ell$ be a field, $\xi\colon \spec \ell \arr X$ a morphism; call $s \in S$ the image of the composite $\spec \ell \xrightarrow{\xi} X \arr S$. Then $\ell$\/ is an extension of $k(s)$, and we can think of $\xi$ as a morphism $\spec\ell \arr X_{k(s)}$. Assume that $s$ is not the generic point of $S$. \begin{enumerate1} \item If $S$ is locally of finite type over a field $k$, there exists a generalization $s'\neq s$ of $s$ such that \[ \operatorname{ed}_{k}\xi \leq \operatorname{ed}_{k}X_{k(s')}\,. \] \item If $S$ has dimension $1$, then \[ \operatorname{ed}_{k(s)}\xi \leq \operatorname{ed}_{k(S)}X_{k(S)}+1\,. \] \item If $X \arr S$ is smooth, there exists a generalization $s'\neq s$ of $s$ such that \[ \operatorname{ed}_{k(s)}\xi \leq\operatorname{ed}_{k(s')}X_{k(s')}\,. \] \end{enumerate1} \end{lemma} \begin{proof} By \cite[Théorème~6.3]{laumon-moret-bailly} there exists a regular scheme $U$ and a smooth morphism $U\to X$ with a lifting $\spec \ell\to U$ of $\xi$; if we call $u \in U$ the image of $\spec \ell \arr U$, we can replace $\ell$ with $k(u)$, and assume $\ell = k(u)$.
In cases (1) and (2), we call $V$ the inverse image of $S \setminus \overline{\{s\}}$ in $U$, and we construct a morphism $\spec R \arr U$ as in Lemma~\ref{cut2} such that the composition $\spec R\to S$ maps the generic point to a generalization $s'\neq s$ of $s$. In case (3) $X \arr S$ is smooth, so $U \arr S$ is also smooth, and $S$ is regular; in this case we start by choosing a point $s'$ in $S$ such that $s \in \overline{\{s'\}}$ and $\cO_{\overline{\{s'\}},s}$ is a DVR. Then we apply Lemma~\ref{cut} to the embedding $\cO_{\overline{\{s'\}},s} \subseteq \cO_{U',u}$, where $U'$ is the inverse image of $\overline{\{s'\}}$ in $U$, and obtain a morphism $\spec R\to U$ such that the composition $\spec R\to S$ maps the generic point to $s'$ and $\cO_{\overline{\{s'\}},s}\subseteq R$ is weakly unramified. Let $M$ be the moduli space of $X$, with the resulting factorization $X \arr M \arr S$. Let $K$ be the fraction field of $R$; there exists an intermediate extension $k(s')\subseteq K_{1}\subseteq K$ with \[ \operatorname{tr\mspace{1mu}deg}_{k(s')}K_{1} \le \operatorname{ed}_{k(s')}X_{k(s')} \] and a factorization $\spec K \to \spec K_{1} \to X$. Write $R_{1} = R\cap K_{1}$; clearly $K_{1} \not\subseteq R$, so that $R_{1}$ is a DVR; call $\ell_{1}\subseteq \ell$ its residue field. The composite $\spec R \arr M$ factors through $\spec R_{1}$, and we get a commutative diagram \[ \begin{tikzcd} \spec K\rar\dar[hook] & \spec K_{1}\rar & X\dar\ar[dr]\\ \spec R\ar[urr]\rar & \spec R_{1}\ar[from=u, hook, crossing over] \rar & M \rar & S \end{tikzcd} \] By Theorem~\ref{thm:valuative}, since $X\to M$ is proper, there exist an integer $n$ and a representable extension \[ \radice[n]{\spec R_{1}}\to X \] of the morphism $\spec K_{1} \arr X$. Since $X \arr S$ is separated, the diagram \[\begin{tikzcd} \radice[n]{\spec R}\rar\dar & \radice[n]{\spec R_{1}}\rar & X\dar\ar[dr]\\ \spec R\ar[urr]\rar & \spec R_{1}\ar[from=u, crossing over] \rar & M \rar &S \end{tikzcd}\] commutes. If we choose a lifting $\spec \ell \arr \radice[n]{\spec R}$ of the embedding $\spec \ell \subseteq \spec R$ we obtain a factorization \[\spec \ell \to \left(\spec \ell_{1}\times_{\spec R_{1}}\radice[n]{\spec R}\right)_{\rm red}= \cB_{\ell_{1}}\boldsymbol{\mu}_{n} \to X \] of $\xi$. Since the essential dimension of $\cB_{\ell_{1}}\boldsymbol{\mu}_{n}$ over $\ell_{1}$ is at most $1$, we get that \[\operatorname{ed}_{k(s)}\xi \leq \operatorname{tr\mspace{1mu}deg}_{k(s)}\ell_{1} +1\,.\] If $S$ is locally of finite type over a field $k$, then by Lemma~\ref{lem:inequality2} applied to $R_{1}/k$ we obtain \begin{align*} \operatorname{tr\mspace{1mu}deg}_{k(s)}\ell_{1}+1 &\leq \operatorname{tr\mspace{1mu}deg}_{k(s')}K_{1} + 1\\ &\leq \operatorname{tr\mspace{1mu}deg}_{k(s')}K_{1}+\operatorname{tr\mspace{1mu}deg}_{k}k(s')-\operatorname{tr\mspace{1mu}deg}_{k}k(s)\,. \end{align*} Since $\operatorname{ed}_{k}\xi = \operatorname{ed}_{k(s)}\xi + \operatorname{tr\mspace{1mu}deg}_{k}k(s)$ and $\operatorname{ed}_{k}X_{k(s')} = \operatorname{ed}_{k(s')}X_{k(s')} + \operatorname{tr\mspace{1mu}deg}_{k}k(s')$, we obtain the claim.
If $S$ has dimension $1$, then $s'$ is the generic point and by Lemma~\ref{lem:inequality} applied to the embedding $\cO_{S,s} \subseteq R_{1}$ we obtain the desired inequality \[\operatorname{tr\mspace{1mu}deg}_{k(s)}\ell_{1}+1 \leq \operatorname{tr\mspace{1mu}deg}_{k(S)}K_{1}+1 \leq \operatorname{ed}_{k(S)}X_{k(S)}+1\,.\] If $X \arr S$ is smooth, then $\cO_{\overline{\{s'\}},s}$ is a DVR and $R$ is weakly unramified over it; since $\cO_{\overline{\{s'\}},s}\subseteq R_{1}\subseteq R$, $R$ is weakly unramified over $R_{1}$ as well. We can thus assume that $n = 1$, by Lemma~\ref{loopram}, so that $\cB_{\ell_{1}}\boldsymbol{\mu}_{n} = \spec \ell_{1}$, and by Lemma~\ref{lem:inequality} applied to the embedding $\cO_{\overline{\{s'\}},s}\subseteq R_{1}$ we get \[ \operatorname{ed}_{k(s)}\xi \leq \operatorname{tr\mspace{1mu}deg}_{k(s)}\ell_{1}\leq \operatorname{tr\mspace{1mu}deg}_{k(s')}K_{1} \leq \operatorname{ed}_{k(s')}X_{k(s')} \] as claimed. \end{proof} \subsubsection*{The proof of Theorem~\ref{thm:genericity1}} We apply the above to the case in which $S = M$ is the moduli space of $X$. Let $\xi\colon \spec \ell \arr X$ be a morphism; we need to show that $\operatorname{ed}_{k}\xi \leq\operatorname{g\mspace{1mu}ed}_{k}X$. Call $s \in M$ the image of $\xi$; if $s$ is the generic point of $M$, the result follows immediately. If not, by induction on the codimension of $\overline{\{s\}}$ in $M$ we may assume that the inequality holds for all morphisms $\spec \ell' \arr X$ such that the codimension of the closure of the image $s' \in M$, which equals $\dim M - \operatorname{tr\mspace{1mu}deg}_{k}k(s')$, is less than the codimension of $\overline{\{s\}}$. In particular, this holds for morphisms as above such that $s'$ is a generalization of $s$ different from $s$. From Lemma~\ref{lem:DVR} we get a generalization $s'\neq s$ of $s$ and an inequality \[ \operatorname{ed}_{k}\xi \leq \operatorname{ed}_{k}X_{k(s')}\,. \] By the inductive hypothesis, $\operatorname{ed}_{k}X_{k(s')} \leq \operatorname{g\mspace{1mu}ed}_{k}X$, hence we conclude. \subsubsection*{The proof of Theorem~\ref{thm:genericity2}} By Theorem~\ref{thm:genericity1} applied to $X_{k(S)} \arr \spec k(S)$, it is enough to prove that $\operatorname{ed}_{k(s)}X_{k(s)} \leq\operatorname{ed}_{k(S)}X_{k(S)}$ for any $s \in S$. Once again we proceed by induction on the codimension of $\overline{\{s\}}$ in $S$, the case of codimension~$0$ being obvious. If this is positive, given a morphism $\xi\colon\spec\ell\arr X_{k(s)}$, Lemma~\ref{lem:DVR} gives us a generalization $s'$ of $s$ with $\operatorname{ed}_{k(s)}\xi \leq \operatorname{ed}_{k(s')}X_{k(s')}$; since $\operatorname{ed}_{k(s')}X_{k(s')}\leq \operatorname{ed}_{k(S)}X_{k(S)}$ by the inductive hypothesis, the theorem follows. \subsubsection*{The proof of Theorem~\ref{thm:genericity3}} By Theorem~\ref{thm:genericity1} applied to $X_{K} \arr \spec K$, it is enough to prove the inequality $\operatorname{ed}_{k}X_{k} \leq \operatorname{ed}_{K}X_{K}+1$, which follows immediately from Lemma~\ref{lem:DVR}. \section{Acknowledgments} We are grateful to the referees for their detailed reading and constructive comments. \end{document}
\begin{document} \title{Weighted polygamy inequalities of multiparty entanglement in arbitrary dimensional quantum systems} \author{Jeong San Kim} \email{[email protected]} \affiliation{ Department of Applied Mathematics and Institute of Natural Sciences, Kyung Hee University, Yongin-si, Gyeonggi-do 17104, Korea } \date{\today} \begin{abstract} We provide a generalization for the polygamy constraint of multiparty entanglement in arbitrary dimensional quantum systems. By using the $\beta$th-power of entanglement of assistance for $0\leq \beta \leq1$ and the Hamming weight of the binary vector related with the distribution of subsystems, we establish a class of weighted polygamy inequalities of multiparty entanglement in arbitrary dimensional quantum systems. We further show that our class of weighted polygamy inequalities can even be improved to be tighter inequalities with some conditions on the assisted entanglement of bipartite subsystems. \end{abstract} \pacs{ 03.67.Mn, 03.65.Ud } \maketitle \section{Introduction} \label{Intro} One intrinsic feature of quantum entanglement is the limited shareability of bipartite entanglement in multiparty quantum systems. This distinct property of quantum entanglement without any classical counterpart is known as the {\em monogamy of entanglement}(MoE)~\cite{T04, KGS}. MoE is mathematically characterized in a quantitative way; for a given three-party quantum state $\rho_{ABC}$ with its reduced density matrices $\rho_{AB}=\mbox{$\mathrm{tr}$}_C \rho_{ABC}$ and $\rho_{AC}=\mbox{$\mathrm{tr}$}_B \rho_{ABC}$, \begin{align} E\left(\rho_{A|BC}\right)\geq E\left(\rho_{A|B}\right)+E\left(\rho_{A|C}\right) \label{MoE} \end{align} where $E\left(\rho_{A|BC}\right)$ is the bipartite entanglement between subsystems $A$ and $BC$, and $E\left(\rho_{A|B}\right)$ and $E\left(\rho_{A|C}\right)$ are the bipartite entanglement between $A$ and $B$ and between $A$ and $C$, respectively. The {\em monogamy inequality} in~(\ref{MoE}) shows a mutually exclusive relation of the bipartite entanglement between $A$ and each of $B$ and $C$(that is, $E\left(\rho_{A|B}\right)$ and $E\left(\rho_{A|C}\right)$, respectively), so that their summation cannot exceeds the total entanglement between $A$ and $BC$(measured by $E\left(\rho_{A|BC}\right)$ ). The first monogamy inequality was established in three-qubit systems using {\em tangle} as the bipartite entanglement measure~\cite{CKW}. Later, it was generalized for multiqubit systems, and some cases of higher-dimensional quantum systems in terms of various bipartite entanglement measures~\cite{OV, KS, KDS, KSRenyi, KT, KSU}. Whereas MoE reveals the limited shareability of entanglement in multiparty quantum systems, the {\em assisted entanglement}, which is a dual amount to bipartite entanglement measures, is also known to have a dually monogamous property in multiparty quantum systems, namely, {\em polygamy of entanglement}(PoE). PoE is also mathematically characterized as {\em polygamy inequality}; \begin{align} E_a\left(\rho_{A|BC}\right)\leq E_a\left(\rho_{A|B}\right) +E_a\left(\rho_{A|C}\right), \label{PoE} \end{align} for a three-party quantum state $\rho_{ABC}$ where $E_a\left(\rho_{A|BC}\right)$ is the assisted entanglement~\cite{GMS}. The polygamy inequality in~(\ref{PoE}) was first proposed in three-qubit systems using {\em tangle of assistance}~\cite{GMS}, and generalized into multiqubit systems in terms of various assisted entanglements~\cite{GBS, KT, KSU}. 
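As a concrete illustration of the monogamy relation~(\ref{MoE}) with the tangle as the entanglement measure, recall the well-known values for the three-qubit W state $\ket{W}_{ABC}=\left(\ket{100}+\ket{010}+\ket{001}\right)/\sqrt{3}$ (see, for example,~\cite{CKW}):
\begin{align*}
\tau\left(\ket{W}_{A|BC}\right)=\frac{8}{9},
\qquad
\tau\left(\rho_{A|B}\right)=\tau\left(\rho_{A|C}\right)=\frac{4}{9},
\end{align*}
so that the tangle-based monogamy inequality is saturated, whereas for the GHZ state $\left(\ket{000}+\ket{111}\right)/\sqrt{2}$ the tangle across the $A|BC$ cut equals $1$ while both two-qubit reduced density matrices are separable, so the right-hand side of the inequality vanishes.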
For quantum systems beyond qubits, a general polygamy inequality of multiparty entanglement in arbitrary dimensional quantum systems was established using entanglement of assistance~\cite{BGK, KimGP, KimGP16}. One main difficulty in studying entanglement in multiparty quantum systems is that there are several inequivalent classes of genuine multiparty quantum entanglement that are not convertible to each other by means of stochastic local operations and classical communications (SLOCC)~\cite{DVC}; for example, there are two inequivalent classes of genuine three-party pure entangled states in three-qubit systems~\cite{DVC}. One is the {\em Greenberger-Horne-Zeilinger} (GHZ) class~\cite{GHZ}, and the other one is the W-class~\cite{DVC}. The existence of these inequivalent classes makes it infeasible to directly compare the amounts of entanglement from different classes, and it also makes it difficult to find a universal way to quantify multiparty quantum entanglement, even abstractly. Although this classification is based on interconvertibility under SLOCC, these inequivalent classes of genuine three-qubit entangled states also reveal different characters in terms of entanglement monogamy and polygamy. The tangle-based monogamy and polygamy inequalities of three-qubit entanglement in Inequalities (\ref{MoE}) and (\ref{PoE}) are saturated (thus they hold as equalities) by the W-class states, whereas the differences between terms can assume their largest values for the GHZ-class states. The saturation of the monogamy and polygamy inequalities for W-class states implies that this type of genuine three-qubit entanglement can be completely characterized by means of the bipartite entanglements within it, which is not the case for the GHZ-class states, the other type of genuine three-qubit entanglement. Thus entanglement monogamy and polygamy are not just distinct phenomena in multipartite quantum systems, but they also provide us with an efficient way to characterize multipartite entanglement from different classes. For multiqubit W-class states with more than three qubits, the tangle-based monogamy and polygamy inequalities are also saturated by this class, and thus an analogous interpretation can be applied. However, the tangle is known to fail to generalize the monogamy inequality to higher-dimensional systems beyond qubits~\cite{KDS}. This underlines the importance of having proper bipartite entanglement quantifications showing tight monogamy and polygamy inequalities for an efficient characterization of multiparty entanglement from different classes even in high-dimensional quantum systems. Recently, monogamy and polygamy inequalities of multiqubit entanglement were generalized in terms of non-negative powers of entanglement measures and assisted entanglements; it was shown that the $\alpha$th-power of the entanglement of formation and concurrence can be used to establish multiqubit monogamy inequalities for $\alpha \geq \sqrt{2}$ and $\alpha \geq 2$, respectively~\cite{Fei}. Later, tight classes of monogamy and polygamy inequalities of multiqubit entanglement using non-negative powers of various entanglement measures were also proposed~\cite{Fei2, KimPS18, KimPU18}. However, the validity of this tight generalization of entanglement constraints beyond qubit systems is still unclear. Here, we provide a tight polygamy constraint of multiparty entanglement in arbitrary dimensional quantum systems.
By using the $\beta$th-power of entanglement of assistance for $0\leq \beta \leq1$ and the Hamming weight of the binary vector related with the distribution of subsystems, we establish a class of weighted polygamy inequalities of multiparty entanglement in arbitrary dimensional quantum systems. We further show that our class of weighted polygamy inequalities can even be improved to be tighter inequalities with some conditions on the assisted entanglement of bipartite subsystems. The paper is organized as follows. In Sec.~\ref{Sec: poly}, we review the polygamy constraints of multiparty quantum entanglement based on tangle and entanglement of assistance. In Sec.~\ref{Sec: WPoly}, we first provide some notations and definitions about binary vectors as well as its Hamming weight, and establish a class of weighted polygamy inequalities of multiparty entanglement using the $\beta$th-power of entanglement of assistance for $0\leq \beta \leq1$. We also show that our class of weighted polygamy inequalities can be improved to be tighter inequalities with some conditions on the assisted entanglement of bipartite subsystems. Finally, we summarize our results in Sec.~\ref{Sec: Conclusion}. \section{Polygamy of multiparty Quantum Entanglement} \label{Sec: poly} The first polygamy inequality was established in three-qubit systems~\cite{GMS}; for a three-qubit pure state $\ket{\psi}_{ABC}$, \begin{equation} \tau\left(\ket{\psi}_{A|BC)}\right)\le\tau_a\left(\rho_{A|B}\right) +\tau_a\left(\rho_{A|C}\right), \label{3dual} \end{equation} where \begin{align} \tau\left(\ket{\psi}_{A|BC}\right)=4\det\rho_A \label{tangle} \end{align} is the tangle of the pure state $\ket{\psi}_{ABC}$ between $A$ and $BC$, and \begin{align} \tau_a\left(\rho_{A|B}\right)=\max \sum_i p_i\tau\left({\ket{\psi_i}_{A|B}}\right) \label{tanglemix} \end{align} is the tangle of assistance of $\rho_{AB}=\mbox{$\mathrm{tr}$}_C\ket{\psi}_{ABC}\bra{\psi}$ with the maximum taken over all possible pure-state decompositions of $\rho_{AB}=\sum_{i}p_i\ket{\psi_i}_{AB}\bra{\psi_i}$. Later, Inequality~(\ref{3dual}) was generalized into multiqubit systems~\cite{GBS} \begin{align} \tau_a\left(\rho_{A_1|A_2\cdots A_n}\right) \leq&\sum_{i=2}^{n}\tau_a \left(\rho_{A_1|A_i}\right), \label{ntpoly} \end{align} for an arbitrary multiqubit mixed state $\rho_{A_1\cdots A_n}$ and its two-qubit reduced density matrices $\rho_{A_1A_i}$ with $i=2,\ldots, n$. For polygamy inequality beyond qubits, it was shown that von Neumann entropy can be used to establish a polygamy inequality of three-party quantum systems~\cite{BGK}; for any three-party pure state $\ket{\psi}_{ABC}$ of arbitrary dimensions, we have \begin{align} E\left(\ket{\psi}_{A|BC}\right)\leq& E_a\left(\rho_{A|B}\right)+E_a\left(\rho_{A|C}\right), \label{EoApoly3} \end{align} where \begin{equation} E\left(\ket{\psi}_{A|BC}\right)=S\left(\rho_A\right) \label{eent} \end{equation} is the entropy of entanglement between $A$ and $BC$ in terms of the von Neumann entropy \begin{equation} S(\rho)=-\mbox{$\mathrm{tr}$} \rho\log\rho, \label{von} \end{equation} and $E_a(\rho_{A|B})$ is the entanglement of assistance(EoA) of $\rho_{AB}$ defined as~\cite{cohen} \begin{equation} E_a(\rho_{A|B})=\max \sum_{i}p_i E\left(\ket{\psi_i}_{A|B}\right) \label{eoa} \end{equation} with the maximization over all possible pure state decompositions of $\rho_{AB}=\sum_{i}p_i\ket{\psi_i}_{AB}\bra{\psi_i}$. 
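To illustrate the role of the maximization in the definition~(\ref{eoa}), consider the three-qubit GHZ state $\ket{\mathrm{GHZ}}_{ABC}=\left(\ket{000}+\ket{111}\right)/\sqrt{2}$ as an elementary example. Its two-qubit reduced density matrix $\rho_{AB}=\left(\ket{00}\bra{00}+\ket{11}\bra{11}\right)/2$ admits the pure-state decomposition into the states $\left(\ket{00}\pm\ket{11}\right)/\sqrt{2}$ with equal probabilities, each of which is maximally entangled, so that
\begin{align*}
E_a\left(\rho_{A|B}\right)=E_a\left(\rho_{A|C}\right)=S\left(\rho_{A}\right)
=E\left(\ket{\mathrm{GHZ}}_{A|BC}\right),
\end{align*}
where the upper bound $E_a\left(\rho_{A|B}\right)\leq S\left(\rho_{A}\right)$ follows from the concavity of the von Neumann entropy. In particular, the eigenvector decomposition of $\rho_{AB}$ alone would only contribute product states to the maximum in~(\ref{eoa}), and the polygamy inequality~(\ref{EoApoly3}) holds strictly for the GHZ state.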
Later, a general polygamy inequality of multiparty quantum entanglement was established as \begin{align} E_a\left(\rho_{A_1|A_2\cdots A_n}\right) \leq& \sum_{i=2}^{n} E_a \left(\rho_{A_1|A_i}\right), \label{EoApoly} \end{align} for any multiparty quantum state $\rho_{A_1A_2\cdots A_n}$ of arbitrary dimension~\cite{KimGP}. \section{Weighted Polygamy Constraints of multiparty Quantum Entanglement} \label{Sec: WPoly} Based on the binary expression of any nonnegative integer $j$, \begin{align} j=\sum_{i=0}^{n-1} j_i 2^i \label{bire2} \end{align} such that $\log_{2}j \leq n$ and $j_i \in \{0, 1\}$ for $i=0, \ldots, n-1$, we define a unique binary vector $\overrightarrow{j}$ associated with $j$ as \begin{align} \overrightarrow{j}=\left(j_0,~j_1,\ldots,j_{n-1}\right). \label{bivec} \end{align} For the binary vector $\overrightarrow{j}$ in Eq.~(\ref{bivec}), its {\em Hamming weight}~\cite{nc}, $\omega_{H}\left(\overrightarrow{j}\right)$, is defined as the number of $1's$ in its coordinates, that is, the number of $1's$ in $\{ j_0,~j_1,\ldots,j_{n-1} \}$. The following theorem states that a class of weighted polygamy inequalities of multiparty entanglement in arbitrary dimension can be established using the $\beta$th-power of EoA and the Hamming weight of the binary vector related with the distribution of subsystems. \begin{Thm} For $0 \leq \beta \leq 1$ and any $N+1$-party quantum state $\rho_{A\mbox{$\mathbb B$}}$ where $\mbox{$\mathbb B$}$ consists of $N$-party subsystems, there exists a proper ordering of the $N$-party subsystems $\mbox{$\mathbb B$}=\{B_0, \cdots, B_{N-1}\}$ such that \begin{equation} \left(E_a\left(\rho_{A|B_0 B_1 \cdots B_{N-1}}\right)\right)^{\beta} \leq \sum_{j=0}^{N-1}{\beta}^{\omega_{H}\left(\overrightarrow{j}\right)}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}. \label{peoapoly0} \end{equation} \label{tpoly0} \end{Thm} \begin{proof} Let us consider the ordering of the $N$-party subsystems $\mbox{$\mathbb B$}=\{B_0, \cdots, B_{N-1}\}$ where the EoA's between $A$ and each $B_j$ are in decreasing order, that is, \begin{align} E_a\left(\rho_{A |B_j}\right)\geq E_a\left(\rho_{A |B_{j+1}}\right)\geq0 \label{pordern2} \end{align} for each $j=0, \ldots , N-2$. From the monotonicity of the function $f(x)=x^{\beta}$ for $0 \leq \beta \leq 1$ and Inequality~(\ref{EoApoly}), we have \begin{align} \left(E_a\left(\rho_{A|B_0B_1\cdots B_{N-1}}\right)\right)^{\beta}\leq& \left(\sum_{j=0}^{N-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta}, \label{qbitalppoly1} \end{align} therefore, it is enough to show that \begin{align} \left(\sum_{j=0}^{N-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta}\leq& \sum_{j=0}^{N-1}{\beta}^{\omega_{H}\left(\overrightarrow{j}\right)}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}. \label{qbitalppoly2} \end{align} We first prove Inequality~(\ref{qbitalppoly2}) for the case that $N=2^n$, a power of 2, by using mathematical induction on $n$, and extend the result for any positive integer $N$. For $n=1$ and a three-party state $\rho_{AB_0B_1}$ with bipartite reduced density matrices $\rho_{AB_0}$ and $\rho_{AB_1}$, we have \begin{align} &\left(E_a\left(\rho_{A|B_0}\right)+E_a\left(\rho_{A|B_1}\right)\right)^{\beta}\nonumber\\ &~~~~~~~~~~=\left(E_a\left(\rho_{A|B_0}\right)\right)^{\beta}\left(1+\frac{E_a\left(\rho_{A|B_1}\right)} {E_a\left(\rho_{A|B_0}\right)} \right)^{\beta}. 
\label{p3scpoly1} \end{align} Because the ordering in Inequality~(\ref{pordern2}) assures~\cite{zerook} \begin{equation} 0\leq\frac{E_a\left(\rho_{A|B_1}\right)}{E_a\left(\rho_{A|B_0}\right)}\leq 1, \label{conoder0} \end{equation} Eq.~(\ref{p3scpoly1}) leads us to \begin{align} &\left(E_a\left(\rho_{A|B_0}\right)+E_a\left(\rho_{A|B_1}\right)\right)^{\beta}\nonumber\\ &~~~~~~~~~~~\leq\left(E_a\left(\rho_{A|B_0}\right)\right)^{\beta}+ \beta\left(E_a\left(\rho_{A|B_1}\right)\right)^{\beta}, \label{p3scpoly3} \end{align} where the inequality is due to \begin{align} \left(1+x\right)^{\beta}\leq 1+\beta x^{\beta}, \label{betle1} \end{align} for any $x \in \left[0,1\right]$ and $0\leq\beta \leq1$. Inequality~(\ref{p3scpoly3}) recovers Inequality~(\ref{qbitalppoly2}) for $N=2$, that is, $n=1$. Now we assume the validity of Inequality~(\ref{qbitalppoly2}) for $N=2^{n-1}$ with $n\geq 2$, and consider the case that $N=2^n$. For an $(N+1)$-party quantum state $\rho_{AB_0B_1 \cdots B_{N-1}}$ and its bipartite reduced density matrices $\rho_{AB_j}$ with $j=0, \ldots, N-1$, the ordering of subsystems in Inequality~(\ref{pordern2}) assures that \begin{equation} 0\leq\frac{\sum_{j=2^{n-1}}^{2^n-1}E_a\left(\rho_{A|B_j}\right)} {\sum_{j=0}^{2^{n-1}-1}E_a\left(\rho_{A|B_j}\right)}\leq 1. \label{conoder1} \end{equation} Thus we have \begin{widetext} \begin{align} \left(\sum_{j=0}^{2^n-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta} =&\left(\sum_{j=0}^{2^{n-1}-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta} \left(1+\frac{\sum_{j=2^{n-1}}^{2^n-1}E_a\left(\rho_{A|B_j}\right)} {\sum_{j=0}^{2^{n-1}-1}E_a\left(\rho_{A|B_j}\right)}\right)^{\beta}\nonumber\\ \leq& \left(\sum_{j=0}^{2^{n-1}-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta} \left[1+\beta\left(\frac{\sum_{j=2^{n-1}}^{2^n-1}E_a\left(\rho_{A|B_j}\right)} {\sum_{j=0}^{2^{n-1}-1}E_a\left(\rho_{A|B_j}\right)}\right)^{\beta}\right]\nonumber\\ =&\left(\sum_{j=0}^{2^{n-1}-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta} +\beta\left(\sum_{j=2^{n-1}}^{2^{n}-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta}, \label{pnscpoly} \end{align} \end{widetext} where the inequality is due to Inequality~(\ref{betle1}). From the induction hypothesis, we have \begin{align} \left(\sum_{j=0}^{2^{n-1}-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta}\leq& \sum_{j=0}^{2^{n-1}-1}{\beta}^{\omega_{H}\left(\overrightarrow{j}\right)}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}. \label{pnscpoly2} \end{align} Moreover, the second summation in the last line of~(\ref{pnscpoly}) is a summation of $2^{n-1}$ terms, therefore the induction hypothesis also guarantees \begin{align} \left(\sum_{j=2^{n-1}}^{2^{n}-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta}\leq& \sum_{j=2^{n-1}}^{2^{n}-1}{\beta}^{\omega_{H}\left(\overrightarrow{j}\right)-1}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}. \label{pnscpoly3} \end{align} (Possibly, we may index and reindex subsystems to get Inequality~(\ref{pnscpoly3}), if necessary.) From Inequalities~(\ref{pnscpoly}),~ (\ref{pnscpoly2}) and (\ref{pnscpoly3}), we have \begin{align} \left(\sum_{j=0}^{2^n-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta}\leq& \sum_{j=0}^{2^n-1}{\beta}^{\omega_{H}\left(\overrightarrow{j}\right)}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}, \label{qditp2poly} \end{align} which recovers Inequality~(\ref{qbitalppoly2}) for the case that $N=2^n$. Now let us consider an arbitrary positive integer $N$ and a $(N+1)$-party quantum state $\rho_{AB_0B_1\cdots B_{N-1}}$. 
We first note that we can always consider a power of $2$ that is an upper bound of $N$, that is $0\leq N \leq 2^{n}$ for some $n$. We also consider a $(2^{n}+1)$-party quantum state \begin{align} \gamma_{AB_0 B_1 \cdots B_{2^n-1}}=\rho_{AB_0B_1\cdots B_{N-1}}\otimes\sigma_{B_N \cdots B_{2^n-1}}, \label{gamma} \end{align} which is a product of $\rho_{AB_0B_1\cdots B_{N-1}}$ and an arbitrary $(2^n-N)$-party quantum state $\sigma_{B_N \cdots B_{2^n-1}}$. Because $\gamma_{AB_0 B_1 \cdots B_{2^n-1}}$ is a $(2^{n}+1)$-party quantum state, Inequality~(\ref{qditp2poly}) leads us to \begin{equation} \left(E_a\left(\gamma_{A|B_0 B_1 \cdots B_{2^n-1}}\right)\right)^{\beta} \leq \sum_{j=0}^{2^n-1}{\beta}^{\omega_{H}\left(\overrightarrow{j}\right)} \left(E_a\left(\gamma_{A|B_j}\right)\right)^{\beta}, \label{gapoly} \end{equation} where $\gamma_{AB_j}$ is the bipartite reduced density matric of $\gamma_{AB_0 B_1 \cdots B_{2^n-1}}$ for each $j= 0, \ldots, 2^n-1$. Moreover, $\gamma_{AB_0 B_1 \cdots B_{2^n-1}}$ is a product state of $\rho_{AB_0B_1\cdots B_{N-1}}$ and $\sigma_{B_N \cdots B_{2^n-1}}$, which implies \begin{align} E_a\left(\gamma_{A|B_0 B_1 \cdots B_{2^n-1}}\right)=E_a\left(\rho_{A|B_0 B_1 \cdots B_{N-1}}\right), \label{psame1} \end{align} and \begin{align} E_a\left(\gamma_{A|B_j}\right)=0, \label{psame3} \end{align} for $j=N, \ldots , 2^n-1$. Because \begin{align} \gamma_{AB_j}=\rho_{AB_j}, \label{psame2} \end{align} for each $j=0, \ldots , N-1$, we have \begin{align} \left(E_a\left(\rho_{A|B_0 B_1 \cdots B_{N-1}}\right)\right)^{\beta}=& \left(E_a\left(\gamma_{A|B_0 B_1 \cdots B_{2^n-1}}\right)\right)^{\beta}\nonumber\\ \leq& \sum_{j=0}^{2^n-1}{\beta}^{\omega_{H}\left(\overrightarrow{j}\right)} \left(E_a\left(\gamma_{A|B_j}\right)\right)^{\beta}\nonumber\\ =&\sum_{j=0}^{N-1}{\beta}^{\omega_{H}\left(\overrightarrow{j}\right)} \left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}, \label{gapoly2} \end{align} and this completes the proof. \end{proof} To illustrate the tightness of Inequality~(\ref{peoapoly0}) compared with Inequality~(\ref{EoApoly}) in previous section, let us consider the three-qubit W state \begin{align} \ket{W}_{ABC} =\frac{1}{\sqrt 3}\left(\ket{100}+\ket{010}+\ket{001}\right). \label{3W} \end{align} Because it is a pure state, we have \begin{align} E_a\left(\rho_{A|BC}\right)=&S(\rho_A) =\log3-\frac{2}{3}, \label{EoA1} \end{align} and the EoA of the two-qubit reduced density matrices are~\cite{sahoo} \begin{align} E_a\left(\rho_{A|B}\right)=E_a\left(\rho_{A|C}\right)=\frac{2}{3}. \label{EoA123} \end{align} Thus, the marginal EoA from Inequality~(\ref{EoApoly}) is \begin{align} E_a\left(\rho_{A|B}\right)&+E_a\left(\rho_{A|C}\right)\nonumber\\ &-E_a\left(\rho_{A|BC}\right)=2-\log3 \approx 0.415. \label{mareoa} \end{align} For the cases that $\beta=\frac{1}{2}$ or $\frac{1}{3}$, the marginal EoA's from Inequality~(\ref{peoapoly0}) for three-qubit W state are \begin{align} \sqrt{E_a\left(\rho_{A|B}\right)}&+\frac{1}{2}\sqrt{E_a\left(\rho_{A|C}\right)}\nonumber\\ &-\sqrt{E_a\left(\rho_{A|BC}\right)}\approx 0.272,\nonumber\\ E_a\left(\rho_{A|B}\right)^{1/3}&+\frac{1}{3}E_a\left(\rho_{A|C}\right)^{1/3}\nonumber\\ &-E_a\left(\rho_{A|BC}\right)^{1/3}\approx 0.196. \label{mareoa1/2} \end{align} Thus Inequality~(\ref{peoapoly0}) is generally tighter than Inequality~(\ref{EoApoly}), which also delivers better bounds to characterize the W-class type three-party entanglement by means of bipartite ones. 
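To make the role of the Hamming weight in Theorem~\ref{tpoly0} explicit, consider for instance the case $N=4$ with the subsystems ordered as in~(\ref{pordern2}): the binary vectors associated with $j=0,1,2,3$ are $(0,0)$, $(1,0)$, $(0,1)$ and $(1,1)$, with Hamming weights $0$, $1$, $1$ and $2$, respectively, so that Inequality~(\ref{peoapoly0}) reads
\begin{align*}
\left(E_a\left(\rho_{A|B_0 B_1 B_2 B_3}\right)\right)^{\beta}
\leq&\left(E_a\left(\rho_{A|B_0}\right)\right)^{\beta}
+\beta\left(E_a\left(\rho_{A|B_1}\right)\right)^{\beta}\\
&+\beta\left(E_a\left(\rho_{A|B_2}\right)\right)^{\beta}
+\beta^{2}\left(E_a\left(\rho_{A|B_3}\right)\right)^{\beta}.
\end{align*}
Because the ordering in~(\ref{pordern2}) places the largest assisted entanglements first, the weights ${\beta}^{\omega_{H}\left(\overrightarrow{j}\right)}\leq 1$ discount the smaller terms more strongly, which is the source of the improvement over Inequality~(\ref{EoApoly}).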
For any $0\leq \beta \leq 1$ and the Hamming weight $\omega_{H}\left(\overrightarrow{j}\right)$ of the binary vector $\overrightarrow{j}=\left(j_0, \ldots ,j_{n-1}\right)$, we have $0\leq {\beta}^{\omega_{H}\left(\overrightarrow{j}\right)}\leq1$, therefore \begin{align} \left(E_a\left(\rho_{A|B_0 B_1 \cdots B_{N-1}}\right)\right)^{\beta} \leq& \sum_{j=0}^{N-1}{\beta}^{\omega_{H}\left(\overrightarrow{j}\right)}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}\nonumber\\ \leq&\sum_{j=0}^{N-1}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}, \label{ineqtight1} \end{align} for any multiparty state $\rho_{AB_0 B_1 \cdots B_{N-1}}$. Thus we have the following corollary; \begin{Cor} For $0 \leq \beta \leq 1$ and any multiparty quantum state $\rho_{AB_0\cdots B_{N-1}}$, we have \begin{equation} \left(E_a\left(\rho_{A|B_0 B_1 \cdots B_{N-1}}\right)\right)^{\beta} \leq \sum_{j=0}^{N-1}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}. \label{peoapoly1} \end{equation} \label{Cor: poly} \end{Cor} We further note that the class of weighted polygamy inequalities in Theorem~\ref{tpoly0} can even be tightened with some condition on bipartite entanglement of assistance. \begin{Thm} For $0 \leq \beta \leq 1$ and any multiparty quantum state $\rho_{AB_0 \cdots B_{N-1}}$, we have \begin{equation} \left(E_a\left(\rho_{A|B_0 \cdots B_{N-1}}\right)\right)^{\beta} \leq \sum_{j=0}^{N-1}{\beta}^{j}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}, \label{peoapoly2} \end{equation} conditioned that \begin{align} E_a\left(\rho_{A|B_i}\right)\geq \sum_{j=i+1}^{N-1}E_a\left(\rho_{A|B_{j}}\right), \label{cond3} \end{align} for $i=0, \ldots , N-2$. \label{tpoly2} \end{Thm} \begin{proof} Inequality~(\ref{qbitalppoly1}) assures that it is enough to show \begin{align} \left(\sum_{j=0}^{N-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta}\leq& \sum_{j=0}^{N-1}{\beta}^{j}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}. \label{qditalppoly3} \end{align} We use the mathematical induction on $N$, and we also note that Inequality~(\ref{p3scpoly3}) guarantees the validity of Inequality~(\ref{qditalppoly3}) for $N=2$. Let us assume Inequality~(\ref{qditalppoly3}) is true for any positive integer less than $N$, and consider a multiparty quantum state $\rho_{AB_0 \cdots B_{N-1}}$. The condition in Inequality~(\ref{cond3}) assures \begin{equation} 0\leq\frac{\sum_{j=1}^{N-1}E_a\left(\rho_{A|B_j}\right)} {E_a\left(\rho_{A|B_0}\right)}\leq 1, \label{conoder2} \end{equation} thus, we have \begin{widetext} \begin{align} \left(\sum_{j=0}^{N-1}E_a\left(\rho_{A|B_j}\right)\right)^{\beta} =&\left(E_a\left(\rho_{A|B_0}\right)\right)^{\beta} \left(1+\frac{\sum_{j=1}^{N-1}E_a\left(\rho_{A|B_j}\right)} {E_a\left(\rho_{A|B_0}\right)} \right)^{\beta}\nonumber\\ \leq&\left(E_a\left(\rho_{A|B_0}\right)\right)^{\beta} \left[1+\beta\left(\frac{\sum_{j=1}^{N-1}E_a\left(\rho_{A|B_j}\right)} {E_a\left(\rho_{A|B_0}\right)}\right)^{\beta}\right]\nonumber\\ =&\left(E_a\left(\rho_{A|B_0}\right)\right)^{\beta}+\beta\left(\sum_{j=1}^{N-1}E_a\left(\rho_{A|B_j}\right) \right)^{\beta}, \label{pnscpoly6} \end{align} \end{widetext} where the inequality is due to Inequality~(\ref{betle1}). The summation in the last line of~(\ref{pnscpoly6}) is a summation of $N-1$ terms, therefore the induction hypothesis leads us to \begin{align} \left(\sum_{j=1}^{N-1}E_a\left(\rho_{A|B_j}\right) \right)^{\beta}\leq \sum_{j=1}^{N-1}{\beta}^{j-1}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}. 
\label{pnscpoly7} \end{align} Now, Inequalities~(\ref{pnscpoly6}) and (\ref{pnscpoly7}) recover Inequality~(\ref{qditalppoly3}), and this completes the proof. \end{proof} For any nonnegative integer $j$ and its corresponding binary vector $\overrightarrow{j}$, the Hamming weight $\omega_{H}\left(\overrightarrow{j}\right)$ is bounded above by the number of binary digits of $j$, and therefore by $j$ itself. Thus we have \begin{align} \omega_{H}\left(\overrightarrow{j}\right)\leq j, \label{numcom} \end{align} therefore \begin{align} \left(E_a\left(\rho_{A|B_0 \cdots B_{N-1}}\right)\right)^{\beta} \leq& \sum_{j=0}^{N-1}{\beta}^{j}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}\nonumber\\ \leq& \sum_{j=0}^{N-1}{\beta}^{\omega_{H}\left(\overrightarrow{j}\right)}\left(E_a\left(\rho_{A|B_j}\right)\right)^{\beta}, \label{ineqtight3} \end{align} for $0\leq \beta \leq 1$. Thus, Inequality~(\ref{peoapoly2}) of Theorem~\ref{tpoly2} is tighter than Inequality~(\ref{peoapoly0}) of Theorem~\ref{tpoly0} for $0\leq \beta \leq 1$ and any multiparty quantum state $\rho_{AB_0 B_1 \cdots B_{N-1}}$ satisfying the condition in Inequality~(\ref{cond3}). \section{Conclusions}\label{Sec: Conclusion} We have provided a generalization for the polygamy constraint of multiparty entanglement in arbitrary dimensional quantum systems. By using the $\beta$th-power of entanglement of assistance for $0\leq \beta \leq1$ and the Hamming weight of the binary vector related with the distribution of subsystems, we have established a class of weighted polygamy inequalities of multiparty entanglement in arbitrary dimensional quantum systems. We have further shown that our class of weighted polygamy inequalities can be improved to be tighter inequalities with some conditions on the assisted entanglement of bipartite subsystems. The study of higher-dimensional quantum systems is important and even necessary in various quantum information and communication processing tasks. For instance, qudit systems for $d>2$ are sometimes preferred in quantum cryptography, such as in quantum key distribution, where the use of qudits increases coding density and provides stronger security compared to qubits~\cite{gjvw}. However, the entanglement properties of higher-dimensional systems are still largely unknown, and generalizing the analysis of multiparty entanglement, especially the monogamy and polygamy constraints, from the qubit to the qudit case is far from trivial. Thus even fundamental steps toward understanding entanglement in multiparty higher-dimensional systems are necessary and fruitful for understanding the whole picture of quantum entanglement. The results presented here provide generalized polygamy constraints for multiparty entanglement in arbitrary higher-dimensional quantum systems. Moreover, our class of polygamy inequalities provides tighter constraints, which can also give finer characterizations of the entanglement distribution among the multiparty systems. Given the importance of the study of multiparty quantum entanglement, especially in higher-dimensional quantum systems, our results can serve as a useful reference for future work on multiparty quantum entanglement. \section*{Acknowledgments} This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B03034727) and a grant from Kyung Hee University in 2017 (KHU-20170716). \end{document}
\begin{document} \title[Structure Theory for a Class of Grade $3$ Ideals]{Structure Theory for a Class of Grade 3 Homogeneous Ideals Defining Type 2 Compressed Rings} \author{Keller VandeBogert} \date{\today} \maketitle \begin{abstract} Let $R=k[x,y,z]$ be a standard graded $3$-variable polynomial ring, where $k$ denotes any field. We study grade $3$ homogeneous ideals $I \subseteq R$ defining compressed rings with socle $k(-s) \oplus k(-2s+1)$, where $s \geqslant3$ is some integer. We prove that all such ideals are obtained by a trimming process introduced by Christensen, Veliche, and Weyman in \cite{christ}. We also construct a general resolution for all such ideals which is minimal in sufficiently generic cases. Using this resolution, we give bounds on the minimal number of generators $\mu(I)$ of $I$ depending only on $s$; moreover, we show these bounds are sharp by constructing ideals attaining the upper and lower bounds for all $s\geqslant 3$. Finally, we study the Tor-algebra structure of $R/I$. It is shown that these rings have Tor algebra class $G(r)$ for $s \leqslant r \leqslant 2s-1$. Furthermore, we produce ideals $I$ for all $s \geqslant 3$ and all $r$ with $s \leqslant r \leqslant 2s-1$ such that $\operatorname{Soc} (R/I ) = k(-s) \oplus k(-2s+1)$ and $R/I$ has Tor-algebra class $G(r)$, partially answering a question of realizability posed by Avramov in \cite{av2012}. \end{abstract} \section{Introduction} Let $(R,\mathfrak{m},k)$ be a regular local ring with maximal ideal $\mathfrak{m}$. A result of Buchsbaum and Eisenbud (see \cite{BE}) established that any quotient $R/I$ of $R$ with projective dimension $3$ admits the structure of an associative commutative differential graded (DG) algebra. Later, a complete classification of the multiplicative structure of the Tor algebra $\operatorname{Tor}_\bullet^R (R/I , k)$ for such quotients was established by Weyman in \cite{wey89} and Avramov, Kustin, and Miller in \cite{torclass}. One parametrized family arising from the aforementioned classification of Tor algebras is the class $G(r)$, where $r$ is a parameter arising from the rank of the induced map $$\delta : \operatorname{Tor}_2^R (R/I , k) \to \Hom_k ( \operatorname{Tor}_1^R(R/I , k) , \operatorname{Tor}_3^R (R/I ,k)).$$ If $I \subset R$ is such that $R/I$ is Gorenstein, then it is shown by Avramov and Golod in \cite{av71} that the Koszul homology algebra of $R/I$ is a Poincar\'e duality algebra. Indeed, an equivalent characterization of the Tor algebra class $G(r)$ is that there exists a subalgebra of the Tor algebra minimally exhibiting Poincar\'e duality, in the sense that there does not exist any nontrivial multiplication outside of this subalgebra (see Definition \ref{classg} for a precise statement). It can be shown that if $R/I$ is Gorenstein (and not a complete intersection) of codimension $3$, then $R/I$ has Tor algebra class $G(\mu(I))$, where $\mu(I)$ denotes the minimal number of generators of $I$. Avramov conjectured in \cite{av2012} that quotients of Tor algebra class $G$ are necessarily Gorenstein rings. The technique of ``trimming'' a Gorenstein ideal is used by Christensen, Veliche, and Weyman (see \cite{christ}) to produce codimension $3$ non-Gorenstein rings with Tor algebra class $G$.
If $(R,\ideal{m})$ is a regular local ring and $I = (\oplushi_1 , \dots , \oplushi_n) \ideal{m}athfrak{S}ubseteq R$ is an $\ideal{m}$-primary ideal with $R/I$ of codimension $3$, then an example of this trimming process is the formation of the ideal $(\oplushi_1 , \dots , \oplushi_{n-1} ) + \ideal{m} \oplushi_n$. The classification of perfect codimension $3$ ideals has seen significant progress recently, starting with the paper \cite{wey2018} (extending the work started in \cite{wey89}), which links this structure theory to the representation theory of Kac-Moody Lie algebras. Resolutions of a given format (sequence of Betti numbers) have an associated graph, and it is conjectured in \cite{christ2} that an ideal is in the linkage class of a complete intersection if and only if this associated graph is a Dynkin diagram. In \cite[Question 3.8]{av2012}, Avramov poses a question of realizability; that is, which Tor algebra classes of codimension $3$ local rings can actually occur? Using techniques of linkage, this question is explored in \cite{christ3}, refining the classification provided in \cite{torclass} and showing that every grade $3$ perfect ideal in a regular local ring is in the linkage class of either a complete intersection or an ideal defining a Golod ring. In this paper, we examine grade $3$ homogeneous ideals $I \ideal{m}athfrak{S}ubset R:= k[x,y,z]$ (with all variables having degree $1$, and $k$ being a field of arbitrary characteristic) defining an Artinian compressed ring with socle $\operatorname{Soc} (R/I) = k(-s) \oplus k(-2s+1)$. The values $s$ and $2s-1$ are interesting because they provide a boundary case for socle degrees; more precisely, it is not possible to have a type $2$ ring with socle $k(-s_1) \oplus k(-s_2)$, where $s_2 \geqslant 2s_1$. In particular, we prove that all such ideals arise as trimmings of Gorenstein ideals. In Theorem \ref{12.6}, we produce a general resolution for trimmed Gorenstein ideals that is minimal in some generic cases (see Proposition \ref{btab3} for the relevant parameter space and the corresponding open subset). Even in the cases where this resolution is not minimal, there is valuable information to be gained from the relatively simple differentials involved. We give sharp bounds for the graded Betti numbers for ideals resolved by Theorem \ref{12.6}. Furthermore, we produce a family of ideals attaining all possible intermediate Betti numbers. This family is also used to show that for any integers $r$ and $s$ with $s\geqslant 3$ and $s \leqslant r \leqslant 2s-1$, there exists a grade $3$ ideal $I$ defining an Artinian compressed ring with $\operatorname{Soc} (R/I) = k(-s) \oplus k(-2s+1)$ of Tor algebra class $G(r)$ (see Corollary \ref{torach2}), which partially answers the question of realizability mentioned above. More generally, any such $I$ with $s \geqslant 3$ must have Tor algebra class $G( \ideal{m}u (I) - 3)$. The paper is organized as follows: Sections \ref{sec2} and \ref{genbetti} consist of preliminary material and notation. Section \ref{gr3} proves the previously mentioned fact that any grade $3$ ideal $I \ideal{m}athfrak{S}ubset k[x,y,z]$ defining an Artinian compressed ring with $\operatorname{Soc} (R/I ) = k(s) \oplus k(-2s+1)$ is obtained as the trimming of some grade $3$ Gorenstein ideal. Section \ref{theres} builds a resolution of all such ideals, deducing some consequences of the structure of the differentials along the way. 
In Section \ref{appex} we explore the initial consequences of the resolution built in Section \ref{theres}. In the standard graded case, we find a remarkably simple criterion to deduce whether the trimmed generating set of a Gorenstein ideal is a minimal generating set (see Proposition \ref{isminl}). In particular, questions about minimal generators are translated into counting degrees of the entries of the presenting matrix of a Gorenstein ideal. Section \ref{extbetti} deduces the maximal number of minimal generators of a grade $3$ ideal $I$ defining an Artinian compressed ring with $\operatorname{Soc} (R/I ) = k(s) \oplus k(-2s+1)$. Moreover, we produce an ideal achieving this upper bound for every $s \geqslant 2$, showing that the bound is sharp. Section \ref{toralgstr} delves into some more nontrivial consequences of the tools developed beforehand. In \cite{christ}, all possible Tor algebra structures of trimmed Gorenstein ideals are enumerated. As a consequence, all possible Tor algebra structures for the ideals of interest may be deduced. Combining this with the bounds on the minimal number of generators, we show that all such ideals are class $G(r)$ for some $s \leqslant r \leqslant 2s-1$. Furthermore, using information from the resolution of Section \ref{theres}, we show that every such $r$ value between $s$ and $2s-1$ may be achieved by choosing an ideal from the family introduced in Section \ref{extbetti}. Finally, Section \ref{evencase} drops the top socle degree by $1$ and gives a rudimentary analysis of grade $3$ ideals $I \ideal{m}athfrak{S}ubset k[x,y,z]$ defining compressed rings with $\operatorname{Soc} (R/I) = k(-s) \oplus k(-2s+2)$. Such ideals are also trimmed Gorenstein ideals, hence resolved by Theorem \ref{12.6}. The possible Tor algebra structures are much more limited in this case, and we end with a question about the existence of ideals with a prescribed number of minimal generators that are also Tor algebra class $G(r)$, for a specified $r$. \ideal{m}athfrak{S}ection{Compressed Rings and Inverse Systems}\label{sec2} \begin{definition} Let $A$ be a local Artinian $k$-algebra, where $k$ is a field and $\ideal{m}$ denotes the maximal ideal. The top socle degree is the maximum $s$ with $\ideal{m}^s \ideal{n}eq 0$ and the socle polynomial of $A$ is the formal polynomial $\ideal{m}athfrak{S}um_{i=0}^s c_i z^i$, where $$c_i = \dim_k \ideal{r}ac{\operatorname{Soc}le (A) \cap \ideal{m}^i}{\operatorname{Soc}le (A) \cap \ideal{m}^{i+1}}.$$ An Artinian $k$-algebra is \mathbf{e}mph{standard graded} if it is generated as an algebra in degree $1$. \mathbf{e}nd{definition} \begin{definition}\label{compdef} A standard graded Artinian $k$-algebra $A$ with embedding dimension $e$, top socle degree $s$, and socle polynomial $\ideal{m}athfrak{S}um_{i=0}^s c_i z^i$ is \mathbf{e}mph{compressed} if $$\dim_k \ideal{m}^i/\ideal{m}^{i+1} = \ideal{m}in \Big\{ \binom{e-1+i}{i} , \ideal{m}athfrak{S}um_{\mathbf{e}ll=0}^s c_\mathbf{e}ll \binom{e-1+\mathbf{e}ll-i}{\mathbf{e}ll - i} \Big\}$$ for $i =0, \dots , s$. \mathbf{e}nd{definition} \begin{setup}\label{setup1} Let $n \geqslant 1$ be an integer and $k$ denote a field of arbitrary characteristic. Let $V$ be a vector space of dimension $n$ over $k$. Give the symmetric algebra $S(V) =: R$ and divided power algebra $D(V^*)$ the standard grading (that is, $S_1(V) = V$, $D_1 (V^*) = V^*$). The notation $S_i := S_i (V)$ denotes the degree $i$ component of the symmetric algebra on $V$. 
Similarly, the notation $D_i := D_i (V^*)$ denotes the degree $i$ component of the divided power algebra on $V^*$. Given a homogeneous $I \ideal{m}athfrak{S}ubseteq S(V)$ defining an Artinian ring, there is an associated inverse system $0:_{D(V^*)} I$. Similarly, for any finitely generated graded submodule $N \ideal{m}athfrak{S}ubseteq D(V^*)$ there is a corresponding homogeneous ideal $0:_{S(V)} N$ defining an Artinian ring. If $I$ is a homogeneous ideal with associated inverse system minimally generated by elements $\oplushi_1 , \ \dots , \oplushi_k$ with $\deg \oplushi_i = s_i$, then there are induced vector space homomorphisms $$\Phi_i : S_i \otimeso \bigoplus_{j=1}^k D_{s_j - i}$$ sending $f \ideal{m}apsto (f \cdot \oplushi_1 , \dots , f \cdot \oplushi_k)$. \mathbf{e}nd{setup} \begin{observation}\label{obs2} Let $I \ideal{m}athfrak{S}ubseteq S(V)$ be a homogeneous ideal with associated inverse system minimally generated by elements $\oplushi_1 , \ \dots , \oplushi_k$ with $\deg \oplushi_i = s_i$. If the induced maps $\Phi_i$ of Setup \ref{setup1} have maximal rank for all $i$, then the ring $R/I$ is compressed, as in Definition \ref{compdef}. \mathbf{e}nd{observation} \begin{proof} By definition, $I_i = \Ker \Phi_i$; by the rank-nullity theorem, \begingroup\allowdisplaybreaks \begin{align*} \dim_k (R/I)_i &= \dim_k \operatorname{im} \Phi_i \\ &= \ideal{m}in \Big\{ \dim_k S_i , \dim_k \bigoplus_{j=1}^\mathbf{e}ll D_{s_j - i } \Big\} \\ \mathbf{e}nd{align*} \mathbf{e}ndgroup where the latter equality follows by the assumption that $\Phi_i$ has maximal rank. \mathbf{e}nd{proof} \begin{definition} Let $I \ideal{m}athfrak{S}ubseteq S(V)$ be a homogeneous ideal with associated inverse system minimally generated by elements $\oplushi_1 , \ \dots , \oplushi_k$ with $\deg \oplushi_i = s_i$. Let $m$ denote the first integer for which $\Phi_m$ is a surjection. Then $m$ is called the \mathbf{e}mph{tipping point} of $I$; this is well defined since the rank of the domain and codomain of each $\Phi_i$ is increasing/decreasing in $i$, respectively (and the codomain is eventually $0$). \mathbf{e}nd{definition} \begin{prop}[\cite{miller}, Lemma 1.13]\label{proplol} Let $\oplushi$ be a homogeneous element of $D(V^*)$ of degree $s$. Then the tipping point of the ideal $0 :_{S(V)} \oplushi$ is $\lceil s/2 \rceil$. In addition, the induced maps $\Phi_i$ satisfy the following properties for every integer $i$. \begin{enumerate}[(a)] \item $\Hom_{k} (\Phi_i , k) = \Phi_{s-i}$ \item $\Phi_i$ is surjective if and only if $\Phi_{s-i}$ is injective. \mathbf{e}nd{enumerate} \mathbf{e}nd{prop} \begin{definition} Adopt Setup \ref{setup1} with $R = S(V)$ and let $\oplussi : V \otimeso R$. The Koszul complex $K_\bullet$ on $\oplussi$ is the complex obtained by setting $$K_i := \bigwedge^i V \otimes R(-i)$$ with differential $$\delta_i : \bigwedge^i V \otimes R(-i) \otimeso \bigwedge^{i-1} V \otimes R(-i+1)$$ defined as multiplication by $\oplussi \in V^*$ (where $\bigwedge^\bullet V$ is given the standard module structure over $\bigwedge^\bullet V^*$). \mathbf{e}nd{definition} The following can be found as Proposition $2.5$ of \cite{boij}: \begin{prop}\label{caval} Let $I$ be a homogeneous ideal in $R:= S(V)$ of initial degree $t$, and set $A = R/I$. 
Then $$\operatorname{Tor}_i^R (A , k)_{i+t-1} \cong \Ker (\oplusi ) \cap \Ker (\delta_{i-1}), \ideal{q}uad i=2, \dots , n$$ where $\delta_{i-1}$ is the Koszul differential $\bigwedge^{i-1}V \otimes R_t \otimeso \bigwedge^{i-2}V \otimes R_{t+1}$ and $\oplusi$ is the quotient map $\bigwedge^{i-1} V \otimes R_t \otimeso \bigwedge^{i-1} V \otimes A_t$. \mathbf{e}nd{prop} \begin{remark}\label{rk1} Adopt notation and hypotheses of Setup \ref{setup1}. Let $\oplussi : V \otimeso R $ be such that $\operatorname{im} \oplussi = R_+$. Observe that $\Ker (\oplusi) \cap \Ker ( \delta_{i-1})$ as in Proposition \ref{caval} is precisely $\Ker ( \oplusi ) \cap \operatorname{im} (\delta_i)$ by exactness of the Koszul complex. The latter set may be described as the kernel of the composition of $k$-vector space homomorphisms $$\mathbf{x}ymatrix{\bigwedge^i V \otimes S_t (V) \ar[r]^-{\delta_i} & \bigwedge^{i-1} V \otimes S_{t+1} (V) \\ \ar[r]^-{1 \otimes \Phi_{t+1}} & \bigwedge^{i-1} V \otimes D_{c-t-1} (V^*). \\}$$ Denote the above composition of $k$-vector space homomorphisms by $$\Theta_i (\oplushi) : \bigwedge^i V \otimes S_t (V) \otimeso \bigwedge^{i-1} V \otimes D_{c-t-1} (V^*).$$ \mathbf{e}nd{remark} \begin{prop}\label{rank} Adopt Setup \ref{setup1}. Let $A = R/I$ where $I = (0 :_R \oplushi)$, $\oplushi \in D_c(V^*)$, and let $t$ denote the initial degree of $I$, $n = \operatorname{pd}_R R/I$. Then, $$\dim_k \operatorname{Tor}_i^R (A , k)_{i+t-1} = \binom{t-1+i-1}{i-1} \binom{t-1+n}{n-i} - \operatorname{rank} \Theta_i (\oplushi)$$ for all $i=2, \dots , n$. \mathbf{e}nd{prop} \begin{proof} First observe that the minimal homogeneous free resolution of $$\operatorname{im} (\delta_i : \bigwedge^{i+1} V \otimes R_{t-1} \otimeso \bigwedge^i V \otimes R_t)$$ is obtained by truncating the Koszul complex: $$0 \otimeso \bigwedge^{i+t} V \otimes R_0 \otimeso \dots \otimeso \bigwedge^{i+1} V \otimes R_{t-1} \otimeso \operatorname{im}(\delta_i) \otimeso 0$$ whence \begin{equation*} \begin{split} \dim \operatorname{im} (\delta_i) &= \ideal{m}athfrak{S}um_{j=1}^t (-1)^{j+1} \dim \Big( \bigwedge^{i+j} V \otimes R_{t-j} \Big) \\ &= \ideal{m}athfrak{S}um_{j=1}^t (-1)^{j+1} \binom{n+1}{i+j} \cdot \binom{n+t-j}{t-j}. \\ \mathbf{e}nd{split} \mathbf{e}nd{equation*} By Lemma $1.2$ of \cite{crv}, this sum is equal to $\binom{i+t-1}{i} \cdot \binom{n+t}{i+t}$. Similarly, by construction $\dim \Ker (\Theta_i (\oplushi)) = \dim \big( \Ker ( \oplusi) \cap \operatorname{im} (\delta_i) \big)$. By exactness of the Koszul complex, $\operatorname{im} (\delta_i) = \Ker (\delta_{i-1})$; combining this with Proposition \ref{caval}: $$\dim \operatorname{Tor}_i^R (A , k)_{i+t-1} = \dim \Ker ( \Theta_i (\oplushi) ).$$ By the rank-nullity theorem, $$\dim \operatorname{Tor}_i^R (A , k)_{i+t-1} = \dim \operatorname{im} (\delta_i) - \operatorname{rank}_k \Theta_i (\oplushi)$$ \mathbf{e}nd{proof} \begin{cor}\label{gener} Adopt notation and hypotheses as in Setup \ref{setup1}. Then there is a nonempty open set $U$ in the Grassmannian parametrizing all $1$-dimensional subspaces of $D_c (V^*)$ such that the Betti numbers of the $k$-algebra $A = R/ (0:_{S(V)} \oplushi) $ are the same for all $[\oplushi]$ in $U$ (where $[\oplushi]$ denotes the class of the subspace spanned by $\oplushi \in D_c (V^*)$). \mathbf{e}nd{cor} \begin{proof} Take the open subset $U$ to be the set of all $1$-dimensional subspaces $[\oplushi]$ of $D_c (V^*)$ such that $\Theta_i (\oplushi)$ has maximal rank for each $i=2, \dots , n$. 
We may identify the Grassmannian $\otimeshetaxtrm{Gr} (1, D_c (V^*))$ with the projective space $\mathbb{P}^{\binom{c+2}{2}-1}$, so it suffices to show that the complement of $U$ is the zero set of homogeneous polynomials in the variables $p_1 , \dots , p_{\binom{c+2}{2}}$, where $[\oplushi ] = [p_1 : \cdots : p_{\binom{c+2}{2}}]$. Let $\mathbf{e}psilon_\beta$ denote any standard basis element of $\bigwedge^i V$, so $\beta = (\beta_1 , \dots , \beta_i)$ with $\beta_1 < \cdots < \beta_i$. Let $m \in S_t (V)$ be any degree $t$ monomial. We compute $$\Theta_i (\oplushi)(m) = \ideal{m}athfrak{S}um_{j=1}^i (-1)^{i+1} \mathbf{e}psilon_{\beta \mathbf{a}ckslash \beta_j} \otimes (\oplussi(\mathbf{e}psilon_{\beta_j})m)\cdot \oplushi,$$ implying that the matrix representation of $\Theta_i ( \oplushi)$ has entries of the form $\oplusm p_\mathbf{e}ll$, for $\mathbf{e}ll=1 , \dots , \binom{c+2}{2}$, where the basis chosen for $\bigwedge^{i-1} V \otimes D_{c-t-1} (V^*)$ consists of the tensor products of the standard basis for $\bigwedge^{i-1} V$ and the monomial basis for $D_{c-t-1} (V^*)$. The complement of $U$ is the union of the zero sets of the determinant of the above matrix representation for each $i=2, \dots , n$, which is a homogeneous polynomial in the $p_\mathbf{e}ll$. As a finite union of closed sets, this set is closed. Thus $U$ is an open set, and by Proposition \ref{rank}, any $[\oplushi] \in U$ gives rise to an ideal $(0:_R \oplushi)$ whose Betti numbers are independent of the choice of $\oplushi$. \mathbf{e}nd{proof} \ideal{m}athfrak{S}ection{Generic Betti Numbers for Grade $3$ Gorenstein Ideals}\label{genbetti} \begin{definition} A standard graded Artinian algebra is \mathbf{e}mph{level} if its socle is concentrated entirely in a single degree. \mathbf{e}nd{definition} \begin{prop}[\cite{boij}, Proposition 3.6]\label{equalities} Let $R/I$ be a standard graded compressed level Artinian algebra of embedding dimension $r$, socle degree $c$, socle dimension $m$, and assume $I$ has initial degree $t$. Then \begin{equation*} \begin{split} &\dim_k \operatorname{Tor}_i^R (R/I , k)_{t+i-1} - \dim_k \operatorname{Tor}_{i-1}^R (R/I , k)_{t+i-1} = \\ &\binom{t-1+i-1}{i-1} \cdot \binom{t-1+r}{r-i} - m \binom{c-t+r-i}{r-i} \cdot \binom{c-t+r}{i-1} \\ \mathbf{e}nd{split} \mathbf{e}nd{equation*} for $i=1, \dots , r-1$. \mathbf{e}nd{prop} \begin{prop}\label{btab1} Let $R = \otimeshetaxtrm{k} [x,y,z]$ be standard graded and $I$ a homogeneous grade $3$ Gorenstein ideal with $R/I$ compressed and $\operatorname{Soc}le (R/I) = k(-2s+1)$ for some integer $s$. Then $R/I$ has Betti table of the form \begin{equation*}\begin{tabular}{L|L|L|L|L} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ \hline s-1 & 0 &s+1 & b & 0 \\ \hline s & 0 & b& s+1 & 0 \\ \hline 2s-1 & 0 & 0 & 0 &1 \\ \mathbf{e}nd{tabular} \mathbf{e}nd{equation*} where $b$ is some integer. Moreover, $b \leqslant s$. \mathbf{e}nd{prop} \begin{proof} Employ Proposition \ref{equalities}, where $r=3$, $c = 2s-1$, $m=1$, and $t = s$ ($= \lceil (2s-1)/2 \rceil$; see Proposition \ref{proplol}). Using the notation $$T_i := \operatorname{Tor}_i^R (R/I,k),$$ we obtain \begin{equation*} \begin{split} & \dim (T_1)_{s} = s+1 \\ &\dim(T_2)_{s+1} - \dim(T_1)_{s+1} = 0 \\ &\dim (T_2)_{s+2} = s+1. \\ \mathbf{e}nd{split} \mathbf{e}nd{equation*} Thus define $b:= \dim_k (T_2)_{s+1}$. 
The final claim that $b \leqslant s$ follows from the fact that the Betti table has the following decomposition into standard pure Betti diagrams: \begin{equation*} \begin{split} & \ideal{r}ac{b}{2(s+1)^2-2} \cdot \begin{tabular}{L|L|L|L|L} & 0 & 1 & 2 & 3 \\ \hline 0 & s+2 & 0 & 0 & 0 \\ \hline s-1 & 0 &2(s+1)^2 & 2(s+1)^2-2 & 0 \\ \hline s & 0 & 0 & 0 & 0 \\ \hline 2s-1 & 0 & 0 & 0 &s \\ \mathbf{e}nd{tabular} \\ + & \ideal{r}ac{(s+1)^2-1 - (s+1)b}{(s+1)^2-1} \cdot \begin{tabular}{L|L|L|L|L} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ \hline s-1 & 0 &s+1 & 0 & 0 \\ \hline s & 0 & 0 & s+1 & 0 \\ \hline 2s-1 & 0 & 0 & 0 &1 \\ \mathbf{e}nd{tabular} \\ + & \ideal{r}ac{b}{2(s+1)^2-2} \cdot \begin{tabular}{L|L|L|L|L} & 0 & 1 & 2 & 3 \\ \hline 0 & s & 0 & 0 & 0 \\ \hline s-1 & 0 &0 & 0 & 0 \\ \hline s & 0 & 2(s+1)^2-2 & 2(s+1)^2 & 0 \\ \hline 2s-1 & 0 & 0 & 0 &s+2 \\ \mathbf{e}nd{tabular} \\ \mathbf{e}nd{split} \mathbf{e}nd{equation*} If $b \geqslant s+1$, then the middle coefficient of the above decomposition becomes negative, which is a contradiction to results of Boij-S\"oderberg theory (see, for instance, \cite[Theorem 2]{boijsod}). \mathbf{e}nd{proof} In the following, recall that the notation $[\oplushi] \in \otimeshetaxtrm{Gr} (1 , D_c (V^*))$ means the class of the subspace spanned by the element $\oplushi \in D_c (V^*)$. \begin{prop}\label{btab3} Let $R = S(V)$ be standard graded, where $V$ is a $3$-dimensional vector space over a field $k$. If $s$ is even, then there is a nonempty open set $U$ in the Grassmannian parametrizing all $1$-dimensional subspaces of $D_{2s-1} (V^*)$ such that for all $[\oplushi] \in U$, $I := (0:_{S(V)} \oplushi)$ has Betti table \begin{equation*}\begin{tabular}{L|L|L|L|L} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ \hline s-1 & 0 &s+1 & 0 & 0 \\ \hline s & 0 & 0& s+1 & 0 \\ \hline 2s-1 & 0 & 0 & 0 &1 \\ \mathbf{e}nd{tabular} \mathbf{e}nd{equation*} If $s$ is odd, then there is a nonempty open set $U$ in the Grassmannian parametrizing all $1$-dimensional subspaces of $D_{2s-1} (V^*)$ such that for all $[\oplushi] \in U$, $I := (0:_{S(V)} \oplushi)$ has Betti table \begin{equation*}\begin{tabular}{L|L|L|L|L} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ \hline s-1 & 0 &s+1 & 1 & 0 \\ \hline s & 0 & 1& s+1 & 0 \\ \hline 2s-1 & 0 & 0 & 0 &1 \\ \mathbf{e}nd{tabular} \mathbf{e}nd{equation*} \mathbf{e}nd{prop} \begin{proof} The goal is to find minimal values for $b$, where $b$ is as in Proposition \ref{btab1}, since $b$ is minimized precisely when the rank of $\Theta_i (\oplushi)$ is maximized by Proposition \ref{rank}. To this end, we exhibit an explicit $I$ for each $s$ attaining the Betti table as in the statement and argue that no smaller values of $b$ can be obtained. The matrices used below are those from Proposition $6.2$ of \cite{BE} with minor alterations; in our case, some of the entries are squared. Choosing a basis for $V$, we may view $S(V)$ as the standard graded polynomial ring $k[x,y,z]$. Assume first that $s$ is even. 
Consider the $(s+1)\times (s+1)$ alternating matrix
$$\begin{pmatrix} 0 & x^2 & 0 & \cdots & 0& z^2 \\ -x^2 & 0 & y^2 & \cdots & z^2 & 0 \\ 0 & -y^2 & 0 & \cdots & & 0 \\ \vdots & & & & & \vdots \\ -z^2 & 0&\cdots & & & 0 \\ \end{pmatrix}$$
To see the pattern more clearly, the first two matrices are
$$H^{ev}_1 = \begin{pmatrix} 0 & x^2 & z^2 \\ -x^2 & 0 & y^2 \\ -z^2 & -y^2 & 0 \\ \end{pmatrix}, \quad H^{ev}_2 = \begin{pmatrix} 0 & x^2 & 0 & 0 & z^2 \\ -x^2 & 0 & y^2 & z^2 & 0 \\ 0 & -y^2 & 0 & x^2 & 0 \\ 0 & -z^2 & -x^2 & 0 & y^2 \\ -z^2 & 0 & 0 & -y^2 & 0 \\ \end{pmatrix}$$
The ideal generated by the $s \times s$ Pfaffians has grade $3$ according to Section $6$ of \cite{BE} (a much more explicit generating set is exhibited in Proposition $7.6$ of \cite{elkustin}), and hence has minimal free resolution
$$0 \to R(-2s-2) \to R(-s-2)^{s+1} \to R(-s)^{s+1} \to R.$$
The above gives an ideal for which $b = 0$, and this is clearly the smallest possible. Similarly, if $s$ is odd, consider the following $(s+2) \times (s+2)$ matrix:
$$\begin{pmatrix} 0 & x^2 & 0 & \cdots & 0& z \\ -x^2 & 0 & y^2 & \cdots & z^2 & 0 \\ 0 & -y^2 & 0 & \cdots & & 0 \\ \vdots & & & & & \vdots \\ -z & 0&\cdots & & & 0 \\ \end{pmatrix}$$
The first two matrices in this case are
$$H^{odd}_1 = \begin{pmatrix} 0 & x^2 & z \\ -x^2 & 0 & y \\ -z & -y & 0 \\ \end{pmatrix}, \quad H^{odd}_2 = \begin{pmatrix} 0 & x^2 & 0 & 0 & z \\ -x^2 & 0 & y^2 & z^2 & 0 \\ 0 & -y^2 & 0 & x^2 & 0 \\ 0 & -z^2 & -x^2 & 0 & y \\ -z & 0 & 0 & -y & 0 \\ \end{pmatrix}$$
Again, the ideal generated by the submaximal Pfaffians is grade $3$ Gorenstein with $b= 1$. Moreover, no smaller value of $b$ can be achieved since otherwise the ideal would have an even number of minimal generators, which is impossible by work of Watanabe in \cite{watanabe} or Corollary $2.2$ of \cite{BE}. \end{proof}
\section{Some Structure in the Grade 3 Setting}\label{gr3}
\begin{setup}\label{setup2} Let $k$ be a field and let $R = k[x,y,z]$ be a standard graded polynomial ring over $k$. Let $I \subset R$ be a grade $3$ homogeneous ideal defining a compressed ring with $\operatorname{Soc} (R/I) = k(-s) \oplus k(-2s+1)$, where $s \geqslant 3$. Write $I = I_1 \cap I_2$ for $I_1$, $I_2$ homogeneous grade $3$ Gorenstein ideals defining rings with socle degrees $s$ and $2s-1$, respectively. The notation $R_+$ will denote the irrelevant ideal ($R_{>0}$). \end{setup}
\begin{prop}\label{alscomp} Adopt Setup \ref{setup2}. Then the ideal $I_2$ defines a compressed ring. \end{prop}
\begin{proof} In view of Observation \ref{obs2} and Proposition \ref{proplol}, it suffices to show that the map $\Phi_i$ associated to $I_2$ is surjective for all $i \geqslant \lceil s-1/2 \rceil =s$. Assume that the inverse system associated to $I$ is minimally generated by $\phi_1$ and $\phi_2$ of degree $s$ and $2s-1$, respectively. Then $I_2 = 0:_{S(V)} \phi_2$. By hypothesis, the map $f \mapsto (f \phi_1 , f \phi_2)$ is surjective in every degree $i \geqslant s$. Composing with the projection onto the second factor, $f \mapsto f \phi_2$ is also surjective in every degree $i \geqslant s$. \end{proof}
\begin{prop}\label{trimmed} Adopt Setup \ref{setup2}.
There exists a minimal generating set $$(\phi_1 , \dots , \phi_{s+1} , \psi_1 , \dots , \psi_b)$$ for $I_2$ such that $$I = (\phi_1 , \dots , \phi_s , \psi_1 , \dots , \psi_b) + R_+ \phi_{s+1},$$ where $\deg (\phi_i) = s$, $\deg (\psi_j) = s+1$, and the integer $b$ is obtained as in Proposition \ref{btab1}. \end{prop}
\begin{proof} By the definition of a compressed ring,
\begingroup\allowdisplaybreaks \begin{align*} \dim I_s &=\dim R_s - \min \{ \dim_k R_s , \dim_k R_0 + \dim_k R_{s-1} \} \\ &= (s+2)(s+1)/2 - 1 - s(s+1)/2 \\ &= s. \\ \end{align*} \endgroup
Choose a basis $\{ \phi_1 , \dots , \phi_s \}$ for $(I_1 \cap I_2)_s$. By Proposition \ref{alscomp}, $I_2$ defines a compressed ring, whence $\dim_k (I_2)_s = s+1$. Extend $\{\phi_1 , \dots , \phi_s \}$ to a basis $\{ \phi_1 , \dots , \phi_{s+1} \}$ of $(I_2)_s$. Observe that $(I_1 \cap I_2)_{s+1} = (I_2)_{s+1}$ by a dimension count. By Proposition \ref{btab1}, there exist $b$ linearly independent $(s+1)$-forms $\psi_1 , \dots , \psi_b$ such that $$(I_2)_{s+1} = (R_+ I_2)_{s+1} + \operatorname{Span}_k \{ \psi_1 , \dots , \psi_b \}.$$ Since $I$ defines a compressed ring, the degrees of its minimal generators are concentrated in $2$ consecutive degrees. This means that $$I = (\phi_1 , \dots , \phi_s , \psi_1 , \dots , \psi_b) + R_+ \phi_{s+1}.$$ \end{proof}
\begin{cor}\label{evens} Adopt Setup \ref{setup2}. Assume furthermore that the Betti table of $I_2$ is given by Proposition \ref{btab3}. If $s$ is even, then there exists a minimal generating set $\phi_1 , \dots , \phi_{s+1}$ for $I_2$ such that $$I = (\phi_1 , \dots , \phi_s) + R_+ \phi_{s+1}$$ and this is a minimal generating set for $I$. $\qquad \square$ \end{cor}
\section{A Resolution for Certain Types of Ideals}\label{theres}
In view of Proposition \ref{trimmed}, our goal is to produce a resolution of ideals of the form $(\phi_1 , \dots , \phi_s , \psi_1 , \dots , \psi_b) + R_+ \phi_{s+1}$. We build the resolution in a more general setting, then specialize to the relevant case.
\begin{setup}\label{SU12}Let $R$ be a ring, $n$ be a positive integer, and $U$ and $V$ be free $R$-modules of rank $3$ and $2n+1$, respectively. Suppose that $v_0\in V$ generates a free summand of rank 1. Let $w_0$ be an element of $V^*$ with $w_0(v_0)=1$ and let $V'$ be the kernel of $w_0$. Observe that \begin{equation}\label{12decomp}V=Rv_0\oplus V'.\end{equation} Let \begin{equation}\label{phi}\phi=\phi'+(v_0\wedge v_0')\end{equation} be an element of $\bigwedge^2 V$, with $\phi'\in \bigwedge^2V'$ and $v_0'\in V'$, and let ${x:U\to R}$ be an $R$-module homomorphism. Assume
\begin{enumerate}[\rm (a)] \item\label{SU12.a} the ideal $\operatorname{im} (x:U\to R)$ has grade three, \item\label{SU12.b} the ideal $\operatorname{im} \big(\phi^{(n)}:\bigwedge ^{2n}V^* \to R\big)$ is a grade three Gorenstein ideal, \item\label{SU12.c} the ideal $\operatorname{im}(v_0':V^*\to R)$ is contained in the ideal $\operatorname{im} (x:U\to R)$, and \item\label{SU12.d} the element $v_0\wedge \phi^{(n)}$ of $\bigwedge^{2n+1}V$ is regular.
\end{enumerate}
Let $\mathcal{I}$ be the ideal
$$\operatorname{im} \Big((V'\wedge \phi^{(n)}):\bigwedge ^{2n+1}V^* \to R\Big) +\operatorname{im}(x:U\to R)\cdot \operatorname{im} \Big(v_0\wedge \phi^{(n)}: \bigwedge ^{2n+1}V^* \to R\Big)$$
of $R$. \end{setup}
Let $K$ represent the ideal $\operatorname{im} \big(\phi^{(n)}:\bigwedge ^{2n}V^* \to R\big)$ of Setup \ref{SU12}.\ref{SU12.b}. This ideal has $2n+1$ generators: one for each element in a basis for $\bigwedge^{2n}V^*$. We have partitioned this generating set into two subsets and we consider the corresponding two subideals $K_0$ and $K'$ of $K$. The ideal
$$K_0=\operatorname{im} \Big(v_0\wedge \phi^{(n)}: \bigwedge ^{2n+1}V^* \to R\Big)$$
is principal and its generator is the Pfaffian (of the alternating matrix which corresponds to $\phi$) that involves all of the basis vectors of $V'$. The ideal
$$K'=\operatorname{im}\Big((V'\wedge \phi^{(n)}):\bigwedge ^{2n+1}V^* \to R\Big)$$
has $2n$ generators and each of these generators is a Pfaffian (of the alternating matrix which corresponds to $\phi$) that involves $v_0$ together with $2n-1$ basis vectors from $V'$. Of course, $\mathcal{I}=K'+(\operatorname{im} x)\cdot K_0$.
\begin{chunk}\label{12.4} Define $\operatorname{proj}: V\to V'$ to be the projection induced by the decomposition (\ref{12decomp}); in other words, $\operatorname{proj}(v)=v-w_0(v)\cdot v_0$, for $v\in V$. \end{chunk}
\begin{observation}\label{obs12.1} Adopt the terminology and hypotheses of {\rm\ref{SU12}}. The following statements hold:
\begin{enumerate} \item\label{obs12.1.y}$w_0(\phi)=v_0'\in V'$, \item\label{obs12.1.z} $\phi^{(n)}={\phi'}^{(n)}+{\phi'}^{(n-1)}\wedge v_0\wedge v_0'$, \item \label{obs12.1.b}there exists an $R$-module homomorphism $q:V^*\to U$ for which the diagram $$\xymatrix{&V^*\ar@{-->}[ld]_{\exists q}\ar[d]^{v_0'}\\U\ar[r]^x&\operatorname{im} x\ar[r]&0}$$ commutes, \item \label{obs12.1.c}there exists an $R$-module homomorphism $B: \bigwedge^{2n+1}V^*\to \bigwedge^2 U$ for which the diagram $$\xymatrix{ \bigwedge^{2n+1}V^*\ar[rr] ^{\phi^{(n)} (\underline{\phantom{X}})}\ar[d]^{B} &&V^*\ar[d]^{q}\\ \bigwedge^2U\ar[rr]^{x}&& U }$$commutes, and \item\label{obs12.1.a} $ K':_RK_0\subseteq \operatorname{im}(x: U\to R)$. \end{enumerate} \end{observation}
\begin{proof} Assertions (\ref{obs12.1.y}), (\ref{obs12.1.z}), and (\ref{obs12.1.b}) are evident. To prove (\ref{obs12.1.c}), observe that hypothesis \ref{SU12}.\ref{SU12.a} ensures that \begin{equation}\label{12assum-May6} 0\to \bigwedge^3 U\xrightarrow{x} \bigwedge^2 U\xrightarrow{x} U\xrightarrow{x} R\end{equation} is an acyclic complex of free $R$-modules. One computes:
\begin{align*}&(x\circ q)(\phi^{(n)}(w_{2n+1}))=v_0'(\phi^{(n)}(w_{2n+1})) =(v_0'\wedge \phi^{(n)})(w_{2n+1})\\ =& (v_0'\wedge [{\phi'}^{(n)}+{\phi'}^{(n-1)}\wedge v_0\wedge v_0'])(w_{2n+1}) =(v_0'\wedge{\phi'}^{(n)})(w_{2n+1})=0,\end{align*}
for each $w_{2n+1}\in \bigwedge^{2n+1}V^*$. The first equality is (\ref{obs12.1.b}), the second equality is the fact that $\bigwedge^\bullet V^*$ is a $\bigwedge^\bullet V$-module, the third equality is (\ref{obs12.1.z}), and the last equality holds because $v_0'\wedge {\phi'}^{(n)}$ is in $\bigwedge^{2n+1}{V'}=0$.
The assertion now follows from the acyclicity of (\ref{12assum-May6}).
\medskip\noindent (\ref{obs12.1.a}) The Buchsbaum--Eisenbud theorem \cite[Cor.~2.6, Thm.~3.1]{BE} guarantees that
{\begin{equation}\xymatrix{ &0\ar[r]&\bigwedge^{2n+1}V^*\ar[rr] ^{\phi^{(n) } (\underline{\phantom{X}})} &&V^*\ar[rr]^{ (\underline{\phantom{X}}) (\phi) }&&V\ar[rr]^{(\underline{\phantom{X}})\wedge \phi^{(n)}}&&\bigwedge^{2n+1}V}\label{BE}\end{equation}}
is a resolution of $R/K$. If $r\in R$ and $rK_0\subseteq K'$, then there exists $v'\in V'$ with $$v'\wedge \phi^{(n)} +rv_0 \wedge \phi^{(n)}=0.$$ The exactness of (\ref{BE}) guarantees that there exists an element $w\in V^*$ with $$w(\phi)=v'+rv_0.$$ Apply (\ref{phi}) to see that $$w(\phi') +w(v_0)\cdot v_0'-w(v_0')\cdot v_0=v'+rv_0.$$ It follows that $[r+w(v_0')]\cdot v_0\in V'$; hence $r=v_0'(-w)\in \operatorname{im} v_0'$. The hypothesis \ref{SU12}.\ref{SU12.c} ensures that $\operatorname{im} v_0'\subseteq \operatorname{im}(x)$. \end{proof}
\begin{theorem}\label{12.6} Adopt the terminology and hypotheses of {\rm\ref{SU12}, \ref{12.4},} and {\rm\ref{obs12.1}}. Then the maps and modules
\begin{equation}\label{12.6.1}0\to \begin{matrix} \bigwedge^{2n+1}V^*\\\oplus\\\bigwedge^3 U\end{matrix} \xrightarrow{\ \ d_3\ \ } \begin{matrix} V^*\\\oplus\\ \bigwedge^2 U\end{matrix} \xrightarrow{\ \ d_2\ \ }\begin{matrix}V'\\\oplus\\U \end{matrix} \xrightarrow{\ \ d_1\ \ } {\bigwedge^{2n+1}V}\end{equation}
form a resolution of $R/\mathcal{I}$ by free $R$-modules, where
$$d_3\begin{pmatrix} w_{2n+1}\\u_3\end{pmatrix}= \begin{pmatrix}\phi^{(n)}(w_{2n+1})\\B(w_{2n+1})+x(u_3)\end{pmatrix},$$
$$d_2\begin{pmatrix} w_{1}\\u_2\end{pmatrix}= \begin{pmatrix} \operatorname{proj}(w_1(\phi))\\-q(w_1)+x(u_2)\end{pmatrix},$$
and
$$d_1\begin{pmatrix} v'\\u_1\end{pmatrix} =(v'+x(u_1)\cdot v_0)\wedge \phi^{(n)},$$
for $w_{2n+1}\in \bigwedge^{2n+1}V^*$, $u_3\in \bigwedge^3U$, $w_1\in V^*$, $u_2\in \bigwedge^2U$, $v'\in V'$, and $u_1\in U$. \end{theorem}
\begin{proof}The homomorphisms (\ref{12.6.1}) form the mapping cone of
\begin{equation}\label{12.6.2}\xymatrix{ &0\ar[rr]&&\bigwedge^{2n+1}V^*\ar[rr] ^{\phi^{(n)} (\underline{\phantom{X}})}\ar[d]^{B} &&V^*\ar[rr]^{\operatorname{proj}\Big( (\underline{\phantom{X}}) (\phi)\Big) }\ar[d]^{q}&&V'\ar[d]^{\underline{\phantom{X}} \wedge\phi^{(n)}}\\ 0\ar[r]&\bigwedge^3U\ar[rr]^{x}&& \bigwedge^2U\ar[rr]^{x}&& U\ar[rr]^{x(\underline{\phantom{X}})\cdot v_0\wedge \phi^{(n)}}&&\bigwedge^{2n+1}V. }\end{equation}
The rows are complexes by (\ref{BE}) and (\ref{12assum-May6}). The leftmost square commutes by \ref{obs12.1}.(\ref{obs12.1.c}). To see that the rightmost square commutes, let $w\in V^*$. The clockwise path sends $w$ to
\begin{align*}[w(\phi)-w_0(w(\phi))\cdot v_0]\wedge \phi^{(n)} &=w(\phi)\wedge \phi^{(n)}+[w_0(\phi)](w)\cdot v_0\wedge \phi^{(n)}&&\text{by \ref{12.4}}\\ &=w(\phi^{(n+1)})+ v_0'(w)\cdot v_0\wedge \phi^{(n)}&&\text{by \ref{obs12.1}.(\ref{obs12.1.y})}\\ &=v_0'(w)\cdot v_0\wedge \phi^{(n)}&&\text{since $\phi^{(n+1)}\in \bigwedge^{2n+2}V=0$.}\end{align*}
The counterclockwise path sends $w$ to $$(x\circ q)(w)\cdot v_0\wedge \phi^{(n)}=v_0'(w)\cdot v_0\wedge \phi^{(n)}$$ by \ref{obs12.1}.(\ref{obs12.1.b}).
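In particular, both squares of (\ref{12.6.2}) commute, so (\ref{12.6.2}) is a morphism of complexes and its mapping cone (\ref{12.6.1}) is indeed a complex.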
Apply the long exact sequence of homology associated to a mapping cone to see that the complex (\ref{12.6.1}) is acyclic. It suffices to show that
\begin{enumerate} \item\label{12.6.a} the top row of (\ref{12.6.2}) is a resolution of $K'/(K'\cap K_0)$, \item\label{12.6.b} the bottom row of (\ref{12.6.2}) is a resolution of $R/(\operatorname{im} x)\cdot K_0$, and \item\label{12.6.c} the induced map on zero-th homology $$\frac{K'}{K'\cap K_0} \to \frac {R}{(\operatorname{im} x) \cdot K_0}$$ is an injection. This induced map is the following composition of natural maps $$\xymatrix{ \frac{K'}{K'\cap K_0}\ar@{^(->}[r]&\frac{R}{K'\cap K_0}\ar@{->>}[r]&\frac{R}{(\operatorname{im} x)\cdot K_0}}.$$ (Recall from \ref{obs12.1}.(\ref{obs12.1.a}) that $(K'\cap K_0)\subseteq (\operatorname{im} x)\cdot K_0$.) \end{enumerate}
\medskip\noindent {\it Proof of {\rm(\ref{12.6.a})}.} The augmented top row of (\ref{12.6.2}) is the bottom row of the following short exact sequence of complexes:
$$ \xymatrix{ &&0\ar[r]&Rv_0\ar[r]^{(\underline{\phantom{X}})\wedge \phi^{(n)}}\ar@{^(->}[d]&K_0\ar@{^(->}[d]\ar[r]&0\\ 0\ar[r]&\bigwedge^{2n+1}V^*\ar[r]\ar[d]^{=}&V^*\ar[r]\ar[d]^{=}&V\ar[r]^{(\underline{\phantom{X}})\wedge \phi^{(n)}}\ar@{->>}[d]&K\ar[r]\ar@{->>}[d]&0\\ 0\ar[r]&\bigwedge^{2n+1}V^*\ar[r]&V^*\ar[r]&V'\ar[r]^{(\underline{\phantom{X}})\wedge \phi^{(n)}}&\frac{K'}{K'\cap K_0}\ar[r]&0 .}$$
The middle complex is exact because of (\ref{BE}); the top complex is exact because of \ref{SU12}.\ref{SU12.d}. Assertion (\ref{12.6.b}) is a consequence of (\ref{12assum-May6}) and \ref{SU12}.(\ref{SU12.d}). Assertion (\ref{12.6.c}) is immediate because $K'\cap (\operatorname{im} x)\cdot K_0\subseteq K'\cap K_0$. \end{proof}
\begin{remark}\label{translate} Adopt the notation and hypotheses of Setup \ref{setup2}. Observe that in Setup \ref{SU12}, the map $x: U \to R$ is taken to be the first Koszul differential for the ideal $R_+$, so that $\operatorname{im} (x : U \to R) = R_+$. Similarly, choose $\phi$ such that $$\operatorname{im} ( \phi^{(n)} : \bigwedge^{2n} V^* \to R ) = I_2.$$ The hypothesis \ref{SU12}.\ref{SU12.c} is simply the statement that $I_2$ is homogeneous and generated in positive degree. Similarly, since we are working over a polynomial ring (which is a domain), hypothesis \ref{SU12}.\ref{SU12.d} is trivially satisfied. By Proposition \ref{trimmed}, there exists a minimal generating set $$(\phi_1 , \dots , \phi_{s+1} , \psi_1 , \dots , \psi_b)$$ for $I_2$ such that $$I = (\phi_1 , \dots , \phi_s , \psi_1 , \dots , \psi_b) + R_+ \phi_{s+1}.$$ Choose $v_0$ to be the generator of the free direct summand corresponding to the minimal generator $\phi_{s+1}$ of $I_2$; then, in the notation of Setup \ref{SU12}, $$\mathcal{I} = (\phi_1 , \dots , \phi_s , \psi_1 , \dots , \psi_b) + R_+ \phi_{s+1},$$ whence Theorem \ref{12.6} provides a resolution of the ideal $I$ as in Setup \ref{setup2}. \end{remark}
\begin{cor}\label{numgenss} In the notation and hypotheses of Theorem \ref{12.6}, assume that $(R,\mathfrak{m} , k)$ is a local ring or $R$ is a standard graded polynomial ring over a field $k$. Then $$\mu ( \mathcal{I} ) = \mu (K) + 2 - \operatorname{rank}_k (q \otimes k).$$ \end{cor}
\begin{proof} Observe that $d_1 \otimes k = 0$ by construction. Let $\mathcal{F}$ denote the complex of Theorem \ref{12.6}.
Then,
$$ H_1 (\mathcal{F} \otimes k ) = \frac{(V' \oplus U) \otimes k}{\operatorname{im} (d_2 \otimes k)}.$$
The only nonzero entries of $d_2 \otimes k$ come from $q \otimes k$, since the other differentials in the mapping cone are built from minimal resolutions. Thus $$\operatorname{rank}_k (d_2 \otimes k) = \operatorname{rank}_k ( q \otimes k)$$ and
\begin{align*} \dim_k H_1 ( \mathcal{F} \otimes k) &= \dim_k \big( (V' \oplus U)\otimes k \big) - \operatorname{rank}_k (q \otimes k) \\ &= \mu(K) + 2 - \operatorname{rank}_k (q \otimes k). \\ \end{align*}
To conclude, recall that $\dim_k H_1 ( \mathcal{F} \otimes k) = \dim_k \operatorname{Tor}_1^R (R/\mathcal{I} , k) = \mu(\mathcal{I})$. \end{proof}
\begin{cor}\label{evens2} Adopt Setup \ref{setup2}, where $s$ is even. Assume $I_2$ has Betti table given by Proposition \ref{btab3}. Under the identifications of Remark \ref{translate}, $I$ has homogeneous minimal free resolution of the form
$$0 \to \begin{matrix} R(-2s-2) \\ \oplus \\ R(-s-3) \end{matrix} \to R(-s-2)^{s+4} \to \begin{matrix} R(-s)^s \\ \oplus \\ R(-s-1)^3 \\ \end{matrix} \to R.$$
In particular, the Betti table for $R/I$ as an $R$-module is:
$$\begin{tabular}{c|c|c|c|c} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ \hline s-1 & 0 & s & 0 & 0 \\ \hline s & 0 & 3 & s+4 & 1 \\ \hline 2s-1 & 0 & 0 & 0 &1 \\ \end{tabular}$$
\end{cor}
\begin{proof} This is an immediate consequence of Corollary \ref{evens} combined with the resolution of Theorem \ref{12.6}. \end{proof}
\section{Applications and Examples}\label{appex}
In this section we examine classes of examples arising from variants of matrices defined in Section $3$ of \cite{christ}.
\begin{definition}\label{Umat} Let $U_m^{ev}$ denote the $m\times m$ matrix with entries from the polynomial ring $R=k[x,y,z]$ defined by:
$$U^{ev}_{i,m-i} = x^2, \quad U^{ev}_{i,m-i+1} = z^2, \quad U^{ev}_{i,m-i+2} = y^2.$$
Similarly, define $U_m^{odd}$ via:
$$U^{odd}_{i,m-i} = x^2, \quad U^{odd}_{i,m-i+1} = z^2, \quad U^{odd}_{i,m-i+2} = y^2, \ \textrm{for } i<m$$
and $U^{odd}_{m,1} = z$, $U^{odd}_{m,2} = y$. All other entries are defined to be $0$. Define $d_m^{ev} := \det ( U_m^{ev})$ and $d_m^{odd} := \det ( U_{m}^{odd})$. \end{definition}
\begin{observation} For all $i = 1 , \dots , m$,
$$U_m^{ev} = \begin{pmatrix} O_{x^2} & U^{ev}_i \\ U^{ev}_{m-i+1} & ^{y^2}O \\ \end{pmatrix}$$
$$U_m^{odd} = \begin{pmatrix} O_{x^2} & U^{ev}_i \\ U^{odd}_{m-i+1} & ^{y^2}O \\ \end{pmatrix}$$
where $O_{x^2}$ denotes the appropriately sized matrix with $x^2$ as the bottom rightmost corner entry and zeroes elsewhere. Similarly, $^{y^2}O$ denotes the appropriately sized matrix with $y^2$ in the top leftmost corner and zeroes elsewhere.
\end{observation}
\begin{definition} Define $V_m^{ev}$ to be the $(2m+1)\times (2m+1)$ skew symmetric matrix
$$V^{ev}_m := \begin{pmatrix} O & O_{x^2} & U_m^{ev} \\ -(O_{x^2})^T & 0 & ^{y^2}O \\ -U_m^{ev} & -(^{y^2}O)^T & O \\ \end{pmatrix}$$
and $V_m^{odd}$ to be the $(2m+1) \times (2m+1)$ skew symmetric matrix
$$V^{odd}_m := \begin{pmatrix} O & O_{x^2} & (U_m^{odd})^T \\ -(O_{x^2})^T & 0 & ^{y^2}O \\ -U_m^{odd} & -(^{y^2}O)^T & O \\ \end{pmatrix}$$
\end{definition}
To see the pattern a little more clearly, the first couple of matrices are:
$$V_1^{ev} = \begin{pmatrix} 0&x^{2}&z^{2}\\ {-x^{2}}&0&y^{2}\\ {-z^{2}}&{-y^{2}}&0\end{pmatrix}, \quad V_2^{ev} = \begin{pmatrix} 0&0&0&x^{2}&z^{2}\\ 0&0&x^{2}&z^{2}&y^{2}\\ 0&{-x^{2}}&0&y^{2}&0\\ {-x^{2}}&{-z^{2}}&{-y^{2}}&0&0\\ {-z^{2}}&{-y^{2}}&0&0&0\end{pmatrix}$$
$$V_1^{odd} = \begin{pmatrix} 0&x^{2}&z\\ {-x^{2}}&0&y\\ {-z}&{-y}&0\end{pmatrix}, \quad V_2^{odd} = \begin{pmatrix} 0&0&0&x^{2}&z\\ 0&0&x^{2}&z^{2}&y\\ 0&{-x^{2}}&0&y^{2}&0\\ {-x^{2}}&{-z^{2}}&{-y^{2}}&0&0\\ {-z}&{-y}&0&0&0\end{pmatrix}$$
\begin{prop} The ideal of submaximal Pfaffians $\textrm{Pf} (V_m^{ev})$ is minimally generated by the elements
$$x^{2m-2i}d_i^{ev}, \quad y^{2m-2i}d_i^{ev} \ \textrm{for} \ 0 \leqslant i \leqslant m-1, \ \textrm{and} \ d_m^{ev}.$$
Similarly, the ideal of submaximal Pfaffians $\textrm{Pf} (V_m^{odd})$ is minimally generated by the elements
$$x^{2m-2i} d_i^{odd} \ \textrm{for} \ 0 \leqslant i \leqslant m-1, \quad y^{2m-1}, \quad y^{2m-2i}d_i^{odd} \ \textrm{for} \ 0 \leqslant i \leqslant m-2,$$
$$\textrm{and} \ d_m^{odd}.$$
\end{prop}
\begin{proof} The proof is essentially identical to that of Proposition $3.3$ in \cite{christ}. \end{proof}
\begin{example} Consider the ideal $\textrm{Pf} (V_m^{odd})$ and trim the generator $x^{2m-2}d_1^{odd} = x^{2m-2}z$. Considering $x^{2m-2}z$ as the $2m$th generator, decompose $V = \bigoplus_{i=1}^{2m+1} Re_i$ as $V' \oplus Re_{2m}$, where the basis element $e_i$ corresponds to the $i$th signed submaximal Pfaffian of $V_m^{odd}$. Let $\phi \in \bigwedge^2 V$ denote the element corresponding to the matrix $V_m^{odd}$; notice that
$$\phi = \phi' -e_{2m} \wedge (x^2e_1 + z^2 e_2 + y^2 e_3)$$
for some $\phi' \in \bigwedge^2 V'$. Take $v_0' := -x^2 e_1 - z^2 e_2 - y^2 e_3$ and let $U = Re_x \oplus Re_y \oplus Re_z$ with map $X : e_x \mapsto x$, $e_y \mapsto y$, and $e_z \mapsto z$.
If $q : V^* \to U$ is the map sending $e_1^* \mapsto -xe_x$, $e_2^* \mapsto -z e_z$, $e_3^* \mapsto -y e_y$, and all other basis elements to $0$, then the following diagram commutes:
$$\xymatrix{ & V^* \ar[dl]_-{q} \ar[d]^-{v_0'} \\ U \ar[r]^{X} & \operatorname{im} X \\}$$
Similarly, if $B : \bigwedge^{2m+1} V^* \to \bigwedge^2 U$ is defined by sending $\omega \mapsto x y^{2m-2} e_x \wedge e_y + y^{2m-3} z^2 e_y \wedge e_z$ (where $\omega$ is a generator for $\bigwedge^{2m+1} V^*$), then the following diagram commutes:
$$\xymatrix{\bigwedge^{2m+1} V^* \ar[r]^-{\phi^{(n)} (-)} \ar[d]^-{B} & V^* \ar[d]^-{q} \\ \bigwedge^2 U \ar[r]^-{X} & U \\}$$
Employing the construction of Theorem \ref{12.6}, we deduce that the mapping cone is acyclic; moreover, all of the maps involved have entries in $R_+$, so this is a minimal free resolution of the ideal
$$J = (\textrm{Pf} ( V_m^{odd}) \backslash \textrm{Pf}_{2m} (V_m^{odd}) ) + R_+\textrm{Pf}_{2m} (V_m^{odd}).$$
This implies that $J$ is minimally generated by $2m+3$ elements (and by Lemma \ref{tormins} defines a ring of Tor algebra class $G(2m)$). \end{example}
\begin{prop}\label{isminl} Let $R=k[x,y,z]$ with the standard grading, where $k$ is any field. Let $I \subset R_+^2$ be a homogeneous $R_+$-primary grade $3$ Gorenstein ideal with generators obtained from the ideal of submaximal Pfaffians $\textrm{Pf} (M)$ for some skew symmetric matrix $M$. If the $i$th row of $M$ has entries in $R_{>1}$, then
$$(\textrm{Pf}_j (M) \mid j \neq i ) + R_+ \textrm{Pf}_i (M)$$
is minimally generated by $\mu(I)+2$ elements and defines a ring of type $2$. \end{prop}
\begin{proof} Let $M$ be obtained from the element $\phi \in \bigwedge^2 V$, where $V$ is a free module of rank $\mu (I)$. Let $e_i$ be the basis element of $V$ corresponding to $\textrm{Pf}_i (M)$; write $V = V' \oplus Re_i$. It must be verified that $q \otimes k=0$ and $B \otimes k =0$. The statement that the $i$th row of $M$ has entries in $R_{>1}$ means that if $$\phi = \phi' + e_i \wedge v_0',$$ then $0 \neq v_0' \in R_{>1} V$. Counting degrees on the induced map $q : V^* \to U$ of Observation \ref{obs12.1}.(\ref{obs12.1.b}), we deduce $q (V^*) \subset R_+ U$, so that $q \otimes k = 0$. In particular, by Corollary \ref{numgenss},
$$(\textrm{Pf}_j (M) \mid j \neq i ) + R_+ \textrm{Pf}_i (M)$$
is minimally generated by $\mu (I) - 1 + 3 = \mu(I)+2$ elements. By the assumption that $I \subseteq R_+^2$, a degree count shows that $B \otimes k = 0$. More precisely, if $t$ is the initial degree of $I$, then the matrix representation of $B$ with respect to the standard bases has entries in $R_+^{t-1}$. We conclude that the resolution provided by Theorem \ref{12.6} is minimal. \end{proof}
\section{A Class of Ideals with Extremal Graded Betti Numbers}\label{extbetti}
Adopt Setup \ref{setup2}. The ideal $I_2$ has Betti table arising from Proposition \ref{btab1} for some integer $b < s+1$ by Proposition \ref{alscomp}. We may fit $I$ into the short exact sequence
$$0 \to I_2 / I \to R/I \to R/I_2 \to 0,$$
whence upon counting ranks on the graded strands of the long exact sequence of $\operatorname{Tor}$, we deduce that $\dim_k \operatorname{Tor}_1^R (R/I,k)_{s+1} \leqslant b+3$.
Since $b \leqslant s$, $\dim_k \operatorname{Tor}_1^R (R/I,k)_{s+1} \leqslant s+3$. Furthermore:
\begin{prop}\label{torbound} Let $I$ be as in Setup \ref{setup2}. Then $\dim_k \operatorname{Tor}_1^R (R/I,k)_{s+1} \leqslant s+2$. \end{prop}
\begin{proof} By counting ranks on the long exact sequence of $\operatorname{Tor}$ induced by the short exact sequence $$0 \to I_2 / I \to R/I \to R/I_2 \to 0,$$ we must have $\dim_k \operatorname{Tor}_1^R (R/I_2,k)_{s+1} = s$, which is the maximum possible. By Proposition \ref{trimmed}, $I$ may be written $$I = (\phi_1 , \dots , \phi_s , \psi_1, \dots , \psi_s) + R_+ \phi_{s+1},$$ where $\phi_1 , \dots , \phi_{s+1} , \psi_1, \dots , \psi_s$ is a minimal generating set for $I_2$. Assume for sake of contradiction that $\dim_k \operatorname{Tor}_1^R (R/I,k)_{s+1} = s+3$; this means that the above generating set for $I$ is minimal. Thus the resolution of Theorem \ref{12.6} is a minimal free resolution for $R/I$. Let us examine the map $q : V^* \to U$. By counting degrees, one finds that $q(V^*) \not\subset R_+ U$ or that $q$ is identically the $0$ map. Either case is a contradiction, so that $\operatorname{rank}_k (q \otimes k) \geqslant 1$. \end{proof}
We now exhibit a class of ideals defining compressed rings with top socle degree $2s-1$ and $\dim_k \operatorname{Tor}_1^R (R/I,k)_{s+1} = s+2$, showing that the inequality of Proposition \ref{torbound} is sharp. To do this, we first need some notation.
\begin{definition} Let $U_m^{j}$ (for $j \leqslant m$) denote the $m\times m$ matrix with entries from the polynomial ring $R=k[x,y,z]$ defined by:
$$(U^{j}_m)_{i,m-i} = x^2, \quad (U^{j}_m)_{i,m-i+1} = z^2, \quad (U^{j}_m)_{i,m-i+2} = y^2 \ \textrm{for} \ i \leqslant m-j,$$
$$(U^{j}_m)_{i,m-i} = x, \quad (U^{j}_m)_{i,m-i+1} = z, \quad (U^{j}_m)_{i,m-i+2} = y \ \textrm{for} \ i >m-j,$$
and all other entries are defined to be $0$. Define $d_m^j := \det ( U_m^j)$.
\end{definition}
To see the pattern, we have:
$$U_2^1 = \begin{pmatrix} x^{2}&z^{2}\\ z&y\end{pmatrix}, \ U_3^1 = \begin{pmatrix} 0&x^{2}&z^{2}\\ x^{2}&z^{2}&y^{2}\\ z&y&0\end{pmatrix}, \ U_3^2 = \begin{pmatrix} 0&x^{2}&z^{2}\\ x&z&y\\ z&y&0\end{pmatrix}$$
\begin{definition} Define $V_m^{j}$ (for $j< m$) to be the $(2m+1)\times (2m+1)$ skew symmetric matrix
$$V^{j}_m := \begin{pmatrix} O & O_{x^2} & (U_m^{j})^T \\ -(O_{x^2})^T & 0 & ^{y^2}O \\ -U_m^{j} & -(^{y^2}O)^T & O \\ \end{pmatrix}$$
and if $j=m$, then $V_m^m$ is the skew symmetric matrix
$$V^{m}_m := \begin{pmatrix} O & O_{x^2} & (U_m^{m})^T \\ -(O_{x^2})^T & 0 & ^{y}O \\ -U_m^{m} & -(^{y}O)^T & O \\ \end{pmatrix}$$
\end{definition}
Observe that the ideal of Pfaffians $\textrm{Pf} (V_m^j)$ is a grade $3$ Gorenstein ideal with graded Betti table
$$\begin{tabular}{c|c|c|c|c} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ \hline 2m-j-1 & 0 & 2m+1-j & j & 0 \\ \hline 2m-j & 0 & j & 2m+1-j & 0 \\ \hline 4m-2j-1 & 0 & 0 & 0 &1 \\ \end{tabular}$$
In particular, for any integer $s$, $\textrm{Pf} (V_s^s)$ has Betti table
$$\begin{tabular}{c|c|c|c|c} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ \hline s-1 & 0 & s+1 & s & 0 \\ \hline s & 0 & s & s+1 & 0 \\ \hline 2s-1 & 0 & 0 & 0 &1 \\ \end{tabular}$$
\begin{chunk}\label{pfnote} Given an $n \times n$ alternating matrix $M$, the notation $\textrm{Pf} (M)$ will denote the ideal of submaximal Pfaffians of the matrix $M$. Similarly, $$(\textrm{Pf} (M) \backslash \textrm{Pf}_i (M))$$ is shorthand for the ideal $$(\textrm{Pf}_j (M) \mid 1 \leqslant j \leqslant n, \ j \neq i ),$$ where $\textrm{Pf}_j (M)$ denotes the Pfaffian of the matrix obtained by deleting the $j$th row and column of $M$. \end{chunk}
\begin{prop}\label{maxideal} Let $s \geqslant 1$ be an integer and $R =k[x,y,z]$, where $k$ is any field. The ideal $$I := (\textrm{Pf} (V_s^s) \backslash \textrm{Pf}_{s+1} (V_s^s)) + R_+ \textrm{Pf}_{s+1} (V_s^s)$$ is minimally generated by $2s+2$ elements and defines a compressed ring with $\operatorname{Soc} (R/I ) \cong k(-s) \oplus k(-2s+1)$ and Betti table
$$\begin{tabular}{c|c|c|c|c} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ \hline s-1 & 0 & s & s-1 & 0 \\ \hline s & 0 & s+2 & s+4 & 1 \\ \hline 2s-1 & 0 & 0 & 0 &1 \\ \end{tabular}$$
\end{prop}
\begin{proof} We again use the resolution provided by Theorem \ref{12.6}. Let $V = \bigoplus_{i=1}^{2s+1} Re_i$, where $e_i$ corresponds to the $i$th Pfaffian of $V_s^s$. Write $V = V' \oplus Re_{s+1}$ and let $\phi \in \bigwedge^2 V$ denote the element corresponding to $V^s_s$. By definition of the matrix $V_s^s$, we see that
$$\phi = \phi' + e_{s+1} \wedge (-x^2 e_s + ye_{s+2}).$$
Set $v_0' := -x^2 e_s + ye_{s+2}$ and let $U = Re_x \oplus Re_y \oplus Re_z$ with map $X : e_x \mapsto x$, $e_y \mapsto y$, and $e_z \mapsto z$. Let $q :V^* \to U$ be the map sending $e_{s}^* \mapsto -x e_x$, $e_{s+2}^* \mapsto e_y$, and all other basis vectors to $0$. Then the following diagram commutes:
$$\xymatrix{ & V^* \ar[dl]_-{q} \ar[d]^-{v_0'} \\ U \ar[r]^{X} & \operatorname{im} X \\}$$
In particular, $\operatorname{rank}_k (q \otimes k) = 1$. We do not have to compute the map $B : \bigwedge^{2s+1} V^* \to \bigwedge^2 U$, since a degree count tells us that $B \otimes k = 0$.
Thus the resolution of Theorem \ref{12.6} is not minimal, but we need only take the quotient by a subcomplex of the form $$0 \to R(-s-1) \to R(-s-1) \to 0$$ to obtain a minimal resolution. This immediately yields the Betti table of the statement; the other claims are immediate consequences of the Betti table. \end{proof}
\section{Tor Algebra Structures}\label{toralgstr}
In this section we examine some consequences for the Tor-algebra structures of the ideals resolved by Theorem \ref{12.6}. We start by defining what it means to have Tor algebra class $G(r)$. Although there are other Tor algebra families, we will only concern ourselves with the class $G$; the other families with their definitions may be found in \cite[Theorem 2.1]{torclass}.
\begin{definition}\label{classg} Let $(R,\mathfrak{m},k)$ be a regular local ring with $I\subset \mathfrak{m}^2$ an ideal such that $\operatorname{pd}_R (R/I) = 3$. Let $T_\bullet := \operatorname{Tor}_\bullet^R (R/I , k)$. Then $R/I$ has Tor algebra class $G(r)$ if, for $m = \mu(I)$ and $t = \operatorname{type} (R/I)$, there exist bases for $T_1$, $T_2$, and $T_3$
$$e_1 , \dots , e_m, \quad f_1 , \dots , f_{m+t-1} , \quad g_1 , \dots , g_t,$$
respectively, such that the only nonzero products are given by
$$e_i f_i = g_1 = f_i e_i, \quad 1 \leqslant i \leqslant r.$$
Such a Tor algebra structure has
$$T_1 \cdot T_1 = 0, \quad \operatorname{rank}_k (T_1 \cdot T_2 ) = 1, \quad \operatorname{rank}_k (T_2 \to \operatorname{Hom}_k (T_1 , T_3) ) = r,$$
where $r \geqslant 2$. \end{definition}
\begin{theorem}[\cite{christ}, Theorem $2.4$, Homogeneous version]\label{toralg} Let $R = k[x,y,z]$ with the standard grading and let $I \subseteq R_+^2$ be an $R_+$-primary homogeneous Gorenstein ideal minimally generated by elements $\phi_1 , \dots , \phi_{2m+1}$. Then the ideal $$J = (\phi_1 , \dots , \phi_{2m} ) + R_+\phi_{2m+1}$$ is a homogeneous $R_+$-primary ideal and defines a ring of type $2$. Moreover,
\begin{enumerate}[(a)] \item If $m=1$, then $\mu (J) = 5$ and $R/J$ is class $B$. \item If $m=2$, then one of the following holds:
\begin{equation*} \begin{split} &\bullet \ \mu(J) = 4 \ \textrm{and} \ R/J \ \textrm{is class} \ H(3,2) \\ &\bullet \ \mu(J) = 5 \ \textrm{and} \ R/J \ \textrm{is class} \ B \\ &\bullet \ \mu(J) \in \{ 6,7 \} \ \textrm{and} \ R/J \ \textrm{is class} \ G(r) \ \textrm{with} \ \mu(J)-2 \geqslant r \geqslant \mu(J)-3 \\ \end{split} \end{equation*}
\item If $m\geqslant 3$, then $R/J$ is class $G(r)$ with $\mu(J)-2 \geqslant r \geqslant \mu(J) - 3$. \end{enumerate} \end{theorem}
\begin{prop} Adopt Setup \ref{setup2}. Assume furthermore that the Betti table of $I_2$ is given by Proposition \ref{btab3}. If $s$ is even, then $R/I$ defines a ring of Tor algebra class $G(s)$. \end{prop}
\begin{proof} By Theorem \ref{toralg} combined with Corollary \ref{evens}, $I$ is class $G(r)$ for some $r \geqslant s$. It suffices to show that the induced map on the Tor algebra $$\delta : T_2 \to \operatorname{Hom} (T_1 , T_3)$$ has rank $\leqslant s$, where $T_i := \operatorname{Tor}_i^R (R/I , k )$.
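Indeed, by Definition \ref{classg}, an algebra of class $G(r)$ satisfies $\operatorname{rank}_k \delta = r$; thus the bound $\operatorname{rank}_k \delta \leqslant s$, combined with $r \geqslant s$, forces $r = s$.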
Examining the Betti table of Corollary \ref{evens2}, we see that $$(T_1)_{s+1} \cdot (T_2) \subseteq (T_3)_{\geqslant 2s+3} = 0,$$ whence the only nontrivial products can occur between $(T_1)_s$ and $T_2$, implying $\operatorname{rank}_k \delta \leqslant s$. \end{proof}
\begin{prop}\label{propmax} Adopt the notation and hypotheses of Proposition \ref{maxideal}. Then $I$ defines a ring of Tor algebra class $G(2s-1)$. \end{prop}
\begin{proof} If $s=2$, a direct computation in Macaulay2 verifies that $I$ is class $G(3)$, so assume that $s \geqslant 3$. By Theorem \ref{toralg}, $I$ is class $G(r)$ for some $r \geqslant 2s-1$. It remains to show that $$\delta : T_2 \to \operatorname{Hom} (T_1 , T_3)$$ has rank $\leqslant 2s-1$, where $T_i := \operatorname{Tor}_i^R (R/I , k )$. First, observe that $$(T_1)_s \cdot (T_2)_{s+1} \subseteq (T_3)_{2s+1}$$ and $(T_3)_{2s+1} = 0$ since $s \geqslant 3$. Similarly, $$(T_1)_{s+1} \cdot (T_2)_{s+2} \subseteq (T_3)_{2s+3} = 0,$$ whence the only nontrivial products are between $(T_1)_s$, $(T_2)_{s+2}$ and $(T_1)_{s+1}$, $(T_2)_{s+1}$, implying $$\operatorname{rank}_k \delta \leqslant s + (s-1) = 2s-1.$$ \end{proof}
\begin{cor}\label{rbounds} Adopt Setup \ref{setup2} with $s \geqslant 3$. Then $I$ has Tor algebra class $G(r)$ for some $s \leqslant r \leqslant 2s-1$. \end{cor}
\begin{proof} Observe that $\mu(I) \geqslant s+3$. Moreover, $\mu (I) \leqslant 2s+2$ by Proposition \ref{torbound}. If $\mu (I) = 2s+2$, then $I$ has Betti table identical to that of the ideal in Proposition \ref{maxideal}, so that by the proof of Proposition \ref{propmax}, $I$ has Tor algebra class $G(2s-1)$. \end{proof}
\begin{lemma}\label{tormins} Adopt Setup \ref{setup2} with $s \geqslant 3$. Then $I$ defines a compressed ring of Tor algebra class $G ( \mu(I)-3)$. \end{lemma}
\begin{proof} Consider the degree $s+1$ strand of the long exact sequence of Tor associated to the short exact sequence $$0 \to \frac{I_2}{I} \to \frac{R}{I} \to \frac{R}{I_2} \to 0.$$ We obtain:
\begin{align*} 0 &\to \operatorname{Tor}_2^R (R/I , k )_{s+1} \to \operatorname{Tor}_2 (R/I_2 , k )_{s+1} \\ &\to \operatorname{Tor}_1 (I_2 / I , k)_{s+1} \to \operatorname{Tor}_1^R (R/I , k)_{s+1} \\ &\to \operatorname{Tor}_1^R (R/I_2 , k)_{s+1} \to 0. \\ \end{align*}
Counting ranks,
\begin{align*} \dim_k \operatorname{Tor}_2^R (R/I , k)_{s+1} &= b -3+ \mu(I) - s -b \\ &= \mu(I) - s - 3. \\ \end{align*}
A similar, but easier, rank count on the degree $s+2$ strand yields $$\dim_k \operatorname{Tor}_2^R (R/I , k)_{s+2} = s+4,$$ implying that $R/I$ has Betti table
$$\begin{tabular}{c|c|c|c|c} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ \hline s-1 & 0 & s & \mu(I)-s-3 & 0 \\ \hline s & 0 & \mu(I) - s & s+4 & 1 \\ \hline 2s-1 & 0 & 0 & 0 &1 \\ \end{tabular}$$
By Theorem \ref{toralg} combined with Proposition \ref{trimmed}, the ideal $I$ defines a ring of Tor algebra class $G(r)$ for some $r \geqslant \mu(I)- 3$. We examine the induced map $$\delta : T_2 \to \operatorname{Hom} (T_1 , T_3),$$ where $T_i := \operatorname{Tor}_i^R (R/I , k ).$
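By the Betti table displayed above, $T_1$ lives only in internal degrees $s$ and $s+1$, $T_2$ only in degrees $s+1$ and $s+2$, and $T_3$ only in degrees $s+3$ and $2s+2$; since $s \geqslant 3$, the products $(T_1)_s \cdot (T_2)_{s+1}$ and $(T_1)_{s+1} \cdot (T_2)_{s+2}$ land in $(T_3)_{2s+1} = 0$ and $(T_3)_{2s+3} = 0$, respectively.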
Notice that the only nontrivial products can occur between $(T_1)_{s}$, $(T_2)_{s+2}$ and $(T_1)_{s+1}$, $(T_2)_{s+1}$, implying that $$\operatorname{rank}_k \delta \leqslant s + ( \mu(I) - s - 3 ) = \mu(I) - 3.$$ \end{proof}
A natural question arising from Corollary \ref{rbounds} is whether or not every possible $r$ value may be obtained for a given $s \geqslant 3$, where $I$ is obtained from Setup \ref{setup2}. The next proposition will allow us to answer in the affirmative:
\begin{prop}\label{torach} Let $s \geqslant 3$ be an integer and $R =k[x,y,z]$, where $k$ is any field.
\begin{enumerate} \item For $1\leqslant i < s/2$, the ideal $$I := (\textrm{Pf} (V_{s-i}^{s-2i}) \backslash \textrm{Pf}_{s-i+1} (V_{s-i}^{s-2i})) + R_+ \textrm{Pf}_{s-i+1} (V_{s-i}^{s-2i})$$ has Tor algebra class $G(2s-2i)$. \item For $1 \leqslant i < s/2$, the ideal $$I := (\textrm{Pf} (V_{s-i}^{s-2i}) \backslash \textrm{Pf}_{i+1} (V_{s-i}^{s-2i})) + R_+ \textrm{Pf}_{i+1} (V_{s-i}^{s-2i})$$ has Tor algebra class $G(2s-2i-1)$. \end{enumerate} \end{prop}
\begin{proof} In view of Corollary \ref{numgenss} and Lemma \ref{tormins}, it suffices to compute the rank of the map $q : V^* \to U$ as in Theorem \ref{12.6} to find the minimal number of generators; consequently,
\begin{align*} \dim_k \operatorname{Tor}_1^R (R/I , k)_{s+1} &= \mu (I) - s \\ &= \mu (\textrm{Pf} (V_{s-i}^{s-2i}) ) + 2 - \operatorname{rank}_k (q \otimes k) - s \\ &= s+3-2i - \operatorname{rank}_k ( q \otimes k). \\ \end{align*}
The above equality follows from a rank count on the long exact sequence of $\operatorname{Tor}$ associated to the short exact sequence $$0 \to \frac{\textrm{Pf} (V_{s-i}^{s-2i})}{I} \to \frac{R}{I} \to \frac{R}{\textrm{Pf} (V_{s-i}^{s-2i})} \to 0,$$ combined with the fact that $\dim_k \operatorname{Tor}_1^R ( R/\textrm{Pf} (V_{s-i}^{s-2i}) , k)_{s} = s+1$. We compute the map $q$ explicitly in each case. Let $\phi \in \bigwedge^2 V$ represent the matrix $V_{s-i}^{s-2i}$; in the first case, after writing $V = V' \oplus Re_{s-i+1}$, we see (recalling that $i>0$)
$$\phi = \phi' + e_{s-i+1} \wedge ( -x^2 e_{s-i} + y^2 e_{s-i+2} ).$$
Let $U = Re_x \oplus Re_y \oplus Re_z$ with map $X : e_x \mapsto x$, $e_y \mapsto y$, and $e_z \mapsto z$. Take $q : V^* \to U$ to be the map sending $e_{s-i}^* \mapsto -x e_x$, $e_{s-i+2}^* \mapsto ye_y$, and all other basis vectors to $0$. Clearly $q \otimes k = 0$, whence the resolution of Theorem \ref{12.6} is minimal. In particular, $\mu (I) = 2s+3-2i$. For the second case, retain the notation above. Decompose $V = V' \oplus Re_{i+1}$ and write $$\phi = \phi' + e_{i+1} \wedge (x^2 e_{2s-i} + z^2e_{2s-i+1} + ye_{2s-i+2} ).$$ Take $q : V^* \to U$ to be the map sending $e_{2s-i}^* \mapsto xe_x$, $e_{2s-i+1}^* \mapsto ze_z$, $e_{2s-i+2}^* \mapsto e_y$, and all other basis vectors to $0$. In this case $\operatorname{rank}_k (q \otimes k) = 1$, whence $\mu (I) = 2s+2-2i$. \end{proof}
\begin{cor}\label{torach2} Let $R = k[x,y,z]$ with the standard grading, where $k$ is any field.
Given any $s \geqslant 3$ and any $r$ with $s \leqslant r \leqslant 2s-1$, there exists an ideal $I$ with $\operatorname{Soc} (R/I) = k(-s) \oplus k(-2s+1)$ defining an Artinian compressed ring of Tor algebra class $G(r)$. \end{cor}
\begin{proof} Assume first that $r$ is even and $s \leqslant r < 2s-1$. In the case $r=s$, use Corollary \ref{evens2} on the ideal $( \textrm{Pf} (V_s^{ev} ) \backslash \textrm{Pf}_1 (V_s^{ev} )) + R_+ \textrm{Pf}_1 (V_s^{ev} )$. If $r>s$, employ Proposition \ref{torach} on the ideal
$$(\textrm{Pf} (V_{r/2}^{r-s}) \backslash \textrm{Pf}_{r/2+1} (V_{r/2}^{r-s})) + R_+ \textrm{Pf}_{r/2+1} (V_{r/2}^{r-s}).$$
Assume now that $r$ is odd, with $s \leqslant r \leqslant 2s-1$. If $r=2s-1$, use the ideal from Proposition \ref{propmax}. If $r<2s-1$, apply Proposition \ref{torach} to the ideal
$$(\textrm{Pf} (V_{(r+1)/2}^{r+1-s}) \backslash \textrm{Pf}_{s-(r+1)/2+1} (V_{(r+1)/2}^{r+1-s})) + R_+ \textrm{Pf}_{s-(r+1)/2+1} (V_{(r+1)/2}^{r+1-s}).$$
\end{proof}
\section{Socle Minimally Generated in Degrees $s$, $2s-2$}\label{evencase}
In this section we further exploit properties of the resolution of Theorem \ref{12.6}.
\begin{setup}\label{setup3} Let $k$ be a field and $V$ a $k$-vector space of dimension $3$; view $S(V)$ as graded by the standard grading. Let $I \subset R := S(V)$ be a grade $3$ homogeneous ideal defining a compressed ring with $\operatorname{Soc} (R/I ) = k(-s) \oplus k(-2s+2)$, where $s \geqslant 3$. Write $I = I_1 \cap I_2$ for $I_1$, $I_2$ homogeneous grade $3$ Gorenstein ideals defining rings with socle degrees $s$ and $2s-2$, respectively. The notation $R_+$ will denote the irrelevant ideal ($R_{>0}$). \end{setup}
It turns out that the ideals of Setup \ref{setup3} are also resolved by Theorem \ref{12.6}.
\begin{prop}\label{alscomp2} Adopt Setup \ref{setup3}. Then the ideal $I_2$ defines a compressed ring. \end{prop}
\begin{proof} This is identical to the proof of Proposition \ref{alscomp}. \end{proof}
\begin{lemma}\label{evenbtab} Adopt Setup \ref{setup3}. Then $R/I_2$ has Betti table
$$\begin{tabular}{c|c|c|c|c} & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ \hline s-1 & 0 & 2s+1 & 2s+1 & 0 \\ \hline s & 0 & 0 & 0 & 0 \\ \hline 2s-2 & 0 & 0 & 0 &1 \\ \end{tabular}$$
\end{lemma}
\begin{proof} We employ Proposition \ref{equalities}, where $r=3$, $c = 2s-2$, $m=1$, and $t = s$ ($= \lceil (2s-1)/2 \rceil$; see Proposition \ref{proplol}). Using the notation $$T_i := \operatorname{Tor}_i^R (R/I_2,k),$$ we obtain
\begin{equation*} \begin{split} & \dim (T_1)_{s} = 2s+1 \\ &\dim (T_2)_{s+1} - \dim (T_1)_{s+1} = 2s+1 \\ &\dim (T_2)_{s+2} = 0. \\ \end{split} \end{equation*}
Observe that we must have $\dim_k (T_1)_{s+1} = 0$, since otherwise the resolution of $R/I_2$ would not be self-dual, contradicting the fact that $I_2$ is Gorenstein. This yields the result. \end{proof}
\begin{prop}\label{genset} Adopt Setup \ref{setup3}.
There exists a minimal generating set $(\phi_1 , \dots , \phi_{2s+1})$ for $I_2$ such that $$I = (\phi_1 , \dots , \phi_{2s} ) + R_+ \phi_{2s+1}.$$ \end{prop}
\begin{proof} By the definition of a compressed ring,
\begingroup\allowdisplaybreaks \begin{align*} \dim I_s &=\dim R_s - \min \{ \dim_k R_s , \dim_k R_0 + \dim_k R_{s-2} \} \\ &= (s+2)(s+1)/2 - 1 - s(s-1)/2 \\ &= 2s. \\ \end{align*} \endgroup
Choose a basis $\{ \phi_1 , \dots , \phi_{2s} \}$ for $(I_1 \cap I_2)_s$. By Proposition \ref{alscomp2}, $I_2$ defines a compressed ring, whence $\dim_k (I_2)_s = 2s+1$ by Lemma \ref{evenbtab}. Extend $\{\phi_1 , \dots , \phi_{2s} \}$ to a basis $\{ \phi_1 , \dots , \phi_{2s+1} \}$ of $(I_2)_s$. Observe that $(I_1 \cap I_2)_{s+1} = (I_2)_{s+1}$ by a dimension count. Since $I$ defines a compressed ring, the degrees of its minimal generators are concentrated in $2$ consecutive degrees. This means that $$I = (\phi_1 , \dots , \phi_{2s}) + R_+ \phi_{2s+1}.$$ \end{proof}
\begin{prop}\label{numgens2} Adopt Setup \ref{setup3}. Then $\mu (I) = 2s$ or $2s+1$. \end{prop}
\begin{proof} In view of Corollary \ref{numgenss} and Proposition \ref{genset}, it suffices to compute the rank of the map $q \otimes k$ as in Theorem \ref{12.6}. A priori, $\operatorname{rank}_k (q \otimes k) \leqslant 3$; assume first that $\operatorname{rank}_k ( q \otimes k ) = 0$. Assume $I_2$ arises from the Pfaffians of some skew symmetric matrix $M$. Observe that $M$ has linear entries by Lemma \ref{evenbtab}. Counting degrees, one notices that $q$ must have degree $0$ entries. Therefore $q = 0$ identically if $q \otimes k =0$, implying that $M$ must have an entire row of zeroes. This is impossible, so $\operatorname{rank}_k (q \otimes k) \geqslant 1$. Assume instead that $\operatorname{rank}_k (q \otimes k ) = 1$. Without loss of generality, we may assume that $q : V^* \to Re_x \oplus Re_y \oplus Re_z$ is the map sending $e_{2s+1}^* \mapsto e_x$ and all other basis vectors to $0$. This means that $M$ has a row consisting of a single nonzero linear entry, $x$. But the resolution of $I_2$ implies that there is a relation of the form $x \cdot \phi = 0$ for some minimal generator $\phi$ of $I_2$, contradicting the fact that $R$ is a domain. Thus $\operatorname{rank}_k (q \otimes k) \geqslant 2$. \end{proof}
\begin{cor}\label{torboundev} Adopt Setup \ref{setup3}. Then $I$ defines a compressed ring of Tor algebra class $G(r)$ for some $2s-3 \leqslant r \leqslant 2s-1$. \end{cor}
\begin{proof} By Theorem \ref{toralg} combined with Proposition \ref{numgens2}, $I$ has Tor algebra class $G(r)$ for some $r \geqslant 2s-3$ or $r \geqslant 2s-2$, according to whether $\mu(I) = 2s$ or $2s+1$. Observe that $\dim_k \operatorname{Tor}_1(R/I , k)_{s+1} = 0$ or $1$ if $\mu(I) = 2s$ or $2s+1$, respectively. Counting ranks on the homogeneous strands of the long exact sequence of Tor associated to the short exact sequence $$0 \to I_2 / I \to R/I \to R/I_2 \to 0,$$ we deduce that $\dim_k \operatorname{Tor}_2^R (R/I , k)_{s+1} = 2s-2$ or $2s-1$ if $\mu(I) = 2s$ or $2s+1$, respectively. In a similar manner to Proposition \ref{torach}, we examine the induced map $$\delta : T_2 \to \operatorname{Hom} (T_1 , T_3),$$ where $T_i := \operatorname{Tor}_i^R (R/I , k ).$
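Note that since $R/I$ is Artinian with $\operatorname{Soc}(R/I) = k(-s) \oplus k(-2s+2)$, the graded Betti numbers in homological degree three record the socle, so that $T_3 \cong k(-s-3) \oplus k(-2s-1)$; in particular $(T_3)_{2s+2} = 0$ and $(T_3)_{\geqslant 2s+2} = 0$ for $s \geqslant 3$.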
Observe that $$(T_1) (T_2)_{s+2} \subset (T_3)_{\geqslant 2s+2} = 0$$ and $$(T_1)_{s+1} (T_2)_{s+1} \subset (T_3)_{2s+2} = 0,$$ whence the only nontrivial products can occur between $(T_1)_{s}$ and $(T_2)_{s+1}$. This means $$\operatorname{rank}_k \delta \leqslant 2s-2 \ \textrm{or} \ 2s-1.$$ \end{proof}
\begin{question} Let $R= k[x,y,z]$. Does there exist a homogeneous ideal $I$ defining a compressed ring with $\operatorname{Soc} (R/I) = k(-s) \oplus k(-2s+2)$ such that either
\begin{enumerate}[(a)] \item $\mu(I) = 2s$ and $R/I$ has Tor algebra class $G(2s-2)$, or \item $\mu(I) = 2s+1$ and $R/I$ has Tor algebra class $G(2s-1)$? \end{enumerate} \end{question}
As Corollary \ref{torboundev} suggests, the numerology alone does not rule out the existence of ideals of the above form.
\section*{Acknowledgements}
Thanks to Andy Kustin for helpful comments on various drafts of this paper.
\begin{thebibliography}{}
\bibitem{av2012} L. Avramov, \emph{A cohomological study of local rings of embedding codepth 3}, J. Pure Appl. Algebra 216, no. 11 (2012), pp. 2489--2506.
\bibitem{av71} L. Avramov, E. Golod, \emph{On the homology algebra of the Koszul complex of a local Gorenstein ring}, Mat. Zametki 9 (1971), pp. 53--58.
\bibitem{torclass} L. Avramov, A. Kustin, M. Miller, \emph{Poincaré series of modules over local rings of small embedding codepth or small linking number}, Journal of Algebra, Volume 118 (1988), pp. 162--204.
\bibitem{BE} D. Buchsbaum, D. Eisenbud, \emph{Algebra Structures for Finite Free Resolutions, and Some Structure Theorems for Ideals of Codimension 3}, American Journal of Mathematics, Volume 99 (1977), pp. 447--485.
\bibitem{boij} M. Boij, \emph{Betti Numbers of Compressed Level Algebras}, Journal of Pure and Applied Algebra, vol. 134 (1999), pp. 111--131.
\bibitem{boijsod} M. Boij, J. Söderberg, \emph{Betti numbers of graded modules and the multiplicity conjecture in the non-Cohen--Macaulay case}, Algebra Number Theory 6 (2012), no. 3, pp. 437--454.
\bibitem{crv} M. Cavaliere, M. Rossi, G. Valla, \emph{On the Resolution of Certain Graded Algebras}, Trans. Amer. Math. Soc., vol. 337 (1993), pp. 389--409.
\bibitem{christ} L. Christensen, O. Veliche, J. Weyman, \emph{Trimming a Gorenstein Ideal}, Journal of Commutative Algebra (2019), to appear.
\bibitem{christ2} L. Christensen, O. Veliche, J. Weyman, \emph{Free resolutions of Dynkin format and the licci property of grade 3 perfect ideals}, Math. Scand. (2019), to appear.
\bibitem{christ3} L. Christensen, O. Veliche, J. Weyman, \emph{Linkage classes of grade $3$ perfect ideals}, arXiv:\texttt{1812.11552} (2018).
\bibitem{elkustin} S. El Khoury, A. Kustin, \emph{Artinian Gorenstein algebras with linear resolutions}, Journal of Algebra, vol. 420 (2016), pp. 402--474.
\bibitem{miller} C. Miller, H. Rahmati, \emph{Free Resolutions of Artinian compressed algebras}, Journal of Algebra, vol. 470 (2018), pp. 270--301.
\bibitem{watanabe} J. Watanabe, \emph{A Note on Gorenstein Rings of Embedding Codimension Three}, Nagoya Math. J., Vol. 50 (1973), pp. 227--232.
\bibitem{wey89} J. Weyman, \emph{On the structure of free resolutions of length 3}, J. Algebra 126, no. 1 (1989), pp. 1--33.
\bibitem{wey2018} J. Weyman, \emph{Generic free resolutions and root systems}, Annales de l'Institut Fourier, Volume 68, no. 3 (2018), pp. 1241--1296.
\end{thebibliography}
\end{document}
\begin{document} \maketitle \thispagestyle{empty}
\begin{abstract}\noindent\baselineskip=15pt In this paper, we give a new approach for the study of Weyl-type theorems. Precisely, we introduce the concepts of spectral valued and spectral partitioning functions. Using two natural order relations on the set of spectral valued functions, we reduce the question of the relationship between Weyl-type theorems to the study of the set difference between the parts of the spectrum that are involved. This study completely solves the question of the relationship between two spectral valued functions that are comparable for one or the other order relation. Several known results about Weyl-type theorems then become corollaries of the results obtained. \end{abstract}
\baselineskip=15pt \footnotetext{\small \noindent 2010 AMS subject classification: Primary 47A53, 47A10, 47A11 \\ \noindent Keywords: Spectral valued function, Partitioning, Spectrum, Weyl-type theorem. } \baselineskip=15pt
\section{Introduction}
Let $X$ be a Banach space, and let $L(X)$ be the Banach algebra of all bounded linear operators acting on $X.$ For $T\in L(X),$ we will denote by $N(T)$ the null space of $T$, by $\alpha(T)$ the nullity of $T$, by $R(T)$ the range of $T$, by $\beta(T)$ its defect and by $T^*$ the adjoint of $T.$ We will denote also by $\sigma(T)$ the spectrum of $T$ and by $\sigma_a(T)$ the approximate point spectrum of $T.$ If the range $R(T)$ of $T$ is closed and $\alpha(T)<\infty$ (resp. $\beta(T)<\infty),$ then $T$ is called an upper semi-Fredholm (resp. a lower semi-Fredholm) operator. If $T\in L(X)$ is either upper or lower semi-Fredholm, then $T$ is called a semi-Fredholm operator, and the index of $T$ is defined by $\mbox{ind}(T)=\alpha(T)-\beta(T)$. If both $\alpha(T)$ and $\beta(T)$ are finite, then $T$ is called a Fredholm operator. An operator $T\in L(X)$ is called a Weyl operator if it is a Fredholm operator of index zero. The Weyl spectrum $\sigma_{W}(T)$ of $T$ is defined by $\sigma_{W}(T)= \{ \lambda \in \mathbb{C}\mid T-\lambda I$ is not a Weyl operator\}.
For a bounded linear operator $T$ and a nonnegative integer $n,$ define $T_{[n]}$ to be the restriction of $T$ to $R(T^n),$ viewed as a map from $R(T^n)$ into $R(T^n)$ (in particular $T_{[0]}=T$). If for some integer $n,$ the range space $R(T^n)$ is closed and $T_{[n]}$ is an upper (resp. a lower) semi-Fredholm operator, then $T$ is called an upper (resp. a lower) semi-B-Fredholm operator. A semi-B-Fredholm operator is an upper or a lower semi-B-Fredholm operator, and in this case the index of $T$ is defined as the index of the semi-Fredholm operator $T_{[n]},$ see \cite{BS}. Moreover, if $T_{[n]}$ is a Fredholm operator, then $T$ is called a B-Fredholm operator, see \cite{BE1}. An operator $T\in L(X)$ is said to be a B-Weyl operator \cite{BW}, if it is a B-Fredholm operator of index zero. The B-Weyl spectrum $\sigma_{BW}(T)$ of $T$ is defined by $\sigma_{BW}(T)= \{ \lambda \in \mathbb{C}\mid T-\lambda I$ is not a B-Weyl operator\}.
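Note that every Fredholm operator is a B-Fredholm operator (take $n=0$ in the definition above), and the two notions of index then coincide; consequently every Weyl operator is a B-Weyl operator, so $\sigma_{BW}(T)\subseteq \sigma_{W}(T)$ for every $T\in L(X).$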
The ascent $a(T)$ of an operator $T$ is defined by $a(T)=\inf \{ n\in \mathbb{N} \mid N(T^n)=N(T^{n+1})\},$ and the descent $ \delta(T)$ of $T$ is defined by $\delta(T)= \inf \{ n \in \mathbb{N} \mid R(T^n)= R(T^{n+1})\},$ with $ \inf\, \emptyset= \infty.$ According to \cite{H}, a complex number $ \lambda$ is a pole of the resolvent of $T$ if and only if $ 0< \max (a(T- \lambda I), \delta(T- \lambda I))< \infty.$ Moreover, if this is the case, then $a(T- \lambda I)= \delta(T- \lambda I).$ An operator $T$ is called Drazin invertible if $0$ is a pole of $T.$ The Drazin spectrum $\sigma_{D}(T)$ of $T$ is defined by $\sigma_{D}(T)=\{\lambda\in\mathbb{C} : T-\lambda I \, \,\text{is not Drazin invertible}\}.$ Define also the set $LD(X)$ by $LD(X)=\{T\in L(X) : a(T)<\infty \mbox{ and } R(T^{a(T)+1}) \mbox{ is closed}\},$ and let $\sigma_{LD}(T)=\{\lambda\in\mathbb{C} : T-\lambda I\not\in LD(X)\}$ be the left Drazin spectrum. Following \cite{BK}, an operator $T\in L(X)$ is said to be left Drazin invertible if $T\in LD(X)$. We say that $\lambda\in \sigma_a(T)$ is a left pole of $T$ if $T-\lambda I\in LD(X)$, and that $\lambda\in\sigma_a(T)$ is a left pole of $T$ of finite rank if $\lambda$ is a left pole of $T$ and $\alpha(T-\lambda I)<\infty$. Let $SF_{+}(X)$ be the class of all upper semi-Fredholm operators and $ SF_{+}^{-}(X)=\{T\in SF_{+}(X) : \mbox{ind}(T)\leq 0\}.$ The upper semi-Weyl spectrum $\sigma_{SF_+^-}(T)$ of $T$ is defined by $\sigma_{SF_{+}^{-}}(T)=\{\lambda\in\mathbb{C} : T-\lambda I \not\in SF_{+}^{-}(X)\};$ the upper semi-B-Weyl spectrum $\sigma_{SBF_+^-}(T)$ of $T$ is defined similarly. An operator $T\in L(X)$ is called upper semi-Browder if it is an upper semi-Fredholm operator of finite ascent, and is called Browder if it is a Fredholm operator of finite ascent and descent. The upper semi-Browder spectrum $\sigma_{uB}(T)$ of $T$ is defined by $\sigma_{uB}(T)=\{\lambda\in \mathbb{C} : T-\lambda I \mbox { is not upper semi-Browder}\}$, and the Browder spectrum $\sigma_{B}(T)$ of $T$ is defined by $\sigma_B(T)=\{\lambda\in \mathbb{C} : T-\lambda I \mbox { is not Browder}\}$.
Below, we give a list of symbols and notations we will use:\\
\noindent $E(T):$ eigenvalues of $T$ that are isolated in the spectrum $\sigma(T)$ of $T,$\\
$E^0(T):$ eigenvalues of $T$ of finite multiplicity that are isolated in the spectrum $\sigma(T)$ of $T,$\\
$E_a(T):$ eigenvalues of $T$ that are isolated in the approximate point spectrum $\sigma_a(T)$ of $T,$\\
$E_a^0(T):$ eigenvalues of $T$ of finite multiplicity that are isolated in the approximate point spectrum $\sigma_a(T)$ of $T,$\\
$\Pi(T):$ poles of $T,$\\
$\Pi^0(T):$ poles of $T$ of finite rank,\\
$\Pi_a(T):$ left poles of $T,$\\
$\Pi_a^0(T):$ left poles of $T$ of finite rank,\\
$\sigma_{B}(T):$ Browder spectrum of $T,$ \\
$\sigma_{D}(T):$ Drazin spectrum of $T,$ \\
$\sigma_{LD}(T):$ left Drazin spectrum of $T,$\\
$\sigma_{uB}(T):$ upper semi-Browder spectrum of $T,$ \\
$\sigma_{BW}(T):$ B-Weyl spectrum of $T,$ \\
$\sigma_{W}(T):$ Weyl spectrum of $T,$ \\
$\sigma_{SF_+^-}(T):$ upper semi-Weyl spectrum of $T,$ \\
$\sigma_{SBF_+^-}(T):$ upper semi-B-Weyl spectrum of $T.$ \\
Hereafter, the symbol $\bigsqcup$ stands for disjoint union, while $iso(A)$ and $acc(A)$ denote respectively the set of isolated points and the set of accumulation points of a given subset $A$ of $\mathbb{C}.$
After this introduction, we define in the second section of this paper the concepts of spectral valued functions and spectral partitioning functions.
They are functions defined on the Banach algebra $L(X)$ and valued into $ \mathcal{P}(\mathbb{C})\times \mathcal{P}(\mathbb{C}),$ where $\mathcal{P}(\mathbb{C})$ is the set of the subsets of $\mathbb{C}.$ A spectral valued function $\Phi=(\Phi_1, \Phi_2)$ is a spectral partitioning, respectively a spectral a-partitioning, valued function for an operator $T \in L(X)$ if $ \sigma(T)= \Phi_1(T)\bigsqcup \Phi_2(T), $ respectively if $ \sigma_a(T)= \Phi_1(T)\bigsqcup \Phi_2(T).$ Recall that from \mathbb{C}e{WL}, if $T$ is a normal operator acting on a Hilbert space, then $ \sigma(T)= \sigma_W(T) \bigsqcup E(T).$ Thus a spectral valued function $\Phi= (\Phi_1, \Phi_2)$ could be considered as an "Abstract Weyl-type theorem," and an operator $ T \in L(X)$ satisfies the abstract Weyl-type theorem $\Phi,$ if $\Phi$ is a spectral partitioning or a-partitioning function for $T.$\\ Our main goal here is the study of abstract Weyl-type theorems and their relationship. By the study of relationship between two given abstract Weyl-type theorems $\Phi$ and $\Psi$ we mean the answer of the following question: If an operator $T \in L(X)$ satisfies one of the two abstract Weyl-type theorems $\Phi$ and $\Psi$, does $T$ satisfies the other one? The two abstract Weyl-type theorems $\Phi$ and $\Psi$ are said to be equivalent if $T \in L(X)$ satisfies one of the two abstract Weyl-type theorems $\Phi$ and $\Psi$ if and only $T$ satisfies the other one. To study the relationship between abstract Weyl-type theorems, we introduce two order relations $\leq $ and $<<$ on the set of spectral valued functions. Then the question of relationship between two comparable spectral valued functions for the order $ \leq$ is solved in terms of set difference between parts of the spectrum that are involved. In the third section, following the same steps as in the second section, we consider spectral a-partitioning functions and we obtain similar results to those of the second section. In the forth section, we give some crossed results by considering two spectral valued functions comparable for the order $\leq$ , one partitioning the spectrum and the other one partitioning the approximate point spectrum. We obtain new kind of results, where the set difference $\sigma(T) \setminus \sigma_a(T)$ plays a crucial role. At the end of this section, we study the case of two comparable spectral valued functions for the order relation $<<$, and we answer in Theorem \ref{thm45} and Theorem \ref{thm46} the question of relationship between the two spectral valued functions. \\ Globally, This study solves completely the question of relationship between two comparable spectral valued functions, and several known results about Weyl-type theorems appearing in recent literature becomes corollaries of the results obtained. To illustrate this, we will give through the different sections, several examples as an application of the results obtained, linking them to original references where they have been first established. As mentioned before, the original idea leading to a partition of the spectrum goes back to the famous paper by H. Weyl \mathbb{C}e{WL}. More recently, several authors worldwide had worked in this direction, see for example \mathbb{C}e{AP}, \mathbb{C}e{BK}, \mathbb{C}e{CH}, \mathbb{C}e{DD}, \mathbb{C}e{DH}, \mathbb{C}e{DU}, \mathbb{C}e{RA} and \mathbb{C}e{XH}. 
\section{Partitioning functions for the spectrum} In this section we study the relationship between two comparable spectral valued functions when one of them is a spectral partitioning function, and ask whether the other one is a spectral partitioning function as well. \begin{dfn} A spectral valued function is a function $\Phi= (\Phi_1, \Phi_2) : L(X) \rightarrow \mathcal{P}(\mathbb{C})\times \mathcal{P}(\mathbb{C})$ such that $ \,\, \forall \,\, T \in L(X), \Phi(T) \subset \sigma(T) \times \sigma(T),$ where $\mathcal{P}(\mathbb{C})$ is the set of the subsets of $\mathbb{C}.$ \end{dfn} \begin{dfn} Let $\Phi$ be a spectral valued function. We will say that $\Phi= (\Phi_1, \Phi_2)$ is a spectral partitioning function for an operator $T \in L(X)$ if $\sigma(T)= \Phi_1(T) \bigsqcup \Phi_2(T).$ \end{dfn} A spectral valued function $\Phi= (\Phi_1, \Phi_2)$ could be considered as an "Abstract Weyl-type theorem." An operator $ T \in L(X)$ satisfies the abstract Weyl-type theorem $\Phi$ if $\Phi$ is a spectral partitioning function for $T.$ \begin{ex} \begin{itemize} \item Let $\Phi_W(T)= (\sigma_W(T), E^0(T)), \,\, \forall \,\,T \in L(X).$ From \cite{WL}, it follows that $\Phi_W$ is a partitioning function for each normal operator acting on a Hilbert space. \item Let $\Phi_{BW}(T)= (\sigma_{BW}(T), E(T)),\,\, \forall \,\,T \in L(X).$ From \cite{BI}, it follows that $\Phi_{BW}$ is a partitioning function for each normal operator acting on a Hilbert space. \end{itemize} \end{ex} \begin{dfn} Let $\Psi$ and $\Phi$ be two spectral valued functions. We will say that $\Psi \leq \Phi$ if $\, \forall \, T \in L(X),$ we have $\Phi_1(T) \subset \Psi_1(T)$ and $\Psi_2(T) \subset \Phi_2(T).$ We will say that $\Psi << \Phi$ if \,\, $ \forall \, T \in L(X),$ we have $\Psi_1(T) \subset \Phi_1(T)$ and $\Psi_2(T) \subset \Phi_2(T).$ \end{dfn} It is easily seen that both $\leq$ and $<<$ are order relations on the set of spectral valued functions.
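To fix ideas, the following small sketch (ours, purely illustrative and not part of the theory) models a spectral valued function evaluated at one fixed operator $T$ as a pair of finite subsets of $\mathbb{C}$, and checks the order relation $\leq$ together with the partition condition; the sets used are hypothetical stand-ins for $\sigma(T)$, $\Phi(T)$ and $\Psi(T)$.
\begin{verbatim}
# Toy illustration with finite sets (Python); all data below are hypothetical.
def is_partitioning(spectrum, phi):
    """sigma(T) = Phi_1(T) disjoint-union Phi_2(T)?"""
    phi1, phi2 = phi
    return (phi1 | phi2) == spectrum and not (phi1 & phi2)

def leq(psi, phi):
    """Psi <= Phi : Phi_1(T) subset Psi_1(T) and Psi_2(T) subset Phi_2(T)."""
    return phi[0] <= psi[0] and psi[1] <= phi[1]

sigma = {0, 1, 2}                  # stand-in for sigma(T)
Phi = ({0, 1}, {2})                # stand-in for (sigma_BW(T), E(T))
Psi = ({0, 1, 2}, set())           # stand-in for (sigma_W(T), E^0(T))
assert is_partitioning(sigma, Phi) and leq(Psi, Phi)
# Set-difference criterion used in the theorem below:
print((Psi[0] - Phi[0]) == (Phi[1] - Psi[1]), is_partitioning(sigma, Psi))
\end{verbatim}
In this toy configuration both printed values are \texttt{True}, in line with the equivalence established in Theorem \ref{thm21} below.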
\begin{thm} \label{thm21} Let $T \in L(X)$ and let $\Phi$ be a spectral partitioning function for $T.$ If $\Psi$ is a spectral valued function such that $\Psi \leq \Phi,$ then $\Psi$ is a spectral partitioning function for $T$ if and only if $ \Psi_1(T) \setminus \Phi_1(T) = \Phi_2(T) \setminus \Psi_2(T).$ \end{thm} \begin{proof} Assume that $\Psi$ is a spectral partitioning function for $T.$ Then $\sigma(T)= \Psi_1(T) \bigsqcup \Psi_2(T)= \Phi_1(T) \bigsqcup \Phi_2(T).$ Hence $ \Psi_1(T) \setminus \Phi_1(T) = \Phi_2(T) \setminus \Psi_2(T).$\\ Conversely, assume that $ \Psi_1(T) \setminus \Phi_1(T) = \Phi_2(T) \setminus \Psi_2(T).$ Since $\Phi$ is a spectral partitioning function for $T,$ we have $\sigma(T)= \Phi_1(T) \bigsqcup \Phi_2(T).$ As $\Psi \leq \Phi,$ then $\Phi_1(T) \subset \Psi_1(T)$ and so $\sigma(T)= \Psi_1(T) \cup \Phi_2(T)= \Psi_1(T) \cup (\Phi_2(T) \setminus \Psi_2(T)) \cup \Psi_2(T).$ As $ \Psi_1(T) \setminus \Phi_1(T) = \Phi_2(T) \setminus \Psi_2(T),$ then $ \Phi_2(T) \setminus \Psi_2(T) \subset \Psi_1(T).$ Hence $\sigma(T) \subset \Psi_1(T) \cup \Psi_2(T).$ As we always have $ \Psi_1(T) \cup \Psi_2(T) \subset \sigma(T),$ then $\sigma(T) = \Psi_1(T) \cup \Psi_2(T).$ Moreover, we have $ \Psi_1(T) \cap \Psi_2(T) = [\Phi_1(T)\cup (\Psi_1(T)\setminus \Phi_1(T))] \cap \Psi_2(T)= [\Phi_1(T) \cap \Psi_2(T)] \cup [(\Psi_1(T)\setminus \Phi_1(T)) \cap \Psi_2(T)]= [\Phi_1(T) \cap \Psi_2(T)] \cup [(\Phi_2(T)\setminus \Psi_2(T)) \cap \Psi_2(T)]= \emptyset.$ Hence $\sigma(T)= \Psi_1(T) \bigsqcup \Psi_2(T) $ and $\Psi$ is a spectral partitioning function for $T.$ \end{proof} In the following corollary, as an application of Theorem \ref{thm21}, we give a direct proof of \cite[Theorem 3.9]{BK}. \begin{cor} \label{cor21} If $\Phi_{BW}$ is a spectral partitioning function for $T,$ then $\Phi_{W}$ is also a spectral partitioning function for $T.$ \end{cor} \begin{proof} Observe first that $\Phi_{W} \leq \Phi_{BW}.$ Then if $\Phi_{BW}$ is a spectral partitioning function for $T,$ it is easily seen that $\sigma_{W}(T) \setminus \sigma_{BW}(T)= E(T) \setminus E^0(T).$ From Theorem \ref{thm21}, it follows that $\Phi_{W}$ is also a spectral partitioning function for $T.$ \end{proof} Similarly to Theorem \ref{thm21}, we have the following theorem, which we give without proof. \begin{thm} \label{thm22} Let $T \in L(X)$ and let $\Psi$ be a spectral partitioning function for $T.$ If $\Phi$ is a spectral valued function such that $\Psi \leq \Phi,$ then $\Phi$ is a spectral partitioning function for $T$ if and only if $ \Psi_1(T) \setminus \Phi_1(T) = \Phi_2(T) \setminus \Psi_2(T).$ \end{thm} \begin{rema}\label{rema21} \cite[Example 3.12]{BK} There exist operators $T \in L(X)$ such that $\Phi_W$ is a spectral partitioning function for $T$ but $\Phi_{BW}$ is not a spectral partitioning function for $T.$ Indeed, let us consider the operator $Q$ defined for each $x=(\xi_i)\in l^1$ by \begin{equation*} Q(\xi_1,\xi_2,\xi_3,\dots, \xi_k,\dots) = ( 0, \alpha_1\xi_1, \alpha_2\xi_2,\dots, \alpha_{k-1}\xi_{k-1},\dots), \end{equation*} where $(\alpha_i)$ is a sequence of complex numbers such that $0<|\alpha_i|\le 1$ and $\sum_{i=1}^\infty |\alpha_i|<\infty$. We observe that \begin{equation*} \overline{R(Q^n)} \ne R(Q^n), \quad n=1,2,\dots \end{equation*} Indeed, for a given $n\in\mathbb{N}$ let $x^{(n)}_k = (1,\dots , 1,0,0,\dots)$ (with $n+k$ times $1$).
Then the limit $y^{(n)}=\lim_{k\to\infty}Q^n x^{(n)}_k$ exists and lies in $\overline{R(Q^n)}$. However, there is no element $x^{(n)}\in\ell^1$ satisfying the equation $Q^nx^{(n)}=y^{(n)}$ as the algebraic solution to this equation is $(1,1,1,\dots )\notin\ell^1$. Define $T$ on $X=\ell^1\oplus \ell^1$ by $T=Q\oplus 0$. Then $N(T)=\{0\}\oplus \ell^1$, $\sigma(T)=\{0\}$, $E(T)=\{0\}$, $E_0(T)=\emptyset$. Since $R(T^n) = R(Q^n) \oplus \{0\}$, $R(T^n)$ is not closed for any $n\in\mathbb{N}$; so $T$ is not a $B$-Weyl operator, and $\sigma_{BW}(T)=\{0\}$. Further, $T$ is not a Fredholm operator and $\sigma_{W}(T) = \{0\}$. Hence $\Phi_W$ is a spectral partitioning function for $T$ but $\Phi_{BW}$ is not a spectral partitioning function for $T.$ \end{rema} \begin{dfn} The Drazin spectral valued function $\Phi_D$ and the Browder spectral valued function $\Phi_B$ are defined respectively by: $$\Phi_D(T)= ( \sigma_{BW}(T), \Pi(T)), \, \, \Phi_B(T)= ( \sigma_{W}(T), \Pi^0(T)), \forall T \in L(X).$$ \end{dfn} \begin{thm}\label{thm23} Let $T \in L(X).$ Then the Drazin spectral valued function $\Phi_D$ is a spectral partitioning function for $T$ if and only if the Browder spectral valued function $\Phi_B$ is a spectral partitioning function for $T.$ \end{thm} \begin{proof} Observe first that $\Phi_B \leq \Phi_D.$ If $\Phi_D$ is a spectral partitioning function for $T,$ then $ \sigma_W(T) \setminus \sigma_{BW}(T)= \Pi(T) \setminus \Pi^0(T).$ From Theorem \ref{thm21}, we conclude that $\Phi_B$ is a spectral partitioning function for $T.$ Conversely assume that $\Phi_B$ is a spectral partitioning function for $T.$ Let us show that $ \sigma_W(T) \setminus \sigma_{BW}(T)= \Pi(T) \setminus \Pi^0(T).$ The inclusion $ \Pi(T) \setminus \Pi^0(T) \subset \sigma_W(T) \setminus \sigma_{BW}(T)$ is obvious. For the reverse inclusion, let $ \lambda \in \sigma_W(T) \setminus \sigma_{BW}(T).$ Then from \mathbb{C}e[Corollary 3.2]{BS}, $\lambda $ is isolated in $\sigma_W(T).$ As $\Phi_B$ is a spectral partitioning function for $T,$ then $\sigma(T)= \sigma_W(T) \bigsqcup \Pi^0(T).$ Hence $\lambda$ is isolated in $\sigma(T).$ As $ \lambda \notin \sigma_{BW}(T),$ then from \mathbb{C}e [Theorem 2.3]{BW}, $\lambda \in \Pi(T) \setminus \Pi_0(T).$ From Theorem \ref{thm22}, it follows that $\Phi_D$ is a spectral partitioning function for $T.$ \end{proof} The direct implication of Theorem \ref{thm23} had been proved in \mathbb{C}e[Theorem 3.15]{BK}, while the reverse implication was posed as a question in \mathbb{C}e[p. 374]{BK}, and answered in \mathbb{C}e[Theorem 3.1]{CM} and \mathbb{C}e[Theorem 2.1]{AZ}. \begin{rema} If the Drazin spectral valued function $\Phi_D$ is a spectral partitioning function for $T \in L(X),$ then $\sigma_{BW}(T)= \sigma_D(T)$ and $\sigma_{W}(T)= \sigma_B(T).$ \end{rema} \begin{exs} The following table summarize some of spectral valued functions considered recently as partitioning functions. 
\begin{center} \vbox{ \[ \begin{tabular}{|l|l|} \hline \multicolumn{2}{|c|}{\textbf{Spectral valued functions-1}} \\[5pt] \hline $\Phi_W(T)= (\sigma_{W}(T),E^0(T))$ & $\Phi_B(T)=(\sigma_{W}(T),\Pi^0(T))$\\[5pt] $\Phi_{gW}(T)=(\sigma_{BW}(T),E(T))$& $\Phi_{gB}(T)= (\sigma_{BW}(T),\Pi(T))$ \\[5pt] $\Phi_{Bw}(T)=(\sigma_{BW}(T), E^0(T))$ & $\Phi_{Bb}(T)=(\sigma_{BW}(T),\Pi^0(T))$ \\[5pt] $\Phi_{aw}(T)=(\sigma_{W}(T),E_a^0(T))$& $\Phi_{ab}(T)=(\sigma_{W}(T),\Pi_a^0(T))$ \\[5pt] $\Phi_{gaw}(T)=(\sigma_{BW}(T),E_a(T))$& $\Phi_{gab}(T)=(\sigma_{BW}(T),\Pi_a(T))$ \\[5pt] $ \Phi_{Baw}(T)=(\sigma_{BW}(T),E_a^0(T))$ & $\Phi_{Bab}(T)=(\sigma_{BW}(T),\Pi_a^0(T))$\\[5pt] \hline \end{tabular} \] \begin{center} \underline{Table~1} \end{center}} \end{center} Among the spectral valued functions listed in Table~1, we consider the following cases to illustrate the use of Theorem \ref{thm21} and Theorem \ref{thm22}. \begin{itemize} \item It is shown in \cite[Theorem 3.5]{BZ3} that if $\Phi_{gaw}$ is a spectral partitioning function for $ T \in L(X)$, then $\Phi_{gab}$ is also a partitioning function for $T.$ As $\Phi_{gab} \leq \Phi_{gaw}$,\, to prove this result using Theorem \ref{thm21}, it is enough to prove that $\emptyset= \sigma_{BW}(T) \setminus \sigma_{BW}(T)= E_a(T) \setminus \Pi_a(T),$ which is the case from \cite[Theorem 2.8]{BK}. \item It is shown in \cite[Theorem 2.9]{B1} that if $\Phi_{W}$ is a spectral partitioning function for $ T \in L(X)$, then $\Phi_{gW}$ is a partitioning function for $T$ if and only if $E(T)= \Pi(T).$ As $\Phi_{W} \leq \Phi_{gW}$,\, to prove this result using Theorem \ref{thm22}, it is enough to prove that $\emptyset= \sigma_{W}(T) \setminus \sigma_{W}(T)= E(T) \setminus \Pi(T),$ which is the case from \cite[Corollary 2.6]{BW}. \item It is shown in \cite[Corollary 5]{BAR} that if $\Phi_{W}$ is a spectral partitioning function for $ T \in L(X)$, then $\Phi_{B}$ is also a spectral partitioning function for $T.$ To see this using Theorem \ref{thm21}, as $\Phi_{B} \leq \Phi_{W}$,\, it is enough to prove that $\emptyset= \sigma_{W}(T) \setminus \sigma_{W}(T)= E^0(T) \setminus \Pi^0(T),$ which is the case from \cite[Theorem 4.2]{BW}. \end{itemize} \end{exs} \section{Partitioning functions for the approximate spectrum} In this section we study the relationship between two comparable spectral valued functions when one of them is a spectral a-partitioning function, and ask whether the other one is a spectral a-partitioning function as well. \begin{dfn} Let $\Phi$ be a spectral valued function and let $T \in L(X).$ We will say that $\Phi$ is a spectral a-partitioning function for $T$ if $\sigma_a(T)= \Phi_1(T) \bigsqcup \Phi_2(T).$ \end{dfn} \begin{ex} \begin{itemize} \item Let $\Phi_{aW}(T)= (\sigma_{SF_+^-}(T), E_a^0(T)), \,\, \forall \,\,T \in L(X).$ From \cite{RA}, it follows that $\Phi_{aW}$ is a spectral a-partitioning function for each normal operator acting on a Hilbert space.
\item Let $\Phi_{gaW}(T)= (\sigma_{SBF_+^-}(T), E_a(T)),\,\, \forall \,\,T \in L(X).$ In the case of a normal operator $T$ acting on a Hilbert space, we have $ \sigma(T)= \sigma_a(T)$ and $\Phi_{gaW}(T)=\Phi_{gW}(T).$ From \cite[Theorem 4.5]{BI}, it follows that $\Phi_{gaW}$ is a spectral a-partitioning function for $T.$ \end{itemize} \end{ex} \begin{thm} \label{thm31} Let $T \in L(X)$ and let $\Phi$ be a spectral a-partitioning function for $T.$ If $\Psi$ is a spectral valued function such that $\Psi \leq \Phi,$ then $\Psi$ is a spectral a-partitioning function for $T$ if and only if $ \Psi_1(T) \setminus \Phi_1(T) = \Phi_2(T) \setminus \Psi_2(T).$ \end{thm} \begin{proof} Assume that $\Psi$ is a spectral a-partitioning function for $T.$ Then $\sigma_a(T)= \Psi_1(T) \bigsqcup \Psi_2(T)= \Phi_1(T) \bigsqcup \Phi_2(T).$ Hence $ \Psi_1(T) \setminus \Phi_1(T) = \Phi_2(T) \setminus \Psi_2(T).$ Conversely, assume that $\Phi$ is a spectral a-partitioning function for $T$ and that $ \Psi_1(T) \setminus \Phi_1(T) = \Phi_2(T) \setminus \Psi_2(T).$ Then $\Psi_1(T) \subset \sigma_a(T)$ and $\Psi_2(T) \subset \sigma_a(T).$ As $\sigma_a(T)= \Phi_1(T) \bigsqcup \Phi_2(T)$ and $\Psi \leq \Phi,$ then $\Phi_1(T) \subset \Psi_1(T)$ and so $\sigma_a(T)= \Psi_1(T) \cup \Phi_2(T)= \Psi_1(T) \cup (\Phi_2(T) \setminus \Psi_2(T)) \cup \Psi_2(T) =\Psi_1(T) \cup (\Psi_1(T) \setminus \Phi_1(T)) \cup \Psi_2(T)= \Psi_1(T) \cup \Psi_2(T).$ Moreover, we have $ \Psi_1(T) \cap \Psi_2(T) = [\Phi_1(T)\cup (\Psi_1(T)\setminus \Phi_1(T))] \cap \Psi_2(T)= [\Phi_1(T) \cap \Psi_2(T)] \cup [(\Psi_1(T)\setminus \Phi_1(T)) \cap \Psi_2(T)]= [\Phi_1(T) \cap \Psi_2(T)] \cup [(\Phi_2(T)\setminus \Psi_2(T)) \cap \Psi_2(T)]=\emptyset. $ Hence $\sigma_a(T)= \Psi_1(T) \bigsqcup \Psi_2(T) $ and $\Psi$ is a spectral a-partitioning function for $T.$ \end{proof} In the following corollary, as an application of Theorem \ref{thm31}, we give a direct proof of \cite[Theorem 3.11]{BK}. \begin{cor} If $\Phi_{gaW}$ is a spectral a-partitioning function for $T,$ then $\Phi_{aW}$ is also a spectral a-partitioning function for $T.$ \end{cor} \begin{proof} If $\Phi_{gaW}$ is a spectral a-partitioning function for $T,$ then it is easily seen that $\sigma_{SF_+^-}(T) \setminus \sigma_{SBF_+^-}(T)= E_a(T) \setminus E_a^0(T).$ From Theorem \ref{thm31}, it follows that $\Phi_{aW}$ is also a spectral a-partitioning function for $T.$ \end{proof} Similarly to Theorem \ref{thm31}, we have the following theorem, which we give without proof.
\begin{thm} \label{thm32} Let $T \in L(X)$ and let $\Psi$ be a spectral a-partitioning function for $T.$ If $\Phi$ is a spectral valued function such that $\Psi \leq \Phi,$ then $\Phi$ is a spectral a-partitioning function for $T$ if and only if $ \Psi_1(T) \setminus \Phi_1(T) = \Phi_2(T) \setminus \Psi_2(T).$ \end{thm} \begin{rema} The spectral valued function $\Phi_{aW}$ is a spectral a-partitioning function for the operator $T$ considered in Remark \ref{rema21}, but $\Phi_{gaW}$ is not a spectral a-partitioning function for $T.$ \end{rema} \begin{dfn} The Left-Drazin spectral valued function $\Psi_{gaB}$ is defined by: $$\Psi_{gaB}(T)= ( \sigma_{SBF_+^-}(T), \Pi_a(T)), \, \forall T \in L(X),$$ while the Upper-Browder spectral valued function $\Psi_{aB}$ is defined on $L(X)$ by: $$\Psi_{aB}(T)= ( \sigma_{SF_+^-}(T), \Pi_a^0(T)), \forall T \in L(X).$$ \end{dfn} \begin{thm} \label{thm33} Let $T \in L(X).$ Then the Left-Drazin spectral valued function $\Psi_{gaB}$ is a spectral a-partitioning function for $T$ if and only if the upper-Browder spectral valued function $\Psi_{aB}$ is a spectral a-partitioning function for $T.$ \end{thm} \begin{proof} Observe first that $\Psi_{aB} \leq \Psi_{gaB}.$ If $\Psi_{gaB}$ is a spectral a-partitioning function for $T,$ then $ \sigma_{SF_+^-}(T) \setminus \sigma_{SBF_+^-}(T)= \Pi_a(T) \setminus \Pi_a^0(T).$ From Theorem \ref{thm31}, we conclude that $\Psi_{aB}$ is a spectral a-partitioning function for $T.$ Conversely assume that $\Psi_{aB}$ is a spectral a-partitioning function for $T.$ Let us show that $ \sigma_{SF_+^-}(T) \setminus \sigma_{SBF_+^-}(T)= \Pi_a(T) \setminus \Pi_a^0(T).$ The inclusion $ \Pi_a(T) \setminus \Pi_a^0(T) \subset \sigma_{SF_+^-}(T) \setminus \sigma_{SBF_+^-}(T)$ is obvious. For the reverse inclusion, let $ \lambda \in \sigma_{SF_+^-}(T) \setminus \sigma_{SBF_+^-}(T).$ From \mathbb{C}e[Corollary 3.2]{BS}, $\lambda $ is isolated in \, $\sigma_{SF_+^-}(T).$ As $\Psi_{aB}$ is a spectral partitioning function for $T,$ then $\sigma_a(T)= \sigma_{SF_+^-}(T) \bigsqcup \Pi_a^0(T).$ Hence $\lambda$ is isolated in $\sigma_a(T).$ From \mathbb{C}e[Theorem 2.8]{BK}, it follows that $\lambda \in \Pi_a(T) \setminus \Pi_a^0(T).$ \end{proof} The direct implication of Theorem \ref{thm33} had been proved in \mathbb{C}e[Theorem 3.8]{BK}, while the reverse implication was posed as a question in \mathbb{C}e[p. 374]{BK}, and answered in \mathbb{C}e[Theorem 1.3]{XH} and \mathbb{C}e[Theorem 2.2]{AZ}. \begin{rema} If the Left-Drazin spectral valued function $\Psi_{gaB}$ is a spectral a-partitioning function for $T \in L(X),$ then $\sigma_{SBF_-^+}(T)= \sigma_{LD}(T)$ and $\sigma_{SF_-^+}(T)= \sigma_{uB}(T).$ \end{rema} \begin{exs} The following table summarize some of spectral valued functions considered recently as a-partitioning functions. 
\begin{center} \vbox{ \[ \begin{tabular}{|l|l|} \hline \multicolumn{2}{|c|} {\textbf Spectral valued functions-2} \\[5pt] \hline $\Psi_{aW}(T)=(\sigma_{SF_+^-}(T),E_a^0(T))$ & $\Psi_{aB}(T)=(\sigma_{SF_+^-}(T), \Pi_a^0(T))$\\[5pt] $\Psi_{gaW}(T)=(\sigma_{SBF_+^-}(T),E_a(T))$& $\Psi_{gaB}(T)= (\sigma_{SBF_+^-}(T),\Pi_a(T))$ \\[5pt] $\Psi_{w}(T)= (\sigma_{SF_+^-}(T),E^0(T))$ & $\Psi_{b}(T)= (\sigma_{SF_+^-}(T), \Pi^0(T))$ \\[5pt] $\Psi_{gw}(T)=(\sigma_{SBF_+^-}(T),E(T))$ & $\Psi_{gb}(T)=(\sigma_{SBF_+^-}(T),\Pi(T))$\\[5pt] $ \Psi_{SBw}(T)= (\sigma_{SBF_+^-}(T),E^0(T))$&$\Psi_{SBb}(T) =(\sigma_{SBF_+^-}(T),\Pi^0(T))$ \\[5pt] $\Psi_{SBaw}(T)=(\sigma_{SBF_+^-}(T),E_a^0(T)) $ & $\Psi_{SBab}(T)=(\sigma_{SBF_+^-}(T), \Pi_a^0(T))$\\[5pt] \hline \end{tabular} \] \begin{center} {\mathcal U} nderline{Table~2} \end{center}} \end{center} Among the spectral valued functions listed in Table~2, we consider the following cases to illustrate the use of Theorem \ref{thm31} and Theorem \ref{thm32} \begin{itemize} \item It is shown in \mathbb{C}e[Theorem 2.15]{BZ1} that if $\Psi_{gw}$ is a spectral a-partitioning function for $ T \in L(X)$, then $\Psi_{gb}$ is also a spectral a-partitioning function for $T.$ Since $\Psi_{gb} \leq \Psi_{gw},$ to prove this result using Theorem \ref{thm31}, it is enough to prove that $\emptyset= \sigma_{SBF_+^-}(T) \setminus \sigma_{SBF_+^-}(T)= E(T) \setminus \Pi(T),$ which is the case from \mathbb{C}e[Theorem 4.2]{BI}. \item It is shown in \mathbb{C}e[Corollary 3.3 ]{BK} that if $\Psi_{gaW}$ is a spectral a-partitioning function for $ T \in L(X)$, then $\Psi_{gaB}$ is a spectral a-partitioning function for $T.$ Since $\Psi_{gaB} \leq \Psi_{gaW},$ to prove this result using Theorem \ref{thm31}, it is enough to prove that $\emptyset= \sigma_{SBF_+^-}(T) \setminus \sigma_{SBF_+^-}(T)= E_a(T) \setminus \Pi_a(T),$ which is the case from \mathbb{C}e[Theorem 2.8]{BK}. \end{itemize} \end{exs} \section {Crossed Results} In this section we consider the situation of two comparable spectral valued functions, one is spectral partitioning, while the other one would be spectral a-partitioning, and vice-versa. \begin{thm} \label{thm41} Let $T \in L(X)$ and let $\Phi$ be a spectral partitioning function for $T.$ If $\Psi$ is a spectral valued function such that $\Psi \leq \Phi,$ then $\Psi$ is a spectral a-partitioning function for $T$ if and only if $ \Phi_2(T) \setminus \Psi_2(T) = (\Psi_1(T) \setminus \Phi_1(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$ \end{thm} \begin{proof} If $\Psi$ is a spectral partitioning function for $T,$ then $\sigma(T)= [\Psi_1(T) \cup \Psi_2(T)] \bigsqcup [(\sigma(T) \setminus \sigma_a(T))]= [\Phi_1(T) \cup (\Psi_1(T) \setminus \Phi_1(T))\cup\Psi_2(T)]\bigsqcup[(\sigma(T) \setminus \sigma_a(T))]. 
$ Hence $\Phi_2(T) \setminus \Psi_2(T)=(\Psi_1(T) \setminus \Phi_1(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$\\ Conversely assume that $\Phi$ is a spectral partitioning function for $T$ and $ \Phi_2(T) \setminus \Psi_2(T) = (\Psi_1(T) \setminus \Phi_1(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$ Then $ \sigma(T)= \Phi_1(T)\bigsqcup \Phi_2(T)= \Phi_1(T) \cup (\Phi_2(T)\setminus \Psi_2(T)) \cup \Psi_2(T)= \Phi_1(T) \cup(\Psi_1(T) \setminus \Phi_1(T)) \cup (\sigma(T)\setminus \sigma_a(T))\cup \Psi_2(T)= \Psi_1(T) \cup \Psi_2(T)\cup (\sigma(T)\setminus \sigma_a(T)).$ Hence $\sigma_a(T)\subset \Psi_1(T) \cup \Psi_2(T).$ Since $ \Phi_2(T) \setminus \Psi_2(T) = (\Psi_1(T) \setminus \Phi_1(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)),$ then $ \Psi_1(T) \cup \Psi_2(T)\subset \sigma_a(T).$ Moreover as $\Psi_2(T) \subset \Phi_2(T),$ we have $\Psi_1(T) \cap \Psi_2(T)= [\Phi_1(T) \cup (\Psi_1(T) \setminus \Phi_1(T))] \cap \Psi_2(T)= \emptyset.$ Then $\sigma_a(T)= \Psi_1(T) \bigsqcup \Psi_2(T)$ and $\Psi$ is a spectral a-partitioning function for $T.$ \end{proof} \begin{cor}\label{cor41} Let $T \in L(X)$ and let $\Psi$ be a spectral a-partitioning function for $T.$ If $\Phi$ is a spectral valued function such that $\Psi \leq \Phi,$ then $\Phi$ is a spectral partitioning function for $T$ if and only if $ \Phi_2(T) \setminus \Psi_2(T) = (\Psi_1(T) \setminus \Phi_1(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$ \end{cor} \begin{proof} Assume that $\Phi$ is a spectral partitioning function for $T.$ As $\Psi \leq \Phi,$ and $\Psi$ is a spectral a-partitioning function for $T,$ then from Theorem \ref{thm41}, we have $ \Phi_2(T) \setminus \Psi_2(T) = (\Psi_1(T) \setminus \Phi_1(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$\\ Conversely assume that $\Psi$ is a spectral a-partitioning function for $T$ and $ \Phi_2(T) \setminus \Psi_2(T) = (\Psi_1(T) \setminus \Phi_1(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$ Then $ \sigma(T)= [\Psi_1(T) \cup \Psi_2(T)]\bigsqcup [(\sigma(T)\setminus \sigma_a(T))]= [ \Phi_1(T) \cup (\Psi_1(T) \setminus \Phi_1(T)) \cup \Psi_2(T)] \bigsqcup [\sigma(T)\setminus \sigma_a(T)]= \Phi_1(T) \cup (\Phi_2(T) \setminus \Psi_2(T))\cup \Psi_2(T)= \Phi_1(T) \cup \Phi_2(T).$ \\ As $\Phi_1(T) \subset \Psi_1(T),$ we have $\Phi_1(T) \cap \Phi_2(T)= \Phi_1(T) \cap [(\Phi_2(T) \setminus \Psi_2(T)) \cup \Psi_2(T)]= \Phi_1(T) \cap [(\Psi_1(T) \setminus \Phi_1(T)) \cup (\sigma(T)\setminus \sigma_a(T)) \cup \Psi_2(T)]= \emptyset.$ Therefore $\sigma(T)= \Phi_1(T) \bigsqcup \Phi_2(T)$ and $\Phi$ is a spectral partitioning function for $T.$ \end{proof} Similarly to Theorem \ref{thm41} and Corollary \ref{cor41} we have the following two results.
\begin{thm} \label{thm42} Let $T \in L(X)$ and let $\Phi$ be a spectral a-partitioning function for $T.$ If $\Psi$ is a spectral valued function such that $\Psi \leq \Phi,$ then $\Psi$ is a spectral partitioning function for $T$ if and only if $ \Psi_1(T) \setminus \Phi_1(T) = (\Phi_2(T) \setminus \Psi_2(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$ \end{thm} \begin{proof} If $\Psi$ is a spectral partitioning function for $T,$ then $\sigma(T)= \Psi_1(T) \bigsqcup \Psi_2(T) = \Phi_1(T) \cup (\Psi_1(T) \setminus \Phi_1(T))\cup\Psi_2(T).$ Hence $\Psi_1(T) \setminus \Phi_1(T)=(\Phi_2(T) \setminus \Psi_2(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$\\ Conversely assume that $\Phi$ is a spectral a-partitioning function for $T$ and $ \Psi_1(T) \setminus \Phi_1(T) = (\Phi_2(T) \setminus \Psi_2(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$ Then $ \sigma(T)= \Phi_1(T)\cup \Phi_2(T)\cup (\sigma(T)\setminus \sigma_a(T)) = \Phi_1(T) \cup (\Phi_2(T)\setminus \Psi_2(T)) \cup \Psi_2(T) \cup (\sigma(T)\setminus \sigma_a(T))= \Phi_1(T) \cup (\Psi_1(T)\setminus \Phi_1(T)) \cup \Psi_2(T). $ Hence $\sigma(T) =\Psi_1(T) \cup \Psi_2(T).$ Since $ \Psi_1(T) \setminus \Phi_1(T) = (\Phi_2(T) \setminus \Psi_2(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T))$ and $\Psi_2(T) \subset \Phi_2(T),$ then $ \Psi_1(T) \cap \Psi_2(T)= [\Phi_1(T) \cup (\Psi_1(T)\setminus \Phi_1(T))] \cap \Psi_2(T)= \emptyset. $ Hence $\sigma(T)= \Psi_1(T) \bigsqcup \Psi_2(T)$ and $\Psi$ is a spectral partitioning function for $T.$ \end{proof} \begin{cor} \label{cor42} Let $T \in L(X)$ and let $\Psi$ be a spectral partitioning function for $T.$ If $\Phi$ is a spectral valued function such that $\Psi \leq \Phi,$ then $\Phi$ is a spectral a-partitioning function for $T$ if and only if $ \Psi_1(T) \setminus \Phi_1(T) = (\Phi_2(T) \setminus \Psi_2(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$ \end{cor} \begin{proof} Assume that $\Phi$ is a spectral a-partitioning function for $T.$ As $\Psi \leq \Phi,$ and $\Psi$ is a spectral partitioning function for $T,$ then from Theorem \ref{thm42}, we have $ \Psi_1(T) \setminus \Phi_1(T) = (\Phi_2(T) \setminus \Psi_2(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$\\ Conversely assume that $\Psi$ is a spectral partitioning function for $T$ and $ \Psi_1(T) \setminus \Phi_1(T) = (\Phi_2(T) \setminus \Psi_2(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T)).$ Then $ \sigma(T)= \Psi_1(T)\cup \Psi_2(T) = \Phi_1(T) \cup (\Psi_1(T)\setminus \Phi_1(T)) \cup \Psi_2(T)= \Phi_1(T) \cup (\Phi_2(T) \setminus \Psi_2(T)) \cup (\sigma(T)\setminus \sigma_a(T)) \cup \Psi_2(T) = \Phi_1(T) \cup \Phi_2(T) \cup (\sigma(T)\setminus \sigma_a(T)). $ Hence $\sigma_a(T) \subset \Phi_1(T) \cup \Phi_2(T).$ Since $ \Psi_1(T) \setminus \Phi_1(T) = (\Phi_2(T) \setminus \Psi_2(T)) \bigsqcup (\sigma(T)\setminus \sigma_a(T))$ and $\Psi_2(T) \cap \Psi_1(T)= \emptyset,$ then $ \Phi_1(T) \cup \Phi_2(T) \subset \sigma_a(T).$ Moreover, we have $\Phi_1(T) \cap \Phi_2(T)= \Phi_1(T) \cap [(\Phi_2(T) \setminus \Psi_2(T)) \cup \Psi_2(T)] $ and since $\Phi_1(T) \subset \Psi_1(T),$ then $\Phi_1(T) \cap \Phi_2(T) = \emptyset.$ Therefore $\sigma_a(T)= \Phi_1(T) \bigsqcup \Phi_2(T)$ and $\Phi$ is a spectral a-partitioning function for $T.$ \end{proof} Among the spectral valued functions listed in Table~1 and Table~2, we consider the following cases to illustrate the use of Theorem \ref{thm41} and Theorem \ref{thm42}.
It is shown in \cite[Corollary 2.5]{RA} that if $\Psi_{aW}$ is a spectral a-partitioning function for $ T \in L(X)$, then $\Phi_{W}$ is a partitioning function for $T.$ When $\Psi_{aW}$ is a spectral a-partitioning function for $ T,$ then $ \sigma _W(T) \setminus \sigma_{SF_+^-}(T)=(E_a^0(T) \setminus E^0(T))\bigsqcup (\sigma(T) \setminus \sigma_a(T)).$ Since $\Phi_{W} \leq \Psi_{aW},$ this result is then a direct consequence of Theorem \ref{thm42}. Moreover, combining Theorem \ref{thm42} and Corollary \ref{cor42}, we have the following theorem, characterizing the equivalence of the two properties. \begin{thm}\label{thm43} Let $ T \in L(X).$ The spectral valued function $\Psi_{aW}$ is a spectral a-partitioning function for $T$ if and only if $\Phi_{W}$ is a partitioning function for $T$ and $ \sigma _W(T) \setminus \sigma_{SF_+^-}(T)=(E_a^0(T) \setminus E^0(T)) \bigsqcup(\sigma(T) \setminus \sigma_a(T)).$ \end{thm} It is shown in \cite[Theorem 2.6]{AP} that if $\Psi_{w}$ is a spectral a-partitioning function for $ T \in L(X)$, then $\Phi_{B}$ is a spectral partitioning function for $T.$ When $\Psi_{w}$ is a spectral a-partitioning function for $ T, $ then $\sigma _W(T) \setminus \sigma_{SF_+^-}(T)=(E^0(T) \setminus \Pi^0(T)) \bigsqcup (\sigma(T) \setminus \sigma_a(T)).$ Since $\Phi_{B} \leq \Psi_{w},$ this result is then a direct consequence of Theorem \ref{thm41}. Moreover, as in Theorem \ref{thm43}, we have the following theorem characterizing the equivalence of the two properties. \begin{thm} \label{thm44} Let $ T \in L(X).$ The spectral valued function $\Psi_{w}$ is a spectral a-partitioning function for $T$ if and only if $\Phi_{B}$ is a partitioning function for $T$ and $ \sigma _W(T) \setminus \sigma_{SF_+^-}(T)=(E^0(T) \setminus \Pi^0(T)) \bigsqcup (\sigma(T) \setminus \sigma_a(T)).$ \end{thm} \begin{ex} Let us consider the operator $T$ of Remark \ref{rema21}, and the two spectral valued functions defined by $\Phi_{gaw}(T)=( \Phi_1(T), \Phi_2(T))= (\sigma_{BW}(T), E_a(T))$ and $\Phi_{Baw}(T)=(\Phi'_1(T), \Phi'_2(T))= (\sigma_{BW}(T), E_a^0(T)).$ Then $ \Phi_{Baw} \leq \Phi_{gaw} $ and $\Phi_{Baw}$ is a spectral a-partitioning function for $T.$ Since we have $ \Phi_2(T) \setminus \Phi'_2(T) = \{0\},$ and $(\Phi'_1(T) \setminus \Phi_1(T)) \cup (\sigma(T)\setminus \sigma_a(T))= \emptyset,$ then from Corollary \ref{cor41}, $\Phi_{gaw}$ is not a spectral partitioning function for $T,$ which is readily verified. \end{ex} For the study of the spectral valued functions, we have considered comparable spectral valued functions for the order relation $\leq.$ This is not always the case, as seen by the spectral valued functions $\Psi_{gb}$ and $\Phi_{gab},$ defined by $\Psi_{gb}(T)=(\sigma_{SBF_+^-}(T),\Pi(T))$ and $\Phi_{gab}(T)=(\sigma_{BW}(T),\Pi_a(T)),$ for all $T \in L(X).$ We observe that $ \Psi_{gb}$ and $ \Phi_{gab}$ are in fact comparable for the order relation $<<$, in the sense that $\sigma_{SBF_+^-}(T) \subset \sigma_{BW}(T),$ and $\Pi(T) \subset \Pi_a(T).$ To deal with such cases, we have the following two results. \begin{thm}\label{thm45} Let $T \in L(X)$ and let $\Phi$ be a spectral partitioning function for $T.$ If $\Psi$ is a spectral valued function such that $\Psi << \Phi
,$ then $\Psi$ is a spectral a-partitioning function for $T$ if and only if $(\Phi_1(T) \setminus \Psi_1(T)) \bigsqcup (\Phi_2(T) \setminus \Psi_2(T))= \sigma(T)\setminus \sigma_a(T).$ \end{thm} \begin{proof} Since $\Phi$ is a spectral partitioning function for $T,$ then $\sigma(T)= \Phi_1(T) \bigsqcup \Phi_2(T).$ If $\Psi$ is a spectral a-partitioning function for $T,$ then $ \sigma(T)= \Phi_1(T) \bigsqcup \Phi_2(T)= [\Psi_1(T) \cup \Psi_2(T)] \bigsqcup [(\sigma(T)\setminus \sigma_a(T))]= \Psi_1(T) \cup (\Phi_1(T) \setminus \Psi_1(T)) \cup \Psi_2(T) \cup (\Phi_2(T) \setminus \Psi_2(T)).$ As $ (\Phi_1(T) \setminus \Psi_1(T)) \cap (\Phi_2(T) \setminus \Psi_2(T))= \emptyset,$ then $ \sigma(T)\setminus \sigma_a(T)= (\Phi_1(T) \setminus \Psi_1(T)) \bigsqcup (\Phi_2(T) \setminus \Psi_2(T)).$\\ Conversely assume that $ \sigma(T)\setminus \sigma_a(T)= (\Phi_1(T) \setminus \Psi_1(T)) \bigsqcup (\Phi_2(T) \setminus \Psi_2(T)).$ As $\sigma(T)= \Phi_1(T) \bigsqcup \Phi_2(T),$ then $\sigma_a(T)= \Psi_1(T) \cup \Psi_2(T).$ As we obviously have $ \Psi_1(T) \cap \Psi_2(T)= \emptyset, $ then $\sigma_a(T)= \Psi_1(T) \bigsqcup \Psi_2(T),$ and $\Psi$ is a spectral a-partitioning function for $T.$ \end{proof} Similarly to Theorem \ref{thm45}, we have the following result, which we give without proof. \begin{thm}\label{thm46} Let $T \in L(X)$ and let $\Psi$ be a spectral a-partitioning function for $T.$ If $\Phi$ is a spectral valued function such that $\Psi << \Phi,$ then $\Phi$ is a spectral partitioning function for $T$ if and only if $ \sigma(T)\setminus \sigma_a(T)= (\Phi_1(T) \setminus\Psi_1(T)) \bigsqcup (\Phi_2(T) \setminus \Psi_2(T)).$ \end{thm} We observe that in Theorem \ref{thm46}, $\Phi$ is a spectral partitioning function for $T$ if and only if the function $\Phi \setminus \Psi$ defined by $(\Phi \setminus \Psi)(T) = (\Phi_1(T) \setminus\Psi_1(T), \Phi_2(T) \setminus \Psi_2(T)), \, \forall T \in L(X),$ is partitioning for the complement of the approximate spectrum $\sigma(T)\setminus \sigma_a(T).$ \goodbreak {\footnotesize \noindent Mohammed Berkani,\\ \noindent Department of Mathematics,\\ \noindent Science Faculty of Oujda,\\ \noindent University Mohammed I,\\ \noindent Operator Theory Team, SFO,\\ \noindent Morocco\\ \noindent [email protected]\\} \end{document}
\begin{document} \title{A flexible Particle Markov chain Monte Carlo method} \author{Eduardo F. Mendes \\ School of Applied Mathematics \\ Funda\c{c}\~{a}o Getulio Vargas \and Christopher K. Carter \\ School of Economics \\ University of New South Wales \and David Gunawan \\ School of Economics \\ University of New South Wales \and Robert Kohn \\ School of Economics \\ University of New South Wales } \date{} \maketitle \begin{abstract} Particle Markov Chain Monte Carlo methods are used to carry out inference in non-linear and non-Gaussian state space models, where the posterior density of the states is approximated using particles. Current approaches usually perform Bayesian inference using either a particle Marginal Metropolis-Hastings (PMMH) algorithm or a particle Gibbs (PG) sampler. This paper shows how the two ways of generating variables mentioned above can be combined in a flexible manner to give sampling schemes that converge to a desired target distribution. The advantage of our approach is that the sampling scheme can be tailored to obtain good results for different applications. For example, when some parameters and the states are highly correlated, such parameters can be generated using PMMH, while all other parameters are generated using PG because it is easier to obtain good proposals for the parameters within the PG framework. We derive some convergence properties of our sampling scheme and also investigate its performance empirically by applying it to univariate and multivariate stochastic volatility models and comparing it to other PMCMC methods proposed in the literature. \end{abstract} \textbf{Keywords:} Diffusion equation; Factor stochastic volatility model; Metropolis-Hastings; Particle Gibbs sampler. \section{Introduction\label{S: Introd}} Our article deals with statistical inference for both the unobserved states and the parameters in a class of state space models. Its main goal is to give a flexible approach to constructing sampling schemes that converge to the posterior distribution of the states and the parameters. The sampling schemes generate particles as auxiliary variables. This work extends the methods proposed by \cite{andrieuetal2010}, \cite{olssonryden2011}, \cite{lindstenschon2012a}, \cite{lindstenetal2014}, \citet{Fearnhead2016}, and \citet{Deligiannidis2018}. \cite{andrieuetal2010} introduce two particle Markov chain Monte Carlo (MCMC) methods for state space models. The first is particle marginal Metropolis-Hastings (PMMH), where the parameters are generated with the states integrated out. The second is particle Gibbs (PG), which generates the parameters given the states. They show that the augmented density targeted by this algorithm has the joint posterior density of the parameters and states as a marginal density. \cite{andrieuetal2010} and \cite{andrieuroberts2009} show that the law of the marginal sequence of parameters and states, sampled using either PG or PMMH, converges to the true posterior as the number of iterations increases. Both particle MCMC methods are the focus of recent research. \cite{olssonryden2011} and \cite{lindstenschon2012a} use \textit{backward simulation} \citep{godsilletal2004} for sampling the state vector, instead of \textit{ancestral tracing} \citep{kitagawa1996}.
\cite{lindstenschon2012a} extend the PG sampler to a particle Metropolis within Gibbs (PMwG) sampler to deal with the case where the parameters cannot be generated exactly conditional on the states. \citet{Fearnhead2016} propose an augmented particle MCMC method. They show that their method can improve the mixing of the particle Gibbs sampler when the parameters are highly correlated with the states. Recently, \citet{Deligiannidis2018} proposed the correlated pseudo marginal Metropolis-Hastings method, which significantly reduces the number of particles used by the standard pseudo marginal method. Unless stated otherwise, we write PG to denote both the PG and PMwG samplers that generate the parameters conditional on the states. We note that there are no formal results in the literature to guide the user on whether to use PMMH or PG for any given problem. Our work extends the particle MCMC framework to situations where using just PMMH or just PG is inefficient. It is well-known from the literature on Gaussian and conditionally Gaussian state space models that confining MCMC for state space models to Gibbs sampling or Metropolis-Hastings sampling can result in inefficient or even degenerate sampling. See, for example, \cite{kimetal1998} who show for a stochastic volatility model that generating the states conditional on the parameters and the parameters conditional on the states can result in a highly inefficient sampler. See also \cite{carterkohn1996} and \cite{gerlachetal2000} who demonstrate using a signal plus noise model that a Gibbs sampler for the states and indicator variables for the structural breaks produces a degenerate sampler. A natural solution is to combine Gibbs and Metropolis-Hastings samplers. Motivated by that, we derive a particle sampler on the same augmented space as the PMMH and PG samplers, in which some parameters are sampled conditionally on the states and the remaining parameters are sampled with the states integrated out. We call this a PMMH+PG sampler. We show that the PMMH+PG sampler targets the same augmented density as the PMMH or PG samplers. We provide supplementary material showing that the Markov chain generated by the algorithm is uniformly ergodic, given regularity conditions. This implies that the marginal law of the Markov chain at the $n$th iteration of the algorithm converges to the posterior density function geometrically fast, uniformly in its starting value, as $ n\rightarrow \infty $. We use \textit{ancestral tracing} in the particle Gibbs step to make the presentation accessible. The online supplementary material shows how to modify the methods proposed in the paper to incorporate auxiliary particle filters and \textit{backward simulation} in the particle Gibbs step. The same convergence results for the latter methods are obtained by modifying the arguments in \cite{olssonryden2011}. We apply our PMMH+PG sampler to several univariate and multivariate examples using simulated and real datasets. As a main application we propose a general algorithm for Bayesian inference on a multivariate factor stochastic volatility (SV) model. This model is used to jointly model many co-varying financial time series, as it is able to capture the common features using only a small number of latent factors (see, e.g. \cite{Chib2006} and \cite{Kastner:2017}).
We consider a factor SV model in which the volatilities of the factors follow a traditional SV model (as in \cite{Chib2006} and \cite{Kastner:2017}) and the log-volatilities of the idiosyncratic errors follow either a continuous time Ornstein-Uhlenbeck (OU) process \citep{Stein1991} or a GARCH diffusion process \citep{Chib2004, Kleppe2010}. The OU process admits a closed form transition density whereas the GARCH process does not. Similar factor models can also be applied to spatial temporal data with a large number of spatial measurements at each time point. We use these examples to compare the performance of our sampling schemes to the standard PMMH and PG samplers of \cite{andrieuetal2010}, the particle Gibbs with data augmentation sampler of \citet{Fearnhead2016}, and the correlated PMMH of \citet{Deligiannidis2018}. For the standard and correlated PMMH, we consider adaptive random walk proposals and the refined proposals by \citet{dahlin2015} and \citet{Nemeth2016}. We show that the PMMH + PG sampler outperforms these methods in the situation where we have both a large number of parameters and a large number of latent states. In general, there are likely to be a number of different sampling schemes that can solve the same problems addressed in our article, and which sampler is best depends on a number of factors such as the model, the data set and the number of observations. We also note that our PMMH + PG approach can be further refined by using the data augmented PMMH and PG sampling schemes proposed by \citet{Fearnhead2016} and the refined proposals for the PMMH sampling scheme by \citet{dahlin2015} and \citet{Nemeth2016}. The rest of the paper is organized as follows. Section \ref{s:prelim} introduces the basic concepts and notation used throughout the paper as well as the PMMH+PG sampler for estimating a single state space model and its associated parameters. Sections \ref{SSS: cts time OU process} and \ref{PMMH+PG SV factor} compare the performance of the PMMH+PG sampler to other competing PMCMC methods for estimating univariate and multivariate stochastic volatility models, respectively. The paper has an online supplement which contains some further empirical and technical results. \section{The PMMH+PG sampling scheme for state space models\label{s:prelim}} This section introduces a sampling scheme that combines PMMH and PG steps for the Bayesian estimation of a state space model. The first three sections give preliminary results and Section~\ref{s:pmwg} presents the sampling scheme. The methods and models introduced in this section are used in the univariate models in Section~\ref{SSS: cts time OU process} and the multivariate models in Section~\ref{PMMH+PG SV factor}. \subsection{State space model\label{s:SSM}} Define $\mathbb{N}$ as the set of positive integers and let $\{X_{t}\}_{t\in \mathbb{N}}$ and $\{Y_{t}\}_{t\in \mathbb{N}}$ denote $\mathcal{X}$-valued and $\mathcal{Y}$-valued stochastic processes, where $\{X_{t}\}_{t\in \mathbb{N}}$ is a latent Markov process with initial density $f_1^{{\Greekmath 0112} }(x)$ and transition density $f_t^{{\Greekmath 0112} }(x^{\prime }|x)$, i.e., \begin{equation*} X_{1}\sim f_1^{{\Greekmath 0112} }(\cdot )\quad \mbox{and}\quad X_{t}|(X_{t-1}=x)\sim f_t^{{\Greekmath 0112} }(\cdot |x)\quad (t=2,3,\dots ). 
\end{equation*} The latent process $\{X_{t}\}_{t\in \mathbb{N}}$ is observed only through $\{Y_{t}\}_{t\in \mathbb{N}}$, whose value at time $t$ depends on the value of the hidden state at time $t$, and is distributed according to $g_t^{{\Greekmath 0112} }(y|x)$ \begin{equation*} Y_{t}|(X_{t}=x)\sim g_t^{{\Greekmath 0112} }(\cdot |x)\quad (t=1,2,\dots ). \end{equation*} The densities $f_t^{{\Greekmath 0112} }$ and $g_t^{{\Greekmath 0112} }$ are indexed by a parameter vector ${\Greekmath 0112} \in \Theta $, where $\Theta $ is an open subset of $\mathbb{R}^{d_{{\Greekmath 0112} }}$, and all densities are with respect to suitable dominating measures, denoted as $dx$ and $dy$. The dominating measures are frequently taken to be the Lebesgue measure if $ \mathcal{X}\in \mathcal{B}(\mathbb{R}^{d_{x}})$ and $\mathcal{Y}\in \mathcal{ B}(\mathbb{R}^{d_{y}})$, where $\mathcal{B}(A)$ is the Borel ${\Greekmath 011B} $ -algebra generated by the set $A$. Usually $\mathcal{X} = \mathbb{R}^{d_x}$ and $\mathcal{Y} = \mathbb{R}^{d_y}$. We use the colon notation for collections of random variables, i.e., $a_{t}^{1:N}=\left( a_{t}^{1},\dots ,a_{t}^{N}\right) $ and for $t\leq u$, $ a_{t:u}^{1:N}=\left( a_{t}^{1:N},\dots ,a_{u}^{1:N}\right) $. The joint probability density function of $\left( x_{1:T},y_{1:T}\right) $ is \begin{equation*} p\left( x_{1:T},y_{1:T}|{\Greekmath 0112} \right) =f_1^{{\Greekmath 0112} }(x_{1})g_1^{{\Greekmath 0112} }(y_{1}|x_{1})\,\prod_{t=2}^{T}f_t^{{\Greekmath 0112} }(x_{t}|x_{t-1})\,g_t^{{\Greekmath 0112} }(y_{t}|x_{t}). \end{equation*} We define $Z_{1}({\Greekmath 0112} ):=p(y_{1}|{\Greekmath 0112} )$ and $Z_{t}({\Greekmath 0112} ):=p(y_{t}|y_{1:t-1},{\Greekmath 0112} )$ for $t\geq 2$, so the likelihood is $ Z_{1:T}\left( {\Greekmath 0112} \right) =Z_{1}({\Greekmath 0112} )\times Z_{2}({\Greekmath 0112} )\ldots Z_{T}({\Greekmath 0112} )$. The joint filtering density of $X_{1:t}$ is \begin{equation*} p\left( x_{1:t}|y_{1:t},{\Greekmath 0112} \right) =\frac{p\left( x_{1:t},y_{1:t}|{\Greekmath 0112} \right) }{Z_{1:t}\left( {\Greekmath 0112} \right) }. \end{equation*} The posterior density of ${\Greekmath 0112} $ and $X_{1:T}$ can also be factorized as \begin{equation*} p(x_{1:T},{\Greekmath 0112} |y_{1:T})=\frac{p(x_{1:T},y_{1:T}|{\Greekmath 0112} )p({\Greekmath 0112} )}{ {\overline}erline{Z}_{1:T}}, \end{equation*} where the marginal likelihood ${\overline}erline{Z}_{T}=\int_{\Theta }Z_{1:T}\left( {\Greekmath 0112} \right) \,p({\Greekmath 0112} )\,\mathrm{d}{\Greekmath 0112} =p(y_{1:T})$. This factorization is used in the particle Markov chain Monte Carlo algorithms. \subsection{Target distribution for state space models\label{s:targetdist}} We first approximate the joint filtering densities $\{p(x_{t}|y_{1:t},{\Greekmath 0112}):\,t=1,2,\dots \}$ sequentially, using particles, i.e., weighted samples, $(x_{t}^{1:N},\bar{w}_{t}^{1:N})$, drawn from auxiliary distributions $m_{t}^{{\Greekmath 0112} }$. This requires specifying \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{importance densities} $m_{1}^{{\Greekmath 0112} }(x_{1}):=m_{1}(x_{1}|Y_{1}=y_{1},{\Greekmath 0112} )$ and $m_{t}^{{\Greekmath 0112}}(x_{t}|x_{t-1}):=m_{t}(x_{t}|X_{t-1}=x_{t-1},Y_{1:t}=y_{1:t},{\Greekmath 0112} )$, and a resampling scheme $\mathcal{M}(a_{t-1}^{1:N}|\bar{w}_{t-1}^{1:N})$, where each $a_{t-1}^{i}=k$ indexes a particle in $(x_{t-1}^{1:N},\bar{w}_{t-1}^{1:N})$, and is sampled with probability $\bar{w}_{t-1}^{k}$. 
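To make this construction concrete, the following sketch (ours, purely illustrative) implements the simplest choice, namely a bootstrap filter with $m_{1}^{\theta}=f_{1}^{\theta}$, $m_{t}^{\theta}(\cdot|x_{t-1})=f_{t}^{\theta}(\cdot|x_{t-1})$ and multinomial resampling, for a hypothetical linear Gaussian state space model; the model, the parameter values and all variable names are assumptions of the sketch and not part of the paper.
\begin{verbatim}
import numpy as np

def bootstrap_pf(y, N, theta, rng):
    """Bootstrap particle filter for the illustrative model
       x_t = phi*x_{t-1} + sigma*v_t,  y_t = x_t + tau*e_t,  v_t, e_t ~ N(0,1).
       Returns particles x, ancestor indices a, normalised weights wbar,
       and the log of the likelihood estimate."""
    phi, sigma, tau = theta
    T = len(y)
    x = np.zeros((T, N))                 # X_t^{1:N}
    a = np.zeros((T - 1, N), dtype=int)  # A_t^{1:N}
    wbar = np.zeros((T, N))              # normalised weights
    log_Zhat = 0.0
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2), N)  # draw from m_1 = f_1
    for t in range(T):
        if t > 0:
            a[t - 1] = rng.choice(N, size=N, p=wbar[t - 1])   # multinomial resampling
            x[t] = phi * x[t - 1, a[t - 1]] + sigma * rng.normal(size=N)
        # w_t^i = g_t(y_t | x_t^i) for the bootstrap proposal
        logw = -0.5 * np.log(2 * np.pi * tau**2) - 0.5 * ((y[t] - x[t]) / tau) ** 2
        m = logw.max()
        log_Zhat += m + np.log(np.mean(np.exp(logw - m)))     # adds log(N^-1 sum_i w_t^i)
        wbar[t] = np.exp(logw - m)
        wbar[t] /= wbar[t].sum()
    return x, a, wbar, log_Zhat
\end{verbatim}
The returned \texttt{log\_Zhat} is the logarithm of the unbiased likelihood estimate that enters the acceptance ratios of the PMMH steps described below.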
We refer to \cite{doucetetal2000}, \cite{merveetal2001}, and \cite{guoetal2005} for the choice of importance densities and \cite{doucetal2005} for a comparison between resampling schemes. Unless stated otherwise, upper case letters indicate random variables and lower case letters indicate the corresponding values of these random variables, e.g., $A_{t}^{j}$ and $a_{t}^{j}$, $X_{t}$ and $x_{t}$. We denote the vector of particles by \begin{equation}\label{eq:defineU} U_{1:T}:=\left( X_{1}^{1:N},\ldots ,X_{T}^{1:N},A_{1}^{1:N},\ldots ,A_{T-1}^{1:N}\right) \end{equation} where $a_{t}^{j}$ is the value of the random variable $A_{t}^{j}$ and its sample space by $\mathcal{U}:=\mathcal{X}^{TN}\times \mathbb{N}^{(T-1)N}$. The Sequential Monte Carlo (SMC) algorithm used here is the same one as in Section 4.1 of \cite{andrieuetal2010}, and is defined in Section~\ref{s:algorithms} and Algorithm~\ref{alg:smc} in the supplementary material. The algorithm provides an unbiased estimate \begin{align*}\widehat{Z}_{T}\left({\Greekmath 0112}\right)=Z(u_{1:T},{\Greekmath 0112} ):=\prod_{t=1}^{T}\left(N^{-1}\sum_{i=1}^{N}w_{t}^{i}\right), \end{align*} of the likelihood, where \begin{align*} w_1^i & = \frac{f_1^{\Greekmath 0112}(x_1^i)g_1^{\Greekmath 0112}(y_1|x_1^i) }{m_1^{\Greekmath 0112}(x_1^i)} , w_t^i = \frac{g_t^{\Greekmath 0112}(y_t|x_t^i)f_t^{\Greekmath 0112}(x_t^i|x_{t-1}^{a_{t-1}^i})} {m_t^{\Greekmath 0112}(x_t^i|x_{t-1}^{a_{t-1}^i})} \,\,\,\RIfM@\expandafter\text@\else\expandafter\mbox\fi{for} \,\,\, t =2,\dots, ,T, \,\,\, \RIfM@\expandafter\text@\else\expandafter\mbox\fi{and} \,\,\, {{\overline}erline w}_t^i = \frac{w_t^i}{\sum_{j=1}^N w_t^j}. \end{align*} The joint distribution of the particles given the parameters is \begin{eqnarray} {{\Greekmath 0120} \left( u_{1:T}|{\Greekmath 0112} \right) :=\prod_{i=1}^{N}m_{1}^{{\Greekmath 0112}} \left( x_{1}^{i}\right) \prod_{t=2}^{T}\left\{ \mathcal{M}(a_{t-1}^{1:N}|\bar{w}_{t-1}^{1:N}) \prod_{i=1}^{N}m_{t}^{{\Greekmath 0112} }\left( x_{t}^{i}|x_{t-1}^{a_{t-1}^{i}}\right) \right\}}. \label{eq:pfdens} \end{eqnarray} The key idea of particle MCMC methods is to construct a target distribution on an augmented space that includes the particles $U_{1:T}$ and has a marginal distribution equal to $p(x_{1:T},{\Greekmath 0112} |y_{1:T})$. This section describes the target distribution from \cite{andrieuetal2010}. Later sections describe particle MCMC methods to sample from this distribution and hence sample from $p(x_{1:T},{\Greekmath 0112} |y_{1:T})$. Section~\ref{appendixbsi} of the supplementary material describes other choices of target distribution and how it is straightforward to modify our results to apply to them. The simplest way of sampling from the particle approximation of $p(x_{1:T}|y_{1:T},{\Greekmath 0112})$ is called \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{ancestral tracing}. It was introduced in \cite{kitagawa1996} and used in \cite{andrieuetal2010} and consists of sampling one particle from the final particle filter. The method is equivalent to sampling an index $J=j$ with probability $\bar w_T^{j}$, tracing back its ancestral lineage $b_{1:T}^j$ ($b_T^j = j$ and $b_{t-1}^j=a_{t-1}^{b_t^j}$) and choosing the particle $x_{1:T}^j = (x_1^{b_1^j},\dots,x_T^{b_T^j})$. 
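Continuing the illustrative sketch above (with its assumed variable names), ancestral tracing takes only a few lines: sample the index $J$ with probability $\bar{w}_{T}^{J}$ and then follow the ancestor indices backwards.
\begin{verbatim}
def ancestral_tracing(x, a, wbar, rng):
    """Sample J ~ wbar_T, set b_T = J and b_{t-1} = a_{t-1}^{b_t};
       return the trajectory x_{1:T}^J and its ancestral lineage b_{1:T}."""
    T, N = x.shape
    b = np.zeros(T, dtype=int)
    b[T - 1] = rng.choice(N, p=wbar[T - 1])        # J
    for t in range(T - 2, -1, -1):
        b[t] = a[t, b[t + 1]]                      # trace back one generation
    return x[np.arange(T), b], b

# Illustrative usage (hypothetical data y and parameter values):
# rng = np.random.default_rng(0)
# x, a, wbar, log_Zhat = bootstrap_pf(y, N=200, theta=(0.9, 0.3, 0.5), rng=rng)
# x_traj, b = ancestral_tracing(x, a, wbar, rng)
\end{verbatim}
A draw of this type is what the sampling scheme below requires whenever an index $J$ and its trajectory are selected from the final particle filter.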
With some abuse of notation, for a vector $a_{t}$, denote $a_{t}^{(-k)}=\left( a_{t}^{1},\dots ,a_{t}^{k-1},a_{t}^{k+1},\dots,a_{t}^{N}\right) $, with obvious changes for $k\in \{1,N\}$, and denote \begin{equation*} u_{1:T}^{(-j)}=\left\{ x_{1}^{(-b_{1}^{j})},\ldots ,x_{T-1}^{(-b_{T-1}^{j})},x_{T}^{(-j)},a_{1}^{(-b_{1}^{1})},\ldots,a_{T-1}^{(-b_{T-1}^{j})}\right\} . \end{equation*} It simplifies the notation to sometimes use the following one-to-one transformation \begin{equation*} \left( u_{1:T},j\right) \leftrightarrow \left\{x_{1:T}^{j},b_{1:T-1}^{j},j,u_{1:T}^{(-j)}\right\} , \end{equation*} and switch between the two representations and use whichever is more convenient. Note that the right hand expression will sometimes be written as $\left\{ x_{1:T},b_{1:T-1},j,u_{1:T}^{(-j)}\right\} $ without ambiguity. We now assume Assumptions \ref{assu:propstatespace} and \ref{assu:resampling}, given in Section~\ref{s:algorithms} of the online supplement. The target distribution from \cite{andrieuetal2010} is \begin{align} \tilde{{\Greekmath 0119}}^{N}\left( x_{1:T},b_{1:T-1},j,u_{1:T}^{(-j)},{\Greekmath 0112} \right) \mathrel{:=} \frac{p(x_{1:T},{\Greekmath 0112} |y_{1:T})}{N^{T}}\frac{{\Greekmath 0120} \left( u_{1:T}|{\Greekmath 0112} \right) }{m_{1}^{{\Greekmath 0112} }\left( x_{1}^{b_{1}}\right) \RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }\prod_{t=2}^{T} \bar{w}_{t-1}^{a_{t-1}^{b_{t}}}m_{t}^{{\Greekmath 0112} }\left( x_{t}^{b_{t}}|x_{t-1}^{a_{t-1}^{b_{t}}}\right) }, \label{eq:targetdist} \end{align} where $u_{1:T}$ is given in Eq. \eqref{eq:defineU}. Assumption \ref{assu:propstatespace} ensures that $\tilde{{\Greekmath 0119}}^{N}\left(u_{1:T}|{\Greekmath 0112} \right) $ is absolutely continuous with respect to ${\Greekmath 0120}\left( u_{1:T}|{\Greekmath 0112} \right) $, so that ${\Greekmath 0120} \left( u_{1:T}|{\Greekmath 0112} \right) $ can be used as a Metropolis-Hastings proposal density for generating from $\tilde{{\Greekmath 0119}}^{N}\left( u_{1:T}|{\Greekmath 0112} \right)$. From Assumption \ref{assu:resampling}, Eq. (\ref{eq:targetdist}) has the following marginal distribution \begin{equation} \tilde{{\Greekmath 0119}}^{N}\left( x_{1:T},b_{1:T-1},j,{\Greekmath 0112} \right) =\frac{p(x_{1:T},{\Greekmath 0112} |y_{1:T})}{N^{T}}, \label{eq:margdist} \end{equation} and hence $\tilde{{\Greekmath 0119}}^{N}\left( x_{1:T},{\Greekmath 0112} \right) =p(x_{1:T},{\Greekmath 0112} |y_{1:T})$. The online supplement gives further details. \subsection{Conditional sequential Monte Carlo (CSMC)\label{s:CSMC}} The particle Gibbs algorithm in \cite{andrieuetal2010} uses exact conditional distributions to construct a Gibbs sampler. If we use the \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{ancestral tracing }augmented distribution given in (\ref{eq:targetdist}), then this includes the conditional distribution given by $\tilde{{\Greekmath 0119}}^{N}\left( u_{1:T}^{(-j)}|x_{1:T}^{j},b_{1:T-1}^{j},j,{\Greekmath 0112} \right)$, which involves constructing the particle approximation conditional on a pre-specified path. The \RIfM@\expandafter\text@\else\expandafter\mbox\fiit{conditional sequential Monte Carlo} algorithm, introduced in \cite{andrieuetal2010}, is a sequential Monte Carlo algorithm in which a particle $X_{1:T}^{J}=(X_{1}^{B_{1}^{J}},\dots,X_{T}^{B_{T}^{J}})$, and the associated sequence of ancestral indices $B_{1:T-1}^{J}$ are kept unchanged. 
In other words, the conditional sequential Monte Carlo algorithm is a procedure that resamples all the particles and indices except for $U_{1:T}^{J}=(X_{1:T}^{J},A_{1:T-1}^{J})=(X_{1}^{B_{1}^{J}},\dots,X_{T}^{B_{T}^{J}},B_{1}^{J},\dots ,B_{T-1}^{J})$. Algorithm~\ref{alg:condsmc} of the supplementary material describes the conditional sequential Monte Carlo algorithm (as in \cite{andrieuetal2010}), consistent with $(x_{1:T}^{j},a_{1:T-1}^{j},j)$. \subsection{Flexible sampling scheme for state space models\label{s:pmwg}} This section introduces a sampling scheme that is suitable for the state space form given in Section~\ref{s:SSM}, where some of the parameters can be generated exactly conditional on the state vectors using PG step, but other parameters must be generated using PMMH step. For simplicity, let ${\Greekmath 0112} :=({\Greekmath 0112} _{1},{\Greekmath 0112} _{2})$ be a partition of the parameter vector into $2$ components where each component may be a vector. Let $\Theta =\Theta_{1}\times\Theta _{2}$ be the corresponding partition of the parameter space. The following sampling scheme generates the vector of parameter ${\Greekmath 0112} _{1}$ using PMMH step and the vector of parameter ${\Greekmath 0112} _{2}$ using PG step. We call this a PMMH+PG sampler. It is important to note that the components in the parameter vector ${\Greekmath 0112} _{1}$ can be sampled separately in multiple PMMH steps and the components in the parameter vector ${\Greekmath 0112} _{2}$ can be sampled separately in multiple Gibbs steps. Details are given in Section~\ref{s:theory} in the online supplement. \begin{sscheme}[PMMH+PG Sampler] \label{ssch:pmwg} Given initial values for $U_{1:T}$, $J$ and ${\Greekmath 0112}$, one iteration of the MCMC involves the following steps. \begin{enumerate} \item (PMMH sampling) \begin{enumerate} \item Sample ${\Greekmath 0112} _{1}^{\ast }\sim q_{1,1}(\cdot |U_{1:T}, J, {\Greekmath 0112}_{2},{\Greekmath 0112} _{1}).$ \item Sample $U_{1:T}^{\ast }\sim {\Greekmath 0120}(\cdot |{\Greekmath 0112}_{2},{\Greekmath 0112} _{1}^{\ast }).$ \item Sample $J^{\ast } \sim \tilde{{\Greekmath 0119}}^N (\cdot |U_{1:T}^{\ast}, {\Greekmath 0112} _{2},{\Greekmath 0112} _{1}^{\ast }).$ \item Set $({\Greekmath 0112} _{1}, U_{1:T}, J ) \leftarrow ({\Greekmath 0112} _{1}^{\ast }, U_{1:T}^{\ast},J^{\ast })$ with probability \begin{align} {\Greekmath 010B} _{1} & \left( U_{1:T}, J, {\Greekmath 0112} _{1};U_{1:T}^{\ast },J^{\ast},{\Greekmath 0112} _{1}^{\ast }|{\Greekmath 0112} _{2}\right) = 1\wedge \nonumber \\ & \frac{\tilde{{\Greekmath 0119}}^{N}\left( U_{1:T}^{\ast } , {\Greekmath 0112} _{1}^{\ast}|{\Greekmath 0112} _{2}\right) }{\tilde{{\Greekmath 0119}}^{N}\left( U_{1:T}, {\Greekmath 0112} _{1}|{\Greekmath 0112}_{2}\right) }\, \frac{q_{1}(U_{1:T}, {\Greekmath 0112} _{1}|U_{1:T}^{\ast }, J^{\ast}, {\Greekmath 0112} _{2},{\Greekmath 0112} _{1}^{\ast })}{q_{1}(U_{1:T}^{\ast }, {\Greekmath 0112} _{1}^{\ast}|U_{1:T}, J,{\Greekmath 0112} _{2},{\Greekmath 0112} _{1})} , \label{eq:PMwGaccprob} \end{align} where \begin{eqnarray*} q_{1}( U_{1:T}^{\ast },{\Greekmath 0112} _{1}^{\ast } | U_{1:T}, J, {\Greekmath 0112} _{2},{\Greekmath 0112} _{1}) & = & q_{1,1}({\Greekmath 0112} _{1}^{\ast }|U_{1:T}, J, {\Greekmath 0112} _{2},{\Greekmath 0112} _{1}) {\Greekmath 0120}(U_{1:T}^\ast|{\Greekmath 0112} _{2},{\Greekmath 0112} _{1}^{\ast}). 
\end{eqnarray*}
\end{enumerate}
\item (PG sampling)
\begin{enumerate}
\item Sample ${\Greekmath 0112} _{2}^{\ast }\sim q_{2}(\cdot |X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{2},{\Greekmath 0112} _{1}).$
\item Set ${\Greekmath 0112}_2 \leftarrow {\Greekmath 0112}_{2}^{\ast }$ with probability
\begin{eqnarray}
\lefteqn{{\Greekmath 010B}_{2}\left({\Greekmath 0112}_{2};{\Greekmath 0112}_{2}^{\ast}|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{1}\right) =} \notag \\
&&1\wedge \frac{\tilde{{\Greekmath 0119}}^{N}\left( {\Greekmath 0112}_{2}^{\ast}|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{1}\right) }{\tilde{{\Greekmath 0119}}^{N}\left({\Greekmath 0112}_{2}|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112}_{1}\right) } \times \frac{q_{2}({\Greekmath 0112} _{2}|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112}_{1},{\Greekmath 0112}_{2}^{\ast })}{q_{2}({\Greekmath 0112}_{2}^{\ast }|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112}_{1},{\Greekmath 0112}_{2}) } . \label{eq:PMwGaccproba}
\end{eqnarray}
\end{enumerate}
\item Sample $U_{1:T}^{(-J)}\sim \tilde{{\Greekmath 0119}}^{N}(\cdot|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} )$ using the conditional sequential Monte Carlo (CSMC) algorithm discussed in Section~\ref{s:CSMC}.
\item Sample $J\sim \tilde{{\Greekmath 0119}}^{N}\left( \cdot |U_{1:T},{\Greekmath 0112} \right)$.
\end{enumerate}
\end{sscheme}
The generalization of the sampling scheme to the case where the components of the parameter vector ${\Greekmath 0112} _{1}$ are sampled separately in multiple PMMH steps and the components of the parameter vector ${\Greekmath 0112} _{2}$ are sampled separately in multiple Gibbs steps is straightforward and involves repeated steps of the same form as in Parts 1 and 2, respectively. Note that Parts 2 to 4 are the same as the particle Gibbs sampler described in \cite{andrieuetal2010} or the particle Metropolis within Gibbs sampler described in \cite{lindstenschon2012}. Part 1 differs from the particle marginal Metropolis-Hastings approach discussed in \cite{andrieuetal2010} by generating the variable $J$, which selects the trajectory. This is necessary since $J$ is used in Part 2. A major computational cost of the algorithm is generating the particles $p^{\ast}$ times in Part 1, where $p^{\ast}$ is the number of PMMH steps, as well as running the CSMC algorithm in Part 3. Hence there is a computational cost in using the PMMH+PG sampler compared to a particle Gibbs sampler. Similar comments apply to a blocked PMMH sampler. Section~\ref{s:theory} of the supplementary material discusses the convergence of Sampling Scheme~\ref{ssch:pmwg} to its target distribution.
\begin{remark}
\cite{andrieuetal2010} show that
\begin{equation}
\frac{\tilde{{\Greekmath 0119}}^{N}\left( U_{1:T},{\Greekmath 0112} _{1}|{\Greekmath 0112} _{2}\right) }{{\Greekmath 0120}\left( U_{1:T}|{\Greekmath 0112} _{2},{\Greekmath 0112} _{1}\right) }=\frac{Z(U_{1:T},{\Greekmath 0112})p({\Greekmath 0112} _{1}|{\Greekmath 0112} _{2})}{p\left( y_{1:T}|{\Greekmath 0112} _{2}\right) }, \label{eq:accprobsimplify1}
\end{equation}
and hence the Metropolis-Hastings acceptance probability in Eq.
(\ref{eq:PMwGaccprob}) simplifies to
\begin{equation}
1\wedge \frac{Z({\Greekmath 0112} _{1}^{\ast },{\Greekmath 0112} _{2},U_{1:T}^{\ast })}{Z({\Greekmath 0112}_{1},{\Greekmath 0112} _{2},U_{1:T})}\,\frac{q_{1,1}({\Greekmath 0112} _{1}|U_{1:T}^{\ast},J^{\ast},{\Greekmath 0112} _{2},{\Greekmath 0112} _{1}^{\ast })p({\Greekmath 0112} _{1}^{\ast }|{\Greekmath 0112} _{2})}{q_{1,1}({\Greekmath 0112} _{1}^{\ast }|U_{1:T},J,{\Greekmath 0112} _{2},{\Greekmath 0112}_{1})p({\Greekmath 0112} _{1}|{\Greekmath 0112} _{2})}. \label{eq:accprobsimplify1a}
\end{equation}
Equation~\eqref{eq:accprobsimplify1a} shows that the PMMH steps can be viewed as a particle approximation to an \textit{ideal} sampler, with the particle filter used to estimate the likelihood of the model. This version of the PMMH algorithm can also be viewed as a Metropolis-Hastings algorithm using an unbiased estimate of the likelihood.
\end{remark}
\begin{remark}
Part 1 of the sampling scheme is a good choice for a parameter vector ${\Greekmath 0112} _{1}$ that is highly correlated with the state vector $X_{1:T}$. Part 2 of the sampling scheme is a good choice if the parameter vector ${\Greekmath 0112}_{2}$ is not highly correlated with the states and it is possible either to sample exactly from the distribution $\tilde{{\Greekmath 0119}}^{N}\left( {\Greekmath 0112} _{2}|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{1}\right)$ or to use a good approximation to it as a Metropolis-Hastings proposal. Using Eq. \eqref{eq:margdist}, the Metropolis-Hastings acceptance probability in Eq. \eqref{eq:PMwGaccproba} simplifies to
\begin{equation}
1\wedge\frac{p\left(y_{1:T}|X_{1:T}^{J},{\Greekmath 0112}_{2}^{*},{\Greekmath 0112}_{1}\right)p\left(X_{1:T}^{J}|{\Greekmath 0112}_{2}^{*},{\Greekmath 0112}_{1}\right)p\left({\Greekmath 0112}_{2}^{*}|{\Greekmath 0112}_{1}\right)}{p\left(y_{1:T}|X_{1:T}^{J},{\Greekmath 0112}_{2},{\Greekmath 0112}_{1}\right)p\left(X_{1:T}^{J}|{\Greekmath 0112}_{2},{\Greekmath 0112}_{1}\right)p\left({\Greekmath 0112}_{2}|{\Greekmath 0112}_{1}\right)}\times\frac{q_{2}\left({\Greekmath 0112}_{2}|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112}_{1},{\Greekmath 0112}_{2}^{*}\right)}{q_{2}\left({\Greekmath 0112}_{2}^{*}|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112}_{1},{\Greekmath 0112}_{2}\right)}.
\end{equation}
See \cite{lindstenschon2012} for more discussion of the particle Metropolis-Hastings within Gibbs proposals in Part 2.
\end{remark}
\section{Univariate Example: The univariate continuous time Ornstein-Uhlenbeck process\label{SSS: cts time OU process}}
This section applies the PMMH+PG sampler defined in Section \ref{s:pmwg} to the univariate continuous time Ornstein-Uhlenbeck SV model with covariates in the mean.
\subsection{Definition of inefficiency} \label{SS: preliminaries of examples}
To define our measure of the inefficiency of a sampler that takes computing time into account, we first define the integrated autocorrelation time (IACT) for a univariate parameter ${\Greekmath 0112}$,
\begin{equation}
\textrm{IACT}_{{\Greekmath 0112}}:= 1+2\sum_{j=1}^{\infty}{\Greekmath 011A}_{j,{\Greekmath 0112}},
\end{equation}
where ${\Greekmath 011A}_{j,{\Greekmath 0112}}$ is the lag-$j$ autocorrelation of the iterates of ${\Greekmath 0112}$ in the MCMC after the chain has converged. A large value of IACT for one or more of the parameters indicates that the chain does not mix well.
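The link between IACT and sampler inefficiency is the standard large-sample approximation, stated here for completeness, that for the sample mean $\bar{{\Greekmath 0112}}_{M}=M^{-1}\sum_{i=1}^{M}{\Greekmath 0112}^{\left[ i\right] }$ of $M$ stationary iterates,
\begin{equation*}
\operatorname{Var}\left( \bar{{\Greekmath 0112}}_{M}\right) \approx \frac{\operatorname{Var}_{p\left( {\Greekmath 0112} |y_{1:T}\right) }\left( {\Greekmath 0112} \right) }{M}\times \textrm{IACT}_{{\Greekmath 0112}},
\end{equation*}
so that, for a fixed computing budget, the variance of the posterior mean estimate is approximately proportional to $\textrm{IACT}_{{\Greekmath 0112}}$ multiplied by the computing time per iteration; this motivates the time normalized variance defined below.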
We estimate $\textrm{IACT}_{{\Greekmath 0112}}$ based on $M$ iterates ${\Greekmath 0112}^{\left[1\right]},...,{\Greekmath 0112}^{\left[M\right]}$ (after convergence) as
\begin{align*}
{\widehat {\rm IACT}}_{{\Greekmath 0112},M} &=1+2\sum_{j=1}^{L_{M}}\widehat {{\Greekmath 011A}}_{j,{\Greekmath 0112}},
\end{align*}
where $\widehat{{\Greekmath 011A}}_{j,{\Greekmath 0112}}$ is the estimate of ${\Greekmath 011A}_{j,{\Greekmath 0112}}$, $L_{M}=\min(1000,L)$, and $L$ is the smallest lag $j$ for which $|\widehat{{\Greekmath 011A}}_{j,{\Greekmath 0112}}|<2/\sqrt{M}$; the threshold $2/\sqrt{M}$ is used because $1/\sqrt M$ is approximately the standard error of the autocorrelation estimates when the series is white noise. Let $\widehat{\textrm{IACT}}_{\textrm{MAX}}$ and $\widehat{\textrm{IACT}}_{\textrm{MEAN}}$ be the maximum and mean of the estimated IACT values over all the parameters in the model, respectively. Our measure of the inefficiency of a sampler based on $\widehat {\rm IACT}_{\rm MAX}$ is the time normalized variance (TNV),
\begin{equation}
\textrm{TNV}_{\textrm{MAX}}=\widehat{\textrm{IACT}}_{\textrm{MAX}}\times {\rm CT},
\end{equation}
where $\rm CT$ is the computing time in seconds per iteration; we define the inefficiency of a sampler based on $\widehat {\rm IACT}_{\rm MEAN}$ similarly. The relative time normalized variance (RTNV) is the TNV of a sampler relative to that of our method.
\subsection{The univariate continuous time Ornstein-Uhlenbeck process\label{SSS: subsection cts time OU process}}
We consider the model
\begin{align}\label{eq:obseqnOU}
y_{t}=z_{t}^{'}{\Greekmath 010C}+\exp\left(h_{t}/2\right)\varepsilon_{t},\qquad\text{where}\quad\varepsilon_{t}\sim N\left(0,1\right),
\end{align}
with the log-volatility $h_t$ generated by the continuous time Ornstein-Uhlenbeck (OU) process $\{h_{t}\}_{t\ge 1}$, introduced by \cite{Stein1991}. This process satisfies
\begin{equation}
d h_{t}={\Greekmath 010B}\left({\Greekmath 0116}-h_{t}\right) d t+{\Greekmath 011C} d W_{t},\label{eq:transitiondensityuniv}
\end{equation}
where $W_t$ is a Wiener process. The transition densities for $h_{t}$ have the closed form \citep[][p. 7]{Lunde2015}
\begin{align}
h_{t}|h_{t-1} & \sim N\left({\Greekmath 0116}+\exp\left(-{\Greekmath 010B}\right)\left(h_{t-1}-{\Greekmath 0116}\right),\frac{1-\exp\left(-2{\Greekmath 010B}\right)}{2{\Greekmath 010B}}{\Greekmath 011C}^{2}\right), \label{eq:exact transition}
\end{align}
with $h_{1} \sim N\left ( {\Greekmath 0116} , \frac{{\Greekmath 011C}^2}{2 {\Greekmath 010B}}\right )$. This is a state space model of the form given in Section~\ref{s:SSM} with $x_{1:T}=h_{1:T}$; its parameters are ${\Greekmath 010B}>0$, ${\Greekmath 0116}$, ${\Greekmath 011C}^{2}>0$, and the $m_{{\Greekmath 010C}}\times 1$ vector ${\Greekmath 010C}$. This is a general time series model that allows for a scalar dependent variable $y_{t}$ with possible dependence on covariates in the mean as well as stochastic variance terms.
Thus, $E\left(y_{t}|z_{t},h_{t},{\Greekmath 0112}\right)=z_{t}^{'}{\Greekmath 010C}$, where $z_{t}$ can consist of lags of $y_{t}$; $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Var}\left(y_{t}|z_{t},h_{t},{\Greekmath 0112}\right)=\exp\left(h_{t}\right)$. The model can be applied to many time series and has been extensively used in the financial econometrics literature. It is straightforward to generalise this model in a number of ways: for example, by allowing for covariates in the conditional variance and including conditional variance term in the mean. See \citet[pp. 216-221]{durbinkoopman2012}, who discuss the basic stochastic volatility model and some extensions. Many stochastic volatility diffusion models do not have a closed form transition density, e.g., the continuous time GARCH diffusion process \cite{Chib2004, Kleppe2010} discussed in Section \ref{S: factor SV model explanation}, and it is then necessary to estimate such state space models using an approximation such as the Euler discretization. It is therefore informative to study the relative performance of the PG+PMMH sampler for the OU process using both the closed form transition equation in Eq.~\eqref{eq:exact transition} as well as the OU with the Euler approximation in Eq.~\eqref{eq: OU with euler univ}, to see the relative loss due to the approximation. The Euler scheme approximates the evolution of the log-volatilities ${h}_{t}$ in equation \eqref{eq:transitiondensityuniv} by placing $M-1$ evenly spaced points between times $t$ and $t+1$. We denote the intermediate volatility components by $h_{t,1},...,h_{t,M-1}$, and it is convenient to set $h_{t,0}=h_{t}$ and $h_{t,M}=h_{t+1}$. The equation for the Euler evolution, starting at $h_{t,0}$ is (see, for example, \cite{Stramer2011}, pg. 234) \begin{align}\label{eq: OU with euler univ} h_{t,j}|h_{t,j-1}\sim N\left ( h_{t,j-1}+{\Greekmath 010B}\left({\Greekmath 0116}-h_{t,j-1}\right){\Greekmath 010E}, {\Greekmath 011C}^2{\Greekmath 010E} \right ), \end{align} for $j=1,...,M$, where ${\Greekmath 010E} = 1/M$. \subsection{Empirical results\label{SS: Empirical results for OU}} We use the following notation to describe the algorithm used in this example. The basic samplers, as used in Sampling Scheme 1, are $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left(\cdotp\right)$ and $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left(\cdotp\right)$. These samplers can be used alone or in combination. For example, $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left({\Greekmath 0112}\right)$ means using a PMMH step to sample the parameter vector ${\Greekmath 0112}$; $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left({\Greekmath 0112}_{1}\right)+\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left({\Greekmath 0112}_{2}\right)$ means sampling ${\Greekmath 0112}_{1}$ in the PMMH step and ${\Greekmath 0112}_{2}$ in the PG step; and $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left({\Greekmath 0112}\right)$ means sampling ${\Greekmath 0112}$ using the PG sampler. Our general procedure to determine an efficient sampling scheme is to first run a PG algorithm to identify which parameters have large IACT, or, in some cases, require a large amount of computational time to generate in the PG step. We then generate these parameters in the PMMH step. 
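The inefficiency measures defined in Section~\ref{SS: preliminaries of examples} are straightforward to compute from stored MCMC output. The following is a minimal sketch in Python (our own illustration; the function names, the array layout and the exact treatment of the cutoff lag $L$ are assumptions of the sketch rather than part of any of the samplers), with the post warm-up iterates stored in an array \texttt{draws} with $M$ rows and one column per parameter.
\begin{verbatim}
import numpy as np

def iact(x, max_lag=1000):
    """Estimated IACT of a univariate chain x, summing autocorrelations up to
    the first lag whose estimate falls below 2/sqrt(M), capped at max_lag."""
    x = np.asarray(x, dtype=float)
    M = x.size
    xc = x - x.mean()
    var = xc @ xc / M
    rho = np.array([(xc[:-j] @ xc[j:]) / (M * var)
                    for j in range(1, max_lag + 1)])
    below = np.abs(rho) < 2.0 / np.sqrt(M)
    L = int(np.argmax(below)) + 1 if below.any() else max_lag
    return 1.0 + 2.0 * rho[:L].sum()

def tnv(draws, ct_per_iteration):
    """TNV_MAX and TNV_MEAN for draws of shape (M, number of parameters)."""
    iacts = np.array([iact(draws[:, k]) for k in range(draws.shape[1])])
    return iacts.max() * ct_per_iteration, iacts.mean() * ct_per_iteration
\end{verbatim}
Calling \texttt{tnv(draws, ct)}, with \texttt{ct} the measured computing time per iteration in seconds, then corresponds to the $\textrm{TNV}_{\textrm{MAX}}$ and $\textrm{TNV}_{\textrm{MEAN}}$ rows reported in the tables below.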
\subsubsection*{Univariate OU model with exact transition density and no covariate} In this section, we consider the univariate OU model with exact transition density and no covariate $\left(m_{{\Greekmath 010C}}=0\right)$. We compare the performance of the following samplers: (I) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left({\Greekmath 010B},{\Greekmath 011C}^{2}\right)+\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left({\Greekmath 0116}\right)$, (II) the particle Gibbs with ancestral tracing approach of \citet{andrieuetal2010} $\left(\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGAT}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)\right)$, (III) the particle Gibbs with backward simulation approach of \citet{Lindsten2013} $\left(\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGBS}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)\right)$, (IV) PMMH with an adaptive random walk as the proposal density for the parameters $\left(\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH-RW}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)\right)$, (V) PMMH with the Metropolis adjusted Langevin algorithm (MALA) of \citet{Nemeth2016} for the proposal for the parameters $\left(\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH-MALA}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)\right)$, (VI) the correlated PMMH approach of \citet{Deligiannidis2018} with an adaptive random walk as the proposal density for the parameters $\left(\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Corr. PMMH-RW}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)\right)$, (VII) the correlated PMMH approach of \citet{Deligiannidis2018} with the Metropolis adjusted Langevin algorithm of \citet{Nemeth2016} as the proposal for the parameters $\left(\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Corr. PMMH-MALA}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)\right)$, and (VIII) the particle Gibbs with data augmentation approach of \citet{Fearnhead2016} $\left(\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGDA}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)\right)$. The score vector required for the MALA algorithm is estimated efficiently using methods described in \citet{Nemeth2016a}. The tuning parameters of the PGDA sampler are set optimally according to the approach described in \citet{Fearnhead2016}. The correlated PMMH proposed by \citet{Deligiannidis2018} correlates the random vectors $\boldsymbol{u}$ and $\boldsymbol{u}^{'}$ used to construct the estimators of the likelihood at the current and proposed values of the parameters (${\Greekmath 0112}$ and ${\Greekmath 0112}^{'}$ respectively). This is done to reduce the variance of the difference between $\log\left(Z_{1:T}\left({\Greekmath 0112}^{'},\boldsymbol{u}^{'}\right)\right)-\log\left(Z_{1:T}\left({\Greekmath 0112},\boldsymbol{u}\right)\right)$ which appears in the PMMH acceptance ratio. The correlated PMMH significantly reduces the number of particles required by the standard pseudo marginal method proposed by \citet{andrieuetal2010}. We use $N=500$ particles for the PMMH+PG, PGAT, PGBS, PMMH and PGDA samplers, and $N=50$ for the correlated PMMH sampler. 
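All of the PMMH-type samplers above rely on the estimated likelihood $Z\left( U_{1:T},{\Greekmath 0112} \right)$ that enters the acceptance probability in Eq.~\eqref{eq:accprobsimplify1a}. The following is a minimal sketch in Python of a bootstrap particle filter with multinomial resampling that returns the log of this estimate; \texttt{sample\_init}, \texttt{sample\_transition} and \texttt{log\_obs\_density} are generic placeholders for the model-specific densities and are our own names, not part of the algorithms described above. Fixing the random number generator plays the role of conditioning on $U_{1:T}$.
\begin{verbatim}
import numpy as np

def bootstrap_pf_loglik(y, theta, N, sample_init, sample_transition,
                        log_obs_density, rng):
    """Log of the unbiased likelihood estimate hat p(y_{1:T} | theta)
    from a bootstrap particle filter with N particles."""
    x = sample_init(theta, N, rng)                 # particles for t = 1
    loglik = 0.0
    for t in range(len(y)):
        logw = log_obs_density(y[t], x, theta)     # log observation weights
        c = logw.max()
        w = np.exp(logw - c)
        loglik += c + np.log(w.mean())             # log of the average weight
        idx = rng.choice(N, size=N, p=w / w.sum()) # multinomial resampling
        x = sample_transition(x[idx], theta, rng)  # propagate to t + 1
    return loglik
\end{verbatim}
The PMMH step in Part 1 of Sampling Scheme~\ref{ssch:pmwg} then accepts or rejects $({\Greekmath 0112} _{1}^{\ast },U_{1:T}^{\ast })$ using the difference of two such log-likelihood estimates, together with the prior and proposal terms in Eq.~\eqref{eq:accprobsimplify1a}.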
In this example, we use the bootstrap particle filter to sample the particles for all samplers and the adaptive random walk in \citet{robertsrosenthal2009} for the PMMH step in the PMMH+PG sampler as the proposal density for the parameters. The particle filter and the parameter samplers are implemented in Matlab. We apply the methods to a sample of daily US steel industry stock returns data obtained from the Kenneth French website\footnote{http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/datalibrary.html}, using a sample from January 3rd, 2001 to the 24th of December, 2003, a total of 1,000 observations. The priors for the OU parameters are ${\Greekmath 010B}\sim IG\left(\frac{v_{0}}{2},\frac{s_{0}}{2}\right)$, ${\Greekmath 011C}^{2}\sim IG\left(\frac{v_{0}}{2},\frac{s_{0}}{2}\right)$, where $v_{0}=10$ and $s_{0}=1$, $p\left({\Greekmath 0116}\right)\propto1$, and $p\left({\Greekmath 010C}\right)\propto1$. These prior densities cover most possible values in practice. We ran all the sampling schemes for 11,000 iterations and discarded the initial 1,000 iterations as warmup for all the methods. Table \ref{tab:Univariate-OU-model with no covariates} shows the IACT, TNV, and RTNV values for the parameters in the univariate OU model with an exact transition density and no covariate estimated using the 8 different samplers described above. The table shows the following points. (1) Both the PGAT and PGBS samplers have large IACT values for both parameters ${\Greekmath 010B}$ and ${\Greekmath 011C}^{2}$, and we show that putting those two parameters in the PMMH step improves the mixing significantly. We show later in this section and in Section \ref{s:simulations} that it is also beneficial to use a PMMH step for at least the ${\Greekmath 010B}$ and ${\Greekmath 011C}^{2}$ parameters for the stochastic volatility diffusion models that use an approximation such as the Euler discretization. (2) In terms of $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{TNV}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$, the PMMH+PG sampler is 3.18, 3.12, 1.08, and 1.51 times better than the PGAT, PGBS, $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Corr. PMMH-MALA}$, and PGDA samplers respectively, and the PMMH-RW, PMMH-MALA, and correlated PMMH-RW methods are 1.33, 2.56, and 1.88 times better than the PMMH+PG sampler, respectively. Similar conclusions can be made based on $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{TNV}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$. (3) The best sampler for this example is the correlated PMMH-RW. (4) The PMMH-MALA sampler has lower IACT values for all the parameters compared to the PMMH-RW sampler, but the correlated PMMH-RW sampler is better than the correlated PMMH-MALA sampler. This shows that there is no advantage of using particle MALA over the random walk proposal. It is therefore important to note that although the correlated PMMH can significantly reduce the number of particles required compared to standard PMMH, the variance of the estimate of the gradient of the log-posterior is not sufficiently small with the choice of $N=50$ particles used by the correlated PMMH sampler. This confirms the observation made by \cite{Nemeth2016} who write ``Our results show that the behaviour of particle MALA depends on how accurately we can estimate the gradient of the log-posterior. 
If the error in the estimate of the gradient is not controlled sufficiently well as we increase dimension, then asymptotically there will be no advantage in using particle MALA over a particle MCMC algorithm using a random-walk proposal''. (5) The PGDA sampler has lower IACT values for both ${\Greekmath 010B}$ and ${\Greekmath 011C}^{2}$ parameters compared to the PGBS and PGAT samplers, but it has higher IACT value for ${\Greekmath 0116}$. This shows that the PGDA sampler is useful to improve the mixing of the parameters that are highly correlated with the states. \begin{table}[H] \caption{Inefficiency factors of ${\Greekmath 010B}$, ${\Greekmath 011C}^{2}$, and ${\Greekmath 0116}$ for the Univariate OU model with an exact transition density and without covariates for the US steel industry stock returns data with $T=1000$. Sampler I: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left({\Greekmath 010B},{\Greekmath 011C}^{2}\right)+\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left({\Greekmath 0116}\right)$, Sampler II: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGAT}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler III: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGBS}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler IV: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH-RW}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler V: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH-MALA}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler VI: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Correlated PMMH-RW}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler VII: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Correlated PMMH-MALA}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, and Sampler VIII: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGDA}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$. 
\label{tab:Univariate-OU-model with no covariates}} \centering{} \begin{tabular}{ccccccccc} \hline Param & I & II & III & IV & V & VI & VII & VIII\tabularnewline \hline ${\Greekmath 010B}$ & 12.01 & 50.21 & 40.12 & 15.02 & 4.62 & 13.00 & 12.38 & 18.06\tabularnewline ${\Greekmath 0116}$ & 1.56 & 1.65 & 1.48 & 12.81 & 4.59 & 14.17 & 28.77 & 9.16\tabularnewline ${\Greekmath 011C}^{2}$ & 13.49 & 85.46 & 70.98 & 12.64 & 4.74 & 11.18 & 17.20 & 19.42\tabularnewline \hline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{IACT}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$ & 13.49 & 85.46 & 70.98 & 15.02 & 4.74 & 14.17 & 28.77 & 19.42\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{TNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$ & 2.16 & 8.55 & 8.52 & 1.20 & 0.57 & 0.85 & 2.30 & 2.72\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{RTNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$ & 1 & 3.95 & 3.94 & 0.56 & 0.26 & 0.39 & 1.06 & 1.25\tabularnewline \hline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{IACT}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$ & 9.02 & 45.77 & 37.53 & 13.49 & 4.65 & 12.78 & 19.45 & 15.55\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{TNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$ & 1.44 & 4.58 & 4.50 & 1.08 & 0.56 & 0.77 & 1.56 & 2.17\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{RTNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$ & 1 & 3.18 & 3.12 & 0.75 & 0.39 & 0.53 & 1.08 & 1.51\tabularnewline \hline Time & 0.16 & 0.10 & 0.12 & 0.08 & 0.12 & 0.05 & 0.08 & 0.14\tabularnewline \hline \end{tabular} \end{table} \subsubsection*{Univariate OU model with exact transition density and 50 covariates} We now consider the univariate OU model with an exact transition density and $m_{{\Greekmath 010C}}=50$ covariates. We compare the performance of the following samplers: (1) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left({\Greekmath 010B},{\Greekmath 011C}^{2}\right)+\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left({\Greekmath 0116},{\Greekmath 010C}\right)$, (2) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGAT}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B},{\Greekmath 010C}\right)$, (3) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGBS}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B},{\Greekmath 010C}\right)$, (4) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH-RW}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B},{\Greekmath 010C}\right)$, (5) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH-MALA}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B},{\Greekmath 010C}\right)$, (6) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Corr. PMMH-RW}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B},{\Greekmath 010C}\right)$, (7) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Corr. PMMH-MALA}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B},{\Greekmath 010C}\right)$, and (8) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGDA}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B},{\Greekmath 010C}\right)$. We use $N=500$ particles for the PMMH+PG, PGAT, PGBS, PMMH, and PGDA samplers, and $N=50$ for the correlated PMMH sampler. 
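For the correlated PMMH samplers (6) and (7), the vector $\boldsymbol{u}$ of standard normal random numbers driving the particle filter is updated jointly with the parameters. A minimal sketch of one common implementation of this update, an autoregressive (Crank-Nicolson type) move with correlation parameter close to one, is given below; this is our own illustrative sketch and not necessarily the exact implementation in \citet{Deligiannidis2018}.
\begin{verbatim}
import numpy as np

def propose_u(u, rho_u, rng):
    """Autoregressive update of the particle-filter random numbers u
    used by the correlated PMMH samplers; rho_u close to 1 keeps the
    proposed u' highly correlated with the current u."""
    return rho_u * u + np.sqrt(1.0 - rho_u ** 2) * rng.standard_normal(u.shape)
\end{verbatim}
Because the proposed $\boldsymbol{u}^{'}$ stays close to $\boldsymbol{u}$, the two likelihood estimates are positively correlated, which reduces the variance of $\log\left(Z_{1:T}\left({\Greekmath 0112}^{'},\boldsymbol{u}^{'}\right)\right)-\log\left(Z_{1:T}\left({\Greekmath 0112},\boldsymbol{u}\right)\right)$ in the acceptance ratio and is why the much smaller value $N=50$ can be used.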
We simulated data with $T=1000$ and set ${\Greekmath 010B}=0.09$, ${\Greekmath 0116}=0.38$, ${\Greekmath 011C}^{2}=0.08$, and ${\Greekmath 010C}_{i}=0.1$ for $i=1,...,m_{{\Greekmath 010C}}$. The covariates are $z_{t}\sim N\left(0,I_{50}\right)$. Table \ref{tab:Univariate-OU-model with covariate} shows the IACT, TNV, and RTNV values for the parameters in the univariate OU model with an exact transition density and 50 covariates estimated using the 8 different samplers listed above. The table shows the following points. (1) The best sampler for this example is the PMMH+PG sampler. This example shows how the PMMH and PG samplers can be combined in a flexible manner to obtain good results. In this example, the parameter vector ${\Greekmath 010C}$ is high dimensional and not highly correlated with the states, so it is important to generate it in a PG step. Both ${\Greekmath 010B}$ and ${\Greekmath 011C}^{2}$ are generated in a PMMH step because they are highly correlated with the states. (2) The standard and correlated PMMH with adaptive random walks are much worse than the PMMH+PG sampler because the adaptive random walk proposal is inefficient in high dimensions. (3) The correlated PMMH with the MALA proposal is worse than the correlated PMMH with an adaptive random walk proposal and is the worst sampler in this example because the variance of the estimate of the gradient of the log-posterior is not sufficiently small with the number of particles set to $N=50$. (4) The PGDA sampler has very large IACT values for all parameters, indicating that the PGDA sampler does not perform well for models with a large number of parameters. Figure \ref{fig:The-Inefficiency-Factors log-volatilities} shows the RTNV of the PMMH+PG sampler compared to the other samplers for the log-volatilities $h_{1:T}$, for each $t$. The figure shows that the PMMH+PG sampler is much more efficient than the standard and correlated PMMH samplers and the PGDA sampler. It is only slightly worse than the PGAT and PGBS samplers.
\begin{table}[H]
\caption{Inefficiency factors of ${\Greekmath 010B}$, ${\Greekmath 011C}^{2}$, and ${\Greekmath 0116}$ for the Univariate OU model with an exact transition density and $m_{{\Greekmath 010C}}=50$ covariates for the simulated data with $T=1000$.
Sampler I: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left({\Greekmath 010B},{\Greekmath 011C}^{2}\right)+\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left({\Greekmath 010C},{\Greekmath 0116}\right)$, Sampler II: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGAT}\left({\Greekmath 010C},{\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler III: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGBS}\left({\Greekmath 010C},{\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler IV: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH-RW}\left({\Greekmath 010C},{\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler V: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH-MALA}\left({\Greekmath 010C},{\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler VI: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Correlated PMMH-RW}\left({\Greekmath 010C},{\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler VII: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Correlated PMMH-MALA}\left({\Greekmath 010C},{\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, and Sampler VIII: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGDA}\left({\Greekmath 010C},{\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$. \label{tab:Univariate-OU-model with covariate}} \centering{} \begin{tabular}{ccccccccc} \hline Param & I & II & III & IV & V & VI & VII & VIII\tabularnewline \hline ${\Greekmath 010B}$ & 11.15 & 47.14 & 40.94 & 281.68 & 33.15 & 135.17 & 561.44 & 356.24\tabularnewline ${\Greekmath 0116}$ & 1.58 & 1.73 & 1.81 & 377.59 & 17.79 & 84.31 & 931.89 & 211.48\tabularnewline ${\Greekmath 011C}^{2}$ & 14.50 & 95.55 & 71.83 & 341.19 & 20.43 & 81.17 & 1368.65 & 296.52\tabularnewline $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{mean}\left({\Greekmath 010C}\right)$ & 1.52 & 1.57 & 1.46 & 165.50 & 14.43 & 131.88 & 958.51 & 276.13\tabularnewline $\max\left({\Greekmath 010C}\right)$ & 1.80 & 1.95 & 1.71 & 545.26 & 21.76 & 434.50 & 1445.25 & 690.57\tabularnewline \hline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{IACT}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$ & 14.50 & 95.55 & 71.83 & 545.26 & 33.15 & 434.50 & 1445.25 & 690.57\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{TNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$ & 2.47 & 9.55 & 9.34 & 43.62 & 7.96 & 26.07 & 130.07 & 227.89\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{RTNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$ & 1 & 3.87 & 3.78 & 17.66 & 3.22 & 10.55 & 52.66 & 92.26\tabularnewline \hline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{IACT}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$ & 1.95 & 4.21 & 3.53 & 175.00 & 14.96 & 130.10 & 958.26 & 276.81\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{TNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$ & 0.33 & 0.42 & 0.46 & 14.00 & 3.59 & 7.81 & 86.24 & 91.35\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{RTNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$ & 1 & 1.27 & 1.39 & 42.42 & 10.88 & 23.67 & 261.33 & 276.82\tabularnewline \hline Time & 0.17 & 0.10 & 0.13 & 0.08 & 0.24 & 0.06 & 0.09 & 0.33\tabularnewline \hline \end{tabular} \end{table} \begin{figure} 
\caption{The inefficiency factors for the log-volatilities $h_{1:T}$.}\label{fig:The-Inefficiency-Factors log-volatilities}
\end{figure}
\subsubsection*{Univariate OU model with Euler approximation for the state transition density and 50 covariates}
Lastly, we consider the univariate OU model with an Euler approximation for the state transition density and $m_{{\Greekmath 010C}}=50$ covariates. We compare the performance of the following samplers: (1) $\textrm{PMMH}\left({\Greekmath 0116},{\Greekmath 010B},{\Greekmath 011C}^{2}\right)+\textrm{PG}\left({\Greekmath 010C}\right)$, (2) $\textrm{PGAT}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B},{\Greekmath 010C}\right)$, (3) $\textrm{PGBS}\left({\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B},{\Greekmath 010C}\right)$. We used $N=500$ particles for all samplers and $M=10$ latent points for the Euler approximation of the state transition density. Table \ref{tab:Univariate-OU-model with covariate-Euler} shows the IACT, TNV, and RTNV values for the parameters in the univariate OU model with an Euler approximation for the state transition density and 50 covariates. The table shows the following points. (1) The PMMH+PG samplers with exact and approximate state transition densities have very similar IACT values, suggesting that the inefficiency of the PMMH+PG sampler does not deteriorate when the Euler approximation is used. However, both the PGAT and PGBS samplers using the Euler approximation are significantly worse than the PGAT and PGBS samplers with exact transition densities. (2) The best sampler is the PMMH+PG sampler. (3) It is interesting to see that when we use an Euler approximation for the diffusion, the PMMH+PG, PGAT, and PGBS samplers all take approximately the same computing time. This is because the PGAT and PGBS samplers need to store and trace back all the latent log-volatilities $h_{t}$ and the $M$ latent data points between $t$ and $t+1$ for all $t=1,...,T$, whereas the PMMH+PG sampler only needs to store and trace back the latent log-volatilities $h_{t}$ for all $t=1,...,T$. Therefore, the PMMH+PG sampler is also more efficient in terms of memory usage if it is necessary to use an Euler approximation. In summary, in this univariate example, we show the following points. (1) The inefficiency of the PMMH+PG sampler does not deteriorate when the Euler approximation is used, whereas both the PGAT and PGBS samplers are significantly worse. (2) PGDA is useful to improve the mixing of the parameters that are highly correlated with the states, but it does not work well for models with many parameters. (3) The PMMH+PG sampler is much more efficient than the standard and correlated PMMH samplers with adaptive random walk proposals because the random walk proposals are inefficient in high dimensions. (4) There is no advantage in using particle MALA over the random walk proposal when the variance of the estimate of the gradient of the log-posterior is not sufficiently small. (5) It is desirable to generate parameters that are highly correlated with the states using a PMMH step that does not condition on the states.
Conversely, if there is a subset of parameters that is not highly correlated with the states, then it is preferable to generate them using a particle Gibbs step, or a particle Metropolis within Gibbs step, that conditions on the states, especially when the subset is large. In general, using PG may be preferred to PMMH whenever possible, because it may be easier to obtain better proposals within a PG framework. (6) Our PMMH + PG approach can be further refined by using the data augmented PMMH and PG sampling schemes proposed by \citet{Fearnhead2016} and the refined proposals for the PMMH sampling scheme by \citet{dahlin2015} and \citet{Nemeth2016}. \begin{table}[H] \caption{Univariate OU model with $m_{{\Greekmath 010C}}=50$ covariates and Euler approximation for the state transition density for the simulated data with $T=1000$. Sampler I: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left({\Greekmath 010B},{\Greekmath 011C}^{2},{\Greekmath 0116}\right)+\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left({\Greekmath 010C}\right)$, Sampler II: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGAT}\left({\Greekmath 010C},{\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$, Sampler III: $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGBS}\left({\Greekmath 010C},{\Greekmath 0116},{\Greekmath 011C}^{2},{\Greekmath 010B}\right)$. \label{tab:Univariate-OU-model with covariate-Euler}} \centering{} \begin{tabular}{cccc} \hline Param & I & II & III\tabularnewline \hline ${\Greekmath 010B}$ & 12.23 & 175.33 & 130.71\tabularnewline ${\Greekmath 0116}$ & 13.56 & 18.09 & 15.22\tabularnewline ${\Greekmath 011C}^{2}$ & 10.99 & 403.72 & 347.64\tabularnewline $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{mean}\left({\Greekmath 010C}\right)$ & 1.52 & 1.55 & 1.46\tabularnewline $\max\left({\Greekmath 010C}\right)$ & 1.72 & 1.87 & 1.72\tabularnewline \hline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{IACT}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$ & 13.56 & 403.72 & 347.64\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{TNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$ & 3.53 & 117.08 & 111.24\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{RTNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$ & 1 & 33.17 & 31.51\tabularnewline \hline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{IACT}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$ & 2.13 & 12.73 & 10.69\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{TNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$ & 0.55 & 3.69 & 3.42\tabularnewline $\widehat{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{RTNV}}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$ & 1 & 6.71 & 6.22\tabularnewline \hline Time & 0.26 & 0.29 & 0.32\tabularnewline \hline \end{tabular} \end{table} \section{Multivariate Example\label{PMMH+PG SV factor}} This section applies the ideas in this paper to the multivariate factor stochastic volatility model, which is a serious complex example. It also shows how a complex particle MCMC scheme can be built from the basic PMMH + PG sampler in Section~\ref{s:pmwg}. Section \ref{S: factor SV model explanation} discusses the multivariate factor stochastic volatility model. 
Section \ref{s:simulations} compares the performance of the PMMH+PG sampler to other competing PMCMC methods to estimate multivariate factor SV models using both simulated and real datasets. \subsection{The factor stochastic volatility model \label{S: factor SV model explanation}} Factor stochastic volatility (SV) models are a popular approach to jointly model many co-varying financial time series, as they are able to capture their common features using only a small number of latent factors (see, e.g., \cite{Chib2006} and \cite{Kastner:2017}). However, estimating time-varying multivariate factor SV models can be very challenging because the likelihood involves calculating an integral over a very high-dimensional latent state space, and the number of parameters in the model can be large. We consider a factor SV model with the volatilities of the factors following a traditional SV model \citep{Chib2006, Kastner:2017}, while the log volatilities of the idiosyncratic errors follow continuous time Ornstein-Uhlenbeck (OU) processes \citep{Stein1991} or GARCH diffusion processes \citep{Chib2004, Kleppe2010}. The log volatility of an OU process admits a closed form state transition density, see Section~\ref{SSS: subsection cts time OU process}, whereas the GARCH diffusion process does not. Our estimation methods are applied to Euler approximations of the diffusion process driving the log volatilities, and hence can handle diffusions that do not admit closed form transition densities; see \cite{Ignatieva2015} for other diffusions whose transition equations need an Euler approximation because they cannot be expressed in closed form. It is informative to study the closed form and Euler approximation for the state transition density for the OU process in the multivariate case to see the relative loss due to the approximation. Suppose that $\boldsymbol{P}_{t}$ is a $S\times1$ vector of daily stock prices and define $\boldsymbol{y}_{t}:=\log\boldsymbol{P}_{t}-\log\boldsymbol{P}_{t-1}$ as the log-return of the stocks. We model ${\boldsymbol y_t}$ as the factor SV model \begin{equation} \boldsymbol{y}_{t}=\boldsymbol{{\Greekmath 010C}}\boldsymbol{f}_{t}+\boldsymbol{V}_{t}^{\frac{1}{2}}\boldsymbol{{\Greekmath 010F}}_{t} \quad (t = 1, \ldots, T),\label{eq:discretisationfactor} \end{equation} where $\boldsymbol{f}_{t}$ is a $K\times1$ vector of latent factors (with $K\ll S$), $\boldsymbol{{\Greekmath 010C}}$ is a $S\times K$ factor loading matrix of unknown parameters. Appendix \ref{SS: sampling loading matrix} gives further details on the restrictions on $\boldsymbol{{\Greekmath 010C}}$. We model the latent factors as $\boldsymbol{f}_{t}\sim N\left(0,\boldsymbol{D}_{t}\right)$ and $\boldsymbol{{\Greekmath 010F}}_{t}\sim N\left(0,I\right)$, so that $\boldsymbol{y}_{t}|(\boldsymbol{f}_{t},\boldsymbol{h}_{t})\sim N\left(\boldsymbol{{\Greekmath 010C}}\boldsymbol{f}_{t},\boldsymbol{V}_{t}\right)$. The time-varying variance matrices $\boldsymbol{D}_{t}$ and $\boldsymbol{V}_{t}$ depend on unobserved random variables $\boldsymbol{{\Greekmath 0115}}_{t}=\left({\Greekmath 0115}_{1,t},...,{\Greekmath 0115}_{K,t}\right)$ and $\boldsymbol{h}_{t}=\left(h_{1,t},...,h_{S,t}\right)$ such that \[ \boldsymbol{D}_{t} := \RIfM@\expandafter\text@\else\expandafter\mbox\firm{diag}\left(\exp\left({\Greekmath 0115}_{1,t}\right),...,\exp\left({\Greekmath 0115}_{K,t}\right)\right), \quad \boldsymbol{V}_{t} :=\RIfM@\expandafter\text@\else\expandafter\mbox\firm{diag}\left(\exp\left(h_{1,t}\right),...,\exp\left(h_{S,t}\right)\right). 
\]
Each ${\Greekmath 0115}_{k,t}$ is assumed to follow an independent autoregressive process
\begin{equation}
{\Greekmath 0115}_{k,t}={\Greekmath 011E}_{k}{\Greekmath 0115}_{k,t-1}+{\Greekmath 011C}_{f,k}{\Greekmath 0111}_{k,t}, \quad k=1,...,K,\label{eq:SVtransition}
\end{equation}
with ${\Greekmath 0111}_{k,t}\sim N\left(0,1\right)$. The log volatilities $h_{s,t}$ follow either a Gaussian OU continuous time volatility process or a GARCH diffusion continuous time volatility process. The continuous time Ornstein-Uhlenbeck (OU) process $\{h_{s,t}\}_{t\ge 1}$ discussed in Section~\ref{SSS: subsection cts time OU process} satisfies
\begin{equation}
d h_{s,t}={\Greekmath 010B}_{s}\left({\Greekmath 0116}_{s}-h_{s,t}\right) d t+{\Greekmath 011C}_{{\Greekmath 010F},s} d W_{s,t},\quad \text{for}\quad \;s=1,...,S,\label{eq:transitiondensity}
\end{equation}
where $W_{s,t}$ is a Wiener process. The transition distribution for each $h_{s,t}$ is \citep[][p. 7]{Lunde2015}
\begin{align}
h_{s,t}|h_{s,t-1} & \sim N\left({\Greekmath 0116}_{s}+\exp\left(-{\Greekmath 010B}_{s}\right)\left(h_{s,t-1}-{\Greekmath 0116}_{s}\right),\frac{1-\exp\left(-2{\Greekmath 010B}_{s}\right)}{2{\Greekmath 010B}_{s}}{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}\right), \quad s = 1, \dots, S,\label{eq:exact transition}
\end{align}
with $h_{s,1} \sim N\left ( {\Greekmath 0116}_s , \frac{{\Greekmath 011C}_{{\Greekmath 010F},s}^2}{2 {\Greekmath 010B}_s}\right )$. The parameters are ${\Greekmath 010B}_s>0$, ${\Greekmath 0116}_s$ and ${\Greekmath 011C}_{{\Greekmath 010F},s}^2>0$. The Euler scheme approximates the evolution of the log-volatilities ${h}_{s,t}$ in equation \eqref{eq:transitiondensity}. We use the approach in Section~\ref{SSS: subsection cts time OU process} by placing $M-1$ evenly spaced points between times $t$ and $t+1$. The intermediate volatility components are denoted by $h_{s,t,1},...,h_{s,t,M-1}$, and it is convenient to set $h_{s,t,0}=h_{s,t}$ and $h_{s,t,M}=h_{s,t+1}$. The equation for the Euler evolution, starting at $h_{s,t,0}$, is (see, for example, \cite{Stramer2011}, p. 234)
\begin{align}
h_{s,t,j}|h_{s,t,j-1}\sim N\left ( h_{s,t,j-1}+{\Greekmath 010B}_s\left({\Greekmath 0116}_s-h_{s,t,j-1}\right){\Greekmath 010E}, {\Greekmath 011C}_{{\Greekmath 010F},s}^2{\Greekmath 010E} \right ),\label{eq:ou approximatetransition multiple}
\end{align}
for $j=1,...,M$, where ${\Greekmath 010E} = 1/M$. The continuous time GARCH diffusion process $\{h_{s,t}\}_{t\ge 1}$ \citep{Chib2004, Kleppe2010} satisfies
\begin{equation}
dh_{s,t}=\left\{ {\Greekmath 010B}_{s}\left({\Greekmath 0116}_{s}-\exp\left(h_{s,t}\right)\right)\exp\left(-h_{s,t}\right)-\frac{{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}{2}\right\} dt+{\Greekmath 011C}_{{\Greekmath 010F},s}dW_{s,t},\;\;\text{for}\;s=1,...,S,\label{eq:GARCHtransitiondensity}
\end{equation}
where the $W_{s,t}$ are independent Wiener processes. The Euler approximation of the state transition density of equation \eqref{eq:GARCHtransitiondensity} yields the transition density between steps (see, for example, \cite{Wuetal2018}, p.
21)
\begin{align}
h_{s,t,j+1}|h_{s,t,j}\sim N\left(h_{s,t,j}+\left\{ {\Greekmath 010B}_{s}\left({\Greekmath 0116}_{s}-\exp\left(h_{s,t,j}\right)\right)\exp\left(-h_{s,t,j}\right)-\frac{{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}{2}\right\} {\Greekmath 010E},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}{\Greekmath 010E}\right), \label{eq:GARCH Euler transitiondensity}
\end{align}
for $j=0,...,M-1$, where ${\Greekmath 010E} = 1/M$. We denote the parameter vector for the factor stochastic volatility model given by equations \eqref{eq:discretisationfactor}, \eqref{eq:SVtransition} and either \eqref{eq:exact transition}, \eqref{eq:ou approximatetransition multiple} or \eqref{eq:GARCH Euler transitiondensity} by
\begin{align*}
\boldsymbol{{\Greekmath 0121}} = \left( \boldsymbol{{\Greekmath 010C}}; ({\Greekmath 011E}_k, {\Greekmath 011C}_{f,k}), k = 1, \ldots, K; ({\Greekmath 010B}_s, {\Greekmath 0116}_s, {\Greekmath 011C}_{{\Greekmath 010F},s}), s = 1, \ldots, S \right).
\end{align*}
Although the factor SV model can be written in state space form as in Section~\ref{s:SSM}, it is more efficient to take advantage of the extra structure in the model and base the sampling scheme on multiple independent univariate state space models. The next section outlines the conditional independence structure in the factor SV model. Sections \ref{sec:Sampling-Schemes-Factor model} and \ref{ss: sampling scheme for the factor SV model} of the supplement give the more complex target density and sampling schemes required for estimating the posterior distribution of the factor SV model.
\subsection*{Conditional independence in the factor SV model \label{sss: Conditional Independence}}
The key to making the estimation of the factor SV model tractable is that, given the values of $\left(\boldsymbol{y}_{1:T},\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 0121}}\right)$ and using the conditional independence of the innovations of the returns, the factor SV model in equation \eqref{eq:discretisationfactor} separates into independent components consisting of $K$ univariate SV models for the latent factors and $S$ univariate state space models for the idiosyncratic errors. The sampling scheme generates the latent factors and the factor loading matrix in PG steps and then, conditioning on them, estimates a series of univariate state space models. For $k=1,...,K$, we have that
\begin{align}
f_{k,t}|{\Greekmath 0115}_{k,t} \sim N\left(0,\exp\left({\Greekmath 0115}_{k,t}\right)\right), \label{eq:PMMH+PG SV obs density}
\end{align}
with the transition density in equation \eqref{eq:SVtransition}. For $s=1,...,S$, we have
\begin{align}
y_{s,t}|\boldsymbol{f}_{t},h_{s,t}\sim N\left(\boldsymbol{{\Greekmath 010C}}_{s}\boldsymbol{f}_{t},\exp\left(h_{s,t}\right)\right),\label{eq:PMMH+PG OU obs density}
\end{align}
with the exact and approximate transition densities given in equations \eqref{eq:exact transition}, \eqref{eq:ou approximatetransition multiple} or \eqref{eq:GARCH Euler transitiondensity}. Section \ref{s:simulations} shows on both simulated and real data that the PMMH+PG sampler works well. We note that our example merely illustrates our methods, which can naturally handle multiple factors and most types of log-volatilities for both the factors and idiosyncratic errors.
\subsection{Empirical Studies\label{s:simulations}}
This section presents empirical results for the factor SV model described in Section \ref{S: factor SV model explanation} to illustrate the flexibility of the sampling approach given in our article.
Section \ref{ss:simulationandapplication} presents a simulation study for the factor SV model with the idiosyncratic log-volatilities following Gaussian OU processes with exact and approximate transition densities. Section \ref{SS: US stock returns} presents empirical results for the factor SV model with the idiosyncratic log-volatilities following Gaussian OU processes and GARCH diffusion processes using a sample of daily US industry stock returns data. We use the same notation as Section \ref{SS: Empirical results for OU} to describe the algorithms in this study. For example, the basic sampler, as used in Sampling Scheme 1, is $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left({\Greekmath 0112}_{1}\right)+\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left({\Greekmath 0112}_{2}\right)$ sampling the parameter vector ${\Greekmath 0112}_{1}$ in the PMMH step and ${\Greekmath 0112}_{2}$ in the PG step. Our general procedure to determine an efficient sampling scheme is to first run a PG algorithm to identify which parameters have large IACTs, or, in some cases, require a large amount of computational time to generate in the PG step. We then generate these parameters in the PMMH step. \subsubsection{Simulation Study \label{ss:simulationandapplication}} We conducted a simulation study for the factor SV model with the idiosyncratic log-volatilities following Gaussian OU continuous time volatility processes with exact and approximate transition densities. We compare the performance of the samplers listed below. Section~\ref{SS: Empirical results for OU} gives the notation for the samplers. The samplers are: (I) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)+\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 0116}},{\Greekmath 011E}\right)$ for the Gaussian OU model with exact transition densities and $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 011C}}_{f}^{2},\boldsymbol{{\Greekmath 0116}}\right)+\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},{\Greekmath 011E}\right)$ for the Gaussian OU model with approximate transition densities, (II) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGAT}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},{\Greekmath 011E},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, (III) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGBS}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},{\Greekmath 011E},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, (IV) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH-RW}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},{\Greekmath 011E},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, (V) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH-MALA}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 
010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},{\Greekmath 011E},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, (VI) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Corr. PMMH-RW}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},{\Greekmath 011E},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, (VII) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{Corr. PMMH-MALA}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},{\Greekmath 011E},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, (VIII) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGDA}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},{\Greekmath 011E},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$. We first compare the three samplers PMMH+PG, PGAT, and PGBS and then discuss the PMMH and PGDA sampling schemes for the factor SV model. We simulated data with $T=1,000$ observations, $S=20$ stocks, and $K=1$ factors from the factor SV model in equation~\eqref{eq:discretisationfactor}, setting ${\Greekmath 010B}_{s}=0.06$, and ${\Greekmath 011C}_{{\Greekmath 010F},s}^{2}=0.1$ for all $s$, ${\Greekmath 011E}_{1}=0.98$, ${\Greekmath 011C}_{f,1}^{2}=0.1$ and ${\Greekmath 010C}_{s}=0.8$ for all $s$. We chose independent Gaussian priors for every unrestricted element of the factor loading matrix $\boldsymbol{{\Greekmath 010C}}$, i.e. ${\Greekmath 010C}_{s,k}\sim N\left(0,1\right)$. The priors for the state transition density parameters are ${\Greekmath 010B}_{s}\sim IG\left(\frac{v_{0}}{2},\frac{s_{0}}{2}\right)$, ${\Greekmath 011C}_{{\Greekmath 010F},s}^{2}\sim IG\left(\frac{v_{0}}{2},\frac{s_{0}}{2}\right)$, ${\Greekmath 011C}_{f,k}^{2}\sim IG\left(\frac{v_{0}}{2},\frac{s_{0}}{2}\right)$, where $v_{0}=10, s_{0}=1$, and ${\Greekmath 011E}_{k}\sim U\left(-1,1\right)$. These prior densities cover most possible values in practice. The initial state of ${\Greekmath 0115}_{k,t}$ is assumed normally distributed $N\left(0,\frac{{\Greekmath 011C}_{f,k}^{2}}{1-{\Greekmath 011E}_{k}^{2}}\right)$, for $k=1,...,K$. The initial state of $h_{s,t}$ is also assumed normally distributed $N\left({\Greekmath 0116}_{s},\frac{{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}{2{\Greekmath 010B}_{s}}\right)$, for $s=1,...,S$. We ran all the sampling schemes for $11,000$ iterations and discarded the initial $1,000$ iterates as warmup. We used $M=10$ latent points for the Euler approximations to the state transition densities. 
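For reference, the simulated dataset described above can be generated directly from equations \eqref{eq:discretisationfactor}, \eqref{eq:SVtransition} and \eqref{eq:exact transition}. The following is a minimal sketch in Python using the settings just given; the OU means ${\Greekmath 0116}_{s}$ are not specified above and are set to zero here, and the identification restrictions on $\boldsymbol{{\Greekmath 010C}}$ are not imposed, so both are assumptions of the sketch rather than part of the model description.
\begin{verbatim}
import numpy as np

def simulate_factor_sv(T=1000, S=20, K=1, alpha=0.06, sigma2_eps=0.1,
                       phi=0.98, sigma2_f=0.1, beta_val=0.8, mu=0.0, seed=0):
    """Simulate returns from the one-factor SV model with OU idiosyncratic
    log-volatilities (exact transitions); mu is an assumed value."""
    rng = np.random.default_rng(seed)
    beta = np.full((S, K), beta_val)
    lam = np.zeros((T, K))                      # factor log-volatilities
    lam[0] = rng.normal(0, np.sqrt(sigma2_f / (1 - phi ** 2)), K)
    h = np.zeros((T, S))                        # idiosyncratic log-volatilities
    h[0] = rng.normal(mu, np.sqrt(sigma2_eps / (2 * alpha)), S)
    rho_ou = np.exp(-alpha)
    sd_ou = np.sqrt((1 - np.exp(-2 * alpha)) / (2 * alpha) * sigma2_eps)
    for t in range(1, T):
        lam[t] = phi * lam[t - 1] + np.sqrt(sigma2_f) * rng.standard_normal(K)
        h[t] = mu + rho_ou * (h[t - 1] - mu) + sd_ou * rng.standard_normal(S)
    f = np.exp(lam / 2) * rng.standard_normal((T, K))             # latent factors
    y = f @ beta.T + np.exp(h / 2) * rng.standard_normal((T, S))  # returns
    return y, f, h, lam
\end{verbatim}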
\subsection*{Gaussian OU process with exact transition density} Table \ref{tab:Inefficiency-factor-of simulation} in Section~\ref{S:FSV tables and figures} of the supplement shows the IACT estimates for the parameters in the factor SV model estimated for three different samplers using the exact transition density, (I) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PMMH}\left(\boldsymbol{{\Greekmath 010B}}, \boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)+\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PG}\left(\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 010C}},\boldsymbol{f}_{1:T}, {\Greekmath 011E}\right)$, (II) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGAT}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 011C}}_{f}^{2},{\Greekmath 011E}\right)$ and (III) $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{PGBS}\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 011C}}_{f}^{2}, {\Greekmath 011E}\right)$. All three samplers estimate the factor loading matrix $\boldsymbol{{\Greekmath 010C}}$ and $\boldsymbol{{\Greekmath 0116}}$ with comparable IACT values. The PMMH+PG sampler always has lower IACT values than both PG samplers for the parameters $\boldsymbol{{\Greekmath 010B}}$, $\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2}$, $\boldsymbol{{\Greekmath 011C}}_{f}^{2}$, and ${\Greekmath 011E}$. There are some improvements in terms of IACT obtained by using PGBS compared to PGAT. Table \ref{tab:Comparison-between-different simulationexact} summarises the estimation results when the exact transition density is used and shows that in terms of $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{TNV}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MAX}}$, the PMMH+PG sampler is 9.25 and 4.19 times better than PGAT and PGBS, respectively, and in terms of $\RIfM@\expandafter\text@\else\expandafter\mbox\firm{TNV}_{\RIfM@\expandafter\text@\else\expandafter\mbox\firm{MEAN}}$, the PMMH+PG is 2.69 and 2.55 times better than PGAT and PGBS, respectively. 
\begin{table}[H]
\caption{Comparing different samplers in terms of Time Normalised Variance (TNV) with the exact transition density for the Gaussian OU model: Sampler I: $\text{PMMH}\left(\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\tau}_{f}^{2}\right)+\text{PG}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\mu},\boldsymbol{\phi}\right)$; Sampler II: $\text{PGAT}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$; Sampler III: $\text{PGBS}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$. The data were simulated with $T=1,000$, $S=20$, and $K=1$; the number of particles is $N=500$. Time denotes the time taken in seconds for one iteration of the method.\label{tab:Comparison-between-different simulationexact}}
\centering
\begin{tabular}{cccc}
\hline
 & $I$ & $II$ & $III$\tabularnewline
\hline
$\widehat{\text{IACT}}_{\text{MAX}}$ & $18.07$ & $283.23$ & $101.64$\tabularnewline
$\text{TNV}_{\text{MAX}}$ & $33.97$ & $314.39$ & $142.30$\tabularnewline
$\text{RTNV}_{\text{MAX}}$ & $1$ & $9.25$ & $4.19$\tabularnewline
\hline
$\widehat{\text{IACT}}_{\text{MEAN}}$ & $8.54$ & $38.96$ & $29.26$\tabularnewline
$\text{TNV}_{\text{MEAN}}$ & $16.06$ & $43.25$ & $40.96$\tabularnewline
$\text{RTNV}_{\text{MEAN}}$ & $1$ & $2.69$ & $2.55$\tabularnewline
\hline
$\text{Time}$ & $1.88$ & $1.11$ & $1.40$\tabularnewline
\hline
\end{tabular}
\end{table}
\subsubsection*{Gaussian OU process with an Euler evolution transition density} Table \ref{tab:Inefficiency-factor-of simulation-1} in Section~\ref{S:FSV tables and figures} of the supplement shows the IACT values for all the parameters in the model for the three samplers,
(I) $\text{PMMH}\left(\boldsymbol{\mu},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\tau}_{f}^{2}\right)+\text{PG}\left(\boldsymbol{\beta},\boldsymbol{f}_{1:T},\phi\right)$, (II) $\text{PGAT}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\tau}_{f}^{2},\phi\right)$, and (III) $\text{PGBS}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\tau}_{f}^{2},\phi\right)$, using the Euler approximation scheme for the transition density. The table shows that the PMMH+PG samplers with the exact and approximate state transition densities have very similar IACT values for all the parameters, suggesting that the efficiency of the PMMH+PG sampler does not deteriorate when the Euler approximation is used. However, both PG samplers, PGAT and PGBS, are significantly worse with the Euler approximation than with the exact transition density. For example, the IACT of $\tau_{\epsilon,4}^{2}$ for PGAT with the exact transition density is 283.23, compared to 977.93 for PGAT with the Euler approximation. Table \ref{tab:Comparison-between-different simulationapproximate} summarises the estimation results with the Euler approximation of the transition density and shows that, in terms of $\text{TNV}_{\text{MAX}}$, the PMMH+PG sampler is 60.57 and 50.72 times better than PGAT and PGBS, respectively, and, in terms of $\text{TNV}_{\text{MEAN}}$, it is 14.67 and 12.95 times better than the PGAT and PGBS samplers, respectively. Similarly to the univariate case in Section \ref{SS: Empirical results for OU}, we note that if Euler approximations are used for the state transition densities, then all three samplers, PMMH+PG, PGAT, and PGBS, take approximately the same computing time, because the PG samplers need to store and trace back all the latent log-volatilities $h_{s,t}$ and the $M$ latent data points between $t$ and $t+1$ for all $s=1,\dots,S$ and $t=1,\dots,T$, whereas the PMMH+PG sampler only needs to store and trace back the latent log-volatilities $h_{s,t}$ for all $s=1,\dots,S$ and $t=1,\dots,T$.
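The table entries are related by $\text{TNV}=\widehat{\text{IACT}}\times\text{Time}$ and $\text{RTNV}=\text{TNV}/\text{TNV}_{\text{Sampler I}}$, which reproduce the numbers in Tables \ref{tab:Comparison-between-different simulationexact} and \ref{tab:Comparison-between-different simulationapproximate}. The following Python sketch computes these summaries from MCMC output and per-iteration timings; the helper names are ours, and the IACT estimator shown (a truncated sum of sample autocorrelations) is one common choice and is not necessarily the estimator used in the paper.
\begin{verbatim}
import numpy as np

def iact(chain, max_lag=1000):
    """Integrated autocorrelation time via a truncated autocorrelation sum."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    tau = 1.0
    for lag in range(1, min(max_lag, n - 1)):
        if acf[lag] < 0:          # truncate at the first negative autocorrelation
            break
        tau += 2.0 * acf[lag]
    return tau

def tnv_summary(draws_by_sampler, seconds_per_iter):
    """draws_by_sampler: dict, e.g. {"I": array(iters x params), "II": ..., "III": ...}.
    Returns IACT_MAX/MEAN, TNV = IACT * time per iteration, and RTNV relative to sampler "I"."""
    out = {}
    for name, draws in draws_by_sampler.items():
        iacts = np.array([iact(draws[:, j]) for j in range(draws.shape[1])])
        out[name] = {"IACT_MAX": iacts.max(), "IACT_MEAN": iacts.mean(),
                     "TNV_MAX": iacts.max() * seconds_per_iter[name],
                     "TNV_MEAN": iacts.mean() * seconds_per_iter[name]}
    base = out["I"]
    for name in out:
        out[name]["RTNV_MAX"] = out[name]["TNV_MAX"] / base["TNV_MAX"]
        out[name]["RTNV_MEAN"] = out[name]["TNV_MEAN"] / base["TNV_MEAN"]
    return out
\end{verbatim}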
\begin{table}[H]
\caption{Comparing different samplers in terms of Time Normalised Variance using an Euler approximation for the state transition density for the Gaussian OU model: Sampler I: $\text{PMMH}\left(\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\tau}_{f}^{2}\right)+\text{PG}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\phi}\right)$; Sampler II: $\text{PGAT}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$; Sampler III: $\text{PGBS}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$, for the simulated data with $T=1,000$, $S=20$, and $K=1$, and the number of particles $N=1,000$. Time denotes the time taken in seconds for one iteration of the method.\label{tab:Comparison-between-different simulationapproximate}}
\centering
\begin{tabular}{cccc}
\hline
 & $I$ & $II$ & $III$\tabularnewline
\hline
$\widehat{\text{IACT}}_{\text{MAX}}$ & $17.57$ & $977.93$ & $792.88$\tabularnewline
$\text{TNV}_{\text{MAX}}$ & $113.50$ & $6874.85$ & $5756.31$\tabularnewline
$\text{RTNV}_{\text{MAX}}$ & $1$ & $60.57$ & $50.72$\tabularnewline
\hline
$\widehat{\text{IACT}}_{\text{MEAN}}$ & $14.17$ & $191.04$ & $163.26$\tabularnewline
$\text{TNV}_{\text{MEAN}}$ & $91.54$ & $1343.01$ & $1185.27$\tabularnewline
$\text{RTNV}_{\text{MEAN}}$ & $1$ & $14.67$ & $12.95$\tabularnewline
\hline
$\text{Time}$ & $6.46$ & $7.03$ & $7.26$\tabularnewline
\hline
\end{tabular}
\end{table}
\subsection*{The PMMH and PGDA Sampling Schemes for the Factor SV Model}
This section discusses the PMMH samplers, both standard and correlated, and the PGDA sampler of \citet{Fearnhead2016} for estimating the factor SV model; these are sampling schemes IV to VIII.
The PMMH method generates the parameters by integrating out all the latent factors, so that the observation equation is given by
\begin{equation}
\boldsymbol{y}_{t}|\boldsymbol{\lambda}_{t},\boldsymbol{h}_{t},\boldsymbol{\omega}\sim N\left(\boldsymbol{0},\boldsymbol{\beta}\boldsymbol{D}_{t}\boldsymbol{\beta}^{'}+\boldsymbol{V}_{t}\right).\label{eq:PMMHmeasurement density}
\end{equation}
The state transition equations are given by equation~\eqref{eq:SVtransition} together with either equation \eqref{eq:exact transition} for the closed-form case or equation \eqref{eq:ou approximatetransition multiple} for the Euler scheme for the OU model, and equation \eqref{eq:GARCH Euler transitiondensity} for the Euler scheme for the GARCH model. The PMMH method uses the observation density, which involves all $K+S$ latent log-volatilities simultaneously, so the model becomes a high dimensional ($21$ dimensional) state space model. The performance of the standard PMMH sampler depends critically on the number of particles $N$ used to estimate the likelihood. \cite{pittetal2012} suggest selecting $N$ such that the variance of the log of the estimated likelihood is around 1 to obtain an optimal tradeoff between computing time and statistical efficiency. Table \ref{tab:The-Variance-of log-likelihood} gives the variance of the log of the estimated likelihood for different numbers of particles using the bootstrap filter and shows that even with 5,000 particles the log of the estimated likelihood still has a large variance, so the Markov chains for the standard PMMH approach (sampling schemes IV and V) would get stuck. We therefore do not report results for the standard PMMH method, as it is computationally very expensive and its TNV would be significantly higher than that of the PG and PMMH+PG methods. From Section \ref{SS: Empirical results for OU}, the correlated PMMH method requires $\log Z_{1:T}\left(\boldsymbol{\theta}^{'},\boldsymbol{u}^{'}\right)$ and $\log Z_{1:T}\left(\boldsymbol{\theta},\boldsymbol{u}\right)$ to be highly correlated in order to reduce the variance of their difference. We set the correlation between the individual elements of $\boldsymbol{u}$ and $\boldsymbol{u}^{'}$ to $\text{corr}\left(u_{i},u_{i}^{'}\right)=0.999999$, obtained $1,000$ independent estimates of $\log Z_{1:T}\left(\boldsymbol{\theta},\boldsymbol{u}^{'}\right)$ and $\log Z_{1:T}\left(\boldsymbol{\theta},\boldsymbol{u}\right)$ at the true value of $\boldsymbol{\theta}$, and computed their sample correlation. The sample correlation was $0.06$, showing that it is difficult to preserve the correlation in such a high dimensional state space model and that the correlated PMMH Markov chain would still get stuck unless enough particles are used to make the variance of the log of the estimated likelihood close to 1. A second problem with the PMMH approach is the large number of parameters to be estimated. Constructing proposals in high dimensions is difficult and often requires estimating gradients and Hessian matrices, while simpler approaches such as the adaptive random walk are very inefficient in high dimensions, as Section \ref{SS: Empirical results for OU} shows. Hence, it is natural to use a parameter splitting strategy and hybrid samplers.
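A minimal sketch of this diagnostic is given below, treating the particle filter as a black-box function \texttt{log\_likelihood\_estimate(theta, u)} driven by the vector of standard normal random numbers $\boldsymbol{u}$; the function and variable names are ours, not part of the paper. It draws $\boldsymbol{u}^{'}$ with elementwise correlation $0.999999$ to $\boldsymbol{u}$, computes both log-likelihood estimates repeatedly, and reports their sample correlation together with the variance of a single estimate, which is the quantity used for the rule of \cite{pittetal2012}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
RHO = 0.999999  # elementwise correlation between u and u'

def correlated_pair(dim):
    """Draw (u, u') with corr(u_i, u_i') = RHO, both standard normal."""
    u = rng.standard_normal(dim)
    u_prime = RHO * u + np.sqrt(1.0 - RHO ** 2) * rng.standard_normal(dim)
    return u, u_prime

def loglik_correlation(log_likelihood_estimate, theta, dim_u, n_rep=1000):
    """Sample correlation between log-likelihood estimates driven by u and u',
    and the variance of a single estimate."""
    z, z_prime = np.empty(n_rep), np.empty(n_rep)
    for r in range(n_rep):
        u, u_prime = correlated_pair(dim_u)
        z[r] = log_likelihood_estimate(theta, u)
        z_prime[r] = log_likelihood_estimate(theta, u_prime)
    corr = np.corrcoef(z, z_prime)[0, 1]
    return corr, z.var(ddof=1)
\end{verbatim}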
Finally, we do not report results for the PGDA method applied to the factor stochastic volatility model, as it is clear that its TNV would be significantly higher than that of the PMMH+PG method. This sampler updates pseudo observations of the parameters by MCMC and updates the latent states and parameters jointly using a particle filter. Section \ref{SS: Empirical results for OU} shows that this sampler does not work well when the model has many parameters; note that \citet{Fearnhead2016} only apply their method to a simple univariate SV model, whereas the factor SV model considered in this section is more complex, with a large number of parameters and high dimensional latent states.
\begin{table}[H]
\caption{The variance of the log of the estimated likelihood for the PMMH method with the exact transition density, for different numbers of particles, for the simulated dataset with $T=1,000$, $S=20$, and $K=1$, evaluated at the true values of the parameters. CPU time to estimate the likelihood is in seconds.\label{tab:The-Variance-of log-likelihood}}
\centering
\begin{tabular}{ccc}
\hline
Number of particles & Variance of log-likelihood & CPU time\tabularnewline
\hline
\hline
250 & 1672.07 & 4.39\tabularnewline
500 & 766.38 & 8.57\tabularnewline
2500 & 331.65 & 45.03\tabularnewline
5000 & 243.82 & 130.53\tabularnewline
\hline
\end{tabular}
\end{table}
\subsubsection{Application to US stock returns\label{SS: US stock returns}}
We now apply our methods to a sample of daily US industry stock returns. The data, obtained from the Kenneth French website,\footnote{http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/datalibrary.html} consist of daily returns for $S=20$ value-weighted industry portfolios from January 3, 2001 to December 24, 2003, a total of 1,000 observations. We compare the PMMH+PG, PGAT, and PGBS samplers for the factor SV model with the idiosyncratic log-volatilities following Gaussian OU processes, with exact and approximate transition densities, and GARCH diffusion processes. We show that the performance of the PMMH+PG sampler does not deteriorate for the real data, whereas both the PGAT and PGBS samplers become worse in terms of the IACT values of the parameters, especially with the Euler approximation. This section does not compare the PMMH+PG sampler with the standard or correlated PMMH samplers or with the PGDA sampler because of the problems discussed in Section \ref{ss:simulationandapplication}.
\subsubsection*{Gaussian OU process with exact and Euler evolution transition densities\label{GaussianOUrealdata}}
This section compares the following samplers: (I) $\text{PMMH}\left(\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\tau}_{f}^{2}\right)+\text{PG}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\mu},\phi\right)$ for the Gaussian OU model with exact transition densities and $\text{PMMH}\left(\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\tau}_{f}^{2},\boldsymbol{\mu}\right)+\text{PG}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\phi\right)$ for the Gaussian OU model with approximate transition densities, (II) $\text{PGAT}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\phi,\boldsymbol{\tau}_{f}^{2}\right)$, and (III) $\text{PGBS}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\phi,\boldsymbol{\tau}_{f}^{2}\right)$, all for the factor SV model with the idiosyncratic log-volatilities following Gaussian OU processes with exact and approximate transition densities. Tables \ref{tab:Inefficiency-factor-of real data} and \ref{tab:Inefficiency-factor-of real data-1} in Section~\ref{S:FSV tables and figures} of the supplement show the IACT estimates for all the parameters of the factor SV model estimated with the exact transition densities and with Euler approximations to the transition densities for the Gaussian OU processes, respectively. As for the simulated data, all three samplers estimate the factor loading matrix $\boldsymbol{\beta}$ and $\boldsymbol{\mu}$ efficiently and with comparable IACT values. The performance of the PMMH+PG sampler does not deteriorate for the real data, whereas both the PGAT and PGBS samplers become worse in terms of the IACT values of the parameters, especially for the Euler approximation model. Overall, the PMMH+PG samplers always have smaller IACT values than both the PGAT and PGBS samplers for all the state transition parameters. Tables \ref{tab:Comparison-between-different real data-exact} and \ref{tab:Comparison-between-different real dataapproximate} summarise the estimation results for the Gaussian OU model and show that, in terms of $\text{TNV}_{\text{MAX}}$, the PMMH+PG sampler is 20.87 and 13.91 times better than the PGAT and PGBS samplers with the exact transition density, respectively, and 53.94 and 58.71 times better than PGAT and PGBS, respectively, with the Euler approximation.
In terms of $\text{TNV}_{\text{MEAN}}$, the PMMH+PG sampler is 5.61 and 4.73 times better than the PGAT and PGBS samplers with the exact transition density, respectively, and 22.17 and 22.40 times better than the PGAT and PGBS samplers, respectively, when the Euler approximation is used. Figures~\ref{fig:The-kernel-density alpha real data} and \ref{fig:The-kernel-density tau real data} in Section~\ref{S:FSV tables and figures} of the supplement present the kernel density estimates of the marginal posterior densities of four representative $\alpha$ and $\tau_{\epsilon}^{2}$ parameters, respectively, for the US stock returns data. The density estimates are for PMMH+PG using exact and approximate transition densities, and for PG with approximate transition densities using ancestral tracing and backward simulation, for the Gaussian OU model. The figures show that the two PMMH+PG samplers produce estimates that are close to each other, whereas the PG estimates are much less reliable and suggest that the PG chains did not converge. This confirms the usefulness of the PMMH+PG samplers for this class of models.
\begin{table}[H]
\caption{Comparing different samplers in terms of Time Normalised Variance with the exact transition density for the Gaussian OU model: Sampler I: $\text{PMMH}\left(\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\tau}_{f}^{2}\right)+\text{PG}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\mu},\boldsymbol{\phi}\right)$; Sampler II: $\text{PGAT}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$; Sampler III: $\text{PGBS}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$, for US stock returns data with $T=1,000$, $S=20$, and $K=1$, and number of particles $N=500$.
Time denotes the time taken in seconds for one iteration of the method.\label{tab:Comparison-between-different real data-exact}}
\centering
\begin{tabular}{cccc}
\hline
 & $I$ & $II$ & $III$\tabularnewline
\hline
$\widehat{\text{IACT}}_{\text{MAX}}$ & $20.57$ & $682.49$ & $382.86$\tabularnewline
$\text{TNV}_{\text{MAX}}$ & $38.26$ & $798.51$ & $532.18$\tabularnewline
$\text{RTNV}_{\text{MAX}}$ & $1$ & $20.87$ & $13.91$\tabularnewline
\hline
$\widehat{\text{IACT}}_{\text{MEAN}}$ & $8.54$ & $76.19$ & $54.06$\tabularnewline
$\text{TNV}_{\text{MEAN}}$ & $15.88$ & $89.14$ & $75.14$\tabularnewline
$\text{RTNV}_{\text{MEAN}}$ & $1$ & $5.61$ & $4.73$\tabularnewline
\hline
$\text{Time}$ & $1.86$ & $1.17$ & $1.39$\tabularnewline
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Comparing different samplers in terms of Time Normalised Variance with the Euler approximation for the state transition density for the Gaussian OU model: Sampler I: $\text{PMMH}\left(\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\tau}_{f}^{2}\right)+\text{PG}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\phi}\right)$; Sampler II: $\text{PGAT}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$; Sampler III: $\text{PGBS}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$ with backward simulation, for US stock returns data with $T=1,000$, $S=20$, and $K=1$, and number of particles $N=1,000$.
Time denotes the time taken in seconds for one iteration of the method.\label{tab:Comparison-between-different real dataapproximate}}
\centering
\begin{tabular}{cccc}
\hline
 & $I$ & $II$ & $III$\tabularnewline
\hline
$\widehat{\text{IACT}}_{\text{MAX}}$ & $23.99$ & $1215.77$ & $1228.99$\tabularnewline
$\text{TNV}_{\text{MAX}}$ & $152.82$ & $8242.92$ & $8971.63$\tabularnewline
$\text{RTNV}_{\text{MAX}}$ & $1$ & $53.94$ & $58.71$\tabularnewline
\hline
$\widehat{\text{IACT}}_{\text{MEAN}}$ & $12.99$ & $270.58$ & $253.90$\tabularnewline
$\text{TNV}_{\text{MEAN}}$ & $82.75$ & $1834.53$ & $1853.47$\tabularnewline
$\text{RTNV}_{\text{MEAN}}$ & $1$ & $22.17$ & $22.40$\tabularnewline
\hline
$\text{Time}$ & $6.37$ & $6.78$ & $7.30$\tabularnewline
\hline
\end{tabular}
\end{table}
\subsubsection*{GARCH diffusion process with an Euler evolution transition density\label{GARCHrealdata}}
This section compares the following samplers: (I) $\text{PMMH}\left(\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\tau}_{f}^{2},\boldsymbol{\mu}\right)+\text{PG}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\phi\right)$, (II) $\text{PGAT}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\phi,\boldsymbol{\tau}_{f}^{2}\right)$, and (III) $\text{PGBS}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\phi,\boldsymbol{\tau}_{f}^{2}\right)$, for the factor SV model with the idiosyncratic log-volatilities following GARCH diffusion processes, which do not have closed-form state transition densities. Table \ref{tab:Inefficiency-factor-of real data-GARCH} in Section~\ref{S:FSV tables and figures} of the supplement shows the IACT estimates for all the parameters of this model. As for the models with Gaussian OU processes, all three samplers estimate the factor loading matrix $\boldsymbol{\beta}$ efficiently and with comparable IACT values.
The performance of the PMMH+PG sampler does not deteriorate for the real data, whereas both the PGAT and PGBS samplers become worse in terms of the IACT values of the remaining parameters. Overall, the PMMH+PG sampler always has smaller IACT values than both the PGAT and PGBS samplers for all the state transition parameters. Table \ref{tab:Comparison-between-different real dataapproximateGARCH} summarises the estimation results for the GARCH diffusion model and shows that, in terms of $\text{TNV}_{\text{MAX}}$, the PMMH+PG sampler is 19.56 and 22.11 times better than the PGAT and PGBS samplers, respectively. In terms of $\text{TNV}_{\text{MEAN}}$, it is 25.84 and 28.01 times better than PGAT and PGBS, respectively. This confirms the usefulness of the PMMH+PG samplers for this class of models.
\begin{table}[H]
\caption{Comparing different samplers in terms of Time Normalised Variance with the Euler approximation for the state transition density for the GARCH diffusion model: Sampler I: $\text{PMMH}\left(\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\tau}_{f}^{2}\right)+\text{PG}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\phi}\right)$; Sampler II: $\text{PGAT}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$; Sampler III: $\text{PGBS}\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$, for US stock returns data with $T=1,000$, $S=20$, and $K=1$, and number of particles $N=1,000$. Time denotes the time taken in seconds for one iteration of the method.
\label{tab:Comparison-between-different real dataapproximateGARCH}}
\centering
\begin{tabular}{cccc}
\hline
 & $I$ & $II$ & $III$\tabularnewline
\hline
$\widehat{\text{IACT}}_{\text{MAX}}$ & $147.16$ & $3098.27$ & $3257.52$\tabularnewline
$\text{TNV}_{\text{MAX}}$ & $1392.13$ & $27233.79$ & $30783.56$\tabularnewline
$\text{RTNV}_{\text{MAX}}$ & $1$ & $19.56$ & $22.11$\tabularnewline
\hline
$\widehat{\text{IACT}}_{\text{MEAN}}$ & $17.38$ & $483.37$ & $487.28$\tabularnewline
$\text{TNV}_{\text{MEAN}}$ & $164.41$ & $4248.82$ & $4604.80$\tabularnewline
$\text{RTNV}_{\text{MEAN}}$ & $1$ & $25.84$ & $28.01$\tabularnewline
\hline
$\text{Time}$ & $9.46$ & $8.79$ & $9.45$\tabularnewline
\hline
\end{tabular}
\end{table}
\section{Discussion\label{s:discussion}}
Our article introduces a flexible particle Markov chain Monte Carlo sampling scheme for state space models in which some parameters are generated without conditioning on the states (PMMH), while the other parameters are generated conditional on the states (PG). Previous sampling schemes used PMMH or PG exclusively, without combining the two strategies. The technical contribution of our article is to set out the particle framework required for the flexible sampler and to establish uniform ergodicity under stated assumptions. Our examples demonstrate that it is advantageous to generate the parameters that are highly correlated with the states without conditioning on the states (the PMMH component), while the other parameters are generated by particle Gibbs (PG). As we note in the introduction, there are in general likely to be a number of different sampling schemes that can solve the problems addressed in our article, and which sampler is best depends on factors such as the model, the data set, and the number of observations. We also note that our PMMH+PG approach can be further refined by using the data augmentation PMMH and PG sampling schemes proposed by \citet{Fearnhead2016} and the refined proposals for the PMMH sampling scheme of \citet{dahlin2015} and \citet{Nemeth2016}.
\pagebreak
\renewcommand{\thesscheme}{S\arabic{sscheme}}
\renewcommand{\thealgorithm}{S\arabic{algorithm}}
\renewcommand{\theremark}{S\arabic{remark}}
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thetheorem}{S\arabic{theorem}}
\renewcommand{\thesection}{S\arabic{section}}
\renewcommand{\thepage}{S\arabic{page}}
\renewcommand{\thetable}{S\arabic{table}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\theassumption}{S\arabic{assumption}}
\setcounter{page}{1} \setcounter{section}{0} \setcounter{equation}{0} \setcounter{algorithm}{0} \setcounter{table}{0} \setcounter{figure}{0}
\section*{Online Supplement for ``A Flexible Particle Markov chain Monte Carlo method''}
We use the following notation in the supplement: equation (1), Algorithm 1, and Sampling Scheme 1, etc., refer to the main paper, while equation (S1), Algorithm S1, and Sampling Scheme S1, etc., refer to the supplement. Section \ref{s:algorithms} lists some of the algorithms used in the main paper; these algorithms are from \cite{andrieuetal2010} and are included here for notational consistency. Section~\ref{s:theory} discusses the convergence of Sampling Scheme~\ref{ssch:pmwg} to its target distribution. Section~\ref{appendixbsi} discusses other choices of target distribution and how the results in the main paper can be modified in a straightforward way to apply to these distributions. Section \ref{sec:Sampling-Schemes-Factor model} discusses the target density of the PMMH+PG sampler for the multivariate factor SV model. Section \ref{ss: sampling scheme for the factor SV model} discusses the PMMH+PG sampling schemes for the factor SV model. Section~\ref{S:FSV tables and figures} presents additional tables and plots based on the analyses reported in Sections~\ref{ss:simulationandapplication} and \ref{SS: US stock returns}.
\section{Algorithms\label{s:algorithms}}
The sequential Monte Carlo algorithm used here is the same as in \cite{andrieuetal2010} and is defined as follows.
\begin{algorithm}[Sequential Monte Carlo] \label{alg:smc}\ \newline
\begin{enumerate}
\item For $t=1$:
\begin{enumerate}
\item Sample $X_1^i$ from $m_1^{\theta}(x)$, for $i=1,\dots,N$.
\item Calculate the importance weights
\begin{equation*}
w_1^i = \frac{f_1^{\theta}(x_1^i)\,g_{\theta}(y_1|x_1^i)}{m_1^{\theta}(x_1^i)} \quad(i=1,\dots,N),
\end{equation*}
and normalize them to obtain $\bar w_1^{1:N}$.
\end{enumerate}
\item For $t=2,3,\dots$:
\begin{enumerate}
\item Sample the ancestral indices $A_{t-1}^{1:N} \sim \mathcal{M}\left(a_{t-1}^{1:N}|\bar w_{t-1}^{1:N}\right)$.
\item Sample $X_{t}^{i}$ from $m_{t}^{\theta}\left(x|x_{t-1}^{a_{t-1}^{i}}\right)$, for $i=1,\dots,N$.
\item Calculate the importance weights
\begin{equation*}
w_{t}^{i}=\frac{f_{\theta}\left(x_{t}^{i}|x_{t-1}^{a_{t-1}^{i}}\right)\,g_{\theta}\left(y_{t}|x_{t}^{i}\right)}{m_{t}^{\theta}\left(x_{t}^{i}|x_{t-1}^{a_{t-1}^{i}}\right)}\quad(i=1,\dots,N)
\end{equation*}
and normalize them to obtain $\overline{w}_{t}^{1:N}=w_{t}^{1:N}/\sum_{i=1}^{N}w_{t}^{i}$.
\end{enumerate}
\end{enumerate}
\end{algorithm}
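The following Python sketch illustrates Algorithm~\ref{alg:smc} for the bootstrap choice $m_{t}^{\theta}=f_{\theta}$, so that the weights reduce to $g_{\theta}(y_{t}|x_{t}^{i})$. The function names are ours, the model components are user-supplied, and the returned quantity is the standard SMC estimate of the log-likelihood, $\log\widehat{Z}=\sum_{t}\log\left(N^{-1}\sum_{i}w_{t}^{i}\right)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def smc_bootstrap(y, sample_initial, sample_transition, obs_density, N=500):
    """Bootstrap version of the SMC algorithm above.
    sample_initial(N)           -> (N, d) initial particles from f_1^theta
    sample_transition(x_prev)   -> (N, d) particles from f_theta(. | x_prev)
    obs_density(y_t, x)         -> (N,)   g_theta(y_t | x)
    Returns particles, ancestor indices, normalized weights, and log Z_hat."""
    T = len(y)
    x = sample_initial(N)                      # t = 1
    w = obs_density(y[0], x)
    loglik = np.log(w.mean())
    wbar = w / w.sum()
    particles, ancestors, weights = [x], [], [wbar]

    for t in range(1, T):                      # t = 2, ..., T
        a = rng.choice(N, size=N, p=wbar)      # multinomial resampling of ancestors
        x = sample_transition(x[a])            # propagate through the state transition
        w = obs_density(y[t], x)               # bootstrap weights
        loglik += np.log(w.mean())
        wbar = w / w.sum()
        particles.append(x); ancestors.append(a); weights.append(wbar)

    return particles, ancestors, weights, loglik
\end{verbatim}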
Algorithm \ref{alg:condsmc} is the conditional sequential Monte Carlo algorithm (as in \cite{andrieuetal2010}), consistent with $(x_{1:T}^{j},a_{1:T-1}^{j},j)$.
\begin{algorithm}[Conditional Sequential Monte Carlo] \label{alg:condsmc}\ \newline
\begin{enumerate}
\item Fix $X_{1:T}^{j}=x_{1:T}^{j}$ and $A_{1:T-1}^{j}=b_{1:T-1}^{j}$.
\item For $t=1$:
\begin{enumerate}
\item Sample $X_{1}^{i}$ from $m_{1}^{\theta}(x)\,\mathrm{d}x$, for $i\in\{1,\dots,N\}\setminus\{b_{1}^{j}\}$.
\item Calculate the importance weights
\begin{equation*}
w_{1}^{i}=\frac{f_1^{\theta}(x_{1}^{i})\,g_{\theta}(y_{1}|x_{1}^{i})}{m_{1}^{\theta}(x_{1}^{i})}\quad(i=1,\dots,N),
\end{equation*}
and normalize them to obtain $\bar{w}_{1}^{1:N}$.
\end{enumerate}
\item For $t=2,\dots,T$:
\begin{enumerate}
\item Sample the ancestral indices
\begin{equation*}
A_{t-1}^{-(b_{t}^{j})}\sim\mathcal{M}\left(a^{(-b_{t}^{j})}|\bar{w}_{t-1}^{1:N}\right).
\end{equation*}
\item Sample $X_{t}^{i}$ from $m_{t}^{\theta}\left(x|x_{t-1}^{a_{t-1}^{i}}\right)\mathrm{d}x$, for $i\in\{1,\dots,N\}\setminus\{b_{t}^{j}\}$.
\item Calculate the importance weights
\begin{equation*}
w_{t}^{i}=\frac{f_{\theta}\left(x_{t}^{i}|x_{t-1}^{a_{t-1}^{i}}\right)\,g_{\theta}\left(y_{t}|x_{t}^{i}\right)}{m_{t}^{\theta}\left(x_{t}^{i}|x_{t-1}^{a_{t-1}^{i}}\right)}\quad(i=1,\dots,N)
\end{equation*}
and normalize them to obtain $\bar{w}_{t}^{1:N}$.
\end{enumerate}
\end{enumerate}
\end{algorithm}
\section{Ergodicity\label{s:theory}}
This section first discusses the assumptions required for the particle filter, then the convergence of Sampling Scheme~\ref{ssch:pmwg} in total variation norm, and finally the stronger condition of uniform ergodicity. We use the generalization of Sampling Scheme \ref{ssch:pmwg} to the case where there may be multiple PMMH steps and multiple PG steps, as discussed in Section~\ref{s:pmwg}. Let $\theta:=(\theta_{1},\ldots,\theta_{p})$ be a partition of the parameter vector into $p$ components, where each component may be a vector, and let $0\leq p_{1}\leq p$. Let $\Theta=\Theta_{1}\times\ldots\times\Theta_{p}$ be the corresponding partition of the parameter space. We use the notation $\theta_{-i}:=(\theta_{1},\ldots,\theta_{i-1},\theta_{i+1},\ldots,\theta_{p})$. Sampling Scheme \ref{ssch:pmwg general} generates the parameters $\theta_{1},\ldots,\theta_{p_{1}}$ using PMMH steps and the parameters $\theta_{p_{1}+1},\ldots,\theta_{p}$ using PG steps. To simplify the discussion, we assume that both particle marginal Metropolis-Hastings steps and particle Gibbs steps are used, i.e., $0<p_{1}<p$.
\begin{sscheme}[PMMH+PG Sampler] \label{ssch:pmwg general} Given initial values for $U_{1:T}$, $J$, and $\theta$, one iteration of the MCMC involves the following steps.
\begin{enumerate}
\item (PMMH sampling) For $i=1,\ldots,p_{1}$, Step $i$:
\begin{enumerate}
\item Sample $\theta_{i}^{\ast}\sim q_{i,1}(\cdot|U_{1:T},J,\theta_{-i},\theta_{i})$.
\item Sample $U_{1:T}^{\ast}\sim\psi(\cdot|\theta_{-i},\theta_{i}^{\ast})$.
\item Sample $J^{\ast}\sim\tilde{\pi}^{N}(\cdot|U_{1:T}^{\ast},\theta_{-i},\theta_{i}^{\ast})$.
\item Set $(\theta_{i},U_{1:T},J)\leftarrow(\theta_{i}^{\ast},U_{1:T}^{\ast},J^{\ast})$ with probability
\begin{align}
\alpha_{i} & \left(U_{1:T},J,\theta_{i};U_{1:T}^{\ast},J^{\ast},\theta_{i}^{\ast}|\theta_{-i}\right)=1\wedge\nonumber\\
 & \frac{\tilde{\pi}^{N}\left(U_{1:T}^{\ast},\theta_{i}^{\ast}|\theta_{-i}\right)}{\tilde{\pi}^{N}\left(U_{1:T},\theta_{i}|\theta_{-i}\right)}\,\frac{q_{i}(U_{1:T},\theta_{i}|U_{1:T}^{\ast},J^{\ast},\theta_{-i},\theta_{i}^{\ast})}{q_{i}(U_{1:T}^{\ast},\theta_{i}^{\ast}|U_{1:T},J,\theta_{-i},\theta_{i})},\label{eq:PMwGaccprob}
\end{align}
where
\begin{eqnarray*}
q_{i}(U_{1:T}^{\ast},\theta_{i}^{\ast}|U_{1:T},J,\theta_{-i},\theta_{i}) & = & q_{i,1}(\theta_{i}^{\ast}|U_{1:T},J,\theta_{-i},\theta_{i})\,\psi(U_{1:T}^{\ast}|\theta_{-i},\theta_{i}^{\ast}).
\end{eqnarray*}
\end{enumerate}
\item (PG sampling) For $i=p_{1}+1,\ldots,p$, Step $i$:
\begin{enumerate}
\item Sample $\theta_{i}^{\ast}\sim q_{i}(\cdot|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i},\theta_{i})$.
\item Set $\theta_{i}\leftarrow\theta_{i}^{\ast}$ with probability
\begin{eqnarray}
\lefteqn{\alpha_{i}\left(\theta_{i};\theta_{i}^{\ast}|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i}\right)=}\notag\\
 && 1\wedge\frac{\tilde{\pi}^{N}\left(\theta_{i}^{\ast}|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i}\right)}{\tilde{\pi}^{N}\left(\theta_{i}|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i}\right)}\times\frac{q_{i}(\theta_{i}|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i},\theta_{i}^{\ast})}{q_{i}(\theta_{i}^{\ast}|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i},\theta_{i})}.\label{eq:PMwGaccproba}
\end{eqnarray}
\end{enumerate}
\item Sample $U_{1:T}^{(-J)}\sim\tilde{\pi}^{N}(\cdot|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta)$ using the conditional sequential Monte Carlo (CSMC) algorithm discussed in Section~\ref{s:CSMC}.
\item Sample $J\sim\tilde{\pi}^{N}\left(\cdot|U_{1:T},\theta\right)$.
\end{enumerate}
\end{sscheme}
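To fix ideas, the following is a schematic Python sketch of one iteration of Sampling Scheme~\ref{ssch:pmwg general}; it is not the paper's implementation. All \texttt{model.*} components are user-supplied placeholders for the quantities defined above, and the PMMH acceptance probability is written using the standard simplification of the ratio in \eqref{eq:PMwGaccprob}, in which $\tilde{\pi}^{N}(U_{1:T},\theta)\propto Z(U_{1:T},\theta)\,p(\theta)\,\psi(U_{1:T}|\theta)$, so that the ratio reduces to a ratio of likelihood estimates times a prior ratio (consistent with the relation used in \eqref{eq:accprobsimplify1}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def pmmh_pg_iteration(theta, u, j, logZ, model, p1):
    """One iteration of the PMMH+PG sampling scheme above, schematically.
    model.propose(i, theta, u, j)             ~ q_{i,1}
    model.propose_logpdf(i, value, theta, u, j)
    model.run_smc(theta) -> (u_new, logZ_new)   i.e. U* ~ psi(. | theta)
    model.sample_j(u, theta) -> index J
    model.log_prior(theta), model.trajectory(u, j) -> (x_j, b_j)
    model.pg_update(i, theta, x_j) -> new theta_i
    model.csmc(x_j, b_j, j, theta) -> refreshed particles U^{(-J)}
    model.log_likelihood_estimate(u, theta) -> Z(U_{1:T}, theta) on the log scale
    """
    # PMMH block: theta_1..theta_p1 generated without conditioning on the states.
    for i in range(p1):
        theta_star = list(theta)
        theta_star[i] = model.propose(i, theta, u, j)
        u_star, logZ_star = model.run_smc(theta_star)   # U* ~ psi(. | theta*)
        j_star = model.sample_j(u_star, theta_star)     # J* ~ pi_tilde^N(. | U*, theta*)
        log_accept = (logZ_star + model.log_prior(theta_star)
                      - logZ - model.log_prior(theta)
                      + model.propose_logpdf(i, theta[i], theta_star, u_star, j_star)
                      - model.propose_logpdf(i, theta_star[i], theta, u, j))
        if np.log(rng.uniform()) < log_accept:
            theta, u, j, logZ = theta_star, u_star, j_star, logZ_star
    # PG block: theta_{p1+1}..theta_p generated conditional on the selected path.
    x_j, b_j = model.trajectory(u, j)
    for i in range(p1, len(theta)):
        theta[i] = model.pg_update(i, theta, x_j)
    # Refresh the remaining particles with conditional SMC, then redraw the index J.
    u = model.csmc(x_j, b_j, j, theta)
    j = model.sample_j(u, theta)
    # Recompute the stored likelihood estimate at the refreshed (u, theta).
    logZ = model.log_likelihood_estimate(u, theta)
    return theta, u, j, logZ
\end{verbatim}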
We now discuss the assumptions required for the particle filter. For $t\geq1$, define
\begin{eqnarray*}
S_{t}^{\boldsymbol{\theta}}=\left\{\boldsymbol{x}_{1:t}\in\boldsymbol{\chi}^{t}:\pi\left(\boldsymbol{x}_{1:t}|\boldsymbol{\theta}\right)>0\right\} & \text{and} & Q_{t}^{\boldsymbol{\theta}}=\left\{\boldsymbol{x}_{1:t}\in\boldsymbol{\chi}^{t}:\pi\left(\boldsymbol{x}_{1:t-1}|\boldsymbol{\theta}\right)m_{t}^{\boldsymbol{\theta}}\left(\boldsymbol{x}_{t}|\boldsymbol{x}_{1:t-1},\boldsymbol{y}_{1:t}\right)>0\right\}.
\end{eqnarray*}
Assumption \ref{assu:propstatespace} ensures that the proposal densities $\pi\left(\boldsymbol{x}_{1:t-1}|\boldsymbol{\theta}\right)m_{t}^{\boldsymbol{\theta}}\left(\boldsymbol{x}_{t}|\boldsymbol{x}_{1:t-1},\boldsymbol{y}_{1:t}\right)$ can be used to approximate $\pi\left(\boldsymbol{x}_{1:t}|\boldsymbol{\theta}\right)$ for $t\geq1$.
\begin{assumption}
\citep{andrieuetal2010} We assume that $S_{t}^{\boldsymbol{\theta}}\subseteq Q_{t}^{\boldsymbol{\theta}}$ for any $\boldsymbol{\theta}\in\boldsymbol{\Theta}$ and $t=1,\dots,T$.\label{assu:propstatespace}
\end{assumption}
Assumption \ref{assu:propstatespace} is always satisfied in our implementation because we use the bootstrap filter, whose proposal density $p\left(\boldsymbol{x}_{t}|\boldsymbol{x}_{t-1},\boldsymbol{\theta}\right)$ is positive everywhere. We also require Assumption \ref{assu:resampling} below.
\begin{assumption}
\citep{andrieuetal2010} For any $k=1,\dots,N$ and $t=1,\dots,T$, the resampling scheme $\mathcal{M}\left(a_{t-1}^{1:N}|\bar{w}_{t-1}^{1:N}\right)$ satisfies $\mathcal{M}\left(a_{t-1}^{k}=j|\bar{w}_{t-1}^{1:N}\right)=\bar{w}_{t-1}^{j}$.\label{assu:resampling}
\end{assumption}
Assumption \ref{assu:resampling} is satisfied by the popular resampling schemes, such as multinomial, systematic, and residual resampling.
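Assumption~\ref{assu:resampling} only constrains the marginal distribution of each ancestor index. A quick Monte Carlo check of this marginal property for multinomial resampling is sketched below (the helper name is ours); it estimates $\mathcal{M}(a^{k}=j|\bar{w}^{1:N})$ by simulation and compares it with $\bar{w}^{j}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

def multinomial_resample(wbar, size):
    """Draw ancestor indices with P(a = j) = wbar[j] (multinomial resampling)."""
    return rng.choice(len(wbar), size=size, p=wbar)

# Empirical check for one normalized weight vector.
w = rng.random(10)
wbar = w / w.sum()
draws = multinomial_resample(wbar, size=(200000, 10))   # 200,000 resampling passes
empirical = np.bincount(draws[:, 0], minlength=10) / draws.shape[0]
print(np.max(np.abs(empirical - wbar)))                 # small, up to Monte Carlo error
\end{verbatim}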
Under Assumption \ref{assu:resampling}, it is straightforward to show that Algorithm~\ref{alg:condsmc} samples from the density of the random variable $U_{1:T}^{(-J)}=\left(X_{1}^{(-B_{1}^{J})},\ldots,X_{T}^{(-B_{T}^{J})},A_{1}^{(-B_{2}^{J})},\ldots,A_{T-1}^{(-B_{T}^{J})}\right)$, conditional on $U_{1:T}^{J}$ and the index $J$, given by
\begin{eqnarray*}
\lefteqn{\tilde{\pi}^{N}\left(u_{1:T}^{(-j)}|x_{1:T},b_{1:T-1},j,\theta\right)=}\\
 && \frac{\psi\left(u_{1:T}|\theta\right)}{m_{1}^{\theta}\left(x_{1}^{b_{1}}\right)\prod_{t=2}^{T}\bar{w}_{t-1}^{a_{t-1}^{b_{t}}}\,m_{t}^{\theta}\left(x_{t}^{b_{t}}|x_{t-1}^{a_{t-1}^{b_{t}}}\right)};
\end{eqnarray*}
see \cite{andrieuetal2010} for details. We now discuss the convergence of Sampling Scheme~\ref{ssch:pmwg general} in total variation norm and then the stronger condition of uniform convergence. Note that, by construction, Sampling Scheme~\ref{ssch:pmwg general} has the stationary distribution
\begin{equation*}
\tilde{\pi}^{N}\left(x_{1:T},b_{1:T-1},j,u_{1:T}^{(-j)},\theta\right)
\end{equation*}
defined in \eqref{eq:targetdist}. From Theorem 4 of \cite{robertsrosenthal2004}, irreducibility and aperiodicity are sufficient for the Markov chain generated by Sampling Scheme~\ref{ssch:pmwg general} to converge to its stationary distribution in total variation norm for $\tilde{\pi}^{N}$-almost all starting values. These conditions must be checked for each particular sampler, and it is often straightforward to do so. We relate Sampling Scheme \ref{ssch:pmwg general} to the particle Metropolis within Gibbs sampling scheme defined below.
\begin{sscheme}[Ideal] \label{ssch:idealpmwg}
\begin{description}
\item Given initial values for $U_{1:T}$, $J$, and $\theta$, one iteration of the MCMC sampling scheme involves the following steps.
\item[1.] (PMMH sampling) For $i=1,\ldots,p_{1}$, Step $i$:
\begin{description}
\item[(a)] Sample $\theta_{i}^{\ast}\sim q_{i,1}(\cdot|U_{1:T},J,\theta_{-i},\theta_{i})$.
\item[(b)] Sample $\left(J^{\ast},U_{1:T}^{\ast}\right)\sim\tilde{\pi}^{N}\left(\cdot|\theta_{-i},\theta_{i}^{\ast}\right)$.
\item[(c)] Set $\left(\theta_{i},U_{1:T},J\right)\leftarrow\left(\theta_{i}^{\ast},U_{1:T}^{\ast},J^{\ast}\right)$ with probability
\begin{eqnarray}
\lefteqn{\widetilde{\alpha_{i}}\left(U_{1:T},J,\theta_{i};U_{1:T}^{\ast},J^{\ast},\theta_{i}^{\ast}|\theta_{-i}\right)=}\notag\\
 && 1\wedge\frac{\tilde{\pi}^{N}\left(\theta_{i}^{\ast}|\theta_{-i}\right)}{\tilde{\pi}^{N}\left(\theta_{i}|\theta_{-i}\right)}\,\frac{q_{i,1}(\theta_{i}|U_{1:T}^{\ast},J^{\ast},\theta_{-i},\theta_{i}^{\ast})}{q_{i,1}(\theta_{i}^{\ast}|U_{1:T},J,\theta_{-i},\theta_{i})}.\label{eq:idealPMwGaccprob}
\end{eqnarray}
\end{description}
\item[2.] (PG sampling) For $i=p_{1}+1,\ldots,p$, Step $i$:
\begin{description}
\item[(a)] Sample $\theta_{i}^{\ast}\sim q_{i}(\cdot|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i},\theta_{i})$.
\item[(b)] Set $\theta_{i}\leftarrow\theta_{i}^{\ast}$ with probability
\begin{eqnarray}
\lefteqn{\alpha_{i}\left(\theta_{i};\theta_{i}^{\ast}|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i}\right)=}\notag\\
 && 1\wedge\frac{\tilde{\pi}^{N}\left(\theta_{i}^{\ast}|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i}\right)}{\tilde{\pi}^{N}\left(\theta_{i}|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i}\right)}\notag\\
 && \times\,\frac{q_{i}(\theta_{i}|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i},\theta_{i}^{\ast})}{q_{i}(\theta_{i}^{\ast}|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta_{-i},\theta_{i})}.\label{eq:idealPMwGaccproba}
\end{eqnarray}
\end{description}
\item[3.]
\begin{description}
\item Sample $U_{1:T}^{(-J)}\sim\tilde{\pi}^{N}(\cdot|X_{1:T}^{J},B_{1:T-1}^{J},J,\theta)$ using Algorithm~\ref{alg:condsmc}.
\end{description}
\item[4.]
\begin{description}
\item Sample $J\sim\tilde{\pi}^{N}\left(\cdot|U_{1:T},\theta\right)$.
\end{description}
\end{description}
\end{sscheme}
We call Sampling Scheme~\ref{ssch:idealpmwg} an \textit{ideal} particle sampling scheme because, in Part 1 Step $i$(b), it generates the particles $U_{1:T}^{\ast}$ from their conditional distribution $\tilde{\pi}^{N}\left(\cdot|\theta_{-i},\theta_{i}^{\ast}\right)$ instead of using a Metropolis-Hastings proposal. Comparing Sampling Schemes \ref{ssch:pmwg general} and \ref{ssch:idealpmwg} therefore isolates the effect of the Metropolis-Hastings proposal for the particles on the convergence of the sampler.
\begin{remark}
\cite{andrieuroberts2009} and \cite{andrieuvihola2012} discuss the relationship between PMMH sampling schemes with one block of parameters and an \textit{ideal} Metropolis-Hastings sampling scheme that does not involve the particles. Sampling Schemes \ref{ssch:pmwg general} and \ref{ssch:idealpmwg} are more general; our approach is similar to, but generalizes, the results in \cite{andrieuroberts2009} and \cite{andrieuvihola2012} to more complex sampling schemes.
\end{remark}
To develop the theory of Sampling Schemes \ref{ssch:pmwg general} and \ref{ssch:idealpmwg}, we require the following definitions. Let $\left\{V^{(n)},n=1,2,\ldots\right\}$ be the iterates of a Markov chain defined on the state space $\mathcal{V}:=\mathcal{U}\times\mathbb{N}\times\Theta$. For $i=1,\ldots,p$, let $K_{i}(v;\cdot)$ be the substochastic transition kernel of the $i$th step of Sampling Scheme~\ref{ssch:pmwg general} that gives the probabilities of accepted Metropolis-Hastings moves, and define
\begin{equation*}
K:=K_{1}K_{2}\cdots K_{p}
\end{equation*}
to be the corresponding substochastic transition kernel for one full iteration. Probabilities computed from the substochastic kernels provide lower bounds on the corresponding probabilities for the transition kernel of the Markov chain. For $i=1,\ldots,p_{1}$,
\begin{eqnarray*}
\lefteqn{K_{i}\left(U_{1:T},J,\theta_{-i},\theta_{i};U_{1:T}^{\ast},J^{\ast},\theta_{-i},\theta_{i}^{\ast}\right)=}\\
 && \tilde{\pi}^{N}(J^{\ast}|U_{1:T}^{\ast},\theta_{-i},\theta_{i}^{\ast})\,q_{i}(U_{1:T}^{\ast},\theta_{i}^{\ast}|U_{1:T},J,\theta_{-i},\theta_{i})\times\alpha_{i}\left(U_{1:T},J,\theta_{i};U_{1:T}^{\ast},J^{\ast},\theta_{i}^{\ast}|\theta_{-i}\right).
\end{eqnarray*}
Similarly, for $i=1,\ldots,p$, let $\widetilde{K}_{i}(v;\cdot)$ be the substochastic transition kernel of the $i$th step of Sampling Scheme~\ref{ssch:idealpmwg} that gives the probabilities of accepted Metropolis-Hastings moves, and define
\begin{equation*}
\widetilde{K}:=\widetilde{K}_{1}\widetilde{K}_{2}\cdots\widetilde{K}_{p},
\end{equation*}
where the kernels $K_{i}$ and $\widetilde{K}_{i}$ differ only for $i=1,\ldots,p_{1}$. The next theorem gives a sufficient condition for Sampling Scheme~\ref{ssch:pmwg general} to be irreducible and aperiodic; it is similar to Theorem 1 of \cite{andrieuroberts2009}.
\begin{theorem}
\label{th:irrandaperiodic} If $\widetilde{K}$ is irreducible and aperiodic, then $K$ is irreducible and aperiodic.
\begin{proof}
For $i=1,\ldots,p_{1}$, $\tilde{\pi}^{N}\left(\cdot|\theta_{-i},\theta_{i}^{\ast}\right)\ll\psi\left(\cdot|\theta_{-i},\theta_{i}^{\ast}\right)$, and the result now follows from Assumption 1 of \cite{andrieuetal2010}.
\end{proof}
\end{theorem}
We now follow the approach of \cite{andrieuroberts2009} and show the uniform ergodicity of the sampling schemes by giving sufficient conditions for the existence of minorization conditions for Sampling Scheme~\ref{ssch:pmwg general}. These minorization conditions are equivalent to uniform ergodicity by Theorem 8 of \cite{robertsrosenthal2004}. The results use the following technical lemmas.
\begin{lemma}
\label{l:ergtechnical1} For $i=1,\ldots,p_{1}$,
\begin{eqnarray*}
\alpha_{i}\left(U_{1:T},J,\theta_{i};U_{1:T}^{\ast},J^{\ast},\theta_{i}^{\ast}|\theta_{-i}\right)\geq\left\{1\wedge\frac{\tilde{\pi}^{N}\left(U_{1:T}^{\ast}|\theta_{-i},\theta_{i}^{\ast}\right)\psi(U_{1:T}|\theta_{-i},\theta_{i})}{\tilde{\pi}^{N}\left(U_{1:T}|\theta_{-i},\theta_{i}\right)\psi(U_{1:T}^{\ast}|\theta_{-i},\theta_{i}^{\ast})}\right\}\times\widetilde{\alpha_{i}}\left(U_{1:T},J,\theta_{i};U_{1:T}^{\ast},J^{\ast},\theta_{i}^{\ast}|\theta_{-i}\right).
\end{eqnarray*}
\end{lemma}
\begin{proof}
From \eqref{eq:PMwGaccprob},
\begin{eqnarray*}
\lefteqn{\alpha_{i}\left(U_{1:T},J,\theta_{i};U_{1:T}^{\ast},J^{\ast},\theta_{i}^{\ast}|\theta_{-i}\right)}\\
 &=& 1\wedge\frac{\tilde{\pi}^{N}\left(U_{1:T}^{\ast},\theta_{i}^{\ast}|\theta_{-i}\right)}{\tilde{\pi}^{N}\left(U_{1:T},\theta_{i}|\theta_{-i}\right)}\,\frac{q_{i}(U_{1:T},\theta_{i}|U_{1:T}^{\ast},J^{\ast},\theta_{-i},\theta_{i}^{\ast})}{q_{i}(U_{1:T}^{\ast},\theta_{i}^{\ast}|U_{1:T},J,\theta_{-i},\theta_{i})}\\
 &=& 1\wedge\left(\frac{\tilde{\pi}^{N}\left(U_{1:T}^{\ast}|\theta_{-i},\theta_{i}^{\ast}\right)\psi(U_{1:T}|\theta_{-i},\theta_{i})}{\tilde{\pi}^{N}\left(U_{1:T}|\theta_{-i},\theta_{i}\right)\psi(U_{1:T}^{\ast}|\theta_{-i},\theta_{i}^{\ast})}\times\frac{\tilde{\pi}^{N}\left(\theta_{i}^{\ast}|\theta_{-i}\right)q_{i,1}(\theta_{i}|U_{1:T}^{\ast},J^{\ast},\theta_{-i},\theta_{i}^{\ast})}{\tilde{\pi}^{N}\left(\theta_{i}|\theta_{-i}\right)q_{i,1}(\theta_{i}^{\ast}|U_{1:T},J,\theta_{-i},\theta_{i})}\right)\\
 &\geq& \left(1\wedge\frac{\tilde{\pi}^{N}\left(U_{1:T}^{\ast}|\theta_{-i},\theta_{i}^{\ast}\right)\psi(U_{1:T}|\theta_{-i},\theta_{i})}{\tilde{\pi}^{N}\left(U_{1:T}|\theta_{-i},\theta_{i}\right)\psi(U_{1:T}^{\ast}|\theta_{-i},\theta_{i}^{\ast})}\right)\times\left(1\wedge\frac{\tilde{\pi}^{N}\left(\theta_{i}^{\ast}|\theta_{-i}\right)q_{i,1}(\theta_{i}|U_{1:T}^{\ast},J^{\ast},\theta_{-i},\theta_{i}^{\ast})}{\tilde{\pi}^{N}\left(\theta_{i}|\theta_{-i}\right)q_{i,1}(\theta_{i}^{\ast}|U_{1:T},J,\theta_{-i},\theta_{i})}\right)\\
 &=& \left\{1\wedge\frac{\tilde{\pi}^{N}\left(U_{1:T}^{\ast}|\theta_{-i},\theta_{i}^{\ast}\right)\psi(U_{1:T}|\theta_{-i},\theta_{i})}{\tilde{\pi}^{N}\left(U_{1:T}|\theta_{-i},\theta_{i}\right)\psi(U_{1:T}^{\ast}|\theta_{-i},\theta_{i}^{\ast})}\right\}\times\widetilde{\alpha_{i}}\left(U_{1:T},J,\theta_{i};U_{1:T}^{\ast},J^{\ast},\theta_{i}^{\ast}|\theta_{-i}\right),
\end{eqnarray*}
where the inequality uses $1\wedge ab\geq(1\wedge a)(1\wedge b)$ for $a,b\geq0$, and the last equality follows from \eqref{eq:idealPMwGaccprob}.
\end{proof}
\begin{lemma}
\label{l:ergtechnical2} Suppose that
\begin{equation}
\frac{\tilde{\pi}^{N}\left(U_{1:T}^{\ast}|\theta\right)}{\psi(U_{1:T}^{\ast}|\theta)}\leq\gamma<\infty\label{eq:convcond}
\end{equation}
for all $U_{1:T}^{\ast}\in\mathcal{U}$ and $\theta\in\mathcal{S}$. Then, for $i=1,\ldots,p_{1}$, each Markov transition kernel $K_{i}$ satisfies
\begin{equation}
K_{i}\geq\gamma^{-1}\widetilde{K}_{i},\label{eq:kernelbound1}
\end{equation}
and hence
\begin{equation}
K\geq\gamma^{-p_{1}}\widetilde{K}.\label{eq:kernelbound2}
\end{equation}
\end{lemma}
\begin{proof}
Fix $i\in\left\{1,\ldots,p_{1}\right\}$ and let $A\in\mathcal{B}\left(\mathcal{U}\right)$, $J,J^{\ast}\in\left\{1,\ldots,N\right\}$, and $B\in\mathcal{B}\left(\Theta_{i}\right)$.
Then \begin{eqnarray*} \lefteqn{K_{i}\left( U_{1:T},J,{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i};A,J^{\ast },{\Greekmath 0112} _{-i},B\right) } \\ &=&\int_{A\times B}\tilde{{\Greekmath 0119}}^{N}(J^{\ast }|U_{1:T}^{\ast },{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast })q_{i}(U_{1:T}^{\ast },{\Greekmath 0112} _{i}^{\ast }|U_{1:T},J,{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i})\times \\ &&{\Greekmath 010B} _{i}\left( U_{1:T},J,{\Greekmath 0112} _{i};U_{1:T}^{\ast },J^{\ast },{\Greekmath 0112} _{i}^{\ast }|{\Greekmath 0112} _{-i}\right) dU_{1:T}^{\ast }d{\Greekmath 0112} _{i}^{\ast } \\ &\geq &\int_{A\times B}\tilde{{\Greekmath 0119}}^{N}(J^{\ast }|U_{1:T}^{\ast },{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast })q_{i}(U_{1:T}^{\ast },{\Greekmath 0112} _{i}^{\ast }|U_{1:T},J,{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i})\times \\ &&\left\{ 1\wedge \frac{\tilde{{\Greekmath 0119}}^{N}\left( U_{1:T}^{\ast }|{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast }\right) {\Greekmath 0120}(U_{1:T}| {\Greekmath 0112} _{-i},{\Greekmath 0112} _{i})}{\tilde{{\Greekmath 0119}}^{N}\left( U_{1:T}|{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}\right) {\Greekmath 0120}(U_{1:T}^{\ast }|{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast })}\right\} \times \widetilde{{\Greekmath 010B} _{i}}\left( U_{1:T},J,{\Greekmath 0112} _{i};U_{1:T}^{\ast },J^{\ast },{\Greekmath 0112} _{i}^{\ast }|{\Greekmath 0112} _{-i}\right) dU_{1:T}^{\ast }d{\Greekmath 0112} _{i}^{\ast } \\ &\geq &{\Greekmath 010D}^{-1}\int_{A\times B}\tilde{{\Greekmath 0119}}^{N}\left( U_{1:T}^{\ast },J^{\ast }|{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast }\right) q_{i,1}({\Greekmath 0112} _{i}^{\ast }|U_{1:T},J,{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i})\times \widetilde{{\Greekmath 010B} _{i}}\left( U_{1:T},J,{\Greekmath 0112} _{i};U_{1:T}^{\ast },J^{\ast },{\Greekmath 0112} _{i}^{\ast }|{\Greekmath 0112} _{-i}\right) d U_{1:T}^{\ast }d{\Greekmath 0112} _{i}^{\ast }\\ &=&{\Greekmath 010D}^{-1}\widetilde{K}_{i}\left( U_{1:T},J,{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i};A,J^{\ast },{\Greekmath 0112} _{-i},B\right) , \end{eqnarray*}
which proves (\ref{eq:kernelbound1}). Applying (\ref{eq:kernelbound1}) for each $i$ gives (\ref{eq:kernelbound2}). \end{proof}
Lemma \ref{l:ergtechnical2} can be used to find sufficient conditions for the existence of minorization conditions for Sampling Scheme~\ref{ssch:pmwg general} as given in the theorem below, which is similar to \cite{andrieuroberts2009}, Theorem 8. Let $\mathcal{L}_{N}\{V^{\left( n\right) }\in \cdot \}$ denote the sequence of distribution functions of the random variables $\{V^{\left( n\right) }:\,n=1,2,\dots \}$, generated by Sampling Scheme~\ref{ssch:pmwg general}, and let $|\cdot |_{TV}$ denote the total variation norm.
\begin{theorem} \label{th:minorization}Suppose that Sampling Scheme~\ref{ssch:idealpmwg} satisfies the following minorization condition: there exists a constant $ {\Greekmath 010F} >0$, a number $n_{0}\geq 1$, and a probability measure ${\Greekmath 0117} $ on $ \mathcal{V}$ such that $\widetilde{K}^{n_{0}}(v;A)\geq {\Greekmath 010F} \,{\Greekmath 0117} (A)$ for all $v\in \mathcal{V},A\in \mathcal{B}\left( \mathcal{V}\right) $. Suppose also that the conditions of Lemma \ref{l:ergtechnical2} are satisfied.
Then Sampling Scheme~\ref{ssch:pmwg general} satisfies the minorization condition \begin{equation*} K^{n_{0}}(v;A)\geq {\Greekmath 010D} ^{-p_1 n_{0}}{\Greekmath 010F} {\Greekmath 0117} (A) \end{equation*} and, for all starting values of the Markov chain, \begin{equation*} \left\vert \mathcal{L}_{N}\{V^{\left( n\right) }\in \cdot \}-\tilde{{\Greekmath 0119}} ^{N}\left\{ V^{\left( n\right) }\in \cdot \right\} \right\vert _{TV}\leq \left( 1-{\Greekmath 010E} \right) ^{\left\lfloor n/n_{0}\right\rfloor }, \end{equation*} where $0<{\Greekmath 010E} <1$ and $\left\lfloor n/n_{0}\right\rfloor $ is the greatest integer not exceeding $n/n_{0}$. \end{theorem}
\begin{proof} To show the first part, suppose $\widetilde{K}^{n_{0}}(v;A)\geq {\Greekmath 010F} \,{\Greekmath 0117} (A)$ for all $v\in \mathcal{V},A\in \mathcal{B}\left( \mathcal{V} \right) $. Fix $v\in \mathcal{V},A\in \mathcal{B}\left( \mathcal{V}\right)$. Applying Lemma \ref{l:ergtechnical2} repeatedly gives \begin{eqnarray*} K^{n_{0}}(v;A) &\geq &{\Greekmath 010D}^{-p_1 n_{0}} \widetilde{K}^{n_{0}}(v;A) \geq \,{\Greekmath 010D} ^{-p_1 n_{0}}{\Greekmath 010F} {\Greekmath 0117} (A) \end{eqnarray*} as required. The second part follows from the first part and \cite{robertsrosenthal2004}, Theorem 8. \end{proof}
Lemma \ref{l:boundedlikelihood} gives sufficient conditions for Lemma \ref{l:ergtechnical2} to hold. The first condition is from \cite{andrieuetal2010}.
\begin{lemma} \label{l:boundedlikelihood}Suppose \begin{description} \item[(i)] There is a sequence of finite, positive constants $ \{c_{t}:t=1,\dots ,T\}$ such that for any $x_{1:t}\in \mathcal{S}_{t}({\Greekmath 0112} )$ and all ${\Greekmath 0112} \in \mathcal{S}$, $f_{{\Greekmath 0112} }(x_{t}|x_{t-1})g_{{\Greekmath 0112} }(y_{t}|x_{t})\leq c_{t}\,m_{t}^{{\Greekmath 0112} }(x_{t}|x_{t-1})$. \item[(ii)] There exists an $\varepsilon >0$ such that for all ${\Greekmath 0112} \in \mathcal{S}$, $p\left( y_{1:T}|{\Greekmath 0112} \right) >\varepsilon $. \end{description} If (i) and (ii) hold, then the conditions in Lemma \ref{l:ergtechnical2} are satisfied. \end{lemma}
\begin{proof} Part (i) implies that for all ${\Greekmath 0112} \in \mathcal{S}$ and all $U_{1:T}\in \mathcal{U}$, $Z(U_{1:T},{\Greekmath 0112} )\leq \prod_{t=1}^{T}c_{t}$. Hence, by Part (ii), \begin{equation*} \frac{Z(U_{1:T},{\Greekmath 0112} )}{p\left( y_{1:T}|{\Greekmath 0112} \right) }<\frac{ \prod_{t=1}^{T}c_{t}}{\varepsilon }. \end{equation*} From \eqref{eq:accprobsimplify1}, \begin{align*}\frac{\tilde{{\Greekmath 0119}}^{N}\left( U_{1:T}^{\ast }|{\Greekmath 0112} \right) }{{\Greekmath 0120} \left( U_{1:T}^{\ast }|{\Greekmath 0112} \right) } &=\frac{Z(U_{1:T}^{\ast },{\Greekmath 0112} )}{p\left( y_{1:T}|{\Greekmath 0112} \right) } \end{align*} giving the result. \end{proof}
\begin{remark} The results above can be modified for the factor stochastic volatility model given in Section \ref{PMMH+PG SV factor} in a straightforward way. Details are available from the authors on request. \end{remark}
\begin{remark} If the states are sampled using backward simulation, similar arguments can be applied to obtain corresponding results (see Section \ref{appendixbsi}). The mathematical details of the derivation use the results in \cite{olssonryden2011} and \cite{lindstenschon2012}.
\end{remark} \section{Backward simulation\label{appendixbsi}}
\cite{godsilletal2004} introduce the \textit{backward simulation} algorithm, which samples the indices $J_{T},J_{T-1},\dots ,J_{1}$ sequentially, and differs from \textit{ancestral tracing}, which samples one index $J$ and traces back its ancestral lineage. The \textit{backward simulation} algorithm (Algorithm \ref{alg:bsim} below) is used in the PMCMC setting by \cite{olssonryden2011} (in the PMMH algorithm) and \cite{lindstenschon2012} (in the PG algorithm). \cite{chopinsingh2013} studied the PG algorithm with \textit{backward simulation} and found that it yields smaller autocorrelations than the corresponding algorithm using \textit{ancestral tracing}. Moreover, it is more robust to the resampling scheme (multinomial, systematic, residual or stratified resampling) used in the resampling step of the algorithm.
\begin{algorithm}[Backward Simulation] \label{alg:bsim} \begin{enumerate} \item Sample $J_{T}=j_{T}$ conditional on $u_{1:T}$, with probability proportional to $w_{T}^{j_{T}}$, and choose $x_{T}^{j_{T}}$; \item For $t=T-1,\dots ,1$, sample $J_{t}=j_{t}$ conditional on \begin{equation*} (u_{1:t},j_{t+1:T},x_{t+1}^{j_{t+1}},\dots ,x_{T}^{j_{T}}), \end{equation*} with probability proportional to $w_{t}^{j_{t}}f_{{\Greekmath 0112} }(x_{t+1}^{j_{t+1}}|x_{t}^{j_{t}})$, and choose $x_{t}^{j_{t}}$. \end{enumerate} \end{algorithm}
We denote the selected trajectory and the selected indices by $ x_{1:T}^{j_{1:T}}=(x_{1}^{j_{1}},\dots ,x_{T}^{j_{T}})$ and $j_{1:T}$, respectively. With some abuse of notation, we denote \begin{equation*} x_{1:T}^{(-j_{1:T})}=\left\{ x_{1}^{(-j_{1})},\ldots ,x_{T}^{(-j_{T})}\right\} . \end{equation*} It will sometimes simplify the notation to use the following one-to-one transformation \begin{equation*} \left( u_{1:T},j_{1:T}\right) \leftrightarrow \left\{ x_{1:T}^{j_{1:T}},j_{1:T},x_{1:T}^{(-j_{1:T})},a_{1:T-1}\right\} , \end{equation*} and to switch between the two representations, using whichever is more convenient.
The augmented space in this case consists of the particle filter variables $ U_{1:T}$ and the sampled indices $J_{1:T}$, and PMCMC methods using \textit{backward simulation} target the following density \begin{eqnarray} \lefteqn{\tilde{{\Greekmath 0119}}_{BSi}^{N}\left( x_{1:T},j_{1:T},x_{1:T}^{(-j_{1:T})},a_{1:T-1},{\Greekmath 0112} \right) \mathrel{:=} } \notag \\ &&\frac{p(x_{1:T},{\Greekmath 0112} |y_{1:T})}{N^{T}}\frac{{\Greekmath 0120} \left( u_{1:T}|{\Greekmath 0112} \right) }{m_{1}^{{\Greekmath 0112} }\left( x_{1}^{b_{1}}\right) \,\prod_{t=2}^{T} \bar{w}_{t-1}^{a_{t-1}^{b_{t}}}m_{t}^{{\Greekmath 0112} }\left( x_{t}^{b_{t}}|x_{t-1}^{a_{t-1}^{b_{t}}}\right) } \times \notag \\ & & \prod_{t=2}^{T}\frac{ w_{t}^{a_{t-1}^{j_{t}}}f(x_{t}^{j_{t}}|x_{t-1}^{a_{t-1}^{j_{t}}})}{ \sum_{i=1}^{N}w_{t}^{a_{t-1}^{i}}f(x_{t}^{i}|x_{t-1}^{a_{t-1}^{i}})}.
\label{eq:targetbsi} \end{eqnarray}
\cite{olssonryden2011} show that, under Assumption 2 of \cite{andrieuetal2010}, \begin{equation*} \tilde{{\Greekmath 0119}}_{BSi}^{N}\left( x_{1:T},j_{1:T},x_{1:T}^{(-j_{1:T})},a_{1:T-1},{\Greekmath 0112} \right) \end{equation*} has the following marginal distribution \begin{equation*} \tilde{{\Greekmath 0119}}_{BSi}^{N}\left( x_{1:T},j_{1:T},{\Greekmath 0112} \right) =\frac{ p(x_{1:T},{\Greekmath 0112} |y_{1:T})}{N^{T}}, \end{equation*} and hence \begin{equation*} \tilde{{\Greekmath 0119}}_{BSi}^{N}\left( x_{1:T},{\Greekmath 0112} \right) =p(x_{1:T},{\Greekmath 0112} |y_{1:T}). \end{equation*}
The conditional sequential Monte Carlo algorithm used in the backward simulation also changes. It is given in \cite{lindstenetal2014} and generates from the full conditional distribution \begin{equation*} \tilde{{\Greekmath 0119}}_{BSi}^{N}\left( x_{1:T}^{(-j_{1:T})},a_{1:T-1}|x_{1:T},j_{1:T},{\Greekmath 0112} \right) . \end{equation*}
The general sampler using \textit{backward simulation} is analogous to the \textit{ancestral tracing} general sampler, but on an expanded space.
\begin{sscheme}[general-BSi] \label{ssch:pmwgbsi} \begin{description} \item Given initial values for $U_{1:T}$, $J_{1:T}$ and ${\Greekmath 0112} $, one iteration of the MCMC involves the following steps. \item[1.] (PMMH sampling) For $i=1,\ldots ,p_{1}$, Step $i$: \begin{description} \item[(a)] Sample ${\Greekmath 0112} _{i}^{\ast }\sim q_{BSi,i,1}(\cdot |U_{1:T},J_{1:T},{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}).$ \item[(b)] Sample $U_{1:T}^{\ast }\sim {\Greekmath 0120}(\cdot |{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast }).$ \item[(c)] Sample $J_{1:T}^{\ast }$ from $\tilde{{\Greekmath 0119}}_{BSi}^{N}(\cdot |U_{1:T}^{\ast },{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast }).$ \item[(d)] Set $\left({\Greekmath 0112} _{i}, U_{1:T}, J_{1:T}\right) \leftarrow \left({\Greekmath 0112} _{i}^{\ast }, U_{1:T}^{\ast}, J_{1:T}^{\ast }\right)$ with probability \begin{eqnarray} \lefteqn{{\Greekmath 010B} _{i}\left( U_{1:T},J_{1:T},{\Greekmath 0112} _{i};U_{1:T}^{\ast },J_{1:T}^{\ast },{\Greekmath 0112} _{i}^{\ast }|{\Greekmath 0112} _{-i}\right) =} \label{eq:PMwGBSiaccprob} \\ &&1\wedge \frac{\tilde{{\Greekmath 0119}}_{BSi}^{N}\left( U_{1:T}^{\ast },{\Greekmath 0112} _{i}^{\ast }|{\Greekmath 0112} _{-i}\right) }{\tilde{{\Greekmath 0119}}_{BSi}^{N}\left( U_{1:T},{\Greekmath 0112} _{i}|{\Greekmath 0112} _{-i}\right) }\,\frac{q_{BSi,i}(U_{1:T},{\Greekmath 0112} _{i}|U_{1:T}^{\ast },J_{1:T}^{\ast },{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast })}{ q_{BSi,i}(U_{1:T}^{\ast },{\Greekmath 0112} _{i}^{\ast }|U_{1:T},J_{1:T},{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i})} \notag \end{eqnarray} where \begin{align*} q_{BSi,i}(U_{1:T}^{\ast },{\Greekmath 0112} _{i}^{\ast }|U_{1:T},J_{1:T},{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i})= & q_{BSi,i,1}({\Greekmath 0112} _{i}^{\ast }|U_{1:T},J_{1:T},{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}){\Greekmath 0120}(U_{1:T}^{\ast }|{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast }). \end{align*} \end{description} \item[2.]
(PG or PMwG sampling) For $i=p_{1}+1,\ldots ,p$, Step $i$: \begin{description} \item[(a)] Sample ${\Greekmath 0112} _{i}^{\ast }\sim q_{i}(\cdot |X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}).$ \item[(b)] Set ${\Greekmath 0112} _{i} \leftarrow {\Greekmath 0112} _{i}^{\ast }$ with probability \begin{eqnarray*} &&{\Greekmath 010B} _{i}\left( {\Greekmath 0112} _{i};{\Greekmath 0112} _{i}^{\ast }|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{-i}\right) \\ &=&1\wedge \frac{\tilde{{\Greekmath 0119}}^{N}\left( {\Greekmath 0112} _{i}^{\ast }|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{-i}\right) }{\tilde{{\Greekmath 0119}}^{N}\left( {\Greekmath 0112} _{i}|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{-i}\right) }\,\frac{ q_{i}({\Greekmath 0112} _{i}|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast })}{q_{i}({\Greekmath 0112} _{i}^{\ast }|X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i})}. \end{eqnarray*} \end{description} \item[3.] Sample $U_{1:T}^{(-J),\ast }\sim \tilde{{\Greekmath 0119}}^{N}(\cdot |X_{1:T}^{J},B_{1:T-1}^{J},J,{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast })$. \item[4.] Sample $J\sim \tilde{{\Greekmath 0119}}^{N}\left( \cdot |U_{1:T},{\Greekmath 0112} \right) $. \end{description} \end{sscheme}
The PMMH steps in Sampling Scheme~\ref{ssch:pmwgbsi} simplify similarly to those in Sampling Scheme~\ref{ssch:pmwg general}. \cite{olssonryden2011} show that \begin{equation*} \frac{\tilde{{\Greekmath 0119}}_{BSi}^{N}\left( U_{1:T},{\Greekmath 0112} _{i}|{\Greekmath 0112} _{-i}\right) }{ {\Greekmath 0120} \left( U_{1:T}|{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}\right) }=\frac{Z(U_{1:T},{\Greekmath 0112} )p({\Greekmath 0112} _{i}|{\Greekmath 0112} _{-i})}{p\left( y_{1:T}|{\Greekmath 0112} _{-i}\right) }, \end{equation*} which is the same expression as (\ref{eq:accprobsimplify1}). Hence, the Metropolis-Hastings acceptance probability in (\ref{eq:PMwGBSiaccprob}) simplifies to \begin{equation*} 1\wedge \frac{Z({\Greekmath 0112} _{i}^{\ast },{\Greekmath 0112} _{-i},U_{1:T}^{\ast })}{Z({\Greekmath 0112} _{i},{\Greekmath 0112} _{-i},U_{1:T})}\,\frac{q_{BSi,i,1}({\Greekmath 0112} _{i}|U_{1:T}^{\ast },J^{\ast },{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i}^{\ast })p({\Greekmath 0112} _{i}^{\ast }|{\Greekmath 0112} _{-i})}{q_{BSi,i,1}({\Greekmath 0112} _{i}^{\ast }|U_{1:T},J,{\Greekmath 0112} _{-i},{\Greekmath 0112} _{i})p({\Greekmath 0112} _{i}|{\Greekmath 0112} _{-i})}. \end{equation*}
The results in Section \ref{s:theory} can be modified for the distribution $ \tilde{{\Greekmath 0119}}_{BSi}^{N}\left( \cdot \right) $, instead of the distribution $ \tilde{{\Greekmath 0119}}^{N}\left( \cdot \right) $, in a straightforward way. Details are available from the authors on request.
\section{Target density for the factor SV model \label{sec:Sampling-Schemes-Factor model}}
This section discusses the target density of the PMMH+PG sampler for the multivariate factor SV model outlined in Section \ref{S: factor SV model explanation}. Section \ref{sss: exact transition density case} discusses an appropriate target density for the closed form density case, and Section \ref{sss: Euler scheme case} discusses an appropriate target density for a factor SV model with the Euler approximation.
\subsection{The closed form density case} \label{sss: exact transition density case}
This section provides an appropriate target density for a factor SV model with the closed form state transition density given in equation~\eqref{eq:exact transition}. The target density includes all the random variables produced by $K + S$ univariate particle filters that generate the factor log volatilities $\boldsymbol{{\Greekmath 0115}}_{k,1:T}$ for $k=1,...,K$ and the idiosyncratic log volatilities $\boldsymbol{h}_{s,1:T}$ for $s=1,...,S$, as well as the factors $\boldsymbol{f}_{1:T}$ and the parameters $\boldsymbol{{\Greekmath 0121}}$. It is convenient in the developments below to define $\boldsymbol{{\Greekmath 0112}} = (\boldsymbol{f}_{1:T}, \boldsymbol{{\Greekmath 0121}})$.
To specify the univariate particle filters that generate the factor log volatilities $\boldsymbol{{\Greekmath 0115}}_{k,1:T}$ for $k=1,...,K$, we use equations \eqref{eq:SVtransition} and \eqref{eq:PMMH+PG SV obs density}; to generate the idiosyncratic log volatilities $\boldsymbol{h}_{s,1:T}$ for $s=1,...,S$, we use equations \eqref{eq:exact transition} and \eqref{eq:PMMH+PG OU obs density}. We denote the weighted samples by $\left(\boldsymbol{{\Greekmath 0115}}_{k,t}^{1:N},\overline{w}_{f,k,t}^{1:N}\right)$ and $\left(\boldsymbol{h}_{s,t}^{1:N},\overline{w}_{{\Greekmath 010F},s,t}^{1:N}\right)$. We denote the proposal densities by $m_{f,k,1}^{\boldsymbol{{\Greekmath 0112}}}\left({\Greekmath 0115}_{k,1}\right)$, $m_{f,k,t}^{\boldsymbol{{\Greekmath 0112}}}\left({\Greekmath 0115}_{k,t}|{\Greekmath 0115}_{k,t-1}\right)$, $m_{{\Greekmath 010F},s,1}^{\boldsymbol{{\Greekmath 0112}}}\left(h_{s,1}\right)$ and $m_{{\Greekmath 010F},s,t}^{\boldsymbol{{\Greekmath 0112}}}\left(h_{s,t}|h_{s,t-1}\right)$ for $t=2,...,T$. We denote the resampling schemes by $\mathcal{M}_{f}\left(\boldsymbol{a}_{f,k,t-1}^{1:N}|\overline{w}_{f,k,t-1}^{1:N}\right)$ for $k=1,...,K$, where each $a_{f,k,t-1}^{i}=j$ indexes a particle in $\left(\boldsymbol{{\Greekmath 0115}}_{k,t-1}^{1:N},\overline{w}_{f,k,t-1}^{1:N}\right)$ and is chosen with probability $\overline{w}_{f,k,t-1}^{j}$; the resampling scheme $\mathcal{M}_{{\Greekmath 010F}}\left(\boldsymbol{a}_{{\Greekmath 010F},s,t-1}^{1:N}|\overline{w}_{{\Greekmath 010F},s,t-1}^{1:N}\right)$ for $s=1,...,S$ is defined similarly.
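To illustrate the construction of a single univariate particle filter described above, the following is a minimal Python sketch with multinomial resampling. The functions \texttt{m\_init}, \texttt{m\_step} and \texttt{log\_weight} stand in for the proposal densities $m_{f,k,1}^{\boldsymbol{{\Greekmath 0112}}}$, $m_{f,k,t}^{\boldsymbol{{\Greekmath 0112}}}$ and the incremental importance weights; these names and the array layout are illustrative assumptions of the sketch and do not refer to any particular implementation.
\begin{verbatim}
import numpy as np

def univariate_particle_filter(T, N, m_init, m_step, log_weight, rng):
    """Sketch of one univariate particle filter: returns particles x[t, i],
    ancestor indices a[t, i] (row t holds a_{t-1}^{1:N}) and normalised
    weights w_bar[t, i]."""
    x = np.empty((T, N))
    a = np.zeros((T, N), dtype=int)
    w_bar = np.empty((T, N))
    x[0] = m_init(N, rng)
    logw = log_weight(0, x[0], None)
    w_bar[0] = np.exp(logw - logw.max())
    w_bar[0] /= w_bar[0].sum()
    for t in range(1, T):
        # multinomial resampling: ancestor j is drawn with probability w_bar[t-1, j]
        a[t] = rng.choice(N, size=N, p=w_bar[t - 1])
        x[t] = m_step(t, x[t - 1, a[t]], rng)
        logw = log_weight(t, x[t], x[t - 1, a[t]])
        w_bar[t] = np.exp(logw - logw.max())
        w_bar[t] /= w_bar[t].sum()
    return x, a, w_bar
\end{verbatim}
An unbiased estimate of the likelihood contribution $Z$ can be accumulated from the unnormalised weights inside the same loop; this is omitted to keep the sketch short.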
We denote the vectors of particles and ancestor indices by \begin{align} \boldsymbol{U}_{f,1:K,1:T}& =\left(\boldsymbol{{\Greekmath 0115}}_{1:K,1:T}^{1:N}, \boldsymbol{A}_{f,1:K,1:T-1}^{1:N}\right),\label{eq:particlefactor}\\ \intertext{and} \boldsymbol{U}_{{\Greekmath 010F}, 1:S,1:T}&=\left(\boldsymbol{h}_{1:S,1:T}^{1:N},\boldsymbol{A}_{{\Greekmath 010F}, 1:S,1:T-1}^{1:N}\right).\label{eq:particlesidio} \end{align}
The joint distribution of the particles given the parameters is \begin{equation} {\Greekmath 0120}_{f,k}\left(\boldsymbol{U}_{f,k,1:T}|\boldsymbol{{\Greekmath 0112}}\right)= \prod_{i=1}^{N}m_{f,k,1}^{\boldsymbol{{\Greekmath 0112}}}\left({\Greekmath 0115}_{k,1}^{i}\right)\prod_{t=2}^{T}\left\{ \mathcal{M}_{f}\left(\boldsymbol{a}_{f,k,t-1}^{1:N}|\overline{w}_{f,k,t-1}^{1:N}\right)\prod_{i=1}^{N}m_{f,k,t}^{\boldsymbol{{\Greekmath 0112}}}\left({\Greekmath 0115}_{k,t}^{i}| {\Greekmath 0115}_{k,t-1}^{a_{f,k,t-1}^{i}}\right)\right\}, \label{eq:factor} \end{equation} for $k=1,...,K,$ and \begin{equation} {\Greekmath 0120}_{{\Greekmath 010F},s}\left(\boldsymbol{U}_{{\Greekmath 010F},s,1:T}|\boldsymbol{{\Greekmath 0112}}\right)= \prod_{i=1}^{N}m_{{\Greekmath 010F},s,1}^{\boldsymbol{{\Greekmath 0112}}}\left(h_{s,1}^{i}\right)\prod_{t=2}^{T}\left\{ \mathcal{M}_{{\Greekmath 010F}}\left(\boldsymbol{a}_{{\Greekmath 010F},s,t-1}^{1:N}| \overline{w}_{{\Greekmath 010F},s,t-1}^{1:N}\right)\prod_{i=1}^{N}m_{{\Greekmath 010F},s,t}^{\boldsymbol{{\Greekmath 0112}}}\left(h_{s,t}^{i}|h_{s,t-1}^{a_{{\Greekmath 010F},s,t-1}^{i}}\right)\right\}, \label{eq:idio} \end{equation} for $s=1,...,S$.
Next, for each $k=1,...,K$, we define an index $J_{f,k}=j$, trace back its ancestral lineage $b_{f,k,1:T}^{j}$ $\left(b_{f,k,T}^{j}=j,b_{f,k,t-1}^{j}=a_{f,k,t-1}^{b_{f,k,t}^{j}}\right)$, and select the particle trajectory $\boldsymbol{{\Greekmath 0115}}_{k,1:T}^j=\left({\Greekmath 0115}_{k,1}^{b_{f,k,1}^{j}},...,{\Greekmath 0115}_{k,T}^{b_{f,k,T}^{j}}\right)$. Similarly, for each $s=1,...,S$, we define an index $J_{{\Greekmath 010F},s}=j$, trace back its ancestral lineage $b_{{\Greekmath 010F},s,1:T}^{j}$ $\left(b_{{\Greekmath 010F},s,T}^{j}=j,b_{{\Greekmath 010F},s,t-1}^{j}=a_{{\Greekmath 010F},s,t-1}^{b_{{\Greekmath 010F},s,t}^{j}}\right)$, and select the particle trajectory $\boldsymbol{h}_{s,1:T}^j=\left(h_{s,1}^{b_{{\Greekmath 010F},s,1}^{j}},...,h_{s,T}^{b_{{\Greekmath 010F},s,T}^{j}}\right)$.
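The ancestral-tracing recursion just described, $b_{f,k,T}^{j}=j$ and $b_{f,k,t-1}^{j}=a_{f,k,t-1}^{b_{f,k,t}^{j}}$, can be sketched as follows; the array layout matches the particle-filter sketch above and is an assumption of the example.
\begin{verbatim}
import numpy as np

def trace_ancestry(j, a, x):
    """Recover the lineage b_{1:T} and the selected trajectory x_{1:T}^{j}
    from a terminal index j, ancestor indices a (row t holds a_{t-1}^{1:N})
    and particles x[t, i]."""
    T = x.shape[0]
    b = np.empty(T, dtype=int)
    b[T - 1] = j
    for t in range(T - 1, 0, -1):
        b[t - 1] = a[t, b[t]]
    return b, x[np.arange(T), b]
\end{verbatim}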
The augmented target density of the factor model is defined as \begin{multline} \tilde{{\Greekmath 0119}}^{N}\left(\boldsymbol{U}_{f,1:K,1:T}, \boldsymbol{U}_{{\Greekmath 010F},1:S,1:T}, \boldsymbol{J}_{f},\boldsymbol{J}_{{\Greekmath 010F}},\boldsymbol{{\Greekmath 0112}}\right):=\\ \frac{{\Greekmath 0119}\left(\boldsymbol{{\Greekmath 0115}}_{1:K,1:T}^{\boldsymbol{J}_{f}},\boldsymbol{h}_{1:S,1:T}^{\boldsymbol{J}_{{\Greekmath 010F}}},\boldsymbol{{\Greekmath 0112}}\right)}{N^{T\left(K+S\right)}} \prod_{k=1}^{K} \frac{{\Greekmath 0120}_{f,k}\left(\boldsymbol{U}_{f,k,1:T}|\boldsymbol{{\Greekmath 0112}}\right)} {m_{f,k,1}^{\boldsymbol{{\Greekmath 0112}}}\left({\Greekmath 0115}_{k,1}^{b_{f,k,1}}\right)\prod_{t=2}^{T}\overline{w}_{f,k,t-1}^{a_{f,k,t-1}^{b_{f,k,t}}}m_{f,k,t}^{{\Greekmath 0112}}\left({\Greekmath 0115}_{k,t}^{b_{f,k,t}}|{\Greekmath 0115}_{k,t-1}^{a_{f,k,t-1}^{b_{f,k,t}}}\right)}\\ \prod_{s=1}^{S} \frac{{\Greekmath 0120}_{{\Greekmath 010F},s}\left(\boldsymbol{U}_{{\Greekmath 010F},s,1:T}|\boldsymbol{{\Greekmath 0112}}\right)} {m_{{\Greekmath 010F},s,1}^{\boldsymbol{{\Greekmath 0112}}}\left(h_{s,1}^{b_{{\Greekmath 010F},s,1}}\right)\prod_{t=2}^{T}\overline{w}_{{\Greekmath 010F},s,t-1}^{a_{{\Greekmath 010F},s,t-1}^{b_{{\Greekmath 010F},s,t}}}m_{{\Greekmath 010F},s,t}^{{\Greekmath 0112}}\left(h_{s,t}^{b_{{\Greekmath 010F},s,t}}|h_{s,t-1}^{a_{{\Greekmath 010F},s,t-1}^{b_{{\Greekmath 010F},s,t}}}\right)}. \label{eq:target distribution} \end{multline}
\subsection{Approximating the transition density by an Euler scheme} \label{sss: Euler scheme case}
This section provides an appropriate target density for a factor model with the Euler approximation given in equation~\eqref{eq:ou approximatetransition multiple} or equation~\eqref{eq:GARCH Euler transitiondensity}. We follow the approach in \cite{Lindstenetal2015} and introduce state vectors for $s=1,...,S$ defined as $x_{s,1} = h_{s,1}$ and $x_{s,t} = \left ( h_{s,t}, h_{s,t-1,M-1}, \dots,h_{s,t-1,1} \right )^{\tiny T}$, for $t=2, \dots, T$. The state transition densities are given by \begin{align} f_{s,t}^{{\Greekmath 0112}}(x_{s,t}|x_{s,t-1}) & = \prod_{j=1}^{M}f_{s,t-1,j}^{{\Greekmath 0112}} (h_{s,t-1,j}|h_{s,t-1,j-1})\,\, (t=2, \dots, T), \label{eq: factor gen transition eqn} \end{align} where the densities $f_{s,t,j}^{{\Greekmath 0112}} (h_{s,t,j}|h_{s,t,j-1})$ for $j = 1, \ldots, M$, $t = 1, \ldots, T-1$ and $s=1, \ldots, S$ are defined by equation~\eqref{eq:ou approximatetransition multiple} or equation \eqref{eq:GARCH Euler transitiondensity}. We use the proposal densities \begin{align*} m_{{\Greekmath 010F},s,t}^{{\Greekmath 0112}} (x_{s,t} | x_{s,t-1}) = f_{s,t}^{{\Greekmath 0112}}(x_{s,t} | x_{s,t-1}) \quad (t = 2, \ldots, T \mbox{ and } s = 1, \ldots, S) \end{align*} which can be generated using equation \eqref{eq:ou approximatetransition multiple} or equation \eqref{eq:GARCH Euler transitiondensity}. With these modifications, we use the same construction as Section~\ref{sss: exact transition density case}.
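Before listing the modified quantities, the following is a minimal sketch of the Euler sub-step propagation underlying the stacked state $x_{s,t}$. The drift and diffusion functions \texttt{mu} and \texttt{sigma} are generic placeholders for the OU or GARCH diffusion coefficients, and the ordering of the returned vector follows the convention $x_{s,t}=(h_{s,t},h_{s,t-1,M-1},\dots,h_{s,t-1,1})^{T}$; both are assumptions of the sketch rather than a definitive implementation.
\begin{verbatim}
import numpy as np

def euler_substeps(h_prev, M, delta, mu, sigma, rng):
    """Propagate h over one observation interval of length delta with M Euler
    sub-steps and return the stacked state (h_t, h_{t-1,M-1}, ..., h_{t-1,1}),
    so that the transition density factorises over the sub-steps."""
    dt = delta / M
    path = np.empty(M)
    h = h_prev
    for j in range(M):
        h = h + mu(h) * dt + sigma(h) * np.sqrt(dt) * rng.standard_normal()
        path[j] = h
    return path[::-1].copy()
\end{verbatim}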
The modifications give \begin{align} \boldsymbol{U}_{{\Greekmath 010F},1:S,1:T} = \left(\boldsymbol{x}_{1:S,1:T}^{1:N},\boldsymbol{A}_{{\Greekmath 010F}, 1:S,1:T-1}^{1:N}\right), \label{eq:particlesidio Euler} \end{align} \begin{align} {\Greekmath 0120}_{{\Greekmath 010F},s}\left(\boldsymbol{U}_{{\Greekmath 010F},s,1:T}|\boldsymbol{{\Greekmath 0112}}\right) = \prod_{i=1}^{N}m_{{\Greekmath 010F},s,1}^{\boldsymbol{{\Greekmath 0112}}}\left(x_{s,1}^{i}\right)\prod_{t=2}^{T}\left\{ \mathcal{M}_{{\Greekmath 010F}}\left(\boldsymbol{a}_{{\Greekmath 010F},s,t-1}^{1:N}| \overline{w}_{{\Greekmath 010F},s,t-1}^{1:N}\right)\prod_{i=1}^{N}m_{{\Greekmath 010F},s,t}^{\boldsymbol{{\Greekmath 0112}}}\left(x_{s,t}^{i}|x_{s,t-1}^{a_{{\Greekmath 010F},s,t-1}^{i}}\right)\right\}, \label{eq:idio Euler} \end{align} and \begin{multline} \tilde{{\Greekmath 0119}}^{N}\left(\boldsymbol{U}_{f,1:K,1:T}, \boldsymbol{U}_{{\Greekmath 010F},1:S,1:T}, \boldsymbol{J}_{f},\boldsymbol{J}_{{\Greekmath 010F}},\boldsymbol{{\Greekmath 0112}}\right):=\\ \frac{{\Greekmath 0119}\left(\boldsymbol{{\Greekmath 0115}}_{1:K,1:T}^{\boldsymbol{J}_{f}},\boldsymbol{x}_{1:S,1:T}^{\boldsymbol{J}_{{\Greekmath 010F}}},\boldsymbol{{\Greekmath 0112}}\right)}{N^{T\left(K+S\right)}} \prod_{k=1}^{K} \frac{{\Greekmath 0120}_{f,k}\left(\boldsymbol{U}_{f,k,1:T}|\boldsymbol{{\Greekmath 0112}}\right)} {m_{f,k,1}^{\boldsymbol{{\Greekmath 0112}}}\left({\Greekmath 0115}_{k,1}^{b_{f,k,1}}\right)\prod_{t=2}^{T}\overline{w}_{f,k,t-1}^{a_{f,k,t-1}^{b_{f,k,t}}}m_{f,k,t}^{{\Greekmath 0112}}\left({\Greekmath 0115}_{k,t}^{b_{f,k,t}}|{\Greekmath 0115}_{k,t-1}^{a_{f,k,t-1}^{b_{f,k,t}}}\right)} \\ \prod_{s=1}^{S} \frac{{\Greekmath 0120}_{{\Greekmath 010F},s}\left(\boldsymbol{U}_{{\Greekmath 010F},s,1:T}|\boldsymbol{{\Greekmath 0112}}\right)} {m_{{\Greekmath 010F},s,1}^{\boldsymbol{{\Greekmath 0112}}}\left(x_{s,1}^{b_{{\Greekmath 010F},s,1}}\right)\prod_{t=2}^{T}\overline{w}_{{\Greekmath 010F},s,t-1}^{a_{{\Greekmath 010F},s,t-1}^{b_{{\Greekmath 010F},s,t}}}m_{{\Greekmath 010F},s,t}^{{\Greekmath 0112}}\left(x_{s,t}^{b_{{\Greekmath 010F},s,t}}|x_{s,t-1}^{a_{{\Greekmath 010F},s,t-1}^{b_{{\Greekmath 010F},s,t}}}\right)}. \label{eq:target distribution Euler} \end{multline}
\section{PMMH+PG sampling scheme for the factor SV model} \label{ss: sampling scheme for the factor SV model}
Similarly to Section \ref{SS: Empirical results for OU}, we use the following notation to describe the algorithms used in the examples. The basic samplers, as used in Sampling Schemes~\ref{ssch:pmwg} or \ref{ssch:factor SV}, are $\textrm{PMMH}(\cdot)$ and $\textrm{PG}(\cdot)$. These samplers can be used alone or in combination. For example, $\textrm{PMMH}({\Greekmath 0112} )$ means using a PMMH step to sample the parameter vector ${\Greekmath 0112} $; $\textrm{PMMH}({\Greekmath 0112} _{1})+\textrm{PG}({\Greekmath 0112} _{2})$ means sampling ${\Greekmath 0112} _{1}$ in the PMMH step and ${\Greekmath 0112} _{2}$ in the PG step; and $\textrm{PG}({\Greekmath 0112})$ means sampling ${\Greekmath 0112} $ using the PG sampler.
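To make this notation concrete, here is a minimal Python sketch of one sweep of a $\textrm{PMMH}({\Greekmath 0112} _{1})+\textrm{PG}({\Greekmath 0112} _{2})$ sampler. The callables \texttt{particle\_filter}, \texttt{log\_prior1}, \texttt{propose1}, \texttt{pg\_update} and \texttt{csmc\_refresh} are illustrative assumptions of the sketch and do not refer to any particular implementation; the PMMH block uses the simplified acceptance ratio in which only the likelihood estimates $Z$, the prior and the (here symmetric) parameter proposal appear.
\begin{verbatim}
import numpy as np

def pmmh_plus_pg_sweep(theta1, theta2, z_curr, particle_filter, log_prior1,
                       propose1, pg_update, csmc_refresh, rng):
    """One sweep of a PMMH(theta_1)+PG(theta_2) sampler (illustrative sketch).

    particle_filter(theta1, theta2, rng) returns an unbiased likelihood
    estimate z, so the PMMH acceptance ratio only involves z, the prior and
    the theta_1 proposal (assumed symmetric)."""
    # PMMH block for theta_1: propose, rerun the particle filter, accept/reject.
    theta1_prop = propose1(theta1, rng)
    z_prop = particle_filter(theta1_prop, theta2, rng)
    log_alpha = (np.log(z_prop) + log_prior1(theta1_prop)
                 - np.log(z_curr) - log_prior1(theta1))
    if np.log(rng.uniform()) < log_alpha:
        theta1, z_curr = theta1_prop, z_prop
    # PG block for theta_2, conditional on the selected particle trajectory.
    theta2 = pg_update(theta1, theta2, rng)
    # Conditional SMC refresh of the non-selected particles and the index J.
    z_curr = csmc_refresh(theta1, theta2, rng)
    return theta1, theta2, z_curr
\end{verbatim}
In Sampling Scheme~\ref{ssch:factor SV} below, a PMMH block of this form is applied separately to each of the $K+S$ univariate particle filters.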
We illustrate our methods using the $PMMH\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_f^{2}, \boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2}\right)+PG\left(\boldsymbol{{\Greekmath 010C}}, \boldsymbol{f}_{1:T}, \boldsymbol{{\Greekmath 011E}}, \boldsymbol{{\Greekmath 0116}}\right)$ sampler, which we found to give good performance in the empirical studies in Section~\ref{s:simulations}. It is straightforward to modify the sampling scheme for other choices of which parameters to sample with a PMMH step and which to sample with a PG step. Our procedure to determine an efficient sampling scheme is to run the PG algorithm first to identify which parameters have large IACT, or, in some cases, require a large amount of computational time to generate in the PG step. We then generate these parameters in the PMMH step. See, for example, our discussion of the univariate OU model in Section \ref{SS: Empirical results for OU}. In particular, we note that if an Euler approximation is used, then generating any parameter in the OU or GARCH model is very time intensive as it is necessary to determine, store and use the ancestor history of the entire state vector. The sampling schemes for the factor SV model with the closed form transition density given by equation \eqref{eq:exact transition} and the model with the Euler scheme given by equation \eqref{eq:ou approximatetransition multiple} or equation \eqref{eq:GARCH Euler transitiondensity} have the same structure, so Sampling Scheme \ref{ssch:factor SV} is given below in a generic form and the appropriate state space models are used for the different cases; see Sections~\ref{sss: exact transition density case} and \ref{sss: Euler scheme case} for details. We have simplified the conditional distributions in Sampling Scheme~\ref{ssch:factor SV} wherever possible using the conditional independence properties discussed in Section~\ref{sss: Conditional Independence}. The Metropolis-Hastings proposal densities for Sampling scheme \ref{ssch:factor SV} are given in Section \ref{sss:proposal densities}. We use the notation ${\Greekmath 0112} _{-i}:=({\Greekmath 0112} _{1},\ldots ,{\Greekmath 0112} _{i-1},{\Greekmath 0112} _{i+1},\ldots,{\Greekmath 0112}_{p})$, where $p$ is the total number of parameters. \begin{sscheme}[$PMMH\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_f^{2},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2}\right)+PG\left(\boldsymbol{{\Greekmath 010C}}, \boldsymbol{f}_{1:T}, \boldsymbol{{\Greekmath 011E}}, \boldsymbol{{\Greekmath 0116}}\right)$] \label{ssch:factor SV} Given initial values for $U_{f,1:T}$, $U_{{\Greekmath 010F}, 1:T}$, $J_f$, $J_{{\Greekmath 010F}}$ and ${\Greekmath 0112}$, one iteration of the MCMC involves the following steps. 
\begin{enumerate} \item (PMMH sampling), \begin{enumerate} \item For $k=1,...,K$ \begin{enumerate} \item Sample $\left({\Greekmath 011C}_{f,k}^{2*}\right)\sim q_{{\Greekmath 011C}^2_{f,k}}\left(\cdot|\boldsymbol{U}_{f,k,1:T},{\Greekmath 011C}_{f,k}^{2},\boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011C}_{f,k}^{2}}\right)$ \item Sample $\boldsymbol{U}_{f,k,1:T}^{*}\sim {\Greekmath 0120}_{f,k}\left(\cdot|{\Greekmath 011C}_{f,k}^{2*},\boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011C}_{f,k}^{2}}\right)$ \item Sample $J_{f,k}^{*}$ from $\tilde{{\Greekmath 0119}}^{N}\left(\cdot|\boldsymbol{U}_{f,k,1:T}^{*},{\Greekmath 011C}_{f,k}^{2*},\boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011C}_{f,k}^{2}}\right)$ \item Set $\left({\Greekmath 011C}_{f,k}^{2}, \boldsymbol{U}_{f,k,1:T}, J_{f,k}\right) \leftarrow \left({\Greekmath 011C}_{f,k}^{2*}, \boldsymbol{U}_{f,k,1:T}^{*}, J_{f,k}^{*}\right)$ with probability \begin{multline*} {\Greekmath 010B}\left(\boldsymbol{U}_{f,k,1:T},J_{f,k},{\Greekmath 011C}_{f,k}^{2}; \boldsymbol{U}_{f,k,1:T}^{*},J_{f,k}^{*},{\Greekmath 011C}_{f,k}^{2*} | \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011C}_{f,k}^{2}} \right)=\\ 1\wedge\frac{Z\left(U_{f,k,1:T}^{*},{\Greekmath 011C}_{f,k}^{2*}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011C}_{f,k}^{2}}\right)p\left({\Greekmath 011C}_{f,k}^{2*}\right)}{Z\left(U_{f,k,1:T},{\Greekmath 011C}_{f,k}^{2}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011C}_{f,k}^{2}}\right)p\left({\Greekmath 011C}_{f,k}^{2}\right)}\times\frac{q_{{\Greekmath 011C}^2_{f,k}}\left({\Greekmath 011C}_{f,k}^{2}| \boldsymbol{U}_{f,k,1:T}^{*},{\Greekmath 011C}_{f,k}^{2*},\boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011C}_{f,k}^{2}}\right)} {q_{{\Greekmath 011C}^2_{f,k}}\left({\Greekmath 011C}_{f,k}^{2*}|\boldsymbol{U}_{f,k,1:T},{\Greekmath 011C}_{f,k}^{2}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011C}_{f,k}^{2}}\right)}. 
\end{multline*} \end{enumerate} \item For $s=1,...,S$, \begin{enumerate} \item Sample $\left({\Greekmath 010B}_{s}^{*},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2*}\right)\sim q_{{\Greekmath 010B}_s,{\Greekmath 011C}_{{\Greekmath 010F},s}^2}\left(\cdot|\boldsymbol{U}_{{\Greekmath 010F},s,1:T},{\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}\right)$ \item Sample $\boldsymbol{U}_{{\Greekmath 010F},s,1:T}^{*}\sim {\Greekmath 0120}_{{\Greekmath 010F},s} \left(\cdot|{\Greekmath 010B}_{s}^{*},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2*}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}\right)$ \item Sample $J_{{\Greekmath 010F},s}^{*}$ from $\tilde{{\Greekmath 0119}}^{N}\left(\cdot|\boldsymbol{U}_{{\Greekmath 010F},s,1:T}^{*},{\Greekmath 010B}_{s}^{*},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2*}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}\right)$ \item Set $\left({\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}, \boldsymbol{U}_{{\Greekmath 010F},s,1:T}, J_{{\Greekmath 010F},s}\right) \leftarrow \left({\Greekmath 010B}_{s}^{*},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2*}, \boldsymbol{U}_{{\Greekmath 010F},s,1:T}^{*}, J_{{\Greekmath 010F},s}^{*}\right)$ with probability \begin{multline*} {\Greekmath 010B}\left(\boldsymbol{U}_{{\Greekmath 010F},s,1:T},J_{s},\left({\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}\right); \boldsymbol{U}_{{\Greekmath 010F},s,1:T}^{*},J_{{\Greekmath 010F},s}^{*},\left({\Greekmath 010B}_{s}^{*},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2*}\right) | \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}} \right)=\\ 1\wedge\frac{Z\left(U_{{\Greekmath 010F},s,1:T}^{*},{\Greekmath 010B}_{s}^{*},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2*}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}\right) p\left({\Greekmath 010B}_{s}^{*},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2*}\right)}{Z\left(U_{{\Greekmath 010F},s,1:T},{\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 010B}_{s},{\Greekmath 011C}_{s}^{2}}\right)p\left({\Greekmath 010B}_{s},{\Greekmath 011C}_{s}^{2}\right)} \times\frac{q_{{\Greekmath 010B}_s,{\Greekmath 011C}_{{\Greekmath 010F},s}^2}\left({\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}| \boldsymbol{U}_{{\Greekmath 010F},s,1:T}^{*}, {\Greekmath 010B}_{s}^{*},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2*}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}\right)}{q_{{\Greekmath 010B}_s,{\Greekmath 011C}_{{\Greekmath 010F},s}^2}\left({\Greekmath 010B}_{s}^{*},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2*}|\boldsymbol{U}_{{\Greekmath 010F},s,1:T},{\Greekmath 010B}_{s}, {\Greekmath 011C}_{{\Greekmath 010F} s}^{2},\boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}\right)}. 
\end{multline*} \end{enumerate} \end{enumerate} \item (PG sampling) \begin{enumerate} \item Sample $\boldsymbol{{\Greekmath 010C}}|\boldsymbol{{\Greekmath 0115}}_{1:T}^{\boldsymbol{J}_{f}},\boldsymbol{h}_{1:T}^{\boldsymbol{J}_{{\Greekmath 010F}}},\boldsymbol{B}_{f,1:T-1}^{\boldsymbol{J}_f},\boldsymbol{B}_{{\Greekmath 010F},1:T-1}^{\boldsymbol{J}_{{\Greekmath 010F}}},\boldsymbol{J}_f,\boldsymbol{J}_{{\Greekmath 010F}},\boldsymbol{{\Greekmath 0112}}_{-\boldsymbol{{\Greekmath 010C}}},\boldsymbol{y}_{1:T}$ using equation \eqref{eq:Bfactor} in Appendix~\ref{SS: sampling loading matrix}. \item Redraw the diagonal elements of $\boldsymbol{{\Greekmath 010C}}$ through the deep interweaving procedure described in Appendix~\ref{SS: deep interweaving}. This step is necessary to improve the mixing of the factor loading matrix $\boldsymbol{{\Greekmath 010C}}$. \item Sample $\boldsymbol{f}_{1:T}|\boldsymbol{{\Greekmath 0115}}_{1:T}^{\boldsymbol{J}_{f}},\boldsymbol{h}_{1:T}^{\boldsymbol{J}_{{\Greekmath 010F}}},\boldsymbol{B}_{f,1:T-1}^{\boldsymbol{J}_f},\boldsymbol{B}_{{\Greekmath 010F},1:T-1}^{\boldsymbol{J}_{{\Greekmath 010F}}},\boldsymbol{J}_f,\boldsymbol{J}_{{\Greekmath 010F}},\boldsymbol{{\Greekmath 0112}}_{-{\boldsymbol{f}_{1:T}}},\boldsymbol{y}_{1:T}$ using equation~\eqref{eq:factordraws} in Appendix~\ref{SS: sampling latent factors}. \item For $k=1,...,K$ \begin{enumerate} \item Sample ${\Greekmath 011E}_{k}^{*}$ from the proposal $q_{{\Greekmath 011E}_{k}}\left(\cdot|\boldsymbol{{\Greekmath 0115}}_{k,1:T}^{J_{f,k}}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011E}_{k}}\right)$ and set ${\Greekmath 011E}_{k} \leftarrow {\Greekmath 011E}_{k}^{*}$ with probability \begin{align*} 1\land\frac{\tilde{{\Greekmath 0119}}^{N}\left({\Greekmath 011E}_{k}^{*}|\boldsymbol{{\Greekmath 0115}}_{k,1:T}^{J_{f,k}},\boldsymbol{B}_{f,k,1:T-1},J_{f,k},\boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011E}_{k}}\right)}{\tilde{{\Greekmath 0119}}^{N}\left({\Greekmath 011E}_{k}|\boldsymbol{{\Greekmath 0115}}_{k,1:T}^{J_{f,k}},\boldsymbol{B}_{f,k,1:T-1},J_{f,k},\boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011E}_{k}}\right)}\times \frac{q_{{\Greekmath 011E}_{k}}\left({\Greekmath 011E}_{k}|\boldsymbol{{\Greekmath 0115}}_{k,1:T}^{J_{f,k}}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011E}_{k}}\right)}{q_{{\Greekmath 011E}_{k}}\left({\Greekmath 011E}_{k}^{*}|\boldsymbol{{\Greekmath 0115}}_{k,1:T}^{J_{f,k}}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011E}_{k}}\right)}. \end{align*} \item Sample $\boldsymbol{U}_{f,k,1:T}^{\left(-J_{f,k}\right)}\sim\tilde{{\Greekmath 0119}}^{N}\left(\cdot|\boldsymbol{{\Greekmath 0115}}_{k,1:T}^{J_{f,k}},\boldsymbol{B}_{f,k,1:T-1},J_{f,k},\boldsymbol{{\Greekmath 0112}}\right)$ using the conditional sequential Monte Carlo algorithm (CSMC) discussed in Section ~\ref{alg:condsmc}. \item Sample $J_{f,k}\sim\tilde{{\Greekmath 0119}}^{N}\left(\cdot|\boldsymbol{U}_{f,k,1:T},\boldsymbol{{\Greekmath 0112}}\right)$. 
\end{enumerate} \item For $s=1,...,S$, \begin{enumerate} \item Sample ${\Greekmath 0116}_{s}^{*}$ from the proposal $q_{{\Greekmath 0116}_{s}}\left(\cdot|\boldsymbol{h}_{s,1:T}^{J_{{\Greekmath 010F},s}}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 0116}_{s}}\right)$ and set ${\Greekmath 0116}_{s} \leftarrow {\Greekmath 0116}_{s}^{*}$ with probability \begin{align*} 1\land\frac{\tilde{{\Greekmath 0119}}^{N}\left({\Greekmath 0116}_{s}^{*}|\boldsymbol{h}_{s,1:T}^{J_{{\Greekmath 010F},s}},\boldsymbol{B}_{{\Greekmath 010F},s,1:T-1},J_{{\Greekmath 010F},s}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 0116}_{s}}\right)} {\tilde{{\Greekmath 0119}}^{N}\left({\Greekmath 0116}_{s}|\boldsymbol{h}_{s,1:T}^{J_{s}},\boldsymbol{B}_{{\Greekmath 010F},s,1:T-1},J_{{\Greekmath 010F},s}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 0116}_{s}}\right)}\times \frac{q_{{\Greekmath 0116}_{s}}\left({\Greekmath 0116}_{s}|\boldsymbol{h}_{s,1:T}^{J_{{\Greekmath 010F},s}}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 0116}_{s}}\right)}{q_{{\Greekmath 0116}_{s}}\left({\Greekmath 0116}_{s}^{*}|\boldsymbol{h}_{s,1:T}^{J_{{\Greekmath 010F},s}}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 0116}_{s}}\right)} \end{align*} \item Sample $\boldsymbol{U}_{{\Greekmath 010F},s,1:T}^{\left(-J_{{\Greekmath 010F},s}\right)} \sim\tilde{{\Greekmath 0119}}^{N}\left(\cdot| \boldsymbol{h}_{s,1:T}^{J_{s}},\boldsymbol{B}_{{\Greekmath 010F},s,1:T-1},J_{{\Greekmath 010F},s}, \boldsymbol{{\Greekmath 0112}}\right)$ using the conditional sequential Monte Carlo algorithm (CSMC) discussed in Section~\ref{s:CSMC}. \item Sample $J_{{\Greekmath 010F},s}\sim\tilde{{\Greekmath 0119}}^{N}\left(\cdot|\boldsymbol{U}_{{\Greekmath 010F},s,1:T},\boldsymbol{{\Greekmath 0112}}\right)$. \end{enumerate} \end{enumerate} \end{enumerate} \end{sscheme} \subsection{Proposal densities\label{sss:proposal densities}} This section details the proposal densities used in Sampling Scheme \ref{ssch:factor SV} for the exact OU model given by equation \eqref{eq:exact transition}. We will specify other cases such as the Euler evolution given by equation \eqref{eq:ou approximatetransition multiple} and the GARCH diffusion model given by equation \eqref{eq:GARCH Euler transitiondensity} when describing the sampling scheme. \begin{itemize} \item For $k=1, \dots, K$, $q_{{\Greekmath 011C}_{f,k}^{2}}$ is an adaptive random walk. \item For $s=1,\dots, S$, $q_{{\Greekmath 010B}_{s},{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}$ is an adaptive random walk. 
\item For $k=1, \dots, K$, $q_{{\Greekmath 011E}_{k}}\left(\cdot|\boldsymbol{{\Greekmath 0115}}_{k,1:T}^{J_{f,k}}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 011E}_{k}}\right)= N\left(c_{{\Greekmath 011E}_{k}},d_{{\Greekmath 011E}_{k}}\right)$, where \begin{align*} c_{{\Greekmath 011E}_{k}} &=\frac{d_{{\Greekmath 011E}_{k}}}{{\Greekmath 011C}_{f,k}^{2}} \sum_{t=2}^{T}{\Greekmath 0115}_{k,t}{\Greekmath 0115}_{k,t-1},\,\,\, \RIfM@\expandafter\text@\else\expandafter\mbox\fi{and}\,\,\, d_{{\Greekmath 011E}_{k}} =\frac{{\Greekmath 011C}_{f,k}^{2}}{\sum_{t=2}^{T-1}{\Greekmath 0115}_{k,t}^{2}}, \end{align*} \item For $s=1,\dots, S$, $q_{{\Greekmath 0116}_{s}}\left(\cdot|\boldsymbol{h}_{s,1:T}^{J_{{\Greekmath 010F} s}}, \boldsymbol{{\Greekmath 0112}}_{-{\Greekmath 0116}_{s}}\right)= N\left(c_{{\Greekmath 0116}_{s}},d_{{\Greekmath 0116}_{s}}\right)$, where \begin{align*} c_{{\Greekmath 0116}_{s}}&=\frac{d_{{\Greekmath 0116}_{s}}}{{\Greekmath 011C}_{{\Greekmath 010F},s}^2}\bigg ( h_{s,1}\left(2{\Greekmath 010B}_{s}\right)+ \left ( \frac{2{\Greekmath 010B}_{s}}{1-\exp\left(-2{\Greekmath 010B}_{s}\right)}\right) \left ( \sum_{t=2}^{T} \left( h_{s,t}-\exp\left(-{\Greekmath 010B}_{s}\right)h_{s,t}+ \right .\right .\\ & \left . \exp\left(-2{\Greekmath 010B}_{s}\right)h_{s,t-1} -\exp\left(-{\Greekmath 010B}_{s}\right)h_{s,t-1}\right ) \bigg). \\ d_{{\Greekmath 0116}_{s}}& =\frac{{\Greekmath 011C}_{{\Greekmath 010F},s}^{2}}{\left(2{\Greekmath 010B}_{s}\right)+\left(\frac{2{\Greekmath 010B}_{s}} {1-\exp\left(-2{\Greekmath 010B}_{s}\right)}\right)\left(T-1\right)\left(1-2\exp\left(-{\Greekmath 010B}_{s}\right)+ \exp\left(-2{\Greekmath 010B}_{s}\right)\right)^{2}}, \end{align*} \end{itemize} \subsection{Sampling the factor loading matrix $\boldsymbol{{\Greekmath 010C}}$\label{SS: sampling loading matrix}} First, to identify the parameters for the factor loading matrix $\boldsymbol{{\Greekmath 010C}}$, we follow the usual convention and set the upper triangular part of $\boldsymbol{{\Greekmath 010C}}$ to zero (\cite{gewekezhou1996}). This parameterisation imposes an order dependence. Second, the model is also not identified without further constraining either the scale of the $k$th column of $\boldsymbol{{\Greekmath 010C}}$ or the variance of $f_{k,t}$. The usual solution is to set the diagonal elements of the factor loading matrix $\boldsymbol{{\Greekmath 010C}}_{k,k}$ to one, for $k=1,..,K$, while the level ${\Greekmath 0116}_{f,k}$ of the factor volatility ${\Greekmath 0115}_{k,t}$ is modeled to be unknown. However, \cite{Kastner:2017} note that this approach makes the variable ordering dependence stronger. We therefore follow \cite{Kastner:2017} and leave the diagonal elements $\boldsymbol{{\Greekmath 010C}}_{k,k}$ unrestricted and set the level ${\Greekmath 0116}_{f,k}$ of the factor volatility ${\Greekmath 0115}_{k,t}$ to zero for $k=1,...,K$. Let $k_{s}$ denote the number of unrestricted elements in row $s$ of $\boldsymbol {\Greekmath 010C}$ and define \[ \boldsymbol{F}_{s}=\left[\begin{array}{ccc} f_{1,1} & \cdots & f_{k_{s},1}\\ \vdots & & \vdots\\ f_{1,T} & \cdots & f_{k_{s},T} \end{array}\right], \quad \RIfM@\expandafter\text@\else\expandafter\mbox\fi{and} \quad \boldsymbol{\widetilde V}_{s}=\left[\begin{array}{ccc} \exp\left(h_{s,1}\right) & \cdots & 0\\ 0 & \mathrm{d}ots & 0\\ 0 & \cdots & \exp\left(h_{s,T}\right) \end{array}\right]. 
\] We sample the factor loadings $\boldsymbol{{\Greekmath 010C}}_{s,.}=\left({\Greekmath 010C}_{s,1},...,{\Greekmath 010C}_{s,k_{s}}\right)^{\tiny T}$, for $s=1,...,S$, independently for each $s$ using the Gibbs-update \begin{equation} \boldsymbol{{\Greekmath 010C}}_{s,.}|\boldsymbol{f},\boldsymbol{y}_{s,.},\boldsymbol{h}_{s,.}\sim N_{k_{s}}\left(a_{s,T},b_{s,T}\right),\label{eq:Bfactor} \end{equation} where $ b_{s,T}=\left(\boldsymbol{F}_{s}^{\tiny T}\boldsymbol{\widetilde V}_{s}^{-1}\boldsymbol{F}_{s}+I_{k_{s}}\right)^{-1}$ and $ a_{s,T}=b_{s,T}\boldsymbol{F}_{s}^{\tiny T}\boldsymbol{\widetilde V}_{p}^{-1}\boldsymbol{y}_{s,1:T}$. \subsection{Deep Interweaving\label{SS: deep interweaving}} To improve the mixing in the draws of the factor loading matrix we employ the following deep interweaving strategy introduced by \cite{Kastner:2017}. \begin{itemize} \item Determine the vector $\boldsymbol{{\Greekmath 010C}}_{.,k}^{*}$, where ${\Greekmath 010C}_{s,k}^{*}={\Greekmath 010C}_{s,k}^{old}/{\Greekmath 010C}_{k,k}^{old}$ in the $k$th column of the transformed factor loading matrix $\boldsymbol{{\Greekmath 010C}}^{*}$. \item Define $\boldsymbol{{\Greekmath 0115}_{k,.}}^{*}={\boldsymbol {\Greekmath 0115}}_{k,.}^{old}+2\log|{\Greekmath 010C}_{k,k}^{old}|$ and sample ${\Greekmath 010C}_{k,k}^{new}$ from $p\left({\Greekmath 010C}_{k,k}|{\Greekmath 010C}_{.,k}^{*},\boldsymbol{{\Greekmath 0115}}_{k,.}^{*},{\Greekmath 011E}_{k},{\Greekmath 011C}_{f,k}^{2}\right)$. \item Update $\boldsymbol{{\Greekmath 010C}}_{.,k}=\frac{{\Greekmath 010C}_{k,k}^{new}}{{\Greekmath 010C}_{k,k}^{old}}\boldsymbol{{\Greekmath 010C}}_{.,k}^{old}$, $\boldsymbol{f}_{k,.}=\frac{{\Greekmath 010C}_{k,k}^{old}}{{\Greekmath 010C}_{k,k}^{new}}\boldsymbol{f}_{k,.}^{old}$, and ${\boldsymbol {\Greekmath 0115}}_{k,.}={\boldsymbol {\Greekmath 0115}}_{k,.}^{old}+2\log|\frac{{\Greekmath 010C}_{k,k}^{old}}{{\Greekmath 010C}_{k,k}^{new}}|$. \end{itemize} In the deep interweaving representation the scaling parameter ${\Greekmath 010C}_{k,k}$ is sampled indirectly through ${\Greekmath 0116}_{f,k}=\log{\Greekmath 010C}_{k,k}^{2}$, $k=1,...,K$. The implied prior $p\left({\Greekmath 0116}_{f,k}\right)\propto\exp\left({\Greekmath 0116}_{f,k}/2-\exp\left({\Greekmath 0116}_{f,k}\right)/2\right)$ and the density $p\left(\boldsymbol{{\Greekmath 010C}}_{.,k}^{*}|{\Greekmath 0116}_{f,k}\right)\sim N_{k_{l}}\left(0,\exp\left(-{\Greekmath 0116}_{f,k}\right)I_{k_{l}}\right)$ and the likelihood yields the posterior \[ p\left({\Greekmath 0116}_{f,k}|\boldsymbol{{\Greekmath 010C}}_{.,k}^{*},\boldsymbol{{\Greekmath 0115}}_{k,.}^{*},{\Greekmath 011E}_{k},{\Greekmath 011C}_{f,k}^{2}\right)\propto p\left(\boldsymbol{{\Greekmath 0115}}_{k,.}^{*}|{\Greekmath 0116}_{f,k},{\Greekmath 011E}_{k},{\Greekmath 011C}_{f,k}^{2}\right)p\left(\boldsymbol{{\Greekmath 010C}}_{.,k}^{*}|{\Greekmath 0116}_{f,k}\right)p\left({\Greekmath 0116}_{f,k}\right), \] which is not in recognisable form. We draw a proposal for ${\Greekmath 0116}_{f,k}^{prop}$ from $N\left(A,B\right)$ where \[ A=\frac{\sum_{t=2}^{T-1}{\Greekmath 0115}_{k,t}^{*}+\left({\Greekmath 0115}_{k,T}^{*}-{\Greekmath 011E}_{k}{\Greekmath 0115}_{k,1}^{*}\right)/\left(1-{\Greekmath 011E}_{k}\right)}{T-1+1/B_{0}},B=\frac{{\Greekmath 011C}_{f,k}^{2}/\left(1-{\Greekmath 011E}_{k}\right)^{2}}{T-1+1/B_{0}}. 
\] Denoting the current value ${\Greekmath 0116}_{f,k}$ by ${\Greekmath 0116}_{f,k}^{old}$, the new value ${\Greekmath 0116}_{f,k}^{prop}$ is accepted with probability $\min\left(1,R\right)$, where \[ R=\frac{p\left({\Greekmath 0116}_{f,k}^{prop}\right)p\left({\Greekmath 0115}_{k,1}^{*}|{\Greekmath 0116}_{f,k}^{prop},{\Greekmath 011E}_{k},{\Greekmath 011C}_{f,k}^{2}\right)p\left(\boldsymbol{{\Greekmath 010C}}_{.,k}^{*}|{\Greekmath 0116}_{f,k}^{prop}\right)}{p\left({\Greekmath 0116}_{f,k}^{old}\right)p\left({\Greekmath 0115}_{k,1}^{*}|{\Greekmath 0116}_{f,k}^{old},{\Greekmath 011E}_{k},{\Greekmath 011C}_{f,k}^{2}\right)p\left(\boldsymbol{{\Greekmath 010C}}_{.,k}^{*}|{\Greekmath 0116}_{f,k}^{old}\right)}\times\frac{p_{aux}\left({\Greekmath 0116}_{f,k}^{old}|{\Greekmath 011E}_{k},{\Greekmath 011C}_{f,k}^{2}\right)}{p_{aux}\left({\Greekmath 0116}_{f,k}^{prop}|{\Greekmath 011E}_{k},{\Greekmath 011C}_{f,k}^{2}\right)}, \] where \[ p_{aux}\left(\cdot|{\Greekmath 011E}_{k},{\Greekmath 011C}_{f,k}^{2}\right)= N\left(0,B_{0}{\Greekmath 011C}_{f,k}^{2}/\left(1-{\Greekmath 011E}_{k}\right)^{2}\right). \] The constant $B_{0}$ is set to the large value $10^{5}$, as in \cite{Kastner:2017}.
\subsection{Sampling the Latent Factors $\boldsymbol{f}_{1:T}$\label{SS: sampling latent factors}}
After some algebra, we obtain that \begin{align} \left\{ \boldsymbol{f}_{t}\right\} |\boldsymbol{y},\left\{ \boldsymbol{h}_{t}\right\} ,\left\{ \boldsymbol{{\Greekmath 0115}}_{t}\right\},\boldsymbol{{\Greekmath 010C}} & \sim N\left(a_{t},b_{t}\right),\label{eq:factordraws} \end{align} where $b_{t} =\left(\boldsymbol{{\Greekmath 010C}}^{\tiny T}\boldsymbol{V}_{t}^{-1} \boldsymbol{{\Greekmath 010C}}+\boldsymbol{D}_{t}^{-1}\right)^{-1}$ and $a_{t} = b_{t}\boldsymbol{{\Greekmath 010C}}^{\tiny T}\boldsymbol{V}_{t}^{-1}\boldsymbol{y}_{t}$.
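Both conditional draws above are Gaussian. The following is a minimal Python sketch of the two updates, assuming that $\boldsymbol{\widetilde V}_{s}$, $\boldsymbol{V}_{t}$ and $\boldsymbol{D}_{t}$ are the diagonal matrices of variances $\exp(h_{s,t})$, $\exp(h_{t})$ and $\exp({\Greekmath 0115}_{t})$; the treatment of $\boldsymbol{V}_{t}$ and $\boldsymbol{D}_{t}$ and all function names are assumptions of the sketch and do not refer to any particular implementation.
\begin{verbatim}
import numpy as np

def draw_loadings_row(F_s, y_s, h_s, rng):
    """Gibbs draw of the unrestricted loadings in row s: posterior precision
    F_s' Vtilde_s^{-1} F_s + I and mean b_sT F_s' Vtilde_s^{-1} y_s, with
    Vtilde_s = diag(exp(h_s))."""
    v_inv = np.exp(-h_s)
    prec = F_s.T @ (v_inv[:, None] * F_s) + np.eye(F_s.shape[1])
    b_sT = np.linalg.inv(prec)
    a_sT = b_sT @ (F_s.T @ (v_inv * y_s))
    return rng.multivariate_normal(a_sT, b_sT)

def draw_factors_t(beta, y_t, h_t, lam_t, rng):
    """Draw f_t ~ N(a_t, b_t) with V_t = diag(exp(h_t)) and
    D_t = diag(exp(lam_t))."""
    v_inv = np.exp(-h_t)
    prec = beta.T @ (v_inv[:, None] * beta) + np.diag(np.exp(-lam_t))
    b_t = np.linalg.inv(prec)
    a_t = b_t @ (beta.T @ (v_inv * y_t))
    return rng.multivariate_normal(a_t, b_t)
\end{verbatim}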
\section{Tables and figures for the factor stochastic volatility model in Sections \ref{ss:simulationandapplication} and \ref{SS: US stock returns}} \label{S:FSV tables and figures} \begin{table}[H] \caption{Inefficiency factor of $\boldsymbol{{\Greekmath 010C}}$, $\boldsymbol{{\Greekmath 010B}}$, $\boldsymbol{{\Greekmath 0116}}$, $\boldsymbol{{\Greekmath 011C}}^{2}$, $\boldsymbol{{\Greekmath 011E}}$, and $\boldsymbol{{\Greekmath 011C}}_{f}^{2}$ with exact transition density for the Gaussian OU model: Sampler I: $PMMH\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)+PG\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}}\right)$, Sampler $II$: $PGAT\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, sampler III: $PGBS\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$ for simulated data with $T=1000$, $S=20$, and $K=1$, and number of particles $N=500$.\label{tab:Inefficiency-factor-of simulation}} \centering{} \begin{tabular}{cccccccccccccccc} \hline & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III}\tabularnewline \hline {\footnotesize{}${\Greekmath 010C}_{1}$} & {\footnotesize{}$12.55$} & {\footnotesize{}$12.92$} & {\footnotesize{}$13.95$} & {\footnotesize{}${\Greekmath 010B}_{1}$} & {\footnotesize{}$12.64$} & {\footnotesize{}$66.69$} & {\footnotesize{}$39.94$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},1}^{2}$} & {\footnotesize{}$14.70$} & {\footnotesize{}$136.58$} & {\footnotesize{}$99.80$} & {\footnotesize{}${\Greekmath 0116}_{1}$} & {\footnotesize{}$1.29$} & {\footnotesize{}$1.47$} & {\footnotesize{}$1.39$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{2}$} & {\footnotesize{}$12.67$} & {\footnotesize{}$13.03$} & {\footnotesize{}$13.94$} & {\footnotesize{}${\Greekmath 010B}_{2}$} & {\footnotesize{}$11.76$} & {\footnotesize{}$44.67$} & {\footnotesize{}$35.59$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},2}^{2}$} & {\footnotesize{}$14.36$} & {\footnotesize{}$72.64$} & {\footnotesize{}$74.03$} & {\footnotesize{}${\Greekmath 0116}_{2}$} & {\footnotesize{}$1.28$} & {\footnotesize{}$1.43$} & {\footnotesize{}$1.33$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{3}$} & {\footnotesize{}$12.69$} & {\footnotesize{}$13.20$} & {\footnotesize{}$14.17$} & {\footnotesize{}${\Greekmath 010B}_{3}$} & {\footnotesize{}$11.89$} & {\footnotesize{}$64.76$} & {\footnotesize{}$61.08$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},3}^{2}$} & {\footnotesize{}$12.01$} & {\footnotesize{}$92.80$} & {\footnotesize{}$101.64$} & {\footnotesize{}${\Greekmath 0116}_{3}$} & {\footnotesize{}$1.56$} & {\footnotesize{}$1.72$} & {\footnotesize{}$1.59$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{4}$} & {\footnotesize{}$12.53$} & {\footnotesize{}$12.37$} & {\footnotesize{}$13.77$} & {\footnotesize{}${\Greekmath 010B}_{4}$} & {\footnotesize{}$13.13$} & {\footnotesize{}$107.58$} & {\footnotesize{}$59.69$} & {\footnotesize{}${\Greekmath 
011C}_{{\Greekmath 010F},4}^{2}$} & {\footnotesize{}$14.70$} & {\footnotesize{}$283.23$} & {\footnotesize{}$93.35$} & {\footnotesize{}${\Greekmath 0116}_{4}$} & {\footnotesize{}$1.41$} & {\footnotesize{}$1.40$} & {\footnotesize{}$1.33$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{5}$} & {\footnotesize{}$12.66$} & {\footnotesize{}$13.08$} & {\footnotesize{}$13.86$} & {\footnotesize{}${\Greekmath 010B}_{5}$} & {\footnotesize{}$15.21$} & {\footnotesize{}$76.45$} & {\footnotesize{}$35.94$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},5}^{2}$} & {\footnotesize{}$14.56$} & {\footnotesize{}$123.53$} & {\footnotesize{}$81.58$} & {\footnotesize{}${\Greekmath 0116}_{5}$} & {\footnotesize{}$1.29$} & {\footnotesize{}$1.37$} & {\footnotesize{}$1.25$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{6}$} & {\footnotesize{}$12.76$} & {\footnotesize{}$12.89$} & {\footnotesize{}$14.01$} & {\footnotesize{}${\Greekmath 010B}_{6}$} & {\footnotesize{}$14.80$} & {\footnotesize{}$37.25$} & {\footnotesize{}$30.74$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},6}^{2}$} & {\footnotesize{}$14.84$} & {\footnotesize{}$76.76$} & {\footnotesize{}$56.96$} & {\footnotesize{}${\Greekmath 0116}_{6}$} & {\footnotesize{}$1.25$} & {\footnotesize{}$1.29$} & {\footnotesize{}$1.18$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{7}$} & {\footnotesize{}$12.56$} & {\footnotesize{}$12.62$} & {\footnotesize{}$13.72$} & {\footnotesize{}${\Greekmath 010B}_{7}$} & {\footnotesize{}$14.11$} & {\footnotesize{}$27.87$} & {\footnotesize{}$24.29$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},7}^{2}$} & {\footnotesize{}$13.36$} & {\footnotesize{}$58.61$} & {\footnotesize{}$43.39$} & {\footnotesize{}${\Greekmath 0116}_{7}$} & {\footnotesize{}$1.23$} & {\footnotesize{}$1.28$} & {\footnotesize{}$1.18$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{8}$} & {\footnotesize{}$12.85$} & {\footnotesize{}$12.96$} & {\footnotesize{}$13.87$} & {\footnotesize{}${\Greekmath 010B}_{8}$} & {\footnotesize{}$13.65$} & {\footnotesize{}$40.08$} & {\footnotesize{}$19.94$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},8}^{2}$} & {\footnotesize{}$13.37$} & {\footnotesize{}$98.49$} & {\footnotesize{}$42.14$} & {\footnotesize{}${\Greekmath 0116}_{8}$} & {\footnotesize{}$1.24$} & {\footnotesize{}$1.27$} & {\footnotesize{}$1.20$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{9}$} & {\footnotesize{}$12.52$} & {\footnotesize{}$13.11$} & {\footnotesize{}$13.83$} & {\footnotesize{}${\Greekmath 010B}_{9}$} & {\footnotesize{}$13.58$} & {\footnotesize{}$96.90$} & {\footnotesize{}$47.77$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},9}^{2}$} & {\footnotesize{}$15.06$} & {\footnotesize{}$144.72$} & {\footnotesize{}$81.66$} & {\footnotesize{}${\Greekmath 0116}_{9}$} & {\footnotesize{}$1.99$} & {\footnotesize{}$1.86$} & {\footnotesize{}$1.54$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{10}$} & {\footnotesize{}$12.39$} & {\footnotesize{}$12.81$} & {\footnotesize{}$14.05$} & {\footnotesize{}${\Greekmath 010B}_{10}$} & {\footnotesize{}$18.07$} & {\footnotesize{}$23.49$} & {\footnotesize{}$32.13$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},10}^{2}$} & {\footnotesize{}$16.56$} & {\footnotesize{}$58.06$} & {\footnotesize{}$57.03$} & {\footnotesize{}${\Greekmath 0116}_{10}$} & {\footnotesize{}$1.29$} & {\footnotesize{}$1.28$} & {\footnotesize{}$1.23$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{11}$} & {\footnotesize{}$12.80$} & {\footnotesize{}$12.94$} & {\footnotesize{}$14.13$} & 
{\footnotesize{}${\Greekmath 010B}_{11}$} & {\footnotesize{}$17.31$} & {\footnotesize{}$41.43$} & {\footnotesize{}$31.13$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},11}^{2}$} & {\footnotesize{}$14.33$} & {\footnotesize{}$75.79$} & {\footnotesize{}$66.30$} & {\footnotesize{}${\Greekmath 0116}_{11}$} & {\footnotesize{}$1.33$} & {\footnotesize{}$1.37$} & {\footnotesize{}$1.27$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{12}$} & {\footnotesize{}$12.75$} & {\footnotesize{}$13.07$} & {\footnotesize{}$14.22$} & {\footnotesize{}${\Greekmath 010B}_{12}$} & {\footnotesize{}$16.33$} & {\footnotesize{}$30.14$} & {\footnotesize{}$47.93$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},12}^{2}$} & {\footnotesize{}$14.18$} & {\footnotesize{}$53.80$} & {\footnotesize{}$74.84$} & {\footnotesize{}${\Greekmath 0116}_{12}$} & {\footnotesize{}$1.42$} & {\footnotesize{}$1.35$} & {\footnotesize{}$1.31$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{13}$} & {\footnotesize{}$12.78$} & {\footnotesize{}$12.87$} & {\footnotesize{}$14.16$} & {\footnotesize{}${\Greekmath 010B}_{13}$} & {\footnotesize{}$16.24$} & {\footnotesize{}$38.37$} & {\footnotesize{}$27.31$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},13}^{2}$} & {\footnotesize{}$13.67$} & {\footnotesize{}$67.67$} & {\footnotesize{}$47.37$} & {\footnotesize{}${\Greekmath 0116}_{13}$} & {\footnotesize{}$1.25$} & {\footnotesize{}$1.31$} & {\footnotesize{}$1.25$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{14}$} & {\footnotesize{}$12.78$} & {\footnotesize{}$13.04$} & {\footnotesize{}$14.23$} & {\footnotesize{}${\Greekmath 010B}_{14}$} & {\footnotesize{}$14.41$} & {\footnotesize{}$38.38$} & {\footnotesize{}$21.61$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},14}^{2}$} & {\footnotesize{}$15.88$} & {\footnotesize{}$83.16$} & {\footnotesize{}$46.09$} & {\footnotesize{}${\Greekmath 0116}_{14}$} & {\footnotesize{}$1.27$} & {\footnotesize{}$1.30$} & {\footnotesize{}$1.26$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{15}$} & {\footnotesize{}$12.47$} & {\footnotesize{}$12.82$} & {\footnotesize{}$13.80$} & {\footnotesize{}${\Greekmath 010B}_{15}$} & {\footnotesize{}$12.72$} & {\footnotesize{}$34.25$} & {\footnotesize{}$22.16$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},15}^{2}$} & {\footnotesize{}$15.39$} & {\footnotesize{}$60.91$} & {\footnotesize{}$44.90$} & {\footnotesize{}${\Greekmath 0116}_{15}$} & {\footnotesize{}$1.22$} & {\footnotesize{}$1.25$} & {\footnotesize{}$1.19$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{16}$} & {\footnotesize{}$12.91$} & {\footnotesize{}$12.99$} & {\footnotesize{}$14.01$} & {\footnotesize{}${\Greekmath 010B}_{16}$} & {\footnotesize{}$15.19$} & {\footnotesize{}$70.11$} & {\footnotesize{}$42.38$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},16}^{2}$} & {\footnotesize{}$13.60$} & {\footnotesize{}$110.75$} & {\footnotesize{}$66.36$} & {\footnotesize{}${\Greekmath 0116}_{16}$} & {\footnotesize{}$1.40$} & {\footnotesize{}$1.62$} & {\footnotesize{}$1.34$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{17}$} & {\footnotesize{}$12.74$} & {\footnotesize{}$13.11$} & {\footnotesize{}$13.86$} & {\footnotesize{}${\Greekmath 010B}_{17}$} & {\footnotesize{}$11.17$} & {\footnotesize{}$22.16$} & {\footnotesize{}$27.11$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},17}^{2}$} & {\footnotesize{}$11.43$} & {\footnotesize{}$53.60$} & {\footnotesize{}$51.73$} & {\footnotesize{}${\Greekmath 0116}_{17}$} & {\footnotesize{}$1.37$} & 
{\footnotesize{}$1.31$} & {\footnotesize{}$1.21$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{18}$} & {\footnotesize{}$12.58$} & {\footnotesize{}$12.93$} & {\footnotesize{}$13.84$} & {\footnotesize{}${\Greekmath 010B}_{18}$} & {\footnotesize{}$12.74$} & {\footnotesize{}$28.17$} & {\footnotesize{}$28.51$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},18}^{2}$} & {\footnotesize{}$15.66$} & {\footnotesize{}$59.10$} & {\footnotesize{}$75.58$} & {\footnotesize{}${\Greekmath 0116}_{18}$} & {\footnotesize{}$1.33$} & {\footnotesize{}$1.32$} & {\footnotesize{}$1.30$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{19}$} & {\footnotesize{}$12.64$} & {\footnotesize{}$12.81$} & {\footnotesize{}$13.80$} & {\footnotesize{}${\Greekmath 010B}_{19}$} & {\footnotesize{}$12.67$} & {\footnotesize{}$40.38$} & {\footnotesize{}$29.96$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},19}^{2}$} & {\footnotesize{}$15.17$} & {\footnotesize{}$74.87$} & {\footnotesize{}$59.19$} & {\footnotesize{}${\Greekmath 0116}_{19}$} & {\footnotesize{}$1.44$} & {\footnotesize{}$1.57$} & {\footnotesize{}$1.41$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{20}$} & {\footnotesize{}$12.77$} & {\footnotesize{}$13.19$} & {\footnotesize{}$14.08$} & {\footnotesize{}${\Greekmath 010B}_{20}$} & {\footnotesize{}$12.85$} & {\footnotesize{}$27.12$} & {\footnotesize{}$22.34$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},20}^{2}$} & {\footnotesize{}$12.84$} & {\footnotesize{}$73.02$} & {\footnotesize{}$44.80$} & {\footnotesize{}${\Greekmath 0116}_{20}$} & {\footnotesize{}$1.26$} & {\footnotesize{}$1.38$} & {\footnotesize{}$1.30$}\tabularnewline ${\Greekmath 011E}$ & {\footnotesize{}$8.03$} & {\footnotesize{}$20.12$} & {\footnotesize{}$18.62$} & {\footnotesize{}${\Greekmath 011C}_{f,1}^{2}$} & {\footnotesize{}$14.76$} & {\footnotesize{}$73.76$} & {\footnotesize{}$79.14$} & & & & & & & & \tabularnewline \hline \end{tabular} \end{table} \begin{sidewaystable} \caption{Inefficiency factor of $\boldsymbol{{\Greekmath 010C}}$, $\boldsymbol{{\Greekmath 010B}}$, $\boldsymbol{{\Greekmath 0116}}$, $\boldsymbol{{\Greekmath 011C}}^{2}$, $\boldsymbol{{\Greekmath 011E}}$, and $\boldsymbol{{\Greekmath 011C}}_{f}^{2}$ with Euler approximation for state transition density for the Gaussian OU model: Sampler I: $PMMH\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)+PG\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 011E}}\right)$, Sampler $II$: $PGAT\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, sampler III: $PGBS\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$ for simulated data with $T=1000$, $S=20$, and $K=1$, and number of particles $N=1000$.\label{tab:Inefficiency-factor-of simulation-1}} \centering{} \begin{tabular}{cccccccccccccccc} \hline & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III}\tabularnewline \hline {\footnotesize{}${\Greekmath 010C}_{1}$} & 
{\footnotesize{}$13.72$} & {\footnotesize{}$13.67$} & {\footnotesize{}$11.06$} & {\footnotesize{}${\Greekmath 010B}_{1}$} & {\footnotesize{}$12.85$} & {\footnotesize{}$159.79$} & {\footnotesize{}$181.78$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},1}^{2}$} & {\footnotesize{}$13.10$} & {\footnotesize{}$374.25$} & {\footnotesize{}$444.82$} & {\footnotesize{}${\Greekmath 0116}_{1}$} & {\footnotesize{}$13.27$} & {\footnotesize{}$12.92$} & {\footnotesize{}$11.82$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{2}$} & {\footnotesize{}$13.93$} & {\footnotesize{}$13.79$} & {\footnotesize{}$11.23$} & {\footnotesize{}${\Greekmath 010B}_{2}$} & {\footnotesize{}$15.49$} & {\footnotesize{}$92.87$} & {\footnotesize{}$335.05$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},2}^{2}$} & {\footnotesize{}$13.49$} & {\footnotesize{}$340.28$} & {\footnotesize{}$792.88$} & {\footnotesize{}${\Greekmath 0116}_{2}$} & {\footnotesize{}$13.42$} & {\footnotesize{}$11.00$} & {\footnotesize{}$11.98$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{3}$} & {\footnotesize{}$13.87$} & {\footnotesize{}$13.60$} & {\footnotesize{}$11.30$} & {\footnotesize{}${\Greekmath 010B}_{3}$} & {\footnotesize{}$12.43$} & {\footnotesize{}$300.77$} & {\footnotesize{}$272.34$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},3}^{2}$} & {\footnotesize{}$12.46$} & {\footnotesize{}$733.43$} & {\footnotesize{}$682.28$} & {\footnotesize{}${\Greekmath 0116}_{3}$} & {\footnotesize{}$15.41$} & {\footnotesize{}$13.09$} & {\footnotesize{}$13.23$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{4}$} & {\footnotesize{}$14.14$} & {\footnotesize{}$13.48$} & {\footnotesize{}$10.95$} & {\footnotesize{}${\Greekmath 010B}_{4}$} & {\footnotesize{}$13.35$} & {\footnotesize{}$530.99$} & {\footnotesize{}$303.41$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},4}^{2}$} & {\footnotesize{}$13.46$} & {\footnotesize{}$977.93$} & {\footnotesize{}$654.65$} & {\footnotesize{}${\Greekmath 0116}_{4}$} & {\footnotesize{}$13.44$} & {\footnotesize{}$13.87$} & {\footnotesize{}$14.16$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{5}$} & {\footnotesize{}$13.63$} & {\footnotesize{}$13.56$} & {\footnotesize{}$10.95$} & {\footnotesize{}${\Greekmath 010B}_{5}$} & {\footnotesize{}$15.72$} & {\footnotesize{}$93.77$} & {\footnotesize{}$140.44$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},5}^{2}$} & {\footnotesize{}$16.23$} & {\footnotesize{}$514.24$} & {\footnotesize{}$339.73$} & {\footnotesize{}${\Greekmath 0116}_{5}$} & {\footnotesize{}$13.24$} & {\footnotesize{}$11.83$} & {\footnotesize{}$12.87$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{6}$} & {\footnotesize{}$13.84$} & {\footnotesize{}$13.68$} & {\footnotesize{}$11.30$} & {\footnotesize{}${\Greekmath 010B}_{6}$} & {\footnotesize{}$16.81$} & {\footnotesize{}$190.71$} & {\footnotesize{}$152.17$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},6}^{2}$} & {\footnotesize{}$16.20$} & {\footnotesize{}$539.97$} & {\footnotesize{}$418.23$} & {\footnotesize{}${\Greekmath 0116}_{6}$} & {\footnotesize{}$14.00$} & {\footnotesize{}$13.23$} & {\footnotesize{}$13.45$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{7}$} & {\footnotesize{}$13.77$} & {\footnotesize{}$13.69$} & {\footnotesize{}$11.25$} & {\footnotesize{}${\Greekmath 010B}_{7}$} & {\footnotesize{}$17.57$} & {\footnotesize{}$79.74$} & {\footnotesize{}$102.55$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},7}^{2}$} & {\footnotesize{}$13.75$} & {\footnotesize{}$592.05$} & 
{\footnotesize{}$352.65$} & {\footnotesize{}${\Greekmath 0116}_{7}$} & {\footnotesize{}$13.80$} & {\footnotesize{}$10.85$} & {\footnotesize{}$11.77$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{8}$} & {\footnotesize{}$13.87$} & {\footnotesize{}$13.52$} & {\footnotesize{}$11.14$} & {\footnotesize{}${\Greekmath 010B}_{8}$} & {\footnotesize{}$13.33$} & {\footnotesize{}$134.56$} & {\footnotesize{}$136.97$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},8}^{2}$} & {\footnotesize{}$13.99$} & {\footnotesize{}$392.80$} & {\footnotesize{}$376.86$} & {\footnotesize{}${\Greekmath 0116}_{8}$} & {\footnotesize{}$16.46$} & {\footnotesize{}$11.48$} & {\footnotesize{}$11.67$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{9}$} & {\footnotesize{}$13.69$} & {\footnotesize{}$13.39$} & {\footnotesize{}$11.15$} & {\footnotesize{}${\Greekmath 010B}_{9}$} & {\footnotesize{}$13.50$} & {\footnotesize{}$395.91$} & {\footnotesize{}$161.91$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},9}^{2}$} & {\footnotesize{}$14.65$} & {\footnotesize{}$803.36$} & {\footnotesize{}$457.15$} & {\footnotesize{}${\Greekmath 0116}_{9}$} & {\footnotesize{}$16.12$} & {\footnotesize{}$13.72$} & {\footnotesize{}$12.55$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{10}$} & {\footnotesize{}$13.95$} & {\footnotesize{}$13.66$} & {\footnotesize{}$11.19$} & {\footnotesize{}${\Greekmath 010B}_{10}$} & {\footnotesize{}$12.46$} & {\footnotesize{}$128.96$} & {\footnotesize{}$117.10$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},10}^{2}$} & {\footnotesize{}$13.10$} & {\footnotesize{}$408.40$} & {\footnotesize{}$357.97$} & {\footnotesize{}${\Greekmath 0116}_{10}$} & {\footnotesize{}$14.72$} & {\footnotesize{}$11.70$} & {\footnotesize{}$11.94$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{11}$} & {\footnotesize{}$13.99$} & {\footnotesize{}$13.84$} & {\footnotesize{}$11.14$} & {\footnotesize{}${\Greekmath 010B}_{11}$} & {\footnotesize{}$13.55$} & {\footnotesize{}$273.87$} & {\footnotesize{}$98.71$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},11}^{2}$} & {\footnotesize{}$15.56$} & {\footnotesize{}$667.52$} & {\footnotesize{}$402.61$} & {\footnotesize{}${\Greekmath 0116}_{11}$} & {\footnotesize{}$12.55$} & {\footnotesize{}$11.51$} & {\footnotesize{}$12.62$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{12}$} & {\footnotesize{}$13.85$} & {\footnotesize{}$13.78$} & {\footnotesize{}$11.32$} & {\footnotesize{}${\Greekmath 010B}_{12}$} & {\footnotesize{}$16.34$} & {\footnotesize{}$105.64$} & {\footnotesize{}$204.73$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},12}^{2}$} & {\footnotesize{}$16.09$} & {\footnotesize{}$356.37$} & {\footnotesize{}$438.96$} & {\footnotesize{}${\Greekmath 0116}_{12}$} & {\footnotesize{}$12.56$} & {\footnotesize{}$13.00$} & {\footnotesize{}$13.25$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{13}$} & {\footnotesize{}$14.20$} & {\footnotesize{}$13.56$} & {\footnotesize{}$11.13$} & {\footnotesize{}${\Greekmath 010B}_{13}$} & {\footnotesize{}$13.56$} & {\footnotesize{}$262.15$} & {\footnotesize{}$136.41$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},13}^{2}$} & {\footnotesize{}$12.73$} & {\footnotesize{}$511.17$} & {\footnotesize{}$378.67$} & {\footnotesize{}${\Greekmath 0116}_{13}$} & {\footnotesize{}$13.18$} & {\footnotesize{}$14.97$} & {\footnotesize{}$11.28$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{14}$} & {\footnotesize{}$14.12$} & {\footnotesize{}$13.92$} & {\footnotesize{}$11.34$} & {\footnotesize{}${\Greekmath 
010B}_{14}$} & {\footnotesize{}$12.60$} & {\footnotesize{}$188.22$} & {\footnotesize{}$177.73$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},14}^{2}$} & {\footnotesize{}$12.00$} & {\footnotesize{}$530.42$} & {\footnotesize{}$428.24$} & {\footnotesize{}${\Greekmath 0116}_{14}$} & {\footnotesize{}$16.19$} & {\footnotesize{}$12.18$} & {\footnotesize{}$11.69$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{15}$} & {\footnotesize{}$13.65$} & {\footnotesize{}$13.27$} & {\footnotesize{}$11.00$} & {\footnotesize{}${\Greekmath 010B}_{15}$} & {\footnotesize{}$14.79$} & {\footnotesize{}$200.20$} & {\footnotesize{}$162.37$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},15}^{2}$} & {\footnotesize{}$12.79$} & {\footnotesize{}$574.45$} & {\footnotesize{}$578.06$} & {\footnotesize{}${\Greekmath 0116}_{15}$} & {\footnotesize{}$15.09$} & {\footnotesize{}$13.01$} & {\footnotesize{}$12.46$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{16}$} & {\footnotesize{}$13.89$} & {\footnotesize{}$13.89$} & {\footnotesize{}$11.07$} & {\footnotesize{}${\Greekmath 010B}_{16}$} & {\footnotesize{}$14.62$} & {\footnotesize{}$271.96$} & {\footnotesize{}$337.69$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},16}^{2}$} & {\footnotesize{}$15.67$} & {\footnotesize{}$470.91$} & {\footnotesize{}$672.67$} & {\footnotesize{}${\Greekmath 0116}_{16}$} & {\footnotesize{}$13.51$} & {\footnotesize{}$15.99$} & {\footnotesize{}$11.88$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{17}$} & {\footnotesize{}$13.77$} & {\footnotesize{}$13.30$} & {\footnotesize{}$11.07$} & {\footnotesize{}${\Greekmath 010B}_{17}$} & {\footnotesize{}$16.29$} & {\footnotesize{}$139.51$} & {\footnotesize{}$87.63$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},17}^{2}$} & {\footnotesize{}$13.62$} & {\footnotesize{}$467.94$} & {\footnotesize{}$330.15$} & {\footnotesize{}${\Greekmath 0116}_{17}$} & {\footnotesize{}$16.63$} & {\footnotesize{}$12.34$} & {\footnotesize{}$13.24$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{18}$} & {\footnotesize{}$13.71$} & {\footnotesize{}$13.40$} & {\footnotesize{}$10.96$} & {\footnotesize{}${\Greekmath 010B}_{18}$} & {\footnotesize{}$15.69$} & {\footnotesize{}$55.90$} & {\footnotesize{}$107.32$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},18}^{2}$} & {\footnotesize{}$17.08$} & {\footnotesize{}$262.38$} & {\footnotesize{}$317.31$} & {\footnotesize{}${\Greekmath 0116}_{18}$} & {\footnotesize{}$15.03$} & {\footnotesize{}$10.81$} & {\footnotesize{}$11.65$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{19}$} & {\footnotesize{}$13.90$} & {\footnotesize{}$13.69$} & {\footnotesize{}$11.05$} & {\footnotesize{}${\Greekmath 010B}_{19}$} & {\footnotesize{}$15.73$} & {\footnotesize{}$284.70$} & {\footnotesize{}$194.08$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},19}^{2}$} & {\footnotesize{}$14.97$} & {\footnotesize{}$649.26$} & {\footnotesize{}$537.12$} & {\footnotesize{}${\Greekmath 0116}_{19}$} & {\footnotesize{}$15.39$} & {\footnotesize{}$13.53$} & {\footnotesize{}$11.72$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{20}$} & {\footnotesize{}$13.86$} & {\footnotesize{}$13.61$} & {\footnotesize{}$11.21$} & {\footnotesize{}${\Greekmath 010B}_{20}$} & {\footnotesize{}$13.76$} & {\footnotesize{}$311.20$} & {\footnotesize{}$125.72$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},20}^{2}$} & {\footnotesize{}$14.99$} & {\footnotesize{}$667.49$} & {\footnotesize{}$331.18$} & {\footnotesize{}${\Greekmath 0116}_{20}$} & {\footnotesize{}$14.64$} & 
{\footnotesize{}$16.43$} & {\footnotesize{}$15.96$}\tabularnewline ${\Greekmath 011E}$ & {\footnotesize{}$7.11$} & {\footnotesize{}$20.88$} & {\footnotesize{}$17.01$} & {\footnotesize{}${\Greekmath 011C}_{f,1}^{2}$} & {\footnotesize{}$12.66$} & {\footnotesize{}$78.23$} & {\footnotesize{}$67.92$} & & & & & & & & \tabularnewline \hline \end{tabular} \end{sidewaystable} \begin{sidewaystable} \caption{Inefficiency factors of $\boldsymbol{{\Greekmath 010C}}$, $\boldsymbol{{\Greekmath 010B}}$, $\boldsymbol{{\Greekmath 0116}}$, $\boldsymbol{{\Greekmath 011C}}^{2}$, $\boldsymbol{{\Greekmath 011E}}$, and $\boldsymbol{{\Greekmath 011C}}_{f}^{2}$ with exact transition density for the Gaussian OU model: Sampler I: $PMMH\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)+PG\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}}\right)$, Sampler $II$: $PGAT\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, sampler III: $PGBS\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$ for US stock returns data with $T=1000$, $S=20$, and $K=1$, and number of particles $N=500$.\label{tab:Inefficiency-factor-of real data}} \centering{} \begin{tabular}{cccccccccccccccc} \hline & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III}\tabularnewline \hline {\footnotesize{}${\Greekmath 010C}_{1}$} & {\footnotesize{}$2.18$} & {\footnotesize{}$2.05$} & {\footnotesize{}$1.91$} & {\footnotesize{}${\Greekmath 010B}_{1}$} & {\footnotesize{}$14.21$} & {\footnotesize{}$219.61$} & {\footnotesize{}$113.66$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},1}^{2}$} & {\footnotesize{}$14.37$} & {\footnotesize{}$260.88$} & {\footnotesize{}$129.79$} & {\footnotesize{}${\Greekmath 0116}_{1}$} & {\footnotesize{}$2.11$} & {\footnotesize{}$4.50$} & {\footnotesize{}$2.84$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{2}$} & {\footnotesize{}$1.68$} & {\footnotesize{}$1.85$} & {\footnotesize{}$1.90$} & {\footnotesize{}${\Greekmath 010B}_{2}$} & {\footnotesize{}$11.87$} & {\footnotesize{}$35.87$} & {\footnotesize{}$40.80$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},2}^{2}$} & {\footnotesize{}$12.29$} & {\footnotesize{}$68.34$} & {\footnotesize{}$70.17$} & {\footnotesize{}${\Greekmath 0116}_{2}$} & {\footnotesize{}$1.20$} & {\footnotesize{}$1.42$} & {\footnotesize{}$1.18$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{3}$} & {\footnotesize{}$1.80$} & {\footnotesize{}$1.76$} & {\footnotesize{}$1.70$} & {\footnotesize{}${\Greekmath 010B}_{3}$} & {\footnotesize{}$13.04$} & {\footnotesize{}$62.04$} & {\footnotesize{}$89.69$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},3}^{2}$} & {\footnotesize{}$13.23$} & {\footnotesize{}$110.88$} & {\footnotesize{}$157.46$} & {\footnotesize{}${\Greekmath 0116}_{3}$} & {\footnotesize{}$2.39$} & {\footnotesize{}$2.66$} & {\footnotesize{}$2.36$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{4}$} & {\footnotesize{}$1.79$} & 
{\footnotesize{}$1.76$} & {\footnotesize{}$1.83$} & {\footnotesize{}${\Greekmath 010B}_{4}$} & {\footnotesize{}$14.22$} & {\footnotesize{}$66.24$} & {\footnotesize{}$51.79$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},4}^{2}$} & {\footnotesize{}$14.99$} & {\footnotesize{}$122.26$} & {\footnotesize{}$88.17$} & {\footnotesize{}${\Greekmath 0116}_{4}$} & {\footnotesize{}$1.77$} & {\footnotesize{}$1.83$} & {\footnotesize{}$1.50$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{5}$} & {\footnotesize{}$1.87$} & {\footnotesize{}$1.76$} & {\footnotesize{}$1.69$} & {\footnotesize{}${\Greekmath 010B}_{5}$} & {\footnotesize{}$18.44$} & {\footnotesize{}$466.48$} & {\footnotesize{}$136.77$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},5}^{2}$} & {\footnotesize{}$17.14$} & {\footnotesize{}$682.49$} & {\footnotesize{}$167.35$} & {\footnotesize{}${\Greekmath 0116}_{5}$} & {\footnotesize{}$2.97$} & {\footnotesize{}$3.57$} & {\footnotesize{}$1.91$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{6}$} & {\footnotesize{}$1.66$} & {\footnotesize{}$1.74$} & {\footnotesize{}$1.67$} & {\footnotesize{}${\Greekmath 010B}_{6}$} & {\footnotesize{}$17.31$} & {\footnotesize{}$113.00$} & {\footnotesize{}$112.08$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},6}^{2}$} & {\footnotesize{}$19.29$} & {\footnotesize{}$202.66$} & {\footnotesize{}$258.42$} & {\footnotesize{}${\Greekmath 0116}_{6}$} & {\footnotesize{}$4.88$} & {\footnotesize{}$5.94$} & {\footnotesize{}$4.11$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{7}$} & {\footnotesize{}$1.61$} & {\footnotesize{}$1.67$} & {\footnotesize{}$1.66$} & {\footnotesize{}${\Greekmath 010B}_{7}$} & {\footnotesize{}$11.41$} & {\footnotesize{}$52.72$} & {\footnotesize{}$64.09$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},7}^{2}$} & {\footnotesize{}$14.00$} & {\footnotesize{}$91.79$} & {\footnotesize{}$92.67$} & {\footnotesize{}${\Greekmath 0116}_{7}$} & {\footnotesize{}$1.87$} & {\footnotesize{}$1.79$} & {\footnotesize{}$1.86$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{8}$} & {\footnotesize{}$1.82$} & {\footnotesize{}$1.93$} & {\footnotesize{}$1.70$} & {\footnotesize{}${\Greekmath 010B}_{8}$} & {\footnotesize{}$18.71$} & {\footnotesize{}$86.37$} & {\footnotesize{}$45.28$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},8}^{2}$} & {\footnotesize{}$20.57$} & {\footnotesize{}$145.71$} & {\footnotesize{}$76.37$} & {\footnotesize{}${\Greekmath 0116}_{8}$} & {\footnotesize{}$2.43$} & {\footnotesize{}$3.41$} & {\footnotesize{}$1.80$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{9}$} & {\footnotesize{}$1.89$} & {\footnotesize{}$1.96$} & {\footnotesize{}$1.74$} & {\footnotesize{}${\Greekmath 010B}_{9}$} & {\footnotesize{}$12.97$} & {\footnotesize{}$80.73$} & {\footnotesize{}$136.71$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},9}^{2}$} & {\footnotesize{}$14.23$} & {\footnotesize{}$116.44$} & {\footnotesize{}$158.23$} & {\footnotesize{}${\Greekmath 0116}_{9}$} & {\footnotesize{}$2.27$} & {\footnotesize{}$2.77$} & {\footnotesize{}$3.30$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{10}$} & {\footnotesize{}$1.65$} & {\footnotesize{}$1.73$} & {\footnotesize{}$1.66$} & {\footnotesize{}${\Greekmath 010B}_{10}$} & {\footnotesize{}$15.25$} & {\footnotesize{}$119.34$} & {\footnotesize{}$124.61$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},10}^{2}$} & {\footnotesize{}$12.54$} & {\footnotesize{}$106.68$} & {\footnotesize{}$128.63$} & {\footnotesize{}${\Greekmath 0116}_{10}$} & 
{\footnotesize{}$6.21$} & {\footnotesize{}$7.57$} & {\footnotesize{}$6.70$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{11}$} & {\footnotesize{}$1.63$} & {\footnotesize{}$1.74$} & {\footnotesize{}$1.67$} & {\footnotesize{}${\Greekmath 010B}_{11}$} & {\footnotesize{}$14.66$} & {\footnotesize{}$65.71$} & {\footnotesize{}$69.71$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},11}^{2}$} & {\footnotesize{}$14.44$} & {\footnotesize{}$121.39$} & {\footnotesize{}$83.53$} & {\footnotesize{}${\Greekmath 0116}_{11}$} & {\footnotesize{}$3.24$} & {\footnotesize{}$5.57$} & {\footnotesize{}$2.84$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{12}$} & {\footnotesize{}$1.65$} & {\footnotesize{}$1.89$} & {\footnotesize{}$1.69$} & {\footnotesize{}${\Greekmath 010B}_{12}$} & {\footnotesize{}$17.47$} & {\footnotesize{}$433.51$} & {\footnotesize{}$97.20$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},12}^{2}$} & {\footnotesize{}$16.20$} & {\footnotesize{}$545.21$} & {\footnotesize{}$146.63$} & {\footnotesize{}${\Greekmath 0116}_{12}$} & {\footnotesize{}$3.36$} & {\footnotesize{}$5.94$} & {\footnotesize{}$2.54$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{13}$} & {\footnotesize{}$1.94$} & {\footnotesize{}$2.02$} & {\footnotesize{}$1.92$} & {\footnotesize{}${\Greekmath 010B}_{13}$} & {\footnotesize{}$13.50$} & {\footnotesize{}$151.20$} & {\footnotesize{}$112.64$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},13}^{2}$} & {\footnotesize{}$13.49$} & {\footnotesize{}$189.17$} & {\footnotesize{}$145.44$} & {\footnotesize{}${\Greekmath 0116}_{13}$} & {\footnotesize{}$2.74$} & {\footnotesize{}$3.19$} & {\footnotesize{}$2.19$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{14}$} & {\footnotesize{}$1.66$} & {\footnotesize{}$1.79$} & {\footnotesize{}$1.60$} & {\footnotesize{}${\Greekmath 010B}_{14}$} & {\footnotesize{}$14.48$} & {\footnotesize{}$70.44$} & {\footnotesize{}$74.94$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},14}^{2}$} & {\footnotesize{}$14.11$} & {\footnotesize{}$146.32$} & {\footnotesize{}$121.04$} & {\footnotesize{}${\Greekmath 0116}_{14}$} & {\footnotesize{}$2.01$} & {\footnotesize{}$2.06$} & {\footnotesize{}$1.73$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{15}$} & {\footnotesize{}$1.62$} & {\footnotesize{}$1.82$} & {\footnotesize{}$1.45$} & {\footnotesize{}${\Greekmath 010B}_{15}$} & {\footnotesize{}$13.08$} & {\footnotesize{}$126.39$} & {\footnotesize{}$291.78$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},15}^{2}$} & {\footnotesize{}$14.80$} & {\footnotesize{}$148.03$} & {\footnotesize{}$382.86$} & {\footnotesize{}${\Greekmath 0116}_{15}$} & {\footnotesize{}$2.20$} & {\footnotesize{}$2.66$} & {\footnotesize{}$2.11$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{16}$} & {\footnotesize{}$1.69$} & {\footnotesize{}$1.76$} & {\footnotesize{}$1.83$} & {\footnotesize{}${\Greekmath 010B}_{16}$} & {\footnotesize{}$11.58$} & {\footnotesize{}$133.17$} & {\footnotesize{}$39.94$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},16}^{2}$} & {\footnotesize{}$11.64$} & {\footnotesize{}$210.38$} & {\footnotesize{}$99.40$} & {\footnotesize{}${\Greekmath 0116}_{16}$} & {\footnotesize{}$1.54$} & {\footnotesize{}$1.54$} & {\footnotesize{}$1.55$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{17}$} & {\footnotesize{}$2.12$} & {\footnotesize{}$2.54$} & {\footnotesize{}$1.95$} & {\footnotesize{}${\Greekmath 010B}_{17}$} & {\footnotesize{}$14.52$} & {\footnotesize{}$39.97$} & {\footnotesize{}$30.94$} & 
{\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},17}^{2}$} & {\footnotesize{}$15.65$} & {\footnotesize{}$94.23$} & {\footnotesize{}$54.03$} & {\footnotesize{}${\Greekmath 0116}_{17}$} & {\footnotesize{}$1.30$} & {\footnotesize{}$1.25$} & {\footnotesize{}$1.24$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{18}$} & {\footnotesize{}$1.94$} & {\footnotesize{}$2.04$} & {\footnotesize{}$1.93$} & {\footnotesize{}${\Greekmath 010B}_{18}$} & {\footnotesize{}$15.24$} & {\footnotesize{}$51.58$} & {\footnotesize{}$40.02$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},18}^{2}$} & {\footnotesize{}$17.46$} & {\footnotesize{}$105.41$} & {\footnotesize{}$70.14$} & {\footnotesize{}${\Greekmath 0116}_{18}$} & {\footnotesize{}$1.36$} & {\footnotesize{}$1.51$} & {\footnotesize{}$1.36$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{19}$} & {\footnotesize{}$1.80$} & {\footnotesize{}$1.92$} & {\footnotesize{}$1.73$} & {\footnotesize{}${\Greekmath 010B}_{19}$} & {\footnotesize{}$15.14$} & {\footnotesize{}$36.14$} & {\footnotesize{}$28.02$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},19}^{2}$} & {\footnotesize{}$13.73$} & {\footnotesize{}$81.59$} & {\footnotesize{}$68.35$} & {\footnotesize{}${\Greekmath 0116}_{19}$} & {\footnotesize{}$1.28$} & {\footnotesize{}$1.48$} & {\footnotesize{}$1.37$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{20}$} & {\footnotesize{}$1.87$} & {\footnotesize{}$1.81$} & {\footnotesize{}$1.73$} & {\footnotesize{}${\Greekmath 010B}_{20}$} & {\footnotesize{}$14.52$} & {\footnotesize{}$33.78$} & {\footnotesize{}$28.57$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},20}^{2}$} & {\footnotesize{}$17.10$} & {\footnotesize{}$72.67$} & {\footnotesize{}$55.28$} & {\footnotesize{}${\Greekmath 0116}_{20}$} & {\footnotesize{}$1.27$} & {\footnotesize{}$1.51$} & {\footnotesize{}$1.22$}\tabularnewline ${\Greekmath 011E}$ & {\footnotesize{}$8.77$} & {\footnotesize{}$25.64$} & {\footnotesize{}$20.05$} & {\footnotesize{}${\Greekmath 011C}_{f,1}^{2}$} & {\footnotesize{}$14.24$} & {\footnotesize{}$55.08$} & {\footnotesize{}$48.92$} & & & & & & & & \tabularnewline \hline \end{tabular} \end{sidewaystable} Table~\ref{tab:Inefficiency-factor-of real data} gives the inefficiency factors of $\boldsymbol{{\Greekmath 010C}}$, $\boldsymbol{{\Greekmath 010B}}$, $\boldsymbol{{\Greekmath 0116}}$, $\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2}$, $\boldsymbol{{\Greekmath 011E}}$, and $\boldsymbol{{\Greekmath 011C}}_{f}^{2}$ with the exact transition density for the Gaussian OU model for the three samplers: Sampler I: $PMMH\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)+PG\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}}\right)$, Sampler $II$: $PGAT\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, sampler III: $PGBS\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$ for US stock returns data with $T=1000$, $S=20$, and $K=1$, and with the number of particles $N=500$. 
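For orientation, the inefficiency factors reported in these tables are commonly understood as estimates of the integrated autocorrelation time of the chain of posterior draws; under this standard convention, for a scalar parameter $\theta$ with lag-$j$ autocorrelation $\rho_{j}(\theta)$,
\begin{equation*}
\mathrm{IF}(\theta)\;=\;1+2\sum_{j=1}^{\infty}\rho_{j}(\theta),
\end{equation*}
so that $\mathrm{IF}(\theta)=1$ corresponds to i.i.d.\ draws and larger values indicate slower mixing. This definition is recalled here only for reference; the estimator used to produce the tables may differ in its truncation rule.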
Table~\ref{tab:Inefficiency-factor-of real data-1} gives the inefficiency factors of $\boldsymbol{{\Greekmath 010C}}$, $\boldsymbol{{\Greekmath 010B}}$, $\boldsymbol{{\Greekmath 0116}}$, $\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2}$, $\boldsymbol{{\Greekmath 011E}}$, and $\boldsymbol{{\Greekmath 011C}}_{f}^{2}$ with the approximate Euler based transition density for the Gaussian OU model, for the three samplers: Sampler I: $PMMH\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)+PG\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}}\right)$, Sampler $II$: $PGAT\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, Sampler $III$: $PGBS\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}_{{\Greekmath 010F}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$ for US stock returns data with $T=1000$, $S=20$, and $K=1$, and with the number of particles $N=1000$. \begin{sidewaystable} \caption{Inefficiency factors of $\boldsymbol{{\Greekmath 010C}}$, $\boldsymbol{{\Greekmath 010B}}$, $\boldsymbol{{\Greekmath 0116}}$, $\boldsymbol{{\Greekmath 011C}}^{2}$, $\boldsymbol{{\Greekmath 011E}}$, and $\boldsymbol{{\Greekmath 011C}}_{f}^{2}$ with an Euler approximation for the state transition densities for the Gaussian OU model: Sampler I: $PMMH\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)+PG\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 011E}}\right)$, Sampler $II$: $PGAT\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, sampler III: $PGBS\left(\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$ for US stock returns data with $T=1000$, $S=20$, and $K=1$, and number of particles $N=1000$.\label{tab:Inefficiency-factor-of real data-1}} \centering{} \begin{tabular}{cccccccccccccccc} \hline & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III}\tabularnewline \hline {\footnotesize{}${\Greekmath 010C}_{1}$} & {\footnotesize{}$2.01$} & {\footnotesize{}$2.18$} & {\footnotesize{}$1.89$} & {\footnotesize{}${\Greekmath 010B}_{1}$} & {\footnotesize{}$15.52$} & {\footnotesize{}$559.41$} & {\footnotesize{}$723.77$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},1}^{2}$} & {\footnotesize{}$15.33$} & {\footnotesize{}$787.45$} & {\footnotesize{}$977.73$} & {\footnotesize{}${\Greekmath 0116}_{1}$} & {\footnotesize{}$11.99$} & {\footnotesize{}$23.88$} & {\footnotesize{}$18.17$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{2}$} & {\footnotesize{}$1.86$} & {\footnotesize{}$1.84$} & {\footnotesize{}$1.83$} & {\footnotesize{}${\Greekmath 010B}_{2}$} & 
{\footnotesize{}$16.40$} & {\footnotesize{}$554.76$} & {\footnotesize{}$186.87$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},2}^{2}$} & {\footnotesize{}$14.35$} & {\footnotesize{}$914.39$} & {\footnotesize{}$475.00$} & {\footnotesize{}${\Greekmath 0116}_{2}$} & {\footnotesize{}$15.46$} & {\footnotesize{}$12.34$} & {\footnotesize{}$11.59$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{3}$} & {\footnotesize{}$1.80$} & {\footnotesize{}$1.79$} & {\footnotesize{}$1.73$} & {\footnotesize{}${\Greekmath 010B}_{3}$} & {\footnotesize{}$18.50$} & {\footnotesize{}$342.35$} & {\footnotesize{}$210.83$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},3}^{2}$} & {\footnotesize{}$19.40$} & {\footnotesize{}$688.81$} & {\footnotesize{}$546.86$} & {\footnotesize{}${\Greekmath 0116}_{3}$} & {\footnotesize{}$13.45$} & {\footnotesize{}$12.56$} & {\footnotesize{}$12.66$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{4}$} & {\footnotesize{}$1.79$} & {\footnotesize{}$1.83$} & {\footnotesize{}$1.76$} & {\footnotesize{}${\Greekmath 010B}_{4}$} & {\footnotesize{}$15.40$} & {\footnotesize{}$215.12$} & {\footnotesize{}$111.11$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},4}^{2}$} & {\footnotesize{}$16.44$} & {\footnotesize{}$455.87$} & {\footnotesize{}$326.75$} & {\footnotesize{}${\Greekmath 0116}_{4}$} & {\footnotesize{}$12.11$} & {\footnotesize{}$12.82$} & {\footnotesize{}$12.83$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{5}$} & {\footnotesize{}$1.85$} & {\footnotesize{}$1.75$} & {\footnotesize{}$1.68$} & {\footnotesize{}${\Greekmath 010B}_{5}$} & {\footnotesize{}$15.82$} & {\footnotesize{}$308.00$} & {\footnotesize{}$305.18$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},5}^{2}$} & {\footnotesize{}$19.29$} & {\footnotesize{}$576.04$} & {\footnotesize{}$456.33$} & {\footnotesize{}${\Greekmath 0116}_{5}$} & {\footnotesize{}$21.39$} & {\footnotesize{}$16.70$} & {\footnotesize{}$20.62$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{6}$} & {\footnotesize{}$1.73$} & {\footnotesize{}$1.75$} & {\footnotesize{}$1.74$} & {\footnotesize{}${\Greekmath 010B}_{6}$} & {\footnotesize{}$20.72$} & {\footnotesize{}$494.91$} & {\footnotesize{}$374.78$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},6}^{2}$} & {\footnotesize{}$20.06$} & {\footnotesize{}$995.03$} & {\footnotesize{}$797.53$} & {\footnotesize{}${\Greekmath 0116}_{6}$} & {\footnotesize{}$19.43$} & {\footnotesize{}$26.62$} & {\footnotesize{}$36.67$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{7}$} & {\footnotesize{}$1.78$} & {\footnotesize{}$1.75$} & {\footnotesize{}$1.77$} & {\footnotesize{}${\Greekmath 010B}_{7}$} & {\footnotesize{}$16.07$} & {\footnotesize{}$340.91$} & {\footnotesize{}$464.08$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},7}^{2}$} & {\footnotesize{}$14.49$} & {\footnotesize{}$783.92$} & {\footnotesize{}$754.46$} & {\footnotesize{}${\Greekmath 0116}_{7}$} & {\footnotesize{}$13.71$} & {\footnotesize{}$13.56$} & {\footnotesize{}$14.80$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{8}$} & {\footnotesize{}$1.83$} & {\footnotesize{}$1.83$} & {\footnotesize{}$1.74$} & {\footnotesize{}${\Greekmath 010B}_{8}$} & {\footnotesize{}$19.85$} & {\footnotesize{}$400.60$} & {\footnotesize{}$128.31$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},8}^{2}$} & {\footnotesize{}$23.81$} & {\footnotesize{}$928.38$} & {\footnotesize{}$328.45$} & {\footnotesize{}${\Greekmath 0116}_{8}$} & {\footnotesize{}$18.30$} & {\footnotesize{}$13.56$} & 
{\footnotesize{}$12.63$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{9}$} & {\footnotesize{}$1.76$} & {\footnotesize{}$1.96$} & {\footnotesize{}$1.77$} & {\footnotesize{}${\Greekmath 010B}_{9}$} & {\footnotesize{}$15.19$} & {\footnotesize{}$909.99$} & {\footnotesize{}$546.01$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},9}^{2}$} & {\footnotesize{}$14.84$} & {\footnotesize{}$1215.77$} & {\footnotesize{}$937.28$} & {\footnotesize{}${\Greekmath 0116}_{9}$} & {\footnotesize{}$13.06$} & {\footnotesize{}$19.86$} & {\footnotesize{}$19.14$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{10}$} & {\footnotesize{}$1.74$} & {\footnotesize{}$1.78$} & {\footnotesize{}$1.77$} & {\footnotesize{}${\Greekmath 010B}_{10}$} & {\footnotesize{}$16.96$} & {\footnotesize{}$385.25$} & {\footnotesize{}$236.04$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},10}^{2}$} & {\footnotesize{}$23.59$} & {\footnotesize{}$962.51$} & {\footnotesize{}$716.08$} & {\footnotesize{}${\Greekmath 0116}_{10}$} & {\footnotesize{}$11.92$} & {\footnotesize{}$50.67$} & {\footnotesize{}$35.06$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{11}$} & {\footnotesize{}$1.77$} & {\footnotesize{}$1.78$} & {\footnotesize{}$1.74$} & {\footnotesize{}${\Greekmath 010B}_{11}$} & {\footnotesize{}$18.43$} & {\footnotesize{}$368.53$} & {\footnotesize{}$115.84$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},11}^{2}$} & {\footnotesize{}$23.99$} & {\footnotesize{}$811.02$} & {\footnotesize{}$872.32$} & {\footnotesize{}${\Greekmath 0116}_{11}$} & {\footnotesize{}$13.76$} & {\footnotesize{}$15.12$} & {\footnotesize{}$14.85$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{12}$} & {\footnotesize{}$1.81$} & {\footnotesize{}$1.82$} & {\footnotesize{}$1.77$} & {\footnotesize{}${\Greekmath 010B}_{12}$} & {\footnotesize{}$20.48$} & {\footnotesize{}$521.58$} & {\footnotesize{}$460.67$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},12}^{2}$} & {\footnotesize{}$20.43$} & {\footnotesize{}$771.17$} & {\footnotesize{}$700.72$} & {\footnotesize{}${\Greekmath 0116}_{12}$} & {\footnotesize{}$16.91$} & {\footnotesize{}$20.80$} & {\footnotesize{}$19.88$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{13}$} & {\footnotesize{}$1.81$} & {\footnotesize{}$1.86$} & {\footnotesize{}$1.83$} & {\footnotesize{}${\Greekmath 010B}_{13}$} & {\footnotesize{}$17.79$} & {\footnotesize{}$362.85$} & {\footnotesize{}$548.70$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},13}^{2}$} & {\footnotesize{}$18.43$} & {\footnotesize{}$632.95$} & {\footnotesize{}$707.42$} & {\footnotesize{}${\Greekmath 0116}_{13}$} & {\footnotesize{}$15.76$} & {\footnotesize{}$14.73$} & {\footnotesize{}$19.90$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{14}$} & {\footnotesize{}$1.77$} & {\footnotesize{}$1.79$} & {\footnotesize{}$1.64$} & {\footnotesize{}${\Greekmath 010B}_{14}$} & {\footnotesize{}$15.48$} & {\footnotesize{}$195.27$} & {\footnotesize{}$375.87$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},14}^{2}$} & {\footnotesize{}$17.05$} & {\footnotesize{}$603.04$} & {\footnotesize{}$704.08$} & {\footnotesize{}${\Greekmath 0116}_{14}$} & {\footnotesize{}$14.14$} & {\footnotesize{}$14.75$} & {\footnotesize{}$19.37$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{15}$} & {\footnotesize{}$1.59$} & {\footnotesize{}$1.69$} & {\footnotesize{}$1.57$} & {\footnotesize{}${\Greekmath 010B}_{15}$} & {\footnotesize{}$17.48$} & {\footnotesize{}$485.37$} & {\footnotesize{}$1097.26$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 
010F},15}^{2}$} & {\footnotesize{}$15.58$} & {\footnotesize{}$897.29$} & {\footnotesize{}$1228.99$} & {\footnotesize{}${\Greekmath 0116}_{15}$} & {\footnotesize{}$15.76$} & {\footnotesize{}$18.84$} & {\footnotesize{}$29.16$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{16}$} & {\footnotesize{}$1.80$} & {\footnotesize{}$1.70$} & {\footnotesize{}$1.74$} & {\footnotesize{}${\Greekmath 010B}_{16}$} & {\footnotesize{}$15.94$} & {\footnotesize{}$240.28$} & {\footnotesize{}$211.86$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},16}^{2}$} & {\footnotesize{}$14.50$} & {\footnotesize{}$571.97$} & {\footnotesize{}$434.93$} & {\footnotesize{}${\Greekmath 0116}_{16}$} & {\footnotesize{}$13.40$} & {\footnotesize{}$13.29$} & {\footnotesize{}$13.19$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{17}$} & {\footnotesize{}$2.14$} & {\footnotesize{}$2.12$} & {\footnotesize{}$2.02$} & {\footnotesize{}${\Greekmath 010B}_{17}$} & {\footnotesize{}$16.99$} & {\footnotesize{}$143.03$} & {\footnotesize{}$330.84$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},17}^{2}$} & {\footnotesize{}$17.16$} & {\footnotesize{}$496.86$} & {\footnotesize{}$683.20$} & {\footnotesize{}${\Greekmath 0116}_{17}$} & {\footnotesize{}$15.79$} & {\footnotesize{}$11.49$} & {\footnotesize{}$10.91$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{18}$} & {\footnotesize{}$1.88$} & {\footnotesize{}$1.96$} & {\footnotesize{}$1.87$} & {\footnotesize{}${\Greekmath 010B}_{18}$} & {\footnotesize{}$18.10$} & {\footnotesize{}$225.30$} & {\footnotesize{}$184.31$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},18}^{2}$} & {\footnotesize{}$16.15$} & {\footnotesize{}$518.71$} & {\footnotesize{}$683.36$} & {\footnotesize{}${\Greekmath 0116}_{18}$} & {\footnotesize{}$18.81$} & {\footnotesize{}$11.63$} & {\footnotesize{}$12.63$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{19}$} & {\footnotesize{}$1.84$} & {\footnotesize{}$1.88$} & {\footnotesize{}$1.79$} & {\footnotesize{}${\Greekmath 010B}_{19}$} & {\footnotesize{}$16.61$} & {\footnotesize{}$200.54$} & {\footnotesize{}$70.61$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},19}^{2}$} & {\footnotesize{}$16.24$} & {\footnotesize{}$474.26$} & {\footnotesize{}$276.66$} & {\footnotesize{}${\Greekmath 0116}_{19}$} & {\footnotesize{}$16.64$} & {\footnotesize{}$11.33$} & {\footnotesize{}$8.73$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{20}$} & {\footnotesize{}$1.91$} & {\footnotesize{}$1.86$} & {\footnotesize{}$1.77$} & {\footnotesize{}${\Greekmath 010B}_{20}$} & {\footnotesize{}$13.97$} & {\footnotesize{}$94.55$} & {\footnotesize{}$310.15$} & {\footnotesize{}${\Greekmath 011C}_{{\Greekmath 010F},20}^{2}$} & {\footnotesize{}$16.21$} & {\footnotesize{}$306.76$} & {\footnotesize{}$726.35$} & {\footnotesize{}${\Greekmath 0116}_{20}$} & {\footnotesize{}$17.31$} & {\footnotesize{}$10.68$} & {\footnotesize{}$11.21$}\tabularnewline ${\Greekmath 011E}$ & {\footnotesize{}$8.22$} & {\footnotesize{}$22.73$} & {\footnotesize{}$34.20$} & {\footnotesize{}${\Greekmath 011C}_{f,1}^{2}$} & {\footnotesize{}$12.36$} & {\footnotesize{}$52.16$} & {\footnotesize{}$68.27$} & & & & & & & & \tabularnewline \hline \end{tabular} \end{sidewaystable} Figures~\ref{fig:The-kernel-density alpha real data} and \ref{fig:The-kernel-density tau real data} present the kernel density estimates of marginal posterior densities of four representative ${\Greekmath 010B}$ and ${\Greekmath 011C}_{{\Greekmath 010F}}^2$ respectively for the Gaussian OU model for the US stock returns data. 
The density estimates are obtained from PMMH+PG with both the exact and the approximate (Euler) transition densities, and from PG with the approximate transition densities using ancestral tracing and backward simulation. Both figures show that the two PMMH+PG samplers produce estimates that are close to each other, whereas the PG samplers are much less reliable.
\begin{figure}
\caption{Kernel density estimates of the marginal posterior densities of four representative $\boldsymbol{\alpha}$ parameters for the Gaussian OU model, US stock returns data.\label{fig:The-kernel-density alpha real data}}
\end{figure}
\begin{figure}
\caption{Kernel density estimates of the marginal posterior densities of four representative $\boldsymbol{\tau}_{\epsilon}^{2}$ parameters for the Gaussian OU model, US stock returns data.\label{fig:The-kernel-density tau real data}}
\end{figure}
Table~\ref{tab:Inefficiency-factor-of real data-GARCH} gives the inefficiency factors of $\boldsymbol{\beta}$, $\boldsymbol{\alpha}$, $\boldsymbol{\mu}$, $\boldsymbol{\tau}_{\epsilon}^{2}$, $\boldsymbol{\phi}$, and $\boldsymbol{\tau}_{f}^{2}$ with the approximate Euler-based transition density for the GARCH diffusion model, for the three samplers: Sampler I: $PMMH\left(\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\tau}_{f}^{2},\boldsymbol{\mu}\right)+PG\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\phi}\right)$, Sampler $II$: $PGAT\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$, Sampler $III$: $PGBS\left(\boldsymbol{f}_{1:T},\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\tau}_{\epsilon}^{2},\boldsymbol{\mu},\boldsymbol{\phi},\boldsymbol{\tau}_{f}^{2}\right)$ for US stock returns data with $T=1000$, $S=20$, and $K=1$, and with the number of particles $N=1000$.
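The following minimal sketch shows one common way to estimate the inefficiency factors reported in these tables from a single chain of posterior draws, namely as a truncated integrated autocorrelation time. It is illustrative only: the function name, the simple truncation rule (stop at the first non-positive sample autocorrelation) and the default maximum lag are our own choices and are not necessarily those used to produce the tables in this section.
\begin{verbatim}
import numpy as np

def inefficiency_factor(draws, max_lag=None):
    # Estimate IF = 1 + 2 * sum_j rho_j for a scalar MCMC chain,
    # truncating the sum at the first non-positive autocorrelation.
    x = np.asarray(draws, dtype=float)
    x = x - x.mean()
    n = x.size
    if max_lag is None:
        max_lag = n // 10
    var = np.dot(x, x) / n
    iact = 1.0
    for lag in range(1, max_lag + 1):
        rho = np.dot(x[:-lag], x[lag:]) / (n * var)
        if rho <= 0.0:
            break
        iact += 2.0 * rho
    return iact

# Example: a strongly autocorrelated AR(1) chain has IF >> 1.
rng = np.random.default_rng(0)
chain = np.empty(10000)
chain[0] = 0.0
for t in range(1, chain.size):
    chain[t] = 0.9 * chain[t - 1] + rng.standard_normal()
print(inefficiency_factor(chain))  # roughly (1 + 0.9) / (1 - 0.9) = 19
\end{verbatim}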
\begin{sidewaystable} \caption{Inefficiency factors of $\boldsymbol{{\Greekmath 010C}}$, $\boldsymbol{{\Greekmath 010B}}$, $\boldsymbol{{\Greekmath 0116}}$, $\boldsymbol{{\Greekmath 011C}}^{2}$, $\boldsymbol{{\Greekmath 011E}}$, and $\boldsymbol{{\Greekmath 011C}}_{f}^{2}$ with an Euler approximation for the state transition densities for the GARCH diffusion model: Sampler I: $PG\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 011E}}\right)+PMMH\left(\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, Sampler $II$: $PGAT\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$, sampler III: $PGBS\left(\boldsymbol{f}_{1:T},\boldsymbol{{\Greekmath 010C}},\boldsymbol{{\Greekmath 010B}},\boldsymbol{{\Greekmath 011C}}^{2},\boldsymbol{{\Greekmath 0116}},\boldsymbol{{\Greekmath 011E}},\boldsymbol{{\Greekmath 011C}}_{f}^{2}\right)$ for US stock returns data with $T=1000$, $P=20$, and $K=1$, and number of particles $N=1000$.\label{tab:Inefficiency-factor-of real data-GARCH}} \centering{} \begin{tabular}{cccccccccccccccc} \hline & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III} & & {\footnotesize{}I} & {\footnotesize{}II} & {\footnotesize{}III}\tabularnewline \hline {\footnotesize{}${\Greekmath 010C}_{1}$} & {\footnotesize{}$1.91$} & {\footnotesize{}$2.06$} & {\footnotesize{}$1.88$} & {\footnotesize{}${\Greekmath 010B}_{1}$} & {\footnotesize{}$32.83$} & {\footnotesize{}$197.50$} & {\footnotesize{}$318.60$} & {\footnotesize{}${\Greekmath 011C}_{1}^{2}$} & {\footnotesize{}$48.24$} & {\footnotesize{}$1944.73$} & {\footnotesize{}$1079.01$} & {\footnotesize{}${\Greekmath 0116}_{1}$} & {\footnotesize{}$111.02$} & {\footnotesize{}$207.54$} & {\footnotesize{}$229.44$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{2}$} & {\footnotesize{}$1.73$} & {\footnotesize{}$1.77$} & {\footnotesize{}$1.76$} & {\footnotesize{}${\Greekmath 010B}_{2}$} & {\footnotesize{}$13.31$} & {\footnotesize{}$144.70$} & {\footnotesize{}$135.32$} & {\footnotesize{}${\Greekmath 011C}_{2}^{2}$} & {\footnotesize{}$12.39$} & {\footnotesize{}$2186.34$} & {\footnotesize{}$2205.98$} & {\footnotesize{}${\Greekmath 0116}_{2}$} & {\footnotesize{}$14.57$} & {\footnotesize{}$227.46$} & {\footnotesize{}$130.33$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{3}$} & {\footnotesize{}$1.69$} & {\footnotesize{}$1.76$} & {\footnotesize{}$1.72$} & {\footnotesize{}${\Greekmath 010B}_{3}$} & {\footnotesize{}$13.53$} & {\footnotesize{}$179.77$} & {\footnotesize{}$186.45$} & {\footnotesize{}${\Greekmath 011C}_{3}^{2}$} & {\footnotesize{}$23.59$} & {\footnotesize{}$1794.07$} & {\footnotesize{}$654.43$} & {\footnotesize{}${\Greekmath 0116}_{3}$} & {\footnotesize{}$19.46$} & {\footnotesize{}$212.72$} & {\footnotesize{}$143.36$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{4}$} & {\footnotesize{}$1.70$} & {\footnotesize{}$1.77$} & {\footnotesize{}$1.76$} & {\footnotesize{}${\Greekmath 010B}_{4}$} & {\footnotesize{}$14.94$} & {\footnotesize{}$225.04$} & {\footnotesize{}$157.04$} & {\footnotesize{}${\Greekmath 011C}_{4}^{2}$} & {\footnotesize{}$16.84$} & {\footnotesize{}$3098.27$} & 
{\footnotesize{}$1208.39$} & {\footnotesize{}${\Greekmath 0116}_{4}$} & {\footnotesize{}$23.45$} & {\footnotesize{}$163.81$} & {\footnotesize{}$118.05$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{5}$} & {\footnotesize{}$1.71$} & {\footnotesize{}$1.74$} & {\footnotesize{}$1.71$} & {\footnotesize{}${\Greekmath 010B}_{5}$} & {\footnotesize{}$16.23$} & {\footnotesize{}$420.88$} & {\footnotesize{}$421.26$} & {\footnotesize{}${\Greekmath 011C}_{5}^{2}$} & {\footnotesize{}$14.29$} & {\footnotesize{}$558.61$} & {\footnotesize{}$3257.52$} & {\footnotesize{}${\Greekmath 0116}_{5}$} & {\footnotesize{}$19.34$} & {\footnotesize{}$322.81$} & {\footnotesize{}$243.11$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{6}$} & {\footnotesize{}$1.66$} & {\footnotesize{}$1.72$} & {\footnotesize{}$1.68$} & {\footnotesize{}${\Greekmath 010B}_{6}$} & {\footnotesize{}$18.66$} & {\footnotesize{}$875.82$} & {\footnotesize{}$1166.83$} & {\footnotesize{}${\Greekmath 011C}_{6}^{2}$} & {\footnotesize{}$21.67$} & {\footnotesize{}$1097.21$} & {\footnotesize{}$2746.64$} & {\footnotesize{}${\Greekmath 0116}_{6}$} & {\footnotesize{}$18.93$} & {\footnotesize{}$359.97$} & {\footnotesize{}$638.61$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{7}$} & {\footnotesize{}$1.64$} & {\footnotesize{}$1.73$} & {\footnotesize{}$1.66$} & {\footnotesize{}${\Greekmath 010B}_{7}$} & {\footnotesize{}$14.81$} & {\footnotesize{}$488.09$} & {\footnotesize{}$447.91$} & {\footnotesize{}${\Greekmath 011C}_{7}^{2}$} & {\footnotesize{}$16.79$} & {\footnotesize{}$1932.45$} & {\footnotesize{}$2415.33$} & {\footnotesize{}${\Greekmath 0116}_{7}$} & {\footnotesize{}$35.95$} & {\footnotesize{}$247.08$} & {\footnotesize{}$205.50$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{8}$} & {\footnotesize{}$1.75$} & {\footnotesize{}$1.86$} & {\footnotesize{}$1.72$} & {\footnotesize{}${\Greekmath 010B}_{8}$} & {\footnotesize{}$18.77$} & {\footnotesize{}$180.04$} & {\footnotesize{}$152.92$} & {\footnotesize{}${\Greekmath 011C}_{8}^{2}$} & {\footnotesize{}$17.51$} & {\footnotesize{}$681.34$} & {\footnotesize{}$2236.32$} & {\footnotesize{}${\Greekmath 0116}_{8}$} & {\footnotesize{}$16.08$} & {\footnotesize{}$140.56$} & {\footnotesize{}$231.74$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{9}$} & {\footnotesize{}$1.76$} & {\footnotesize{}$1.79$} & {\footnotesize{}$1.85$} & {\footnotesize{}${\Greekmath 010B}_{9}$} & {\footnotesize{}$23.51$} & {\footnotesize{}$655.71$} & {\footnotesize{}$543.04$} & {\footnotesize{}${\Greekmath 011C}_{9}^{2}$} & {\footnotesize{}$23.17$} & {\footnotesize{}$2465.44$} & {\footnotesize{}$3065.63$} & {\footnotesize{}${\Greekmath 0116}_{9}$} & {\footnotesize{}$147.16$} & {\footnotesize{}$434.49$} & {\footnotesize{}$814.62$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{10}$} & {\footnotesize{}$1.70$} & {\footnotesize{}$1.77$} & {\footnotesize{}$1.75$} & {\footnotesize{}${\Greekmath 010B}_{10}$} & {\footnotesize{}$13.04$} & {\footnotesize{}$1159.77$} & {\footnotesize{}$969.04$} & {\footnotesize{}${\Greekmath 011C}_{10}^{2}$} & {\footnotesize{}$14.04$} & {\footnotesize{}$2013.82$} & {\footnotesize{}$1638.88$} & {\footnotesize{}${\Greekmath 0116}_{10}$} & {\footnotesize{}$17.20$} & {\footnotesize{}$902.78$} & {\footnotesize{}$322.78$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{11}$} & {\footnotesize{}$1.69$} & {\footnotesize{}$1.77$} & {\footnotesize{}$1.74$} & {\footnotesize{}${\Greekmath 010B}_{11}$} & {\footnotesize{}$11.05$} & {\footnotesize{}$298.47$} & {\footnotesize{}$210.95$} & 
{\footnotesize{}${\Greekmath 011C}_{11}^{2}$} & {\footnotesize{}$14.72$} & {\footnotesize{}$1224.84$} & {\footnotesize{}$2551.95$} & {\footnotesize{}${\Greekmath 0116}_{11}$} & {\footnotesize{}$17.49$} & {\footnotesize{}$335.21$} & {\footnotesize{}$216.52$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{12}$} & {\footnotesize{}$1.68$} & {\footnotesize{}$1.86$} & {\footnotesize{}$1.78$} & {\footnotesize{}${\Greekmath 010B}_{12}$} & {\footnotesize{}$19.20$} & {\footnotesize{}$462.64$} & {\footnotesize{}$495.65$} & {\footnotesize{}${\Greekmath 011C}_{12}^{2}$} & {\footnotesize{}$22.52$} & {\footnotesize{}$2865.97$} & {\footnotesize{}$1412.81$} & {\footnotesize{}${\Greekmath 0116}_{12}$} & {\footnotesize{}$49.95$} & {\footnotesize{}$179.47$} & {\footnotesize{}$351.02$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{13}$} & {\footnotesize{}$1.78$} & {\footnotesize{}$1.85$} & {\footnotesize{}$1.86$} & {\footnotesize{}${\Greekmath 010B}_{13}$} & {\footnotesize{}$14.12$} & {\footnotesize{}$232.22$} & {\footnotesize{}$270.89$} & {\footnotesize{}${\Greekmath 011C}_{13}^{2}$} & {\footnotesize{}$13.88$} & {\footnotesize{}$1646.83$} & {\footnotesize{}$2770.24$} & {\footnotesize{}${\Greekmath 0116}_{13}$} & {\footnotesize{}$16.15$} & {\footnotesize{}$230.03$} & {\footnotesize{}$597.87$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{14}$} & {\footnotesize{}$1.61$} & {\footnotesize{}$1.63$} & {\footnotesize{}$1.63$} & {\footnotesize{}${\Greekmath 010B}_{14}$} & {\footnotesize{}$17.59$} & {\footnotesize{}$159.37$} & {\footnotesize{}$337.67$} & {\footnotesize{}${\Greekmath 011C}_{14}^{2}$} & {\footnotesize{}$16.22$} & {\footnotesize{}$2651.10$} & {\footnotesize{}$1083.02$} & {\footnotesize{}${\Greekmath 0116}_{14}$} & {\footnotesize{}$15.34$} & {\footnotesize{}$146.47$} & {\footnotesize{}$227.23$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{15}$} & {\footnotesize{}$1.54$} & {\footnotesize{}$1.54$} & {\footnotesize{}$1.52$} & {\footnotesize{}${\Greekmath 010B}_{15}$} & {\footnotesize{}$13.93$} & {\footnotesize{}$330.76$} & {\footnotesize{}$329.03$} & {\footnotesize{}${\Greekmath 011C}_{15}^{2}$} & {\footnotesize{}$16.10$} & {\footnotesize{}$1551.35$} & {\footnotesize{}$1303.25$} & {\footnotesize{}${\Greekmath 0116}_{15}$} & {\footnotesize{}$30.23$} & {\footnotesize{}$164.37$} & {\footnotesize{}$182.29$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{16}$} & {\footnotesize{}$1.67$} & {\footnotesize{}$1.69$} & {\footnotesize{}$1.62$} & {\footnotesize{}${\Greekmath 010B}_{16}$} & {\footnotesize{}$17.17$} & {\footnotesize{}$352.23$} & {\footnotesize{}$275.77$} & {\footnotesize{}${\Greekmath 011C}_{16}^{2}$} & {\footnotesize{}$15.05$} & {\footnotesize{}$2166.59$} & {\footnotesize{}$1121.20$} & {\footnotesize{}${\Greekmath 0116}_{16}$} & {\footnotesize{}$11.35$} & {\footnotesize{}$141.30$} & {\footnotesize{}$246.14$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{17}$} & {\footnotesize{}$2.04$} & {\footnotesize{}$2.16$} & {\footnotesize{}$2.03$} & {\footnotesize{}${\Greekmath 010B}_{17}$} & {\footnotesize{}$16.20$} & {\footnotesize{}$202.76$} & {\footnotesize{}$198.07$} & {\footnotesize{}${\Greekmath 011C}_{17}^{2}$} & {\footnotesize{}$17.68$} & {\footnotesize{}$2007.36$} & {\footnotesize{}$3053.61$} & {\footnotesize{}${\Greekmath 0116}_{17}$} & {\footnotesize{}$36.97$} & {\footnotesize{}$728.55$} & {\footnotesize{}$820.53$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{18}$} & {\footnotesize{}$1.83$} & {\footnotesize{}$1.86$} & {\footnotesize{}$1.77$} & 
{\footnotesize{}${\Greekmath 010B}_{18}$} & {\footnotesize{}$13.94$} & {\footnotesize{}$347.07$} & {\footnotesize{}$192.65$} & {\footnotesize{}${\Greekmath 011C}_{18}^{2}$} & {\footnotesize{}$17.27$} & {\footnotesize{}$1478.12$} & {\footnotesize{}$2889.07$} & {\footnotesize{}${\Greekmath 0116}_{18}$} & {\footnotesize{}$19.63$} & {\footnotesize{}$311.89$} & {\footnotesize{}$603.94$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{19}$} & {\footnotesize{}$1.74$} & {\footnotesize{}$1.80$} & {\footnotesize{}$1.78$} & {\footnotesize{}${\Greekmath 010B}_{19}$} & {\footnotesize{}$14.17$} & {\footnotesize{}$398.65$} & {\footnotesize{}$157.60$} & {\footnotesize{}${\Greekmath 011C}_{19}^{2}$} & {\footnotesize{}$18.14$} & {\footnotesize{}$2896.07$} & {\footnotesize{}$2682.24$} & {\footnotesize{}${\Greekmath 0116}_{19}$} & {\footnotesize{}$19.85$} & {\footnotesize{}$1340.20$} & {\footnotesize{}$235.55$}\tabularnewline {\footnotesize{}${\Greekmath 010C}_{20}$} & {\footnotesize{}$1.75$} & {\footnotesize{}$1.81$} & {\footnotesize{}$1.80$} & {\footnotesize{}${\Greekmath 010B}_{20}$} & {\footnotesize{}$17.59$} & {\footnotesize{}$130.58$} & {\footnotesize{}$262.31$} & {\footnotesize{}${\Greekmath 011C}_{20}^{2}$} & {\footnotesize{}$15.98$} & {\footnotesize{}$2096.28$} & {\footnotesize{}$1352.18$} & {\footnotesize{}${\Greekmath 0116}_{20}$} & {\footnotesize{}$17.63$} & {\footnotesize{}$119.10$} & {\footnotesize{}$148.03$}\tabularnewline ${\Greekmath 011E}$ & {\footnotesize{}$8.74$} & {\footnotesize{}$21.52$} & {\footnotesize{}$20.91$} & {\footnotesize{}${\Greekmath 011C}_{f_{1}}^{2}$} & {\footnotesize{}$13.65$} & {\footnotesize{}$47.47$} & {\footnotesize{}$46.10$} & & & & & & & & \tabularnewline \hline \end{tabular} \end{sidewaystable} \end{document}
\begin{document} \title{{\Large Ramsey numbers of $5$-uniform loose cycles}} \author{ M. Shahsiah$^{\textrm{a},\textrm{b}}$ \\[2pt] {\small $^{\textrm{a}}$Department of Mathematics, University of Khansar, Khansar, 87916-85163, Iran}\\[2pt] {\small $^{\textrm{b}}$School of Mathematics, Institute for Research in Fundamental Sciences (IPM),}\\ {\small P.O. Box 19395-5746, Tehran, Iran }\\[2pt] {[email protected]}} \date{} \maketitle
\begin{abstract} Gy\'{a}rf\'{a}s et al. determined the asymptotic value of the diagonal Ramsey number of $\mathcal{C}^k_n$, $R(\mathcal{C}^k_n,\mathcal{C}^k_n)$, generalizing the corresponding result for $k=3$ due to Haxell et al. Recently, the exact values of the Ramsey numbers of $3$-uniform loose paths and cycles have been completely determined. These results motivate the conjecture, mentioned by Omidi et al., that for every $n\geq m\geq 3$ and $k\geq 3,$ $$R(\mathcal{C}^k_n,\mathcal{C}^k_m)=(k-1)n+\lfloor\frac{m-1}{2}\rfloor.$$ More recently, it has been shown that this conjecture is true for $n=m\geq 2$ and $k\geq 7$, and for $k=4$ when $n>m$ or $n=m$ is odd. Here we investigate this conjecture for $k=5$ and demonstrate that it holds for all sufficiently large $n$.
\noindent{\small { Keywords:} Ramsey number, Loose path, Loose cycle.}\\ {\small AMS subject classification: 05C65, 05C55, 05D10.} \end{abstract}
\section{\normalsize Introduction} For two $k$-uniform hypergraphs $\mathcal{G}$ and $\mathcal{H},$ the \textit{Ramsey number} $R(\mathcal{G},\mathcal{H})$ is the smallest integer $N$ such that every red-blue coloring of the edges of the complete $k$-uniform hypergraph $\mathcal{K}^k_N$ on $N$ vertices contains a red copy of $\mathcal{G}$ or a blue copy of $\mathcal{H}$. A {\it $k$-uniform loose cycle} (briefly, a {\it cycle of length $n$}), denoted by $\mathcal{C}_n^k,$ is a hypergraph with vertex set $\{v_1,v_2,\ldots,v_{n(k-1)}\}$ and with the set of $n$ edges $e_i=\{v_{(i-1)(k-1)+1},v_{(i-1)(k-1)+2},\ldots, v_{(i-1)(k-1)+k}\}$, $1\leq i\leq n$, where we use mod $n(k-1)$ arithmetic. Similarly, a {\it $k$-uniform loose path} (briefly, a {\it path of length $n$}), denoted by $\mathcal{P}_n^k,$ is a hypergraph with vertex set $\{v_1,v_2,\ldots,v_{n(k-1)+1}\}$ and with the set of $n$ edges $e_i=\{v_{(i-1)(k-1)+1},v_{(i-1)(k-1)+2},\ldots, v_{(i-1)(k-1)+k}\}$, $1\leq i\leq n$. For an edge $e_i=\{v_{(i-1)(k-1)+1},v_{(i-1)(k-1)+2},\ldots, v_{i(k-1)+1}\}$ of a given loose path (or a given loose cycle) $\mathcal{K}$, we denote by $f_{\mathcal{K},e_i}$ and $l_{\mathcal{K},e_i}$ the first vertex ($v_{(i-1)(k-1)+1}$) and the last vertex ($v_{i(k-1)+1}$) of $e_i,$ respectively.\\ The problem of determining or estimating Ramsey numbers is one of the central problems in combinatorics and has attracted the attention of many investigators. In contrast to the graph case, there are relatively few results on hypergraph Ramsey numbers. The Ramsey numbers of hypergraph loose cycles were first considered by Haxell et al. \cite{Ramsy number of loose cycle}, who showed that $R(\mathcal{C}^3_n,\mathcal{C}^3_n)$ is asymptotically $\frac{5}{2}n$. Gy\'{a}rf\'{a}s et al. \cite{Ramsy number of loose cycle for k-uniform} generalized this result to $k$-uniform loose cycles and showed that for $k\geq 3,$ $R(\mathcal{C}^k_n,\mathcal{C}^k_n)$ is asymptotic to $\frac{1}{2}(2k-1)n.$ The investigation of the exact values of the Ramsey numbers of hypergraph loose paths and cycles was initiated by Gy\'{a}rf\'{a}s et al.
\cite{subm}, who determined the exact values of the Ramsey numbers of two $k$-uniform loose triangles and quadrangles. Recently, in \cite{The Ramsey number of loose paths and loose cycles in 3-uniform hypergraphs}, the authors completely determined the exact values of the Ramsey numbers of 3-uniform loose paths and cycles. More precisely, they showed the following. \begin{theorem}{\rm \cite{The Ramsey number of loose paths and loose cycles in 3-uniform hypergraphs}}\label{Omidi} For every $n\geq m\geq 3,$ \begin{eqnarray*}\label{5} R(\mathcal{P}^3_n,\mathcal{P}^3_m)=R(\mathcal{P}^3_n,\mathcal{C}^3_m)=R(\mathcal{C}^3_n,\mathcal{C}^3_m)+1=2n+\Big\lfloor\frac{m+1}{2}\Big\rfloor. \end{eqnarray*} Moreover, for $n>m\geq 3,$ we have \begin{eqnarray*}\label{5} R(\mathcal{P}^3_m,\mathcal{C}^3_n)=2n+\Big\lfloor\frac{m-1}{2}\Big\rfloor. \end{eqnarray*} \end{theorem} Regarding Ramsey numbers of $k$-uniform loose paths and cycles for $k\geq 3,$ in \cite{Ramsey numbers of loose cycles in uniform hypergraphs} the authors posed the following conjecture, as mentioned also in \cite{subm}. \begin{conjecture}\label{our conjecture} Let $k\geq 3$ be an integer. For every $n\geq m \geq 3$, \begin{eqnarray*}\label{2} R(\mathcal{P}^k_{n},\mathcal{P}^k_{m})=R(\mathcal{P}^k_{n},\mathcal{C}^k_{m})=R(\mathcal{C}^k_n,\mathcal{C}^k_m)+1=(k-1)n+\lfloor\frac{m+1}{2}\rfloor. \end{eqnarray*} \end{conjecture} \noindent They also showed that Conjecture \ref{our conjecture} is equivalent to the following. \begin{conjecture}\label{our conjecture2} Let $k\geq 3$ be an integer. For every $n\geq m \geq 3$, \begin{eqnarray*}\label{2} R(\mathcal{C}^k_n,\mathcal{C}^k_m)=(k-1)n+\lfloor\frac{m-1}{2}\rfloor. \end{eqnarray*} \end{conjecture} More recently, it has been shown that Conjecture \ref{our conjecture2} holds for $n=m$ and $k\geq 7$ (see \cite{diagonal}). For small values of $k$, in \cite{4-uniform}, the authors demonstrated that Conjecture \ref{our conjecture2} holds for $k=4$ when $n>m$ or $n=m$ is odd. Therefore, in this regard, investigating the small cases $k=5,6$ is interesting. In this paper, we focus on the case $k=5$ and shall show that Conjecture \ref{our conjecture2} holds for sufficiently large $n$. More precisely, we prove that Conjecture \ref{our conjecture2} holds for $k=5$ when $n\geq\lfloor\frac{3m}{2}\rfloor$. We remark that in the proof we extend the method used in \cite{Ramsey numbers of loose cycles in uniform hypergraphs} and use a modified version of some lemmas in \cite{4-uniform}. Throughout the paper, by Lemma 1 of \cite{subm}, it suffices to prove only the upper bound for the claimed Ramsey numbers. Throughout the paper, for a 2-edge colored hypergraph $\mathcal{H}$ we denote by $\mathcal{H}_{\rm red}$ and $\mathcal{H}_{\rm blue}$ the induced hypergraphs on red edges and blue edges, respectively. \section{\normalsize Preliminaries} In this section, we prove some lemmas that will be used in the next section. Also, we recall the following result from \cite{Ramsey numbers of loose cycles in uniform hypergraphs}. \begin{theorem}\label{R(C3,C4)}{\rm \cite{Ramsey numbers of loose cycles in uniform hypergraphs}} Let $n,k\geq 3$ be integers. Then \begin{eqnarray*}R(\mathcal{C}^k_3,\mathcal{C}^k_n)= (k-1)n+1.\end{eqnarray*} \end{theorem} Before we state our main results we need some definitions. Let $\mathcal{H}$ be a 2-edge colored complete $5$-uniform hypergraph, $\mathcal{P}$ be a loose path in $\mathcal{H}$ and $W$ be a set of vertices with $W\cap V(\mathcal{P})=\emptyset$.
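For later use, we record the vertex labeling of a $5$-uniform loose path explicitly; every loose path $\mathcal{P}=e_1e_2\ldots e_n$ appearing in the proofs below is written in this form, where consecutive edges intersect in exactly one vertex: \begin{eqnarray*} e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\}, \hspace{0.5 cm} 1\leq i\leq n, \hspace{1 cm} e_i\cap e_{i+1}=\{v_{4i+1}\}, \hspace{0.5 cm} 1\leq i\leq n-1. \end{eqnarray*} In particular, $f_{\mathcal{P},e_i}=v_{4i-3}$ and $l_{\mathcal{P},e_i}=v_{4i+1}$.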
By a {\it $\varpi_S$-configuration}, we mean a copy of $\mathcal{P}^5_2$ with edges $$\{x,a_1,a_2,a_3,a_4\}, \{a_4,a_5,a_6,a_7,y\},$$ so that $\{x,y\}\subseteq W$ and $S=\{a_j : 1\leq j \leq 7\}\subseteq (e_{i-1}\setminus \{f_{\mathcal{P},e_{i-1}}\})\cup e_{i} \cup e_{i+1}\cup e_{i+2}$ is a set of unordered vertices of $4$ consecutive edges of $\mathcal{P}$ with $|S\cap (e_{i-1}\setminus \{f_{\mathcal{P},e_{i-1}}\})|\leq 1.$ Note that it is possible to have $S\subseteq e_i\cup e_{i+1}\cup e_{i+2}$; this happens only when $S\cap (e_{i-1}\setminus \{f_{\mathcal{P},e_{i-1}}\})\subseteq \{f_{\mathcal{P},e_{i}}\}$. The vertices $x$ and $y$ are called {\it the end vertices} of this configuration. A $\varpi_{S}$-configuration with $S\subseteq (e_{i-1}\setminus \{f_{\mathcal{P},e_{i-1}}\})\cup e_{i} \cup e_{i+1}\cup e_{i+2}$ is {\it good} if at least one of the vertices of $e_{i+2}\setminus e_{i+1}$ is not in $S$. We say that a monochromatic path $\mathcal{P}=e_1e_2\ldots e_n$ is {\it maximal with respect to} (w.r.t. for short) $W\subseteq V(\mathcal{H})\setminus V(\mathcal{P})$ if there is no $W'\subseteq W$ so that for some $1\leq r\leq n$ and some $1\leq i \leq n-r+1,$ the path \begin{eqnarray*} \mathcal{P}'= \left\lbrace \begin{array}{lr} e'_1e'_{2}\ldots e'_{n+1}, &i=1 \ \ {\rm and}\ \ r=n, \\ e'_1e'_{2}\ldots e'_{r+1}e_{r+1}\ldots e_n, &i=1 \ \ {\rm and}\ \ 1\leq r<n, \\ e_1\ldots e_{i-1}e'_ie'_{i+1}\ldots e'_{i+r}e_{i+r}\ldots e_n,\ \ & 2\leq i \leq n-r, \\ e_1e_2\ldots e_{i-1}e'_ie'_{i+1}\ldots e'_{n+1} &2 \leq i=n-r+1. \end{array} \right. \end{eqnarray*} is a monochromatic path with $n+1$ edges and the following properties: \begin{itemize} \item[(i)] $V(\mathcal{P}')=V(\mathcal{P})\cup W'$, \item[(ii)] if $i=1$, then $f_{\mathcal{P}',e'_1}=f_{\mathcal{P},e_1}$, \item[(iii)] if $i=n-r+1$, then $l_{\mathcal{P}',e'_{n+1}}=l_{\mathcal{P},e_n}$. \end{itemize} In other words, a monochromatic path $\mathcal{P}$ is maximal w.r.t. $W\subseteq V(\mathcal{H})\setminus V(\mathcal{P})$ if one cannot extend $\mathcal{P}$ to a longer path of the same color using the vertices of $W.$ Clearly, if $\mathcal{P}$ is maximal w.r.t. $W$, then it is maximal w.r.t. every $W'\subseteq W$; moreover, every loose path $\mathcal{P}'$ which is a sub-hypergraph of $\mathcal{P}$ is again maximal w.r.t. $W$.\\ The following lemma is a modified version of \cite[Lemma 2.3]{4-uniform} for $5$-uniform hypergraphs. For the sake of completeness, we give a proof here. \begin{lemma}\label{spacial configuration2} Assume that $\mathcal{H}=\mathcal{K}^5_{n}$ is $2$-edge colored red and blue. Let $\mathcal{P}\subseteq \mathcal{H}_{\rm red}$ be a maximal path w.r.t. $W,$ where $W\subseteq V(\mathcal{H})\setminus V(\mathcal{P})$ and $|W|\geq 5$. Let $A_1=\{f_{\mathcal{P},e_{1}}\}=\{v_1\}$ and $A_i=e_{i-1}\setminus\{f_{\mathcal{P},e_{i-1}}\}$ for $i>1$. Then for every three consecutive edges $e_i,e_{i+1}$ and $e_{i+2}$ of $\mathcal{P}$ and for each $u\in A_i$ there is a good $\varpi_S$-configuration, say $C=fg$, in $\mathcal{H}_{\rm blue}$ with end vertices $x\in f$ and $y\in g$ in $W$ and \begin{eqnarray*}S\subseteq \Big((e_i\setminus \{f_{\mathcal{P},e_{i}}\})\cup \{u\}\Big)\cup e_{i+1} \cup \Big(e_{i+2}\setminus \{v\}\Big),\end{eqnarray*} for some $v\in A_{i+3}$.
Moreover, there are two subsets $W_1\subseteq W$ and $W_2\subseteq W$ with $|W_1|\geq |W|-3$ and $|W_2|\geq |W|-4$ so that for every distinct vertices $x'\in W_1$ and $y'\in W_2$, the path $C'=\Big((f\setminus\{x\})\cup\{x'\}\Big)\Big((g\setminus\{y\})\cup\{y'\}\Big)$ is also a good $\varpi_S$-configuration in $\mathcal{H}_{\rm blue}$ with end vertices $x'$ and $y'$ in $W.$ \end{lemma} \begin{proof}{ Let $\mathcal{P}=e_1e_2\ldots e_m\subseteq \mathcal{H}_{\rm red}$ be a maximal path w.r.t. $W\subseteq V(\mathcal{H})\setminus V(\mathcal{P})$, where \begin{eqnarray*} e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\}, \hspace{1 cm} i=1,2,\ldots, m. \end{eqnarray*} Suppose that $e_i,e_{i+1}$ and $e_{i+2}$ are three consecutive edges of $\mathcal{P}$ and $u\in A_i$. (Note that for $i=1,$ we have $u=v_1$.) Among different choices of $4$ distinct vertices of $W,$ choose a $4$-tuple $X=(x_1,x_2,x_3,x_4)$ so that $E_X$ has the minimum number of blue edges, where $E_X=\{f_1,f_2,f_3,f_4\}$ and \begin{eqnarray*} &&f_{1}=\{u,x_1,v_{4i-2},v_{4i+2},v_{4i+6}\},\\ &&f_{2}=\{v_{4i-2},x_2,v_{4i-1},v_{4i+3},v_{4i+7}\},\\ &&f_{{3}}=\{v_{4i-1},x_{3},v_{4i},v_{4i+4},v_{4i+8}\},\\ &&f_{{4}}=\{v_{4i},x_{4},v_{4i+1},v_{4i+5},v_{4i+9}\}. \end{eqnarray*} Note that for $1\leq k \leq 4,$ we have $|f_k\cap(e_{i+2}\setminus\{f_{\mathcal{P},e_{i+2}}\})|=1.$ Since $\mathcal{P}$ is a maximal path w.r.t. $W,$ there is $1\leq j\leq 4$ so that the edge $f_{j}$ is blue. Otherwise, replacing $e_ie_{i+1}e_{i+2}$ by $f_1f_2f_3f_4$ in $\mathcal{P}$ yields a red path $\mathcal{P}'$ with $m+1$ edges, contradicting the maximality of $\mathcal{P}$ w.r.t. $W$. Let $W_1=(W\setminus\{x_1,x_2,x_{3},x_4\})\cup\{x_j\}$. For each vertex $x\in W_1$ the edge $f_x=(f_j\setminus\{x_j\})\cup\{x\}$ is blue. Otherwise, the number of blue edges in $E_Y$ would be less than that of $E_X$, where $Y$ is obtained from $X$ by replacing $x_j$ with $x$; this contradicts the choice of $X$.\\ Now we choose $h_1,h_2,h_3,h_4$ as follows. If $j=1,$ then set \begin{eqnarray*} &&h_1=\{u,v_{4i+3},v_{4i+7},v_{4i-1}\},h_2=\{v_{4i-1},v_{4i-2},v_{4i+4},v_{4i}\},\\ &&h_3=\{v_{4i},v_{4i+2},v_{4i+8},v_{4i+1}\},h_4=\{v_{4i+1},v_{4i+5},v_{4i+6},v_{4i+9}\}. \end{eqnarray*} If $j=2,$ then set \begin{eqnarray*} &&h_1=\{u,v_{4i-2},v_{4i+1},v_{4i}\},h_2=\{v_{4i},v_{4i-1},v_{4i+6},v_{4i+2}\},\\ &&h_3=\{v_{4i+2},v_{4i+3},v_{4i+8},v_{4i+4}\},h_4=\{v_{4i+4},v_{4i+5},v_{4i+7},v_{4i+9}\}. \end{eqnarray*} If $j=3,$ then set \begin{eqnarray*} &&h_1=\{u,v_{4i-2},v_{4i-1},v_{4i+1}\},h_2=\{v_{4i+1},v_{4i},v_{4i+6},v_{4i+2}\},\\ &&h_3=\{v_{4i+2},v_{4i+4},v_{4i+7},v_{4i+3}\},h_4=\{v_{4i+3},v_{4i+5},v_{4i+8},v_{4i+9}\}. \end{eqnarray*} If $j=4,$ then set \begin{eqnarray*} &&h_1=\{u,v_{4i-2},v_{4i},v_{4i-1}\},h_2=\{v_{4i-1},v_{4i+1},v_{4i+6},v_{4i+2}\},\\ &&h_3=\{v_{4i+2},v_{4i+5},v_{4i+7},v_{4i+3}\},h_4=\{v_{4i+3},v_{4i+4},v_{4i+8},v_{4i+9}\}. \end{eqnarray*} Note that in each of the above cases, for $1\leq k \leq 4,$ we have $|h_k\cap (f_j\setminus\{x_j\})|=1$ and $|h_k\cap(e_{i+2}\setminus(f_j\cup\{f_{\mathcal{P},e_{i+2}}\}))|\leq 1$. Let $Y=(y_1,y_2,y_{3},y_4)$ be a $4$-tuple of distinct vertices of $W\setminus\{x_j\}$ with the minimum number of blue edges in $F_{Y}$, where $F_Y=\{g_1,g_2,g_3,g_4\}$ and $g_k=h_k\cup\{y_k\}$ for $1\leq k \leq 4$. Again since $\mathcal{P}$ is maximal w.r.t. $W$, for some $1\leq \ell\leq 4$ the edge $g_{\ell}$ is blue and also, for each vertex $y_a\in W_2= (W\setminus\{x_j,y_{1},y_{2},y_3,y_4\})\cup \{y_\ell\}$ the edge $g_a=(g_{\ell}\setminus\{y_{\ell}\})\cup\{y_a\}$ is blue. Set $f=f_j$ and $g=g_{\ell}$.
Clearly $C=fg$ is our desired configuration with end vertices $x_j\in f$ and $y_{\ell}\in g$ in $W.$ Moreover, for distinct vertices $x'\in W_1$ and $y'\in W_2$, the path $C'=\Big((f_j\setminus\{x_j\})\cup\{x'\}\Big)\Big((g_{\ell}\setminus\{y_{\ell}\})\cup\{y'\}\Big)$ is also a good $\varpi_S$-configuration in $\mathcal{H}_{\rm blue}$ with end vertices $x'$ and $y'$ in $W.$ Since $|W_1|=|W|-3$ and $|W_2|=|W|-4$, each vertex of $W,$ with the exception of at most $3,$ can be considered as an end vertex of $C'.$ Note that the configuration $C$ (and also $C'$) contains at most two vertices of $e_{i+2}\setminus e_{i+1}$.\\}\end{proof} Also, we need the following lemma. \begin{lemma}\label{there is P3:2} Let $\mathcal{H}=\mathcal{K}^5_{q}$ be $2$-edge colored red and blue and $\mathcal{P}=e_1e_2\ldots e_n \subseteq \mathcal{H}_{\rm blue}$ be a maximal path w.r.t. $W,$ where $W\subseteq V(\mathcal{H})\setminus V(\mathcal{P})$ and $|W|\geq 7$. Assume that $A_1=\{f_{\mathcal{P},e_1}\}$ and $A_i=V(e_{i-1})\setminus\{f_{\mathcal{P},e_{i-1}}\}$ for $i>1$. Then for every two consecutive edges $e_i$ and $e_{i+1}$ of $\mathcal{P}$ and for each $u\in A_i$, there is a $\mathcal{P}_3^5\subseteq \mathcal{H}_{\rm red}$, say $\mathcal{Q}$, with end vertices in $W$ such that $V(\mathcal{Q})\subseteq ((e_i\setminus\{f_{\mathcal{P},e_i}\})\cup\{u\})\cup e_{i+1}\cup W,$ at least one of the vertices of $A_{i+2}$ is not in $\mathcal{Q}$ and $\vert W\cap V(\mathcal{Q})\vert \leq 6$. Moreover, each vertex of $W,$ with the exception of at most one, can be considered as an end vertex of $\mathcal{Q}$. \end{lemma} \begin{proof}{ Let $\mathcal{P}=e_1e_2\ldots e_n \subseteq \mathcal{H}_{\rm blue}$ be a maximal path w.r.t. $W\subseteq V(\mathcal{H})\setminus V(\mathcal{P}),$ where \begin{eqnarray*} e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\},\hspace{1 cm} 1\leq i\leq n. \end{eqnarray*} Also, let $e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\}$ and $e_{i+1}=\{v_{4i+1},v_{4i+2},v_{4i+3},v_{4i+4},v_{4i+5}\}$ be two consecutive edges of $\mathcal{P},$ $u\in A_i$ and $W=\{x_1,...,x_t\}$ (note that for $i=1$, we have $u=v_1$). \noindent \textbf{Case 1. }There exist $x_{j},x_{j'}\in W$ such that the edge $e=\{u,v_{4i-2},v_{4i-1},x_j,x_{j'}\}$ is blue. \noindent Suppose without loss of generality that $x_j=x_1$ and $x_{j'}=x_2$. If for every vertex $x_3\in W\setminus\{x_1,x_2\}$ the edge $f_1=\{x_2,x_3,v_{4i+1},v_{4i+3},v_{4i+4}\}$ is blue, then for arbitrary vertices $x_4,x_5,x_6,x_7\in W\setminus\{x_1,x_2,x_3\}$ the edges \begin{eqnarray*} &&f_2=\{x_1,x_{4},v_{4i+4},v_{4i+5},v_{4i+2}\},\\ &&f_3=\{v_{4i+2},v_{4i-2},x_{3},x_{5},v_{4i}\},\\ &&f_4=\{v_{4i},v_{4i-1},v_{4i+1},x_{6},x_{7}\}, \end{eqnarray*} are red (since $\mathcal{P}$ is maximal w.r.t. $W$) and $\mathcal{Q}=f_2f_3f_4$ is the desired path. Note that for every $\alpha\neq 2,$ $x_{\alpha}$ can be considered as an end vertex of $\mathcal{Q}$. To see this, let $x_3 \notin \{x_{\alpha},x_1,x_2\}$. Then, for $\alpha=1$ we have $x_{\alpha}\in f_2$ and otherwise we may assume that $x_{\alpha}=x_6$ ($x_{\alpha}\in f_4$). \noindent Now, we may assume that there is a vertex $x_3\in W\setminus \{x_1,x_2\}$ so that $f_1$ is red. Again, since $\mathcal{P}$ is maximal w.r.t. $W,$ the path $\mathcal{Q}=f_1f_4\{v_{4i-2},v_{4i},v_{4i+2},x_{4},x_5\}$ is the desired red $\mathcal{P}^5_3$. It is easy to check that for every $\alpha\neq 1,$ $x_{\alpha}$ can be considered as an end vertex of $\mathcal{Q}.$ \noindent \textbf{Case 2.
}There exist $x_{j},x_{j'}\in W$ such that the edge $e'=\{v_{4i-1}, v_{4i}, v_{4i+1}, x_j, x_{j'}\}$ is blue. \noindent Similar to Case 1, assume that $x_j=x_1$ and $x_{j'}=x_2$. Since $\mathcal{P}$ is maximal w.r.t. $W,$ for every $x_{\ell} \in W\setminus \{x_1,x_2\},$ $3\leq {\ell}\leq 7,$ the edges $g_1=\{u,v_{4i-2},v_{4i-1},x_{3},x_{4}\}$ and $g_2=\{u,v_{4i},x_{5},x_{6},x_{7}\}$ are red (if the edge $g_{\ell},$ $1\leq {\ell} \leq 2,$ is blue, replacing the edge $e_i$ by $g_{\ell}e'$ in the path $\mathcal{P},$ yields a blue path $\mathcal{P}'$ with $n+1$ edges, a contradiction). If there is $k\in \{1,2\}$ such that the edge $g_3=\{v_{4i+3},v_{4i+4},v_{4i+5},x_k,x_{5}\}$ is blue, then for $\ell\in \{1,2\}\setminus\{k\},$ the edge $g_4=\{v_{4i+1},v_{4i+2},v_{4i+3},x_{6},x_{\ell}\}$ is red. So $\mathcal{Q}=g_1 g_2 g_4$ makes the desired red $\mathcal{P}^5_3$ (it is obvious that for every $\alpha \neq k,$ $x_{\alpha}$ can be seen as an end vertex of $\mathcal{Q}$). So we may assume that for every $k\in \{1,2\}$, the edge $g_3$ is red and $\mathcal{Q}=g_1 g_2 g_3$ is a red $\mathcal{P}^5_3$ such that $v_{4i+2}\notin V(\mathcal{Q})$ and every $x_{\alpha}\in W$ can be considered as an end vertex of $\mathcal{Q}$. \noindent \textbf{Case 3. }For every $x_1,x_{2},x_3,x_{4}\in W,$ the edges $e=\{u, v_{4i-2}, v_{4i-1}, x_{1}, x_{2}\}$ and $e'=\{v_{4i-1},v_{4i},v_{4i+1},x_{3},x_{4}\}$ are red. \noindent We may assume that for every $k\neq j$, $1\leq j\leq 4$, the edge $h_1=\{x_{4}, v_{4i+3}, v_{4i+4}, v_{4i+5}, x_{\ell}\}$ is red and so $\mathcal{Q}=ee'h_1$ is the desired path (note that $v_{4i+2}\notin V(\mathcal{Q})$). If not, since $\mathcal{P}$ is maximal w.r.t. $W,$ for every vertices $x_{\ell}, x_{\ell'} \in W\setminus\{x_1,$ $x_2,$ $x_3,$ $x_4,$ $x_k\}$ the edge $h_2=\{v_{4i+1},v_{4i+2},v_{4i+3}, x_{\ell}, x_{\ell'}\}$ is red and $\mathcal{Q}=ee'h_2$ is the desired path (note that, in this case, every vertex of $W$ can be considered as an end vertex of $\mathcal{Q}$). }\end{proof} Consider a given 2-edge colored complete 5-uniform hypergraph $\mathcal{H}$. By Lemma \ref{spacial configuration2}, we can find many disjoint blue $\varpi_S$-configuration corresponding to a maximal red loose path. The following lemma guarantees how we can connect these configurations to make at most two blue paths $\mathcal{Q}$ and $\mathcal{Q}'$ so that $\|\mathcal{Q}\cup \mathcal{Q}'\|$ is sufficiently large.\\ \begin{lemma}\label{there is a Pl} Let $\mathcal{H}=\mathcal{K}_l^5$ be two edge colored red and blue. Also let $\mathcal{P}=e_1e_2\ldots e_n,$ $n\geq 3,$ be a maximal red path w.r.t. $W,$ where $W\subseteq V(\mathcal{H})\setminus V(\mathcal{P})$ and $|W|\geq 5$. Then for some $r\geq 0$ and $W'\subseteq W$ there are two disjoint blue paths $\mathcal{Q}$ and $\mathcal{Q}',$ with $\|\mathcal{Q}\|\geq 2$ and \begin{eqnarray*} \|\mathcal{Q}\cup \mathcal{Q}'\|=2(n-r)/3=\left\lbrace \begin{array}{ll} 2(|W'|-2) &\mbox{if} \ \|\mathcal{Q}'\|\neq 0, \\ 2(|W'|-1) & \mbox{if}\ \|\mathcal{Q}'\|=0, \end{array} \right. \end{eqnarray*} between $W'$ and $\overline{\mathcal{P}}=e_1e_2\ldots e_{n-r}$ so that $e\cap W'$ is actually the end vertex of $e$ for each edge $e\in \mathcal{Q}\cup \mathcal{Q}'$ and at least one of the vertices of $e_{n-r}\setminus e_{n-r-1}$ is not in $V(\mathcal{Q})\cup V(\mathcal{Q}')$. Moreover, if $\|\mathcal{Q}'\|=0$ then either $x=|W\setminus W'|\in\{2,3\}$ or $x\geq 4$ and $0\leq r \leq 2$. Otherwise, either $x=|W\setminus W'|=1$ or $x\geq 2$ and $0\leq r \leq 2$. 
\end{lemma} \begin{proof}{ Let $\mathcal{P}=e_1e_2\ldots e_{n}$ be a maximal red path w.r.t. $W,$ $W\subseteq V(\mathcal{H})\setminus V(\mathcal{P})$, and \begin{eqnarray*} e_i=\{v_{(i-1)(k-1)+1},v_{(i-1)(k-1)+2},\ldots,v_{i(k-1)+1}\}, \hspace{1 cm} i=1,2,\ldots,n,\end{eqnarray*} are the edges of $\mathcal{P}$.\\\\ {\bf Step 1:} Set $\mathcal{P}_1=\mathcal{P}$, $W_1=W$ and $\overline{\mathcal{P}}_1=\mathcal{P}'_1=e_{1}e_{2}e_3$. Since $\mathcal{P}$ is maximal w.r.t. $W_1$, using Lemma \ref{spacial configuration2} there is a good $\varpi_S$-configuration, say $\mathcal{Q}_1=f_1g_1,$ in $\mathcal{H}_{\rm blue}$ with end vertices $x\in f_1$ and $y\in g_1$ in $W_1$ so that $S\subseteq \mathcal{P}'_1$ and $\mathcal{Q}_1$ does not contain a vertex of $e_3\setminus e_2,$ say $u_1$. Set $X_1=|W\setminus V(\mathcal{Q}_1)|$, $\mathcal{P}_2=\mathcal{P}_1\setminus \overline{\mathcal{P}}_1=e_4e_5\ldots e_n$ and $W_2=W.$ If $|W_2|=5$ or $\|\mathcal{P}_2\|\leq 2$, then $\mathcal{Q}=\mathcal{Q}_1$ is a blue path between $W'=W_1\cap V(\mathcal{Q}_1)$ and $\overline{\mathcal{P}}=\overline{\mathcal{P}}_1$ with desired properties. Otherwise, go to Step 2.\\\\ \noindent {\bf Step 2:} Clearly $|W_2|\geq 6$ and $\|\mathcal{P}_2\|\geq 3.$ Set $\overline{\mathcal{P}}_2=e_4e_5e_6$ and $\mathcal{P}'_2=((e_{4}\setminus \{f_{\mathcal{P},e_{4}}\})\cup\{u_1\})e_5e_6$. Since $\mathcal{P}$ is maximal w.r.t. $W_2$, using Lemma \ref{spacial configuration2} there is a good $\varpi_S$-configuration, say $\mathcal{Q}_2=f_2g_2,$ in $\mathcal{H}_{\rm blue}$ with end vertices $x\in f_2$ and $y\in g_2$ in $W_2$ such that $S\subseteq \mathcal{P}'_2$ and $\mathcal{Q}_2$ does not contain a vertex of $e_6\setminus e_5,$ say $u_2.$ By Lemma \ref{spacial configuration2}, there are two subsets $W_{21}\subseteq W_2$ and $W_{22}\subseteq W_2$ with $|W_{21}|\geq |W_2|-3$ and $|W_{22}|\geq |W_2|-4$ so that for every distinct vertices $x'\in W_{21}$ and $y'\in W_{22}$, the path $\mathcal{Q}'_2=\Big((f_2\setminus\{x\})\cup\{x'\}\Big)\Big((g_2\setminus\{y\})\cup\{y'\}\Big)$ is also a good $\varpi_S$-configuration in $\mathcal{H}_{\rm blue}$ with end vertices $x'$ and $y'$ in $W_2.$ Therefore, we may assume that $\bigcup_{i=1}^{2}\mathcal{Q}_i$ is either a blue path or the union of two disjoint blue paths. Set $X_2=|W\setminus \bigcup_{i=1}^{2} V(\mathcal{Q}_i)|$ and $\mathcal{P}_3=\mathcal{P}_{2}\setminus \overline{\mathcal{P}}_{2}=e_7e_8\ldots e_n$. If $\bigcup_{i=1}^{2}\mathcal{Q}_i$ is a blue path $\mathcal{Q}$ with end vertices $x_{2}$ and $y_{2}$, then set \begin{eqnarray*} W_3=\Big(W_{2}\setminus V(\mathcal{Q})\Big)\cup\{x_{2},y_{2}\}. \end{eqnarray*} In this case, clearly $|W_3|= |W_2|-1$. Otherwise, $\bigcup_{i=1}^{2}\mathcal{Q}_i$ is the union of two disjoint blue paths $\mathcal{Q}$ and $\mathcal{Q}'$ with end vertices $x_{2},y_{2}$ and $x'_{2},y'_{2}$ in $W_2$, respectively. In this case, set \begin{eqnarray*} W_3=\Big(W_{2}\setminus V(\mathcal{Q}\cup\mathcal{Q}')\Big)\cup\{x_{2},y_{2},x'_{2},y'_{2}\}. \end{eqnarray*} Clearly $|W_3|=|W_2|$. If $|W_3|\leq 5$ or $\|\mathcal{P}_3\|\leq 2$, then $\bigcup_{i=1}^{2}\mathcal{Q}_i=\mathcal{Q}$ and $\emptyset$ or $\mathcal{Q}$ and $\mathcal{Q}'$ (in the case $\bigcup_{i=1}^{2}\mathcal{Q}_i=\mathcal{Q}\cup\mathcal{Q}'$) are the paths between $W'=W\cap \bigcup_{i=1}^{2} V(\mathcal{Q}_i)$ and $\overline{\mathcal{P}}=\overline{\mathcal{P}}_1\cup \overline{\mathcal{P}}_2$ with desired properties. 
Otherwise, go to Step $3$.\\\\\\ \noindent{\bf Step $\ell$ ($\ell>2$):} Clearly $|W_{\ell}|\geq 6$ and $\|\mathcal{P}_{\ell}\|\geq 3.$ Set \begin{eqnarray*} \hspace{-0.7 cm}&&\overline{\mathcal{P}}_{\ell}=e_{3{\ell}-2}e_{3{\ell}-1}e_{3{\ell}},\\ \hspace{-0.7 cm}&&\mathcal{P}'_{\ell}=\Big((e_{3{\ell}-2}\setminus\{f_{\mathcal{P},e_{3{\ell}-2}}\})\cup \{u_{{\ell}-1}\}\Big)e_{3{\ell}-1}e_{3{\ell}}. \end{eqnarray*} Since $\mathcal{P}$ is maximal w.r.t. $W_{\ell}$, using Lemma \ref{spacial configuration2} there is a good $\varpi_S$-configuration, say $\mathcal{Q}_{\ell}=f_{\ell}g_{\ell},$ in $\mathcal{H}_{\rm blue}$ with end vertices $x\in f_{\ell}$ and $y\in g_{\ell}$ in $W_{\ell}$ such that $\mathcal{Q}_{\ell}$ does not contain a vertex of $e_{3{\ell}}\setminus e_{3{\ell}-1},$ say $u_{\ell}$. By Lemma \ref{spacial configuration2}, there are two subsets $W_{{\ell}1}\subseteq W_{\ell}$ and $W_{{\ell}2}\subseteq W_{\ell}$ with $|W_{{\ell}1}|\geq |W_{\ell}|-3$ and $|W_{{\ell}2}|\geq |W_{\ell}|-4$ so that for every distinct vertices $x'\in W_{{\ell}1}$ and $y'\in W_{{\ell}2}$, the path $\mathcal{Q}'_{\ell}=\Big((f_{\ell}\setminus\{x\})\cup\{x'\}\Big)\Big((g_{\ell}\setminus\{y\})\cup\{y'\}\Big)$ is also a good $\varpi_S$-configuration in $\mathcal{H}_{\rm blue}$ with end vertices $x'$ and $y'$ in $W_{\ell}.$ Therefore, we may assume that either $\bigcup_{i=1}^{{\ell}}\mathcal{Q}_i$ is a blue path $\mathcal{Q}$ with end vertices in $W_{\ell}$ or we have two disjoint blue paths $\mathcal{Q}$ and $\mathcal{Q}'$ with end vertices in $W_{\ell}$ so that $\mathcal{Q}\cup \mathcal{Q}'=\bigcup_{i=1}^{{\ell}}\mathcal{Q}_i$. \noindent Set $X_{\ell}=|W\setminus \bigcup_{i=1}^{{\ell}}V(\mathcal{Q}_i)|$ and $\mathcal{P}_{{\ell}+1}=\mathcal{P}_{{\ell}}\setminus \overline{\mathcal{P}}_{{\ell}}=e_{3{\ell}+1}e_{3{\ell}+2}\ldots e_n$. If $\bigcup_{i=1}^{{\ell}}\mathcal{Q}_i$ is a blue path $\mathcal{Q}$ with end vertices $x_{{\ell}}$ and $y_{{\ell}}$, then set \begin{eqnarray*} W_{{\ell}+1}=\Big(W_{{\ell}}\setminus V(\mathcal{Q})\Big)\cup\{x_{{\ell}},y_{{\ell}}\}. \end{eqnarray*} Note that in this case, $|W_{{\ell}}|-2\leq |W_{{\ell}+1}|\leq |W_{{\ell}}|-1$. Otherwise, $\bigcup_{i=1}^{{\ell}}\mathcal{Q}_i$ is the union of two disjoint blue paths $\mathcal{Q}$ and $\mathcal{Q}'$ with end vertices $x_{{\ell}},y_{{\ell}}$ and $x'_{{\ell}},y'_{{\ell}}$, respectively. In this case, set \begin{eqnarray*} W_{{\ell}+1}=\Big(W_{{\ell}}\setminus V(\mathcal{Q}\cup\mathcal{Q}')\Big)\cup\{x_{{\ell}},y_{{\ell}},x'_{{\ell}},y'_{{\ell}}\}. \end{eqnarray*} Clearly, $|W_{{\ell}}|-1\leq |W_{{\ell}+1}|\leq |W_{{\ell}}|$.\\ If $|W_{{\ell}+1}|\leq 5$ or $\|\mathcal{P}_{{\ell}+1}\|\leq 2$, then $\bigcup_{i=1}^{{\ell}}\mathcal{Q}_i=\mathcal{Q}$ and $\emptyset$ or $\mathcal{Q}$ and $\mathcal{Q}'$ (in the case $\bigcup_{i=1}^{{\ell}}\mathcal{Q}_i=\mathcal{Q}\cup\mathcal{Q}'$) are the paths with the desired properties. Otherwise, go to Step $\ell+1$.\\ Let $t\geq 2$ be the minimum integer for which we have either $|W_t|\leq 5$ or $\|\mathcal{P}_t\|\leq 2$. Set $x=X_{t-1}$ and $r=\|\mathcal{P}_{t}\|=n-3(t-1)$. So $\bigcup_{i=1}^{t-1}\mathcal{Q}_i$ is either a blue path $\mathcal{Q}$ or the union two disjoint blue paths $\mathcal{Q}$ and $\mathcal{Q}'$ between $\overline{\mathcal{P}}=e_1e_2\ldots e_{n-r}$ and $W'=W\cap (\bigcup_{i=1}^{t-1} V(\mathcal{Q}_i))$ with the desired properties. If $\bigcup _{i=1}^{t-1}\mathcal{Q}_i$ is a blue path $\mathcal{Q},$ then either $x\in\{2,3\}$ or $x\geq 4$ and $0\leq r\leq 2$. 
Otherwise, $\bigcup _{i=1}^{t-1}\mathcal{Q}_i$ is the union of two disjoint blue paths $\mathcal{Q}$ and $\mathcal{Q}'$ and we have either $x=1$ or $x\geq 2$ and $0\leq r\leq 2$. }\end{proof} \section{Ramsey number of 5-uniform loose cycles} In this section we determine the exact value of the Ramsey number $R(\mathcal{C}^5_{n},\mathcal{C}^5_{m})$, where $n\geq\lfloor\frac{3m}{2}\rfloor.$ We shall use Lemma \ref{there is P3:2} to prove the following basic lemma. \begin{lemma}\label{pm-1 implies pn-1} Let $n= \Big\lfloor\frac{3m}{2}\Big\rfloor$, $m\geq 4$, and $\mathcal{H}=\mathcal{K}^5_{4n+\lfloor\frac{m-1}{2}\rfloor}$ be $2$-edge colored red and blue. If there is no copy of $\mathcal{C}^5_{m}$ in $\mathcal{H}_{\rm blue}$ and $\mathcal{C}=\mathcal{C}^5_{m-1}\subseteq\mathcal{H}_{\rm blue}$, then $\mathcal{C}^5_{n-1}\subseteq\mathcal{H}_{\rm red}$. \end{lemma} \begin{proof}{ Let $\mathcal{C}=e_1e_2\ldots e_{m-1}$ be a copy of $\mathcal{C}_{m-1}^5\subseteq \mathcal{H}_{\rm blue}$ with edges \begin{eqnarray*}e_j=\{v_{4j-3},v_{4j-2},v_{4j-1},v_{4j},v_{4j+1}\}\hspace{0.5 cm} (\rm{mod}\ \ 4(m-1)),\hspace{0.5 cm} 1\leq j\leq m-1\end{eqnarray*} and $W=V(\mathcal{H})\setminus V(\mathcal{C})$. We have two following cases. \noindent \textbf{Case 1.} For some edge $e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\},$ $1\leq i\leq m-1$, there are two vertices $u,v\in W$ such that at least one of the edges $\{v_{4i-1},v_{4i},v_{4i+1},u,v\}$ or $\{v_{4i-3},v_{4i-2},v_{4i-1},u,v\}$ is blue. \noindent We can assume that the edge $e=\{v_{4i-1},v_{4i},v_{4i+1},u,v\}$ is blue. Set \begin{eqnarray*}\mathcal{P}=e_{i+1}e_{i+2}\ldots e_{m-1}e_1e_2\ldots e_{i-2}e_{i-1}\end{eqnarray*} and $W_0=W\setminus \{u,v\}$ (if the edge $\{v_{4i-3},v_{4i-2},v_{4i-1},u,v\}$ is blue, consider the path \begin{eqnarray*} \mathcal{P}=e_{i-1}e_{i-2}\ldots e_2e_1e_{m-1}\ldots e_{i+2}e_{i+1} \end{eqnarray*} and do the following process to get a red copy of $\mathcal{C}_{n-1}^5$).\\ Let $m=2k+p$, where $p=0,1$. For $1\leq \ell \leq k-1,$ do the following process.\\ \noindent{\bf Step 1:} Set $\mathcal{P}_1=\mathcal{P}=g_1g_2\ldots g_{m-2}$, $W_1=W_0$ and $\mathcal{P}'_1=\overline{\mathcal{P}}_1=g_1g_2$. Since $\mathcal{P}$ is maximal w.r.t. $W_1$, using Lemma \ref{there is P3:2}, there is a red path $\mathcal{P}_3^5,$ say $\mathcal{Q}_1,$ with end vertices $x_1,y_1$ in $W_1$ so that $V(\mathcal{Q}_1)\subseteq V(\mathcal{P}'_1)\cup W_1$ and $\mathcal{Q}_1$ does not contain a vertex of $g_2\setminus g_1$, say $u_1$. Let $W'_1=W_1\cap V(\mathcal{Q}_1).$ By Lemma \ref{there is P3:2}, we have $|W'_1|\leq 6$.\\\\ {\bf Step $\ell$ ($ 2\leq \ell \leq k-1$):} Set $\mathcal{P}_{\ell}=\mathcal{P}_{\ell-1}\setminus \overline{\mathcal{P}}_{\ell-1}=g_{2\ell-1}g_{2\ell}\ldots g_{m-2}$, $\mathcal{P}'_{\ell}=\Big((g_{2\ell-1}\setminus \{f_{\mathcal{P},g_{2\ell-1}}\})\cup\{u_{\ell-1}\}\Big)g_{2\ell}$, $\overline{\mathcal{P}}_{\ell}=g_{2\ell-1}g_{2\ell}$ and $W_{\ell}=(W_{1}\setminus \bigcup_{j=1}^{\ell-1}V(\mathcal{Q}_j))\cup\{x_{\ell-1},y_{\ell-1}\}$. Since $\mathcal{P}$ is maximal w.r.t. $W_{\ell},$ using Lemma \ref{there is P3:2}, there is a red path $\mathcal{P}_3^5,$ say $\mathcal{Q}_{\ell},$ with mentioned properties so that $V(\mathcal{Q}_{\ell})\subseteq V(\mathcal{P}'_{\ell})\cup W_{\ell}$ and $\mathcal{Q}_{\ell}$ does not contain a vertex of $g_{2\ell}\setminus g_{2\ell-1},$ say $u_{\ell}$. 
Since each vertex of $W_{\ell},$ with the exception of at most one, can be considered as an end vertex of $\mathcal{Q}_{\ell},$ we may assume that $\bigcup_{j=1}^{\ell}\mathcal{Q}_j$ is a red path with end vertices $x_{\ell}, y_{\ell}$ in $W_{\ell}$. Let $W'_{\ell}=W_{1}\cap V(\bigcup_{j=1}^{\ell} \mathcal{Q}_j)$. Clearly $|W'_{\ell}|\leq 5\ell+1$.\\ Therefore, $\mathcal{Q}=\bigcup_{\ell=1}^{k-1} \mathcal{Q}_{\ell}$ is a red path of length $3k-3$ with end vertices $x_{k-1}, y_{k-1}$ in $W_{k-1}$ and $|W'_{k-1}|=|W_0\cap V(\mathcal{Q})|\leq 5(k-1)+1$. Let $W'=W_0\setminus V(\mathcal{Q})=W_0\setminus W'_{k-1}$. We have two following subcases.\\ \noindent {\it Subcase $1$}. $m=2k.$ \noindent Since $|W_0|=5k+1,$ we have $|W'|\geq 5$. Let $\{u_1,u_2,\ldots,u_5\}\subseteq W'$ and $z$ be a vertex of $g_{m-2}\setminus g_{m-3}$ ($e_{i-1}\setminus e_{i-2}$) so that $z\notin V(\mathcal{Q})$. Since there is no blue copy of $\mathcal{C}^5_{m}$ and the edge $e$ is blue, the edges $$f_1=\{y_{k-1},u_1,u_2,v_{4i-1},z\}, f_2=\{z,v_{4i-2},u,u_3,x_{k-1}\},$$ are red and $\mathcal{Q}f_1f_2$ is a red copy of $\mathcal{C}_{n-1}^5$ (note that $n=3k$). \noindent{\it Subcase $2$}. $m=2k+1.$ \noindent In this case we have $n=3k+1$ and $|W'|\geq 6.$ Let $\{u_1,$ $u_2,\ldots,u_6\}\subseteq W'$. Also we have $(e_{i-1}\setminus\{f_{\mathcal{P},e_{i-1}}\})\cap V(\mathcal{Q})=\emptyset.$ Since there is no blue copy of $\mathcal{C}^5_{m}$ and the edge $e$ is blue, the edges $$g_1=\{y_{k-1},v_{4i-4},v_{4i-1},u_1,u_2\}, g_2=\{u_2,u_3, u_4,v_{4i-5},v_{4i}\},$$ $$g_3=\{v_{4i},v_{4i-3},u_5,u_6,x_{k-1}\},$$ are red and $\mathcal{Q}g_1g_2g_3$ is a copy of $\mathcal{C}_{n-1}^5$ in $\mathcal{H}_{\rm red}$. \noindent \textbf{Case 2.} For every edge $e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\}$, $1\leq i\leq m-1$, and every vertices $u,v\in W$ the edges $\{v_{4i-1},v_{4i},v_{4i+1},u,v\}$ and $\{v_{4i-3},v_{4i-2},v_{4i-1},u,v\}$ are red. \noindent Clearly \begin{eqnarray*} |W|=\left\lbrace \begin{array}{ll} 5k+3 &\mbox{if $m=2k$}, \\ 5k+4 &\mbox{if $m=2k+1$}. \end{array}\right. \end{eqnarray*} \noindent Now let $W'=\{x_1,x_2,\ldots,x_{5k+3}\}\subseteq W$. For $0\leq j\leq k-2$ and $1\leq \ell \leq 3$, set \begin{eqnarray*} f_{3j+\ell}=\left\lbrace \begin{array}{ll} (e_{2j+1}\setminus \{v_{8j+4},v_{8j+5}\})\cup\{x_{5j+1},x_{5j+2}\} &\mbox{if $\ell=1$}, \\ (e_{2j+1}\setminus \{v_{8j+1},v_{8j+2}\})\cup\{x_{5j+3},x_{5j+4}\} &\mbox{if $\ell=2$}, \\ (e_{2j+2}\setminus \{v_{8j+5},v_{8j+6}\})\cup\{x_{5j+4},x_{5j+5}\} &\mbox{if $\ell=3$}, \end{array}\right. \end{eqnarray*} where $$e_{2j+1}=\{v_{8j+1},v_{8j+2},v_{8j+3},v_{8j+4},v_{8j+5}\}, e_{2j+2}=\{v_{8j+5},v_{8j+6},v_{8j+7},v_{8j+8},v_{8j+9}\}.$$ Also, let \begin{eqnarray*} f_{3(k-1)+\ell}=\left\lbrace \begin{array}{ll} (e_{2k-1}\setminus \{v_{8k-4},v_{8k-3}\})\cup\{x_{5k-4},x_{5k-3}\} &\mbox{if $\ell=1$}, \\ (e_{2k-1}\setminus \{v_{8k-7},v_{8k-6}\})\cup\{x_{5k-2},x_{5k-1}\} &\mbox{if $\ell=2$}, \\ (e_{2k}\setminus \{v_{8k-3},v_{8k-2}\})\cup\{x_{5k-1},x_{5k}\} &\mbox{if $\ell=3$ and $m=2k+1$}. \end{array}\right. \end{eqnarray*} Clearly for $m=2k,$ $\mathcal{C}' =f_1f_2 \ldots f_{3k-1}$ and for $m=2k+1$, $\mathcal{C}''=f_1f_2 \ldots f_{3k}$ is a copy of $\mathcal{C}_{n-1}^5$ in $\mathcal{H}_{\rm red}$. }\end{proof} \begin{lemma}\label{Cn-1 implies small Cm} Let $n\geq \Big\lfloor\frac{3m}{2}\Big\rfloor$, $6\geq m\geq 4$ and $\mathcal{H}=\mathcal{K}^5_{4n+\lfloor\frac{m-1}{2}\rfloor}$ be $2$-edge colored red and blue. 
If there is no red copy of $\mathcal{C}^5_{n}$ and $\mathcal{C}=\mathcal{C}^5_{n-1}\subseteq \mathcal{H}_{\rm red}$, then $\mathcal{C}^5_{m}\subseteq \mathcal{H}_{\rm blue}$. \end{lemma} \begin{proof}{Let $\mathcal{C}=e_1e_2\ldots e_{n-1}$ be a copy of $\mathcal{C}_{n-1}^5$ in $\mathcal{H}_{\rm red}$ with edges \begin{eqnarray*}e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\}\hspace{0.5 cm} (\rm{mod}\ \ 4(n-1)),\hspace{0.5 cm} 1\leq i\leq n-1 \end{eqnarray*} and $W=V(\mathcal{H})\setminus V(\mathcal{C})$. We have two following cases. \noindent \textbf{Case 1.} For some edge $e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\}$, $1\leq i\leq n-1$, there are vertices $z_1,z_2\in W$ such that at least one of the edges $\{v_{4i-1},v_{4i},v_{4i+1},z_1,z_2\}$ or $\{v_{4i-3},v_{4i-2},v_{4i-1},z_1,z_2\}$ is red. \noindent We can suppose that there are vertices $z_1,z_2\in W$ so that the edge $e=\{v_{4i-1},v_{4i},v_{4i+1},z_1,z_2\}$ is red. Set \begin{eqnarray*}\mathcal{P}=e_{i+1}e_{i+2}\ldots e_{n-1}e_1e_2\ldots e_{i-2}e_{i-1}\end{eqnarray*} and $W_0=W\setminus \{z_1,z_2\}$. If the edge $\{v_{4i-3},v_{4i-2},v_{4i-1},z_1,z_2\}$ is red, then consider the path \begin{eqnarray*}\mathcal{P}=e_{i-1}e_{i-2}\ldots e_2e_1e_{n-1}\ldots e_{i+2}e_{i+1}\end{eqnarray*} and do the following process to get a blue copy of $\mathcal{C}_{m}^5$.\\ First let $m=4$. Hence we have $n\geq 6$ and $|W_0|=3$. Let $W_0=\{u_1,u_2,u_3\}$. Since there is no red copy of $\mathcal{C}^5_n$, $\mathcal{P}$ is a maximal path w.r.t. $W$. Use Lemma \ref{spacial configuration2} (by putting $u=f_{\mathcal{P},e_{i-4}}$) to obtain a good $\varpi_{S}$-configuration, say $C_1$, in $\mathcal{H}_{\rm blue}$ with end vertices in $W$ and $S\subseteq e_{i-4}e_{i-3}e_{i-2}$. Since $C_1$ is a good $\varpi_{S}$-configuration, there is a vertex $w\in e_{i-2}\setminus e_{i-3}$ so that $w\notin V(C_1).$ We have two following cases. \\ \begin{itemize} \item[(i)] $V(C_1)\cap W_0\neq \emptyset$. \\ Assume that $u_1\in V(C_1)\cap W_0$. Since $|W_0|=3$, we have $W_0\setminus V(C_1)\neq \emptyset$. Suppose that $u_2\in W_0\setminus V(C_1)$. Let $f=\{u_1,w,v_{4i-6},v_{4i-5},u_2\}.$ If the edge $f$ is blue, then set \begin{eqnarray*} &&g_1=\{u_2,v_{4i-3},v_{4i-2},u_3,z_1\},\\ &&g_2=\{u_2,v_{4i-3},v_{4i-2},u_3,z_2\}. \end{eqnarray*} Since there is no red copy of $\mathcal{C}_{n}^5$ and the edge $e$ is red, then the edges $g_1$ and $g_2$ are blue. Clearly at least one of $\mathcal{C}'=C_1fg_1$ or $\mathcal{C}''=C_1fg_2$ is a blue copy of $\mathcal{C}_4^5$. So we may assume that the edge $f$ is red. If $\{z_1,z_2\}\cap V(C_1)\neq \emptyset$, then set $g=\{u_2,v_{4i-4},v_{4i-3},z_1,z_2\}.$ The edge $g$ is blue (otherwise, $fge_ie_{i+1}\ldots e_{n-1}e_1\ldots e_{i-2}$ is a red copy of $\mathcal{C}^5_{n},$ a contradiction). Again, since there is no red copy of $\mathcal{C}_{n}^5$ and the edge $e$ is red, $C_1g \{v_{4i-3},v_{4i-2},v_{4i-1},u_3,u_1\}$ is a blue copy of $\mathcal{C}_4^5$. So we may suppose that $\{z_1,z_2\}\cap V(C_1)= \emptyset.$ Therefore, $u_1$ and $u_3$ are end vertices of $C_1$ in $W.$ Clearly \begin{eqnarray*} C_1\{u_1,z_1,z_2,v_{4i-4},v_{4i-3}\}\{v_{4i-3},v_{4i-2},v_{4i-1},u_2,u_3\} \end{eqnarray*} is a blue copy of $\mathcal{C}_4^5$. \item[(ii)] $V(C_1)\cap W_0= \emptyset$.\\ Therefore $z_1$ and $z_2$ are end vertices of $C_1$ in $W$. Let $f=\{z_1,w,v_{4i-6},v_{4i-5},u_1\}.$ If the edge $f$ is blue, then \begin{eqnarray*} C_1f\{u_1,u_2,v_{4i-3},v_{4i-2},z_2\}, \end{eqnarray*} is a blue copy of $\mathcal{C}_4^5$. 
Otherwise, since there is no red copy of $\mathcal{C}_{n}^5,$ the edge $g=\{z_1,u_2,u_3,v_{4i-4},v_{4i-3}\}$ is blue and $C_1g\{u_2,u_1,v_{4i-5},v_{4i-2},z_2\}$ is a blue copy of $\mathcal{C}_4^5$. \end{itemize} Now let $m=5$. Therefore, we have $n\geq 7$ and $|W_0|=4$. Let $W_0=\{u_1,\ldots,u_4\}$. Since there is no red copy of $\mathcal{C}^5_n$, $\mathcal{P}$ is a maximal path w.r.t. $W'=W_0\cup \{z_1\}$. Use Lemma \ref{spacial configuration2} (by putting $u=f_{\mathcal{P},e_{i-5}}$) to obtain a good $\varpi_{S}$-configuration, say $C_2$, in $\mathcal{H}_{\rm blue}$ with end vertices in $W'$ and $S\subseteq e_{i-5}e_{i-4}e_{i-3}$. Since $C_2$ is a good $\varpi_{S}$-configuration, there is a vertex $w\in e_{i-3}\setminus e_{i-4}$ so that $w\notin V(C_2).$ Clearly $V(C_2)\cap W_0\neq \emptyset$. By symmetry we may assume that $u_1$ is an end vertex of $C_2$ in $W_0$. Since $|V(C_2)\cap W'|=2$, we have $|W_0\setminus V(C_2)|\geq 2$. Without loss of generality suppose that $u_2,u_3\in W_0\setminus V(C_2)$. Let $f'=\{u_1,v_{4i-9},v_{4i-10},w,u_2\}$. If the edge $f'$ is blue, then set \begin{eqnarray*} &&\mathcal{C}'=C_2f'\{u_2,v_{4i-7},v_{4i-6},v_{4i-5},u_3\}\{u_3,u_4,z_1,v_{4i-3},v_{4i-2}\},\\ &&\mathcal{C}''=C_2f'\{u_2,u_3,v_{4i-1},v_{4i-2},v_{4i-3}\}\{v_{4i-3},v_{4i-4},v_{4i-5},z_1,u_4\}. \end{eqnarray*} Clearly at least one of $\mathcal{C}'$ or $\mathcal{C}''$ is a blue copy of $\mathcal{C}_5^5$. So we may assume that the edge $f'$ is red. Since there is no red copy of $\mathcal{C}_n^5,$ the edge $g'=\{u_1,u_3,z_2,v_{4i-7},v_{4i-8}\}$ is blue. If the edge $g''=\{u_3,v_{4i-9},v_{4i-6},v_{4i-5},u_2\}$ is blue, then $C_2g'g''\{u_2,v_{4i-3},v_{4i-2},z_1,u_4\}$ is a blue copy of $\mathcal{C}_5^5$. Otherwise, \begin{eqnarray*} C_2g'\{u_3,u_2,v_{4i-1},v_{4i-2},v_{4i-3}\}\{v_{4i-3},v_{4i-4},v_{4i-5},z_1,u_4\} \end{eqnarray*} is a blue copy of $\mathcal{C}_5^5$.\\\\ Now let $m=6$. So we have $n\geq 9$ and $|W_0|=4$. Since $\mathcal{P}$ is maximal w.r.t. $W$, using Lemma \ref{spacial configuration2} there is a good $\varpi_{S}$-configuration, say $\mathcal{Q}_1$, in $\mathcal{H}_{\rm blue}$ with end vertices $x_1$ and $y_1$ in $W$ so that $S\subseteq e_{i-7}\cup e_{i-6}\cup e_{i-5}$ and $\mathcal{Q}_1$ does not contain a vertex of $e_{i-5}\setminus e_{i-6},$ say $u$. Now, Since $\mathcal{P}$ is maximal w.r.t. $W\setminus\{x_1\}$, using Lemma \ref{spacial configuration2} there is a good $\varpi_{S}$-configuration, say $\mathcal{Q}_2$, in $\mathcal{H}_{\rm blue}$ with end vertices in $W\setminus \{x_1\}$ so that $S\subseteq \Big((e_{i-4}\setminus\{f_{\mathcal{P},e_{i-4}}\})\cup\{u\}\Big)\cup e_{i-3}\cup e_{i-2}$ and $\mathcal{Q}_2$ does not contain a vertex of $e_{i-2}\setminus e_{i-3},$ say $u'$. Clearly, $\mathcal{Q}_1\cup \mathcal{Q}_2$ is either a blue path of length $4$ or the union of two disjoint blue paths of length $2$. First consider the case $\mathcal{Q}_1\cup \mathcal{Q}_2$ is a blue path, say $\mathcal{Q}$, of length $4$ with end vertices $x_1$ and $y_2$ in $W$. It is easy to see that one of the following cases holds. \begin{itemize} \item[(i)] $|\{x_1,y_2\}\cap \{z_1,z_2\}|=0$\\ In this case, obviously $|V(\mathcal{Q})\cap \{z_1,z_2\}|\leq 1$. By symmetry we may suppose that $z_2\notin V(\mathcal{Q})$. Since $|W_0|=4$, we have $|W_0\setminus V(\mathcal{Q})|\geq 1$. Let $u_1\in W_0\setminus V(\mathcal{Q})$. Consider the edge $f=\{y_2,u',v_{4i-6},v_{4i-5},u_1\}$. 
If the edge $f$ is blue, since there is no red copy of $\mathcal{C}_n^5$ and the edge $e$ is red, then $\mathcal{Q}f\{u_1,v_{4i-3},v_{4i-2},z_2,x_1\}$ is a blue copy of $\mathcal{C}_6^5$. Otherwise, set $T=W\setminus (V(\mathcal{Q})\cup \{u_1\})$. So we have $z_2\in T$ and $|T|=2$. The edge $f'=\{y_2,v_{4i-4},v_{4i-3}\}\cup T$ is blue (otherwise, $ff'e_ie_{i+1}\ldots e_{n-1}e_1\ldots e_{i-2}$ is a red copy of $\mathcal{C}^5_{n},$ a contradiction). Thereby, $\mathcal{Q}f' \{v_{4i-3},v_{4i-2},v_{4i-1},u_1,x_1\}$ is a blue copy of $\mathcal{C}_6^5$. \item[(ii)] $|\{x_1,y_2\}\cap \{z_1,z_2\}|=1$\\ By symmetry we may assume that $y_2=z_1$. Since $|W_0|=4$ and $|W\cap V(\mathcal{Q})|=3$, we have $|W_0\setminus V(\mathcal{Q})|\geq 2$. Let $u_1\in W_0\setminus V(\mathcal{Q})$. Consider the edge $f=\{y_2,u',v_{4i-6},v_{4i-5},u_1\}$. If the edge $f$ is blue, then $\mathcal{Q}f\{u_1,v_{4i-3},v_{4i-2},v_{4i-1},x_1\}$ is a blue copy of $\mathcal{C}_6^5$. Otherwise, set $T=W\setminus (V(\mathcal{Q})\cup \{u_1\})$. Clearly, the edge $f'=\{y_2,v_{4i-4},v_{4i-3}\}\cup T$ is blue and thereby, $\mathcal{Q}f' \{v_{4i-3},v_{4i-2},v_{4i-1},u_1,x_1\}$ is a blue copy of $\mathcal{C}_6^5$. \item[(iii)] $|\{x_1,y_2\}\cap \{z_1,z_2\}|=2$\\ In this case, clearly $|W_0\setminus V(\mathcal{Q})|=3$. Set $T=W_0\setminus V(\mathcal{Q})=\{u_1,u_2,u_3\}$. If the edge $f=\{y_2,u',v_{4i-6},v_{4i-5},u_1\}$ is blue, then $\mathcal{Q}f\{u_1,v_{4i-3},v_{4i-2},u_2,x_1\}$ is a blue copy of $\mathcal{C}_6^5$. Otherwise, the edge $f'=\{y_2,v_{4i-4},v_{4i-3},u_2,u_3\}$ is blue and $\mathcal{Q}f' \{u_3,u_1,v_{4i-5},v_{4i-2},x_1\}$ is a blue copy of $\mathcal{C}_6^5$. \end{itemize} Now let $\mathcal{Q}_1$ and $\mathcal{Q}_2$ are two disjoint blue paths with end vertices $x_1,y_1$ and $x_2,y_2$ in $W,$ respectively. We have the following cases. \begin{itemize} \item[(i)] $|\{x_1,y_1,x_2,y_2\}\cap \{z_1,z_2\}|=0$\\ Let $f=\{y_1,u',v_{4i-6},v_{4i-5},x_2\}$. If the edge $f$ is blue, since there is no red copy of $\mathcal{C}_n^5$ and the edge $e$ is red, then \begin{eqnarray*} \mathcal{Q}_1f\mathcal{Q}_2\{y_2,v_{4i-3},v_{4i-2},v_{4i-1},x_1\}, \end{eqnarray*} is a blue copy of $\mathcal{C}_6^5$. Otherwise, the edge $f'=\{x_1,v_{4i-5},v_{4i-4},v_{4i-3},y_2\}$ is blue (if not, we can find a red copy of $\mathcal{C}_n^5,$ a contradiction). Thereby, $\mathcal{Q}_1f'\mathcal{Q}_2\{x_2,v_{4i-6},v_{4i-2},v_{4i-1},y_1\}$ is a blue copy of $\mathcal{C}_6^5$. \item[(ii)] $|\{x_1,y_1,x_2,y_2\}\cap \{z_1,z_2\}|\geq 1$\\ By symmetry we may assume that $y_2=z_1.$ Clearly $|W_0\setminus V(\mathcal{Q}_1\cup \mathcal{Q}_2)|,|W_0 \cap V(\mathcal{Q}_1)|\geq 1.$ We can without loss of generality suppose that $u_1\in W_0\setminus V(\mathcal{Q}_1\cup\mathcal{Q}_2) $ and $x_1\in W_0 \cap V(\mathcal{Q}_1).$ If the edge $f=\{y_1,u',v_{4i-6},v_{4i-5},y_2\}$ is blue, then set \begin{eqnarray*} &&g=\{x_2,v_{4i-3},v_{4i-2},u_1,x_1\},\\ &&g'=\{x_2,v_{4i-3},v_{4i-2},v_{4i-1},x_1\}. \end{eqnarray*} If $x_2=z_2,$ then $\mathcal{Q}_1f\mathcal{Q}_2g $ is a blue copy of $\mathcal{C}_6^5$ (if not, $gee_{i+1}\ldots e_{n-1}e_1\ldots e_{i-1}$ is a red copy of $\mathcal{C}_n^5,$ a contradiction). Otherwise, the edge $g'$ is blue (if not, $g'ee_{i+1}\ldots e_{n-1}e_1\ldots e_{i-1}$ is a red copy of $\mathcal{C}_n^5$) and $\mathcal{Q}_1f\mathcal{Q}_2g'$ is a blue copy of $\mathcal{C}_6^5$.\\ Therefore, we may assume that the edge $f$ is red. 
Since there is no red copy of $\mathcal{C}_n^5,$ the edges $h_1=\{x_1,v_{4i-5},v_{4i-4},v_{4i-3},x_2\}$ and $h_2=\{y_1,v_{4i-4},v_{4i},u_1,x_2\}$ are blue (if the edge $h_j,$ $1\leq j \leq 2,$ is red, then $fh_je_{i}\ldots e_{n-1}e_1\ldots e_{i-2}$ is a red copy of $\mathcal{C}_n^5$). If $y_1\neq z_2,$ then $\mathcal{Q}_1h_1\mathcal{Q}_2 \{y_2,u_1,v_{4i-2},v_{4i-6},y_1\}$ is a blue copy of $\mathcal{C}_6^5$. If $y_1=z_2,$ then there is a vertex $u_2\in W_0\setminus(V(\mathcal{Q}_1\cup \mathcal{Q}_2)\cup\{u_1\})$ and clearly $\mathcal{Q}_1h_2\mathcal{Q}_2 \{y_2,v_{4i-3},v_{4i-2},u_2,x_1\}$ is a blue copy of $\mathcal{C}_6^5$. \end{itemize} \noindent \textbf{Case 2. } For every edge $e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\}$, $1\leq i\leq n-1$, and every vertices $z_1,z_2\in W$ the edges $\{v_{4i-1},v_{4i},v_{4i+1},z_1,z_2\}$ and $\{v_{4i-3},v_{4i-2},v_{4i-1},z_1,z_2\}$ are blue. \noindent Let $W=\{u_1,u_2,\ldots,u_{\lfloor\frac{m-1}{2}\rfloor+4}\}$. Set \begin{eqnarray*} h_i=\left\lbrace \begin{array}{ll} (e_i\setminus \{v_{4i},v_{4i+1}\})\cup \{u_i,u_{i+1}\} &\mbox{for $1\leq i \leq m-1$}, \\ (e_i\setminus \{v_{4i},v_{4i+1}\})\cup \{u_i,u_{1}\} &\mbox{for $i=m$}. \end{array}\right. \end{eqnarray*} Since $4\leq m \leq 6$ and $h_i$'s, $1\leq i \leq m$, are blue, then $h_1h_2\ldots h_m$ is a blue $\mathcal{C}^5_m$. }\end{proof} \begin{lemma}\label{Cn-1 implies cm} Let $n\geq \Big\lfloor\frac{3m}{2}\Big\rfloor$, $m\geq 7$ and $\mathcal{H}=\mathcal{K}^5_{4n+\lfloor\frac{m-1}{2}\rfloor}$ be $2$-edge colored red and blue. If there is no red copy of $\mathcal{C}^5_{n}$ and $\mathcal{C}=\mathcal{C}^5_{n-1}\subseteq \mathcal{H}_{\rm red}$, then $\mathcal{C}^5_{m}\subseteq \mathcal{H}_{\rm blue}$. \end{lemma} \begin{proof}{Let $\mathcal{C}=e_1e_2\ldots e_{n-1}$ be a copy of $\mathcal{C}_{n-1}^5$ in $\mathcal{H}_{\rm red}$ with edges \begin{eqnarray*}e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\}\hspace{0.5 cm} (\rm mod\ \ 4(n-1)),\hspace{0.5 cm} 1\leq i\leq n-1. \end{eqnarray*} Also, let $W=V(\mathcal{H})\setminus V(\mathcal{C})$. We have the following cases: \noindent \textbf{Case 1. } For some edge $e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\}$, $1\leq i\leq n-1$, there are vertices $z_1,z_2\in W$ and $v'\in e_{i+1}\setminus\{l_{\mathcal{C},e_{i+1}}\}$ (resp. $z_1,z_2\in W$ and $v'\in e_{i-1}\setminus\{f_{\mathcal{C},e_{i-1}}\}$) such that the edge $\{v_{4i-1},v_{4i},v',z_1,z_2\}$ (resp. the edge $\{v',v_{4i-2},v_{4i-1},z_1,z_2\}$) is red. We can without loss of generality assume that there are vertices $z_1,z_2\in W$ and $v'\in e_{i+1}\setminus\{l_{\mathcal{C},e_{i+1}}\}$ so that the edge $e=\{v_{4i-1},v_{4i},v',z_1,z_2\}$ is red. Set \begin{eqnarray*}\mathcal{P}=e_{i+1}e_{i+2}\ldots e_{n-1}e_1e_2\ldots e_{i-2}e_{i-1}\end{eqnarray*} and $W_0=W\setminus \{z_1,z_2\}$. If there are vertices $z_1,z_2\in W$ and $v'\in e_{i-1}\setminus\{f_{\mathcal{C},e_{i-1}}\}$ so that the edge $\{v',v_{4i-2},v_{4i-1},z_1,z_2\}$ is red, consider the path \begin{eqnarray*}\mathcal{P}=e_{i-1}e_{i-2}\ldots e_1e_{n-1}e_{n-2}\ldots e_{i+2}e_{i+1}\end{eqnarray*} and repeat the following process to get a blue copy of $\mathcal{C}_{m}^5$.\\ Since $m\geq 7$, we have $|W_0|=\lfloor\frac{m-1}{2}\rfloor+2 \geq 5$. Also, since there is no red copy of $\mathcal{C}^5_n$, $\mathcal{P}$ is a maximal path w.r.t. $W_0$. 
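For the reader's convenience, we record the count behind the bound $|W_0|\geq 5$: since $|V(\mathcal{H})|=4n+\lfloor\frac{m-1}{2}\rfloor$ and $|V(\mathcal{C})|=4(n-1)$, \begin{eqnarray*} |W_0|=|W|-2=\Big(4n+\Big\lfloor\frac{m-1}{2}\Big\rfloor\Big)-4(n-1)-2=\Big\lfloor\frac{m-1}{2}\Big\rfloor+2\geq 5 \end{eqnarray*} for $m\geq 7$, so the hypothesis $|W_0|\geq 5$ of Lemma \ref{there is a Pl} is satisfied.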
Applying Lemma \ref{there is a Pl}, there are two disjoint blue paths between $\overline{\mathcal{P}}$, the path obtained from $\mathcal{P}$ by deleting the last $r$ edges for some $r\geq 0,$ and $W'\subseteq W_0$ with the mentioned properties. Consider the paths $\mathcal{Q}$ and $\mathcal{Q}'$ with $\|\mathcal{Q}\|\geq \|\mathcal{Q}'\|$ so that $\ell'=\|\mathcal{Q}\cup\mathcal{Q}'\|$ is maximum. Among these paths, choose $\mathcal{Q}$ and $\mathcal{Q}'$, where $\|\mathcal{Q}\|$ is maximum. Since $\|\mathcal{P}\|=n-2,$ by Lemma \ref{there is a Pl}, we have $\ell'=\frac{2}{3}(n-2-r)$.\\ \noindent {\it Subcase $1$.} $\|\mathcal{Q}'\|\neq 0$.\\ Set $T=W_0\setminus W'.$ Let $x,y$ and $x',y'$ be the end vertices of $\mathcal{Q}$ and $\mathcal{Q}'$ in $W',$ respectively. Using Lemma \ref{there is a Pl} we have one of the following cases: \begin{itemize} \item[I.] $|T|\geq 2.$ In this case, we have $\ell'\leq 2\lfloor\frac{m-1}{2}\rfloor-4$ and so $r\geq 4$. Therefore, this case does not occur by Lemma \ref{there is a Pl}.\\ \item[II.] $|T|=1.$ Let $T=\{u_1\}$. Clearly $\ell'=2\lfloor\frac{m-1}{2}\rfloor-2$. If $m$ is odd, then $\ell'=m-3$ and $r\geq 2$. Set $h=\{y,w,v_{4i-10},v_{4i-9},x'\}$, where $w\in (e_{i-3}\setminus e_{i-4})\setminus V(\mathcal{Q}\cup \mathcal{Q}')$ (the existence of $w$ is guaranteed by Lemma \ref{there is a Pl}). If $h$ is blue, then set \begin{eqnarray*} &&g_1=\{{y'},v_{4i-7},v_{4i-6},v_{4i-5},u_1\},\\ &&g_2=\{{y'},z_1,z_2,v_{4i-4},v_{4i-3}\}. \end{eqnarray*} If the edge $g_1$ is blue, Since there is no red copy of $\mathcal{C}_n^5,$ \begin{eqnarray*} \mathcal{Q}h{\mathcal{Q}'}g_1\{u_1,v_{4i-3},v_{4i-2},v_{4i-1},x\}, \end{eqnarray*} is a blue copy of $\mathcal{C}_m^5$. So we may assume that the edge $g_1$ is red. Then the edge $g_2$ is blue (otherwise, $g_1g_2e_i e_{i+1}\ldots e_{n-1}e_1 \ldots e_{i-2}$ is a red copy of $\mathcal{C}_n^5$, a contradiction) and $\mathcal{Q}h{\mathcal{Q}'}g_2\{v_{4i-3},v_{4i-2},v_{4i-1},u_1,x\}$ is a blue copy of $\mathcal{C}_m^5$. So we may assume that the edge $h$ is red. Since there is no red copy of $\mathcal{C}_n^5$ the edge $h'=\{x,v_{4i-9},v_{4i-8},v_{4i-7},{y'}\}$ is blue (if not, $hh'e_{i-1}e_i\ldots e_{n-1}e_{1}\ldots e_{i-3}$ is a red copy of $\mathcal{C}_n^5$, a contradiction). Now set \begin{eqnarray*} &&h_1=\{x',v_{4i-10},v_{4i-6},v_{4i-5},u_1\},\\ &&h_2=\{x',z_1,z_2,v_{4i-4},v_{4i-3}\}. \end{eqnarray*} If the edge $h_1$ is blue, then $\mathcal{Q}h'{\mathcal{Q}'}h_1\{u_1,v_{4i-3},v_{4i-2},v_{4i-1},y\}$ is a blue copy of $\mathcal{C}_m^5$. Therefore, we may assume that the edge $h_1$ is red. Since there is no red copy of $\mathcal{C}_n^5,$ the edge $h_2$ is blue and $\mathcal{Q}h'{\mathcal{Q}'}h_2\{v_{4i-3},v_{4i-2},v_{4i-1},u_1,y\}$ is a blue copy of $\mathcal{C}_m^5.$ Now let $m$ be even. Therefore, $\ell'=m-4$ and $r\geq 4.$ Let $u$ be a vertex of $e_{i-5}\setminus e_{i-6}$ so that $u\notin V(\mathcal{Q}\cup \mathcal{Q}')$ (the existence of $u$ is guaranteed by Lemma \ref{there is a Pl}). Set \begin{eqnarray*} \overline{W}=\{x,y,x',y',z_1,z_2,u_1\}. \end{eqnarray*} Using Lemma \ref{spacial configuration2} there is a good $\varpi_{S}$-configuration in $\mathcal{H}_{\rm blue}$, say $C_1$, with end vertices in $\overline{W}$ so that $S\subseteq ((e_{i-4}\setminus \{f_{\mathcal{P},e_{i-4}}\})\cup\{u\})e_{i-3}e_{i-2}$. Let $w\in e_{i-2}\setminus e_{i-3}$ so that $w\notin C_1$. 
Using Lemma \ref{spacial configuration2} and since $\ell'$ is maximum, we may assume that $y'$ and $z_1$ are end vertices of $C_1.$ Let $h=\{z_1,v_{4i-5},v_{4i-6},w,x\}$. If the edge $h$ is blue then \begin{eqnarray*} \mathcal{Q}'C_1 h{\mathcal{Q}}\{y,v_{4i-3},v_{4i-2},v_{4i-1},x'\} \end{eqnarray*} is a blue copy of $\mathcal{C}^5_m$. Otherwise, the edge $h'=\{z_1,z_2,v_{4i-4},v_{4i-3},y\}$ is blue and clearly \begin{eqnarray*}\mathcal{Q}'C_1 h'{\mathcal{Q}}\{x,v_{4i-5},v_{4i-2},v_{4i-1},x'\} \end{eqnarray*} is a copy of $\mathcal{C}^5_m$ in $\mathcal{H}_{\rm blue}$. \\ \item[III.] $|T|=0.$ We can clearly observe that $\ell'=2\lfloor\frac{m-1}{2}\rfloor$. If $m$ is even, then $\ell'=m-2$ and $r\geq 1$. By Lemma \ref{there is a Pl}, there is a vertex $w\in e_{i-2}\setminus e_{i-3}$ so that $w\notin V(\mathcal{Q}\cup\mathcal{Q}')$. Since there is no red copy of $\mathcal{C}_n^5$ and the edge $e$ is red, the edges \begin{eqnarray*} &&g=\{y', v_{4i-3},v_{4i-2},v_{4i-1}, x\},\\ &&g'=\{x', v_{4i-3},v_{4i-2},v_{4i-1}, y\}, \end{eqnarray*} are blue. If the edge $f=\{y,w,v_{4i-6},v_{4i-5},x'\}$ is blue, then $\mathcal{Q}f\mathcal{Q}'g$ is a blue copy of $\mathcal{C}^5_m$. Therefore, we may assume that the edge $f$ is red. Thereby the edge $f'=\{x,v_{4i-5},v_{4i-4},v_{4i},y'\}$ is blue (if not, $ff' e_i\ldots e_{n-1}e_{1}\ldots e_{i-2}$ is a red copy of $\mathcal{C}_n^5$, a contradiction) and $\mathcal{Q}f'\mathcal{Q}'g' $ is desired $\mathcal{C}^5_m$. If $m$ is odd, then $\ell'=m-1$. Remove the last two edges of $\mathcal{Q}\cup \mathcal{Q}'$ to get two disjoint blue paths $\overline{\mathcal{Q}}$ and $\overline{\mathcal{Q}'}$ so that $\|\overline{\mathcal{Q}}\cup \overline{\mathcal{Q}'}\|=m-3$ and $(\overline{\mathcal{Q}}\cup \overline{\mathcal{Q}'})\cap (e_{i-2}\cup e_{i-1})=\emptyset$. We can without loss of generality assume that $\mathcal{Q}=\overline{\mathcal{Q}}$. First let $\|\overline{\mathcal{Q}'}\|>0$ and $x',y''$ with $y''\neq y'$ be the end vertices of $\overline{\mathcal{Q}'}$ in $W'$. Since there is no red copy of $\mathcal{C}_n^5$ and the edge $e$ is red, the edges \begin{eqnarray*} &&g_1= \{y',v_{4i-3},v_{4i-2},v_{4i-1},x\},\\ &&g_2=\{y',v_{4i-3},v_{4i-2},v_{4i-1},y\}, \end{eqnarray*} are blue. Consider the edge $h_1=\{y,v_{4i-11},v_{4i-10},v_{4i-9},x'\}.$ If the edge $h_1$ is blue, then at least one of $\mathcal{C}_1$ or $\mathcal{C}_2$ is the desired blue cycle, where \begin{eqnarray*} &&\mathcal{C}_1=\mathcal{Q}h_1\overline{\mathcal{Q}'}\{y'',v_{4i-7},v_{4i-6},v_{4i-5},y'\}g_1,\\ &&\mathcal{C}_2=\mathcal{Q}h_1\overline{\mathcal{Q}'}\{y'',z_1,z_2,v_{4i-4},v_{4i-3}\}g_1.\\ \end{eqnarray*} So we may assume that the edge $h_1$ is red. Then, since there is no red copy of $\mathcal{C}_n^5,$ the edge $h_2=\{x,v_{4i-9},v_{4i-8},v_{4i-7},y''\}$ is blue. Clearly, at least one of $\mathcal{C}_3$ or $\mathcal{C}_4$ is the desired blue cycle, where \begin{eqnarray*} &&\mathcal{C}_3=\mathcal{Q}h_2\overline{\mathcal{Q}'}\{x',v_{4i-10},v_{4i-6},v_{4i-5},y'\}g_2,\\ &&\mathcal{C}_4=\mathcal{Q}h_2\overline{\mathcal{Q}'}\{x',z_1,z_2,v_{4i-4},v_{4i-3}\}g_2. \end{eqnarray*} If $\|\overline{\mathcal{Q}'}\|=0,$ by some discussions similar to the above, we can find a blue copy of $\mathcal{C}_m^5.$ So we omit it's proof here.\\ \end{itemize} \noindent{\it Subcase $2$}. $\|\mathcal{Q}'\|=0$. \noindent Let $x$ and $y$ be the end vertices of $\mathcal{Q}$ in $W'$ and $T=W_0\setminus W'$. Using Lemma \ref{there is a Pl}, we have one of the following cases: \begin{itemize} \item[I.] $|T|\geq 4$. 
Since $\ell'\leq 2\lfloor\frac{m-1}{2}\rfloor-6$ and $\ell'=\frac{2}{3}(n-2-r),$ we have $r\geq 3$. So this subcase does not occur by Lemma \ref{there is a Pl}.\\ \item[II.] $|T|=3.$ Let $T=\{u_1,u_2,u_3\}$. Clearly $\ell'=2\lfloor\frac{m-1}{2}\rfloor-4$. First let $m$ be odd. Therefore, $\ell'=m-5.$ Since $\ell'=\frac{2}{3}(n-2-r)$ and $n\geq \lfloor\frac{3m}{2}\rfloor,$ we have $r\geq 5.$ By Lemma \ref{there is a Pl}, there is a vertex $v\in e_{i-6}\setminus e_{i-7}$ so that $v\notin V(\mathcal{Q})$. Since $\mathcal{P}$ is maximal w.r.t. $\overline{W}=\{x,y,z_1,z_2,u_1,u_2,u_3\}$, using Lemma \ref{spacial configuration2}, there is a good $\varpi_{S}$-configuration, say $C_1=fg$, in $\mathcal{H}_{\rm blue}$ with end vertices $\overline{x}\in f$ and $\overline{y}\in g$ in $\overline{W}$ and $S\subseteq (e_{i-5}\setminus\{f_{\mathcal{P},e_{i-5}}\}\cup\{v\})e_{i-4}e_{i-3}$. By Lemma \ref{spacial configuration2}, there is a vertex of $e_{i-3}\setminus e_{i-4},$ say $w,$ so that $w\notin V(C_1)$. Moreover there are two subsets $W_1$ and $W_2$ of $\overline{W}$ with $|W_1|\geq 4$ and $|W_2|\geq 3$ so that for every distinct vertices $\overline{x}'\in W_1$ and $\overline{y}'\in W_2,$ the path $C_1'=((f\setminus \{\overline{x}\})\cup\{\overline{x}'\})((g\setminus \{\overline{y}\})\cup\{\overline{y}'\})$ is also a good $\varpi_{S}$-configuration in $\mathcal{H}_{\rm blue}$ with end vertices $\overline{x}'$ and $\overline{y}'$ in $\overline{W}$. Since $|W_1|\geq 4$ and $\ell'$ is maximum, by symmetry, we may assume that $y$ and $z_1$ are the end vertices of $C_1$ in $\overline{W}$. If the edge $f=\{u_1,w,v_{4i-10},v_{4i-9},x\}$ is blue, then set \begin{eqnarray*} &&g_1=\{u_2,v_{4i-5},v_{4i-6},v_{4i-7},u_1\},\\ &&g_2=\{z_1,v_{4i-5},v_{4i-4},u_3,v_{4i-3}\}. \end{eqnarray*} First let the edge $g_1$ is blue. Since there is no red copy of $\mathcal{C}_n^5$ and the edge $e$ is red, the cycle $g_1f\mathcal{Q}C_1\{z_1,v_{4i-3},v_{4i-2},u_3,u_2\}$ is a blue copy of $\mathcal{C}_m^5.$ So we may suppose that the edge $g_1$ is red. Therefore, the edge $g_2$ is blue ( otherwise, $g_1 g_2 e_i e_{i+1} \ldots e_{n-1} e_1 \ldots e_{i-2}$ is a red copy of $\mathcal{C}_n^5,$ a contradiction to our assumption). Again, since there is no red copy of $\mathcal{C}_n^5$ and the edge $e$ is red, the edge $g_3=\{v_{4i-3},v_{4i-2},v_{4i-1},u_2,u_1\}$ is blue (otherwise, $g_3ee_{i+1}\ldots e_{n-1}e_1\ldots e_{i-1}$ is a red copy of $\mathcal{C}_n^5,$ a contradiction) and $f\mathcal{Q}C_1g_2g_3$ is a blue copy of $\mathcal{C}_m^5$. Now suppose that the edge $f$ is red. Then the edge $f'=\{u_3,v_{4i-7},v_{4i-8},v_{4i-9},z_1\}$ is blue. Let \begin{eqnarray*} &&h_1=\{u_2,v_{4i-5},v_{4i-6},v_{4i-10},u_3\},\\ &&h_2= \{x,u_1,v_{4i-5},v_{4i-4},v_{4i-3}\}. \end{eqnarray*} If the edge $h_1$ is blue, then $h_1f' C_1\mathcal{Q}\{x,v_{4i-3},v_{4i-2},v_{4i-1},u_2\}$ is a blue copy of $\mathcal{C}_m^5.$ So suppose that the edge $h_1$ is red. Therefore, the edge $h_2$ is blue. Again, since there is no red copy of $\mathcal{C}_n^5$ and the edge $e$ is red, the cycle $f'C_1\mathcal{Q} h_2\{v_{4i-3},v_{4i-2},v_{4i-1},u_2,u_3\}$ is a blue copy of $\mathcal{C}_m^5.$ Now, let $m$ be even. Hence, we have $\ell'=m-6$ and $r\geq 7$. Let $ \overline{W} = \{x, y, z_1, z_2, u_1, u_2, u_3\}$ and $v\in e_{i-8}\setminus e_{i-9}$ be a vertex so that $v\notin V(\mathcal{Q})$ (the existence of $v$ is guaranteed by Lemma \ref{there is a Pl}). 
Using Lemma \ref{spacial configuration2}, there is a good $\varpi_{S_1}$-configuration, say $C_1$, in $\mathcal{H}_{\rm blue}$ with end vertices in $\overline{W}$ so that $S_1\subseteq (e_{i-7}\setminus\{f_{\mathcal{P},e_{i-7}}\}\cup\{v\})e_{i-6}e_{i-5}$ and at least one of the vertices of $e_{i-5}\setminus e_{i-6},$ say $w,$ is not in $C_1$. By an argument similar to the case that $m$ is odd, we may assume that $y$ and $z_1$ are the end vertices of $C_1$ in $\overline{W}$. Now set $\widetilde{W}=\overline{W}\setminus\{y\}=\{x,z_1,z_2,u_1,u_2,u_3\}.$ Again, using Lemma \ref{spacial configuration2}, there is a good $\varpi_{S_2}$-configuration, say $C_2$, in $\mathcal{H}_{\rm blue}$ with end vertices in $\widetilde{W}$ so that $S_2\subseteq (e_{i-4}\setminus\{f_{\mathcal{P},e_{i-4}}\}\cup\{w\})e_{i-3}e_{i-2}$ and at least one of the vertices of $e_{i-2}\setminus e_{i-3},$ say $w',$ is not in $C_2$. By the properties of Lemma \ref{spacial configuration2} and since $\ell'$ is maximum, we may suppose that $x$ and $z_2$ or $z_2$ and $u_1$ are end vertices of $C_2$ in $\widetilde{W}.$ It is not difficult to show that in each cases there is a blue copy of $\mathcal{C}^5_m$. Here, for abbreviation, we omit the proof.\\ \item[III.] $|T|=2.$ Let $T=\{u_1,u_2\}$. Since $\ell'=2\lfloor\frac{m-1}{2}\rfloor-2$, for odd $m$ we have $\ell'=m-3$ and $r\geq 2$. By Lemma \ref{there is a Pl}, there is a vertex $w\in e_{i-3}\setminus e_{i-4}$ so that $w\notin V(\mathcal{Q}).$ If the edge $f=\{y,w,v_{4i-10},v_{4i-9},u_1\}$ is blue, then set \begin{eqnarray*} &&g_1=\{u_1,z_1,z_2,v_{4i-4},v_{4i-3}\},\\ &&g_2=\{u_1,v_{4i-7},v_{4i-6},v_{4i-5},u_2\}. \end{eqnarray*} Since there is no red copy of $\mathcal{C}_n^5,$ at least one of the edges $g_1$ or $g_2$, say $g',$ is blue (otherwise, $g_2g_1 e_{i}e_{i+1}\ldots e_{n-1}e_1\ldots e_{i-2}$ is a red copy of $\mathcal{C}_n^5$). Now, since that edge $e$ is red, the cycle $\mathcal{C}_1=\mathcal{Q}fg'\{v_{4i-3},v_{4i-2},v_{4i-1},u_2,x\}$ is a blue copy of $\mathcal{C}_m^5$. If the edge $f$ is red, then the edge $g=\{x,v_{4i-9},v_{4i-8},v_{4i-7},u_2\}$ is blue and at least one of $\mathcal{C}_3$ or $\mathcal{C}_4$ is a blue copy of $\mathcal{C}_m^5$, where \begin{eqnarray*} &&\mathcal{C}_3=\mathcal{Q}g\{u_2,u_1,v_{4i-1},v_{4i-2},v_{4i-3}\}\{v_{4i-3},v_{4i-4},v_{4i-5},z_1,y\},\\ && \mathcal{C}_4=\mathcal{Q}g\{u_2,v_{4i-10},v_{4i-6},v_{4i-5},u_1\}\{u_1,v_{4i-3},v_{4i-2},v_{4i-1},y\}. \end{eqnarray*} Now, we may assume that $m$ is even. Consequently, $\ell'=m-4$ and $r\geq 4$. Let $\overline{W}=\{x,y,z_1,z_2,u_1,u_2\}$ and $u$ be a vertex of $e_{i-5}\setminus e_{i-6}$ so that $u\notin V(\mathcal{Q})$. Using Lemma \ref{spacial configuration2}, there is a good $\varpi_{S}$-configuration, say $C_1$, in $\mathcal{H}_{\rm blue}$ with end vertices in $\overline{W}$ so that $S\subseteq ((e_{i-4}\setminus \{f_{\mathcal{P},e_{i-4}}\})\cup\{u\})e_{i-3}e_{i-2}$. Let $w\in e_{i-2}\setminus e_{i-3}$ so that $w\notin C_1$. Since $\ell'$ is maximum, then $u_1,u_2$ or $u_j,x$ or $u_j,y$ for $1\leq j \leq 2$ can not be end vertices of $C_1.$ By symmetry we may assume that one of the following cases holds:\\ {\it $(i)$ $y$ and $z_1$ are the end vertices of $C_1$ $($when $y,z_2$ or $x,z_1$ or $x,z_2$ are end vertices of $C_1$ the proof is similar to this case$)$.} If the edge $f=\{u_1,v_{4i-5},v_{4i-6},w,x\}$ is blue, then $\mathcal{Q}C_1\{z_1,v_{4i-2},v_{4i-3},u_2,u_1\}f$ is a blue copy of $\mathcal{C}_m^5$. So we may suppose that the edge $f$ is red. 
Since there is no red copy of $\mathcal{C}_n^5,$ the edge $f'=\{z_1,u_2,v_{4i-5},v_{4i-4},v_{4i-3}\}$ is blue and \begin{eqnarray*}\mathcal{Q}C_1f'\{v_{4i-3},v_{4i-2},v_{4i-1},u_1,x\} \end{eqnarray*} is a blue copy of $\mathcal{C}_m^5.$\\ {\it $(ii)$ $u_2$ and $z_1$ are the end vertices of $C_1$ $($when $u_2,z_2$ or $u_1,z_1$ or $u_1,z_2$ are end vertices of $C_1$ the proof is similar to this case$)$.} If the edge $f=\{y,w,v_{4i-6},v_{4i-5},u_2\}$ is blue, then $\mathcal{Q}fC_1\{z_1,v_{4i-3},v_{4i-2},u_1,x\}$ is a blue copy of $\mathcal{C}_m^5$. Otherwise, $\mathcal{Q}f'C_1\{u_2,z_2,v_{4i-2},v_{4i-3},y\}$ is our desired cycle, where $f'=\{x,v_{4i-5},v_{4i-4},v_{4i-1},z_1\}.$\\ \item[IV.] $|T|=1.$ Let $T=\{u_1\}$. One can easily check that $\ell'=2\lfloor\frac{m-1}{2}\rfloor$. If $m$ is odd, then $\ell'=m-1$. Let $w$ be a vertex of $e_{i-1}\setminus e_{i-2}$ so that $w\notin V(\mathcal{Q})$ (the existence of $w$ is guaranteed by Lemma \ref{there is a Pl}). Since there is no red copy of $\mathcal{C}^5_n$ and the edge $e$ is red, the cycle $\mathcal{Q}\{y,u_1,v_{4i-1},w,x\}$ is a blue copy of $\mathcal{C}^5_m$. Now, we may assume that $m$ is even. So $\ell'=m-2$ and $r\geq 1$. Let $w$ be a vertex of $e_{i-2}\setminus e_{i-3}$ so that $w\notin V(\mathcal{Q})$. Since there is no red copy of $\mathcal{C}^5_n$, at least one of the edges \begin{eqnarray*} &&g_1=\{y,w,v_{4i-6},v_{4i-5},u_1\},\\ &&g_2=\{x,v_{4i-5},v_{4i-4},z_1,v_{4i-3}\}, \end{eqnarray*} is blue (if not, $g_1g_2 e_ie_{i+1}\ldots e_{n-1}e_1\ldots e_{i-2}$ forms a red copy of $\mathcal{C}^5_n$). If the edge $g_1$ is blue, then \begin{eqnarray*} \mathcal{Q}g_1\{u_1,v_{4i-3},v_{4i-2},v_{4i-1},x\} \end{eqnarray*} is a blue copy of $\mathcal{C}^5_m.$ Otherwise, \begin{eqnarray*} \mathcal{Q}g_2\{v_{4i-3},v_{4i-2},v_{4i-1}, u_1,y\} \end{eqnarray*} is a copy of $\mathcal{C}^5_m$ in $\mathcal{H}_{\rm blue}$. \item[V.] $|T|=0.$ One can easily check that $\ell'=2\lfloor\frac{m-1}{2}\rfloor+2$. Remove the last two edges of $\mathcal{Q}$ to get two disjoint blue paths $\overline{\mathcal{Q}}$ and $\overline{\mathcal{Q}'}$ so that $\|\overline{\mathcal{Q}}\cup\overline{\mathcal{Q}'}\|=2\lfloor\frac{m-1}{2}\rfloor$ and $(\overline{\mathcal{Q}}\cup \overline{\mathcal{Q}'})\cap((e_{i-3}\setminus\{f_{\mathcal{P},e_{i-3}}\})\cup e_{i-2}\cup e_{i-1})=\emptyset$. By an argument similar to the case $|T|=0$ of Subcase $1$ and the case $|T|=1$ of this subcase, we can find a blue copy of $\mathcal{C}^5_m.$ \end{itemize} \noindent \textbf{Case 2. }For every edge $e_i=\{v_{4i-3},v_{4i-2},v_{4i-1},v_{4i},v_{4i+1}\}$, $1\leq i\leq n-1$, all vertices $z_1,z_2\in W$ and every $v'\in e_{i+1}\setminus\{l_{\mathcal{C},e_{i+1}}\}$ (also $v'\in e_{i-1}\setminus\{f_{\mathcal{C},e_{i-1}}\}$), the edge $\{v_{4i-1},v_{4i},v',z_1,z_2\}$ (also the edge $\{v',v_{4i-2},v_{4i-1},z_1,z_2\}$) is blue. Let $W=\{u_1,u_2,\ldots,u_{\lfloor\frac{m-1}{2}\rfloor+4}\}$. First assume that $m=7$. Set \begin{eqnarray*} h_i=\left\lbrace \begin{array}{ll} (e_i\setminus \{v_{4i},v_{4i+1}\})\cup \{u_i,u_{i+1}\} &\mbox{for $1\leq i \leq 6$}, \\ (e_i\setminus \{v_{4i},v_{4i+1}\})\cup \{u_i,u_{1}\} &\mbox{for $i=7$}. \end{array}\right. \end{eqnarray*} Since the edges $h_i$, $1\leq i \leq 7$, are blue, $h_1h_2\ldots h_7$ is a blue copy of $\mathcal{C}^5_7$. Therefore, we may suppose that $m\geq 8$. Let $\mathcal{P}=e_1e_2\ldots e_{n-3}$ and $W_0=W\setminus \{u_1\}.$ Clearly $|W_0|\geq 6$. Since there is no red copy of $\mathcal{C}^5_n$, $\mathcal{P}$ is a maximal path w.r.t. $W_0$.
Use Lemma \ref{there is a Pl} to obtain two disjoint blue paths $\mathcal{Q}$ and $\mathcal{Q}'$ between $\overline{\mathcal{P}}$, the path obtained from $\mathcal{P}$ by deleting its last $r$ edges for some $r\geq 0$, and a subset $W'\subseteq W_0$ with the properties stated there. Consider the paths $\mathcal{Q}$ and $\mathcal{Q}'$ with $\|\mathcal{Q}\|\geq \|\mathcal{Q}'\|$ so that $\ell'=\|\mathcal{Q}\cup \mathcal{Q}'\|$ is maximum. Among these paths, choose $\mathcal{Q}$ and $\mathcal{Q}'$, where $\|\mathcal{Q}\|$ is maximum. Since $\|\mathcal{P}\|=n-3,$ by Lemma \ref{there is a Pl}, we have $\ell'=\frac{2}{3}(n-3-r)$.\\ \noindent {\it Subcase $1$}. $\|\mathcal{Q}'\|\neq 0$. \noindent Set $T=W_0\setminus W'.$ Let $x,y$ and $x',y'$ be the end vertices of $\mathcal{Q}$ and $\mathcal{Q}'$ in $W'$, respectively. Using Lemma \ref{there is a Pl} we have one of the following cases: \begin{itemize} \item[I.] $|T|\geq 3.$ In this case we have $\ell'\leq 2\lfloor\frac{m-1}{2}\rfloor-4$ and so $r\geq 4$. Therefore, this case does not occur by Lemma \ref{there is a Pl}. \item[II.] $|T|=2.$ Let $T=\{u_2,u_3\}$. In this case, $\ell'=2\lfloor\frac{m-1}{2}\rfloor-2$. If $m$ is even, then $\ell'=m-4$ and $r\geq 3$. This is impossible by Lemma \ref{there is a Pl}. So we may assume that $m$ is odd. Therefore, $\ell'=m-3$ and $r\geq 1$. Based on our assumptions, the edges $h_1,h_2$ and $h_3$ are blue, where \begin{eqnarray*} &&h_1=(e_{n-3}\setminus \{v_{4(n-3)-3},v_{4(n-3)-2}\})\cup\{y,x'\},\\ &&h_2=(e_{n-2}\setminus \{v_{4(n-2)-3},v_{4(n-2)-2}\})\cup\{y',u_2\},\\ &&h_3=(e_{n-1}\setminus \{v_{4(n-1)},v_1\})\cup\{u_3,x\}. \end{eqnarray*} Therefore, $\mathcal{Q}h_1\mathcal{Q}' h_2h_3$ is our desired blue copy of $\mathcal{C}_m^5$. \item[III.] $|T|=1.$ By symmetry we may assume that $T=\{u_2\}$. Clearly $\ell'=2\lfloor\frac{m-1}{2}\rfloor$. If $m$ is odd, then $\ell'=m-1$. Remove the last two edges of $\mathcal{Q}\cup \mathcal{Q}'$ to get two disjoint blue paths $\overline{\mathcal{Q}}$ and $\overline{\mathcal{Q}'}$ so that $V(\overline{\mathcal{Q}}\cup \overline{\mathcal{Q}'})\cap (e_{n-4}\cup e_{n-3})=\emptyset$. We may assume without loss of generality that $\mathcal{Q}=\overline{\mathcal{Q}}$. First let $\|\overline{\mathcal{Q}'}\|>0$ and $x',y''$ with $y''\neq y'$ be the end vertices of $\overline{\mathcal{Q}'}$ in $W'$. It is easy to check that $\mathcal{Q} h_1 \overline{\mathcal{Q}'} h_2 h_3$ is a blue copy of $\mathcal{C}_m^5$, where \begin{eqnarray*} &&h_1=(e_{n-3}\setminus \{v_{4(n-3)-3},v_{4(n-3)-2}\})\cup\{y,x'\},\\ &&h_2=(e_{n-2}\setminus \{v_{4(n-2)-3},v_{4(n-2)-2}\})\cup\{y'',y'\},\\ &&h_3=(e_{n-1}\setminus \{v_{4(n-1)},v_1\})\cup\{u_2,x\}. \end{eqnarray*} So we may assume that $\|\overline{\mathcal{Q}'}\|=0$. Clearly $\mathcal{Q} h'_1 h'_2 h'_3$ is a blue copy of $\mathcal{C}_m^5$, where \begin{eqnarray*} &&h'_1=(e_{n-3}\setminus \{v_{4(n-3)-3},v_{4(n-3)-2}\})\cup\{y,x'\},\\ &&h'_2=(e_{n-2}\setminus \{v_{4(n-2)-3},v_{4(n-2)-2}\})\cup\{x',y'\},\\ &&h'_3=(e_{n-1}\setminus \{v_{4(n-1)},v_1\})\cup\{u_2,x\}. \end{eqnarray*} Now let $m$ be even. Therefore, $\ell'=m-2$ and $r\geq 0.$ Let $w$ be a vertex of $e_{n-3}\setminus e_{n-4}$ so that $w\notin V(\mathcal{Q}\cup \mathcal{Q}')$ (the existence of $w$ is guaranteed by Lemma \ref{there is a Pl}). By the assumption, the edges $h$ and $h'$ are blue, where \begin{eqnarray*} &&h=(e_{n-1}\setminus\{v_{4(n-1)},v_{1}\})\cup\{y',x\},\\ &&h'=(e_{n-2}\setminus\{v_{4(n-2)-3},v_{4(n-2)},v_{4(n-2)+1}\})\cup\{w,y,x'\}.
\end{eqnarray*} Therefore, $\mathcal{Q}h'\mathcal{Q}' h$ is a blue copy of $\mathcal{C}_m^5$. \end{itemize} \noindent{\it Subcase $2$.} $\|\mathcal{Q}'\|=0$. \noindent Let $x$ and $y$ be the end vertices of $\mathcal{Q}$ in $W'$ and $T=W_0\setminus W'$. Using Lemma \ref{there is a Pl} we have the following cases: \begin{itemize} \item[I.] $|T|\geq 4$. In this case, we have $\ell'=2\lfloor\frac{m-1}{2}\rfloor-4$ and $r\geq 4$. So this case does not occur by Lemma \ref{there is a Pl}. \item[II.] $|T|=3.$ Let $T=\{u_2,u_3,u_4\}$. Clearly $\ell'=2\lfloor\frac{m-1}{2}\rfloor-2$. First let $m$ be odd. Then $\ell'=m-3$ and $r\geq 1$. Set \begin{eqnarray*} &&h_1=(e_{n-2}\setminus \{v_{4(n-2)},v_{4(n-2)+1}\})\cup\{y,u_1\},\\ &&h_2=(e_{n-2}\setminus \{v_{4(n-2)-3},v_{4(n-2)-2}\})\cup\{u_2,u_3\},\\ &&h_3=(e_{n-1}\setminus \{v_{4(n-1)},v_1\})\cup\{u_4,x\}. \end{eqnarray*} Clearly $\mathcal{Q}h_1h_2h_3$ is a blue copy of $\mathcal{C}_m^5$. Now, let $m$ be even. Hence $\ell'=m-4$ and $r\geq 3$. It is easy to see that $\mathcal{Q}h'_1h'_2h'_3h'_4$ is a blue copy of $\mathcal{C}^5_m$, where \begin{eqnarray*} &&h'_1=(e_{n-4}\setminus \{v_{4(n-4)},v_{4(n-4)+1}\})\cup\{y,u_1\},\\ &&h'_2=(e_{n-3}\setminus \{v_{4(n-3)},v_{4(n-3)+1}\})\cup\{u_1,u_2\},\\ &&h'_3=(e_{n-2}\setminus \{v_{4(n-2)},v_{4(n-2)+1}\})\cup\{u_2,u_3\},\\ &&h'_4=(e_{n-1}\setminus \{v_{4(n-1)},v_1\})\cup\{u_3,x\}. \end{eqnarray*} \item[III.] $|T|=2.$ Let $T=\{u_2,u_3\}$. Clearly $\ell'=2\lfloor\frac{m-1}{2}\rfloor$. If $m$ is odd, then $\ell'=m-1$. By the assumption, the edge $h=(e_{n-1}\setminus \{v_{4(n-1)},v_1\})\cup\{x,y\}$ is blue and so $\mathcal{Q}h$ is a blue copy of $\mathcal{C}_m^5$. Now, we may assume that $m$ is even. So $\ell'=m-2$. It is easy to see that $\mathcal{Q}h'h$ is a copy of $\mathcal{C}_m^5$ in $\mathcal{H}_{\rm blue}$, where \begin{eqnarray*} &&h=(e_{n-1}\setminus \{v_{4(n-1)},v_1\})\cup\{u_2,x\},\\ &&h'=(e_{n-2}\setminus \{v_{4(n-2)-3},v_{4(n-2)-2}\})\cup\{y,u_3\}. \end{eqnarray*} \end{itemize} }\end{proof} We shall use Theorem \ref{R(C3,C4)} and Lemmas \ref{pm-1 implies pn-1}, \ref{Cn-1 implies small Cm} and \ref{Cn-1 implies cm} to prove the following main theorem. \begin{theorem}\label{main theorem,k=5} For every $n\geq \Big\lfloor\frac{3m}{2}\Big\rfloor$, $$R(\mathcal{C}^5_n,\mathcal{C}^5_m)=4n+\Big\lfloor\frac{m-1}{2}\Big\rfloor.$$ \end{theorem} \begin{proof}{We give a proof by induction on $m+n$. Using Theorem \ref{R(C3,C4)}, the statement of this theorem holds for $m=3.$ Let $m\geq 4,$ $n\geq \Big\lfloor\frac{3m}{2}\Big\rfloor$ and $\mathcal{H}=\mathcal{K}^5_{4n+\lfloor\frac{m-1}{2}\rfloor}$ be 2-edge colored red and blue with no red copy of $\mathcal{C}^5_n$ and no blue copy of $\mathcal{C}^5_m$. Consider the following cases: \noindent \textbf{Case 1. } $n= \Big\lfloor\frac{3m}{2}\Big\rfloor$.\\ By the induction hypothesis, $$R(\mathcal{C}^5_{n-1},\mathcal{C}^5_{m-1})= 4(n-1)+\Big\lfloor\frac{m-2}{2}\Big\rfloor< 4n+\Big\lfloor\frac{m-1}{2}\Big\rfloor.$$ Therefore, there is a copy of $\mathcal{C}^5_{n-1}\subseteq \mathcal{H}_{\rm red}$ or a copy of $\mathcal{C}^5_{m-1}\subseteq \mathcal{H}_{\rm blue}$. If we have a red copy of $\mathcal{C}^5_{n-1}$, then by Lemma \ref{Cn-1 implies small Cm} or \ref{Cn-1 implies cm} we have a copy of $\mathcal{C}^5_{m}\subseteq \mathcal{H}_{\rm blue}$. So, we may suppose that there is a blue copy of $\mathcal{C}^5_{m-1}$.
Lemma \ref{pm-1 implies pn-1} implies that $\mathcal{C}^5_{n-1}\subseteq \mathcal{H}_{\rm red}$, and then Lemmas \ref{Cn-1 implies small Cm} and \ref{Cn-1 implies cm} yield $\mathcal{C}^5_{m}\subseteq \mathcal{H}_{\rm blue}$. This is a contradiction. \noindent \textbf{Case 2. }$n> \Big\lfloor\frac{3m}{2}\Big\rfloor$.\\ In this case, $n-1\geq \Big\lfloor\frac{3m}{2}\Big\rfloor.$ Since, by the induction hypothesis, $$R(\mathcal{C}^5_{n-1},\mathcal{C}^5_{m})= 4(n-1)+\Big\lfloor\frac{m-1}{2}\Big\rfloor< 4n+\Big\lfloor\frac{m-1}{2}\Big\rfloor,$$ we have a copy of $\mathcal{C}^5_{n-1}$ in $\mathcal{H}_{\rm red}$. Applying Lemma \ref{Cn-1 implies small Cm} for $4\leq m \leq 6$ and Lemma \ref{Cn-1 implies cm} for $m\geq 7$, we have a blue copy of $\mathcal{C}^5_{m}$. This contradiction completes the proof. }\end{proof} \end{document}
\begin{document} \begin{abstract} In this paper we study the dynamics of an incompressible viscous fluid evolving in an open-top container in two dimensions. The fluid mechanics are dictated by the Navier-Stokes equations. The upper boundary of the fluid is free and evolves within the container. The fluid is acted upon by a uniform gravitational field, and capillary forces are accounted for along the free boundary. The triple-phase interfaces where the fluid, air above the vessel, and solid vessel wall come in contact are called contact points, and the angles formed at the contact point are called contact angles. The model that we consider integrates boundary conditions that allow for full motion of the contact points and angles. Equilibrium configurations consist of quiescent fluid within a domain whose upper boundary is given as the graph of a function minimizing a gravity-capillary energy functional, subject to a fixed mass constraint. The equilibrium contact angles can take on any values between $0$ and $\pi$ depending on the choice of capillary parameters. The main thrust of the paper is the development of a scheme of a priori estimates that show that solutions emanating from data sufficiently close to the equilibrium exist globally in time and decay to equilibrium at an exponential rate. \end{abstract} \maketitle \section{Introduction }\label{sec_intro} \subsection{Equations of motion } The purpose of this paper is to study the dynamics of a viscous incompressible fluid occupying an open-top vessel in two dimensions. The vessel is modeled as a bounded, connected, open subset $\mathcal{V} \subset \mathbb{R}^2$ obeying the following pair of assumptions. First, we posit that the vessel's top is a rectangular channel by assuming that \begin{equation} \mathcal{V}_{\operatorname{top}} := \mathcal{V} \cap \{y \in \mathbb{R}^{2} \;\vert\; y_2 \ge 0 \} = \{y \in \mathbb{R}^{2} \;\vert\; -\ell < y_1 < \ell, 0 \le y_2 < L \} \end{equation} for some given distances $\ell, L >0$. Note that $L$ is the height of the channel, while $2\ell$ is its width. The second assumption on the vessel is that its boundary, $\partial \mathcal{V} \subset \mathbb{R}^2$, is $C^2$ away from the corner points $(\pm \ell, L)$. We will use the notation \begin{equation} \mathcal{V}_{\operatorname{btm}} := \mathcal{V} \cap \{y \in \mathbb{R}^{2} \;\vert\; y_2 \le 0 \} \end{equation} to denote the bottom portion of the vessel, on which we place no further geometric restrictions. We refer to Figure \ref{fig_1} for two examples of vessels of the type considered here. \begin{figure} \caption{Empty vessels} \label{fig_1} \end{figure} \begin{figure} \caption{Vessels with fluid} \label{fig_2} \end{figure} The fluid is assumed to occupy the vessel in such a way that $\mathcal{V}_{\operatorname{btm}}$ is filled by the fluid, while $\mathcal{V}_{\operatorname{top}}$ is only partially filled, resulting in a free boundary where the fluid meets the air above the vessel. For each time $t \ge 0$, this boundary is taken to be the graph of a function $\zeta(\cdot,t) : (-\ell,\ell) \to (0,\infty)$ subject to the constraint that $\zeta(\pm \ell,t) \le L$. The physical meaning of this constraint is that the fluid is assumed to not spill over the edges of the vessel. Note, though, that we allow for the possibility that $\zeta(x,t) > L$ for some $x \in (-\ell,\ell)$ and $t \ge 0$, which corresponds to the fluid extending past the vessel's top away from the edges.
The points where the fluid, vessel, and air meet are $(\pm\ell, \zeta(\pm\ell,t))$ and are called the contact points. In mathematical terms, we assume that the fluid occupies the time-dependent open set \begin{equation}\label{Omega_t_def} \Omega(t) = \mathcal{V}_{\operatorname{btm}} \cup \{ y \in \mathbb{R}^{2} \;\vert\; -\ell < y_1 < \ell, 0 < y_2 < \zeta(y_1,t)\}. \end{equation} We will write \begin{equation} \Sigma(t) = \{y \in \mathbb{R}^2 \;\vert\; \abs{y_1} < \ell \text{ and } y_2 = \zeta(y_1,t)\} \subset \partial \Omega(t) \end{equation} for the moving fluid-vapor interface and \begin{equation} \Sigma_s(t) = \partial \Omega(t) \backslash \Sigma(t) \end{equation} for the moving fluid-solid interface. See Figure \ref{fig_2} for an example of two fluid domains in different types of vessels. The fluid's state is determined at each time by its velocity and pressure functions, $(u,P) : \Omega(t) \to \mathbb{R}^2 \times \mathbb{R}$, for which the associated viscous stress tensor is given by $S(P,u) : \Omega(t) \to \mathbb{R}^{2 \times 2}$ via \begin{equation}\label{stress_def} S(P,u) := PI - \mu \mathbb{D} u, \end{equation} where $I$ is the $2 \times 2$ identity, $\mu >0$ is the fluid viscosity, and the symmetrized gradient is $\mathbb{D} u = D u + (Du)^T$. Extending the divergence operator to act on $S$ in the usual way, we have that $\diverge S(P,u) = \nabla P - \mu \Delta u - \mu \nabla \diverge u$. In order to state the equations of motion, we first need to enumerate several terms that affect the dynamics. The fluid is assumed to be of unit density and acted on by a uniform gravitational field pointing straight down with strength $g >0$. Surface tension is accounted for, and we write $\sigma >0$ for the surface tension coefficient along the fluid-vapor interface, which is the graph of $\zeta(\cdot,t)$. The parameter $\beta > 0$ is the inverse slip length, which will appear in Navier's slip condition on the vessel side walls. The energetic parameters $\gamma_{sv}, \gamma_{sf} \in \mathbb{R}$ measure the free-energy per unit length associated to the solid-vapor and solid-fluid interaction, respectively, and are the analogs of $\sigma$ for the other interfaces. We define \begin{equation}\label{jg_def} \jump{\gamma} := \gamma_{sv} - \gamma_{sf}, \end{equation} and we assume that $\jump{\gamma}$ and $\sigma$ satisfy the classical Young relation \cite{young}: \begin{equation}\label{gamma_assume} \frac{\abs{\jump{\gamma}}}{\sigma} < 1. \end{equation} Finally, we define the contact point velocity response function $\mathscr{W}: \mathbb{R}\to \mathbb{R}$ to be a $C^2$ increasing diffeomorphism such that $\mathscr{W}(0) =0$.
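For concreteness we note, purely as an illustration (none of the analysis below depends on this particular choice), that a simple admissible response function is \begin{equation} \mathscr{W}(z) = \kappa_0 z + z^3 \text{ for a constant } \kappa_0 > 0, \end{equation} which is a smooth increasing diffeomorphism of $\mathbb{R}$ with $\mathscr{W}(0) = 0$; the purely linear response $\mathscr{W}(z) = \kappa_0 z$ is admissible as well.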
We can now state the equations of motion that govern the dynamics of the unknown triple $(u,P,\zeta)$ for $t >0$: \begin{equation}\label{ns_euler} \begin{cases} \partial_t u + u\cdot \nabla u + \nabla P - \mu \Delta u = 0 & \text{in }\Omega(t) \\ \diverge{u}=0 & \text{in }\Omega(t) \\ S(P,u) \nu = g \zeta \nu - \sigma \mathcal{H}(\zeta) \nu & \text{on } \Sigma(t) \\ (S(P,u)\nu - \beta u)\cdot \tau =0 &\text{on } \Sigma_s(t) \\ u \cdot \nu =0 &\text{on } \Sigma_s(t) \\ \partial_t \zeta = u_2 - u_1 \partial_{y_1}\zeta &\text{on } \Sigma(t) \\ \mathscr{W}(\partial_t \zeta(\pm \ell,t)) = \jump{\gamma} \mp \sigma \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}}(\pm \ell,t) \end{cases} \end{equation} where $\nu$ is the outward-pointing unit normal, $\tau$ is the associated unit tangent, and \begin{equation}\label{H_def} \mathcal{H}(\zeta) := \partial_1 \left( \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}} \right) \end{equation} is the mean-curvature operator. The first two equations in \eqref{ns_euler} are the incompressible Navier-Stokes equations for a fluid of unit density. The third equation is the balance of stress on the free surface, which is also called the dynamic boundary condition. Note that in principle the gravitational forcing term $-g e_2$ should appear as a bulk force in the first equation, but by shifting the pressure unknown via $P \mapsto P + g y_2$ we have shifted gravity to a surface term, as it is more convenient in this form. The fourth and fifth equations in \eqref{ns_euler} constitute the Navier-slip condition; in contrast with the no-slip condition, the Navier condition allows for fluid slip along the fluid-solid interface, at the expense of generating a stress that acts against the motion. The sixth equation in \eqref{ns_euler} is called the kinematic equation, as it tracks how the free surface function changes due to the fluid velocity. The final equation in \eqref{ns_euler}, which is essential in our analysis and will be discussed more later in Section \ref{sec_res_disc}, is the contact point response equation. The problem \eqref{ns_euler} is an evolution equation and must be augmented with two pieces of initial data: \begin{enumerate} \item the initial free surface, $\zeta(\cdot,0): (-\ell,\ell) \to (0,\infty)$, which determines the initial fluid domain $\Omega(0)$, \item the initial fluid velocity $u_0 : \Omega(0) \to \mathbb{R}^2$, which satisfies $\diverge u_0 =0$ in $\Omega(0)$ and $u_0 \cdot \nu = 0$ on $\Sigma_s(0)$. \end{enumerate} As usual for the Navier-Stokes system, the initial pressure does not need to be specified. The initial mass of the fluid is denoted by \begin{equation} M_0 := \abs{\Omega(0)} = \abs{\mathcal{V}_{\operatorname{btm}}} + M_{top}, \text{ where } M_{top} = \int_{-\ell}^\ell \zeta(y_1,0) dy_1. \end{equation} The fluid's mass is conserved in time due to the combination of the kinematic boundary condition and the solenoidal condition for $u$ from \eqref{ns_euler}: \begin{equation}\label{avg_prop} \frac{d}{dt} \abs{\Omega(t)} = \frac{d}{dt} \int_{-\ell}^\ell \zeta = \int_{-\ell}^\ell \partial_t \zeta = \int_{\Sigma(t)} u \cdot \nu = \int_{\Omega(t)} \diverge{u} = 0.
\end{equation} \subsection{Equilibria } A steady state equilibrium solution to \eqref{ns_euler} corresponds to setting $u =0$, $P(y,t) = P_0 \in \mathbb{R}$, and $\zeta(y_1,t) = \zeta_0(y_1)$ with $\zeta_0$ and $P_0$ solving \begin{equation}\label{zeta0_eqn} \begin{cases} g \zeta_0 - \sigma \mathcal{H}(\zeta_0) = P_0 & \text{on } (-\ell,\ell) \\ \sigma \frac{\partial_1 \zeta_0}{\sqrt{1+\abs{\partial_1 \zeta_0}^2}}(\pm \ell) = \pm \jump{\gamma}. \end{cases} \end{equation} By a slight abuse of notation, solutions to \eqref{zeta0_eqn} are called equilibrium capillary surfaces. Note that the boundary condition specifies the cosine of the angle formed by the graph at the endpoints. The constant pressure $P_0$ is not arbitrary; indeed, it is uniquely determined by specifying the mass in $\mathcal{V}_{\operatorname{top}}$ at equilibrium, i.e. prescribing \begin{equation}\label{zeta0_constraint} M_{top} = \int_{-\ell}^\ell \zeta_0(y_1) dy_1. \end{equation} To see this, we use \eqref{zeta0_eqn} to compute \begin{equation} 2\ell P_0 = \int_{-\ell}^\ell P_0 = \int_{-\ell}^\ell g \zeta_0 - \sigma \mathcal{H}(\zeta_0) = g M_{top} -\sigma \left. \frac{\partial_1 \zeta_0}{\sqrt{1+\abs{\partial_1 \zeta_0}^2}} \right\vert_{-\ell}^\ell = g M_{top} -2 \jump{\gamma}, \end{equation} which in turn implies that \begin{equation}\label{p0_def} P_0 = \frac{g M_{top} -2 \jump{\gamma}}{2\ell}. \end{equation} The equations \eqref{zeta0_eqn} are the Euler-Lagrange equations associated to constrained minimizers of the energy functional $\mathscr{I} : W^{1,1}((-\ell,\ell)) \to \mathbb{R}$ defined via \begin{equation}\label{zeta0_energy} \mathscr{I}(\zeta) = \int_{-\ell}^\ell \frac{g}{2} \abs{\zeta}^2 + \sigma \sqrt{1 + \abs{\zeta'}^2} - \jump{\gamma}(\zeta(\ell) + \zeta(-\ell)), \end{equation} subject to the mass constraint $M_{top} = \int_{-\ell}^\ell \zeta$. In this framework the pressure $P_0$ is understood as the Lagrange multiplier associated to this constraint. We now state an existence result for equilibrium capillary surfaces. For a detailed proof we refer, for instance, to Appendix E of \cite{guo_tice_QS}, which is a one dimensional version of results found in the book of Finn \cite{finn}. \begin{thm}\label{zeta0_wp} There exists a constant $M_{min} \ge 0$ such that if $M_{top} > M_{min}$ then there exists a unique solution $\zeta_0 \in C^\infty([-\ell,\ell])$ to \eqref{zeta0_eqn} that satisfies \eqref{zeta0_constraint} with $P_0$ given by \eqref{p0_def}. Moreover, $\zeta_0$ is even, $\min_{[-\ell,\ell]} \zeta_0 >0$, and if $\mathscr{I}$ is given by \eqref{zeta0_energy}, then $\mathscr{I}(\zeta_0) \le \mathscr{I}(\psi)$ for all $\psi \in W^{1,1}((-\ell,\ell))$ such that $\int_{-\ell}^\ell \psi = M_{top}$. \end{thm} Throughout the rest of the paper we make the following two crucial assumptions on the parameters. \begin{enumerate} \item We assume that $M_{top} > M_{min}$ in order to have an equilibrium $\zeta_0$ as in Theorem \ref{zeta0_wp}. \item We assume that the parameter $L >0$, the height of the rectangular channel $\mathcal{V}_{\operatorname{top}}$, satisfies the condition $\zeta_0(\pm\ell) < L$, which means that the equilibrium fluid is not on the verge of spilling over the top of the vessel.
\end{enumerate} \subsection{Previous work and origins of the model \eqref{ns_euler} }\label{sec_discussion_model} The contact lines (or contact points in two dimensions) that form at triple junctions between three distinct phases (fluid, solid, and vapor phases in the present paper) have been a subject of intense study since the pioneering work of Young \cite{young} in 1805. For an exhaustive overview we refer to de Gennes \cite{degennes}. Here we will content ourselves with a terse review. The story began with the study of equilibrium configurations by Young \cite{young}, Laplace \cite{laplace}, and Gauss \cite{gauss}, who discovered the underlying variational principle for $\mathscr{I}$ described above and in Theorem \ref{zeta0_wp} (though, obviously, the theorem is restated in the modern language of Sobolev spaces). A key byproduct of this work is that the angle formed between the solid wall and the fluid (through the vapor phase), which is known as the equilibrium contact angle $\theta_{\operatorname{eq}}$ (see Figure \ref{fig_3}), is related to the free energy parameters $\gamma_{sf}$, $\gamma_{sv}$, and $\sigma$ via Young's equation \begin{equation}\label{young_relat} \cos(\theta_{\operatorname{eq}}) = \frac{\gamma_{sf} - \gamma_{sv}}{\sigma} = -\frac{\jump{\gamma}}{\sigma}. \end{equation} Note that this manifests in \eqref{zeta0_eqn} through the equations for $\partial_1 \zeta_0$ at the endpoints. \begin{figure} \caption{Equilibrium contact angle} \label{fig_3} \end{figure} The dynamic behavior of a contact line or point is significantly more delicate. For instance, including a dynamic contact point in a fluid-solid-vapor model presents challenges to standard modeling assumptions made when working with viscous fluids. Indeed, the free boundary kinematics (which may be rewritten as $\partial_t \zeta = u\cdot \nu\sqrt{1 + \abs{\nabla \zeta}^2}$) and the typical no-slip boundary conditions for viscous fluids ($u=0$ at the fluid-solid interface) are incompatible: combining the two leads to the prediction that $\partial_t \zeta =0$ at the contact points, i.e. that the fluid is pinned at its initial position on the solid. A moment's experimentation with an everyday coffee cup reveals this prediction to be nonsensical, and we are led to abandon the no-slip condition in favor of another boundary condition that allows for motion of the contact point. The surveys of Dussan \cite{dussan} and Blake \cite{blake} provide a thorough discussion of the efforts of physicists and chemists in determining the dynamics of a contact point. The general picture is that the dynamic contact angle, $\theta_{\operatorname{dyn}}$, and the equilibrium angle, $\theta_{\operatorname{eq}}$, are related via \begin{equation}\label{cl_motion} V_{cl} = F( \cos(\theta_{\operatorname{dyn}}) - \cos(\theta_{\operatorname{eq}}) ), \end{equation} where $V_{cl}$ is the contact point velocity (along the solid) and $F$ is some increasing function such that $F(0)=0$. The assumptions on $F$ enforce the experimentally observed fact that the slip of the contact line acts to restore the equilibrium angle (see Figure \ref{fig_4}). Equations of the form \eqref{cl_motion}, but with different forms of $F$, have been derived in a number of ways. Blake-Haynes \cite{blake_haynes} arrived at \eqref{cl_motion} through thermodynamic and molecular kinetics arguments. Cox \cite{cox} used matched asymptotic analysis and hydrodynamic arguments.
Ren-E \cite{ren_e_deriv} derived \eqref{cl_motion} from thermodynamic principles applied to constitutive equations. Ren-E \cite{ren_e} also performed molecular dynamics simulations and found an equation of the form \eqref{cl_motion}. These simulations also indicated that the slip of the fluid along the solid obeys the well-known Navier-slip condition \begin{equation}\label{navier_slip} u \cdot \nu =0 \text{ and } S(P,u) \nu \cdot \tau = \beta u \cdot \tau \end{equation} for some parameter $\beta >0$. The system \eqref{ns_euler} studied in the present paper synthesizes the Navier-slip boundary conditions \eqref{navier_slip} with the general form of the contact point equation \eqref{cl_motion}. Indeed, the last equation in \eqref{ns_euler} may be rewritten as \begin{equation} \mathscr{W}(V_{cl}) = \mathscr{W}(\partial_t \zeta) = \jump{\gamma} \mp \sigma \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}}(\pm \ell,t) = \sigma (\cos(\theta_{\operatorname{dyn}}) - \cos(\theta_{\operatorname{eq}}) ), \end{equation} which is \eqref{cl_motion} with the convenient reformulation $\mathscr{W} = \sigma F^{-1}$. \begin{figure} \caption{The same dynamic fluid configuration, with the dynamic contact angle $\theta_{\operatorname{dyn}}$ compared to the equilibrium contact angle $\theta_{\operatorname{eq}}$} \label{fig_4} \end{figure} Given the numerous derivations of \eqref{cl_motion}, we believe that its integration into the model \eqref{ns_euler} along with the Navier-slip condition yields a good general model for describing the dynamics of a viscous fluid with dynamic contact points and contact angles. A goal of this article is to provide further evidence for the validity of the model by proving that the equilibrium capillary surfaces are asymptotically stable, or more precisely, that sufficiently small perturbations of the equilibria give rise to global-in-time solutions that return to equilibrium exponentially fast as time diverges to infinity. In recent previous work \cite{guo_tice_QS} we proved this in the much simpler case in which the Navier-Stokes equations in \eqref{ns_euler} were replaced by the Stokes equations, which yields a sort of quasi-static evolution. The second author and Wu \cite{tice_wu} proved corresponding results for the Stokes droplet problem in which the vessel configuration is replaced with a droplet sitting atop a flat substrate, and, with Zheng \cite{tice_zheng}, established local existence results. The Navier-Stokes problem presents numerous challenges relative to the Stokes problem, but we will delay a discussion of these to Section \ref{sec_res_disc}. To the best of our knowledge, there are no other prior results in the literature related to models in which the full fluid mechanics are considered alongside dynamic contact points and contact angles. However, there are results with a subset of these features. Schweizer \cite{schweizer} studied a 2D Navier-Stokes problem with a fixed contact angle of $\pi/2$. Bodea \cite{bodea} studied a similar problem with fixed $\pi/2$ contact angle in 3D channels with periodicity in one direction. Kn\"upfer-Masmoudi \cite{knupfer_masmoudi} studied the dynamics of a 2D drop with fixed contact angle when the fluid is assumed to be governed by Darcy's law.
Related analysis of the fully stationary Navier-Stokes system with a free but unmoving boundary was carried out in 2D by Solonnikov \cite{solonnikov} with contact angle fixed at $\pi$, by Jin \cite{jin} in 3D with angle $\pi/2$, and by Socolowsky \cite{socolowsky} for 2D coating problems with fixed contact angles. A simplified droplet model without fluid coupling was studied by Feldman-Kim \cite{feldman_kim}, who proved asymptotic stability using gradient flow techniques. It is worth noting that much work has also been done on contact points in simplified thin-film models; we refer to the survey by Bertozzi \cite{bertozzi} for an overview. We conclude this overview of the model with some stability heuristics. Sufficiently regular solutions to \eqref{ns_euler} obey the energy-dissipation equation \begin{multline}\label{fund_en_evolve} \frac{d}{dt}\left( \int_{\Omega(t)} \frac{1}{2} \abs{u(\cdot,t)}^2 + \mathscr{I}( \zeta(\cdot,t)) \right) \\ + \int_{\Omega(t)} \frac{\mu}{2} \abs{\mathbb{D} u(\cdot,t)}^2 + \int_{\Sigma_s(t)} \frac{\beta}{2} \abs{u(\cdot,t) }^2 + \sum_{a=\pm 1} \partial_t \zeta(a \ell,t) \mathscr{W}(\partial_t \zeta(a \ell,t))= 0, \end{multline} where $\mathscr{I}$ is the energy functional from \eqref{zeta0_energy}. This identity may be derived in the usual way by dotting the first equation in \eqref{ns_euler} by $u$, integrating by parts over $\Omega(t)$, and employing the other equations. The temporally differentiated term in parentheses is the physical energy, composed of the fluid's kinetic energy (the first term) and the gravity-capillary potential energy (the second term). The three remaining terms are the dissipation due to viscosity (the first term), slip along the fluid-solid interface (the second), and slip along the contact point (the third). Crucially, the assumptions on $\mathscr{W}$ imply that $z \mathscr{W}(z) >0$ for $z \neq 0$, which means the contact point dissipation term provides positive definite control of $\partial_t \zeta$ at the contact points. Thus, the dissipation has a sign and serves to decrease the energy. Since the equilibrium configuration $u=0$, $p=0$, $\zeta = \zeta_0$ is the unique global minimizer of the energy, \eqref{fund_en_evolve} formally suggests that global-in-time solutions will converge to the equilibrium as $t \to \infty$. We will prove that this is indeed the case, provided that the initial data are sufficiently close to the equilibrium configuration, and we will show that such solutions must decay to equilibrium exponentially. \subsection{Problem reformulation } In order to analyze the system \eqref{ns_euler} it is convenient to reformulate the problem in a fixed open set. The stability heuristic given above suggests that for large time the fluid domain should not differ much from the equilibrium domain, so we employ the latter as the fixed open set. To this end we consider $\zeta_0 \in C^\infty([-\ell,\ell])$ from Theorem \ref{zeta0_wp} and define the equilibrium domain $\Omega \subset \mathbb{R}^2$ via \begin{equation}\label{Omega_def} \Omega := \mathcal{V}_{\operatorname{btm}} \cup \{ x \in \mathbb{R}^{2} \;\vert\; -\ell < x_1 < \ell \text{ and } 0 < x_2 < \zeta_0(x_1) \}.
\end{equation} Note that $\partial \Omega$ is $C^2$ away from the contact points $(\pm\ell,\zeta_0(\pm \ell))$, but that $\Omega$ has corner singularities there, so $\partial \Omega$ is only Lipschitz globally. Depending on the choice of the capillary parameters $\sigma$, $\gamma_{sv}$, and $\gamma_{sf}$, the angles formed at the contact points can take on any value between $0$ and $\pi$. We decompose the boundary $\partial \Omega = \Sigma \sqcup \Sigma_s$, where \begin{equation}\label{sigma_def} \Sigma := \{ x \in \mathbb{R}^{2} \;\vert\; -\ell < x_1 < \ell \text{ and } x_2 = \zeta_0(x_1) \} \text{ and } \Sigma_s := \partial \Omega \backslash \Sigma. \end{equation} The set $\Sigma$ is the equilibrium free surface, while $\Sigma_s$ denotes the ``sides'' of the equilibrium fluid configuration. We will write $x \in \Omega$ for the spatial coordinate in the equilibrium domain. We will write \begin{equation}\label{N0_def} \mathcal{N}_0 = (-\partial_1 \zeta_0,1) \end{equation} for the non-unit normal vector field on $\Sigma$. In our analysis we will assume that the free boundary is a small perturbation of the equilibrium interface by introducing the perturbation $\eta:(-\ell,\ell) \times \mathbb{R}^{+} \to \mathbb{R}$ and positing that \begin{equation} \zeta(x_1,t) = \zeta_0(x_1) + \eta(x_1,t). \end{equation} We will need to define an extension of $\eta$ that gains regularity. To this end we first choose $E$ to be a bounded linear extension operator that maps $C^m((-\ell,\ell))$ to $C^m(\mathbb{R})$ for all $0 \le m \le 5$ and $W^{s,p}((-\ell,\ell))$ to $W^{s,p}(\mathbb{R})$ for all $0 \le s \le 5$ and $1 \le p < \infty$ (such a map is readily constructed with the help of higher order reflections, Vandermonde matrices, and a cutoff function; see, for instance, Exercise 7.24 in \cite{leoni} for integer regularity, and non-integer regularity then follows by interpolation). In turn, we define the extension of $\eta$ to be the function $\bar{\eta} : \{x \in \mathbb{R}^2 \;\vert\; x_2 \le E\zeta_0(x_1)\} \times \mathbb{R}^+ \to \mathbb{R}$ given by \begin{equation}\label{extension_def} \bar{\eta}(x,t) = \mathcal{P} E \eta(x_1,x_2 - E\zeta_0(x_1),t), \end{equation} where $\mathcal{P}$ is the lower Poisson extension defined by \eqref{poisson_def}. Note that although $\bar{\eta}(\cdot,t)$ is a priori defined in the unbounded set $\{x \in \mathbb{R}^2 \;\vert\; x_2 \le E\zeta_0(x_1)\}$, in practice we will only ever use its restriction to the bounded set $\Omega \subset \{x \in \mathbb{R}^2 \;\vert\; x_2 \le E\zeta_0(x_1)\}$. Choose $\phi \in C^\infty(\mathbb{R})$ such that $\phi(z) =0$ for $z \le \frac{1}{4} \min \zeta_0$ and $\phi(z) =z$ for $z \ge \frac{1}{2} \min \zeta_0$. We combine $\phi$ and the extension $\bar{\eta}$ to define a map from the equilibrium domain to the moving domain $\Omega(t)$: \begin{equation}\label{mapping_def} \Omega \ni x \mapsto \left( x_1,x_2 + \frac{\phi(x_2)}{\zeta_0(x_1)} \bar{\eta}(x,t) \right) := \Phi(x,t) = (y_1,y_2) \in \Omega(t).
\end{equation} It is readily verified that the map $\Phi$ satisfies the following properties: \begin{enumerate} \item $\Phi(x_1,\zeta_0(x_1),t) = (x_1, \zeta_0(x_1) + \eta(x_1,t)) = (x_1,\zeta(x_1,t))$, and hence $\Phi(\Sigma,t) = \Sigma(t)$, \item $\Phi(x,t) =x$ for $x \in \mathcal{V}_{\operatorname{btm}}$, i.e. the map is the identity in the bottom portion of the vessel and thus only distorts the upper rectangular channel $\mathcal{V}_{\operatorname{top}}$, \item $\Phi(\pm \ell, x_2,t) = (\pm \ell, x_2+ \phi(x_2)\bar{\eta}(\pm\ell ,x_2,t)/\zeta_0(\pm \ell))$, and hence $\Phi(\Sigma_s \cap \{x_1 = \pm \ell, x_2 \ge 0\},t) = \Sigma_s(t) \cap \{y_1 = \pm \ell, y_2 \ge 0\}$. \end{enumerate} Moreover, if $\eta$ is sufficiently small (in an appropriate Sobolev space), then the mapping $\Phi$ will be a $C^1$ diffeomorphism of $\Omega$ onto $\Omega(t)$ that maps the components of $\partial \Omega$ to the corresponding components of $\partial \Omega(t)$. We will use $\Phi$ to reformulate \eqref{ns_euler} in $\Omega$, but first it is convenient to introduce some notation. We write \begin{equation}\label{A_def} \nabla \Phi = \begin{pmatrix} 1 & 0 \\ A & J \end{pmatrix} \text{ and } \mathcal{A} := (\nabla \Phi^{-1})^T = \begin{pmatrix} 1 & -A K \\ 0 & K \end{pmatrix} \end{equation} for \begin{equation}\label{AJK_def} W = \frac{\phi}{\zeta_0}, \quad A = W \partial_1 \bar{\eta} - \frac{W}{\zeta_0} \partial_1 \zeta_0 \bar{\eta}, \quad J = 1 + W \partial_2 \bar{\eta} + \frac{\phi' \bar{\eta}}{\zeta_0}, \quad K = J^{-1}. \end{equation} Note that the Jacobian of our coordinate transformation is exactly $J = \det{\nabla \Phi}$. Provided that $\Phi$ is a diffeomorphism (which will always be satisfied in our analysis), we can reformulate \eqref{ns_euler} by using $\Phi$ to change coordinate systems. This results in a PDE system that has the benefit of being posed in a fixed set but the downside of being significantly more nonlinear. In the new system the PDE becomes \begin{equation}\label{ns_flattened} \begin{cases} \partial_t u -\partial_t \bar{\eta} W K \partial_2 u + u \cdot \nabla_{\mathcal{A}} u + \diverge_{\mathcal{A}} S_{\mathcal{A}}(P,u) =0 & \text{in }\Omega \\ \diverge_{\mathcal{A}} u = 0 &\text{in } \Omega \\ S_{\mathcal{A}}(P,u)\mathcal{N} = \left(g \zeta - \sigma \mathcal{H}(\zeta) \right) \mathcal{N} &\text{on }\Sigma \\ \partial_t \eta = u \cdot \mathcal{N} &\text{on }\Sigma \\ (S_{\mathcal{A}}(P,u) \nu - \beta u)\cdot \tau = 0 &\text{on }\Sigma_s \\ u \cdot \nu =0 & \text{on }\Sigma_s \\ \mathscr{W}(\partial_t \eta(\pm \ell,t)) = \jump{\gamma} \mp \sigma \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}}(\pm \ell,t), \end{cases} \end{equation} where $\zeta = \zeta_0 + \eta$ and \begin{equation}\label{N_def} \mathcal{N} = (-\partial_1 \zeta,1) = \mathcal{N}_0 - (\partial_1 \eta,0) \end{equation} is the non-unit normal to the moving free boundary.
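For the reader's convenience we record the elementary computation behind \eqref{A_def} and \eqref{AJK_def}, which follows by differentiating \eqref{mapping_def} directly: \begin{equation} \partial_1 \Phi_2 = W \partial_1 \bar{\eta} - \frac{W}{\zeta_0} \partial_1 \zeta_0 \bar{\eta} = A \quad \text{and} \quad \partial_2 \Phi_2 = 1 + W \partial_2 \bar{\eta} + \frac{\phi' \bar{\eta}}{\zeta_0} = J, \end{equation} so that $J = \det \nabla \Phi$, and inverting the triangular matrix $\nabla \Phi$ gives $(\nabla \Phi)^{-1} = \begin{pmatrix} 1 & 0 \\ -AK & K \end{pmatrix}$, whose transpose is the matrix $\mathcal{A}$ appearing in \eqref{A_def}. Note also that, with $\mathcal{N}$ as in \eqref{N_def}, the kinematic equation in \eqref{ns_flattened} is simply the transported form of the one in \eqref{ns_euler}, since $u \cdot \mathcal{N} = u_2 - u_1 \partial_1 \zeta$ and $\partial_t \eta = \partial_t \zeta$.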
Here we have written the differential operators $\nabla_{\mathcal{A}}$, $\diverge_{\mathcal{A}}$, and $\Delta_{\mathcal{A}}$ with their actions given by $(\nabla_{\mathcal{A}} f)_i := \mathcal{A}_{ij} \partial_j f$, $\diverge_{\mathcal{A}} X := \mathcal{A}_{ij}\partial_j X_i$, and $\Delta_{\mathcal{A}} f = \diverge_{\mathcal{A}} \nabla_{\mathcal{A}} f$ for appropriate $f$ and $X$. The vector field $u\cdot \nabla_{\mathcal{A}} u$ has components $(u \cdot \nabla_{\mathcal{A}} u)_i := u_j \mathcal{A}_{jk} \partial_k u_i$. We also write $S_{\mathcal{A}}(P,u) = P I - \mu \mathbb{D}_{\mathcal{A}} u$ for the stress tensor, where $I$ is the $2 \times 2$ identity and $(\mathbb{D}_{\mathcal{A}} u)_{ij} = \mathcal{A}_{ik} \partial_k u_j + \mathcal{A}_{jk} \partial_k u_i$ is the symmetric $\mathcal{A}-$gradient. Note that if we extend $\diverge_{\mathcal{A}}$ to act on symmetric tensors in the natural way, then $\diverge_{\mathcal{A}} S_{\mathcal{A}}(P,u) = \nabla_{\mathcal{A}} P - \mu \Delta_{\mathcal{A}} u$ for vector fields satisfying $\diverge_{\mathcal{A}} u=0$. Now that we have reformulated our PDE system in a fixed domain, it is convenient to make a final modification by rewriting \eqref{ns_flattened} as a perturbation of the equilibrium configuration. In other words, we posit that the solution has the special form $u =0 + u$, $P = P_0 + p$, $\zeta = \zeta_0 + \eta$ for new unknowns $(u,p,\eta)$. In order to record the perturbed equations, we first need to introduce some notation. To begin, we use a Taylor expansion in $z$ to write \begin{equation} \frac{y+z}{(1+\abs{y+z}^2)^{1/2}} = \frac{y}{(1+\abs{y}^2)^{1/2}} + \frac{z}{(1+\abs{y}^2)^{3/2}} + \mathcal{R}(y,z), \end{equation} where $\mathcal{R} \in C^\infty(\mathbb{R}^{2})$ is given by \begin{equation}\label{R_def} \mathcal{R}(y,z) = \int_0^z 3 \frac{(s-z)(s+y)}{(1+ \abs{y+s}^2)^{5/2}} ds.
\end{equation} By construction, \begin{equation} \frac{\partial_1 \zeta}{(1+\abs{\partial_1 \zeta}^2)^{1/2}} = \frac{\partial_1 \zeta_0}{(1+\abs{\partial_1 \zeta_0}^2)^{1/2}} + \frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta), \end{equation} which then allows us to use \eqref{zeta0_eqn} to compute \begin{multline}\label{pert_comp_1} g \zeta - \sigma \mathcal{H}(\zeta) = \left( g \zeta_0 - \sigma \mathcal{H}(\zeta_0)\right) + g \eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}\right) -\sigma \partial_1 \left( \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) \\ = P_0 + g \eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}\right) -\sigma \partial_1 \left( \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) \end{multline} and \begin{multline} \jump{\gamma} \mp \sigma \frac{\partial_1 \zeta}{\sqrt{1+\abs{\partial_1 \zeta}^2}}(\pm \ell,t) = \jump{\gamma} \mp \frac{\sigma \partial_1 \zeta_0}{(1+\abs{\partial_1 \zeta_0}^2)^{1/2}}(\pm \ell) \mp \frac{\sigma \partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}(\pm \ell,t) \\ \mp \sigma \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)(\pm \ell,t) = \mp \frac{\sigma \partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}(\pm \ell,t) \mp \sigma \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)(\pm \ell,t). \end{multline} Next, we compute \begin{multline}\label{pert_comp_2} \diverge_{\mathcal{A}} S_{\mathcal{A}}(P,u) = \diverge_{\mathcal{A}} S_{\mathcal{A}}(p,u) \text{ in } \Omega, \; S_{\mathcal{A}}(P,u) \mathcal{N} = S_{\mathcal{A}}(p,u) \mathcal{N} + P_0 \mathcal{N} \text{ on }\Sigma, \\ \text{and } S_{\mathcal{A}}(P,u)\nu \cdot \tau = S_{\mathcal{A}}(p,u)\nu \cdot \tau \text{ on }\Sigma_s. \end{multline} Finally, we expand the contact point velocity response function $\mathscr{W} \in C^2(\mathbb{R})$. Since $\mathscr{W}$ is an increasing diffeomorphism, we may set \begin{equation}\label{linz_def} \kappa = \mathscr{W}'(0) >0. \end{equation} We then define the perturbation $\hat{\mathscr{W}} \in C^2(\mathbb{R})$ as \begin{equation}\label{V_pert} \hat{\mathscr{W}}(z) = \frac{1}{\kappa} \mathscr{W}(z) - z. \end{equation} We now insert the expansions \eqref{pert_comp_1}--\eqref{pert_comp_2} and \eqref{V_pert} into \eqref{ns_flattened}.
This yields the following equivalent PDE system for the perturbed unknowns $(u,p,\eta)$: \begin{equation}\label{ns_geometric} \begin{cases} \partial_t u -\partial_t \bar{\eta} W K \partial_2 u + u \cdot \nabla_{\mathcal{A}} u + \diverge_{\mathcal{A}} S_{\mathcal{A}}(p,u) =0 & \text{in }\Omega \\ \diverge_{\mathcal{A}} u = 0 &\text{in } \Omega \\ S_{\mathcal{A}}(p,u)\mathcal{N} = \left[g \eta - \sigma \partial_1\left( \frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1\zeta_0,\partial_1\eta) \right) \right] \mathcal{N} &\text{on }\Sigma \\ \partial_t \eta = u \cdot \mathcal{N} &\text{on }\Sigma \\ (S_{\mathcal{A}}(p,u) \nu - \beta u)\cdot \tau = 0 &\text{on }\Sigma_s \\ u \cdot \nu =0 & \text{on }\Sigma_s \\ \kappa \partial_t \eta(\pm \ell,t) + \kappa \hat{\mathscr{W}}( \partial_t \eta(\pm \ell,t) ) = \mp \sigma\left( \frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1\zeta_0,\partial_1\eta)\right)(\pm \ell,t). \end{cases} \end{equation} It is in this form that we will study the problem. Throughout the paper we assume that $M_{top} > M_{min}$ is specified as in the discussion after Theorem \ref{zeta0_wp}. For the data to \eqref{ns_geometric} we then have: \begin{enumerate} \item the initial free surface $\eta_0$, which we assume satisfies \begin{equation}\label{avg_zero_t=0} \int_{-\ell}^\ell \eta_0 =0 \end{equation} so that the equilibrium mass, $M_{top}$, matches the initial mass, i.e. \begin{equation} \int_{-\ell}^\ell (\zeta_0 + \eta_0) = M_{top}, \end{equation} \item the initial velocity $u_0 : \Omega \to \mathbb{R}^2$, which we assume satisfies $\diverge_{\mathcal{A}_0} u_0 =0$ as well as the boundary conditions $u_0 \cdot \nu =0$ on $\Sigma_s$. \end{enumerate} \section{Main results and discussion }\label{sec_main} \subsection{Energy and dissipation functionals and other notation} In order to state our main result, we must first introduce some notation. \emph{Equilibrium angles and regularity parameters:} We begin by introducing the supplementary equilibrium contact angle \begin{equation}\label{omega_eq_def} \omega_{\operatorname{eq}} = \pi -\theta_{\operatorname{eq}} \in (0,\pi), \end{equation} which is useful as it determines the angles created at the contact points in the fluid domain at equilibrium (see Figure \ref{fig_3}). This angle, which can take on any value between $0$ and $\pi$ depending on the choice of the capillary parameters $\sigma$, $\gamma_{sv}$, and $\gamma_{sf}$, plays an important role in the elliptic regularity theory associated to $\Omega$, as it determines the possible regularity gain. For the Stokes problem with boundary conditions related to those we use in \eqref{ns_geometric}, the regularity is related to the following parameter, computed by Orlt-S\"{a}ndig \cite{orlt_sandig}: \begin{equation}\label{epm_def} \varepsilon_{\max} = \varepsilon_{\max}(\omega_{\operatorname{eq}}) = \min\{1,-1+\pi/\omega_{\operatorname{eq}}\} \in (0,1]. \end{equation} For a parameter $0 < \varepsilon \le \varepsilon_{\max} < 1$ we set \begin{equation}\label{qalpha_def} q_\varepsilon = \frac{2}{2-\varepsilon} \in (1,2).
\end{equation} Note that $0 < \varepsilon_- < \varepsilon_+ < \varepsilon_{\max}(\omega_{\operatorname{eq}})$ implies that \begin{equation} q_{\varepsilon_-} < q_{\varepsilon_+} < q_{\varepsilon_{\max}}. \end{equation} Then \cite{orlt_sandig} shows that the regularity available for the velocity in the associated Stokes problem cannot reach $W^{2,q_{\varepsilon_{\max}}}$. With $\varepsilon_{\max}$ in hand, we fix three parameters $\alpha, \varepsilon_-,\varepsilon_+$ such that \begin{equation}\label{kappa_ep_def} 0 < \alpha < \varepsilon_- < \varepsilon_+ < \varepsilon_{\max}, \; \alpha < \min\left\{ \frac{1-\varepsilon_+}{2}, \frac{\varepsilon_+ - \varepsilon_-}{2} \right\}, \text{ and } \varepsilon_+ \le \frac{\varepsilon_- + 1}{2}. \end{equation} For brevity we also write \begin{equation}\label{qpm_def} q_+ = q_{\varepsilon_+} \text{ and } q_- = q_{\varepsilon_-} \end{equation} in the notation established in \eqref{qalpha_def}. We will crucially use these parameters to track regularity in this paper. \emph{Norms:} We write $W^{s,p}(\Gamma;\mathbb{R}^k)$ for $\Gamma \in \{\Omega,\Sigma,\Sigma_s\}$, $0 \le s \in \mathbb{R}$, $1 \le p \le \infty$, and $1 \le k \in \mathbb{N}$ for the usual Sobolev spaces of $\mathbb{R}^k-$valued functions on these sets. In particular, $W^{0,p}(\Gamma;\mathbb{R}^k) = L^p(\Gamma;\mathbb{R}^k)$. When $k=1$ we typically write $W^{s,p}(\Gamma) = W^{s,p}(\Gamma;\mathbb{R})$. When $p=2$ we write $H^s(\Gamma;\mathbb{R}^k) = W^{s,2}(\Gamma;\mathbb{R}^k)$. For the sake of brevity, we typically write our norms as $\norm{\cdot}_{W^{s,p}}$, suppressing the domain $\Gamma$ and the codomain $\mathbb{R}^k$. We employ this notation whenever it is clear from the context what the set and codomain are; in situations where there is ambiguity (typically due to the evaluation of bulk-defined functions on $\Sigma$ or $\Sigma_s$ via trace operators) we will include the domain in the norm notation. Next we define a useful pairing for the contact points that gives a contact point norm: we set \begin{equation}\label{bndry_pairing} [f,g]_\ell = f(-\ell)g(-\ell) + f(\ell) g(\ell) \text{ and } [f]_\ell = \sqrt{[f,f]_\ell}. \end{equation} \emph{Energy and dissipation functionals:} We define the following energy and dissipation functionals. For $0 \le k \le 2$ we define the natural energy and dissipation via \begin{equation}\label{ED_natural} \mathcal{E}_{\shortparallel,k} = \ns{\partial_t^k u}_{L^2} + \ns{\partial_t^k \eta}_{H^1} \text{ and } \mathcal{D}_{\shortparallel,k} = \ns{ \partial_t^k u}_{H^1} + \ns{\partial_t^k u}_{L^2(\Sigma_s)} + [\partial_t^{k+1} \eta]_\ell^2. \end{equation} We also set \begin{equation}\label{ED_parallel} \mathcal{E}_{\shortparallel} = \sum_{k=0}^2 \mathcal{E}_{\shortparallel,k} \text{ and } \mathcal{D}_{\shortparallel}=\sum_{k=0}^2 \mathcal{D}_{\shortparallel,k}.
\end{equation} Then the full energy is \begin{multline}\label{E_def} \mathcal{E} = \mathcal{E}_{\shortparallel} + \ns{u}_{W^{2,q_+}} + \ns{\partial_t u}_{H^{1 + \varepsilon_-/2}} + \ns{\partial_t^2 u}_{H^0} + \ns{p}_{W^{1,q_+}} + \ns{\partial_t p}_{L^2} \\ + \ns{\eta}_{W^{3-1/q_+,q_+}} + \ns{\partial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2 }} + \ns{\partial_t^2 \eta}_{H^1}, \end{multline} and the full dissipation is \begin{multline}\label{D_def} \mathcal{D} = \mathcal{D}_{\shortparallel} + \ns{u}_{W^{2,q_+}} + \ns{\partial_t u}_{W^{2,q_-}} + \sum_{k=0}^2 [\partial_t^k u \cdot \mathcal{N}]_\ell^2 + \ns{p}_{W^{1,q_+}} + \ns{\partial_t p }_{W^{1,q_-}} \\ + \sum_{k=0}^2 \ns{\partial_t^k \eta}_{H^{3/2-\alpha}} + \ns{\eta}_{W^{3-1/q_+,q_+}} + \ns{\partial_t \eta}_{W^{3-1/q_-,q_-}} + \ns{\partial_t^3 \eta}_{H^{1/2-\alpha}} . \end{multline} \emph{Universal constants and Einstein summation:} A generic constant $C>0$ will be called universal if it depends on $\Omega$, the dimension, or any of the parameters of the problem, but not on the solution or the initial data. In the usual manner, we allow the value of these constants to change from one estimate to the next. We employ the notation $a \lesssim b$ to mean that $a \le C b$ for a universal constant $C>0$, and we write $a \asymp b$ to mean that $a \lesssim b \lesssim a$. From time to time we will use the Einstein convention of implicitly summing over repeated indices in vector and tensor expressions. \subsection{Main results and discussion }\label{sec_res_disc} Our main result is an a priori estimate for solutions to \eqref{ns_geometric} that shows that if solutions exist on a time horizon $[0,T)$ and have sufficiently small energy, then in fact the dissipation is integrable on $[0,T)$ and the energy decays exponentially. Moreover, we have quantitative estimates in terms of the data. \begin{thm}\label{main_apriori} Let $\omega_{\operatorname{eq}} \in (0,\pi)$ be given by \eqref{omega_eq_def}, $0 < \varepsilon_{\max} \le 1$ be given by \eqref{epm_def}, and suppose that $\alpha$, $\varepsilon_-$, and $\varepsilon_+$ satisfy \eqref{kappa_ep_def}. Suppose that $\mathcal{E}$ and $\mathcal{D}$ are defined with these parameters via \eqref{E_def} and \eqref{D_def}, respectively. Then there exists a universal constant $0 < \delta_0 <1$ such that if a solution to \eqref{ns_geometric} exists on the time horizon $[0,T)$ for $0 < T \le \infty$ and obeys the estimate \begin{equation} \sup_{0 \le t < T } \mathcal{E}(t) \le \delta_0, \end{equation} then there exist universal constants $C,\lambda >0$ such that \begin{equation} \sup_{0 \le t <T} e^{\lambda t} \mathcal{E}(t) + \int_0^T \mathcal{D}(t) dt \le C \mathcal{E}(0). \end{equation} \end{thm} The a priori estimates of this theorem may be coupled to a local existence theory that verifies that the small energy condition is satisfied, provided that the data are small enough and all necessary compatibility conditions hold. To keep the present paper of reasonable length, we will not develop this local existence theory here.
Such a theory can be developed on the basis of the a priori estimates proved here in the same way that \cite{tice_zheng} develops the local theory for the Stokes version of \eqref{ns_geometric} based on the a priori estimates for the Stokes system that we developed in \cite{guo_tice_QS}. Upon combining the local theory with our a priori estimates, we may deduce the existence of global-in-time decaying solutions. This provides further evidence that the contact dynamics relation \eqref{cl_motion} together with the Navier-slip boundary conditions yields a good model of contact points in fluids. \begin{thm}\label{main_gwp_dec} Let $\omega_{\operatorname{eq}} \in (0,\pi)$ be given by \eqref{omega_eq_def}, $0 < \varepsilon_{\max} \le 1$ be given by \eqref{epm_def}, and suppose that $\alpha$, $\varepsilon_-$, and $\varepsilon_+$ satisfy \eqref{kappa_ep_def}. Suppose that $\mathcal{E}$ and $\mathcal{D}$ are defined with these parameters via \eqref{E_def} and \eqref{D_def}, respectively. There exists a universal constant $0 < \delta_1 <1$ such that if $\mathcal{E}(0) \le \delta_1$, then there exists a unique global solution triple $(u,p,\eta)$ to \eqref{ns_geometric} on the time horizon $[0,\infty)$ such that \begin{equation} \sup_{0 \le t <\infty} e^{\lambda t} \mathcal{E}(t) + \int_0^\infty \mathcal{D}(t) dt \le C \mathcal{E}(0), \end{equation} where $C,\lambda >0$ are universal constants. \end{thm} In \cite{guo_tice_QS} we proved analogous results for the Stokes version of \eqref{ns_euler} (the terms $\partial_t u + u \cdot \nabla u$ in the first equation are neglected), so it is prudent to begin the discussion of our current results by comparing and contrasting the Stokes and Navier-Stokes problems and the difficulties they present. For both problems, an examination of the control provided by the basic energy-dissipation relation \eqref{fund_en_evolve} (the kinetic energy term in the energy is removed for the Stokes problem) reveals that neither the energy nor the dissipation provides enough control to close a scheme of a priori estimates. As such, we are forced to analyze solutions in a higher regularity context, and it is here that it becomes clear that the geometry of the fluid domain is the central difficulty. Indeed, the first issue it causes is that even after reformulation in a fixed domain as in \eqref{ns_geometric}, the only differential operators compatible with the domain are time derivatives. We then need a strategy for bootstrapping from energy-dissipation control of the time derivatives to higher spatial regularity via elliptic estimates. It is at this point that we encounter the fundamental difficulty in analyzing the contact point problem. Both the moving domain $\Omega(t)$ and the equilibrium domain $\Omega$ have corners at the contact points, and thus the boundary is at most globally Lipschitz. In such domains it is well-known that the corners can harbor singularities in the solutions to elliptic equations. For the Stokes problem in $\Omega$ with Navier-slip boundary conditions, the work of Orlt-S\"{a}ndig \cite{orlt_sandig} shows that the velocity cannot even belong to $W^{2,q_{\varepsilon_{\max}}}$, where $\varepsilon_{\max}$ is determined by the equilibrium angle as in \eqref{epm_def}. Consequently, regardless of how many temporal derivatives we gain basic control of, there is a fundamental barrier to the spatial regularity gain we can hope to achieve.
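To give a concrete sense of this barrier, we record two illustrative values obtained directly from \eqref{epm_def} and \eqref{qalpha_def}: \begin{equation} \omega_{\operatorname{eq}} = \frac{\pi}{2} \Rightarrow \varepsilon_{\max} = 1 \text{ and } q_{\varepsilon_{\max}} = 2, \qquad \omega_{\operatorname{eq}} = \frac{3\pi}{4} \Rightarrow \varepsilon_{\max} = \frac{1}{3} \text{ and } q_{\varepsilon_{\max}} = \frac{6}{5}. \end{equation} In particular, the more obtuse the equilibrium angle formed in the fluid, the closer the integrability threshold $q_{\varepsilon_{\max}}$ is to $1$, and hence the less spatial regularity is available near the corners.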
In closing a scheme of a priori estimates for \eqref{ns_geometric}, the mandate then becomes to make do with what is available and to close with little spatial regularity. In our work on the Stokes problem \cite{guo_tice_QS}, we do this by crucially exploiting a version of the normal trace estimate for the viscous stress. This allows us to get a dissipative estimate for $\mathcal{K} \partial_t^2 \eta$ in $H^{-1/2}$, where $\mathcal{K}$ is the gravity-capillary operator associated to $\zeta_0$ (see \eqref{cap_op_def}). With this in hand, we take advantage of the contact point boundary condition (the last equation in \eqref{ns_geometric}) in two essential ways. First, this condition is responsible for providing a dissipative estimate of $[\partial_t^3 \eta]_\ell$. Second, this condition serves as a boundary condition compatible with the elliptic operator $\mathcal{K}$, which couples with the aforementioned dissipative control to yield an $H^{3/2}$ estimate for $\partial_t^2 \eta$ in terms of the dissipation. This estimate then serves as the starting point for a chain of elliptic estimates in weighted $L^2-$based Sobolev spaces that allow us to close for the Stokes problem. Here the choice of $L^2-$based weighted spaces is convenient, as it maintains consistency with the $L^2-$based estimates coming from the energy and dissipation. For the Navier-Stokes problem considered in the present paper, the convective term $\partial_t u + u\cdot \nabla u$ precludes the use of the normal trace estimate for the twice time-differentiated problem, since neither the energy nor the dissipation provides control of $\partial_t^3 u$ in this case. We are thus forced to seek another mechanism for obtaining a sufficiently high regularity estimate for $\partial_t^2 \eta$, which we need to kick-start the chain of elliptic gains. This is the central difficulty in dealing with the contact point Navier-Stokes system \eqref{ns_geometric}. In place of the normal trace estimate, we instead employ a delicate argument using test functions in the weak formulation of the twice time-differentiated problem, together with the dissipative estimate of $[\partial_t^3 \eta]_\ell$. This is delicate for two reasons. First, we have very poor spatial regularity at that level of time derivative, so we must be careful with how the test function interacts with the solution. Second, we aim to achieve estimates for the fractional regularity of $\partial_t^2 \eta$, but in the weak formulation we find $\partial_t^2 \eta$ interacting with the test function on $(-\ell,\ell)$ via an $H^1-$type inner product with the equilibrium free surface function $\zeta_0$ appearing as a weight (see \eqref{bndry_ips}). The standard Fourier-analytic tricks that one would try on a torus or on the full space do not work here, due to the finite extent of $(-\ell,\ell)$ and the presence of the weight. We are thus led to replace the standard Fourier tricks with the functional calculus associated to the gravity-capillary operator $\mathcal{K}$, which provides a scale of custom Sobolev spaces measuring fractional regularity in terms of the eigenfunctions of $\mathcal{K}$. This allows us to build test functions that can produce higher fractional regularity estimates for $\partial_t^2 \eta$. Unfortunately, despite major effort, we were unable to derive an exact $H^{3/2}$ estimate for $\partial_t^2 \eta$.
The obstacles are primarily due to the technical complications that arise from the criticality of $H^{1/2}$ in one dimension. The principal technical achievement of this paper is the development of a scheme of a priori estimates that exchanges the full $H^{3/2}$ estimate used for the Stokes problem for a slightly weaker estimate in $H^{3/2 - \alpha}$, where $\alpha$ is given by \eqref{kappa_ep_def}. Fortunately, this is just barely sufficient to kick-start the elliptic gain and allow us to close. In order to execute this, we have had to switch from weighted $L^2$-based Sobolev spaces to unweighted $L^q-$based spaces for values of $q$ just below the maximal value $q_{\varepsilonm}$. This yields key technical advantages in dealing with several nonlinear terms. \subsection{Technical overview and layout of paper } We now turn our attention to a brief technical overview of our methods, which we provide in a rough sketch form meant to highlight the main ideas while suppressing certain technical complications. The starting point of our scheme of a priori estimates is a version of the energy-dissipation relation \eqref{fund_en_evolve} for \eqref{ns_geometric}. We need versions of this for the solution and its time derivatives up to order two. These are recorded in Section \ref{sec_basic_tools}. Upon differentiating \eqref{ns_geometric} we produce commutators, so we end up with an energy-dissipation relation roughly of the form \begin{equation}\label{sum_1} \frac{d}{dt} \mathcal{E}_{\shortparallel} + \mathcal{D}_{\shortparallel} = \mathscr{N}, \end{equation} where $\mathcal{E}_{\shortparallel}$ and $\mathcal{D}_{\shortparallel}$ are as in \eqref{ED_parallel} and $\mathscr{N}$ represents nonlinear interactions arising from the commutators. Section \ref{sec_basic_tools} also contains a number of other basic estimates. To advance from the basic control provided by $\mathcal{E}_{\shortparallel}$ and $\mathcal{D}_{\shortparallel}$ to higher spatial regularity estimates we need elliptic estimates for a Stokes problem related to \eqref{ns_geometric}. We develop these in Section \ref{sec_elliptic} within the context of $L^q-$based spaces instead of the weighted $L^2-$based spaces we employed in \cite{guo_tice_QS}. Here the main technical problem is associated to the upper bound on the regularity gain available due to the corner singularities in $\Omega$. An interesting feature of our main result, Theorem \ref{A_stokes_stress_solve}, is that it treats the triple $(v,Q,\xi)$ as the elliptic unknown, but $\xi$ only appears on the boundary. With the elliptic estimates and \eqref{sum_1} in hand, we may identify most of the nonlinear terms that need to be estimated in order to close our scheme. Due to the limited spatial regularity, these estimates are fairly delicate and require a good deal of care. In particular, in dealing with $\mathscr{N}$ in \eqref{sum_1}, we need structured estimates of the form \begin{equation}\label{sum_3} \abs{\mathscr{N}} \le C \mathcal{E}^\theta \mathcal{D} \text{ for some } \theta >0, \end{equation} where $\mathcal{E}$ and $\mathcal{D}$ are the full energy and dissipation from \eqref{E_def} and \eqref{D_def}, in order to have any hope of closing with \eqref{sum_1}. In Section \ref{sec_nl_int_d} we record a host of nonlinear interaction estimates of this form. In Section \ref{sec_nl_int_e} we record similar estimates, but in terms of the energy functional instead of the dissipation.
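Before continuing with the layout, let us briefly indicate, in purely schematic form, why estimates with the structure \eqref{sum_3} are the right shape; this sketch suppresses the coupling constants and the precise form of the error terms. Under the small-energy hypothesis of Theorem \ref{main_apriori}, an estimate of the form \eqref{sum_3} gives $\abs{\mathscr{N}} \le C \delta_0^\theta \mathcal{D}$, so once the elliptic and enhancement estimates allow us to upgrade $\mathcal{D}_{\shortparallel}$ to the full dissipation $\mathcal{D}$ (up to errors of the same type), \eqref{sum_1} roughly yields \begin{equation} \frac{d}{dt} \mathcal{E}_{\shortparallel} + \left(1 - C\delta_0^\theta\right) \mathcal{D} \lesssim 0, \end{equation} which is a usable differential inequality as soon as $\delta_0$ is small enough that, say, $C\delta_0^\theta \le 1/2$.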
Section \ref{sec_nl_elliptic} records estimates of the nonlinear terms that appear in applying the elliptic estimates from Theorem \ref{A_stokes_stress_solve}. An interesting feature of these is that the upper bound of regularity identified by Orlt-S\"{a}ndig \cite{orlt_sandig} yields an open interval $(0,q_{\varepsilonm})$ of possible integrability exponents. We take advantage of this by using two different exponents $0 < q_- < q_+ < q_{\varepsilonm}$ as in \eqref{kappa_ep_def}, with $q_+$ associated to the non-differentiated problem and $q_-$ associated to the once-differentiated problem. The parameter $q_-$ can be made arbitrarily close to $q_{\varepsilonm}$, but the tiny increase we get in advancing to $q_+$ plays an essential role in Proposition \ref{ne2_f2}, which highlights how delicate the nonlinear estimates are. As mentioned above, the key to starting the elliptic gains is an estimate of $\partial_t^2 \eta$ in $H^{3/2 -\alpha}$. To achieve this we employ the functional calculus associated to the gravity-capillary operator $\mathcal{K}$, defined by \eqref{cap_op_def}. We develop this in Section \ref{sec_fnal_calc}. This calculus provides us with the ability to make sense of fractional powers of $\mathcal{K}$, which is essential in our test function method for deriving the needed estimate. It also provides us with a scale of custom Sobolev spaces defined in terms of the eigenfunctions of $\mathcal{K}$, which we characterize in terms of standard Sobolev spaces in Theorem \ref{s_embed} when the regularity parameter satisfies $0 \le s \le 2$. A serious technical complication in our test function / functional calculus method is that we would like to exploit an equivalence of the form \begin{equation}\label{sum_4} (\partial_t^2 \eta, (\mathcal{K})^{\frac{1}{2} - \alpha} \partial_t^2 \eta)_{1,\Sigma} \asymp \ns{\partial_t^2 \eta}_{H^{3/2 - \alpha}}, \end{equation} where $(\cdot,\cdot)_{1,\Sigma}$ is as in \eqref{bndry_ips} and $\alpha$ satisfies \eqref{kappa_ep_def}, but we cannot guarantee that $(\mathcal{K})^{\frac{1}{2} - \alpha} \partial_t^2 \eta \in H^1$ in our functional framework. To get around this, for $j \in \mathbb{N}$ and $s \ge 0$ we introduce the operators $D_j^s$ in Section \ref{sec_djr}. These are approximations of the fractional differential operators $D^s := (\mathcal{K})^{s/2}$, formed by projecting onto the first $j$ eigenfunctions of $\mathcal{K}$ in the spectral representation of $D^{s}$. The eigenfunctions are smooth up to the boundary, so these projected operators interact well with the $H^1-$type inner product when used in place of $(\mathcal{K})^{\frac{1}{2}-\alpha}$ on the left side of \eqref{sum_4}. We then aim to recover the desired control by working with these operators and sending $j \to \infty$. In Section \ref{sec_enhancements} we carry out the details of our test function / functional calculus method to derive the estimate for $\partial_t^2 \eta$ in $H^{3/2-\alpha}$. Along the way we also use similar methods to derive a couple of other useful estimates for $\eta,$ $\partial_t \eta$, and $\partial_t p$. These all serve as enhancements to the basic energy-dissipation estimate \eqref{sum_1}, since they are given in a similar form. In Section \ref{sec_aprioris} we complete the proof of Theorem \ref{main_apriori}. We combine an integrated form of \eqref{sum_1} with the enhancement estimates to form the core estimates in energy-dissipation form. These are then coupled to the elliptic estimates to gain spatial regularity.
We then employ our array of nonlinear estimates to derive an estimate of the form \begin{equation} \mathcal{E}(t) + \int_{s}^t \mathcal{D} \lesssim \mathcal{E}(s) \end{equation} for all $0 \le s \le t < T$, and from this we complete the proof with a version of Gronwall's inequality, Proposition \ref{gronwall_variant}. Appendix \ref{sec_nonlinear_records} records the lengthy forms of various nonlinearities and commutators. Appendix \ref{sec_analysis_tools} contains a number of useful tools from analysis that are used throughout the paper, including product and composition estimates, estimates for the Poisson extension, and the Bogovskii operator. \section{Basic tools}\label{sec_basic_tools} In this section we record a number of basic identities and estimates associated to the problem \eqref{ns_geometric}. \subsection{Energy-dissipation relation } Upon applying temporal derivatives to \eqref{ns_geometric} and keeping track of the essential transport terms, we arrive at the following general linearization: \begin{equation}\label{linear_geometric} \begin{cases} \partial_t v -\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 v + u \cdot \nabla_{\mathcal{A}} v + \diverge_{\mathcal{A}} S_{\mathcal{A}}(q,v) =F^1 & \text{in }\Omega \\ \diverge_{\mathcal{A}} v = F^2 &\text{in } \Omega \\ S_{\mathcal{A}}(q,v)\mathcal{N} = \left[g \xi - \sigma \partial_1\left( \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + F^3 \right) \right] \mathcal{N} + F^4 &\text{on }\Sigma \\ \partial_t \xi - v \cdot \mathcal{N} = F^6 &\text{on }\Sigma \\ (S_{\mathcal{A}}(q,v) \nu - \beta v)\cdot \tau = F^5 &\text{on }\Sigma_s \\ v \cdot \nu =0 & \text{on }\Sigma_s \\ \kappa \partial_t \xi(\pm \ell,t) = \mp \sigma\left( \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + F^3\right)(\pm \ell,t) - \kappa F^7. \end{cases} \end{equation} We will mostly be interested in this problem for $v = \partial_t^k u$, $\xi = \partial_t^k \eta$, and $q = \partial_t^k p$, in which case the forcing terms have the special form given in Appendix \ref{sec_nonlinear_records}. We now aim to record the weak formulation of \eqref{linear_geometric}. First, we will need to introduce some useful bilinear forms. Suppose that $\eta$ is given and that $\mathcal{A}$, $J$, $K$, and $\mathcal{N}$ are determined as in \eqref{AJK_def} and \eqref{N_def}. We define \begin{equation}\label{bulk_ips} \pp{u,v} := \int_\Omega \frac{\mu}{2} \mathbb{D}_{\mathcal{A}} u : \mathbb{D}_{\mathcal{A}} v J + \int_{\Sigma_s} \beta (u\cdot \tau) (v\cdot \tau) J \text{ and } (u,v)_0 := \int_{\Omega} u \cdot v J. \end{equation} With these in hand we can formulate an integral version of \eqref{linear_geometric}. \begin{lem}\label{geometric_evolution} Suppose that $u,p,\eta$ are given and satisfy \eqref{ns_geometric}. Further suppose that $(v,q,\xi)$ are sufficiently regular and solve \eqref{linear_geometric}.
Then for sufficiently regular test functions $w$ satisfying $w \cdot \nu =0$ on $\Sigma_s$ we have that \begin{multline}\label{ge_01} \br{\partial_t v,Jw} + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 v + u \cdot \nabla_{\mathcal{A}} v,Jw)_0 + \pp{v,w} - (q,\diverge_{\mathcal{A}} w)_0 \\ = \int_\Omega F^1 \cdot w J - \int_{\Sigma_s} J (w\cdot \tau)F^5 - \int_{-\ell}^\ell g \xi (w \cdot \mathcal{N}) - \sigma \partial_1 \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3\right)w\cdot \mathcal{N} + F^4 \cdot w \end{multline} and \begin{multline}\label{ge_00} \br{\partial_t v,Jw} + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 v + u \cdot \nabla_{\mathcal{A}} v,Jw)_0 + \pp{v,w} - (q,\diverge_{\mathcal{A}} w)_0 + (\xi,w\cdot \mathcal{N})_{1,\Sigma} + \kappa [\partial_t \xi,w\cdot \mathcal{N}]_\ell \\ = \int_\Omega F^1 \cdot w J - \int_{\Sigma_s} J (w\cdot \tau)F^5 - \int_{-\ell}^\ell \sigma F^3 \partial_1 (w \cdot \mathcal{N}) + F^4 \cdot w - \kappa [w\cdot \mathcal{N}, F^7]_\ell, \end{multline} where $[\cdot,\cdot]_\ell$ is defined in \eqref{bndry_pairing} and $\kappa >0$ is as in \eqref{linz_def}. \end{lem} \begin{proof} Upon taking the dot product of the first equation in \eqref{linear_geometric} with $J w$ and integrating over $\Omega$, we arrive at the identity \begin{equation}\label{ge_1} \br{\partial_t v,Jw} + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 v + u \cdot \nabla_{\mathcal{A}} v,Jw)_0 + I = II, \end{equation} where we have written \begin{equation} I := \int_\Omega \diverge_{\mathcal{A}} S_{\mathcal{A}}(q,v) \cdot w J \text{ and } II := \int_\Omega F^1 \cdot w J. \end{equation} In expanding the term $I$ we will employ a pair of identities that are readily verified through elementary computations, using the definitions of $J$, $\mathcal{A}$, and $\mathcal{N}$ from \eqref{AJK_def} and \eqref{N_def}: first, \begin{equation}\label{ge_3} \partial_k(J \mathcal{A}_{jk}) =0 \text{ for each }j; \end{equation} and, second, \begin{equation}\label{ge_4} J \mathcal{A} \nu = \begin{cases} J \nu &\text{on }\Sigma_s \\ \mathcal{N}/\sqrt{1 +\abs{\partial_1 \zeta_0}^2} &\text{on }\Sigma. \end{cases} \end{equation} From \eqref{ge_3} and an integration by parts, we can write \begin{equation}\label{ge_7} I = \int_\Omega \partial_k (J \mathcal{A}_{jk} S_{\mathcal{A}}(q,v)_{ij}) w_i = \int_\Omega -J \mathcal{A}_{jk} \partial_k w_i S_{\mathcal{A}}(q,v)_{ij} + \int_{\partial \Omega} (J \mathcal{A} \nu) \cdot (S_{\mathcal{A}}(q,v) w) := I_1 + I_2. \end{equation} The term $I_1$ is readily rewritten using the definition of $S_{\mathcal{A}}(q,v)$ (given just below \eqref{N_def}): \begin{equation}\label{ge_20} I_1 = \int_\Omega \frac{\mu}{2} \mathbb{D}_{\mathcal{A}} v: \mathbb{D}_{\mathcal{A}} w J - q \diverge_{\mathcal{A}}{w} J.
\end{equation} To handle $I_2$ we use the first case in \eqref{ge_4} to see that \begin{multline} \int_{\Sigma_s} (J \mathcal{A} \nu) \cdot (S_{\mathcal{A}}(q,v) w) = \int_{\Sigma_s} J \nu \cdot (S_{\mathcal{A}}(q,v) w) = \int_{\Sigma_s} J w \cdot (S_{\mathcal{A}}(q,v) \nu) \\ = \int_{\Sigma_s} J \left(\beta (v \cdot \tau) (w \cdot \tau) + (w\cdot \tau) F^5\right), \end{multline} and the second case in \eqref{ge_4} to see that \begin{multline} \int_{\Sigma} (J \mathcal{A} \nu) \cdot (S_{\mathcal{A}}(q,v) w) \\ = \int_{-\ell}^\ell (S_{\mathcal{A}}(q,v) \mathcal{N})\cdot w = \int_{-\ell}^\ell g \xi (w \cdot \mathcal{N}) - \sigma \partial_1 \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3\right)w\cdot \mathcal{N} + F^4 \cdot w. \end{multline} Since $\partial \Omega = \Sigma_s \cup \Sigma$, we then have that \begin{multline}\label{ge_21} I_2 = \int_{\Sigma_s} J \left(\beta (v \cdot \tau) (w \cdot \tau) + (w\cdot \tau) F^5\right) \\ +\int_{-\ell}^\ell g \xi (w \cdot \mathcal{N}) - \sigma \partial_1 \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3\right)w\cdot \mathcal{N} + F^4 \cdot w. \end{multline} Upon combining \eqref{ge_20} and \eqref{ge_21} with \eqref{ge_1} and recalling the definition of $\pp{\cdot,\cdot}$ from \eqref{bulk_ips}, we deduce that \eqref{ge_01} holds. It remains to show that \eqref{ge_01} can be rewritten as \eqref{ge_00}. To this end, we integrate by parts and use the equations in \eqref{linear_geometric} to rewrite \begin{multline}\label{ge_8} \int_{-\ell}^\ell - \sigma \partial_1 \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3 \right)w\cdot \mathcal{N} \\ = \int_{-\ell}^\ell \sigma \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3 \right)\partial_1 (w\cdot \mathcal{N}) - \sigma \left. \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3 \right) (w\cdot \mathcal{N}) \right\vert_{-\ell}^\ell \end{multline} with \begin{multline}\label{ge_9} - \sigma \left. \left( \frac{\partial_1 \xi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3 \right) (w\cdot \mathcal{N}) \right\vert_{-\ell}^\ell = \sum_{a=\pm 1} \left(\kappa \partial_t\xi(a\ell) + \kappa F^7(a\ell) \right)(w\cdot \mathcal{N}(a\ell) ) \\ = \sum_{a=\pm 1} \kappa (v \cdot \mathcal{N}(a\ell))(w\cdot \mathcal{N}(a\ell) ) + \kappa (w\cdot\mathcal{N}(a\ell)) ( F^6(a\ell) + F^7(a\ell) ). \end{multline} Combining \eqref{ge_8} and \eqref{ge_9} with \eqref{ge_01} and rearranging then yields \eqref{ge_00}. \end{proof} The most natural use of Lemma \ref{geometric_evolution} occurs with $w=v$, but we will record a slight variant of this. This results in the following fundamental energy-dissipation identity. \begin{thm}\label{linear_energy} Suppose that $\zeta=\zeta_0 + \eta$ is given and $\mathcal{A}$ and $\mathcal{N}$ are determined in terms of $\zeta$ as in \eqref{AJK_def} and \eqref{N_def}. Suppose that $(v,q,\xi)$ satisfy \eqref{linear_geometric} and that $\omega(\cdot,t) \in H^1_0(\Omega;\mathbb{R}^2)$ is sufficiently regular for the following expression to be well-defined.
Then \begin{multline} \label{linear_energy_0} \frac{d}{dt} \left( \int_{\Omega} J \frac{\abs{v}^2}{2} + \int_{-\ell}^\ell \frac{g}{2} \abs{\xi}^2 + \frac{\sigma}{2} \frac{\abs{\partial_1 \xi}^2}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} - \int_\Omega J v\cdot \omega \right) \\ + \frac{\mu}{2} \int_\Omega \abs{\mathbb{D}_{\mathcal{A}} v}^2 J +\int_{\Sigma_s} \beta J \abs{v \cdot \tau}^2 + \kappa [\partial_t \xi,\partial_t \xi]_\ell = \int_\Omega F^1 \cdot v J + q J(F^2-\diverge_{\mathcal{A}} \omega) - \int_{\Sigma_s} J (v \cdot \tau)F^5 \\ - \int_{-\ell}^\ell \sigma F^3 \partial_1(v \cdot \mathcal{N}) + F^4 \cdot v - g \xi F^6 - \sigma \frac{\partial_1 \xi \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} - \kappa [v\cdot \mathcal{N}, F^7]_\ell + \kappa [\partial_t \xi,F^6]_\ell \\ - \int_\Omega v \cdot \partial_t(J \omega) + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 v + u \cdot \nabla_{\mathcal{A}} v,J \omega)_0 + \pp{v,\omega} - \int_\Omega F^1 \cdot \omega J. \end{multline} \end{thm} \begin{proof} We use $v-\omega$ as a test function in Lemma \ref{geometric_evolution} to see that \begin{multline}\label{linear_energy_1} \br{\partial_t v,Jv} + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 v + u \cdot \nabla_{\mathcal{A}} v,Jv)_0 + \pp{v,v} - (q,\diverge_{\mathcal{A}} v)_0 + (\xi,v\cdot \mathcal{N})_{1,\Sigma} + \kappa [\partial_t \xi,v\cdot \mathcal{N}]_\ell \\ = \int_\Omega F^1 \cdot v J - \int_{\Sigma_s} J (v\cdot \tau)F^5 - \int_{-\ell}^\ell \sigma F^3 \partial_1 (v \cdot \mathcal{N}) + F^4 \cdot v - \kappa [v\cdot \mathcal{N}, F^7]_\ell \\ + \br{\partial_t v, J \omega} + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 v + u \cdot \nabla_{\mathcal{A}} v,J \omega)_0 + \pp{v,\omega} - (q,\diverge_{\mathcal{A}} \omega)_0 - \int_\Omega F^1 \cdot \omega J. \end{multline} First note that \begin{equation}\label{linear_energy_2} \pp{v,v} = \int_\Omega \frac{\mu}{2} \abs{\mathbb{D}_{\mathcal{A}} v}^2 J +\int_{\Sigma_s} \beta J \abs{v \cdot \tau}^2. \end{equation} Next, we expand \begin{equation} \br{\partial_t v,Jv} = \frac{d}{dt}\int_{\Omega} J \frac{\abs{v}^2}{2} - \int_{\Omega} \partial_t J \frac{\abs{v}^2}{2}. \end{equation} Using the identities \eqref{ge_3} and \eqref{ge_4}, we may integrate by parts to compute \begin{multline} \int_{\Omega} \left( -\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 v + u \cdot \nabla_{\mathcal{A}} v \right)\cdot Jv = \int_{\Omega} \frac{\abs{v}^2}{2} \left(-J \diverge_{\mathcal{A}}{u} + \partial_2 \left(\frac{\partial_t \bar{\eta} \phi}{\zeta_0} \right) \right) \\ + \int_{\Sigma} \frac{\abs{v}^2}{2 \sqrt{1+ \abs{\partial_1 \zeta_0}^2}} \left(-\partial_t \eta + u \cdot \mathcal{N} \right) + \int_{\Sigma_s} J u \cdot \nu \frac{\abs{v}^2}{2}.
\end{multline} Since $\partial_t \eta = u\cdot \mathcal{N}$ on $\Sigma$, $u\cdot \nu =0$ on $\Sigma_s$, and $\diverge_{\mathcal{A}} u=0$, we arrive at the equality \begin{equation} \int_{\Omega} \left( -\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 v + u \cdot \nabla_{\mathcal{A}} v \right)\cdot Jv = \int_{\Omega} \frac{\abs{v}^2}{2} \partial_2 \left(\frac{\partial_t \bar{\eta} \phi}{\zeta_0} \right). \end{equation} We then compute \begin{equation} J = 1 + \partial_2\left( \frac{\bar{\eta} \phi}{\zeta_0} \right) \Rightarrow \partial_2 \left(\frac{\partial_t \bar{\eta} \phi}{\zeta_0} \right) = \partial_t J, \end{equation} which shows that \begin{equation}\label{linear_energy_3} \br{\partial_t v,Jv} + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 v + u \cdot \nabla_{\mathcal{A}} v,Jv)_0 = \frac{d}{dt}\int_{\Omega} J \frac{\abs{v}^2}{2}. \end{equation} On the other hand, we may use \eqref{linear_geometric} to compute \begin{multline}\label{linear_energy_4} (\xi,v\cdot \mathcal{N})_{1,\Sigma} = (\xi,\partial_t \xi - F^6)_{1,\Sigma} \\ = \frac{d}{dt} \left( \int_{-\ell}^\ell \frac{g}{2} \abs{\xi}^2 + \frac{\sigma}{2} \frac{\abs{\partial_1 \xi}^2}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) - \int_{-\ell}^\ell g \xi F^6 + \sigma \frac{\partial_1 \xi \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}, \end{multline} \begin{equation}\label{linear_energy_5} (q,\diverge_{\mathcal{A}} v)_0 - (q,\diverge_{\mathcal{A}} \omega)_0 = \int_\Omega q J(F^2 - \diverge_{\mathcal{A}} \omega), \end{equation} and \begin{equation}\label{linear_energy_6} [\partial_t \xi, v \cdot \mathcal{N}]_\ell = [\partial_t \xi,\partial_t \xi]_\ell - [\partial_t \xi, F^6]_\ell. \end{equation} Then \eqref{linear_energy_0} follows by plugging \eqref{linear_energy_2}, \eqref{linear_energy_3}, \eqref{linear_energy_4}, \eqref{linear_energy_5}, and \eqref{linear_energy_6} into \eqref{linear_energy_1} and noting that \begin{equation} \br{\partial_t v,J\omega} = \frac{d}{dt}\int_{\Omega} Jv\cdot \omega - \int_{\Omega} v \cdot \partial_t(J\omega). \end{equation} \end{proof} Next we record an application of this to \eqref{ns_geometric}. \begin{cor}\label{basic_energy} Suppose that $(u,p,\eta)$ solve \eqref{ns_geometric}, and consider the function $\mathcal{Q}$ given by \eqref{Q_def}. Then \begin{multline}\label{be_0} \frac{d}{dt} \left(\int_\Omega \frac{1}{2} J \abs{u}^2 + \int_{-\ell}^\ell \frac{g}{2} \abs{\eta}^2 + \frac{\sigma}{2} \frac{\abs{\partial_1 \eta}^2}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \int_{-\ell}^\ell \sigma \mathcal{Q}(\partial_1 \zeta_0,\partial_1 \eta)\right) \\ + \frac{\mu}{2} \int_\Omega \abs{\mathbb{D}_{\mathcal{A}} u}^2 J +\int_{\Sigma_s} \beta J \abs{u \cdot \tau}^2 + \kappa [\partial_t \eta]_\ell^2 =- \kappa [u\cdot \mathcal{N},\mathscr{W}h(\partial_t \eta)]_\ell .
\end{multline} \end{cor} \begin{proof} From \eqref{ns_geometric} we have that $v =u$, $q =p$, and $\xi = \eta$ solve \eqref{linear_geometric} with $F^i=0$ for $i\neq 3,7$ and $F^3 = \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)$, $F^7 = \mathscr{W}h(\partial_t \eta)$. The identity \eqref{be_0} then follows by applying Theorem \ref{linear_energy} with $\omega =0$ and noting that in this case \begin{equation} -\int_{-\ell}^\ell \sigma F^3 \partial_1 (v\cdot \mathcal{N}) = -\int_{-\ell}^\ell \sigma \partial_t \mathcal{Q}(\partial_1 \zeta_0,\partial_1 \eta) = -\frac{d}{dt} \int_{-\ell}^\ell \sigma \mathcal{Q}(\partial_1 \zeta_0,\partial_1 \eta). \end{equation} \end{proof} Next we record the consequences of conservation of mass. \begin{prop}\label{avg_zero_prop} If $(u,p,\eta)$ solve \eqref{ns_geometric}, then \begin{equation} \int_{-\ell}^\ell \partial_t^j \eta =0 \end{equation} for $0 \le j \le 3$. \end{prop} \begin{proof} Integrating the condition $\diverge_{\mathcal{A}} u =0$ against $J$ over $\Omega$ and using \eqref{ge_3} and \eqref{ge_4} together with the divergence theorem shows that \begin{equation} \frac{d}{dt} \int_{-\ell}^\ell \eta = \int_{-\ell}^\ell \partial_t \eta = \int_\Omega J \diverge_{\mathcal{A}} u =0. \end{equation} The result for $1\le j \le 3$ follows immediately from this, and for $j=0$ it follows from the assumption \eqref{avg_zero_t=0}. \end{proof} \subsection{Coefficient bounds } The smallness of the perturbation $\eta$ will play an essential role in most of the arguments in the paper, from guaranteeing that $\Phi$ is a diffeomorphism to enabling certain nonlinear estimates. The following lemma records this smallness in a quantitative way. \begin{lem}\label{eta_small} Let $q_+$ be as in \eqref{qpm_def}. There exists a universal $0 < \gamma < 1$ such that if $\norm{\eta}_{W^{3-1/q_+,q_+}} \le \gamma$, then the following hold for $\mathcal{A}$ defined by \eqref{A_def}, $A, J, K$ defined by \eqref{AJK_def}, and $\mathcal{N}$ and $\mathcal{N}_0$ defined by \eqref{N_def} and \eqref{N0_def}, respectively. \begin{enumerate} \item We have the estimates \begin{equation}\label{es_01} \max\{\norm{J-1}_{L^\infty}, \norm{K-1}_{L^\infty}, \norm{A}_{L^\infty}, \norm{\mathcal{N}- \mathcal{N}_0}_{L^\infty} \} \le \frac{1}{2} \text{ and } \norm{\mathcal{A}}_{L^\infty} \lesssim 1. \end{equation} \item For every $u \in H^1(\Omega; \mathbb{R}^2)$ such that $u\cdot \nu =0$ on $\Sigma_s$ we have that \begin{equation} \frac{\mu}{4} \int_\Omega \abs{\mathbb{D} u}^2 + \frac{\beta}{2} \int_{\Sigma_s} \abs{u}^2 \le \frac{\mu}{2} \int_\Omega \abs{\mathbb{D}_{\mathcal{A}} u}^2 J + \beta \int_{\Sigma_s} J\abs{u\cdot \tau}^2 \le \mu \int_\Omega \abs{\mathbb{D} u}^2 + 2\beta \int_{\Sigma_s} \abs{u}^2. \end{equation} \item The map $\Phi$ defined by \eqref{mapping_def} is a diffeomorphism. \end{enumerate} \end{lem} \begin{proof} The first and third items follow from standard product estimates, Proposition \ref{poisson_prop}, and the Sobolev embeddings. The second item is a simple modification of Proposition 4.3 in \cite{guo_tice_per}. \end{proof} \subsection{$M$ as a multiplier} It will be useful to define the following matrix in terms of $\eta$: \begin{equation}\label{M_def} M = K \nabla \Phi = (J \mathcal{A}^T)^{-1}, \end{equation} where $\mathcal{A}$ is as in \eqref{A_def} and $J$ and $K$ are as in \eqref{AJK_def}.
We will view this matrix as inducing a linear map via multiplication. Our first result records the boundedness properties of this map. \begin{prop}\label{M_multiplier} Let $M$ be given by \eqref{M_def} and suppose that $1 \le q \le 2/(1-\varepsilon_+)$. Then we have the inclusions $M \in \mathcal{L}( W^{1,q}(\Omega;\mathbb{R}^2))$ and $M,\partial_t M \in \mathcal{L}(L^{q}(\Omega;\mathbb{R}^2))$ as well as the estimates \begin{equation} \norm{M \zeta}_{W^{1,q}} \lesssim (1+\sqrt{\mathcal{E}}) \norm{\zeta}_{W^{1,q}} \text{ and } \norm{M \zeta}_{L^{q}} + \norm{\partial_t M \zeta}_{L^{q}} \lesssim (1+\sqrt{\mathcal{E}}) \norm{\zeta}_{L^{q}}. \end{equation} \end{prop} \begin{proof} First note that \begin{equation} \norm{M \zeta}_{W^{1,q}} \lesssim \norm{\abs{M}\abs{\zeta}}_{L^q} + \norm{\abs{M} \abs{\nabla \zeta}}_{L^q} + \norm{\abs{\nabla M} \zeta}_{L^q} \lesssim \norm{M}_{L^\infty} \norm{\zeta}_{W^{1,q}} +\norm{\abs{\nabla M} \zeta}_{L^q}. \end{equation} It is easy to see that \begin{equation} \norm{M}_{L^\infty} \lesssim 1 + \norm{\bar{\eta}}_{W^{1,\infty}} \lesssim 1+ \sqrt{\mathcal{E}}, \end{equation} which handles the first term on the right. For the second we need to use H\"{o}lder's inequality, and we must break into cases. In the first case we assume that $q$ is subcritical, i.e. $1 \le q < 2$. Then \begin{equation} \frac{1- \varepsilon_+}{2} + \frac{1}{q^\ast} = \frac{1-\varepsilon_+}{2} + \frac{1}{q} - \frac{1}{2} < \frac{1}{q}, \end{equation} so we can bound \begin{equation} \norm{\abs{\nabla M} \zeta}_{L^q} \lesssim \norm{\nabla M}_{L^{2/(1-\varepsilon_+)}} \norm{\zeta}_{L^{q^\ast}} \lesssim \norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_+)}}\norm{\zeta}_{W^{1,q}} \lesssim \sqrt{\mathcal{E}} \norm{\zeta}_{W^{1,q}}. \end{equation} In the second case we assume criticality, i.e. $q=2$. Then by the critical Sobolev embedding, \begin{equation} \norm{\abs{\nabla M} \zeta}_{L^2} \lesssim \norm{\nabla M}_{L^{2/(1-\varepsilon_+)}} \norm{\zeta}_{L^{2/\varepsilon_+}} \lesssim \norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_+)}}\norm{\zeta}_{W^{1,2}} \lesssim \sqrt{\mathcal{E}} \norm{\zeta}_{W^{1,q}}. \end{equation} In the third case we assume supercriticality, i.e. $2 < q \le 2/(1-\varepsilon_+)$. Then \begin{equation} \norm{\abs{\nabla M} \zeta}_{L^q} \lesssim \norm{\nabla M}_{L^{2/(1-\varepsilon_+)}} \norm{\zeta}_{L^{\infty}} \lesssim \norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_+)}}\norm{\zeta}_{W^{1,q}} \lesssim \sqrt{\mathcal{E}} \norm{\zeta}_{W^{1,q}}. \end{equation} Thus, in any case we have \begin{equation} \norm{\abs{\nabla M} \zeta}_{L^q} \lesssim \sqrt{\mathcal{E}} \norm{\zeta}_{W^{1,q}}, \end{equation} and the first estimate follows. To prove the second estimate we simply note that by Theorem \ref{catalog_energy} \begin{equation} \norm{ M}_{L^\infty} + \norm{\partial_t M}_{L^\infty} \lesssim 1 + \norm{\bar{\eta}}_{W^{1,\infty}}+ \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \lesssim 1 + \sqrt{\mathcal{E}}. \end{equation} \end{proof} The matrix $M$ plays an important role in switching from the operator $\diverge$ to $\diverge_{\mathcal{A}}$. We record this information in the following. \begin{prop}\label{M_properties} Let $M$ be given by \eqref{M_def} and $1 \le q \le 2/(1-\varepsilon_+)$. Then the following hold for $u \in W^{1,q}(\Omega;\mathbb{R}^2)$. \begin{enumerate} \item $\diverge u =p$ if and only if $\diverge_{\mathcal{A}}(Mu)=K p$. \item $u \cdot \nu =0$ on $\Sigma_s$ if and only if $(Mu) \cdot \nu =0$ on $\Sigma_s$.
\item $u \cdot \mathcal{N}_0 = (Mu)\cdot \mathcal{N}$ on $\Sigma$. \end{enumerate} \end{prop} \begin{proof} We compute \begin{equation} \diverge(M^{-1} v) = \partial_j( J\mathcal{A}_{ij}v_i) = J \mathcal{A}_{ij} \partial_j v_i = J \diverge_{\mathcal{A}}{v}, \end{equation} where the second equality follows from \eqref{ge_3}. Hence, upon setting $Mu = v$ we see that \begin{equation} \diverge{u} = p \text{ if and only if } \diverge_{\mathcal{A}}(Mu) = Kp. \end{equation} This proves the first item. For the second note that \begin{equation} K\nabla \Phi^T \nu = K\nu \text{ on } \{ x \in \partial \Omega \;\vert \; x_1 = \pm \ell, x_2 \ge 0 \} \text{ and } K\nabla \Phi^T \nu = \nu \text{ on } \{x \in \partial \Omega \; \vert \; x_2 < 0 \}, \end{equation} so on $\Sigma_s$ we have that \begin{equation} M u \cdot \nu =0 \Leftrightarrow u \cdot (K \nabla \Phi^T \nu) =0 \Leftrightarrow u \cdot \nu =0. \end{equation} Finally, for the third item we compute on $\Sigma$: \begin{equation} J \mathcal{A} \mathcal{N}_0 = \mathcal{N} \Rightarrow \mathcal{N}_0 = K (\mathcal{A})^{-1} \mathcal{N} = K \nabla \Phi^T \mathcal{N}, \end{equation} which implies that \begin{equation} u \cdot \mathcal{N}_0 = u \cdot K \nabla \Phi^T \mathcal{N} = K \nabla \Phi u \cdot \mathcal{N} = M u \cdot \mathcal{N}. \end{equation} \end{proof} \subsection{Various bounds } In subsequent parts of the paper we will need to repeatedly employ various $L^q$ estimates for $u$, $p$, $\eta$ and their derivatives in terms of either $\sqrt{\mathcal{E}}$ or $\sqrt{\mathcal{D}}$, defined respectively in \eqref{E_def} and \eqref{D_def}. Thus, we now turn our attention to recording a precise catalog of such estimates, which are available due to the control provided by $\mathcal{E}$ and $\mathcal{D}$ and various auxiliary estimates. In order to efficiently record this catalog, we will use tables of the following form. \begin{displaymath} \def\arraystretch{1.2} \begin{array}{|c | c | c | c | c |} \hline \text{Function} = f & f & \nabla f & \nabla^2 f & \nabla^3 f \\ \hline \varphi & \infty & a & b & c \\ \hline \tr \varphi & \infty- & e & & \\ \hline \end{array} \end{displaymath} The first column of such a table labels the function under consideration, while each subsequent column gives the exponent $q$ for which the number of derivatives indicated in the top row belongs to $L^q$. In this notation $q = \infty$ indicates $L^\infty$, while $\infty-$ indicates inclusion in $L^q$ for all $1 \le q < \infty$ (with bounds that diverge as $q \to \infty$, as in the critical Sobolev inequality), and an empty cell indicates that no estimate is available. The set on which the $L^q$ norm is evaluated is always understood to be the ``natural'' set on which the function is defined: $\Omega$ for $u$, $p$, and $\bar{\eta}$, and $(-\ell,\ell)$ for $\eta$. The notation $\tr$ indicates that the function under consideration is the trace onto either $\Sigma$ or $\Sigma_s$.
For example, if we state that the above sample table records estimates in terms of $\sqrt{\mathcal{E}}$, and $\varphi$ is defined in $\Omega$, then this indicates that \begin{equation} \norm{\varphi}_{L^\infty(\Omega)} + \norm{\nabla \varphi}_{L^a(\Omega)} + \norm{\nabla^2 \varphi}_{L^b(\Omega)} + \norm{\nabla^3 \varphi}_{L^c(\Omega)} \lesssim \sqrt{\mathcal{E}}, \end{equation} \begin{equation} \norm{\tr \varphi}_{L^q(\Sigma)} + \norm{\tr \varphi}_{L^q(\Sigma_s)} \le C_q \sqrt{\mathcal{E}} \text{ for all } 1 \le q < \infty, \end{equation} where $C_q \to \infty$ as $q \to \infty$, and \begin{equation} \norm{\tr \nabla \varphi}_{L^e(\Sigma)} + \norm{\tr \nabla \varphi}_{L^e(\Sigma_s)} \lesssim \sqrt{\mathcal{E}}. \end{equation} With this notation established, we now turn to recording the catalogs. We begin with the estimates in terms of the energy. \begin{thm}\label{catalog_energy} The following three tables record the $L^q$ bounds for $u$, $p$, $\eta$ and their derivatives in terms of the energy $\sqrt{\mathcal{E}}$, as defined in \eqref{E_def}. \begin{displaymath} \def\arraystretch{1.2} \begin{array}{|c | c | c | c | c |} \hline \text{Function} = f & f & \nabla f & \nabla^2 f & \nabla^3 f \\ \hline u & \infty & 2/(1-\varepsilon_+) & 2/(2-\varepsilon_+) & \\ \hline \partial_t u & \infty & 4/(2-\varepsilon_-) & & \\ \hline \partial_t^2 u & 2 & & & \\ \hline \tr u & \infty & 1/(1-\varepsilon_+) & & \\ \hline \tr \partial_t u & \infty & & & \\ \hline \end{array} \end{displaymath} \begin{displaymath} \def\arraystretch{1.2} \begin{array}{|c | c | c | c | c |} \hline \text{Function} = f & f & \nabla f & \nabla^2 f & \nabla^3 f \\ \hline p & 2/(1-\varepsilon_+) & 2/(2-\varepsilon_+) & & \\ \hline \partial_t p & 2 & & & \\ \hline \tr p & 1/(1-\varepsilon_+) & & & \\ \hline \end{array} \end{displaymath} \begin{displaymath} \def\arraystretch{1.2} \begin{array}{|c | c | c | c | c |} \hline \text{Function} = f & f & \nabla f & \nabla^2 f & \nabla^3 f \\ \hline \eta & \infty & \infty & 1/(1-\varepsilon_+) & \\ \hline \partial_t \eta & \infty & \infty & & \\ \hline \partial_t^2 \eta & \infty & 2 & & \\ \hline \bar{\eta} & \infty & \infty & 2/(1-\varepsilon_+) & 2/(2-\varepsilon_+) \\ \hline \partial_t \bar{\eta} & \infty & \infty & 4/(2- (\varepsilon_--\alpha) ) & \\ \hline \partial_t^2 \bar{\eta} & \infty & 4 & & \\ \hline \tr \bar{\eta} & \infty & \infty & 1/(1-\varepsilon_+) & \\ \hline \tr \partial_t \bar{\eta} & \infty & \infty & & \\ \hline \tr \partial_t^2 \bar{\eta} & \infty & 2 & & \\ \hline \end{array} \end{displaymath} \end{thm} \begin{proof} The estimates for $u$, $p$, $\eta$ and their derivatives follow directly from the standard Sobolev embeddings and trace theorems, together with the definition of $\mathcal{E}$. The estimates for $\bar{\eta}$ and its derivatives follow similarly, except that we also employ Proposition \ref{poisson_prop} to account for the regularity gains arising from the appearance of the Poisson extension $\mathcal{P}$ in the definition of $\bar{\eta}$. \end{proof} Next we record the catalog of estimates in terms of the dissipation.
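Before doing so, we pause to illustrate how a typical entry of Theorem \ref{catalog_energy} is produced; this is only a sample computation, and the particular embedding invoked is our choice of illustration rather than the unique route to the entry. The energy \eqref{E_def} controls $\ns{\partial_t u}_{H^{1+\varepsilon_-/2}}$, so $\nabla \partial_t u \in H^{\varepsilon_-/2}(\Omega)$, and the two-dimensional Sobolev embedding $H^{s}(\Omega) \hookrightarrow L^{2/(1-s)}(\Omega)$ for $0 < s < 1$ then gives \begin{equation} \norm{\nabla \partial_t u}_{L^{4/(2-\varepsilon_-)}(\Omega)} \lesssim \norm{\nabla \partial_t u}_{H^{\varepsilon_-/2}(\Omega)} \lesssim \sqrt{\mathcal{E}}, \end{equation} which is the $\nabla f$ entry in the row for $\partial_t u$; the $L^\infty$ entry in the same row follows from the supercritical embedding $H^{1+\varepsilon_-/2}(\Omega) \hookrightarrow L^\infty(\Omega)$. The remaining entries, and those of the dissipation catalog below, are obtained in the same spirit.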
\begin{thm}\label{catalog_dissipation} The following three tables record the $L^q$ bounds for $u$, $p$, $\eta$ and their derivatives in terms of the dissipation $\sqrt{\mathcal{D}}$, as defined in \eqref{D_def}. \begin{displaymath} \def\arraystretch{1.2} \begin{array}{|c | c | c | c | c |} \hline \text{Function} = f & f & \nabla f & \nabla^2 f & \nabla^3 f \\ \hline u & \infty & 2/(1-\varepsilon_+) & 2/(2-\varepsilon_+) & \\ \hline \partial_t u & \infty & 2/(1-\varepsilon_-) & 2/(2-\varepsilon_-) & \\ \hline \partial_t^2 u & \infty- & 2 & & \\ \hline \tr u & \infty & 1/(1-\varepsilon_+) & & \\ \hline \tr \partial_t u & \infty & 1/(1-\varepsilon_-) & & \\ \hline \tr \partial_t^2 u & \infty- & & & \\ \hline \end{array} \end{displaymath} \begin{displaymath} \def\arraystretch{1.2} \begin{array}{|c | c | c | c | c |} \hline \text{Function} = f & f & \nabla f & \nabla^2 f & \nabla^3 f \\ \hline p & 2/(1-\varepsilon_+) & 2/(2-\varepsilon_+) & & \\ \hline \partial_t p & 2/(1-\varepsilon_-) & 2/(2-\varepsilon_-)& & \\ \hline \tr p & 1/(1-\varepsilon_+) & & & \\ \hline \tr \partial_t p & 1/(1-\varepsilon_-) & & & \\ \hline \end{array} \end{displaymath} \begin{displaymath} \def\arraystretch{1.2} \begin{array}{|c | c | c | c | c |} \hline \text{Function} = f & f & \nabla f & \nabla^2 f & \nabla^3 f \\ \hline \eta & \infty & \infty & 1/(1-\varepsilon_+) & \\ \hline \partial_t \eta & \infty & \infty & 1/(1-\varepsilon_-) & \\ \hline \partial_t^2 \eta & \infty & 1/\alpha & & \\ \hline \partial_t^3 \eta & 1/\alpha & & & \\ \hline \bar{\eta} & \infty & \infty & 2/(1-\varepsilon_+) & 2/(2-\varepsilon_+) \\ \hline \partial_t \bar{\eta} & \infty & \infty & 2/(1-\varepsilon_-) & 2/(2-\varepsilon_-) \\ \hline \partial_t^2 \bar{\eta} & \infty & 2/\alpha & 2/(1+\alpha)& \\ \hline \partial_t^3 \bar{\eta} & 2/\alpha & 2/(1+2\alpha) & & \\ \hline \tr \bar{\eta} & \infty & \infty & 1/(1-\varepsilon_+) & \\ \hline \tr \partial_t \bar{\eta} & \infty & \infty & 1/(1-\varepsilon_-) & \\ \hline \tr \partial_t^2 \bar{\eta} & \infty & 1/\alpha & & \\ \hline \tr \partial_t^3 \bar{\eta} & 1/\alpha & & & \\ \hline \end{array} \end{displaymath} \end{thm} \begin{proof} The estimates for $u$, $p$, $\eta$ and their derivatives follow directly from the standard Sobolev embeddings and trace theorems, together with the definition of $\mathcal{D}$. The estimates for $\bar{\eta}$ and its derivatives follow similarly, except that we also employ Proposition \ref{poisson_prop} to account for the regularity gains arising from the appearance of the Poisson extension $\mathcal{P}$ in the definition of $\bar{\eta}$. \end{proof} \section{Elliptic theory for Stokes problems}\label{sec_elliptic} In this section we record some elliptic theory for the Stokes problem. We begin with the analysis of a model problem in 2D cones and then build to a theory in the domain $\Omega$ given by \eqref{Omega_def}.
The material here roughly mirrors the material in Section 5 of \cite{guo_tice_QS}, except that here we work in $L^q-$based spaces without weights rather than $L^2-$based weighted spaces. \subsection{Analysis in cones } Given an opening angle $\omega \in (0,\pi)$, we define the infinite 2D cone \begin{equation}\label{cone1_def} K_\omega = \{x \in \mathbb{R}^{2} \;\vert\; r>0 \text{ and } \theta \in (-\pi/2,-\pi/2 + \omega) \}, \end{equation} where $r$ and $\theta$ are the usual polar coordinates in $\mathbb{R}^{2}$ with the set $\{\theta =-\pi/2\}$ chosen to coincide with the negative $x_2$ axis. We define two parts of $\partial K_\omega$ via \begin{equation} \Gamma_- = \{ x \in \mathbb{R}^{2} \;\vert\; r>0 \text{ and } \theta =-\pi/2 \} \text{ and } \Gamma_+ = \{x \in \mathbb{R}^{2} \;\vert\; r>0 \text{ and } \theta =-\pi/2 + \omega \}. \end{equation} Next we introduce a special matrix-valued function. Suppose that $\mathfrak{A}: K_\omega \to \mathbb{R}^{2\times 2}$ is a map satisfying the following four properties. First, $\mathfrak{A}$ is smooth on $K_\omega$ and $\mathfrak{A}$ extends to a smooth function on $\bar{K}_\omega \backslash \{0\}$ and a continuous function on $\bar{K}_\omega$. Second, $\mathfrak{A}$ satisfies the following for all $a,b \in \mathbb{N}$: \begin{equation}\label{frak_A_assump} \begin{split} & \lim_{r\to 0} \sup_{\theta \in [-\pi/2,-\pi/2 + \omega]} \abs{ (r \partial_r)^a \partial_\theta^b [ \mathfrak{A}(r,\theta) \mathfrak{A}^T(r,\theta) - I ] } =0 \\ & \lim_{r\to 0} \sup_{\theta \in [-\pi/2,-\pi/2 + \omega]} \abs{ (r \partial_r)^a \partial_\theta^b [ \mathfrak{A}_{ij}(r,\theta)\partial_j \mathfrak{A}_{ik}(r,\theta) ] } =0 \text{ for }k \in \{1,2\} \\ & \lim_{r\to 0} \sup_{\theta \in [-\pi/2,-\pi/2 + \omega]} \abs{ (r \partial_r)^a \partial_\theta^b [ \mathfrak{A}(r,\theta) - I ] } =0 \\ & \lim_{r\to 0} (r \partial_r)^a [ \mathfrak{A}(r,\theta_0)\nu - \nu ] =0 \text{ for } \theta_0 =-\pi/2,-\pi/2 + \omega \\ & \lim_{r\to 0} (r \partial_r)^a \left[ \left(\mathfrak{A}\nu \otimes \mathfrak{A}^T (\mathfrak{A} \nu)^\bot + (\mathfrak{A}\nu)^\bot \otimes \mathfrak{A}^T (\mathfrak{A}\nu)\right)(r,\theta_0) - I \right] =0 \text{ for } \theta_0 =-\pi/2,-\pi/2 + \omega \\ \end{split} \end{equation} where again $(r,\theta)$ denote polar coordinates and $(z_1,z_2)^\bot = (z_2,-z_1)$. Third, the matrix $\mathfrak{A} \mathfrak{A}^T$ is uniformly elliptic on $K_\omega$. Fourth, $\det \mathfrak{A} =1$ and $\partial_j( \mathfrak{A}_{ij}) = 0 \text{ for }i=1,2.$ The matrix $\mathfrak{A}$ serves to determine the coefficients in a variant of the Stokes problem in the cone $K_\omega$.
This problem, which we call the $\mathfrak{A}-$Stokes problem, reads: \begin{equation}\label{af_stokes_cone} \begin{cases} \diverge_\mathfrak{A} S_\mathfrak{A}(Q,v) = G^1 &\text{in } K_\omega \\ \diverge_\mathfrak{A} v = G^2 &\text{in } K_\omega \\ v \cdot \mathfrak{A} \nu = G^3_\pm &\text{on } \Gamma_\pm \\ \mu \mathbb{D}_\mathfrak{A} v \mathfrak{A} \nu \cdot (\mathfrak{A} \nu)^\bot = G^4_\pm &\text{on } \Gamma_\pm, \end{cases} \end{equation} where the operators $\diverge_\mathfrak{A}$ and $S_\mathfrak{A}$ are defined in the same way as $\diverge_{\mathcal{A}}$ and $S_\mathcal{A}$. When $\mathfrak{A} = I_{2 \times 2}$ all of the above assumptions are trivially verified, and we arrive at the usual Stokes problem \begin{equation}\label{stokes_cone} \begin{cases} \diverge S(Q,v) = G^1 &\text{in } K_\omega \\ \diverge v = G^2 &\text{in } K_\omega \\ v \cdot \nu = G^3_\pm &\text{on } \Gamma_\pm \\ \mu \mathbb{D} v \nu \cdot \tau = G^4_\pm &\text{on } \Gamma_\pm. \end{cases} \end{equation} The purpose of the assumptions in \eqref{frak_A_assump} is to guarantee that the problems \eqref{af_stokes_cone} and \eqref{stokes_cone} have the same elliptic regularity properties. Next we introduce a parameter, depending on the cone's opening angle, that determines how much regularity is gained in these Stokes problems. Given $\omega \in (0,\pi)$ we define \begin{equation}\label{crit_wt} \varepsilonm(\omega) = \min\{1,-1+\pi/\omega\} \in (0,1]. \end{equation} We can now state the elliptic regularity for these Stokes problems. \begin{thm}\label{cone1_solve} Let $\omega \in (0,\pi)$ and $\varepsilonm(\omega)$ be as in \eqref{crit_wt}. Let $0 <\delta < \varepsilonm(\omega)$ and set \begin{equation} q_\delta = \frac{2}{2-\delta} \in (1,2). \end{equation} Suppose that $\mathfrak{A}$ satisfies the four properties stated above and that the data $G^1,G^2, G^3_\pm,G^4_\pm$ for the problem \eqref{stokes_cone} satisfy \begin{equation}\label{cone1_solve_01} G^1 \in L^{q_\delta}(K_\omega), G^2 \in W^{1,q_\delta}(K_\omega), G^3_\pm \in W^{2-1/q_\delta,q_\delta}(\Gamma_\pm), G^4_\pm \in W^{1-1/q_\delta,q_\delta}(\Gamma_\pm), \end{equation} as well as the compatibility condition \begin{equation}\label{cone1_solve_cc} \int_{K_\omega} G^2 = \int_{\Gamma_+} G^3_+ + \int_{\Gamma_-} G^3_- . \end{equation} Suppose that $(v,Q) \in H^1(K_\omega) \times H^0(K_\omega)$ satisfy $\diverge_\mathfrak{A} v = G^2$, $v\cdot \mathfrak{A} \nu = G^3_\pm$ on $\Gamma_\pm$, and \begin{equation} \int_{K_\omega} \frac{\mu}{2} \mathbb{D}_\mathfrak{A} v : \mathbb{D}_\mathfrak{A} w - Q \diverge_\mathfrak{A} w = \int_{K_\omega} G^1 \cdot w + \int_{\Gamma_+} G^4_+ w\cdot \frac{(\mathfrak{A} \nu)^\bot}{\abs{\mathfrak{A} \nu}} + \int_{\Gamma_-} G^4_- w\cdot \frac{(\mathfrak{A} \nu)^\bot}{\abs{\mathfrak{A} \nu}} \end{equation} for all $w \in \{ w \in H^1(K_\omega) \;\vert\; w\cdot (\mathfrak{A} \nu) =0 \text{ on } \Gamma_\pm\}$. Finally, suppose that $v,Q$ and all of the data $G^i$ are supported in $\bar{K}_\omega \cap B[0,1]$.
Then $v \in W^{2,q_\delta}(K_\omega) \cap H^{1+\delta}(K_\omega)$, $Q \in W^{1,q_\delta}(K_\omega) \cap H^\delta(K_\omega)$, and \begin{multline}\label{cone1_solve_02} \norm{v}_{W^{2,q_\delta}} + \norm{v}_{H^{1+\delta}} + \norm{ Q}_{W^{1,q_\delta}} + \norm{Q}_{H^\delta} \\ \lesssim \norm{G^1}_{L^{q_\delta} } + \norm{G^2}_{W^{1,q_\delta}} + \norm{G^3_-}_{W^{2-1/q_\delta,q_\delta} } + \norm{G^3_+}_{W^{2-1/q_\delta,q_\delta}} + \norm{G^4_-}_{W^{1-1/q_\delta,q_\delta}} + \norm{G^4_+}_{W^{1-1/q_\delta,q_\delta}}. \end{multline} \end{thm} \begin{proof} In the case $\mathfrak{A} = I$ the result is proved in Corollary 4.2 of \cite{orlt_sandig} when $G^3_\pm =0$ and $G^4_\pm =0$ and in Theorem 3.6 of \cite{orlt} in the general case. The choice of $q_\delta$ is determined by the eigenvalues of an operator pencil associated to \eqref{stokes_cone}, which may be found in the ``G-G eigenvalue computations'' of \cite{orlt_sandig} (with $\chi_1 = \chi_2 = \pi/2$). These computations show that the eigenvalues for the Stokes problem \eqref{stokes_cone} in the cone $K_\omega$ are $\pm 1 + n \pi/\omega$ for $n \in \mathbb{Z}$, which leads to the constraint $q < 2/(2-\varepsilonm(\omega))$ in the $W^{2,q} \times W^{1,q}$ estimates. However, Theorem 8.2.1 of \cite{kmr_1}, together with the assumptions on $\mathfrak{A}$, guarantees that the operator pencils that determine the regularity of \eqref{af_stokes_cone} coincide with those of \eqref{stokes_cone}, so the estimates of \cite{orlt_sandig} and \cite{orlt} remain valid for the $\mathfrak{A}-$Stokes problem. \end{proof} \subsection{The Stokes problem in $\Omega$} We now study the following Stokes problem in $\Omega$, as defined by \eqref{Omega_def}: \begin{equation}\label{stokes_omega} \begin{cases} \diverge S(Q,v) = G^1 &\text{in } \Omega \\ \diverge v = G^2 &\text{in } \Omega \\ v \cdot \nu = G^3_+ &\text{on } \Sigma \\ \mu \mathbb{D} v \nu \cdot \tau = G^4_+ &\text{on } \Sigma\\ v \cdot \nu = G^3_- &\text{on } \Sigma_s \\ \mu \mathbb{D} v \nu \cdot \tau = G^4_- &\text{on } \Sigma_s. \end{cases} \end{equation} Consider $0 < \delta < \varepsilonm$ (defined by \eqref{epm_def} in terms of $\omega_{\operatorname{eq}}$ from \eqref{omega_eq_def}) and $q_\delta = 2/(2-\delta) \in (1,2)$. We will study this problem with data belonging to the space $\mathfrak{X}_\delta$, which we define as the space of $6-$tuples \begin{multline} (G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \\ \in L^{q_\delta}(\Omega) \times W^{1,q_\delta}(\Omega) \times W^{2-1/q_\delta,q_\delta}(\Sigma )\times W^{2-1/q_\delta,q_\delta}(\Sigma_s ) \times W^{1-1/q_\delta,q_\delta}(\Sigma) \times W^{1-1/q_\delta,q_\delta}(\Sigma_s) \end{multline} such that \begin{equation} \int_{\Omega} G^2 = \int_{\Sigma} G^3_+ + \int_{\Sigma_s} G^3_-. \end{equation} We endow this space with the obvious norm \begin{multline} \norm{ (G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-)}_{\mathfrak{X}_\delta} = \norm{G^1}_{L^{q_\delta}} + \norm{G^2}_{W^{1,q_\delta}} + \norm{G^3_+}_{W^{2-1/q_\delta,q_\delta}} + \norm{G^3_-}_{W^{2-1/q_\delta,q_\delta}} \\ + \norm{G^4_+}_{W^{1-1/q_\delta,q_\delta}} + \norm{G^4_-}_{W^{1-1/q_\delta,q_\delta}}. \end{multline} We have the following weak existence result, which holds without constraint on $\delta \in (0,1)$. \begin{thm}\label{stokes_om_weak} Assume that $(G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \in \mathfrak{X}_\delta$ for some $0 < \delta < 1$.
Then there exists a unique pair $(v,Q) \in H^1(\Omega) \times \mathring{H}^0(\Omega)$ that is a weak solution to \eqref{stokes_omega} in the sense that $\diverge v = G^2$, $v\cdot \nu = G^3_+$ on $\Sigma$, $v \cdot \nu = G^3_-$ on $\Sigma_s$, and \begin{equation}\label{stokes_om_weak_form} \int_{\Omega} \frac{\mu}{2} \mathbb{D} v : \mathbb{D} w - Q \diverge w = \int_{\Omega} G^1 \cdot w + \int_{\Sigma} G^4_+ (w\cdot \tau) + \int_{\Sigma_s} G^4_- (w \cdot \tau) \end{equation} for all $w \in \{ w \in H^1(\Omega) \;\vert\; w\cdot \nu =0 \text{ on } \partial \Omega\}$. Moreover, \begin{equation}\label{stokes_weak_0} \norm{v}_{H^1} + \norm{Q}_{L^2} \lesssim \norm{ (G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-)}_{\mathfrak{X}_\delta}. \end{equation} \end{thm} \begin{proof} The argument is standard and does not use the higher-regularity structure of $\mathfrak{X}_\delta$. See, for instance, Theorem 5.3 in \cite{guo_tice_QS}. \end{proof} For second-order regularity we do need the constraint on $\delta$ in order to use Theorem \ref{cone1_solve}. \begin{thm}\label{stokes_om_reg} Let $\varepsilonm\in (0,1]$ be given by \eqref{epm_def}, and let $0 < \delta < \varepsilonm$. Let $(G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \in \mathfrak{X}_{\delta}$, and let $(v,Q) \in H^1(\Omega) \times \mathring{H}^0(\Omega)$ be the weak solution to \eqref{stokes_omega} constructed in Theorem \ref{stokes_om_weak}. Then $v \in W^{2,q_\delta}(\Omega) \cap H^{1+\delta}(\Omega)$, $Q \in W^{1,q_\delta}(\Omega)\cap \mathring{H}^\delta(\Omega)$, and \begin{equation}\label{stokes_om_reg_0} \norm{v}_{W^{2,q_\delta}} + \norm{v}_{H^{1+\delta}} + \norm{Q}_{W^{1,q_\delta}} + \norm{Q}_{H^{\delta}}\lesssim \norm{ (G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-)}_{\mathfrak{X}_\delta} . \end{equation} \end{thm} \begin{proof} The argument used in Theorem 5.5 of \cite{guo_tice_QS} works in the present case as well, except that we use the estimates of Theorem \ref{cone1_solve} in place of the estimates from Theorem 5.1 in \cite{guo_tice_QS}. \end{proof} In what follows it will be useful to rephrase Theorem \ref{stokes_om_reg} in operator-theoretic terms. For $0 < \delta < \varepsilonm$ we define the operator \begin{equation}\label{stokes_om_iso_def1} T_\delta : \left( W^{2,q_\delta}(\Omega) \cap H^{1+\delta}(\Omega) \right) \times \left( W^{1,q_\delta}(\Omega) \cap \mathring{H}^\delta(\Omega) \right) \to \mathfrak{X}_\delta \end{equation} via \begin{equation}\label{stokes_om_iso_def2} T_\delta(v,Q) = (\diverge S(Q,v), \diverge v, v\cdot \nu \vert_{\Sigma},v\cdot \nu \vert_{\Sigma_s}, \mu \mathbb{D} v \nu \cdot \tau \vert_{\Sigma}, \mu \mathbb{D} v \nu \cdot \tau \vert_{\Sigma_s}). \end{equation} We may then deduce the following from Theorems \ref{stokes_om_weak} and \ref{stokes_om_reg}. \begin{cor}\label{stokes_om_iso} Let $\varepsilonm$ be as in \eqref{epm_def}. If $0 <\delta < \varepsilonm$, then the operator $T_{\delta}$ defined by \eqref{stokes_om_iso_def1} and \eqref{stokes_om_iso_def2} is an isomorphism. \end{cor} \subsection{The $\mathcal{A}$-Stokes problem in $\Omega$} Next we consider a version of the Stokes problem with coefficients that depend on a given function $\eta \in W^{3-1/q_\delta,q_\delta}$ with $0 < \delta < \varepsilonm$.
\subsection{The $\mathcal{A}$-Stokes problem in $\Omega$}

Next we consider a version of the Stokes problem with coefficients that depend on a given function $\eta \in W^{3-1/q_\delta,q_\delta}$ with $0 < \delta < \varepsilon_{\max}$. The function $\eta$ determines the coefficients $\mathcal{A}$, $J$, and $\mathcal{N}$ via \eqref{AJK_def} and \eqref{N_def}, and we study the system
\begin{equation}\label{A_stokes}
\begin{cases}
\diverge_{\mathcal{A}} S_\mathcal{A}(Q,v) = G^1 & \text{in }\Omega \\
J \diverge_{\mathcal{A}} v = G^2 & \text{in } \Omega \\
v\cdot \mathcal{N}/ \abs{\mathcal{N}_0} = G^3_+ &\text{on } \Sigma \\
\mu \mathbb{D}_\mathcal{A} v \mathcal{N} \cdot \mathcal{T} / \abs{\mathcal{N}_0}^2 = G^4_+ &\text{on } \Sigma \\
v\cdot J\nu = G^3_- &\text{on } \Sigma_s \\
\mu \mathbb{D}_\mathcal{A} v \nu \cdot \tau = G^4_- &\text{on } \Sigma_s.
\end{cases}
\end{equation}
Note here that $\mathcal{N} = \mathcal{N}_0 - \partial_1 \eta e_1$, where $\mathcal{N}_0$, given by \eqref{N0_def}, is the outward normal vector on $\Sigma$, and that $\mathcal{T} = \mathcal{T}_0 + \partial_1 \eta e_2$, where $\mathcal{T}_0 = e_1 + \partial_1 \zeta_0 e_2$ is the associated tangent vector. We begin our analysis of this problem by introducing the operator
\begin{equation}\label{A_stokes_om_iso_def1}
T_\delta[\eta] : \left( W^{2,q_\delta}(\Omega) \cap H^{1+\delta}(\Omega) \right) \times \left( W^{1,q_\delta}(\Omega) \cap \mathring{H}^\delta(\Omega) \right) \to \mathfrak{X}_\delta
\end{equation}
given by
\begin{equation}\label{A_stokes_om_iso_def2}
T_\delta[\eta](v,Q) = (\diverge_{\mathcal{A}} S_\mathcal{A}(Q,v), J \diverge_{\mathcal{A}} v, v\cdot \mathcal{N}/\abs{\mathcal{N}_0} \vert_{\Sigma}, v\cdot J\nu \vert_{\Sigma_s}, \mu \mathbb{D}_\mathcal{A} v \mathcal{N} \cdot \mathcal{T} / \abs{\mathcal{N}_0}^2 \vert_{\Sigma}, \mu \mathbb{D}_\mathcal{A} v \nu \cdot \tau \vert_{\Sigma_s}).
\end{equation}
The map $T_\delta[\eta]$, which encodes the solvability of \eqref{A_stokes}, is an isomorphism under a smallness assumption on $\eta$.

\begin{thm} \label{A_stokes_om_iso}
Let $\varepsilon_{\max}$ be as in \eqref{epm_def}. Let $0 < \delta < \varepsilon_{\max}$ and $q_\delta = 2/(2-\delta)$. There exists a $\gamma >0$ such that if $\norm{\eta}_{W^{3-1/q_\delta,q_\delta}} < \gamma$, then the operator $T_{\delta}[\eta]$ defined by \eqref{A_stokes_om_iso_def1} and \eqref{A_stokes_om_iso_def2} is well-defined and is a bounded isomorphism.
\end{thm}

\begin{proof}
We divide the proof into steps.

\emph{Step 1 - Setup: } First note that
\begin{equation}
\int_\Omega J \diverge_{\mathcal{A}} v = \int_{\partial \Omega} J \mathcal{A} \nu \cdot v = \int_{\Sigma} \frac{\mathcal{N}}{\abs{\mathcal{N}_0}} \cdot v+ \int_{\Sigma_s} J \nu \cdot v,
\end{equation}
which establishes the compatibility condition between the second and third components that is needed for $T_\delta[\eta]$ to map into $\mathfrak{X}_\delta$. Now assume that $\gamma < 1$ is as small as in Lemma \ref{eta_small}.
We write $T_\delta[\eta](v,Q) = T_\delta(v,Q) - \mathcal{G}(v,Q)$, where $T_\delta$ is defined by \eqref{stokes_om_iso_def1} and \eqref{stokes_om_iso_def2} and $\mathcal{G}$ denotes the linear map with components
\begin{equation}
\begin{split}
\mathcal{G}^1(v,Q) &= \diverge_{I-\mathcal{A}} S_{\mathcal{A}}(Q,v) - \diverge \mu \mathbb{D}_{I-\mathcal{A}}(v) \\
\mathcal{G}^2(v) &= \diverge_{I-\mathcal{A}} v + (1-J) \diverge_{\mathcal{A}} v \\
\mathcal{G}^3_+(v) &= (1+(\partial_1\zeta_0)^2)^{-1/2}[\partial_1 \eta v_1 ] \\
\mathcal{G}^4_+(v) &= (1+(\partial_1\zeta_0)^2)^{-1} [ \mu \mathbb{D}_{I-\mathcal{A}} v \mathcal{N}_0\cdot \mathcal{T}_0 -\mu \partial_1 \eta (\mathbb{D}_\mathcal{A} v \mathcal{N}_0 \cdot e_2 - \mathbb{D}_\mathcal{A} v e_1 \cdot \mathcal{T}_0 ) +\mu (\partial_1 \eta)^2 \mathbb{D}_\mathcal{A} v e_1 \cdot e_2 ] \\
\mathcal{G}^3_-(v) & = (1-J) v\cdot \nu \\
\mathcal{G}^4_-(v) & = \mu \mathbb{D}_{I-\mathcal{A}} v \nu \cdot \tau.
\end{split}
\end{equation}
Since the ranges of both $T_\delta[\eta]$ and $T_\delta$ satisfy the compatibility condition, so does the range of $\mathcal{G} = T_\delta - T_\delta[\eta]$. Then the equation $T_\delta[\eta](v,Q) = G := (G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-)$ is equivalent to
\begin{equation}\label{A_stokes_om_iso_1}
T_\delta(v,Q) = G + \mathcal{G}(v,Q).
\end{equation}

\emph{Step 2 - $\mathcal{G}$ boundedness: } We now claim that
\begin{equation}\label{A_stokes_om_iso_0}
\norm{\mathcal{G}(v,Q) }_{\mathfrak{X}_\delta} \lesssim \norm{\eta}_{W^{3-1/q_\delta,q_\delta}} \left( \norm{v}_{W^{2,q_\delta}} + \norm{Q}_{W^{1,q_\delta}} \right).
\end{equation}
We proceed term by term.

\textbf{$\mathcal{G}^1$ estimate:} We need to bound $\mathcal{G}^1(v,Q)$ in $L^{q_\delta}(\Omega)$. We estimate the first term via
\begin{multline}
\norm{\diverge_{I-\mathcal{A}} S_{\mathcal{A}}(Q,v) }_{L^{q_\delta}} \lesssim \norm{\nabla \bar{\eta}}_{L^\infty} \left( \norm{ \nabla Q }_{L^{q_\delta}} + \norm{ \nabla^2 v }_{L^{q_\delta}} \right) \\
+ \norm{\nabla \bar{\eta}}_{L^\infty} \norm{\nabla^2 \bar{\eta}}_{L^{2/(1-\delta)}} \left(\norm{Q}_{L^{2/(1-\delta)}} + \norm{\nabla v}_{L^{2/(1-\delta)}} \right) \lesssim \norm{\eta}_{W^{3-1/q_\delta,q_\delta}} \left(\norm{v}_{W^{2,q_\delta}} + \norm{Q}_{W^{1,q_\delta}} \right).
\end{multline}
Similarly, we estimate the second term as
\begin{equation}
\norm{\diverge \mu \mathbb{D}_{I-\mathcal{A}}(v)}_{L^{q_\delta}} \lesssim \norm{\nabla \bar{\eta}}_{L^\infty} \norm{\nabla^2 v}_{L^{q_\delta}} + \norm{\nabla^2 \bar{\eta}}_{L^{2/(1-\delta)}} \norm{\nabla v}_{L^{2/(1-\delta)}} \lesssim \norm{\eta}_{W^{3-1/q_\delta,q_\delta}} \norm{v}_{W^{2,q_\delta}}.
\end{equation}
Combining these two, we deduce that
\begin{equation}
\norm{ \mathcal{G}^1(v,Q) }_{L^{q_\delta}} \lesssim \norm{\eta}_{W^{3-1/q_\delta,q_\delta}} \left(\norm{v}_{W^{2,q_\delta}} + \norm{Q}_{W^{1,q_\delta}} \right).
\end{equation}

\textbf{$\mathcal{G}^2$ estimate: } We need to bound $\mathcal{G}^2(v)$ in $W^{1,q_\delta}(\Omega)$. For the first term we bound
\begin{equation}
\norm{ \diverge_{I-\mathcal{A}} v }_{W^{1,q_\delta}} \lesssim \norm{\nabla \bar{\eta}}_{L^\infty} \norm{v}_{W^{2,q_\delta}} +\norm{\nabla^2 \bar{\eta}}_{L^{2/(1-\delta)}} \norm{\nabla v}_{L^{2/(1-\delta)}} \lesssim \norm{\eta}_{W^{3-1/q_\delta,q_\delta}} \norm{v}_{W^{2,q_\delta}}.
\end{equation}
Similarly, for the second term we bound
\begin{multline}
\norm{ (1-J) \diverge_{\mathcal{A}} v }_{W^{1,q_\delta}} \lesssim \norm{\nabla \bar{\eta}}_{L^\infty} \norm{v}_{W^{2,q_\delta}} + (1+\norm{\nabla \bar{\eta}}_{L^\infty}) \norm{\nabla^2 \bar{\eta}}_{L^{2/(1-\delta)}} \norm{\nabla v}_{L^{2/(1-\delta)}} \\
\lesssim \norm{\eta}_{W^{3-1/q_\delta,q_\delta}} \norm{v}_{W^{2,q_\delta}}.
\end{multline}
Combining these, we deduce that
\begin{equation}
\norm{ \mathcal{G}^2(v) }_{W^{1,q_\delta}} \lesssim \norm{\eta}_{W^{3-1/q_\delta,q_\delta}} \norm{v}_{W^{2,q_\delta}}.
\end{equation}

\textbf{$\mathcal{G}^3_+$ estimate:} We need to bound $\mathcal{G}^3_+(v)$ in $W^{2-1/q_\delta,q_\delta}(\Sigma)$. For this we use the trace characterization of the boundary norms and the fact that $W^{2,q_\delta}(\Omega)$ is an algebra to estimate
\begin{multline}
\norm{\mathcal{G}^3_+(v) }_{W^{2-1/q_\delta,q_\delta}(\Sigma)} \lesssim \norm{ \partial_1 \bar{\eta} v_1 }_{W^{2,q_\delta}(\Omega)} \\
\lesssim \norm{\partial_1 \bar{\eta} }_{W^{2,q_\delta}(\Omega)} \norm{v }_{W^{2,q_\delta}(\Omega)} \lesssim \norm{\bar{\eta} }_{W^{3,q_\delta}(\Omega)} \norm{v }_{W^{2,q_\delta}(\Omega)} \lesssim \norm{\eta }_{W^{3-1/q_\delta,q_\delta}} \norm{v }_{W^{2,q_\delta}(\Omega)}.
\end{multline}

\textbf{$\mathcal{G}^3_-$ estimate:} We need to bound $\mathcal{G}^3_-(v)$ in $W^{2-1/q_\delta,q_\delta}(\Sigma_s)$. Since $\nu$ is determined by $\Sigma_s$, which is $C^2$, we can argue as with $\mathcal{G}^3_+$ to estimate
\begin{equation}
\norm{\mathcal{G}^3_-(v)}_{W^{2-1/q_\delta,q_\delta}(\Sigma_s)} \lesssim \norm{(1-J) v}_{W^{2,q_\delta}(\Omega)} \lesssim \norm{\bar{\eta} }_{W^{3,q_\delta}(\Omega)} \norm{v }_{W^{2,q_\delta}(\Omega)} \lesssim \norm{\eta }_{W^{3-1/q_\delta,q_\delta}} \norm{v }_{W^{2,q_\delta}(\Omega)}.
\end{equation}

\textbf{$\mathcal{G}^4_+$ estimate:} We need to bound $\mathcal{G}^4_+(v)$ in $W^{1-1/q_\delta,q_\delta}(\Sigma)$. Recall that $\mathcal{N}_0 = -\partial_1 \zeta_0 e_1 + e_2$ and $\mathcal{T}_0 = e_1 + \partial_1 \zeta_0 e_2$ are smooth, so we can bound
\begin{equation}
\norm{\mathcal{G}^4_+(v)}_{W^{1-1/q_\delta,q_\delta}(\Sigma)} \lesssim \norm{\mathbb{D}_{I-\mathcal{A}} v}_{W^{1-1/q_\delta,q_\delta}(\Sigma)} + \norm{\partial_1 \eta \mathbb{D}_\mathcal{A} v }_{W^{1-1/q_\delta,q_\delta}(\Sigma)} + \norm{(\partial_1\eta)^2 \mathbb{D}_\mathcal{A} v}_{W^{1-1/q_\delta,q_\delta}(\Sigma)}.
\end{equation}
We then use the trace characterization again to bound
\begin{multline}
\norm{\mathbb{D}_{I-\mathcal{A}} v}_{W^{1-1/q_\delta,q_\delta}(\Sigma)}+ \norm{\partial_1 \eta \mathbb{D}_\mathcal{A} v}_{W^{1-1/q_\delta,q_\delta}(\Sigma)} \lesssim \norm{\mathbb{D}_{I-\mathcal{A}} v}_{W^{1,q_\delta}(\Omega)} + \norm{\partial_1 \bar{\eta} \mathbb{D}_\mathcal{A} v}_{W^{1,q_\delta}(\Omega)} \\
\lesssim \norm{\nabla \bar{\eta} \nabla v}_{L^{q_\delta}(\Omega)} + \norm{\nabla^2 \bar{\eta} \nabla v}_{L^{q_\delta}(\Omega)} + \norm{\nabla \bar{\eta} \nabla^2 v}_{L^{q_\delta}(\Omega)} \\
\lesssim \norm{\nabla \bar{\eta}}_{L^\infty} \norm{v}_{W^{2,q_\delta}} + \norm{\nabla^2 \bar{\eta}}_{L^{2/(1-\delta)}} \norm{\nabla v}_{L^{2/(1-\delta)}} \lesssim \norm{\eta }_{W^{3-1/q_\delta,q_\delta}} \norm{v }_{W^{2,q_\delta}(\Omega)}.
\end{multline}
Similarly,
\begin{multline}
\norm{(\partial_1\eta)^2 \mathbb{D}_\mathcal{A} v}_{W^{1-1/q_\delta,q_\delta}(\Sigma)} \lesssim \norm{(\partial_1\bar{\eta})^2 \mathbb{D}_\mathcal{A} v}_{W^{1,q_\delta}(\Omega)} \\
\lesssim \norm{\nabla \bar{\eta}}_{L^\infty}^2 \norm{v}_{W^{2,q_\delta}} + \norm{\nabla \bar{\eta}}_{L^\infty} \norm{\nabla^2 \bar{\eta}}_{L^{2/(1-\delta)}} \norm{\nabla v}_{L^{2/(1-\delta)}} \lesssim \norm{\eta }_{W^{3-1/q_\delta,q_\delta}} \norm{v }_{W^{2,q_\delta}(\Omega)}.
\end{multline}
Combining these shows that
\begin{equation}
\norm{\mathcal{G}^4_+(v)}_{W^{1-1/q_\delta,q_\delta}(\Sigma)} \lesssim \norm{\eta }_{W^{3-1/q_\delta,q_\delta}} \norm{v }_{W^{2,q_\delta}(\Omega)}.
\end{equation}

\textbf{$\mathcal{G}^4_-$ estimate:} We need to bound $\mathcal{G}^4_-(v)$ in $W^{1-1/q_\delta,q_\delta}(\Sigma_s)$. Since $\nu$ and $\tau$ are determined by $\Sigma_s$ and are thus $C^2$, we can estimate in exactly the same way as above:
\begin{multline}
\norm{\mathcal{G}^4_-(v)}_{W^{1-1/q_\delta,q_\delta}(\Sigma_s)} \lesssim \norm{\mathbb{D}_{I-\mathcal{A}} v}_{W^{1-1/q_\delta,q_\delta}(\Sigma_s)} \lesssim \norm{\mathbb{D}_{I-\mathcal{A}} v}_{W^{1,q_\delta}(\Omega)} \\
\lesssim \norm{\nabla \bar{\eta} \nabla v}_{L^{q_\delta}(\Omega)} + \norm{\nabla^2 \bar{\eta} \nabla v}_{L^{q_\delta}(\Omega)} + \norm{\nabla \bar{\eta} \nabla^2 v}_{L^{q_\delta}(\Omega)} \\
\lesssim \norm{\nabla \bar{\eta}}_{L^\infty} \norm{v}_{W^{2,q_\delta}} + \norm{\nabla^2 \bar{\eta}}_{L^{2/(1-\delta)}} \norm{\nabla v}_{L^{2/(1-\delta)}} \lesssim \norm{\eta }_{W^{3-1/q_\delta,q_\delta}} \norm{v }_{W^{2,q_\delta}(\Omega)}.
\end{multline}

\textbf{Synthesis:} Combining the above estimates shows that the bound \eqref{A_stokes_om_iso_0} holds.

\emph{Step 3 - Isomorphism: } The map $T_{\delta}$ is an isomorphism, so \eqref{A_stokes_om_iso_1} is equivalent to the fixed point problem
\begin{equation}\label{A_stokes_om_iso_3}
(v,Q) = T_{\delta}^{-1}(G+ \mathcal{G}(v,Q)) =: \Psi(v,Q)
\end{equation}
for $\Psi$ a map from $Z := \left( W^{2,q_\delta}(\Omega) \cap H^{1+\delta}(\Omega) \right) \times \left( W^{1,q_\delta}(\Omega) \cap \mathring{H}^\delta(\Omega) \right)$ to itself. From \eqref{A_stokes_om_iso_0} we have that
\begin{equation}
\norm{\Psi(v_1,Q_1) - \Psi(v_2,Q_2)}_Z \le C \norm{\eta}_{W^{3-1/q_\delta,q_\delta}} \norm{T^{-1}_{\delta}}_{\text{op}} \norm{(v_1,Q_1) - (v_2,Q_2) }_{Z}.
\end{equation}
Hence, if $\gamma$ is sufficiently small, then $\Psi$ is a contraction, and thus there exists a unique $(v,Q)$ solving \eqref{A_stokes_om_iso_1} for every $G$. In turn, this means that $T_\delta[\eta]$ is an isomorphism with this choice of $\gamma$.
\end{proof}
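Before moving on, we record the quantitative bound implicit in Step 3 of the preceding proof. This is a routine consequence of \eqref{A_stokes_om_iso_0} and \eqref{A_stokes_om_iso_3}, stated here only to emphasize that the inverse is bounded uniformly for $\norm{\eta}_{W^{3-1/q_\delta,q_\delta}} < \gamma$; we use the natural sum norm on $Z$ and write $C$ for the implicit constant in \eqref{A_stokes_om_iso_0}. After possibly shrinking $\gamma$ so that $C \gamma \norm{T_\delta^{-1}}_{\text{op}} \le 1/2$, the fixed point $(v,Q) = \Psi(v,Q)$ satisfies
\begin{equation}
\norm{(v,Q)}_{Z} \le \norm{T_\delta^{-1}}_{\text{op}} \left( \norm{G}_{\mathfrak{X}_\delta} + \norm{\mathcal{G}(v,Q)}_{\mathfrak{X}_\delta} \right) \le \norm{T_\delta^{-1}}_{\text{op}} \norm{G}_{\mathfrak{X}_\delta} + \frac{1}{2} \norm{(v,Q)}_{Z},
\end{equation}
and hence $\norm{(v,Q)}_{Z} \le 2 \norm{T_\delta^{-1}}_{\text{op}} \norm{G}_{\mathfrak{X}_\delta}$, i.e. $\norm{T_\delta[\eta]^{-1}}_{\text{op}} \le 2 \norm{T_\delta^{-1}}_{\text{op}}$.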
\subsection{The $\mathcal{A}$-Stokes problem in $\Omega$ with $\beta \neq 0$}

As the next step we modify the boundary conditions in \eqref{A_stokes} to include the Navier-slip friction term on the vessel walls. The new system is:
\begin{equation}\label{A_stokes_beta}
\begin{cases}
\diverge_{\mathcal{A}} S_\mathcal{A}(Q,v) = G^1 & \text{in }\Omega \\
J \diverge_{\mathcal{A}} v = G^2 & \text{in } \Omega \\
v\cdot \mathcal{N}/ \abs{\mathcal{N}_0} = G^3_+ &\text{on } \Sigma \\
\mu \mathbb{D}_\mathcal{A} v \mathcal{N} \cdot \mathcal{T} / \abs{\mathcal{N}_0}^2 = G^4_+ &\text{on } \Sigma \\
v\cdot J\nu = G^3_- &\text{on } \Sigma_s \\
\mu \mathbb{D}_\mathcal{A} v \nu \cdot \tau + \beta v\cdot \tau = G^4_- &\text{on } \Sigma_s,
\end{cases}
\end{equation}
where $\beta>0$ is the Navier-slip friction coefficient. We have the following existence result.

\begin{thm}\label{A_stokes_beta_solve}
Let $\varepsilon_{\max}$ be as in \eqref{epm_def}. Let $0 < \delta < \varepsilon_{\max}$, $q_\delta = 2/(2-\delta)$, and suppose that $\norm{\eta}_{W^{3-1/q_\delta,q_\delta} } < \gamma$, where $\gamma$ is as in Theorem \ref{A_stokes_om_iso}. If $(G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \in \mathfrak{X}_{\delta}$, then there exists a unique
\begin{equation}
(v,Q) \in \left( W^{2,q_\delta}(\Omega) \cap H^{1+\delta}(\Omega) \right) \times \left( W^{1,q_\delta}(\Omega) \cap \mathring{H}^\delta(\Omega) \right)
\end{equation}
solving \eqref{A_stokes_beta}. Moreover, the solution obeys the estimate
\begin{equation}\label{A_stokes_beta_solve_0}
\norm{v}_{W^{2,q_\delta}} + \norm{v}_{H^{1+\delta}} + \norm{Q}_{W^{1,q_\delta}} + \norm{Q}_{H^{\delta}}\lesssim \norm{ (G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-)}_{\mathfrak{X}_\delta} .
\end{equation}
\end{thm}

\begin{proof}
Define the operator $R: \left( W^{2,q_\delta}(\Omega) \cap H^{1+\delta}(\Omega) \right) \times \left( W^{1,q_\delta}(\Omega) \cap \mathring{H}^\delta(\Omega) \right) \to \mathfrak{X}_\delta$ via
\begin{equation}
R(v,Q) = (0,0,0,0,0,\beta v\cdot \tau\vert_{\Sigma_s}),
\end{equation}
which is bounded and well-defined since $v\cdot \tau \in W^{2-1/q_\delta,q_\delta}(\Sigma_s)$. Standard Sobolev theory shows that the embedding $W^{2-1/q_\delta,q_\delta}(\Sigma_s) \hookrightarrow W^{1-1/q_\delta,q_\delta}(\Sigma_s)$ is compact, so $R$ is a compact operator. Theorem \ref{A_stokes_om_iso} tells us that the operator $T_{\delta}[\eta]$ is an isomorphism, so the compactness of $R$ implies that $T_{\delta}[\eta] + R$ is a Fredholm operator of index zero. We claim that this map is injective. Once this is proved, the Fredholm alternative implies that the map is also surjective and hence an isomorphism, and the estimate \eqref{A_stokes_beta_solve_0} then follows since the inverse of a Banach space isomorphism is bounded.

To prove the claim we assume $(T_{\delta}[\eta] + R)(v,Q) =0$, i.e. that \eqref{A_stokes_beta} holds with all of the $G^i$ terms vanishing. We multiply the first equation in \eqref{A_stokes_beta} by $J v$ and integrate by parts, arguing as in Lemma \ref{geometric_evolution}, to arrive at the identity
\begin{equation}
\int_\Omega \frac{\mu}{2} \abs{\mathbb{D}_\mathcal{A} v}^2 J + \int_{\Sigma_s} \beta\abs{v\cdot \tau}^2 J =0.
\end{equation}
Thus $v=0$, and then $0 = \nabla_\mathcal{A} Q = \mathcal{A} \nabla Q$, which implies, since $\mathcal{A}$ is invertible (via Lemma \ref{eta_small}), that $Q$ is constant. Since $Q \in \mathring{H}^{\delta}$ we then have that $Q =0$. This proves the claim.
\end{proof}

\subsection{The $\mathcal{A}$-Stokes problem in $\Omega$ with a boundary equation for $\xi$}

We finally have the tools needed to address the desired problem, which couples the $\mathcal{A}$-Stokes system in $\Omega$ to boundary conditions on $\Sigma$ involving a new unknown $\xi$:
\begin{equation}\label{A_stokes_stress}
\begin{cases}
\diverge_{\mathcal{A}} S_\mathcal{A}(Q,v) = G^1 & \text{in }\Omega \\
J \diverge_{\mathcal{A}} v = G^2 & \text{in } \Omega \\
v\cdot \mathcal{N}/ \abs{\mathcal{N}_0} = G^3_+ &\text{on } \Sigma \\
S_\mathcal{A}(Q,v) \mathcal{N} = \left[ g\xi -\sigma \partial_1\left( \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + G^6 \right) \right] \mathcal{N} + G^4_+ \mathcal{T} + G^5\mathcal{N} &\text{on } \Sigma \\
v\cdot J \nu = G^3_- &\text{on } \Sigma_s \\
(S_{\mathcal{A}}(Q,v)\nu - \beta v)\cdot \tau = G^4_- &\text{on } \Sigma_s \\
\mp \sigma \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} (\pm \ell) = G^7_\pm.
\end{cases}
\end{equation}
We have the following existence result for \eqref{A_stokes_stress}.

\begin{thm}\label{A_stokes_stress_solve}
Let $\varepsilon_{\max}$ be as in \eqref{epm_def}. Let $0 < \delta < \varepsilon_{\max}$, $q_\delta = 2/(2-\delta)$, and suppose that $\norm{\eta}_{W^{3-1/q_\delta,q_\delta} } < \gamma$, where $\gamma$ is as in Theorem \ref{A_stokes_om_iso}. If $(G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-) \in \mathfrak{X}_{\delta}$, $G^5,\partial_1 G^6 \in W^{1-1/q_\delta,q_\delta}(\Sigma)$, and $G^7_\pm \in \mathbb{R}$, then there exists a unique
\begin{equation}
(v,Q,\xi) \in \left( W^{2,q_\delta}(\Omega) \cap H^{1+\delta}(\Omega) \right) \times \left( W^{1,q_\delta}(\Omega) \cap \mathring{H}^\delta(\Omega) \right) \times W^{3-1/q_\delta,q_\delta}(\Sigma)
\end{equation}
solving \eqref{A_stokes_stress}. Moreover, the solution obeys the estimate
\begin{multline}\label{A_stokes_stress_0}
\norm{v}_{W^{2,q_\delta}} + \norm{v}_{H^{1+\delta}} + \norm{Q}_{W^{1,q_\delta}} + \norm{Q}_{H^{\delta}} + \norm{\xi}_{W^{3-1/q_\delta,q_\delta}}\\
\lesssim \norm{ (G^1,G^2,G^3_+,G^3_-,G^4_+,G^4_-)}_{\mathfrak{X}_\delta} + \norm{G^5}_{W^{1-1/q_\delta,q_\delta}} + \norm{\partial_1 G^6}_{W^{1-1/q_\delta,q_\delta}} + [G^7]_\ell^2,
\end{multline}
where we recall that $[\cdot,\cdot]_\ell$ is defined in \eqref{bndry_pairing}.
\end{thm}

\begin{proof}
First note that, since $\abs{\mathcal{N}} = \abs{\mathcal{T}}$, the condition
\begin{equation}
S_\mathcal{A}(Q,v) \mathcal{N} = \left[ g\xi -\sigma \partial_1\left( \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + G^6 \right) \right] \mathcal{N} + G^4_+ \mathcal{T} + G^5\mathcal{N}
\end{equation}
is equivalent to
\begin{equation}
S_\mathcal{A}(Q,v) \mathcal{N} \cdot \frac{\mathcal{N}}{\abs{\mathcal{N}}^2} = \left[ g\xi -\sigma \partial_1\left( \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + G^6 \right) \right] +G^5
\end{equation}
and
\begin{equation}
S_\mathcal{A}(Q,v) \mathcal{N} \cdot \frac{\mathcal{T}}{\abs{\mathcal{N}_0}^2} = G^4_+ \frac{\abs{\mathcal{N}}^2}{\abs{\mathcal{N}_0}^2}.
\end{equation}
Note that the same sort of argument used in the proof of Theorem \ref{A_stokes_om_iso} shows that
\begin{equation}
\norm{G^4_+ \frac{\abs{\mathcal{N}}^2}{\abs{\mathcal{N}_0}^2} }_{W^{1-1/q_\delta,q_\delta}} \lesssim \norm{G^4_+ }_{W^{1-1/q_\delta,q_\delta}}
\end{equation}
since $\norm{\eta}_{W^{3-1/q_\delta,q_\delta}} < \gamma < 1$. We may then use Theorem \ref{A_stokes_beta_solve} to produce the pair $(v,Q)$ solving
\begin{equation}
\begin{cases}
\diverge_{\mathcal{A}} S_\mathcal{A}(Q,v) = G^1 & \text{in }\Omega \\
J \diverge_{\mathcal{A}} v = G^2 & \text{in } \Omega \\
v\cdot \mathcal{N}/ \abs{\mathcal{N}_0} = G^3_+ &\text{on } \Sigma \\
\mu \mathbb{D}_\mathcal{A} v \mathcal{N} \cdot \mathcal{T} / \abs{\mathcal{N}_0}^2 = G^4_+ \frac{\abs{\mathcal{N}}^2}{\abs{\mathcal{N}_0}^2} &\text{on } \Sigma \\
v\cdot J\nu = G^3_- &\text{on } \Sigma_s \\
\mu \mathbb{D}_\mathcal{A} v \nu \cdot \tau + \beta v\cdot \tau = -G^4_- &\text{on } \Sigma_s,
\end{cases}
\end{equation}
and obeying the estimate \eqref{A_stokes_beta_solve_0}. With this $(v,Q)$ in hand, we then have a solution to \eqref{A_stokes_stress} as soon as we find $\xi$ solving
\begin{equation}\label{A_stokes_stress_solve_1}
g\xi -\sigma \partial_1\left( \frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) = S_\mathcal{A}(Q,v) \mathcal{N}\cdot \frac{\mathcal{N}}{\abs{\mathcal{N}}^2} +\sigma \partial_1 G^6 - G^5
\end{equation}
on $\Sigma$ subject to the boundary conditions
\begin{equation}\label{A_stokes_stress_solve_2}
\mp \sigma \left(\frac{\partial_1 \xi}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + G^6 \right)(\pm \ell) = G^7_\pm.
\end{equation}
The estimate \eqref{A_stokes_beta_solve_0} guarantees that $S_\mathcal{A}(Q,v) \mathcal{N}\cdot \frac{\mathcal{N}}{\abs{\mathcal{N}}^2} \in W^{1-1/q_\delta,q_\delta}(\Sigma)$, so the usual elliptic theory provides a unique $\xi \in W^{3-1/q_\delta,q_\delta}(\Sigma)$ satisfying \eqref{A_stokes_stress_solve_1} and \eqref{A_stokes_stress_solve_2} and obeying the estimate
\begin{equation}\label{A_stokes_stress_solve_3}
\norm{\xi}_{W^{3-1/q_\delta,q_\delta}} \lesssim \norm{S_\mathcal{A}(Q,v) \mathcal{N}\cdot \frac{\mathcal{N}}{\abs{\mathcal{N}}^2}}_{W^{1-1/q_\delta,q_\delta}} + \norm{\partial_1 G^6}_{W^{1-1/q_\delta,q_\delta}} + \norm{G^5}_{W^{1-1/q_\delta,q_\delta}} + [G^7]_\ell^2 .
\end{equation}
Then \eqref{A_stokes_stress_0} follows by combining \eqref{A_stokes_beta_solve_0} and \eqref{A_stokes_stress_solve_3}.
\end{proof}

\section{Nonlinear estimates I: interaction terms, dissipative form}\label{sec_nl_int_d}

In this section we begin our study of the estimates available for the nonlinearities that appear in the system \eqref{ns_geometric} and its derivatives. Here we focus on the interaction terms as they appear in Theorem \ref{linear_energy} and on deriving estimates in terms of the dissipation functional. In order to avoid tedious restatements of the same hypothesis, we assume throughout this section that a solution to \eqref{ns_geometric} exists on the time horizon $(0,T)$ for $0 < T \le \infty$ and obeys the small-energy estimate
\begin{equation}
\sup_{0\le t < T} \mathcal{E}(t) \le \gamma^2 < 1,
\end{equation}
where $\gamma \in (0,1)$ is as in Lemma \ref{eta_small}. In particular, this means that the estimates of Lemma \ref{eta_small} are available for use, and we will use them often without explicit reference.
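We also record an elementary consequence of this standing assumption; it is a trivial observation, stated only for orientation when the estimates below are combined, so that powers of $\mathcal{E}$ may be freely interchanged with lower powers in upper bounds:
\begin{equation}
\mathcal{E} \le \sqrt{\mathcal{E}} \le \gamma < 1 \text{ on } (0,T),
\qquad \text{and hence, for instance,} \qquad
\left( \sqrt{\mathcal{E}} + \mathcal{E} \right) \sqrt{\mathcal{D}} \le 2 \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation}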
\subsection{General interaction functional estimates}

We begin by studying the terms involving $F^1$, $F^4$, and $F^5$ in Theorem \ref{linear_energy}. The structure of these terms is not particularly delicate, so we can derive general dual estimates in which the particular form of the test function is irrelevant. We begin with $F^1$.

\begin{prop}\label{nid_f1}
Suppose that $F^1$ is as defined in either \eqref{dt1_f1} or \eqref{dt2_f1}. Then
\begin{equation}\label{nid_f1_0}
\abs{\int_\Omega J w \cdot F^1} \lesssim \norm{w}_{H^1} \left(\sqrt{\mathcal{E}} + \mathcal{E} \right) \sqrt{\mathcal{D}}
\end{equation}
for all $w \in H^1(\Omega)$.
\end{prop}

\begin{proof}
We will present the proof only in the more involved case in which $F^1$ is defined by \eqref{dt2_f1}, which corresponds to two temporal derivatives. The case \eqref{dt1_f1}, which corresponds to one temporal derivative, follows from a simpler argument. There are fifteen terms appearing in \eqref{dt2_f1}, and we will deal with them one at a time, proving that each can be estimated in the stated form. For the sake of brevity, throughout the proof we will repeatedly make use of four essential tools without explicitly referring to them: H\"{o}lder's inequality, the standard Sobolev embeddings for $w \in H^1(\Omega)$, the fact that $\mathcal{E} \le 1$, and the catalogs of $L^q$ estimates given in Theorems \ref{catalog_energy} and \ref{catalog_dissipation}. For the latter we will always use the following ordering convention: in expressions of the form
\begin{equation}
a b c \lesssim A \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \quad \text{ and } \quad a b' c' \lesssim A \sqrt{\mathcal{D}} \mathcal{E},
\end{equation}
the ordering indicates that we bound $a \lesssim A$, use Theorem \ref{catalog_energy} to bound $b \lesssim \sqrt{\mathcal{E}}$ and $c' \lesssim \mathcal{E}$, and use Theorem \ref{catalog_dissipation} to estimate $c \lesssim \sqrt{\mathcal{D}}$ and $b' \lesssim \sqrt{\mathcal{D}}$. In other words, the order of appearance of $\mathcal{E}$ and $\mathcal{D}$ on the right-hand side corresponds to the order of the factors on the left and indicates which of Theorems \ref{catalog_energy} and \ref{catalog_dissipation} is being used implicitly.

\textbf{Term: $- 2\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(\partial_t p,\partial_t u)$. } We first bound
\begin{multline}
\abs{\int_\Omega J w \cdot (- 2\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(\partial_t p,\partial_t u)) } \lesssim \int_\Omega \abs{w} \abs{ \partial_t \mathcal{A}} (\abs{\nabla \partial_t p} + \abs{\nabla^2 \partial_t u} ) \\
+ \int_\Omega \abs{w} \abs{\partial_t \mathcal{A}} \abs{\nabla \mathcal{A}} (\abs{\partial_t p} + \abs{\nabla \partial_t u}) =: I + II.
\end{multline}
For $I$ we then bound
\begin{equation}
I \lesssim \norm{w}_{L^{2/\varepsilon_-}} \norm{ \partial_t \bar{\eta}}_{W^{1,\infty}} \left( \norm{\nabla \partial_t p}_{L^{2/(2-\varepsilon_-)}} +\norm{\nabla^2 \partial_t u}_{L^{2/(2-\varepsilon_-)}} \right) \lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}},
\end{equation}
and for $II$ we bound
\begin{equation}
II \lesssim \norm{w}_{L^{2/(\varepsilon_- + \varepsilon_+)}} \norm{ \partial_t \bar{\eta}}_{W^{1,\infty}} \norm{ \bar{\eta}}_{W^{2,2/(1-\varepsilon_+)}} \left( \norm{\partial_t p}_{L^{2/(1-\varepsilon_-)}} + \norm{\nabla \partial_t u}_{L^{2/(1-\varepsilon_-)}} \right) \lesssim \norm{w}_{H^1} \mathcal{E} \sqrt{\mathcal{D}}.
\end{equation}
Combining these shows that this term can be estimated as stated.

\textbf{Term: $2\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u$. } We first bound
\begin{equation}
\abs{\int_\Omega J w \cdot (2\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u )} \lesssim \int_\Omega \abs{w} \abs{\nabla \partial_t \mathcal{A}} \abs{\nabla \partial_t u} + \int_\Omega \abs{w}\abs{ \partial_t \mathcal{A}} \abs{\nabla^2 \partial_t u} =: I + II.
\end{equation}
We then bound
\begin{equation}
I \lesssim \norm{w}_{L^{4/(3\varepsilon_-)}} \left( \norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_-)}} + \norm{ \partial_t \bar{\eta}}_{W^{2,2/(1-\varepsilon_-)}} \right) \norm{\nabla \partial_t u}_{L^{4/(2-\varepsilon_-)}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}
\end{equation}
and
\begin{equation}
II \lesssim \norm{w}_{L^{2/\varepsilon_-}} \norm{ \partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla^2 \partial_t u}_{L^{2/(2-\varepsilon_-)}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation}
Combining these shows that this term can be estimated as stated.

\textbf{Term: $- \diverge_{\partial_t^2 \mathcal{A}} S_\mathcal{A}(p,u)$. } We first bound
\begin{equation}
\abs{\int_\Omega J w \cdot (- \diverge_{\partial_t^2 \mathcal{A}} S_\mathcal{A}(p,u)) } \lesssim \int_\Omega \abs{w} \abs{\partial_t^2 \mathcal{A}} (\abs{\nabla p} + \abs{\nabla^2 u}) + \int_\Omega \abs{w} \abs{\partial_t^2 \mathcal{A}} \abs{\nabla \mathcal{A}} \abs{\nabla u} =: I + II.
\end{equation}
Then we estimate
\begin{equation}
I \lesssim \norm{w}_{L^{2/(\varepsilon_+ - \alpha)}}\left(\norm{\partial_t \bar{\eta}}_{W^{1,2/\alpha}} + \norm{\partial_t^2 \bar{\eta}}_{W^{1,2/\alpha}} \right) \left(\norm{\nabla p}_{L^{2/(2-\varepsilon_+)}} + \norm{\nabla^2 u}_{L^{2/(2-\varepsilon_+)}} \right) \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}
\end{equation}
and
\begin{equation}
II \lesssim \norm{w}_{L^{2/(2\varepsilon_+ - \alpha)}} \left(\norm{\partial_t \bar{\eta}}_{W^{1,2/\alpha}} + \norm{\partial_t^2 \bar{\eta}}_{W^{1,2/\alpha}} \right) \norm{ \bar{\eta}}_{W^{2,2/(1-\varepsilon_+)}} \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \mathcal{E}.
\end{equation}
Combining these shows that this term can be estimated as stated.
\textbf{Term: $2 \mu \diverge_{\partial_t \mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u$. } We first estimate
\begin{equation}
\abs{\int_\Omega J w \cdot (2 \mu \diverge_{\partial_t \mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u) } \lesssim \int_\Omega \abs{w} \abs{ \partial_t \mathcal{A}}^2 \abs{\nabla^2 u} + \int_\Omega \abs{w} \abs{\partial_t \mathcal{A}} \abs{\nabla \partial_t \mathcal{A}} \abs{\nabla u} =: I + II.
\end{equation}
We then bound
\begin{equation}
I \lesssim \norm{w}_{L^{2/\varepsilon_+}} \ns{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla^2 u}_{L^{2/(2-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \mathcal{E} \sqrt{\mathcal{D}}
\end{equation}
and
\begin{equation}
II \lesssim \norm{w}_{L^{2/(\varepsilon_- + \varepsilon_+)}} \norm{ \partial_t \bar{\eta}}_{W^{1,\infty}} \left( \norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_-)}} + \norm{ \partial_t \bar{\eta}}_{W^{2,2/(1-\varepsilon_-)}} \right) \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{equation}
Combining these shows that this term can be estimated as stated.

\textbf{Term: $\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t^2 \mathcal{A}} u$. } We initially estimate
\begin{equation}
\abs{\int_\Omega J w \cdot (\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t^2 \mathcal{A}} u)} \lesssim \int_\Omega \abs{w} \abs{ \partial_t^2 \mathcal{A}} \abs{\nabla^2 u} + \int_\Omega \abs{w} \abs{\nabla \partial_t^2 \mathcal{A}} \abs{\nabla u} =: I + II.
\end{equation}
Then we bound
\begin{equation}
I \lesssim \norm{w}_{L^{2/(\varepsilon_+ - \alpha)}}\left(\norm{ \partial_t \bar{\eta}}_{W^{1,2/\alpha}} + \norm{ \partial_t^2 \bar{\eta}}_{W^{1,2/\alpha}} \right) \norm{\nabla^2 u}_{L^{2/(2-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}
\end{equation}
and
\begin{equation}
II \lesssim \norm{w}_{L^{2/(\varepsilon_+ - \alpha)}} \left( \norm{ \partial_t \bar{\eta}}_{W^{2,2/(1+\alpha)}}+\norm{ \partial_t^2 \bar{\eta}}_{W^{2,2/(1+\alpha)}} \right) \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{equation}
Combining these shows that this term can be estimated as stated.

\textbf{Term: $- 2 u \cdot \nabla_{\partial_t \mathcal{A}} \partial_t u$. } For this term we bound
\begin{multline}
\abs{\int_\Omega Jw \cdot (2 u \cdot \nabla_{\partial_t \mathcal{A}} \partial_t u)} \lesssim \int_{\Omega} \abs{w} \abs{u} \abs{ \partial_t \mathcal{A}} \abs{\nabla \partial_t u} \lesssim \norm{w}_{L^{2}} \norm{u}_{L^\infty} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla \partial_t u}_{L^{2}} \\
\lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{multline}

\textbf{Term: $- 2 \partial_t u \cdot \nabla_{\mathcal{A}} \partial_t u$.
} We bound
\begin{equation}
\abs{\int_{\Omega} J w \cdot(2 \partial_t u \cdot \nabla_{\mathcal{A}} \partial_t u)} \lesssim \int_{\Omega} \abs{w} \abs{\partial_t u} \abs{\nabla \partial_t u} \lesssim \norm{w}_{L^{2}} \norm{\partial_t u}_{L^\infty} \norm{\nabla \partial_t u}_{L^{2}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation}

\textbf{Term: $- 2 \partial_t u \cdot \nabla_{\partial_t \mathcal{A}} u$. } We bound
\begin{multline}
\abs{\int_{\Omega} Jw \cdot(2 \partial_t u \cdot \nabla_{\partial_t \mathcal{A}} u)} \lesssim \int_{\Omega} \abs{w} \abs{\partial_t u} \abs{\partial_t \mathcal{A}} \abs{\nabla u} \lesssim \norm{w}_{L^{2}} \norm{\partial_t u}_{L^\infty} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla u}_{L^{2}} \\
\lesssim \norm{w}_{H^1} \mathcal{E} \sqrt{\mathcal{D}}.
\end{multline}

\textbf{Term: $- u \cdot \nabla_{\partial_t^2 \mathcal{A}} u$. } We estimate
\begin{multline}
\abs{\int_{\Omega} Jw \cdot(u \cdot \nabla_{\partial_t^2 \mathcal{A}} u)} \lesssim \int_{\Omega} \abs{w} \abs{u} \abs{ \partial_t^2 \mathcal{A}} \abs{\nabla u} \lesssim \norm{w}_{L^{2/(1 - \alpha)}} \norm{u}_{L^\infty} \left(\norm{\partial_t \bar{\eta}}_{W^{1,2/\alpha}} + \norm{\partial_t^2 \bar{\eta}}_{W^{1,2/\alpha}} \right) \norm{\nabla u}_{L^{2}} \\
\lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{multline}

\textbf{Term: $- \partial_t^2 u \cdot \nabla_{\mathcal{A}} u$. } We estimate
\begin{equation}
\abs{\int_{\Omega} Jw \cdot(\partial_t^2 u \cdot \nabla_{\mathcal{A}} u)} \lesssim \int_{\Omega} \abs{w} \abs{\partial_t^2 u} \abs{\nabla u} \lesssim \norm{w}_{L^{2/\varepsilon_+}} \norm{\partial_t^2 u}_{L^2} \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{equation}

\textbf{Term: $2 \partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 \partial_t u$. } For this term we bound
\begin{multline}
\abs{\int_{\Omega} J w \cdot(2 \partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 \partial_t u) } \lesssim \int_{\Omega} \abs{w} \abs{\partial_t \bar{\eta}} \abs{\partial_t K}\abs{\nabla \partial_t u} \lesssim \norm{w}_{L^2} \norm{\partial_t \bar{\eta}}_{L^\infty} \norm{ \partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla \partial_t u}_{L^2} \\
\lesssim \norm{w}_{H^1} \mathcal{E} \sqrt{\mathcal{D}}.
\end{multline}

\textbf{Term: $2 \partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t u$.
} We estimate
\begin{multline}
\abs{\int_{\Omega} J w \cdot(2 \partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t u) } \lesssim \int_{\Omega} \abs{w} \abs{\partial_t^2 \bar{\eta}} \abs{\nabla \partial_t u} \lesssim \norm{w}_{L^2} \norm{\partial_t^2 \bar{\eta}}_{L^\infty} \norm{\nabla \partial_t u}_{L^2} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{multline}

\textbf{Term: $2 \partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 u$. } We bound
\begin{multline}
\abs{\int_{\Omega} Jw \cdot(2 \partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 u)} \lesssim \int_{\Omega} \abs{w} \abs{\partial_t^2 \bar{\eta}} \abs{\partial_t K} \abs{\nabla u} \lesssim \norm{w}_{L^2} \norm{\partial_t^2 \bar{\eta}}_{L^\infty} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla u}_{L^2} \\
\lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \mathcal{E}.
\end{multline}

\textbf{Term: $\partial_t^3 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u$. } We estimate
\begin{equation}
\abs{\int_{\Omega} J w \cdot(\partial_t^3 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u)} \lesssim \int_{\Omega} \abs{w} \abs{\partial_t^3 \bar{\eta}} \abs{\nabla u} \lesssim \norm{w}_{L^{2/\varepsilon_+}} \norm{\partial_t^3 \bar{\eta}}_{L^2} \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{equation}

\textbf{Term: $\partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t^2 K\partial_2 u$. } For the final term we bound
\begin{multline}
\abs{\int_\Omega J w \cdot (\partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t^2 K\partial_2 u)} \lesssim \int_{\Omega} \abs{w} \abs{\partial_t \bar{\eta}} \abs{\partial_t^2 K} \abs{\nabla u} \\
\lesssim \norm{w}_{L^{2/(1-\alpha)}} \norm{\partial_t \bar{\eta}}_{L^\infty} \left(\norm{\partial_t \bar{\eta}}_{W^{1,2/\alpha}} + \norm{\partial_t^2 \bar{\eta}}_{W^{1,2/\alpha}} \right) \norm{\nabla u}_{L^2} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{multline}
\end{proof}

Next we study the $F^4$ nonlinearity.

\begin{prop}\label{nid_f4}
Suppose that $F^4$ is defined as in either \eqref{dt1_f4} or \eqref{dt2_f4}. Then
\begin{equation}\label{nid_f4_0}
\abs{ \int_{-\ell}^\ell w \cdot F^4 } \lesssim \norm{w}_{H^1} (\sqrt{\mathcal{E}} + \mathcal{E}) \sqrt{\mathcal{D}}
\end{equation}
for all $w \in H^1(\Omega)$.
\end{prop}

\begin{proof}
We will present the proof only in the more involved case in which $F^4$ is defined by \eqref{dt2_f4}, which corresponds to two temporal derivatives. The case \eqref{dt1_f4}, which corresponds to one temporal derivative, follows from a simpler argument. There are eleven terms appearing in \eqref{dt2_f4}, and we will deal with them mostly one at a time, with just a bit of grouping.
We will prove that each can be estimated in the stated form. For the sake of brevity, throughout the proof we will repeatedly make use of five essential tools without explicitly referring to them: H\"{o}lder's inequality, standard trace estimates for $H^1(\Omega)$, the standard Sobolev embeddings for $H^1(\Omega)$ and $H^{1/2}((-\ell,\ell))$, the fact that $\mathcal{E} \le 1$, and the catalogs of $L^q$ estimates given in Theorems \ref{catalog_energy} and \ref{catalog_dissipation}. For the latter we will again use the ordering convention described at the start of the proof of Proposition \ref{nid_f1}.

\textbf{Term: $ 2\mu \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \mathcal{N}$.} We bound
\begin{multline}
\abs{\int_{-\ell}^\ell 2\mu w \cdot (\mathbb{D}_{\partial_t \mathcal{A}} \partial_t u)(\mathcal{N}) } \lesssim \int_{-\ell}^\ell \abs{w} \abs{\partial_t \mathcal{A}} \abs{\nabla \partial_t u} \lesssim \norm{w}_{L^{1/\varepsilon_-}(\Sigma)} \norm{\partial_t \eta}_{W^{1,\infty}} \norm{\nabla \partial_t u}_{L^{1/(1-\varepsilon_-)}(\Sigma)} \\
\lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} .
\end{multline}

\textbf{Term: $ \mu \mathbb{D}_{\partial_t^2 \mathcal{A}} u \mathcal{N}$.} We estimate
\begin{multline}
\abs{ \int_{-\ell}^\ell w \cdot \mu \mathbb{D}_{\partial_t^2 \mathcal{A}} u \mathcal{N} } \lesssim \int_{-\ell}^\ell \abs{w} \abs{\partial_t^2 \mathcal{A}} \abs{\nabla u} \\
\lesssim \norm{w}_{L^{1/(\varepsilon_+ - \alpha)}(\Sigma)} \left(\norm{\partial_t \eta}_{W^{1,1/\alpha}} + \norm{\partial_t^2 \eta}_{W^{1,1/\alpha}} \right) \norm{\nabla u}_{L^{1/(1-\varepsilon_+)}(\Sigma)} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}}\sqrt{\mathcal{E}}.
\end{multline}

\textbf{Term: $\mu \mathbb{D}_{\partial_t \mathcal{A}} u \partial_t \mathcal{N}$.} We bound
\begin{multline}
\abs{ \int_{-\ell}^\ell w \cdot \mu \mathbb{D}_{\partial_t \mathcal{A}} u \partial_t \mathcal{N} } \lesssim \int_{-\ell}^\ell \abs{w} \abs{\partial_t \mathcal{A}} \abs{\nabla u} \abs{\partial_1 \partial_t \eta} \\
\lesssim \norm{w}_{L^{1/\varepsilon_+}(\Sigma)} \norm{\partial_t \eta}_{W^{1,\infty}} \norm{\nabla u}_{L^{1/(1-\varepsilon_+)}(\Sigma)} \norm{\partial_t \partial_1 \eta}_{L^\infty} \lesssim \norm{w}_{H^1} \mathcal{E} \sqrt{\mathcal{D}}.
\end{multline}

\textbf{Term: $\left[ 2g \partial_t \eta - 2\sigma \partial_1 \left(\frac{\partial_1 \partial_t \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) \right] \partial_t \mathcal{N} $.} We estimate
\begin{multline}
\abs{ \int_{-\ell}^\ell w \cdot \left[ 2g \partial_t \eta - 2\sigma \partial_1 \left(\frac{\partial_1 \partial_t \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) \right] \partial_t \mathcal{N} } \lesssim \int_{-\ell}^\ell \abs{w}(\abs{\partial_t \eta} + \abs{\partial_1 \partial_t \eta} + \abs{\partial_1^2 \partial_t \eta} ) \abs{\partial_1 \partial_t \eta} \\
\lesssim \norm{w}_{L^{1/\varepsilon_-}(\Sigma)} \norm{\partial_t \eta}_{W^{2,1/(1-\varepsilon_-)}} \norm{\partial_1 \partial_t \eta}_{L^\infty} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} .
\end{multline}

\textbf{Term: $\partial_1 \partial_t[ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] \partial_t \mathcal{N}$.} For this term we initially expand
\begin{multline}
\partial_1 \partial_t[ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] = \partial_1 [\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t \eta] = \frac{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}{\partial_1 \eta} \partial_1 \eta \partial_1^2 \partial_t \eta + \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \eta \partial_1 \partial_t \eta \\
+ \frac{ \partial_z \partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}{\partial_1 \eta} \partial_1 \eta \partial_1^2 \zeta_0 \partial_1 \partial_t \eta.
\end{multline}
This and Proposition \ref{R_prop} then allow us to bound
\begin{multline}
\abs{ \int_{-\ell}^\ell w \cdot \partial_1 [\partial_t[ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] ] \partial_t \mathcal{N} } \lesssim \int_{-\ell}^\ell \abs{w} \abs{\partial_1 \partial_t \eta} \abs{\partial_1 \eta}\abs{\partial_1^2 \partial_t \eta} + \int_{-\ell}^\ell \abs{w} \abs{\partial_1 \partial_t \eta} \abs{\partial_1^2 \eta} \abs{\partial_1 \partial_t \eta} \\
+\int_{-\ell}^\ell \abs{w} \abs{\partial_1 \partial_t \eta} \abs{\partial_1 \eta} \abs{\partial_1 \partial_t \eta} =: I + II + III.
\end{multline}
We then bound
\begin{equation}
I \lesssim \norm{w}_{L^{1/\varepsilon_-}(\Sigma)} \norm{\partial_1 \partial_t \eta}_{L^\infty} \norm{\partial_1 \eta}_{L^\infty} \norm{\partial_1^2 \partial_t \eta}_{L^{1/(1-\varepsilon_-)}} \lesssim \norm{w}_{H^1} \mathcal{E} \sqrt{\mathcal{D}},
\end{equation}
\begin{equation}
II \lesssim \norm{w}_{L^{1/\varepsilon_+}(\Sigma)} \ns{\partial_1 \partial_t \eta}_{L^\infty} \norm{\partial_1^2 \eta}_{L^{1/(1-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \mathcal{E} \sqrt{\mathcal{D}},
\end{equation}
and
\begin{equation}
III \lesssim \norm{w}_{L^{2}(\Sigma)} \ns{\partial_1 \partial_t \eta}_{L^\infty} \norm{\partial_1 \eta}_{L^{2}} \lesssim \norm{w}_{H^1} \mathcal{E} \sqrt{\mathcal{D}}.
\end{equation}
Combining these then shows that this term can be estimated as stated.

\textbf{Term: $ -2 S_{\mathcal{A}}(\partial_t p,\partial_t u) \partial_t \mathcal{N}$.} We estimate
\begin{multline}
\abs{ \int_{-\ell}^\ell -2 w \cdot S_{\mathcal{A}}(\partial_t p,\partial_t u) \partial_t \mathcal{N} } \lesssim \int_{-\ell}^\ell \abs{w} (\abs{\partial_t p} + \abs{\nabla \partial_t u}) \abs{\partial_1 \partial_t \eta} \\
\lesssim \norm{w}_{L^{1/\varepsilon_-}(\Sigma)} \left(\norm{\partial_t p}_{L^{1/(1-\varepsilon_-)}(\Sigma)} + \norm{\nabla \partial_t u}_{L^{1/(1-\varepsilon_-)}(\Sigma)} \right) \norm{\partial_t \partial_1 \eta}_{L^\infty} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{multline}

\textbf{Term: $\left[ g\eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) \right] \partial_t^2 \mathcal{N}$.} We bound
\begin{multline}
\abs{ \int_{-\ell}^\ell w \cdot \left[ g\eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) \right] \partial_t^2 \mathcal{N} } \lesssim \int_{-\ell}^\ell \abs{w} (\abs{\eta} + \abs{\partial_1 \eta} + \abs{\partial_1^2 \eta} ) \abs{\partial_1 \partial_t^2 \eta} \\
\lesssim \norm{w}_{L^{1/(\varepsilon_+ - \alpha)}(\Sigma)} \norm{\eta}_{W^{2,1/(1-\varepsilon_+)}} \norm{\partial_1 \partial_t^2 \eta}_{L^{1/\alpha}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{multline}

\textbf{Term: $ \partial_1[ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) ] \partial_t^2 \mathcal{N}$.} To handle this term we expand
\begin{equation}
\partial_1 [ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] = \frac{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}{\partial_1 \eta} \partial_1 \eta \partial_1^2 \eta + \frac{ \partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}{(\partial_1 \eta)^2}\partial_1^2 \zeta_0 (\partial_1 \eta)^2 .
\end{equation}
This and Proposition \ref{R_prop} then allow us to bound
\begin{multline}
\abs{ \int_{-\ell}^\ell w \cdot \partial_1[ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) ] \partial_t^2 \mathcal{N} } \lesssim \int_{-\ell}^\ell \abs{w} \abs{\partial_1 \partial_t^2 \eta} (\abs{\partial_1 \eta} \abs{\partial_1^2 \eta} + \abs{\partial_1 \eta}^2 ) \\
\lesssim \norm{w}_{L^{1/(\varepsilon_+-\alpha)}(\Sigma)} \norm{\partial_1 \partial_t^2 \eta}_{L^{1/\alpha}} \left(\norm{\partial_1 \eta}_{L^\infty} \norm{\partial_1^2 \eta}_{L^{1/(1-\varepsilon_+)}} + \norm{\partial_1 \eta}_{L^\infty} \norm{\partial_1 \eta}_{L^{1/(1-\varepsilon_+)}} \right) \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \mathcal{E}.
\end{multline}

\textbf{Term: $- S_{\mathcal{A}}(p,u) \partial_t^2 \mathcal{N}$.} For the final term we estimate
\begin{multline}
\abs{ - \int_{-\ell}^\ell w \cdot S_{\mathcal{A}}(p,u) \partial_t^2 \mathcal{N} } \lesssim \int_{-\ell}^\ell \abs{w} (\abs{p} + \abs{\nabla u}) \abs{\partial_1 \partial_t^2 \eta} \\
\lesssim \norm{w}_{L^{1/(\varepsilon_+ - \alpha)}(\Sigma)} \left(\norm{p}_{L^{1/(1-\varepsilon_+)}(\Sigma)} + \norm{\nabla u}_{L^{1/(1-\varepsilon_+)}(\Sigma)} \right) \norm{\partial_1 \partial_t^2 \eta}_{L^{1/\alpha}} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{multline}
\end{proof}

Finally, we study the $F^5$ nonlinearity.

\begin{prop}\label{nid_f5}
Suppose that $F^5$ is given by either \eqref{dt1_f5} or \eqref{dt2_f5}. Then
\begin{equation}\label{nid_f5_0}
\abs{ \int_{\Sigma_s} J(w \cdot \tau)F^5 } \lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}}\sqrt{\mathcal{D}}
\end{equation}
for every $w \in H^1(\Omega)$.
\end{prop}

\begin{proof}
As in the proofs of Propositions \ref{nid_f1} and \ref{nid_f4}, we will only prove the result in the harder case of two temporal derivatives, which occurs when $F^5$ is given by \eqref{dt2_f5}. In this case $F^5$ consists of only two terms. Using the bounds in Theorems \ref{catalog_energy} and \ref{catalog_dissipation} (once more with the ordering convention described at the start of the proof of Proposition \ref{nid_f1}) together with the Sobolev embeddings and trace estimates, we estimate the first term in $F^5$ via
\begin{multline}
\abs{\int_{\Sigma_s} J (w\cdot \tau) (2 \mu \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \nu \cdot \tau) } \lesssim \int_{\Sigma_s} \abs{w} \abs{\partial_t \mathcal{A}} \abs{\nabla \partial_t u} \\
\lesssim \norm{w}_{L^{1/\varepsilon_-}(\Sigma_s)} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}(\Sigma_s)} \norm{\nabla \partial_t u}_{L^{1/(1-\varepsilon_-)}(\Sigma_s)} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{multline}
Similarly, we bound the second term via
\begin{multline}
\abs{\int_{\Sigma_s} J (w\cdot \tau) (\mu \mathbb{D}_{\partial_t^2 \mathcal{A}} u \nu \cdot \tau) } \lesssim \int_{\Sigma_s} \abs{w} \abs{\partial_t^2 \mathcal{A}} \abs{\nabla u} \\
\lesssim \norm{w}_{L^{1/(\varepsilon_+ - \alpha)}(\Sigma_s)} \left( \norm{\partial_t \bar{\eta}}_{W^{1,1/\alpha}(\Sigma_s)} + \norm{\partial_t^2 \bar{\eta}}_{W^{1,1/\alpha}(\Sigma_s)} \right) \norm{\nabla u}_{L^{1/(1-\varepsilon_+)}(\Sigma_s)} \lesssim \norm{w}_{H^1} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{multline}
These bounds can then be combined to conclude that the stated estimate holds.
\end{proof}

We synthesize the results of Propositions \ref{nid_f1}, \ref{nid_f4}, and \ref{nid_f5} into the following result.

\begin{thm}\label{nid_v_est}
Consider the functional $H^1(\Omega) \ni w\mapsto \br{\mathcal{F},w} \in \mathbb{R}$ defined by
\begin{equation}
\br{\mathcal{F},w} = \int_\Omega F^1 \cdot w J - \int_{-\ell}^\ell F^4 \cdot w - \int_{\Sigma_s} J (w \cdot \tau)F^5,
\end{equation}
where $F^1$, $F^4$, and $F^5$ are defined either by \eqref{dt1_f1}, \eqref{dt1_f4}, and \eqref{dt1_f5} or else by \eqref{dt2_f1}, \eqref{dt2_f4}, and \eqref{dt2_f5}. Then
\begin{equation}
\abs{\br{\mathcal{F},w}} \lesssim \norm{w}_{H^1} ( \sqrt{\mathcal{E}} + \mathcal{E})\sqrt{\mathcal{D}}
\end{equation}
for all $w \in H^1(\Omega)$.
\end{thm}

\begin{proof}
This follows immediately from Propositions \ref{nid_f1}, \ref{nid_f4}, and \ref{nid_f5}.
\end{proof}

\subsection{General interaction functional estimates II: pressure term}

We next turn our attention to the term $F^2$ appearing in Theorem \ref{linear_energy}. We again derive a general dual estimate.

\begin{thm}\label{nid_p_est}
Suppose that $F^2$ is given by either \eqref{dt1_f2} or \eqref{dt2_f2}. Then
\begin{equation}\label{nid_p_est_0}
\abs{ \int_\Omega J \psi F^2} \lesssim \norm{\psi}_{L^2} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}
\end{equation}
for every $\psi \in L^2(\Omega)$.
\end{thm}

\begin{proof}
Again, we will only prove the result in the harder case of two temporal derivatives, which occurs when $F^2$ is given by \eqref{dt2_f2}. In this case $F^2$ consists of only two terms. From the bounds in Theorems \ref{catalog_energy} and \ref{catalog_dissipation} (again using the ordering convention described at the start of the proof of Proposition \ref{nid_f1}), together with H\"{o}lder's inequality and the fact that $\Omega$ has finite measure, we bound the first term via
\begin{multline}
\abs{\int_{\Omega} J \psi \diverge_{\partial_t^2 \mathcal{A}} u } \lesssim \int_{\Omega} \abs{\psi} \abs{\partial_t^2 \mathcal{A}} \abs{\nabla u} \\
\lesssim \norm{\psi}_{L^2} \left(\norm{\partial_t \bar{\eta}}_{W^{1,2/\alpha}} + \norm{\partial_t^2 \bar{\eta}}_{W^{1,2/\alpha}} \right) \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \norm{1}_{L^{2/(\varepsilon_+ - \alpha)}} \lesssim \norm{\psi}_{L^2} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{multline}
For the second term we argue similarly to estimate
\begin{equation}
\abs{\int_{\Omega} J \psi 2\diverge_{\partial_t \mathcal{A}}\partial_t u} \lesssim \int_{\Omega} \abs{\psi} \abs{\partial_t \mathcal{A}}\abs{\nabla \partial_t u} \lesssim \norm{\psi}_{L^2} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla \partial_t u}_{L^2} \lesssim \norm{\psi}_{L^2} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation}
Upon combining these, we arrive at the stated bound.
\end{proof}

\subsection{Special interaction estimates I: velocity terms}

The $F^3$ nonlinear interaction term in Theorem \ref{linear_energy} requires greater care than we have used above. Indeed, we will not derive general dual estimates, but will instead derive estimates that take careful advantage of the structure of the test function. When two time derivatives are applied, the $F^3$ nonlinearity from \eqref{dt2_f3} has the form
\begin{equation}
F^3 = \partial_t^2 [ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] = \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2 \eta + \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2,
\end{equation}
where $\mathcal{R}$ is as in \eqref{R_def}. For the purposes of estimating $F^3$ we will write
\begin{equation}
\partial_t^2 u \cdot \mathcal{N} = \partial_t^3 \eta - F^6.
\end{equation}
We may then decompose the relevant interaction term in Theorem \ref{linear_energy} as
\begin{multline}\label{nid_f3_decomp}
- \int_{-\ell}^\ell \sigma F^3 \partial_1 (\partial_t^2 u \cdot \mathcal{N}) = - \int_{-\ell}^\ell \sigma \partial_z \mathcal{R} (\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2 \eta \partial_1 \partial_t^3 \eta - \int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2 \eta \partial_1 F^6 \\
- \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^3 \eta - \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 F^6 \\
=: I + II + III + IV.
\end{multline}
We will handle each of these separately, starting with $I$.

\begin{prop}\label{nid_f3_I}
Let $I$ be as in \eqref{nid_f3_decomp}.
Then we have the estimates \begin{equation} \abs{I + \frac{d}{dt} \int_{-\ell}^\ell \sigma \partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \frac{\abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2}{2} } \lesssim \sqrt{\mathcal{E}} \mathcal{D} \end{equation} and \begin{equation} \abs{ \int_{-\ell}^\ell \sigma \partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \frac{\abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2}{2} } \lesssim \sqrt{\mathcal{E}} \mathcal{E}_{\shortparallel}. \end{equation} \end{prop} \begin{proof} The essential feature of $I$ is the appearance of the total time derivative $\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta \partial^\alphartial_1 \partial^\alphartialartial_t^3 \eta$, which allows us to write \begin{multline} I = - \int_{-\ell}^\ell \sigma \partial^\alphartial_z \mathcal{R} (\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartialartial_t \frac{\abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2}{2} = - \frac{d}{dt} \int_{-\ell}^\ell \sigma \partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \frac{\abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2}{2} \\ + \int_{-\ell}^\ell \sigma \partial^\alphartial_z^2 \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartial_1 \partial^\alphartialartial_t \eta \frac{\abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2}{2}. \end{multline} Using Theorems \ref{catalog_energy} and \ref{catalog_dissipation} (with the ordering convention described in the proof of Proposition \ref{nid_f1}) in conjunction with Proposition \ref{R_prop}, we then estimate \begin{equation} \abs{\int_{-\ell}^\ell \sigma \partial^\alphartial_z^2 \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartial_1 \partial^\alphartialartial_t \eta \frac{\abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2}{2}} \lesssim \int_{-\ell}^\ell \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta} \abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2 \lesssim \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^\infty} \ns{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^{2}} \lesssim \sqrt{\mathcal{E}} \mathcal{D} \end{equation} and \begin{equation} \abs{ \int_{-\ell}^\ell \sigma \partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \frac{\abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2}{2} } \lesssim \int_{-\ell}^\ell \abs{\partial^\alphartial_1 \eta} \abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2 \lesssim \norm{\partial^\alphartial_1 \eta}_{L^\infty} \ns{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^{2}} \lesssim \sqrt{\mathcal{E}} \mathcal{E}_{\shortparallel}. \end{equation} These are the stated bounds. \end{proof} Next we deal with the term $II$. \begin{prop}\label{nid_f3_II} Let $II$ be as given in \eqref{nid_f3_decomp}. Then \begin{equation} \abs{II} \lesssim \mathcal{E} \mathcal{D}. 
\end{equation} \end{prop} \begin{proof} We begin by writing $F^6 = -2 \partial^\alphartialartial_t u_1 \partial^\alphartial_1 \partial^\alphartialartial_t \eta - u_1 \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta$ in order to expand \begin{multline} II = \int_{-\ell}^\ell \sigma \frac{\partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}{\partial^\alphartial_1 \eta} \partial^\alphartial_1 \eta \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta 2 \partial^\alphartial_1 \partial^\alphartialartial_t u_1 \partial^\alphartial_1 \partial^\alphartialartial_t \eta + \int_{-\ell}^\ell \sigma \frac{\partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}{\partial^\alphartial_1 \eta} \partial^\alphartial_1 \eta \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta 2 \partial^\alphartialartial_t u_1 \partial^\alphartial_1^2 \partial^\alphartialartial_t \eta \\ + \int_{-\ell}^\ell \sigma \frac{\partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}{\partial^\alphartial_1 \eta} \partial^\alphartial_1 \eta \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta \partial^\alphartial_1 u_1 \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta + \int_{-\ell}^\ell \sigma \partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta u_1 \partial^\alphartial_1^2 \partial^\alphartialartial_t^2 \eta \\=: II_1 + II_2 + II_3 + II_4. \end{multline} To estimate these terms we will use Theorems \ref{catalog_energy} and \ref{catalog_dissipation} (with the ordering convention described in the proof of Proposition \ref{nid_f1}), Proposition \ref{R_prop}, the ordering $0 <2 \alpha < \varepsilon_- < \varepsilon_+$ assumed in \eqref{kappa_ep_def}, and H\"{o}lder's inequality. 
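As an aside recorded purely for the reader's convenience (it follows at once from the ordering assumed in \eqref{kappa_ep_def} and is not needed elsewhere), the exponent arithmetic behind these applications of H\"{o}lder's inequality on the bounded interval $(-\ell,\ell)$ is
\begin{equation}
\alpha + (1-\varepsilon_-) = 1 - (\varepsilon_- - \alpha) < 1
\quad \text{and} \quad
2\alpha + (1-\varepsilon_+) < \varepsilon_- + (1-\varepsilon_+) < 1,
\end{equation}
so in each of the products below the reciprocals of the integrability exponents sum to at most one, and any deficit is absorbed by $\norm{1}_{L^r((-\ell,\ell))}$ for the appropriate $r$.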
This yields the bounds \begin{multline} \abs{II_1} \lesssim \int_{-\ell}^\ell \abs{\partial_1 \eta} \abs{\partial_1 \partial_t^2 \eta} \abs{ \partial_1 \partial_t u} \abs{ \partial_1 \partial_t \eta} \lesssim \norm{\partial_1 \eta}_{L^\infty} \norm{\partial_1 \partial_t^2 \eta}_{L^{1/\alpha}} \norm{\partial_1 \partial_t u}_{L^{1/(1-\varepsilon_-)}(\Sigma)} \norm{\partial_1 \partial_t \eta}_{L^\infty} \\ \lesssim \sqrt{\mathcal{E}} \mathcal{D} \sqrt{\mathcal{E}}, \end{multline} \begin{multline} \abs{II_2} \lesssim \int_{-\ell}^\ell \abs{\partial_1 \eta} \abs{\partial_1 \partial_t^2 \eta} \abs{\partial_t u}\abs{\partial_1^2 \partial_t \eta} \lesssim \norm{\partial_1 \eta}_{L^\infty} \norm{\partial_1 \partial_t^2 \eta}_{L^{1/\alpha}} \norm{\partial_t u}_{L^\infty(\Sigma)} \norm{\partial_1^2 \partial_t \eta}_{L^{1/(1-\varepsilon_-)}} \\ \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}, \end{multline} and \begin{equation} \abs{II_3} \lesssim \int_{-\ell}^\ell \abs{\partial_1 \eta} \abs{\partial_1 \partial_t^2 \eta}^2 \abs{\partial_1 u} \lesssim \norm{\partial_1 \eta}_{L^\infty} \ns{\partial_1 \partial_t^2 \eta}_{L^{1/\alpha}} \norm{\partial_1 u}_{L^{1/(1-\varepsilon_+)}(\Sigma)} \lesssim \sqrt{\mathcal{E}} \mathcal{D} \sqrt{\mathcal{E}}. \end{equation} For $II_4$ we note that $\partial_1 \partial_t^2 \eta \partial_1^2 \partial_t^2 \eta$ is a total derivative, so we can integrate by parts and use the fact that $u_1 =0$ at the endpoints to see that \begin{multline} \abs{II_4} = \abs{\int_{-\ell}^\ell \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) u_1 \partial_1 \frac{\abs{ \partial_1 \partial_t^2 \eta}^2}{2} } = \abs{- \int_{-\ell}^\ell \sigma \partial_1 \left[ \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) u_1 \right] \frac{\abs{ \partial_1 \partial_t^2 \eta}^2}{2} } \\ = \abs{ \int_{-\ell}^\ell \sigma \left[ \frac{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}{\partial_1 \eta} \partial_1 \eta \partial_1 u_1 + \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \eta u_1 + \frac{\partial_y \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)\partial_1^2 \zeta_0}{\partial_1\eta} \partial_1 \eta u_1\right] \frac{\abs{ \partial_1 \partial_t^2 \eta}^2}{2} }.
\end{multline} We may then use the same tools listed above to estimate \begin{multline} \abs{II_4} \lesssim \int_{-\ell}^\ell \abs{\partial_1 \partial_t^2 \eta}^2 \left( \abs{\partial_1 \eta}\abs{\partial_1 u} + \abs{\partial_1^2 \eta} \abs{u} + \abs{\partial_1 \eta}\abs{u} \right) \\ \lesssim \ns{\partial_1 \partial_t^2 \eta}_{L^{1/\alpha}} \left( \norm{\partial_1 \eta}_{L^\infty} \norm{\partial_1 u}_{L^{1/(1-\varepsilon_+)}(\Sigma)} + \norm{\partial_1^2 \eta}_{L^{1/(1-\varepsilon_+)}} \norm{u}_{L^\infty(\Sigma)} + \norm{\partial_1 \eta}_{L^\infty} \norm{u}_{L^\infty(\Sigma)} \right) \\ \lesssim \mathcal{D} \mathcal{E}. \end{multline} Combining these bounds then yields the stated estimate. \end{proof} The term $III$ is next. \begin{prop}\label{nid_f3_III} Let $III$ be as given in \eqref{nid_f3_decomp}. Then we have the bounds \begin{equation} \abs{III + \frac{d}{dt} \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta} \lesssim (\sqrt{\mathcal{E}} + \mathcal{E})\mathcal{D} \end{equation} and \begin{equation} \abs{\int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta } \lesssim \sqrt{\mathcal{E}}\mathcal{E}_{\shortparallel}. \end{equation} \end{prop} \begin{proof} To handle the term $III$ we begin by pulling a time derivative out of the integral: \begin{multline} III = - \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^3 \eta = - \frac{d}{dt} \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta \\ + \int_{-\ell}^\ell \sigma \partial_z^3 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^3 \partial_1 \partial_t^2 \eta + \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) 2 \partial_1 \partial_t \eta \abs{\partial_1 \partial_t^2 \eta}^2.
\end{multline} We then employ Theorems \ref{catalog_energy} and \ref{catalog_dissipation} (with the ordering convention described in the proof of Proposition \ref{nid_f1}) and Proposition \ref{R_prop} to bound \begin{multline} \abs{\int_{-\ell}^\ell \sigma \partial^\alphartial_z^3 \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) (\partial^\alphartial_1 \partial^\alphartialartial_t \eta)^3 \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta} \lesssim \int_{-\ell}^\ell \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}^3 \abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2\eta} \lesssim \ns{\partial^\alphartial_1 \partial^\alphartialartial_t\eta}_{L^\infty} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t\eta}_{L^\infty} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^{1/\alpha}} \\ \lesssim \mathcal{E} \mathcal{D} \end{multline} and \begin{equation} \abs{\int_{-\ell}^\ell \sigma \partial^\alphartial_z^2 \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) 2 \partial^\alphartial_1 \partial^\alphartialartial_t \eta \abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2} \lesssim \int_{-\ell}^\ell \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta} \abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}^2 \lesssim \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^\infty} \ns{ \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^{1/\alpha}} \lesssim \sqrt{\mathcal{E}} \mathcal{D}. \end{equation} Upon combining these, we arrive at the first stated estimate. To derive the second estimate we first note that standard Sobolev embeddings and interpolation show that if $\partial^\alphartialsi \in H^{3/2}((-\ell,\ell))$ then \begin{equation} \norm{\partial^\alphartial_1 \partial^\alphartialsi}_{L^4} \lesssim \norm{\partial^\alphartial_1 \partial^\alphartialsi}_{H^{1/4}} \lesssim \norm{\partial^\alphartialsi}_{H^{5/4}} \lesssim \norm{\partial^\alphartialsi}_{H^1}^{1/2} \norm{\partial^\alphartialsi}_{H^{3/2}}^{1/2}. \end{equation} Applying this with $\partial^\alphartialsi = \partial^\alphartialartial_t \eta$ and again using Theorem \ref{catalog_energy} and Proposition \ref{R_prop} with the definitions of $\mathcal{E}$ (see \eqref{E_def}) and $\mathcal{E}_{\shortparallel}$ (see \eqref{ED_parallel}), we bound \begin{multline} \abs{\int_{-\ell}^\ell \sigma \partial^\alphartial_z^2 \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) (\partial^\alphartial_1 \partial^\alphartialartial_t \eta)^2 \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta } \lesssim \int_{-\ell}^\ell \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}^2 \abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta} \lesssim \ns{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^4} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^2} \\ \lesssim \norm{\partial^\alphartialartial_t \eta}_{H^1} \norm{\partial^\alphartialartial_t \eta}_{H^{3/2}}\norm{\partial^\alphartialartial_t^2 \eta}_{H^1} \lesssim \sqrt{\mathcal{E}_{\shortparallel}} \sqrt{\mathcal{E}} \sqrt{\mathcal{E}_{\shortparallel}}, \end{multline} which is the second stated estimate. \end{proof} Finally, we handle the term $IV$. \begin{prop}\label{nid_f3_IV} Let $IV$ be as given in \eqref{nid_f3_decomp}. Then \begin{equation} \abs{IV} \lesssim (\mathcal{E} + \mathcal{E}^{3/2})\mathcal{D}. 
\end{equation} \end{prop} \begin{proof} We first write $F^6= -2 \partial_t u_1 \partial_1 \partial_t \eta - u_1 \partial_1 \partial_t^2 \eta$ in order to split \begin{multline} IV = \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 (2 \partial_t u_1 \partial_1 \partial_t \eta) \\ + \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1(u_1 \partial_1 \partial_t^2 \eta) =: IV_1 + IV_2. \end{multline} Then Theorems \ref{catalog_energy} and \ref{catalog_dissipation} (with the ordering convention described in the proof of Proposition \ref{nid_f1}) and Proposition \ref{R_prop} allow us to estimate \begin{multline} \abs{IV_1} \lesssim \int_{-\ell}^\ell \abs{\partial_1 \partial_t \eta}^2 \left( \abs{\partial_1 \partial_t u} \abs{\partial_1 \partial_t \eta} + \abs{\partial_t u} \abs{\partial_1^2 \partial_t \eta} \right) \\ \lesssim \ns{\partial_1 \partial_t \eta}_{L^\infty} \left( \norm{\partial_1 \partial_t u}_{L^{1/(1-\varepsilon_-)}(\Sigma)} \norm{\partial_1 \partial_t \eta}_{L^\infty} + \norm{\partial_t u}_{L^\infty(\Sigma)} \norm{\partial_1^2 \partial_t \eta}_{L^{1/(1-\varepsilon_-)}} \right) \lesssim \mathcal{E} \mathcal{D}. \end{multline} To handle $IV_2$ we further expand \begin{multline} IV_2 = \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 u_1 \partial_1 \partial_t^2 \eta + \int_{-\ell}^\ell \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 u_1 \partial_1^2 \partial_t^2 \eta =: IV_{3} + IV_4. \end{multline} The term $IV_3$ can be estimated as $IV_1$ was, recalling from \eqref{kappa_ep_def} that $\alpha < \varepsilon_- < \varepsilon_+$: \begin{equation} \abs{IV_3} \lesssim \int_{-\ell}^\ell \abs{\partial_1 \partial_t \eta}^2 \abs{\partial_1 u } \abs{\partial_1 \partial_t^2 \eta} \lesssim \ns{\partial_1 \partial_t\eta}_{L^\infty} \norm{\partial_1 u}_{L^{1/(1-\varepsilon_+)}(\Sigma)} \norm{\partial_1 \partial_t^2 \eta}_{L^{1/\alpha}} \lesssim \mathcal{E} \mathcal{D}.
\end{equation} On the other hand, for $IV_4$ we need to integrate by parts again, using the fact that $u_1$ vanishes at the endpoints: \begin{multline} IV_4 = -\int_{-\ell}^\ell \sigma \left[ \partial^\alphartial_z^2 \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) (\partial^\alphartial_1 \partial^\alphartialartial_t \eta)^2 \partial^\alphartial_1 u_1 + 2 \partial^\alphartial_z^2 \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartial_1 \partial^\alphartialartial_t \eta \partial^\alphartial_1^2 \partial^\alphartialartial_t \eta u_1 \right] \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta \\ -\int_{-\ell}^\ell \sigma \left[ \partial^\alphartial_z^3 \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartial_1^2 \eta + \partial^\alphartial_y \partial^\alphartial_z^2 \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartial_1^2 \zeta_0 \right] (\partial^\alphartial_1 \partial^\alphartialartial_t \eta)^2 u_1 \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta =: IV_5 + IV_6. \end{multline} These terms can then be estimated as above: \begin{multline} \abs{IV_5} \lesssim \int_{-\ell}^\ell \left( \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}^2 \abs{\partial^\alphartial_1 u} + \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta} \abs{\partial^\alphartial_1^2 \partial^\alphartialartial_t \eta} \abs{u} \right) \abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta} \\ \lesssim \left(\ns{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^\infty} \norm{\partial^\alphartial_1 u}_{L^{1/(1-\varepsilon_+)}(\Sigma)} + \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^\infty} \norm{\partial^\alphartial_1^2 \partial^\alphartialartial_t \eta}_{L^{1/(1-\varepsilon_-)}} \norm{u}_{L^\infty(\Sigma)} \right) \norm{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^{1/\alpha}} \\ \lesssim \left(\mathcal{E} \sqrt{\mathcal{D}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \right) \sqrt{\mathcal{D}} \end{multline} and \begin{multline} \abs{IV_6} \lesssim \int_{-\ell}^\ell (\abs{\partial^\alphartial_1^2 \eta} +1) \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}^2 \abs{u} \abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta} \\ \le \left(\norm{\partial^\alphartial_1^2 \eta}_{L^{1/(1-\varepsilon_+)}} +1 \right) \ns{\partial^\alphartial_1 \partial^\alphartialartial_t\eta}_{L^\infty} \norm{u}_{L^\infty(\Sigma)} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^{1/\alpha}} \lesssim \left(\sqrt{\mathcal{E}} +1 \right) \mathcal{E} \mathcal{D} . \end{multline} The stated estimate then follows by combining all of these. \end{proof} Now that we have controlled $I$--$IV$ in \eqref{nid_f3_decomp} we can record a unified estimate. \begin{thm}\label{nid_f3_dt2} Let $F^3$ be given by \eqref{dt2_f3}. 
Then we have the bounds \begin{multline} \abs{ - \int_{-\ell}^\ell \sigma F^3 \partial_1 (\partial_t^2 u \cdot \mathcal{N}) + \frac{d}{dt} \int_{-\ell}^\ell \left[ \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} + \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta \right] } \\ \lesssim (\sqrt{\mathcal{E}} + \mathcal{E} + \mathcal{E}^{3/2})\mathcal{D} \end{multline} and \begin{equation} \abs{\int_{-\ell}^\ell \left[ \sigma \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \frac{\abs{\partial_1 \partial_t^2 \eta}^2}{2} + \sigma \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 \partial_1 \partial_t^2 \eta \right]} \lesssim \sqrt{\mathcal{E}} \mathcal{E}_{\shortparallel}. \end{equation} \end{thm} \begin{proof} The result follows from combining \eqref{nid_f3_decomp} with Propositions \ref{nid_f3_I}, \ref{nid_f3_II}, \ref{nid_f3_III}, and \ref{nid_f3_IV}. \end{proof} A similar and simpler result holds for $F^3$ when only one time derivative is applied as in \eqref{dt1_f3}. We will record it without proof. \begin{thm}\label{nid_f3_dt} Let $F^3$ be given by \eqref{dt1_f3}. Then \begin{equation} \abs{ - \int_{-\ell}^\ell \sigma F^3 \partial_1 (\partial_t u \cdot \mathcal{N}) } \lesssim (\sqrt{\mathcal{E}} + \mathcal{E})\mathcal{D}. \end{equation} \end{thm} \subsection{Special interaction estimates II: free surface terms} The terms involving $F^6$ and $F^7$ in Theorem \ref{linear_energy} also require a delicate treatment. We record these now, starting with $F^6$. \begin{thm}\label{nid_f6} We have the estimate \begin{equation} \abs{ \int_{-\ell}^\ell g \partial_t^2 \eta F^6 + \sigma \frac{\partial_1 \partial_t^2 \eta \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}} \lesssim \sqrt{\mathcal{E}} \mathcal{D} \end{equation} when $F^6$ is given by \eqref{dt2_f6}, and we have the estimate \begin{equation} \abs{ \int_{-\ell}^\ell g \partial_t \eta F^6 + \sigma \frac{\partial_1 \partial_t \eta \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}} \lesssim \sqrt{\mathcal{E}} \mathcal{D} \end{equation} when $F^6$ is given by \eqref{dt1_f6}.
\end{thm} \begin{proof} We begin by using the definition of $F^6$ in \eqref{dt2_f6} to split \begin{multline} \int_{-\ell}^\ell g \partial^\alphartialartial_t^2 \eta F^6 + \sigma \frac{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta \partial^\alphartial_1 F^6}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} = \int_{-\ell}^\ell g \partial^\alphartialartial_t^2 \eta (-2 \partial^\alphartialartial_t u_1 \partial^\alphartial_1 \partial^\alphartialartial_t \eta) + g \partial^\alphartialartial_t^2 \eta (- u_1 \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta) \\ + \int_{-\ell}^\ell \sigma \frac{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta \partial^\alphartial_1 (-2 \partial^\alphartialartial_t u_1 \partial^\alphartial_1 \partial^\alphartialartial_t \eta)}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} + \int_{-\ell}^\ell\sigma \frac{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta \partial^\alphartial_1 (- u_1 \partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta)}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} =: I + II +III. \end{multline} We will estimate these three terms using Theorems \ref{catalog_energy} and \ref{catalog_dissipation} (with the ordering convention described in the proof of Proposition \ref{nid_f1}) and H\"older's inequality. For $I$ we directly estimate \begin{multline} \abs{I} \lesssim \int_{-\ell}^\ell \abs{\partial^\alphartialartial_t^2 \eta} \left( \abs{\partial^\alphartialartial_t u} \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta} + \abs{u} \abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta} \right) \lesssim \norm{\partial^\alphartialartial_t^2 \eta}_{L^2} \left(\norm{\partial^\alphartialartial_t u}_{L^\infty(\Sigma)} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^2} + \norm{u}_{L^\infty(\Sigma)} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^2} \right) \\ \lesssim \sqrt{\mathcal{D}} \left(\sqrt{\mathcal{E}} \sqrt{\mathcal{D}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}\right). \end{multline} Similarly, for $II$ we apply the product rule and estimate (recalling \eqref{kappa_ep_def}) \begin{multline} \abs{II} \lesssim \int_{-\ell}^\ell \abs{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta} \left( \abs{\partial^\alphartial_1 \partial^\alphartialartial_t u} \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta} + \abs{\partial^\alphartialartial_t u} \abs{\partial^\alphartial_1^2 \partial^\alphartialartial_t \eta} \right) \\ \lesssim \norm{\partial^\alphartial_1 \partial^\alphartialartial_t^2 \eta}_{L^{1/\alpha}} \left( \norm{\partial^\alphartial_1 \partial^\alphartialartial_t u}_{L^{1/(1-\varepsilon_-)}(\Sigma)} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^\infty} + \norm{\partial^\alphartialartial_t u}_{L^\infty(\Sigma)} \norm{\partial^\alphartial_1^2 \partial^\alphartialartial_t \eta}_{L^{1/(1-\varepsilon_-)}} \right) \\ \lesssim \sqrt{\mathcal{D}}\left( \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}\right). 
\end{multline} On the other hand, for $III$ we expand with the product rule and then integrate by parts and exploit the vanishing of $u_1$ at the endpoints: \begin{multline} III = -\int_{-\ell}^\ell\sigma \frac{\partial_1 \partial_t^2 \eta \partial_1 u_1 \partial_1 \partial_t^2 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \int_{-\ell}^\ell \sigma \partial_1 \left(\frac{u_1 }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) \frac{\abs{ \partial_1 \partial_t^2 \eta }^2}{2} \\ = \frac{\sigma}{2} \int_{-\ell}^\ell \abs{\partial_1 \partial_t^2 \eta}^2\left( u_1 \partial_1 \left(\frac{1 }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) - \frac{\partial_1 u_1 }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right). \end{multline} In this form we can estimate with the same tools as above, crucially using that $2 \alpha < \varepsilon_+$, to see that \begin{equation} \abs{III} \lesssim \int_{-\ell}^\ell \abs{\partial_1 \partial_t^2 \eta}^2 \left( \abs{u} + \abs{\partial_1 u} \right) \lesssim \ns{\partial_1 \partial_t^2 \eta}_{L^{1/\alpha}} \norm{u}_{W^{1,1/(1-\varepsilon_+)}(\Sigma)} \lesssim \mathcal{D} \sqrt{\mathcal{E}}. \end{equation} Combining these then provides the stated bound. \end{proof} Next we record the $F^7$ bound. \begin{thm}\label{nid_f7} We have the estimate \begin{equation} \abs{ [\partial_t^2 u\cdot \mathcal{N},F^7]_\ell} \lesssim \sqrt{\mathcal{E}} \mathcal{D} \end{equation} when $F^7$ is given by \eqref{dt2_f7}, and we have the estimate \begin{equation} \abs{ [\partial_t u \cdot \mathcal{N},F^7]_\ell} \lesssim \sqrt{\mathcal{E}} \mathcal{D} \end{equation} when $F^7$ is given by \eqref{dt1_f7}. \end{thm} \begin{proof} Once more we only record the proof in the harder case when $F^7$ is given by \eqref{dt2_f7}. We begin by estimating \begin{equation} \abs{F^7} \lesssim \abs{\mathscr{W}h'(\partial_t \eta)} \abs{\partial_t^3 \eta} + \abs{\mathscr{W}h''(\partial_t \eta)} \abs{\partial_t^2 \eta}^2. \end{equation} Since $\norm{\partial_t \eta}_{L^\infty} \lesssim \sqrt{\mathcal{E}} \lesssim 1$, we can bound \begin{equation} \abs{\mathscr{W}h'(z)} = \frac{1}{\alpha}\abs{ \int_0^z \mathscr{W}''(r)dr}\lesssim \abs{z} \text{ for } z \in [-\norm{\partial_t \eta}_{L^\infty} ,\norm{\partial_t \eta}_{L^\infty} ]. \end{equation} From this, basic trace theory, and the bound $\sum_{k=1}^3\max_{\pm \ell} \abs{\partial_t^k \eta} \lesssim \sqrt{\mathcal{D}}$ we then estimate \begin{equation} \max_{\pm \ell} \abs{F^7} \lesssim \max_{\pm \ell} \left( \abs{\partial_t \eta} \abs{\partial_t^3 \eta} + \abs{\partial_t^2 \eta}^2 \right) \lesssim \sqrt{\mathcal{D}} \left( \norm{\partial_t \eta}_{H^1} + \norm{\partial_t^2 \eta}_{H^1} \right) \lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}.
\end{equation} From this and the fact that $[\partial^\alphartialartial_t^2 u \cdot \mathcal{N}]_\ell = [\partial^\alphartialartial_t^3 \eta ]_\ell \lesssim \sqrt{\mathcal{D}}$, we deduce that \begin{equation} \abs{ [\partial^\alphartialartial_t^2 u\cdot \mathcal{N},F^7]_\ell } \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} [\partial^\alphartialartial_t^2 u \cdot \mathcal{N}]_\ell \lesssim \sqrt{\mathcal{E}} \mathcal{D}, \end{equation} which is the stated estimate. \end{proof} We conclude with two more estimates involving the free surface function. The first is for a term involving the function $\mathcal{Q}$ from \eqref{Q_def} that appears in Corollary \ref{basic_energy}. \begin{thm}\label{nid_Q} Let $\mathcal{Q}$ be the smooth function defined by \eqref{Q_def}. Then \begin{equation} \abs{\int_{-\ell}^\ell \sigma \mathcal{Q}(\partial^\alphartial_1 \zeta_0, \partial^\alphartial_1 \eta) } \lesssim \sqrt{\mathcal{E}} \ns{\eta}_{H^1} \end{equation} \end{thm} \begin{proof} According to Proposition \ref{R_prop} and Theorem \ref{catalog_energy}, we have that \begin{equation} \abs{\int_{-\ell}^\ell \sigma \mathcal{Q}(\partial^\alphartial_1 \zeta_0, \partial^\alphartial_1 \eta) } \lesssim \int_{-\ell}^\ell \abs{\partial^\alphartial_1 \eta}^3 \lesssim \norm{\partial^\alphartial_1 \eta}_{L^\infty} \ns{\eta}_{H^1} \lesssim \sqrt{\mathcal{E}} \ns{\eta}_{H^1} . \end{equation} This is the stated estimate. \end{proof} Our final estimate involves the term $\mathscr{W}h$, as defined in \eqref{V_pert}. \begin{thm}\label{nid_W} We have that \begin{equation} \abs{ [u\cdot \mathcal{N},\mathscr{W}h(\partial^\alphartialartial_t \eta)]_\ell } \lesssim \norm{\partial^\alphartialartial_t \eta}_{H^1} \bs{u\cdot \mathcal{N}}. \end{equation} \end{thm} \begin{proof} The definition of $\mathscr{W}h \in C^2$ in \eqref{V_pert} shows that $\abs{\mathscr{W}h(z)} \lesssim z^2$ for $\abs{z} \lesssim 1$. Since $\partial^\alphartialartial_t \eta = u\cdot \mathcal{N}$ at $\partial^\alphartialm \ell$ we can use standard trace theory to deduce the stated bound: \begin{equation} \abs{ [u\cdot \mathcal{N},\mathscr{W}h(\partial^\alphartialartial_t \eta)]_\ell } \lesssim \sum_{a=\partial^\alphartialm 1} \alpha \abs{u\cdot \mathcal{N}(a\ell,t)}^2 \abs{\partial^\alphartialartial_t \eta(a \ell,t)} \lesssim \norm{\partial^\alphartialartial_t \eta}_{H^1} [u\cdot \mathcal{N}]_\ell^2. \end{equation} \end{proof} \section{Nonlinear estimates II: interaction terms, energetic form}\label{sec_nl_int_e} In this section we continue our study of the nonlinear interaction terms appearing in Theorem \ref{linear_energy}. However, the focus now is on estimates in terms of the energy functional. Once more, in order to avoid tedious restatements of the same hypothesis, we assume throughout this section that a solution to \eqref{ns_geometric} exists on the time horizon $(0,T)$ for $0 < T \le \infty$ and obeys the small-energy estimate \begin{equation} \sup_{0\le t < T} \mathcal{E}(t) \le \gamma^2 < 1, \end{equation} where $\gamma \in (0,1)$ is as in Lemma \ref{eta_small}. Again, this means that the estimates of Lemma \ref{eta_small} are available for use, and we will use them often without explicit reference. \subsection{General interaction functional estimates } We begin by deriving general dual estimates for the terms involving $F^1$, $F^4$, and $F^5$ in terms of the energy functional. First we consider $F^1$. \begin{prop}\label{nie_f1} Suppose that $F^1$ is as defined by \eqref{dt1_f1}. 
Then \begin{equation} \abs{\int_\Omega J w \cdot F^1} \lesssim \norm{w}_{H^1} \left(\mathcal{E} + \mathcal{E}^{3/2} \right) . \end{equation} \end{prop} \begin{proof} The term $F^1$, as defined by \eqref{dt1_f1}, consists of six separate terms. We will estimate each of these, employing H\"{o}lder's inequality and Theorem \ref{catalog_energy} repeatedly and without explicit reference. \textbf{Term: $-\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(p,u)$. } We first bound \begin{equation} \abs{\int_\Omega J w \cdot (-\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(p,u)) } \lesssim \int_\Omega \abs{w}\abs{\partial_t \mathcal{A}} \left(\abs{\nabla p} +\abs{\nabla^2 u} \right) +\int_{\Omega} \abs{w} \abs{\partial_t \mathcal{A}} \abs{\nabla \mathcal{A}}\left(\abs{p} + \abs{\nabla u} \right) =: I + II. \end{equation} We then estimate \begin{equation} I \lesssim \norm{w}_{L^{2/\varepsilon_+}} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \left( \norm{\nabla p}_{L^{2/(2-\varepsilon_+)}} + \norm{\nabla^2 u}_{L^{2/(2-\varepsilon_+)}} \right) \lesssim \norm{w}_{H^1} \mathcal{E}, \end{equation} and \begin{equation} II \lesssim \norm{w}_{L^{1/\varepsilon_+}} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_+)}} \left( \norm{p}_{L^{2/(1-\varepsilon_+)}} + \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \right) \lesssim \norm{w}_{H^1} \mathcal{E}^{3/2}. \end{equation} The combination of these estimates shows this term can be estimated as stated. \textbf{Term: $\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u$. } We first bound \begin{equation} \abs{\int_\Omega J w \cdot (\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u) } \lesssim \int_\Omega \abs{w} \abs{\partial_t \mathcal{A}} \abs{\nabla^2 u} + \int_\Omega \abs{w} \abs{\nabla \partial_t \mathcal{A}} \abs{\nabla u} =: I + II. \end{equation} For $I$ we bound \begin{equation} I \lesssim \norm{w}_{L^{2/\varepsilon_+}} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla^2 u}_{L^{2/(2-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \mathcal{E}. \end{equation} For $II$ we use \eqref{kappa_ep_def} to see that $0 < \varepsilon_- - \alpha < 1$ and $0 < 2\varepsilon_+ + \varepsilon_- - \alpha < 2$, so \begin{equation} II \lesssim \norm{w}_{L^{4/(2\varepsilon_+ + \varepsilon_- - \alpha)}} \left( \norm{\bar{\eta}}_{W^{2,4/(2-(\varepsilon_--\alpha))}} + \norm{ \partial_t \bar{\eta}}_{W^{2,4/(2-(\varepsilon_--\alpha))}} \right) \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \mathcal{E}. \end{equation} Combining these then shows that this term can be estimated as stated. \textbf{Term: $- u \cdot \nabla_{\partial_t \mathcal{A}} u$. } We bound \begin{equation} \abs{\int_\Omega Jw \cdot (- u \cdot \nabla_{\partial_t \mathcal{A}} u) } \lesssim \int_{\Omega} \abs{w} \abs{u} \abs{\partial_t \mathcal{A}} \abs{\nabla u} \lesssim \norm{w}_{L^{2}} \norm{u}_{L^\infty} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla u}_{L^{2}} \lesssim \norm{w}_{H^1} \mathcal{E}^{3/2}. \end{equation} \textbf{Term: $- \partial_t u \cdot \nablaa u$.
} We estimate \begin{multline} \abs{\int_\Omega Jw \cdot (- \partial_t u \cdot \nablaa u)} \lesssim \int_{\Omega} \abs{w}\abs{\partial_t u}\abs{\nabla u} \\ \lesssim \norm{w}_{L^{4/(2\varepsilon_+ + \varepsilon_-)}} \norm{\partial_t u}_{L^{4/(2-\varepsilon_-)}} \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \lesssim \norm{w}_{H^1} \mathcal{E}. \end{multline} \textbf{Term: $\partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u$. } We bound \begin{equation} \abs{\int_\Omega J w \cdot (\partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u) } \lesssim \int_\Omega \abs{w} \abs{\partial_t^2 \bar{\eta}} \abs{\nabla u} \lesssim \norm{w}_{L^{4}} \norm{\partial_t^2 \bar{\eta}}_{L^4} \norm{\nabla u}_{L^{2}} \lesssim \norm{w}_{H^1} \mathcal{E}. \end{equation} \textbf{Term: $\partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 u$. } We bound \begin{equation} \abs{\int_\Omega J w \cdot (\partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 u)} \lesssim \int_\Omega \abs{w} \abs{\partial_t \bar{\eta}} \abs{\partial_t K} \abs{\nabla u} \lesssim \norm{w}_{L^{2}} \norm{\partial_t \bar{\eta}}_{L^\infty}\norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla u}_{L^2} \lesssim \norm{w}_{H^1} \mathcal{E}^{3/2}. \end{equation} \end{proof} Our next result concerns energetic estimates for $F^4$. \begin{prop}\label{nie_f4} Suppose that $F^4$ is defined by \eqref{dt1_f4}. Then we have \begin{equation} \abs{ \int_{-\ell}^\ell w \cdot F^4 } \lesssim \norm{w}_{H^1} (\mathcal{E} + \mathcal{E}^{3/2}) \end{equation} for all $w \in H^1(\Omega)$. \end{prop} \begin{proof} The term $F^4$, as defined by \eqref{dt1_f4}, consists of four separate terms. We will estimate each of these, employing H\"{o}lder's inequality, trace estimates, and Theorem \ref{catalog_energy} repeatedly and without explicit reference. \textbf{Term: $\mu \mathbb{D}_{\partial_t \mathcal{A}} u \mathcal{N} $. } We bound \begin{equation} \abs{ \int_{-\ell}^\ell w \cdot (\mu \mathbb{D}_{\partial_t \mathcal{A}} u \mathcal{N}) } \lesssim \int_{-\ell}^\ell \abs{w} \abs{\partial_t \nabla \eta} \abs{\nabla u} \lesssim \norm{w}_{L^{1/\varepsilon_+}(\Sigma)} \norm{\partial_t \eta}_{W^{1,\infty}} \norm{\nabla u}_{L^{1/(1-\varepsilon_+)}(\Sigma)} \lesssim \norm{w}_{H^1} \mathcal{E}. \end{equation} \textbf{Term: $\left( g\eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}\right)\right) \partial_t\mathcal{N}$.
} We estimate \begin{multline} \abs{\int_{-\ell}^\ell w \cdot \left( g\eta - \sigma \partial^\alphartial_1 \left(\frac{\partial^\alphartial_1 \eta}{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}}\right)\right) \partial^\alphartialartial_t\mathcal{N}} \lesssim \int_{-\ell}^\ell \abs{w} \left(\abs{\eta} + \abs{\partial^\alphartial_1 \eta} + \abs{\partial^\alphartial_1^2 \eta} \right) \abs{ \partial^\alphartialartial_t \partial^\alphartial_1 \eta} \\ \lesssim \norm{w}_{L^{1/\varepsilon_+}(\Sigma)} \norm{\eta}_{W^{2,1/(1-\varepsilon_+)}} \norm{\partial^\alphartialartial_t \eta}_{W^{1,\infty}} \lesssim \norm{w}_{H^1} \mathcal{E}. \end{multline} \textbf{Term: $-\sigma \partial^\alphartial_1 [\mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)] \partial^\alphartialartial_t \mathcal{N}$. } We first expand \begin{equation} \partial^\alphartial_1 [\mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)] = \partial^\alphartial_y \mathcal{R}(\partial^\alphartial_1 \zeta_0, \partial^\alphartial_1 \eta) \partial^\alphartial_1^2 \zeta_0 + \partial^\alphartial_z\mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartial_1^2 \eta \end{equation} and then use Proposition \ref{R_prop} in order to estimate \begin{equation} \abs{\int_{-\ell}^\ell w \cdot(-\sigma \partial^\alphartial_1 [\mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)] \partial^\alphartialartial_t \mathcal{N}) } \lesssim \int_{-\ell}^\ell \abs{w} \abs{\partial^\alphartial_1 \eta}^2 \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta} + \int_{-\ell}^\ell \abs{w} \abs{\partial^\alphartial_1 \eta} \abs{\partial^\alphartial_1^2 \eta} \abs{\partial^\alphartial_1 \partial^\alphartialartial_t \eta} =:I +II. \end{equation} Then we bound \begin{equation} I \lesssim \norm{w}_{L^{2}(\Sigma)} \ns{\partial^\alphartial_1 \eta}_{L^\infty} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^2} \lesssim \norm{w}_{H^1} \mathcal{E} \end{equation} and \begin{equation} II \lesssim \norm{w}_{L^{1/\varepsilon_+}(\Sigma)} \norm{\partial^\alphartial_1 \eta}_{L^\infty} \norm{\partial^\alphartial_1^2 \eta}_{L^{1/(1-\varepsilon_+)}} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^\infty} \lesssim \norm{w}_{H^1} \mathcal{E}^{3/2}. \end{equation} Upon combining these, we find that this term can be estimated as stated. \textbf{Term: $ - S_{\mathcal{A}}(p,u) \partial^\alphartialartial_t \mathcal{N}$. } We bound \begin{multline} \abs{\int_{-\ell}^\ell w \cdot ( - S_{\mathcal{A}}(p,u) \partial^\alphartialartial_t \mathcal{N})} \lesssim \int_{-\ell}^\ell \abs{w} \left(\abs{p} + \abs{\nabla u} \right)\abs{\partial^\alphartialartial_t \partial^\alphartial_1 \eta} \\ \lesssim \norm{w}_{L^{1/\varepsilon_+}(\Sigma)} \left(\norm{p}_{L^{1/(1-\varepsilon_+)}(\Sigma) }+ \norm{\nabla u}_{L^{1/(1-\varepsilon_+)}(\Sigma) } \right) \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{L^\infty} \lesssim \norm{w}_{H^1} \mathcal{E}. \end{multline} \end{proof} Next we study the term $F^5$. \begin{prop}\label{nie_f5} Suppose that $F^5$ is given by \eqref{dt1_f5}. Then \begin{equation} \abs{ \int_{\Sigma_s} J(w \cdot \tau)F^5 } \lesssim \norm{w}_{H^1} \mathcal{E} \end{equation} for every $w \in H^1(\Omega)$. 
\end{prop} \begin{proof} Using trace estimates and the Sobolev embeddings together with Theorem \ref{catalog_energy}, we bound \begin{multline} \abs{ \int_{\Sigma_s} J(w \cdot \tau) ( \mu \mathbb{D}_{\partial^\alphartialartial_t \mathcal{A}} u \nu \cdot \tau )} \lesssim \int_{\Sigma_s} \abs{w} \abs{\partial^\alphartialartial_t \mathcal{A}} \abs{\nabla u} \lesssim \norm{w}_{L^{1/\varepsilon_+}(\Sigma_s)} \norm{\partial^\alphartialartial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla u}_{L^{1/(1-\varepsilon_+)}(\Sigma_s)} \\ \lesssim \norm{w}_{H^1} \mathcal{E} \end{multline} This is the stated bound. \end{proof} We combine the above estimates into the following theorem, which is the analog of Theorem \ref{nid_v_est}. \begin{thm}\label{nie_v_est} Consider the functional $H^1(\Omega) \ni w\mapsto \br{\mathcal{F},w} \in \mathbb{R}$ defined by \begin{equation} \br{\mathcal{F},w} = \int_\Omega F^1 \cdot w J - \int_{-\ell}^\ell F^4 \cdot w - \int_{\Sigma_s} J (w \cdot \tau)F^5, \end{equation} where $F^1,F^4,F^5$ are defined via \eqref{dt1_f1}, \eqref{dt1_f4}, and \eqref{dt1_f5}, respectively. Then \begin{equation} \abs{\br{\mathcal{F},w}} \lesssim \norm{w}_{H^1} (\mathcal{E}+ \mathcal{E}^{3/2}) \end{equation} for all $w \in H^1(\Omega)$. \end{thm} \begin{proof} This follows immediately from Propositions \ref{nie_f1}, \ref{nie_f4}, and \ref{nie_f5}. \end{proof} \subsection{General interaction functional with free surface terms } Next we turn our attention to a general estimate involving the free surface and $F^3$. \begin{thm}\label{nie_ST} Suppose that $F^3$ is given by \eqref{dt1_f3}. Then we have the estimate \begin{equation} \abs{\int_{-\ell}^\ell g \partial^\alphartialartial_t \eta (w \cdot \mathcal{N}) - \sigma \partial^\alphartial_1 \left( \frac{\partial^\alphartial_1 \partial^\alphartialartial_t \eta }{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} +F^3\right)w\cdot \mathcal{N} } \lesssim \left(1+ \sqrt{\mathcal{E}} \right) \norm{\partial^\alphartialartial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}} \norm{w}_{H^1} \end{equation} for every $w \in H^1(\Omega)$. \end{thm} \begin{proof} The first term is easy to deal with: \begin{equation} \abs{\int_{-\ell}^\ell g \partial^\alphartialartial_t \eta w \cdot \mathcal{N}} \lesssim \int_{-\ell}^\ell \abs{\partial^\alphartialartial_t \eta}\abs{w} \lesssim \norm{\partial^\alphartialartial_t \eta}_{L^2} \norm{w}_{L^2(\Sigma)} \lesssim \norm{\partial^\alphartialartial_t \eta}_{L^2} \norm{w}_{H^1(\Omega)}. \end{equation} The second and third terms require more work. Let $s = 1 - (\varepsilon_- - \alpha) \in (0,1)$, which requires that \begin{equation}\label{nie_ST_7} 2 - \frac{s}{2} = \frac{3}{2} + \frac{\varepsilon_- - \alpha}{2}. \end{equation} Using this and Proposition \ref{frac_IBP_prop} we may estimate \begin{equation} \abs{\int_{-\ell}^\ell \sigma \partial^\alphartial_1 \left( \frac{\partial^\alphartial_1 \partial^\alphartialartial_t \eta }{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} + F^3\right)w\cdot \mathcal{N} } \lesssim \left(\norm{ \frac{\partial^\alphartial_1 \partial^\alphartialartial_t \eta }{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}}}_{H^{1-s/2}} + \norm{F^3}_{H^{1-s/2}} \right) \norm{w \cdot \mathcal{N}}_{H^{s/2}}. 
\end{equation} Since $\zeta_0$ is smooth, Proposition \ref{supercrit_prod} shows that \begin{equation}\label{nie_ST_1} \norm{ \frac{\partial^\alphartial_1 \partial^\alphartialartial_t \eta }{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}}}_{H^{1-s/2}} \lesssim \norm{ \partial^\alphartial_1 \partial^\alphartialartial_t \eta }_{H^{1-s/2}} \lesssim \norm{\partial^\alphartialartial_t \eta }_{H^{2-s/2}} = \norm{\partial^\alphartialartial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}}. \end{equation} To handle the term \begin{equation} F^3 = \partial^\alphartialartial_t [\mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)] = \partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) \partial^\alphartial_1 \partial^\alphartialartial_t \eta \end{equation} we first use the fact that $H^{1-s/2}((-\ell,\ell))$ is an algebra to bound \begin{equation} \norm{F^3}_{H^{1-s/2}} \lesssim \norm{\partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}_{H^{1-s/2}} \norm{\partial^\alphartial_1 \partial^\alphartialartial_t \eta}_{H^{1-s/2}} \end{equation} and then we use Proposition \ref{frac_comp} with $f(x,z) = \partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0(x),z)$ to estimate \begin{equation} \norm{\partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}_{H^{1-s/2}} \lesssim \norm{\partial^\alphartial_1 \eta}_{H^{1-s/2}}, \end{equation} which yields (again using \eqref{nie_ST_7}) \begin{equation}\label{nie_ST_2} \norm{F^3}_{H^{1-s/2}} \lesssim \norm{\eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}} \norm{\partial^\alphartialartial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}}. \end{equation} However, \begin{equation} \frac{3}{2} + \frac{\varepsilon_- - \alpha}{2} \le \frac{3}{2} + \varepsilon_+ \end{equation} and \begin{equation} \frac{1}{2} = \frac{2-\varepsilon_+}{2} - \frac{1}{1} \left( 2 + \frac{\varepsilon_+}{2} - \frac{3}{2} - \varepsilon_+ \right) = \frac{1}{q_+} - \frac{1}{1}\left(3 - \frac{1}{q_+} - \frac{3}{2}-\varepsilon_+ \right), \end{equation} so we have the embedding \begin{equation}\label{nie_ST_3} W^{3-1/q_+,q_+}((-\ell,\ell)) \mathcal{H}ookrightarrow H^{3/2 + \varepsilon_+}((-\ell,\ell)). \end{equation} Then \eqref{nie_ST_2} and \eqref{nie_ST_3} tell us that \begin{equation}\label{nie_ST_4} \norm{F^3}_{H^{1-s/2}} \lesssim \sqrt{\mathcal{E}} \norm{\partial^\alphartialartial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}}. \end{equation} Next we use Proposition \ref{supercrit_prod} (with $1/2 + \varepsilon_+ > \max\{1/2, s/2\}$), the usual trace estimate, the embedding \eqref{nie_ST_3}, and the bound $\mathcal{E} \le 1$ to see that \begin{equation} \norm{w \cdot \mathcal{N}}_{H^{s/2}} \lesssim \norm{\mathcal{N}}_{H^{1/2 + \varepsilon_+}} \norm{w}_{H^{s/2}} \lesssim \left(1 + \norm{\eta}_{H^{3/2 + \varepsilon_+}} \right)\norm{w}_{H^{1/2}(\Sigma)} \lesssim \left( 1+ \sqrt{\mathcal{E}}\right) \norm{w}_{H^1(\Omega)} \lesssim \norm{w}_{H^1}. \end{equation} Combining this with \eqref{nie_ST_1} and \eqref{nie_ST_4}, we conclude that \begin{equation} \abs{\int_{-\ell}^\ell \sigma \partial^\alphartial_1 \left( \frac{\partial^\alphartial_1 \partial^\alphartialartial_t \eta }{(1+\abs{\partial^\alphartial_1 \zeta_0}^2)^{3/2}} + F^3\right)w\cdot \mathcal{N} } \lesssim (1+\sqrt{\mathcal{E}} ) \norm{\partial^\alphartialartial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}} \norm{w}_{H^1}, \end{equation} which completes the proof. 
\end{proof} \section{Nonlinear estimates III: elliptic estimate terms}\label{sec_nl_elliptic} In this section we complete our study of the nonlinear terms coming from \eqref{ns_geometric} by turning our attention to elliptic estimates. More precisely, we study the terms appearing in applications of Theorem \ref{A_stokes_stress_solve}. As in the previous two sections, we assume throughout this section that a solution to \eqref{ns_geometric} exists on the time horizon $(0,T)$ for $0 < T \le \infty$ and obeys the small-energy estimate \begin{equation} \sup_{0\le t < T} \mathcal{E}(t) \le \gamma^2 < 1, \end{equation} where $\gamma \in (0,1)$ is as in Lemma \ref{eta_small}. This means that the estimates of Lemma \ref{eta_small} are available for use, and we will use them often without explicit reference. \subsection{No time derivatives} We begin with the elliptic estimates we will need for the problem \eqref{ns_geometric}, i.e. when no temporal derivatives are applied. When we compare \eqref{ns_geometric} and \eqref{A_stokes_stress} we get \begin{equation}\label{ne0_Gform} \begin{split} G^1 & = -\partial^\alphartialartial_t u +\partial^\alphartialartial_t \bar{\eta} \frac{\partial^\alphartialhi}{\zeta_0} K \partial^\alphartial_2 u - u \cdot \nablaa u, \; G^2 = 0, \\ G^3_+ &= \partial^\alphartialartial_t \eta /\abs{\mathcal{N}_0}, \; G^3_- = 0, \; G^4_+ =0, \; G^4_- = 0 \\ G^5 &= 0, \; G^6 = \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta), \; G^7 = \alpha \partial^\alphartialartial_t \eta \partial^\alphartialm \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta). \end{split} \end{equation} This dictates the form of the estimates we need. We begin with the bounds for $G^1$ in \eqref{ne0_Gform}. \begin{prop}\label{ne0_g1} We have the bound \begin{equation} \norm{-\partial^\alphartialartial_t u +\partial^\alphartialartial_t \bar{\eta} \frac{\partial^\alphartialhi}{\zeta_0} K \partial^\alphartial_2 u - u \cdot \nablaa u}_{L^{q_+}} \lesssim \norm{\partial^\alphartialartial_t u}_{L^2} + \sqrt{\mathcal{E}}\left( \norm{u}_{L^2} + \norm{\partial^\alphartialartial_t \eta}_{H^1}\right) \end{equation} \end{prop} \begin{proof} The bound $\norm{\partial^\alphartialartial_t u}_{L^{q_+}} \lesssim \norm{\partial^\alphartialartial_t u}_{L^2}$ follows from the fact that $q_+ < 2$. Using H\"{o}lder's inequality and Theorem \ref{catalog_energy} we then bound \begin{equation} \norm{ u \cdot \nablaa u}_{L^{q_+}} \lesssim \norm{ \abs{u} \abs{\nabla u}}_{L^{q_+}} \lesssim \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \norm{u}_{L^2} \lesssim \sqrt{\mathcal{E}} \norm{u}_{L^2}, \end{equation} and (also using Proposition \ref{poisson_prop}) \begin{equation} \norm{\partial^\alphartialartial_t \bar{\eta} \frac{\partial^\alphartialhi}{\zeta_0} K \partial^\alphartial_2 u}_{L^{q_+}} \lesssim \norm{\abs{\partial^\alphartialartial_t \bar{\eta}} \abs{\nabla u}}_{L^{q_+}} \lesssim \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \norm{\partial^\alphartialartial_t \bar{\eta}}_{L^{2}} \lesssim \sqrt{\mathcal{E}} \norm{\partial^\alphartialartial_t \eta}_{H^1}. \end{equation} The result follows by combining these. \end{proof} We continue with the bounds for $G^3$ in \eqref{ne0_Gform}. \begin{prop}\label{ne0_g3} Let $\mathcal{N}_0$ be given by \eqref{N0_def}. Then we have the estimate \begin{equation} \norm{\partial^\alphartialartial_t \eta /\abs{\mathcal{N}_0}}_{W^{2-1/q_+,q_+}} \lesssim \norm{\partial^\alphartialartial_t \eta}_{H^{3/2-\alpha}} . 
\end{equation} \end{prop} \begin{proof} First note that \begin{equation} \frac{1}{\abs{\mathcal{N}_0}} = \frac{1}{\sqrt{1 + \abs{ \partial_1 \zeta_0}^2}} \end{equation} is smooth, and we may thus bound \begin{equation} \norm{\partial_t \eta /\abs{\mathcal{N}_0}}_{W^{2-1/q_+,q_+}} \lesssim \norm{\partial_t \eta }_{W^{2-1/q_+,q_+}}. \end{equation} Next note that \eqref{kappa_ep_def} implies that $2\alpha + \varepsilon_+ < 1$, so \begin{equation} 2 - \frac{1}{q_+} = 2 - \frac{2 - \varepsilon_+}{2} = 1 + \frac{\varepsilon_+}{2} \le \frac{3}{2} - \alpha \end{equation} and \begin{equation} \frac{1}{q_+} = \frac{2-\varepsilon_+}{2} \ge \frac{2\alpha + \varepsilon_+}{2} = \frac{1}{2} - \frac{1}{1} \left(\frac{3}{2} - \alpha -1 - \frac{\varepsilon_+}{2} \right). \end{equation} These parameter bounds and the Sobolev embeddings show that \begin{equation} H^{3/2-\alpha}((-\ell,\ell)) \hookrightarrow W^{2-1/q_+, 2/(2\alpha + \varepsilon_+)}((-\ell,\ell)) \hookrightarrow W^{2-1/q_+,q_+}((-\ell,\ell)). \end{equation} This allows us to estimate \begin{equation} \norm{\partial_t \eta}_{W^{2-1/q_+,q_+}} \lesssim \norm{\partial_t \eta}_{H^{3/2 - \alpha}}, \end{equation} and the result follows by combining these bounds. \end{proof} Our next result records the bounds for $G^6$ in \eqref{ne0_Gform}. \begin{prop}\label{ne0_g6} Let $\mathcal{R}$ be as defined in \eqref{R_def}. Then we have the estimate \begin{equation} \norm{\partial_1 [\mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)]}_{W^{1-1/q_+,q_+}} \lesssim \sqrt{\mathcal{E}} \norm{\eta}_{W^{3 - 1/q_+,q_+}}. \end{equation} \end{prop} \begin{proof} We compute \begin{equation} \partial_1 [\mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] = \partial_y \mathcal{R}(\partial_1 \zeta_0, \partial_1 \eta) \partial_1^2 \zeta_0 + \partial_z\mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \eta. \end{equation} We then use this with the product estimate from \eqref{ne1_g6_1} to bound \begin{multline} \norm{\partial_1 [\mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)]}_{W^{1-1/q_+,q_+}} \\ \lesssim \norm{\partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) }_{W^{1,q_+}} \norm{\partial_1^2 \zeta_0}_{W^{1-1/q_+,q_+}} + \norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_+}} \norm{\partial_1^2 \eta}_{W^{1-1/q_+,q_+}} \\ \lesssim \norm{\partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) }_{W^{1,q_+}} + \norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_+}} \norm{\eta}_{W^{3-1/q_+,q_+}}.
\end{multline} On the other hand, Proposition \ref{R_prop} and Theorem \ref{catalog_energy} allow us to bound \begin{equation} \norm{\partial^\alphartial_y \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta) }_{W^{1,q_+}} \lesssim \norm{\partial^\alphartial_1 \eta}_{L^\infty} \left(\norm{\partial^\alphartial_1 \eta}_{L^{q_+}} + \norm{\partial^\alphartial_1^2 \eta}_{L^{q_+}}\right) \lesssim \sqrt{\mathcal{E}} \norm{\eta}_{W^{2,q_+}} \end{equation} and \begin{equation} \norm{\partial^\alphartial_z \mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}_{W^{1,q_+}} \lesssim \norm{\partial^\alphartial_1 \eta}_{L^{q_+}} + \norm{\partial^\alphartial_1^2 \eta}_{L^{q_+}} \lesssim \sqrt{\mathcal{E}}. \end{equation} The result follows by combining these and recalling that $3 - 1/q_+ > 2$. \end{proof} Finally, we record the bounds for $G^7$ in \eqref{ne0_Gform}. \begin{prop}\label{ne0_g7} Let $\mathcal{R}$ be as in \eqref{R_def}. Then we have the estimate \begin{equation} [\mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)]_\ell \lesssim \sqrt{\mathcal{E}} \norm{\eta}_{W^{2,q_+}} \end{equation} \end{prop} \begin{proof} From trace theory, Theorem \ref{catalog_energy}, and Proposition \ref{R_prop} we may estimate \begin{equation} [\mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)]_\ell \lesssim \norm{\mathcal{R}(\partial^\alphartial_1 \zeta_0,\partial^\alphartial_1 \eta)}_{W^{1,q_+}} \lesssim \norm{\partial^\alphartial_1 \eta}_{L^\infty} \left(\norm{\partial^\alphartial_1 \eta}_{L^{q_+}} + \norm{\partial^\alphartial_1^2 \eta}_{L^{q_+}}\right) \lesssim \sqrt{\mathcal{E}} \norm{\eta}_{W^{2,q_+}}. \end{equation} This is the stated estimate. \end{proof} \subsection{One time derivative} We now turn our attention to the elliptic estimates for the once time differentiated problem. In order to apply Theorem \ref{A_stokes_stress_solve}, we are led to consider the following $G^i$ terms for $F^1$--$F^7$ given by \eqref{dt1_f1}--\eqref{dt1_f7}: \begin{equation}\label{ne1_Gform} \begin{split} G^1 & = F^1 -\partial^\alphartialartial_t^2 u +\partial^\alphartialartial_t \bar{\eta} \frac{\partial^\alphartialhi}{\zeta_0} K \partial^\alphartial_2 \partial^\alphartialartial_t u - u \cdot \nablaa \partial^\alphartialartial_t u, \; G^2 = J F^2, \\ G^3_+ &= (\partial^\alphartialartial_t^2 \eta - F^6)/\abs{\mathcal{N}_0}, \; G^3_- = 0, \; G^4_+ = F^4 \cdot \frac{\mathcal{T}}{\abs{\mathcal{T}}^2}, \; G^4_- = F^5 \\ G^5 &= F^4 \cdot \frac{\mathcal{N}}{\abs{\mathcal{N}}^2}, \; G^6 = F^3, \; G^7 = \kappa \partial^\alphartialartial_t^2 \eta + \kappa F^7. \end{split} \end{equation} We begin by estimating the $G^1$ term from \eqref{ne1_Gform}. \begin{prop}\label{ne1_g1} Let $F^1$ be given by \eqref{dt1_f1}. We have the estimate \begin{equation} \norm{F^1 +\partial^\alphartialartial_t \bar{\eta} \frac{\partial^\alphartialhi}{\zeta_0} K \partial^\alphartial_2 \partial^\alphartialartial_t u - u \cdot \nablaa \partial^\alphartialartial_t u}_{L^{q_-}} \lesssim (\sqrt{\mathcal{E}} + \mathcal{E}) \sqrt{\mathcal{D}}. \end{equation} \end{prop} \begin{proof} We will estimate term by term using H\"{o}lder's inequality and the bounds from Theorems \ref{catalog_energy} and \ref{catalog_dissipation}, once more using the ordering scheme used in the proof of Proposition \ref{nid_f1}. Combining the estimates of each term then yields the stated estimate. 
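Before turning to the individual terms, we also record a standard fact that we use without further comment (stated here only for the reader's convenience): since $\Omega$ and $(-\ell,\ell)$ have finite measure, H\"{o}lder's inequality gives
\begin{equation}
\norm{f}_{L^{q}(\Omega)} \le \abs{\Omega}^{1/q - 1/p} \norm{f}_{L^{p}(\Omega)} \quad \text{for } 1 \le q \le p \le \infty,
\end{equation}
and similarly on $(-\ell,\ell)$; this is how comparisons of Lebesgue exponents such as \eqref{ne1_g1_est} below are exploited.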
Recall from \eqref{kappa_ep_def} that $0 < 2\alpha < \varepsilon_- < \varepsilon_+$, which in particular means that \begin{equation}\label{ne1_g1_est} q_- = \frac{2}{2-\varepsilon_-} < \frac{2}{2-\varepsilon_+} = q_+. \end{equation} \textbf{Term: $\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(p,u)$.} First note that \begin{equation} \frac{1 - \varepsilon_+}{2} + \frac{1-\varepsilon_+}{2} + \frac{1}{\infty} \le \frac{2-\varepsilon_-}{2} = \frac{1}{q_-}. \end{equation} Using this and \eqref{ne1_g1_est} we can then bound \begin{multline} \norm{\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(p,u)}_{L^{q_-}} \lesssim \norm{\abs{\partial_t \mathcal{A}} \abs{\nabla \mathcal{A}} ( \abs{p} + \abs{\nabla u}) }_{L^{q_-}} + \norm{\abs{\partial_t \mathcal{A}} (\abs{\nabla p} + \abs{\nabla^2 u}) }_{L^{q_-}} \\ \lesssim \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_+)}} \left( \norm{p}_{L^{2/(1-\varepsilon_+)}} + \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \right) + \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \left( \norm{\nabla p}_{L^{q_+}} + \norm{\nabla^2 u}_{L^{q_+}} \right) \\ \lesssim \mathcal{E} \sqrt{\mathcal{D}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{multline} \textbf{Term: $\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u$.} For this term note that \begin{equation} \frac{1- \varepsilon_-}{2} + \frac{1-\varepsilon_+}{2} \le \frac{2-\varepsilon_-}{2} = \frac{1}{q_-}. \end{equation} This and \eqref{ne1_g1_est} allow us to bound \begin{multline} \norm{ \mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u}_{L^{q_-}} \lesssim \norm{\abs{\nabla \partial_t \mathcal{A}} \abs{\nabla u} }_{L^{q_-}} + \norm{\abs{\partial_t \mathcal{A}} \abs{\nabla^2 u} }_{L^{q_-}} \\ \lesssim \left(\norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_-)}} + \norm{\partial_t \bar{\eta}}_{W^{2,2/(1-\varepsilon_-)}} \right) \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} + \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla^2 u}_{L^{q_+}} \lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{multline} \textbf{Term: $u \cdot \nabla_{\partial_t \mathcal{A}} u$. } We simply use \eqref{ne1_g1_est} to bound \begin{equation} \norm{u \cdot \nabla_{\partial_t \mathcal{A}} u}_{L^{q_-}} \lesssim \norm{\abs{u}\abs{\partial_t \mathcal{A}} \abs{\nabla u}}_{L^{q_-}} \lesssim \norm{u}_{L^\infty} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla u}_{L^{q_+}} \lesssim \mathcal{E} \sqrt{\mathcal{D}}. \end{equation} \textbf{Term: $\partial_t u \cdot \nabla_{\mathcal{A}} u$. } Again we use \eqref{ne1_g1_est} to bound \begin{equation} \norm{\partial_t u \cdot \nabla_{\mathcal{A}} u}_{L^{q_-}} \lesssim \norm{\abs{\partial_t u} \abs{\nabla u}}_{L^{q_-}} \lesssim \norm{\partial_t u}_{L^\infty} \norm{\nabla u}_{L^{q_+}} \lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}. \end{equation} \textbf{Term: $\partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u$.
} Again we use \eqref{ne1_g1_est} to bound \begin{equation} \norm{\partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u}_{L^{q_-}} \lesssim \norm{ \abs{\partial_t^2 \bar{\eta}} \abs{\nabla u}}_{L^{q_-}} \lesssim \norm{\partial_t^2 \bar{\eta}}_{L^\infty} \norm{\nabla u}_{L^{q_+}} \lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{E}}. \end{equation} \textbf{Term: $\partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 u$. } Once more \eqref{ne1_g1_est} lets us bound \begin{equation} \norm{\partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 u }_{L^{q_-}} \lesssim \norm{\abs{\partial_t \bar{\eta}} \abs{\partial_t K} \abs{\nabla u} }_{L^{q_-}} \lesssim \norm{\partial_t \bar{\eta}}_{L^\infty} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla u}_{L^{q_+}} \lesssim \mathcal{E} \sqrt{\mathcal{D}}. \end{equation} \textbf{Term: $\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t u$. } Since \begin{equation}\label{ne1_g1_est2} \frac{1-\varepsilon_-}{2} \le \frac{2 - \varepsilon_-}{2} = \frac{1}{q_-} \end{equation} we can bound \begin{equation} \norm{\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t u}_{L^{q_-}} \lesssim \norm{\abs{\partial_t \bar{\eta}} \abs{\nabla \partial_t u}}_{L^{q_-}} \lesssim \norm{\partial_t \bar{\eta}}_{L^\infty} \norm{\nabla \partial_t u}_{L^{2/(1-\varepsilon_-)}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} \textbf{Term: $u \cdot \nabla_{\mathcal{A}} \partial_t u$. } For this term we use \eqref{ne1_g1_est2} again to bound \begin{equation} \norm{u \cdot \nabla_{\mathcal{A}} \partial_t u}_{L^{q_-}} \lesssim \norm{\abs{u} \abs{\nabla \partial_t u}}_{L^{q_-}} \lesssim \norm{u}_{L^\infty} \norm{\nabla \partial_t u}_{L^{2/(1-\varepsilon_-)}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} \end{proof} Next we estimate the $G^2$ term from \eqref{ne1_Gform}. \begin{prop}\label{ne1_g2} Let $F^2$ be given by \eqref{dt1_f2}. Then we have that \begin{equation} \norm{J F^2}_{W^{1,q_-}} \lesssim (\sqrt{\mathcal{E}}+ \mathcal{E}) \sqrt{\mathcal{D}}. \end{equation} \end{prop} \begin{proof} We begin by noting that $JF^2 = -J\diverge_{\partial_t \mathcal{A}} u$, so \begin{equation} \norm{J F^2}_{W^{1,q_-}} \lesssim \norm{J\diverge_{\partial_t \mathcal{A}} u}_{L^{q_-}} + \norm{\nabla(J\diverge_{\partial_t \mathcal{A}} u)}_{L^{q_-}}. \end{equation} We will estimate each of these terms with H\"{o}lder's inequality and the bounds from Theorems \ref{catalog_energy} and \ref{catalog_dissipation}, again using the ordering scheme used in the proof of Proposition \ref{nid_f1}.
For the first we use the fact that $q_- < q_+$ to bound \begin{equation} \norm{J\diverge_{\partial_t \mathcal{A}} u}_{L^{q_-}} \lesssim \norm{\abs{\partial_t \mathcal{A}} \abs{\nabla u}}_{L^{q_-}} \lesssim \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla u}_{L^{q_+}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} For the second term we expand with the product rule and note that \eqref{kappa_ep_def} implies \begin{equation}\label{ne1_g2_est} \frac{1- \varepsilon_+}{2} + \frac{1- \varepsilon_+}{2} \le \frac{2- \varepsilon_-}{2} = \frac{1}{q_-} \text{ and } \frac{1- \varepsilon_+}{2} + \frac{1- \varepsilon_-}{2} \le \frac{2- \varepsilon_-}{2} = \frac{1}{q_-}, \end{equation} which allows us to bound \begin{multline} \norm{\nabla(J\diverge_{\partial_t \mathcal{A}} u)}_{L^{q_-}} \lesssim \norm{\abs{\nabla J} \abs{\partial_t \mathcal{A}} \abs{\nabla u} }_{L^{q_-}} + \norm{ \abs{\nabla \partial_t \mathcal{A}} \abs{\nabla u} }_{L^{q_-}} + \norm{\abs{\partial_t \mathcal{A}} \abs{\nabla^2 u} }_{L^{q_-}} \\ \lesssim \norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_+)}} \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} + \left(\norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_-)}} + \norm{\partial_t \bar{\eta}}_{W^{2,2/(1-\varepsilon_-)}} \right) \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \\ + \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla^2 u}_{L^{q_+}} \lesssim \mathcal{E} \sqrt{\mathcal{D}} + \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \lesssim (\sqrt{\mathcal{E}} + \mathcal{E})\sqrt{\mathcal{D}}. \end{multline} Combining these bounds then yields the stated estimate. \end{proof} The next result records the bounds for the $G^3$ term from \eqref{ne1_Gform}. \begin{prop}\label{ne1_g3} Let $F^6$ be given by \eqref{dt1_f6}. We have the bound \begin{equation} \norm{(\partial_t^2 \eta - F^6)/\abs{\mathcal{N}_0}}_{W^{2-1/q_-,q_-}} \lesssim \norm{\partial_t^2 \eta}_{H^{3/2-\alpha}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} \end{prop} \begin{proof} First recall that $\mathcal{N}_0 = -\partial_1 \zeta_0 e_1 + e_2$, so \begin{equation} \frac{1}{\abs{\mathcal{N}_0}} = \frac{1}{\sqrt{1 + \abs{ \partial_1 \zeta_0}^2}} \end{equation} is smooth, and we may thus bound \begin{equation} \norm{(\partial_t^2 \eta - F^6)/\abs{\mathcal{N}_0}}_{W^{2-1/q_-,q_-}} \lesssim \norm{\partial_t^2 \eta - F^6}_{W^{2-1/q_-,q_-}}. \end{equation} Next note that \eqref{kappa_ep_def} implies that $2\alpha + \varepsilon_- < 1$, so \begin{equation} 2 - \frac{1}{q_-} = 2 - \frac{2 - \varepsilon_-}{2} = 1 + \frac{\varepsilon_-}{2} \le \frac{3}{2} - \alpha \end{equation} and \begin{equation} \frac{1}{q_-} = \frac{2-\varepsilon_-}{2} \ge \frac{2\alpha + \varepsilon_-}{2} = \frac{1}{2} - \frac{1}{1} \left(\frac{3}{2} - \alpha -1 - \frac{\varepsilon_-}{2} \right), \end{equation} and so these parameter bounds and the Sobolev embeddings show that \begin{equation} H^{3/2-\alpha}((-\ell,\ell)) \hookrightarrow W^{2-1/q_-, 2/(2\alpha + \varepsilon_-)}((-\ell,\ell)) \hookrightarrow W^{2-1/q_-,q_-}((-\ell,\ell)).
\end{equation} This allows us to estimate \begin{equation} \norm{\partial_t^2 \eta}_{W^{2-1/q_-,q_-}} \lesssim \norm{\partial_t^2 \eta}_{H^{3/2 - \alpha}}. \end{equation} For the $F^6= -u_1 \partial_1 \partial_t \eta$ term we use the fact that $W^{2-1/q_-,q_-}((-\ell,\ell))$ is an algebra in conjunction with trace theory and the definitions of $\mathcal{E}$ and $\mathcal{D}$ in \eqref{E_def} and \eqref{D_def}, respectively, to bound \begin{equation} \norm{F^6}_{W^{2-1/q_-,q_-}} \lesssim \norm{u}_{W^{2-1/q_-,q_-}(\Sigma)} \norm{\partial_1 \partial_t \eta}_{W^{2-1/q_-,q_-}} \lesssim \norm{u}_{W^{2,q_-}} \norm{\partial_t \eta}_{W^{3-1/q_-,q_-}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} Combining these then yields the stated bound. \end{proof} Next we bound the $G^4$ and $G^5$ terms from \eqref{ne1_Gform}. \begin{prop}\label{ne1_g4_g5} Let $F^4$ and $F^5$ be given by \eqref{dt1_f4} and \eqref{dt1_f5}, respectively. Then we have the bound \begin{equation} \norm{F^4 \cdot \frac{\mathcal{T}}{\abs{\mathcal{T}}^2}}_{W^{1-1/q_-,q_-}} + \norm{F^4 \cdot \frac{\mathcal{N}}{\abs{\mathcal{N}}^2}}_{W^{1-1/q_-,q_-}} + \norm{F^5 }_{W^{1-1/q_-,q_-}} \lesssim (\sqrt{\mathcal{E}} + \mathcal{E}) \sqrt{\mathcal{D}}. \end{equation} \end{prop} \begin{proof} Recall that $\mathcal{E}$ and $\mathcal{D}$ are as defined in \eqref{E_def} and \eqref{D_def}. We begin with the $F^5 = \mu \mathbb{D}_{\partial_t \mathcal{A}} u \nu \cdot \tau$ term. We use trace theory, the product rule, Theorems \ref{catalog_energy} and \ref{catalog_dissipation}, and \eqref{ne1_g2_est} to bound \begin{multline}\label{ne1_g4_1} \norm{F^5 }_{W^{1-1/q_-,q_-}} \lesssim \norm{\mathbb{D}_{\partial_t \mathcal{A}} u }_{W^{1,q_-}(\Omega)} \lesssim \norm{\abs{\partial_t \mathcal{A}} \abs{\nabla u}}_{L^{q_-}} + \norm{\abs{\nabla \partial_t \mathcal{A}} \abs{\nabla u}}_{L^{q_-}} + \norm{\abs{\partial_t \mathcal{A}} \abs{\nabla^2 u}}_{L^{q_-}} \\ \lesssim \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla u}_{L^{q_+}} + \left(\norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_-)}} + \norm{\partial_t \bar{\eta}}_{W^{2,2/(1-\varepsilon_-)}} \right) \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} + \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla^2 u}_{L^{q_+}} \\ \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} + \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{multline} We next turn our attention to the $F^4$ term. First note that since \begin{equation} \frac{\mathcal{T}}{\abs{\mathcal{T}}^2} = \frac{(1,\partial_1 \zeta_0 + \partial_1 \eta)}{1 + \abs{\partial_1 \zeta_0 + \partial_1 \eta}^2}, \quad \mathcal{N} = (-\partial_1 \zeta_0 - \partial_1 \eta,1), \text{ and } \frac{\mathcal{N}}{\abs{\mathcal{N}}^2} = \frac{(-\partial_1 \zeta_0 - \partial_1 \eta,1)}{1 + \abs{\partial_1 \zeta_0 + \partial_1 \eta}^2}, \end{equation} we have that \begin{equation}\label{ne1_g4_2} \norm{ \frac{\mathcal{T}}{\abs{\mathcal{T}}^2}}_{W^{1,q_-}} + \norm{ \frac{\mathcal{N}}{\abs{\mathcal{N}}^2}}_{W^{1,q_-}} + \norm{\mathcal{N}}_{W^{1,q_-}} \lesssim 1+ \sqrt{\mathcal{E}} \lesssim 1.
\end{equation} This allows us to employ Theorem \ref{supercrit_prod} with $1 > \max\{1/q_-,1-1/q_1\}$ to estimate \begin{multline} \norm{F^4 \cdot \frac{\mathcal{T}}{\abs{\mathcal{T}}^2}}_{W^{1-1/q_-,q_-}} + \norm{F^4 \cdot \frac{\mathcal{N}}{\abs{\mathcal{N}}^2}}_{W^{1-1/q_-,q_-}} \lesssim \norm{F^4 }_{W^{1-1/q_-,q_-}} \left( \norm{ \frac{\mathcal{T}}{\abs{\mathcal{T}}^2}}_{W^{1,q_-}} + \norm{ \frac{\mathcal{N}}{\abs{\mathcal{N}}^2}}_{W^{1,q_-}} \right) \\ \lesssim \norm{F^4 }_{W^{1-1/q_-,q_-}}. \end{multline} We will then estimate the $F^4$ norm on the right term by term to arrive at the stated estimate. \textbf{Term: $\mu \mathbb{D}_{\partial_t \mathcal{A}} u \mathcal{N}$.} For this term we first use Theorem \ref{supercrit_prod} and trace theory to bound \begin{equation} \norm{\mu \mathbb{D}_{\partial_t \mathcal{A}} u \mathcal{N}}_{W^{1-1/q_-,q_-}} \lesssim \norm{\mathbb{D}_{\partial_t \mathcal{A}} u}_{W^{1-1/q_-,q_-}(\Sigma)} \norm{\mathcal{N}}_{W^{1,q_-}} \lesssim \norm{\mathbb{D}_{\partial_t \mathcal{A}} u}_{W^{1,q_-}(\Omega)} \norm{\mathcal{N}}_{W^{1,q_-}}. \end{equation} Using this, \eqref{ne1_g4_2}, and the estimate from \eqref{ne1_g4_1}, we deduce that \begin{equation} \norm{\mu \mathbb{D}_{\partial_t \mathcal{A}} u \mathcal{N}}_{W^{1-1/q_-,q_-}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} \textbf{Term: $g\eta \partial_t \mathcal{N}$.} For this term we use Theorem \ref{supercrit_prod} to bound \begin{equation} \norm{g\eta \partial_t \mathcal{N}}_{W^{1-1/q_-,q_-}} \lesssim \norm{\eta}_{W^{1,q_-}} \norm{\partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}} \lesssim \norm{\eta}_{W^{1,q_-}} \norm{\partial_t \eta}_{W^{2-1/q_-,q_-}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} \textbf{Term: $- \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) \partial_t \mathcal{N} $.} We begin by expanding with the product rule and using the fact that $\zeta_0$ is smooth to bound \begin{equation} \norm{- \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) \partial_t \mathcal{N}}_{W^{1-1/q_-,q_-}} \lesssim \norm{\partial_1 \eta \partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}} + \norm{\partial_1^2 \eta \partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}}. \end{equation} Then Theorem \ref{supercrit_prod} and the fact that $2 < 3 - 1/q_+$ and $q_- < q_+$ allow us to bound \begin{equation} \norm{\partial_1 \eta \partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}} \lesssim \norm{\partial_1 \eta }_{W^{1,q_-}} \norm{ \partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}} \lesssim \norm{\eta }_{W^{2,q_-}} \norm{\partial_t \eta}_{W^{2-1/q_-,q_-}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation} Similarly, Theorem \ref{supercrit_prod} and the bounds $q_- < q_+$ and $3-1/q_- < 3 - 1/q_+$ imply that \begin{multline}\label{ne1_g4_3} \norm{\partial_1^2 \eta \partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}} \lesssim \norm{\partial_1^2 \eta }_{W^{1-1/q_-,q_-}} \norm{ \partial_1 \partial_t \eta}_{W^{1 ,q_-}} \lesssim \norm{\eta }_{W^{3-1/q_-,q_-}} \norm{ \partial_t \eta}_{W^{2 ,q_-}} \\ \lesssim \norm{\eta }_{W^{3-1/q_+,q_+}} \norm{ \partial_t \eta}_{W^{2 ,q_-}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{multline} Combining these then shows that \begin{equation} \norm{- \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) \partial_t \mathcal{N}}_{W^{1-1/q_-,q_-}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} \textbf{Term: $ - \sigma \partial_1 \left( \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) \partial_t \mathcal{N} $.} For this term we first expand \begin{equation} \partial_1 \left( \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) = \partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \zeta_0 + \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \eta \end{equation} and then use Theorem \ref{supercrit_prod} to bound \begin{multline} \norm{\sigma \partial_1 \left( \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) \partial_t \mathcal{N} }_{W^{1-1/q_-,q_-}} \lesssim \norm{\partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \zeta_0}_{W^{1,q_-}} \norm{\partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}} \\ + \norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_-}} \norm{\partial_1^2 \eta \partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}}. \end{multline} The fact that $W^{1,q_-}((-\ell,\ell))$ is an algebra, Proposition \ref{R_prop}, and Theorem \ref{catalog_energy} then show that \begin{multline} \norm{\partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \zeta_0}_{W^{1,q_-}} \lesssim \norm{\partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) }_{W^{1,q_-}} \lesssim \norm{\abs{\partial_1 \eta}^2 }_{L^{q_-}} + \norm{\partial_1^2 \eta}_{L^{q_-}} \\ \lesssim \norm{\partial_1 \eta}_{L^\infty}^2 + \norm{\eta}_{W^{3-1/q_+,q_+}} \lesssim \sqrt{\mathcal{E}} \end{multline} and \begin{equation} \norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_-}} \lesssim \norm{\partial_1 \eta}_{L^{q_-}} + \norm{\partial_1^2 \eta}_{L^{q_-}} \lesssim \norm{\partial_1 \eta}_{L^\infty}^2 + \norm{\eta}_{W^{3-1/q_+,q_+}} \lesssim \sqrt{\mathcal{E}}.
\end{equation} Combining these with \eqref{ne1_g4_3} then shows that \begin{equation} \norm{\sigma \partial_1 \left( \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) \partial_t \mathcal{N} }_{W^{1-1/q_-,q_-}} \lesssim \sqrt{\mathcal{E}}\left( \norm{\partial_t \eta}_{W^{2-1/q_-,q_-}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \right) \lesssim \left(\sqrt{\mathcal{E}} + \mathcal{E}\right) \sqrt{\mathcal{D}}. \end{equation} \textbf{Term: $- S_{\mathcal{A}}(p,u)\partial_t \mathcal{N} $.} For this term we use Theorem \ref{supercrit_prod}, trace theory, the bound \begin{equation} \frac{1-\varepsilon_+}{2} + \frac{1-\varepsilon_+}{2} = \frac{2- 2\varepsilon_+}{2} < \frac{2 - \varepsilon_-}{2} = \frac{1}{q_-}, \end{equation} and H\"older's inequality to estimate \begin{multline} \norm{S_{\mathcal{A}}(p,u)\partial_t \mathcal{N} }_{W^{1-1/q_-,q_-}} \le \norm{S_{\mathcal{A}}(p,u) }_{W^{1-1/q_-,q_-}(\Sigma)} \norm{\partial_1 \partial_t \eta}_{W^{1,q_-}} \lesssim \norm{S_{\mathcal{A}}(p,u) }_{W^{1,q_-}(\Omega)} \norm{\partial_t \eta}_{W^{2,q_-}} \\ \lesssim \left(\norm{p}_{W^{1,q_-}} + \norm{u}_{W^{2,q_-}} + \norm{\abs{\nabla \mathcal{A}} \abs{\nabla u}}_{L^{q_-}}\right) \norm{\partial_t \eta}_{W^{2,q_-}} \\ \lesssim \left(\norm{p}_{W^{1,q_+}} + \norm{u}_{W^{2,q_+}} + \norm{\bar{\eta}}_{W^{2,2/(1-\varepsilon_+)}} \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \right) \norm{\partial_t \eta}_{W^{2,q_-}} \lesssim (\sqrt{\mathcal{E}}+\mathcal{E}) \sqrt{\mathcal{D}}. \end{multline} \end{proof} The next result records the estimates for the $G^6$ term from \eqref{ne1_Gform}. \begin{prop}\label{ne1_g6} Let $F^3$ be as in \eqref{dt1_f3}. Then we have the estimate \begin{equation} \norm{\partial_1 F^3 }_{W^{1-1/q_-,q_-}} \lesssim \left( \sqrt{\mathcal{E}} + \mathcal{E}\right) \sqrt{\mathcal{D}}. \end{equation} \end{prop} \begin{proof} We begin by expanding \begin{multline} \partial_1 F^3 = \partial_1 \partial_t [ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] = \partial_z \partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \zeta_0 \partial_1 \partial_t \eta \\ + \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \eta \partial_1 \partial_t \eta + \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1^2 \partial_t \eta := I + II + III. \end{multline} Since $1 > \max\{1/q_-,1-1/q_1\}$ we can use Theorem \ref{supercrit_prod} to bound \begin{equation}\label{ne1_g6_1} \norm{\varphi \psi}_{W^{1-1/q_-,q_-}}\lesssim \norm{\varphi}_{W^{1,q_-}} \norm{\psi}_{W^{1-1/q_-,q_-}}, \end{equation} and we will use this to handle each of $I$, $II$, and $III$.
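Before doing so, we note for clarity that the condition $1/q_- < 1$ used above is immediate from \eqref{kappa_ep_def}: since $0 < \varepsilon_- < 1$ we have
\begin{equation}
\frac{1}{q_-} = \frac{2-\varepsilon_-}{2} = 1 - \frac{\varepsilon_-}{2} \in \left(\frac{1}{2},1\right);
\end{equation}
we do not restate the remaining hypotheses of Theorem \ref{supercrit_prod} here.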
We begin with $I$ by using \eqref{ne1_g6_1} twice together with the fact that $q_- < q_+$ to bound \begin{multline} \norm{I}_{W^{1-1/q_-,q_-}} \lesssim \norm{\partial_z \partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_-}} \norm{\partial_1^2 \zeta_0 \partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}} \\ \lesssim \norm{\partial_z \partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_-}} \norm{\partial_1^2 \zeta_0}_{W^{1,q_-}} \norm{ \partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}} \lesssim \norm{\partial_z \partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_-}} \sqrt{\mathcal{D}}. \end{multline} For $II$ we also use \eqref{ne1_g6_1} twice and $q_- < q_+$ to see that \begin{multline} \norm{II}_{W^{1-1/q_-,q_-}} \lesssim \norm{ \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_-}} \norm{ \partial_1^2 \eta \partial_1 \partial_t \eta}_{W^{1-1/q_-,q_-}} \\ \lesssim \norm{ \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_-}} \norm{ \partial_1^2 \eta }_{W^{1-1/q_-,q_-}} \norm{\partial_1 \partial_t \eta }_{W^{1,q_-}} \lesssim \norm{ \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_-}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{multline} For $III$ we only apply \eqref{ne1_g6_1} once to see that \begin{equation} \norm{III}_{W^{1-1/q_-,q_-}}\lesssim \norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) }_{W^{1,q_-}} \norm{\partial_1^2 \partial_t \eta}_{W^{1-1/q_-,q_-}} \lesssim \norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) }_{W^{1,q_-}} \sqrt{\mathcal{D}}. \end{equation} It remains to handle the $\mathcal{R}$ terms in these estimates. For this we use Proposition \ref{R_prop} to bound \begin{equation} \norm{ \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_-}} \lesssim 1 + \norm{\partial_1 \eta}_{W^{1,q_-}} \lesssim 1 + \norm{\eta}_{W^{2,q_-}} \lesssim 1 + \sqrt{\mathcal{E}} \end{equation} and \begin{equation} \norm{\partial_z \partial_y \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_-}} + \norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) }_{W^{1,q_-}} \lesssim \norm{\eta}_{W^{2,q_-}} \lesssim \sqrt{\mathcal{E}}. \end{equation} Combining these bounds with the above, we deduce that \begin{equation} \norm{\partial_1 F^3}_{W^{1-1/q_-,q_-}} \lesssim \left( \sqrt{\mathcal{E}} + \mathcal{E}\right) \sqrt{\mathcal{D}}, \end{equation} as desired. \end{proof} Finally, we bound the $G^7$ term from \eqref{ne1_Gform}. \begin{prop}\label{ne1_g7} Let $F^7$ be as defined by \eqref{dt1_f7}. Then we have the estimate \begin{equation} [\kappa F^7]_\ell \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} \end{prop} \begin{proof} The definition of $\mathscr{W} \in C^2$ in \eqref{V_pert} shows that $\abs{\mathscr{W}'(z)} \lesssim \abs{z}$ for $\abs{z} \lesssim 1$.
From this, standard trace theory, and the definitions of $\mathcal{E}$ and $\mathcal{D}$ in \eqref{E_def} and \eqref{D_def}, respectively, we may then bound \begin{equation} [\kappa F^7]_\ell \lesssim \max_{\pm \ell} \abs{\partial_t \eta} \abs{\partial_t^2 \eta} \lesssim \norm{\partial_t \eta}_{H^1} [\partial_t^2 \eta]_{\ell} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}. \end{equation} \end{proof} \subsection{Two time derivatives} We will not apply Theorem \ref{A_stokes_stress_solve} to the twice time differentiated problem. However, we will need the following pair of estimates, which are in the same spirit as the above elliptic estimates. The first gives estimates of $F^2$ from \eqref{dt2_f2}. \begin{prop}\label{ne2_f2} Let $F^2$ be given by \eqref{dt2_f2}. Then we have the estimates \begin{equation}\label{ne2_f2_01} \norm{J F^2}_{L^{4/(3-2\varepsilon_+)}} \lesssim \mathcal{E}, \end{equation} \begin{equation}\label{ne2_f2_02} \norm{J F^2}_{L^{2/(1-\varepsilon_-)}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}, \end{equation} and \begin{equation}\label{ne2_f2_03} \norm{\partial_t(J F^2)}_{L^{q_-}} \lesssim (\sqrt{\mathcal{E}} + \mathcal{E}) \sqrt{\mathcal{D}}. \end{equation} \end{prop} \begin{proof} First note that \eqref{kappa_ep_def} requires that $0 <3 - 2 \varepsilon_+ <1$, $0 < 1 - (\varepsilon_+-\alpha) < 1$, and \begin{equation}\label{ne2_f2_1} \varepsilon_+ \le \frac{1+\varepsilon_-}{2} \Rightarrow \frac{4}{3-2\varepsilon_+} \le \frac{4}{2-\varepsilon_-}. \end{equation} Also, from \eqref{dt2_f2} we have that \begin{equation} F^2 = -\diverge_{\partial_t^2 \mathcal{A}} u - 2\diverge_{\partial_t \mathcal{A}}\partial_t u. \end{equation} Then from Theorems \ref{catalog_energy} and \ref{catalog_dissipation} and H\"{o}lder's inequality we can bound \begin{multline} \norm{J \diverge_{\partial_t^2 \mathcal{A}} u}_{L^{4/(3-2\varepsilon_+)}} \lesssim \norm{\abs{\partial_t^2 \mathcal{A}} \abs{\nabla u}}_{L^{4/(3-2\varepsilon_+)}} \lesssim \norm{\partial_t^2 \mathcal{A}}_{L^4} \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \\ \lesssim \left(\norm{\partial_t \bar{\eta}}_{W^{1,4}} + \norm{\partial_t^2 \bar{\eta}}_{W^{1,4}} \right) \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \lesssim \mathcal{E} \end{multline} and, also using \eqref{ne2_f2_1}, \begin{multline} \norm{J \diverge_{\partial_t \mathcal{A}}\partial_t u}_{L^{4/(3-2\varepsilon_+)}} \lesssim \norm{J \diverge_{\partial_t \mathcal{A}}\partial_t u}_{L^{4/(2-\varepsilon_-)}} \lesssim \norm{\abs{\partial_t \mathcal{A}} \abs{\nabla \partial_t u} }_{L^{4/(2-\varepsilon_-)}} \\ \lesssim \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla \partial_t u}_{L^{4/(2-\varepsilon_-)}} \lesssim \mathcal{E}. \end{multline} Thus, \eqref{ne2_f2_01} holds. To prove \eqref{ne2_f2_02} we argue similarly, first noting that \eqref{kappa_ep_def} tells us that \begin{equation} 0 < \varepsilon_+ - \varepsilon_- -\alpha \Rightarrow 1 - (\varepsilon_+-\alpha) < 1 - \varepsilon_- \Rightarrow \frac{2}{1-\varepsilon_-} < \frac{2}{1-(\varepsilon_+-\alpha)}.
\end{equation} Thus, \begin{multline} \norm{J \diverge_{\partial_t^2 \mathcal{A}} u}_{L^{2/(1-\varepsilon_-)}} \lesssim \norm{J \diverge_{\partial_t^2 \mathcal{A}} u}_{L^{2/(1-(\varepsilon_+-\alpha))}} \lesssim \norm{\abs{\partial_t^2 \mathcal{A}} \abs{\nabla u}}_{L^{2/(1-(\varepsilon_+-\alpha))}} \\ \lesssim \norm{\partial_t^2 \mathcal{A}}_{L^{2/\alpha}} \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \lesssim \left(\norm{\partial_t \bar{\eta}}_{W^{1,2/\alpha}} + \norm{\partial_t^2 \bar{\eta}}_{W^{1,2/\alpha}} \right) \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} \lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \end{multline} and \begin{equation} \norm{J \diverge_{\partial_t \mathcal{A}}\partial_t u}_{L^{2/(1-\varepsilon_-)}} \lesssim \norm{\abs{\partial_t \mathcal{A}} \abs{\nabla \partial_t u}}_{L^{2/(1-\varepsilon_-)}} \lesssim \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\nabla \partial_t u}_{L^{2/(1-\varepsilon_-)}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}, \end{equation} and \eqref{ne2_f2_02} follows. Finally, note that \begin{equation} \abs{\partial_t(J F^2)} \lesssim \abs{\partial_t^3 \mathcal{A}}\abs{\nabla u} + \abs{\partial_t^2 \mathcal{A}} \abs{\nabla \partial_t u} + \abs{\partial_t \mathcal{A}} \abs{\nabla \partial_t^2 u} + \abs{\partial_t J}\left(\abs{\partial_t^2 \mathcal{A}} \abs{\nabla u} + \abs{\partial_t \mathcal{A}} \abs{\nabla \partial_t u} \right), \end{equation} while \eqref{kappa_ep_def} tells us that $0 < \varepsilon_+ - 2 \alpha < 1$, \begin{equation} \alpha < \frac{\varepsilon_+-\varepsilon_-}{2} \Rightarrow \varepsilon_- < \varepsilon_+ - 2\alpha \Rightarrow \frac{1-\varepsilon_+}{2} + \frac{1+2\alpha}{2} = 1 -\frac{(\varepsilon_+-2\alpha)}{2} < 1 - \frac{\varepsilon_-}{2} = \frac{1}{q_-}, \end{equation} and \begin{equation} \frac{1-\varepsilon_-}{2} + \frac{1}{2} = \frac{2-\varepsilon_-}{2} = \frac{1}{q_-}, \end{equation} so we may again use Theorems \ref{catalog_energy} and \ref{catalog_dissipation} and H\"{o}lder's inequality to see that \begin{multline} \norm{\partial_t(J F^2)}_{L^{q_-}} \lesssim \norm{\partial_t^3 \mathcal{A}}_{L^{2/(1+2\alpha)}} \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} + \norm{\partial_t^2 \mathcal{A}}_{L^2} \norm{\nabla \partial_t u}_{L^{2/(1-\varepsilon_-)}} + \norm{\partial_t \mathcal{A}}_{L^{\infty}} \norm{\nabla \partial_t^2 u}_{L^{2}} \\ + \norm{\partial_t \bar{\eta} }_{W^{1,\infty}} \left( \norm{\partial_t^2 \mathcal{A}}_{L^2} \norm{\nabla u}_{L^{2/(1-\varepsilon_+)}} + \norm{\partial_t \mathcal{A}}_{L^\infty} \norm{\nabla \partial_t u}_{L^{2/(1-\varepsilon_-)}} \right) \\ \lesssim \left(\norm{\partial_t \bar{\eta}}_{H^1} + \norm{\partial_t^2 \bar{\eta}}_{H^1} + \norm{\partial_t^3 \bar{\eta}}_{W^{1,2/(1+2\alpha)}} \right) \sqrt{\mathcal{E}} \\ + \left(\norm{\partial_t \bar{\eta}}_{H^1} + \norm{\partial_t^2 \bar{\eta}}_{H^1} + \norm{\partial_t
\bar{\eta}}_{W^{1,\infty}} \right) \sqrt{\mathcal{D}} + \sqrt{\mathcal{E}} \left(\norm{\partial_t \bar{\eta}}_{H^1} + \norm{\partial_t^2 \bar{\eta}}_{H^1} + \norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \right) \sqrt{\mathcal{D}} \\ \lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} + \mathcal{E} \sqrt{\mathcal{D}}. \end{multline} Then \eqref{ne2_f2_03} follows. \end{proof} Next we provide a bound for $F^6$ from \eqref{dt2_f6}. \begin{prop}\label{ne2_f6} Let $F^6$ be given by \eqref{dt2_f6} and $\mathcal{N}$ be given by \eqref{N_def}. Then we have the estimates \begin{equation} \norm{F^6}_{H^{1/2-\alpha}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \end{equation} and \begin{equation} \norm{\partial_t^2 u \cdot \mathcal{N}}_{H^{1/2}((-\ell,\ell))} \lesssim (1+\sqrt{\mathcal{E}}) \norm{\partial_t^2 u}_{H^1}. \end{equation} \end{prop} \begin{proof} According to \eqref{dt2_f6}, Theorem \ref{supercrit_prod} with $\frac{1}{2} + \varepsilon_\pm > \max\{\frac{1}{2},\frac{1}{2}-\alpha\}$, and trace theory we have that \begin{multline} \norm{F^6}_{H^{1/2-\alpha}} \lesssim \norm{\partial_t u_1 \partial_1 \partial_t \eta}_{H^{1/2-\alpha}} + \norm{u_1 \partial_1 \partial_t^2 \eta}_{H^{1/2-\alpha}} \\ \lesssim \norm{\partial_t u_1 }_{H^{1/2-\alpha}(\Sigma)} \norm{\partial_1 \partial_t \eta}_{H^{1/2+\varepsilon_-}} + \norm{u_1 }_{H^{1/2+\varepsilon_+}(\Sigma)} \norm{\partial_1 \partial_t^2 \eta}_{H^{1/2-\alpha}} \\ \lesssim \norm{\partial_t u_1 }_{H^{1-\alpha}(\Omega)} \norm{\partial_t \eta}_{H^{3/2+\varepsilon_-}} + \norm{u_1 }_{H^{1+\varepsilon_+}(\Omega)} \norm{\partial_t^2 \eta}_{H^{3/2-\alpha}}. \end{multline} Note that \begin{equation} \frac{1}{2} = \frac{2-\varepsilon_-}{2} - \frac{1}{1} \left( 2 + \frac{\varepsilon_-}{2} - \frac{3}{2} - \varepsilon_- \right) = \frac{1}{q_-} - \frac{1}{1}\left(3 - \frac{1}{q_-} - \frac{3}{2}-\varepsilon_- \right), \end{equation} and \begin{equation} \frac{1}{2} = \frac{2-\varepsilon_+}{2} - \frac{1}{2}\left( 2 - 1 - \varepsilon_+ \right) = \frac{1}{q_+} - \frac{1}{2} \left( 2 - (1+\varepsilon_+)\right), \end{equation} so the Sobolev embeddings show that \begin{equation} W^{3-1/q_-,q_-}((-\ell,\ell)) \hookrightarrow H^{3/2 + \varepsilon_-}((-\ell,\ell)) \end{equation} and \begin{equation} W^{2,q_+}(\Omega) \hookrightarrow H^{1+\varepsilon_+}(\Omega). \end{equation} Hence, \begin{equation} \norm{\partial_t \eta}_{H^{3/2+\varepsilon_-}} \lesssim \norm{\partial_t \eta}_{W^{3-1/q_-,q_-}} \lesssim \sqrt{\mathcal{E}} \end{equation} and \begin{equation} \norm{u_1}_{H^{1+\varepsilon_+}(\Omega)} \lesssim \norm{u}_{W^{2,q_+}} \lesssim \sqrt{\mathcal{E}}. \end{equation} Moreover, since $1 - \alpha < 1$ and $2 < 2/(1-\varepsilon_-)$ we can use Theorem \ref{catalog_dissipation} to bound \begin{equation} \norm{\partial_t u_1}_{H^{1-\alpha}(\Omega)} \lesssim \norm{\partial_t u_1}_{H^{1}} \lesssim \sqrt{\mathcal{D}}. \end{equation} Thus, upon combining all of these, we deduce that \begin{equation} \norm{F^6}_{H^{1/2-\alpha}} \lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}, \end{equation} as desired.
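For the reader's convenience, we record the elementary arithmetic behind the first of the two index identities used above: since $2/q_- = 2 - \varepsilon_-$, we have
\begin{equation}
\frac{1}{q_-} - \frac{1}{1}\left(3 - \frac{1}{q_-} - \frac{3}{2} - \varepsilon_- \right) = \frac{2}{q_-} - \frac{3}{2} + \varepsilon_- = (2-\varepsilon_-) - \frac{3}{2} + \varepsilon_- = \frac{1}{2},
\end{equation}
and the second identity is verified in the same way.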
For the second estimate we use the fact that $H^{1/2}((-\ell,\ell))$ is an algebra in conjunction with trace theory and the embedding \eqref{nie_ST_3}: \begin{multline} \norm{\partial_t^2 u \cdot \mathcal{N}}_{H^{1/2}((-\ell,\ell))} \le \norm{\partial_t^2 u\cdot (1,-\partial_1 \zeta_0) }_{H^{1/2}((-\ell,\ell))} + \norm{\partial_t^2 u_2 \partial_1 \eta}_{H^{1/2}((-\ell,\ell))} \\ \lesssim \norm{\partial_t^2 u }_{H^{1/2}(\Sigma)} \left(1 + \norm{\partial_1 \eta}_{H^{1/2}} \right) \lesssim \norm{\partial_t^2 u }_{H^{1}(\Omega)} \left(1 + \norm{\eta}_{W^{3-1/q_+,q_+}} \right) \lesssim \norm{\partial_t^2 u}_{H^1} (1+\sqrt{\mathcal{E}}). \end{multline} \end{proof} \section{Functional calculus of the gravity-capillary operator}\label{sec_fnal_calc} In this section we record some essential properties of the gravity-capillary operator, $\mathcal{K}$, associated to the equilibrium $\zeta_0 : [-\ell,\ell] \to \mathbb{R}$ from Theorem \ref{zeta0_wp}, with gravitational coefficient $g >0$ and surface tension $\sigma >0$. In particular, we develop the functional calculus associated to $\mathcal{K}$ with Neumann-type boundary conditions, we study a scale of custom Sobolev spaces built in terms of the eigenfunctions of $\mathcal{K}$, and we consider some useful approximations of the fractional differential operator $D^s = (\mathcal{K})^{s/2}$. \subsection{Basic spaces and the gravity-capillary operator} We write the inner-products on $L^2((-\ell,\ell)) = H^0((-\ell,\ell))$ and $H^1((-\ell,\ell))$ via \begin{equation}\label{bndry_ips} (\varphi,\psi)_{0,\Sigma} := \int_{-\ell}^\ell \varphi \psi \text{ and } (\varphi,\psi)_{1,\Sigma} := \int_{-\ell}^\ell g \varphi \psi + \sigma \frac{\partial_1 \varphi \partial_1 \psi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}}. \end{equation} It is clear from the properties of $\zeta_0$ stated in Theorem \ref{zeta0_wp} that the latter generates a norm equivalent to the standard one on $H^1((-\ell,\ell))$ and thus generates the standard topology. Recall from \eqref{bndry_pairing} that for pairs $\varphi,\psi : \{-\ell,\ell\} \to \mathbb{R}$ we write \begin{equation} [\varphi,\psi]_\ell = \varphi(-\ell)\psi(-\ell) + \varphi(\ell)\psi(\ell) \text{ and } [\varphi]_\ell = \sqrt{[\varphi,\varphi]_\ell}, \end{equation} and we will often slightly abuse this notation by writing $[\varphi,\psi]_\ell$ when either $\varphi$ or $\psi$ is a function on $(-\ell,\ell)$ with well-defined traces, in which case the understanding is that the map on $\{-\ell,\ell\}$ is defined by the trace. The inner-product gives rise to the following elliptic operator, which we call the gravity-capillary operator associated to $\zeta_0$: \begin{equation}\label{cap_op_def} \mathcal{K} \varphi := g \varphi - \sigma \partial_1 \left( \frac{\partial_1 \varphi }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right).
\end{equation} The associated boundary operators are \begin{equation}\label{bnrdy_op_def} \mathcal{B}_\pm \psi = \pm\frac{\psi'(\pm\ell)}{(1+\abs{\zeta_0'(\pm\ell)}^2)^{3/2}}, \end{equation} and we write $\mathcal{B}\psi : \{-\ell,\ell\} \to \mathbb{R}$ via $\mathcal{B}\psi(\pm\ell) = \mathcal{B}_{\pm} \psi$. Then $\mathcal{K}$ and $\mathcal{B}$ intertwine our choice of inner-products on $L^2((-\ell,\ell))$ and $H^1((-\ell,\ell))$ via \begin{equation}\label{weak_motivation} (\varphi,\psi)_{1,\Sigma} = (\mathcal{K} \varphi,\psi)_{0,\Sigma} + [\mathcal{B}\varphi,\psi]_\ell \end{equation} for $\varphi,\psi \in C^2([-\ell,\ell])$. We now aim to study the properties of $\mathcal{K}$ and $\mathcal{B}$. We begin with a version of the Riesz representation. \begin{thm}\label{riesz_iso} The map $\mathcal{J} : H^{1}((-\ell,\ell)) \to [H^1((-\ell,\ell))]^\ast$ defined via $\br{\mathcal{J} \varphi,\psi} = (\varphi,\psi)_{1,\Sigma}$ is an isomorphism. \end{thm} \begin{proof} This is the Riesz representation theorem. \end{proof} Next we construct a functional related to the form $[\cdot,\cdot]_\ell$. \begin{lem}\label{H1_trace} Suppose that $h_\pm \in \mathbb{R}$ and that we view $h: \{-\ell,\ell\} \to \mathbb{R}$ via $h(\pm \ell) = h_{\pm}$. Then the map $H^1((-\ell,\ell)) \ni \psi \mapsto [h,\psi]_\ell$ is bounded and linear. \end{lem} \begin{proof} This follows immediately from the standard trace estimate $ \max\{\abs{\psi(\ell)},\abs{\psi(-\ell)} \} \lesssim \norm{\psi}_{1,\Sigma}.$ \end{proof} We can now consider weak solutions to the problem \begin{equation}\label{cap_bndry_eqn} \begin{cases} \mathcal{K} \varphi = f \\ \mathcal{B} \varphi = h \end{cases} \end{equation} when $f \in [H^1((-\ell,\ell))]^\ast$ and $h: \{-\ell,\ell\} \to \mathbb{R}$ is determined by $h_\pm \in \mathbb{R}$ via $h(\pm \ell) = h_{\pm}$. Theorem \ref{riesz_iso} allows us to define the weak solution to \eqref{cap_bndry_eqn} as the unique $\varphi \in H^1((-\ell,\ell))$ determined by \begin{equation} (\varphi,\psi)_{1,\Sigma} = \br{f,\psi} + [h,\psi]_\ell \text{ for all } \psi \in H^1((-\ell,\ell)). \end{equation} Note that according to \eqref{weak_motivation} any classical (or even strong, i.e. $H^2$) solution is a weak solution in the above sense. Moreover, Theorem \ref{riesz_iso} and Lemma \ref{H1_trace} easily imply that \begin{equation} \norm{\varphi}_{1,\Sigma} \lesssim \norm{f}_{(H^1)^\ast} + [h]_\ell. \end{equation} We next show that if $f=0$ then the weak solution is smooth up to the boundary. \begin{thm}\label{weak_soln_bnrdy_smooth} Let $h_\pm \in \mathbb{R}$. Then the following hold. \begin{enumerate} \item There exists a unique $\varphi \in H^1((-\ell,\ell))$ such that \begin{equation}\label{weak_soln_bnrdy_smooth_01} (\varphi,\psi)_{1,\Sigma} = [h,\psi]_\ell \text{ for all } \psi\in H^1((-\ell,\ell)).
\end{equation} \item We have that $\varphi \in H^m((-\ell,\ell))$ for each $m \in \mathbb{N}$, and \begin{equation} \norm{\varphi}_{H^m} \lesssim [h]_\ell. \end{equation} \item We have that $\varphi \in C^\infty([-\ell,\ell])$, and $\varphi$ is a classical solution to \begin{equation}\label{weak_soln_bnrdy_smooth_02} \begin{cases} \mathcal{K} \varphi = 0 &\text{in }(-\ell,\ell) \\ \mathcal{B}_\pm \varphi = h_\pm. \end{cases} \end{equation} \end{enumerate} \end{thm} \begin{proof} The first item follows from Lemma \ref{H1_trace} and Theorem \ref{riesz_iso}. Now consider the function $z \in C^\infty([-\ell,\ell])$ given by \begin{equation} z(x) = \frac{1 }{(1+\abs{\zeta_0'(x)}^2)^{3/2}} \end{equation} and note that there exists a constant $z_0 >0$ such that \begin{equation}\label{weak_soln_bnrdy_smooth_1} z(x) \ge z_0 \text{ for all }x \in [-\ell,\ell]. \end{equation} The function $z$ allows us to conveniently rewrite \begin{equation} (\varphi,\psi)_{1,\Sigma} = \int_{-\ell}^\ell z \varphi' \psi' + g \varphi \psi \text{ for all } \psi \in H^1((-\ell,\ell)). \end{equation} Let $\psi \in C^\infty_c((-\ell,\ell))$ and note that the bound \eqref{weak_soln_bnrdy_smooth_1} implies that $\chi = \psi / z \in C^\infty_c((-\ell,\ell)) \subset H^1((-\ell,\ell))$. Plugging this $\chi$ into \eqref{weak_soln_bnrdy_smooth_01} shows that $(\varphi,\chi)_{1,\Sigma} = [h,\chi]_\ell = 0.$ Thus \begin{equation} 0 = \int_{-\ell}^\ell z \varphi' \left(\frac{\psi}{z}\right)' + g \varphi \left(\frac{\psi}{z}\right) = \int_{-\ell}^\ell \varphi' \psi' - \frac{z'}{z} \varphi' \psi + g \varphi \left(\frac{\psi}{z}\right), \end{equation} and upon rearranging we find that \begin{equation} \int_{-\ell}^\ell \varphi' \psi' = \int_{-\ell}^{\ell} -\left(-\frac{z'}{z} \varphi' + g\frac{\varphi}{z} \right)\psi \text{ for all } \psi \in C_c^\infty((-\ell,\ell)). \end{equation} The definition of weak derivatives then tells us that $\varphi'$ is weakly differentiable, and \begin{equation}\label{weak_soln_bnrdy_smooth_2} \varphi'' = -\frac{z'}{z} \varphi' + g\frac{\varphi}{z} \in L^2((-\ell,\ell)), \end{equation} where the latter inclusion follows from the fact that $\varphi \in H^1((-\ell,\ell))$, $z \in C^\infty([-\ell,\ell])$, and the estimate \eqref{weak_soln_bnrdy_smooth_1}. Thus $\varphi \in H^2((-\ell,\ell))$, and we may estimate \begin{equation} \norm{\varphi}_{H^2} \lesssim \norm{\varphi}_{H^1} \lesssim [h]_\ell. \end{equation} Since $z'/z, g/z \in C^\infty([-\ell,\ell])$ we deduce from \eqref{weak_soln_bnrdy_smooth_2} and a simple induction argument that, in fact, $\varphi \in H^m((-\ell,\ell))$ for all $m \in \mathbb{N}$ and \begin{equation} \norm{\varphi}_{H^m} \le C_m [h]_\ell \end{equation} for a constant $C_m$ depending on $m$. Hence $\varphi \in C^\infty([-\ell,\ell])$. Returning now to \eqref{weak_soln_bnrdy_smooth_01}, we find, upon using $\psi \in C^\infty([-\ell,\ell])$ and integrating by parts, that \begin{equation} [h,\psi]_\ell = (\varphi,\psi)_{1,\Sigma} = \int_{-\ell}^\ell \mathcal{K}\varphi \psi + [\mathcal{B} \varphi,\psi]_\ell \end{equation} for all $\psi \in C^\infty([-\ell,\ell])$.
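Indeed, taking first $\psi \in C^\infty_c((-\ell,\ell))$ (so that both boundary pairings vanish) and then general $\psi \in C^\infty([-\ell,\ell])$ shows that
\begin{equation}
\int_{-\ell}^\ell \mathcal{K}\varphi \, \psi = 0 \text{ for all } \psi \in C^\infty_c((-\ell,\ell)) \text{ and } [h - \mathcal{B}\varphi,\psi]_\ell = 0 \text{ for all } \psi \in C^\infty([-\ell,\ell]),
\end{equation}
so $\mathcal{K}\varphi = 0$ in $(-\ell,\ell)$, while choosing $\psi$ with prescribed values at $\pm \ell$ shows that $\mathcal{B}_\pm \varphi = h_\pm$.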
This is precisely the identity \eqref{weak_soln_bnrdy_smooth_02}. \end{proof} Next we consider elliptic regularity for \eqref{cap_bndry_eqn} with $f \neq 0$. \begin{thm}\label{cap_regularity} Let $h_\pm \in \mathbb{R}$ and $f \in H^m((-\ell,\ell))$ for some $m \in \mathbb{N}$. Suppose that $\varphi \in H^1((-\ell,\ell))$ is the unique weak solution to \eqref{cap_bndry_eqn}. Then $\varphi \in H^{m+2}((-\ell,\ell))$, and \begin{equation} \norm{\varphi}_{H^{m+2}} \lesssim \norm{f}_{H^m} + [h]_\ell. \end{equation} Moreover, $\varphi$ is a strong solution to \eqref{cap_bndry_eqn}. \end{thm} \begin{proof} First note that $H^m((-\ell,\ell)) \hookrightarrow H^0((-\ell,\ell)) \hookrightarrow (H^1((-\ell,\ell)) )^\ast,$ where here in the last embedding we inject $H^0$ into $(H^1)^\ast$ in the standard way via \begin{equation} \br{\varphi,\psi}_\ast = \int_{-\ell}^\ell \varphi \psi = (\varphi,\psi)_{0,\Sigma} \text{ for }\varphi \in H^0((-\ell,\ell)) \text{ and }\psi \in H^1((-\ell,\ell)). \end{equation} Consequently, we can use Theorem \ref{riesz_iso} to solve for a unique $\varphi_1 \in H^1((-\ell,\ell))$ satisfying \begin{equation}\label{cap_regularity_1} (\varphi_1,\psi)_{1,\Sigma} = (f,\psi)_{0,\Sigma} \end{equation} and obeying the estimate \begin{equation} \norm{\varphi_1}_{H^1} \lesssim \norm{f}_{(H^1)^\ast} \lesssim \norm{f}_{H^0}. \end{equation} On the other hand, Theorem \ref{weak_soln_bnrdy_smooth} provides us with a unique $\varphi_2 \in C^\infty([-\ell,\ell])$ satisfying \begin{equation} (\varphi_2,\psi)_{1,\Sigma} = [h,\psi]_\ell \text{ for all }\psi \in H^1((-\ell,\ell)). \end{equation} The theorem tells us that \begin{equation} \norm{\varphi_2}_{H^k} \le C_k [h]_\ell \text{ for all }k \in \mathbb{N}. \end{equation} By the uniqueness of weak solutions, we have that $\varphi = \varphi_1 + \varphi_2$. To conclude we must only show that $\varphi_1 \in H^{m+2}((-\ell,\ell))$ and \begin{equation}\label{cap_regularity_3} \norm{\varphi_1}_{H^{m+2}} \lesssim \norm{f}_{H^m}. \end{equation} Let $z \in C^\infty([-\ell,\ell])$ be as in the proof of Theorem \ref{weak_soln_bnrdy_smooth}. For $\psi \in C^\infty_c((-\ell,\ell))$ we have that $\chi = \psi/z \in C^\infty_c((-\ell,\ell))$, and so we can use $\chi \in H^1((-\ell,\ell))$ as a test function in \eqref{cap_regularity_1}; after rearranging, we find that \begin{equation} \int_{-\ell}^\ell \varphi_1' \psi' = - \int_{-\ell}^\ell \left(-\frac{z'}{z} \varphi_1' + g\frac{\varphi_1}{z} - \frac{f}{z} \right)\psi \text{ for all } \psi \in C^\infty_c((-\ell,\ell)). \end{equation} From the definition of weak derivatives we then find that $\varphi_1'$ is weakly differentiable, and \begin{equation}\label{cap_regularity_2} \varphi_1'' = -\frac{z'}{z} \varphi_1' + g\frac{\varphi_1}{z} - \frac{f}{z}\in L^2((-\ell,\ell)), \end{equation} which implies that $\varphi_1 \in H^2((-\ell,\ell))$ and \begin{equation} \norm{\varphi_1}_{H^2} \lesssim \norm{\varphi_1}_{H^1} + \norm{f}_{L^2} \lesssim \norm{f}_{H^0}. \end{equation} This proves \eqref{cap_regularity_3} when $m =0$. When $m \ge 1$ we use a finite iteration in \eqref{cap_regularity_2} to bootstrap from $\varphi_1 \in H^2((-\ell,\ell))$ to $\varphi_1 \in H^{m+2}((-\ell,\ell))$.
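For instance, the first step of this iteration, recorded here only for clarity, reads as follows: if $f \in H^1((-\ell,\ell))$, then since $z'/z, g/z, 1/z \in C^\infty([-\ell,\ell])$ and $\varphi_1 \in H^2((-\ell,\ell))$, differentiating \eqref{cap_regularity_2} shows that $\varphi_1''' \in L^2((-\ell,\ell))$ with
\begin{equation}
\norm{\varphi_1'''}_{L^2} \lesssim \norm{\varphi_1}_{H^2} + \norm{f}_{H^1} \lesssim \norm{f}_{H^1},
\end{equation}
so that $\varphi_1 \in H^3((-\ell,\ell))$; repeating this argument $m$ times yields $\varphi_1 \in H^{m+2}((-\ell,\ell))$.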
Along the way we readily deduce that \eqref{cap_regularity_3} holds. Thus the desired inclusion and estimates for $\varphi_1$ hold for all $m \in\mathbb{N}$. \end{proof} \subsection{Eigenfunctions of the gravity-capillary operator} The map \begin{equation} H^0((-\ell,\ell)) \ni f \mapsto \varphi_f \in H^1((-\ell,\ell)) \csubset H^0((-\ell,\ell)), \end{equation} where $\varphi_f$ is uniquely determined by \begin{equation} (\varphi_f,\psi)_{1,\Sigma} = (f,\psi)_{0,\Sigma} \text{ for all } \psi \in H^1((-\ell,\ell)), \end{equation} is easily seen to be compact and symmetric, so the usual spectral theory of compact symmetric operators (see, for instance, Chapter VI of \cite{reed_simon}) allows us to produce sequences $\{w_k\}_{k=0}^\infty \subset C^\infty([-\ell,\ell])$ and $\{\lambda_k\}_{k=0}^\infty \subset (0,\infty)$ such that the following hold. \begin{enumerate} \item $\{w_k\}_{k=0}^\infty$ forms an orthonormal basis of $L^2((-\ell,\ell))$. \item $\{w_k/\sqrt{\lambda_k}\}_{k=0}^\infty$ forms an orthonormal basis of $H^1((-\ell,\ell))$ relative to the inner-product $(\cdot,\cdot)_{1,\Sigma}$. \item $\lambda_0 = g$ and $w_0 = 1/\sqrt{2\ell}$. \item $\{\lambda_k\}_{k=0}^\infty$ is non-decreasing, and $\lambda_k \to \infty$ as $k \to \infty$. \item For each $k \in \mathbb{N}$ we have that \begin{equation} \begin{cases} \mathcal{K} w_k = \lambda_k w_k &\text{in } (-\ell,\ell) \\ \mathcal{B}_\pm w_k =0. \end{cases} \end{equation} In other words, $w_k$ is the $k^{\mathrm{th}}$ eigenfunction of the operator $\mathcal{K}$ with associated eigenvalue $\lambda_k \ge g$. \end{enumerate} We next introduce the notation for ``Fourier'' coefficients relative to this basis. \begin{dfn} For a function $f \in H^0((-\ell,\ell))$ we define the map $\hat{f} : \mathbb{N} \to \mathbb{R}$ via $\hat{f}(k) = (f,w_k)_{0,\Sigma}$. The values of $\hat{f}$ are called the Fourier coefficients of $f$. \end{dfn} We have the following version of Parseval's theorem for this basis. \begin{prop}\label{eigen_bases} The following hold. \begin{enumerate} \item For each $f,g \in H^0((-\ell,\ell))$ we have that \begin{equation} (f,g)_{0,\Sigma} = \sum_{k=0}^\infty \hat{f}(k) \hat{g}(k) \text{ and } \ns{f}_{0,\Sigma} = \sum_{k=0}^\infty \abs{\hat{f}(k)}^2. \end{equation} \item For each $f,g \in H^1((-\ell,\ell))$ we have that \begin{equation} (f,g)_{1,\Sigma} = \sum_{k=0}^\infty \lambda_k \hat{f}(k) \hat{g}(k) \text{ and } \ns{f}_{1,\Sigma} = \sum_{k=0}^\infty \lambda_k \abs{\hat{f}(k)}^2. \end{equation} \end{enumerate} \end{prop} \begin{proof} The first item follows from the fact that $\{w_k\}_{k=0}^\infty$ is an orthonormal basis of $H^0((-\ell,\ell))$. The second follows from the fact that $\{w_k/\sqrt{\lambda_k}\}_{k=0}^\infty$ is an orthonormal basis of $H^1((-\ell,\ell))$ and the fact that $w_k$ satisfies $(w_k,f)_{1,\Sigma} = \lambda_k (w_k,f)_{0,\Sigma} = \lambda_k \hat{f}(k)$ for $f \in H^1((-\ell,\ell))$. \end{proof} \subsection{Sobolev spaces for the gravity-capillary operator} In what follows we will often make reference to the vector space \begin{equation} W = \spn\{w_k\}_{k=0}^\infty = \{ \sum_{k=0}^M a_k w_k \;\vert\; M \in \mathbb{N} \text{ and } a_0,\dotsc,a_M \in \mathbb{R}\}, \end{equation} the set of finite linear combinations of basis elements. Clearly, $W \subset C^\infty([-\ell,\ell])$.
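To fix ideas, it may help to keep in mind the following model case, which we include purely for illustration (the equilibrium $\zeta_0$ from Theorem \ref{zeta0_wp} is not assumed to be flat anywhere in what follows): if $\zeta_0 \equiv 0$, then $\mathcal{K}\varphi = g\varphi - \sigma \varphi''$ with the Neumann conditions $\varphi'(\pm \ell) = 0$, and the eigenpairs are explicit, namely
\begin{equation}
w_0 = \frac{1}{\sqrt{2\ell}}, \qquad w_k(x) = \frac{1}{\sqrt{\ell}} \cos\left(\frac{k\pi (x+\ell)}{2\ell}\right) \text{ and } \lambda_k = g + \sigma \left(\frac{k\pi}{2\ell}\right)^2 \text{ for } k \ge 1.
\end{equation}
In particular, $\lambda_k \asymp 1 + k^2$ in this model case, so the spaces constructed below measure regularity of order $s$ through the decay of the coefficients $\hat{u}(k)$, in analogy with the classical Sobolev scale.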
We now define a special scale of Sobolev spaces built from the eigenfunctions of $\mathcal{K}$. \begin{dfn} Let $s \in \mathbb{R}$ and recall that $W = \spn\{w_k\}_{k=0}^\infty$. \begin{enumerate} \item For $u,v \in W \subset L^2((-\ell,\ell))$ we define \begin{equation} \ip{u,v}_{\mathcal{H}^s_\mathcal{K}} = \sum_{k=0}^\infty \lambda_k^s (u,w_k)_{0,\Sigma} (v,w_k)_{0,\Sigma} = \sum_{k=0}^\infty \lambda_k^s \hat{u}(k) \hat{v}(k), \end{equation} which is clearly an inner-product with associated norm $\ns{u}_{\mathcal{H}^s_\mathcal{K}} = \ip{u,u}_{\mathcal{H}^s_\mathcal{K}}.$ \item We define the Hilbert space \begin{equation} \mathcal{H}^s_\mathcal{K}((-\ell,\ell)) = \text{closure}_{\mathcal{H}^s_\mathcal{K}}(W). \end{equation} \item We define \begin{equation} \ell^2_s(\mathbb{N}) = \{ f: \mathbb{N} \to \mathbb{R} \;\vert\; \sum_{k=0}^\infty \lambda_k^s \abs{f(k)}^2 < \infty\}, \end{equation} which is clearly a Hilbert space when endowed with the obvious inner-product. \end{enumerate} \end{dfn} We now characterize these spaces. \begin{thm}\label{l2_char} The following are equivalent for $s \in \mathbb{R}$. \begin{enumerate} \item $u \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$. \item There exists $\hat{u} \in \ell^2_s(\mathbb{N})$ such that $u = \sum_{k=0}^\infty \hat{u}(k) w_k$, where the series converges with respect to the norm $\norm{\cdot}_{\mathcal{H}^s_\mathcal{K}}.$ \end{enumerate} In either case we have that $\norm{u}_{\mathcal{H}^s_\mathcal{K}} = \norm{\hat{u}}_{\ell^2_s}.$ \end{thm} \begin{proof} Suppose that $u \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$. Then there exist $\{u_m\}_{m=0}^\infty \subseteq W$ such that $u_m \to u$ in $\mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ as $m \to \infty$. For each $m$ we may then write \begin{equation} u_m = \sum_{k=0}^\infty a_{m}(k) w_k, \end{equation} where $\{a_{m}(k)\}_{k=0}^\infty \subset \mathbb{R}$ vanishes for all but finitely many $k$. Then \begin{equation} \ns{u_m - u_j}_{\mathcal{H}^s_\mathcal{K}} = \sum_{k=0}^\infty \lambda_k^s \abs{a_{m}(k) - a_{j}(k)}^2, \end{equation} and hence \begin{equation} \abs{a_{m}(k) - a_{j}(k)}^2 \le \lambda_k^{-s} \ns{u_m - u_j}_{\mathcal{H}^s_\mathcal{K}} \text{ for all } k \in \mathbb{N}. \end{equation} This implies that for each $k \in \mathbb{N}$ we have that $\{a_{m}(k)\}_{m=0}^\infty$ is a Cauchy sequence in $\mathbb{R}$, and hence we may define $a: \mathbb{N} \to \mathbb{R}$ via $a(k) = \lim_{m \to \infty} a_m(k)$. Now, for each $K \in \mathbb{N}$ we may estimate \begin{equation} \sum_{k=0}^K \lambda_k^s \abs{a(k)}^2 = \lim_{m \to \infty} \sum_{k=0}^K \lambda_k^s \abs{a_{m}(k)}^2 \le \limsup_{m\to \infty} \sum_{k=0}^\infty \lambda_k^s \abs{a_{m}(k)}^2 = \limsup_{m\to \infty} \ns{u_m}_{\mathcal{H}^s_\mathcal{K}} = \ns{u}_{\mathcal{H}^s_\mathcal{K}}. \end{equation} Upon sending $K \to \infty$ we then deduce that $a \in \ell^2_s(\mathbb{N})$. For $m \in \mathbb{N}$ we then set $v_m = \sum_{k=0}^{m} a(k) w_k \in W$. Then for $m > j \ge 0$ we have that \begin{equation} \ns{v_m - v_j}_{\mathcal{H}^s_\mathcal{K}} = \sum_{k=j+1}^m \lambda_k^s \abs{a(k)}^2, \end{equation} which then implies that $\{v_m\}_{m=0}^\infty$ is a Cauchy sequence in $\mathcal{H}^s_\mathcal{K}((-\ell,\ell))$, and hence convergent to \begin{equation} v = \sum_{k=0}^\infty a(k) w_k \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell)).
\end{equation} Moreover, \begin{equation} \ns{v}_{\mathcal{H}^s_\mathcal{K}} = \lim_{m \to \infty} \ns{v_m}_{\mathcal{H}^s_\mathcal{K}} = \lim_{m \to \infty} \sum_{k=0}^m \lambda_k^s \abs{a(k)}^2 = \sum_{k=0}^\infty \lambda_k^s \abs{a(k)}^2 = \ns{a}_{\ell^2_s}. \end{equation} Let $\varepsilon >0$ and choose $M \in \mathbb{N}$ such that $j,m \ge M$ imply that $\norm{u_j - u_m}_{\mathcal{H}^s_\mathcal{K}} < \varepsilon$. Then for each $K \in \mathbb{N}$ and $m \ge M$ we have that \begin{multline} \sum_{k=0}^K \lambda_k^s \abs{a(k) - a_{m}(k)}^2 = \lim_{j \to \infty} \sum_{k=0}^K \lambda_k^s \abs{a_{j}(k) - a_{m}(k)}^2 \le \limsup_{j \to \infty} \sum_{k=0}^\infty \lambda_k^s \abs{a_{j}(k)-a_{m}(k)}^2 \\ = \limsup_{j\to \infty} \ns{u_j - u_m}_{\mathcal{H}^s_\mathcal{K}} \le \varepsilon^2. \end{multline} Sending $K \to \infty$, we then find that $m \ge M$ implies that \begin{equation} \sum_{k=0}^\infty \lambda_k^s \abs{a(k) - a_{m}(k)}^2 \le \varepsilon^2. \end{equation} For any fixed $m$ we have that \begin{equation} \ns{u_m - v}_{\mathcal{H}^s_\mathcal{K}} = \lim_{K \to \infty} \ns{u_m - v_K}_{\mathcal{H}^s_\mathcal{K}} = \sum_{k=0}^\infty \lambda_k^s \abs{a(k) - a_{m}(k)}^2. \end{equation} Then for $m \ge M$ we find that \begin{equation} \norm{u_m-v}_{\mathcal{H}^s_\mathcal{K}} \le \varepsilon, \end{equation} and consequently, $u_m \to v$ as $m \to \infty$. Thus $u =v$. This completes the proof that $(1) \Rightarrow (2)$. We now turn to the proof of the converse. Suppose that $u = \sum_{k=0}^\infty \hat{u}(k) w_k$ for $\hat{u} \in \ell^2_s(\mathbb{N})$. For $m \in \mathbb{N}$ we define $u_m = \sum_{k=0}^m \hat{u}(k) w_k \in W$. Then $u_m \to u$ as $m \to \infty$ by assumption, and so $u \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$. Moreover, \begin{equation} \ns{u_m}_{\mathcal{H}^s_\mathcal{K}} = \sum_{k=0}^m \lambda_k^s \abs{\hat{u}(k)}^2 \end{equation} and hence \begin{equation} \ns{u}_{\mathcal{H}^s_\mathcal{K}} = \lim_{m\to \infty} \ns{u_m}_{\mathcal{H}^s_\mathcal{K}} = \sum_{k=0}^\infty \lambda_k^s \abs{\hat{u}(k)}^2 = \ns{\hat{u}}_{\ell^2_s}. \end{equation} \end{proof} This theorem suggests some notation. \begin{dfn} To each $u \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ we associate a unique element $\hat{u} \in \ell^2_s(\mathbb{N})$ such that $u = \sum_{k=0}^\infty \hat{u}(k) w_k$ and $\norm{u}_{\mathcal{H}^s_\mathcal{K}} = \norm{\hat{u}}_{\ell^2_s}$. \end{dfn} Now we characterize the duals of the spaces we've built. \begin{thm}\label{dual_char} Let $s\in \mathbb{R}$. Then the map $J : \mathcal{H}^{-s}_\mathcal{K}((-\ell,\ell)) \to (\mathcal{H}^s_\mathcal{K}((-\ell,\ell)))^\ast$ defined by \begin{equation} \br{Ju,v} = \sum_{k=0}^\infty \hat{u}(k) \hat{v}(k) \end{equation} is well-defined and is an isometric isomorphism. Consequently, we have a canonical identification \begin{equation} (\mathcal{H}^s_\mathcal{K}((-\ell,\ell)))^\ast = \mathcal{H}^{-s}_\mathcal{K}((-\ell,\ell)). \end{equation} \end{thm} \begin{proof} The linearity of $J$ is trivial. The boundedness follows from the estimate \begin{equation} \abs{\br{Ju,v}} = \abs{\sum_{k=0}^\infty \lambda_k^{-s/2} \hat{u}(k) \lambda_k^{s/2} \hat{v}(k) } \le \norm{\hat{u}}_{\ell^2_{-s}} \norm{\hat{v}}_{\ell^2_s} = \norm{u}_{\mathcal{H}^{-s}_\mathcal{K}} \norm{v}_{\mathcal{H}^s_\mathcal{K}}. \end{equation} Suppose that $Ju =0$ for some $u \in \mathcal{H}^{-s}_\mathcal{K}$.
Then \begin{equation} 0 = \sum_{k=0}^\infty \hat{u}(k) \hat{v}(k) \text{ for all } v \in \mathcal{H}^s_\mathcal{K}. \end{equation} We may choose $v = w_j \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ for each $j \in \mathbb{N}$, and then $\hat{v}(k) = \delta_{kj}$, which means that \begin{equation} 0 = \hat{u}(j) \text{ for all } j \in \mathbb{N}. \end{equation} Then $\norm{u}_{\mathcal{H}^{-s}_\mathcal{K}} =0$, and so $u =0$, from which we deduce that $J$ is injective. Now suppose that $F \in (\mathcal{H}^s_\mathcal{K}((-\ell,\ell)))^\ast$. Then we may define $\hat{F} \in (\ell^2_s(\mathbb{N}))^\ast$ via \begin{equation} \br{\hat{F},\hat{v}} = \br{F,v}. \end{equation} Since we have the canonical identification $(\ell^2_s(\mathbb{N}))^\ast = \ell^2_{-s}(\mathbb{N})$ we then deduce that there exists $\hat{u} \in \ell^2_{-s}(\mathbb{N})$ such that \begin{equation} \br{\hat{F},\hat{v}} = \sum_{k=0}^\infty \hat{v}(k) \hat{u}(k). \end{equation} Letting $u = \sum_{k=0}^\infty \hat{u}(k) w_k \in \mathcal{H}^{-s}_\mathcal{K}((-\ell,\ell))$, we find that \begin{equation} \br{F,v} = \br{\hat{F},\hat{v}} = \br{J u,v} \text{ for all } v \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell)), \end{equation} and hence $F = Ju$. Thus $J$ is surjective. It remains only to show that $J$ is an isometry. For this we compute \begin{equation} \norm{Ju}_{(\mathcal{H}^s_\mathcal{K})^\ast} = \sup_{\norm{v}_{\mathcal{H}^s_\mathcal{K}} \le 1} \br{Ju,v} = \sup_{\norm{\hat{v}}_{\ell^2_s} \le 1} \sum_{k=0}^\infty \hat{u}(k) \hat{v}(k) = \norm{\hat{u}}_{(\ell^2_s)^\ast} = \norm{\hat{u}}_{\ell^2_{-s}} = \norm{u}_{\mathcal{H}^{-s}_\mathcal{K}}. \end{equation} \end{proof} With this result in hand we can more explicitly describe the map $u \mapsto \hat{u}$. \begin{thm} The following hold for $s \in \mathbb{R}$. \begin{enumerate} \item If $s \ge 0$ and $u \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$, then $\hat{u}(k) = \ip{u,w_k}_{L^2}$ for all $k \ge 0$. \item If $s < 0$ and $u \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$, then $\hat{u}(k) = \br{u,w_k}$ for all $k \ge 0$, which is well-defined since $w_k \in \mathcal{H}^{-s}_\mathcal{K}((-\ell,\ell))$ and $\mathcal{H}^s_\mathcal{K}((-\ell,\ell)) = (\mathcal{H}^{-s}_\mathcal{K}((-\ell,\ell)))^\ast$. \end{enumerate} \end{thm} \begin{proof} If $s \ge 0$ and $u \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$, then $u \in L^2((-\ell,\ell))$. Since $u = \sum_{k=0}^\infty \hat{u}(k) w_k$ with the series converging in $\mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ and hence in $L^2$ we may then compute \begin{equation} \ip{u,w_k}_{L^2} = \ip{\sum_{j=0}^\infty \hat{u}(j) w_j ,w_k}_{L^2} = \hat{u}(k). \end{equation} This proves the first item. Now assume that $s <0$ and $u \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$. Then Theorem \ref{dual_char} tells us that \begin{equation} \br{u,w_k} = \sum_{j=0}^\infty \hat{u}(j) \delta_{jk} = \hat{u}(k), \end{equation} which proves the second item. \end{proof}
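For instance, for each fixed $j \in \mathbb{N}$ the eigenfunction $w_j$ belongs to $\mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ for every $s \in \mathbb{R}$ and satisfies $\widehat{w_j}(k) = \delta_{jk}$: when $s \ge 0$ this is the orthonormality identity $(w_j,w_k)_{0,\Sigma} = \delta_{jk}$, while when $s < 0$ it is the value of the duality pairing $\br{w_j,w_k}$. Consequently, \begin{equation} \norm{w_j}_{\mathcal{H}^s_\mathcal{K}} = \lambda_j^{s/2} \text{ for every } s \in \mathbb{R}, \end{equation} which shows explicitly how the scale $\mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ weights each eigenfunction according to the size of its eigenvalue.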
We now record the nesting properties of these Sobolev spaces. \begin{thm}\label{inclusion} For $s,t \in \mathbb{R}$ with $s \le t$ we have that $\mathcal{H}^t_\mathcal{K}((-\ell,\ell)) \subseteq \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ and \begin{equation} \norm{u}_{\mathcal{H}^s_\mathcal{K}} \le \frac{1}{\lambda_0^{(t-s)/2}} \norm{u}_{\mathcal{H}^t_\mathcal{K}} \text{ for all } u\in \mathcal{H}^t_\mathcal{K}((-\ell,\ell)). \end{equation} \end{thm} \begin{proof} This follows immediately from the definition of the norm on these spaces, together with the lower bound $\lambda_k \ge \lambda_0$. \end{proof} Next we record some finer information about these spaces. In fact, this result is the key link to the usual theory of Sobolev spaces. \begin{thm}\label{sobolev_char} The following hold. \begin{enumerate} \item We have that $\mathcal{H}^0_\mathcal{K}((-\ell,\ell)) = L^2((-\ell,\ell))$ and $\norm{u}_{0,\Sigma} = \norm{u}_{\mathcal{H}^0_\mathcal{K}}$ for all $u \in L^2((-\ell,\ell))$. \item We have that $\mathcal{H}^1_\mathcal{K}((-\ell,\ell)) = H^1((-\ell,\ell))$ and $\norm{u}_{1,\Sigma} = \norm{u}_{\mathcal{H}^1_\mathcal{K}}$ for all $u \in H^1((-\ell,\ell))$. \item Let $2\le m \in \mathbb{N}$. Then \begin{equation}\label{sobolev_char_0} \mathcal{H}^m_\mathcal{K}((-\ell,\ell)) = \{ u \in H^m((-\ell,\ell)) \;\vert\; (\mathcal{K}^{(r)} u)'(\pm \ell)=0 \text{ for }0\le r \le m/2-1 \}, \end{equation} where $\mathcal{K}^{(0)} = I$ and $\mathcal{K}^{(r+1)} = \mathcal{K} \mathcal{K}^{(r)}$. Moreover, $\norm{\cdot}_{\mathcal{H}^m_\mathcal{K}}$ and $\norm{\cdot}_{H^m}$ are equivalent on these spaces. \end{enumerate} \end{thm} \begin{proof} The first two assertions follow easily from the properties of the eigenfunctions $\{w_k\}_{k=0}^\infty$, so we'll only prove the third item. Throughout the proof we'll let $X^m$ denote the space on the right side of \eqref{sobolev_char_0}. We proceed by induction, starting with the base cases $m=2$ and $m=3$. First consider the case $m=2$. Suppose that $u \in \mathcal{H}^{2}_\mathcal{K}((-\ell,\ell))$. We may then define the function $U = \sum_{k=0}^\infty \lambda_k \hat{u}(k) w_k$, which belongs to $\mathcal{H}^0_\mathcal{K}((-\ell,\ell)) =L^2((-\ell,\ell))$ since \begin{equation} \ns{U}_{L^2} = \sum_{k=0}^\infty \abs{\lambda_k \hat{u}(k)}^2 = \ns{u}_{\mathcal{H}^2_\mathcal{K}} < \infty. \end{equation} Since $u \in \mathcal{H}^1_\mathcal{K}((-\ell,\ell))$ we also know that for $v \in H^1((-\ell,\ell))$, \begin{equation} (u,v)_{1,\Sigma} = \sum_{k=0}^\infty \lambda_k \hat{u}(k) \hat{v}(k) = \sum_{k=0}^\infty \hat{U}(k) \hat{v}(k)= (U,v)_{0,\Sigma}, \end{equation} and hence $u$ is a weak solution to the problem \begin{equation} \begin{cases} \mathcal{K} u = U &\text{in }(-\ell,\ell) \\ \mathcal{B}_\pm u =0. \end{cases} \end{equation} The elliptic regularity of Theorem \ref{cap_regularity} then tells us that $u \in H^2((-\ell,\ell))$ and $\norm{u}_{H^2} \asymp \norm{U}_{L^2} = \norm{u}_{\mathcal{H}^2_\mathcal{K}}$, from which we deduce that $\mathcal{H}^2_\mathcal{K}((-\ell,\ell)) \subseteq X^2$. Now suppose that $u \in X^2$. Then clearly $\mathcal{K} u \in L^2$, and we may compute \begin{equation} \ns{u}_{H^2} \asymp \ns{\mathcal{K} u}_{H^0} = \sum_{k=0}^\infty \abs{(\mathcal{K} u,w_k)_{0,\Sigma}}^2 = \sum_{k=0}^\infty \abs{( u,\mathcal{K} w_k)_{0,\Sigma}}^2 = \sum_{k=0}^\infty \lambda_k^2 \abs{\hat{u}(k)}^2 = \ns{u}_{\mathcal{H}^2_\mathcal{K}}, \end{equation} from which we deduce that $X^2 \subseteq \mathcal{H}^2_\mathcal{K}((-\ell,\ell))$.
A similar argument works for the case $m=3$; we omit the details for the sake of brevity. This establishes the base cases $m=2$ and $m=3$. Suppose now that the result has been proved for all integers between $2$ and $m$, for some $m \ge 3$. Let $u \in \mathcal{H}^{m+1}_\mathcal{K}((-\ell,\ell))$. Using the same $U$ as above, we find that $\norm{U}_{\mathcal{H}^{m-1}_\mathcal{K}} = \norm{u}_{\mathcal{H}^{m+1}_\mathcal{K}}$, and so the induction hypothesis tells us that $U \in X^{m-1}$ with $\norm{U}_{\mathcal{H}^{m-1}_\mathcal{K}} \asymp \norm{U}_{H^{m-1}}$. We then use elliptic regularity as above to see that $u \in X^{m+1}$ and $\norm{u}_{H^{m+1}} \asymp \norm{U}_{H^{m-1}} \asymp \norm{u}_{\mathcal{H}^{m+1}_\mathcal{K}}$, which in turn shows that $\mathcal{H}^{m+1}_\mathcal{K}((-\ell,\ell)) \subseteq X^{m+1}$. On the other hand, if $u \in X^{m+1}$ then elliptic regularity and the induction hypothesis show that \begin{multline} \ns{u}_{H^{m+1}} \asymp \ns{\mathcal{K} u}_{H^{m-1}} \asymp \sum_{k=0}^\infty \lambda_k^{m-1} \abs{(\mathcal{K} u,w_k)_{0,\Sigma}}^2 = \sum_{k=0}^\infty \lambda_k^{m-1} \abs{( u,\mathcal{K} w_k)_{0,\Sigma}}^2 \\ = \sum_{k=0}^\infty \lambda_k^{m+1} \abs{\hat{u}(k)}^2 = \ns{u}_{\mathcal{H}^{m+1}_\mathcal{K}}. \end{multline} We then deduce that $X^{m+1} \subseteq \mathcal{H}^{m+1}_\mathcal{K}((-\ell,\ell))$. The principle of induction now tells us that $\mathcal{H}^{m}_\mathcal{K}((-\ell,\ell)) = X^m$ for all $m \ge 2$ and that the norms $\norm{\cdot}_{\mathcal{H}^m_\mathcal{K}}$ and $\norm{\cdot}_{H^m}$ are equivalent on these spaces. \end{proof} Theorem \ref{inclusion} shows that we have the nesting $\mathcal{H}^t_\mathcal{K}((-\ell,\ell)) \subseteq \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ for $s < t$. In fact, we can show more. \begin{thm}\label{basic_interp} Suppose that $s,t \in \mathbb{R}$ are such that $s < t$. If $u \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell)) \cap \mathcal{H}^t_\mathcal{K}((-\ell,\ell))$, then $u \in \mathcal{H}^r_\mathcal{K}((-\ell,\ell))$ for all $s \le r \le t$, and \begin{equation} \norm{u}_{\mathcal{H}^r_\mathcal{K}} \le \norm{u}_{\mathcal{H}^s_\mathcal{K}}^\theta \norm{u}_{\mathcal{H}^t_\mathcal{K}}^{1-\theta} \end{equation} for $\theta \in [0,1]$ given by \begin{equation} r = s \theta + t (1-\theta). \end{equation} \end{thm} \begin{proof} The result is trivial if $r =s$ or $r=t$, so we may assume that $s < r < t$. We know that $\hat{u} \in \ell^2_s(\mathbb{N}) \cap \ell^2_t(\mathbb{N})$. For $K \ge 1$ we may use H\"older's inequality to estimate \begin{multline} \sum_{k=0}^K \lambda_k^r \abs{\hat{u}(k)}^2 = \sum_{k=0}^K \lambda_k^{\theta s} \abs{\hat{u}(k)}^{2\theta} \lambda_k^{(1-\theta) t} \abs{\hat{u}(k)}^{2(1-\theta)} \le \left(\sum_{k=0}^K \lambda_k^s \abs{\hat{u}(k)}^{2} \right)^\theta \left(\sum_{k=0}^K \lambda_k^t \abs{\hat{u}(k)}^{2} \right)^{1-\theta} \\ \le \left(\sum_{k=0}^\infty \lambda_k^s \abs{\hat{u}(k)}^{2} \right)^\theta \left(\sum_{k=0}^\infty \lambda_k^t \abs{\hat{u}(k)}^{2} \right)^{1-\theta} = \left( \ns{u}_{\mathcal{H}^s_\mathcal{K}}\right)^\theta \left( \ns{u}_{\mathcal{H}^t_\mathcal{K}}\right)^{1-\theta}. \end{multline} Upon sending $K \to \infty$ we find that $u \in \mathcal{H}^r_\mathcal{K}((-\ell,\ell))$ and \begin{equation} \ns{u}_{\mathcal{H}^r_\mathcal{K}} = \sum_{k=0}^\infty \lambda_k^r \abs{\hat{u}(k)}^2 \le \left( \ns{u}_{\mathcal{H}^s_\mathcal{K}}\right)^\theta \left( \ns{u}_{\mathcal{H}^t_\mathcal{K}}\right)^{1-\theta}.
\end{equation} The result follows by taking square roots. \end{proof} \subsection{Functional calculus } We can use the eigenvalues to define a functional calculus of $\mathcal{K}$. First we need some notation. \begin{dfn}\label{fnal_calc_def} Write $\Sigma_\mathcal{K} = \{\lambda_k \;\vert\; k \ge 0\} \subset [g,\infty)$. For $r \in \mathbb{R}$ define the space \begin{equation} \mathfrak{B}^r(\Sigma_\mathcal{K}) = \{ f: \Sigma_\mathcal{K} \to \mathbb{R} \;\vert\; \norm{f}_{\mathfrak{B}^r} < \infty\} \end{equation} where \begin{equation} \norm{f}_{\mathfrak{B}^r} = \sup_{x \ge \lambda_0} \frac{\abs{f(x)}}{x^r}. \end{equation} This is easily shown to be a Banach space. Similarly, for $r \in \mathbb{R}$ define \begin{equation} \mathfrak{B}^r_0(\Sigma_\mathcal{K}) = \{ f \in \mathfrak{B}^r(\Sigma_\mathcal{K}) \;\vert\; \lim_{x \to \infty} (\abs{f(x)} / x^r) =0 \}, \end{equation} which is again easily shown to be a Banach space. \end{dfn} Now we define a functional calculus of $\mathcal{K}$ on the spaces $\mathcal{H}^s_\mathcal{K}((-\ell,\ell))$. \begin{dfn} Let $s \in \mathbb{R}$ and $r \in \mathbb{R}$. For $f \in \mathfrak{B}^r(\Sigma_\mathcal{K})$ and $u \in \mathcal{H}^{s+2r}_\mathcal{K}((-\ell,\ell))$ define \begin{equation} f(\mathcal{K})u = \sum_{k=0}^\infty f(\lambda_k) \hat{u}(k) w_k. \end{equation} \end{dfn}
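To illustrate this definition, we record two examples that will be useful to keep in mind. First, for $r \in \mathbb{R}$ the function $f(x) = x^{r}$ belongs to $\mathfrak{B}^{r}(\Sigma_\mathcal{K})$ with $\norm{f}_{\mathfrak{B}^r} = 1$, and the associated operator is the power \begin{equation} \mathcal{K}^{r} u = \sum_{k=0}^\infty \lambda_k^{r} \hat{u}(k) w_k, \end{equation} which maps $\mathcal{H}^{s+2r}_\mathcal{K}((-\ell,\ell))$ into $\mathcal{H}^{s}_\mathcal{K}((-\ell,\ell))$; the operators $D^r = \mathcal{K}^{r/2}$ employed in Section \ref{sec_djr} below are of this form. Second, the constant function $f = 1$ belongs to $\mathfrak{B}^{r}_0(\Sigma_\mathcal{K})$ for every $r > 0$, since $\lambda_k \ge g$ and $\lambda_k \to \infty$, and in this case $f(\mathcal{K})$ is simply the inclusion of $\mathcal{H}^{s+2r}_\mathcal{K}((-\ell,\ell))$ into $\mathcal{H}^{s}_\mathcal{K}((-\ell,\ell))$.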
The next result records the key properties of these operators. \begin{thm} Let $s \in \mathbb{R}$ and $r \in \mathbb{R}$. For $f \in \mathfrak{B}^r(\Sigma_\mathcal{K})$ and $u \in \mathcal{H}^{s+2r}_\mathcal{K}((-\ell,\ell))$ let $f(\mathcal{K}) u$ be as defined above. Then the following hold. \begin{enumerate} \item $f(\mathcal{K}) : \mathcal{H}^{s+2r}_\mathcal{K}((-\ell,\ell)) \to \mathcal{H}^{s}_\mathcal{K}((-\ell,\ell))$ is bounded and linear. \item $f(\mathcal{K})$ is self-adjoint in the sense that if $u,v \in \mathcal{H}^{s+2r}_\mathcal{K}((-\ell,\ell))$, then \begin{equation} \ip{f(\mathcal{K}) u,v}_{\mathcal{H}^s_\mathcal{K}} = \ip{ u,f(\mathcal{K}) v}_{\mathcal{H}^s_\mathcal{K}}. \end{equation} \item The map \begin{equation} \mathfrak{B}^r(\Sigma_\mathcal{K}) \ni f \mapsto f(\mathcal{K}) \in \mathcal{L}(\mathcal{H}^{s+2r}_\mathcal{K}((-\ell,\ell)), \mathcal{H}^{s}_\mathcal{K}((-\ell,\ell))) \end{equation} is bounded and linear. \item If $f \in \mathfrak{B}^r_0(\Sigma_\mathcal{K})$, then $f(\mathcal{K}) : \mathcal{H}^{s+2r}_\mathcal{K}((-\ell,\ell)) \to \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ is a compact operator. \end{enumerate} \end{thm} \begin{proof} The first three assertions are elementary, so we'll only prove the fourth. To prove this we will show that $f(\mathcal{K})$ is the limit (in the operator norm topology) of a sequence of finite rank operators (see, for instance, Chapter VI of \cite{reed_simon}). To this end, for each $j \ge 0$ define $F_j :\mathcal{H}^{s+2r}_\mathcal{K}((-\ell,\ell)) \to \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ via \begin{equation} F_j u = \sum_{k=0}^j f(\lambda_k) \hat{u}(k) w_k. \end{equation} It's clear that each $F_j$ is bounded, linear, and of finite rank. Also, for $u \in \mathcal{H}^{s+2r}_\mathcal{K}((-\ell,\ell))$ and $j \ge 0$ we have that \begin{equation} \ns{(F_j - f(\mathcal{K}))u }_{\mathcal{H}^s_\mathcal{K}} = \sum_{k=j+1}^\infty \lambda_k^s \abs{f(\lambda_k)}^2 \abs{\hat{u}(k)}^2 \le \sup_{k \ge j+1} \frac{\abs{f(\lambda_k)}^2}{\lambda_k^{2r}} \ns{u}_{\mathcal{H}^{s+2r}_\mathcal{K}}, \end{equation} and hence \begin{equation} \ns{F_j - f(\mathcal{K}) }_{\mathcal{L}(\mathcal{H}^{s+2r}_\mathcal{K};\mathcal{H}^s_\mathcal{K})} \le \sup_{k \ge j+1} \frac{\abs{f(\lambda_k)}^2}{\lambda_k^{2r}}. \end{equation} From this and the inclusion $f \in \mathfrak{B}^r_0(\Sigma_\mathcal{K})$ we deduce that $F_j \to f(\mathcal{K})$ in $\mathcal{L}(\mathcal{H}^{s+2r}_\mathcal{K};\mathcal{H}^s_\mathcal{K})$, and hence $f(\mathcal{K})$ is compact. \end{proof} One of the most important uses of this result is the following corollary. \begin{cor} If $s,t \in \mathbb{R}$ and $s< t$, then $\mathcal{H}^t_\mathcal{K}((-\ell,\ell)) \csubset \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$. \end{cor} Indeed, this follows by applying the fourth item of the previous theorem with $r = (t-s)/2 > 0$ and the constant function $f = 1 \in \mathfrak{B}^r_0(\Sigma_\mathcal{K})$, for which $f(\mathcal{K})$ is the inclusion map. We have the following variant of elliptic regularity in the spaces $\mathcal{H}^s_\mathcal{K}((-\ell,\ell))$. \begin{thm} Let $s \in [0,\infty)$ and suppose that $f \in \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$. If $u \in \mathcal{H}^1_\mathcal{K}((-\ell,\ell)) $ is the weak solution to $\mathcal{K} u =f$ and $\mathcal{B}_\pm u =0$, i.e. \begin{equation} (u, v)_{1,\Sigma} = (f,v)_{0,\Sigma} \text{ for all } v \in H^1((-\ell,\ell)), \end{equation} then $u \in \mathcal{H}^{s+2}_\mathcal{K}((-\ell,\ell))$. Moreover, $\norm{u}_{\mathcal{H}^{s+2}_\mathcal{K}} = \norm{f}_{\mathcal{H}^s_\mathcal{K}}$. Consequently, $\mathcal{K} : \mathcal{H}^{s+2}_\mathcal{K}((-\ell,\ell)) \to \mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ is an isometric isomorphism. \end{thm} \begin{proof} We have that \begin{equation} \hat{f}(k)= (f,w_k)_{0,\Sigma} = (u,w_k)_{1,\Sigma} = ( w_k, u)_{1,\Sigma} = \lambda_k (w_k,u)_{0,\Sigma} = \lambda_k (u,w_k)_{0,\Sigma} = \lambda_k \hat{u}(k). \end{equation} Thus \begin{equation} \ns{u}_{\mathcal{H}^{s+2}_\mathcal{K}} = \sum_{k=0}^\infty \lambda_k^{s+2} \abs{\hat{u}(k)}^2 = \sum_{k=0}^\infty \lambda_k^{s} \abs{\hat{f}(k)}^2 = \ns{f}_{\mathcal{H}^s_\mathcal{K}}. \end{equation} \end{proof} \subsection{Interpolation theory and its consequences} Here we write $(X,Y)_{\theta,p}$ for $\theta \in [0,1]$ and $1 \le p \le \infty$ for the real interpolation of the spaces $X,Y$ with parameters $\theta,p$. See \cite{bergh_lof} or \cite{triebel}, for instance, for the precise definition. We record a basic result from \cite{bergh_lof}. \begin{thm} Let $s,t \in \mathbb{R}$ with $s \neq t$. For $0 < \theta < 1$ and $r = (1-\theta) s + \theta t$ we have that \begin{equation} (\ell^2_s(\mathbb{N}), \ell^2_t(\mathbb{N}))_{\theta,2} = \ell^2_r(\mathbb{N}). \end{equation} \end{thm} \begin{proof} This follows immediately from Theorem 5.4.1 of \cite{bergh_lof}. \end{proof} By combining this with Theorem \ref{l2_char} we immediately deduce the following. \begin{cor}\label{s_interp} Let $s,t \in \mathbb{R}$ with $s \neq t$. For $0 < \theta < 1$ and $r = (1-\theta) s + \theta t$ we have that \begin{equation} (\mathcal{H}^s_\mathcal{K}((-\ell,\ell)) , \mathcal{H}^t_\mathcal{K}((-\ell,\ell)))_{\theta,2} = \mathcal{H}^r_\mathcal{K}((-\ell,\ell)). \end{equation} \end{cor} Next we present a useful application of the interpolation theory.
\begin{lem}\label{s_embed_lem} If $s \ge 0$, then $\mathcal{H}^{s}_\mathcal{K}((-\ell,\ell)) \hookrightarrow H^{s}((-\ell,\ell))$. \end{lem} \begin{proof} We may view Theorem \ref{sobolev_char} as saying that, for $m \in \mathbb{N}$, the identity map $I$ is such that \begin{equation} I : \mathcal{H}^m_\mathcal{K}((-\ell,\ell)) \to H^m((-\ell,\ell)) \text{ and } I : \mathcal{H}^{m+1}_\mathcal{K}((-\ell,\ell)) \to H^{m+1}((-\ell,\ell)) \end{equation} are bounded linear operators. We can then interpolate and use Corollary \ref{s_interp} and the interpolation properties of standard Sobolev spaces (see, for instance, \cite{bergh_lof,triebel}) to deduce that for $0 < s < 1$, \begin{multline} I : \mathcal{H}^{m+s}_\mathcal{K}((-\ell,\ell)) = (\mathcal{H}^m_\mathcal{K}((-\ell,\ell)) , \mathcal{H}^{m+1}_\mathcal{K}((-\ell,\ell)))_{s,2} \to (H^m((-\ell,\ell)),H^{m+1}((-\ell,\ell)))_{s,2} \\ = H^{m+s}((-\ell,\ell)). \end{multline} \end{proof} In fact, we can do quite a bit better when $0 \le s < 2$. In stating the following result we recall (see \cite{lions_magenes_1}) that \begin{equation}\label{h_half_00} H^{1/2}_{00}((-\ell,\ell)) = (H^0((-\ell,\ell)), H^1_0((-\ell,\ell)))_{1/2,2} \end{equation} and \begin{equation} H^{1/2}_{00}((-\ell,\ell)) \subset H^{1/2}((-\ell,\ell)) = (H^0((-\ell,\ell)), H^1((-\ell,\ell)))_{1/2,2}. \end{equation} \begin{thm}\label{s_embed} For $0 \le t \le 1$ we have that $H^t((-\ell,\ell)) = \mathcal{H}^t_\mathcal{K}((-\ell,\ell))$ with norm equivalence $\norm{f}_{H^t} \asymp \norm{f}_{\mathcal{H}^t_\mathcal{K}}$. Moreover, for $s \in (0,1)$ we have that \begin{equation} \mathcal{H}^{1+s}_\mathcal{K}((-\ell,\ell)) = \begin{cases} H^{1+s}((-\ell,\ell)) & \text{if } 0 < s < 1/2 \\ \{f \in H^{3/2}((-\ell,\ell)) \;\vert\; f' \in H^{1/2}_{00}((-\ell,\ell)) \} &\text{if } s= 1/2 \\ \{f \in H^{1+s}((-\ell,\ell)) \;\vert\; f' \in H^{s}_{00}((-\ell,\ell)) \} &\text{if } 1/2 < s < 1 \end{cases} \end{equation} and we have the norm equivalence \begin{equation} \ns{f}_{\mathcal{H}^{1+s}_\mathcal{K}} \asymp \begin{cases} \ns{f}_{H^{1+s}} &\text{if } 0 < s < 1/2 \\ \ns{f}_{H^{3/2}} + \int_{-\ell}^\ell \frac{\abs{f'(x)}^2}{\ell - \abs{x}} dx & \text{ if } s = 1/2 \\ \ns{f}_{H^{1+s}} + \ns{f'}_{H^s_0} &\text{if } 1/2 < s < 1. \end{cases} \end{equation} \end{thm} \begin{proof} The assertion for $t=0,1$ is proved in Theorem \ref{sobolev_char}, and for $0 < t < 1$ it follows from this theorem, Corollary \ref{s_interp}, and standard Sobolev interpolation: \begin{equation} H^t((-\ell,\ell)) = (H^0((-\ell,\ell)), H^1((-\ell,\ell)))_{t,2} = (\mathcal{H}^0_\mathcal{K}((-\ell,\ell)), \mathcal{H}^1_\mathcal{K}((-\ell,\ell)))_{t,2} = \mathcal{H}^{t}_\mathcal{K}((-\ell,\ell)). \end{equation} We now prove the assertion for $s \in (0,1)$. Define the map $F: L^2((-\ell,\ell)) \times \mathbb{R} \to H^1((-\ell,\ell))$ via \begin{equation} F(g,v)(x) = v + \int_{-\ell}^x g(t)\, dt. \end{equation} If $g \in H^1_0((-\ell,\ell))$ and $v \in \mathbb{R}$, then $F(g,v) \in H^2((-\ell,\ell))$ and $F(g,v)' = g \in H^1_0((-\ell,\ell))$, so that $F(g,v)'(\pm \ell) = 0$. From this and Theorem \ref{sobolev_char} we deduce that \begin{equation} F \in \mathcal{L}(L^2((-\ell,\ell)) \times \mathbb{R} ; \mathcal{H}^1_\mathcal{K}((-\ell,\ell)) ) \cap \mathcal{L}(H^1_0((-\ell,\ell)) \times \mathbb{R} ; \mathcal{H}^2_\mathcal{K}((-\ell,\ell)) ).
\end{equation} Hence, upon interpolating, we find that for $s \in (0,1)$ \begin{equation}\label{s_embed_1} F \in \mathcal{L}(X^s \times \mathbb{R} ; \mathcal{H}^{1+s}_\mathcal{K}((-\ell,\ell)) ), \end{equation} where we have written $X^s = (L^2((-\ell,\ell)), H^1_0((-\ell,\ell)))_{s,2}$ for brevity. Next consider the map $D$ defined by $f \mapsto Df = f'$. Theorem \ref{sobolev_char} tells us that \begin{equation} D \in \mathcal{L}(\mathcal{H}^1_\mathcal{K}((-\ell,\ell)) ; L^2((-\ell,\ell)) ) \cap \mathcal{L}( \mathcal{H}^2_\mathcal{K}((-\ell,\ell)) ; H^1_0((-\ell,\ell)) ). \end{equation} Upon interpolating again and using Corollary \ref{s_interp}, we find that \begin{equation}\label{s_embed_2} D \in \mathcal{L}(\mathcal{H}^{1+s}_\mathcal{K}((-\ell,\ell)); X^s ). \end{equation} For $s \in (0,1)$ define the Hilbert space \begin{equation} Y^{1+s} = \{ f\in H^{1+s}((-\ell,\ell)) \;\vert\; f' \in X^s \} \end{equation} with norm $\ns{f}_{Y^{1+s}} = \ns{f}_{H^{1+s}} + \ns{f'}_{X^s}$. According to \eqref{s_embed_2} and Lemma \ref{s_embed_lem}, we have the continuous inclusion $\mathcal{H}^{1+s}_\mathcal{K}((-\ell,\ell)) \subseteq Y^{1+s}$. On the other hand, if $f \in Y^{1+s}$, then $(f', f(-\ell)) \in X^s \times \mathbb{R}$, and by \eqref{s_embed_1} we have that $f = F(f',f(-\ell)) \in \mathcal{H}^{1+s}_\mathcal{K}((-\ell,\ell))$. Hence, we have the continuous inclusion $Y^{1+s} \subseteq \mathcal{H}^{1+s}_\mathcal{K}((-\ell,\ell)) $. We deduce that we have the algebraic and topological identity \begin{equation} \mathcal{H}^{1+s}_\mathcal{K}((-\ell,\ell)) = Y^{1+s} \text{ for all }s \in (0,1). \end{equation} To conclude, we recall the standard interpolation facts \begin{equation} X^s = (L^2((-\ell,\ell)), H^1_0((-\ell,\ell)))_{s,2} = \begin{cases} H^s((-\ell,\ell)) & \text{if } 0 < s < 1/2 \\ H^{1/2}_{00}((-\ell,\ell)) &\text{if } s = 1/2 \\ H^s_0((-\ell,\ell)) & \text{if } 1/2 < s < 1. \end{cases} \end{equation} If $0 < s < 1/2$ then we have the norm equivalence \begin{equation} \ns{f}_{\mathcal{H}^{1+s}_{\mathcal{K}}} \asymp \ns{f}_{H^{1+s}} + \ns{f'}_{H^s((-\ell,\ell))} \asymp \ns{f}_{H^{1+s}}, \end{equation} and so $\mathcal{H}^{1+s}_\mathcal{K}((-\ell,\ell)) = H^{1+s}((-\ell,\ell))$. The result follows from this and the characterization of $H^{1/2}_{00}((-\ell,\ell))$ in \eqref{h_half_00}. \end{proof} As a byproduct of this result we get the following Sobolev embeddings. \begin{thm} For $s \in [0,2]$ we have that \begin{equation} \mathcal{H}^s_\mathcal{K}((-\ell,\ell)) \hookrightarrow \begin{cases} L^{2/(1-2s)}((-\ell,\ell)) & \text{if } s\in [0,1/2) \\ L^p((-\ell,\ell)) \text{ for all } p \in [1,\infty) &\text{if } s = 1/2 \\ C^{0,\alpha}_b((-\ell,\ell)) \text{ for } \alpha = s-1/2 &\text{if } s \in (1/2,3/2) \\ C^{0,\alpha}_b((-\ell,\ell)) \text{ for all } \alpha \in [0,1) &\text{if }s =3/2 \\ C^{1,\alpha}_b((-\ell,\ell)) \text{ for } \alpha = s-3/2 &\text{if } s \in (3/2,2]. \end{cases} \end{equation} Moreover, for $s \in [1,3/2]$ we have that \begin{equation} \mathcal{H}^s_\mathcal{K}((-\ell,\ell))\hookrightarrow \begin{cases} W^{1,2/(3-2s)}((-\ell,\ell)) & \text{if } s\in [1,3/2) \\ W^{1,p}((-\ell,\ell)) \text{ for all } p \in [1,\infty) &\text{if } s = 3/2. \end{cases} \end{equation} \end{thm} \begin{proof} These are immediate consequences of Theorem \ref{s_embed} and the standard Sobolev embeddings of $H^s((-\ell,\ell))$ for $0 \le s \le 2$. \end{proof}
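To illustrate the distinction between the spaces $\mathcal{H}^s_\mathcal{K}((-\ell,\ell))$ and the usual Sobolev spaces encoded in the above results, consider the function $f(x) = x$, which belongs to $H^m((-\ell,\ell))$ for every $m \in \mathbb{N}$. Since $f' = 1$ does not vanish at $\pm \ell$, Theorem \ref{sobolev_char} shows that $f \notin \mathcal{H}^2_\mathcal{K}((-\ell,\ell))$, and since \begin{equation} \int_{-\ell}^\ell \frac{\abs{f'(x)}^2}{\ell - \abs{x}}\, dx = \int_{-\ell}^\ell \frac{dx}{\ell - \abs{x}} = \infty, \end{equation} Theorem \ref{s_embed} shows that $f \notin \mathcal{H}^{3/2}_\mathcal{K}((-\ell,\ell))$ either. On the other hand, $f \in \mathcal{H}^{1+s}_\mathcal{K}((-\ell,\ell)) = H^{1+s}((-\ell,\ell))$ for every $0 \le s < 1/2$, so the two scales agree below order $3/2$ but differ at and above it, owing to the boundary conditions built into $\mathcal{K}$.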
\subsection{Bilinear boundedness, integration by parts} Suppose that $\varphi, \psi \in W$ and let $s \in [0,1]$. We then have that \begin{equation} (\varphi,\psi)_{1,\Sigma} = \sum_{k=0}^\infty \lambda_k \hat{\varphi}(k) \hat{\psi}(k) = \sum_{k=0}^\infty \lambda_k^s \hat{\varphi}(k) \lambda_k^{1-s}\hat{\psi}(k) = (\mathcal{K}^s \varphi, \mathcal{K}^{1-s} \psi)_{0,\Sigma}. \end{equation} Consequently, for any $s,t \in [0,1]$ we have that \begin{equation} (\mathcal{K}^s \varphi, \mathcal{K}^{1-s} \psi)_{0,\Sigma} = (\mathcal{K}^t \varphi, \mathcal{K}^{1-t} \psi)_{0,\Sigma}, \end{equation} which we can view as a sort of fundamental integration-by-parts result in the sense that we can arbitrarily shift powers of $\mathcal{K}$ from one term to the next so long as the overall power sums to unity. Working in $W$ is obviously too restrictive, but we can extend by density to get a generalized version of integration by parts for all fractional orders. \begin{thm} Let $B : W \times W \to \mathbb{R}$ be the bilinear map defined via \begin{equation} B(\varphi,\psi) = (\mathcal{K} \varphi,\psi)_{0,\Sigma} = (\varphi,\psi)_{1,\Sigma} = (\varphi, \mathcal{K} \psi)_{0,\Sigma}. \end{equation} Then $B$ extends to a bounded bilinear map $B : \mathcal{H}^{2s}_\mathcal{K} \times \mathcal{H}^{2(1-s)}_\mathcal{K} \to \mathbb{R}$ for each $s \in [0,1]$. \end{thm} \begin{proof} This follows directly from the identity \begin{equation} B(\varphi,\psi) = (\mathcal{K}^s \varphi, \mathcal{K}^{1-s} \psi)_{0,\Sigma} \text{ for }\varphi,\psi \in W, \end{equation} which allows us to bound \begin{equation} \abs{B(\varphi,\psi)} \le \norm{\varphi}_{\mathcal{H}^{2s}_\mathcal{K}} \norm{\psi}_{\mathcal{H}^{2(1-s)}_\mathcal{K}}. \end{equation} Using this and the density of $W$ in $\mathcal{H}^t_\mathcal{K}$ for all $t \ge 0$ proves the result. \end{proof} \subsection{The operators $D_j^r$ }\label{sec_djr} We now turn our attention to the operators $D^r := \mathcal{K}^{r/2}$ for $r \ge 0$, as defined by the functional calculus from Definition \ref{fnal_calc_def}. We will need to introduce some finite approximations, $D^r_j$, defined by \begin{equation} D_j^r u = \sum_{k=0}^j \lambda_k^{r/2} \hat{u}(k) w_k. \end{equation} It's easy to see that this is well-defined for every $u \in L^2((-\ell,\ell)) = \mathcal{H}^0_\mathcal{K}((-\ell,\ell))$ and that in this case $D_j^r u \in W \subseteq \bigcap_{s \ge 0} \mathcal{H}^s_\mathcal{K}((-\ell,\ell)) \subset C^\infty([-\ell,\ell])$. Let's now study some properties. The first result tells us that $D^r_j$ is like an approximation of $r$ derivatives. \begin{prop}\label{Djr_bnds} Let $j \in \mathbb{N}$. Then the following hold. \begin{enumerate} \item If $0\le r_1,r_2,s_1,s_2 \in \mathbb{R}$ satisfy $r_1 + s_1 = r_2 + s_2$, then \begin{equation} \norm{D^{r_1}_j f}_{\mathcal{H}^{s_1}_\mathcal{K}} = \norm{D^{r_2}_j f}_{\mathcal{H}^{s_2}_\mathcal{K}} \end{equation} for all $f \in L^2((-\ell,\ell))$.
\item If $0 \le r \in \mathbb{R}$, then \begin{equation} \norm{D^r_j f}_{L^2} \le \norm{f}_{\mathcal{H}^r_\mathcal{K}} \text{ and } \norm{f}_{\mathcal{H}^r_\mathcal{K}} = \lim_{j \to \infty} \norm{D^r_j f}_{L^2} \end{equation} for every $f \in \mathcal{H}^r_{\mathcal{K}}((-\ell,\ell))$. \end{enumerate} \end{prop} \begin{proof} For the first item we compute \begin{equation} \ns{D^{r_1}_j f}_{\mathcal{H}^{s_1}_\mathcal{K}} = \sum_{k=0}^j \lambda_k^{s_1} \abs{ \lambda_k^{r_1/2} \hat{f}(k) }^2 = \sum_{k=0}^j \lambda_k^{s_2} \abs{ \lambda_k^{r_2/2} \hat{f}(k) }^2 = \ns{D^{r_2}_j f}_{\mathcal{H}^{s_2}_\mathcal{K}} . \end{equation} In turn this shows that \begin{equation} \norm{D_j^r f}_{L^2} = \norm{D_j^0 f}_{\mathcal{H}^r_\mathcal{K}} = \left( \sum_{k=0}^j \lambda_k^r \abs{\hat{f}(k)}^2 \right)^{1/2}, \end{equation} from which the second item follows. \end{proof} Next we consider how $D^r_j$ interacts with functions of average zero. \begin{lem}\label{Djr_avg_zero} If $f\in L^2((-\ell,\ell))$ satisfies $\int_{-\ell}^\ell f =0$, then $\int_{-\ell}^\ell D_j^r f =0$ for all $r \ge 0$ and $j \in \mathbb{N}$. \end{lem} \begin{proof} Since $w_0 = 1/\sqrt{2\ell}$ we see that $\int_{-\ell}^\ell f =0$ if and only if $\hat{f}(0) =0$. In this case we then have that $\widehat{D^r_j f}(0) =0$ as well, and the result follows. \end{proof} Next we prove an integration by parts formula. \begin{lem}\label{Djr_IBP} Let $0 \le r,s,t,\rho \in \mathbb{R}$ be such that $r = s+t$. Then for $j \in \mathbb{N}$, $f\in L^2((-\ell,\ell))$, and $g \in \mathcal{H}^\rho_\mathcal{K}((-\ell,\ell))$ we have that \begin{equation} \ip{D^r_j f,g}_{\mathcal{H}^\rho_\mathcal{K}} = \ip{D^s_j f,D^t_j g}_{\mathcal{H}^\rho_\mathcal{K}}. \end{equation} \end{lem} \begin{proof} We simply compute \begin{multline} (D^r_j f,g)_{\mathcal{H}^\rho_\mathcal{K}} = \sum_{k=0}^\infty \lambda_k^\rho \widehat{D^r_j f}(k) \hat{g}(k) = \sum_{k=0}^j \lambda_k^{\rho + r/2} \hat{f}(k) \hat{g}(k) = \sum_{k=0}^j \lambda_k^{\rho } \lambda_k^{s/2} \hat{f}(k) \lambda_k^{t/2} \hat{g}(k) \\ = \sum_{k=0}^\infty \lambda_k^\rho \widehat{D^s_j f}(k) \widehat{D^t_j g}(k) = \ip{D^s_j f, D^t_j g}_{\mathcal{H}^\rho_\mathcal{K}}. \end{multline} \end{proof} \begin{remark} In this paper the most useful instances of Lemma \ref{Djr_IBP} occur with $\rho \in \{0,1\}$. Indeed, the lemma shows that if $0 \le r = s+t$ and $f \in L^2((-\ell,\ell))$, then \begin{equation} \int_{-\ell}^\ell D^r_j f g = \int_{-\ell}^\ell D^s_j f D^t_j g \text{ for all } g \in L^2((-\ell,\ell)) = \mathcal{H}^0_\mathcal{K}((-\ell,\ell)) \end{equation} and \begin{equation} (D^r_j f, g)_{1,\Sigma} = (D^s_j f, D^t_j g)_{1,\Sigma} \text{ for all } g \in H^1((-\ell,\ell)) = \mathcal{H}^1_\mathcal{K}((-\ell,\ell)). \end{equation} \end{remark} We conclude with a dual estimate. \begin{prop}\label{Djr_dual} Let $0 \le r \le 1/2$, $0 \le s <3/2$, and $j \in \mathbb{N}$. Then \begin{equation} D^s_j : H^{s-r}((-\ell,\ell)) \to H^{-r}((-\ell,\ell)) = (H^{r}_0((-\ell,\ell)))^\ast \end{equation} is a bounded linear operator. \end{prop} \begin{proof} According to the results in Chapter 1 of \cite{lions_magenes_1} and Theorem \ref{s_embed} we have that \begin{equation} H^{r}_0((-\ell,\ell))= H^{r}((-\ell,\ell)) = \mathcal{H}^{r}_\mathcal{K}((-\ell,\ell)).
\end{equation} This and Theorem \ref{dual_char} then show that \begin{equation} \mathcal{H}^{-r}_\mathcal{K}((-\ell,\ell)) = (H^{r}_0((-\ell,\ell)))^\ast = H^{-r}((-\ell,\ell)) \end{equation} with equality of norms. Hence, for $f \in H^{s-r}((-\ell,\ell)) = \mathcal{H}^{s-r}_\mathcal{K}((-\ell,\ell))$ (which again follows by Theorem \ref{s_embed}), we again use Theorem \ref{dual_char} together with Cauchy-Schwarz to compute \begin{multline} \norm{D^s_j f}_{H^{-r}} \asymp \norm{D^s_j f}_{\mathcal{H}^{-r}_\mathcal{K}} = \norm{J D^s_j f}_{(\mathcal{H}^{r}_\mathcal{K})^\ast} = \sup_{\norm{\hat{g}}_{\ell^2_r} \le 1} \sum_{k=0}^j \lambda_k^{s/2} \hat{f}(k) \hat{g}(k) \\ = \sup_{\norm{\hat{g}}_{\ell^2_r} \le 1} \sum_{k=0}^j \lambda_k^{(s-r)/2} \hat{f}(k) \lambda_k^{r/2} \hat{g}(k) \le \norm{f}_{\mathcal{H}^{s-r}_\mathcal{K}} \lesssim \norm{f}_{H^{s-r}}. \end{multline} This proves the boundedness assertion, and linearity is trivial. \end{proof} \section{Enhancement estimates }\label{sec_enhancements} Our goal in this section is to record enhancement estimates for the dissipation and energy that are derived through energy-type arguments rather than elliptic estimates. We will gain some dissipative control of $\eta$, $\partial_t \eta$, and $\partial_t^2 \eta$, and we will gain energetic control of $\partial_t p$. \subsection{Prerequisites} Recall the fractional differential operator $D^s = \mathcal{K}^{s/2}$ for a real parameter $0 \le s < 1$, together with its finite approximations $D^s_j$ for $j \in \mathbb{N}$, as defined in Section \ref{sec_djr}. The next result gives an existence result for a Neumann-type problem involving $D^s_j$. \begin{prop}\label{psi_solve} Let $s \in \mathbb{R}$ and $j,k \in \mathbb{N}$ with $0 \le k \le 2$ and $0 \le s < 1$. Then there exists $\psi : \Omega \to \mathbb{R}$ solving \begin{equation}\label{diss_enhace_psi_eqn} \begin{cases} - \Delta \psi = 0 &\text{in }\Omega \\ \partial_\nu \psi = (D_j^s \partial_t^k \eta)/\abs{\mathcal{N}_0} &\text{on }\Sigma \\ \partial_\nu \psi = 0 &\text{on } \Sigma_s \end{cases} \end{equation} where $\nu$ is the unit normal for the fixed domain $\Omega$ and its non-unit counterpart is $\mathcal{N}_0$, i.e. $\nu = \mathcal{N}_0/\abs{\mathcal{N}_0}$. Moreover, we have the estimates \begin{equation} \norm{\psi}_{H^1} \lesssim \norm{\partial_t^k \eta }_{ H^{s-1/2}}, \; \norm{\psi}_{H^2} \lesssim \norm{D_j^s \partial_t^k\eta}_{\mathcal{H}^{1/2}_\mathcal{K}}, \text{ and } \norm{\partial_t \psi}_{H^1} \lesssim \norm{ \partial_t^{k+1} \eta }_{ H^{s-1/2}}. \end{equation} \end{prop} \begin{proof} To begin we note that Proposition \ref{avg_zero_prop} and Lemma \ref{Djr_avg_zero} imply that \begin{equation} \int_\Sigma D_j^s \partial_t^k \eta \abs{\mathcal{N}_0}^{-1} = \int_{-\ell}^\ell D_j^s \partial_t^k \eta = \int_{-\ell}^\ell \partial_t^k \eta =0. \end{equation} Consequently, the compatibility condition needed to produce a unique weak solution $\psi \in \mathring{H}^1(\Omega)$ to \eqref{diss_enhace_psi_eqn} is satisfied.
Since the domain $\Omega$ has convex corners, the $H^2$ solvability theory is available for \eqref{diss_enhace_psi_eqn} (see, for instance, \cite{kmr_1}). This, the elementary $H^1$ weak estimate, and Proposition \ref{Djr_dual} then show that \begin{equation} \begin{split} \norm{\psi}_{H^1} &\lesssim \norm{ D_j^s \partial_t^k \eta}_{H^{-1/2}} \lesssim \norm{\partial_t^k \eta }_{ H^{s-1/2}} \\ \norm{\psi}_{H^2} & \lesssim \norm{D_j^s \partial_t^k \eta}_{H^{1/2}} \lesssim \norm{D_j^s \partial_t^k\eta}_{\mathcal{H}^{1/2}_\mathcal{K}} \\ \norm{\partial_t \psi}_{H^1} &\lesssim \norm{ D_j^s \partial_t^{k+1} \eta}_{H^{-1/2}} \lesssim \norm{ \partial_t^{k+1} \eta }_{ H^{s-1/2}}, \end{split} \end{equation} from which the result follows. \end{proof} \subsection{Dissipative enhancement for $\eta$} We begin by considering dissipation enhancement estimates for $\eta$. To this end let $\psi$ be as in Proposition \ref{psi_solve} with $k=0$. This proposition and Proposition \ref{M_properties} show that if we set $w = M \nabla \psi$, then $w$ is a valid choice of a test function in Lemma \ref{geometric_evolution} and \begin{equation} \diverge_{\mathcal{A}}{w} = \diverge_{\mathcal{A}}{M\nabla \psi} = K\Delta \psi =0. \end{equation} We will use this $w$ as a test function in Lemma \ref{geometric_evolution} to produce an essential dissipation estimate. \begin{thm}\label{diss_enhance_eta} Let $\alpha \in (0,1)$ be given by \eqref{kappa_ep_def}, and $0 < T \le \infty$. There exists a universal $0 <\delta_\ast <1$ such that if $\sup_{0\le t < T} \mathcal{E}(t) \le \delta_\ast$, then for every $0 \le \tau \le t < T$ we have the estimate \begin{equation} \int_\tau^t \ns{ \eta}_{H^{3/2-\alpha}} \lesssim \mathcal{E}_{\shortparallel, 0}(\tau) + \mathcal{E}_{\shortparallel, 0}(t) + \int_{\tau}^t \mathcal{D}_{\shortparallel,0}, \end{equation} where $\mathcal{E}_{\shortparallel,0}$ and $\mathcal{D}_{\shortparallel,0}$ are as in \eqref{ED_natural}. \end{thm} \begin{proof} We begin by assuming that $\delta_\ast < \gamma^2$, where $\gamma \in (0,1)$ is as in Lemma \ref{eta_small}. In particular, this means that the estimates of Lemma \ref{eta_small} are available in what follows. Let $s = 1- 2\alpha \in [0,1)$, which means that $3/2 - \alpha = 1 + s/2$. For a fixed $j \in \mathbb{N}$ we let $\psi$ solve \eqref{diss_enhace_psi_eqn} with data $D^s_j \eta / \abs{\mathcal{N}_0}$. Then Proposition \ref{psi_solve} provides the estimates \begin{equation}\label{diss_enhance_eta_ests} \norm{\psi}_{H^1} \lesssim \norm{\eta }_{ H^{s-1/2}}, \; \norm{\psi}_{H^2} \lesssim \norm{D_j^s \eta}_{\mathcal{H}^{1/2}_\mathcal{K}}, \text{ and } \norm{\partial_t \psi}_{H^1} \lesssim \norm{ \partial_t \eta }_{ H^{s-1/2}}. \end{equation} Note that since $0 \le s < 1$ we can define $\rho = (1-s)/2 \in (0,1/2]$, which satisfies \begin{equation}\label{diss_enhance_eta_rho} \frac{1}{2} + \rho + s = 1 + \frac{s}{2}.
\end{equation} This, Theorem \ref{s_embed}, and Proposition \ref{Djr_bnds} then provide the useful equivalence \begin{equation}\label{diss_enhance_eta_norm} \norm{D_j^{s/2} \eta}_{H^1} \asymp \norm{D_j^{s/2} \eta}_{\mathcal{H}^1_\mathcal{K}} = \norm{D_j^s \eta}_{\mathcal{H}^{1/2 + \rho}_\mathcal{K}} \asymp \norm{D_j^s \eta}_{H^{1/2 + \rho}}. \end{equation} We use $w = M \nabla \psi$ from above in Lemma \ref{geometric_evolution} to arrive at the identity \begin{multline}\label{diss_enhance_eta_ident} \br{\partial_t u,Jw} + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u + u \cdot \nablaa u,Jw)_0 + \pp{ u,w} + ( \eta ,w\cdot \mathcal{N})_{1,\Sigma} + \kappa [\partial_t \eta ,w\cdot \mathcal{N}]_\ell \\ = - \int_{-\ell}^\ell \sigma \mathcal{R} \partial_1 (w \cdot \mathcal{N}). \end{multline} We will deal with these term-by-term. For the first term we note that $M = K \nabla \Phi$ for $\Phi$ the flattening map, and so \begin{equation} \br{\partial_t u,JM \nabla \psi} = \int_\Omega \partial_t u \cdot \nabla \Phi \nabla \psi = \frac{d}{dt} \int_{\Omega} u \cdot \nabla \Phi \nabla \psi - \int_\Omega u \cdot \nabla\Phi \nabla \partial_t \psi - \int_\Omega u \cdot \partial_t(\nabla\Phi ) \nabla \psi. \end{equation} Using the bound $\mathcal{E} \le 1$, the definition of $\Phi$ in \eqref{mapping_def}, and \eqref{diss_enhance_eta_ests}, we may then estimate \begin{equation}\label{diss_enhance_eta_1} \abs{\int_\Omega u \cdot \nabla\Phi \nabla \psi } \lesssim \norm{\nabla\Phi}_{L^\infty} \norm{u}_{H^0} \norm{\psi}_{H^1} \lesssim \norm{\nabla\Phi}_{L^\infty} \norm{u}_{H^0} \norm{ \eta }_{ H^{s-1/2}} \lesssim \norm{u}_{H^0} \norm{ \eta }_{ H^{1}} \lesssim \mathcal{E}_{\shortparallel, 0}, \end{equation} where $\mathcal{E}_{\shortparallel, 0}$ is the natural energy at the non-differentiated level, as defined in \eqref{ED_natural}. Similarly, \begin{equation} \abs{- \int_\Omega u \cdot \nabla\Phi \nabla \partial_t \psi - \int_\Omega u \cdot \partial_t(\nabla\Phi ) \nabla \psi} \lesssim \norm{u}_{H^0} \left( \norm{\psi}_{H^1} + \norm{\partial_t \psi}_{H^1} \right) \lesssim \norm{u}_{H^0} \left(\norm{ \eta }_{ H^{s-1/2}} + \norm{\partial_t \eta }_{ H^{s-1/2}} \right). \end{equation} For the second term we write \begin{equation} (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u + u \cdot \nablaa u,Jw)_0 = (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u + u \cdot \nablaa u, \nabla\Phi \nabla \psi )_0. \end{equation} From this and the bounds $\mathcal{E} \le 1$ and \eqref{diss_enhance_eta_ests} we may then estimate \begin{equation} \abs{ (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u + u \cdot \nablaa u,Jw)_0} \lesssim \norm{u}_{H^1} \norm{\psi}_{H^1} \lesssim \norm{u}_{H^1} \norm{\eta}_{H^1}.
\end{equation} For the third term we use the $H^2$ estimate from \eqref{diss_enhance_eta_ests} to bound \begin{equation} \abs{\pp{u,w}} = \frac{\mu}{2} \abs{\int_{\Omega} J \mathbb{D}a u : \mathbb{D}a (M \nabla \psi) } \lesssim \norm{u}_{H^1} \norm{\psi}_{H^2} \lesssim \norm{u}_{H^1} \norm{D_j^s \eta}_{\mathcal{H}^{1/2}_\mathcal{K}}. \end{equation} To handle the fourth term we first note that on $\Sigma$, \begin{equation} w \cdot \mathcal{N} = M \nabla \psi \cdot \mathcal{N}= \nabla \psi \cdot \mathcal{N}_0 = \abs{\mathcal{N}_0} \partial_\nu \psi = D_j^s \eta. \end{equation} Using this, Lemma \ref{Djr_IBP}, and \eqref{diss_enhance_eta_norm} we can rewrite the fourth term as \begin{equation} ( \eta ,w\cdot \mathcal{N})_{1,\Sigma} = ( \eta ,D_j^s \eta)_{1,\Sigma} = ( D_j^{s/2}\eta ,D_j^{s/2} \eta)_{1,\Sigma} = \ns{D_j^{s/2} \eta}_{\mathcal{H}^{1}_\mathcal{K}} = \ns{D_j^{s} \eta}_{\mathcal{H}^{1/2 + \rho}_\mathcal{K}}. \end{equation} Then for the fifth term we can use trace theory and Theorem \ref{s_embed} to bound \begin{equation} \abs{\kappa [\partial_t \eta ,D_j^s \eta]_\ell } \lesssim [\partial_t \eta]_\ell \norm{D_j^s \eta}_{H^{1/2+\rho}} \lesssim [\partial_t \eta]_\ell \norm{D_j^s \eta}_{\mathcal{H}^{1/2+\rho}_\mathcal{K}}. \end{equation} Finally, we examine the nonlinear term on the right side of \eqref{diss_enhance_eta_ident}. We start by using Proposition \ref{frac_IBP_prop}, which is available since $0 \le s < 1$, to estimate \begin{equation} \abs{\int_{-\ell}^\ell \sigma \mathcal{R} \partial_1 D^s_j \eta} \lesssim \norm{D_j^s \eta}_{H^{1-s/2}} \norm{\mathcal{R}}_{H^{s/2}}. \end{equation} Next we note that \begin{equation} \frac{\mathcal{R}(y,z)}{z^2}, \frac{\partial_2 \mathcal{R}(y,z)}{z}, \text{ and }\frac{\partial_1 \mathcal{R}(y,z)}{z^2} \end{equation} are well-defined and bounded by Proposition \ref{R_prop}. Thus $\mathcal{R}(\partial_1\zeta_0,\partial_1 \eta)/\partial_1 \eta$ is well-defined and satisfies \begin{equation} \partial_1 \left(\frac{\mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}{\partial_1 \eta} \right) = \frac{\partial_1 \mathcal{R}(\partial_1\zeta_0,\partial_1 \eta) \partial_1^2 \zeta_0}{\partial_1 \eta} + \left(\frac{\partial_2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) }{\partial_1 \eta} - \frac{\mathcal{R}(\partial_1\zeta_0,\partial_1 \eta)}{(\partial_1 \eta)^2} \right) \partial_1^2 \eta, \end{equation} which in turn means that for any $1 \le q \le \infty$, \begin{equation} \norm{\mathcal{R}(\partial_1\zeta_0,\partial_1 \eta)/\partial_1 \eta}_{W^{1,q}} \lesssim \norm{\partial_1 \eta}_{L^q} + \norm{\partial_1^2 \eta}_{L^q} = \norm{\partial_1 \eta}_{W^{1,q}} \le \norm{\eta}_{W^{2,q}}.
\end{equation} This allows us to use Theorem \ref{supercrit_prod} and the embedding $W^{1,q_+}((-\ell,\ell)) \hookrightarrow H^{1/2 + \varepsilon_+/2}((-\ell,\ell))$ to estimate \begin{multline} \norm{\mathcal{R}}_{H^{s/2}} = \norm{\partial_1 \eta \mathcal{R}(\partial_1\zeta_0,\partial_1\eta)/ \partial_1 \eta }_{H^{s/2}} \lesssim \norm{\partial_1 \eta}_{H^{s/2}} \norm{\mathcal{R}(\partial_1\zeta_0,\partial_1\eta)/ \partial_1 \eta}_{H^{(1+\varepsilon_+)/2}} \\ \lesssim \norm{\partial_1 \eta}_{H^{s/2}} \norm{\mathcal{R}(\partial_1\zeta_0,\partial_1\eta)/ \partial_1 \eta}_{W^{1,q_+}} \lesssim \norm{\partial_1 \eta}_{H^{s/2}} \norm{\eta}_{W^{2,q_+}}. \end{multline} Assembling these estimates and employing Theorem \ref{s_embed} and the definition of $\mathcal{E}$ from \eqref{E_def} then shows that \begin{equation} \abs{\int_{-\ell}^\ell \sigma \mathcal{R} \partial_1 D^s_j \eta} \lesssim \norm{D_j^s \eta}_{H^{1-s/2}} \norm{ \eta}_{H^{1+s/2}} \norm{\eta}_{W^{2,q_+}} \lesssim \ns{ \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}}. \end{equation} We now combine all of these estimates to deduce that \begin{multline} \ns{D_j^s \eta}_{\mathcal{H}^{1/2 +\rho}_\mathcal{K}} + \frac{d}{dt} \int_\Omega u \cdot \nabla\Phi \nabla \psi \\ \lesssim \norm{u}_{H^0} \left(\norm{ \eta }_{ H^{s-1/2}} + \norm{\partial_t \eta }_{ H^{s-1/2}} \right) + \norm{ u}_{H^1} \norm{D_j^s \eta}_{\mathcal{H}^{1/2}_\mathcal{K}} + [\partial_t \eta]_\ell \norm{D_j^s \eta}_{\mathcal{H}^{1/2 + \rho}_\mathcal{K}} + \ns{ \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}}. \end{multline} Then for $0 \le \tau \le t < T$ we can integrate this inequality to see that \begin{multline} \int_\tau^t \ns{D_j^s \eta}_{\mathcal{H}^{1/2 + \rho}_\mathcal{K}} + \int_\Omega (u \cdot \nabla\Phi \nabla \psi)(t) \lesssim \int_\Omega (u \cdot \nabla\Phi \nabla \psi)(\tau) + \int_\tau^t \norm{u}_{H^0} \left(\norm{ \eta }_{ H^{s-1/2}} + \norm{\partial_t \eta }_{ H^{s-1/2}} \right) \\ + \int_\tau^t \norm{ u}_{H^1} \norm{D_j^s \eta}_{\mathcal{H}^{1/2}_\mathcal{K}} + \int_\tau^t [\partial_t \eta]_\ell \norm{D_j^s \eta}_{\mathcal{H}^{1/2+\rho}_\mathcal{K}} + \int_\tau^t \ns{ \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}}. \end{multline} We then use \eqref{diss_enhance_eta_1} and Cauchy's inequality to deduce from this that \begin{multline} \frac{1}{2} \int_\tau^t \ns{D_j^s \eta}_{\mathcal{H}^{1/2+\rho}_\mathcal{K}} \lesssim \mathcal{E}_{\shortparallel, 0}(\tau) + \mathcal{E}_{\shortparallel, 0}(t) + \int_\tau^t \norm{u}_{H^0} \left(\norm{ \eta }_{ H^{s-1/2}} + \norm{\partial_t \eta }_{ H^{s-1/2}} \right) \\ + \int_\tau^t \left(\ns{ u}_{H^1} + [\partial_t \eta]_\ell^2 \right) + \int_\tau^t \ns{ \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}}. \end{multline} Note that from \eqref{diss_enhance_eta_rho}, Proposition \ref{Djr_bnds}, and Theorem \ref{s_embed} we have that \begin{equation} \lim_{j \to \infty} \ns{D_j^s \eta}_{\mathcal{H}^{1/2+\rho}_\mathcal{K}} = \ns{\eta}_{\mathcal{H}_\mathcal{K}^{1+s/2}} \asymp \ns{\eta}_{H^{1+s/2}}.
\end{equation} We then send $j \to \infty$ and use this and Fatou's lemma to see that \begin{multline} \frac{1}{2} \int_\tau^t \ns{ \eta}_{H^{1+s/2}} \lesssim \mathcal{E}_{\shortparallel, 0}(\tau) + \mathcal{E}_{\shortparallel, 0}(t) + \int_\tau^t \norm{u}_{H^0} \left(\norm{ \eta }_{ H^{s-1/2}} + \norm{\partial_t \eta }_{ H^{s-1/2}} \right) \\ + \int_\tau^t \left(\ns{ u}_{H^1} + [\partial_t \eta]_\ell^2 \right) + \int_\tau^t \ns{ \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}}. \end{multline} Since $s -1/2 \le 1 + s/2$ we can then use Cauchy's inequality once more in addition to the smallness $\mathcal{E} \le \delta_\ast$ for some universal $\delta_\ast>0$ to conclude that \begin{equation}\label{diss_enhance_eta_2} \frac{1}{4} \int_\tau^t \ns{ \eta}_{H^{1+s/2}} \lesssim \mathcal{E}_{\shortparallel, 0}(\tau) + \mathcal{E}_{\shortparallel, 0}(t) + \int_\tau^t \left(\ns{u}_{H^0} + \norm{u}_{H^0} \norm{\partial_t \eta }_{ H^{s-1/2}} \right) + \int_\tau^t \left(\ns{ u}_{H^1} + [\partial_t \eta]_\ell^2 \right) . \end{equation} Finally, we use the equation $\partial_t \eta = u \cdot \mathcal{N}= u\cdot (-\partial_1 \zeta_0,1) - u_1 \partial_1 \eta$, Theorem \ref{supercrit_prod}, and the fact that $\mathcal{E} \le 1$ to estimate \begin{equation} \norm{\partial_t \eta}_{H^{s-1/2}} \lesssim \norm{u}_{H^{s-1/2}} \left(1 + \norm{\partial_1 \eta}_{H^1} \right) \lesssim \norm{u}_{H^s} \lesssim \norm{u}_{H^1}. \end{equation} Plugging this into \eqref{diss_enhance_eta_2} then shows that \begin{equation} \frac{1}{4} \int_\tau^t \ns{ \eta}_{H^{1+s/2}} \lesssim \mathcal{E}_{\shortparallel, 0}(\tau) + \mathcal{E}_{\shortparallel, 0}(t) + \int_\tau^t \left(\ns{ u}_{H^1} + [\partial_t \eta]_\ell^2 \right) = \mathcal{E}_{\shortparallel, 0}(\tau) + \mathcal{E}_{\shortparallel, 0}(t) + \int_{\tau}^t \mathcal{D}_{\shortparallel,0}, \end{equation} which is the desired bound since $1+s/2= 3/2 - \alpha$. \end{proof} \subsection{Dissipative enhancement for $\partial_t \eta$ and $\partial_t^2 \eta$ } We now turn our attention to enhanced dissipation estimates for $\partial_t \eta$ and $\partial_t^2 \eta$. \begin{thm}\label{diss_enhance_dtketa} Let $\alpha \in (0,1)$ be given by \eqref{kappa_ep_def}, and $0 < T \le \infty$. Let $k \in \{1,2\}$. There exists a universal $0 <\delta_\ast <1$ such that if $\sup_{0\le t < T} \mathcal{E}(t) \le \delta_\ast$, then for every $0 \le \tau \le t < T$ we have the estimate \begin{equation} \int_{\tau}^t \ns{\partial_t^k \eta}_{H^{3/2 - \alpha}} \lesssim \mathcal{E}_{\shortparallel,k}(\tau) + \mathcal{E}_{\shortparallel,k}(t) + \int_{\tau}^t \left( \mathcal{D}_{\shortparallel} + \mathcal{E}\mathcal{D}\right). \end{equation} \end{thm} \begin{proof} We will give the proof only in the harder case $k=2$. The case $k=1$ follows from a similar, simpler argument. To begin, we assume that $\delta_\ast < \gamma^2$, where $\gamma \in (0,1)$ is as in Lemma \ref{eta_small}. In particular, this means that the estimates of Lemma \ref{eta_small} are available in what follows. We begin in essentially the same way as in the proof of Theorem \ref{diss_enhance_eta}. Let $s = 1- 2\alpha \in [0,1)$, which means that $3/2 - \alpha = 1 + s/2$. Also let $\rho= (1-s)/2$ so that $1/2 + \rho + s = 1 + s/2$.
For a fixed $j \in \mathbb{N}$ we let $\psi$ solve \eqref{diss_enhace_psi_eqn} with data $D^s_j \partial_t^2 \eta / \abs{\mathcal{N}_0}$. Then Proposition \ref{psi_solve} provides the estimates \begin{equation}\label{diss_enhance_dtketa_1} \norm{\psi}_{H^1} \lesssim \norm{\partial_t^2 \eta }_{ H^{s-1/2}}, \; \norm{\psi}_{H^2} \lesssim \norm{D_j^s \partial_t^2 \eta}_{\mathcal{H}^{1/2}_\mathcal{K}}, \text{ and } \norm{\partial_t \psi}_{H^1} \lesssim \norm{ \partial_t^3 \eta }_{ H^{s-1/2}}. \end{equation} Note that $s-1/2 = 1/2 - 2 \alpha < 1/2 - \alpha$, so the latter term is controlled by the dissipation (see \eqref{D_def}). Then Proposition \ref{M_def} lets us use Lemma \ref{geometric_evolution} with $w = M \nabla \psi$ to see that \begin{multline}\label{diss_enhance_dtketa_2} \br{\partial_t^3 u,Jw} + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t^2 u + u \cdot \nablaa \partial_t^2 u,Jw)_0 + \pp{\partial_t^2 u,w} + (\partial_t^2 \eta ,w\cdot \mathcal{N})_{1,\Sigma} + \kappa [\partial_t^3 \eta ,w\cdot \mathcal{N}]_\ell \\ = \int_\Omega F^1 \cdot w J - \int_{\Sigma_s} J (w\cdot \tau)F^5 - \int_{-\ell}^\ell \sigma F^3 \partial_1 (w \cdot \mathcal{N}) + F^4 \cdot w - \kappa [w\cdot \mathcal{N}, F^7]_\ell. \end{multline} Here the forcing terms on the right are as defined in Appendix \ref{sec_nonlinear_records}. Arguing as in the proof of Theorem \ref{diss_enhance_eta}, we estimate all of the terms on the left of \eqref{diss_enhance_dtketa_2} to arrive at the bounds \begin{equation}\label{diss_enhance_dtketa_25} \abs{\int_\Omega \partial_t^2 u \cdot \nabla\Phi \nabla \psi} \lesssim \norm{\partial_t^2 u}_{H^0} \norm{\partial_t^2 \eta}_{H^1} \lesssim \mathcal{E}_{\shortparallel,2}, \end{equation} where $\mathcal{E}_{\shortparallel,2}$ is as defined in \eqref{ED_natural}, and \begin{multline}\label{diss_enhance_dtketa_3} \ns{D_j^s \partial_t^2 \eta}_{\mathcal{H}^{1/2 + \rho}_\mathcal{K}} + \frac{d}{dt} \int_\Omega \partial_t^2 u \cdot \nabla\Phi \nabla \psi \lesssim \norm{\partial_t^2 u}_{H^0} \left(\norm{ \partial_t^2 \eta }_{ H^{s-1/2}} + \norm{\partial_t^3 \eta }_{ H^{s-1/2}} \right) \\ + \norm{ \partial_t^2 u}_{H^1} \norm{\partial_t^2 \eta}_{H^{1+s/2}} + [\partial_t^3 \eta]_\ell \norm{ \partial_t^2\eta}_{H^{1+s/2}} + \br{\mathcal{F}, M \nabla \psi}, \end{multline} where, as shorthand, we have written \begin{multline}\label{diss_enhance_dtketa_4} \br{\mathcal{F}, M \nabla \psi} = \int_\Omega F^1 \cdot M \nabla \psi J - \int_{\Sigma_s} J (M \nabla \psi\cdot \tau)F^5 \\ - \int_{-\ell}^\ell \sigma \left(F^3 \partial_1 (M \nabla \psi \cdot \mathcal{N}) + F^4 \cdot M \nabla \psi\right) - \kappa [M \nabla \psi\cdot \mathcal{N}, F^7]_\ell. \end{multline} We now estimate $\mathcal{F}$, breaking it into three separate pieces.
For the first piece we use Theorem \ref{nid_v_est}, Proposition \ref{M_properties}, and \eqref{diss_enhance_dtketa_1} to estimate
\begin{multline}\label{diss_enhance_dtketa_5}
\abs{\int_\Omega F^1 \cdot (M \nabla \psi) J - \int_{-\ell}^\ell F^4 \cdot (M \nabla \psi) - \int_{\Sigma_s} J ((M \nabla \psi) \cdot \tau)F^5 } \lesssim \norm{M \nabla \psi}_{H^1} (\mathcal{E}+ \sqrt{\mathcal{E}})\sqrt{\mathcal{D}} \\
\lesssim \norm{\psi}_{H^2} (\mathcal{E}+ \sqrt{\mathcal{E}})\sqrt{\mathcal{D}} \lesssim \norm{D^s_j \partial_t^2 \eta}_{\mathcal{H}_\mathcal{K}^{1/2}} (\mathcal{E}+ \sqrt{\mathcal{E}})\sqrt{\mathcal{D}}.
\end{multline}
Next we handle the $F^3$ term. According to \eqref{dt2_f3} we have that
\begin{equation}
F^3 = \partial_t^2 [ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] = \partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2\eta + \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2.
\end{equation}
On the other hand, we know that $M \nabla \psi \cdot \mathcal{N} = D_j^s \partial_t^2 \eta$ on $\Sigma$. Combining these, and employing Proposition \ref{frac_IBP_prop}, we can estimate
\begin{multline}
\abs{ \int_{-\ell}^\ell \sigma F^3 \partial_1 (M \nabla \psi \cdot \mathcal{N}) } \le \sigma \abs{ \int_{-\ell}^\ell \partial_1( D^s_j \partial_t^2 \eta )\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2\eta } \\
+ \sigma \abs{\int_{-\ell}^\ell \partial_1( D^s_j \partial_t^2 \eta ) \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2 } \lesssim \norm{D_j^s \partial_t^2 \eta}_{H^{1-s/2}} \norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2\eta }_{H^{s/2}} \\
+ \norm{D_j^s \partial_t^2 \eta}_{H^{1-s/2}} \norm{ \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2}_{H^{s/2}}.
\end{multline}
Note that
\begin{equation}
\frac{1}{2} = \frac{1}{q_+} - \frac{1}{1} \left( \frac{1}{2} - \frac{\varepsilon_+}{2}\right),
\end{equation}
so the Sobolev embeddings imply that $W^{1,q_+}((-\ell,\ell)) \hookrightarrow H^{(1+\varepsilon_+)/2}((-\ell,\ell))$ and
\begin{equation}
W^{2,q_+}((-\ell,\ell)) \hookrightarrow H^{(3+\varepsilon_+)/2}((-\ell,\ell)) \hookrightarrow H^{3/2-\alpha}((-\ell,\ell)) = H^{1+s/2}((-\ell,\ell)).
\end{equation}
These and Theorem \ref{supercrit_prod} then imply that
\begin{multline}
\norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \partial_1 \partial_t^2\eta }_{H^{s/2}} \lesssim \norm{\partial_1 \partial_t^2\eta }_{H^{s/2}} \norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{H^{(1+\varepsilon_+)/2}} \\
\lesssim \norm{\partial_t^2\eta }_{H^{1+s/2}} \norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_+}}
\end{multline}
and
\begin{multline}
\norm{ \partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) (\partial_1 \partial_t \eta)^2}_{H^{s/2}} \lesssim \norm{(\partial_1 \partial_t \eta)^2}_{H^{s/2}} \norm{\partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{H^{(1+\varepsilon_+)/2}} \\
\lesssim \norm{\partial_1 \partial_t \eta}_{H^{s/2}} \norm{\partial_1 \partial_t \eta}_{H^{(1+\varepsilon_+)/2}} \norm{\partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_+}} \\
\lesssim \ns{\partial_t \eta}_{W^{2,q_+}} \norm{\partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_+}}.
\end{multline}
Since the terms involving $\mathcal{R}$ involve an integer derivative count, we can employ Proposition \ref{R_prop} to estimate
\begin{equation}
\norm{\partial_z \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_+}} + \norm{\partial_z^2 \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)}_{W^{1,q_+}} \lesssim \norm{\eta}_{W^{2,q_+}}.
\end{equation}
Hence,
\begin{multline}\label{diss_enhance_dtketa_6}
\abs{ \int_{-\ell}^\ell \sigma F^3 \partial_1 (M \nabla \psi \cdot \mathcal{N}) } \lesssim \ns{\partial_t^2 \eta}_{H^{1+s/2}} \norm{\eta}_{W^{2,q_+}} + \norm{\partial_t^2 \eta}_{H^{1+s/2}} \ns{\partial_t \eta}_{W^{2,q_+}} \norm{\eta}_{W^{2,q_+}} \\
\lesssim \ns{\partial_t^2 \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}} + \norm{\partial_t^2 \eta}_{H^{1+s/2}} \mathcal{E} \sqrt{\mathcal{D}}.
\end{multline}
Lastly, we handle the $F^7$ term, again using that $M \nabla \psi \cdot \mathcal{N} = D_j^s \partial_t^2 \eta$ on $\Sigma$. Then \eqref{dt2_f7} and standard trace theory show that
\begin{multline}
\abs{\kappa [M \nabla \psi\cdot \mathcal{N}, F^7]_\ell } = \kappa \abs{ [D_j^s \partial_t^2 \eta, \mathscr{W}h'(\partial_t \eta) \partial_t^3 \eta + \mathscr{W}h''(\partial_t \eta) (\partial_t^2 \eta)^2]_\ell } \\
\lesssim \norm{D_j^s \partial_t^2 \eta}_{H^{1-s/2}} \max_{\pm \ell}\abs{\mathscr{W}h'(\partial_t \eta) \partial_t^3 \eta + \mathscr{W}h''(\partial_t \eta) (\partial_t^2 \eta)^2}.
\end{multline}
According to Theorem \ref{catalog_energy}, $\norm{\partial_t \eta}_{C^0_b} \lesssim \sqrt{\mathcal{E}} \lesssim 1,$ so we may estimate
\begin{equation}
\abs{\mathscr{W}h'(z)} = \frac{1}{\alpha}\abs{ \int_0^z \mathscr{W}''(r)dr}\lesssim \abs{z} \text{ for } z \in [-\norm{\partial_t \eta}_{C^0},\norm{\partial_t \eta}_{C^0}].
\end{equation}
This and trace theory then provide the bound
\begin{multline}
\max_{\pm \ell}\abs{\mathscr{W}h'(\partial_t \eta) \partial_t^3 \eta + \mathscr{W}h''(\partial_t \eta) (\partial_t^2 \eta)^2} \lesssim \max_{\pm \ell} \left(\abs{\partial_t \eta} \abs{\partial_t^3 \eta} + \abs{ \partial_t^2 \eta}^2\right) \\
\lesssim \sqrt{\mathcal{D}_{\shortparallel}} \left( \norm{\partial_t \eta}_{H^1} + \norm{\partial_t^2 \eta}_{H^1} \right) \lesssim \sqrt{\mathcal{E}_{\shortparallel}} \sqrt{\mathcal{D}_{\shortparallel}},
\end{multline}
where $\mathcal{E}_{\shortparallel}$ and $\mathcal{D}_{\shortparallel}$ are as defined in \eqref{ED_parallel}. Hence,
\begin{equation}\label{diss_enhance_dtketa_7}
\abs{\kappa [M \nabla \psi\cdot \mathcal{N}, F^7]_\ell } \lesssim \norm{\partial_t^2 \eta}_{H^{1+s/2}}\sqrt{\mathcal{E}_{\shortparallel}} \sqrt{\mathcal{D}_{\shortparallel}}.
\end{equation}
Upon plugging the estimates \eqref{diss_enhance_dtketa_5}, \eqref{diss_enhance_dtketa_6}, and \eqref{diss_enhance_dtketa_7} into \eqref{diss_enhance_dtketa_4}, we deduce that
\begin{equation}
\abs{ \br{\mathcal{F}, M \nabla \psi}} \lesssim \ns{\partial_t^2 \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}} + \norm{\partial_t^2 \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation}
Inserting this into \eqref{diss_enhance_dtketa_3}, integrating in time from $\tau$ to $t$, and using \eqref{diss_enhance_dtketa_25} then shows that
\begin{multline}
\int_{\tau}^t \ns{D_j^s \partial_t^2 \eta}_{\mathcal{H}^{1/2 + \rho}_\mathcal{K}} \lesssim \mathcal{E}_{\shortparallel,2}(\tau) + \mathcal{E}_{\shortparallel,2}(t) + \int_{\tau}^t \norm{\partial_t^2 u}_{H^0} \left(\norm{\partial_t^2 \eta}_{H^{s-1/2}} + \norm{\partial_t^3 \eta}_{H^{s-1/2}} \right) \\
+ \int_{\tau}^t \left( \norm{\partial_t^2 u}_{H^1} + [\partial_t^3 \eta]_\ell \right)\norm{\partial_t^2 \eta}_{H^{1+s/2}} + \int_{\tau}^t \left(\ns{\partial_t^2 \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}} + \norm{\partial_t^2 \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \right).
\end{multline}
We then send $j \to \infty$ and argue as in the proof of Theorem \ref{diss_enhance_eta} to deduce from this that
\begin{multline}
\int_{\tau}^t \ns{\partial_t^2 \eta}_{H^{1+s/2}} \lesssim \mathcal{E}_{\shortparallel,2}(\tau) + \mathcal{E}_{\shortparallel,2}(t) + \int_{\tau}^t \norm{\partial_t^2 u}_{H^0} \left(\norm{\partial_t^2 \eta}_{H^{s-1/2}} + \norm{\partial_t^3 \eta}_{H^{s-1/2}} \right) \\
+ \int_{\tau}^t \left( \norm{\partial_t^2 u}_{H^1} + [\partial_t^3 \eta]_\ell \right)\norm{\partial_t^2 \eta}_{H^{1+s/2}} + \int_{\tau}^t \left(\ns{\partial_t^2 \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}} + \norm{\partial_t^2 \eta}_{H^{1+s/2}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \right).
\end{multline}
Finally, we use Cauchy's inequality, the fact that $s-1/2 < 1/2$, and the assumption that $\mathcal{E} \le \delta_\ast$ for a universal $0 <\delta_\ast \le 1$ to absorb the $\ns{\partial_t^2 \eta}_{H^{1+s/2}}$ terms from the right to the left, which yields
\begin{equation}
\frac{1}{2} \int_{\tau}^t \ns{\partial_t^2 \eta}_{H^{1+s/2}} \lesssim \mathcal{E}_{\shortparallel,2}(\tau) + \mathcal{E}_{\shortparallel,2}(t) + \int_{\tau}^t \mathcal{D}_{\shortparallel} + \mathcal{E}\mathcal{D}.
\end{equation}
This then provides the desired estimate since $1 + s/2 = 3/2 - \alpha$.
\end{proof}

\subsection{Energetic enhancement for $\partial_t p$ }

We now turn our attention to an estimate that provides $L^2$ control of $\partial_t p$ in terms of the energy.

\begin{thm}\label{en_enhance_dtp}
Let $0 < T \le \infty$ and suppose that $\sup_{0\le t < T} \mathcal{E}(t) \le \gamma^2$, where $\gamma \in (0,1)$ is as in Lemma \ref{eta_small}. Then we have the estimate
\begin{equation}\label{en_enhance_dtp_0}
\norm{\partial_t p}_{L^2} \lesssim \norm{\partial_t u}_{H^1} + \norm{\partial_t^2 u}_{L^2}+ \norm{\partial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}} + \mathcal{E} + \mathcal{E}^{3/2} .
\end{equation}
\end{thm}

\begin{proof}
Let $\psi \in H^2(\Omega)$ solve
\begin{equation}
\begin{cases}
\Delta \psi = \partial_t p & \text{in } \Omega \\
\psi =0 & \text{on } \Sigma \\
\partial_\nu \psi =0 &\text{on } \Sigma_s,
\end{cases}
\end{equation}
which exists and enjoys $H^2$ regularity since $\Omega$ has convex corners. Moreover,
\begin{equation}
\norm{\psi}_{H^2} \lesssim \norm{\partial_t p}_{L^2}.
\end{equation}
Proposition \ref{M_properties} shows that if we set $w = M \nabla \psi$, then $w$ is a valid choice of a test function in Lemma \ref{geometric_evolution},
\begin{equation}
\diverge_{\mathcal{A}}{w} = \diverge_{\mathcal{A}}{M\nabla \psi} = K\Delta \psi = K \partial_t p,
\end{equation}
and we have the bound
\begin{equation}
\norm{w}_{H^1} \lesssim \norm{\psi}_{H^2} \lesssim \norm{\partial_t p}_{L^2}.
\end{equation}
Using this $w$ in \eqref{ge_01} of Lemma \ref{geometric_evolution}, we find that
\begin{multline}\label{en_enhance_dtp_1}
\br{\partial_t^2 u,Jw} + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t u + u \cdot \nabla_{\mathcal{A}} \partial_t u,Jw)_0 + \pp{\partial_t u,w} - (\partial_t p,\diverge_{\mathcal{A}} w)_0 \\
= \int_\Omega F^1 \cdot w J - \int_{\Sigma_s} J (w\cdot \tau)F^5 - \int_{-\ell}^\ell g \partial_t \eta (w \cdot \mathcal{N}) - \sigma \partial_1 \left( \frac{\partial_1 \partial_t \eta }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3\right)w\cdot \mathcal{N} + F^4 \cdot w
\end{multline}
with $F^1$, $F^3$, $F^4$, and $F^5$ given by \eqref{dt1_f1}, \eqref{dt1_f3}, \eqref{dt1_f4}, and \eqref{dt1_f5}, respectively, but
\begin{equation}\label{en_enhance_dtp_2}
(\partial_t p,\diverge_{\mathcal{A}} w)_0 = \int_\Omega J \partial_t p \diverge_{\mathcal{A}} w = \int_\Omega \abs{\partial_t p}^2 = \norm{\partial_t p}_{L^2}^2.
\end{equation}
According to Theorem \ref{nie_v_est}, we have the bound
\begin{equation}\label{en_enhance_dtp_3}
\abs{\int_\Omega F^1 \cdot w J - \int_{\Sigma_s} J (w\cdot \tau)F^5 + F^4 \cdot w} \lesssim \left( \mathcal{E}+ \mathcal{E}^{3/2} \right) \norm{w}_{H^1} \lesssim \left( \mathcal{E}+ \mathcal{E}^{3/2} \right) \norm{\partial_t p}_{L^2},
\end{equation}
while Theorem \ref{nie_ST} shows that
\begin{multline}
\abs{\int_{-\ell}^\ell g \partial_t \eta (w \cdot \mathcal{N}) - \sigma \partial_1 \left( \frac{\partial_1 \partial_t \eta }{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} +F^3\right)w\cdot \mathcal{N} } \lesssim \norm{\partial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}} \norm{w}_{H^1} \\
\lesssim \norm{\partial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}} \norm{\partial_t p}_{L^2}.
\end{multline}
On the other hand, we have the bounds
\begin{equation}
\abs{\pp{\partial_t u,w}} \lesssim \norm{\partial_t u}_{H^1} \norm{w}_{H^1} \lesssim \norm{\partial_t u}_{H^1} \norm{\partial_t p}_{L^2},
\end{equation}
\begin{equation}
\abs{\br{\partial_t^2 u,Jw}} \lesssim \norm{\partial_t^2 u}_{L^2} \norm{w}_{L^2} \lesssim \norm{\partial_t^2 u}_{L^2}\norm{\partial_t p}_{L^2},
\end{equation}
and
\begin{multline}\label{en_enhance_dtp_4}
\abs{ (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t u + u \cdot \nabla_{\mathcal{A}} \partial_t u,Jw)_0 } \lesssim \norm{w}_{L^2}\left( \norm{\partial_t \bar{\eta}}_{L^\infty} \norm{\nabla \partial_t u}_{L^2} + \norm{u}_{L^\infty}\norm{\partial_t u}_{L^2}\right) \\
\lesssim \norm{\partial_t p}_{L^2} \mathcal{E}.
\end{multline}
Plugging the estimates \eqref{en_enhance_dtp_3}--\eqref{en_enhance_dtp_4} into \eqref{en_enhance_dtp_1} and using \eqref{en_enhance_dtp_2}, we deduce that
\begin{equation}
\ns{\partial_t p}_{L^2} \lesssim \norm{\partial_t p}_{L^2} \left(\norm{\partial_t u}_{H^1} + \norm{\partial_t^2 u}_{L^2}+ \norm{\partial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}} + \mathcal{E} + \mathcal{E}^{3/2} \right).
\end{equation}
Then \eqref{en_enhance_dtp_0} follows immediately from this.
\end{proof}

\section{A priori estimates}\label{sec_aprioris}

In this section we present the proof of our main a priori estimates, Theorem \ref{main_apriori}.

\subsection{A key construction}

We need one more technical tool to close our a priori estimates, namely the construction of a useful $\omega$ to use in Theorem \ref{linear_energy}. We present the construction of such an $\omega$ now.

\begin{prop}\label{omega_construction}
Let $0 < T \le \infty$ and suppose that $\sup_{0\le t < T} \mathcal{E}(t) \le \gamma^2$, where $\gamma \in (0,1)$ is as in Lemma \ref{eta_small}. Let $F^2$ be given by \eqref{dt2_f2} and let $\br{\cdot}_\Omega$ denote the spatial average on $\Omega$, i.e.
\begin{equation}
\br{g}_\Omega = \frac{1}{\abs{\Omega}} \int_{\Omega} g.
\end{equation}
Then there exists $\omega : \Omega \times [0,T) \to \mathbb{R}^2$ satisfying the following.
\begin{enumerate}
 \item We have that $\omega(\cdot,t) \in H^1_0(\Omega;\mathbb{R}^2)$ for $0 \le t < T$, and
\begin{equation}\label{omega_construction_00}
J \diverge_{\mathcal{A}}{\omega} = JF^2 - \br{JF^2}_\Omega.
\end{equation}
 \item $\omega$ obeys the estimates
\begin{equation} \label{omega_construction_01}
\norm{\omega }_{W^{1,4/(3-2\varepsilon_+)}_0} \lesssim \mathcal{E} \text{ and } \norm{\omega}_{W^{1,2/(1-\varepsilon_-)}_0} + \norm{\partial_t \omega}_{L^{2/(1-\varepsilon_-)}} \lesssim (\sqrt{\mathcal{E}} +\mathcal{E})\sqrt{\mathcal{D}}.
\end{equation}
 \item We have the interaction estimates
\begin{equation} \label{omega_construction_02}
\abs{\int_\Omega \partial_t^2 u J \omega} \lesssim \mathcal{E}^{3/2},
\end{equation}
and
\begin{equation} \label{omega_construction_03}
\abs{\int_{\Omega} \partial_t^2 u \partial_t(J \omega)} + \abs{(-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t^2 u + u \cdot \nabla_{\mathcal{A}} \partial_t^2 u,J\omega)_0} + \abs{ \pp{\partial_t^2 u,\omega}} + \abs{\int_\Omega J F^1 \cdot \omega} \lesssim (\sqrt{\mathcal{E}} + \mathcal{E}) \mathcal{D}.
\end{equation}
\end{enumerate}
\end{prop}

\begin{proof}
Recall from Proposition \ref{M_properties} that
\begin{equation}
\diverge{u} = \varphi \Leftrightarrow \diverge_{\mathcal{A}}(M u) = K \varphi \Leftrightarrow J \diverge_{\mathcal{A}} (Mu) = \varphi.
\end{equation}
This means that if we first solve
\begin{equation}
\diverge{\bar{\omega}} = JF^2 - \br{JF^2}_\Omega,
\end{equation}
then $\omega = M \bar{\omega}$ satisfies \eqref{omega_construction_00}. Let $\mathcal{B}_\Omega$ denote the Bogovskii operator from Proposition \ref{bogovskii}. Then we will define
\begin{equation}
\bar{\omega} = \mathcal{B}_\Omega (JF^2 - \br{JF^2}_\Omega ).
\end{equation}
The essential point is that the Bogovskii operator is a linear map that commutes with time derivatives and satisfies
\begin{equation}\label{omega_5}
\mathcal{B}_\Omega \in \mathcal{L}( \mathring{L}^{q}(\Omega), W^{1,q}_0(\Omega;\mathbb{R}^2) ) \text{ for all } 1 < q < \infty
\end{equation}
and $\diverge \mathcal{B}_\Omega \varphi = \varphi$. Then our desired vector field is given by
\begin{equation}
\omega = M \bar{\omega} = M \mathcal{B}_\Omega(J F^2 - \br{J F^2}_\Omega).
\end{equation}
According to Proposition \ref{ne2_f2} and \eqref{omega_5}, we have the bounds
\begin{equation}\label{omega_1}
\norm{\bar{\omega} }_{W^{1,4/(3-2\varepsilon_+)}_0} \lesssim \mathcal{E}, \; \norm{\bar{\omega}}_{W^{1,2/(1-\varepsilon_-)}_0} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}, \text{ and } \norm{\partial_t \bar{\omega}}_{W^{1,q_-}_0} \lesssim (\sqrt{\mathcal{E}} + \mathcal{E})\sqrt{\mathcal{D}}.
\end{equation}
Then Proposition \ref{M_multiplier}, together with \eqref{omega_1} and the fact that $\mathcal{E} \le 1$, shows that
\begin{equation}\label{omega_2}
\norm{\omega }_{W^{1,4/(3-2\varepsilon_+)}_0} \lesssim \mathcal{E} \text{ and } \norm{\omega}_{W^{1,2/(1-\varepsilon_-)}_0} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}
\end{equation}
and (since $\varepsilon_- < \varepsilon_+$ implies $2/(1-\varepsilon_-) < 2/(1-\varepsilon_+)$)
\begin{multline}\label{omega_3}
\norm{\partial_t \omega}_{L^{2/(1-\varepsilon_-)}} \lesssim \norm{\partial_t M \bar{\omega}}_{L^{2/(1-\varepsilon_-)}} + \norm{M \partial_t \bar{\omega}}_{L^{2/(1-\varepsilon_-)}} \lesssim (1+ \sqrt{\mathcal{E}}) \left( \norm{\bar{\omega}}_{L^{2/(1-\varepsilon_- )}} + \norm{\partial_t \bar{\omega}}_{L^{2/(1-\varepsilon_-)}_0} \right) \\
\lesssim (1+ \sqrt{\mathcal{E}})\left( \norm{\bar{\omega}}_{W^{1,2/(1-\varepsilon_- )}_0} + \norm{\partial_t \bar{\omega}}_{W^{1,q_-}_0} \right) \lesssim (\sqrt{\mathcal{E}} + \mathcal{E})\sqrt{\mathcal{D}},
\end{multline}
where in the third inequality we have also used the Sobolev embeddings. Then \eqref{omega_construction_01} follows from \eqref{omega_2} and \eqref{omega_3}.

It remains only to prove the interaction estimates stated in the third item. For each of these we will use the estimates \eqref{omega_construction_01} together with the bounds from Theorems \ref{catalog_energy} and \ref{catalog_dissipation}. Indeed,
\begin{equation}
\abs{\int_\Omega \partial_t^2 u J \omega} \lesssim \int_\Omega \abs{\partial_t^2 u} \abs{\omega} \lesssim \norm{\partial_t^2 u}_{L^2} \norm{\omega}_{L^2} \lesssim \norm{\omega}_{W^{1,4/(3-2\varepsilon_+)}} \norm{\partial_t^2 u}_{L^2} \lesssim \mathcal{E}^{3/2},
\end{equation}
which is \eqref{omega_construction_02}.
For the first part of \eqref{omega_construction_03} we bound
\begin{multline}
\abs{\int_{\Omega} \partial_t^2 u \partial_t(J \omega)} \lesssim \int_{\Omega} \abs{\partial_t^2 u} \left( \abs{\nabla \partial_t \bar{\eta}} \abs{\omega} + \abs{\partial_t \omega} \right) \lesssim \norm{\partial_t^2 u}_{L^2} \left(\norm{\partial_t \bar{\eta}}_{W^{1,\infty}} \norm{\omega}_{L^2} + \norm{\partial_t \omega}_{L^2} \right) \\
\lesssim \sqrt{\mathcal{D}}(\sqrt{\mathcal{E}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}) \lesssim (\sqrt{\mathcal{E}} + \mathcal{E}) \mathcal{D}.
\end{multline}
Next we bound
\begin{equation}
\abs{(-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t^2 u + u \cdot \nabla_{\mathcal{A}} \partial_t^2 u,J\omega)_0} \lesssim \left( \norm{\partial_t \bar{\eta}}_{L^\infty} + \norm{u}_{L^\infty} \right) \norm{\nabla \partial_t^2 u}_{L^2} \norm{\omega}_{L^2} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} \lesssim \mathcal{E} \mathcal{D},
\end{equation}
which is the second estimate in \eqref{omega_construction_03}. Then we bound
\begin{equation}
\abs{ \pp{\partial_t^2 u,\omega}} \lesssim \norm{\partial_t^2 u}_{H^1} \norm{\omega}_{H^1} \lesssim \norm{\partial_t^2 u}_{H^1} \norm{\omega}_{W^{1,2/(1-\varepsilon_-)}} \lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}},
\end{equation}
which is the third estimate in \eqref{omega_construction_03}. For the final term in \eqref{omega_construction_03} we use Proposition \ref{nid_f1} to bound
\begin{equation}
\abs{\int_\Omega J F^1 \cdot \omega} \lesssim \norm{\omega}_{H^1} (\sqrt{\mathcal{E}} + \mathcal{E}) \sqrt{\mathcal{D}} \lesssim \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} (\sqrt{\mathcal{E}} + \mathcal{E}) \sqrt{\mathcal{D}}.
\end{equation}
This completes the proof of \eqref{omega_construction_03}.
\end{proof}

\subsection{Main a priori estimate}

We now have all of the tools needed to prove our main a priori estimate.

\begin{proof}[Proof of Theorem \ref{main_apriori}]
Assume initially that $\delta_0 \le \gamma^2$, where $\gamma \in (0,1)$ is from Lemma \ref{eta_small}. We divide the rest of the proof into several steps.

\emph{Step 1 - Lowest level energy-dissipation estimates: } Corollary \ref{basic_energy} tells us that
\begin{multline}
\frac{d}{dt} \left(\int_\Omega \frac{1}{2} J \abs{u}^2 + \int_{-\ell}^\ell \frac{g}{2} \abs{\eta}^2 + \frac{\sigma}{2} \frac{\abs{\partial_1 \eta}^2}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \int_{-\ell}^\ell \sigma \mathcal{Q}(\partial_1 \zeta_0,\partial_1 \eta)\right) \\
+ \frac{\mu}{2} \int_\Omega \abs{\mathbb{D}_{\mathcal{A}} u}^2 J +\int_{\Sigma_s} \beta J \abs{u \cdot s}^2 + \kappa \bs{\partial_t \eta} =- \kappa [u\cdot \mathcal{N},\mathscr{W}h(\partial_t \eta)]_\ell .
\end{multline}
We integrate this and use Lemma \ref{eta_small} to deduce that
\begin{equation}\label{ape_sk_-1}
\mathcal{E}_{\shortparallel, 0}(t) + \int_{-\ell}^\ell \sigma \mathcal{Q}(\partial_1 \zeta_0,\partial_1 \eta(t)) + \int_{s}^t \mathcal{D}_{\shortparallel,0} \lesssim \mathcal{E}_{\shortparallel, 0}(s) + \int_{-\ell}^\ell \sigma \mathcal{Q}(\partial_1 \zeta_0,\partial_1 \eta(s)) + \int_{s}^t \kappa \abs{[u\cdot \mathcal{N},\mathscr{W}h(\partial_t \eta)]_\ell}.
\end{equation}
Theorem \ref{nid_Q} says that
\begin{equation}
\abs{\int_{-\ell}^\ell \sigma \mathcal{Q}(\partial_1 \zeta_0, \partial_1 \eta) } \lesssim \sqrt{\mathcal{E}} \ns{\eta}_{H^1} \lesssim \sqrt{\mathcal{E}} \mathcal{E}_{\shortparallel,0},
\end{equation}
and Theorem \ref{nid_W} says that
\begin{equation}
\abs{ [u\cdot \mathcal{N},\mathscr{W}h(\partial_t \eta)]_\ell } \lesssim \norm{\partial_t \eta}_{H^1} \bs{\partial_t \eta} \lesssim \sqrt{\mathcal{E}} \mathcal{D}_{\shortparallel,0},
\end{equation}
so if $\mathcal{E} \le \delta_0$ with $\delta_0$ sufficiently small, then \eqref{ape_sk_-1} implies that
\begin{equation}
\mathcal{E}_{\shortparallel, 0}(t) + \int_{s}^t \mathcal{D}_{\shortparallel,0} \lesssim \mathcal{E}_{\shortparallel, 0}(s).
\end{equation}
Then Theorem \ref{diss_enhance_eta} says
\begin{equation}
\int_s^t \ns{\eta}_{H^{3/2-\alpha}} \lesssim \mathcal{E}_{\shortparallel, 0}(s) + \mathcal{E}_{\shortparallel, 0}(t) + \int_{s}^t \mathcal{D}_{\shortparallel,0},
\end{equation}
and we may enhance the previous bound to
\begin{equation}\label{ape_sk_0}
\mathcal{E}_{\shortparallel, 0}(t) + \int_{s}^t \left(\mathcal{D}_{\shortparallel,0} + \ns{\eta}_{H^{3/2-\alpha}} \right) \lesssim \mathcal{E}_{\shortparallel, 0}(s)
\end{equation}
for all $0 \le s \le t < T$.
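For clarity, we record the elementary absorption mechanism used here (and again in subsequent steps). Writing $C$ for the implicit universal constant in the bounds above (our notation), the $\mathcal{Q}$ terms are controlled by $C\sqrt{\mathcal{E}}\,\mathcal{E}_{\shortparallel,0}$ and the contact point term by $C\sqrt{\mathcal{E}} \int_s^t \mathcal{D}_{\shortparallel,0}$, so if
\begin{equation*}
\mathcal{E} \le \delta_0 \quad \text{with} \quad C \sqrt{\delta_0} \le \frac{1}{2},
\end{equation*}
then both may be absorbed into the left side of \eqref{ape_sk_-1}, at the harmless cost of a factor of $2$.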
\emph{Step 2 - Energy-dissipation estimates for one temporal derivative: } Theorem \ref{linear_energy} applied with $(v,q,\xi) = (\partial_t u,\partial_t p,\partial_t \eta)$ and $\omega =0$ gives the identity
\begin{multline}
\frac{d}{dt} \left( \int_{\Omega} J \frac{\abs{\partial_t u}^2}{2} + \int_{-\ell}^\ell \frac{g}{2} \abs{\partial_t \eta}^2 + \frac{\sigma}{2} \frac{\abs{\partial_1 \partial_t \eta}^2}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} \right) \\
+ \frac{\mu}{2} \int_\Omega \abs{\mathbb{D}_{\mathcal{A}} \partial_t u}^2 J +\int_{\Sigma_s} \beta J \abs{\partial_t u \cdot s}^2 + \kappa\bs{\partial_t^2 \eta} = \br{\mathcal{F}_1,(\partial_t u,\partial_t p,\partial_t \eta)}
\end{multline}
for
\begin{multline}
\br{\mathcal{F}_1,(\partial_t u,\partial_t p, \partial_t \eta)} = \int_\Omega F^1 \cdot \partial_t u J + \partial_t p JF^2 - \int_{\Sigma_s} J (\partial_t u \cdot s)F^5 \\
- \int_{-\ell}^\ell \sigma F^3 \partial_1(\partial_t u \cdot \mathcal{N}) + F^4 \cdot \partial_t u - g \partial_t \eta F^6 - \sigma \frac{\partial_1 \partial_t \eta \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} - \kappa [\partial_t u\cdot \mathcal{N}, F^7]_\ell + \kappa [\partial_t^2 \eta,F^6].
\end{multline}
Integrating and using Lemma \ref{eta_small} then shows that
\begin{equation}
\mathcal{E}_{\shortparallel, 1}(t) + \int_{s}^t \mathcal{D}_{\shortparallel, 1} \lesssim \mathcal{E}_{\shortparallel, 1}(s) + \int_{s}^t \br{\mathcal{F}_1,(\partial_t u,\partial_t p,\partial_t \eta)}.
\end{equation}
Theorems \ref{nid_v_est}, \ref{nid_p_est}, \ref{nid_f3_dt}, \ref{nid_f6}, and \ref{nid_f7} then show that
\begin{equation}
\abs{ \br{\mathcal{F}_1,(\partial_t u,\partial_t p,\partial_t \eta)} } \lesssim \sqrt{\mathcal{E}} \mathcal{D},
\end{equation}
and hence we have the bound
\begin{equation}\label{ape_sk_1}
\mathcal{E}_{\shortparallel, 1}(t) + \int_{s}^t \mathcal{D}_{\shortparallel, 1} \lesssim \mathcal{E}_{\shortparallel, 1}(s) + \int_{s}^t \sqrt{\mathcal{E}} \mathcal{D}.
\end{equation}

\emph{Step 3 - Energy-dissipation estimates with two temporal derivatives: } Theorem \ref{linear_energy} applied with $(v,q,\xi) = (\partial_t^2 u,\partial_t^2 p,\partial_t^2 \eta)$ and $\omega$ from Proposition \ref{omega_construction} (which guarantees that $\omega$ can be used in Theorem \ref{linear_energy}) yields
\begin{multline}
\frac{d}{dt} \left( \int_{\Omega} J \frac{\abs{\partial_t^2 u}^2}{2} + \int_{-\ell}^\ell \frac{g}{2} \abs{\partial_t^2 \eta}^2 + \frac{\sigma}{2} \frac{\abs{\partial_1 \partial_t^2\eta}^2}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} - \int_\Omega J \partial_t^2 u\cdot \omega \right) \\
+ \frac{\mu}{2} \int_\Omega \abs{\mathbb{D}_{\mathcal{A}} \partial_t^2 u}^2 J +\int_{\Sigma_s} \beta J \abs{\partial_t^2 u\cdot s}^2 + \kappa \bs{\partial_t^3 \eta} = \br{\mathcal{F}_2,(\partial_t^2 u,\partial_t^2 \eta)} + \int_\Omega \partial_t^2 p \br{JF^2}_\Omega + \br{\mathcal{F}_3,\omega},
\end{multline}
where $\br{\cdot}_\Omega$ denotes the spatial average as in Proposition \ref{omega_construction},
\begin{multline}
\br{\mathcal{F}_2,(\partial_t^2 u,\partial_t^2 \eta)} = \int_\Omega F^1 \cdot \partial_t^2 u J - \int_{\Sigma_s} J (\partial_t^2 u \cdot s)F^5 \\
- \int_{-\ell}^\ell \sigma F^3 \partial_1(\partial_t^2 u \cdot \mathcal{N}) + F^4 \cdot \partial_t^2u - g \xi F^6 - \sigma \frac{\partial_1 \partial_t^2 \eta \partial_1 F^6}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} - \kappa [\partial_t^2 u\cdot \mathcal{N}, F^7]_\ell + \kappa [\partial_t^3 \eta,F^6],
\end{multline}
and
\begin{equation}
\br{\mathcal{F}_3,\omega} = - \int_\Omega \partial_t^2 u \cdot \partial_t(J \omega) + (-\partial_t \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t^2 u + u \cdot \nabla_{\mathcal{A}} \partial_t^2 u,J \omega)_0 + \pp{\partial_t^2 u,\omega} - \int_\Omega F^1 \cdot \omega J.
\end{equation}
Theorems \ref{nid_v_est}, \ref{nid_f3_dt2}, \ref{nid_f6}, and \ref{nid_f7} show that
\begin{equation}
\abs{ \int_{s}^t \br{\mathcal{F}_2,(\partial_t^2 u,\partial_t^2 \eta)}} \lesssim \sqrt{\mathcal{E}} \mathcal{D}.
\end{equation}
For the second term we rewrite
\begin{equation}
\int_\Omega \partial_t^2 p \br{J F^2}_\Omega = \frac{d}{dt} \left(\br{J F^2}_\Omega \int_\Omega \partial_t p \right) - \partial_t \br{J F^2}_\Omega \int_\Omega \partial_t p =: \frac{d}{dt} I_1 - I_2.
\end{equation}
We then use Proposition \ref{ne2_f2} to bound
\begin{equation}
\abs{I_1} \lesssim \norm{\partial_t p}_{L^2} \abs{\br{J F^2}_\Omega} \lesssim \mathcal{E}^{3/2}
\end{equation}
and (since $\partial_t \br{J F^2}_\Omega = \br{\partial_t(J F^2)}_\Omega$)
\begin{equation}
\abs{I_2} \lesssim \norm{\partial_t p}_{L^2} \abs{ \partial_t \br{J F^2}_\Omega} \lesssim \sqrt{\mathcal{D}} \sqrt{\mathcal{E}} \sqrt{\mathcal{D}} .
\end{equation}
Finally, the interaction estimates of Proposition \ref{omega_construction} show that
\begin{equation}
\abs{ \br{\mathcal{F}_3,\omega}} \lesssim \sqrt{\mathcal{E}} \mathcal{D}.
\end{equation}
Combining all the above then shows that
\begin{equation}\label{ape_sk_2}
\mathcal{E}_{\shortparallel, 2}(t) - (\mathcal{E}(t))^{3/2} + \int_{s}^t \mathcal{D}_{\shortparallel, 2} \lesssim \mathcal{E}_{\shortparallel, 2}(s) + (\mathcal{E}(s))^{3/2} + \int_{s}^t \sqrt{\mathcal{E}} \mathcal{D} .
\end{equation}

\emph{Step 4 - Synthesized energy-dissipation estimates: } We sum \eqref{ape_sk_0}, \eqref{ape_sk_1}, and \eqref{ape_sk_2} to see that
\begin{equation}\label{ape_1}
\mathcal{E}_{\shortparallel}(t) - (\mathcal{E}(t))^{3/2} + \int_{s}^t \left(\mathcal{D}_{\shortparallel} + \ns{\eta}_{H^{3/2-\alpha}} \right) \lesssim \mathcal{E}_{\shortparallel}(s) + (\mathcal{E}(s))^{3/2} + \int_{s}^t \sqrt{\mathcal{E}} \mathcal{D} .
\end{equation}
Subsequently, we sum the estimates provided by Theorem \ref{diss_enhance_dtketa} with $k=1$ and $k=2$ to deduce the enhancement estimate
\begin{equation}
\int_{s}^t \ns{\partial_t \eta}_{H^{3/2 - \alpha}} + \ns{\partial_t^2 \eta}_{H^{3/2 - \alpha}} \lesssim \mathcal{E}_{\shortparallel}(s) + \mathcal{E}_{\shortparallel}(t) + \int_{s}^t \left( \mathcal{D}_{\shortparallel} + \sqrt{\mathcal{E}}\mathcal{D}\right),
\end{equation}
and upon combining this with \eqref{ape_1} we find that
\begin{equation}\label{ape_2}
\mathcal{E}_{\shortparallel}(t) - (\mathcal{E}(t))^{3/2} + \int_{s}^t \left(\mathcal{D}_{\shortparallel} + \sum_{k=0}^2 \ns{\partial_t^k \eta}_{H^{3/2-\alpha}} \right) \lesssim \mathcal{E}_{\shortparallel}(s) + (\mathcal{E}(s))^{3/2} + \int_{s}^t \sqrt{\mathcal{E}} \mathcal{D} .
\end{equation}

\emph{Step 5 - Elliptic dissipation enhancements: } We now combine the estimates of Propositions \ref{ne1_g1}--\ref{ne1_g7} with Theorem \ref{A_stokes_stress_solve}, applied with $v = \partial_t u$, $Q = \partial_t p$, and $\xi = \partial_t \eta$ and $\delta = \varepsilon_-$, to see that
\begin{equation}
\norm{\partial_t u}_{W^{2,q_-}} + \norm{\partial_t p}_{W^{1,q_-}} + \norm{\partial_t \eta}_{W^{3-1/q_-,q_-}} \lesssim \norm{\partial_t^2 u}_{L^{q_-}} + \norm{\partial_t^2 \eta}_{H^{3/2-\alpha}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation}
Similarly, we combine the estimates of Propositions \ref{ne1_g1}--\ref{ne1_g7} with Theorem \ref{A_stokes_stress_solve}, applied with $v = u$, $Q = p$, and $\xi = \eta$ and $\delta = \varepsilon_+$, to see that
\begin{equation}
\norm{u}_{W^{2,q_+}} + \norm{p}_{W^{1,q_+}} + \norm{\eta}_{W^{3-1/q_+,q_+}} \lesssim \norm{\partial_t u}_{L^{q_+}} + \norm{\partial_t \eta}_{H^{3/2-\alpha}} + \sqrt{\mathcal{E}} \sqrt{\mathcal{D}}.
\end{equation}
Since $q_- < q_+ < 2$ we can then bound
\begin{equation}
\ns{\partial_t^2 u}_{L^{q_-}} + \ns{\partial_t u}_{L^{q_+}} \lesssim \ns{\partial_t^2 u}_{L^{2}} + \ns{\partial_t u}_{L^{2}} \lesssim \mathcal{D}_{\shortparallel}.
\end{equation}
As such, we can combine these with \eqref{ape_2} to deduce that
\begin{multline}\label{ape_3}
\mathcal{E}_{\shortparallel}(t) - (\mathcal{E}(t))^{3/2} + \int_{s}^t \left(\mathcal{D}_{\shortparallel} +\sum_{k=0}^2 \ns{\partial_t^k \eta}_{H^{3/2-\alpha}} \right) + \int_{s}^t \left( \norm{u}_{W^{2,q_+}} + \norm{p}_{W^{1,q_+}} + \norm{\eta}_{W^{3-1/q_+,q_+}} \right) \\
+ \int_{s}^t \left( \norm{\partial_t u}_{W^{2,q_-}} + \norm{\partial_t p}_{W^{1,q_-}} + \norm{\partial_t \eta}_{W^{3-1/q_-,q_-}} \right) \lesssim \mathcal{E}_{\shortparallel}(s) + (\mathcal{E}(s))^{3/2} + \int_{s}^t \sqrt{\mathcal{E}} \mathcal{D} .
\end{multline}
Next we sweep up the missing terms in $\mathcal{D}$. Note that for $0 \le k \le 2$ we have that
\begin{equation}\label{ape_10}
\partial_t^{k+1} \eta - \partial_t^k u \cdot \mathcal{N} = F^{6,k},
\end{equation}
where $F^{6,0} =0$, $F^{6,1}$ is given by \eqref{dt1_f6}, and $F^{6,2}$ is given by \eqref{dt2_f6}, and in any case $F^{6,k}$ vanishes at the endpoints $\pm\ell$; consequently,
\begin{equation}
\sum_{k=0}^2 [\partial_t^k u \cdot \mathcal{N}]_\ell^2 = \sum_{k=0}^2 [\partial_t^{k+1} \eta]_\ell^2 \le \mathcal{D}_{\shortparallel}.
\end{equation}
Similarly, using \eqref{ape_10} with $k=2$ in conjunction with Proposition \ref{ne2_f6}, we find that
\begin{equation}
\ns{\partial_t^3 \eta}_{H^{1/2-\alpha}} \lesssim \ns{\partial_t^2 u \cdot \mathcal{N}}_{H^{1/2}((-\ell,\ell))} + \ns{F^{6,2}}_{H^{1/2-\alpha}} \lesssim \ns{\partial_t^2 u}_{H^1} + \mathcal{E} \mathcal{D} \lesssim \mathcal{D}_{\shortparallel} + \mathcal{E} \mathcal{D}.
\end{equation}
Combining these with \eqref{ape_3} then leads us to the estimate
\begin{equation}
\mathcal{E}_{\shortparallel}(t) - (\mathcal{E}(t))^{3/2} + \int_{s}^t \mathcal{D} \lesssim \mathcal{E}_{\shortparallel}(s) + (\mathcal{E}(s))^{3/2} + \int_{s}^t \sqrt{\mathcal{E}} \mathcal{D},
\end{equation}
and in turn we see from this that if $\mathcal{E} \le \delta_0$ for sufficiently small universal $\delta_0$, then we can absorb the last term on the right onto the left side and deduce that
\begin{equation}\label{ape_4}
\mathcal{E}_{\shortparallel}(t) - (\mathcal{E}(t))^{3/2} + \int_{s}^t \mathcal{D} \lesssim \mathcal{E}_{\shortparallel}(s) + (\mathcal{E}(s))^{3/2}.
\end{equation}

\emph{Step 6 - Energetic enhancement through dissipation integration:} We now integrate the dissipation to improve the energetic estimates with Proposition \ref{temp_deriv_interp}:
\begin{multline}
\ns{\partial_t \eta(t)}_{H^{3/2 +(\varepsilon_--\alpha)/2}} \lesssim \ns{\partial_t \eta(s)}_{H^{3/2+(\varepsilon_--\alpha)/2}} + \int_{s}^t \ns{\partial_t \eta}_{H^{3/2+\varepsilon_-}} + \ns{\partial_t^2 \eta}_{H^{3/2-\alpha}} \\
\lesssim \ns{\partial_t \eta(s)}_{H^{3/2+(\varepsilon_--\alpha)/2}} + \int_{s}^t \mathcal{D}
\end{multline}
and
\begin{equation}
\ns{\partial_t u(t)}_{H^{1+\varepsilon_-/2}} \lesssim \ns{\partial_t u(s)}_{H^{1+\varepsilon_-/2}} + \int_{s}^t \ns{\partial_t u}_{H^{1+\varepsilon_-}} + \ns{\partial_t^2 u}_{H^1} \lesssim \ns{\partial_t u(s)}_{H^{1+\varepsilon_-/2}} + \int_{s}^t \mathcal{D}.
\end{equation}
We can then combine these with \eqref{ape_4} to deduce that
\begin{equation}\label{ape_5}
\tilde{\mathcal{E}}(t) - (\mathcal{E}(t))^{3/2} + \int_{s}^t \mathcal{D} \lesssim \tilde{\mathcal{E}}(s) + (\mathcal{E}(s))^{3/2}
\end{equation}
for
\begin{equation}
\tilde{\mathcal{E}} := \mathcal{E}_{\shortparallel} + \ns{\partial_t u}_{H^{1+\varepsilon_-/2}} + \ns{\partial_t \eta}_{H^{3/2 +(\varepsilon_--\alpha)/2}}.
\end{equation}

\emph{Step 7 - Elliptic energy enhancement: } Propositions \ref{ne0_g1}--\ref{ne0_g7} and Theorem \ref{A_stokes_stress_solve}, applied to $(v,Q,\xi) = (u,p,\eta)$ and $\delta = \varepsilon_+$, show that
\begin{equation}
\norm{u}_{W^{2,q_+}} + \norm{p}_{W^{1,q_+}} + \norm{\eta}_{W^{3-1/q_+,q_+}} \lesssim \norm{\partial_t u}_{L^2} + \norm{\partial_t \eta}_{H^{3/2-\alpha}} + \mathcal{E} \lesssim \sqrt{\tilde{\mathcal{E}}} + \mathcal{E}.
\end{equation}
Theorem \ref{en_enhance_dtp} provides the estimate
\begin{equation}
\norm{\partial_t p}_{L^2} \lesssim \norm{\partial_t u}_{H^1} + \norm{\partial_t^2 u}_{L^2}+ \norm{\partial_t \eta}_{H^{3/2 + (\varepsilon_- - \alpha)/2}} + \mathcal{E} + \mathcal{E}^{3/2} \lesssim \sqrt{\tilde{\mathcal{E}}} + \mathcal{E} + \mathcal{E}^{3/2}.
\end{equation}
Squaring these and summing with $\mathcal{E}_{\shortparallel}$ then shows that
\begin{equation}
\mathcal{E} \lesssim \tilde{\mathcal{E}} + \mathcal{E}^{3/2},
\end{equation}
and so if $\mathcal{E} \le \delta_0$, with $\delta_0$ made smaller than another universal constant if need be, then
\begin{equation}
\mathcal{E} \asymp \tilde{\mathcal{E}}.
\end{equation}
Plugging this into \eqref{ape_5} shows that
\begin{equation}\label{ape_6}
\mathcal{E}(t) - (\mathcal{E}(t))^{3/2} + \int_{s}^t \mathcal{D} \lesssim \mathcal{E}(s) + (\mathcal{E}(s))^{3/2}.
\end{equation}

\emph{Step 8 - Conclusion: } Taking $\delta_0$ again to be smaller than a universal constant if necessary, we can absorb the $\mathcal{E}^{3/2}$ terms in \eqref{ape_6}, resulting in the inequality
\begin{equation}\label{ape_7}
\mathcal{E}(t) + \int_{s}^t \mathcal{D} \lesssim \mathcal{E}(s)
\end{equation}
for $0 \le s \le t$. Note that $\mathcal{E} \lesssim \mathcal{D}$, so $\mathcal{E}$ is integrable on $(0,T)$.
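Concretely (this is only a reformulation of \eqref{ape_7} and the bound $\mathcal{E} \lesssim \mathcal{D}$), if $C \ge 1$ is a universal constant for which $\mathcal{E} \le C \mathcal{D}$ and \eqref{ape_7} holds with constant $C$, then
\begin{equation*}
\mathcal{E}(t) + \frac{1}{C} \int_s^t \mathcal{E} \le \mathcal{E}(t) + \int_s^t \mathcal{D} \le C \mathcal{E}(s) \quad \text{for } 0 \le s \le t < T,
\end{equation*}
so after the harmless time rescaling $x(r) := \mathcal{E}(Cr)$ the function $x$ satisfies the hypothesis \eqref{gronwall_variant_00} with $\alpha = C$.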
We can then apply the Gronwall-type estimate of Proposition \ref{gronwall_variant} to see that $\mathcal{E}$ decays exponentially: there exists a universal $\lambda >0$ such that
\begin{equation}
\mathcal{E}(t) \lesssim e^{-\lambda t} \mathcal{E}(0) \text{ for all } 0 \le t < T.
\end{equation}
Also, taking $s =0$ in \eqref{ape_7} and sending $t \to T$ shows that
\begin{equation}
\int_0^T \mathcal{D} \lesssim \mathcal{E}(0).
\end{equation}
Combining the previous two estimates completes the proof.
\end{proof}

\appendix

\section{Nonlinearities}\label{sec_nonlinear_records}

In this appendix we record the form of the commutators that arise in applying $\partial_t^k$ to \eqref{ns_geometric} as well as some estimates for the function $\mathcal{R}$ defined by \eqref{R_def}.

\subsection{Nonlinear commutator terms when $k=1$ }\label{fi_dt1}

When $\partial_t$ is applied to \eqref{ns_geometric} this results in the following terms appearing in \eqref{linear_geometric} for $k=1$.
\begin{equation}\label{dt1_f1}
F^1 = - \diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(p,u) + \mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u - u \cdot \nabla_{\partial_t \mathcal{A}} u - \partial_t u \cdot \nabla_{\mathcal{A}} u + \partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u + \partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 u
\end{equation}
\begin{equation}\label{dt1_f2}
F^2 = -\diverge_{\partial_t \mathcal{A}} u
\end{equation}
\begin{equation}\label{dt1_f3}
F^3 = \partial_t [ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)]
\end{equation}
\begin{equation}\label{dt1_f4}
F^4 = \mu \mathbb{D}_{\partial_t \mathcal{A}} u \mathcal{N} +\left[ g\eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) - S_{\mathcal{A}}(p,u) \right] \partial_t \mathcal{N}
\end{equation}
\begin{equation}\label{dt1_f5}
F^5 = \mu \mathbb{D}_{\partial_t \mathcal{A}} u \nu \cdot \tau
\end{equation}
\begin{equation}\label{dt1_f6}
F^6 = u \cdot \partial_t \mathcal{N} = -u_1 \partial_1 \partial_t \eta.
\end{equation}
\begin{equation}\label{dt1_f7}
F^7 = \mathscr{W}h'(\partial_t \eta) \partial_t^2 \eta.
\end{equation}
Observe that $F^6$ vanishes at $\pm \ell$ since $u_1$ vanishes there.

\subsection{Nonlinear commutator terms when $k=2$ }\label{fi_dt2}

When $\partial_t^2$ is applied to \eqref{ns_geometric} this results in the following terms appearing in \eqref{linear_geometric}.
\begin{multline}\label{dt2_f1}
F^1 = - 2\diverge_{\partial_t \mathcal{A}} S_\mathcal{A}(\partial_t p,\partial_t u) + 2\mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \\
- \diverge_{\partial_t^2 \mathcal{A}} S_\mathcal{A}(p,u) + 2 \mu \diverge_{\partial_t \mathcal{A}} \mathbb{D}_{\partial_t \mathcal{A}} u + \mu \diverge_{\mathcal{A}} \mathbb{D}_{\partial_t^2 \mathcal{A}} u \\
- 2 u \cdot \nabla_{\partial_t \mathcal{A}} \partial_t u - 2 \partial_t u \cdot \nabla_{\mathcal{A}} \partial_t u - 2 \partial_t u \cdot \nabla_{\partial_t \mathcal{A}} u - u \cdot \nabla_{\partial_t^2 \mathcal{A}} u - \partial_t^2 u \cdot \nabla_{\mathcal{A}} u \\
+ 2 \partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 \partial_t u + 2 \partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 \partial_t u + 2 \partial_t^2 \bar{\eta} \frac{\phi}{\zeta_0} \partial_t K \partial_2 u + \partial_t^3 \bar{\eta} \frac{\phi}{\zeta_0} K \partial_2 u + \partial_t \bar{\eta} \frac{\phi}{\zeta_0} \partial_t^2 K\partial_2 u.
\end{multline}
\begin{equation}\label{dt2_f2}
F^2 = -\diverge_{\partial_t^2 \mathcal{A}} u - 2\diverge_{\partial_t \mathcal{A}}\partial_t u
\end{equation}
\begin{equation}\label{dt2_f3}
F^3 = \partial_t^2 [ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)]
\end{equation}
\begin{multline}\label{dt2_f4}
F^4 = 2\mu \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \mathcal{N} + \mu \mathbb{D}_{\partial_t^2 \mathcal{A}} u \mathcal{N} + \mu \mathbb{D}_{\partial_t \mathcal{A}} u \partial_t \mathcal{N}\\
+\left[ 2g \partial_t \eta - 2\sigma \partial_1 \left(\frac{\partial_1 \partial_t \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \partial_t[ \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta)] \right) -2 S_{\mathcal{A}}(\partial_t p,\partial_t u) \right] \partial_t \mathcal{N} \\
+ \left[ g\eta - \sigma \partial_1 \left(\frac{\partial_1 \eta}{(1+\abs{\partial_1 \zeta_0}^2)^{3/2}} + \mathcal{R}(\partial_1 \zeta_0,\partial_1 \eta) \right) - S_{\mathcal{A}}(p,u) \right] \partial_t^2 \mathcal{N}
\end{multline}
\begin{equation}\label{dt2_f5}
F^5 = 2 \mu \mathbb{D}_{\partial_t \mathcal{A}} \partial_t u \nu \cdot \tau + \mu \mathbb{D}_{\partial_t^2 \mathcal{A}} u \nu \cdot \tau
\end{equation}
\begin{equation}\label{dt2_f6}
F^6 = 2 \partial_t u \cdot \partial_t \mathcal{N} + u \cdot \partial_t^2 \mathcal{N} = -2 \partial_t u_1
\partial_1 \partial_t \eta - u_1 \partial_1 \partial_t^2 \eta.
\end{equation}
\begin{equation}\label{dt2_f7}
F^7 = \mathscr{W}h'(\partial_t \eta) \partial_t^3 \eta + \mathscr{W}h''(\partial_t \eta) (\partial_t^2 \eta)^2.
\end{equation}
Once more, note that $F^6$ vanishes at $\pm \ell$ since $u_1$ and $\partial_t u_1$ vanish there.

\subsection{$\mathcal{R}$ and $\mathcal{Q}$}

Recall that $\mathcal{R}$ is given by \eqref{R_def}. The following records some essential estimates for it.

\begin{prop}\label{R_prop}
The mapping $\mathcal{R} \in C^\infty(\mathbb{R}^{2})$ defined by \eqref{R_def} obeys the following estimates.
\begin{multline}
\sup_{(y,z) \in \mathbb{R}^{2}} \left[ \abs{\frac{1}{z^3}\int_0^z \mathcal{R}(y,s)ds} + \abs{\frac{\mathcal{R}(y,z)}{z^2}} + \abs{\frac{\partial_z \mathcal{R}(y,z)}{z}} + \abs{\frac{\partial_y \mathcal{R}(y,z)}{z^2}} \right. \\
\left. + \abs{ \partial_z^2 \mathcal{R}(y,z)} + \abs{\frac{\partial_y^2 \mathcal{R}(y,z)}{z^2}} + \abs{\frac{\partial_z \partial_y \mathcal{R}(y,z)}{z}} + \abs{ \partial_z^3 \mathcal{R}(y,z)} + \abs{\frac{\partial_y^2 \partial_z \mathcal{R}(y,z)}{z}} + \abs{ \partial_z^2 \partial_y \mathcal{R}(y,z)} \right] < \infty.
\end{multline}
\end{prop}

\begin{proof}
These bounds follow from elementary calculus, so we omit the details.
\end{proof}

We also record here the definition of a special map related to $\mathcal{R}$. We define $\mathcal{Q} \in C^\infty(\mathbb{R}^2)$ via
\begin{equation}\label{Q_def}
\mathcal{Q}(y,z) := \int_0^z \mathcal{R}(y,r) dr \Rightarrow \frac{\partial \mathcal{Q}}{\partial z}(y,z) = \mathcal{R}(y,z).
\end{equation}

\section{Miscellaneous analysis tools}\label{sec_analysis_tools}

In this appendix we record a host of analytic results that are used throughout the paper.

\subsection{Product estimates}

We begin with some useful product estimates. First we recall a fact about Besov spaces.

\begin{prop}
If $s >0$ and $1 \le p,q \le \infty$, then $B^{s}_{p,q}(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n)$ is an algebra, and
\begin{equation}
\norm{fg}_{B^{s}_{p,q}} \lesssim \norm{f}_{L^\infty} \norm{g}_{B^s_{p,q}} + \norm{f}_{B^{s}_{p,q}} \norm{g}_{L^\infty}.
\end{equation}
In particular, if $s > n/p$ then $B^{s}_{p,q}(\mathbb{R}^n) \hookrightarrow L^\infty(\mathbb{R}^n)$ and hence $B^{s}_{p,q}(\mathbb{R}^n)$ is a Banach algebra.
\end{prop}

\begin{proof}
There are many proofs. See, for instance, Proposition 1.4.3 of \cite{danchin}, Proposition 6.2 of \cite{mironescu_russ}, or Theorem 2 of \cite{runst}.
\end{proof}

Then we can prove the supercritical product estimate.

\begin{thm}
Suppose $1 < p < \infty$ and that $r>0$ and $s > \max\{n/p,r\}$. Then for $\varphi \in W^{s,p}(\mathbb{R}^n)$ and $\psi \in W^{r,p}(\mathbb{R}^n)$ we have that $\varphi \psi \in W^{r,p}(\mathbb{R}^n)$ and
\begin{equation}
\norm{\varphi \psi}_{W^{r,p}} \lesssim \norm{\varphi}_{W^{s,p}} \norm{\psi}_{W^{r,p}}.
\end{equation}
\end{thm}

\begin{proof}
Note first that for $r > n/p$ we have that $W^{r,p}(\mathbb{R}^n) = B^r_{p,p}(\mathbb{R}^n)$ is an algebra, and so the stated result is trivial. We may thus reduce to the case $0 < r \le n/p$.
If $r=0$, then
\begin{equation}
\norm{\varphi \psi }_{L^p} \le \norm{\varphi}_{L^\infty} \norm{\psi}_{L^p} \le c \norm{\varphi}_{W^{s,p}} \norm{\psi}_{L^p}
\end{equation}
by virtue of the standard supercritical embedding $W^{s,p}(\mathbb{R}^n)\hookrightarrow C^0_b(\mathbb{R}^n)$. On the other hand, since $W^{s,p}(\mathbb{R}^n) = B^s_{p,p}(\mathbb{R}^n)$ is an algebra for $s > n/p$,
\begin{equation}
\norm{\varphi \psi}_{W^{s,p}} \lesssim \norm{\varphi}_{W^{s,p}} \norm{\psi}_{W^{s,p}}.
\end{equation}
Thus, if we define the operator $T_\varphi$ via $T_\varphi \psi = \varphi \psi$, then $T_\varphi \in \mathcal{L}(L^p(\mathbb{R}^n)) \cap \mathcal{L}(W^{s,p}(\mathbb{R}^n))$ with
\begin{equation}
\norm{T_\varphi}_{\mathcal{L}(L^p)} \lesssim \norm{\varphi}_{W^{s,p}} \text{ and } \norm{T_\varphi}_{\mathcal{L}(W^{s,p})} \lesssim \norm{\varphi}_{W^{s,p}}.
\end{equation}
Standard interpolation theory (see, for instance, \cite{triebel}) then implies that $T_\varphi \in \mathcal{L}(W^{r,p}(\mathbb{R}^n))$ for all $0 < r < s$, and
\begin{equation}
\norm{T_\varphi}_{\mathcal{L}(W^{r,p})} \lesssim \norm{\varphi}_{W^{s,p}}.
\end{equation}
This is equivalent to the stated estimate when $0 < r \le n/p$.
\end{proof}

This result may be extended to bounded domains through the use of extension operators.

\begin{thm}\label{supercrit_prod}
Let $\varnothing \neq \Omega \subset \mathbb{R}^n$ be bounded and open with Lipschitz boundary (or an open interval when $n=1$). If $1 < p < \infty$, $r>0$, and $s > \max\{n/p,r\}$, then
\begin{equation}
\norm{fg}_{W^{r,p}(\Omega)} \lesssim \norm{f}_{W^{s,p}(\Omega)} \norm{g}_{W^{r,p}(\Omega)}.
\end{equation}
\end{thm}

\begin{proof}
If $E$ is the Stein extension operator (see, for instance, \cite{stein}), then
\begin{equation}
\norm{fg}_{W^{r,p}(\Omega)} \lesssim \norm{Ef Eg}_{W^{r,p}(\mathbb{R}^n)} \lesssim \norm{Ef}_{W^{s,p}(\mathbb{R}^n)} \norm{Eg}_{W^{r,p}(\mathbb{R}^n)} \lesssim \norm{f}_{W^{s,p}(\Omega)} \norm{g}_{W^{r,p}(\Omega)}.
\end{equation}
\end{proof}

\subsection{Poisson extension}

Let $b >0$. Given a Schwartz function $f: \mathbb{R} \to \mathbb{R}$, we define its Poisson extension $\mathcal{P} f : \mathbb{R} \times (-b,0) \to \mathbb{R}$ via
\begin{equation}\label{poisson_def}
\mathcal{P}f(x_1,x_2) = \int_{\mathbb{R}} \hat{f}(\xi) e^{2\pi \abs{\xi}x_2} e^{2\pi i x_1 \xi} d\xi.
\end{equation}
The following records some basic properties of this map.

\begin{prop}\label{poisson_prop}
Let $0 < b < \infty$. The following hold.
\begin{enumerate}
 \item $\mathcal{P}$ extends to a bounded linear operator from $L^p(\mathbb{R})$ to $L^p(\mathbb{R} \times(-b,0))$ for each $1 \le p \le \infty$.
 \item $\mathcal{P}$ extends to a bounded linear operator from $H^{s-1/2}(\mathbb{R})$ to $H^{s}(\mathbb{R} \times(-b,0))$ for all $s \ge 1/2$.
 \item Let $1 < p < \infty$. Then $\mathcal{P}$ extends to a bounded linear operator from $W^{s-1/p,p}(\mathbb{R})$ to $W^{s,p}(\mathbb{R} \times(-b,0))$ for all $2 \le s \in \mathbb{R}$.
\end{enumerate}
\end{prop}

\begin{proof}
The first item follows from the fact that $\mathcal{P}$ can be represented by convolution with the Poisson kernel, Young's inequality, and the fact that $b$ is finite. The second item follows from simple calculations with the Fourier representation \eqref{poisson_def}: for instance, see Lemma A.5 of \cite{guo_tice_inf}.
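In the first item, the Poisson kernel in question is the classical one; we record it here for the reader's convenience. For Schwartz $f$ and $-b < x_2 < 0$, the Fourier representation \eqref{poisson_def} may be rewritten as
\begin{equation*}
\mathcal{P}f(x_1,x_2) = \int_{\mathbb{R}} \frac{1}{\pi} \frac{\abs{x_2}}{(x_1-y)^2 + x_2^2} f(y)\, dy,
\end{equation*}
and since the kernel has unit $L^1(\mathbb{R})$ norm for each fixed $x_2$, Young's inequality yields $\norm{\mathcal{P}f(\cdot,x_2)}_{L^p(\mathbb{R})} \le \norm{f}_{L^p(\mathbb{R})}$; integrating in $x_2$ over the bounded interval $(-b,0)$ then gives the first item.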
For the third item we note that $\mathcal{P}f$ satisfies the Dirichlet problem
\begin{equation}
\begin{cases}
\Delta \mathcal{P} f =0 &\text{in }\mathbb{R}^2_- = \{x \in \mathbb{R}^2 \;\vert\; x_2 < 0\} \\
\mathcal{P} f = f & \text{on } \partial \mathbb{R}^2_-.
\end{cases}
\end{equation}
Suppose that $f \in W^{k-1/p,p}(\mathbb{R})$ for some $2 \le k \in \mathbb{N}$. Then standard trace theory shows that there exists $F \in W^{k,p}(\mathbb{R}^2_-)$ such that $F = f$ on $\partial \mathbb{R}^2_-$. Then $g = \mathcal{P}f - F$ satisfies the boundary value problem
\begin{equation}
\begin{cases}
\Delta g =-\Delta F \in W^{k-2,p}(\mathbb{R}^2_-) &\text{in }\mathbb{R}^2_- \\
g = 0 & \text{on } \partial \mathbb{R}^2_-.
\end{cases}
\end{equation}
The $L^p$ elliptic theory (see, for instance, \cite{adn_1}) then shows that for each $x \in \mathbb{R}$ we have the estimate
\begin{equation}
\norm{g}_{W^{k,p}(Q_-((x,0),b))} \le C(k,p,b) \left( \norm{F}_{W^{k,p}(Q_-((x,0),2b))} + \norm{g}_{L^{p}(Q_-((x,0),2b))} \right),
\end{equation}
where we have written $Q_-((x,0),r) = (x-r,x+r) \times (-r,0)$ for the lower half-cube. Writing
\begin{equation}
\mathbb{R} \times (-b,0) = \bigcup_{n \in \mathbb{Z}} Q_-((nb,0),b),
\end{equation}
we deduce from this and the simple overlap geometry of these cubes that
\begin{equation}
\norm{g}_{W^{k,p}(\mathbb{R} \times (-b,0))} \le C(k,p,b) \left( \norm{F}_{W^{k,p}(\mathbb{R} \times (-2b,0))} + \norm{g}_{L^{p}(\mathbb{R} \times (-2b,0))} \right).
\end{equation}
However, from the first item (applied with $2b$ in place of $b$) and trace theory we know that
\begin{equation}
\norm{g}_{L^p(\mathbb{R} \times (-2b,0))} \le \norm{\mathcal{P} f}_{L^p(\mathbb{R} \times (-2b,0))} + \norm{F}_{L^p(\mathbb{R} \times (-2b,0))} \lesssim \norm{f}_{W^{k-1/p,p}(\mathbb{R} )},
\end{equation}
and hence
\begin{equation}
\norm{g}_{W^{k,p}(\mathbb{R} \times (-b,0))} \lesssim \norm{f}_{W^{k-1/p,p}(\mathbb{R} )}.
\end{equation}
In turn, we deduce that
\begin{equation}
\norm{\mathcal{P}f}_{W^{k,p}(\mathbb{R} \times (-b,0))} \lesssim \norm{f}_{W^{k-1/p,p}(\mathbb{R} )}.
\end{equation}
The previous estimate shows that $\mathcal{P}$ extends to a bounded linear map from $W^{k-1/p,p}(\mathbb{R} )$ to $W^{k,p}(\mathbb{R} \times (-b,0))$ for every $2 \le k \in \mathbb{N}$ and $1 < p < \infty$. Then standard interpolation theory shows that it extends to a bounded linear operator between the same spaces with $k$ replaced by $2 \le s \in \mathbb{R}$, and this is the third item.
\end{proof}

\subsection{The Bogovskii operator}

The Bogovskii operator \cite{bogovskii} gives an explicit right inverse to the divergence operator via a singular integral operator. The operator may be readily defined in Lipschitz domains and avoids many of the technical difficulties encountered in using PDE-based methods to construct such right inverses. We record some properties of this operator now.

\begin{prop}\label{bogovskii}
Let $\Omega \subset \mathbb{R}^2$ be given by \eqref{Omega_def}, and let $1 < p < \infty$. There exists a locally integrable function $G_\Omega : \Omega \times \Omega \to \mathbb{R}^2$ such that the integral operator
\begin{equation}
\mathcal{B}_\Omega f(x) = \int_{\Omega} G_\Omega(x,y) f(y) dy
\end{equation}
is well-defined for $f \in \mathring{L}^{p}(\Omega) = \{f \in L^p(\Omega) \;\vert\; \int_\Omega f =0\}$ and satisfies the following.
\begin{enumerate}
 \item $\mathcal{B}_\Omega$ is a bounded linear map from $\mathring{L}^{p}(\Omega)$ to $W^{1,p}_0(\Omega;\mathbb{R}^2)$.
 \item If $f \in \mathring{L}^{p}(\Omega)$, then $u = \mathcal{B}_\Omega f \in W^{1,p}_0(\Omega;\mathbb{R}^2)$ satisfies
\begin{equation}
\begin{cases}
\diverge u =f &\text{in }\Omega \\
u =0 &\text{on } \partial \Omega.
\end{cases}
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
See \cite{bogovskii} for the original construction or Chapter 2 of \cite{acosta_duran} for a more detailed treatment.
\end{proof}
\subsection{Gronwall variant}
We now record a variant of the classic Gronwall inequality, based on a result in \cite{maslova}.
\begin{prop}\label{gronwall_variant}
Let $0 < T \le \infty$ and suppose that $x : [0,T) \to [0,\infty)$ is integrable. Further suppose that there exists $\alpha > 0$ such that $x$ satisfies
\begin{equation}\label{gronwall_variant_00}
x(t) + \int_s^t x(r) dr \le \alpha x(s) \text{ for all } 0 \le s \le t < T.
\end{equation}
Then
\begin{equation}\label{gronwall_variant_01}
x(t) \le \min\{2,\alpha \sqrt{e}\} e^{-\frac{t}{2\alpha}} x(0) \text{ for all }0 \le t < T.
\end{equation}
\end{prop}
\begin{proof}
First note that \eqref{gronwall_variant_00} provides the trivial estimate
\begin{equation}\label{gronwall_variant_1}
x(t) \le \alpha x(0) \text{ for all }0 \le t < T.
\end{equation}
Now fix $0 < t < T$ and define the absolutely continuous function $y: [0,t] \to [0,\infty)$ via $y(s) = \int_s^t x(r) dr$. Then \eqref{gronwall_variant_00} implies that
\begin{equation}
y(s) \le \alpha x(s) = - \alpha \dot{y}(s) \text{ for a.e. } 0 \le s \le t
\end{equation}
and so the standard Gronwall estimate and \eqref{gronwall_variant_00} imply that
\begin{equation}
y(s) \le e^{-s/\alpha} y(0) = e^{-s/\alpha} \int_0^t x(r) dr \le e^{-s/\alpha} x(0) \text{ for all }0 \le s \le t.
\end{equation}
We then integrate \eqref{gronwall_variant_00} over $s \in [t/2,t]$ and use this estimate to see that
\begin{equation}
\frac{t}{2} x(t) = \int_{t/2}^t x(t) ds \le \alpha \int_{t/2}^t x(s) ds = \alpha y(t/2) \le \alpha e^{-t/(2\alpha)} x(0),
\end{equation}
and hence
\begin{equation}\label{gronwall_variant_2}
x(t) \le \frac{2\alpha}{t} e^{-t/(2\alpha)} x(0) \text{ for all } 0 \le t < T.
\end{equation}
Combining \eqref{gronwall_variant_1} and \eqref{gronwall_variant_2}, we deduce that
\begin{equation}
x(t) \le \min\left\{\alpha, \frac{2\alpha}{t} e^{-\frac{t}{2\alpha}} \right\} x(0) \text{ for all }0 \le t < T.
\end{equation}
The result then follows from this after noting that
\begin{equation}
\alpha \le t \Rightarrow \frac{2\alpha}{t} e^{-\frac{t}{2\alpha}} \le 2e^{-\frac{t}{2\alpha}} \text{ and } 0 \le t < \alpha \Rightarrow \alpha \le \alpha e^{1/2} e^{-\frac{t}{2\alpha}},
\end{equation}
which mean that
\begin{equation}
\min\left\{\alpha, \frac{2\alpha}{t} e^{-\frac{t}{2\alpha}} \right\} \le \min\{2,\alpha \sqrt{e}\} e^{-\frac{t}{2\alpha}} \text{ for all } t \ge 0.
\end{equation}
\end{proof}
\subsection{Estimates via temporal derivatives}
Next we record a result about how temporal derivatives interact with interpolation.
\begin{prop}\label{temp_deriv_interp}
Let $\Gamma$ denote either $\Omega$ or $(-\ell,\ell)$. Suppose that $f \in L^2((0,T);H^{s_1}(\Gamma))$ and $\partial_t f \in L^2((0,T);H^{s_2}(\Gamma))$ for $0 \le s_2 \le s_1$ and $0 < T \le \infty$.
Then for $s = (s_1 + s_2)/2$ we have that $f \in C^0([0,T);H^s(\Gamma))$, and we have the estimate
\begin{equation}
\ns{f(t)}_{H^s} \le \ns{f(\tau)}_{H^s} + \int_{\tau}^t \ns{f}_{H^{s_1}} + \ns{\partial_t f}_{H^{s_2}}
\end{equation}
for all $0 \le \tau \le t < T$.
\end{prop}
\begin{proof}
See, for instance, Lemma A.4 in \cite{guo_tice_lwp}.
\end{proof}
\subsection{Fractional integration by parts}
Here we record a sort of fractional integration-by-parts estimate.
\begin{prop}\label{frac_IBP_prop}
Let $0 < s < 1$. Then
\begin{equation}
\abs{\int_{-\ell}^\ell \partial_1 f g } \lesssim \norm{f}_{H^{1-s/2}} \norm{g}_{H^{s/2}}.
\end{equation}
\end{prop}
\begin{proof}
Since $0 < s < 1$ we have that (see, for instance, \cite{lions_magenes_1}) $H^{s/2}((-\ell,\ell)) = H^{s/2}_0((-\ell,\ell))$. Next we note that
\begin{equation}
\partial_1 \in \mathcal{L}( L^2((-\ell,\ell)); H^{-1}((-\ell,\ell)) ) \cap \mathcal{L}(H^1((-\ell,\ell)); L^2((-\ell,\ell))).
\end{equation}
Since $L^2 = (L^2)^\ast = (H^0_0)^\ast$ and $H^{-1} = (H^1_0)^\ast$ we may then use interpolation theory to find that
\begin{equation}
\partial_1 \in \mathcal{L}( (H^1,L^2)_{1-s/2,2}; (L^2, H^{-1})_{1-s/2,2} ) = \mathcal{L}( H^{1-s/2}((-\ell,\ell)); H^{-s/2}((-\ell,\ell)) ).
\end{equation}
Using this, we may then estimate
\begin{equation}
\abs{\int_{-\ell}^\ell \partial_1 f g } \le \norm{\partial_1 f}_{H^{-s/2}} \norm{g}_{H^{s/2}} \lesssim \norm{f}_{H^{1-s/2}} \norm{g}_{H^{s/2}}.
\end{equation}
\end{proof}
\subsection{Composition in $H^s((-\ell,\ell))$}
The following result provides a useful composition estimate in fractional Sobolev spaces.
\begin{prop}\label{frac_comp}
Let $f : (-\ell,\ell) \times \mathbb{R} \to \mathbb{R}$ be $C^1$ and satisfy
\begin{equation}\label{frac_comp_00}
\sup_{z \in \mathbb{R}} \sup_{\abs{x} < \ell} \left( \frac{\abs{f(x,z)} + \abs{\partial_1 f(x,z)}}{\abs{z}} + \abs{\partial_2 f(x,z)} \right) \le M < \infty.
\end{equation}
Then for every $0 < s < 1$ there exists a constant $C = C(s,\ell) >0$ such that if $u \in H^s((-\ell,\ell))$ then $f(\cdot,u) \in H^s((-\ell,\ell))$ and
\begin{equation}\label{frac_comp_01}
\norm{f(\cdot,u)}_{H^s} \le C M \norm{u}_{H^s}.
\end{equation}
\end{prop}
\begin{proof}
Let $u \in H^s((-\ell,\ell))$. We use the difference quotient characterization of $H^s((-\ell,\ell))$, which shows that
\begin{equation}
\ns{f(\cdot,u)}_{H^s} \asymp \ns{f(\cdot,u)}_{L^2} + [f(\cdot,u)]_{H^s}^2,
\end{equation}
where
\begin{equation}
[f(\cdot,u)]_{H^s}^2 = \int_{-\ell}^\ell \int_{-\ell}^\ell \frac{\abs{f(x,u(x)) - f(y,u(y))}^2}{\abs{x-y}^{1 + 2s}} dx dy.
\end{equation}
To handle these note that by \eqref{frac_comp_00}, for $x,y \in (-\ell, \ell)$ we have that
\begin{equation}
\abs{f(x,u(x))} \le M \abs{u(x)}
\end{equation}
and
\begin{multline}
f(x,u(x)) - f(y,u(y)) = \int_0^1 \frac{d}{dt} f(tx + (1-t) y, tu(x) + (1-t)u(y)) dt \\
= (x-y)\int_0^1 \partial_1 f(tx + (1-t) y, tu(x) + (1-t) u(y)) dt + (u(x) - u(y)) \int_0^1 \partial_2 f(tx + (1-t) y, tu(x) + (1-t) u(y)) dt,
\end{multline}
so
\begin{equation}
\abs{ f(x,u(x)) - f(y,u(y))} \le M \abs{x-y} \left(\abs{u(y)} + \abs{u(x)} \right) + M \abs{u(x)-u(y)}.
\end{equation} These allow us to bound \begin{equation} \ns{f(\cdot,u)}_{L^2} \le M^2 \ns{u}_{L^2} \end{equation} and (using Tonelli's theorem and the fact that $s < 1$) \begin{multline} [f(\cdot,u)]_{H^s}^2 \le 2M^2 \int_{-\ell}^\ell \int_{-\ell}^\ell \left( \frac{\abs{x-y}^2 }{\abs{x-y}^{1+2s}}(2\abs{u(y)}^2 + 2 \abs{u(x)}^2) + \frac{\abs{u(x)-u(y)}^2}{\abs{x-y}^{1+2s}} \right) dx dy \\ \le C(s,\ell) M^2 \ns{u}_{L^2} + 2M^2 [u]_{H^s}^2 \end{multline} for a constant $C(s,\ell) >0$. Upon combining these we find that \eqref{frac_comp_01} holds. \end{proof} \end{document}
\begin{document}
\title[A variant of Weyl's inequality for systems of forms]{A variant of Weyl's inequality for systems of forms and applications}
\author[Damaris Schindler]{Damaris Schindler}
\address{Hausdorff Center for Mathematics, Endenicher Allee 62-64, 53115 Bonn, Germany}
\email{[email protected]}
\subjclass[2010]{11P55 (11D72, 11G35)}
\keywords{forms in many variables, Hardy-Littlewood method, Weyl's inequality}
\begin{abstract}
We give a variant of Weyl's inequality for systems of forms together with applications. First we use this to give a different formulation of a theorem of B. J. Birch on forms in many variables. More precisely, we show that the dimension of the locus $V^*$ introduced in this work can be replaced by the maximal dimension of the singular loci of forms in the linear system of the given forms. In some cases this improves on the aforementioned theorem of Birch.\par We say that a system of forms is a Hardy-Littlewood system if the number of integer points on the corresponding variety restricted to a box satisfies the asymptotic behaviour predicted by the classical circle method. As a second application, we improve on a theorem of W. M. Schmidt which states that a system of homogeneous forms of the same degree is a Hardy-Littlewood system as soon as the so-called $h$-invariant of the system is sufficiently large. In this direction we generalise previous improvements of R. Dietmann on systems of quadratic and cubic forms to systems of forms of general degree.
\end{abstract}
\maketitle
\excludecomment{com}
\section{Introduction}
We consider a system of homogeneous forms $f_i(x_1,\ldots, x_n)\in {\mathbb Z}[x_1,\ldots, x_n]$ of degree $d$. For convenience we write ${\mathbf x} = (x_1,\ldots, x_n)$ and ask for the number of integer solutions to the system of Diophantine equations given by
\begin{equation*}
f_i({\mathbf x})=0,\quad 1\leq i\leq r.
\end{equation*}
More precisely, we fix some box ${\mathcal B}\subset {\mathbb R}^n$ which is contained in the unit box and we let $P\geq 1$ be some real parameter. Then we define the counting function
\begin{equation*}
N(P)=\sharp\{ {\mathbf x}\in {\mathbb Z}^n: {\mathbf x}\in P{\mathcal B},\ f_i({\mathbf x})=0,\ 1\leq i\leq r\}.
\end{equation*}
This counting function has received a lot of attention so far and is a central object of investigation in number theory. If the number of variables $n$ is relatively large compared to the number of equations and the degree $d$, then the Hardy-Littlewood circle method has proved to be a valuable tool in obtaining asymptotic formulas for the counting function $N(P)$.\par A very general result in this direction has been obtained by Birch in \cite{Bir62}. He introduces a locus called $V^*$ which is the affine variety given by
\begin{equation*}
{\rm rank} \left(\frac{\partial f_i({\mathbf x})}{\partial x_j}\right)_{\substack{1\leq i\leq r\\ 1\leq j\leq n}} <r.
\end{equation*}
In his work \cite{Bir62} Birch provides an asymptotic formula for $N(P)$ as soon as
\begin{equation*}
n-\dim V^*> r(r+1)(d-1)2^{d-1}.
\end{equation*}
A main ingredient in most applications of the circle method, as for example the one in \cite{Bir62}, is a form of Weyl's inequality.
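For a single non-singular form, that is for $r=1$, the locus $V^*$ consists of the origin alone, and the condition above specialises to
\begin{equation*}
n> (d-1)2^{d},
\end{equation*}
recovering the classical condition for a single non-singular form of degree $d$.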
In this paper we present a variant of Weyl's inequality for systems of forms, and give two applications of our new form of Weyl's inequality.\par First this allows us to replace the dimension of the locus $V^*$ in Birch's theorem on systems of forms by a quantity which appears to be more natural in this context. For some integer vector ${\mathbf b}\in {\mathbb Z}^r$ we let $f_{\mathbf b}= b_1f_1+\ldots +b_rf_r$ be the form in the pencil of $f_1,\ldots, f_r$ associated to ${\mathbf b}$. For any homogeneous form $g$ we write $\Sing (g)$ for the singular locus (in affine space) of the form $g=0$. We can now state a variant of Birch's theorem on forms in many variables as follows.
\begin{theorem}\label{thm1}
Assume that
\begin{equation*}
n-\max_{{\mathbf b}\in {\mathbb Z}^r\setminus\{0\}}( \dim \Sing (f_{\mathbf b}) )> r(r+1) (d-1)2^{d-1}.
\end{equation*}
Then we have the asymptotic formula
\begin{equation}\label{eqnHL}
N(P)= {\mathfrak S} {\mathcal J} P^{n-rd} +O(P^{n-rd-\delta}),
\end{equation}
for some $\delta >0$. Here ${\mathfrak S}$ and ${\mathcal J}$ are the singular series and singular integral.
\end{theorem}
This is essentially the main theorem of Birch's work \cite{Bir62} where the quantity $\dim V^*$ is replaced by $\max_{{\mathbf b}\in {\mathbb Z}^r\setminus\{0\}} (\dim \Sing (f_{\mathbf b}))$. In other words we can now describe the singularity of the system of forms $f_i$, $1\leq i\leq r$, by the maximal dimension of the singular loci of forms in the pencil. To our knowledge, there is in contrast no satisfactory geometric interpretation for the locus $V^*$ available.\par Furthermore, we point out that for any non-trivial form $f_{\mathbf b}$ in the pencil, the dimension of the singular locus $\dim \Sing (f_{\mathbf b})$ is always bounded by $\dim V^*$. Indeed, the singular locus of the form $f_{\mathbf b}$ is given by
\begin{equation*}
b_1\frac{\partial f_1}{\partial x_i}({\mathbf x})+\ldots + b_r \frac{\partial f_r}{\partial x_i}({\mathbf x})=0,\quad 1\leq i\leq n.
\end{equation*}
If some vector ${\mathbf x}$ is contained in $\Sing (f_{\mathbf b})$, then these relations imply that the rank of the matrix $(\frac{\partial f_i}{\partial x_j})$ is at most $r-1$. This shows that $\Sing (f_{\mathbf b})\subset V^*$, and $\dim \Sing(f_{\mathbf b})\leq \dim V^*$ for any non-zero vector ${\mathbf b}$.\par Hence Theorem \ref{thm1} formally implies Birch's theorem in \cite{Bir62}. Furthermore, there are examples of systems where Theorem \ref{thm1} is stronger than Birch's main theorem in \cite{Bir62}. For simplicity of notation let
\begin{equation*}
u=u({\mathbf f}):= \max_{{\mathbf b}\in{\mathbb Z}^r\setminus\{0\}}(\dim \Sing (f_{\mathbf b})).
\end{equation*}
Let $k\geq r-1$ be some integer and consider the system of quadratic forms
\begin{equation*}
Q_i({\mathbf x},{\mathbf y})= \sum_{j=1}^k x_j y_{ji},\quad 1\leq i\leq r,
\end{equation*}
in the $k(r+1)$ variables $x_j$ for $1\leq j \leq k$ and $y_{ji}$ for $1\leq i\leq r$ and $1\leq j\leq k$. A short computation reveals that
\begin{equation*}
\dim V^*=k(r-1)+r-1= u+r-1.
\end{equation*}
Note also that once we choose $k$ sufficiently large, Theorem \ref{thm1} is indeed applicable. On the other hand these examples are essentially sharp. If we work over the complex numbers then we have
\begin{equation*}
V^*= \cup_{{\mathbf b}\in{\mathbb C}^r\setminus\{0\}} \Sing (f_{\mathbf b}),
\end{equation*}
and this leads to the bound $\dim V^*\leq u_{\mathbb C} +r-1$, where
\begin{equation*}
u_{\mathbb C}:= \max_{{\mathbf b}\in{\mathbb C}^r\setminus\{0\}}(\dim \Sing (f_{\mathbf b})).
\end{equation*}
Since its publication in 1962, Birch's work \cite{Bir62} has received a lot of attention and has been generalised in multiple directions. It seems natural to expect that our observation and new formulation of the main result in Theorem \ref{thm1} can in an analogous way be transferred to most of these generalisations and developments. Some examples to mention are work of Brandes \cite{BraA13} on forms representing forms and the vanishing of forms on linear subspaces. Furthermore, the analogue of the locus $V^*$ in work of Skinner \cite{Ski97}, which generalises Birch's theorem on forms in many variables to the number field situation, and work of the author \cite{bihomforms} on bihomogeneous forms, could very likely be replaced by a non-singularity condition on forms of the linear system. Another result and application in this direction is a paper of Lee \cite{LeeAlan} on a generalisation to function fields ${\mathbb F}_q[t]$. As a second application of our new form of Weyl's inequality for systems of forms, we can strengthen a theorem of Schmidt \cite{Schmidt85}, which provides an asymptotic formula for the counting function $N(P)$ as soon as a so-called $h$-invariant of the system is sufficiently large. As a special case of this we recover the results of Dietmann's work \cite{DieA12} on systems of quadratic and cubic forms.\par For a homogeneous form $f({\mathbf x})\in {\mathbb Q}[{\mathbf x}]$ we define the $h$-invariant of $f$ to be the least integer $h$ such that $f$ can be written in the form
\begin{equation*}
f({\mathbf x})=\sum_{i=1}^h g_i({\mathbf x}) g_i'({\mathbf x}),
\end{equation*}
with forms $g_i({\mathbf x})$ and $g_i'({\mathbf x})$ of positive degree with rational coefficients. For a system ${\mathbf f}$ of homogeneous forms $f_i({\mathbf x})$, $1\leq i\leq r$, of degree $d$, we define the $h$-invariant $h({\mathbf f})$ to be the minimum of the $h$-invariants of the non-zero forms in the rational linear system of the forms, i.e. we set $h({\mathbf f})=\min_{{\mathbf b}\in{\mathbb Z}^r\setminus\{0\}}h(f_{\mathbf b})$.\par We say that a system of forms ${\mathbf f}$ of degree $d$ is a Hardy-Littlewood system if the conclusion on the asymptotic formula for the counting function $N(P)$ as in equation (\ref{eqnHL}) in Theorem \ref{thm1} holds. If the $h$-invariant of a system of homogeneous forms of the same degree is sufficiently large, then Schmidt proves in his work \cite{Schmidt85} that ${\mathbf f}$ is a Hardy-Littlewood system. As we shall indicate in section 3, his results easily imply the following theorem, which we state here for convenience.
\begin{theorem}[Schmidt, 1985, see \cite{Schmidt85}]\label{thm2a}
There exists a function $\phi(d)$ with the following property. If the system ${\mathbf f}$ of homogeneous forms of degree $d>1$ has an $h$-invariant which is bounded below by
\begin{equation*}
h({\mathbf f})> \phi(d) (r(r+1)(d-1)2^{d-1}+(d-1)r(r-1)),
\end{equation*}
then the system ${\mathbf f}$ is a Hardy-Littlewood system. Furthermore, one has $\phi(2)=\phi(3)=1$, $\phi(4)=3$, $\phi(5)=13$ and $\phi(d)<(\log 2)^{-d} d!$ in general.
\end{theorem}
Note that the function $\phi(d)$ is exactly the function occurring in Proposition $III_C$ in Schmidt's work \cite{Schmidt85}.\par Our new form of Weyl's inequality improves on this theorem in the following way.
\begin{theorem}\label{thm2}
Let $\phi(d)$ be the function as in Theorem \ref{thm2a}. If the system ${\mathbf f}$ of homogeneous forms of degree $d>1$ has an $h$-invariant which is bounded below by
\begin{equation*}
h({\mathbf f})> \phi(d) r(r+1)(d-1)2^{d-1},
\end{equation*}
then ${\mathbf f}$ is a Hardy-Littlewood system. In the case $d=2$ one may replace the condition on the $h$-invariant of the system by the assumption that the rank of each form in the rational linear system of the quadratic forms is bounded below by $2r(r+1)$.
\end{theorem}
The special cases of degree $d=2$ and $d=3$ in Theorem \ref{thm2} reduce to Theorem 1 and Theorem 2 in Dietmann's paper \cite{DieA12}. In the quadratic case Dietmann improves on previous results of Schmidt in \cite{Schmidt80} in reducing the lower bound in the rank condition from $2r^2+3r$ to only $2r^2+2r$, and in the cubic case he reduces the lower bound on the $h$-invariant from $10r^2+6r$ (see Schmidt's paper \cite{Schmidt82}) to $8r^2+8r$. In fact, our new form of Weyl's inequality takes up the main idea in Dietmann's work \cite{DieA12}.\par As Dietmann points out in \cite{DieA12}, the $h$-invariant can in some ways be seen as a generalisation of the rank of a quadratic form to higher degree forms. By diagonalising a quadratic form one sees that its $h$-invariant is bounded by its rank. However, we note that these two notions do not coincide for the case of quadratic forms, as examples built up from forms like $x_1^2-x_2^2= (x_1+x_2)(x_1-x_2)$ show. Hence we need to formulate the case $d=2$ in Theorem \ref{thm2} separately in order to obtain the full strength of the theorem in this case.\par As another example we consider the case of systems of forms ${\mathbf f}$ of degree $d=4$. In this case one has $\phi(4)=3$ and Theorem \ref{thm2} implies that the expected asymptotic formula for $N(P)$ holds as soon as
\begin{equation*}
h({\mathbf f})> 3r(r+1)\cdot3\cdot2^{3}= 9\cdot (8r^2+8r).
\end{equation*}
Schmidt obtains the same result in his paper \cite{Schmidt85} (see Theorem \ref{thm2a} above) under the stronger condition
\begin{equation}
h({\mathbf f}) > \phi(4) (r(r+1)(d-1)2^{d-1}+(d-1)r(r-1))= 9(9r^2+7r).
\end{equation}
We finally remark that if the system of forms $f_i({\mathbf x})$, $1\leq i\leq r$, in Theorem \ref{thm1} or Theorem \ref{thm2} forms a complete intersection, and if there exist non-singular real and $p$-adic points on the variety $X$ given by these forms, then the singular series ${\mathfrak S}$ and the singular integral ${\mathcal J}$ are both positive. In particular, this implies the existence of rational points on the variety $X$ as soon as there are non-singular solutions at every place of ${\mathbb Q}$ including infinity.\par The structure of this paper is as follows.
We recall a version of Weyl's inequality from \cite{Bir62} in the next section and present in Lemma \ref{birlem2} our new variant of Weyl's inequality for systems of forms. We use this in the last section to deduce Theorem \ref{thm1} and Theorem \ref{thm2}, and we explain the improvements of Theorem \ref{thm2} compared to Theorem \ref{thm2a}.\par \textbf{Acknowledgements.} The author would like to thank Prof. T. D. Browning for comments on an earlier version of this paper and Prof. P. Salberger for helpful discussions.
\section{A variant of Weyl's inequality}
For some $n$-dimensional box ${\mathcal B}$, some real vector ${\boldsymbol \alpha}=(\alpha_1,\ldots, \alpha_r)$ and some large real number $P$ we define the exponential sum
\begin{equation*}
S({\boldsymbol \alpha})= \sum_{{\mathbf x}\in P{\mathcal B}\cap {\mathbb Z}^n}e\left(\sum_{i=1}^r \alpha_if_i({\mathbf x})\right).
\end{equation*}
If $f({\mathbf x})$ is some homogeneous form of degree $d$, then we let ${\Gamma}_f ({\mathbf x}^{(1)},\ldots, {\mathbf x}^{(d)})$ be its unique associated symmetric multilinear form satisfying ${\Gamma}_f({\mathbf x},\ldots,{\mathbf x})=d! f({\mathbf x})$. Moreover, if $f_i({\mathbf x})$, $1\leq i\leq r$, form a system of homogeneous forms of degree $d$ as before, then we let ${\Gamma}_i({\mathbf x}^{(1)},\ldots, {\mathbf x}^{(d)})$, $1\leq i\leq r$, be the associated multilinear forms. We introduce the sup-norm $|{\mathbf x}|=\max_{1\leq i\leq n} |x_i|$ on the vector space ${\mathbb R}^n$, and write $\Vert {\gamma}\Vert= \min_{y\in{\mathbb Z}}|{\gamma}-y|$ for the least distance of a real number ${\gamma}$ to an integer. Furthermore, we write here and in the following $\bfe_j$ for the $j$-th unit vector in $n$-dimensional affine space. Then we let $N(P^\xi;P^{-\eta};{\boldsymbol \alpha})$ be the number of integer vectors ${\mathbf x}^{(2)},\ldots, {\mathbf x}^{(d)}$ with $|{\mathbf x}^{(2)}|,\ldots, |{\mathbf x}^{(d)}|\leq P^\xi$ and
\begin{equation*}
\left\Vert \sum_{i=1}^r \alpha_i{\Gamma}_i (\bfe_j,{\mathbf x}^{(2)},\ldots, {\mathbf x}^{(d)})\right\Vert <P^{-\eta},\quad 1\leq j\leq n.
\end{equation*}
We start our considerations by recalling Lemma 2.4 from Birch's work \cite{Bir62}.
\begin{lemma}[Lemma 2.4 in \cite{Bir62}]\label{bir1}
For fixed $0<\theta\leq 1$ and $k>0$, one of the following alternatives holds.\\
i) $|S({\boldsymbol \alpha})|< P^{n-k}$, or\\
ii) $N(P^{\theta}; P^{-d+(d-1)\theta};{\boldsymbol \alpha})\gg P^{(d-1)n\theta - 2^{d-1}k-\varepsilon}$, for any $\varepsilon>0$.
\end{lemma}
The main idea is to treat the condition ii) differently than in Birch's work \cite{Bir62}, following a similar idea as taken up in the paper \cite{DieA12}.
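To make the objects in Lemma \ref{bir1} concrete, consider for instance the case $d=2$ and write $f_i({\mathbf x})={\mathbf x}^t A_i{\mathbf x}$ with symmetric matrices $A_i$. Then ${\Gamma}_i({\mathbf x}^{(1)},{\mathbf x}^{(2)})=2({\mathbf x}^{(1)})^t A_i{\mathbf x}^{(2)}$, and $N(P^{\theta};P^{-2+\theta};{\boldsymbol \alpha})$ counts the integer vectors ${\mathbf x}^{(2)}$ with $|{\mathbf x}^{(2)}|\leq P^{\theta}$ satisfying
\begin{equation*}
\left\Vert 2\sum_{i=1}^r \alpha_i \left(A_i{\mathbf x}^{(2)}\right)_j\right\Vert <P^{-2+\theta},\quad 1\leq j\leq n.
\end{equation*}
In this language, alternative ii) of Lemma \ref{bir1} says that the linear forms attached to $\sum_{i=1}^r\alpha_iA_i$ take values close to integers at many integer points of the box.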
Before stating our new version of Weyl's inequality, we need to introduce the $g$-invariant of a homogeneous form.\par We define ${\mathcal M}_f$ to be the variety in affine $(d-1)n$-space given by
\begin{equation*}
{\Gamma}_f(\bfe_j,{\mathbf x}^{(2)},\ldots, {\mathbf x}^{(d)})=0,\quad 1\leq j\leq n,
\end{equation*}
and write ${\mathcal M}_f(P)$ for the number of integer points on ${\mathcal M}_f$ with coordinates all bounded by $P$. Then we define the $g$-invariant $g(f)$ of the form $f$ to be the largest real number such that
\begin{equation*}
{\mathcal M}_f (P)\ll P^{(d-1)n-g(f)+\varepsilon}
\end{equation*}
holds for all $\varepsilon >0$. Note that this number $g(f)$ coincides with the $g$-invariant that Schmidt associates to a single form $f$ in \cite{Schmidt85}.\par
\begin{lemma}\label{birlem2}
Let ${\tilde g}=\inf_{{\mathbf b}\in {\mathbb Z}^r\setminus \{0\}}g (f_{\mathbf b})$ and let $0<\theta\leq 1$ be fixed. Then we either have the bound\\
i) $|S({\boldsymbol \alpha})|<P^{n-2^{-d+1}{\tilde g}\theta+\varepsilon}$, or\\
ii) (major arc approximation for ${\boldsymbol \alpha}$ with respect to the parameter $\theta$) there exist natural numbers $a_1,\ldots, a_r$ and $1\leq q\ll P^{r(d-1)\theta}$ with $\gcd (q,a_1,\ldots, a_r)=1$ and
\begin{equation*}
|q\alpha_i-a_i|\ll P^{-d+r(d-1)\theta},\quad 1\leq i\leq r.
\end{equation*}
\end{lemma}
\begin{proof}
Let the notation be as in Lemma \ref{bir1} and assume that alternative (ii) in Lemma \ref{bir1} holds.\par We consider the matrix $\psi$ of size $r\times (nN(P^{\theta}; P^{-d+(d-1)\theta};{\boldsymbol \alpha}))$ with entries ${\Gamma}_i(\bfe_j,{\mathbf x}^{(2)},\ldots, {\mathbf x}^{(d)})$ in the $i$th row. The columns are indexed by $(j,{\mathbf x}^{(2)},\ldots, {\mathbf x}^{(d)})$, where $1\leq j\leq n$ and ${\mathbf x}^{(2)},\ldots, {\mathbf x}^{(d)}$ run through all tuples of integer vectors counted by $N(P^{\theta}; P^{-d+(d-1)\theta};{\boldsymbol \alpha})$. We distinguish two cases.\par Case (a): Assume that ${\rm rank} (\psi) =r$. Then there is an $r\times r$ minor ${\widetilde \psi}$ of full rank, say given by
\begin{equation*}
{\widetilde \psi} = ({\widetilde \psi}_{i,l})_{1\leq i,l\leq r}=({\Gamma}_i(\bfe_{j_l},{\mathbf x}^{(2)}_l,\ldots, {\mathbf x}_l^{(d)}))_{1\leq i,l\leq r}.
\end{equation*}
In particular, we have $\Vert \sum_{i=1}^r \alpha_i {\widetilde \psi}_{i,l}\Vert < P^{-d+(d-1)\theta}$, for all $1\leq l\leq r$.
Hence, we can write
\begin{equation*}
\sum_{i=1}^r \alpha_i{\widetilde \psi}_{i,l}= {\tilde a}_l+{\tilde \delta}_l,
\end{equation*}
with integers ${\tilde a}_l$ and real numbers ${\tilde \delta}_l$ with $|{\tilde \delta}_l|<P^{-d+(d-1)\theta}$. Let ${\widetilde \psi}^{\rm adj}$ be the adjoint matrix to ${\widetilde \psi}$, which satisfies ${\widetilde \psi}^{\rm adj} {\widetilde \psi} = (\det {\widetilde \psi}) \id$, and let $q=\det {\widetilde \psi}$. Since ${\widetilde \psi}$ was assumed to be of rank $r$, its determinant $q$ is non-zero. Furthermore we note that $|q|\ll P^{r\theta (d-1)}$. Now we can use the adjoint matrix ${\widetilde \psi}^{\rm adj}$ to find a good approximation for ${\boldsymbol \alpha}$ by rational numbers with small denominator. Indeed, we have
\begin{align*}
\left|q\alpha_i-\sum_{l=1}^r{\widetilde \psi}^{\rm adj}_{i,l}{\tilde a}_l\right| &\leq \sum_{l=1}^r |{\widetilde \psi}_{i,l}^{\rm adj}||{\tilde \delta}_l| \\
&\ll P^{\theta (r-1)(d-1)}P^{-d+(d-1)\theta}.
\end{align*}
We set $a_i=\sum_{l=1}^r{\widetilde \psi}_{i,l}^{\rm adj} {\tilde a}_l$. After removing common factors of $q$ and the integers $a_i$, we obtain integers $q,a_1,\ldots, a_r$ with $\gcd (q,a_1,\ldots, a_r)=1$ and $1\leq q\ll P^{r(d-1)\theta}$, such that
\begin{equation*}
|q\alpha_i-a_i|\ll P^{-d+r(d-1)\theta},\quad 1\leq i\leq r.
\end{equation*}
In this case the conclusion (ii) of the Lemma holds.\par Case (b): Assume that ${\rm rank} (\psi) <r$. Then the $r$ rows of $\psi$ are linearly dependent over ${\mathbb Q}$, and thus there exist integers $b_1,\ldots, b_r\in {\mathbb Z}$, not all zero, such that
\begin{equation*}
\sum_{i=1}^r b_i {\Gamma}_i(\bfe_j,{\mathbf x}^{(2)},\ldots, {\mathbf x}^{(d)})=0,
\end{equation*}
for all $1\leq j\leq n$ and for all tuples ${\mathbf x}^{(2)},\ldots, {\mathbf x}^{(d)}$ counted by $N(P^{\theta}; P^{-d+(d-1)\theta};{\boldsymbol \alpha})$.\par We note that
\begin{equation*}
\sum_{i=1}^r b_i{\Gamma}_i({\mathbf x}^{(1)},\ldots, {\mathbf x}^{(d)})= {\Gamma}_{f_{\mathbf b}}({\mathbf x}^{(1)},\ldots, {\mathbf x}^{(d)})
\end{equation*}
is the multilinear form associated to the form $f_{\mathbf b}({\mathbf x})= \sum_{i=1}^r b_if_i({\mathbf x})$.
We recall the definition of the variety ${\mathcal M}_f$ stated before this lemma, and deduce from the lower bound on $N(P^{\theta}; P^{-d+(d-1)\theta};{\boldsymbol \alpha})$ that we have
\begin{equation}\label{bireqn2}
{\mathcal M}_{f_{\mathbf b}}(P^{\theta})\gg P^{(d-1)n\theta - 2^{d-1}k-\varepsilon},
\end{equation}
for any $\varepsilon >0$. By definition of the $g$-invariant $g(f_{\mathbf b})$ we see that equation (\ref{bireqn2}) implies that
\begin{equation*}
(d-1)n-2^{d-1}k/\theta-\varepsilon\leq (d-1)n-g(f_{\mathbf b})+\varepsilon,
\end{equation*}
for any $\varepsilon >0$. Hence we have $2^{-d+1}g(f_{\mathbf b})\theta \leq k$ for some ${\mathbf b}\in{\mathbb Z}^r\setminus\{0\}$, and the first alternative of the lemma holds.
\end{proof}
\section{Applications}
The main goal of this section is to prove Theorem \ref{thm1} and Theorem \ref{thm2}. Before we show how Lemma \ref{birlem2} implies Theorem \ref{thm1} and Theorem \ref{thm2} we recall some very general results from Schmidt's work \cite{Schmidt85}, which simplify our following arguments.\par For this we first recall the Hypothesis on ${\mathbf f}$ introduced in section 4 in \cite{Schmidt85} for the case of forms of the same degree.
\begin{hypothesis}[Hypothesis on ${\mathbf f}$ with parameter $\Omega$]\label{hyp}
Let ${\mathcal B}$ be some box and $\Delta >0$ and assume that $P$ is sufficiently large depending on the system ${\mathbf f}$, the parameter $\Omega$, the box ${\mathcal B}$ and $\Delta$. Then one either has the upper bound\\
i) $|S({\boldsymbol \alpha})|\leq P^{n-\Delta\Omega}$, or\\
ii) there are natural numbers $q\leq P^\Delta$ and $a_1,\ldots, a_r$ such that
\begin{equation*}
|q\alpha_i-a_i|\leq P^{-d+\Delta},\quad 1\leq i\leq r.
\end{equation*}
\end{hypothesis}
In his work \cite{Schmidt85} Schmidt shows that this hypothesis is enough to verify that the Hardy-Littlewood circle method can be applied to the counting function $N(P)$ related to the system of equations ${\mathbf f}$. One of his main results is the following, which we only state for the special case of forms of the same degree, since this is all we use in this paper.
\begin{theorem}[Proposition I in \cite{Schmidt85}, second part]\label{propI}
Suppose the system ${\mathbf f}$ satisfies Hypothesis \ref{hyp} with respect to some parameter
\begin{equation*}
\Omega > r+1.
\end{equation*}
Then ${\mathbf f}$ is a Hardy-Littlewood system.
\end{theorem}
In combination with our new version of Weyl's inequality in Lemma \ref{birlem2} we obtain the following useful Corollary.
\begin{corollary}\label{cor1}
Assume that ${\mathbf f}$ is a system of homogeneous forms of the same degree with
\begin{equation*}
\inf_{{\mathbf b}\in {\mathbb Z}^r\setminus \{0\}}g (f_{\mathbf b}) > r(r+1)(d-1)2^{d-1}.
\end{equation*}
Then the asymptotic formula for $N(P)$ as predicted by the circle method holds, i.e. ${\mathbf f}$ is a Hardy-Littlewood system.
\end{corollary}
\begin{proof}
First we note that Lemma \ref{birlem2} implies Hypothesis \ref{hyp} with respect to any parameter
\begin{equation*}
\Omega < 2^{-d+1} r^{-1}(d-1)^{-1}{\tilde g}.
\end{equation*}
This is clear by the formulation of Lemma \ref{birlem2} for the range $0<\Delta\leq r(d-1)$ if we set $\Delta = r(d-1)\theta$. In the case where $\Delta >r(d-1)$ alternative (ii) in Hypothesis \ref{hyp} is automatically satisfied by Dirichlet's approximation principle.\par Now Proposition I in \cite{Schmidt85} applies as stated in Theorem \ref{propI}, which completes the proof of the Corollary.
\end{proof}
Next we relate the $g$-invariant of a homogeneous form to the dimension of its singular locus, and thereby establish the new formulation of Birch's theorem on forms in many variables as stated in Theorem \ref{thm1}.
\begin{proof}[Proof of Theorem \ref{thm1}]
Consider some vector ${\mathbf b}\in{\mathbb Z}^r\setminus\{0\}$ and its associated form $f_{\mathbf b}$ in the pencil of $f_i({\mathbf x})$, $1\leq i\leq r$. We note that the intersection of the affine variety ${\mathcal M}_{f_{\mathbf b}}$ with the diagonal ${\mathcal D}$ given by
\begin{equation*}
{\mathbf x}^{(2)}=\ldots = {\mathbf x}^{(d)}
\end{equation*}
is isomorphic to the singular locus of the form $f_{\mathbf b}$. Hence we obtain by the affine intersection theorem that
\begin{equation*}
\dim \Sing(f_{\mathbf b}) = \dim ({\mathcal D}\cap {\mathcal M}_{f_{\mathbf b}}) \geq \dim {\mathcal D} + \dim {\mathcal M}_{f_{\mathbf b}}- (d-1)n.
\end{equation*}
This shows that
\begin{equation*}
\dim \Sing (f_{\mathbf b})\geq \dim {\mathcal M}_{f_{\mathbf b}}-(d-1)n+n,
\end{equation*}
which, in view of the standard bound ${\mathcal M}_{f_{\mathbf b}}(P)\ll P^{\dim {\mathcal M}_{f_{\mathbf b}}+\varepsilon}$, implies that
\begin{equation*}
g(f_{\mathbf b})\geq n-\dim \Sing (f_{\mathbf b}).
\end{equation*}
Taking the infimum over all non-zero integer tuples ${\mathbf b}$ we obtain
\begin{equation*}
\inf_{{\mathbf b}\in{\mathbb Z}^r\setminus\{0\}}g(f_{\mathbf b})\geq n-\max_{{\mathbf b}\in{\mathbb Z}^r\setminus\{0\}}(\dim \Sing(f_{\mathbf b})),
\end{equation*}
and hence Lemma \ref{birlem2} holds with ${\tilde g}$ replaced by $n-\max_{{\mathbf b}\in{\mathbb Z}^r\setminus\{0\}}(\dim \Sing(f_{\mathbf b}))$. This shows that in Lemma 4.3 in Birch's work \cite{Bir62}, the quantity $K$ which is defined in his setting as
\begin{equation*}
2^{d-1}K= n-\dim V^*,
\end{equation*}
can be replaced by
\begin{equation*}
2^{d-1}K= n-\max_{{\mathbf b}\in{\mathbb Z}^r\setminus\{0\}}(\dim \Sing(f_{\mathbf b})).
\end{equation*}
Now Theorem \ref{thm1} follows exactly as the main theorem in Birch's paper \cite{Bir62}. Alternatively we can apply Corollary \ref{cor1} to obtain the desired result.
\end{proof}
Next we turn towards the proof of Theorem \ref{thm2}, which improves on the previously known results in Theorem \ref{thm2a}. However, since Theorem \ref{thm2a} is not contained in this formulation in the paper \cite{Schmidt85}, we first give a short deduction of it from the results of \cite{Schmidt85}.
Indeed, Schmidt concludes in his remark after Proposition $II_0$ that the expected asymptotic formula in Theorem \ref{thm2a} for $N(P)$ holds as soon as the so-called $g$-invariant $g({\mathbf f})$ of the system ${\mathbf f}$ satisfies
\begin{equation}\label{eqn4a}
g({\mathbf f})>2^{d-1}(d-1)r(r+1).
\end{equation}
His Corollary after Proposition III states that there is the relation
\begin{equation}\label{eqn4b}
h({\mathbf f})\leq \phi(d) (g({\mathbf f})+(d-1)r(r-1)).
\end{equation}
Hence the condition
\begin{equation*}
h({\mathbf f})> \phi(d) (r(r+1)(d-1)2^{d-1}+(d-1)r(r-1))
\end{equation*}
in Theorem \ref{thm2a} implies that (\ref{eqn4a}) holds and thus the conclusion of Theorem \ref{thm2a} follows.\par The main difference in the use of our new version of Weyl's inequality in comparison to Schmidt's work is that we can state everything in terms of the $g$-invariant of a single form. In his work \cite{Schmidt85} Schmidt uses a form of Weyl's inequality where the infimum of all $g$-invariants of the elements of the rational linear system is replaced by his so-called $g$-invariant $g({\mathbf f})$ of the whole system. This is in complete analogy with the replacement of the locus $V^*$ in Birch's work \cite{Bir62} by the maximal dimension of the singular loci of elements in the rational linear system as in Theorem \ref{thm1}.
\begin{proof}[Proof of Theorem \ref{thm2}]
For a single form $f$, equation (17.2) in \cite{Schmidt85} implies that
\begin{equation}\label{eqn4d}
h(f)\leq \phi(d) g(f),
\end{equation}
which should be compared to the relation (\ref{eqn4b}) for systems of forms. For a single form we do not need the term $\phi(d)(d-1)r(r-1)$, which is present for the relation referring to the whole system of forms in equation (\ref{eqn4b}). This explains our improvement of Theorem \ref{thm2} compared to Theorem \ref{thm2a}.\par Assume now that the assumptions of Theorem \ref{thm2} are satisfied, i.e.
\begin{equation*}
h({\mathbf f})> \phi(d) r(r+1)(d-1)2^{d-1}.
\end{equation*}
Recall that we have defined $h({\mathbf f})=\min_{{\mathbf b}\in{\mathbb Z}^r\setminus\{0\}}h(f_{\mathbf b})$. Hence we obtain together with equation (\ref{eqn4d}) the relation
\begin{equation*}
\inf_{{\mathbf b}\in{\mathbb Z}^r\setminus\{0\}} g(f_{\mathbf b})> r(r+1)(d-1)2^{d-1}.
\end{equation*}
Now we apply Corollary \ref{cor1} to complete the proof of Theorem \ref{thm2} for the case of degree $d\geq 3$.\par For the case of systems of quadratic forms we note that the $g$-invariant of a single quadratic form $f$ is bounded below by its rank. Indeed, let some quadratic form $f$ be given by some $n\times n$-matrix $A$. Then the variety ${\mathcal M}_f$ is given by the system of linear equations $A{\mathbf x}=0$, and we deduce that
\begin{equation*}
{\mathcal M}_f(P)\ll P^{n-{\rm rank} (A)},
\end{equation*}
which shows that $g(f)\geq {\rm rank} (A)$. Now we apply Corollary \ref{cor1} as in the case $d\geq 3$.
\end{proof}
\end{document}
\begin{document} \begin{center} \textbf{\Large Algebraic cycles on Gushel--Mukai varieties} {\Large Lie Fu\footnote{L.F.\ is partially supported by the Radboud Excellence Initiative and by the Agence Nationale de la Recherche (ANR) under projects ANR-20-CE40-0023 and ANR-16-CE40-0011.} \; and\; Ben Moonen} \emph{\large Dedicated to Claire Voisin on the occasion of her 60th birthday} \end{center} {\small \noindent \begin{quoting} \textbf{Abstract.} We study algebraic cycles on complex Gushel--Mukai (GM) varieties. We prove the generalised Hodge conjecture, the (motivated) Mumford--Tate conjecture and the generalised Tate conjecture for all GM varieties. We compute all integral Chow groups of GM varieties, except for the only two infinite-dimensional cases ($1$-cycles on GM fourfolds and $2$-cycles on GM sixfolds). We prove that if two GM varieties are generalised partners or generalised duals, their rational Chow motives in middle degree are isomorphic. \noindent \emph{Mathematics Subject Classification 2020:\/} primary 14C15, 14C25, 14C30; secondary 14J45, 14F08. \end{quoting} } \section{Introduction} \subsection{} The main goal of this paper and its companion~\cite{FuMoonen-TCGMV} is to prove several results about algebraic cycles on Gushel--Mukai (GM) varieties. The focus of the present paper is on GM varieties in characteristic~$0$. Several results that we obtain here are used in~\cite{FuMoonen-TCGMV}, in which we prove the Tate Conjecture for even-dimensional GM varieties in characteristic $p\geq 5$. GM varieties form a class of Fano varieties that admit a simple explicit definition and that are interesting because of their rich geometry and their connections to hyperk\"ahler varieties. We refer to the series of papers by Debarre and Kuznetsov (see \cite{DK-GMClassification}, \cite{DK-Kyoto}, \cite{DK-DoubleCov} and~\cite{DK-GMJacobian}) for an in-depth study. A nice starting point is Debarre's overview paper~\cite{Debarre-GM}. We work over an algebraically closed field~$K$ of characteristic~$0$. For the purposes of this paper, only smooth GM varieties of dimension~$>2$ are of interest. These exist in dimensions $n \in \{3,4,5,6\}$; they can be realised as intersections \[ X = \CGr(2,V_5) \cap \PP(W) \cap Q \] of the cone $\CGr(2,V_5) \subset \PP(K \oplus \wedge^2 V_5)$ over the Grassmannian of $2$-planes in a $5$-dimensional vector space~$V_5$ with a linear subspace $\PP(W) \subset \PP(K \oplus \wedge^2 V_5)$ of dimension~$n+4$ and a quadric~$Q$. \subsection{} The cohomology of an $n$-dimensional GM variety~$X$ is purely of Tate type, except in middle degree. If $n$ is odd, $H^n(X)$ corresponds to a $10$-dimensional abelian variety, the intermediate Jacobian. If $n$ is even, $H^n(X)$ is of K3 type, with Hodge numbers $1$-$22$-$1$. A first main theme in this paper is that also almost all Chow groups have a relatively simple structure. In Section~\ref{sec:ChowGroupGM} we compute all Chow groups of complex GM varieties, with \emph{integral} coefficients, except for the two infinite dimensional cases, namely $1$-cycles on GM fourfolds and $2$-cycles on GM sixfolds. For GM fivefolds the result is due to L.~Zhou in her thesis~\cite{ZhouLin}. In dimensions $3$ and~$4$, the results are easily deduced from the results of Bloch and Srinivas~\cite{BlochSrinivas} and Laterveer~\cite{Laterveer-SmallChow}. Similar results with $\QQ$-coefficients were obtained by Laterveer using different methods. Our main new contribution concerns $1$-cycles and $3$-cycles on GM sixfolds~$X$. 
We prove that the Fano variety of lines on~$X$ is rationally chain connected, so that all lines on~$X$ have the same class in~$\CH^5(X)$. By a nice geometric argument using ``successive ruled surfaces'' as in the work~\cite{TianZong} of Tian and Zong, we show that every $1$-cycle on~$X$ is rationally equivalent to an integral multiple of the class of a line, so that we obtain $\CH^5(X) \cong \ZZ$. For cycles of codimension~$3$ on a GM sixfold, we show that $\CH^3(X)_{\hom} = 0$. The main step is to show that the Griffiths group~$\Griff^3(X)$ is zero. As the Hodge conjecture with $\QQ$-coefficients for~$X$ is true (see below), the image of $\CH^3(X)$ in cohomology is of finite index in the space of Hodge classes in $H^6(X,\ZZ)\bigl(3\bigr)$.
\subsection{} A second main result of the paper concerns the (generalised) Hodge and the Tate conjectures, and, as a bridge between them, the Mumford--Tate conjecture. (See Section~\ref{sec:MTC+GTC} for the formulation of these conjectures.) The result we obtain is the following.
\begin{theorem*} Let $X$ be a complex GM variety.
\begin{enumerate}
\item The Generalised Hodge Conjecture for $X$ is true.
\item If $\dim(X)$ is even, the Mumford--Tate Conjecture for $X$ is true.
\item If $\dim(X)$ is even, the Generalised Tate Conjecture for $X$ is true.
\end{enumerate}
\end{theorem*}
Most cases of the Generalised Hodge Conjecture (GHC) are in fact covered by work of Laterveer, and the remaining case of GM sixfolds is easily deduced from our calculations of Chow groups. Our proof of the Mumford--Tate Conjecture (MTC) is based on the work of Y.~Andr\'e~\cite{Andre-ShafTate}. It is crucial here that the middle cohomology of~$X$ is of K3 type, which is why we have to assume that $\dim(X)$ is even. (For GM varieties of odd dimension, the Mumford--Tate conjecture is not known; in this case it is a problem about $10$-dimensional abelian varieties, which is a much more difficult case to handle.) The convenience of Andr\'e's results is that we only need to verify a couple of conditions, the most important of which is that $X$ should appear as a fibre in a smooth family whose image under the period map contains an open subset of the appropriate period domain. This is known to be true for GM varieties by the work of Debarre and Kuznetsov. The Generalised Tate Conjecture, finally, is a formal consequence of the GHC and the MTC. The above theorem is an important ingredient for our proof of the Tate conjecture for GM varieties in characteristic $p\geq 5$ in~\cite{FuMoonen-TCGMV}.
\subsection{} In the final section of the paper we turn to Chow motives of GM varieties, and we prove a result about so-called generalised partners or generalised duals. A central theme in the study of GM varieties is that a lot of important information about them can be encoded in terms of multilinear algebra data. In particular, to a GM variety over an algebraically closed field~$K$ (with $\charact(K)=0$) one can associate a ``Lagrangian data set'', which is a triple $(V_6,V_5,A)$ consisting of a $6$-dimensional $K$-vector space~$V_6$, a hyperplane $V_5 \subset V_6$, and a Lagrangian subspace $A \subset \wedge^3 V_6$ with respect to the symplectic form $\wedge^3 V_6 \times \wedge^3 V_6 \to \det(V_6)$.
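To fix the numerology of these linear-algebra data, note that
\[ \dim_K \wedge^3 V_6 = \binom{6}{3} = 20\, , \]
so that a Lagrangian subspace $A \subset \wedge^3 V_6$ has dimension~$10$.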
GM varieties $X$ and~$X^\prime$ whose dimensions have the same parity are said to be generalised partners (resp.\ generalised duals) if there exists an isomorphism $f\colon V_6(X) \isomarrow V_6(X^\prime)$ such that the induced isomorphism $\wedge^3 V_6(X) \isomarrow \wedge^3 V_6(X^\prime)$ sends $A(X)$ to~$A(X^\prime)$ (resp.\ an isomorphism $f\colon V_6(X) \isomarrow V_6(X^\prime)^\vee$ with $(\wedge^3 f)\bigl(A(X)\bigr) = A(X^\prime)^\perp$). We prove that the Chow motives in middle degree of such generalised partners or duals are isomorphic: \begin{theorem*} Let $X$ and~$X^\prime$ be GM varieties of dimensions $n$ and~$n^\prime$ over a field $K=\Kbar$ of characteristic~$0$ which are generalised partners or generalised duals. Then there is an isomorphism of rational Chow motives \[ \frh^n(X) \cong \frh^{n^\prime}(X^\prime)\bigl(\tfrac{n^\prime-n}{2}\bigr)\, . \] \end{theorem*} This theorem, too, is used in an essential way in~\cite{FuMoonen-TCGMV}. The proof relies on a result of Kuznetsov and Perry \cite{KuznetsovPerry-CatCone} which says that in this situation the Kuznetsov components of~$X$ and~$X^\prime$ are equivalent. \subsection{} We are certainly not the first to study algebraic cycles on Gushel--Mukai varieties, and part of our work here is a refinement or completion of work done by other people. In particular, let us note that during the preparation of this paper (together with~\cite{FuMoonen-TCGMV}), a preprint~\cite{BolognesiLaterveer-GM6} by Bolognesi and Laterveer was posted that has some overlap with part of the work presented here. It is a pleasure for us to dedicate this paper to Claire Voisin, who has contributed so much to complex geometry, Hodge theory, and the study of algebraic cycles. \subsection{Notation.} \label{subsec:NotConv} For a variety~$X$, we write $\CH^i(X)$ for the Chow group in codimension~$i$ with integral coefficients, and $\CH^i(X)_\QQ = \CH^i(X) \otimes \QQ$. We denote by $\CH(X)_{\alg} \subset \CH(X)_{\hom} \subset \CH(X)$ the subgroups of classes that are algebraically (resp.\ homologically) trivial. If $X$ is a complete non-singular complex algebraic variety and $i$ is a natural number, we denote by $J^{2i+1}(X)$ the intermediate Jacobian in degree~$2i+1$. \section{Generalities on Gushel--Mukai varieties} Throughout this section, $K$ denotes an algebraically closed field of characteristic~$0$. \subsection{} \label{subsec:GMIntro} If $V_5$ is a $5$-dimensional $K$-vector space, let $\Grass(2,V_5) \subset \PP(\wedge^2 V_5)$ be the Grassmannian variety of $2$-planes in~$V_5$, in its Pl\"ucker embedding. A \emph{Gushel--Mukai (GM) variety} of dimension $n \in \{3,4,5,6\}$ over~$K$ is a non-singular projective variety~$X$ with $\dim(X)=n$ that can be realised as an intersection \[ X = \CGr(2,V_5) \cap \PP(W) \cap Q\, , \] where $V_5$ is a $5$-dimensional $K$-vector space, $\CGr(2,V_5) \subset \PP(K \oplus \wedge^2 V_5)$ is the cone over the Grassmannian, $W \subset K \oplus \wedge^2 V_5$ is a linear subspace of dimension~$n+5$, and $Q \subset \PP(K \oplus \wedge^2 V_5)$ is a quadric. We refer to the series of papers by Debarre and Kuznetsov (notably \cite{DK-GMClassification}, \cite{DK-Kyoto} and~\cite{DK-GMJacobian}) for an in-depth study of such varieties and for references to the contributions by many other people. A good starting point is the overview paper~\cite{Debarre-GM}. We will follow the notation of these papers; here we only record some basic facts that we need later. 
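Note that the expected dimension of such an intersection is indeed~$n$:
\[ \dim \CGr(2,V_5) - \bigl(10 - (n+4)\bigr) - 1 = 7 - (6-n) - 1 = n\, , \]
since $\dim \Grass(2,V_5) = 6$, so that the cone $\CGr(2,V_5) \subset \PP(K \oplus \wedge^2 V_5) \cong \PP^{10}$ has dimension~$7$, while $\PP(W)$ has codimension $6-n$ in $\PP(K \oplus \wedge^2 V_5)$ and the quadric~$Q$ imposes one further condition.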
Because $X$ is non-singular, it does not contain the vertex~$O$ of $\CGr(2,V_5)$ and we have a morphism $\gamma \colon X \to \Grass(2,V_5)$, which is called the Gushel map. The corresponding rank~$2$ subbundle of $V_5 \otimes \scrO_X$ is denoted by~$\scrU_X$ and is called the Gushel bundle on~$X$. GM varieties come in two flavours: with notation as above we say that $X$ is
\begin{itemize}
\item of \emph{Mukai type} if $\gamma$ is a closed embedding, which happens if and only if $O \notin \PP(W)$. In this case, projection from~$O$ gives a realisation of~$X$ as an intersection of the Grassmannian $\Grass(2,V_5)\subset \PP(\wedge^2V_5)$ with a linear subspace and a quadric.
\item of \emph{Gushel type} if $O \in \PP(W)$, in which case $\gamma$ is a double cover of its image, which is an $n$-dimensional linear section of $\Grass(2,V_5)$, denoted by~$M$. The branch divisor of this double cover $X \to M$ is an $(n-1)$-dimensional GM variety of Mukai type contained in~$M$.
\end{itemize}
In other papers these two types are referred to as ``ordinary'' and ``special'', respectively. As our work in the paper~\cite{FuMoonen-TCGMV} concerns GM varieties in characteristic~$p$ and the term ``ordinary'' has a well-established (different) meaning in algebraic geometry in characteristic~$p$, we prefer to use the above terminology. Note that GM varieties of Gushel type with $\dim(X) < 6$ are specialisations of GM varieties of Mukai type, and that all GM sixfolds are of Gushel type (as in this case necessarily $W = K \oplus \wedge^2 V_5$).
\subsection{} It is shown in~\cite{DK-GMClassification} that to a GM variety~$X$, one can canonically associate a so-called GM data set $(V_6,V_5,W,L,q,\mu,\epsilon)$ and a corresponding Lagrangian data set $(V_6,V_5,A)$. We refer to \cite{DK-GMClassification}, Sections~2.2 and~3.2 (especially Theorems~2.9 and~3.10) for details. We only recall that $V_6$ is a $6$-dimensional $K$-vector space, $V_5 \subset V_6$ is a $5$-dimensional subspace, and $A$ is a Lagrangian subspace of~$\wedge^3 V_6$ for the natural $\det(V_6)$-valued symplectic form on~$\wedge^3 V_6$. The vector spaces $V_6$, $V_5$, $A$, and~$W$ can be constructed from~$X$ in a functorial way. If the context requires it, we write $V_6(X)$, $V_5(X)$, etc.
\subsection{} Being Fano varieties, GM varieties are rationally connected, by the well-known results of Campana~\cite{Campana} and Koll\'ar--Miyaoka--Mori~\cite{KMM}. Further, it is known that for every GM variety~$X/K$ (nonsingular, as always) in characteristic~$0$ we have $\Pic_{X/K} \cong \ZZ$; cf.\ \cite[Lemma~2.29]{DK-GMClassification}. (This last result is also true in characteristic~$p$; see~\cite{FuMoonen-TCGMV}.)
\section{Generalities on Chow motives}
We recall an important result due to Bloch and Srinivas~\cite{BlochSrinivas}, which lies at the basis of many results about Chow motives.
\begin{theorem}[Bloch--Srinivas] \label{thm:Bloch-Srinivas} Let $X$ be a complete non-singular complex algebraic variety. Assume that the Chow group of $0$-cycles $\CH_0(X)$ is supported on an $r$-dimensional closed algebraic subset of~$X$.
\begin{enumerate}
\item\label{BS-1} If $r\leq 3$ then the Hodge Conjecture for $H^4(X,\QQ)$ is true.
\item\label{BS-2} If $r\leq 2$ then the Griffiths group $\Griff^2(X)$ is zero.
\item\label{BS-3} If $r\leq 1$ then the group $\CH^2(X)_{\alg}$ of algebraically trivial cycles of codimension~$2$ is weakly representable by an abelian subvariety $J^3_{\mathrm{a}}(X)$ of the intermediate Jacobian~$J^3(X)$.
\end{enumerate} \end{theorem} Note that the last assertion is not explicit in~\cite{BlochSrinivas}; see however \cite{MurreKTheory}, Theorem~C. The weak representability means in particular that we have an isomorphism $\CH^2(X)_{\alg} \isomarrow J^3_{\mathrm{a}}(X)\bigl(\CC\bigr)$. \subsection{} For a field~$k$ we denote by $\CHM(k)$ the category of Chow motives over~$k$ (with rational coefficients). We use the contravariant (cohomological) notion of motives, so that the functor sending a smooth projective variety~$X$ over~$k$ to its Chow motive~$\frh(X)$ is contravariant. Let $\unitmot(1)$ denote the Tate motive and, as usual, write $\unitmot(m) = \unitmot(1)^{\otimes m}$. (So $\unitmot(-1)$ is the Lefschetz motive.) The Chow groups (with rational coefficients) of a motive~$M$ over~$k$ are defined by the rule $\CH^i(M)_\QQ = \Hom_{\CHM(k)}\bigl(\unitmot(-i),M\bigr)$, and $\CH(M)_\QQ = \oplus_{i\in \ZZ}\; \CH^i(M)_\QQ$. With this notation we have the following very useful lemma. \begin{lemma} \label{lem:ManinPrinc} Let $\Omega$ be an algebraically closed field which is of infinite transcendence degree over its prime field. Let $f \colon M \to N$ be a morphism in~$\CHM(\Omega)$ such that the induced homomorphism $\CH(M)_\QQ \to \CH(N)_\QQ$ is an isomorphism. Then $f$ is an isomorphism. \end{lemma} \begin{proof} This follows from \cite[Lemma~1.1]{Huybrechts-MotiveK3} together with \cite[Lemma~3.2]{Vial-Remarks}. \end{proof} \subsection{} \label{subsec:CKDec} Let $X/k$ be a smooth projective $k$-scheme of dimension~$d$. A decomposition \[ [\Delta_X] = \sum_{i=0}^{2d}\; \pi_X^i \] in $\CH^d(X\times X)_\QQ$ is said to be a \emph{Chow--K\"unneth decomposition} if the~$\pi_X^i$ (viewed as correspondences from~$X$ to itself) are mutually orthogonal projectors and (for a Weil cohomology theory~$H$) the endomorphism of~$H(X)$ induced by~$\pi_X^i$ is the projection onto the summand~$H^i(X)$. Such a decomposition can also be viewed as a direct sum decomposition \[ \frh(X) = \bigoplus_{i=0}^{2d}\; \frh^i(X) \] (with $\frh^i(X) = (X,\pi_X^i,0)$) such that $H\bigl(\frh^i(X)\bigr) = H^i(X)$. We shall use both points of view interchangeably. Note that the projector~$\pi_X^i$ can be viewed both as a morphism $\frh^i(X) \to \frh(X)$ and as a morphism $\frh(X) \to \frh^i(X)$. A Chow--K\"unneth decomposition as above is said to be \emph{self-dual} if ${}^{\mathsf{t}}\pi_X^i = \pi_X^{2d-i}$ for all~$i$. The diagonal morphism $\Delta_X \colon X \to X\times X$ gives a morphism $\Delta_X^* \colon \frh(X) \otimes \frh(X) \to \frh(X)$. A Chow--K\"unneth decomposition is said to be \emph{multiplicative} if for all indices~$i$ and~$j$ the composition \[ \frh^i(X) \otimes \frh^j(X) \to \frh(X) \otimes \frh(X) \xrightarrow{~\Delta_X^*~} \frh(X) \] factors through $\frh^{i+j}(X)$. See \cite[Lemma~1.6]{FLV-MCK} for other formulations of this property. (Multiplicativity in fact implies that the decomposition is self-dual; see \cite[Proposition~1.7]{FLV-MCK}.) \subsection{} The subcategory of abelian motives $\CHM(k)^{\ab} \subset \CHM(k)$ is defined as the smallest full replete rigid tensor subcategory that is closed under taking direct summands and contains all motives of $1$-dimensional smooth proper $k$-schemes. (``Replete'' means that if a motive is isomorphic to an abelian motive, it is itself an abelian motive.) 
A Chow motive~$M$ is said to be finite dimensional (in the sense of Kimura--O'Sullivan) if there exists a decomposition $M = M^+ \oplus M^-$ and a positive integer~$n$ such that $\wedge^n M^+ = \Sym^n M^- = 0$. It is known that all abelian motives are finite dimensional; see~\cite{Kimura} or~\cite{Andre-MotDimFin}, Section~2. We will use the following result; see \cite[Corollaire~3.16]{Andre-MotDimFin}. \begin{theorem} \label{thm:Conservative} The natural functor $\CHM(k)^{\ab} \to \Mot_{\num}(k)$ from the category of abelian Chow motives to the category of numerical motives is conservative (i.e., it detects isomorphisms). \end{theorem} We will also use the following result of Vial, see \cite[Theorem~4]{Vial-Projectors}. For an abelian variety~$J$, we define $\frh_1(J) = \frh^1(J)^\vee$, where $\frh(J) = \oplus_{i=0}^{2g}\; \frh^i(J)$ is the Deninger--Murre decomposition of the Chow motive of~$J$ as in \cite[Theorem~3.1]{DeningerMurre}. \begin{theorem}[Vial] \label{thm:Vial-MotDecomp} Let X be a non-singular complex projective variety of dimension~$d$. Assume that the Abel--Jacobi map $\CH^i(X)_{\QQ,\hom}\to J^{2i-1}(X)\bigl(\CC) \otimes \QQ$ is injective for all~$i$. Then $J^{2i-1}(X)$ is an abelian variety and there is a Chow--K\"unneth decomposition $\frh(X) = \oplus_{i=0}^{2d}\; \frh^i(X)$ with \[ \frh^{2i}(X) \cong H^{2i}(X,\QQ) \otimes \unitmot(-i)\, ,\qquad \frh^{2i-1}(X) \cong \frh_1\bigl(J^{2i-1}(X)\bigr)(-i)\, . \] In particular, $\frh(X)$ is an abelian motive. \end{theorem} \section{Integral Chow groups of GM varieties} \label{sec:ChowGroupGM} In this section, we collect the computation of all the representable Chow groups, with \emph{integral} coefficients, of complex GM varieties. The results are essentially new only in dimension~$6$. \goodbreak\noindent \emph{Gushel--Mukai threefolds} \noindent The following result is essentially just an application of the Bloch--Srinivas theorem~\ref{thm:Bloch-Srinivas}. \begin{theorem} \label{thm:ChowGroupGM3} Let $X$ be a complex GM threefold, and let $J = J^3(X)$ be its intermediate Jacobian. \begin{enumerate} \item\label{thm:ChowGroupGM3-1} The cycle class map induces isomorphisms $\CH^i(X)\isomarrow \ZZ$ for $i=0$, $1$,~$3$. \item\label{thm:ChowGroupGM3-2} For codimension 2 cycles, algebraic equivalence and homological equivalence coincide, i.e., $\Griff^2(X)=0$, and the Abel--Jacobi map $\CH^2(X)_{\alg} \to J(\CC)$ is an isomorphism. \item\label{thm:ChowGroupGM3-3} We have $H^4(X,\ZZ) \cong \ZZ$ and this group is spanned by the cohomology class of a line on~$X$. We have a split short exact sequence \[ 0\tto J(\CC) \tto \CH^2(X) \xrightarrow{~\class~} \ZZ\to 0\, . \] \end{enumerate} \end{theorem} \begin{proof} \ref{thm:ChowGroupGM3-1} For $i=0$ this is clear, for $i=1$ this is true because every complex GM variety has Picard group~$\ZZ$, and for $i=3$ this follows from the fact that $X$ is rationally connected. \ref{thm:ChowGroupGM3-2} As $\CH_0(X)$ is supported on a point, this follows from the Bloch--Srinivas theorem~\ref{thm:Bloch-Srinivas}, taking into account that $\CH^2(X)_{\alg} \to J(\CC)$ is surjective because $X$ is rationally connected. (See \cite[Theorem~12.22]{Voisin-Book1}.) \ref{thm:ChowGroupGM3-3} By \cite[Proposition~3.4]{DK-Kyoto} the groups $H^i(X,\ZZ)$ are torsion-free and $H^2(X,\ZZ) = \ZZ \cdot \class(H)$, where $H \subset X$ is a hyperplane section for the given embedding $X \subset \PP(\CC \oplus \wedge^2 V_5)$. Further, $H^4(X,\QQ)$ is $1$-dimensional and $X$ contains lines. 
As any line $L \subset X$ has $L \cdot H = 1$ it follows that $H^4(X,\ZZ) = \ZZ \cdot \class(L)$. The last assertion is then immediate from~\ref{thm:ChowGroupGM3-2}. \end{proof} \goodbreak \noindent \emph{Gushel--Mukai fourfolds} \begin{proposition} Let $X$ be a complex GM fourfold. \begin{enumerate} \item\label{thm:ChowGroupGM4-1} The cycle class map induces isomorphisms $\CH^i(X)\isomarrow \ZZ$ for $i=0$, $1$,~$4$. \item\label{thm:ChowGroupGM4-2} The cycle class map $\CH^2(X) \to H^4(X,\ZZ)\bigl(2\bigr)$ is injective, with image the subgroup of integral Hodge classes; hence, we obtain an isomorphism \[ \CH^2(X)\isomarrow \bigl[H^4(X,\ZZ) \cap H^{2,2}(X,\CC)\bigr]\, . \] \end{enumerate} \end{proposition} \begin{proof} \ref{thm:ChowGroupGM4-1} This is true by the same argument as for Theorem~\ref{thm:ChowGroupGM3}\ref{thm:ChowGroupGM3-1}. \ref{thm:ChowGroupGM4-2} Because $\CH_0(X)$ is supported on a point, Theorem~\ref{thm:Bloch-Srinivas} together with the fact that $H^3(X,\ZZ)=0$ gives that $\CH^2(X)_{\hom}=\CH^2(X)_{\alg}=0$. Hence the cycle class map in~\ref{thm:ChowGroupGM4-2} is injective. The last assertion follows from the integral Hodge conjecture for GM fourfolds, which is proven by Perry in~\cite{Perry-IHC}. \end{proof} \begin{remark} The Chow group $\CH_1(X)$ is not representable and is very much related to the $\CH_0$ of the double EPW sextic hyper-K\"ahler fourfold. \end{remark} \goodbreak \noindent \emph{Gushel--Mukai fivefolds} \noindent The integral Chow groups of GM fivefolds were recently computed in the thesis of Lin Zhou~\cite{ZhouLin}, following the strategy of Fu--Tian \cite{FuTian-CubicFivefold}. Analogous results with $\QQ$-coefficients had been obtained by Laterveer~\cite{Laterveer-GM5}. \begin{theorem}[Zhou] \label{thm:ChowGroupGM5} Let $X$ be a complex GM fivefold and let $J = J^5(X)$ be the intermediate Jacobian. \begin{enumerate} \item The cycle class maps induce isomorphisms \[ \CH^0(X)\cong \ZZ\, ,\quad \CH^1(X)\cong \ZZ\, ,\quad \CH^2(X)\cong \ZZ\oplus \ZZ\, ,\quad \CH^4(X)\cong \ZZ\, ,\quad \CH^5(X)\cong \ZZ\, . \] \item For codimension~$3$ cycles, algebraic equivalence and homological equivalence coincide, i.e., $\Griff^3(X)=0$. The Abel-Jacobi map $\CH^3(X)_{\alg} \isomarrow J(\CC)$ is an isomorphism. \item We have $H^6(X,\ZZ) \cong \ZZ \oplus \ZZ$ and there is a split short exact sequence \[ 0 \tto J(X) \tto \CH^3(X) \tto H^6(X,\ZZ) \tto 0\, . \] \end{enumerate} \end{theorem} \goodbreak \noindent \emph{Gushel--Mukai sixfolds} \noindent We now turn to GM sixfolds, which require more work. The proof of the following result will take up the rest of this section. With rational coefficients similar results were also obtained in~\cite{BolognesiLaterveer-GM6}. \begin{theorem} \label{thm:ChowGroupGM6} Let $X$ be a complex GM sixfold. \begin{enumerate} \item \label{thm:ChowGroupGM6-1} The cycle class maps induce isomorphisms \[ \CH^0(X)\cong \ZZ\, ,\quad \CH^1(X)\cong \ZZ\, \quad \CH^6(X)\cong \ZZ\, . \] \item\label{thm:ChowGroupGM6-2} We have $H^{10}(X,\ZZ)\cong \ZZ$, generated by the class of a line contained in~$X$. The cycle class map $\CH^5(X)\cong H^{10}(X,\ZZ)\bigl(5\bigr)$ is an isomorphism. \item\label{thm:ChowGroupGM6-3} We have $H^4(X,\ZZ) \cong \ZZ\oplus \ZZ$, generated by the classes $H^2$ and $c_2(\mathscr{U}_X)$, where $\mathscr{U}_X$ is the Gushel bundle on~$X$ and $H \in \CH^1(X) \cong \ZZ$ is the class of an ample generator. The cycle class map $\CH^2(X)\to H^4(X, \ZZ)\bigl(2\bigr)$ is an isomorphism. 
\item\label{thm:ChowGroupGM6-4} We have $H^6(X,\ZZ) \cong \ZZ^{24}$ with Hodge numbers $h^{4,2} = h^{2,4} = 1$ and $h^{3,3} = 22$. The cycle class map $\CH^3(X)\to H^6(X,\ZZ)\bigl(3\bigr)$ is injective. \end{enumerate} \end{theorem} \begin{remarks} \begin{enumerate} \item The theorem does not give any information about the structure of the Chow group~$\CH^4(X)$, which is certainly the most interesting one. It is not representable and is closely related to the Chow group of $0$-cycles on the associated double EPW sextic. \item As we shall show in the next section, the Hodge conjecture (with rational coefficients) is true for GM varieties. Moreover, $H^6(X,\ZZ)$ has no torsion (see \cite[Proposition~3.4]{DK-Kyoto}), so that the image of $\CH^3(X)\to H^6(X,\ZZ)\bigl(3\bigr)$ is a subgroup of finite index in the space of Hodge classes in $H^6(X,\ZZ)\bigl(3\bigr)$. \end{enumerate} \end{remarks} \subsection{} The proof of Theorem \ref{thm:ChowGroupGM6} uses the same strategy as in \cite{FuTian-CubicFivefold} and~\cite{ZhouLin}, which is very much inspired by the work of Colliot-Th\'el\`ene--Voisin~\cite{CTVoisin} and Voisin~\cite{Voisin-H4nr}. The proof of~\ref{thm:ChowGroupGM6-1} is the same as for Theorem~\ref{thm:ChowGroupGM3}\ref{thm:ChowGroupGM3-1}. Part~\ref{thm:ChowGroupGM6-3} is an application of the Bloch--Srinivas theorem. Indeed, as $\CH_0(X)$ is supported on a point, parts \ref{BS-2} and~\ref{BS-3} of Theorem~\ref{thm:Bloch-Srinivas} imply that $\CH^2(X)_{\hom} = \CH^2(X)_{\alg} = 0$, since $H^3(X, \ZZ)=0$. Hence the cycle class map $\CH^2(X)\to H^4(X, \ZZ)$ is injective. On the other hand, as is shown in \cite[Proposition~3.4]{DK-Kyoto}, $H^*(X,\ZZ)$ is torsion-free and the homomorphism $\gamma^*\colon H^4\bigl(\Grass(2,V_5),\ZZ\bigr)\to H^4(X, \ZZ)$ induced by the Gushel map~$\gamma$ is an isomorphism. This gives~\ref{thm:ChowGroupGM6-3} because $H^2 - c_2(\mathscr{U}_X)$ and~$c_2(\mathscr{U}_X)$ are the images under~$\gamma^*$ of the Schubert classes $\sigma_2$ and~$\sigma_{1,1}$, respectively, which generate $H^4\bigl(\Grass(2,V_5),\ZZ\bigr)$. We next turn to the proof of part~\ref{thm:ChowGroupGM6-2}. Again by \cite[Proposition~3.4]{DK-Kyoto}, the assertions about $H^{10}(X,\ZZ)$ and the surjectivity of the cycle class map are clear because by~\cite[Theorem~4.7]{DK-Kyoto} $X$ contains lines. (See also below.) As for the injectivity of the cycle class map, we first need some geometric facts. \begin{lemma} \label{lemma:FourLines} Any two points of $X$ can be connected by at most four lines. \end{lemma} \begin{proof} Recall from Section~\ref{subsec:GMIntro} that the Gushel bundle~$\scrU_X$ on~$X$ is the pull-back of the tautological rank~$2$ bundle on the Grassmannian via the Gushel map $\gamma \colon X \to \Grass(2,V_5)$. The embedding $\scrU_X \subset V_5\otimes \scrO_X$ induces a natural morphism \begin{equation} \label{eq:rho1} \rho_1\colon \PP_X(\mathscr{U}_X)\to \PP(V_5)\, , \end{equation} which is a flat morphism whose fibres are isomorphic to quadrics in~$\PP^4$. We refer to \cite{DK-GMClassification}, Section~4 and \cite{DK-Kyoto}, Section~2.3 for details. (Note that for a GM variety~$X$ of dimension~$6$ the locus that in loc.\ cit.\ is called $\Sigma_1(X)$ is empty.) Let $F_1(X)$ be the Fano variety of lines in~$X$. If $L \subset X$ is a line then there are uniquely determined subspaces $U_1 \subset U_3 \subset V_5$ (depending on~$L$, of course) such that \[ \gamma(L) = \bigl\{U_2 \in \Grass(2,V_5) \bigm| U_1 \subset U_2 \subset U_3\bigr\}\, . 
\]
The map $L \mapsto U_1$ gives a morphism $\sigma \colon F_1(X)\to \PP(V_5)$. By Proposition~4.1 of \cite{DK-Kyoto}, $F_1(X)$ can be identified, as a scheme over~$\PP(V_5)$, with the relative Hilbert scheme of lines in the quadric fibration~$\rho_1$:
\begin{equation}
\label{eq:F1=Hilb}
F_1(X)\cong \Hilb^{\PP^1}\bigl(\PP_X(\mathscr{U}_X)/\PP(V_5)\bigr)\, .
\end{equation}
Let $x$ and $x^\prime$ be points of~$X$. Write $\gamma(x)=[U_2]$ and $\gamma(x^\prime)=[U_2^\prime]$ in $\Grass(2,V_5)$. It is clear that there exists a third $2$-dimensional subspace $U_2^\pprime \subset V_5$ such that $U_2\cap U_2^\pprime$ and $U_2^\prime \cap U_2^\pprime$ are both $1$-dimensional. Let $x^\pprime \in X$ be a point in $\gamma^{-1}\bigl\{[U_2^\pprime]\bigr\}$. Let $y$ be a generator of $U_2\cap U_2^\pprime$, which defines a point $[y] \in \PP(V_5)$. By construction, $(x, [y])$ and $(x^\pprime, [y])$ are points of $\PP_X(\mathscr{U}_X)$ that lie on the fibre of~$\rho_1$ over~$[y]$. This fibre is a quadric of dimension~$3$; hence any two points on it can be connected by at most two lines in it. Therefore, $x$ and $x^\pprime$ can be connected by at most two lines in~$X$. Similarly, $x^\prime$ and $x^\pprime$ can be connected by at most two lines, and we are done.
\end{proof}
\begin{lemma}
\label{lem:F1IrredRC}
The variety $F_1(X)$ of lines on~$X$ is rationally chain connected. In particular, all lines contained in $X$ have the same class in $\CH_1(X)$.
\end{lemma}
Note that we do not claim that $F_1(X)$ is irreducible; this is probably true but we do not need it.
\begin{proof}
We use~\eqref{eq:F1=Hilb}. By \cite[Proposition~4.5]{DK-GMClassification}, the fibres of the map~\eqref{eq:rho1} are quadrics in~$\PP^4$ of corank at most~$3$. If $Q \subset \PP^4$ is a non-degenerate quadric, its Hilbert scheme of lines is isomorphic to~$\PP^3$. If $Q$ has corank~$1$ its Hilbert scheme is the union of two $\PP^1$-bundles over~$\PP^1$, glued along a $2$-dimensional quadric. If the corank is~$2$, the Hilbert scheme is obtained from a $\PP^2$-bundle over~$\PP^1$ by contracting a section, and if the corank is~$3$, we get a union of two copies of~$\PP^3$, glued along a~$\PP^2$. So in all cases the fibres are rationally chain connected. Moreover, there is a non-empty open subset $U \subset \PP(V_5)$ over which $\sigma$ is a $\PP^3$-bundle. Let $Z \subset F_1(X)$ be the closure of~$\sigma^{-1}(U)$. By Corollary~1.3 of~\cite{GraberHarrisStarr}, $Z$ is rationally connected, and since $Z$ meets all fibres, the assertion follows.
\end{proof}
\begin{proposition}
\label{prop:CH1GM6BoundedTorsion}
There exists an integer $N>0$ such that for any $\alpha\in \CH_1(X)$ the class~$N\alpha$ is a multiple of the class of a line in~$X$. In particular, $\CH_1(X)_\QQ\cong \QQ$, generated by the class of a line.
\end{proposition}
\begin{proof}
We use the argument of ``successive ruled surfaces'', as in \cite[Proposition~3.1]{TianZong}. Fix a point $x_0 \in X$. Let $I = \bigl\{(L,x) \in F_1(X) \times X \bigm| x\in L\bigr\}$ be the incidence variety, and let $I^{(2)} = I \times_{F_1(X)} I = \bigl\{(L,x_1,x_2) \in F_1(X) \times X \times X \bigm| x_1, x_2 \in L\bigr\}$. Let $e \colon I \to X$ and $e_1$, $e_2 \colon I^{(2)} \to X$ be the evaluation morphisms. Then
\[
B = e_1^{-1}\{x_0\}\; {}_{e_2}\!\times_{e_1} I^{(2)} {}_{e_2}\!\times_{e_1} I^{(2)} {}_{e_2}\!\times_{e} I
\]
is the scheme of tuples $(L_1,x_1,L_2,x_2,L_3,x_3,L_4)$ with $x_0 \in L_1$ and $x_j \in L_j\cap L_{j+1}$ for $j=1,2,3$.
For $j=1,\ldots,4$, let \[ \scrL_j = \Bigl\{\bigl((L_1,x_1,L_2,x_2,L_3,x_3,L_4),y\bigr) \in B \times X \Bigm| y \in L_j \Bigr\}\, . \] The first projection makes $\scrL_j$ a $\PP^1$-bundle over~$B$. The second projection gives an evaluation morphism $e_j \colon \scrL_j \to X$. Furthermore, we have sections \[ \begin{tikzcd} \scrL_1 \ar[d] & \scrL_2 \ar[d] & \scrL_3 \ar[d] & \scrL_4 \ar[d]\\ B \ar[u,bend left,"s_0"] \ar[u,bend right,"s_1^\prime"'] & B \ar[u,bend left,"s_1"] \ar[u,bend right,"s_2^\prime"'] & B \ar[u,bend left,"s_2"] \ar[u,bend right,"s_3^\prime"'] & B \ar[u,bend left,"s_3"] \end{tikzcd} \] where $s_j$ ($j=0,1,2,3$) is obtained by taking $y=x_j$, viewed as a point of~$L_{j+1}$, and $s_j^\prime$ ($j=1,2,3$) is obtained by taking $y=x_j$, viewed as a point of~$L_j$. Let $\pi \colon \scrL \to B$ be the scheme over~$B$ obtained by gluing $\scrL_j$ and~$\scrL_{j+1}$ along their sections~$s_j^\prime$ and~$s_j$. The fibres of~$\pi$ are chains of four lines connected at points. By construction the morphisms~$e_j$ glue to an evaluation morphism $e \colon \scrL \to X$, which by Lemma~\ref{lemma:FourLines} is surjective. By taking general successive hyperplane sections of $\scrL$, we obtain a generically finite morphism $\scrL'\to X$, whose degree is denoted by $m\in \ZZ_{>0}$. Set $N=m!$. Without loss of generality, we may assume that $\alpha$ is the class of an irreducible curve $C \subset X$. Our goal is to show that $N[C]\in \CH_1(X)$ is a linear combination of classes of lines. We may assume $C$ is not itself a line. There exists an irreducible curve $\hat{C} \subset \scrL'$ such that $e$ restricts to a non-constant morphism $\hat{C} \to C$. Because $C$ is not a line, $\pi(\hat{C})$ is a curve. Let $B_0$ be the normalisation of this curve, and define $Y = B_0 \times_B \scrL$, which is a union of four ruled surfaces~$Y_j$ over~$B_0$, glued along sections. The normalisation of~$\hat{C}$ maps to~$Y$; let $\tilde{C} \subset Y$ be the image. By construction, the evaluation map $e \colon Y \to X$ gives a non-constant morphism $\tilde{C} \to C$, so that $e_*[\tilde{C}] = m'\cdot [C] \in \CH_1(X)$ for some $0<m'\leq m$. 
\[ \begin{tikzpicture} \coordinate (LA) at (0,8) {}; \coordinate (LB) at (0,7) {}; \coordinate (LC) at (0,6) {}; \coordinate (LD) at (0,5) {}; \coordinate (LE) at (0,4) {}; \coordinate (LF) at (0,3) {}; \coordinate (LG) at (0,2) {}; \coordinate (LH) at (0,1) {}; \coordinate (RA) at (4,8) {}; \coordinate (RB) at (4,7) {}; \coordinate (RC) at (4,6) {}; \coordinate (RD) at (4,5) {}; \coordinate (RE) at (4,4) {}; \coordinate (RF) at (4,3) {}; \coordinate (RG) at (4,2) {}; \coordinate (RH) at (4,1) {}; \path[save path=\compd] (LA) to[in=60,out=-60] (LC) to (RC) to[in=-60,out=60] (RA) to (LA); \path[save path=\compc] (LB) to[in=60,out=-60] (LE) to (RE) to[in=-60,out=60] (RB) to (LB); \path[save path=\compb] (LD) to[in=60,out=-60] (LG) to (RG) to[in=-60,out=60] (RD) to (LD); \path[save path=\compa] (LF) to[in=60,out=-60] (LH) to (RH) to[in=-60,out=60] (RF) to (LF); \fill[white][use path=\compa]; \fill[white][use path=\compb]; \fill[white][use path=\compc]; \fill[white][use path=\compd]; \draw[thick][use path=\compa]; \draw[thick][use path=\compb]; \draw[thick][use path=\compc]; \draw[thick][use path=\compd]; \path[name path=LAC] (LA) to[in=60,out=-60] (LC); \path[name path=LBE] (LB) to[in=60,out=-60] (LE); \path[name path=LDG] (LD) to[in=60,out=-60] (LG); \path[name path=LFH] (LF) to[in=60,out=-60] (LH); \path [name intersections={of=LAC and LBE}]; \coordinate (IBC) at (intersection-1); \path [name intersections={of=LBE and LDG}]; \coordinate (IDE) at (intersection-1); \path [name intersections={of=LDG and LFH}]; \coordinate (IFG) at (intersection-1); \path[name path=RAC] (RA) to[in=60,out=-60] (RC); \path[name path=RBE] (RB) to[in=60,out=-60] (RE); \path[name path=RDG] (RD) to[in=60,out=-60] (RG); \path[name path=RFH] (RF) to[in=60,out=-60] (RH); \path [name intersections={of=RAC and RBE}]; \coordinate (JBC) at (intersection-1); \path [name intersections={of=RBE and RDG}]; \coordinate (JDE) at (intersection-1); \path [name intersections={of=RDG and RFH}]; \coordinate (JFG) at (intersection-1); \path[name path=XX] (-1,1.25) to (5,1.4); \path [name intersections={of=LFH and XX}]; \coordinate (LS) at (intersection-1); \path [name intersections={of=RFH and XX}]; \coordinate (RS) at (intersection-1); \path[save path=\Chat] (.5,5) .. controls (3.5,3) and (.5,1) .. (1.1,3.5) .. controls (2.25,7) and (4,2) .. 
(4.37,3); \begin{scope} \clip[use path=\compd]; \clip (IBC) rectangle (5,8); \fill[white][use path=\compd]; \draw[thick,dotted][use path=\compc]; \draw[thick][use path=\compd]; \end{scope} \begin{scope} \clip[use path=\compc]; \clip (IDE) rectangle (JBC); \fill[white][use path=\compc]; \draw[thick,dotted][use path=\compd]; \draw[thick,dotted][use path=\compb]; \draw[thick][use path=\compc]; \draw[thick,red,dotted][use path=\Chat]; \end{scope} \begin{scope} \clip[use path=\compa]; \fill[white][use path=\compa]; \draw[thick,dotted][use path=\compb]; \draw[thick][use path=\compa]; \draw[thick,red,dotted][use path=\Chat]; \end{scope} \begin{scope} \clip[use path=\compb]; \clip (IFG) rectangle ($(JDE)+(1,0)$); \fill[white][use path=\compb]; \draw[thick,dotted][use path=\compc]; \draw[thick,dotted][use path=\compa]; \draw[thick][use path=\compb]; \draw[thick,red][use path=\Chat]; \end{scope} \draw[thick] (LD) to[in=60,out=-60] (LG); \draw[thick] (IBC) to[out=-5,in=185] (JBC); \draw[thick] (IDE) to (JDE); \draw[thick] (IFG) to (JFG); \draw[thick] (LS) to[out=-5,in=185] (RS); \node[left] at (LA) {$Y_4$}; \node[left] at (LB) {$Y_3$}; \node[left] at (LD) {$Y_2$}; \node[left] at (LF) {$Y_1$}; \node[above] at (3.5,6.4) {$s_3$}; \node[above] at (3.5,4.45) {$s_2$}; \node[above] at (3.5,2.4) {$s_1$}; \node[above] at (3.5,1.3) {$s_0$}; \node[above,red] at (2,3.4) {$\tilde{C}$}; \node[above] at (3.5,8) {$Y$}; \draw[->] (5.5,4.5) -- (6.5,4.5); \node[above] at (6,4.5) {$e$}; \node at (7,4.5) {$X$}; \draw[->] (2.2,.5) -- (2.2,-.5); \node at (4.5,-1) {$B_0$}; \draw[thick] (0,-1) to[out=10,in=175] (2,-1) to[out=-5,in=175] (4,-1); \end{tikzpicture} \] In each component~$Y_j$ ($j=1,\ldots,4$), every ruling pushes forward to a line in~$X$. If $t$ is any section of~$Y_j$ then $\CH_1(Y_j)$ is generated by the class of $t(B_0)$ together with the rulings. Because $s_0$ comes from the constant section~$x_0$ we have $e_*\bigl[s_0(B_0)\bigr] = 0$. Therefore, $e_*\bigl[s_j(B_0)\bigr] \in \CH_1(X)$ lies in the subgroup spanned by the classes of lines, for each $j=1,\ldots,4$. As $\tilde{C}$ is contained in one of the components~$Y_j$, it follows that $m'\cdot [C] = e_*[\tilde{C}]$, hence $N\cdot [C]$, lies in this subgroup, which is what we wanted to prove. \end{proof} Now we are ready to compute the Chow group of 1-cycles of a GM sixfold. \begin{proof}[Proof of \emph{Theorem \ref{thm:ChowGroupGM6} \ref{thm:ChowGroupGM6-2}}] By Proposition \ref{prop:CH1GM6BoundedTorsion}, there is an integer $N>0$ such that multiplication by $N$ kills $\CH_1(X)_{\alg}$. Combining this with the divisibility of $\CH_1(X)_{\alg}$, we conclude that $\CH_1(X)_{\alg} = 0$. On the other hand, by the blow-up formula, the Griffiths group of $1$-cycles is a birational invariant for smooth projective varieties. Because $X$ is rational, it follows that $\Griff_1(X)=\Griff_1(\PP^6)=0$; hence, \[ \CH_1(X)_{\hom} = \CH_1(X)_{\alg} = 0\, . \] In other words, the cycle class map $\CH_1(X)\tto H^{10}(X,\ZZ)$ is injective. The surjectivity follows from the fact that $H^{10}(X,\ZZ)$ is generated by the class of a line in $X$. \end{proof} Finally, we turn to part~\ref{thm:ChowGroupGM6-4} of Theorem~\ref{thm:ChowGroupGM6}. 
\begin{lemma} \label{lem:DODGM6} There exist an integer $n > 0$, a (possibly reducible) smooth projective variety~$T$ of dimension~$4$, a generically injective morphism $j\colon T\to X$, and an algebraic cycle $Z\in \CH^4(X\times T)$, such that we have the following equality in $\CH^6(X\times X)$: \begin{equation} \label{eq:DODGM6} n\cdot [\Delta_X] = n\cdot \bigl([\pt] \times [X]\bigr) + n\cdot \bigl([\operatorname{line}]\times H\bigr) + (\id_X \times j)_*(Z)\, . \end{equation} \end{lemma} \begin{proof} Since we have already proven that $\CH_0(X)\cong \ZZ$ and $\CH_1(X)\cong \ZZ$, Laterveer's refined decomposition of the diagonal \cite[Theorem 1.7]{Laterveer-SmallChow} applies. This gives that there exists an integer $n > 0$ such that in $\CH^6(X\times X)$ we have a relation \[ n\cdot [\Delta_X] = n\cdot \bigl([\pt]\times [X]\bigr) + n\cdot \bigl([\operatorname{line}]\times H\bigr) + Z^\prime\, , \] where $Z^\prime$ is supported on $X \times T^\prime$ for some closed algebraic subset $T^\prime \subset X$ of codimension~$2$. Now let $T$ be the disjoint union of resolutions of the irreducible components of~$T^\prime$ and let~$Z$ be an algebraic cycle in $X\times T$ which pushes forward to~$Z^\prime$. \end{proof} \begin{proposition} \label{prop:Griff3GM6} Algebraic and homological equivalence coincide for cycles of codimension~$3$ on a GM sixfold~$X$, i.e., $\Griff^3(X)=0$. \end{proposition} \begin{proof} We first show that $\Griff^3(X)$ is torsion. Let both sides of~\eqref{eq:DODGM6}, viewed as correspondences from~$X$ to itself, act on~$\Griff^3(X)$. For any $\alpha\in \Griff^3(X)$, the correspondence $n\cdot [\Delta_X]$ sends~$\alpha$ to $n\cdot \alpha$. The first two terms of the right hand side of~\ref{eq:DODGM6} send $\alpha$ to zero for dimension reasons, and the third term sends~$\alpha$ to $j_*\bigl(Z_*(\alpha)\bigr)$. Therefore, we have \[ n\cdot \alpha = j_*(Z_*(\alpha)) \] in $\Griff^3(X)$. However, $Z_*(\alpha)$ is an element of~$\Griff^1(T)$, which is trivial as homological equivalence and algebraic equivalence coincide for divisors. We conclude that $\Griff^3(X)$ is killed by~$n$. It remains to show that $\Griff^3(X)$ is torsion free. For any abelian group~$A$, let $\scrH^i_A$ be the Zariski sheaf associated to the presheaf $U \mapsto H^i(U,A)$. Bloch and Ogus~\cite{Bloch-Ogus} showed that, starting from the $E_2$-page, the coniveau spectral sequence agrees with the Leray spectral sequence associated with the continuous map $X(\CC)\to X_{\operatorname{Zar}}$ and the constant sheaf~$A$. Therefore, we have \begin{equation} \label{eq:SpectralSeq} E_2^{p,q} = H^p(X,\scrH^q_A) \Longrightarrow N^pH^{p+q}(X, A)\, , \end{equation} where $N^\bullet$ denotes the coniveau filtration. We need two basic properties of this spectral sequence. \begin{itemize} \item In \eqref{eq:SpectralSeq}, $E_2^{0, q}=H^0(X, \scrH^q_{A})$ is the so-called unramified cohomology $H^i_{\nr}(X,A)$, which is a birational invariant, see \cite[Theorem~2.8]{CTVoisin}. As $X$ is rational (see \cite[Proposition~4.2]{DK-GMClassification}), its unramified cohomology groups all vanish except in degree zero, i.e., $E_2^{0,q} = 0$ for any $q>0$. \item For $p>q$ we have $E_2^{p,q} = 0$. This is a consequence of the Gersten conjecture for homology theory proved by Bloch--Ogus~\cite[(0.3)]{Bloch-Ogus}. 
\end{itemize} Using these properties, taking $A=\ZZ$, \eqref{eq:SpectralSeq} gives rise to an exact sequence \begin{equation} \label{eq:LESfromSS} 0\tto H^5(X,\ZZ)/N^2H^5(X,\ZZ) \tto H^1(X,\scrH^4_\ZZ) \tto H^3(X,\scrH^3_\ZZ) \tto H^6(X, \ZZ)\, , \end{equation} where $H^3(X,\scrH^3_\ZZ)$ is identified with the group of codimension~$3$ cycles modulo algebraic equivalence (see~\cite[0.5]{Bloch-Ogus}) and the last arrow is the cycle class map. Therefore, the kernel of the last arrow is exactly the Griffiths group~$\Griff^3(X)$. Since $H^5(X,\ZZ)=0$, the first term in \eqref{eq:LESfromSS} vanishes, and we obtain that \begin{equation} \label{eq:Griff3=H1H4} H^1(X,\scrH^4_\ZZ)\cong \Griff^3(X). \end{equation} Now we use a result of Colliot-Th\'el\`ene and Voisin, see \cite[Theorem 3.1]{CTVoisin}, which is based on the Bloch--Kato conjecture (proven by Voevodsky \cite{Voevodsky-IHES}, \cite{Voevodsky-Annals}). This result implies that for any integer~$n$ we have a short exact sequence of Zariski sheaves: \[ 0 \tto \scrH^4_\ZZ \xrightarrow{\cdot n} \scrH^4_\ZZ \tto \scrH^4_{\ZZ/n\ZZ} \tto 0\, . \] {}From the associated long exact sequence we obtain a short exact sequence \[ 0 \tto H^0(X,\scrH^4_\ZZ)/n \tto H^0(X,\scrH^4_{\ZZ/n\ZZ}) \tto H^1(X,\scrH^4_\ZZ)[n] \tto 0\, , \] where the last term denotes the $n$-torsion subgroup of $H^1(X,\scrH^4_\ZZ)$. Now observe that the middle term is the unramified cohomology group $H^4_{\nr}(X,\ZZ/n\ZZ)$, which is trivial since $X$ is rational. It follows that $H^1(X, \scrH^4_\ZZ)$ has no $n$-torsion. On the other hand, we had already shown that $\Griff^3(X)$ is killed by~$n$; so $\Griff^3(X) = 0$. \end{proof} \begin{proof}[Proof of \emph{Theorem \ref{thm:ChowGroupGM6} \ref{thm:ChowGroupGM6-4}}] Let both sides of \eqref{eq:DODGM6} act on $\CH^3(X)_{\alg}$. The same argument as in the first part of the proof of Proposition \ref{prop:Griff3GM6} shows that the multiplication by~$n$ map on $\CH^3(X)_{\alg}$ factors through \[ Z_*\colon \CH^3(X)_{\alg}\to \CH^1(T)_{\alg}\, . \] However, we have the commutative diagram \begin{equation*} \begin{tikzcd} \CH^3(X)_{\alg} \ar[r , "Z_*"] \ar[d]& \CH^1(T)_{\alg} \ar[d, "\wr"]\\ J^5(X) \ar[r, "\class(Z)_*"]& \Pic^0(T) \end{tikzcd} \end{equation*} where the vertical arrows are Abel--Jacobi maps. Since the right vertical arrow is an isomorphism and $J^5(X)=0$ (since $H^5(X,\ZZ)=0$), the top arrow $Z_*\colon \CH^3(X)_{\alg}\to \CH^1(T)_{\alg}$ is zero. Hence $\CH^3(X)_{\alg}$ is killed by~$n$. On the other hand, $\CH^3(X)_{\alg}$ is a divisible group; hence $\CH^3(X)_{\alg} = 0$. Combining this with Proposition~\ref{prop:Griff3GM6}, we conclude that $\CH^3(X)_{\hom}=0$, i.e., the cycle class map is injective. \end{proof} \section{The Generalised Hodge Conjecture} \label{sec:GHC} The main result of this section is the following. \begin{theorem} \label{thm:GHC} Let $X$ be a complex Gushel--Mukai variety. Then the Generalised Hodge Conjecture is true for~$X$. \end{theorem} \begin{proof} Most of this can be extracted from the literature together with the computation of their Chow groups. For GM varieties of dimension $3$ or~$4$ the result is proven in \cite[Corollary~2.5(ii)]{Laterveer-SmallChow}. For GM varieties of dimension~$5$ the result can be found in \cite[Remark~3.2]{Laterveer-GM5} (which refines a result by Nagel in~\cite{Nagel-GHC}). 
For GM varieties of dimension~$6$, Theorem~\ref{thm:GHC} follows from Theorem \ref{thm:ChowGroupGM6} (or rather Proposition~\ref{prop:CH1GM6BoundedTorsion}) together with \cite[Proposition~2.4(ii)]{Laterveer-SmallChow}. \end{proof} \begin{remark} \label{rem:HCGM6alt} Both in \cite[Remark~2.26]{KP16} and in \cite[Remark~4.2]{Debarre-GM}, it is stated that the (usual) Hodge conjecture for GM sixfolds can be proven using the results of~\cite{DK-Kyoto}, but no details are provided. While it is clear how to proceed for general~$X$ (see below), we have not been able to make this method work for all GM sixfolds. First assume that $X$ is a GM variety of dimension~$6$ which is general, in the sense that $Y_A^{\geq 3} = \emptyset$ and $\mathbf{p}_X \notin (Y_A^{\geq 2})^\vee$. (This is condition~(10) in~\cite{DK-Kyoto}.) As in~\cite{DK-Kyoto}, let $F^\sigma_2(X)$ be the Hilbert scheme of $\sigma$-planes in~$X$, and let $\scrL^\sigma_2(X)$ be the universal plane over~$F^\sigma_2(X)$. We then have a diagram \[ \begin{tikzcd} & \scrL_2^\sigma(X) \ar[dl,"q"']\ar[dr,"p"]\\ X && F^\sigma_2(X) \ar[d,"\tilde\sigma"]\\ && \widetilde{Y}_{A,V_5} \ar[r,hook,"\iota"] & \widetilde{Y}_A \end{tikzcd} \] where $\widetilde{Y}_{A,V_5} \subset \widetilde{Y}_A$ is a subvariety of codimension~$1$ and $\tilde\sigma \colon F^\sigma_2(X) \to \widetilde{Y}_{A,V_5}$ is a $\PP^1$-bundle. By \cite[Corollary~5.13]{DK-Kyoto}, $F^\sigma_2(X)$ is a non-singular fourfold, and hence $\scrL^\sigma_2(X)$ is non-singular of dimension~$6$. Let $[\mathbf{P}] \in H^6\bigl(F^\sigma_2(X),\QQ\bigr)$ denote the class of a fibre of~$\tilde\sigma$ and define $H^2\bigl(F^\sigma_2(X),\QQ\bigr)_0 = [\mathbf{P}]^\perp \subset H^2\bigl(F^\sigma_2(X),\QQ\bigr)$. By \cite[Proposition~5.14]{DK-Kyoto}, $(\iota\circ \tilde\sigma)^*$ gives an isomorphism \[ H^2(\widetilde{Y}_A,\QQ) \isomarrow H^2\bigl(F^\sigma_2(X),\QQ\bigr)_0\, . \] Let $H^2(\widetilde{Y}_A,\QQ)_0 \subset H^2(\widetilde{Y}_A,\QQ)$ be the primitive subspace and write $H^2\bigl(F^\sigma_2(X),\QQ\bigr)_{00}$ for its image under $(\iota\circ \tilde\sigma)^*$. By \cite[Theorem~5.19]{DK-Kyoto} we then have a commutative diagram \[ \begin{tikzcd} H^2(\widetilde{Y}_A,\QQ) \ar[r,"\sim","~~(\iota\circ \tilde\sigma)^*~~"']& H^2\bigl(F^\sigma_2(X),\QQ\bigr)_0\\ H^2(\widetilde{Y}_A,\QQ)_0 \ar[r,"\sim"] \arrow[symbol]{u}{\bigcup} & H^2\bigl(F^\sigma_2(X),\QQ\bigr)_{00} \arrow[symbol]{u}{\bigcup} & H^6(X,\QQ)_{00} \ar[l,"\sim"]\\[-16pt] & p_*\bigl(q^*(z)\bigr) & z \ar[l,maps to] \end{tikzcd} \] To deduce the Hodge conjecture for~$X$, it now suffices to show that there exists a class $\xi \in \CH^6\bigl(X \times F^\sigma_2(X)\bigr)$ such that the inverse isomorphism $H^2\bigl(F^\sigma_2(X),\QQ\bigr)_{00} \isomarrow H^6(X,\QQ)_{00}$ is given by $y \mapsto \pr_{X,*}\bigl(\class(\xi) \cup \pr_{F}^*(y)\bigr)$, where $\pr_X \colon X \times F^\sigma_2(X) \to X$ and $\pr_F \colon X \times F^\sigma_2(X) \to F^\sigma_2(X)$ are the projections. This is proven as follows. Let $h \in H^2(X,\QQ)$ be the class of the very ample line bundle $H = \scrO_X(1)$. As explained in \cite{DK-Kyoto}, Section~5 (before Theorem~5.19), $p\colon \scrL_2^\sigma(X) \to F^\sigma_2(X)$ is a $\PP^2$-bundle for which $q^*(h)$ is a relative hyperplane class. Hence there is a vector bundle~$\scrE$ of rank~$3$ on~$F^\sigma_2(X)$ such that $\scrL_2^\sigma(X) \cong \PP(\scrE)$. 
As $q\colon \scrL_2^\sigma(X) \to X$ is generically finite of degree~$12$ (\cite[Lemma~5.15]{DK-Kyoto}), it follows from the proof of \cite[Theorem~5.19]{DK-Kyoto} that the class \[ \xi = \tfrac{1}{12} \cdot \Bigl[q^*(h^2) + p^*(c_1(\scrE))\cdot q^*(h) + p^*(c_2(\scrE))\Bigr] \] (viewed as a class in $\CH^6\bigl(X \times F^\sigma_2(X)\bigr)$) has the required property. This completes the argument in case $X$ is general. If $X$ is arbitrary then, by considering a family of GM varieties, we can still show that there exists a class $\gamma \in \CH_4(\widetilde{Y}_A \times X)$ that induces an isomorphism $H^2\bigl(\widetilde{Y}_A,\QQ(1)\bigr)_0 \isomarrow H_6\bigl(X,\QQ(-3)\bigr)_{00}$ by $z\mapsto \pr_{2,*}\bigl(\pr_1^*(z) \cap [\gamma]\bigr)$. (The reader will hopefully be able to guess the meaning of the notation.) However, it is not clear to us if the Hodge conjecture in \emph{cohomological} degree~$2$ is true for the variety~$\widetilde{Y}_A$, as in general this variety is singular. \end{remark} \section{The Mumford--Tate Conjecture and the Generalised Tate Conjecture} \label{sec:MTC+GTC} In this section we first recall the statements of the Mumford--Tate Conjecture and the Generalised Tate Conjecture (in characteristic~$0$). The main result that we prove is that these conjectures are true for Gushel--Mukai varieties of even dimension. We deduce this from a theorem of Andr\'e~\cite{Andre-ShafTate}. This argument uses that, as shown in the previous section, the Hodge conjecture is true for these varieties. In all of this section, $\ell$ is a fixed (but arbitrary) prime number. \subsection{} \label{subsec:MTC} As explained in \cite[Section~1]{Moonen-TCMTC}, the Mumford--Tate Conjecture and the Tate Conjecture can be viewed as statements about complex algebraic varieties, and we will take that perspective. Let then $X$ be a complete non-singular algebraic variety over~$\CC$. Fix an integer~$i$ and write~$H_\ell$ for $H^{2i}\bigl(X,\QQ_\ell(i)\bigr)$. Choose any subfield $F \subset \CC$ that is finitely generated over~$\QQ$ and a model~$X_F$ of~$X$ over~$F$. As $H_\ell$ is canonically isomorphic to $H^{2i}\bigl(X_{\Fbar},\QQ_\ell(i)\bigr)$, the choice of the model~$X_F$ gives us a continuous Galois representation \[ \rho_\ell \colon \Gal\bigl(\Fbar/F\bigr) \to \GL(H_\ell)\, . \] Define $G_\ell$ to be the Zariski closure of the image of~$\rho_\ell$, and let $G_\ell^0$ denote its identity component. The algebraic subgroup $G_\ell^0 \subset \GL\bigl(H_\ell\bigr)$ only depends on~$X$ and is independent of the choice of~$F$ and~$X_F$; see \cite[Proposition~1.3]{Moonen-TCMTC}. Let $H_{\Betti} = H^{2i}\bigl(X(\CC),\QQ(i)\bigr)$, and let $G_{\Betti} \subset \GL(H_{\Betti})$ be its Mumford--Tate group. (Here ``$\Betti$'' is for ``Betti realisation''.) Artin's comparison isomorphism $H_{\Betti} \otimes \QQ_\ell \isomarrow H_\ell$ induces an isomorphism of algebraic groups $\GL(H_{\Betti}) \otimes \QQ_\ell \isomarrow \GL(H_\ell)$. The \emph{Mumford--Tate Conjecture} (for the chosen $X$, $\ell$ and~$i$) is the assertion that this isomorphism restricts to an isomorphism \[ G_{\Betti} \otimes \QQ_\ell \xrightarrow[?]{~\sim~} G_\ell^0\, . \] \subsection{} With notation as above, a class $\xi \in H_\ell$ is called a Tate class if it is fixed under the action of~$G_\ell^0$; this is equivalent to the condition that the stabiliser of~$\xi$ in $\Gal(\Fbar/F)$ is an open subgroup. All elements in the image of the cycle class map $\class_\ell \colon \CH^i(X) \otimes \QQ_\ell \to H_\ell$ are Tate classes. 
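Let us briefly recall the standard argument for this last assertion. If $Z \subset X$ is a closed subvariety of codimension~$i$ that is defined over a finite extension $F' \subset \Fbar$ of~$F$ (to which case one reduces by a spreading-out argument), then the open subgroup $\Gal(\Fbar/F') \subset \Gal(\Fbar/F)$ acts trivially on its class, i.e.,
\[
\class_\ell(Z) \in H_\ell^{\,\Gal(\Fbar/F')}\, ,
\]
so the stabiliser of~$\class_\ell(Z)$ in $\Gal(\Fbar/F)$ is open.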
The \emph{Tate Conjecture} (again: for the chosen $X$, $\ell$ and~$i$) says that the algebraic group~$G_\ell^0$ is reductive and that the $\ell$-adic cycle class map
\begin{equation}
\label{eq:classl}
\class_\ell \colon \CH^i(X) \otimes \QQ_\ell \to \{\text{Tate classes in~$H_\ell$}\}
\end{equation}
is surjective.
\subsection{}
\label{subsec:GTC}
Retaining the above notation and assumptions, let $W \subset H_\ell = H^{2i}\bigl(X,\QQ_\ell(i)\bigr)$ be a $G_\ell^0$-subrepresentation. If $r$ is a natural number, $W$ is said to be of Tate coniveau~$\geq r$ if the following conditions are satisfied:
\begin{enumerate}[label=(\alph*)]
\item\label{coniv1} there exists a domain $R\subset \CC$ which is of finite type over~$\ZZ$ and a smooth proper model $X_R \to \Spec(R)$ of~$X$ over~$R$;
\item\label{coniv2} over the fraction field of~$R$, the group $G_\ell$ as above is connected;
\item\label{coniv3} at every closed point~$x$ of $\Spec\bigl(R[1/\ell]\bigr)$, all eigenvalues of the Frobenius at~$x$ acting on $W(r-i)$ and on~$W^\vee(r-i)$ are algebraic integers.
\end{enumerate}
(The second condition is included only to make the third condition meaningful. For any $X/R$ as in~\ref{coniv1}, condition~\ref{coniv2} is satisfied after replacing~$R$ with a finite extension.)
If $Z \subset X$ is a closed subscheme of which all components have codimension~$\geq r$ then $\Ker\bigl(H_\ell \to H^{2i}((X\setminus Z),\QQ_\ell(i)) \bigr)$ is a $G_\ell^0$-subrepresentation of Tate coniveau~$\geq r$. The \emph{Generalised Tate Conjecture} (which goes back to Grothendieck, see \cite[Section~10.3]{Brauer3}) states that, conversely, every $G_\ell^0$-subrepresentation $W \subset H_\ell$ of Tate coniveau~$\geq r$ is supported on a subscheme $Z \subset X$ whose components have codimension~$\geq r$, in the sense that $W \subset \Ker\bigl(H_\ell \to H^{2i}((X\setminus Z),\QQ_\ell(i)) \bigr)$. (For $r=i$ this says that the $\ell$-adic cycle class map~\eqref{eq:classl} is surjective.)
\begin{remark}
The Mumford--Tate Conjecture and the (Generalised) Tate Conjecture are usually formulated as statements for varieties over number fields or finitely generated fields. The conjectures as formulated above (for $X$ over~$\CC$) are true if and only if the analogous conjectures over arbitrary finitely generated fields (of characteristic~$0$) are true.
\end{remark}
\begin{theorem}
\label{thm:MTCTC}
Let $X$ be a complex Gushel--Mukai variety of even dimension. Then the Mumford--Tate Conjecture and the Generalised Tate Conjecture for~$X$ are true.
\end{theorem}
To avoid any confusion: the assertion is that the mentioned conjectures are true for all~$i$ and~$\ell$. However, this is non-trivial only for the cohomology in middle degree ($i=\frac{\dim(X)}{2}$), and in the proof the choice of~$\ell$ plays no particular role. Let us further note that for K3 surfaces the Mumford--Tate and Tate Conjectures were proven in~\cite{Andre-ShafTate}.
\begin{proof}
Let $h \in H^2\bigl(X,\ZZ(1)\bigr)$ be the class of the ample generator of~$\Pic(X)$. Let $i = \frac{\dim(X)}{2}$. We need to show that $(X,h)$ satisfies conditions $\mathrm{A}_i$ and~$\mathrm{B}_i^+$ as in \cite[Section~1.4]{Andre-ShafTate}; if this is true then the Tate Conjecture and the Mumford--Tate Conjecture follow from ibid., Theorems~1.5.1 and~1.6.1(4). (Note that these results are stated over number fields but the proofs are valid over arbitrary finitely generated base fields of characteristic~$0$, and as just remarked this is what we need.
Further, for one step in the proof details are missing in~\cite{Andre-ShafTate}; this is corrected in Section~2 of~\cite{Moonen-TCMTC}.)
It is clear that condition~$\mathrm{A}_i$, which says that the middle cohomology should be of K3 type, is satisfied. The conditions~$\mathrm{B}_i^+$ state that $(X,h)$ should be a fibre in a connected family of polarised varieties such that the image of the period map contains an open subset of the period domain, and (condition~$\mathrm{B}^+_i\text{(iv)}$) such that the Hodge conjecture in middle degree is true for the fibres in this family. To see that these conditions are satisfied, fix a $5$-dimensional $\CC$-vector space~$V_5$ and abbreviate $T = \CC \oplus \wedge^2 V_5$. If $n=4$, let $S$ be the open subscheme of $\Grass(9,T) \times \PP\bigl(\Sym^2(T^\vee)\bigr)$ consisting of pairs $(W,Q)$ such that $\CGr(2,V_5) \cap \PP(W) \cap Q$ is a non-singular GM fourfold, and let $f \colon Y \to S$ be the tautological family of GM fourfolds. Similarly, for $n=6$ we take $S$ to be the open subscheme of quadrics $Q \in \PP\bigl(\Sym^2(T^\vee)\bigr)$ such that $\CGr(2,V_5) \cap Q$ is non-singular of dimension~$6$, and again we naturally have a family of GM varieties $f \colon Y \to S$. (Cf.\ the proof of \cite[Proposition~A.2]{KP16}.) In either case it is clear that there exists a point $0 \in S(\CC)$ such that the fibre~$Y_0$ is the GM variety~$X$ of the theorem and that the polarisation class~$h$ extends to a section of $R^2f_*\ZZ(1)$. By what is explained in \cite[Section~4]{Debarre-GM}, condition~$\mathrm{B}_i\text{(iii)}$ is satisfied, and condition~$\mathrm{B}^+_i\text{(iv)}$ follows from Theorem~\ref{thm:GHC}.
For the Generalised Tate Conjecture, finally, we have shown in the previous section (as part of the Generalised Hodge Conjecture) that the cohomology in middle degree (= degree~$2i$) is supported on a subscheme of codimension~$i-1$. If $W \subset H^{2i}\bigl(X,\QQ_\ell(i)\bigr)$ has coniveau~$\geq i$ then $W$ has weight~$0$ and, for a model $X_R/R$ as in Section~\ref{subsec:GTC}, all eigenvalues of Frobenii on~$W$ and on~$W^\vee$ are algebraic integers. Hence all eigenvalues of Frobenii are roots of unity, and it follows from Chebotarev's density theorem that $W$ consists of Tate classes. By the Tate Conjecture, $W$ is supported on a subscheme of codimension~$i$ and we are done.
\end{proof}
\begin{remark}
By \cite[Theorem~1.5.1]{Andre-ShafTate}, the motive (in the sense of Andr\'e) of an even-dimensional Gushel--Mukai variety~$X$ is an abelian motive; therefore the Mumford--Tate group~$G_{\Betti}$ of $H(X,\QQ)$ equals the motivic Galois group of the Andr\'e motive $\motH(X)$. This gives a strengthening of the Mumford--Tate Conjecture that is usually referred to as the ``Motivated Mumford--Tate Conjecture''. Note that we do not know if the Chow motive of~$X$ is an abelian motive. (This is much stronger than saying that its Andr\'e motive is abelian.)
\end{remark}
\section{Chow--K\"unneth decompositions and their refinements}
\label{sec:ChowMotiveGM}
For later use we recall some basic results on Chow--K\"unneth decompositions of (the Chow motives of) Gushel--Mukai varieties. We first work over the complex numbers; at the end of the section we explain how to extend this to GM varieties over an arbitrary algebraically closed field of characteristic~$0$. Let $X$ be a GM $n$-fold.
The Gushel map $\gamma\colon X \to \Grass(2,V_5)$ induces a homomorphism $\gamma^* \colon H^*\bigl(\Grass(2,V_5),\QQ\bigr) \to H^*(X,\QQ)$, which is surjective in all degrees~$\neq n$. The cohomology of~$X$ in even degrees~$\neq n$ is therefore purely of Tate type, spanned by classes of algebraic cycles, and the cohomology in odd degrees~$\neq n$ is zero. We first consider a complex GM variety~$X$ of dimension $n \in \{3,5\}$. Its intermediate Jacobian $J = J^n(X)$ (in middle degree) is a $10$-dimensional abelian variety. We refer to \cite{DK-GMJacobian} for a detailed study of~$J$. \begin{proposition} \label{prop:CKforGM3} Let $X$ be a complex GM variety of dimension $n=3$ or~$5$, with intermediate Jacobian $J = J^n(X)$. Then $\frh(X)$ is an abelian Chow motive and there is a Chow--K\"unneth decomposition \begin{equation} \label{eq:motGM3dec} \text{for $n=3$:}\qquad \frh(X) \cong \unitmot \oplus \unitmot(-1) \oplus \frh^1(J)\bigl(-1\bigr) \oplus \unitmot(-2) \oplus \unitmot(-3)\, . \end{equation} resp.\ \begin{equation} \label{eq:motGM5dec} \text{for $n=5$:}\qquad \frh(X) \cong \unitmot \oplus \unitmot(-1) \oplus \unitmot(-2)^2 \oplus \frh^1(J)\bigl(-2\bigr) \oplus \unitmot(-3)^2 \oplus \unitmot(-4) \oplus \unitmot(-5)\, . \end{equation} \end{proposition} \begin{proof} This follows from Theorems~\ref{thm:ChowGroupGM3} and~\ref{thm:ChowGroupGM5}, together with Theorem~\ref{thm:Vial-MotDecomp}. \end{proof} \begin{remark} It was shown by Laterveer~\cite{Laterveer-GM5} that for $\dim(X) = 5$ there exists a decomposition as above which is multiplicative (see Section~\ref{subsec:CKDec}). For $\dim(X) = 3$ the corresponding result is not yet known. \end{remark} The above decompositions can be made explicit, as follows. \subsection{} \label{subsec:CKGM3explicit} Let $X$ be a complex GM threefold. Let $H = -K_X \in \CH^1(X)_\QQ$ be the class of the ample generator of the Picard group, and write $\pt = \frac{1}{10}\cdot H^3$ for the class of a point on~$X$. (By Theorem~\ref{thm:ChowGroupGM3}\ref{thm:ChowGroupGM3-3}, all points on~$X$ are rationally equivalent.) We have a self-dual Chow--K\"unneth decomposition $[\Delta_X] = \sum_{i=0}^6\; \pi_X^i$ in $\CH^3(X \times X)_\QQ$, given by \begin{align*} \pi_X^0 &= \pt \times X & \pi_X^6 &= X \times \pt\\ \pi_X^1 &= 0 & \pi_X^5 &= 0\\ \pi_X^2 &= \tfrac{1}{10} \cdot H^2 \times H & \pi_X^4 &= \tfrac{1}{10} \cdot H \times H^2 \end{align*} and \[ \pi_X^3 = [\Delta_X] - \pi_X^0 - \pi_X^2 - \pi_X^4 - \pi_X^6\, . \] These projectors realise a decomposition as in~\eqref{eq:motGM3dec}; this follows from Theorem~\ref{thm:Conservative} (which applies because $\frh(X)$ is an abelian motive), where we use that the category $\Mot_{\num}(\CC)$ is semisimple (as proven by Jannsen in~\cite{Jannsen}), and that in cohomology the above projectors~$\pi_X^i$ cut out $H^i(X,\QQ)$. \subsection{} \label{subsec:CKGM5explicit} Next consider a complex GM fivefold~$X$. Let $H = -\frac{1}{3}\cdot K_X \in \CH^1(X)_\QQ$ be the class of the ample generator of the Picard group, and write $\pt = \frac{1}{10}\cdot H^5$ for the class of a point on~$X$. Let $\sigma_{i,j} \in \CH^{i+j}\bigl(\Grass(2,V_5)\bigr)$ (for $3\geq i\geq j\geq 0$) be the Schubert classes. The classes $e_1 = H^2$ and $e_2 = \gamma^*(\sigma_{1,1}) = c_2(\scrU_X)$ form a $\QQ$-basis of $\CH^2(X)_\QQ$. Define $f_1 = \frac{1}{2}\cdot H^3 - He_2$ and $f_2 = -H^3 + \frac{5}{2}\cdot He_2$ in $\CH^3(X)_\QQ$. 
By using the intersection matrix \[ \begin{array}{c|cc} & e_2 & H^2\\ \hline H e_2 & 2 & 4\\ H^3 & 4 & 10\\ \end{array} \] we find that $\deg(e_i \cdot f_j) = \delta_{i,j}$. Hence we obtain a self-dual collection of mutually orthogonal projectors by setting $\pi_X^{2i-1} = 0$ for $i\neq 3$, \begin{align*} \pi_X^0 &= \pt \times X & \pi_X^{10} &= X \times \pt\\ \pi_X^2 &= \tfrac{1}{10} \cdot H^4 \times H & \pi_X^8 &= \tfrac{1}{10} \cdot H \times H^4\\ \pi_X^4 &= f_1 \times e_1 + f_2 \times e_2 & \pi_X^6 &= e_1 \times f_1 + e_2 \times f_2 \end{align*} and $\pi_X^5 = [\Delta_X] - \sum_{i\neq 5}\, \pi_X^i$. As before, this gives a Chow--K\"unneth decomposition. \subsection{} \label{subsec:CKGM46explicit} Next we turn to GM varieties of even dimension $n \in \{4,6\}$. Let $H = -\frac{1}{n-2}\cdot K_X \in \CH^1(X)$ be the class of the ample generator of the Picard group, and write $\pt = \frac{1}{10}\cdot H^n$ for the class of a point on~$X$. (By the results in Section~\ref{sec:ChowGroupGM} all points are rationally equivalent.) We have a self-dual Chow--K\"unneth decomposition $[\Delta_X] = \sum_{i=0}^{2n}\; \pi_X^i$ in $\CH^n(X \times X)_\QQ$ with $\pi_X^i = 0$ for all odd integers~$i$. For $n=4$ the projectors in even degree are given by \begin{align*} \pi_X^0 &= \pt \times X & \pi_X^8 &= X \times \pt\\ \pi_X^2 &= \tfrac{1}{10} \cdot H^3 \times H & \pi_X^6 &= \tfrac{1}{10} \cdot H \times H^3 \end{align*} and \[ \pi_X^4= [\Delta_X] - \pi_X^0 - \pi_X^2 - \pi_X^6 - \pi_X^8\, . \] These projectors realise a decomposition (for $\dim(X)=4$) \begin{equation} \label{eq:CKDecGM4} \frh(X) \cong \unitmot \oplus \unitmot(-1) \oplus \frh^4(X) \oplus \unitmot(-3) \oplus \unitmot(-4)\, . \end{equation} For $\dim(X) = n = 6$, the classes \[ e_1 = H^2 \, ,\qquad e_2 = \gamma^*(\sigma_{1,1}) = c_2(\scrU_X) \] form a basis of $\CH^2(X)_\QQ$. Define the classes \[ f_1 = \tfrac{1}{2}\cdot H^4 - H^2 e_2\, ,\qquad f_2 = -H^4 + \tfrac{5}{2}\cdot H^2 e_2 \] in $\CH^4(X)_\QQ$. Then $\deg(e_i \cdot f_j) = \delta_{i,j}$, and the even Chow--K\"unneth projectors are given by \begin{align*} \pi_X^0 &= \pt \times X & \pi_X^{12} &= X \times \pt\\ \pi_X^2 &= \tfrac{1}{10} \cdot H^5 \times H & \pi_X^{10} &= \tfrac{1}{10} \cdot H \times H^5\\ \pi_X^4 &= f_1 \times e_1 + f_2 \times e_2 & \pi_X^8 &= e_1 \times f_1 + e_2 \times f_2 \end{align*} and \[ \pi_X^6= [\Delta_X] - \pi_X^0 - \pi_X^2 - \pi_X^4 - \pi_X^8 - \pi_X^{10}-\pi_X^{12}\, . \] These projectors realise a decomposition (for $\dim(X)=6$) \begin{equation} \label{eq:CKDecGM6} \frh(X) \cong \unitmot \oplus \unitmot(-1) \oplus \unitmot(-2)^{\oplus 2} \oplus \frh^6(X) \oplus \unitmot(-4)^{\oplus 2} \oplus \unitmot(-5) \oplus \unitmot(-6)\, . \end{equation} \subsection{} \label{subsec:hntr} For later use, we will also need a refinement of the Chow--K\"unneth decomposition in the even-dimensional case. (Cf.\ \cite[Section~7.2.2]{KahnMurPedr}.) Let $\dim(X) = n \in \{4,6\}$. By the results in Section~\ref{sec:ChowGroupGM}, the cycle class map induces an isomorphism of~$\CH^{n/2}(X)_\QQ$ with the space of Hodge classes in $H^n(X,\QQ)$. Let $\{a_1, \ldots, a_r\}$ be an orthogonal basis of $\CH^{n/2}(X)_\QQ$, and define \[ \pi^n_{X,\alg} = \sum_{i=1}^{r}\; \frac{1}{\deg(a_i\cdot a_i)}\cdot a_i\times a_i\, ,\qquad \pi^n_{X,\tr} = \pi^n_X - \pi^n_{X,\alg}\, . \] These are projectors, which are independent of the chosen orthogonal basis. 
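For the reader's convenience we spell out why $\pi^n_{X,\alg}$ is idempotent; the computation uses only the rule $(\alpha\times\beta)\circ(\gamma\times\delta) = \deg(\delta\cdot\alpha)\cdot(\gamma\times\beta)$ for composing product correspondences (in the standard convention for the composition of correspondences). With this rule and the orthogonality of the basis $\{a_1,\ldots,a_r\}$ we find
\[
\pi^n_{X,\alg} \circ \pi^n_{X,\alg}
= \sum_{i,j}\; \frac{\deg(a_i\cdot a_j)}{\deg(a_i\cdot a_i)\cdot \deg(a_j\cdot a_j)}\cdot a_i\times a_j
= \sum_{i=1}^{r}\; \frac{1}{\deg(a_i\cdot a_i)}\cdot a_i\times a_i
= \pi^n_{X,\alg}\, .
\]
That $\pi^n_{X,\alg}$ and $\pi^n_{X,\tr}$ are mutually orthogonal follows by a similar computation, using that the remaining Chow--K\"unneth projectors compose to zero with $a_i\times a_i$ for degree reasons.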
In cohomology, $\pi^n_{X,\alg}$ and $\pi^n_{X,\tr}$ are the projectors onto the subspace of Hodge classes in $H^n(X, \QQ)$, respectively its orthogonal complement. Denote $\frh^n_{\alg}(X) := (X,\pi^n_{X,\alg},0)$ and $\frh^n_{\tr}(X) := (X, \pi^n_{X,\tr},0)$. Then $\frh^n_{\alg}(X)\cong \unitmot(-n/2)^{\oplus r}$ and we have a refinement of \eqref{eq:CKDecGM4} and \eqref{eq:CKDecGM6} by further decomposing~$\frh^n(X)$ as
\[
\frh^n(X) = \frh^n_{\alg}(X) \oplus \frh^n_{\tr}(X)\, .
\]
\begin{remark}
Thus far in this section we have been working over~$\CC$. We in fact have Chow--K\"unneth decompositions as above for Gushel--Mukai varieties over an arbitrary field $K = \Kbar$ of characteristic~$0$. The best way to approach this would be to prove all results from Section~\ref{sec:ChowGroupGM} (at least with $\QQ$-coefficients) over such fields. As some of the results that we have used are documented in the literature only for complex varieties, we here take a more direct approach. Let $X$ be a GM variety of dimension $n \in \{3,4,5,6\}$ over~$K$. It is clear from the above explicit description of the Chow--K\"unneth projectors~$\pi^i_X$ that these same projectors are meaningfully defined over~$K$. (All we have used is the existence of the Gushel map; with a little more work, even the assumption $K=\Kbar$ could be dropped.) Taking the explicit formulas in Sections~\ref{subsec:CKGM3explicit}--\ref{subsec:CKGM46explicit} as definitions of the projectors~$\pi^i_X$, we obtain a Chow--K\"unneth decomposition $\frh(X) = \oplus_{i=0}^{2n}\; \frh^i(X)$ in~$\CHM(K)$. These projectors are invariant under extension of the base field: if $K \subset L$ is a field extension, the Chow--K\"unneth projectors $\pi^i_{X_L} \in \CH^n(X_L \times_L X_L)$ that we obtain are the images of the projectors $\pi^i_X \in \CH^n(X\times X)$ under the natural map $\CH^n(X\times X) \to \CH^n(X_L \times_L X_L)$. Hence also $\frh^i(X_L) = \frh^i(X)_L$ for all~$i$.
\end{remark}
\section{Motives of generalised partners and duals}
Throughout this section, $K$ is an algebraically closed field of characteristic~$0$ and $n$, $n'$ are two integers in $\{3,4,5,6\}$ of the same parity.
\begin{definition}[{\cite[Definition 3.5]{KP16}}]
\label{def:Partner}
Let $X$ and $X'$ be GM varieties over~$K$ of dimensions $n$ and~$n'$ respectively. We say that $X$ and $X'$ are
\begin{itemize}
\item \emph{generalised partners} if there exists an isomorphism $V_6(X)\cong V_6(X')$ inducing an identification between the Lagrangian subspaces $A(X)\subset \wedge^3 V_6(X)$ and $A(X')\subset \wedge^3 V_6(X')$;
\item \emph{generalised duals} if there exists an isomorphism $V_6(X)\cong V_6(X')^\vee$ inducing an identification between the Lagrangian subspaces $A(X)\subset \wedge^3 V_6(X)$ and $A(X')^\perp\subset \wedge^3 V_6(X')^\vee$.
\end{itemize}
\end{definition}
For $n$ and~$n'$ odd, it follows from \cite[Theorem~1.1]{DK-GMJacobian} that if $X$ and~$X'$ are generalised partners or duals over~$\CC$, their middle cohomology groups are isomorphic as rational Hodge structures, up to a Tate twist. With some work, a similar conclusion can be proven for $n$ and~$n'$ even. The main result of this section is a motivic strengthening of this.
\begin{theorem}
\label{thm:MotivePartnerChar0}
Let $n$ and $n'$ be as above. Let $X$ and $X'$ be GM varieties of dimensions $n$ and~$n'$ over~$K$.
If $X$ and~$X'$ are generalised partners or generalised duals, their rational Chow motives in middle degrees are isomorphic, i.e., \begin{equation} \label{eqn:IsomMotiveChar0} \frh^n(X)\cong \frh^{n'}(X')\bigl(\tfrac{n'-n}{2}\bigr) \quad\text{in $\CHM(K)$.} \end{equation} \end{theorem} Our proof will employ techniques from derived categories and is based on an idea that we learned from a draft version of the paper~\cite{BolognesiLaterveer-GM6} by Bolognesi and Laterveer. \subsection{} The derived category of a GM variety~$X$ admits a semi-orthogonal decomposition consisting of an exceptional collection together with an admissible subcategory $\operatorname{Ku}(X)$, called its \emph{Kuznetsov component}, which is a K3 or Enriques category depending on the parity of the dimension of the GM variety. More precisely, \begin{equation} \label{SODforGM} \Db(X) = \bigl\langle \Ku(X), \scrO_X, \scrU_X^\vee, \scrO_X(1), \scrU_X^\vee(1), \dots, \scrO_X(n-3), \scrU_X^\vee(n-3) \bigr\rangle\, , \end{equation} where $\scrO_X(1)$ is the ample generator of the Picard group of~$X$, and $\scrU_X$ is the Gushel bundle. We denote by $i\colon \Ku(X)\hookrightarrow \Db(X)$ the natural inclusion functor, and let $i^*$ and $i^!$ be respectively the left and right adjoint of $i$, which may be viewed as projection functors from $\Db(X)$ to the Kuznetsov component. We refer to the work of Kuznetsov--Perry~\cite{KP16} for details. The following result is deduced from the so-called quadratic homological projective duality. \begin{theorem}[{Kuznetsov--Perry \cite{KuznetsovPerry-CatCone}}] \label{thm:DerCatPartnerChar0} Let $X$ and $X'$ be GM varieties of dimension $n$ and~$n'$ over~$K$. If $X$ and~$X'$ are generalised partners or generalised duals, there exists a Fourier--Mukai equivalence \begin{equation} \Psi\colon \Ku(X)\xrightarrow{~\mathrm{eq}~} \Ku(X') \end{equation} between their Kuznetsov components. \end{theorem} The assertion that the equivalence is of Fourier--Mukai type means that there exists an object~$\scrE$ in $\Db(X \times X^\prime)$ such that the composition \begin{equation} \label{eq:PsiFM} \Db(X) \xrightarrow{~i^*~} \Ku(X) \xrightarrow{~\Psi~} \Ku(X') \xrightarrow{~i^\prime~} \Db(X^\prime) \end{equation} is the Fourier--Mukai transformation~$\Phi_\scrE$ defined by~$\scrE$. This is not explicitly stated in~\cite{KuznetsovPerry-CatCone} but it follows from \cite[Theorem~1.3]{LiPertusiZhao}. \subsection{} \label{subsec:A} For the proof of Theorem~\ref{thm:MotivePartnerChar0} we shall first work over the complex numbers. In Section~\ref{subsec:KbarGeneral} we shall explain how to deduce the result over a field $K=\Kbar$ of characteristic~$0$. The overall strategy is close to \cite{Huybrechts-MotiveK3}, and uses some arguments in \cite{FuVial-K3}, \cite{FuVial-Cubic4fold}. If $Z$ is a smooth complex projective variety, an admissible subcategory of its bounded derived category of coherent sheaves $\mathcal{C}\subset \Db(Z)$ may be viewed as a non-commutative smooth proper scheme. Let $K_0(\mathcal{C})$ be the Grothendieck group of~$\mathcal{C}$. By the work of Blanc~\cite{Blanc-TopK}, we can also consider the topological K-theory $K_0^{\topo}(\mathcal{C})$ of~$\mathcal{C}$, which for $\mathcal{C} = \Db(Z)$ agrees with topological K-theory of~$Z$ (see \cite[Proposition~4.32]{Blanc-TopK}). The functors $K_0$ and~$K_0^{\topo}$ are both additive invariants, in the sense that they transform a semi-orthogonal decomposition into a direct sum decomposition. 
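For instance (this will not be needed in exactly this form, but it may help to fix ideas), applied to the semi-orthogonal decomposition~\eqref{SODforGM} of a GM $n$-fold~$X$, which besides $\Ku(X)$ consists of the $2(n-2)$ exceptional objects $\scrO_X(i)$ and $\scrU_X^\vee(i)$ for $0 \leq i \leq n-3$, additivity gives
\[
K_0(X) \cong K_0\bigl(\Ku(X)\bigr) \oplus \ZZ^{\oplus 2(n-2)}
\qquad\text{and}\qquad
K_0^{\topo}(X) \cong K_0^{\topo}\bigl(\Ku(X)\bigr) \oplus \ZZ^{\oplus 2(n-2)}\, ,
\]
since the subcategory generated by an exceptional object is equivalent to $\Db(\pt)$, whose Grothendieck group and topological K-theory in degree~$0$ are both isomorphic to~$\ZZ$.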
Sending an algebraic vector bundle to its underlying complex vector bundle defines a natural transformation $K_0\to K_0^{\topo}$. We define a new invariant, which might be called the ``topologically trivial Grothendieck group'':
\begin{equation}
\A(\mathcal{C}):= \ker\left(K_0(\mathcal{C})_\QQ \to K_0^{\topo}(\mathcal{C})_\QQ\right)\, .
\end{equation}
For a smooth complex projective variety~$Z$, we simply write~$\A(Z)$ for $\A\bigl(\Db(Z)\bigr)$. Being the kernel of a morphism between additive functors, $\A$ is again an additive invariant. We make two basic observations:
\begin{itemize}
\item It is easy to check that $\A(\pt) = 0$. Therefore, if there is a semi-orthogonal decomposition
\[
\Db(Z) = \bigl\langle \Ku(Z), E_1, \cdots, E_m \bigr\rangle\, ,
\]
with $\langle E_1, \cdots, E_m\rangle$ an exceptional collection (in particular, $\langle E_i\rangle \cong \Db(\pt)$ for every~$i$), the inclusion functor $i\colon \Ku(Z)\hookrightarrow \Db(Z)$ induces an isomorphism
\begin{equation}
\label{eqn:AofKu}
\A\bigl(\Ku(Z)\bigr)\cong \A(Z),
\end{equation}
with inverse induced by the (left or right) adjoint functor of~$i$.
\item For any smooth complex projective variety~$Z$, we have a commutative diagram
\[
\begin{tikzcd}
K_0(Z)_\QQ \ar[r] \ar[d,"\wr", "v"'] & K_0^{\topo}(Z)_\QQ \ar[d,"v^{\topo}","\wr"']\\
\CH^*(Z)_\QQ \ar[r, "\class"] & H^{2*}(Z,\QQ)
\end{tikzcd}
\]
where the bottom arrow is the cycle class map and the vertical isomorphisms are given by ``Mukai vector maps'' that send a class~$e$ to $\ch(e) \cdot \sqrt{\td(Z)\, }$. (The right vertical map is an isomorphism by a classical result of Atiyah and Hirzebruch; see for instance \cite[Section~38.4]{FomenkoFuchs}.)
{}From the above diagram, we obtain that the Mukai vector map induces an isomorphism between $\A(Z)$ and the homologically trivial part of the rational Chow group:
\begin{equation}
\label{eqn:AisomCHhom}
v\colon \A(Z) \isomarrow \CH^*(Z)_{\hom,\QQ}\, .
\end{equation}
\end{itemize}
We can now give the proof of Theorem~\ref{thm:MotivePartnerChar0} for $X$ and~$X'$ defined over~$\CC$.
\begin{proof}[Proof of Theorem \ref{thm:MotivePartnerChar0} over~$\CC$]
In order to avoid case distinctions, let us make the convention that when $n$ is odd, $\frh^n_{\tr}(X) = \frh^n(X)$. By Theorem~\ref{thm:DerCatPartnerChar0}, there exists an object $\scrE \in \Db(X\times X')$ such that the composition
\[
\Ku(X) \xhookrightarrow{~i~} \Db(X) \xrightarrow{~\Phi_{\scrE}~} \Db(X')\xrightarrow{~{i'}^*~} \Ku(X')
\]
is an equivalence of smooth proper dg-categories. (By~\eqref{eq:PsiFM} this composition equals~$\Psi$, because $i^* \circ i$ is the identity on~$\Ku(X)$, and likewise for~$X^\prime$.) Applying the invariant~$\A$ we find that the composition
\begin{equation}
\label{eqn:ApplyingA}
\A\bigl(\Ku(X)\bigr)\tto \A(X) \xrightarrow{~[\scrE]_*~} \A(X') \tto \A\bigl(\Ku(X')\bigr)
\end{equation}
is an isomorphism. However, since the Kuznetsov component of a GM variety is defined as the right orthogonal of an exceptional collection (see~\eqref{SODforGM}), \eqref{eqn:AofKu} implies that the first and the last maps in~\eqref{eqn:ApplyingA} are isomorphisms. Therefore, the middle map of \eqref{eqn:ApplyingA} is an isomorphism between $\A(X)$ and~$\A(X')$.
Now consider the Grothendieck--Riemann--Roch diagram \begin{equation} \begin{tikzcd} \A(X) \ar[r, "{[\scrE]_*}", "\sim"'] \ar[d, "v","\wr"']& \A(X') \ar[d, "v","\wr"']\\ \CH^*(X)_{\hom, \QQ} \ar[r, "v(\scrE)_*"]& \CH^*(X')_{\hom, \QQ} \end{tikzcd} \end{equation} where the top map is an isomorphism as explained above and the vertical isomorphisms are the ones obtained from~\eqref{eqn:AisomCHhom}. It follows that the bottom arrow is an isomorphism. Furthermore, by our computations of Chow groups in Section~\ref{sec:ChowGroupGM} we have that \[ \CH(X)_{\hom, \QQ} = \CH^{\lfloor \frac{n+2}{2}\rfloor}(X)_{\hom, \QQ} = \CH\bigl(\frh^n_{\tr}(X)\bigr)_\QQ\, , \] and likewise for $X^\prime$. Now consider the morphism $v_m(\scrE) \colon \frh(X) \to \frh(X^\prime)\bigl(\frac{n'-n}{2}\bigr)$ in~$\CHM(\CC)$ defined by the component of $v(\scrE)$ in degree $m = \frac{n+n'}{2}$. It follows from the above that the morphism \[ \pi^{n'}_{X',\tr} \circ v_m(\scrE) \circ \pi^n_{X,\tr} \colon \frh^n_{\tr}(X) \to \frh^{n'}_{\tr}(X')\bigl(\tfrac{n'-n}{2}\bigr) \] induces an isomorphism on Chow groups. By Lemma~\ref{lem:ManinPrinc} this implies that this morphism is an isomorphism. In case $n$ and~$n^\prime$ are even, it remains to pass from $\frh^n_{\tr}(X)$ and~$\frh^{n'}_{X',\tr}$ to $\frh^n(X)$ and~$\frh^{n'}_{X'}$. By construction, there exist integers $r$ and~$r^\prime$ such that \[ \frh^n(X) = \frh^n_{\tr}(X) \oplus \unitmot\bigl(-\tfrac{n}{2}\bigr)^{\oplus r}\, ,\qquad \frh^{n'}(X') = \frh^{n'}_{\tr}(X') \oplus \unitmot\bigl(-\tfrac{n'}{2}\bigr)^{\oplus r'}\, . \] Taking Hodge realisations and using that the middle Betti numbers of~$X$ and~$X'$ are the same, it follows that $r=r^\prime$, and therefore $\frh^n(X)\cong \frh^{n'}(X')\bigl(\tfrac{n'-n}{2}\bigr)$. \end{proof} \subsection{} \label{subsec:KbarGeneral} To finish, we explain how to obtain Theorem~\ref{thm:MotivePartnerChar0} over an arbitrary algebraically closed field~$K$ of characteristic~$0$. Let $X$ and~$X'$ be generalised partners or duals over~$K$, of dimensions $n$, $n' \in \{3,4,5,6\}$ of the same parity. There exists a subfield $K_0 \subset K$ which is finitely generated over~$\QQ$ such that $X$ and~$X'$ both have models, say $X_0$ and~$X_0^\prime$, over~$K_0$, and such that moreover the Chow--K\"unneth projectors $\pi^n_X$ and~$\pi^{n'}_{X'}$ that cut out the submotives $\frh^n(X)$ and~$\frh^{n'}(X')$ are defined over~$K_0$. Clearly, Theorem~\ref{thm:MotivePartnerChar0} for $X_0$ and~$X_0^\prime$ over an algebraic closure of~$K_0$ implies the result over~$K$. Choose an embedding $K_0 \hookrightarrow \CC$ and let $\Kbar_0$ be the algebraic closure of~$K_0$ inside~$\CC$. By the now proven Theorem~\ref{thm:MotivePartnerChar0} over~$\CC$, there exists an isomorphism $\alpha \colon \frh^n(X_{0,\CC})\cong \frh^{n'}(X^\prime_{0,\CC})(\frac{n'-n}{2})$ in~$\CHM(\CC)$. There exists a finitely generated field extension $\Kbar_0 \subset K_1$ inside~$\CC$ such that $\alpha$ and~$\alpha^{-1}$ are defined over~$K_1$. Concretely, this means there exist cycle classes $Z_1,\ldots,Z_m \in \CH\bigl((X_0 \times X_0^\prime)_{K_1}\bigr)$ and rational numbers $a_i$ and~$b_i$ such that \[ \alpha = \pi^{n'}_{X'_0} \circ \bigl(\sum a_i \cdot Z_i\bigr) \circ \pi^n_{X_0}\, ,\qquad \alpha^{-1} = \pi^n_{X_0} \circ \bigl(\sum b_i\cdot {}^{\mathsf{t}}Z_i\bigr) \circ \pi^{n'}_{X'_0}\, . \] The field~$K_1$ is the function field of a variety~$S$ over~$\Kbar_0$. Choose a point $s \in S(\Kbar_0)$. 
By specialisation from the generic point of~$S$ to~$s$ we obtain cycle classes $Z_{1,s},\ldots,Z_{m,s} \in \CH\bigl((X_0 \times X_0^\prime)_{\Kbar_0}\bigr)$. Then \[ \pi^{n'}_{X'_0} \circ \bigl(\sum a_i \cdot Z_{i,s}\bigr) \circ \pi^n_{X_0} \quad\text{and}\quad \pi^n_{X_0} \circ \bigl(\sum b_i\cdot {}^{\mathsf{t}}Z_{i,s}\bigr) \circ \pi^{n'}_{X'_0} \] define mutually inverse morphisms $\frh^n(X_0)\rightleftarrows \frh^{n'}(X^\prime_0)(\frac{n'-n}{2})$ in $\CHM(\Kbar_0)$, and this gives what we want. {\small } \noindent \texttt{[email protected]} \noindent Radboud University Nijmegen, IMAPP, Nijmegen, The Netherlands \noindent \texttt{[email protected]} \noindent Radboud University Nijmegen, IMAPP, Nijmegen, The Netherlands \end{document}
\begin{document} \begin{frontmatter} \title{Long Time Behavior of Finite and Infinite Dimensional Reflected Brownian Motions } \runtitle{Ergodicity of RBM} \author{\fnms{Sayan} \snm{Banerjee}\ead[label=e1]{[email protected]}} \and \author{\fnms{Amarjit} \snm{Budhiraja}\ead[label=e2]{[email protected]}} \affiliation{University of North Carolina, Chapel Hill} \runauthor{Banerjee and Budhiraja} \address{Department of Statistics \\and Operations Research\\ University of North Carolina\\ Chapel Hill, NC 27599\\ In Celebration of Prof. Rajeeva Karandikar's 65th Birthday } \begin{abstract} This article presents a review of some old and new results on the long time behavior of reflected diffusions. First, we present a summary of prior results on construction, ergodicity and geometric ergodicity of reflected diffusions in the positive orthant $\mathbb{R}^d_+$, $d \in \mathbb{N}$. The geometric ergodicity results, although very general, usually give implicit convergence rates due to abstract couplings and Lyapunov functions used in obtaining them. This leads us to some recent results on an important subclass of reflected Brownian motions (RBM) (constant drift and diffusion coefficients and oblique reflection at boundaries), known as the Harrison-Reiman class, where explicit rates of convergence are obtained as functions of the system parameters and underlying dimension. In addition, sufficient conditions on system parameters of the RBM are provided under which local convergence to stationarity holds at a `dimension-free' rate, that is, for any fixed $k \in \mathbb{N}$, the rate of convergence of the $k$-marginal to equilibrium does not depend on the dimension of the whole system. Finally, we study the long time behavior of infinite dimensional rank-based diffusions, including the well-studied infinite Atlas model. The gaps between the ordered particles evolve as infinite dimensional RBM and this gap process has uncountably many explicit product form stationary distributions. Sufficient conditions for initial configurations to lie in the weak domain of attraction of the various stationary distributions are provided. Finally, it is shown that, under conditions, all of these explicit stationary distributions are extremal (equivalently, ergodic) and, in some sense, the only product form invariant probability distributions. Proof techniques involve a pathwise analysis of RBM using explicit synchronous and mirror couplings and constructing Lyapunov functions. \end{abstract} \begin{keyword}[class=MSC] \kwd[Primary]{ 60J60} \kwd{60H10} \kwd{60K35} \kwd[; secondary ]{60J55} \kwd{60K25} \end{keyword} \begin{keyword} \kwd{Reflected Brownian motion} \kwd{local time} \kwd{coupling} \kwd{Wasserstein distance} \kwd{relaxation time} \kwd{ heavy traffic} \kwd{domain of attraction} \kwd{rank-based diffusion} \kwd{Atlas model} \end{keyword} \end{frontmatter} \section{Introduction} In this article, we will provide a survey of some results on reflected Brownian motions (RBM) in polyhedral domains that focus on the long-time behavior of such processes. We will consider the settings of both finite and infinite dimensional reflected diffusions. The simplest such model is a one dimensional RBM $\{X(t)\}_{t\ge 0}$ defined as \begin{equation}\label{eq:zeq} X(t) = x + B(t) - \inf_{0\le s \le t} \left\{(x+B(s))\wedge 0\right\}, t \ge 0, \; x \in \mathbb{R}R_+, \end{equation} where $B$ is a standard Brownian motion given on some probability space. 
Occasionally we will write $X(t) = X(t;x)$ to emphasize dependence on the initial condition. There is an equivalent way to describe this process using the formulation of the Skorohod problem \cite{skorokhod1961stochastic}. The Skorohod problem on the positive real line $\mathbb{R}_+$ is to find, for a given $\mathbf{y} \in \mathcal{D}([0,\infty):\mathbb{R})$ (throughout, for a Polish space $S$, $\mathcal{D}([0,\infty):S)$ will denote the space of functions from $[0,\infty)$ to $S$ that are right continuous and have left limits (RCLL), equipped with the usual Skorohod topology), with $\mathbf{y}(0)\ge 0$, a pair of paths $\mathbf{x},\mathbf{k} \in \mathcal{D}([0,\infty):\mathbb{R})$ such that for all $t\ge 0$, $\mathbf{x}(t) = \mathbf{y}(t)+\mathbf{k}(t) \ge 0$, $\mathbf{k}(0)=0$, $\mathbf{k}$ is nondecreasing, and $\mathbf{k}$ increases only at instants $t$ when $\mathbf{x}(t)=0$, namely $$\int_{[0,\infty)} 1_{\{\mathbf{x}(s)>0\}} d\mathbf{k}(s) =0.$$ The above problem has a unique solution for every $\mathbf{y} \in \mathcal{D}([0,\infty):\mathbb{R})$ and we write $\mathbf{x} = \Gamma_1(\mathbf{y})$. Also, it is easily verified that $X$ defined by \eqref{eq:zeq} can equivalently be written as $X = \Gamma_1(x+ B)$. The formulation of a Skorohod problem also allows one to define a general reflected diffusion that on $(0,\infty)$ behaves as a diffusion with the infinitesimal generator (evaluated on smooth test functions $f$) $$\mathcal{L} f(x) = \frac{1}{2}\sigma^2(x) f''(x) + b(x) f'(x),\; x\in (0,\infty)$$ for suitable coefficient functions $\sigma$ and $b$. Such a reflected diffusion can be constructed by solving the stochastic equation \begin{equation}\label{eq:genrefdif} X(t) = \Gamma_1\left(x + \int_0^{\cdot} b(X(s)) ds + \int_0^{\cdot} \sigma(X(s)) dB(s)\right)(t), \; t \ge 0 \end{equation} in a pathwise fashion (e.g., when $b,\sigma$ are Lipschitz functions; see \cite{andore}). In the special case when $\sigma(z) \equiv \sigma >0$ and $b(z) = b \in \mathbb{R}$, $X$ is referred to as an RBM with drift $b$ and diffusion coefficient $\sigma$, or simply a $(b,\sigma)$-RBM. It is well known that a $(b,\sigma)$-RBM is positive recurrent if and only if $b<0$, and the unique stationary distribution is an exponential distribution with rate $\lambda = 2|b|/\sigma^2$. In the general setting of \eqref{eq:genrefdif}, under conditions on the coefficients (e.g., $b$ and $\sigma$ Lipschitz, $\sigma$ bounded, and $\sigma \sigma^T$ uniformly nondegenerate), it can be shown that if $\sup_{z\ge 0} b(z) <0$ then the diffusion is positive recurrent, although in this case the stationary distribution is not available in explicit form. One can also show that the law at time $t$ converges to the stationary distribution at a geometric rate in the total variation distance. In this work we are interested in the higher dimensional analogues of the above process. We will only consider reflected Brownian motions, or more generally reflected jump-diffusions, in a finite or infinite nonnegative orthant, namely the state space will be $\mathbb{R}_+^d$ where $d \in \mathbb{N} \cup \{\infty\}$. Such multi-dimensional RBM arise from problems in stochastic networks \cite{reiman_jackson_net,harrison1987brownian}, applications in mathematical finance \cite{fernholz2009stochastic}, scaling limits of interacting particle systems \cite{karatzas_skewatlas}, and in rank-based diffusion models for competing particles \cite{karatzas_skewatlas,sarantsev}.
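To illustrate the one-dimensional picture concretely, the following Python sketch (an illustration only; the function names, step sizes and parameter values are ad hoc choices of ours and are not taken from the references) applies the map $\Gamma_1$ to an Euler discretisation of $x + bt + \sigma B(t)$ and compares the simulated law at a large time with the $\operatorname{Exp}(2|b|/\sigma^2)$ stationary distribution.
\begin{verbatim}
import numpy as np

def skorokhod_map_1d(y):
    """Gamma_1: reflect a discretised path y (with y[0] >= 0) at zero,
    via x(t) = y(t) - min(0, inf_{s <= t} y(s))."""
    running_min = np.minimum.accumulate(np.minimum(y, 0.0))
    return y - running_min

def rbm_1d(x0, b, sigma, T, n_steps, rng):
    """Simulate a (b, sigma)-RBM on [0, T] started from x0 >= 0."""
    dt = T / n_steps
    increments = b * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    free_path = x0 + np.concatenate(([0.0], np.cumsum(increments)))
    return skorokhod_map_1d(free_path)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b, sigma = -1.0, 1.0
    samples = np.array([rbm_1d(2.0, b, sigma, T=50.0, n_steps=5000, rng=rng)[-1]
                        for _ in range(2000)])
    # Stationary law is exponential with rate 2|b|/sigma^2, hence mean 0.5 here.
    print("empirical mean of X(T):", samples.mean())
    print("stationary mean sigma^2/(2|b|):", sigma**2 / (2 * abs(b)))
\end{verbatim}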
The next three sections will focus on the finite dimensional case while the last two sections will consider infinite dimensional settings. \section{Finite dimensional RBM} Consider for now $d<\infty$. Roughly speaking, a reflected Brownian motion in the nonnegative orthant $\mathbb{R}R_+^d$ behaves like an ordinary Brownian motion (with drift) in the interior of the orthant and when it reaches the boundary of the domain it is instantaneously pushed back, with the minimal force needed to keep it inside the domain, applied in some pre-specified directions of constraint. As in the one dimensional case, a natural way to give a rigorous formulation of such an object is through a Skorohod problem. Fix vectors $r^1, \ldots, r^d \in \mathbb{R}R^d$. Here the vector $r^i$ gives the direction in which the state of the RBM is pushed when it is about to exit the domain $\mathbb{R}R_+^d$ from its $i$-th face $F_i = \{x \in \mathbb{R}R_+^d: x_i =0\}$. We refer to the $d\times d$ matrix $R = [r^1, \ldots , r^d]$ as the reflection matrix. For each $x \in \partial \mathbb{R}_+^d = \cup_{i=1}^d F_i$, the set $r(x)$ of directions of constraints is defined as \begin{equation} r(x) := \left\{ Rq: q = (q_1, \ldots , q_d)' \in \mathbb{R}R_+^d, \; q\cdot \mathbf{1} =1, \mbox{ and } q_i>0 \mbox{ only if } x_i=0\right\}, \end{equation} where $\mathbf{1} = (1, 1, \ldots)'$. The set $r(x)$ represents the collection of directions available for pushing the state of the process at the boundary point $x$. Note that if $x$ has exactly one coordinate $0$, the set $r(x)$ is a singleton. The Skorohod problem associated with the domain $\mathbb{R}R_+^d$ and the reflection matrix $R$ is defined as follows. Given $\psi \in \mathcal{D}([0,\infty): \mathbb{R}R^d)$ such that $\psi(0)\in \mathbb{R}R_+^d$, we say a pair of trajectories $\phi, \eta \in \mathcal{D}([0,\infty): \mathbb{R}R^d)$ solve the {\bf Skorohod problem} for $\psi$ with respect to reflection matrix $R$, if and only if $\eta(0)=0$ and for all $t\ge 0$: (i) $\phi(t) = \psi(t) + \eta(t)$; (ii) $\phi(t) \in \mathbb{R}R_+^d$; (iii) The total variation (with respect to the Euclidean norm on $\mathbb{R}R^d$) of $\eta$ on $[0,t]$, denoted as $|\eta|(t)$ is finite; (iv) $|\eta|(t) = \int_{[0,t]} 1_{\{\phi(s) \in \partial \mathbb{R}R_+^d\}} d|\eta|(s)$; (v) there is a Borel measurable function $\gamma: [0,\infty) \to \mathbb{R}R^d$ such that $\gamma(t) \in r(\phi(t))$, $d|\eta|$-a.e., and $\eta(t) = \int_{[0,t]} \gamma(s) d|\eta|(s)$. The trajectory $\phi$ can be viewed as the reflected version of $\psi$ that stays in $\mathbb{R}R_+^d$ at all times. Property (iv) says that the pushing term $\eta$ is activated only when the path is about to exit the domain while property (v) ensures that the push is applied in the permissible directions of constraint, as specified by the reflection matrix $R$. On the domain $D \subset D_0 := \{\phi \in \mathcal{D}([0,\infty): \mathbb{R}R^d): \phi(0) \in \mathbb{R}R_+^d\}$ on which there is a unique solution to the Skorohod problem we define the {\bf Skorohod map} (SM) $\Gamma$ as $\Gamma(\psi) = \phi$, if $(\phi, \psi-\phi)$ is the unique solution of the Skorohod problem. In many problems of interest it turns out that the Skorohod map is quite well behaved in the sense that the following property is satisfied. 
\begin{property}\label{prop:regsp} The Skorohod map is well defined on all of $D_0$ and the SM is Lipschitz continuous in the following sense: There exists a $K \in (0, \infty)$ such that for all $\phi_1, \phi_2 \in D_0$ $$\sup_{0\le t <\infty} \|\Gamma(\phi_1)(t)- \Gamma(\phi_2)(t)\| \le K\sup_{0\le t <\infty} \|\phi_1(t)- \phi_2(t)\| .$$ \end{property} We refer the reader to \cite{dupish, dupuram} for sufficient conditions under which the above property holds. One important class of examples that arise naturally in many stochastic network models \cite{reiman_jackson_net,harrison1987brownian} and also in the study of gaps between $d+1$ competing particles in rank-based diffusions (e.g. \cite{karatzas_skewatlas,sarantsev}), and where the above property is satisfied, is the so called {\bf Harrison-Reiman} class \cite{harrison1981reflected}. This corresponds to the family of models for which the matrix $P := I - R^T$ is substochastic (non-negative entries and row sums bounded above by 1) and transient ($P^n \rightarrow 0$ as $n \rightarrow \infty$). This family will be the focus of much of our discussion in subsequent sections. When Property \ref{prop:regsp} holds, one can solve pathwise, as in the one dimensional case, stochastic equations of the form \begin{equation}\label{eq:genrefdif2} X(t) = \Gamma\left(x + \int_0^{\cdot} b(X(s)) ds + \int_0^{\cdot} \sigma(X(s)) dB(s)\right)(t), \; t \ge 0, \end{equation} where $b: \mathbb{R}R_+^d \to \mathbb{R}R^d$ and $\sigma: \mathbb{R}R_+^d \to \mathbb{R}R^{d\times k}$ are suitable coefficient functions (e.g. Lipschitz continuous) and $B$ is a standard $k$ dimensional Brownian motion on some probability space (cf. \cite{dupish}). The solution in the case when $b(x) \equiv \mu$ and $\sigma(x) \equiv \mathbf{D}$, where $\mu \in \mathbb{R}R^d$ and $\mathbf{D}$ is a $d\times d$ matrix, will be of particular interest here and will be referred to as the $(\mu, \mathbf{D})$-RBM with reflection matrix $R$. In later sections, we will also use the abbreviation $\mathbb{R}BM(\mu,\Sigma,R)$, where $\Sigma = \mathbf{D}\mathbf{D}^T$. The solution of \eqref{eq:genrefdif2} can be written in a somewhat more explicit form using boundary local times. For example, when $X(t;x)$ is a $(\mu, \mathbf{D})$-RBM with reflection matrix $R$ starting from $x$, it can be represented as the solution of the equation \begin{equation}\label{RBMdef} X(t;\mathbf{x}) = \mathbf{x} + \mathbf{D} B(t) + \mu t + RL(t), \end{equation} where $L$, referred to as the local time process, is a $d$-dimensional non-decreasing continuous process satisfying \begin{equation}\label{loctim} L(0)=0, \;\; \int_0^t X_i(s;\mathbf{x})dL_i(s) = 0 \mbox{ for all } t>0 \mbox{ and } 1 \le i \le d. \end{equation} It turns out that there are important applications arising from stochastic network theory (see e.g. \cite{williams1998diffusion}) that lead to well-defined Brownian motions constrained in the positive orthant according to a certain reflection mechanism, but the associated Skorohod problem is not well-posed for a typical path in $\mathcal{D}([0,\infty): \mathbb{R}R_+^d)$ (in fact uniqueness of solution fails even for linear paths). In order to cover such situations the papers \cite{taywil, reiwil} introduce and study {\bf semimartingale reflecting Brownian motions} (SRBM). Suppose that $\Sigma$ is a $d\times d$ positive definite matrix. 
For $\mathbf{x} \in \mathbb{R}_+^d$, an SRBM associated with the data $\left(\mu, \mathbf{D}, R\right)$ that starts from $\mathbf{x}$ is a continuous, $\left\{\mathcal{F}_{t}\right\}$-adapted $d$-dimensional process $X$, defined on some filtered probability space $\left(\Omega, \mathcal{F},\left\{\mathcal{F}_{t}\right\}_{t \geq 0}, P\right)$ such that: (i) $X(t)=\mathbf{x} + \mathbf{D} B(t)+\mu t+R L(t) \in \mathbb{R}_+^d$ for all $t \geq 0$, $P$-a.s., (ii) $B$ is a $d$-dimensional standard $\left\{\mathcal{F}_{t}\right\}$-Brownian motion, (iii) $L$ is an $\left\{\mathcal{F}_{t}\right\}$-adapted $d$-dimensional process such that $L_{i}(0)=0$ for $i=1, \ldots, d$, $P$-a.s. For each $i=1, \ldots, d$, $L_{i}$ is continuous, nondecreasing and $\int_{0}^{t} 1_{\left\{X_{i}(s) \neq 0\right\}} d L_{i}(s)=0$ for all $t \geq 0$, $P$-a.s. A key condition in this context is that the matrix $R$ is {\bf completely-$\mathcal{S}$} (cf. \cite{reiwil}), namely, for every $k \times k$ principal submatrix $G$ of $R$, there is a $k$-dimensional vector $v_{G}$ such that $v_{G} \geq 0$ and $G v_{G}>0$. Roughly speaking, the condition says that at each point $x$ on the boundary there is a permissible direction of reflection, namely an $r \in r(x)$, that points inwards. In \cite[Theorem 2]{reiwil}, it was shown that a necessary condition for the existence of an SRBM is that the reflection matrix $R$ is a completely-$\mathcal{S}$ matrix. The paper \cite{taywil} shows that the condition is sufficient as well for (weak) existence and uniqueness of the SRBM to hold. Under this condition, the collection $\{P_{\mathbf{x}}\}_{\mathbf{x} \in \mathbb{R}_+^d}$, where $P_{\mathbf{x}}$ is the law of the SRBM starting from $\mathbf{x}$, forms an $\mathbb{R}_+^d$-valued strong Markov process. \subsection{Ergodicity.} The first basic result on long-time behavior of RBM in $\mathbb{R}_+^d$ is due to \cite{harrison1987brownian}, which treats the Harrison-Reiman class of RBM. Suppose that the following assumptions hold (note that (A1) is simply a restatement of membership in the Harrison-Reiman class). \textbf{Assumptions: } \begin{itemize} \item[(A1)] The matrix $P := I - R^T$ is substochastic and transient. \item[(A2)] $\mathbf{b} := -R^{-1}\mu > 0$. \item[(A3)] The matrix $\Sigma = \mathbf{D}\mathbf{D}^T$ is positive definite. \end{itemize} It was shown in \cite{harrison1987brownian} that, when the above assumptions (A1)-(A3) hold, the $(\mu, \mathbf{D})$-RBM is positive recurrent and consequently has a unique stationary distribution. Assumption (A2) is the well known `stability condition' which is sufficient for the existence of a stationary measure \cite[Section 6]{harrison1987brownian}. The condition is almost necessary in that if $\mathbf{b}_i<0$ for some $i$ then the RBM is transient \cite{buddupsimp}. The matrix $\Sigma = \mathbf{D}\mathbf{D}^T$ gives the covariance matrix associated with the driving diffusion of \eqref{RBMdef} and (A3) guarantees that this diffusion process is elliptic. Ellipticity gives the uniqueness of the stationary distribution \cite{harrison1987brownian}. Under additional conditions on $R$ and $\mathbf{D}$, the paper \cite{harrisonwilliams_expo} shows that the stationary distribution takes an explicit and simple form, given as a product of exponential distributions; however, in general there is little one can say about this stationary distribution other than some properties of its tail (see e.g. the next section and \cite{budlee}).
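For the Harrison-Reiman class the Skorohod map of Property \ref{prop:regsp} is also easy to compute numerically: on a finite time grid the regulator $L$ in \eqref{RBMdef} can be obtained by iterating, from $L \equiv 0$, the map whose $i$-th component at time $t$ is $\sup_{s\le t}\,[\,(P^T L(s) - \mathbf{y}(s))_i\,]^+$, where $\mathbf{y}$ denotes the unreflected path; this is a discrete-grid version of the fixed-point construction underlying \cite{harrison1981reflected}. The Python sketch below is an illustration only; the function name, grid, tolerance and example data are ad hoc choices of ours.
\begin{verbatim}
import numpy as np

def hr_skorokhod_map(y, P, tol=1e-10, max_iter=10_000):
    """Discrete-grid Skorohod map for the Harrison-Reiman class.

    y : array of shape (N+1, d), a path with y[0] >= 0 componentwise.
    P : (d, d) substochastic transient matrix, with R = I - P^T.
    Returns (x, L) where x = y + R L (computed row-wise) and L is the
    regulator, obtained as the limit of the monotone iteration
    L <- running max over time of the positive part of P^T L - y.
    """
    L = np.zeros_like(y)
    for _ in range(max_iter):
        L_new = np.maximum.accumulate(np.maximum(L @ P - y, 0.0), axis=0)
        if np.max(np.abs(L_new - L)) < tol:
            L = L_new
            break
        L = L_new
    x = y + L - L @ P          # row s equals y(s) + (I - P^T) L(s)
    return x, L

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d, T, N = 3, 10.0, 10_000
    P = np.diag(np.full(d - 1, 0.5), 1)      # toy substochastic, transient P
    mu, D = -np.ones(d), np.eye(d)
    t = np.linspace(0.0, T, N + 1)
    B = np.vstack([np.zeros(d),
                   np.cumsum(np.sqrt(T / N) * rng.standard_normal((N, d)), axis=0)])
    free = 1.0 + mu * t[:, None] + B @ D.T   # x + mu t + D B(t), started at (1,...,1)
    X, L = hr_skorokhod_map(free, P)
    print("minimum coordinate along the reflected path:", X.min())
\end{verbatim}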
The paper \cite{ABD} studied the long time behavior of constrained reflected diffusions of the form in \eqref{eq:genrefdif2}. The basic assumptions are that Property \ref{prop:regsp} holds and the coefficients $b, \sigma$ satisfy the following conditions.\\ \noindent \textbf{Assumptions: } \begin{itemize} \item[(D1)] $b$ and $\sigma$ are Lipschitz maps. \item[(D2)] $\sigma$ is bounded. \item[(D3)] The matrix $\sigma \sigma^T$ is uniformly nondegenerate. \end{itemize} In addition one needs a suitable stability condition. The key stability condition in \cite{ABD} is formulated in terms of the cone $$ \mathcal{C} :=\left\{-\sum_{i=1}^{d} \alpha_{i} r^{i}: \alpha_{i} \geq 0, i \in\{1, \ldots, d\}\right\}. $$ This cone $\mathcal{C}$ was introduced in \cite{buddupsimp} to characterize stability properties of deterministic paths arising as law of large numbers limits, under the long-time scaling, of a $(\mu, \mathbf{D})$-RBM for which the associated reflection matrix $R$ satisfies Property \ref{prop:regsp}. The main result there showed that if $\mu \in \mathcal{C}^o$ then these deterministic paths are all attracted to the origin (namely the paths converge to $0$ as time becomes large), whereas if $\mu \in \mathcal{C}^c$ then these paths diverge to $\infty$. Finally, when $\mu \in \partial \mathcal{C}$, all solutions of the deterministic model are bounded, and for at least one initial condition the corresponding trajectory is not attracted to the origin. This stability characterization was instrumental in the study of the stability of constrained reflected diffusions in \cite{ABD}. Let, for $\delta \in(0, \infty)$, $$ \mathcal{C}(\delta) :=\{v \in \mathcal{C}: \operatorname{dist}(v, \partial \mathcal{C}) \geq \delta\} . $$ The key stability assumption on the constrained diffusion model \eqref{eq:genrefdif2} made in \cite{ABD} is the following.\\ \noindent \textbf{Assumption ($\mathcal{C}$-delta): } There exist a $\delta \in(0, \infty)$ and a bounded set $A \subseteq \mathbb{R}_+^d$ such that for all $x \in \mathbb{R}_+^d \backslash A$, $b(x) \in \mathcal{C}(\delta)$. Under this stability assumption, together with assumptions (D1)-(D3) noted previously, \cite[Theorem 2.2]{ABD} shows that the Markov process described by the constrained diffusion model \eqref{eq:genrefdif2} is positive recurrent and has a unique stationary distribution. The main proof idea there is to study stability properties of a certain deterministic collection of controlled reflected paths of the form $$\psi_v(t;\mathbf{x}) \doteq \Gamma\left(\mathbf{x}+ \int_0^{\cdot} v(s) ds\right)(t), \; t \ge0,$$ where $\{v(t)\}_{t\ge 0}$ is a locally integrable function that takes values in $\mathcal{C}(\delta)$ for all $t\ge 0$. Denote by $T(\mathbf{x})$ the supremum of the hitting times to zero, taken over all such controlled paths, starting from $\mathbf{x}$. Then the analysis of \cite{buddupsimp} gives the explicit bound $T(\mathbf{x}) \le C \|\mathbf{x}\|/\delta$ for some $C \in (0,\infty)$ and all $\mathbf{x} \in \mathbb{R}_+^d$. The function $T(\cdot)$ turns out to be a natural Lyapunov function for the problem, although it lacks the $C^2$-property required for implementing the usual It\^{o} formula based methods for Lyapunov functions of diffusion processes. Nevertheless, the function is Lipschitz, and that is enough to deduce the uniform in time moment estimates needed to prove the main result.
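Since the reflection matrix of a Harrison-Reiman RBM is invertible, a vector $v$ lies in $\mathcal{C}$ exactly when $-R^{-1}v \ge 0$ componentwise; moreover, if the coefficients $-R^{-1}b(x)$ are bounded below componentwise by some $\varepsilon>0$ outside a bounded set, then Assumption ($\mathcal{C}$-delta) holds with a $\delta>0$ depending only on $\varepsilon$ and $R$. The following Python sketch (an illustration only; the drift, the reflection matrix and the sampled points are toy choices of ours) evaluates this margin on a random sample of points.
\begin{verbatim}
import numpy as np

def min_cone_margin(b_func, R, points):
    """Smallest component of alpha(x) = -R^{-1} b(x) over the sampled points;
    a strictly positive value certifies b(x) in the interior of the cone C."""
    Rinv = np.linalg.inv(R)
    return min(np.min(-Rinv @ b_func(x)) for x in points)

if __name__ == "__main__":
    d = 3
    P = np.diag(np.full(d - 1, 0.5), 1)
    R = np.eye(d) - P.T
    b_func = lambda x: -np.ones(d) - 0.1 * np.tanh(x)   # toy state-dependent drift
    rng = np.random.default_rng(2)
    sample = rng.uniform(0.0, 50.0, size=(1000, d))     # points away from the origin
    eps = min_cone_margin(b_func, R, sample)
    # A uniform margin eps > 0 outside a bounded set yields Assumption (C-delta)
    # with a delta of order eps / ||R^{-1}||.
    print("sampled cone margin:", eps)
\end{verbatim}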
The cone $\mathcal{C}$ can also be used to give conditions for stability of constrained L\'{e}vy processes and more general constrained jump-diffusions in $\mathbb{R}R_+^d$ associated with a reflection matrix $R$ for which Property \ref{prop:regsp} holds. Such a jump-diffusion is given as a solution of an equation of the form \begin{equation}\label{eq:genrefdif3} X(t) = \Gamma\left(x + \int_0^{\cdot} b(X(s)) ds + \int_0^{\cdot} \sigma(X(s)) dB(s) + J(\cdot)\right)(t), \; t \ge 0, \end{equation} where $$J(t) = \int_{[0,t]\times \mathbb{R}R^n} h(\psi(X(s-), z)) [N(ds, dz)- \lambda(dz) ds] + \int_{[0,t]\times \mathbb{R}R^n} h'(\psi(X(s-), z)) N(ds, dz),$$ $N$ is a Poisson random measure (PRM) on $\mathbb{R}R_+\times \mathbb{R}R^n$, with intensity measure $ds\, \lambda(dz)$, $\lambda$ is a $\sigma$-finite measure on $\mathbb{R}R^n$, $b, \sigma$ satisfy (D1)-(D3) as before, $\psi: \mathbb{R}R_+^d\times \mathbb{R}R^n \to \mathbb{R}R^d$ is a suitable coefficient map, $h: \mathbb{R}R^d \to \mathbb{R}R^d$ is a nice truncation function (see \cite{atabudEJP}) and $h'(\mathbf{x})= \mathbf{x}-h(\mathbf{x})$ for $\mathbf{x}\in \mathbb{R}R^d$. The work \cite{atabudEJP} shows that the cone $\mathcal{C}$ can be used to study stability for a very general class of constrained jump-diffusions of the form in \eqref{eq:genrefdif3} for which only the first moment is given to be finite. Recall that, in the case of constrained diffusions as in \eqref{eq:genrefdif}, the condition for stability was Assumption $\mathcal{C}$-delta on the drift vector field $b(\cdot)$. Here, in the case of a jump-diffusion, the natural definition of the drift vector field is $\tilde \beta := \mathcal{L} \mbox{id}$, where $\mathcal{L}$ denotes the generator of the “unconstrained” jump-diffusion away from the boundary, and $\mbox{id}$ denotes the identity mapping on $\mathbb{R}R^d$. In the special case of a L\'{e}vy process with finite mean, the drift is simply $\tilde \beta(\mathbf{x}) = E_{\mathbf{x}}(X(1))-\mathbf{x}$ (which is independent of $\mathbf{x}$) where $E_{\mathbf{x}}$ denotes the expectation under which $X$ starts from $\mathbf{x}$. The basic stability assumption in \cite{atabudEJP} is that the range of $\tilde \beta$ is contained in $\cup_{k\in \mathbb{N}} k \mathcal{C}_1$ where $\mathcal{C}_1$ is a compact subset of the interior of $\mathcal{C}$. Under this assumption it is shown in \cite{atabudEJP} that there exists a compact set $A$ such that for any compact $C \subset \mathbb{R}R_+^d$, $\sup_{\mathbf{x}\in C} E_{\mathbf{x}}(\tau_A)<\infty$, where $\tau_A$ is the first time $X$ hits $A$. The proof of this result is based on the construction of a Lyapunov function, and on a detailed separate analysis of small and large jumps of the Markov process. As another consequence of the existence of a Lyapunov function it is shown in \cite{atabudEJP} that $X$ is bounded in probability. From the Feller property of the process it then follows that it admits at least one invariant measure. Finally, under further additional communicability conditions it follows that the Markov process is positive Harris recurrent and admits a unique invariant measure (see \cite{atabudEJP}). Up to now, stability results discussed in this section were for a setting where Property \ref{prop:regsp} is satisfied. In the setting of a $(\mu, \mathbf{D}, R)$-SRBM, under the completely- $\mathcal{S}$ condition on $R$, the ergodicity behavior has been studied in \cite{DW}. 
In this work the main stability condition is formulated in terms of constrained versions of the linear trajectory $\eta(t;\mathbf{x}) = \mathbf{x} + \mu t$, $t\ge 0$, $\mathbf{x}\in \mathbb{R}R_+^d$. Since the associated Skorohod problem is not necessarily well-posed in the setting of \cite{DW} there may be multiple constrained versions of $\eta(t;\mathbf{x})$ that are consistent with the reflection mechanism specified by $R$. The paper \cite{DW} shows that if all these constrained paths $\theta(t;\mathbf{x})$, for all initial conditions $\mathbf{x}$ are attracted to the origin (namely $\theta(t;\mathbf{x}) \to 0$ as $t\to \infty$) then the SRBM is positive recurrent and has a unique stationary distribution. The basic idea in the proof is to construct a suitable $C^2$-Lyapunov function $W$. A key property of this Lyapunov function, in addition to certain conditions on directional derivatives of $W$ at the boundary points, is that, for some $c>0$, $\nabla W(\mathbf{x}) \cdot \mu \le -c <0$ for all $\mathbf{x} \in \mathbb{R}R_+^d\setminus \{0\}$ . This property allows one to construct suitable supermartingales from which uniform in time moment estimates can be obtained. In fact this Lyapunov function was also the inspiration for the Lyapunov function, for the general state dependent drift and a jump-diffusion setting, constructed in \cite{atabudEJP}. There, due to the state dependence, the Lyapunov function $\tilde W$ needs to satisfy the property $\nabla \tilde W \cdot v \le -c<0$, uniformly, for all $v$ in the range of the drift $\tilde \beta = \mathcal{L} \mbox{id}$ which makes the construction more involved. \subsection{Geometric Ergodicity.} Above results focused on existence and uniqueness of stationary distributions, and positive recurrence. A key question of interest is how long it takes for the RBM started from arbitrary initial distributions to approach stationarity (rate of convergence). These questions were studied in \cite{budlee}. This work studied $(\mu, \mathbf{D}, R)$-SRBM models where $R$ is completely-${\mathcal{S}}$ that satisfies the stability condition of \cite{DW}, and constrained diffusion of the form in \eqref{eq:genrefdif2} with coefficients satisfying (D1)-(D3), the associated reflection matrix $R$ satisfying Property \ref{prop:regsp}, and drift satisfying the stability condition ($\mathcal{C}$-delta). For both of these settings \cite{budlee} constructs a suitable exponentially growing Lyapunov function $V$ and establishes that the processes are $V$-uniformly ergodic. The property of $V$-uniform ergodicity says in particular that law of the process at any fixed time $t$ converges to the stationary distribution, at an exponential rate, in the total variation distance as $t \rightarrow \infty$. In fact since $V$ is exponentially growing, from this convergence one can also deduce convergence of integrals of unbounded functions (with possible exponential growth) with respect to the law at time $t$, to the corresponding integrals with respect to the stationary distribution, at an exponential rate, as $t \to \infty$. The result of $V$-uniform ergodicity is also used in \cite{budlee} to prove that the unique invariant measures of the SRBM and of the reflected diffusions in \eqref{eq:genrefdif2} admit finite moment generating functions in suitable neighborhoods of zero. As other consequences of this ergodicity one can obtain uniform (in time and initial condition in a compact set) estimates on exponential moments of these constrained processes. 
One can also give growth estimates for polynomial moments of the process as a function of the initial condition, obtain functional central limit theorems for the processes $\xi_{n}(t) := \frac{1}{\sqrt{n}}\left(\int_{0}^{n t}\left[F\left(X(s)\right)-\int F \, d\pi\right] \mathrm{d} s\right)$ as $n\to \infty$, where $\pi$ is the unique invariant measure and $F$ is allowed to have exponential growth, and characterize the asymptotic variance via the solution of a related Poisson equation (see \cite{budlee} for these results). The key steps in the proofs are the construction of a suitable Lyapunov function and establishing a minorization condition on a sufficiently large compact set (referred to as a `small set'). The Lyapunov function provides good control on the exponential moments of the return times to the small set, while the minorization condition implies the existence of abstract couplings of two copies of the process (via construction of `pseudo-atoms' as described in Chapter 5 of \cite{meyn2012markov}) which have a positive chance of coalescing inside the small set. Together, they furnish exponential rates of convergence (in a weighted total variation distance). Other than \cite{budlee}, only a few works obtain quantitative rates of convergence in some special cases. The paper \cite{IPS} obtains explicit convergence rates for time averages of bounded functionals of the state process for a class of reversible rank-based diffusions with explicit stationary measures using Dirichlet form techniques. The setting of one-dimensional RBM is considered in \cite{wang2014measuring} where (among other results) an estimate on the spectral gap is provided as a function of the drift and the diffusion coefficient. \section{Parameter and dimension dependence of rates of convergence}\label{con} The results of \cite{budlee}, due to the somewhat implicit treatment of the process inside the small set, provide convergence rates that shed little light on how these rates qualitatively depend on the system parameters or the state dimension. In this section and the next, the focus is on the following question: how does the convergence rate to stationarity depend on the initial distribution, the diffusion parameters and the underlying dimension $d$? Moreover, it has been empirically observed that for certain types of RBM with parameters $(\mu^{(d)},\Sigma^{(d)}, R^{(d)})$ indexed by the dimension $d\ge 1$, for any fixed $k \in \mathbb{N}$, the marginal distribution of the first $k$ coordinates approaches the stationary marginal at a \emph{rate independent of $d$}. We will present results that identify conditions on $(\mu^{(d)}, \Sigma^{(d)}, R^{(d)})$ that give rise to this phenomenon, referred to as `dimension-free' convergence, and give quantitative bounds on this rate. These results set the stage for studying ergodicity properties of \emph{infinite-dimensional reflected diffusions}, which will be the focus of the last two sections. Among other things, a key challenge in studying the long-time behavior of such diffusions lies in the fact that they can have multiple stationary distributions. Thus, even before making sense of rates of convergence, one needs to understand which stationary distribution the process will approach starting from a given initial distribution. In a recent work, \cite{BC2016} obtains dimension dependent bounds on rates of convergence in Wasserstein distance for a certain class of RBM whose parameters satisfy certain `uniformity conditions' in dimension (see (BC1)-(BC3) in Section \ref{sec:bcrbm}).
Although the assumptions imposed are somewhat rigid, there is a key new idea there which is to replace the role of abstract couplings (that for example underlie the approach in \cite{budlee}) with more tractable \emph{synchronous couplings} (namely, couplings where the RBM starting from different points are driven by the same Brownian motion) to analyze the behavior of the RBM inside the small set. Using explicit couplings to obtain better convergence rate estimates is a relatively recent but developing area. See \cite{bolley2010trend,Eberle2015,Eberle2017,eberle2016quantitative} for such results for other classes of diffusions. In this Section, we present results from \cite{BB2020} which, building on the work of \cite{BC2016}, systematically develop the application of synchronous couplings and construct associated Lyapunov functions to obtain quantitative rates of convergence for any RBM satisfying Assumptions (A1)-(A3). These rates explicitly highlight the dependence of system parameters and dimension. Further, the obtained rates are significantly better when applied to the class of RBM considered in \cite{BC2016} (see Section \ref{sec:bcrbm}). \subsection{Main Result} We now present the main result of \cite{BB2020}. Given probability measures $\mu$ and $\nu$ on $\mathbb{R}^d_+$, a probability measure $\gamma$ on $\mathbb{R}^d_+\times \mathbb{R}^d_+$ is said to be a coupling of $\mu$ and $\nu$ if $\gamma(\cdot \times \mathbb{R}^d_+) = \mu(\cdot)$ and $\gamma(\mathbb{R}^d_+ \times \cdot) = \nu(\cdot)$. The $L^1$-Wasserstein distance between two probability measures $\mu$ and $\nu$ on $\mathbb{R}^d_+$ is given by $$ W_1(\mu, \nu) = \inf\left\lbrace\int_{\mathbb{R}^d_+ \times \mathbb{R}^d_+}\|\mathbf{x}-\mathbf{y}\|_1 \gamma(\mathrm{o}_{\sss P}eratorname{d}\mathbf{x}, \mathrm{o}_{\sss P}eratorname{d}\mathbf{y}) \ : \gamma \text{ is a coupling of } \mu \text{ and } \nu\right\rbrace, $$ where for a vector $z \in \mathbb{R}^d$, $\|z\|_1 = \sum_{i=1}^d |z_i|$. We will denote the law of a random variable $X$ by $\mathcal{L}(X)$. Recall that from \cite{harrison1987brownian}, under Assumptions (A1)-(A3), there is a unique stationary distribution of the RBM. Denote by $\mathbf{X}(\infty)$ a random vector sampled from this stationary distribution. Define the \emph{relaxation time}, $t_{rel}(\mathbf{x})$ for the RBM starting from $\mathbf{x} \in \mathbb{R}^d_+$ as $$t_{rel}(\mathbf{x}) := \inf\{t\ge 0: W_1\left(\mathcal{L}(X(t; \mathbf{x})), \mathcal{L}(\mathbf{X}(\infty))\right) \le 1/2\}.$$ We will abbreviate the parameters of the RBM as $\Theta :=(\mu, \Sigma, R)$. Recall that these parameters are required to satisfy (A1)-(A3). We will quantify rate of convergence to equilibrium in terms of the following functions of $\Theta,d$. Define the {\em contraction coefficient} \begin{equation}\label{nd} n(R) := \inf\{ n \ge 1: \|P^n \mathbf{1}\|_{\infty} \le 1/2\}, \end{equation} where $\mathbf{1}$ is a $d$-dimensional vector of ones and for $u \in \mathbb{R}^d$, $\|u\|_{\infty} := \sup_{1\le i \le d} |u_i|$. By Assumption (A1), $n(R) < \infty$. Fix $\kappa \in (0,\infty)$. For any $\mathbf{x} \in \mathbb{R}^d_+$, define $\|\mathbf{x}\|_{\infty}^* := \sup_{1\le i \le d}\sigma_i^{-1}\mathbf{x}_i$. 
Let \begin{align*} a(\Theta) &:= \sup_{1 \le i \le d}\left[\frac{\sum_{j=1}^d(R^{-1})_{ij}\sigma_j}{b_i}\right],\ \ \ \ \ \ \ b(\Theta) := \sup_{1\le i \le d}\left[\frac{\sum_{j=1}^d(R^{-1})_{ij}\sigma_j}{\sigma_i}\right],\\ R_1(\Theta,d) &:= n(R)(1+a(\Theta)^2\log(2d)),\ \ \ \ \ \ \ R_2(\Theta) := a(\Theta)^2b(\Theta),\\ C_1(\mathbf{x},\Theta)&:= 2\|\mathbf{x}\|_1 + a(\Theta)\sum_{i,j}(R^{-1})_{ij}\sigma_j,\\ C_2(\mathbf{x},\Theta, \kappa) &:= 2\|\mathbf{x}\|_1e^{3(\kappa a(\Theta)b(\Theta))^{-1}\|\mathbf{x}\|_{\infty}^*} + a(\Theta)\left[2d(1+d)\left(\sum_{i,j}(R^{-1})^2_{ij}\right)\left(\sum_{j=1}^d\sigma_j^2\right)\right]^{1/2}. \end{align*} \begin{theorem}[\cite{BB2020}]\label{wasthm} There exist a $t_0 \in (0,\infty)$ and $D_1, D_2 \in (0,\infty)$ such that for every $d \in \mathbb{N}$, $\mathbf{x} \in \mathbb{R}^d_+$, every parameter choice $\Theta$, and $ t \ge t_0 \left(1 + (a(\Theta))^2\log(2d)\right)$, \begin{align*} W_1\left(\mathcal{L}(X(t; \mathbf{x}), \mathcal{L}(\mathbf{X}(\infty))\right) &\le \mathbb{E}(\|X(t;\mathbf{x}) - X(t; \mathbf{X}(\infty))\|_1)\\ & \le C_1(\mathbf{x},\Theta)\left(2e^{-\frac{D_1t}{R_1(\Theta,d)}} + e^{-\frac{t}{16D_2R_2(\Theta)}}\right) + C_2(\mathbf{x},\Theta, D_2)e^{-\frac{t}{8D_2R_2(\Theta)}}. \end{align*} In particular, the relaxation time satisfies \begin{align*} t_{rel}(\mathbf{x}) \le \max\{D_1^{-1}R_1(\Theta,d) \log(8C_1(\mathbf{x},\Theta)) + 16D_2R_2(\Theta)\log[4(C_1(\mathbf{x},\Theta) + C_2(\mathbf{x},\Theta, D_2))],\\ \qquad \qquad \qquad t_0 \left(1 + (a(\Theta))^2\log(2d)\right)\}. \end{align*} \end{theorem} \subsection{Outline of Approach} We now give a very broad outline of the approach taken in \cite{BB2020} for proving Theorem \ref{wasthm}. \begin{itemize} \item[(i)] As mentioned before, the key idea is to use synchronous couplings, namely, coupled RBMs from different initial states but driven by the same Brownian motion. An important feature of such couplings is that the $L^1$-distance between the two processes $X(\cdot;\mathbf{0})$ and $X(\cdot;\mathbf{x})$ is non-increasing in time. The rate of decay of this $L^1$-distance is quantified in terms of the contraction coefficient defined in \eqref{nd}. Roughly speaking, the process is observed at random times $\{\eta^k(\mathbf{x}) : k \ge 1\}$ such that all the co-ordinates of $X(\cdot;\mathbf{x})$ have hit zero at least once between successive $\eta^k(\mathbf{x})$'s. It turns out that the $L^1$-distance between the synchronously coupled processes decreases by a factor of $1/2$ by time $\eta^{n(R)}(\mathbf{x})$. Hence, to obtain bounds on $L^1$-Wasserstein distance, it suffices to estimate how many of these $\eta^k(\mathbf{x})$'s are smaller than some given large time $t$. \item[(ii)] For any $\mathbf{v}>0$ in $\mathbb{R}^d$ satisfying $R^{-1}\mathbf{v} \le \mathbf{b}$, the RBM with oblique reflection in \eqref{RBMdef} can be dominated in a suitable manner by a normally reflected Brownian motion with drift $-\mathbf{v}$ and the same driving Brownian motion. One can construct appropriate Lyapunov functions and associated small sets for this dominating process by analyzing weighted sup-norms of its co-ordinates. This can be then used to estimate the number of times the co-ordinates of the dominating process hit zero by time $t$ as a function of $\mathbf{v}$. This, in turn, lower bounds the number of $\eta^k(\mathbf{x})$'s for the original RBM less than $t$ in terms of $\mathbf{v}$. Optimizing over feasible choices of $\mathbf{v}$ gives the result. 
\end{itemize} \subsection{Examples}\label{eg} We now describe how Theorem \ref{wasthm} can be used to obtain bounds on the rate of convergence to equilibrium in two examples that are discussed in Sections \ref{sec:bcrbm} and \ref{sec:atlmod} below. \subsubsection{\textbf{Blanchet-Chen RBM}} \label{sec:bcrbm} This refers to the class of RBM under the set of assumptions in \cite{BC2016}, which are: \begin{itemize} \item[(BC1)] The matrix $P$ is substochastic and there exist $\kappa>0$ and $\beta \in (0,1)$ not depending on the dimension $d$ such that $\|\mathbf{1}^TP^n\|_{\infty} \le \kappa(1 - \beta)^n$ for all $n \ge 0$. \item[(BC2)] There exists $\delta>0$ independent of $d$ such that $R^{-1}\mu < -\delta \mathbf{1}$. \item[(BC3)] There exists $\sigma>0$ independent of $d$ such that $ \sigma_i :=\sqrt{\Sigma_{ii}}$ satisfies $\sigma^{-1} \le \sigma_i \le \sigma$ for every $1 \le i \le d$. \end{itemize} Under the above conditions, \cite{BC2016} gives a polynomial bound of $O(d^4(\log d)^2)$ on the relaxation time of the RBM. As shown in the following theorem, Theorem \ref{wasthm} gives a substantial improvement by establishing a polylogarithmic relaxation time of $O((\log d)^2)$. \begin{theorem}[\cite{BB2020}]\label{BCbound} Under Assumptions (BC1), (BC2) and (BC3), there exist positive constants $E_1, E_2, E_3, E_4, t_1$ such that for any $\mathbf{x} \in \mathbb{R}^d_+, t \ge t_1 \max\{\|\mathbf{x}\|_{\infty}, \log(2d)\}$, \begin{equation*} \mathbb{E}(\|X(t;\mathbf{x}) - X(t; \mathbf{X}(\infty))\|_1) \le 2\left(2\|\mathbf{x}\|_1 + E_1d^2\right)e^{-E_2 t/\log(2d)} + \left(4\|\mathbf{x}\|_1 + E_1d^2\right)e^{-E_4t/2} + E_3 d^{2}e^{-E_4t}. \end{equation*} In particular, the relaxation time satisfies \begin{align*} t_{rel}(\mathbf{x}) \le \max \left\lbrace E_2^{-1}\log\left[8\left(2\|\mathbf{x}\|_1 + E_1d^2\right)\right]\log (2d) + E_4^{-1}\left[2\log\left[8\left(4\|\mathbf{x}\|_1 + E_1d^2\right)\right] + \log (8E_3d^{2})\right],\right.\\ \left.\qquad \qquad \qquad t_1 \max\{\|\mathbf{x}\|_{\infty}, \log(2d)\}\right\rbrace. \end{align*} \end{theorem} \subsubsection{\textbf{Gap process of rank-based diffusions}} \label{sec:atlmod} Rank-based diffusions are interacting particle systems in which the drift and diffusion coefficients of each particle depend on its rank. Mathematically, they are represented by the SDE: \begin{equation}\label{rbdef} dY_i(t) = \left(\sum_{j=0}^{d} \delta_j \mathbf{1}_{[Y_i(t) = Y_{(j)}(t)]}\right)dt + \left(\sum_{j=0}^{d} \sigma_j \mathbf{1}_{[Y_i(t) = Y_{(j)}(t)]}\right)dW_i(t) \end{equation} for $0\le i \le d$, where $\{Y_{(j)}(t) : t \ge 0\}$ denotes the trajectory of the rank $j$ particle as a function of time $t$ ($Y_{(0)}(t) \le \dots\le Y_{(d)}(t) \text{ for all } t \ge 0$), $\delta_j, \sigma_j$ denote the drift and diffusion coefficients of the rank $j$ particle, and $W_i$, $0\le i \le d$, are mutually independent standard one dimensional Brownian motions. We will assume throughout that $\sigma_i>0$ for all $0 \le i \le d$. Rank-based diffusions have been proposed and extensively studied as models for problems in finance and economics. A special case is the Atlas model \cite{Fern}, where the lowest particle (i.e. the particle with rank $0$) is a Brownian motion with positive drift and the remaining particles are Brownian motions without drift (i.e. $\delta_i=0$ for all $i\ge1$). The general setting considered in \eqref{rbdef} was introduced in \cite{BFK}.
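Rank-based diffusions such as \eqref{rbdef} are straightforward to simulate with an Euler scheme in which the drift and diffusion coefficients are reassigned according to the current ranks at every step. The sketch below (an illustration only; the step size, horizon and tie-breaking rule are arbitrary choices of ours, and no claim about the discretisation error is made) simulates the standard Atlas model.
\begin{verbatim}
import numpy as np

def simulate_rank_based(y0, delta, sigma, T, n_steps, rng):
    """Euler scheme for the rank-based SDE: at every step each particle
    receives the drift/diffusion coefficient of its current rank
    (rank 0 = lowest particle; ties are broken by argsort)."""
    m = len(y0)                      # m = d + 1 particles
    dt = T / n_steps
    Y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        ranks = np.empty(m, dtype=int)
        ranks[np.argsort(Y)] = np.arange(m)
        Y = (Y + delta[ranks] * dt
               + sigma[ranks] * np.sqrt(dt) * rng.standard_normal(m))
    return Y

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    d = 10                                     # d + 1 particles, ranks 0,...,d
    delta = np.zeros(d + 1); delta[0] = 1.0    # standard Atlas model drifts
    sigma = np.ones(d + 1)
    Y_T = simulate_rank_based(np.zeros(d + 1), delta, sigma,
                              T=20.0, n_steps=20_000, rng=rng)
    print("ordered positions at time T:", np.round(np.sort(Y_T), 2))
    print("gaps at time T:", np.round(np.diff(np.sort(Y_T)), 2))
\end{verbatim}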
In order to study the long time behavior, it is convenient to consider the gap process $Z = (Z_1,\dots,Z_{d})$, given by $Z_i = Y_{(i)} - Y_{(i-1)}$ for $1 \le i \le d$. The process $Z \equiv Z(t;\mathbf{z})$ is an RBM in $\mathbb{R}_+^{d}$ given as \begin{equation*} Z(t;\mathbf{z}) = \mathbf{z} + \mathbf{D} B(t) + \mu t + RL(t), \end{equation*} where $\mathbf{z}$ is the initial gap sequence, $B$ is a standard $d$-dimensional Brownian motion, $\mu_i = \delta_{i} - \delta_{i-1}$ for $1 \le i \le d$, $\mathbf{D} \in \mathbb{R}^{d \times d}$, $L$ is the local time process associated with $Z$, and $R$ satisfies Assumption (A1), namely it belongs to the Harrison-Reiman class. The covariance matrix $\Sigma = \mathbf{D}\mathbf{D}^T$ has entries $\Sigma_{ii} = \sigma_{i-1}^2 + \sigma_{i}^2$ for $1 \le i \le d$, $\Sigma_{i(i-1)} = - \sigma_{i-1}^2$ for $2 \le i \le d$, $\Sigma_{i(i+1)} = -\sigma_{i}^2$ for $1 \le i \le d-1$ and $\Sigma_{ij} = 0$ otherwise. In particular, (A3) is satisfied, namely $\Sigma$ is positive definite. Moreover, $R$ is given explicitly as $R = I-P^T$, where $P$ is the substochastic matrix given by $P_{i (i+1)} = P_{i (i-1)} = 1/2$ for all $2 \le i \le d-1$, $P_{12} = P_{d(d-1)} = 1/2$ and $P_{ij} = 0$ if $|i-j| \ge 2$. From \cite{harrison1987brownian}, the process is positive recurrent and has a unique stationary distribution if Assumption (A2) is satisfied, namely $\mathbf{b} =-R^{-1}\mu>0$, which is equivalent to the following condition: \begin{equation}\label{atlasstab} b_k = \sum_{i=1}^k(\delta_{i-1} - \overline{\delta})>0 \mbox{ for } 1 \le k \le d, \mbox{ where } \overline{\delta} = (d+1)^{-1}\sum_{j=0}^{d} \delta_j. \end{equation} In the special case where \begin{equation}\label{sk} \sigma_{i}^2 - \sigma_{i-1}^2 = \sigma_{1}^2 - \sigma_0^2 \text{ for all } 1 \le i \le d, \end{equation} the stationary distribution is explicit and takes the form $\mathcal{L}(\mathbf{Z}(\infty)) = \otimes_{k=1}^d \operatorname{Exp}(4b_k\left(\sigma_{k-1}^2 + \sigma_{k}^2\right)^{-1})$ (see Section 5 of \cite{ichiba2011hybrid}). For the general case (i.e. the $\sigma_i$ are strictly positive and \eqref{atlasstab} is satisfied), explicit formulas for the stationary distribution are not available; however, from \cite{budlee}, the law of $Z(t;\mathbf{z})$ converges to the unique stationary distribution in (weighted) total variation distance at an (albeit implicit) exponential rate. From Theorem \ref{wasthm} one can give the following bound on the rate of $L^1$-Wasserstein convergence of the gap process to $\mathbf{Z}(\infty)$. Note that we do not require reversibility or an explicit expression for the stationary measure. Two key quantities appearing in the rate of convergence are \begin{equation}\label{eq:sigupplow} a^* := \sup_{1 \le i \le d}\frac{i(d+1-i)}{b_i}, \ \ \ \sigma = \left(\sup_{0 \le i \le d} \sigma_i \right) \vee \left(\sup_{0 \le i \le d} \sigma_i^{-1} \right), \end{equation} where the $b_i$ are defined in \eqref{atlasstab} and $\sigma_i$ is the standard deviation of the rank $i$ particle (see \eqref{rbdef}).
\begin{theorem}[\cite{BB2020}]\label{atlasshrugged} There exist positive constants $F_1, F_2, F_3, F_4, t_2$ such that for any $d\in \mathbb{N}$, $\mathbf{z} \in \mathbb{R}^d_+$ and any $t \ge t_2 \max\{\sigma^2a^*\|\mathbf{z}\|_{\infty}, 1 + \sigma^2a^{*2}\log(2d)\}$, \begin{align*} \mathbb{E}(\|Z(t;\mathbf{z}) - Z(t; \mathbf{Z}(\infty))\|_1) \le 2\left(2\|\mathbf{z}\|_1 + F_1\sigma^2a^*d^3\right)e^{-F_2 t/\left[d^2(1+\sigma^2a^{*2}\log(2d))\right]}\\ + \left(4\|\mathbf{z}\|_1 + F_1\sigma^2a^*d^3\right)e^{-F_4t/[2\sigma^4a^{*2}(d+1)^2]} + F_3\sigma^2a^*d^{7/2}e^{-F_4t/[\sigma^4a^{*2}(d+1)^2]}. \end{align*} In particular, the relaxation time satisfies \begin{align*} t_{rel}(\mathbf{z}) &\le \max \left\lbrace F_2^{-1}\left[d^2(1+ \sigma^2a^{*2}\log(2d))\right]\log\left[8\left(2\|\mathbf{z}\|_1 + F_1\sigma^2a^*d^3\right)\right]\right.\\ &\left. \qquad \qquad\qquad + F_4^{-1}\sigma^4a^{*2}(d+1)^2\left[2\log\left[8\left(4\|\mathbf{z}\|_1 + F_1\sigma^2a^*d^3\right)\right] + \log (8F_3\sigma^2a^*d^{7/2})\right],\right.\\ &\left. \qquad \qquad\qquad\qquad t_2 \max\{\sigma^2a^*\|\mathbf{z}\|_{\infty}, 1 + \sigma^2a^{*2}\log(2d)\}\right\rbrace. \end{align*} \end{theorem} \begin{rem}\label{sam} The standard Atlas model \cite{Fern} is a special case of \eqref{rbdef} with $\operatorname{d}elta_0=1$, $\operatorname{d}elta_i = 0$ for all $i \ge 1$ and $\sigma_i=1$ for all $i$. For this model, using \eqref{atlasstab}, for any $k \ge 1$, $$ b_k = \sum_{i=1}^k(\operatorname{d}elta_{i-1} - \overline{\operatorname{d}elta}) = \frac{(d+1-k)}{d+1} $$ and $$ a^* := \sup_{1 \le i \le d}\frac{i(d+1-i)}{b_i} = \sup_{1 \le i \le d}i(d+1) = d(d+1), \ \ \ \sigma=1. $$ Using these in Theorem \ref{atlasshrugged}, one obtains positive constants $G_1, G_2, G_3, G_4, t_3$ such that for any $d\in \mathbb{N}$, $\mathbf{z} \in \mathbb{R}^d_+$ and any $t \ge t_3\{d^2\|\mathbf{z}\|_{\infty}, 1 + d^2\log(2d)\}$, $$ \mathbb{E}(\|Z(t;\mathbf{z}) - Z(t; \mathbf{Z}(\infty))\|_1) \le G_1\left(\|\mathbf{z}\|_1 + d^5\right)e^{-G_2t/d^6\log(2d)} + G_3d^{11/2}e^{-G_4t/d^6}. $$ In particular, the relaxation time for the standard Atlas model is $O(d^6(\log d)^2)$ as $d \rightarrow \infty$. \end{rem} \section{Dimension-free convergence rates for local functionals of RBM} Typically, growing dimension slows down the rate of convergence for the whole system, as is reflected in the bounds obtained in the previous section, but empirically one often observes a much faster convergence rate to equilibrium of \emph{local statistics} of the system. In this Section, we describe results for a class of RBMs for which convergence rates of local statistics do not depend on the underlying dimension of the entire system. More precisely, consider a family of processes $X^{(d)} \sim \mathbb{R}BM(\mu^{(d)}, \Sigma^{(d)}, R^{(d)})$ indexed by the dimension $d\ge 1$ (the superscript $(d)$ is dropped subsequently for notational convenience). Conditions on $(\mu^{(d)}, \Sigma^{(d)}, R^{(d)})$ are sought under which, for any fixed $k \in \mathbb{N}$, the marginal distribution of the first $k$-coordinates approaches the stationary marginal at a rate independent of $d$. The results below are taken from \cite{banerjee2020dimension}, where this phenomenon is named \emph{dimension-free local convergence}. Mathematically, this is challenging as the local evolution is no longer Markovian and the techniques in \cite{BC2016,BB2020} cannot be readily applied. 
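The quantities appearing in Remark \ref{sam} are easy to examine numerically. The sketch below (an illustration only; the helper functions are our own ad hoc constructions) computes $b_k$ from \eqref{atlasstab}, $a^*$ from \eqref{eq:sigupplow} and the contraction coefficient $n(R)$ of \eqref{nd} for the standard Atlas model, confirming $b_k = (d+1-k)/(d+1)$ and $a^* = d(d+1)$, and displaying the growth of $n(R)$ with the dimension.
\begin{verbatim}
import numpy as np

def atlas_gap_P(d):
    """Tridiagonal substochastic matrix P for the gap process of d+1
    rank-based particles (interior entries 1/2, leakage at both ends)."""
    P = np.zeros((d, d))
    for i in range(d):
        if i > 0:
            P[i, i - 1] = 0.5
        if i + 1 < d:
            P[i, i + 1] = 0.5
    return P

def contraction_coefficient(P):
    """n(R) = inf{n >= 1 : ||P^n 1||_inf <= 1/2} for R = I - P^T."""
    v = np.ones(P.shape[0])
    n = 0
    while v.max() > 0.5:
        v = P @ v
        n += 1
    return n

if __name__ == "__main__":
    d = 50
    delta = np.zeros(d + 1); delta[0] = 1.0       # standard Atlas drifts
    b = np.cumsum(delta[:-1] - delta.mean())      # b_k of (atlasstab), k = 1,...,d
    k = np.arange(1, d + 1)
    print("b_k == (d+1-k)/(d+1):", np.allclose(b, (d + 1 - k) / (d + 1)))
    print("a* =", np.max(k * (d + 1 - k) / b), " d(d+1) =", d * (d + 1))
    # n(R) grows roughly like d^2 for this tridiagonal reflection structure.
    for dd in (10, 20, 40):
        print("d =", dd, " n(R) =", contraction_coefficient(atlas_gap_P(dd)))
\end{verbatim}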
A crucial observation is that certain \emph{weighted $L^1$-distances} (see $\|\cdot\|_{1,\beta}$ defined in Section \ref{wnormdef}) between synchronously coupled RBMs show dimension-free contraction rates. The evolution of such weighted distances are tracked in time for synchronously coupled RBMs $X(\cdot; \mathbf{0})$ and $X(\cdot;\mathbf{x})$ for $\mathbf{x} \in \mathbb{R}^d_+$. It is shown in \cite{banerjee2020dimension} that for this distance to decrease by a dimension-free factor of its original value, only a subset of co-ordinates of $X(\cdot;\mathbf{x})$, whose cardinality depends on the value of the original distance (and not directly on the dimension $d$), need to hit zero. This is in contrast with the unweighted $L^1$-distance considered in \cite{BC2016,BB2020} where all the coordinates need to hit zero to achieve such a contraction, thereby slowing down the convergence rate. Consequently, by tracking the hitting times to zero of a time dependent number of co-ordinates, one achieves dimension-free convergence rates in this weighted $L^1$-distance as stated in Theorem \ref{thm:main_fromx}. This, in turn, gives dimension-free local convergence as is made precise in \eqref{locstat}. In Section \ref{sec:examples}, an application of Theorem \ref{thm:main_fromx} is presented for the gap process of a rank-based interacting particle system with asymmetric collisions to obtain explicit rates exhibiting dimension-free local convergence. \subsection{A weighted $L^1$-distance governing dimension-free local convergence}\label{wnormdef} The investigation of dimension-free convergence relies on the analysis of the weighted $L^1$-distance $\|X(\cdot;\mathbf{x}) - X(\cdot;\mathbf{X}(\infty))\|_{1,\beta} := \sum_{i=1}^d \beta^i |X_i(\cdot; \mathbf{x}) - X_i(\cdot; \mathbf{X}(\infty))|$ in time, for appropriate choices of $\beta \in (0,1)$. Towards this end, the following functionals play a key role: \begin{align} u_{\beta}(\mathbf{x}, t) &= \bnorm{R^{-1}\left(X(t; \mathbf{x}) - X(t; \mathbf{0})\right)} := \sum_{i=1}^d \beta^i \left| \left[R^{-1}\left(X(t; \mathbf{x}) - X(t; \mathbf{0})\right)\right]_i \right|,\label{eqn:weighted_norm}\\ u_{\pi,\beta}(t) &= u_{\beta}(\mathbf{X}(\infty), t), \quad \quad t \ge 0. \label{eqn:u_pi} \end{align} In the following, when $\beta$ is clear from context, we will suppress dependence on $\beta$ and write $u$ for $u_{\beta}$ and $u_{\pi}$ for $u_{\pi,\beta}$. The above functionals are convenient because the vector $R^{-1}\left(X(t; \mathbf{x}) - X(t; \mathbf{0})\right)$ is co-ordinate wise non-negative and non-increasing in time. Moreover, $R^{-1}\left(X(t; \mathbf{x}) - X(t; \mathbf{0})\right) \ge X(t; \mathbf{x}) - X(t; \mathbf{0}) \ge 0$ for all $t \ge 0$. This fact and the triangle inequality can be used to show for any $\mathbf{x} \in \mathbb{R}^d_+, t \ge 0$, \begin{align*} \bnorm{\left(X(t; \mathbf{x}) - X(t; \mathbf{X}(\infty))\right)} \le u(\mathbf{x}, t) + u_\pi(t), \end{align*} where the bound on the right hand side is non-increasing in time. Due to the monotonicity of the bound, it suffices to find `events' along the trajectory of the coupled processes that lead to a reduction in this bound by a dimension-independent factor. 
Using this idea, conditions are obtained under which there exist a $d$-independent $\beta \in (0,1)$ and a function $f: \mathbb{R}_+ \mapsto \mathbb{R}_+$ not depending on the dimension $d$ such that $f(t) \to 0$ as $t \to \infty$ and, for any $\mathbf{x}$ in an appropriate subset $\mathcal{S}$ of $\mathbb{R}_+^d$, \begin{equation}\label{eqn:dimension_free} \Expect{\bnorm{\left(X(t; \mathbf{x}) - X(t; \mathbf{X}(\infty))\right)}} \le \Expect{u(\mathbf{x}, t) + u_\pi(t)} \le C f(t), \quad \quad t \ge t_0, \end{equation} where $C, t_0 \in (0, \infty)$ are constants not depending on $d$ (but which can depend on $\mathbf{x}$). This, in particular, gives dimension-free local convergence in the following sense: For any $k \in \{1,\dots,d\}$, consider any function $\phi: \mathbb{R}^k_+ \mapsto [0,\infty)$ which is Lipschitz, i.e., there exists $L_{\phi} >0$ such that $$ |\phi(\mathbf{u}) - \phi(\mathbf{v})| \le L_{\phi} \|\mathbf{u}-\mathbf{v}\|_1, \quad \quad \mathbf{u}, \mathbf{v} \in \mathbb{R}^k_+. $$ Recall that $\mathcal{L}(Z)$ denotes the law of a random variable $Z$ and $W_1$ denotes the associated $L^1$-Wasserstein distance. Then \eqref{eqn:dimension_free} implies, for $\mathbf{x} \in \mathcal{S}$, \begin{align}\label{locstat} W_1\left(\mathcal{L}(\phi(X|_k(t; \mathbf{x}))), \mathcal{L}(\phi(\mathbf{X}|_k(\infty)))\right) &\le \Expect{|\phi(X|_k(t; \mathbf{x})) - \phi(X|_k(t; \mathbf{X}(\infty)))|} \nonumber \\ &\le C\beta^{-k} L_{\phi} f(t), \quad \quad t \ge t_0, \end{align} where for $\mathbf{x} = (x_1, \ldots , x_d)' \in \mathbb{R}_+^d$, $\mathbf{x}|_k = (x_1, \ldots, x_k)$. \subsection{Parameters and Assumptions} We now define the parameters that govern dimension-free local convergence; they are, in turn, defined in terms of the original model parameters $(\mu,\Sigma,R)$ of the associated RBM. We abbreviate $\sigma_i = \sqrt{\Sigma_{ii}}, i=1,\ldots, d$, and write $R^{(k)}$ for the principal $k \times k$ submatrix of $R$ and $\mu^{(k)}$ for the restriction of $\mu$ to the first $k$ co-ordinates. Define for $1 \le k \le d$, \begin{align}\label{pardefcon} \mathbf{b}^{(k)} &:= -(R^{(k)})^{-1}\mu^{(k)}, \quad \quad \mathbf{b} = \mathbf{b}^{(d)}, \notag\\ b_{\mathrm{low}}^{(k)} &:= \min_{1 \le i \le k} \mathbf{b}^{(k)}_i, \quad \quad a^{(k)} := \max_{1\le i \le k} \frac{1}{\mathbf{b}^{(k)}_i} \sum_{j=1}^k \bigl((R^{(k)})^{-1}\bigr)_{ij}\sigma_j. \end{align} To get a sense of why these parameters are crucial, recall that our underlying strategy is to obtain contraction rates for $u(\mathbf{x},\cdot)$ defined in \eqref{eqn:weighted_norm} by estimating the number of times a subset of the co-ordinates of $X(\cdot;\mathbf{x})$, say $\{X_1(\cdot; \mathbf{x}),\dots,X_k(\cdot; \mathbf{x})\}$, $k \le d$, hit zero. However, this subset does not evolve in a Markovian way. Thus, we use monotonicity properties of RBMs to couple this subset with an $\mathbb{R}^k_+$-valued reflected Brownian motion $\bar{X}(\cdot; \mathbf{x}|_k)$, started from $\mathbf{x}|_k$ and defined in terms of $\mu|_k, \Sigma|_k, R|_k$ and (a possible restriction of) the same Brownian motion driving $X(\cdot;\mathbf{x})$, such that $X_i(t; \mathbf{x}) \le \bar{X}_i(t; \mathbf{x}|_k)$ for all $1 \le i \le k$. The analysis in \cite{BB2020} shows that the parameters defined in \eqref{pardefcon} with $k=d$ can be used to precisely estimate the minimum number of times all co-ordinates of $X(\cdot;\mathbf{x})$ hit zero by time $t$ as $t$ grows.
Applying this approach, for any $1 \le k \le d$, the parameters \eqref{pardefcon} can be used to quantify analogous hitting times for the process $\bar{X}(\cdot; \mathbf{x}|_k)$ which, by the above coupling, gives control over corresponding hitting times of $\{X_1(\cdot; \mathbf{x}),\dots,X_k(\cdot; \mathbf{x})\}$. Given below is a set of assumptions on the model parameters $(\mu,\Sigma,R)$ which guarantee dimension-free local convergence.\\ \textbf{Assumptions DF:} There exist $d$-independent constants $\underline{\sigma}, \overline{\sigma}, b_0 > 0$, $r^* \ge 0$, $M, C \ge 1$, $k_0 \in \{2, \ldots, d\}$ and $\alpha\in (0, 1)$ such that for all $d \ge k_0$, \begin{itemize} \item[I.] $(R^{-1})_{ij} \le C \alpha^{j-i}$ for $1\le i \le j \le d$, \item[II.] $(R^{-1})_{ij} \le M$ for $1 \le i, j \le d$, \item[III.] $\underline{b}^{(k)} \ge b_0 k^{-r^*}$ for $k = k_0, \ldots, d$, \item[IV.] $\sigma_i \in [\underline{\sigma}, \overline{\sigma}]$ for $1 \le i \le d$. \end{itemize} We explain why Assumptions DF are `natural' in obtaining dimension-free local convergence. Since $P$ is transient and substochastic, it can be associated with a killed Markov chain on $\{0\} \cup \{1,\dots,d\}$ with transition matrix $P$ on $\{1,\dots,d\}$ and killed upon hitting $0$ (i.e. the probability of going from state $k \in \{1,\dots,d\}$ to $0$ is $1 - \sum_{l=1}^d P_{kl}$ and $P_{00} =1$). Moreover, since $P$ is transient and $R = I - P^T$, we have $R^{-1} = \sum_{n=0}^\infty (P^T)^n$. This representation shows that $(R^{-1})_{ij}$ is the expected number of visits to site $i$ starting from $j$ of this killed Markov chain. For fixed $\mathbf{x} \in \mathbb{R}^d_+$ and $k \ll d$, consider a local statistic of the form $\phi(X|_k(t; \mathbf{x}))$ as in \eqref{locstat}. For this statistic to stabilize faster than the whole system, we expect the influence of the far away co-ordinates $X_j(\cdot; \mathbf{x})$, $j \gg k$, to diminish in an appropriate sense as $j$ increases. This influence is primarily manifested through the oblique reflection arising out of the $R$ matrix via the above representation of $R^{-1}$. Condition I of Assumptions DF quantifies this intuition by requiring that the expected number of visits to state $i$ starting from state $j>i$ of the associated killed Markov chain decreases geometrically with $j-i$. This is the case, for example, when this Markov chain started from $j>i$ has a uniform `drift' away from $i$ towards the cemetery state. See Section \ref{sec:examples}. In more general cases, one can employ Lyapunov function type arguments \cite{meyn2012markov} for the underlying Markov chain to check I. Condition II implies that the killed Markov chain starting from state $j$ spends at most $M$ expected time at any site $i \in \{1,\dots,d\}$ before it is absorbed in the cemetery state $0$. This expected time, as calculations show, is intimately tied to decay rates of $\bnorm{\left(X(\cdot; \mathbf{x}) - X(\cdot; \mathbf{X}(\infty))\right)}$. As noted in \cite{harrison1987brownian,BC2016,BB2020}, the `renormalized drift' vector $\mathbf{b}$ characterizes positive recurrence of the whole system. Assumption III prescribes a power-law-type co-ordinate-wise lower bound on the renormalized drift vector $\mathbf{b}^{(k)}$ of the projected system $X|_k(\cdot; \mathbf{x})$ as $k$ grows. In particular, if $\underline{b}^{(k)}$ is uniformly lower bounded by $b_0$, one can take $r^*=0$.
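The killed-Markov-chain interpretation of $R^{-1}$ behind Conditions I and II can be checked numerically. The following sketch is an illustration added for this review (it assumes \texttt{numpy}; the chain and the constants $C$, $M$, $\alpha$ fed to it are placeholders): it compares $R^{-1}$ with a truncated Neumann series $\sum_n (P^T)^n$ and tests the bounds in I and II.
\begin{verbatim}
import numpy as np

def check_assumptions_I_II(P, alpha, C, M):
    """Check (R^{-1})_{ij} <= C alpha^{j-i} for i <= j and (R^{-1})_{ij} <= M
    for all i, j, where R = I - P^T and (R^{-1})_{ij} counts expected visits
    to i starting from j for the chain killed outside {1,...,d}."""
    d = P.shape[0]
    Rinv = np.linalg.inv(np.eye(d) - P.T)
    ok_I = all(Rinv[i, j] <= C * alpha ** (j - i) + 1e-10
               for i in range(d) for j in range(i, d))
    ok_II = np.all(Rinv <= M + 1e-10)
    return ok_I, ok_II, Rinv

# Placeholder example: nearest-neighbour chain with an upward drift, killed at
# both ends.
d, p = 20, 0.7
P = np.zeros((d, d))
for i in range(d - 1):
    P[i, i + 1], P[i + 1, i] = p, 1 - p
q = 1 - p

ok_I, ok_II, Rinv = check_assumptions_I_II(P, alpha=q / p,
                                            C=1 / (p - q), M=1 / (p - q))
print("Assumption I holds:", ok_I, "  Assumption II holds:", ok_II)
# Sanity check of the representation R^{-1} = sum_{n>=0} (P^T)^n (truncated).
S = sum(np.linalg.matrix_power(P.T, n) for n in range(200))
print("max |R^{-1} - truncated Neumann series| =", np.abs(Rinv - S).max())
\end{verbatim}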
Finally, IV above is a quantitative `uniform ellipticity' condition on the co-ordinates of the driving noise $\mathbf{D} B(\cdot)$.
\subsection{Main result}\label{sec:results} The following result gives explicit bounds on the decay of the expectation of the weighted distance $\|X(\cdot;\mathbf{x}) - X(\cdot;\mathbf{X}(\infty))\|_{1,\sqrt{\alpha}}$ ($\alpha$ defined in I of Assumptions DF) with time, for RBMs satisfying Assumptions DF. We first define some constants that will appear in Theorem \ref{thm:main_fromx}. They are needed to bound moments of certain weighted norms of the stationary random variable $\mathbf{X}(\infty)$. Suppose Assumptions DF hold, with $k_0 \in \{2, \ldots, d\}$ and $\alpha \in (0, 1)$ defined therein. Set \begin{equation}\label{l1def} L_1 := k_0^{r^* + 1} + \sum_{i = k_0}^d i^{3 + r^*} \alpha^{i/8}. \end{equation} Also, for $B \in (0,\infty)$, define the set \begin{equation}\label{startset} \mathcal{S}(b,B) := \left\lbrace x \in \mathbb{R}_+^d : \sup_{1 \le i \le d} \underline{b}^{(i)} \supnorm{x|_i} \le B\right\rbrace. \end{equation} \begin{theorem}[\cite{banerjee2020dimension}]\label{thm:main_fromx} Let $X$ be a $(\mu,\Sigma, R)$-RBM. Suppose that Assumptions DF hold with $\alpha \in (0, 1)$ defined therein. Fix any $B \in (0,\infty)$. Then there exist constants $C_0, C_0', C_1 > 0$ not depending on $d, r^*$ or $B$ such that with $t_0' = t_0'(r^*) = C_0'\left(1+r^*\right)^{8+4r^*}$ and $L_1, \mathcal{S}(b,B)$ as defined in \eqref{l1def} and \eqref{startset}, we have for any $\mathbf{x} \in \mathcal{S}(b,B)$ and any $d > t_0'^{1/(4+2r^*)}$, \begin{multline}\label{eqn:main_fromx_result1} \Expect{\|X(t; \mathbf{x}) - X(t; \mathbf{X}(\infty))\|_{1,\sqrt{\alpha}}}\\ \le \begin{cases} C_1 \left(L_1\sqrt{1+t^{1/(4+2r^*)} } + \supnorm{\mathbf{x}}\exp\left\lbrace B/\underline{\sigma}^2\right\rbrace\right) \> \exp\{-C_0 t^{1/(4+2r^*)} \}, & t_0' \le t < d^{4 + 2r^*},\\\\ C_1 \left(L_1\sqrt{1+t^{1/(4+2r^*)} } + \supnorm{\mathbf{x}}\exp\left\lbrace B/\underline{\sigma}^2\right\rbrace\right)\> \exp\left\lbrace -C_0\frac{t}{d^{3 + 2r^*}}\right\rbrace, & t \ge d^{4 + 2r^*}. \end{cases} \end{multline} \end{theorem} Theorem \ref{thm:main_fromx} directly implies dimension-free bounds on $t \mapsto \|X(t;\mathbf{x}) - X(t; \mathbf{X}(\infty))\|_{1,\sqrt{\alpha}}$ in the sense of \eqref{eqn:dimension_free} which, in turn, produce dimension-free local convergence rates as given by \eqref{locstat}. The two-part bound in the theorem is designed to `continuously interpolate' between the \emph{dimension-free stretched exponential bounds} and the \emph{dimension-dependent exponential bounds} obtained in \cite{BB2020}.
\subsection{Application to the Asymmetric Atlas model}\label{sec:examples} Consider the interacting particle system represented by the following SDE: \begin{equation}\label{ascol} Y_{(k)}(t) = Y_{(k)}(0) + \mathbf{1}_{k=0}\, t + B^*_k(t) + p L^*_{k}(t) - q L^*_{k+1}(t), \quad t \ge 0, \end{equation} for $0\le k \le d$, $p \in (0,1), q= 1-p$. Here, $L^*_{0}(\cdot) \equiv L^*_{d+1}(\cdot) \equiv 0$, and for $0 \le k \le d-1$, $L^*_{k+1}(\cdot)$ is a continuous, non-decreasing, adapted process that denotes the collision local time between the $k$-th and $(k+1)$-th co-ordinate processes of $Y_{(\cdot)}$, namely $L^*_{k+1}(0)=0$ and $L^*_{k+1}(\cdot)$ can increase only when $Y_{(k)} = Y_{(k+1)}$. $B^*_k(\cdot)$, $0\le k \le d$, are mutually independent standard one-dimensional Brownian motions.
Each of the $d+1$ ranked particles with trajectories given by $(Y_{(0)}(\cdot),\dots,Y_{(d)}(\cdot))$ evolves as an independent Brownian motion (with the particle $0$ having unit positive drift) when it is away from its neighboring particles, and interacts with its neighbors through possibly asymmetric collisions. The symmetric Atlas model, namely the case $p=1/2$, was introduced in \cite{Fern} as a mathematical model for stochastic portfolio theory. The asymmetric Atlas model was introduced in \cite{karatzas_skewatlas}, where it was shown that it arises as a scaling limit of numerous well-known interacting particle systems involving asymmetrically colliding random walks \cite[Section 3]{karatzas_skewatlas}. Since then, this model has been extensively analyzed: see \cite{karatzas_skewatlas,IPS,ichiba2013strong,AS} and references therein. The gaps between the particles, defined by $Z_i(\cdot) = Y_{(i)}(\cdot) - Y_{(i-1)}(\cdot), 1 \le i \le d$, evolve as a $(\mu,\Sigma, R)$-RBM with $\Sigma$ given by $\Sigma_{ii} = 2$ for $i = 1, \ldots, d$, $\Sigma_{ij}=-1$ if $|i-j|=1$, $\Sigma_{ij} = 0$ if $|i-j| >1$, $\mu$ given by $\mu_1 = -1, \mu_j = 0$ for $j = 2, \ldots, d$, and $R = I-P^T$, where \begin{equation}\label{eqn:skew_atlas1} P_{ij} = \begin{cases} p \quad \quad j = i+1,\\ 1-p \quad \quad j = i-1, \\ 0 \quad \quad \text{otherwise}. \end{cases} \end{equation} Clearly $R$ is in the Harrison-Reiman class. The paper \cite{banerjee2020dimension} establishes dimension-free local convergence for the gap process of the Asymmetric Atlas model when $p \in (1/2, 1)$. Recall that the reflection matrix $R= I-P^T$ is associated with a killed Markov chain. For the Asymmetric Atlas model, this Markov chain has a more natural description as a random walk on $\{0,1,\dots,d+1\}$ which increases by one at each step with probability $p$ and decreases by one with probability $1-p$, and is killed when it hits either $0$ or $d+1$. Then for $1 \le i,j \le d$, $(R^{-1})_{ij}$ is the expected number of visits to $i$ starting from $j$ by this random walk before it hits $0$ or $d+1$. Since $p > 1-p$, the random walk has a drift towards $d+1$, which suggests that I and II of Assumptions DF hold. This is confirmed by direct computation, which gives for $q = 1-p$, \begin{equation}\label{eqn:skew_atlas2} (R^{-1})_{ij} = \begin{cases} \frac{\left(q/p \right)^{j-i}}{p-q}\frac{\left(1 - (q/p)^i \right)\left(1-(q/p)^{d+1-j} \right)}{1- (q/p)^{d+1}} \le \frac{\left(q/p \right)^{j-i}}{p-q} & 1 \le i \le j \le d,\\ \frac{\left(p/q \right)^{i-j}}{p-q}\frac{\left((p/q)^j - 1 \right)\left((p/q)^{d+1-i} - 1 \right)}{(p/q)^{d+1} - 1} \le \frac{1}{p-q} & 1 \le j < i \le d. \end{cases} \end{equation} Now I, II and IV of Assumptions DF hold with $M = C = \frac{1}{p-q}$, $\alpha = \frac{q}{p}$ and $\underline{\sigma}=\overline{\sigma}=\sqrt{2}$. Furthermore, the restriction $P|_{k}$ is defined exactly as in \eqref{eqn:skew_atlas1} with $k$ in place of $d$. Thus $(R^{(k)})^{-1}$ is given by \eqref{eqn:skew_atlas2} with $k$ in place of $d$, and $\mathbf{b}^{(k)} = -(R^{(k)})^{-1} \mu^{(k)}$ is the first column of $(R^{(k)})^{-1}$. This entails, \begin{align}\label{eqn:skew_atlas3} \mathbf{b}^{(k)}_i &= \frac{\left(p/q \right)^{i-1}}{p-q}\frac{\left((p/q) - 1 \right)\left((p/q)^{k+1-i} - 1 \right)}{(p/q)^{k+1} - 1} \ge \frac{1}{q}\left(\frac{p}{q}\right)^{k-1}\frac{\left((p/q) - 1\right)}{(p/q)^{k+1}} \notag\\ &= \frac{p-q}{p^2} =: b_0 > 0, \quad \quad 1 \le i \le k, \> 1 \le k \le d.
\end{align} Thus $\underline{b}^{(k)} \ge b_0$ for all $1 \le k \le d$, uniformly in $d$. This shows that III of Assumptions DF holds with $b_0$ specified by \eqref{eqn:skew_atlas3} and $r^* = 0$. Moreover, it follows from the first equality in \eqref{eqn:skew_atlas3} that $\mathbf{b}^{(k)}_i \le p/(p-q)$ for all $1 \le k \le d$ and $1 \le i \le k$. Therefore, recalling the definition of $\mathcal{S}(b,\cdot)$ from \eqref{startset}, for any $\mathbf{z} \in \mathbb{R}^d_+$, $$ \mathbf{z} \in \mathcal{S}(b, p\supnorm{\mathbf{z}}/(2p-1)). $$ The above observations result in the next theorem, which follows directly from Theorem \ref{thm:main_fromx}. These bounds imply dimension-free convergence rates in the sense of \eqref{eqn:dimension_free} and \eqref{locstat}. \begin{theorem}[\cite{banerjee2020dimension}]\label{thm:skew_atlas} Suppose $Z$ is the RBM representing the gap process of the Asymmetric Atlas model with $p \in (1/2, 1)$. Then there exist constants $\bar{C}, \bar{C}_0, t_0' > 0$ depending on $p$ but not on $d$ such that for $d > t_0'$, \begin{multline} \Expect{\|Z(t;\mathbf{z}) - Z(t;\mathbf{Z}(\infty))\|_{1,\sqrt{\frac{1-p}{p}}}}\\ \le \begin{cases} \bar{C}\left(\sqrt{1+t^{1/4}} + \supnorm{\mathbf{z}} e^{p\supnorm{\mathbf{z}}/(4p-2)}\right)\> e^{-\bar{C}_0 t^{1/4}}, & t_0' \le t < d^4,\\\\ \bar{C}\left(\sqrt{1+t^{1/4}} + \supnorm{\mathbf{z}} e^{p\supnorm{\mathbf{z}}/(4p-2)}\right)\> e^{-\bar{C}_0 t/d^3}, & t \ge d^4. \end{cases} \end{multline} \end{theorem} We note that the law of $\mathbf{Z}(\infty)$ has an explicit product form in this case. See \cite[Proposition 2.1]{AS}.
\subsection{Polynomial local convergence rates for perturbations from stationarity of the Atlas model}\label{dfsym} A natural question is whether one can expect dimension-free local convergence when Assumptions DF do not hold. This is currently an open problem. For one example where one does not expect such dimension-free local convergence to hold, consider the Asymmetric Atlas model from Section \ref{sec:examples} when $p<1/2$. Observe from \eqref{ascol} that a larger change of momentum happens to the lower particle (smaller $k$) during each collision. Hence, effects of unstable gaps between far-away particles can propagate in a more unabated fashion to the lower particles. Moreover, by analyzing the associated killed Markov chain, one concludes that it takes roughly linear time in $d$ for the effect of the $d$-th gap to propagate to the first gap. This heuristic reasoning suggests that one cannot expect any reasonable form of dimension-free local convergence in this case. Now consider the `critical' case: the (symmetric) Atlas model, namely, the case $p=1/2$ in the model of Section \ref{sec:examples}. Note that here, $\mathbf{b} = -R^{-1} \mu = \{(R^{-1})_{i1} \}_{i=1}^d > 0$ and $\Sigma_{ii} = 2$ for all $i$. Therefore, there exists a unique stationary distribution. In fact, if $\mathbf{Z}(\infty)$ denotes the corresponding stationarily distributed gap vector, it holds that \cite{harrisonwilliams_expo,ichiba2011hybrid} \begin{equation}\label{eqn:atlas_stationary} \mathbf{Z}(\infty) \sim \bigotimes_{i = 1}^d \text{Exp}\left(2\left(1 - \frac{i}{d+1}\right)\right). \end{equation} In the symmetric Atlas model, the associated killed Markov chain behaves as a simple symmetric random walk away from the cemetery points, and lacks the strong drift towards the cemetery points as formulated in I of Assumptions DF. Indeed, $R^{-1}$ is given as (see e.g.
\cite[Proof of Theorem 4]{BB2020}, or take $p \to 1/2$ in \eqref{eqn:skew_atlas2}): \begin{equation}\label{eqn:atlas2} (R^{-1})_{ij} = \begin{cases} 2i\left(1 - \frac{j}{d+1}\right)\quad \quad 1 \le i \le j \le d,\\ 2j\left(1 - \frac{i}{d+1}\right) \quad \quad 1 \le j < i \le d. \end{cases} \end{equation} The above representation shows that $R^{-1}$ violates I, II of Assumptions DF (consider, for example, $i = j = \lfloor d/2 \rfloor$). Hence, the techniques sketched above for obtaining dimension-free convergence fail to apply in this case. Instead, a different approach is taken in \cite{banerjee2020dimension} which involves analyzing the long term behavior of pathwise derivatives of the RBM with respect to its initial condition. Using this analysis, \emph{polynomial convergence rates} to stationarity in the $L^1$-Wasserstein distance are obtained when the initial distribution of the gaps between particles is in an appropriate perturbation class of the stationary measure (see \cite[Section 3.1, Definition 1]{banerjee2020dimension}). We give a representative result obtained as a special case of the more general theorem \cite[Theorem 4]{banerjee2020dimension}. \begin{theorem}[\cite{banerjee2020dimension}]\label{cor:expo_perturbation} Consider $\mathbf{Y} = (Y_1, Y_2, \ldots)^T$ where $\{Y_i\}_{i \ge 1}$ are independent random variables with $Y_i \sim \operatorname{Exp}(i^{1+\beta})$ (exponential with mean $i^{-(1+\beta)}$), for some $\beta > 0$. Then there exist positive constants $C_0, C_1, C_2, t_0''$ depending only on $\beta$ such that \begin{equation*} \Expect{\|Z(t; \mathbf{Z}(\infty) + \mathbf{Y}\vert_d) - Z(t;\mathbf{Z}(\infty)) \|_1} \ \le \ \begin{cases} C_1t^{-\frac{\beta}{1 + \beta} \frac{3}{32}}, & t_0' \le t < d^{16/3},\\\\ C_2d\exp \left(-C_0 \frac{t}{d^6\log (2d)} \right), & t\ge t_0'' \ d^4\log (2d), \end{cases} \end{equation*} where $t_0' \in (0, \infty)$ does not depend on $d$ or $\beta$. \end{theorem} The analysis of the derivative process is based on the study of a random walk in a random environment generated by the times and locations where the RBM hits faces of $\mathbb{R}^d_+$. We mention here that \cite{blanchent2020efficient} has recently used the derivative process to study convergence rates for RBMs satisfying certain strong uniformity conditions in dimension (which do not hold for the symmetric Atlas model). Although currently there are no available lower bounds, it seems plausible (based on informal calculations using the associated killed Markov chain) that the true $L^1$-Wasserstein distance indeed decays polynomially for $t \le t(d)$, for some $t(d) \rightarrow \infty$ as $d \rightarrow \infty$. The polynomial rates of convergence to stationarity obtained in \cite{banerjee2020rates} for the Potlatch process on $\mathbb{Z}^k$, which (for $k=1$) can be loosely thought of as a `Poissonian version' of the gap process of the infinite Atlas model constructed in \cite{pitman_pal}, lend further evidence to this belief. Moreover, dimension-free local convergence is not expected to hold for initial distributions `too far away' from $\mathbf{Z}(\infty)$.
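As a quick numerical illustration of the contrast between the two regimes discussed above (this sketch is an addition for this review and not part of \cite{banerjee2020dimension}; it assumes \texttt{numpy}), one can verify the closed form \eqref{eqn:atlas2} against direct matrix inversion and observe that the central entries of $R^{-1}$ grow linearly in $d$ when $p=1/2$, so no $d$-independent $M$ in II exists, while for $p>1/2$ the entries decay geometrically off the diagonal as in \eqref{eqn:skew_atlas2}.
\begin{verbatim}
import numpy as np

def Rinv_numeric(d, p):
    """R^{-1} for the (a)symmetric Atlas gap process: R = I - P^T with
    P_{i,i+1} = p, P_{i,i-1} = 1 - p (1-based indices in the text)."""
    P = np.zeros((d, d))
    for i in range(d - 1):
        P[i, i + 1], P[i + 1, i] = p, 1 - p
    return np.linalg.inv(np.eye(d) - P.T)

def Rinv_formula_symmetric(d):
    """Closed form (eqn:atlas2): 2 * min(i,j) * (1 - max(i,j)/(d+1))."""
    i = np.arange(1, d + 1)[:, None]
    j = np.arange(1, d + 1)[None, :]
    return 2.0 * np.minimum(i, j) * (1.0 - np.maximum(i, j) / (d + 1))

d = 30
# Symmetric case p = 1/2: closed form matches numerical inversion ...
err = np.abs(Rinv_numeric(d, 0.5) - Rinv_formula_symmetric(d)).max()
print("max error vs (eqn:atlas2):", err)
# ... but the central entry grows linearly in d, violating Condition II.
for dd in (20, 40, 80):
    m = dd // 2
    print(f"d={dd}:  central entry of R^-1 =", Rinv_numeric(dd, 0.5)[m - 1, m - 1])
# Asymmetric case p > 1/2: entries decay geometrically off the diagonal.
print("row 1 of R^{-1}, p=0.7:", np.round(Rinv_numeric(d, 0.7)[0, :6], 4))
\end{verbatim}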
This belief stems from the analogy with the \emph{infinite} Atlas model (see Section \ref{infatsec}) which has uncountably many product form stationary distributions $\pi_a := \bigotimes_{i=1}^{\infty} \operatorname{Exp}(2 + ia), \ a \ge 0$ \cite{sarantsev2017stationary2}, only one of which, namely $\pi_0$, is a weak limit of the stationary distribution \eqref{eqn:atlas_stationary} of the $d$-dimensional Atlas model (extended to a measure on $\mathbb{R}^{\infty}_+$ in an obvious manner) as $d \rightarrow \infty$. This leads to the heuristic that, for large $d$, the $d$-dimensional gap process with initial distribution `close' to the projection (onto the first $d$ co-ordinates) of $\pi_a$ for some $a>0$ spends a long time (which diverges as $d \rightarrow \infty$) near this projection before converging to \eqref{eqn:atlas_stationary}. From this heuristic, one expects that dimension-free convergence rates for associated statistics can only be obtained if the initial gap distribution is `close' to the stationary measure \eqref{eqn:atlas_stationary} in a certain sense. Evidence for this heuristic is provided in the few available results on `uniform in dimension' convergence rates for rank-based diffusions \cite{jourdain2008propagation,jourdain2013propagation}. In both these papers, under strong convexity assumptions on the drifts of the particles, dimension-free exponential ergodicity was proven for the joint density of the particle system when the initial distribution is close to the stationary distribution as quantified by the Dirichlet energy functional (see \cite[Theorem 2.12]{jourdain2008propagation} and \cite[Corollary 3.8]{jourdain2013propagation}). The symmetric Atlas model lacks such convexity in the drift and hence the dimension-free Poincar\'e inequality for the stationary density, which is crucial to the methods of \cite{jourdain2008propagation,jourdain2013propagation}, does not apply.
\section{Domains of Attraction of the infinite Atlas model}\label{infatsec} In the remaining two sections of this review we will discuss infinite-dimensional reflected Brownian motions. The processes we consider arise from the study of the infinite Atlas model and its various rank-based diffusion model variants. This section will consider the standard Atlas model whereas in the next section we consider a more general family of rank-based diffusions. The Atlas model describes the evolution of a countable collection of particles on the real line moving according to mutually independent Brownian motions and such that, at any point of time, the lowest particle has a unit upward drift. Letting $\{Y_i(\cdot) : i \in \mathbb{N}_0\}$ denote the state processes of the particles in the system, one can formally describe the evolution by the following system of SDEs: \begin{equation}\label{Atlasdef} dY_i(s) = \mathbf{1}(Y_i(s) = Y_{(0)}(s))ds + d W_i(s), \ s \ge 0, i \in \mathbb{N}_0, \end{equation} where $Y_{(0)}(t)$ is the state of the lowest particle at time $t$. Define \begin{equation}\label{eq:cludef} {\mathcal{U}} := \Big\{\mathbf{y} = (y_0, y_1, \ldots)' \in \mathbb{R}^{\infty}: \sum_{i=0}^{\infty} e^{-\alpha [(y_i)_+]^2} < \infty \mbox{ for all } \alpha >0\Big\}. \end{equation} Following \cite{AS}, a $\mathbf{y} \in \mathbb{R}^{\infty}$ is called {\em rankable} if there is a one-to-one map $\mathbf{p}$ from $\mathbb{N}_0$ onto itself such that $y_{\mathbf{p}(i)} \le y_{\mathbf{p}(j)}$ whenever $i<j$, $i,j \in \mathbb{N}_0$.
If $\mathbf{y}$ is rankable, we will write $y_{(i)} := y_{\mathbf{p}(i)}, \, i \in \mathbb{N}_0$. It is easily seen that any $\mathbf{y} \in {\mathcal{U}}$ is locally finite and hence rankable. It was shown in \cite[Theorem 3.2 and Lemma 3.4]{AS} that, if the distribution of the initial vector $\mathbf{Y}(0)$ of the particles in the infinite Atlas model is supported within the set ${\mathcal{U}}$, then the system \eqref{Atlasdef} has a weak solution that is unique in law and the solution process satisfies $\mathbf{Y}(t) \in {\mathcal{U}}$ a.s. for all $t\ge 0$. Consequently, it can be shown that, almost surely, the ranking $(Y_0(t),Y_1(t),\dots) \mapsto (Y_{(0)}(t),Y_{(1)}(t),\dots)$ is well defined (where $Y_{(i)}(t)$ is the state of the $(i+1)$-th particle from below) for all $t \ge 0$. We will assume throughout that $0 \le Y_{0}(0) \le Y_{1}(0) \le Y_{2}(0) \le \dots$. By calculations based on Tanaka's formula (see \cite[Lemma 3.5]{AS}), it follows that the processes defined by \begin{equation}\label{eq:bistar} B^*_i(t) := \sum_{j=0}^{\infty}\int_0^t\mathbf{1}(Y_j(s) = Y_{(i)}(s)) dW_j(s), \ i \in \mathbb{N}_0, t \ge 0, \end{equation} are independent standard Brownian motions which can be used to write down the following stochastic differential equation for $\{Y_{(i)}(\cdot) : i \in \mathbb{N}_0\}$: \begin{equation}\label{order_SDE} dY_{(i)}(t) = \mathbf{1}(i=0) dt + dB^*_i(t) - \frac{1}{2}dL^*_{i+1}(t) + \frac{1}{2} dL^*_{i}(t), \ t \ge 0, i \in \mathbb{N}_0. \end{equation} Here, $L^*_{0}(\cdot) \equiv 0$ and for $i \in \mathbb{N}$, $L^*_i(\cdot)$ denotes the local time of collision between the $(i-1)$-th and $i$-th particles. The gap process in the infinite Atlas model is the $\mathbb{R}_+^{\infty}$-valued process $\mathbf{Z}(\cdot)=$ $(Z_1(\cdot), Z_2(\cdot), \dots)$ defined by \begin{equation}\label{gapdef} Z_i(\cdot) := Y_{(i)}(\cdot) - Y_{(i-1)}(\cdot), \ i \in \mathbb{N}. \end{equation} We will be primarily interested in the long time behavior of the gap process. It was shown in \cite{pitman_pal} that the distribution $\pi := \otimes_{i=1}^{\infty} \operatorname{Exp}(2)$ is a stationary distribution of the gap process $\mathbf{Z}(\cdot)$. It was also conjectured there that this is the unique stationary distribution of the gap process. Surprisingly, this was shown to be false in \cite{sarantsev2017stationary2}, which gave an uncountable collection of product form stationary distributions defined as \begin{equation}\label{altdef} \pi_a := \bigotimes_{i=1}^{\infty} \operatorname{Exp}(2 + ia), \ a \ge 0. \end{equation} The `maximal' stationary distribution $\pi_0$ (in the sense of stochastic domination) is somewhat special in this collection, as described below. For this reason, in what follows, we will occasionally use the notation $\pi$ for the measure $\pi_0$. For initial gap distribution $\nu$, we will denote by $\hat \nu_t$ the probability law of $\mathbf{Z}(t)$. Obviously, when $\nu = \pi_a$, $\hat\nu_t = \pi_a$ for all $t\ge 0$. In general, if $\hat \nu_t$ converges weakly to one of the stationary measures $\pi_a$, $a\ge 0$, we say that $\nu$ is in the {\em Domain of Attraction} (DoA) of $\pi_a$. Characterizing the domain of attraction of the above collection of stationary measures is a challenging open problem.
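The finite-dimensional picture behind \eqref{eqn:atlas_stationary} and its relation to $\pi$ can be illustrated by a crude Euler discretization of \eqref{Atlasdef} with finitely many particles. The sketch below is an illustration added for this review (it assumes \texttt{numpy}; the time horizon, step size and burn-in are ad hoc choices, so the output is only indicative): it compares time-averaged gaps with the stationary means $1/(2(1-i/(d+1)))$, which, for fixed $i$, approach $1/2$, the mean of the $\operatorname{Exp}(2)$ marginals of $\pi$, as $d \to \infty$.
\begin{verbatim}
import numpy as np

def simulate_atlas_gaps(num_particles, T, dt, seed=0):
    """Euler scheme for the finite Atlas model (Atlasdef): at each step the
    currently lowest particle gets drift 1, every particle gets an independent
    Gaussian increment; gaps are read off from the sorted configuration."""
    rng = np.random.default_rng(seed)
    Y = np.zeros(num_particles)            # all particles start at 0
    n_steps = int(T / dt)
    gap_sum = np.zeros(num_particles - 1)
    n_rec = 0
    for step in range(n_steps):
        drift = np.zeros(num_particles)
        drift[np.argmin(Y)] = 1.0          # unit drift for the lowest particle
        Y = Y + drift * dt + np.sqrt(dt) * rng.standard_normal(num_particles)
        if step > n_steps // 2:            # discard a burn-in period
            gap_sum += np.diff(np.sort(Y))
            n_rec += 1
    return gap_sum / n_rec

d = 9                                      # d gaps, d + 1 particles
mean_gaps = simulate_atlas_gaps(d + 1, T=200.0, dt=1e-3)
i = np.arange(1, d + 1)
exact = 1.0 / (2.0 * (1.0 - i / (d + 1)))  # means under (eqn:atlas_stationary)
print("simulated mean gaps:", np.round(mean_gaps, 3))
print("stationary means   :", np.round(exact, 3))
\end{verbatim}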
\textcolor{black}{It was shown in \cite[Theorem 4.7]{AS} using comparison techniques with finite-dimensional Atlas models that if the law $\nu$ of the initial gaps $\mathbf{Z}(0)$ stochastically dominates $\pi = \pi_0$ (in a coordinate-wise sense), then $\nu$ is in DoA of $\pi$, i.e. the law of $\mathbf{Z}(t)$ converges weakly to $\pi$ as $t \rightarrow \infty$.} Recently, \cite{DJO} established a significantly larger domain of attraction for $\pi$ using relative entropy and Dirichlet form techniques. They showed that a $\nu$ is in DoA of $\pi$ if the random vector $\mathbf{Z}(0)$ with distribution $\nu$ almost surely satisfies the following conditions for some $\beta \in [1,2)$ and eventually non-decreasing sequence $\{\theta(m) : m \ge 1\}$ with $\inf_m\{\theta(m-1)/\theta(m)\}>0$: \begin{align} \label{d1} &\limsup_{m \rightarrow \infty}\frac{1}{m^{\beta}\theta(m)} \sum_{j=1}^m Z_j(0) < \infty,\\ \label{d2} &\limsup_{m \rightarrow \infty}\frac{1}{m^{\beta}\theta(m)} \sum_{j=1}^m (\log Z_j(0))_- < \infty,\\ \label{d3} &\liminf_{m \rightarrow \infty}\frac{1}{m^{\beta^2/(1+\beta)}\theta(m)} \sum_{j=1}^m Z_j(0) = \infty, \end{align} with the additional requirement that $\theta(m) \ge \log m, m \ge 1,$ if $\beta = 1$. This is a significant advance in our understanding of the domain of attraction properties of the infinite Atlas model. However, the conditions \eqref{d1}-\eqref{d3} involve upper and lower bounding the growth rate of the starting points $Y_m(0) = \sum_{j=1}^m Z_j(0)$ with respect to $m$ in terms of a common parameter $\beta \in [1,2)$, which partially restricts its applicability. In particular, they do not cover all initial gap distributions that stochastically dominate $\pi$ (e.g. $Z_j(0)\sim e^{j^2}$), which were shown to be in DoA of $\pi$ in \cite{AS}. Moreover, one expects that the properties of the first few gaps should not impact the domain of attraction property of a given vector of initial gaps, however \eqref{d2} is not satisfied if even one of the gaps is zero, which makes this condition somewhat unnatural. This condition arises from the relative entropy methods used in \cite{DJO} (see e.g. the estimate (3.5) therein) which make an important use of the fact that the gaps are non-zero. Beyond these results, little is known about the domain of attraction of stationary measures of the infinite Atlas model. Especially, for the other stationary measures $\pi_a, a >0$, there are no results about the domain of attraction. The fundamental challenge in investigating the latter question is that stability for initial distribution $\pi_a$ for any $a>0$ involves a delicate interplay between the drift of the lowest ranked particle and the `downward pressure' from the cloud of higher particles which has an exponentially increasing density. In particular, one cannot use approximation techniques based on finite dimensional systems as was done in previous works (namely, \cite{AS} and \cite{DJO}). This can also be observed, as noted previously, upon recalling that the marginals of the stationary distribution of the $d$-dimensional Atlas model in \eqref{eqn:atlas_stationary} approach those of $\pi_0$ as $d \rightarrow \infty$ and so a finite dimensional approximation approach is designed to {\em select} this particular stationary distribution of the infinite Atlas model. In \cite{banerjee2022domains}, we consider a somewhat weaker formulation of domain of attraction of the stationary measures $\pi_a$, $a \ge 0$. 
Specifically, for initial gap distribution $\nu$, define for $t>0$ $$\nu_t := \frac{1}{t} \int_0^t \hat \nu_s ds,$$ where $\hat \nu_t$ is as introduced in the paragraph below \eqref{altdef}. We say that $\nu$ is in the {\em Weak Domain of Attraction} (WDoA) of $\pi_a$ for $a\ge 0$, if $\nu_t \to \pi_a$ weakly as $t\to \infty$. Note that, although the convergence of the time-averaged laws $\nu_t$ to $\pi_a$ (ergodic limit) is implied by the convergence of $\hat \nu_t$ to $\pi_a$ (marginal time limit), the converse is not clear. However, if $\nu$ is in the WDoA of $\pi_a$ for some $a>0$, it is \emph{necessarily not} in the DoA of $\pi=\pi_0$. The paper \cite{banerjee2022domains} provides sufficient conditions for a measure $\nu$ to be in the WDoA of $\pi_a$ for a general $a\ge 0$. The main results of \cite{banerjee2022domains} are described below. \subsection{Main results}\label{main} Denote by ${\mathcal{S}}_0$ the class of probability measures on $\mathbb{R}_+^{\infty}$ such that the corresponding $\mathbb{R}_+^{\infty}$-valued random variable $\mathbf{Y}(0) = (Y_i(0))_{i\in \mathbb{N}_0}$ is supported on ${\mathcal{U}}$ and $$0 = Y_0(0)\le Y_1(0) \le \cdots \mbox{ a.s. }$$ Given a measure $\gamma \in {\mathcal{S}}_0$, from weak existence and uniqueness, we can construct a filtered probability space $(\Omega, {\mathcal{F}}, {\mathbb{P}}, \{{\mathcal{F}}_t\}_{t\ge 0})$ on which are given mutually independent ${\mathcal{F}}_t$-Brownian motions $\{W_i(\cdot), i \in \mathbb{N}_0\}$ and ${\mathcal{F}}_t$-adapted continuous processes $\{Y_i(t), t \ge 0, i \in \mathbb{N}_0\}$ such that the $Y_i$ solve \eqref{Atlasdef} with ${\mathbb{P}}\circ(\mathbf{Y}(0))^{-1} = \gamma$. On this space the processes $Y_{(i)}$ and $Z_i$ are well defined and the distribution ${\mathbb{P}}\circ (\mathbf{Z}(\cdot))^{-1}$ is uniquely determined from $\gamma$. Furthermore $\mathbf{Z}$ is a Markov process with values in (a subset of) $\mathbb{R}_+^{\infty}$. Let $${\mathcal{S}} = \{{\mathbb{P}}\circ \mathbf{Z}(0)^{-1}: {\mathbb{P}} \circ \mathbf{Y}(0)^{-1} \in {\mathcal{S}}_0\}.$$ Note that $\mu \in \mathcal{S}$ if for some $\mathbb{R}_+^{\infty}$-valued random variable $\mathbf{Y}(0)$ with probability law in ${\mathcal{S}}_0$, the vector $(Y_1(0) - Y_0(0), Y_2(0) - Y_1(0),\dots)$ has probability law $\mu$. The first theorem gives a sufficient condition for a probability measure on $\mathbb{R}_+^{\infty}$ to be in the WDoA of $\pi$. \begin{theorem}[\cite{banerjee2022domains}]\label{piconv} Suppose that the probability measure $\mu$ on $\mathbb{R}_+^{\infty}$ satisfies the following: there exists a coupling $(\mathbf{U},\mathbf{V})$ of $\mu$ and $\pi$ such that, almost surely, \begin{equation}\label{star} \liminf_{d \rightarrow \infty} \frac{1}{\sqrt{d} (\log d)} \sum_{i=1}^d U_i \wedge V_i = \infty. \end{equation} Then $\mu \in {\mathcal{S}}$ and it belongs to the WDoA of $\pi$. \end{theorem} The following corollary gives some natural situations in which \eqref{star} holds. \begin{corollary}[\cite{banerjee2022domains}]\label{starcheck} Let $\mu$ be a probability measure on $\mathbb{R}_+^{\infty}$ and $\mathbf{U} \sim \mu$. Suppose one of the following conditions holds: \begin{itemize} \item[(i)] $\mu$ is stochastically dominated by $\pi$ and, almost surely, \begin{equation}\label{star1} \liminf_{d \rightarrow \infty} \frac{1}{\sqrt{d} (\log d)} \sum_{i=1}^d U_i = \infty. \end{equation} \item[(ii)] For some a.s.
finite random variable $M$, \begin{equation}\label{star2} \liminf_{d \rightarrow \infty} \frac{1}{\sqrt{d} (\log d)} \sum_{i=1}^d (U_i \wedge M) = \infty, \mbox{ a.s. } \end{equation} \item[(iii)] $U_i = \lambda_i \Theta_i, \ i \in \mathbb{N}$, where $\{\Theta_i\}$ are iid non-negative random variables satisfying $\mathbb{P}(\Theta_1>0)>0$, and $\{\lambda_i\}$ are positive deterministic real numbers satisfying one of the following: \begin{itemize} \item[(a)] $\liminf_{j \rightarrow \infty} \lambda_j>0$,\\ \item[(b)] $\limsup_{j \rightarrow \infty}\lambda_j < \infty$ and \begin{equation}\label{star3} \liminf_{d \rightarrow \infty}\frac{1}{\sqrt{d} (\log d)} \sum_{i=1}^d \lambda_i = \infty. \end{equation} \end{itemize} \end{itemize} Then \eqref{star} holds. Hence, in all the above cases, $\mu \in {\mathcal{S}}$ and is in the WDoA of $\pi$. \end{corollary} \begin{remark} \label{remthm1} We make the following observations. \begin{enumerate}[(a)] \item Suppose that $\pi$ is stochastically dominated by $\mu$. Then \eqref{star} holds by the strong law of large numbers, and so Theorem \ref{piconv} shows that $\mu \in {\mathcal{S}}$ and is in the WDoA of $\pi$. In fact, in this case \cite[Theorem 4.7]{AS} shows that $\mu$ is in the DoA of $\pi$. \item Note that for any $C \in (0,\infty)$, $$((2C) \wedge 1) [U_i \wedge (1/2)] \le U_i \wedge C \le ((2C) \vee 1) [U_i \wedge (1/2)].$$ Hence, if \eqref{star2} holds for one a.s. finite random variable $M$ then it holds for every such random variable. In particular, if $\limsup_{n\to \infty} U_n <\infty$ a.s. and \eqref{star1} is satisfied then \eqref{star} holds and so $\mu \in {\mathcal{S}}$ and is in the WDoA of $\pi$. \item The paper \cite{DJO} notes two important settings where conditions \eqref{d1}-\eqref{d3} are satisfied. These are: (i) for some $c \in [1,\infty)$, $\lambda_j \in [c^{-1}, c]$ for all $j\in \mathbb{N}$ and $Z_j(0) = \lambda_j \Theta_j$ where the $\Theta_j$ are iid with finite mean and such that $E(\log \Theta_1)_{-}<\infty$, and (ii) $Z_j(0) = \lambda_j \Theta_j$ where the $\Theta_j$ are iid exponential with mean $1$ and either $\lambda_d\downarrow 0$ and $\frac{1}{\sqrt{d}\log d} \sum_{i=1}^d \lambda_i \to \infty$ as $d\to \infty$, or $\lambda_d\uparrow \infty$ and $\limsup_{d\to \infty}\frac{1}{d^{\beta}} \sum_{i=1}^d \lambda_i < \infty$ for some $\beta <2$. We note that the conditions assumed in the above settings are substantially stronger than the one assumed in part (iii) of Corollary \ref{starcheck}. However, \cite{DJO} establishes the stronger marginal time convergence as opposed to the time averaged limit considered in Corollary \ref{starcheck}. See part (g) for a related open problem. \item In the setting of (iii), the sequence $\{\lambda_i\}$ satisfies \eqref{star3} if $(\log i)/(\sqrt{i}\, \lambda_i) \to 0$ as $i \to \infty$. Indeed, note that, for any $C>0$, there exists $i_C \in \mathbb{N}$ such that for all $i \ge i_C$, $$ \sum_{j=i_C}^i \lambda_j \ge C\sum_{j=i_C}^i\frac{\log j}{\sqrt{j}} \ge C\int_{i_C}^{i+1}\frac{\log z}{\sqrt{z}}dz. $$ Using the fact that the primitive of $\log z/\sqrt{z}$ is $2\sqrt{z}(\log z - 2)$, $\{\lambda_j\}$ is seen to satisfy \eqref{star3} and so, if in addition $\limsup_{j \rightarrow \infty}\lambda_j < \infty$, we have that the conditions of (iii) (b) are satisfied.
\item We note that the conditions prescribed in Theorem \ref{piconv} and Corollary \ref{starcheck} all involve quantifying how close together the particles can be on the average in the initial configuration of the Atlas model. In particular, we do not require any upper bounds on the sizes of $U_i$. This is in contrast with condition \eqref{d1} assumed in \cite{DJO} which requires the particles to be not too far apart on the average. Intuitively one expects convergence to $\pi$ to hold when initially the particles are not too densely packed and upper bounds on the average rate of growth of the initial spacings are somewhat unnatural. The result in \cite[Theorem 4.7]{AS} also points to this heuristic by showing weak convergence to $\pi$ from all initial gap configurations that stochastically dominate $\pi$. We also note that Theorem \ref{piconv} and Corollary \ref{starcheck} allow the first few gaps of the initial gap sequence to be arbitrary and do not require any condition analogous to \eqref{d2} and can, in particular, allow gaps to be zero. \item We note that the conditions \eqref{d1}-\eqref{d3} of \cite{DJO} do not imply \eqref{star}. To see this, consider the deterministic sequence of initial gaps: $U_i = i^{-2/3}$ for $n^3 < i < (n+1)^3$ and $U_{n^3} = n$, for any $n \in \mathbb{N}$. It can be checked that, $\sum_{i=1}^d U_i$ grows like $d^{2/3}$ and $\sum_{i=1}^d (\log U_i)_-$ grows like $d\log d$ as $d \rightarrow \infty$. Thus, \eqref{d1}-\eqref{d3} of \cite{DJO} hold with $\beta=1$ and $\theta(d)= \log d$. However, almost surely, $\sum_{i=1}^d U_i \wedge V_i$ grows like $d^{1/3}$, and therefore, \eqref{star} is violated. \item It will be interesting to investigate whether the condition in Theorem \ref{piconv} in fact implies the stronger property that $\mu$ is in the DoA of $\pi$. We leave this as an open problem. \end{enumerate} \end{remark} The next theorem provides sufficient conditions for a measure on $\mathbb{R}_+^{\infty}$ to be in the WDoA of one of the other stationary measures $\pi_a, a>0$. \begin{theorem}[\cite{banerjee2022domains}]\label{piaconv} Fix $a>0$. Suppose that the probability measure $\mu$ on $\mathbb{R}_+^{\infty}$ satisfies the following: There exists a coupling $(\mathbf{U}, \mathbf{V}_a)$ of $\mu$ and $\pi_a$ such that, almost surely, \begin{equation}\label{stara} \limsup_{d \rightarrow \infty} \frac{\log \log d}{\log d} \sum_{i=1}^d |V_{a,i} - U_i| = 0, \mbox{ and } \limsup_{d\to \infty} \frac{U_d}{dV_{a,d}}<\infty. \end{equation} Then $\mu \in {\mathcal{S}}$ and it belongs to the WDoA of $\pi_a$. \end{theorem} \textcolor{black}{ We remark that Theorem \ref{piaconv} also holds for the case $a=0$. Indeed, suppose that the condition \eqref{stara} in the theorem holds for $a=0$. Denoting the corresponding $\mathbf{V}_0$ as $\mathbf{V}$, we have that $$ \sum_{i=1}^d U_i \wedge V_i = \sum_{i=1}^d U_i \vee V_i - \sum_{i=1}^d |V_i - U_i| \ge \sum_{i=1}^d V_i - \sum_{i=1}^d |V_i - U_i|. $$ Using this and the law of large numbers for $\{V_i\}$, it follows that \eqref{star} holds and thus from Theorem \ref{piconv} the conclusion of Theorem \ref{piaconv} holds with $a=0$. The reason we do not note the case $a=0$ in the statement of Theorem \ref{piaconv} is because \eqref{star} is a strictly weaker assumption than \eqref{stara} when $a=0$. This is expected as for any initial gap distribution that stochastically dominates $\pi$, convergence to $\pi$ holds (see Remark \ref{remthm1} (a)). 
However, for convergence to $\pi_a$ for some $a>0$, the initial gap distribution should be appropriately close to $\pi_a$ (see Remark \ref{remthm2} (b) below).} The following corollary gives a set of random initial conditions for which \eqref{stara} holds. \begin{corollary}\label{staracheck} Fix $a>0$. Let \begin{equation}\label{eq:805nn} \mu := \bigotimes_{i=1}^{\infty} \operatorname{Exp}(2 + ia + \lambda_i), \end{equation} where $\{\lambda_i\}$ are real numbers satisfying $\lambda_i > -(2+ia)$ for all $i$ and \begin{equation}\label{eq:802n} -a < \liminf_{i\to \infty} \frac{\lambda_i}{i} \le \limsup_{i\to \infty} \frac{\lambda_i}{i} < \infty. \end{equation} Moreover, assume \begin{equation}\label{aexp} \limsup_{d \rightarrow \infty} \frac{\log \log d}{\log d}\sum_{i=1}^d\frac{|\lambda_i|}{i^2} = 0. \end{equation} Then \eqref{stara} holds, and hence, $\mu \in {\mathcal{S}}$ and belongs to the WDoA of $\pi_a$. \end{corollary} \begin{remark} \label{remthm2} We note the following. \begin{enumerate}[(a)] \item Clearly, there are measures $\mu$ of the form in \eqref{eq:805nn} for which $\limsup_{i\to \infty} \frac{\lambda_i}{i} < \infty$, $\liminf_{i\to \infty}\frac{|\lambda_i|}{i}>0$ (and so \eqref{eq:802n} holds) and which do not lie in the WDoA of $\pi_a$ (for example, $\pi_{a'}$ for any $a' \neq a$). However, the corollary says that if $|\lambda_i|$ grows slightly slower than $i$ then $\mu$ is indeed in the WDoA of $\pi_a$. More precisely, the convergence in \eqref{aexp} holds in particular if $\limsup_{i\to \infty} |\lambda_i|\log\log i/i=0$. Indeed, note that for any $\delta>0$, there exists sufficiently large $i_\delta \in \mathbb{N}$ such that for $i \ge i_\delta$, \begin{equation}\label{ac2} \sum_{j=i_\delta}^i \frac{|\lambda_j|}{j^2} \le \sum_{j=i_\delta}^i \frac{\delta}{j \log \log j} \le \delta\int_{i_\delta-1}^{i} \frac{1}{z\log \log z}dz = \delta\int_{\log(i_\delta-1)}^{\log i} \frac{1}{\log w}dw. \end{equation} As $w \mapsto \frac{1}{\log w}$ is a slowly varying function, by Karamata's theorem \cite[Theorem 1.2.6 (a)]{mikosch1999regular}, $$ \lim_{i \rightarrow \infty} \frac{\log \log i}{\log i}\int_{\log(i_\delta-1)}^{\log i} \frac{1}{\log w}dw = 1. $$ Thus, we conclude from \eqref{ac2} that for any $\delta>0$, $$ \limsup_{i \rightarrow \infty} \frac{\log \log i}{\log i} \sum_{j=1}^i \frac{|\lambda_j|}{j^2} \le \delta. $$ As $\delta>0$ is arbitrary, \eqref{aexp} holds. \item Note that if $\mathbf{U} \sim \pi$, then for $a, a'>0$, $$V_{a,i} = \frac{2}{2+ia} U_i, \; V_{a',i} = \frac{2}{2+ia'} U_i, \; i \in \mathbb{N}$$ defines a coupling of $\pi_a$ and $\pi_{a'}$. For this coupling, \begin{equation}\label{eq:203} \sum_{i=1}^d |V_{a,i}- V_{a',i}| = O(|a-a'| \log d) \mbox{ and } \limsup_{d\to \infty} \frac{V_{a', d}}{dV_{a,d}} = 0. \end{equation} Since $\pi_{a'}$ is obviously not in the WDoA of $\pi_a$ for $a\neq a'$, the first property in \eqref{eq:203} says that the first requirement in \eqref{stara} is not far from what one expects. We conjecture that for $\mu$ to be in the WDoA of $\pi_a$ it is necessary that $(\log d)^{-1}\sum_{i=1}^d |V_{a,i} - U_i| \rightarrow 0$ as $d \rightarrow \infty$ for some coupling $(\mathbf{U}, \mathbf{V}_a)$ of $\mu$ and $\pi_a$.
Theorem \ref{piaconv} says that if the above convergence to $0$ holds at a rate faster than $1/(\log \log d)$ for some coupling of $\mu$ and $\pi_a$ then that (together with the second condition in \eqref{stara}) is sufficient for $\mu$ to be in the WDoA of $\pi_a$. Further, observe that if the second condition in \eqref{stara} does not hold, then there are infinitely many $d \in \mathbb{N}$ such that the lowest $d+1$ particles are separated from the rest by a large gap. If one such gap is very large, then it could happen that the distribution of gaps between the $d+1$ particles stabilizes towards the unique stationary gap distribution $\pi^{(d)}$ of the corresponding finite Atlas model before the remaining particles have interacted with them. As $\pi^{(d)}$ converges weakly to $\pi$ as $d \rightarrow \infty$, one does not expect in such situations the convergence (of time averaged laws) to $\pi_a$ to hold for any $a>0$. \end{enumerate} \end{remark} \subsection{Outline of Approach} Proofs of the above results are based on a pathwise approach to studying the long time behavior of the infinite Atlas model using synchronous couplings, namely, two versions of the infinite Atlas model described in terms of the ordered particles via \eqref{order_SDE} and started from different initial conditions but driven by the same collection of Brownian motions. Central ingredients in the proofs are suitable estimates on the decay rate of the $L^1$ distance between the gaps in synchronously coupled ordered infinite Atlas models. These estimates are obtained by analyzing certain excursions of the difference of the coupled processes where each excursion ensures the contraction of the $L^1$ distance by a fixed deterministic amount. The key is to appropriately control the number and the lengths of such excursions. Certain monotonicity properties of synchronous couplings and a quantification of the influence of far away coordinates on the first few gaps also play a crucial role. Using these tools, the discrepancy between the gap processes of two synchronously coupled infinite Atlas models can be controlled in terms of associated gap processes when the starting configurations differ only in finitely many coordinates. The latter are more convenient to work with as for them the initial $L^1$ distance between the gap processes is finite, and the aforementioned excursion analysis can be applied to obtain the main results. \section{Extremal and product form invariant distributions of infinite rank-based diffusions} \label{sec:extrem} The results described in the last section give us some understanding about the local stability structure of the set of invariant distributions of the infinite Atlas model. However, it does not say anything about the geometry of the (convex) set of invariant measures. In particular, a longstanding open problem is to characterize all extremal invariant measures (corner points of the convex set). In this section, we outline some recent results from \cite{banerjee2022extremal}, which makes progress in this direction. We will in fact consider the more general setting of an infinite rank-based diffusion where the particles move on the real line according to mutually independent Brownian motions, with the $i$-th particle from the left getting a constant drift of $g_{i-1}$. The vector $\mathbf{g}d := (g_0,g_1,\operatorname{d}ots)'$ is called the drift vector and we call this model the \emph{$\mathbf{g}d$-Atlas model}. 
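Before turning to the results, the synchronous coupling idea described in the outline above can be visualized with finitely many particles. The following sketch is an illustration for this review only (it assumes \texttt{numpy}, and it uses a simple Euler scheme in which particles receive rank-dependent drifts and are re-ranked after every step, so it is only an approximation of the local-time dynamics); it couples two copies of such a rank-based system, with drift vector $(g_0,g_1,\ldots)$, through the same rank-indexed Gaussian increments and records the $L^1$ distance between their gap vectors.
\begin{verbatim}
import numpy as np

def synchronous_coupling_l1(g, z0_a, z0_b, T, dt, seed=1):
    """Two copies of a finite rank-based model with rank-dependent drifts g,
    driven by the same rank-indexed Gaussian increments (a discrete analogue
    of coupling the ordered SDEs through the same Brownian motions B*_i).
    Returns the L^1 distance between the two gap vectors over time."""
    rng = np.random.default_rng(seed)
    n = len(g)                                      # number of particles
    Ya = np.concatenate(([0.0], np.cumsum(z0_a)))   # ordered configurations
    Yb = np.concatenate(([0.0], np.cumsum(z0_b)))
    dists = []
    for _ in range(int(T / dt)):
        xi = np.sqrt(dt) * rng.standard_normal(n)   # one increment per rank
        Ya = np.sort(Ya + g * dt + xi)              # re-rank after each step
        Yb = np.sort(Yb + g * dt + xi)
        dists.append(np.abs(np.diff(Ya) - np.diff(Yb)).sum())
    return np.array(dists)

n = 10                                   # particles; n - 1 gaps
g = np.zeros(n); g[0] = 1.0              # standard Atlas drifts
rng = np.random.default_rng(2)
z0_a = rng.exponential(0.5, size=n - 1)  # two different initial gap vectors
z0_b = rng.exponential(2.0, size=n - 1)
dist = synchronous_coupling_l1(g, z0_a, z0_b, T=50.0, dt=1e-3)
for t in (0.0, 5.0, 25.0, 49.9):
    print(f"t={t:5.1f}   L1 gap distance = {dist[int(t/1e-3)]:.4f}")
\end{verbatim}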
Such particle systems were originally introduced in stochastic portfolio theory \cite{Fern,fernholz2009stochastic} as models for stock growth evolution in equity markets and have been investigated extensively in recent years in several different directions. In particular, characterizations of such particle systems as scaling limits of jump processes with local interactions on integer lattices, such as the totally asymmetric simple exclusion process, have been studied in \cite{karatzas_skewatlas}. Various types of results for the asymptotic behavior of the empirical measure of the particle states have been studied, such as propagation of chaos, characterization of the associated McKean-Vlasov equation and nonlinear Markov processes \cite{shkol, jourdain2008propagation}, large deviation principles \cite{dembo2016large}, characterizing the asymptotic density profile and the trajectory of the leftmost particle via Stefan free-boundary problems \cite{cabezas2019brownian} . These particle systems also have close connections with Aldous' ``Up the river'' stochastic control problem \cite{aldous41up}, recently solved in \cite{tang2018optimal}. Results on wellposedness of the associated stochastic differential equations (in the weak and strong sense) and on absence of triple collisions (three particles at the same place at the same time) have been studied in \cite{bass1987uniqueness, ichiba2013strong, ichiba2010collisions,sarantsev2015triple,ichiba2017yet}. As in the case of the infinite Atlas model, the natural state space of the location of the particles in the more general system is also given by ${\mathcal{U}}$ defined in \eqref{eq:cludef}. For a sequence $\{W_i\}_{i\in \mathbb{N}_0}$ of mutually independent standard Brownian motions given on some filtered probability space, and $\mathbf{y}bd = (y_0, y_1, \ldots)' \in {\mathcal{U}}$, $\mathbf{g}d = (g_0, g_1, \ldots)' \in \mathcal{D}$, consider the following system of equations. \begin{equation}\label{eq:unrank} dY_i(t) = \left[\sum_{k=0}^{\infty} \mathbf{1}(Y_i(t) = Y_{(k)}(t)) g_k\right] dt + dW_i(t), \ t \ge 0, \end{equation} with $Y_i(0) = y_i$, $i \in \mathbb{N}_0$. Write $\mathbf{Y}bd(\cdot) := (Y_0(\cdot),Y_1(\cdot),\operatorname{d}ots)'$. It has been shown in \cite{AS} that if $\mathbf{g}d \in \mathcal{D}$, where \begin{equation}\label{eq:clddefn} \mathcal{D} := \Big\{\mathbf{g}d = (g_0, g_1, \ldots)' \in \mathbb{R}R^{\infty}: \sum_{i=0}^{\infty} g_i^2 <\infty \Big\}, \end{equation} then for any $\mathbf{y}bd \in {\mathcal{U}}$, $P(\mathbf{Y}bf(t)\in {\mathcal{U}} \mbox{ for all } t \ge 0)=1$, and there is a unique weak solution $\mathbf{Y}bd(\cdot)$ to \eqref{eq:unrank}. Analogous to the standard Atlas model setting, the ordered process $\{Y_{(i)}(\cdot) : i \in \mathbb{N}_0\}$ can be described by the SDE: \begin{equation}\label{order_SDEg} dY_{(i)}(t) = g_i dt + dB^*_i(t) - \frac{1}{2}dL^*_{i+1}(t) + \frac{1}{2} dL^*_{i}(t), \ t \ge 0, \;\; Y_{(i)}(0) = y_{(i)},\, i \in \mathbb{N}_0. \end{equation} where $\{B_i^*(\cdot) : i \in \mathbb{N}_0\}$ are independent Brownian motions defined in \eqref{eq:bistar}, $L^*_{0}(\cdot) \equiv 0$ and for $i \in \mathbb{N}$, $L^*_i(\cdot)$ denotes the local time of collision between the $(i-1)$-th and $i$-th particles. 
The gap process for the $(\mathbf{g}d, \mathbf{y}bd)$-infinite Atlas model is the $\mathbb{R}R_+^{\infty}$-valued process $\mathbf{Z}bd(\cdot)=$ $(Z_1(\cdot), Z_2(\cdot), \operatorname{d}ots)'$ defined by \begin{equation}\label{gapdefg} Z_i(\cdot) := Y_{(i)}(\cdot) - Y_{(i-1)}(\cdot), \ i \in \mathbb{N}. \end{equation} The natural state space for the gap process is given by \begin{equation}\label{clvdef} {\mathcal{V}} := \{\mathbf{z}bd \in \mathbb{R}R_+^{\infty}: \mbox{ for some } \mathbf{y}bd \in {\mathcal{U}}, \mathbf{z}bd = (y_{(1)}-y_{(0)}, y_{(2)}-y_{(1)}, \ldots)'\}. \end{equation} For this setting it is known from the work of \cite{sarantsev2017stationary2} that, once more, the process associated with the gap sequence of the ranked particle system has a continuum of product-form stationary distributions given as \begin{equation}\label{eq:piagbddefn} \pi_a^{\mathbf{g}d} := \otimes_{n=1}^{\infty} \mbox{Exp}(n (2\bar g_n+a)), \;\; a > -2 \inf_{n \in \mathbb{N}} \bar g_n,\end{equation} where $\bar g_n := \frac{1}{n}(g_0 + \cdots + g_{n-1})$. In the special case where $\mathbf{g}d \in \mathcal{D}_1$, where \begin{equation}\label{cldzdefn} \mathcal{D}_1 := \{\mathbf{g}d \in \mathcal{D}: \text{ there exist } N_1 < N_2 < \operatorname{d}ots \rightarrow \infty \text{ such that } \bar g_k > \bar g_{N_j}, \ k=1,\operatorname{d}ots,N_j-1, \text{ for all } j \ge 1\},\end{equation} $\pi_a^{\mathbf{g}d}$ is also an invariant distribution for $a=-2 \inf_{n \in \mathbb{N}} \bar g_n = -2\lim_{j \rightarrow \infty} \bar g_{N_j}$; in particular for the infinite Atlas model $\pi_a = \pi_a^{\mathbf{g}d^1}$ is invariant for all $a\ge 0$. Using Kakutani's theorem, it is easy to verify that, for different values of $a$, the probability measures $\pi_a^{\mathbf{g}d}$ are mutually singular. These distributions are also special in that they have a product form structure. In particular, if the initial distribution of the gap process is chosen to be one of these distributions, then the laws of distinct gaps at any fixed time are independent despite these gaps having a highly correlated temporal evolution mechanism (see \eqref{order_SDEg}-\eqref{gapdefg}). Now we present our main results from \cite{banerjee2022extremal} which show that the above distributions are also extremal. Moreover, in some sense, these are the only product form invariant distributions. \subsection{Main results} We begin with some notation. Let $\mathbf{X} := \mathcal{C}([0,\infty): \mathbb{R}R_+^{\infty})$ which is equipped with the topology of local uniform convergence (with $\mathbb{R}R_+^{\infty}$ equipped with the product topology). Define measurable maps $\{\theta_t\}_{t\ge 0}$ from $(\mathbf{X}, \mathcal{B}(\mathbf{X}))$ to itself as $$\theta_t(\mathbf{Z}bd)(s) := \mathbf{Z}bd(t+s), \; t\ge 0, s\ge 0, \; \mathbf{Z}bd \in \mathbf{X}.$$ Given $\mathbf{g}d \in \mathcal{D}$ and $\mathbf{z}bd \in {\mathcal{V}}$, we denote the probability distribution of the gap process of the $\mathbf{g}d$-Atlas model on $(\mathbf{X}, \mathcal{B}(\mathbf{X}))$, with initial gap sequence $\mathbf{z}bd$, by $\mathbb{P}^{\mathbf{g}d}_{\mathbf{z}bd}$. Also, for $\gamma \in {\mathcal{P}}(\mathbb{R}R_+^{\infty})$ supported on ${\mathcal{V}}$ (namely, $\gamma({\mathcal{V}})=1$), let $\mathbb{P}^{\mathbf{g}d}_{\gamma} := \int_{\mathbb{R}R_+^{\infty}} \mathbb{P}^{\mathbf{g}d}_{\mathbf{z}bd} \;\gamma(d\mathbf{z}bd)$. 
The corresponding expectation operators will be denoted as $\mathbb{E}^{\mathbf{g}d}_{\mathbf{z}bd}$ and $\mathbb{E}^{\mathbf{g}d}_{\gamma}$ respectively. Denote by $\mathcal{I}^{\mathbf{g}d}$ the collection of all invariant (stationary) probability measures of the gap process of the $\mathbf{g}d$-Atlas model supported on ${\mathcal{V}}$, namely $$ \mathcal{I}^{\mathbf{g}d} := \{\gamma \in {\mathcal{P}}(\mathbf{X}): \gamma({\mathcal{V}}^c)=0, \mbox{ and } \mathbb{P}^{\mathbf{g}d}_{\gamma} \circ \theta_t^{-1} = \mathbb{P}^{\mathbf{g}d}_{\gamma} \mbox{ for all } t \ge 0\}.$$ Abusing notation, the canonical coordinate process on $(\mathbf{X}, \mathcal{B}(\mathbf{X}))$ will be denoted by $\{\mathbf{Z}bd(t)\}_{t\ge 0}$. Let $\mathcal{M}(\mathbb{R}R_+^{\infty})$ be the collection of all real measurable maps on $\mathbb{R}R_+^{\infty}$. For $f \in \mathcal{M}(\mathbb{R}R_+^{\infty})$, $\mathbf{z}bd \in \mathbb{R}R_+^{\infty}$ and $t\ge 0$ such that $\mathbb{E}^{\mathbf{g}d}_{\mathbf{z}bd}(|f(\mathbf{Z}bd(t))|) <\infty$ we write $$T_t^{\mathbf{g}d}f(\mathbf{z}bd) := \mathbb{E}^{\mathbf{g}d}_{\mathbf{z}bd}(f(\mathbf{Z}bd(t))).$$ For $\gamma \in {\mathcal{P}}(\mathbb{R}R_+^{\infty} )$ let $L^2(\gamma)$ be the collection of all measurable $\psi: \mathbb{R}R_+^{\infty} \to \mathbb{R}R$ such that $\int_{\mathbb{R}R_+^{\infty}} |\psi(\mathbf{z}bd)|^2 \gamma(d\mathbf{z}bd) <\infty$. We denote the inner-product and the norm on $L^2(\gamma)$ as $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ respectively. Note that for $\mathbf{g}d \in \mathcal{D}$, $\gamma \in \mathcal{I}^{\mathbf{g}d} $, and $\psi \in L^2(\gamma)$, $T_t^{\mathbf{g}d}\psi$ is $\gamma$ a.e. well defined and belongs to $L^2(\gamma)$. Furthermore, the collection $\{T_t^{\mathbf{g}d}\}_{t\ge 0}$ defines a contraction semigroup on $L^2(\gamma)$, namely $$T_t^{\mathbf{g}d}T_s^{\mathbf{g}d} \psi = T_{t+s}^{\mathbf{g}d}\psi, \mbox{ and } \|T_t^{\mathbf{g}d}\psi\| \le \|\psi\| \mbox{ for all } s,t \ge 0 \mbox{ and } \psi \in L^2(\gamma).$$ We now recall the definition of extremality and ergodicity. Let, for $\mathbf{g}d$ as above and $\gamma \in \mathcal{I}^{\mathbf{g}d}$, $\mathbb{I}_{\gamma}^{\mathbf{g}d} $ be the collection of all $T_t^{\mathbf{g}d}$-invariant square integrable functions, namely, $$\mathbb{I}_{\gamma}^{\mathbf{g}d} := \{\psi \in L^2(\gamma): T_t^{\mathbf{g}d}\psi = \psi,\; \gamma \mbox{ a.s., for all } t \ge 0\}.$$ We denote the projection of a $\psi \in L^2(\gamma)$ on to the closed subspace $\mathbb{I}_{\gamma}^{\mathbf{g}d}$ as $\hat \psi_{\gamma}^{\mathbf{g}d}$. \begin{definition} Let $\mathbf{g}d\in \mathcal{D}$. A $\nu \in \mathcal{I}^{\mathbf{g}d}$ is said to be an extremal invariant distribution of the gap process of the $\mathbf{g}d$-Atlas model if, whenever for some $\varepsilon \in (0,1)$ and $\nu_1, \nu_2 \in \mathcal{I}^{\mathbf{g}d}$ we have $\nu = \varepsilon \nu_1 + (1-\varepsilon)\nu_2$, then $\nu_1 = \nu_2 = \nu$. We denote the collection of all such measures by $\mathcal{I}^{\mathbf{g}d}_e$. We call $\nu \in \mathcal{I}^{\mathbf{g}d}$ an ergodic probability measure for the invariant distribution of the gap process of the $\mathbf{g}d$-Atlas model if for all $\psi \in L^2(\nu)$, $\hat \psi_{\nu}^{\mathbf{g}d}$ is constant $\nu$-a.s. We denote the collection of all such measures by $\mathcal{I}^{\mathbf{g}d}_{er}$. \end{definition} We note that (cf. proof of Lemma \ref{lemerg} below) if $\gamma \in \mathcal{I}^{\mathbf{g}d}_{er}$, then for any $\psi \in L^2(\gamma)$, and $\gamma$ a.e. 
$\mathbf{z}bd$, $$\frac{1}{t} \int_0^t T_s^{\mathbf{g}d}\psi(\mathbf{z}bd) ds \to \int_{\mathbb{R}R_+^{\infty}} \psi(\mathbf{x}bd) \gamma(\mathbf{x}bd), \mbox{ in } L^2(\gamma), \mbox{ as } t \to \infty.$$ The following result, which says that extremal invariant measures and ergodic invariant measures are the same, is standard. \begin{lemma}\label{lemerg} Let $\mathbf{g}d \in \mathcal{D}$. Then $\mathcal{I}^{\mathbf{g}d}_e = \mathcal{I}^{\mathbf{g}d}_{er}$. Let $\gamma \in \mathcal{I}^{\mathbf{g}d}$ and suppose that for every bounded measurable $\psi:\mathbb{R}R_+^{\infty} \to \mathbb{R}R$, $\hat \psi_{\gamma}^{\mathbf{g}d}$ is constant, $\gamma$ a.s. Then $\gamma \in \mathcal{I}^{\mathbf{g}d}_e$. \end{lemma} The following is the first main result of \cite{banerjee2022extremal}. \begin{theorem}[\cite{banerjee2022extremal}]\label{thm_main} Let $\mathbf{g}d\in \mathcal{D}$. Then, for every $a > -2 \inf_{n \in \mathbb{N}} \bar g_n$, $\pi_a^{\mathbf{g}d} \in \mathcal{I}^{\mathbf{g}d}_e = \mathcal{I}^{\mathbf{g}d}_{er}$. Also, when $\mathbf{g}d \in \mathcal{D}_1$, $\pi_a^{\mathbf{g}d} \in \mathcal{I}^{\mathbf{g}d}_e = \mathcal{I}^{\mathbf{g}d}_{er}$ also for $a = -2 \inf_{n \in \mathbb{N}} \bar g_n$. \end{theorem} The above theorem proves the extremality of the invariant measures $\pi_a^{\mathbf{g}d}$ for suitable values of $a$. One can ask whether these are the only extremal invariant measures of the gap process of the $\mathbf{g}d$-Atlas model supported on ${\mathcal{V}}$. The answer to this question when $\mathbf{g}d = \mathbf{0}$ is affirmative from results of \cite[Theorem 4.2]{RuzAiz} , if one restricts to extremal measures satisfying certain integrability constraints on the denseness of particle configurations. For a general $\mathbf{g}d \in \mathcal{D}$ this is currently a challenging open problem. However, the next result makes partial progress towards this goal by showing that for any $\mathbf{g}d \in \mathcal{D}_1$ (and under a mild integrability condition), the collection $\{\pi_a^{\mathbf{g}d}\}$ exhausts all the extremal product form invariant distributions. In fact we prove the substantially stronger statement that the measures $\pi_a^{\mathbf{g}d}$ are the only product form (extremal or not) invariant distributions under a mild integrability condition. Qualitatively, this result says that these measures are the only invariant distributions that preserve independence of the marginal laws of the gaps in time. \begin{theorem}[\cite{banerjee2022extremal}]\label{thm_main2} Let $\mathbf{g}d \in \mathcal{D}_1$ and let $\pi \in \mathcal{I}^{\mathbf{g}d}$ be a product measure, i.e. $\pi = \otimes_{i=1}^{\infty} \pi_i$ for some $\pi_i \in {\mathcal{P}}(\mathbb{R}R_+)$, $i \in \mathbb{N}$. Suppose that \begin{equation}\label{eq:satinteg} \int_{\mathbb{R}R_+^{\infty}} \left(\sum_{j=1}^{\infty} e^{-\frac{1}{4} (\sum_{l=1}^j z_l)^2} \right) \pi(d\mathbf{z}bd) < \infty. \end{equation} Then, for some $a\ge -2\lim_{j \rightarrow \infty} \bar g_{N_j}$, $\pi = \pi_a^{\mathbf{g}d}$. \end{theorem} Recall that ${\mathcal{V}}$ defined in \eqref{clvdef} consists of $\mathbf{z}bd \in \mathbb{R}R_+^{\infty}$ for which $\sum_{j=1}^{\infty}e^{-\alpha (\sum_{l=1}^j z_l)^2} < \infty$ for all $\alpha>0$. In comparison, condition \eqref{eq:satinteg} requires a finite expectation of $\sum_{j=1}^{\infty}e^{-\frac{1}{4}(\sum_{l=1}^j z_l)^2}$ when $\mathbf{z}bd$ is distributed as $\pi$. 
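To get a rough sense of the size of the quantity in \eqref{eq:satinteg} under the product measures in \eqref{eq:piagbddefn}, one can estimate it by Monte Carlo. The sketch below is an illustration added for this review (it assumes \texttt{numpy}; the truncation level, sample size and drift choice are ad hoc, and for larger values of $a$ the ignored tail is no longer negligible). It uses the standard Atlas drifts, for which $\bar g_n = 1/n$ and the $j$-th exponential rate is $2 + ja$.
\begin{verbatim}
import numpy as np

def satinteg_monte_carlo(gbar, a, J=2000, n_samples=2000, seed=3):
    """Monte Carlo estimate of E[ sum_j exp(-(1/4)(z_1+...+z_j)^2) ] when the
    gaps z_j are independent Exp(j*(2*gbar_j + a)) as in (eq:piagbddefn),
    truncated at J coordinates."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, J + 1)
    rates = j * (2.0 * gbar(j) + a)
    z = rng.exponential(1.0 / rates, size=(n_samples, J))
    S = np.cumsum(z, axis=1)                 # partial sums z_1 + ... + z_j
    return np.exp(-0.25 * S ** 2).sum(axis=1).mean()

# Standard Atlas drifts: g = (1,0,0,...), so gbar_n = 1/n and rate_j = 2 + j*a.
gbar = lambda n: 1.0 / n
for a in (0.0, 0.5, 1.0):
    print(f"a={a}: estimated value of the sum in (eq:satinteg) =",
          round(satinteg_monte_carlo(gbar, a), 4))
\end{verbatim}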
Roughly speaking, condition \eqref{eq:satinteg} puts a restriction on the rate of increase of the density of particles as one moves away from the lowest ranked particle. \subsection{Connection with prior work in interacting particle systems} Questions about extremality and ergodicity of stationary distributions have been addressed previously in the context of interacting particle systems on countably infinite graphs (see \cite{liggett1976coupling,andjel1982invariant,sethuraman2001extremal,balazs2007existence} and references therein). However, in all these cases, the interactions are Poissonian, which enables one to use the (explicit) generator of associated processes in an effective way. The interactions in rank based diffusions are very `singular' owing to the local time based dynamics (see \eqref{order_SDE}) and generator based methods seem to be less tractable. Furthermore, unlike previous works, the state space for the gap process (i.e. $\mathbb{R}_+^{\infty}$) is not countable and has a non-smooth boundary, and the process has intricate interactions (oblique reflections) with the boundary. Hence, proving extremality requires new techniques. Our proofs are based on constructing appropriate couplings for these infinite dimensional diffusions which then allow us to prove suitable invariance properties and a certain `directional strong Feller property'. Although our setting and methods are very different, at a high level, the approach we take is inspired by the papers \cite{sethuraman2001extremal,balazs2007existence}. Complete characterization of extremal invariant measures is, in general, a very hard problem. This problem has been completely solved in a few examples of interacting particle systems such as the simple exclusion process \cite{liggett1976coupling} and the zero range process \cite{andjel1982invariant} where the extremal probability measures are fully characterized as an explicit collection of certain product form measures. However, in these models the particle densities associated with distinct extremal measures are scalar multiples of each other owing to certain homogeneity properties in the dynamics (see, for example, \cite[Theorem 1.10]{andjel1982invariant}). This, along with the Poissonian nature of the interactions, enables one to prove useful monotonicity properties of the `synchronously coupled' dynamics (see \cite[Section 2]{liggett1976coupling} and \cite[Section 4]{andjel1982invariant}) using generator methods that are crucial to the above characterization results. A key challenge in extending these methods to rank based diffusions of the form considered here is that, in addition to the singular local time interactions, the point process associated with the configuration of particles with gaps distributed as $\pi_a^{\mathbf{g}}$ has an intensity function $\rho(x)$ that grows exponentially as $x\to \infty$ when $a>0$ and, due to a nonlinear dependence on $a$, lacks the scalar multiple property for distinct values of $a$. This is a direct consequence of the inhomogeneity of the topological interactions in our particle systems where the local stability in a certain region of the particle cloud is affected both by the density of particles in the neighborhood and their relative ranks in the full system. Moreover, unlike the above interacting particle systems, in rank based diffusions, when the initial gaps are given by a stationary distribution, the point process of particles is typically not stationary.
This phenomenon, where the gaps are stationary while the associated point process is not, referred to as \emph{quasi-stationarity} in \cite{RuzAiz}, is technically challenging. We note that this latter paper studies one setting where the intensity function grows exponentially and the particle density lacks the scalar multiple property for distinct values of $a$. However, their setting, in the context of rank based diffusions, corresponds to the case $\mathbf{g} = (0,0, \ldots)'$, where the unordered particles behave like independent standard Brownian motions, and this fact is crucially exploited in \cite{RuzAiz}. \subsection{Proof outline} We now make some comments on proof ideas. The key step in proving the extremality of $\pi_a^{\mathbf{g}}$ is to establish that any bounded measurable function $\psi$ on $\mathbb{R}_+^{\infty}$ that is $\pi_a^{\mathbf{g}}$-a.e. invariant, under the action of the semigroup of the Markov process corresponding to the $\mathbf{g}$-Atlas gap sequence, is constant $\pi_a^{\mathbf{g}}$-a.e. If $\mathbf{g} = \mathbf{g}^1$ and $a=0$, we have that $\pi_a^{\mathbf{g}} = \otimes_{i=1}^{\infty} \mbox{Exp}(2)$, and therefore the coordinate sequence $\{Z_i\}_{i=1}^{\infty}$ is iid under $\pi_a^{\mathbf{g}}$. In this case, from the Hewitt-Savage zero-one law it suffices to show that $\psi$ is $\pi_a^{\mathbf{g}}$-a.e. invariant under all finite permutations of the coordinates of $\mathbb{R}_+^{\infty}$. For this, in turn, it suffices to simply prove the above invariance property for transpositions of the $i$-th and $(i+1)$-th coordinates, for all $i \in \mathbb{N}$. For a general $\pi_a^{\mathbf{g}}$, the situation is a bit more involved as the coordinate sequence $\{Z_i\}_{i=1}^{\infty}$ is not iid any more. Nevertheless, from the scaling properties of exponential distributions it follows that, with $c_n := 2[n (2\bar g_n+a)]^{-1}$, the sequence $\{\tilde Z_n\}_{n\ge 1}$, defined as $\tilde Z_n = c_n^{-1} Z_n$, $n \in \mathbb{N}$, is iid under $\pi_a^{\mathbf{g}}$. In this case, in order to invoke the Hewitt-Savage zero-one law, one needs to argue that for each $i$ the map $\psi$ is $\pi_a^{\mathbf{g}}$-a.e. invariant under the transformation that takes the $(i, i+1)$ coordinates $(z_i, z_{i+1})$ to $(\frac{c_i}{c_{i+1}} z_{i+1}, \frac{c_{i+1}}{c_{i}} z_{i})$ and keeps the remaining coordinates the same. Establishing this property is at the heart of the proof of Theorem \ref{thm_main}. A key technical idea in the proof is the construction of a mirror coupling of the first $i+1$ Brownian motions, and synchronous coupling of the remaining Brownian motions, in the evolution of the ranked infinite $\mathbf{g}$-Atlas model corresponding to a pair of nearby initial configurations. Estimates on the probability of a successful coupling, before any of the first $i$ gap processes have hit zero or the lowest $i$ particles have interacted with the higher ranked particles (in a suitable sense), are some of the important ingredients in the proof. The proof of Theorem \ref{thm_main2} hinges on establishing a key identity for expectations, under the given product form invariant measure $\pi$, of certain integrals involving the state process of the $i$-th gap and the collision local time for the $(j-1)$-th and $j$-th particle, for $i\neq j$. This identity is a consequence of the product form structure of $\pi$ and basic results on local times of continuous semimartingales.
By using the form of the dynamics of the $\mathbf{g}$-Atlas model, the above identity allows us to compute explicitly the Laplace transform of the $i$-th coordinate projection of $\pi$, via It\^o's formula, from which it is then readily seen that $\pi$ must be $\pi_a^{\mathbf{g}}$ for a suitable value of $a$.\\\\ \noindent {\bf Acknowledgements} We would like to thank Prof. B.V. Rao whose insightful and probing questions led to the results reported in Section \ref{sec:extrem}. Research supported in part by the RTG award (DMS-2134107) from the NSF. SB was supported in part by the NSF-CAREER award (DMS-2141621). AB was supported in part by the NSF (DMS-2152577). \noindent{\scriptsize {\textsc{\noindent S. Banerjee and A. Budhiraja,\newline Department of Statistics and Operations Research\newline University of North Carolina\newline Chapel Hill, NC 27599, USA\newline email: [email protected] \newline email: [email protected] } }} \end{document}
\begin{document} \title{On the Mathematics of Higher Structures} \section{Introduction} \label{sec:intro} In a series of papers \cite{B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, B12, B13, B14, B15, Baas2018, BS2016} we have discussed higher structures in science in general, and developed a framework called Hyperstructures for describing and working with higher structures. In \cite{B14} we discussed the philosophy behind higher structures and formulated a principle in six stages --- the Hyperstructure Principle --- for forming higher structures. In this paper we will relate hyperstructures and the general principle to known mathematical structures. We also discuss how they may give rise to new mathematical structures and prepare a framework for a mathematical theory. Let us first recall from \cite{B14} what we think is the basic principle in forming higher structures. \section{The $\mathscr{H}$-Principle} \label{sec:Hprin} \begin{itemize} \item[(I)] \emph{Observation and Detection.}\\ \noindent Given a collection of objects that we want to study and give a structure. First we observe the objects and detect or assign their properties, states, etc. This is the \emph{semantic} part of the process. Finally we may also select special objects.\\ \item[(II)] \emph{Binding.}\\ \noindent A procedure to produce new objects from collections of old objects by ``binding'' them in some way. This is the \emph{syntactic} part of the process.\\ \item[(III)] \emph{Levels.}\\ \noindent Iterating the described process in the following way: forming bonds of bonds and --- important! --- using the detected and observed properties at one level in forming the next level. This is iteration in a new context and not a recursive procedure. It combines syntax and semantics in forming a new level. Connections between levels are given by specifying how to dissolve a bond into lower level objects. When bonds have been formed to constitute a new level, observation and detection are like finding ``emergent properties'' of the process.\\ \end{itemize} These three steps are the most important ones, but we include three more in the general principle. \begin{itemize} \item[(IV)] \emph{Local to global.}\\ \noindent Describing a procedure of how to move from the bottom (local) level through the intermediate levels to the top (global) level with respect to general properties and states. The importance of the level structure lies in the possibility of manipulating the systems levelwise in order to achieve a desired global goal or state. This can be done using ``globalizers'' --- an extension of sections in sheaves on \emph{Grothendieck sites} (see \cite{B2}).\\ \item[(V)] \emph{Composition.}\\ \noindent A way to produce new bonds from old ones. This means that we can compose and produce new bonds on a given level, by ``gluing'' (suitably interpreted) at lower levels. The rules may vary and be flexible due to the relevant context.\\ \item[(VI)] \emph{Installation.}\\ \noindent Putting a level structure satisfying I--V on a set or collection of objects in order to perform an analysis, synthesis or construction in order to achieve a given goal. 
The objects to be studied may be introduced as bonds (top or bottom) in a level structure.\\ \noindent\emph{Synthesis:} The given collection is embedded at the bottom level.\\ \noindent\emph{Analysis:} The given collection is embedded at the top level.\\ \end{itemize} Synthesis facilitates local to global processes and, dually, analysis facilitates global to local processes by defining localizers dual to globalizers, see \cite{B1}. The steps I--VI are the basic ingredients of what we call the \emph{Hyperstructure Principle} or, in short, the \emph{$\mathscr{H}$-principle}. (Corresponding to ``The General Principle'' in \cite{B10}.) In our opinion it reflects the basic way in which we make or construct things. Let us illustrate this in terms of category theory: \begin{enumerate} \item \emph{Observation} and \emph{detection}: we decide the structure of the objects like topological spaces, groups, etc.\\ \item \emph{Binding}: morphisms bind objects --- in an ordered way, continuous maps, homomorphisms, etc.\\ \item \emph{Levels}: we consider morphisms of morphisms of $\ldots$ in forming higher categories. Observation, detection and assignment become more indirect, but ought to play a more significant role.\\ \item \emph{Local to global}: at one level, think of a Grothendieck sheaf on a site.\\ \item \emph{Composition}: composition of morphisms etc.\ in the ordinary sense.\\ \item \emph{Installation}: giving a collection of objects (like ``all groups'') a categorical structure. \end{enumerate} \section{A categorical implementation of the $\mathscr{H}$-principle} \label{sec:categorical} In order to illustrate how the $\mathscr{H}$-principle may be applied in an ordinary categorical setting we take the following example from \cite{B10}: Let $\mathscr{C}$ be a category and $P: \mathscr{C}^{\text{op}} \to \mathrm{Sets}$ a functor called a presheaf. The category of elements of $P$, denoted by \begin{equation*} \int_{\mathscr{C}} P \; \;, \end{equation*} is given as follows. \begin{description}[style=nextline] \item[Objects] $(C,p)$ where $C$ is an object in $\mathscr{C}$ and $p\in P(C)$.\\ \item[Morphisms] $(C',p') \to (C,p)$ are the morphisms $u \colon C' \to C$ in $\mathscr{C}$ such that $Pu(p) = p'$, where $Pu \colon P(C) \to P(C')$. \end{description} For this construction see \cite{MM}. Then a possible way to construct a categorical hyperstructure is as follows: Start with a collection of objects $X_0$.
\begin{description}[style=nextline] \item[Observation] \begin{align*} X_0 \rightsquigarrow &\; \mathscr{C}_0\\ &\; \text{$-$ category}\\ &\\ \mathscr{C}_0 \rightsquigarrow &\; \mathscr{C}_0 \text{ or}\\ &\; \mathrm{Sets}^{\mathscr{C}^{\text{op}}}\\ &\; \mathscr{C}_0^J\\ &\; \vdots\\ &\\ \Omega_0 \colon \mathscr{C}_0^{\text{op}} \to &\; \mathrm{Sets} \text{ (Spaces, categories or other structures)}\\ &\; \text{$-$ presheaf} \end{align*} \item[Binding] \begin{align*} \Gamma_0 = &\;\int_{\mathscr{C}_0} \Omega_0\\ &\; \text{$-$ category of elements}\\ &\\ B_0 \colon \Gamma_0^{\text{op}} \to &\; \mathrm{Sets} \text{ (Spaces,$\ldots$)}\\ &\; \text{$-$ presheaf} \end{align*} \item[Levels] \begin{equation*} \mathscr{C}_1 = \int_{\Gamma_0} B_0 \end{equation*} Iterating this process by making the appropriate choices we get a hyperstructure: \begin{equation*} \mathscr{H} = \{\mathscr{C}_0, \mathscr{C}_1,\ldots, \mathscr{C}_n \} \end{equation*} where \begin{align*} \mathscr{C}_m &= \int_{\Gamma_{m-1}} B_{m-1}\\ &= \underset{\underset{\mathscr{C}_{m-1}}{\int} \Omega_{m-1}}{\int} B_{m - 1} \end{align*} for $1\leq m < n$. \end{description} In category theory it is often very useful to apply the nerve construction to a category (even higher ones) in order to associate a space from which topological information can be extracted. In the present construction the ``nerve'' of $\mathscr{H}$ would mean the nerve of $\mathscr{C}_n$ constructed inductively. The point to be made is that $\nerve(\mathscr{H}) = |\mathscr{H}|$ makes sense and may be useful in this context. \section{From morphisms to bonds} \label{sec:mor-bond} In category theory we consider an ordered pair of objects $(X,Y)$ and assign a set $\mor(X,Y)$ of morphisms. Intuitively the morphisms bind the objects together. We suggest extending the picture to a collection of objects $\mathscr{C} = \{X_i\}_{i \in I}$. The collection could be ordered or non-ordered. We prefer to present here the ideas in the non-ordered case. Hence we assign a set of bonds to the collection \begin{equation*} B = B(\mathscr{C}). \end{equation*} $B$ may sometimes be empty. The elements are mechanisms ``binding'' the collection in some way --- extending morphisms. Let us look at some examples. \subsection*{Relations} A relation $R \subseteq X_1 \times \cdots \times X_n$ gives a bond of tuples of elements $R(x_1,\ldots,x_n)$ if and only if $(x_1,\ldots,x_n) \in R$. \subsection*{Hypergraphs} Here we are given a set of vertices and the edges are subsets of vertices, and they serve as bonds of these vertices. \subsection*{Subspaces} Even more generally, let $A_i$, $i = 1,\ldots,n$ be suitable subspaces of $X$ and $A_i \subseteq X$. $X$ is then a bond of $\{A_i\}$. An interesting case is when $A_i$ and $X$ are open subsets of a larger space $Y$. \subsection*{Simplicial complexes} Given a simplicial complex $K$ based on vertices $\{v_0,\ldots,v_q\}$, the simplices may be interpreted as bonds. \subsection*{Cobordisms} Let $W, \{V_i\}_{i = 1,\ldots,k}$ be manifolds such that $\partial W = \cup V_i$ ($V_i$ are the boundary components). We will then call $W$ a bond of $\{V_i\}$. \subsection*{The basic idea:} Instead of assigning a set $\mor(X,Y)$ to every ordered pair of objects, we will assign a set of bonds to any collection of objects --- finite, infinite or uncountable: \begin{equation*} \bond(X,Y,Z,\ldots) \quad \text{or} \quad \bond(c\in \mathscr{C}) \end{equation*} $\mathscr{C}$ being a collection or parametrized family of objects.
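To make the contrast with ordinary hom-sets concrete, the following small sketch (ours, not taken from the cited references) models a bond assignment on arbitrary sub-collections of objects; the particular objects, hyperedges and the function \texttt{bonds} are hypothetical choices made only for illustration.
\begin{verbatim}
# Illustrative sketch (not from the cited references): a bond assignment
# attaches a set of bonds to *any* sub-collection of objects, whereas a
# hom-set is attached only to an ordered pair of objects.
objects = {"a", "b", "c", "d"}

# Hyperedges play the role of bonds: each edge binds the subset it contains.
hyperedges = [frozenset({"a", "b"}),
              frozenset({"a", "b", "c"}),
              frozenset({"c", "d"})]

def bonds(collection):
    """B(S): the set of hyperedges binding exactly the collection S."""
    S = frozenset(collection)
    return {e for e in hyperedges if e == S}

# In contrast with a simplicial complex, a sub-collection of a bound
# collection need not itself carry a bond:
print(bonds({"a", "b", "c"}))   # one bond
print(bonds({"b", "c"}))        # empty: the sub-collection is not bound
\end{verbatim}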
We may also consider ordered collections or collections with other additional properties. Bonds extend morphisms in categories and higher bonds create levels and extend higher morphisms (natural transformations and homotopies, etc.) in higher categories. This will be the basis for the creation of new global states. Bonds are more general than these examples. But prior to the bond assignment is the process of observation, detection and assignment of properties like: manifolds, subspaces, points, vertices, etc. This will become more important when forming levels. Before studying level formation we will discuss property and bond assignments. Why do we need such an extension from graphs, higher categories, etc. to hyperstructures? In previous papers \cite{B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, B12, B13, B14, B15, Baas2018, BS2016} --- to which we refer the reader --- we have given many examples to illustrate this: higher order links, higher cobordisms and many more examples where we have group interactions instead of just pair interactions. The essence is that many multiagent interactions require a hyperstructure framework. Here we just refer to these previous papers for examples and motivation since our goal here is to discuss what we consider is the essence of a philosophy of the mathematics of higher structures --- outlining the possibilities for new constructions to be carried out in the future. \section{Property and bond assignments} \label{sec:prop} \subsection*{Properties} By properties here we include: properties, states, phases, etc. Collections we consider as subsets of some given set $X$, meaning that a collection $S \in \mathscr{P}(X)$ --- the power set of $X$. In many situations one may just consider structured subsets of $\mathscr{P}(X)$, but the ideas remain the same. Similar to the example in Section \ref{sec:categorical}. We may consider $\mathscr{P}(X)$ as a category with inclusions as morphisms in some cases. Even if the $\Omega$'s and $B$'s (to be defined later in this section) are just general assignments we may ask how they behave with respect to unions and intersections --- even if they are not functors. We may look for analogues of pullback and pushout preservation. In many cases we do not find this and it may lead to new kinds of mathematical structures. This applies to both $\Omega$ and $B$ assignments. We consider assignments \begin{equation*} \Omega \colon \mathscr{P}(X) \to \mathrm{Sets} \end{equation*} (or having target something more general like a higher category). Should $\Omega$ be a functor, meaning that \begin{equation*} S' \subseteq S \end{equation*} implies (contravariantly) \begin{equation*} \Omega(S') \leftarrow \Omega(S) \end{equation*} or (covariantly) \begin{equation*} \Omega(S') \to \Omega(S)? \end{equation*} In many situations this would be natural.\\ What about \begin{equation*} \Omega(S' \cup S) \end{equation*} in terms of $\Omega(S')$ and $\Omega(S)$ where certainly $S' \cap S = \emptyset$ is allowed? \begin{figure} \caption{Collections for state and bond assignments.} \label{fig:interection} \end{figure} How does $\Omega(S_1 \cup S_2)$ relate to $\Omega(S_1)$, $\Omega(S_2)$ and $\Omega(S_1 \cap S_2)$? 
\begin{enumerate} \item\label{item1} If $\Omega$ is a covariant functor then \begin{center} \begin{tikzpicture} \begin{scope}[xshift=-3cm] \diagram{d}{1.5em}{1.5em}{ S_1 \cap S_2 & S_1\\ S_2 & S_1 \cup S_2\\ }; \path (d-1-1) edge[right hook->] (d-1-2) edge[left hook->] (d-2-1) (d-1-2) edge[left hook->] (d-2-2) (d-2-1) edge[right hook->] (d-2-2); \end{scope} \node at (-0.25,0) {$\mapsto$}; \begin{scope}[xshift=3cm] \diagram{dd}{1.5em}{1.5em}{ \Omega(S_1 \cap S_2) & \Omega(S_1)\\ \Omega(S_2) & \Omega(S_1 \cup S_2)\\ }; \path[->] (dd-1-1) edge (dd-1-2) edge (dd-2-1) (dd-1-2) edge (dd-2-2) (dd-2-1) edge (dd-2-2); \end{scope} \end{tikzpicture} \end{center} (where $\Omega(S_1 \cup S_2) = \Omega(S_1) \sqcup_{\Omega(S_1 \cap S_2)} \Omega(S_2)$) which in some situations may be required to be a pushout. If $\Omega$ is contravariant, we may require a pullback: \begin{center} \begin{tikzpicture} \diagram{d}{1.5em}{1.5em}{ \Omega(S_1 \cup S_2) & \Omega(S_2)\\ \Omega(S_1) & \Omega(S_1 \cap S_2)\\ }; \path[->] (d-1-1) edge (d-1-2) edge (d-2-1) (d-1-2) edge (d-2-2) (d-2-1) edge (d-2-2); \end{tikzpicture} \end{center} But there are situations in the general setting where none of these conditions are satisfied. We need to go beyond (co)-presheaves. \item\label{item2} In some situations one may require a function or assignment $\varphi$ such that \begin{equation*} \Omega(S_1\cup S_2) = \varphi \bigl(\Omega(S_1),\Omega(S_2),\Omega(S_1\cap S_2)\bigr). \end{equation*} $\varphi$ may be thought of as a generalized limit in particular in the case of a union of an arbitrary collection of $S$'s. Properties or elements in $\Omega(S_1\cup S_2)$ not in or coming from $\Omega(S_1)$ or $\Omega(S_2)$ may be thought of as ``emergent'' properties. \end{enumerate} The theory should be developed in both cases \ref{item1} and \ref{item2}. In general the only assignment of ``emergent'' properties is by ``observation'' --- ``the whole is more than the sum of its parts.'' \subsection*{Bonds} We now consider collections $S$ with a property $\omega$, $\omega \in \Omega(S)$ and form \begin{equation*} \Gamma = \{ (S,\omega) \mid \omega \in \Omega(S)\}. \end{equation*} We want to study the ``mechanisms'' that can bind the elements of $S$ together to some kind of unity. This is done by an assignment \begin{equation*} B \colon \Gamma \to \mathrm{Sets}, \end{equation*} where $B(S,\omega)$ is the set of bonds of $S$. If $\Omega$ and $B$ are both functors we proceed by known mathematical tools. If one of them or both fail to be functors we need to develop new mathematical methods. If \begin{center} \begin{tikzpicture} \diagram{d}{1.5em}{1.5em}{ (S_1 \cap S_2, \omega_{12}) & (S_1,\omega_1)\\ (S_2,\omega_2) & (S_1 \cup S_2, \omega)\\ }; \path[->] (d-1-1) edge (d-1-2) edge (d-2-1) (d-1-2) edge (d-2-2) (d-2-1) edge (d-2-2); \end{tikzpicture} \end{center} it is sometimes natural to require that \begin{center} \begin{tikzpicture} \diagram{d}{1.5em}{1.5em}{ B(S_1 \cap S_2, \omega_{12}) & B(S_1,\omega_1)\\ B(S_2,\omega_2) & B(S_1 \cup S_2, \omega)\\ }; \path[->] (d-1-1) edge (d-1-2) edge (d-2-1) (d-1-2) edge (d-2-2) (d-2-1) edge (d-2-2); \end{tikzpicture} \end{center} is a pushout, or \begin{center} \begin{tikzpicture} \diagram{d}{1.5em}{1.5em}{ B(S_1 \cap S_2, \omega_{12}) & B(S_1,\omega_1)\\ B(S_2,\omega_2) & B(S_1 \cup S_2, \omega)\\ }; \path[->] (d-1-2) edge (d-1-1) (d-2-1) edge (d-1-1) (d-2-2) edge (d-1-2) edge (d-2-1); \end{tikzpicture} \end{center} a pullback.
But sometimes these conventional notions fail and one may proceed in different ways. Bonds ($B$) (like morphisms) represent the syntactic part of the structure. Observation ($\Omega$) --- missing in (Higher) Category Theory --- represents the semantic part. For property assignments $\Omega$ we may introduce operations: Given $(S_1,\omega_1)$ and $(S_2,\omega_2)$, $\omega_1 \in \Omega(S_1)$ and $\omega_2 \in \Omega(S_2)$, we may define \begin{equation*} \omega_1 \circ \omega_2 = \varphi(\omega_1,\omega_2) \in \Omega(S_1\cup S_2), \quad \text{for $S_1$ and $S_2$ disjoint.} \end{equation*} Whenever a tensor product exists we may require: \begin{equation*} \Omega(S_1\cup S_2) = \Omega(S_1) \otimes \Omega(S_2). \end{equation*} Whenever we introduce several levels, properties will automatically depend on previous properties in a cumulative way and take care of levels. Bonds are different, composing and gluing at different levels. Before elaborating on that, we need to discuss and specify the formation of levels. First let us give two examples. \begin{example} Suppose we are given two sets of agents ($S_1$ and $S_2$) with specific skills (or products). In analogy with functorial assignments we will consider: \begin{enumerate} \item Let $\Omega$ assign collective skills. Then $\Omega(S_1)$ and $\Omega(S_2)$ will not necessarily map into $\Omega(S_1 \cap S_2)$. Hence no ``pullback property''. \item Let $\Omega$ assign individual skills to $S_1$ and $S_2$. Then $\Omega(S_1)$ and $\Omega(S_2)$ will not map into $\Omega(S_1 \cup S_2)$. Hence no ``pushout property''. \item Similarly for bonds, for example, formed by using skills to make certain products. \end{enumerate} \end{example} \begin{example} Suppose we are given two sets of agents with specific skills (the $\Omega$-part) and a mechanism or organization binding them together to produce specific products (the $B$-part). The groups may intersect --- have agents in common --- but the intersection may be unable to produce the products. Hence no restriction maps or ``pullback property'' for bonds. Furthermore, we may consider the union of two groups which will clearly be able to produce the products of the groups, but the union may produce many more (for example composites). Hence, union is not preserved and there is no ``pushout property'' for bonds. \end{example} \section{Levels} \label{sec:levels} In higher categories we move from objects and morphisms to morphisms of morphisms, etc. In the case of continuous maps we pass to homotopies, homotopies of homotopies, etc. This is how higher levels of structure arise. In our situation we will now create higher levels by introducing bonds of bonds, etc. Let us start with collections of objects from a basic set $X_0$. Then we introduce as we described \begin{equation*} \Omega_0, \Gamma_0, B_0. \end{equation*} We let the assignments --- whether functorial or not --- take values in sets, but as we will point out later we may assign much more general structures. (For example, $\infty$-groupoids or $\infty$-categories as suggested by V.~Voevodsky in a private discussion.) In forming the next level we define: \begin{equation*} X_1 = \{b_0 \mid b_0 \in B_0(S_0,\omega_0), S_0\in \mathscr{P}(X_0) \text{ and } \omega_0\in \Omega_0(S_0)\}. \end{equation*} Depending on the situation we can now choose $\Omega_1$ and $B_1$ according to what we want to construct or study and then repeat the construction. This is not a recursive procedure since new properties and bonds arise at each level.
Hence a higher order architecture or structure of order $n$ is described by: \begin{equation*} \mathscr{H}_n \quad \colon \quad \begin{cases} X_0, \quad \Omega_0, \quad \Gamma_0, \quad B_0\\ X_1, \quad \Omega_1, \quad \Gamma_1, \quad B_1\\[0.25cm] \hspace*{2cm} \vdots\\[0.25cm] X_n, \quad \Omega_n, \quad \Gamma_n, \quad B_n. \end{cases} \end{equation*} At the technical level we require that \begin{equation*} B_i(S_i,\omega_i) \cap B_i (S_i',\omega_i') = \emptyset \end{equation*} for $S_i \neq S_i'$ (``a bond knows what it binds'') in order to define the $\partial_i$'s below, or we could just require that the $\partial_i$'s exist. The level architectures are connected by ``boundary'' maps as follows: \begin{equation*} \partial_i \colon X_{i + 1} \to \mathscr{P}(X_i) \end{equation*} defined by \begin{equation*} \partial_i(b_i) = S_i \qquad \text{(dissolving bonds)} \end{equation*} and maps \begin{equation*} I_i \colon X_i \to X_{i + 1} \end{equation*} such that $\partial_i \circ I_i = \id$. $I_i$ gives a kind of ``identity bond''. $B_0$ may also contain identity bonds. The extensions allowing bindings of subsets or subcollections of higher power sets add many new types of architectures of hyperstructures. See \cite{B9,B2} for examples. \begin{definition} We call the system \begin{equation*} \mathscr{H}_n = \{(X_i,\Omega_i,\Gamma_i,B_i,\partial_i) \mid i = 0,\ldots,n\} \end{equation*} a \emph{hyperstructure of order $n$}. \end{definition} This definition is made very general to illustrate the key idea. In order to develop the definition and theory further mathematically additional conditions will have to be added as pointed out in Section \ref{sec:prop} and then it will branch off in several directions depending on the situation under consideration, but with the $\mathscr{H}$-structure as a common denominator. Our intention is also to cover areas and problems outside of mathematics which again may give rise to new mathematics. \section{Composition of bonds} \label{sec:comp} In the study of collections of objects we emphasize the general notion of bonds including relations, functions and morphisms. We get richer structures when we have composition rules of various types of bonds. Such compositions should take into account the higher order architecture giving bonds a level structure. We experience this situation in higher categories where we want to compose morphisms of any order. Suppose that we are given two $n$-morphisms $f$ and $g$. They may not be compatible at level $n$ for composition in the sense that \begin{equation*} \target(f) = \source(g). \end{equation*} But in a precise way we can iterate source and target maps to get down to lower levels, and it may then happen that at level $p$ we have \begin{equation*} \target_p^n(f) = \source_p^n(g). \end{equation*} Hence composition makes sense at level $p$ and we write the composition rule as \begin{equation*} \square_p^n \end{equation*} and the composed object as \begin{equation*} f \, \square_p^n \, g. \end{equation*} In a similar way we can introduce composition rules for bonds in a general hyperstructure $\mathscr{H}$. Let $a_n$ and $b_n$ be bonds at level $n$ in $\mathscr{H}$. 
Then we get to the lower levels via the boundary maps \begin{equation*} \partial_i \colon X_{i + 1} \to \mathscr{P}(X_i) \end{equation*} and search for compatibility in the sense that \begin{equation*} \partial_p \circ \cdots \circ \partial_{n - 1} (a_n) = \partial_p \circ \cdots \circ \partial_{n - 1} (b_n) \end{equation*} or we may just require a weaker condition like \begin{equation*} \partial_p \circ \cdots \circ \partial_{n - 1} (a_n) \cap \partial_p \circ \cdots \circ \partial_{n - 1} (b_n) \neq \emptyset \end{equation*} in order to have a composition defined: \begin{equation*} a_n \, \square_p^n \, b_n \end{equation*} For bonds in a hyperstructure we may even compose bonds at different levels: if $a_m$ and $b_n$ are compatible at level $p$ via boundary maps, we may define \begin{equation*} a_m \, ^m\underset{p}{\square}^n \, b_n \end{equation*} as an $m$-bond for $m \geq n$. Compositional rules are needed and will appear elsewhere. Composition may be thought of as a kind of geometric gluing. We consider the bonds as spaces, binding collections of families of subspaces, these again being bonds, etc. By the ``boundary'' maps we go down to a level where these are compatible, gluable bond spaces along which we may glue the bonds within the type of spaces we consider. This applies for example to higher cobordisms. Compositional rules will be needed, but they will depend on the specific structures under study. For example we may require strict associativity and/or commutativity or we may just require it up to a higher bond. The point we are just trying to make is that there are a lot of choices in the development of the further theory. We have here for notational reasons suppressed the $\omega$'s (properties/states), but they are included in a compatible way. Therefore hyperstructures offer the framework for a new kind of higher order gluing in which the level architecture plays a major role. We will pursue this in the next sections. \section{States} \label{sec:states} Having introduced hyperstructures we may now assign states (properties, etc.) to them: \begin{equation*} \Lambda \colon \mathscr{H} \rightsquigarrow \mathscr{S} \end{equation*} where $\mathscr{S}$ is a structure representing the states --- in fact $\mathscr{S}$ may be a level structure, a hyperstructure in itself. All assignments are made level compatible. Furthermore, $\Lambda$ takes level to level and may even be of a cumulative nature. The important point is assigning states to bonds. This means that \begin{equation*} \Lambda = \{\Lambda_i\}, \qquad \mathscr{S} = \{\mathscr{S}_i\} \end{equation*} and \begin{equation*} \begin{array}{r@{\ }c@{\ }l} \Lambda_0 & \text{takes values in} & \mathscr{S}_n\\ & \vdots &\\ \Lambda_i & \text{takes values in} & \mathscr{S}_{n - i}\\ & \vdots &\\ \Lambda_n & \text{takes values in} & \mathscr{S}_0. \end{array} \end{equation*} The degree of structure preservation may depend on the situation in question. Even if our starting hyperstructure $\mathscr{H}$ is very simple --- like a multilevel decomposition of some space --- it may be very useful to assign rather complex states in order to act on the system. This point is discussed in \cite[Section 5.1 --- $\mathscr{H}$-formation]{B14} where we suggest that $\mathscr{S}$ may be a hyperstructure of higher types, being hyperstructures of hyperstructures $\ldots$ For state assignments there is a plethora of new possibilities, extending assignments in topological quantum field theory (TQFT).
In such a level structure (hyperstructure) of states \begin{equation*} \mathscr{S} = \{\mathscr{S}_0,\mathscr{S}_1,\ldots,\mathscr{S}_n\} \end{equation*} $\mathscr{S}_n$ represents the local states associated with the lowest level bonds $B_0$, and $\mathscr{S}_0$ represents the global states associated with the top bonds $B_n$. As pointed out in \cite{B6,B2,B14} it is important to have level connecting assignments making it possible to pass from local to global states. Of course this is not always possible. We will discuss a way of doing this by using generalized multilevel gluing. We use state here in a general sense including observables and properties as well. The important thing is that in $\mathscr{H}$-structures we have levels of observables, states, properties, etc., not just local and global. \section{Local to global} \label{sec:locglob} Hyperstructures are useful tools in passing from local situations to global ones in collections of objects. In this process the level structure is important. We will here elaborate the discussion of multilevel state systems in \cite{B6} following \cite{B2}. In mathematics we often consider situations locally at open sets covering a space and then glue together basically in one stroke --- meaning there are just two levels, local and global, with no intermediate levels. In many situations dominated by a hyperstructure this is not sufficient. We need a more general hyperstructured way of passing from local to global in general collections. Let us offer two of our intuitions regarding this process. Geometrically we think of a multilevel nested family of spaces, like manifolds with singularities represented by manifolds with multinested boundaries or just like higher dimensional cubes with iterated boundary structure (corners, edges,$\ldots$). With two such structures we may then glue at the various levels of the nesting (Figure \ref{fig:levels}). \begin{figure} \caption{Gluing possibility at various levels} \label{fig:levels} \end{figure} Furthermore, we study how states and properties may be ``globalized'', meaning putting local states coherently together to global states. Biological systems are put together by multilevel structures from cells into tissues, organs etc.\ constituting an organism. Much of biology is about understanding how cell-states determine organismic states. The hyperstructure concept is in fact inspired by biological systems. In order to extend the discussion of multilevel state systems in \cite{B6} we need to generalize and formulate in a hyperstructure context the following mathematical notions (see, for example, \cite{MM}): \begin{itemize} \item Sieve \item Grothendieck Topology \item Site \item Presheaf \item Sheaf \item Descent \item Stack \item Sheaf cohomology \end{itemize} Let us start with a given hyperstructure \begin{align*} \mathscr{H} \quad \colon \quad & \{X_0,\ldots,X_n\}\\ & \{\Omega_0,\ldots,\Omega_n\}\\ & \{B_0,\ldots,B_n\}\\ & \{\partial_0,\ldots,\partial_n\}\\ \end{align*} We will now suggest a series of new definitions. \begin{definition} \label{def:sieve} A \emph{sieve} on $\mathscr{H}$ is given as follows: at the lowest level $X_0$ a sieve $\mathcal{S}$ on a bond $b_0 (=b_0(S_0,\omega_0))$ is given by families of bonds $\{ b_0^{j_0} \}$ (covering families), where the $\beta_1$'s are compositional bonds in the family such that \begin{equation*} \beta_1(\{ b_0^{j_0} \}, b_0) \text{ --- the composition ---} \end{equation*} is also in the family. $b_0$ may also be replaced by a family of bonds.
($b_0$ may also be an identity bond.) Bond composition with $\{b_0^{j_0}\}$ will produce new families in the sieve. A \emph{sieve on $\mathscr{H}$} is then a family of such sieves $(\mathcal{S}_k)_{k = 1,\ldots,n}$ --- one for each level. \end{definition} We postpone connecting the levels until the definition of a Grothendieck topology, but this could also have been added to the sieve definition. \begin{definition} \label{def:Grothendieck_topology} A \emph{Grothendieck topology} on $\mathscr{H}$ is given as follows: first we define a Grothendieck topology for each level of bonds. Consider level $0$: to every bond $b_0$ we assign a collection of sieves $J(b_0)$ such that \begin{itemize} \item[(i)] (maximality), the maximal sieve on $b_0$ is in $J(b_0)$ \item[(ii)] (stability), let $S \in J(b_0)$, $b_1(b_0',b_0)$, then in obvious notation \begin{equation*} b_1^\ast (S) \in J(b_0') \end{equation*} \item[(iii)] (transitivity), let $S \in J(b_0)$ and $R$ any sieve on $b_0,b_0'$ an element of a covering family in $S$, $b_1^\ast (R) \in J (b_0')$ for all $b_1$ with $b_1(b_0',b_0)$, then $R \in J(b_0)$. \end{itemize} We call $J(b_0)$ a $J$-covering of $b_0$. This gives a Grothendieck topology for all levels of bonds, and we connect them to a structure on all of $\mathscr{H}$ by defining in addition an assignment $J$ of $(b_0,\ldots,b_n)$ where $b_i \in \partial_i b_{i + 1}$. $J(b_0,\ldots,b_n)$ consists of families of sieves $\{b_0^{j_0}\} \in J(b_0),\ldots,\{b_n^{j_n}\}\in J(b_n)$ and bonds \begin{equation*} \beta_1,\ldots,\beta_{n + 1} \end{equation*} such that \begin{equation*} \beta_1(b_0,\{b_0^{j_0}\}),\ldots,\beta_{n + 1}(b_n,\{b_n^{j_n}\}) \end{equation*} and $b_i^{j_i} \in \partial_i b_{i + 1}^{j_{i + 1}}$. In a diagram we have \begin{center} \begin{tikzpicture}[descr/.style={fill=white,inner sep=2.5pt}] \diagram{d}{2em}{2.5em}{ b_n & b_{n - 1} & \cdots & b_0\\ \{b_n^{j_n}\} & \{b_{n - 1}^{j_{n - 1}}\} & \cdots & \{b_0^{j_0}\}\\ J(b_n) & J(b_{n - 1}) && J(b_0).\\ }; \path[->,midway,font=\scriptsize] (d-1-1) edge node[above]{$\partial$} (d-1-2) edge[<->] node[right]{$\beta_{n + 1}$} (d-2-1) (d-1-2) edge node[above]{$\partial$} (d-1-3) edge[<->] node[right]{$\beta_n$} (d-2-2) (d-1-3) edge node[above]{$\partial$} (d-1-4) (d-1-4) edge[<->] node[right]{$\beta_1$} (d-2-4) (d-2-1) edge node[above]{$\partial$} (d-2-2) edge[-,white] node[descr,sloped,text=black]{$\in$} (d-3-1) (d-2-2) edge node[above]{$\partial$} (d-2-3) edge[-,white] node[descr,sloped,text=black]{$\in$} (d-3-2) (d-2-3) edge node[above]{$\partial$} (d-2-4) (d-2-4) edge[-,white] node[descr,sloped,text=black]{$\in$} (d-3-4); \end{tikzpicture} \end{center} Clearly there are many possible choices of Grothendieck topologies, and they will be useful in the gluing process and the creation of global states. Examples will be discussed elsewhere, our main point here is to outline the general ideas. \end{definition} \begin{definition} \label{def:site} $(\mathscr{H},J)$ is called a \emph{hyperstructure site} when $J$ is a Grothendieck topology on the hyperstructure $\mathscr{H}$.\sloppy \end{definition} Given \begin{equation*} \mathscr{S} = \{ \mathscr{S}_0, \mathscr{S}_1,\ldots,\mathscr{S}_n\} \end{equation*} $\mathscr{S}_i$ being a hyperstructure and assignments such that \begin{equation*} \begin{array}{r@{\ }c@{\ }l} \Lambda_0 & \text{takes values in} & \mathscr{S}_n\\ & \vdots &\\ \Lambda_i & \text{takes values in} & \mathscr{S}_{n - i}\\ & \vdots &\\ \Lambda_n & \text{takes values in} & \mathscr{S}_0. 
\end{array} \end{equation*} Sometimes we may also assume that $\mathscr{S}$ is organized into a hyperstructure. We assume that we have bond compatibility of the $\Lambda_i$'s, preservation of bond composition and level connecting assignments $\delta_i$ (``dual'' to the $\partial_i$'s and acting on collections of bond ``states'') depending on the Grothendieck topology $J$: \begin{center} \begin{tikzpicture} \diagram{d}{2.5em}{2.5em}{ \mathscr{S}_0 & \mathscr{S}_1 & \cdots & \mathscr{S}_n.\\ }; \path[->,midway,above,font=\scriptsize] (d-1-2) edge[decorate, decoration={snake, amplitude=.4mm, segment length=3mm, post length=1mm}] node{$\delta_1$} (d-1-1) (d-1-3) edge[decorate, decoration={snake, amplitude=.4mm, segment length=3mm, post length=1mm}] node{$\delta_2$} (d-1-2) (d-1-4) edge[decorate, decoration={snake, amplitude=.4mm, segment length=3mm, post length=1mm}] node{$\delta_n$} (d-1-3); \end{tikzpicture} \end{center} The $\delta_i$'s may be cumulative functional or relational assignments, and the $\mathscr{S}_i$'s often have an algebraic structure. In the simplest case all the $\mathscr{S}_i$'s could just be $\mathrm{Sets}$. In defining the $\delta$'s levels matter in a cumulative way and the $\delta$'s may be seen as level connectors and regulators. See also \cite{B14}. We consider the $\Lambda_i$'s as a kind of ``\emph{level presheaves}'' and the $\delta_i$'s giving a kind of ``\emph{global matching families}'' --- between levels in addition to levelwise matching. However, if we have ``functional'' assignment connectors $\hat{\delta}_i$'s on $\mathscr{H}$: \begin{center} \begin{tikzpicture} \diagram{d}{2.5em}{2.5em}{ \mathscr{S}_0 & \mathscr{S}_1 & \cdots & \mathscr{S}_n\\ }; \path[->,midway,above,font=\scriptsize] (d-1-2) edge node{$\hat{\delta}_1$} (d-1-1) (d-1-3) edge node{$\hat{\delta}_2$} (d-1-2) (d-1-4) edge node{$\hat{\delta}_n$} (d-1-3); \end{tikzpicture} \end{center} means that we get a unique state of global bond objects --- like an \emph{amalgamation} for presheaves but here across levels in addition to levelwise amalgamation. Global bonds are ``covered'' as follows (see \cite{B6}) \begin{center} \begin{tikzpicture} \diagram{d}{2.5em}{2.5em}{ \{ b(i_n) \} & \{ b(i_{n - 1},i_n) \} & \cdots & \{ b(i_0,\ldots,i_n) \}\\ }; \path[->,midway,above,font=\scriptsize] (d-1-1) edge node{$\partial_{n -1}$} (d-1-2) (d-1-2) edge node{$\partial_{n - 2}$} (d-1-3) (d-1-3) edge node{$\partial_0$} (d-1-4); \end{tikzpicture} \end{center} and states are being levelwise globalized in a cumulative way by \begin{center} \begin{tikzpicture} \diagram{d}{2.5em}{2em}{ \Lambda_n(\{ b(i_n) \}) & \Lambda_{n - 1}(\{ b(i_{n - 1}, i_n) \}) & \cdots & \Lambda_0 (\{ b(i_0,\ldots,i_n \}).\\ }; \path[->,midway,above,font=\scriptsize] (d-1-2) edge node{$\hat{\delta}_1$} (d-1-1) (d-1-3) edge node{$\hat{\delta}_2$} (d-1-2) (d-1-4) edge node{$\hat{\delta}_n$} (d-1-3); \end{tikzpicture} \end{center} With a slight abuse of notation we write this as \begin{equation*} \Lambda \colon (\mathscr{H},J) \to \mathscr{S} \end{equation*} and define $\Lambda = \{\Lambda_i\}$ as a \emph{``presheaf''} on $(\mathscr{H},J)$ ($\mathscr{P}re(\mathscr{H},J)$) and when \begin{equation*} \Delta = \{\hat{\delta}_i\} \end{equation*} exists we have a unique global bond state. This is like a sheafification condition and we call $(\Delta,\Lambda)$ a \emph{globalizer} of the site $(\mathscr{H},J)$ with respect to $\Lambda$. $\Lambda$ with $\Delta$ extends the sheaf notion here, gluing within levels and between levels. 
\emph{A globalizer is a kind of higher order or hyperstructured sheaf covering all the levels. Dually we may also introduce ``localizers'' in a similar way.} The existence of $\Delta$ contains the global gluing data and hence corresponds to what is often called \emph{descent conditions} and the hyperstructure collection $\mathscr{S}$ extends the notion of a \emph{stack} over $\mathscr{H}$. The ``internal'' $\Omega$-property assignments may also be required to satisfy these globalizing conditions depending on the situation, sometimes we omit them notationally. The details may be worked out in several directions. Topological quantum field theories are examples of this kind of assignments. When higher cobordism categories of manifolds and cobordisms with boundaries, e.g.\ cobordism categories with singularities (see \cite{B6}), are considered the assignments may take values in some ``algebraic'' higher category like higher vectorspaces or higher factorization algebras. Suppose that we have an assignment \begin{equation*} \Lambda \colon \mathscr{H} \to \mathscr{S} \end{equation*} and consider a bond $b_i$ at level $i$ in $\mathscr{H}$: \begin{equation*} \partial_i b_i = \{b_{i - 1}^j\} \quad \text{for all $i$}. \end{equation*} Then a globalizer will give an assignment \begin{equation*} \prod_j \Lambda_{i - 1} (b_{i - 1}^j) \xrightarrow{\delta_{n - i + 1}} \Lambda_i(b_i). \end{equation*} This shows that from a family of ``things'' of one kind, one can make a ``thing'' of another (higher) kind at a higher level. One may view this as a vast generalization of the concept of an operad (see \cite{Leinster}). If the $\mathscr{S}_i$'s have a tensor type product we should require: \begin{equation*} \bigotimes_j \Lambda_{i - 1}(b_{i - 1}^j) \to \Lambda_i (b_i). \end{equation*} Sometimes when it makes sense \begin{equation*} \mathscr{S}_k = \mathscr{S}_{k - 1}, \end{equation*} like often in field theory, we may have \begin{equation*} \Lambda_i (b_i) \in \bigotimes_j \Lambda_{i - 1} (b_{i - 1}^j) \end{equation*} and \begin{center} \begin{tikzpicture} \diagram{d}{2.5em}{5em}{ b_i & \lambda_i\\ \{b_{i - 1}^j\} & \{\lambda_{i - 1}^j\}\\ }; \path[font=\scriptsize] (d-1-1) edge[snake=coil,segment aspect=0,segment amplitude=0.5pt,->] node[midway,left]{$\partial$} (d-2-1) (d-1-1) edge[snake=coil,segment aspect=0,segment amplitude=0.5pt,->] node[midway,above]{$\Lambda_i$} (d-1-2) (d-2-2) edge[snake=coil,segment aspect=0,segment amplitude=0.5pt,->] node[midway,right]{$\delta$} (d-1-2) (d-2-1) edge[snake=coil,segment aspect=0,segment amplitude=0.5pt,->] node[midway,below]{$\Lambda_{i - 1}$} (d-2-2); \end{tikzpicture} \end{center} extending pairings in TQFTs. Also the ``internal'' property and state assignments in a hyperstructure may be considered as extended multilevel field theories \begin{equation*} \Omega_k \colon B_k \to \mathscr{S}_{n - k} \end{equation*} where then $\omega_k \in \Omega_k(b_k)$ and collections $\{(b_k,\omega_k)\}$ form the next level. A generalized field theory in this sense \begin{equation*} \Lambda \colon \mathscr{H} \to \mathscr{S} \end{equation*} may be conceived as a bond between the hyperstructures $\mathscr{H}$ and $\mathscr{S}$. This picture may be extended to bonds of families of $\mathscr{H}$-structures \begin{equation*} B(\{\mathscr{H}_i\}) \end{equation*} where the $\mathscr{H}_i$'s could be a suitable mixture of geometric, topological and algebraic hyperstructures. 
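As a toy illustration of the levelwise assignments just described (ours, under strong simplifying assumptions: two levels, numerical states, and a purely multiplicative $\delta$ standing in for the tensor-type pairing $\bigotimes_j \Lambda_{i-1}(b_{i-1}^j) \to \Lambda_i(b_i)$ above), the following sketch computes the state of a higher bond from the states of the bonds it binds; the names \texttt{delta} and \texttt{globalize} are hypothetical.
\begin{verbatim}
# Toy sketch (ours): a two-level hyperstructure fragment in which a
# "globalizer" delta produces Lambda_1(b) from the Lambda_0-states of the
# level-0 bonds that b binds.

# Level-0 bonds and their locally assigned states (Lambda_0).
level0_states = {"b01": 2.0, "b02": 3.0, "b03": 5.0}

# Boundary map: each level-1 bond dissolves into the level-0 bonds it binds.
boundary = {"b11": ["b01", "b02"], "b12": ["b02", "b03"]}

def delta(child_states):
    """Level-connecting assignment; here simply a product, standing in for
    a tensor-type pairing.  In general delta may be a far more complex rule."""
    out = 1.0
    for s in child_states:
        out *= s
    return out

def globalize(boundary, lower_states):
    """Compute Lambda_1(b) for every level-1 bond b from the Lambda_0-states."""
    return {b: delta(lower_states[c] for c in children)
            for b, children in boundary.items()}

print(globalize(boundary, level0_states))   # {'b11': 6.0, 'b12': 15.0}
\end{verbatim}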
\section{Remarks} \label{sec:rem} \subsection{Installation} \label{sub:inst} This means that we just have a set or collection of objects --- $X$ --- that we want to study and work with. This may be facilitated by organizing $X$ into a hyperstructure $\mathscr{H}(X)$ as argued in previous papers \cite{B13, B12, B10, B8, B9, B5, B6, B2, B1, B15, B14, B7, B11, B3, B4, Baas2018, BS2016}. This is analogues to the useful process of organizing a collection of objects into a category. Then one may put structure assignments on $\mathscr{H}(X)$ again \begin{equation*} \Lambda \colon \mathscr{H}(X) \to \mathscr{S} \end{equation*} and iterate whenever needed. \subsection{$\mathscr{H}$-algebras} \label{sub:Halg} In an $\mathscr{H}$-structure with bonds $\{B_0,B_1,\ldots,B_n\}$ we may define operations or products of bonds by ``gluing.'' If $b_n$ and $b_n'$ are bonds in $B_n$ that are ``gluable'' at level $k$, then we ``glue'' them into a new bond $b_n \, \square_k^n \, b_n'$: \begin{center} \begin{tikzpicture} \diagram{d}{2.5em}{5em}{ b_n & b_n'\\ b_k & b_k'\\ }; \path[font=\scriptsize] (d-1-1) edge[snake=coil,segment aspect=0,segment amplitude=0.5pt,->] node[midway,left]{$\partial \circ \cdots \circ \partial$} (d-2-1) (d-1-2) edge[snake=coil,segment aspect=0,segment amplitude=0.5pt,->] node[midway,right]{$\partial \circ \cdots \circ \partial$} (d-2-2) (d-2-1) edge[out=-30,in=-150,<->] node[midway,below,xshift=6.75em]{``gluable'' (having similar parts to be identified).} (d-2-2); \end{tikzpicture} \end{center} $(\mathscr{H},\{\square_k^n\})$ gives new forms of higher algebraic structures. We have \emph{level operations} $\{\square_k^k\}$ and \emph{interlevel operations} $\{\square_k^n\}$. For geometric objects $X$ and $Y$ one may define a ``fusion'' product \begin{equation*} X \, \square_\mathscr{H} \, Y \end{equation*} by using installed $\mathscr{H}$-structures on $\mathscr{H}(X), \mathscr{H}(Y)$ and $\mathscr{H}(X \sqcup Y)$, see \cite{B2}. As pointed out in the previous section if in an $\mathscr{H}$-structure we are given a bond $b_k$ binding $\{b_{k - 1}^i\}$ the state assignments will give levelwise assignments connected via a globalizer \begin{equation*} \Lambda_{k - 1}(\{b_{k - 1}^i\}) \rightsquigarrow \Lambda_k(b_k). \end{equation*} The globalizers act as generalized pairings connecting levels. In some cases like factorization algebras connecting local to global observables they may be isomorphisms (in perturbative field theories), see \cite{AFR,Ginot}, but not in general. An \emph{$\mathscr{H}$-algebra} will be an $\mathscr{H}$-structure $\mathscr{H}$ with ``fusion'' operations $\square = \{\square_k^n\}$. One may also add a ``globalizer'' (see \cite{B2}) and tensor-type products as just described. The combination of a tensor product and a globalizer is a kind of extension of a ``multilevel operad.'' \subsection{Hidden $\mathscr{H}$-structures} \label{sub:hidden} In addition to the examples mentioned in Section \ref{sec:mor-bond} there are well-known interesting structures that may be viewed as hyperstructures:\\ \begin{itemize} \item[(1)] Higher categories in general with objects, morphisms, morphisms of morphisms ($2$-morphisms), etc., see, for example, Lurie \cite{L1,L2}. 
Globalizers and localizers extend to the ideas of (iterated) spans, cospans and local systems in higher categories, see, for example, Lurie \cite{L1} and Haugseng \cite{H}.\\ \item[(2)] Higher cobordisms, cobordisms with singularities --- cobordisms of cobordisms $\ldots$ with iterated structural boundaries, see \cite{B6,B2}. Observables may be states, tangential properties, cohomological properties,$\ldots$\\ \item[(3)] Syzygies and resolutions in homological algebra are examples of structures of higher relations, see \cite{MacLane}. Hilbert's syzygy theorem states that if $M$ is a finitely generated module over a polynomial ring in $n$ variables over a field, then it has a free resolution of length $\leq n$. In our language: there is an installment of a hyperstructure on $M$ of order $\leq n$. Geometrically we see this for example in Adams resolutions coming from a (co)-homology theory.\\ \item[(4)] Higher spaces may be built up gluing or linking together spaces using (co)-homologically detected properties. For example gluing two spaces through subspaces connected by a map or relation with certain (co)-homological properties. This process may be iterated using possibly new (co)-homology theories forming new levels and one gets spaces with hyperstructures. Hyperstructures offer a method of describing a plethora of new spaces needed in various situations. One may for example take families of general spaces, manifolds or simplicial complexes and organize them into suitable $\mathscr{H}$-structures giving $\mathscr{H}$-spaces, $\mathscr{H}$-manifolds and $\mathscr{H}$ simplicial complexes combining syntax (combinatorics) and semantics ((co-)homology, homotopy, $\ldots$). \end{itemize} \subsection{$\mathscr{H}$-spaces} \label{sub:Hspaces} What is a space? This is an old and interesting question. We will here add some higher (order) perspectives. Often spaces are given by open sets, metrics, etc. They all give rise to bindings of points: open sets, ``binding'' its points, distance binding points, etc. In many contexts (of genes, neurons, links, subsets and subspaces, $\ldots$) it seems more natural to specify the binding properties of space by giving a hyperstructure --- even in addition to an already existing ``space structure''. In order to emphasize the binding aspects of space we suggest that a useful notion of space should be given by a set $X$ and a hyperstructure $\mathscr{H}$ on it. Such a pair $(X,\mathscr{H})$ we will call an \emph{$\mathscr{H}$-space}. It tells us how the points or objects are bound together, see \cite{Baas2018} for an example. Clearly there may be many such hyperstructures on a set. They may all be collected into a larger hyperstructure --- $\mathscr{H}^{\text{Total}}$ --- which in a sense parametrizes the others. Ordinary topological spaces will be of order $0$ with open sets as bonds. Through the bonds one may now study the processes like fusion and fission in the space. Our key idea is that ``spaces'' and ``hyperstructures'' are intimately connected. In neuroscience one studies ``space'' through various types of cells: place-, grid-, border-, speed-cells,$\ldots$, see \cite{B15}. All this spatial information should be put into the framework of a $\mathscr{H}$-spaces with for example firing fields as basic bonds. As pointed out, the \emph{binding} problem fits naturally in here, similarly ``cognitive'' and ``evolutionary'' spaces defined by suitable hyperstructures. Higher cognition should be described by $\mathscr{H}$-spaces as well. 
From a mathematical point of view simplicial complexes are also a kind of hyperstructure based on the vertices and the simplices being bonds. In a simplex all subsets of vertices are subsimplices. We have discussed in \cite{B5,B7} that many bonds do not have this property. For example a Brunnian bond is a bond of say $n$ elements in such a way that $(n - 1)$ are not bound together. These can be realized as Brunnian links of various orders, see \cite{B5, BS2016}. We may therefore suggest the following: \begin{definition} A \emph{Brunnian complex} consists of \begin{itemize} \item[(i)] A set of vertices \item[(ii)] A family of subsets $\mathscr{F}$ --- the set of simplices, such that singletons are in $\mathscr{F}$ and so is $\emptyset$. \end{itemize} \end{definition} This means only certain subsets are simplices, not all of them as in simplicial complexes. \begin{figure} \caption{A Brunnian complex.} \label{fig:brunnian_complex} \end{figure} In Figure \ref{fig:brunnian_complex} we have a $2$nd order Brunnian complex of $9$ vertices and $3$ simplices, see Figure \ref{fig:links} for the corresponding links. \begin{figure} \caption{Links} \label{fig:1stBrunnian} \label{fig:2ndBrunnian} \label{fig:links} \end{figure} \section{Conclusion} \label{sec:conc} The purpose of this paper is to introduce and formulate the basic principles of higher structures occuring in science and nature in general and in mathematics in particular. This suggests extensions of known mathematical theory, but also leads to situations where new mathematical theory has to be developed. This program of Hyperstructures may go in many directions and we just consider this paper as an eye opener of where to go in the future. \section*{Notes on the contributor} \begin{wrapfigure}[8]{l}{3cm} \centering \vspace*{-0.5cm} \includegraphics[width=3cm]{baas} \end{wrapfigure} \noindent \textbf{Nils A.\ Baas} was born in Arendal, Norway, 1946. He was educated at the University of Oslo where he got his final degree in 1969. Later on he studied in Aarhus and Manchester. He was a Visiting Assistant Professor at U. Va. Charlottesville, USA in 1971--1972. Member of IAS, Princeton in 1972--1975 and IHES, Paris in 1975. Associate Professor at the University of Trondheim, Norway in 1975--1977 and since 1977, Professor at the same university till date. He conducted research visits to Berkeley in 1982--1983 and 1989--1990; Los Alamos in 1996; Cambridge, UK in 1997, Aarhus in 2001 and 2004. He was Member IAS, Princeton 2007, 2010, 2013 and 2016. His research interests include: algebraic topology, higher categories and hyperstructures and topological data analysis. \end{document}
\begin{document} \begin{frontmatter} \title{A stochastic differential game for the inhomogeneous $\infty$-Laplace equation} \runtitle{SDG} \begin{aug} \author[A]{\fnms{Rami} \snm{Atar}\corref{}\ead[label=e1]{[email protected]}} and \author[B]{\fnms{Amarjit} \snm{Budhiraja}\thanksref{t1}\ead[label=e2]{[email protected]}} \runauthor{R. Atar and A. Budhiraja} \affiliation{Technion---Israel Institute of Technology and University of North Carolina} \address[A]{Department of Electrical Engineering\\ Technion---Israel Institute of Technology\\ Haifa 32000\\ Israel\\ \printead{e1}} \address[B]{Department of Statistics\\ \quad and Operations Research\\ University of North Carolina\\ Chapel Hill, North Carolina 27599\\ USA\\ \printead{e2}} \end{aug} \thankstext{t1}{Supported in part by Army Research Office Grant W911NF-0-1-0080.} \received{\smonth{8} \syear{2008}} \revised{\smonth{1} \syear{2009}} \begin{abstract} Given a bounded $\mathcaligr{C}^2$ domain $G\subset{\mathbb{R}}^m$, functions $g\in\mathcaligr{C}(\partial G,{\mathbb{R}})$ and $h\in\mathcaligr {C}(\overline G,{\mathbb{R}}\setminus\{0\})$, let $u$ denote the unique viscosity solution to the equation $-2\Delta_\infty u=h$ in $G$ with boundary data $g$. We provide a representation for $u$ as the value of a two-player zero-sum stochastic differential game. \end{abstract} \begin{keyword}[class=AMS] \kwd{91A15} \kwd{91A23} \kwd{35J70} \kwd{49L20}. \end{keyword} \begin{keyword} \kwd{Stochastic differential games} \kwd{infinity-Laplacian} \kwd{Bellman--Isaacs equation}. \end{keyword} \end{frontmatter} \section{Introduction}\label{sec1} \subsection{Infinity-Laplacian and games} For an integer $m\ge2$,\vspace*{1pt} let a bounded $\mathcaligr{C}^2$ domain $G\subset{\mathbb{R}}^m$, functions $g\in\mathcaligr{C}(\partial G,{\mathbb{R}})$ and $h\in\mathcaligr{C}(\overline G,{\mathbb{R}}\setminus\{0\})$ be given. We study a two-player zero-sum stochastic differential game (SDG), defined in terms of an $m$-dimensional state process that is driven by a one-dimensional Brownian motion, played until the state exits the domain. The functions $g$ and $h$ serve as terminal, and, respectively, running payoffs. The players' controls enter in a diffusion coefficient and in an unbounded drift coefficient of the state process. The dynamics are degenerate in that it is possible for the players to completely switch off the Brownian motion. We show that the game has value, and characterize the value function as the unique viscosity solution $u$ (uniqueness of solutions is known from \cite{pssw}) of the equation \begin{equation}\label{07} \cases{-2\Delta_\infty u=h, &\quad in $G$,\cr u=g, &\quad on $\partial G$.} \end{equation} Here, $\Delta_\infty$ is the infinity-Laplacian defined as $\Delta _{\infty}f = (Df)'(D^2f) (Df)/ |Df|^2$, provided $Df \neq0$, where for a $\mathcaligr{C}^2$ function $f$ we denote by $Df$ the gradient and by $D^2f$ the Hessian matrix. Our work is motivated by a representation for $u$ of Peres et al. \cite{pssw} (established in fact in a far greater generality), as the limit, as $\varepsilon\to0$, of the value function $V^\varepsilon $ of a discrete time random turn game, referred to as \textit{Tug-of-War}, in which $\varepsilon$ is a parameter. The contribution of the current work is the identification of a game for which the value function is precisely equal to $u$. The infinity-Laplacian was first considered by Aronsson \cite{Aron} in the study of absolutely minimal (AM) extensions of Lipschitz functions. 
Given a Lipschitz function $u$ defined on the boundary $\partial G$ of a domain $G$, a Lipschitz function $\widehat u$ extending $u$ to $\overline G$ is called an AM extension of $u$ if, for every open $U \subset G$, $\operatorname{Lip}_{\overline U}\widehat u = \operatorname{Lip}_{\partial U}\widehat u$, where for a real function $f$ defined on $F \subset{\mathbb{R}}^m$, $\operatorname{Lip}_F f = {\sup_{x,y \in F, x\neq y}} |f(x) - f(y)|/|x-y|$. It was shown in \cite{Aron} that a Lipschitz function $\widehat u$ on $\overline G$ that is $\mathcaligr{C}^2$ on $G$ is an AM extension of $\widehat u|_{\partial G}$ if and only if $\widehat u$ is infinity-harmonic, namely satisfies $\Delta_{\infty }\widehat u = 0$ in $G$. This connection makes it possible, in some cases, to prove uniqueness of AM extensions via PDE tools. However, due to the degeneracy of this elliptic equation, the classical PDE approach is in general not applicable. Jensen \cite{Jen} showed that an appropriate framework is through the theory of viscosity solutions, by establishing existence and uniqueness of viscosity solutions to the homogeneous version ($h=0$) of (\ref{07}), and showing that if $g$ is Lipschitz then the solution is an AM extension of $g$. In addition to the relation to AM extensions, the infinity-Laplacian arises in a variety of other situations \cite{BEJ}. Some examples include models for sand-pile evolution \cite{Aron2}, motion by mean curvature and stochastic target problems \cite{KoSe,SoTo}. We do not treat the homogeneous equation for reasons mentioned later in this section. The inhomogeneous equation may admit multiple solutions when $h$ assumes both signs \cite{pssw}. Our assumption on $h$ implies that either $h > 0$ or $h < 0$. Uniqueness for the case where these strict inequalities are replaced with weak inequalities is unknown \cite{pssw}. Thus, the assumption we make on $h$ is the minimal one under which uniqueness is known to hold in general (except the case $h=0$). Let us describe the Tug-of-War game introduced in \cite{pssw}. Fix $\varepsilon>0$. Let a token be placed at $x\in G$, and set $X_0=x$. At the $k$th step of the game ($k\ge1$), an independent toss of a fair coin determines which player takes the turn. The selected player is allowed to move the token from its current position $X_{k-1} \in G$ to a new position $X_k$ in $\overline G$, in such a way that $|X_k - X_{k-1}| \le\varepsilon$ (\cite{pssw} requires $|X_k - X_{k-1}| < \varepsilon$ but this is an equivalent formulation in the setting described here). The game ends at the first time $K$ when $X_K\in\partial G$. The associated payoff is given by \begin{equation} \label{PF1051} \mathbf{E} \Biggl[g(X_K) + \frac{\varepsilon^2}{4} \sum_{k=0}^{K-1} h(X_k) \Biggr]. \end{equation} Player I attempts to maximize the payoff and player II's goal is to minimize it. It is shown in \cite{pssw} that the value of the game, defined in a standard way and denoted $V^\varepsilon(x)$, exists, that $V^{\varepsilon}$ converges uniformly to a function $V$ referred to as the ``continuum value function'' and that $V$ is the unique viscosity solution of (\ref{07}) (these results are in fact also proved for the homogeneous case, and in generality greater than the scope of the current paper). The question of associating a game directly with the continuum value was posed and some basic technical challenges associated with it were discussed in \cite{pssw}. Our approach to the question above is via an SDG formulation.
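Before turning to the SDG, it may help to see the discrete game in action. The following Python sketch is an editorial illustration only, not a construction from \cite{pssw}: the domain (the unit disc in ${\mathbb{R}}^2$), the payoffs $g(x)=x_1$ and $h\equiv1$, the two strategies (player I always pushes the token radially outward, player II radially inward) and all parameter values are our own ad hoc choices. The Monte Carlo average it prints is therefore the payoff (\ref{PF1051}) under these particular, non-optimal strategies, not the value $V^\varepsilon$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: G = unit disc in R^2, boundary payoff g, running payoff h > 0.
g = lambda x: x[0]          # terminal payoff on the boundary
h = lambda x: 1.0           # constant running payoff

def tug_of_war(x0, eps):
    # One play of the eps-Tug-of-War with naive strategies:
    # player I moves radially outward, player II radially inward.
    x = np.array(x0, dtype=float)
    total = 0.0
    while np.linalg.norm(x) < 1.0:          # X_k still in the open disc
        total += (eps**2 / 4.0) * h(x)      # running payoff term for step k
        r = np.linalg.norm(x)
        direction = x / r if r > 0 else np.array([1.0, 0.0])
        step = eps if rng.random() < 0.5 else -eps   # fair coin: I wins -> outward
        x = x + step * direction
        if np.linalg.norm(x) >= 1.0:        # moves may not leave the closed disc:
            x = x / np.linalg.norm(x)       # land on the boundary point of the ray
    return g(x) + total

estimate = np.mean([tug_of_war((0.3, 0.2), eps=0.05) for _ in range(2000)])
print(estimate)   # Monte Carlo payoff under these (non-optimal) strategies
\end{verbatim}
Replacing the two fixed rules by optimizing players is exactly what the value $V^\varepsilon$ formalizes, and the SDG below plays the role of its continuum limit.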
To motivate the form of the SDG, we start with the Tug-of-War game and present some formal calculations (a precise definition of the SDG will appear later). Let $\{\xi_k, k \in\mathbb{N}\}$ be a sequence of i.i.d. random variables on some probability space $(\Omega, \mathcaligr{F}, \mathbf{P})$ with $\mathbf{P}(\xi_k = 1) = \mathbf {P}(\xi_k = -1)=1/2$, interpreted as the sequence of coin tosses. Let $\{\mathcaligr{F}_k\}_{k\ge0}$ be a filtration of $\mathcaligr{F}$ to which $\{\xi_k\}$ is adapted and such that $\{\xi_{k+1},\xi_{k+2},\ldots\}$ is independent of $\mathcaligr{F}_k$ for every $k\ge0$. Let $\{a_k\}$, $\{b_k\}$ be $\{\mathcaligr{F}_k\}$-predictable sequences of random variables with values in $\overline{\mathbb{B}_{\varepsilon}(0)} = \{x \in{\mathbb{R}}^m\dvtx |x| \le\varepsilon\}$. These sequences correspond to control actions of players I and II; that is, $a_k$ (resp., $b_k$) is the displacement exercised by player I (resp., player II) if it wins the $k$th coin toss. Associating the event $\{\xi_k = 1\}$ with player I winning the $k$th toss, one can write the following representation for the position of the token, starting from initial state $x$. For $j \in \mathbb{N}$, \[ X_j = x + \sum_{k=1}^j \biggl[ a_k\frac{1+\xi_k}{2}+b_k\frac{1-\xi_k}{2} \biggr] =\sum_{k=1}^j\frac{a_k-b_k}{2}\xi_k+\sum_{k=1}^j\frac{a_k+b_k}{2}. \] We shall refer to $\{X_j\}$ as the ``state process.'' This representation, in which turns are not taken at random but both players select an action at each step, and the noise enters in the dynamics, is more convenient for the development that follows. Let $\varepsilon= 1/\sqrt{n}$ and rescale the control processes by defining, for $t \ge0$, $A^n_t=\sqrt n a_{[nt]}$, $B^n_t=\sqrt n b_{[nt]}$. Consider the continuous time state process $X^n_t = X_{[nt]}$, and define $\{W^n_t\}_{t\ge0}$ by setting $W^n_0=0$ and using the relation \[ W^n_t = W^n_{(k-1)/n}+ \biggl(t-\frac{k-1}{n} \biggr)\sqrt{n} \xi_k,\qquad t\in \biggl(\frac{k-1}{n}, \frac{k}{n} \biggr], k \in\mathbb{N}. \] Then we have \begin{equation}\label{eq330} X^n_t = x + \frac{1}{2} \int_0^t (A^n_s - B^n_s) \,dW^n_s + \frac{1}{2} \int_0^t \sqrt{n}(A^n_s + B^n_s) \,ds. \end{equation} Note that $W^n$ converges weakly to a standard Brownian motion, and since $|A^n_t| \vee|B^n_t| \le1$, the second term on the right-hand side of (\ref{eq330}) forms a tight sequence. Thus, it is easy to guess a substitute for it in the continuous game. Interpretation of the asymptotics of the third term is more subtle, and is a key element of the formulation. One possible approach is to replace the factor $\sqrt{n}$ by a large quantity that is dynamically controlled by the two players. This point of view motivates one to consider the identity (that we prove in Proposition \ref{prop2}) \begin{eqnarray}\label{44} &&-2\Delta_\infty f = \sup_{|b|=1, d \ge0 } \inf_{|a|=1, c\ge0} \biggl\{-\frac{1}{2} (a-b)' (D^2f) (a-b)\nonumber\\ &&\hspace*{148.4pt}{} - (c+d)(a+b)\cdot Df \biggr\},\\ \eqntext{f \in\mathcaligr{C}^2, Df \neq0,} \end{eqnarray} for the following reason. Let $\mathcaligr{H} = \mathcaligr{S}^{m-1} \times[0, \infty)$ where $\mathcaligr{S}^{m-1}$ is the unit sphere in ${\mathbb{R}}^m$. 
The expression in curly brackets is equal to $\mathcaligr{L}^{a,b,c,d}f(x)$, where for $(a,c),(b,d)\in\mathcaligr{H}$, $\mathcaligr{L}^{a,b,c,d}$ is the controlled generator associated with the process \begin{equation}\label{star1050} X_t = x+\int_0^t(A_s-B_s)\,dW_s+\int_0^t(C_s+D_s)(A_s+B_s)\,ds, \qquad t\in[0,\infty),\hspace*{-33pt} \end{equation} and $(A, C)$ and $(B, D)$ are control processes taking values in $\mathcaligr{H}$. Since $\Delta_\infty$ is related to (\ref{eq330}) via the Tug-of-War, and $\mathcaligr{L}^{a,b,c,d}$ to (\ref{star1050}), identity (\ref{44}) suggests to regard (\ref{star1050}) as a formal limit of (\ref{eq330}). Consequently the SDG will have (\ref{star1050}) as a state process, where the controls $(A,C)$ and $(B,D)$ are chosen by the two players. Finally, the payoff functional, as a formal limit of (\ref{PF1051}), and accounting for the extra factor of $1/2$ in (\ref{eq330}), will be given by $\mathbf{E}[\int_0^{\tau} h(X_s) \,ds + g(X_{\tau})]$, where $\tau=\inf\{t\dvtx X_t \notin G \}$ (with an appropriate convention regarding $\tau= \infty$). A precise formulation of this game is given in Section \ref{sec2}, along with a statement of the main result. Section \ref{sec1.3} discusses the technique and some open problems. Throughout, we will denote by $\mathcal{S}(m)$ the space of symmetric $m \times m$ matrices, and by $I_m\in\mathcal{S}(m)$ the identity matrix. A function $\vartheta\dvtx[0,\infty)\to[0,\infty)$ will be said to be a \textit{modulus} if it is continuous, nondecreasing, and satisfies $\vartheta(0)=0$. \subsection{SDG formulation and main result}\label{sec2} Recall that $G$ is a bounded $\mathcaligr{C}^2$ domain in ${\mathbb {R}}^m$, and that $g\dvtx\partial G\to{\mathbb{R}}$ and $h\dvtx\overline G\to{\mathbb {R}}\setminus\{0\}$ are given continuous functions. In particular we have that either $h > 0$ or $h < 0$. Since the two cases are similar, we will only consider $h>0$, and use the notation $\underline h:=\inf_{\overline G} h>0$. Let $(\Omega,\mathcaligr{F},\{\mathcaligr{F}_t\},\mathbf{P})$ be a complete filtered probability space with right-continuous filtration, supporting an $(m+1)$-dimensional $\{\mathcaligr{F}_t\}$-Brownian motion $\overline W=(W,\widetilde W)$, where $W$ and $\widetilde W$ are one- and $m$-dimensional Brownian motions, respectively. Let $\mathbf{E}$ denote expectation with respect to $\mathbf{P}$. Let $X_t$ be a process taking values in ${\mathbb {R}}^m$, given by \begin{equation}\label{01} X_t = x+\int_0^t(A_s-B_s)\,dW_s+\int_0^t(C_s+D_s)(A_s+B_s)\,ds,\qquad t\in[0,\infty),\hspace*{-33pt} \end{equation} where $x\in\overline G$, $A_t$ and $B_t$ take values in the unit sphere $\mathcaligr{S}^{m-1}\subset{\mathbb{R}}^m$, and $C_t$ and $D_t$ take values in $[0,\infty)$. Denote \begin{equation}\label{23} Y^0=(A,C),\qquad Z^0=(B,D). \end{equation} The processes $Y^0$ and $Z^0$ take values in $\mathcaligr{H}=\mathcaligr{S}^{m-1}\times[0,\infty)$. These processes will correspond to control actions of the maximizing and minimizing player, respectively. We remark that,\vspace*{1pt} although $\widetilde W$ does not appear explicitly in the dynamics~(\ref{01}), the control processes $Y^0, Z^0$ will be required to be $\{\mathcaligr{F}_t\}$-adapted, and thus may depend on it. In Section \ref{sec1.3}, we comment on the need for including this auxiliary Brownian motion in our formulation. Let \[ \tau=\inf\{t\dvtx X_t\in\partial G\}. \] Throughout, we will follow the convention that the infimum over an empty set is~$\infty$. 
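As an informal illustration of the dynamics (\ref{01}) (not used anywhere in the arguments), the Python sketch below runs an Euler--Maruyama discretization of the state process for one fixed, ad hoc pair of constant controls, with $G$ taken to be the unit disc; the domain, the controls, the step size and the time cap are all our own choices. It shows, in particular, the degeneracy mentioned above: choosing $A\equiv B$ switches off the Brownian term, leaving a purely controlled drift.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_state(x0, A, B, C, D, dt=1e-3, t_max=10.0):
    # Euler-Maruyama discretization of dX = (A-B) dW + (C+D)(A+B) dt on the
    # unit disc (standing in for G), with constant controls; returns the exit
    # time (capped at t_max) and the state there.  Illustration only.
    x, t = np.array(x0, dtype=float), 0.0
    while np.linalg.norm(x) < 1.0 and t < t_max:
        dW = np.sqrt(dt) * rng.standard_normal()          # one-dimensional noise
        x = x + (A - B) * dW + (C + D) * (A + B) * dt
        t += dt
    return t, x

e1 = np.array([1.0, 0.0])
print(simulate_state((0.0, 0.0), A=e1, B=-e1, C=1.0, D=0.0))  # diffusion along e1, no drift
print(simulate_state((0.0, 0.0), A=e1, B=e1,  C=1.0, D=0.0))  # A = B: noise off, pure drift
\end{verbatim}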
We write \begin{equation} \label{30} X(x,Y^0,Z^0) \qquad [\mbox{resp., } \tau(x,Y^0,Z^0)] \end{equation} for the process $X$ (resp., the random time $\tau$) when it is important to specify the explicit dependence on $(x,Y^0,Z^0)$. If $\tau<\infty$ a.s., then the payoff $J(x, Y^0, Z^0)$ is well defined with values in $(-\infty,\infty]$, where \begin{equation}\label{02} J(x,Y^0,Z^0)=\mathbf{E}\biggl[\int_0^\tau h(X_s)\,ds+g(X_\tau)\biggr] \end{equation} and $X$ is given by (\ref{01}). When $\mathbf{P}(\tau (x,Y^0,Z^0)=\infty)>0$, we set $J(x,Y^0,Z^0)=\infty$, in agreement with the expectation of the first term in (\ref{02}). We turn to the precise definition of the SDG. For a process $H^0=(A,C)$ taking values in $\mathcaligr{H}$, we let $S(H^0)=\operatorname{ess}\sup \sup_{t\in[0,\infty)}C_t$. In the formulation below, each player initially declares a bound $S$, and then plays so as to keep $S(H^0)\le S$. \begin{definition}\label{def1} (i) A pair $H=(\{H^0_t\},S)$, where $S\in{\mathbb{N}}$ and $\{H^0_t\} $ is a process taking values in $\mathcaligr{H}$, is said to be an admissible control if $\{H^0_t\}$ is $\{\mathcaligr{F}_t\}$-progressively measurable, and $S(H^0)\le S$. The set of all admissible controls is denoted by $M$. For $H=(\{H^0_t\},S)\in M$, denote $\mathbf{S}(H)=S$. (ii) A mapping $\varrho\dvtx M\to M$ is said to be a strategy if, for every $t$, \[ \mathbf{P}(H^0_s=\widetilde H^0_s \mbox{ for a.e. } s\in[0,t])=1 \quad\mbox{and}\quad S=\widetilde S \] implies \[ \mathbf{P}(I^0_s=\widetilde I^0_s \mbox{ for a.e. } s\in[0,t])=1 \quad\mbox{and}\quad T=\widetilde T, \] where $(I^0,T)=\varrho[(H^0,S)]$ and $(\widetilde I^0,\widetilde T)=\varrho[(\widetilde H^0,\widetilde S)]$. The set of all strategies is denoted by $\widetilde\Gamma$. For $\varrho \in\widetilde\Gamma$, let $\mathbf{S}(\varrho) = \sup_{H \in M}\mathbf{S}(\varrho[H])$. Let \[ \Gamma= \{\varrho\in\widetilde\Gamma\dvtx \mathbf{S}(\varrho) <\infty\}. \] \end{definition} We will use the symbols $Y$ and $\alpha$ for generic control and strategy\break for the maximizing player, and $Z$ and $\beta$ for the minimizing player. If\break $Y = (Y^0,K), Z =(Z^0,L)\in M$, we sometimes write $J(x,Y,Z) = J(x,(Y^0,\break K),(Z^0,L))$ for $J(x$,$Y^0,Z^0)$. Similar conventions will be used for $X(x, Y, Z)$ and $\tau(x, Y, Z)$. Let \begin{eqnarray*} J^x(Y,\beta) &=& J(x,Y,\beta[Y]), \qquad x\in\overline G, Y\in M, \beta\in \Gamma, \\ J^x(\alpha,Z) &=& J(x,\alpha[Z],Z),\qquad x\in\overline G, \alpha\in\Gamma , Z\in M. \end{eqnarray*} Define analogously $X^x(Y,\beta)$, $X^x(\alpha,Z)$, $\tau^x(Y,\beta)$ and $\tau^x(\alpha,Z)$ via (\ref{30}). Define the lower value of the SDG by \begin{equation}\label{05} V(x)=\inf_{\beta\in\Gamma}\sup_{Y\in M}J^x(Y,\beta) \end{equation} and the upper value by \begin{equation}\label{06} U(x)=\sup_{\alpha\in\Gamma}\inf_{Z\in M}J^x(\alpha,Z). \end{equation} The game is said to have a value if $U=V$. Recall that the infinity-Laplacian is defined by $\Delta_\infty f=p'\Sigma p/|p|^2$, where $f$ is a $\mathcaligr{C}^2$ function, $p=Df$ and $\Sigma=D^2f$, provided that $p\ne0$. Thus, $\Delta_\infty f$ is equal to the second derivative in the direction of the gradient. In the special case where $D^2f(x)$ is of the form $\lambda I_m$ for some real $\lambda$, it is therefore natural to define $\Delta_\infty f(x)=\lambda$ even if $Df(x)=0$ \cite{pssw}. This will be reflected in the definition of viscosity solutions of (\ref{07}), that we state below. 
Let \begin{eqnarray*} \mathcaligr{D}_0 &=& \{(0,\lambda I_m)\in{\mathbb{R}}^m\times\mathcal {S}(m)\dvtx \lambda\in{\mathbb{R}}\}, \\ \mathcaligr{D}_1 &=& ({\mathbb{R}}^m\setminus\{0\})\times\mathcal{S}(m), \\ \mathcaligr{D} &=& \mathcaligr{D}_0\cup\mathcaligr{D}_1 \end{eqnarray*} and \[ \Lambda(p,\Sigma)= \cases{-2\lambda, &\quad $(p,\Sigma)=(0,\lambda I_m)\in \mathcaligr{D}_0$, \vspace*{2pt}\cr -2\dfrac{p'\Sigma p}{|p|^2}, &\quad $(p,\Sigma)\in\mathcaligr{D}_1$.} \] \begin{definition}\label{def3} A continuous function $u\dvtx \overline G\to{\mathbb{R}}$ is said to be a viscosity supersolution (resp., subsolution) of (\ref{07}), if: \begin{longlist} \item for every $x \in G$ and $\varphi\in\mathcaligr{C}^2(G)$ for which $(p,\Sigma):=(D\varphi(x),D^2\varphi(x))\in\mathcaligr{D}$, and $u - \varphi$ has a global minimum [maximum] on $G$ at $x$, one has \begin{equation}\label{24} \Lambda(p,\Sigma)-h(x)\ge0 \qquad [\le0]; \end{equation} and \item $u=g$ on $\partial G$. A viscosity solution is a function which is both a super- and a subsolution. \end{longlist} \end{definition} The result below has been established in \cite{pssw}. \begin{theorem}\label{thpssw} There exists a unique viscosity solution to (\ref{07}). \end{theorem} The following is our main result. \begin{theorem}\label{th1} The functions $U$ and $V$ are both viscosity solutions to (\ref{07}). Consequently, the SDG has a value. \end{theorem} In what follows, we use the terms subsolution, supersolution and solution as shorthand for viscosity subsolution, etc. \subsection{Discussion} \label{sec1.3} We describe here our approach to proving the main result, and mention some obstacles in extending it. A common approach to showing solvability of Bellman--Isaacs (BI) equations [(\ref{07}) can be viewed as such an equation due to (\ref{44})] by the associated value function, is by proving that the value function satisfies a dynamic programming principle (DPP). Roughly speaking, this is an equation expressing the fact that, rather than attempting to maximize their profit by considering directly the payoff functional, the players may consider the payoff incurred up to a time $t$ plus the value function evaluated at the position $X_t$ that the state reaches at that time. Although in a single player setting (i.e., in pure control problems) DPP are well understood, game theoretic settings as in this paper are significantly harder. In particular, as we shall shortly point out, there are some basic open problems related to such DPP. In a setting with a finite time horizon, Fleming and Souganidis \cite{FS} established a DPP based on careful discretization and approximation arguments. We have been unable to carry out a similar proof in the current setting, which includes a payoff given in terms of an exit time, degenerate diffusion and unbounded controls. Swiech \cite{swi} has developed an alternative approach to the above problem that relies on existence of solutions. Instead of establishing a DPP for the value function, the idea of \cite{swi} is to show that any \textit{solution} must satisfy a DPP. To see what is meant by such a DPP and how it is used, consider the equation, $-2\Delta_\infty u+\lambda u=h$ in $G$, $u=g$ on $\partial G$, where $\lambda\ge0$ is a constant, associated with the payoff in (\ref{02}) modified by a discount factor. 
Assume that one can show that whenever $u$ and $v$ are sub- and supersolutions, respectively, then \begin{eqnarray}\label{45} u(x)&\le&\sup_{\alpha\in\Gamma}\inf_{Z\in M}\mathbf{E} \biggl[\int _0^\sigma e^{-\lambda s}h(X_s)\,ds+e^{-\lambda\sigma}u(X_\sigma) \biggr], \\ \label{46} v(x)&\ge&\sup_{\alpha\in\Gamma}\inf_{Z\in M}\mathbf{E} \biggl[\int _0^\sigma e^{-\lambda s}h(X_s)\,ds+e^{-\lambda\sigma}v(X_\sigma) \biggr], \end{eqnarray} for $X=X[x,\alpha[Z],Z]$, $\tau=\tau[x,\alpha[Z],Z]$ and $\sigma =\sigma(t)=\tau \wedge t$. Sending $t \to\infty$ in the above equations, one would formally obtain \begin{equation}\label{ins1155}u(x) \le \sup_{\alpha\in\Gamma}\inf_{Z\in M}\mathbf{E} \biggl[\int_0^{\tau} e^{-\lambda s}h(X_s)\,ds+e^{-\lambda\tau}g(X_{\tau}) \biggr] \le v(x), \end{equation} in particular yielding that if $u=v$ is a solution to the equation then it must equal the upper value function. This would establish unique solvability of the equation by the upper value function, provided there exists a solution. In the case $\lambda> 0$, justifying the above formal limit is straightforward (see \cite{swi}) but the case $\lambda= 0$, as in our setting, requires a more careful argument. Our proofs exploit the uniform positivity of $h$ due to which the minimizing player will not allow $\tau$ to be too large. This leads to uniform estimates on the decay of $\mathbf{P}(\tau> t)$ as $t\to \infty$, from which an inequality as in (\ref{ins1155}) follows readily. This discussion also explains why we are unable to treat the case $h=0$. Establishing DPP as in (\ref{45}), (\ref{46}) is thus a key ingredient in this approach. For a class of BI equations, defined on all of ${\mathbb{R}}^m$, for which the associated game has a bounded action set and a fixed, finite time horizon, such a DPP was proved in Swiech \cite{swi}. In the current paper, although we do not establish (\ref{45}), (\ref{46}) in the above form, we derive similar inequalities (for $\lambda= 0$) for a related bounded action game, defined on $G$. The characterization of the value function for the original unbounded action game is then treated by taking suitable limits. Both \cite{FS} and \cite{swi} require some assumptions on the sample space and underlying filtration. In \cite{FS}, the underlying filtration is the one generated by the driving Brownian motion. The approach taken in \cite{swi}, which the current paper follows, allows for a general filtration as long as it is rich enough to support an $m$-dimensional Brownian motion, independent of the Brownian motion driving the state process [for example, it could be the filtration generated by an $(m+1)$-dimensional Brownian motion]. The reason for imposing this requirement in \cite{swi} is that inequalities similar to (\ref{45}) and (\ref{46}) are proved by first establishing them for a game associated with a nondegenerate elliptic equation, and then taking a vanishing viscosity limit. This technical issue is the reason for including the auxiliary process $\widetilde W$ in our formulation as well. As pointed out in \cite{swi}, the question of validity of the DPP and the characterization of the value as the unique solution to the PDE, under an arbitrary filtration, remains a basic open problem on SDGs. The unboundedness of the action space, on one hand, and the combination of degeneracy of the dynamics and an exit time criterion on the other hand, make it hard to adapt the results of \cite{swi} to our setting. 
In order to overcome the first difficulty, we approximate the original SDG by a sequence of games with bounded action spaces, which are more readily analyzed. For the bounded action game, existence of solutions to the upper and lower BI equations follows from \cite{CKLS}. We show that the solutions to these equations satisfy a DPP similar to (\ref{45}) and (\ref{46}) (Proposition \ref{prop1}). As discussed above, existence of solutions along with the DPP yields the characterization of these solutions as the corresponding value functions. Next, as we show in Lemma \ref{lem5}, the upper and lower value functions for the bounded action games approach the corresponding value functions of the original game, pointwise, as the bounds approach~$\infty$. Moreover, in Lemma \ref{lem4}, we show that any uniform subsequential limit, as the bounds approach $\infty$, of solutions to the BI equation for bounded action games is a viscosity solution of (\ref{07}). The last piece in the proof of the main result is then showing existence of uniform (subsequential) limits. This is established in Theorem \ref{th3} by proving equicontinuity, in the parameters governing the bounds, of the value functions for bounded action games. The proof of equicontinuity is the most technical part of this paper and the main place where the $\mathcaligr{C}^2$ assumption on the domain is used. This is also the place where the possibility of degenerate dynamics close to the exit time needs to be carefully analyzed. The rest of this paper is organized as follows. In Section \ref{sec3}, we prove Theorem~\ref{th1} based on results on BI equations for bounded action SDG. These results are established in Sections \ref{sec4} (equicontinuity of the value functions) and \ref{sec5} (relating the value function to the PDE). Finally, it is natural to ask whether the state process, obtained under $\delta$-optimal play by both players, converges in law as $\delta$ tends to zero. Section \ref{sec6} describes a recently obtained result \cite{AtBu2} that addresses this issue. \section{Relation to Bellman--Isaacs equation}\label{sec3} In this section, we prove Theorem \ref{th1} by relating the value functions $U$ and $V$ to value functions of SDG with bounded action sets, and similarly, the solution to (\ref{07}) to that of the corresponding Bellman--Isaacs equations. Let $p\in{\mathbb{R}}^m$, $p\ne0$ and $S\in\mathcal{S}(m)$ be given, and, for $n\in{\mathbb{N}}$, fix $p_n\in{\mathbb{R}}^m$, $p_n\ne0$ and $S_n\in\mathcal{S}(m)$, such that $p_n\to p$, $S_n\to S$. Denote $\overline p=p/|p|$ and $\overline p_n=p_n/|p_n|$. Let $\{k_n\}$ and $\{l_n\}$ be positive, increasing sequences such that $k_n\to\infty$, $l_n\to\infty$. Denote \begin{equation}\label{09} \Phi(a,b,c,d;p,S)= -\tfrac12 (a-b)'S(a-b)-(c+d)(a+b)\cdot p, \end{equation} and let \begin{eqnarray}\label{18} \Lambda_{kl}^+(p,S)&=& \max_{|b|=1, 0\le d\le l} \min_{|a|=1, 0\le c\le k} \Phi(a,b,c,d;p,S), \\ \label{10} \Lambda_{kl}^-(p,S)&=&\min_{|a|=1, 0\le c\le k} \max_{|b|=1, 0\le d\le l} \Phi(a,b,c,d;p,S). \end{eqnarray} Set \[ \Lambda_n^+(p,S)=\Lambda^+_{k_nl_n}(p,S),\qquad \Lambda_n^-(p,S)=\Lambda^-_{k_nl_n}(p,S). \] \begin{lemma}\label{lem1} One has $\Lambda^+_n(p_n,S_n)\to\Lambda(p,S)$, and $\Lambda^-_n(p_n,S_n)\to\Lambda(p,S)$, as $n\to\infty$. \end{lemma} \begin{pf} We prove only the statement regarding $\Lambda^-_n$, since the other statement can be proved analogously. We omit the superscript ``$-$'' from the notation. Denote $\Phi_n(a,b,c,d) = \Phi(a,b,c,d;p_n,S_n)$.
Let \[ \overline\Lambda_n(a,c)=\max_{|b|=1, 0\le d\le l_n}\Phi_n(a,b,c,d). \] Let $(a^*_n,c^*_n)$ be such that $\Lambda_n^*:=\Lambda_n(p_n,S_n)=\overline\Lambda_n(a^*_n,c^*_n)$. Note that $\Lambda_n^*\le\overline\Lambda_n(\overline p_n,0)$, which is bounded from above as $n\to\infty$, since $(b + \overline p_n)\cdot\overline p_n \ge0$ for all $b \in\mathcaligr{S}^{m-1}$, $n \ge1$. On the other hand, if for some fixed $\varepsilon>0$, $a^*_n\cdot p_n<|p_n|-\varepsilon$ holds for infinitely many $n$, then $\limsup\overline\Lambda_n(a^*_n,c_n)=\infty$ for any choice of $c_n$ contradicting the statement that $\Lambda_n^*$ is bounded from above. This shows, for every $\varepsilon> 0$, \[ |p_n|-\varepsilon\le a^*_n\cdot p_n\le|p_n| \] for all large $n$. In particular, $a^*_n\to\overline p$. Next note that \[ \Lambda_n^*=\overline\Lambda_n(a^*_n,c^*_n)\ge\Phi_n(-\overline p_n,a^*_n,l_n,c^*_n) \ge-\tfrac12 (\overline p_n+a^*_n)'S_n(\overline p_n+a^*_n) \] hence, \[ \liminf\Lambda_n^*\ge-2\overline p'S\overline p=\Lambda(p,S). \] Also, with $(\widetilde b_n,\widetilde d_n)\in\arg\max_{(b,d)}\Phi _n(b,\overline p_n,d,k_n)$, \begin{eqnarray}\label{star1106} \Lambda_n^* &=& \overline\Lambda_n(a^*_n,c^*_n)\le\overline\Lambda_n(\overline p_n,k_n) \nonumber\\ &=&-\tfrac12 (\widetilde b_n-\overline p_n)'S_n(\widetilde b_n-\overline p_n) - (\widetilde d_n + k_n)(\widetilde b_n + \overline p_n) \cdot p_n \\ &\le& -\tfrac12 (\widetilde b_n-\overline p_n)'S_n(\widetilde b_n-\overline p_n).\nonumber \end{eqnarray} If $\widetilde b_n\to-\overline p$ does not hold, then $\liminf\Lambda _n^*=-\infty$ by the first line of (\ref{star1106}) which contradicts the previous display. This shows $\widetilde b_n\to-\overline p$. Hence, from the second line of (\ref{star1106}) \[ \limsup\Lambda_n^*\le-2\overline p'S\overline p=\Lambda(p,S). \] \upqed\end{pf} We now consider two formulations of SDG with bounded controls, the first being based on Definition \ref{def1} whereas the second is more standard. For $k,l\in{\mathbb{N}}$, let \begin{eqnarray*} M_k &=& \{Y\in M\dvtx\mathbf{S}(Y)\le k\}, \\ \Gamma_l &=& \{\beta\in\Gamma\dvtx\mathbf{S}(\beta)\le l\}. \end{eqnarray*} Define accordingly the lower value \begin{equation}\label{20} V_{kl}(x)=\inf_{\beta\in\Gamma_l}\sup_{Y\in M_k}J^x(Y,\beta), \end{equation} and the upper value \begin{equation}\label{21} U_{kl}(x)=\sup_{\alpha\in\Gamma_k}\inf_{Z\in M_l}J^x(\alpha,Z). \end{equation} \begin{definition}\label{def2} (i) A process $\{H_t\}$ taking values in $\mathcaligr{H}$ is said to be a simple admissible control if it is $\{\mathcaligr{F}_t\}$-progressively measurable. We denote by $M^0$ the set of all simple admissible controls, and let $M^0_k=\{H\in M^0\dvtx S(H)\le k\}$. (ii) Given $k,l\in{\mathbb{N}}$, we say that a mapping $\varrho \dvtx M^0_k\to M^0_l$ is a simple strategy, and write $\varrho\in\Gamma^0_{kl}$ if, for every $t$, \[ \mathbf{P}(H_s=\widetilde H_s \mbox{ for a.e. } s\in[0,t])=1 \] implies \[ \mathbf{P}(\varrho[H]_s=\varrho[\widetilde H]_s \mbox{ for a.e. } s\in[0,t])=1. \] \end{definition} For $\beta\in\Gamma^0_{kl}, Y \in M^0_k$, we write $J^x(Y, \beta(Y))$ as $J^x(Y, \beta)$. For $\alpha\in\Gamma ^0_{lk}, Z \in M^0_l$, $J^x(\alpha, Z)$ is defined similarly. For $k,l\in{\mathbb{N}}$, let \begin{eqnarray}\label{50} V^0_{kl}(x)&=&\inf_{\beta\in\Gamma^0_{kl}}\sup_{Y\in M^0_k}J^x(Y,\beta), \\ \label{51} U^0_{kl}(x)&=&\sup_{\alpha\in\Gamma^0_{lk}}\inf_{Z\in M^0_l}J^x(\alpha,Z). \end{eqnarray} The following shows that the two formulations are equivalent. 
\begin{lemma} \label{lem7} For every $k,l$, $V^0_{kl}=V_{kl}$ and $U^0_{kl}=U_{kl}$. \end{lemma} \begin{pf} We only show the claim regarding $V_{kl}$. Let $\beta\in\Gamma_l$. Define $\beta^0\in\Gamma^0_{kl}$ by letting, for every $Y\in M^0_k$, $\beta^0[Y]$ be the process component of the pair $\beta[(Y,k)]$. Clearly, for every $Y\in M^0_{k}$, $J^x((Y,k),\beta)=J^x(Y,\beta^0)$, whence $\sup_{Y\in M_k}J^x(Y,\beta)\ge\sup_{Y\in M^0_k}J^x(Y,\beta^0)$, and $V_{kl}(x)\ge V^0_{kl}(x)$. Next, let $\beta^0\in\Gamma^0_{kl}$. Define $\beta\dvtx M\to M_l$ as follows. Given $Y\equiv(Y^0,K)\equiv(A,C,K)\in M$, let $Y^k=(A,C\wedge k)$, and set $\beta[Y]=(\beta^0[Y^k],l)$. Note that if, for some $K$, $Y^0$ and $\widetilde Y^0$ are elements of $M^0_K$ and $Y^0(s)=\widetilde Y^0(s)$ on $[0,t]$ then $Y^k(s)=\widetilde Y^k(s)$ on $[0,t]$ and so $\beta^0[Y^k]_s=\beta^0[\widetilde Y^k]_s$ on $[0,t]$. By definition of $\beta$, it follows that $\beta\in\Gamma_l$. Also, if $(Y^0,K)\in M_k$ then $K\le k$ and thus $J^x((Y^0,K),\beta)=J^x(Y^0,\beta^0)$. This shows that $\sup_{Y\in M_k}J^x(Y,\beta)\le\sup_{Y^0\in M^0_k}J^x(Y^0,\beta^0)$. Consequently, $V_{kl}(x)\le V^0_{kl}(x)$. \end{pf} Denote $V_n=V_{k_nl_n}$ and $U_n=U_{k_nl_n}$. The following result is proved in Section \ref{sec4}. \begin{theorem}\label{th3} For some $n_0 \in\mathbb{N}$, the family $\{V_n; n \ge n_0\}$ is equicontinuous, and so is the family $\{U_n; n\ge n_0\}$. \end{theorem} Consider the Bellman--Isaacs equations for the upper and, respectively, lower values of the game with bounded controls, namely \begin{eqnarray}\label{08} &&\cases{ \Lambda_n^+(Du,D^2u)-h=0, &\quad in $G$,\cr u=g, &\quad on $\partial G$,} \\ \label{19} &&\cases{\Lambda_n^-(Du,D^2u)-h=0, &\quad in $G$,\cr u=g, &\quad on $\partial G$.} \end{eqnarray} Solutions to these equations are defined analogously to Definition \ref{def3}, with $\Lambda^\pm_n$ replacing $\Lambda$, and where there is no restriction on the derivatives of the test function, that is, $\mathcaligr{D}$ is replaced with ${\mathbb{R}}^m\times\mathcal{S}(m)$. \begin{lemma}\label{lem3} There exists $n_1 \in{\mathbb{N}}$ such that for each $n \ge n_1$, $U_n$ is the unique solution to (\ref{08}), and $V_n$ is the unique solution to (\ref{19}). \end{lemma} \begin{pf} This follows from a more general result, Theorem \ref{th2} in Section \ref{sec5}. \end{pf} \begin{lemma}\label{lem4} Any subsequential uniform limit of $U_n$ or $V_n$ is a solution of (\ref{07}). \end{lemma} \begin{pf} Denote by $U_0$ (resp., $V_0$) a subsequential limit of $U_n$ [$V_n$]. By relabeling, we assume without loss that $U_n$ (resp., $V_n$) converges to $U_0$ [$V_0$]. We will show that $U_0$ and $V_0$ are subsolutions of (\ref{07}). The proof that these are supersolutions is parallel. We start with the proof that $U_0$ is a subsolution. Fix $x_0\in G$. Let $\varphi\in\mathcaligr{C}^2(G)$ be such that $U_0-\varphi$ is strictly maximized at $x_0$. Assume first that $D\varphi(x_0)\ne0$. Since $U_n\to U_0$ uniformly, we can find $\{x_n\}\subset G$, $x_n\to x_0$, where $x_n$ is a local maximum of $U_n-\varphi$ for $n\ge N$. We take $N$ to be larger than $n_1$ of Lemma \ref{lem3}. Since by Lemma \ref{lem3} $U_n$ is a subsolution of (\ref{08}), we have that for $n \ge N$ \[ \Lambda_n^+(D\varphi(x_n),D^2\varphi(x_n))-h(x_n)\le0. \] Thus, by Lemma \ref{lem1}, \[ \Lambda(D\varphi(x_0),D^2\varphi(x_0))-h(x_0)\le0 \] as required. Next, assume that $D\varphi(x_0)=0$ and $D^2\varphi(x_0)=\lambda I_m$ for some $\lambda\in{\mathbb{R}}$. 
In particular, $\varphi(x)=\varphi(x_0)+\frac\lambda2|x-x_0|^2+o(|x-x_0|^2)$. We need to show that \begin{equation} \label{1.1} -2\lambda-h(x_0)\le0. \end{equation} Consider the case $\lambda\ge0$. Fix $\delta>0$ and let $\psi_\delta(x)=\frac{\lambda+\delta}{2}|x-x_0|^2$. Then $U_0-\psi _\delta$ has a strict maximum at $x_0$. Since $U_n\to U_0$ uniformly, we can find $\{x_n\}\subset G$, $x_n\to x_0$, where $x_n$ is a local maximum of $U_n-\psi_\delta$. To prove (\ref{1.1}), it suffices to show that for each $\varepsilon>0$, \begin{equation} \label{1.2} -2(\lambda+\delta)-\sup_{x\in\mathbb{B}_\varepsilon(x_0)}h(x)\le0. \end{equation} To prove (\ref{1.2}), argue by contradiction and assume that it fails. Then there exists $\varepsilon>0$ such that \begin{equation} \label{1.25} -2(\lambda+\delta)-\sup_{x\in\mathbb{B}_\varepsilon(x_0)}h(x)>0. \end{equation} Let $N \ge n_1$ be such that $|x_n-x_0|<\varepsilon$ for all $n\ge N$. Since $U_n$ is a subsolution of (\ref{08}), \begin{equation} \label{1.3} \mu_n:=\Lambda^+_n(D\psi_\delta(x_n),D^2\psi_\delta(x_n))\le h(x_n). \end{equation} Also, \begin{eqnarray}\label{1.325}\qquad \mu_n &=& \max_{|b|=1,0\le d\le l_n}\min_{|a|=1,0\le c\le k_n} \biggl[-\frac12(\lambda+\delta)|a-b|^2\nonumber\\ &&\hspace*{107pt}{} -(\lambda+\delta)(c+d)(a+b)\cdot(x_n-x_0) \biggr] \\ &\ge& \min_{|a|=1,0\le c\le k_n} \biggl[-\frac12(\lambda+\delta) |a-b_n|^2 \biggr] = -2(\lambda+\delta),\nonumber \end{eqnarray} where $b_n=-(x_n-x_0)/|x_n-x_0|$ if $x_n\ne x_0$ and arbitrary otherwise. Thus by (\ref{1.25}), \begin{equation}\label{1.34} \mu_n > h(x_n). \end{equation} However, this contradicts (\ref{1.3}). Hence, (\ref{1.2}) holds and so (\ref{1.1}) follows. Consider now the case $\lambda<0$. Let $\delta>0$ be such that $\lambda+\delta<0$. Let $\psi_\delta$ be as above. Then $U_0-\psi _\delta$ has a strict maximum at $x_0$. Fix $\varepsilon>0$. Then one can find $\gamma<\varepsilon$ such that \begin{equation} \label{1.35}\qquad U_0(x_0)=U_0(x_0)-\psi_\delta(x_0)>U_0(x)-\psi_\delta(x)\qquad \forall 0<|x-x_0|\le\gamma. \end{equation} Thus, one can find $\eta\in{\mathbb{R}}^m$ such that $0<|\eta |<\gamma$ and \begin{equation} \label{1.4} U_0(x_0)>U_0(x)-\psi_\delta(x+\eta)\qquad \forall x\in\partial \mathbb{B}_\gamma(x_0). \end{equation} Let $\psi_{\delta,\eta}(x)=\psi_\delta(x+\eta)$. Let $x_\eta\in\overline{\mathbb{B}_\gamma(x_0)}$ be a maximum point for $U_0-\psi_{\delta,\eta}$ over $\overline{\mathbb{B}_\gamma (x_0)}$. We claim that \begin{equation} \label{1.8} x_\eta\notin\partial\mathbb{B}_\gamma(x_0) \quad\mbox{and}\quad x_\eta\ne x_0-\eta. \end{equation} Suppose the claim holds. Then $D\psi_{\delta,\eta}(x_\eta)\ne0$, and so from the first part of the proof \[ -2(\lambda+\delta)-h(x_\eta)=\Lambda(D\psi_{\delta,\eta}(x_\eta ),D^2\psi_{\delta,\eta}(x_\eta))-h(x_\eta)\le0. \] Since $|x_\eta-x_0|\le\gamma<\varepsilon$, sending $\varepsilon\to 0$ and then $\delta\to0$ yields (\ref{1.1}). We now prove (\ref{1.8}). From (\ref{1.4}) and the fact that $\lambda+\delta<0$, \[ \sup_{x \in\partial\mathbb{B}_\gamma(x_0)}[U_0(x)-\psi_{\delta ,\eta }(x)]<U_0(x_0)\le U_0(x_0)-\psi_{\delta,\eta}(x_0). \] Hence, $x_\eta\notin\partial\mathbb{B}_\gamma(x_0)$. Also \begin{eqnarray*} U_0(x_0-\eta)-\psi_{\delta,\eta}(x_0-\eta)&=&U_0(x_0-\eta )<U_0(x_0)+\psi_\delta(x_0-\eta)\\ &\le& U_0(x_0)-\psi_{\delta,\eta}(x_0), \end{eqnarray*} where we used (\ref{1.35}) and the negativity of the functions $\psi_\delta$ and $\psi_{\delta,\eta}$. This shows that $x_\eta \ne x_0-\eta$, and (\ref{1.8}) follows. 
This completes the proof that $U_0$ is a subsolution of (\ref{07}). Finally, the argument for $V_0$ differs only at one point. If we had $(V_n,\Lambda_n^-)$ instead of $(U_n,\Lambda_n^+)$, then instead of (\ref{1.325}), we could write \begin{eqnarray*} \mu_n &=&\min_{|a|=1,0\le c\le k_n}\max_{|b|=1,0\le d\le l_n} \biggl[-\frac 12(\lambda+\delta)|a-b|^2\\ &&\hspace*{107pt}{}-(\lambda+\delta)(c+d)(a+b) \cdot(x_n-x_0) \biggr] \\ &=& \max_{|b|=1,0\le d\le l_n} \biggl[-\frac12(\lambda+\delta)|a_n-b|^2-(\lambda+\delta )(c_n+d)(a_n+b)\cdot(x_n-x_0) \biggr], \end{eqnarray*} where $(a_n,c_n)$ achieves the minimum, and then by choosing $(b,d)=(-a_n,0)$, \[ \mu_n\ge-2(\lambda+\delta). \] Hence, (\ref{1.34}) is still true. The rest of the argument for the subsolution property of $V_0$ proceeds as that for $U_0$. \end{pf} \begin{lemma}\label{lem5} Fix $x\in\overline{G}$. \begin{longlist} \item One can choose $(k_n,l_n)$ in such a way that $\limsup_{n\to\infty}V_n(x)\le V(x)$. \item One can choose $(k_n,l_n)$ in such a way that $\liminf_{n\to\infty}V_n(x)\ge V(x)$. Similar statements hold for $U_n(x)$ and $U(x)$. \end{longlist} \end{lemma} \begin{pf} We prove (i) and (ii). The statements regarding $U_n(x)$ and $U(x)$ are proved analogously. \begin{longlist} \item Fix $k$. Since $\Gamma= \bigcup_{l\ge1}\Gamma_l$, we have that, given $\varepsilon>0$, \begin{eqnarray*} V(x)&\ge&\inf_{\beta\in\Gamma}\sup_{Y\in M_k}J^x(Y,\beta)\\ &\ge&\inf_{\beta\in\Gamma_l}\sup_{Y\in M_k}J^x(Y,\beta )-\varepsilon\\ &=&V_{kl}(x)-\varepsilon, \end{eqnarray*} for all $l$ sufficiently large. This shows $V(x)\ge\limsup_{l\to\infty}V_{kl}(x)$, and (i) follows. \item Fix $\varepsilon>0$. For each $(k,l)\in{\mathbb{N}}^2$, let $\beta_{kl}\in\Gamma _l$ be such that \begin{equation}\label{22} \sup_{Y\in M_k}J^x(Y,\beta_{kl})\le\inf_{\beta\in\Gamma_{l}}\sup _{Y\in M_k}J^x(Y,\beta)+\varepsilon. \end{equation} Fix $l$. Let $\beta_l$ be defined by \[ \beta_l[Y]=\beta_{kl}[Y],\qquad Y\in M_k\setminus M_{k-1}, k\in {\mathbb{N}}, \] where we define $M_0$ to be the empty set. Then $\beta_l\in\Gamma_{l}$. Since $M = \bigcup_{k \ge1}M_k$, we have that the following holds, provided that $k$ is sufficiently large: \begin{eqnarray*} V(x)&\le&\sup_{Y\in M}J^x(Y,\beta_l)\\ &\le&\sup_{Y\in M_k}J^x(Y,\beta_l)+\varepsilon\\ &=&\max_{j\le k}\sup_{Y\in M_j\setminus M_{j-1}}J^x(Y,\beta _{jl})+\varepsilon \\ &\le&\inf_{\beta\in\Gamma_{l}}\sup_{Y\in M_k}J^x(Y,\beta )+2\varepsilon, \end{eqnarray*} where the last inequality follows from (\ref{22}). This shows that, for every $l$, $V(x)\le\liminf_kV_{kl}(x)$. The result follows.\qed \end{longlist} \noqed\end{pf} \begin{pf*}{Proof of Theorem \protect\ref{th1}} The statement that $U$ and $V$ are solutions of (\ref{07}) follows from Theorem \ref{th3}, Lemmas \ref{lem3}, \ref{lem4} and uniqueness of solutions of (\ref{07}), established in \cite{pssw}. The latter result also yields $U=V$. \end{pf*} \section{Equicontinuity}\label{sec4} In this section, we prove Theorem \ref{th3}. With an eye toward estimates needed in Section \ref{sec5}, we will consider a somewhat more general setting. Thanks to Lemma \ref{lem7} we may, and will, use the value functions (\ref{50}), (\ref{51}), defined using simple controls and strategies (Definition \ref{def2}). Given $X$ defined as in (\ref{01}) for some $Y, Z \in M^0$, we let, for $\gamma\in[0, 1)$, $X^{\gamma} = X + \gamma\widetilde W$. Define $\tau^{\gamma}$ and $J_{\gamma}$ as below (\ref{23}) but with $X$ replaced with $X^{\gamma}$.
Also denote by $U^{\gamma}_{kl}$ and $V^{\gamma}_{kl}$ the expressions in (\ref{50}), (\ref{51}) with $J$ replaced with $J_{\gamma}$. We write $U_n^{\gamma} = U^{\gamma}_{k_nl_n}$, $V_n^{\gamma} = V^{\gamma}_{k_nl_n}$. Theorem \ref{th3} is an immediate consequence of the following more general result. \begin{theorem}\label{th3plus} For some $n_2 \in\mathbb{N}$, the family $\{V_n^{\gamma}, U_n^{\gamma}; n \ge n_2, \gamma\in[0, 1)\}$ is equicontinuous. \end{theorem} In what follows, we will suppress $\gamma$ from the notation unless there is a scope for confusion. We start by showing that the value functions are uniformly\vspace*{1pt} bounded. To this end, fix $a^0\in S^{m-1}$, and note that the constant process $Y^0:=(a^0,1)$ is in $M^0$. \begin{lemma} \label{lem01} There exists a constant $c_1<\infty$ such that \[ \mathbf{E}[\tau(x,Y^0,Z)^2]\le c_1, \qquad x\in\overline G, Z\in M^0, \gamma \in[0,1). \] \end{lemma} \begin{pf} We only present the proof for the case $\gamma= 0$. The general case follows upon minor modifications. Denote by $m_0$ the diameter of $G$. Fix $T > m_0$. By~(\ref{01}), with $\alpha_t=a^0\cdot B_t$, on the event $\tau>T$ one has \[ \int_0^T(1-\alpha_s)\,dW_s+\int_0^T(1+\alpha_s)\,ds\le a^0\cdot (X_T-X_0)<m_0. \] Consider the $\{\mathcaligr{F}_t\}$-martingale, $M_t = \int_0^t (1-\alpha_s)\, dW_s$, with $\langle M\rangle_t = \int_0^t (1-\alpha_s)^2 \,ds$. On the event $\langle M \rangle_T < T$, \[ \int_0^T(1 + \alpha_s) \,ds = 2T - \int_0^T (1-\alpha_s)\,ds \ge T. \] So on the set $\{\langle M \rangle_T < T; \tau> T\}$ we have $|M_T| \ge T - m_0$. Letting $\sigma= \inf\{s\dvtx\langle M \rangle_s \ge T\}$, \begin{eqnarray}\label{ab442} \mathbf{P}(\tau> T; \langle M \rangle_T < T) &\le& \mathbf{P}(|M_{T\wedge\sigma}| \ge T - m_0) \nonumber\\[-8pt]\\[-8pt] &\le&\frac{m_1\mathbf{E} \langle M \rangle^2_{T\wedge\sigma}}{(T-m_0)^4} \le\frac{m_1 T^2}{(T-m_0)^4}.\nonumber \end{eqnarray} We now consider the event $\{\tau> T ; \langle M \rangle_T \ge T\}$. One can find $m_2, m_3 \in(0, \infty)$ such that for all nondecreasing, nonnegative processes $\{\widehat\gamma_t\}$, \begin{equation}\label{ab455} \mathbf{P}\bigl(H_s + \widehat\gamma_s \in(-m_0, m_0); 0 \le s \le T \bigr) \le m_2e^{-m_3 T}, \end{equation} where $H$ is a one-dimensional Brownian motion. Letting $\gamma_t = \int_0^t (1+D_s) (1 + \alpha_s) \,ds$, where $Z = (B,D)$, we see that \[ \{\tau> T; \langle M \rangle_T \ge T\} \subset \{M_s + \gamma_s \in(-m_0, m_0), 0 \le s \le T; \langle M \rangle _T \ge T\}. \] For $u\ge0$, let $S_u = \inf\{s\dvtx\langle M \rangle_s > u\}$. Then, with $\widehat\gamma_s = \gamma_{S_s}$, \[ \mathbf{P}(\tau> T; \langle M \rangle_T \ge T) \le \mathbf{P}\bigl(H_s + \widehat\gamma_s \in(-m_0, m_0); 0 \le s \le T\bigr) \le m_2e^{-m_3 T}, \] where the last inequality follows from (\ref{ab455}). The result now follows on combining the above display with (\ref{ab442}) \end{pf} The inequality $J(x, Y^0, Z) \le|h|_{\infty} \mathbf{E}(\tau(x, Y^0, Z)) + |g|_{\infty}$, where $|h|_{\infty} = {\sup_x} |h(x)|$ and $|g|_{\infty} = {\sup_x} |g(x)|$, immediately implies the following. \begin{corollary} \label{cor01} There exists a constant $c_2<\infty$ such that $|V_n^{\gamma}(x)|\vee |U_n^{\gamma}(x)|\le c_2$, for all $x\in\overline G$, $\gamma\in[0, 1)$ and $n\in{\mathbb{N}}$. \end{corollary} The idea of the proof of equicontinuity, explained in a heuristic manner, is as follows. Let $x_1$ and $x_2$ be in $G$, let $\varepsilon=|x_1-x_2|$, and let $\delta>0$. 
Consider the game with bounded controls for which $V_n$ is the lower value function, for some $n\in{\mathbb{N}}$. Let the minimizing player select a strategy $\beta^n$ that is $\delta$-optimal for the initial position $x_1$; namely $\sup_{Y \in M^0_{k_n}} J^{x_1}(Y, \beta^n) \le V_n(x_1) + \delta$. Denote the exit time by $\tau_1=\tau^{x_1}(Y,\beta^n)$ and the exit position by $\xi_1=X^{x_1}(\tau_1)$. Now, modify the strategy in such a way that the resulting control $Z=\beta^n[Y]$ is only affected for times $t\ge\tau_1$. This way, the payoff incurred remains unchanged. Thus, denoting the modified strategy by $\widetilde\beta^n$, we have, for every $Y\in M^0_{k_n}$, \[ J^{x_1}(Y,\widetilde\beta^n)\le V_n(x_1)+\delta. \] Given a point $\xi_2$ located inside $G$, $\varepsilon$ away from $\xi_1$, and a new state process which, at time $\tau_1$, is located at $\xi_2$, the modified strategy attempts to force this process to exit the domain soon after $\tau_1$ and with a small displacement from $\xi_2$ (provided that $\varepsilon$ is small). Let now the maximizing player select a control $Y^n$ that is $\delta$-optimal for playing against $\widetilde\beta^n$, when starting from $x_2$. This control is modified after the exit time $\tau^{x_2}(Y^n, \widetilde\beta^n)$ in a similar manner to the above. Denoting the modified control by~$\widetilde Y^n$, we have \[ V_n(x_2)\le J^{x_2}(\widetilde Y^n,\widetilde\beta^n)+\delta. \] Hence, $V_n(x_2)-V_n(x_1)\le J^{x_2}(\widetilde Y^n,\widetilde\beta^n)-J^{x_1}(\widetilde Y^n,\widetilde\beta ^n)+2\delta$. One can thus estimate the modulus of continuity of $V_n$ by analyzing the payoff incurred when $(\widetilde Y^n,\widetilde\beta^n)$ is played, considering simultaneously two state processes, starting from $x_1$ and~$x_2$. The form (\ref{01}) of the dynamics ensures that the processes remain at relative position $x_1-x_2$ until, at time $\sigma$, one of them leaves the domain. The difference between the running payoffs incurred up to that time can be estimated in terms of~$\varepsilon$, the modulus of continuity of $h$, and the expectation of $\sigma$. It is not hard to see that the latter is uniformly bounded, owing to Corollary \ref{cor01} and the boundedness of $h$ away from zero. By construction, one of the players will now attempt to force the state process that is still in $G$ to exit. If one can ensure that exit occurs soon after $\sigma$ and with a small displacement (uniformly in $n$), then the running payoff incurred between time $\sigma$ and the exit time is small, and the difference between the terminal payoffs is bounded in terms of $\varepsilon$ and the modulus of continuity of $g$, resulting in an estimate that is uniform in $n$. This argument is made precise in the proof of the theorem. Lemmas \ref{lem02} and \ref{lem03} provide the main tools for showing that, starting at a state near the boundary, each player may force exit within a short time and with a small displacement. To state these lemmas, we first need to introduce some notation. We have assumed that $G$ is a bounded $C^2$ domain in ${\mathbb {R}}^m$. Thus, there exist $\overline\rho\in(0,\frac18)$, $k\in{\mathbb{N}}$, $z_j\in \partial G$, $E_j\in\mathcaligr{O}(m)$, $\xi_j\in C^2({\mathbb{R}}^{m-1})$, $j=1,\ldots,k$, such that, with $\mathbb{B}_j=\mathbb{B}_{\overline\rho}(z_j)$, $j=1,\ldots ,k$, one has $\partial G\subset\bigcup_{j=1}^k\mathbb{B}_j$, and \[ G\cap\mathbb{B}_j=\{E_jy\dvtx y_1>\xi_j(y_2,\ldots,y_m)\}\cap \mathbb{B}_j,\qquad j=1,\ldots,k.
\] Here, $\mathcaligr{O}(m)$ is the space of $m\times m$ orthonormal matrices. Define for $j=1,\ldots,k$, $\widetilde\varphi_j\dvtx{\mathbb{R}}^m\to {\mathbb{R}}$ as \[ \widetilde\varphi_j(y)=y_1-\xi_j(y_2,\ldots,y_m),\qquad y\in {\mathbb{R}}^m. \] Let $\varphi_j(x)=\widetilde\varphi_j(E_j^{-1}x)$, $x\in{\mathbb {R}}^m$. Then $|D\varphi_j(x)|\ge1$, $x\in{\mathbb{R}}^m$, $j=1,\ldots,k$. Furthermore, \[ G\cap\mathbb{B}_j=\{x\dvtx\varphi_j(x)>0\}\cap\mathbb{B}_j,\qquad j=1,\ldots,k. \] Let $0<\rho_0<\overline\rho$ be such that $\partial G\subset\bigcup_{j=1}^k\mathbb{B}_{\rho_0}(z_j)$. For $\varepsilon >0$, denote \[ \mathbf{X}_\varepsilon=\{(x_1,x_2)\dvtx x_1\in\partial G, x_2\in G, |x_1-x_2|\le\varepsilon \}. \] Let $\underline j\dvtx \partial G\to\{1,\ldots,k\}$ be a measurable map with the property \[ x\in\mathbb{B}_{\rho_0}\bigl(z_{\underline j(x)}\bigr)\qquad \mbox{for all } x\in\partial G. \] For existence of such a map see, for example, Theorem 10.1 of \cite{EK}. Then, for every $\varepsilon\le\rho_1:=\frac{\overline\rho-\rho_0}{4}$, \begin{equation}\label{34} (x_1,x_2)\in\mathbf{X}_\varepsilon\qquad\mbox{implies } \overline{\mathbb{B}_{\rho_1}(x_i)}\subset\mathbb{B}_{\underline j(x_1)},\qquad i=1,2. \end{equation} For $j=1,\ldots,k$ and $x_0\in\mathbb{B}_j$, define $\psi^{x_0}_j\dvtx{\mathbb{R}}^m\to{\mathbb{R}}$ as \[ \psi^{x_0}_j(x)=\varphi_j(x)+|x-x_0|^2. \] Also, note that $|D\psi^{x_0}_j|\ge\frac12$ in $\mathbb{B}_j$. Define $\pi_j^{x_0}\dvtx{\mathbb{R}}^m\to\mathcaligr{S}^{m-1}$ such that it is Lipschitz, and \begin{equation}\label{31} \pi_j^{x_0}(x)=-\frac{D\psi^{x_0}_j(x)}{|D\psi^{x_0}_j(x)|},\qquad x\in\mathbb{B}_j. \end{equation} Given a strategy $\beta$ and a point $x_2$, we seek a control $Y=(A,C)$ that forces a state process starting from $x_2$ to exit in a short time and with a small displacement from $x_2$ (provided that $x_2$ is close to the boundary). We would like to determine $Y$ via the functions $\pi_j$ just constructed, in such a way that the following relation holds: \begin{equation}\label{39} A(t)=\pi(X(t)), \qquad C(t)=c^0, \end{equation} where $\pi=\pi^{x_2}_{\underline j(x_1)}$ and $c^0>0$ is some constant. Making $A$ oriented in the negative direction of the gradient of $\varphi_j$ allows us to show that the state is ``pushed'' toward the boundary. The inclusion of a quadratic term in $\psi_j^{x_0}$ ensures in addition that the sublevel sets $\{\psi_j<a\}$ are contained in a small vicinity of $x_0$, provided that $a$ and $\operatorname{dist}(x_0,\partial G)$ are small. The latter property enables us to show that the process does not wander a long way along the boundary before exiting. The difficulty we encounter is that due to the feedback nature of $A$ in (\ref{39}) we cannot ensure (local) existence of solutions of the system of equations (\ref{01}) and (\ref{39}). Take, for example, a strategy $\beta$ that is given as $\beta[Y]_t = b(Y_t)$, $Y \in M^0$, where $b$ is some measurable map from $\mathcaligr{H}$ to $\mathcaligr{H}$. Along with (\ref{01}) and (\ref{39}), this defines $X$ as a solution to an SDE with general measurable coefficients. However, as is well known, the SDE may not admit any solution in this generality. To overcome this problem, we will construct a $Y$ that approximates the $Y$ we seek in (\ref{39}) via a time discretization. Let $\bolds{\Sigma}_\varepsilon$ denote the collection of all quintuples $\bolds{\sigma}=(\sigma,\xi_1,\xi_2,\beta,Y)$ such that $\sigma$ is an a.s.
finite $\mathcaligr{F}_t$ stopping time, $\xi_1$ and $\xi_2$ are $\mathcaligr{F}_\sigma$-measurable random variables satisfying $(\xi _1,\xi_2)\in \mathbf{X}_\varepsilon$ a.s., $\beta\in\Gamma^0$, and $Y\in M^0$. Fix $\gamma \in [0, 1)$. Let $\bolds{\sigma}=(\sigma,\xi_1,\xi_2,\beta,Y)\in\bolds {\Sigma} _{\rho_1}$ be given. Let $j^*(\omega)=\underline j(\xi_1(\omega))$. Denote \[ \Phi\equiv\Phi(\omega,\cdot)=\varphi _{j^*(\omega)},\qquad \Psi\equiv\Psi(\omega,\cdot)=\psi^{\xi_2(\omega)}_{j^*(\omega )},\qquad \Pi\equiv\Pi(\omega,\cdot)=\pi^{\xi_2(\omega)}_{j^*(\omega)}. \] We define a sequence of processes $(X^{(i)},Y^{(i)})_{i\ge0}$ as follows. Let $Y_t^{(0)}\equiv(A_t^{(0)},\break C_t^{(0)})$ be given by \[ Y_t^{(0)} = \cases{Y_t, &\quad $t<\sigma$,\cr (\Pi(\xi_2),c^0), &\quad $t\ge\sigma$.} \] The constant $c^0$ above will be chosen later. Denote $(B^{(0)},D^{(0)})=\beta[Y^{(0)}]$. Define, for $t\ge\sigma$, \begin{eqnarray*} X_t^{(0)} &=& \xi_2+\int\mathbf{1}_{[\sigma,t]}(s) \bigl(\bigl[A^{(0)}_s-B^{(0)}_s\bigr]\,dW_s + \gamma d\widetilde W_s \bigr) \\ &&{} +\int_\sigma^t\bigl[C^{(0)}_s+D^{(0)}_s\bigr]\bigl[A^{(0)}_s+B^{(0)}_s\bigr]\,ds. \end{eqnarray*} The process $X^{(0)}$ can be defined arbitrarily for $t<\sigma$. Set $\eta_0=\sigma$. We now define recursively, for all $i\ge1$, \begin{eqnarray}\hspace*{33pt} \label{33} \eta_i &=& (\eta_{i-1}+\varepsilon) \nonumber\\[-8pt]\\[-8pt] &&{}\wedge\inf \biggl\{ t\ge\eta_{i-1}\dvtx \bigl|X^{(i-1)}_t-X^{(i-1)}_{\eta_{i-1}}\bigr|\ge\varepsilon \mbox { or } \int_{\eta_{i-1}}^t R_s^{(i)}\,ds \ge t-\eta_{i-1} \biggr\},\nonumber \end{eqnarray} where $R^{(i)}_s=[C^{(i-1)}_s+D^{(i-1)}_s]D\Psi(X^{(i-1)}_s)\cdot [A^{(i-1)}_s-\Pi(X_s^{(i-1)})]$, \begin{eqnarray}\label{32} X^{(i)}_t &=& X^{(i-1)}_t, \qquad Y^{(i)}_t=Y^{(i-1)}_t, \qquad 0\le t<\eta_i,\nonumber\\ Y^{(i)}_t &\equiv& \bigl(A^{(i)}_t,C^{(i)}_t\bigr)=\bigl(\Pi\bigl(X^{(i-1)}_{\eta_i}\bigr),c^0\bigr),\qquad t\ge\eta_i,\nonumber\\ \bigl(B^{(i)},D^{(i)}\bigr) &=& \beta\bigl[Y^{(i)}\bigr], \\ X^{(i)}_t &=& X^{(i-1)}_{\eta_i}+\int\mathbf{1}_{[\eta_i,t]}(s) \bigl(\bigl[A_s^{(i)}-B_s^{(i)}\bigr]\,dW_s+ \gamma d\widetilde W_s \bigr)\nonumber\\ &&{} +\int_{\eta_i}^t\bigl[A^{(i)}_s+B^{(i)}_s\bigr]\bigl[C^{(i)}_s+D^{(i)}_s\bigr]\,ds,\qquad t>\eta_i.\nonumber \end{eqnarray} It is easy to check that $\eta_0<\eta_1<\cdots$ and $\eta_i\to \infty$ a.s. Define $X_t=X^{(i)}_t$, $Y_t=Y^{(i)}_t$ if $t\le\eta_i$. Let $\rho=\rho_1^2$ and \begin{eqnarray}\label{4.1} \tau_{\rho} &=& \inf\{t\ge\sigma\dvtx\Psi(X_t)\ge\rho\},\nonumber\\ \tau_G &=& \inf\{t\ge\sigma\dvtx X_t\in G^c\},\\ \tau &=& \tau_{\rho}\wedge\tau_G.\nonumber \end{eqnarray} Define \begin{eqnarray*} \overline Y_t &\equiv& (\overline A_t,\overline C_t)= \cases{Y_t, &\quad $t<\tau$, \cr (a^0,c^0), &\quad $t\ge\tau$,} \\ (\overline B,\overline D)&=&\beta(\overline Y), \end{eqnarray*} where $a^0$ is as fixed at the beginning of the section. Let \[ \overline X_t=\xi_2+\int\mathbf{1}_{[\sigma,t]}(s) ([\overline A_s-\overline B_s]\,dW_s+ \gamma \,d\widetilde W_s ) +\int_\sigma^t[\overline C_s+\overline D_s][\overline A_s+\overline B_s]\,ds. \] We write \[ \overline X=\overline X[\sigma,\xi_1,\xi_2,\beta,Y],\qquad \overline Y=\overline Y[\sigma,\xi_1,\xi_2,\beta,Y]. \] Note that if $\overline\tau_{\rho}$ and $\overline\tau_G$ are defined by (\ref{4.1}) upon replacing $X$ by $\overline X$ then $\overline\tau:=\overline\tau_{\rho}\wedge\overline\tau_G=\tau_{\rho}\wedge \tau _G=\tau$, because $\overline X$ differs from $X$ only after time $\tau$. 
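To see the mechanics of the recursion (\ref{33})--(\ref{32}) in a simple situation, the following Python sketch (illustrative only) runs an Euler discretization in which the control direction is frozen at $\Pi$ of the current state and refreshed once the state has moved by $\varepsilon$ or $\varepsilon$ units of time have elapsed. The third trigger in (\ref{33}) and the adversarial nature of $\beta$ are simplified away (the opponent below is a fixed dummy control), so the sketch illustrates only the construction of the piecewise-frozen feedback $Y$, not the estimates of Lemma \ref{lem02}. The domain (unit disc), the function $\varphi$, the constant $c^0$ and all parameter values are our own choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
eps, c0, dt = 0.05, 1.0, 1e-4        # refresh scale, the constant c^0, Euler step

x2  = np.array([0.0, 0.93])          # starting point close to the boundary of the disc
phi = lambda x: 1.0 - np.linalg.norm(x)                 # boundary-defining function
Psi = lambda x: phi(x) + np.linalg.norm(x - x2) ** 2    # Psi = phi + |x - x2|^2

def Pi(x, hstep=1e-6):
    # Pi = -D Psi / |D Psi|, gradient approximated by central differences.
    grad = np.array([(Psi(x + hstep * e) - Psi(x - hstep * e)) / (2 * hstep)
                     for e in np.eye(2)])
    return -grad / np.linalg.norm(grad)

x, t = x2.copy(), 0.0
A, x_last, t_last = Pi(x2), x2.copy(), 0.0              # frozen feedback direction
B, D = np.array([1.0, 0.0]), 0.0                        # fixed dummy opponent control
while phi(x) > 0.0 and t < 5.0:
    if np.linalg.norm(x - x_last) >= eps or t - t_last >= eps:
        A, x_last, t_last = Pi(x), x.copy(), t          # refresh, as at the times eta_i
    dW = np.sqrt(dt) * rng.standard_normal()
    x = x + (A - B) * dW + (c0 + D) * (A + B) * dt
    t += dt
print("exit time:", round(t, 3), " displacement from x2:",
      round(float(np.linalg.norm(x - x2)), 3))
\end{verbatim}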
We write $\overline\tau=\overline\tau[\sigma,\xi_1,\xi_2,\beta,Y]$. Similar notation will be used for $\overline\tau_{\rho}$ and $\overline\tau_G$. \begin{lemma} \label{lem02} There exists a $c^0 \in(0, \infty)$ and a modulus $\vartheta$ such that for every $\varepsilon\in(0, \rho _1)$ and $\gamma\in[0, 1)$, if $\bolds{\sigma}=(\sigma,\xi_1,\xi_2,\beta ,Y)\in\bolds{\Sigma}_\varepsilon$ and $\overline\tau_G=\overline\tau_G[\bolds{\sigma}]$, one has: \begin{longlist} \item $\mathbf{E}\{\overline\tau_G-\sigma |\mathcaligr{F}_\sigma\}\le \vartheta(\varepsilon)$, \item $\mathbf{E}\{|\overline X-\xi_2|^2_{*,\overline\tau_G} |\mathcaligr {F}_\sigma\}\le\vartheta(\varepsilon)$, where $|\overline X-\xi_2|_{*,\overline\tau_G} = {\sup_{t \in[\sigma, \overline \tau_G]}}|\overline X (t) - \xi_2|$. \end{longlist} \end{lemma} The proof of the lemma is provided after the proof of Theorem \ref{th3plus}. Next, we construct a strategy $\beta^*\in\Gamma^0$ with analogous properties. Here, existence of solutions is not an issue, and discretization is not needed. Fix $(x_1,x_2)\in\mathbf{X}_{\rho_1}$. Let $j=\underline j(x_1)$, $\widetilde\Psi=\psi_j^{x_2}$, and $\widetilde\Pi=\pi_j^{x_2}$. Given $Y=(A,C)\in M^0$, let $\widetilde X$ solve \[ \widetilde X_t=x_2+\int_0^t \bigl([A_s-\widetilde\Pi(\widetilde X_s)]\,dW_s + \gamma \,d\widetilde W_s \bigr)+\int _0^t[C_s+d^0][A_s+\widetilde\Pi (\widetilde X_s)]\,ds, \] where $d^0$ is a constant to be determined later. Let \begin{eqnarray}\label{5.1} \widetilde\tau_{\rho} &=&\inf\{t\dvtx\widetilde\Psi(\widetilde X_t)\ge\rho\},\nonumber\\ \widetilde\tau_G &=&\inf\{t\dvtx\widetilde X_t\in G^c\}, \\ \widetilde\tau &=&\widetilde\tau_{\rho}\wedge\widetilde\tau_G.\nonumber \end{eqnarray} Define $Z^*\in M^0$ as \[ Z^*_s\equiv(B^*_s,D^*_s)= \cases{(\widetilde\Pi(\widetilde X_s),d^0), &\quad $s<\widetilde\tau $,\vspace*{2pt}\cr (a^0, d^0), &\quad $s\ge\widetilde\tau$.} \] Note that $\beta^*[Y](s):=Z^*_s$, $s\ge0$, defines a strategy. Let \[ X^*_t=x_2+\int_0^t ([A_s-B^*_s]\,dW_s + \gamma \,d\widetilde W_s )+\int _0^t[C_s+d^0][A_s+B^*_s]\,ds. \] Define $\tau_{\rho}^*$, $\tau_{G}^*$ and $\tau^*$ by replacing $\widetilde X$ with $X^*$ in (\ref{5.1}), and note that $\tau^*=\widetilde\tau$. To make the dependence explicit, we write \[ X^*=X^*[x_1,x_2,Y,\overline W], \qquad Z^*=Z^*[x_1,x_2,Y,\overline W],\qquad \tau^*=\tau^*[x_1,x_2,Y,W]. \] \begin{lemma} \label{lem03} There exists $d^0 \in(0, \infty)$ and a modulus $\widetilde\vartheta $ such that for all $\varepsilon\in(0, \rho_1)$, $Y\in M^0$, $(x_1,x_2)\in\mathbf{X}_\varepsilon$, if $\tau^*_G=\tau^*_G[x_1,x_2,Y,W]$, one has: \begin{longlist} \item $\mathbf{E}[\tau^*_G]\le\widetilde\vartheta(\varepsilon)$, \item $\mathbf{E}[|X^*-x_2|^2_{*,\tau^*_G} ]\le\widetilde\vartheta (\varepsilon)$, where $|X^*-x_2|_{*,\tau^*_G} = {\sup_{t \in[0, \tau^*_G]}}|X^*(t) - x_2|$. \end{longlist} \end{lemma} The proof of Lemma \ref{lem03} is very similar to (in fact somewhat simpler than) the proof of Lemma \ref{lem02}, and therefore will be omitted. \begin{figure}\label{figure20} \end{figure} If $\sigma$ is an a.s.
finite $\{\mathcaligr{F}_t\}$-stopping time and $(\xi_1,\xi_2)$ are $\mathcaligr{F}_\sigma$-measurable random variables such that $(\xi_1,\xi_2)\in\mathbf{X}_{\rho_1}$ a.s., then we define the $\mathcaligr{G}_t=\mathcaligr{F}_{t+\sigma}$ adapted processes \begin{eqnarray*} \overline X{}^*_t &=& X^*[\xi_1,\xi_2,\widehat Y_\sigma,\widehat W_\sigma](t), \\ \overline Z{}^*_t &=& Z^*[\xi_1,\xi_2,\widehat Y_\sigma,\widehat W_\sigma](t), \end{eqnarray*} where $\widehat Y_\sigma(t)=Y(t+\sigma)$ and $\widehat W_\sigma (t)=\overline W(t+\sigma)-\overline W(\sigma)$, $t\ge0$. To make the dependence explicit, write \[ \overline X{}^*=\overline X{}^*[\sigma,\xi_1,\xi_2,Y],\qquad \overline Z{}^*=\overline Z{}^*[\sigma,\xi_1,\xi_2,Y]. \] \begin{pf*}{Proof of Theorem \protect\ref{th3}} Fix $x_1,x_2\in G$ and $\gamma\in[0,1)$. We will suppress $\gamma$ from the notation. Assume that $|x_1-x_2|=\varepsilon<\rho_1$, so that Lemmas \ref{lem02} and \ref{lem03} are in force (see Figure \ref{figure20}). Let $n_0$ be large enough so that $l_n, k_n \ge\max\{c^0, d^0\}$ for all $n \ge n_0$. Given $\delta\in(0,1)$ and $n\ge n_0$, let $\beta^n\in\Gamma^0_{k_nl_n}$ be such that \[ \sup_{Y\in M^0_{k_n}}J^{x_1}(Y,\beta^n)-\delta\le V_n(x_1)\le c_1. \] For $Y\in M^0$ write $\tau^{1,n}(Y):=\tau^{x_1}(Y,\beta^n)$ and $X^{1,n}(Y):=X^{x_1}(Y,\beta^n)$. Note that \[ \underline h \mathbf{E}[\tau^{1,n}(Y)]-|g|_\infty\le c_1+1, \] hence for every $n$ and every $Y\in M^0_{k_n}$, \begin{equation}\label{35} \mathbf{E}[\tau^{1,n}(Y)]\le m_1, \end{equation} where $m_1<\infty$ is a constant that does not depend on $n$. Define $\widetilde\beta^n\in\Gamma^0_{k_nl_n}$ as follows. For $Y\in M^0$, let \begin{eqnarray*} \xi_1^{1,n}(Y)&=&X^{1,n}_{\tau^{1,n}(Y)}(Y),\qquad \xi_2^{1,n}(Y)=\xi_1^{1,n}(Y)+x_2-x_1, \\ \widetilde\beta^n[Y]_t &=& \cases{\beta^n[Y]_t, &\quad $t<\tau^{1,n}(Y)$,\vspace*{2pt}\cr \overline Z{}^*[\tau^{1,n}(Y),\xi_1^{1,n}(Y),\xi_2^{1,n}(Y),Y], &\quad $t\ge\tau^{1,n}(Y), \xi_2^{1,n}(Y)\in\overline G$,\cr \mbox{arbitrarily defined}, &\quad $t\ge\tau^{1,n}(Y), \xi _2^{1,n}(Y)\in\overline G^c$.} \end{eqnarray*} Note that for every $Y\in M^0_{k_n}$, $J^{x_1}(Y,\beta^n)=J^{x_1}(Y,\widetilde\beta^n)$. Next, choose $Y^n\in M^0_{k_n}$ such that \[ V_n(x_2)\le\sup_{Y\in M^0_{k_n}}J^{x_2}(Y,\widetilde\beta^n) \le J^{x_2}(Y^n,\widetilde\beta^n)+\delta. \] Let $\tau^{2,n}=\tau^{x_2}(Y^n,\widetilde\beta^n)$, and $X^{2,n}=X^{x_2}(Y^n,\widetilde\beta^n)$. Let \[ \xi_2^{2,n}=X^{2,n}_{\tau^{2,n}},\qquad \xi_1^{2,n}=\xi_2^{2,n}+x_1-x_2. \] Define $\widetilde Y^n\in M^0_{k_n}$ as \[ \widetilde Y^n_t=\cases{Y^n_t, &\quad $t<\tau^{2,n}$,\vspace*{2pt}\cr \overline Y[\tau^{2,n},\xi_2^{2,n},\xi_1^{2,n},\widetilde\beta ^n,Y^n](t), &\quad $t\ge\tau^{2,n}, \xi_1^{2,n}\in\overline G$,\vspace*{2pt}\cr \mbox{arbitrarily defined}, &\quad $t\ge\tau^{2,n}, \xi_1^{2,n}\in \overline G^c$.} \] Note that $J^{x_2}(Y^n,\widetilde\beta^n)=J^{x_2}(\widetilde Y^n,\widetilde\beta^n)$. Thus \begin{equation} \label{8.1} V_n(x_2)-V_n(x_1)-2\delta\le J^{x_2}(\widetilde Y^n,\widetilde\beta^n)- J^{x_1}(\widetilde Y^n,\widetilde\beta^n). \end{equation} For $k=1,2$, let \[ \sigma^{k,n}=\tau^{x_k}(\widetilde Y^n,\widetilde\beta^n),\qquad \widetilde X^{k,n}=X^{x_k}(\widetilde Y^n,\widetilde\beta^n),\qquad \Xi^{k,n}=\widetilde X^{k,n}_{\sigma^{k,n}}. \] For $m_0\ge0$, let $\vartheta_g(m_0)=\sup\{|g(x)-g(y)|\dvtx x,y\in \partial G,|x-y|\le m_0\}$ and $\vartheta_h(m_0)=\sup\{|h(x)-h(y)|\dvtx x,y\in G,|x-y|\le m_0\}$. 
Using (\ref{35}), the right-hand side of (\ref{8.1}) can be bounded by \begin{equation}\hspace*{33pt} \label{8.2} \mathbf{E}\vartheta_g(|\Xi^{1,n}-\Xi^{2,n}|)+c_3\vartheta _h(\varepsilon)+ |h|_{\infty} \mathbf{E}[(\sigma^{1,n}\vee\sigma^{2,n})-(\sigma^{1,n}\wedge \sigma^{2,n})]. \end{equation} On the set $\sigma^{1,n}\le\sigma^{2,n}$, we have $|\Xi^{1,n}-\Xi^{2,n}| \le\varepsilon+|\widetilde X_{\sigma^{1,n}}^{2,n}-\widetilde X_{\sigma^{2,n}}^{2,n}|$. Hence, by Lem\-ma \ref{lem03}(ii), \[ \mathbf{E} \bigl[|\Xi^{1,n}-\Xi^{2,n}|^2 \mathbf{1}_{\{\sigma ^{1,n}\le\sigma^{2,n}\} } \bigr]\le\vartheta_1(\varepsilon) \] for some modulus $\vartheta_1$. Using Lemma \ref{lem02}(ii), a similar estimate holds on the complement set, and consequently, the first term of (\ref{8.2}) is bounded by $\vartheta_2(\varepsilon)$, for some modulus $\vartheta_2$. By Lemmas \ref{lem02}(i) and \ref{lem03}(i), the last term of (\ref{8.2}) is bounded by $|h|_{\infty}(\vartheta(\varepsilon)+\widetilde\vartheta (\varepsilon))$. Hence, $V_n(x_2)-V_n(x_1)\le2\delta+\vartheta_3(|x_1-x_2|)$ for some modulus $\vartheta_3$, and the equicontinuity of $\{V_n^{\gamma}; n,\gamma\}$ follows on sending $\delta\to0$. The proof of equicontinuity of $\{U_n^{\gamma}; n,\gamma\}$ is similar, and therefore omitted. \end{pf*} \begin{pf*}{Proof of Lemma \protect\ref{lem02}} We will only present the proof for the case $\gamma= 0$. The general case follows upon minor modifications. Denote \[ \psi_{1,\infty}={\sup_{x_0,x\in\overline G}\sup_j}|D\psi^{x_0}_j(x)|,\qquad \psi_{2,\infty}={\sup_{x_0,x\in\overline G}\sup_j}|D^2\psi^{x_0}_j(x)|, \] and let $\varphi_{1,\infty}$, $\varphi_{2,\infty}$ be defined analogously. Let $\varepsilon>0$ and $\bolds{\sigma}\equiv(\sigma,\xi_1,\xi _2,\beta ,Y)\in\bolds{\Sigma}_\varepsilon$ be given, let $\overline X=\overline X[\bolds{\sigma}]$, $\overline Y=\overline Y[\bolds{\sigma}]$, $\overline\tau_\rho=\overline\tau_\rho[\bolds{\sigma}]$, $\overline\tau _G=\overline \tau_G[\bolds{\sigma}]$, and $\overline\tau=\overline\tau[\bolds{\sigma}]$. Let \[ \overline\tau_0=\inf\{t\ge\sigma\dvtx\Psi(\overline X_t)\le0\},\qquad \overline\tau_B=\inf\{t\ge\sigma\dvtx\overline X_t\notin\mathbb{B}_{j^*}\}, \] where we recall that $j^* = \underline{j}(\xi_1)$. We have $\overline\tau\equiv\overline\tau_{\rho}\wedge\overline\tau_G\le\overline \tau_{\rho}\wedge\overline\tau_0$, because $\Psi\ge\Phi$. Also, by (\ref{34}), $\overline\tau\le\overline \tau_B$. By It\^{o}'s formula, for $t>\sigma$, \begin{eqnarray*} \Psi(\overline X_t) &=&\Phi(\xi_2)+\int\mathbf{1}_{[\sigma,t]}(s)D\Psi(\overline X_s)[\overline A_s-\overline B_s]\,dW_s \\ &&{} +\int_\sigma^tD\Psi(\overline X_s)[\overline A_s+\overline B_s][\overline C_s+\overline D_s]\,ds \\ &&{} +\frac12\int_\sigma^t[\overline A_s-\overline B_s]'D^2\Psi(\overline X_s)[\overline A_s-\overline B_s]\,ds. \end{eqnarray*} For $t\le\overline\tau$, using (\ref{31}) and (\ref{32}), \begin{eqnarray*} D\Psi(\overline X_s)[\overline A_s-\overline B_s] &=& D\Psi(\overline X_s)[\Pi(\overline X_s)-\overline B_s]+D\Psi(\overline X_s)[\overline A_s-\Pi (\overline X_s)] \\ &=&-|D\Psi(\overline X_s)| (1-\alpha_s-\delta_s), \end{eqnarray*} where $\alpha_s=-|D\Psi(\overline X_s)|^{-1}D\Psi(\overline X_s)\cdot\overline B_s$, and $\delta_s=-\Pi(\overline X_s)\cdot[\overline A_s-\Pi(\overline X_s)]$. Note that $|\alpha_s|\le1$. 
Moreover, using the inequality $|\frac{v}{|v|}\cdot(\frac{u}{|u|}-\frac{v}{|v|})|\le 2|v|^{-1}|u-v|$ along with (\ref{33}), recalling the definition of $\overline A$ and the fact $|D\Psi (\overline X_s)|\ge1/2$, we see that \[ |\delta_s|\le4\varepsilon\psi_{2,\infty}. \] Furthermore, \begin{eqnarray*} &&D\Psi(\overline X_s)[\overline A_s+\overline B_s][\overline C_s+\overline D_s] \\ &&\qquad= D\Psi(\overline X_s)[\Pi(\overline X_s)+\overline B_s][\overline C_s+\overline D_s]+D\Psi (\overline X_s)[\overline A_s-\Pi(\overline X_s)][\overline C_s+\overline D_s] \\ &&\qquad= -|D\Psi(\overline X_s)| (1+\alpha_s)(c^0+\overline D_s)+e_s, \end{eqnarray*} where, by (\ref{33}), for $\sigma\le t_1 \le t_2 \le\overline\tau$ \[ \int_{t_1}^{t_2}e_s\,ds\le\varepsilon+t_2-t_1. \] Finally, we can estimate \[ p_s:=\tfrac12[\overline A_s-\overline B_s]'D^2\Psi(\overline X_s)[\overline A_s-\overline B_s] \] by $|p_s|\le2\psi_{2,\infty}$. Shifting time by $\sigma$, we denote $\mathcaligr{G}_t=\mathcaligr {F}_{t+\sigma}$, $\check W_t=W_{t+\sigma}-W_\sigma$, and \[ (\check X_t,\check D_t,\check\alpha_t,\check\delta_t,\check e_t,\check p_t) =(\overline X_{t+\sigma},\overline D_{t+\sigma},\alpha_{t+\sigma},\delta _{t+\sigma}, e_{t+\sigma },p_{t+\sigma}). \] Denote also $m_t=|D\Psi(\check X_s)|$, let $M$ be the $\mathcaligr{G}_t$-martingale \[ M_t=-\int_0^tm_s(1-\check\alpha_s-\check\delta_s)\,d\check W_s \] and set \begin{eqnarray}\label{36}\quad \mu_t:\!&=&\langle M\rangle_t=\int_0^t m_s^2(1-\check\alpha_s-\check \delta_s)^2\,ds, \nonumber\\ P_t&=&-\int_0^tm_s(1+\check\alpha_s)(c^0+\check D_s)\,ds,\qquad Q_t=\int_0^t(\check e_s+\check p_s)\,ds,\\ \Psi_t &=&\Psi(\check X_t).\nonumber \end{eqnarray} Combining the above estimates, we have for $0\le s \le t\le\overline\tau -\sigma$, \begin{eqnarray} \label{11.1} \Psi_t &=& \Psi_0+M_t+P_t+Q_t, \\ \label{38} Q_t-Q_s&\le&\varepsilon+r(t-s), \end{eqnarray} where $r=2\psi_{2,\infty}+1$. Note that $m_t\ge1/2$ for $s\le\overline\tau_B-\sigma$, and recall that $\overline\tau_B\ge\overline\tau\equiv\overline\tau_{\rho}\wedge\overline\tau _G$. We have for $t\le \overline\tau-\sigma$, assuming without loss of generality $4\varepsilon\psi_{2,\infty}<1/32$, \begin{eqnarray*} r &=& \frac{r}{4} (1-\check\alpha_t-\check\delta_t+1+\check\alpha _t+\check\delta_t)^2 \le \frac{r}{2}(1-\check\alpha_t-\check\delta_t)^2+2r(1+\check\alpha _t+\check\delta _t)\\ &\le& 2rm_t^2(1-\check\alpha_t-\check\delta_t)^2+4rm_t(1+\check\alpha _t)+2r\check\delta_t \end{eqnarray*} and \[ \check\delta_t \le(1+\check\alpha_t)+\tfrac18(1-\check\alpha _t-\check\delta_t)^2 \le2m_t(1+\check\alpha_t)+\tfrac12m_t^2(1-\check\alpha_t-\check \delta_t)^2. \] Hence, for $t\le\overline\tau-\sigma$, we have \( r\le3rm_t^2(1-\check\alpha_t-\check\delta_t)^2+8rm_t(1+\check \alpha_t). \) Thus, by~(\ref{11.1}), if $c^0$ is chosen larger than $8r$, we have \begin{equation} \label{15.0} \Psi_t=\Psi_0+M_t+3r\mu_t+\widetilde P_t+\widetilde Q_t, \end{equation} where \[ \widetilde P_t=-\int_0^tm_s(1+\check\alpha_s)(c^0+\check D_s-8r)\,ds, \] $\widetilde Q_0=0$, and \begin{equation}\label{37} \widetilde P_t-\widetilde P_s\le0,\qquad \widetilde Q_t-\widetilde Q_s\le\varepsilon,\qquad 0 \le s\le t \le \overline\tau- \sigma. \end{equation} We will write $\widehat{\mathbf{P}}$ for $\mathbf{P}[ \cdot |\mathcaligr{G}_0]$, and $\widehat{\mathbf{E}}$ for the respective conditional expectation. The proof will proceed in several steps. 
\begin{Step}\label{Step1} For some $\nu_1 \in(0, \infty)$, \[ \sup_{\bolds{\sigma}\in\bolds{\Sigma}_{\rho_1}} \widehat{\mathbf {E}}[(\overline\tau-\sigma )^2] \le \nu_1,\qquad \mbox{a.s.} \] \end{Step} \begin{Step}\label{Step2} For some $\nu_2 \in(0, \infty)$, \[ \sup_{\bolds{\sigma}\in\bolds{\Sigma}_{\rho_1}} \widehat{\mathbf {E}}[(\overline\tau _G-\sigma)^2] \le \nu_2,\qquad \mbox{a.s.} \] \end{Step} Note that Step \ref{Step2} is immediate from Step \ref{Step1} and Lemma \ref{lem01} because by construction, a constant control is used after time $\overline\tau$. \begin{Step}\label{Step3} There exists a modulus $\vartheta_1$ such that \[ \sup_{\bolds{\sigma}\in\bolds{\Sigma}_\varepsilon}\widehat {\mathbf{P}}[\overline\tau_G-\sigma >\vartheta_1(\varepsilon), \overline\tau_{\rho}>\overline\tau_G]\le \vartheta_1(\varepsilon),\qquad \varepsilon>0. \] \end{Step} \begin{Step}\label{Step4} There exists a modulus $\vartheta_2$ such that \[ \sup_{\bolds{\sigma}\in\bolds{\Sigma}_\varepsilon}\widehat {\mathbf{P}}[\overline\tau_{\rho }\le\overline\tau_G]\le\vartheta_2(\varepsilon),\qquad \varepsilon>0. \] \end{Step} Based on these steps, part (i) of the lemma is established as follows. Writing $E_\varepsilon$ for the event $\overline\tau_G-\sigma >\vartheta_1(\varepsilon)$, \begin{eqnarray*} \widehat{\mathbf{E}}[\overline\tau_G-\sigma] &=& \widehat{\mathbf{E}}[(\overline\tau_G-\sigma)\mathbf{1}_{E_\varepsilon }]+\widehat{\mathbf{E}}[(\overline\tau_G-\sigma)\mathbf{1} _{E_\varepsilon^c}]\\ &\le& [\widehat{\mathbf{E}}[(\overline\tau_G-\sigma)^2]\widehat{\mathbf {P}}(E_\varepsilon) ]^{1/2}+\vartheta_1(\varepsilon)\\ &\le&\nu_2^{1/2}[\vartheta_1(\varepsilon)+\vartheta_2(\varepsilon )]^{1/2}+\vartheta_1(\varepsilon), \end{eqnarray*} where the first inequality uses Cauchy--Schwarz, and the second uses Steps \ref{Step2}, \ref{Step3} and \ref{Step4}. To show part (ii) of the lemma, use Steps \ref{Step3} and \ref{Step4} to write \begin{eqnarray}\label{13.1} \widehat{\mathbf{E}}[|\overline X-\xi_2|_{*,\overline\tau_G}^2] &\le&\widehat{\mathbf{E}}\bigl[|\overline X-\xi_2|_{*,\overline\tau_G}^2\mathbf{1}_{E_\varepsilon^c\cap\{\overline \tau_{\rho}>\overline\tau_G\}}\bigr] \nonumber\\[-8pt]\\[-8pt] &&{}+[\vartheta_1(\varepsilon)+\vartheta_2(\varepsilon)] \operatorname{diam}(G)^2.\nonumber \end{eqnarray} By (\ref{11.1}) and (\ref{38}), we can estimate \begin{equation} \label{13.2}\qquad\quad \widehat{\mathbf{E}}\Bigl[\sup_{t \in[\sigma, \overline\tau_G]}\Psi(\overline X_{t})\mathbf{1}_{E_\varepsilon ^c\cap\{\overline\tau_{\rho}>\overline\tau_G\}}\Bigr] \le\varphi_{1,\infty}\varepsilon+3\vartheta_1(\varepsilon )^{1/2}\psi_{1,\varepsilon}+r\vartheta _1(\varepsilon)+\varepsilon. \end{equation} Thus, noting that $\Phi(\overline X_{t})\ge0$ on $\{\overline\tau _{\rho}>\overline\tau_G; t \in[\sigma, \overline\tau_G]\}$, we have on this set, \[ \Psi(\overline X_{t})\ge|\overline X_{t}-\xi_2|^2. \] Part (ii) of the lemma now follows on using the above inequality and (\ref{13.2}) in (\ref{13.1}). In order to complete the proof, we need to establish the statements in Steps \ref{Step1}, \ref{Step3} and \ref{Step4}. \begin{pf*}{Proof of Step \ref{Step1}} Let $t$ be given. Let $F_t$ denote the event $\{\Psi_s\in(0,\rho), 0\le s\le t\}$. 
We have \begin{eqnarray} \label{15.1} \widehat{\mathbf{P}}(\overline\tau-\sigma>t)&=&\widehat{\mathbf{P}}(\overline \tau-\sigma>t, \overline\tau_0 - \sigma> t, \overline\tau _B-\sigma>t)\nonumber\\[-8pt]\\[-8pt] &\le& \widehat{\mathbf{P}}(F_t,\overline\tau_B-\sigma>t).\nonumber \end{eqnarray} Denote $S_u=\inf\{s\dvtx\mu_s>u\}$, where we recall that the infimum over an empty set is taken to be $\infty$. Let $\kappa\in(0,1/16)$. Then \begin{eqnarray}\label{16.15} &&\widehat{\mathbf{P}}(F_t,\overline\tau_B-\sigma>t,\mu_t>\kappa t) \nonumber\\ &&\qquad \le\widehat{\mathbf{P}}\bigl(\mu_t>\kappa t, \Psi_{S_s}\in(0,\rho), \overline\tau_B - \sigma> t, 0\le s\le \mu_t\bigr)\nonumber\\[-8pt]\\[-8pt] &&\qquad\le\widehat{\mathbf{P}}\bigl(\Psi_0+H_s+3rs+\widehat P_s\in(0,\rho), 0\le s\le\kappa t\bigr)\nonumber\\ &&\qquad=\widehat{\mathbf{P}}\bigl(\widehat H_s+\widehat P_s\in(0,\rho), 0\le s\le\kappa t\bigr),\nonumber \end{eqnarray} where $H$ is a standard Brownian motion (in particular, $H_s=M_{S_s}$ for $s < \mu_t$), $\widehat P_t$~is a process that satisfies $\widehat P_s-\widehat P_u\le \varepsilon$, $u\le s$, and $\widehat H_s=\Psi_0+H_s+3rs$. On the event indicated in the last line of (\ref{16.15}), one has, for every integer $k<\kappa t$, that $\widehat H_k-\widehat H_{k-1}\ge-2$. Hence, the right-hand side of (\ref {16.15}) can be estimated by $m_1e^{-m_2\kappa t}$, for some positive constants $m_1$ and $m_2$, independent of $t$ and $\kappa$, $\varepsilon$, and as a result, \begin{equation} \label{16.2} \widehat{\mathbf{P}}(F_t,\overline\tau_B-\sigma>t,\mu_t>\kappa t)\le m_1e^{-m_2\kappa t}. \end{equation} Next, on the event $F_t\cap\{\overline\tau_B-\sigma>t,\mu_t\le\kappa t\}$ we have $\int_0^tm_s^2(1-\check\alpha_s-\check\delta_s)^2\,ds\le\kappa t$, thus \[ \int_0^t m_s^2(1-\check\alpha_s)^2\,ds\le2\kappa t +2\int_0^tm_s^2\check\delta_s^2\,ds\le(2\kappa+16\psi_{1,\infty }^2\psi _{2,\infty}^2\varepsilon^2)t\le\frac t4, \] where we assumed without loss that $2\kappa+16\psi_{1,\infty}^2\psi_{2,\infty}^2\varepsilon^2\le 1/4$. Consequently, $\int_0^tm_s(1-\check\alpha_s)\,ds\le t/2$. Using $m_s\ge1/2$, we have $\int_0^t(1+\check\alpha_s)\,ds\ge2t-t=t$, whence, letting $c^0$ be so large that $c^0-8 r>8r$, \[ 3r\mu_t+\widetilde P_t \le3rt-4rt=-rt, \] where we used $\mu_t\le\kappa t\le t$. Using (\ref{15.0}), on this event we have $0\le\Psi_t\le\Psi_0+M_t-rt+\widetilde Q_t$. Hence, recalling that $\widetilde Q_t\le\varepsilon$ and $\Psi_0\le\psi_{1,\infty }\varepsilon$, denoting $m_0=\psi_{1,\infty}+1$, and letting $\gamma$ be the ($t$-dependent) stopping time $\gamma=\inf\{s\dvtx\mu_s>\kappa t\}$, we have, for $t\ge r^{-1}m_0\varepsilon$, \begin{eqnarray} \label{17.1} \widehat{\mathbf{P}}(F_t,\overline\tau_B-\sigma>t,\mu_t\le\kappa t) &\le& \widehat{\mathbf{P}}(M_t\ge rt-m_0\varepsilon, \mu_t\le\kappa t)\nonumber\\ &\le& \widehat{\mathbf{P}}(M_{t\wedge \gamma}\ge rt-m_0\varepsilon)\\ &\le&\frac{m_3\widehat{\mathbf{E}}[\langle M\rangle_{t\wedge\gamma }^2]}{(rt-m _0\varepsilon)^{4}}\le\frac{m_3 (\kappa t)^2}{(rt-m_0\varepsilon)^4}.\nonumber \end{eqnarray} In the second inequality above, we used the fact that $\mu_t\le\kappa t$ implies $\gamma\ge t$, and in the third we used Burkholder's inequality. In particular, $m_3$ does not depend on $t$ or $\kappa$ (which will allow us to use this estimate more efficiently in Step \ref{Step3} below). Combining (\ref{15.1}), (\ref{16.15}) and (\ref{17.1}) we obtain the statement in Step \ref{Step1}. 
\end{pf*} \begin{pf*}{Proof of Step \protect\ref{Step3}} We begin by observing that, from (\ref{16.15}), \begin{equation} \label{18.1} \widehat{\mathbf{P}}(F_t,\overline\tau_B-\sigma>t,\mu_t>\kappa t)\le\mathbf{P}(\widehat H_s+\widehat P_s>0, 0\le s\le\kappa t), \end{equation} and since $\Psi_0+\widehat P_s\le m_0\varepsilon$, this probability is bounded by \[ p(\varepsilon,\kappa t):=\mathbf{P}(m_0\varepsilon+H_s+3rs>0, 0\le s\le\kappa t). \] The latter converges to zero as $\varepsilon\to0$ (for fixed $\kappa $ and $t$). Let $\overline\vartheta$ be a modulus such that $p(\varepsilon,\overline\vartheta(\varepsilon))\le\overline\vartheta (\varepsilon)$, and $\frac12\overline\vartheta(\varepsilon)^{1/4}\ge m_0\varepsilon$. Taking $t=r^{-1}(\overline\vartheta(\varepsilon))^{1/4}$ and $\kappa=r\overline \vartheta(\varepsilon)^{3/4}$, combining (\ref{17.1}) and (\ref{18.1}), \[ \widehat{\mathbf{P}}\bigl(F_t,\overline\tau_B-\sigma>r^{-1}\overline\vartheta (\varepsilon)^{1/4}\bigr) \le\overline\vartheta(\varepsilon)+\frac{m_3\overline\vartheta(\varepsilon )^2}{(1/2\overline \vartheta(\varepsilon)^{1/4})^4} =(1+16m_3)\overline\vartheta(\varepsilon). \] Using the above estimate in (\ref{15.1}), Step \ref{Step3} follows. \end{pf*} \begin{pf*}{Proof of Step \protect\ref{Step4}} For $a>0$, let $\tau_a$ and $\tau_0$ denote the first time $[a,\infty)$, and, respectively, $(-\infty,0]$, is hit by $\widehat H$. Since $\widehat H$ is a Brownian motion (with drift $3r$) starting from $\widehat H(0)\le\psi_{1,\infty}\varepsilon$, we have that $\mathbf{P}(\tau_{\rho-\varepsilon}\le\tau_0)$ converges to zero as $\varepsilon\to0$. The proof is completed on noting that \[ \widehat{\mathbf{P}}(\overline\tau_{\rho}\le\overline\tau_G)\le\mathbf {P}(\tau_{\rho-\varepsilon}\le\tau_0), \] which follows from (\ref{15.0}), (\ref{37}), the relation $\widehat H_s=\Psi_0+M_{S_s}+3r\mu_{S_s}$ for all $s < \mu_{\infty} \equiv \sup_{t \ge0} \mu_t$ and observing that on the set where $\sigma_0 = \sup\{S_s\dvtx s < \mu_{\infty}\} < \infty$ we have that $M_t + 3r\mu_t = M_{\sigma_0} + 3r\mu_{\sigma_0}$, for $t \ge\sigma_0$.\qed \noqed\end{pf*} \noqed\end{pf*} \section{Analysis of the game with bounded controls}\label{sec5} The main result of this section, Theorem \ref{th2}, implies Lemma \ref{lem3}. Fix $k,l$ such that $\min\{k, l\} \ge\max\{c^0, d^0$, $k_{n_2},l_{n_2}\}$, where $n_2$ is as in Theorem \ref{th3plus}. Throughout this section, $(k,l)$ will be omitted from the notation. As in the previous section, only simple controls and strategies will be used. Recall that \[ \Phi(a,b,c,d;p,S)= -\tfrac12 (a-b)'S(a-b)-(c+d)(a+b)\cdot p. \] Fix $\gamma\in[0,1)$ and write \begin{eqnarray*} \Lambda^+_{\gamma}(p,S)&=& \max_{|a|=1, 0\le c\le k} \min_{|b|=1, 0\le d\le l} \Phi(a,b,c,d;p,S) -\frac{\gamma^2}{2} \operatorname{Tr}(S), \\ \Lambda^-_{\gamma}(p,S)&=&\min_{|b|=1, 0\le d\le l} \max_{|a|=1, 0\le c\le k} \Phi(a,b,c,d;p,S)-\frac{\gamma^2}{2} \operatorname{Tr}(S), \end{eqnarray*} and consider the equations \begin{eqnarray}\label{11} &&\cases{\Lambda^+_{\gamma}(Du,D^2u)-h=0, &\quad in $G$,\cr u=g, &\quad on $\partial G$,} \\ \label{17} &&\cases{\Lambda^-_{\gamma}(Du,D^2u)-h=0, &\quad in $G$,\cr u=g &\quad on $\partial G$.} \end{eqnarray} We will write $V^{\gamma}$ and, respectively, $U^{\gamma}$ for the functions $V^{\gamma}_{kl}$ and $U^{\gamma}_{kl}$ introduced at the beginning of Section \ref{sec4}. 
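Note that, since a max--min never exceeds the corresponding min--max, one always has \[ \Lambda^+_{\gamma}(p,S)\le\Lambda^-_{\gamma}(p,S); \] an identity (rather than a mere inequality) of this type for the unbounded operators is established in Section \ref{sec6}.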
\begin{theorem}\label{th2} For each $\gamma\in[0,1)$, one has the following: \begin{longlist} \item The function $U^{\gamma}$ uniquely solves (\ref{11}). \item The function $V^{\gamma}$ uniquely solves (\ref{17}). \end{longlist} \end{theorem} The proof of the theorem is based on a result on a finite time horizon, Proposition \ref{prop1}, in which we adopt a technique of \cite{swi}. Given a function $u\in\mathcaligr{C}(\overline G)$, $x_0\in \overline G$, $T\ge0$, and $Y\in M^0$, $Z\in M^0$, let \begin{equation}\label{12} J^{\gamma}(x_0,T,u,Y,Z)=\mathbf{E} \biggl[\int_0^{T\wedge\tau ^{\gamma }}h(X_s^{\gamma}) \,ds+u(X^{\gamma}_{T\wedge\tau^{\gamma}}) \biggr], \end{equation} where $X^{\gamma}$ and $\tau^{\gamma} =\tau(x_0,Y,Z)$ are as introduced in Section \ref{sec4} with $X_0=x_0$, $Y=(A,C)$ and $Z=(B,D)$. \begin{proposition}\label{prop1} Let $x_0\in\overline G$, $T\in[0,\infty)$ and $\gamma\in[0, 1)$. Let $u\in \mathcaligr{C}(\overline G)$. \begin{longlist} \item If $u$ is a subsolution of (\ref{11}), then \begin{equation}\label{13} u(x_0)\le\sup_{\alpha\in\Gamma^0_{lk}}\inf_{Z\in M^0_l}J^{\gamma}(x_0,T,u,\alpha[Z],Z). \end{equation} \item If $u$ is a supersolution of (\ref{11}), then \begin{equation}\label{14} u(x_0)\ge\sup_{\alpha\in\Gamma^0_{lk}}\inf_{Z\in M^0_l}J^{\gamma}(x_0,T,u,\alpha[Z],Z). \end{equation} \item If $u$ is a subsolution of (\ref{17}), then \begin{equation}\label{15} u(x_0)\le\inf_{\beta\in\Gamma^0_{kl}}\sup_{Y\in M^0_k}J^{\gamma}(x_0,Y,T,u,\beta[Y]). \end{equation} \item If $u$ is a supersolution of (\ref{17}), then \begin{equation}\label{16} u(x_0)\ge\inf_{\beta\in\Gamma^0_{kl}}\sup_{Y\in M^0_k}J^{\gamma}(x_0,T,u,Y,\beta[Y]). \end{equation} \end{longlist} \end{proposition} Before proving Proposition \ref{prop1}, we show how it implies the theorem. \begin{pf*}{Proof of Theorem \protect\ref{th2}} We only prove (i) since the proof of (ii) is similar. We first argue that any solution of (\ref{11}) must equal $U^{\gamma}$, and then show that a solution exists. Let a solution $u$ of (\ref{11}) be given. Fix $x_0\in\overline G$ and $\varepsilon>0$. Fix $\alpha\in\Gamma^0_{lk}$ such that \begin{equation} \label{27} U^{\gamma}(x_0)\le\inf_{Z\in M^0_l}J^{x_0}_{\gamma}(\alpha ,Z)+\varepsilon. \end{equation} By Proposition \ref{prop1}(ii), \begin{equation} \label{28} u(x_0)\ge\inf_{Z\in M^0_l}j^{\gamma}(T,Z), \end{equation} where we denote \[ j^{\gamma}(T,Z)=J^{\gamma}(x_0,T,u,\alpha[Z],Z). \] For the rest of the proof, we suppress $\gamma$ from the notation. Lemma \ref{lem01} shows that there is $m_1<\infty$ such that, for every $T\in[0,\infty)$, $\inf_{Z\in M^0_l}j(T,Z)\le m_1$.\vspace*{-2pt} Letting $M(T)=\{Z\in M^0_l\dvtx j(T,Z)\le c_1\}$, it follows from the lower bound on $h$ that for some $T<\infty$ that does not depend on $Z$, one has $\mathbf{P}(\tau>T)<\varepsilon$ for all $Z\in M(T)$, where $\tau=\tau^{x_0}(\alpha,Z)$. Fix such a $T$. Given $Z\in M(T)$, let $\widehat Z\in M^0_l$ be equal to $Z$ on $[0,T)$, and let it assume the constant value $(a^0,1)$ on $[T,\infty)$. Clearly $j(T,\widehat Z)=j(T,Z)$. Also, by Lemma \ref{lem01}, denoting $\widehat\tau=\tau^{x_0}(\alpha,\widehat Z)$, we have \[ \mathbf{E}[(\widehat\tau-T)^+| \widehat\tau>T]\le m_2 \] for some constant $m_2$ independent of $\varepsilon$ and $T$. 
By (\ref{12}), (\ref{27}), the definition of the payoff, and using the boundary condition $u|_{\partial G}=g$, we have for some $m_3 \in(0, \infty)$ \begin{eqnarray*} U(x_0)&\le& J^{x_0}(\alpha,\widehat Z)+\varepsilon\\ &\le& J(x_0,T,u,\alpha[\widehat Z],\widehat Z)+m_3\{\mathbf{E}[(\widehat\tau -T)^+]+\mathbf{P} (\widehat\tau>T)\}+\varepsilon. \end{eqnarray*} Using $\mathbf{P}(\widehat\tau>T)=\mathbf{P}(\tau>T)$ yields $U(x_0)\le j(T,\widehat Z)+m_4\varepsilon=j(T,Z)+m_4\varepsilon$. Note that the infimum of $j(T,Z)$ over $M^0_l$ is equal to that over $M(T)$. Thus, using (\ref{28}) and sending $\varepsilon\to0$ proves that $U(x_0)\le u(x_0)$. To obtain the reverse inequality, fix $x_0\in\overline G$. From Lemma \ref{lem01}, there exists $m_5<\infty$ and $Z_1\in M^0_l$ such that, for every $\alpha$, \[ J^{x_0}(\alpha,Z_1)\le m_5. \] Denote $N(\alpha)=\{Z\dvtx J^{x_0}(\alpha,Z)\le m_5\}$. Clearly, for each $\alpha$, the infimum of $J^{x_0}(\alpha,Z)$ over all $Z\in M^0_l$ is equal to that over $Z\in N(\alpha)$. Hence, \[ U(x_0)\ge\inf_{Z\in N(\alpha)}J^{x_0}(\alpha,Z),\qquad \alpha\in \Gamma^0_{lk}. \] Using the positive lower bound on $h$ as before, it follows that there exists a function $r\dvtx [0, \infty) \to[0, \infty)$ with $\lim _{T\to\infty}r(T)=0$, such that for every $\alpha$ and $Z\in N(\alpha)$ we have $\mathbf{P}(\tau^{x_0}(\alpha,Z)>T)\le r(T)$. Therefore, for some $m_6 \in(0, \infty)$ \[ J^{x_0}(\alpha,Z)\ge J(x_0,T,u,\alpha[Z],Z)-m_6 r(T),\qquad \alpha \in \Gamma^0_{lk}, Z\in N(\alpha). \] In conjunction with Proposition \ref{prop1}(i), this shows that $U(x_0)\ge u(x_0)- m_6 r(T)$. Since $T$ is arbitrary, we obtain $U(x_0)\ge u(x_0)$. Finally, we argue existence of solutions to (\ref{11}). Let us write (\ref{11})${}_\gamma$ for (\ref{11}) with a specific $\gamma$. For $\gamma\in(0,1)$, existence of solutions to (\ref{11})${}_\gamma$ follows from Theorem 1.1 of \cite{CKLS}. To handle the case $\gamma=0$, we will use the fact that any uniform limit, as $\gamma\to0$, of solutions to (\ref{11})${}_\gamma$ is a solution to (\ref{11})${}_0$. This fact follows by a standard argument, that we omit. Now, since for $\gamma\in(0,1)$ we have existence, the uniqueness statement established above shows that $U^{\gamma}$ solves (\ref{11})${}_\gamma$. From Theorem \ref{th3plus}, we have that the family $\{U^{\gamma}, \gamma\in (0,1)\}$ is equicontinuous, and thus a uniform limit of solutions, and in turn a solution to (\ref{11})${}_0$, exists. \end{pf*} In the rest of this section, we prove Proposition \ref{prop1}. Let $G_n$ be a sequence of domains compactly contained in $G$ and increasing to~$G$. Let $J_n^{\gamma}$ be defined as $J^{\gamma}$ of (\ref{12}), with $\tau^{\gamma}=\tau^{\gamma}(x_0,Y,Z)$ replaced by $\tau_n^{\gamma }=\tau_n^{\gamma}(x_0,Y,Z)$, where \[ \tau_n^{\gamma}=\inf\{t\dvtx X_t^{\gamma}\in\partial G_n\}. \] \begin{lemma}\label{lem6} For every $n$, and $\gamma\in[0, 1)$ Proposition \ref{prop1} holds with $J^{\gamma}$ replaced by $J_n^{\gamma}$. \end{lemma} \begin{pf} We follow the proof of \cite{swi}, Lemma 2.3 and Theorem 2.1. Assume without loss that $G_0\subset\subset G_1\subset\subset G_2\subset\subset G$. We will prove the lemma for $n=0$. Since the claim is trivial, if $x_0\notin G_0$, assume $x_0\in G_0$. In this proof only, write $\tau$ for $\tau_0^{\gamma}$, the exit time of $X^{\gamma}$ from $G_0$. Fix $\widetilde\gamma>\gamma$, let $\widetilde\tau= \tau_1^{\widetilde \gamma }$ and $\sigma=\tau\wedge\widetilde\tau$. 
For $\varepsilon>0$, consider the sup convolution \[ u_\varepsilon(x)=\sup_{\xi\in{\mathbb{R}}^m} \biggl\{u(\xi)-\frac {|\xi-x|^2}{2\varepsilon } \biggr\},\qquad x\in G_2, \] where, in the above equation only, $u$ is extended to ${\mathbb{R}}^m$ by setting $u=0$ outside $G$. It is easy to see that there exists $\varepsilon_0$ such that the supremum is attained inside $G$ for all $(x,\varepsilon)\in G_2\times(0,\varepsilon_0)$. The standard mollification $u_\varepsilon^\delta\dvtx\overline G_1\to{\mathbb{R}}$ of $u_\varepsilon\dvtx G_2\to{\mathbb{R}}$ is well defined, provided that $\delta$ is sufficiently small. The result \cite{swi}, Lemma 2.3, for the smooth function $u_\varepsilon^\delta$ and the argument in the proof of \cite{swi}, Theorem 2.1, show \[ u_\varepsilon^\delta(x_0)\le\sup_{\alpha\in\Gamma^0}\inf_{Z\in M^0}\mathbf{E} \biggl[\int_0^{T\wedge\sigma}h(X^{\widetilde\gamma}_s )\,ds+u_\varepsilon^\delta(X^{\widetilde\gamma}_{T\wedge\sigma}) \biggr]+\rho(\varepsilon,\delta ,\gamma, \widetilde\gamma), \] where $\lim_{\varepsilon\to0}\lim_{\widetilde\gamma\to\gamma}\lim _{\delta\to 0}\rho(\varepsilon,\delta,\gamma, \widetilde\gamma)=0$. We remark here that Lemma 2.3 of \cite{swi} is written for the case where $u$ is a subsolution of a PDE of the form (\ref{11}) on all of $\mathbb{R}^m$ and $T\wedge\sigma$ is replaced by $T$, however the proof with $u$ and $T\wedge\sigma$ as in the current setting can be carried out in exactly the same way. Since $G_0$ is compactly contained in $G_1$, we have that for every $\theta> 0$ \[ \sup_{\alpha\in\Gamma^0} \sup_{Z \in M^0} \Bigl\{ \mathbf {P} \Bigl( |T\wedge \sigma - T\wedge\tau| + {\sup_{0 \le s \le T}} |X^{\widetilde\gamma}_s - X^{\gamma }_s| > \theta \Bigr) \Bigr\} \] converges to $0$ as $\widetilde\gamma\to \gamma$. Moreover, $u_\varepsilon^\delta\to u_\varepsilon$ as $\delta\to0$ and $u_\varepsilon\to u$ as $\varepsilon\to0$, where in both cases, the convergence is uniform on $\overline G_0$ (see ibid.). Hence, the result follows on taking $\delta\to0$, then $\widetilde\gamma\to\gamma$ and finally $\varepsilon\to0$. \end{pf} \begin{pf*}{Proof of Proposition \protect\ref{prop1}} The main argument is similar to that of Theorem \ref{th3}, and so we omit some of the details. We will prove only item (iv) of the proposition, since the other items can be proved in a similar way. Fix $x$ and $T$. Let $u$ be a supersolution of (\ref{17}). Let $n$ be large enough so that $\operatorname{dist}(\partial G_n, \partial G) < \rho_1$. Write $j_n^{\gamma}(Y,\beta)$ for $J_n^{\gamma}(x,T,u,Y,\beta[Y])$ and $j^{\gamma}(Y,\beta)$ for $J^{\gamma}(x,T,u,Y,\beta[Y])$. Below we will keep $\gamma$ in the notation only if there is scope for confusion. By Lemma \ref{lem6}, $u(x)\ge v_n:=\inf_\beta\sup_Yj_n(Y,\beta)$, for every $n$. We need to show $u(x)\ge v:=\inf_\beta\sup_Yj(Y,\beta)$. Fix $\varepsilon>0$. Let $\beta_n$ be such that \begin{equation}\label{29} \sup_Y j_n(Y,\beta_n)\le v_n+\varepsilon \end{equation} and let $\tau^n_1(Y)=\tau_n^{\gamma}(x,Y,\beta_n[Y])$, $Y\in M^0_l$. Let $\widetilde\beta_n$ be constructed from $\beta_n$ as in the proof of Theorem \ref{th3}, where in particular, $\beta_n[Y]$ and $\widetilde\beta_n[Y]$ differ only on $[\tau^n_1,\infty)$, by which $j_n(Y,\widetilde\beta_n)=j_n(Y,\beta_n)$. Choose $Y_n$ such that \[ v\le\sup_Yj(Y,\widetilde\beta_n)\le j(Y_n,\widetilde\beta _n)+\varepsilon \] and set $\tau^n_2(Y)=\tau^{\gamma}(x,Y,\widetilde\beta_n[Y])$. 
Then \[ v-v_n-2\varepsilon\le j(Y_n,\widetilde\beta_n)-j_n(Y_n,\widetilde \beta_n)=:\delta_n. \] Denote $X_n=X^x(Y_n,\widetilde\beta_n)$. Using Lemma \ref{lem02}, \[ 0\le\tau^n_2-\tau^n_1<\varepsilon \quad\mbox{and}\quad |X_n(\tau^n_1\wedge T)-X_n(\tau^n_2\wedge T)|<\varepsilon \] with probability tending to 1 as $n\to\infty$. It now follows from the definition of $J_n$ and $J$ [cf. (\ref{12})] that $\limsup_n\delta_n\le\rho(\varepsilon)$ for some modulus $\rho$. Since $\varepsilon$ is arbitrary, this proves the result. \end{pf*} \section{Concluding remarks}\label{sec6} \subsection[Identity (1.4)]{Identity (\protect\ref{44})} Recall from (\ref{09}) that \begin{equation}\label{48} \Phi(a,b,c,d;p,S)= -\tfrac12 (a-b)'S(a-b)-(c+d)(a+b)\cdot p \end{equation} and denote \begin{eqnarray}\label{40} \Lambda^+(p,S)&=& \sup_{|b|=1, 0\le d<\infty} \inf_{|a|=1, 0\le c<\infty} \Phi(a,b,c,d;p,S), \\ \label{41} \Lambda^-(p,S)&=&\inf_{|a|=1, 0\le c<\infty} \sup_{|b|=1, 0\le d<\infty} \Phi(a,b,c,d;p,S) \end{eqnarray} [compare with (\ref{18}) and (\ref{10})]. The following proposition establishes identity (\ref{44}) that, as discussed in the introduction, allows one to view the infinity-Laplacian equation as a Bellman--Issacs type equation. The result states that for the SDG of Section \ref{sec2}, the associated Isaacs condition, $\Lambda^+=\Lambda^-$, holds. Although we do not make use of it in our proofs, such a condition is often invoked in showing that the game has value (cf. \cite{FS,swi}). \begin{proposition}\label{prop2} For $p\in{\mathbb{R}}^m$, $p\ne0$ and $S\in\mathcal{S}(m)$, $\Lambda^+(p,S)=\Lambda(p,S)$ and $\Lambda^-(p,S)=\Lambda(p,S)$. In particular, identity (\ref {44}) holds. \end{proposition} \begin{pf} We will only show $\Lambda^-=\Lambda$ (the proof of $\Lambda ^+=\Lambda$ being similar). Fix $p$, $S$, and omit them from the notation. Write $\mathcaligr{H}_k$ for $\{(a,c)\in\mathcaligr{H}\dvtx c\le k\}$ and $\phi(y,z)$ for $\Phi(a,b,c,d)$, where $y=(a,c)$, $z=(b,d)$. Given $\delta>0$ let $k$ be such that $\Lambda^-\ge\inf_{y\in\mathcaligr{H}_k}\sup_{z\in\mathcaligr{H}}\phi (y,z)-\delta$. Then \[ \Lambda^-\ge\inf_{y\in\mathcaligr{H}_k}\sup_{z\in\mathcaligr {H}_l}\phi(y,z)-\delta =\Lambda^-_{kl}-\delta. \] Thus, by Lemma \ref{lem1}, $\Lambda^-\ge\Lambda$. Next, let $\overline\phi(y) = \sup_{z \in\mathcaligr{H}} \phi(y,z)$. Fix $\delta \in(0, \infty)$, let $y_{\delta} = (\overline p, \delta^{-1})$, where $\overline p=p/|p|$, and let $z_{\delta} = (b_{\delta}, d_{\delta}) \in\mathcaligr{H}$ be such that $\overline \phi(y_{\delta}) \le\phi(y_{\delta}, z_{\delta}) + \delta$. Then \begin{eqnarray*} \Lambda^{-} &\le& \overline\phi(y_{\delta}) \le- \tfrac{1}{2}(\overline p - b_{\delta})'S(\overline p - b_{\delta}) - (\delta^{-1} + d_{\delta}) (\overline p + b_{\delta}) \cdot p + \delta\\ &\le&- \tfrac{1}{2}(\overline p - b_{\delta})'S(\overline p - b_{\delta}) + \delta. \end{eqnarray*} Note that $b_{\delta}$ must converge to $-\overline p$ or else the middle inequality above will say $\Lambda^{-} = - \infty$, contradicting the bound $\Lambda^{-} \ge\Lambda$. Letting $\delta\to0$, we now have from the third inequality that $\Lambda^{-} \le\Lambda$. The result follows. \end{pf} \subsection{Limit trajectory under a nearly optimal play} In \cite{pssw}, the authors raise questions about the form of the limit trajectory under optimal play of the Tug-of-War game, as the step size approaches zero (see Section 7 therein). 
It is natural to ask, similarly, whether one can characterize (near) optimal trajectories for the SDG studied in the current paper. Let $V$ be as given in (\ref{05}). Let $x\in\overline G$ and $\delta>0$ be given. We say that a policy $\beta\in\Gamma$ is $\delta$-optimal for the lower game and initial condition $x$ if $\sup_{Y\in M}J^x(Y,\beta)\le V(x)+\delta$. When a strategy $\beta\in\Gamma$ is given, we say that a control $Y\in M$ is $\delta$-optimal for play against $\beta$ with initial condition $x$, if $J^x(Y,\beta)\ge\sup_{Y'\in M}J^x(Y',\beta)-\delta$. A pair $(Y,\beta)$ is said to be a $\delta$-optimal play for the lower game with initial condition $x$, if $\beta$ is $\delta$-optimal for the lower game and $Y$ is $\delta$-optimal for play against $\beta$ (both considered with initial condition $x$). One may ask whether the law of the process $X^{\delta}$, under an arbitrary $\delta$-optimal play $(\beta^{\delta},Y^{\delta})$, converges to a limit law as $\delta\to0$; whether this limit law is the same for any choice of such $(\beta^{\delta},Y^{\delta})$ pairs; and finally, whether an explicit characterization of this limit law can be provided. A somewhat less ambitious goal, that is the subject of a forthcoming work \cite{AtBu2} is the characterization of the limit law of $X^\delta$ under \textit{some} choice of a $\delta$-optimal play. The result from \cite{AtBu2} states the following. \begin{theorem} \label{th1new} Suppose that $V$ is a $C^2(\overline{G})$ function and $DV \neq0$ on $\overline{G}$. Assume there exist uniformly continuous bounded extensions, $p$ and $q$ of $\frac{Du}{|Du|}$ and $\frac{1}{|Du|^2}(D^2u Du-\Delta_\infty u Du)$, respectively, to ${\mathbb{R}}^m$ such that, for every $x\in{\mathbb{R}}^m$, weak uniqueness holds for the SDE \[ dX_t=2 p(X_t)\,dW_t+2q(X_t)\,dt,\qquad X_0=x. \] Fix $x \in\overline G$ and let $X$ and $\tau$ denote such a solution and, respectively, the corresponding exit time from $G$. Then, given any sequence $\{\delta_n\}_{n\ge1}$, $\delta_n \downarrow0$, there exists a sequence of strategy-control pairs $(\beta^{n}, Y^{n}) \in M\times\Gamma$, $n \ge1$, with the following properties: \begin{longlist} \item For every $n$, the pair $(\beta^{n},Y^{n})$ forms a $\delta_n$-optimal play for the lower game with initial condition $x$. \item Denoting $X^{n} = X(x, Y^{n}, \beta^{n})$ and $\tau^n =\tau(x, Y^{n}, \beta^{n})$, one has that $(X^{n}(\cdot\wedge\tau^n), \tau^n)$ converges in distribution to $(X(\cdot\wedge\tau), \tau)$, as a sequence of random variables with values in $C([0, \infty)\dvtx\overline G) \times[0,\infty]$. \end{longlist} An analogous result holds for the upper game. \end{theorem} A sufficient condition for the uniqueness to hold is that $D^2u$ is Lipschitz on $\overline G$, since then both $ p$ and $q$ are Lipschitz, and thus admit bounded Lipschitz extensions to ${\mathbb{R}}^m$. \printaddresses \end{document}
\begin{document} \maketitle \begin{abstract} In this article a condition is given to detect the containment among thick subcategories of the bounded derived category of a commutative noetherian ring. More precisely, for a commutative noetherian ring $R$ and complexes $M$ and $N$ of $R$-modules with finitely generated homology, we show $N$ is in the thick subcategory generated by $M$ if and only if the ghost index of $N_\mathfrak{p}$ with respect to $M_\mathfrak{p}$ is finite for each prime $\mathfrak{p}$ of $R$. To do so, we establish a ``converse coghost lemma'' for the bounded derived category of a non-negatively graded DG algebra with noetherian homology. \end{abstract} \section*{Introduction} This article is concerned with certain numerical invariants and thick subcategories in the bounded derived category of a commutative noetherian ring. Let $R$ be a commutative noetherian ring; $\mathsf{D}(R)$ will denote its derived category, and $\mathsf{D}^f_b(R)$ will be the full subcategory of $\mathsf{D}(R)$ consisting of objects with finitely generated homology. An object $N$ of $\mathsf{D}(R)$ is in the thick subcategory generated by $M$, denoted $\thick_{\mathsf{D}(R)}(M)$, provided that $N$ can be inductively built from $M$ using the triangulated structure of $\mathsf{D}(R)$ (see \ref{level} for more details). There are cases where a notion of support reports on whether $N$ is in $\thick_{\mathsf{D}(R)}(M)$. For example, there is the celebrated theorem of Hopkins \cite[Theorem 11]{H} and Neeman \cite[Theorem 1.5]{N} that applies when $M$ and $N$ are perfect complexes. Another instance is when $R$ is a locally complete intersection, by using support varieties; this was proved by Stevenson for thick subcategories containing $R$ when $R$ is a quotient of a regular ring \cite[Corollary 10.5]{Stev}, and in general in \cite[Theorem 3.1]{JJ}. However, in general, detecting containment among thick subcategories can be an intractable task. In this article, we give a new criterion to determine the containment among thick subcategories of $\mathsf{D}^f_b(R)$ based on certain numerical invariants being locally finite. We quickly define these below; see \ref{level}, \ref{coghost}, and \ref{ghostchunk} for precise definitions. For a triangulated category $\mathsf{T}$, fix objects $G$ and $X$. The level of $X$ with respect to $G$ counts the minimal number of cones needed to generate $X$, up to suspensions and direct summands, starting from $G$. We denote this by ${\mathsf{level}}_{\mathsf{T}}^G(X)$ and note that this is finite exactly when $X$ is in $\thick_{\mathsf{T}} (G)$. The coghost index of $X$ with respect to $G$, denoted ${\mathsf{cogin}}_{\mathsf{T}}^G (X)$, is the minimal number $n$ such that any composition \[ X^n\xrightarrow{f^n}X^{n-1}\to \ldots \xrightarrow{f^1} X^0=X, \] where each $\Hom_{\mathsf{T}}(f^i,\Sigma^j G)=0$, must be zero in $\mathsf{T}$. Switching the variance in the definition above determines the ghost index of $X$ with respect to $G$, denoted ${\mathsf{gin}}_{\mathsf{T}}^G(X).$ These invariants are of independent interest and have been studied in \cite{ABI,ABIM,Beli,Bergh,BonBer, Christensen,Kelly,Letz,Rouq}.
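As a quick illustration in the simplest possible setting (this example plays no role in what follows), take $\mathsf{T}=\mathsf{D}(\mathbb{Z})$ and $G=\mathbb{Z}$, so that a morphism $f$ is $G$-ghost exactly when $\h(f)=0$, since $\Hom_{\mathsf{D}(\mathbb{Z})}(\Sigma^i\mathbb{Z},-)\cong\h_i(-)$. For a prime $p$, the class of the extension $0\to\mathbb{Z}/p\to\mathbb{Z}/p^2\to\mathbb{Z}/p\to 0$ gives a nonzero $\mathbb{Z}$-ghost morphism $\mathbb{Z}/p\to\Sigma\,\mathbb{Z}/p$ in $\mathsf{D}(\mathbb{Z})$, while $\mathbb{Z}/p$ is the cone of multiplication by $p$ on $\mathbb{Z}$; one checks that \[ {\mathsf{gin}}_{\mathsf{D}(\mathbb{Z})}^{\mathbb{Z}}(\mathbb{Z}/p)={\mathsf{level}}_{\mathsf{D}(\mathbb{Z})}^{\mathbb{Z}}(\mathbb{Z}/p)=2 \] with the counting conventions of \ref{level} and \ref{ghostchunk}.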
In general, they are related by the following well-known \emph{(co)ghost lemma}: \[ \max\{{\mathsf{cogin}}_{\mathsf{T}}^G(X), {\mathsf{gin}}^G_{\mathsf{T}} (X)\}\leq {\mathsf{level}}_{\mathsf{T}}^G(X).\] Oppermann and \v{S}\v{t}ov\'{i}\v{c}ek proved a so-called \emph{converse coghost lemma}: namely, $ {\mathsf{cogin}}_{\mathsf{D}^f_b(R)}^M(N)$ and ${\mathsf{level}}_{\mathsf{D}^f_b(R)}^M (N)$ agree whenever $M$ and $N$ are objects of $\mathsf{D}^f_b(R)$, see \cite[Theorem 24]{OS}. In general, it remains open whether the analogous equality holds for objects of $\mathsf{D}^f_b(R)$ when ${\mathsf{cogin}}$ is replaced by ${\mathsf{gin}}$; furthermore, \cite[2.11]{Letz} notes that the techniques used in \cite[Theorem 24]{OS} cannot be suitably adapted to prove this. In this article we ask whether finiteness of certain ghost indices can determine finiteness of ${\mathsf{level}}$, and hence containment among thick subcategories. The main result in this direction is the following, which is contained in Theorem \ref{main theorem}. \begin{introthm}\label{thmintro} Let $R$ be a commutative noetherian ring. For $M$ and $N$ in $\mathsf{D}^f(R),$ the following are equivalent: \begin{enumerate} \item \label{e2}${\mathsf{level}}^M_{\mathsf{D}^f_b(R)} (N)<\infty$; \item \label{e3}${\mathsf{gin}}_{\mathsf{D}^f_b(R_\mathfrak{p})}^{M_\mathfrak{p}} (N_\mathfrak{p})<\infty$ for all prime ideals $\mathfrak{p}$ of $R$. \end{enumerate} \end{introthm} One of the main steps in the proof of Theorem \ref{thmintro} is establishing a converse coghost lemma for graded-commutative, bounded below DG algebras $A$ with $\h(A)$ a noetherian $\h_0(A)$-module (cf. Theorem \ref{t:converse}). We follow the proof of \cite[Theorem 24]{OS} closely; however, extra care is needed when working with such DG algebras. Namely, we make use of certain ascending semifree filtrations, see \ref{filtration}, as the truncations used by Oppermann and \v{S}\v{t}ov\'{i}\v{c}ek are no longer available in this setting. \section{Derived Category of a DG Algebra and Semifree DG Modules} Much of this section is devoted to setting notation and reviewing the necessary background regarding the topics from the title of the section. Proposition \ref{truncation} is the main technical result of the section and will be put to use in the next section. Throughout this article objects will be graded homologically. By a DG algebra $A$ we will implicitly mean one that is non-negatively graded and graded-commutative. For the rest of the section fix a DG algebra $A$. \begin{chunk} Let $\mathsf{D}(A)$ denote the derived category of (left) DG $A$-modules (see, for example, \cite[Sections 2 \& 3]{ABIM} or \cite[Section 4]{Keller}). We use $\Sigma$ to denote the suspension functor on the triangulated category $\mathsf{D}(A)$, where $\Sigma$ is the autoequivalence defined by \[(\Sigma M)_i\coloneqq M_{i-1},\quad a\cdot(\Sigma m)\coloneqq (-1)^{\mid a\mid}am,\quad\text{and}\quad \partial^{\Sigma M}\coloneqq-\partial^M.\] For a DG $A$-module $M,$ its homology $\h(M)=\{\h_i(M)\}_{i\in \mathbb{Z}}$ is naturally a graded $\h(A)$-module.
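In particular, with this convention the homology of a suspension is simply shifted: \[ \h_i(\Sigma M)\cong\h_{i-1}(M)\qquad\text{for all } i\in\mathbb{Z}. \]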
Also, define the \emph{infimum} of $M$ to be $\inf(M)\coloneqq\inf \{n\mid \h_n(M)\neq 0\};$ its \emph{supremum} is $\sup (M)\coloneqq \sup\{n\mid \h_n(M)\neq 0\}.$ \end{chunk} \begin{chunk} The following triangulated subcategories of $\mathsf{D}(A)$ will be of interest in the sequel. First, let $\mathsf{D}^f(A)$ denote the full subcategory of $\mathsf{D}(A)$ consisting of DG $A$-modules $M$ such that each $\h_i(M)$ is a noetherian $\h_0(A)$-module. We let $\mathsf{D}^f_+(A)$ be the full subcategory of objects $M$ of $\mathsf{D}^f(A)$ such that $\inf(M)>-\infty$. Finally, $\mathsf{D}^f_b(A)$ consists of those objects $M$ of $\mathsf{D}^f(A)$ satisfying $\h_i(M)=0$ for all $|i|\gg 0$. When $\h(A)$ is noetherian as a module over $\h_0(A)$ and $\h_0(A)$ is noetherian, $\mathsf{D}^f_b(A)$ is exactly the full subcategory of $\mathsf{D}(A)$ whose objects $M$ are those with $\h(M)$ finitely generated as a graded $\h(A)$-module. \end{chunk} \begin{chunk}\label{semifree resolution}\label{triangle} A DG $A$-module $F$ is \emph{semifree} if it admits a filtration of DG $A$-submodules \[ \ldots \subseteq F(-1)\subseteq F(0)\subseteq F(1)\subseteq \ldots \] where $F(i)=0$ for $i\ll 0$, $F=\bigcup_i F(i)$ and each $F(i)/F(i-1)$ is a direct sum of shifts of $A$. The filtration above is called a \emph{semifree filtration} of $F$.\footnote{The choice to allow arbitrary indices for the start of the filtration is a non-standard one but simplifies notation in the proof of Theorem \ref{t:converse}.} By \cite[Section 3]{Keller}, $F$ is the homotopy colimit of the $F(i)$ and so there is the following exact triangle in $\mathsf{D}(A)$ \begin{equation}\label{exact triangle} \coprod_{i\in \mathbb{Z}}F(i)\xrightarrow{1-s} \coprod_{i\in \mathbb{Z} }F(i)\rightarrow F\rightarrow \Sigma \coprod_{i\in \mathbb{Z}}F(i), \end{equation} where $s$ is induced by the canonical inclusions $F(j)\hookrightarrow F(j+1)\hookrightarrow \coprod F(i)$. \end{chunk} \begin{chunk} For the following background on semifree resolutions see \cite[Chapter 6]{FHT} (or \cite[Section 1.3]{IFR}). Let $M$ be a DG $A$-module. There exists a surjective quasi-isomorphism of DG $A$-modules $\epsilon\colon F\xrightarrow{\simeq} M$ where $F$ is a semifree DG $A$-module, see \cite[Proposition 6.6]{FHT}; the map $\epsilon$ is called a \emph{semifree resolution} of $M$ over $A$. Semifree resolutions of $M$ are unique up to homotopy equivalence. \end{chunk} \begin{chunk}\label{hominD(A)chunk} Fix a DG $A$-module $M$ with semifree resolution $\epsilon\colon F\xrightarrow{\simeq} M.$ For any DG $A$-module $N$, it is clear that \[ \Hom_{\mathsf{D}(A)}(M,N)=\Hom_{\mathsf{D}(A)}(F,N) \] and the right-hand side is computed as the degree zero homology of the DG $A$-module $\Hom_A(F,N).$ That is, \begin{equation}\label{hominD(A)} \Hom_{\mathsf{D}(A)}(M,-)=\h_0(\Hom_A(F,-)). \end{equation} In particular, $\Hom_{\mathsf{D}(A)}(M,N)$ naturally inherits an $\h_0(A)$-module structure and, since $A$ is non-negatively graded, $\Hom_{\mathsf{D}(A)}(M,N)$ inherits an $A_0$-module structure. As semifree resolutions are homotopy equivalent, this $\h_0(A)$-module is independent of the choice of semifree resolution.
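For instance, when $A=R$ is a commutative ring concentrated in degree zero and $M$, $N$ are $R$-modules, a semifree resolution of $M$ is simply a bounded below complex of free modules resolving $M$, and (\ref{hominD(A)}) recovers the familiar identification \[ \Hom_{\mathsf{D}(R)}(M,\Sigma^i N)\cong\operatorname{Ext}^i_R(M,N)\qquad\text{for } i\geq 0. \]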
\end{chunk} \begin{chunk}\label{filtration} Assume each $\h_i(A)$ is finitely generated over $\h_0(A)$ and $\h_0(A)$ is itself noetherian. Let $M$ be an object of $\mathsf{D}^f_+(A)$. By \cite[Appendix B.2]{AINSW}, there exists a semifree resolution $F\xrightarrow{\simeq} M$ with $F_i=0$ for all $i<\inf(M)$ such that $F$ admits a semifree filtration $\{F(i)\}_{i\in \mathbb{Z}}$ equipped with exact sequences of DG $A$-modules \[ 0\to F(i-1)\to F(i)\to \Sigma^{i} A^{\beta_i}\to 0 \] for some integers $\beta_i\geq 0$. \end{chunk} \begin{lemma}\label{null Lemma} Assume $\h_0(A)$ is noetherian and that each $\h_i(A)$ is finitely generated over it. Let $N$ be an object of $\mathsf{D}(A)$ such that $\sup(N)<\infty$. For an object $M$ in $\mathsf{D}(A)$ with $\inf(M)> \sup(N)$, $ \Hom_{\mathsf{D}(A)}(M,N)=0.$ \end{lemma} \begin{proof} Fix a semifree resolution $F\xrightarrow{\simeq} M$ as in \ref{filtration}. By (\ref{hominD(A)}) in \ref{hominD(A)chunk}, \[ \Hom_{\mathsf{D}(A)}(\Sigma^{i}A^{\beta_i},N)\cong \h_{i}(N)^{\beta_i}=0\] for each $i\geq \inf(M).$ Combining these isomorphisms with the exact sequences \[ 0\rightarrow F(i-1)\rightarrow F(i)\rightarrow \Sigma^{i}A^{\beta_i}\rightarrow 0\] yields by induction that $\Hom_{\mathsf{D}(A)}(F(i),N)=0$ for all $i\geq \inf(M)$. Finally, (\ref{exact triangle}) in \ref{triangle} implies $\Hom_{\mathsf{D}(A)}(F,N)=0$, and hence $\Hom_{\mathsf{D}(A)}(M,N)=0$ (cf. \ref{hominD(A)chunk}). \end{proof} \begin{proposition}\label{truncation} Assume $\h_0(A)$ is noetherian and each $\h_i(A)$ is a finitely generated $\h_0(A)$-module. Let $M$ be in $\mathsf{D}^f_+(A)$ and $N$ be an object in $\mathsf{D}(A)$ such that $\sup(N)<\infty$. Suppose $F\xrightarrow{\simeq} M$ is a semifree resolution of $M$ as in \ref{filtration}. Then for all $i> \sup(N)$ the natural map below is an isomorphism \[ \Hom_{\mathsf{D}(A)}(M,N)\xrightarrow{\cong} \Hom_{\mathsf{D}(A)}(F(i),N). \] \end{proposition} \begin{proof} For each $i\geq \inf(M)$, there is an exact sequence of DG $A$-modules \begin{equation}\label{exact sequence} 0\rightarrow F(i)\rightarrow F\rightarrow F'\rightarrow 0 \end{equation} where by choice of $F$ we have that $\inf(F')> i$. Applying $\Hom_{\mathsf{D}(A)}(-,N)$ to (\ref{exact sequence}) and appealing to Lemma \ref{null Lemma} yields the desired isomorphisms whenever $i> \sup (N).$ \end{proof} \section{Levels and Coghost Index in $\mathsf{D}(A)$} \label{sectioncoghost} We begin by briefly recalling the notion of level. For more details, see \cite[Section 2]{ABIM}, \cite[Section 2]{BonBer} or \cite[Section 3]{Rouq}. \begin{chunk}\label{construction}\label{level} Let $\mathsf{T}$ be a triangulated category and $\mathsf{C}$ be a full subcategory of $\mathsf{T}$. We say $\mathsf{C}$ is \emph{thick} if it is closed under suspensions, retracts and cones. The smallest thick subcategory of $\mathsf{T}$ containing an object $X$ is denoted $\thick_{\mathsf{T}} (X);$ this consists of all objects $Y$ such that one can obtain $Y$ from $X$ using finitely many suspensions, retracts and cones.
We set ${\mathsf{level}}_{\mathsf{T}}^X (Y)$ to be the smallest non-negative integer $n$ such that $Y$ can be built starting from $X$ using finitely many suspensions, finitely many retracts and exactly $n-1$ cones in $\mathsf{T}.$ If no such $n$ exists, we set ${\mathsf{level}}_{\mathsf{T}}^X(Y)=\infty$. Note $Y$ is in $\thick_{\mathsf{T}} (X)$ if and only if ${\mathsf{level}}_{\mathsf{T}}^X (Y)<\infty$. Also, if $\mathsf{C}$ is a thick subcategory of $\mathsf{T}$ containing $X$, then $ {\mathsf{level}}^X_{\mathsf{T}} (Y)={\mathsf{level}}^X_{\mathsf{C}} (Y).$ \end{chunk} \begin{example}\label{example} Let $A$ be a DG algebra. A DG $A$-module $M$ is \emph{perfect} if $M$ is an object of $\thick_{\mathsf{D}(A)}(A).$ In this case, $M$ is a retract of a semifree DG $A$-module $F$ with finite semifree filtration \[ 0\subseteq F(0)\subseteq F(1)\subseteq \ldots \subseteq F(n)=F. \] If $n$ is the minimal such value, then ${\mathsf{level}}_{\mathsf{D}(A)}^A (M)=n-1$, see \cite[Theorem 4.2]{ABIM}. \end{example} \begin{chunk}\label{coghost} Let $\mathsf{T}$ be a triangulated category with suspension functor $\Sigma$. A morphism $f\colon X\rightarrow Y$ in $\mathsf{T}$ is called $G$-\emph{coghost} if \[ \Hom_{\mathsf{T}}(f,\Sigma^iG) \colon \Hom_{\mathsf{T}}(Y,\Sigma^iG)\rightarrow \Hom_{\mathsf{T}}(X,\Sigma^iG) \] is zero for all $i\in \mathbb{Z}$. Following \cite[Definition 2.4]{Letz}, we define the \emph{coghost index of $X$ with respect to $G$ in $\mathsf{T}$}, denoted ${\mathsf{cogin}}_{\mathsf{T}}^G(X)$, to be the smallest non-negative integer $n$ such that any composition of $G$-coghost maps \[ X^n\xrightarrow{f^n}X^{n-1}\xrightarrow{f^{n-1}}\ldots\to X^1 \xrightarrow{f^1} X^0=X \] is zero in $\mathsf{T}.$ \end{chunk} \begin{chunk} Let $\mathsf{T}$ be a triangulated category with objects $G$ and $X$. In this generality, ${\mathsf{level}}$ bounds ${\mathsf{cogin}}$ from above. That is, \[ {\mathsf{cogin}}_{\mathsf{T}}^G(X)\leq {\mathsf{level}}_{\mathsf{T}}^G(X),\] see \cite[Lemma 2.2(1)]{Beli} (see also \cite[Lemma 4.11]{Rouq}). However, there are known instances when equality holds. For example, ${\mathsf{level}}^G_{\mathsf{T}} (-)$ and ${\mathsf{cogin}}_{\mathsf{T}}^G(-)$ agree provided every object in $\mathsf{T}$ has an appropriate left approximation by $G$, see \cite[Lemma 2.2(2)]{Beli} for more details. Another instance is when $R$ is a commutative noetherian ring (or more generally, a noether algebra): \[ {\mathsf{cogin}}_{\mathsf{D}^f_b(R)}^G (X)={\mathsf{level}}_{\mathsf{D}^f_b(R)}^G(X) \] for each $G$ and $X$ in $\mathsf{D}^f_b(R);$ this has been coined the \emph{converse coghost lemma} (see \cite[Theorem 24]{OS}). \end{chunk} We now get to the main result of the section, which generalizes a particular case of \cite[Theorem 24]{OS} mentioned above. It is worth noting that \cite[Theorem 24]{OS} was proved for derived categories satisfying certain finiteness conditions; however, it does not apply directly to the case considered in the theorem below.
The proof of \cite[Theorem 24]{OS} is suitably adapted to the setting under consideration, with the main observation being that truncations need to be replaced with the ascending filtrations discussed in \ref{filtration}. We have indicated the necessary changes below, while attempting to not recast the parts of the proof of \cite[Theorem 24]{OS} that carry over with only minor changes. \begin{theorem}\label{t:converse} Let $A$ be a DG algebra with $\h(A)$ noetherian as an $\h_0(A)$-module. If $M$ and $N$ are in $\mathsf{D}^f_b(A)$, then \[ {\mathsf{cogin}}_{\mathsf{D}^f_b(A)}^M(N)={\mathsf{level}}_{\mathsf{D}(A)}^M(N). \] \end{theorem} \begin{remark}\label{remarksameproof} For $M$ and $N$ in $\mathsf{D}_b^f(A)$, \[ {\mathsf{cogin}}_{\mathsf{D}_+^f(A)}^M(N)={\mathsf{level}}_{\mathsf{D}_+^f(A)}^M(N). \] Indeed, one can directly apply the argument from \cite[Theorem 24]{OS} once it is noted that, by restricting scalars along the map of commutative rings $A_0\to \h_0(A)$, $\Hom_{\mathsf{D}_+^f(A)}(X,\Sigma^iN)$ is finitely generated over $A_0$ for $X$ in $\mathsf{D}_+^f(A)$ and $i\in \mathbb{Z}$. To see that the latter holds, note that such an $X$ admits a semifree filtration whose subquotients are perfect DG $A$-modules. Also, since $N$ is in $\mathsf{D}^f_b(A)$ we can apply Proposition \ref{truncation} to get \[\Hom_{\mathsf{D}_+^f(A)}(X,\Sigma^iN)\cong \Hom_{\mathsf{D}_+^f(A)}(P,\Sigma^iN)\] where $P$ is a perfect DG $A$-module with a finite semifree filtration as in Example \ref{example}. Therefore, induction on the length of this filtration finishes the proof of the claim, where we are again using that $N$ is in $\mathsf{D}^f_b(A)$. \end{remark} Before beginning the proof of Theorem \ref{t:converse}, we record an easy but important lemma. \begin{lemma}\label{compatible}Let $A$ be a DG algebra. Assume $\alpha\colon F^1\to F^2$ is a morphism of bounded below semifree DG $A$-modules with $F^j_i=0$ for $i<\inf (F^j)$ and semifree filtrations $\{F^j(i)\}_{i\in \mathbb{Z}}$ for $j=1,2$ satisfying \[ 0\to F^j(i-1)\to F^j(i)\to \coprod_{\ell\leq i}\Sigma^\ell A^{\beta^j_\ell(i)}\to 0 \] for non-negative integers $\beta^j_\ell(i)$ and $j=1,2.$ For each $i\in \mathbb{Z}$, $\alpha$ restricts to a morphism of DG $A$-modules $ \alpha(i)\colon F^1(i)\to F^2(i). $ \end{lemma} \begin{proof} Indeed, $F^1(i)=0$ for all $i<\inf(F^1)$ and so there is nothing to show for such values of $i$. Now for $i\geq \inf (F^1)$, the DG $A$-module $F^1(i)$ is generated in degrees at most $i$ and, since $\alpha$ is degree preserving, $\alpha(F^1(i))$ is generated in degrees at most $i.$ However, the assumption on the filtration $\{F^2(j)\}$ then implies \[ \alpha(F^1(i))\subseteq F^2(i). \] Hence, setting $\alpha(i)\coloneqq \alpha\mid_{F^1(i)}$ proves the claim by induction.
\end{proof} \begin{proof}[Proof of Theorem \ref{t:converse}] First, by \ref{level} and Remark \ref{remarksameproof}, \[{\mathsf{level}}_{\mathsf{D}(A)}^M(N)={\mathsf{level}}_{\mathsf{D}^f_+(A)}^M(N)={\mathsf{cogin}}_{\mathsf{D}_+^f(A)}^M(N),\] while the inequality \begin{equation}\label{equationtoshow}{\mathsf{cogin}}_{\mathsf{D}_+^f(A)}^M(N)\geq {\mathsf{cogin}}_{\mathsf{D}^f_b(A)}^M(N)\end{equation} is standard. So it suffices to prove that the reverse inequality in (\ref{equationtoshow}) holds. Set $n={\mathsf{cogin}}_{\mathsf{D}^f_b(A)}^M(N)$ and consider a composition \[ N^n\xrightarrow{f^n}N^{n-1}\xrightarrow{f^{n-1}} \ldots\xrightarrow{f^2} N^1\xrightarrow{f^1} N^0=N, \] where each $f^i$ is an $M$-coghost map in $\mathsf{D}_+^f(A)$. Using the assumptions on $\h(A)$ and that each $N^i$ is in $\mathsf{D}_+^f(A)$, there exist semifree resolutions $F^i\xrightarrow{\simeq} N^i$ with corresponding semifree filtrations $\{F^i(j)\}_{j\in \mathbb{Z}}$ as in \ref{filtration}. Moreover, by \ref{hominD(A)chunk}(\ref{hominD(A)}), each $f^i$ determines a morphism of DG $A$-modules $\alpha^i\colon F^i\to F^{i-1}$ such that the following diagram commutes in $\mathsf{D}(A)$: \begin{equation}\label{diagram} \begin{tikzcd} F^i\arrow["\alpha^i"]{r} \arrow["\simeq"]{d} & F^{i-1}\arrow["\simeq"]{d}\\ N^i\arrow["f^i"]{r} & N^{i-1}. \end{tikzcd} \end{equation} Furthermore, by Lemma \ref{compatible} there are the following commutative diagrams of DG $A$-modules \begin{equation}\label{diagram2} \begin{tikzcd} F^{i}(j)\arrow[hookrightarrow]{d} \arrow["\alpha^i(j)"]{r} &F^{i-1}(j)\arrow[hookrightarrow]{r} \arrow[hookrightarrow]{dr} &F^{i-1}(j')\arrow[hookrightarrow]{d}\\ F^i\arrow["\alpha^i"]{rr} & & F^{i-1} \end{tikzcd} \end{equation} whenever $j'\geq j.$ Moreover, since each $F^i(j)$ is a perfect DG $A$-module and $M$ is in $\mathsf{D}^f_b(A)$, the commutativity of the diagrams in (\ref{diagram}) and the assumption that each $f^i$ is $M$-coghost imply that the compositions along the top of (\ref{diagram2}) are $M$-coghost for all $j'\geq j\gg 0$; the same argument as in the proof of \cite[Theorem 24]{OS} works in this setting. Combining this with Proposition \ref{truncation}, there exist integers $i_j$ such that \begin{center} \begin{tikzcd} F^n(i_n)\arrow["\beta^n"]{r} \arrow[hookrightarrow]{d} & F^{n-1}(i_{n-1})\arrow["\beta^{n-1}"]{r} \arrow[hookrightarrow]{d} & \ldots \arrow["\beta^1"]{r} & F^0(i_0)\arrow[hookrightarrow]{d}\\ F^n\arrow["\alpha^n"]{r} \arrow["\simeq"]{d} & F^{n-1}\arrow["\alpha^{n-1}"]{r} \arrow["\simeq"]{d} & \ldots \arrow["\alpha^1"]{r} & F^0\arrow["\simeq"]{d}\\ N^n\arrow["f^n"]{r} & N^{n-1}\arrow["f^{n-1}"]{r} & \ldots \arrow["f^1"]{r} & N^0 \end{tikzcd} \end{center} commutes in $\mathsf{D}(A)$, the natural map \begin{equation}\label{isohom} \Hom_{\mathsf{D}_+^f(A)}(F^n,N)\xrightarrow{\cong} \Hom_{\mathsf{D}_+^f(A)}(F^n(i_n),N) \end{equation} is an isomorphism, and each $\beta^i$ is $M$-coghost. Now, since each $\beta^i$ is an $M$-coghost map between perfect DG $A$-modules, by the choice of $n$ the composition along the top and then down to $N$, denoted $\beta,$ must be zero.
It is worth noting that the previous step needs the assumption that $\h(A)$ is finitely generated over $\h_0(A)$ since, in this case, each map in the composition defining $\beta$ must be in $\mathsf{D}^f_b(A).$ Finally, the isomorphism in (\ref{isohom}) identifies $\beta$ with $f=f^1f^2\ldots f^n$. Hence, $f=0$ and so ${\mathsf{cogin}}_{\mathsf{D}_+^f(A)}^M(N)\leq n={\mathsf{cogin}}_{\mathsf{D}^f_b(A)}^M(N),$ as needed. \end{proof} \begin{chunk}\label{ghostchunk} Let $\mathsf{T}$ be a triangulated category and fix $G$ and $X$ in $\mathsf{T}.$ The \emph{ghost index of $X$ with respect to $G$ in $\mathsf{T}$}, denoted ${\mathsf{gin}}_{\mathsf{T}}^G (X)$, is defined to be the least non-negative integer $n$ such that any composition of $G$-ghost maps \[ X=X^n\xrightarrow{f^n}X^{n-1}\xrightarrow{f^{n-1}}\ldots\to X^1 \xrightarrow{f^1} X^0 \] is zero in $\mathsf{T}$; here a map $g$ is $G$-\emph{ghost} provided $\Hom_{\mathsf{T}}(\Sigma^i G, g)=0$ for all $i\in \mathbb{Z}$. That is, ${\mathsf{gin}}_{\mathsf{T}}^G(X)={\mathsf{cogin}}_{\mathsf{T}^{op}}^G(X).$ In general, ${\mathsf{gin}}_{\mathsf{T}}^G(X)\leq {\mathsf{level}}_{\mathsf{T}}^G(X)$, and it is unknown whether equality holds when $R$ is a commutative noetherian ring and $\mathsf{T}=\mathsf{D}^f_b(R)$. The point of the next section is to provide a partial ``converse.'' \end{chunk} \section{A Partial Converse Ghost Lemma} \label{sectionmain} In this section $R$ is a commutative noetherian ring. As localization defines an exact functor $\mathsf{D}(R)\to \mathsf{D}(R_\mathfrak{p})$, ${\mathsf{level}}$ cannot increase upon localization. Hence, for $M$ and $N$ in $\mathsf{D}^f_b(R)$, if $N$ is in $\thick_{\mathsf{D}(R)}(M)$, then \[{\mathsf{gin}}_{\mathsf{D}^f_b(R_\mathfrak{p})}^{M_\mathfrak{p}}(N_\mathfrak{p})<\infty\text{ for all }\mathfrak{p}\in \operatorname{Spec} R.\] The converse and an evident corollary are established below. \begin{theorem}\label{main theorem} Let $R$ be a commutative noetherian ring and fix $M$ and $N$ in $\mathsf{D}^f_b(R)$. If $ {\mathsf{gin}}_{\mathsf{D}^f_b(R_\mathfrak{p})}^{M_\mathfrak{p}}(N_\mathfrak{p}) <\infty$ for all $\mathfrak{p}\in \operatorname{Spec} R$, then $N$ is an object of $\thick_{\mathsf{D}(R)}(M).$ \end{theorem} \begin{corollary}\label{implication} If ${\mathsf{gin}}_{\mathsf{D}^f_b(R_\mathfrak{p})}^{M_\mathfrak{p}} (N_\mathfrak{p})<\infty$ for all $\mathfrak{p}\in \operatorname{Spec} R$, then $ {\mathsf{gin}}_{\mathsf{D}^f_b(R)}^{M} (N)<\infty.$ \end{corollary} To prove Theorem \ref{main theorem}, there are essentially two steps. We first pass to derived categories of certain Koszul complexes, where it is shown, using Theorem \ref{t:converse}, that ${\mathsf{cogin}}$, ${\mathsf{gin}}$ and ${\mathsf{level}}$ all agree. Second, we apply a local-to-global principle to conclude the desired result. We explain this below and give the proof of the theorem at the end of the section. \begin{chunk}\label{l2g} Assume $R$ is local with maximal ideal $\mathfrak{m}$, and let $K^R$ be the Koszul complex on a minimal generating set for $\mathfrak{m}$. It is regarded as a DG algebra in the usual way and is well-defined up to an isomorphism of DG $R$-algebras; see \cite[Section 1.6]{BH}.
For any $\mathfrak{p}\in \operatorname{Spec} R$, let $M$ be an object of $\mathsf{D}(R).$ We set \[M(\mathfrak{p})\coloneqq M_\mathfrak{p}\otimes^{\mathsf{L}}_{R_\mathfrak{p}} K^{R_\mathfrak{p}},\] which is a DG $K^{R_\mathfrak{p}}$-module. Restricting scalars along the morphism of DG algebras $R_\mathfrak{p} \to K^{R_\mathfrak{p}}$, we may regard $M(\mathfrak{p})$ as an object of $\mathsf{D}(R_\mathfrak{p})$. In \cite[Theorem 5.10]{BIK2}, Benson, Iyengar and Krause proved the following local-to-global principle: for objects $M$ and $N$ in $\mathsf{D}^f_b(R)$, $N$ is in $\thick_{\mathsf{D}(R)}(M)$ if and only if $N(\mathfrak{p})$ is in $\thick_{\mathsf{D}(R_\mathfrak{p})}(M(\mathfrak{p}))$ for all $\mathfrak{p}\in \operatorname{Spec} R$. \end{chunk} \begin{lemma}\label{l:gincogin} Let $R$ be a commutative noetherian local ring. For $M$ and $N$ in $\mathsf{D}^f_b(K^R)$, \[ {\mathsf{level}}_{\mathsf{D}(K^R)}^{M}(N)={\mathsf{cogin}}_{\mathsf{D}^f_b(K^R)}^M(N)={\mathsf{gin}}_{\mathsf{D}^f_b(K^R)}^M(N). \] \end{lemma} \begin{proof} The natural map $K^R\to K^{\widehat{R}}$ is a quasi-isomorphism of DG algebras and so it induces an exact equivalence \[ \mathsf{D}^f_b(K^R)\xrightarrow{\equiv} \mathsf{D}^f_b(K^{\widehat{R}}). \] Since ${\mathsf{cogin}}$, ${\mathsf{gin}}$ and ${\mathsf{level}}$ are invariant under exact equivalences, we may assume $R$ is complete and set $K=K^R$. As $R$ is complete, it is well known that $R$ admits a dualizing DG module $\omega$; see, for example, \cite[Corollary 1.4]{Kaw}. Now, applying \cite[Theorem 2.1]{FIJ}, $\Hom_R(K,\omega)$ is a dualizing DG $K$-module. In particular, setting $(-)^\dagger\coloneqq \Hom_{K}(-,\Hom_R(K,\omega))$, for any $M$ in $\mathsf{D}^f_b(K)$ the DG $K$-module $M^\dagger$ is in $\mathsf{D}^f_b(K)$ and the natural biduality map $M\xrightarrow{\simeq} M^{\dagger\dagger}$ is an isomorphism in $\mathsf{D}^f_b(K).$ Hence, $(-)^\dagger$ restricts to an exact auto-equivalence of $\mathsf{D}^f_b(K)$. Finally, as $(-)^\dagger$ is an exact auto-equivalence of $\mathsf{D}^f_b(K)$ interchanging coghost and ghost maps, the desired equality follows from Theorem \ref{t:converse}. \end{proof} \begin{remark} The lemma holds for any DG algebra $A$ satisfying the hypotheses of Theorem \ref{t:converse} which admits a dualizing DG module as defined in \cite[1.8]{FIJ}. Another example, generalizing the Koszul complex above, would be the DG fiber of any local ring map of finite flat dimension whose target ring admits a dualizing complex \cite[Theorem VI]{FIJ}. \end{remark} \begin{lemma}\label{l:koszulgin} Let $R$ be a commutative noetherian local ring and let $\mathsf{t}\colon \mathsf{D}(R)\to \mathsf{D}(K^R)$ denote $-\otimes^{\mathsf{L}}_R K^R$. If $M$ and $N$ are objects of $\mathsf{D}^f_b(R)$, then \[ {\mathsf{gin}}_{\mathsf{D}(K^R)}^{\mathsf{t} M}(\mathsf{t} N )\leq {\mathsf{gin}}_{\mathsf{D}(R)}^M(N). \] \end{lemma} \begin{proof} We set $K=K^R$.
For $X$ in $\mathsf{D}(R)$ and $Y$ in $\mathsf{D}(K)$, there is an adjunction isomorphism \begin{equation}\label{adjunction} \Hom_{\mathsf{D}(K)}(\mathsf{t} X ,Y)\cong \Hom_{\mathsf{D}(R)}(X,Y), \end{equation} which is induced by the natural map $\eta_X\colon X\to \mathsf{t} X$ given by $x\mapsto x\otimes 1.$ Moreover, when $f\colon Y\rightarrow Z$ is a $\mathsf{t} M$-ghost map in $\mathsf{D}^f_b(K)$, then (\ref{adjunction}) implies that $f$ is an $M$-ghost map in $\mathsf{D}^f_b(R)$. Assume $n\coloneqq {\mathsf{gin}}^M_{\mathsf{D}^f_b(R)}(N)<\infty$ and suppose $g\colon\mathsf{t} N\to Y$ in $\mathsf{D}^f_b(K)$ factors as the composition of $n$ maps in $\mathsf{D}^f_b(K)$ which are $\mathsf{t} M$-ghost. Hence, by the adjunction above, $g$ is the composition of $n$ maps in $\mathsf{D}^f_b(R)$ which are $M$-ghost, and thus so is $g\circ \eta_N$. Therefore, by assumption $g\circ \eta_N=0$, and so from (\ref{adjunction}) we conclude that $g=0$ in $\mathsf{D}^f_b(K)$, completing the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{main theorem}] Let $\mathfrak{p}\in \operatorname{Spec} R$; then by assumption ${\mathsf{gin}}_{\mathsf{D}^f_b(R_\mathfrak{p})}^{M_\mathfrak{p}}(N_\mathfrak{p})<\infty$. Also, \begin{align*} {\mathsf{gin}}_{\mathsf{D}^f_b(R_\mathfrak{p})}^{M_\mathfrak{p}}(N_\mathfrak{p})&\geq {\mathsf{gin}}_{\mathsf{D}^f_b(K^{R_\mathfrak{p}})}^{M(\mathfrak{p})}(N(\mathfrak{p}))\\ &={\mathsf{cogin}}_{\mathsf{D}^f_b(K^{R_\mathfrak{p}})}^{M(\mathfrak{p})}(N(\mathfrak{p}))\\ &={\mathsf{level}}_{\mathsf{D}(K^{R_\mathfrak{p}})}^{M(\mathfrak{p})}(N(\mathfrak{p})), \end{align*} where the inequality is from Lemma \ref{l:koszulgin} and the equalities are from Lemma \ref{l:gincogin}. Thus ${\mathsf{level}}_{\mathsf{D}(K^{R_\mathfrak{p}})}^{M(\mathfrak{p})}(N(\mathfrak{p}))<\infty$, and so $N(\mathfrak{p})$ is in $\thick_{\mathsf{D}(K^{R_\mathfrak{p}})} (M(\mathfrak{p}))$ for all $\mathfrak{p}\in \operatorname{Spec} R$. Now, by restricting scalars along $R_\mathfrak{p}\to K^{R_\mathfrak{p}}$, we conclude that $N(\mathfrak{p})$ is in $\thick_{\mathsf{D}(R_\mathfrak{p})} (M(\mathfrak{p}))$ for all $\mathfrak{p}\in \operatorname{Spec} R$. Finally, we apply \ref{l2g} to obtain the desired result. \end{proof} \input{ref.bbl} \end{document}
\begin{document} \title{Extreme Entropy Machines:\ Robust information theoretic classification} \begin{abstract} Most of the existing classification methods are aimed at minimization of empirical risk (through some simple point-based error measured with a loss function) with added regularization. We propose to approach this problem in a more information theoretic way by investigating the applicability of entropy measures as a classification model objective function. We focus on quadratic Renyi's entropy and the connected Cauchy-Schwarz Divergence, which leads to the construction of Extreme Entropy Machines (EEM). The main contribution of this paper is a model based on information theoretic concepts which, on the one hand, offers a new, entropic perspective on known linear classifiers and, on the other, leads to the construction of a very robust method competitive with state-of-the-art non-information theoretic ones (including Support Vector Machines and Extreme Learning Machines). Evaluation on numerous problems, spanning from small, simple ones from the \textsc{UCI} repository to large (hundreds of thousands of samples), extremely unbalanced (up to 100:1 class ratios) datasets, shows the wide applicability of the EEM to real life problems and that it scales well. \end{abstract} \section{Introduction} There is no single, universal, perfect optimization criterion that can be used to train a machine learning model. Even for linear classifiers one can find multiple objective functions, error measures to minimize, and regularization methods to include~\cite{kulkarni1998learning}. Most of the existing methods are aimed at minimization of empirical risk (through some simple point-based error measured with a loss function) with added regularization. We propose to approach this problem in a more information theoretic way by investigating the applicability of entropy measures as a classification model objective function. We focus on quadratic Renyi's entropy and the connected Cauchy-Schwarz Divergence. One of the information theoretic concepts which has been found very effective in machine learning is the entropy measure. In particular, the rule of maximum entropy modeling led to the construction of the MaxEnt model and its structural generalization -- Conditional Random Fields -- which are considered state of the art in many applications. In this paper we propose to use Renyi's quadratic cross entropy as a measure of the divergence of two density estimates in order to find the best linear classifier. It is a conceptually different approach from typical entropy models, as it works in the input space instead of the distribution of decisions. As a result we obtain a model closely related to the Fischer's Discriminant (or, more generally, Linear Discriminant Analysis), which deepens the understanding of this classical approach. Together with a powerful extreme data transformation we obtain a robust, nonlinear model competitive with state-of-the-art models not based on information theory, such as Support Vector Machines (SVM~\cite{Vapnik95}), Extreme Learning Machines (ELM~\cite{huang2004extreme}) or Least Squares Support Vector Machines (LS-SVM~\cite{suykens1999least}). We also show that, under some simplifying assumptions, ELM and LS-SVM can be seen through the perspective of information theory, as their solutions are (up to some constants) identical to the ones obtained by the proposed method.
The paper is structured as follows: first we recall some preliminary information regarding ELMs and Support Vector Machines, including Least Squares Support Vector Machines. Next we introduce our Extreme Entropy Machine (EEM) together with its kernelized extreme counterpart -- the Extreme Entropy Kernel Machine (EEKM). We show some connections with existing models and some different perspectives for looking at the proposed model. In particular, we show how the learning capabilities of EEM (and EEKM) resemble those of ELM (and LS-SVM, respectively). During evaluation on over 20 binary datasets (of various sizes and characteristics) we analyze generalization capabilities and learning speed. We show that it can be a valuable, robust alternative to existing methods. In particular, we show that it achieves stability analogous to that of ELM with respect to the hidden layer size. We conclude with future development plans and open problems. \section{Preliminaries} Let us begin by recalling some basic information regarding Extreme Learning Machines~\cite{huang2006extreme} and Support Vector Machines~\cite{Vapnik95}, which are further used as competing models for the proposed solution. We focus here on the optimization problems being solved to underline some basic differences between these methods and EEMs. \subsection{Extreme Learning Machines} Extreme Learning Machines are relatively young models introduced by Huang et al.~\cite{huang2004extreme} which are based on the idea that single layer feed forward neural networks (SLFN) can be trained without an iterative process by performing linear regression on the data mapped through a random, nonlinear projection (random hidden neurons). More precisely, the basic ELM architecture consists of $d$ input neurons, one for each input space dimension, which are fully connected with $h$ hidden neurons by a set of weights $w_j$ (selected randomly from some arbitrary distribution) and a set of biases $b_j$ (also randomly selected). Given some generalized nonlinear activation function $\Gfunction$, one can express the hidden neurons' activation matrix $\Hlayer$ for the whole training set $\Xlayer, \Yset = \{(\xpoint_i,\ypoint_i)\}_{i=1}^N$, with $\xpoint_i \in \R^d$ and $\ypoint_i \in \{-1,+1\}$, as $$ \Hlayer_{ij} = \Gfunction(\xpoint_i,w_j,b_j), $$ and formulate the following optimization problem. \noindent{\textbf{Optimization problem: Extreme Learning Machine} } \begin{equation*} \begin{aligned} & \underset{\LinearOperator}{\text{minimize}} & & \left \| \Hlayer\LinearOperator - \Yset \right \|^2 \\ & \text{where} & & \Hlayer_{ij} = \Gfunction(\xpoint_i,w_j,b_j) ,\; i = 1, \ldots ,N, \; j = 1, \ldots , h\\ \end{aligned} \end{equation*} If we denote the weights between the hidden layer and the output neuron by $\LinearOperator$, it is easy to show~\cite{huang2006extreme} that putting $$ \LinearOperator = \Hlayer^\dagger \Yset, $$ gives the best solution in terms of the mean squared error of the regression: $$ \left \| \Hlayer\LinearOperator - \Yset \right \|^2 =\left \| \Hlayer(\Hlayer^\dagger \Yset) - \Yset \right \|^2 = \min_{\alpha \in \mathbb{R}^{h}} \left \| \Hlayer\alpha - \Yset \right \|^2, $$ where $\Hlayer^\dagger$ denotes the Moore-Penrose pseudoinverse of the matrix $\Hlayer$. Classification of a new point $\xpoint$ can now be performed according to $$ cl(\xpoint) = \textsc{sign}( \left [\begin{matrix} \Gfunction(\xpoint,w_1,b_1) & \ldots & \Gfunction(\xpoint,w_h,b_h) \end{matrix} \right ]\LinearOperator ).
$$ As it is based on ordinary least squares optimization, it is possible to balance it for unbalanced datasets by performing weighted ordinary least squares. In such a scenario, given a vector $B$ such that $B_i$ is the square root of the inverse of the size of $\xpoint_i$'s class, and with $B \cdot X$ denoting element-wise multiplication between $B$ and $X$: $$ \LinearOperator = (B \cdot \Hlayer)^\dagger B \cdot \Yset $$ \subsection{Support Vector Machines and Least Squares Support Vector Machines} One of the most well known classifiers of the last decade is Vapnik's Support Vector Machine (SVM~\cite{Vapnik95}), based on the principle of creating a linear classifier that maximizes the separating margin between elements of two classes. \noindent{\textbf{Optimization problem: Support Vector Machine} } \begin{equation*} \begin{aligned} & \underset{\LinearOperator,b,\xi}{\text{minimize}} & & \frac{1}{2} \left \| \LinearOperator \right \|^2 + C \sum_{i=1}^N \xi_i \\ & \text{subject to} & & \ypoint_i(\langle \LinearOperator, \xpoint_i\rangle + b) \geq 1 - \xi_i ,\; \xi_i \geq 0, \; i = 1, \ldots ,N \end{aligned} \end{equation*} \noindent which can be further kernelized (delinearized) using any kernel $\Kernel$ (valid in the Mercer sense): \noindent{\textbf{Optimization problem: Kernel Support Vector Machine} } \begin{equation*} \begin{aligned} & \underset{\LinearOperator}{\text{maximize}} & & \sum_{i=1}^N\LinearOperator_i - \frac{1}{2} \sum_{i,j=1}^N \LinearOperator_i \LinearOperator_j \ypoint_i \ypoint_j \Kernel(\xpoint_i,\xpoint_j) \\ & \text{subject to} & & \sum_{i=1}^N \LinearOperator_i \ypoint_i = 0 \\ & & & 0 \leq \LinearOperator_i \leq C ,\; i = 1, \ldots ,N \end{aligned} \end{equation*} The problem is a quadratic optimization with linear constraints, which can be efficiently solved using quadratic programming techniques. Due to the use of the hinge loss on $\xi_i$, SVM attains very sparse solutions in terms of nonzero $\LinearOperator_i$. As a result, the classifier does not have to remember the whole training set but only the set of so-called support vectors ($SV = \{ \xpoint_i : \LinearOperator_i > 0 \}$), and it classifies a new point according to $$ cl(\xpoint) = \textsc{sign} \left ( \sum_{\xpoint_i \in SV} \LinearOperator_i \ypoint_i \Kernel(\xpoint_i, \xpoint) + b \right ). $$ It appears that if we change the loss function to a quadratic one, we can greatly reduce the complexity of the resulting optimization problem, leading to the so-called Least Squares Support Vector Machines (LS-SVM). \noindent{\textbf{Optimization problem: Least Squares Support Vector Machine} } \begin{equation*} \begin{aligned} & \underset{\LinearOperator,b,\xi}{\text{minimize}} & & \frac{1}{2} \left \| \LinearOperator \right \|^2 + C \sum_{i=1}^N \xi^2_i \\ & \text{subject to} & & \ypoint_i(\langle \LinearOperator, \xpoint_i\rangle + b) = 1 - \xi_i ,\; i = 1, \ldots ,N \end{aligned} \end{equation*} and the decision is made according to $$ cl(\xpoint) = \textsc{sign}( \langle \LinearOperator, \xpoint \rangle + b ) $$ As Suykens et al.
showed~\cite{suykens1999least}, this can be further generalized to arbitrary kernel-induced spaces, where we classify according to: $$ cl(\xpoint) = \textsc{sign} \left ( \sum_{i=1}^N \LinearOperator_i \Kernel(\xpoint_i, \xpoint) + b \right ) $$ where $\LinearOperator_i$ are Lagrange multipliers associated with particular training examples $\xpoint_i$ and $b$ is a threshold, found by solving the linear system $$ \left [ \begin{matrix} 0 & \mathbb{1}^T \\ \mathbb{1} & \Kernel(\Xlayer,\Xlayer) + I/C \end{matrix} \right ] \left [ \begin{matrix} b\\ \LinearOperator \end{matrix} \right ] = \left [ \begin{matrix} 0 \\ \Yset \end{matrix} \right ] $$ where $\mathbb{1}$ is a vector of ones and $I$ is an identity matrix of appropriate dimensions. Thus the training procedure becomes $$ \left [ \begin{matrix} b\\ \LinearOperator \end{matrix} \right ] = \left [ \begin{matrix} 0 & \mathbb{1}^T \\ \mathbb{1} & \Kernel(\Xlayer,\Xlayer) + I/C \end{matrix} \right ]^{-1} \left [ \begin{matrix} 0 \\ \Yset \end{matrix} \right ]. $$ Similarly to the classical SVM, this formulation is highly unbalanced (its results are skewed towards bigger classes). To overcome this issue one can introduce a weighted version~\cite{suykens2002weighted}, where, given a diagonal matrix of weights $Q$ such that $Q_{ii}$ is inversely proportional to the size of $\xpoint_i$'s class, the training becomes $$ \left [ \begin{matrix} b\\ \LinearOperator \end{matrix} \right ] = \left [ \begin{matrix} 0 & \mathbb{1}^T \\ \mathbb{1} & \Kernel(\Xlayer,\Xlayer) + Q /C \end{matrix} \right ]^{-1} \left [ \begin{matrix} 0 \\ \Yset \end{matrix} \right ]. $$ Unfortunately, due to the introduction of the square loss, the sparseness of the Support Vector Machine solution is completely lost. The resulting training has a closed-form solution, but it requires the computation of the whole Gram matrix, and the resulting machine has to remember\footnote{there are some pruning techniques for LS-SVM but we are not investigating them here} the whole training set in order to classify a new point. \section{Extreme Entropy Machines} Let us first recall the formulation of the linear classification problem in highly dimensional feature spaces, i.e.\ when the number of samples $N$ is equal to (or less than) the dimension of the feature space $h$. In particular we formulate the problem in the limiting case\footnote{which is often obtained by the kernel approach} when $h=\infty$: \begin{problem} We are given a finite number of (often linearly independent) points $\hpoint_i^\pm$ in an infinite dimensional Hilbert space $\Hspace $. Points $\hpoint^+ \in \Hlayer^+$ constitute the positive class, while $\hpoint^- \in \Hlayer^-$ constitute the negative class. We search for $\LinearOperator \in \Hspace $ such that the sets $\LinearOperator^T\Hlayer^+$ and $\LinearOperator^T\Hlayer^-$ are {\em optimally separated}. \end{problem} Observe that in itself (without additional regularization) the problem is not well-posed as, using the linear independence of the data, for arbitrary $\meanpoint_+ \neq \meanpoint_-$ in $\R$ we can easily construct $\LinearOperator \in \Hspace $ such that \begin{equation} \label{over} \LinearOperator^T\Hlayer^+=\{\meanpoint_+\} \text{ and } \LinearOperator^T\Hlayer^-=\{\meanpoint_-\}. \end{equation} However, this leads to a model case of overfitting, which typically yields suboptimal results on the testing set (different from the original training samples).
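The degeneracy in \eqref{over} is easy to reproduce numerically: as soon as the number of random nonlinear features matches the number of samples, a linear functional of the projected data can fit arbitrary labels exactly. The snippet below is a small, self-contained NumPy illustration of this effect (the data, feature map and all names are ours and purely illustrative, not part of any of the discussed implementations).
\begin{verbatim}
import numpy as np

rng = np.random.RandomState(0)
N, d = 40, 5
h = N                                  # as many random features as samples
X = rng.randn(N, d)
y = rng.choice([-1.0, 1.0], size=N)    # arbitrary +/-1 labels

W, b = rng.randn(d, h), rng.randn(h)
H = np.tanh(X @ W + b)                 # random nonlinear projection

beta, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.abs(H @ beta - y).max())      # ~0 up to numerical error:
                                       # the training labels are interpolated
\end{verbatim}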
To make the problem well-posed, we typically need to: \begin{enumerate} \item add/allow some error in the data, \item specify some objective function including a term penalising the model's complexity. \end{enumerate} Popular choices of the objective function include a per-point classification loss (like the square loss in neural networks or the hinge loss in SVM) with a regularization term added, often expressed as the square of the norm of our operator $\LinearOperator$ (as in SVM or in the weight decay regularization of neural networks). In general, one can divide objective function derivations into the following categories: \begin{itemize} \item regression based (like in neural networks or ELM), \item probabilistic (like in the case of Naive Bayes), \item geometric (like in SVM), \item information theoretic (entropy models). \end{itemize} We focus on the last group of approaches, and investigate the applicability of the {\em Cauchy-Schwarz divergence}~\cite{jenssen2006cauchy}, which for two densities $f$ and $g$ is given by \begin{equation*} \begin{aligned} \Dcs(f,g)&=\ln \left (\int f^2 \right )+\ln \left (\int g^2 \right )-2\ln \left (\int fg \right)\\ &=-2\ln \left (\int \tfrac{f}{\|f\|_2} \tfrac{g}{\|g\|_2} \right ). \end{aligned} \end{equation*} The Cauchy-Schwarz divergence is connected to Renyi's quadratic cross entropy ($H_2^\times$~\cite{principe}) and Renyi's quadratic entropy ($H_2$), defined for densities $f,g$ as \begin{equation*} \begin{aligned} H_2^\times(f,g)&=-\ln \left (\int fg \right ) \\ H_2(f)=H_2^\times(f,f)&=-\ln \left (\int f^2 \right ), \end{aligned} \end{equation*} consequently \begin{equation*} \begin{aligned} \Dcs(f,g)=2H_2^\times(f,g) -H_2(f)-H_2(g), \end{aligned} \end{equation*} and, as we showed in \cite{MELC}, it is well-suited as a discrimination measure which allows the construction of multi-threshold linear classifiers. In general, an increase of the value of the Cauchy-Schwarz Divergence results in better discrimination of the sets (densities). Unfortunately, there are a few problems with such an approach: \begin{itemize} \item true datasets are discrete, so we do not have densities $f$ and $g$, \item statistical density estimators require rather large sample sizes and are very computationally expensive. \end{itemize} There are basically two approaches which help us recover the underlying densities from the samples. The first one is performing some kind of density estimation, like the well known Kernel Density Estimation (KDE) technique, which is based on the observation that any continuous distribution can be sufficiently well approximated by a convex combination of Gaussians. The other approach is to assume some density model (a family of distributions) and fit its parameters in order to maximize the probability of generating the data. In statistics this is known as the maximum likelihood estimation (MLE) approach. MLE has the advantage that, in general, it produces much simpler density descriptions than KDE, as the latter's description grows linearly with the sample size. A common choice of density model is the Gaussian distribution, due to its nice theoretical and practical (computational) properties. As mentioned earlier, a convex combination of Gaussians can approximate a given continuous distribution $f$ with arbitrary precision. In order to fit a Gaussian Mixture Model (GMM) to a given dataset one needs an algorithm like Expectation Maximization~\cite{dempster1977maximum} or the conceptually similar Cross-Entropy Clustering~\cite{tabor2014cross}.
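Both routes are readily available in the Python stack used later in the Evaluation section; the toy sketch below (our own example, not part of the EEM code) contrasts a kernel density estimate with a maximum likelihood Gaussian mixture fitted by Expectation Maximization.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# one-dimensional sample drawn from a mixture of two Gaussians
sample = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])

kde = gaussian_kde(sample)             # description grows with the sample size
gmm = GaussianMixture(n_components=2).fit(sample.reshape(-1, 1))  # MLE via EM

grid = np.linspace(-6, 6, 200)
kde_density = kde(grid)
gmm_density = np.exp(gmm.score_samples(grid.reshape(-1, 1)))
print(np.abs(kde_density - gmm_density).max())  # compare the two estimates
\end{verbatim}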
However, for simplicity and strong regularization we propose to model $f$ as one big Gaussian $\nor(m,\Sigma)$. One of the biggest advantages of such an approach is closed form MLE parameter estimation, as we simply put $m$ equal to the empirical mean of the data, and $\Sigma$ as some data covariance estimator. Secondly, this way we introduce an error to the data which has an important regularizing role and leads to better posed optimization problem. Let us now recall that the Shannon's differential entropy (expressed in nits) of the continuous distribution $f$ is $$ H(f) = - \int f \ln f, $$ we will now show that choice of Normal distributions is not arbitrary but supported by the assumptions of the entropy maximization. Following result is known, but we include the whole reasoning for completeness. \begin{remark} Normal distribution $\nor(m,\Sigma)$ has a maximum Shannon's differential entropy among all real-valued distributions with mean $m \in \R^h$ and covariance $\Sigma \in \R^{h \times h}$. \end{remark} \begin{proof} Let $f$ and $g$ be arbitrary distributions with covariance $\Sigma$ and means $m$. For simplicity we assume that $m=0$ but the analogous proof holds for arbitrary mean, then $$ \int f \hpoint_i \hpoint_j d \hpoint_i d \hpoint_j = \int g \hpoint_i \hpoint_j d \hpoint_i d \hpoint_j = \Sigma_{ij}, $$ so for quadratic form $A$ $$ \int Af = \int Ag. $$ Notice that \begin{equation*} \begin{aligned} \ln \nor(0,\Sigma)[\hpoint] &= \ln \left ( \frac{1}{\sqrt{(2\pi)^h \det(\Sigma)}} \exp( -\tfrac{1}{2}\hpoint^T \Sigma^{-1} \hpoint) \right )\\ &= -\frac{1}{2} \ln( (2\pi)^h \det(\Sigma) ) -\tfrac{1}{2}\hpoint^T \Sigma^{-1} \hpoint \end{aligned} \end{equation*} is a quadratic form plus constant thus $$\int f \ln \nor(0,\Sigma) = \int \nor(0,\Sigma) \ln \nor(0,\Sigma),$$which together with non-negativity of Kullback-Leibler Divergence gives \begin{equation*} \begin{aligned} 0 &\leq \Dkl(f\;||\;\nor(0,\Sigma)) \\ &= \int f \ln ( \tfrac{f}{\nor(0,\Sigma)} ) \\ &= \int f \ln f - \int f \ln \nor(0,\Sigma) \\ &= -{H}(f) - \int f \ln \nor(0,\Sigma) \\ &= -{H}(f) - \int \nor(0,\Sigma) \ln \nor(0,\Sigma) \\ &= -{H}(f) + {H}( \nor(0,\Sigma) ), \end{aligned} \end{equation*} thus $$ {H}( \nor(0,\Sigma) ) \geq {H}(f). $$ \end{proof} There appears nontrivial question how to find/estimate the desired Gaussian as the covariance can be singular. In this case to regularize the covariance we apply the well-known Ledoit-Wolf approach \cite{ledoit2004well}. $$ \Sigma^{\pm}=\covlw ( \Hlayer^{\pm} ) = (1-\varepsilon^\pm)\cov(\Hlayer^\pm) + \varepsilon^\pm \mathrm{tr}(\cov(\Hlayer^\pm))h^{-1} I, $$ where $\cov(\cdot)$ is an empirical covariance and $\varepsilon^\pm$ is a shrinkage coefficient given by Ledoit and Wolf~\cite{ledoit2004well}. \def2.5cm{2.5cm} \begin{figure*} \caption{Extreme Entropy Machine (on the left) and Extreme Entropy Kernel Machine (on the right) as neural networks. In both cases all weights are either randomly selected (dashed) or are the result of closed-form optimization (solid). } \label{fig:nn} \end{figure*} Thus, our optimization problem can be stated as follows: \begin{problem}[Optimization problem] Suppose that we are given two datasets $\Hlayer^\pm$ in a Hilbert space $\Hspace $ which come from the Gaussian distributions $\nor(\meanpoint^\pm,\Sigma^\pm)$. Find $\LinearOperator \in \Hspace $ such that the datasets $$ \LinearOperator^T \Hlayer^+ \mbox{ and } \LinearOperator^T \Hlayer^- $$ are optimally discriminated in terms of Cauchy-Schwarz Divergence. 
\end{problem} Because $\Hlayer^\pm$ has density $\nor(\meanpoint^\pm,\Sigma^\pm)$, $\LinearOperator^T\Xlayer^\pm$ has the density $\nor(\LinearOperator^T\meanpoint^\pm,\LinearOperator^T\Sigma^\pm \LinearOperator)$. We put \begin{equation} \label{eq:S} \meanpoint_\pm=\LinearOperator^T\meanpoint^\pm, \, S_\pm=\LinearOperator^T\Sigma^\pm \LinearOperator. \end{equation} Since, as one can easily compute~\cite{ckrbf}, $$ \int \frac{\nor(\meanpoint_+,S_+)}{\|\nor(\meanpoint_+,S_+)\|_2} \cdot \frac{\nor(\meanpoint_-,S_-)}{\|\nor(\meanpoint_-,S_-)\|_2} $$ $$ =\frac{\nor(\meanpoint_+-\meanpoint_-,S_++S_-)[0]}{(\nor(0,2S_+)[0]\nor(0,2S_-)[0])^{1/2}} $$ $$ =\frac{(2\pi S_+ S_-)^{1/4}}{(S_++S_-)^{1/2}}\exp \left (-\frac{(\meanpoint_+-\meanpoint_-)^2}{2(S_++S_-)} \right ), $$ we obtain that \begin{equation} \begin{aligned} \Dcs&(\nor(\meanpoint_+,S_+),\nor(\meanpoint_-,S_-))\\ &=-\ln \left ( \int \frac{\nor(\meanpoint_+,S_+)}{\|\nor(\meanpoint_+,S_+)\|_2} \cdot \frac{\nor(\meanpoint_-,S_-)}{\|\nor(\meanpoint_-,S_-)\|_2} \right ) \\ &=-\frac{1}{2}\ln \frac{\pi}{2}-\ln \frac{\tfrac{1}{2}(S_++S_-)}{\sqrt{S_+S_-}}+\frac{(\meanpoint_+-\meanpoint_-)^2}{S_++S_-}. \end{aligned} \end{equation} Observe that in the above equation the first term is constant, the second is the logarithm of the quotient of arithmetical and geometrical means (and therefore in the typical cases is bounded and close to zero). Consequently, crucial information is given by the last term. In order to confirm this claim we perform experiments on over 20 datasets used in further evaluation (more details are located in the Evaluation section). We compute the Spearman's rank correlation coefficient between the $\Dcs(\nor(\meanpoint_+,S_+),\nor(\meanpoint_-,S_-))$ and $\frac{(\meanpoint_+-\meanpoint_-)^2}{S_++S_-}$ for hundread random projections to $\Hspace $ and hundread random linear operators $\LinearOperator$. \begin{table}[H] \centering \caption{Spearman's rank correlation coefficient between optimized term and whole $\Dcs$ for all datasets used in evaluation. 
Each column represents different dimension of the Hilbert space.} \label{tab:dcs} \begin{tabular}{lccccc} \toprule dataset & $1$ & $10$ & $100$ & $200$ & $500$ \\ \midrule \textsc{ australian } & 0.928 & -0.022 & 0.295 & 0.161 & 0.235 \\ \textsc{ breast-cancer } & 0.628 & 0.809 & 0.812 & 0.858 & 0.788 \\ \textsc{ diabetes } & -0.983 & -0.976 & -0.941 & -0.982 & -0.952 \\ \textsc{ german.numer } & 0.916 & 0.979 & 0.877 & 0.873 & 0.839 \\ \textsc{ heart } & 0.964 & 0.829 & 0.931 & 0.91 & 0.893 \\ \textsc{ ionosphere } & 0.999 & 0.988 & 0.98 & 0.978 & 0.984 \\ \textsc{ liver-disorders } & 0.232 & 0.308 & 0.363 & 0.33 & 0.312 \\ \textsc{ sonar } & -0.31 & -0.542 & -0.41 & -0.407 & -0.381 \\ \textsc{ splice } & -0.284 & -0.036 & -0.165 & -0.118 & -0.101 \\ \midrule \textsc{ abalone7 } & 1.0 & 0.999 & 0.999 & 0.999 & 0.998 \\ \textsc{ arythmia } & 1.0 & 1.0 & 0.999 & 1.0 & 1.0 \\ \textsc{ balance } & 1.0 & 0.998 & 0.998 & 0.999 & 0.998 \\ \textsc{ car evaluation } & 1.0 & 0.998 & 0.998 & 0.997 & 0.997 \\ \textsc{ ecoli } & 0.964 & 0.994 & 0.995 & 0.998 & 0.995 \\ \textsc{ libras move } & 1.0 & 0.999 & 0.999 & 1.0 & 1.0 \\ \textsc{ oil spill } & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ \textsc{ sick euthyroid } & 1.0 & 0.999 & 1.0 & 1.0 & 1.0 \\ \textsc{ solar flare } & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ \textsc{ spectrometer } & 1.0 & 1.0 & 0.999 & 0.999 & 0.999 \\ \midrule \textsc{ forest cover } & 0.988 & 0.997 & 0.997 & 0.992 & 0.988 \\ \textsc{ isolet } & 0.784 & 1.0 & 0.997 & 0.997 & 0.999 \\ \textsc{ mammography } & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ \textsc{ protein homology } & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ \textsc{ webpages } & 1.0 & 1.0 & 1.0 & 0.999 & 0.999 \\ \bottomrule \end{tabular} \end{table} As one can see in Table~\ref{tab:dcs}, in small datasets (first part of the table) the correlation is generally high, with some exceptions (like \textsc{sonar}, \textsc{splice}, \textsc{liver-disorders} and \textsc{diabetes}). However, for bigger datasets (consisting of thousands examples) this correlation is nearly perfect (up to the randomization process it is nearly $1.0$ for all cases) which is a very strong empirical confirmation of our claim that maximization of the $\frac{(\meanpoint_+-\meanpoint_-)^2}{S_++S_-}$ is generally equivalent to the maximization of $\Dcs(\nor(\meanpoint_+,S_+),\nor(\meanpoint_-,S_-))$. This means that, after the above reductions, and application of \eqref{eq:S} our final problem can be stated as follows: \noindent{\textbf{Optimization problem: Extreme Entropy Machine} } \begin{equation*} \begin{aligned} & \underset{\LinearOperator}{\text{minimize}} & & \LinearOperator^T \Sigma^+ \LinearOperator + \LinearOperator^T \Sigma^- \LinearOperator &\\ & \text{subject to} & & \LinearOperator^T( \meanpoint^+ - \meanpoint^- ) = 2\\ & \text{where} & & \Sigma^\pm = \covlw (\Hlayer^\pm) \\ & & & \meanpoint^\pm = \frac{1}{|\Hlayer^\pm|} \sum_{\hpoint^\pm \in \Hlayer^\pm} \hpoint^\pm\\ & & & \Hlayer^\pm = \varphi(\Xlayer^\pm) \end{aligned} \end{equation*} Before we continue to the closed-form solution we outline two methods of actually transforming our data $\Xlayer^\pm \subset \Xspace $ to the highly dimensional $\Hlayer^\pm \subset \Hspace $, given by the $\varphi : \Xspace \to \Hspace $. We investigate two approaches which lead to the Extreme Entropy Machine and Extreme Entropy Kernel Machine respectively. \begin{itemize} \item for \textbf{Extreme Entropy Machine} (EEM) we use the random projection technique, exactly the same as the one used in the ELM. 
In other words, given some generalized activation function $\Gfunction(\xpoint,w,b) : \Xspace \times \Xspace \times \mathbb{R} \to \R$ and a constant $h$ denoting number of hidden neurons: $$ \varphi : \Xspace \ni \xpoint \to [\Gfunction(\xpoint,w_1,b_1),\dots,\Gfunction(\xpoint,w_h,b_h)]^T \in \R^h $$ where $w_i$ are random vectors and $b_i$ are random biases. \item for \textbf{Extreme Entropy Kernel Machine} (EEKM) we use the randomized kernel approximation technique~\cite{drineas2005nystrom}, which spans our Hilbert space on randomly selecteed subset of training vectors. In other words, given valid kernel $\Kernel(\cdot,\cdot) : \Xspace \times \Xspace \to \R_+$ and size of the kernel space base $h$: $$ \varphi_\Kernel : \Xspace \ni \xpoint \to (\Kernel(\xpoint,\Xlayer\subsample)\Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1/2})^T \in \R^h $$ where $\Xlayer\subsample$ is a $h$ element random subset of $\Xlayer$. It is easy to verify that such low rank approxmation truly behaves as a kernel, in the sense that for $\varphi_\Kernel(\xpoint_i), \varphi_\Kernel(\xpoint_j) \in \R^{h} $ \begin{equation*} \begin{aligned} \varphi_\Kernel(\xpoint_i)^T&\varphi_\Kernel(\xpoint_j) = \\ = &((\Kernel(\xpoint_i,\Xlayer\subsample)\Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1/2})^T)^T \\ &( \Kernel(y,\Xlayer\subsample)\Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1/2} )^T \\ = &\Kernel(\xpoint_i,\Xlayer\subsample)\Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1/2} \\ &( \Kernel(y,\Xlayer\subsample)\Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1/2} )^T \\ = &\Kernel(\xpoint_i,\Xlayer\subsample)\Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1/2}\\ &\Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1/2} \Kernel^T(\xpoint_j,\Xlayer\subsample) \\ = &\Kernel(\xpoint_i,\Xlayer\subsample)\Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1} \Kernel(\Xlayer\subsample,\xpoint_j), \end{aligned} \end{equation*} \noindent given true kernel projection $\phi_\Kernel$ such that $$\Kernel(\xpoint_i,\xpoint_j)=\phi_\Kernel(\xpoint_i)^T\phi_\Kernel(\xpoint_j)$$ we have \begin{equation*} \begin{aligned} \Kernel(\xpoint_i,\Xlayer\subsample)&\Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1} \Kernel(\Xlayer\subsample,\xpoint_j) = \\ =&\phi_\Kernel(\xpoint_i)^T\phi_\Kernel(\Xlayer\subsample) \\ &(\phi_\Kernel(\Xlayer\subsample)^T\phi_\Kernel(\Xlayer\subsample))^{-1}\\ &\phi_\Kernel(\Xlayer\subsample)^T\phi_\Kernel(\xpoint_j) \\ =&\phi_\Kernel(\xpoint_i)^T\phi_\Kernel(\Xlayer\subsample) \phi_\Kernel(\Xlayer\subsample)^{-1}\\ &(\phi_\Kernel(\Xlayer\subsample)^T)^{-1} \phi_\Kernel(\Xlayer\subsample)^T\phi_\Kernel(\xpoint_j) \\ =& \phi_\Kernel(\xpoint_i)^T\phi_\Kernel(\xpoint_j)\\ =& \Kernel(\xpoint_i,\xpoint_j). \end{aligned} \end{equation*} Thus for the whole samples' set we have $$ \varphi_\Kernel(\Xlayer)^T \varphi_\Kernel(\Xlayer) = \Kernel(\Xlayer,\Xlayer), $$ which is a complete Gram matrix. \end{itemize} So the only difference between Extreme Entropy Machine and Extreme Entropy Kernel Machine is that in later we use $\Hlayer^\pm=\varphi_\Kernel(\Xlayer^\pm)$ where $\Kernel$ is a selected kernel instead of $\Hlayer^\pm=\varphi(\Xlayer^\pm)$. Fig.~\ref{fig:nn} visualizes these two approaches as neural networks, in particular EEM is a simple SLFN, while EEKM leads to the network with two hidden layers. \begin{figure*} \caption{Visualization of the whole EEM classification process. 
From the left: Linearly non separable data in $\Xspace$; data mapped to the $\Hspace$ space, where we find covariance estimators; density of projected Gaussians on which the decision is based; decision boundary in the input space $\Xspace$. } \label{fig:classificaion} \end{figure*} \begin{remark} Extreme Entropy Machine optimization problem is closely related to the SVM optimization, but instead of maximizing the margin between closest points we are maximizing the mean margin. \end{remark} \begin{proof} Let us recall that in SVM we try to maximize the margin $\frac{2}{\| \LinearOperator \|}$ under constraints that negative samples are projected at values at most -1 ($\LinearOperator^T \hpoint^- + b\leq -1 $) and positive samples on at least 1 ($\LinearOperator^T \hpoint^+ + b\geq 1$) In other words, we are minimizing the $\LinearOperator$ operator norm $$\| \LinearOperator \|$$which is equivalent to minimizing the square of this norm $\| \LinearOperator\|^2$, under constraint that $$\min_{\hpoint^+ \in \Hlayer^+} \{ \LinearOperator^T \hpoint^+ \} - \max_{\hpoint^- \in \Hlayer^-} \{ \LinearOperator^T \hpoint^- \} = 1 - (-1) = 2.$$ On the other hand, EEM tries to minimize \begin{equation*} \begin{aligned} \LinearOperator^T \Sigma^+ \LinearOperator + \LinearOperator^T \Sigma^- \LinearOperator &= \LinearOperator^T ( \Sigma^+ + \Sigma^- ) \LinearOperator \\ &= \| \LinearOperator \|_{\Sigma^+ + \Sigma^-}^2 \end{aligned} \end{equation*} under the constraint that $$ \tfrac{1}{|\Hlayer^+|}\sum_{\hpoint^+ \in \Hlayer^+} \LinearOperator^T \hpoint^+ - \tfrac{1}{|\Hlayer^-|} \sum_{\hpoint^- \in \Hlayer^-} \LinearOperator^T \hpoint^- = 2. $$ So what is happening here is that we are trying to maximize the mean margin between classes in the Mahalanobis norm generated by the sum of classes' covariances. It was previously shown in Two ellipsoid Support Vector Machines model~\cite{czarnecki2014two} that such norm is an approximation of the margin coming from two ellpisoids instead of the single ball used by traditional SVM. \end{proof} Similar observation regarding connection between large margin classification and entropy optimization has been done in case of the Multithreshold Linear Entropy Classifier~\cite{MELC}. We are going to show by applying the standard method of Lagrange multipliers that the above problem has a closed form solution (similar to the Fischer's Discriminant). Let $$ \Sigma=\Sigma^++\Sigma^- \mbox{ and }\meanpoint=\meanpoint^+-\meanpoint^-. $$ We put $$ L(\LinearOperator,\lambda):=2\LinearOperator^T\Sigma \LinearOperator-\lambda(\LinearOperator^T\meanpoint-2). $$ Then $$ \nabla_v L=2\Sigma \LinearOperator-\lambda \meanpoint \text{ and } \frac{\partial}{\partial \lambda}L=\LinearOperator^T\meanpoint-2, $$ which means that we need to solve, with respect to $\LinearOperator$, the system $$ \begin{cases} 2\Sigma \LinearOperator-\lambda \meanpoint=0, \\ \LinearOperator^Tm=2. \end{cases} $$ Therefore $\LinearOperator=\frac{\lambda}{2}\Sigma^{-1}\meanpoint$, which yields $$ \tfrac{\lambda}{2}\meanpoint^T\Sigma^{-1}\meanpoint=2, $$ and consequently\footnote{where $\|\meanpoint\|^2_\Sigma=\meanpoint^T\Sigma^{-1}\meanpoint$ denotes the squared Mahalanobis norm of $\meanpoint$.}, if $\meanpoint\neq 0$, then $\lambda=4/\|\meanpoint\|^2_\Sigma$ and \begin{equation} \label{eq:final} \begin{aligned} \LinearOperator&=\tfrac{2}{\|\meanpoint\|^2_\Sigma}\Sigma^{-1}\meanpoint\\ &=\frac{2(\Sigma^+ + \Sigma^-)^{-1}(\meanpoint^+-\meanpoint^-)}{\|\meanpoint^+-\meanpoint^-\|^2_{\Sigma^+ + \Sigma^-}}. 
\end{aligned} \end{equation} The final decision on the class of a point $\hpoint$ is therefore given by the comparison of the values $$ \nor(\LinearOperator^T\meanpoint^+,\LinearOperator^T\Sigma^+\LinearOperator)[\LinearOperator^T\hpoint] \text{ and } \nor(\LinearOperator^T\meanpoint^-,\LinearOperator^T\Sigma^-\LinearOperator)[\LinearOperator^T\hpoint]. $$ We distinguish two cases based on the number of the resulting classifier's thresholds (points $t$ such that $\nor(\LinearOperator^T\meanpoint^+,\LinearOperator^T\Sigma^+\LinearOperator)[t] = \nor(\LinearOperator^T\meanpoint^-,\LinearOperator^T\Sigma^-\LinearOperator)[t]$): \begin{enumerate} \item $S_- = S_+$, in which case there is one threshold \begin{flalign*} &t_0=\meanpoint_- + 1, & \end{flalign*} which results in a traditional (one-threshold) linear classifier, \item $S_- \neq S_+$, in which case there are two thresholds \begin{flalign*} &t_\pm = \meanpoint_- + \tfrac{2S_- \pm \sqrt{S_-S_+(\ln(S_-/S_+)(S_--S_+)+4)}}{S_--S_+}, & \end{flalign*} which makes the resulting classifier a member of the family of two-threshold linear classifiers~\cite{anthony2003learning}. \end{enumerate} Obviously, in the degenerate case when $m=0 \iff \meanpoint^-=\meanpoint^+$ there is no solution, as the constraint $\LinearOperator^T(\meanpoint^+-\meanpoint^-)=2$ is not fulfilled for any $\LinearOperator$. In such a case EEM returns a trivial classifier constantly equal to one of the classes (we put $\LinearOperator=0$). From the neural network perspective we simply construct a custom activation function $\Ffunction(\cdot)$ in the output neuron, depending on which of the two described cases holds: \begin{enumerate} \item $\Ffunction(x) = \left \{ \begin{matrix} +1, \text{ if } x \geq t_0\\ -1, \text{ if } x < t_0 \end{matrix} \right . = \textsc{sign}(x-t_0),$ \item $\Ffunction(x) = \left \{ \begin{matrix} +1, \text{ if } x \in [t_-,t_+]\\ -1, \text{ if } x \notin [t_-,t_+] \end{matrix} \right . = -\textsc{sign}(x-t_-) \textsc{sign}(x-t_+) ,$ if $t_-<t_+$ and \\ $\Ffunction(x) = \left \{ \begin{matrix} -1, \text{ if } x \in [t_+,t_-]\\ +1, \text{ if } x \notin [t_+,t_-] \end{matrix} \right . = \textsc{sign}(x-t_-) \textsc{sign}(x-t_+) ,$\\ otherwise. \end{enumerate} The whole classification process is visualized in Fig.~\ref{fig:classificaion}: we begin with data in the input space $\Xspace $, transform it into the Hilbert space $\Hspace $, where we model the classes as Gaussians, then perform the optimization leading to the projection onto $\R$ through $\LinearOperator$, and finally perform density-based classification, leading to a non-linear decision boundary in $\Xspace $. \section{Theory: density estimation in the kernel case} To illustrate our reasoning, we consider a typical basic problem concerning density estimation. \begin{problem} Assume that we are given a finite data set $\Hlayer$ in a Hilbert space $\Hspace $ generated by an unknown density $f$, and we want to obtain an estimate of $f$. \end{problem} Since the problem in itself is infinite dimensional, typically the data will be linearly independent. Moreover, one usually cannot obtain a reliable density estimate -- the most we can hope for is that, after transformation by a linear functional into $\R$, the resulting density will be well-estimated. To simplify the problem, assume therefore that we want to find the desired density in the class of normal densities -- or, equivalently, that we are interested only in the estimation of the mean and covariance of $f$.
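The difficulty is already visible at the level of the covariance: with $N \ll h$ the empirical covariance of the mapped sample is necessarily rank deficient, which is exactly what motivates the shrinkage estimator $\covlw$ used in the previous section. A short check (ours, purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.RandomState(0)
N, h = 30, 500                       # far fewer samples than dimensions
H = rng.randn(N, h)
S = np.cov(H, rowvar=False)          # h x h empirical covariance
print(np.linalg.matrix_rank(S), h)   # rank <= N-1 < h, so S is singular
\end{verbatim}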
The generalization of the above problem is given by the following: \begin{problem} Assume that we are given finite data sets $\Hlayer^\pm$ in a Hilbert space $\Hspace $ generated by unknown densities $f^\pm$, and we want to obtain estimates of the unknown densities. \end{problem} In general $\text{dim}(\Hspace ) = h \gg N$, which means that the data is very sparse in terms of the Hilbert space. As a result, classical kernel density estimation (KDE) is not a reliable source of information~\cite{parzen1962estimation}. In the absence of other tools we can, however, use KDE with a very large kernel width in order to capture at least the general shape of the whole density. \begin{remark} Assume that we are given finite data sets $\Hlayer^\pm$ with means $\meanpoint^\pm$ and covariances $\Sigma^\pm$ in a Hilbert space $\Hspace $. If we conduct kernel density estimation using a Gaussian kernel then, in the limiting case, each class becomes a normal distribution: $$ \lim_{\sigma \to \infty} \| \de{ \Hlayer^\pm }_\sigma - \nor( \meanpoint^\pm, \sigma^2 \Sigma^\pm ) \|_2 = 0, $$ where $$ \de{ A }_\sigma = \tfrac{1}{|A|} \sum_{a \in A}\nor(a, \sigma^2 \cdot \cov(A)). $$ \end{remark} A proof of this remark is given by Czarnecki and Tabor~\cite{MELC}; it means that if we perform Gaussian kernel density estimation of our data with a large kernel width (which is reasonable for a small amount of data in a highly dimensional space), then for a big enough $\hat \sigma$ EEM is a nearly optimal linear classifier in terms of the estimated densities $$ \hat f^\pm = \nor( \meanpoint^\pm, \hat \sigma^2 \Sigma^\pm ) \approx \de{ \Hlayer^\pm }_{\hat \sigma}. $$ \begin{sidewaystable*}[ph!] \centering \caption{Comparison of the considered classifiers. $|SV|$ denotes the number of support vectors. An asterisk denotes features which can be added to a particular model by some minor modifications; we compare here the base version of each model.} \label{tab:comparison} \begin{tabular}{ccccc} \hline & ELM & SVM & LS-SVM & EE(K)M \\ \hline optimization method & Linear regression & Quadratic Programming & Linear System & \cellcolor{gray!10}Fischer's Discriminant \\ nonlinearity & random projection & kernel & kernel & \cellcolor{gray!10}random (kernel) projection\\ closed form & yes & no & yes & \cellcolor{gray!10}yes \\ balanced & no* & no* & no* & \cellcolor{gray!10}yes \\ regression & yes & no* & yes & \cellcolor{gray!10}no \\ criterion & Mean Squared Error & Hinge loss & Mean Squared Error & \cellcolor{gray!10} Entropy optimization \\ no. of thresholds & 1 & 1 & 1 & \cellcolor{gray!10}1 or 2 \\ problem type & regression & classification & regression & \cellcolor{gray!10} classification \\ model learning & discriminative & discriminative & discriminative & \cellcolor{gray!10} generative \\ direct probability estimates & no & no & no & \cellcolor{gray!10} yes \\ training complexity & $\mathcal{O}(Nh^2)$ & $\mathcal{O}(N^3)$ & $\mathcal{O}(N^{2.34})$ & \cellcolor{gray!10} $\mathcal{O}(Nh^2)$ \\ resulting model complexity & $hd$ & $|SV|d$, $|SV|\ll N $ & $Nd+1$ & \cellcolor{gray!10}$hd+4$\\ memory requirements & $\mathcal{O}(Nd)$ & $\mathcal{O}(Nd)$ & $\mathcal{O}(N^2)$ & \cellcolor{gray!10}$\mathcal{O}(Nd)$ \\ source of regularization & Moore-Penrose pseudoinverse & margin maximization & quadratic loss penalty term & \cellcolor{gray!10} Ledoit-Wolf estimator \\ \hline \end{tabular} \end{sidewaystable*} Let us now investigate the probabilistic interpretation of EEM.
Under the assumption that $ \Hlayer^\pm \sim \nor(\meanpoint^\pm, \Sigma^\pm) $ we have the conditional probabilities $$ \Prob(\hpoint|\pm) = \nor(\meanpoint^\pm,\Sigma^\pm )[\hpoint], $$ \noindent so from Bayes rule we conclude that \begin{equation*} \begin{aligned} \Prob(\pm|\hpoint) &= \frac{\Prob(\hpoint|\pm)\Prob(\pm)}{\Prob(\hpoint)} \\ &\propto \nor(\meanpoint^\pm,\Sigma^\pm )[\hpoint] \Prob(\pm), \end{aligned} \end{equation*} \noindent where $\Prob(\pm)$ is a prior classes' distribution. In our case, due to the balanced nature (meaning that despite classes imbalance we maximize the balanced quality measure such as Averaged Accuracy) we have $\Prob(\pm)=1/2$. \noindent But $$ \Prob(\hpoint) = \sum_{\ypoint \in \{+,-\}} \Prob(\hpoint|\ypoint) , $$ \noindent so $$ \Prob(\pm|\hpoint) = \frac{ \nor(\meanpoint^\pm,\Sigma^\pm )[\hpoint] } { \sum_{\ypoint \in \{+,-\}} \nor(\meanpoint^\ypoint,\Sigma^\ypoint )[\hpoint] }. $$ Furthermore it is easy to show that under the normality assumption, the resulting classifier is optimal in the Bayesian sense. \begin{remark} If data in feature space comes from Normal distributions $\nor(\meanpoint^\pm,\Sigma^\pm)$ then $\LinearOperator$ given by EEM minimizes probability of missclassification. More strictly speaking, if we draw $\hpoint^+$ with probability $1/2$ from $\nor(\meanpoint^+,\Sigma^+)$ and $\hpoint^-$ with 1/2 from $\nor(\meanpoint^-,\Sigma^-)$ then for any $\alpha \in \R^h$ $$ \Prob( \mp| \LinearOperator^T \hpoint^\pm ) \leq \Prob( \mp| \alpha^T \hpoint^\pm ) $$ \end{remark} \section{Theory: learning capabilities} First we show that under some simplifing assumptions, proposed method behaves as Extreme Learning Machine (or Weighted Extreme Learning Machine~\cite{zong2013weighted}). Before proceeding further we would like to remark that there are two popular notations for projecting data onto hyperplanes. One, used in ELM model, assumes that $\Hlayer$ is a row matrix and $\LinearOperator$ is a column vector, which results in the projection's equation $\Hlayer \LinearOperator$. Second one, used in SVM and in our paper, assumes that both $\Hlayer$ and $\LinearOperator$ are column oriented, which results in the $\beta^T\Hlayer$ projection. In the following theorem we will show some duality between $\LinearOperator$ found by ELM and by EEM. In order to do so, we will need to change the notation during the proof, which will be indicated. \begin{theorem} Let us assume that we are given an arbitrary, balanced\footnote{analogous result can be shown for unbalanced dataset and Weighted ELM with particular weighting scheme.} dataset $\{(\xpoint_i,\ypoint_i)\}_{i=1}^N$, $\xpoint_i \in \R^d, \ypoint_i \in \{-1,+1\}, |\Xlayer^-|=|\Xlayer^+|$ which can be perfectly learned by ELM with $N$ hidden neurons. If this dataset's points' image through random neurons $\Hlayer=\varphi(\Xlayer)$ is centered (points' images have 0 mean) and classes have homogenous covariances (we can assume that $\exists_{a_\pm \in \R_+} \cov(\Hlayer) = a_+\cov(\Hlayer^+) = a_-\cov(\Hlayer^-)$ then EEM with the same hidden layer will also learn this dataset perfectly (with 0 error). \end{theorem} \begin{proof} In the first part of the proof we use the ELM notation. Projected data is centered, so $\cov(\Hlayer) = \Hlayer^T\Hlayer$. ELM is able to learn this dataset perfectly, consequently $\Hlayer$ is invertible, thus also $\Hlayer^T\Hlayer$ is invertible, as a result $\covlw (\Hlayer)=\cov(\Hlayer)=\Hlayer^T\Hlayer$. 
We will now show that $ \exists_{a \in \R_+} \LinearOperator_{\text{ELM}} = a \cdot \LinearOperator_{\text{EEM}}. $ First, let us recall that $\LinearOperator_{\text{ELM}} = \Hlayer^\dagger \Yset = \Hlayer^{-1}\Yset$ and $\LinearOperator_{\text{EEM}}=\frac{2(\Sigma^++\Sigma^-)^{-1}(\meanpoint^+-\meanpoint^-)}{\| \meanpoint^+-\meanpoint^- \|_{\Sigma^-+\Sigma^+}^2}$ where $\Sigma^\pm = \covlw (\Hlayer^\pm)$. Due to the assumption of geometric homogenity $\LinearOperator_{\text{EEM}}=\frac{2}{\|\meanpoint^+-\meanpoint^-\|_{\Sigma}^2}(\frac{a_++a_-}{a_+a_-}\Sigma)^{-1}(\meanpoint^+-\meanpoint^-)$ , where $\Sigma = \covlw (\Hlayer)$. Therefore \begin{equation*} \begin{aligned} \LinearOperator_{\text{ELM}} &= \Hlayer^{-1}\Yset \\ &= (\Hlayer^T\Hlayer)^{-1}\Hlayer^T\Yset \\ &= \covlw ^{-1}(\Hlayer)\Hlayer^T\Yset \end{aligned} \end{equation*} From now we change the notation back to the one used in this paper. \begin{equation*} \begin{aligned} \LinearOperator_{\text{ELM}} &= \Sigma^{-1} \left (\sum_{\hpoint^+ \in \Hlayer^+} (+1 \cdot \hpoint^+) + \sum_{\hpoint^- \in \Hlayer^-} (-1 \cdot \hpoint^-) \right )\\ &= \Sigma^{-1} \left (\sum_{\hpoint^+ \in \Hlayer^+} \hpoint^+ - \sum_{\hpoint^- \in \Hlayer^-} \hpoint^- \right )\\ &= \Sigma^{-1}\frac{N}{2}(\meanpoint^+ - \meanpoint^-)\\ &= \frac{N}{2} \frac{\|\meanpoint^+-\meanpoint^-\|_{\Sigma}^2}{2}\frac{a_++a_-}{a_+a_-} \LinearOperator_{\text{EEM}}\\ &= a \cdot \LinearOperator_{\text{EEM}}, \end{aligned} \end{equation*} for $a = \frac{N}{2} \frac{\|\meanpoint^+-\meanpoint^-\|_{\Sigma}^2}{2}\frac{a_++a_-}{a_+a_-} \in \R_+$. Again from homogenity we obtain just one equilibrium point, located in the $\LinearOperator_{\text{EEM}}^T(\meanpoint^+-\meanpoint^-)/2$ which results in the exact same classifier as the one given by ELM. This completes the proof. \end{proof} Similar result holds for EEKM and Least Squares Support Vector Machine. \begin{theorem} Let us assume that we are given arbitrary, balanced\footnote{analogous result can be shown for unbalanced dataset and Balanced LS-SVM with particular weighting scheme.} dataset $\{(\xpoint_i,\ypoint_i)\}_{i=1}^N$, $\xpoint_i \in \R^d, \ypoint_i \in \{-1,+1\}, |\Xlayer^-|=|\Xlayer^+|$ which can be perfectly learned by LS-SVM. If dataset's points' images through Kernel induced projection $\varphi_\Kernel$ have homogenous classes' covariances (we can assume that $\exists_{a_\pm \in \R_+} \cov(\varphi_\Kernel(\Xlayer)) = a_+\cov(\varphi_\Kernel(\Xlayer^+)) = a_-\cov(\varphi_\Kernel(\Xlayer^-))$ then EEKM with the same kernel and $N$ hidden neurons will also learn this dataset perfectly (with 0 error). \end{theorem} \begin{proof} It is a direct consequence of the fact that with $N$ hidden neurons and honogenous classes projections covariances, EEKM degenerates to the kernelized Fischer Discriminant which, as Gestel et al. showed ~\cite{van2004benchmarking}, is equivalent to the solution of the Least Squares SVM. \end{proof} \section{Practical considerations} We can formulate the whole EEM training as a very simple algorithm (see Alg.~\ref{alg:eem}). 
\begin{algorithm}[H] \caption{Extreme Entropy (Kernel) Machine} \label{alg:eem} \begin{algorithmic} \STATE \STATE \textbf{\textsc{train}}$(\Xlayer^+,\Xlayer^-)$ \STATE \hspace{0.5cm}\textbf{build} $\varphi$ \textbf{using Algorithm~\ref{alg:phi}} \STATE \hspace{0.5cm}$ \Hlayer^\pm \gets \varphi(\Xlayer^\pm) $ \STATE \hspace{0.5cm}$ \meanpoint^\pm \gets 1/|\Hlayer^\pm| \sum_{\hpoint^\pm \in \Hlayer^\pm} \hpoint^\pm $ \STATE \hspace{0.5cm}$ \Sigma^\pm \gets \covlw(\Hlayer^\pm) $ \STATE \hspace{0.5cm}$ \LinearOperator \gets 2\left ( \Sigma^+ + \Sigma^- \right )^{-1}(\meanpoint^+-\meanpoint^-) / \|\meanpoint^+-\meanpoint^-\|_{\Sigma^+ + \Sigma^-} $ \STATE \hspace{0.5cm}$ \Ffunction(x) = \argmax_{\ypoint \in \{+,-\}} \nor(\LinearOperator^T \meanpoint^\ypoint, \LinearOperator^T \Sigma^\ypoint \LinearOperator)[x]$ \STATE \hspace{0.5cm}\textbf{return} $\LinearOperator, \varphi, \Ffunction$ \STATE \STATE \textbf{\textsc{predict}}$(\Xlayer)$ \STATE \hspace{0.5cm}\textbf{return} $\Ffunction( \LinearOperator^T \varphi(\Xlayer) )$ \end{algorithmic} \end{algorithm}
\begin{algorithm}[H] \caption{$\varphi$ building} \label{alg:phi} \begin{algorithmic} \STATE \STATE \textbf{\textsc{Extreme Entropy Machine}}$(\Gfunction,h)$ \STATE \hspace{0.5cm}\textbf{select randomly } $w_i,b_i$ \textbf{ for } $i \in \{1,...,h\}$ \STATE \hspace{0.5cm}$ \varphi(\xpoint) = [\Gfunction(\xpoint,w_1,b_1),...,\Gfunction(\xpoint,w_h,b_h)]^T $ \STATE \hspace{0.5cm}\textbf{return} $\varphi$ \STATE \STATE \textbf{\textsc{Extreme Entropy Kernel Machine}}$(\Kernel,h,\Xlayer)$ \STATE \hspace{0.5cm}\textbf{select randomly } $\Xlayer\subsample \subset \Xlayer, |\Xlayer\subsample|=h$ \STATE \hspace{0.5cm}$ \Kernel\subsample \gets \Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1/2}$ \STATE \hspace{0.5cm}$ \varphi_\Kernel(\xpoint) = \Kernel\subsample \Kernel(\Xlayer\subsample, \xpoint) $ \STATE \hspace{0.5cm}\textbf{return} $\varphi_\Kernel$ \end{algorithmic} \end{algorithm}
The resulting model consists of three elements:
\begin{itemize}
\item the feature projection function $\varphi$,
\item the linear operator $\LinearOperator$,
\item the classification rule $\Ffunction$.
\end{itemize}
As described before, $\Ffunction$ can be further compressed to just one or two thresholds $t_\pm$ using the equations from the previous sections. Either way, the complexity of the resulting model is linear in the number of hidden units, and classification of a new point takes $\mathcal{O}(dh)$ time. During EEM training, the most expensive parts of the algorithm are the computation of the covariance estimators and the inversion of the sum of covariances. The computation of the empirical covariance alone takes $\mathcal{O}(Nh^2)$ time, so the total training complexity, equal to $\mathcal{O}(h^3 + Nh^2) = \mathcal{O}(Nh^2)$, is acceptable. It is worth noting that training of the ELM also takes exactly $\mathcal{O}(Nh^2)$ time, as it requires the computation of $\Hlayer^T\Hlayer$ for $\Hlayer \in \R^{N \times h}$. Training of EEKM requires the additional computation of the square root of the sampled kernel matrix inverse $\Kernel(\Xlayer\subsample,\Xlayer\subsample)^{-1/2}$, but as $\Kernel(\Xlayer\subsample,\Xlayer\subsample) \in \R^{h \times h}$ can be computed in $\mathcal{O}(dh^2)$ and both inverting and taking the square root can be done in $\mathcal{O}(h^3)$, we obtain exactly the same asymptotic computational complexity as for EEM.
The procedures of taking the square root and of inverting are always possible: assuming that $\Kernel$ is a valid kernel in Mercer's sense yields that $\Kernel(\Xlayer\subsample,\Xlayer\subsample)$ is strictly positive definite and thus invertible. A further comparison of EEM, ELM and SVM is summarized in Table~\ref{tab:comparison}. The next aspect we would like to discuss is cost-sensitive learning. EEMs are balanced models in the sense that they try to maximize balanced quality measures (like the Averaged Accuracy or \textsc{GMean}). However, in practical applications it might be the case that we are actually more interested in the positive class than in the negative one (as in medical applications). The proposed model gives direct probability estimates of $\Prob(\LinearOperator^T\hpoint|\ypoint)$, which we can easily convert into a cost-sensitive classifier by introducing prior probabilities of the classes. Directly from Bayes' theorem, given $\Prob(+)$ and $\Prob(-)$, we can label a new sample $\hpoint$ according to
$$
\Prob(\ypoint|\LinearOperator^T\hpoint) \propto \Prob(\ypoint)\Prob(\LinearOperator^T\hpoint|\ypoint),
$$
so if we are given costs $C_+, C_- \in \R_+$ we can use them as weights of the priors
$$
cl(\xpoint) = \argmax_{\ypoint \in \{-,+\}} \tfrac{C_\ypoint}{C_-+C_+} \Prob(\LinearOperator^T\hpoint|\ypoint).
$$
Let us now investigate a possible efficiency bottleneck. In EEKM, the classification of a new point $\xpoint$ is based on
\begin{equation*}
\begin{aligned}
cl(\xpoint) &= \Ffunction( \LinearOperator^T \varphi_\Kernel(\xpoint) ) \\
&= \Ffunction( \LinearOperator^T (\Kernel(\xpoint,\Xlayer\subsample)\Kernel\subsample)^T ) \\
&= \Ffunction( \LinearOperator^T (\Kernel\subsample)^T \Kernel(\xpoint,\Xlayer\subsample)^T )\\
&= \Ffunction( (\Kernel\subsample \LinearOperator)^T \Kernel(\Xlayer\subsample, \xpoint) ).
\end{aligned}
\end{equation*}
One can convert EEKM to an SLFN by putting
\begin{equation*}
\begin{aligned}
\hat \varphi_\Kernel(\xpoint) &= \Kernel(\Xlayer\subsample, \xpoint),\\
\hat \LinearOperator &= \Kernel\subsample \LinearOperator ,
\end{aligned}
\end{equation*}
so the classification rule becomes
\begin{equation*}
\begin{aligned}
cl(\xpoint) &= \Ffunction( \hat \LinearOperator \phantom{}^T \hat \varphi_\Kernel(\xpoint) ).
\end{aligned}
\end{equation*}
This way the complexity of classifying a new point is exactly the same as in the case of EEM and ELM (or any other SLFN).
\section{Evaluation}
For evaluation purposes we implemented five methods, namely: the Weighted Extreme Learning Machine (WELM~\cite{zong2013weighted}), the Extreme Entropy Machine (EEM), the Extreme Entropy Kernel Machine (EEKM), Least Squares Support Vector Machines (LS-SVM~\cite{suykens1999least}) and Support Vector Machines (SVM~\cite{Vapnik95}). All methods but SVM were implemented in Python using the bleeding-edge versions of the \textsc{numpy}~\cite{van2011numpy} and \textsc{scipy}~\cite{jones2001scipy} libraries included in \textsc{anaconda}\footnote{\url{https://store.continuum.io/cshop/anaconda/}} for a fair comparison. For SVM we used the highly efficient \textsc{libSVM}~\cite{chang2011libsvm} library with bindings available in \textsc{scikit-learn}~\cite{pedregosa2011scikit}.
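To give a flavour of these implementations, a stripped-down EEM training and prediction routine in NumPy/scikit-learn might look as follows; this is only an illustrative sketch, the function and variable names are ours, and $\covlw$ is realised by scikit-learn's Ledoit-Wolf estimator mentioned below.
\begin{verbatim}
import numpy as np
from sklearn.covariance import LedoitWolf

def train_eem(H_plus, H_minus):
    # H_plus, H_minus: images of the two classes under the random
    # feature map phi, of shapes (n_pos, h) and (n_neg, h).
    m_p, m_m = H_plus.mean(axis=0), H_minus.mean(axis=0)
    S_p = LedoitWolf().fit(H_plus).covariance_    # Sigma^+
    S_m = LedoitWolf().fit(H_minus).covariance_   # Sigma^-
    d = m_p - m_m
    Sinv_d = np.linalg.solve(S_p + S_m, d)
    # closed-form operator (up to the irrelevant positive scaling)
    beta = 2.0 * Sinv_d / (d @ Sinv_d)
    # 1-D Gaussians of the projected classes, used by the rule F
    params = {+1: (beta @ m_p, beta @ S_p @ beta),
              -1: (beta @ m_m, beta @ S_m @ beta)}
    return beta, params

def predict_eem(beta, params, H):
    z = H @ beta
    def logpdf(x, mean, var):
        return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
    return np.where(logpdf(z, *params[+1]) >= logpdf(z, *params[-1]), 1, -1)
\end{verbatim}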
Random projection based methods (WELM, EEM) were tested using the following three generalized activation functions $\Gfunction(\xpoint,w,b)$:
\begin{itemize}
\item sigmoid (\textsc{sig}): $\tfrac{1}{1+\exp(-\langle w,\xpoint \rangle + b)} $,
\item normalized sigmoid (\textsc{nsig}): $ \tfrac{1}{1+\exp(-\langle w,\xpoint \rangle /d + b)} $,
\item radial basis function (\textsc{rbf}): $ \exp(-b \| w - \xpoint \|^2 )$.
\end{itemize}
Random parameters (weights and biases) were selected from uniform distributions on $[0,1]$. Training of WELM was performed using the Moore-Penrose pseudoinverse and training of EEM using the Ledoit-Wolf covariance estimator, as both are parameter-free, closed-form estimators of the required objects. For the kernel methods (EEKM, LS-SVM, SVM) we used the Gaussian kernel (\textsc{rbf}) $ \Kernel_\gamma(\xpoint_i,\xpoint_j)=\exp(-\gamma \| \xpoint_i-\xpoint_j \|^2 )$. In all methods requiring a class balancing scheme (WELM, LS-SVM, SVM) we used balance weights $w_i$ equal to the ratio of the size of the larger class to the size of the given point's class (so that $\sum_{i=1}^N w_i \ypoint_i = 0$). The metaparameters of each model were fitted by a grid search over: the hidden layer size $h=50,100,250,500,1000$ (WELM, EEM, EEKM), the Gaussian kernel width $\gamma=10^{-10},\ldots,10^0$ (EEKM, LS-SVM, SVM), and the SVM regularization parameter $C=10^{-1},\ldots,10^{10}$ (LS-SVM, SVM). The datasets' features were linearly scaled so that each feature lies in the interval $[0,1]$. No other data whitening/filtering was performed. All experiments were performed in repeated 10-fold stratified cross-validation. We use $\textsc{GMean}$\footnote{$\textsc{GMean}(\text{TP,FP,TN,FN}) = \sqrt{\frac{\text{TP}}{\text{TP}+\text{FN}} \cdot \frac{\text{TN}}{\text{TN}+\text{FP}}}$.} (the geometric mean of the accuracies over positive and negative samples) as the evaluation metric, due to its balanced nature and its usage in previous work on Weighted Extreme Learning Machines~\cite{zong2013weighted}.
\begin{table}[ht]
\caption{Characteristics of the used datasets}
\label{tab:data}
\begin{center}
\begin{tabular}{lrrr}
\toprule
dataset & $d$ & $|\Xlayer^-|$& $|\Xlayer^+|$ \\
\midrule
\textsc{australian} & 14 & 383 & 307 \\
\textsc{bank} & 4 & 762 & 610 \\
\textsc{breast cancer} & 9 & 444 & 239 \\
\textsc{diabetes} & 8 & 268 & 500 \\
\textsc{german numer} & 24 & 700 & 300 \\
\textsc{heart} & 13 & 150 & 120 \\
\textsc{liver-disorders} & 6 & 145 & 200 \\
\textsc{sonar} & 60 & 111 & 97 \\
\textsc{splice} & 60 & 483 & 517 \\
\midrule
\textsc{abalone7} & 10 & 3786 & 391 \\
\textsc{arythmia} & 261 & 427 & 25 \\
\textsc{car evaluation} & 21 & 1594 & 134 \\
\textsc{ecoli} & 7 & 301 & 35 \\
\textsc{libras move} & 90 & 336 & 24 \\
\textsc{oil spill} & 48 & 896 & 41 \\
\textsc{sick euthyroid} & 42 & 2870 & 293 \\
\textsc{solar flare} & 32 & 1321 & 68 \\
\textsc{spectrometer} & 93 & 486 & 45 \\
\midrule
\textsc{forest cover} & 54 & 571519 & 9493 \\
\textsc{isolet} & 617 & 7197 & 600 \\
\textsc{mammography} & 6 & 10923 & 260 \\
\textsc{protein homology} & 74 & 144455 & 1296 \\
\textsc{webpages} & 300 & 33799 & 981 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Basic \textsc{UCI} datasets}
We start our experiments with nine datasets coming from the \textsc{UCI} repository~\cite{UCI}, namely \textsc{australian}, \textsc{breast-cancer}, \textsc{diabetes}, \textsc{german.numer}, \textsc{heart}, \textsc{ionosphere}, \textsc{liver-disorders}, \textsc{sonar} and \textsc{splice}, summarized in Table~\ref{tab:data}.
This datasets include rather balanced, low dimensional problems. On such data, EEM seems to perform noticably better than ELM when using RBF activation function (see Table~\ref{tab:uci}), and rather similar when using sigmoid one -- in such a scenario, for some datasets ELM achieves better results while for other EEM wins. \begin{sidewaystable*}[ph!] \centering \caption{\textsc{GMean} on \textsc{UCI} datasets} \label{tab:uci} \begin{tabular}{ccccccccccccc} \toprule &WELM$_{\textsc{sig}}$& EEM$_{\textsc{sig}}$&WELM$_{\textsc{nsig}}$& EEM$_{\textsc{nsig}}$ &WELM$_{\textsc{rbf}}$ & EEM$_{\textsc{rbf}}$ & LS-SVM$_{\textsc{rbf}}$ & EEKM$_{\textsc{rbf}}$ & SVM$_{\textsc{rbf}}$\\ \midrule { \textsc{ australian } } & \scriptsize 86.3 \tiny $\pm4.5$ & \scriptsize \textbf{ 87.0 } \tiny $\pm4.0$ & \scriptsize 85.9 \tiny $\pm4.4$ & \scriptsize 86.5 \tiny $\pm3.2$ & \scriptsize 85.8 \tiny $\pm4.9$ & \scriptsize 86.9 \tiny $\pm4.4$ & \scriptsize 86.9 \tiny $\pm4.1$ & \scriptsize 86.8 \tiny $\pm3.8$ & \scriptsize 86.8 \tiny $\pm3.7$ \\ \midrule { \textsc{ breast-cancer } } & \scriptsize 96.9 \tiny $\pm1.7$ & \scriptsize 97.3 \tiny $\pm1.2$ & \scriptsize 97.6 \tiny $\pm1.5$ & \scriptsize 97.4 \tiny $\pm1.2$ & \scriptsize 96.6 \tiny $\pm1.8$ & \scriptsize 97.3 \tiny $\pm1.1$ & \scriptsize 97.6 \tiny $\pm1.3$ & \scriptsize \textbf{ 97.8 } \tiny $\pm1.1$ & \scriptsize 96.8 \tiny $\pm1.7$ \\ \midrule { \textsc{ diabetes } } & \scriptsize 74.2 \tiny $\pm4.6$ & \scriptsize 74.5 \tiny $\pm4.6$ & \scriptsize 74.1 \tiny $\pm5.5$ & \scriptsize 74.9 \tiny $\pm5.0$ & \scriptsize 73.2 \tiny $\pm5.6$ & \scriptsize 74.9 \tiny $\pm5.9$ & \scriptsize 75.5 \tiny $\pm5.6$ & \scriptsize \textbf{ 75.7 } \tiny $\pm5.6$ & \scriptsize 74.8 \tiny $\pm3.5$ \\ \midrule { \textsc{ german } } & \scriptsize 68.8 \tiny $\pm6.9$ & \scriptsize 71.3 \tiny $\pm4.1$ & \scriptsize 70.7 \tiny $\pm6.1$ & \scriptsize 72.4 \tiny $\pm5.4$ & \scriptsize 71.1 \tiny $\pm6.1$ & \scriptsize 72.2 \tiny $\pm5.7$ & \scriptsize 73.2 \tiny $\pm4.5$ & \scriptsize 72.9 \tiny $\pm5.3$ & \scriptsize \textbf{ 73.4 } \tiny $\pm5.4$ \\ \midrule { \textsc{ heart } } & \scriptsize 78.8 \tiny $\pm6.3$ & \scriptsize 82.5 \tiny $\pm7.4$ & \scriptsize 78.1 \tiny $\pm7.0$ & \scriptsize 83.7 \tiny $\pm7.2$ & \scriptsize 80.2 \tiny $\pm8.9$ & \scriptsize 81.9 \tiny $\pm6.9$ & \scriptsize 83.7 \tiny $\pm8.5$ & \scriptsize 83.6 \tiny $\pm7.5$ & \scriptsize \textbf{ 84.6 } \tiny $\pm7.0$ \\ \midrule { \textsc{ ionosphere } } & \scriptsize 71.5 \tiny $\pm9.5$ & \scriptsize 77.0 \tiny $\pm12.8$ & \scriptsize 82.7 \tiny $\pm7.8$ & \scriptsize 84.6 \tiny $\pm9.1$ & \scriptsize 85.6 \tiny $\pm8.4$ & \scriptsize 90.8 \tiny $\pm5.2$ & \scriptsize 91.2 \tiny $\pm5.5$ & \scriptsize 93.4 \tiny $\pm4.3$ & \scriptsize \textbf{ 94.7 } \tiny $\pm3.9$ \\ \midrule { \textsc{ liver-disorders } } & \scriptsize 68.1 \tiny $\pm8.0$ & \scriptsize 68.6 \tiny $\pm8.9$ & \scriptsize 66.3 \tiny $\pm8.2$ & \scriptsize 62.1 \tiny $\pm8.1$ & \scriptsize 67.2 \tiny $\pm5.9$ & \scriptsize 71.4 \tiny $\pm7.0$ & \scriptsize 71.1 \tiny $\pm8.3$ & \scriptsize 70.2 \tiny $\pm6.9$ & \scriptsize \textbf{ 72.3 } \tiny $\pm6.2$ \\ \midrule { \textsc{ sonar } } & \scriptsize 66.7 \tiny $\pm10.1$ & \scriptsize 70.1 \tiny $\pm11.5$ & \scriptsize 80.2 \tiny $\pm7.4$ & \scriptsize 78.3 \tiny $\pm11.2$ & \scriptsize 83.2 \tiny $\pm6.9$ & \scriptsize 82.8 \tiny $\pm5.2$ & \scriptsize 86.5 \tiny $\pm5.4$ & \scriptsize \textbf{ 87.0 } \tiny $\pm7.5$ & \scriptsize 83.0 \tiny $\pm7.1$ \\ \midrule { \textsc{ splice } } & 
\scriptsize 64.7 \tiny $\pm2.8$ & \scriptsize 49.4 \tiny $\pm5.5$ & \scriptsize 81.8 \tiny $\pm3.2$ & \scriptsize 80.9 \tiny $\pm2.7$ & \scriptsize 75.5 \tiny $\pm3.9$ & \scriptsize 82.2 \tiny $\pm3.5$ & \scriptsize \textbf{ 89.9 } \tiny $\pm3.0$ & \scriptsize 88.0 \tiny $\pm4.0$ & \scriptsize 88.0 \tiny $\pm2.2$ \\ \bottomrule \end{tabular} \end{sidewaystable*} Results obtained for EEKM are comparable with those obtained by LS-SVM and SVM, in both cases proposed method achieves better results on about third of problems, on the third it draws and on a third it loses. This experiments can be seen as a proof of concept of the whole methodology, showing that it can be truly a reasonable alternative for existing models in some problems. It appears that contrary to ELM, proposed methods (EEM and EEKM) achieve best scores across all considered models in some of the datasets regardless of the used activation function/kernel (only Support Vector Machines and their least squares counterpart are competetitive in this sense). \subsection{Highly unbalanced datasets} In the second part we proceeded to the nine highly unbalanced datasets, summarized in the second part of the Table~\ref{tab:data}. Ratio between bigger and smaller class varies from $10:1$ to even $20:1$ which makes them really hard for unbalanced models. Obtained results (see Table~\ref{tab:unb}) resembles these obtained on \textsc{UCI} repository. We can see better results in about half of experiments if we fix a particular activation function/kernel (so we compare ELM$_x$ with EEM$_x$ and LS-SVM$_x$ with EEKM$_x$). \begin{sidewaystable*}[ph!] \centering \caption{Highly unbalanced datasets} \label{tab:unb} \begin{tabular}{ccccccccccccc} \toprule &WELM$_{\textsc{sig}}$& EEM$_{\textsc{sig}}$&WELM$_{\textsc{nsig}}$& EEM$_{\textsc{nsig}}$ &WELM$_{\textsc{rbf}}$ & EEM$_{\textsc{rbf}}$ & LS-SVM$_{\textsc{rbf}}$ & EEKM$_{\textsc{rbf}}$ & SVM$_{\textsc{rbf}}$\\ \midrule { \textsc{ abalone7 } } & \scriptsize 79.7 \tiny $\pm2.3$ & \scriptsize 79.8 \tiny $\pm3.5$ & \scriptsize 80.0 \tiny $\pm2.8$ & \scriptsize 76.1 \tiny $\pm3.7$ & \scriptsize 80.1 \tiny $\pm3.2$ & \scriptsize 79.7 \tiny $\pm3.6$ & \scriptsize \textbf{ 80.2 } \tiny $\pm3.4$ & \scriptsize 79.9 \tiny $\pm3.4$ & \scriptsize 79.7 \tiny $\pm2.7$ \\ \midrule { \textsc{ arythmia } } & \scriptsize 28.3 \tiny $\pm35.4$ & \scriptsize 40.3 \tiny $\pm20.9$ & \scriptsize 64.2 \tiny $\pm24.6$ & \scriptsize \textbf{ 85.6 } \tiny $\pm10.3$ & \scriptsize 66.9 \tiny $\pm25.8$ & \scriptsize 79.4 \tiny $\pm12.5$ & \scriptsize 84.4 \tiny $\pm10.0$ & \scriptsize 85.2 \tiny $\pm10.6$ & \scriptsize 80.9 \tiny $\pm11.8$ \\ \midrule { \textsc{ car evaluation } } & \scriptsize 99.1 \tiny $\pm0.3$ & \scriptsize 98.9 \tiny $\pm0.4$ & \scriptsize 99.0 \tiny $\pm0.3$ & \scriptsize 97.9 \tiny $\pm0.6$ & \scriptsize 99.0 \tiny $\pm0.3$ & \scriptsize 98.5 \tiny $\pm0.3$ & \scriptsize 99.5 \tiny $\pm0.2$ & \scriptsize 99.2 \tiny $\pm0.3$ & \scriptsize \textbf{ 100.0 } \tiny $\pm0.0$ \\ \midrule { \textsc{ ecoli } } & \scriptsize 86.9 \tiny $\pm6.5$ & \scriptsize 88.3 \tiny $\pm7.1$ & \scriptsize 86.9 \tiny $\pm6.8$ & \scriptsize 88.6 \tiny $\pm6.9$ & \scriptsize 86.4 \tiny $\pm7.0$ & \scriptsize 88.8 \tiny $\pm7.2$ & \scriptsize 89.2 \tiny $\pm6.3$ & \scriptsize \textbf{ 89.4 } \tiny $\pm6.9$ & \scriptsize 88.5 \tiny $\pm6.2$ \\ \midrule { \textsc{ libras move } } & \scriptsize 65.5 \tiny $\pm10.7$ & \scriptsize 19.3 \tiny $\pm8.1$ & \scriptsize 82.5 \tiny $\pm12.0$ & \scriptsize 93.0 \tiny $\pm11.8$ & \scriptsize 
89.6 \tiny $\pm11.9$ & \scriptsize 93.9 \tiny $\pm11.9$ & \scriptsize 96.5 \tiny $\pm8.6$ & \scriptsize \textbf{ 96.6 } \tiny $\pm8.7$ & \scriptsize 91.6 \tiny $\pm11.9$ \\ \midrule { \textsc{ oil spill } } & \scriptsize 86.0 \tiny $\pm6.9$ & \scriptsize \textbf{ 88.8 } \tiny $\pm6.5$ & \scriptsize 83.8 \tiny $\pm7.6$ & \scriptsize 84.7 \tiny $\pm8.7$ & \scriptsize 85.8 \tiny $\pm9.3$ & \scriptsize 88.1 \tiny $\pm6.1$ & \scriptsize 86.7 \tiny $\pm8.4$ & \scriptsize 87.2 \tiny $\pm4.9$ & \scriptsize 85.7 \tiny $\pm11.4$ \\ \midrule { \textsc{ sick euthyroid } } & \scriptsize 88.1 \tiny $\pm1.7$ & \scriptsize 87.9 \tiny $\pm2.4$ & \scriptsize 88.5 \tiny $\pm2.1$ & \scriptsize 81.7 \tiny $\pm2.7$ & \scriptsize 89.1 \tiny $\pm1.9$ & \scriptsize 88.2 \tiny $\pm2.4$ & \scriptsize 89.5 \tiny $\pm1.7$ & \scriptsize 89.3 \tiny $\pm1.9$ & \scriptsize \textbf{ 90.9 } \tiny $\pm2.0$ \\ \midrule { \textsc{ solar flare } } & \scriptsize 60.4 \tiny $\pm16.8$ & \scriptsize 63.7 \tiny $\pm12.9$ & \scriptsize 61.3 \tiny $\pm10.8$ & \scriptsize 67.4 \tiny $\pm9.0$ & \scriptsize 60.3 \tiny $\pm14.8$ & \scriptsize 68.9 \tiny $\pm9.3$ & \scriptsize 67.3 \tiny $\pm8.8$ & \scriptsize 67.3 \tiny $\pm9.0$ & \scriptsize \textbf{ 70.9 } \tiny $\pm8.5$ \\ \midrule { \textsc{ spectrometer } } & \scriptsize 82.9 \tiny $\pm13.0$ & \scriptsize 87.3 \tiny $\pm7.8$ & \scriptsize 88.0 \tiny $\pm10.8$ & \scriptsize 90.2 \tiny $\pm8.6$ & \scriptsize 86.6 \tiny $\pm8.2$ & \scriptsize 93.0 \tiny $\pm14.6$ & \scriptsize 94.6 \tiny $\pm8.4$ & \scriptsize 93.5 \tiny $\pm14.7$ & \scriptsize \textbf{ 95.4 } \tiny $\pm5.1$ \\ \bottomrule \end{tabular} \end{sidewaystable*} Table~\ref{tab:unbt} shows that training time of Extreme Entropy Machines are comparable with the ones obtained by Extreme Learning Machines (differences on the level of $0.1-0.2$ are not significant on such datasets' sizes). We have a robust method which learns in below two seconds a model for hundreads/thousands of examples. For larger datasets (like \textsc{abalone7} or \textsc{sick euthyroid}) proposed methods not only outperform SVM and LS-SVM in terms of robustness but there is also noticable difference between their training times and ELMs. This suggests that even though ELM and EEM are quite similar and on small datasets are equally fast, EEM can better scale up to truly big datasets. Obviously obtained training times do not resemble the full training time as it strongly depends on the technique used for metaparameters selection and resolution of grid search (or other parameters tuning technique). In such full scenario, training times of SVM related models is significantly bigger due to the requirment of exact tuning of both $C$ and $\gamma$ in real domains. \begin{sidewaystable*}[ph!] 
\centering \caption{Training times on the highly unbalanced datasets} \label{tab:unbt} \begin{tabular}{ccccccccccccc} \toprule &WELM$_{\textsc{sig}}$& EEM$_{\textsc{sig}}$ &WELM$_{\textsc{nsig}}$& EEM$_{\textsc{nsig}}$ &WELM$_{\textsc{rbf}}$ & EEM$_{\textsc{rbf}}$ & LS-SVM$_{\textsc{rbf}}$ & EEKM$_{\textsc{rbf}}$ & SVM$_{\textsc{rbf}}$\\ \midrule
{ \textsc{ abalone7 } } & \scriptsize 1.9 s & \scriptsize 1.2 s & \scriptsize 2.5 s & \scriptsize 1.6 s & \scriptsize 1.8 s & \scriptsize \textbf{ 1.2 } s & \scriptsize 20.8 s & \scriptsize 1.9 s & \scriptsize 4.7 s \\ \midrule
{ \textsc{ arythmia } } & \scriptsize 0.2 s & \scriptsize 0.7 s & \scriptsize 0.3 s & \scriptsize 0.9 s & \scriptsize 0.3 s & \scriptsize 0.7 s & \scriptsize \textbf{ 0.1 } s & \scriptsize 0.3 s & \scriptsize 0.1 s \\ \midrule
{ \textsc{ car evaluation } } & \scriptsize 1.3 s & \scriptsize 0.9 s & \scriptsize 1.5 s & \scriptsize 1.0 s & \scriptsize 1.1 s & \scriptsize 0.9 s & \scriptsize 2.0 s & \scriptsize 1.4 s & \scriptsize \textbf{ 0.1 } s \\ \midrule
{ \textsc{ ecoli } } & \scriptsize 0.2 s & \scriptsize 0.8 s & \scriptsize 0.2 s & \scriptsize 0.8 s & \scriptsize 0.1 s & \scriptsize 0.7 s & \scriptsize \textbf{ 0.0 } s & \scriptsize 0.1 s & \scriptsize 0.2 s \\ \midrule
{ \textsc{ libras move } } & \scriptsize 0.2 s & \scriptsize 0.9 s & \scriptsize 0.2 s & \scriptsize 0.8 s & \scriptsize 0.1 s & \scriptsize 0.7 s & \scriptsize 0.0 s & \scriptsize 0.1 s & \scriptsize \textbf{ 0.0 } s \\ \midrule
{ \textsc{ oil spill } } & \scriptsize 0.7 s & \scriptsize 0.8 s & \scriptsize 0.6 s & \scriptsize 0.8 s & \scriptsize 0.6 s & \scriptsize 0.8 s & \scriptsize 0.4 s & \scriptsize 0.9 s & \scriptsize \textbf{ 0.1 } s \\ \midrule
{ \textsc{ sick euthyroid } } & \scriptsize 1.5 s & \scriptsize 1.1 s & \scriptsize 1.4 s & \scriptsize \textbf{ 1.1 } s & \scriptsize 1.5 s & \scriptsize 1.1 s & \scriptsize 9.6 s & \scriptsize 1.7 s & \scriptsize 21.0 s \\ \midrule
{ \textsc{ solar flare } } & \scriptsize \textbf{ 0.7 } s & \scriptsize 0.8 s & \scriptsize 0.7 s & \scriptsize 0.8 s & \scriptsize 0.8 s & \scriptsize 0.8 s & \scriptsize 1.1 s & \scriptsize 1.3 s & \scriptsize 16.1 s \\ \midrule
{ \textsc{ spectrometer } } & \scriptsize 0.2 s & \scriptsize 0.7 s & \scriptsize 0.3 s & \scriptsize 0.7 s & \scriptsize 0.2 s & \scriptsize 0.7 s & \scriptsize 0.1 s & \scriptsize 0.3 s & \scriptsize \textbf{ 0.0 } s \\ \bottomrule
\end{tabular} \end{sidewaystable*}
\subsection{Extremely unbalanced datasets}
The third part of the experiments consists of extremely unbalanced datasets (with class imbalance up to 100:1) containing tens and hundreds of thousands of examples. The five analyzed datasets span from NLP tasks (\textsc{webpages}) through medical applications (\textsc{mammography}) to bioinformatics (\textsc{protein homology}). Datasets of this type often occur in real data mining, which makes these results much more practically relevant than the ones obtained on small/balanced data. The 0.0 scores on the \textsc{isolet} dataset (see Table~\ref{tab:big}) for sigmoid based random projections are a result of very high values ($\sim 200$) of $\langle \xpoint,w \rangle$ for all $\xpoint$, which results in $\Gfunction(\xpoint,w,b)=1$, so the whole dataset is reduced to the singleton $\{ [1,\ldots,1]^T \} \subset \R^h \subset \Hspace $, which obviously is not separable by any classifier, neither ELM nor EEM. For the other activation functions we see that EEM achieves slightly worse results than ELM.
On the other hand, the scores of EEKM generally outperform the ones obtained by ELM and are very close to the ones obtained by well tuned SVM and LS-SVM. At the same time, EEM and EEKM were trained significantly faster: as Table~\ref{tab:bigt} shows, training was an order of magnitude faster than for the SVM related models and even $1.5-2 \times$ faster than for ELM. It seems that computing the Ledoit-Wolf covariance estimators together with the inversion of their sum is simply a faster operation (it scales better) than computing the Moore-Penrose pseudoinverse of $\Hlayer^T\Hlayer$. Obviously, one can change the ELM training routine to the regularized one, where instead of $(\Hlayer^T\Hlayer)^\dagger$ one computes $(\Hlayer^T\Hlayer + I/C)^{-1}$, but here we analyze parameter-free approaches; an analogous modification could be used for EEM in the form of $(\cov(\Xlayer^-)+\cov(\Xlayer^+) + I/C)^{-1}$ instead of computing the Ledoit-Wolf estimator. In other words, in the parameter-free training scenario described in this paper, EEMs seem to scale better than ELMs while still obtaining similar classification results. At the same time, EEKM obtains SVM-level results with orders of magnitude smaller training times. Both ELM and EEM could be transformed into regularization-parameter based learning, but this is beyond the scope of this work.
\begin{sidewaystable*}[ph!] \centering \caption{\textsc{GMean} on the extremely unbalanced datasets} \label{tab:big} \begin{tabular}{ccccccccccccc} \toprule &WELM$_{\textsc{sig}}$& EEM$_{\textsc{sig}}$ &WELM$_{\textsc{nsig}}$& EEM$_{\textsc{nsig}}$ &WELM$_{\textsc{rbf}}$ & EEM$_{\textsc{rbf}}$ & LS-SVM$_{\textsc{rbf}}$ & EEKM$_{\textsc{rbf}}$ & SVM$_{\textsc{rbf}}$\\ \midrule
{ \textsc{ forest cover } } & \scriptsize 90.8 \tiny $\pm0.3$ & \scriptsize 90.5 \tiny $\pm0.3$ & \scriptsize 90.7 \tiny $\pm0.3$ & \scriptsize 85.1 \tiny $\pm0.4$ & \scriptsize 90.9 \tiny $\pm0.3$ & \scriptsize 87.1 \tiny $\pm0.0$ & \scriptsize - & \scriptsize \textbf{ 91.8 } \tiny $\pm0.3$ & \scriptsize - \\ \midrule
{ \textsc{ isolet } } & \scriptsize 0.0 \tiny $\pm0.0$ & \scriptsize 0.0 \tiny $\pm0.0$ & \scriptsize 96.3 \tiny $\pm0.7$ & \scriptsize 95.6 \tiny $\pm1.1$ & \scriptsize 93.0 \tiny $\pm0.9$ & \scriptsize 91.4 \tiny $\pm1.0$ & \scriptsize \textbf{ 98.0 } \tiny $\pm0.7$ & \scriptsize 97.4 \tiny $\pm0.6$ & \scriptsize 97.6 \tiny $\pm0.6$ \\ \midrule
{ \textsc{ mammography } } & \scriptsize 90.4 \tiny $\pm2.8$ & \scriptsize 89.0 \tiny $\pm3.2$ & \scriptsize 90.7 \tiny $\pm3.3$ & \scriptsize 87.2 \tiny $\pm3.0$ & \scriptsize 89.9 \tiny $\pm3.8$ & \scriptsize 89.5 \tiny $\pm3.1$ & \scriptsize \textbf{ 91.0 } \tiny $\pm3.1$ & \scriptsize 89.5 \tiny $\pm3.1$ & \scriptsize 89.8 \tiny $\pm3.8$ \\ \midrule
{ \textsc{ protein homology } } & \scriptsize 95.3 \tiny $\pm0.8$ & \scriptsize 94.9 \tiny $\pm0.8$ & \scriptsize 95.1 \tiny $\pm0.9$ & \scriptsize 94.2 \tiny $\pm1.3$ & \scriptsize 95.0 \tiny $\pm1.0$ & \scriptsize 95.1 \tiny $\pm1.1$ & \scriptsize - & \scriptsize \textbf{ 95.7 } \tiny $\pm0.9$ & \scriptsize - \\ \midrule
{ \textsc{ webpages } } & \scriptsize 72.0 \tiny $\pm0.0$ & \scriptsize 73.1 \tiny $\pm2.0$ & \scriptsize 93.0 \tiny $\pm1.8$ & \scriptsize \textbf{ 93.1 } \tiny $\pm1.7$ & \scriptsize 86.7 \tiny $\pm0.0$ & \scriptsize 84.4 \tiny $\pm1.6$ & \scriptsize - & \scriptsize \textbf{ 93.1 } \tiny $\pm1.7$ & \scriptsize \textbf{ 93.1 } \tiny $\pm1.7$ \\ \bottomrule
\end{tabular} \end{sidewaystable*}
\begin{sidewaystable*}[ph!]
\centering \caption{Training times on the extremely unbalanced datasets} \label{tab:bigt} \begin{tabular}{ccccccccccccc} \toprule &WELM$_{\textsc{sig}}$& EEM$_{\textsc{sig}}$ &WELM$_{\textsc{nsig}}$& EEM$_{\textsc{nsig}}$ &WELM$_{\textsc{rbf}}$ & EEM$_{\textsc{rbf}}$ & LS-SVM$_{\textsc{rbf}}$ & EEKM$_{\textsc{rbf}}$ & SVM$_{\textsc{rbf}}$\\ \midrule
{ \textsc{ forest cover } } & \scriptsize 110.7 s & \scriptsize 104.6 s & \scriptsize 144.9 s & \scriptsize 45.6 s & \scriptsize 111.3 s & \scriptsize \textbf{ 38.2 } s & \scriptsize $>$600 s & \scriptsize 107.4 s & \scriptsize $>$600 s \\ \midrule
{ \textsc{ isolet } } & \scriptsize 9.7 s & \scriptsize 4.5 s & \scriptsize 4.9 s & \scriptsize 3.0 s & \scriptsize 3.4 s & \scriptsize \textbf{ 2.1 } s & \scriptsize 126.9 s & \scriptsize 3.2 s & \scriptsize 53.5 s \\ \midrule
{ \textsc{ mammography } } & \scriptsize 4.0 s & \scriptsize \textbf{ 2.2 } s & \scriptsize 6.1 s & \scriptsize 3.0 s & \scriptsize 4.0 s & \scriptsize 2.2 s & \scriptsize 327.3 s & \scriptsize 3.3 s & \scriptsize 9.5 s \\ \midrule
{ \textsc{ protein homology } } & \scriptsize 27.6 s & \scriptsize \textbf{ 21.6 } s & \scriptsize 86.3 s & \scriptsize 27.9 s & \scriptsize 62.5 s & \scriptsize 22.0 s & \scriptsize $>$600 s & \scriptsize 30.7 s & \scriptsize $>$600 s \\ \midrule
{ \textsc{ webpages } } & \scriptsize 16.0 s & \scriptsize \textbf{ 6.2 } s & \scriptsize 14.5 s & \scriptsize 8.5 s & \scriptsize 7.1 s & \scriptsize 6.4 s & \scriptsize $>$600 s & \scriptsize 9.0 s & \scriptsize 217.0 s \\ \bottomrule
\end{tabular} \end{sidewaystable*}
\subsection{Entropy based hyperparameter optimization}
Now we proceed to the entropy based evaluation. Given a particular set of linear hypotheses $\mathcal{M}$ in $\Hspace $, we want to select an optimal set of hyperparameters $\theta$ (such as the number of hidden neurons or the regularization parameter) which identifies a particular model $\LinearOperator_\theta \in \mathcal{M} \subset \Hspace $. Instead of using expensive internal cross-validation (or another generalization error estimation technique such as Err$^{0.632}$), we select the $\theta$ which maximizes our entropic measure. In particular, we consider a simplified Cauchy-Schwarz Divergence based strategy, where we select the $\theta$ maximizing
$$
\Dcs(\nor(\LinearOperator_\theta^T \meanpoint^+, \var(\LinearOperator_\theta^T \Hlayer^+)),\nor(\LinearOperator_\theta^T \meanpoint^-, \var(\LinearOperator_\theta^T \Hlayer^-))),
$$
and a kernel density based entropic strategy~\cite{MELC}, selecting the $\theta$ maximizing
\begin{equation}
\Dcs( \de{ \LinearOperator_\theta^T \Hlayer^+ } , \de{ \LinearOperator_\theta^T \Hlayer^-}),
\label{dcscond}
\end{equation}
where $\de{ A } = \de{ A }_{\sigma(A)}$ is a Gaussian KDE using Silverman's rule for the window width~\cite{silverman1986density}
$$
\sigma(A) = \left ( \tfrac{4}{3|A|} \right )^{1/5}\std(A) \approx \tfrac{1.06}{\sqrt[5]{|A|}}\std(A).
$$
This way we can use the whole given set for training and do not need to repeat the training process, as $\Dcs$ is computed on the training set instead of a hold-out set. First, one can notice in Table~\ref{tab:enthyperdcsnor} and Table~\ref{tab:enthyperdcs} that such an entropic criterion works well for EEM, EEKM and Support Vector Machines. On the other hand, it is not very well suited for ELM models. This confirms the conclusions of our previous work on classification using $\Dcs$~\cite{MELC}, where we argued that SVMs are conceptually similar in terms of the optimization objective, and extends them to the new class of models (EEMs).
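For concreteness, the Gaussian variant of the above criterion admits a simple closed form, since the cross- and self-information potentials of one-dimensional Gaussians are elementary integrals. A minimal sketch is given below; the interface is purely illustrative, and the overall normalisation convention of $\Dcs$ is irrelevant for the $\argmax$ selection.
\begin{verbatim}
import numpy as np

def dcs_gaussians(m1, v1, m2, v2):
    # Cauchy-Schwarz divergence between N(m1, v1) and N(m2, v2), using
    #   int N(x;m1,v1) N(x;m2,v2) dx = N(m1 - m2; 0, v1 + v2),
    #   int N(x;m,v)^2 dx            = 1 / (2 sqrt(pi v)).
    cross = np.exp(-0.5 * (m1 - m2) ** 2 / (v1 + v2)) \
            / np.sqrt(2 * np.pi * (v1 + v2))
    self1 = 1 / (2 * np.sqrt(np.pi * v1))
    self2 = 1 / (2 * np.sqrt(np.pi * v2))
    return -np.log(cross / np.sqrt(self1 * self2))

def select_theta(betas, H_plus, H_minus):
    # betas: dict mapping a hyperparameter value to its trained operator
    def score(beta):
        zp, zm = H_plus @ beta, H_minus @ beta
        return dcs_gaussians(zp.mean(), zp.var(), zm.mean(), zm.var())
    return max(betas, key=lambda theta: score(betas[theta]))
\end{verbatim}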
Second, Table~\ref{tab:enthyperdcsnor} shows that EEM and EEKM can truly select their hyperparameters using very simple technique requiring no model retrainings. Computation of (\ref{dcscond}) is linear in terms of training set and constant time if performed using precomputed projections of required objects (which are either way computed during EEM training). This make this very fast model even more robust. \begin{sidewaystable*}[ph!] \centering \caption{\textsc{UCI} datasets \textsc{GMean} with parameters tuning based on selecting a model according to $\Dcs(\nor(\LinearOperator^T \meanpoint^+, \LinearOperator^T \Sigma^+ \LinearOperator),\nor(\LinearOperator^T \meanpoint^-, \LinearOperator^T \Sigma^- \LinearOperator))$ where $\LinearOperator$ is a linear operator found by a particular optimization procedure instead of internal cross validation} \label{tab:enthyperdcsnor} \begin{tabular}{ccccccccccccc} \toprule &WELM$_{\textsc{sig}}$& EEM$_{\textsc{sig}}$ &WELM$_{\textsc{nsig}}$& EEM$_{\textsc{nsig}}$ &WELM$_{\textsc{rbf}}$ & EEM$_{\textsc{rbf}}$ & LS-SVM$_{\textsc{rbf}}$ & EEKM$_{\textsc{rbf}}$ & SVM$_{\textsc{rbf}}$\\ \midrule \textsc{ australian } & \scriptsize 51.2 \tiny $\pm 7.5$ & \scriptsize 86.3 \tiny $\pm 4.8$ & \scriptsize 50.3 \tiny $\pm 6.4$ & \scriptsize \textbf{ 86.5 } \tiny $\pm 3.2$ & \scriptsize 50.3 \tiny $\pm 8.5$ & \scriptsize 86.2 \tiny $\pm 5.3$ & \scriptsize 58.5 \tiny $\pm 7.9$ & \scriptsize 85.2 \tiny $\pm 5.6$ & \scriptsize 85.7 \tiny $\pm 4.7$ \\ \midrule \textsc{ breast-cancer } & \scriptsize 83.0 \tiny $\pm 4.3$ & \scriptsize 97.0 \tiny $\pm 1.6$ & \scriptsize 72.0 \tiny $\pm 6.6$ & \scriptsize 97.1 \tiny $\pm 1.9$ & \scriptsize 77.3 \tiny $\pm 5.3$ & \scriptsize 97.3 \tiny $\pm 1.1$ & \scriptsize 79.2 \tiny $\pm 7.7$ & \scriptsize 96.9 \tiny $\pm 1.4$ & \scriptsize \textbf{ 97.5 } \tiny $\pm 1.2$ \\ \midrule \textsc{ diabetes } & \scriptsize 52.3 \tiny $\pm 4.7$ & \scriptsize 74.4 \tiny $\pm 4.0$ & \scriptsize 51.7 \tiny $\pm 4.0$ & \scriptsize \textbf{ 74.7 } \tiny $\pm 5.2$ & \scriptsize 52.1 \tiny $\pm 3.7$ & \scriptsize 73.5 \tiny $\pm 5.9$ & \scriptsize 60.1 \tiny $\pm 4.2$ & \scriptsize 72.2 \tiny $\pm 5.4$ & \scriptsize 73.2 \tiny $\pm 5.9$ \\ \midrule \textsc{ german } & \scriptsize 57.1 \tiny $\pm 4.0$ & \scriptsize 69.3 \tiny $\pm 5.0$ & \scriptsize 51.7 \tiny $\pm 3.0$ & \scriptsize \textbf{ 72.4 } \tiny $\pm 5.4$ & \scriptsize 52.8 \tiny $\pm 6.3$ & \scriptsize 70.9 \tiny $\pm 6.9$ & \scriptsize 55.0 \tiny $\pm 4.3$ & \scriptsize 67.8 \tiny $\pm 5.7$ & \scriptsize 60.5 \tiny $\pm 4.5$ \\ \midrule \textsc{ heart } & \scriptsize 68.6 \tiny $\pm 5.8$ & \scriptsize 79.4 \tiny $\pm 6.9$ & \scriptsize 65.6 \tiny $\pm 5.9$ & \scriptsize \textbf{ 82.9 } \tiny $\pm 7.4$ & \scriptsize 60.3 \tiny $\pm 9.4$ & \scriptsize 77.4 \tiny $\pm 7.2$ & \scriptsize 66.2 \tiny $\pm 4.2$ & \scriptsize 77.7 \tiny $\pm 7.0$ & \scriptsize 76.5 \tiny $\pm 6.6$ \\ \midrule \textsc{ ionosphere } & \scriptsize 62.7 \tiny $\pm 10.6$ & \scriptsize 77.0 \tiny $\pm 12.8$ & \scriptsize 68.5 \tiny $\pm 5.1$ & \scriptsize 84.6 \tiny $\pm 9.1$ & \scriptsize 69.5 \tiny $\pm 9.6$ & \scriptsize 90.8 \tiny $\pm 5.2$ & \scriptsize 72.8 \tiny $\pm 6.1$ & \scriptsize 93.4 \tiny $\pm 4.2$ & \scriptsize \textbf{ 94.7 } \tiny $\pm 3.9$ \\ \midrule \textsc{ liver-disorders } & \scriptsize 53.2 \tiny $\pm 7.0$ & \scriptsize 68.5 \tiny $\pm 6.7$ & \scriptsize 52.2 \tiny $\pm 11.8$ & \scriptsize 62.1 \tiny $\pm 8.1$ & \scriptsize 53.9 \tiny $\pm 8.0$ & \scriptsize \textbf{ 71.4 } \tiny $\pm 7.0$ & 
\scriptsize 62.9 \tiny $\pm 7.8$ & \scriptsize 69.6 \tiny $\pm 8.2$ & \scriptsize 66.9 \tiny $\pm 8.0$ \\ \midrule \textsc{ sonar } & \scriptsize 66.3 \tiny $\pm 6.1$ & \scriptsize 66.1 \tiny $\pm 15.0$ & \scriptsize 80.2 \tiny $\pm 7.4$ & \scriptsize 76.9 \tiny $\pm 5.2$ & \scriptsize 83.2 \tiny $\pm 6.9$ & \scriptsize 82.8 \tiny $\pm 5.2$ & \scriptsize 85.9 \tiny $\pm 4.9$ & \scriptsize \textbf{ 87.7 } \tiny $\pm 6.1$ & \scriptsize 86.6 \tiny $\pm 3.3$ \\ \midrule \textsc{ splice } & \scriptsize 51.8 \tiny $\pm 4.3$ & \scriptsize 49.4 \tiny $\pm 5.5$ & \scriptsize 64.9 \tiny $\pm 3.1$ & \scriptsize 80.2 \tiny $\pm 2.6$ & \scriptsize 60.8 \tiny $\pm 3.5$ & \scriptsize 82.2 \tiny $\pm 3.5$ & \scriptsize \textbf{ 89.7 } \tiny $\pm 3.3$ & \scriptsize 88.0 \tiny $\pm 4.0$ & \scriptsize 89.5 \tiny $\pm 2.9$ \\ \midrule \end{tabular} \end{sidewaystable*} \begin{sidewaystable*}[ph!] \centering \caption{\textsc{UCI} datasets \textsc{GMean} with parameters tuning based on selecting a model according to $\Dcs(\de{ \LinearOperator^T \Hlayer^+ },\de{\LinearOperator^T \Hlayer^-})$ where $\LinearOperator$ is a linear operator found by a particular optimization procedure instead of internal cross validation} \label{tab:enthyperdcs} \begin{tabular}{ccccccccccccc} \toprule &WELM$_{\textsc{sig}}$& EEM$_{\textsc{sig}}$ &WELM$_{\textsc{nsig}}$& EEM$_{\textsc{nsig}}$ &WELM$_{\textsc{rbf}}$ & EEM$_{\textsc{rbf}}$ & LS-SVM$_{\textsc{rbf}}$ & EEKM$_{\textsc{rbf}}$ & SVM$_{\textsc{rbf}}$\\ \midrule \textsc{ australian } & \scriptsize 51.2 \tiny $\pm 7.5$ & \scriptsize 86.3 \tiny $\pm 4.8$ & \scriptsize 50.3 \tiny $\pm 6.4$ & \scriptsize \textbf{ 86.5 } \tiny $\pm 3.2$ & \scriptsize 50.3 \tiny $\pm 8.5$ & \scriptsize 86.2 \tiny $\pm 5.3$ & \scriptsize 58.5 \tiny $\pm 7.9$ & \scriptsize 85.2 \tiny $\pm 5.6$ & \scriptsize 84.2 \tiny $\pm 4.1$ \\ \midrule \textsc{ breast-cancer } & \scriptsize 83.0 \tiny $\pm 4.3$ & \scriptsize 97.0 \tiny $\pm 1.6$ & \scriptsize 72.0 \tiny $\pm 6.6$ & \scriptsize \textbf{ 97.4 } \tiny $\pm 1.2$ & \scriptsize 77.3 \tiny $\pm 5.3$ & \scriptsize 97.3 \tiny $\pm 1.1$ & \scriptsize 79.3 \tiny $\pm 7.1$ & \scriptsize 96.9 \tiny $\pm 1.4$ & \scriptsize 96.3 \tiny $\pm 2.4$ \\ \midrule \textsc{ diabetes } & \scriptsize 52.3 \tiny $\pm 4.7$ & \scriptsize 74.4 \tiny $\pm 4.0$ & \scriptsize 51.7 \tiny $\pm 4.0$ & \scriptsize \textbf{ 74.7 } \tiny $\pm 5.2$ & \scriptsize 52.1 \tiny $\pm 3.7$ & \scriptsize 73.5 \tiny $\pm 5.9$ & \scriptsize 60.1 \tiny $\pm 4.2$ & \scriptsize 72.2 \tiny $\pm 5.4$ & \scriptsize 71.9 \tiny $\pm 5.4$ \\ \midrule \textsc{ german } & \scriptsize 57.1 \tiny $\pm 4.0$ & \scriptsize 69.3 \tiny $\pm 5.0$ & \scriptsize 51.7 \tiny $\pm 3.0$ & \scriptsize \textbf{ 71.7 } \tiny $\pm 5.9$ & \scriptsize 52.8 \tiny $\pm 6.3$ & \scriptsize 70.9 \tiny $\pm 6.9$ & \scriptsize 54.4 \tiny $\pm 5.7$ & \scriptsize 67.8 \tiny $\pm 5.7$ & \scriptsize 59.5 \tiny $\pm 4.2$ \\ \midrule \textsc{ heart } & \scriptsize 60.0 \tiny $\pm 9.2$ & \scriptsize 79.4 \tiny $\pm 6.9$ & \scriptsize 65.6 \tiny $\pm 5.9$ & \scriptsize \textbf{ 82.9 } \tiny $\pm 7.4$ & \scriptsize 52.6 \tiny $\pm 9.0$ & \scriptsize 77.4 \tiny $\pm 7.2$ & \scriptsize 61.9 \tiny $\pm 5.8$ & \scriptsize 77.7 \tiny $\pm 7.0$ & \scriptsize 76.3 \tiny $\pm 7.7$ \\ \midrule \textsc{ ionosphere } & \scriptsize 62.4 \tiny $\pm 8.1$ & \scriptsize 77.0 \tiny $\pm 12.8$ & \scriptsize 68.5 \tiny $\pm 5.1$ & \scriptsize 84.6 \tiny $\pm 9.1$ & \scriptsize 67.6 \tiny $\pm 9.8$ & \scriptsize 90.8 \tiny $\pm 5.2$ & \scriptsize 67.0 \tiny $\pm 
10.7$ & \scriptsize \textbf{ 93.4 } \tiny $\pm 4.2$ & \scriptsize 92.3 \tiny $\pm 4.6$ \\ \midrule
\textsc{ liver-disorders } & \scriptsize 50.9 \tiny $\pm 11.5$ & \scriptsize 68.5 \tiny $\pm 6.7$ & \scriptsize 50.4 \tiny $\pm 9.2$ & \scriptsize 62.1 \tiny $\pm 8.1$ & \scriptsize 53.9 \tiny $\pm 8.0$ & \scriptsize \textbf{ 71.4 } \tiny $\pm 7.0$ & \scriptsize 62.9 \tiny $\pm 7.8$ & \scriptsize 69.6 \tiny $\pm 8.2$ & \scriptsize 66.9 \tiny $\pm 8.0$ \\ \midrule
\textsc{ sonar } & \scriptsize 66.3 \tiny $\pm 6.1$ & \scriptsize 66.1 \tiny $\pm 15.0$ & \scriptsize 80.2 \tiny $\pm 7.4$ & \scriptsize 76.9 \tiny $\pm 5.2$ & \scriptsize 62.9 \tiny $\pm 9.4$ & \scriptsize 82.8 \tiny $\pm 5.2$ & \scriptsize 83.6 \tiny $\pm 4.5$ & \scriptsize \textbf{ 87.7 } \tiny $\pm 6.1$ & \scriptsize 86.6 \tiny $\pm 3.3$ \\ \midrule
\textsc{ splice } & \scriptsize 51.8 \tiny $\pm 4.3$ & \scriptsize 33.1 \tiny $\pm 6.5$ & \scriptsize 64.9 \tiny $\pm 3.1$ & \scriptsize 80.2 \tiny $\pm 2.6$ & \scriptsize 60.8 \tiny $\pm 3.5$ & \scriptsize 82.2 \tiny $\pm 3.5$ & \scriptsize 85.4 \tiny $\pm 4.1$ & \scriptsize 88.0 \tiny $\pm 4.0$ & \scriptsize \textbf{ 89.5 } \tiny $\pm 2.9$ \\ \bottomrule
\end{tabular} \end{sidewaystable*}
\subsection{EEM stability}
It was previously reported~\cite{huang2006extreme} that ELMs give very stable results across a wide range of hidden layer sizes. We performed analogous experiments with EEM on the \textsc{UCI} datasets. We trained models for 100 increasing hidden layer sizes ($h=5,10,\ldots,500$) and plotted the resulting \textsc{GMean} scores in Fig.~\ref{fig:stability}.
\begin{figure}
\centering
\caption{\textsc{GMean} of EEM (with the RBF activation function) as a function of the hidden layer size on the \textsc{UCI} datasets.}
\label{fig:stability}
\end{figure}
One can notice that, similarly to ELM, the proposed methods are very stable. Once the machine gets enough neurons (around 100 in the case of the tested datasets), further increasing the feature space dimension has a minor effect on the generalization capabilities of the model. It is also worth noting that some of these datasets (like \textsc{sonar}) do not even have 500 points, so there are more dimensions in the Hilbert space than points available to build our covariance estimates, and even then we do not observe any rapid overfitting.
\section{Conclusions}
In this paper we have presented Extreme Entropy Machines, models derived from information theoretic measures and applied to classification problems. The proposed methods are strongly related to the concepts of Extreme Learning Machines (in terms of the general workflow, rapid training and randomization) as well as Support Vector Machines (in terms of the margin maximization interpretation and the LS-SVM duality). The main characteristics of EEMs are:
\begin{itemize}
\item an information theoretic background based on differential and R\'enyi's quadratic entropies,
\item a closed form solution of the optimization problem,
\item generative training, leading to direct probability estimates,
\item a small number of metaparameters,
\item good classification results,
\item rapid training that scales well to hundreds of thousands of examples and beyond,
\item theoretical and practical similarities to large margin classifiers and the Fisher Discriminant.
\end{itemize}
The performed evaluation showed that, similarly to ELM, the proposed EEM is a very stable model in terms of the size of the hidden layer and achieves classification results comparable to the ones obtained by SVMs and ELMs.
Furthermore, we showed that our method scales better to truly big datasets (consisting of hundreds of thousands of examples) without sacrificing the quality of the results. During our considerations we pointed out some open problems and issues which are worth investigating:
\begin{itemize}
\item Can one construct a closed-form entropy based classifier with distribution families other than Gaussians?
\item Is there a theoretical justification of the stability of the extreme learning techniques?
\item Is it possible to further improve the achieved results by performing unsupervised entropy based optimization in the hidden layer?
\end{itemize}
\section*{Acknowledgment}
We would like to thank Daniel Wilczak from the Institute of Computer Science and Computational Mathematics of the Jagiellonian University for access to the Fermi supercomputer, which made numerous experiments possible.
\end{document}
\begin{document} \title[Six Line Configurations]{Six line configurations and string dualities} \author{A. Clingher} \address{Department of Mathematics and Computer Science University of Missouri St. Louis MO 63121} \email{[email protected]} \author{A. Malmendier} \address{Department of Mathematics and Statistics, Utah State University, Logan, UT 84322} \email{[email protected]} \author{T. Shaska} \address{Department of Mathematics and Statistics, Oakland University, Rochester, MI 48309} \email{[email protected]} \begin{abstract} We study the family of K3 surfaces of Picard rank sixteen associated with the double cover of the projective plane branched along the union of six lines, and the family of its Van~\!Geemen-Sarti partners, i.e., K3 surfaces with special Nikulin involutions, such that quotienting by the involution and blowing up recovers the former. We prove that the family of Van~\!Geemen-Sarti partners is a four-parameter family of K3 surfaces with $H \oplus E_7(-1) \oplus E_7(-1)$ lattice polarization. We describe explicit Weierstrass models on both families using even modular forms on the bounded symmetric domain of type $IV$. We also show that our construction provides a geometric interpretation, called geometric two-isogeny, for the F-theory/heterotic string duality in eight dimensions. \end{abstract} \keywords{K3 surfaces, six line configurations, heterotic string, F-theory} \subjclass[2010]{11F03, 14J28, 14J81} \maketitle \section{Introduction} In this article, we consider configurations of six lines in general position on the projective plane. The double cover of the plane branched along their union is a K3 surface after resolving only ordinary double points. The moduli space of such K3 surfaces was described in~\cite{MR1136204}. Kloosterman classified all possible types of elliptic fibrations with a section on them in~\cite{MR2254405}. In \cite{MR3201823}, the authors consider K3 surfaces which are double covers of a blow-up of $\mathbb{P}^2$, branched along rational curves. They classified the elliptic fibrations on such surfaces and their van Geemen-Sarti involutions. \par The assumption that the six lines are in general position implies that the Picard rank of the resulting K3 surface is sixteen. In the special case when the six lines are tangent to a conic, the Picard rank is, generically, seventeen and one obtains as K3 surface a Kummer surface $\mathrm{Kum}(\operatorname{Jac}\mathcal{C})$ of the Jacobian $\operatorname{Jac}(\mathcal{C})$ of a generic genus-two curve $\mathcal{C}$. There is then, as shown in \cites{MR2427457, MR2824841, MR2935386, MR3366121}, a closely related K3 surface, called the \emph{Shioda-Inose surface} $\mathrm{SI}(\operatorname{Jac}\mathcal{C})$, which carries a \emph{Nikulin involution}, i.e., an automorphism of order two preserving the holomorphic two-form, such that quotienting by this involution and blowing up the fixed points recovers the Kummer surface. 
The Shioda-Inose surface $\mathrm{SI}(\operatorname{Jac}\mathcal{C})$ carries a canonical lattice polarization of type $H \oplus E_8(-1) \oplus E_7(-1)$ and is part of \emph{a geometric two-isogeny}: \begin{equation} \xymatrix{ \mathrm{Kum}(\operatorname{Jac}\mathcal{C}) \ \ \ar @/_0.5pc/ @{-->} _{} [rr] & & \ \ \mathrm{SI}(\operatorname{Jac}\mathcal{C}) \ar @/_0.5pc/ @{-->} _{} [ll] } \end{equation} establishing a one-to-one correspondence between two different types of surfaces with the same Hodge-theoretic data: principally polarized abelian surfaces and algebraic K3 surfaces polarized the special lattice $H \oplus E_8(-1) \oplus E_7(-1)$. The key geometric ingredient in this construction is a normal form equation for an elliptically fibered K3 surface whose periods determine a point $\tau$ in the Siegel upper-half space $\mathbb{H}_2$, with the coefficients in the equation being Siegel modular forms. The normal form equation, as well as the two-isogeny construction, are due in different forms to Kumar \cite{MR2427457} and to Clingher and Doran \cite{MR2935386}. \par In this article, we extend the notion of geometric two-isogeny to K3 surfaces with Picard rank sixteen. In this context, Kummer surfaces are replaced by what we shall refer to as \emph{double sextic surfaces} - K3 surfaces $\mathcal{Y}$ obtained as minimal resolutions of double covers of the projective plane branched along a configuration of six distinct lines. The Shioda-Inose surfaces from above are then replaced, as shown by Clingher and Doran in \cite{MR2824841} by K3 surfaces $\mathcal{X}$ polarized by the rank-sixteen lattice $H \oplus E_7(-1) \oplus E_7(-1)$. Similarly to the Shioda-Inose case, each of these K3 surfaces $\mathcal{X}$ carries a special Nikulin involution, $\jmath_{\mathcal{X}} $ called \emph{Van~\!Geemen-Sarti involution}. When quotienting by the involution $\jmath_{\mathcal{X}} $ and blowing up the fixed locus, one recovers the corresponding double-sextic surface $\mathcal{Y}$ together with a rational double cover map $ \Phi \colon \mathcal{X} \dashrightarrow \mathcal{Y}$. However, the Van~\!Geemen-Sarti involutions $\jmath_{\mathcal{X}} $ no longer determine Shioda-Inose structures. Instead, they appear as fiber-wise translation by two-torsion in a suitable Jacobian elliptic fibration $\pi^{\mathcal{X}}_{\mathrm{alt}}$. The geometric two-isogeny picture is then given by the diagram below: \begin{equation} \xymatrix{ \mathcal{X} \ar @(dl,ul) _{\jmath_{\mathcal{X}}} \ar [dr] _{\pi^{\mathcal{X}}_{\mathrm{alt}}} \ar @/_0.5pc/ @{-->} _{\hat{\Phi}} [rr] & & \mathcal{Y} \ar @(dr,ur) ^{\jmath_{\mathcal{Y}}} \ar [dl] ^{\pi^{\mathcal{Y}}_{\mathrm{alt}}} \ar @/_0.5pc/ @{-->} _{\Phi} [ll] \\ & \mathbb{P}^1 } \end{equation} \noindent We shall refer to the K3 surfaces $\mathcal{X}$ as the \emph{Van~\!Geemen-Sarti partners} of the double sextic surface $\mathcal{Y}$. From a physics point of view, Jacobian elliptic fibrations on K3 surfaces correspond to a certain subclass of eight-dimensional compactifications of the type IIB string in which the axio-dilaton field varies over a base~\cites{MR1403744,MR1409284,MR1412112,MR1408164}. The period lattice of the Van~\!Geemen-Sarti partners describes physical models dual to the $\mathfrak{e}_8\oplus \mathfrak{e}_8$ heterotic string, with an unbroken gauge algebra $\mathfrak{e}_7\oplus \mathfrak{e}_7$ ensuring that two Wilson line expectation values are non-zero. 
A similar result holds for the $\mathfrak{so}(32)$ heterotic string with an unbroken gauge algebra $\mathfrak{so}(24)\oplus \mathfrak{su}(2)^{\oplus 2}$. The function field of the Narain moduli space of these heterotic theories turns out to be the ring of modular forms of \emph{even characteristic} on the bounded symmetric domain of type $IV$ introduced by Matsumoto et al.~\!\cite{MR1204828}. Geometric two-isogeny provides a more refined and geometric understanding for this string duality on a natural sub-space of the full eighteen dimensional moduli space \cites{MR2369941, MR2826187, MR3366121}: by taking the K3 surface to be the Shioda-Inose surface $\mathrm{SI}(\operatorname{Jac}\mathcal{C})$, the F-theory/heterotic string duality is manifested as the aforementioned geometric two-isogeny. \par This article is structured as follows: in Section~2 we review the work of Dolgachev and Ortland \cite{MR1007155} and the moduli space associated with six-line configurations in the projective plane. We define new invariants of six-line configurations that generalize the Igusa invariants of binary sextics. We construct the function field of the moduli space explicitly, by determining a complete set of generators for the ring of modular forms of even characteristic. In Section~3 we construct explicit Weierstrass models for three Jacobian elliptic fibrations on the family of double-sextic surfaces $\mathcal{Y}$. One of them, which we call the \emph{alternate fibration}, is of particular importance: the coefficients in its Weierstrass equation are the generators of the ring of modular forms derived before. In Section~4 we construct the family of Van~\!Geemen-Sarti partners $\mathcal{X}$ of the double-sextic surfaces $\mathcal{Y}$ polarized by the lattice $H \oplus E_7(-1) \oplus E_7(-1)$. There are four non-isomorphic elliptic fibrations on $\mathcal{X}$; three will be important for the considerations in this article, and Weierstrass models will be constructed for them. Using the Van~\!Geemen-Sarti involution, we will determine the coefficients of these Weierstrass models in terms of the modular forms found in Section~2. Using a result of Vinberg \cite{MR3235787} and its interpretation in string theory in \cite{MR3366121}, we prove that the function field of the Narain moduli space of quantum-exact heterotic string compactifications with two non-vanishing Wilson lines is the ring of Siegel modular forms of {\it even weight.} In Section~5 we discuss the specialization of six-line configurations tangent to a common conic and the associated K3 surfaces. We find perfect agreement in this case with the results in~\cites{MR2824841,MR3712162,MR3366121}. \section{Invariants of six-line configurations in the projective plane} \label{sec:invariants} The Pl\"ucker embedding algebraically embeds the Grassmannian $\operatorname{Gr}(k,n;\mathbb{C})$ of all $k$-dimensional sub-spaces of an $n$-dimensional complex vector space $V$ as a sub-variety of the projective space $\mathbb{P}(\wedge^k V)$. The homogeneous coordinates of the image under the Pl\"ucker embedding, with respect to the natural basis of the exterior space $\wedge^k V$ relative to a chosen basis in $V$, are called \emph{Pl\"ucker coordinates}. The image of the Pl\"ucker embedding is an intersection of a number of quadrics defined by the so called \emph{Pl\"ucker relations}. \par We consider the situation $k=3$ and $n=6$ with $\dim \operatorname{Gr}(k,n;\mathbb{C})=9$. 
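For instance, writing $D_{ijk}$ for the maximal minor formed by the columns $i$, $j$, $k$ of a matrix in $\operatorname{Mat}(3,6;\mathbb{C})$ representing a point of the Grassmannian (this notation is fixed in the next paragraph), one of these quadratic relations reads
\begin{equation*}
D_{123}\, D_{456} - D_{124}\, D_{356} + D_{125}\, D_{346} - D_{126}\, D_{345} = 0,
\end{equation*}
which, after passing to the degree-one coordinates introduced below, becomes the linear relation $t_5 - t_6 - t_8 + t_{10} = 0$ appearing in Equations~(\ref{eqn:PlueckerRelations}).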
We start with the geometric setup of an \emph{ordered} configuration of six lines in general position in the projective plane $\mathbb{P}^2$. We write each line in the form $\ell_i: a_i z_1 + b_i z_2 + c_i z_3 =0$ for $i=1, \dots , 6$ with $[z_1:z_2:z_3] \in \mathbb{P}^2$. The coefficients of the lines are assembled in vectors $\mathbf v_i = \langle a_i, b_i, c_i \rangle^t$ and form a matrix $\mathbf{A} \in \operatorname{Mat}(3,6;\mathbb{C})$ given by $\mathbf{A}= [ \mathbf v_1 | \cdots | \mathbf v_6]$. Let $\mathbf{A}_{ijk} = [ \mathbf v_i | \mathbf v_j | \mathbf v_k]$ and $D_{ijk} = \det \mathbf{A}_{ijk}$ be the Pl\"ucker coordinates derived from $\mathbf{A} \in \operatorname{Mat}(3,6;\mathbb{C})$ considered as an element of the Grassmannian $\operatorname{Gr}(3,6;\mathbb{C})$. \par We consider the following cases of configurations of six lines in $\mathbb{P}^2$: \begin{definition} \label{def:sstable} We consider configurations of six lines in $\mathbb{P}^2$ that \begin{enumerate} \item[(0)] contain six lines in general position, \item are tangent to a common conic, \item contain three lines which are coincident in one point, \item contain one line which is coincident with two different pairs of lines in two different points, \item contain three lines pairwise coincident in three different points, and each of the three remaining lines is coincident in one intersection point, \item are combinations of case (1) and cases (2) through (4), \item[(6a)] contain four lines which intersect in one point, \item[(6b)] contain one double line. \end{enumerate} \end{definition} Configurations that include cases (0) through (6a) and (6b) are called \emph{semi-stable} configurations. On configurations of six lines we have a right action of $(\mathbb{C}^*)^6$ given by rescaling each line separately, and the obvious left action of $\operatorname{GL}_3(\mathbb{C})$ by acting on $[z_1:z_2:z_3]\in \mathbb{P}^2$. Next, we want to describe the isomorphism classes of such configurations of six lines. We define the so called degree-one Dolgachev-Ortland coordinates \cite{MR1007155} for configurations of six lines in $\mathbb{P}^2$ to be given by \begin{equation} \label{eqn:DOcoords} \begin{array}{lclclcl} t_1 & = & D_{135} D_{246}, &\quad& t_2 & = & D_{145} D_{236}, \\ t_3 & = & D_{146} D_{235}, &\quad& t_4 & = & D_{136} D_{245},\\ t_5 & = & D_{125} D_{346}, &\quad& t_6 & = & D_{126} D_{345}, \\ t_7 & = & D_{134} D_{256}, &\quad& t_8 & = & D_{124} D_{356},\\ t_9 & = & D_{156} D_{234}, &\quad& t_{10} & = & D_{123} D_{456}. \end{array} \end{equation} We have the following: \begin{lemma} The degree-one coordinates $t_1, \dots, t_{10}$ satisfy the relations \begin{equation} \label{eqn:PlueckerRelations} \begin{array}{lll} t_1-t_2-t_5-t_9,& t_1-t_2-t_6-t_7, & t_1-t_3-t_5-t_{10},\\ t_1-t_3-t_6-t_8,& t_1-t_4-t_7-t_{10}, & t_1-t_4-t_8-t_9,\\ t_2-t_3+t_7-t_8, & t_2-t_3+t_9-t_{10},& t_2-t_4+t_5-t_8,\\ t_2-t_4+t_6-t_{10},& t_3-t_4+t_5-t_7, & t_3-t_4+t_6-t_9,\\ t_5-t_6-t_7+t_9,& t_5-t_6-t_8+ t_{10}, & t_7-t_8-t_9+t_{10}. \end{array} \end{equation} In particular, only five relations among the fifteen relations are linearly independent. \end{lemma} \begin{proof} The proof follows by explicit computation for any matrix $\mathbf{A} \in \operatorname{Mat}(3,6;\mathbb{C})$. \end{proof} \noindent One also introduces the degree-two Dolgachev-Ortland coordinate given by \begin{equation} \label{Eqn:moduli} R= D_{123} D_{145} D_{246} D_{356} - D_{124} D_{135} D_{236} D_{456}. 
\end{equation} We have the following: \begin{lemma} The degree-two coordinate $R$ satisfies \begin{equation} \label{eqn:PlueckerRelations2} R^2 = \frac{1}{12} \left( \Big(\sum_{i=1}^{10} t_i^2\Big)^2 - 4\sum_{i=1}^{10} t_i^4 \right). \end{equation} \end{lemma} \begin{proof} The proof follows by explicit computation for any matrix $\mathbf{A} \in \operatorname{Mat}(3,6;\mathbb{C})$. \end{proof} The different strata in the moduli space can now be characterized as follows: \begin{lemma} \label{lem:equiv} In Definition~\ref{def:sstable} we have the following: \begin{enumerate} \item[(0)] $\Leftrightarrow$ no element of $(t_i)_{i=1}^{10}$ vanishes and $R\not =0$, \item $\Leftrightarrow$ no element of $(t_i)_{i=1}^{10}$ vanishes and $R=0$, \item $\Leftrightarrow$ exactly one element of $(t_i)_{i=1}^{10}$ vanishes, \item $\Leftrightarrow$ exactly two elements of $(t_i)_{i=1}^{10}$ vanish, \item $\Leftrightarrow$ exactly three elements of $(t_i)_{i=1}^{10}$ vanish, \item $\Leftrightarrow$ up to three elements of $(t_i)_{i=1}^{10}$ vanish and $R=0$, \item $\Leftrightarrow$ exactly four elements of $(t_i)_{i=1}^{10}$ vanish and $R=0$. \end{enumerate} \end{lemma} \begin{proof} Configurations of six lines no three of which are concurrent have four homogeneous moduli which we denote by $a, b, c, d$. A general matrix $\mathbf{A} \in \operatorname{Mat}(3,6;\mathbb{C})$ is written in terms of only $a, b, c, d$ using a $\operatorname{GL}_3(\mathbb{C})$ transformation. The lines are then in the form of Equations~(\ref{lines}). We discuss the details in Section~\ref{ssec:natural_fibration}. Equations~(\ref{compare1}) determine the Dolgachev-Ortland coordinates in terms of these moduli. We can easily check necessary and sufficient conditions for cases (1) through (6). It follows from Equation~(\ref{compare1}) and \cite{MalmendierClingher:2018}*{Prop.~5.13} that $R=0$ in Equation~(\ref{Eqn:moduli}) if and only if the six lines in general position are tangent to a common conic. \end{proof} We have the following: \begin{lemma} \label{lem:invariance} For a configuration of six lines in $\mathbb{P}^2$ the point \begin{equation} [t_1: \dots : t_{10}: R] \in \mathbb{P}(1, \dots, 1, 2) \end{equation} in complex weighted projective space, is well-defined and invariant under the right action of $(\mathbb{C}^*)^6$ and the left action of $\operatorname{GL}_3(\mathbb{C})$ on $\mathbf{A}$. \end{lemma} \begin{proof} The point in weighted projective space is well-defined because of Lemma~\ref{lem:equiv}. The invariance under the right action of $(\mathbb{C}^*)^6$ on $\mathbf{A}$ is immediate. The invariance under the left action of $\operatorname{GL}_3(\mathbb{C})$ follows from a computation showing that the coordinates $t_i$ for $1\le i \le 10$ and $R$ rescale by the determinant with weight two and four, respectively, and the point in weighted projective space remains invariant. \end{proof} For more details we refer to \cites{MR1007155, MR1828467}. The following is a corollary of Lemma~\ref{lem:invariance}: \begin{corollary}[\cite{MR1007155}] The moduli space of configurations of six lines in $\mathbb{P}^2$ is isomorphic to the algebraic variety in $\mathbb{P}(1, \dots, 1, 2)$ with the coordinates $[t_1: \dots : t_{10}: R]$ given by Equations~(\ref{eqn:PlueckerRelations}) and (\ref{eqn:PlueckerRelations2}), and $R \not =0$ and $t_i\not =0$ for all $i \in \{1, \dots, 10\}$. 
\end{corollary} We define the moduli space $\mathfrak{M}(2)^+$ to be the moduli space of \emph{ordered} configurations of six lines in $\mathbb{P}^2$ that fall into cases (0) through (5) in Definition~\ref{def:sstable}, i.e., \begin{equation} \label{eqn:M2+} \mathfrak{M}(2)^+ = \left\lbrace \big[ t_1 : \dots : t_{10} : R ] \ \Big\vert \begin{array}{l}\text{$t_i=0$ for at most three $i \in \{1, \dots, 10\}$},\\ \text{Eqns.~(\ref{eqn:PlueckerRelations}) and (\ref{eqn:PlueckerRelations2}) hold.}\end{array} \right\rbrace. \end{equation} The notation $\mathfrak{M}(2)^+$ indicates (i) the existence of a level-two structure obtained by splitting up six indices into two pairs of three, and (ii) the fact that we include all cases (1) through (5) in Definition~\ref{def:sstable} in addition to case (0). \par For a given ordered configuration of lines $\{\ell_1, \dots , \ell_6\}$ in general position, let us fix six out of fifteen points of intersection, namely the points \begin{equation} \begin{split} p_1 = \ell_2 \cap \ell_3, \qquad p_2 & = \ell_1 \cap \ell_3, \qquad p_3 = \ell_1 \cap \ell_2, \\ p_4 = \ell_5 \cap \ell_6, \qquad p_5 & = \ell_4 \cap \ell_6, \qquad p_6 = \ell_4 \cap \ell_5. \end{split} \end{equation} Given any non-singular conic $C \subset \mathbb{P}^2$, we define the dual of a point $p_i \not \in C$ to be the line $\ell'_i$ that joins the two points of $C$ on the two tangent lines of $C$ passing through $p_i$; if $p_i \in C$ we define $\ell'_i$ to be the tangent line of $C$ at $p_i$. Changing the conic $C$ to another non-singular conic $C'$ in this construction simply transforms the lines $\ell'_i$ by a projective automorphism of $\mathbb{P}^2$. We then say that the two configurations $\{\ell'_1, \dots , \ell'_6\}$ and $\{\ell_1, \dots , \ell_6\}$ are \emph{in association}. It was proved in \cite{MR1828467} that $\{\ell'_1, \dots , \ell'_6\}$ and $\{\ell_1, \dots , \ell_6\}$ are associated if and only if their respective matrices $\mathbf{A}'$ and $\mathbf{A}$ satisfy $\mathbf{A}' \cdot D \cdot \mathbf{A}^t=0$ for some diagonal matrix $D$ with $\det D \not =0$. \par Mapping an ordered configuration of six lines to an associated ordered configuration defines an involution $\imath$ on $\mathfrak{M}(2)^+$ with a fixed point set that consists of configurations of six lines tangent to a common conic, and in terms of the Dolgachev-Ortland coordinates it is given by \begin{equation} \label{eqn:i_action} \imath: \, [\ t_1: \dots : t_{10}: R \ ] \to [\ t_1: \dots : t_{10}: -R \ ]. \end{equation} We define a four-dimensional sub-space $\mathfrak{M}(2)$ of $\mathbb{P}^9$ by setting \begin{equation} \label{M_2} \mathfrak{M}(2) = \left\lbrace \big[ t_1 : \dots : t_{10} ] \in \mathbb{P}^9 \ \Big\vert \begin{array}{l} \text{$t_i=0$ for at most three $i \in \{1, \dots, 10\}$},\\ \text{and Eqns.~\!(\ref{eqn:PlueckerRelations}) hold.}\end{array} \right\rbrace. \end{equation} We also set \begin{equation} \overline{\mathfrak{M}(2)} = \left\lbrace \big[ t_1 : \dots : t_{10} ] \in \mathbb{P}^9 \ \Big\vert \begin{array}{l} \text{Eqns.~\!(\ref{eqn:PlueckerRelations}) hold.}\end{array} \! \right\rbrace.
\end{equation} Notice that, apart from the six-line configurations listed in Definition~\ref{def:sstable}, there are more degenerate configurations: there are configurations such that exactly six elements of $(t_i)_{i=1}^{10}$ vanish; there are also configurations such that exactly four elements of $(t_i)_{i=1}^{10}$ vanish, $R\not =0$, and all non-vanishing $t_i$'s equal $\pm 1$. Since $\mathfrak{M}(2)$ is an open subset of a four-dimensional linear sub-space of $\mathbb{P}^9$, it is easy to show \cite{MR1204828}*{Sec.~3.2} that $\overline{\mathfrak{M}(2)}$ is in fact isomorphic to $\mathbb{P}^4$. \par We take the map $\operatorname{pr}$ to be the projection $\mathbb{P}(1,\dots,1,2) \setminus \{[0:\dots:0:1]\} \to \mathbb{P}^9$ given by $[t_1: \dots : t_{10}: R] \mapsto [t_1: \dots : t_{10}]$. We have the following: \begin{lemma} \label{lem:M2} We have $\operatorname{pr} = \operatorname{pr}\circ \, \imath: \mathfrak{M}(2)^+ \to \mathfrak{M}(2)$ and $\operatorname{pr}(\mathfrak{M}(2)^+)\cong \mathfrak{M}(2)$. \end{lemma} \subsection{The modular description} \label{ssec:modular_description} The moduli spaces $\mathfrak{M}(2)$ and $\mathfrak{M}(2)^+$ have modular descriptions based on the seminal work in \cite{MR1204828}. By $\mathbf{H}_2$ we denote the set of all complex two-by-two matrices $\varpi$ over $\mathbb{C}$ such that the Hermitian matrix $(\varpi-\varpi^\dagger)/(2i)$ is positive definite, i.e., \begin{equation} \label{Siegel_tau} \mathbf{H}_2 = \left\lbrace \left( \begin{array}{cc} \tau_1 & z_1 \\ z_2 & \tau_2\end{array} \right) \in \operatorname{Mat}(2,2;\mathbb{C}) \; \Big| \; 4 \, \operatorname{Im}{\tau_1} \, \operatorname{Im}{\tau_2} > |z_1 - \bar{z}_2|^2, \; \operatorname{Im}{\tau_2} > 0 \right\rbrace , \end{equation} and the modular group $\Gamma \subset \operatorname{U}(2,2)$ given by \begin{equation} \label{modular_group} \Gamma = \left\lbrace G \in \operatorname{GL}_4\big(\mathbb{Z}[i]\big) \, \Big| \, G^\dagger \cdot \left( \begin{array}{cc} 0 & \mathbb{I}_2 \\ -\mathbb{I}_2 & 0 \end{array}\right) \cdot G = \left( \begin{array}{cc} 0 & \mathbb{I}_2 \\ -\mathbb{I}_2 & 0 \end{array}\right) \right\rbrace. \end{equation} The modular group acts on $\varpi \in \mathbf{H}_2$ by \begin{gather*} \forall \, G= \left(\begin{matrix} A & B \\ C & D \end{matrix} \right) \in \Gamma: \quad G\cdot \varpi=(A\cdot \varpi+B)(C\cdot \varpi+D)^{-1}. \end{gather*} It was shown in \cite{MR1204828}*{Prop.~\!1.5.1} that $\Gamma$ is generated by the five elements $G_1$, $G_2$, $G_3$, $G_4$, $G_5$ given by \begin{equation} \label{eqn:generators} \scalemath{\MyScaleTiny}{ \begin{array}{ccccc} \left(\begin{array}{rrrr} i & & & \\ & 1 & & \\ & & i & \\ & & & 1 \end{array}\right) , & \left(\begin{array}{rrrr} 1 &1 & & \\ 0 & 1 & & \\ & & 1 & 0 \\ & & -1 & 1 \end{array}\right) , & \left(\begin{array}{rrrr} 0 &1 & & \\ 1 & 0 & & \\ & & 0 & 1 \\ & & 1 & 0 \end{array}\right) , & \left(\begin{array}{rrrr} 1 &0 &1 &0 \\ 0 & 1 & 0 &0 \\ & & 1 & \\ & & & 1 \end{array}\right) , & \left(\begin{array}{rrrr} & & 1 & \\ & & &1 \\ -1 & & & \\ &-1 & & \end{array}\right), \end{array}} \end{equation} with determinants $\det{(G_1)}=-1$ and $\det{(G_k)}=1$ for $k=2,\dots, 5$. We also introduce the principal modular sub-group of \emph{complex level} $1+i$ (over the Gaussian integers) given by \begin{equation} \Gamma(1+i) = \left\lbrace G \in \Gamma \, \Big| \, G \equiv \mathbb{I}_4 \! \!\mod{1+i} \right\rbrace .
\end{equation} There is an additional involution $\mathcal{T}$ acting on elements of $\mathbf{H}_2$ by transposition, i.e., $\varpi \mapsto \mathcal{T}\cdot \varpi=\varpi^t$, yielding extended groups obtained from the semi-direct products \begin{equation} \label{modular group_extended} \Gamma_{\mathcal{T}} = \Gamma \rtimes \langle \mathcal{T} \rangle , \qquad \Gamma_{\mathcal{T}}(1+i) = \Gamma(1+i) \rtimes \langle \mathcal{T} \rangle , \end{equation} where $\langle \mathcal{T} \rangle$ is the sub-group generated by $\mathcal{T}$. We will always write elements $g \in \Gamma_{\mathcal{T}}$ in the form $g = G\,\mathcal{T} ^{n}$ with $G \in \Gamma$ and $n \in \{0,1\}$. A \emph{modular form $f$ of weight $2k$ relative to a finite-index sub-group $\Gamma' \subset \Gamma_{\mathcal{T}}$ with character $\chi_{f}$} is a holomorphic function on $\mathbf{H}_2$ such that \begin{equation} \forall \, \varpi \in \mathbf{H}_2, \ \forall \, g = G \mathcal{T} ^{n} \in \Gamma': \ f\big(g\cdot \varpi\big) = \chi_f(g) \ \det(C\varpi+D)^{2k} \ f(\varpi) . \end{equation} There is a well-known isomorphism $\Gamma/\Gamma(1+i)\cong \mathrm{S}_6$ -- since both groups are in fact isomorphic to $\operatorname{Sp}_4(\mathbb{Z}/2\mathbb{Z})$ -- where $\mathrm{S}_6$ is the permutation group of six elements. By $S_G$ we denote the image of $G \in \Gamma$ under the natural quotient map $\Gamma \to \mathrm{S}_6$ and by $\operatorname{sign}\!{(S_G)}$ the sign of this permutation $S_G$. The following was proven in \cite{MR1204828}: \begin{theorem}[Props.~3.1.1, 3.1.3, 3.1.5 in \cite{MR1204828}] \label{thm1} \hspace{2em} \begin{enumerate} \item There are ten theta functions $\theta^2_i(\varpi)$ for $1 \le i \le 10$ which are non-zero modular forms of weight two relative to $ \Gamma_{\mathcal{T}}(1+i)$ and for each $g = G\,\mathcal{T} ^{n} \in \Gamma_{\mathcal{T}}(1+i)$ with $n \in \lbrace 0,1 \rbrace$ the modular forms $\theta^2_i(\varpi)$ transform with $\chi_{\theta_i}(g)=\det{(G)}$. \item Any five of the ten functions $\theta^2_i(\varpi)$ for $1 \le i \le 10$ generate the ring of modular forms of level $1+i$ and character $\chi(g)=\det{(G)}$ for all $g \in\Gamma_{\mathcal{T}}(1+i)$. \item There is a unique function $\Theta(\varpi)$ which is a non-zero modular form of weight four relative to $\Gamma_{\mathcal{T}}$ such that for each $g = G \mathcal{T} ^{n} \in \Gamma_{\mathcal{T}}$ with $n \in \lbrace 0,1 \rbrace$ the modular form $\Theta(\varpi)$ transforms with character $\chi_{\Theta}(g)=(-1)^n \det{(G)} \, \operatorname{sign}{(S_G)}$ and satisfies \begin{equation} \label{eqn:Tsqr} \Theta(\varpi)^2 = 2^{-6}\cdot3^5\cdot5^2 \left( \big(\sum_{i=1}^{10} \theta_i(\varpi)^4\big)^2 - 4 \sum_{i=1}^{10} \theta_i(\varpi)^8\right) . \end{equation} \end{enumerate} \end{theorem} In the interest of keeping this section short, we do not give explicit formulas for $\theta^2_i(\varpi)$ with $1 \le i \le 10$. However, just as there are simple sum formulas for theta functions of even and odd characteristic in genus two and genus one, the same holds for the theta functions $\theta^2_i(\varpi)$ in Theorem~\ref{thm1}: they are simply theta functions of complex characteristic. All quadratic relations among the even theta functions $\theta^2_i(\varpi)$ for $1 \le i \le 10$ can then be derived explicitly. We refer to \cite{MR1204828}*{Sec.~2} for details. \begin{remark} The space $\mathbf{H}_2$ is a generalization of the Siegel upper-half space $\mathbb{H}_2$. 
In fact, elements invariant under the involution $\mathcal{T}$ are precisely the two-by-two symmetric matrices over $\mathbb{C}$ whose imaginary part is positive definite, i.e., \begin{equation} \mathbb{H}_2 = \Big\{ \varpi \in \mathbf{H}_2 \, \Big\vert \, \varpi^t=\varpi \Big\} . \end{equation} It was proven in \cite{MR1204828}*{Lemma 2.1.1(vi)} that for $\varpi = \tau \in \mathbb{H}_2$ we have $\theta_i(\varpi)=\vartheta_i(\tau)^2$ where $\vartheta_i(\tau)$ for $1 \le i \le 10$ are the even theta functions of genus two. We provide a geometric cross-check for (the squares of) these reduction formulas in Proposition~\ref{prop-5.6}. \end{remark} The following describes the action of the \emph{full} modular group on the theta functions: \begin{lemma} \label{lem:Gtransfo} The action of the generators $\mathcal{T}, G_1, \dots, G_5 \in\Gamma_{\mathcal{T}}$ in Equation~(\ref{eqn:generators}) on $\theta_i(\varpi)$ with $1 \le i \le 10$ and $\rho=-\det{(\varpi)}$ is given in the following table: \begin{equation} \label{eqn:Gtransfo} \scalemath{\MyScaleMedium}{ \begin{array}{l|rrrrrrrrrr} & \theta_1 & \theta_2 & \theta_3 & \theta_4 & \theta_5 & \theta_6 & \theta_7 & \theta_8 & \theta_9 & \theta_{10}\\ \hline \mathcal{T} & \theta_1 & \theta_2 & \theta_3 & \theta_4 & \theta_5 & \theta_6 & \theta_7 & \theta_8 & \theta_9 & \theta_{10}\\ G_1^{\pm1} & \theta_1 & \theta_2 & \theta_3 & \theta_4 & \theta_5 & \theta_6 & \theta_7 & \theta_8 & \theta_9 & -\theta_{10}\\ G_2^{\pm1} & \theta_1 & \theta_4 & \theta_3 & \theta_2 & \theta_8 & \theta_{10} & \theta_7 & \theta_5 & \theta_9 & \theta_{6}\\ G_3^{\pm1} & \theta_1 & \theta_2 & \theta_4 & \theta_3 & \theta_7 & \theta_9 & \theta_5 & \theta_8 & \theta_6 & \theta_{10}\\ G_4^{\pm1} & \theta_3 & \theta_4 & \theta_1 & \theta_2 &\pm i\,\theta_5&\pm i\,\theta_6 & \theta_9 &\pm i\,\theta_8 & \theta_7 &\pm i\,\theta_{10}\\ G_5^{\pm1} & \rho^{\pm 1} \theta_1&\rho^{\pm 1}\theta_8 &\rho^{\pm 1} \theta_5 &\rho^{\pm1}\theta_7 &\rho^{\pm1}\theta_3 &\rho^{\pm1}\theta_9 &\rho^{\pm1}\theta_4 &\rho^{\pm1}\theta_2 &\rho^{\pm1}\theta_6 &\rho^{\pm1}\theta_{10} \end{array}} \end{equation} \end{lemma} \begin{proof} The proof follows from an explicit computation applying the formulas in Lemmas~2.1.1(ii) and Lemma~2.1.2(viii)-(x) in \cite{MR1204828}. \end{proof} \begin{lemma} \label{lem:J4vanish} Under the action $\varpi \mapsto \mathcal{T} \cdot \varpi =\varpi^t$ we have \begin{equation} \label{eqn:I_action} \Big( \theta_1(\varpi) , \dots , \theta_{10}(\varpi) , \Theta(\varpi) \Big) \ \mapsto \ \Big( \theta_1(\varpi) , \dots , \theta_{10}(\varpi) , - \Theta(\varpi) \Big). \end{equation} \end{lemma} \begin{proof} The transformation for $\Theta(\varpi)$ was proven in \cite{MR1204828}*{Cor.~\!3.1.4}. 
\end{proof} \begin{lemma} \label{lem:DAvanish} Under the action $\varpi \mapsto M_i \cdot G_1 \cdot M_i^{-1} \cdot \varpi$ we have \begin{equation} \Big[ \theta_1(\varpi): \dots : \theta_{10}(\varpi) \Big] \mapsto \Big[ (-1)^{\delta_{i,1}} \theta_1(\varpi): \dots : (-1)^{\delta_{i,10}} \theta_{10}(\varpi) \Big], \end{equation} where $M_i \in \Gamma$ with $\det{(M_i)}=1$ and $1 \le i \le 10$, $\delta_{\mu,\nu}$ is the Kronecker delta function, and the matrices $M_i$ are given in the following table: \begin{equation} \scalemath{\MyScaleMedium}{ \begin{array}{r|r|rrrrrrrrrrr} i& M_i & \multicolumn{11}{c}{[(-1)^{\delta_{i,1}}, \dots , (-1)^{\delta_{i,10}}]}\\ \hline 1&G_4G_3G_5G_4G_3G_2 & [&-1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,& 1] \\ 2&G_2G_5G_4G_3G_2 & [&1 ,&-1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,& 1] \\ 3&G_3G_5G_4G_3G_2 & [&1 ,&1 ,&-1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,& 1] \\ 4&G_5G_4G_3G_2 & [&1 ,&1 ,&1 ,&-1 ,&1 ,&1 ,&1 ,&1 ,&1 ,& 1] \\ 5&G_3G_4G_3G_2 & [&1 ,&1 ,&1 ,&1 ,&-1 ,&1 ,&1 ,&1 ,&1 ,& 1] \\ 6&G_2 & [&1 ,&1 ,&1 ,&1 ,&1 ,&-1 ,&1 ,&1 ,&1 ,& 1] \\ 7&G_4G_3G_2 & [&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&-1 ,&1 ,&1 ,& 1] \\ 8&G_2G_3G_4G_3G_2 & [&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&-1 ,&1 ,& 1] \\ 9&G_3G_2 & [&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&-1 ,& 1] \\ 10&\mathbb{I}_2 & [&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&1 ,&-1] \end{array}} \end{equation} In particular, $\theta_{i}(\varpi)$ has simple zeros exactly on the $\Gamma_{\mathcal{T}}(1+i)$-orbit of the fixed locus of $M_i \cdot G_1 \cdot M_i^{-1}$ for $1 \le i \le 10$. \end{lemma} \begin{proof} The first part of the proof follows from Lemma~\ref{lem:Gtransfo}. The fact that it is a simple zero must only be proven for one theta function, say $\theta_{10}$. This was done in \cite{MR1204828}*{Lemma~\!2.3.1}. \end{proof} The groups $\Gamma_{\mathcal{T}}$ and $\Gamma_{\mathcal{T}}(1+i)$ have the index-two subgroups given by \begin{equation} \label{index-two} \begin{split} \Gamma^{+}_{\mathcal{T}} = \Big\lbrace g = G\, \mathcal{T}^{n} \in \Gamma_{\mathcal{T}} & \ \Big| \ n \in \lbrace 0,1 \rbrace, \; (-1)^n \, \det{G} =1 \Big\rbrace, \\ \Gamma^{+}_{\mathcal{T}}(1+i) & = \Gamma^{+}_{\mathcal{T}} \cap \Gamma_{\mathcal{T}}(1+i). \end{split} \end{equation} Obviously, we have the following: \begin{lemma} \label{lem:generators} The group $\Gamma^{+}_{\mathcal{T}}$ is generated by elements $G_1\mathcal{T}$ and $G_2, \dots, G_5$ where $G_k$ for $k=1,\dots, 5$ were given in Equation~\!(\ref{eqn:generators}). \end{lemma} \par We now consider the quotient spaces $\mathbf{H}_2/\Gamma_{\mathcal{T}}(1+i)$ and $ \mathbf{H}_2/\Gamma^+_{\mathcal{T}}(1+i)$, and the Satake compactification \begin{equation} \overline{\mathbf{H}_2/\Gamma_{\mathcal{T}}(1+i)} \;. \end{equation} For details on the construction of the Satake-Baily-Borel compactification we refer to \cites{MR0216035,MR0118775}. We define a holomorphic map \begin{equation} \label{eqn:map1a} \mathcal{F}': \ \ \mathbf{H}_2 \to \mathbb{P}^9, \quad \varpi \mapsto \Big[ \theta^2_1(\varpi) : \dots : \theta^2_{10}(\varpi) \Big]. \end{equation} It follows immediately from Theorem~\!\ref{thm1} that the map $\mathcal{F}'$ descends to a holomorphic map on the quotient space $\mathbf{H}_2/\Gamma_{\mathcal{T}}(1+i)$. Equations~\!(\ref{eqn:PlueckerRelations}) coincide with the quadratic relations between the even theta functions in Theorem~\!\ref{thm1} under the identification given by the map $\mathcal{F}'$. An analysis of the simple zeros of the theta functions in \cite{MR1136204}*{Lemma~\!2.3.1} shows that the image is in fact contained in $\mathfrak{M}(2)$. 
Thus, we obtain a holomorphic map \begin{equation} \label{eqn:map1b} \begin{split} \mathcal{F}: \ \ & \mathbf{H}_2/\Gamma_{\mathcal{T}}(1+i) \ \longrightarrow \ \mathfrak{M}(2) \subset \mathbb{P}^9,\\ & \varpi \ \mapsto \ \Big[t_1 : \dots : t_{10} \Big] = \Big[ \theta^2_1(\varpi) : \dots : \theta^2_{10}(\varpi) \Big] . \end{split} \end{equation} We have the following: \begin{theorem}[\cite{MR1204828}*{Thm.~3.2.1}] \label{thm2} The map $\mathcal{F}$ in Equation~(\ref{eqn:map1b}) extends to an isomorphism between the Satake compactification of $\mathbf{H}_2/\Gamma_{\mathcal{T}}(1+i)$ and $\overline{\mathfrak{M}(2)}$ given by \begin{equation*} \begin{split} \mathcal{F}: \ \ & \overline{\mathbf{H}_2/\Gamma_{\mathcal{T}}(1+i)}\ \overset{\cong}{\longrightarrow}\ \overline{\mathfrak{M}(2)} \subset \mathbb{P}^9. \end{split} \end{equation*} \end{theorem} We also define a holomorphic map \begin{equation} \label{eqn:map2a} \mathcal{G}': \ \ \mathbf{H}_2 \to \mathbb{P}(1,\dots,1,2), \; \varpi \mapsto \Big[ \theta^2_1(\varpi) : \dots : \theta^2_{10}(\varpi) : 2^2 3^{-3} 5^{-1}\Theta(\varpi) \Big]. \end{equation} We have the following: \begin{proposition} The map $\mathcal{G}'$ descends to a holomorphic map \begin{equation} \label{eqn:map2b} \begin{split} \mathcal{G}: \ \ & \mathbf{H}_2/\Gamma^+_{\mathcal{T}}(1+i) \ \longrightarrow \ \mathfrak{M}(2)^{+} \subset \mathbb{P}(1,\dots,1,2),\\ & \varpi \ \mapsto \ \Big[t_1 : \dots : t_{10}: R \Big] = \Big[ \theta^2_1(\varpi) : \dots : \theta^2_{10}(\varpi): 2^2 3^{-3} 5^{-1}\Theta(\varpi)\Big] . \end{split} \end{equation} Moreover, the following diagram commutes: \begin{equation} \label{pic:modular_maps} \xymatrix{ \mathbf{H}_2/\Gamma^+_{\mathcal{T}}(1+i) \ar[r]^-{\mathcal{G}} \ar @{->>} [d] &\mathfrak{M}(2)^{+} \ar @{->>} [d]^{\operatorname{pr}}\\ \mathbf{H}_2/\Gamma_{\mathcal{T}}(1+i) \ar [r]^-{\mathcal{F}} &\mathfrak{M}(2)} \end{equation} with $\mathcal{G} \circ \mathcal{T}= \imath \circ \mathcal{G}$ and $\operatorname{pr} \circ\, \imath =\operatorname{pr}$, where $\Gamma^+_{\mathcal{T}}(1+i)$ is the index-two sub-group of $\Gamma_{\mathcal{T}}(1+i)$ defined in Equation~(\ref{index-two}). \end{proposition} \begin{proof} For elements $g = G\,\mathcal{T} ^{n} \in \Gamma^+_{\mathcal{T}}(1+i)$ we have by definition $(-1)^n \det{G} =1$ and $\operatorname{sign}\!{(S_G)}=1$. It then follows from Theorem~\ref{thm1} that the map $\mathcal{G}'$ descends to a holomorphic map on the quotient space $\mathbf{H}_2/\Gamma^+_{\mathcal{T}}(1+i) $. Combined with the results in Theorem~\ref{thm2} this shows that the image is contained in $\mathfrak{M}(2)^{+}$. The branching locus of the covering $\mathbf{H}_2/\Gamma^+_{\mathcal{T}}(1+i) \to \mathbf{H}_2/\Gamma_{\mathcal{T}}(1+i)$ is given by the simple zeros of the modular form $\Theta(\varpi)$ of weight four which is unique up to constant; see~Theorem~\ref{thm1}. Moreover, the ratio of the right hand sides of Equation~(\ref{eqn:Tsqr}) and Equation~(\ref{eqn:PlueckerRelations2}) yields $\Theta(\varpi)^2/R^2=2^{-4}\cdot3^{6}\cdot5^2$ under the identification given by the holomorphic map $\mathcal{F}$. Thus, the two branching loci coincide. The equivariance then follows from Equations~(\ref{eqn:i_action}) and (\ref{eqn:I_action}).
\end{proof} \subsection{The Satake sextic and ring of modular forms} \label{ssec:mod_forms} We introduce the following linear combinations of the degree-one invariants $t_i$ which we call the generalized \emph{level-two Satake coordinate functions} $x_1, \dots, x_6$. We set $x_1 + \dots + x_6=0$. Choosing three out of the five roots $x_1, \dots, x_5$ at a time, we obtain all invariants $t_1, \dots, t_{10}$ by setting \begin{equation} \label{InvSatake2Theta} \begin{array}{llllll} -3\,t_9 & = x_1&+x_2&+x_3,\\ \phantom{-}3\,t_8 & = x_1&+x_2&&+x_4,\\ -3\,t_6 &=x_1&+x_2 &&&+x_5,\\ \phantom{-}3\,t_5 &=x_1& &+x_3&+x_4,\\ -3\,t_{10} & = x_1&&+x_3&&+x_5,\\ \phantom{-}3\,t_7 & = x_1&&&+x_4&+x_5,\\ -3\,t_3 &= &+x_2 &+ x_3 &+x_4,\\ -3\,t_1 &= &+x_2 &+ x_3 &&+x_5,\\ -3\,t_4 &= &+x_2 &&+ x_4 &+x_5,\\ -3\,t_2 &= &&+x_3 &+ x_4 &+x_5. \end{array} \end{equation} The $j$-th power sums $s_j$ are defined by $s_j = \sum_{k=1}^6 x_k^j$ for $j=1, \dots,6$. It can be easily checked using Equation~(\ref{InvSatake2Theta}) that $s_1=\sum_k x_k =0$. We combine the level-two Satake coordinate functions into a sextic polynomial, called the \emph{Satake sextic}, given by \[ \mathcal{S}(x) = \prod_{k=1}^6 (x -x_k) \;. \] The coefficients of the Satake sextic are polynomials in $\mathbb{Z} \left[ \frac 1 2 , \frac 1 3 , s_2, s_3, s_4, s_5, s_6 \right]$. In fact, we obtain \[ \mathcal{S}(x) = x^6 + \sum_{k=1}^6 \frac{(-1)^k}{k!} b_k\, x^{6-k} \, \] where $b_k$ is the $k$-th Bell polynomial in the variables $\lbrace s_1, -s_2, 2! s_3, - 3! s_4, 4! s_5, -5! s_6 \rbrace$. The following holds: \begin{lemma}\label{Satake6} The generalized level-two Satake coordinate functions $x_1, \dots , x_6$ are the roots of the Satake sextic $\mathcal{S}\in \mathbb{Z} \left[ \frac 1 2 , \frac 1 3 , s_2, s_3, s_4, s_5, s_6 \right]\![x]$ given by \begin{equation} \label{SatakeSextic} \begin{split} \mathcal{S}(x) & = \mathcal{B}(x)^2-4 \, \mathcal{A}(x),\; \\[8pt] \ \text{with} \qquad\mathcal{B}(x)& =x^3- \frac{s_2}{4} \, x - \frac{s_3}{6},\\[5pt] \mathcal{A}(x) &= \frac{4s_4-s_2^2}{64} x^2 - \frac{5s_2 s_3-12s_5}{240} x + \frac{3s_2^3-4s_3^2-18s_2 s_4+24s_6}{576}. \end{split} \end{equation} \end{lemma} \begin{proof} The proof follows from the explicit computation of the Bell polynomials using the relation $s_1=0$. \end{proof} \par We define new quantities $J_2, J_3, J_4, J_5, J_6$ by setting \begin{equation} \label{eqn:Jinvariants} \begin{split} J_2 = \dfrac{s_2}{12} , \qquad J_3 &= \dfrac{s_3}{12}, \qquad J_4 =\dfrac{4s_4-s_2^2}{64}, \\ J_5 = \dfrac{5s_2 s_3-12s_5}{240}, \qquad J_6 &= \dfrac{3 s_2^3 - 4 s_3^2 -18 s_2 s_4+24 s_6}{576}, \end{split} \end{equation} such that \begin{equation} \label{Jinvariants} \begin{split} s_2 = 12 \, J_2, \qquad s_3 & = 12 \, J_3, \qquad \ s_4 = 36 \, J_2^2+16 \, J_4, \\ s_5 = 60 \, J_2 J_3-20 \, J_5, \qquad s_6 & = 108 \, J_2^3+144 \, J_2 J_4+24 \, J_3^2+24 \, J_6. \end{split} \end{equation} We have the following: \begin{lemma} \label{lem:Jinvariance} $J_2, J_3, J_4, J_5, J_6$ are polynomials over $\mathbb{Q}$ in $t_i$ for $1 \le i \le10$ that are invariant under the action of the permutation group $\mathrm{S}_6$ on the variables $t_i$ induced by permuting the lines of a six-line configuration.
\end{lemma} \begin{proof} Using Equations~(\ref{eqn:PlueckerRelations}) we can solve for $x_1, \dots, x_6$ using any five of the ten $t_i$'s and obtain \begin{equation} \label{Satake2Theta} \begin{array}{rrrrrr} x_1 =&2\,t_1 &+2\,t_5 &-3\,t_6 &-t_7 &-t_8,\\ x_2 =&-t_1 &-t_5 & &-t_7 &+2\, t_8,\\ x_3 =&-t_1 &+2\,t_5 & &-t_7 &-t_8,\\ x_4 =& -t_1 &-t_5 &+3\,t_6 &+2\,t_7 &+2\,t_8,\\ x_5 =& -t_1 &-t_5 & &+2\,t_7 &-t_8,\\ x_6 =&2t_1 &-t_5 & &-t_7 &- t_8. \end{array} \end{equation} Plugging Equations~(\ref{Satake2Theta}) into the $j$-th power sums $s_j$ and, in turn, into Equations~(\ref{eqn:Jinvariants}) proves that $J_k$ for $2\le k \le 6$ are polynomials in $t_i$ with $1 \le i \le10$ and rational coefficients. One checks that for a set of generators of the permutation group of six elements $\mathrm{S}_6$, acting on the variables $t_i$ with $1 \le i \le 10$ as defined by permuting lines in Equations~(\ref{eqn:DOcoords}), the polynomials $J_k$ for $2\le k \le 6$ remain invariant. \end{proof} \par Notice that there are many notations in the literature for invariants of binary forms. We will show in Section~\ref{6Lrestricted} how our invariants $J_k$ for $2 \le k \le 6$, when restricted to a six-line configuration tangent to a common conic, are related to the Igusa invariants of a binary sextic which we denote by $I_2, I_4, I_6, I_{10}$; see Equation~(\ref{modd_restriced}). We have the following: \begin{lemma} The moduli space $\mathfrak{M}(2)^{+}$ of six lines in $\mathbb{P}^2$ embeds into the variety in $\mathbb{P}(1, 1, 1, 1, 1, 1, 2)$ with coordinates $[x_1: \dots: x_6: X]$ given by the equations $s_1=0$ and $X^2=4s_4-s_2^2$. \end{lemma} \begin{proof} Given a point in the image, setting $x_6=-(x_1 + \dots +x_5)$, Equations~(\ref{InvSatake2Theta}) for $t_1, t_5, t_6, t_7, t_8$ constitute the inverse transformation to Equations~(\ref{Satake2Theta}). Moreover, one checks that $4s_4-s_2^2=(18 \, R)^2$ in Equation~(\ref{Eqn:moduli}). \end{proof} \begin{remark} The sub-variety defined by $X=0$ is the algebraic variety in $\mathbb{P}^4$ given by $s_1=0$ and $s_2^2=4s_4$ known as Igusa's quartic. It corresponds to the moduli space of configurations of six lines tangent to a conic and is closely related to the moduli space of genus-two curves with level-two structure. We will discuss the details in Section~\ref{6Lrestricted}. \end{remark} \par In terms of the invariants $J_k$ with $2\le k\le6$, the Satake sextic is given by \begin{equation} \label{SatakeSextic2} \begin{array}{c} \mathcal{S}(x) =\mathcal{B}(x)^2 -4 \, \mathcal{A}(x),\\[8pt] \ \text{with} \qquad \mathcal{B}(x)=x^3- 3 \, J_2 x - 2 \, J_3, \quad \quad \mathcal{A}(x) = J_4 \, x^2 - J_5 \, x + J_6. \end{array} \end{equation} One also checks that the square of the degree-two Dolgachev-Ortland invariant in Equation~(\ref{eqn:PlueckerRelations2}) is given by $J_4$, i.e., \begin{equation} \label{eqn:R} R^2 = 2^{4} 3^{-4} J_4 \;. \end{equation} We introduce three more invariants: the discriminants of the Satake sextic $\mathcal{S}(x)$ and the quadratic polynomial $\mathcal{A}(x)$ which have degrees $30$ and $10$, respectively, as well as the resultant of the polynomials $\mathcal{A}(x)$ and $\mathcal{B}(x)$ which has degree $18$. By construction, they are all homogeneous polynomials in the invariants $J_k$ for $2\le k \le 6$ with integer coefficients.
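The degree count can be verified in one line: each root $x_k$ is linear and each invariant $J_k$ is homogeneous of degree $k$ in the $t_i$, so that \begin{equation*} \deg \operatorname{Disc}(\mathcal{S}) = \deg \prod_{1\le i<j\le 6} (x_i-x_j)^2 = 2\binom{6}{2}=30, \qquad \deg \operatorname{Disc}(\mathcal{A}) = \deg\big(J_5^2-4\,J_4 J_6\big)=10, \qquad \deg \operatorname{Res}(\mathcal{A},\mathcal{B}) = \deg\big(J_3^2 J_4^3\big)=18, \end{equation*} where the last degree is read off from any single monomial of the resultant below, for instance the term $4\, J_3^2 J_4^3$.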
One checks that \begin{equation} \label{eqn:decomp} \begin{split} \operatorname{Disc} (\mathcal{A}) & = J_5^2 -4 \, J_4 J_6 = 2^{-4}3^{10} \prod_{i=1}^{10} t_i ,\\ \operatorname{Res} (\mathcal{A},\mathcal{B}) & = 9 \, J_2^2 J_4^2 J_6+6 \, J_2 \, J_3 \, J_4^2 J_5+4 \, J_3^2 J_4^3 +6 \, J_2 \, J_4 \, J_6^2\\ &-3 \, J_2 \, J_5^2 \,J_6+6 \, J_3\, J_4 \, J_5 \, J_6-2 \,J_3 \, J_5^3+J_6^3. \end{split} \end{equation} For the discriminant of the Satake sextic, i.e., $\operatorname{Disc}(\mathcal{S})= \prod_{i<j} (x_i - x_j)^2$, we suppress the lengthy polynomial expression in terms of the $J_k$ for $k=2, \dots, 6$. We rather give the following formula in terms of modular forms on $\mathfrak{M}(2)$, namely \begin{equation}\label{eq:DiscS} \begin{array}{c} \operatorname{Disc} (\mathcal{S}) = 3^{30} \prod_{j=2}^{10} (t_1 -t_j)^{2} \\[6pt] \times (t_2 -t_3)^{2} (t_3 -t_4)^{2} (t_4 -t_5)^{2} (t_5 -t_6)^{2} (t_2 -t_4)^{2} (t_4 -t_6)^{2}. \end{array} \end{equation} We have the following: \begin{corollary} \label{lem:Js} A configuration of six lines in $\mathbb{P}^2$ falls into cases (0) through (5) in Definition~\ref{def:sstable} if and only if the invariants in Equation~(\ref{eqn:Jinvariants}) satisfy \begin{equation} ( J_4 , \ J_5, \ J_6 ) \neq (0,0,0) . \end{equation} For cases (1) through (5) in Definition~\ref{def:sstable}, we find the following equivalences (where all invariants not specified remain generic): \begin{enumerate} \item [(1)] $\Leftrightarrow \ J_4 =0$, \item [(2)] $\Leftrightarrow \ \operatorname{Disc}(\mathcal{A}) =0$, \item [(3, 4)] $\Leftrightarrow \ \operatorname{Disc}(\mathcal{A}) =\operatorname{Res} (\mathcal{A},\mathcal{B}) =0$, \item [(5)] $\Leftrightarrow \ J_4 =J_5=0$. \end{enumerate} For cases (6a) and (6b), we find $( J_4 , J_5, J_6 ) = (0,0,0)$. \end{corollary} \begin{proof} One first checks that $ \operatorname{Disc}(\mathcal{A}) =\operatorname{Res} (\mathcal{A},\mathcal{B})=0$ implies $\operatorname{Disc}(\mathcal{S}) =0$. The proof follows the same strategy as the one applied in the proof of Lemma~\ref{lem:equiv}. We explicitly compute the invariants $J_4 , \ J_5, \ J_6$ in terms of the moduli $a, b, c, d$ and then restrict to cases (1) through (5) in Definition~\ref{def:sstable}. Moreover, $( J_4 , J_5, J_6 ) = (0,0,0)$ implies $\operatorname{Disc}(\mathcal{A}) = \operatorname{Disc}(\mathcal{S}) =\operatorname{Res} (\mathcal{A},\mathcal{B})= 0$. \end{proof} In light of Lemma~\ref{lem:Jinvariance} and Corollary~\ref{lem:Js} we can now define a moduli space of \emph{unordered} configurations of six lines in $\mathbb{P}^2$. We define $\mathfrak{M}$ to be the four-dimensional open complex variety given by \begin{equation} \label{modulispace} \mathfrak{M} = \Big\{ \left [ J_2 : \ J_3 : \ J_4: \ J_5 : \ J_6 \right ] \in \mathbb{P}(2,3,4,5,6) \ \Big\vert \ ( J_4 , \ J_5, \ J_6 ) \neq (0,0,0) \Big\}, \end{equation} and we also set $\overline{\mathfrak{M}}=\mathbb{P}(2,3,4,5,6)$. \par By construction, the points in projective space $\mathbb{P}^9$ arising as image of the map $\mathcal{F}': \mathbf{H}_2 \to \mathbb{P}^9$ in Equation~(\ref{eqn:map1a}), i.e., the points given by \begin{equation} \label{eqn:map1aB} \mathcal{F}': \ \ \mathbf{H}_2 \to \mathbb{P}^9, \quad \varpi \mapsto \Big[ \theta^2_1(\varpi) : \dots : \theta^2_{10}(\varpi) \Big], \end{equation} are invariant under the action of the sub-group $\Gamma(1+i)$ of level $(1+i)$; see Theorem~\ref{thm1}.
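Before turning to the action of the permutation group, we record a one-line consistency check between Lemma~\ref{lem:equiv} and Corollary~\ref{lem:Js}: by Equation~(\ref{eqn:R}) we have \begin{equation*} R^2 = 2^4 \, 3^{-4} \, J_4 , \qquad \text{so that} \qquad R=0 \quad \Longleftrightarrow \quad J_4=0 . \end{equation*} Hence the locus of six lines tangent to a common conic, i.e., case (1) in Definition~\ref{def:sstable}, is cut out equivalently by $R=0$ or by $J_4=0$.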
As explained, there is a natural action of the permutation group of six elements $\mathrm{S}_6$ on the variables $t_i$ with $1 \le i \le 10$ induced by permuting the six lines. This action coincides with the action of $\Gamma/\Gamma(1+i)$. \par Equation~(\ref{eq:DiscS}) provides a geometric characterization of the locus $\operatorname{Disc} (\mathcal{S})=0$ in $\mathfrak{M}(2)$. It turns out that the fifteen components of the vanishing locus are in one-to-one correspondence with permutations of the form $\sigma_\alpha=(ij)(kl)(mn)$ where $(ij)=i \leftrightarrow j$ is the permutation of the $i$-th and $j$-th line. We have the following: \begin{lemma} \label{lem:Svanish} The vanishing locus $\operatorname{Disc} (\mathcal{S}) =0$ is the union of the $\Gamma_{\mathcal{T}}(1+i)$-orbits of the fixed loci of $\varpi \mapsto S_j \cdot G_2 \mathcal{T} \cdot S_j^{-1} \cdot \varpi$ in $\mathfrak{M}(2)$ where $S_j \in \Gamma$ with $\det{(S_j)}=1$ and $1\le j \le 15$. The fixed loci, the elements $S_j$, and corresponding permutations $\sigma_j$ are given in the following table: \begin{equation} \label{eqn:iso_S6} \scalemath{\MyScaleMedium}{ \begin{array}{r|r|c|rlrrlrrlr} j & S_j & \sigma_j & \multicolumn{9}{c}{\text{fixed locus}}\\ \hline 1 &G_5G_4G_5G_3G_5G_2G_5G_4 & (15)(26)(34) & t_1 & = & t_2,& t_5 &= -&t_9,& t_6 &= -&t_7 \\ 2 &G_5G_2G_5G_4 & (12)(35)(46) & t_1 & = & t_3,& t_5 &= -&t_{10}, & t_6 &= -&t_8 \\ 3 &G_5G_4G_3G_5G_2G_5G_4 & (13)(24)(56) & t_1 & = & t_4,& t_7 &= -&t_{10},& t_8 &= -&t_9\\ 4 &G_3G_5G_2G_5G_4 & (15)(23)(46) & t_1 & = & t_5,& t_2 &= -&t_9,& t_3 &= -&t_{10}\\ 5 &G_4G_2G_5G_4 & (14)(26)(35) & t_1 & = & t_6,& t_2 &= -&t_7,& t_3 &= -&t_8\\ 6 &G_5G_3G_5G_2G_5G_4 & (13)(26)(45) & t_1 & = & t_7,& t_2 &= -&t_6,& t_4 &= -&t_{10}\\ 7 &G_4G_3G_5G_2G_5G_4 & (16)(24)(35) & t_1 & = & t_8,& t_3 &= -&t_6,& t_4 &= -&t_9\\ 8 &G_4G_5G_4G_5G_4 & (15)(24)(36) & t_1 & = & t_9,& t_2 &= -&t_5,& t_4 &= -&t_8\\ 9 &G_4G_5G_4 & (13)(25)(46) & t_1 & = & t_{10},& t_3 &= -&t_5,& t_4 &= -&t_7\\ 10 &G_2G_5G_4 & (14)(23)(56) & t_2 & = & t_3,& t_7 &=&t_8,& t_9 &= &t_{10}\\ 11 &G_5G_4G_5G_4 & (12)(36)(45) & t_2 & = & t_4,& t_5&= &t_8,& t_6 &= &t_{10}\\ 12 &G_4 & (14)(25)(36) & t_2 & = & t_8,& t_3 &=&t_7,& t_4 &= &t_5\\ 13 &G_4G_5G_3G_5G_2G_5G_4 & (16)(23)(45) & t_2 & = & t_{10},& t_3 &=&t_9,& t_4 &= &t_6\\ 14 &G_5G_4 & (16)(25)(34) & t_3 & = & t_4,& t_5 &=&t_7,& t_6 &= &t_9\\ 15 &\mathbb{I}_4 & (12)(34)(56) & t_5 & = & t_6,& t_7 &=&t_9,& t_8 &= &t_{10} \end{array}} \end{equation} \end{lemma} \begin{proof} The relation between the components of the vanishing locus and the fixed loci of the transformations $\varpi \mapsto S_j \cdot G_2 \mathcal{T} \cdot S_j^{-1} \cdot \varpi$ with $1\le j \le 15$ follows from Equation~(\ref{eq:DiscS}) and Lemma~\ref{lem:Gtransfo}. The relation between permutations acting on $t_i$ with $1 \le i \le 10$ and the listed fixed loci is checked directly using Equations~(\ref{eqn:DOcoords}). \end{proof} \noindent We denote the six-line configuration discussed in Lemma~\ref{lem:Svanish} by (2b), adding to cases (0) through (5) in Definition~\ref{def:sstable}. Equation~(\ref{eqn:iso_S6}) provides the explicit form of the isomorphism $\Gamma/\Gamma(1+i) \cong \mathrm{S}_6$. We have the following: \begin{corollary} \label{cor:VanishingLoci} The following vanishing loci are fixed loci of elements in $\Gamma_{\mathcal{T}} \setminus \Gamma^+_{\mathcal{T}}$: \begin{enumerate} \item[(1)] The locus $J_4=0$ is the fixed locus of $\mathcal{T} \in \Gamma_{\mathcal{T}}$.
\item[(2)] The locus $\operatorname{Disc}(\mathcal{A}) =0$ is the union of the fixed loci of $M_i\cdot G_1\cdot M_i^{-1} \in \Gamma_{\mathcal{T}}$ with $M_i \in \Gamma^+_{\mathcal{T}}$ and $1\le i \le 10$ given in Lemma~\ref{lem:DAvanish}. \item[(2b)] The locus $\operatorname{Disc}(\mathcal{S}) =0$ is the union of the fixed loci of $S_j\cdot G_2\mathcal{T}\cdot S_j^{-1} \in \Gamma_{\mathcal{T}}$ with $S_j \in \Gamma^+_{\mathcal{T}}$ and $1\le j \le 15$ given in Lemma~\ref{lem:Svanish}. \end{enumerate} \end{corollary} \begin{proof} Parts (1), (2) follow from Equations~(\ref{eqn:R}) and (\ref{eqn:decomp}) when using Lemma~\ref{lem:J4vanish} and Lemma~\ref{lem:DAvanish}. Part (2b) follows from Lemma~\ref{lem:Svanish}. \end{proof} \par If we plug the theta functions $t_i=\theta^2_i(\varpi)$ for $1 \le i \le 10$ of Theorem~\ref{thm1} into the expressions for $J_k$ in Equation~(\ref{eqn:Jinvariants}) for $k=2, 3, 4, 5, 6$, we obtain modular forms which we will denote by $J_2(\varpi), J_3(\varpi), J_4(\varpi), J_5(\varpi), J_6(\varpi)$. We have the following: \begin{lemma} The functions $J_k(\varpi)$ are modular forms of weight $2k$ relative to $\Gamma_{\mathcal{T}}$ with character $\chi_{2k}(g)=\det{(G)}^k$ for all $g \in\Gamma_{\mathcal{T}}$. \end{lemma} \begin{proof} It follows from Lemma~\ref{lem:Jinvariance} that $J_k(\varpi)$ are homogeneous polynomials of degree $k$ in $t_i=\theta^2_i(\varpi)$ for $1 \le i \le 10$. Using Theorem~\ref{thm1}, we conclude that $J_k(\varpi)$ are modular forms of weight $2k$ relative to $ \Gamma_{\mathcal{T}}(1+i)$ with character $\chi_{2k}(g)=\det{(G)}^k$. The isomorphism $\Gamma/\Gamma(1+i) \cong \operatorname{Sp}_4(\mathbb{Z}/2\mathbb{Z})$ is induced by the projection \begin{equation} \mathbb{Z}[i] \to \mathbb{Z}[i] / (1+i) \mathbb{Z}[i] \cong \mathbb{Z}/2\mathbb{Z} . \end{equation} On the other hand, there is a group isomorphism $\operatorname{Sp}_4(\mathbb{Z}/2\mathbb{Z}) \cong \mathrm{S}_6$. We showed in Lemma~\ref{lem:Gtransfo} that this action coincides with the natural action of the permutation group of six elements $\mathrm{S}_6$ on the variables $t_i$ with $1 \le i \le 10$ due to Equations~(\ref{eqn:DOcoords}). Since the invariants $J_k$ are polynomials in the power sums $s_k$ with $2 \le k \le 6$ given in Equation~(\ref{eqn:Jinvariants}), and hence invariant under the action of $\mathrm{S}_6$, the functions $J_k(\varpi)$ are modular forms of weight $2k$ relative to the full modular group $\Gamma_{\mathcal{T}}$. \end{proof} We have the following: \begin{theorem} \label{thm4} The graded ring of modular forms relative to $\Gamma^+_{\mathcal{T}}$ of even characteristic, i.e., with character $\chi_{2k}(g)=\det{(G)}^k$, is generated over $\mathbb{C}$ by the five algebraically independent modular forms $J_k(\varpi)$ of weight $2k$ with $k=2, \dots, 6$. \end{theorem} \begin{proof} It follows from~\cite{MR2682724}*{Thm.~\!1} that the ring of modular forms relative to $\Gamma_{\mathcal{T}}$ is generated by five modular forms of weights $4, 6, 8, 10, 12$. For arguments $\varpi=\tau$ invariant under $\mathcal{T}$, the functions $J_k(\varpi)$ for $k=2, 3, 4, 5, 6$ descend to Siegel modular forms of even weight.
In fact, we will check in Equation~(\ref{modd_restriced}) that under the restriction from $\mathbf{H}_2/\Gamma_{\mathcal{T}}$ to $\mathbb{H}_2/\operatorname{Sp}_4(\mathbb{Z})$, we obtain \begin{equation} \label{modd_restricedb} \scalemath{\MyScaleMedium}{ \begin{array}{c} \Big[ J_2(\varpi) : J_3(\varpi) : J_4(\varpi): J_5(\varpi) : J_6(\varpi) \Big] = \Big[ \psi_4(\tau) : \psi_6(\tau) : 0: 2^{12} 3^5 \chi_{10}(\tau) : 2^{12}3^6 \chi_{12}(\tau)\Big]. \end{array}} \end{equation} Here, $\psi_4$, $\psi_6$, $\chi_{10}$, $\chi_{12}$ are Siegel modular forms of respective weights $4, 6, 10, 12$ that, as Igusa proved in~\cites{MR0229643, MR527830}, generate the ring of Siegel modular forms of even weight. Thus, the forms $J_k(\varpi)$ for $k=2, 3, 5, 6$ must be algebraically independent. After looking at some Fourier coefficients to ensure that $J_4$ is not identically zero, we adjoin $J_4$ as a fifth form to the ring generated by $J_k$ for $k=2, 3, 5, 6$. The fundamental theorem of symmetric polynomials, combined with Newton's identities, establishes the power sums as an algebraically independent generating set for the ring of symmetric polynomials over $\mathbb{Q}$. Therefore, $J_k$ for $k=2, 3, 4, 5, 6$ are algebraically independent. Moreover, we check that $J_4(\varpi)=(\Theta(\varpi)/15)^2$ using Theorem~\ref{thm1}. $\Theta$ does \emph{not} have the character $\chi(g)=\det(G)$ for all $g \in\Gamma_{\mathcal{T}}$. It follows that $J_4$ cannot be decomposed further as a modular form relative to $\Gamma^+_{\mathcal{T}}$ with even characteristic. \end{proof} We define the holomorphic map \begin{equation} \label{eqn:map3a} \mathcal{H}': \mathbf{H}_2 \to \mathbb{P}(2,3,4,5,6), \quad \varpi \mapsto \Big[ J_2(\varpi) : \ J_3(\varpi) : \ J_4(\varpi): \ J_5(\varpi) : \ J_6(\varpi) \Big ]. \end{equation} We have the following: \begin{corollary} The map $\mathcal{H}'$ descends to a holomorphic map \begin{equation} \label{eqn:map3b} \begin{split} \mathcal{H}: \ \ & \mathbf{H}_2/\Gamma_{\mathcal{T}} \ \longrightarrow \ \mathfrak{M} \subset \mathbb{P}(2,3,4,5,6),\\ & \varpi \ \mapsto \ \Big[ J_2(\varpi) : \ J_3(\varpi) : \ J_4(\varpi): \ J_5(\varpi) : \ J_6(\varpi) \Big ]. \end{split} \end{equation} The map $\mathcal{H}$ in Equation~(\ref{eqn:map3b}) extends to an isomorphism between the Satake compactification of $\mathbf{H}_2/\Gamma_{\mathcal{T}}$ and $\overline{\mathfrak{M}}$ given by \begin{equation*} \begin{split} \mathcal{H}: \ \ & \overline{\mathbf{H}_2/\Gamma_{\mathcal{T}}}\ \overset{\cong}{\longrightarrow}\ \overline{\mathfrak{M}} \subset \mathbb{P}(2,3,4,5,6). \end{split} \end{equation*} \end{corollary} \begin{proof} Using Theorem~\ref{thm4} and Corollary~\ref{lem:Js} it follows that $\mathcal{H}'$ descends to the holomorphic map $\mathcal{H}$ as stated. By construction the Satake compactification of $\mathbf{H}_2/\Gamma_{\mathcal{T}}$ is given by \begin{equation} \operatorname{Proj} \mathbb{C}\Big[ J_2, J_3, J_4, J_5, J_6 \Big]\cong \mathbb{P}(2,3,4,5,6). \end{equation} \end{proof} \section{K3 surfaces associated with double covers of six lines} \label{sec:6lines} In this section, we discuss several Jacobian elliptic fibrations on the K3 surface associated with configurations of six lines $\ell_i$ in $\mathbb{P}^2$ with $i=1,\dots,6$, no three of which are concurrent. The double cover branched along the six lines $\ell_i$ with $i=1,\dots,6$, given in terms of the variables $z_1, z_2, z_3$ of $\mathbb{P}^2$, is the hypersurface \begin{equation} \label{six-lines-cover} z_4^2 = \ell_1\ell_2\ell_3\ell_4\ell_5\ell_6 \end{equation} with $[z_1:z_2:z_3:z_4]\in \mathbb{P}(1,1,1,3)$.
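For instance, for the six lines in the normal form of Equation~(\ref{lines}) used in Section~\ref{ssec:natural_fibration} below, the double cover is cut out by the explicit degree-six equation \begin{equation*} z_4^2 = z_1 \, z_2 \, z_3 \, \big(z_1+z_2+z_3\big)\big(z_1+ a\, z_2 + b\, z_3\big)\big(z_1+ c\, z_2 + d\, z_3\big) \end{equation*} in $\mathbb{P}(1,1,1,3)$, branched along the union of the six lines $\ell_1, \dots, \ell_6$.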
It is well known that the minimal resolution is a K3 surface of Picard rank $16$ which we will always denote by $\mathcal{Y}$. In \cite{MR2254405} Kloosterman classified all Jacobian elliptic fibrations on $\mathcal{Y}$, i.e., elliptic fibrations $\pi^{\mathcal{Y}}: \mathcal{Y} \to \mathbb{P}^1$ together with a section $\sigma: \mathbb{P}^1 \to \mathcal{Y}$ such that $\pi^{\mathcal{Y}}\circ \sigma = \mathrm{id}_{\mathbb{P}^1}$. We will construct explicit Weierstrass models for three of these Jacobian elliptic fibrations which we call the \emph{natural} fibration, the \emph{base-fiber-dual} of the natural fibration, and the \emph{alternate} fibration. \subsection{The natural fibration} \label{ssec:natural_fibration} Configurations of six lines no three of which are concurrent have four homogeneous moduli which we will denote by $a, b, c, d$. These moduli can be constructed as follows: the six lines $\ell_i$ for $1\le i \le 6$ can always be brought into the form \begin{equation} \label{lines} \ell_1=z_1, \ell_2 = z_2, \ell_3= z_3, \ell_4=z_1 + z_2 + z_3, \ell_5=z_1+ a z_2 + b z_3, \ell_6=z_1+ c z_2 + d z_3. \end{equation} The matrix $\mathbf{A}$ defined in Section~\ref{sec:invariants} for this six-line configuration is \begin{equation} \mathbf{A} = \left( \begin{array}{cccccc} 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & a & c \\ 0 & 0 & 1 & 1 & b & d \end{array}\right) \;, \end{equation} with Dolgachev-Ortland coordinates given by \begin{equation} \label{compare1} \scalemath{\MyScaleMedium}{ \begin{array}{c} t_1=a(d-1), \ t_2=b-a, \ t_3=d-c, \ t_4=c(b-1), \ t_5=b(c-1), \\[4pt] t_6=d(a-1), \ t_7=d-b, \ t_8=c-a, \ t_9=ad-bc, \ t_{10}=ad-bc-a+b+c-d, \\[4pt] R = -abc+abd+acd-bcd-ad+bc. \end{array}} \end{equation} The six lines intersect as follows: \begin{equation} \label{tab:6lines} \scalemath{\MyScaleTiny}{ \begin{array}{c|c|c|c|c|c|c} & \ell_1 & \ell_2 & \ell_3 & \ell_4 & \ell_5 & \ell_6 \\ \hline &&&&&& \\[-0.9em] \ell_1 & - & [0:0:1] & [0:1:0] & [0:1:-1] & [0:b:-a] & [0:d:-c] \\ &&&&&& \\[-0.9em] \ell_2 & [0:0:1] & - & [1:0:0] & [1:0:-1] & [-b:0:1] & [-d:0:1] \\ &&&&&& \\[-0.9em] \ell_3 & [0:1:0] & [1:0:0] & - & [1:-1:0] & [-a:1:0] & [-c:1:0] \\ &&&&&& \\[-0.9em] \ell_4 & [0:1:-1] & [1:0:-1] & [1:-1:0] & - & [a-b:b-1:1-a] & [c-d:d-1:1-c] \\ &&&&&& \\[-0.9em] \ell_5 & [0:b:-a] & [-b:0:1] & [-a:1:0] & [a-b:b-1:1-a] & - & [ad-bc:b-d:c-a] \\ &&&&&& \\[-0.9em] \ell_6 & [0:d:-c] & [-d:0:1] & [-c:1:0] & [c-d:d-1:1-c] & [ad-bc:b-d:c-a] & - \\ &&&&&& \\[-0.9em] \end{array}} \end{equation} Setting \begin{equation} z_1 = \frac{u(u+1)(au+b)(cu+d)}{X-u(au+b)(cu+d)}, \quad z_2=u, \quad z_3=1 \end{equation} in Equation~(\ref{six-lines-cover}) transforms it into the Weierstrass equation \begin{equation} \label{eqns_Kummer} Y^2 = X \Big(X - 2 \,u \big(\mu(u)-\nu(u)\big) \Big) \Big(X - 2 \,u \big(\mu(u)+\nu(u)\big) \Big), \end{equation} with discriminant \begin{equation} \Delta_{\mathcal{Y}}(u) =2^8 u^6 \nu(u)^2 \, \Big(\mu(u)^2-\nu(u)^2\Big)^2 \end{equation} and \begin{equation} \label{coeffs} \begin{split} 2(\mu-\nu) & = \Big(au +b\Big)\, \Big((c-1) \, u+ (d-1) \Big) ,\\ 2(\mu+\nu) & = \Big(c u +d\Big)\, \Big((a-1) \, u + (b-1) \Big).
\end{split} \end{equation} In this way, the K3 surfaces associated with the double cover of $\mathbb{P}^2$ branched along the union of six lines, no three of which are concurrent, are equipped with an elliptic fibration $\pi^{\mathcal{Y}}_{\mathrm{nat}}: \mathcal{Y} \to \mathbb{P}^1$ with section $\sigma$ given by the point at infinity and with a fiber $\mathcal{Y}_{u}$ given by the minimal Weierstrass equation~(\ref{eqns_Kummer}). We call this fibration the \emph{natural fibration}. Three two-torsion sections are obvious from the explicit Weierstrass points in Equation~(\ref{eqns_Kummer}). The following is immediate: \begin{lemma} \label{lem:EllLeft} Equation~(\ref{eqns_Kummer}) defines a Jacobian elliptic fibration $\pi^{\mathcal{Y}}_{\mathrm{nat}}: \mathcal{Y} \to \mathbb{P}^1$ with six singular fibers of type $I_2$, two singular fibers of type $I_0^*$, and the Mordell-Weil group of sections $\operatorname{MW}(\pi^{\mathcal{Y}}_{\mathrm{nat}})=(\mathbb{Z}/2\mathbb{Z})^2$. \end{lemma} One checks that the Weierstrass model in Equation~(\ref{eqns_Kummer}) is minimal if and only if the configuration of six lines falls into cases (0) through (5) in Definition~\ref{def:sstable}, but not into cases (6a) or (6b). In the latter cases, the singularities of Equation~(\ref{eqns_Kummer}) are not canonical singularities. The following proposition was given in \cite{MR1877757}: \begin{proposition}[\cite{MR1877757}] \label{lem_6lines} For generic parameters $a, b, c, d$, the K3 surface $\mathcal{Y}$ has the transcendental lattice $T_{\mathcal{Y}} \cong H(2) \oplus H(2) \oplus \langle -2 \rangle^{\oplus 2}$. \end{proposition} \begin{remark} \label{rem:GKZ} We choose $z_1=1$ as an affine chart for Equation~(\ref{six-lines-cover}) with lines given by Equations~(\ref{lines}), and the holomorphic two-form $dz_2 \wedge dz_3/z_4$. After relabelling variables, the period of the holomorphic two-form for the family of K3 surfaces $\mathcal{Y}(a,b,c,d)$ over a transcendental two-cycle $\Sigma \in T_{\mathcal{Y}}$ is given by \begin{equation} \label{GKZperiod} \iint_\Sigma \; \frac{dz_1}{\sqrt{z_1}} \; \frac{dz_2}{\sqrt{z_2}} \; \frac{1}{\sqrt{(1+z_1+z_2)(1+a z_1+b z_2)(1+c z_1+d z_2)}} \;. \end{equation} We set \begin{equation} \begin{split} \alpha&=(\alpha_1,\alpha_2,\alpha_3)=\left(-\frac{1}{2}, -\frac{1}{2},-\frac{1}{2}\right), \quad \beta=(\beta_1,\beta_2)=\left(-\frac{1}{2},-\frac{1}{2}\right) \end{split} \end{equation} and \begin{equation} P_1 = 1+z_1+z_2, \quad P_2=1+a z_1+b z_2, \quad P_3=1+c z_1+d z_2. \end{equation} We observe that $\forall \, i: \alpha_i \not \in \mathbb{Z}$, $\forall \, j: \beta_j \not \in \mathbb{Z}$, and $\sum_{i} \alpha_i + \sum_{j} \beta_j\not \in \mathbb{Z}$, and write the periods in the form \begin{equation} F_{\Sigma}\Big(\alpha,\beta,\{P_i\}\Big|\, a,b,c,d\Big)=\iint_\Sigma \; \frac{dz_1}{z_1^{-\beta_1}} \; \frac{dz_2}{z_2^{-\beta_2}} \; \prod_{i=1}^{3} \, P_i^{\alpha_i}\;. \end{equation} This identifies the periods as so-called $\mathcal{A}$-hypergeometric functions that satisfy a system of linear differential equations known as a non-resonant GKZ system \cite{MR1080980}. The particular GKZ system satisfied by the periods in Equation~(\ref{GKZperiod}) is a system of differential equations of rank six in four variables known as the Aomoto-Gel'fand hypergeometric system of type $(3, 6)$. It was studied in great detail in \cites{MR1204828,MR1136204, MR1233442, MR1267602, MR2683512}.
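Note that the holonomic rank six is consistent with Proposition~\ref{lem_6lines}: the periods of the holomorphic two-form over a basis of transcendental cycles furnish \begin{equation*} \operatorname{rank} T_{\mathcal{Y}} \, = \, \operatorname{rank} \Big( H(2) \oplus H(2) \oplus \langle -2 \rangle^{\oplus 2} \Big) \, = \, 6 \end{equation*} linearly independent solutions, and the four variables match the four moduli $a, b, c, d$.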
\par A basis of the solutions $F_1, \dots, F_6$ defines a map from the Grassmannian $\operatorname{Gr}(3,6;\mathbb{C})$ into the projective space $\mathbb{P}^5$, more precisely into the domain in $\mathbb{P}^5$ cut out by the Hodge-Riemann relations. The period map is equivariant with respect to the action of $\Gamma_{\mathcal{T}}(1+i)$ on the domain and the monodromy group on the image. In fact, the map $[F_1 : \dots : F_6]\in \mathbb{P}^5$ is a multi-valued vector function from the moduli space $\mathfrak{M}(2)$ to the period domain, acted upon by a monodromy group obtained by moving the branching locus of the six lines around. The map $\mathcal{F}$ in Equation~(\ref{eqn:map1b}) is the inverse of this period mapping. \par We obtain a new map by quotienting further by the permutation group $\mathrm{S}_6$. The monodromy group is then generated by reflections and transformations caused by permuting the lines in a six-line configuration. The resulting map is the period map for a family of K3 surfaces $\mathcal{X}$ closely related to the K3 surfaces $\mathcal{Y}$ discussed in Section~\ref{sec:VGSP}. \end{remark} We have the following: \begin{corollary} Switching the roles of base and fiber in Equation~(\ref{eqns_Kummer}) defines a second Jacobian elliptic fibration $\check{\pi}^{\mathcal{Y}}_{\mathrm{nat}}: \mathcal{Y} \to \mathbb{P}^1$ with 12 singular fibers of type $I_1$, a fiber of type $I_4$, a fiber of type $I_8$, and $\operatorname{MW}(\check{\pi}^{\mathcal{Y}}_{\mathrm{nat}})_{\mathrm{tor}}=\{ \mathbb{I} \}$ and $\operatorname{rk} \operatorname{MW}(\check{\pi}^{\mathcal{Y}}_{\mathrm{nat}})=4$ and $\det\operatorname{discr} \operatorname{MW}(\check{\pi}^{\mathcal{Y}}_{\mathrm{nat}})=1/2$. \end{corollary} \noindent The elliptic fibration appears in the list of all Jacobian elliptic fibrations in \cite{MR2254405}. We call this fibration the \emph{base-fiber-dual} of the natural fibration. \begin{proof} In~\cite{MalmendierClingher:2018} it was proved that a Weierstrass model of the form given by Equation~(\ref{eqns_Kummer}) is equivalent to the genus-one fibration \begin{equation} \eta^2 = \nu(u)^2 \xi^4 + 2 \, u \, \mu(u) \, \xi^2 + u^2, \end{equation} with one apparent rational point. Since $\mu, \nu$ are given as polynomials in $u$ in Equation~(\ref{coeffs}), the equation can be rewritten as \begin{equation} \eta^2 = A(\xi) \, u^4 + B(\xi) \, u^3 + C(\xi) \, u^2 + D(\xi) \, u + E(\xi)^2, \end{equation} where $A, B, C, D, E$ are polynomials in $\xi$. Because there is always the rational point $(u,\eta)=(0,E(\xi))$, it can be brought into the Weierstrass form \begin{equation} \label{fib1} y^2 = 4 \, x^3 - g_2(\xi) \, x - g_3(\xi), \end{equation} with \begin{equation} \begin{split} g_2 & = \frac{16}{3} \, C(\xi)^2+64 \, A(\xi) \, E(\xi)^2-16\, B(\xi) \, D(\xi),\\ g_3 & = -\frac{64}{27} \, C(\xi)^3+\frac{256}{3} A(\xi) \, C(\xi)\, E(\xi)^2+\frac{32}{3} B(\xi) \, C(\xi) \, D(\xi)\\ & \qquad-32\, A(\xi) \, D(\xi)^2-32\, B(\xi)^2 \, E(\xi)^2. \end{split} \end{equation} This is a Weierstrass model with 12 singular fibers of type $I_1$, a fiber of type $I_4$, a fiber of type $I_8$, and $\operatorname{MW}(\check{\pi}^{\mathcal{Y}}_{\mathrm{nat}})_{\mathrm{tor}}=\{ \mathbb{I} \}$. In Proposition~\ref{lem_6lines} we recalled that the transcendental lattice of the K3 surfaces $\mathcal{Y}$ has rank six and is given by \begin{equation} \begin{split} T_{\mathcal{Y}}& \cong H(2) \oplus H(2) \oplus \langle -2 \rangle^{\oplus 2}.
\end{split} \end{equation} Therefore, the determinant of the discriminant group for the rank-six lattice $T_{\mathcal{Y}}$ is $\det(\operatorname{discr}T_{\mathcal{Y}})=2^6$. The root lattice associated with the singular fibers in Equation~(\ref{fib1}) has rank ten and determinant $2^5$. The claim follows. \end{proof} \subsection{The alternate fibration} In the list of all possible fibrations on the K3 surface $\mathcal{Y}$ associated with the double cover branched along the union of six lines, given in \cite{MR2254405}, we find the following fibration, which we call the \emph{alternate} fibration: \begin{corollary}[\cite{MR2254405}] \label{cor:Kum_alternate} On the K3 surface $\mathcal{Y}$ there is a Jacobian elliptic fibration $\pi^{\mathcal{Y}}_{\mathrm{alt}}: \mathcal{Y} \to \mathbb{P}^1$ with six singular fibers of type $I_2$, two singular fibers of type $I_1$, one singular fiber of type $I_4^*$, and the Mordell-Weil group of sections $\operatorname{MW}(\pi^{\mathcal{Y}}_{\mathrm{alt}})=\mathbb{Z}/2\mathbb{Z}$. \end{corollary} The alternate fibration on the K3 surface $\mathcal{Y}$ can be obtained explicitly from Equation~(\ref{six-lines-cover}). In fact, we obtained the Weierstrass model for this fibration using a 2-neighbor-step procedure applied twice, a method explained in \cite{MR3263663}, starting with the natural fibration. The details of this computation will be published in a forthcoming article \cite{ClingherDoran:2019}. We have the following: \begin{theorem} \label{thm:6lines_alt} The K3 surface $\mathcal{Y}$ associated with the double cover branched along six lines admits the Jacobian elliptic fibration $\pi^{\mathcal{Y}}_{\mathrm{alt}}: \mathcal{Y} \to \mathbb{P}^1$ with the Weierstrass equation \begin{equation} \label{eqns_Kummer_alt} Y^2 = X \Big(X^2 - 2 \, \mathcal{B}(t) X + \mathcal{B}(t)^2 - 4 \, \mathcal{A}(t) \Big), \end{equation} with discriminant \begin{equation} \label{eqns_Kummer_alt_discr} \Delta_{\mathcal{Y}}(t) = 16 \, \mathcal{A}(t) \, \Big(\mathcal{B}(t)^2 - 4 \, \mathcal{A}(t)\Big)^2 , \end{equation} and \begin{equation} \begin{split} \mathcal{B}(t)=t^3- 3 \, J_2 \, t - 2 \, J_3, \ & \qquad \mathcal{A}(t) = J_4 \, t^2 - J_5 \, t + J_6, \end{split} \end{equation} where $J_k$ for $k=2, \dots, 6$ are the invariants of the configuration of six lines defined in Equations~(\ref{Jinvariants}). \end{theorem} \begin{proof} The invariants $J_2, J_3, J_4, J_5, J_6$ defined by Equations~(\ref{Jinvariants}) are determined by the symmetric polynomials in terms of the five degree-one invariants $t_1, t_5, t_6, t_7, t_8$, which in turn are given in terms of the affine moduli $a, b, c, d$ using Equations~(\ref{compare1}). These affine moduli $a, b, c, d$ were defined by arranging the six lines to be in the form of Equation~(\ref{lines}). On the other hand, the 2-neighbor-step procedure, when applied twice, gives the Weierstrass model in Equation~(\ref{eqns_Kummer_alt}) with \begin{equation} \mathcal{B}(t)=t^3-3\, J'_2 \, t - 2\, J'_3, \qquad \mathcal{A}(t) = J'_4 \, t^2 - J'_5 \, t + J'_6. \end{equation} We computed the coefficients $J'_i$ following the same procedure as the one outlined in \cite{MR3263663} for a general Kummer surface using a computer algebra system. At the end of the computation, the coefficients $J'_i$ for $2 \le i \le 6$ are obtained directly in terms of the affine moduli $a, b, c, d$. The resulting expressions for the coefficients are then given in Equations~(\ref{QP_inv}). One easily checks that $J_i=J'_i$ for $2 \le i \le 6$.
\end{proof} \section{The Van~Geemen-Sarti partner} \label{sec:VGSP} To extend the notion of geometric two-isogeny to Picard rank 16, we replace the Kummer surfaces by the K3 surface $\mathcal{Y}$ associated with the double cover branched along the union of six lines discussed in Section~\ref{sec:6lines}. The Shioda-Inose surface is now replaced by a K3 surface $\mathcal{X}$ introduced by Clingher and Doran in \cite{MR2824841}. The K3 surface $\mathcal{X}$ occurs as the general member of a six-parameter family of K3 surfaces polarized by the lattice $H \oplus E_7(-1) \oplus E_7(-1)$. Each K3 surface in the family carries a special Nikulin involution, called a \emph{Van~\!Geemen-Sarti involution}, such that quotienting by this involution and blowing up fixed points recovers a double-sextic surface. \subsection{A Six-Parameter Family of K3 Surfaces} Let $(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta) \in \mathbb{C}^6 $. We consider the projective quartic surface $ {\rm Q}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta) \subset \mathbb{P}^3(\mathbf{X},\mathbf{Y},\mathbf{Z},\mathbf{W}) $ defined by the homogeneous equation: \begin{equation} \label{mainquartic} \scalemath{\MyScaleMedium}{ \begin{array}{c} \mathbf{Y}^2 \mathbf{Z} \mathbf{W}-4 \mathbf{X}^3 \mathbf{Z}+3 \alpha \mathbf{X} \mathbf{Z} \mathbf{W}^2+\beta \mathbf{Z} \mathbf{W}^3+\gamma \mathbf{X} \mathbf{Z}^2 \mathbf{W}- \frac{1}{2} \left (\delta \mathbf{Z}^2 \mathbf{W}^2+ \zeta \mathbf{W}^4 \right )+ \varepsilon \mathbf{X} \mathbf{W}^3 = 0. \end{array}} \end{equation} The family in Equation~(\ref{mainquartic}) was first introduced by Clingher and Doran in \cite{MR2824841} as a generalization of the Inose quartic introduced in \cite{MR578868}. We denote by $\mathcal{X}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$ the smooth complex surface obtained as the minimal resolution of ${\rm Q}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$. We have the following: \begin{theorem} \label{thm:polarization} Assume that $(\gamma, \delta) \neq (0,0)$ and $ (\varepsilon, \zeta) \neq (0,0)$. Then, the surface $\mathcal{X}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$ obtained as the minimal resolution of ${\rm Q}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$ is a K3 surface endowed with a canonical $H \oplus E_7(-1) \oplus E_7(-1) $ lattice polarization. \end{theorem} \begin{proof} The conditions imposed on the pairs $(\gamma, \delta)$ and $(\varepsilon, \zeta)$ ensure that the singularities of $ {\rm Q}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$ are rational double points. This fact, in connection with the degree of Equation~(\ref{mainquartic}) being four, guarantees that the minimal resolution $\mathcal{X}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$ is a K3 surface. \par Note that the quartic ${\rm Q}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$ has two special singularities at the following points: $$ {\rm P}_1 = [0,1,0,0], \ \ \ \ {\rm P}_2 = [0,0,1,0]. $$ One verifies that the singularity at ${\rm P}_1$ is a rational double point of type ${\rm A}_9$ if $ \varepsilon \neq 0$, and of type ${\rm A}_{11}$ if $ \varepsilon = 0$. The singularity at ${\rm P}_2$ is of type ${\rm A}_5$ if $ \gamma \neq 0$, and of type ${\rm E}_6$ if $ \gamma = 0$. For a generic sextuple $ (\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$, the points ${\rm P}_1$ and ${\rm P}_2$ are the only singularities of Equation~(\ref{mainquartic}).
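One way to see the claimed singularity types is a short computation in affine charts: in the chart $\mathbf{Y}=1$ the lowest-order part of Equation~(\ref{mainquartic}) at ${\rm P}_1$ is the rank-two quadratic form \begin{equation*} \mathbf{Z} \mathbf{W} , \end{equation*} so that ${\rm P}_1$ is a rational double point of type ${\rm A}_n$, with $n$ determined by the higher-order terms (and jumping from $9$ to $11$ as $\varepsilon \to 0$); in the chart $\mathbf{Z}=1$ the quadratic part at ${\rm P}_2$ equals \begin{equation*} \gamma \, \mathbf{X} \mathbf{W} - \tfrac{1}{2} \, \delta \, \mathbf{W}^2 , \end{equation*} whose rank drops from two to one exactly when $\gamma=0$, in accordance with the transition from type ${\rm A}_5$ to type ${\rm E}_6$.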
\par As a first step in uncovering the claimed lattice polarization, we introduce the following three special lines, denoted $ {\rm L}_1 $, $ {\rm L}_2 $, ${\rm L}_3$: $$ \mathbf{X}=\mathbf{W}=0, \ \ \ \mathbf{Z}=\mathbf{W}=0, \ \ 2\varepsilon \mathbf{X}-\zeta \mathbf{W} = \mathbf{Z} = 0 \ . $$ Note that $ {\rm L}_1 $, $ {\rm L}_2 $, ${\rm L}_3$ lie on the quartic in Equation~(\ref{mainquartic}). \par Assume the case $ \gamma \varepsilon \neq 0 $. Then $ {\rm L}_1 $, $ {\rm L}_2 $, ${\rm L}_3$ are distinct and concurrent, meeting at ${\rm P}_1$. When taking the minimal resolution $\mathcal{X}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$, one obtains a configuration of smooth rational curves intersecting according to the dual diagram below. \begin{equation} \label{diaggl7} \def\scriptstyle{\scriptstyle} \def\scriptstyle{\scriptstyle} \xymatrix @-0.9pc { & & \stackrel{{\rm L}_3}{\bullet} & & & & & & & & & & & & \\ & & & & & & & & & & & & & \\ & \stackrel{a_1}{\bullet} \ar @{-} [r] \ar @{-}[uur] & \stackrel{a_2}{\bullet} \ar @{-} [r] & \stackrel{a_3}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{a_4}{\bullet} \ar @{-} [r] & \stackrel{a_5}{\bullet} \ar @{-} [r] & \stackrel{a_6}{\bullet} \ar @{-} [r] & \stackrel{a_7}{\bullet} \ar @{-} [r] & \stackrel{a_8}{\bullet} \ar @{-} [r] & \stackrel{a_9}{\bullet} \ar @{-} [r] & \stackrel{{\rm L}_1}{\bullet} \ar @{-} [r] & \stackrel{b_2}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{b_3}{\bullet} & \stackrel{b_4}{\bullet} \ar @{-} [l] \ar @{-}[ddl] & \\ & & & \stackrel{{\rm L}_2}{\bullet} & & & & & & & & \stackrel{b_1}{\bullet} & & \\ & & & & & & & & & & & & \stackrel{b_5}{\bullet} & & \\ } \end{equation} The two sets $ a_1, a_2, \dots, a_9$ and $b_1, b_2, \dots, b_5$ denote the curves appearing from resolving the rational double point singularities at ${\rm P}_1$ and ${\rm P}_2$, respectively. In the context of diagram (\ref{diaggl7}), the lattice polarization $H \oplus E_7(-1) \oplus E_7(-1) $ is generated by: \begin{align*} {\rm H}_{ \ } \ =& \ \langle a_7, \ {\rm L}_3 + 2 a_1 + 3 a_2 + 4 a_3 + 2 {\rm L}_2 + 3 a_4 + 2 a_5 + a_6 \rangle \\ {\rm E}_7 \ =& \ \langle \ {\rm L}_3, \ a_1, \ a_2, \ a_3, \ {\rm L}_2, \ a_4, \ a_5 \ \rangle \\ {\rm E}_7 \ =& \ \langle \ b_5, \ b_4, \ b_3, \ b_2, \ b_1, \ {\rm L}_1, \ a_9 \ \rangle \ . 
\end{align*} In the case $\gamma=0$, $\varepsilon \neq 0$, the diagram (\ref{diaggl7}) changes to: \begin{equation} \label{diaggl8} \def\scriptstyle{\scriptstyle} \def\scriptstyle{\scriptstyle} \scalemath{\MyScaleMedium}{ \xymatrix @-0.9pc { & & \stackrel{{\rm L}_3}{\bullet} & & & & & & & & & & & & \\ & & & & & & & & & & & & & & & \\ & \stackrel{a_1}{\bullet} \ar @{-} [r] \ar @{-}[uur] & \stackrel{a_2}{\bullet} \ar @{-} [r] & \stackrel{a_3}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{a_4}{\bullet} \ar @{-} [r] & \stackrel{a_5}{\bullet} \ar @{-} [r] & \stackrel{a_6}{\bullet} \ar @{-} [r] & \stackrel{a_7}{\bullet} \ar @{-} [r] & \stackrel{a_8}{\bullet} \ar @{-} [r] & \stackrel{a_9}{\bullet} \ar @{-} [r] & \stackrel{{\rm L}_1}{\bullet} \ar @{-} [r] & \stackrel{b_1}{\bullet} \ar @{-} [r] & \stackrel{b_2}{\bullet} \ar @{-} [r] & \stackrel{b_3}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{b_5}{\bullet} & \stackrel{b_6}{\bullet} \ar @{-} [l] & \\ & & & \stackrel{{\rm L}_2}{\bullet} & & & & & & & & & & \stackrel{b_4}{\bullet} & & } } \end{equation} One obtains an enhanced lattice polarization of type $H \oplus E_8(-1) \oplus E_7(-1)$ with: \begin{align*} {\rm H}_{ \ } \ =& \ \langle a_7, \ {\rm L}_3 + 2 a_1 + 3 a_2 + 4 a_3 + 2 {\rm L}_2 + 3 a_4 + 2 a_5 + a_6 \rangle \\ {\rm E}_7 \ =& \ \langle \ {\rm L}_3, \ a_1, \ a_2, \ a_3, \ {\rm L}_2, \ a_4, \ a_5 \ \rangle \\ {\rm E}_8 \ =& \ \langle \ b_6, \ b_5, \ b_4, \ b_3, \ b_2, \ b_1, \ {\rm L}_1, \ a_9 \ \rangle \ . \end{align*} In the case $ \gamma \neq 0$, $ \varepsilon = 0$, the lines ${\rm L}_2$ and ${\rm L}_3$ coincide but the rational double point type at ${\rm P}_1$ changes from ${\rm A}_9$ to ${\rm A}_{11}$. One obtains rational curves on $\mathcal{X}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$ as follows: \begin{equation} \label{diaggl9} \def\scriptstyle{\scriptstyle} \def\scriptstyle{\scriptstyle} \scalemath{\MyScaleMedium}{ \xymatrix @-0.9pc { & \stackrel{a_1}{\bullet} \ar @{-} [r] & \stackrel{a_2}{\bullet} \ar @{-} [r] & \stackrel{a_3}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{a_4}{\bullet} \ar @{-} [r] & \stackrel{a_5}{\bullet} \ar @{-} [r] & \stackrel{a_6}{\bullet} \ar @{-} [r] & \stackrel{a_7}{\bullet} \ar @{-} [r] & \stackrel{a_8}{\bullet} \ar @{-} [r] & \stackrel{a_9}{\bullet} \ar @{-} [r] & \stackrel{a_{10}}{\bullet} \ar @{-} [r] & \stackrel{a_{11}}{\bullet} \ar @{-} [r] & \stackrel{{\rm L}_1}{\bullet} \ar @{-} [r] & \stackrel{b_2}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{b_3}{\bullet} & \stackrel{b_4}{\bullet} \ar @{-} [l] \ar @{-}[ddl] & \\ & & & \stackrel{{\rm L}_2}{\bullet} & & & & & & & & & & \stackrel{b_1}{\bullet} & & \\ & & & & & & & & & & & & & & \stackrel{b_5}{\bullet} & & \\ } } \end{equation} This provides a $H \oplus E_8(-1) \oplus E_7(-1)$ polarization as follows: \begin{align*} {\rm H}_{ \ } \ =& \ \langle a_9, \ 2 a_1 + 4 a_2 + 6 a_3 + 3 {\rm L}_2 + 5 a_4 + 4 a_5 + 3 a_6 + 2 a_7 + a_8 \rangle \\ {\rm E}_8 \ =& \ \langle \ a_1, \ a_2, \ a_3, \ {\rm L}_2, \ a_4, \ a_5, \ a_6, \ a_7 \ \rangle \\ {\rm E}_7 \ =& \ \langle \ b_5, \ b_4, \ b_3, \ b_2, \ b_1, \ {\rm L}_1, \ a_{11} \ \rangle \ . \end{align*} Finally, in the case $ \gamma = \varepsilon = 0$, the lines ${\rm L}_2$, ${\rm L}_3$ coincide and the rational double point types at ${\rm P}_1$ and ${\rm P}_2$ are ${\rm A}_{11}$ and ${\rm E}_6$, respectively. This determines the following diagram of smooth rational curves on the resolution $\mathcal{X}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$. 
\begin{equation} \label{diaggl89} \def\scriptstyle{\scriptstyle} \def\scriptstyle{\scriptstyle} \scalemath{\MyScaleMedium}{ \xymatrix @-0.9pc { \stackrel{a_1}{\bullet} \ar @{-} [r] & \stackrel{a_2}{\bullet} \ar @{-} [r] & \stackrel{a_3}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{a_4}{\bullet} \ar @{-} [r] & \stackrel{a_5}{\bullet} \ar @{-} [r] & \stackrel{a_6}{\bullet} \ar @{-} [r] & \stackrel{a_7}{\bullet} \ar @{-} [r] & \stackrel{a_8}{\bullet} \ar @{-} [r] & \stackrel{a_9}{\bullet} \ar @{-} [r] & \stackrel{a_{10}}{\bullet} \ar @{-} [r] & \stackrel{a_{11}}{\bullet} \ar @{-} [r] & \stackrel{{\rm L}_1}{\bullet} \ar @{-} [r] & \stackrel{b_1}{\bullet} \ar @{-} [r] & \stackrel{b_2}{\bullet} \ar @{-} [r] & \stackrel{b_3}{\bullet} \ar @{-} [d] \ar @{-} [r] & \stackrel{b_5}{\bullet} \ar @{-} [r] & \stackrel{b_6}{\bullet} \\ & & \stackrel{{\rm L}_2}{\bullet} & & & & & & & & & & & & \stackrel{b_4}{\bullet} & & \\ } } \end{equation} This determines a lattice polarization of type $H \oplus E_8(-1) \oplus E_8(-1)$ polarization as follows: \begin{align*} {\rm H}_{ \ } \ =& \ \langle a_9, \ 2 a_1 + 4 a_2 + 6 a_3 + 3 {\rm L}_2 + 5 a_4 + 4 a_5 + 3 a_6 + 2 a_7 + a_8 \rangle \\ {\rm E}_8 \ =& \ \langle \ a_1, \ a_2, \ a_3, \ {\rm L}_2, \ a_4, \ a_5, \ a_6, \ a_7 \ \rangle \\ {\rm E}_8 \ =& \ \langle \ b_6, \ b_5, \ b_4, \ b_3, \ b_2, \ b_1, \ {\rm L}_1, \ a_{11} \ \rangle \ . \end{align*} \end{proof} \begin{remark} The degree-four polarization determined on ${\rm X}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$ by its quartic definition is described explicitly in the context of diagrams $(\ref{diaggl7})$-$(\ref{diaggl89})$. For instance, assuming the case $ \gamma \varepsilon \neq 0 $, one can write a polarizing divisor as: \begin{equation} \label{linepolariz} \scalemath{\MyScaleMedium}{ \begin{array}{c} \mathcal{L} \ = \ {\rm L}_2 + \left ( a_1+2a_2+3a_3+3a_4+3a_5+\cdots 3a_9 \right ) + 3 {\rm L}_1 + \left ( 2b_1+4b_2+3b_3+2b_4+b_5 \right ) \end{array}} \end{equation} Similar formulas hold in the other three cases. \end{remark} \noindent Diagrams $(\ref{diaggl7})$, $(\ref{diaggl8})$ and $(\ref{diaggl9})$ from the above proof can be nicely augmented. Consider the following complete intersections: $$ 2\varepsilon \mathbf{X}-\zeta \mathbf{W} \ = \ \left (3 \alpha \varepsilon ^2 \zeta + 2 \beta \varepsilon ^3 - \zeta^3 \right ) \mathbf{W}^2 - \varepsilon^2 \left ( \delta \varepsilon-\gamma \zeta \right ) \mathbf{Z} \mathbf{W} + 2 \varepsilon^3 \mathbf{Y}^2 \ = \ 0 $$ $$ 2\gamma \mathbf{X}-\delta \mathbf{W} \ = \ \left (3 \alpha \gamma ^2 \delta + 2 \beta \gamma ^3 - \delta^3 \right ) \mathbf{Z} \mathbf{W}^2 - \gamma^2 \left ( \gamma \zeta - \delta \varepsilon \right ) \mathbf{W}^3 + 2 \gamma^3 \mathbf{Y}^2 \mathbf{Z} \ = \ 0 \ .$$ Assuming appropriate generic conditions, the above equations determine two projective curves ${\rm R}_1$, ${\rm R}_2$, of degrees two and three, respectively. The conic ${\rm R}_1$ is a (generically smooth) rational curve tangent to ${\rm L}_1$ at ${\rm P}_2$. The cubic ${\rm R}_2$ has a double point at ${\rm P}_2$, passes through ${\rm P}_1$ and is generically irreducible. When resolving the quartic surface $(\ref{mainquartic})$, these two curves lift to smooth rational curves on ${\rm X}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$, which by a slight abuse of notation we shall denote by the same symbol. One obtains the following dual diagrams of rational curves. 
\par Case $ \gamma \neq 0$, $ \varepsilon \neq 0$: \begin{equation} \label{exdiagg77} \def\scriptstyle{\scriptstyle} \def\scriptstyle{\scriptstyle} \scalemath{\MyScaleMedium}{ \xymatrix @-0.9pc { & & \stackrel{{\rm L}_3}{\bullet} \ar @{=}[rrrrrrrrrr]& & & & & & & & & & \stackrel{{\rm R}_1}{\bullet} & & \\ & & & & & & & & & & & & & \\ & \stackrel{a_1}{\bullet} \ar @{-} [r] \ar @{-}[ddr] \ar @{-}[uur] & \stackrel{a_2}{\bullet} \ar @{-} [r] & \stackrel{a_3}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{a_4}{\bullet} \ar @{-} [r] & \stackrel{a_5}{\bullet} \ar @{-} [r] & \stackrel{a_6}{\bullet} \ar @{-} [r] & \stackrel{a_7}{\bullet} \ar @{-} [r] & \stackrel{a_8}{\bullet} \ar @{-} [r] & \stackrel{a_9}{\bullet} \ar @{-} [r] & \stackrel{{\rm L}_1}{\bullet} \ar @{-} [r] & \stackrel{b_2}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{b_3}{\bullet} & \stackrel{b_4}{\bullet} \ar @{-} [l] \ar @{-}[ddl] \ar @{-}[uul] & \\ & & & \stackrel{{\rm L}_2}{\bullet} & & & & & & & & \stackrel{b_1}{\bullet} & & \\ & & \stackrel{{\rm R}_2}{\bullet} \ar @{=}[rrrrrrrrrr]& & & & & & & & & & \stackrel{b_5}{\bullet} & & \\ } } \end{equation} \par Case $ \gamma = 0$, $ \varepsilon \neq 0$: \begin{equation} \label{exdiaggl8} \def\scriptstyle{\scriptstyle} \def\scriptstyle{\scriptstyle} \scalemath{\MyScaleMedium}{ \xymatrix @-0.9pc { & & \stackrel{{\rm L}_3}{\bullet} \ar @{=}[rrrrrrrrrrrr] & & & & & & & & & & & & \stackrel{{\rm R}_1}{\bullet} & & \\ & & & & & & & & & & & & & & & \\ & \stackrel{a_1}{\bullet} \ar @{-} [r] \ar @{-}[uur] & \stackrel{a_2}{\bullet} \ar @{-} [r] & \stackrel{a_3}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{a_4}{\bullet} \ar @{-} [r] & \stackrel{a_5}{\bullet} \ar @{-} [r] & \stackrel{a_6}{\bullet} \ar @{-} [r] & \stackrel{a_7}{\bullet} \ar @{-} [r] & \stackrel{a_8}{\bullet} \ar @{-} [r] & \stackrel{a_9}{\bullet} \ar @{-} [r] & \stackrel{{\rm L}_1}{\bullet} \ar @{-} [r] & \stackrel{b_1}{\bullet} \ar @{-} [r] & \stackrel{b_2}{\bullet} \ar @{-} [r] & \stackrel{b_3}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{b_5}{\bullet} & \stackrel{b_6}{\bullet} \ar @{-} [l] \ar @{-} [uul] & \\ & & & \stackrel{{\rm L}_2}{\bullet} & & & & & & & & & & \stackrel{b_4}{\bullet} & & } } \end{equation} \par Case $ \gamma \neq 0$, $ \varepsilon = 0$: \begin{equation} \label{exdiaggl9} \def\scriptstyle{\scriptstyle} \def\scriptstyle{\scriptstyle} \scalemath{\MyScaleMedium}{ \xymatrix @-0.9pc { & \stackrel{a_1}{\bullet} \ar @{-} [r] \ar @{-}[ddr] & \stackrel{a_2}{\bullet} \ar @{-} [r] & \stackrel{a_3}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{a_4}{\bullet} \ar @{-} [r] & \stackrel{a_5}{\bullet} \ar @{-} [r] & \stackrel{a_6}{\bullet} \ar @{-} [r] & \stackrel{a_7}{\bullet} \ar @{-} [r] & \stackrel{a_8}{\bullet} \ar @{-} [r] & \stackrel{a_9}{\bullet} \ar @{-} [r] & \stackrel{a_{10}}{\bullet} \ar @{-} [r] & \stackrel{a_{11}}{\bullet} \ar @{-} [r] & \stackrel{{\rm L}_1}{\bullet} \ar @{-} [r] & \stackrel{b_2}{\bullet} \ar @{-} [r] \ar @{-} [d] & \stackrel{b_3}{\bullet} & \stackrel{b_4}{\bullet} \ar @{-} [l] \ar @{-}[ddl] & \\ & & & \stackrel{{\rm L}_2}{\bullet} & & & & & & & & & & \stackrel{b_1}{\bullet} & & \\ & & \stackrel{{\rm R}_2}{\bullet} \ar @{=}[rrrrrrrrrrrr] & & & & & & & & & & & & \stackrel{b_5}{\bullet} & & \\ } } \end{equation} Note that diagrams $(\ref{exdiaggl8})$ and $(\ref{exdiaggl9})$ are similar in nature. This is not a fortuitous fact, as we shall see next. 
\begin{proposition} \label{symmetries1} Let $(\alpha, \beta, \gamma, \delta, \varepsilon, \zeta) \in \mathbb{C}^6 $ with $(\gamma, \delta) \neq (0,0)$ and $(\varepsilon, \zeta) \neq (0,0)$. Then, one has the following isomorphisms of $H \oplus E_7(-1) \oplus E_7(-1)$ lattice polarized K3 surfaces: \begin{itemize} \item [(a)] $\mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta) \ \simeq \ \mathcal{X}(t^2 \alpha, \ t^3 \beta, \ t^5 \gamma, \ t^6 \delta, \ t^{-1} \varepsilon, \ \zeta ) $, for any $t \in \mathbb{C}^*$. \item [(b)] $\mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta) \ \simeq \ \mathcal{X}(\alpha, \beta, \varepsilon, \zeta, \gamma, \delta )$. \end{itemize} \end{proposition} \begin{proof} Let $q$ be a square root of $t$. Then, the projective automorphism given by \begin{equation} \label{tmor} \mathbb{P}^3 \ \longrightarrow \ \mathbb{P}^3, \ \ \ [\mathbf{X}: \mathbf{Y}: \mathbf{Z}: \mathbf{W}] \ \mapsto \ [\ q^8\mathbf{X}: \ q^9\mathbf{Y}: \mathbf{Z}: \ q^6\mathbf{W} \ ] \ \end{equation} extends to an isomorphism $\mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta) \simeq \mathcal{X}(t^2 \alpha, t^3 \beta, t^5 \gamma, t^6 \delta, t^{-1} \varepsilon, \zeta ) $ preserving the lattice polarization. Similarly, the birational involution \begin{equation} \label{invoef} \mathbb{P}^3 \ \dashrightarrow \ \mathbb{P}^3, \ \ \ [\mathbf{X}: \mathbf{Y}: \mathbf{Z}: \mathbf{W}] \ \mapsto \ [\ \mathbf{X}\mathbf{Z}: \ \mathbf{Y}\mathbf{Z}: \ \mathbf{W}^2: \ \mathbf{Z}\mathbf{W} \ ] \end{equation} extends to an isomorphism between $\mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta)$ and $\mathcal{X}(\alpha, \beta, \varepsilon, \zeta, \gamma, \delta )$. \end{proof} \subsection{Elliptic Fibrations on \texorpdfstring{$\mathcal{X}$}{X}} By the nature of the $H \oplus E_7(-1) \oplus E_7(-1)$ lattice polarizations, K3 surfaces $\mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta)$ carry interesting elliptic fibrations with sections. As discussed in \cite{MR2824841}, there are four non-isomorphic elliptic fibrations with section; three will be important for the considerations of this article. \subsubsection{The standard fibration} The first elliptic fibration with section is canonically associated with the lattice polarization, as the classes of its fiber and section span the hyperbolic factor in $H \oplus E_7(-1) \oplus E_7(-1)$. Following the terminology of \cite{MR2369941}, we shall refer to this elliptic fibration as \emph{standard} and denote it by $$ \pi^{\mathcal{X}}_{\mathrm{std}} \colon \mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta) \rightarrow \mathbb{P}^1 \ . $$ One obtains the fibers of $\pi^{\mathcal{X}}_{\mathrm{std}}$ as the residual intersections of the quartic with the pencil of planes in $ \mathbb{P}^3 $ containing the line $ \mathbf{Z}=\mathbf{W}=0$. 
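Concretely, the member of this pencil with parameter $s$ is the plane $\mathbf{Z} + s \, \mathbf{W} = 0$: restricting Equation~(\ref{mainquartic}) to this plane, the quartic splits off the line $\mathbf{Z}=\mathbf{W}=0$ together with a residual cubic curve, which is the fiber over $s$. The parametrization below realizes this plane, since $\mathbf{W} = -4\, s^3$ and $\mathbf{Z} = 4\, s^4$ satisfy $\mathbf{Z} = -s\, \mathbf{W}$.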
Setting \begin{equation} \mathbf{X} = sx, \quad \mathbf{Y}= y, \quad \mathbf{W} = -4\, s^3, \quad \mathbf{Z} = 4\, s^4, \end{equation} in Equation~(\ref{mainquartic}), we obtain the Weierstrass equation \begin{equation} \label{eqns_SI_std} \mathcal{X}_s: \quad y^2 = x^3 + f(s) \, x + g(s), \end{equation} with discriminant \begin{equation} \Delta_{\mathcal{X}}(s) = 4 \, f(s)^3 + 27 \, g(s)^2 = 64 \, s^9 P(s) \end{equation} and \begin{equation} f(s) = 4\, s^3 \Big(\gamma \, s^2 - 3 \, \alpha \, s + \varepsilon \Big), \quad g(s) = -8 \, s^5 \Big(\delta \, s^2 + 2 \, \beta \, s + \zeta\Big), \end{equation} and \begin{equation} \begin{array}{rl} P(s) = & 4 \, \gamma^3 s^6 - 9\, (4\, \alpha \gamma^2-3 \,\delta^2)\, s^5 + 12\, (9 \, \alpha^2 \gamma+9 \, \beta \delta+\gamma^2 \varepsilon)\, s^4\\[0.5em] - & 18\, (6\, \alpha^3+4\, \alpha \gamma \varepsilon-6 \, \beta^2-3\, \delta \zeta) \, s^3 + 12 \, (9\, \alpha^2 \varepsilon+9 \, \beta \zeta +\gamma \varepsilon^2) \, s^2 \\[0.5em] - & 9 \, (4 \, \alpha \varepsilon^2-3 \, \zeta^2 ) \, s + 4 \, \varepsilon^3 . \end{array} \end{equation} In this way, we obtain an elliptically fibered K3 surface $\pi^{\mathcal{X}}_{\mathrm{std}} \colon \mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta) \rightarrow \mathbb{P}^1$ with section given by the point at infinity in each fiber $\mathcal{X}_{s}$ and minimal Weierstrass equation~(\ref{eqns_SI_std}). The fibration has singularities of Kodaira type $III^*$ over $s=0$ and $s = \infty$. The following lemma is immediate: \begin{lemma} \label{lem:Ellstd} Equation~(\ref{eqns_SI_std}) is a Jacobian elliptic fibration $\pi^{\mathcal{X}}_{\mathrm{std}} \colon \mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta) \rightarrow \mathbb{P}^1$ with six singular fibers of type $I_1$, two singular fibers of type $III^*$, and the Mordell-Weil group of sections $\operatorname{MW}(\pi^{\mathcal{X}}_{\mathrm{std}} )=\{ \mathbb{I} \}$. \end{lemma} Application of Tate's algorithm shows immediately: \begin{lemma} \label{lem:std_extension} We have the following: \begin{itemize} \item If $ \varepsilon \neq 0 $, there is a singular fiber of Kodaira type $III^*$ at $s = 0$. Otherwise, it is a singular fiber of Kodaira type $II^*$. \item If $ \gamma \neq 0 $, there is a singular fiber of Kodaira type $III^*$ at $s = \infty$. Otherwise, it is a singular fiber of Kodaira type $II^*$. \end{itemize} \end{lemma} \subsubsection{The alternate fibration} Another elliptic fibration with section is obtained from residual intersections with the pencil of planes containing the line $\mathbf{X}=\mathbf{W}=0$. Setting \begin{equation} \mathbf{X}= t\, x^3, \quad \mathbf{Y}= \sqrt{2}\, x^2y, \quad \mathbf{W} = 2\, x^3, \quad \mathbf{Z} = 2\,x^2(-\varepsilon t+\zeta), \end{equation} in Equation~(\ref{mainquartic}) determines a second Jacobian elliptic fibration $\pi^{\mathcal{X}}_{\mathrm{alt}} \colon \mathcal{X} \rightarrow \mathbb{P}^1$ with fiber $\mathcal{X}_{t}$ given by the Weierstrass equation \begin{equation} \label{eqns_SI_alt} \mathcal{X}_t: \quad y^2 = x \Big(x^2 + B(t) \, x +A(t) \Big), \end{equation} with discriminant \begin{equation} \Delta_{\mathcal{X}}(t) = A(t)^2 \, \Big(B(t)^2-4\,A(t)\Big) \end{equation} and \begin{equation} \begin{split} A(t) &= (\gamma t-\delta)(\varepsilon t -\zeta) = \gamma\varepsilon t^2 - (\gamma \zeta + \delta \varepsilon) t + \delta\zeta,\\ B(t) &= t^3-3\alpha t- 2 \beta. 
\end{split} \end{equation} Thus, we obtain an elliptically fibered K3 surface $\pi^{\mathcal{X}}_{\mathrm{alt}} \colon \mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta) \rightarrow \mathbb{P}^1$ which we call the \emph{alternate fibration}, with section given by the point at infinity in each fiber $\mathcal{X}_{t}$ and minimal Weierstrass equation~(\ref{eqns_SI_alt}). It has a two-torsion section $(x,y)=(0,0)$, two singularities of Kodaira type $I_2$ over $A(t)=0$, and a singularity of Kodaira type $I_8^*$ over $t=\infty$. Therefore, the following is immediate: \begin{lemma} \label{lem:Ellalt} Equation~(\ref{eqns_SI_alt}) defines a Jacobian elliptic fibration $\pi^{\mathcal{X}}_{\mathrm{alt}} \colon \mathcal{X} \rightarrow \mathbb{P}^1$ with six singular fibers of type $I_1$, two singular fibers of type $I_2$, one singular fiber of type $I_8^*$, and the Mordell-Weil group of sections $\operatorname{MW}(\pi^{\mathcal{X}}_{\mathrm{alt}} )=\mathbb{Z}/2\mathbb{Z}$. \end{lemma} \par Setting \begin{equation} x=T, \quad y = \frac{Y}{T^2}, \quad t=\frac{X-\frac{1}{3}\gamma \varepsilon T}{T^2}, \end{equation} in Equation~(\ref{eqns_SI_alt}) determines another Jacobian elliptic fibration $\check{\pi}^{\mathcal{X}}_{\mathrm{alt}}: \mathcal{X} \to \mathbb{P}^1$ with fiber $\mathcal{X}_{T}$ given by the minimal Weierstrass equation \begin{equation} \label{eqn:bfdual} \mathcal{X}_{T}: \quad Y^2 = X^3 + \check{f}(T) \, X + \check{g}(T) , \end{equation} with discriminant \begin{equation} \Delta_{\mathcal{X}}(T)=4\, \check{f}(T)^3+27\, \check{g}(T)^2 \end{equation} and \begin{equation*} \begin{split} \check{f}(T) =& -\frac{1}{3}\, T^2\, \Big(9\, \alpha \, T^2 + 3(\gamma\zeta + \delta \varepsilon)\, T +( \gamma\varepsilon)^2\Big),\\ \check{g}(T)= & \, \frac{1}{27} \, T^3 \, \Big(27 \, T^4 -54\, \beta\, T^3 + 27\, (\alpha \gamma \varepsilon + \delta \zeta) \, T^2 \\ &\qquad + 9 \, \gamma\varepsilon\, (\delta\varepsilon+\gamma \zeta) \, T + 2( \gamma\varepsilon)^3\Big) . \end{split} \end{equation*} Thus, we obtain a Jacobian elliptic fibration $\check{\pi}^{\mathcal{X}}_{\mathrm{alt}} \colon \mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta) \rightarrow \mathbb{P}^1$ which we call the \emph{base-fiber-dual} of the alternate fibration. We have the following: \begin{lemma} \label{lem:Ellalt_dual} Equation~(\ref{eqn:bfdual}) defines a Jacobian elliptic fibration $\check{\pi}^{\mathcal{X}}_{\mathrm{alt}} \colon \mathcal{X} \rightarrow \mathbb{P}^1$ with six singular fibers of Kodaira type $I_1$, a singular fiber of Kodaira type $I_2^*$, and a singular fiber of Kodaira type $II^*$, and the Mordell-Weil group of sections $\operatorname{MW}(\check{\pi}^{\mathcal{X}}_{\mathrm{alt}})=\{ \mathbb{I} \}$. \end{lemma} \subsection{Van~Geemen-Sarti involutions and moduli} Equation~(\ref{eqns_SI_alt}) is a minimal Weierstrass model for the Jacobian elliptic fibration $\pi^{\mathcal{X}}_{\mathrm{alt}}: \mathcal{X} \to \mathbb{P}^1$ with fiber $\mathcal{X}_{t}$ given by \begin{equation} \label{Xsurface} \mathcal{X}_{t}: \quad y^2 = x \, \Big(x^2 + B(t) \, x + A(t) \Big) . \end{equation} The singular fibers of $\mathcal{X}$ are located over the support of $\Delta_{\mathcal{X}}=A(t)^2 (B(t)^2-4 \, A(t))$. A smooth section $\sigma$ is given by the point at infinity in each fiber. A two-torsion section $\tau$ is given by $\tau: t \mapsto (x,y)=(0,0)$ such that $2 \tau=\sigma$. Thus, we have $\mathbb{Z}/2\mathbb{Z} \subset \operatorname{MW}(\pi^{\mathcal{X}}_{\mathrm{alt}})$. 
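Note that these fibration data are consistent with the rank of the lattice polarization: for generic parameters, the Shioda-Tate formula applied to the alternate fibration of Lemma~\ref{lem:Ellalt} gives
\[
 \operatorname{rank} \operatorname{NS}(\mathcal{X}) \ = \ 2 + 2 \cdot (2-1) + (13-1) + \operatorname{rank} \operatorname{MW}(\pi^{\mathcal{X}}_{\mathrm{alt}}) \ = \ 16 ,
\]
where the two $I_2$ fibers have two components each, the $I_8^*$ fiber has thirteen components, and the Mordell-Weil group $\mathbb{Z}/2\mathbb{Z}$ has rank zero; this matches the rank of $H \oplus E_7(-1) \oplus E_7(-1)$.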
The holomorphic two-form is given by $\omega_{\mathcal{X}}=dt \wedge dx/y$. A \emph{Nikulin involution} on a K3 surface $\mathcal{X}$ is a symplectic involution $\jmath_{\mathcal{X}}: \mathcal{X} \to \mathcal{X}$, i.e., an involution with $\jmath_{\mathcal{X} }^*(\omega)=\omega$. If a Nikulin involution exists on a K3 surface $\mathcal{X}$, then it necessarily has eight fixed points, and the minimal resolution of the quotient surface is another K3 surface $\mathcal{Y}=\widehat{\mathcal{X}/\{1,\jmath_{\mathcal{X} }\}}$ \cite{MR544937}. A special Nikulin involution is obtained in our situation: the fiberwise translation by the two-torsion section acting by $p \mapsto p+\tau$ for all $p \in \mathcal{X}_{t}$ extends to a Nikulin involution $\jmath_{\mathcal{X} }$ on $\mathcal{X}$, called a \emph{Van~\!Geemen-Sarti} involution. A computation shows that the involution is, on each fiber $\mathcal{X}_t$, given by \begin{equation} \label{involution_fiber} \jmath_{\mathcal{X}_t}: (x,y) \mapsto (x,y) + (0,0) = \left( \frac{A(t)}{x}, - \frac{A(t)\, y}{x^2} \right) \end{equation} for $p \not \in \{\sigma, \tau\}$ and interchanges $\sigma$ and $\tau$. It is also easy to check that $\jmath_{\mathcal{X} }$ leaves the holomorphic two-form $\omega_{\mathcal{X}}$ invariant. Using the smooth two-isogenous elliptic curve $ \mathcal{X}_t/\{ \sigma, \tau\}$ for each smooth fiber, we obtain the new K3 surface $\mathcal{Y}$, equipped with an elliptic fibration $\pi^{\mathcal{Y}}_{\mathrm{alt}}:\mathcal{Y}\to\mathbb{P}^1$ with section $\Sigma$, as the Weierstrass model with fiber $\mathcal{Y}_{t}$ given by \begin{equation} \label{Ysurface} \mathcal{Y}_{t}: \quad Y^2 = X \Big(X^2- 2 \, B(t) \, X + B(t)^2-4 \, A(t)\Big). \end{equation} The singular fibers of $\mathcal{Y}$ are located over the support of $\Delta_{\mathcal{Y}}=16\, A(t)\,(B(t)^2-4A(t))^2$. The holomorphic two-form on $\mathcal{Y}$ is $\omega_{\mathcal{Y}}=dt\wedge dX/Y$. The fiberwise isogeny given by \begin{equation} \label{isog} \hat{\Phi}\vert_{\mathcal{X}_t}: (x,y) \mapsto (X,Y) = \left( \frac{y^2}{x^2}, \frac{(x^2-A(t))y}{x^2} \right) \end{equation} extends to a degree-two rational map $\hat{\Phi}: \mathcal{X} \dasharrow \mathcal{Y}$. We observe that the K3 surface $\mathcal{Y}$ satisfies $\mathbb{Z}/2\mathbb{Z} \subset \operatorname{MW}(\pi^{\mathcal{Y}}_{\mathrm{alt}})$ with a two-torsion section $T$ given by $T: t \mapsto (X,Y)=(0,0)$. Therefore, the surface $\mathcal{Y}$ is itself equipped with a Van~\!Geemen-Sarti involution $\jmath_{\mathcal{Y}}$, namely \begin{equation} \label{dual_isog_involution} \jmath_{\mathcal{Y}_t}: (X,Y) \mapsto (X,Y) + (0,0) = \left( \frac{B(t)^2-4\, A(t)}{X}, - \frac{(B(t)^2-4\,A(t))Y}{X^2} \right). \end{equation} The involution $\jmath_{\mathcal{Y}}$ extends the fiberwise translation $P \mapsto P + T$ for all $P \in \mathcal{Y}_t$ and leaves the holomorphic two-form $\omega_{\mathcal{Y}}$ invariant. The corresponding dual isogeny extends to a degree-two rational map $\Phi: \mathcal{Y} \dasharrow \mathcal{X}$, given fiberwise by \begin{equation} \label{dual_isog} \Phi\vert_{\mathcal{Y}_t}: (X,Y) \mapsto (x,y) = \left( \frac{Y^2}{4X^2}, \frac{Y(X^2-B(t)^2+4\, A(t))}{8\, X^2} \right) . 
\end{equation} The situation is summarized in the following diagram: \begin{equation} \label{pic:VGS} \xymatrix{ \mathcal{X} \ar @(dl,ul) _{\jmath_{\mathcal{X}}} \ar [dr] _{\pi^{\mathcal{X}}_{\mathrm{alt}}} \ar @/_0.5pc/ @{-->} _{\hat{\Phi}} [rr] & & \mathcal{Y} \ar @(dr,ur) ^{\jmath_{\mathcal{Y}}} \ar [dl] ^{\pi^{\mathcal{Y}}_{\mathrm{alt}}} \ar @/_0.5pc/ @{-->} _{\Phi} [ll] \\ & \mathbb{P}^1 } \end{equation} We refer to such K3 surfaces $\mathcal{X}$ and $\mathcal{Y}$ as \emph{Van~\!Geemen-Sarti partners}. Therefore, the family of K3 surfaces $\mathcal{Y}$ associated with the double cover of the projective plane branched along the union of six lines equipped with the alternate fibration in Equation~(\ref{eqns_Kummer_alt}) and the Clingher-Doran family of K3 surfaces equipped with the alternate fibration in Equation~(\ref{eqns_SI_alt}) constitute such Van~\!Geemen-Sarti partners; see \cites{MalmendierClingher:2018,Clingher:2017aa}. The notion of Van~\!Geemen-Sarti partners is more general than the one of a Shioda-Inose structure. We make the following: \begin{remark} In Picard rank $17$, $\mathcal{X}$ carries a Shioda-Inose structure \cites{MR2369941,MR2824841}. The quotient map $\hat{\Phi}: \mathcal{X} \dashrightarrow \mathcal{Y}=\operatorname{Kum}(A)$ induces a Hodge isometry $T_{\mathcal{X}}(2 ) \cong T_{\operatorname{Kum}(A)}$. In Picard rank 16, the map $\hat{\Phi}: \mathcal{X} \dashrightarrow \mathcal{Y}$ in Equation~(\ref{pic:VGS}) does NOT induce a Hodge isometry. In Proposition~\ref{lem_6lines} the transcendental lattice $T_{\mathcal{Y}}$ of the family of K3 surfaces $\mathcal{Y}$, and in Proposition~\ref{symmetries1} the lattice polarization of the family of K3 surfaces $\mathcal{X}$ were determined. For generic parameters, we have \begin{equation} \begin{split} T_{\mathcal{X}}&= H \oplus H \oplus \langle -2 \rangle^{\oplus 2},\\ T_{\mathcal{Y}}&= H(2) \oplus H(2) \oplus \langle -2 \rangle^{\oplus 2}. \end{split} \end{equation} Hence, it is no longer the case that $T_{\mathcal{X}}(2 ) \cong T_{\mathcal{Y}}$. \end{remark} In the context of the above results, we have the following: \begin{proposition} \label{lem:polar1} Any $H \oplus E_7(-1) \oplus E_7(-1)$-polarized K3 surface $\mathcal{X}$ given by Equation~(\ref{mainquartic}) is the Van~\!Geemen-Sarti partner of a K3 surface $\mathcal{Y}$ given in Theorem~\ref{thm:6lines_alt} associated with a six-line configuration in $\mathbb{P}^2$ with invariants $J_k$ for $k=2, \dots, 6$. In particular, we have \begin{equation} \label{modd} \left [ J_2 : \ J_3 : \ J_4: \ J_5 : \ J_6 \right ] = \left [\, \alpha : \ \beta : \ \gamma \cdot \varepsilon: \ \gamma \cdot \zeta + \delta \cdot \varepsilon : \ \delta \cdot \zeta \right ] \end{equation} as points in the four-dimensional weighted projective space $\mathbb{P}(2,3,4,5,6)$. \end{proposition} \begin{proof} The proof follows directly by comparing Equation~(\ref{Ysurface}) -- obtained by fiberwise two-isogeny from Equation~(\ref{eqns_SI_alt}) -- with Equation~(\ref{eqns_Kummer_alt}). It then follows that $\mathcal{A}(t)=A(t)$ and $\mathcal{B}(t)=B(t)$, and the claim follows. \end{proof} We also have the following: \begin{lemma} \label{lem:polar2} The isomorphism classes in the family of K3 surfaces $\mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta)$ in Equation~(\ref{mainquartic}) are parametrized by the four-dimensional open complex variety $\mathfrak{M}$ defined in Equation~(\ref{modulispace}). 
\end{lemma} \begin{proof} As a consequence of Theorem~\ref{thm:polarization}, one has an isomorphism of polarized K3 surfaces \begin{equation} \label{eq:isom} \mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon, \zeta) \ \simeq \ \mathcal{X}(\alpha, \ \beta, \ t \gamma, \ t \delta, \ t^{-1} \varepsilon, \ t^{-1} \zeta ) \end{equation} for any $t \in \mathbb{C}^*$. The condition $( J_4 , \ J_5, \ J_6 ) \neq (0,0,0)$ ensures that the singularities of ${\rm Q}(\alpha, \beta, \gamma, \delta , \varepsilon, \zeta)$ are rational double points. \end{proof} Combining the results of Proposition~\ref{lem:polar1} and Lemma~\ref{lem:polar2}, we obtain the following: \begin{theorem} \label{cor:K3moduli_space} The moduli space $\mathfrak{M}$ in Equation~(\ref{modulispace}) is the coarse moduli space of K3 surfaces endowed with a $H \oplus E_7(-1) \oplus E_7(-1)$ lattice polarization. \end{theorem} We have the immediate consequence: \begin{corollary} The loci of the singular fibers in the alternate fibration on the K3 surface $\mathcal{X}$ are determined by the Satake sextic in Section~\ref{ssec:mod_forms}. That is, if $\varpi \in \mathfrak{M}$ is the point in the moduli space associated with the six-line configuration defining $\mathcal{Y}$ and $\mathcal{X}$, the loci of the fibers of Kodaira type $I_1$ and $I_2$ are given by $\mathcal{S} =\mathcal{B}^2 - 4 \, \mathcal{A} = 0$ and $\mathcal{A} = 0$, respectively, with \begin{equation} \label{SatakeSextic2b} \begin{split} \mathcal{B}=t^3- 3 \, J_2(\varpi)\, t - 2 \, J_3(\varpi), \ & \quad \mathcal{A}= J_4(\varpi)\, t^2 - J_5(\varpi) \, t + J_6(\varpi), \end{split} \end{equation} where $J_k(\varpi)$ are the modular forms of weights $2k$ for $k=2, \dots , 6$ in Theorem~\ref{thm4} generating the ring of modular forms relative to $\Gamma_{\mathcal{T}}$. \end{corollary} Next, we describe which confluences of singular fibers appear in the three Jacobian elliptic fibrations determined above. We discuss several cases for each fibration, where the labelling corresponds to the one used to characterize six-line configurations in Definition~\ref{def:sstable}, Lemma~\ref{lem:Js}, and Corollary~\ref{cor:VanishingLoci}. We have the following: \begin{lemma} \label{lem:extensions_alt} The Weierstrass model in Equation~(\ref{eqns_SI_alt}) associated with a six-line configuration in $\mathbb{P}^2$ with invariants $J_k$ for $k=2, \dots, 6$ satisfies the following: \begin{enumerate} \item[(0)] In the generic case, there are singular fibers $I_8^* + 2 \, I_2 + 6 \, I_1$. \item[(0b)] If $\operatorname{Res} (\mathcal{A}, \mathcal{B})=0$, one $I_1$ and one $I_2$ fiber coalesce to a $III$ fiber. \item[(1)] If $J_4 = 0$, one $I_2$ and the $I_8^*$ fiber coalesce to an $I_{10}^*$ fiber. \item[(2)] If $\operatorname{Disc} (\mathcal{A})=0$, two $I_2$ fibers coalesce to an $I_4$ fiber. \item[(2b)] If $\operatorname{Disc} (\mathcal{S})=0$, two $I_1$ fibers coalesce to an $I_2$ fiber. \item[(3+4)] If $\operatorname{Disc} (\mathcal{A})=\operatorname{Res} (\mathcal{A}, \mathcal{B})=0$, two $I_1$ and two $I_2$ fibers coalesce to an $I_0^*$ fiber. \item[(5)] If $J_4 = J_5 = 0$, two $I_2$ fibers and the $I_8^*$ fiber coalesce to an $I_{12}^*$ fiber. \end{enumerate} \end{lemma} \begin{proof} The coefficients of the Weierstrass model in Equation~(\ref{eqns_SI_alt}) can be written in terms of modular forms. The proof follows from the application of Tate's algorithm. 
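The confluences listed above are consistent with the additivity of the Euler numbers of the Kodaira fibers; for instance, in case (1) one has $e(I_2) + e(I_8^*) = 2 + 14 = 16 = e(I_{10}^*)$, while in case (5) one has $2\, e(I_2) + e(I_8^*) = 18 = e(I_{12}^*)$.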
Notice that $J_4 = \operatorname{Disc} (\mathcal{A}) = 0$ is equivalent to $J_4 = J_5 = 0$; and $\operatorname{Disc} (\mathcal{A})=\operatorname{Res} (\mathcal{A}, \mathcal{B})=0$ implies $\operatorname{Disc} (\mathcal{S})=0$. \end{proof} We also have the following: \begin{lemma} \label{lem:extensions_alt_FB} The Weierstrass model in Equation~(\ref{eqn:bfdual}) associated with a six-line configuration in $\mathbb{P}^2$ with invariants $J_k$ for $k=2, \dots, 6$ satisfies the following: \begin{enumerate} \item[(0)] In the generic case, there are singular fibers $II^* + 6 \, I_1 + I_2^*$. \item[(0b)] If $\operatorname{Res} (\check{f}\, T^{-2}, \check{g}\, T^{-3})=0$, two $I_1$ fibers coalesce to a $II$ fiber. \item[(1)] If $J_4 = 0$, one $I_1$ and the $I_2^*$ fiber coalesce to a $III^*$ fiber. \item[(2)] If $\operatorname{Disc} (\mathcal{A})=0$, one $I_1$ and the $I_2^*$ fiber coalesce to an $I_3^*$ fiber. \item[(2b)] If $\operatorname{Disc} (\mathcal{S})=0$, two $I_1$ fibers coalesce to an $I_2$ fiber. \item[(3+4)] If $\operatorname{Disc} (\mathcal{A})=\operatorname{Res} (\mathcal{A}, \mathcal{B})=0$, two $I_1$ and the $I_2^*$ fiber coalesce to an $I_4^*$ fiber. \item[(5)] If $J_4 = J_5 = 0$, two $I_1$ fibers and the $I_2^*$ fiber coalesce to a $II^*$ fiber. \end{enumerate} \end{lemma} \begin{proof} The coefficients of the Weierstrass model in Equation~(\ref{eqn:bfdual}) can be written in terms of modular forms. The proof then follows from the application of Tate's algorithm. \end{proof} Similarly, one proves the following: \begin{lemma} \label{lem:extensions_std} The Weierstrass model in Equation~(\ref{eqns_SI_std}) associated with a six-line configuration in $\mathbb{P}^2$ with invariants $J_k$ for $k=2, \dots, 6$ satisfies the following: \begin{enumerate} \item[(0)] In the generic case, there are singular fibers $III^* + 6 \, I_1 + III^*$. \item[(0b)] If $\operatorname{Res} (f s^{-2}, g s^{-5})=0$, two $I_1$ fibers coalesce to a $II$ fiber. \item[(1)] If $J_4 = 0$, one $I_1$ and one $III^*$ fiber coalesce to a $II^*$ fiber. \item[(2b)] If $\operatorname{Disc} (\mathcal{S})=0$, two $I_1$ fibers coalesce to an $I_2$ fiber. \item[(5)] If $J_4 = J_5 = 0$, two pairs of $I_1$ and $III^*$ fiber coalesce each to a $II^*$ fiber. \end{enumerate} \end{lemma} Notice that cases in which two $I_1$'s coalesce to form a fiber of type $II$ or one $I_1$ fiber and one $I_2$ fiber coalesce to a fiber of type $III$ -- a case we labelled (0b), adding to cases (0) through (5) in Definition~\ref{def:sstable} -- do not affect the lattice polarization. An immediate consequence is the following: \begin{corollary} \label{cor:extensions} The family of K3 surfaces in Equation~(\ref{mainquartic}) satisfies the following: \begin{enumerate} \item[(0)] For a generic point in $\mathfrak{M}$, there is a $H \oplus E_7(-1) \oplus E_7(-1)$ polarization. \item[(1)] If $J_4 = 0$, the polarization extends to $H \oplus E_8(-1) \oplus E_7(-1)$. \item[(2)] if $\operatorname{Disc} (\mathcal{A})=0$, the polarization extends to $H \oplus E_8(-1) \oplus D_7(-1)$. \item[(2b)] If $\operatorname{Disc} (\mathcal{S})=0$, the polarization extends to $H \oplus E_7(-1) \oplus E_7(-1) \oplus \langle -2 \rangle$. \item[(3+4)] If $\operatorname{Disc} (\mathcal{A})=\operatorname{Res} (\mathcal{A}, \mathcal{B})=0$, the polarization extends to $H \oplus E_8(-1) \oplus D_8(-1)$. \item[(5)] If $J_4 = J_5 = 0$, the polarization extends to $H \oplus E_8(-1) \oplus E_8(-1)$. 
\end{enumerate} \end{corollary} \begin{proof} The presence of a singular fiber of Kodaira type $II^*$ in the fibration given by Equation~(\ref{eqn:bfdual}) implies that we have, in all cases, a Mordell-Weil group of sections $\operatorname{MW}(\check{\pi}^{\mathcal{X}}_{\mathrm{alt}})=\{ \mathbb{I} \}$ \cite{MR2732092}*{Lemma~7.3}. Therefore, the lattice polarization coincides with the trivial lattice generated by the singular fibers extended by $H$ generated by the classes of the smooth fiber and the section of the elliptic fibration. \end{proof} \subsection{F-theory/heterotic duality} We determined equations for three important elliptic fibrations with sections on the universal family of such K3 surfaces admitting a $H \oplus E_7(-1) \oplus E_7(-1)$ lattice polarization in Theorem~\ref{thm:polarization} and Lemmas~\ref{lem:Ellstd},~\ref{lem:Ellalt},~\ref{lem:Ellalt_dual}. We also identified the coarse moduli space $\mathfrak{M}$ of the Clingher-Doran family to be the quotient space of $\mathbf{H}_2$ by the modular group $\Gamma_{\mathcal{T}}$; see Theorem~\ref{thm4} and~\ref{cor:K3moduli_space}. We will now show, that the ring of modular forms of even characteristic relative to $\Gamma_{\mathcal{T}}$ coincides with the ring of modular forms on the bounded symmetric domain of type $IV$. The latter is known to provide a quantum-exact effective heterotic description of a natural sub-space of the moduli space of non-geometric heterotic models \cites{MR3366121,MR2826187,MR3417046}. Therefore, we will have established a particular F-theory/heterotic string duality map in eight dimensions. The heterotic theories on a torus $\mathbf{T}^2$ in question have two complex moduli and two non-vanishing complex Wilson lines. The duality map is the exact correspondence between the two moduli spaces, known in the large volume limit on the heterotic side, which corresponds to the stable degeneration limit on the F-theory side and established in \cites{MR1468319,MR1643100,MR1697279,MR2126482}. Therefore, our result generalizes earlier results by \cites{MR2369941, MR2826187, MR2824841, MR2935386, MR3366121, MR3417046}. \par We let $L^{2,4}$ be the lattice of signature $(2,4)$ which is the orthogonal complement of $E_7(-1)\oplus E_7(-1)$ in the unique integral even unimodular lattice $\Lambda^{2,18}$ of signature $(2,18)$, which is \begin{equation} \label{k3lattice} \Lambda^{2,18}=H\oplus H \oplus E_8(-1)\oplus E_8(-1) \;. \end{equation} We restrict to the quotient of the symmetric space\footnote{By $\mathcal{D}_{p,q}$ we denote the symmetric space for $O(p,q)$, i.e., \begin{equation} \mathcal{D}_{p,q} = (O(p)\times O(q))\backslash O(p,q). \end{equation} } for $O(2,4)$ by the automorphism group $O(L^{2,4})$, i.e., the space \begin{equation} \label{moduli_space} \mathcal{D}_{2,4}/O(L^{2,4}). \end{equation} The space $\mathcal{D}_{2,4}$ is also known as \emph{bounded symmetric domain of type $IV$}. We further restrict to a certain index-two sub-group $O^+(L^{2,4}) \subset O(L^{2,4})$ in the construction above with the corresponding degree-two cover given by \begin{equation} \label{eqn:MSP+} \mathcal{D}_{2,4}/O^+(L^{2,4}). \end{equation} The group $O^+(L^{2,4})$ is the maximal sub-group whose action preserves the complex structure on the symmetric space, and thus is the maximal sub-group for which the corresponding modular forms are holomorphic. 
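A quick dimension count is consistent with this setup: the bounded symmetric domain $\mathcal{D}_{2,4}$ has complex dimension four,
\[
 \dim_{\mathbb{C}} \mathcal{D}_{2,4} \ = \ 4 \ = \ \dim_{\mathbb{C}} \mathfrak{M} ,
\]
matching both the four-dimensional moduli space $\mathfrak{M}$ of Theorem~\ref{cor:K3moduli_space} and the four complex parameters (two complex moduli and two Wilson lines) of the heterotic compactifications described above.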
\par We have the following: \begin{theorem} The natural sub-space of the moduli space of non-geometric heterotic models whose quantum-exact effective heterotic description is captured by the ring of holomorphic modular forms on the bounded symmetric domain of type $IV$ is isomorphic to the coarse moduli space of K3 surfaces admitting a $H \oplus E_7(-1) \oplus E_7(-1)$ lattice polarization in Theorem~\ref{thm:polarization}. \end{theorem} \begin{proof} In Section~\ref{ssec:modular_description}, we introduced the space $\mathbf{H}_2$ of two-by-two matrices $\varpi$ over $\mathbb{C}$ such that the Hermitian matrix $(\varpi-\varpi^\dagger)/(2i)$ is positive definite; see~Equation~(\ref{Siegel_tau}). On these elements, the modular group $\Gamma_{\mathcal{T}}$ introduced in Equation~(\ref{modular group_extended}) acts by matrix multiplication for elements in $U(2,2)$ and by matrix transposition for the additional generator $\mathcal{T}$, i.e., $\mathcal{T}\cdot \varpi =\varpi^t$. In \cite{MR1204828}*{Prop.~\!1.5.1} it was proved that there is an isomorphism $\Gamma_{\mathcal{T}} \cong O^+(L^{2,4})$ that induces an isomorphism \begin{equation} \label{eq:h2iso} \mathbf{H}_2 \cong \mathcal{D}_{2,4}. \end{equation} Generally, $O^+(L^{2,n})$ is the index-two sub-group given by the condition that the upper left minor of order two is positive; see \cite{MR2682724} for details. The group $O^+(L^{2,4})$ contains the special orthogonal sub-group $SO^+(L^{2,4})$ of all elements of determinant one. In our situation, this group $SO^+(L^{2,4})$ is precisely the index-two sub-group $\Gamma^{+}_{\mathcal{T}}$ introduced in Equation~(\ref{index-two}): an isomorphism $SO^+(L^{2,4}) \cong \Gamma^{+}_{\mathcal{T}}$ is given by mapping the generators of $\Gamma^{+}_{\mathcal{T}}$ to generators of $SO^+(L^{2,4})$. In fact, we map the generators $G_1\mathcal{T}$ and $G_2, \dots, G_5$ in Lemma~\ref{lem:generators} to the generators explicitly given in \cite{MR1204828}*{p.~\!393} and denote the latter by $\mathcal{G}_k \in SO^+(L^{2,4})$ for $k=1,\dots, 5$.\footnote{In \cite{MR1204828}, $G_1$ was mapped to $\mathcal{G}_1$, which is \emph{not} compatible with the identification $SO^+(L^{2,4}) \cong \Gamma^{+}_{\mathcal{T}}$.} Moreover, the elements $G_1$ and $\mathcal{T}$ are mapped to reflections $\mathcal{R}_{G_1}$ and $\mathcal{R}_{\mathcal{T}}$ in $O^+(L^{2,4})$ associated with roots of square $-2$ and $-4$, respectively, such that $\mathcal{G}_1 =\mathcal{R}_{G_1} \cdot \mathcal{R}_{\mathcal{T}}$. We also find $\mathcal{G}_3=\mathcal{R}_{G_3} \cdot \mathcal{R}_{\mathcal{T}}$ for another reflection $\mathcal{R}_{G_3}$.\footnote{In \cite{MR1204828} the roots associated with $\mathcal{R}_{G_1}$, $\mathcal{R}_{\mathcal{T}}$, and $\mathcal{R}_{G_3}$ were denoted by $\alpha(1,2,3)$, $\beta_1$, and $\beta_6$.} Note that reflections belong to $O^+(L^{2,4})$, but not to $SO^+(L^{2,4})$. In fact, the generators $\mathcal{R}_{G_1}, \mathcal{R}_{\mathcal{T}}\in O^+(L^{2,4})$ together with $\mathcal{G}_k \in SO^+(L^{2,4})$ for $k=1,\dots, 5$ determine the full isomorphism $\Gamma_{\mathcal{T}} \cong O^+(L^{2,4})$. \par The element $\mathcal{T}$ acts trivially on the five modular forms $J_k$ of weights $2k$ for $k=2, \dots , 6$. Thus, they all have \emph{even characteristic} with respect to the action of $\mathcal{T}$. We proved in Theorem~\ref{thm4} that they freely generate the ring of modular forms relative to $\Gamma_{\mathcal{T}}$ with character $\chi_{2k}(g)=\det(G)^k$ for all $g = G\,\mathcal{T} ^{n} \in\Gamma_{\mathcal{T}}$. 
By a result of Vinberg~\cite{MR2682724}, the ring of modular forms relative to $O^+(L^{2,4})$ turns out to be exactly this ring of modular forms relative to $\Gamma_{\mathcal{T}}$ of even characteristic. \end{proof} \par The space $\mathbf{H}_2$ is a generalization of the Siegel upper-half space $\mathbb{H}_2$. In fact, elements invariant under transposition $\mathcal{T}$ are precisely the two-by-two symmetric matrices over $\mathbb{C}$ whose imaginary part is positive definite, i.e., elements of the Siegel upper-half plane $\mathbb{H}_2\cong \mathcal{D}_{2,3}$, on which the modular group $\operatorname{Sp}_4(\mathbb{Z})\cong SO^+(L^{2,3})$ acts. For the sub-space \begin{equation} \label{eq:h2iso_rest} \mathcal{D}_{2,3}/O^+(L^{2,3}) \hookrightarrow \mathcal{D}_{2,4}/O^+(L^{2,4}), \end{equation} another result of Vinberg \cite{MR3235787} proves that the ring of $O^+(L^{2,3})$-modular forms corresponds to the ring of Siegel modular forms of \emph{even weight}. Igusa \cite{MR0141643} showed that this ring of even modular forms is generated by the Siegel modular forms $\psi_4$, $\psi_6$, $\chi_{10}$, $\chi_{12}$ of respective weights $4, 6, 10, 12$. \par Matrix transposition $\mathcal{T}$ acts as $-1$ on the $\Gamma_{\mathcal{T}}$-modular forms of \emph{odd characteristic}, and the fixed locus of $\mathcal{T}$ must be contained in the vanishing locus of any $\Gamma_{\mathcal{T}}$-modular form of odd characteristic. Modular forms of odd characteristic are generated by the unique (up to scaling) modular form $\Theta(\varpi)$ of odd characteristic introduced in Theorem~\ref{thm1}. In Theorem~\ref{thm4} we found the relation $J_4(\varpi)=(\Theta(\varpi)/25)^2$. Therefore, the fixed locus of $\mathcal{T}$ coincides with the vanishing locus of $J_4(\varpi)$. In fact, we will show in Proposition~\ref{prop:compare} that in the case $\varpi=\tau \in \mathbb{H}_2$ the form $J_4=(\Theta(\varpi)/25)^2$ vanishes and the other $\Gamma_{\mathcal{T}}$-modular forms restrict to the generators of the ring of Siegel modular forms. \section{Specialization to six lines tangent to a conic} \label{6Lrestricted} In this section we consider the specialization of the generic six-line configuration when the six lines are tangent to a common conic. Such a configuration has three moduli which we will denote by $\lambda_1, \lambda_2, \lambda_3$. It follows from \cite{MalmendierClingher:2018}*{Prop.~5.13} that the lines can be brought into the form \begin{equation} \label{Eqn:6Ltangent} \begin{array}{lrcl} \ell_1: & z_1 & = & 0,\\ \ell_2: & z_2 & = & 0,\\ \ell_3: & z_1 + z_2 - z_3 &=& 0,\\ \ell_4: & \lambda_1^2 \, z_1 + z_2 - \lambda_1 z_3 &=& 0,\\ \ell_5: & \lambda_2^2 \, z_1 + z_2 - \lambda_2 z_3 &=& 0,\\ \ell_6: & \lambda_3^2 \, z_1 + z_2 - \lambda_3 z_3 &=& 0, \end{array} \end{equation} where $\lambda_i \not = 0, 1, \infty$ and $\lambda_i \not = \lambda_j$ for all $i \not= j$. We have the following: \begin{lemma} The six lines in Equation~(\ref{Eqn:6Ltangent}) are tangent to $C: z_3^2-4z_1z_2=0$. \end{lemma} \begin{proof} It is easy to prove that the intersection of the conic $C: z_3^2-4z_1z_2$ with any of the six lines $\ell_i$ for $1\le i \le6$ in Equation~(\ref{Eqn:6Ltangent}) yields a root of order two, that is, a point of tangency. 
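For instance, substituting $z_2 = \lambda_1 z_3 - \lambda_1^2 z_1$ from the equation of $\ell_4$ into the conic gives
\[
 z_3^2 - 4\, z_1 \big( \lambda_1 z_3 - \lambda_1^2 z_1 \big) \ = \ \big( z_3 - 2 \lambda_1 z_1 \big)^2 ,
\]
a perfect square, so $\ell_4$ meets $C$ in a single point with multiplicity two; the computation for the remaining lines is analogous.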
\end{proof} \par The following lemma is immediate: \begin{lemma} \label{lem:EllLeft_special} For a configuration of six lines tangent to a conic, the K3 surface $\mathcal{Y}$ satisfies the following: \begin{enumerate} \item Equation~(\ref{eqns_Kummer}) is a Jacobian elliptic fibration $\pi^{\mathcal{Y}}_{\mathrm{nat}}: \mathcal{Y} \to \mathbb{P}^1$ with six singular fibers of type $I_2$, two singular fibers of type $I_0^*$, and the Mordell-Weil group of sections $\operatorname{MW}(\pi^{\mathcal{Y}}_{\mathrm{nat}})=(\mathbb{Z}/2\mathbb{Z})^2+\langle 1 \rangle$. \item Equation~(\ref{eqns_Kummer_alt}) is a Jacobian elliptic fibration $\pi^{\mathcal{Y}}_{\mathrm{alt}}: \mathcal{Y} \to \mathbb{P}^1$ with six singular fibers of type $I_2$, one fiber of type $I_5^*$, one fiber of type $I_1$, and a Mordell-Weil group of sections $\operatorname{MW}(\pi^{\mathcal{Y}}_{\mathrm{alt}})=\mathbb{Z}/2\mathbb{Z}$. \end{enumerate} \end{lemma} \begin{proof} The proof of (1) was given in \cite{MalmendierClingher:2018}*{Prop.~5.13}. The proof of (2) was given in \cite{MR3712162}*{Prop.~9}. \end{proof} \begin{lemma} \label{prop_Kummers} For a configuration of six lines tangent to a conic, the K3 surface $\mathcal{Y}$ is the Kummer surface $\operatorname{Kum}(\operatorname{Jac} \mathcal{C})$ of the principally polarized abelian surface $\operatorname{Jac}(\mathcal{C})$, i.e., the Jacobian variety of a generic genus-two curve $\mathcal{C}$. In particular, the curve $\mathcal{C}$ is given in Rosenhain normal form as \begin{equation}\label{Rosenhain} \mathcal{C}: \quad Y^2 = F(X)=X (X-1) (X-\lambda_1) (X-\lambda_2) (X-\lambda_3) . \end{equation} \end{lemma} \begin{proof} All inequivalent elliptic fibrations on a generic Kummer surface were determined explicitly by Kumar in \cite{MR3263663}. In fact, Kumar computed elliptic parameters and Weierstrass equations for all twenty-five different fibrations that appear, and analyzed the reducible fibers and Mordell-Weil lattices. Equation~(\ref{eqns_Kummer}) is the Weierstrass model of the elliptic fibration {\tt (7)} in the list of all possible elliptic fibrations in~\cite{MR3263663}*{Thm.~2}. \end{proof} \par The ordered tuple $(\lambda_1, \lambda_2, \lambda_3)$ determines a point in the moduli space of genus-two curves together with a level-two structure, and, in turn, a level-two structure on the corresponding Jacobian variety, i.e., a point in the moduli space of principally polarized abelian surfaces with level-two structure \begin{equation} \mathfrak{A}_2(2) = \mathbb{H}_2 / \Gamma_2(2), \end{equation} where $\Gamma_2(2)$ is the principal congruence sub-group of level two of the Siegel modular group $\Gamma_2 = \operatorname{Sp}_4(\mathbb{Z})$. Moreover, the Rosenhain invariants generate the function field $\mathbb{C}(\lambda_1, \lambda_2, \lambda_3)$ of $\mathfrak{A}_2(2)$. For a Jacobian variety with level-two structure corresponding to $\tau \in \mathfrak{A}_2(2)$, we have six odd theta characteristics and ten even theta characteristics; see \cites{MR2367218,MR3782461} for details. 
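These counts follow from the standard parity rule for half-integer characteristics: a characteristic with half-integer row vectors $a$ and $b$ is even or odd according to the sign of $(-1)^{4\, a \cdot b}$, and for $g=2$ there are $2^{g-1}(2^g+1)=10$ even and $2^{g-1}(2^g-1)=6$ odd characteristics.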
We denote the even theta characteristics by \begin{small} \[ \begin{split} \vartheta_1 = \begin{bmatrix} 0 & 0 \\[3pt] 0 & 0 \end{bmatrix}, \, \vartheta_2 = \begin{bmatrix} 0 & 0 \\[3pt] \frac{1}{2} & \frac{1}{2} \end{bmatrix}, \, \vartheta_3 & = \begin{bmatrix} 0 & 0 \\[3pt] \frac{1}{2} & 0 \end{bmatrix}, \; \vartheta_4 = \begin{bmatrix} 0 & 0 \\[3pt] 0 & \frac{1}{2} \end{bmatrix}, \; \vartheta_5 \ = \begin{bmatrix} \frac{1}{2} & 0 \\[3pt] 0 & 0 \end{bmatrix}, \\[5pt] \vartheta_6 = \begin{bmatrix} \frac{1}{2} & 0 \\[3pt] 0 & \frac{1}{2} \end{bmatrix}, \, \vartheta_7 = \begin{bmatrix} 0 & \frac{1}{2} \\[3pt] 0 & 0 \end{bmatrix}, \, \vartheta_8 & = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\[3pt] 0 & 0 \end{bmatrix}, \; \vartheta_9 = \begin{bmatrix} 0 & \frac{1}{2} \\[3pt] \frac{1}{2} & 0 \end{bmatrix}, \, \vartheta_{10} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\[3pt] \frac{1}{2} & \frac{1}{2} \end{bmatrix}, \end{split} \] \end{small} and write \begin{equation} \label{Eqn:theta_short} \vartheta_i(\tau) \ \text{instead of} \ \vartheta\begin{bmatrix} a^{(i)} \\ b^{(i)} \end{bmatrix}(\tau) \ \text{with $i=1,\dots ,10$,} \end{equation} and $\vartheta_i =\vartheta_i(0)$. Fourth powers of theta constants are modular forms of $\mathfrak{A}_2(2)$ and define the Satake compactification of $\mathfrak{A}_2(2)$ given by $\operatorname{Proj}[\vartheta^4_1: \dots : \vartheta^4_{10}]$. \par The three $\lambda$-parameters in the Rosenhain normal form~(\ref{Rosenhain}) can be expressed as ratios of even theta constants by Picard's lemma. There are 720 choices for such expressions: the forgetful map, i.e., forgetting the level-two structure, is a Galois covering of degree $720 = |\mathrm{S}_6|$, since $\mathrm{S}_6$ acts on the roots of $\mathcal{C}$ by permutations. Any of the $720$ choices may be used; we choose the one from \cite{MR0141643}: \begin{lemma} \label{lem:ppas} If $\mathcal{C}$ is a non-singular genus-two curve with period matrix $\tau$ for $\operatorname{Jac}(\mathcal{C})$, then $\mathcal{C}$ is equivalent to the curve~(\ref{Rosenhain}) with Rosenhain parameters $\lambda_1, \lambda_2, \lambda_3$ given by \begin{equation}\label{Picard} \lambda_1 = \frac{\vartheta_1^2\vartheta_4^2}{\vartheta_2^2\vartheta_3^2} \,, \quad \lambda_2 = \frac{\vartheta_4^2\vartheta_7^2}{\vartheta_2^2\vartheta_9^2}\,, \quad \lambda_3 = \frac{\vartheta_1^2\vartheta_7^2}{\vartheta_3^2\vartheta_9^2}\,. \end{equation} Conversely, given three distinct complex numbers $(\lambda_1, \lambda_2, \lambda_3)$ different from $0, 1, \infty$, there is a complex abelian surface $A$ with period matrix $[\mathbb{I}_2 | \tau]$ such that $A =\operatorname{Jac} (\mathcal{C})$ where $\mathcal{C}$ is the genus-two curve with period matrix $\tau$. \end{lemma} \begin{proof} A proof can be found in \cite{MR2367218}*{Lemma~8}. 
\end{proof} We also have the following: \begin{lemma} The following equations relate theta functions and branch points: \begin{equation}\label{Thomae} \begin{array}{ll} \vartheta_1^4 = \kappa \, \lambda_1 \lambda_3 (\lambda_2 -1) (\lambda_3 - \lambda_1) & \vartheta_2^4 = \kappa\, \lambda_3 (\lambda_2 - \lambda_1) (\lambda_3 - 1) \\[5pt] \vartheta_3^4 = \kappa \, \lambda_2 (\lambda_2 -1) ( \lambda_3 - \lambda_1) & \vartheta_4^4 = \kappa \, \lambda_1 \lambda_2 (\lambda_2 - \lambda_1) (\lambda_3 - 1) \\[5pt] \vartheta_5^4 = \kappa \, \lambda_2 (\lambda_1 - 1) (\lambda_3 - \lambda_1) (\lambda_3 - 1)& \vartheta_6^4 =\kappa \, \lambda_3 (\lambda_1 - 1) (\lambda_2 - 1) (\lambda_2 - \lambda_1) \\[5pt] \vartheta_7^4 = \kappa\, \lambda_2 \lambda_3 (\lambda_1 - 1) (\lambda_3 - \lambda_2) & \vartheta_8^4 = \kappa \, \lambda_1 (\lambda_2 - 1) (\lambda_3 - 1) (\lambda_3 - \lambda_2)\\[5pt] \vartheta_9^4 = \kappa \, \lambda_1 (\lambda_1 - 1) (\lambda_3 - \lambda_2), & \vartheta_{10}^4 = \kappa \, (\lambda_2 - \lambda_1) (\lambda_3 - \lambda_1) (\lambda_3 - \lambda_2), \end{array} \end{equation} where $\kappa\not = 0$ is a non-zero constant. \end{lemma} \begin{proof} The proof follows immediately using Thomae's formula. \end{proof} We can now express the invariants $t_i$ in terms of theta functions: \begin{proposition}\label{prop-5.6} For a configuration of six lines tangent to a conic associated with a genus-two curve $\mathcal{C}$ with level-two structure, the period matrix $\tau \in \mathfrak{A}_2(2)$ determines a point $\varpi \in \mathbf{H}_2/\Gamma_{\mathcal{T}}(1+i)$ such that \begin{equation} \label{modd_restriced_thetas} \Big[ t_1(\varpi): \dots : t_{10}(\varpi)\Big] = \Big[ \vartheta_1^4(\tau): \ \vartheta_2^4(\tau): \ \dots : \vartheta_{10}^4(\tau)\Big] \in \mathbb{P}^9 , \quad R=0. \end{equation} \end{proposition} \begin{proof} For the lines in Equations~(\ref{Eqn:6Ltangent}) we compute the period matrix $\tau$ for $\operatorname{Jac}(\mathcal{C})$ using Lemma~(\ref{lem:ppas}). Setting $\varpi = \tau$ yields a $\mathcal{T}$-invariant point in $\mathbf{H}_2/\Gamma_{\mathcal{T}}(1+i)$. By construction, the modular forms $\theta_i^2(\varpi)$ equal $t_i$ for $1 \le i \le 10$ and can be computed directly from Equations~(\ref{eqn:DOcoords}) for the lines in Equations~(\ref{Eqn:6Ltangent}). On the other hand, we can also compute the fourth powers of theta functions directly using Equations~(\ref{Thomae}) to confirm Equation~(\ref{modd_restriced_thetas}). \end{proof} \begin{remark} Proposition~\ref{prop-5.6} is a special case of a statement in \cite{MR1204828}*{Lemma~2.1.1(vi)} where it was shown that under the restriction to $\mathbb{H}_2/\Gamma_2(2)$ we have $\theta_i(\varpi)=\vartheta_i^2(\tau)$. \end{remark} For the Siegel three-fold $\mathfrak{A}_2=\mathbb{H}_2/\Gamma_2$, i.e., the set of isomorphism classes of principally polarized abelian surfaces, the even Siegel modular forms of $\mathfrak{A}_2$ form a polynomial ring in four free generators of degrees $4$, $6$, $10$ and $12$ usually denoted by $\psi_4, \psi_6, \chi_{10}$ and $\chi_{12}$, respectively. Igusa showed in \cite{MR0229643} that for the full ring of modular forms, one needs an additional generator $\chi_{35}$ which is algebraically dependent on the others. In fact, its square is a polynomial in the even generators given in \cite{MR0229643}*{p.~849}. \par Let $I_2, I_4, I_6 , I_{10}$ denote Igusa invariants of the binary sextic $Y^2=F(X)$ as defined in \cite{MR3731039}*{Sec.~2.3}. 
Igusa \cite{MR0229643}*{p.~\!848} proved that the relation between the Igusa invariants of a binary sextic $Y^2=F(X)$ defining a genus-two curve $\mathcal{C}$ with period matrix $\tau$ for $\operatorname{Jac}(\mathcal{C})$ and the even Siegel modular forms are as follows: \begin{equation} \label{invariants} \begin{split} I_2(F) & = -2^3 \cdot 3 \, \dfrac{\chi_{12}(\tau)}{\chi_{10}(\tau)} \;, \\ I_4(F) & = \phantom{-} 2^2 \, \psi_4(\tau) \;,\\ I_6(F) & = -\frac{2^3}3 \, \psi_6(\tau) - 2^5 \, \dfrac{\psi_4(\tau) \, \chi_{12}(\tau)}{\chi_{10}(\tau)} \;,\\ I_{10}(F) & = -2^{14} \, \chi_{10}(\tau) \;. \end{split} \end{equation} Conversely, the point $[I_2 : I_4 : I_6 : I_{10}]\in \mathbb{P}(2,4,6,10)$ in weighted projective space equals \begin{equation} \label{IgusaClebschProjective} \big[ 2^3 3^2 \chi_{12} : 2^2 3^2 \psi_4 \chi_{10}^2 : 2^3 3^2 \big(12 \psi_4 \chi_{12}+ \psi_6 \chi_{10} \big) \chi_{10}^2: 2^2 \chi_{10}^6 \big] . \end{equation} We have the following: \begin{proposition} \label{prop:compare} For a configuration of six lines tangent to a conic associated with the binary sextic $Y^2=F(X)$ defining a genus-two curve $\mathcal{C}$, the period matrix $\tau$ determines a point $\varpi \in \mathbf{H}_2/\Gamma_{\mathcal{T}}$ such that \begin{equation} \label{modd_restriced} \scalemath{\MyScaleMedium}{ \begin{array}{c} \Big[ J_2(\varpi) : J_3(\varpi) : J_4(\varpi): J_5(\varpi) : J_6(\varpi) \Big] = \Big[ \psi_4(\tau) : \ \psi_6(\tau) : \ 0: \ 2^{12} 3^5 \chi_{10}(\tau) : \ 2^{12}3^6 \chi_{12}(\tau)\Big]\\[1em] = \Big[ \frac{1}{4} I_4(F): \frac{1}{8} (I_2 I_4-3 I_6)(F): 0: -\frac{243}{4} I_{10}(F): \frac{243}{32} I_2 I_{10}(F)\Big] \end{array}} \end{equation} as points in $\mathbb{P}(2,3,4,5,6)$. The discriminant of the Satake sextic restricts to \begin{equation} \operatorname{Disc}(\mathcal{S}) = 2^{64} 3^{30} \frac{\chi_{35}^2(\tau)}{\chi_{10}(\tau)} . \end{equation} \end{proposition} \begin{proof} For the lines in Equations~(\ref{Eqn:6Ltangent}) we compute the period matrix $\tau$ for $\operatorname{Jac}(\mathcal{C})$ using Lemma~(\ref{lem:ppas}). Setting $\varpi = \tau$ and forgetting the level-two structure, yields a $\mathcal{T}$-invariant point in $\mathbf{H}_2/\Gamma_{\mathcal{T}}$. By construction, the modular forms $J_k(\varpi)$ equal $J_k$ for $2 \le k \le 6$ and can be computed directly from Equations~(\ref{eqn:Jinvariants}) for the lines in Equations~(\ref{Eqn:6Ltangent}). On the other hand, we can compute the Igusa invariants $I_2, I_4, I_6 , I_{10}$ of the binary sextic $Y^2=F(X)$ as defined in \cite{MR3731039}*{Sec.~2.3} for the genus-two curve~(\ref{Rosenhain}) to confirm Equation~(\ref{modd_restriced}). We then use Equation~(\ref{IgusaClebschProjective}) to convert to expressions in terms of $\psi_4, \psi_6, \chi_{10}$ and $\chi_{12}$. \end{proof} \par To summarize, when the six lines are tangent to a conic, the K3 surface $\mathcal{Y}$ becomes the Kummer surface $\mathrm{Kum}(\operatorname{Jac}\mathcal{C})$ of the Jacobian variety $\operatorname{Jac}(\mathcal{C})$ of a generic genus-two curve $\mathcal{C}$. 
In \cites{MR2824841,MR2935386} it was proved that the K3 surface $\mathcal{X}$ in turn is the Shioda-Inose surface $\mathrm{SI}(\operatorname{Jac}\mathcal{C})$, i.e., a K3 surface which carries a Nikulin involution such that quotienting by this involution and blowing up the fixed points recovers the Kummer surface $\mathcal{Y}$ \emph{and} the rational quotient map of degree two induces a Hodge isometry\footnote{A Hodge isometry between two transcendental lattices is an isometry preserving the Hodge structure.} between the transcendental lattices $T_{\mathcal{X}}(2)$\footnote{The notation $T_{\mathcal{X}}(2)$ indicates that the bilinear pairing on the transcendental lattice $T_{\mathcal{X}}$ is multiplied by $2$.} and $T_{\operatorname{Kum}(\operatorname{Jac}\mathcal{C})}$. In particular, the Shioda-Inose surface $\mathcal{X}$ and the Kummer surface $\operatorname{Kum}(\operatorname{Jac}\mathcal{C})$ have Picard rank greater than or equal to $17$. Proposition~\ref{prop:compare} then has the following corollary:
\begin{corollary}
Configurations of six lines tangent to a conic give rise to a three-parameter family of Kummer surfaces $\mathrm{Kum}(\operatorname{Jac}\mathcal{C})$ of the Jacobian varieties $\operatorname{Jac}(\mathcal{C})$ of generic genus-two curves $\mathcal{C}$. Moreover, the corresponding three-parameter family of Shioda-Inose surfaces $\mathrm{SI}(\operatorname{Jac}\mathcal{C})$ associated with $\mathrm{Kum}(\operatorname{Jac}\mathcal{C})$ is obtained by setting $\varepsilon=0$ and $\zeta=1$ in Equation~(\ref{mainquartic}).
\end{corollary}
We also have the following:
\begin{corollary}
Along the locus $J_4=0$, the lattice polarization of the K3 surfaces $\mathcal{X}(\alpha,\beta, \gamma, \delta, \varepsilon=0, \zeta=1)$ extends to a canonical $H \oplus E_8(-1) \oplus E_7(-1)$ lattice polarization.
\end{corollary}
\begin{proof}
It was proved in \cite{MR2824841} that the family in Equation~(\ref{mainquartic}) with $\varepsilon=0, \zeta=1$ is endowed with a canonical $H \oplus E_8(-1) \oplus E_7(-1)$ lattice polarization. They also found the parameters $(\alpha,\beta,\gamma,\delta)$ in terms of the standard even Siegel modular forms $\psi_4, \psi_6, \chi_{10}, \chi_{12}$ (cf.~\cite{MR0141643}) given by
\begin{equation}
(\alpha,\beta,\gamma,\delta) = \left(\psi_4, \psi_6, 2^{12}3^5 \, \chi_{10}, 2^{12}3^6 \, \chi_{12}\right) \;,
\end{equation}
which agrees with Equation~(\ref{modd_restriced}) and Equation~(\ref{modd}).
\end{proof}
\noindent The different Jacobian elliptic fibrations, the Satake sextic, and further confluences of singular fibers were investigated in \cites{MR3712162, MR3731039}.
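\begin{remark}
The Igusa invariants entering Proposition~\ref{prop:compare} can also be evaluated numerically with a computer algebra system. The following lines are only a minimal sketch in \textsc{Sage}: they assume a genus-two curve with branch points $0, 1, \lambda_1, \lambda_2, \lambda_3, \infty$ (cf.~Equation~(\ref{Rosenhain})) with sample values for the $\lambda_i$, and they assume that the built-in routine \texttt{igusa\_clebsch\_invariants()} returns the tuple $(I_2, I_4, I_6, I_{10})$; its normalization should be compared with the conventions of \cite{MR3731039}*{Sec.~2.3} before drawing any conclusions.
\begin{verbatim}
# A sketch (illustrative values only): Igusa-Clebsch invariants of the
# genus-two curve y^2 = x (x-1) (x-l1) (x-l2) (x-l3).
R.<x> = PolynomialRing(QQ)
l1, l2, l3 = 2, 3, 5                     # generic sample branch points
C = HyperellipticCurve(x*(x-1)*(x-l1)*(x-l2)*(x-l3))
I2, I4, I6, I10 = C.igusa_clebsch_invariants()
# [I2 : I4 : I6 : I10] is a point of the weighted projective space P(2,4,6,10),
# to be compared with the weighted projective point and the right-hand sides
# recorded above.
print(I2, I4, I6, I10)
\end{verbatim}
\end{remark}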
{} \begin{appendix} \section{Invariants of the quintic pencils} Using a 2-neighbor-step procedure twice starting with the natural fibration in Equation~(\ref{eqns_Kummer}), we constructed on the K3 surface $\mathcal{Y}$ associated with the double cover branched along the six lines given by Equations~(\ref{lines}) the following Weierstrass model: \begin{equation} \label{eqns_Kummer_alt_b} Y^2 = X \Big(X^2 - 2 \, \mathcal{B}(t) X + \mathcal{B}(t)^2 - 4 \, \mathcal{A}(t) \Big), \end{equation} where $\mathcal{B}(t)=t^3- J'_2 \, t - J'_3$ and $\mathcal{A}(t) = J'_4 t^2 - J'_5 t + J'_6$, and \begin{equation} \label{QP_inv} \scalemath{\MyScaleTiny}{ \begin{array}{rcl} 2 J'_2 & = & 2\,{a}^{2}{d}^{2}-2\,abcd+2\,{b}^{2}{c}^{2}-2\,{a}^{2}d+bca+adb+adc-2\,a{d}^{2}-2\,{b}^{2}c-2\,b{c}^{2}+bcd+2\,{a}^{2}\\ &&-2\,ba-2\,ca +ad+2\,{b}^{2}+bc-2\,db+2\,{c}^{2}-2\,cd+2\,{d}^{2},\\ -4 J'_3 &= & -4\,{a}^{3}{d}^{3}+6\,{a}^{2}bc{d}^{2}+6\,a{b}^{2}{c}^{2}d-4\,{b}^{3}{c}^{3}+6\,{a}^{3}{d}^{2}-6\,{a}^{2}bcd-3\,{a}^{2}b{d}^{2}\\ &&-3\,{a}^{2}c{d}^{2}+6\,{a}^{2}{d}^{3}-3\,a{b}^{2}{c}^{2}-6\,a{b}^{2}cd-6\,ab{c}^{2}d-6\,abc{d}^{2}+6\,{b}^{3}{c}^{2}+6\,{b}^{2}{c}^{3}\\ &&-3\,{b}^{2}{c}^{2}d+6\,{a}^{3}d-3\,{a}^{2}bc-6\,{a}^{2}bd-6\,{a}^{2}cd-6\,{a}^{2}{d}^{2}-6\,a{b}^{2}c-3\,a{b}^{2}d-6\,ab{c}^{2}+60\,abcd\\ &&-6\,ab{d}^{2}-3\,a{c}^{2}d-6\,ac{d}^{2}+6\,a{d}^{3}+6\,{b}^{3}c-6\,{b}^{2}{c}^{2}-6\,{b}^{2}cd+6\,b{c}^{3}-6\,b{c}^{2}d-3\,bc{d}^{2}-4\,{a}^{3}\\ &&+6\,{a}^{2}b+6\,{a}^{2}c-3\,{a}^{2}d+6\,a{b}^{2}-6\,bca-6\,adb+6\,a{c}^{2}-6\,adc-3\,a{d}^{2}-4\,{b}^{3}-3\,{b}^{2}c\\ &&+6\,{b}^{2}d-3\,b{c}^{2}-6\,bcd+6\,b{d}^{2}-4\,{c}^{3}+6\,{c}^{2}d+6\,c{d}^{2}-4\,{d}^{3},\\ 16 \, J'_4 & = & 81 \, \left( bca-adb-adc+bcd+ad-bc \right) ^{2},\\ -\frac{8}{81}J'_5 & = & -2\,{b}^{2}{c}^{2}d{a}^{3}+{b}^{2}c{d}^{2}{a}^{3}+{b}^{2}{d}^{3}{a}^{3}+b{c}^{2}{d}^{2}{a}^{3}-4\,bc{d}^{3}{a}^{3}+{c}^{2}{d}^{3}{a}^{3}+{b}^{3}{c}^{3}{a}^{2}+{b}^{3}{c}^{2}d{a}^{2}\\ &&-2\,{b}^{3}c{d}^{2}{a}^{2}+{b}^{2}{c}^{3}d{a}^{2}+4\,{b}^{2}{c}^{2}{d}^{2}{a}^{2}+{b}^{2}c{d}^{3}{a}^{2}-2\,b{c}^{3}{d}^{2}{a}^{2}+b{c}^{2}{d}^{3}{a}^{2}-4\,{b}^{3}{c}^{3}da\\ &&+{b}^{3}{c}^{2}{d}^{2}a+{b}^{2}{c}^{3}{d}^{2}a-2\,{b}^{2}{c}^{2}{d}^{3}a+{b}^{3}{c}^{3}{d}^{2}+{b}^{2}{c}^{2}{a}^{3}+{b}^{2}cd{a}^{3}-2\,{b}^{2}{d}^{2}{a}^{3}+b{c}^{2}d{a}^{3}\\ &&+4\,bc{d}^{2}{a}^{3}+b{d}^{3}{a}^{3}-2\,{c}^{2}{d}^{2}{a}^{3}+c{d}^{3}{a}^{3}-2\,{b}^{3}{c}^{2}{a}^{2}+{b}^{3}cd{a}^{2}+{b}^{3}{d}^{2}{a}^{2}-2\,{b}^{2}{c}^{3}{a}^{2}\\ &&-4\,{b}^{2}{c}^{2}d{a}^{2}-4\,{b}^{2}c{d}^{2}{a}^{2}-2\,{b}^{2}{d}^{3}{a}^{2}+b{c}^{3}d{a}^{2}-4\,b{c}^{2}{d}^{2}{a}^{2}+4\,bc{d}^{3}{a}^{2}+{c}^{3}{d}^{2}{a}^{2}\\ &&-2\,{c}^{2}{d}^{3}{a}^{2}+{b}^{3}{c}^{3}a+4\,{b}^{3}{c}^{2}da+{b}^{3}c{d}^{2}a+4\,{b}^{2}{c}^{3}da-4\,{b}^{2}{c}^{2}{d}^{2}a+{b}^{2}c{d}^{3}a+b{c}^{3}{d}^{2}a\\ &&+b{c}^{2}{d}^{3}a+{b}^{3}{c}^{3}d-2\,{b}^{3}{c}^{2}{d}^{2}-2\,{b}^{2}{c}^{3}{d}^{2}+{b}^{2}{c}^{2}{d}^{3}-4\,bcd{a}^{3}+b{d}^{2}{a}^{3}+c{d}^{2}{a}^{3}-2\,{a}^{3}{d}^{3}\\ &&+{b}^{2}{c}^{2}{a}^{2}+4\,{b}^{2}cd{a}^{2}+{b}^{2}{d}^{2}{a}^{2}+4\,b{c}^{2}d{a}^{2}-4\,{a}^{2}bc{d}^{2}+b{d}^{3}{a}^{2}+{c}^{2}{d}^{2}{a}^{2}+c{d}^{3}{a}^{2}+{b}^{3}{c}^{2}a\\ &&-4\,{b}^{3}cda+{b}^{2}{c}^{3}a-4\,a{b}^{2}{c}^{2}d+4\,{b}^{2}c{d}^{2}a-4\,b{c}^{3}da+4\,b{c}^{2}{d}^{2}a-4\,bc{d}^{3}a-2\,{b}^{3}{c}^{3}+{b}^{3}{c}^{2}d\\ &&+{b}^{2}{c}^{3}d+{b}^{2}{c}^{2}{d}^{2}+{a}^{3}{d}^{2}+{a}^{2}bcd-2\,{a}^{2}b{d}^{2}-2\,{a}^{2}c{d}^{2}+{a}^{2}{d}^{3}-2\,a{b}^{2}{c}^{2}+a{b}^{2}cd\\ &&+ab{c}^{2}d+abc{d}^{2}+{b}^{3}{c}^{2}+{b}^{2}{c}^{3}-2\,{b}^{2}{c}^{2}d,\\ \frac{16}{81}J'_6 & 
= & -4\,{b}^{2}{c}^{2}d{a}^{4}+4\,b{c}^{2}d{a}^{4}+4\,{b}^{2}cd{a}^{4}-10\,bcd{a}^{4}+4\,b{c}^{2}{d}^{4}{a}^{3}-22\,b{c}^{2}{d}^{3}{a}^{3}-4\,{b}^{3}c{d}^{3}{a}^{3}\\ &&-22\,{b}^{2}c{d}^{3}{a}^{3}-10\,{b}^{2}{c}^{3}{d}^{2}{a}^{3}+16\,b{c}^{3}{d}^{2}{a}^{3}-10\,{b}^{3}{c}^{2}{d}^{2}{a}^{3}+4\,{b}^{2}{c}^{2}{d}^{2}{a}^{3}+16\,{b}^{3}c{d}^{2}{a}^{3}-4\,{b}^{3}{c}^{3}d{a}^{3}\\ &&+16\,{b}^{2}{c}^{3}d{a}^{3}-10\,b{c}^{3}d{a}^{3}+16\,{b}^{3}{c}^{2}d{a}^{3}-10\,{b}^{3}cd{a}^{3}+4\,{b}^{2}{c}^{2}{d}^{4}{a}^{2}-10\,b{c}^{2}{d}^{4}{a}^{2}-10\,{b}^{2}c{d}^{4}{a}^{2}\\ &&+4\,{b}^{2}c{d}^{4}{a}^{3}+12\,bc{d}^{4}{a}^{3}-4\,b{c}^{3}{d}^{3}{a}^{3}+12\,{b}^{2}{c}^{2}{d}^{3}{a}^{3}+12\,bc{d}^{4}{a}^{2}-10\,{b}^{2}{c}^{3}{d}^{3}{a}^{2}+16\,b{c}^{3}{d}^{3}{a}^{2}\\ &&-10\,{b}^{3}{c}^{2}{d}^{3}{a}^{2}+4\,{b}^{4}c{d}^{2}a-10\,{b}^{4}{c}^{4}da+12\,{b}^{3}{c}^{4}da+12\,{b}^{2}{c}^{4}da-10\,b{c}^{4}da+12\,{b}^{4}{c}^{3}da+12\,{b}^{4}{c}^{2}da\\ &&-10\,{b}^{4}cda+4\,{b}^{2}{c}^{2}{d}^{3}{a}^{2}+16\,{b}^{3}c{d}^{3}{a}^{2}+4\,{b}^{2}{c}^{4}{d}^{2}{a}^{2}-4\,b{c}^{4}{d}^{2}{a}^{2}+12\,{b}^{3}{c}^{3}{d}^{2}{a}^{2}+4\,{b}^{2}{c}^{3}{d}^{2}{a}^{2}\\ &&+4\,{b}^{4}{c}^{2}{d}^{2}{a}^{2}+4\,{b}^{3}{c}^{2}{d}^{2}{a}^{2}-4\,{b}^{4}c{d}^{2}{a}^{2}+4\,{b}^{3}{c}^{4}d{a}^{2}-10\,{b}^{2}{c}^{4}d{a}^{2}+4\,b{c}^{4}d{a}^{2}+4\,{b}^{4}{c}^{3}d{a}^{2}\\ &&-22\,{b}^{3}{c}^{3}d{a}^{2}-10\,{b}^{4}{c}^{2}d{a}^{2}+4\,{b}^{4}cd{a}^{2}-4\,{b}^{2}{c}^{2}{d}^{4}a+4\,b{c}^{2}{d}^{4}a+4\,{b}^{2}c{d}^{4}a-10\,bc{d}^{4}a-4\,{b}^{3}{c}^{3}{d}^{3}a\\ &&+16\,{b}^{2}{c}^{3}{d}^{3}a-10\,b{c}^{3}{d}^{3}a+16\,{b}^{3}{c}^{2}{d}^{3}a-10\,{b}^{3}c{d}^{3}a+4\,{b}^{3}{c}^{4}{d}^{2}a-10\,{b}^{2}{c}^{4}{d}^{2}a+4\,b{c}^{4}{d}^{2}a+4\,{b}^{4}{c}^{3}{d}^{2}a\\ &&-22\,{b}^{3}{c}^{3}{d}^{2}a-10\,{b}^{4}{c}^{2}{d}^{2}a-10\,bc{d}^{4}{a}^{4}+4\,b{c}^{2}{d}^{3}{a}^{4}+4\,{b}^{2}c{d}^{3}{a}^{4}+12\,bc{d}^{3}{a}^{4}+4\,{b}^{2}{c}^{2}{d}^{2}{a}^{4}\\ &&-10\,b{c}^{2}{d}^{2}{a}^{4}-10\,{b}^{2}c{d}^{2}{a}^{4}+12\,bc{d}^{2}{a}^{4}+4\,{b}^{2}c{d}^{2}{a}^{3}+12\,b{c}^{2}d{a}^{3}+12\,{b}^{2}cd{a}^{3}+4\,b{c}^{2}{d}^{3}{a}^{2}+4\,{b}^{2}c{d}^{3}{a}^{2}\\ &&+4\,{b}^{2}{c}^{3}d{a}^{2}+12\,b{c}^{3}d{a}^{2}+4\,{b}^{3}{c}^{2}d{a}^{2}+12\,{b}^{3}cd{a}^{2}+12\,b{c}^{2}{d}^{3}a+12\,{b}^{2}c{d}^{3}a+4\,{b}^{2}{c}^{3}{d}^{2}a+12\,b{c}^{3}{d}^{2}a\\ &&+4\,{b}^{3}{c}^{2}{d}^{2}a+12\,{b}^{3}c{d}^{2}a+4\,bc{d}^{3}{a}^{3}-22\,bc{d}^{2}{a}^{3}-22\,{b}^{2}{c}^{2}d{a}^{3}+4\,bcd{a}^{3}-22\,bc{d}^{3}{a}^{2}\\ &&-22\,b{c}^{3}{d}^{2}{a}^{2}+12\,{b}^{2}{c}^{2}{d}^{2}{a}^{2}+4\,b{c}^{2}{d}^{2}{a}^{2}-22\,{b}^{3}c{d}^{2}{a}^{2}+4\,{b}^{2}c{d}^{2}{a}^{2}+4\,{b}^{2}{c}^{2}d{a}^{2}-10\,b{c}^{2}d{a}^{2}\\ &&-10\,{b}^{2}cd{a}^{2}-22\,{b}^{2}{c}^{2}{d}^{3}a+4\,bc{d}^{3}a+4\,{b}^{2}{c}^{2}{d}^{2}a-10\,b{c}^{2}{d}^{2}a-10\,{b}^{2}c{d}^{2}a+4\,{b}^{3}{c}^{3}da-22\,{b}^{2}{c}^{3}da\\ &&+16\,{a}^{2}bc{d}^{2}+16\,a{b}^{2}{c}^{2}d+4\,b{c}^{3}da-22\,{b}^{3}{c}^{2}da+4\,{b}^{3}cda+4\,b{c}^{2}{d}^{2}{a}^{3}-4\,{b}^{3}{c}^{2}a-4\,{b}^{2}{c}^{3}d\\ &&+4\,{c}^{3}{d}^{2}{a}^{2}-10\,{b}^{2}{d}^{3}{a}^{2}+4\,{b}^{2}{c}^{2}{a}^{3}+4\,{b}^{2}{d}^{2}{a}^{2}-10\,{b}^{3}{c}^{2}{d}^{2}+4\,{b}^{2}{c}^{2}{a}^{2}+16\,{b}^{3}{c}^{3}a+4\,{c}^{2}{d}^{2}{a}^{2}\\ &&-10\,{b}^{2}{c}^{3}{d}^{2}-4\,b{d}^{3}{a}^{2}+4\,{b}^{3}{d}^{2}{a}^{2}+16\,c{d}^{3}{a}^{3}-10\,{b}^{3}{c}^{2}{a}^{2}+16\,b{d}^{3}{a}^{3}-4\,c{d}^{2}{a}^{3}+16\,{b}^{2}{d}^{3}{a}^{3}\\ &&-4\,c{d}^{3}{a}^{2}-10\,{b}^{2}{c}^{3}{a}^{2}+4\,{b}^{2}{c}^{2}{d}^{3}-10\,{c}^{2}{d}^{3}{a}^{2}+16\,{b}^{3}{c}^{3}d-10\,{c}^{2}{d}^{2}{a}^{3}+16\,{b}^{3}{c}^{3}{d}^{2}-4\,{b}^{3}{c}^{2}d\\ 
&&+4\,{b}^{2}{c}^{2}{d}{2}-4\,b{d}^{2}{a}^{3}-4\,{b}^{2}{c}^{3}a+16\,{b}^{3}{c}^{3}{a}^{2}-10\,{b}^{2}{d}^{2}{a}^{3}+16\,{c}^{2}{d}^{3}{a}^{3}+{b}^{4}{c}^{2}+4\,{b}^{4}{c}^{4}-4\,{b}^{3}{c}^{4}\\ &&-4\,{b}^{4}{c}^{3}+{d}^{2}{a}^{4}+{d}^{4}{a}^{2}+4\,{d}^{4}{a}^{4}-4\,{d}^{3}{a}^{4}-4\,{d}^{4}{a}^{3}+{b}^{2}{c}^{4}+2{a}^{3}{d}^{3}+2{b}^{3}{c}^{3}+4\,{b}^{4}{c}^{2}d\\ &&+2{b}^{3}{c}^{3}{a}^{3}+{b}^{4}{c}^{4}{d}^{2}-10\,b{d}^{3}{a}^{4}-4\,{b}^{3}{d}^{2}{a}^{3}+4{b}^{3}{d}^{3}{a}^{3}+{b}^{2}{c}^{2}{a}^{4}+4\,{b}^{2}{c}^{4}a+4\,b{d}^{4}{a}^{4}\\ &&-10\,c{d}^{3}{a}^{4}-4\,{c}^{3}{d}^{3}{a}^{2}+4\,{b}^{2}{d}^{4}{a}^{2}+{b}^{4}{d}^{2}{a}^{2}-10\,{b}^{3}{c}^{4}a-4\,{b}^{3}{c}^{4}{a}^{2}+4\,b{d}^{2}{a}^{4}+4\,{b}^{2}{d}^{2}{a}^{4}\\ &&+2{b}^{3}{c}^{3}{d}^{3}-10\,c{d}^{4}{a}^{3}-4\,{b}^{3}{c}^{2}{a}^{3}+4\,{c}^{2}{d}^{4}{a}^{2}+4\,c{d}^{4}{a}^{2}-4\,{b}^{4}{c}^{3}{a}^{2}-10\,{b}^{3}{c}^{4}d+4\,c{d}^{4}{a}^{4}-10\,b{d}^{4}{a}^{3}\\ &&+4\,b{d}^{4}{a}^{2}+2\,{c}^{3}{d}^{3}{a}^{3}+4\,{b}^{2}{c}^{4}d+4\,c{d}^{2}{a}^{4}-4\,{b}^{3}{d}^{3}{a}^{2}+4\,{b}^{4}{c}^{2}a-10\,{b}^{4}{c}^{3}d-4\,{c}^{2}{d}^{4}{a}^{3}+{b}^{4}{c}^{4}{a}^{2}\\ &&-4\,{b}^{3}{c}^{4}{d}^{2}-4\,{b}^{2}{c}^{3}{d}^{3}+4\,{b}^{4}{c}^{2}{a}^{2}-4\,{b}^{2}{c}^{3}{a}^{3}+4\,{b}^{4}{c}^{2}{d}^{2}-4\,{b}^{3}{c}^{2}{d}^{3}+4\,{b}^{4}{c}^{4}a-10\,{b}^{4}{c}^{3}a+4\,{b}^{2}{c}^{4}{d}^{2}\\ &&+2\,{c}^{4}{d}^{2}{a}^{2}-4\,{b}^{4}{c}^{3}{d}^{2}-4\,{b}^{2}{d}^{3}{a}^{4}-4\,{b}^{2}{d}^{4}{a}^{3}-4\,{c}^{2}{d}^{3}{a}^{4}+4\,{c}^{2}{d}^{2}{a}^{4}\\ &&+{b}^{2}{d}^{4}{a}^{4}+{b}^{2}{c}^{2}{d}^{4}+4\,{b}^{4}{c}^{4}d-4\,{c}^{3}{d}^{2}{a}^{3}+{c}^{2}{d}^{4}{a}^{4}+4\,{b}^{2}{c}^{4}{a}^{2}. \end{array}} \end{equation} \end{appendix} \end{document}
\begin{document}
\title{Galois representations and Galois groups over $\mathbb{Q}$}
\begin{abstract}
In this paper we generalize results of P.~Le Duff to genus $n$ hyperelliptic curves. More precisely, let $C/\mathbb{Q}$ be a hyperelliptic genus $n$ curve, let $J(C)$ be the associated Jacobian variety and let $\bar{\rho}_{\ell}:G_{\mathbb{Q}}\rightarrow \GSp(J(C)[\ell])$ be the Galois representation attached to the $\ell$-torsion of $J(C)$. Assume that there exists a prime $p$ such that $J(C)$ has semistable reduction with toric dimension 1 at $p$. We provide an algorithm to compute a list of primes $\ell$ (if they exist) such that $\bar{\rho}_{\ell}$ is surjective. In particular, we realize $\mathrm{GSp}_6(\mathbb{F}_{\ell})$ as a Galois group over $\mathbb{Q}$ for all primes $\ell\in [11, 500000]$.
\end{abstract}
\maketitle
\section*{Introduction}
In this paper we present the work carried out at the conference \emph{Women in numbers -- Europe} (October 2013) by the working group \emph{Galois representations and Galois groups over $\mathbb{Q}$}. Our aim was to study the image of Galois representations attached to the Jacobian varieties of genus $n$ curves, motivated by the applications to the inverse Galois problem over $\mathbb{Q}$. In the case of genus 2, there are several results in this direction (e.g. \cite{LeDuff}, \cite{Di02}), and we wanted to explore the scope of these results. Our result is a generalization of P.~Le Duff's work to the genus $n$ setting, which allows us to produce realizations of groups $\mathrm{GSp}_6(\mathbb{F}_{\ell})$ as Galois groups over $\mathbb{Q}$, for infinite families of primes $\ell$ (with positive Dirichlet density). These realizations are obtained through the Galois representations $\bar{\rho}_{\ell}$ attached to the $\ell$-torsion points of the Jacobian of a genus $3$ curve. The first section of this paper contains a historical introduction to the inverse Galois problem and some results obtained in this direction by means of Galois representations associated to geometric objects. Section 2 presents some theoretical tools, which we collect to prove a result, valid for a class of abelian varieties $A$ of dimension $n$, that yields primes $\ell$ for which we can ensure surjectivity of the Galois representation attached to the $\ell$-torsion of $A$ (see Theorem \ref{thm:explicitsurjectivity}). In Section 3, we focus on hyperelliptic curves and explain the computations that allow us to realize $\mathrm{GSp}_6(\mathbb{F}_{\ell})$ as a Galois group over $\mathbb{Q}$ for all primes $\ell\in [11, 500000]$.

\noindent\textbf{Acknowledgements}

\noindent The authors would like to thank Marie-Jos\'e Bertin, Alina Bucur, Brooke Feigon and Leila Schneps for organizing the WIN-Europe conference which initiated this collaboration. Moreover, we are grateful to the Centre International de Rencontres Math\'ematiques, the Institut de Math\'ematiques de Jussieu and the Institut Henri Poincar\'e for their hospitality during several short visits. The authors are indebted to Irene Bouw, Jean-Baptiste Gramain, Kristin Lauter, Elisa Lorenzo, Melanie Matchett Wood, Frans Oort and Christophe Ritzenthaler for several insightful discussions. We also want to thank the anonymous referee for her/his suggestions that helped us to improve this paper.
S.~Arias-de-Reyna and N.~Vila are partially supported by the project MTM2012-33830 of the Ministerio de Econom\'ia y Competitividad of Spain, C.~Armana by a BQR 2013 Grant from Universit\'e de Franche-Comt\'e and M.~Rebolledo by the ANR Project R\'egulateurs ANR-12-BS01-0002. L.~Thomas thanks the Laboratoire de Math\'ematiques de Besan\c con for its support. \section{ Images of Galois representations and the inverse Galois problem }\label{sec:1} One of the main objectives in algebraic number theory is to understand the absolute Galois group of the rational field, $G_{\mathbb{Q}}=\Gal (\overline{ \mathbb{Q}}/ \mathbb{Q} ).$ We believe that we would get all arithmetic information if we knew the structure of $G_{\mathbb{Q}}$. This is a huge group, but it is compact with respect to the profinite topology. Two problems arise in a natural way: on the one hand, the identification of the finite quotients of $G_{\mathbb{Q}}$, and on the other hand, the study of $G_{\mathbb{Q}}$ via its Galois representations. The inverse Galois problem asks whether, for a given finite group $G$, there exists a Galois extension $L/ \Q$ with Galois group isomorphic to $G$. In other words, whether a finite group $G$ occurs as a quotient of $G_{\mathbb{Q}}$. As is well known, this is an open problem. The origin of this question can be traced back to Hilbert. In 1892, he proved that the symmetric group $S_n$ and the alternating group $A_n$ are Galois groups over $\Q$, for all $n$. We also have an affirmative answer to the inverse Galois problem for some other families of finite groups. For instance, all finite solvable groups and all sporadic simple groups, except the Mathieu group $M_{23}$, are known to be Galois groups over $\Q$. A Galois representation is a continuous homomorphism \[ \rho: G_{\mathbb{Q}} \rightarrow \GL_n(R), \] where $R$ is a topological ring. Examples for $R$ are $\mathbb{C}$, $\mathbb{Z}/n\mathbb{Z}$ or $\mathbb{F}_q$ with the discrete topology, and $\mathbb{Q}_\ell$ with the $\ell$-adic topology. Conjectures by Artin, Serre, Fontaine-Mazur and Langlands, which have experienced significant progress in recent years, are connected with these Galois representations. Since $G_{\mathbb{Q}}$ is compact, the image of $\rho$ is finite when the topology of $R$ is discrete. As a consequence, images of Galois representations yield Galois realizations over $\mathbb{Q}$ of finite linear groups $$\Gal (\overline{ \mathbb{Q}}^{\ker \rho}/\mathbb{Q})\simeq \rho(G_{\mathbb{Q}})\subseteq \GL_n(R).$$ This gives us an interesting connection between these two questions and provides us with a strategy to address the inverse Galois problem. Let us assume that $\rho$ is an $\ell$-adic Galois representation associated to some arithmetic-geometric object. In this case, we have additional information on the ramification behavior, like the characteristic polynomial of the image of the Frobenius elements at unramified primes or the description of the image of the inertia group at the prime $\ell$. This gives us some control on the image of mod $\ell$ Galois representations in some cases and we can obtain, along the way, families of linear groups over finite fields as Galois groups over $\Q$. More precisely, let $X/\mathbb{Q}$ be a smooth projective variety and let $$\rho_\ell: G_\mathbb{Q} \rightarrow \GL(H^k_{\textrm{\'et}} (X_{\overline{ \mathbb{Q}}}, \mathbb{Q}_\ell)),$$ be the $\ell$-adic Galois representation on the $k$-th \'{e}tale cohomology. 
We know that: \begin{itemize} \item $\rho_\ell$ is unramified away from $\ell$ and the primes of bad reduction for $X$, \item if $p$ is a prime of good reduction and $p\neq \ell$, the characteristic polynomial of $\rho_\ell(\mathrm{Frob}_p)$ has coefficients in $\mathbb{Z}$, is independent of $\ell$ and its roots have absolute value $p^{k/2}$.\end{itemize} Let us consider an attached residual Galois representation $$\overline{\sigma}_\lambda: G_\mathbb{Q} \rightarrow \GL_n(\mathbb{F}_{\ell^r}),$$ where $\lambda$ is a prime in a suitable number field, dividing $\ell$ and $r\geq 1$ an integer. To determine the image of $\overline{\sigma}_\lambda$, we usually need to know the classification of maximal subgroups of $ \GL_n(\mathbb{F}_{\ell^r})$, as well as a description of the image of the inertia group at $\ell$ and the computation of the characteristic polynomial of $\overline{\sigma}_\lambda(\mathrm{Frob}_p)$, for some prime of good reduction $p\neq \ell$. Let us summarize the known cases of realizations of finite linear groups as Galois groups over $\Q$, obtained via Galois representations. In the case of $2$-dimensional Galois representations attached to an elliptic curve $E$ defined over $\Q$ without complex multiplication, we know, by a celebrated result of Serre \cite{Proprietes}, that the associated residual Galois representation is surjective, for all but finitely many primes. Moreover, it can be shown that if we take, for example, the elliptic curve $E$ defined by the Weierstrass equation $ Y^2+Y=X^3-X$, then the attached residual Galois representation is surjective, for all primes $\ell$. Thus we obtain that the group $\GL_2(\mathbb{F}_{\ell})$ occurs as a Galois group over $\Q$, for all primes $\ell$. Actually we have additional information in this case: the Galois extension $\Q(E[\ell])/\Q$ is a Galois realization of $\GL_2(\mathbb{F}_{\ell})$, and it is unramified away from $37$ and $\ell$, since $E$ has conductor $37$. The image of $2$-dimensional Galois representations, attached to classical modular forms without complex multiplication, has been studied by Ribet \cite{R}. The image of the residual Galois representations attached to a normalized cuspidal Hecke eigenform without complex multiplication is as large as possible, for all but finitely many primes $\lambda$. This gives us that the groups $\PSL_2(\mathbb{F}_{\ell^r})$ or $\PGL_2(\mathbb{F}_{\ell^r})$ can occur as Galois groups over $\Q$. Moreover, we have effective control of primes with large image for the mod $\ell$ Galois representation attached to specific modular forms. This gives us Galois realizations over $\mathbb{Q}$ of the groups $\PSL_2(\mathbb{F}_{\ell^r})$, $r$ even, and $\PGL_2(\mathbb{F}_{\ell^r})$, $r$ odd; $1 \leq r\leq 10 $, for explicit infinite families of primes $\ell$, given by congruence conditions on $\ell$ (cf. \cite{R-V}, \cite{D-V00}). Recently, it has been proven that the groups $\PSL_2(\mathbb{F}_{\ell})$ are Galois groups over $\mathbb{Q}$ for all $\ell>3$, by considering the Galois representations attached to an explicit elliptic surface (see \cite{Z}). Results on generically large image of compatible systems of 3-dimensional Galois representations associated to some smooth projective surfaces and to some cohomological modular forms are obtained in \cite{D-V04}. 
The effective control of primes with large image for the residual 3-dimensional Galois representations attached to some explicit examples gives us that the groups $\PSL_3(\mathbb{F}_{\ell})$, $\PSU_3(\mathbb{F}_\ell)$, $\SL_3(\mathbb{F}_\ell) $, $\SU_3(\mathbb{F}_\ell) $ are Galois groups over $ \mathbb{Q}$, for explicit infinite families of primes $\ell$ (cf. \cite{D-V04}). In the case of $4$-dimensional Galois representations, we have results on large image for compatible systems of Galois representations attached to abelian surfaces $A$ defined over $\mathbb{Q}$ such that $\mathrm{End}_{\overline{\mathbb{Q} }}(A)=\mathbb{Z}$, to Siegel modular forms of genus two and to some pure motives (cf. \cite{LeDuff}, \cite{DKR}, \cite{D02}, \cite{D-V11}). The effective control of primes with large image in some explicit cases gives us that the groups $\PGSp_4(\mathbb{F}_{\ell})$, for all $\ell >3$; and the groups $\PGSp_4(\mathbb{F}_{\ell^3})$, $\PSp_4(\mathbb{F}_{\ell^2})$, $\PSL_4(\mathbb{F}_{\ell})$ and $\PSU_4(\mathbb{F}_{\ell})$, for explicit infinite families of primes $\ell$, are Galois groups over $\Q$ (cf. \cite{ArVi}, \cite{DKR}, \cite{D02}, \cite{D-V08}).\\ In the next section we consider the image of residual Galois representations attached to principally polarized abelian varieties of dimension $n$, which provides Galois realizations over $\Q$ of the general symplectic group $\GSp_{2n}(\mathbb{F}_\ell)$, for almost all $\ell$. \\ Finally, we remark that, using these methods, we can expect to obtain realizations of the groups $\PSL_2(\mathbb{F}_{\ell^r})$, $\PGL_2(\mathbb{F}_{\ell^r})$, $\PGSp_{2n}(\mathbb{F}_{\ell^r})$ and $\PSp_{2n}(\mathbb{F}_{\ell^r})$ as Galois groups over $\mathbb{Q}$. In fact, by considering compatible systems of Galois representations attached to certain automorphic forms, we know (cf. \cite{W}, \cite{D-W}, \cite{KLS}, \cite{ArDiShWi}) that these groups are Galois groups over $\mathbb{Q}$, for infinitely many integers $r$ and infinitely many primes $\ell$. More precisely, we have: \begin{itemize} \item ``Vertical direction": For every fixed prime $\ell$, there are infinitely many positive integers~$r$, such that $\PSL_2(\mathbb{F}_{\ell^r})$ can be realized as a Galois group over $\mathbb{Q}$. Moreover, for each $n \geq 2$, there are infinitely many positive integers~$r$, such that either $\PGSp_{2n}(\mathbb{F}_{\ell^r})$ or $\PSp_{2n}(\mathbb{F}_{\ell^r})$ are Galois groups over $\mathbb{Q}$ (cf. \cite{W}, \cite{KLS}). \item ``Horizontal direction": For every fixed $r$, there is a positive density set of primes~$\ell$, such that $\PSL_2(\mathbb{F}_{\ell^r})$ can be realized as a Galois group over $\mathbb{Q}$. Moreover, for each $n \geq 2$, there is a set of primes~$\ell$ of positive density for which either $\PGSp_{2n}(\mathbb{F}_{\ell^r})$ or $\PSp_{2n}(\mathbb{F}_{\ell^r})$ are Galois groups over $\mathbb{Q}$ (cf. \cite{D-W}, \cite{ArDiShWi}). \end{itemize} \section{Galois representations attached to abelian varieties}\label{sec:2} \subsection{The image of the $\ell$-torsion Galois representation} Let $A$ be an abelian variety of dimension $n$ defined over $\mathbb{Q}$. The set of $\overline{\mathbb{Q}}$-points of $A$ admits a group structure. Let $\ell$ be a prime number. Then the subgroup of the $\overline{\mathbb{Q}}$-points of $A$ consisting of all $\ell$-torsion points, which is denoted by $A[\ell]$, is isomorphic to $(\mathbb{Z}/\ell\mathbb{Z})^{2n}$ and it is endowed with a natural action of $G_{\mathbb{Q}}$. 
Therefore, it gives rise to a (continuous) Galois representation \begin{equation*} \overline{\rho}_{A, \ell}:G_{\mathbb{Q}}\rightarrow \GL(A[\ell])\simeq \GL_{2n}(\mathbb{F}_{\ell}). \end{equation*} As explained in Section \ref{sec:1}, we obtain a realization of the image of $\overline{\rho}_{A, \ell}$ as a Galois group over~$\mathbb{Q}$. In this section, we will consider principally polarized abelian varieties, i.e. we will consider pairs $(A, \lambda)$, where $A$ is an abelian variety (defined over $\mathbb{Q}$) and $\lambda:A\rightarrow A^{\vee}$ is an isogeny of degree 1 (that is, an isomorphism between $A$ and the dual abelian variety $A^{\vee}$), induced from an ample divisor on $A$. Not every abelian variety $A$ admits a principal polarization $\lambda$ and, when it does, it causes certain restrictions on the image of $\overline{\rho}_{A, \ell}$. Let $V$ be a vector space of dimension $2n$, which is defined over $\mathbb{F}_{\ell}$ and endowed with a symplectic (i.e.~skew-symmetric, nondegenerate) pairing $\langle \cdot, \cdot \rangle:V\times V\rightarrow \mathbb{F}_{\ell}$. We consider the \emph{symplectic group} \begin{equation*}\Sp(V, \langle \cdot, \cdot \rangle):= \{M\in \GL(V): \forall v_1, v_2\in V, \langle Mv_1, Mv_2\rangle=\langle v_1, v_2\rangle\}\end{equation*} and the \emph{general symplectic group} \begin{equation*}\GSp(V, \langle \cdot, \cdot \rangle):= \{M\in \GL(V): \exists m\in \mathbb{F}_{\ell}^{\times}\text{ such that }\forall v_1, v_2\in V, \langle Mv_1, Mv_2\rangle=m \langle v_1, v_2\rangle\}.\end{equation*} When $A$ is a principally polarized abelian variety, the image of $\overline{\rho}_{A, \ell}$ lies inside the general symplectic group of $A[\ell]$ with respect to a certain symplectic pairing. More precisely, denote by $\mu_{\ell}(\overline{\mathbb{Q}})$ the group of $\ell$-th roots of unity inside a fixed algebraic closure $\overline{\mathbb{Q}}$ of $\mathbb{Q}$. Recall that the Weil pairing $e_{\ell}$ is a perfect pairing \begin{equation*} e_{\ell}:A[\ell]\times A^{\vee}[\ell]\rightarrow \mu_{\ell}(\overline{\mathbb{Q}}). \end{equation*} If $(A, \lambda)$ is a principally polarized abelian variety, we can consider the pairing \begin{equation*}\begin{aligned} e_{\ell, \lambda}:A[\ell]\times A[\ell]&\rightarrow \mu_{\ell}(\overline{\mathbb{Q}})\\ (P, Q)& \mapsto e_{\ell}(P, \lambda(Q))\end{aligned} \end{equation*} which is a non-degenerate skew-symmetric pairing (i.e.~a symplectic pairing), compatible with the action of $G_{\mathbb{Q}}$. This last condition means that, for any $\sigma\in G_{\mathbb{Q}}$, \begin{equation*} (e_{\ell, \lambda}(P, Q))^{\sigma}=e_{\ell, \lambda}(P^{\sigma}, Q^{\sigma}). \end{equation*} Note that $G_{\mathbb{Q}}$ acts on $\mu_{\ell}(\overline{\mathbb{Q}})$ via the mod $\ell$ cyclotomic character $\chi_{\ell}$, so that $(e_{\ell, \lambda}(P, Q))^{\sigma}=(e_{\ell, \lambda}(P, Q))^{\chi_{\ell}(\sigma)}$. If we fix a primitive $\ell$-th root of unity $\zeta_{\ell}$, we may write the pairing $e_{\ell, \lambda}(\cdot, \cdot)$ additively, i.e. we define \begin{equation*}\langle \cdot, \cdot \rangle:A[\ell]\times A[\ell] \rightarrow \mathbb{F}_{\ell} \end{equation*} as $\langle P, Q\rangle:= a\text{ such that }\zeta^a=e_{\ell, \lambda}(P, Q)$. 
In other words, we have a symplectic pairing on the $\mathbb{F}_{\ell}$-vector space $A[\ell]$ such that, for all $\sigma\in G_{\mathbb{Q}}$, the linear map $\overline{\rho}(\sigma):A[\ell]\rightarrow A[\ell]$ satisfies that there exists a scalar, namely $\chi_{\ell}(\sigma)$, such that \begin{equation}\label{eq:multiplier} \langle\overline{\rho}(\sigma)(P), \overline{\rho}(\sigma)(Q)\rangle = \chi_{\ell}(\sigma)\langle P, Q\rangle.\end{equation} That is to say, the image of the representation $\overline{\rho}_{A, \ell}$ is contained in the general symplectic group $\GSp(A[\ell], \langle \cdot, \cdot \rangle)\simeq \GSp_{2n}(\mathbb{F}_{\ell})$. Therefore, below we will consider $\overline{\rho}_{A,\ell}$ as a map into $\GSp(A[\ell], \langle \cdot, \cdot \rangle)\simeq \GSp_{2n}(\mathbb{F}_{\ell})$ and we will say that it is surjective if $\mathrm{Im}\overline{\rho}_{A, \ell}=\GSp(A[\ell])\simeq \GSp_{2n}(\mathbb{F}_{\ell})$. The determination of the images of the Galois representations $\overline{\rho}_{A, \ell}$ attached to the $\ell$-torsion of abelian varieties is a topic that has received a lot of attention. A remarkable result by Serre quoted in \cite[n. 136, Theorem 3]{Ouvres} is: \begin{thm}[Serre] Let $A$ be a principally polarized abelian variety of dimension $n$, defined over a number field $K$. Assume that $n=2, 6$ or $n$ is odd and furthermore assume that $\mathrm{End}_{\overline{K}}(A)=\mathbb{Z}$. Then there exists a bound $B_{A, K}$ such that, for all $\ell>B_{A, K}$, \begin{equation*}\mathrm{Im} \overline{\rho}_{A, \ell}=\GSp(A[\ell])\simeq \GSp_{2n}(\mathbb{F}_{\ell}).\end{equation*} \end{thm} For arbitrary dimension, the result is not true (see e.g.~\cite{Mumford69} for an example in dimension 4). However, one eventually obtains symplectic image by making some extra assumptions. For example, there is the following result of C.~Hall (cf.~\cite{Hall}). \begin{thm}[Hall]\label{thm:Hall} Let $A$ be a principally polarized abelian variety of dimension $n$ defined over a number field $K$, such that $\mathrm{End}_{\overline{K}}(A)=\mathbb{Z}$, and satisfying the following property: \begin{quote} {\normalfont (T)} There is a finite extension $L/K$ so that the N\'eron model of $A/L$ over the ring of integers of $L$ has a semistable fiber with toric dimension 1. \end{quote} Then there is an (explicit) finite constant $B_{A, K}$ such that, for all $\ell\geq B_{A, K}$, \begin{equation*} \mathrm{Im} \overline{\rho}_{A, \ell}=\GSp(A[\ell])\simeq \GSp_{2n}(\mathbb{F}_{\ell}). \end{equation*} \end{thm} \begin{rem}\label{rem:conditionT} In the case when $A=J(C)$ is the Jacobian of a hyperelliptic curve $C$ of genus $n$, say defined by an equation $Y^2=f(X)$ with $f(X)\in K[X]$ a polynomial of degree $2n+1$, Hall gives a sufficient condition for Condition (T) to be satisfied at a prime $\mathfrak{p}$ of the ring of integers of $K$; namely, the coefficients of $f(X)$ should have $\mathfrak{p}$-adic valuation greater than or equal to zero and the reduction of $f(X)$ mod $\mathfrak{p}$ (which is well-defined) should have one double zero in a fixed algebraic closure of the residue field, while all the other zeroes are simple. \end{rem} Applying the result of Hall with $K=\mathbb{Q}$ yields the following partial answer to the inverse Galois problem: \begin{cor}\label{cor:invGal} Let $n\in \mathbb{N}$ be any natural number. Then for \emph{all sufficiently large} primes $\ell$, the group $\GSp_{2n}(\mathbb{F}_{\ell})$ can be realized as a Galois group over $\mathbb{Q}$. 
\end{cor} \begin{rem} Several people, including the anonymous referee, pointed us to the following fact: if we consider a family of genus $n$ hyperelliptic curves $C_t$ defined over $\mathbb{Q}(t)$, with big monodromy at $\ell$, then Hilbert's Irreducibility Theorem provides us with infinitely many specializations $t=t_0\in \mathbb{Q}$ such that the Jacobian $J_{t_0}$ of the corresponding curve $C_{t_0}$ satisfies that $\mathrm{Im}\overline{\rho}_{J_{t_0}, \ell}\simeq \mathrm{GSp}_{2n}(\mathbb{F}_{\ell})$. Such families of curves exist for any odd $\ell$ (see e.g.~\cite{Hall2008} or \cite{Zarhin}). In particular, for any $n\in \mathbb{N}$ and any odd $\ell$, the Inverse Galois problem has an affirmative answer for the group $\mathrm{GSp}_{2n}(\mathbb{F}_{\ell})$. Although ensuring the existence of the desired curve, this fact does not tell us how to find such a curve explicitly. \end{rem} In the case of curves of genus 2, Le Duff has studied the image of the Galois representations attached to the $\ell$-torsion of $J(C)$, when Condition (T) in Theorem \ref{thm:Hall} is satisfied. The main result in \cite{LeDuff} is the following: \begin{thm}[Le Duff]\label{thm:LeDuff} Let $C$ be a genus $2$ curve defined over $\mathbb{Q}$, with bad reduction of type (II) or (IV) according to the notation in \cite{Liu} at a prime $p$. Let $\Phi_p$ be the group of connected components of the special fiber of the N\'eron model of $J(C)$ at $p$. For each prime $\ell$ and each prime $q$ of good reduction of $C$, let $P_{q, \ell}(X)=X^4 + a X^3 + b X^2 + qaX + q^2\in \mathbb{F}_{\ell}[X]$ be the characteristic polynomial of the image under $\overline{\rho}_{J(C), \ell}$ of the Frobenius element at $q$ and let $Q_{q, \ell}(X)= X^2 + aX + b-2q\in \mathbb{F}_{\ell}[X]$, with discriminants $\Delta_P$ and $\Delta_Q$ respectively. Then for all primes $\ell$ not dividing $2 pq \vert \Phi_p\vert$ and such that $\Delta_P$ and $\Delta_Q$ are not squares in $\mathbb{F}_{\ell}$, the image of $\overline{\rho}_{J(C), \ell}$ coincides with $\GSp_{4}(\mathbb{F}_{\ell})$. \end{thm} Using this result, he obtains a realization of $\GSp_4(\mathbb{F}_{\ell})$ as Galois group over $\mathbb{Q}$ for all odd primes $\ell$ smaller than 500000. \subsection{Explicit surjectivity result} A key point in Hall's result is the fact that the image under $\overline{\rho}_{A, \ell}$ of the inertia subgroup at the place $\mathfrak{p}$ of $L$ which provides the semistable fiber with toric dimension 1 is generated by a nontrivial transvection (whenever $\ell$ does not divide $\mathfrak{p}$ nor the cardinality of the group $\Phi_{\mathfrak{p}}$ of connected components of the special fiber of the N\'eron model at $\mathfrak{p}$). A detailed proof of this fact can be found in Proposition 1.3 of \cite{LeDuff}. We expand on this point. Given a finite-dimensional vector space $V$ over $\mathbb{F}_{\ell}$, endowed with a symplectic pairing $\langle \cdot, \cdot \rangle:V\times V\rightarrow \mathbb{F}_{\ell}$, a transvection is an element $T\in \GSp(V, \langle \cdot, \cdot\rangle)$ such that there exists a hyperplane $H\subset V$ satisfying that the restriction $T\vert_H$ is the identity on $H$. We say that it is a nontrivial transvection if $T$ is not the identity\footnote{We adopt the convention that identity is a transvection so that the set of transvections for a given hyperplane $H$ is a group.}. 
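For concreteness, let us add a small illustration which is not taken from the references above: for every nonzero $v\in V$, the map $T_v\colon x\mapsto x+\langle x, v\rangle v$ is a nontrivial transvection whose hyperplane is $H=\{x\in V: \langle x, v\rangle=0\}$. The following \textsc{Sage} lines verify this for one (arbitrarily chosen) symplectic form and vector on $V=\mathbb{F}_7^4$; they are merely a sketch and play no role in the arguments of this paper.
\begin{verbatim}
# A sketch: the symplectic transvection T_v(x) = x + <x,v> v on V = F_7^4,
# where <x,y> = x^t J y for the Gram matrix J below.
F = GF(7)
J = matrix(F, [[0,0,1,0],[0,0,0,1],[-1,0,0,0],[0,-1,0,0]])
v = vector(F, [1, 2, 0, 3])                        # any nonzero vector works
T = identity_matrix(F, 4) - v.column()*v.row()*J   # matrix of T_v on column vectors
assert T.transpose()*J*T == J                      # T_v preserves the symplectic pairing
H = (v.row()*J).right_kernel()                     # the hyperplane {x : <x,v> = 0}
assert all(T*h == h for h in H.basis())            # T_v restricts to the identity on H
assert T != identity_matrix(F, 4)                  # and T_v is nontrivial
\end{verbatim}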
It turns out that the subgroups of $\GSp(V, \langle \cdot, \cdot \rangle)$ that contain a nontrivial transvection can be classified into three categories as follows (for a proof, see e.g.~\cite[Theorem~1.1]{ArDiWi}): \begin{thm}\label{thm:classification} Let $\ell\geq 5$ be a prime, let $V$ be a finite-dimensional vector space over $\mathbb{F}_{\ell}$, endowed with a symplectic pairing $\langle \cdot, \cdot \rangle:V\times V\rightarrow \mathbb{F}_{\ell}$ and let $G\subset \GSp(V, \langle \cdot, \cdot\rangle)$ be a subgroup that contains a nontrivial transvection. Then one of the following holds: \begin{enumerate} \item $G$ is reducible. \item There exists a proper decomposition $V = \bigoplus_{i\in I} V_i$ of $V$ into equidimensional non-singular symplectic subspaces $V_i$ such that, for each $g \in G$ and each $i \in I$, there exists some $j \in I$ with $g(V_i) \subseteq V_j$ and such that the resulting action of $G$ on $I$ is transitive. \item $G$ contains $\Sp(V, \langle \cdot, \cdot\rangle)$. \end{enumerate} \end{thm} \begin{rem} Assume that $V$ is the $\ell$-torsion group of a principally polarized abelian variety $A$ defined over $\mathbb{Q}$ and $\langle \cdot, \cdot \rangle$ is the symplectic pairing coming from the Weil pairing. If $G=\mathrm{Im}\overline{\rho}_{A, \ell}$ satisfies the third condition in Theorem \ref{thm:classification}, then $G=\GSp(V, \langle \cdot, \cdot \rangle)$. Indeed, we have the following exact sequence \begin{equation*} 1 \rightarrow \Sp(V, \langle \cdot, \cdot \rangle)\rightarrow \GSp(V, \langle \cdot, \cdot \rangle) \rightarrow \mathbb{F}_{\ell}^{\times}\rightarrow 1, \end{equation*} where the map $m:\GSp(A[\ell], \langle \cdot, \cdot \rangle) \rightarrow \mathbb{F}_{\ell}^{\times}$ associates to $M$ the scalar $a$ satisfying that, for all $u, v\in V$, $\langle Mu, Mv\rangle=a\langle u, v\rangle$. By Equation \eqref{eq:multiplier}, the restriction of $m$ to $\mathrm{Im}(\overline{\rho}_{A, \ell})$ coincides with the mod $\ell$ cyclotomic character $\chi_{\ell}$. We can easily conclude the result using that $\chi_{\ell}$ is surjective onto $\mathbb{F}_{\ell}^{\times}$. \end{rem} Even in the favourable case when we know that $\mathrm{Im}(\overline{\rho}_{A, \ell})$ contains a nontrivial transvection, we still need to distinguish between the three cases in Theorem \ref{thm:classification}. In this paper, we will make use of the following consequence of Theorem \ref{thm:classification} (cf.\ Corollary 2.2 of \cite{ArKa}). \begin{cor}\label{cor:classification} Let $\ell\geq 5$ be a prime, let $V$ be a finite-dimensional vector space over $\mathbb{F}_{\ell}$, endowed with a symplectic pairing $\langle \cdot, \cdot \rangle:V\times V\rightarrow \mathbb{F}_{\ell}$ and let $G \subset \GSp(V, \langle \cdot, \cdot\rangle)$ be a subgroup containing a nontrivial transvection and an element whose characteristic polynomial is irreducible and which has nonzero trace. Then $G$ contains $\Sp(V, \langle \cdot, \cdot\rangle)$. \end{cor} In order to apply this corollary in our situation, we need some more information on the image of $\overline{\rho}_{A, \ell}$. We will obtain this by looking at the images of the Frobenius elements $\mathrm{Frob}_q$ for primes $q$ of good reduction of $A$. More generally, let $A$ be an abelian variety defined over a field $K$ and assume that $\ell$ is a prime different from the characteristic of $K$. 
Any endomorphism $\alpha$ of $A$ induces an endomorphism of $A[\ell]$, in such a way that the characteristic polynomial of $\alpha$ (which is a monic polynomial in $\mathbb{Z}[X]$, see e.g.~$\S 3$, Chapter 3 of \cite{Lang} for its definition) coincides, after reduction mod $\ell$, with the characteristic polynomial of the corresponding endomorphism of $A[\ell]$. In the case when $K$ is a finite field (say of cardinality $q$), we can consider the Frobenius endomorphism $\phi_{q}\in \mathrm{End}_K(A)$, induced by the action of the Frobenius element $\mathrm{Frob}_q\in \mathrm{Gal}(\overline{K}/K)$. Then the reduction mod $\ell$ of the characteristic polynomial of $\phi_q$ coincides with the characteristic polynomial of $\overline{\rho}_{A, \ell}(\mathrm{Frob}_q)$. This will turn out to be particularly useful in the case when $A=J(C)$ is the Jacobian of a curve $C$ of genus $n$ defined over $K$, since one can determine the characteristic polynomial of $\overline{\rho}_{J(C), \ell}(\mathrm{Frob}_q)$ by counting the $\mathbb{F}_{q^r}$-valued points of $C$, for $r=1, \dots, n$. As a consequence, we can state the following result, which will be used in the next section.
\begin{thm}\label{thm:explicitsurjectivity}
Let $A$ be a principally polarized $n$-dimensional abelian variety defined over $\mathbb{Q}$. Assume that there exists a prime $p$ such that the following condition holds:
\begin{quote} {\normalfont ($T_p$)} The special fiber of the N\'eron model of $A$ over $\mathbb{Q}_p$ is semistable with toric dimension $1$.\end{quote}
Denote by $\Phi_p$ the group of connected components of the special fiber of the N\'eron model at $p$. Let $q$ be a prime of good reduction of $A$, let $A_q$ be the special fiber of the N\'eron model of $A$ over $\mathbb{Q}_q$ and let $P_q(X)=X^{2n} + aX^{2n-1} + \cdots + q^n\in \mathbb{Z}[X]$ be the characteristic polynomial of the Frobenius endomorphism acting on $A_q$. Then for all primes $\ell$ which do not divide $6pq\vert \Phi_p\vert a$ and are such that the reduction of $P_q(X)$ mod $\ell$ is irreducible over $\mathbb{F}_{\ell}$, the image of $\overline{\rho}_{A, \ell}$ coincides with $\GSp_{2n}(\mathbb{F}_{\ell})$.
\end{thm}
\begin{rem}
The condition that $\ell$ does not divide $a$ corresponds to the Frobenius element having non-zero trace modulo $\ell$. Note that the theorem is vacuous when $a=0$.
\end{rem}
\section{Galois realization of $\GSp_{2n}(\mathbb{F}_{\ell})$ from a hyperelliptic curve of genus $n$}\label{sec:3}
Let $C$ be a hyperelliptic curve of genus $n$ over $\Q$, defined by an equation $Y^2=f(X)$ where $f(X)\in\mathbb{Q}[X]$ is a polynomial of degree~$2n+1$. Let $A=J(C)$ be its Jacobian variety. We assume that $A$ satisfies condition $(T_p)$ for some prime $p$. In this section we present an algorithm, based on Theorem~\ref{thm:explicitsurjectivity}, which computes a finite set of prime numbers $\ell$ for which the Galois representation $\overline{\rho}_{A, \ell}$ has image $\GSp_{2n}(\mathbb{F}_{\ell})$. We apply this procedure to an example of a genus~3 curve using a computer algebra system.
\subsection{Strategy}
First, to apply Theorem~\ref{thm:explicitsurjectivity}, we restrict ourselves to hyperelliptic curves of genus $n$ whose Jacobian varieties satisfy Condition ($T_p$) for some $p$.
Namely, we fix a prime number $p$ and then choose $f(X) \in\mathbb{Z}[X]$ monic of degree $2n+1$ such that both of the following conditions hold:
\begin{enumerate}
\item The polynomial $f(X)$ only has simple roots over $\overline{\mathbb{Q}}$, so that $Y^2=f(X)$ is the equation of a hyperelliptic curve $C$ over $\mathbb{Q}$.
\item All coefficients of $f(X)$ have $p$-adic valuation greater than or equal to zero, and the reduction $f(X)\bmod p$ has one double zero in $\overline{\mathbb{F}}_p$, and its other zeroes are simple. This ensures that $A=J(C)$ satisfies Condition ($T_p$) (see Remark~\ref{rem:conditionT}).
\end{enumerate}
Any prime of good reduction for $C$ is also a prime of good reduction for its Jacobian $A$. Primes of good reduction for the hyperelliptic curve can be computed using the discriminant of Weierstrass equations for $C$ (see~\cite{Lockhart}). In our case, it turns out that any prime not dividing the discriminant of $f(X)$ is of good reduction for $C$, hence for $A$. We take such a prime number $q$ of good reduction for $A$. Recall that $P_q(X) \in \mathbb{Z}[X]$ is the characteristic polynomial of the Frobenius endomorphism acting on the fiber $A_q$. Let $\mathcal{S}_q$ denote the set of prime numbers $\ell$ satisfying the following conditions:
\begin{itemize}
\item[(i)] $\ell$ divides neither $6pq\vert \Phi_p\vert$ nor the coefficient of $X^{2n-1}$ in $P_q(X)$,
\item[(ii)] the reduction of $P_q(X)$ modulo $\ell$ is irreducible over $\mathbb{F}_{\ell}$.
\end{itemize}
Note that if the coefficient of $X^{2n-1}$ in $P_q(X)$ is nonzero, condition (i) rules out only finitely many prime numbers $\ell$, whereas if it vanishes, condition (i) rules out all prime numbers $\ell$. By Theorem~\ref{thm:explicitsurjectivity}, for each $\ell \in \mathcal{S}_q$ the representation $\overline{\rho}_{A,\ell}$ is surjective with image $\GSp_{2n}(\mathbb{F}_{\ell})$. Also, primes in $\mathcal{S}_q$ can be computed effectively up to a given fixed bound. Since we want the polynomial $P_q(X)$ (of degree $2n$) to be irreducible modulo $\ell$, its Galois group $G$ over $\Q$ must contain a ${2n}$-cycle (in particular, $G$ is a transitive subgroup of $S_{2n}$). Therefore, by an application of a weaker version of the Chebotarev density theorem due to Frobenius (\cite{SL}, ``Theorem of Frobenius'', p.~32), the density of $\mathcal{S}_{q}$ is
\[ \frac{\# \{ \sigma \in G \subset S_{2n} \colon \sigma \text{ is a $2n$-cycle} \}}{\# G}. \]
This estimate is far from what Theorem~\ref{thm:Hall} provides us, namely that the density of $\ell$'s with $\mathrm{Im}(\overline{\rho}_{A, \ell})=\GSp_{2n}(\mathbb{F}_{\ell})$ is $1$. This leads us to discuss the role of the prime $q$. First of all, we can see that
\[ \bigcup_{q} \mathcal{S}_q = \{\ell \textrm{ prime} \colon \ell \nmid 6p|\Phi_p| \mbox{ and } \overline{\rho}_{A,\ell} \textrm{ surjective} \}, \]
where the union is taken over all primes $q$ of good reduction for $A$. Note that the inclusion $\subset$ follows directly from Theorem \ref{thm:explicitsurjectivity}. To show the other inclusion $\supset$, suppose now that $\ell \nmid 6p|\Phi_p|$ and that the representation at $\ell$ is surjective. Its image $\GSp_{2n}(\mathbb{F}_{\ell})$ contains an element with irreducible characteristic polynomial and nonzero trace (see for instance Proposition A.2 of \cite{ArKa}).
This element defines a conjugacy class $C\subset \GSp_{2n}(\mathbb{F}_{\ell})$ and the Chebotarev density theorem ensures that there exists $q$ such that $\overline{\rho}_{A,\ell}(\mathrm{Frob}_q)\in C$, hence $\ell \in \mathcal{S}_q$. Moreover, if, for some fixed $\ell$, the events ``$\ell$ belongs to $\mathcal{S}_q$'' are independent as $q$ varies, the density of primes $\ell$ for which we can prove that $\overline{\rho}_{A,\ell}$ is surjective will increase when we take several different primes $q$. A sufficient condition for this density to tend to 1 is that there exists an infinite family of primes $q$ for which the splitting fields of $P_q(X)$ are pairwise linearly disjoint over $\mathbb{Q}$. Therefore, it seems reasonable to expect that computing the sets $\mathcal{S}_q$ for several values of $q$ increases the density of primes $\ell$ for which we know the surjectivity of $\overline{\rho}_{A,\ell}$. This is what we observe numerically in the next example.
\subsection{A numerical example in genus $3$}
We consider the hyperelliptic curve $C$ of genus $n=3$ over $\Q$ defined by $Y^2=f(X)$, where
\[ f(X) = X^2(X-1)(X+1)(X-2)(X+2)(X-3)+7(X-28) \in \mathbb{Z}[X]. \]
This is a Weierstrass equation, which is minimal at all primes $\ell$ different from $2$ (see~\cite[Lemma~2.3]{Lockhart}), with discriminant $-2^{12} \cdot 7 \cdot 73\cdot 1069421 \cdot 11735871491$. Thus, $C$ has good reduction away from the primes appearing in this factorization. Clearly, $p=7$ is a prime for which the reduction of $f(X)$ modulo $7$ has one double zero in $\overline{\mathbb{F}}_7$ and otherwise only simple zeroes. Therefore, its Jacobian $J(C)$ satisfies Condition ($T_7$). As we computed with \textsc{Magma}, the order of the component group $\Phi_7$ is $2$. Recall that $P_q(X)$ coincides with the characteristic polynomial of the Frobenius endomorphism of the reduced curve $C$ modulo $q$ over $\mathbb{F}_q$. Our method provides no significant result for $q\in\{3,5\}$ because for $q=3$ the characteristic polynomial $P_q(X)$ is not irreducible in $\mathbb{Z}[X]$ and for $q=5$ it has zero trace in $\mathbb{Z}$. So in this example, we first take $q=11$. The curve has $11, 135$ and $1247$ points over $\mathbb{F}_{11}$, $\mathbb{F}_{11^2}$ and $\mathbb{F}_{11^3}$, respectively. The characteristic polynomial $P_{11}(X)$ is
\[ P_{11}(X) = X^6-X^5+7X^4-35X^3+77X^2-121X+1331 \]
and it is irreducible over $\Q$. Its Galois group $G$ has order $48$ and is isomorphic to the wreath product $S_2 \wr S_3$. This group is the semidirect product of the direct product of $3$ copies of $S_2$ by $S_3$, which acts by permuting the factors (see~\cite[Chapter~4]{JamesKerber}): An element of $S_2 \wr S_3$ can be written as $((a_1,a_2,a_3), \sigma)$, where $(a_1,a_2,a_3)$ denotes an element of the direct product $S_2\times S_2 \times S_2$ and $\sigma$ an element of $S_3$. The group law is defined as follows: $$((a_1,a_2,a_3),\sigma)((a_1',a_2',a_3'),\sigma')=((a_1,a_2,a_3)(a_1',a_2',a_3')^{\sigma},\sigma\sigma'),$$ where $(a_1',a_2',a_3')^{\sigma} = (a'_{\sigma(1)},a'_{\sigma(2)},a'_{\sigma(3)})$. One can also view the wreath product $S_2\wr S_3$ as the centralizer of $(12)(34)(56)$ in $S_6$, through an embedding $\psi : S_2 \wr S_3 \rightarrow S_6$ whose image is isomorphic to the so-called Weyl group of type $B_3$ (\cite[4.1.18 and 4.1.33]{JamesKerber}).
More precisely, under $\psi$, the image of an element $((a_1,a_2,a_3),\sigma)\in S_2\wr S_3$ is the permutation in $S_6$ that acts on $\{1,2,...,6\}$ as follows: it first permutes the elements of the sets $E_1=\{1,2\}$, $E_2=\{3,4\}$ and $E_3=\{5,6\}$ separately, according to $a_1$, $a_2$ and $a_3$ respectively (identifying $E_2,E_3$ with $\{1,2\}$ in an obvious way) and then permutes the pairs $E_1,E_2,E_3$ according to the action of $\sigma$ on the indices. For example, denoting $S_2=\{\mathrm{id},\tau\}$, the image under $\psi$ of $((\tau,\mathrm{id},\mathrm{id}),(123))$ is the $6$-cycle $(135246)$. Let us now determine the elements of $S_2\wr S_3$ which map to $6$-cycles in $S_6$ through the embedding $\psi$. For an element in $S_2 \wr S_3$ to be of order $6$, it has to be of the form $((a_1,a_2,a_3),\gamma)$ with $\gamma$ a $3$-cycle in $S_3$. Now, $\psi$ sends an element $((a_1,a_2,a_3),\gamma)$ where either one or three $a_i$'s are $\mathrm{id}$, to a product of two disjoint $3$-cycles in~$S_6$. So the elements of $S_2\wr S_3$ which are $6$-cycles in $S_6$ are among the eight elements $((\mathrm{id},\mathrm{id},\tau),\gamma)$, $((\mathrm{id},\tau,\mathrm{id}),\gamma)$, $((\tau,\mathrm{id},\mathrm{id}),\gamma)$ and $((\tau,\tau,\tau),\gamma)$ with $\gamma=(123)$ or $\gamma=(132)$. Moreover, \cite[Theorem~4.2.8]{JamesKerber} (see also \cite[Lemma~3.1]{Gramain} or \cite{taylor}) ensures that these $8$ elements are conjugate. Since $\psi((\tau,\mathrm{id},\mathrm{id}),(123))=(135246)$ is a $6$-cycle, we deduce that the $8$ elements listed above are exactly the elements of $S_2\wr S_3$ which are $6$-cycles in $S_6$. To conclude, the Galois group $G$, viewed as a subgroup of $S_6$, contains exactly $8$ elements that are $6$-cycles. Therefore, the density of $\mathcal{S}_{11}$ is $8/48=1/6$. We can compute $P_q(X)$ using efficient algorithms available in \textsc{Magma} \cite{magma} or \textsc{Sage} \cite{sage}, which are based on $p$-adic methods. We found that there are $6891$ prime numbers $11 \leq \ell \leq 500000$ that belong to $\mathcal{S}_{11}$. For these $\ell$, we know that the image of $\overline{\rho}_{A,\ell}$ is $\GSp_6(\mathbb{F}_{\ell})$, so the groups $\GSp_{6}(\mathbb{F}_{\ell})$ are realized as Galois groups arising from the $\ell$-torsion of the Jacobian of the hyperelliptic curve $C$. For instance, the first ten elements of $\mathcal{S}_{11}$ are
\[ 47, 71, 79, 83, 101, 113, 137, 251, 269, 271. \]
Also, the proportion of prime numbers $11 \leq \ell \leq 500000$ in $\mathcal{S}_{11}$ is about 0.1659, which is in good agreement with the density $1/6$ computed above. By looking at polynomials $P_q(X)$ for several primes $q$ of good reduction, we are able to significantly improve the known proportion of primes $\ell$, up to a given bound, for which the Galois representation is surjective. Namely, we computed that
\[ \{\ell \text{ prime} \colon 11 \leq \ell \leq 500000 \} \subseteq \bigcup_{11\leq q \leq 571} \mathcal{S}_q. \]
As a consequence, for any prime $11 \leq \ell \leq 500000$, the group $\GSp_{6}(\mathbb{F}_{\ell})$ is realized as a Galois group arising from the $\ell$-torsion of the Jacobian of the hyperelliptic curve $C$. This is reminiscent of Le Duff's numerical data for $\GSp_{4}(\mathbb{F}_{\ell})$ (see Theorem~\ref{thm:LeDuff}). Combining all of the above suggests that the single hyperelliptic curve $C$ might provide a positive answer to the inverse Galois problem for $\GSp_{6}(\mathbb{F}_{\ell})$ for any prime $\ell \geq 11$.
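\medskip
\noindent For the reader's convenience, we close with a minimal \textsc{Sage} sketch of the computation of $\mathcal{S}_{11}$ described above. It is meant only as an illustration: the search bound on $\ell$ is kept small, the order of $\Phi_7$ is taken from the \textsc{Magma} computation quoted earlier, and the built-in routine \texttt{frobenius\_polynomial()} is assumed to return the characteristic polynomial $P_q(X)$ in the normalization used in the text.
\begin{verbatim}
# A sketch of the computation of S_11 (illustrative search bound only).
R.<X> = PolynomialRing(ZZ)
f = X^2*(X-1)*(X+1)*(X-2)*(X+2)*(X-3) + 7*(X-28)
p, q, Phi = 7, 11, 2                  # p from Condition (T_7); |Phi_7| = 2
Cq = HyperellipticCurve(f.change_ring(GF(q)))
Pq = Cq.frobenius_polynomial()        # expected: X^6-X^5+7X^4-35X^3+77X^2-121X+1331
a = Pq[5]                             # coefficient of X^(2n-1) with n = 3; here a = -1
S11 = [l for l in primes(11, 1000)    # small bound, for illustration only
       if (6*p*q*Phi*a) % l != 0 and Pq.change_ring(GF(l)).is_irreducible()]
print(S11[:10])                       # compare with 47, 71, 79, 83, ... listed above
\end{verbatim}
Extending the search bound and letting $q$ run over further primes of good reduction reproduces, in principle, the computation over $11 \leq q \leq 571$ reported above.
\end{document}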
\begin{document}
\title[Classification of real algebraic curves] {Classification of real algebraic curves under blow-spherical homeomorphisms at infinity}
\author[J. E. Sampaio]{Jos\'e Edson Sampaio}
\author[E. C. da Silva]{Euripedes Carvalho da Silva}
\address{Jos\'e Edson Sampaio: Departamento de Matem\'atica, Universidade Federal do Cear\'a, Rua Campus do Pici, s/n, Bloco 914, Pici, 60440-900, Fortaleza-CE, Brazil. \newline E-mail: {\tt [email protected]} }
\address{Euripedes Carvalho da Silva: Departamento de Matem\'atica, Instituto Federal de Educa\c{c}\~ao, Ci\^encia e Tecnologia do Cear\'a, Av. Parque Central, 1315, Distrito Industrial I, 61939-140, Maracana\'u-CE, Brazil. \newline E-mail: {\tt [email protected]} }
\thanks{The first named author was partially supported by CNPq-Brazil grant 310438/2021-7. This work was supported by the Serrapilheira Institute (grant number Serra -- R-2110-39576). }
\keywords{Blow-spherical equivalence; Algebraic curves; Classification of algebraic curves.}
\subjclass[2010]{14B05; 32S50}
\begin{abstract}
In this article, we present a complete classification, with normal forms, of the real algebraic curves under blow-spherical homeomorphisms at infinity.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
In this article, we study real algebraic curves under blow-spherical homeomorphisms from the global point of view. Roughly speaking, two subsets of Euclidean spaces are blow-spherical homeomorphic if their spherical modifications (see Definition \ref{def:strict_transf}) are homeomorphic and, in particular, this homeomorphism induces a homeomorphism between their tangent links (see Definition \ref{def:bs_homeomorphism}). This gives an equivalence which lies between topological equivalence and semialgebraic bi-Lipschitz equivalence. The study of analytic sets under blow-spherical homeomorphisms from the local point of view has been carried out in several works, e.g., \cite{BirbrairFG:2012,BirbrairFG:2017, Sampaio:2015,Sampaio:2020,Sampaio:2022b}. In \cite{Sampaio:2020}, the first author, among other things, presented a complete classification of the germs of complex analytic curves under blow-spherical homeomorphisms. In \cite{Sampaio:2022b}, the first author presented several results related to blow-spherical homeomorphisms between germs of real analytic sets and, in particular, a classification of germs of real analytic curves under blow-spherical homeomorphisms. Recently, in \cite{SampaioS:2023}, the authors of this article presented a complete classification of complex algebraic curves under (global) blow-spherical homeomorphisms (see Theorem 4.6 in \cite{SampaioS:2023}). They also presented a complete classification of complex algebraic curves under blow-spherical homeomorphisms at infinity with normal forms (see Theorems 4.2 and 4.3 in \cite{SampaioS:2023}). So, it is natural to try to classify real algebraic curves under blow-spherical homeomorphisms. The main aim of this article is to present a complete classification of real algebraic curves under blow-spherical homeomorphisms at infinity (see Proposition \ref{propblowspherical}). Moreover, we also present normal forms for this classification (see Proposition \ref{thmnormalform}). We also present a realization result for our (complete) invariant (see Proposition \ref{thm:realizable}).
In order to be a bit more precise, for a real algebraic curve $X$, we define our invariant $k(X,\infty)$ (see Definition \ref{def:relative_mult}), which is a point of $(\mathbb{Z}_{>0})^n$, where $n$ is the cardinality of the link (at 0) of the tangent cone of $X$ at infinity (see Subsection \ref{subsec:tg_cones}), and in Proposition \ref{propblowspherical}, we prove that two real algebraic curves $X$ and $\widetilde{X}$ are blow-spherical homeomorphic at infinity if and only if $k(X,\infty)=k(\widetilde{X},\infty)$. In Subsection \ref{sec:normal_forms}, we present a collection of real algebraic curves and we prove in Proposition \ref{thmnormalform} that a real algebraic curve is blow-spherical homeomorphic at infinity to exactly one curve of that collection. In Subsection \ref{sec:realization}, we observe that $k(X,\infty)\in \mathcal{N}$, where $\mathcal{N}=\bigcup\limits_{n=1}^{\infty} \mathcal{N}_n$ and $\mathcal{N}_n$ is the set of all $(\eta_1, \eta_2,\cdots,\eta_n)\in (\mathbb{Z}_{>0})^n$ such that $\eta _1\leq \eta _2\leq \cdots \leq \eta_n$. Moreover, given $\eta=(\eta_1,...,\eta_k)\in \mathcal{N}$, we prove in Proposition \ref{thm:realizable} that there is a real algebraic curve $X$ such that $ k(X,\infty)=\eta$ if and only if $\eta_1+...+\eta_k$ is an even number.
\section{Preliminaries}\label{section:cones}
Here, all the real algebraic sets are assumed to be pure dimensional.
\subsection{Definition of the blow-spherical equivalence}
Let us consider the {\bf spherical blowing-up at infinity} (resp. $p$) of $\mathbb{R}^{n+1}$, $\rho_{\infty}\colon\mathbb{S}^n\times (0,+\infty )\to \mathbb{R}^{n+1}$ (resp. $\rho_p\colon\mathbb{S}^n\times [0,+\infty )\to \mathbb{R}^{n+1}$), given by $\rho_{\infty}(x,r)=\frac{1}{r}x$ (resp. $\rho_p(x,r)=rx+p$). Note that $\rho_{\infty}\colon\mathbb{S}^n\times (0,+\infty )\to \mathbb{R}^{n+1}\setminus \{0\}$ (resp. $\rho_p\colon\mathbb{S}^n\times (0,+\infty )\to \mathbb{R}^{n+1}\setminus \{p\}$) is a homeomorphism with inverse mapping $\rho_{\infty}^{-1}\colon\mathbb{R}^{n+1}\setminus \{0\}\to \mathbb{S}^n\times (0,+\infty )$ (resp. $\rho_p^{-1} \colon\mathbb{R}^{n+1}\setminus \{p\}\to \mathbb{S}^n\times (0,+\infty )$) given by $\rho_{\infty}^{-1}(x)=(\frac{x}{\|x\|},\frac{1}{\|x\|})$ (resp. $\rho_p^{-1}(x)=(\frac{x-p}{\|x-p\|},\|x-p\|)$).
\begin{definition}\label{def:strict_transf}
The {\bf strict transform} of the subset $X$ under the spherical blowing-up $\rho_{\infty}$ (resp. $\rho_{p}$) is $X'_{\infty}:=\overline{\rho_{\infty}^{-1}(X\setminus \{0\})}$ (resp. $X'_{p}:=\overline{\rho_{p}^{-1}(X\setminus \{p\})}$). The subset $X_{\infty}'\cap (\mathbb{S}^n\times \{0\})$ (resp. $X_{p}'\cap (\mathbb{S}^n\times \{0\})$) is called the {\bf boundary} of $X'_{\infty}$ (resp. $X'_p$) and it is denoted by $\partial X'_{\infty}$ (resp. $\partial X'_p$).
\end{definition}
\begin{definition}\label{def:bs_homeomorphism}
Let $X$ and $Y$ be subsets in $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. Let $p\in \mathbb{R}^n\cup \{\infty\}$, $q\in \mathbb{R}^m\cup \{\infty\}$. A homeomorphism $\varphi:X\rightarrow Y$ such that $q=\lim\limits_{x\rightarrow p}{\varphi(x)}$ is called a {\bf blow-spherical homeomorphism at $p$} if the homeomorphism $$\rho^{-1}_{q}\circ \varphi\circ \rho_p \colon X'_p\setminus \partial X'_p\rightarrow Y'_q\setminus \partial Y'_q$$ extends to a homeomorphism $\varphi'\colon X'_p\rightarrow Y'_q$.
A homeomorphism $\varphi\colon X\rightarrow Y$ is said a {\bf blow-spherical homeomorphism} if it is a blow-spherical homeomorphism for all $p\in \overline{X}\cup \{\infty\}$. In this case, we say that the sets $X$ and $Y$ are {\bf blow-spherical homeomorphic} or {\bf blow-isomorphic} ({\bf at $(p,q)$}). \end{definition} \begin{definition}\label{def:strong_diffeo} Let $X$ and $Y$ be subsets in $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. We say that a blow-spherical homeomorphism $h\colon X\rightarrow Y$ is a {\bf strong blow-spherical homeomorphism} if $h({\rm Sing}_1(X))={\rm Sing}_1(Y)$ and $h|_{X\setminus {\rm Sing}_1(X)}\colon X\setminus {\rm Sing}_1(X)\rightarrow Y\setminus {\rm Sing}_1(Y)$ is a $C^1$ diffeomorphism, where, for $A\subset \mathbb{R}^p$, ${\rm Sing}_k(A)$ denotes the points $x\in A$ such that, for any open neighbourhood $U$ of $x$, $A\cap U$ is not a $C^k$ submanifold of $\mathbb{R}^p$. A blow-spherical homeomorphism at $\infty$, $\varphi\colon X\rightarrow Y$, is said a {\bf strong blow-spherical homeomorphism at $\infty$ } if there are compact sets $K\subset \mathbb{R}^n$ and $\tilde{K}\subset\mathbb{R}^m$ such that $\varphi(X\setminus K)=Y\setminus \tilde{K}$ and the restriction $\varphi|_{X\setminus K}\colon X\setminus K\to Y\setminus \tilde{K}$ is a strong blow-spherical homeomorphism. \end{definition} \begin{remark} We have some examples: \begin{enumerate} \item $\mbox{Id}:X \rightarrow X$ is a blow-spherical homeomorphism for any $X\subset \mathbb{R}^n$; \item Let $X\subset \mathbb{R}^n$, $Y\subset \mathbb{R}^m$ and $Z\subset \mathbb{R}^k$ be subsets. If $f\colon X\rightarrow Y$ and $g\colon Y\rightarrow Z$ are blow-spherical homeomorphisms, then $g\circ f\colon X\rightarrow Z$ is a blow-spherical homeomorphism. \end{enumerate} \end{remark} Thus we have a category called {\bf blow-spherical category}, which is denoted by $\mbox{BS}$, where its objects are all the subsets of Euclidean spaces and its morphisms are all blow-spherical homeomorphisms. By definition, if $X$ and $Y$ are strongly blow-spherical homeomorphic then they are blow-spherical homeomorphic, but the converse does not hold in general, as we can see in the next example. \begin{example} Let $V=\{(x,y)\in\mathbb{R}^2; y=|x|\}$. The mapping $\varphi\colon \mathbb{R}\to V$ given by $\varphi(x)=(x,|x|)$ is a blow-spherical homeomorphism. However, since ${\rm Sing}_1(\mathbb{R})=\emptyset$ and ${\rm Sing}_1(V)=\{(0,0)\}$, there is no strong blow-spherical homeomorphism $h\colon \mathbb{R}\to V$ and, in particular, $\mathbb{R}$ and $V$ are not strongly blow-spherical homeomorphic. \end{example} \begin{proposition}[Proposition 3.12 in \cite{SampaioS:2023}]\label{Lip_implies_bs} Let $X\subset \mathbb{R}^n$ and $Y\subset \mathbb{R}^m$ be semialgebraic sets. If $h\colon X\rightarrow Y$ is a semialgebraic outer lipeomorphism then $h$ is a blow-spherical homeomorphism. \end{proposition} \begin{example} $V=\{(x,y)\in\mathbb{R}^2; y^2=x^3\}$ and $W=\{(x,y)\in\mathbb{R}^2; y^2=x^5\}$ are blow-spherical homeomorphic, but are not outer bi-Lipschitz homeomorphic. \end{example} \subsection{Tangent Cones}\label{subsec:tg_cones} Let $X\subset \mathbb{R}^{n+1}$ be an unbounded semialgebraic set (resp. subanalytic set with $p\in \overline{X}$). We say that $v\in \mathbb{R}^{n+1}$ is a tangent vector of $X$ at infinity (resp. $p$) if there are a sequence of points $\{x_i\}\subset X$ tending to infinity (resp. 
$p$) and a sequence of positive real numbers $\{t_i\}$ such that $$\lim\limits_{i\to \infty} \frac{1}{t_i}x_i= v \quad (\mbox{resp. } \lim\limits_{i\to \infty} \frac{1}{t_i}(x_i-p)= v).$$ Let $C(X,\infty)$ (resp. $C(X,p)$) denote the set of all tangent vectors of $X$ at infinity (resp. $p$). We call $C(X,\infty)$ the {\bf tangent cone of $X$ at infinity} (resp. $C(X,p)$ the {\bf tangent cone of $X$ at $p$}). We have the following characterization. \begin{corollary}[Corollary 2.16 \cite{FernandesS:2020}]\label{corollary 2.16FernandesS:2020} Let $X\subset \mathbb{R}^n$ be an unbounded semialgebraic set. Then $C(X,\infty)=\{v\in\mathbb{R}^n;\, \exists \gamma:(\varepsilon,+\infty )\to X$ $C^0$ semialgebraic such that $\lim\limits _{t\to +\infty }|\gamma(t)|=+\infty$ and $\gamma(t)=tv+o_{\infty }(t)\}$, where $g(t)=o_{\infty }(t)$ means $\lim\limits_{t\to +\infty}\frac{g(t)}{t}=0$. \end{corollary} Thus, we have the following \begin{corollary}[Corollary 2.18 \cite{FernandesS:2020}]\label{dimension_tg_cone} Let $Z\subset \mathbb{R}^n$ be an unbounded semialgebraic set. Let $\phi:\mathbb{R}^n\setminus\{0\}\to \mathbb{R}^n\setminus\{0\}$ be the semialgebraic mapping given by $\phi(x)=\frac{x}{\|x\|^2}$ and denote $X=\phi(Z\setminus \{0\})$. Then $C(Z,\infty)$ is a semialgebraic set satisfying $C(Z,\infty)=C(X,0)$ and $\dim_{\mathbb{R}} C(Z,\infty) \leq \dim_{\mathbb{R}} Z$. \end{corollary} \begin{remark}\label{remarksimplepointcone} {\rm Another way to present the tangent cone at infinity (resp. $p$) of a subset $X\subset\mathbb{R}^{n+1}$ is via the spherical blow-up at infinity (resp. $p$) of $\mathbb{R}^{n+1}$. In fact, if $X\subset \mathbb{R}^{n+1}$ is a semialgebraic set, then $\partial X'_{\infty}=(C(X,\infty)\cap \mathbb{S}^n)\times \{0\}$ (resp. $\partial X'_{p}=(C(X,p)\cap \mathbb{S}^n)\times \{0\}$).} \end{remark} \subsection{Relative multiplicities} Let $X\subset \mathbb{R}^{m+1}$ be a $d$-dimensional subanalytic subset and $p\in \mathbb{R}^{m+1}\cup \{\infty\}$. We say $x\in\partial X'_p$ is a {\bf simple point of $\partial X'_p$} if there is an open subset $U\subset \mathbb{R}^{m+2}$ with $x\in U$ such that: \begin{itemize} \item [a)] the connected components $X_1,\cdots , X_r$ of $(X'_p\cap U)\setminus \partial X'_{p}$ are topological submanifolds of $\mathbb{R}^{m+2}$ with $\dim X_i=\dim X$, for all $i=1,\cdots,r$; \item [b)] $(X_i\cup \partial X'_{p})\cap U$ is a topological manifold with boundary, for all $i=1,\cdots ,r$. \end{itemize} Let ${\rm Smp}(\partial X'_{p})$ be the set of simple points of $\partial X'_{p}$ and define $C_{\rm Smp}(X,p)=\{t\cdot x; \, t>0\mbox{ and }x\in {\rm Smp}(\partial X'_{p})\}$. Let $k_{X,p}\colon {\rm Smp}(\partial X'_{p})\to \mathbb{N}$ be the function such that $k_{X,p}(x)$ is the number of connected components of the germ $(\rho_{p}^{-1}(X\setminus\{p\}),x)$. \begin{remark}\label{remarksimplepointdense} It is known that ${\rm Smp}(\partial X'_p)$ is an open dense subset of the $(d-1)$-dimensional part of $\partial X'_p$ whenever $\partial X'_p$ is a $(d-1)$-dimensional subset, where $d=\dim X$ (see \cite{Pawlucki:1985}). \end{remark} \begin{definition}\label{def:relative_mult} It is clear that the function $k_{X,p}$ is locally constant. In fact, $k_{X,p}$ is constant on each connected component $X_j$ of ${\rm Smp}(\partial X'_{p})$. Then, we define {\bf the relative multiplicity of $X$ at $p$ (along $X_j$)} to be $k_{X,p}(X_j):=k_{X,p}(x)$ with $x\in X_j$. Let $X_1,...,X_r$ be the connected components of ${\rm Smp}(\partial X'_{p})$.
By reordering the indices, if necessary, we assume that $k_{X,p}(X_1)\leq \cdots \leq k_{X,p}(X_r)$. Then we define $k(X,p)=(k_{X,p}(X_1),...,k_{X,p}(X_r))$. \end{definition} \begin{remark} \label{rel_mult_number_of_branches} Let $X\subset \mathbb{R}^n$ be an unbounded real algebraic curve such that $C(X,\infty)\cap \mathbb{S}^{n-1}=\{a_1,...,a_r\}$. It follows from the Conical Structure Theorem at infinity for semialgebraic sets that there exists a constant $R_0\gg 1$ such that for all $R\geq R_0$, we have $$ X\setminus B_R(0)=\bigcup_{j=1}^r\bigcup_{l=1}^{k_j}{\Gamma_{j,l}} $$ and there is a semialgebraic diffeomorphism $h\colon X\setminus B_R(0)\rightarrow {\rm Cone}_{\infty}(X\cap \mathbb{S}^{n-1}_R(0))$ such that $\|h(x)\|=\|x\|$ and $h|_{X\cap \mathbb{S}^{n-1}_R(0)}=Id$ and, moreover, for each $j\in\{1,...,r\}$, $C(\Gamma_{j,l},\infty)\cap \mathbb{S}^{n-1}=\{a_j\}$ for all $l=1,...,k_j$. These $\Gamma_{j,l}$'s are called the {\bf branches of $X$ at infinity}. In particular, $k_j=k_{X,\infty}(a_j,0)$, i.e., $k_{X,\infty}(a_j,0)$ is the number of branches of $X$ at infinity that are tangent to $a_j$ at infinity. \end{remark} \begin{proposition}[Proposition 3.5 in \cite{SampaioS:2023}]\label{propinvarianciamultiplicidaderelativa} Let $X$ and $Y$ be semialgebraic sets in $\mathbb{R}^n$ and $\mathbb{R}^m$ respectively. Let $\varphi\colon X\rightarrow Y$ be a blow-spherical homeomorphism at $p\in \mathbb{R}^n\cup \{\infty\}$. Then $$k_{X,p}(x)=k_{Y,q}(\varphi'(x)),$$ for all $x\in {\rm Smp}(\partial X'_p)\subset \partial X'_p$, where $q=\lim\limits_{x\to p}\varphi(x)$. In particular, $k(X,p)=k(Y,q)$. \end{proposition} \begin{proposition}[Proposition 3.6 in \cite{SampaioS:2023}]\label{coneinvarianteblow} If $h\colon X\rightarrow Y$ is a blow-spherical homeomorphism at $p\in X\cup \{\infty\}$, then $C(X,p)$ and $C(Y,h(p))$ are blow-spherical homeomorphic at $0$. \end{proposition} \section{On the classification of real algebraic curves} In this section, we present some examples and results about the classification of real algebraic curves under blow-spherical homeomorphisms. \begin{proposition}\label{propblowspherical} Let $X,\widetilde X\subset \mathbb{R}^n$ be two real semialgebraic curves. Then the following statements are equivalent: \begin{enumerate} \item [(1)] $X$ and $\widetilde X$ are blow-spherical homeomorphic at infinity; \item [(2)] $k(X,\infty)=k(\widetilde X,\infty)$; \item [(3)] $X$ and $\widetilde X$ are strongly blow-spherical homeomorphic at infinity. \end{enumerate} \end{proposition} \begin{proof} Assume that $C(X,\infty)\cap \mathbb{S}^{n-1}=\{a_1,...,a_r\}$ and $C(\widetilde{X},\infty)\cap \mathbb{S}^{n-1}=\{\widetilde{a}_1,...,\widetilde{a}_s\}$. By Remark \ref{remarksimplepointdense}, ${\rm Smp}(\partial X'_{\infty})$ is an open dense subset of the $0$-dimensional part of $\partial X'_{\infty}$, hence ${\rm Smp}(\partial X'_{\infty})=\partial X'_{\infty}$. Clearly, (3) $\Rightarrow$ (1). (1) $\Rightarrow$ (2). Assume that $X$ and $\widetilde{X}$ are blow-spherical homeomorphic at infinity. By Proposition \ref{propinvarianciamultiplicidaderelativa}, $k(X,\infty)=k(\widetilde X,\infty)$. (2) $\Rightarrow$ (3). Assume that $k(X,\infty)=k(\widetilde X,\infty)$. Thus, $r=s$ and by reordering the indices, if necessary, we may assume that $k_i=k_{X,\infty}(a_i,0)=k_{\widetilde{X},\infty}(\widetilde{a}_i,0)$ for all $i=1,...,r$ and $k_1\leq k_2\leq \cdots \leq k_r$.
Thus, it follows from Remark \ref{rel_mult_number_of_branches} and the Conical Structure Theorem at infinity for semialgebraic sets that there exists a constant $R_0\gg 1$ such that for all $R\geq R_0$, we have $$ X\setminus B_R(0)=\bigcup_{j=1}^r\bigcup_{l=1}^{k_j}{\Gamma_{j,l}} $$ and there is a semialgebraic diffeomorphism $h\colon X\setminus B_R(0)\rightarrow {\rm Cone}_{\infty}(X\cap \mathbb{S}^{n-1}_R(0))$ such that $\|h(x)\|=\|x\|$ and $h|_{X\cap \mathbb{S}^{n-1}_R(0)}=Id$ and, moreover, for each $j\in \{1,...,r\}$, $C(\Gamma_{j,l},\infty)\cap \mathbb{S}^{n-1}=\{a_j\}$ for all $l=1,...,k_j$. In particular, we consider $h|_{\Gamma_{j,l}}\colon\Gamma_{j,l}\rightarrow {\rm Cone}_{\infty}(\Gamma_{j,l}\cap \mathbb{S}^{n-1}_R(0))$. Now define the curve $\alpha_{j,l}\colon [R,+\infty)\rightarrow {\rm Cone}_{\infty}(\Gamma_{j,l}\cap \mathbb{S}^{n-1}_R(0))$ given by $\alpha_{j,l}(t)=t\frac{a_{j,l}}{|a_{j,l}|}$, where $\{a_{j,l}\}=\Gamma_{j,l}\cap \mathbb{S}^{n-1}_R(0)$. Thus, we define the curve $\beta_{j,l}\colon [R,+\infty)\rightarrow \Gamma_{j,l}$ by $\beta_{j,l}(t):=(h^{-1}\circ \alpha_{j,l})(t)$. Analogously, we also have $$ \widetilde{X}\setminus B_R(0)=\bigcup_{j=1}^r\bigcup_{l=1}^{k_j}{\widetilde{\Gamma}_{j,l}}, $$ and there are semialgebraic diffeomorphisms $\widetilde{h}\colon \widetilde{X}\setminus B_R(0) \rightarrow {\rm Cone}_{\infty}(\widetilde{X}\cap \mathbb{S}^{n-1}_R(0))$, $\widetilde{\alpha}_{j,l}\colon [R,+\infty)\rightarrow {\rm Cone}_{\infty}(\widetilde{\Gamma}_{j,l}\cap \mathbb{S}^{n-1}_R(0))$ and $\widetilde{\beta}_{j,l}\colon [R,+\infty)\rightarrow \widetilde{\Gamma}_{j,l}$ and, moreover, for each $j$, $C(\widetilde{\Gamma}_{j,l},\infty)\cap \mathbb{S}^{n-1}=\{\widetilde{a}_j\}$ for all $l=1,...,k_j$. Let $A=X\setminus B_R(0)$ and $\widetilde{A}=\widetilde{X}\setminus B_R(0)$ and define $\varphi\colon A\rightarrow \widetilde{A}$ by $\varphi(z)=\widetilde{\beta}_{j,l}\circ \beta^{-1}_{j,l}(z)$ if $z\in \Gamma_{j,l}$. We have that $\varphi$ is a strong blow-spherical homeomorphism at infinity and $\varphi'\colon X'_{\infty}\rightarrow \widetilde{X}'_{\infty}$ is given by $$ \varphi'(x,s)=\left\{\begin{array}{ll} \left(\frac{\varphi(\frac{x}{s})}{\|\varphi(\frac{x}{s})\|},\frac{1}{\|\varphi(\frac{x}{s})\|}\right),& \mbox{ if }s\not =0;\\ (\widetilde{a}_j,0),& \mbox{ if } (x,s)=(a_j,0). \end{array}\right. $$ In order to see that, it is enough to prove that $\varphi$ is a blow-spherical homeomorphism at infinity. Let us prove that $\varphi'$ is continuous at each $(a_j,0)\in \partial A'_{\infty}$. Thus take $(a_j,0)\in \partial A'_{\infty}$ and consider a sequence $(x_k,s_k)_{k\in \mathbb{N}}\subset A'_{\infty}\setminus \partial A'_{\infty}$ such that $(x_k,s_k)\rightarrow (a_j,0)$. Thus, for any subsequence $\{(x_{k_i},s_{k_i})\}_{i\in \mathbb N}$, there is some $l\in \{1,...,k_j\}$ and a subsequence $\{(z_k,t_k)\}_{k\in \mathbb N}\subset\{(x_{k_i},s_{k_i})\}_{i\in \mathbb N}$ such that $\frac{z_k}{t_k}\in \Gamma_{j,l}$ for all $k$. Since \begin{eqnarray*} \|\beta_{j,l}(t)\| & = & \|h^{-1}\circ \alpha_{j,l}(t)\|\\ & = & \left\|h^{-1}\left(t\frac{a_{j,l}}{\|a_{j,l}\|}\right)\right\|\\ & = & t, \end{eqnarray*} we have $$ \frac{z_k}{t_k}=\beta_{j,l}\left(\frac{1}{t_k}\right). $$ Therefore, \begin{eqnarray*} \varphi'(z_k,t_k) & = & \left( \frac{\widetilde{\beta}_{j,l}(\frac{1}{t_k})}{\|\widetilde{\beta}_{j,l}(\frac{1}{t_k})\|}, \frac{1}{\|\widetilde{\beta}_{j,l}(\frac{1}{t_k})\|}\right). 
\end{eqnarray*} Since $\lim\limits_{t\to 0^+}\frac{\widetilde{\beta}_{j,l}(\frac{1}{t})}{\|\widetilde{\beta}_{j,l}(\frac{1}{t})\|}=\widetilde{a}_j$, we have $\lim\limits_{k\rightarrow +\infty}{\varphi'(z_k,t_k)}=(\widetilde{a}_j,0).$ This shows that $\lim\limits_{k\rightarrow +\infty}{\varphi'(x_k,s_k)}=(\widetilde{a}_j,0).$ Thus, $\varphi'$ is continuous at each $(a_j,0)$. Analogously, $(\varphi^{-1})'$ is continuous at each $(\widetilde{a}_j,0)\in \partial \widetilde{A}'_{\infty}$. Therefore, $\varphi$ is a strong blow-spherical homeomorphism at infinity. \end{proof} \subsection{Normal forms for the classification at infinity}\label{sec:normal_forms} Let $\mathcal{F}(\{0,1\};\mathbb{Z}_{\geq 0})$ be the set of all non-null functions from $\{0,1\}$ to $\mathbb{Z}_{\geq 0}$. For each positive integer $N$, let $\mathcal{A}_N$ be the subset of $(\mathcal{F}(\{0,1\};\mathbb{Z}_{\geq 0}))^N$ formed by all $(r_1,...,r_N)$ satisfying the following: \begin{enumerate} \item $r_l(0)\leq r_{l+1}(0)$ for all $l\in \{1,\cdots, N-1\}$; \item If $r_l(0)=r_{l+1}(0)$ then $r_l(1)\leq r_{l+1}(1)$. \end{enumerate} Let $\mathcal{A}=\bigcup\limits_{N=1}^{\infty}\mathcal{A}_N$. Let $A=(r_1,...,r_N)\in \mathcal A$. For $j\in \{0, 1\}$ and $r_l(j)>0$, we define the following curves: \begin{equation*}\label{def.normalform} X_{A,1}=\left\{(x,y)\in \mathbb{R}^2; \prod_{l=1}^{N}\prod_{r=1}^{r_l(1)}{( (y-lx)^{2}-r(y+lx))}=0 \right\}, \end{equation*} and \begin{equation*}\label{def.normalform_two} X_{A,0}=\left\{(x,y)\in \mathbb{R}^2; \prod_{l=1}^{N}\prod_{r=1}^{r_l(0)}{( (y-lx)-r)}=0 \right\}. \end{equation*} Moreover, if $r_l(j)=0$ for all $l$, we define $X_{A,j}=\{0\}$. Finally, we define the realization of $A=(r_1,...,r_N)$ to be the curve $X_A:=X_{A,0}\cup X_{A,1}$. Thus, the following classification result follows from Proposition \ref{propblowspherical} and the definition of $A$: \begin{proposition}\label{thmnormalform} For each real algebraic curve $X\subset \mathbb{R}^n$, there exists a unique $A\in \mathcal{A}$ such that $X_A$ and $X$ are blow-spherical homeomorphic at infinity. \end{proposition} \begin{proof} We consider the projective closure $\overline{X}$ of $X$ and $\{c_1,\cdots,c_N\}=\overline{X}\cap L_{\infty}$, where $L_{\infty}$ is the hyperplane at infinity. By taking local charts, it follows from Lemma 3.3.5 in \cite{Milnor:1968} that there exist an open neighborhood $V_{l}\subset \mathbb{RP}^n$ of $c_l$ and $\Upsilon_{l,1},\cdots, \Upsilon_{l,s_l}$ such that $\Upsilon_{l,i}\cap \Upsilon_{l,i'}=\{c_l\}$ whenever $i\neq i'$ and $$\overline{X}\cap V_l=\bigcup_{i=1}^{s_l}{\Upsilon_{l,i}}$$ and, moreover, for each $i\in \{1,\cdots, s_l\}$, there exists an analytic homeomorphism $\upsilon_{l,i}\colon (-\epsilon,\epsilon)\rightarrow \Upsilon_{l,i}$ with $\upsilon_{l,i}(0)=c_l$. By shrinking $V_{l}$, if necessary, we may assume that for each $i\in \{1,\cdots, s_l\}$, $\Upsilon_{l,i}\subset L_{\infty}$ or $\Upsilon_{l,i}\cap L_{\infty}=\{c_l\}$. By reordering the indices, if necessary, there is $r_l> 0$ such that $\Upsilon_{l,i}\cap L_{\infty}=\{c_l\}$ for all $i\in \{1,\cdots, r_l\}$ and $\Upsilon_{l,i}\subset L_{\infty}$ for all $i\in \{r_l+1,\cdots, s_l\}$. By reordering the indices again, if necessary, we may assume that $r_l\leq r_{l+1}$ for all $l\in \{1,...,N-1\}$. We denote the half-branches by $\Upsilon_{l,i}^{+}=\upsilon_{l,i}(0,\epsilon)$ and $\Upsilon_{l,i}^{-}=\upsilon_{l,i}(-\epsilon,0)$.
So, denoting $\Gamma_{l,i}^+=\Upsilon_{l,i}^{+}\cap \mathbb{R}^n$ and $\Gamma_{l,i}^-=\Upsilon_{l,i}^{-}\cap \mathbb{R}^n$, we obtain that $$\{\Gamma_{l,1}^+,\Gamma_{l,1}^-,\cdots, \Gamma_{l,r_l}^+,\Gamma_{l,r_l}^-\}_{l=1}^{N} $$ are all the branches of $X$ at infinity. For each $l\in \{1,..,N\}$, there is $a_l\in \mathbb{S}^{n-1}$ such that $\pi^{-1}(c_l)\cap \mathbb{S}^{n-1}=\{-a_l,a_l\}$, where $\pi\colon \mathbb{R}^n\setminus\{0\}\to \mathbb{RP}^{n-1}\cong L_{\infty}$ is the canonical projection. Thus, for each $l\in \{1,..,N\}$, $C(\Gamma_{l,i},\infty)\cap \mathbb{S}^{n-1}\subset\{- a_l,a_l\}$. For each $l\in \{1,..,N\}$, by reordering the indices, if necessary, there are non-negative integer numbers $n_l, p_l$ and $z_l$ such that: \begin{itemize} \item $C(\Gamma_{l,i}^+,\infty)=-C(\Gamma_{l,i}^-,\infty)$ for all $i\in \{1,...,z_l\}$; \item $C(\Gamma_{l,i}^+,\infty)\cap \mathbb{S}^{n-1}=C(\Gamma_{l,i}^-,\infty)\cap \mathbb{S}^{n-1}=\{- a_l\}$, for all $i\in \{z_l+1,...,z_l+n_l\}$; \item $C(\Gamma_{l,i}^+,\infty)\cap \mathbb{S}^{n-1}=C(\Gamma_{l,i}^-,\infty)\cap \mathbb{S}^{n-1}=\{ a_l\}$, for all $i\in \{z_l+n_l+1,...,r_l=z_l+n_l+p_l\}$. \end{itemize} By replacing $a_l$ with $-a_l$, if necessary, we may assume that $n_l\leq p_l$. Thus, we define $\tilde\Gamma_{l,i}^+=\Gamma_{l,i}^+$ for all $i\in \{1,...,r_l\}$ and $$ \tilde \Gamma_{l,i}^-=\left\{\begin{array}{ll} \Gamma_{l,i}^-,\mbox{ if }i\in \{1,...,z_l\}\cup \{z_l+n_l+1,...,r_l\}\\ \Gamma_{l,i+n_l}^-,\mbox{ if }i\in \{z_l+1,...,z_l+n_l\}. \end{array}\right. $$ Thus, we have the following: \begin{itemize} \item $C(\tilde\Gamma_{l,i}^+,\infty)=-C(\tilde\Gamma_{l,i}^-,\infty)$ for all $i\in \{1,...,z_l+n_l\}$; \item $C(\tilde\Gamma_{l,i}^+,\infty)=C(\tilde\Gamma_{l,i}^-,\infty)$, for all $i\in \{z_l+n_l+1,...,r_l\}$. \end{itemize} For each $l\in \{1,..,N\}$, we define $r_l\colon\{0,1\}\to \mathbb{N}$ by $r_l(0)=z_l+n_l$ and $r_l(1)=p_l$. By reordering the indices, if necessary, we may assume that \begin{enumerate} \item $r_l(0)\leq r_{l+1}(0)$ for all $l\in \{1,\cdots, N-1\}$; \item If $r_l(0)=r_{l+1}(0)$ then $r_l(1)\leq r_{l+1}(1)$. \end{enumerate} Therefore, $A={(r_1,...,r_N)}\in \mathcal{A}$ and by Proposition \ref{propblowspherical}, we have that $X_A$ is blow-spherical homeomorphic at infinity to $X$. The uniqueness of $A$ follows from the definition of $\mathcal{A}$ and Proposition \ref{propblowspherical}. \end{proof} \begin{remark}\label{remark:spatial} Proposition \ref{thmnormalform} says in particular that any spatial real algebraic curve is blow-spherical homeomorphic at infinity to a plane real algebraic curve. \end{remark} Note that Remark \ref{remark:spatial} is not true in the global case, as is shown in the next example. \begin{example} Let $Y$ and $Z$ be the algebraic curves in $\mathbb{R}^3$ given by $$Y=\textstyle{\left\{(x,y,z)\in \mathbb{R}^3;(y-1)(y-\frac{x^2}{2})(y-x^2)(y-2x^2)=0\mbox{ and }z=0\right\}}$$ and $$Z=\textstyle{\left\{(x,y,z)\in \mathbb{R}^3;\left((x-\frac{1}{2})^2+(y-\frac{1}{2})^2+z^2-\frac{1}{2}\right)=0\mbox{ and } x=y\right\}}.$$ Thus the spatial algebraic curve $X=Y\cup Z$ is not blow-spherical equivalent to a plane curve (see Figure \ref{fig:spatial}).
\end{example} \begin{figure} \caption{The spatial algebraic curve $X$} \label{fig:spatial} \end{figure} \subsection{Realization of the invariant $k(\cdot,\infty)$}\label{sec:realization} \begin{definition} For each positive integer $n$, let $\mathcal{N}_n$ be the set of all $(\eta_1, \eta_2,\cdots,\eta_n)\in (\mathbb{Z}_{>0})^n$ such that $\eta _1\leq \eta _2\leq \cdots \leq \eta_n$. Let $\mathcal{N}=\bigcup\limits_{n=1}^{\infty} \mathcal{N}_n$. \end{definition} By definition, we have that if $X\subset \mathbb{R}^n$ is a semialgebraic curve then $k(X,\infty)\in\mathcal{N}$. Reciprocally, for $\eta=(\eta_1, \eta_2,\cdots,\eta_N)\in \mathcal{N}$, it is easy to find a semialgebraic curve $X\subset \mathbb{R}^n$ such that $k(X,\infty)=\eta$. For example, \begin{equation*} X=\bigcup_{l=1}^{N}\bigcup_{r=1}^{\eta_l}\left\{(x,y)\in \mathbb{R}^2; y-lx-r=0 \mbox{ and }y+lx\geq 0\right\}. \end{equation*} \begin{definition} We say that $\eta=(\eta_1, \eta_2,\cdots,\eta_r)\in \mathcal{N}$ is {\bf algebraically realizable} if there exists a real algebraic curve $X\subset \mathbb{R}^n$ such that $k(X,\infty)=\eta$. \end{definition} The next result gives a necessary and sufficient condition for $\eta=(\eta_1, \eta_2,\cdots,\eta_n)\in \mathcal{N}$ to be algebraically realizable. For a vector $v=(v_1,\cdots,v_n)\in \mathbb{R}^n$, we denote by $\|\cdot\|_1$ the norm given by $\|v\|_1=|v_1|+\cdots+|v_n|$. \begin{proposition}\label{thm:realizable} $\eta=(\eta_1,\cdots,\eta_k)\in \mathcal{N}$ is algebraically realizable if and only if $\|\eta\|_{1} \equiv 0 \pmod{2}$. \end{proposition} \begin{proof} Assume that $\eta=(\eta_1,\cdots,\eta_k)\in \mathcal{N}$ is algebraically realizable, that is, there exists an unbounded real algebraic curve $X\subset \mathbb{R}^n$ such that $\eta=k(X,\infty)$. We consider the projective closure $\overline{X}$ of $X$ and $\{c_1,\cdots,c_N\}=\overline{X}\cap L_{\infty}$, where $L_{\infty}$ is the hyperplane at infinity. By the proof of Proposition \ref{thmnormalform}, we obtain that there exist an open neighbourhood $V_{l}\subset \mathbb{RP}^n$ of $c_l$ and $\Upsilon_{l,1},\cdots, \Upsilon_{l,s_l}$ such that $\Upsilon_{l,i}\cap \Upsilon_{l,i'}=\{c_l\}$ whenever $i\neq i'$, $$\overline{X}\cap V_l=\bigcup_{i=1}^{s_l}{\Upsilon_{l,i}}$$ and, moreover, for each $i\in \{1,\cdots, s_l\}$, there exists an analytic homeomorphism $\upsilon_{l,i}\colon (-\epsilon,\epsilon)\rightarrow \Upsilon_{l,i}$ with $\upsilon_{l,i}(0)=c_l$. Additionally, there is $r_l> 0$ such that $\Upsilon_{l,i}\cap L_{\infty}=\{c_l\}$ for all $i\in \{1,\cdots, r_l\}$ and $\Upsilon_{l,i}\subset L_{\infty}$ for all $i\in \{r_l+1,\cdots, s_l\}$. By reordering the indices again, if necessary, we may assume that $r_l\leq r_{l+1}$ for all $l\in \{1,...,N-1\}$. We denote the half-branches by $\Upsilon_{l,i}^{+}=\upsilon_{l,i}(0,\epsilon)$ and $\Upsilon_{l,i}^{-}=\upsilon_{l,i}(-\epsilon,0)$. So, denoting $\Gamma_{l,i}^+=\Upsilon_{l,i}^{+}\cap \mathbb{R}^n$ and $\Gamma_{l,i}^-=\Upsilon_{l,i}^{-}\cap \mathbb{R}^n$, we obtain that $$\{\Gamma_{l,1}^+,\Gamma_{l,1}^-,\cdots, \Gamma_{l,r_l}^+,\Gamma_{l,r_l}^-\}_{l=1}^{N} $$ are all the branches of $X$ at infinity. For each $l\in \{1,..,N\}$, there is $b_l\in \mathbb{S}^{n-1}$ such that $\pi^{-1}(c_l)\cap \mathbb{S}^{n-1}=\{-b_l,b_l\}$, where $\pi\colon \mathbb{R}^n\setminus\{0\}\to \mathbb{RP}^{n-1}\cong L_{\infty}$ is the canonical projection. Thus, for each $l\in \{1,..,N\}$, $C(\Gamma_{l,i},\infty)\cap \mathbb{S}^{n-1}\subset\{- b_l,b_l\}$.
By Remark \ref{rel_mult_number_of_branches}, $$ k_{X,\infty}(b_l,0)+k_{X,\infty}(-b_l,0)=2r_l\equiv 0 \pmod{2}, $$ where $k_{X,\infty}(v)$ is defined to be zero if $v\not\in {\rm Smp}(\partial X'_{\infty})$. Assume that $C(X,\infty)\cap \mathbb{S}^{n-1}=\{a_1,...,a_k\}$ and $\eta_l=k_{X,\infty}(a_l,0)$ for all $l\in \{1,...,k\}$. We consider the following decomposition of $\{1,...,k\}$: $$ \{l_1,...,l_{2s}\}=\{l\in \{1,...,k\};- a_l,a_l\in C(X,\infty)\} $$ and $$ \{l_{2s+1},...,l_k\}= \{1,...,k\}\setminus \{l_1,...,l_{2s}\} $$ and such that $a_{l_i}=-a_{l_{i+s}}$ for all $i\in \{1,...,s\}$. Therefore, by writing $k_{X,\infty}(a)$ instead of $k_{X,\infty}(a,0)$, we have \begin{eqnarray*} \|\eta\|_1& = & k_{X,\infty}(a_{l_1})+k_{X,\infty}(a_{l_{1+s}})+\cdots +k_{X,\infty}(a_{l_s})+k_{X,\infty}(a_{l_{2s}})\\ & &+ k_{X,\infty}(a_{l_{2s+1}})+k_{X,\infty}(-a_{l_{2s+1}})+\cdots +k_{X,\infty}(a_{l_k})+k_{X,\infty}(-a_{l_{k}})\\ & = & 2r_{l_1} +\cdots + 2r_{l_s}+2r_{l_{2s+1}} +\cdots + 2r_{l_k} \\ & \equiv & 0 \pmod{2}. \end{eqnarray*} For the converse, assume that $\eta=(\eta_1,\cdots,\eta_k)\in \mathcal{N}$ satisfies $\|\eta\|_{1} \equiv 0 \pmod{2}$. Thus, $\#\{j;\eta_j \equiv 1 \pmod{2}\}\equiv 0 \pmod{2}$. Let $J=\{j_1\leq ... \leq j_{2m}\}\subset I:=\{1,...,k\}$ be the indices such that $\eta_{j}\equiv 1 \pmod{2}$ if and only if $j\in J$. For each $j_i\in J$, let $n_i$ be the non-negative integer number such that $\eta_{j_i}=2n_i+1$. Let $I\setminus J=\{q_1,...,q_N\}$. We consider the following three algebraic curves \begin{equation*} X^+=\left\{(x,y)\in \mathbb{R}^2; \prod_{l=1}^{m}(y-lx)\prod_{r=1}^{n_l}{((y-lx)^{2}-r(y+lx))}=0 \right\}, \end{equation*} \begin{equation*} X^-=\left\{(x,y)\in \mathbb{R}^2; \prod_{l=1}^{m}\prod_{r=1}^{n_{m+l}}{((y-lx)^{2}+r(y+lx))}=0 \right\} \end{equation*} and \begin{equation*} X^0=\left\{(x,y)\in \mathbb{R}^2; \prod_{l=1}^{N}\prod_{r=1}^{\eta_{q_l}}{((y-(m+l)x)^{2}-r(y+(m+l)x))}=0 \right\}. \end{equation*} Then, for $X=X^0\cup X^+\cup X^-$, we have that $k(X,\infty)=\eta$. \end{proof} \subsection{Some considerations on the global case} In the global case, the problem of classification is harder. For instance, two homeomorphic algebraic curves having the same relative multiplicities may not be blow-spherical homeomorphic, as we can see in the next example. \begin{example} Let $$\textstyle{X=\left\{(x,y)\in \mathbb{R}^2;p(x,y)\left(x^2-y^2-y^3\right) \left((x+\frac{7\sqrt{12}}{8})^2+(y-3)^2-\frac{76}{64}\right)=0\right\}}$$ and $$\textstyle{\widetilde{X}=\left\{(x,y)\in \mathbb{R}^2;q(x,y)\left(x^2-y^2-y^3\right) \left((x+\frac{7\sqrt{12}}{8})^2+(y-3)^2-\frac{76}{64}\right)=0\right\}},$$ where $p$ and $q$ are the polynomials given by $p(x,y)=((x-\frac{7\sqrt{12}}{8})^2+(y-3)^2-\frac{76}{64})$ and $q(x,y)=((x+\frac{27\sqrt{80}}{28})^2+(y-5)^2-\frac{864}{784})$. Then $X$ and $\widetilde{X}$ are homeomorphic algebraic curves and there is a bijection $\sigma\colon {\rm Sing}(X)\cup \{\infty\}\to {\rm Sing}(\widetilde{X})\cup \{\infty\}$ such that $\sigma(\infty)=\infty$ and $k(X,p)=k(\widetilde{X},\sigma(p))$ for all $p\in {\rm Sing}(X)\cup \{\infty\}$. However, $X$ and $\widetilde{X}$ are not blow-spherical homeomorphic (see Figures \ref{fig1} and \ref{fig2}).
\end{example} \begin{figure} \caption{Curve $X$} \label{fig1} \caption{Curve $\widetilde{X}$} \label{fig2} \end{figure} Thus, we need the following notion: \begin{definition} We say that a homeomorphism $\phi\colon X\to \widetilde{X}$ between two analytic curves is a {\bf tangency-preserving homeomorphism} if, for each $p\in X\cup \{\infty\}$, two half-branches $\Gamma_1$ and $\Gamma_2$ of $(X,p)$ are tangent at $p$ if and only if $\phi(\Gamma_1)$ and $\phi(\Gamma_2)$ are tangent at $\phi(p)$. \end{definition} \begin{proposition} Let $X,\widetilde X\subset \mathbb{R}^n$ be two connected real algebraic curves. Then the following statements are equivalent: \begin{enumerate} \item [(1)] $X$ and $\widetilde X$ are blow-spherical homeomorphic; \item [(2)] There is a tangency-preserving homeomorphism $\phi\colon X\to \widetilde X$; \item [(3)] $X$ and $\widetilde{X}$ are strongly blow-spherical homeomorphic. \end{enumerate} \end{proposition} \begin{proof} It follows from Propositions \ref{propinvarianciamultiplicidaderelativa} and \ref{coneinvarianteblow} that $(1)\Rightarrow (2)$. Since $(3)\Rightarrow (1)$, we only have to prove $(2)\Rightarrow (3)$. Assume that there is a tangency-preserving homeomorphism $\phi\colon X\to \widetilde X$. Let ${\rm Sing}_1(X)=\{p_1,...,p_r\}$ and ${\rm Sing}_1(\widetilde X)=\{\widetilde p_1,...,\widetilde p_s\}$. Since $\phi$ is a tangency-preserving homeomorphism, it follows from \cite[Proposition 6.9]{Sampaio:2022b} that $\phi({\rm Sing}_1(X))={\rm Sing}_1(\widetilde X)$ and, in particular, $r=s$. Thus, we assume that $\phi(p_i)=\widetilde p_i$ for all $i\in \{1,...,r\}$. By following the proof of Theorem 6.19 in \cite{Sampaio:2022b}, we can find $\epsilon>0$ and a strong blow-spherical homeomorphism $\varphi\colon \bigcup\limits_{i=1}^r (X\cap B_{\epsilon}(p_i))\to \bigcup\limits_{i=1}^r(\widetilde X\cap B_{\epsilon}(\widetilde p_i))$ such that for each $i\in \{1,...,r\}$, $\|\varphi(x)-\widetilde p_i\|=\|x-p_i\|$ for all $x\in X\cap B_{\epsilon}(p_i)$. It follows from Proposition \ref{propblowspherical} that, for $R>0$ large enough, there exists a strong blow-spherical homeomorphism $h\colon X\setminus B_{R}(0)\rightarrow \widetilde{X}\setminus B_{R}(0)$ such that $\|h(x)\|=\|x\|$ for all $x\in X\setminus B_{R}(0)$. Now, we define the following sets $$X_{\epsilon, R}=\left( X\cap B_{2R}(0)\right)\setminus \left(B_{\epsilon/2}(p_1)\cup \cdots \cup B_{\epsilon/2}(p_s)\right)$$ and $$\widetilde{X}_{\epsilon, R}=\left(\widetilde{X}\cap B_{2R}(0)\right)\setminus \left(B_{\epsilon/2}(\widetilde p_1)\cup \cdots \cup B_{\epsilon/2}(\widetilde p_s)\right).$$ Note that $X_{\epsilon,R}$ (resp. $\widetilde{X}_{\epsilon, R}$) is a finite union of compact one-dimensional manifolds with boundary, i.e., $X_{\epsilon,R}=\bigcup_{i=1}^{l_0}{X_{\epsilon,R}^i}$ (resp. $\widetilde{X}_{\epsilon,R}=\bigcup_{i=1}^{l_0}{\widetilde{X}_{\epsilon,R}^i}$), where each $X_{\epsilon,R}^i$ (resp. $\widetilde{X}_{\epsilon,R}^i$) is diffeomorphic to the compact interval $[0,1]$. Thus there is a diffeomorphism $\Phi\colon X_{\epsilon,R}\rightarrow \widetilde{X}_{\epsilon, R}$ such that each $\Phi(X_{\epsilon,R}^i)$ is contained in the connected component of $\widetilde{X}\setminus {\rm Sing}_1(\widetilde{X})$ which contains $\phi(X_{\epsilon,R}^i)$.
By using standard arguments with bump functions, we may define a strong blow-spherical homeomorphism $F\colon X\rightarrow \widetilde{X}$ such that \[ F(x) = \begin{cases} \varphi(x), &\quad\text{if} \ x\in X \mbox{ and } \|x-p_j\|\leq \epsilon/2 \mbox{ for some } j;\\ h(x), &\quad\text{if} \ x\in X \mbox{ and } \|x\|\geq 2R;\\ \Phi(x), &\quad\text{if} \ x\in X_{2\epsilon, R/2}, \end{cases} \] which finishes the proof. \end{proof} \end{document}
\begin{document} \title{Dynamics of induced homeomorphisms of one-dimensional solenoids} \author{ Francisco J. L\'opez--Hern\'andez} \affil{Centro de Investigaci\'on en matem\'aticas A.C.} \date{\today} \maketitle \begin{abstract} We study the displacement function of homeomorphisms isotopic to the identity of the universal one-dimensional solenoid and we get a characterization of the lifting property for an open and dense subgroup of the isotopy component of the identity. The dynamics of an element in this subgroup is also described using rotation theory. \end{abstract} \section*{Introduction} H. Poincar\'e (see \cite{Poi}) introduced an invariant of topological conjugation called the \emph{rotation number} for orientation-preserving homeomorphisms of the unit circle, defined through the lifting property to the universal covering space $\mathbb{R}$ of the circle. \\ Denote by $\mathrm{Homeo}_{+}(\mathbb{S}^{1})$ the group of orientation-preserving homeomorphisms of $\mathbb{S}^{1}$ and let $\widetilde{\mathrm{Homeo}}_+(\mathbb{S}^{1})$ be the space of homeomorphisms of $\mathbb{R}$ that are lifts of elements of $\mathrm{Homeo}_{+}(\mathbb{S}^{1}).$ Define $\tau:\widetilde{\mathrm{Homeo}}_+(\mathbb{S}^{1})\to \mathbb{R}$ by $$\tau(F)= \lim_{n\to\infty}\frac{F^{n}(x)-x}{n}.$$ Since the function $\tau$ is $\mathbb{Z}$--invariant and it does not depend on the choice of $x$, it can be projected to $\rho(f)= \pi(\tau(F)) \in\mathbb{S}^{1}$, where $F$ is any lift of $f$. Therefore, for any $f\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})$ we have a distinguished element $\rho(f)$ called the \emph{Poincar\'e rotation number}, which gives information on the topological dynamics generated by $f$, summarized in the following two cases: \begin{enumerate} \item $\rho(f)$ is rational if and only if $f$ has a periodic orbit; \item if $\rho(f)$ is irrational, then $f$ is semi-conjugate to the irrational rotation by $\rho(f).$ \end{enumerate} This theory was generalized to \emph{rotation sets} for toral homeomorphisms isotopic to the identity in a similar way as in $\mathrm{Homeo}_{+}(\mathbb{S}^{1})$. The rotation set of any $f\in\mathrm{Homeo}_{+}(\mathbb{T}^{2})$, also denoted by $\rho(f)$, is a compact and convex subset of $\mathbb{R}^{2}$ (see \cite{M-Z}).\\ On the one hand, Franks' theorem gives dynamical information (see \cite{Fra}) similar to part (1) above when $f\in\mathrm{Homeo}_{+}(\mathbb{T}^{2})$ is such that $\rho(f)$ has nonempty interior. If a vector $v$ lies in the interior of $\rho(f)$ and has both coordinates rational, then there is a periodic point $x\in\mathbb{T}^{2}$ with the property that $$\frac{F^{q}(x_{0})-x_{0}}{q}=v,$$ where $x_{0}\in\mathbb{R}^{2}$ is any lift of $x$ and $q$ is the least period of $x$.\\ On the other hand, T. Jaeger obtained a version of the semi-conjugation theorem (see \cite{Jag}) as in part (2) of Poincar\'e's theorem for irrational pseudo-rotations with bounded mean motion.\\ In the case of homeomorphisms of the real line with bounded displacement, J. Kwapisz in \cite{Kwa} gives a nice generalization of the rotation set presented in the torus case, particularly when the displacement function is an almost periodic function. Some aspects of the rotation theory for arbitrary manifolds were introduced by M. Pollicott in \cite{Pol} and recently this theory was generalized to solenoidal groups by A. Verjovsky and M. Cruz L\'opez in \cite{C-V}, giving a semi-conjugation theorem for irrational pseudo-rotations with bounded mean motion.
A detailed summary on other generalizations of this theory can be found in \cite{A-J}.\\ Our goal in this article is to describe the dynamics of a certain class of homeomorphisms of the universal one--dimensional solenoid, which can be described as follows. \\ The \emph{universal one--dimensional solenoid} is the inverse limit of the unbranched covering tower of $\mathbb{S}^{1}$ $$\mathcal{S}:=\lim_{\overleftarrow{n}} (\mathbb{S}^{1},p_{n}),$$ together with surjective homomorphisms determined by projection onto the $n^{\text{th}}$ coordinate $$\pi_{n}:\mathcal{S}\to \mathbb{S}^{1}.$$ The first projection $\pi_{1}$ defines a locally trivial $\widehat{\mathbb{Z}}$--bundle structure $$\widehat{\mathbb{Z}}\hookrightarrow \mathcal{S} \to \mathbb{S}^{1},$$ where $$\widehat{\mathbb{Z}}:=\lim_{\overleftarrow{n}}\; \mathbb{Z}/n\mathbb{Z}$$ is the profinite completion of $\mathbb{Z}$, which is a compact perfect totally disconnected abelian topological group homeomorphic to the Cantor set. Since $\mathbb{Z}$ is residually finite, its profinite completion $\widehat{\mathbb{Z}}$ admits a dense inclusion of $\mathbb{Z}$. There is also defined a dense canonical inclusion of $\mathbb{R}$, $\sigma:\mathbb{R}\to\mathcal{S}$.\\ In Section \ref{displacements} of the article, the displacement function for orientation-preserving solenoidal homeomorphisms isotopic to the identity will be analyzed, and we will compare this function with the displacement function for orientation-preserving circle homeomorphisms. If $f\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})$ we can lift it to some homeomorphism $F:\mathbb{R}\to\mathbb{R}$, and any lift satisfies that the displacement function $$\delta:=F-\mathrm{id}:\mathbb{R}\to\mathbb{R}$$ is periodic. The displacement function for orientation-preserving solenoidal homeomorphisms isotopic to the identity, $\mathrm{Homeo}_{+}(\mathcal{S})$, can be defined using their lifts to the appropriate covering space $\mathbb{R}\times\widehat{\mathbb{Z}}$ (see Section \ref{solenoid}). These lifts can be described as $$(x,k)\longmapsto (F_{k}(x),k):=(x+\delta_{k}(x),k),$$ where $F_{k}:\mathbb{R}\to\mathbb{R}$ is an increasing continuous function and $\delta_{k}:\mathbb{R}\to\mathbb{R}$ is a bounded continuous function. Therefore, for all $k\in\widehat{\mathbb{Z}}$, the \emph{displacement function} is defined as $$\delta_{k}=F_{k}-\mathrm{id}:\mathbb{R}\to\mathbb{R}.$$ Recall now the $\mathbb{R}$-action on $C(\mathbb{R})$ given by $$(s,\delta)\longmapsto\delta^{s},$$ where $\delta^{s}:\mathbb{R}\to\mathbb{R}$ is defined by $\delta^{s}(x)=\delta(x+s)$, and denote by $\Omega(\delta)$ the closure of the orbit of $\delta$. The function $\delta$ is called \emph{almost periodic} if $\Omega(\delta)$ is compact. In this case it is possible to define on it a group structure. The classification theorem for almost periodic functions (see \cite{Bohr}) states that $\Omega(\delta)$ is isomorphic to the circle in the \emph{periodic} case, to $\mathbb{T}^{n}$ with $n>1$ in the \emph{quasi-periodic} case, or to a solenoidal group in the \emph{purely limit periodic} case.\\ If $\delta_{0}$ is the displacement function of $f\in\mathrm{Homeo}_{+}(\mathcal{S})$ for $k=0$, it will be proved that $\Omega(\delta_0)$ is a quotient topological group of $\mathcal{S}$ of the form $$\Omega(\delta_{0})\cong \mathcal{S}/\ker(K),$$ where $\ker(K)$ is the kernel of a certain continuous homomorphism $K:\mathcal{S}\to\Omega(\delta_{0})$.
An immediate consequence of this fact is the next description of the displacement function for orientation-preserving solenoidal homeomorphisms isotopic to the identity.\\ \textbf{Theorem \ref{lp}} For all $k\in\widehat{\mathbb{Z}}$ the displacement function $\delta_{k}:\mathbb{R}\to\mathbb{R}$ is periodic or purely limit periodic.\\ In Section \ref{semi-conjugation} it is proved that, given $f\in\mathrm{Homeo}_{+}(\mathcal{S})$ with displacement function for $k=0$ denoted by $\delta_{0}$, there exists an orientation-preserving homeomorphism isotopic to the identity $g:\Omega(\delta_{0})\to \Omega(\delta_{0})$ such that $g$ is semi-conjugate to $f$ by $K$, i.e., the following diagram commutes: \[\xymatrix{\mathcal{S} \ar[r]^{f}\ar[d]^{K}& \mathcal{S}\ar[d]^{K} \\ \Omega(\delta_{0})\ar[r]^{g}& \Omega(\delta_{0})}\] An example of how this semi-conjugation gives us information on the dynamics generated by $f$ is the case $$\Omega(\delta_{0})\cong \mathcal{S}/\ker(K)\cong \mathbb{S}^{1};$$ the homeomorphisms for which this happens form the subspace of homeomorphisms \emph{induced} by a periodic function, denoted by $\mathrm{Homeo}_{I}(\mathcal{S})$ (compare with \cite{R-T-L}). This group is studied in Section \ref{Inducedhomeos} for the case of orientation-preserving homeomorphisms isotopic to the identity. \\ In the first part of the section, a description of such homeomorphisms is given, in which for some $n\in\mathbb{Z}_{+}$, we have an orientation-preserving homeomorphism $f_{n}:\mathbb{R}/n\mathbb{Z}\to\mathbb{R}/n\mathbb{Z}$ such that the following diagram commutes: \[\xymatrix{\mathcal{S} \ar[r]^{f}\ar[d]^{\pi_{n}}& \mathcal{S}\ar[d]^{\pi_{n}} \\ \mathbb{R}/n\mathbb{Z} \ar[r]^{f_{n}}& \mathbb{R}/n\mathbb{Z}}\] We will also describe the lifts of these kinds of homeomorphisms; for each $n\in\mathbb{Z}_{+}$, they will be denoted by $\mathrm{Homeo}_{I_{n}}(\mathcal{S})$ and will be called \emph{induced homeomorphisms of degree $n$} by the homeomorphisms $f_{n}\in\mathrm{Homeo}_{+}(\mathbb{R}/n\mathbb{Z})$. A first observation is that if $n|m$, where $n,m\in\mathbb{Z}$, then $$\mathrm{Homeo}_{I_{n}}(\mathcal{S})\subset\mathrm{Homeo}_{I_{m}}(\mathcal{S}),$$ and we will have that $$\mathrm{Homeo}_{I}(\mathcal{S})=\lim_{\overrightarrow{n}}\mathrm{Homeo}_{I_{n}}(\mathcal{S})=\bigcup_{n\in\mathbb{Z}_{+}}\mathrm{Homeo}_{I_{n}}(\mathcal{S}).$$ This subspace has a group structure and coincides with the dense subspace of $\mathrm{Homeo}_{+}(\mathcal{S})$ which has periodic displacement. Also we will prove that $$\mathrm{Homeo}_{I_{1}}(\mathcal{S})\cong\widetilde{\mathrm{Homeo}}_{+}(\mathbb{S}^{1}),$$ fitting the universal central extension (see \cite{Ghys}) $$0\to\mathbb{Z}\to\widetilde{\mathrm{Homeo}}_{+}(\mathbb{S}^{1})\to\mathrm{Homeo}_{+}(\mathbb{S}^{1})\to 1$$ into the diagram \[\xymatrix{0\ar[r]\ar[d]&\mathbb{Z}\ar[r]\ar[d]& \widetilde{\mathrm{Homeo}}_{+}(\mathbb{S}^{1})\ar[r]\ar[d]&\mathrm{Homeo}_{+}(\mathbb{S}^{1})\ar[r]\ar[d]& 1\ar[d]\\ 0\ar[r]&R_{\alpha}\ar[r]& \mathrm{Homeo}_{I_{1}}(\mathcal{S})\ar[r]&\mathrm{Homeo}_{+}(\mathbb{S}^{1})\ar[r]& 1}\] where $R_{\alpha}$ denotes integer translations in $\mathcal{S}$.\\ Finally, the dynamics of induced homeomorphisms can be described using the Poincar\'e theory for $\mathrm{Homeo}_{+}(\mathbb{S}^{1})$. In \cite{C-V}, for general homeomorphisms which are isotopic to the identity, the authors study the irrational case and do not consider rational dynamics.
In order to complete the dynamical picture of Poincar\'e theory in the case of induced homeomorphisms, we introduce first a convenient concept which captures the idea of rationality.\\ \textbf{Definition \ref{fiberperiodic}} Let $f\in\mathrm{Homeo}_{+}(\mathcal{S})$. We will say that $s\in\mathcal{S}$ is \emph{$p/q$-fiber periodic} if there are $p,q\in\mathbb{Z}_+$ such that $$f^{q}(s)=s+\sigma(p).$$ Here the sum is the Abelian sum along the leaves. Note that the name ``fiber periodic'' is due to the fact that the point returns to the fiber of $s$, translated by $\sigma(p)$, after $q$ iterates, and we refer to the orbit of $s$ as a \emph{$p/q$-fiber}. \\ The relationship between the dynamics generated by the homeomorphism $f_{1}\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})$ and the dynamics generated by the induced homeomorphism $f\in\mathrm{Homeo}_{I}(\mathcal{S})$ is described in the next theorem.\\ \textbf{Theorem \ref{dynamics}} Let $f\in\mathrm{Homeo}_{+}(\mathcal{S})$ be induced by a homeomorphism $f_{1}\in\mathrm{Homeo}_{+}(\mathbb{S}^{1}).$ \begin{enumerate} \item If $\rho(f)=p/q,$ then any point $s\in\mathcal{S}$ is a $p/q$-fiber periodic point or the orbit of $s$ is asymptotic to the orbit of a $p/q$-fiber periodic point. \item If $\rho(f)\notin\mathbb{Q}$, then $f$ is semi-conjugate to the rotation by $\rho(f).$ \end{enumerate} Section \ref{solenoid} introduces the universal one-dimensional solenoid and the lifting properties of its homeomorphisms. In Section \ref{displacements} we study the displacement function for orientation-preserving solenoidal homeomorphisms isotopic to the identity. Section \ref{semi-conjugation} deals with the semi-conjugation to a quotient dynamics, and finally, Section \ref{Inducedhomeos} discusses the dynamics generated by induced homeomorphisms. \section{The solenoid and its homeomorphisms that are isotopic to the identity}\label{solenoid} In this section, the universal one-dimensional solenoid will be introduced; it is the space where we are interested in studying the dynamics generated by orientation-preserving homeomorphisms isotopic to the identity. These kinds of homeomorphisms and their lifting properties will also be studied at the end of this section.\\ \subsection{The solenoid} For every integer $n\geq 1$ we define the unbranched covering map of degree $n$ by $p_n:\mathbb{S}^{1} \to \mathbb{S}^{1}$, $z\longmapsto {z^n}$. If $n,m\in \mathbb{Z}_{+}$ and $n$ divides $m$, then there exists a unique covering map $p_{nm}:\mathbb{S}^{1}\to \mathbb{S}^{1}$ such that $p_n \circ p_{nm} = p_m$. This determines a projective system of covering spaces $\{\mathbb{S}^{1},p_n\}_{n\geq 1}$ whose projective limit is the \emph{universal one--dimensional solenoid} $$\mathcal{S}:=\lim_{\overleftarrow{n}} (\mathbb{S}^{1},p_{n}).$$ We have canonical projections determined by projection onto the $n^{\text{th}}$ coordinate $$\pi_{n}:\mathcal{S}\to \mathbb{S}^{1}.$$ This determines a locally trivial $\widehat{\mathbb{Z}}$--bundle structure $\widehat{\mathbb{Z}}\hookrightarrow \mathcal{S} \to \mathbb{S}^{1}$, where $$\widehat{\mathbb{Z}}:=\lim_{\overleftarrow{n}}\; \mathbb{Z}/n\mathbb{Z}$$ is \emph{the profinite completion of $\mathbb{Z}$}, which is a compact perfect totally disconnected abelian topological group homeomorphic to the Cantor set.
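As a quick sanity check of the compatibility relation (a purely illustrative computation, not needed in the sequel): for $n=2$ and $m=6$, the covering map $p_{nm}$ is $z\longmapsto z^{3}$, and indeed $$p_{2}(p_{nm}(z))=(z^{3})^{2}=z^{6}=p_{6}(z).$$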
Since $\widehat{\mathbb{Z}}$ is the profinite completion of $\mathbb{Z}$, we have canonical projections determined by projection onto the $n^{\text{th}}$ coordinate $$p_{n}:\widehat{\mathbb{Z}}\to \mathbb{Z}/n\mathbb{Z},$$ and $\widehat{\mathbb{Z}}$ admits a canonical inclusion $i:\mathbb{Z}\to\widehat{\mathbb{Z}}$ defined by $t\longmapsto(p_{n}(t))_{n\in\mathbb{Z}_{+}},$ whose image is dense. We will use $t$ to denote $i(t)\in\widehat{\mathbb{Z}}$.\\ We can define a properly discontinuous free action of $\mathbb{Z}$ on $\mathbb{R}\times \widehat{\mathbb{Z}}$ by $$t\cdot (x,k) := (x+t,k-t) \quad (t\in \mathbb{Z}).$$ Here $\mathcal{S}$ is identified with the orbit space $\mathbb{R}\times_{\mathbb{Z}} \widehat{\mathbb{Z}} \equiv \mathbb{R}\times \widehat{\mathbb{Z}} / \mathbb{Z}$, and $\mathbb{Z}$ is acting on $\mathbb{R}$ by covering transformations and on $\widehat{\mathbb{Z}}$ by translations. The path--connected component $\mathcal{L}_0$ of the identity element $0\in \mathcal{S}$ is called the \emph{base leaf}. Clearly, $\mathcal{L}_0$ is the image of $\mathbb{R}\times \{0\}$ under the canonical projection $\mathbb{R}\times \widehat{\mathbb{Z}}\to \mathcal{S}$ and it is homeomorphic to $\mathbb{R}$. \\ In summary, $\mathcal{S}$ is a compact connected abelian topological group and also a one--dimensional lamination where each ``leaf'' is a simply connected one--dimensional manifold homeomorphic to the universal covering space $\mathbb{R}$ of $\mathbb{S}^{1}$, and a typical `transversal section' is isomorphic to the Cantor group $\widehat{\mathbb{Z}}$.\\ \subsection{Homeomorphisms that are isotopic to the identity} Let $p:\mathbb{R}\times \widehat{\mathbb{Z}}\to \mathcal{S}$ denote the canonical projection. Then $p$ is an infinite cyclic covering and we have a lifting property of homeomorphisms. The space of orientation-preserving homeomorphisms isotopic to the identity of $\mathcal{S}$ is denoted by $\mathrm{Homeo}_{+}(\mathcal{S}),$ and the space containing all the liftings of homeomorphisms in $\mathrm{Homeo}_{+}(\mathcal{S})$ will be denoted by $\widetilde{\mathrm{Homeo}}_{+}(\mathcal{S})$. By \cite{Kwa2} we have a complete description of the homeomorphisms in $\widetilde{\mathrm{Homeo}}_{+}(\mathcal{S})$.\\ Let $F:\mathbb{R}\times \widehat{\mathbb{Z}}\to \mathbb{R}\times \widehat{\mathbb{Z}}$ be a lifting of $f\in \mathrm{Homeo}_{+}(\mathcal{S})$ to $\mathbb{R}\times \widehat{\mathbb{Z}}$. Then $F$ has the form $$F(x,k) = (F_k (x),k),$$ where $F_{k}:\mathbb{R}\to\mathbb{R}$ satisfies the condition of being equivariant with respect to the $\mathbb{Z}$--action: $$F_{k-t}(x+t) = F_k (x) + t$$ for any $t\in \mathbb{Z}$. We have a continuous function $\widehat{\mathbb{Z}}\to \mathrm{Homeo}_{+}(\mathbb{R})$, $k\longmapsto F_k$, where each $F_k:\mathbb{R}\to \mathbb{R}$ is an orientation-preserving homeomorphism, and $R_t:\widehat{\mathbb{Z}} \to \widehat{\mathbb{Z}}$ denotes the minimal translation $k\longmapsto k-t$. This implies that $F$ commutes with the integral translation $T_t:\mathbb{R}\times \widehat{\mathbb{Z}}\to \mathbb{R}\times \widehat{\mathbb{Z}}$ given by $$(x,k)\longmapsto (x+t,k-t)$$ and also must be invariant under the $\mathbb{Z}$--action on $C(\widehat{\mathbb{Z}},\mathrm{Homeo}(\mathbb{R}))$.\\ Moreover, $F_k=\mathrm{id}+\delta_{k}$, where each $F_k$ is increasing and $\delta_{k}:\mathbb{R}\to\mathbb{R}$ is a bounded continuous function.
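As a simple illustration of this description (again a purely illustrative computation): for the translation $f(s)=s+\sigma(a)$ with $a\in\mathbb{R}$, one lift is $$F(x,k)=(x+a,k),$$ so that $F_{k}(x)=x+a$ for every $k\in\widehat{\mathbb{Z}}$ and the displacement $\delta_{k}\equiv a$ is constant; the equivariance condition is immediate, since $$F_{k-t}(x+t)=(x+t)+a=F_{k}(x)+t \quad (t\in \mathbb{Z}).$$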
The function $\Delta:\mathbb{R}\times\widehat{\mathbb{Z}}\to\mathbb{R}$ defined by $(x,k)\longmapsto \delta_{k}(x)$ satisfies $$\Delta(T_{t}(x,k))=\Delta(x,k),$$ i.e., it is invariant under the integral translations and induces a continuous function $\overline{\delta}:\mathcal{S}\to\mathbb{R}$ such that $$f=\mathrm{id}+\sigma\circ\overline{\delta}.$$ Denote by $\mathrm{Homeo}_{L}$ the set of all such homeomorphisms. The $\mathbb{R}$-displacement of the homeomorphisms in $\mathrm{Homeo}_{L}$ will be described in the following section. \section{Limit periodic displacements}\label{displacements} This study began in \cite{Lop}, where the displacement function was introduced and studied as in this section.\\ If $F\in\widetilde{\mathrm{Homeo}}_{+}(\mathcal{S})$, then the \emph{displacement function} $D_{F}:\widehat{\mathbb{Z}}\to C(\mathbb{R})$ can be defined as $$k\longmapsto F_{k}-\mathrm{id}=\delta_k \quad \mbox{for all } k\in\widehat{\mathbb{Z}}.$$ \begin{lemma} If $F\in\widetilde{\mathrm{Homeo}}_{+}(\mathcal{S})$, the displacement function is continuous and closed. \end{lemma} \begin{proof} Since $\Delta:\mathbb{R}\times\widehat{\mathbb{Z}}\to\mathbb{R}$ is continuous, by the exponential law $D_{F}$ is continuous. Applying the closed map lemma (recall that $\widehat{\mathbb{Z}}$ is compact), we conclude that $D_{F}$ is closed. \end{proof} Remember that if $C(\mathbb{R})$ denotes the set of all continuous functions from $\mathbb{R}$ to $\mathbb{R}$ with the compact-open topology, then an $\mathbb{R}$-action on $C(\mathbb{R})$ is defined by $$\mathbb{R}\times C(\mathbb{R})\to C(\mathbb{R}),\quad (t,\varphi)\longmapsto \varphi^{t},$$ where $\varphi^{t}:\mathbb{R}\to\mathbb{R}$ is given by $$\varphi^{t}(x):=\varphi(x+t).$$ Denote by $\mathcal{O}_{\mathbb{R}}(\varphi)$ the orbit of $\varphi$ under this action, and by $\Omega(\varphi)$ the closure of $\mathcal{O}_{\mathbb{R}}(\varphi)$ in $C(\mathbb{R})$. We will say that $\varphi$ is \emph{almost periodic} if $\Omega(\varphi)$ is compact. In this case we can define a group structure ``$*$'' such that $\lim_{n\to\infty}\varphi^{t_{n}}*\lim_{n\to\infty}\varphi^{s_{n}}=\lim_{n\to\infty}\varphi^{t_{n}+s_{n}}.$ Note that $(\Omega(\varphi),*)$ is a compact abelian topological group with neutral element $\varphi$.\\ If $\alpha:\mathcal{S}\to\mathbb{R}$ is a continuous function, define $$K_{0}:\mathcal{L}_0\subset\mathcal{S}\to C(\mathbb{R}),\;0 \longmapsto \alpha_{0},$$ where $$\alpha_{0}:\mathbb{R}\to\mathbb{R}$$ is defined by $$\alpha_0:=\alpha\circ\sigma,$$ and $\sigma:\mathbb{R}\hookrightarrow\mathcal{S}$ is the dense one-parameter subgroup. By definition $$\mathcal{O}(\alpha_{0}):=\{\alpha_{0}^{t}\in C(\mathbb{R}):\alpha_{0}^{t}(x)=\alpha_{0}(x+t),\ t\in\mathbb{R}\}.$$ Then $$K_{0}(\sigma(t))=\alpha_{0}^{t}\quad(t\in\mathbb{R}).$$ \begin{theorem} For any continuous function $\alpha:\mathcal{S}\to\mathbb{R}$, the function $K_{0}$ can be extended to a continuous and surjective homomorphism $K:\mathcal{S}\to\Omega(\alpha_0)$. Therefore $\Omega(\alpha_{0})$ is a quotient group of $\mathcal{S}$. \end{theorem} \begin{proof} First we will prove that $K_{0}:\mathcal{L}_0\to\Omega(\alpha_{0})$ defines a surjective homomorphism. If $\varphi\in\mathcal{O}(\alpha_{0}),$ then $\varphi(t)=\alpha_{0}(t+r)$ for some $r\in\mathbb{R}$. By definition \begin{eqnarray*} \alpha_{0}(t+r)&=&\alpha(\sigma(t+r))\\ &=&\alpha(\sigma(t)+\sigma(r))\\ &=&\alpha_{\sigma(r)}(t), \end{eqnarray*} for all $t,r\in\mathbb{R}.$ This tells us that $K_{0}$ is surjective.
Also, the products in $\mathcal{L}_{0}$ and $\mathcal{O}(\alpha_{0})$ are given by the additive structure in $\mathbb{R},$ which means $K_{0}$ is a continuous homomorphism. Note that this homomorphism can be naturally extended to a surjective homomorphism $$K:\mathcal{S}\to \Omega(\alpha_{0}).$$ Indeed, since $\mathcal{S}$ is compact and $\mathcal{L}_{0}$ is dense, the translation flow is an isometry and $\alpha$ is uniformly continuous, so $K_{0}$ is uniformly continuous and admits a continuous extension $K$ to $\mathcal{S}$. The image of $K$ is closed in $\Omega(\alpha_{0})$ and must contain $\mathcal{O}(\alpha_{0}).$ Since $\mathcal{O}(\alpha_{0})$ is dense in $\Omega(\alpha_{0})$, it follows that $\mathrm{Im}(K)=\Omega(\alpha_{0}).$ Applying the first isomorphism theorem, we conclude that $$\Omega(\alpha_{0})\cong\mathcal{S}/\mathrm{ker}(K).$$ \end{proof} \begin{remark} Since $\Omega(\alpha_{0})$ is a quotient group of $\mathcal{S},$ it follows that $\Omega(\alpha_{0})$ is compact and $\alpha_{0}$ is almost periodic. If $\alpha_{0}$ were quasi-periodic, then $$\mathbb{T}^{n}\cong\Omega(\alpha_{0})\cong\mathcal{S}/\mathrm{ker}(K)$$ for some $n\geq 2.$ Given that $\mathbb{T}^{n}$ cannot be a quotient group of $\mathcal{S}$ for $n\geq 2,$ it follows that $\alpha_{0}$ is not quasi-periodic, so it must be periodic or purely limit periodic. \end{remark} \begin{theorem}\label{lp} For all $k\in\widehat{\mathbb{Z}}$ the displacement function $\delta_{k}:\mathbb{R}\to\mathbb{R}$ is periodic or purely limit periodic. \end{theorem} \begin{proof} We know that for all $k\in\widehat{\mathbb{Z}}$ and arbitrary $n\in\mathbb{Z}$, $$\delta_{k+n}(x)=\delta_{k}(x+n).$$ This implies that $\delta_{k}$ and $\delta_{k+n}$ are in the same orbit under the $\mathbb{R}$-action on $C(\mathbb{R})$. This means that if $\delta_{k}$ is limit periodic for some $k\in\widehat{\mathbb{Z}},$ then $\delta_{k+n}$ will be limit periodic for every $n\in\mathbb{Z}.$ Thus, it is enough to prove that for some fixed $k\in\widehat{\mathbb{Z}}$, $\delta_{k}$ is limit periodic. We will prove this for $\delta_{0}.$\\ If $\delta:\mathbb{R}\times\widehat{\mathbb{Z}}\to\mathbb{R}$ is a bounded continuous function and $\mathbb{Z}$-invariant, then $\delta$ induces a continuous function $\overline{\delta}:\mathcal{S}\to\mathbb{R}$. Denote by $\overline{\delta}_{0}$ the function obtained in the proof of the last theorem. It is easy to see that $\overline{\delta}_{0}$ coincides with the function $\delta_{0}$ since we have the following commutative diagram \[\xymatrix{\mathbb{R} \ar@/^0.7cm/[rr]^{\delta_{0}}\ar[r]^{\sigma'} \ar[dr]^{\sigma} & \mathbb{R}\times\widehat{\mathbb{Z}}\ar[d]^{P}\ar[r]^{\delta}& \mathbb{R}\\ & \mathcal{S}\ar[ur]_{\overline{\delta}}&}\] Here $\sigma'(x)=(x,0),$ therefore \begin{eqnarray*} \delta_{0}&=&\delta\circ \sigma'\\ &=&\overline{\delta}\circ P\circ \sigma'\\ &=& \overline{\delta}\circ \sigma\\ &=&\overline{\delta}_{0}. \end{eqnarray*} Since $\overline{\delta}_{0}$ is periodic or purely limit periodic (by the last theorem and the remark above), it follows that $\delta_{0}$ is periodic or purely limit periodic, and then so is $\delta_{k}$ for every $k\in\widehat{\mathbb{Z}}$. \end{proof} Denote by $\mathrm{Homeo}_{+}^{p}(\mathcal{S})\subset\mathrm{Homeo}_{+}(\mathcal{S})$ the homeomorphisms having periodic displacement function and by $\mathrm{Homeo}_{+}^{lp}(\mathcal{S})\subset\mathrm{Homeo}_{+}(\mathcal{S})$ the homeomorphisms having purely limit periodic displacement function. The following theorem is now immediate.
\begin{theorem} $\mathrm{Homeo}_{+}(\mathcal{S})=\mathrm{Homeo}_{+}^{p}(\mathcal{S})\cup\mathrm{Homeo}_{+}^{lp}(\mathcal{S})$. Moreover, $\mathrm{Homeo}_{+}^{p}(\mathcal{S})$ is a dense subgroup. \end{theorem} \begin{proof} $\mathrm{Homeo}_{+}^{lp}(\mathcal{S})$ is closed since it contains its limit points. Moreover, each element of $\mathrm{Homeo}_{+}^{lp}(\mathcal{S})$ can be approximated by elements of $\mathrm{Homeo}_{+}^{p}(\mathcal{S})$, and therefore $\mathrm{Homeo}_{+}^{p}(\mathcal{S})$ is dense. \end{proof} \begin{theorem}\label{Ap} Let $F\in\widetilde{\mathrm{Homeo}}_{+}(\mathcal{S})$ be given by $F(x,k)=(x+\delta_{k}(x),k).$ If $D_{F}:\widehat{\mathbb{Z}}\to C(\mathbb{R})$ is injective, then $D_{F}(k):=\delta_{k}$ is limit periodic for all $k\in\widehat{\mathbb{Z}}.$ \end{theorem} \begin{proof} If $D_{F}$ is injective then $\delta_{k}\neq \delta_{k+n}$ for all nonzero $n\in\mathbb{Z}.$ By the $\mathbb{Z}$-invariance of $\delta$, there is some $x\in\mathbb{R}$ such that $$\delta_{k}(x)\neq \delta_{k+n}(x)=\delta_{k}(x+n).$$ We conclude that $\delta_{k}$ cannot be periodic, so it must be purely limit periodic. \end{proof} We will give a characterization of $\mathrm{Homeo}_{+}^{p}(\mathcal{S})$ in Section \ref{Inducedhomeos}. \section{The semi-conjugation theorem}\label{semi-conjugation} In this section the dynamics generated by $f\in\mathrm{Homeo}_{+}(\mathcal{S})$ will be compared with the dynamics generated by a homeomorphism of a quotient group of $\mathcal{S}$.\\ Given $f\in\mathrm{Homeo}_{+}(\mathcal{S})$, the displacement function at the level 0, $\delta_{0}$, satisfies $$\Omega(\delta_{0})\cong \mathcal{S}/\ker(K),$$ where $\ker(K)$ is the kernel of a specific homomorphism $K$. Now we would like to give a homeomorphism isotopic to the identity $g:\Omega(\delta_{0})\to \Omega(\delta_{0})$ such that $g$ is semi-conjugate to $f$ by $K$, i.e., the following diagram commutes \[\xymatrix{\mathcal{S} \ar[r]^{f}\ar[d]^{K}& \mathcal{S}\ar[d]^{K} \\ \Omega(\delta_{0})\ar[r]^{g}& \Omega(\delta_{0})}\] First, given a limit periodic function $\delta:\mathbb{R}\to\mathbb{R}$, suppose that $F:\mathbb{R}\to\mathbb{R}$, defined as $x\longmapsto x+\delta(x)$, is an increasing homeomorphism, and define $g:\Omega(\delta)\to\Omega(\delta)$ by $$\gamma\longmapsto\gamma*\delta^{\gamma(0)},$$ where ``$*$'' denotes the product defined on $\Omega(\delta)$ in Section \ref{displacements}. \begin{lemma} $g$ defines a homeomorphism isotopic to the identity. \end{lemma} \begin{proof} We notice that $g$ is a continuous function because $*$ and the evaluation map $\gamma\longmapsto\gamma(0)$ are continuous. Since $\Omega(\delta)$ is compact, it is enough to prove that $g$ is bijective. \\ This function is onto since it is onto on $\mathcal{O}_{\mathbb{R}}(\delta)$ (because $F$ is onto), $\mathcal{O}_{\mathbb{R}}(\delta)$ is a dense subset of $\Omega(\delta)$, $g$ is continuous and $\Omega(\delta)$ is compact, so that the image of $g$ is dense and closed.\\ To prove that $g$ is one to one, we can take $\alpha,\;\beta\in\Omega(\delta)$ such that $\alpha=\delta^{a}$ and $\beta=\delta^{b}$ and suppose that $g(\alpha)=g(\beta)$; then $$\alpha*\delta^{\alpha(0)}=\beta*\delta^{\beta(0)}.$$ By definition $\alpha*\delta^{\alpha(0)}=\delta^{a+\delta(a)}$ and $\beta*\delta^{\beta(0)}=\delta^{b+\delta(b)}$. Therefore $$\delta(x+a+\delta(a))=\delta(x+b+\delta(b))$$ for all $x\in\mathbb{R}$. Taking $x=-\delta(a)$, we obtain $$\delta(a)=\delta(b+\delta(b)-\delta(a)).$$ But \begin{eqnarray*} F(b+\delta(b)-\delta(a))&=&b+\delta(b)-\delta(a)+\delta(b+\delta(b)-\delta(a))\\ &=&b+\delta(b)-\delta(a)+\delta(a)\\ &=&b+\delta(b)\\ &=&F(b).
\end{eqnarray*} Since $F$ is an increasing homeomorphism, $b=b+\delta(b)-\delta(a)$ and therefore $\delta(b)=\delta(a)$. This implies $$\alpha=\delta^{a}=\delta^{a+\delta(a)}*\big(\delta^{\delta(a)}\big)^{-1}=\delta^{b+\delta(b)}*\big(\delta^{\delta(b)}\big)^{-1}=\delta^{b}=\beta.$$ This argument extends, by taking limits, to all of $\Omega(\delta)$, which proves the injectivity of $g$. Finally, if $G:[0,1]\times\Omega(\delta)\to\Omega(\delta)$ is defined by $$(c,\gamma)\longmapsto\gamma*\delta^{c\gamma(0)},$$ then $G$ is an isotopy from $g$ to the identity. \end{proof} The next theorem follows from the argument above. \begin{theorem} If $f\in\mathrm{Homeo}_{+}(\mathcal{S})$ and $\delta_{0}$ is the displacement function at the level 0, then $g:\Omega(\delta_{0})\to \Omega(\delta_{0})$ is semi-conjugate to $f$ via $K$. \end{theorem} \section{Induced homeomorphisms and their dynamics}\label{Inducedhomeos} \subsection{Induced homeomorphisms} Given $f_{1}\in\mathrm{Homeo}_{+}(\mathbb{S}^{1}),$ we can extend it to a homeomorphism $f\in\mathrm{Homeo}_{+}(\mathcal{S})$, which will be our first example of the dynamics generated by a homeomorphism in $\mathrm{Homeo}_{+}(\mathcal{S})$ and of its relationship with the rotation set.\\ Given $n\geq 2$, define $f_{n}:\mathbb{R}/n\mathbb{Z}\to\mathbb{R}/n\mathbb{Z}$ extending $f_{1}$ in the following way. Choose a lifting $F_{1}:\mathbb{R}\to\mathbb{R}$, which is a $\mathbb{Z}$-equivariant homeomorphism, and thus $n\mathbb{Z}$-equivariant, and project it to a homeomorphism $f_{n}:\mathbb{R}/n\mathbb{Z}\to \mathbb{R}/n\mathbb{Z}.$ Note that these homeomorphisms satisfy the compatibility condition, i.e.\ if $n|m$, then the following diagram commutes. \[\xymatrix{\mathbb{R}/m\mathbb{Z} \ar[r]^{p_{nm}}\ar[d]^{f_{m}}& \mathbb{R}/n\mathbb{Z}\ar[d]^{f_{n}} \\ \mathbb{R}/m\mathbb{Z} \ar[r]^{p_{nm}}& \mathbb{R}/n\mathbb{Z}}\] Therefore there is a well defined homeomorphism $$f:\mathcal{S}\to\mathcal{S},$$ which covers $f_{1}$ in the sense that the following diagram commutes. \[\xymatrix{\mathcal{S} \ar[r]^{f}\ar[d]^{\pi_{1}}& \mathcal{S}\ar[d]^{\pi_{1}} \\ \mathbb{R}/\mathbb{Z} \ar[r]^{f_{1}}& \mathbb{R}/\mathbb{Z}.}\] We will call these kinds of homeomorphisms \emph{induced homeomorphisms of degree 1}, and the subspace of all such homeomorphisms will be denoted by $\mathrm{Homeo}_{I_{1}}(\mathcal{S})$.\\ We can write each $f_{n}$ as $\mathrm{id}+\delta_{n},$ where $\delta_{n}:\mathbb{R}/n\mathbb{Z}\to\mathbb{R}/n\mathbb{Z}$ can be identified with an $n\mathbb{Z}$-invariant function $\delta_{n}:\mathbb{R}\to\mathbb{R}$. In this case, each $\delta_{n}$ is $\mathbb{Z}$-invariant and uniformly bounded, therefore $f$ can be written as \begin{eqnarray*} f((x_{1},x_{2},\dots,x_{n},\dots))&=&(x_{1},x_{2},\dots,x_{n},\dots)\\ &+&(\delta_{1}(x_{1}),\delta_{2}(x_{2}),\dots,\delta_{n}(x_{n}),\dots). \end{eqnarray*}
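For instance (taking the simplest possible $f_{1}$, only as an illustration), if $f_{1}$ is the rigid rotation $x\mapsto x+\alpha \pmod 1$ with lift $F_{1}(x)=x+\alpha$, then every $\delta_{n}$ is the constant function $\alpha$ and $$f((x_{1},x_{2},\dots,x_{n},\dots))=(x_{1}+\alpha,x_{2}+\alpha,\dots,x_{n}+\alpha,\dots),$$ that is, $f$ is the translation by $\sigma(\alpha)$. In particular its displacement function is constant, so $f\in\mathrm{Homeo}_{+}^{p}(\mathcal{S})$.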
It is clear from the definitions that if $$f:=\mathrm{id}+\overline{\delta} \in\mathrm{Homeo}_{I_{1}}(\mathcal{S})$$ is induced by $$f_{1}:=\mathrm{id}+\delta_{1}\in\mathrm{Homeo}_{+}(\mathbb{S}^{1}),$$ then the following diagram commutes. \[\xymatrix{\mathbb{R} \ar@/^0.7cm/[rr]^{\delta_{0}}\ar[r]^{\sigma'} \ar[dr]^{\sigma} \ar[ddr]_{\pi}& \mathbb{R}\times\widehat{\mathbb{Z}}\ar[d]^{P}\ar[r]^{\delta}& \mathbb{R}\\ & \mathcal{S}\ar[ur]^{\overline{\delta}}\ar[d]^{\pi_{1}}&\\ &\mathbb{S}^{1}\ar[uur]_{\delta_{1}}& }\] Note that the displacement function of $f$ is determined by $$\overline{\delta}((x_1,x_2,\dots,x_{n},\dots))=(\delta_{1}(x_{1}),\delta_{2}(x_{2}),\dots,\delta_{n}(x_{n}),\dots).$$ It follows that $\overline{\delta}(s)\in\mathcal{L}_{0}$ for all $s\in\mathcal{S},$ and we have proved the following theorem. \begin{theorem}$\mathrm{Homeo}_{I_{1}}(\mathcal{S})\subset\mathrm{Homeo}_{+}(\mathcal{S}).$ \end{theorem} \begin{remark} The induced homeomorphism is unique modulo integer translations in the base leaf, owing to the choice of the lifting to $\mathbb{R}$ used to induce the homeomorphisms $f_{n},$ i.e.\ we have the exact sequence \[\xymatrix{0\ar[r]&R_{\alpha}\ar[r]& \mathrm{Homeo}_{I_{1}}(\mathcal{S})\ar[r]&\mathrm{Homeo}_{+}(\mathbb{S}^{1})\ar[r]& 1}\] where $R_{\alpha}:=\{r_{\alpha}:\mathcal{S}\to\mathcal{S}\mid r_{\alpha}(s)=s+\alpha,\; \alpha\in i(\mathbb{Z})\subset\mathcal{S}\}\cong\mathbb{Z}.$ \end{remark} Since the universal central extension (see \cite{Ghys}) $$0\to\mathbb{Z}\to\widetilde{\mathrm{Homeo}}_{+}(\mathbb{S}^{1})\to\mathrm{Homeo}_{+}(\mathbb{S}^{1})\to 1$$ fits into the diagram \[\xymatrix{0\ar[r]\ar[d]&\mathbb{Z}\ar[r]\ar[d]& \widetilde{\mathrm{Homeo}}_{+}(\mathbb{S}^{1})\ar[r]\ar[d]&\mathrm{Homeo}_{+}(\mathbb{S}^{1})\ar[r]\ar[d]& 1\ar[d]\\ 0\ar[r]&R_{\alpha}\ar[r]& \mathrm{Homeo}_{I_{1}}(\mathcal{S})\ar[r]&\mathrm{Homeo}_{+}(\mathbb{S}^{1})\ar[r]& 1}\] where $R_{\alpha}$ denotes the group of integer translations of $\mathcal{S}$, the next isomorphism follows. \begin{theorem} $\mathrm{Homeo}_{I_{1}}(\mathcal{S})\simeq\widetilde{\mathrm{Homeo}}_{+}(\mathbb{S}^{1}).$ \end{theorem} It is important to notice that if $f\in\mathrm{Homeo}_{I_{1}}(\mathcal{S})$ is induced by $f_{1}\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})$, then the following diagram commutes. \[\xymatrix{\mathcal{S} \ar[r]^{f}\ar[d]^{\pi_{1}}& \mathcal{S}\ar[d]^{\pi_{1}} \\ \mathbb{R}/\mathbb{Z} \ar[r]^{f_{1}}& \mathbb{R}/\mathbb{Z}}\] In the next theorem we will see that this property characterizes the induced homeomorphisms of degree 1 within $\mathrm{Homeo}_{+}(\mathcal{S})$. \begin{theorem} $f\in\mathrm{Homeo}_{I_{1}}(\mathcal{S})$ if and only if $f\in\mathrm{Homeo}_{+}(\mathcal{S})$ and the following diagram commutes. \[\xymatrix{\mathcal{S} \ar[r]^{f}\ar[d]^{\pi_{1}}& \mathcal{S}\ar[d]^{\pi_{1}} \\ \mathbb{R}/\mathbb{Z} \ar[r]^{f_{1}}& \mathbb{R}/\mathbb{Z}.}\] \end{theorem} \begin{proof} ``$\Rightarrow$'' This follows directly from the last theorem and the definition.\newline ``$\Leftarrow$'' Any $f\in\mathrm{Homeo}_{+}(\mathcal{S})$ preserves the leaves and the orientation, and by hypothesis the diagram \[\xymatrix{\mathcal{S} \ar[r]^{f}\ar[d]^{\pi_{1}}& \mathcal{S}\ar[d]^{\pi_{1}} \\ \mathbb{R}/\mathbb{Z} \ar[r]^{f_{1}}& \mathbb{R}/\mathbb{Z}}\] commutes, so that $f$ is $\sigma(\mathbb{Z})$--invariant. Therefore its lift is as in the last theorem, and we conclude that $f\in\mathrm{Homeo}_{I_{1}}(\mathcal{S})$. \end{proof}
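As a concrete example (the particular circle map below is chosen only for illustration), take $$f_{1}(x)=x+\tfrac{1}{20}\sin(2\pi x)\pmod 1,$$ which is an orientation preserving circle homeomorphism, since the lift $F_{1}(x)=x+\tfrac{1}{20}\sin(2\pi x)$ satisfies $F_{1}'(x)\geq 1-\tfrac{\pi}{10}>0$. The lifts of $f_{1}$ are exactly the maps $F_{1}+m$ with $m\in\mathbb{Z}$, and the corresponding induced homeomorphisms of $\mathcal{S}$ differ by the integer translations in $R_{\alpha}$, in accordance with the exact sequence above. For the choice $F_{1}$ itself, the displacement at every level is the $1$-periodic function $x\mapsto\tfrac{1}{20}\sin(2\pi x)$, so the induced homeomorphism belongs to $\mathrm{Homeo}_{+}^{p}(\mathcal{S})$.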
Now we would like to generalize this idea to induced homeomorphisms of degree $n$. Specifically, we are thinking about elements of $\mathrm{Homeo}_{+}(\mathcal{S})$ for which, at some level $n\in\mathbb{Z}_{+}$, the following diagram commutes. \[\xymatrix{\mathcal{S} \ar[r]^{f}\ar[d]^{\pi_{n}}& \mathcal{S}\ar[d]^{\pi_{n}} \\ \mathbb{R}/n\mathbb{Z} \ar[r]^{f_{n}}& \mathbb{R}/n\mathbb{Z}}\] We will give a description of the lifts of these kinds of homeomorphisms, which we denote by $\mathrm{Homeo}_{I_{n}}(\mathcal{S})$ for each $n\in\mathbb{Z}_{+}$; we call them \emph{induced homeomorphisms of degree $n$}, induced by the homeomorphism $f_{n}\in\mathrm{Homeo}_{+}(\mathbb{R}/n\mathbb{Z})$.\\ \begin{theorem}\label{induced} $f\in\mathrm{Homeo}_{I_{n}}(\mathcal{S})$ is induced by a homeomorphism $f_{n}\in\mathrm{Homeo}_{+}(\mathbb{R}/n\mathbb{Z})$ if and only if $f$ has a lift $F:\mathbb{R}\times\widehat{\mathbb{Z}} \to \mathbb{R}\times\widehat{\mathbb{Z}}$ of the form $$(x,k)\longmapsto (F_{0}(x)+r(k),k),$$ where $F_{0}:\mathbb{R}\to\mathbb{R}$ is a lift of $f_{n}$ to $\mathbb{R}$ and $r(k)\in\{0,\dots,n-1\}$ is determined by $r(k)=i$ if $k\in p_{n}^{-1}(i),$ with $p_{n}$ denoting the canonical projection $\widehat{\mathbb{Z}}\to\mathbb{Z}/n\mathbb{Z}$. \end{theorem} \begin{proof} Note that the restriction of $f$ to the base leaf is an increasing $n\mathbb{Z}$-invariant function. Therefore $f|_{\mathcal{L}_{0}}:\mathcal{L}_{0}\to\mathcal{L}_{0}$ is a $\sigma(n\mathbb{Z})$-invariant function, and we must have that $$f(x+\sigma(n))=f(x)+\sigma(n).$$ Let $F_{0}:\mathbb{R}\to\mathbb{R}$ be the restriction, at level 0, of a lift of $f$ to $\mathbb{R}\times\widehat{\mathbb{Z}}$. Then $F_{0}$ can be seen as an $n\mathbb{Z}$-invariant lift of $f_{n}$ to $\mathbb{R}$, satisfying $$F_{0}(x+n)=F_{0}(x)+n.$$ By the equivariance with respect to the diagonal action, for each $k\in\mathbb{Z},$ $$F(x,k)=(F_{0}(x)+r(k),k),$$ where $r(k)=k\; (\mathrm{mod}\; n).$ Since the inclusion of $\mathbb{Z}$ in $\widehat{\mathbb{Z}}$ is dense and the function $$D:\widehat{\mathbb{Z}}\to \mathrm{Homeo}_{+}(\mathbb{R})$$ is continuous and constant on the integers lying in the open set $p_{n}^{-1}(i),$ it follows that $D$ is constant on the whole open set. \end{proof} \begin{corollary} If two homeomorphisms are induced by the same homeomorphism $f_{n}\in\mathrm{Homeo}_{+}(\mathbb{R}/n\mathbb{Z}),$ then they differ by an integer translation. \end{corollary} An important observation is that if $n|m$ for $n,m\in\mathbb{Z}_{+}$, then $\mathrm{Homeo}_{I_{n}}(\mathcal{S})\subset\mathrm{Homeo}_{I_{m}}(\mathcal{S})$, because the following diagram commutes. \[\xymatrix{\mathcal{S} \ar[r]^{f}\ar[d]^{\pi_{m}}& \mathcal{S}\ar[d]^{\pi_{m}} \\ \mathbb{R}/m\mathbb{Z} \ar[r]^{f_{m}}\ar[d]^{p_{nm}}& \mathbb{R}/m\mathbb{Z}\ar[d]^{p_{nm}}\\ \mathbb{R}/n\mathbb{Z} \ar[r]^{f_{n}}& \mathbb{R}/n\mathbb{Z}} \] Here $f_{m}$ is the homeomorphism of $\mathbb{R}/m\mathbb{Z}$ obtained by projecting the $n\mathbb{Z}$-invariant lift $F_{0}$ of $f_{n}$, so that it covers $f_{n}$ under $p_{nm}.$ Therefore we have an inductive system $\{\mathrm{Homeo}_{I_{n}}(\mathcal{S})\}_{n\in\mathbb{Z}_{+}}$ with the inclusions as connecting maps. Denoting by $\mathrm{Homeo}_{I}(\mathcal{S})$ the direct limit of this system, we have $$\mathrm{Homeo}_{I}(\mathcal{S})=\bigcup_{n\in\mathbb{Z}_{+}}\mathrm{Homeo}_{I_{n}}(\mathcal{S}).$$
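For instance (the particular map is again chosen only for illustration), the increasing $2\mathbb{Z}$-invariant homeomorphism $F_{0}(x)=x+\tfrac{1}{20}\sin(\pi x)$ of $\mathbb{R}$ descends to a homeomorphism $f_{2}\in\mathrm{Homeo}_{+}(\mathbb{R}/2\mathbb{Z})$, and the homeomorphism $f$ of $\mathcal{S}$ that it induces belongs to $\mathrm{Homeo}_{I_{2}}(\mathcal{S})$. Its displacement at the level $0$ is $x\mapsto\tfrac{1}{20}\sin(\pi x)$, whose least period is $2$; since the level $0$ displacement of an element of $\mathrm{Homeo}_{I_{n}}(\mathcal{S})$ is $n\mathbb{Z}$-invariant, one checks that $f\in\mathrm{Homeo}_{I_{n}}(\mathcal{S})$ precisely when $n$ is even. In particular $f\notin\mathrm{Homeo}_{I_{1}}(\mathcal{S})$, although of course $f\in\mathrm{Homeo}_{I}(\mathcal{S})$.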
\begin{theorem} $\mathrm{Homeo}_{I}(\mathcal{S})$ is an open and dense subset of $\mathrm{Homeo}_{+}(\mathcal{S})$. \end{theorem} \begin{proof} It is enough to prove that $\mathrm{Homeo}_{I}(\mathcal{S})$ coincides with $\mathrm{Homeo}_{+}^{p}(\mathcal{S})$, the set of homeomorphisms with periodic displacement. It is easy to see that if $f$ has periodic displacement with period $p/q$, then the displacement is $p\mathbb{Z}$-invariant and $f$ is induced of degree $p$; using Theorem \ref{induced}, we can finish the proof. It also follows that $$\mathrm{Homeo}_{+}(\mathcal{S})=\mathrm{Homeo}_{I}(\mathcal{S})\sqcup\mathrm{Homeo}_{+}^{\mathrm{lp}}(\mathcal{S}).$$ \end{proof} \begin{remark} $\mathrm{Homeo}_{I}(\mathcal{S})$ is a subgroup of $\mathrm{Homeo}_{+}(\mathcal{S})$ with the induced operation. \end{remark} \subsection{The dynamics of induced homeomorphisms}\label{dynamic} First we discuss the rational dynamics. It is clear that in this case we do not have periodic points, so the next definition replaces the role of periodic points by that of periodic fibers. \begin{definition}\label{fiberperiodic} Let $f\in\mathrm{Homeo}_{+}(\mathcal{S})$. The point $s\in\mathcal{S}$ is called \emph{$p/q$-fiber periodic} if there exist $p,q\in\mathbb{Z}_+$ such that $$f^{q}(s)=s+\sigma(p),$$ where $\sigma:\mathbb{R}\to\mathcal{S}$ is the one-parameter subgroup. We will write simply $s+p$ to denote $s+\sigma(p)$. Note that the name ``fiber periodic'' is due to the fact that the point returns periodically to the fiber through $s$, and we refer to the orbit of $s$ as a \emph{$p/q$-fiber}. \end{definition} The relationship between the dynamics generated by the homeomorphism $f_{1}\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})$ and the dynamics generated by the induced homeomorphism $f\in\mathrm{Homeo}_{+}(\mathcal{S})$ is described as follows. \begin{lemma} Let $f\in\mathrm{Homeo}_{+}(\mathcal{S})$ be induced by a homeomorphism $f_{1}\in\mathrm{Homeo}_{+}(\mathbb{S}^{1})$. If $\rho(f)=p/q$, then $f$ has a $p/q$-fiber periodic point. \end{lemma} \begin{proof} From Theorem \ref{induced} we know that the lift of $f$ to the covering space $\mathbb{R}\times\widehat{\mathbb{Z}}$ is $$F(x,k):=(F_{0}(x),k),$$ where $F_{0}:\mathbb{R}\to\mathbb{R}$ is a lift of $f_{1}$ to $\mathbb{R}$. Poincar\'e's theory says that, since the rotation number is $p/q$, there exists a point $x\in\mathbb{R}$ such that $F_{0}^{q}(x)=x+p$. The set $$\big\{F^{n}(x,k)\mid n\in\mathbb{Z},\; k\in\widehat{\mathbb{Z}}\big\}$$ is equivariant under the $\mathbb{Z}$-action, and the projection of $(x,k)$ to $\mathcal{S}$ satisfies the required condition. \end{proof} \begin{theorem}\label{dynamics} Let $f\in\mathrm{Homeo}_{I}(\mathcal{S})$ be induced by a homeomorphism $f_{1}\in\mathrm{Homeo}_{+}(\mathbb{S}^{1}).$ \begin{enumerate} \item If $\rho(f)=p/q,$ then any point $s\in\mathcal{S}$ is a $p/q$-fiber periodic point or the orbit of $s$ is asymptotic to a $p/q$-fiber. \item If $\rho(f)\notin\mathbb{Q}$, then $f$ is semi-conjugate to the rotation by $\rho(f).$ \end{enumerate} \end{theorem} \begin{proof} Using Theorem 4.6 of \cite{C-V}, it is sufficient to prove that $f$ satisfies the bounded mean motion condition. It is easy to see that this condition holds, since the restriction of the displacement to any leaf is a $\sigma(\mathbb{Z})$-periodic function. \end{proof}
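To illustrate Definition \ref{fiberperiodic} with a concrete map (chosen only for illustration), take $f_{1}(x)=x+\tfrac{1}{2}+\tfrac{1}{20}\sin(2\pi x)\pmod 1$, with lift $F_{1}(x)=x+\tfrac{1}{2}+\tfrac{1}{20}\sin(2\pi x)$, and let $f$ be the induced homeomorphism of $\mathcal{S}$. A direct computation gives $F_{1}(0)=\tfrac{1}{2}$ and $F_{1}^{2}(0)=1$, so $f_{1}$ has rotation number $\tfrac{1}{2}$, and for every $s$ in the fiber $\pi_{1}^{-1}(0)$ one checks that $$f^{2}(s)=s+\sigma(1),$$ so every point of this fiber is a $1/2$-fiber periodic point, in accordance with the lemma above.
\end{document}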